| column | dtype | values / lengths |
| --- | --- | --- |
| status | stringclasses | 1 value |
| repo_name | stringclasses | 13 values |
| repo_url | stringclasses | 13 values |
| issue_id | int64 | 1 to 104k |
| updated_files | stringlengths | 11 to 1.76k |
| title | stringlengths | 4 to 369 |
| body | stringlengths | 0 to 254k |
| issue_url | stringlengths | 38 to 55 |
| pull_url | stringlengths | 38 to 53 |
| before_fix_sha | stringlengths | 40 to 40 |
| after_fix_sha | stringlengths | 40 to 40 |
| report_datetime | unknown | |
| language | stringclasses | 5 values |
| commit_datetime | unknown | |
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,407
["posthog/api/capture.py", "posthog/api/test/test_capture.py", "posthog/middleware.py", "posthog/test/test_middleware.py"]
Event capturing bug
## Bug description While exploring the onboarding/setup funnel, I found what seems to be a bug capturing events (perhaps it's an API request issue). When I send this request, the events don't get saved (checked directly on the database). ``` POST <host>/capture ``` ```json { "api_key": "API_KEY_HERE", "event": "Purchase item", "properties": { "distinct_id": "845555992929291" } } ``` ## Expected behavior - Events should be correctly saved to the database. If in the onboarding flow, the user should continue and complete the onboarding. - If the request is malformed or an invalid API key is sent, I should not receive a 200 status code. _There seems to be an issue with the request and therefore the event is not saved_, however **if I add a random non-existent key as the API key (e.g. `invalid`), I still get a 200 and a 1 in the response.** ## Reproduction environment - Tested using the one-line Docker deploy. ```bash docker run -t -i --rm --publish 8000:8000 -v postgres:/var/lib/postgresql posthog/posthog:preview ``` - Tested on the cloud version with the same result (please reach out privately if you want to know the account's email address).
https://github.com/PostHog/posthog/issues/1407
https://github.com/PostHog/posthog/pull/1418
70f59f9b5807ffe37dd9f6f48859a6abcc5c3dd6
31533e39e69db1cb602998250a30f5e1ca0f01f1
"2020-08-11T17:03:51Z"
python
"2020-08-14T12:18:27Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,375
["frontend/src/toolbar/elements/heatmapLogic.ts"]
Toolbar clicks counted wrong
## Bug description It appears 40% of people who visit posthog.com click "careers" ![image](https://user-images.githubusercontent.com/53387/89546838-50c8e280-d805-11ea-8565-7cf572d1ed3d.png) This is not true. It's actually ~100 clicks that landed on this link when analysing the API response. ## Expected behavior This number should make more sense ## Additional context This came out of a customer call
https://github.com/PostHog/posthog/issues/1375
https://github.com/PostHog/posthog/pull/1400
e9d9b98d63ad3619c79e1f0497b4059071d14a50
75a12b7bbf666fd0590ca250fc0c2ffed5933c2c
"2020-08-06T14:53:59Z"
python
"2020-08-11T11:53:33Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,374
["frontend/src/lib/utils.js", "frontend/src/toolbar/elements/Elements.tsx", "frontend/src/toolbar/elements/HeatmapLabel.tsx", "frontend/src/toolbar/utils.ts"]
Toolbar heatmap "yellow labels" aren't clear
## Bug description Is this the number of clicks? Or what is this number? ![image](https://user-images.githubusercontent.com/53387/89546526-fd569480-d804-11ea-98ad-c64401edfd90.png) ## Expected behavior It should be clear either directly or with a tooltip that this is the ranking, not a list. Perhaps it should be "#1" instead? Or "No 1" (with "No" being in a small font). Not sure... ## Additional context Came out of a call with a customer.
https://github.com/PostHog/posthog/issues/1374
https://github.com/PostHog/posthog/pull/1459
764e10696b6ee9cba927b38e0789ed896f5d67dd
1b116059b26346b0f045e3393c90ebebec337d3a
"2020-08-06T14:52:24Z"
python
"2020-08-18T19:47:47Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,372
["frontend/src/toolbar/elements/InfoWindow.tsx", "frontend/src/toolbar/elements/elementsLogic.ts"]
Not obvious you can "pin" toolbar infowindows
## Bug description When hovering over a page with lots of elements, unless someone tells you, it's not inherently obvious you can click on any of the elements to "pin" the infowindow to the screen. ![2020-08-06 16 44 51](https://user-images.githubusercontent.com/53387/89545909-3cd0b100-d804-11ea-82fa-b3d512575e2d.gif) ## Expected behavior Somehow this should be more obvious. One idea is that the infowindow is normally not pegged to the bottom/top of the heatmap element, but is always floating right under the mouse cursor (while you hover the element). This would make the page feel smoother and less flashy... and it would probably make the click more obvious. Maybe. ## Additional context This came out in a toolbar feedback session with a customer
https://github.com/PostHog/posthog/issues/1372
https://github.com/PostHog/posthog/pull/1472
1da7f43ef3aaba30d32c06bdee750ec9f5aa92af
b559fee904a96cd36ede6f87d28a2d4ac8ebfe7b
"2020-08-06T14:48:16Z"
python
"2020-09-01T13:51:53Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,370
["frontend/src/scenes/actions/ActionStep.js", "frontend/src/scenes/actions/hints.tsx", "frontend/src/toolbar/actions/ActionsTab.scss", "frontend/src/toolbar/actions/StepField.tsx", "frontend/src/toolbar/actions/UrlMatchingToggle.tsx"]
Toolbar Actions Regex
## Is your feature request related to a problem? When making a new action on the toolbar, I can't select "regex" match and it's not clear that I can use wildcards in "contains" ![image](https://user-images.githubusercontent.com/53387/89545067-2aa24300-d803-11ea-9c06-ad385a5150d8.png) ## Describe the solution you'd like Something like the actions page: ![image](https://user-images.githubusercontent.com/53387/89545133-41489a00-d803-11ea-8589-ea84e2aae690.png) ![image](https://user-images.githubusercontent.com/53387/89545142-4574b780-d803-11ea-95a0-95865d9c3a98.png) ## Describe alternatives you've considered Try random wildcard characters until something works ## Additional context Came out in a feedback session
https://github.com/PostHog/posthog/issues/1370
https://github.com/PostHog/posthog/pull/1457
be30a464efdafbfc4ce5b24863c392acc514de95
ff1a1f94088922126831d17b3f6bb1265b3a1338
"2020-08-06T14:39:07Z"
python
"2020-08-20T00:40:19Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,369
["plugin-server/yarn.lock"]
Cookies rejected on new deploy with docker
I just pulled the latest posthog and wanted to deploy it on my server. Filling in the account details was fine, however I cannot log in. The developer console shows why... A cookie associated with a cross-site resource at http://posthog.com/ was set without the `SameSite` attribute. It has been blocked, as Chrome now only delivers cookies with cross-site requests if they are set with `SameSite=None` and `Secure`. You can review cookies in developer tools under Application>Storage>Cookies and see more details
https://github.com/PostHog/posthog/issues/1369
https://github.com/PostHog/posthog/pull/8223
6144eecff9ce12f25b676437f70aabff021c9063
12d3bc12118309fd012379bebf3fc08bd5183adb
"2020-08-06T13:26:35Z"
python
"2022-02-02T13:24:09Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,368
["frontend/src/editor/index.js", "frontend/src/toolbar/index.tsx", "frontend/src/toolbar/utils.ts", "package.json", "posthog/models/event.py", "posthog/test/test_event_model.py", "yarn.lock"]
Toolbar Heatmap crashes on sites built with Tailwind CSS
## Bug description Sites built with Tailwind use a lot of classes like: `text-base leading-6 font-medium text-gray-500 hover:text-gray-900 focus:outline-none focus:text-gray-900 transition ease-in-out duration-150` The problem is with the ones that have a colon (`:`) inside them. Simmerjs crashes with the following: ``` Failed to execute 'querySelectorAll' on 'Document': 'A.text-base.leading-6.text-gray-400.hover:text-gray-500.hover:text-white' is not a valid selector ``` ... bringing down the entire toolbar. ## Expected behavior The toolbar shouldn't crash when your page contains a CSS class with a colon in it. https://stackoverflow.com/questions/45110893/select-elements-by-attributes-with-colon ## How to reproduce 1. Make a page with a `<button />` that has a class `with:colon`. 2. Wait for a few events to be captured or click the button yourself 3. Open the toolbar, click "heatmap" and watch it burst up in flames (and then crash)
https://github.com/PostHog/posthog/issues/1368
https://github.com/PostHog/posthog/pull/1397
75a12b7bbf666fd0590ca250fc0c2ffed5933c2c
83e94c5e689154d91a12079faae35026dc43ec30
"2020-08-06T12:09:03Z"
python
"2020-08-11T12:54:33Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,352
["plugin-server/yarn.lock"]
[Toolbar] We should capture clicks even when toolbar is open
## Bug description As a user who's just using the toolbar, it's confusing that when I create an action it doesn't have any clicks (even by me). ## Expected behavior We should capture events. We should have better ways of not capturing data of test users ## How to reproduce 1. Open toolbar 2. 3. ## Environment - PostHog cloud or self-managed? both - PostHog version/commit latest ## Additional context This came out of a customer interview #### *Thank you* for your bug report – we love squashing them!
https://github.com/PostHog/posthog/issues/1352
https://github.com/PostHog/posthog/pull/8223
6144eecff9ce12f25b676437f70aabff021c9063
12d3bc12118309fd012379bebf3fc08bd5183adb
"2020-08-04T17:17:33Z"
python
"2022-02-02T13:24:09Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,282
["requirements.txt"]
Setup test performance environment on AWS with A/B testing capabilities
We have a great opportunity to test out performance tweaks and ensure our customers have a good experience on Posthog. Use TeeProxy to reverse proxy events to multiple backends with only one being the primary. Things to test: - [x] Aurora vs PG - [ ] Kinesis for event pipelining - [x] Changes in backend logic (view this as a staging environment) Things we will need to do once we have this setup: - [x] Instrument all the metrics - [ ] Setup CD to all (n number) environments
https://github.com/PostHog/posthog/issues/1282
https://github.com/PostHog/posthog/pull/40
4ccbb508e2e9db41dd09afb4c1912d9678bc4413
a33799ec3519ab4bd15ddf2304b58ef29b26ab0e
"2020-07-27T15:15:36Z"
python
"2020-02-09T23:30:06Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,279
["requirements.txt"]
[EPIC] New analytics views
There are a couple of analytics views/functionalities that we don't yet have. - [ ] User Composition I want to be able to select a Person property and show the breakdown of that property in a pie chart - [ ] Engagement matrix I want to be able to select a couple of events/actions, and understand, from all users that were active in that time period a) what percentage of those did that event and b) how many times on average a user did that event. <img width="400" alt="" src="https://user-images.githubusercontent.com/1727427/88530443-adf0b700-d001-11ea-8c19-0c3d5d9485bf.png"> - [ ] "Compass" / "Signal" I want to be able to answer "How does performing action Y predict a user coming back a week later", ie is there a positive correlation. [Mixp@anel docs](https://help.mixpanel.com/hc/en-us/articles/115004567503-Signal-Report-Basics). - [ ] Impact analysis I want to be able to answer "How does performing action Y impact action Z". <img width="400" alt="" src="https://user-images.githubusercontent.com/1727427/88531994-1345a780-d004-11ea-9bf5-8f8a7fa91c6e.png"> - [ ] Revenue LTV We'd need to be able to capture `revenue` events with a `$revenue` property to be able to do this. - [ ] Personas Some kind of user clustering functionality? <img width="400" alt="" src="https://user-images.githubusercontent.com/5864173/118990569-01494280-b951-11eb-9c99-b9e305d6fb75.png"> - [X] Ratio of DAUs over MAUs - [x] Lifecycle #1209
https://github.com/PostHog/posthog/issues/1279
https://github.com/PostHog/posthog/pull/40
4ccbb508e2e9db41dd09afb4c1912d9678bc4413
a33799ec3519ab4bd15ddf2304b58ef29b26ab0e
"2020-07-27T10:25:36Z"
python
"2020-02-09T23:30:06Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,268
["plugin-server/yarn.lock"]
Separate query logic from APIs
## Is your feature request related to a problem? We do a lot of our analytics in API viewsets etc. This is a bad place for it, for a couple of reasons: - These API files are huge and a big mess - There's analytics stuff in events.py, actions.py, retention.py, funnels.py etc, but it's hard to mix and match or add new capabilities - Tests run very slowly because they have to go through Django rest framework ## Describe the solution you'd like Separate the query generation and serialisation from APIs and Models ## Describe alternatives you've considered ## Additional context I'm going to do a quick spike for this to see what people think of a potential solution. This fits in with the work Eric's doing in #1250 #### *Thank you* for your feature request – we love each and every one!
https://github.com/PostHog/posthog/issues/1268
https://github.com/PostHog/posthog/pull/8223
6144eecff9ce12f25b676437f70aabff021c9063
12d3bc12118309fd012379bebf3fc08bd5183adb
"2020-07-23T13:57:12Z"
python
"2022-02-02T13:24:09Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,212
["plugin-server/yarn.lock"]
Automate creation of DigitalOcean image on cut of release
Will be used to close this ticket as well https://github.com/PostHog/posthog/issues/905 The issue currently is we support images on several marketplaces and that number will only grow. We need a way to cut these images when we cut a release (or just on landing a PR to master). GOAL: Setup a GitHub action to build images on push or on cut of a release. At first we just want to support DigitalOcean's image. Keep in mind that we want this to be extensible to be able to build images on other platforms like AWS, GCP, Azure, and Linode. Most of what you'll need to automate the image building can be found here: https://github.com/PostHog/deployment
https://github.com/PostHog/posthog/issues/1212
https://github.com/PostHog/posthog/pull/8223
6144eecff9ce12f25b676437f70aabff021c9063
12d3bc12118309fd012379bebf3fc08bd5183adb
"2020-07-14T15:04:04Z"
python
"2022-02-02T13:24:09Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,154
["frontend/src/layout/LatestVersion.js"]
Make "new version available" less threatening
**Is your feature request related to a problem? Please describe.** I got some reports from users who thought something was wrong with their PostHog installation because of this message, as it's in red and has an error triangle. **Describe the solution you'd like** Make the warning more subtle by making it black and removing the icon (or make it something less error-y). **Describe alternatives you've considered** **Additional context** ***Thank*** you for your feature request - we love each and every one!
https://github.com/PostHog/posthog/issues/1154
https://github.com/PostHog/posthog/pull/1216
6af4a6bf5d05cca2849f82e97c6a27dd2933ade3
493dcc033827af4f74bbbf3ff416b2cbeef5be5d
"2020-07-06T20:07:51Z"
python
"2020-07-15T18:49:12Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,144
["cypress/integration/feature_flags.js", "frontend/src/lib/utils.js", "frontend/src/scenes/experiments/EditFeatureFlag.js", "frontend/src/scenes/experiments/FeatureFlags.js", "frontend/src/scenes/experiments/featureFlagLogic.js"]
Deleting Feature Flags
**Is your feature request related to a problem? Please describe.** I was testing feature flags and want to get rid of some old flags. But I can't. There's no delete button. **Describe the solution you'd like** I'd like a big red button (dark red, but not too red) with a beautiful trash icon with a slight inset shadow on it, which spins in 3D when clicked and then deletes the feature flag, only to disappear in a cloud of smoke to reveal a floating, colourful and inviting "undo" button (I'm thinking greenish, fresh, leafy, at roughly the same coordinates), telling me all is well in the world and that I can still get my flag back if I really want to. This button, too, should disappear in a cloud of smoke once its time has passed. The entire experience should leave me joyful. **Describe alternatives you've considered** Renaming them to "deleted1", "deleted2", etc. Doesn't sound like joy. **Additional context** Nah.. ***Thank*** you for your feature request - we love each and every one!
https://github.com/PostHog/posthog/issues/1144
https://github.com/PostHog/posthog/pull/1761
632b2195f7c3f42af71f7f51a889c9bc7ab05048
f117b66ebe7411a2efe00dc987190c3f15bfd506
"2020-07-03T12:48:51Z"
python
"2020-10-13T10:47:46Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,089
["posthog/api/test/test_funnel.py", "posthog/models/funnel.py"]
Postgres query has syntax error when defining funnel with $autocapture and CSS selector
**Describe the bug** When creating a funnel with $autocapture and css selector the postgresql query fails. **To Reproduce** Create the following funnel: ![Screenshot from 2020-06-24 23-08-13](https://user-images.githubusercontent.com/148820/85622527-a2f7df00-b66f-11ea-8463-c67ff0b54ab8.png) **Expected behavior** No crash **Additional context** The generated query is here: ```sql SELECT "posthog_person"."id", "posthog_person"."created_at", "posthog_person"."team_id", "posthog_person"."properties", "posthog_person"."is_user_id", MIN("step_0"."step_ts") as "step_0" FROM ( SELECT "pdi"."person_id", MIN("posthog_event"."timestamp") AS "step_ts" FROM posthog_event JOIN posthog_persondistinctid pdi ON pdi.distinct_id = posthog_event.distinct_id WHERE ( "posthog_event"."timestamp" >= '2020-06-17T00:00:00+00:00' :: timestamptz AND "posthog_event"."event" = '$autocapture' AND "posthog_event"."team_id" = 2 AND EXISTS( SELECT W0."id" FROM posthog_event JOIN posthog_persondistinctid pdi ON (pdi.distinct_id = posthog_event.distinct_id) W0 WHERE ( W0."id" = "posthog_event"."id" AND W0."elements_hash" IN ( SELECT V0."hash" FROM "posthog_elementgroup" V0 WHERE ( V0."team_id" = 2 AND ( SELECT U0."order" FROM "posthog_element" U0 WHERE ( U0."attr_class" @ > ARRAY ['foobar'] :: varchar(200) [] AND U0."group_id" = V0."id" ) ORDER BY U0."order" ASC LIMIT 1 ) IS NOT NULL ) ) ) ) ) GROUP BY "pdi"."person_id" ) step_0 JOIN posthog_person ON posthog_person.id = "step_0".person_id WHERE "step_0".person_id IS NOT NULL GROUP BY "posthog_person"."id", "posthog_person"."created_at", "posthog_person"."team_id", "posthog_person"."properties", "posthog_person"."is_user_id" ``` The syntax error is in this subquery: ```sql SELECT W0."id" FROM posthog_event JOIN posthog_persondistinctid pdi ON (pdi.distinct_id = posthog_event.distinct_id) W0 ``` Have not yet dug into what causes the faulty query.
https://github.com/PostHog/posthog/issues/1089
https://github.com/PostHog/posthog/pull/1123
c058084646601c6e917a35d71a122a125197d5eb
705d12f78b73c680b3ebc125cbc5b46c53e24102
"2020-06-24T20:11:37Z"
python
"2020-06-30T13:16:21Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
1,010
["posthog/celery.py"]
"Configuration error" warning routinely appears
**Describe the bug** Very often as I'm browsing stats I see a red "Configuration Error" warning in the top right of the UI. Clicking it shows a warning dialog that includes "We can't seem to reach your worker". **To Reproduce** 1. 2. 3. **Expected behavior** Not to show an error **Screenshots** **Hosted or self hosted?:** Self-hosted, 1.8.0 **Additional context** I have a feeling it may be due to performance issues? This instance tracks ~745k events daily, although system load is nowhere near a concern.
https://github.com/PostHog/posthog/issues/1010
https://github.com/PostHog/posthog/pull/1013
7faa932c6c28c11d55b06d73a96b409956749a47
340ee4cc33cb92ab338929352ac59d83f51a29dd
"2020-06-13T21:51:01Z"
python
"2020-06-15T09:33:02Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
953
["frontend/src/lib/utils.js", "frontend/src/scenes/dashboard/DashboardHeader.js"]
It isn't obvious how to add items to dashboard, from the dashboard itself
**Is your feature request related to a problem? Please describe.** You have to 'discover' the "add to dashboard" button on the trends or funnels pages. **Describe the solution you'd like** It'd be easier to use if, when a user looks at the dashboard itself, there is a + icon which asks "trend or funnel" (similar to the inspect element versus manual action creation lightbox) and then takes the user to the right place. **Describe alternatives you've considered** Quit the dashboard and go to the trends/funnels pages directly.
https://github.com/PostHog/posthog/issues/953
https://github.com/PostHog/posthog/pull/1242
25d9f8700cc71a774b42fa53d8a11bf1ac27be58
de50908c4d89133e35cc6cc89afce791786307aa
"2020-06-08T13:18:14Z"
python
"2020-07-20T13:24:16Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
936
["frontend/src/scenes/insights/ActionFilter/ActionFilter.js", "frontend/src/scenes/insights/ActionFilter/ActionFilter.scss", "frontend/src/scenes/insights/ActionFilter/entityFilterLogic.js", "frontend/src/scenes/insights/InsightTabs/FunnelTab/FunnelTab.tsx", "package.json", "yarn.lock"]
Support funnel step reordering
**Is your feature request related to a problem? Please describe.** I need to add a new step at the start of my funnel but I can't just insert a new one or move one to the start. I have to delete all existing actions+filters and start again. **Describe the solution you'd like** To click + drag actions to reorder them in the list. **Describe alternatives you've considered** **Additional context** ***Thank*** you for your feature request - we love each and every one!
https://github.com/PostHog/posthog/issues/936
https://github.com/PostHog/posthog/pull/2862
9f9516ab8e6097dc088a92c7370ba7d46ce70a1a
663d853a071e2c13873e4cb1e7996c9677dcf7f0
"2020-06-06T18:53:52Z"
python
"2021-01-08T08:35:12Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
891
[".github/workflows/ci-e2e.yml", ".github/workflows/ci-frontend.yml"]
Temporary Token errors in app
**Describe the bug** If PostHog is set up behind a proxy but `IS_BEHIND_PROXY` is not set, most endpoints just work, except api/action, because of the weird temporary token thing we've built. This should just work. **To Reproduce** 1. 2. 3. **Expected behavior** **Screenshots** **Hosted or self hosted?:** - hosted/self-hosted - (if self-hosted) what version? **Additional context**
https://github.com/PostHog/posthog/issues/891
https://github.com/PostHog/posthog/pull/19894
aba39b210248c661c9c066dc3fb068cab1202e70
a6fec59eed7f503c4e0c3fe2340824b89d217f82
"2020-05-28T21:31:53Z"
python
"2024-01-23T13:27:04Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
888
[".github/workflows/ci-e2e.yml", ".github/workflows/ci-frontend.yml"]
API for elements volume
Create an API to get volume for all elements on a page
https://github.com/PostHog/posthog/issues/888
https://github.com/PostHog/posthog/pull/19894
aba39b210248c661c9c066dc3fb068cab1202e70
a6fec59eed7f503c4e0c3fe2340824b89d217f82
"2020-05-28T13:23:17Z"
python
"2024-01-23T13:27:04Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
810
[".github/workflows/ci-e2e.yml", ".github/workflows/ci-frontend.yml"]
Clean up and document APIs
At the moment the only way to work out what our APIs are is through opening the developer console. We need first class API documentation instead. I've tried using something like redoc to automatically generate documentation, but it's not very configurable and misses quite a lot of variables. I'd be in favour of just manually documenting APIs and having a standard template for it.
https://github.com/PostHog/posthog/issues/810
https://github.com/PostHog/posthog/pull/19894
aba39b210248c661c9c066dc3fb068cab1202e70
a6fec59eed7f503c4e0c3fe2340824b89d217f82
"2020-05-20T14:46:04Z"
python
"2024-01-23T13:27:04Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
809
[".github/workflows/ci-e2e.yml", ".github/workflows/ci-frontend.yml"]
Improve event insert/worker performance
**Describe the bug** We have a user that's doing 40 req/sec. The workers can't keep up with the volume of events, so the Redis cache fills up and then falls over. At the moment their setup is 1. 3 web dynos 2X 1. 15 worker dynos 2X 1. 1GB Redis cache 1. Standard-1 postgres database As you can see [in this test](https://github.com/PostHog/posthog/blob/master/posthog/tasks/test/test_process_event.py#L19), we do 19 queries for each event insert (with autocapture). **Ideas** - I think the query that [finds the relevant actions for an event](https://github.com/PostHog/posthog/blob/master/posthog/models/event.py#L239) is slow, so it might be worth using the same pattern we have for Cohorts to calculate them after the fact. - For this specific user, the DOM tree is probably unique for a lot of events, so the elements have to be re-inserted. We already bulk-insert all of the Elements, but there might still be gains to be had here
https://github.com/PostHog/posthog/issues/809
https://github.com/PostHog/posthog/pull/19894
aba39b210248c661c9c066dc3fb068cab1202e70
a6fec59eed7f503c4e0c3fe2340824b89d217f82
"2020-05-20T13:34:05Z"
python
"2024-01-23T13:27:04Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
794
["frontend/src/scenes/sceneLogic.js", "frontend/src/scenes/users/Person.js"]
Getting 404 Not found when navigating to user
**Describe the bug** I identify my users using their email. On the All Events page I click on a person with the name [email protected] and it sends me to https://site.com/person/t%40t.com. On this page I get a 404 Page not found. I expect to see the person's page with all their events. **Additional context** This worked previously before 1.5.0. After updating to 1.5.0 all my identified persons' links are broken. Unidentified users, which have the generated UUID as the distinct_id, still work
https://github.com/PostHog/posthog/issues/794
https://github.com/PostHog/posthog/pull/795
f5cb8f6e0960e236566e6879d8070506b93c6107
95430592e8a3dc83235ac3e52986842da5313656
"2020-05-18T06:39:06Z"
python
"2020-05-18T12:08:37Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
787
["frontend/src/layout/Sidebar.js", "frontend/src/layout/Sidebar.scss", "frontend/src/lib/hooks/useEscapeKey.js", "frontend/src/scenes/App.js"]
Mobile sidebar bad ux
**Is your feature request related to a problem? Please describe.** Below when the menu is in mobile mode, opening it shouldn't squish the rest of the interface to tiny levels ![Screenshot_20200515-203711](https://user-images.githubusercontent.com/53387/82084715-03e4dc80-96ec-11ea-8509-6a21ca1a17c1.jpg) **Describe the solution you'd like** The menu should come on top or push the content to the right **Describe alternatives you've considered** **Additional context** ***Thank*** you for your feature request - we love each and every one!
https://github.com/PostHog/posthog/issues/787
https://github.com/PostHog/posthog/pull/839
ce4da7a1f391b9869584ba104196139c371cdbf6
b68f7a6aae83193135b9b9d73be964c11a429292
"2020-05-15T18:40:51Z"
python
"2020-05-27T11:01:26Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
716
["frontend/src/scenes/trends/trendsLogic.js", "package.json", "yarn.lock"]
Add breakpoint in trendsLogic
https://github.com/PostHog/posthog/issues/716
https://github.com/PostHog/posthog/pull/754
20fab5d5a24d5eb14a038d8f54cf68c956c721c4
b446731cedf5b4ddbfc5520aa92b0e6eedca1974
"2020-05-05T02:52:06Z"
python
"2020-05-13T14:02:01Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
713
["frontend/src/scenes/trends/ActionsLineGraph.js", "frontend/src/scenes/trends/LineGraph.js"]
If data is complete, do not show a dashed line in the graph
**Describe the bug** ![Screenshot 2020-05-04 at 17 11 33](https://user-images.githubusercontent.com/47497682/80988034-d44fed80-8e2a-11ea-89f9-17bde1f815e9.png) The last data point above is in the past, so should have complete data. **To Reproduce** 1. Draw a graph with an end date in the past 2. The last data point will have a dotted line to it, even if data is complete **Expected behavior** The last data point should only have a dotted line to it if it is not yet in the past (i.e. its data may still be incomplete).
https://github.com/PostHog/posthog/issues/713
https://github.com/PostHog/posthog/pull/735
6aea09228c2dfe0ebb810c24102fad456e2589a1
2c3c09beb404a1915811a44940b264f4c20e581d
"2020-05-04T16:16:28Z"
python
"2020-05-11T10:49:55Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
599
[".github/workflows/container-images-cd.yml", ".github/workflows/pr-cleanup.yml", ".github/workflows/pr-deploy.yml"]
EventsTable: Easily allow me to see first events/sort by timestamp
**Is your feature request related to a problem? Please describe.** I want to be able to see the first events that happened, especially if I'm filtering for something **Describe the solution you'd like** Either allow me to 'skip to the end' in the table, or (probably easier) allow me to sort the EventsTable in reverse. **Describe alternatives you've considered** Clicking "Load more events" a lot. **Additional context** ***Thank*** you for your feature request - we love each and every one!
https://github.com/PostHog/posthog/issues/599
https://github.com/PostHog/posthog/pull/19557
9af89cd98ffd3be52685593efa6921890cab2742
a49eda3f50fe92de71950ec6415b6ed7dd9f994b
"2020-04-15T19:07:31Z"
python
"2024-01-02T15:06:55Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
595
["plugins/package.json", "plugins/yarn.lock"]
[In progress] Offload storing events to workers
**Is your feature request related to a problem? Please describe.** Sending an event to PostHog can sometimes be slow because we do ~18 queries for each event. **Describe the solution you'd like** I'd like storing of events to be offloaded to workers.
https://github.com/PostHog/posthog/issues/595
https://github.com/PostHog/posthog/pull/6465
296647fa1023e8a4844d52fdd03fc27b96845844
21aac01bbf566520a11fba7bd074e63049ef8d0c
"2020-04-15T15:04:37Z"
python
"2021-10-15T10:46:34Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
390
["plugins/package.json", "plugins/yarn.lock"]
Change "add event property filter" to "Filter Event Property"
***Thank*** you for your feature request - we love each and every one :) **Is your feature request related to a problem? Please describe.** Watching a user go through setup showed the wording of this button was confusing: https://www.loom.com/share/2e07357fadc54b70bdc39287734a00dd They thought this could effect a permanent change to the database (this was on the user view, where it sits on the same line as "delete data on this person", so it felt equivalent, especially as one is red and the other is green). <img width="1158" alt="Posthog" src="https://user-images.githubusercontent.com/60791437/77164962-45208d80-6aa9-11ea-834e-47777d7100e9.png"> **Describe the solution you'd like** Change the text from "Add event property filter" to "Filter event property" **Describe alternatives you've considered** Leave as is
https://github.com/PostHog/posthog/issues/390
https://github.com/PostHog/posthog/pull/4404
66538f70d78b067f8f48953d0ca8f5b6042471df
bfcd4bed6a80d0364ef6558e318d20cb4e7cf34b
"2020-03-20T12:50:35Z"
python
"2021-05-19T19:56:28Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
263
[".github/workflows/benchmark.yml", ".github/workflows/ci-backend-update-test-timing.yml", ".github/workflows/storybook-deploy.yml"]
Logout Button
**Is your feature request related to a problem? Please describe.** I can't log out with a button on PostHog; I need to navigate to app.posthog.com/logout **Describe the solution you'd like** From my logged-in user name I should be able to select a logout button **Describe alternatives you've considered** Continue to navigate to app.posthog.com/logout **Additional context** ![Posthog](https://user-images.githubusercontent.com/60791437/75726087-900f6800-5c96-11ea-8da3-52b1f5d6094a.png)
https://github.com/PostHog/posthog/issues/263
https://github.com/PostHog/posthog/pull/19892
3fba4c63aa98deb5986243ca1ee18b52bbc68374
62f11944e313606d9a84d67e62c28e8f95a5c1d7
"2020-03-02T23:01:42Z"
python
"2024-01-23T13:13:50Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
226
["pnpm-lock.yaml"]
Django community
Progress * Post made in Show and Tell of the forums: https://forum.djangoproject.com/c/projects/11 * Listed the project in Django Projects: https://djangopackages.org/packages/p/posthog/ * Sent personal emails to a few Django key people * Listed the project in /r/django, [here](https://www.reddit.com/r/django/comments/fclp16/djangobased_open_source_product_analytics/). To check if suitable: - [x] https://www.reddit.com/r/django - [x] Django IRC: irc://irc.freenode.net/django - [x] #Show-and-tell in https://pyslackers.com/web/slack
https://github.com/PostHog/posthog/issues/226
https://github.com/PostHog/posthog/pull/16854
6eb05ecf01b900d061f10d94a26949ce4f4c7a4c
f26642a6f283d057515edc799baac7ef85f85be8
"2020-02-28T07:09:49Z"
python
"2023-08-07T12:01:20Z"
closed
PostHog/posthog
https://github.com/PostHog/posthog
136
["yarn.lock"]
Live Actions won't load
When navigating to live actions, live actions don't load. 1. Go to 'Actions > Live Actions' 2. See error <img width="1120" alt="Posthog" src="https://user-images.githubusercontent.com/60791437/74898128-d20ee480-534d-11ea-8858-8e30b1c1ae71.png"> - macOS - Chrome
https://github.com/PostHog/posthog/issues/136
https://github.com/PostHog/posthog/pull/10105
7e55991c0e39b23d5e0848b9025478edc45435ae
433b9d8cf437068a68a03b690a43f193dbaba5ee
"2020-02-20T03:27:37Z"
python
"2022-06-02T10:49:00Z"
closed
localstack/localstack
https://github.com/localstack/localstack
10,328
["localstack/services/s3/utils.py", "tests/aws/services/s3/test_s3.py", "tests/aws/services/s3/test_s3.snapshot.json", "tests/aws/services/s3/test_s3.validation.json"]
bug: S3 CopyObject returns 500 - Internal error
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior - Localstack fails with HTTP 500, error code "InternalError", message "not enough values to unpack (expected 2, got 1)" when `CopySource` is an invalid string for `copy_object` API. ### Expected Behavior - For the same API `copy_object`, the cloud fails with HTTP 400, error code "InvalidArgument". - Localstack should handle invalid `CopySource` i.e., it is a dictionary with one value or an invalid string not adhering to "{bucket}/{key}" format. ### How are you starting LocalStack? With a `docker run` command ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run localstack/localstack #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) botoclient.copy_object(CopySource="Invalid_random_string", Bucket="test_bucket", Key=key) test_bucket is a valid bucket. ### Environment ```markdown - OS: Ubuntu 22.04.2 LTS - LocalStack: 2.1.0 - boto3: 1.28.6 ``` ### Anything else? _No response_
https://github.com/localstack/localstack/issues/10328
https://github.com/localstack/localstack/pull/10338
b0dea55360d71697e82ba25a137d690e13ed4c72
307cbe952b35a3d38979492cbf8b4f6fbad56500
"2024-02-26T23:32:53Z"
python
"2024-02-29T11:54:10Z"
closed
localstack/localstack
https://github.com/localstack/localstack
10,311
["localstack/services/apigateway/helpers.py", "tests/aws/services/apigateway/test_apigateway_common.py", "tests/aws/services/apigateway/test_apigateway_common.validation.json", "tests/unit/test_apigateway.py"]
bug: When defining APIGateway Resources with Path Parameters, the order of definition in CDK causes behavior change
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When defining path parameter resources, for example: * `/documents/{documentId+}` * `/documents/search` In CDK, the order of definition changes what LocalStack returns. If you define the path parameter `/documents/{documentId+}` before `/documents/search` and then try to hit `/documents/search`, LocalStack instead invokes the integration for `/documents/{documentId+}`, rather than the intended one. However, if you define `/documents/search` before `/documents/{documentId+}` in CDK, LocalStack invokes the correct integration. ### Expected Behavior In AWS, the order does not matter, and no matter which order they're defined in, trying to hit `/document/search` will result in the integration for that resource being invoked. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run localstack/localstack #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) ```ts export class GatewayStack extends Stack { constructor(scope: Construct, id: string, props: StackProps) { super(scope, id, props); const api = new RestApi(this, "test-api", { restApiName: "test-api", }); const getByIdLambda = new Function(this, "get-by-id-handler", { runtime: Runtime.NODEJS_20_X, code: Code.fromInline( "exports.handler = async () => ({ statusCode: 200, body: 'Get By Id' });", ), handler: "index.handler", }); const searchLambda = new Function(this, "search-lambda-handler", { runtime: Runtime.NODEJS_20_X, code: Code.fromInline( "exports.handler = async () => ({ statusCode: 200, body: 'Search Results' });", ), handler: "index.handler", }); const documents = api.root.addResource("document"); const getById = documents.addResource("{documentId+}"); const search = documents.addResource("search"); getById.addMethod("GET", new LambdaIntegration(getByIdLambda)); search.addMethod("GET", new LambdaIntegration(searchLambda)); } } ``` ```sh GATEWAY_ID=$(awslocal apigateway get-rest-apis --query "items[0].id" --output text) curl https://$GATEWAY_ID.execute-api.localhost.localstack.cloud:4566/prod/document/search ``` ### Environment ```markdown - OS: Ubuntu 20.04 - LocalStack: 3.1 ``` ### Anything else? I've reproduced the issue here: https://github.com/Garethp/localstack-bugs/tree/order-dependant-path-parts If you clone down the branch `order-dependant-path-parts`, run `yarn install` and then `./start.sh` you should see the issue in action
https://github.com/localstack/localstack/issues/10311
https://github.com/localstack/localstack/pull/10317
4743245ec1fe8b3e6577c7aaa518dda2931939a3
079aaff1b072344439141b023f12f402b5555870
"2024-02-23T17:37:07Z"
python
"2024-02-27T11:51:33Z"
closed
localstack/localstack
https://github.com/localstack/localstack
10,233
["localstack/config.py", "localstack/constants.py", "localstack/runtime/init.py", "localstack/services/infra.py", "tests/aws/test_terraform.py"]
bug: AWS_REGION overwritten with us-east-1 when using docker compose
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I pass `AWS_REGION=eu-central-1` to the localstack container in the environment section but it is overwritten with `us-east-1`. ### Expected Behavior `AWS_REGION` should be `eu-central-1` ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce This is my localstack container in the docker compose file: ``` mock-aws: image: localstack/localstack-pro:3.1 ports: - "4566:4566" volumes: - type: bind source: ../integration-tests/aws target: /etc/localstack/init/ready.d read_only: true # Path to local Docker UNIX domain socket - "/var/run/docker.sock:/var/run/docker.sock" environment: - LS_LOG=debug - AWS_ACCESS_KEY_ID=test - AWS_SECRET_ACCESS_KEY=test - AWS_REGION=eu-central-1 - LOCALSTACK_AUTH_TOKEN=${LOCALSTACK_AUTH_TOKEN:- } - [email protected] ``` I mount a folder which contains bash script to be run to initialize the localstack services. Here is for example the content of `integration-tests/aws/ses.sh`: ``` #!/bin/bash awslocal ses verify-email-identity \ --email-address ${EMAIL_SENDER_ADDRESS} \ --region ${AWS_REGION} awslocal ses list-identities \ --region ${AWS_REGION} ``` The script is executed but `AWS_REGION` seems to be overwritten and now has the value `us-east-1`. Please note that the other variable called `EMAIL_SENDER_ADDRESS` has the proper value. See output: ``` local-dev-mock-aws-1 | + awslocal ses verify-email-identity --email-address [email protected] --region us-east-1 local-dev-mock-aws-1 | 2024-02-13T14:17:04.827 INFO --- [ asgi_gw_1] localstack.request.aws : AWS ses.VerifyEmailIdentity => 200 local-dev-mock-aws-1 | + awslocal ses list-identities --region us-east-1 local-dev-mock-aws-1 | 2024-02-13T14:17:06.026 INFO --- [ asgi_gw_0] localstack.request.aws : AWS ses.ListIdentities => 200 local-dev-mock-aws-1 | 2024-02-13T14:17:06.165 INFO --- [ asgi_gw_1] localstack.request.aws : AWS ses.ListIdentities => 200 local-dev-mock-aws-1 | { local-dev-mock-aws-1 | "Identities": [ local-dev-mock-aws-1 | "[email protected]" local-dev-mock-aws-1 | ] local-dev-mock-aws-1 | } ``` This was working with localstack version `2.1.0` ### Environment ```markdown - OS: Ubuntu 22.04.3 LTS - LocalStack: 3.1.0 ``` ### Anything else? _No response_
https://github.com/localstack/localstack/issues/10233
https://github.com/localstack/localstack/pull/10272
a346f192b88e16664ba0071211ba638e3a377b57
baa61fb00c3235c4c64ad972962c4b3c5aefd16d
"2024-02-13T14:21:47Z"
python
"2024-02-23T12:12:16Z"
closed
localstack/localstack
https://github.com/localstack/localstack
10,216
["localstack/services/stepfunctions/asl/component/state/state_execution/state_task/service/state_task_service.py", "localstack/services/stepfunctions/asl/component/state/state_execution/state_task/service/state_task_service_callback.py", "localstack/services/stepfunctions/asl/component/state/state_execution/state_task/service/state_task_service_ecs.py", "localstack/services/stepfunctions/asl/component/state/state_execution/state_task/service/state_task_service_factory.py", "localstack/services/stepfunctions/asl/component/state/state_execution/state_task/service/state_task_service_sfn.py", "tests/aws/cdk_templates/StepFunctionsEcsTask/StepFunctionsEcsTaskStack.json", "tests/aws/services/stepfunctions/conftest.py", "tests/aws/services/stepfunctions/v2/services/test_ecs_task_service.py", "tests/aws/services/stepfunctions/v2/services/test_ecs_task_service.snapshot.json", "tests/aws/services/stepfunctions/v2/services/test_ecs_task_service.validation.json"]
bug: Step functions does not support ECS service type
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior ``` 6:48:04 AM | CREATE_FAILED | AWS::StepFunctions::StateMachine | BuildSourceCodeWor...ateMachine67076E41 An error occurred (InvalidDefinition) when calling the CreateStateMachine operation: Error=NotImplementedError Args=["Unsupported service: 'ecs'."] in definition '{"StartAt":"Update Deployment: Status=BUILD_IN_PROGRESS","States" ``` I get the above error when trying to deploy my step functions workflow with CDKLocal ### Expected Behavior The stack should deploy successfully ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run localstack/localstack #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) awslocal s3 mb s3://mybucket ### Environment ```markdown - OS: Linux - LocalStack: latest ``` ### Anything else? _No response_
https://github.com/localstack/localstack/issues/10216
https://github.com/localstack/localstack/pull/10321
f0993ac829ef81ae602e5fec8d8842e536d9f5e7
b927e63dfba9e126d04ffa7b054ea9eda59a2428
"2024-02-10T07:19:48Z"
python
"2024-03-01T11:29:15Z"
closed
localstack/localstack
https://github.com/localstack/localstack
10,122
["requirements-base-runtime.txt", "requirements-dev.txt", "requirements-runtime.txt", "requirements-test.txt", "requirements-typehint.txt", "setup.cfg"]
bug: no availability zones available in region eu-central-2
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior when listing the availability zones of eu-central-2 I receive an empty list: ``` { "AvailabilityZones": [] } ``` As a result of this I cannot create a VPC subnet in the region eu-central-2 ### Expected Behavior ``` { "AvailabilityZones": [ { "State": "available", "Messages": [], "RegionName": "eu-central-2", "ZoneName": "eu-central-2a", "ZoneId": "euc2-az2", "ZoneType": "availability-zone" }, { "State": "available", "Messages": [], "RegionName": "eu-central-2", "ZoneName": "eu-central-2b", "ZoneId": "euc2-az3", "ZoneType": "availability-zone" }, { "State": "available", "Messages": [], "RegionName": "eu-central-2", "ZoneName": "eu-central-2c", "ZoneId": "euc2-az1", "ZoneType": "availability-zone" } ] } ``` ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce docker compose -f ./docker-compose.yml -p localstack-docker-compose up -d localstack ``` version: "3.8" services: localstack: container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}" image: localstack/localstack-pro # required for Pro ports: - "127.0.0.1:4566:4566" # LocalStack Gateway - "127.0.0.1:4510-4559:4510-4559" # external services port range - "127.0.0.1:443:443" # LocalStack HTTPS Gateway (Pro) environment: # Activate LocalStack Pro: https://docs.localstack.cloud/getting-started/auth-token/ - LOCALSTACK_AUTH_TOKEN=${LOCALSTACK_AUTH_TOKEN:?} # required for Pro # LocalStack configuration: https://docs.localstack.cloud/references/configuration/ - DEBUG=${DEBUG:-0} - PERSISTENCE=${PERSISTENCE:-0} volumes: - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack" - "/var/run/docker.sock:/var/run/docker.sock" ``` aws --region=eu-central-2 --endpoint-url=http://localhost:4566 ec2 describe-availability-zones ### Environment ```markdown - OS:Kubuntu 23.10 - LocalStack: stable (3.1.0) ``` ### Anything else? _No response_
https://github.com/localstack/localstack/issues/10122
https://github.com/localstack/localstack/pull/10286
1156279ee0198d11032d4e284ab29017b9e4b49e
0fae374328169e4ceafc720ae85af2f03c6590df
"2024-01-25T14:41:21Z"
python
"2024-02-21T10:59:13Z"
closed
localstack/localstack
https://github.com/localstack/localstack
10,107
["localstack/services/sqs/models.py", "tests/aws/services/sqs/test_sqs.py", "tests/aws/services/sqs/test_sqs.snapshot.json", "tests/aws/services/sqs/test_sqs.validation.json"]
bug: can't receive message from fifo queue with the same MessageGroupId
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior with python boto3 client - send a message to fifo queue with MessageGroupId='1' - receive the message - delete the message - send another message to the queue with MessageGroupId='1' - try to receive the message, get none - check message availability with get_queue_attributes, get `{'ApproximateNumberOfMessages': '1', 'ApproximateNumberOfMessagesNotVisible': '0', 'ApproximateNumberOfMessagesDelayed': '0', ...` ### Expected Behavior expecting to receive message from the queue ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce docker-compose up ### Environment ```markdown - OS:MacOS Sonoma 14.2.1 - LocalStack: latest ``` ### Anything else? I tried to run with Rosetta, no result
https://github.com/localstack/localstack/issues/10107
https://github.com/localstack/localstack/pull/10223
3cce93ca0bcd16af03d9666288d2fbb7e21c29f6
4cc83f3dd05ea3f959debbab531c8353a34e0c12
"2024-01-23T14:44:10Z"
python
"2024-02-13T15:41:07Z"
closed
localstack/localstack
https://github.com/localstack/localstack
10,106
["localstack/services/stepfunctions/provider.py", "tests/aws/services/stepfunctions/v2/test_sfn_api.py", "tests/aws/services/stepfunctions/v2/test_sfn_api.snapshot.json", "tests/aws/services/stepfunctions/v2/test_sfn_api.validation.json"]
bug: StepFunctions: --reverse-order doesn't work with get-execution-history
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When I fetch execution history via the command below, it always returns history events in ascending order. `awslocal stepfunctions --region eu-west-1 get-execution-history --reverse-order --execution-arn <execution-arn>` I also tried `awslocal stepfunctions --region eu-west-1 get-execution-history --no-reverse-order --execution-arn <execution-arn>`, and it returned the same results, which proves again that `--reverse-order` didn't take effect. I also tried with `GetExecutionHistoryCommand` from `@aws-sdk/client-sfn`, with the same result: reverse order didn't take effect. It seems that `PROVIDER_OVERRIDE_STEPFUNCTIONS=v2` causes this issue, and it's set to v2 by default in localstack v3. But if I set it back to `PROVIDER_OVERRIDE_STEPFUNCTIONS=legacy` and use LocalStack v2, then this problem doesn't appear anymore. ### Expected Behavior When `--reverse-order` is provided, execution history is returned in descending order. Also, the same command sent from the SDK should see the same results. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker compose -f docker-compose.yml up -d #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) awslocal stepfunctions --region eu-west-1 get-execution-history --reverse-order --execution-arn <execution-arn> ### Environment ```markdown - OS: macOS 14.1.1 - LocalStack: I tried both v3.0 and v3.0.2, neither of them worked. But v2.1 worked. ``` ### Anything else? _No response_
https://github.com/localstack/localstack/issues/10106
https://github.com/localstack/localstack/pull/10131
faefc34f8a83d21e9b4cf0ba57aa586490c258b6
7da657c420379ab46b37c87115fee37ba4ad170d
"2024-01-23T12:28:51Z"
python
"2024-01-29T08:22:32Z"
closed
localstack/localstack
https://github.com/localstack/localstack
10,090
["localstack/services/s3/cors.py", "tests/aws/services/s3/test_s3_cors.py", "tests/aws/services/s3/test_s3_cors.snapshot.json", "tests/aws/services/s3/test_s3_cors.validation.json"]
bug: S3 CORS is not honoring subdomain `*` syntax
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Local bucket's CORS policy: ``` $ aws s3api get-bucket-cors --bucket foo --endpoint-url=http://localhost:4566 { "CORSRules": [ { "AllowedHeaders": [ "*" ], "AllowedMethods": [ "GET", "PUT" ], "AllowedOrigins": [ "http://*.example.com", "https://app.localstack.cloud", "http://app.localstack.cloud" ] } ] } ``` Doing OPTIONS preflight through curl: `curl -v --request OPTIONS 'http://127.0.0.1:4566/foo/' -H 'Origin: http://subd.example.com' -H 'Access-Control-Request-Method: GET'` Returns: ``` < HTTP/1.1 403 < Content-Type: application/xml < Content-Length: 534 < x-amz-request-id: 2432170e-f270-42a8-91fe-2b8415cada4a < x-amz-id-2: s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234= < Connection: close < date: Fri, 19 Jan 2024 20:26:37 GMT < server: hypercorn-h11 < <?xml version='1.0' encoding='utf-8'?> * Closing connection <Error><Code>AccessForbidden</Code><Message>CORSResponse: This CORS request is not allowed. This is usually because the evalution of Origin, request method / Access-Control-Request-Method or Access-Control-Request-Headers are not whitelisted by the resource's CORS spec.</Message><RequestId>2432170e-f270-42a8-91fe-2b8415cada4a</RequestId><HostId>9Gjjt1m+cjU4OPvX9O9/8RuvnG41MRb/18Oux2o5H5MY7ISNTlXN+Dz9IG62/ILVxhAGI0qyPfg=</HostId><Method>GET</Method><ResourceType>OBJECT</ResourceType></Error>% ``` ### Expected Behavior Would expect an OPTIONS call to e.g. `http://subd.example.com` to return 200 OK. I have tested this against real S3 and this works. Documentation: https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html#cors-allowed-origin ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce See above. ### Environment ```markdown - OS: macOS Sonoma 14.2.1 - LocalStack: latest ``` ### Anything else? _No response_
https://github.com/localstack/localstack/issues/10090
https://github.com/localstack/localstack/pull/10365
f3c2b4b3825745167d8f731f51162b6b172c1f1c
690fa807029f36c127ef01cacf96f2b34775abe7
"2024-01-19T20:29:38Z"
python
"2024-03-06T16:01:42Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,990
["localstack/services/s3/utils.py", "tests/aws/services/s3/test_s3.py", "tests/aws/services/s3/test_s3.snapshot.json", "tests/aws/services/s3/test_s3.validation.json", "tests/aws/services/s3/test_s3_api.py", "tests/aws/services/s3/test_s3_list_operations.py"]
bug: Copying files with spaces in the name returns a 404 error
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior First, I upload a file that includes spaces in its filename. Then, when I try to copy that file, I get a 404 error, and the file is not copied. ``` 2024/01/04 21:31:18 error s3client copy object: operation error S3: CopyObject, https response error StatusCode: 404, RequestID: 81bbb363-4bf9-49ac-ac4e-fe4918a9a09c, HostID: s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234=, api error NoSuchKey: The specified key does not exist. Exiting. ``` ### Expected Behavior The file should be copied successfully. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) ```docker-compose.yml version: '3.9' services: localstack: build: context: localstack dockerfile: Dockerfile environment: - AWS_DEFAULT_REGION=ap-northeast-1 - AWS_DEFAULT_OUTPUT=json - AWS_ACCESS_KEY_ID=dummy - AWS_SECRET_ACCESS_KEY=dummy - LOCALSTACK_SERVICES=s3,ses - LS_LOG=debug - DEBUG=1 ports: - "4566:4566" ``` ```Dockerfile FROM localstack/localstack:latest COPY ready.d /etc/localstack/init/ready.d ``` Here is the complete code: git clone https://github.com/yuki2006/localstack_check go run main.go ### Environment ```markdown - OS: docker image localstack/localstack - LocalStack:latest ``` ### Anything else? This is the implementation following the example provided here: https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/go/example_code/s3/s3_copy_object.go#L65C36-L65C47 (https://github.com/yuki2006/localstack_check/blob/master/main.go#L79) Additionally, this issue does not occur with the actual S3; no errors were encountered.
https://github.com/localstack/localstack/issues/9990
https://github.com/localstack/localstack/pull/9992
84fb504e9a2b4491b2e91f2dcd0facd5e96429b0
0c01be0932d34e09b2127927bca6a3ccc3c099b6
"2024-01-04T12:10:08Z"
python
"2024-01-04T21:29:45Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,989
["localstack/services/s3/v3/provider.py", "tests/aws/services/s3/test_s3.py"]
bug: Extra content added to S3 object when putObject with CRT client
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I'm implementing the second example, using the transferManager described here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/example_s3_Scenario_UploadStream_section.html ``` Long sizeOnS3; var key = dataSource.getObjectKey(); S3TransferManager transferManager = S3TransferManager.builder() .s3Client(s3AsyncClient) .build(); BlockingInputStreamAsyncRequestBody body = AsyncRequestBody.forBlockingInputStream(null); Upload upload = transferManager.upload(builder -> builder .requestBody(body) .putObjectRequest(req -> req.bucket(bucketName).key(key)) .build()); body.writeInputStream(dataSource.getDataStream()); try { upload.completionFuture().get(); sizeOnS3 = s3Client.headObject(b -> b.bucket(bucketName).key(dataSource.getObjectKey())).contentLength(); } catch (Exception e) { log.error("Error while uploading file to S3", e); throw new RuntimeException(e); } finally { IOUtils.closeQuietly(dataSource.getDataStream()); } ``` with the S3AsyncClient configure as such: ``` @Bean public S3AsyncClient amazonAsyncS3Client(final Region region, final AwsCredentialsProvider awsCredentialsProvider, @Value("${application.aws.s3.uri}") final String maybeS3Uri) throws URISyntaxException { var s3ClientBuilder = S3AsyncClient.crtBuilder() .forcePathStyle(true) .region(region) .credentialsProvider(awsCredentialsProvider); if (StringUtils.isNotBlank(maybeS3Uri)) { s3ClientBuilder.endpointOverride(new URI(maybeS3Uri)); } return s3ClientBuilder.build(); } ``` The upload is successful but some extra data seems to be added before and after the original content: original content (csv fetched from an API): `MyData` Content that I get if I fetch it again: ``` 503 MyData 0 x-amz-checksum-crc32:e4f9zQ== ``` Deploying on AWS this extra content doesn't appear so it is my assumption that it is coming from localstack ### Expected Behavior No extra information is added to the S3 file content other than the original data ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker-compose localstack: image: localstack/localstack:3.0.2 ports: - "4566:4566" # Default port forward healthcheck: test: [ "CMD", "awslocal", "s3api", "wait", "bucket-exists", "--bucket", "stored-files" ] interval: 10s timeout: 5s retries: 20 environment: - SERVICES=s3,sts - DEBUG=1 - LAMBDA_REMOTE_DOCKER=0 - DATA_DIR=/tmp/localstack/data - DEFAULT_REGION=eu-west-1 volumes: - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack" - '/var/run/docker.sock:/var/run/docker.sock' - "./data/localstack/init.sh:/etc/localstack/init/ready.d/init.sh" ### Environment ```markdown - OS:macOs 14.1.2 - LocalStack: 3.0.2 ``` ### Anything else? _No response_
https://github.com/localstack/localstack/issues/9989
https://github.com/localstack/localstack/pull/9999
6befc66f0c36da0f78859c531e80759e22c56ff2
124a4e288951a587f581f586f6e8b1e8ab40e438
"2024-01-04T08:41:21Z"
python
"2024-01-05T12:42:56Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,930
["localstack/aws/protocol/serializer.py", "tests/aws/services/s3/test_s3.py", "tests/aws/services/s3/test_s3.validation.json"]
bug: [PERL] Net::Amazon::S3 module can't list buckets or bucket contents while Amazon::S3 can
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Hi! I believe there is compatibility issue between localstack and Perl module wrapping S3 API [Net::Amazon::S3](https://metacpan.org/pod/Net::Amazon::S3). Uploading and downloading file works great, but listing bucket contents always gives empty list (i think also listing all buckets gives empty list but this is not an issue for me, since i'm always using one bucket). Listing files using AWS CLI, python's Boto3 or even old Perl alternative [Amazon::S3](https://metacpan.org/release/BIGFOOT/Amazon-S3-0.59/view/lib/Amazon/S3.pm) works OK for me. ### Expected Behavior Both perl modules should be able to list bucket contents ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### Part of `docker-compose.yml` ``` localstack: container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}" image: localstack/localstack:latest ports: - "127.0.0.1:4566:4566" # LocalStack Gateway - "127.0.0.1:4510-4559:4510-4559" # external services port range environment: # LocalStack configuration: https://docs.localstack.cloud/references/configuration/ - DEBUG=${DEBUG- } networks: - default ``` #### Perl code that prints nothing (i.e. can't see uploaded file) ``` #!/usr/bin/perl use warnings; use Net::Amazon::S3; use Config::IniFiles; my $bucket_name = "myname"; my $s3_endpoint = "localstack:4566"; my $homedir = "/usr/share/httpd"; my $aws_cfg = Config::IniFiles->new(-file => "$homedir/.aws/config"); my $s3 = Net::Amazon::S3->new( { aws_access_key_id => $aws_cfg->val('default', 'aws_access_key_id'), aws_secret_access_key => $aws_cfg->val('default', 'aws_secret_access_key'), retry => 1, host => $s3_endpoint, secure => 0, use_virtual_host => 0, } ); $s3->add_bucket( { bucket => $bucket_name } ) or die $s3->err . ": " . $s3->errstr; my $bucket_obj = $s3->bucket($bucket_name); my $bucket_contents = $bucket_obj->list or die $s3->err . ": " . $s3->errstr; for my $key ( @{ $bucket_contents->{keys} } ) { print "$key->{key}\n"; } ``` #### Perl code that WORKS OK (i.e. prints out uploaded file "key") ``` #!/usr/bin/perl use warnings; use CGI::Carp; use Amazon::S3; use Config::IniFiles; my $bucket_name = "/mybucket"; my $s3_endpoint = "localstack:4566"; my $homedir = "/usr/share/httpd"; my $aws_cfg = Config::IniFiles->new(-file => "$homedir/.aws/config"); my $s3 = Amazon::S3->new( { aws_access_key_id => $aws_cfg->val('default', 'aws_access_key_id'), aws_secret_access_key => $aws_cfg->val('default', 'aws_secret_access_key'), retry => 1, host => $s3_endpoint, secure => 0, } ); $s3->add_bucket( { bucket => $bucket_name } ) or die $s3->err . ": " . $s3->errstr; my $bucket = $s3->bucket( $bucket_name ); my $keyname = 'testing.txt'; my $value = 'T'; $bucket->add_key( $keyname, $value, { content_type => 'text/plain', 'x-amz-meta-colour' => 'orange', } ); my $response = $bucket->list or die $s3->err . ": " . $s3->errstr; print $response->{bucket}."\n"; for my $key (@{ $response->{keys} }) { print "\t".$key->{key}."\n"; } ``` ### Environment ```markdown - host OS: windows + WSL - container OS (where perl is run): CentOS 7.9.2009 - LocalStack: localstack-main | LocalStack version: 3.0.3.dev localstack-main | LocalStack build date: 2023-12-20 localstack-main | LocalStack build git hash: 5c3ef157 ``` ### Anything else? Net::Amazon::S3 way works OK with real AWS S3 (eu-west-1)
https://github.com/localstack/localstack/issues/9930
https://github.com/localstack/localstack/pull/9983
c71816174f2d534f02b9e0a35e0f836d3bd46d9c
3d6ba443c0ef6e05b44cc6dc87526e63532bb6c5
"2023-12-22T11:12:16Z"
python
"2024-01-04T11:15:29Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,912
["localstack/services/s3/resource_providers/aws_s3_bucket.py", "tests/aws/services/cloudformation/resources/test_s3.py", "tests/aws/services/cloudformation/resources/test_s3.snapshot.json", "tests/aws/services/cloudformation/resources/test_s3.validation.json", "tests/aws/services/s3/test_s3.py", "tests/aws/templates/s3_object_lock_config.yaml"]
bug(CloudFormation): ObjectLockConfiguration is not created
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior The bucket is created without object lock configuration ### Expected Behavior The bucket is created with object lock configuration ### How are you starting LocalStack? With the `localstack` script ### Steps To Reproduce Deploy stack: ```bash awslocal cloudformation deploy --stack-name test --template-file test2.yaml Waiting for changeset to be created.. Waiting for stack create/update to complete Successfully created/updated stack - test awslocal s3api list-buckets --no-cli-pager { "Buckets": [ { "Name": "test-bucket", "CreationDate": "2023-12-19T10:03:10+00:00" } ], "Owner": { "DisplayName": "webfile", "ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a" } } ``` Check the Object Lock configuration: ```bash awslocal s3api get-object-lock-configuration --bucket test-bucket An error occurred (ObjectLockConfigurationNotFoundError) when calling the GetObjectLockConfiguration operation: Object Lock configuration does not exist for this bucket ``` Logs: ```bash 2023-12-19 16:43:20 2023-12-19 16:43:20 LocalStack version: 3.0.3.dev 2023-12-19 16:43:20 LocalStack Docker container id: 20c4fda0118d 2023-12-19 16:43:20 LocalStack build date: 2023-12-05 2023-12-19 16:43:20 LocalStack build git hash: c1dcbc50 2023-12-19 16:43:20 2023-12-19 16:43:21 2023-12-19T09:43:21.334 INFO --- [-functhread4] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit) 2023-12-19 16:43:21 2023-12-19T09:43:21.334 INFO --- [-functhread4] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit) 2023-12-19 16:43:21 Ready. 2023-12-19 17:02:52 2023-12-19T10:02:52.302 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.ListBuckets => 200 2023-12-19 17:03:02 2023-12-19T10:03:02.452 INFO --- [ asgi_gw_0] localstack.request.aws : AWS cloudformation.ListStacks => 200 2023-12-19 17:03:09 2023-12-19T10:03:09.913 INFO --- [ asgi_gw_0] localstack.request.aws : AWS cloudformation.DescribeStacks => 400 (ValidationError) 2023-12-19 17:03:09 2023-12-19T10:03:09.928 INFO --- [ asgi_gw_0] localstack.request.aws : AWS cloudformation.CreateChangeSet => 200 2023-12-19 17:03:09 2023-12-19T10:03:09.937 INFO --- [ asgi_gw_0] localstack.request.aws : AWS cloudformation.DescribeChangeSet => 200 2023-12-19 17:03:09 2023-12-19T10:03:09.946 INFO --- [ asgi_gw_0] localstack.request.aws : AWS cloudformation.ExecuteChangeSet => 200 2023-12-19 17:03:09 2023-12-19T10:03:09.953 INFO --- [ asgi_gw_0] localstack.request.aws : AWS cloudformation.DescribeStacks => 200 2023-12-19 17:03:39 2023-12-19T10:03:39.969 INFO --- [ asgi_gw_0] localstack.request.aws : AWS cloudformation.DescribeStacks => 200 2023-12-19 17:03:51 2023-12-19T10:03:51.330 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.ListBuckets => 200 2023-12-19 17:03:59 2023-12-19T10:03:59.264 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.GetObjectLockConfiguration => 404 (ObjectLockConfigurationNotFoundError) ``` Template: ```yaml AWSTemplateFormatVersion: "2010-09-09" Resources: MyMediaBucket: Type: AWS::S3::Bucket Properties: BucketName: test-bucket ObjectLockEnabled: true ObjectLockConfiguration: ObjectLockEnabled: "Enabled" Rule: DefaultRetention: Mode: "GOVERNANCE" Days: 2 ``` ### Environment ```markdown - OS: mac m1 - LocalStack: 3.0.3.dev - localstack cli: 2.3.2 ``` ### Anything else? _No response_
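For reference, the same check the `awslocal s3api get-object-lock-configuration` call performs, as a small boto3 sketch (endpoint and credentials are assumptions; the bucket name comes from the template above). On a correctly deployed stack this should print the GOVERNANCE/2-day rule instead of hitting the error branch:

```python
import boto3
from botocore.exceptions import ClientError

# Assumed LocalStack endpoint and dummy credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

try:
    cfg = s3.get_object_lock_configuration(Bucket="test-bucket")
    # Expected, per the template:
    # {'ObjectLockEnabled': 'Enabled',
    #  'Rule': {'DefaultRetention': {'Mode': 'GOVERNANCE', 'Days': 2}}}
    print(cfg["ObjectLockConfiguration"])
except ClientError as err:
    # This branch is the failure mode described above.
    print("no object lock configuration:", err.response["Error"]["Code"])
```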
https://github.com/localstack/localstack/issues/9912
https://github.com/localstack/localstack/pull/10070
e96821cd8e0a49277dc482943448aebe1282334a
cabd9a4a9c57913efba39483548c9c6734d9d15e
"2023-12-19T10:08:55Z"
python
"2024-01-22T16:18:19Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,882
["localstack/services/stepfunctions/asl/component/state/state_execution/state_map/iteration/itemprocessor/item_processor_decl.py", "localstack/services/stepfunctions/asl/parse/preprocessor.py", "tests/aws/services/stepfunctions/templates/scenarios/scenarios_templates.py", "tests/aws/services/stepfunctions/templates/scenarios/statemachines/map_state_no_processor_config.json5", "tests/aws/services/stepfunctions/v2/scenarios/test_base_scenarios.py", "tests/aws/services/stepfunctions/v2/scenarios/test_base_scenarios.snapshot.json"]
bug: Step Functions Map ProcessorConfig should be optional
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I'm trying to create a state machine using the following definition that I validated on AWS: ```json { "StartAt": "SetupVariables", "States": { "SetupVariables": { "Type": "Pass", "Parameters": { "items": [] }, "Next": "Map" }, "Map": { "Type": "Map", "ItemsPath": "$.items", "ItemProcessor": { "StartAt": "Pass", "States": { "Pass": { "Type": "Pass", "End": true } } }, "End": true } } } ``` I get the following error: ``` │ Error: updating Step Functions State Machine (arn:aws:states:eu-west-3:000000000000:stateMachine:my-state-machine): InvalidDefinition: Error=ValueError Args=['Expected a ProcessorConfig declaration at \'"ItemProcessor":{"StartAt":"Pass","States":{"Pass":{"Type":"Pass","End":true}}}\'.'] in definition '{ │ "StartAt": "SetupVariables", │ "States": { │ "SetupVariables": { │ "Type": "Pass", │ "Parameters": { │ "items": [] │ }, │ "Next": "Map" │ }, │ "Map": { │ "Type": "Map", │ "ItemsPath": "$.items", │ "ItemProcessor": { │ "StartAt": "Pass", │ "States": { │ "Pass": { │ "Type": "Pass", │ "End": true │ } │ } │ }, │ "End": true │ } │ } │ } │ '. │ │ with aws_sfn_state_machine.main, │ on stepFunction.tf line 1, in resource "aws_sfn_state_machine" "main": │ 1: resource "aws_sfn_state_machine" "main" { ``` On AWS the ProcessorConfig field is optional. [The AWS documentation](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-asl-use-map-state-inline.html) specifies that this field is optional and provides [an example omitting it](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-asl-use-map-state-inline.html#inline-map-state-example-params). When using the AWS workflow editor I get an error saying "Map (ItemProcessor.ProcessorConfig): Value is mandatory." but the creation still succeeds. ### Expected Behavior The definition should be considered valid and the state machine should be created. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce Create a state machine using the provided definition. ### Environment ```markdown - LocalStack: 3.0.3.dev20231207105008 ``` ### Anything else? _No response_
https://github.com/localstack/localstack/issues/9882
https://github.com/localstack/localstack/pull/9888
3ed14e15e470ec9581aa59690bd34e7939b9244f
b03dc545da92c9b0e2ac3a36a22650c53fb05edc
"2023-12-15T12:35:05Z"
python
"2023-12-20T08:40:26Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,881
["localstack/services/stepfunctions/asl/component/state/state_execution/state_parallel/branches_decl.py", "tests/aws/services/stepfunctions/templates/scenarios/scenarios_templates.py", "tests/aws/services/stepfunctions/templates/scenarios/statemachines/parallel_state_order.json5", "tests/aws/services/stepfunctions/v2/scenarios/test_base_scenarios.py", "tests/aws/services/stepfunctions/v2/scenarios/test_base_scenarios.snapshot.json"]
bug: Step Functions Parallel results are passed in wrong order
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I'm executing a state machine using the following definition: ```json { "StartAt": "Parallel", "States": { "Parallel": { "Type": "Parallel", "Branches": [ { "StartAt": "BranchA", "States": { "BranchA": { "Type": "Pass", "Result": { "branch": "A" }, "End": true } } }, { "StartAt": "BranchB", "States": { "BranchB": { "Type": "Pass", "Result": { "branch": "B" }, "End": true } } } ], "End": true } } } ``` I get the following output: ```json [ { "branch": "B" }, { "branch": "A" } ] ``` The results are not in the correct order. The result from branch A should always be first even when the branch A is slower, and the result from branch B should always be second. ### Expected Behavior The output should be: ```json [ { "branch": "A" }, { "branch": "B" } ] ``` ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce Create and execute a state machine using the provided definition. ### Environment ```markdown - LocalStack: 3.0.3.dev20231207105008 ``` ### Anything else? _No response_
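A boto3 sketch of the same check (endpoint, credentials, and the IAM role ARN are assumptions; LocalStack typically does not validate the role): create a machine from the definition above, run it, and inspect the output order.

```python
import json
import time
import boto3

# Assumed LocalStack endpoint, dummy credentials, and a placeholder role ARN.
sfn = boto3.client(
    "stepfunctions",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

definition = {
    "StartAt": "Parallel",
    "States": {
        "Parallel": {
            "Type": "Parallel",
            "Branches": [
                {"StartAt": "BranchA", "States": {"BranchA": {"Type": "Pass", "Result": {"branch": "A"}, "End": True}}},
                {"StartAt": "BranchB", "States": {"BranchB": {"Type": "Pass", "Result": {"branch": "B"}, "End": True}}},
            ],
            "End": True,
        }
    },
}

sm_arn = sfn.create_state_machine(
    name="parallel-order-check",  # hypothetical name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::000000000000:role/sfn-role",  # placeholder, not enforced locally
)["stateMachineArn"]

exec_arn = sfn.start_execution(stateMachineArn=sm_arn)["executionArn"]
result = sfn.describe_execution(executionArn=exec_arn)
while result["status"] == "RUNNING":
    time.sleep(0.5)
    result = sfn.describe_execution(executionArn=exec_arn)

# Parallel output must follow branch order, not completion order.
print(result["status"], json.loads(result.get("output", "null")))
# Expected: SUCCEEDED [{'branch': 'A'}, {'branch': 'B'}]
```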
https://github.com/localstack/localstack/issues/9881
https://github.com/localstack/localstack/pull/9889
6e45bcb6258916a4481fc474bdeba77de2606e64
3ed14e15e470ec9581aa59690bd34e7939b9244f
"2023-12-15T11:56:46Z"
python
"2023-12-20T07:38:57Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,872
["localstack/services/stepfunctions/asl/component/state/state_execution/state_task/service/state_task_service.py", "tests/aws/services/stepfunctions/templates/services/services_templates.py", "tests/aws/services/stepfunctions/templates/services/statemachines/aws_sdk_sfn_start_execution_implicit_json_serialisation.json5", "tests/aws/services/stepfunctions/v2/services/test_aws_sdk_task_service.py", "tests/aws/services/stepfunctions/v2/services/test_aws_sdk_task_service.snapshot.json", "tests/aws/services/stepfunctions/v2/services/test_aws_sdk_task_service.validation.json"]
bug: Step Functions aws-sdk:sfn:startExecution requires manual JSON conversion
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I'm executing a state machine using the following definition: _(It's not a valid definition on AWS because the Parameters keys in the state StartTarget start with a lowercase for compatibility with LocalStack, see [9853](https://github.com/localstack/localstack/issues/9853))_ ```json { "StartAt": "SetupVariables", "States": { "SetupVariables": { "Type": "Pass", "Next": "StartTarget", "Parameters": { "input": { "key": "value" } } }, "StartTarget": { "Type": "Task", "Resource": "arn:aws:states:::aws-sdk:sfn:startExecution", "Parameters": { "stateMachineArn": "arn:aws:states:eu-west-3:000000000000:stateMachine:my-state-machine", "input.$": "$.input" }, "End": true } } } ``` It fails with the following error: ```json { "error": "States.Runtime", "cause": "UnknownServiceError(Unknown service: 'sfn'. Valid service names are: accessanalyzer, account, acm, acm-pca, alexaforbusiness, amp, amplify, amplifybackend, amplifyuibuilder, apigateway, apigatewaymanagementapi, apigatewayv2, appconfig, appconfigdata, appfabric, appflow, appintegrations, application-autoscaling, application-insights, applicationcostprofiler, appmesh, apprunner, appstream, appsync, arc-zonal-shift, athena, auditmanager, autoscaling, autoscaling-plans, backup, backup-gateway, backupstorage, batch, bedrock, bedrock-runtime, billingconductor, braket, budgets, ce, chime, chime-sdk-identity, chime-sdk-media-pipelines, chime-sdk-meetings, chime-sdk-messaging, chime-sdk-voice, cleanrooms, cloud9, cloudcontrol, clouddirectory, cloudformation, cloudfront, cloudfront-keyvaluestore, cloudhsm, cloudhsmv2, cloudsearch, cloudsearchdomain, cloudtrail, cloudtrail-data, cloudwatch, codeartifact, codebuild, codecatalyst, codecommit, codedeploy, codeguru-reviewer, codeguru-security, codeguruprofiler, codepipeline, codestar, codestar-connections, codestar-notifications, cognito-identity, cognito-idp, cognito-sync, comprehend, comprehendmedical, compute-optimizer, config, connect, connect-contact-lens, connectcampaigns, connectcases, connectparticipant, controltower, cur, customer-profiles, databrew, dataexchange, datapipeline, datasync, datazone, dax, detective, devicefarm, devops-guru, directconnect, discovery, dlm, dms, docdb, docdb-elastic, drs, ds, dynamodb, dynamodbstreams, ebs, ec2, ec2-instance-connect, ecr, ecr-public, ecs, efs, eks, elastic-inference, elasticache, elasticbeanstalk, elastictranscoder, elb, elbv2, emr, emr-containers, emr-serverless, entityresolution, es, events, evidently, finspace, finspace-data, firehose, fis, fms, forecast, forecastquery, frauddetector, fsx, gamelift, glacier, globalaccelerator, glue, grafana, greengrass, greengrassv2, groundstation, guardduty, health, healthlake, honeycode, iam, identitystore, imagebuilder, importexport, inspector, inspector-scan, inspector2, internetmonitor, iot, iot-data, iot-jobs-data, iot-roborunner, iot1click-devices, iot1click-projects, iotanalytics, iotdeviceadvisor, iotevents, iotevents-data, iotfleethub, iotfleetwise, iotsecuretunneling, iotsitewise, iotthingsgraph, iottwinmaker, iotwireless, ivs, ivs-realtime, ivschat, kafka, kafkaconnect, kendra, kendra-ranking, keyspaces, kinesis, kinesis-video-archived-media, kinesis-video-media, kinesis-video-signaling, kinesis-video-webrtc-storage, kinesisanalytics, kinesisanalyticsv2, kinesisvideo, kms, lakeformation, lambda, launch-wizard, lex-models, lex-runtime, lexv2-models, lexv2-runtime, license-manager, 
license-manager-linux-subscriptions, license-manager-user-subscriptions, lightsail, location, logs, lookoutequipment, lookoutmetrics, lookoutvision, m2, machinelearning, macie2, managedblockchain, managedblockchain-query, marketplace-catalog, marketplace-entitlement, marketplacecommerceanalytics, mediaconnect, mediaconvert, medialive, mediapackage, mediapackage-vod, mediapackagev2, mediastore, mediastore-data, mediatailor, medical-imaging, memorydb, meteringmarketplace, mgh, mgn, migration-hub-refactor-spaces, migrationhub-config, migrationhuborchestrator, migrationhubstrategy, mobile, mq, mturk, mwaa, neptune, neptunedata, network-firewall, networkmanager, nimble, oam, omics, opensearch, opensearchserverless, opsworks, opsworkscm, organizations, osis, outposts, panorama, payment-cryptography, payment-cryptography-data, pca-connector-ad, personalize, personalize-events, personalize-runtime, pi, pinpoint, pinpoint-email, pinpoint-sms-voice, pinpoint-sms-voice-v2, pipes, polly, pricing, privatenetworks, proton, qldb, qldb-session, quicksight, ram, rbin, rds, rds-data, redshift, redshift-data, redshift-serverless, rekognition, resiliencehub, resource-explorer-2, resource-groups, resourcegroupstaggingapi, robomaker, rolesanywhere, route53, route53-recovery-cluster, route53-recovery-control-config, route53-recovery-readiness, route53domains, route53resolver, rum, s3, s3control, s3outposts, sagemaker, sagemaker-a2i-runtime, sagemaker-edge, sagemaker-featurestore-runtime, sagemaker-geospatial, sagemaker-metrics, sagemaker-runtime, savingsplans, scheduler, schemas, sdb, secretsmanager, securityhub, securitylake, serverlessrepo, service-quotas, servicecatalog, servicecatalog-appregistry, servicediscovery, ses, sesv2, shield, signer, simspaceweaver, sms, sms-voice, snow-device-management, snowball, sns, sqs, sqs-query, ssm, ssm-contacts, ssm-incidents, ssm-sap, sso, sso-admin, sso-oidc, stepfunctions, storagegateway, sts, support, support-app, swf, synthetics, textract, timestream-query, timestream-write, tnb, transcribe, transfer, translate, trustedadvisor, verifiedpermissions, voice-id, vpc-lattice, waf, waf-regional, wafv2, wellarchitected, wisdom, workdocs, worklink, workmail, workmailmessageflow, workspaces, workspaces-web, xray)" } ``` The error above states that sfn is not a valid service name. On the other hand it lists stepfunctions as valid. So I updated the provided definition to use stepfunctions instead of sfn. I then get the following error: ```json { "error": "SFN.InvalidExecutionInputException", "cause": "the JSON object must be str, bytes or bytearray, not dict (Service: SFN, Status Code: 400, Request ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)" } ``` It seems the input was not automatically converted to string. I changed the definition to pass `States.JsonToString($.input)` as input to StartTarget and it now works with both aws-sdk:sfn:startExecution and aws-sdk:stepfunctions:startExecution. I think there are three issues here: - The input of aws-sdk:sfn:startExecution is not automatically converted to string. - When using aws-sdk:sfn:startExecution the returned error is incorrect. sfn is a valid service name when using aws-sdk on LocalStack Step Functions. - stepfunctions is not a valid service name when using aws-sdk on AWS Step Functions. ### Expected Behavior The input of aws-sdk:sfn:startExecution should be automatically converted to string. sfn should appear in the list of valid service names for aws-sdk on Step Functions. 
stepfunctions should not be a valid service name for aws-sdk on Step Functions. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce 1. Make sure the target state machine exists, in the provided definition it is arn:aws:states:eu-west-3:000000000000:stateMachine:my-state-machine 2. Create and execute a state machine using the provided definition. 3. Apply the updates explained above to see the second error and the state machine finally working. ### Environment ```markdown - LocalStack: 3.0.3.dev20231207105008 ``` ### Anything else? I doubt this issue is only related to aws-sdk:sfn:startExecution. I had a similar issue with aws-sdk:sqs:sendMessage. I think there might be a general issue with automatic Parameters conversion when using aws-sdk on LocalStack Step Functions.
https://github.com/localstack/localstack/issues/9872
https://github.com/localstack/localstack/pull/10174
6880b984b562c132131e6f3ca2e8bec46d2b5a95
4e48a876f514b2b7d00b4e13ba9df7c5bda5a9e5
"2023-12-14T13:17:33Z"
python
"2024-02-19T17:02:13Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,863
["localstack/services/stepfunctions/asl/component/state/state_execution/state_parallel/branch_worker.py", "localstack/services/stepfunctions/asl/component/state/state_execution/state_parallel/branches_decl.py", "localstack/services/stepfunctions/asl/component/state/state_execution/state_parallel/state_parallel.py", "localstack/services/stepfunctions/asl/eval/program_worker.py", "tests/aws/services/stepfunctions/templates/scenarios/scenarios_templates.py", "tests/aws/services/stepfunctions/templates/scenarios/statemachines/parallel_state_catch.json5", "tests/aws/services/stepfunctions/templates/scenarios/statemachines/parallel_state_fail.json5", "tests/aws/services/stepfunctions/templates/scenarios/statemachines/parallel_state_retry.json5", "tests/aws/services/stepfunctions/v2/scenarios/test_base_scenarios.py", "tests/aws/services/stepfunctions/v2/scenarios/test_base_scenarios.snapshot.json"]
bug: Step Functions error not propagated outside of Parallel state
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I'm executing a state machine using the following definition: ```json { "StartAt": "Parallel", "States": { "Parallel": { "Type": "Parallel", "Next": "Unreachable", "Branches": [ { "StartAt": "Fail", "States": { "Fail": { "Type": "Fail" } } } ] }, "Unreachable": { "Type": "Pass", "Parameters": { "UnreachableOutput": true }, "End": true } } } ``` The execution is a success with the following output: ```json { "UnreachableOutput": true } ``` ### Expected Behavior The execution should fail. On AWS executing a state machine using the provided definition results in an error. The execution should fail before reaching the state named `Unreachable` because an error occurred in one of the branches of the preceding Parallel state. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce Create and execute a state machine using the provided definition. ### Environment ```markdown - LocalStack: 3.0.3.dev20231207105008 ``` ### Anything else? _No response_
https://github.com/localstack/localstack/issues/9863
https://github.com/localstack/localstack/pull/9891
ce5fe90919f92d60a141793afa8de7d14f418d58
bd1f2b3b9798ad9181a5ccb1eb9fa501afb29867
"2023-12-13T02:37:00Z"
python
"2023-12-23T07:18:54Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,854
["setup.cfg"]
EC2.describeInstanceTypes outputs invalid data for EnaSupport
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior `EC2.describeInstanceTypes` outputs the value `False` for `InstanceTypes[].NetworkInfo.EnaSupport` in the NetworkInfo type. ### Expected Behavior It should return one of "required", "supported", or "unsupported". See the AWS reference documentation: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_NetworkInfo.html ### How are you starting LocalStack? With a `docker run` command ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run localstack/localstack #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) awslocal ec2 describe-instance-types ### Environment ```markdown - OS: macOS 14.1.1 - LocalStack: 3.0.2.dev ``` ### Anything else? _No response_
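A boto3 sketch of the check implied here (endpoint and credentials assumed): page through describe_instance_types and flag any `NetworkInfo.EnaSupport` value outside the documented enum.

```python
import boto3

# Assumed LocalStack endpoint and dummy credentials.
ec2 = boto3.client(
    "ec2",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

valid = {"required", "supported", "unsupported"}
bad = []
for page in ec2.get_paginator("describe_instance_types").paginate():
    for itype in page["InstanceTypes"]:
        ena = itype.get("NetworkInfo", {}).get("EnaSupport")
        if ena not in valid:
            bad.append((itype["InstanceType"], ena))

# Against AWS this list stays empty; on the affected LocalStack build it collects
# instance types whose EnaSupport is not one of the three documented strings.
print(f"{len(bad)} offending entries", bad[:5])
```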
https://github.com/localstack/localstack/issues/9854
https://github.com/localstack/localstack/pull/10017
19f00301506cc9c1dc08c7826a780c20d4d97d28
4407db9bc9373103950f8c364669e95b26ccecdd
"2023-12-12T11:55:53Z"
python
"2024-01-10T10:26:16Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,837
["localstack/aws/protocol/parser.py", "tests/aws/services/s3/test_s3.py", "tests/aws/services/s3/test_s3.snapshot.json"]
bug: S3 listObjects and folder names containing a space
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When a folder name contains a space, `listObjects` returns the key without the trailing `/`. Without a space it works fine. ### Expected Behavior Folder keys containing a space in the name are returned with the trailing `/`. ### How are you starting LocalStack? With a `docker run` command ### Steps To Reproduce awslocal s3api put-object --bucket bucket-name --key "folder name/" awslocal s3api list-objects --bucket bucket-name ``` { "Contents": [ { "Key": "folder name", "LastModified": "2023-12-11T10:15:20.000Z", "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"", "Size": 0, "StorageClass": "STANDARD", "Owner": { "DisplayName": "webfile", "ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a" } } ], "RequestCharged": null } ``` ### Environment ```markdown - OS: Linux Mint 21.2 - LocalStack: localstack/localstack:3.0 ``` ### Anything else? _No response_
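The awslocal steps above as a minimal boto3 reproduction (endpoint and credentials assumed; the bucket name matches the commands):

```python
import boto3

# Assumed LocalStack endpoint and dummy credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

bucket = "bucket-name"
s3.create_bucket(Bucket=bucket)
s3.put_object(Bucket=bucket, Key="folder name/")   # key with a space
s3.put_object(Bucket=bucket, Key="folder-name/")   # control key without a space

keys = [obj["Key"] for obj in s3.list_objects_v2(Bucket=bucket)["Contents"]]
print(keys)
# Expected: ['folder name/', 'folder-name/']
# Observed per the report: the space-containing key comes back without its trailing '/'
```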
https://github.com/localstack/localstack/issues/9837
https://github.com/localstack/localstack/pull/9856
7488c721dea9b3352966a5cf7e80e2c8075b4a66
e0bbe20f9ce9f7dc7ff2cdb25f541ac706685638
"2023-12-11T10:30:45Z"
python
"2023-12-12T21:22:02Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,826
["localstack/services/cloudformation/resource_provider.py", "localstack/services/sqs/resource_providers/aws_sqs_queue.py", "tests/aws/services/cloudformation/resources/test_sqs.py", "tests/aws/services/cloudformation/resources/test_sqs.snapshot.json", "tests/aws/templates/sqs_queue_update_no_change.yml"]
bug: SQS queues cause deployment failures when using IaC (AWS CDK)
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior After the initial `cdklocal deploy` run, after any change has happened, Localstack will not be able to locate the created SQS queue and will delete it. ![Screenshot 2023-12-08 at 10 16 15](https://github.com/localstack/localstack/assets/16351444/f7a29aa3-43f9-41b2-bba1-c71b1fbc2d8b) ### Expected Behavior No error when adding changes after having provisioned SQS queue ### How are you starting LocalStack? With the `localstack` script ### Steps To Reproduce How to replicate: 1. In an empty folder initialise a new CDK project using `cdklocal init -l python` 2. Substitute the `..._stack.py` file with. ``` from aws_cdk import Duration, Stack, aws_sqs as sqs, aws_s3 as s3 from constructs import Construct class LocalstackTestStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) # The code that defines your stack goes here # example resource queue = sqs.Queue( self, "LocalstackTestQueue", visibility_timeout=Duration.seconds(300), ) ``` 3. Run `cdklocal bootstrap`and `cdklocal deploy`. 4. Make any "significant" change in ..._stack.py file. For example add a bucket. ``` from aws_cdk import Duration, Stack, aws_sqs as sqs, aws_s3 as s3 from constructs import Construct class LocalstackTestStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) # The code that defines your stack goes here # example resource queue = sqs.Queue( self, "LocalstackTestQueue", visibility_timeout=Duration.seconds(300), ) self.input_bucket = s3.Bucket( self, "input_bucket", block_public_access=s3.BlockPublicAccess.BLOCK_ALL, ) ``` 5. Run `cdklocal deploy`. This will throw the error. ### Environment ```markdown - OS: macOS Sonoma Version 14.1.1 - LocalStack: > cdklocal --version 2.114.1 (build 02bbb1d) >localstack --version 3.0.2 ``` ``` ### Anything else? ![Screenshot 2023-12-08 at 10 21 44](https://github.com/localstack/localstack/assets/16351444/e9960a4e-6c39-4827-9550-b71606125a77) ![Screenshot 2023-12-08 at 10 21 59](https://github.com/localstack/localstack/assets/16351444/7f2e67a1-4d27-482f-8dae-8263f8e7d63d) `localstack config show` output
https://github.com/localstack/localstack/issues/9826
https://github.com/localstack/localstack/pull/9831
34862f27310407f63e7728f5894571b8199913a5
7488c721dea9b3352966a5cf7e80e2c8075b4a66
"2023-12-08T12:50:31Z"
python
"2023-12-12T15:16:21Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,812
["localstack/services/transcribe/provider.py", "tests/aws/files/en-us_video.mkv", "tests/aws/files/en-us_video.mp4", "tests/aws/services/transcribe/test_transcribe.py"]
Transcribe service always fails with sample_rate error for video codecs
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Every call to the transcribe service starts a job, but then always fails with a `sample_rate` error. ### Expected Behavior The job should be successful and the transcription should be returned. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce Docker compose: ```yml localstack: image: localstack/localstack-pro restart: always profiles: ["local-not-yet"] # TODO: Rename it back to "local" for the final stage of adding localstack # The localstack documentation (under GATEWAY_LISTEN: https://docs.localstack.cloud/references/configuration/) are not 100 great for macOS. # If we were to follow it we would have used: 127.0.0.1:4566:4566 # The problem is IPv6 that could apply here and LocalSTack does not support it: https://docs.localstack.cloud/references/network-troubleshooting/ # So we're defining the ports WITHOUT the localhost address ports: - "4566:4566" # LocalStack Gateway - "4510-4559:4510-4559" # external services port range - "53:53" # DNS config (required for Pro) - https://docs.localstack.cloud/user-guide/tools/transparent-endpoint-injection/ - "53:53/udp" # DNS config (required for Pro) - https://docs.localstack.cloud/user-guide/tools/transparent-endpoint-injection/ - "443:443" # LocalStack HTTPS Gateway (required for Pro) environment: # LS_LOG: "trace" # Uncomment for getting much more debug logs from LocalStack DEBUG: 1 # Change to 0 if you would like to have less noise in the LocalStack container's logs LOCALSTACK_AUTH_TOKEN: ${LOCALSTACK_AUTH_TOKEN} # Add it to your .env file at the root of the project PERSISTENCE: 1 ENFORCE_IAM: 0 # https://docs.localstack.cloud/references/configuration/#iam EXTRA_CORS_ALLOWED_ORIGINS: "*" # https://docs.localstack.cloud/references/configuration/#security DOCKER_HOST: unix:///var/run/docker.sock DNS_LOCAL_NAME_PATTERNS: ".*(secretsmanager).*.amazonaws.com" # The services that we would like their requests be redirected to AWS instead of Localstack. This is done via https://docs.localstack.cloud/user-guide/tools/dns-server/#skip-localstack-dns-resolution DNS_SERVER: 8.8.8.8 # DNS fallback server, all the non AWS requests will be forwarded to it DNS_ADDRESS: 127.0.0.1 # The LocalStack local DNS server address volumes: - "${LOCALSTACK_VOLUME_DIR:-./.localstack-volume}:/var/lib/localstack" - "/var/run/docker.sock:/var/run/docker.sock" # Enable to use services that use external docker images, like Lambda functions ``` Steps: SInce transcribe is setup by default, all you need to do is call it with an S3 `mp4` file: ```js import { CreateVocabularyCommand, GetTranscriptionJobCommand, GetVocabularyCommand, StartTranscriptionJobCommand, TranscribeClient, UpdateVocabularyCommand } from "@aws-sdk/client-transcribe"; this.transcribeClient = new TranscribeClient({ region: "us-east-1" }); const transcriptJob = await this.transcribeClient.send( new StartTranscriptionJobCommand({ TranscriptionJobName: "testName", LanguageCode: "en-US", MediaFormat: "mp4", Media: { MediaFileUri: <s3 URI> }, OutputBucketName: <your value>, OutputKey: <your value>, Subtitles: { Formats: ["srt", "vtt"] }, }) ``` Then checking to see the status: ```js const transcriptionJob = await this.transcribeClient.send( new GetTranscriptionJobCommand({ TranscriptionJobName: "testName" }) ); ``` ### Environment ```markdown - OS: macOS 14.1.2 - LocalStack: Latest using docker-compose ``` ### Anything else? 
I verified with the team that this is indeed a bug: https://localstack-community.slack.com/archives/CMAFN2KSP/p1701843995174299?thread_ts=1701777783.269879&cid=CMAFN2KSP
https://github.com/localstack/localstack/issues/9812
https://github.com/localstack/localstack/pull/9898
c618ac007e0879e6e5a5f803acc64bb26d748f84
ef6d7aca0d4fd72106c5a5df3b56997340da04e0
"2023-12-06T07:23:02Z"
python
"2023-12-19T05:26:02Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,771
["localstack/services/stepfunctions/asl/antlr/ASLIntrinsicLexer.g4", "localstack/services/stepfunctions/asl/antlr/runtime/ASLIntrinsicLexer.interp", "localstack/services/stepfunctions/asl/antlr/runtime/ASLIntrinsicLexer.py", "localstack/services/stepfunctions/asl/parse/intrinsic/preprocessor.py", "tests/aws/services/stepfunctions/templates/intrinsicfunctions/intrinsic_functions_templates.py", "tests/aws/services/stepfunctions/templates/intrinsicfunctions/statemachines/generic/escape_sequence.json5", "tests/aws/services/stepfunctions/templates/intrinsicfunctions/statemachines/generic/nested_calls_1.json5", "tests/aws/services/stepfunctions/templates/intrinsicfunctions/statemachines/generic/nested_calls_2.json5", "tests/aws/services/stepfunctions/v2/intrinsic_functions/test_generic.py", "tests/aws/services/stepfunctions/v2/intrinsic_functions/test_generic.snapshot.json"]
bug: InvalidDefinition error when creating/updating State Machine
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I'm trying to create a state machine using the following definition that I validated on AWS: ```json { "StartAt": "GetSuffix", "States": { "GetSuffix": { "Type": "Pass", "Parameters": { "suffix.$": "States.ArrayGetItem(States.StringSplit($$.StateMachine.Name, '-'), States.MathAdd(States.ArrayLength(States.StringSplit($$.StateMachine.Name, '-')), -1))" }, "End": true } } } ``` It fails with the following error: ``` │ Error: updating Step Functions State Machine (arn:aws:states:eu-west-3:000000000000:stateMachine:my-state-machine): InvalidDefinition: Error=ValueError Args=["Expected 2 arguments for function type '<class 'localstack.services.stepfunctions.asl.component.intrinsic.function.statesfunction.string_operations.string_split.StringSplit'>', but got: '(FunctionArgumentList| {'arg_list': [], 'size': 0}'."] in definition '{ │ "StartAt": "GetSuffix", │ "States": { │ "GetSuffix": { │ "Type": "Pass", │ "Parameters": { │ "suffix.$": "States.ArrayGetItem(States.StringSplit($$.StateMachine.Name, '-'), States.MathAdd(States.ArrayLength(States.StringSplit($$.StateMachine.Name, '-')), -1))" │ }, │ "End": true │ } │ } │ } │ '. │ │ with aws_sfn_state_machine.main, │ on stepFunction.tf line 1, in resource "aws_sfn_state_machine" "main": │ 1: resource "aws_sfn_state_machine" "main" { ``` Based on my tests there seems to be 2 issues: - Using a variable starting with `$$` as a function argument is not supported. - Passing the result of a function directly as another function argument is not supported. ### Expected Behavior The definition should be considered valid and the state machine should be created. ### How are you starting LocalStack? With the `localstack` script ### Steps To Reproduce Create a state machine using the provided definition. ### Environment ```markdown - LocalStack: 2.3.0 - PROVIDER_OVERRIDE_STEPFUNCTIONS=v2 - Terraform v1.6.2 ``` ### Anything else? _No response_
https://github.com/localstack/localstack/issues/9771
https://github.com/localstack/localstack/pull/9783
b4723cecad7fa4bd2f4e2c896cb0d3d62f017db0
c3d24de417faa57a2f525b7aad76b133589141e3
"2023-11-29T17:44:15Z"
python
"2023-12-01T18:12:23Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,753
["localstack/services/sqs/models.py"]
Bring back SQS queue "dynamic" format
### Is there an existing issue for this? - [X] I have searched the existing issues ### Enhancement description Before 3.x, SQS would use the hostname and port of the incoming request to determine the URL to return, so the URL would work whether it was queried from the host or from inside the docker (compose) network. This was a critical and incredibly useful feature for us. While we have tried to not depend on it by providing environment variables that the users of our tool should use, some teams depend on third-party libraries that query the URL and won't have the capacity to replace them. Additionally, we have the requirement of running with published random ports, and there are use cases that get the URL both from the host and the docker network. While we will be looking into workarounds and trying to find a way to still upgrade (since the [AWS SDK compatibility fix](https://github.com/localstack/localstack/issues/8267) is also critical), I don't think it's possible to have something that works in both use cases (host and network). Honestly, I think it was awesome that LocalStack did this and I would have kept it as the default, so I hope you consider bringing it back, maybe as a `SQS_ENDPOINT_STRATEGY` option. ### 🧑‍💻 Implementation _No response_ ### Anything else? https://localstack-community.slack.com/archives/CMAFN2KSP/p1700665707589479
https://github.com/localstack/localstack/issues/9753
https://github.com/localstack/localstack/pull/10135
45d39edbdc27c00d1f762c40e3c050d5b1b36e39
f589a983cffdb3a132b3f76896b62264f833d3a6
"2023-11-28T12:02:38Z"
python
"2024-01-30T12:29:41Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,742
["localstack/services/stepfunctions/asl/component/intrinsic/function/statesfunction/string_operations/string_split.py", "tests/aws/services/stepfunctions/templates/intrinsicfunctions/intrinsic_functions_templates.py", "tests/aws/services/stepfunctions/templates/intrinsicfunctions/statemachines/string_operations/string_split_context_object.json5", "tests/aws/services/stepfunctions/v2/intrinsic_functions/test_string_operations.py", "tests/aws/services/stepfunctions/v2/intrinsic_functions/test_string_operations.snapshot.json"]
bug: SFN - `States.StringSplit` errors when the same value in AWS succeeds
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When using the `States.StringSplit` intrinsic function for AWS Step Functions, I get the following error: ``` │ Error: creating Step Functions State Machine (repro): InvalidDefinition: Error=ValueError Args=["Expected 2 arguments for function type '<class 'localstack.services.stepfunctions.asl.component.intrinsic.function.statesfunction.string_operations.string_split.StringSplit'>', but got: '(FunctionArgumentList| {'arg_list': [], 'size': 0}'."] in definition '{ │ "Comment": "States.StringSplit error", │ "StartAt": "WillStatesStringSplitError", │ "States": { │ "WillStatesStringSplitError": { │ "Type": "Task", │ "Resource": "arn:aws:states:::lambda:invoke", │ "End": true, │ "Parameters": { │ "FunctionName": "SomeFunction", │ "Payload": { │ "inputArray.$": "States.StringSplit($$.Execution.Input.csv, ',')" │ } │ } │ } │ } │ } │ '. │ │ with module.step_function.aws_sfn_state_machine.this[0], │ on .terraform/modules/step_function/main.tf line 15, in resource "aws_sfn_state_machine" "this": │ 15: resource "aws_sfn_state_machine" "this" { ``` See repro for state machine definition that produces this error. The same repro works in AWS. ### Expected Behavior I expect the step function to behave as intended and to split the CSV into an array. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce I'm using docker-compose with the following step function definition: ```json { "Comment": "States.StringSplit error", "StartAt": "WillStatesStringSplitError", "States": { "WillStatesStringSplitError": { "Type": "Task", "Resource": "arn:aws:states:::lambda:invoke", "End": true, "Parameters": { "FunctionName": "SomeFunction", "Payload": { "inputArray.$": "States.StringSplit($$.Execution.Input.csv, ',')" } } } } } ``` ### Environment ```markdown - OS: MacOS Ventura - LocalStack: latest ``` ### Anything else? Thank you for all your hard work!
https://github.com/localstack/localstack/issues/9742
https://github.com/localstack/localstack/pull/9749
3d519178c1d68593a16072e3dfb2993411694e48
54138a0eb1cbf046633741cfa01f813e99811354
"2023-11-27T19:09:27Z"
python
"2023-11-28T16:05:08Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,741
["localstack/services/sns/resource_providers/aws_sns_topic.py", "tests/aws/services/cloudformation/resources/test_sns.py", "tests/aws/services/cloudformation/resources/test_sns.snapshot.json"]
bug: Unable to provision FIFO SNS Topic via CDK
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When attempting to provision a FIFO SNS topic in LocalStack via CDK using `cdklocal`, e.g.: ```typescript const topic = new sns.Topic(this, 'FIFOTopic', { displayName: 'topic.fifo', fifo: true, contentBasedDeduplication: true, }); ``` The resulting topic created in LocalStack is not FIFO: ![Screenshot 2023-11-27 at 12 26 44 PM](https://github.com/localstack/localstack/assets/77160631/fe0079be-1197-4a8b-8405-8014f3d633f1) This doesn't appear to be an issue with `cdklocal`, because the template output does appear to have the correct properties: ```json { "Resources": { "FIFOTopic5C947601": { "Type": "AWS::SNS::Topic", "Properties": { "ContentBasedDeduplication": true, "DisplayName": "topic.fifo", "FifoTopic": true, "TopicName": "SNSStack-FIFOTopic-99AA2860.fifo" }, ... } ``` ### Expected Behavior A FIFO SNS Topic would be provisioned when setting `fifo: true` on the CDK construct. ### How are you starting LocalStack? With the `localstack` script ### Steps To Reproduce I created this git repository to provide an example duplicating the issue: https://github.com/tbellerose-godaddy/ls-fifo-sns-cdk-bug #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) localstack start -d #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) cdklocal bootstrap cdklocal deploy '*' ### Environment ```markdown - OS: macOS Ventura 13.5.2 - LocalStack: 2.3.2 ``` ### Anything else? Creating a FIFO SNS Topic via the `awslocal-cli` works as expected. This is only an issue when creating via CDK.
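For comparison with the CDK path, the direct API call that does work (as noted at the end of the report) looks roughly like this in boto3 — endpoint, credentials, and the topic name are assumptions:

```python
import boto3

# Assumed LocalStack endpoint and dummy credentials.
sns = boto3.client(
    "sns",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# A FIFO topic needs the .fifo suffix plus the FifoTopic attribute — the same
# properties the synthesized CloudFormation template carries.
topic_arn = sns.create_topic(
    Name="demo-topic.fifo",  # hypothetical name
    Attributes={"FifoTopic": "true", "ContentBasedDeduplication": "true"},
)["TopicArn"]

attrs = sns.get_topic_attributes(TopicArn=topic_arn)["Attributes"]
print(attrs.get("FifoTopic"), attrs.get("ContentBasedDeduplication"))
# Expected: 'true' 'true' — the CDK-provisioned topic in the screenshot lacks these.
```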
https://github.com/localstack/localstack/issues/9741
https://github.com/localstack/localstack/pull/9743
54138a0eb1cbf046633741cfa01f813e99811354
7977dd45490e1fa75ad24ee16ad57dd0e7112272
"2023-11-27T18:57:36Z"
python
"2023-11-28T16:51:40Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,731
["localstack/services/stepfunctions/asl/utils/boto_client.py"]
bug: Long Running Lambda Fails StepFunction State Machine Execution
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior As of `v3.0.0` and `v3.0.1`, StepFunction StateMachines that have long-running Lambda tasks fail execution. It also looks like the StateMachine then retries by re-invoking the lambda 3 times in the background with a 1 minute gap in between invocations. Unfortunately, the state machine will have already failed execution by this point and these lambda runs fail when they try to update the state. The lambda is started successfully, but then fails with a timeout after 3 seconds: ``` 2023-11-24T22:09:56.758 ERROR --- [ad-35 (eval)] l.s.s.a.c.eval_component : Exception=FailureEventException, Error=Exception, Details={"taskFailedEventDetails": {"error": "Exception", "cause": "{\"errorMessage\":\"2023-11-24T22:09:56Z dbd4767f-32b8-46b7-9ef4-382ee583ad0a Task timed out after 3.00 seconds\"}", "resource": "invoke", "resourceType": "lambda"}} at '(StateTaskServiceLambda| {'comment': None, 'input_path': (InputPath| {'input_path_src': '$'}, 'output_path': (OutputPath| {'output_path': '$'}, 'state_entered_event_type': 'TaskStateEntered', 'state_exited_event_type': 'TaskStateExited', 'result_path': None, 'result_selector': None, 'retry': (RetryDecl| {'retriers': [(RetrierDecl| {'error_equals': (ErrorEqualsDecl| {'error_names': [(CustomErrorName| {'error_name': 'Lambda.ClientExecutionTimeoutException'}, (CustomErrorName| {'error_name': 'Lambda.ServiceException'}, (CustomErrorName| {'error_name': 'Lambda.AWSLambdaException'}, (CustomErrorName| {'error_name': 'Lambda.SdkClientException'}]}, 'interval_seconds': (IntervalSecondsDecl| {'seconds': 2}, 'max_attempts': (MaxAttemptsDecl| {'attempts': 6}, 'backoff_rate': (BackoffRateDecl| {'rate': 2.0}, '_attempts_counter': 0, '_next_interval_seconds': 2}]}, 'catch': None, 'timeout': (TimeoutSeconds| {'timeout_seconds': 99999999, 'is_default': None}, 'heartbeat': None, 'parameters': (Parameters| {'payload_tmpl': (PayloadTmpl| {'payload_bindings': [(PayloadBindingValue| {'field': 'FunctionName', 'value': (PayloadValueStr| {'val': 'arn:aws:lambda:us-east-1:000000000000:function:TestAppStack-lambdaslongrunning51EEA4-b04d9aee'}}, (PayloadBindingPath| {'field': 'Payload', 'path': '$'}]}}, 'name': 'long-running-task', 'state_type': <StateType.Task: 15>, 'continue_with': <localstack.services.stepfunctions.asl.component.state.state_continue_with.ContinueWithEnd object at 0xfffee6793b90>, 'resource': (ServiceResource| {'_region': '', '_account': '', 'resource_arn': 'arn:aws:states:::lambda:invoke', 'partition': 'aws', 'service_name': 'lambda', 'api_name': 'lambda', 'api_action': 'invoke', 'condition': None}}' ``` Even if I specify long timeouts on both the Lambda and the LambdaTask the state machine still fails the task after 3 seconds. This was working in version 2, and if I use the old StepFunctions provider, the StateMachine completes successfully. ### Expected Behavior The State Machine should finish successfully because the long running lambda finishes before the timeout. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) I've created a repository that demonstrates the bug: https://github.com/noseworthy/localstack-sfn-bugs. I'm using localstack pro, so your terminal must have `LOCALSTACK_AUTH_TOKEN` specified. This should work with non-pro localstack however. You just need to modify the `compose.yaml` file. 1. 
Start localstack using docker-compose: `docker compose up --force-recreate --build -d` #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) 1. Install dependencies: `yarn install` 2. Bootstrap the CDK project: `yarn cdklocal bootstrap` 3. Deploy the CDK project: `yarn cdklocal deploy` 4. Trigger the state machine: `yarn trigger` Watch as the statemachine tries to execute, but fails saying that the long running lambda timed out after 3.00 seconds. ### Environment ```markdown - OS: macOS Sonoma 14.1.1 (23B81) - LocalStack: v3.0.1 Pro Docker Image ``` ### Anything else? Demo Repository: https://github.com/noseworthy/localstack-sfn-bugs
https://github.com/localstack/localstack/issues/9731
https://github.com/localstack/localstack/pull/9732
7a9c7469dca92f8262bcde8f3af677148947c120
1e4aa802fd5a7e4e5f5b0359acde1fb9d6040506
"2023-11-24T23:20:25Z"
python
"2023-11-27T16:06:12Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,695
["localstack/services/stepfunctions/asl/component/state/state.py", "localstack/services/stepfunctions/asl/component/state/state_execution/state_map/iteration/itemprocessor/map_run_record.py", "localstack/services/stepfunctions/asl/eval/event/event_history.py", "localstack/services/stepfunctions/backend/execution.py", "localstack/services/stepfunctions/backend/state_machine.py", "localstack/services/stepfunctions/provider.py", "tests/aws/services/stepfunctions/templates/base/base_templates.py", "tests/aws/services/stepfunctions/templates/base/statemachines/pass_start_time_format.json5", "tests/aws/services/stepfunctions/v2/base/test_base.py"]
bug: SFN - `$$.Execution.StartTime` is giving `"21:09:14.150266"` instead of `"2019-03-26T20:14:13.192Z"` format
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Currently, when passing `$$.Execution.StartTime` as a parameter to an SFN Lambda, it arrives in the format `HH:MM:SS.###`. ### Expected Behavior `$$.Execution.StartTime` should be an ISO 8601 datetime string. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce This is run in a docker compose file. The Step Function definition is: ```json { "Comment": "Sample SFN", "StartAt": "SampleLambda", "States": { "SampleLambda": { "Type": "Task", "Resource": "arn:aws:states:::lambda:invoke", "End": true, "Parameters": { "FunctionName": "Sample-Function", "Payload": { "data": { "startDateTime.$": "$$.Execution.StartTime" } } } } } } ``` ### Environment ```markdown - OS: Mac OS Ventura 13.6.1 - LocalStack: LocalStack version: 3.0.1.dev LocalStack build date: 2023-11-20 LocalStack build git hash: 3e32b438 ``` ### Anything else? Thank you so much for your hard work.
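To make the two formats concrete, a small Python illustration (the microsecond precision of the observed value is inferred from the `HH:MM:SS.###` description and the issue title):

```python
from datetime import datetime, timezone

now = datetime.now(timezone.utc)

# What $$.Execution.StartTime should look like: ISO 8601 with milliseconds and a Z,
# e.g. "2019-03-26T20:14:13.192Z" as in the AWS documentation.
expected = now.isoformat(timespec="milliseconds").replace("+00:00", "Z")

# The shape the report observes instead: a bare time-of-day string.
observed_shape = now.strftime("%H:%M:%S.%f")

print(expected)        # e.g. 2023-11-20T21:09:14.150Z
print(observed_shape)  # e.g. 21:09:14.150266
```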
https://github.com/localstack/localstack/issues/9695
https://github.com/localstack/localstack/pull/9702
ec15870db07c0e9f8865159018e871011a08d797
8772623e86fbe071e237f6864dd7469eafa92dd3
"2023-11-20T21:28:02Z"
python
"2023-12-02T21:55:28Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,666
["localstack/services/s3/presigned_url.py", "tests/aws/services/lambda_/functions/lambda_s3_integration_presign.js", "tests/aws/services/lambda_/functions/lambda_s3_integration_sdk_v2.js", "tests/aws/services/s3/test_s3.py", "tests/aws/services/s3/test_s3.snapshot.json"]
bug: S3 Object Metadata not being persisted on localstack:3.0+ with presigned urls
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Currently, when using localstack S3 to generate a presigned upload that has metadata written into the object, the metadata is not persisted to the object. Resulting in no metadata being returned when `aws s3api head-object` is executed. ### Expected Behavior As a localstack user I want localstack S3 to correctly persist the metadata into the s3 object when using a presigned upload So that I can test if my presign upload feature is working as expected ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) ``` # docker-compose.yml localstack: image: ${DEP_PROXY_PREFIX}/localstack/localstack:2.3 hostname: localstack ports: - 4566:4566 environment: Services: 's3,sqs,sns' AWS_ACCESS_KEY_ID: localstack AWS_SECRET_ACCESS_KEY: localstack AWS_DEFAULT_REGION: eu-central-1 EAGER_SERVICE_LOADING: 1 volumes: - localstack:/var/lib/localstack - '/var/run/docker.sock:/var/run/docker.sock ``` ```bash docker compose up ``` #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) ```typescript // test.ts import { HeadObjectCommand, PutObjectCommand, S3Client } from '@aws-sdk/client-s3'; import { getSignedUrl } from '@aws-sdk/s3-request-presigner'; import { uploadFileViaHttp } from './tests/integration/scripts'; import { randomBytes } from 'crypto'; async function runError() { const client = new S3Client({ region: 'eu-central-1', endpoint: 'http://localhost:4566', forcePathStyle: true }); const buffer = randomBytes(100); const initialMetaData = { firstid: 'b8066141-3c1a-4c10-96a7-0664aeebe2aa', secondid: '800001', thirdid: 'b0372e50-f4f3-4ae2-9a44-7b8c70604cf4' }; const putObjectCommand = new PutObjectCommand({ Bucket: 'my-test-bucket', Key: 'test-key.txt', ContentLength: buffer.length, Metadata: initialMetaData }); const url = await getSignedUrl(client, putObjectCommand); // simply uploads the buffer with a PUT request await uploadFileViaHttp(url, buffer); const headObjectCommand = new HeadObjectCommand({ Bucket: 'my-test-bucket', Key: 'test-key.txt' }); const response = await client.send(headObjectCommand); console.log(`Expected Metadata`, initialMetaData); console.log('Received Metadata', response.Metadata); } runError().then(() => { console.log('done'); }); ``` ```bash ts-node ./test.ts ``` When this script is run with Localstack 2.3 (f7c3a25c88a7) this works fine. ### Environment ```markdown - OS: 22.04 - LocalStack: latest (c31daa9ad74a) ``` ### Anything else? Error only occurrs on newer localstack version. Thank you for the good work on the project! Love it!
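A boto3 + requests sketch of the same flow (Python instead of TypeScript; the metadata values are shortened samples). With a boto3-presigned PUT the Metadata params become signed headers, so the uploader has to send matching `x-amz-meta-*` headers alongside the body:

```python
import boto3
import requests  # used only to perform the raw presigned PUT

# Endpoint, region, and credentials taken from the compose file above.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="eu-central-1",
    aws_access_key_id="localstack",
    aws_secret_access_key="localstack",
)

bucket, key = "my-test-bucket", "test-key.txt"
metadata = {"firstid": "b8066141", "secondid": "800001"}  # shortened sample values

s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)

url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": bucket, "Key": key, "Metadata": metadata},
    ExpiresIn=3600,
)

# The metadata is part of the signature, so it must also travel as request headers.
headers = {f"x-amz-meta-{name}": value for name, value in metadata.items()}
requests.put(url, data=b"some content", headers=headers).raise_for_status()

print(s3.head_object(Bucket=bucket, Key=key).get("Metadata"))
# Expected: {'firstid': 'b8066141', 'secondid': '800001'}; the report sees it empty on 3.0+.
```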
https://github.com/localstack/localstack/issues/9666
https://github.com/localstack/localstack/pull/9676
d909f39114177ba8d3507d0d740c0487a6a472cd
36a9fb2fdebe4a788ccfe9b2436c78335d950752
"2023-11-17T12:21:36Z"
python
"2023-11-20T10:22:19Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,664
["localstack/services/s3/presigned_url.py", "tests/aws/services/lambda_/functions/lambda_s3_integration_presign.js", "tests/aws/services/lambda_/functions/lambda_s3_integration_sdk_v2.js", "tests/aws/services/s3/test_s3.py", "tests/aws/services/s3/test_s3.snapshot.json"]
Getting 403 AccessDenied on upload with presigned url, localstack 3.0.0
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior hi guys, I'm stuggling to test our S3 logic locally. The service that calls localstack s3 is getting the following error: `2023-11-13 15:45:57 com.amazonaws.services.s3.model.AmazonS3Exception: There were headers present in the request which were not signed (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: ea7d6d50-1e86-452c-a1c6-00fca1a237a5; S3 Extended Request ID: 9Gjjt1m+cjU4OPvX9O9/8RuvnG41MRb/18Oux2o5H5MY7ISNTlXN+Dz9IG62/ILVxhAGI0qyPfg=; Proxy: null)` The localstack logs are the following: > LocalStack version: 3.0.0 LocalStack build date: 2023-11-16 LocalStack build git hash: 3cd32364 2023-11-17T08:46:39.471 INFO --- [-functhread6] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit) 2023-11-17T08:46:39.471 INFO --- [-functhread6] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit) 2023-11-17T08:46:39.723 INFO --- [ MainThread] localstack.utils.bootstrap : Execution of "start_runtime_components" took 903.21ms Ready. 2023-11-17T08:47:33.276 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.GetBucketAcl => 404 (NoSuchBucket) 2023-11-17T08:47:33.381 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.CreateBucket => 200 2023-11-17T08:47:33.406 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.GetBucketAcl => 404 (NoSuchBucket) 2023-11-17T08:47:33.413 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.CreateBucket => 200 2023-11-17T08:47:33.425 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.GetBucketAcl => 404 (NoSuchBucket) 2023-11-17T08:47:33.432 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.CreateBucket => 200 2023-11-17T08:47:40.277 INFO --- [ asgi_gw_1] localstack.request.aws : AWS s3.HeadObject => 404 (NoSuchKey) 2023-11-17T08:47:40.278 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.HeadObject => 404 (NoSuchKey) 2023-11-17T08:47:40.281 INFO --- [ asgi_gw_2] localstack.request.aws : AWS s3.HeadObject => 404 (NoSuchKey) 2023-11-17T08:47:40.330 WARN --- [ asgi_gw_1] l.s.s3.presigned_url : Signatures do not match, but not raising an error, as S3_SKIP_SIGNATURE_VALIDATION=1 2023-11-17T08:47:40.332 WARN --- [ asgi_gw_0] l.s.s3.presigned_url : Signatures do not match, but not raising an error, as S3_SKIP_SIGNATURE_VALIDATION=1 2023-11-17T08:47:40.334 WARN --- [ asgi_gw_2] l.s.s3.presigned_url : Signatures do not match, but not raising an error, as S3_SKIP_SIGNATURE_VALIDATION=1 2023-11-17T08:47:40.344 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.PutObject => 200 2023-11-17T08:47:40.344 INFO --- [ asgi_gw_1] localstack.request.aws : AWS s3.PutObject => 200 2023-11-17T08:47:40.345 INFO --- [ asgi_gw_2] localstack.request.aws : AWS s3.PutObject => 200 2023-11-17T08:47:40.372 WARN --- [ asgi_gw_0] l.s.s3.presigned_url : Signatures do not match, but not raising an error, as S3_SKIP_SIGNATURE_VALIDATION=1 2023-11-17T08:47:40.374 WARN --- [ asgi_gw_1] l.s.s3.presigned_url : Signatures do not match, but not raising an error, as S3_SKIP_SIGNATURE_VALIDATION=1 2023-11-17T08:47:40.376 WARN --- [ asgi_gw_2] l.s.s3.presigned_url : Signatures do not match, but not raising an error, as S3_SKIP_SIGNATURE_VALIDATION=1 2023-11-17T08:47:40.378 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.PutObject => 200 2023-11-17T08:47:40.379 INFO --- [ asgi_gw_1] localstack.request.aws : AWS s3.PutObject => 200 2023-11-17T08:47:40.380 INFO --- [ asgi_gw_2] localstack.request.aws : AWS s3.PutObject => 200 
2023-11-17T08:47:40.403 INFO --- [ asgi_gw_1] localstack.request.aws : AWS s3.PutObject => 403 (AccessDenied) 2023-11-17T08:47:40.409 INFO --- [ asgi_gw_2] localstack.request.aws : AWS s3.PutObject => 403 (AccessDenied) 2023-11-17T08:47:40.411 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.PutObject => 403 (AccessDenied) 2023-11-17T08:47:41.430 INFO --- [ asgi_gw_1] localstack.request.aws : AWS s3.PutObject => 403 (AccessDenied) 2023-11-17T08:47:41.470 INFO --- [ asgi_gw_2] localstack.request.aws : AWS s3.PutObject => 403 (AccessDenied) 2023-11-17T08:47:41.471 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.PutObject => 403 (AccessDenied) 2023-11-17T08:47:44.439 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.PutObject => 403 (AccessDenied) 2023-11-17T08:47:44.483 INFO --- [ asgi_gw_1] localstack.request.aws : AWS s3.PutObject => 403 (AccessDenied) 2023-11-17T08:47:44.485 INFO --- [ asgi_gw_2] localstack.request.aws : AWS s3.PutObject => 403 (AccessDenied) 2023-11-17T08:47:48.788 INFO --- [ MainThread] l.runtime.shutdown : [shutdown] Stopping all services Localstack docker-compose: > localstack-s3-bucket: image: localstack/localstack:3.0.0 environment: SERVICES: s3 ports: - 4566:4566 I added the following env vars to the service that calls localstack s3: > - "AWS_ACCESS_KEY_ID=test" - "AWS_SECRET_ACCESS_KEY=test" Also I make sure that the AmazonS3 client creation gets the same as login and password. I read few issues here and I found similar, but these was mostly for previous versions for localstack S3. Also tried the PROVIDER_OVERRIDE_S3 = asf, but as I know its for localstack 2.x, and not for 3, since in localstack 3 this is the default provider. The code that calls S3 putobject is the following: > @S3Retryable public void putObject(String fileName, InputStream inputStream, ObjectMetadata metadata) { final var url = s3storage.generatePresignedUrl(new GeneratePresignedUrlRequest(s3bucketName, fileName, HttpMethod.PUT)); s3storage.upload(new PresignedUrlUploadRequest(url) .withInputStream(inputStream) .withMetadata(metadata)); } Can you please guys help me? In case of any information needed, just reply and I'll try to tell them. ### Expected Behavior There should not be 403 AccessDenied errors. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce To be honest, I was trying to create small code snippets, but I was not really able to reproduce the error with small codes, just with our framework that using Flink. Getting records from kafka, putting and deleting from S3. ### Environment ```markdown - OS: Windows 10 - LocalStack: 3.0.0 - Java: 11 ``` ### Anything else? _No response_
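Hedged Python sketch of the same flow, since the report above uses the Java SDK v1 inside a Flink pipeline: the snippet below only shows the presigned-PUT round trip against a LocalStack endpoint. The bucket name, key, the dummy `test` credentials and the use of `requests` are my assumptions, not details from the report; the point it illustrates is that every header baked into the signature (here `Content-Type`) must be sent unchanged with the upload, otherwise an "unsigned headers" style 403 is returned.

```python
import boto3
import requests  # assumed available; used only to exercise the presigned URL

endpoint = "http://localhost:4566"
s3 = boto3.client("s3", endpoint_url=endpoint, region_name="us-east-1",
                  aws_access_key_id="test", aws_secret_access_key="test")

s3.create_bucket(Bucket="demo-bucket")

# Any parameter included here becomes part of the signature; the client doing
# the PUT must send the matching header (Content-Type in this case) verbatim.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "demo-bucket", "Key": "demo.txt", "ContentType": "text/plain"},
    ExpiresIn=3600,
)
resp = requests.put(url, data=b"hello", headers={"Content-Type": "text/plain"})
print(resp.status_code, resp.text[:200])
```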
https://github.com/localstack/localstack/issues/9664
https://github.com/localstack/localstack/pull/9676
d909f39114177ba8d3507d0d740c0487a6a472cd
36a9fb2fdebe4a788ccfe9b2436c78335d950752
"2023-11-17T09:14:13Z"
python
"2023-11-20T10:22:19Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,598
["localstack/services/sqs/models.py", "tests/aws/services/sqs/test_sqs.py", "tests/aws/services/sqs/test_sqs.snapshot.json"]
SQS: ChangeMessageVisibility not working
Using the most recent version of LocalStack and the most recent version of the AWS CLI... ``` awslocal sqs create-queue --queue-name foo { "QueueUrl": "http://localhost:4566/000000000000/foo" } awslocal sqs send-message --message-body hello --queue-url http://localhost/000000000000/foo { "MD5OfMessageBody": "5d41402abc4b2a76b9719d911017c592", "MessageId": "53be5840-770a-49e6-8e23-cfca3469c7ff" } awslocal sqs receive-message --queue-url http://localhost/000000000000/foo { "Messages": [ { "MessageId": "53be5840-770a-49e6-8e23-cfca3469c7ff", "ReceiptHandle": "ZmE3NTAxZjItZTEzMS00ZDNjLWE0ODgtODUzNTU0ZjE4YzBjIGFybjphd3M6c3FzOnVzLWVhc3QtMTowMDAwMDAwMDAwMDA6Zm9vIDUzYmU1ODQwLTc3MGEtNDllNi04ZTIzLWNmY2EzNDY5YzdmZiAxNjk4OTUwNjM2LjQxODc3ODc=", "MD5OfBody": "5d41402abc4b2a76b9719d911017c592", "Body": "hello" } ] } awslocal sqs change-message-visibility --visibility-timeout 10 --queue-url http://localhost:4566/000000000000/foo --receipt-handle ZmE3NTAxZjItZTEzMS00ZDNjLWE0ODgtODUzNTU0ZjE4YzBjIGFybjphd3M6c3FzOnVzLWVhc3QtMTo\wMDAwMDAwMDAwMDA6Zm9vIDUzYmU1ODQwLTc3MGEtNDllNi04ZTIzLWNmY2EzNDY5YzdmZiAxNjk4OTUwNjM2LjQxODc3ODc= An error occurred (AWS.SimpleQueueService.MessageNotInflight) when calling the ChangeMessageVisibility operation: Unknown ```
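The transcript above is awscli-only; a boto3 sketch of the same sequence is below. The endpoint URL and the dummy `test` credentials are assumptions. Reusing the `QueueUrl` returned by `CreateQueue` also sidesteps the host mismatch (`http://localhost/...` vs `http://localhost:4566/...`) visible in the transcript.

```python
import boto3

endpoint = "http://localhost:4566"
sqs = boto3.client("sqs", endpoint_url=endpoint, region_name="us-east-1",
                   aws_access_key_id="test", aws_secret_access_key="test")

queue_url = sqs.create_queue(QueueName="foo")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="hello")

# Receive the message so it is in flight, then extend its visibility timeout;
# per the report this raises MessageNotInflight instead of succeeding.
msg = sqs.receive_message(QueueUrl=queue_url)["Messages"][0]
sqs.change_message_visibility(
    QueueUrl=queue_url,
    ReceiptHandle=msg["ReceiptHandle"],
    VisibilityTimeout=10,
)
```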
https://github.com/localstack/localstack/issues/9598
https://github.com/localstack/localstack/pull/9632
74efe18eeb28c07fb900370194dbc227e3f55454
cca230a5837282937c19af26314fbf60f744b056
"2023-11-06T20:36:32Z"
python
"2023-11-15T13:48:35Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,556
["localstack/services/opensearch/provider.py", "tests/aws/services/opensearch/test_opensearch.py"]
bug: opensearch/elasticsearch custom endpoint not saved or included in responses
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When creating an opensearch/elasticsearch domain, a custom endpoint URL may be set on the domain via the `CustomEndpoint`/`CustomEndpointEnabled` request params. localstack respects these params when initially setting up the domain and correctly routes requests made to the custom endpoint, but it doesn't "remember" this config, which has two effects: 1. The response from the `CreateElasticsearchDomain` (and any subsequent `DescribeElasticsearchDomain`) request incorrectly has `CustomEndpoint` omitted and `CustomEndpointEnabled` set to `false` 2. When restoring an opensearch/elasticsearch domain from persisted state, any custom endpoint config is lost/ignored, and making a request to the custom endpoint returns a 404 (of course this is also broken by #8092, but this bug would still happen even if that one is fixed!) ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce ``` awslocal es create-elasticsearch-domain \ --domain-name example \ --domain-endpoint-options CustomEndpointEnabled=true,CustomEndpoint=example.com:1234/foo ``` This gives the following response (irrelevant properties omitted): ```jsonc { "DomainStatus": { "Endpoint": "example.com:1234/foo", "DomainEndpointOptions": { "EnforceHTTPS": false, "TLSSecurityPolicy": "Policy-Min-TLS-1-0-2019-07", "CustomEndpointEnabled": false }, ... } } ``` Note that `CustomEndpointEnabled` is false when it should be true, and `CustomEndpoint` is omitted when it should be `example.com:1234/foo` (the same as the `Endpoint` property). The same response is given when subsequently running `awslocal es describe-elasticsearch-domain --domain-name example` ### Environment ```markdown - OS: docker desktop (on a windows 11 host) - LocalStack: latest (2.3.3.dev) ``` ### Anything else? _No response_
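A boto3 sketch of the same check, using the newer `opensearch` API instead of the legacy `es` commands shown above; the endpoint, domain name and `test` credentials are assumptions.

```python
import boto3

endpoint = "http://localhost:4566"
opensearch = boto3.client("opensearch", endpoint_url=endpoint, region_name="us-east-1",
                          aws_access_key_id="test", aws_secret_access_key="test")

created = opensearch.create_domain(
    DomainName="example",
    DomainEndpointOptions={
        "CustomEndpointEnabled": True,
        "CustomEndpoint": "example.com:1234/foo",
    },
)["DomainStatus"]

described = opensearch.describe_domain(DomainName="example")["DomainStatus"]
# Per the report, CustomEndpointEnabled comes back False and CustomEndpoint is
# omitted in both responses, even though the custom endpoint is routed correctly.
print(created["DomainEndpointOptions"])
print(described["DomainEndpointOptions"])
```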
https://github.com/localstack/localstack/issues/9556
https://github.com/localstack/localstack/pull/9566
563958defd2acf9dd84a3477d7cf77a09c23fdc1
310889ac7db0dc08795ade6403e1dbad801260bd
"2023-11-05T20:29:59Z"
python
"2023-11-07T08:06:53Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,488
["Makefile"]
enhancement request: publish major docker tag (e.g. localstack/localstack:2)
### Is there an existing issue for this? - [X] I have searched the existing issues ### Enhancement description As LocalStack has become much more stable, we no longer need to update manually for every minor version and would like to pin the LocalStack Docker image to the latest major release. ### 🧑‍💻 Implementation I will open a PR in a few minutes. ### Anything else? _No response_
https://github.com/localstack/localstack/issues/9488
https://github.com/localstack/localstack/pull/9490
782397a13fe6334832428a809e5c5fcfacefb099
b2ad9be7426162bd3e3bad2009679a2200061012
"2023-10-27T04:10:45Z"
python
"2023-11-07T17:01:33Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,476
["localstack/services/firehose/provider.py", "localstack/testing/pytest/fixtures.py", "tests/aws/services/firehose/conftest.py", "tests/aws/services/firehose/test_firehose.py", "tests/aws/services/firehose/test_firehose.snapshot.json", "tests/aws/services/firehose/test_firehose.validation.json"]
bug: Multiple firehose delivery streams subscribing to kinesis stream doesn't fire both.
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior 1. Create kinesis stream with `awslocal kinesis create-stream` 2. Create two firehose delivery stream with `awslocal firehose create-delivery-stream`, both of which have `--kinesis-stream-source-configuration` to the stream created above 3. Put record to the stream 4. Only one of the delivery stream receives the message ### Expected Behavior At the 4th step, both of the delivery stream should receive the message. (which was true in real AWS environment.) ### How are you starting LocalStack? With a `docker run` command ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run localstack/localstack:2.3.2 #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) docker exec -it admiring_chatelet bash (in the localstack container) awslocal opensearch create-domain --domain-name my-domain awslocal kinesis create-stream --stream-name samplestream --shard-count 2 --region dummy awslocal firehose create-delivery-stream --delivery-stream-name s1 --delivery-stream-type KinesisStreamAsSource --kinesis-stream-source-configuration "KinesisStreamARN=arn:aws:kinesis:us-east-1:000000000000:stream/samplestream,RoleARN=arn:aws:kinesis:us-east-1:000000000000:role/stream-role" --amazonopensearchservice-destination-configuration "ClusterEndpoint=http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566,IndexName=index1,TypeName=_doc,RoleARN=arn:aws:kinesis:us-east-1:000000000000:role/es-role,S3Configuration={RoleARN=arn:aws:iam::000000000000:role/Firehose-Reader-Role,BucketARN=arn:aws:s3:::kinesis-activity-backup-local}" awslocal firehose create-delivery-stream --delivery-stream-name s2 --delivery-stream-type KinesisStreamAsSource --kinesis-stream-source-configuration "KinesisStreamARN=arn:aws:kinesis:us-east-1:000000000000:stream/samplestream,RoleARN=arn:aws:kinesis:us-east-1:000000000000:role/stream-role" --amazonopensearchservice-destination-configuration "ClusterEndpoint=http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566,IndexName=index2,TypeName=_doc,RoleARN=arn:aws:kinesis:us-east-1:000000000000:role/es-role,S3Configuration={RoleARN=arn:aws:iam::000000000000:role/Firehose-Reader-Role,BucketARN=arn:aws:s3:::kinesis-activity-backup-local}" awslocal kinesis put-record --stream-name samplestream --data '{"test":"value"}' --partition-key 1 curl http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/index1/_search curl http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/index2/_search by the last two `curl`s, only one of the results shows the given data. ### Environment ```markdown - OS: MacOS 13.2.1 - LocalStack: 2.3.2 ``` ### Anything else? _No response_
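A boto3 sketch of the reproduction, trimmed down to a plain S3 destination instead of the OpenSearch destination used above; all ARNs, resource names and credentials are placeholders/assumptions, and LocalStack does not enforce the IAM roles referenced here.

```python
import boto3

endpoint = "http://localhost:4566"
kwargs = dict(endpoint_url=endpoint, region_name="us-east-1",
              aws_access_key_id="test", aws_secret_access_key="test")
kinesis = boto3.client("kinesis", **kwargs)
firehose = boto3.client("firehose", **kwargs)

kinesis.create_stream(StreamName="samplestream", ShardCount=2)
stream_arn = kinesis.describe_stream(StreamName="samplestream")["StreamDescription"]["StreamARN"]

# Two delivery streams reading from the same Kinesis source.
for name in ("s1", "s2"):
    firehose.create_delivery_stream(
        DeliveryStreamName=name,
        DeliveryStreamType="KinesisStreamAsSource",
        KinesisStreamSourceConfiguration={
            "KinesisStreamARN": stream_arn,
            "RoleARN": "arn:aws:iam::000000000000:role/stream-role",
        },
        S3DestinationConfiguration={
            "RoleARN": "arn:aws:iam::000000000000:role/firehose-role",
            "BucketARN": f"arn:aws:s3:::{name}-backup",
        },
    )

kinesis.put_record(StreamName="samplestream", Data=b'{"test": "value"}', PartitionKey="1")
# Expectation: both delivery streams should deliver this record to their destinations.
```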
https://github.com/localstack/localstack/issues/9476
https://github.com/localstack/localstack/pull/10155
871bfb89ae9c7392b84a0fb3930e5e0b30b8961d
d49889b2bf24ffca4c99c3245f006e50d2695dc1
"2023-10-26T03:07:14Z"
python
"2024-02-07T10:13:50Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,428
["tests/aws/services/events/scheduled_rules/test_events_scheduled_rules_logs.py", "tests/aws/services/s3/test_s3.py"]
bug: DeleteObjects Percent Decoding Input
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior ``` $ docker run -d -p 4566:4566 localstack/localstack:2.3 $ aws --no-sign-request --endpoint-url=http://localhost:4566 s3 mb s3://tustvold make_bucket: tustvold ``` If we then create a path with a percent encoded object path, deleting it works correctly ``` $ curl -X PUT --data-binary 'test' -H "Content-Type: text/plain" 'http://localhost:4566/tustvold/a%252Fb.txt' $ aws --no-sign-request --endpoint-url=http://localhost:4566 s3 ls s3://tustvold 2023-10-21 21:33:50 4 a%2Fb.txt $ aws s3api delete-objects --no-sign-request --endpoint http://localhost:4566 --bucket tustvold --delete 'Objects=[{Key=a%2Fb.txt}]' { "Deleted": [ { "Key": "a%2Fb.txt" } ] } ``` However, if the object key is a percent encoded emoji, we get very peculiar behaviour, where the DeleteObject appears to be trying to delete the percent decoded version of the object ``` $ aws --no-sign-request --endpoint-url=http://localhost:4566 s3 ls s3://tustvold $ curl -X PUT --data-binary 'test' -H "Content-Type: text/plain" 'http://localhost:4566/tustvold/a/%25F0%259F%2598%2580.file' $ aws --no-sign-request --endpoint-url=http://localhost:4566 s3 ls s3://tustvold --recursive 2023-10-21 21:36:07 4 a/%F0%9F%98%80.file $ aws s3api delete-objects --no-sign-request --endpoint http://localhost:4566 --bucket tustvold --delete 'Objects=[{Key=a/%F0%9F%98%80.file}]' { "Deleted": [ { "Key": "a/😀.file" } ] } $ aws --no-sign-request --endpoint-url=http://localhost:4566 s3 ls s3://tustvold --recursive 2023-10-21 21:36:07 4 a/%F0%9F%98%80.file ``` ### Expected Behavior In version 2.0 this worked correctly ``` $ docker run -d -p 4566:4566 localstack/localstack:2.0 $ aws --no-sign-request --endpoint-url=http://localhost:4566 s3 mb s3://tustvold $ curl -X PUT --data-binary 'test' -H "Content-Type: text/plain" 'http://localhost:4566/tustvold/a/%25F0%259F%2598%2580.file' $ aws s3api delete-objects --no-sign-request --endpoint http://localhost:4566 --bucket tustvold --delete 'Objects=[{Key=a/%F0%9F%98%80.file}]' { "Deleted": [ { "Key": "a/%F0%9F%98%80.file" } ] } $ aws --no-sign-request --endpoint-url=http://localhost:4566 s3 ls s3://tustvold --recursive ``` ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce See above ### Environment ```markdown - OS: - LocalStack: ``` ### Anything else? _No response_
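For completeness, a boto3 sketch of the same operations. Note the SDK percent-encodes keys itself, so this exercises the DeleteObjects API with a literal percent-encoded key name rather than reproducing the raw-curl request byte for byte; the bucket name, key and credentials are assumptions.

```python
import boto3

endpoint = "http://localhost:4566"
s3 = boto3.client("s3", endpoint_url=endpoint, region_name="us-east-1",
                  aws_access_key_id="test", aws_secret_access_key="test")

s3.create_bucket(Bucket="tustvold")
key = "a/%F0%9F%98%80.file"  # literal percent-encoded emoji in the key name
s3.put_object(Bucket="tustvold", Key=key, Body=b"test")

resp = s3.delete_objects(Bucket="tustvold", Delete={"Objects": [{"Key": key}]})
print(resp.get("Deleted"), resp.get("Errors"))

# The bucket should be empty afterwards; in the report the literal key survives
# because the delete is applied to the percent-decoded name instead.
print([o["Key"] for o in s3.list_objects_v2(Bucket="tustvold").get("Contents", [])])
```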
https://github.com/localstack/localstack/issues/9428
https://github.com/localstack/localstack/pull/9507
3f648eb779f6a53b13dc00712da96b55596f9640
987109436de91c440e01ff94fa566f1de090718c
"2023-10-21T20:44:48Z"
python
"2023-10-30T15:20:17Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,427
["localstack/services/s3/v3/provider.py", "tests/aws/services/s3/test_s3.py", "tests/aws/services/s3/test_s3.snapshot.json", "tests/aws/services/s3/test_s3_api.py", "tests/aws/services/s3/test_s3_api.snapshot.json"]
bug: ListObjectsV2 Returns EncodingType When Not Specified
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior ``` $ docker run -d -p 4566:4566 -e PROVIDER_OVERRIDE_S3=v3 localstack/localstack:2.3 $ aws --no-sign-request --endpoint-url=http://localhost:4566 s3 mb s3://tustvold $ curl -X PUT --data-binary 'test' -H "Content-Type: text/plain" 'http://localhost:4566/tustvold/a%252Fb.txt' $ curl 'http://localhost:4566/tustvold' <?xml version='1.0' encoding='utf-8'?> <ListBucketResult><IsTruncated>false</IsTruncated><Marker /><Name>tustvold</Name><Prefix /><MaxKeys>1000</MaxKeys><EncodingType>url</EncodingType><Contents><Key>a%252Fb.txt</Key><ETag>"d41d8cd98f00b204e9800998ecf8427e"</ETag><Owner><DisplayName>webfile</DisplayName><ID>75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID></Owner><Size>0</Size><LastModified>2023-10-21T18:45:37.298607Z</LastModified><StorageClass>STANDARD</StorageClass></Contents></ListBucketResult> ``` Using `PROVIDER_OVERRIDE_S3=v3` does not appear to change this behaviour, nor does it differ between ListObjects and ListObjectsV2 ### Expected Behavior ``` $ docker run -d -p 4566:4566 localstack/localstack:2.0 $ aws --no-sign-request --endpoint-url=http://localhost:4566 s3 mb s3://tustvold $ curl -X PUT --data-binary 'test' -H "Content-Type: text/plain" 'http://localhost:4566/tustvold/a%252Fb.txt' $ curl http://localhost:4566/tustvold/a%252Fb.txt test $ aws --no-sign-request --endpoint-url=http://localhost:4566 s3 ls s3://tustvold 2023-10-21 19:43:04 4 a%2Fb.txt $ curl 'http://localhost:4566/tustvold' <?xml version='1.0' encoding='utf-8'?> <ListBucketResult><IsTruncated>false</IsTruncated><Marker /><Contents><Key>a%2Fb.txt</Key><LastModified>2023-10-21T18:43:04Z</LastModified><ETag>"098f6bcd4621d373cade4e832627b4f6"</ETag><Size>4</Size><StorageClass>STANDARD</StorageClass><Owner><DisplayName>webfile</DisplayName><ID>75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID></Owner></Contents><Name>tustvold</Name><MaxKeys>1000</MaxKeys></ListBucketResult> ``` Unless encoding-type is specified in the request, it shouldn't return encoded paths for compatibility with the behaviour of S3 itself. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce See above ### Environment ```markdown Linux ``` ### Anything else? _No response_
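A boto3 sketch of the check: with no `EncodingType` in the request, neither the response field nor URL-encoded keys should come back. The bucket name, key and credentials are assumptions.

```python
import boto3

endpoint = "http://localhost:4566"
s3 = boto3.client("s3", endpoint_url=endpoint, region_name="us-east-1",
                  aws_access_key_id="test", aws_secret_access_key="test")

s3.create_bucket(Bucket="tustvold")
s3.put_object(Bucket="tustvold", Key="a%2Fb.txt", Body=b"test")

# No EncodingType is requested here, so the response should not contain one
# and the key should come back verbatim rather than URL-encoded again.
resp = s3.list_objects_v2(Bucket="tustvold")
print(resp.get("EncodingType"), [o["Key"] for o in resp["Contents"]])
```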
https://github.com/localstack/localstack/issues/9427
https://github.com/localstack/localstack/pull/9429
6e894e0d7ca42f5afe6acb76a4ad4dc3665c692a
fb929d759e5144ef90588dc90a413849d61ddd4b
"2023-10-21T19:02:40Z"
python
"2023-10-23T19:13:56Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,408
["localstack/constants.py", "localstack/services/opensearch/versions.py", "tests/aws/scenario/bookstore/test_bookstore.snapshot.json", "tests/aws/services/es/test_es.py", "tests/aws/services/opensearch/test_opensearch.py"]
enhancement request: OpenSearch 2.9 support
### Is there an existing issue for this? - [X] I have searched the existing issues ### Enhancement description OpenSearch 2.9 supports a lot of useful features that AWS OpenSearch Service also supports, such as being able to use filters on FAISS searches in the k-NN plugin. It would be great to have these in LocalStack! ### 🧑‍💻 Implementation This may be as trivial as a version bump; as far as I can tell, LocalStack just pulls the OpenSearch Docker image. ### Anything else? _No response_
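If this lands, a quick way to verify from Python would be something like the sketch below; the domain name and credentials are assumptions, and `list_versions`/`EngineVersion` are standard OpenSearch API fields rather than anything LocalStack-specific.

```python
import boto3

endpoint = "http://localhost:4566"
client = boto3.client("opensearch", endpoint_url=endpoint, region_name="us-east-1",
                      aws_access_key_id="test", aws_secret_access_key="test")

# List the engine versions the emulator advertises, then request 2.9 explicitly.
print(client.list_versions()["Versions"])
client.create_domain(DomainName="knn-demo", EngineVersion="OpenSearch_2.9")
print(client.describe_domain(DomainName="knn-demo")["DomainStatus"]["EngineVersion"])
```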
https://github.com/localstack/localstack/issues/9408
https://github.com/localstack/localstack/pull/9626
e18c518ef531023f7cb4a7687d2ac6bdcf1129ab
6d7392a505d9edd90b716587fb70b04864f71a12
"2023-10-19T21:12:36Z"
python
"2023-11-14T14:15:43Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,340
["setup.cfg"]
localstack-core 2.3.2 requires cachetools~=5.0.0, but you have cachetools 5.3.0 which is incompatible
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Error while running `cdk synth`: <img width="936" alt="image" src="https://github.com/localstack/localstack/assets/63163183/58c784d8-fbc6-4c11-a7d3-f6eb15a54eba"> ### Expected Behavior _No response_ ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce I am trying to run `cdk synth` and our project uses `cachetools==5.3.0`. I get the above error while the command tries to install all the packages. ### Environment ```markdown - OS: Windows 10 - LocalStack: 2.3.0 - localstack-core: 2.3.2 ``` ### Anything else? Is there a way to upgrade cachetools or to widen the range of acceptable versions?
https://github.com/localstack/localstack/issues/9340
https://github.com/localstack/localstack/pull/9341
83c8550adfa83e11b1d77599a88b97c2e8489a02
3bab4ad00e6530a15f560a24ff4cd6c7231a8ac1
"2023-10-12T11:18:47Z"
python
"2023-10-12T13:01:16Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,331
["localstack/services/dynamodb/provider.py", "tests/aws/services/dynamodb/test_dynamodb.py", "tests/aws/services/dynamodb/test_dynamodb.snapshot.json", "tests/aws/services/dynamodb/test_dynamodb.validation.json"]
bug: SSESpecification field is not updated
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior `SSESpecification` field is not updated. ### Expected Behavior I expect the `SSESpecification ` field to be updated as [described](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/encryption.tutorial.html) in the documentation. ### How are you starting LocalStack? With the `localstack` script ### Steps To Reproduce ```bash DEBUG=1 localstack start -d awslocal dynamodb create-table \ --table-name MusicCollection \ --attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=SongTitle,AttributeType=S \ --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE \ --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \ --tags Key=Owner,Value=blueTeam --sse-specification Enabled=true,SSEType=KMS,KMSMasterKeyId=abcd1234-abcd-1234-a123-ab1234a1b234 --no-cli-pager --query "TableDescription.SSEDescription" { "Status": "ENABLED", "SSEType": "KMS", "KMSMasterKeyArn": "arn:aws:kms:eu-central-1:000000000000:key/abcd1234-abcd-1234-a123-ab1234a1b234" } awslocal dynamodb update-table \ --table-name MusicCollection \ --sse-specification Enabled=false \ --query "TableDescription.SSEDescription" \ --no-cli-pager { "Status": "ENABLED", "SSEType": "KMS", "KMSMasterKeyArn": "arn:aws:kms:eu-central-1:000000000000:key/abcd1234-abcd-1234-a123-ab1234a1b234" } awslocal dynamodb describe-table --table-name MusicCollection --query "Table.SSEDescription --no-cli-pager " { "Status": "ENABLED", "SSEType": "KMS", "KMSMasterKeyArn": "arn:aws:kms:eu-central-1:000000000000:key/abcd1234-abcd-1234-a123-ab1234a1b234" } ``` Log when updating the table: ```bash 2023-10-11T07:56:53.533 DEBUG --- [functhread10] l.services.dynamodb.server : 07:56:53.532 [qtp42820240-19] WARN com.amazonaws.services.dynamodbv2.local.server.LocalDynamoDBServerHandler - DynamoDBLocalServiceException exception occured 2023-10-11T07:56:53.534 DEBUG --- [functhread10] l.services.dynamodb.server : com.amazonaws.services.dynamodbv2.exceptions.DynamoDBLocalServiceException: Nothing to update 2023-10-11T07:56:53.534 DEBUG --- [functhread10] l.services.dynamodb.server : at com.amazonaws.services.dynamodbv2.exceptions.AWSExceptionFactory.buildLocalServiceException(AWSExceptionFactory.java:93) ~[DynamoDBLocal.jar:?] 2023-10-11T07:56:53.534 DEBUG --- [functhread10] l.services.dynamodb.server : at com.amazonaws.services.dynamodbv2.exceptions.AWSExceptionFactory.buildAWSException(AWSExceptionFactory.java:58) ~[DynamoDBLocal.jar:?] 2023-10-11T07:56:53.534 DEBUG --- [functhread10] l.services.dynamodb.server : at com.amazonaws.services.dynamodbv2.local.shared.access.api.cp.UpdateTableFunction$1.criticalSection(UpdateTableFunction.java:59) ~[DynamoDBLocal.jar:?] 2023-10-11T07:56:53.534 DEBUG --- [functhread10] l.services.dynamodb.server : at com.amazonaws.services.dynamodbv2.local.shared.access.LocalDBAccess$WriteLockWithTimeout.execute(LocalDBAccess.java:361) ~[DynamoDBLocal.jar:?] 2023-10-11T07:56:53.534 DEBUG --- [functhread10] l.services.dynamodb.server : at com.amazonaws.services.dynamodbv2.local.shared.access.api.cp.UpdateTableFunction.apply(UpdateTableFunction.java:332) ~[DynamoDBLocal.jar:?] 2023-10-11T07:56:53.534 DEBUG --- [functhread10] l.services.dynamodb.server : at com.amazonaws.services.dynamodbv2.local.shared.access.awssdkv1.client.LocalAmazonDynamoDB.updateTable(LocalAmazonDynamoDB.java:319) ~[DynamoDBLocal.jar:?] 
2023-10-11T07:56:53.534 DEBUG --- [functhread10] l.services.dynamodb.server : at com.amazonaws.services.dynamodbv2.local.server.LocalDynamoDBRequestHandler.updateTable(LocalDynamoDBRequestHandler.java:396) ~[DynamoDBLocal.jar:?] 2023-10-11T07:56:53.535 DEBUG --- [functhread10] l.services.dynamodb.server : at com.amazonaws.services.dynamodbv2.local.dispatchers.UpdateTableDispatcher.enact(UpdateTableDispatcher.java:18) ~[DynamoDBLocal.jar:?] 2023-10-11T07:56:53.535 DEBUG --- [functhread10] l.services.dynamodb.server : at com.amazonaws.services.dynamodbv2.local.dispatchers.UpdateTableDispatcher.enact(UpdateTableDispatcher.java:12) ~[DynamoDBLocal.jar:?] 2023-10-11T07:56:53.535 DEBUG --- [functhread10] l.services.dynamodb.server : at com.amazonaws.services.dynamodbv2.local.server.LocalDynamoDBServerHandler.packageDynamoDBResponse(LocalDynamoDBServerHandler.java:407) ~[DynamoDBLocal.jar:?] 2023-10-11T07:56:53.535 DEBUG --- [functhread10] l.services.dynamodb.server : at com.amazonaws.services.dynamodbv2.local.server.LocalDynamoDBServerHandler.handle(LocalDynamoDBServerHandler.java:496) ~[DynamoDBLocal.jar:?] 2023-10-11T07:56:53.535 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.535 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.535 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.535 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:190) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.535 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:191) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.server.Server.handle(Server.java:516) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487) ~[jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732) [jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479) [jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277) [jetty-server-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) [jetty-io-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) [jetty-io-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) [jetty-io-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338) [jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315) [jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) [jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.536 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131) [jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.537 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409) [jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.537 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) [jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.537 DEBUG --- [functhread10] l.services.dynamodb.server : at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) [jetty-util-9.4.48.v20220622.jar:9.4.48.v20220622] 2023-10-11T07:56:53.537 DEBUG --- [functhread10] l.services.dynamodb.server : at java.lang.Thread.run(Unknown Source) [?:?] 2023-10-11T07:56:53.550 INFO --- [ asgi_gw_0] localstack.request.aws : AWS dynamodb.UpdateTable => 200 ``` ### Environment ```markdown - OS: MacOS - LocalStack: 2.3.2 ``` ### Anything else? _No response_
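A boto3 sketch of the same sequence, simplified to on-demand billing and without a customer-managed KMS key; the table name, region and credentials are assumptions.

```python
import boto3

endpoint = "http://localhost:4566"
ddb = boto3.client("dynamodb", endpoint_url=endpoint, region_name="eu-central-1",
                   aws_access_key_id="test", aws_secret_access_key="test")

ddb.create_table(
    TableName="MusicCollection",
    AttributeDefinitions=[{"AttributeName": "Artist", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "Artist", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    SSESpecification={"Enabled": True, "SSEType": "KMS"},
)

# Disabling SSE on update should be reflected by DescribeTable afterwards;
# per the report the old SSEDescription is returned unchanged.
ddb.update_table(TableName="MusicCollection", SSESpecification={"Enabled": False})
print(ddb.describe_table(TableName="MusicCollection")["Table"].get("SSEDescription"))
```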
https://github.com/localstack/localstack/issues/9331
https://github.com/localstack/localstack/pull/10040
23c7e54fcda14ff61e3df10d488364cebd52dc1d
29d19959d904f4a850a9121da4c31991c8b307ef
"2023-10-11T08:08:37Z"
python
"2024-01-11T11:56:05Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,261
["localstack/services/sns/publisher.py", "tests/aws/services/sns/test_sns.py", "tests/aws/services/sns/test_sns.snapshot.json"]
bug: message passes through without attribute if there is a filterPolicy in place for null value
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior It seems that it is possible to receive an SNS message in an SQS listener when there is a filter policy in place with a null value for a message attribute and the message is sent without that message attribute. ### Expected Behavior These types of messages should not be received. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce `sns subscribe --topic-arn "demoTopicArn" --protocol sqs --notification-endpoint "demoSqsEndpoint" --attributes '{"RawMessageDelivery": "true"}'` `sns set-subscription-attributes --subscription-arn demoSubsArn --attribute-name FilterPolicy --attribute-value '{"testForNull": [null,{"anything-but": ["notgood"]}]}'` `sns publish --topic-arn demoTopicArn --message "testforrecieval"` `sqs receive-message --queue-url "demoUrl"` ### Environment ```markdown - OS: macOS 13.6 - LocalStack: 2.2.0 ``` ### Anything else? _No response_
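A boto3 sketch of the reproduction with concrete resource names; the names, region and credentials are assumptions, and the filter policy is the one from the report.

```python
import json

import boto3

endpoint = "http://localhost:4566"
kwargs = dict(endpoint_url=endpoint, region_name="us-east-1",
              aws_access_key_id="test", aws_secret_access_key="test")
sns = boto3.client("sns", **kwargs)
sqs = boto3.client("sqs", **kwargs)

topic_arn = sns.create_topic(Name="demo-topic")["TopicArn"]
queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"])["Attributes"]["QueueArn"]

sns.subscribe(
    TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn,
    Attributes={
        "RawMessageDelivery": "true",
        "FilterPolicy": json.dumps({"testForNull": [None, {"anything-but": ["notgood"]}]}),
    },
)

# Publish without the "testForNull" attribute and check what arrives in the queue.
sns.publish(TopicArn=topic_arn, Message="test message without attributes")
print(sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=5).get("Messages"))
```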
https://github.com/localstack/localstack/issues/9261
https://github.com/localstack/localstack/pull/9264
44407254793dae6eb1a05bde62c0da214812143c
e7cf994ab6edf29d585d13699e6306722d03145a
"2023-09-29T09:59:44Z"
python
"2023-09-29T17:02:15Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,243
["localstack/services/dynamodb/provider.py", "localstack/services/dynamodb/utils.py", "tests/unit/test_dynamodb.py"]
bug: Index not found temporarily returned after update table operations
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior As part of our test suite, we setup DynamoDB tables starting from initial table creation to a set of updates we've made over time. The tables are created and updates are applied successfully, so we begin to run our tests but when our tests run quickly, we end up getting an `Index not found` error returned when searching against a global secondary index that was created as part of an update table operation. ### Expected Behavior The expected behavior should be that we should be able to query against the updated secondary index that has been created in a table. ### How are you starting LocalStack? With a `docker run` command ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run localstack/localstack #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) We have an internal set of migrations we run against DynamoDB that get the tables to the state that we expect. The main issue pops up when we do a replication update as part of a table update operation and then do the global secondary key creation. When doing the replication update, the schema gets cached internally and a bad version of the schema stays valid until the initial TTL expires (20s default). ### Environment ```markdown - OS: Mac OS Ventura (Docker 4.23.0) - LocalStack: latest (also tried 2.2.0, and some older versions) ``` ### Anything else? _No response_
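A stripped-down boto3 sketch of the GSI-then-query part of the scenario; it leaves out the replica update that the reporters say primes the stale schema cache, and the table, attribute and index names are assumptions.

```python
import boto3

endpoint = "http://localhost:4566"
ddb = boto3.client("dynamodb", endpoint_url=endpoint, region_name="us-east-1",
                   aws_access_key_id="test", aws_secret_access_key="test")

ddb.create_table(
    TableName="items",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Add a GSI via UpdateTable (a later "migration"), then query it right away.
ddb.update_table(
    TableName="items",
    AttributeDefinitions=[{"AttributeName": "gsi_pk", "AttributeType": "S"}],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "gsi1",
            "KeySchema": [{"AttributeName": "gsi_pk", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
        }
    }],
)
resp = ddb.query(
    TableName="items",
    IndexName="gsi1",
    KeyConditionExpression="gsi_pk = :v",
    ExpressionAttributeValues={":v": {"S": "x"}},
)
print(resp["Count"])  # per the report, this intermittently fails with "Index not found"
```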
https://github.com/localstack/localstack/issues/9243
https://github.com/localstack/localstack/pull/9244
b2a63acbe6a4656b7dfc71db954c0368cfec4250
06606c844813afe4112405fcb3192d2f33a38a1f
"2023-09-27T21:48:49Z"
python
"2023-10-10T10:39:23Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,193
["localstack/services/opensearch/cluster.py", "tests/aws/services/opensearch/test_opensearch.py"]
Bug: [EDITED] java.lang.UnsatisfiedLinkError: no opensearchknn_faiss in java.library.path
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I have a terraform-managed localstack setup that uses s3 and opensearch. In our docker-compose for localstack, we set the OPENSEARCH_ENDPOINT_STRATEGY to path, and when we use the aws cli for opensearch we can see the endpoint follows the expected path strategy. However, when we curl that endpoint, we get the following error: ``` $ curl http://localhost:4566/opensearch/us-east-1/example/ <?xml version='1.0' encoding='utf-8'?> <Error><Code>InternalError</Code><Message>exception while calling s3 with unknown operation: MyHTTPConnectionPool(host='127.0.0.1', port=60769): Max retries exceeded with url: / (Caused by NewConnectionError('&lt;urllib3.connection.HTTPConnection object at 0x7efc3c205840&gt;: Failed to establish a new connection: [Errno 111] Connection refused'))</Message><RequestId>09b749b2-78a0-4a90-b175-9f672f14babc</RequestId></Error> ``` looks like localstack is routing to s3 instead of opensearch. Interestingly, this issue only started after I created an opensearch index and added a document to it -- before that, I was getting the expected opensearch curl results. ### Expected Behavior I would expect the curl to hit the opensearch endpoint instead of s3 ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce I'm not fully certain what the minimal repro is. We're starting localstack with a docker compose, which I can include: ``` version: "3" services: localstack-mongodb: image: mongo:4.4.21 container_name: localstack-mongodb ports: - "27017:27017" command: mongod --replSet rs0 # Enable interactive mode stdin_open: true # Allocate a pseudo-TTY tty: true localstack-redis: image: redis:7.0.11 container_name: localstack-redis ports: - "6379:6379" localstack: image: localstack/localstack container_name: localstack environment: - OPENSEARCH_ENDPOINT_STRATEGY=path ports: - "4566:4566" - "4510-4559:4510-4559" ``` followed by: opensearch.js ``` opensearchClient.create({ index: 'foo'. body: { settings: { 'index.knn': true }, mappings: { embedding: { type: 'knn_vector', dimension: 512, method: { engine: 'faiss', name: 'nsw' } } } } }) client.index({ index: 'foo', body: { embedding: [ ... 512 numbers ] }); ``` ### Environment ```markdown - OS: 20.04 - LocalStack: latest ``` ### Anything else? NA
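A rough Python equivalent of the JS snippet above, assuming the path endpoint strategy and that `name: 'nsw'` was meant to be the HNSW method; the index body, domain name, wait loop and the use of `requests` are my assumptions rather than the reporter's exact setup.

```python
import time

import boto3
import requests  # assumed available; used to talk to the cluster endpoint directly

endpoint = "http://localhost:4566"
client = boto3.client("opensearch", endpoint_url=endpoint, region_name="us-east-1",
                      aws_access_key_id="test", aws_secret_access_key="test")

client.create_domain(DomainName="example")
while client.describe_domain(DomainName="example")["DomainStatus"].get("Processing"):
    time.sleep(5)  # wait until the local cluster is actually up

# With OPENSEARCH_ENDPOINT_STRATEGY=path the domain should be served here
# (path layout taken from the report above).
url = f"{endpoint}/opensearch/us-east-1/example"
index_body = {
    "settings": {"index.knn": True},
    "mappings": {"properties": {"embedding": {
        "type": "knn_vector", "dimension": 512,
        "method": {"engine": "faiss", "name": "hnsw"},
    }}},
}
print(requests.put(f"{url}/foo", json=index_body).status_code)
print(requests.get(f"{url}/").status_code)  # the report sees this routed to S3 instead
```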
https://github.com/localstack/localstack/issues/9193
https://github.com/localstack/localstack/pull/9234
ca807efdf2a489f8e42edfc504a20549ad9d31b4
78e4be3456c0560ea661bc33042d81ac74e37de0
"2023-09-20T21:45:23Z"
python
"2023-09-27T12:33:40Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,154
["localstack/services/sns/provider.py", "tests/aws/services/sns/test_sns.py", "tests/aws/services/sns/test_sns.snapshot.json"]
bug: error when re-subscribing an SQS queue to an SNS topic with RawMessageDelivery=true
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When re-subscribing an SQS queue to an SNS topic with the attribute "RawMessageDelivery=true", LocalStack throws an error. It occurs both with my AWS Java client and with the awslocal CLI; it works fine without this attribute. ### Expected Behavior Since the queue is already subscribed to the topic with the attribute "RawMessageDelivery=true", it should be a no-op and simply return the same subscription ARN. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run localstack/localstack #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) ``` awslocal sns create-topic --name localstack-topic awslocal sqs create-queue --queue-name localstack-queue awslocal sns subscribe --topic-arn arn:aws:sns:eu-central-1:000000000000:localstack-topic --protocol sqs --notification-endpoint arn:aws:sqs:eu-central-1:000000000000:localstack-queue --return-subscription-arn --attributes RawMessageDelivery=True awslocal sns subscribe --topic-arn arn:aws:sns:eu-central-1:000000000000:localstack-topic --protocol sqs --notification-endpoint arn:aws:sqs:eu-central-1:000000000000:localstack-queue --return-subscription-arn --attributes RawMessageDelivery=True ``` ### Environment ```markdown aws-cli/1.22.34 Python/3.10.12 Linux/6.2.0-32-generic botocore/1.31.47 ``` ### Anything else? Subscription created with "RawMessageDelivery=true": ![image](https://github.com/localstack/localstack/assets/98832464/faac719d-b5fe-4920-a756-28abc17fd0e7) but if the subscription is created without attributes it works well: ![image](https://github.com/localstack/localstack/assets/98832464/44ed7afd-a3b2-4a19-aff4-cba7b75ad52c) Maybe it's related to https://github.com/localstack/localstack/issues/9058
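A boto3 sketch of the double-subscribe check; resource names, region and credentials are assumptions.

```python
import boto3

endpoint = "http://localhost:4566"
kwargs = dict(endpoint_url=endpoint, region_name="eu-central-1",
              aws_access_key_id="test", aws_secret_access_key="test")
sns = boto3.client("sns", **kwargs)
sqs = boto3.client("sqs", **kwargs)

topic_arn = sns.create_topic(Name="localstack-topic")["TopicArn"]
queue_url = sqs.create_queue(QueueName="localstack-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"])["Attributes"]["QueueArn"]

attrs = {"RawMessageDelivery": "true"}
first = sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn,
                      Attributes=attrs, ReturnSubscriptionArn=True)["SubscriptionArn"]
# Subscribing again with identical attributes should be a no-op that returns
# the same ARN; per the report it raises an error instead.
second = sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn,
                       Attributes=attrs, ReturnSubscriptionArn=True)["SubscriptionArn"]
print(first == second)
```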
https://github.com/localstack/localstack/issues/9154
https://github.com/localstack/localstack/pull/9167
ab92c16fe1ee811cd60b3c3ca617eaf7221629c2
35e4d879e77d160fcc856f914b387258efb1fdff
"2023-09-14T16:00:42Z"
python
"2023-09-15T16:57:38Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,030
["localstack/services/dynamodb/provider.py", "tests/aws/services/dynamodb/test_dynamodb.py"]
bug: describeTimeToLive should throw resource not found
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Calling `describeTimeToLive` on a non-existent table returns a result rather than throwing a resource-not-found error. Test case pseudo-code: ``` try { dynamo.describeTimeToLive(new DescribeTimeToLiveRequest() .withTableName("non-existent-table")); fail(); } catch (ResourceNotFoundException ex) { // Pass } ``` ### Expected Behavior Expecting the call to throw ### How are you starting LocalStack? Custom (please describe below) ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) TestContainers from Java ### Environment ```markdown - OS: MacOS - LocalStack: 2.2.0 ``` ### Anything else? _No response_
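The same check in Python/boto3 for anyone not on the Java SDK; the endpoint and credentials are assumptions.

```python
import boto3

endpoint = "http://localhost:4566"
ddb = boto3.client("dynamodb", endpoint_url=endpoint, region_name="us-east-1",
                   aws_access_key_id="test", aws_secret_access_key="test")

try:
    resp = ddb.describe_time_to_live(TableName="non-existent-table")
    print("unexpected success:", resp["TimeToLiveDescription"])
except ddb.exceptions.ResourceNotFoundException:
    print("ResourceNotFoundException raised, as on AWS")
```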
https://github.com/localstack/localstack/issues/9030
https://github.com/localstack/localstack/pull/9038
5217b87e3bfa39b8f052cd888d2a78c0ec193555
691841665d681e0db6ddc00df29fc80236e05e02
"2023-08-31T04:02:38Z"
python
"2023-09-01T06:45:11Z"
closed
localstack/localstack
https://github.com/localstack/localstack
9,016
["localstack/services/lambda_/invocation/execution_environment.py", "tests/aws/services/lambda_/functions/lambda_role.py", "tests/aws/services/lambda_/test_lambda.py", "tests/aws/services/lambda_/test_lambda.snapshot.json"]
bug: Lambda - Failed to start runtime environment
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Localstack failed to start runtime environment because parameter validation failed: ``` localstack_main | 2023-08-30T02:29:12.992 WARN --- [-g:$LATEST_0] l.s.l.i.runtime_environmen : Failed to start runtime environment for ID=347b7abb3b6dfaa3ff89b7e8370ab803 with: Parameter validation failed: localstack_main | Invalid length for parameter RoleSessionName, value: 1, valid min length: 2 localstack_main | Traceback (most recent call last): localstack_main | File "/opt/code/localstack/localstack/services/lambda_/invocation/runtime_environment.py", line 172, in start localstack_main | self.runtime_executor.start(self.get_environment_variables()) localstack_main | File "/opt/code/localstack/localstack/services/lambda_/invocation/runtime_environment.py", line 92, in get_environment_variables localstack_main | credentials = self.get_credentials() localstack_main | File "/opt/code/localstack/localstack/services/lambda_/invocation/runtime_environment.py", line 268, in get_credentials localstack_main | return sts_client.assume_role( localstack_main | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/client.py", line 535, in _api_call localstack_main | return self._make_api_call(operation_name, kwargs) localstack_main | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/client.py", line 936, in _make_api_call localstack_main | request_dict = self._convert_to_request_dict( localstack_main | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/client.py", line 1007, in _convert_to_request_dict localstack_main | request_dict = self._serializer.serialize_to_request( localstack_main | File "/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/validate.py", line 381, in serialize_to_request localstack_main | raise ParamValidationError(report=report.generate_report()) localstack_main | botocore.exceptions.ParamValidationError: Parameter validation failed: localstack_main | Invalid length for parameter RoleSessionName, value: 1, valid min length: 2 localstack_main | 2023-08-30T02:29:12.994 DEBUG --- [-g:$LATEST_0] l.u.c.docker_sdk_client : Stopping container: localstack-main-lambda-g-347b7abb3b6dfaa3ff89b7e8370ab803 localstack_main | 2023-08-30T02:29:13.010 DEBUG --- [-g:$LATEST_0] l.s.l.i.runtime_environmen : Unable to shutdown runtime handler '347b7abb3b6dfaa3ff89b7e8370ab803' ``` ### Expected Behavior Localstack should start runtime environment and lambda ### How are you starting LocalStack? 
With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker-compose up ``` version: "3.8" services: localstack: container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}" image: localstack/localstack:2.2.0 ports: - "127.0.0.1:4566:4566" # LocalStack Gateway - "127.0.0.1:4510-4559:4510-4559" # external services port range environment: - DEBUG=1 - DOCKER_HOST=unix:///var/run/docker.sock - SERVICES=sqs,lambda,s3 volumes: - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack" - "/var/run/docker.sock:/var/run/docker.sock" - "./localstack:/etc/localstack" - "./resources:/resources" networks: - localstack-network networks: localstack-network: name: localstack-network ``` #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) ready.d/init.sh ``` #!/bin/bash awslocal lambda create-function \ --region us-east-1 \ --function-name g \ --runtime=nodejs16.x \ --timeout=100 \ --memory-size 256 \ --handler handleCLSPreprocessing.handler \ --role arn:aws:iam::000000000000:role/lambda-role \ --zip-file fileb:///resources/lambda.zip \ --environment file:///resources/environment.json awslocal sqs create-queue \ --region us-east-1 \ --queue-name cls-preprocessing-fifo \ --attributes '{"VisibilityTimeout": "600"}' awslocal sqs create-queue \ --region us-east-1 \ --queue-name lambda-out \ --attributes '{"VisibilityTimeout": "900"}' awslocal sqs create-queue \ --region us-east-1 \ --queue-name lambda-in \ --attributes '{"VisibilityTimeout": "600"}' awslocal lambda create-event-source-mapping \ --region us-east-1 \ --function-name g \ --batch-size 1 \ --event-source-arn arn:aws:sqs::000000000000:lambda-in \ --starting-position LATEST awslocal s3 mb s3://bucket ``` index.js: ``` exports.handler = async (event, context) => { const record = event.Records ? event.Records[0] : event.records[0]; let body = JSON.parse(record.body); console.log(`body=>${body}`); }; ``` environment.json: ``` { "Variables": { "ENVIRONMENT": "IT" } } ``` After LocalStack started, send message to `lambda-in` queue: ``` aws sqs send-message --endpoint-url http://localhost:4566 --queue-url http://localhost:4566/000000000000/lambda-in --message-body file://message.json ``` ### Environment ```markdown - OS: macOS Ventura 13.5.1 - LocalStack: 2.2.0 ``` ### Anything else? Similar setup works with LocalStack 1.4.0
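A much smaller boto3 sketch that appears to hit the same code path, since the traceback suggests the RoleSessionName is derived from the one-character function name; the in-memory zip, handler code and role ARN are my assumptions, not the reporter's setup.

```python
import io
import zipfile

import boto3

endpoint = "http://localhost:4566"
lam = boto3.client("lambda", endpoint_url=endpoint, region_name="us-east-1",
                   aws_access_key_id="test", aws_secret_access_key="test")

# Package a trivial Node.js handler in memory (hypothetical handler, for illustration only).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("index.js", "exports.handler = async () => 'ok';")

# A single-character function name is what seems to yield the too-short RoleSessionName.
lam.create_function(
    FunctionName="g",
    Runtime="nodejs16.x",
    Role="arn:aws:iam::000000000000:role/lambda-role",
    Handler="index.handler",
    Code={"ZipFile": buf.getvalue()},
)
lam.get_waiter("function_active_v2").wait(FunctionName="g")
print(lam.invoke(FunctionName="g", Payload=b"{}")["StatusCode"])
```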
https://github.com/localstack/localstack/issues/9016
https://github.com/localstack/localstack/pull/9328
48a09e9372bf7f414464995f7626aec0a2b614c3
3e6da2a1abd4242225b645e5bc14907f8eec3f6b
"2023-08-30T03:12:39Z"
python
"2023-10-12T09:18:40Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,984
["setup.cfg"]
enhancement request: Update available ELB security policies
### Is there an existing issue for this? - [X] I have searched the existing issues ### Enhancement description Recent SSL policies such as "ELBSecurityPolicy-TLS13-1-2-2021-06" are not yet available in LocalStack. Please consider updating the list of policies. Thank you. ### 🧑‍💻 Implementation This needs an update in `moto.elbv2.responses`. Also see <https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html#describe-ssl-policies> ### Anything else? _No response_
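A minimal boto3 check for the missing policy; the endpoint and credentials are assumptions, and per the report the newer TLS 1.3 policies are absent from the list moto returns.

```python
import boto3

endpoint = "http://localhost:4566"
elbv2 = boto3.client("elbv2", endpoint_url=endpoint, region_name="us-east-1",
                     aws_access_key_id="test", aws_secret_access_key="test")

# On AWS this returns the policy definition; against LocalStack it fails as long
# as the policy is missing from the hard-coded list.
resp = elbv2.describe_ssl_policies(Names=["ELBSecurityPolicy-TLS13-1-2-2021-06"])
print([p["Name"] for p in resp["SslPolicies"]])
```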
https://github.com/localstack/localstack/issues/8984
https://github.com/localstack/localstack/pull/9216
4c55a469f3eb5c459b321dbe7718ae8ebfb5cb4a
41ae1274d10fe2ff094aeab4da7b4199be132f65
"2023-08-25T06:54:08Z"
python
"2023-09-25T10:15:50Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,928
["localstack/http/request.py", "tests/unit/http_/test_request.py"]
bug: LocalStack.NET all S3 operations fails on LocalStack 2.2
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Hello, I'm the maintainer of the [LocalStack.NET](https://github.com/localstack-dotnet/localstack-dotnet-client) client library. LocalStack.NET is a thin wrapper around the official [aws-sdk-net](https://github.com/aws/aws-sdk-net) just like [localstack-python-client](https://github.com/localstack/localstack-python-client). I've discovered issues related to any S3-related operation when testing against LocalStack version 2.2. Notably, these operations are successful with previous versions like 1.3.1 and 2.0. Related Issue: https://github.com/localstack-dotnet/localstack-dotnet-client/issues/24 **Description of the Problem:** The operation fails when attempting any S3-related operation using the LocalStack.NET library. I think the logs provided below might offer more insight. ## Test Scenario **Scenario Link:** [S3Service_Should_Create_A_Bucket](https://github.com/localstack-dotnet/localstack-dotnet-client/blob/v1.4.1/tests/LocalStack.Client.Functional.Tests/Scenarios/S3/BaseS3Scenario.cs#L18) ```csharp [Fact] public async Task S3Service_Should_Create_A_Bucket() { var bucketName = Guid.NewGuid().ToString(); PutBucketResponse putBucketResponse = await CreateTestBucket(bucketName); Assert.Equal(HttpStatusCode.OK, putBucketResponse.HttpStatusCode); } protected Task<PutBucketResponse> CreateTestBucket(string bucketName = null) { var putBucketRequest = new PutBucketRequest { BucketName = bucketName ?? BucketName, UseClientRegion = true }; return AmazonS3.PutBucketAsync(putBucketRequest); } ``` The test scenario above generates a GUID and uses it as the bucket name when creating a bucket. The above is just an example. All other S3-related operations also fail. 
## LocalStack Testcontainer setup ```csharp public static LocalStackBuilder LocalStackBuilder(string version) { return new LocalStackBuilder().WithImage($"localstack/localstack:{version}") .WithName($"localStack-{version}-{Guid.NewGuid().ToString().ToLower()}") .WithEnvironment("DOCKER_HOST", "unix:///var/run/docker.sock") .WithEnvironment("DEBUG", "1") .WithEnvironment("LS_LOG", "trace-internal") .WithPortBinding(4566, true) .WithCleanUp(true); } ``` ## Mitmproxy raw request ```http PUT http://s3.eu-central-1.amazonaws.com/d9137298-3452-477d-9c0c-30f7ec52f120/ HTTP/1.1 User-Agent: aws-sdk-dotnet-coreclr/3.7.102.0 aws-sdk-dotnet-core/3.7.200.17 .NET_Core/7.0.9 OS/Microsoft_Windows_10.0.22621 ClientAsync amz-sdk-invocation-id: 148e1966-07dd-403e-bcb1-b3ea27097db6 amz-sdk-request: attempt=1; max=5 x-amz-security-token: my-AwsSessionToken Host: s3.eu-central-1.amazonaws.com X-Amz-Date: 20230817T074856Z X-Amz-Content-SHA256: 270d7010e28541d025cba79779722247d464e2edd8b448b42fdf618a477e0432 Authorization: AWS4-HMAC-SHA256 Credential=my-AwsAccessKeyId/20230817/eu-central-1/s3/aws4_request, SignedHeaders=content-md5;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=280f9675d5de93f8e991d81b1f0796d70ad031d4d777027398662d48ac7db162 Content-Length: 156 Content-Type: application/xml Content-MD5: 3TSjRZmsSt7Rb6d4WVpfOQ== <CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><LocationConstraint>eu-central-1</LocationConstraint></CreateBucketConfiguration> ``` ## Localstack container logs ```bash 2023-08-17T07:48:54.135648905Z 2023-08-17T07:48:54.135 DEBUG --- [ asgi_gw_0] l.aws.serving.wsgi : PUT s3.eu-central-1.amazonaws.comhttp://s3.eu-central-1.amazonaws.com/6d34cfe9-0400-494e-9551-dfaf9798030e/ 2023-08-17T07:48:54.140016928Z 2023-08-17T07:48:54.136 ERROR --- [ asgi_gw_0] l.aws.handlers.logging : exception during call chain 2023-08-17T07:48:54.140025268Z Traceback (most recent call last): 2023-08-17T07:48:54.140027608Z File "/opt/code/localstack/localstack/aws/chain.py", line 90, in handle 2023-08-17T07:48:54.140029348Z handler(self, self.context, response) 2023-08-17T07:48:54.140030938Z File "/opt/code/localstack/localstack/aws/handlers/routes.py", line 27, in __call__ 2023-08-17T07:48:54.140032538Z router_response = self.router.dispatch(context.request) 2023-08-17T07:48:54.140033908Z File "/opt/code/localstack/localstack/http/router.py", line 443, in dispatch 2023-08-17T07:48:54.140035318Z handler, args = matcher.match(get_raw_path(request), method=request.method) 2023-08-17T07:48:54.140036718Z File "/opt/code/localstack/.venv/lib/python3.10/site-packages/werkzeug/routing/map.py", line 635, in match 2023-08-17T07:48:54.140038268Z raise RequestRedirect( 2023-08-17T07:48:54.140039648Z werkzeug.routing.exceptions.RequestRedirect: 308 Permanent Redirect: http://s3.eu-central-1.amazonaws.com/http:/s3.eu-central-1.amazonaws.com/6d34cfe9-0400-494e-9551-dfaf9798030e/ 2023-08-17T07:48:54.140233396Z 2023-08-17T07:48:54.140 DEBUG --- [ asgi_gw_0] l.aws.protocol.serializer : No accept header given. Using request's Content-Type (application/xml) as preferred response Content-Type. 
2023-08-17T07:48:54.142987739Z 2023-08-17T07:48:54.142 INFO --- [ asgi_gw_0] localstack.request.http : PUT /6d34cfe9-0400-494e-9551-dfaf9798030e/ => 500; Request(b'<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><LocationConstraint>eu-central-1</LocationConstraint></CreateBucketConfiguration>', headers={'User-Agent': 'aws-sdk-dotnet-coreclr/3.7.102.0 aws-sdk-dotnet-core/3.7.200.17 .NET_Core/7.0.9 OS/Microsoft_Windows_10.0.22621 ClientAsync', 'amz-sdk-invocation-id': '148e1966-07dd-403e-bcb1-b3ea27097db6', 'amz-sdk-request': 'attempt=1; max=5', 'x-amz-security-token': 'my-AwsSessionToken', 'Host': 's3.eu-central-1.amazonaws.com', 'X-Amz-Date': '20230817T074851Z', 'X-Amz-Content-SHA256': '270d7010e28541d025cba79779722247d464e2edd8b448b42fdf618a477e0432', 'Authorization': 'AWS4-HMAC-SHA256 Credential=my-AwsAccessKeyId/20230817/eu-central-1/s3/aws4_request, SignedHeaders=content-md5;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=64a84fae5e3cc966e56f4561053b2d74c1893dcc27347eeda8c02c274b8d5bbd', 'Content-Length': '156', 'Content-Type': 'application/xml', 'Content-MD5': '3TSjRZmsSt7Rb6d4WVpfOQ==', 'x-localstack-tgt-api': 's3'}); Response(b'<?xml version=\'1.0\' encoding=\'utf-8\'?>\n<Error><Code>InternalError</Code><Message>exception while calling s3 with unknown operation: Traceback (most recent call last):\n File "/opt/code/localstack/localstack/aws/chain.py", line 90, in handle\n handler(self, self.context, response)\n File "/opt/code/localstack/localstack/aws/handlers/routes.py", line 27, in __call__\n router_response = self.router.dispatch(context.request)\n File "/opt/code/localstack/localstack/http/router.py", line 443, in dispatch\n handler, args = matcher.match(get_raw_path(request), method=request.method)\n File "/opt/code/localstack/.venv/lib/python3.10/site-packages/werkzeug/routing/map.py", line 635, in match\n raise RequestRedirect(\nwerkzeug.routing.exceptions.RequestRedirect: 308 Permanent Redirect: http://s3.eu-central-1.amazonaws.com/http:/s3.eu-central-1.amazonaws.com/6d34cfe9-0400-494e-9551-dfaf9798030e/\n</Message><RequestId>d5ced6d2-d015-4c6c-8c83-0fef33501c7a</RequestId></Error>', headers={'Content-Type': 'application/xml', 'Content-Length': '981', 'x-amz-request-id': 'd5ced6d2-d015-4c6c-8c83-0fef33501c7a', 'x-amz-id-2': 's9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234=', 'Connection': 'close'}) 2023-08-17T07:48:54.584494830Z 2023-08-17T07:48:54.584 DEBUG --- [ asgi_gw_0] l.aws.serving.wsgi : PUT s3.eu-central-1.amazonaws.comhttp://s3.eu-central-1.amazonaws.com/6d34cfe9-0400-494e-9551-dfaf9798030e/ 2023-08-17T07:48:54.584981886Z 2023-08-17T07:48:54.584 ERROR --- [ asgi_gw_0] l.aws.handlers.logging : exception during call chain 2023-08-17T07:48:54.584999376Z Traceback (most recent call last): 2023-08-17T07:48:54.585002226Z File "/opt/code/localstack/localstack/aws/chain.py", line 90, in handle 2023-08-17T07:48:54.585004066Z handler(self, self.context, response) 2023-08-17T07:48:54.585005506Z File "/opt/code/localstack/localstack/aws/handlers/routes.py", line 27, in __call__ 2023-08-17T07:48:54.585006916Z router_response = self.router.dispatch(context.request) 2023-08-17T07:48:54.585008256Z File "/opt/code/localstack/localstack/http/router.py", line 443, in dispatch 2023-08-17T07:48:54.585009746Z handler, args = matcher.match(get_raw_path(request), method=request.method) 2023-08-17T07:48:54.585011126Z File 
"/opt/code/localstack/.venv/lib/python3.10/site-packages/werkzeug/routing/map.py", line 635, in match 2023-08-17T07:48:54.585012576Z raise RequestRedirect( 2023-08-17T07:48:54.585013906Z werkzeug.routing.exceptions.RequestRedirect: 308 Permanent Redirect: http://s3.eu-central-1.amazonaws.com/http:/s3.eu-central-1.amazonaws.com/6d34cfe9-0400-494e-9551-dfaf9798030e/ 2023-08-17T07:48:54.585077746Z 2023-08-17T07:48:54.584 DEBUG --- [ asgi_gw_0] l.aws.protocol.serializer : No accept header given. Using request's Content-Type (application/xml) as preferred response Content-Type. 2023-08-17T07:48:54.585715972Z 2023-08-17T07:48:54.585 INFO --- [ asgi_gw_0] localstack.request.http : PUT /6d34cfe9-0400-494e-9551-dfaf9798030e/ => 500; Request(b'<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><LocationConstraint>eu-central-1</LocationConstraint></CreateBucketConfiguration>', headers={'User-Agent': 'aws-sdk-dotnet-coreclr/3.7.102.0 aws-sdk-dotnet-core/3.7.200.17 .NET_Core/7.0.9 OS/Microsoft_Windows_10.0.22621 ClientAsync', 'amz-sdk-invocation-id': '148e1966-07dd-403e-bcb1-b3ea27097db6', 'amz-sdk-request': 'attempt=2; max=5', 'x-amz-security-token': 'my-AwsSessionToken', 'Host': 's3.eu-central-1.amazonaws.com', 'X-Amz-Date': '20230817T074854Z', 'X-Amz-Content-SHA256': '270d7010e28541d025cba79779722247d464e2edd8b448b42fdf618a477e0432', 'Authorization': 'AWS4-HMAC-SHA256 Credential=my-AwsAccessKeyId/20230817/eu-central-1/s3/aws4_request, SignedHeaders=content-length;content-md5;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=8bf2a988e09b2cf55f7eace8c036f9858ec742f7cf1b9ce6f3e3be16a4d769a6', 'Content-Length': '156', 'Content-Type': 'application/xml', 'Content-MD5': '3TSjRZmsSt7Rb6d4WVpfOQ==', 'x-localstack-tgt-api': 's3'}); Response(b'<?xml version=\'1.0\' encoding=\'utf-8\'?>\n<Error><Code>InternalError</Code><Message>exception while calling s3 with unknown operation: Traceback (most recent call last):\n File "/opt/code/localstack/localstack/aws/chain.py", line 90, in handle\n handler(self, self.context, response)\n File "/opt/code/localstack/localstack/aws/handlers/routes.py", line 27, in __call__\n router_response = self.router.dispatch(context.request)\n File "/opt/code/localstack/localstack/http/router.py", line 443, in dispatch\n handler, args = matcher.match(get_raw_path(request), method=request.method)\n File "/opt/code/localstack/.venv/lib/python3.10/site-packages/werkzeug/routing/map.py", line 635, in match\n raise RequestRedirect(\nwerkzeug.routing.exceptions.RequestRedirect: 308 Permanent Redirect: http://s3.eu-central-1.amazonaws.com/http:/s3.eu-central-1.amazonaws.com/6d34cfe9-0400-494e-9551-dfaf9798030e/\n</Message><RequestId>11e660d6-ff95-490b-9e59-4af093e60738</RequestId></Error>', headers={'Content-Type': 'application/xml', 'Content-Length': '981', 'x-amz-request-id': '11e660d6-ff95-490b-9e59-4af093e60738', 'x-amz-id-2': 's9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234=', 'Connection': 'close'}) ``` ### Expected Behavior Using the LocalStack.NET library, all S3-related operations should be successful, consistent with the behavior observed on LocalStack versions 1.3.1 and 2.0. ### How are you starting LocalStack? Custom (please describe below) ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) Using TestContainers with the specific configurations mentioned above. 
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) Using the [LocalStack.NET](https://github.com/localstack-dotnet/localstack-dotnet-client) client library mentioned above. I can provide an example project for this specific case as well. ### Environment ```markdown - OS: Windows 11 x64 - LocalStack: 2.2 - Docker: Docker Desktop running on WSL 2 ``` ### Anything else? _No response_
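For reference, a minimal boto3 sketch of the failing request (boto3 rather than the .NET SDK used in the report; endpoint, credentials and bucket name are placeholders, and the Python SDK may not exercise exactly the same host-header path the .NET client took):

```python
import boto3

# Sketch only: boto3 stand-in for the .NET CreateBucket call from the report.
# Endpoint, credentials and bucket name are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="eu-central-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# CreateBucket with a LocationConstraint body is the request that the report
# shows ending in a 500 (caused by a 308 redirect inside the router).
s3.create_bucket(
    Bucket="my-test-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)
```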
https://github.com/localstack/localstack/issues/8928
https://github.com/localstack/localstack/pull/8962
736e741dc44cf2360f383046364af248517ca682
5a21709f9a1f11322da0b81b3fb5e096bf3303d9
"2023-08-17T08:14:13Z"
python
"2023-08-22T19:33:17Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,924
["localstack/http/request.py", "tests/unit/http_/test_request.py"]
bug: LocalStack.NET SQS DeleteQueue fails on LocalStack 2.2
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Hello, I'm the maintainer of the [LocalStack.NET](https://github.com/localstack-dotnet/localstack-dotnet-client) client library. LocalStack.NET is a thin wrapper around the official [aws-sdk-net](https://github.com/aws/aws-sdk-net) just like [localstack-python-client](https://github.com/localstack/localstack-python-client). I've discovered a bug related to deleting SQS queues when testing against LocalStack version 2.2. Notably, this operation is successful with previous versions like 1.3.1 and 2.0. Related Issue: https://github.com/localstack-dotnet/localstack-dotnet-client/issues/23 **Description of the Problem:** When attempting to delete an SQS queue using the LocalStack.NET library, the operation fails (specifically, the test fails during the queue deletion step). There isn't a clear error message, but the logs provided below might offer more insight. ## Test Scenario **Scenario Link:** [AmazonSqsService_Should_Delete_A_Queue](https://github.com/localstack-dotnet/localstack-dotnet-client/blob/v1.4.1/tests/LocalStack.Client.Functional.Tests/Scenarios/SQS/BaseSqsScenario.cs#L31) ```csharp [Fact] public async Task AmazonSqsService_Should_Delete_A_Queue() { var guid = Guid.NewGuid(); var queueName = $"{guid}.fifo"; var dlQueueName = $"{guid}-DLQ.fifo"; CreateQueueResponse createQueueResponse = await CreateFifoQueueWithRedrive(queueName, dlQueueName); DeleteQueueResponse deleteQueueResponse = await DeleteQueue(createQueueResponse.QueueUrl); Assert.Equal(HttpStatusCode.OK, deleteQueueResponse.HttpStatusCode); } protected async Task<CreateQueueResponse> CreateFifoQueueWithRedrive(string queueName = null, string dlQueueName = null) { var createDlqRequest = new CreateQueueRequest { QueueName = dlQueueName ?? TestDlQueueName, Attributes = new Dictionary<string, string> { { "FifoQueue", "true" }, } }; CreateQueueResponse createDlqResult = await AmazonSqs.CreateQueueAsync(createDlqRequest); GetQueueAttributesResponse attributes = await AmazonSqs.GetQueueAttributesAsync(new GetQueueAttributesRequest { QueueUrl = createDlqResult.QueueUrl, AttributeNames = new List<string> { "QueueArn" } }); var redrivePolicy = new { maxReceiveCount = "1", deadLetterTargetArn = attributes.Attributes["QueueArn"] }; var createQueueRequest = new CreateQueueRequest { QueueName = queueName ?? TestQueueName, Attributes = new Dictionary<string, string> { { "FifoQueue", "true" }, { "RedrivePolicy", JsonSerializer.Serialize(redrivePolicy) }, } }; return await AmazonSqs.CreateQueueAsync(createQueueRequest); } protected async Task<DeleteQueueResponse> DeleteQueue(string queueUrl) { var deleteQueueRequest = new DeleteQueueRequest(queueUrl); return await AmazonSqs.DeleteQueueAsync(deleteQueueRequest); } ``` ### Queue Creation - A FIFO Dead Letter Queue (DLQ) is created with the name `{guid}-DLQ.fifo`. This is done using the CreateQueueRequest with the FifoQueue attribute set to true. - Retrieve the QueueArn attribute of the DLQ just created. - Define a Redrive policy for the main queue. This policy specifies that after 1 unsuccessful receive attempt, the message should be sent to the DLQ. The ARN of the DLQ is set as the deadLetterTargetArn in this policy. - A FIFO main queue with the name `{guid}.fifo` is then created. This main queue is created with the attributes FifoQueue set to true and RedrivePolicy set to the policy created in the previous step. 
### Queue Deletion - Attempt to delete the main SQS queue using its URL through the DeleteQueue function. - The test fails at the queue deletion step. ## LocalStack Testcontainer setup ```csharp public static LocalStackBuilder LocalStackBuilder(string version) { return new LocalStackBuilder().WithImage($"localstack/localstack:{version}") .WithName($"localStack-{version}-{Guid.NewGuid().ToString().ToLower()}") .WithEnvironment("DOCKER_HOST", "unix:///var/run/docker.sock") .WithEnvironment("DEBUG", "1") .WithEnvironment("LS_LOG", "trace-internal") .WithPortBinding(4566, true) .WithCleanUp(true); } ``` ## Mitmproxy raw request ```http POST http://sqs.eu-central-1.amazonaws.com/000000000000/c85c8994-510d-4f6e-801d-4b7753932583.fifo HTTP/1.1 User-Agent: aws-sdk-dotnet-coreclr/3.7.200.18 aws-sdk-dotnet-core/3.7.200.17 .NET_Core/7.0.9 OS/Microsoft_Windows_10.0.22621 ClientAsync amz-sdk-invocation-id: 87581c5c-6d6c-40ae-99bb-d2f979236c08 amz-sdk-request: attempt=1; max=5 x-amz-security-token: my-AwsSessionToken Host: sqs.eu-central-1.amazonaws.com X-Amz-Date: 20230815T202851Z X-Amz-Content-SHA256: 62743f562eb66199a917d414916c2c9ea1c35200bcaccc4b457e5ad914a1d7ed Authorization: AWS4-HMAC-SHA256 Credential=my-AwsAccessKeyId/20230815/eu-central-1/sqs/aws4_request, SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=1e97ce548572fc8dc72f91ff36b259d479f86fcd6706d2ee14e1fb266cbf7713 Content-Length: 37 Content-Type: application/x-www-form-urlencoded; charset=utf-8 Action=DeleteQueue&Version=2012-11-05 ``` ## Localstack container logs ```bash 2023-08-15T20:28:51.761985583Z 2023-08-15T20:28:51.761 INFO --- [ asgi_gw_0] localstack.request.aws : AWS sqs.CreateQueue => 200; CreateQueueRequest({'QueueName': 'c85c8994-510d-4f6e-801d-4b7753932583-DLQ.fifo', 'Attributes': {'FifoQueue': 'true'}}, headers={'User-Agent': 'aws-sdk-dotnet-coreclr/3.7.200.18 aws-sdk-dotnet-core/3.7.200.17 .NET_Core/7.0.9 OS/Microsoft_Windows_10.0.22621 ClientAsync', 'amz-sdk-invocation-id': '87581c5c-6d6c-40ae-99bb-d2f979236c08', 'amz-sdk-request': 'attempt=1; max=5', 'x-amz-security-token': 'my-AwsSessionToken', 'Host': 'sqs.eu-central-1.amazonaws.com', 'X-Amz-Date': '20230815T202851Z', 'X-Amz-Content-SHA256': '62743f562eb66199a917d414916c2c9ea1c35200bcaccc4b457e5ad914a1d7ed', 'Authorization': 'AWS4-HMAC-SHA256 Credential=my-AwsAccessKeyId/20230815/eu-central-1/sqs/aws4_request, SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=1e97ce548572fc8dc72f91ff36b259d479f86fcd6706d2ee14e1fb266cbf7713', 'Content-Length': '143', 'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'x-localstack-tgt-api': 'sqs', 'x-moto-account-id': '000000000000'}); CreateQueueResult({'QueueUrl': 'http://sqs.eu-central-1.amazonaws.com/000000000000/c85c8994-510d-4f6e-801d-4b7753932583-DLQ.fifo'}, headers={'Content-Type': 'text/xml', 'Content-Length': '385', 'Connection': 'close', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH', 'Access-Control-Allow-Headers': 'authorization,cache-control,content-length,content-md5,content-type,etag,location,x-amz-acl,x-amz-content-sha256,x-amz-date,x-amz-request-id,x-amz-security-token,x-amz-tagging,x-amz-target,x-amz-user-agent,x-amz-version-id,x-amzn-requestid,x-localstack-target,amz-sdk-invocation-id,amz-sdk-request', 'Access-Control-Expose-Headers': 'etag,x-amz-version-id'}) 2023-08-15T20:28:51.781754058Z 2023-08-15T20:28:51.781 DEBUG --- [ 
asgi_gw_0] l.aws.serving.wsgi : POST sqs.eu-central-1.amazonaws.comhttp://sqs.eu-central-1.amazonaws.com/000000000000/c85c8994-510d-4f6e-801d-4b7753932583-DLQ.fifo 2023-08-15T20:28:51.783127552Z 2023-08-15T20:28:51.783 DEBUG --- [ asgi_gw_0] l.aws.protocol.serializer : No accept header given. Using request's Content-Type (application/x-www-form-urlencoded; charset=utf-8) as preferred response Content-Type. 2023-08-15T20:28:51.783548737Z 2023-08-15T20:28:51.783 INFO --- [ asgi_gw_0] localstack.request.aws : AWS sqs.GetQueueAttributes => 200; GetQueueAttributesRequest({'QueueUrl': None, 'AttributeNames': ['QueueArn']}, headers={'User-Agent': 'aws-sdk-dotnet-coreclr/3.7.200.18 aws-sdk-dotnet-core/3.7.200.17 .NET_Core/7.0.9 OS/Microsoft_Windows_10.0.22621 ClientAsync', 'amz-sdk-invocation-id': '012b6c13-4afa-4555-b090-e78392da68ec', 'amz-sdk-request': 'attempt=1; max=5', 'x-amz-security-token': 'my-AwsSessionToken', 'Host': 'sqs.eu-central-1.amazonaws.com', 'X-Amz-Date': '20230815T202851Z', 'X-Amz-Content-SHA256': '6af491181b9d88aef50bfaefb28263f67b71658ce5c3820d3db0f4de1aeed054', 'Authorization': 'AWS4-HMAC-SHA256 Credential=my-AwsAccessKeyId/20230815/eu-central-1/sqs/aws4_request, SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=01f05262c5cb1615577f075b22310dfebcbbfff71b5597ebb7116f7189f4db9e', 'Content-Length': '69', 'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'x-localstack-tgt-api': 'sqs', 'x-moto-account-id': '000000000000'}); GetQueueAttributesResult({'Attributes': {'QueueArn': 'arn:aws:sqs:eu-central-1:000000000000:c85c8994-510d-4f6e-801d-4b7753932583-DLQ.fifo'}}, headers={'Content-Type': 'text/xml', 'Content-Length': '438', 'Connection': 'close', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH', 'Access-Control-Allow-Headers': 'authorization,cache-control,content-length,content-md5,content-type,etag,location,x-amz-acl,x-amz-content-sha256,x-amz-date,x-amz-request-id,x-amz-security-token,x-amz-tagging,x-amz-target,x-amz-user-agent,x-amz-version-id,x-amzn-requestid,x-localstack-target,amz-sdk-invocation-id,amz-sdk-request', 'Access-Control-Expose-Headers': 'etag,x-amz-version-id'}) 2023-08-15T20:28:51.801249377Z 2023-08-15T20:28:51.801 DEBUG --- [ asgi_gw_0] l.aws.serving.wsgi : POST sqs.eu-central-1.amazonaws.comhttp://sqs.eu-central-1.amazonaws.com/ 2023-08-15T20:28:51.802294585Z 2023-08-15T20:28:51.802 DEBUG --- [ asgi_gw_0] l.services.sqs.provider : creating queue key=c85c8994-510d-4f6e-801d-4b7753932583.fifo attributes={'FifoQueue': 'true', 'RedrivePolicy': '{"maxReceiveCount":"1","deadLetterTargetArn":"arn:aws:sqs:eu-central-1:000000000000:c85c8994-510d-4f6e-801d-4b7753932583-DLQ.fifo"}'} tags=None 2023-08-15T20:28:51.802379784Z 2023-08-15T20:28:51.802 DEBUG --- [ asgi_gw_0] l.aws.protocol.serializer : No accept header given. Using request's Content-Type (application/x-www-form-urlencoded; charset=utf-8) as preferred response Content-Type. 
2023-08-15T20:28:51.802886258Z 2023-08-15T20:28:51.802 INFO --- [ asgi_gw_0] localstack.request.aws : AWS sqs.CreateQueue => 200; CreateQueueRequest({'QueueName': 'c85c8994-510d-4f6e-801d-4b7753932583.fifo', 'Attributes': {'FifoQueue': 'true', 'RedrivePolicy': '{"maxReceiveCount":"1","deadLetterTargetArn":"arn:aws:sqs:eu-central-1:000000000000:c85c8994-510d-4f6e-801d-4b7753932583-DLQ.fifo"}'}}, headers={'User-Agent': 'aws-sdk-dotnet-coreclr/3.7.200.18 aws-sdk-dotnet-core/3.7.200.17 .NET_Core/7.0.9 OS/Microsoft_Windows_10.0.22621 ClientAsync', 'amz-sdk-invocation-id': '461c8546-c9aa-4a10-8ddd-db4d158c79e0', 'amz-sdk-request': 'attempt=1; max=5', 'x-amz-security-token': 'my-AwsSessionToken', 'Host': 'sqs.eu-central-1.amazonaws.com', 'X-Amz-Date': '20230815T202851Z', 'X-Amz-Content-SHA256': '1dc9cc6f3965102c46707d37a16b9e67ca44e96240792183727e004b50e33c2c', 'Authorization': 'AWS4-HMAC-SHA256 Credential=my-AwsAccessKeyId/20230815/eu-central-1/sqs/aws4_request, SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=c36009b0b3e34498c2b70d3bdc73f3dcf69b047de6b2e751d6154ad632026761', 'Content-Length': '356', 'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'x-localstack-tgt-api': 'sqs', 'x-moto-account-id': '000000000000'}); CreateQueueResult({'QueueUrl': 'http://sqs.eu-central-1.amazonaws.com/000000000000/c85c8994-510d-4f6e-801d-4b7753932583.fifo'}, headers={'Content-Type': 'text/xml', 'Content-Length': '381', 'Connection': 'close', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH', 'Access-Control-Allow-Headers': 'authorization,cache-control,content-length,content-md5,content-type,etag,location,x-amz-acl,x-amz-content-sha256,x-amz-date,x-amz-request-id,x-amz-security-token,x-amz-tagging,x-amz-target,x-amz-user-agent,x-amz-version-id,x-amzn-requestid,x-localstack-target,amz-sdk-invocation-id,amz-sdk-request', 'Access-Control-Expose-Headers': 'etag,x-amz-version-id'}) 2023-08-15T20:28:51.808230863Z 2023-08-15T20:28:51.808 DEBUG --- [ asgi_gw_0] l.aws.serving.wsgi : POST sqs.eu-central-1.amazonaws.comhttp://sqs.eu-central-1.amazonaws.com/000000000000/c85c8994-510d-4f6e-801d-4b7753932583.fifo 2023-08-15T20:28:51.809442889Z 2023-08-15T20:28:51.808 ERROR --- [ asgi_gw_0] l.aws.handlers.logging : exception during call chain 2023-08-15T20:28:51.809457699Z Traceback (most recent call last): 2023-08-15T20:28:51.809460599Z File "/opt/code/localstack/localstack/aws/chain.py", line 90, in handle 2023-08-15T20:28:51.809462449Z handler(self, self.context, response) 2023-08-15T20:28:51.809463979Z File "/opt/code/localstack/localstack/aws/handlers/service.py", line 123, in __call__ 2023-08-15T20:28:51.809465439Z handler(chain, context, response) 2023-08-15T20:28:51.809466779Z File "/opt/code/localstack/localstack/aws/handlers/service.py", line 93, in __call__ 2023-08-15T20:28:51.809468169Z skeleton_response = self.skeleton.invoke(context) 2023-08-15T20:28:51.809469499Z File "/opt/code/localstack/localstack/aws/skeleton.py", line 154, in invoke 2023-08-15T20:28:51.809470869Z return self.dispatch_request(context, instance) 2023-08-15T20:28:51.809472219Z File "/opt/code/localstack/localstack/aws/skeleton.py", line 166, in dispatch_request 2023-08-15T20:28:51.809473639Z result = handler(context, instance) or {} 2023-08-15T20:28:51.809474999Z File "/opt/code/localstack/localstack/aws/forwarder.py", line 60, in _call 2023-08-15T20:28:51.809483999Z return handler(context, req) 
2023-08-15T20:28:51.809485819Z File "/opt/code/localstack/localstack/aws/skeleton.py", line 118, in __call__ 2023-08-15T20:28:51.809487309Z return self.fn(*args, **kwargs) 2023-08-15T20:28:51.809488709Z File "/opt/code/localstack/localstack/services/sqs/provider.py", line 759, in delete_queue 2023-08-15T20:28:51.809494308Z account_id, region, name = parse_queue_url(queue_url) 2023-08-15T20:28:51.809495798Z File "/opt/code/localstack/localstack/services/sqs/utils.py", line 32, in parse_queue_url 2023-08-15T20:28:51.809497358Z url = urlparse(queue_url.rstrip("/")) 2023-08-15T20:28:51.809498768Z AttributeError: 'NoneType' object has no attribute 'rstrip' 2023-08-15T20:28:51.809582387Z 2023-08-15T20:28:51.809 DEBUG --- [ asgi_gw_0] l.aws.protocol.serializer : No accept header given. Using request's Content-Type (application/x-www-form-urlencoded; charset=utf-8) as preferred response Content-Type. 2023-08-15T20:28:51.810237589Z 2023-08-15T20:28:51.810 INFO --- [ asgi_gw_0] localstack.request.aws : AWS sqs.DeleteQueue => 500 (InternalError); DeleteQueueRequest({'QueueUrl': None}, headers={'User-Agent': 'aws-sdk-dotnet-coreclr/3.7.200.18 aws-sdk-dotnet-core/3.7.200.17 .NET_Core/7.0.9 OS/Microsoft_Windows_10.0.22621 ClientAsync', 'amz-sdk-invocation-id': 'aeb4c6dc-5a11-48d0-9e47-fa12ba44d49e', 'amz-sdk-request': 'attempt=1; max=5', 'x-amz-security-token': 'my-AwsSessionToken', 'Host': 'sqs.eu-central-1.amazonaws.com', 'X-Amz-Date': '20230815T202851Z', 'X-Amz-Content-SHA256': '234f83d4860d1a65d3197e883f74f64ea74ddb51defe9f1e5d6d1f592e3d93d5', 'Authorization': 'AWS4-HMAC-SHA256 Credential=my-AwsAccessKeyId/20230815/eu-central-1/sqs/aws4_request, SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=169401817cbf42532a02242bde240e6c75ad1fa9d111ff907b037afae39dd449', 'Content-Length': '37', 'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'x-localstack-tgt-api': 'sqs', 'x-moto-account-id': '000000000000'}); InternalError(exception while calling sqs.DeleteQueue: Traceback (most recent call last): 2023-08-15T20:28:51.810258679Z File "/opt/code/localstack/localstack/aws/chain.py", line 90, in handle 2023-08-15T20:28:51.810261319Z handler(self, self.context, response) 2023-08-15T20:28:51.810262919Z File "/opt/code/localstack/localstack/aws/handlers/service.py", line 123, in __call__ 2023-08-15T20:28:51.810264469Z handler(chain, context, response) 2023-08-15T20:28:51.810265779Z File "/opt/code/localstack/localstack/aws/handlers/service.py", line 93, in __call__ 2023-08-15T20:28:51.810267199Z skeleton_response = self.skeleton.invoke(context) 2023-08-15T20:28:51.810268559Z File "/opt/code/localstack/localstack/aws/skeleton.py", line 154, in invoke 2023-08-15T20:28:51.810269959Z return self.dispatch_request(context, instance) 2023-08-15T20:28:51.810271259Z File "/opt/code/localstack/localstack/aws/skeleton.py", line 166, in dispatch_request 2023-08-15T20:28:51.810272659Z result = handler(context, instance) or {} 2023-08-15T20:28:51.810273939Z File "/opt/code/localstack/localstack/aws/forwarder.py", line 60, in _call 2023-08-15T20:28:51.810275319Z return handler(context, req) 2023-08-15T20:28:51.810276589Z File "/opt/code/localstack/localstack/aws/skeleton.py", line 118, in __call__ 2023-08-15T20:28:51.810277959Z return self.fn(*args, **kwargs) 2023-08-15T20:28:51.810279709Z File "/opt/code/localstack/localstack/services/sqs/provider.py", line 759, in delete_queue 2023-08-15T20:28:51.810285069Z account_id, region, name = 
parse_queue_url(queue_url) 2023-08-15T20:28:51.810286729Z File "/opt/code/localstack/localstack/services/sqs/utils.py", line 32, in parse_queue_url 2023-08-15T20:28:51.810288239Z url = urlparse(queue_url.rstrip("/")) 2023-08-15T20:28:51.810289609Z AttributeError: 'NoneType' object has no attribute 'rstrip' 2023-08-15T20:28:51.810291039Z , headers={'Content-Type': 'text/xml', 'Content-Length': '1634', 'Connection': 'close', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH', 'Access-Control-Allow-Headers': 'authorization,cache-control,content-length,content-md5,content-type,etag,location,x-amz-acl,x-amz-content-sha256,x-amz-date,x-amz-request-id,x-amz-security-token,x-amz-tagging,x-amz-target,x-amz-user-agent,x-amz-version-id,x-amzn-requestid,x-localstack-target,amz-sdk-invocation-id,amz-sdk-request', 'Access-Control-Expose-Headers': 'etag,x-amz-version-id'}) 2023-08-15T20:28:52.228157360Z 2023-08-15T20:28:52.227 DEBUG --- [ asgi_gw_0] l.aws.serving.wsgi : POST sqs.eu-central-1.amazonaws.comhttp://sqs.eu-central-1.amazonaws.com/000000000000/c85c8994-510d-4f6e-801d-4b7753932583.fifo 2023-08-15T20:28:52.229107389Z 2023-08-15T20:28:52.228 ERROR --- [ asgi_gw_0] l.aws.handlers.logging : exception during call chain 2023-08-15T20:28:52.229124879Z Traceback (most recent call last): 2023-08-15T20:28:52.229128259Z File "/opt/code/localstack/localstack/aws/chain.py", line 90, in handle 2023-08-15T20:28:52.229138019Z handler(self, self.context, response) 2023-08-15T20:28:52.229139999Z File "/opt/code/localstack/localstack/aws/handlers/service.py", line 123, in __call__ 2023-08-15T20:28:52.229141999Z handler(chain, context, response) 2023-08-15T20:28:52.229143739Z File "/opt/code/localstack/localstack/aws/handlers/service.py", line 93, in __call__ 2023-08-15T20:28:52.229145609Z skeleton_response = self.skeleton.invoke(context) 2023-08-15T20:28:52.229147329Z File "/opt/code/localstack/localstack/aws/skeleton.py", line 154, in invoke 2023-08-15T20:28:52.229149099Z return self.dispatch_request(context, instance) 2023-08-15T20:28:52.229150849Z File "/opt/code/localstack/localstack/aws/skeleton.py", line 166, in dispatch_request 2023-08-15T20:28:52.229152729Z result = handler(context, instance) or {} 2023-08-15T20:28:52.229154499Z File "/opt/code/localstack/localstack/aws/forwarder.py", line 60, in _call 2023-08-15T20:28:52.229156319Z return handler(context, req) 2023-08-15T20:28:52.229158069Z File "/opt/code/localstack/localstack/aws/skeleton.py", line 118, in __call__ 2023-08-15T20:28:52.229159919Z return self.fn(*args, **kwargs) 2023-08-15T20:28:52.229161649Z File "/opt/code/localstack/localstack/services/sqs/provider.py", line 759, in delete_queue 2023-08-15T20:28:52.229163449Z account_id, region, name = parse_queue_url(queue_url) 2023-08-15T20:28:52.229165199Z File "/opt/code/localstack/localstack/services/sqs/utils.py", line 32, in parse_queue_url 2023-08-15T20:28:52.229174549Z url = urlparse(queue_url.rstrip("/")) 2023-08-15T20:28:52.229176619Z AttributeError: 'NoneType' object has no attribute 'rstrip' ``` ### Expected Behavior Using the LocalStack.NET library, the SQS queue should be deleted successfully when calling the DeleteQueue function, consistent with the behavior observed on LocalStack versions 1.3.1 and 2.0. ### How are you starting LocalStack? 
Custom (please describe below) ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) Using TestContainers with the specific configurations mentioned above. #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) Using the [LocalStack.NET](https://github.com/localstack-dotnet/localstack-dotnet-client) client library mentioned above. I can provide an example project for this specific case as well. ### Environment ```markdown - OS: Windows 11 x64 - LocalStack: 2.2 - Docker: Docker Desktop running on WSL 2 ``` ### Anything else? _No response_
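For reference, a rough boto3 equivalent of the failing test flow (the report itself uses the .NET SDK; queue names and endpoint here are placeholders):

```python
import json

import boto3

# boto3 stand-in for the .NET test scenario: FIFO DLQ + FIFO main queue with a
# redrive policy, then DeleteQueue. Names and endpoint are placeholders.
sqs = boto3.client(
    "sqs",
    endpoint_url="http://localhost:4566",
    region_name="eu-central-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# Create the dead-letter queue and read its ARN.
dlq_url = sqs.create_queue(
    QueueName="demo-DLQ.fifo", Attributes={"FifoQueue": "true"}
)["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Create the main FIFO queue with a redrive policy pointing at the DLQ.
queue_url = sqs.create_queue(
    QueueName="demo.fifo",
    Attributes={
        "FifoQueue": "true",
        "RedrivePolicy": json.dumps(
            {"maxReceiveCount": "1", "deadLetterTargetArn": dlq_arn}
        ),
    },
)["QueueUrl"]

# This is the step that fails in the report: DeleteQueue by URL.
sqs.delete_queue(QueueUrl=queue_url)
```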
https://github.com/localstack/localstack/issues/8924
https://github.com/localstack/localstack/pull/8962
736e741dc44cf2360f383046364af248517ca682
5a21709f9a1f11322da0b81b3fb5e096bf3303d9
"2023-08-16T13:34:26Z"
python
"2023-08-22T19:33:17Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,921
["setup.cfg"]
bug: Route53 implementation doesn’t return caller reference associated with hosted zone
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior The [Route53 documentation for the `create-hosted-zone` action](https://botocore.amazonaws.com/v1/documentation/api/latest/reference/services/route53/client/create_hosted_zone.html#Route53.Client.create_hosted_zone) indicates that hosted zone creation requests must be accompanied by a unique “caller reference id”. The [Route53 documentation for the `get-hosted-zone` action](https://botocore.amazonaws.com/v1/documentation/api/latest/reference/services/route53/client/get_hosted_zone.html#Route53.Client.get_hosted_zone) indicates that the caller reference above is expected to be returned along with the hosted zone object. This is not the behaviour I observe when creating and subsequently querying a hosted zone on LocalStack: ``` aws --endpoint-url=http://localstack:4566 route53 create-hosted-zone --name test.me --caller-reference test-me-uniq { "Location": "https://route53.amazonaws.com/2013-04-01/hostedzone/LU2N4H8R9C047W3", "HostedZone": { "Id": "/hostedzone/LU2N4H8R9C047W3", "Name": "test.me.", "Config": { "PrivateZone": false }, "ResourceRecordSetCount": 2 }, "ChangeInfo": { "Id": "/change/C1PA6795UKMFR9", "Status": "INSYNC", "SubmittedAt": "2017-03-15T01:36:41.958000Z" }, "DelegationSet": { "Id": "", "NameServers": [ "ns-2048.awsdns-64.com", "ns-2049.awsdns-65.net", "ns-2050.awsdns-66.org", "ns-2051.awsdns-67.co.uk" ] } } ``` ``` aws --endpoint-url=http://localstack:4566 route53 get-hosted-zone --id /hostedzone/LU2N4H8R9C047W3 { "HostedZone": { "Id": "/hostedzone/LU2N4H8R9C047W3", "Name": "test.me.", "Config": { "PrivateZone": false }, "ResourceRecordSetCount": 2 }, "DelegationSet": { "Id": "", "NameServers": [ "ns-2048.awsdns-64.com", "ns-2049.awsdns-65.net", "ns-2050.awsdns-66.org", "ns-2051.awsdns-67.co.uk" ] } } ``` The `HostedZone.CallerReference` field is missing from the object returned by the second query above. ### Expected Behavior The object returned by the `get-hosted-zone` action should include the `CallerReference` field with which the hosted zone was created. In the reproducer provided, that would be `test-me-uniq`. References: - https://botocore.amazonaws.com/v1/documentation/api/latest/reference/services/route53/client/create_hosted_zone.html#Route53.Client.create_hosted_zone - https://botocore.amazonaws.com/v1/documentation/api/latest/reference/services/route53/client/get_hosted_zone.html#Route53.Client.get_hosted_zone ### How are you starting LocalStack?
With a docker-compose file ### Steps To Reproduce I made sure to pull the latest Docker image before creating the issue: ```console $ docker pull localstack/localstack ``` #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) Docker compose section: ```yaml localstack: container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}" image: localstack/localstack ports: - "0.0.0.0:4566:4566" - "0.0.0.0:4571:4571" environment: - SERVICES=${SERVICES- } - DEBUG=${DEBUG- } - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- } - DOCKER_HOST=unix:///var/run/docker.sock - HOSTNAME=localstack - HOSTNAME_EXTERNAL=localstack - DATA_DIR=/tmp/localstack/data - PERSISTENCE=1 volumes: - localstack:/tmp/localstack - "/var/run/docker.sock:/var/run/docker.sock" networks: - testing_net ``` #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) ```console $ aws --endpoint-url=http://localstack:4566 route53 create-hosted-zone --name test.me --caller-reference test-me-uniq ``` ```console $ aws --endpoint-url=http://localstack:4566 route53 get-hosted-zone --id … ``` ### Environment ```markdown - OS: `18.04.6 LTS (Bionic Beaver)` - LocalStack: `latest` ``` ### Anything else? _No response_
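A small boto3 sketch of the reported gap, mirroring the CLI reproducer above (endpoint and credentials are placeholders):

```python
import boto3

# Create a hosted zone with a caller reference, then fetch it back and check
# whether the CallerReference field is present in the response.
r53 = boto3.client(
    "route53",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

created = r53.create_hosted_zone(Name="test.me", CallerReference="test-me-uniq")
zone_id = created["HostedZone"]["Id"]

fetched = r53.get_hosted_zone(Id=zone_id)
# Against AWS this prints "test-me-uniq"; against the affected LocalStack build
# the key is reported missing, so this prints None.
print(fetched["HostedZone"].get("CallerReference"))
```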
https://github.com/localstack/localstack/issues/8921
https://github.com/localstack/localstack/pull/9359
de140aa4da845f7737028a80fe796f4cd93ed008
f2c1052af834e16b8e6210f21559b3a24de6d9ac
"2023-08-16T12:20:01Z"
python
"2023-10-18T06:07:34Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,916
["localstack/services/ec2/provider.py"]
bug: ec2 describe-availability-zones with --zone-ids returns all zones
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Similar to [bug: ec2 describe-availability-zones with --zone-name returns all zones](https://github.com/localstack/localstack/issues/4715) but with `--zone-ids`. ``` aws --endpoint http://localhost:4566 ec2 describe-availability-zones { "AvailabilityZones": [ { "State": "available", "Messages": [], "RegionName": "eu-west-1", "ZoneName": "eu-west-1a", "ZoneId": "euw1-az3", "ZoneType": "availability-zone" }, { "State": "available", "Messages": [], "RegionName": "eu-west-1", "ZoneName": "eu-west-1b", "ZoneId": "euw1-az1", "ZoneType": "availability-zone" }, { "State": "available", "Messages": [], "RegionName": "eu-west-1", "ZoneName": "eu-west-1c", "ZoneId": "euw1-az2", "ZoneType": "availability-zone" } ] } ``` ``` aws --endpoint http://localhost:4566 ec2 describe-availability-zones --zone-ids euw1-az2 { "AvailabilityZones": [ { "State": "available", "Messages": [], "RegionName": "eu-west-1", "ZoneName": "eu-west-1a", "ZoneId": "euw1-az3", "ZoneType": "availability-zone" }, { "State": "available", "Messages": [], "RegionName": "eu-west-1", "ZoneName": "eu-west-1b", "ZoneId": "euw1-az1", "ZoneType": "availability-zone" }, { "State": "available", "Messages": [], "RegionName": "eu-west-1", "ZoneName": "eu-west-1c", "ZoneId": "euw1-az2", "ZoneType": "availability-zone" } ] } ``` ### Expected Behavior ``` aws --endpoint http://localhost:4566 ec2 describe-availability-zones --zone-ids euw1-az2 { "AvailabilityZones": [ { "State": "available", "Messages": [], "RegionName": "eu-west-1", "ZoneName": "eu-west-1c", "ZoneId": "euw1-az2", "ZoneType": "availability-zone" } ] } ``` Similar to real AWS: ``` aws ec2 describe-availability-zones --zone-ids euw1-az2 { "AvailabilityZones": [ { "State": "available", "OptInStatus": "opt-in-not-required", "Messages": [], "RegionName": "eu-west-1", "ZoneName": "eu-west-1c", "ZoneId": "euw1-az2", "GroupName": "eu-west-1", "NetworkBorderGroup": "eu-west-1", "ZoneType": "availability-zone" } ] } ``` ### How are you starting LocalStack? With a `docker run` command ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run localstack/localstack #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) awslocal s3 mb s3://mybucket ### Environment ```markdown - OS:Ubuntu 22.04 - LocalStack: 2.2.0 ``` ### Anything else? _No response_
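A boto3 sketch of the ignored filter, mirroring the CLI calls above (endpoint and credentials are placeholders):

```python
import boto3

# DescribeAvailabilityZones with a ZoneIds filter; the report says the filter
# is ignored and all zones of the region come back.
ec2 = boto3.client(
    "ec2",
    endpoint_url="http://localhost:4566",
    region_name="eu-west-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

zones = ec2.describe_availability_zones(ZoneIds=["euw1-az2"])["AvailabilityZones"]
# Expected: exactly one entry (euw1-az2); observed against LocalStack 2.2.0: all three.
print([z["ZoneId"] for z in zones])
```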
https://github.com/localstack/localstack/issues/8916
https://github.com/localstack/localstack/pull/9024
b11bf2f0189cf5b7b69479d8eaa35248dc3b7365
5eba6785bb39a45331a01255f41f433e778793a2
"2023-08-15T17:58:28Z"
python
"2023-08-31T08:28:37Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,895
["localstack/services/kinesis/kinesis_mock_server.py"]
bug: KINESIS_INITIALIZE_STREAMS not set in latest docker image
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Setting `KINESIS_INITIALIZE_STREAMS` no longer takes effect in the newest localstack/localstack:latest image. You can see some logs here: https://github.com/etspaceman/kinesis4cats/actions/runs/5835796510/job/15828063287 `initializeStreams` is null: ``` [info] kinesis.mock.KinesisMockService$ 2023-08-11T18:33:21.836147Z contextId=1ce47d17-0b9a-4ad5-927d-4e9fed8b5829, cacheConfig={"awsAccountId":"000000000000","awsRegion":"us-east-1","createStreamDuration":{"length":0,"unit":"MILLISECONDS"},"deleteStreamDuration":{"length":0,"unit":"MILLISECONDS"},"deregisterStreamConsumerDuration":{"length":0,"unit":"MILLISECONDS"},"initializeStreams":null,"logLevel":"INFO","mergeShardsDuration":{"length":0,"unit":"MILLISECONDS"},"onDemandStreamCountLimit":10,"persistConfig":{"fileName":"000000000000.json","interval":{"length":5,"unit":"SECONDS"},"loadIfExists":true,"path":"../../../var/lib/localstack/tmp/state/kinesis","shouldPersist":true},"registerStreamConsumerDuration":{"length":0,"unit":"MILLISECONDS"},"shardLimit":100,"splitShardDuration":{"length":0,"unit":"MILLISECONDS"},"startStreamEncryptionDuration":{"length":0,"unit":"MILLISECONDS"},"stopStreamEncryptionDuration":{"length":0,"unit":"MILLISECONDS"},"updateShardCountDuration":{"length":0,"unit":"MILLISECONDS"}} Logging Cache Config ``` Reverting to 2.2.0 resolves the issue. ### Expected Behavior `KINESIS_INITIALIZE_STREAMS` takes effect and the configured streams are created ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run localstack/localstack:latest #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) awslocal kinesis list-streams ### Environment ```markdown - OS: ubuntu - LocalStack: latest ``` ### Anything else? _No response_
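A quick boto3 check for whether the variable took effect, assuming the container was started with `KINESIS_INITIALIZE_STREAMS` pointing at a stream named `my-stream` (stream name and endpoint are placeholders):

```python
import boto3

# List the streams that should have been pre-created on startup.
kinesis = boto3.client(
    "kinesis",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# On the affected image this list comes back empty, because the variable never
# reaches the kinesis-mock config ("initializeStreams": null in the log above).
print(kinesis.list_streams()["StreamNames"])
```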
https://github.com/localstack/localstack/issues/8895
https://github.com/localstack/localstack/pull/8896
e900daabd662a90700b4c93d86c2762655be29cb
166bfcc00b37125196b15b9553bfabbe16466811
"2023-08-11T18:58:19Z"
python
"2023-08-15T12:33:37Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,841
["localstack/http/client.py", "tests/aws/s3/test_s3.py", "tests/aws/s3/test_s3.snapshot.json", "tests/aws/s3/test_s3_notifications_lambda.py", "tests/unit/http_/test_proxy.py"]
bug: S3 Content-Length not set with AWS SDK for Java
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When reading a file from a bucket with the Java AWS SDK, the content-length is not set in the GetObjectResponse. ![image](https://github.com/localstack/localstack/assets/90579580/4c89a103-b064-4061-a91e-2012b19b838f) ![image](https://github.com/localstack/localstack/assets/90579580/8f52adf0-1205-4d6f-8fd0-6d026cac6db6) ### Expected Behavior The content-length is included in the response ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce This is the content of my docker-compose file: ` s3: container_name: "s3test" image: "localstack/localstack:2.2.0" ports: - "4566:4566" volumes: - ./src/test/local/s3/data:/var/lib/localstack/s3 - ./src/test/local/s3/init-s3.sh:/etc/localstack/init/ready.d/init-s3.sh` In the init script I create a bucket and put a file into it: `awslocal s3api create-bucket --bucket test-bucket aws s3api put-object --body /var/lib/localstack/s3/test.png --bucket test-bucket --key storage/test.png --content-type image/png --endpoint-url http://localhost:4566` Then I query the file using the getObject method of the S3Client of the Java AWS SDK ### Environment ```markdown - OS: Windows 10 - LocalStack: 2.2.0 ``` ### Anything else? This behavior starts with version 2.2.0. With version 2.1.0 everything works as expected.
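A rough boto3 equivalent of the Java SDK call in the report (bucket and key mirror the init script above; endpoint and credentials are placeholders):

```python
import boto3

# Fetch the object created by the init script and inspect its content length.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

obj = s3.get_object(Bucket="test-bucket", Key="storage/test.png")
# On 2.1.0 this is the object size; on the affected 2.2.0 build the Java SDK
# sees no Content-Length header on the response.
print(obj["ContentLength"])
```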
https://github.com/localstack/localstack/issues/8841
https://github.com/localstack/localstack/pull/8845
6969bcbf31716576bd62b0ec743d8055ada4740b
52e0d14aef9b80a1e35f08f45ca48c2d72249a5c
"2023-08-07T14:27:13Z"
python
"2023-08-08T02:16:06Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,824
["localstack/services/dynamodb/provider.py", "tests/aws/test_dynamodb.py"]
enhancement request: DynamoDB missing operation 'scan' and 'query'
### Is there an existing issue for this? - [X] I have searched the existing issues ### Enhancement description Based on a [customer request](https://localstack-community.slack.com/archives/C0598EHPT1N). Implement the `scan` and `query` operations for global tables in the DynamoDB service. Test sample: ```shell # Create a table awslocal dynamodb create-table --table-name global01 --key-schema AttributeName=id,KeyType=HASH --attribute-definitions AttributeName=id,AttributeType=S --billing-mode PAY_PER_REQUEST --region ap-south-1 # Create replicas awslocal dynamodb update-table --table-name global01 --replica-updates '[{"Create": {"RegionName": "eu-central-1"}}, {"Create": {"RegionName": "us-west-1"}}]' --region ap-south-1 # Table can be operated on in all replicated regions awslocal dynamodb list-tables --region eu-central-1 awslocal dynamodb put-item --table-name global01 --item '{"id":{"S":"foo"}}' --region eu-central-1 awslocal dynamodb describe-table --table-name global01 --query 'Table.ItemCount' --region ap-south-1 # Get all replicas awslocal dynamodb describe-table --table-name global01 --query 'Table.Replicas' --region us-west-1 # Query in all regions awslocal dynamodb query --table-name global01 --key-condition-expression 'id = :id' --expression-attribute-values '{":id":{"S":"foo"}}' --region ap-south-1 awslocal dynamodb query --table-name global01 --key-condition-expression 'id = :id' --expression-attribute-values '{":id":{"S":"foo"}}' --region eu-central-1 awslocal dynamodb query --table-name global01 --key-condition-expression 'id = :id' --expression-attribute-values '{":id":{"S":"foo"}}' --region us-west-1 ``` ### 🧑‍💻 Implementation _No response_ ### Anything else? _No response_
https://github.com/localstack/localstack/issues/8824
https://github.com/localstack/localstack/pull/8905
9c710ee906afcaacfd8e67b6d9e26734c1114e26
d4c06f93d0356a61a645841cf9d9f23ac1ded0ec
"2023-08-04T14:47:13Z"
python
"2023-08-16T07:14:14Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,812
["localstack/services/s3/provider.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"]
bug: s3.PutBucketLogging fails with 500 when deleting bucket
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Given the following minimal terraform code that creates two s3 buckets where one of the buckets is used for logging ```tf terraform { required_version = ">= 1.5.2" required_providers { aws = { source = "hashicorp/aws" version = "~> 5.8.0" } } } resource "aws_s3_bucket" "bucket" { bucket = "main" } resource "aws_s3_bucket" "logging" { bucket = "logging" } resource "aws_s3_bucket_logging" "bucket" { bucket = aws_s3_bucket.bucket.id target_bucket = aws_s3_bucket.logging.id target_prefix = "log/" } ``` Using `tflocal` to `tflocal apply` the resources to localstack running in docker-compose works as expected but `tflocal destroy` will become stuck at ``` aws_s3_bucket_logging.bucket: Destroying... [id=main] aws_s3_bucket_logging.bucket: Still destroying... [id=main, 10s elapsed] ``` when reviewing the localstack output the following error is observed repeatedly ``` localstack_main | 2023-08-03T00:01:55.897 INFO --- [ asgi_gw_2] localstack.request.aws : AWS s3.PutBucketLogging => 500 (InternalError) localstack_main | 2023-08-03T00:01:55.899 INFO --- [ asgi_gw_1] localstack.request.http : PUT / => 500 localstack_main | 2023-08-03T00:01:55.949 ERROR --- [ asgi_gw_2] l.aws.handlers.logging : exception during call chain: 'NoneType' object has no attribute 'get' ``` ### Expected Behavior `tflocal destroy` to destroy the `aws_s3_bucket_logging` logging resource. Using terraform instead of tflocal with the same terraform configuration succeeds during apply and destroy ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce 1. Create a minimal terraform module with a `main.tf` that includes the terraform code above 2. from the terraform module execute `tflocal init` 3. from the terraform module execute `tflocal apply` 4. observe the resources are created in localstack 5. from the terraform module execute `tflocal destroy` 6. observe the 500 error in the localstack logs and the terraform output is stuck on "Still destroying..." ### Environment ```markdown - OS: macOS 13.4.1 - LocalStack: latest ``` ### Anything else? _No response_
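For reference, a boto3 sketch of the API sequence the Terraform apply/destroy appears to map to; the destroy step seems to correspond to `PutBucketLogging` with an empty `BucketLoggingStatus`, which is the request returning 500 in the logs (bucket names and endpoint are placeholders):

```python
import boto3

# Sketch only: reproduce the logging enable/disable sequence directly against S3.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

s3.create_bucket(Bucket="main")
s3.create_bucket(Bucket="logging")

# Enable logging (the apply step).
s3.put_bucket_logging(
    Bucket="main",
    BucketLoggingStatus={
        "LoggingEnabled": {"TargetBucket": "logging", "TargetPrefix": "log/"}
    },
)

# Disable logging (the destroy step) -- the call reported to fail with InternalError.
s3.put_bucket_logging(Bucket="main", BucketLoggingStatus={})
```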
https://github.com/localstack/localstack/issues/8812
https://github.com/localstack/localstack/pull/8813
3f017752e148df042974881cbcb9a61d242613df
e6195d70186a38ccd69da539a9d1f9aa86b5d4a1
"2023-08-03T00:08:10Z"
python
"2023-08-03T11:41:58Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,805
["localstack/config.py"]
bug: LAMBDA_KEEPALIVE_MS is not set in LocalStack container
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior There appears to be an issue with the `LAMBDA_KEEPALIVE_MS` variable (e.g. `export LAMBDA_KEEPALIVE_MS=1337`). It's not picked up when you start LocalStack with `localstack start -d`: ```shell $ export LAMBDA_KEEPALIVE_MS=1000 $ env | grep LAMBDA_KEEPALIVE_MS LAMBDA_KEEPALIVE_MS=1000 $ localstack start -d $ docker inspect localstack_main -f '{{json .Config.Env}}' | grep LAMBDA_KEEPALIVE_MS ``` ### Expected Behavior It should be set in the container environment. ### How are you starting LocalStack? With the `localstack` script ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run localstack/localstack #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) awslocal s3 mb s3://mybucket ### Environment ```markdown - OS: Windows (WSL) - LocalStack: 2.2.1.dev20230731233910 ``` ### Anything else? _No response_
https://github.com/localstack/localstack/issues/8805
https://github.com/localstack/localstack/pull/8808
838f1ff0dcd374e4ca71e84fcedc10448f5d01ef
a326a16126d81973d28bfff5d00b17fc7b28109e
"2023-08-02T12:31:10Z"
python
"2023-08-02T16:50:59Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,793
["localstack/http/client.py", "localstack/services/opensearch/cluster.py", "localstack/services/s3/virtual_host.py", "tests/aws/services/opensearch/test_opensearch.py", "tests/unit/http_/test_client.py", "tests/unit/http_/test_proxy.py"]
bug: OpenSearch response with compressed content
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior While running `curl` to reach OpenSearch health endpoint, I receive a binary response instead of plain JSON. ```shell $ curl -vvv my-domain.us-east-1.opensearch.localhost.localstack.cloud:443/_cluster/health * Trying 127.0.0.1:443... * Connected to my-domain.us-east-1.opensearch.localhost.localstack.cloud (127.0.0.1) port 443 (#0) > GET /_cluster/health HTTP/1.1 > Host: my-domain.us-east-1.opensearch.localhost.localstack.cloud:443 > User-Agent: curl/8.0.1 > Accept: */* > < HTTP/1.1 200 < content-type: application/json; charset=UTF-8 < content-encoding: gzip < Connection: close < date: Tue, 01 Aug 2023 08:46:35 GMT < server: hypercorn-h11 < Transfer-Encoding: chunked < Warning: Binary output can mess up your terminal. Use "--output -" to tell Warning: curl to output it to your terminal anyway, or consider "--output Warning: <FILE>" to save to a file. * Failure writing output to destination * Failed reading the chunked-encoded stream * Closing connection 0 ``` ### Expected Behavior I expect to receive a JSON response without the need to use `--compressed` option in the `curl` command. curl -vvv --compressed my-domain.us-east-1.opensearch.localhost.localstack.cloud:443/_cluster/health * Trying 127.0.0.1:443... * Connected to my-domain.us-east-1.opensearch.localhost.localstack.cloud (127.0.0.1) port 443 (#0) > GET /_cluster/health HTTP/1.1 > Host: my-domain.us-east-1.opensearch.localhost.localstack.cloud:443 > User-Agent: curl/7.81.0 > Accept: */* > Accept-Encoding: deflate, gzip, br, zstd > * Mark bundle as not supporting multiuse < HTTP/1.1 200 < content-type: application/json; charset=UTF-8 < content-encoding: gzip < Connection: close < date: Tue, 01 Aug 2023 08:47:06 GMT < server: hypercorn-h11 < Transfer-Encoding: chunked < * Closing connection 0 {"cluster_name":"opensearch","status":"green","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"discovered_master":true,"discovered_cluster_manager":true,"active_primary_shards":0,"active_shards":0,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0} ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) ```yaml version: '3.8' services: localstack: container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}" image: localstack/localstack-pro:latest ports: - "127.0.0.1:53:53" - "127.0.0.1:53:53/udp" - "127.0.0.1:443:443" - "127.0.0.1:4510-4559:4510-4559" - "127.0.0.1:4566:4566" environment: - DEBUG=1 - LS_LOG=trace - LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY-} # only required for Pro - DOCKER_HOST=unix:///var/run/docker.sock - OPENSEARCH_ENDPOINT_STRATEGY=domain volumes: - "/var/run/docker.sock:/var/run/docker.sock" - "./volume:/var/lib/localstack" # mount Docker volume healthcheck: disable: true ``` #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) ```shell awslocal opensearch create-domain --domain-name my-domain curl -vvv my-domain.us-east-1.opensearch.localhost.localstack.cloud:443/_cluster/health ``` ### Environment ```markdown - OS: Windows 10 - LocalStack: 2.2.1.dev20230731233910 ``` ### Anything else? 
Following the guide from the documentation page: [OpenSearch](https://docs.localstack.cloud/user-guide/aws/opensearch/)
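A small Python check of the reported behaviour, assuming the domain from the reproducer above already exists (the `requests` dependency and the plain-HTTP-on-port-443 access mirror the curl calls):

```python
import requests

# Explicitly refuse compressed encodings; a conforming server should then reply
# without "Content-Encoding: gzip" in the response headers.
url = (
    "http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:443"
    "/_cluster/health"
)
resp = requests.get(url, headers={"Accept-Encoding": "identity"})

# On the affected build the Content-Encoding header is still "gzip".
print(resp.status_code, resp.headers.get("Content-Encoding"))
print(resp.text[:120])
```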
https://github.com/localstack/localstack/issues/8793
https://github.com/localstack/localstack/pull/9026
0b1f2492735e5a8d1089a007415a13e0ad402a4e
f872187f2cab8a8cabb808694194560d7e526b9e
"2023-08-01T10:55:27Z"
python
"2023-08-30T15:51:00Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,777
["localstack/services/cloudformation/resource_provider.py", "localstack/services/scheduler/resource_providers/__init__.py", "localstack/services/scheduler/resource_providers/aws_scheduler_schedule.py", "localstack/services/scheduler/resource_providers/aws_scheduler_schedule.schema.json", "localstack/services/scheduler/resource_providers/aws_scheduler_schedulegroup.py", "localstack/services/scheduler/resource_providers/aws_scheduler_schedulegroup.schema.json", "tests/aws/services/cloudformation/resource_providers/scheduler/templates/__init__.py", "tests/aws/services/cloudformation/resource_providers/scheduler/templates/schedule.yml", "tests/aws/services/cloudformation/resource_providers/scheduler/test_scheduler.py", "tests/aws/services/cloudformation/resource_providers/scheduler/test_scheduler.snapshot.json"]
enhancement request: AWS::Scheduler::Schedule and AWS::Scheduler::ScheduleGroup
### Is there an existing issue for this? - [X] I have searched the existing issues ### Enhancement description https://github.com/localstack/localstack/pull/8754 added EventBridge Scheduler support; however, LocalStack is still missing CloudFormation support for the resource types `AWS::Scheduler::Schedule` and `AWS::Scheduler::ScheduleGroup`. Follow-up on https://github.com/localstack/localstack/issues/7268 ### 🧑‍💻 Implementation https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-scheduler-schedule.html https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-scheduler-schedulegroup.html ### Anything else? _No response_
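For illustration, a minimal template that would exercise both resource types once supported, deployed through boto3 against a LocalStack endpoint (queue and role ARNs are placeholders, not real resources):

```python
import boto3

# Illustrative template covering the two requested resource types.
TEMPLATE = """
Resources:
  DemoScheduleGroup:
    Type: AWS::Scheduler::ScheduleGroup
    Properties:
      Name: demo-group
  DemoSchedule:
    Type: AWS::Scheduler::Schedule
    DependsOn: DemoScheduleGroup
    Properties:
      Name: demo-schedule
      GroupName: demo-group
      ScheduleExpression: rate(5 minutes)
      FlexibleTimeWindow:
        Mode: "OFF"
      Target:
        Arn: arn:aws:sqs:us-east-1:000000000000:demo-queue
        RoleArn: arn:aws:iam::000000000000:role/demo-scheduler-role
"""

cfn = boto3.client(
    "cloudformation",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)
cfn.create_stack(StackName="scheduler-demo", TemplateBody=TEMPLATE)
```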
https://github.com/localstack/localstack/issues/8777
https://github.com/localstack/localstack/pull/9122
bfa479306af03c71d007962efdff403d4df98386
402450dc4e142e593b92969c9cbd48dd9e4c7de3
"2023-07-31T11:24:11Z"
python
"2023-09-25T14:28:55Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,756
["localstack/services/s3/codec.py", "tests/unit/test_s3.py"]
bug: error on multipart upload to S3
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Using localstack S3 with the Amazon AWSSDK.S3 NuGet v3.7.104.11 in a dotnet application. Triggering a multipart upload against an instance of the localstack:latest image produces an error. Full stack trace `"Exception in multipart upload. Aborting.","RenderedMessage":"Exception in multipart upload. Aborting.","Exception":"Amazon.S3.AmazonS3Exception: exception while calling s3.UploadPart: invalid literal for int() with base 16: b' Agency</SourceName>\' ---> Amazon.Runtime.Internal.HttpErrorResponseException: Exception of type 'Amazon.Runtime.Internal.HttpErrorResponseException' was thrown. at Amazon.Runtime.HttpWebRequestMessage.GetResponseAsync(CancellationToken cancellationToken) at Amazon.Runtime.Internal.HttpHandler`1.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.RedirectHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.Unmarshaller.InvokeAsync[T](IExecutionContext executionContext) at Amazon.S3.Internal.AmazonS3ResponseHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.ErrorHandler.InvokeAsync[T](IExecutionContext executionContext) --- End of inner exception stack trace --- at Amazon.Runtime.Internal.HttpErrorResponseExceptionHandler.HandleExceptionStream(IRequestContext requestContext, IWebResponseData httpErrorResponse, HttpErrorResponseException exception, Stream responseStream) at Amazon.Runtime.Internal.HttpErrorResponseExceptionHandler.HandleExceptionAsync(IExecutionContext executionContext, HttpErrorResponseException exception) at Amazon.Runtime.Internal.ExceptionHandler`1.HandleAsync(IExecutionContext executionContext, Exception exception) at Amazon.Runtime.Internal.ErrorHandler.ProcessExceptionAsync(IExecutionContext executionContext, Exception exception) at Amazon.Runtime.Internal.ErrorHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.CallbackHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.Signer.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.EndpointDiscoveryHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.EndpointDiscoveryHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.CredentialsRetriever.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.RetryHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.RetryHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.CallbackHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.CallbackHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.S3.Internal.AmazonS3ExceptionHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.ErrorCallbackHandler.InvokeAsync[T](IExecutionContext executionContext) at Amazon.Runtime.Internal.MetricsHandler.InvokeAsync[T](IExecutionContext executionContext)` ### Expected Behavior The upload completes without exception. Pinning the version to localstack:2.2.0 resolves the issue. ### How are you starting LocalStack? 
With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run localstack/localstack #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) Custom dotnet application using the AWSSDK.S3 NuGet v3.7.104.11 ### Environment ```markdown - OS: Windows - LocalStack: localstack/localstack:latest (digest 5614f89737075ffe9f1d473053d87284931fb01b36b76cb7d208950b8ca1536b) ``` ### Anything else? _No response_
https://github.com/localstack/localstack/issues/8756
https://github.com/localstack/localstack/pull/8760
b4c662ee53f1e7bdbcd13d26d54d7f334f27cd9c
3922cf467dc59009f12c72d1173fb44d0f38dc02
"2023-07-27T12:20:29Z"
python
"2023-07-27T17:24:50Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,732
["localstack/utils/bootstrap.py"]
bug: localstack config validate: `Errno 2 no such file or directory: 'docker-compose'`
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior - following the [installation steps](https://docs.localstack.cloud/getting-started/installation/) fails when validating the localstack configuration - error ![image](https://github.com/localstack/localstack/assets/10324554/d82ef62e-c8b8-402e-8195-e691733cfeb1) - localstack is indeed running ![image](https://github.com/localstack/localstack/assets/10324554/44d7180a-5090-416d-afa9-c4eaaedca33e) ### Expected Behavior - localstack can consume the path I pass to `--file` ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker compose up -d localstack config validate --file ./compose.yaml ### Environment ```markdown OS: Ubuntu 20.04 LocalStack: latest ``` ### Anything else? - you can reproduce [on this branch](https://github.com/noahehall/localstack/tree/rds) if it still exists - otherwise switch to the develop branch
https://github.com/localstack/localstack/issues/8732
https://github.com/localstack/localstack/pull/8734
93dd6f796fae1b8ae9d310e80b5c5edcb47e4676
27edb72e307c4db17a74ae031f27e46359204799
"2023-07-20T21:56:10Z"
python
"2023-07-21T12:37:24Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,703
["localstack/services/s3/provider.py", "localstack/services/s3/provider_stream.py"]
bug: The latest version throws the "Expected hash not equal to calculated hash" error when uploading a part of a multipart upload.
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I am using the `latest` tag in the CI/CD process and today it started to fail because unit tests for the S3 provider that cover multipart upload throw the "Expected hash not equal to calculated hash" error on uploading a part. It worked just fine with the old version. The same unit tests work correctly with the real S3 provider as well. And there is another issue. You fixed the https://github.com/localstack/localstack/issues/8392 bug that I reported previously, but there is no version on Docker Hub that has that fix but does not have the new bug I described above. It seems that you overwrite the `latest` tag on Docker Hub, which makes it impossible to roll back to the previous version. ### Expected Behavior 1. The part upload works without throwing an exception. 2. The previous versions of Docker images are available on Docker Hub. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce 1. Initiate multipart upload with the correct MD5 hash. 2. Upload a part for which the hash was calculated. ### Environment ```markdown - OS: Windows 11 - LocalStack: latest ``` ### Anything else? _No response_
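For reference, a minimal boto3 sketch of the described flow — a multipart upload with a pre-computed `Content-MD5` for the part (bucket, key and endpoint are placeholders; the report's own tests use a different client):

```python
import base64
import hashlib

import boto3

# Sketch only: multipart upload with a Content-MD5 supplied for the part.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)
s3.create_bucket(Bucket="mpu-bucket")

part_body = b"0" * (5 * 1024 * 1024)  # non-final parts must be at least 5 MiB
part_md5 = base64.b64encode(hashlib.md5(part_body).digest()).decode()

upload_id = s3.create_multipart_upload(Bucket="mpu-bucket", Key="big-object")["UploadId"]

# UploadPart is where the "Expected hash not equal to calculated hash" error
# is reported against the affected image.
part = s3.upload_part(
    Bucket="mpu-bucket",
    Key="big-object",
    PartNumber=1,
    UploadId=upload_id,
    Body=part_body,
    ContentMD5=part_md5,
)

s3.complete_multipart_upload(
    Bucket="mpu-bucket",
    Key="big-object",
    UploadId=upload_id,
    MultipartUpload={"Parts": [{"ETag": part["ETag"], "PartNumber": 1}]},
)
```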
https://github.com/localstack/localstack/issues/8703
https://github.com/localstack/localstack/pull/8712
602a4ad0fa2c00424f59b7db2f31732e68e9154e
b0623d2dd11a8a22ef699a203b8ce54772a20ae0
"2023-07-14T22:31:03Z"
python
"2023-07-17T16:45:19Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,676
["localstack/services/s3/provider.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"]
S3 HeadObject does not respect ChecksumMode
As reported in https://github.com/localstack/localstack/issues/6659#issuecomment-1630932643_, `HeadObject` does not respect `ChecksumMode`. ```shell at 09:31:57 ➜ AWS_DEFAULT_REGION=us-west-1 aws --endpoint-url=http://localhost:4566 s3api create-bucket --bucket test-bucket --create-bucket-configuration LocationConstraint=us-west-1 { "Location": "http://test-bucket.s3.localhost.localstack.cloud:4566/" } at 09:32:13 ➜ aws --endpoint-url=http://localhost:4566 s3api put-object --bucket test-bucket --key file.json --body README.md --checksum-algorithm SHA256 { "ETag": "\"4bb2f1387449d690bff6336ea255909c\"", "ChecksumSHA256": "bzOikXXMUqRTCrpVq7Pt7LjJ0fjR3ccUfM19S/gnfqA=" } # SHA256 missing here at 09:32:20 ➜ aws --endpoint-url=http://localhost:4566 s3api head-object --bucket test-bucket --key file.json --checksum-mode Enabled { "AcceptRanges": "bytes", "LastModified": "2023-07-11T13:32:15+00:00", "ContentLength": 1455, "ETag": "\"4bb2f1387449d690bff6336ea255909c\"", "ContentType": "binary/octet-stream", "Metadata": {} } ``` Note the equivalent response from S3 actual includes the SHA256 value ```shell at 10:21:23 ➜ aws s3api head-object --bucket test-bucket --key file.json --checksum-mode Enabled { "AcceptRanges": "bytes", "Expiration": "expiry-date=\"Thu, 13 Jul 2023 00:00:00 GMT\", rule-id=\"all\"", "LastModified": "2023-07-11T14:21:09+00:00", "ContentLength": 1956, "ChecksumSHA256": "bj7fqCqtaF95M1ULwQRDJuHbL/Xd9NVMH6pua9OINcc=", "ETag": "\"f4be535d45019a4408dd5369986919f8\"", "ContentEncoding": "", "ContentType": "binary/octet-stream", "Metadata": {}, } ``` _Originally posted by @bdurrani in https://github.com/localstack/localstack/issues/6659#issuecomment-1630932643_
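A boto3 version of the CLI reproducer above (bucket, key, body and endpoint are placeholders):

```python
import boto3

# Put an object with a SHA256 checksum, then HeadObject with ChecksumMode enabled.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

s3.create_bucket(Bucket="test-bucket")
put = s3.put_object(
    Bucket="test-bucket",
    Key="file.json",
    Body=b'{"hello": "world"}',
    ChecksumAlgorithm="SHA256",
)
print("on put: ", put["ChecksumSHA256"])

head = s3.head_object(Bucket="test-bucket", Key="file.json", ChecksumMode="ENABLED")
# Against AWS this prints the same SHA256; against the affected LocalStack build
# the key is missing from the HeadObject response.
print("on head:", head.get("ChecksumSHA256"))
```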
https://github.com/localstack/localstack/issues/8676
https://github.com/localstack/localstack/pull/8677
e04eeb2148d656d832af9348bc74e37bdab3f3ff
2e8194722fc156daaad8373ec8569dae5e090863
"2023-07-11T14:54:19Z"
python
"2023-07-12T15:35:44Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,675
["localstack/services/s3/provider.py"]
bug: S3 SelectObjectContentCommand results in ProtocolSerializerError in LocalStack
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I'm using LocalStack to emulate AWS S3 and I'm trying to use the SelectObjectContentCommand from the AWS SDK to perform SQL-like queries on a CSV file stored in S3. However, I'm encountering a `ProtocolSerializerError` stating "Expected iterator for streaming event serialization": ``` localstack.aws.protocol.serializer.ProtocolSerializerError: Expected iterator for streaming event serialization. ``` This issue occurs consistently every time I try to run the command. I'm not sure if it's a bug in LocalStack, an issue with the way my local environment is set up, or potentially some other unforeseen issue. Any assistance would be greatly appreciated. ### Expected Behavior The Payload async generator should properly yield events, more specifically `event.Records`. Payload does exist, but I never get inside the for await loop: ``` const contentResponse = await this.s3Client.send(contentCommand); for await (const event of contentResponse.Payload) { debugger; } ``` ### How are you starting LocalStack? Custom (please describe below) ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) const localstackContainer = await new GenericContainer('localstack/localstack', 'latest') .withName('tests-localstack-s3') .withEnvironment({ SERVICES: 's3', DEBUG: 1 }) .withExposedPorts(4566) .start(); #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) 1. I'm storing a CSV file in S3 with the following data: ``` i,d 0,aaaa 1,aaaa 2,aaaa 3,aaaa 4,aaaa ``` 2. I'm running the following command on my s3 client: ``` new SelectObjectContentCommand({ Bucket: this.bucket, Key: this.key, Expression: 'SELECT s.d FROM S3Object s WHERE CAST(s.i AS int) >= 0 AND CAST(s.i AS int) <= 9', ExpressionType: 'SQL', InputSerialization: { CSV: { FileHeaderInfo: 'USE', }, }, OutputSerialization: { CSV: {}, }, }); ``` ### Environment ```markdown - OS: windows 10 - LocalStack: latest ``` ### Anything else?
this is the entire error: ``` 023-07-11T14:15:38.417 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.SelectObjectContent => 200 2023-07-11T14:15:38.420 ERROR --- [-functhread5] hypercorn.error : Error in ASGI Framework Traceback (most recent call last): File "/opt/code/localstack/.venv/lib/python3.10/site-packages/hypercorn/asyncio/task_group.py", line 22, in _handle await app(scope, receive, send, sync_spawn) File "/opt/code/localstack/.venv/lib/python3.10/site-packages/hypercorn/app_wrappers.py", line 31, in __call__ await self.app(scope, receive, send) File "/opt/code/localstack/localstack/aws/serving/asgi.py", line 67, in __call__ return await self.wsgi(scope, receive, send) File "/opt/code/localstack/localstack/http/asgi.py", line 319, in __call__ return await self.handle_http(scope, receive, send) File "/opt/code/localstack/localstack/http/asgi.py", line 374, in handle_http async for packet in iterable: File "/opt/code/localstack/localstack/http/asgi.py", line 107, in to_async_generator val = await loop.run_in_executor(executor, _next_sync) File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) File "/opt/code/localstack/localstack/http/asgi.py", line 102, in _next_sync return next(it) File "/opt/code/localstack/.venv/lib/python3.10/site-packages/werkzeug/wsgi.py", line 289, in __next__ return self._next() File "/opt/code/localstack/.venv/lib/python3.10/site-packages/werkzeug/wrappers/response.py", line 32, in _iter_encoded for item in iterable: File "/opt/code/localstack/localstack/aws/protocol/serializer.py", line 347, in event_stream_serializer raise ProtocolSerializerError( localstack.aws.protocol.serializer.ProtocolSerializerError: Expected iterator for streaming event serialization. 2023-07-11T14:15:38.420 ERROR --- [-functhread5] hypercorn.error : Error in ASGI Framework Traceback (most recent call last): File "/opt/code/localstack/.venv/lib/python3.10/site-packages/hypercorn/asyncio/task_group.py", line 22, in _handle await app(scope, receive, send, sync_spawn) File "/opt/code/localstack/.venv/lib/python3.10/site-packages/hypercorn/app_wrappers.py", line 31, in __call__ await self.app(scope, receive, send) File "/opt/code/localstack/localstack/aws/serving/asgi.py", line 67, in __call__ return await self.wsgi(scope, receive, send) File "/opt/code/localstack/localstack/http/asgi.py", line 319, in __call__ return await self.handle_http(scope, receive, send) File "/opt/code/localstack/localstack/http/asgi.py", line 374, in handle_http async for packet in iterable: File "/opt/code/localstack/localstack/http/asgi.py", line 107, in to_async_generator val = await loop.run_in_executor(executor, _next_sync) File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) File "/opt/code/localstack/localstack/http/asgi.py", line 102, in _next_sync return next(it) File "/opt/code/localstack/.venv/lib/python3.10/site-packages/werkzeug/wsgi.py", line 289, in __next__ return self._next() File "/opt/code/localstack/.venv/lib/python3.10/site-packages/werkzeug/wrappers/response.py", line 32, in _iter_encoded for item in iterable: File "/opt/code/localstack/localstack/aws/protocol/serializer.py", line 347, in event_stream_serializer raise ProtocolSerializerError( localstack.aws.protocol.serializer.ProtocolSerializerError: Expected iterator for streaming event serialization. ```
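For reference, a minimal boto3 sketch of how the `Payload` event stream is normally consumed (bucket and key are placeholders; the query mirrors the report above, and against real S3 the loop yields `Records` events):
```python
import boto3

# Sketch only: endpoint and credentials are assumptions, bucket/key are placeholders.
s3 = boto3.client("s3", endpoint_url="http://localhost:4566",
                  region_name="us-east-1",
                  aws_access_key_id="test", aws_secret_access_key="test")

resp = s3.select_object_content(
    Bucket="my-bucket",
    Key="data.csv",
    Expression="SELECT s.d FROM S3Object s WHERE CAST(s.i AS int) >= 0 AND CAST(s.i AS int) <= 9",
    ExpressionType="SQL",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# resp["Payload"] is an event stream; the serializer error reported above
# surfaces before any of these events reach the client.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
    elif "End" in event:
        print("stream finished")
```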
https://github.com/localstack/localstack/issues/8675
https://github.com/localstack/localstack/pull/8689
a00068326bd5ba3d19872f08158652d84252f408
48f27fa9ddba20d3346bf3978c668c6c2208fed2
"2023-07-11T14:28:22Z"
python
"2023-07-12T20:35:33Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,642
["localstack/services/iam/provider.py", "localstack/services/infra.py", "localstack/services/moto.py", "localstack/state/inspect.py", "localstack/utils/aws/request_context.py", "setup.cfg", "tests/aws/services/s3/test_s3.py", "tests/aws/services/s3/test_s3_list_operations.py", "tests/unit/state/test_inspect.py"]
bug: Tags missing from SecurityGroupRules
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Currently, whilst tags can be set on a security group rule, they do not actually seem to be set, nor are they shown in the output: ```console $ awslocal ec2 describe-security-group-rules --region eu-west-1 { "SecurityGroupRules": [ ... { "SecurityGroupRuleId": "sgr-a61b55ecc57fc7326", "GroupId": "sg-8142c14106428334a", "GroupOwnerId": "000000000000", "IsEgress": false, "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "CidrIpv4": "0.0.0.0/0" } ] } ``` I'm hoping this is the cause of Terraform attempting to re-add rules every time it runs, though I previously thought the same about #8528. ### Expected Behavior ```console $ aws ec2 describe-security-group-rules --region eu-west-1 { "SecurityGroupRules": [ ... { "SecurityGroupRuleId": "sgr-092116e510c39844a", "GroupId": "sg-0316c5c1e70a68903", "GroupOwnerId": "586083211487", "IsEgress": false, "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "CidrIpv4": "0.0.0.0/0", "Tags": [ { "Key": "test", "Value": "test" } ] } ] } ``` ### How are you starting LocalStack? With the `localstack` script ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) localstack run #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) ##### AWS Note there is a default group and rule in localstack so we can just jump straight to checking that: ``` awslocal ec2 describe-security-group-rules ``` ##### Terraform I found this bug whilst investigating Terraform continually re-applying rules; I believe it may be the cause (though I previously believed #8528 was): ```terraform provider "aws" { region = "eu-west-1" } resource "aws_vpc" "example" { cidr_block = "10.0.0.0/16" } resource "aws_security_group" "example" { name = "example" vpc_id = resource.aws_vpc.example.id } resource "aws_vpc_security_group_ingress_rule" "http" { security_group_id = aws_security_group.example.id cidr_ipv4 = "0.0.0.0/0" from_port = 80 ip_protocol = "tcp" to_port = 80 tags = { "test" = "test" } } ``` ``` tflocal apply tflocal apply # Should not change anything but currently does ``` ### Environment ```markdown - OS: 20.04.6 - LocalStack version: 2.1.1.dev - LocalStack Docker container id: e79ee64af0e2 - LocalStack build date: 2023-07-06 - LocalStack build git hash: 3908ec6b ``` ### Anything else? _No response_
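For anyone checking this from the SDK rather than the CLI, a hedged boto3 sketch of the same verification (the rule id, endpoint and credentials are placeholders; tagging a security group rule via `CreateTags` is standard EC2 behaviour):
```python
import boto3

ec2 = boto3.client("ec2", endpoint_url="http://localhost:4566",
                   region_name="eu-west-1",
                   aws_access_key_id="test", aws_secret_access_key="test")

# Placeholder: use an id returned by describe_security_group_rules in your environment.
rule_id = "sgr-0123456789abcdef0"

# Tag the rule directly, then read it back.
ec2.create_tags(Resources=[rule_id], Tags=[{"Key": "test", "Value": "test"}])
rules = ec2.describe_security_group_rules(SecurityGroupRuleIds=[rule_id])
# Expected (per the real AWS output above): [{"Key": "test", "Value": "test"}]
print(rules["SecurityGroupRules"][0].get("Tags"))
```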
https://github.com/localstack/localstack/issues/8642
https://github.com/localstack/localstack/pull/10112
5e664761b78246cad9a2adbe6f8b0be746d135ac
4d18e48e4c861c385eb3fd908dce14e23b5e6aee
"2023-07-06T15:58:02Z"
python
"2024-01-30T08:18:34Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,596
["localstack/services/sqs/models.py", "localstack/services/sqs/provider.py", "localstack/services/sqs/utils.py", "tests/aws/services/sqs/test_sqs.py", "tests/aws/services/sqs/test_sqs.snapshot.json"]
bug: Fifo batch publish silently fails
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior If a FIFO queue is created without `ContentBasedDeduplication` enabled and a batch of messages is sent to it without a `MessageDeduplicationId`, the send silently fails (with HTTP code 200). ### Expected Behavior It should fail with the error `InvalidParameterValue: The Queue should either have ContentBasedDeduplication enabled or MessageDeduplicationId provided explicitly` ### How are you starting LocalStack? With a `docker run` command ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) `docker run -p 127.0.0.1:4566:4566/tcp --network=host localstack/localstack:latest` #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) ```js const { SQSClient, SendMessageCommand, ReceiveMessageCommand, CreateQueueCommand, SendMessageBatchCommand } = require('@aws-sdk/client-sqs') const credentials = { accessKeyId: 'test', secretAccessKey: 'test', }; const connection = { region: 'us-east-1', accountId: '000000000000', endpoint: 'http://localhost:4566', }; const queueName = 'test-queue6.fifo'; async function main() { const sqsClient = new SQSClient({ ...connection, credentials, }); const { QueueUrl } = await sqsClient.send(new CreateQueueCommand({ QueueName: queueName, Attributes: { FifoQueue: 'true', //ContentBasedDeduplication: 'true', } })); await sqsClient.send(new SendMessageBatchCommand({ QueueUrl, Entries: [{ Id: '1', MessageBody: 'test4', MessageGroupId: 'test-group' }] })); const { Messages } = await sqsClient.send(new ReceiveMessageCommand({ QueueUrl })); return Messages } main().then((r) => console.log(r)).catch((e) => console.error(e)); ``` ### Environment ```markdown - OS: macOS - LocalStack: ``` ### Anything else? Use the latest version of the AWS SDK (!)
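A boto3 equivalent of the JS reproduction above, as an untested sketch (endpoint and credentials are assumptions):
```python
import boto3

sqs = boto3.client("sqs", endpoint_url="http://localhost:4566",
                   region_name="us-east-1",
                   aws_access_key_id="test", aws_secret_access_key="test")

queue_url = sqs.create_queue(
    QueueName="test-queue6.fifo",
    Attributes={"FifoQueue": "true"},  # note: no ContentBasedDeduplication
)["QueueUrl"]

resp = sqs.send_message_batch(
    QueueUrl=queue_url,
    Entries=[{"Id": "1", "MessageBody": "test4", "MessageGroupId": "test-group"}],
)
# Per the expected behaviour above, this entry should be rejected because neither
# ContentBasedDeduplication nor MessageDeduplicationId is provided; the report says
# LocalStack instead treated it as successful.
print(resp.get("Successful"), resp.get("Failed"))
```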
https://github.com/localstack/localstack/issues/8596
https://github.com/localstack/localstack/pull/8809
66165cd680b5044c32ecaa5f7cdcfdbf44a2bff0
bc1aa90edce84979724a8c372d422381d9b3445e
"2023-06-29T15:36:44Z"
python
"2023-08-25T12:06:06Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,570
["localstack/services/sqs/provider.py", "tests/aws/services/sqs/test_sqs.py", "tests/aws/services/sqs/test_sqs.snapshot.json", "tests/aws/services/sqs/test_sqs.validation.json"]
bug: SQS queue - MaximumMessageSize parameter not being respected for individual messages in batch
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When I try to send an SQS message batch to LocalStack, it accepts messages that surpass the MaximumMessageSize limit (it's not the same issue as #6740). ```json { "Successful": [ { "Id": "a4cff0d1-1961-44bd-ae53-c6d5ed71ed08", "MessageId": "f4b6b57e-0f56-42a6-8d4b-df3b6d2d0bb0", "MD5OfMessageBody": "197c4e03eeff64b4e7bc4ad934e425f9" }, { "Id": "35b535ed-b76a-4ebd-b749-6eb35cdb55ee", "MessageId": "57d98a94-e718-4a13-80d8-be35666c159d", "MD5OfMessageBody": "5fac72a66fb05539a171a91883dc60e4" } ] } ``` ### Expected Behavior LocalStack should fail the messages in the batch that exceed the MaximumMessageSize parameter. Below is what AWS returns when I issue the ```send-message-batch``` command against a real SQS queue. ```json { "Successful": [ { "Id": "a4cff0d1-1961-44bd-ae53-c6d5ed71ed08", "MessageId": "3bbcb7e2-ef54-4440-a467-1ee8019a282a", "MD5OfMessageBody": "197c4e03eeff64b4e7bc4ad934e425f9" } ], "Failed": [ { "Id": "35b535ed-b76a-4ebd-b749-6eb35cdb55ee", "SenderFault": true, "Code": "InvalidParameterValue", "Message": "One or more parameters cannot be validated. Reason: Message must be shorter than 1024 bytes." } ] } ``` ### How are you starting LocalStack? With a `docker run` command ### Steps To Reproduce ```shell docker run -e "SERVICES=sqs" -e "DEFAULT_REGION=us-east-1" -p 4566:4566 localstack/localstack ``` First, I create the queue with the MaximumMessageSize parameter set to 1 KiB using the AWS CLI: ```shell aws sqs create-queue --endpoint-url http://localhost:4566 --queue-name MyQueue --attributes "MaximumMessageSize=1024" ``` Then I send the message batch: ```shell aws sqs send-message-batch --endpoint-url http://localhost:4566 --queue-url http://localhost:4566/000000000000/MyQueue --entries file://send-message-batch.json ``` This is the content of the send-message-batch.json file. Note that the second message exceeds the 1 KiB size limit by 1 byte (it has 1025 bytes).
```json [ { "Id": "a4cff0d1-1961-44bd-ae53-c6d5ed71ed08", "MessageBody": "abcd012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789" }, { "Id": "35b535ed-b76a-4ebd-b749-6eb35cdb55ee", "MessageBody": "abcde012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789" } ] ``` ### Environment ```markdown - OS:Ubuntu 22.04.2 LTS - LocalStack:latest ``` ### Anything else? _No response_
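The same reproduction as an untested boto3 sketch, generating the message bodies programmatically instead of using the JSON file (endpoint and credentials are assumptions):
```python
import boto3

sqs = boto3.client("sqs", endpoint_url="http://localhost:4566",
                   region_name="us-east-1",
                   aws_access_key_id="test", aws_secret_access_key="test")

queue_url = sqs.create_queue(
    QueueName="MyQueue", Attributes={"MaximumMessageSize": "1024"}
)["QueueUrl"]

resp = sqs.send_message_batch(
    QueueUrl=queue_url,
    Entries=[
        {"Id": "within-limit", "MessageBody": "a" * 1024},  # exactly at the 1 KiB limit
        {"Id": "over-limit", "MessageBody": "a" * 1025},    # 1 byte over the limit
    ],
)
# Against real SQS the second entry comes back under "Failed" with InvalidParameterValue;
# per the report, LocalStack accepted both.
print(resp.get("Successful"), resp.get("Failed"))
```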
https://github.com/localstack/localstack/issues/8570
https://github.com/localstack/localstack/pull/9981
ee6c4e1dec5340f942854c4fe486605da840fae6
37e3bed0f0d20493cb36f31aab7c09f38f768fa8
"2023-06-25T20:24:17Z"
python
"2024-01-04T16:08:21Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,566
["localstack/services/sqs/provider.py", "tests/integration/test_sqs.py", "tests/integration/test_sqs.snapshot.json"]
bug: MessageAttributes inconsistency on SQS Receive message
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Using AWS SDK v3 for JS/TS against LocalStack: ```js const { Messages } = await sqsClient.send(new ReceiveMessageCommand({ QueueUrl, MessageAttributeNames: ['*'], })); console.log(Messages[0].MessageAttributes) ``` it will **not** retrieve the attributes; replacing the asterisk with "All" will work. The same code works with real SQS in both cases. The same happens with the AWS CLI: ```shell > aws --endpoint-url=http://localhost:4566 sqs receive-message --queue-url http://localhost:4566/000000000000/test-queue --max-number-of-messages 1 --message-attribute-names * { "Messages": [ { "MessageId": "99bad5bf-75f6-48b8-ab97-889e679c5a04", "ReceiptHandle": "ZTVlNGE2NjAtMDllZC00N2YxLWI4ODItYzUzMzkwZWUyYTA3IGFybjphd3M6c3FzOnVzLWVhc3QtMTowMDAwMDAwMDAwMDA6dGVzdC1xdWV1ZTEgOTliYWQ1YmYtNzVmNi00OGI4LWFiOTctODg5ZTY3OWM1YTA0IDE2ODc3MDM2NTkuOTgyNTA2", "MD5OfBody": "098f6bcd4621d373cade4e832627b4f6", "Body": "test" } ] } > aws --endpoint-url=http://localhost:4566 sqs receive-message --queue-url http://localhost:4566/000000000000/test-queue --max-number-of-messages 1 --message-attribute-names All { "Messages": [ { "MessageId": "73cbe585-70a3-4966-99c2-4d4cdf12f095", "ReceiptHandle": "Y2M0ZjAwNDAtYTY5MS00MjY5LTkwMmQtNDU4NzBmYzBiYzgxIGFybjphd3M6c3FzOnVzLWVhc3QtMTowMDAwMDAwMDAwMDA6dGVzdC1xdWV1ZTEgNzNjYmU1ODUtNzBhMy00OTY2LTk5YzItNGQ0Y2RmMTJmMDk1IDE2ODc3MDM2NjMuODcyNzg4", "MD5OfBody": "098f6bcd4621d373cade4e832627b4f6", "Body": "test", "MD5OfMessageAttributes": "f92843aa316e790fb678e5e4ed34d80f", "MessageAttributes": { "my-test": { "StringValue": "test-value", "DataType": "String" } } } ] } ``` ### Expected Behavior Respect the asterisk and behave the same as "All". ### How are you starting LocalStack? With a `docker run` command ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run -p 127.0.0.1:4566:4566/tcp --network=host localstack/localstack:2.1.0 #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) CLI ``` aws --endpoint-url=http://localhost:4566 sqs receive-message --queue-url http://localhost:4566/000000000000/test-queue --max-number-of-messages 1 --message-attribute-names * ``` SDK ```js const { Messages } = await sqsClient.send(new ReceiveMessageCommand({ QueueUrl, MessageAttributeNames: ['*'], })); ``` ### Environment ```markdown - LocalStack: 2.1.0 ``` ### Anything else? _No response_
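An untested boto3 sketch of the same check, exercising both spellings against one queue (endpoint, credentials and the `VisibilityTimeout=0` trick are assumptions, not from the original report):
```python
import boto3

sqs = boto3.client("sqs", endpoint_url="http://localhost:4566",
                   region_name="us-east-1",
                   aws_access_key_id="test", aws_secret_access_key="test")

queue_url = sqs.create_queue(QueueName="test-queue")["QueueUrl"]
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody="test",
    MessageAttributes={"my-test": {"DataType": "String", "StringValue": "test-value"}},
)

for names in (["All"], ["*"]):
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MessageAttributeNames=names,
        MaxNumberOfMessages=1,
        VisibilityTimeout=0,  # make the message visible again for the second read
    )
    messages = resp.get("Messages", [])
    # Both spellings should return the attributes; the report says only "All" did.
    print(names, messages[0].get("MessageAttributes") if messages else None)
```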
https://github.com/localstack/localstack/issues/8566
https://github.com/localstack/localstack/pull/8572
7710a0a10f085ea6a36134949b98ff9cd4b7e807
6b553919f0d011ae3243f6697d9597bb33c872ed
"2023-06-25T14:37:13Z"
python
"2023-08-01T10:51:06Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,550
["setup.cfg"]
bug: Cross-account VPCs are not found when using multi-account features.
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I tried to deploy a VPC on account A, deploy a VPC on account B, and deploy a peering connection between A and B (B's VPC = target VPC) on account A. Unfortunately, a bug exists: the VPC ID of account B is not found from account A. ### Expected Behavior I think some resources (such as VPCs, by nature) should be exposed / visible / referable from all accounts involved in the LocalStack instance. It seems like the list of supported cross-account resources is increasing as this area is actively being worked on, but VPCs do not seem to be part of that list yet. The expected behaviour is to be able to create cross-account VPC peering connections. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) localstack start -d #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) ```bash AWS_ACCESS_KEY_ID=111111111111 awslocal ec2 create-vpc --cidr-block 10.0.0.0/24 \ --region us-east-1 AWS_ACCESS_KEY_ID=222222222222 awslocal ec2 create-vpc --cidr-block 10.1.0.0/24 \ --region eu-central-1 AWS_ACCESS_KEY_ID=111111111111 awslocal ec2 create-vpc-peering-connection \ --peer-owner-id 222222222222 \ --peer-region eu-central-1 \ --vpc-id <VPC1> --peer-vpc-id <VPC2> \ --region us-east-1 ``` ### Environment ```markdown - OS: macOS 11.7 20G817 x86_64 - LocalStack: 2.1.0 ``` ### Anything else? I think the bug is related to #7041. If I understand this well, that area is under active development, as explained here: https://docs.localstack.cloud/references/cross-account-access/ In the docs above, some resources seem to be supported already; VPCs are not mentioned, so I assume they are not supported yet.
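A boto3 version of the same steps, as an untested sketch; it relies on LocalStack deriving the account id from the access key id, and the endpoint/credentials are assumptions:
```python
import boto3

def client(account_id: str, region: str):
    # LocalStack maps the access key id to the account id in multi-account mode.
    return boto3.client("ec2", endpoint_url="http://localhost:4566",
                        region_name=region,
                        aws_access_key_id=account_id, aws_secret_access_key="test")

ec2_a = client("111111111111", "us-east-1")
ec2_b = client("222222222222", "eu-central-1")

vpc_a = ec2_a.create_vpc(CidrBlock="10.0.0.0/24")["Vpc"]["VpcId"]
vpc_b = ec2_b.create_vpc(CidrBlock="10.1.0.0/24")["Vpc"]["VpcId"]

# Per the report, this failed because account A could not see account B's VPC.
peering = ec2_a.create_vpc_peering_connection(
    VpcId=vpc_a,
    PeerVpcId=vpc_b,
    PeerOwnerId="222222222222",
    PeerRegion="eu-central-1",
)
print(peering["VpcPeeringConnection"]["VpcPeeringConnectionId"])
```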
https://github.com/localstack/localstack/issues/8550
https://github.com/localstack/localstack/pull/9199
e42c4495fe9aeaea9d20ca31821ea973da3f4acf
623e38311b579edfccabbfd1ab5d0728eaa87bb6
"2023-06-22T14:04:36Z"
python
"2023-09-21T11:53:24Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,512
["localstack/services/events/provider.py", "tests/aws/services/events/test_events.py", "tests/aws/services/events/test_events.snapshot.json", "tests/aws/services/events/test_events.validation.json"]
bug: EventBridge event-pattern not correctly matched on detail content
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior The provided example does not put a message in SQS. ### Expected Behavior The provided example puts a message in SQS. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce The following script returns no items in SQS, but it should. The `"detail": { "foo": ... }` part of the pattern is not matched. ``` awslocal events create-event-bus \ --name detail-bus-1 awslocal sqs create-queue \ --queue-name "detail-sqs-1" awslocal events put-rule \ --name "detail-rule-1" \ --event-bus-name "detail-bus-1" \ --event-pattern '{ "detail": { "foo": [ { "exists": true } ] } }' awslocal events put-targets \ --rule "detail-rule-1" \ --event-bus-name "detail-bus-1" \ --targets "Id"="detail-target-rule-1","Arn"="arn:aws:sqs:us-east-1:000000000000:detail-sqs-1","InputPath"="$.detail" awslocal events put-events --entries '[{"DetailType": "test", "Source": "aws.test", "Detail": "{ \"foo\": \"bar\" }", "EventBusName": "detail-bus-1"}]' awslocal sqs receive-message --queue-url http://localhost:4566/000000000000/detail-sqs-1 ``` ### Environment ```markdown - OS: macOS 13.3.1 - LocalStack: 1.4.0 ``` ### Anything else? _No response_
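As a quicker check of just the pattern matching, the `TestEventPattern` API can be used, assuming the provider supports it; in this untested sketch the event envelope fields (id, account, time, region, resources) are placeholder values:
```python
import json
import boto3

events = boto3.client("events", endpoint_url="http://localhost:4566",
                      region_name="us-east-1",
                      aws_access_key_id="test", aws_secret_access_key="test")

pattern = {"detail": {"foo": [{"exists": True}]}}
event = {
    "id": "1",
    "detail-type": "test",
    "source": "aws.test",
    "account": "000000000000",
    "time": "2023-06-15T00:00:00Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {"foo": "bar"},
}

resp = events.test_event_pattern(
    EventPattern=json.dumps(pattern), Event=json.dumps(event)
)
print(resp["Result"])  # should be True for this pattern/event pair
```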
https://github.com/localstack/localstack/issues/8512
https://github.com/localstack/localstack/pull/9931
3b30c563780176ec5512f77de87223881533bdc9
563edf7a7752a8976bba73b9a1400bc30279ce4d
"2023-06-15T16:41:17Z"
python
"2023-12-29T12:40:20Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,486
["localstack/services/sns/provider.py", "tests/integration/test_sns.py", "tests/integration/test_sns.snapshot.json"]
bug: SNS batch publish with SQS subscription fails when MessageStructure json is used
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Messages containing a MessageStructure set to "json" are not delivered to SQS when these are published on SNS using 'publish-batch' ### Expected Behavior Messages do make it to SQS. ### How are you starting LocalStack? With a docker-compose file ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run localstack/localstack #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) ##### Creating SNS toppic/SQS queue/subscription ``` $ ./awslocal sns create-topic --name 'my-sns-topic' { "TopicArn": "arn:aws:sns:us-east-1:000000000000:my-sns-topic" } ``` ``` $ ./awslocal sqs create-queue --queue-name 'my-sqs-queue' { "QueueUrl": "http://localhost:4566/000000000000/my-sqs-queue" } ``` ``` $ ./awslocal sqs get-queue-attributes --queue-url "http://localhost:4566/000000000000/my-sqs-queue" --attribute-names QueueArn { "Attributes": { "QueueArn": "arn:aws:sqs:us-east-1:000000000000:my-sqs-queue" } } ``` ``` $ ./awslocal sns subscribe --topic-arn "arn:aws:sns:us-east-1:000000000000:my-sns-topic" --protocol sqs --notification-endpoint "arn:aws:sqs:us-east-1:000000000000:my-sqs-queue" { "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:my-sns-topic:2e8368a4-b3b3-4106-abc7-84a22eeb69af" } ``` ##### Verify using 'publish' ``` $ ./awslocal sns publish --topic-arn "arn:aws:sns:us-east-1:000000000000:my-sns-topic" --message-structure 'JSON' --message '{"default": "default message", "sqs": "sqs message"}' { "MessageId": "e5bc1165-51bb-4710-9e65-eed1516ce229" } ``` ``` $ ./awslocal sqs receive-message --queue-url "http://localhost:4566/000000000000/my-sqs-queue" { "Messages": [ { "MessageId": "b997e242-687b-4a2e-befb-d2dd8169f80c", "ReceiptHandle": "ZGM5YTAyY2UtYWI5ZC00ZDg5LWFjOTMtYzk2NmFkYmNiY2NlIGFybjphd3M6c3FzOnVzLWVhc3QtMTowMDAwMDAwMDAwMDA6bXktc3FzLXF1ZXVlIGI5OTdlMjQyLTY4N2ItNGEyZS1iZWZiLWQyZGQ4MTY5ZjgwYyAxNjg2NTc5MDAxLjk0MDQ2MTQ=", "MD5OfBody": "32fc7c66865504d74e50c8c646ad1156", "Body": "{\"Type\": \"Notification\", \"MessageId\": \"2057f9f0-b8b9-4fb2-80c4-0dd6cf3e2dd2\", \"TopicArn\": \"arn:aws:sns:us-east-1:000000000000:my-sns-topic\", \"Message\": \"sqs message\", \"Timestamp\": \"2023-06-12T14:09:57.879Z\", \"SignatureVersion\": \"1\", \"Signature\": \"EXAMPLEpH+..\", \"SigningCertURL\": \"https://sns.us-east-1.amazonaws.com/SimpleNotificationService-0000000000000000000000.pem\", \"UnsubscribeURL\": \"http://localhost:4566/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-1:000000000000:my-sns-topic:2e8368a4-b3b3-4106-abc7-84a22eeb69af\"}" } ] } ``` The SQS message contains "sqs message" as message. This is the expected result so that's good. 
##### Verify 'publish-batch' (without MessageStructure) ``` $ ./awslocal sns publish-batch --topic-arn "arn:aws:sns:us-east-1:000000000000:my-sns-topic" --publish-batch-request-entries '[{"Id": "1", "Message": "sqs batch 1"}]' { "Successful": [ { "Id": "1", "MessageId": "bf04ebfa-193d-42f1-afe6-aadf3fb9d314" } ], "Failed": [] } ``` ``` ./awslocal sqs receive-message --queue-url "http://localhost:4566/000000000000/my-sqs-queue" { "Messages": [ { "MessageId": "54036176-ee15-4426-a571-295a5d5f6657", "ReceiptHandle": "ZGJkMjBmOWMtZmQ2Mi00YTMwLTk5YTktNzNkY2M4YjFlNmU1IGFybjphd3M6c3FzOnVzLWVhc3QtMTowMDAwMDAwMDAwMDA6bXktc3FzLXF1ZXVlIDU0MDM2MTc2LWVlMTUtNDQyNi1hNTcxLTI5NWE1ZDVmNjY1NyAxNjg2NTc5Mjc1LjQ1MzExNw==", "MD5OfBody": "1bc38fc8c7d7a7a480784f4ea1bb37f3", "Body": "{\"Type\": \"Notification\", \"MessageId\": \"bf04ebfa-193d-42f1-afe6-aadf3fb9d314\", \"TopicArn\": \"arn:aws:sns:us-east-1:000000000000:my-sns-topic\", \"Message\": \"sqs batch 1\", \"Timestamp\": \"2023-06-12T14:14:30.161Z\", \"SignatureVersion\": \"1\", \"Signature\": \"EXAMPLEpH+..\", \"SigningCertURL\": \"https://sns.us-east-1.amazonaws.com/SimpleNotificationService-0000000000000000000000.pem\", \"UnsubscribeURL\": \"http://localhost:4566/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-1:000000000000:my-sns-topic:2e8368a4-b3b3-4106-abc7-84a22eeb69af\"}" } ] } ``` The SQS message contains "sqs batch 1" as message. This is the expected result so that's good. ##### Verify 'publish-batch' (with MessageStructure) ``` $ ./awslocal sns publish-batch --topic-arn "arn:aws:sns:us-east-1:000000000000:my-sns-topic" --publish-batch-request-entries '[{"Id": "1", "MessageStructure": "json", "Message": "{\"default\": \"default batch 2\", \"sqs\": \"sqs batch 2\"}"}]' { "Successful": [ { "Id": "1", "MessageId": "29b26566-79a8-40eb-8abc-37fd42232dcc" } ], "Failed": [] } ``` ``` $ ./awslocal sqs receive-message --queue-url "http://localhost:4566/000000000000/my-sqs-queue" ``` There are no messages on the queue. This is **unexpected**. The **expected** result is a message with content "sqs batch 2". ### Environment ```markdown - OS: Debian - LocalStack: - LocalStack version: 2.1.1.dev - LocalStack build date: 2023-06-10 - LocalStack build git hash: 710f950c ``` ### Anything else? ### Relevant logging ``` DEBUG --- [ asgi_gw_2] l.services.sns.publisher : Topic 'arn:aws:sns:us-east-1:000000000000:my-sns-topic' batch publishing 1 messages to subscribed 'arn:aws:sqs:us-east-1:000000000000:my-sqs-queue' with protocol 'sqs' (subscription 'arn:aws:sns:us-east-1:000000000000:my-sns-topic:2e8368a4-b3b3-4106-abc7-84a22eeb69af') INFO --- [ asgi_gw_2] localstack.request.aws : AWS sns.PublishBatch => 200 ``` It shows that the SNS publish-batch succeeded and then it attempted to deliver to SQS. There are **no messages** about the actual delivery to SQS and/or no failure logged. I.e. it doesn't log the result of the `AWS sqs.SendMessageBatch` call (which it does do in the other cases) ### Debugging In SqsBatchTopicPublisher the "self.prepare_message(message_ctx, subscriber)" call fails. 
It throws an exception, but it appears this isn't logged. After adding a try/except around the for loop in SqsBatchTopicPublisher::_publish and resending the message, the following traceback appears: ``` Traceback (most recent call last): File "/opt/code/localstack/localstack/services/sns/publisher.py", line 318, in _publish message_body = self.prepare_message(message_ctx, subscriber) File "/opt/code/localstack/localstack/services/sns/publisher.py", line 108, in prepare_message return create_sns_message_body(message_context, subscriber) File "/opt/code/localstack/localstack/services/sns/publisher.py", line 777, in create_sns_message_body message_content = message_context.message_content(protocol) File "/opt/code/localstack/localstack/services/sns/models.py", line 71, in message_content return self.message.get(protocol, self.message.get("default")) AttributeError: 'str' object has no attribute 'get' ``` Adding some logging in the 'create_sns_message_body' function shows: ``` message_context=SnsMessage(type='Notification', message='{"default": "default batch 2", "sqs": "sqs batch 2"}', message_attributes={}, message_structure='json', subject=None, message_deduplication_id=None, message_group_id=None, token=None, message_id='29b26566-79a8-40eb-8abc-37fd42232dcc', is_fifo=False, sequencer_number=None) ``` The `message_context.message` is a string (JSON) but the code appears to expect a dictionary?
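Based on the traceback and the SnsMessage dump above, the missing step appears to be parsing the raw JSON string when `message_structure` is "json". A rough, hypothetical sketch of that normalisation (the function name is invented here; the actual fix may live elsewhere in the provider):
```python
import json

def resolve_message_content(message, message_structure, protocol):
    # When MessageStructure=json, the publisher sends a JSON string keyed by protocol;
    # it has to be parsed before looking up the per-protocol (or "default") value.
    if message_structure == "json" and isinstance(message, str):
        message = json.loads(message)
    if isinstance(message, dict):
        return message.get(protocol, message.get("default"))
    return message

# Example with the payload from the report above:
raw = '{"default": "default batch 2", "sqs": "sqs batch 2"}'
print(resolve_message_content(raw, "json", "sqs"))  # -> "sqs batch 2"
```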
https://github.com/localstack/localstack/issues/8486
https://github.com/localstack/localstack/pull/8487
22dcf56e895a58f56a51cd6e6bdecf9c0d727342
d26f2a55dab78890dd0688f09c2bfd46a5a80988
"2023-06-12T14:50:40Z"
python
"2023-06-15T16:42:04Z"
closed
localstack/localstack
https://github.com/localstack/localstack
8,478
["localstack/services/sqs/provider.py", "tests/integration/test_sqs.py", "tests/integration/test_sqs.snapshot.json"]
bug: max SQS batch size of 10 must always be enforced
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I can delete messages with batch sizes greater than 10 without any errors. The same applies to changing message visibility. ### Expected Behavior SQS allows processing batches with a maximum size of 10 elements. ### How are you starting LocalStack? With a `docker run` command ### Steps To Reproduce #### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`) docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack #### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) ```python import boto3 sqs = boto3.client("sqs", endpoint_url="http://localhost:4566") queue_url = sqs.create_queue(QueueName="my-queue")["QueueUrl"] sqs.send_message_batch( QueueUrl=queue_url, Entries=[{"Id": str(i), "MessageBody": "foo"} for i in range(10)], ) sqs.send_message_batch( QueueUrl=queue_url, Entries=[{"Id": str(i), "MessageBody": "foo"} for i in range(10)], ) messages = [] messages.extend( sqs.receive_message( QueueUrl=queue_url, AttributeNames=["All"], MaxNumberOfMessages=10, )["Messages"] ) messages.extend( sqs.receive_message( QueueUrl=queue_url, AttributeNames=["All"], MaxNumberOfMessages=10, )["Messages"] ) # This should raise an error because we're providing a batch with more than 10 elements sqs.change_message_visibility_batch( QueueUrl=queue_url, Entries=[ { "Id": str(i), "ReceiptHandle": msg["ReceiptHandle"], "VisibilityTimeout": 123, } for i, msg in enumerate(messages) ], ) # This should raise an error because we're providing a batch of more than 10 elements sqs.delete_message_batch( QueueUrl=queue_url, Entries=[ {"Id": str(i), "ReceiptHandle": msg["ReceiptHandle"]} for i, msg in enumerate(messages) ], ) ``` ### Environment ```markdown - OS: Ubuntu 22.04 - LocalStack: latest ``` ### Anything else? _No response_
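Real SQS rejects batches of more than 10 entries (with a TooManyEntriesInBatchRequest-style error, if I recall correctly), so clients typically chunk their entries; a small helper sketch along those lines (endpoint and credentials are assumptions):
```python
import boto3

sqs = boto3.client("sqs", endpoint_url="http://localhost:4566",
                   region_name="us-east-1",
                   aws_access_key_id="test", aws_secret_access_key="test")

def chunked(entries, size=10):
    """Yield successive slices of at most `size` entries (the SQS batch limit)."""
    for i in range(0, len(entries), size):
        yield entries[i:i + size]

def delete_in_batches(queue_url, messages):
    # Splits an arbitrarily long list of received messages into valid delete batches.
    entries = [
        {"Id": str(i), "ReceiptHandle": msg["ReceiptHandle"]}
        for i, msg in enumerate(messages)
    ]
    for batch in chunked(entries):
        sqs.delete_message_batch(QueueUrl=queue_url, Entries=batch)
```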
https://github.com/localstack/localstack/issues/8478
https://github.com/localstack/localstack/pull/8479
909ac86278682233a92d778b4d36c1ad2782405c
f3a06d845ad432e3a562aebaa34d50dfca63ad98
"2023-06-11T12:23:43Z"
python
"2023-06-30T15:14:01Z"