status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | localstack/localstack | https://github.com/localstack/localstack | 8,471 | ["localstack/services/sqs/models.py", "localstack/services/sqs/provider.py", "localstack/services/sqs/utils.py", "localstack/testing/snapshots/transformer_utility.py", "tests/aws/services/sqs/test_sqs_move_task.py", "tests/aws/services/sqs/test_sqs_move_task.snapshot.json", "tests/aws/services/sqs/test_sqs_move_task.validation.json"] | Enhancement request: Support AWS SQS's newly announced "SQS Dead-Letter Queue Redrive via AWS SDK or CLI" | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Enhancement description
On June 8, 2023, Amazon just announced `support for dead-letter queue redrive via AWS SDK or CLI`
The SDK introduces three new 'tasks' that can be performed against an SQS dead letter queue:
* `StartMessageMoveTask`
* `CancelMessageMoveTask`
* `ListMessageMoveTasks`
LocalStack's SQS implementation should be updated to support these new DLQ redrive operations.
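As a conceptual sketch of what these operations manage (not LocalStack's actual implementation; all names and fields here are illustrative), the three calls form a simple task lifecycle that could be modeled like this:

```python
import uuid


class MessageMoveTasks:
    """Illustrative in-memory model of the DLQ-redrive task lifecycle."""

    def __init__(self):
        self.tasks = {}  # task handle -> task record

    def start(self, source_arn, destination_arn=None):
        # StartMessageMoveTask: begin moving messages out of the DLQ.
        handle = str(uuid.uuid4())
        self.tasks[handle] = {
            "SourceArn": source_arn,
            # None means "redrive to the original source queues", per AWS semantics
            "DestinationArn": destination_arn,
            "Status": "RUNNING",
        }
        return handle

    def cancel(self, handle):
        # CancelMessageMoveTask: stop an in-progress move.
        self.tasks[handle]["Status"] = "CANCELLED"

    def list(self, source_arn):
        # ListMessageMoveTasks: list tasks for a given source ARN.
        return [t for t in self.tasks.values() if t["SourceArn"] == source_arn]
```

This only models the bookkeeping; a real implementation would also receive, re-send, and delete the messages themselves.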
-------------------------
Not surprisingly, when I attempted to do this the day after the announcement, LocalStack doesn't yet support these operations. Using the latest LocalStack Docker container tag, `2.1.0`, and the latest Java AWS SDK version, `2.20.82`, I attempted to perform one of these new operations, `StartMessageMoveTask`, against LocalStack using the following:
```java
StartMessageMoveTaskResponse moveTaskResponse =
sqs.startMessageMoveTask(builder -> builder.sourceArn(dlqArn).destinationArn(qArn));
```
The container produced the following error message:
```
2023-06-09T17:06:50.098 ERROR --- [ asgi_gw_0] l.aws.handlers.logging : exception during call chain: Operation detection failed.Operation StartMessageMoveTask could not be found for service ServiceModel(sqs).
2023-06-09T17:06:50.099347340Z 2023-06-09T17:06:50.099 INFO --- [ asgi_gw_0] localstack.request.http : POST / => 500
```
### 🧑💻 Implementation
_No response_
### Anything else?
Relevant AWS announcements and API specs:
* https://aws.amazon.com/blogs/aws/a-new-set-of-apis-for-amazon-sqs-dead-letter-queue-redrive/
* https://aws.amazon.com/about-aws/whats-new/2023/06/amazon-sqs-dead-letter-queue-redrive-aws-sdk-cli/
* https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-dead-letter-queue-redrive.html | https://github.com/localstack/localstack/issues/8471 | https://github.com/localstack/localstack/pull/9988 | 0c01be0932d34e09b2127927bca6a3ccc3c099b6 | e4aa388fcf632a86e098147f4739d295f4d09aef | "2023-06-09T17:12:25Z" | python | "2024-01-05T08:36:13Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,459 | ["localstack/services/awslambda/packages.py"] | bug: Lambda execution doesn't work with Golang binary lambdas | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Running `awslocal lambda invoke --function-name "localStage-mediaStack-backfillGetFunction91FAA-b70d6394" "manual_ test_output.json"` even a single time causes LocalStack to continuously try and fail to run the lambda. The terminal hangs while `docker compose logs` outputs:
```
localstack_main | 2023-06-07T16:47:15.160 INFO --- [ asgi_gw_4] localstack.request.aws : AWS sts.AssumeRole => 200
localstack_main | 2023-06-07T16:47:16.000 INFO --- [ asgi_gw_0] localstack.request.http : POST /_localstack_lambda/f1a13ad3d39e7c5f771f6cf85f78456d/status/f1a13ad3d39e7c5f771f6cf85f78456d/error => 202
localstack_main | 2023-06-07T16:47:16.378 WARN --- [ asgi_gw_2] l.s.a.i.executor_endpoint : Execution environment startup failed: {"errorMessage":"Error: fork/exec /var/task/bootstrap: no such file or directory","errorType":"Runtime.InvalidEntrypoint"}
```
The logs won't stop until I pull the container down and restart it (hence I know it's continuously retrying to no avail)
### Expected Behavior
Lambda should be executed just like it does on AWS, returning a sample return string like "Test" (the lambda is very simple)
Even if it does fail (I'm not sure why it'd fail here but not on AWS) then it should try a small finite number of times and then stop
### How are you starting LocalStack?
With a docker-compose file (shown below)
### Steps To Reproduce
I don't think my code itself is the problem since it does work fine on AWS, but for a bit of context:
I use the cdk to compile the binaries and infrastructure which is deployed to LocalStack via:
cdklocal deploy -a "cdk.out/assembly-localStage/" --all --require-approval "never"
My infra:
https://github.com/KamWithK/exSTATic-backend/tree/master/infrastructure
I can potentially create a small isolated example which illustrates the problem if needed
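The `Runtime.InvalidEntrypoint` error above usually means the deployment zip does not contain the `bootstrap` binary at its root. As a quick diagnostic (illustrative helper, not part of LocalStack), the package can be checked like this:

```python
import io
import zipfile


def has_root_bootstrap(zip_bytes: bytes) -> bool:
    """A provided.al2 package must contain 'bootstrap' at the archive root.

    Returns False when the binary is missing or nested in a subdirectory,
    which produces the 'fork/exec /var/task/bootstrap: no such file' error.
    """
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return "bootstrap" in zf.namelist()
```

Running this against the zip the CDK produced would distinguish a packaging problem from a LocalStack execution problem.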
### Environment
```markdown
version: "3.8"
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack:latest
ports:
- "127.0.0.1:4566:4566" # LocalStack Gateway
- "127.0.0.1:4510-4559:4510-4559" # external services port range
environment:
- DEBUG=${DEBUG-}
- DOCKER_HOST=unix:///var/run/docker.sock
- PERSISTENCE=/tmp/localstack/data
- AWS_DEFAULT_REGION=ap-southeast-2
volumes:
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
healthcheck:
test: curl http://localhost:4566/_localstack/health
interval: 1s
timeout: 1s
retries: 10
```
### Anything else?
Here's some function info:
```
{
"FunctionName": "localStage-mediaStack-backfillGetFunction91FAA-b70d6394",
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:localStage-mediaStack-backfillGetFunction91FAA-b70d6394",
"Runtime": "provided.al2",
"Role": "arn:aws:iam::000000000000:role/localStage-mediaStack-backfillGetFunctionServi-f0162982",
"Handler": "bootstrap",
"CodeSize": 14604,
"Description": "",
"Timeout": 3,
"MemorySize": 128,
"LastModified": "2023-06-07T15:58:33.766076+0000",
"CodeSha256": "nqvy9NYt9j59ura5fEUE4QaeXOTuTiEsDNgEvsSQGdk=",
"Version": "$LATEST",
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "377a6465-e32b-4ed3-a9f1-396af7ee0d86",
"PackageType": "Zip",
"Architectures": [
"x86_64"
],
"EphemeralStorage": {
"Size": 512
},
"SnapStart": {
"ApplyOn": "None",
"OptimizationStatus": "Off"
}
}
```
Test lambda:
```golang
package main
import (
"github.com/aws/aws-lambda-go/lambda"
)
func HandleRequest() (string, error) {
return "Test", nil
}
func main() {
lambda.Start(HandleRequest)
}
```
I did find this issue which sounded similar at first, but in my case the code does run on AWS (so I think it's a different problem, put here for reference though):
https://github.com/localstack/localstack/issues/4216
Any help would be greatly appreciated! | https://github.com/localstack/localstack/issues/8459 | https://github.com/localstack/localstack/pull/8679 | 0ff2710cd9ea929395fb5e3cb48039b92e3a7c35 | 5df0f4a3077804ca8c31cb857478f0c1cd40d3bd | "2023-06-07T16:59:40Z" | python | "2023-07-12T08:26:44Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,444 | ["localstack/services/events/provider.py", "localstack/services/s3/notifications.py", "localstack/services/s3/provider.py", "tests/integration/s3/test_s3_notifications_eventbridge.py", "tests/integration/s3/test_s3_notifications_eventbridge.snapshot.json", "tests/integration/s3/test_s3_notifications_sqs.py", "tests/integration/s3/test_s3_notifications_sqs.snapshot.json", "tests/integration/test_events.py", "tests/integration/test_events.snapshot.json"] | enhancement request: support for s3:ObjectRestore:* bucket notifications | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Enhancement description
Currently, I'm using LocalStack to test locally a lambda function that takes S3 `ObjectRestore:Completed` notifications as inputs, and it would be really great to have support for these events.
I know that right now, as a workaround, I can invoke the lambda function manually using a payload with the same shape that S3 uses, but it's better to have the process run as closely as possible to how it would run in AWS.
Thanks for creating and maintaining localstack, it's really great!
### 🧑💻 Implementation
Not sure, but happy to help if you can give me some pointers.
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8444 | https://github.com/localstack/localstack/pull/8690 | 0d58ab5f8f5ad7b448d644bba289b8d1930e29e8 | 3b81d4bcd038f3b03e67142e3c6579cb725ce121 | "2023-06-05T19:53:10Z" | python | "2023-07-17T11:31:23Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,441 | ["localstack/services/sns/provider.py", "tests/integration/test_sns.py", "tests/integration/test_sns.snapshot.json"] | bug: [SNS] wrong keys on message attributes passed to SQS | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When an SNS message is passed to SQS, the keys of its message attributes are renamed from `StringValue` to `Value` and from `DataType` to `Type`.
### Expected Behavior
The keys should not be renamed
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Flow
- Create a SQS Queue
- Create a SNS Topic
- Configure a Subscription of the new Topic that sends to the SQS queue (specify a raw delivery at true)
- Publish a message to SNS Topic with attribute { DataType: "string", StringValue: "anything"}
- The attribute keys are no longer `StringValue` but just `Value`.
(I can't provide more; I have issues with my awslocal setup. I originally hit this issue with the PHP client.)
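To illustrate the report with a hypothetical helper (not LocalStack code), a raw-delivery subscriber should receive attribute entries with the original SNS key names, not the renamed internal ones:

```python
def to_raw_delivery_attribute(attr: dict) -> dict:
    """Sketch of the expected behavior for raw message delivery.

    The buggy behavior described above renames the keys:
        {"Type": attr["DataType"], "Value": attr["StringValue"]}
    The expected behavior keeps the documented SNS key names:
    """
    return {"DataType": attr["DataType"], "StringValue": attr["StringValue"]}
```

With the `{DataType, StringValue}` shape preserved, SDK clients (such as the PHP client mentioned above) can parse the attributes as documented.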
### Environment
```markdown
- OS: Docker Desktop for mac
- LocalStack: 1.4
```
### Anything else?
The AWS SNS official documentation : https://docs.aws.amazon.com/sns/latest/api/API_MessageAttributeValue.html
MR for fix is coming | https://github.com/localstack/localstack/issues/8441 | https://github.com/localstack/localstack/pull/8458 | 3a590c023ce2d00df9b823d7a95956ecab90eae6 | 710f950c0b57d66b5e8524e2099c0e766650cef5 | "2023-06-05T15:45:50Z" | python | "2023-06-10T13:36:03Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,422 | ["localstack/services/s3/utils.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"] | bug: Localstack S3 Allows put-object and get-object on KMS encrypted objects after the KMS Key is Disabled | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
LocalStack does not validate the KMS key when performing the S3 operations `put-object` or `get-object`, even with the required environment variables configured:
```
environment:
- PROVIDER_OVERRIDE_S3=asf
- S3_SKIP_KMS_KEY_VALIDATION=0
```
A similar issue, https://github.com/localstack/localstack/issues/7782, was resolved, but I am still facing the problem.
### Expected Behavior
LocalStack should throw either `KMS.DisabledException` or `KMS.KMSInvalidStateException` when `KeyState` is `Disabled` or `PendingDeletion`.
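A minimal sketch of the expected validation follows; the exception classes and function name are illustrative, and LocalStack's real error types may differ:

```python
class DisabledException(Exception):
    """Illustrative stand-in for KMS.DisabledException."""


class KMSInvalidStateException(Exception):
    """Illustrative stand-in for KMS.KMSInvalidStateException."""


def validate_kms_key_for_s3(key_metadata: dict) -> None:
    """Raise if the SSE-KMS key is not usable, mirroring AWS behavior."""
    state = key_metadata.get("KeyState")
    if state == "Disabled":
        raise DisabledException(f"{key_metadata['Arn']} is disabled.")
    if state == "PendingDeletion":
        raise KMSInvalidStateException(f"{key_metadata['Arn']} is pending deletion.")
```

Calling such a check from the `put-object`/`get-object` paths would cover both scenarios described below.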
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker-compose.yml
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
1. My LocalStack does not use the default port `4566`, so I have to run `export EDGE_PORT=<PORT>`.
2. Create the key in `ap-south-1` region:
```
awslocal kms create-key --description "byok confluence" --region ap-south-1
```
3. Get the above key's `Arn` and export it in an env var: `export KEY_ID=<Arn>`
Now, I have two scenarios, one is disable the key and another is schedule the key deletion.
**Scenario 1**
1. Disable the key:
```
awslocal kms disable-key --key-id "$KEY_ID" --region ap-south-1
```
2. Run the command to confirm the key is disabled, i.e. `"KeyState": "Disabled"`:
```
awslocal kms describe-key --key-id "$KEY_ID" --region ap-south-1
```
Output:
```
{
"KeyMetadata": {
"AWSAccountId": "000000000000",
"KeyId": "f0e44ead-305f-4f7d-a492-e93ac8e9bda9",
"Arn": "arn:aws:kms:ap-south-1:000000000000:key/f0e44ead-305f-4f7d-a492-e93ac8e9bda9",
"CreationDate": 1685650954.265076,
"Enabled": false,
"Description": "byok confluence",
"KeyUsage": "ENCRYPT_DECRYPT",
"KeyState": "Disabled",
"Origin": "AWS_KMS",
"KeyManager": "CUSTOMER",
"CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
"KeySpec": "SYMMETRIC_DEFAULT",
"EncryptionAlgorithms": [
"SYMMETRIC_DEFAULT"
],
"MultiRegion": false
}
}
```
3. Calling `put-object` succeeds, which is not expected, since it should throw `KMS.DisabledException`:
```
awslocal s3api put-object --bucket <bucket-name> --key test.json --server-side-encryption aws:kms --ssekms-key-id "$KEY_ID" --body test.json
```
Output:
```
{
"ETag": "\"d56d80665de6de5b29e9b5c9c907f02f\"",
"ServerSideEncryption": "aws:kms",
"VersionId": "1e9973de-eec2-4b69-bc5f-3b7492eb250c",
"SSEKMSKeyId": "arn:aws:kms:ap-south-1:000000000000:key/f0e44ead-305f-4f7d-a492-e93ac8e9bda9"
}
```
Similarly, calling `get-object` using `awslocal s3api get-object --bucket <bucket-name> --key test.json output.txt` succeeds without any error.
**Scenario 2**
1. Schedule the key deletion:
```
awslocal kms schedule-key-deletion --key-id "$KEY_ID" --pending-window-in-days 7 --region ap-south-1
```
2. Run the command to confirm the key is pending deletion, i.e. `"KeyState": "PendingDeletion"`:
```
awslocal kms describe-key --key-id "$KEY_ID" --region ap-south-1
```
Output:
```
{
"KeyMetadata": {
"AWSAccountId": "000000000000",
"KeyId": "f0e44ead-305f-4f7d-a492-e93ac8e9bda9",
"Arn": "arn:aws:kms:ap-south-1:000000000000:key/f0e44ead-305f-4f7d-a492-e93ac8e9bda9",
"CreationDate": 1685650954.265076,
"Enabled": false,
"Description": "byok confluence",
"KeyUsage": "ENCRYPT_DECRYPT",
"KeyState": "PendingDeletion",
"DeletionDate": 1686256290.104983,
"Origin": "AWS_KMS",
"KeyManager": "CUSTOMER",
"CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
"KeySpec": "SYMMETRIC_DEFAULT",
"EncryptionAlgorithms": [
"SYMMETRIC_DEFAULT"
],
"MultiRegion": false
}
}
```
3. As in scenario 1 above, no error occurs when calling `put-object` and `get-object`.
### Environment
```markdown
- OS: Apple M1 Pro, 13.3.1 (a) (22E772610a).
- LocalStack: 1.4.0.
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8422 | https://github.com/localstack/localstack/pull/8423 | bea357fc954da7a7515899db44e6f770f04d2b99 | c9f5b49d7e2231a5a43b471a500269fdebe8b5ce | "2023-06-01T20:37:52Z" | python | "2023-06-06T19:31:25Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,415 | ["localstack/services/sns/provider.py", "tests/integration/test_sns.py", "tests/integration/test_sns.snapshot.json"] | bug: impossible to create/update signatureversion on SNS (Terraform / CDKTF) | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I am using Terraform (via CDKTF) to create SNS topics, explicitly stating that I want to use SignatureVersion 2:
```ts
const myTopic = new SnsTopic(this, 'topic', {
name: 'myTopic',
signatureVersion: 2,
})
```
However, on every plan/apply, I see this logged as if the signatureVersion wasn't set properly on the topic (see the screenshot below showing the planned changes). I can also confirm that my SNS subscription always receives a SignatureVersion of 1.
<img width="377" alt="image" src="https://github.com/localstack/localstack/assets/3542313/6039191e-56b3-4359-bcb3-02da94adca59">
### Expected Behavior
The SignatureVersion should be properly set on the SnsTopic to 2.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker-compose.yml
```yaml
version: '3.9'
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack-pro:2.1.0
ports:
- "127.0.0.1:4566:4566" # LocalStack Gateway
- "127.0.0.1:4510-4559:4510-4559" # external services port range
environment:
- DEBUG=${DEBUG-}
- DOCKER_HOST=unix:///var/run/docker.sock
- LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY- }
- LOCALSTACK_ENFORCE_IAM=1
volumes:
- "${LOCALSTACK_VOLUME_DIR:-~/.localstack-volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
```
Ran using `docker compose up -d localstack`
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
NA
### Environment
```markdown
- OS: MacOs 13.3.1
- LocalStack: 2.1.0 (Pro)
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8415 | https://github.com/localstack/localstack/pull/8458 | 3a590c023ce2d00df9b823d7a95956ecab90eae6 | 710f950c0b57d66b5e8524e2099c0e766650cef5 | "2023-06-01T12:40:51Z" | python | "2023-06-10T13:36:03Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,397 | ["localstack/services/sqs/constants.py", "tests/integration/test_sqs.py", "tests/integration/test_sqs.snapshot.json"] | Unable to change ContentBasedDeduplication attribute on existing queue | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
If I create a queue and try to change its `ContentBasedDeduplication` attribute, I see this error:
` An error occurred (InvalidAttributeName) when calling the SetQueueAttributes operation: Unknown Attribute ContentBasedDeduplication.`
### Expected Behavior
I should be able to set `ContentBasedDeduplication` from `true` to `false` on an existing queue. It appears to work on AWS.
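A sketch of the likely fix shape (hypothetical names; the real allow-list lives inside LocalStack's SQS provider): `ContentBasedDeduplication` needs to be in the set of attributes that `SetQueueAttributes` accepts for FIFO queues.

```python
# Hypothetical allow-list modeling the validation that rejects the attribute.
MUTABLE_QUEUE_ATTRIBUTES = {
    "DelaySeconds",
    "MaximumMessageSize",
    "MessageRetentionPeriod",
    "ReceiveMessageWaitTimeSeconds",
    "VisibilityTimeout",
    "RedrivePolicy",
    "ContentBasedDeduplication",  # the missing entry causing InvalidAttributeName
}


def validate_attribute(name: str) -> None:
    """Reject unknown attributes the way SetQueueAttributes does."""
    if name not in MUTABLE_QUEUE_ATTRIBUTES:
        raise ValueError(f"Unknown Attribute {name}.")
```

With the entry present, the `set-queue-attributes` call in the reproduction below would pass validation instead of raising `InvalidAttributeName`.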
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
aws sqs create-queue --queue-name test1.fifo --endpoint-url http://localhost:4566/ --attributes FifoQueue=true,ContentBasedDeduplication=true
{
"QueueUrl": "http://localhost:4566/000000000000/test1.fifo"
}
aws sqs get-queue-attributes --endpoint-url http://localhost:4566/ --queue-url http://localhost:4566/000000000000/test1.fifo --attribute-names '["ContentBasedDeduplication"]'
{
"Attributes": {
        "FifoQueue": "true",
"ContentBasedDeduplication": "true"
}
}
aws sqs set-queue-attributes --endpoint-url http://localhost:4566/ --queue-url http://localhost:4566/000000000000/test1.fifo --attributes ContentBasedDeduplication=false
An error occurred (InvalidAttributeName) when calling the SetQueueAttributes operation: Unknown Attribute ContentBasedDeduplication.
```
### Environment
```markdown
- OS: MacOs Ventura 13.3.1 (a)
- LocalStack: 2.1.0
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8397 | https://github.com/localstack/localstack/pull/8398 | e1918aa25bf1538e717972cd7e1a9b241224effe | c7dd47905e4058d5b9b952919d78c1db2721d9b3 | "2023-05-30T18:07:08Z" | python | "2023-05-31T11:40:35Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,392 | ["localstack/aws/api/s3/__init__.py", "localstack/aws/spec-patches.json", "localstack/services/s3/provider.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"] | bug: Complete multipart upload for non-existing uploads does not produce the NoSuchUpload error code. | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The Complete Multipart Upload operation generates the **InternalError** error for non-existing uploads.
### Expected Behavior
The Complete Multipart Upload operation generates the **NoSuchUpload** error for non-existing uploads.
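As an illustrative sketch (not LocalStack's code), the lookup should fail with the specific S3 error code rather than an internal error:

```python
class NoSuchUpload(Exception):
    """Illustrative stand-in for the S3 NoSuchUpload error."""

    code = "NoSuchUpload"


def complete_multipart_upload(uploads: dict, upload_id: str):
    """Return the tracked upload, or raise the specific S3 error code."""
    if upload_id not in uploads:
        raise NoSuchUpload(
            "The specified upload does not exist. The upload ID may be invalid, "
            "or the upload may have been aborted or completed."
        )
    return uploads[upload_id]
```

The same lookup-then-raise pattern already applies to Abort Multipart Upload, which behaves correctly (see below).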
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
docker run -it --rm -p 4566:4566 localstack/localstack
aws s3api create-bucket --bucket test --endpoint-url http://localhost:4566 --create-bucket-configuration LocationConstraint=us-west-2
aws s3api complete-multipart-upload --bucket test --endpoint-url http://localhost:4566 --key test --upload-id test
### Environment
```markdown
- OS: Windows 11
- LocalStack: latest
```
### Anything else?
The Abort Multipart Upload operation works correctly in LocalStack.
The official documentation: https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
Amazon S3 throws the correct **NoSuchUpload** error for both Abort and Complete operations. | https://github.com/localstack/localstack/issues/8392 | https://github.com/localstack/localstack/pull/8396 | 32835d80c33373d3a9537a57ff29bd80caac22eb | e1918aa25bf1538e717972cd7e1a9b241224effe | "2023-05-30T02:23:28Z" | python | "2023-05-31T11:40:05Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,380 | ["localstack/aws/accounts.py"] | Need ability to silence production creds warning, or support TEST_AWS_ACCOUNT_ID | Please either support the (single account) TEST_AWS_ACCOUNT_ID overrride that was removed from v2, or provide a way to silence the production creds warning that was added in v2.
In v1, my localstack init/ready.d scripts created resources using the default 0 id, my client services used genuine test accounts, and my localstack v1 logs were clean.
In v2, my localstack v2 logs are full of production creds warnings.
I'm running several services in an e2e test and using localstack to stub only a _subset_ of the AWS services. The services under test are using genuine AWS test account creds in a test cluster.
In this scenario, I don't want the services under test to need to know if they're communicating with a real AWS service (e.g., SSM Parameter Store) or a Localstack emulation (e.g., S3). They shouldn't need to switch creds depending on where their request is sent. That would require the code-under-test to be test-aware, which would result in a different code path being taken during testing.
This mixed-service e2e test worked fine up until now, but now in v2, my localstack logs are spammed with warnings about ignoring production creds.
I tried the new PARITY_AWS_ACCESS_KEY_ID config (v2.1), but then localstack uses the real id instead of the default id, and this results in a mismatch between the resources created with the default id in the init script, and those referenced using a real id during testing. This result is worse than having the logs filled with warnings.
_Originally posted by @joebowbeer in https://github.com/localstack/localstack/issues/8225#issuecomment-1536429238_
| https://github.com/localstack/localstack/issues/8380 | https://github.com/localstack/localstack/pull/8530 | 50520994d622db11556bb53f1593022224744086 | de2629de33cb7a87d8735e7e2483db4b0ac398b1 | "2023-05-26T19:57:15Z" | python | "2023-07-17T14:30:10Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,325 | ["localstack/aws/api/s3/__init__.py", "localstack/aws/spec-patches.json", "tests/integration/s3/test_s3.py"] | bug: S3 bucket owner's name is too long for the API to read. A null exception raised. | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
`Bucket.getOwner()` in the Java API is not able to handle the new, longer owner ID. A null pointer exception is raised.
### Expected Behavior
You should get an `Owner` object in Java with LocalStack version 1.4.0, while version 2.0.0 crashes with a null pointer exception.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
1. To replicate, use the following docker-compose file, twice, once with version 1.4.0, and the other with 2.0.0
Each time run the following command:
2. `awslocal --endpoint="http://localhost:8010" s3api list-buckets`
In the first you should get
```
{
"Buckets": [],
"Owner": {
"DisplayName": "webfile",
"ID": "bcaf1ffd86f41161ca5fb16fd081034f"
}
}
```
In the second you should get
```
{
"Buckets": [],
"Owner": {
"DisplayName": "webfile",
"ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a"
}
}
```
Notice the difference in the large string in the ID.
3. create a new bucket: `awslocal --endpoint="http://localhost:8010" s3 mb s3://mybucket`
4. Use java API to get the owner of the bucket which should map to the behavior [here](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/Bucket.html#getOwner--).
Docker file:
```
version: '3.0'
services:
localstack:
container_name: localstack
image: localstack/localstack:2.0.0 # or switch to 1.4.0
network_mode: bridge
environment:
- DEBUG=1
- DEFAULT_REGION=us-east-1
- LAMBDA_EXECUTOR=docker
- LAMBDA_REMOTE_DOCKER=true
- LAMBDA_REMOVE_CONTAINERS=true
- DOCKER_HOST=unix:///var/run/docker.sock
- LOCALSTACK_API_KEY=XXXXXX
volumes:
- ./localstack/bootstrap:/opt/bootstrap/
ports:
- '8010:4566'
```
### Environment
```markdown
- OS:Ubuntu 22.04.2 LTS
- LocalStack: The problem is found since 2.0.0
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8325 | https://github.com/localstack/localstack/pull/8329 | b21527051837e2d36ecaf31af137268dbffdee17 | 8f434320d95e9b43b1320c5db04964c03c442746 | "2023-05-17T09:16:37Z" | python | "2023-05-18T12:54:57Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,261 | ["localstack/services/events/provider.py", "tests/integration/test_events.py", "tests/integration/test_events.snapshot.json"] | bug: events.PutEvents fails with custom bus name using boto3 | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The LocalStack Docker image >=2.0.0 throws a 500 `InternalError` when calling the `events.PutEvents` operation via boto3 **with a custom event bus name** (i.e. the event bus name is not `default`).
(docker-compose logs)
```
ERROR --- [ asgi_gw_0] l.aws.handlers.logging : exception during call chain: 'customBusName'
INFO --- [ asgi_gw_0] localstack.request.aws : AWS events.PutEvents => 500 (InternalError)
```
(boto3 error traceback)
```
Traceback (most recent call last):
File "eb_publish.py", line 6, in <module>
response = eb.put_events(
File "/Users/sangeeta.jadoonanan/.pyenv/versions/hydra/lib/python3.8/site-packages/botocore/client.py", line 530, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/sangeeta.jadoonanan/.pyenv/versions/hydra/lib/python3.8/site-packages/botocore/client.py", line 960, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InternalError) when calling the PutEvents operation (reached max retries: 4): exception while calling events.PutEvents: 'customBusName'
```
### Expected Behavior
I expected the `events.PutEvents` operation to be called successfully and result in a 200 status code with a custom event bus name, as I've verified with **LocalStack 1.4**.
(docker-compose logs running LocalStack 1.4)
```
INFO --- [ asgi_gw_0] localstack.request.aws : AWS events.PutEvents => 200
```
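The 500 response and the bare `'customBusName'` error text suggest an unhandled `KeyError` during the bus lookup. A hypothetical sketch of a safer lookup (names are illustrative, not LocalStack's actual code):

```python
class ResourceNotFoundException(Exception):
    """Illustrative stand-in for the EventBridge not-found error."""


def get_event_bus(event_buses: dict, name: str = "default"):
    """Look up a bus by name, failing explicitly instead of with a KeyError."""
    try:
        return event_buses[name]
    except KeyError:
        raise ResourceNotFoundException(f"Event bus {name} does not exist.") from None
```

An explicit lookup either finds the registered custom bus or returns a well-formed client error, rather than an `InternalError` 500.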
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
Here's my `docker-compose.yml`:
```yaml
version: "3.4"
services:
eventbridge:
image: localstack/localstack
container_name: hydra_eventbridge
ports:
- 4566:4566
environment:
- EAGER_SERVICE_LOADING=1
- SERVICES=events
```
To start LocalStack, I run `docker-compose up`.
The [image versions `2.0.0`](https://hub.docker.com/layers/localstack/localstack/2.0.0/images/sha256-2d0861a7fd281bb4f8a8404d8249ab4aed278c5ac8bdc55f8c246399e4bffcb8?context=explore) and higher fail when passing a custom bus name. The highest working [image version is `1.4`](https://hub.docker.com/layers/localstack/localstack/1.4/images/sha256-4a966e42eff4bbeec44afe13c2d24a67954742b191caae7c15e56186cc0b9ed8?context=explore).
You can try toggling the versions in the `docker-compose.yml` to verify:
```
image: localstack/localstack:1.4
```
and
```
image: localstack/localstack:2.0.0
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
Here's my Python boto3 snippet to publish an event with a custom bus name:
```python
import boto3
eb = boto3.client(service_name="events", endpoint_url="http://localhost:4566")
eb.put_events(
Entries=[{
"Source": "test-source",
"Detail": "test-detail",
"DetailType": "test-detail-type",
"EventBusName": "customBusName"
}]
)
```
### Environment
```markdown
- OS: macOS Ventura 13.2
- LocalStack: 2.0.0
- Python: 3.8.13
- boto3==1.26.127
- botocore==1.29.127
```
### Anything else?
**TL;DR:** The `1.4` localstack Docker image works for me, and anything from `2.0.0` and higher fails.
I noticed this issue with the latest `localstack` Docker image and worked backwards until I found that the breaking image version was `2.0.0`.
Also worth noting that all versions work when I use the default bus name of `default`. | https://github.com/localstack/localstack/issues/8261 | https://github.com/localstack/localstack/pull/8264 | 82cec4825da24b5fb062f6e05b4bed12a097f4a4 | 0e8a7be2998590a283edfd501119585f971251a9 | "2023-05-05T06:28:16Z" | python | "2023-05-12T20:43:49Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,219 | ["localstack/services/awslambda/invocation/lambda_models.py", "localstack/testing/pytest/fixtures.py", "tests/integration/awslambda/conftest.py", "tests/integration/awslambda/test_lambda.py", "tests/integration/awslambda/test_lambda.snapshot.json", "tests/integration/awslambda/test_lambda_api.py", "tests/integration/awslambda/test_lambda_api.snapshot.json", "tests/integration/awslambda/test_lambda_common.snapshot.json", "tests/integration/awslambda/test_lambda_runtimes.snapshot.json"] | enhancement request: Support Java 17 Lambda runtime | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Enhancement description
AWS Lambda recently added support for Java 17 as managed runtime:
https://aws.amazon.com/about-aws/whats-new/2023/04/aws-lambda-java-17/
### 🧑💻 Implementation
_No response_
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8219 | https://github.com/localstack/localstack/pull/8237 | 8c9f822ecf8547f27831cda454ff804beeae631e | 6859db92e1a67eb7d084d0ab974760c48c8e8829 | "2023-04-28T07:41:41Z" | python | "2023-05-03T10:35:15Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,213 | ["localstack/services/lambda_/urlrouter.py", "tests/aws/services/lambda_/functions/lambda_echo_json_body.py", "tests/aws/services/lambda_/test_lambda.py", "tests/aws/services/lambda_/test_lambda.validation.json"] | Invoking a lambda using a function url always returns 200 | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Given the following JavaScript lambda function
```javascript
exports.main = async (event) => {
console.log('Hello World');
return {
statusCode: '302',
body: {},
headers: {
Location: 'https://example.com'
}
};
}
```
When deploying to LocalStack and adding a function URL, the URL returns the correct `body` and `headers`, but it always returns a 200 status code.
In a real AWS environment, an actual 302 is returned.
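A hypothetical sketch of the response mapping a function-URL router should perform (illustrative, not LocalStack's implementation): the status code from the function's result must be propagated rather than hard-coded to 200.

```python
import json


def to_http_response(result: dict):
    """Map a Lambda function-URL result dict to (status, headers, body)."""
    # The status must come from the result; defaulting to 200 only when absent.
    status = int(result.get("statusCode", 200))
    headers = result.get("headers") or {}
    body = result.get("body", "")
    if not isinstance(body, str):
        # Non-string bodies are serialized to JSON, matching the observed output.
        body = json.dumps(body)
    return status, headers, body
```

For the lambda above, this mapping yields a 302 with the `Location` header intact, matching the real AWS behavior.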
### Expected Behavior
The correct status code is returned
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
My docker compose file
```yaml
version: '3.4'
services:
localstack:
image: localstack/localstack:2.0.2
environment:
- SERVICES=dynamodb,lambda
ports:
- '4566:4566'
expose:
- '4566'
volumes:
- /var/run/docker.sock:/var/run/docker.sock
```
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
deploying using terraform
```hcl
data "aws_iam_policy_document" "assume_role" {
statement {
effect = "Allow"
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
actions = ["sts:AssumeRole"]
}
}
resource "aws_iam_role" "iam_for_lambda" {
name = "iam_for_lambda"
assume_role_policy = data.aws_iam_policy_document.assume_role.json
}
data "archive_file" "lambda" {
type = "zip"
source_file = "../index.js"
output_path = local.archive_file
}
resource "aws_lambda_function" "redirect_lambda" {
filename = local.archive_file
function_name = "redirects"
role = aws_iam_role.iam_for_lambda.arn
handler = "index.main"
source_code_hash = data.archive_file.lambda.output_base64sha256
runtime = "nodejs18.x"
environment {
variables = {
foo = "bar"
}
}
}
resource "aws_lambda_function_url" "lambda_url" {
function_name = aws_lambda_function.redirect_lambda.function_name
authorization_type = "NONE"
}
```
### Environment
```markdown
- OS: docker
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8213 | https://github.com/localstack/localstack/pull/10170 | 8a9845db38bdbdbf7a3da3c088564b697e655f95 | 1515848b7b6044991f32e1db151160f782d0c2c7 | "2023-04-27T10:18:22Z" | python | "2024-02-06T13:31:10Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,183 | ["tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"] | bug: S3 (AccessDenied) while trying to remove the file from the versioned bucket with legal hold locking enabled | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
1. Create a bucket with object locking enabled.
2. Enable versioning in this bucket.
3. Upload file with **key**. Received **version1** for this file.
4. Put a legal hold lock on the file ( **key** - **version1**).
5. Upload another file with the same **key**. Received **version2** for this file.
6. Put a legal hold lock on the file ( **key** - **version2**).
7. Remove the legal hold lock from the first version of the file ( **key** - **version1**).
8. Try to delete the first version of the file ( **key** - **version1**) -> get An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied.
The script would run successfully if we change the order of commands (basically remove versions of the file in reverse order):
(We have one file with version1 and version2, both locked)
- First, remove the lock from the version2.
- Delete version2.
- Remove the lock from version1.
- Delete version1.
### Expected Behavior
After the legal hold lock is removed (Status=OFF) from the first uploaded version of the file, that version should be deletable by its key and version ID.
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
```
#!/bin/bash
echo -e "\033[0;32mCreate two text files\033[0m"
echo "Some text version 01..." > example-v1.txt
echo "Some text version 02..." > example-v2.txt
echo -e "\033[0;32mCreate bucket\033[0m"
awslocal s3api create-bucket --bucket my-bucket-001 --create-bucket-configuration LocationConstraint=ap-southeast-2 --region ap-southeast-2 --object-lock-enabled-for-bucket
echo -e "\033[0;32mEnable versioning\033[0m"
awslocal s3api put-bucket-versioning --bucket my-bucket-001 --versioning-configuration Status=Enabled
echo -e "\033[0;32mUpload first version of the file\033[0m"
version1=$(awslocal s3api put-object --bucket my-bucket-001 --key 123456789 --body example-v1.txt | jq -r '.VersionId')
echo -e "\033[0;32mPut lock on the version1 of the file\033[0m"
awslocal s3api put-object-legal-hold --bucket my-bucket-001 --key 123456789 --version-id $version1 --legal-hold Status=ON
echo -e "\033[0;32mUpload second version of the file\033[0m"
version2=$(awslocal s3api put-object --bucket my-bucket-001 --key 123456789 --body example-v2.txt | jq -r '.VersionId')
echo -e "\033[0;32mPut lock on the version2 of the file\033[0m"
awslocal s3api put-object-legal-hold --bucket my-bucket-001 --key 123456789 --version-id $version2 --legal-hold Status=ON
echo -e "\033[0;32mRemove lock from the version1 of the file\033[0m"
awslocal s3api put-object-legal-hold --bucket my-bucket-001 --key 123456789 --version-id $version1 --legal-hold Status=OFF
echo -e "\033[0;32mDelete version1 of the file\033[0m"
awslocal s3api delete-object --bucket my-bucket-001 --key 123456789 --version-id $version1
```
### Environment
```markdown
- OS: macOS 13.3.1
- LocalStack: latest, 2.0.2, 2.0.3.dev
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8183 | https://github.com/localstack/localstack/pull/8291 | 4b7a6f2378ebe943dea58395d692db305232380b | 8af4f8f77192583df0d4ad3c61ecf83bb6ad1007 | "2023-04-23T07:24:43Z" | python | "2023-05-22T12:33:24Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,178 | ["localstack/cli/localstack.py", "localstack/utils/diagnose.py", "tests/integration/awslambda/conftest.py", "tests/integration/awslambda/test_lambda.py", "tests/integration/awslambda/test_lambda_runtimes.py", "tests/integration/awslambda/test_lambda_runtimes.snapshot.json"] | bug: localstack update all pulls non-existing and deprecated images | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
`localstack update all` pulls:
#### non-existent image
```
ERROR: '['docker', 'pull', 'public.ecr.aws/lambda/python:3.9-rapid-x86_64']': exit code 1; output:
b'Error response from daemon: manifest for public.ecr.aws/lambda/python:3.9-rapid-x86_64 not found:
manifest unknown: Requested image not found\n'
✖ Image public.ecr.aws/lambda/python:3.9-rapid-x86_64 pull failed: Docker process returned with
errorcode 1
```
#### deprecated images
These images were used by the old lambda provider and can probably be removed:
```
✔ Image localstack/lambda:nodejs14.x up-to-date.
✔ Image localstack/lambda:python3.9 up-to-date.
✔ Image mlupin/docker-lambda:nodejs16.x updated.
✔ Image localstack/lambda:provided.al2 up-to-date.
✔ Image localstack/lambda:java11 up-to-date.
✔ Image mlupin/docker-lambda:python3.9 updated.
✔ Image mlupin/docker-lambda:nodejs14.x updated.
✔ Image lambci/lambda:java11 up-to-date.
✔ Image lambci/lambda:java8.al2 up-to-date.
✔ Image lambci/lambda:ruby2.7 up-to-date.
✔ Image lambci/lambda:nodejs12.x up-to-date.
✔ Image lambci/lambda:java8 up-to-date.
✔ Image lambci/lambda:go1.x up-to-date.
```
### Expected Behavior
Just pull existing and non-deprecated images.
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
```bash
localstack update all
─────────────────────────── Updating LocalStack CLI ────────────────────────────
⠏ Updating LocalStack CLI...
[notice] A new release of pip is available: 23.0.1 -> 23.1
[notice] To update, run: pip install --upgrade pip
✔ LocalStack CLI updated
──────────────────────────── Updating docker images ────────────────────────────
✔ Image localstack/localstack-pro:latest updated.
✔ Image localstack/localstack:latest updated.
✔ Image public.ecr.aws/lambda/java:8 up-to-date.
✔ Image public.ecr.aws/lambda/python:3.9 updated.
✔ Image public.ecr.aws/lambda/nodejs:18 updated.
✔ Image public.ecr.aws/lambda/nodejs:14 updated.
✔ Image localstack/localstack-pro:2.0 up-to-date.
✔ Image public.ecr.aws/lambda/nodejs:16 updated.
✔ Image localstack/localstack-pro:2.0.1 up-to-date.
✔ Image public.ecr.aws/lambda/nodejs:12 updated.
✔ Image public.ecr.aws/lambda/java:11 updated.
✔ Image localstack/localstack-docker-desktop:0.4.0 up-to-date.
✔ Image public.ecr.aws/lambda/python:3.7 updated.
✔ Image public.ecr.aws/lambda/python:3.8 updated.
ERROR: '['docker', 'pull', 'public.ecr.aws/lambda/python:3.9-rapid-x86_64']': exit code 1; output:
b'Error response from daemon: manifest for public.ecr.aws/lambda/python:3.9-rapid-x86_64 not found:
manifest unknown: Requested image not found\n'
✖ Image public.ecr.aws/lambda/python:3.9-rapid-x86_64 pull failed: Docker process returned with
errorcode 1
✔ Image public.ecr.aws/lambda/python:3.9-x86_64 updated.
✔ Image public.ecr.aws/lambda/dotnet:6 updated.
✔ Image public.ecr.aws/lambda/go:1 updated.
✔ Image public.ecr.aws/lambda/java:8.al2 updated.
✔ Image public.ecr.aws/lambda/dotnet:core3.1 updated.
✔ Image public.ecr.aws/lambda/ruby:2.7 updated.
✔ Image public.ecr.aws/lambda/provided:al2 updated.
✔ Image public.ecr.aws/lambda/provided:alami updated.
✔ Image localstack/localstack-pro:1.4.0 up-to-date.
✔ Image localstack/localstack-docker-desktop:0.3.1 up-to-date.
✔ Image localstack/bigdata:latest up-to-date.
✔ Image localstack/lambda:nodejs14.x up-to-date.
✔ Image localstack/lambda:python3.9 up-to-date.
✔ Image mlupin/docker-lambda:nodejs16.x updated.
✔ Image localstack/lambda:provided.al2 up-to-date.
✔ Image localstack/lambda:java11 up-to-date.
✔ Image mlupin/docker-lambda:python3.9 updated.
✔ Image mlupin/docker-lambda:nodejs14.x updated.
✔ Image lambci/lambda:java11 up-to-date.
✔ Image lambci/lambda:java8.al2 up-to-date.
✔ Image lambci/lambda:ruby2.7 up-to-date.
✔ Image lambci/lambda:nodejs12.x up-to-date.
✔ Image lambci/lambda:java8 up-to-date.
✔ Image lambci/lambda:go1.x up-to-date.
────────────────────────────────────────────────────────────────────────────────────────────────────────
Images updated: 21, Images failed: 1, total images processed: 39.
```
### Environment
```markdown
- OS: macOS Ventura 13.3.1
- LocalStack: 2.0.2
```
### Anything else?
**Architecture sidenote:**
On ARM macs supporting x86_64 emulation, downloading all Lambda images might fetch a lot of arm64 images although Lambda mostly runs on emulated x86_64 unless deploying arm64 Lambdas specifically or setting `LAMBDA_IGNORE_ARCHITECTURE=1`.
**Status feedback: (nit)**
downloading/updating the large localstack images can take several minutes with slow connections, which makes it appear that the command is hanging on the first image without feedback:
```
──────────────────────────── Updating docker images ────────────────────────────
Processing image... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0% -:--:-- 0/39
```
| https://github.com/localstack/localstack/issues/8178 | https://github.com/localstack/localstack/pull/8780 | 651f60c656896e6d4256323ed8b8525ed6098888 | 14e8b21bbcc8a32cd25e133552f2a97f13740b77 | "2023-04-21T09:39:52Z" | python | "2023-08-01T16:00:27Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,174 | ["localstack/services/s3/utils.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"] | bug: S3 returns PutObject => 404 (NoSuchKey) when creating a folder with special character | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When trying to create a folder with special characters (e.g. `a@a`), S3 returns a 404 (NoSuchKey) error.
What is interesting is that the folder will actually be created eventually:
```xml
<ListBucketResult>
  <IsTruncated>false</IsTruncated>
  <Marker/>
  <Contents>
    <Key>a@a/</Key>
    <LastModified>2023-04-20T12:05:04Z</LastModified>
    <ETag>"d41d8cd98f00b204e9800998ecf8427e"</ETag>
    <Size>0</Size>
    <StorageClass>STANDARD</StorageClass>
    <Owner>
      <DisplayName>webfile</DisplayName>
      <ID>75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
    </Owner>
  </Contents>
</ListBucketResult>
```
### Expected Behavior
S3 PutObject command shall return 200 instead of 404
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
localstack start
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```python
import boto3
from localstack_client.patch import enable_local_endpoints

enable_local_endpoints()

s3 = boto3.client("s3")
buckets = s3.list_buckets()
print(s3.put_object(Bucket='my-test-bucket', Key=('a@a/')))
print(buckets)
```
#### Command output
```
Traceback (most recent call last):
  File "/tmp/test.py", line 7, in <module>
    print(s3.put_object(Bucket='my-test-bucket', Key=('aaabbg%40derr/')))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rmagdy/.local/lib/python3.11/site-packages/botocore/client.py", line 530, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/rmagdy/.local/lib/python3.11/site-packages/botocore/client.py", line 960, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchKey: An error occurred (NoSuchKey) when calling the PutObject operation: The specified key does not exist.
```
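One thing worth checking when debugging keys like `a@a/` is how the `@` is percent-encoded on the wire: the SDK encodes it in the request path, and the server must decode the path back before looking up the key. The round-trip itself is easy to verify in isolation (one plausible failure mode is the provider comparing the encoded form against the decoded key):

```python
from urllib.parse import quote, unquote

key = "a@a/"
encoded = quote(key, safe="/")  # how the key is percent-encoded in the request path
decoded = unquote(encoded)      # what the server should look up after decoding
```

Here `encoded` is `a%40a/`, and decoding restores the original key exactly.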
### Environment
```markdown
- OS: CentOS 7
- LocalStack: 2.0.3.dev
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8174 | https://github.com/localstack/localstack/pull/8470 | 3d8dd50a14d3d2b8ebef3a4dd0e8a0fb7c1fa6c4 | b7e9ac8b4970602d9588bfe02e73bba037bfab8a | "2023-04-20T13:32:02Z" | python | "2023-07-03T13:47:21Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,172 | ["tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"] | bug: s3 copy-object fails and does not generate a checksum | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
In LocalStack, copying an S3 object in place while requesting a checksum causes an error:

```
An error occurred (InvalidRequest) when calling the CopyObject operation: This copy request is illegal because it is trying to copy an object to itself without changing the object's metadata, storage class, website redirect location or encryption attributes.
```
### Expected Behavior
AWS s3 allows you to copy the object in place and request a checksum.
```
aws s3api put-object --bucket my-test-bucket --key go.mod --body .\go.mod
{
    "ETag": "\"f65b3f9b21e0c27be3c60df9c1cd87db\"",
    "ServerSideEncryption": "AES256"
}

aws s3api copy-object --copy-source my-test-bucket/go.mod --bucket my-test-bucket --key go.mod --storage-class STANDARD --checksum-algorithm SHA256
{
    "ServerSideEncryption": "AES256",
    "CopyObjectResult": {
        "ETag": "\"f65b3f9b21e0c27be3c60df9c1cd87db\"",
        "LastModified": "2023-04-19T17:09:16+00:00",
        "ChecksumSHA256": "nAPE7pxlr5L1D67nDJa1rwvmU3fKD2SuaUzeyhSR9Vc="
    }
}
```
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
```
docker compose up -d
awslocal s3api create-bucket --bucket my-test-bucket
awslocal s3api put-object --bucket my-test-bucket --key go.mod --body .\go.mod
awslocal s3api copy-object --copy-source my-test-bucket/go.mod --bucket my-test-bucket --key go.mod --storage-class STANDARD --checksum-algorithm SHA256
```
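The `ChecksumSHA256` value AWS returns is the base64-encoded SHA-256 digest of the object bytes, so it can be computed locally and compared against either backend's response:

```python
import base64
import hashlib


def checksum_sha256(data: bytes) -> str:
    """Compute an S3-style ChecksumSHA256 (base64 of the raw digest)."""
    return base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")


empty = checksum_sha256(b"")
```

For an empty object this yields `47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=`, which matches the checksum S3 reports for zero-byte objects.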
### Environment
```markdown
- OS: Windows 11
- LocalStack: pro latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8172 | https://github.com/localstack/localstack/pull/8216 | c62d8123d2eb3c12f411e6486fc1cf1d32c599b9 | 4b7a6f2378ebe943dea58395d692db305232380b | "2023-04-19T17:18:09Z" | python | "2023-05-22T10:54:03Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,139 | ["localstack/http/client.py", "localstack/services/s3/virtual_host.py", "tests/integration/s3/test_s3.py"] | bug: S3 automatically decodes gzip requests (it should leave that to the client) | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I am storing a gzipped file in an S3 bucket. When retrieving the contents, I expect to have to decode the response. However, once I updated to `2.0.1` I am finding that the response already has the gzipped content decoded.
The following is the response when localstack is run in `legacy` mode. Note the payload reflects the uncompressed contents and the `Content-Length` header reflects the length of the compressed content (ie. the length before it is decompressed).
```bash
$> wget -q -S -O - https://originaluploads.dev.mydomain.com/2023/4/13/79012447-af8a-4732-8595-b378d9ddb8ac/original_upload.gpx
HTTP/1.1 200
Server: nginx/1.22.1
Date: Thu, 13 Apr 2023 21:35:21 GMT
Content-Type: application/gpx+xml
Content-Length: 484
Connection: keep-alive
x-amz-version-id: 721618c3-d9d2-42bd-b433-c0530c4e3a09
cache-control: no-cache
content-encoding: gzip
ETag: "1d1f83feb12d0f0185a12bb7909b3955"
last-modified: Thu, 13 Apr 2023 21:34:31 GMT
x-amz-server-side-encryption: AES256
x-amz-server-side-encryption-bucket-key-enabled: false
x-amzn-requestid: zeRF2ZIh8iI5WgtR21aoNDIMAgxatwkKndFENfFQc1dfl7tqffx4
Access-Control-Allow-Credentials: true
x-amz-request-id: 5BC4A0ADB1AC8913
x-amz-id-2: MzRISOwyjmnup5BC4A0ADB1AC89137/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp
accept-ranges: bytes
content-language: en-US
gu8d͕ˎ�0��y%����\����AZm�5KX�^
...
$>
```
The following is what I get when I disable legacy mode and use 2.0.1's new S3 provider. I have uploaded the same file (even though the file path is different in this second run). Note how the payload is now the uncompressed file and the `Content-Length` value reflects the uncompressed content length:
```bash
$> wget -q -S -O - https://originaluploads.dev.mydomain.com/2023/4/13/e01bcd66-f84b-44af-b4ba-5f6819a22ff2/original_upload.gpx
HTTP/1.1 200
Server: nginx/1.22.1
Date: Thu, 13 Apr 2023 21:29:22 GMT
Content-Type: application/gpx+xml
Content-Length: 1630
Connection: keep-alive
accept-ranges: bytes
Last-Modified: Thu, 13 Apr 2023 21:11:33 GMT
ETag: "de859a1f6072e5582b5877dc6fb278b2"
x-amz-version-id: c025335d-b74a-4f01-b36a-67a378338145
Cache-Control: no-cache
Content-Encoding: gzip
x-amz-server-side-encryption: AES256
x-amz-server-side-encryption-bucket-key-enabled: false
x-amz-request-id: 8WKX4I040PZOR9HC5S41CMBK7VQEVHSNN1OPEV8AZ7WP4JECHPEZ
x-amz-id-2: MzRISOwyjmnup8WKX4I040PZOR9HC5S41CMBK7VQEVHSNN1OPEV8AZ7WP4JECHPEZ7/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp
<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="Apple Health Export" xmlns="http://www.topografix.com/GPX/1/1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.topografix.com/GPX/1/1 http://www.topografix.com/GPX/1/1/gpx.xsd">
...
$>
```
### Expected Behavior
The S3 requests should be returning the compressed payload and relying on the client to do the decompression.
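The distinction is easy to demonstrate with the standard library: the stored (compressed) byte length is what `Content-Length` should report when the payload is served with `Content-Encoding: gzip`, and decompressing is the client's job, not the server's:

```python
import gzip

# Illustrative payload; repetition stands in for real compressible GPX data.
raw = b'<?xml version="1.0" encoding="UTF-8"?><gpx>...</gpx>' * 20
stored = gzip.compress(raw)  # what S3 keeps and should send verbatim

# Server view: Content-Length must be the compressed size.
content_length = len(stored)

# Client view: only the client decodes Content-Encoding: gzip.
decoded = gzip.decompress(stored)
```

In the two responses above, the 484 vs. 1630 `Content-Length` values correspond exactly to `len(stored)` vs. `len(raw)` here.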
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
docker-compose.yml
### Environment
```markdown
- OS: official Docker image
- LocalStack: 2.0.1
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8139 | https://github.com/localstack/localstack/pull/8148 | f9a4b6344814254ce3adaf91d807ff265c0472a1 | 6c31ed2e5cb42d282bc3eaa657bf3ecd71767cd6 | "2023-04-13T21:48:09Z" | python | "2023-04-17T21:52:36Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,092 | ["localstack/services/opensearch/provider.py"] | bug: pod save/restore of elastic search domain fails to restart ES (2.0.1dev) | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Create an ElasticSearch domain. Run pod save. Restart LocalStack. Run pod restore. Domain still exists in config, but the ES software isn't downloaded/installed/started.
### Expected Behavior
That it restarts ES. This worked in 1.14
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
```
localstack start -d
awslocal es create-elasticsearch-domain --elasticsearch-version 7.7 --domain-name audit
localstack pod save file://$PWD/estest.pod
localstack stop
localstack start
localstack pod load file://$PWD/estest.pod
```
### Environment
```markdown
- OS: Mac 13.2.1
- LocalStack: 2.0.1dev
```
### Anything else?
Note that pod save/restore does preserve your ES domains; it just doesn't start/install ES.
No interesting messages from the localstack console
```
💻 LocalStack CLI 2.0.0.post1
[15:22:37] starting LocalStack in Docker mode 🐳 localstack.py:142
2023-04-06T15:22:37.327 WARN --- [ MainThread] localstack_ext.plugins : Unable to start DNS: cannot import name 'dns_server' from 'localstack_ext.services' (/opt/homebrew/lib/python3.11/site-packages/localstack_ext/services/__init__.py)
─────────────────────────────────────────────────────────────────────────── LocalStack Runtime Log (press CTRL-C to quit) ───────────────────────────────────────────────────────────────────────────
2023-04-06T15:22:37.413 WARN --- [ MainThread] localstack_ext.plugins : failed to configure DNS: cannot import name 'dns_server' from 'localstack_ext.services' (/opt/homebrew/lib/python3.11/site-packages/localstack_ext/services/__init__.py)
LocalStack version: 2.0.1.dev
LocalStack Docker container id: 77863cef09a0
LocalStack build date: 2023-04-05
LocalStack build git hash: ae1dfbf9
2023-04-06T22:22:38.738 WARN --- [ MainThread] localstack.deprecations : DEFAULT_REGION is deprecated (since 0.12.7) and will be removed in upcoming releases of LocalStack! LocalStack now has full multi-region support. Please remove this environment variable.
2023-04-06T22:22:38.825 WARN --- [-functhread3] hypercorn.error : ASGI Framework Lifespan error, continuing without Lifespan support
2023-04-06T22:22:38.825 WARN --- [-functhread3] hypercorn.error : ASGI Framework Lifespan error, continuing without Lifespan support
2023-04-06T22:22:38.827 INFO --- [-functhread3] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit)
2023-04-06T22:22:38.827 INFO --- [-functhread3] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit)
Ready.
2023-04-06T22:23:11.871 INFO --- [ asgi_gw_0] localstack.request.http : GET / => 200
2023-04-06T22:23:11.882 INFO --- [ asgi_gw_0] localstack.request.http : GET /_localstack/pods/environment => 200
2023-04-06T22:23:12.034 INFO --- [ asgi_gw_0] localstack.request.http : POST /_localstack/pods => 201
``` | https://github.com/localstack/localstack/issues/8092 | https://github.com/localstack/localstack/pull/9555 | 016d57bf2647e97eb94e63c46b2f4e0de18a171b | f3092183524957979129e69d9ef5d81608630ea2 | "2023-04-06T22:27:53Z" | python | "2023-11-06T10:08:30Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,090 | ["localstack/aws/protocol/service_router.py", "tests/unit/aws/test_service_router.py"] | bug: 'pictures' a reserved word for an S3 bucket name? | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I created an S3 bucket with the name `pictures`. Each time I tried to write to it, I would get a CORS error.
I used the same code as I used for another bucket which was working fine (`originaluploads`). I even verified that the same code path that was causing issues under `pictures` worked fine when pointed against the `originaluploads` bucket.
When I renamed the `pictures` bucket to `userpictures`, the code worked fine.
Is `pictures` a reserved bucket name in LocalStack? If so, I couldn't find any documentation. Otherwise, seems like a bug.
### Expected Behavior
`pictures`-named S3 buckets shouldn't reject with CORS errors.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
Attempt to write to an S3 bucket named `pictures`.
### Environment
```markdown
- OS: Docker
- LocalStack: 2.0.0
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8090 | https://github.com/localstack/localstack/pull/8091 | 0ea6706d43f9dac69cf1a2c51c99a95a807164a4 | cd5791504c0de9ca0aefe07d9e72b06081f5a6c6 | "2023-04-06T13:40:26Z" | python | "2023-04-06T20:15:19Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,089 | ["localstack/services/transcribe/provider.py"] | bug: Transcribe does not store output in designated S3 bucket | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When a transcription job is scheduled, I specify `OutputBucketName` and `OutputKey` for the transcript. When the job completes, I check whether the output transcript was put there; unfortunately, it's not.
### Expected Behavior
Output transcription should be stored on S3 at the given bucket and key.
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
Using Test Containers
Reproduced in project: https://github.com/kkocel/local-stack-transcribe-repro
### Environment
```markdown
- OS: macOS
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8089 | https://github.com/localstack/localstack/pull/8612 | c758aaee70d30e62d655e249bdb0d9678840ccaf | 88d3eb91c4b831f04c2ec600a4a8ba6d93c0a211 | "2023-04-06T10:11:53Z" | python | "2023-07-04T14:56:04Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,083 | ["localstack/services/kms/provider.py", "tests/integration/test_kms.py", "tests/integration/test_kms.snapshot.json"] | bug: KMS keys has no check on the plaintext size to encrypt | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
LocalStack KMS allows encrypting plaintext larger than the maximum size permitted by the key spec, which differs from AWS KMS. This results in inconsistent behavior between LocalStack and AWS.
### Expected Behavior
LocalStack KMS should validate the plaintext size against the key spec before encrypting, just like AWS KMS. This keeps encryption behavior consistent between LocalStack and AWS.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
```yaml
version: "3.8"
services:
localstack:
container_name: localstack_main
image: localstack/localstack:1.4.0
restart: always
ports:
- "127.0.0.1:53:53"
- "127.0.0.1:53:53/udp"
- "127.0.0.1:443:443"
- "127.0.0.1:4510-4559:4510-4559"
- "127.0.0.1:4566:4566"
environment:
- DEFAULT_REGION=eu-west-3
- LAMBDA_EXECUTOR=docker
- LAMBDA_REMOTE_DOCKER=true
- LAMBDA_REMOVE_CONTAINERS=true
- DEBUG=1
- PERSISTENCE=1
- DNS_ADDRESS=0
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
- ./scripts/localstack_bootstrap:/docker-entrypoint-initaws.d/
env_file:
- .env-localstack
```
#### Python script to reproduce
```py
import boto3
def assume_role():
sts_client = boto3.client(
"sts",
endpoint_url="http://localhost:4566",
)
assumed_role_object = sts_client.assume_role(
RoleArn="arn:aws:iam::123456789101:role/fake-role",
RoleSessionName="fake-role",
)
credentials = assumed_role_object["Credentials"]
resource = boto3.client(
"kms",
aws_access_key_id=credentials["AccessKeyId"],
aws_secret_access_key=credentials["SecretAccessKey"],
aws_session_token=credentials["SessionToken"],
region_name="eu-west-3",
endpoint_url="http://localhost:4566",
)
return resource
role = assume_role()
response = role.create_key(
KeyUsage="ENCRYPT_DECRYPT",
CustomerMasterKeySpec="RSA_4096",
Origin="AWS_KMS",
)
role.create_alias(
AliasName="alias/my-custom-key", TargetKeyId=response["KeyMetadata"]["KeyId"]
)
response = role.encrypt(
KeyId="alias/my-custom-key",
Plaintext="Based on the boto3 documentation anything bigger than 446 bytes should raise an error",
EncryptionAlgorithm="RSAES_OAEP_SHA_256",
)
```
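The limits themselves follow directly from RSAES-OAEP: for a modulus of k bytes and a hash of hLen bytes, the maximum plaintext is k − 2·hLen − 2. A small sketch of the check a provider could perform (the function name is an assumption for illustration):

```python
# Hash output lengths in bytes for the OAEP algorithms KMS supports.
HASH_LEN = {"RSAES_OAEP_SHA_1": 20, "RSAES_OAEP_SHA_256": 32}


def max_plaintext_bytes(key_bits: int, algorithm: str) -> int:
    """RSAES-OAEP limit: k - 2*hLen - 2 (RFC 8017, section 7.1.1)."""
    k = key_bits // 8
    return k - 2 * HASH_LEN[algorithm] - 2


limit = max_plaintext_bytes(4096, "RSAES_OAEP_SHA_256")
```

For RSA_4096 with `RSAES_OAEP_SHA_256` this gives 446 bytes, matching the limit stated in the boto3 `encrypt` documentation linked above; a provider-side check would reject longer plaintexts before encrypting.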
### Environment
```markdown
- OS: Ubuntu 22.04.2
- LocalStack: 1.4.0
```
### Anything else?
Yes, I'm using the Pro version.
You can find information about the permissible size of plaintexts on the boto3 documentation, which is available at this link: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kms/client/encrypt.html.
I have implemented a solution for the issue of plaintext size verification on my Localstack forked project. However, I would like to confirm whether the lack of verification for plaintext size was intentional or not. If it was not, I am excited to contribute to the project by submitting my solution. | https://github.com/localstack/localstack/issues/8083 | https://github.com/localstack/localstack/pull/8113 | b2cc972824d9c77ea15364908650568ad1070b31 | 9debbcc15c83727a560c78f336b0cc44dc516cec | "2023-04-05T22:42:42Z" | python | "2023-04-12T06:47:17Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,073 | ["localstack/services/s3/provider.py"] | bug: Cannot connect to S3 using v2.x Docker container inside GitLab Runner | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
No matter what I do I cannot seem to connect to LocalStack v2.x inside of a GitLab container.
Locally, `http://s3.localhost.localstack.cloud:4566` works with the same container, without having to set path style.
In GitLab,
This works using v1.4 (for whatever reason this only worked when I set the path style)
```yaml
services:
- name: localstack/localstack:1.4
alias: localstack
variables:
AWS_ACCESS_KEY_ID: test
AWS_DEFAULT_REGION: us-east-1
AWS_SECRET_ACCESS_KEY: test
DEBUG: 1
FORCE_NONINTERACTIVE: "true"
LOCALSTACK_HOST: localstack
SERVICES: s3
S3_ENDPOINT_OVERRIDE: http://localstack:4566
S3_PATH_STYLE_ACCESS: "true"
```
Now, it either cannot connect or gives a "length" issue when trying to create a bucket.
```yaml
services:
- name: localstack/localstack #:1.4
alias: localstack
variables:
AWS_ACCESS_KEY_ID: test
AWS_DEFAULT_REGION: us-east-1
AWS_SECRET_ACCESS_KEY: test
DEBUG: 1
FORCE_NONINTERACTIVE: "true"
LOCALSTACK_HOST: localstack
SERVICES: s3
SKIP_E2E_TESTS: "false"
S3_ENDPOINT_OVERRIDE: http://s3.localstack.localstack.cloud:4566
S3_PATH_STYLE_ACCESS: "false"
```
### Expected Behavior
Access to S3 using v2 works the same, or with minor changes, since it worked with v1.4.
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
Running localstack as a GitLab service inside of a runner that does not have increased docker privileges.
### Environment
```markdown
- LocalStack: v2.0
```
### Anything else?
I've tried everything I can think of, assuming it is the URL endpoint (we had to make that change locally earlier), but no luck. | https://github.com/localstack/localstack/issues/8073 | https://github.com/localstack/localstack/pull/8082 | d80b76a0de459fe728a175dce664c5bcc93c6a60 | ae1dfbf9a57871cb089c26f8c8d2e24fcd567fc2 | "2023-04-05T00:12:04Z" | python | "2023-04-05T17:01:42Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,072 | ["localstack/services/s3/provider.py"] | bug: pod save fails when large lambdas are uploaded. (2.1.1.dev) | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
LocalStack prints a Python stack trace when attempting to save a large Lambda (>1 GB) via "pod save". The trace references S3, but that is a red herring. Once this error is encountered, the pod save exits, leaving other services absent from the file.
### Expected Behavior
All services should be saved to the file.
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
```
localstack start -d
awslocal lambda create-function --function-name kinesisToElasticSearch --zip-file fileb://kinesisToElasticSearch.zip --handler index.handler --runtime nodejs12.x --role arn:aws:iam::000000000000:role/lambda-role --region us-east-1
localstack pod save file://$PWD/localstack.pod
```
### Environment
```markdown
- OS:MacOS Ventura 13.2.1
- LocalStack: 2.0.1.dev
```
### Anything else?
Note if you delete the lambda, the problem still occurs. I have not determined the exact limit on the lambda zip file.
Output:
```
2023-04-04T23:51:16.437 WARN --- [-functhread3] hypercorn.error : ASGI Framework Lifespan error, continuing without Lifespan support
2023-04-04T23:51:16.437 WARN --- [-functhread3] hypercorn.error : ASGI Framework Lifespan error, continuing without Lifespan support
2023-04-04T23:51:16.438 INFO --- [-functhread3] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit)
2023-04-04T23:51:16.438 INFO --- [-functhread3] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit)
Ready.
2023-04-04T23:52:32.051 INFO --- [ asgi_gw_0] localstack.request.aws : AWS lambda.CreateFunction => 201
2023-04-04T23:52:32.126 INFO --- [ asgi_gw_2] localstack.request.aws : AWS sts.AssumeRole => 200
2023-04-04T23:52:43.746 INFO --- [ asgi_gw_2] localstack.request.http : GET / => 200
2023-04-04T23:52:43.797 ERROR --- [ asgi_gw_0] l.pods.manager : Error while saving state of service s3 into pod
Traceback (most recent call last):
  File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_persistence/pods/manager.py", line 15, in extract_into
    try:A.accept_state_visitor(E)
  File "/opt/code/localstack/localstack/services/plugins.py", line 145, in accept_state_visitor
    ReflectionStateLocator(service=self.name()).accept_state_visitor(visitor)
  File "/opt/code/localstack/localstack/state/inspect.py", line 133, in accept_state_visitor
    visitor.visit(attribute)
  File "/usr/local/lib/python3.10/functools.py", line 926, in _method
    return method.__get__(obj, cls)(*args, **kwargs)
  File "/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_persistence/pods/save.py", line 27, in _
    G=os.path.join(constants.API_STATES_DIRECTORY,C,B,E,constants.MOTO_BACKEND_STATE_FILE)
  File "/usr/local/lib/python3.10/posixpath.py", line 90, in join
    genericpath._check_arg_types('join', a, *p)
  File "/usr/local/lib/python3.10/genericpath.py", line 152, in _check_arg_types
    raise TypeError(f'{funcname}() argument must be str, bytes, or '
TypeError: join() argument must be str, bytes, or os.PathLike object, not 'NoneType'
2023-04-04T23:52:43.805 INFO --- [ asgi_gw_0] localstack.request.http : GET /_localstack/pods/state => 200
```
2023-04-04T23:52:44.031 INFO --- [ asgi_gw_2] localstack.request.http : GET /_localstack/pods/environment => 200 | https://github.com/localstack/localstack/issues/8072 | https://github.com/localstack/localstack/pull/8699 | 102d7bda5f10b3542ef3355f4734cea2d0578cb8 | 2984115cf3c7c6f1a87e54864d67f19be92e980e | "2023-04-05T00:06:33Z" | python | "2023-07-14T11:57:45Z" |
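For what it's worth, the final `TypeError` is independent of S3 or the lambda size; it reproduces with the stdlib alone whenever one of the state-path components is `None`. The guard below is just an illustration of the failure mode, not LocalStack's actual fix:

```python
import os

# Reproduce the crash from the traceback: one path component is None
try:
    os.path.join("api_states", None, "moto.state")
except TypeError as exc:
    error = str(exc)
assert "NoneType" in error

# Skipping missing components avoids the crash
parts = ["api_states", None, "moto.state"]
safe_path = os.path.join(*[p for p in parts if p is not None])
assert safe_path == os.path.join("api_states", "moto.state")
```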
closed | localstack/localstack | https://github.com/localstack/localstack | 8,044 | ["localstack/services/transcribe/provider.py", "localstack/testing/pytest/fixtures.py"] | bug: Transcribe StartTranscriptionJobResponse does not contain TranscriptionJob object | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When starting a transcription job, the `TranscriptionJob` object in the response is null:
<img width="1154" alt="Screenshot 2023-04-03 at 09 21 09" src="https://user-images.githubusercontent.com/454412/229447066-1671f811-da49-478d-bf24-252f30b17699.png">
### Expected Behavior
Response from the real AWS stack - job object is not null
<img width="1161" alt="Screenshot 2023-04-03 at 09 30 13" src="https://user-images.githubusercontent.com/454412/229441498-06125248-3b68-4679-8204-0163e8245db6.png">
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
Using test containers, reproducing project can be found here:
https://github.com/kkocel/local-stack-transcribe-repro
### Environment
```markdown
- OS: macOS
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8044 | https://github.com/localstack/localstack/pull/8059 | 81f96fda4f5889181d024c3596e1c4f3f8853782 | 3d1195139c382833bc49d45230d3cb040b6d9c17 | "2023-04-03T07:58:10Z" | python | "2023-04-04T08:22:27Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,034 | ["localstack/services/sns/provider.py", "tests/integration/test_sns.py", "tests/integration/test_sns.snapshot.json"] | bug: Internal exception after publishing to sns topic that has a http subscription endpoint - TypeError: object supporting the buffer API required | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Im using localstack via a local minikube cluster using the helm charts:
https://github.com/localstack/helm-charts
Ive set up the following topic and endpoint:
```bash
awslocal sns create-topic --name test
awslocal sns subscribe \
--protocol http \
--topic-arn arn:aws:sns:us-east-1:000000000000:test \
--endpoint-url http://localstack:4566 \
--notification-endpoint http://myapp:8000/subscribe
```
Ive got some code in a where I publish a message in the topic manually via boto3, and it calls my endpoint successfully which I can also confirm from my logs of the pod running the endpoint:
` INFO: 10.244.0.40:40552 - "POST /subscribe HTTP/1.1" 200 OK`
However immediately after, I get an error from localstack pod that looks like:
```
2023-04-01T01:15:55.238 INFO --- [ sns_pub_0] l.services.sns.publisher : Received error on sending SNS message, putting to DLQ (if configured): object supporting the buffer API required
2023-04-01T01:15:55.239 ERROR --- [ sns_pub_0] l.services.sns.publisher : An internal error occurred while trying to send the SNS message SnsMessage(type='Notification', message={'default': {'foo': 'bar'}}, message_attributes={}, message_structure='json', subject='example subject', message_deduplication_id=None, message_group_id=None, token=None, message_id='6f150269-fe84-4daf-876f-9c7cfae99f87')
Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/sns/publisher.py", line 444, in _publish
store_delivery_log(message_context, subscriber, success=True, delivery=delivery)
File "/opt/code/localstack/localstack/services/sns/publisher.py", line 853, in store_delivery_log
"messageMD5Sum": md5(message),
File "/opt/code/localstack/localstack/utils/strings.py", line 145, in md5
m.update(to_bytes(string))
TypeError: object supporting the buffer API required
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/sns/publisher.py", line 80, in publish
self._publish(context=context, subscriber=subscriber)
File "/opt/code/localstack/localstack/services/sns/publisher.py", line 451, in _publish
store_delivery_log(message_context, subscriber, success=False)
File "/opt/code/localstack/localstack/services/sns/publisher.py", line 853, in store_delivery_log
"messageMD5Sum": md5(message),
File "/opt/code/localstack/localstack/utils/strings.py", line 145, in md5
m.update(to_bytes(string))
TypeError: object supporting the buffer API required
```
From what I can tell, this doesn't appear to be a result of any code or data I'm in control of, or at least I don't believe it is, but I could be wrong. As I mentioned, my subscription endpoint appears to work as expected, and the exception is raised internally by the LocalStack modules afterwards. Any direction on how I might address this bug and prevent the exception would be great, thanks.
Thanks.
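For reference, the error at the bottom of the traceback is just Python's hashing API rejecting a non-bytes payload; it can be reproduced stand-alone. The `json.dumps` workaround below is an illustration, not necessarily how LocalStack fixes it:

```python
import hashlib
import json

message = {"default": {"foo": "bar"}}  # body published with MessageStructure=json

# What the traceback shows: a dict reaching a function that expects bytes
try:
    hashlib.md5(message)
except TypeError as exc:
    error = str(exc)
assert "buffer" in error

# Serializing the payload first makes the hash well-defined
digest = hashlib.md5(json.dumps(message).encode("utf-8")).hexdigest()
assert len(digest) == 32
```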
### Expected Behavior
Interanal exception doesnt occur after publish to topic.
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
As described in behavior.
### Environment
```markdown
- OS: Pop!_OS 22.04
- LocalStack: 1.4
```
### Anything else?
As requested. | https://github.com/localstack/localstack/issues/8034 | https://github.com/localstack/localstack/pull/8055 | b216d67206d2671869ce06d98441520d9c860892 | e1b80232a5aa08f5ccc2b3f7c0950297bbe498f1 | "2023-04-01T01:35:09Z" | python | "2023-04-04T17:58:24Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,030 | ["localstack/aws/api/s3/__init__.py", "localstack/aws/protocol/serializer.py", "localstack/aws/spec-patches.json", "localstack/services/s3/provider.py"] | bug: CreateMultipartUpload should return InitiateMultipartUploadResult element | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Initiating a multipart S3 upload to LocalStack returns an XML document with a root element CreateMultipartUploadOutput. This causes an error in the ex_aws_s3 library I'm using. We've used the combination of LocalStack + ex_aws_s3 successfully for a few years and the error started occurring a few days ago.
### Expected Behavior
The element should be called InitiateMultipartUploadResult.
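To make the mismatch concrete, here is a minimal sketch of the rename that strict client parsers expect (the element contents are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Roughly what LocalStack currently returns (payload values are made up)
wrong = (
    "<CreateMultipartUploadOutput>"
    "<Bucket>my-bucket</Bucket><Key>multipart/01</Key><UploadId>abc</UploadId>"
    "</CreateMultipartUploadOutput>"
)
root = ET.fromstring(wrong)
assert root.tag == "CreateMultipartUploadOutput"

# Renaming the root to the documented element satisfies strict parsers
root.tag = "InitiateMultipartUploadResult"
fixed = ET.tostring(root, encoding="unicode")
assert fixed.startswith("<InitiateMultipartUploadResult>")
```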
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run -p 4566:4566 localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
aws --debug --endpoint-url=http://localhost:4566 s3api create-multipart-upload --bucket my-bucket --key 'multipart/01'
### Environment
```markdown
- OS: Ubuntu 20.04.5 LTS
- LocalStack: latest
```
### Anything else?
S3 docs specifying InitiateMultipartUploadResult:
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html
Reference in official AWS Java client:
https://github.com/aws/aws-sdk-java/blob/ef22b94750d4c155d74b09012b3be5d9d98bbd33/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/model/transform/XmlResponsesSaxParser.java#L1865
Where our particular error gets raised in ex_aws_s3:
https://github.com/ex-aws/ex_aws_s3/blob/7e12f2b0578b2620c047d2e44c40a6de8773584a/lib/ex_aws/s3/parsers.ex#L78 | https://github.com/localstack/localstack/issues/8030 | https://github.com/localstack/localstack/pull/8037 | 5c99dde9ce932e58cdbff55dd15c5341de57e23e | 8f196a0fd079ca5e3523dcfb518bf6cbb0600cba | "2023-03-31T20:20:16Z" | python | "2023-04-03T09:03:17Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 8,002 | ["localstack/aws/protocol/serializer.py", "tests/integration/test_sqs.py"] | bug: " when sending replaced by quotes when receiving | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Sending a message with body:
`&quot;&quot;&quot;&quot;`
is received on the consumer side with body:
`""""`
### Expected Behavior
Sending a message with body:
`&quot;&quot;&quot;&quot;`
is received on the consumer side with body:
`&quot;&quot;&quot;&quot;`
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
Running localstack in docker desktop on Mac using helm chart
https://localstack.github.io/helm-charts
version 0.3.7
Image version: 0.14.3
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
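The mangling looks like an XML-entity round trip going wrong somewhere between the send and receive serializers. The expected single round trip can be sketched with the stdlib (this illustrates the desired behavior, not LocalStack's code path):

```python
from xml.sax.saxutils import escape, unescape

body = '"' * 4  # a message body consisting of four double quotes
wire = escape(body, {'"': '&quot;'})
assert wire == "&quot;&quot;&quot;&quot;"

# A correct round trip unescapes exactly once, recovering the original body;
# unescaping a body that already contained literal '&quot;' text would corrupt it
assert unescape(wire, {"&quot;": '"'}) == body
```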
### Environment
```markdown
- OS: macOS Monterey
- LocalStack: 0.14.3
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/8002 | https://github.com/localstack/localstack/pull/8180 | 65e08f6786c797ece211371fbef125a6323a8672 | eff7f2c7c5bffcde65c1c91529749a22e80e48b3 | "2023-03-29T12:28:24Z" | python | "2023-04-21T12:49:47Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,993 | ["localstack/config.py", "localstack/services/stores.py", "tests/integration/test_stores.py", "tests/unit/test_stores.py"] | feature request: Let us use custom regions again | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
I prefer not to use the real regions when doing tests or local stuff, to make sure that if it accidentally picks up real credentials, it won't actually create or interact with resources in AWS. This used to work fine in LocalStack - I even dug through the code to find that there is an option in the `RegionStore` to disable validation of the region string. At some point, you removed any way to disable that validation though, which means it just fails all requests using a custom region.
### 🧑💻 Implementation
There's already a flag passed to the account and region store constructors to disable validation. It seems like an environment variable or configuration flag to disable validation could be propagated down to it.
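A sketch of how the requested escape hatch could look; `SKIP_REGION_VALIDATION` is an invented name here, not an existing LocalStack option:

```python
import os

VALID_REGIONS = {"us-east-1", "eu-west-1"}

def validate_region(region: str) -> None:
    # Hypothetical flag; the request is only that *some* config option exists
    if os.environ.get("SKIP_REGION_VALIDATION") == "1":
        return
    if region not in VALID_REGIONS:
        raise ValueError(f"invalid region: {region}")

os.environ["SKIP_REGION_VALIDATION"] = "1"
validate_region("my-fake-region-1")  # accepted once validation is disabled

del os.environ["SKIP_REGION_VALIDATION"]
try:
    validate_region("my-fake-region-1")
except ValueError as exc:
    error = str(exc)
assert "invalid region" in error
```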
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7993 | https://github.com/localstack/localstack/pull/7997 | 6dd1c414a817cf599e8af1d2bfd72d7e8120d1ab | cb50656ebe461cab201411eeb8c26338b51533ba | "2023-03-28T18:28:07Z" | python | "2023-03-29T15:14:57Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,975 | ["localstack/services/apigateway/provider.py", "tests/integration/apigateway/test_apigateway_api.py", "tests/integration/apigateway/test_apigateway_api.snapshot.json"] | bug: latest image on docker hub still fails to stand up | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When trying to run Chalice on LocalStack with the latest image, I was having the same problem as in issue https://github.com/localstack/localstack/issues/7964. After the update, the problem changed to this one:
```
b'{"FunctionName": "sync", "FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:sync", "Runtime": "python3.9", "Role": "arn:aws:iam::000000000000:role/local", "Handler": "app.recurrentSync",
"CodeSize": 31013564, "Description": "", "Timeout": 300, "MemorySize": 512, "LastModified": "2023-03-27T09:23:46.147967+0000", "CodeSha256": "yAvtez9/MH2+s+o31HmmMK83UbRAeOXxwVF8JiYhw04=",
"Version": "$LATEST", "Environment": {"Variables": {"SQS_QUEUE_URL": "http://${LOCALSTACK_HOSTNAME}:4566", "PSQL_ADDRESS": "postgresql://user:password@localhost:5432/application", "BRICK_ENVIRONMENT": "sgs-staging", "DD_FLUSH_TO_LOG": "TRUE", "CSTOOL_ENVIRONMENT": "sgs-staging", "CSTOOL_KEY_ID": "7900", "CSTOOL_VALUE_ID": "2797", "ENVIRONMENT": "LOCAL", "CORS_ALLOWED_ORIGIN": "http://localhost:3000", "LOG_LEVEL": "DEBUG", "S3_WWE_BLUEPRINTS_BUCKET": "wl-blueprints", "S3_WWE_ASSETS_BUCKET": "wl-png", "SYNC_QUEUE": "sync-queue", "SYNC_QUEUE_REGION": "us-east-1", "FRONTEND_URL": "http://localhost:3000"}}, "TracingConfig": {"Mode": "PassThrough"}, "RevisionId": "5c5854b0-0b1c-43ed-898e-e7ca2d072ee3",
"State": "Failed", "StateReason": "Error while creating lambda: Docker not available", "StateReasonCode": "InternalError", "LastUpdateStatus": "Failed", "PackageType": "Zip", "Architectures": ["x86_64"], "EphemeralStorage": {"Size": 512}, "SnapStart": {"ApplyOn": "None", "OptimizationStatus": "Off"}, "RuntimeVersionConfig": {"RuntimeVersionArn": "arn:aws:lambda:us-east-1::runtime:8eeff65f6809a3ce81507fe733fe09b835899b99481ba22fd75b5a7338290ec1"}}'
03/27/2023 06:23:51 AM DEBUG botocore.hooks Event needs-retry.lambda.GetFunctionConfiguration: calling handler <botocore.retryhandler.RetryHandler object at 0x7f95e6a80130>
03/27/2023 06:23:51 AM DEBUG botocore.retryhandler No retry needed.
Traceback (most recent call last):
File "venv/lib/python3.9/site-packages/chalice/cli/__init__.py", line 636, in main
return cli(obj={})
File "venv/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "venv/lib/python3.9/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "venv/lib/python3.9/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "venv/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "venv/lib/python3.9/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "venv/lib/python3.9/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "venv/lib/python3.9/site-packages/chalice/cli/__init__.py", line 189, in deploy
deployed_values = d.deploy(config, chalice_stage_name=stage)
File "venv/lib/python3.9/site-packages/chalice/deploy/deployer.py", line 376, in deploy
return self._deploy(config, chalice_stage_name)
File "venv/lib/python3.9/site-packages/chalice/deploy/deployer.py", line 392, in _deploy
self._executor.execute(plan)
File "venv/lib/python3.9/site-packages/chalice/deploy/executor.py", line 42, in execute
getattr(self, '_do_%s' % instruction.__class__.__name__.lower(),
File "venv/lib/python3.9/site-packages/chalice/deploy/executor.py", line 55, in _do_apicall
result = method(**final_kwargs)
File "venv/lib/python3.9/site-packages/chalice/awsclient.py", line 412, in create_function
self._wait_for_active(function_name)
File "venv/lib/python3.9/site-packages/chalice/awsclient.py", line 419, in _wait_for_active
waiter.wait(FunctionName=function_name)
File "venv/lib/python3.9/site-packages/botocore/waiter.py", line 55, in wait
Waiter.wait(self, **kwargs)
File "venv/lib/python3.9/site-packages/botocore/waiter.py", line 375, in wait
raise WaiterError(
botocore.exceptions.WaiterError: Waiter FunctionActive failed: Waiter encountered a terminal failure state: For expression "State" we matched expected path: "Failed"
```
### Expected Behavior
I expected chalice-local to work correctly.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker compose up -d
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
python /usr/local/bin/chalice-local deploy --stage local
### Environment
```markdown
- OS:"Ubuntu 22.04.1 LTS" on WSL
- LocalStack:latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7975 | https://github.com/localstack/localstack/pull/8068 | cf32f3d4726a43e3db8beb56c82e88ba1e444045 | 0aee30863d5e5b04ec8951f554071aca3812c405 | "2023-03-27T09:45:16Z" | python | "2023-04-04T17:46:42Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,935 | ["localstack/services/sns/constants.py", "localstack/services/sns/models.py", "localstack/services/sns/provider.py", "localstack/services/sns/publisher.py", "tests/integration/test_sns.py"] | feature request: expose endpoint to retrieve SMS messages delivered via SNS | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
SNS allows for the publishing of messages to an SMS destination with a phone number. Those messages are logged and stored in the `SnsStore`, but there is no way to retrieve them for verification. The request here is to expose a new endpoint which returns SMS messages from the store, similar to the one made available to retrieve email messages sent via SES.
### 🧑💻 Implementation
The `SNSServicePlatformEndpointMessagesApiResource` class makes endpoints available for accessing platform endpoint messages sent via SNS. A similar resource class could be implemented to provide new endpoints for accessing SMS messages.
Given that interaction platform endpoint messages is exposed at the path `/_aws/sns/platform-endpoint-messages`, SMS message interaction would presumably be exposed at `/_aws/sns/sms-messages`. A `GET` endpoint could accept an optional `phoneNumber` query parameter to filter messages based on the destination phone number.
```python
class SNSServiceSMSMessagesApiResource:
@route(sns_constants.SMS_MSGS_ENDPOINT, methods=["GET"])
def on_get(self, request: Request):
account_id = #...
region = #...
filter_phone_number = request.args.get("phoneNumber")
# get store; conditionally filter messages based on filter_phone_number
```
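The filtering behavior the optional `phoneNumber` parameter implies could be as simple as the sketch below (the message shape and field names are assumptions, not the actual store layout):

```python
def filter_sms_messages(messages, phone_number=None):
    # Sketch of the proposed ?phoneNumber= filter
    if phone_number is None:
        return messages
    return [m for m in messages if m["PhoneNumber"] == phone_number]

messages = [
    {"PhoneNumber": "+15550001111", "Message": "hello"},
    {"PhoneNumber": "+15550002222", "Message": "world"},
]
assert filter_sms_messages(messages, "+15550001111") == [messages[0]]
assert filter_sms_messages(messages) == messages
```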
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7935 | https://github.com/localstack/localstack/pull/8667 | 2984115cf3c7c6f1a87e54864d67f19be92e980e | 015e39e99e7ba4cc9fd59c20b233aa41eb9a4c79 | "2023-03-22T23:26:27Z" | python | "2023-07-14T13:42:45Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,934 | ["localstack/services/iam/provider.py", "localstack/testing/snapshots/transformer_utility.py", "tests/integration/cloudformation/resources/test_iam.snapshot.json", "tests/integration/cloudformation/resources/test_sam.snapshot.json", "tests/integration/test_iam.py", "tests/integration/test_iam.snapshot.json"] | bug: key error causes internal server exception rather than failing gracefully | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
First [reported here](https://stackoverflow.com/questions/73934496/localstack-creating-stack-from-cloudformation-file-which-refer-to-some-existin), I've reproduced it using Terraform: when you try to attach the following policy, you get an internal server error:
> <ErrorResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/"><Error><Code>InternalError</Code><Message>exception while calling iam.AttachRolePolicy: Traceback (most recent call last):
File "/opt/code/localstack/localstack/aws/chain.py", line 90, in handle
handler(self, self.context, response)
File "/opt/code/localstack/localstack/aws/handlers/service.py", line 122, in __call__
handler(chain, context, response)
File "/opt/code/localstack/localstack/aws/handlers/service.py", line 92, in __call__
skeleton_response = self.skeleton.invoke(context)
File "/opt/code/localstack/localstack/aws/skeleton.py", line 153, in invoke
return self.dispatch_request(context, instance)
File "/opt/code/localstack/localstack/aws/skeleton.py", line 165, in dispatch_request
result = handler(context, instance) or {}
File "/opt/code/localstack/localstack/aws/forwarder.py", line 67, in _call
return fallthrough_handler(context, req)
File "/opt/code/localstack/localstack/services/moto.py", line 83, in _proxy_moto
return call_moto(context)
File "/opt/code/localstack/localstack/services/moto.py", line 46, in call_moto
return dispatch_to_backend(context, dispatch_to_moto, include_response_metadata)
File "/opt/code/localstack/localstack/aws/forwarder.py", line 120, in dispatch_to_backend
http_response = http_request_dispatcher(context)
File "/opt/code/localstack/localstack/services/moto.py", line 111, in dispatch_to_moto
response = dispatch(request, request.url, request.headers)
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/moto/core/responses.py", line 225, in dispatch
return cls()._dispatch(*args, **kwargs)
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/moto/core/responses.py", line 366, in _dispatch
return self.call_action()
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/moto/core/responses.py", line 455, in call_action
response = method()
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/moto/iam/responses.py", line 17, in attach_role_policy
self.backend.attach_role_policy(policy_arn, role_name)
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/moto/iam/models.py", line 1702, in attach_role_policy
policy = arns[policy_arn]
KeyError: 'local-bucket-access-policy'
</Message></Error><RequestId>DQJQDJKNC93JXLHUJ4T593IIKH8KS0OR7FJGL3J7EWIZKPAHGTRU</RequestId></ErrorResponse>" http.response.header.access_control_allow_headers=authorization,cache-control,content-length,content-md5,content-type,etag,location,x-amz-acl,x-amz-content-sha256,x-amz-date,x-amz-request-id,x-amz-security-token,x-amz-tagging,x-amz-target,x-amz-user-agent,x-amz-version-id,x-amzn-requestid,x-localstack-target,amz-sdk-invocation-id,amz-sdk-request http.response.header.access_control_allow_methods=HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH http.response.header.date="Wed, 22 Mar 2023 22:42:30 GMT" http.response.header.server=hypercorn-h11 http.response_content_length=2481 tf_req_id=74a915fe-1b32-ebcf-e1a6-88a84e30c539 @caller=github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/[email protected]/logger.go:138 @module=aws http.duration=9 http.response.header.access_control_allow_origin=* http.response.header.access_control_expose_headers=etag,x-amz-version-id tf_provider_addr=registry.terraform.io/hashicorp/aws tf_resource_type=aws_iam_role_policy_attachment tf_rpc=ApplyResourceChange timestamp=2023-03-22T22:42:30.954Z
terraform (tflocal):
```terraform
resource "aws_s3_bucket" "bucket" {
bucket = "mybucket"
}
# this works
resource "aws_iam_role" "lambda_role" {
name = "lambda_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_policy" "s3_access" {
name = "local-bucket-access-policy"
policy = jsonencode({
"Version" : "2012-10-17",
"Statement" : [
{
"Sid" : "",
"Effect" : "Allow",
"Action" : ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
"Resource" : "${aws_s3_bucket.bucket.arn}"
}
]
})
}
resource "aws_iam_role_policy_attachment" "exe" {
role = aws_iam_role.lambda_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
resource "aws_iam_role_policy_attachment" "s3_access" {
role = aws_iam_role.lambda_role.name
# this causes the error, should use "arn" not "name"
policy_arn = aws_iam_policy.s3_access.name
}
```
### Expected Behavior
It should attach the policy, or fail gracefully with a reason for the key error instead of an internal server exception. The above error was only visible in the debug logs.
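The graceful behavior boils down to guarding the lookup and mapping a miss to a client-facing error (AWS IAM reports missing policies with the `NoSuchEntity` code); the dict here just mimics moto's internal `arns` mapping:

```python
attached_policies = {
    "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole": object()
}
policy_ref = "local-bucket-access-policy"  # a policy *name*, not an ARN

# What the traceback shows today: an unguarded dict lookup -> internal error
try:
    attached_policies[policy_ref]
except KeyError:
    # A graceful provider would translate this into a client error instead
    error = {"Code": "NoSuchEntity", "Message": f"Policy {policy_ref} was not found."}
assert error["Code"] == "NoSuchEntity"
```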
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker-compose up
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
tflocal apply
#### Compose file
```yaml
version: "3.8"
services:
localstack:
container_name: localstack-main
image: "localstack/localstack:${LOCALSTACK_VERSION}"
environment:
- SERVICES=s3,lambda,logs,iam
- PROVIDER_OVERRIDE_LAMBDA=asf
- DOCKER_HOST=unix:///var/run/docker.sock
- DEBUG=1
- EDGE_PORT=4566
- AWS_ACCESS_KEY_ID=local
- AWS_SECRET_ACCESS_KEY=local
- DEFAULT_REGION=us-west-1
ports:
- "127.0.0.1:4566:4566"
- "127.0.0.1:4510-4559:4510-4559"
volumes:
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
```
### Environment
```markdown
- OS: `Linux 07109cc2728b 5.15.0-67-generic #74-Ubuntu SMP Wed Feb 22 14:14:39 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux`
- LocalStack: `1.4.0-arm64`
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7934 | https://github.com/localstack/localstack/pull/8615 | 9fb6be13b92050d4fb643c5caa01d64017565a94 | 0ff2710cd9ea929395fb5e3cb48039b92e3a7c35 | "2023-03-22T22:56:00Z" | python | "2023-07-12T08:05:30Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,902 | ["localstack/services/ses/provider.py", "tests/integration/test_ses.py"] | bug: Incorrect email count from SES:GetSendQuota after using SES:SendRawEmail | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The email count returned from SES:GetSendQuota is double the expected count after using SES:SendRawEmail, e.g. it returns 2 instead of 1 when sending an email to one recipient.
The issue is not present when using SES:SendEmail.
### Expected Behavior
The email count returned from SES:GetSendQuota should be the expected count after using SES:SendRawEmail, e.g. it should return 1, not 2, when sending an email to one recipient.
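The expected accounting can be pinned down in a few lines; this is a model of the desired behavior, not LocalStack's implementation:

```python
def record_send(quota, recipients):
    # Expected accounting: one unit per recipient per message, counted once
    quota["sent_last_24_hours"] += len(recipients)

quota = {"sent_last_24_hours": 0}
record_send(quota, ["[email protected]"])  # one SendRawEmail to one recipient
assert quota["sent_last_24_hours"] == 1  # LocalStack currently reports 2 here
```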
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker-compose.yml:
```yaml
version: '3.7'
services:
localstack:
image: localstack/localstack:latest
environment:
- AWS_DEFAULT_REGION=eu-west-1
- LOCALSTACK_DEFAULT_REGION=eu-west-1
- LOCALSTACK_SERVICES=ses
- LOCALHOST_DOCKER_HOST=unix:///var/run/docker.sock
- LOCALSTACK_LS_LOG=debug
ports:
- '4566:4566'
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```python
import boto3
from mypy_boto3_ses import Client
CLIENT: Client = boto3.client(service_name='ses', endpoint_url='http://localhost:4566')
EMAIL = '[email protected]'
def email_quota():
resp = CLIENT.get_send_quota()
print(resp['SentLast24Hours'])
if __name__ == '__main__':
CLIENT.verify_email_address(EmailAddress=EMAIL)
email_quota()
# CLIENT.send_email(
# Source=EMAIL,
# Destination={
# 'ToAddresses': ['[email protected]'],
# },
# Message={
# 'Subject': {
# 'Data': 'string',
# 'Charset': 'string',
# },
# 'Body': {
# 'Text': {
# 'Data': 'string',
# 'Charset': 'string'
# },
# },
# },
# )
# email_quota()
CLIENT.send_raw_email(
Source=EMAIL,
Destinations=['[email protected]'],
RawMessage={
'Data': b'bytes'
},
)
email_quota()
```
### Environment
```markdown
- OS: Ubuntu
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7902 | https://github.com/localstack/localstack/pull/7919 | b4e4c5d1b3d023d054cfe18154f4070078d0fe34 | 6ec56831a87d90ea5e27c489d18b9e40690faabf | "2023-03-18T14:22:15Z" | python | "2023-03-23T13:23:43Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,861 | ["localstack/services/sns/provider.py", "tests/integration/test_sns.py", "tests/integration/test_sns.snapshot.json"] | bug: Topic not found, error message does not match | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I'm using AWS SDK v3.
We use the `GetTopicAttributesCommand` to validate topic existence (there is no other way to do it), then we check the error code and error message.
When trying to get the attributes of a topic by ARN with LocalStack, it returns an error with a different message than AWS:
```ts
{
Code: 'NotFound',
Message: 'Topic with arn <MY ARN> not found',
message: 'Topic with arn <MY ARN> not found'
}
```
The original AWS error
```ts
{
Type: "Sender",
Code: "NotFound",
Message: "Topic does not exist",
message: "Topic does not exist"
}
```
### Expected Behavior
LocalStack should return the exact same error structure as the AWS cloud:
```ts
{
Type: "Sender",
Code: "NotFound",
Message: "Topic does not exist",
message: "Topic does not exist"
}
```
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
```ts
const client = new SNSClient({ })
try {
await client.send(new GetTopicAttributesCommand({ TopicArn: '<ARN TO TEST>' }))
} catch (err) {
console.log(err)
}
```
Run this against both AWS and LocalStack and you will see the difference.
### Environment
```markdown
- OS: osX
- LocalStack:latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7861 | https://github.com/localstack/localstack/pull/7862 | fd255135a810e7c1bc21e0b3e77ba01cb11386fb | c24a2606f7a8303afab6d185960c20642b563f37 | "2023-03-14T09:27:21Z" | python | "2023-03-15T01:04:32Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,842 | ["localstack/services/apigateway/helpers.py", "localstack/services/apigateway/invocations.py", "localstack/services/apigateway/provider.py", "tests/integration/apigateway/conftest.py", "tests/integration/apigateway/test_apigateway_common.py", "tests/integration/apigateway/test_apigateway_common.snapshot.json", "tests/unit/test_apigateway.py"] | bug: API Gateway V1 REST requests breaks when path parameter validation is enabled in serverless | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When performing any request with path parameters while validation is enabled, the following error message is returned:
`GET http://localhost:4566/restapis/<apigw_id>/local/_user_request_/test/world`
returns
`{"Type": "User", "message": "Invalid request body", "__type": "InvalidRequest"}`
### Expected Behavior
Path parameter validation works like in AWS, i.e. validation is performed, the lambda is invoked, and the response is returned to the caller.
`GET http://localhost:4566/restapis/<apigw_id>/local/_user_request_/test/world`
returns
`"hello world"`
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
localstack start
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
See project in zip archive - [localstack-path-parameter-test.zip](https://github.com/localstack/localstack/files/10952340/localstack-path-parameter-test.zip). Also included below.
To deploy to localstack:
1. `yarn install`
2. `npx sls deploy --stage local --region eu-west-2`
This has two endpoints. The first, `test`, works, but the second, `broken-test`, fails with the error `Invalid request body`.
It looks like the error is caused by localstack not being able to handle `AWS::ApiGateway::RequestValidator` correctly.
Also if you try to fix an endpoint by removing the validation, i.e. change the broken-test endpoint to:
```
broken-test:
handler: handler.test
events:
- http:
path: broken-test/{value}
method: get
```
Then after deploying and hitting this endpoint you get an error like
```
{
"__type": "InternalError",
"message": "exception while calling apigateway with unknown operation: An error occurred (NotFoundException) when calling the GetRequestValidator operation: Validator 06ce6f for API Gateway 5ajrq316ln not found"
}
```
**serverless.yml**
```
service: locastack-test
provider:
name: aws
runtime: nodejs16.x
environment:
NODE_ENV: dev
plugins:
- serverless-localstack
custom:
localstack:
stages: [local]
host: http://127.0.0.1
debug: true
functions:
test:
handler: handler.test
events:
- http:
path: test/{value}
method: get
broken-test:
handler: handler.test
events:
- http:
path: broken-test/{value}
method: get
request:
parameters:
paths:
issuer: true
```
**handler.js**
```
exports.test = async function (event, context) {
return {
body: `hello ${event.pathParameters.value}`,
headers: {},
statusCode: 200,
};
}
```
**package.json**
```
{
"name": "localstack-test",
"devDependencies": {
"serverless": "^3.28.1",
"serverless-localstack": "^1.0.4"
}
}
```
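For reference, the check API Gateway performs for required path parameters is conceptually simple: validation should fail only when a required parameter is absent, not report "Invalid request body". A minimal sketch of that logic (the parameter names mirror the serverless.yml above; this is not LocalStack's implementation):

```python
def missing_path_params(required: dict, path_params: dict) -> list:
    """Return required path parameter names that are absent from the request.

    `required` maps parameter name -> bool, mirroring the
    `request.parameters.paths` block in serverless.yml.
    """
    return [
        name
        for name, is_required in required.items()
        if is_required and not path_params.get(name)
    ]

print(missing_path_params({"issuer": True}, {"value": "world"}))  # ['issuer']
print(missing_path_params({"value": True}, {"value": "world"}))   # []
```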
### Environment
```markdown
- OS: AWS Linux Workspace
- LocalStack: latest
```
### Anything else?
[localstack_logs.txt](https://github.com/localstack/localstack/files/10952376/localstack_logs.txt)
| https://github.com/localstack/localstack/issues/7842 | https://github.com/localstack/localstack/pull/7846 | 1dfc637aaa402deeea2bff1b0d0504240ff85d92 | 92dca7dd5928cf5803fe7e0198ed53c7ec450b22 | "2023-03-12T23:38:31Z" | python | "2023-03-17T14:07:52Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,782 | ["localstack/services/s3/provider.py", "localstack/services/s3/utils.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"] | bug: Localstack S3 Allows put-object and get-object on KMS encrypted objects after the KMS Key is Disabled | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Using localstack, I uploaded an object to a s3 bucket specifying the server side encryption as `aws:kms`, an SSEKMSId, and an SSEKMSEncryption Context. I verified that the object was correctly uploaded with the property metadata:
```
awslocal --endpoint-url=http://localhost:4566/ s3api get-object --bucket attachments-development --key .attachments/9ca85a00cde7481c02ee01f2d5e06770/384d0c0a outfile.txt
{
"AcceptRanges": "bytes",
"LastModified": "2023-03-02T15:20:00+00:00",
"ContentLength": 12,
"ETag": "\"e4d7f1b4ed2e42d15898f4b27b019da4\"",
"VersionId": "null",
"ContentLanguage": "en-US",
"ContentType": "text/plain",
"ServerSideEncryption": "aws:kms",
"Metadata": {},
"SSEKMSKeyId": "arn:aws:kms:us-east-1:000000000000:key/89816c0d-acfc-4a76-aa18-cabc2c8e477c",
"TagCount": 2
}
```
I then Disabled the KMS Key and verified that it was disabled:
```
awslocal --endpoint-url=http://localhost:4566/ kms describe-key --key-id arn:aws:kms:us-east-1:000000000000:key/89816c0d-acfc-4a76-aa18-cabc2c8e477c
{
"KeyMetadata": {
"AWSAccountId": "000000000000",
"KeyId": "89816c0d-acfc-4a76-aa18-cabc2c8e477c",
"Arn": "arn:aws:kms:us-east-1:000000000000:key/89816c0d-acfc-4a76-aa18-cabc2c8e477c",
"CreationDate": "2023-03-02T10:20:00-05:00",
"Enabled": false,
"Description": "kms test with localstack",
"KeyUsage": "ENCRYPT_DECRYPT",
"KeyState": "Disabled",
"Origin": "AWS_KMS",
"KeyManager": "CUSTOMER",
"CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
"KeySpec": "SYMMETRIC_DEFAULT",
"EncryptionAlgorithms": [
"SYMMETRIC_DEFAULT"
]
}
}
```
Then, when I tried to do a `get-object` command on the encrypted object, I expected a `DisabledException` since s3 shouldn't be able to decrypt the encrypted object with a disabled key. However, the `get-object` command completed without an error and returned the decrypted text.
### Expected Behavior
I would expect that `put-object`, `copy-object`, and `get-object` should throw an error if they specify a SSE-KMS key that has been disabled or does not exist.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker compose -f localstack/docker-compose.dev.yml up -d
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal --endpoint-url=http://localhost:4566 s3api mb s3://mybucket
awslocal --endpoint-url=http://localhost:4566 kms create-key
awslocal --endpoint-url=http://localhost:4566 s3api put-object --bucket <BUCKET NAME> --key <Object key> --server-side-encryption aws:kms --ssekms-key-id <KMS ARN> --body outfile.txt
awslocal --endpoint-url=http://localhost:4566/ s3api get-object --bucket <Bucket name> --key <Object key> outfile.txt
(Expect that this works and specifies the object is encrypted with aws:kms)
awslocal kms disable-key --key-id <KMS ARN>
awslocal --endpoint-url=http://localhost:4566/ s3api get-object --bucket <Bucket name> --key <Object key> outfile.txt
(Expect that this to throw an error but it DOES NOT)
ALTERNATIVELY
more concisely put
awslocal --endpoint-url=http://localhost:4566 s3api mb s3://mybucket
awslocal --endpoint-url=http://localhost:4566 s3api put-object --bucket <BUCKET NAME> --key <Object key> --server-side-encryption aws:kms --ssekms-key-id <KMS ARN THAT DOES NOT EXIST> --body outfile.txt
The above command does not throw an error when I believe that it should
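The expected guard is straightforward: before using an SSE-KMS key, S3 should check that the key exists and is in the `Enabled` state. A hedged sketch of that check (the data shapes and error wording here are assumptions for illustration, not LocalStack internals):

```python
def sse_kms_key_error(known_keys: dict, key_arn: str):
    """Return an error message if the SSE-KMS key cannot be used, else None.

    `known_keys` maps a key ARN to its KeyState ('Enabled', 'Disabled', ...).
    """
    state = known_keys.get(key_arn)
    if state is None:
        return f"KMS key {key_arn} does not exist"
    if state != "Enabled":
        return f"KMS key {key_arn} is in state {state}"
    return None

keys = {"arn:aws:kms:us-east-1:000000000000:key/89816c0d": "Disabled"}
print(sse_kms_key_error(keys, "arn:aws:kms:us-east-1:000000000000:key/89816c0d"))
```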
### Environment
```markdown
- OS: macOS Ventura 13.1
- LocalStack: 1.1.0
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7782 | https://github.com/localstack/localstack/pull/7786 | e7f74efec5fd643a76aa867bad736aa3760df2db | 62c2c90860ed4a12c95780f00c09e1a5bd4a6651 | "2023-03-02T16:35:59Z" | python | "2023-03-08T11:38:58Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,779 | ["localstack/services/iam/provider.py", "localstack/testing/snapshots/transformer_utility.py", "tests/integration/test_iam.py", "tests/integration/test_iam.snapshot.json"] | bug: Cannot list roles which have permission boundaries attached | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When attempting to list roles where permission boundaries are attached the response cannot be serialized
`aws --endpoint-url=http://localhost:4566 iam list-roles --region ap-southeast-2`
```
An error occurred (InternalError) when calling the ListRoles operation (reached max retries: 2): exception while calling iam.ListRoles: Invalid type when serializing AttachedPermissionsBoundary: '<Element 'member' at 0xffff8275ef20>' cannot be parsed to structure.
```
### Expected Behavior
Expect that all roles are listed
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
* Create an IAM policy
* Create an IAM role which used the policy as a permission boundary
* Attempt to list all roles with `aws --endpoint-url=http://localhost:4566 iam list-roles --region ap-southeast-2`
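The serialization error ("Invalid type when serializing AttachedPermissionsBoundary: Element 'member' cannot be parsed to structure") suggests the boundary is still a raw XML element when the response is built. Flattening such a one-level element into the dict a serializer expects is trivial with the standard library; a sketch (the XML shape mirrors IAM's documented response, not LocalStack internals):

```python
import xml.etree.ElementTree as ET

def element_to_dict(member: ET.Element) -> dict:
    """Flatten a one-level XML element into a dict of its child tags."""
    return {child.tag: child.text for child in member}

xml_doc = (
    "<PermissionsBoundary>"
    "<PermissionsBoundaryType>Policy</PermissionsBoundaryType>"
    "<PermissionsBoundaryArn>arn:aws:iam::000000000000:policy/boundary</PermissionsBoundaryArn>"
    "</PermissionsBoundary>"
)
print(element_to_dict(ET.fromstring(xml_doc)))
```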
### Environment
```markdown
- OS: MacOS
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7779 | https://github.com/localstack/localstack/pull/8614 | ffd23be6191707fc76f44d366878bc13f31a9fa1 | ac78d6fcf81e97a029ed134d3c063032b8952d86 | "2023-03-02T05:08:25Z" | python | "2023-07-05T11:28:45Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,742 | ["localstack/services/opensearch/cluster.py", "tests/integration/test_opensearch.py"] | OpenSearch is not gzipped when Accept-Encoding: gzip is defined | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
In version 1.4.0, OpenSearch returns invalid responses when gzip encoding is requested. This was working in 1.3.1. The proxied response is not gzip-encoded even though the `content-encoding` response header says it is gzip. The reproduction steps show the differences between the output directly from OpenSearch and the proxied result.
### Expected Behavior
Returning properly encoded responses.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
```
version: "3.9"
services:
opensearch:
container_name: opensearch
image: opensearchproject/opensearch:2.3.0
environment:
- node.name=opensearch
- cluster.name=opensearch-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
- "DISABLE_SECURITY_PLUGIN=true"
ports:
- "9200:9200"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/opensearch/data
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack
ports:
- "4566:4566"
depends_on:
- opensearch
environment:
- OPENSEARCH_CUSTOM_BACKEND=http://opensearch:9200
- DEBUG=${DEBUG- }
- PERSISTENCE=${PERSISTENCE- }
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
volumes:
data01:
driver: local
```
1. Run docker compose
```sh
docker-compose up -d
```
2. Create the OpenSearch domain:
```
awslocal opensearch create-domain --domain-name my-domain
```
3. Test - Notice content length and output
```
curl -v -X GET my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/ -H 'Accept-Encoding: gzip' --output -
curl -v -X GET localhost:9200/ -H 'Accept-Encoding: gzip' --output -
```
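Conceptually, the bug is a header/body mismatch: a proxy must not advertise `Content-Encoding: gzip` unless the body it forwards is actually gzip data. Gzip payloads start with the magic bytes `0x1f 0x8b`, so the mismatch is easy to detect. A minimal sketch (illustrative; the sample body is made up):

```python
import gzip

def encoding_matches_body(content_encoding: str, body: bytes) -> bool:
    """True unless the response claims gzip but the body is not gzip data."""
    if content_encoding != "gzip":
        return True  # nothing claimed, nothing to verify
    return body[:2] == b"\x1f\x8b"  # gzip magic bytes

plain = b'{"cluster_name": "opensearch-docker-cluster"}'
print(encoding_matches_body("gzip", gzip.compress(plain)))  # True
print(encoding_matches_body("gzip", plain))                 # False: the reported bug
```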
### Environment
```markdown
- OS:
- LocalStack: latest 1.4.0
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7742 | https://github.com/localstack/localstack/pull/8628 | 28427b9deedb747fbe0c360a572adaa4c44312a5 | 75ad307397958354c7ca263a802ad287cd89019c | "2023-02-23T20:02:23Z" | python | "2023-07-06T05:43:35Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,699 | ["localstack/services/cloudformation/engine/entities.py", "localstack/services/cloudformation/provider.py", "localstack/services/cloudformation/stores.py", "tests/aws/services/cloudformation/api/test_changesets.py", "tests/aws/services/cloudformation/api/test_changesets.snapshot.json", "tests/aws/services/cloudformation/api/test_stacks.py", "tests/aws/services/cloudformation/api/test_stacks.snapshot.json"] | bug: cdklocal destroy doesn't remove stack completely | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Running `cdklocal destroy` to destroy a stack and then re-running `cdklocal deploy` results in no changes being deployed. Checking with `awslocal cloudformation list-stacks --region ap-southeast-1` leads to:
```
{
"StackId": "arn:aws:cloudformation:ap-southeast-1:000000000000:stack/ExampleStack/d098b2f8",
"StackName": "ExampleStack",
"CreationTime": "2023-02-16T08:02:10.181000Z",
"LastUpdatedTime": "2023-02-16T08:02:10.181000Z",
"DeletionTime": "2023-02-16T08:02:44.247000Z",
"StackStatus": "DELETE_COMPLETE",
"DriftInformation": {
"StackDriftStatus": "NOT_CHECKED"
}
},
```
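The symptom suggests the deleted stack is still being resolved by name: CloudFormation only matches active stacks when a stack is looked up by name, so a `DELETE_COMPLETE` stack should be invisible to a fresh deploy. A sketch of that lookup rule (illustrative, not LocalStack internals):

```python
def find_active_stack(stacks: list, name: str):
    """Resolve a stack by name, ignoring deleted stacks as CloudFormation does."""
    for stack in stacks:
        if stack["StackName"] == name and stack["StackStatus"] != "DELETE_COMPLETE":
            return stack
    return None

stacks = [{"StackName": "ExampleStack", "StackStatus": "DELETE_COMPLETE"}]
print(find_active_stack(stacks, "ExampleStack"))  # None: a fresh deploy should create it
```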
### Expected Behavior
_No response_
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal s3 mb s3://mybucket
### Environment
```markdown
- OS: MacOS 13.2
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7699 | https://github.com/localstack/localstack/pull/9748 | c3d24de417faa57a2f525b7aad76b133589141e3 | ec15870db07c0e9f8865159018e871011a08d797 | "2023-02-16T08:08:36Z" | python | "2023-12-01T18:53:16Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,681 | ["localstack/config.py", "localstack/utils/docker_utils.py", "tests/integration/docker_utils/test_docker.py"] | bug: PORTS_CHECK_DOCKER_IMAGE is not configurable which prevents ECS tasks from running on air-gapped environments | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
`PORTS_CHECK_DOCKER_IMAGE` is not configurable, which prevents ECS tasks from running in air-gapped environments.
### Expected Behavior
`PORTS_CHECK_DOCKER_IMAGE` can be configured via a configuration option, allowing the use of any available image.
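The requested behavior boils down to an environment-variable lookup with the current image as the default. A sketch (the variable name is taken from this issue; the default value is an assumption based on the issue title, not the actual LocalStack constant):

```python
DEFAULT_PORTS_CHECK_IMAGE = "localstack/localstack"

def ports_check_docker_image(env: dict) -> str:
    """Resolve the port-check image, honouring an override when one is set."""
    return env.get("PORTS_CHECK_DOCKER_IMAGE") or DEFAULT_PORTS_CHECK_IMAGE

print(ports_check_docker_image({}))  # localstack/localstack
print(ports_check_docker_image({"PORTS_CHECK_DOCKER_IMAGE": "registry.internal/busybox"}))
```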
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
Run an ECS task in an environment where localstack/localstack is not available.
### Environment
```markdown
- OS: RHEL8
- Docker runtime version: Podman v4.2.0
- LocalStack: 1.3.1
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7681 | https://github.com/localstack/localstack/pull/7690 | 029285d5f6726abdece9948a6efc91daf7bfa00c | f4caa2e82395b992bff2d286f56b7d8b99c51e81 | "2023-02-14T11:37:09Z" | python | "2023-02-15T21:44:54Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,635 | ["localstack/services/awslambda/invocation/docker_runtime_executor.py"] | feature request: PROVIDER_OVERRIDE_LAMBDA=asf, Lambda container prefix | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
We're trying to switch from the old lambda implementation to asf. In the old lambda implementation, when using `LAMBDA_EXECUTOR=docker-reuse`, the lambda container name received a prefix equal to that of the LocalStack container in Docker. (This was not the case for all `LAMBDA_EXECUTOR` flavours.)
This way, we were able to track the lambda containers that were started for a given LocalStack instance, print their logs, and kill them when necessary.
Request: Would it be possible to prefix the lambda containers with the name of the LocalStack container itself, if present?
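The requested naming scheme can be illustrated with a small helper that prefixes each Lambda container with the LocalStack container name when one is known (the names and suffix format here are illustrative assumptions, not the actual implementation):

```python
from typing import Optional

def lambda_container_name(function_name: str, localstack_container: Optional[str]) -> str:
    """Prefix Lambda containers with the LocalStack container name when known."""
    suffix = f"lambda-{function_name}"
    return f"{localstack_container}-{suffix}" if localstack_container else suffix

print(lambda_container_name("my-func", "localstack_main"))  # localstack_main-lambda-my-func
print(lambda_container_name("my-func", None))               # lambda-my-func
```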
### 🧑💻 Implementation
_No response_
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7635 | https://github.com/localstack/localstack/pull/7667 | 8a3ffe8bc3e0508a9fb94a970895005f158e2b3d | 14675b04b772e43578ea6595f944d2e4658ec0ad | "2023-02-07T12:56:16Z" | python | "2023-02-14T23:31:48Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,562 | ["localstack/services/sns/provider.py", "tests/integration/test_sns.py"] | bug: SNS FIFO: The request includes MessageGroupId parameter that is not valid for this topic type | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Publishing to FIFO topics generates the error message `The request includes MessageGroupId parameter that is not valid for this topic type` with a 400 status code.
The behavior occurs with the Amazon Python and Java SDK clients. (It works correctly with the AWS CLI.)
### Expected Behavior
The message should be published successfully.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker-compose up using this example:
```yaml
services:
localstack:
image: 'localstack/localstack:latest'
environment:
LOCALSTACK_SERVICES: 'sns,sqs'
DEBUG: 1
ports:
- '4566:4566'
aws-environment:
image: 'amazon/aws-cli:latest'
environment:
AWS_ACCESS_KEY_ID: DEV123
AWS_SECRET_ACCESS_KEY: DEV123
AWS_DEFAULT_REGION: us-east-1
entrypoint: /bin/sh -c
command: |
"
sleep 15
aws sns create-topic --endpoint-url=http://localstack:4566 --name my-topic.fifo --attributes FifoTopic=true,ContentBasedDeduplication=true;
"
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
Trying to publish a message using a python or java client:
```python
import json
import boto3
client = boto3.client('sns',
endpoint_url="http://localhost:4566",
region_name='us-east-1',
aws_access_key_id='DEV123',
aws_secret_access_key='DEV123')
message = {"foo": "bar"}
response = client.publish(
TargetArn=
'arn:aws:sns:us-east-1:000000000000:my-topic.fifo',
Message=json.dumps({'default': json.dumps(message)}),
MessageStructure='json',
MessageGroupId="123"
)
print(response)
```
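The rejected parameter is only invalid for standard topics; the provider's check should key off whether the topic is FIFO, which for SNS is encoded in the `.fifo` name suffix. A sketch of that decision (illustrative, not LocalStack's code):

```python
def message_group_id_allowed(topic_arn: str) -> bool:
    """MessageGroupId is only valid for FIFO topics, whose names end in .fifo."""
    return topic_arn.endswith(".fifo")

print(message_group_id_allowed("arn:aws:sns:us-east-1:000000000000:my-topic.fifo"))  # True
print(message_group_id_allowed("arn:aws:sns:us-east-1:000000000000:my-topic"))       # False
```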
### Environment
```markdown
- OS:Ubuntu 20.04
- LocalStack: latest
- Boto version: 1.24.96
- Python version: 3.8.16
- aws-java-sdk-sns: 1.12.392
```
### Anything else?
I checked these issues related to sqs:
- https://github.com/localstack/localstack/issues/5465
- https://github.com/localstack/localstack/issues/5374
- https://github.com/localstack/localstack/issues/5236
Testing in real aws environments or using [moto server](http://docs.getmoto.org/en/latest/docs/server_mode.html) the scripts worked normally | https://github.com/localstack/localstack/issues/7562 | https://github.com/localstack/localstack/pull/7564 | 2b114d829290d0480b79f1353dd7234e99a27377 | fb4f5f6016c7e0961da2a8239391ed1fe9f01538 | "2023-01-26T20:38:19Z" | python | "2023-01-31T13:53:19Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,555 | ["localstack/services/s3/provider.py", "localstack/services/s3/provider_stream.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"] | Localstack S3 emulation does not check precondition headers for CopyObject operation | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The CopyObject API does not check precondition headers such as `x-amz-copy-source-if-modified-since`; instead, it always copies the object.
### Expected Behavior
The API should return a 412 response with content similar to:
```
<Error><Code>PreconditionFailed</Code><Message>At least one of the pre-conditions you specified did not hold</Message><Condition>x-amz-copy-source-If-Modified-Since</Condition><RequestId>14HR9PXJRG5B1GR5</RequestId><HostId>HztBs2jYea1fzzPWo1JWy33NQJqo1S1IsK62rkbnVEI04PnGNCi8sPVrichGQxH3FQwLA2RIyF4=</HostId></Error>
```
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
aws s3 --profile localstack --endpoint-url http://s3.localhost.localstack.cloud:4566 \
ls s3://net-amazon-s3-test/Makefile.am
2023-01-25 10:47:40 537 Makefile.am
aws s3api copy-object --copy-source net-amazon-s3-test/Makefile.am --key Makefile.am.2 \
--bucket net-amazon-s3-test2 --profile localstack --endpoint-url http://s3.localhost.localstack.cloud:4566 \
--copy-source-if-modified-since "Wed, 25 Jan 2023 16:16:27 GMT"
{
"CopyObjectResult": {
"ETag": "\"46ada9d1fe8f8a311ff4504c226e061c\"",
"LastModified": "2023-01-25T16:18:43+00:00"
}
}
```
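The missing check is a timestamp comparison between the source object's LastModified and the header value: the copy should proceed only when the source changed after the given date, and otherwise return 412 PreconditionFailed. A simplified sketch of just the if-modified-since case (dates here are illustrative):

```python
from datetime import datetime, timezone

def copy_if_modified_since_ok(last_modified: datetime, threshold: datetime) -> bool:
    """x-amz-copy-source-if-modified-since: proceed only if the source object
    was modified strictly after the given date; otherwise answer 412."""
    return last_modified > threshold

src = datetime(2023, 1, 25, 16, 18, 43, tzinfo=timezone.utc)
print(copy_if_modified_since_ok(src, datetime(2023, 1, 25, 16, 16, 27, tzinfo=timezone.utc)))  # True
print(copy_if_modified_since_ok(src, datetime(2023, 1, 26, tzinfo=timezone.utc)))              # False
```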
### Environment
```markdown
- OS: penguin
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7555 | https://github.com/localstack/localstack/pull/8653 | a4e24108706d7f7e44874c46c9a53c83c099d70c | 6c743db88062bab70021108752798e467357f485 | "2023-01-25T16:23:20Z" | python | "2023-07-08T19:10:03Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,529 | ["localstack/services/sns/publisher.py", "tests/integration/test_sns.py", "tests/integration/test_sns.snapshot.json"] | bug: SNS FIFO topic to SQS FIFO queue does not seem to work #6657 | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
My SQS queue is not receiving notifications from SNS when both have FIFO enabled:
```
#Create sqs
aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name onexlab.fifo --attributes FifoQueue=true
#create sns
aws --endpoint-url=http://localhost:4566 sns create-topic --name onexlab-sns.fifo --attributes FifoTopic=true,ContentBasedDeduplication=true
#create subscription
aws --endpoint-url=http://localhost:4566 sns subscribe --topic-arn arn:aws:sns:us-east-1:000000000000:onexlab-sns.fifo --protocol sqs --notification-endpoint arn:aws:sqs:us-east-1:000000000000:onexlab.fifo
#publish messasge
aws --endpoint-url=http://localhost:4566 sns publish --topic-arn arn:aws:sns:us-east-1:000000000000:onexlab-sns.fifo --message 'Welcome to Onexlab!' --message-group-id "test"
#check messages
aws --endpoint-url=http://localhost:4566 sqs receive-message --queue-url http://localhost:4566/000000000000/onexlab.fifo
```
No messages on the SQS queue.
### Expected Behavior
Expect the sqs queue to have the message.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker-compose up
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
#Create sqs
aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name onexlab.fifo --attributes FifoQueue=true
#create sns
aws --endpoint-url=http://localhost:4566 sns create-topic --name onexlab-sns.fifo --attributes FifoTopic=true,ContentBasedDeduplication=true
#create subscription
aws --endpoint-url=http://localhost:4566 sns subscribe --topic-arn arn:aws:sns:us-east-1:000000000000:onexlab-sns.fifo --protocol sqs --notification-endpoint arn:aws:sqs:us-east-1:000000000000:onexlab.fifo
#publish messasge
aws --endpoint-url=http://localhost:4566 sns publish --topic-arn arn:aws:sns:us-east-1:000000000000:onexlab-sns.fifo --message 'Welcome to Onexlab!' --message-group-id "test"
#check messages
aws --endpoint-url=http://localhost:4566 sqs receive-message --queue-url http://localhost:4566/000000000000/onexlab.fifo
```
### Environment
```markdown
- OS: Mac os
- LocalStack:latest
version: '3.0'
services:
localstack:
image: localstack/localstack:latest
environment:
- AWS_DEFAULT_REGION=us-east-1
- EDGE_PORT=4566
- SERVICES=sqs,sns
ports:
- '4566-4597:4566-4597'
volumes:
- "${TMPDIR:-/lib/localstack}:/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7529 | https://github.com/localstack/localstack/pull/7566 | 04c26f2362d8338caa7bd81b57f2531a18a6ea72 | 332a7b812af3f0cad15c146857c380977e248f43 | "2023-01-20T15:45:06Z" | python | "2023-01-28T17:40:28Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,494 | ["localstack/services/kms/provider.py", "tests/integration/test_kms.py", "tests/integration/test_kms.snapshot.json", "tests/unit/test_kms.py"] | bug: KMS Alias Creation Fails to Return Error | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
This looks similar to https://github.com/localstack/localstack/issues/6471
I am trying to sign something using KMS for some tests. It seems like doing so using an alias does not work. For example I create a key and an alias like so:
```
# Add a key used for signing urls
aws-cli --endpoint-url=http://localhost:4566 kms create-key \
--key-usage SIGN_VERIFY \
--key-spec RSA_4096
# Add well known alias for key
aws-cli --endpoint-url=http://localhost:4566 kms create-alias \
--alias-name "some-nice-alias-name" \
--target-key-id <key id generated above>
```
I can see that this looks to have worked by verifying the key and alias on the CLI
```
aws-cli --endpoint-url=http://localhost:4566 kms list-keys
{
"Keys": [
{
"KeyId": "f7d2d869-f6b8-4977-96ea-5bd70cb0d5f2",
"KeyArn": "arn:aws:kms:us-east-1:000000000000:key/<someuuid>"
}
]
}
```
and
```
aws-cli --endpoint-url=http://localhost:4566 kms list-aliases
{
"Aliases": [
{
"AliasName": "census-webform-url-signing-key",
"AliasArn": "arn:aws:kms:us-east-1:000000000000:alias/some-nice-alias-name",
"TargetKeyId": "<sameuuid>",
"CreationDate": "2023-01-13T16:58:52.279782-05:00"
}
]
}
```
however attempting to sign something does not work
```
# Make sure we can sign
aws-cli --endpoint-url=http://localhost:4566 kms sign \
--cli-binary-format raw-in-base64-out \
--key-id "alias/some-nice-alias-name" \
--message 'wwwtestcom' \
--message-type RAW \
--signing-algorithm "RSASSA_PSS_SHA_512"
```
results in
An error occurred (NotFoundException) when calling the Sign operation: Unable to find KMS alias with name alias/some-nice-alias-name
### Expected Behavior
Would expect output from the last command not the resulting error.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
aws-cli --endpoint-url=http://localhost:4566 kms create-key \
--key-usage SIGN_VERIFY \
--key-spec RSA_4096
aws-cli --endpoint-url=http://localhost:4566 kms create-alias \
--alias-name "some-nice-alias-name" \
--target-key-id <key id generated above>
aws-cli --endpoint-url=http://localhost:4566 kms list-keys
{
"Keys": [
{
"KeyId": "f7d2d869-f6b8-4977-96ea-5bd70cb0d5f2",
"KeyArn": "arn:aws:kms:us-east-1:000000000000:key/<someuuid>"
}
]
}
aws-cli --endpoint-url=http://localhost:4566 kms list-aliases
{
"Aliases": [
{
"AliasName": "census-webform-url-signing-key",
"AliasArn": "arn:aws:kms:us-east-1:000000000000:alias/some-nice-alias-name",
"TargetKeyId": "<sameuuid>",
"CreationDate": "2023-01-13T16:58:52.279782-05:00"
}
]
}
aws-cli --endpoint-url=http://localhost:4566 kms sign \
--cli-binary-format raw-in-base64-out \
--key-id "alias/some-nice-alias-name" \
--message 'wwwtestcom' \
--message-type RAW \
--signing-algorithm "RSASSA_PSS_SHA_512"
```
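The lookup that fails should be a simple indirection: resolve `alias/<name>` to its target key id, then sign with that key. A sketch of the resolution step (the data shapes mirror the `list-aliases` output above, with the alias keyed by its full `alias/...` name; this is illustrative, not LocalStack code):

```python
def resolve_key_id(key_id: str, aliases: dict) -> str:
    """Resolve an 'alias/...' key id to its target key id; pass others through."""
    if key_id.startswith("alias/"):
        if key_id not in aliases:
            raise LookupError(f"Unable to find KMS alias with name {key_id}")
        return aliases[key_id]
    return key_id

aliases = {"alias/some-nice-alias-name": "f7d2d869-f6b8-4977-96ea-5bd70cb0d5f2"}
print(resolve_key_id("alias/some-nice-alias-name", aliases))
```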
### Environment
```markdown
- OS: Macos 12.6.2
- LocalStack: latest docker image
```
### Anything else?
I did test using the same using the actual generated key ID and this works. I also attempted this through a BOTO3 client in python and the same resulted. | https://github.com/localstack/localstack/issues/7494 | https://github.com/localstack/localstack/pull/7826 | 7e38646b24ce40f348006d59002fdff7a3a33b83 | d3a21e8aec616bb34cf42196b00c20dc00816bd3 | "2023-01-13T22:48:13Z" | python | "2023-03-13T10:10:50Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,493 | ["localstack/aws/api/s3/__init__.py", "localstack/aws/spec-patches.json", "localstack/services/s3/provider.py", "localstack/services/s3/utils.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"] | bug: Multi part upload does not accept GLACIER Instant Retrieval storage class | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When multipart-uploading a file to the Glacier Instant Retrieval storage class, the operation fails at the Complete stage with a message claiming the storage class is not supported.
### Expected Behavior
It works with real S3.
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
localstack start
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```golang
b, p := parse1(path)
cmuI := &awsS3.CreateMultipartUploadInput{
Bucket: b,
Key: p,
StorageClass: mtypes.StorageClassGlacierIr,
}
_, err := client.CreateMultipartUpload(context.Background(), cmuI)
```
and process the upload of the part and then complete
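The provider appears to validate the storage class against an incomplete allow-list; `GLACIER_IR` (Glacier Instant Retrieval) is a valid S3 storage class. A sketch of the membership check with the documented set of values (per the S3 API reference; the list may lag behind AWS):

```python
VALID_STORAGE_CLASSES = {
    "STANDARD", "REDUCED_REDUNDANCY", "STANDARD_IA", "ONEZONE_IA",
    "INTELLIGENT_TIERING", "GLACIER", "GLACIER_IR", "DEEP_ARCHIVE",
    "OUTPOSTS",
}

def storage_class_supported(storage_class: str) -> bool:
    """Membership check a provider could apply at CompleteMultipartUpload."""
    return storage_class in VALID_STORAGE_CLASSES

print(storage_class_supported("GLACIER_IR"))  # True: Complete should accept it
```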
### Environment
```markdown
- OS: Ubuntu 20.04
- LocalStack: pro latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7493 | https://github.com/localstack/localstack/pull/7505 | ae1a4985780a8fbac9fca9482596cf0887333256 | 400edb1c10d0a16b128911e5c1a8de590f14732e | "2023-01-13T22:04:11Z" | python | "2023-01-19T11:07:06Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,492 | ["localstack/aws/api/s3/__init__.py", "localstack/aws/spec-patches.json", "localstack/services/s3/provider.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"] | bug: Multipart upload complete does not generate an error if the parts are not sorted in ascending order | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When requesting a Complete operation at the end of a multipart upload, the parts have to be sorted in ascending order of part number.
AWS S3 issues an error if that is not the case. LocalStack does not generate such an error (although its log issues warnings).
### Expected Behavior
Complete should generate an error
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
localstack start
Buggy software extract (simplified)
```golang
wg := errgroup.Group{}
for i := 0; i < 3; i++ {
wg.Go(func() error {
rd, err := os.Open(source)
if err != nil {
return err
}
return u.Upload(rd, i, 10*_cMB) // uploads and stores for part i
})
}
err := wg.Wait()
if err != nil {
...
}
err = u.Complete()
	// because i is captured by reference in the goroutines, all the parts will have the same part number 3
```
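Until LocalStack rejects unordered or duplicate parts like AWS does, a client-side guard catches this class of bug: validate that part numbers are unique and ascending before calling Complete. A minimal sketch (the part dicts mirror the S3 `CompletedPart` shape; not LocalStack code):

```python
def validate_parts(parts: list) -> None:
    """Reject part lists that S3's CompleteMultipartUpload would refuse."""
    numbers = [p["PartNumber"] for p in parts]
    if numbers != sorted(numbers) or len(numbers) != len(set(numbers)):
        raise ValueError("parts must have unique, ascending PartNumber values")

validate_parts([{"PartNumber": 1}, {"PartNumber": 2}])  # ok
try:
    validate_parts([{"PartNumber": 3}, {"PartNumber": 3}, {"PartNumber": 3}])
except ValueError as exc:
    print(exc)  # the bug in the snippet above would be caught here
```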
### Environment
```markdown
- OS:Ubuntu 20.04
- LocalStack: latest pro
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7492 | https://github.com/localstack/localstack/pull/7495 | 6dda0b1af0639de16fe8fb36be8ebb736a077925 | b8da5a1f7dd559b98b8f22d50bdba5612121702a | "2023-01-13T21:53:55Z" | python | "2023-01-16T16:21:42Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,426 | ["localstack/services/dynamodb/models.py", "localstack/services/dynamodb/provider.py", "tests/integration/test_dynamodb.py"] | bug: DynamoDB DescribeTable treats the queried table as a replica | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The DynamoDB DescribeTable operation treats the current queried table as a replica.
```
$ awslocal dynamodb describe-table --table-name global01 --query 'Table.Replicas' --region us-west-1
[
{
"RegionName": "ap-south-1",
"ReplicaStatus": "ACTIVE"
},
{
"RegionName": "eu-central-1",
"ReplicaStatus": "ACTIVE"
},
{
"RegionName": "us-west-1", # <<< THIS MUST NOT BE RETURNED
"ReplicaStatus": "ACTIVE" # <<<
}
]
```
### Expected Behavior
The correct response should be like so:
```
$ awslocal dynamodb describe-table --table-name global01 --query 'Table.Replicas' --region us-west-1
[
{
"RegionName": "ap-south-1",
"ReplicaStatus": "ACTIVE"
},
{
"RegionName": "eu-central-1",
"ReplicaStatus": "ACTIVE"
}
]
```
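The fix amounts to filtering the querying region out of the stored replica list when building the response. A sketch of that filter (region names from the example above; statuses simplified to `ACTIVE`):

```python
def replicas_for_response(replica_regions: list, queried_region: str) -> list:
    """Build the Replicas list DescribeTable should return for a given region."""
    return [
        {"RegionName": region, "ReplicaStatus": "ACTIVE"}
        for region in replica_regions
        if region != queried_region
    ]

regions = ["ap-south-1", "eu-central-1", "us-west-1"]
print(replicas_for_response(regions, "us-west-1"))
```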
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
### Environment
```markdown
- OS: Ubuntu 22.04
- LocalStack: latest
```
### Anything else?
Localstack maintains a single copy of a global table rather than truly replicating it. Requests are forwarded to the region where the table exists. On the receiving region, it is not possible to know the originating region. This is a technical limitation which will be solvable once the new internal AWS client is stable, see https://github.com/localstack/localstack/pull/7240. Here it will be possible to use the DTO to determine the origin region. | https://github.com/localstack/localstack/issues/7426 | https://github.com/localstack/localstack/pull/8549 | 577f0dc6ce7fed7ef58031443535e6660e085905 | a5e443bcbf480f621031d9306f35ddfad761c7b2 | "2023-01-04T06:21:05Z" | python | "2023-06-23T14:12:19Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,424 | ["localstack/services/s3/presigned_url.py", "localstack/services/s3/provider.py", "localstack/services/s3/utils.py", "tests/integration/s3/test_s3.py", "tests/unit/test_s3.py"] | ASF S3 Provider: 404 NoSuchBucket | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Listing buckets with ASF Provider fails.
```
awslocal --endpoint-url=http://aws.mycompany..local/ s3api --region us-east-1 list-buckets
An error occurred (NoSuchBucket) when calling the ListBuckets operation: The specified bucket does not exist
```
### Expected Behavior
Listing buckets with ASF provider should return a json response.
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
Localstack helm chart is executing against a kind cluster, with port mappings. The kind cluster config looks like this
```
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: local
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
extraMounts:
- hostPath: ./charts/_localstack/init-scripts
containerPath: /localstack/init-scripts
readOnly: true
```
The helm chart values.yaml looks like this
```
debug: true
image:
tag: 1.3.1
volumes:
- name: init-scripts
# Mounted by kind.yaml
hostPath:
path: /localstack/init-scripts
volumeMounts:
- name: init-scripts
mountPath: /etc/localstack/init/ready.d
readOnly: true
ingress:
enabled: true
hosts:
- host: aws.mycompany.local
paths:
- path: /
pathType: ImplementationSpecific
extraEnvVars:
# Disable usage tracking
- name: DISABLE_EVENTS
value: "1"
- name: PROVIDER_OVERRIDE_S3
value: asf
- name: LS_LOG
value: trace
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
aws configure set cli_follow_urlparam false
BUCKET=mybucket
awslocal s3 mb s3://$BUCKET
awslocal s3api \
put-bucket-notification-configuration \
--bucket $BUCKET \
--notification-configuration '{ "EventBridgeConfiguration": {} }'
awslocal s3api \
put-bucket-cors \
--bucket $BUCKET \
--cors-configuration '{
"CORSRules": [
{
"AllowedOrigins": ["http://app.mycompany.local","https://app.mycompany.local"],
"AllowedHeaders": ["*"],
"AllowedMethods": ["PUT", "GET"],
"MaxAgeSeconds": 3000
}
]}'
# snip
# event bridge rule, target, connection, api destination
# ...
```
### Environment
```markdown
- OS: MacOS 12.5
- LocalStack: 1.3.1
```
### Anything else?
The AWS list buckets call works from within the pod, but not from the outside. Discussion with @bentsku can be found in slack [here](https://localstack-community.slack.com/archives/CMAFN2KSP/p1672768637503899) | https://github.com/localstack/localstack/issues/7424 | https://github.com/localstack/localstack/pull/7431 | 1879b79c88e542fc8ce0998942e32fb34989490b | 225b8927972808ad0ad92d3c7396b1ae73e2bde7 | "2023-01-03T20:29:44Z" | python | "2023-01-05T18:02:04Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,407 | ["localstack/services/s3/notifications.py", "localstack/services/s3/provider.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json", "tests/integration/s3/test_s3_notifications_eventbridge.py", "tests/integration/s3/test_s3_notifications_eventbridge.snapshot.json", "tests/integration/s3/test_s3_notifications_sqs.py", "tests/integration/s3/test_s3_notifications_sqs.snapshot.json"] | S3 does not trigger Lambda when the event is s3:ObjectAcl:Put | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
S3 does not trigger Lambda when the event is:
`s3:ObjectAcl:Put`
The event below, by contrast, works fine:
`s3:ObjectCreated:Put`
### Expected Behavior
In the AWS works fine, S3 triggers the Lambda with both events:
`s3:ObjectAcl:Put`
`s3:ObjectCreated:Put`
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
#### Starting localstack:
docker-compose down -v --remove-orphans && docker-compose up
#### Put the trigger:
```shell
aws --endpoint-url=http://localhost:4566 s3api put-bucket-notification-configuration --bucket bucket --notification-configuration file://notification.json
```
```json
{
"LambdaFunctionConfigurations": [
{
"Id": "xxxxxxxxxx",
"LambdaFunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:xxxxxxxxxx",
"Events": [ "s3:ObjectAcl:Put" ],
"Filter": {
"Key": {
"FilterRules": [
{
"Name": "suffix",
"Value": ".json"
}
]
}
}
}
]
}
```
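For reference, a rough sketch of the matching that should decide whether this configuration fires (hypothetical helper, not LocalStack's code; the real logic also supports wildcard events such as `s3:ObjectAcl:*`):

```python
from fnmatch import fnmatchcase


def notification_matches(configured_events, filter_rules, event_name, object_key):
    """The event name must match one of the configured events, and the object
    key must satisfy every prefix/suffix filter rule."""
    event_ok = any(fnmatchcase(event_name, pattern) for pattern in configured_events)
    key_ok = all(
        object_key.endswith(rule["Value"]) if rule["Name"] == "suffix"
        else object_key.startswith(rule["Value"])
        for rule in filter_rules
    )
    return event_ok and key_ok


# An ObjectAcl:Put on a .json key should trigger the Lambda above.
print(notification_matches(
    ["s3:ObjectAcl:Put"],
    [{"Name": "suffix", "Value": ".json"}],
    "s3:ObjectAcl:Put",
    "notification.json",
))  # → True
```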
#### Check the trigger:
```shell
aws --endpoint-url=http://localhost:4566 s3api get-bucket-notification-configuration --bucket bucket
```
#### Change ACL:
The command below works fine, but does not trigger the Lambda:
```shell
aws --endpoint-url=http://localhost:4566 s3api put-object-acl --bucket bucket --key key --grant-full-control id=canonical
aws --endpoint-url=http://localhost:4566 s3api get-object-acl --color on --bucket bucket --key key
```
#### Try with SDK:
Even using `s3AsyncClient.putObjectAcl`, similar to the code below, the Lambda is not triggered:
https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/SetAcl.java
### Environment
```markdown
- OS: Ubuntu 18.04
- LocalStack: latest
```
### Anything else?
However, if I change the event e.g. to `s3:ObjectCreated:Put`, the Lambda trigger works fine:
```shell
aws --endpoint-url=http://localhost:4566 s3 cp notification.json s3://bucket/notification.json
``` | https://github.com/localstack/localstack/issues/7407 | https://github.com/localstack/localstack/pull/7409 | d8af6ee582f7ab278907ef6e1f803a695752a15a | 6dda0b1af0639de16fe8fb36be8ebb736a077925 | "2022-12-30T17:12:53Z" | python | "2023-01-16T11:53:50Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,398 | ["localstack/services/sns/models.py", "localstack/services/sns/provider.py", "localstack/services/sns/publisher.py", "localstack/testing/pytest/fixtures.py", "tests/integration/test_sns.py", "tests/integration/test_sns.snapshot.json"] | bug: SNS Topic subscription with filter policy scope MessageBody, potentially not supported in v1.3.1 | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When I try creating a new SNS topic subscription where the filter policy scope is set to MessageBody, i.e. when I specify the FilterPolicyScope attribute:
`aws --endpoint-url "http://localhost:4566" sns subscribe \
--topic-arn arn:aws:sns:eu-central-1:000000000000:order-commands \
--protocol sqs \
--notification-endpoint arn:aws:sqs:eu-central-1:000000000000:order-commands-add-product \
--attributes '{"RawMessageDelivery":"true","FilterPolicyScope":"MessageBody","FilterPolicy":"{\"messageType\":[\"add-product\"]}"}'`
The following error is being returned:
`An error occurred (InvalidParameter) when calling the Subscribe operation: AttributeName`
And the localstack logs show:
`2022-12-29T04:01:26.567 INFO --- [ asgi_gw_1] localstack.request.aws : AWS sns.Subscribe => 400 (InvalidParameter)`
If I try without the mentioned parameter, the subscription is being created successfully - but in this case, the filter does not apply when sending messages to the topic, as expected because the filter then only looks for message attributes:
`2022-12-29T04:08:00.375 INFO --- [ asgi_gw_3] localstack.request.aws : AWS sns.Publish => 200
2022-12-29T04:08:00.377 INFO --- [ncthread1819] l.services.sns.provider : SNS filter policy {'messageType': ['add-product']} does not match attributes {}
`
### Expected Behavior
Although I have not tried it against the real AWS API just now, I can see in the AWS documentation that it is supported; it is one of the newer features they added.
Having double-checked my local dependency versions, I would also expect the latest version of LocalStack to support message-body-based filter policies as well 😄
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker-compose -f ${COMPOSE_FILE_INFRA} up -d
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
Using terraform locally, just for reference:
resource "aws_sns_topic" "order-commands" {
name = "order-commands"
}
resource "aws_sqs_queue" "order-commands-add-product" {
name = "order-commands-add-product"
}
resource "aws_sns_topic_subscription" "order-commands-subscription1" {
topic_arn = aws_sns_topic.order-commands.arn
protocol = "sqs"
endpoint = aws_sqs_queue.order-commands-add-product.arn
raw_message_delivery = true
filter_policy = jsonencode(
{
"messageType" = [
"add-product"
]
}
)
filter_policy_scope = "MessageBody"
}
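For illustration, a much-simplified sketch of what `FilterPolicyScope=MessageBody` means: the policy keys are looked up in the JSON-decoded body rather than in the message attributes (the real SNS policy language also supports nesting, `anything-but`, prefix matching, etc.; the helper name is hypothetical):

```python
import json


def body_matches_filter_policy(filter_policy, message_body):
    """Return True when every policy key's value in the JSON body is one of
    the allowed values. Non-JSON bodies never match."""
    try:
        body = json.loads(message_body)
    except ValueError:
        return False
    return all(body.get(key) in allowed for key, allowed in filter_policy.items())


policy = {"messageType": ["add-product"]}
print(body_matches_filter_policy(policy, '{"messageType": "add-product"}'))     # → True
print(body_matches_filter_policy(policy, '{"messageType": "remove-product"}'))  # → False
```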
### Environment
```markdown
- OS: MacOS Monterey (M1 Pro)
- LocalStack: 1.3.1
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7398 | https://github.com/localstack/localstack/pull/7408 | 200fc1bfc0e10041b5198fbf7eb9a2dd02fbd1ad | e9c8625e0507cbcd4d55cfe15ee630afb0e069e6 | "2022-12-29T04:28:43Z" | python | "2023-01-11T12:42:09Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,381 | ["localstack/services/sqs/provider.py", "tests/integration/test_sqs.py", "tests/integration/test_sqs.snapshot.json"] | bug: Internal Error with SQS Binary Message Attributes | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Adding a binary message attribute to an SQS message results in a 500 Internal Error due to an `exception during call chain: Incorrect padding` error:
```
2022-12-21T15:56:38.698 INFO --- [ asgi_gw_2] localstack.request.aws : AWS sqs.SendMessage => 500 (InternalError)
2022-12-21T15:56:41.701 ERROR --- [ asgi_gw_2] l.aws.handlers.logging : exception during call chain: Incorrect padding
```
The binary data (hex encoded) is:
```
7472616365706172656E741E30302D39613235616632623863356336353937383338383830316165353633313736622D616331393436313566356662326561662D3031
```
This is UTF-8/ASCII text but with one control character in it (`0x1E`), but the presence of the control character seems immaterial - the problem remains when this is swapped out for a printable character.
Swapping out localstack for netcat in order to observe the actual posted data (note the data is slightly different as it is random each time):
```
Host: localhost:6565
User-Agent: aws-sdk-go-v2/1.17.3 os/linux lang/go/1.18.4 md/GOOS/linux md/GOARCH/amd64 api/sqs/1.19.17
Content-Length: 335
Amz-Sdk-Invocation-Id: 8e51778c-8aca-4ea9-ae5a-3d36f80b6f19
Amz-Sdk-Request: attempt=1; max=3
Authorization: AWS4-HMAC-SHA256 Credential=1234/20221221/us-east-1/sqs/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;content-length;content-type;host;x-amz-date, Signature=5312bc321178fe3deaac7426c1d3276139ed4a404cc39261d4324e5becdda380
Content-Type: application/x-www-form-urlencoded
X-Amz-Date: 20221221T191345Z
Accept-Encoding: gzip
Action=SendMessage&MessageAttribute.1.Name=OTEL&MessageAttribute.1.Value.BinaryValue=dHJhY2VwYXJlbnQeMDAtNzc0MDYyZDZjMzcwODFhNWEwYjliNWI4OGUzMDYyN2MtMmQyNDgyMjExZjY0ODlkYS0wMQ%3D%3D&MessageAttribute.1.Value.DataType=Binary&MessageBody=Hello+World%21&QueueUrl=http%3A%2F%2Flocalhost%3A6565%2F000000000000%2Ftest-queue&Version=2012-11-05
```
Decoding the URL-encoded base64 data, it looks OK to me:
```
dHJhY2VwYXJlbnQeMDAtNzc0MDYyZDZjMzcwODFhNWEwYjliNWI4OGUzMDYyN2MtMmQyNDgyMjExZjY0ODlkYS0wMQ==
```
And back to binary looks OK:
```
echo 'dHJhY2VwYXJlbnQeMDAtNzc0MDYyZDZjMzcwODFhNWEwYjliNWI4OGUzMDYyN2MtMmQyNDgyMjExZjY0ODlkYS0wMQ==' | base64 -d | od -A x -t x1z -v --endian=big
000000 74 72 61 63 65 70 61 72 65 6e 74 1e 30 30 2d 37 >traceparent.00-7<
000010 37 34 30 36 32 64 36 63 33 37 30 38 31 61 35 61 >74062d6c37081a5a<
000020 30 62 39 62 35 62 38 38 65 33 30 36 32 37 63 2d >0b9b5b88e30627c-<
000030 32 64 32 34 38 32 32 31 31 66 36 34 38 39 64 61 >2d2482211f6489da<
000040 2d 30 31 >-01<
```
```
>>> data='dHJhY2VwYXJlbnQeMDAtNzc0MDYyZDZjMzcwODFhNWEwYjliNWI4OGUzMDYyN2MtMmQyNDgyMjExZjY0ODlkYS0wMQ=='
>>> import base64
>>> binarydata = base64.b64decode(data)
>>> print(binarydata)
b'traceparent\x1e00-774062d6c37081a5a0b9b5b88e30627c-2d2482211f6489da-01'
```
Conclusion: Nothing seems to be wrong with the base64 padding in the URL parameter.
This works fine against real AWS.
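The whole decoding chain can be reproduced in a few lines, which again suggests the padding itself is fine:

```python
import base64
from urllib.parse import unquote

# The raw form-encoded parameter value captured with netcat above.
encoded = "dHJhY2VwYXJlbnQeMDAtNzc0MDYyZDZjMzcwODFhNWEwYjliNWI4OGUzMDYyN2MtMmQyNDgyMjExZjY0ODlkYS0wMQ%3D%3D"

# URL-decode first (%3D%3D -> ==), then base64-decode; no padding error occurs.
decoded = base64.b64decode(unquote(encoded))
print(decoded[:11])  # → b'traceparent'
```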
### Expected Behavior
Binary Message Attributes to be successfully accepted by Localstack SQS.
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
localstack start
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
### Environment
```markdown
- OS: Ubuntu 20.04.4 LTS (Focal Fossa)
- LocalStack: 1.3.1
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7381 | https://github.com/localstack/localstack/pull/7386 | 41ae0afdd75f08b9b19b29d262f19f8553c1c42f | 44c23463b16d2b5550d3237a580170dd1000d501 | "2022-12-21T19:27:34Z" | python | "2022-12-22T21:46:16Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,378 | ["localstack/services/logs/models.py", "localstack/services/logs/provider.py", "tests/integration/test_logs.py", "tests/integration/test_logs.snapshot.json"] | bug: Can't create a cloudwatch log group using latest AWS SDK/CLI | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
A 500 error happens in localstack that indicates localstack might need a boto upgrade to get newly added functionality:
```
2022-12-21T16:51:43.172 INFO --- [ asgi_gw_4] localstack.request.http : POST / => 500
2022-12-21T16:51:48.727 ERROR --- [ asgi_gw_0] l.aws.handlers.logging : exception during call chain
Traceback (most recent call last):
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/model.py", line 362, in operation_model
model = self._service_description['operations'][operation_name]
KeyError: 'ListTagsForResource'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/code/localstack/localstack/aws/protocol/parser.py", line 172, in wrapper
return func(*args, **kwargs)
File "/opt/code/localstack/localstack/aws/protocol/parser.py", line 895, in parse
operation = self.service.operation_model(operation_name)
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/utils.py", line 1419, in _cache_guard
result = func(self, *args, **kwargs)
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/model.py", line 364, in operation_model
raise OperationNotFoundError(operation_name)
botocore.model.OperationNotFoundError: ListTagsForResource
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/code/localstack/localstack/aws/chain.py", line 90, in handle
handler(self, self.context, response)
File "/opt/code/localstack/localstack/aws/handlers/service.py", line 63, in __call__
return self.parse_and_enrich(context)
File "/opt/code/localstack/localstack/aws/handlers/service.py", line 76, in parse_and_enrich
operation, instance = parser.parse(context.request)
File "/opt/code/localstack/localstack/aws/protocol/parser.py", line 176, in wrapper
raise UnknownParserError(
localstack.aws.protocol.parser.UnknownParserError: An unknown error occurred when trying to parse the request.
```
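A toy reproduction of the failing lookup: the bundled `logs` service description simply predates the operation, so any botocore version whose model lacks `ListTagsForResource` (the newer replacement for `ListTagsLogGroup`) raises here. The names below are illustrative, not LocalStack's actual code:

```python
# Old service description: ListTagsForResource is simply absent.
old_logs_model = {"operations": {"CreateLogGroup": {}, "ListTagsLogGroup": {}}}


def operation_model(service_description, operation_name):
    """Mimics botocore's lookup; botocore raises OperationNotFoundError here."""
    try:
        return service_description["operations"][operation_name]
    except KeyError:
        raise LookupError(f"operation not found: {operation_name}")


print("ListTagsForResource" in old_logs_model["operations"])  # → False
```

Upgrading the vendored botocore (whose newer `logs` model contains `ListTagsForResource`) would make the lookup succeed.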
### Expected Behavior
CloudWatch Log group to be successfully created
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal logs create-log-group --log-group-name test
### Environment
```markdown
- OS: Ubuntu 22.04
- LocalStack: latest, and also 1.3.1
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7378 | https://github.com/localstack/localstack/pull/7389 | d3b89ee46a5587cba7f8441a73b4d7ca561b9024 | 40fcbd8bf1865c7c3658628230a7c5992c73b52f | "2022-12-21T17:07:41Z" | python | "2022-12-24T20:52:03Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,377 | ["localstack/services/dynamodb/provider.py", "localstack/services/dynamodb/utils.py", "tests/integration/cloudformation/resources/test_dynamodb.py", "tests/integration/cloudformation/resources/test_lambda.py", "tests/integration/test_dynamodb.py"] | bug: DynamoDB global tables v2019.11.21 does not seem to fully replicate to other regions | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When using DynamoDB with global tables (v2019.11.21), I am experiencing two problems that lead me to believe something is making global tables not fully replicate:
1) When creating the table with CLI and then adding replication, I can see that the table gets replication turned on, but when listing the tables in the other region – nothing is returned. Additionally, no DynamoDbStream is created in the other region from what I can see.
2) Creating the table with terraform and replication turned on does not work, and I need to turn off replication for the table to properly instantiate. I can add replication later with CLI however, but then I get problem 1 above.
### Expected Behavior
1. When listing tables in the other region, I would expect to see a table there and be able to find a corresponding DynamoDbStream by describing the table.
2. I expect that creating a global table with v2019.11.21 using terraform would work.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
(In the examples, the default region is set to us-east-1)
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
```
version: "3.6"
services:
localstack:
image: localstack/localstack:latest
restart: unless-stopped
healthcheck:
test: [ "CMD-SHELL", "awslocal dynamodb list-tables && awslocal dynamodbstreams list-streams" ]
environment:
- DEBUG=1
- HOSTNAME=localstack
- AWS_DEFAULT_REGION=us-east-1
ports:
- 4566:4566
volumes:
- /var/run/docker.sock:/var/run/docker.sock
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) [issue 1]
1. create the table
```
awslocal dynamodb create-table \
--table-name 'Table' \
--attribute-definitions AttributeName=PK,AttributeType=S AttributeName=SK,AttributeType=S \
--key-schema AttributeName=PK,KeyType=HASH AttributeName=SK,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \
--stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES
```
2. set up replication
```
awslocal dynamodb update-table \
--table-name 'Table' \
--cli-input-json \
"{
\"ReplicaUpdates\":
[
{
\"Create\": {
\"RegionName\": \"eu-west-1\"
}
}
]
}"
```
3. verify replication is set up
```
awslocal dynamodb describe-table --table-name 'Table' --query 'Table.Replicas'
```
4. when listing the tables in the other region, no table is returned and I can't query for the DynamoDbStream
```
awslocal dynamodb list-tables --region eu-west-1
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands) [issue 2]
1. run `terraform apply` with a simple table configuration where replica is set
```
resource "aws_dynamodb_table" "table" {
name = "table"
billing_mode = "PROVISIONED"
read_capacity = 1
write_capacity = 1
hash_key = "PK"
range_key = "SK"
attribute {
name = "PK"
type = "S"
}
attribute {
name = "SK"
type = "S"
}
replica {
region_name = "eu-west-1"
}
stream_enabled = true
stream_view_type = "NEW_AND_OLD_IMAGES"
}
```
This yields the error `Error: creating Amazon DynamoDB Table (table): replicas: waiting for replica (eu-west-1) creation: unexpected state '', wanted target 'ACTIVE'. last error: %!s(<nil>)`. When removing the replica section, it works just fine.
### Environment
```markdown
- OS: macOS Ventura 13.0
- LocalStack: latest
```
### Anything else?
I tried to look at global tables with version 2019.11.21 in AWS using the CLI and it looks to me that the replicated table would be listable/describe:able in each of the replicated regions, that's why I assumed this might be a bug of some sort.
Creating one issue describing both problems as I believe these might be related. Possibly, some part of the replication is not complete which is why 1) is happening. Additionally, 2) could potentially be a guide for finding the root cause. It seems though that items are replicated in both regions in scenario 1), even though the replica is not describe:able.
I've tried searching for similar issues and more information on whether multi region is supported at all by Localstack, and since I had a hard time finding answers, I thought it'd be best to create an issue around this! | https://github.com/localstack/localstack/issues/7377 | https://github.com/localstack/localstack/pull/7400 | 8e5b4b092ac40dfb31c0de39b1e0df3cde8e8d24 | 1a18c29809d75ad1fe4ba8282a969fa227b73d60 | "2022-12-21T13:56:13Z" | python | "2023-01-04T13:39:14Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,358 | ["CONTRIBUTING.md", "localstack/services/kinesis/packages.py"] | bug: LocalStack authentication expects a space after commas in Authorization header parameters | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I'm using [`LocalStack`](https://github.com/localstack/localstack) - the Docker container - to perform local tests on top of [`ex_aws`](https://github.com/ex-aws/ex_aws), an Elixir library.
I'm getting the following error
```
Authorization header requires \\Signature\\ parameter. Authorization header requires \\SignedHeaders\\ parameter.
```
The `Authorization` header does have those parameters, but... they're written like this
```
AWS4-HMAC-SHA256 Credential=<cred>,SignedHeaders=<sign_head>,Signature=<sign>
```
instead of
```
AWS4-HMAC-SHA256 Credential=<cred>, SignedHeaders=<sign_head>, Signature=<sign>
^ notice the extra space
```
which seemingly LocalStack expects, but shouldn't (?)
[AWS' Authenticating Requests (AWS Signature Version 4) > Using an Authorization Header > Overview](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html#sigv4-auth-header-overview) is not very clear either since in the example they post, we find
```
Authorization: AWS4-HMAC-SHA256
Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/s3/aws4_request,
^ a space here
SignedHeaders=host;range;x-amz-date,
^ no space here
Signature=fe5f80f77d5fa3beca038a248ff027d0445342fe2855ddc963176630326f1024
```
### Expected Behavior
LocalStack should consume both forms of the `Authorization` header made explicit in section "Current Behavior"
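A tolerant parser would treat whitespace after each comma as optional, e.g. (sketch, not LocalStack's code):

```python
def parse_auth_params(authorization):
    """Parse the parameter list of a SigV4 Authorization header, accepting
    both "," and ", " as separators by stripping each part."""
    _, _, params = authorization.partition(" ")
    return dict(
        part.strip().split("=", 1) for part in params.split(",") if part.strip()
    )


tight = "AWS4-HMAC-SHA256 Credential=c,SignedHeaders=h,Signature=s"
spaced = "AWS4-HMAC-SHA256 Credential=c, SignedHeaders=h, Signature=s"
print(parse_auth_params(tight) == parse_auth_params(spaced))  # → True
```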
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
using an Elixir library, as explained above
### Environment
```markdown
- OS: MacOS 12.6
- LocalStack: 1.3.0
```
### Anything else?
I initially posted about this in [discuss.localstack.cloud:
Authorization header potentially not parsed right](https://discuss.localstack.cloud/t/authorization-header-potentially-not-parsed-right/183/3). | https://github.com/localstack/localstack/issues/7358 | https://github.com/localstack/localstack/pull/7375 | d7cff707a624916ede0dc64897c6ee06c8bb3b7e | 2ccd9a1ce2e5f531a2d9a8b4e64b8483166dd4c3 | "2022-12-19T21:22:08Z" | python | "2022-12-22T17:26:10Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,338 | ["localstack/services/awslambda/invocation/docker_runtime_executor.py", "localstack/services/awslambda/lambda_executors.py", "localstack/services/awslambda/lambda_utils.py", "tests/integration/awslambda/functions/lambda_networks.py", "tests/integration/awslambda/test_lambda_developer_tools.py"] | feature request: Pass all networks assigned from localstack to lambda container | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
Allow the Lambda container to have multiple networks, not just one.
With the current implementation, the Lambda container only gets a single network, even though the LocalStack container itself can be attached to multiple networks.
this is how it works currently:
> Gets the main network of the LocalStack container (if we run in one, bridge otherwise)
If there are multiple networks connected to the LocalStack container, we choose the first as "main" network
### 🧑💻 Implementation
This is where the "main" network value is retrieved; maybe it should be a list?
https://github.com/localstack/localstack/blob/a71e661fa6b0189e54b468713f710f0ae1b93791/localstack/utils/container_networking.py#L14
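A sketch of what returning all networks could look like, operating on the dict that `docker inspect` (or docker-py's `container.attrs`) returns; the helper name is hypothetical:

```python
def get_networks(container_attrs) -> list:
    """Return every network a container is attached to, not just the first."""
    networks = container_attrs.get("NetworkSettings", {}).get("Networks", {})
    return sorted(networks)


# Shape matches the NetworkSettings.Networks section of `docker inspect`.
attrs = {"NetworkSettings": {"Networks": {"bridge": {}, "my_app_net": {}}}}
print(get_networks(attrs))  # → ['bridge', 'my_app_net']
```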
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7338 | https://github.com/localstack/localstack/pull/8621 | 7bc23ff03876a2638e1a38627065243f04e5868f | 65087804fc8600fdeaa14e1d377d76e55b868309 | "2022-12-15T13:35:31Z" | python | "2023-07-06T11:00:39Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,323 | ["localstack/services/ses/provider.py", "tests/integration/test_ses.py", "tests/integration/test_ses.snapshot.json"] | bug: SES Events notification messages are missing property tags | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
This is a follow-up to these PRs from @simonrw: https://github.com/localstack/localstack/pull/7244 and https://github.com/localstack/localstack/pull/7207
Setting up a configuration set for SES enables us to receive event notification from SES when different events happen (sent, delivered, bounced, opened etc)
The implementation in LocalStack to mimic this behaviour was recently merged in the PRs mentioned above, but not all the properties from the payload are present. One property we need is tags, specifically the custom tags, which can be attached to the email request in order to better process the events later.
For example this email request:
```
aws ses send-email --destination '{"ToAddresses":["xxx@xxxcom"]}' --message '{"Subject":{"Data":"Test Link"},"Body":{"Html":{"Data":"<a href=\"nba.com\">Link test</a>"}}}' --configuration-set-name ses_config_set --from 'yyy@oyyycom' --region eu-central-1 --tags Name=identifier,Value=testidentifier --output table | cat
```
Should generate events that contain the custom tag "identifier" (among other generic tags from SES)
In real AWS:
```
{
"Type":"Notification",
"MessageId":"91873157-d824-57d9-8228-6d457e4cd029",
"TopicArn":"arn:aws:sns:eu-central-1:736957585402:ses_events_topic",
"Subject":"Amazon SES Email Event Notification",
"Message":"{\"eventType\":\"Delivery\",\"mail\":{\"timestamp\":\"2022-12-13T16:11:19.531Z\",\"source\":\"xxx\",\"sourceArn\":\"arn:aws:ses:eu-central-1:736957585402:identity/xxx\",\"sendingAccountId\":\"736957585402\",\"messageId\":\"010701850c413a6b-9f7f1f77-2ec7-44b1-80a3-4d2c840d67a2-000000\",\"destination\":[\"[email protected]\"],\"headersTruncated\":false,\"headers\":[{\"name\":\"From\",\"value\":\"xxx\"},{\"name\":\"To\",\"value\":\"[email protected]\"},{\"name\":\"Subject\",\"value\":\"Test Link\"},{\"name\":\"MIME-Version\",\"value\":\"1.0\"},{\"name\":\"Content-Type\",\"value\":\"text/html; charset=UTF-8\"},{\"name\":\"Content-Transfer-Encoding\",\"value\":\"7bit\"}],\"commonHeaders\":{\"from\":[\"xxx\"],\"to\":[\"[email protected]\"],\"messageId\":\"010701850c413a6b-9f7f1f77-2ec7-44b1-80a3-4d2c840d67a2-000000\",\"subject\":\"Test Link\"},\"tags\":{\"identifier\":[\"testidentifier\"],\"ses:operation\":[\"SendEmail\"],\"ses:configuration-set\":[\"ses_config_set\"],\"ses:source-ip\":[\"3.68.108.181\"],\"ses:from-domain\":[\"yyy.com\"],\"ses:caller-identity\":[\"nw-admin\"],\"ses:outgoing-ip\":[\"69.169.224.1\"]}},\"delivery\":{\"timestamp\":\"2022-12-13T16:11:20.279Z\",\"processingTimeMillis\":748,\"recipients\":[\"[email protected]\"],\"smtpResponse\":\"250 2.0.0 OK 1670947880 6-20020a056000156600b00236c23d73ddsi99176wrz.662 - gsmtp\",\"reportingMTA\":\"b224-1.smtp-out.eu-central-1.amazonses.com\"}}\n",
"Timestamp":"2022-12-13T16:11:20.388Z",
"SignatureVersion":"1",
"Signature":"C3BAipDonFzymok4kMnj7rwZFxi4447VQNZLpy8TApeSY/FZaN9WS4YtALDQvTiWANMk8ad/hbVrrNtzN90xZNJVNlCiDFA1g9qkFrzwZ7/2UNhQSIStHmsQbGTxGZybv8SucCz3OZlFrTqYyQg8RsgUtke0+BgWJErFAAWszd1ActeGjFTLUY3vwSBIUe7zkJakEqzg+XndjU+IjtBBEgI31reIhrTlYPvlvABLi8SeyvB04dvH5Uekshw5rbYe12556vYzqA6N8Kd1n3J+BMK/mIPUcYeMC/PMmzTpeOWaov7v65dkZtrJD4sTvLx5wCIBeiwod2UI4pttId2Zug==",
"SigningCertURL":"https://sns.eu-central-1.amazonaws.com/SimpleNotificationService-56e67fcb41f6fec09b0196692625d385.pem",
"UnsubscribeURL":"https://sns.eu-central-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:eu-central-1:736957585402:ses_events_topic:c5810a4e-f8e6-41a6-bfca-b5b7e9dfe33e"
}
```
Currently in Localstack:
```
{
"Type":"Notification",
"MessageId":"1d579434-7249-415f-87cf-c25be7105d31",
"TopicArn":"arn:aws:sns:eu-central-1:000000000000:ses_events_topic",
"Message":"{\"eventType\": \"Delivery\", \"mail\": {\"timestamp\": \"2022-12-13T15:56:04.071889+00:00\", \"source\": \"Sender <[email protected]>\", \"sourceArn\": \"arn:aws:ses:eu-central-1:000000000000:identity/Sender <[email protected]>\", \"sendingAccountId\": \"000000000000\", \"destination\": [\"Recipient <[email protected]>\"], \"messageId\": \"ytjtivdqeyygqyhj-dxkcrufx-tppt-hinv-ohqa-qzdyybhobphe-owevcr\"}, \"delivery\": {\"recipients\": [\"Recipient <[email protected]>\"], \"timestamp\": \"2022-12-13T15:56:04.071889+00:00\"}}",
"Timestamp":"2022-12-13T15:56:04.088Z",
"SignatureVersion":"1",
"Signature":"EXAMPLEpH+..",
"SigningCertURL":"https://sns.us-east-1.amazonaws.com/SimpleNotificationService-0000000000000000000000.pem",
"UnsubscribeURL":"http://localhost:4566/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:eu-central-1:000000000000:ses_events_topic:b8d988ba-62c9-4c2c-8857-74984bdbe26b",
"Subject":"Amazon SES Email Event Notification"
}
```
### Expected Behavior
The SES event notification payload should include tags, especially the custom ones ("identifier", in this case).
```
"tags\":{\"identifier\":[\"testidentifier\"],\"ses:operation\":[\"SendEmail\"],\"ses:configuration-set\":[\"ses_config_set\"],\"ses:source-ip\":[\"33.61.18.181\"],\"ses:from-domain\":[\"yyy.com\"],\"ses:caller-identity\":[\"xx-admin\"],\"ses:outgoing-ip\":[\"79.143.224.1\"]}},
```
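The shape of the expected `tags` field can be sketched as follows (hypothetical helper; real SES also adds `ses:source-ip`, `ses:from-domain`, `ses:caller-identity`, etc.):

```python
def build_event_tags(custom_tags, configuration_set, operation="SendEmail"):
    """Merge the caller's --tags with (a subset of) the automatic ses:* tags.
    Every tag value is wrapped in a list, matching the real payload shape."""
    tags = {name: [value] for name, value in custom_tags.items()}
    tags["ses:operation"] = [operation]
    tags["ses:configuration-set"] = [configuration_set]
    return tags


print(build_event_tags({"identifier": "testidentifier"}, "ses_config_set"))
```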
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
```
services:
localstack:
image: localstack/localstack:latest
environment:
- SERVICES=sqs,sns,ses
- AWS_DEFAULT_REGION=eu-central-1
- EDGE_PORT=4566
ports:
- '4566-4597:4566-4597'
volumes:
- ./localstack_setup.sh:/docker-entrypoint-initaws.d/setup.sh
- '${TMPDIR:-/tmp/localstack}:/tmp/localstack'
- '/var/run/docker.sock:/var/run/docker.sock'
networks:
default:
name: development-network
external: true
```
**Verify email address**
`aws --endpoint-url=http://localhost:4566 ses verify-email-identity --email-address [email protected] --profile test-profile --region eu-central-1 --output table | cat`
**create queue for SNS - SES events**
```
aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name ses_emails_feedback_queue
--attributes '{"SqsManagedSseEnabled": "false"}' --profile test-profile --region eu-central-1 --output table | cat
```
**create SNS topic for SES events**
`aws --endpoint-url=http://localhost:4566 sns create-topic --name ses_events_topic --region eu-central-1 --profile test-profile --output table`
**subscribe queue to topic**
`aws --endpoint-url=http://localhost:4566 sns subscribe --topic-arn arn:aws:sns:eu-central-1:000000000000:ses_events_topic --protocol sqs --notification-endpoint arn:aws:sqs:eu-central-1:000000000000:ses_emails_feedback_queue --profile test-profile --region eu-central-1 --output table | cat`
**create config set**
`aws --endpoint-url=http://localhost:4566 ses create-configuration-set --configuration-set "{\"Name\":\"ses_config_set\"}" --profile test-profile --region eu-central-1 --output table | cat`
**create event destination**
`aws --endpoint-url=http://localhost:4566 ses create-configuration-set-event-destination --configuration-set-name ses_config_set --event-destination '{"Name":"some_name2","Enabled":true,"MatchingEventTypes":["send","bounce","delivery","open","click"],"SNSDestination":{"TopicARN":"arn:aws:sns:eu-central-1:000000000000:ses_events_topic"}}' --profile test-profile --region eu-central-1 --output table | cat`
**Send email with custom tag included:**
`aws --endpoint-url=http://localhost:4566 ses send-email --destination '{"ToAddresses":["[email protected]"]}' --message '{"Subject":{"Data":"Test Link"},"Body":{"Html":{"Data":"<a href=\"nba.com\">Link test</a>"}}}' --configuration-set-name ses_config_set --from '[email protected]' --region eu-central-1 --tags Name=identifier,Value=testidentifier --output table | cat`
### Environment
```markdown
- OS: macOS 12.6.1
- LocalStack: "1.3.0.dev"
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7323 | https://github.com/localstack/localstack/pull/7332 | 5a3d5f3d04804f47de748142035332dcc1130e71 | e43e912d48200c10738993fd1c01880175d5a29a | "2022-12-13T16:36:57Z" | python | "2022-12-21T11:08:14Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,310 | ["localstack/services/ses/provider.py"] | bug: SES unexpected exception when Destination is CcAddresses or BccAddresses instead of ToAddresses | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
According to the AWS docs, the Destination object may contain any or all of these three properties: ToAddresses, CcAddresses, or BccAddresses.
https://docs.aws.amazon.com/cli/latest/reference/ses/send-email.html
```
The message must include at least one recipient email address. The recipient address can be a To: address, a CC: address, or a BCC: address. If a recipient email address is invalid (that is, it is not in the format UserName@[SubDomain.]Domain.TopLevelDomain ), the entire message will be rejected, even if the message contains other recipients that are valid.
```
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-ses/interfaces/destination.html
When sending an email with the ToAddresses property set on Destination, the email is sent correctly.
```bash
aws ses send-email --endpoint-url=http://localhost:4566 --destination '{"ToAddresses":["[email protected]"]}' --message '{"Subject":{"Data":"foo subject"},"Body":{"Text":{"Data":"saml body"}}}' --configuration-set-name ses_config_set --from '[email protected]' --profile test-profile --region eu-central-1 --output table | cat
```
When sending an email with CcAddresses or BccAddresses instead, it fails:
```bash
aws ses send-email --endpoint-url=http://localhost:4566 --destination '{"CcAddresses":["[email protected]"]}' --message '{"Subject":{"Data":"foo subject"},"Body":{"Text":{"Data":"saml body"}}}' --configuration-set-name ses_config_set --from '[email protected]' --profile test-profile --region eu-central-1 --output table | cat
```
The process throws this error: ` "message": "exception while calling ses.SendEmail: 'ToAddresses'`
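The `'ToAddresses'` error suggests the destination dict is indexed directly. A defensive way to collect recipients (a sketch only, not LocalStack's actual provider code) is:

```python
def collect_recipients(destination):
    # Any of the three keys may be absent, so use .get with a default.
    recipients = []
    for key in ("ToAddresses", "CcAddresses", "BccAddresses"):
        recipients.extend(destination.get(key, []))
    return recipients

# Works even when only CcAddresses is present:
print(collect_recipients({"CcAddresses": ["[email protected]"]}))
```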
### Expected Behavior
No error should be thrown when sending email to recipient in Cc or Bcc and not in To
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
Localstack `docker-compose.yml`
```
version: '3.0'
services:
localstack:
image: localstack/localstack:latest
environment:
- SERVICES=sqs,sns,ses
- AWS_DEFAULT_REGION=eu-central-1
- EDGE_PORT=4566
ports:
- '4566-4597:4566-4597'
volumes:
- ./localstack_setup.sh:/docker-entrypoint-initaws.d/setup.sh
- '${TMPDIR:-/tmp/localstack}:/tmp/localstack'
- '/var/run/docker.sock:/var/run/docker.sock'
networks:
default:
name: development-network
external: true
```
To create all resources either in setup script or manually:
```
#!/bin/bash
echo "########### Setting up localstack profile ###########"
aws configure set aws_access_key_id "dummy" --profile test-profile
aws configure set aws_secret_access_key "dummy" --profile test-profile
aws configure set region "eu-central-1" --profile test-profile
echo "########### Creating sns topics ###########"
aws --endpoint-url=http://localhost:4566 sns create-topic --name ses_events_topic --region eu-central-1 --profile test-profile
echo "########### Whitelist [email protected] email address ###########"
aws --endpoint-url=http://localhost:4566 ses verify-email-identity --email-address [email protected] --profile test-profile --region eu-central-1
echo "########### Subscribe queues to topics ###########"
aws --endpoint-url=http://localhost:4566 sns subscribe --topic-arn arn:aws:sns:eu-central-1:000000000000:ses_events_topic --protocol sqs --notification-endpoint arn:aws:sqs:eu-central-1:000000000000:ses_emails_feedback_queue --profile test-profile --region eu-central-1
echo "########### Create SES config set and event destination ###########"
aws --endpoint-url=http://localhost:4566 ses create-configuration-set --configuration-set "{\"Name\":\"ses_config_set\"}" --profile test-profile --region eu-central-1
aws --endpoint-url=http://localhost:4566 ses create-configuration-set-event-destination --configuration-set-name ses_config_set --event-destination '{"Name":"some_name2","Enabled":true,"MatchingEventTypes":["send","bounce","delivery","open","click"],"SNSDestination":{"TopicARN":"arn:aws:sns:eu-central-1:000000000000:ses_events_topic"}}' --profile test-profile --region eu-central-1 --output table | cat
```
And send the email:
```
aws ses send-email --endpoint-url=http://localhost:4566 --destination '{"CcAddresses":["[email protected]"]}' --message '{"Subject":{"Data":"foo subject"},"Body":{"Text":{"Data":"saml body"}}}' --configuration-set-name ses_config_set --from '[email protected]' --profile test-profile --region eu-central-1 --output table | cat
```
### Environment
```markdown
- OS: macOS 12.6.1
- LocalStack: "1.3.0.dev"
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7310 | https://github.com/localstack/localstack/pull/7385 | 44c23463b16d2b5550d3237a580170dd1000d501 | a3a23b76f78bbcec54a5d2c324985480a2a0f125 | "2022-12-12T14:03:22Z" | python | "2022-12-23T05:54:06Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,305 | ["localstack/services/events/provider.py", "tests/aws/services/events/test_events.py", "tests/aws/services/events/test_events.snapshot.json", "tests/aws/services/events/test_events.validation.json"] | bug: Bug with EventBus filters | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I wanted to test an EventBus filter that will trigger a Lambda only when a field doesn't exist or when exists and equals to false.
So I created a Lambda with the following configuration (using Serverless Framework)
This is my Lambda Configuration
```
test:
handler: test.handler
memorySize: 512
description: test
events:
- eventBridge:
eventBus: ${ssm:/${self:custom.env_name}/infra/event-bus/test, 'default'}
pattern:
source:
- 'other-service'
detail-type:
- 'test'
detail:
event_name:
- 'test_event_name'
test_field:
- false
- eventBridge:
eventBus: ${ssm:/${self:custom.env_name}/infra/event-bus/test, 'default'}
pattern:
source:
- 'other-service'
detail-type:
- 'test'
detail:
event_name:
- 'test_event_name'
test_field:
is_background_job: [{exists: false}]
```
When I push an event, it doesn't trigger the Lambda.
When I tested it on AWS, it worked perfectly.
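The matching semantics relied on here can be sketched with a tiny matcher (an illustration of EventBridge's documented behaviour, not LocalStack's implementation): `[false]` should match when the field is present and equals `false`, and `[{"exists": false}]` should match when the field is absent.

```python
def field_matches(detail, field, pattern):
    present = field in detail
    for alternative in pattern:
        if isinstance(alternative, dict) and "exists" in alternative:
            if alternative["exists"] == present:
                return True
        elif present and detail[field] == alternative:
            return True
    return False

# Field absent -> matched by the {"exists": False} rule:
print(field_matches({"event_name": "test_event_name"}, "is_background_job", [{"exists": False}]))
# Field present and false -> matched by the [False] rule:
print(field_matches({"is_background_job": False}, "is_background_job", [False]))
```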
### Expected Behavior
To trigger the Lambda
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
localstack:
image: localstack/localstack:1.3.0
ports:
- "4566-4583:4566-4583"
- "${PORT_WEB_UI-4666}:${PORT_WEB_UI-8080}"
- "8080:8080"
- "4510:4510"
environment:
- LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY- }
- PORT_WEB_UI=8080
- PERSISTENCE=0
- LAMBDA_REMOTE_DOCKER=0
- LAMBDA_EXECUTOR=docker-reuse
- LAMBDA_REMOVE_CONTAINERS=true
- DOCKER_HOST=unix:///var/run/docker.sock
- LAMBDA_DOCKER_NETWORK=localstack-net
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- LS_LOG=debug
- DEBUG=1
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "${LOCALSTACK_VOLUME_DIR:-~/Library/Caches/localstack/volume}:/var/lib/localstack"
networks:
- localstack-net
### Environment
```markdown
- OS: ubuntu 20.04
- LocalStack: 1.3.0
```
### Anything else?
I tried to deploy it without the test_field filter:
```
test:
handler: test.handler
memorySize: 512
description: test
events:
- eventBridge:
eventBus: ${ssm:/${self:custom.env_name}/infra/event-bus/test, 'default'}
pattern:
source:
- 'other-service'
detail-type:
- 'test'
detail:
event_name:
- 'test_event_name'
```
And it worked well.
| https://github.com/localstack/localstack/issues/7305 | https://github.com/localstack/localstack/pull/9931 | 3b30c563780176ec5512f77de87223881533bdc9 | 563edf7a7752a8976bba73b9a1400bc30279ce4d | "2022-12-11T12:10:10Z" | python | "2023-12-29T12:40:20Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,282 | ["localstack/services/cloudformation/models/elasticsearch.py", "localstack/services/cloudformation/models/opensearch.py", "localstack/utils/collections.py", "tests/integration/cloudformation/resources/test_elasticsearch.py", "tests/integration/templates/opensearch_domain.yml", "tests/unit/utils/test_collections.py"] | unable to create opensearch domain and elastic search domain | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Using a CloudFormation template to create resources, only some of the resources are getting created.
This is the CloudFormation template:
```
{
"Resources": {
"LocalBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "cfn-quickstart-bucket"
}
},
"myDynamoDBTable" : {
"Type" : "AWS::DynamoDB::Table",
"Properties" : {
"AttributeDefinitions" : [
{
"AttributeName" : "Album",
"AttributeType" : "S"
},
{
"AttributeName" : "Artist",
"AttributeType" : "S"
},
{
"AttributeName" : "Sales",
"AttributeType" : "N"
},
{
"AttributeName" : "NumberOfSongs",
"AttributeType" : "N"
}
],
"KeySchema" : [
{
"AttributeName" : "Album",
"KeyType" : "HASH"
},
{
"AttributeName" : "Artist",
"KeyType" : "RANGE"
}
],
"ProvisionedThroughput" : {
"ReadCapacityUnits" : "5",
"WriteCapacityUnits" : "5"
},
"TableName" : "myTableName",
"GlobalSecondaryIndexes" : [{
"IndexName" : "myGSI",
"KeySchema" : [
{
"AttributeName" : "Sales",
"KeyType" : "HASH"
},
{
"AttributeName" : "Artist",
"KeyType" : "RANGE"
}
],
"Projection" : {
"NonKeyAttributes" : ["Album","NumberOfSongs"],
"ProjectionType" : "INCLUDE"
},
"ProvisionedThroughput" : {
"ReadCapacityUnits" : "5",
"WriteCapacityUnits" : "5"
}
},
{
"IndexName" : "myGSI2",
"KeySchema" : [
{
"AttributeName" : "NumberOfSongs",
"KeyType" : "HASH"
},
{
"AttributeName" : "Sales",
"KeyType" : "RANGE"
}
],
"Projection" : {
"NonKeyAttributes" : ["Album","Artist"],
"ProjectionType" : "INCLUDE"
},
"ProvisionedThroughput" : {
"ReadCapacityUnits" : "5",
"WriteCapacityUnits" : "5"
}
}],
"LocalSecondaryIndexes" :[{
"IndexName" : "myLSI",
"KeySchema" : [
{
"AttributeName" : "Album",
"KeyType" : "HASH"
},
{
"AttributeName" : "Sales",
"KeyType" : "RANGE"
}
],
"Projection" : {
"NonKeyAttributes" : ["Artist","NumberOfSongs"],
"ProjectionType" : "INCLUDE"
}
}]
}
},
"OpenSearchServiceDomain": {
"Type":"AWS::OpenSearchService::Domain",
"Properties": {
"DomainName": "test",
"EngineVersion": "OpenSearch_1.0",
"ClusterConfig": {
"DedicatedMasterEnabled": true,
"InstanceCount": "2",
"ZoneAwarenessEnabled": true,
"InstanceType": "m3.medium.search",
"DedicatedMasterType": "m3.medium.search",
"DedicatedMasterCount": "3"
},
"EBSOptions":{
"EBSEnabled": true,
"Iops": "0",
"VolumeSize": "20",
"VolumeType": "gp2"
}
}
},
"ElasticsearchDomain": {
"Type":"AWS::Elasticsearch::Domain",
"Properties": {
"DomainName": "test",
"ElasticsearchVersion": "7.10",
"ElasticsearchClusterConfig": {
"DedicatedMasterEnabled": true,
"InstanceCount": "2",
"ZoneAwarenessEnabled": true,
"InstanceType": "m3.medium.elasticsearch",
"DedicatedMasterType": "m3.medium.elasticsearch",
"DedicatedMasterCount": "3"
},
"EBSOptions":{
"EBSEnabled": true,
"Iops": "0",
"VolumeSize": "20",
"VolumeType": "gp2"
}
}
}
}
}
```
I want to mock these resources on my local machine using LocalStack, but I am only able to create the DynamoDB table and the S3 bucket.
I don't know what the problem is with OpenSearch.
this is my docker-compose.yml
```
version: '2.1'
services:
localstack:
image: 'localstack/localstack:latest'
container_name: 'localstack_re'
ports:
- '4566-4620:4566-4620'
- '127.0.0.1:8055:8080'
environment:
- SERVICES=lambda,s3,apigateway,cloudformation,dynamodb,opensearch,sns
- DEBUG=1
- EDGE_PORT=4566
- DATA_DIR=/var/lib/localstack/data
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=${TMPDIR}
- LAMBDA_EXECUTOR=docker
- DYNAMODB_SHARE_DB=1
- DISABLE_CORS_CHECKS=1
- AWS_DDB_ENDPOINT=http://localhost:4566
volumes:
- '${TMPDIR:-/var/lib/localstack}:/var/lib/localstack'
- '/var/run/docker.sock:/var/run/docker.sock'
networks:
- 'local'
```
### Expected Behavior
Should create all the resources
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal s3 mb s3://mybucket
### Environment
```markdown
- OS: windows 11
- LocalStack: 1.3.0
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7282 | https://github.com/localstack/localstack/pull/7293 | ac196b9341faf5b14d43cb2617bf39064300afcb | 4abe1966d2e107670f5a951c8c5ced9758d8cfb1 | "2022-12-05T06:42:35Z" | python | "2022-12-07T08:04:08Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,268 | ["localstack/aws/api/scheduler/__init__.py", "localstack/services/providers.py", "localstack/services/scheduler/__init__.py", "localstack/services/scheduler/models.py", "localstack/services/scheduler/provider.py", "tests/integration/test_scheduler.py"] | feature request: Event Bridge Scheduler | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
Support EventBridge Scheduler and the CFN resources for the Scheduler (Schedule, ScheduleGroup).
https://aws.amazon.com/eventbridge/scheduler/
### 🧑💻 Implementation
Extend the scheduled-rules support to one-time schedules and the new API/SDK.
### Anything else?
https://aws.amazon.com/eventbridge/scheduler/ | https://github.com/localstack/localstack/issues/7268 | https://github.com/localstack/localstack/pull/8754 | 8602269b9ee715fb10cc27bcaf19de09cad590ad | d0c0258f22697cb5024fda86b7fa28e2c5f614a8 | "2022-11-30T19:43:20Z" | python | "2023-07-31T07:25:31Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,257 | ["doc/third-party-software-tools/README.md", "localstack/services/kinesis/kinesis_starter.py", "tests/integration/awslambda/test_lambda_integration_kinesis.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3_cors.py", "tests/integration/test_logs.py", "tests/unit/conftest.py", "tests/unit/test_apigateway.py"] | LocalStack v1.3 Deprecation Notice | Hello Everyone — We have launched a minor release of LocalStack with v1.3. Some significant deprecations are coming up, and we would like to use this issue to keep you updated.
## What's changed?
With our preparation for the v2.0 release, the v1.3 release will feature the following deprecations across our AWS & LocalStack-specific features. We also recommend you migrate your LocalStack setup before the next major release to make your update as seamless as possible!
We will merge the below changes into the master on the 30th of November. At this point, the changes will end up in the latest tag of the image. The tagged release will be created on the 1st of December. We recommend you to follow up with our release PR to check all of the changes coming with this release: [https://github.com/localstack/localstack/pull/7197](https://github.com/localstack/localstack/pull/7197)
### Deprecating Light & Full image
We have decided to deprecate Light, Full & Big-Data images. From now on, we will have only two images:
- Community users will use `localstack` image
- Pro users will use `localstack-pro` image
The current `localstack-light` and `localstack-full` images will be deprecated, and LocalStack will show a deprecation warning if either of the images is being used. We also intend to move away from the BigData image with the v2.0 release, and we would currently feature an opt-in BigData Mono Container which is not yet the default (more on our official v1.3 release notes!).
### Removal of legacy SQS provider
The legacy SQS provider has been deprecated and is not the default anymore with the v1.0 release. The old SQS provider has been removed, and if you are using `PROVIDER_OVERRIDE_SQS=”legacy”` or `“legacy_pro”` environment variable, your LocalStack setup will break. We recommend you migrate to the new SQS provider.
### Removal of Legacy API Gateway Provider
The legacy API Gateway provider has been deprecated and is not the default anymore with the v1.0 release. The old API Gateway and API Gateway v2 provider has been removed, and if you are using the following environment variables, your LocalStack setup will break:
- `PROVIDER_OVERRIDE_APIGATEWAY="legacy"` or `"legacy_pro"`
- `PROVIDER_OVERRIDE_APIGATEWAYV2="legacy"` or `"legacy_pro"`
We recommend you migrate to the new API Gateway provider.
### Removal of Legacy Kinesis Provider
The legacy Kinesis provider has been deprecated and is not the default anymore with the v1.0 release. The old Kinesis provider has been removed, and if you use `PROVIDER_OVERRIDE_KINESIS="legacy"` or `"legacy_pro"` environment variables, your LocalStack setup will break. Using `KINESIS_PROVIDER="kinesalite"` will not have any effect. We recommend you migrate to the new Kinesis provider.
### Deprecating Legacy IAM Enforcement
The old IAM provider has been deprecated with 1.3. Due to this, the `LEGACY_IAM_ENFORCEMENT` environment variable is deprecated and will be removed in the v2.0 release. This deprecation only affects Pro users!
### Deprecating `SYNCHRONOUS_*_EVENTS`
The following `SYNCHRONOUS_*_EVENTS` configuration variables will be deprecated:
```
SYNCHRONOUS_SNS_EVENTS
SYNCHRONOUS_SQS_EVENTS
SYNCHRONOUS_API_GATEWAY_EVENTS
SYNCHRONOUS_KINESIS_EVENTS
SYNCHRONOUS_DYNAMODB_EVENTS
```
We will remove them in the v2.0 release; hence, it's not recommended to use them anymore!
### Deprecating `USE_SINGLE_REGION` and `DEFAULT_REGION`
The `USE_SINGLE_REGION` and `DEFAULT_REGION` configuration variables have been deprecated. We will remove them in the v2.0 release; hence it's not recommended to use them anymore!
### Deprecating `MOCK_UNIMPLEMENTED`
The `MOCK_UNIMPLEMENTED` configuration variable has been deprecated; it was previously used to return mock responses for some unimplemented operations. We will remove it in the v2.0 release; hence, it's not recommended to use it anymore!
### Deprecating `SKIP_INFRA_DOWNLOADS`
The `SKIP_INFRA_DOWNLOADS` configuration variable has been deprecated; it was previously used to disable some on-demand downloads of additional infrastructure. We will remove it in the v2.0 release; hence, it's not recommended to use it anymore!
### Deprecating legacy Init scripts
The `/docker-entrypoint-initaws.d` directory usage has now been deprecated. Pluggable initialization hooks in `/etc/localstack/init/<stage>.d` have replaced the legacy Init scripts, and the legacy scripts will be removed entirely in the v2.0 release.
### Deprecating root level non-AWS endpoints
We have deprecated root-level non-AWS endpoints, which we will remove entirely in the v2.0 release. These endpoints are not AWS specific but LocalStack internal endpoints (such as `health`). The deprecated endpoints and the new endpoints are:
| Deprecated endpoints | New endpoints |
| ------------------------- | ------------------------------ |
| `/health` | `/_localstack/health` |
| `/cloudwatch/metrics/raw` | `/_aws/cloudwatch/metrics/raw` |
The prefixed endpoints need to be used, while the deprecated endpoints will be removed entirely in the v2.0 release.
### Miscellaneous
In the previous release, we noted the deprecation of the old filesystem and persistence. Due to its continued usage, the warnings (previously displayed if used) will change to an error. If you start LocalStack with a volume on `/tmp/localstack` (indicating that you are using the old filesystem), it will not start if it’s not explicitly enabled. We will remove it entirely with the v2.0 release. | https://github.com/localstack/localstack/issues/7257 | https://github.com/localstack/localstack/pull/7261 | c882f297e5206d3b924921300b04234dae996c2f | ebbe5dd1fd3683ca49bd08190ac06efc196f4b16 | "2022-11-29T05:56:32Z" | python | "2022-11-30T15:12:04Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,236 | ["localstack/services/ses/models.py", "localstack/services/ses/provider.py", "tests/integration/test_ses.py"] | bug: _localstack/ses invalid destination schema | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
If we send an SES message via raw data, `Destination` is empty and the endpoint returns `[]`, e.g.:
```
Destination: [],
```
If we send an SES message via the non-raw-data fields, `Destination` contains values and is returned as a JSON object:
```
Destination: {
"ToAddresses": [
"Address1", "Address2"
]
}
```
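Until the schema is consistent, a caller can normalize both shapes (a hedged workaround sketch; the field names follow the payloads shown above):

```python
def normalize_destination(destination):
    # The endpoint returns [] for raw sends and a dict otherwise;
    # map both to the dict shape.
    if not destination:  # covers [] and {}
        return {"ToAddresses": []}
    return destination

print(normalize_destination([]))
print(normalize_destination({"ToAddresses": ["Address1", "Address2"]}))
```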
### Expected Behavior
To have a consistent schema
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
1. Send a SES Message using Raw Data
2. Send SES Message using non-Raw Data Fields.
Access <localstack>/_localstack/ses
### Environment
```markdown
- OS: MacOS
- LocalStack: 1.1.0
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7236 | https://github.com/localstack/localstack/pull/7388 | 2f53ce376f31d6a918c056911f7bd109f788adca | 29b7283a542bdb21c55e472e1bf60cf954defae8 | "2022-11-23T11:52:57Z" | python | "2022-12-28T07:13:48Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,216 | ["localstack/services/kms/models.py"] | bug: unable to verify AWS KMS asymmetric key signatures generated by localstack locally with OpenSSL | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
```
unable to load Public Key
4343219756:error:0DFFF0A8:asn1 encoding routines:CRYPTO_internal:wrong tag:/AppleInternal/Library/BuildRoots/a0876c02-1788-11ed-b9c4-96898e02b808/Library/Caches/com.apple.xbs/Sources/libressl/libressl-2.8/crypto/asn1/tasn_dec.c:1144:
4343219756:error:0DFFF03A:asn1 encoding routines:CRYPTO_internal:nested asn1 error:/AppleInternal/Library/BuildRoots/a0876c02-1788-11ed-b9c4-96898e02b808/Library/Caches/com.apple.xbs/Sources/libressl/libressl-2.8/crypto/asn1/tasn_dec.c:317:Type=X509_ALGOR
4343219756:error:0DFFF03A:asn1 encoding routines:CRYPTO_internal:nested asn1 error:/AppleInternal/Library/BuildRoots/a0876c02-1788-11ed-b9c4-96898e02b808/Library/Caches/com.apple.xbs/Sources/libressl/libressl-2.8/crypto/asn1/tasn_dec.c:646:Field=algor, Type=X509_PUBKEY
```
I followed these steps "https://aws.amazon.com/fr/blogs/security/how-to-verify-aws-kms-asymmetric-key-signatures-locally-with-openssl/".
### Expected Behavior
A file `inst.pem` should be generated without errors.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
version: '3.2'
networks:
netapp:
services:
localstack:
image: localstack/localstack:1.2
environment:
AWS_ENDPOINT: "http://localstack:4566"
SERVICES: lambda,apigateway,iam,s3,dynamodb,sts,cloudwatch,events,kms,ssm,kinesis,logs,sns,sqs,secretsmanager
LAMBDA_EXECUTOR: docker
DOCKER_HOST: unix:///var/run/docker.sock
LAMBDA_CONTAINER_REGISTRY: "lambci/lambda"
LAMBDA_REMOTE_DOCKER: "true"
LAMBDA_DOCKER_NETWORK: netapp
HOSTNAME_EXTERNAL: localstack
EDGE_PORT: 4566
DEBUG: 1
ports:
- 4566:4566
volumes:
- /var/run/docker.sock:/var/run/docker.sock
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
1. Run ```AWS_REGION=eu-west-3 AWS_ACCESS_KEY_ID=fake AWS_SECRET_ACCESS_KEY=fake aws --endpoint-url="http://localhost:4566" kms create-key --customer-master-key-spec="RSA_2048" --key-usage="SIGN_VERIFY" --description="test1839"```
2. Take the value of "KeyMetadata.KeyId"
3. Run ```AWS_REGION=eu-west-3 AWS_ACCESS_KEY_ID=fake AWS_SECRET_ACCESS_KEY=fake aws --endpoint-url="http://localhost:4566" kms get-public-key --key-id [keyid] --output text --query PublicKey | base64 -d > inst.der```
4. Run ```openssl rsa -pubin -inform DER -outform PEM -in inst.der -pubout -out inst.pem```
Then you obtain the error shown under Current Behavior.
These steps follow "https://aws.amazon.com/fr/blogs/security/how-to-verify-aws-kms-asymmetric-key-signatures-locally-with-openssl/".
I am running LocalStack 1.2.0 on arm64.
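OpenSSL's `wrong tag` error means the decoded bytes do not start with the ASN.1 SEQUENCE tag (`0x30`) that a DER-encoded SubjectPublicKeyInfo must begin with. A quick stdlib sanity check on the base64 output of step 3 (a diagnostic sketch, not a full ASN.1 parser):

```python
import base64

def looks_like_spki_der(b64_key):
    der = base64.b64decode(b64_key)
    # A DER SubjectPublicKeyInfo always starts with SEQUENCE (0x30).
    return len(der) > 0 and der[0] == 0x30

# The first bytes of a typical RSA-2048 public key pass the check:
print(looks_like_spki_der(base64.b64encode(b"\x30\x82\x01\x22").decode()))
```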
### Environment
```markdown
- OS: macOs Montery 12.6.1 (MacBook Air (M1, 2020)
- Chip: Apple M1
- Memory: 16 GB
- LocalStack: 1.2.0
```
### Anything else?
It works against the real AWS service.
We started to investigate when the error "x509: malformed tbs certificate" appeared in the logs of our Go application.
| https://github.com/localstack/localstack/issues/7216 | https://github.com/localstack/localstack/pull/7856 | c24a2606f7a8303afab6d185960c20642b563f37 | eb15fd71c20d2b0f66f0949d0f1e67db293f8561 | "2022-11-21T17:58:18Z" | python | "2023-03-15T05:44:52Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,203 | ["localstack/aws/handlers/auth.py", "localstack/config.py", "localstack/services/dynamodb/provider.py", "localstack/services/kinesis/kinesis_mock_server.py", "localstack/services/kinesis/kinesis_starter.py", "localstack/services/kinesis/provider.py", "localstack/utils/aws/aws_stack.py", "localstack/utils/aws/queries.py", "tests/integration/test_dynamodb.py", "tests/integration/test_multi_accounts.py"] | bug: Kinesis not namespacing based on account ID and region | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Kinesis does not create resources in the correct namespace.
### Expected Behavior
Resources are created in the correct namespace and have the appropriate account ID and region in ARNs.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
$ AWS_ACCESS_KEY_ID=888877776666 \
AWS_DEFAULT_REGION=eu-central-2 \
awslocal kinesis create-stream --stream-name foo1 --shard-count 1
$ awslocal kinesis describe-stream --stream-name foo1
{
"StreamDescription": {
"Shards": [
{
"ShardId": "shardId-000000000000",
"HashKeyRange": {
"StartingHashKey": "0",
"EndingHashKey": "340282366920938463463374607431768211455"
},
"SequenceNumberRange": {
"StartingSequenceNumber": "49635293216433304706025617534454680056322001902963261442"
}
}
],
"StreamARN": "arn:aws:kinesis:us-east-1:000000000000:stream/foo1", # <<<<<<< BAD ARN
"StreamName": "foo1",
"StreamStatus": "ACTIVE",
"RetentionPeriodHours": 24,
"EnhancedMonitoring": [
{
"ShardLevelMetrics": []
}
],
"EncryptionType": "NONE",
"KeyId": null,
"StreamCreationTimestamp": 1668766849.927
}
}
```
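The expectation is that the stream ARN embeds the caller's region and account ID rather than the defaults. A small sketch of building and comparing such an ARN (illustrative only; the account ID is the `AWS_ACCESS_KEY_ID` used above):

```python
def stream_arn(region, account_id, stream_name):
    return f"arn:aws:kinesis:{region}:{account_id}:stream/{stream_name}"

expected = stream_arn("eu-central-2", "888877776666", "foo1")
returned = "arn:aws:kinesis:us-east-1:000000000000:stream/foo1"
print(expected)
print(expected == returned)  # False: region and account are not namespaced
```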
### Environment
```markdown
- OS: Ubuntu 22.04
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7203 | https://github.com/localstack/localstack/pull/7230 | 61e8c1e33c2d85ab4d64e910a33953332e791be9 | c1278ddd7a64c38a14d944b41ce257a83b9cefda | "2022-11-18T10:25:27Z" | python | "2022-12-08T13:15:19Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,198 | ["localstack/aws/api/ram/__init__.py", "localstack/services/providers.py", "localstack/services/ram/__init__.py", "localstack/services/ram/provider.py", "tests/aws/services/ram/__init__.py", "tests/aws/services/ram/test_ram.py", "tests/aws/services/ram/test_ram.snapshot.json"] | feature request: AWS Resource Access Manager | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
It would be great if localstack supported AWS Resource Access Manager. This would allow for testing multi-account AWS Transit Gateway setups, for example.
### 🧑💻 Implementation
_No response_
### Anything else?
Related: https://github.com/localstack/localstack/issues/7041 | https://github.com/localstack/localstack/issues/7198 | https://github.com/localstack/localstack/pull/9161 | 2830e2f621b8d49617fe1d1d9e3c6207648305e7 | 3c2b0af1eb7414f24335bea96c325437141e19a4 | "2022-11-17T13:37:30Z" | python | "2023-09-20T04:18:40Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,188 | ["localstack/services/apigateway/integration.py", "localstack/services/apigateway/templates.py", "tests/integration/test_apigateway.py", "tests/unit/test_apigateway.py", "tests/unit/test_templating.py"] | bug: VTL request templates not handled properly | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The VTL request template is not properly processed.
Testing with the sample project provided at https://github.com/localstack/localstack-terraform-samples/tree/master/apigateway-lambda-velocity, several request parameters, such as `path` and `identity`, are not properly parsed.
For a request like this:
```
curl -X POST "0naclskncu.execute-api.localhost.localstack.cloud:4566/local/test" -H 'content-type: application/json' -d '{ "greeter": "cesar" }'
```
the result is:
```
{
"body": {"greeter": "cesar"},
"method": "POST",
"principalId": "",
"stage": "local",
"cognitoPoolClaims" : {
"sub": ""
},
"enhancedAuthContext": {
}
,
"headers": {
}
,
"query": {
}
,
"path": {
""proxy"":
""test""
}
,
"identity": {
""accountId"":
"0"
,
""sourceIp"":
""172.17.0.1""
,
""userAgent"":
""curl/7.84.0""
}
,
"stageVariables": {
}
,
"requestPath": "/{proxy+}"
}
```
Notice the extra double quotes on the `path` and `identity` values, which result in an invalid JSON document.
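The doubled quotes are what you get when a template wraps a value in quotes after the value has already been serialized with quotes. A minimal reproduction of the difference (a sketch of the failure mode, not LocalStack's template engine):

```python
import json

already_quoted = '"test"'  # value serialized once already
broken = '{ "proxy": "%s" }' % already_quoted  # naive re-quoting -> ""test""
correct = json.dumps({"proxy": "test"})        # serialize exactly once

try:
    json.loads(broken)
    print("broken payload parsed")
except ValueError:
    print("broken payload rejected")  # this branch runs
print(json.loads(correct))
```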
From the localstack logs:
```
botocore.errorfactory.UnsupportedMediaTypeException: An error occurred (UnsupportedMediaTypeException) when calling the Invoke operation: The payload is not JSON:
{
"body": {"greeter": "cesar"},
"method": "POST",
"principalId": "",
"stage": "local",
"cognitoPoolClaims" : {
"sub": ""
},
"enhancedAuthContext": {
}
,
"headers": {
}
,
"query": {
}
,
"path": {
""proxy"":
""test""
}
,
"identity": {
""accountId"":
"0"
,
""sourceIp"":
""172.17.0.1""
,
""userAgent"":
""curl/7.84.0""
}
,
"stageVariables": {
}
,
"requestPath": "/{proxy+}"
}
2022-11-16T09:47:17.833 INFO --- [ asgi_gw_0] localstack.request.http : POST /local/test => 400; Request(b'{ "greeter": "cesar" }', headers={'Host': '0naclskncu.execute-api.localhost.localstack.cloud:4566', 'User-Agent': 'curl/7.84.0', 'Accept': '*/*', 'content-type': 'application/json', 'Content-Length': '22', 'x-localstack-tgt-api': 'apigateway'}); Response(b'{"Type": "User", "message": "Error invoking integration for API Gateway ID \'0naclskncu\': An error occurred (UnsupportedMediaTypeException) when calling the Invoke operation: The payload is not JSON: \\n\\n \\n {\\n \\"body\\": {\\"greeter\\": \\"cesar\\"},\\n \\"method\\": \\"POST\\",\\n \\"principalId\\": \\"\\",\\n \\"stage\\": \\"local\\",\\n\\n \\"cognitoPoolClaims\\" : {\\n\\n \\"sub\\": \\"\\"\\n },\\n\\n \\"enhancedAuthContext\\": {\\n }\\n ,\\n\\n \\"headers\\": {\\n }\\n ,\\n\\n \\"query\\": {\\n }\\n ,\\n\\n \\"path\\": {\\n \\n \\"\\"proxy\\"\\":\\n \\"\\"test\\"\\"\\n }\\n ,\\n\\n \\"identity\\": {\\n \\n \\"\\"accountId\\"\\":\\n \\"0\\"\\n , \\n \\"\\"sourceIp\\"\\":\\n \\"\\"172.17.0.1\\"\\"\\n , \\n \\"\\"userAgent\\"\\":\\n \\"\\"curl/7.84.0\\"\\"\\n }\\n ,\\n\\n \\"stageVariables\\": {\\n }\\n ,\\n\\n \\"requestPath\\": \\"/{proxy+}\\"\\n }\\n", "__type": "InvalidRequest"}', headers={'Content-Type': 'text/html; charset=utf-8', 'x-amzn-errortype': 'InvalidRequest', 'Content-Length': '1108', 'Connection': 'close'})
```
### Expected Behavior
The template should render a proper JSON document
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
LS_LOG=trace-internal localstack start
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
- localstack-terraform-samples/apigateway-lambda-velocity $ terraform init; terraform plan; terraform apply --auto-approve
- localstack-terraform-samples/apigateway-lambda-velocity $ curl -X POST "9b2k2o2wd1.execute-api.localhost.localstack.cloud:4566/local/test" -H 'content-type: application/json' -d '{ "greeter": "cesar" }'
*** The APIGW ID is taken from the terraform apply output
### Environment
```markdown
- OS: Macos 13.0.1
- LocalStack: 1.2.1.dev
- Docker Desktop: v4.13.1
```
### Anything else?
Searching for this velocity bug, I encountered bug https://github.com/localstack/localstack/issues/5587, which is resolved, but the bug is actually still there. | https://github.com/localstack/localstack/issues/7188 | https://github.com/localstack/localstack/pull/7226 | 39ec690149ec78408c979c49af8fe0d4447c5824 | d3b89ee46a5587cba7f8441a73b4d7ca561b9024 | "2022-11-16T10:03:43Z" | python | "2022-12-24T13:24:56Z"
closed | localstack/localstack | https://github.com/localstack/localstack | 7,177 | ["localstack/config.py", "localstack/services/s3/provider.py", "localstack/services/s3/utils.py", "localstack/testing/pytest/fixtures.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"] | bug: Non-existent KMS key does not return an error when passed through S3 copyObject and putObject | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
copyObject and putObject (and perhaps other requests) requests are accepted and do not return an error when an SSE KMS key is specified.
### Expected Behavior
The error which should be returned is KMS.NotFoundException.
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
Localstack testcontainers
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
Call copyObject/putObject using the latest AWS SDK V2 and specify an ssekmsKeyId.
```java
final var copyObjectRequestBuilder =
CopyObjectRequest.builder()
.serverSideEncryption(ServerSideEncryption.AWS_KMS)
.ssekmsKeyId("NON-EXISTENT KMS KEY")
.sourceBucket(sourceBucketName)
.sourceKey(sourceKey)
.destinationBucket(destinationBucket)
.destinationKey(destinationKey);
final var copyObjectResponse = sourceRegionS3Client.copyObject(copyObjectRequestBuilder.build()); // Error should be thrown here but localstack does not return an error
```
### Environment
```markdown
- OS: MacOS 13.0
- LocalStack: Latest (v1.2.0)
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7177 | https://github.com/localstack/localstack/pull/7448 | 590de96b3dd65e349709300b668e70f1f9664e99 | 3ff45bfc11ff63b85e8020859484d967d631230a | "2022-11-15T02:11:16Z" | python | "2023-01-13T20:57:19Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,160 | ["localstack/services/apigateway/helpers.py", "localstack/services/apigateway/patches.py", "tests/integration/test_apigateway.py"] | bug: Api-Gateway V1 404 response when using custom id | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When adding a custom id to an apigateway V1 via tags in aws-cdk the id will be applied and deploying returns the url including the custom id (https://<custom-id>.execute-api.localhost.localstack.cloud:4566/prod/). Querying this url however will return a 404 response.
The url generated when no custom id is provided works without issue.
### Expected Behavior
The custom id is applied and querying works with the url including the custom id.
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
localstack start -d
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
*AWS CDK (python)*
```python
# test-stack.py
from aws_cdk import (
Stack,
aws_lambda,
aws_apigateway as api_gateway,
Tags,
)
from pathlib import Path
from constructs import Construct
class TestStack(Stack):
def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
super().__init__(scope, construct_id, **kwargs)
my_lambda = aws_lambda.Function(
self,
"test-lambda",
code=aws_lambda.Code.from_asset("lambdas"),
runtime=aws_lambda.Runtime.PYTHON_3_9,
handler="test_lambda.handle",
)
api = api_gateway.RestApi(
self,
"Api",
rest_api_name="apiName",
)
Tags.of(api).add(
"_custom_id_", "myCustomId"
) # makes api id and therefore url static
api.root.add_method(
"GET", api_gateway.LambdaIntegration(my_lambda) # type: ignore
)
```
```python
# lambdas/test_lambda.py
def handle(event, context):
return {"statusCode": 200, "body": {}}
```
Deploy outputs: https://myCustomId.execute-api.localhost.localstack.cloud:4566/prod/
Querying this url returns 404

### Environment
```markdown
- OS: Windows 10
- LocalStack: latest (10.11.2022)
- [email protected]
```
### Anything else?
I found [#4499](https://github.com/localstack/localstack/issues/4499), but from the deploy output and awslocal it seems that the Api Gateway should have been created with the custom id but cannot be queried.
---
Result without custom id:

| https://github.com/localstack/localstack/issues/7160 | https://github.com/localstack/localstack/pull/7221 | 0aa9ffb17d6e0e76b0cab5f7f9ddca45bc46c685 | 231547f3ce14e6d6ca2a5787f4204840c33383c7 | "2022-11-10T16:02:11Z" | python | "2022-11-25T08:10:23Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,156 | ["localstack/cli/localstack.py"] | feature request: LocalStack CLI Update, flag all, should ignore dangling images | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
Command `localstack update all` should ignore the dangling `localstack/localstack:<none>` images in the `docker images` list.
I.e, images with tag `<none>`.

### 🧑💻 Implementation
_No response_
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7156 | https://github.com/localstack/localstack/pull/7691 | 2e16c94b9fa11df38e35db9771112922797ad0c7 | 029285d5f6726abdece9948a6efc91daf7bfa00c | "2022-11-10T07:11:11Z" | python | "2023-02-15T21:43:55Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,154 | ["localstack/services/cloudwatch/provider.py", "tests/integration/test_cloudwatch.py", "tests/integration/test_cloudwatch.snapshot.json"] | bug: CloudWatch PutMetricData with arrays of values not working | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
PutMetricData with arrays of values does not work properly. Zero values seem to be published. PutMetricData with a single value works as expected. This happens with both the CLI and the SDK.
### Expected Behavior
PutMetricData with arrays of values should work as publishing a single value.
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack:1.2.0
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
awslocal cloudwatch put-metric-data --namespace test --metric-data MetricName=TestCounter,Values=10,1,Counts=1,1
awslocal cloudwatch get-metric-statistics --namespace test --metric-name TestCounter --statistics Maximum --period 60 --start-time 2022-11-9T00:00:00Z --end-time 2022-11-11T00:00:00Z
{
"Label": "TestCounter",
"Datapoints": [
{
"Timestamp": "2022-11-09T23:25:10+00:00",
"Maximum": 0.0,
"ExtendedStatistics": {}
},
{
"Timestamp": "2022-11-09T23:27:10+00:00",
"Maximum": 0.0,
"ExtendedStatistics": {}
}
]
}
```
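For reference, the statistics AWS would derive from that `Values`/`Counts` pair (a quick sanity computation, not LocalStack code):

```python
values = [10.0, 1.0]
counts = [1.0, 1.0]

sample_count = sum(counts)
total = sum(v * c for v, c in zip(values, counts))
stats = {
    "Maximum": max(values),      # 10.0, not the 0.0 LocalStack returns
    "Minimum": min(values),
    "Sum": total,
    "SampleCount": sample_count,
    "Average": total / sample_count,
}
print(stats)
```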
### Environment
```markdown
- OS: Mac Monterey
- LocalStack: 1.2.0
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7154 | https://github.com/localstack/localstack/pull/7166 | aa70eb04c52e3a7d030190ff72bf9c16530df252 | 68947c216ea3883c67f7a8e64756c0dfca4d4ac5 | "2022-11-09T23:31:53Z" | python | "2022-11-29T17:57:09Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,147 | ["localstack/services/sqs/provider.py", "tests/integration/test_sqs.py", "tests/integration/test_sqs.snapshot.json", "tests/unit/test_sqs.py"] | Message size limit should include message attributes | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
With AWS, you can't send messages where the total size of message body + message attributes are greater than 262,144 bytes.
Currently, you are taking into account the message body only.
```python
>>> client = boto3.client('sqs', aws_access_key_id='fake', aws_secret_access_key='fake', endpoint_url='http://localstack:4566')
...
>>> client.send_message(QueueUrl='http://localstack:4566/000000000000/foo', MessageBody=('x' * 262_144), MessageAttributes={ 'k': { 'DataType': 'String', 'StringValue': 'v' } })
# the message is successfully delivered
{'MD5OfMessageBody': '1566aa66d825eb4354d3e9533b753995', 'MD5OfMessageAttributes': '731170ec8e13273a4e68fdcc2abaf9b4', 'MessageId': '6d12f05f-173f-45f2-a561-848070cd1390', 'ResponseMetadata': {'RequestId': 'DL5GNQG3WJ3IUOGTRD49RCG7DQAL333ZZNBP7RL8IEHAZVPMA367', 'HTTPStatusCode': 200, 'HTTPHeaders': {'content-type': 'text/xml', 'content-length': '493', 'connection': 'close', 'access-control-allow-origin': '*', 'access-control-allow-methods': 'HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH', 'access-control-allow-headers': 'authorization,cache-control,content-length,content-md5,content-type,etag,location,x-amz-acl,x-amz-content-sha256,x-amz-date,x-amz-request-id,x-amz-security-token,x-amz-tagging,x-amz-target,x-amz-user-agent,x-amz-version-id,x-amzn-requestid,x-localstack-target,amz-sdk-invocation-id,amz-sdk-request', 'access-control-expose-headers': 'etag,x-amz-version-id', 'date': 'Tue, 08 Nov 2022 20:56:15 GMT', 'server': 'hypercorn-h11'}, 'RetryAttempts': 0}}
```
### Expected Behavior
With AWS I obtain an error because the message attributes are taken into account when computing the message size:
```python
>>> client.send_message(QueueUrl=MY_QUEUE_URL, MessageBody=('x' * 262_144), MessageAttributes={ 'k': { 'DataType': 'String', 'StringValue': 'v' } })
# fail
botocore.exceptions.ClientError: An error occurred (InvalidParameterValue) when calling the SendMessage operation: One or more parameters are invalid. Reason: Message must be shorter than 262144 bytes.
```
In fact, if I subtract the byte size of the attribute fields (`len('k')` + `len('String')` + `len('v')` = 8) from the message body I managed to send the message:
```python
>>> client.send_message(QueueUrl=MY_QUEUE_URL, MessageBody=('x' * 262_136), MessageAttributes={ 'k': { 'DataType': 'String', 'StringValue': 'v' } })
# the message is successfully delivered
```
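The limit check described above can be sketched as follows (an illustrative helper, not LocalStack's implementation; per the SQS documentation the attribute name, data type, and value all count toward the 262,144-byte limit):

```python
SQS_MAX_MESSAGE_BYTES = 262_144

def total_message_size(body, attributes):
    """Approximate the size SQS checks: UTF-8 bytes of the body plus,
    per attribute, the bytes of its name, DataType, and value."""
    size = len(body.encode("utf-8"))
    for name, attr in attributes.items():
        size += len(name.encode("utf-8"))
        size += len(attr["DataType"].encode("utf-8"))
        value = attr.get("StringValue", attr.get("BinaryValue", b""))
        size += len(value) if isinstance(value, bytes) else len(value.encode("utf-8"))
    return size

attrs = {"k": {"DataType": "String", "StringValue": "v"}}
print(total_message_size("x" * 262_144, attrs))  # 262152: over the limit
print(total_message_size("x" * 262_136, attrs))  # 262144: exactly at the limit
```

With this accounting, the 262,136-byte body plus the 8 attribute bytes lands exactly on the limit, matching the behaviour observed against real AWS.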
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```python
import boto3
client = boto3.client('sqs', aws_access_key_id='fake', aws_secret_access_key='fake', endpoint_url='http://localstack:4566')
client.create_queue(QueueName='foo')
client.send_message(QueueUrl='http://localstack:4566/000000000000/foo', MessageBody=('x' * 262_144), MessageAttributes={ 'x': { 'DataType': 'String', 'StringValue': 'x' } })
```
### Environment
```markdown
- OS: Mac OS X Monterey
- LocalStack: 1.1.1
```
### Anything else?
If this bug is confirmed, I'd like to contribute by making a PR 😃 | https://github.com/localstack/localstack/issues/7147 | https://github.com/localstack/localstack/pull/7168 | 06be32c1dd92822cd7a3f4632bd3402fb929d42f | 86db835beb39f1e86876ea9f61fd13c031651fcc | "2022-11-08T21:11:27Z" | python | "2022-11-16T12:47:30Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,146 | ["localstack/services/dynamodb/utils.py", "localstack/utils/aws/aws_stack.py", "tests/integration/test_dynamodb.py", "tests/integration/test_dynamodb.snapshot.json"] | bug: DynamoDB Streams update returns base64 encoded binary value | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Updating a dynamodb item updates other binary fields in the stream into their base64 encoded form.
### Expected Behavior
Updating the dynamodb item should not affect other fields in the stream.
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### Start LocalStack
```bash
# docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack
...
LocalStack version: 1.2.1.dev
LocalStack build date: 2022-11-04
LocalStack build git hash: a4d60b3f
```
#### Run the following
```python
from time import sleep
from uuid import uuid4
import boto3
ENDPOINT_URL = "http://localhost.localstack.cloud:4566"
def wait_for_table_active(dynamo_client, table_name):
while True:
response = dynamo_client.describe_table(TableName=table_name)
table_status = response.get('Table', {}).get('TableStatus')
if (table_status == 'ACTIVE'):
return response
sleep(1)
def poll_records(streams_client, shard_iterator):
while True:
response = streams_client.get_records(ShardIterator=shard_iterator)
records = response.get('Records')
shard_iterator = response.get('NextShardIterator')
if records:
return (shard_iterator, records)
def main():
table_name = 'StreamBug-' + str(uuid4())
# aws clients - switch these to run on aws
# dynamo_client = boto3.client('dynamodb')
# streams_client = boto3.client('dynamodbstreams')
dynamo_client = boto3.client('dynamodb', endpoint_url=ENDPOINT_URL)
streams_client = boto3.client('dynamodbstreams', endpoint_url=ENDPOINT_URL)
# create DynamoDB table
dynamo_client.create_table(
TableName=table_name,
KeySchema=[{
'AttributeName': '__pkey',
'KeyType': 'HASH'
}],
AttributeDefinitions=[{
'AttributeName': '__pkey',
'AttributeType': 'S'
}],
BillingMode='PAY_PER_REQUEST',
StreamSpecification={
'StreamEnabled': True,
'StreamViewType': 'NEW_IMAGE',
}
)
table = wait_for_table_active(
dynamo_client=dynamo_client,
table_name=table_name
)
    # There is an issue where the table is marked as ACTIVE; however, issuing a
    # put-item immediately does not go into the stream.
sleep(10)
# put a dummy item to be used with the test
dynamo_client.put_item(
TableName=table_name,
Item={
'__pkey': {'S': 'test'},
'version': {'N': '1'},
'data': {'B': b'\x90'},
}
)
get_item_response = dynamo_client.get_item(
TableName=table_name,
Key={'__pkey': {'S': 'test'}}
)
stream_arn = table['Table']['LatestStreamArn']
shard_id = streams_client.describe_stream(StreamArn=stream_arn)[
'StreamDescription']['Shards'][0]['ShardId']
shard_iterator = streams_client.get_shard_iterator(
StreamArn=stream_arn,
ShardId=shard_id,
ShardIteratorType='TRIM_HORIZON'
)['ShardIterator']
(shard_iterator, get_records) = poll_records(
streams_client=streams_client, shard_iterator=shard_iterator)
dynamo_client.update_item(
TableName=table_name,
Key={'__pkey': {'S': 'test'}},
UpdateExpression='SET version=:v',
ExpressionAttributeValues={':v': {'N': '2'}}
)
get_item_response2 = dynamo_client.get_item(
TableName=table_name,
Key={'__pkey': {'S': 'test'}}
)
(shard_iterator, get_records_after_update) = poll_records(
streams_client=streams_client, shard_iterator=shard_iterator)
# cleanup
dynamo_client.delete_table(TableName=table_name)
print('GetItem =>', get_item_response['Item'])
print('Stream Records =>', get_records[0]['dynamodb']['NewImage'])
print('GetItem After Update =>', get_item_response2['Item'])
print('Stream Records After Update =>',
get_records_after_update[0]['dynamodb']['NewImage'])
assert get_item_response['Item']['data'] == get_records[0]['dynamodb']['NewImage']['data']
assert get_item_response['Item']['data'] == get_records_after_update[0]['dynamodb']['NewImage']['data']
if __name__ == "__main__":
main()
```
The script does the following:
- Puts a new item
- Gets the new item
- Gets the item from the dynamodb stream
- Updates the `version` field on the item. Note: the `data` binary field, which is not part of the update, breaks
- Gets the new item
- Gets the item from the dynamodb stream
Output:
```
GetItem => {'__pkey': {'S': 'test'}, 'data': {'B': b'\x90'}, 'version': {'N': '1'}}
Stream Records => {'__pkey': {'S': 'test'}, 'version': {'N': '1'}, 'data': {'B': b'\x90'}}
GetItem After Update => {'__pkey': {'S': 'test'}, 'data': {'B': b'\x90'}, 'version': {'N': '2'}}
Stream Records After Update => {'__pkey': {'S': 'test'}, 'data': {'B': b'kA=='}, 'version': {'N': '2'}}
```
In `Stream Records After Update` the `data` field is not correct.
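The garbled value is consistent with the raw byte being base64-encoded a second time; a quick check (illustrative, not part of the original report):

```python
import base64

raw = b"\x90"                 # the binary value stored via put_item
print(base64.b64encode(raw))  # b'kA==', matching the stream record after the update
```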
### Environment
```markdown
- OS: MacOS 12.5.1
- LocalStack: latest
```
### Anything else?
Relates to #6786 | https://github.com/localstack/localstack/issues/7146 | https://github.com/localstack/localstack/pull/7302 | 9998caeda377a7e1de9a5f4576d0b32797050614 | 2fb702e5dd8cf41d94c56c7204c26aa3bd72c3f2 | "2022-11-08T15:54:13Z" | python | "2022-12-09T16:59:18Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,126 | ["localstack/services/dynamodb/provider.py", "localstack/services/dynamodb/utils.py", "localstack/utils/aws/aws_stack.py", "tests/integration/test_multi_accounts.py"] | bug: DynamoDB not namespacing based on account ID and region | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
DynamoDB does not correctly namespace resources based on region and account ID.
For instance, using the same table name across accounts and regions throws an error.
### Expected Behavior
It must be possible to create resources with the same name in different regions and/or different accounts.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
$ AWS_ACCESS_KEY_ID=445500880268 AWS_DEFAULT_REGION=eu-central-1 \
awslocal dynamodb create-table --table-name CertificateTargets-local1 \
--key-schema AttributeName=target_uuid,KeyType=HASH \
--attribute-definitions AttributeName=target_uuid,AttributeType=S \
--provisioned-throughput ReadCapacityUnits=1000,WriteCapacityUnits=1000
$ AWS_ACCESS_KEY_ID=445500880268 AWS_DEFAULT_REGION=us-east-1 \
awslocal dynamodb create-table --table-name CertificateTargets-local1 \
--key-schema AttributeName=target_uuid,KeyType=HASH \
--attribute-definitions AttributeName=target_uuid,AttributeType=S \
--provisioned-throughput ReadCapacityUnits=1000,WriteCapacityUnits=1000
# ResourceInUseException!
$ AWS_ACCESS_KEY_ID=000000000000 \
awslocal dynamodb create-table --table-name CertificateTargets-local1 \
--key-schema AttributeName=target_uuid,KeyType=HASH \
--attribute-definitions AttributeName=target_uuid,AttributeType=S \
--provisioned-throughput ReadCapacityUnits=1000,WriteCapacityUnits=1000
# ResourceInUseException!
```
### Environment
```markdown
OS: Ubuntu 22.04
LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7126 | https://github.com/localstack/localstack/pull/7157 | 481e0c6de9e3689fa00d0cc989fa4474b18c8418 | bfb3c7e6784579c2e6ec0dcf940ea0a2bbdc76aa | "2022-11-04T11:25:36Z" | python | "2022-11-21T09:39:15Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,109 | ["localstack/services/sns/provider.py", "tests/integration/test_sns.py", "tests/integration/test_sns.snapshot.json"] | bug: InvalidParameterException when sending to SNS topic since version 1.2 | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I'm using localstack in my current build. However, since version 1.2 I get the following exception (Java 17 & Spring Boot 2.7.1):
```
com.amazonaws.services.sns.model.InvalidParameterValueException: The message attribute 'timestamp' has an invalid message attribute type, the set of supported type prefixes is Binary, Number, and String. (Service: AmazonSNS; Status Code: 400; Error Code: ParameterValueInvalid; Request ID: E8OZ22XIRX11DTY2PWOGI5FB55U5J0S11VC8YJK6ES9UKCVL0DY1; Proxy: null)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1862) ~[aws-java-sdk-core-1.12.132.jar:na]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1415) ~[aws-java-sdk-core-1.12.132.jar:na]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1384) ~[aws-java-sdk-core-1.12.132.jar:na]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1154) ~[aws-java-sdk-core-1.12.132.jar:na]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:811) ~[aws-java-sdk-core-1.12.132.jar:na]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779) ~[aws-java-sdk-core-1.12.132.jar:na]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753) ~[aws-java-sdk-core-1.12.132.jar:na]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713) ~[aws-java-sdk-core-1.12.132.jar:na]
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:695) ~[aws-java-sdk-core-1.12.132.jar:na]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:559) ~[aws-java-sdk-core-1.12.132.jar:na]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:539) ~[aws-java-sdk-core-1.12.132.jar:na]
at com.amazonaws.services.sns.AmazonSNSClient.doInvoke(AmazonSNSClient.java:3545) ~[aws-java-sdk-sns-1.12.132.jar:na]
at com.amazonaws.services.sns.AmazonSNSClient.invoke(AmazonSNSClient.java:3512) ~[aws-java-sdk-sns-1.12.132.jar:na]
at com.amazonaws.services.sns.AmazonSNSClient.invoke(AmazonSNSClient.java:3501) ~[aws-java-sdk-sns-1.12.132.jar:na]
at com.amazonaws.services.sns.AmazonSNSClient.executePublish(AmazonSNSClient.java:2475) ~[aws-java-sdk-sns-1.12.132.jar:na]
at com.amazonaws.services.sns.AmazonSNSClient.publish(AmazonSNSClient.java:2444) ~[aws-java-sdk-sns-1.12.132.jar:na]
at io.awspring.cloud.messaging.core.TopicMessageChannel.sendInternal(TopicMessageChannel.java:91) ~[spring-cloud-aws-messaging-2.4.0.jar:2.4.0]
at org.springframework.messaging.support.AbstractMessageChannel.send(AbstractMessageChannel.java:139) ~[spring-messaging-5.3.21.jar:5.3.21]
at org.springframework.messaging.support.AbstractMessageChannel.send(AbstractMessageChannel.java:125) ~[spring-messaging-5.3.21.jar:5.3.21]
at io.awspring.cloud.messaging.core.support.AbstractMessageChannelMessagingSendingTemplate.doSend(AbstractMessageChannelMessagingSendingTemplate.java:59) ~[spring-cloud-aws-messaging-2.4.0.jar:2.4.0]
at io.awspring.cloud.messaging.core.support.AbstractMessageChannelMessagingSendingTemplate.doSend(AbstractMessageChannelMessagingSendingTemplate.java:44) ~[spring-cloud-aws-messaging-2.4.0.jar:2.4.0]
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109) ~[spring-messaging-5.3.21.jar:5.3.21]
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:99) ~[spring-messaging-5.3.21.jar:5.3.21]
at com.polovyi.ivan.tutorials.service.PurchaseTransactionService.processRequest(PurchaseTransactionService.java:36) ~[classes/:na]
at com.polovyi.ivan.tutorials.controller.PurchaseTransactionController.acceptPurchaseTransaction(PurchaseTransactionController.java:17) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:568) ~[na:na]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) ~[spring-web-5.3.21.jar:5.3.21]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) ~[spring-web-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.3.21.jar:5.3.21]
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909) ~[spring-webmvc-5.3.21.jar:5.3.21]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:681) ~[tomcat-embed-core-9.0.64.jar:4.0.FR]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) ~[spring-webmvc-5.3.21.jar:5.3.21]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:764) ~[tomcat-embed-core-9.0.64.jar:4.0.FR]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-9.0.64.jar:9.0.64]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-5.3.21.jar:5.3.21]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.21.jar:5.3.21]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-5.3.21.jar:5.3.21]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.21.jar:5.3.21]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-5.3.21.jar:5.3.21]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.21.jar:5.3.21]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:360) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:890) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1787) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.64.jar:9.0.64]
at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
```
When using LocalStack 1.1 (and earlier versions) and leaving everything else the same, I don't get the exception.
The message header 'timestamp' is set by Spring messaging under the hood and is immutable, so there's no way to change it without using reflection or something similarly ugly. What I could do is use the aws-sdk directly.
However, I just wanted to mention the change in behaviour of LocalStack v1.2.
### Expected Behavior
I'd expect to get a 202/Accepted when the application is sending a message to the SNS topic.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
You can use the code from this project: https://github.com/polovyivan/spring-cloud-sns-topic-publisher
And only update the localstack version to 1.2
```
cd src/main/resources/docker-compose
docker-compose up
mvn clean spring-boot:run
```
Then send an empty http POST to http://localhost:8080/spring-cloud-sns-topic-publisher/purchase-transactions
### Environment
```markdown
- OS: macOS Montery 12.6
- LocalStack: 1.2
- Java: 17
- Spring boot: 2.7.1
- Maven: 3.8.1
- Docker: 20.10.17, build 100c701
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7109 | https://github.com/localstack/localstack/pull/7181 | 4aca6a77d79eb48b6b7ef2e8386e87859e4600c5 | 65230eba2acbe0e1860063f9ab5192e78e3bb1d1 | "2022-10-28T07:50:03Z" | python | "2022-11-18T09:09:32Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,078 | ["setup.cfg"] | feature request: update the requests dependence | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
Please update the requests dependency to support the latest 2.x versions; otherwise I get the following warnings or errors.
```
localstack 1.2.0 requires requests<2.26,>=2.20.0, but you have requests 2.28.1 which is incompatible.
localstack-ext 1.2.0 requires requests<2.26,>=2.20.0, but you have requests 2.28.1 which is incompatible.
```
### 🧑💻 Implementation
Update the requests dependency to something like `requests<3.0,>=2.20.0`.
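For illustration, the conflict pip reports is just a version-range check; a minimal sketch (plain numeric tuple comparison, ignoring PEP 440 pre-release semantics) of why 2.28.1 fails the current pin but would satisfy the proposed one:

```python
def in_range(version: str, lower: str, upper: str) -> bool:
    # Compare dotted versions numerically; real resolvers implement full PEP 440.
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(lower) <= parse(version) < parse(upper)

print(in_range("2.28.1", "2.20.0", "2.26.0"))  # False: current pin rejects it
print(in_range("2.28.1", "2.20.0", "3.0.0"))   # True: proposed pin accepts it
```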
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7078 | https://github.com/localstack/localstack/pull/7220 | 51b97888bb11fea19cf841b1744f22a1ac68514d | ab870ed811b4efd8d483c5b04a284f71634c8ffd | "2022-10-24T03:04:35Z" | python | "2022-11-22T20:09:54Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,060 | ["localstack/services/apigateway/invocations.py", "localstack/services/apigateway/provider.py", "localstack/services/cloudformation/models/apigateway.py", "tests/integration/cloudformation/resources/test_apigateway.py", "tests/integration/cloudformation/resources/test_apigateway.snapshot.json", "tests/integration/templates/apigateway_models.json", "tests/integration/templates/template35.yaml"] | bug: ApiGateway Model schema stored in Wrong JSON format | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The API Gateway Model created by a CDK project stores the JSON schema with single quotes instead of double quotes, which leads to an error when parsing the JSON object.
You can refer to this article for the steps:
https://dev.to/dvddpl/reduce-lambda-invocations-and-code-boilerplate-by-adding-a-json-schema-validator-to-your-api-gateway-15af
The same setup works fine on a real AWS account.
The attached image shows how LocalStack stores the JSON object with single quotes; I tried to edit it manually through the UI, but it reverted to single quotes.

### Expected Behavior
Store the JSON schema in valid JSON format (double quotes).
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
localstack start
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
cdklocal deploy
### Environment
```markdown
- OS:OSX
- LocalStack: 1.0.4
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7060 | https://github.com/localstack/localstack/pull/8629 | 6ee7a4f4a591d0253e86f55d9a0bd2d6dc0b2b74 | 9f6af09d24c155f48eb87f08a10adc4d4a7937fb | "2022-10-20T23:26:15Z" | python | "2023-07-06T20:34:04Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 7,000 | ["localstack/packages/core.py", "localstack/services/kinesis/packages.py", "tests/unit/packages/test_core.py"] | bug: LocalStack 1.2.0 - Unable to start when behind corporate proxy w/ Custom CA | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Currently it's looping this error
```
LocalStack version: 1.2.0
LocalStack build date: 2022-10-07
LocalStack build git hash: 9be2a7aa
Error starting infrastructure: Installation of kinesis-mock failed. Traceback (most recent call last):
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/urllib3/connection.py", line 414, in connect
self.sock = ssl_wrap_socket(
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/usr/local/lib/python3.10/ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "/usr/local/lib/python3.10/ssl.py", line 1071, in _create
self.do_handshake()
File "/usr/local/lib/python3.10/ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "/opt/code/localstack/.venv/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: MyHTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /repos/etspaceman/kinesis-mock/releases/tags/0.2.5 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')))
```
### Expected Behavior
Every call should work properly.
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
Test Containers Go
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
DynamoDb Connection:
```
Error: Received unexpected error:
operation error DynamoDB: CreateTable, exceeded maximum number of attempts, 3, https response error StatusCode: 0, RequestID: , request send failed, Post "http://localhost:53740/": EOF
```
### Environment
```markdown
- OS: MacOS
- LocalStack: 1.2.0
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/7000 | https://github.com/localstack/localstack/pull/7006 | dcc04b6d0a89f45a92c11fdb40f7b14ceeac0b72 | d07b7b32f1a62fd74eaafd6a4c673b1ea3d06a46 | "2022-10-10T15:43:36Z" | python | "2022-10-11T11:17:58Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,995 | ["localstack/services/awslambda/packages.py"] | bug: java.lang.IllegalArgumentException: argument type mismatch with RequestHandler | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The following request handler:
```java
public class LegalDocPublisher implements RequestHandler<SQSEvent, Void> {
@Override
public Void handleRequest(final SQSEvent event, final Context context) {
return null;
}
}
```
causes
```
2022-10-10T06:38:23.362 INFO --- [ Thread-244] l.s.a.lambda_executors : Error executing Lambda "arn:aws:lambda:us-east-2:000000000000:function:LegalDocPublisher": InvocationException: Lambda process returned error status code: 1. Result: . Output:
Exception in thread "main" java.lang.IllegalArgumentException: argument type mismatch
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at cloud.localstack.LambdaExecutor.main(LambdaExecutor.java:117) File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 1423, in do_execute
execute_result = lambda_function_callable(inv_context.event, context)
File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 579, in execute
result = lambda_executors.EXECUTOR_LOCAL.execute_java_lambda(
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 1532, in execute_java_lambda
invocation_result = self._execute_in_custom_runtime(cmd, lambda_function=lambda_function)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 1366, in _execute_in_custom_runtime
raise InvocationException(
```
when execution is triggered.
This works fine up to and including LocalStack 1.0.4.
### Expected Behavior
No exceptions.
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
LocalStack is started as part of integration tests run by Maven, via `docker-maven-plugin`.
### Environment
```markdown
- OS: 20.04
- LocalStack: 1.2.0
```
### Anything else?
AWS SDK version: 1.12.271 | https://github.com/localstack/localstack/issues/6995 | https://github.com/localstack/localstack/pull/7373 | ff3617cd0ea6c991daf8e4cc3ae9724cf4f8dc3a | 661444908ab2aac6d46d4faf5fe3ca9086f94db7 | "2022-10-10T06:51:30Z" | python | "2022-12-20T20:58:06Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,974 | ["localstack/services/cloudformation/provider.py", "tests/integration/cloudformation/api/test_get_template_summary.py", "tests/integration/cloudformation/api/test_get_template_summary.snapshot.json"] | bug: KeyError: 'Parameters' after deploying cloudformation template again | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I'm receiving this error after calling `samlocal deploy --guided` and filling in all options.
It only happens after the first time; the initial stack creation works.
```
Traceback (most recent call last):
File "/opt/homebrew/bin/samlocal", line 81, in <module>
main.cli()
File "/opt/homebrew/lib/python3.10/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/opt/homebrew/lib/python3.10/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/homebrew/lib/python3.10/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/homebrew/lib/python3.10/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/samcli/lib/cli_validation/image_repository_validation.py", line 92, in wrapped
return func(*args, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/click/decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/samcli/lib/telemetry/metric.py", line 176, in wrapped
raise exception # pylint: disable=raising-bad-type
File "/opt/homebrew/lib/python3.10/site-packages/samcli/lib/telemetry/metric.py", line 126, in wrapped
return_value = func(*args, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/samcli/lib/utils/version_checker.py", line 41, in wrapped
actual_result = func(*args, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/samcli/cli/main.py", line 86, in wrapper
return func(*args, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/samcli/commands/_utils/cdk_support_decorators.py", line 38, in wrapped
return func(*args, **kwargs)
File "/opt/homebrew/lib/python3.10/site-packages/samcli/commands/deploy/command.py", line 192, in cli
do_cli(
File "/opt/homebrew/lib/python3.10/site-packages/samcli/commands/deploy/command.py", line 359, in do_cli
deploy_context.run()
File "/opt/homebrew/lib/python3.10/site-packages/samcli/commands/deploy/deploy_context.py", line 166, in run
return self.deploy(
File "/opt/homebrew/lib/python3.10/site-packages/samcli/commands/deploy/deploy_context.py", line 249, in deploy
result, changeset_type = self.deployer.create_and_wait_for_changeset(
File "/opt/homebrew/lib/python3.10/site-packages/samcli/lib/deploy/deployer.py", line 525, in create_and_wait_for_changeset
result, changeset_type = self.create_changeset(
File "/opt/homebrew/lib/python3.10/site-packages/samcli/lib/deploy/deployer.py", line 166, in create_changeset
existing_parameters = [parameter["ParameterKey"] for parameter in summary["Parameters"]]
KeyError: 'Parameters'
```
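The KeyError suggests the GetTemplateSummary response is missing the `Parameters` key that SAM CLI indexes unconditionally (an assumption based on the traceback, not a confirmed diagnosis). A defensive lookup on hypothetical summaries shows the difference:

```python
# Hypothetical GetTemplateSummary payloads: one with the key, one without.
summary_with_params = {"ResourceTypes": ["AWS::Lambda::Function"], "Parameters": []}
summary_without_params = {"ResourceTypes": ["AWS::Lambda::Function"]}

def existing_parameter_keys(summary):
    # summary["Parameters"] raises KeyError when the key is absent; .get() does not.
    return [p["ParameterKey"] for p in summary.get("Parameters", [])]

print(existing_parameter_keys(summary_with_params))     # []
print(existing_parameter_keys(summary_without_params))  # [] instead of a KeyError
```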
### Expected Behavior
Update the stack already deployed to localstack
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker-compose up localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
samlocal deploy --guided
### Environment
```markdown
- OS: MacOS 12.3.1
- LocalStack: latest
```
### Anything else?
No | https://github.com/localstack/localstack/issues/6974 | https://github.com/localstack/localstack/pull/7264 | 75faee8adaeb07f6e5c76760e774889f66ad5ab9 | 59e38cfa6e363c36cd87de543d1e03b08a72145a | "2022-10-04T21:53:41Z" | python | "2022-12-02T12:40:23Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,934 | ["localstack/services/ses/provider.py", "tests/integration/test_ses.py"] | feature request: SES CloneReceiptRuleSet | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
Implement the `CloneReceiptRuleSet` operation for SES.
https://docs.aws.amazon.com/ses/latest/APIReference/API_CloneReceiptRuleSet.html
### 🧑💻 Implementation
_No response_
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6934 | https://github.com/localstack/localstack/pull/6992 | dd9dd9c3baa87544b90b847cda0fbe41e961d7c8 | dcc04b6d0a89f45a92c11fdb40f7b14ceeac0b72 | "2022-09-26T07:02:51Z" | python | "2022-10-11T10:58:55Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,863 | ["localstack/config.py", "localstack/services/sns/constants.py", "localstack/services/sns/models.py", "localstack/services/sns/provider.py", "localstack/services/sns/publisher.py", "tests/integration/test_edge.py", "tests/integration/test_sns.py", "tests/integration/test_sns.snapshot.json", "tests/unit/test_sns.py"] | bug: SNS Publishing multiple messages with PublishBatch() API is much slower than with Publish() | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Publishing multiple messages to an SNS topic with batching, using the PublishBatch() API, is much slower than calling the Publish() API N times.
The more SQS queues are subscribed to the SNS topic, the longer publishing with batching takes; the dependency is linear. For individual messages, publishing time is independent of the number of queues.
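One plausible explanation (an assumption, not confirmed from LocalStack's source) is that batch entries are delivered to each subscribed queue sequentially, so batch time grows with the subscriber count. A minimal Python sketch of fanning deliveries out instead, with `deliver` standing in for one call to a queue backend:

```python
import concurrent.futures
import time

def deliver(queue, message):
    time.sleep(0.01)  # stand-in for one delivery call to a subscribed queue
    return queue, message

queues = [f"queue-{i}" for i in range(4)]
messages = [f"msg-{i}" for i in range(10)]
work = [(q, m) for q in queues for m in messages]  # every message to every queue

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(lambda qm: deliver(*qm), work))
elapsed = time.perf_counter() - start

# Sequential delivery would need ~len(work) * 0.01 s and grow linearly with the
# queue count; the fanned-out version stays close to a single delivery's latency.
print(len(results), round(elapsed, 3))
```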
### Expected Behavior
Publishing messages to an SNS topic with batching should be faster than publishing messages individually, and publishing time should be independent of the number of SQS queues subscribed to the SNS topic, as with AWS.
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run -p 4566:4566 --rm -e SERVICES=sqs,sns -e EAGER_SERVICE_LOADING=1 localstack/localstack:1.1.0
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
AWS SDK code snippet to show the issue:
````
static async Task TestSNSPublishing()
{
const int numMsgs = 10;
string msg = new string('x', 100);
Stopwatch sw = new Stopwatch();
using (AmazonSQSClient SQS = new AmazonSQSClient("ignore", "ignore", new AmazonSQSConfig { ServiceURL = "http://localhost:4566" }))
using (AmazonSimpleNotificationServiceClient SNS = new AmazonSimpleNotificationServiceClient("ignore", "ignore", new AmazonSimpleNotificationServiceConfig { ServiceURL = "http://localhost:4566" }))
{
// Create a topic
CreateTopicResponse createTopicResp = await SNS.CreateTopicAsync("testTopic");
// Create and subscribe queues to the topic
CreateQueueResponse createQueueResp1 = await SQS.CreateQueueAsync($"testQueue_{Guid.NewGuid()}");
string subscriptionArn1 = await SNS.SubscribeQueueAsync(createTopicResp.TopicArn, SQS, createQueueResp1.QueueUrl);
CreateQueueResponse createQueueResp2 = await SQS.CreateQueueAsync($"testQueue_{Guid.NewGuid()}");
string subscriptionArn2 = await SNS.SubscribeQueueAsync(createTopicResp.TopicArn, SQS, createQueueResp2.QueueUrl);
// Publish messages individually
Task<PublishResponse>[] publTasks = new Task<PublishResponse>[numMsgs];
sw.Start();
for (int i = 0; i < numMsgs; i++)
{
publTasks[i] = SNS.PublishAsync(createTopicResp.TopicArn, msg);
}
await Task.WhenAll(publTasks);
sw.Stop();
Console.WriteLine($"Individual: Num. messages = {numMsgs} Publishing time = {sw.ElapsedMilliseconds} ms");
await Task.Delay(1000);
// Publish batch of messages
PublishBatchRequest pubReq = new PublishBatchRequest() { TopicArn = createTopicResp.TopicArn };
for (int msgId = 0; msgId < numMsgs; msgId++)
{
pubReq.PublishBatchRequestEntries.Add(new PublishBatchRequestEntry() { Id = msgId.ToString(), Message = msg });
}
sw.Restart();
await SNS.PublishBatchAsync(pubReq);
sw.Stop();
Console.WriteLine($"Batch: Num. messages = {numMsgs} Publishing time = {sw.ElapsedMilliseconds} ms");
// Cleanup subscriptions and queues
await SNS.UnsubscribeAsync(subscriptionArn1);
await SNS.UnsubscribeAsync(subscriptionArn2);
await SNS.DeleteTopicAsync(createTopicResp.TopicArn);
await SQS.DeleteQueueAsync(createQueueResp1.QueueUrl);
await SQS.DeleteQueueAsync(createQueueResp2.QueueUrl);
}
}
````
Results look as follows:
````
For 2 queues:
Individual: Num. messages = 10 Publishing time = 73 ms
Batch: Num. messages = 10 Publishing time = 443 ms
````

For AWS:
````
For 2 queues:
Individual: Num. messages = 10 Publishing time = 501 ms
Batch: Num. messages = 10 Publishing time = 153 ms
````

### Environment
```markdown
- OS: Win 10 / .NET 6
- LocalStack: 1.1.0
- AWS SDK SNS 3.7.4.3
- AWS SDK SQS 3.7.2.100
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6863 | https://github.com/localstack/localstack/pull/7267 | 76a097863fd5a28e8484e4f93bc68df31b2eabe1 | 59bfd5a9d965625fa74e32633d32df7dd6092789 | "2022-09-13T10:47:17Z" | python | "2023-01-03T18:20:17Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,849 | ["localstack/aws/serving/hypercorn.py", "localstack/http/hypercorn.py", "localstack/utils/server/http2_server.py", "setup.cfg", "tests/unit/aws/test_gateway.py", "tests/unit/http_/conftest.py"] | HTTP header casing when localstack proxies requests | In #6778 we found that AWS forwards HTTP headers to Lambdas from clients without modifying their casing. Since users are relying on this behavior, it effectively renders a web server that lowercases headers unusable as proxy for localstack. In general, I think there is a case to be made that web servers should not manipulate the casing of request headers, so that they can function as application-layer proxy to backends that make assumptions about header casing.
The issue of header casing for HTTP/1.1 in hypercorn was addressed with https://github.com/pgjones/hypercorn/issues/66, but later reverted with https://github.com/pgjones/hypercorn/issues/77. Apparently the relevant ASGI spec is being discussed here https://github.com/django/asgiref/issues/246 where I think we should make that case.
If this is a fundamental limitation of ASGI and we cannot get this into hypercorn, then we will be forced to either
- apologize to lambda users and tell them that they need to re-write their lambda code in a way that treats headers in a case-insensitive way
- fork hypercorn and maintain our own version
- use another webserver that does not manipulate HTTP headers, or at least allows access to the raw headers in some way | https://github.com/localstack/localstack/issues/6849 | https://github.com/localstack/localstack/pull/8660 | a985d68d469858dfe77537617a16ccdf1f119483 | fa2c765c1cb0835effc2d56a40870457cb3d2314 | "2022-09-11T23:08:52Z" | python | "2023-07-10T10:14:30Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,806 | ["localstack/services/apigateway/integration.py", "localstack/services/apigateway/invocations.py", "localstack/services/apigateway/router_asf.py", "localstack/services/apigateway/templates.py", "localstack/services/events/provider.py", "localstack/utils/aws/templating.py", "tests/integration/apigateway/test_apigateway_eventbridge.py", "tests/integration/apigateway/test_apigateway_eventbridge.snapshot.json", "tests/integration/test_events.py", "tests/integration/test_events.snapshot.json"] | bug: API Gateway to EventBridge, GET not yet implemented | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
IaC = CDK.
Attempting to connect an API-GW endpoint directly to EventBridge.
```
const awsIntegration = new apigateway.AwsIntegration({
service: 'events',
action: 'PutEvents',
integrationHttpMethod: 'POST',
options: {
credentialsRole: role,
requestTemplates: {
'application/json': requestTemplate,
},
passthroughBehavior: apigateway.PassthroughBehavior.NEVER,
integrationResponses: [
successResponse,
serverErrorResponse,
clientErrorResponse,
],
},
});
```
Receiving response:
```
{
statusCode: 400,
data: {
Type: 'User',
message: `Error invoking integration for API Gateway ID '1z7ky5zzcb': API Gateway AWS integration action URI "arn:aws:apigateway:ap-southeast-2:events:action/PutEvents", method "GET" not yet implemented`,
__type: 'InvalidRequest'
}
}
```
Similar to bug reported:
* https://github.com/localstack/localstack/issues/5508
### Expected Behavior
* 200 response
* Event created in the relevant event bus
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
Calling the API endpoint using axios from jest tests.
### Environment
```markdown
- OS: MacOS
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6806 | https://github.com/localstack/localstack/pull/7137 | 604c0508e8b9dc5db263ff24b1eee2159bb5d5d4 | af47056e2f97bce0a7f691f639a43409d6d0c430 | "2022-09-02T03:52:26Z" | python | "2023-07-06T17:33:35Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,798 | ["localstack/utils/collections.py", "localstack/utils/net.py", "tests/unit/test_common.py"] | ElastiCache and RDS instance want to use the same Port | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I am running a script to set up a LocalStack environment. When running the following commands, there is an issue with Redis and MySQL fighting for the same port.
```
echo "Create an Mysql RDS Instance"
awslocal rds create-db-instance \
--db-instance-identifier devops-toolkit \
--db-instance-class db.t3.micro \
--engine mysql \
--master-user-password 12345678 \
--master-username admin \
--allocated-storage 5
sleep 5
echo "Create a Redis Cache Cluster"
awslocal elasticache create-cache-cluster --cache-cluster-id cluster-1
```
This can be seen in the logs
```
2022-09-01T07:49:51.054 INFO --- [ Thread-119] l.s.rds.engine_mysql : Starting MySQL RDS server on port 4510 (backend port 4510) - database "test", user "admin"
2022-09-01T07:49:53.245 INFO --- [ asgi_gw_0] l.s.elasticache.provider : Waiting for Redis server on tcp://localhost:4510 to start
2022-09-01T07:49:53.259 INFO --- [ Thread-123] l.s.elasticache.redis : 1181:C 01 Sep 2022 07:49:53.258 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2022-09-01T07:49:53.259 INFO --- [ Thread-123] l.s.elasticache.redis : 1181:C 01 Sep 2022 07:49:53.258 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=1181, just started
2022-09-01T07:49:53.259 INFO --- [ Thread-123] l.s.elasticache.redis : 1181:C 01 Sep 2022 07:49:53.258 # Configuration loaded
2022-09-01T07:49:53.265 INFO --- [ Thread-123] l.s.elasticache.redis : 1181:M 01 Sep 2022 07:49:53.265 * Running mode=standalone, port=4510.
2022-09-01T07:49:53.265 INFO --- [ Thread-123] l.s.elasticache.redis : 1181:M 01 Sep 2022 07:49:53.265 # Server initialized
2022-09-01T07:49:53.266 INFO --- [ Thread-123] l.s.elasticache.redis : 1181:M 01 Sep 2022 07:49:53.265 # WARNING you have Transparent Huge Pages (THP) support en
```
And then results in mysql errors
```
ERROR: 'cat /tmp/tmpfbw8o1x_ | mysql --socket /tmp/mysql-states/09a73ec8/mysqld.sock': exit code 1; output: b"ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql-states/09a73ec8/mysqld.sock' (2)\n"
```
Any ideas, thanks in advance ✌️
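The duplicated port 4510 looks like both providers probed the service-port range for a free port before either server had actually bound one (an assumption based on the log ordering). A sketch of a reservation-based allocator that never hands out the same port twice; illustrative only, not LocalStack's implementation:

```python
import socket
import threading

_reserved = set()
_lock = threading.Lock()

def reserve_free_port(start=4510, end=4560):
    """Return a port that is neither OS-bound nor already handed out by us."""
    with _lock:
        for port in range(start, end):
            if port in _reserved:
                continue  # handed out earlier, even if its server hasn't bound yet
            try:
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
                    probe.bind(("127.0.0.1", port))
            except OSError:
                continue  # something else is already listening here
            _reserved.add(port)
            return port
    raise RuntimeError("no free port in range")

rds_port = reserve_free_port()
redis_port = reserve_free_port()
print(rds_port, redis_port)  # two distinct ports, even before any server starts
```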
### Expected Behavior
The two services should run on different ports. Is there a possibility to preconfigure the port on which each service runs?
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
Run this simple script
```
localstack start -d
echo "wait for local stack to start"
sleep 10
echo "Now trying to configure local stack"
echo "Create an Mysql RDS Instance"
awslocal rds create-db-instance \
--db-instance-identifier devops-toolkit \
--db-instance-class db.t3.micro \
--engine mysql \
--master-user-password 12345678 \
--master-username admin \
--allocated-storage 5
sleep 5
echo "Create a Redis Cache Cluster"
awslocal elasticache create-cache-cluster --cache-cluster-id cluster-1
```
### Environment
```markdown
- OS: MacOS 11
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6798 | https://github.com/localstack/localstack/pull/6813 | aad37efb7b73d9cc29878372e7216f0581219b85 | de626355d46d8c17b1f07439ff2c0a6702ca34ef | "2022-09-01T07:59:33Z" | python | "2022-09-05T10:56:22Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,792 | ["localstack/aws/serving/asgi.py"] | bug: [SQS] LocalStack in ASF mode doesn't process multiple messages sent concurrently | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When multiple (hundreds of) messages are sent concurrently, LS 1.0.4 (in default ASF mode) is unable to receive them.
In legacy mode (LEGACY_EDGE_PROXY=1) all messages are received successfully.
### Expected Behavior
LS in ASF mode should be able to receive (process) multiple messages sent concurrently.
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run -p 4566:4566 --rm -e SERVICES=sqs -e EAGER_SERVICE_LOADING=1 localstack/localstack:1.0.4
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
The issue is reproducible with this AWS SDK code snippet:
```
static async Task TestSQS()
{
const int numMsgs = 500;
Stopwatch stopWatch = new Stopwatch();
using (AmazonSQSClient SQS = new AmazonSQSClient("ignore", "ignore", new AmazonSQSConfig { ServiceURL = "http://localhost:4566" }))
{
CreateQueueResponse createQueueResp = await SQS.CreateQueueAsync($"testQueue_{Guid.NewGuid()}");
// Send messages
Console.WriteLine($"Sending {numMsgs} messages...");
string msgText = new string('x', 100);
Task[] sentMsgs = new Task[numMsgs];
stopWatch.Start();
for (int i = 0; i < numMsgs; i++)
{
sentMsgs[i] = SQS.SendMessageAsync(createQueueResp.QueueUrl, msgText);
}
await Task.WhenAll(sentMsgs);
stopWatch.Stop();
Console.WriteLine($"Num. messages = {numMsgs,2} Sending time = {stopWatch.ElapsedMilliseconds} ms");
await SQS.DeleteQueueAsync(createQueueResp.QueueUrl);
}
}
```
After a couple of minutes, the application fails with this exception:
```
Sending 500 messages...
Unhandled exception. System.IO.IOException: Unable to read data from the transport connection: The I/O operation has been aborted because of either a thread exit or an application request..
---> System.Net.Sockets.SocketException (995): The I/O operation has been aborted because of either a thread exit or an application request.
--- End of inner exception stack trace ---
at Amazon.Runtime.HttpWebRequestMessage.GetResponseAsync(CancellationToken cancellationToken)
at Amazon.Runtime.Internal.HttpHandler`1.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.Unmarshaller.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.SQS.Internal.ValidationResponseHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.ErrorHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.ErrorHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.CallbackHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.EndpointDiscoveryHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.EndpointDiscoveryHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.CredentialsRetriever.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.RetryHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.RetryHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.CallbackHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.CallbackHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.ErrorCallbackHandler.InvokeAsync[T](IExecutionContext executionContext)
at Amazon.Runtime.Internal.MetricsHandler.InvokeAsync[T](IExecutionContext executionContext)
...
```
In legacy mode (LEGACY_EDGE_PROXY=1) sending is successful and the output looks like this:
```
Sending 500 messages...
Num. messages = 500 Sending time = 4410 ms
```
### Environment
```markdown
- OS: Win 10 (1909)
- LocalStack: 1.0.4
- AWSSDK.SQS 3.7.2.94 / .NET Core 3.1
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6792 | https://github.com/localstack/localstack/pull/6814 | de626355d46d8c17b1f07439ff2c0a6702ca34ef | d7b2e909788da4312ec60484b47994bd1a3aa969 | "2022-08-31T08:44:10Z" | python | "2022-09-05T11:33:19Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,787 | ["localstack/aws/forwarder.py", "localstack/aws/protocol/serializer.py", "localstack/logging/setup.py", "tests/integration/test_kinesis.py"] | bug: not able to get records from kinesis with "latest" docker image | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Using the Kinesis Java client, I'm not able to get records from Kinesis. This is the code snippet of the client:
```
private GetRecordsResponse getStreamRecords() {
final KinesisAsyncClient client = createClient();
final String shardId = client.listShards(
ListShardsRequest.builder()
.streamName(STREAM_NAME)
.build()
).get()
.shards()
.get(0)
.shardId();
final String iterator = client.getShardIterator(GetShardIteratorRequest.builder()
.streamName(STREAM_NAME)
.shardId(shardId)
.shardIteratorType(ShardIteratorType.TRIM_HORIZON)
.build())
.get()
.shardIterator();
final GetRecordsResponse response = client.getRecords(
GetRecordsRequest
.builder()
.shardIterator(iterator)
.build())
.get();
return response;
}
```
I get this error
```
java.util.concurrent.ExecutionException: software.amazon.awssdk.core.exception.SdkClientException: Unable to parse date : 1.661885385488E9
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073)
at org.apache.pulsar.io.kinesis.KinesisSinkTest.getStreamRecords(KinesisSinkTest.java:177)
at org.apache.pulsar.io.kinesis.KinesisSinkTest.testWrite(KinesisSinkTest.java:116)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:132)
at org.testng.internal.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:45)
at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:73)
at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:11)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: software.amazon.awssdk.core.exception.SdkClientException: Unable to parse date : 1.661885385488E9
at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:97)
at software.amazon.awssdk.protocols.core.StringToInstant.lambda$safeParseDate$0(StringToInstant.java:77)
at software.amazon.awssdk.protocols.core.StringToInstant.convert(StringToInstant.java:56)
at software.amazon.awssdk.protocols.core.StringToInstant.convert(StringToInstant.java:32)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller$SimpleTypeJsonUnmarshaller.unmarshall(JsonProtocolUnmarshaller.java:160)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshallStructured(JsonProtocolUnmarshaller.java:210)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshallStructured(JsonProtocolUnmarshaller.java:114)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.lambda$unmarshallList$2(JsonProtocolUnmarshaller.java:143)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshallList(JsonProtocolUnmarshaller.java:145)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshallStructured(JsonProtocolUnmarshaller.java:210)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshall(JsonProtocolUnmarshaller.java:197)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshall(JsonProtocolUnmarshaller.java:168)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonResponseHandler.handle(JsonResponseHandler.java:79)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonResponseHandler.handle(JsonResponseHandler.java:36)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonResponseHandler.handle(AwsJsonResponseHandler.java:43)
at software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$resultTransformationResponseHandler$5(BaseClientHandler.java:232)
at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler.lambda$prepare$0(AsyncResponseHandler.java:88)
at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1150)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler$BaosSubscriber.onComplete(AsyncResponseHandler.java:129)
at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.runAndLogError(ResponseHandler.java:171)
at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.access$500(ResponseHandler.java:68)
at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:287)
at com.typesafe.netty.HandlerPublisher.complete(HandlerPublisher.java:408)
at com.typesafe.netty.HandlerPublisher.handlerRemoved(HandlerPublisher.java:395)
at io.netty.channel.AbstractChannelHandlerContext.callHandlerRemoved(AbstractChannelHandlerContext.java:946)
at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:637)
at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:477)
at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:423)
at com.typesafe.netty.http.HttpStreamsHandler.removeHandlerIfActive(HttpStreamsHandler.java:328)
at com.typesafe.netty.http.HttpStreamsHandler.handleReadHttpContent(HttpStreamsHandler.java:189)
at com.typesafe.netty.http.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:165)
at com.typesafe.netty.http.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:148)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at software.amazon.awssdk.http.nio.netty.internal.LastHttpContentHandler.channelRead(LastHttpContentHandler.java:43)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.onDataRead(Http2ToHttpInboundAdapter.java:66)
at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.channelRead0(Http2ToHttpInboundAdapter.java:44)
at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.channelRead0(Http2ToHttpInboundAdapter.java:38)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.handler.codec.http2.AbstractHttp2StreamChannel$Http2ChannelUnsafe.doRead0(AbstractHttp2StreamChannel.java:901)
at io.netty.handler.codec.http2.AbstractHttp2StreamChannel.fireChildRead(AbstractHttp2StreamChannel.java:555)
at io.netty.handler.codec.http2.Http2MultiplexHandler.channelRead(Http2MultiplexHandler.java:180)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.http2.Http2FrameCodec.onHttp2Frame(Http2FrameCodec.java:707)
at io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.onDataRead(Http2FrameCodec.java:646)
at io.netty.handler.codec.http2.Http2FrameListenerDecorator.onDataRead(Http2FrameListenerDecorator.java:36)
at io.netty.handler.codec.http2.Http2EmptyDataFrameListener.onDataRead(Http2EmptyDataFrameListener.java:49)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:307)
at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:48)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:415)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:250)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:159)
at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:173)
at io.netty.handler.codec.http2.DecoratingHttp2ConnectionDecoder.decodeFrame(DecoratingHttp2ConnectionDecoder.java:63)
at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:378)
at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:438)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:449)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
... 1 more
Caused by: java.lang.NumberFormatException: For input string: "1.661885385488E9"
at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:67)
at java.base/java.lang.Long.parseLong(Long.java:711)
at java.base/java.lang.Long.parseLong(Long.java:836)
at software.amazon.awssdk.utils.DateUtils.parseUnixTimestampMillisInstant(DateUtils.java:146)
at software.amazon.awssdk.protocols.core.StringToInstant.lambda$safeParseDate$0(StringToInstant.java:72)
```
Note that only raw bytes are being sent; no dates are involved in the message itself.
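The date the SDK chokes on is not in the payload but in the record metadata (presumably the record's `ApproximateArrivalTimestamp`): the serializer emitted it in scientific notation (`1.661885385488E9`) instead of plain decimal, and the Java SDK's millisecond path ultimately calls `Long.parseLong` (see the stack trace), which rejects that form. A minimal, illustrative Python sketch of the same mismatch:

```python
ts_scientific = "1.661885385488E9"   # what the serializer emitted
ts_plain = "1661885385.488"          # the plain-decimal form the SDK expects

# A strict integer parse (the equivalent of Java's Long.parseLong)
# rejects scientific notation outright:
try:
    int(ts_scientific)
    strict_parse_ok = True
except ValueError:
    strict_parse_ok = False

# A lenient float parse shows both strings denote the same instant:
assert not strict_parse_ok
assert float(ts_scientific) == float(ts_plain) == 1661885385.488
```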
This is a regression with the ["latest"](https://hub.docker.com/layers/localstack/localstack/latest/images/sha256-bcc915531722412c0b3987b0562c1851a6fabb125c306f3b2da1bae2e204957c?context=explore) docker image. Reverting to 1.0.4 everything works fine.
I suspect it's a regression introduced with https://github.com/localstack/localstack/pull/6166
### Expected Behavior
To work as in 1.0.4
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
TestContainers
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
private GetRecordsResponse getStreamRecords() {
final KinesisAsyncClient client = createClient();
final String shardId = client.listShards(
ListShardsRequest.builder()
.streamName(STREAM_NAME)
.build()
).get()
.shards()
.get(0)
.shardId();
final String iterator = client.getShardIterator(GetShardIteratorRequest.builder()
.streamName(STREAM_NAME)
.shardId(shardId)
.shardIteratorType(ShardIteratorType.TRIM_HORIZON)
.build())
.get()
.shardIterator();
final GetRecordsResponse response = client.getRecords(
GetRecordsRequest
.builder()
.shardIterator(iterator)
.build())
.get();
return response;
}
```
### Environment
```markdown
- OS: MacOs
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6787 | https://github.com/localstack/localstack/pull/6791 | da6981cf5ea1f164e34eaec8b49615a1b5a352d2 | 591111c0113955c7af9181dbd413a2cdcd0cca50 | "2022-08-30T19:54:18Z" | python | "2022-08-31T13:42:16Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,786 | ["localstack/services/dynamodbstreams/dynamodbstreams_api.py", "localstack/services/dynamodbstreams/provider.py", "setup.cfg", "tests/integration/test_dynamodb.py", "tests/integration/test_dynamodb.snapshot.json"] | bug: DynamoDB Streams has different base64 encoded value | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Sometimes `GetRecords` on a dynamodb stream will return a different base64 encoded value
### Expected Behavior
The `GetRecords` call should return the same base64 encoded value as `GetItem`
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### Start LocalStack
```bash
# docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack
...
LocalStack version: 1.0.5.dev
LocalStack build date: 2022-08-29
LocalStack build git hash: bc8cb7ed
```
#### Run the following script
The original problem was in a Clojure program; I ported the exact failing case to Python to help with debugging.
```python
from time import sleep
from uuid import uuid4
import boto3
endpoint_url = "http://localhost.localstack.cloud:4566"
def wait_for_table_active(dynamo_client, table_name):
while True:
response = dynamo_client.describe_table(TableName=table_name)
table_status = response.get('Table', {}).get('TableStatus')
if (table_status == 'ACTIVE'):
return response
sleep(1)
def poll_records(streams_client, shard_iterator):
while True:
response = streams_client.get_records(ShardIterator=shard_iterator)
records = response.get('Records')
if records:
return records
shard_iterator = response.get('NextShardIterator')
def main():
boto3.set_stream_logger(name='botocore')
table_name = 'StreamBug-' + str(uuid4())
# nippy compressed bytes - this fails
test_data = b'NPY\x08\x00\x00\x00\r\xd0i\x0bhello/world'
# This works
# test_data = 'hello/world'.encode()
# aws clients - switch these to run on aws
# dynamo_client = boto3.client('dynamodb')
# streams_client = boto3.client('dynamodbstreams')
dynamo_client = boto3.client('dynamodb', endpoint_url=endpoint_url)
streams_client = boto3.client('dynamodbstreams', endpoint_url=endpoint_url)
# create DynamoDB table
dynamo_client.create_table(
TableName=table_name,
KeySchema=[{
'AttributeName': '__pkey',
'KeyType': 'HASH'
}],
AttributeDefinitions=[{
'AttributeName': '__pkey',
'AttributeType': 'S'
}],
BillingMode='PAY_PER_REQUEST',
StreamSpecification={
'StreamEnabled': True,
'StreamViewType': 'NEW_IMAGE',
}
)
table = wait_for_table_active(
dynamo_client=dynamo_client,
table_name=table_name
)
    # There is an issue where the table is marked as ACTIVE; however, issuing a
    # put-item immediately does not result in a record in the stream.
sleep(10)
# put a dummy item to be used with the test
dynamo_client.put_item(
TableName=table_name,
Item={
'__pkey': {'S': 'test'},
'data': {'B': test_data},
}
)
dynamo_client.get_item(
TableName=table_name,
Key={'__pkey': {'S': 'test'}}
)
stream_arn = table['Table']['LatestStreamArn']
shard_id = streams_client.describe_stream(StreamArn=stream_arn)[
'StreamDescription']['Shards'][0]['ShardId']
shard_iterator = streams_client.get_shard_iterator(
StreamArn=stream_arn,
ShardId=shard_id,
ShardIteratorType='TRIM_HORIZON'
)['ShardIterator']
poll_records(streams_client=streams_client, shard_iterator=shard_iterator)
# cleanup
dynamo_client.delete_table(TableName=table_name)
if __name__ == "__main__":
main()
```
look for the `GetItem` response body
```
2022-08-30 11:48:55,361 botocore.parsers [DEBUG] Response body:
b'{"Item": {"__pkey": {"S": "test"}, "data": {"B": "TlBZCAAAAA3QaQtoZWxsby93b3JsZA=="}}}'
```
look for the `GetRecords` response body
```
2022-08-30 11:48:55,436 botocore.parsers [DEBUG] Response body:
b'{"Records": [{"eventID": "c1ffbeeb", "eventVersion": "1.1", "dynamodb": {"ApproximateCreationDateTime": 1661874535.0, "SizeBytes": 86, "Keys": {"__pkey": {"S": "test"}}, "NewImage": {"__pkey": {"S": "test"}, "data": {"B": "TlBZCAAAAA3vv71pC2hlbGxvL3dvcmxk"}}, "StreamViewType": "NEW_IMAGE", "SequenceNumber": "49632833953051709227967803742008155050319272171947425794"}, "awsRegion": "eu-west-1", "eventSource": "aws:dynamodb", "eventName": "INSERT"}], "NextShardIterator": "AAAAAAAAAAHC3kaHiqEu2QMfgVQVl4V2uwXtFTVE3oNGtEO3bZ8a68TbYVSd9anOKoZ/TyTdLZcH0j49Ji8vOcGFxVOTxmf4hQlki2T7VTkDfLS7tVgQsr0Bvsz9AtZkxBgUCOEaFkwls4P4ApfhZz4QCIxlYEMPrCxhYwbDTmPsVAJHeKh9o7yDg1qCD5zbML838dYFx4zSI2T7RU5M2y+RPKwgMDrc4EL6A9WhaXC+uhLv2lac+W7+lfBIGyQePpmEYRAqZFo="}'
```
`GetItem` returns `{"B": "TlBZCAAAAA3QaQtoZWxsby93b3JsZA=="}`
`GetRecords` returns `{"B": "TlBZCAAAAA3vv71pC2hlbGxvL3dvcmxk"}`
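Decoding the two base64 payloads above (a quick, LocalStack-independent sketch) shows what happened: in the stream copy, the raw byte `0xD0` has been replaced by `0xEF 0xBF 0xBD`, the UTF-8 encoding of the U+FFFD replacement character. That is, the binary value appears to have gone through a lossy bytes-to-string round trip somewhere in the stream path:

```python
import base64

from_get_item = base64.b64decode("TlBZCAAAAA3QaQtoZWxsby93b3JsZA==")
from_get_records = base64.b64decode("TlBZCAAAAA3vv71pC2hlbGxvL3dvcmxk")

print(from_get_item)     # b'NPY\x08\x00\x00\x00\r\xd0i\x0bhello/world'
print(from_get_records)  # b'NPY\x08\x00\x00\x00\r\xef\xbf\xbdi\x0bhello/world'

# The stream value equals the original run through a lossy decode/encode:
assert from_get_item.decode("utf-8", errors="replace").encode("utf-8") == from_get_records
```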
### Environment
```markdown
- OS: MacOS 12.5.1
- LocalStack: latest
```
### Anything else?
This may be related to https://github.com/localstack/localstack/issues/6700.
Version `0.14.2` returns the value base64-encoded twice.
Version `0.14.0` works.
I have tested the script on AWS, and it works as expected.
closed | localstack/localstack | https://github.com/localstack/localstack | 6,784 | ["localstack/services/sns/provider.py", "tests/integration/test_sns.py", "tests/integration/test_sns.snapshot.json"] | bug: SNS publish batch should fail when publishing to invalid topic | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When publishing a batch of messages to SNS using the publish-batch command, the result is always success, regardless of whether the topic exists.
Example result:
```
{
"Successful": [
{
"Id": "1",
"MessageId": "539277ec-2053-4e30-bc97-2f7075687cbd"
}
],
"Failed": []
}
```
### Expected Behavior
Publishing a batch of messages to a non-existent topic should return an error, as the real implementation does.
Expected Result
```
An error occurred (NotFound) when calling the PublishBatch operation: Topic does not exist
```
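A minimal sketch of the validation the provider should perform before accepting the batch. The store and function names here are hypothetical illustrations, not LocalStack's actual internals:

```python
# Hypothetical in-memory topic store:
known_topics = {"arn:aws:sns:eu-central-1:000000000000:real-topic.fifo"}

def publish_batch(topic_arn, entries):
    # Validate the topic first, as real AWS does, instead of reporting success.
    if topic_arn not in known_topics:
        raise LookupError("NotFound: Topic does not exist")
    return {"Successful": [{"Id": e["Id"]} for e in entries], "Failed": []}

try:
    publish_batch("arn:aws:sns:eu-central-1:000000000000:invalid.fifo", [{"Id": "1"}])
    outcome = "success"
except LookupError as err:
    outcome = str(err)
print(outcome)  # NotFound: Topic does not exist
```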
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker compose up -d
Docker-compose file:
```
services:
localstack:
image: 256612512925.dkr.ecr.eu-central-1.amazonaws.com/localstack:1.0.1
ports:
- "4576:4566"
environment:
- SERVICES=sns,sqs
- AWS_DEFAULT_REGION=eu-central-1
- DATA_DIR=/tmp/localstack/data
- DEBUG=1
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /docker/localstack/data:/tmp/localstack/data
- ./docker/localstack/init:/docker-entrypoint-initaws.d
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal sns publish-batch --topic-arn "arn:aws:sns:eu-central-1:000000000000:invalid.fifo" --publish-batch-request-entries Id=1,Message=hi,MessageGroupId=1234
### Environment
```markdown
- OS: MacOS Monterey 12.5
- LocalStack: 1.0.1
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6784 | https://github.com/localstack/localstack/pull/6803 | c50b1d4b256490e0d7213bbd50d24a843d754599 | e09a39ed923481ec1d02e6981e448193cc6bf5e9 | "2022-08-30T14:46:31Z" | python | "2022-09-01T21:02:25Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,764 | ["localstack/aws/api/s3/__init__.py", "localstack/aws/spec-patches.json", "localstack/services/s3/provider.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"] | PutBucketLogging giving MalformedXML error for payload working with real AWS | Hi,
I am using the payload below for the PutBucketLogging API and getting a MalformedXML error in response. I have checked it against real AWS, and it works perfectly fine. Am I missing something, or does this need a fix in LocalStack?
```
"LoggingEnabled": {
    "TargetBucket": "test_bucket_name",
    "TargetPrefix": "log",
    "TargetGrants": [
        {
            "Grantee": {
                "URI": "http://acs.amazonaws.com/groups/s3/LogDelivery",
                "Type": "Group",
            },
            "Permission": "WRITE",
        },
        {
            "Grantee": {
                "URI": "http://acs.amazonaws.com/groups/s3/LogDelivery",
                "Type": "Group",
            },
            "Permission": "READ_ACP",
        },
    ],
}
```
closed | localstack/localstack | https://github.com/localstack/localstack | 6,739 | ["bin/supervisord.conf"] | bug: LocalStack failing to startup, spawnerr: can't find command '.venv/bin/python' | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Trying to use LocalStack within Kubernetes, I get this output, and the startup probes fail:
```
- localstack -- terminated (0)
-----Logs-------------
Waiting for all LocalStack services to be ready
2022-08-24 01:58:31,819 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2022-08-24 01:58:31,820 INFO supervisord started with pid 16
2022-08-24 01:58:32,823 INFO spawnerr: can't find command '.venv/bin/python'
2022-08-24 01:58:33,826 INFO spawnerr: can't find command '.venv/bin/python'
2022-08-24 01:58:35,829 INFO spawnerr: can't find command '.venv/bin/python'
Waiting for all LocalStack services to be ready
2022-08-24 01:58:38,832 INFO spawnerr: can't find command '.venv/bin/python'
2022-08-24 01:58:38,832 INFO gave up: infra entered FATAL state, too many start retries too quickly
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Sending -15 to supervisord
2022-08-24 02:00:00,913 WARN received SIGTERM indicating exit request
```
```
[Warning][jenkins/ive-sync-bugfix-2fdrive-203-localstack-jenkins-support-14-4gb9j][Unhealthy] Startup probe failed: Get "http://10.53.126.85:4566/health": dial tcp 10.53.126.85:4566: connect: connection refused
[Warning][jenkins/ive-sync-bugfix-2fdrive-203-localstack-jenkins-support-14-4gb9j][Unhealthy] Startup probe failed: Get "http://10.53.126.85:4566/health": dial tcp 10.53.126.85:4566: connect: connection refused
[Warning][jenkins/ive-sync-bugfix-2fdrive-203-localstack-jenkins-support-14-4gb9j][Unhealthy] Startup probe failed: Get "http://10.53.126.85:4566/health": dial tcp 10.53.126.85:4566: connect: connection refused
```
### Expected Behavior
I expect localstack to start without error and the startup probe to be successful, retrieving a 200 response from the /health endpoint.
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
My k8s file looks like this:
```
apiVersion: v1
kind: Pod
metadata:
labels:
app: btc-test
spec:
containers:
- name: localstack
image: localstack/localstack:1.0.4
ports:
- containerPort: 4566
env:
- name: "SERVICES"
value: "sqs,sns"
- name: "DEBUG"
value: "1"
- name: "DATA_DIR"
value: "/tmp/localstack"
startupProbe:
httpGet:
path: /health
port: 4566
initialDelaySeconds: 10
periodSeconds: 10
```
### Environment
```markdown
- OS: Unknown, in Jenkins
- LocalStack: 1.0.4
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6739 | https://github.com/localstack/localstack/pull/6952 | 75f70e667fa164d442413372154b3e28269fea58 | faf831a603577e303d6c59be3d9cf36f5322013b | "2022-08-24T02:10:15Z" | python | "2022-09-30T14:11:29Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,718 | ["localstack/services/redshift/provider.py", "tests/integration/test_redshift.py"] | bug: Redshift Security Group Ingress Rule Not Applied | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When I create an ingress rule for a Redshift cluster security group, the rule doesn't stick: I'll describe cluster security groups, and the rule won't be applied.
### Expected Behavior
From [this issue](https://github.com/localstack/localstack/issues/2775), I figured that the rule would be properly applied.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
```yaml
services:
localstack:
image: localstack/localstack:latest
ports:
- 4563-4584:4563-4584
volumes:
- ./volume:/var/lib/localstack
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```sh
awslocal redshift create-cluster-security-group --cluster-security-group-name "group-name" --description "Security group description"
awslocal redshift authorize-cluster-security-group-ingress --cluster-security-group-name "group-name" --cidrip 192.168.100.101/32
awslocal redshift describe-cluster-security-groups
```
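For comparison, after the `authorize-cluster-security-group-ingress` call, the rule should appear under the group's `IPRanges` in the `describe-cluster-security-groups` response. A hand-written sketch of the expected shape (not captured from AWS; field names follow the Redshift API, values mirror the commands above):

```python
# Hypothetical expected describe-cluster-security-groups response fragment:
expected = {
    "ClusterSecurityGroups": [
        {
            "ClusterSecurityGroupName": "group-name",
            "Description": "Security group description",
            "EC2SecurityGroups": [],
            "IPRanges": [
                {"Status": "authorized", "CIDRIP": "192.168.100.101/32"}
            ],
        }
    ]
}

group = expected["ClusterSecurityGroups"][0]
# The ingress rule should stick, i.e. be present in IPRanges:
assert any(r["CIDRIP"] == "192.168.100.101/32" for r in group["IPRanges"])
```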
### Environment
```markdown
- OS: Ubuntu 20.04
- LocalStack: latest
```
### Anything else?
This was tested with and without Pro. I'm curious if this should be a Pro-only feature though. | https://github.com/localstack/localstack/issues/6718 | https://github.com/localstack/localstack/pull/6749 | 37b8fe04af92403f37f8c422087ae3d63b949c85 | 786b7a84b5383f78de5b5fa1f769d9099b50e42e | "2022-08-22T03:46:40Z" | python | "2022-08-25T11:59:13Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,707 | ["localstack/aws/handlers/cors.py", "localstack/constants.py", "localstack/services/apigateway/invocations.py", "localstack/services/generic_proxy.py"] | bug: API gateway Authorization Header is being replaced with "" on lambda | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Authorization headers sent to API Gateway are received by Lambdas as an empty string ("").
### Expected Behavior
_No response_
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
Run an API Gateway with a Lambda integration.
### Environment
```markdown
- OS:
- LocalStack:
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6707 | https://github.com/localstack/localstack/pull/6895 | 10717e2bc42fff08a7bae3ef1972d45c22f01c0a | 128323e0b2c11b95ed438c3d66ee6ecd861b5da1 | "2022-08-19T20:39:05Z" | python | "2022-09-20T10:51:27Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,700 | ["localstack/services/dynamodbstreams/dynamodbstreams_api.py", "localstack/services/dynamodbstreams/provider.py", "setup.cfg", "tests/integration/test_dynamodb.py", "tests/integration/test_dynamodb.snapshot.json"] | bug: DynamoDB streams encode binary data for no particular reason | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When using DynamoDB Streams (and Kinesis), subsequent requests to put or update an item result in binary data being base64-encoded.
When an item is first created, the record contains the correct data (binary, but not base64-encoded). Subsequent requests against the same item (put, update, or delete) result in the record data being base64-encoded.
This behavior is not observed at all when directly swapping in amazon/dynamodb-local or when using real AWS.
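A minimal, LocalStack-independent illustration of what an extra base64 pass does to a consumer that (correctly) decodes the record once:

```python
import base64

original = b"\x00\x01\x02 raw binary payload"
wire = base64.b64encode(original)   # binary attributes are base64-encoded on the wire
double = base64.b64encode(wire)     # an erroneous second encoding pass

# A consumer that decodes the record once recovers base64 text,
# not the original bytes:
assert base64.b64decode(wire) == original
assert base64.b64decode(double) == wire
assert base64.b64decode(double) != original
```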
### Expected Behavior
Binary data is not encoded in base64
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
`docker-compose.yml`
```
localstack:
image: localstack/localstack
environment:
- EDGE_PORT=8000
- SERVICES=dynamodb,kinesis,s3,kms
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
`putItem`, `updateItem`, and `deleteItem` from the Java SDK, plus a record processor that reads the records produced on the stream.
### Environment
```markdown
- OS:
- LocalStack:
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6700 | https://github.com/localstack/localstack/pull/6918 | 8192c1913ab9ec415418b9c8cb6eea7269748525 | 04e1f02a5816692cc0a15a76e044c90fdf9785d5 | "2022-08-19T01:38:42Z" | python | "2022-10-05T21:39:11Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,682 | ["localstack/services/stores.py"] | bug: 1.0.x breaks concurrent BatchWriteItem to dynamodb tables | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Prior to `v1`, I would start a single `localstack` instance in `docker` at the root of my tests, with each test running in parallel and creating its own test table before executing. Since `v1` this breaks and returns the following error:
```
Unknown table: JmYQVZBQZk not found in dict_keys(['SlERnKhpEP'])
```
Switching off parallel tests fixes the issue, but means my test suite takes longer to run.
Further to this, when I print out the list of tables from the AWS Go SDK client it lists the correct tables, so I know they exist and were created. Pinning the `localstack` version to `0.14.5` also resolves the issue.
Steps tried to resolves:
- updating volume mount as per migration steps
- setting `EAGER_SERVICE_LOADING=1`
Additional info:
- This only happens when executing the `BatchWriteItem` command; switching to `PutItem` works as expected
- If I remove the parallel calls it works as expected
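For what it's worth, the error shape (`Unknown table: X not found in dict_keys(['Y'])`) is the classic symptom of a lost update on a shared registry under concurrency. The following is only a hypothetical illustration of that pattern, not a claim about LocalStack's internals; it uses explicit interleaving instead of real threads so the outcome is deterministic:

```python
registry = {}  # stands in for a shared table-name -> table mapping

def snapshot():
    # read-copy step of a non-atomic read-modify-write
    return dict(registry)

# Two concurrent CreateTable calls both read before either writes back
a, b = snapshot(), snapshot()
a["name4"] = "table-a"
b["name6"] = "table-b"

registry = a  # first writer publishes its copy
registry = b  # second writer overwrites it, silently dropping "name4"

# A later BatchWriteItem against "name4" would now fail with something like
# "Unknown table: name4 not found in dict_keys(['name6'])"
```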
### Expected Behavior
Correctly executes in parallel as tables exist and confirmed with `ListTables` call
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
```shell
docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```golang
package localstack_bug
import (
"context"
"github.com/aws/aws-sdk-go-v2/service/dynamodb"
"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)
type Client struct {
Table string
AWSClient *dynamodb.Client
}
func (c Client) Write() error {
_, err := c.AWSClient.BatchWriteItem(context.Background(), &dynamodb.BatchWriteItemInput{
RequestItems: map[string][]types.WriteRequest{
c.Table: {
{
PutRequest: &types.PutRequest{
Item: map[string]types.AttributeValue{
"id": &types.AttributeValueMemberS{Value: "foo"},
"sortKey": &types.AttributeValueMemberS{Value: "bar"},
},
},
},
},
},
})
return err
}
// TESTS
package localstack_bug
import (
"context"
"fmt"
"net/url"
"os"
"testing"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/aws/retry"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/service/dynamodb"
"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)
var (
client *dynamodb.Client
)
func TestMain(m *testing.M) {
opts := []func(options *config.LoadOptions) error{
config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider("ACCESS_KEY", "SECRET_KEY", "TOKEN")),
config.WithEndpointResolverWithOptions(aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
u := url.URL{
Host: "0.0.0.0:4566",
Scheme: "http",
}
endpoint := aws.Endpoint{
URL: u.String(),
HostnameImmutable: true,
SigningRegion: "eu-west-2",
}
return endpoint, nil
})),
config.WithRegion("eu-west-2"),
config.WithRetryer(func() aws.Retryer {
return retry.AddWithMaxAttempts(retry.NewStandard(), 5)
}),
}
cfg, err := config.LoadDefaultConfig(context.TODO(),
opts...,
)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
client = dynamodb.NewFromConfig(cfg)
m.Run()
}
var PartitionKey = "id"
var SortKey = "sortKey"
func TestWrite(t *testing.T) {
t.Parallel()
tst := make([]struct{ name string }, 0)
for i := 0; i < 10; i++ {
tst = append(tst, struct{ name string }{name: fmt.Sprintf("name%d", i)})
}
for _, v := range tst {
v := v
		t.Run(v.name, func(t *testing.T) {
t.Parallel() // comment this out and will succeed
t.Cleanup(func() {
teardown(v.name)
})
_, err := client.CreateTable(context.Background(), &dynamodb.CreateTableInput{
AttributeDefinitions: []types.AttributeDefinition{
{
AttributeName: &PartitionKey,
AttributeType: types.ScalarAttributeTypeS,
},
{
AttributeName: &SortKey,
AttributeType: types.ScalarAttributeTypeS,
},
},
KeySchema: []types.KeySchemaElement{
{
AttributeName: &PartitionKey,
KeyType: types.KeyTypeHash,
},
{
AttributeName: &SortKey,
KeyType: types.KeyTypeRange,
},
},
TableName: &v.name,
BillingMode: types.BillingModePayPerRequest,
})
if err != nil {
t.Error(err)
}
c := Client{
Table: v.name,
AWSClient: client,
}
err = c.Write()
if err != nil {
t.Error(err)
}
})
}
}
func teardown(name string) {
_, err := client.DeleteTable(context.Background(), &dynamodb.DeleteTableInput{
TableName: &name,
})
if err != nil {
fmt.Println(err)
}
}
```
### Environment
```markdown
- OS: macOS
- LocalStack: 1.0.5.dev // pulled as `latest`
```
### Anything else?
Logs
```shell
Waiting for all LocalStack services to be ready
2022-08-16 16:16:43,236 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2022-08-16 16:16:43,238 INFO supervisord started with pid 17
2022-08-16 16:16:44,240 INFO spawned: 'infra' with pid 22
LocalStack version: 1.0.5.dev
LocalStack build date: 2022-08-16
LocalStack build git hash: 7e3045dc
2022-08-16T16:16:45.548 WARN --- [ Thread-110] hypercorn.error : ASGI Framework Lifespan error, continuing without Lifespan support
2022-08-16T16:16:45.548 WARN --- [ Thread-110] hypercorn.error : ASGI Framework Lifespan error, continuing without Lifespan support
2022-08-16 16:16:45,549 INFO success: infra entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-08-16T16:16:45.550 INFO --- [ Thread-110] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit)
2022-08-16T16:16:45.550 INFO --- [ Thread-110] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit)
Ready.
2022-08-16T16:17:14.027 INFO --- [ Thread-126] l.services.dynamodb.server : Initializing DynamoDB Local with the following configuration:
2022-08-16T16:17:14.027 INFO --- [ Thread-126] l.services.dynamodb.server : Port: 45225
2022-08-16T16:17:14.027 INFO --- [ Thread-126] l.services.dynamodb.server : InMemory: false
2022-08-16T16:17:14.027 INFO --- [ Thread-126] l.services.dynamodb.server : DbPath: /var/lib/localstack/tmp/state/dynamodb
2022-08-16T16:17:14.028 INFO --- [ Thread-126] l.services.dynamodb.server : SharedDb: false
2022-08-16T16:17:14.028 INFO --- [ Thread-126] l.services.dynamodb.server : shouldDelayTransientStatuses: false
2022-08-16T16:17:14.028 INFO --- [ Thread-126] l.services.dynamodb.server : CorsParams: *
2022-08-16T16:17:14.038 INFO --- [ Thread-126] l.services.dynamodb.server :
2022-08-16T16:17:15.616 INFO --- [ asgi_gw_8] localstack.utils.bootstrap : Execution of "require" took 2024.54ms
2022-08-16T16:17:17.922 INFO --- [ asgi_gw_1] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:17:18.011 INFO --- [ asgi_gw_0] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:17:18.065 INFO --- [ asgi_gw_6] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:17:18.069 INFO --- [ asgi_gw_13] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:18.109 ERROR --- [ asgi_gw_10] l.aws.handlers.logging : exception during call chain: Unknown table: name6 not found in dict_keys(['name4'])
2022-08-16T16:17:18.110 INFO --- [ asgi_gw_10] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:18.130 INFO --- [ asgi_gw_8] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:17:18.173 INFO --- [ asgi_gw_5] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:17:18.203 INFO --- [ asgi_gw_2] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:17:18.211 INFO --- [ asgi_gw_4] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:17:18.260 INFO --- [ asgi_gw_9] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:17:18.287 INFO --- [ asgi_gw_8] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:18.332 ERROR --- [ asgi_gw_18] l.aws.handlers.logging : exception during call chain: Unknown table: name1 not found in dict_keys(['name8'])
2022-08-16T16:17:18.333 INFO --- [ asgi_gw_18] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:18.357 INFO --- [ asgi_gw_3] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:17:18.378 INFO --- [ asgi_gw_7] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:17:18.394 INFO --- [ asgi_gw_8] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:18.453 INFO --- [ asgi_gw_7] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:18.464 ERROR --- [ asgi_gw_10] l.aws.handlers.logging : exception during call chain: Unknown table: name4 not found in dict_keys(['name3'])
2022-08-16T16:17:18.465 INFO --- [ asgi_gw_10] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:18.510 ERROR --- [ asgi_gw_13] l.aws.handlers.logging : exception during call chain: Unknown table: name6 not found in dict_keys(['name3'])
2022-08-16T16:17:18.511 INFO --- [ asgi_gw_13] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:18.534 INFO --- [ asgi_gw_1] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:18.544 INFO --- [ asgi_gw_11] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:18.567 INFO --- [ asgi_gw_9] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:18.592 ERROR --- [ asgi_gw_6] l.aws.handlers.logging : exception during call chain: Unknown table: name9 not found in dict_keys(['name3'])
2022-08-16T16:17:18.593 INFO --- [ asgi_gw_6] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:18.596 INFO --- [ asgi_gw_8] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:18.597 ERROR --- [ asgi_gw_17] l.aws.handlers.logging : exception during call chain: Unknown table: name2 not found in dict_keys(['name3'])
2022-08-16T16:17:18.603 INFO --- [ asgi_gw_17] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:18.626 INFO --- [ asgi_gw_14] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:18.629 ERROR --- [ asgi_gw_0] l.aws.handlers.logging : exception during call chain: Unknown table: name0 not found in dict_keys(['name3'])
2022-08-16T16:17:18.630 INFO --- [ asgi_gw_0] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:18.660 ERROR --- [ asgi_gw_19] l.aws.handlers.logging : exception during call chain: Unknown table: name5 not found in dict_keys(['name3'])
2022-08-16T16:17:18.663 INFO --- [ asgi_gw_19] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:18.671 ERROR --- [ asgi_gw_16] l.aws.handlers.logging : exception during call chain: Unknown table: name8 not found in dict_keys(['name3'])
2022-08-16T16:17:18.672 INFO --- [ asgi_gw_16] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:18.704 INFO --- [ asgi_gw_10] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:18.707 INFO --- [ asgi_gw_7] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:18.728 ERROR --- [ asgi_gw_12] l.aws.handlers.logging : exception during call chain: Unknown table: name7 not found in dict_keys(['name3'])
2022-08-16T16:17:18.729 INFO --- [ asgi_gw_12] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:18.740 INFO --- [ asgi_gw_15] localstack.request.aws : AWS dynamodb.BatchWriteItem => 200
2022-08-16T16:17:19.121 INFO --- [ asgi_gw_14] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:19.143 ERROR --- [ asgi_gw_1] l.aws.handlers.logging : exception during call chain: Unknown table: name9 not found in dict_keys(['name3'])
2022-08-16T16:17:19.157 INFO --- [ asgi_gw_1] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:19.235 INFO --- [ asgi_gw_11] localstack.services.infra : Starting mock Lambda service on http port 4566 ...
2022-08-16T16:17:19.460 INFO --- [ asgi_gw_19] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:19.478 ERROR --- [ asgi_gw_0] l.aws.handlers.logging : exception during call chain: Unknown table: name2 not found in dict_keys(['name3'])
2022-08-16T16:17:19.478 INFO --- [ asgi_gw_0] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:19.748 INFO --- [ asgi_gw_11] localstack.utils.bootstrap : Execution of "require" took 893.23ms
2022-08-16T16:17:19.756 INFO --- [ asgi_gw_4] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:17:19.819 INFO --- [ asgi_gw_10] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:19.833 ERROR --- [ asgi_gw_13] l.aws.handlers.logging : exception during call chain: Unknown table: name6 not found in dict_keys(['name3'])
2022-08-16T16:17:19.834 INFO --- [ asgi_gw_13] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:19.893 INFO --- [ asgi_gw_9] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:19.906 ERROR --- [ asgi_gw_12] l.aws.handlers.logging : exception during call chain: Unknown table: name2 not found in dict_keys(['name3'])
2022-08-16T16:17:19.907 INFO --- [ asgi_gw_12] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:20.062 INFO --- [ asgi_gw_6] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:20.080 ERROR --- [ asgi_gw_3] l.aws.handlers.logging : exception during call chain: Unknown table: name0 not found in dict_keys(['name3'])
2022-08-16T16:17:20.080 INFO --- [ asgi_gw_3] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:20.408 INFO --- [ asgi_gw_19] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:20.416 INFO --- [ asgi_gw_11] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:20.427 ERROR --- [ asgi_gw_14] l.aws.handlers.logging : exception during call chain: Unknown table: name1 not found in dict_keys(['name3'])
2022-08-16T16:17:20.427 INFO --- [ asgi_gw_14] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:20.434 ERROR --- [ asgi_gw_18] l.aws.handlers.logging : exception during call chain: Unknown table: name7 not found in dict_keys(['name3'])
2022-08-16T16:17:20.435 INFO --- [ asgi_gw_18] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:20.480 INFO --- [ asgi_gw_10] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:20.497 ERROR --- [ asgi_gw_5] l.aws.handlers.logging : exception during call chain: Unknown table: name4 not found in dict_keys(['name3'])
2022-08-16T16:17:20.498 INFO --- [ asgi_gw_5] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:20.511 INFO --- [ asgi_gw_5] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:20.527 ERROR --- [ asgi_gw_15] l.aws.handlers.logging : exception during call chain: Unknown table: name8 not found in dict_keys(['name3'])
2022-08-16T16:17:20.528 INFO --- [ asgi_gw_15] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:20.545 INFO --- [ asgi_gw_2] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:20.558 ERROR --- [ asgi_gw_12] l.aws.handlers.logging : exception during call chain: Unknown table: name5 not found in dict_keys(['name3'])
2022-08-16T16:17:20.559 INFO --- [ asgi_gw_12] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:20.737 INFO --- [ asgi_gw_16] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:20.752 ERROR --- [ asgi_gw_3] l.aws.handlers.logging : exception during call chain: Unknown table: name7 not found in dict_keys(['name3'])
2022-08-16T16:17:20.753 INFO --- [ asgi_gw_3] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:21.462 INFO --- [ asgi_gw_11] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:21.477 ERROR --- [ asgi_gw_4] l.aws.handlers.logging : exception during call chain: Unknown table: name0 not found in dict_keys(['name3'])
2022-08-16T16:17:21.478 INFO --- [ asgi_gw_4] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:21.491 INFO --- [ asgi_gw_13] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:21.505 ERROR --- [ asgi_gw_18] l.aws.handlers.logging : exception during call chain: Unknown table: name4 not found in dict_keys(['name3'])
2022-08-16T16:17:21.505 INFO --- [ asgi_gw_18] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:22.191 INFO --- [ asgi_gw_5] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:22.211 ERROR --- [ asgi_gw_17] l.aws.handlers.logging : exception during call chain: Unknown table: name9 not found in dict_keys(['name3'])
2022-08-16T16:17:22.212 INFO --- [ asgi_gw_17] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:22.214 INFO --- [ asgi_gw_2] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:22.233 ERROR --- [ asgi_gw_15] l.aws.handlers.logging : exception during call chain: Unknown table: name5 not found in dict_keys(['name3'])
2022-08-16T16:17:22.233 INFO --- [ asgi_gw_15] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:22.656 INFO --- [ asgi_gw_16] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:22.670 ERROR --- [ asgi_gw_1] l.aws.handlers.logging : exception during call chain: Unknown table: name8 not found in dict_keys(['name3'])
2022-08-16T16:17:22.671 INFO --- [ asgi_gw_1] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:22.991 INFO --- [ asgi_gw_11] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:23.005 ERROR --- [ asgi_gw_19] l.aws.handlers.logging : exception during call chain: Unknown table: name7 not found in dict_keys(['name3'])
2022-08-16T16:17:23.006 INFO --- [ asgi_gw_19] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:23.076 INFO --- [ asgi_gw_13] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:23.090 ERROR --- [ asgi_gw_4] l.aws.handlers.logging : exception during call chain: Unknown table: name4 not found in dict_keys(['name3'])
2022-08-16T16:17:23.090 INFO --- [ asgi_gw_4] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:23.179 INFO --- [ asgi_gw_6] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:23.194 ERROR --- [ asgi_gw_9] l.aws.handlers.logging : exception during call chain: Unknown table: name2 not found in dict_keys(['name3'])
2022-08-16T16:17:23.194 INFO --- [ asgi_gw_9] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:23.391 INFO --- [ asgi_gw_2] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:23.405 ERROR --- [ asgi_gw_12] l.aws.handlers.logging : exception during call chain: Unknown table: name1 not found in dict_keys(['name3'])
2022-08-16T16:17:23.405 INFO --- [ asgi_gw_12] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:23.545 INFO --- [ asgi_gw_16] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:23.560 ERROR --- [ asgi_gw_0] l.aws.handlers.logging : exception during call chain: Unknown table: name4 not found in dict_keys(['name3'])
2022-08-16T16:17:23.560 INFO --- [ asgi_gw_0] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:23.607 INFO --- [ asgi_gw_14] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:17:23.785 INFO --- [ asgi_gw_8] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:23.800 ERROR --- [ asgi_gw_13] l.aws.handlers.logging : exception during call chain: Unknown table: name8 not found in dict_keys(['name3'])
2022-08-16T16:17:23.800 INFO --- [ asgi_gw_13] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:24.448 INFO --- [ asgi_gw_17] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:24.462 ERROR --- [ asgi_gw_6] l.aws.handlers.logging : exception during call chain: Unknown table: name8 not found in dict_keys(['name3'])
2022-08-16T16:17:24.463 INFO --- [ asgi_gw_6] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:24.508 INFO --- [ asgi_gw_2] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:17:24.748 INFO --- [ asgi_gw_11] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:24.763 ERROR --- [ asgi_gw_7] l.aws.handlers.logging : exception during call chain: Unknown table: name6 not found in dict_keys(['name3'])
2022-08-16T16:17:24.763 INFO --- [ asgi_gw_7] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:25.926 INFO --- [ asgi_gw_8] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:25.940 ERROR --- [ asgi_gw_4] l.aws.handlers.logging : exception during call chain: Unknown table: name5 not found in dict_keys(['name3'])
2022-08-16T16:17:25.941 INFO --- [ asgi_gw_4] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:27.081 INFO --- [ asgi_gw_17] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:27.097 ERROR --- [ asgi_gw_9] l.aws.handlers.logging : exception during call chain: Unknown table: name7 not found in dict_keys(['name3'])
2022-08-16T16:17:27.097 INFO --- [ asgi_gw_9] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:27.142 INFO --- [ asgi_gw_12] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:17:27.333 INFO --- [ asgi_gw_11] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:27.347 ERROR --- [ asgi_gw_19] l.aws.handlers.logging : exception during call chain: Unknown table: name6 not found in dict_keys(['name3'])
2022-08-16T16:17:27.348 INFO --- [ asgi_gw_19] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:27.394 INFO --- [ asgi_gw_14] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:17:27.412 INFO --- [ asgi_gw_17] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:27.424 ERROR --- [ asgi_gw_15] l.aws.handlers.logging : exception during call chain: Unknown table: name0 not found in dict_keys(['name3'])
2022-08-16T16:17:27.424 INFO --- [ asgi_gw_15] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:27.562 INFO --- [ asgi_gw_3] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:27.577 ERROR --- [ asgi_gw_1] l.aws.handlers.logging : exception during call chain: Unknown table: name9 not found in dict_keys(['name3'])
2022-08-16T16:17:27.578 INFO --- [ asgi_gw_1] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:27.826 INFO --- [ asgi_gw_18] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:27.840 ERROR --- [ asgi_gw_16] l.aws.handlers.logging : exception during call chain: Unknown table: name1 not found in dict_keys(['name3'])
2022-08-16T16:17:27.841 INFO --- [ asgi_gw_16] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:30.626 INFO --- [ asgi_gw_10] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:30.642 ERROR --- [ asgi_gw_11] l.aws.handlers.logging : exception during call chain: Unknown table: name2 not found in dict_keys(['name3'])
2022-08-16T16:17:30.642 INFO --- [ asgi_gw_11] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:30.685 INFO --- [ asgi_gw_5] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:17:31.396 INFO --- [ asgi_gw_0] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:31.410 ERROR --- [ asgi_gw_17] l.aws.handlers.logging : exception during call chain: Unknown table: name1 not found in dict_keys(['name3'])
2022-08-16T16:17:31.411 INFO --- [ asgi_gw_17] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:31.457 INFO --- [ asgi_gw_3] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:17:32.050 INFO --- [ asgi_gw_10] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:32.065 ERROR --- [ asgi_gw_19] l.aws.handlers.logging : exception during call chain: Unknown table: name0 not found in dict_keys(['name3'])
2022-08-16T16:17:32.065 INFO --- [ asgi_gw_19] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:32.108 INFO --- [ asgi_gw_4] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:17:32.716 INFO --- [ asgi_gw_0] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:32.730 ERROR --- [ asgi_gw_15] l.aws.handlers.logging : exception during call chain: Unknown table: name9 not found in dict_keys(['name3'])
2022-08-16T16:17:32.731 INFO --- [ asgi_gw_15] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:32.778 INFO --- [ asgi_gw_1] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:17:39.564 INFO --- [ asgi_gw_14] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:17:39.576 ERROR --- [ asgi_gw_10] l.aws.handlers.logging : exception during call chain: Unknown table: name5 not found in dict_keys(['name3'])
2022-08-16T16:17:39.577 INFO --- [ asgi_gw_10] localstack.request.aws : AWS dynamodb.BatchWriteItem => 500 (InternalError)
2022-08-16T16:17:39.616 INFO --- [ asgi_gw_2] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:18:02.868 INFO --- [ asgi_gw_11] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:18:02.911 INFO --- [ asgi_gw_17] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:18:02.924 INFO --- [ asgi_gw_5] localstack.request.aws : AWS dynamodb.BatchWriteItem => 200
2022-08-16T16:18:02.967 INFO --- [ asgi_gw_0] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:18:02.998 INFO --- [ asgi_gw_1] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:18:03.040 INFO --- [ asgi_gw_6] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:18:03.054 INFO --- [ asgi_gw_19] localstack.request.aws : AWS dynamodb.BatchWriteItem => 200
2022-08-16T16:18:03.096 INFO --- [ asgi_gw_17] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:18:03.126 INFO --- [ asgi_gw_3] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:18:03.169 INFO --- [ asgi_gw_8] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:18:03.186 INFO --- [ asgi_gw_13] localstack.request.aws : AWS dynamodb.BatchWriteItem => 200
2022-08-16T16:18:03.229 INFO --- [ asgi_gw_6] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:18:03.258 INFO --- [ asgi_gw_7] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:18:03.302 INFO --- [ asgi_gw_11] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:18:03.316 INFO --- [ asgi_gw_18] localstack.request.aws : AWS dynamodb.BatchWriteItem => 200
2022-08-16T16:18:03.357 INFO --- [ asgi_gw_8] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:18:03.388 INFO --- [ asgi_gw_16] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:18:03.510 INFO --- [ asgi_gw_1] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:18:03.526 INFO --- [ asgi_gw_0] localstack.request.aws : AWS dynamodb.BatchWriteItem => 200
2022-08-16T16:18:03.568 INFO --- [ asgi_gw_11] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:18:03.601 INFO --- [ asgi_gw_5] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:18:03.649 INFO --- [ asgi_gw_3] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:18:03.665 INFO --- [ asgi_gw_17] localstack.request.aws : AWS dynamodb.BatchWriteItem => 200
2022-08-16T16:18:03.711 INFO --- [ asgi_gw_1] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:18:03.747 INFO --- [ asgi_gw_19] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:18:03.797 INFO --- [ asgi_gw_7] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:18:03.811 INFO --- [ asgi_gw_6] localstack.request.aws : AWS dynamodb.BatchWriteItem => 200
2022-08-16T16:18:03.857 INFO --- [ asgi_gw_3] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:18:03.892 INFO --- [ asgi_gw_13] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:18:03.942 INFO --- [ asgi_gw_16] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:18:03.958 INFO --- [ asgi_gw_8] localstack.request.aws : AWS dynamodb.BatchWriteItem => 200
2022-08-16T16:18:04.004 INFO --- [ asgi_gw_7] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:18:04.039 INFO --- [ asgi_gw_18] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:18:04.085 INFO --- [ asgi_gw_5] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:18:04.099 INFO --- [ asgi_gw_11] localstack.request.aws : AWS dynamodb.BatchWriteItem => 200
2022-08-16T16:18:04.140 INFO --- [ asgi_gw_16] localstack.request.aws : AWS dynamodb.DeleteTable => 200
2022-08-16T16:18:04.173 INFO --- [ asgi_gw_0] localstack.request.aws : AWS dynamodb.CreateTable => 200
2022-08-16T16:18:04.218 INFO --- [ asgi_gw_19] localstack.request.aws : AWS dynamodb.GetItem => 200
2022-08-16T16:18:04.233 INFO --- [ asgi_gw_1] localstack.request.aws : AWS dynamodb.BatchWriteItem => 200
2022-08-16T16:18:04.271 INFO --- [ asgi_gw_5] localstack.request.aws : AWS dynamodb.DeleteTable => 200
```
| https://github.com/localstack/localstack/issues/6682 | https://github.com/localstack/localstack/pull/6689 | fc8b122e2780056165f143582cf797be9fd8fb3f | ae4a2ad2d23ec6d34750fd226a64c997bd1844b6 | "2022-08-16T16:24:34Z" | python | "2022-08-20T19:49:16Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,674 | ["localstack/services/sqs/provider.py", "localstack/testing/snapshots/transformer_utility.py", "tests/integration/test_sqs.py", "tests/integration/test_sqs.snapshot.json"] | bug: localstack no longer sends the attribute SequenceNumber for SQS FIFO queues | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The attribute `SequenceNumber` isn't present in the message for high throughput FIFO queues.
```
docker exec -it localstack awslocal sqs receive-message --queue-url http://localhost:4566/000000000000/my-queue.fifo --attribute-name All
```
```json
{
"Messages": [
{
"MessageId": "...",
"ReceiptHandle": "...",
"MD5OfBody": "...",
"Body": "...",
"Attributes": {
"SenderId": "...",
"SentTimestamp": "...",
"ApproximateReceiveCount": "...",
"ApproximateFirstReceiveTimestamp": "...",
"MessageDeduplicationId": "...",
"MessageGroupId": "..."
}
}
]
}
```
### Expected Behavior
Up to version `0.14.2`, the message contains the attribute `SequenceNumber`
```json
{
"Messages": [
{
"MessageId": "...",
"ReceiptHandle": "...",
"MD5OfBody": "...",
"Body": "...",
"Attributes": {
"SenderId": "...",
"SentTimestamp": "...",
"ApproximateReceiveCount": "...",
"ApproximateFirstReceiveTimestamp": "...",
"MessageDeduplicationId": "...",
"MessageGroupId": "...",
"SequenceNumber": "09883580580611393766"
}
}
]
}
```
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
```
docker exec -it localstack awslocal sns create-topic --name my-topic.fifo --attributes FifoTopic=true
docker exec -it localstack awslocal sqs create-queue --queue-name my-queue.fifo --attributes FifoQueue=true,DeduplicationScope=messageGroup,FifoThroughputLimit=perMessageGroupId
docker exec -it localstack awslocal sns subscribe --topic-arn arn:aws:sns:us-east-1:000000000000:my-topic.fifo --protocol sqs --notification-endpoint http://localhost:4566/000000000000/my-queue.fifo
docker exec -it localstack awslocal sns publish --topic-arn arn:aws:sns:us-east-1:000000000000:my-topic.fifo --message 'my-message' --message-group-id my-group --message-deduplication-id 'my-deduplication-id'
docker exec -it localstack awslocal sqs receive-message --queue-url http://localhost:4566/000000000000/my-queue.fifo --attribute-name All
```
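As a quick client-side check of the regression, here is a minimal sketch (the helper name is mine, not part of the original report) that lists which FIFO-specific attributes are missing from a received message:

```python
def missing_fifo_attributes(message: dict) -> list:
    """Return the FIFO-specific attributes absent from a received SQS message.

    For a message delivered through a FIFO queue, AWS includes
    MessageGroupId, MessageDeduplicationId and SequenceNumber
    in the Attributes map.
    """
    expected = ("MessageGroupId", "MessageDeduplicationId", "SequenceNumber")
    attributes = message.get("Attributes", {})
    return [name for name in expected if name not in attributes]


# Attributes as returned in the "Current Behavior" section above
broken = {
    "Attributes": {
        "SenderId": "000000000000",
        "SentTimestamp": "0",
        "ApproximateReceiveCount": "1",
        "ApproximateFirstReceiveTimestamp": "0",
        "MessageDeduplicationId": "my-deduplication-id",
        "MessageGroupId": "my-group",
    }
}
print(missing_fifo_attributes(broken))  # → ['SequenceNumber']
```

Against version `0.14.2` (or real AWS), the returned list would be empty.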
### Environment
```markdown
- OS: Docker on Mac
- LocalStack: 1.0.4
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6674 | https://github.com/localstack/localstack/pull/6713 | 35874212437cc58407a59f17e007fa0dc9aab854 | f3c393f61c2d8ae0673d6071ba4e8a24df3813c3 | "2022-08-15T20:37:01Z" | python | "2022-08-22T19:16:50Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,659 | ["localstack/services/s3/provider.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json"] | bug: Checksum mismatch when putting object using AWS Java SDK | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When performing a putObject using the Java AWS SDK and providing only the checksum algorithm, LocalStack returns a 400 InvalidRequest with the error: `Value for x-amz-checksum-sha256 header is invalid.`
localstack logs:
```
2022-08-11T11:59:45.411 DEBUG --- [ asgi_gw_0] l.aws.serving.wsgi : PUT localstack:4566/integration-bucket/testObjectKey_5f1gUa
2022-08-11T11:59:45.433 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.PutObject => 400 (InvalidRequest); PutObjectRequest({'ACL': None, 'Bucket': 'integration-bucket', 'CacheControl': None, 'ContentDisposition': None, 'ContentEncoding': 'aws-chunked', 'ContentLanguage': None, 'ContentLength': 340, 'ContentMD5': None, 'ContentType': 'application/octet-stream', 'ChecksumAlgorithm': 'SHA256', 'ChecksumCRC32': None, 'ChecksumCRC32C': None, 'ChecksumSHA1': None, 'ChecksumSHA256': None, 'Expires': None, 'GrantFullControl': None, 'GrantRead': None, 'GrantReadACP': None, 'GrantWriteACP': None, 'Key': 'testObjectKey_5f1gUa', 'Metadata': {}, 'ServerSideEncryption': None, 'StorageClass': None, 'WebsiteRedirectLocation': None, 'SSECustomerAlgorithm': None, 'SSECustomerKey': None, 'SSECustomerKeyMD5': None, 'SSEKMSKeyId': None, 'SSEKMSEncryptionContext': None, 'BucketKeyEnabled': None, 'RequestPayer': None, 'Tagging': None, 'ObjectLockMode': None, 'ObjectLockRetainUntilDate': None, 'ObjectLockLegalHoldStatus': None, 'ExpectedBucketOwner': None, 'Body': <localstack.http.asgi.HTTPRequestEventStreamAdapter object at 0x7f69676d64a0>}, headers={'Host': 'localstack:4566', 'amz-sdk-invocation-id': 'a2de9efb-37a6-edbe-9fea-8bc0bfe9f616', 'amz-sdk-request': 'attempt=1; max=4', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20220811/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;content-encoding;content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length;x-amz-sdk-checksum-algorithm;x-amz-trailer, Signature=520a45c4211245e316fb696748efd0077cb4582a2f9f65f1ff3d6fb1578f61f3', 'Content-Encoding': 'aws-chunked', 'Content-Type': 'application/octet-stream', 'Expect': '100-continue', 'User-Agent': 'aws-sdk-java/2.17.155 Linux/5.10.102.1-microsoft-standard-WSL2 OpenJDK_64-Bit_Server_VM/11.0.16+8-LTS Java/11.0.16 vendor/Azul_Systems__Inc. 
io/sync http/Apache cfg/retry-mode/legacy', 'x-amz-content-sha256': 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER', 'X-Amz-Date': '20220811T115945Z', 'x-amz-decoded-content-length': '10', 'x-amz-sdk-checksum-algorithm': 'SHA256', 'x-amz-trailer': 'x-amz-checksum-sha256', 'X-Amzn-Trace-Id': 'Root=1-da6881b4-014a482e02f8da9638b20691;Parent=e90ba8e1cedb7f04;Sampled=1', 'Content-Length': '340', 'Connection': 'Keep-Alive', 'x-localstack-tgt-api': 's3', 'x-localstack-edge': 'http://localstack:4566', 'X-Forwarded-For': '127.0.0.1, localstack:4566'}); InvalidRequest(Value for x-amz-checksum-sha256 header is invalid., headers={'Content-Type': 'application/xml', 'Content-Length': '150', 'Location': '/integration-bucket', 'Last-Modified': 'Thu, 11 Aug 2022 11:59:45 GMT', 'x-amz-request-id': '0D5C04B30BC16C20', 'x-amz-id-2': 'MzRISOwyjmnup0D5C04B30BC16C207/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH', 'Access-Control-Allow-Headers': 'authorization,cache-control,content-length,content-md5,content-type,etag,location,x-amz-acl,x-amz-content-sha256,x-amz-date,x-amz-request-id,x-amz-security-token,x-amz-tagging,x-amz-target,x-amz-user-agent,x-amz-version-id,x-amzn-requestid,x-localstack-target,amz-sdk-invocation-id,amz-sdk-request', 'Access-Control-Expose-Headers': 'etag,x-amz-version-id'})
```
### Expected Behavior
No error is returned and the file is uploaded with a checksum.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
Java AWS snippet:
```java
S3Client s3Client = S3Client.builder().build();
String bucketName = "integration-bucket";
String key = "testObjectKey_5f1gUa";
byte[] fileContent = "Hello Blob".getBytes(StandardCharsets.UTF_8);
PutObjectRequest request = PutObjectRequest.builder()
.bucket(bucketName)
.key(key)
.checksumAlgorithm(ChecksumAlgorithm.SHA256)
.build();
s3Client.putObject(request, RequestBody.fromBytes(fileContent));
```
### Environment
```markdown
- OS: Docker on Windows 10
- LocalStack: latest
```
### Anything else?
The integrity checks are still a work in progress in #6008, but PR #6619 was made for this feature a couple of days ago.
According to [the documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html), when using an SDK it is not mandatory to calculate the checksum yourself:
> When you're using an SDK, you can set the value of the x-amz-sdk-checksum-algorithm parameter to the algorithm that you want Amazon S3 to use when calculating the checksum. Amazon S3 automatically calculates the checksum value.
> When you're using the REST API, you don't use the x-amz-sdk-checksum-algorithm parameter. Instead, you use one of the algorithm-specific headers (for example, x-amz-checksum-crc32).
However the "Amazon S3 automatically calculates the checksum value" part might actually be misleading according to [the API documentation](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html):
> x-amz-sdk-checksum-algorithm: Indicates the algorithm used to create the checksum for the object when using the SDK. This header will not provide any additional functionality if not using the SDK. When sending this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent.
Versions before this PR (1.0.3) would not throw an error but return the following contents when retrieving the object:
```
a;chunk-signature=a70d7ab7401685b64fea62f21aa3a6b726687d74a14a94d0607f407e6bd17c68
Hello Blob
0;chunk-signature=58755adb9d0360af56399ae88a8a2d2c319c262a3eb518739bff1aeb93482b03
x-amz-checksum-sha256:/cvtq0zfnMYt+eWeUUr1zGtu/6eL8ahMzDU8PmUaFys=
x-amz-trailer-signature:cc210b37d675e70b192e09889b95910411ddaca666c5913f95855f7841debe87
```
In this case, the `x-amz-trailer` is used instead of the `x-amz-checksum`. The value of the `x-amz-trailer` (=`x-amz-checksum-sha256`) refers to some extra trailing data in the body.
It's unclear whether the chunked data is supported by localstack, there is some code for stripping chunk-signatures in there but this is only done when the "Content-MD5" header is set in a function called `check_md5_content`: https://github.com/localstack/localstack/blob/95e9ba2fd950df46cadf2810410aadfef235af0d/localstack/services/s3/s3_listener.py#L997
I don't know the full story, but the stripping of the signatures seems out of place (it is unrelated to the MD5 checksum).
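For reference, the `x-amz-checksum-sha256` trailer value is simply the base64 encoding of the raw SHA-256 digest of the decoded payload (here, the 10-byte `Hello Blob` body from the snippet above). A small sketch of how a client or server could compute it; this is illustrative, not LocalStack code:

```python
import base64
import hashlib

payload = b"Hello Blob"  # matches x-amz-decoded-content-length: 10

# The trailer carries base64 of the raw digest, not the hex string
checksum = base64.b64encode(hashlib.sha256(payload).digest()).decode("ascii")
print(checksum)  # the same value as the x-amz-checksum-sha256 trailer above
```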
| https://github.com/localstack/localstack/issues/6659 | https://github.com/localstack/localstack/pull/8677 | e04eeb2148d656d832af9348bc74e37bdab3f3ff | 2e8194722fc156daaad8373ec8569dae5e090863 | "2022-08-12T10:22:48Z" | python | "2023-07-12T15:35:44Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,657 | ["localstack/services/sns/provider.py", "tests/integration/test_sns.py", "tests/integration/test_sns.snapshot.json"] | SNS FIFO topic to SQS FIFO queue does not seem to work | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
My SQS queue is not receiving notifications from SNS when both the topic and the queue have FIFO enabled:
```
# Create SNS topic
docker exec -it localstack awslocal sns create-topic --name my-topic.fifo --attributes FifoTopic=true
# Create SQS queue
docker exec -it localstack awslocal sqs create-queue --queue-name my-queue.fifo --attributes FifoQueue=true
# Subscribe queue to topic
docker exec -it localstack awslocal sns subscribe --topic-arn arn:aws:sns:us-east-1:000000000000:my-topic.fifo --protocol sqs --notification-endpoint http://localhost:4566/000000000000/my-queue.fifo
# Send test notification
docker exec -it localstack awslocal sns publish-batch --topic-arn arn:aws:sns:us-east-1:000000000000:my-topic.fifo --publish-batch-request-entries Id=myId,Message=myMessage,MessageGroupId=myMessageGroupId,MessageDeduplicationId=myMessageDeduplicationId
# Receive messages
docker exec -it localstack awslocal sqs receive-message --queue-url http://localhost:4566/000000000000/my-queue.fifo --attribute-name All
```
Output:
```
{
"TopicArn": "arn:aws:sns:us-east-1:000000000000:my-topic.fifo"
}
{
"QueueUrl": "http://localhost:4566/000000000000/my-queue.fifo"
}
{
"SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:my-topic.fifo:46cd4bc8-c572-4a5b-ab69-cd0db1b160b4"
}
{
"Successful": [
{
"Id": "myId",
"MessageId": "c9b2c232-4a56-4875-9e11-21cb6b386221"
}
],
"Failed": []
}
```
No messages were received.
### Expected Behavior
If I do the above steps without FIFO on SNS and SQS, then it works as it should:
```
docker exec -it localstack awslocal sns create-topic --name my-topic
docker exec -it localstack awslocal sqs create-queue --queue-name my-queue
docker exec -it localstack awslocal sns subscribe --topic-arn arn:aws:sns:us-east-1:000000000000:my-topic --protocol sqs --notification-endpoint http://localhost:4566/000000000000/my-queue
docker exec -it localstack awslocal sns publish-batch --topic-arn arn:aws:sns:us-east-1:000000000000:my-topic --publish-batch-request-entries Id=myId,Message=myMessage,MessageGroupId=myMessageGroupId,MessageDeduplicationId=myMessageDeduplicationId
docker exec -it localstack awslocal sqs receive-message --queue-url http://localhost:4566/000000000000/my-queue --attribute-name All
```
Outputs:
```
{
"TopicArn": "arn:aws:sns:us-east-1:000000000000:my-topic"
}
{
"QueueUrl": "http://localhost:4566/000000000000/my-queue"
}
{
"SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:my-topic:06b585e5-5ac3-434b-b817-489694fa891b"
}
{
"Successful": [
{
"Id": "myId",
"MessageId": "46bb7006-4e5e-4472-a1c0-29c902edff2c"
}
],
"Failed": []
}
{
"Messages": [
{
"MessageId": "1a7d9db0-430b-424f-b23c-9f8f2f5f5c5f",
"ReceiptHandle": "NjU0ZjJjMDYtN2I0Yy00OTgzLThmMzYtNWVjNTQ5ZjAwNWEzIGFybjphd3M6c3FzOnVzLWVhc3QtMTowMDAwMDAwMDAwMDA6bXktcXVldWUgMWE3ZDlkYjAtNDMwYi00MjRmLWIyM2MtOWY4ZjJmNWY1YzVmIDE2NjAyMzg2MzAuMzg4NzAyOQ==",
"MD5OfBody": "f7ad57b0788283b7afaa4dba8ea1784b",
"Body": "{\"Type\": \"Notification\", \"MessageId\": \"46bb7006-4e5e-4472-a1c0-29c902edff2c\", \"TopicArn\": \"arn:aws:sns:us-east-1:000000000000:my-topic\", \"Message\": \"myMessage\", \"Timestamp\": \"2022-08-11T17:23:49.814Z\", \"SignatureVersion\": \"1\", \"Signature\": \"EXAMPLEpH+..\", \"SigningCertURL\": \"https://sns.us-east-1.amazonaws.com/SimpleNotificationService-0000000000000000000000.pem\", \"UnsubscribeURL\": \"http://localhost:4566/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-1:000000000000:my-topic:06b585e5-5ac3-434b-b817-489694fa891b\"}",
"Attributes": {
"SenderId": "000000000000",
"SentTimestamp": "1660238629853",
"ApproximateReceiveCount": "1",
"ApproximateFirstReceiveTimestamp": "1660238630388"
}
}
]
}
```
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal s3 mb s3://mybucket
### Environment
```markdown
- OS: 5.4.0-1086-azure (Github Codespaces)
- LocalStack:latest
docker-compose.yml
localstack:
image: localstack/localstack:latest
container_name: localstack
environment:
- SERVICES=sqs,sns
- AWS_DEFAULT_REGION=us-east-1
- EDGE_PORT=4566
ports:
- '4566-4597:4566-4597'
volumes:
- "${TMPDIR:-/tmp/localstack}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
networks:
- app_network
```
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6657 | https://github.com/localstack/localstack/pull/6660 | facdc00018439937a947561ccce34aa4ccc606a4 | 58326f9cc379586925cf9c0d80a926c72274ff53 | "2022-08-11T17:31:36Z" | python | "2022-08-19T15:50:02Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,654 | ["localstack/services/s3/provider.py", "localstack/services/s3/provider_stream.py", "localstack/services/s3/utils.py", "tests/integration/s3/test_s3.py", "tests/integration/s3/test_s3.snapshot.json", "tests/unit/test_s3.py"] | bug: S3 Lifecycle not showing expiration header | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I'm configuring the S3 bucket with a lifecycle configuration; everything works fine, but when I get the object, the `Expiration` header is not included in the response.
### Expected Behavior
According to the documentation, if the bucket has a lifecycle configuration enabled, the `Expiration` header should appear in the response.
Executing the command
```bash
$ awslocal s3api get-object help
```
We get the documentation of `get-object` command and in the `OUTPUT` section we have:
>Expiration -> (string)
> If the object expiration is configured (see PUT Bucket lifecycle),
> the response includes this header. It includes the expiry-date and
> rule-id key-value pairs providing object expiration information. The
> value of the rule-id is URL-encoded.
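For illustration, the header value AWS builds is a pair of quoted key-value entries with the rule ID URL-encoded. A sketch of the expected format; the helper function and the expiry date are mine, purely illustrative:

```python
from datetime import datetime, timezone
from urllib.parse import quote


def expiration_header(expiry: datetime, rule_id: str) -> str:
    """Build an S3-style Expiration header value (illustrative only)."""
    date = expiry.strftime("%a, %d %b %Y %H:%M:%S GMT")
    return f'expiry-date="{date}", rule-id="{quote(rule_id)}"'


print(expiration_header(datetime(2022, 8, 12, tzinfo=timezone.utc), "My Rule ID"))
# → expiry-date="Fri, 12 Aug 2022 00:00:00 GMT", rule-id="My%20Rule%20ID"
```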
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### Created the bucket-lifecycle.json
```json
{
"Rules": [
{
"ID": "My Rule ID",
"Status": "Enabled",
"Expiration": {
"Days": 1
},
"NoncurrentVersionExpiration": {
"NoncurrentDays": 1
}
}
]
}
```
#### Command for create the lifecycle configuration on a existing bucket
```bash
$ awslocal s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://bucket-lifecycle.json
```
#### Validate the configuration
```bash
$ awslocal s3api get-bucket-lifecycle-configuration --bucket my-bucket
output:
{
"Rules": [
{
"Expiration": {
"Days": 1
},
"ID": "My Rule ID",
"Status": "Enabled",
"NoncurrentVersionExpiration": {
"NoncurrentDays": 1
}
}
]
}
```
#### Uploaded an object:
```bash
$ awslocal s3api put-object --bucket my-bucket --key teste.json --body bucket-lifecycle.json --content-type application/json
output:
{
"ETag": "\"eb702ecb4e41b09ca62934217d5f13af\""
}
```
#### Fetch the object
```bash
$ awslocal s3api get-object --bucket my-bucket --key teste.json bucket-lifecycle.json
output:
{
"AcceptRanges": "bytes",
"LastModified": "2022-08-11T13:39:11+00:00",
"ContentLength": 287,
"ETag": "\"eb702ecb4e41b09ca62934217d5f13af\"",
"VersionId": "null",
"ContentLanguage": "en-US",
"ContentType": "application/json",
"Metadata": {}
}
```
### Environment
```markdown
- OS: MacOS Monterey
- LocalStack: 1.0.3
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6654 | https://github.com/localstack/localstack/pull/8651 | 6c743db88062bab70021108752798e467357f485 | a985d68d469858dfe77537617a16ccdf1f119483 | "2022-08-11T14:02:37Z" | python | "2023-07-08T21:53:15Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 6,639 | ["localstack/services/sns/provider.py", "localstack/utils/aws/dead_letter_queue.py", "tests/integration/test_sns.py", "tests/integration/test_sns.snapshot.json"] | bug: SNS creates wrong MessageAttributes when relaying msg to an SQS DLQ | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When relaying a message with `MessageAttributes` to an SQS DLQ after failing to deliver to another SQS queue, the message attributes were wrong.
Snapshots results:
With `RawDelivery` set to `false`:
```json
{
"Body": {
"Message": "test_dlq_after_sqs_endpoint_deleted",
"MessageAttributes": {
"attr1": {
"Type": "Number",
"Value": "111"
},
"attr2": {
"Type": "Binary",
"Value": "AgME"
}
},
"MessageId": "<uuid:1>",
"Signature": "<signature>",
"SignatureVersion": "1",
"SigningCertURL": "https://sns.<region>.amazonaws.com/SimpleNotificationService-<signing-cert-file:1>",
"Timestamp": "date",
"TopicArn": "arn:aws:sns:<region>:111111111111:<resource:1>",
"Type": "Notification",
"UnsubscribeURL": "<unsubscribe-domain>/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:<region>:111111111111:<resource:1>:<resource:2>",
},
"MD5OfBody": "<md5-hash>",
"MD5OfMessageAttributes": "<md5-hash>",
"MessageAttributes": {
"ErrorCode": {
"DataType": "String",
"StringValue": "200"
},
"ErrorMessage": {
"DataType": "String",
"StringValue": "An error occurred (AWS.SimpleQueueService.NonExistentQueue) when calling the GetQueueUrl operation (reached max retries: 0): The specified queue does not exist for this wsdl version."
},
"RequestID": {
"DataType": "String",
"StringValue": "<uuid:?>"
}
},
"MessageId": "<uuid:2>",
"ReceiptHandle": "<receipt-handle:1>"
}
```
We can see the added error `MessageAttributes` on the received SQS message.
With `RawDelivery` set to `true`:
```json
{
"Body": "test_dlq_after_sqs_endpoint_deleted",
"MD5OfBody": "<md5-hash>",
"MD5OfMessageAttributes": "<md5-hash>",
"MessageAttributes": {
"ErrorCode": {
"DataType": "String",
"StringValue": "200"
},
"ErrorMessage": {
"DataType": "String",
"StringValue": "An error occurred (AWS.SimpleQueueService.NonExistentQueue) when calling the GetQueueUrl operation (reached max retries: 0): The specified queue does not exist for this wsdl version."
},
"RequestID": {
"DataType": "String",
"StringValue": "<uuid:?>"
}
},
"MessageId": "<uuid:1>",
"ReceiptHandle": "<receipt-handle:1>"
}
```
### Expected Behavior
With `RawDelivery` set to `false`:
```json
{
"Body": {
"Type": "Notification",
"MessageId": "<uuid:1>",
"TopicArn": "arn:aws:sns:<region>:111111111111:<resource:1>",
"Message": "test_dlq_after_sqs_endpoint_deleted",
"Timestamp": "date",
"SignatureVersion": "1",
"Signature": "<signature>",
"SigningCertURL": "https://sns.<region>.amazonaws.com/SimpleNotificationService-<signing-cert-file:1>",
"UnsubscribeURL": "<unsubscribe-domain>/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:<region>:111111111111:<resource:1>:<resource:2>",
"MessageAttributes": {
"attr2": {
"Type": "Binary",
"Value": "AgME"
},
"attr1": {
"Type": "Number",
"Value": "111"
}
}
},
"MD5OfBody": "<md5-hash>",
"MessageId": "<uuid:2>",
"ReceiptHandle": "<receipt-handle:1>"
}
```
With `RawDelivery` set to `true`:
```json
{
"Body": "test_dlq_after_sqs_endpoint_deleted",
"MD5OfBody": "<md5-hash>",
"MD5OfMessageAttributes": "<md5-hash>",
"MessageAttributes": {
"attr1": {
"DataType": "Number",
"StringValue": "111"
},
"attr2": {
"BinaryValue": "b'\\x02\\x03\\x04'",
"DataType": "Binary"
}
},
"MessageId": "<uuid:1>",
"ReceiptHandle": "<receipt-handle:1>"
}
```
We can see the attributes are kept and not replaced with Error messages.
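The mapping implied by the two snapshots can be sketched as a small conversion from the SNS notification attribute shape to the SQS raw-delivery shape (a simplified illustration, not the actual provider code):

```python
import base64


def sns_attrs_to_sqs(attrs: dict) -> dict:
    """Convert SNS-notification MessageAttributes to the SQS raw-delivery shape."""
    out = {}
    for name, entry in attrs.items():
        if entry["Type"] == "Binary":
            # SNS carries binary values base64-encoded; SQS exposes raw bytes
            out[name] = {
                "DataType": "Binary",
                "BinaryValue": base64.b64decode(entry["Value"]),
            }
        else:
            out[name] = {"DataType": entry["Type"], "StringValue": entry["Value"]}
    return out


sns_attrs = {
    "attr1": {"Type": "Number", "Value": "111"},
    "attr2": {"Type": "Binary", "Value": "AgME"},
}
print(sns_attrs_to_sqs(sns_attrs))
```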
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
Using `tests.integration.test_sns.TestSNSProvider.test_redrive_policy_sqs_queue_subscription`
### Environment
```markdown
- OS: MacOS Monterey 12.5
- LocalStack: 1.0.5dev (latest)
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/6639 | https://github.com/localstack/localstack/pull/6640 | 18ee9992f634ff9c8c8c1ace9509016c0e30e766 | 99557cfc4771348e15fd5c2e73102eb237bd2286 | "2022-08-10T13:45:46Z" | python | "2022-08-11T16:33:34Z" |