status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | localstack/localstack | https://github.com/localstack/localstack | 4,581 | ["localstack/services/dynamodb/dynamodb_listener.py", "tests/integration/test_dynamodb.py"] | bug: Dynamodb table describe output missing SSEDescription key | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
MusicCollection
{
"Table": {
"AttributeDefinitions": [
{
"AttributeName": "Artist",
"AttributeType": "S"
},
{
"AttributeName": "SongTitle",
"AttributeType": "S"
}
],
"TableName": "MusicCollection",
"KeySchema": [
{
"AttributeName": "Artist",
"KeyType": "HASH"
},
{
"AttributeName": "SongTitle",
"KeyType": "RANGE"
}
],
"TableStatus": "ACTIVE",
"CreationDateTime": "2021-09-12T00:30:23.741000-07:00",
"ProvisionedThroughput": {
"LastIncreaseDateTime": "1969-12-31T16:00:00-08:00",
"LastDecreaseDateTime": "1969-12-31T16:00:00-08:00",
"NumberOfDecreasesToday": 0,
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
},
"TableSizeBytes": 0,
"ItemCount": 0,
"TableArn": "arn:aws:dynamodb:us-east-1:000000000000:table/MusicCollection"
}
}
### Expected Behavior
{
"TableDescription": {
"AttributeDefinitions": [
{
"AttributeName": "Artist",
"AttributeType": "S"
},
{
"AttributeName": "SongTitle",
"AttributeType": "S"
}
],
"TableName": "MusicCollection",
"KeySchema": [
{
"AttributeName": "Artist",
"KeyType": "HASH"
},
{
"AttributeName": "SongTitle",
"KeyType": "RANGE"
}
],
"TableStatus": "CREATING",
"CreationDateTime": "2020-05-27T11:12:16.431000-07:00",
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
},
"TableSizeBytes": 0,
"ItemCount": 0,
"TableArn": "arn:aws:dynamodb:us-west-2:123456789012:table/MusicCollection",
"TableId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
"SSEDescription": {
"Status": "ENABLED",
"SSEType": "KMS",
"KMSMasterKeyArn": "arn:aws:kms:us-west-2:123456789012:key/abcd1234-abcd-1234-a123-ab1234a1b234"
}
}
}
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
-> docker run --rm -it -p 4566:4566 -p 4571:4571 localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
## Create the dynamodb table from separate terminal
-> aws --endpoint-url http://localhost:4566 dynamodb create-table \
--table-name MusicCollection \
--attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=SongTitle,AttributeType=S \
--key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
--sse-specification Enabled=true,SSEType=KMS,KMSMasterKeyId=abcd1234-abcd-1234-a123-ab1234a1b234
## Describe the table
-> aws --endpoint-url http://localhost:4566 dynamodb describe-table --table-name MusicCollection
Not able to see the description of the server-side encryption status on the specified table.
### Environment
```markdown
- OS: macOS Big Sur
- LocalStack: 0.12.17.5
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/4581 | https://github.com/localstack/localstack/pull/4601 | 09e370a5beccd85cae8f8b2dd9acdc673e08e0c6 | c8ed088ee6af21d9f770545676724df90535f8cd | "2021-09-12T08:01:45Z" | python | "2021-09-16T22:11:13Z" |
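The gap reported above is mechanical enough to express as a check. A minimal sketch in plain Python (not LocalStack's actual fix), using reduced versions of the two outputs quoted in the issue:

```python
def missing_sse_description(table_description: dict, sse_requested: bool) -> bool:
    """Return True when SSE was requested at table creation but the
    describe-table output lacks the SSEDescription key (the bug above)."""
    return sse_requested and "SSEDescription" not in table_description

# Reduced versions of the two outputs quoted in the issue.
actual = {"TableName": "MusicCollection", "TableStatus": "ACTIVE"}
expected = {
    "TableName": "MusicCollection",
    "SSEDescription": {"Status": "ENABLED", "SSEType": "KMS"},
}

assert missing_sse_description(actual, sse_requested=True)        # bug reproduced
assert not missing_sse_description(expected, sse_requested=True)  # fixed shape
```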
closed | localstack/localstack | https://github.com/localstack/localstack | 4,541 | ["localstack/services/kms/kms_listener.py", "tests/integration/fixtures.py", "tests/integration/test_kms.py"] | bug: KMS create grant failed with 500 | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
```
An error occurred (500) when calling the CreateGrant operation (reached max retries: 4): <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
```
### Expected Behavior
_No response_
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
helm install localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
aws kms create-grant --key-id=738f46cb-444b-4fc5-9c6f-8c1165279dbb --grantee-principal=arn:aws:iam::000000000000:role/test --operations=Encrypt
### Environment
```markdown
- OS:
- LocalStack: 0.12.17
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/4541 | https://github.com/localstack/localstack/pull/4627 | aa418828e295bebf24d439149ea7f93c02fafaa5 | acd18ec2b66f2391af61d37e0b092dfc132853e2 | "2021-09-06T02:56:27Z" | python | "2021-09-29T19:32:08Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,533 | ["localstack/services/cloudformation/models/stepfunctions.py", "localstack/utils/generic/wait_utils.py", "tests/integration/cloudformation/test_cloudformation_stepfunctions.py", "tests/integration/fixtures.py", "tests/integration/templates/stepfunctions_statemachine_substitutions.yaml", "tests/unit/test_cloudformation.py"] | bug: State Machine references don't get resolved properly | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Lambda refs get lost
### Expected Behavior
Lambda refs work in state machines
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal s3 mb s3://mybucket
### Environment
```markdown
- OS:
- LocalStack:
```
### Anything else?
This is based on a conversation I had with @dominikschubert | https://github.com/localstack/localstack/issues/4533 | https://github.com/localstack/localstack/pull/4575 | 47a735b908c47e84bf1a1167555b16b83b9778b1 | d55d1fed1c1461c0b6a072335d8e6eddff807b53 | "2021-09-02T17:23:23Z" | python | "2021-09-12T20:36:33Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,532 | ["localstack/utils/http_utils.py", "tests/integration/test_api_gateway.py", "tests/unit/utils/test_http_utils.py"] | bug: API Gateway incorrectly canonicalizes headers | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
localstack v0.12.17 released with this enhancement that downcases all headers:
> canonicalize HTTP header names in API GW Lambda events to lower-case
> https://github.com/localstack/localstack/commit/657d9fe4bd0e3c4284e6fa67aec62d265d5d9fff
This does not match what we observe in production - custom headers are not downcased. See this part of the docs on API Gateway and how it canonicalizes headers:
> API Gateway enacts the following restrictions and limitations when handling methods with either Lambda integration or HTTP integration.
> * Header names and query parameters are processed in a case-sensitive way.
> * The following table lists the headers that may be dropped, remapped, or otherwise modified when sent to your integration endpoint or sent back by your integration endpoint. In this table:
> https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-known-issues.html
### Expected Behavior
I would expect that localstack follows the same canonicalization rules that API Gateway implements.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack:0.12.17
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
Used [Insomnia](https://insomnia.rest/) client to issue a POST request to an AWS Lambda function via API Gateway with custom headers. For example, `X-A-Custom-Header`. Attempt to read the custom header in your lambda function case sensitively.
### Environment
```markdown
- OS: macOS for POST request + whatever the docker container uses for localstack
- LocalStack: 0.12.17
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/4532 | https://github.com/localstack/localstack/pull/5213 | 9db4cc14f1e0d11c3a214b747c7fbcb0f4fcc05f | 122722754c30e9ed5f2fe4c6f8926582b7b21e60 | "2021-09-02T17:09:58Z" | python | "2022-01-02T21:21:44Z" |
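The mismatch is easy to demonstrate without API Gateway at all. A sketch of what unconditional downcasing does to a case-sensitive consumer (the handler-side lookup is hypothetical):

```python
def canonicalize_lowercase(headers: dict) -> dict:
    # What the 0.12.17 change effectively does: downcase every header name.
    return {name.lower(): value for name, value in headers.items()}

incoming = {"X-A-Custom-Header": "abc", "Content-Type": "application/json"}
lowered = canonicalize_lowercase(incoming)

# Against real API Gateway the custom name arrives unchanged, so a
# case-sensitive lookup works; after downcasing it silently returns nothing.
assert incoming.get("X-A-Custom-Header") == "abc"
assert lowered.get("X-A-Custom-Header") is None
assert lowered.get("x-a-custom-header") == "abc"
```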
closed | localstack/localstack | https://github.com/localstack/localstack | 4,529 | ["localstack/services/cloudformation/models/apigateway.py"] | ApiKeyRequired in API Gateway not working from cloudformation | Hello,
I have tried to create a mock integration in API Gateway from CloudFormation.
Tested with the latest (105b2dd7ba2c) Docker image.
We would like to be able to test our API keys in API Gateway, but setting ApiKeyRequired does not seem to work.
Using attached template.yml
Running:
aws cloudformation create-stack --stack-name api-gateway --template-body file://template.yml --endpoint-url=http://localhost:4566
aws apigateway get-rest-apis --endpoint-url=http://localhost:4566
aws apigateway get-resources --rest-api-id ${CREATED_REST_API_ID} --endpoint-url=http://localhost:4566
But the response here is apiKeyRequired = false
```
{
"items": [
{
"id": "rootId",
"path": "/"
},
{
"id": "childId",
"parentId": "rootId",
"pathPart": "mock",
"path": "/mock",
"resourceMethods": {
"POST": {
"httpMethod": "POST",
"authorizationType": "NONE",
"apiKeyRequired": false,
"methodIntegration": {
"type": "MOCK",
"requestParameters": {},
"requestTemplates": {
"application/json": "{\"statusCode\": $input.json('$.statusCode'), \"message\": $input.json('$.message')}"
},
"passthroughBehavior": "WHEN_NO_MATCH",
"cacheNamespace": "d30e4c73",
"cacheKeyParameters": []
}
}
}
}
]
}
```
It is also possible to access the endpoint without providing an api key.
```
curl -v -H "Content-Type: application/json" http://localhost:4566/restapis/${CREATED_REST_API_ID}/LATEST/_user_request_/mock -d '
{"statusCode":200}
'
```
A Forbidden response is expected, as when this template is run in AWS.
Regards,
Björn Bohlin
[template.zip](https://github.com/localstack/localstack/files/7097904/template.zip)
| https://github.com/localstack/localstack/issues/4529 | https://github.com/localstack/localstack/pull/4610 | c8ed088ee6af21d9f770545676724df90535f8cd | b2daf471735c15f834f37e33f5d9d8eea8d0b366 | "2021-09-02T09:46:40Z" | python | "2021-09-17T08:21:52Z" |
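The expected enforcement can be sketched in a few lines. This is illustrative only, not API Gateway's implementation, and it assumes the key arrives in the `x-api-key` header as on real AWS:

```python
def authorize(method_config: dict, headers: dict) -> int:
    """Return an HTTP status: 403 when the method requires an API key and
    no x-api-key header (any casing) is present, 200 otherwise."""
    has_key = "x-api-key" in {name.lower() for name in headers}
    if method_config.get("apiKeyRequired") and not has_key:
        return 403
    return 200

assert authorize({"apiKeyRequired": True}, {}) == 403              # expected: Forbidden
assert authorize({"apiKeyRequired": True}, {"X-Api-Key": "k"}) == 200
assert authorize({"apiKeyRequired": False}, {}) == 200             # observed behavior
```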
closed | localstack/localstack | https://github.com/localstack/localstack | 4,522 | ["README.md", "localstack/services/install.py"] | bug: kinesis register-stream-consumer returns error StreamARN not found | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I cannot register a consumer using `register-stream-consumer`.
### Expected Behavior
I should be able to register a stream consumer without problem.
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
- Start localstack `docker run --rm -it -p 4566:4566 -p 4571:4571 localstack/localstack:0.12.17`
- Create a kinesis stream `aws --endpoint-url http://localhost:4566 kinesis create-stream --stream-name kinesisStream --shard-count 1`
- Get the arn of that created stream `aws --endpoint-url http://localhost:4566 kinesis describe-stream --stream-name kinesisStream | jq .StreamDescription.StreamARN` (returns `arn:aws:kinesis:us-west-2:000000000000:stream/kinesisStream`)
- Create a consumer `aws --endpoint-url http://localhost:4566 kinesis register-stream-consumer --consumer-name myConsumer --stream-arn arn:aws:kinesis:us-west-2:000000000000:stream/kinesisStream`
- Get error `An error occurred (ResourceNotFoundException) when calling the RegisterStreamConsumer operation: StreamARN arn:aws:kinesis:us-west-2:000000000000:stream/kinesisStream not found`
### Environment
```markdown
- OS: macOS 11.5.2
- LocalStack: 0.12.17
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/4522 | https://github.com/localstack/localstack/pull/4573 | b769810a400eb3aab3fcc95f35299bbfd8e09074 | 739fc55e711824b0f6b93b8170786560b2a5ee40 | "2021-09-01T02:54:36Z" | python | "2021-09-10T08:18:33Z" |
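Whatever the underlying cause, `register-stream-consumer` has to resolve the stream from its ARN rather than its name. A sketch of that lookup step (not LocalStack's code), using the ARN from the report:

```python
def stream_name_from_arn(arn: str) -> str:
    """Extract the stream name from a Kinesis stream ARN,
    e.g. arn:aws:kinesis:us-west-2:000000000000:stream/kinesisStream."""
    resource = arn.split(":", 5)[5]            # "stream/kinesisStream"
    prefix, _, name = resource.partition("/")
    if prefix != "stream" or not name:
        raise ValueError(f"not a Kinesis stream ARN: {arn}")
    return name

arn = "arn:aws:kinesis:us-west-2:000000000000:stream/kinesisStream"
assert stream_name_from_arn(arn) == "kinesisStream"
```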
closed | localstack/localstack | https://github.com/localstack/localstack | 4,515 | ["localstack/utils/docker.py", "requirements.txt", "tests/integration/docker/test_docker.py"] | bug: PRO - lambda cannot comunicate with ext after upgrading to 0.12.17 or latest. - [Errno -3] Temporary failure in name resolution | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
This is my docker-compose.yml:
```
localstack:
image: localstack/localstack:latest
restart : always
environment:
- AWS_DEFAULT_REGION=eu-central-1
- SERVICES=neptune,route53,events,stepfunctions,kinesis,cloudtrail,ses,elbv2,es,ec2,secretsmanager,firehose,ecs,ecr,sns,sqs,appsync,edge,cognito,lambda,s3,docdb,apigateway,apigatewayv2,cloudformation,sts,iam,dynamodb:4701,cloudfront
- START_WEB=0
- DEBUG=1
- NODE_TLS_REJECT_UNAUTHORIZED=0
- DISABLE_EVENTS=true
- TEST_AWS_ACCOUNT_ID=123456789012
- LAMBDA_EXECUTOR=docker-reuse
- HOSTNAME_EXTERNAL=localstack
- LAMBDA_DOCKER_NETWORK=localstack_default
- LAMBDA_REMOTE_DOCKER=true
- LAMBDA_REMOVE_CONTAINERS=false
- DATA_DIR=/tmp/localstack/data
- HOST_TMP_FOLDER=/tmp/localstack
- LOCALSTACK_API_KEY=xxxxxxxxxx
- DOCKER_HOST=unix:///var/run/docker.sock
- DYNAMODB_PORT_EXTERNAL=4701
- HOSTNAME_EXTERNAL=localstack
- LAMBDA_DOCKER_DNS=8.8.8.8
```
The lambda container cannot communicate with external services, only with services inside the docker network named localstack_default.
### Expected Behavior
The lambda container can communicate with external services.
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
executing this py script inside lambda container
```
import requests
print(requests.get('https://ifconfig.co/json').text)
```
I got "[Errno -3] Temporary failure in name resolution".
It works when replacing ifconfig.co with its IP address.
### Environment
```markdown
- OS: debian 11
- LocalStack: 0.12.17 or latest
```
### Anything else?
the /etc/hosts file into the lambda container started from localstack:0.12.17
```
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.18.0.4 77c41d53db2b
172.17.0.1 localhost.localstack.cloud
```
the /etc/hosts file into the lambda container started from localstack:0.12.16
```
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.18.0.3 8e545993dcf9
172.18.0.8 localhost.localstack.cloud
```
| https://github.com/localstack/localstack/issues/4515 | https://github.com/localstack/localstack/pull/4520 | bf4fdc81f08594b0c47a574bba08871d648d1021 | fb79f63c7cd792360f2f52261b56b3b047d9c329 | "2021-08-31T06:32:37Z" | python | "2021-08-31T17:40:00Z" |
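The symptom (IP address works, hostname does not) can be probed from inside the container with a short resolution check. A sketch, assuming Python is available in the Lambda image:

```python
import socket

def can_resolve(hostname: str) -> bool:
    """True when the container can resolve the hostname; inside the affected
    Lambda container this returns False for any external name."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# localhost resolves via /etc/hosts even with broken external DNS, which
# matches the observed "IP address works, hostname does not" symptom.
assert can_resolve("localhost")
```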
closed | localstack/localstack | https://github.com/localstack/localstack | 4,489 | ["localstack/services/install.py"] | bug: Error tagging kinesis stream (invalid characters) after upgrade to 0.12.16 | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
This worked using version 0.12.13. No configuration changes were made other than bumping the image version to 0.12.16, and the same tag key/value pairs are present from .13 to .16. The error is related to the value `[email protected]` for tagging. Full output:
```
# aws_kinesis_stream.stream will be created
+ resource "aws_kinesis_stream" "stream" {
+ arn = (known after apply)
+ encryption_type = "NONE"
+ enforce_consumer_deletion = false
+ id = (known after apply)
+ kms_key_id = "alias/aws/kinesis"
+ name = "ci0-p631641-kinesis-kinesis-test"
+ retention_period = 48
+ shard_count = 2
+ tags = {
+ "chart_name" = "kinesis"
+ "deployment_model" = "mcs2"
+ "env_name" = "ci0-p631641"
+ "owner" = "[email protected]"
...
...
}
+ tags_all = {
+ "chart_name" = "kinesis"
+ "deployment_model" = "mcs2"
+ "env_name" = "ci0-p631641"
+ "owner" = "[email protected]"
...
...
}
}
Plan: 4 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ arn = (known after apply)
+ consumer_policy_arn = (known after apply)
+ id = (known after apply)
+ name = "ci0-p631641-kinesis-kinesis-test"
+ producer_policy_arn = (known after apply)
+ shard_count = 2
+ shard_mgt_policy_arn = (known after apply)
# terraform apply -lock=false plan.out
aws_kinesis_stream.stream: Creating...
aws_kinesis_stream.stream: Still creating... [10s elapsed]
Error: error updating Kinesis Stream (ci0-p631641-kinesis-kinesis-test) tags: error tagging resource (ci0-p631641-kinesis-kinesis-test): InvalidArgumentException: Values contain invalid characters. Invalid values: [email protected]
on kinesis.tf line 3, in resource "aws_kinesis_stream" "stream":
3: resource "aws_kinesis_stream" "stream" {
```
### Expected Behavior
The kinesis stream resource is able to be tagged using the same tagging key/value pairs that were valid in 0.12.13, and are valid in AWS.
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
As a kubernetes deployment
### Environment
```markdown
- Kubernetes 1.18
- LocalStack: 0.12.16
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/4489 | https://github.com/localstack/localstack/pull/4502 | 267f546c121eabc2a7154d2bb2f4d694510c0db3 | 392da1fc4d5c0f0350b7cceb9b88e06dcb1d1ef3 | "2021-08-25T17:18:45Z" | python | "2021-08-27T17:29:20Z" |
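AWS accepts `@`, `.`, and the other characters found in email-like tag values. A sketch of the validation the reporter expects, with the allowed character set taken from the AWS tagging rules as I recall them (verify against the current Kinesis docs) and a hypothetical address standing in for the redacted one:

```python
import re

# The character set AWS documents for tag values: Unicode letters, digits,
# whitespace, and + - = . _ : / @ (verify against the current Kinesis docs).
TAG_VALUE_RE = re.compile(r"^[\w\s+\-=._:/@]*$")

def tag_value_is_valid(value: str) -> bool:
    return len(value) <= 256 and bool(TAG_VALUE_RE.match(value))

# A hypothetical address in place of the redacted value from the report:
assert tag_value_is_valid("team@example.com")   # valid on AWS, rejected by 0.12.16
assert not tag_value_is_valid("bad;value")      # genuinely invalid
```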
closed | localstack/localstack | https://github.com/localstack/localstack | 4,487 | ["localstack/services/s3/s3_listener.py", "tests/integration/test_s3.py"] | bug: Incorrect content-type for files from an s3 website bucket | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When making a website request to an s3 bucket, the content-type response header is incorrect.
### Expected Behavior
The `content-type` header in the response from the s3 website endpoint for an object should be the same as the `ContentType` of the object.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack
A reduced compose file:
```yaml
version: "3.9"
services:
s3:
image: localstack/localstack:latest
ports:
- "4566-4583:4566-4583"
environment:
- AWS_DEFAULT_REGION=eu-west-1
- EDGE_PORT=4566
- SERVICES=s3
- DEBUG=1
- DATA_DIR=/tmp/localstack/data
- S3_SKIP_SIGNATURE_VALIDATION=0
volumes:
- "./.localstack:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
```
#### Client commands
```shell
# bucket setup
awslocal s3api create-bucket --bucket public
awslocal s3api put-bucket-acl --bucket public --acl public-read
awslocal s3 website s3://public --index-document index.html
# sample object
echo "'use strict;'" > script.js
awslocal s3api put-object --bucket public --body script.js --content-type "application/javascript; charset=utf-8" --key script.js
```
Confirm the metadata of the object has the expected `ContentType` metadata:
```shell
awslocal s3api head-object --bucket public --key script.js
```
The result will look like:
```json
{
"LastModified": "Wed, 25 Aug 2021 13:36:20 GMT",
"ContentLength": 14,
"ETag": "\"63a55f759bd133b92f61ae079fd2cc73\"",
"ContentType": "application/javascript; charset=utf-8",
"Metadata": {}
}
```
A curl (either on the host machine or in the container) to the website endpoint for the object results in an incorrect content-type header (`text/html; charset=utf-8`):
```shell
curl --resolve public.s3-website.localhost.localstack.cloud:4566:127.0.0.1 public.s3-website.localhost.localstack.cloud:4566/script.js -v
```
The output:
```
* Added public.s3-website.localhost.localstack.cloud:4566:127.0.0.1 to DNS cache
* Hostname public.s3-website.localhost.localstack.cloud was found in DNS cache
* Trying 127.0.0.1:4566...
* TCP_NODELAY set
* Connected to public.s3-website.localhost.localstack.cloud (127.0.0.1) port 4566 (#0)
> GET /script.js HTTP/1.1
> Host: public.s3-website.localhost.localstack.cloud:4566
> User-Agent: curl/7.67.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200
< content-type: text/html; charset=utf-8
< content-length: 14
< last-modified: Wed, 25 Aug 2021 13:36:43 GMT
< x-amz-request-id: 2E0649A71F21ACAC
< x-amz-id-2: MzRISOwyjmnup2E0649A71F21ACAC7/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp
< accept-ranges: bytes
< content-language: en-US
< access-control-allow-origin: *
< access-control-allow-methods: HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH
< access-control-allow-headers: authorization,content-type,content-length,content-md5,cache-control,x-amz-content-sha256,x-amz-date,x-amz-security-token,x-amz-user-agent,x-amz-target,x-amz-acl,x-amz-version-id,x-localstack-target,x-amz-tagging,amz-sdk-invocation-id,amz-sdk-request
< access-control-expose-headers: x-amz-version-id
< connection: close
< date: Wed, 25 Aug 2021 13:36:43 GMT
< server: hypercorn-h11
<
'use strict;'
* Closing connection 0
```
### Environment
```markdown
- Host OS: macos big-sur 11.5.1
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/4487 | https://github.com/localstack/localstack/pull/4491 | b7520260fa9281d7c11aad64a7ae1aff53953b97 | 06fea6560c9c60b7d04048227a7d8585c3b3ed5b | "2021-08-25T13:38:00Z" | python | "2021-08-26T00:03:46Z" |
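The fix the reporter expects amounts to serving the object's stored `ContentType` instead of a hard-coded default. A sketch of that selection logic (not the actual S3 listener code):

```python
import mimetypes

def response_content_type(stored_type, key):
    """Pick the content-type for a website response: the object's stored
    ContentType wins; guessing from the object key is only a fallback."""
    if stored_type:
        return stored_type
    guessed, _ = mimetypes.guess_type(key)
    return guessed or "binary/octet-stream"

stored = "application/javascript; charset=utf-8"
assert response_content_type(stored, "script.js") == stored  # what curl should see
assert response_content_type(None, "notes.txt") == "text/plain"
```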
closed | localstack/localstack | https://github.com/localstack/localstack | 4,484 | ["localstack/services/sns/sns_listener.py"] | feature request: Support email-json for SNS subscription | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
I would like to request extending `sns_listener` with an ability to understand `email-json` next to `email` protocol.
### 🧑💻 Implementation
1. check if protocol is `email` or `email-json`
2. for email json interpret body/message as json
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/4484 | https://github.com/localstack/localstack/pull/5247 | ed80dd92c13fc4c2ed5ca50e97abb7be6ff2b900 | ec8b72d5c926ae8495ca50ce168494247aef54be | "2021-08-24T13:28:42Z" | python | "2022-01-10T18:04:02Z" |
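The proposed behavior follows directly from the two steps above. A sketch, with the SNS notification envelope reduced to a few fields for brevity:

```python
import json

def render_email_body(protocol: str, subject: str, message: str) -> str:
    """'email' delivers the raw message text; 'email-json' wraps it in the
    SNS notification envelope (reduced here to a few fields)."""
    if protocol == "email":
        return message
    if protocol == "email-json":
        return json.dumps(
            {"Type": "Notification", "Subject": subject, "Message": message}
        )
    raise ValueError(f"unsupported protocol: {protocol}")

assert render_email_body("email", "greeting", "hello") == "hello"
assert json.loads(render_email_body("email-json", "greeting", "hello"))["Message"] == "hello"
```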
closed | localstack/localstack | https://github.com/localstack/localstack | 4,458 | ["localstack/services/sns/sns_listener.py", "tests/integration/test_sns.py"] | bug: SNS fanout to SQS, different message attributes behavior | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Localstack always forwards SNS message attributes to SQS message attributes, while AWS does so only when `raw_message_delivery` is enabled. See https://stackoverflow.com/questions/44238656/how-to-add-sqs-message-attributes-in-sns-subscription
### Expected Behavior
SNS message attributes are forwarded to SQS message attributes only when `raw_message_delivery` is `true`.
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
Try to send SNS message with attributes in Localstack and in AWS.
AWS will not map SNS message attributes to SQS message attributes without `raw_message_delivery = true` parameter for a topic subscription.
### Environment
```markdown
- OS: docker
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/4458 | https://github.com/localstack/localstack/pull/4594 | 941a1900fc0e7c557161126063a3162515315f3f | 067b740e70040669f05426886ccb58bb32ab3bfb | "2021-08-17T14:32:15Z" | python | "2021-09-14T15:07:19Z" |
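The requested behavior is a one-line conditional. A sketch (not LocalStack's listener code):

```python
def sqs_message_attributes(sns_attributes: dict, raw_message_delivery: bool) -> dict:
    """AWS behavior described above: SNS message attributes become SQS message
    attributes only for raw-delivery subscriptions; otherwise they stay inside
    the JSON notification body and the SQS message carries none."""
    return dict(sns_attributes) if raw_message_delivery else {}

attrs = {"priority": {"DataType": "String", "StringValue": "high"}}
assert sqs_message_attributes(attrs, raw_message_delivery=True) == attrs
assert sqs_message_attributes(attrs, raw_message_delivery=False) == {}
```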
closed | localstack/localstack | https://github.com/localstack/localstack | 4,409 | ["localstack/plugins.py", "localstack/services/configservice/__init__.py", "localstack/services/configservice/configservice_starter.py", "localstack/services/support/support_starter.py", "requirements.txt", "tests/integration/test_config_service.py"] | feature request: ConfigService | `moto` provides support for several of the AWS ConfigService APIs. Would it be possible to provide that same support with LocalStack? | https://github.com/localstack/localstack/issues/4409 | https://github.com/localstack/localstack/pull/4500 | 9d805367381a670406237769a94592f8ef0d4bf4 | da096b90375da1546b2f71df997764ad906a7f3e | "2021-08-03T16:31:51Z" | python | "2021-08-30T10:54:06Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,379 | ["localstack/services/sns/sns_listener.py", "tests/unit/test_sns.py"] | bug: SNS Subscription Filter Policy "exists" false does not match | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I'm encountering an issue with the "exists" property on SNS Filter Policy. It logs that it can't match the policy even though I would expect it to match.
```
2021-07-28T17:57:02:INFO:localstack.services.sns.sns_listener: SNS filter policy {'source': ['source1'], 'priority': [{'exists': False}]}
does not match attributes {'clientId': {'Type': 'String', 'Value': 'Id123'}, 'source': {'Type': 'String', 'Value': 'source1'}, 'awsRegionName': {'Type': 'String', 'Value': 'us-east-1'}}
```
### Expected Behavior
Policy matches and message is sent to subscribers.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
I run in a docker-compose script and load my CloudFormation script on init.
Using the following CloudFormation script I create the subscription.
```
"Subscription": {
"Type": "AWS::SNS::Subscription",
"Properties": {
"TopicArn": {
"Ref": "MyTopic"
},
"Endpoint": {
"Fn::GetAtt": [
"MyQueue",
"Arn"
]
},
"Protocol": "sqs",
"FilterPolicy": {
"source": [
"source1"
],
"priority": [{"exists": false}]
},
"RawMessageDelivery": true
}
},
```
### Environment
```markdown
- OS: Windows
- LocalStack: latest
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/4379 | https://github.com/localstack/localstack/pull/4911 | 757d9a692b7e2fc686ac2fe54b6a326faf9f6777 | 067a382d5401dff2c1b0f464ab535e9b3852e885 | "2021-07-28T20:00:24Z" | python | "2021-11-13T18:51:54Z" |
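The expected `exists` semantics can be written out in a few lines. A sketch against the exact attributes from the log above (not the listener's real matching code):

```python
def matches_exists_condition(condition: dict, attribute_name: str, attributes: dict) -> bool:
    """{"exists": false} matches when the attribute is absent;
    {"exists": true} matches when it is present."""
    return (attribute_name in attributes) == condition["exists"]

# The exact attributes from the log line above.
attributes = {
    "clientId": {"Type": "String", "Value": "Id123"},
    "source": {"Type": "String", "Value": "source1"},
    "awsRegionName": {"Type": "String", "Value": "us-east-1"},
}
assert matches_exists_condition({"exists": False}, "priority", attributes)  # should match
assert not matches_exists_condition({"exists": True}, "priority", attributes)
```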
closed | localstack/localstack | https://github.com/localstack/localstack | 4,363 | ["localstack/services/awslambda/lambda_api.py", "localstack/services/infra.py", "localstack/utils/config_listener.py", "tests/integration/test_config_endpoint.py", "tests/unit/test_misc.py"] | Changing LAMBDA_EXECUTOR at runtime with ENABLE_CONFIG_UPDATES=1 does not work | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
First of all, I'd like to thank you for this awesome project. Keep up the good work.
Now, to business. I have a script that tries to set the executor at runtime (from `docker-reuse` to `docker` and back again at the end) for a very specific use case. In my `docker-compose` file, I have set the `ENABLE_CONFIG_UPDATES=1` and upon POSTing the request to `/?_config_`, I see the following log message in the logs:
```shell
localstack_1 | 2021-07-26T07:03:35:INFO:localstack.services.infra: Updating value of config variable "LAMBDA_EXECUTOR": docker
```
That said, I expected the next invocation of a Lambda to use its own container, but it doesn't:
```
localstack.services.awslambda.lambda_executors: Command for docker-reuse Lambda executor:
```
### Expected Behavior
The next invocation of a Lambda would be in its own container.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
1. Create a simple `docker-compose.yml` like so:
```yaml
localstack:
image: localstack/localstack:0.12.15
ports:
- "4566:4566"
environment:
SERVICES: lambda,cloudformation,s3,cloudwatch,iam,apigateway
DEFAULT_REGION: eu-west-1
HOSTNAME_EXTERNAL: localstack
LAMBDA_EXECUTOR: docker-reuse
LAMBDA_DOCKER_NETWORK: offline-environment_default
LAMBDA_REMOTE_DOCKER: 0
DOCKER_HOST: unix:///var/run/docker.sock
DEBUG: 1
ENABLE_CONFIG_UPDATES: 1
HOST_TMP_FOLDER: /private${TMPDIR}/localstack
AWS_ACCESS_KEY_ID: test
AWS_SECRET_ACCESS_KEY: test
volumes:
- "./localstack/scripts:/docker-entrypoint-initaws.d"
- "./localstack/cloudformation:/cloudformation"
- "/var/run/docker.sock:/var/run/docker.sock"
- "/private${TMPDIR}/localstack:/tmp/localstack"
```
2. Create a CloudFormation template with a Lambda and an S3 bucket (and their required permissions) and place it under the `localstack/cloudformation` directory. The Lambda should be triggered by an S3 object created event (may not matter).
3. Create a bash script that deploys said CloudFormation template and place it under the `localstack/scripts` directory with the prefix `1_`.
4. Create another bash script that changes the value of `LAMBDA_EXECUTOR` to `docker` (via POSTing to `/?_config_`) and copies a file into the bucket. Again, place the script under the `localstack/scripts` directory with the prefix `2_`.
5. Start the `docker-compose.yml`:
```shell
docker-compose-up
```
6. Inspect the logs and verify that everything was deployed correctly and that you see the message `Updating value of config variable "LAMBDA_EXECUTOR": docker`
7. Open the Docker Dashboard or issue the `docker ps` command to confirm that Localstack invoked the Lambda using the `docker-reuse` executor. Can also check the logs and verify that the following message appears:
```shell
localstack.services.awslambda.lambda_executors: Command for docker-reuse Lambda executor:
```
### Environment
```markdown
- OS: macOS Big Sur 11.5
- LocalStack: 0.12.15
``` | https://github.com/localstack/localstack/issues/4363 | https://github.com/localstack/localstack/pull/4364 | 54975a0c84f9f82ed990dd90eb3e8b5eea6933b8 | f8790abf4e28254dcd5a2d81b17d3428d333df5c | "2021-07-26T07:39:24Z" | python | "2021-07-27T21:57:16Z" |
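For reference, the runtime update is a plain POST to the config endpoint. The `{"variable": ..., "value": ...}` payload shape below is an assumption inferred from the log line above, not taken from LocalStack's source, so verify it against your version; no request is actually sent here, the snippet only builds the body:

```python
import json

def config_update_payload(variable: str, value: str) -> str:
    """Body for POST http://localhost:4566/?_config_ when
    ENABLE_CONFIG_UPDATES=1; the payload shape is an assumption inferred
    from the log line above, not taken from LocalStack's source."""
    return json.dumps({"variable": variable, "value": value})

body = config_update_payload("LAMBDA_EXECUTOR", "docker")
assert json.loads(body) == {"variable": "LAMBDA_EXECUTOR", "value": "docker"}
```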
closed | localstack/localstack | https://github.com/localstack/localstack | 4,343 | ["localstack/services/sqs/sqs_listener.py", "tests/integration/test_sqs.py"] | bug: Localstack drops empty string tag value and causes ElasticMQ exception | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
If I run the following Terraform configuration against Localstack with ElasticMQ backend, then the resource creation hangs while ElasticMQ throws exceptions.
```terraform
terraform {
required_providers {
aws = "3.50.0"
}
}
provider "aws" {
region = "eu-west-1"
access_key = "mock_access_key"
secret_key = "mock_secret_key"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
sqs = "http://localhost:4566"
}
}
resource "aws_sqs_queue" "queue" {
name = "foobar"
tags = {
NoFoo = ""
}
}
```
If I run `ncat -l localhost 4566` instead of Localstack, I can see that `Tag.1.Value=` is there.
```
POST / HTTP/1.1
Host: localhost:4566
User-Agent: APN/1.0 HashiCorp/1.0 Terraform/0.12.31 (+https://www.terraform.io) terraform-provider-aws/3.50.0 (+https://registry.terraform.io/providers/hashicorp/aws) aws-sdk-go/1.39.5 (go1.16; linux; amd64)
Content-Length: 382
Authorization: AWS4-HMAC-SHA256 Credential=mock_access_key/20210722/eu-west-1/sqs/aws4_request, SignedHeaders=content-length;content-type;host;x-amz-date, Signature=1a535809020db386c13b4b4169308bd79d81f47edb3922c9a59c4707d7b51061
Content-Type: application/x-www-form-urlencoded; charset=utf-8
X-Amz-Date: 20210722T133950Z
Accept-Encoding: gzip
Connection: close
Action=CreateQueue&Attribute.1.Name=DelaySeconds&Attribute.1.Value=0&Attribute.2.Name=MaximumMessageSize&Attribute.2.Value=262144&Attribute.3.Name=MessageRetentionPeriod&Attribute.3.Value=345600&Attribute.4.Name=ReceiveMessageWaitTimeSeconds&Attribute.4.Value=0&Attribute.5.Name=VisibilityTimeout&Attribute.5.Value=30&QueueName=foobar&Tag.1.Key=NoFoo&Tag.1.Value=&Version=2012-11-05
```
If I exec into the LocalStack container and attach `tshark` to ElasticMQ's port, I can see that the query parameter was dropped.
(`tshark -i lo -f "tcp port 52097" -e http.file_data -Tjson`)
```
"Action=CreateQueue&Attribute.1.Name=DelaySeconds&Attribute.1.Value=0&Attribute.2.Name=MaximumMessageSize&Attribute.2.Value=262144&Attribute.3.Name=MessageRetentionPeriod&Attribute.3.Value=345600&Attribute.4.Name=ReceiveMessageWaitTimeSeconds&Attribute.4.Value=0&Attribute.5.Name=VisibilityTimeout&Attribute.5.Value=30&QueueName=foobar&Tag.1.Key=NoFoo&Version=2012-11-05"
```
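A plausible cause (an assumption on my part — I haven't traced LocalStack's listener code): Python's `urllib.parse.parse_qs` silently drops parameters with blank values unless `keep_blank_values=True` is passed, which would exactly match the missing `Tag.1.Value=`. A minimal demonstration, with the query string shortened from the request above:

```python
from urllib.parse import parse_qs

query = "Action=CreateQueue&QueueName=foobar&Tag.1.Key=NoFoo&Tag.1.Value="

# Default behavior: parameters with empty values are silently dropped.
default = parse_qs(query)
assert "Tag.1.Value" not in default

# keep_blank_values=True preserves the empty tag value as [""].
kept = parse_qs(query, keep_blank_values=True)
assert kept["Tag.1.Value"] == [""]
```

If the proxy re-serializes the parsed parameters before forwarding to ElasticMQ, the default behavior would reproduce the truncated body seen in the `tshark` capture.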
### Expected Behavior
The SQS queue should be created without error, with the tag set to an empty value (`""`).
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run --rm -it -p "4566:4566" -e "SERVICES=sqs" -e "DEBUG=1" -e "SQS_PROVIDER=elasticmq" localstack/localstack:0.12.15
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
terraform init && terraform apply -auto-approve // on the above Terraform snippet put in main.tf
### Environment
```markdown
- OS: Ubuntu 20.04
- LocalStack: 0.12.15
- Terraform: 1.0.2
```
### Anything else?
As ElasticMQ could handle this edge case more gracefully (at least responding with HTTP 400 rather than HTTP 500), I also opened an issue there: https://github.com/softwaremill/elasticmq/issues/511
closed | localstack/localstack | https://github.com/localstack/localstack | 4,312 | ["localstack/services/route53/route53_listener.py", "tests/integration/test_route53.py"] | bug: Route53 zone vpc associations do not persist | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When associating a Route53 hosted zone with a VPC, the association does not persist.
### Expected Behavior
Subsequent calls to get-hosted-zone should show the VPC association config.
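For reference, against real AWS (where VPC associations only apply to private hosted zones, so `PrivateZone` would be `true`), a subsequent `get-hosted-zone` call includes a `VPCs` list alongside the zone — roughly like this (shape sketched from the AWS API; IDs taken from the commands in this report):

```json
{
  "HostedZone": {
    "Id": "/hostedzone/KEHOISS658KE87K",
    "Name": "example.com.",
    "Config": {
      "Comment": "Managed by Terraform",
      "PrivateZone": true
    },
    "ResourceRecordSetCount": 0
  },
  "VPCs": [
    {
      "VPCRegion": "us-west-2",
      "VPCId": "vpc-6589e232"
    }
  ]
}
```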
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker-compose up localstack
```
version: "3.3"
services:
localstack:
image: localstack/localstack
environment:
HOSTNAME: "localstack"
HOSTNAME_EXTERNAL: "localstack"
SERVICES: "sts,route53,iam,elb,acm,ec2,efs"
LOCALSTACK_API_KEY: ${LOCALSTACK_API_KEY}
DEBUG: 1
ports:
- "443:443"
- "4566:4566"
- "4571:4571"
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
> awslocal route53 associate-vpc-with-hosted-zone --hosted-zone-id "KEHOISS658KE87K" --vpc "VPCRegion=us-west-2,VPCId=vpc-6589e232"
{
"ChangeInfo": {
"Id": "efe11a97",
"Status": "INSYNC",
"SubmittedAt": "2021-07-16T19:35:33.461000+00:00"
}
}
> awslocal route53 get-hosted-zone --id KEHOISS658KE87K
{
"HostedZone": {
"Id": "/hostedzone/KEHOISS658KE87K",
"Name": "example.com.",
"Config": {
"Comment": "Managed by Terraform",
"PrivateZone": false
},
"ResourceRecordSetCount": 0
},
"DelegationSet": {
"NameServers": [
"dns.localhost.localstack.cloud"
]
}
}
```
### Environment
```markdown
- OS: Windows 10
- LocalStack: `754a96b122ec`
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/4312 | https://github.com/localstack/localstack/pull/5118 | 0ece31f494a87485e3823bce703445f86267f4f6 | 5a8fb2a9704b86f0dcd0066c2a8b777b9b1e7719 | "2021-07-16T19:41:14Z" | python | "2021-12-14T17:57:05Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,291 | ["localstack/services/iam/iam_starter.py", "tests/integration/test_iam.py"] | bug: Error while deleting a non existent IAM policy. | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I got the following unexpected error when trying to delete a policy that does not exist.
```log
self = <botocore.parsers.QueryParser object at 0x7fa3d2cd2820>
xml_string = b'<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Serv...nd was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n'
def _parse_xml_string_to_dom(self, xml_string):
try:
parser = ETree.XMLParser(
target=ETree.TreeBuilder(),
encoding=self.DEFAULT_ENCODING)
parser.feed(xml_string)
root = parser.close()
except XMLParseError as e:
> raise ResponseParserError(
"Unable to parse response (%s), "
"invalid XML received. Further retries may succeed:\n%s" %
(e, xml_string))
E botocore.parsers.ResponseParserError: Unable to parse response (syntax error: line 1, column 54), invalid XML received. Further retries may succeed:
E b'<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n'
```
### Expected Behavior
I expect a `ClientError` with the error code `NoSuchEntity`.
### How are you starting LocalStack?
With a `docker run` command
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
`docker run --rm -it -p 4566:4566 -p 4571:4571 localstack/localstack`
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
I'm using boto3 with the following initialization:
```python
client = boto3.client(
'iam',
region_name="us-east-1",
endpoint_url="http://localstack:4566",
aws_access_key_id="dummy_id",
aws_secret_access_key="dummy_key")
```
To reproduce the error, try to delete a non-existent IAM policy:
```python
import boto3
from botocore.exceptions import ClientError
client = boto3.client(
'iam',
region_name="us-east-1",
endpoint_url="http://localstack:4566",
aws_access_key_id="dummy_id",
aws_secret_access_key="dummy_key")
try:
client.delete_policy(PolicyArn="arn:aws:iam::000000000000:policy/non-existent-policy")
except Exception as ex:
if isinstance(ex, ClientError):
print("expected")
else:
print("not expected")
```
*Output*
```log
not expected
```
*Error*
```log
Unable to parse response (syntax error: line 1, column 54), invalid XML received. Further retries may succeed:
b'<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title>500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n'
```
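For comparison, real IAM answers this call with an HTTP 404 and an XML `ErrorResponse` body that botocore can parse into a `ClientError` — approximately the following (the message text and request ID here are illustrative placeholders):

```xml
<ErrorResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
  <Error>
    <Type>Sender</Type>
    <Code>NoSuchEntity</Code>
    <Message>Policy arn:aws:iam::000000000000:policy/non-existent-policy does not exist.</Message>
  </Error>
  <RequestId>00000000-0000-0000-0000-000000000000</RequestId>
</ErrorResponse>
```

LocalStack instead returns an HTML 500 page, which is why botocore raises `ResponseParserError` rather than the expected `ClientError`.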
### Environment
```markdown
- OS: MacOS BigSur 11.4
- LocalStack: latest
- Docker: v20.10.7
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/4291 | https://github.com/localstack/localstack/pull/4298 | 63750e6d8472634bb0c26443c997ddf3ec05588f | 8ae21166a4deccc803c917556e88ea7904463d62 | "2021-07-13T11:12:41Z" | python | "2021-07-16T19:40:35Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,287 | ["tests/integration/test_sqs.py"] | bug: Messages are received out of order from FIFO queue after they became visible again | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
1. Send messages to a FIFO queue
2. Receive multiple messages (3 in the example)
3. Wait for visibility timeout
4. Receive messages again
5. Messages are received out of order
### Expected Behavior
According to https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues-understanding-logic.html
> If the consumer detects a failed ReceiveMessage action, it can retry as many times as necessary, using the same receive request attempt ID. Assuming that the consumer receives at least one acknowledgement before the visibility timeout expires, multiple retries don't affect the ordering of messages.
I wasn't using a receive request attempt ID, but based on testing against an actual AWS SQS FIFO queue, the order of messages was still preserved even without one.
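To make the expected semantics concrete, here is a minimal in-memory sketch (not LocalStack's actual implementation) of one FIFO message group: after a visibility timeout expires, messages should be handed back in their original send order, because ordering is tied to the send sequence, not to when messages become visible again.

```python
class FifoGroup:
    """Minimal model of one SQS FIFO message group."""

    def __init__(self):
        self._seq = 0
        self._messages = []  # each entry: [sequence_number, body, visible]

    def send(self, body):
        self._messages.append([self._seq, body, True])
        self._seq += 1

    def receive(self, max_messages):
        # Always scan in send order; in-flight (invisible) messages are skipped.
        batch = [m for m in self._messages if m[2]][:max_messages]
        for m in batch:
            m[2] = False  # mark in flight
        return [body for _, body, _ in batch]

    def expire_visibility(self):
        # Visibility timeout elapsed: all in-flight messages become visible again.
        for m in self._messages:
            m[2] = True


group = FifoGroup()
for body in ["bar4", "bar5", "bar6"]:
    group.send(body)

first = group.receive(3)   # ["bar4", "bar5", "bar6"]
group.expire_visibility()  # nothing was deleted before the timeout
second = group.receive(3)  # same order again, not "bar6" alone or first
assert first == second == ["bar4", "bar5", "bar6"]
```

The buggy behavior observed above corresponds to the second receive returning only the last message, i.e. redelivery order diverging from send order.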
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
```yaml
version: '2.1'
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack
ports:
- "4566-4599:4566-4599"
- "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
environment:
- SERVICES=s3,sqs
- DEBUG=${DEBUG- }
- DATA_DIR=${DATA_DIR- }
- PORT_WEB_UI=${PORT_WEB_UI- }
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=${TMPDIR-/tmp/localstack}
volumes:
- "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
```
docker-compose up
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
(save as `create_queue.sh`)
```sh
#!/usr/bin/env bash
[[ -n "$1" ]] && AWS_HOST="$1" || AWS_HOST="localhost"
DEFAULT_SQS_ATTRS='{ "MessageRetentionPeriod": "10800", "ReceiveMessageWaitTimeSeconds": "10", "VisibilityTimeout": "900", "ContentBasedDeduplication": "true" }'
function create_queue() {
local region="$1"
local name="$2"
local dlq="$3"
local max_recv="$4"
local vis_timeout="$5"
[[ -n "$dlq" ]] && dlq="-DLQ"
local sqs_attributes
local fifo=""
if [[ "$name" == *".fifo" ]]; then
sqs_attributes="$(jq -r -c '.FifoQueue = "true"' <<<"$DEFAULT_SQS_ATTRS")"
name="${name/.fifo/}"
fifo=".fifo"
else
sqs_attributes="$DEFAULT_SQS_ATTRS"
fi
if [[ -n "$vis_timeout" ]]; then
sqs_attributes="$(jq -r -c --arg vis_timeout "$vis_timeout" '.VisibilityTimeout = $vis_timeout' <<<"$sqs_attributes")"
fi
local redrive_policy='{ "maxReceiveCount": "5" }'
if [[ -n "$max_recv" ]]; then
redrive_policy="$(jq -r -c --arg max_recv "$max_recv" '.maxReceiveCount = $max_recv' <<<"$redrive_policy")"
fi
if [[ -n "$dlq" ]]; then
aws --endpoint-url="http://$AWS_HOST:4566" --region "$region" sqs create-queue --queue-name "$name$dlq$fifo" --attributes "$sqs_attributes"
redrive_policy="$(jq -r -c "$(printf '.deadLetterTargetArn = "arn:aws:sqs:%s:000000000000:%s-DLQ%s"' "$region" "$name" "$fifo")" <<<"$redrive_policy")"
sqs_attributes="$(jq -r -c --arg policy "$redrive_policy" '.RedrivePolicy = $policy' <<<"$sqs_attributes")"
fi
aws --endpoint-url="http://$AWS_HOST:4566" --region "$region" sqs create-queue --queue-name "$name$fifo" --attributes "$sqs_attributes"
}
```
```sh
$ . ./create_queue.sh
$ create_queue us-east-1 test1.fifo 1 50 60
{
"QueueUrl": "http://$AWS_HOST:4566/000000000000/test1-DLQ.fifo"
}
{
"QueueUrl": "http://$AWS_HOST:4566/000000000000/test1.fifo"
}
$ aws --endpoint-url=http://$AWS_HOST:4566 sqs get-queue-attributes --queue-url http://$AWS_HOST:4566/000000000000/test1.fifo
{
"Attributes": {
"ApproximateNumberOfMessages": "0",
"ApproximateNumberOfMessagesDelayed": "0",
"ApproximateNumberOfMessagesNotVisible": "0",
"CreatedTimestamp": "1626137940.978319",
"DelaySeconds": "0",
"LastModifiedTimestamp": "1626137940.978319",
"MaximumMessageSize": "262144",
"MessageRetentionPeriod": "10800",
"QueueArn": "arn:aws:sqs:us-east-1:000000000000:test1.fifo",
"RedrivePolicy": "{\"maxReceiveCount\":50,\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:000000000000:test1-DLQ.fifo\"}",
"ReceiveMessageWaitTimeSeconds": "10",
"VisibilityTimeout": "60",
"FifoQueue": "true",
"ContentBasedDeduplication": "true"
}
}
$ aws --endpoint-url=http://$AWS_HOST:4566 sqs send-message --queue-url http://$AWS_HOST:4566/000000000000/test1.fifo --message-body '{"foo": "bar4"}' --message-group-id 'helloworld'
{
"MD5OfMessageBody": "e600d75d2e7c74c5c2299fab1762bc50",
"MessageId": "0fb8fe29-6b73-aa21-f845-03bdac37150a"
}
$ aws --endpoint-url=http://$AWS_HOST:4566 sqs send-message --queue-url http://$AWS_HOST:4566/000000000000/test1.fifo --message-body '{"foo": "bar5"}' --message-group-id 'helloworld'
{
"MD5OfMessageBody": "bbc312e807f097092fbaa4ea30859c5e",
"MessageId": "5a529b21-53e9-0154-05df-cfafd5c96e6e"
}
$ aws --endpoint-url=http://$AWS_HOST:4566 sqs send-message --queue-url http://$AWS_HOST:4566/000000000000/test1.fifo --message-body '{"foo": "bar6"}' --message-group-id 'helloworld'
{
"MD5OfMessageBody": "21b93ade231a56d0babcd3f4100a060e",
"MessageId": "9dc8d2c8-ccf7-ae90-bc40-1039fac2376c"
}
$ aws --endpoint-url=http://$AWS_HOST:4566 sqs receive-message --queue-url http://$AWS_HOST:4566/000000000000/test1.fifo --attribute-names All --message-attribute-names All --max-number-of-messages 10
{
"Messages": [
{
"MessageId": "0fb8fe29-6b73-aa21-f845-03bdac37150a",
"ReceiptHandle": "kfzfupygrguevwxpmoqewrajxtrpuedcptmmxsnqbchlczneuduxpwtnntibfqzgnkbbqwjjukghgfgjloicgpylulswbrtihdhmfzracmfsvuygczwaakuxfoeecduzoaxbslnxdbjgrivpblnzmienvtuycsvfxawnffacxmbpgssteoopsslmx",
"MD5OfBody": "e600d75d2e7c74c5c2299fab1762bc50",
"Body": "{\"foo\": \"bar4\"}",
"Attributes": {
"SenderId": "AIDAIT2UOQQY3AUEKVGXU",
"SentTimestamp": "1626138003146",
"ApproximateReceiveCount": "1",
"ApproximateFirstReceiveTimestamp": "1626138037904",
"MessageDeduplicationId": "e03bde47f4d505a2b7d6cc082e0424fed2fad73c67f0f16d0602db4d49dd0fc4",
"MessageGroupId": "helloworld"
}
},
{
"MessageId": "5a529b21-53e9-0154-05df-cfafd5c96e6e",
"ReceiptHandle": "cbecbinornqpeegsesrpukgnitwnuzspyxrfgzzauzwcgsqsgdcsicfjsepnqxggokhtqildwmjltsuvgjtbkvtkgnqvojgsqzljlkjabyeobwbcanftyqfopuxqnqebudxunjnsnkfydifhfdjbijteenqjyylgzdercbcvsknxrkalxkeksvcaa",
"MD5OfBody": "bbc312e807f097092fbaa4ea30859c5e",
"Body": "{\"foo\": \"bar5\"}",
"Attributes": {
"SenderId": "AIDAIT2UOQQY3AUEKVGXU",
"SentTimestamp": "1626138007641",
"ApproximateReceiveCount": "1",
"ApproximateFirstReceiveTimestamp": "1626138037905",
"MessageDeduplicationId": "ee6e00124d6352962c5ad9bd21738086b814eec5b739da18c3aaa7ebe66d9aea",
"MessageGroupId": "helloworld"
}
},
{
"MessageId": "9dc8d2c8-ccf7-ae90-bc40-1039fac2376c",
"ReceiptHandle": "rgfhtlxitiwmhuqqbwmxlossabddczvqjkwaznvknusxqrokfndqokcewkkmztmimgkpyvvmqynnpmjvjyksoymnohxitznkwnrpfxlusjtsijbvncbqwtidxfweorzvnqkjwyzurkjqnxcmsojfqwdvoquckhwtexmuoplqetvksssxtniwhfrhx",
"MD5OfBody": "21b93ade231a56d0babcd3f4100a060e",
"Body": "{\"foo\": \"bar6\"}",
"Attributes": {
"SenderId": "AIDAIT2UOQQY3AUEKVGXU",
"SentTimestamp": "1626138011355",
"ApproximateReceiveCount": "1",
"ApproximateFirstReceiveTimestamp": "1626138037906",
"MessageDeduplicationId": "606514bb7bae1a6bdde8176c07d192b8d7d596c8b33cd611afaf5fc53f2c00f4",
"MessageGroupId": "helloworld"
}
}
]
}
$ aws --endpoint-url=http://$AWS_HOST:4566 sqs receive-message --queue-url http://$AWS_HOST:4566/000000000000/test1.fifo --attribute-names All --message-attribute-names All --max-number-of-messages 10
{
"Messages": [
{
"MessageId": "9dc8d2c8-ccf7-ae90-bc40-1039fac2376c",
"ReceiptHandle": "jdbonffjtigvdhabkbyhebzxthmoyqpjcicylxsufullmsjkywvrkwejyvpddaswmvznkggrbflcaoqhayhokflzfastgvwltmyjuhwbpaslbvguagnojvmmvqoxwoehwtpfwesjkpbotpqawggysosqwtpkircawnehpzxustijeyhpezrutifwa",
"MD5OfBody": "21b93ade231a56d0babcd3f4100a060e",
"Body": "{\"foo\": \"bar6\"}",
"Attributes": {
"SenderId": "AIDAIT2UOQQY3AUEKVGXU",
"SentTimestamp": "1626138011355",
"ApproximateReceiveCount": "2",
"ApproximateFirstReceiveTimestamp": "1626138037906",
"MessageDeduplicationId": "606514bb7bae1a6bdde8176c07d192b8d7d596c8b33cd611afaf5fc53f2c00f4",
"MessageGroupId": "helloworld"
}
}
]
}
```
### Environment
```markdown
- OS: Arch Linux
- LocalStack version: 0.12.15
- LocalStack build date: 2021-07-12
- LocalStack build git hash: 01e81ec3
```
### Anything else?
_No response_ | https://github.com/localstack/localstack/issues/4287 | https://github.com/localstack/localstack/pull/6578 | a364f14e880a8f66a5ee9d1ee5b34ca5f3ac409b | a65e0c26c236c7c240530a0fa73eda6e187f182f | "2021-07-13T01:49:47Z" | python | "2022-08-03T13:23:37Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,227 | ["localstack/services/install.py", "localstack/services/kinesis/kinesis_starter.py"] | Startup fails with DATA_DIR and kinesis-mock | # Type of request: This is a ...
bug report
# Detailed description
LocalStack fails to start correctly when `DATA_DIR` is set and the kinesis-mock Kinesis provider is used
## Expected behavior
LocalStack should start correctly
## Actual behavior
Instead, startup repeatedly logs errors like these:
```
76b772e7226f_localstackmain |
76b772e7226f_localstackmain | Traceback (most recent call last):
76b772e7226f_localstackmain | File "/opt/code/localstack/localstack/utils/server/http2_server.py", line 129, in index
76b772e7226f_localstackmain | raise result
76b772e7226f_localstackmain | File "/opt/code/localstack/localstack/utils/bootstrap.py", line 701, in run
76b772e7226f_localstackmain | result = self.func(self.params, **kwargs)
76b772e7226f_localstackmain | File "/opt/code/localstack/localstack/utils/async_utils.py", line 29, in _run
76b772e7226f_localstackmain | return fn(*args, **kwargs)
76b772e7226f_localstackmain | File "/opt/code/localstack/localstack/services/generic_proxy.py", line 428, in handler
76b772e7226f_localstackmain | client_address=request.remote_addr, server_address=parsed_url.netloc)
76b772e7226f_localstackmain | File "/opt/code/localstack/localstack/services/generic_proxy.py", line 244, in modify_and_forward
76b772e7226f_localstackmain | listener_result = listener.forward_request(method=method, path=path, data=data, headers=headers)
76b772e7226f_localstackmain | File "/opt/code/localstack/localstack/services/edge.py", line 122, in forward_request
76b772e7226f_localstackmain | return do_forward_request(api, method, path, data, headers, port=port)
76b772e7226f_localstackmain | File "/opt/code/localstack/localstack/services/edge.py", line 143, in do_forward_request
76b772e7226f_localstackmain | result = do_forward_request_inmem(api, method, path, data, headers, port=port)
76b772e7226f_localstackmain | File "/opt/code/localstack/localstack/services/edge.py", line 165, in do_forward_request_inmem
76b772e7226f_localstackmain | client_address=client_address, server_address=server_address)
76b772e7226f_localstackmain | File "/opt/code/localstack/localstack/services/generic_proxy.py", line 290, in modify_and_forward
76b772e7226f_localstackmain | response = requests.request(method, request_url, data=data_to_send, headers=headers, stream=True, verify=False)
76b772e7226f_localstackmain | File "/opt/code/localstack/.venv/lib/python3.7/site-packages/requests/api.py", line 61, in request
76b772e7226f_localstackmain | return session.request(method=method, url=url, **kwargs)
76b772e7226f_localstackmain | File "/opt/code/localstack/.venv/lib/python3.7/site-packages/requests/sessions.py", line 542, in request
76b772e7226f_localstackmain | resp = self.send(prep, **send_kwargs)
76b772e7226f_localstackmain | File "/opt/code/localstack/.venv/lib/python3.7/site-packages/requests/sessions.py", line 655, in send
76b772e7226f_localstackmain | r = adapter.send(request, **kwargs)
76b772e7226f_localstackmain | File "/opt/code/localstack/.venv/lib/python3.7/site-packages/requests/adapters.py", line 516, in send
76b772e7226f_localstackmain | raise ConnectionError(e, request=request)
76b772e7226f_localstackmain | requests.exceptions.ConnectionError: MyHTTPConnectionPool(host='localhost', port=56787): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fbe40c9aed0>: Failed to establish a new connection: [Errno 111] Connection refused'))
76b772e7226f_localstackmain |
76b772e7226f_localstackmain | 2021-06-28T17:54:34:WARNING:localstack.services.plugins: Service "kinesis" not yet available, retrying...
^CGracefully stopping... (press Ctrl+C again to force)
```
# Steps to reproduce
## Command used to start LocalStack
DATA_DIR=/tmp/localstack/data DEBUG=1 bin/localstack start --host
| https://github.com/localstack/localstack/issues/4227 | https://github.com/localstack/localstack/pull/4269 | 4332042008d9d977c64b4db6e6052d0758c1ff36 | 38408897dd5a552a254b6d0e617dfc45afc68018 | "2021-06-28T19:25:20Z" | python | "2021-07-06T23:17:32Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,203 | ["localstack/services/cloudformation/service_models.py", "localstack/utils/cloudformation/template_deployer.py", "tests/integration/cloudformation/test_cloudformation_apigateway.py", "tests/integration/fixtures.py", "tests/integration/templates/apigw-awsintegration-request-parameters.yaml"] | cloudformation: not creating request parameters in api gateway | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
When you create a CloudFormation stack with an API Gateway / S3 integration, the CloudFormation template itself is fine, but the request parameters are missing from the resulting method integration. This works correctly in the AWS cloud.
## Expected behavior
The request parameters should be created, as they are in real AWS.

## Actual behavior

The request parameters are not created; the resulting method integration has an empty `requestParameters` map.
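One way to make the gap concrete is to walk the `get-resources` output and flag integrations whose `requestParameters` came back empty. This is a hypothetical helper (not part of any test suite); the key names follow the API Gateway response shape shown at the end of this report:

```python
def empty_integration_params(resources):
    """Yield (path, method) pairs whose integration lost its requestParameters."""
    for resource in resources.get("items", []):
        for method, cfg in resource.get("resourceMethods", {}).items():
            integration = cfg.get("methodIntegration", {})
            if not integration.get("requestParameters"):
                yield resource["path"], method


# Trimmed-down sample mirroring the buggy get-resources output below.
sample = {
    "items": [
        {
            "path": "/",
            "resourceMethods": {
                "POST": {"methodIntegration": {"requestParameters": {}}}
            },
        }
    ]
}
assert list(empty_integration_params(sample)) == [("/", "POST")]
```

Against a correct deployment, every integration defined with `RequestParameters` in the template would be absent from this helper's output.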
# Steps to reproduce
```yaml
Resources:
DevOfferApi161BA234:
Type: AWS::ApiGateway::RestApi
Properties:
EndpointConfiguration:
Types:
- REGIONAL
Name: DevOfferApi
Metadata:
aws:cdk:path: OfferApiStack/DevOfferApi/Resource
DevOfferApiCloudWatchRole18727FB4:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Statement:
- Action: sts:AssumeRole
Effect: Allow
Principal:
Service: apigateway.amazonaws.com
Version: "2012-10-17"
ManagedPolicyArns:
- Fn::Join:
- ""
- - "arn:"
- Ref: AWS::Partition
- :iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs
Metadata:
aws:cdk:path: OfferApiStack/DevOfferApi/CloudWatchRole/Resource
DevOfferApiAccount61030BB1:
Type: AWS::ApiGateway::Account
Properties:
CloudWatchRoleArn:
Fn::GetAtt:
- DevOfferApiCloudWatchRole18727FB4
- Arn
DependsOn:
- DevOfferApi161BA234
Metadata:
aws:cdk:path: OfferApiStack/DevOfferApi/Account
DevOfferApiDeployment67942EBFb7bddcff9bebe189def1e5b267a6cc3f:
Type: AWS::ApiGateway::Deployment
Properties:
RestApiId:
Ref: DevOfferApi161BA234
Description: Automatically created by the RestApi construct
DependsOn:
- DevOfferApiidDELETE322BCA4A
- DevOfferApiidGET0A920ED1
- DevOfferApiidPUTF9263322
- DevOfferApiidC396C3C1
- DevOfferApiPOST14096B08
- DevOfferApiDefaultValidator74076375
- newoffer4C59669F
Metadata:
aws:cdk:path: OfferApiStack/DevOfferApi/Deployment/Resource
DevOfferApiDeploymentStageprod61EB7D20:
Type: AWS::ApiGateway::Stage
Properties:
RestApiId:
Ref: DevOfferApi161BA234
DeploymentId:
Ref: DevOfferApiDeployment67942EBFb7bddcff9bebe189def1e5b267a6cc3f
StageName: prod
Metadata:
aws:cdk:path: OfferApiStack/DevOfferApi/DeploymentStage.prod/Resource
DevOfferApiidC396C3C1:
Type: AWS::ApiGateway::Resource
Properties:
ParentId:
Fn::GetAtt:
- DevOfferApi161BA234
- RootResourceId
PathPart: "{id}"
RestApiId:
Ref: DevOfferApi161BA234
Metadata:
aws:cdk:path: OfferApiStack/DevOfferApi/Default/{id}/Resource
DevOfferApiidGET0A920ED1:
Type: AWS::ApiGateway::Method
Properties:
HttpMethod: GET
ResourceId:
Ref: DevOfferApiidC396C3C1
RestApiId:
Ref: DevOfferApi161BA234
AuthorizationType: NONE
Integration:
Credentials:
Fn::GetAtt:
- OfferIntegrationsroleA20FBA4E
- Arn
IntegrationHttpMethod: GET
IntegrationResponses:
- StatusCode: "200"
RequestParameters:
integration.request.path.object: method.request.path.id
Type: AWS
Uri:
Fn::Join:
- ""
- - "arn:"
- Ref: AWS::Partition
- ":apigateway:"
- Ref: AWS::Region
- :s3:path/
- Ref: OfferIntegrationsofferbucketB94C26F7
- /{object}.json
MethodResponses:
- StatusCode: "200"
RequestParameters:
method.request.path.id: false
Metadata:
aws:cdk:path: OfferApiStack/DevOfferApi/Default/{id}/GET/Resource
DevOfferApiidDELETE322BCA4A:
Type: AWS::ApiGateway::Method
Properties:
HttpMethod: DELETE
ResourceId:
Ref: DevOfferApiidC396C3C1
RestApiId:
Ref: DevOfferApi161BA234
AuthorizationType: NONE
Integration:
Credentials:
Fn::GetAtt:
- OfferIntegrationsroleA20FBA4E
- Arn
IntegrationHttpMethod: DELETE
IntegrationResponses:
- StatusCode: "200"
RequestParameters:
integration.request.path.object: method.request.path.id
Type: AWS
Uri:
Fn::Join:
- ""
- - "arn:"
- Ref: AWS::Partition
- ":apigateway:"
- Ref: AWS::Region
- :s3:path/
- Ref: OfferIntegrationsofferbucketB94C26F7
- /{object}.json
MethodResponses:
- StatusCode: "200"
RequestParameters:
method.request.path.id: false
Metadata:
aws:cdk:path: OfferApiStack/DevOfferApi/Default/{id}/DELETE/Resource
DevOfferApiidPUTF9263322:
Type: AWS::ApiGateway::Method
Properties:
HttpMethod: PUT
ResourceId:
Ref: DevOfferApiidC396C3C1
RestApiId:
Ref: DevOfferApi161BA234
AuthorizationType: NONE
Integration:
Credentials:
Fn::GetAtt:
- OfferIntegrationsroleA20FBA4E
- Arn
IntegrationHttpMethod: PUT
IntegrationResponses:
- StatusCode: "200"
RequestParameters:
integration.request.path.object: method.request.path.id
integration.request.header.Content-Type: method.request.header.Content-Type
Type: AWS
Uri:
Fn::Join:
- ""
- - "arn:"
- Ref: AWS::Partition
- ":apigateway:"
- Ref: AWS::Region
- :s3:path/
- Ref: OfferIntegrationsofferbucketB94C26F7
- /{object}.json
MethodResponses:
- StatusCode: "200"
RequestModels:
application/json:
Ref: newoffer4C59669F
RequestParameters:
method.request.path.id: false
method.request.header.Content-Type: false
RequestValidatorId:
Ref: DevOfferApiDefaultValidator74076375
Metadata:
aws:cdk:path: OfferApiStack/DevOfferApi/Default/{id}/PUT/Resource
DevOfferApiPOST14096B08:
Type: AWS::ApiGateway::Method
Properties:
HttpMethod: POST
ResourceId:
Fn::GetAtt:
- DevOfferApi161BA234
- RootResourceId
RestApiId:
Ref: DevOfferApi161BA234
AuthorizationType: NONE
Integration:
Credentials:
Fn::GetAtt:
- OfferIntegrationsroleA20FBA4E
- Arn
IntegrationHttpMethod: PUT
IntegrationResponses:
- ResponseTemplates:
application/json: $context.requestId
StatusCode: "200"
RequestParameters:
integration.request.path.object: context.requestId
integration.request.header.Content-Type: method.request.header.Content-Type
Type: AWS
Uri:
Fn::Join:
- ""
- - "arn:"
- Ref: AWS::Partition
- ":apigateway:"
- Ref: AWS::Region
- :s3:path/
- Ref: OfferIntegrationsofferbucketB94C26F7
- /{object}.json
MethodResponses:
- StatusCode: "200"
RequestModels:
application/json:
Ref: newoffer4C59669F
RequestParameters:
method.request.header.Content-Type: false
RequestValidatorId:
Ref: DevOfferApiDefaultValidator74076375
Metadata:
aws:cdk:path: OfferApiStack/DevOfferApi/Default/POST/Resource
DevOfferApiDefaultValidator74076375:
Type: AWS::ApiGateway::RequestValidator
Properties:
RestApiId:
Ref: DevOfferApi161BA234
ValidateRequestBody: true
ValidateRequestParameters: true
Metadata:
aws:cdk:path: OfferApiStack/DevOfferApi/DefaultValidator/Resource
newoffer4C59669F:
Type: AWS::ApiGateway::Model
Properties:
RestApiId:
Ref: DevOfferApi161BA234
ContentType: application/json
Name: NewOffer
Schema:
type: object
required:
- productId
- totalPremium
- totalNetPremium
properties:
productId:
type: string
description: ID des Produkts
totalPremium:
type: number
description: Gesamtbeitrag
totalNetPremium:
type: number
description: Gesamtnettobeitrag
components:
type: array
items:
type: object
required:
- startOfInsurance
- paymentMode
- typeOfDefault
- targetAmount
- typeOfRuntime
- runningTime
- appropriationOfProfits
- entryAge
- durationOfInsurance
- occupationalGroup
- sumInsured
- premium
- netPremium
- totalPayoutAmount
properties:
productName:
type: string
description: name of product
smoker:
type: boolean
homeowner:
type: boolean
jobPosition:
type: string
graduation:
type: string
jobDetails:
properties:
personnelResponsibility:
properties:
hasResponsibility:
type: boolean
numberOfEmployees:
type: number
homeOfficePercentage:
type: number
officePercentage:
type: number
travelPercentage:
type: number
physicalWorkPercentage:
type: number
smokingBehaviour:
type: number
description: Rauch-Verhalten
productConditions:
type: string
description: Konditionen
startOfInsurance:
type: string
format: date
description: Beginn
paymentMode:
type: string
enum:
- monthly
- quarterly
- semiAnnually
- annually
description: Inkasso-Zahlweise
typeOfDefault:
type: string
enum:
- premium
- sumInsured
- totalPayoutAmount
description: Art der Vorgabe
targetAmount:
type: number
description: Vorgabesumme
typeOfRuntime:
type: string
enum:
- runningTime
- endingAge
description: Art der Laufzeit
runningTime:
type: number
description: Laufzeit
appropriationOfProfits:
type: string
enum:
- instantDiscount
- deathBenefit
description: Gewinnverwendung
entryAge:
type: number
description: Eintrittsalter
durationOfInsurance:
type: number
description: Versicherungsdauer
jobTitle:
type: string
occupationalGroup:
type: string
enum:
- A
- B
description: Berufsgruppe
sumInsured:
type: number
description: Versicherungssumme
premium:
type: number
description: Beitrag
netPremium:
type: number
description: Nettobeitrag
totalPayoutAmount:
type: number
description: Gesamtauszahlungssumme
riskCalculation:
properties:
riskResult:
type: string
enum:
- normal
- extra-charge
- riskClause
- expert-decision
riskExtraCharge:
type: number
$schema: http://json-schema.org/draft-04/schema#
Metadata:
aws:cdk:path: OfferApiStack/new-offer/Resource
OfferIntegrationsofferbucketB94C26F7:
Type: AWS::S3::Bucket
UpdateReplacePolicy: Retain
DeletionPolicy: Retain
Metadata:
aws:cdk:path: OfferApiStack/OfferIntegrations/offer-bucket/Resource
OfferIntegrationsroleA20FBA4E:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Statement:
- Action: sts:AssumeRole
Effect: Allow
Principal:
Service: apigateway.amazonaws.com
Version: "2012-10-17"
Path: /service-role/
Metadata:
aws:cdk:path: OfferApiStack/OfferIntegrations/role/Resource
OfferIntegrationsroleDefaultPolicyB9943AE2:
Type: AWS::IAM::Policy
Properties:
PolicyDocument:
Statement:
- Action:
- s3:GetObject*
- s3:GetBucket*
- s3:List*
- s3:DeleteObject*
- s3:PutObject
- s3:Abort*
Effect: Allow
Resource:
- Fn::GetAtt:
- OfferIntegrationsofferbucketB94C26F7
- Arn
- Fn::Join:
- ""
- - Fn::GetAtt:
- OfferIntegrationsofferbucketB94C26F7
- Arn
- /*
Version: "2012-10-17"
PolicyName: OfferIntegrationsroleDefaultPolicyB9943AE2
Roles:
- Ref: OfferIntegrationsroleA20FBA4E
Metadata:
aws:cdk:path: OfferApiStack/OfferIntegrations/role/DefaultPolicy/Resource
CDKMetadata:
Type: AWS::CDK::Metadata
Properties:
Analytics: v2:deflate64:H4sIAAAAAAAAE12QTW/DIAyGf8vulDbapF3XdteqUybtjsDr3BCcgVkVIf77ILTax8mPX+zXNp3sNo9yc/ekLmGlzbBOmjzI9MpKD2L/7o6Rp8hiTy6wj5qr1kOg6DVULg8GGcllUS2SmvCkGC5qlqnU8XbCa8sNt1pTdCyeYbI0j+AWz19ZmX1avBvcpu1U+En+rHEA/iBTpSv18BnLwDdl0Sgm36r/aQcyYJemClmgGsvKZJt3jS9kUc81bZRFuJdpF/UAy86Ncs7CFQt5Duuv7kF29T/PAXHly5k4guxb/Ab5F452bAEAAA==
Metadata:
aws:cdk:path: OfferApiStack/CDKMetadata/Default
Condition: CDKMetadataAvailable
Outputs:
DevOfferApiEndpoint5E4AE11B:
Value:
Fn::Join:
- ""
- - https://
- Ref: DevOfferApi161BA234
- .execute-api.
- Ref: AWS::Region
- "."
- Ref: AWS::URLSuffix
- /
- Ref: DevOfferApiDeploymentStageprod61EB7D20
- /
Conditions:
CDKMetadataAvailable:
Fn::Or:
- Fn::Or:
- Fn::Equals:
- Ref: AWS::Region
- af-south-1
- Fn::Equals:
- Ref: AWS::Region
- ap-east-1
- Fn::Equals:
- Ref: AWS::Region
- ap-northeast-1
- Fn::Equals:
- Ref: AWS::Region
- ap-northeast-2
- Fn::Equals:
- Ref: AWS::Region
- ap-south-1
- Fn::Equals:
- Ref: AWS::Region
- ap-southeast-1
- Fn::Equals:
- Ref: AWS::Region
- ap-southeast-2
- Fn::Equals:
- Ref: AWS::Region
- ca-central-1
- Fn::Equals:
- Ref: AWS::Region
- cn-north-1
- Fn::Equals:
- Ref: AWS::Region
- cn-northwest-1
- Fn::Or:
- Fn::Equals:
- Ref: AWS::Region
- eu-central-1
- Fn::Equals:
- Ref: AWS::Region
- eu-north-1
- Fn::Equals:
- Ref: AWS::Region
- eu-south-1
- Fn::Equals:
- Ref: AWS::Region
- eu-west-1
- Fn::Equals:
- Ref: AWS::Region
- eu-west-2
- Fn::Equals:
- Ref: AWS::Region
- eu-west-3
- Fn::Equals:
- Ref: AWS::Region
- me-south-1
- Fn::Equals:
- Ref: AWS::Region
- sa-east-1
- Fn::Equals:
- Ref: AWS::Region
- us-east-1
- Fn::Equals:
- Ref: AWS::Region
- us-east-2
- Fn::Or:
- Fn::Equals:
- Ref: AWS::Region
- us-west-1
- Fn::Equals:
- Ref: AWS::Region
- us-west-2
```
Inspecting the created API Gateway, the integration request parameters are empty:
```json
{
"items": [
{
"id": "7sw7q6lreq",
"path": "/",
"resourceMethods": {
"POST": {
"httpMethod": "POST",
"authorizationType": "NONE",
"apiKeyRequired": false,
"requestParameters": {
"method.request.header.Content-Type": false
},
"methodIntegration": {
"type": "AWS",
"httpMethod": "PUT",
"uri": "arn:aws:apigateway:us-east-1:s3:path/offerapistack-offerintegrationsofferbucketb94c26f7-43d1b30b/{object}.json",
"requestParameters": {},
"passthroughBehavior": "WHEN_NO_MATCH",
"cacheNamespace": "4d2918ca",
"cacheKeyParameters": [],
"integrationResponses": {}
}
}
}
},
{
"id": "6wov03mq4d",
"parentId": "7sw7q6lreq",
"pathPart": "{id}",
"path": "/{id}",
"resourceMethods": {
"GET": {
"httpMethod": "GET",
"authorizationType": "NONE",
"apiKeyRequired": false,
"requestParameters": {
"method.request.path.id": false
},
"methodIntegration": {
"type": "AWS",
"httpMethod": "GET",
"uri": "arn:aws:apigateway:us-east-1:s3:path/offerapistack-offerintegrationsofferbucketb94c26f7-43d1b30b/{object}.json",
"requestParameters": {},
"passthroughBehavior": "WHEN_NO_MATCH",
"cacheNamespace": "0a506b54",
"cacheKeyParameters": [],
"integrationResponses": {}
}
},
"DELETE": {
"httpMethod": "DELETE",
"authorizationType": "NONE",
"apiKeyRequired": false,
"requestParameters": {
"method.request.path.id": false
},
"methodIntegration": {
"type": "AWS",
"httpMethod": "DELETE",
"uri": "arn:aws:apigateway:us-east-1:s3:path/offerapistack-offerintegrationsofferbucketb94c26f7-43d1b30b/{object}.json",
"requestParameters": {},
"passthroughBehavior": "WHEN_NO_MATCH",
"cacheNamespace": "f7f7182e",
"cacheKeyParameters": [],
"integrationResponses": {}
}
},
"PUT": {
"httpMethod": "PUT",
"authorizationType": "NONE",
"apiKeyRequired": false,
"requestParameters": {
"method.request.path.id": false,
"method.request.header.Content-Type": false
},
"methodIntegration": {
"type": "AWS",
"httpMethod": "PUT",
"uri": "arn:aws:apigateway:us-east-1:s3:path/offerapistack-offerintegrationsofferbucketb94c26f7-43d1b30b/{object}.json",
"requestParameters": {},
"passthroughBehavior": "WHEN_NO_MATCH",
"cacheNamespace": "27674016",
"cacheKeyParameters": [],
"integrationResponses": {}
}
}
}
}
]
}
```
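For reference, a small helper (hypothetical, for illustration only) that scans a `get-resources` response like the one above and lists the methods whose integration `requestParameters` came back empty:

```python
def methods_with_empty_integration_params(resources: dict) -> list:
    """Return (path, method) pairs whose integration lost its requestParameters."""
    missing = []
    for item in resources.get("items", []):
        for method, cfg in item.get("resourceMethods", {}).items():
            integration = cfg.get("methodIntegration", {})
            # An empty (or absent) mapping means the configured parameters were dropped.
            if not integration.get("requestParameters"):
                missing.append((item.get("path"), method))
    return missing
```

Running this over the payload above flags every method, since all integrations show `"requestParameters": {}`.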
## Command used to start LocalStack
docker-compose:
```yaml
version: '3'
services:
localstack:
image: localstack/localstack
container_name: localstack
network_mode: bridge
ports:
- "4566:4566"
- "4571:4571"
- '8055:8080'
environment:
- SERVICES=s3,apigateway,cloudformation,iam
- DATA_DIR=/tmp/localstack/data
volumes:
- './.localstack:/tmp/localstack'
- '/var/run/docker.sock:/var/run/docker.sock'
```
...
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
CDK code:
```typescript
const defaultRequestParameters = {
'integration.request.path.object': 'method.request.path.id'
};
this.postIntegration = new apigateway.AwsIntegration(this.createIntegrationProps('PUT',
this.bucket.bucketName, {
'integration.request.path.object': 'context.requestId',
'integration.request.header.Content-Type': 'method.request.header.Content-Type'
}, this.executeRole, {'application/json': '$context.requestId'}));
this.getIntegration = new apigateway.AwsIntegration(this.createIntegrationProps('GET',
this.bucket.bucketName, {...defaultRequestParameters}, this.executeRole));
this.putIntegration = new apigateway.AwsIntegration(this.createIntegrationProps('PUT',
this.bucket.bucketName, {
...defaultRequestParameters,
'integration.request.header.Content-Type': 'method.request.header.Content-Type'
}, this.executeRole));
this.deleteIntegration = new apigateway.AwsIntegration(this.createIntegrationProps('DELETE',
this.bucket.bucketName, {...defaultRequestParameters}, this.executeRole));
```
...
| https://github.com/localstack/localstack/issues/4203 | https://github.com/localstack/localstack/pull/4494 | 06fea6560c9c60b7d04048227a7d8585c3b3ed5b | af040ebd8ac36d1e69b369643967d2f2259c3f30 | "2021-06-25T07:10:19Z" | python | "2021-08-26T17:48:13Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,200 | [".circleci/config.yml", ".coveragerc", ".coveralls.yml", "Dockerfile", "Makefile", "requirements.txt"] | Fix coveralls coverage reporting | It seems the nosetest run isn't collecting the coverage data correctly, as the .coverage database is empty after a docker build. Therefore the coveralls script isn't reporting anything.
| https://github.com/localstack/localstack/issues/4200 | https://github.com/localstack/localstack/pull/4206 | 3c44321d8c4e10065b499c2f1c8d53f82266cb13 | 18f6351d4bcf7f96c1ebd723a2b3cafcc748e07e | "2021-06-24T22:53:44Z" | python | "2021-06-28T15:17:03Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,130 | ["localstack/utils/cloudformation/template_deployer.py", "tests/integration/cloudformation/test_cloudformation_lambda.py", "tests/integration/templates/cfn_lambda_noname.yaml"] | [PRO] Use construct ID in CDK-deployed lambda function names | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[ ] bug report
[x] feature request
# Detailed description
* Define a lambda function in a CDK stack (e.g. "TestStack"), giving it a descriptive ID value (e.g. "DescriptiveName")
* Deploy the CDK template (`cdklocal deploy`)
* Check the function's name in the dashboard
## Expected behavior
The function's name includes its CDK construct ID. For example, deploying the template to AWS produces a function name of the form `<StackName>-<FunctionID><mishmash>` (e.g. TestStack-DescriptiveName8C62FF46-hXye7CyPgrc7)
## Actual behavior
The function's name has the form `<StackName>-lambda-<mishmash>` (e.g. TestStack-lambda-cb0e840d), making it harder to identify in the LocalStack dashboard when there are several lambdas.
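For illustration, AWS-style physical names embed the logical (construct) ID between the stack name and a random suffix. A minimal sketch of that convention, assuming the `<StackName>-<FunctionID>-<suffix>` shape shown in the AWS-deployed example above:

```python
import random
import string


def physical_function_name(stack_name: str, logical_id: str) -> str:
    """Build <StackName>-<LogicalID>-<suffix>, mirroring the AWS-deployed example."""
    suffix = "".join(random.choices(string.ascii_letters + string.digits, k=12))
    return f"{stack_name}-{logical_id}-{suffix}"
```

Keeping the logical ID in the name is what makes each function recognizable when many lambdas share a stack.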
# Steps to reproduce
See detailed description
## Command used to start LocalStack
`LOCALSTACK_API_KEY=... localstack start`
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
(none) | https://github.com/localstack/localstack/issues/4130 | https://github.com/localstack/localstack/pull/4495 | af040ebd8ac36d1e69b369643967d2f2259c3f30 | 9ce7c892c6351517d3a5125e5317f9a3bc7d4363 | "2021-06-10T19:17:54Z" | python | "2021-08-26T17:51:53Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,126 | ["localstack/services/apigateway/apigateway_listener.py", "tests/integration/lambdas/lambda_integration.py", "tests/integration/test_api_gateway.py"] | The isBase64Encoded flag is not respected in API Gateway | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[ ] bug report
[x] feature request
# Detailed description
The API Gateway response body only accepts strings. To transfer binary data, it needs to be base64 encoded, inserted as a string in the response body, and the flag `isBase64Encoded` must be set. Once this is done, API Gateway will base64 decode the response and return the binary content to the requester.
## Expected behavior
When setting the isBase64Encoded flag, the response is base64 decoded before being returned from API Gateway.
## Actual behavior
Setting the isBase64Encoded flag has no effect, the response body will contain the base64 encoded string instead of binary data.
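A minimal sketch of the decoding step the gateway should perform — an illustration of the expected behavior, not LocalStack's actual implementation:

```python
import base64


def render_response_body(lambda_result: dict) -> bytes:
    """Turn a Lambda proxy result into the bytes API Gateway should return."""
    body = lambda_result.get("body", "")
    if lambda_result.get("isBase64Encoded"):
        # Decode back to the original binary payload before responding.
        return base64.b64decode(body)
    return body.encode("utf-8")
```

With `isBase64Encoded` unset, the body passes through unchanged; with it set, the requester receives the raw binary content.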
# Steps to reproduce
## Command used to start LocalStack
Docker compose config:
```
version: '2.1'
services:
docker-host:
image: qoomon/docker-host:2.5.0
cap_add: [ 'NET_ADMIN', 'NET_RAW' ]
mem_limit: 20M
restart: on-failure
localstack:
image: localstack/localstack:0.12.10
depends_on: [ docker-host ]
ports:
- "4566:4566"
- "4571:4571"
environment:
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=${TMPDIR}
- DATA_DIR=${TMPDIR}/data
- DEFAULT_REGION=eu-west-1
- LAMBDA_REMOTE_DOCKER=0
- LAMBDA_EXECUTOR=docker-reuse
- LAMBDA_DOCKER_NETWORK=scripts_default
- AWS_CBOR_DISABLE=true
volumes:
- "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
Need to set up a Lambda Rest API that returns a base64 encoded string, where the flag `isBase64Encoded` is set. The returned response should reflect the base64 decoded version of the string.
| https://github.com/localstack/localstack/issues/4126 | https://github.com/localstack/localstack/pull/4212 | 9a36099bfcde9e8cc50fe003a3a8e970388fc178 | 045489b2a0b13260722add9c55c70ec6af86ce81 | "2021-06-10T08:27:05Z" | python | "2021-06-27T09:29:13Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,108 | ["Dockerfile", "requirements.txt"] | ModuleNotFoundError while running awslocal with Docker image 0.12.12 | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x ] bug report
[ ] feature request
# Detailed description
When using the latest LocalStack Docker image (version 0.12.12), running any 'awslocal' command fails with:
```
ModuleNotFoundError: No module named 'localstack_client'
```
## Expected behavior
Running 'awslocal' executes without errors.
## Actual behavior
The following error is raised:
```
Traceback (most recent call last):
File "/usr/bin/awslocal", line 34, in <module>
from localstack_client import config # noqa: E402
ModuleNotFoundError: No module named 'localstack_client'
```
| https://github.com/localstack/localstack/issues/4108 | https://github.com/localstack/localstack/pull/4109 | b0f71ab689aba14921ec4d26eed60e37ae885bd1 | ffb29b06e15fdce9123899c8a3d4a20945c2ce2d | "2021-06-07T06:50:52Z" | python | "2021-06-07T12:59:57Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,095 | ["localstack/services/kinesis/kinesis_listener.py", "tests/integration/test_kinesis.py"] | kinesis subscribe_to_shard decode error when putRecord with data of type string using boto3 | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
Using LocalStack as an AWS Kinesis service mock, try to put a record with raw string data:
```python
client.put_record(StreamName=stream_name, Data="raw_string", PartitionKey="MyParticonKey")
```
Then get the record using `subscribe_to_shard`
```python
result = self._client.subscribe_to_shard(
    ConsumerARN=self._consumer_arn, ShardId='shardId-000000000000', StartingPosition=start_position)
event_stream_iterator = result['EventStream']
event = next(event_stream_iterator.__iter__())
```
And there is a `binascii.Error: Incorrect padding`. Backtrace:
```python
> /root/watercube/ms/aws/kds_consumer.py(188)_handle_event_stream()
-> event = next(event_stream_iterator.__iter__())
/usr/local/lib/python3.7/site-packages/botocore/eventstream.py(571)__iter__()
-> parsed_event = self._parse_event(event)
/usr/local/lib/python3.7/site-packages/botocore/eventstream.py(584)_parse_event()
-> parsed_response = self._parser.parse(response_dict, self._output_shape)
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(242)parse()
-> parsed = self._do_parse(response, shape)
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(644)_do_parse()
-> final_parsed[event_type] = self._do_parse(response, event_shape)
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(648)_do_parse()
-> self._parse_payload(response, shape, shape.members, final_parsed)
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(689)_parse_payload()
-> body_parsed = self._parse_shape(shape, original_parsed)
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(302)_parse_shape()
-> return handler(shape, node)
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(572)_handle_structure()
-> raw_value)
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(302)_parse_shape()
-> return handler(shape, node)
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(310)_handle_list()
-> parsed.append(self._parse_shape(member_shape, item))
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(302)_parse_shape()
-> return handler(shape, node)
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(572)_handle_structure()
-> raw_value)
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(302)_parse_shape()
-> return handler(shape, node)
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(586)_handle_blob()
-> return self._blob_parser(value)
/usr/local/lib/python3.7/site-packages/botocore/parsers.py(215)_default_blob_parser()
-> return base64.b64decode(value)
/usr/local/lib/python3.7/base64.py(87)b64decode()
-> return binascii.a2b_base64(s)
```
While using AWS service endpoint, this does not happen.
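The traceback is consistent with the mock returning the raw string instead of base64-encoding record data in the event stream. A minimal sketch of the encoding a Kinesis endpoint is expected to apply (illustrative, not LocalStack's code):

```python
import base64


def encode_record_data(data) -> str:
    """Base64-encode record data so SDK event-stream parsers can b64decode it."""
    if isinstance(data, str):
        # String payloads are stored as their UTF-8 bytes.
        data = data.encode("utf-8")
    return base64.b64encode(data).decode("ascii")
```

Returning the raw string here is exactly what makes `base64.b64decode` fail with `Incorrect padding` on the client side.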
## Expected behavior
`subscribe_to_shard` can properly decode the data returned from LocalStack when the record was put as a string.
## Actual behavior
`subscribe_to_shard` raises a decode error.
# Steps to reproduce
## Command used to start LocalStack
Start the LocalStack container using docker-compose:
```yaml
version: '2.4'
services:
aws:
image: localstack/localstack:0.12.10
restart: always
ports:
- "4566"
environment:
- LOCALSTACK_SERVICES=s3,kinesis
- LOCALSTACK_DEBUG=1
- LOCALSTACK_HOSTNAME=aws
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
```python
client.put_record(StreamName=stream_name, Data="raw_string", PartitionKey="MyParticonKey")
```
```python
result = self._client.subscribe_to_shard(
    ConsumerARN=self._consumer_arn, ShardId='shardId-000000000000', StartingPosition=start_position)
event_stream_iterator = result['EventStream']
event = next(event_stream_iterator.__iter__())
```
- boto3==1.9.134
- botocore==1.12.253 | https://github.com/localstack/localstack/issues/4095 | https://github.com/localstack/localstack/pull/5272 | 2ef9d66614800f28d06e5a6d31758dc8f6c40127 | b2f35e0c820f6057a87984d1d3797a9d71339be3 | "2021-06-03T03:06:32Z" | python | "2022-01-22T13:37:38Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,079 | ["localstack/services/awslambda/lambda_api.py"] | AWS Lambda PutFunctionEventInvokeConfig returns LastModified timestamp in incorrect format | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
When I call the Lambda PutFunctionEventInvokeConfig API using the aws-java-sdk, the sdk throws an exception claiming it cannot parse the LastModified timestamp in the response:
```
software.amazon.awssdk.core.exception.SdkClientException: Unable to parse date : 2021-05-28T16:02:30.018Z
at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:98)
at software.amazon.awssdk.protocols.core.StringToInstant.lambda$safeParseDate$0(StringToInstant.java:77)
at software.amazon.awssdk.protocols.core.StringToInstant.convert(StringToInstant.java:54)
at software.amazon.awssdk.protocols.core.StringToInstant.convert(StringToInstant.java:32)
```
For comparison, the response for getting a FunctionEventInvokeConfig for a Lambda in a real AWS account using the aws-cli is the following:
```
{
"LastModified": "2021-05-27T19:26:43.454000-07:00",
"FunctionArn": "...",
...
}
```
## Expected behavior
LambdaAsyncClient in aws-java-sdk should correctly parse the response for a PutFunctionEventInvokeConfig request.
## Actual behavior
LambdaAsyncClient in aws-java-sdk throws an exception when parsing the LastModified field in the response.
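The two samples differ in sub-second precision and timezone notation: the failing value uses millisecond precision with a bare `Z`, while the real service returns microseconds with an explicit UTC offset. A sketch of producing the offset-qualified format shown in the CLI sample (an assumption about what the SDK parser expects, based only on the samples above):

```python
from datetime import datetime, timedelta, timezone


def format_last_modified(dt: datetime) -> str:
    """Render a timezone-aware datetime like 2021-05-27T19:26:43.454000-07:00."""
    # isoformat() on an aware datetime emits microseconds plus the UTC offset.
    return dt.isoformat()
```

Any timezone-aware datetime fed through this helper reproduces the shape of the aws-cli sample.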
# Steps to reproduce
## Command used to start LocalStack
```
docker run -d --rm --name localstack -p 4566:4566 localstack/localstack:latest
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
```
val putEventInvokeConfigRequest =
PutFunctionEventInvokeConfigRequest.builder
.functionName(functionName)
.maximumEventAgeInSeconds(21600)
.maximumRetryAttempts(0)
.build
lambdaClient.putFunctionEventInvokeConfig(putEventInvokeConfigRequest)
```
| https://github.com/localstack/localstack/issues/4079 | https://github.com/localstack/localstack/pull/4107 | ec9293366eae208de1b14d0753a9072ae81b228d | 507dda6811023848b416c61072f6e3308977b94b | "2021-05-28T16:19:46Z" | python | "2021-06-08T06:19:31Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,073 | ["Makefile", "localstack/services/awslambda/lambda_utils.py", "tests/integration/test_lambda.py"] | Support ruby2.7 runtime | AWS supports the following Ruby runtimes:
| Name | Identifier | SDK for Ruby | Operating system |
| -- | -- | -- | -- |
| Ruby 2.7 | ruby2.7 | 3.0.1 | Amazon Linux 2 |
| Ruby 2.5 | ruby2.5 | 3.0.1 | Amazon Linux |
Currently, `localstack/lambda` only contains the `ruby2.5` tag. Will the 2.7 runtime be supported in the (near) future? | https://github.com/localstack/localstack/issues/4073 | https://github.com/localstack/localstack/pull/4075 | 9066c4f231b44dbaff10b978262c8ca5973e9054 | b6c21a699e2b40cbe33c5542956d31fce4f57246 | "2021-05-27T06:55:58Z" | python | "2021-05-27T20:34:21Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,038 | ["localstack/utils/cloudformation/template_deployer.py", "tests/integration/test_cloudformation.py"] | Support KinesisStreamSpecification parameter for AWS::DynamoDB::Table resource in CloudFormation | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[ ] bug report
[x] feature request
# Detailed description
Currently, the KinesisStreamSpecification parameter of AWS::DynamoDB::Table resource in CloudFormation is not supported and will be ignored even if this parameter is set.
We would like to be able to use this parameter.
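To illustrate, a hypothetical helper (not LocalStack code) that extracts the `KinesisStreamSpecification` entries a deployer would need to honor from a parsed template:

```python
def kinesis_streaming_specs(template: dict) -> list:
    """Collect (logical_id, stream_arn) pairs for tables declaring KinesisStreamSpecification."""
    found = []
    for logical_id, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::DynamoDB::Table":
            continue
        spec = resource.get("Properties", {}).get("KinesisStreamSpecification")
        if spec:
            found.append((logical_id, spec.get("StreamArn")))
    return found
```

Each pair found this way corresponds to one `enable-kinesis-streaming-destination` call the deployment should trigger.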
## Expected behavior
When the following template is applied...
```yaml
---
AWSTemplateFormatVersion: '2010-09-09'
Resources:
EventStream:
Type: AWS::Kinesis::Stream
Properties:
Name: EventStream
ShardCount: 1
EventTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: EventTable
AttributeDefinitions:
- AttributeName: pkey
AttributeType: S
KeySchema:
- AttributeName: pkey
KeyType: HASH
BillingMode: PAY_PER_REQUEST
StreamSpecification:
StreamViewType: NEW_IMAGE
KinesisStreamSpecification:
StreamArn: !GetAtt EventStream.Arn
```
We will get the same results as when we run enable-kinesis-streaming-destination:
```sh
$ awslocal dynamodb describe-kinesis-streaming-destination --table-name EventTable
{
"TableName": "EventTable",
"KinesisDataStreamDestinations": [
{
"StreamArn": "arn:aws:kinesis:us-east-1:000000000000:stream/EventStream",
"DestinationStatus": "ACTIVE"
}
]
}
```
## Actual behavior
The KinesisStreamSpecification parameter will be ignored even if this parameter is set.
```sh
$ awslocal dynamodb describe-kinesis-streaming-destination --table-name EventTable
{
"TableName": "EventTable",
"KinesisDataStreamDestinations": []
}
```
# Steps to reproduce
## Command used to start LocalStack
```
$ docker run --rm -p 4566:4566 localstack/localstack:0.12.11
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
CloudFormation templates to apply.
```yaml
---
AWSTemplateFormatVersion: '2010-09-09'
Resources:
EventStream:
Type: AWS::Kinesis::Stream
Properties:
Name: EventStream
ShardCount: 1
EventTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: EventTable
AttributeDefinitions:
- AttributeName: pkey
AttributeType: S
KeySchema:
- AttributeName: pkey
KeyType: HASH
BillingMode: PAY_PER_REQUEST
StreamSpecification:
StreamViewType: NEW_IMAGE
KinesisStreamSpecification:
StreamArn: !GetAtt EventStream.Arn
```
Check the results.
```sh
awslocal dynamodb describe-kinesis-streaming-destination --table-name EventTable
```
Manually enable kinesis streaming.
```sh
awslocal dynamodb enable-kinesis-streaming-destination --table-name EventTable --stream-arn arn:aws:kinesis:us-east-1:000000000000:stream/EventStream
``` | https://github.com/localstack/localstack/issues/4038 | https://github.com/localstack/localstack/pull/4091 | dddc4c012e1e2a0a3cb954f3c6d1d0c6e272c366 | 4b420f92e097613cfd5c1d3865f0e31c643d905d | "2021-05-17T17:05:22Z" | python | "2021-06-03T17:58:59Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,023 | ["localstack/services/es/es_api.py", "tests/integration/test_elasticsearch.py"] | --elasticsearch-cluster-config is (at least partially) ignored | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
I'm using `localstack:latest` as a docker image. But when I try to create a new Elasticsearch domain with a specific configuration, the `InstanceType` and `InstanceCount` are ignored.
## Expected behavior
`InstanceCount` and `InstanceType` are recognised.
## Actual behavior
InstanceCount and InstanceType are ignored.
# Steps to reproduce
Set up `localstack:latest` as a Docker image and try to create an Elasticsearch domain with `--elasticsearch-cluster-config` included.
## Command used to start LocalStack
Since I'm using Docker (or more specifically, docker-compose), I'm using `docker-compose up -d`
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
I've tried two different approaches, one with JSON syntax and one with shorthand syntax; both produced the same unexpected output:
```sh
aws --endpoint-url=http://localhost:4566 es create-elasticsearch-domain --domain-name mylogs-2 --elasticsearch-version 7.10 --elasticsearch-cluster-config '{ "InstanceType": "m3.xlarge.elasticsearch", "InstanceCount": 4, "DedicatedMasterEnabled": true, "ZoneAwarenessEnabled": true, "DedicatedMasterType": "m3.xlarge.elasticsearch", "DedicatedMasterCount": 3}'
```
I'm expecting to get `m3.xlarge.elasticsearch` as `InstanceType` and 4 as `InstanceCount` but the output looks like this:
```json
{
"DomainStatus": {
"DomainId": "000000000000/mylogs-2",
"DomainName": "mylogs-2",
"ARN": "arn:aws:es:us-east-1:000000000000:domain/mylogs-2",
"Created": true,
"Deleted": false,
"Endpoint": "http://localhost:4571",
"Processing": false,
"ElasticsearchVersion": "7.10",
"ElasticsearchClusterConfig": {
"InstanceType": "m3.medium.elasticsearch",
"InstanceCount": 1,
"DedicatedMasterEnabled": true,
"ZoneAwarenessEnabled": false,
"DedicatedMasterType": "m3.medium.elasticsearch",
"DedicatedMasterCount": 1
},
"EBSOptions": {
"EBSEnabled": true,
"VolumeType": "gp2",
"VolumeSize": 10,
"Iops": 0
},
"CognitoOptions": {
"Enabled": false
}
}
}
```
What I get is `m3.medium.elasticsearch` as `InstanceType` instead of `m3.xlarge.elasticsearch` and 1 as `InstanceCount` instead of 4.
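The output is consistent with the service substituting a fixed default cluster config rather than merging the caller's values over it. A sketch of the intended merge (hypothetical, not LocalStack's actual code; the default values are taken from the output above):

```python
DEFAULT_CLUSTER_CONFIG = {
    "InstanceType": "m3.medium.elasticsearch",
    "InstanceCount": 1,
}


def merge_cluster_config(requested: dict) -> dict:
    """Caller-supplied keys must override the defaults, not the other way around."""
    return {**DEFAULT_CLUSTER_CONFIG, **(requested or {})}
```

With this ordering, the requested `InstanceType` and `InstanceCount` would survive into the `DomainStatus` response.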
I've used the shorthand syntax as well:
```sh
aws es create-elasticsearch-domain --domain-name mylogs-1 --elasticsearch-version 7.10 --elasticsearch-cluster-config InstanceType=r6g.large.elasticsearch,InstanceCount=6 --endpoint-url=http://localhost:4566
```
but got the same result. :/ | https://github.com/localstack/localstack/issues/4023 | https://github.com/localstack/localstack/pull/4030 | b39d746401c4d286bba47ba580ba686481767aa5 | eea59125f1948ac2431af4db2f24d441c2aa0d4f | "2021-05-14T08:32:09Z" | python | "2021-05-16T19:34:50Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 4,010 | ["localstack/utils/aws/aws_stack.py"] | Localstack on Docker Hangs with DATA_DIR set | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
When using persistence via DATA_DIR, after some use, LocalStack fails to initialise after a pod restart.
Log output with DEBUG=1
```
2021-05-12 22:25:27,962 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2021-05-12 22:25:27,968 INFO supervisord started with pid 14
2021-05-12 22:25:28,972 INFO spawned: 'dashboard' with pid 20
2021-05-12 22:25:28,975 INFO spawned: 'infra' with pid 21
2021-05-12 22:25:28,994 INFO success: dashboard entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2021-05-12 22:25:29,000 INFO exited: dashboard (exit status 0; expected)
(. .venv/bin/activate; exec bin/localstack start --host)
2021-05-12 22:25:30,002 INFO success: infra entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Starting local dev environment. CTRL-C to quit.
Waiting for all LocalStack services to be ready
LocalStack version: 0.12.11
LocalStack build date: 2021-05-10
LocalStack build git hash: b320d70c
2021-05-12T22:25:36:DEBUG:bootstrap.py: Loading plugins - scope "services", module "localstack": <function register_localstack_plugins at 0x7f3363fd6160>
Starting edge router (https port 4566)...
Starting mock CloudWatch service on http port 4566 ...
[2021-05-12 22:25:37 +0000] [22] [INFO] Running on https://0.0.0.0:4566 (CTRL + C to quit)
2021-05-12T22:25:37:INFO:hypercorn.error: Running on https://0.0.0.0:4566 (CTRL + C to quit)
2021-05-12T22:25:37:INFO:localstack.multiserver: Starting multi API server process on port 33611
[2021-05-12 22:25:37 +0000] [22] [INFO] Running on http://0.0.0.0:33611 (CTRL + C to quit)
2021-05-12T22:25:37:INFO:hypercorn.error: Running on http://0.0.0.0:33611 (CTRL + C to quit)
Starting mock IAM service on http port 4566 ...
Starting mock STS service on http port 4566 ...
Starting mock Lambda service on http port 4566 ...
Starting mock CloudWatch Logs service on http port 4566 ...
Starting mock S3 service on http port 4566 ...
Starting mock SNS service on http port 4566 ...
Starting mock Cloudwatch Events service on http port 4566 ...
2021-05-12 22:25:38,463:API: * Running on http://0.0.0.0:49993/ (Press CTRL+C to quit)
2021-05-12T22:25:38:DEBUG:localstack.services.awslambda.lambda_executors: Getting all lambda containers names.
2021-05-12T22:25:38:DEBUG:localstack.services.awslambda.lambda_executors: docker ps -a --filter="name=localstack_lambda_*" --format "{{.Names}}"
2021-05-12T22:25:38:DEBUG:localstack.services.awslambda.lambda_executors: Removing 0 containers.
2021-05-12T22:25:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
2021-05-12T22:26:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
2021-05-12T22:27:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
2021-05-12T22:28:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
2021-05-12T22:29:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
2021-05-12T22:30:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
[... "Waiting for all LocalStack services to be ready" repeated between each of the checks below ...]
2021-05-12T22:31:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
2021-05-12T22:32:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
2021-05-12T22:33:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
2021-05-12T22:34:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
2021-05-12T22:35:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
2021-05-12T22:36:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
2021-05-12T22:37:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
2021-05-12T22:38:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
2021-05-12T22:39:38:DEBUG:localstack.services.awslambda.lambda_executors: Checking if there are idle containers ...
Waiting for all LocalStack services to be ready
```
Environment(dind sidecar):
```
- env:
- name: SERVICES
value: sns,s3,lambda,iam,sts,cloudwatch,cloudwatchlogs,cloudwatchevents,events
- name: DATA_DIR
value: /mnt/data
- name: LAMBDA_EXECUTOR
value: docker-reuse
- name: LAMBDA_REMOTE_DOCKER
value: "false"
- name: DOCKER_HOST
value: tcp://localhost:2376
- name: DOCKER_TLS_VERIFY
value: "1"
- name: DOCKER_CERT_PATH
value: /opt/docker/tls/client
- name: PERSISTENCE_SINGLE_FILE
value: "false"
- name: DEBUG
value: "1"
```
If I delete files in DATA_DIR, it seems to be the S3 persistence file that I need to delete in order for the startup to progress.
Things happening on LocalStack before the restart:
2 Lambda functions set up, 2 S3 buckets, an SNS topic, and some S3 and SNS events to kick off the Lambda functions.
...
## Expected behavior
That LocalStack can restart.
...
## Actual behavior
LocalStack hangs indefinitely
...
# Steps to reproduce
I may be able to share some terraform and boto commands later.
## Command used to start LocalStack
Docker
...
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
IAC: Terraform, Application: Boto
...
| https://github.com/localstack/localstack/issues/4010 | https://github.com/localstack/localstack/pull/4033 | 630342c918cdb1dd359d4790a13c22fc012d7bba | c0f88e04e45347c360e3cff79805195c987f5d0f | "2021-05-12T22:46:50Z" | python | "2021-05-15T14:56:34Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,924 | ["localstack/services/generic_proxy.py"] | API Gateway does not allow CORS preflight when using withCredentials in requests (Missing Mock integrations) | # This is a ...
[x] bug report
[ ] feature request
# Detailed description
When running a `serverless-express` API in a Lambda pushed to LocalStack in Docker, OPTIONS requests from a client with the `withCredentials` flag fail because of the wildcard `'*'` in the `Access-Control-Allow-Origin` header returned by calling the endpoint (e.g.: `http://localhost:4566/restapis/<id>/<stage>/_user_request_/api/`). The request doesn't reach the Lambda at all.
docker-compose.yml:
```yml
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack:latest
network_mode: bridge
ports:
- "4566:4566"
- "4571:4571"
environment:
- DEFAULT_REGION=us-east-1
- SERVICES=iam,lambda,apigateway,s3,ses,sts,sns,cloudformation,cloudwatch
- DEBUG=1
- DATA_DIR=/tmp/localstack/data
- LAMBDA_EXECUTOR=docker-reuse
- LAMBDA_REMOTE_DOCKER=0
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=${PWD}/.localstack
- LAMBDA_DOCKER_NETWORK=bridge
volumes:
- "${PWD}/.localstack:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
```
## Expected behavior
Allow defining a single(?) allowed origin for the header so that the preflight request doesn't fail.
## Actual behavior
Browser errors out with:
```
Access to XMLHttpRequest at '<endpoint>' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.
```
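The rule quoted in the browser error can be expressed as a small predicate. The sketch below is a hedged illustration (the helper is hypothetical, not LocalStack code) of the check a browser applies to a credentialed preflight:

```python
def cors_allows_credentialed(headers, origin):
    """Return True if a preflight response permits a credentialed
    request from `origin` (header names matched case-insensitively)."""
    lowered = {k.lower(): v for k, v in headers.items()}
    allow_origin = lowered.get("access-control-allow-origin")
    allow_credentials = lowered.get("access-control-allow-credentials", "")
    # Credentialed requests need an exact origin echo plus
    # Access-Control-Allow-Credentials: true; the wildcard '*' is rejected.
    return allow_origin == origin and allow_credentials.lower() == "true"

print(cors_allows_credentialed(
    {"Access-Control-Allow-Origin": "*"}, "http://localhost:8080"))          # False
print(cors_allows_credentialed(
    {"Access-Control-Allow-Origin": "http://localhost:8080",
     "Access-Control-Allow-Credentials": "true"}, "http://localhost:8080"))  # True
```

This is why LocalStack's wildcard response fails the preflight even though the status code is fine.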
# Steps to reproduce
- Create a simple API based on `serverless-express` with `express-session` as session handler.
- Use `withCredentials` flag to send requests to the API via `axios` or similar
## Command used to start LocalStack
`docker-compose up`
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
| https://github.com/localstack/localstack/issues/3924 | https://github.com/localstack/localstack/pull/4003 | 464f47e15a1f92c90ebbc04bee795ddd7b754aee | cb2e893e04421cdedadf6932064fda638c94dae8 | "2021-04-23T11:27:38Z" | python | "2021-05-13T12:03:39Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,879 | ["localstack/config.py", "localstack/services/dynamodb/dynamodb_listener.py", "tests/integration/test_error_injection.py"] | Dynamodb BatchWriteItem not Throttling Properly | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[ X ] bug report
[ ] feature request
# Detailed description
On BatchWriteItem, if items are throttled, we are expected to receive a list of `UnprocessedItems` in the response instead of an HTTP 500.
## Expected behavior
Receive HTTP 200 but with Unprocessed Items in the response: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchWriteItem.html#DDB-BatchWriteItem-response-UnprocessedItems
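Against real AWS it is the caller's job to re-submit `UnprocessedItems`; a minimal retry sketch (the `client` is any boto3-style DynamoDB client — the endpoint and table names in the usage comment are placeholders):

```python
import time

def batch_write_with_retries(client, request_items, max_attempts=5):
    """Call BatchWriteItem repeatedly, re-submitting UnprocessedItems
    until nothing is left or max_attempts is exhausted."""
    for attempt in range(max_attempts):
        response = client.batch_write_item(RequestItems=request_items)
        request_items = response.get("UnprocessedItems", {})
        if not request_items:
            return {}
        time.sleep(min(0.05 * 2 ** attempt, 1.0))  # simple exponential backoff
    return request_items  # whatever remained unprocessed

# Hypothetical usage against LocalStack:
#   ddb = boto3.client("dynamodb", endpoint_url="http://localhost:4566")
#   leftover = batch_write_with_retries(ddb, {"MyTable": [...]})
```

With the current behavior the retry loop never runs, because the call raises instead of returning a partial response.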
## Actual behavior
Receive HTTP 500 ProvisionedThroughPutException. "All or nothing"
# Steps to reproduce
Start LocalStack with the DynamoDB service and "DYNAMODB_ERROR_PROBABILITY=1.0"
## Command used to start LocalStack
```
new LocalStackContainer(DockerImageName.parse("localstack/localstack:0.12.9"))
.withEnv("DYNAMODB_ERROR_PROBABILITY", "1.0")
.withServices(DYNAMODB)
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
You can simply generate a large random list of objects (exceeding 1 KB per batch) and check this behaviour
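A sketch of generating such a payload (attribute names are arbitrary), chunked into the 25-item batches that BatchWriteItem accepts per call:

```python
import os

def make_put_requests(count, payload_bytes=2048):
    """Build `count` PutRequest entries, each carrying more than 1 KB."""
    return [
        {"PutRequest": {"Item": {
            "pk": {"S": f"item-{i}"},
            "blob": {"S": os.urandom(payload_bytes // 2).hex()},  # payload_bytes chars
        }}}
        for i in range(count)
    ]

def chunk(requests, size=25):  # BatchWriteItem caps at 25 items per request
    return [requests[i:i + size] for i in range(0, len(requests), size)]

batches = chunk(make_put_requests(60))
print(len(batches))  # 3 batches: 25 + 25 + 10
```

Each batch can then be passed to `batch_write_item`, which is where the 500 surfaces instead of `UnprocessedItems`.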
| https://github.com/localstack/localstack/issues/3879 | https://github.com/localstack/localstack/pull/3896 | dcb2876b607e0b89af624a7df02f2446a4ed686d | 0eb1df9f393a6d9c104ca55cc3f87fd6df743184 | "2021-04-15T16:26:40Z" | python | "2021-04-30T14:24:54Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,878 | ["localstack/services/cloudformation/cloudformation_api.py"] | LocalStack: Cloudformation Stack Filtering not working as expected | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[ x ] bug report
[ ] feature request
# Detailed description
When I try to list stacks and filter by stack status, all the stacks are listed rather than being filtered out.
I saw someone had this issue before and it was meant to be fixed, but it doesn't seem to be fixed:
- https://github.com/localstack/localstack/issues/2698
## Expected behavior
Stacks should be filtered
## Actual behavior
All stacks are returned
# Steps to reproduce
Run latest docker version of localstack.
Create new stack
Try to run something like:
aws --endpoint-url=http://localhost:4566 cloudformation list-stacks --stack-status-filter DELETE_FAILED
It will return all stacks when it should return an empty list
## Command used to start LocalStack
docker run --network host -d -e DEFAULT_REGION="eu-west-1" localstack/localstack:latest
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
aws --endpoint-url=http://localhost:4566 cloudformation list-stacks --stack-status-filter DELETE_FAILED
Using boto3, the same thing happens:
response = self.__cloudformation_client.list_stacks(StackStatusFilter=['CREATE_FAILED'])
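Until the server-side filter works, the same result can be produced client-side over the ListStacks output; a minimal sketch:

```python
def filter_by_status(stack_summaries, wanted):
    """Client-side equivalent of the StackStatusFilter parameter."""
    wanted = set(wanted)
    return [s for s in stack_summaries if s.get("StackStatus") in wanted]

summaries = [
    {"StackName": "a", "StackStatus": "CREATE_COMPLETE"},
    {"StackName": "b", "StackStatus": "DELETE_FAILED"},
]
print(filter_by_status(summaries, ["DELETE_FAILED"]))  # only stack "b"
```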
...
| https://github.com/localstack/localstack/issues/3878 | https://github.com/localstack/localstack/pull/3895 | de5984545cdbfcb53b8567c9ee1329b76da02bd9 | 1bd8b0fb550f35f8389428e03d3f3853922726b5 | "2021-04-15T08:55:12Z" | python | "2021-04-20T22:10:41Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,734 | ["localstack/services/cloudformation/cloudformation_api.py"] | ListStackResources does not have LastUpdatedTimestamp field | # Type of request: This is a ...
bug report
# Detailed description
```
aws cloudformation list-stack-resources --stack-name "mystack"
{
    "StackResourceSummaries": [
        {
            "LogicalResourceId": "MyS3Bucket",
            "PhysicalResourceId": "mystack",
            "ResourceType": "AWS::S3::Bucket",
            "ResourceStatus": "CREATE_COMPLETE"
        }
    ]
}
```
**LastUpdatedTimestamp** is not returned.
...
## Expected behavior
I am trying to use LocalStack with some dependencies which expect the field LastUpdatedTimestamp. I saw in a few older posts that the field was being returned.
closed | localstack/localstack | https://github.com/localstack/localstack | 3,716 | ["localstack/services/edge.py", "localstack/utils/common.py", "tests/integration/test_sqs.py", "tests/unit/test_edge.py"] | Support for Authorization Parameters in GET Parameters (instead of header) | [X] bug report
[ ] feature request
# Detailed description
The AWS API supports Authorization parameters as GET parameters, e.g. here's a GET request to SendMessage on SQS:
```
curl 'http://localhost:4566/000000000000/SomeQueue.fifo?Action=SendMessage&MessageGroupId=Default&MessageDeduplicationId=1615597859837&MessageBody=%7B%22gateway_id%22%3A%20%2204e73c80-3539-499c-92a4-a0b106c2f6ab%22%7D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=test%2F20210313%2Fus-east-1%2Fsqs%2Faws4_request&X-Amz-Date=20210313T011059Z&X-Amz-Expires=86400000&X-Amz-SignedHeaders=host&X-Amz-Signature=2c652c7bc9a3b75579db3d987d1e6dd056f0ac776c1e1d4ec91e2ce84e5ad3ae'
```
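The `X-Amz-*` parameters in the URL above follow the standard SigV4 query-string (presigned) scheme, in which the signature is an HMAC-SHA256 chain derived from the secret key. A stdlib sketch of just the key derivation and final signing step (building the canonical request and string-to-sign is omitted; see the AWS SigV4 documentation for those steps):

```python
import hashlib
import hmac

def sigv4_signature(secret_key, date_stamp, region, service, string_to_sign):
    """Derive the SigV4 signing key and sign `string_to_sign`.

    Only the HMAC chain that produces X-Amz-Signature is shown here.
    """
    def _hmac(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode(), date_stamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    k_signing = _hmac(k_service, "aws4_request")
    return hmac.new(k_signing, string_to_sign.encode(), hashlib.sha256).hexdigest()

sig = sigv4_signature("test", "20210313", "us-east-1", "sqs", "example-string-to-sign")
print(len(sig))  # 64 — a hex-encoded SHA-256, like the X-Amz-Signature above
```

The point of the bug is that LocalStack should recognize these query parameters as authentication for SQS instead of misrouting the request to S3.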
## Expected behavior
Should send the message and return receipt information.
## Actual behavior
Returns...
```
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist</Message>
<BucketName>000000000000</BucketName>
<RequestID>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestID>
</Error>
```
| https://github.com/localstack/localstack/issues/3716 | https://github.com/localstack/localstack/pull/3720 | 4826ad3a37198397a62ef15b8714e4e43221b2e9 | 803da290b230b319d10f85cc158336d85c98fe44 | "2021-03-13T04:17:23Z" | python | "2021-03-14T21:21:44Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,682 | ["localstack/services/events/events_starter.py", "tests/integration/test_events.py"] | EventBridge PutEvents not working when Detail is missing | # Type of request: This is a ...
[X ] bug report
[ ] feature request
# Detailed description
Calling the PutEvents operation returns a 500 if no `Detail` is specified. This case is similar to #3043
## Expected behavior
According to the doc https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_PutEventsRequestEntry.html, `Detail` is optional:
```
Detail
A valid JSON string. There is no other schema imposed. The JSON string may contain fields and nested subobjects.
Type: String
Required: No
```
So this should work, with `Detail` defaulting to `{}`:
```
eb_client = boto3.client('events', endpoint_url='http://localhost:4587')
eb_client.put_events(
Entries=[
{
'DetailType': 'Test'
}
]
)
```
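A possible client-side workaround until this is fixed is to normalize entries before calling `put_events`, defaulting `Detail` to an empty JSON object the way AWS does; a sketch:

```python
import json

def normalize_entries(entries):
    """Ensure every PutEvents entry carries a valid JSON Detail string."""
    out = []
    for entry in entries:
        entry = dict(entry)  # avoid mutating the caller's dicts
        entry.setdefault("Detail", json.dumps({}))  # '{}' when omitted
        out.append(entry)
    return out

print(normalize_entries([{"DetailType": "Test"}]))
# [{'DetailType': 'Test', 'Detail': '{}'}]
```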
## Actual behavior
500 is returned
# Steps to reproduce
1. Start LocalStack with SERVICES=events
2. Run the client code
## Command used to start LocalStack
```
docker run --rm --name localstack -p 4587:4587 -e SERVICES=events -e DEBUG=true localstack/localstack
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
CLI:
```
aws events put-events --endpoint-url http://localhost:4587 --entries '[{"DetailType": "Test"}]'
```
Or via python:
```
eb_client = boto3.client('events', endpoint_url='http://localhost:4587')
eb_client.put_events(
Entries=[
{
'DetailType': 'Test',
'Detail': '{}'
}
]
)
```
| https://github.com/localstack/localstack/issues/3682 | https://github.com/localstack/localstack/pull/3683 | a5a46cb2821ca3837bf920fa9763a3187c4167f2 | 792f58b9f47862eebeb79976466b8e873c31584f | "2021-03-05T08:27:07Z" | python | "2021-03-05T15:50:35Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,675 | ["localstack/config.py"] | Lambda invocation errors in github actions | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
* [x] bug report
* [ ] feature request
# Detailed description
Lambda invocation works on macOS but fails in GitHub Actions:
### docker-compose.yml
```
version: '3.1'
services:
localstack:
image: localstack/localstack:latest
environment:
- AWS_DEFAULT_REGION=us-east-1
- LAMBDA_EXECUTOR=docker
- DEBUG=1
- EDGE_PORT=4566
- SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53
- DATA_DIR=/tmp/localstack/data
ports:
- '4566-4597:4566-4597'
volumes:
- "./localstacktemp:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
- ./aws:/docker-entrypoint-initaws.d
networks:
- test
```
### serverless.yml
```
service: doctor-local
frameworkVersion: '2'
plugins:
- serverless-localstack
custom:
localstack:
debug: true
stages:
- local
- dev
endpointFile: localstack_endpoints.json
provider:
name: aws
runtime: nodejs12.x
functions:
hola:
handler: handler.hello
memorySize: 256
timeout: 900
events:
- http:
path: svc/ping
method: get
- http:
path: svc/jobs
method: get
```
### localstack_endpoints.json
```
{
"CloudFormation" : "http://localhost:4566",
"CloudWatch" : "http://localhost:4566",
"Lambda" : "http://localhost:4566",
"S3" : "http://localhost:4566",
"APIGateway" : "http://localhost:4566",
"Route53" : "http://localhost:4566"
}
```
### Invocation result (In my local) (success)
```
{
"status": 200,
"message": "I got invoked and returning you this response"
}
```
### Invocation result (In Github actions run - ubuntu-latest) (Error)
```
{
errorType: 'InvocationException',
errorMessage: 'Lambda process returned error status code: 1. Result: ' +
'{"errorType":"Runtime.ImportModuleError","errorMessage":"Error: ' +
"Cannot find module 'handler'\\nRequire stack:\\n- " +
'/var/runtime/UserFunction.js\\n- /var/runtime/index.js"}. Output:\n' +
"Unable to find image 'lambci/lambda:nodejs12.x' locally\nnodejs12.x: " +
'Pulling from lambci/lambda\nb8f7c23f9c29: Pulling fs layer\n' +
'c061d4866919: Pulling fs layer\naacc65296390: Pulling fs layer\n' +
'aacc65296390: Verifying Checksum\naacc65296390: Download complete\n' +
'c061d4866919: Verifying Checksum\nc061d4866919: Download complete\n' +
'b8f7c23f9c29: Verifying Checksum\nb8f7c23f9c29: Download complete\n' +
'b8f7c23f9c29: Pull complete\nc061d4866919: Pull complete\naacc65296390: ' +
'Pull complete\nDigest: ' +
'sha256:098709a2d12098c2ab5ad45138a2f97d3acc1788c2855b3659dd20eed62fd2af\n' +
'Status: Downloaded newer image for lambci/lambda:nodejs12.x\n' +
'2021-03-04T06:52:16.506Z\tundefined\tERROR\tUncaught Exception \t' +
'{"errorType":"Runtime.ImportModuleError","errorMessage":"Error: ' +
"Cannot find module 'handler'\\nRequire stack:\\n- " +
'/var/runtime/UserFunction.js\\n- ' +
'/var/runtime/index.js","stack":["Runtime.ImportModuleError: Error: ' +
`Cannot find module 'handler'","Require stack:","- ` +
'/var/runtime/UserFunction.js","- /var/runtime/index.js"," at ' +
'_loadUserApp (/var/runtime/UserFunction.js:100:13)"," at ' +
'Object.module.exports.load (/var/runtime/UserFunction.js:140:17)"," ' +
' at Object.<anonymous> (/var/runtime/index.js:43:30)"," at ' +
'Module._compile (internal/modules/cjs/loader.js:999:30)"," at ' +
'Object.Module._extensions..js ' +
'(internal/modules/cjs/loader.js:1027:10)"," at Module.load ' +
'(internal/modules/cjs/loader.js:863:32)"," at ' +
'Function.Module._load (internal/modules/cjs/loader.js:708:14)"," ' +
'at Function.executeUserEntryPoint [as runMain] ' +
'(internal/modules/run_main.js:60:12)"," at ' +
'internal/main/run_main_module.js:17:47"]}\n\u001b[32mSTART RequestId: ' +
'51b7e5dd-7f1a-15c2-efd3-5bb4aad84560 Version: $LATEST\u001b[0m\n\u001b[32mEND ' +
'RequestId: 51b7e5dd-7f1a-15c2-efd3-5bb4aad84560\u001b[0m\n\u001b[32mREPORT ' +
'RequestId: 51b7e5dd-7f1a-15c2-efd3-5bb4aad84560\tInit Duration: 135.74 ' +
'ms\tDuration: 1.52 ms\tBilled Duration: 2 ms\tMemory Size: 1536 MB\tMax ' +
'Memory Used: 48 MB\t\u001b[0m',
stackTrace: [Array]
}
```
I'm pretty sure the path to the handler is valid because it works on my Mac
...
## Expected Behavior
The Lambda should be invoked in GitHub Actions as well
...
## Actual behavior
Lambda InvocationError
...
# Steps to reproduce
* docker-compose up
* SLS_DEBUG=* node_modules/.bin/serverless deploy --stage local
```
% node_modules/.bin/serverless --version
Framework Core: 2.15.0 (local)
Plugin: 4.4.3
SDK: 2.3.2
Components: 3.7.2
% node -v
v12.4.0
Versions:
"serverless": "2.15.0",
"serverless-localstack": "0.4.28",
```
## Command used to start LocalStack
docker-compose up
...
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
```
Making api call to
http://localhost:4566/restapis/%7BrestapiId%7D/local/_user_request_/svc/ping
```
... | https://github.com/localstack/localstack/issues/3675 | https://github.com/localstack/localstack/pull/3717 | cd86e6e659ed3ea672bd5d8a015132faa7544801 | 4826ad3a37198397a62ef15b8714e4e43221b2e9 | "2021-03-04T09:14:09Z" | python | "2021-03-14T11:45:21Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,663 | ["localstack/services/ses/ses_starter.py", "tests/integration/test_ses.py"] | ses delete template feature is not implemented | # Type of request: This is a ...
[x] feature request
# Detailed description
Whenever we try to perform the delete-template operation through the AWS Java SDK, it throws a 500 error with a null message
AWS SDK version: version: 1.11.327
## Expected behavior
The delete-template operation should work properly, deleting the given template
## Actual behavior
The delete-template operation throws a 500 exception with a null message:
com.amazonaws.services.simpleemail.model.AmazonSimpleEmailServiceException: null (Service: AmazonSimpleEmailService; Status Code: 500; Error Code: 500 ; Request ID: null)
Or, through the command line, it gives:
Unable to parse response (syntax error: line 1, column 54), invalid XML received. Further retries may succeed:
b'<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n<title xmlns="http://ses.amazonaws.com/doc/2010-01-31/">500 Internal Server Error</title>\n<h1>Internal Server Error</h1>\n<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>\n'
# Steps to reproduce
1. Use the AWS Java SDK in a Java project
2. Bring LocalStack up using the latest Docker image
3. Perform the delete-template operation on an already-created template
## Command used to start LocalStack
docker run --env SERVICES=ses -p 4566:4566 localstack/localstack
## Client code
```kotlin
val client: AmazonSimpleEmailService = AmazonSimpleEmailServiceClientBuilder.standard()
    .withEndpointConfiguration(endpoint).build()
val deleteTemplateResult = client.deleteTemplate(deleteTemplatesRequest)
```
Or through the command line:
```
aws ses delete-template --template-name on_demand_template --region=us-east-1 --endpoint-url=http://localhost:4566
```
| https://github.com/localstack/localstack/issues/3663 | https://github.com/localstack/localstack/pull/3665 | 8611302c6df45dc0bb8458e3f490b8716a336f14 | 641b890e9328ad3d010570766109c646d1f10a3a | "2021-03-02T15:07:48Z" | python | "2021-03-02T18:27:29Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,608 | ["localstack/services/s3/s3_listener.py", "tests/integration/test_s3.py"] | Regression: listObjects does not work with phpsdk | # Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
If I upload objects to my bucket, I don't get a list of my objects back when using the listObjects method of the PHP SDK.
This was fixed previously in 0.11 but it's broken again in 0.12.x [#3021](https://github.com/localstack/localstack/issues/3021)
## Expected behavior
listObjects should return the objects in my bucket/folder.
## Actual behavior
Only metadata is returned.
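For debugging the regression it can help to look at the raw XML the listener returns; a stdlib sketch that extracts object keys from a `ListBucketResult` body (the sample XML below is illustrative):

```python
import xml.etree.ElementTree as ET

NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def list_keys(xml_body):
    """Pull object keys out of a ListObjectsV2 (ListBucketResult) response."""
    root = ET.fromstring(xml_body)
    return [c.findtext(f"{NS}Key") for c in root.iter(f"{NS}Contents")]

sample = (
    '<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
    "<Name>test</Name><KeyCount>1</KeyCount>"
    "<Contents><Key>test.txt</Key><Size>4</Size></Contents>"
    "</ListBucketResult>"
)
print(list_keys(sample))  # ['test.txt'] — an empty list signals the missing Contents
```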
# Steps to reproduce
- docker-compose.yml
```
version: '3.7'
services:
localstack:
#image: localstack/localstack:0.11.0
image: localstack/localstack:latest
ports:
- "4566:4566"
- "8080:8080"
environment:
- SERVICES=s3
- DATA_DIR=/tmp/localstack-test/data
- DEBUG=1
- USE_LIGHT_IMAGE=1
- DOCKER_HOST=unix:///var/run/docker.sock
- DEFAULT_REGION=eu-central-1
volumes:
- "${TMPDIR:-/tmp/localstack-test}:/tmp/localstack-test"
```
- test.php
```
<?php
require 'vendor/aws-autoload.php';
use Aws\S3\S3Client;
$config = [
'credentials' => [
'key' => 'test-id',
'secret' => 'test-secret',
],
'region' => 'eu-central-1',
'version' => 'latest',
'endpoint' => 'http://localhost:4566',
'use_path_style_endpoint' => true,
];
$s3client = new S3Client($config);
$buckets = $s3client->listBuckets();
if (empty($buckets['Buckets'])) {
$s3client->createBucket([
'Bucket' => 'test',
]);
echo 'Bucket test created' . PHP_EOL;
} else {
echo 'Bucket test exists' . PHP_EOL;
}
$s3client->upload('test', 'test.txt', 'test');
var_dump($s3client->listObjectsV2(['Bucket' => 'test']));
```
## Expected example
```
> php test.php
Bucket test exists
class Aws\Result#245 (2) {
private $data =>
array(7) {
'IsTruncated' =>
bool(false)
'Contents' =>
array(1) {
[0] =>
array(5) {
...
}
}
'Name' =>
string(4) "test"
'Prefix' =>
string(0) ""
'MaxKeys' =>
int(1000)
'KeyCount' =>
int(1)
'@metadata' =>
array(4) {
'statusCode' =>
int(200)
'effectiveUri' =>
string(38) "http://localhost:4566/test?list-type=2"
'headers' =>
array(9) {
...
}
'transferStats' =>
array(1) {
...
}
}
}
private $monitoringEvents =>
array(0) {
}
}
```
## Actual
```
> php test.php
Bucket test exists
class Aws\Result#245 (2) {
private $data =>
array(1) {
'@metadata' =>
array(4) {
'statusCode' =>
int(200)
'effectiveUri' =>
string(38) "http://localhost:4566/test?list-type=2"
'headers' =>
array(16) {
...
}
'transferStats' =>
array(1) {
...
}
}
}
private $monitoringEvents =>
array(0) {
}
}
```
| https://github.com/localstack/localstack/issues/3608 | https://github.com/localstack/localstack/pull/3849 | 02743c122cda1131b562b0818c751c908a933cac | edfcb461043e9e850773b907f44293d78792c1d5 | "2021-02-16T11:47:11Z" | python | "2021-04-10T13:24:51Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,487 | ["localstack/services/kinesis/kinesis_listener.py", "localstack/utils/aws/aws_responses.py"] | KCL 2.0x kinesis status code 400 | I am using localstack for running integration tests for Kinesis data stream KCL 2.0x consumer. I have used latest docker image for this.
I am able to publish data to localstack kinesis using the aws-java-sdk. My consumer should work fine and consume the data but instead throws this error:
```
2021-01-18 19:13:56.329 INFO 112252 --- [ restartedMain] s.amazon.kinesis.coordinator.Scheduler : Initialization complete. Starting worker loop.
2021-01-18 19:14:16.497 INFO 112252 --- [oordinator-0001] s.a.k.l.dynamodb.DynamoDBLeaseTaker : Worker d670dc71-4c81-4c8c-898d-f48de247136f saw 1 total leases, 1 available leases, 1 workers. Target is 1 leases, I have 0 leases, I will take 1 leases
2021-01-18 19:14:16.550 INFO 112252 --- [oordinator-0001] s.a.k.l.dynamodb.DynamoDBLeaseTaker : Worker d670dc71-4c81-4c8c-898d-f48de247136f successfully took 1 leases: shardId-000000000000
2021-01-18 19:14:17.463 INFO 112252 --- [ restartedMain] s.a.k.r.f.FanOutConsumerRegistration : Waiting for StreamConsumer test to have ACTIVE status...
2021-01-18 19:14:18.496 INFO 112252 --- [ restartedMain] s.amazon.kinesis.coordinator.Scheduler : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000000, concurrencyToken=b738bc15-f561-4530-813b-bfc3a10bae1c, parentShardIds=[], checkpoint={SequenceNumber: LATEST,SubsequenceNumber: 0})
2021-01-18 19:14:18.498 INFO 112252 --- [dProcessor-0000] s.a.k.lifecycle.BlockOnParentShardTask : No need to block on parents [] of shard shardId-000000000000
2021-01-18 19:14:19.525 INFO 112252 --- [dProcessor-0000] c.k.p.consumer.DeliveryStatusProcessor : Initializing @ Sequence: {SequenceNumber: LATEST,SubsequenceNumber: 0}
2021-01-18 19:14:21.568 WARN 112252 --- [nc-response-0-4] s.a.k.r.fanout.FanOutRecordsPublisher : shardId-000000000000: [SubscriptionLifetime] - (FanOutRecordsPublisher#errorOccurred) @ 2021-01-18T13:44:21.523Z id: shardId-000000000000-1 -- CompletionException/software.amazon.awssdk.services.kinesis.model.KinesisException: null (Service: Kinesis, Status Code: 400, Request ID: 44dfbbf0-5993-11eb-afc4-a991dcff0b58). Last successful request details -- request id - NONE, timestamp - NONE
java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.KinesisException: null (Service: Kinesis, Status Code: 400, Request ID: 44dfbbf0-5993-11eb-afc4-a991dcff0b58)
at software.amazon.awssdk.utils.CompletableFutureUtils.errorAsCompletionException(CompletableFutureUtils.java:61) ~[utils-2.10.66.jar:na]
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncExecutionFailureExceptionReportingStage.lambda$execute$0(AsyncExecutionFailureExceptionReportingStage.java:51) ~[sdk-core-2.10.66.jar:na]
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) ~[na:1.8.0_275]
at software.amazon.awssdk.utils.CompletableFutureUtils.lambda$forwardExceptionTo$0(CompletableFutureUtils.java:75) ~[utils-2.10.66.jar:na]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) ~[na:1.8.0_275]
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.retryResponseIfNeeded(AsyncRetryableStage.java:157) ~[sdk-core-2.10.66.jar:na]
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.retryIfNeeded(AsyncRetryableStage.java:121) ~[sdk-core-2.10.66.jar:na]
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.lambda$execute$0(AsyncRetryableStage.java:108) ~[sdk-core-2.10.66.jar:na]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) ~[na:1.8.0_275]
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.lambda$executeHttpRequest$1(MakeAsyncHttpRequestStage.java:169) ~[sdk-core-2.10.66.jar:na]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456) ~[na:1.8.0_275]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_275]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_275]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_275]
Caused by: software.amazon.awssdk.services.kinesis.model.KinesisException: null (Service: Kinesis, Status Code: 400, Request ID: 44dfbbf0-5993-11eb-afc4-a991dcff0b58)
at software.amazon.awssdk.services.kinesis.model.KinesisException$BuilderImpl.build(KinesisException.java:95) ~[kinesis-2.10.66.jar:na]
at software.amazon.awssdk.services.kinesis.model.KinesisException$BuilderImpl.build(KinesisException.java:55) ~[kinesis-2.10.66.jar:na]
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.unmarshall(AwsJsonProtocolErrorUnmarshaller.java:88) ~[aws-json-protocol-2.10.66.jar:na]
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:63) ~[aws-json-protocol-2.10.66.jar:na]
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:42) ~[aws-json-protocol-2.10.66.jar:na]
at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler.lambda$prepare$0(AsyncResponseHandler.java:88) ~[sdk-core-2.10.66.jar:na]
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:966) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) ~[na:1.8.0_275]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975) ~[na:1.8.0_275]
at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler$BaosSubscriber.onComplete(AsyncResponseHandler.java:129) ~[sdk-core-2.10.66.jar:na]
```
further logs:
```
2021-01-18 19:14:21.571 WARN 112252 --- [dProcessor-0001] s.a.k.lifecycle.ShardConsumerSubscriber : shardId-000000000000: onError(). Cancelling subscription, and marking self as failed. KCL will recreate the subscription as neccessary to continue processing.
java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.KinesisException: null (Service: Kinesis, Status Code: 400, Request ID: 44dfbbf0-5993-11eb-afc4-a991dcff0b58)
at software.amazon.awssdk.utils.CompletableFutureUtils.errorAsCompletionException(CompletableFutureUtils.java:61) ~[utils-2.10.66.jar:na]
```
How do I resolve this?
Does localstack not support working for KCL2.0x? Is there any workaround or example for this? | https://github.com/localstack/localstack/issues/3487 | https://github.com/localstack/localstack/pull/3691 | 16a6cdccda95a095649d2bcf0a278bb905d58072 | 01e02de2aec0248cfa8658497da7a4be1d561f22 | "2021-01-19T11:19:21Z" | python | "2021-03-07T22:11:00Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,466 | ["localstack/plugins.py", "localstack/services/ses/ses_listener.py", "localstack/services/ses/ses_starter.py"] | AWS Java SDK is throwing different exceptions than the expected [SES] | # Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
The AWS Java SDK throws different exceptions than expected.
SDK version: 1.11.327
**Example 1:** send a notification with an unverified email address as the from address, as follows
## Expected behavior
The exception thrown by AWS is like:
com.amazonaws.services.simpleemail.model.MessageRejectedException: Email address is not verified. The following identities failed the check in region US-EAST-1: arn:aws:ses:us-east-1:654856854863:identity/miqdigitl.com (Service: AmazonSimpleEmailService; Status Code: 400; Error Code: MessageRejected; Request ID: 0632aab8-0be1-4eeb-9d0f-8c699330aabf)
## Actual behavior
The exception thrown when using LocalStack is like:
com.amazonaws.services.simpleemail.model.AmazonSimpleEmailServiceException: null (Service: AmazonSimpleEmailService; Status Code: 400; Error Code: 400 ; Request ID: null)
**Example 2:** send a create-template request
## Expected behavior
Template gets created successfully
## Actual behavior
com.amazonaws.services.simpleemail.model.AmazonSimpleEmailServiceException: null (Service: AmazonSimpleEmailService; Status Code: 500; Error Code: 500 ; Request ID: null)
# Steps to reproduce
1. Use the AWS Java SDK in a Java project
2. Start LocalStack using the latest Docker image
3. Try to send from an unverified email address / create a template while connected to LocalStack
## Command used to start LocalStack
docker run --env SERVICES=ses -p 4566:4566 localstack/localstack
## Client code
**in Kotlin**
```kotlin
val endpointConfiguration: AwsClientBuilder.EndpointConfiguration = AwsClientBuilder
    .EndpointConfiguration("http://localhost:4566", "us-east-1")
val client = AmazonSimpleEmailServiceClientBuilder.standard()
    .withEndpointConfiguration(endpointConfiguration)
    .build()
client.createTemplate(templateRequest)
// or
client.sendEmail(sendEmailRequest)
```
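As a debugging aid (my addition, not part of the original report), the same create-template call in Python/boto3 takes a `Template` dict; sketching the request shape helps rule out malformed input before blaming the emulator:

```python
# Request shape for SES CreateTemplate; in boto3 this is passed as
# ses_client.create_template(Template=make_template(...)).
def make_template(name, subject, text_part, html_part):
    return {
        "TemplateName": name,
        "SubjectPart": subject,
        "TextPart": text_part,
        "HtmlPart": html_part,
    }

template = make_template("welcome", "Hello {{name}}", "Hi {{name}}", "<p>Hi {{name}}</p>")
print(sorted(template))  # ['HtmlPart', 'SubjectPart', 'TemplateName', 'TextPart']
```

Against a running LocalStack instance this would then be sent with `boto3.client("ses", endpoint_url="http://localhost:4566")`.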
please let me know if I am doing something wrong here as well | https://github.com/localstack/localstack/issues/3466 | https://github.com/localstack/localstack/pull/3491 | 2f5e25a4b51db33db9f57b78d97f9999a339ba6e | f02b32361efd545a7de0a5ef95c9072eb62ba386 | "2021-01-14T09:46:23Z" | python | "2021-01-23T17:26:08Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,442 | [".gitignore", "localstack/services/cloudformation/cloudformation_api.py", "tests/integration/templates/template27.yaml", "tests/integration/test_cloudformation.py"] | Cloudformation list_exports result is malformed | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[X] bug report
[ ] feature request
# Detailed description
The response of the listExports endpoint (CloudFormation service) is malformed. The funny thing is that the AWS Python SDK does not see any issues, but the JS SDK is not able to parse it.
The example is taken from the AWS docs: [AWS DOCS](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_ListExports.html)
At the moment each single export is not wrapped in a `member` XML element inside an `Exports` container element; instead, each single export is wrapped directly in its own `Exports` element.
I am about to create a PR to fix the issue; it would be great if you could review it.
## Expected behavior
```xml
<ListExportsResponse xmlns="http://cloudformation.amazonaws.com/doc/2010-05-15/">
  <ListExportsResult>
    <Exports>
      <member>
        <Name>mySampleStack1-SecurityGroupID</Name>
        <ExportingStackId>arn:aws:cloudformation:us-east-1:123456789012:stack/mySampleStack1/12a3b456-0e10-4ce0-9052-5d484a8c4e5b</ExportingStackId>
        <Value>sg-0a123b45</Value>
      </member>
      <member>
        <Name>mySampleStack1-SubnetID</Name>
        <ExportingStackId>arn:aws:cloudformation:us-east-1:123456789012:stack/mySampleStack1/12a3b456-0e10-4ce0-9052-5d484a8c4e5b</ExportingStackId>
        <Value>subnet-0a123b45</Value>
      </member>
      <member>
        <Name>mySampleStack1-VPCID</Name>
        <ExportingStackId>arn:aws:cloudformation:us-east-1:123456789012:stack/mySampleStack1/12a3b456-0e10-4ce0-9052-5d484a8c4e5b</ExportingStackId>
        <Value>vpc-0a123b45</Value>
      </member>
      <member>
        <Name>WebSiteURL</Name>
        <ExportingStackId>arn:aws:cloudformation:us-east-1:123456789012:stack/myS3StaticSite/12a3b456-0e10-4ce0-9052-5d484a8c4e5b</ExportingStackId>
        <Value>http://testsite.com.s3-website-us-east-1.amazonaws.com</Value>
      </member>
    </Exports>
  </ListExportsResult>
  <ResponseMetadata>
    <RequestId>5ccc7dcd-744c-11e5-be70-1b08c228efb3</RequestId>
  </ResponseMetadata>
</ListExportsResponse>
```
## Actual behavior
```xml
<ListExportsResponse xmlns="http://cloudformation.amazonaws.com/doc/2010-05-15/">
  <ListExportsResult>
    <Exports>
      <ExportingStackId>arn:aws:cloudformation:us-east-1:000000000000:stack/CDKToolkit/c70373cc</ExportingStackId>
      <Name></Name>
      <Value>cdktoolkit-stagingbucket-5eb65ccc</Value>
    </Exports>
    <Exports>
      <ExportingStackId>arn:aws:cloudformation:us-east-1:000000000000:stack/CDKToolkit/c70373cc</ExportingStackId>
      <Name></Name>
      <Value></Value>
    </Exports>
    <Exports>
      <ExportingStackId>arn:aws:cloudformation:us-east-1:000000000000:stack/TestStackXX/d54e8974</ExportingStackId>
      <Name>shopally-AppTableName</Name>
      <Value>shopally-ShopifyAppInfo</Value>
    </Exports>
  </ListExportsResult>
</ListExportsResponse>
```
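To make the difference concrete (my addition, illustrative rather than the SDK's actual parser): query-protocol list parsers expect each list item wrapped in a `member` element, which you can verify with Python's `ElementTree`:

```python
import xml.etree.ElementTree as ET

DOCUMENTED = "<Exports><member><Name>a</Name></member><member><Name>b</Name></member></Exports>"
MALFORMED = "<ListExportsResult><Exports><Name>a</Name></Exports><Exports><Name>b</Name></Exports></ListExportsResult>"

def export_names(exports_xml):
    """Read export names assuming the documented <Exports><member>... shape."""
    root = ET.fromstring(exports_xml)
    return [member.findtext("Name") for member in root.findall("member")]

print(export_names(DOCUMENTED))  # ['a', 'b']
# Under the malformed shape there are no <member> children to find:
print(ET.fromstring(MALFORMED).findall("member"))  # []
```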
# Steps to reproduce
just deploy something with an export to see the problem
## Command used to start LocalStack
I used the docker image
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
| https://github.com/localstack/localstack/issues/3442 | https://github.com/localstack/localstack/pull/3443 | ea6a5862974c9d92efa1f5f4b106fa718732a0fa | 0f32888003e07d900e2bee45f8253a0e325f1444 | "2021-01-07T14:15:41Z" | python | "2021-01-08T07:29:11Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,439 | ["localstack/services/es/es_api.py"] | How to read data written in elastic search cluster ? | # Type of request: This is a ...
[ ] bug report
[ ] feature request
# Detailed description
Need details on how to read data written to an Elasticsearch index in LocalStack, with ES running in a different container.
Also getting error 500 on running the command `list-elasticsearch-versions`:
`aws --endpoint-url=http://localhost:4571 es list-elasticsearch-versions`
## Expected behavior
Should be able to look up the running ES version.
Also, please provide some details on how to read ES index contents.
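Not an official answer, but for reference: once the Elasticsearch HTTP endpoint is reachable (here the `es01` container on port 9200), index contents are read with a standard `_search` request. A sketch of building one in Python (endpoint and index name are placeholders):

```python
import json

def build_search_request(endpoint, index, size=10):
    """URL and JSON body for a match_all search against one index."""
    url = f"{endpoint.rstrip('/')}/{index}/_search"
    body = json.dumps({"query": {"match_all": {}}, "size": size})
    return url, body

url, body = build_search_request("http://localhost:9200", "my-index")
print(url)  # http://localhost:9200/my-index/_search
```

The body can then be POSTed with any HTTP client (`curl`, `requests`, ...) using `Content-Type: application/json`.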
## Actual behavior
Getting error on running command - list-elasticsearch-versions
# Steps to reproduce
- Running ES in a different container (docker-compose file attached)
- Once the Lambda is executed, it should write the data to the ES index
## Command used to start LocalStack
docker-compose up -d
Docker-compose.yml:
```
version: '2.2'
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:6.8.6
container_name: es01
environment:
- node.name=es01
- cluster.name=es-docker-cluster
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch6-8/data
ports:
- 9200:9200
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack:latest
ports:
- "4566-4599:4566-4599"
- "${PORT_WEB_UI-8081}:${PORT_WEB_UI-8080}"
environment:
- AWS_REGION=us-east-1
- SERVICES=${SERVICES- }
- DEBUG=${DEBUG- }
- DATA_DIR=${DATA_DIR- }
- PORT_WEB_UI=${PORT_WEB_UI- }
- LAMBDA_EXECUTOR=docker
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- DOCKER_HOST=unix:///var/run/docker.sock
- LAMBDA_DOCKER_NETWORK=ecare-localstackdemoallservices_default
- HOST_TMP_FOLDER=${TMPDIR}
volumes:
- "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
volumes:
data01:
driver: local
```
Screenshot of the error:

| https://github.com/localstack/localstack/issues/3439 | https://github.com/localstack/localstack/pull/3582 | 6ec01eed855bb7f9f7faad4192e8f827c78cdb64 | f30c16a204be80f99a77be542daf0832ebdeb6de | "2021-01-06T20:46:41Z" | python | "2021-02-09T00:58:54Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,385 | ["localstack/services/edge.py", "tests/integration/test_cloudwatch.py"] | CloudWatch PutMetricData does not support gzip encoding | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
Sending CloudWatch metrics data using `content-encoding: gzip` encoding results in 404. The same metrics sent uncompressed using `content-encoding: identity` returns 200 OK.
The metrics compressed with `content-encoding: gzip` work against AWS CloudWatch.
According to [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricData.html) `PutMetricData` accepts `gzip` compressed payloads but the docs are not clear on content-encoding at that point.
> Each PutMetricData request is limited to 40 KB in size for HTTP POST requests. You can send a payload compressed by gzip. Each request is also limited to no more than 20 different metrics.
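To reproduce this without the Elixir stack (my addition): a gzip-compressed `PutMetricData` request is just the normal form-encoded Query API body run through gzip and sent with a `Content-Encoding: gzip` header. In Python:

```python
import gzip

# Plain form-encoded PutMetricData body, as the Query API expects it.
payload = (
    "Action=PutMetricData&Version=2010-08-01"
    "&Namespace=Test"
    "&MetricData.member.1.MetricName=latency"
    "&MetricData.member.1.Value=42.0"
).encode("ascii")

compressed = gzip.compress(payload)
headers = {
    "Content-Type": "application/x-www-form-urlencoded",
    "Content-Encoding": "gzip",  # the header that triggered the 404 here
}

assert gzip.decompress(compressed) == payload  # compression round-trips losslessly
print(len(payload), "->", len(compressed), "bytes")
```

POSTing `compressed` with these headers to the CloudWatch endpoint should behave identically to the uncompressed request.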
## Expected behavior
I expected `localstack` `cloudwatch` to be able to receive `gzip` encoded payloads.
## Actual behavior
`encoding: gzip` results in HTTP error 404.
# Steps to reproduce
The Elixir library `telemetry_metrics_cloudwatch` uses `gzip` compression by default. I configured a bunch of metrics and configured the library to send them periodically. This resulted in 404 failures.
The requests against the `localstack` container started working when I commented out the `gzip` content type setting in line 12 of the `TelemetryMetricsCloudWatch.Cloudwatch` module [here](https://github.com/bmuller/telemetry_metrics_cloudwatch/blob/2293af20e80c058338c8d4248191518642484986/lib/telemetry_metrics_cloudwatch/cloudwatch.ex#L12)
Apologies for not having simpler steps to reproduce.
## Command used to start LocalStack
Docker Compose with following settings:
```
localstack:
image: localstack/localstack:0.12.3
restart: always
environment:
- SERVICES=s3,cloudwatch
- DEFAULT_REGION=eu-west-2
- DEBUG=1
- DATA_DIR=/tmp/localstack/data
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=/tmp/localstack
ports:
- 4566-4599:4566-4599
volumes:
- /tmp/localstack:/tmp/localstack
- /var/run/docker.sock:/var/run/docker.sock
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
| https://github.com/localstack/localstack/issues/3385 | https://github.com/localstack/localstack/pull/3412 | a70a8b43f9a3af986769bb5419806780860a72c3 | 59180e8fc1b0ff05e67d645b39502a67157d596c | "2020-12-21T15:21:43Z" | python | "2020-12-31T18:21:46Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,317 | ["localstack/services/edge.py", "tests/integration/test_sts.py"] | Unable to retrieve credentials with assume_role_with_saml | # Type of request: This is a ...
[X] bug report
[ ] feature request
# Detailed description
When I try to retrieve credentials with `assume_role_with_saml` using boto3 v1.16.26, the following error occurred:
```
File "/Users/benjamin.brabant/Projects/OCTO/DSI/awscli-saml-sso/.venv/lib/python3.8/site-packages/botocore/parsers.py", line 505, in _do_error_parse
root = self._parse_xml_string_to_dom(xml_contents)
File "/Users/benjamin.brabant/Projects/OCTO/DSI/awscli-saml-sso/.venv/lib/python3.8/site-packages/botocore/parsers.py", line 454, in _parse_xml_string_to_dom
raise ResponseParserError(
botocore.parsers.ResponseParserError: Unable to parse response (not well-formed (invalid token): line 1, column 0), invalid XML received. Further retries may succeed:
b'{"status": "running"}'
```
## Expected behavior
I expected to retrieve credentials from API in the following form:
```
{
"Credentials": {
"AccessKeyId": "ASIA...",
"SecretAccessKey": "...",
"SessionToken": "FQoGZXIvYXdzEBYaDwL8pPz/cNvhUKkibZTashetWcPahlTMbaBUvDwXxjiehDkRQGYYUQrTrMdv7+6SinGiDNBiB7ZKEoyfDja6vhHwnBP2UcY/XozN+MFFPGEMhHcsUqPApwOErN37uHAM5kIOukhGlNmIPvPVWZtDoWryAuygKbqZTWwKecCwtURG2I0KF8MpS+s6SaG6EOUl5OJf/mJJQvH725q2VOWUk7HBezFCIXO+t3L8SzMygdt2FNzwUenhazYvDs2ngSlsbFbAaeeMHikZrWgTs6GkUv1uyAknpTRnInmwBDHb7SZAqpDmc7Q9+b+NXTcO1qzx/eMarHHlFQyeEEI3BEc=",
"Expiration": "2020-12-06T18:54:38.114Z"
},
"AssumedRoleUser": {
"AssumedRoleId": "AROA3X42LBCD9KGW7O43L:benjamin.brabant",
"Arn": "arn:aws:sts::123456789012:assumed-role/Role.Admin/benjamin.brabant"
},
"Subject": "AROA3X42LBCD9KGW7O43L:benjamin.brabant",
"SubjectType": "persistent",
"Issuer": "http://localhost:3000/",
"Audience": "https://signin.aws.amazon.com/saml",
"NameQualifier": "B64EncodedStringOfHashOfIssuerAccountIdAndUserId="
}
```
## Actual behavior
In localstack logs, I have the following information:
```
localstack | 2020-12-07T09:36:13:INFO:localstack.services.edge: Unable to find forwarding rule for host "localhost:4577", path "/", target header "", auth header "", data "b'Action=AssumeRoleWithSAML&Version=2011-06-15&RoleArn=arn%3Aaws%3Aiam%3A%3A000000000000%3Arole%2FRole'..."
```
I also tried to call LocalStack from the AWS CLI with the following command:
```
AWS_ACCESS_KEY_ID='_not_needed_locally_' AWS_SECRET_ACCESS_KEY='_not_needed_locally_' aws --endpoint-url=http://localhost:4577 sts assume-role-with-saml \
--role-arn arn:aws:iam::000000000000:role/Role.Admin \
--principal-arn arn:aws:iam::000000000000:saml-provider/SamlExampleProvider \
--saml-assertion $(cat samlresponse.xml | base64)
```
Which responds with only `'AssumeRoleWithSAMLResult'`.
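One detail worth double-checking on the client side (my note, not from the report): `SAMLAssertion` must be a single base64 string of the response XML, and shell `base64` implementations may insert line wraps. The Python equivalent of the preparation step:

```python
import base64

# Placeholder for the real SAML response document read from samlresponse.xml.
saml_xml = b"<samlp:Response ID=\"example\">...</samlp:Response>"

assertion = base64.b64encode(saml_xml).decode("ascii")  # one unwrapped line

assert "\n" not in assertion
assert base64.b64decode(assertion) == saml_xml
print(assertion[:24], "...")
```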
# Steps to reproduce
## Command used to start LocalStack
I start localstack through docker-compose:
```
localstack:
image: localstack/localstack:0.12.2
container_name: localstack
restart: always
ports:
- "4577:4566"
- "4592:4592"
environment:
LOCALSTACK_SERVICES: iam, sts, s3
LOCALSTACK_START_WEB: 0
LOCALSTACK_DATA_DIR: /tmp/localstack/data
LOCALSTACK_DOCKER_HOST: unix:///var/run/docker.sock
LOCALSTACK_S3_BUCKET_NAME: example-bucket
LOCALSTACK_DEBUG: 1
volumes:
- localstack-s3-data:/tmp/localstack
- ./docker/localstack:/docker-entrypoint-initaws.d
```
and init script in `docker/localstack`:
```
echo "Creating AWS S3 bucket $LOCALSTACK_S3_BUCKET_NAME..."
awslocal s3api create-bucket --bucket $LOCALSTACK_S3_BUCKET_NAME
echo "Creating AWS SAML Provider SamlExampleProvider..."
awslocal iam create-saml-provider --name SamlExampleProvider --saml-metadata-document file:///docker-entrypoint-initaws.d/SAML-Metadata-IDPSSODescriptor.xml
echo "Creating AWS Role.User..."
awslocal iam create-role --role-name Role.User --path / --assume-role-policy-document file:///docker-entrypoint-initaws.d/test-role-trust-relationship-policy.json
awslocal iam put-role-policy --role-name Role.User --policy-name UserPolicy --policy-document file:///docker-entrypoint-initaws.d/test-role-policy.json
echo "Creating AWS Role.Admin..."
awslocal iam create-role --role-name Role.Admin --path / --assume-role-policy-document file:///docker-entrypoint-initaws.d/test-role-trust-relationship-policy.json
awslocal iam put-role-policy --role-name Role.Admin --policy-name AdminPolicy --policy-document file:///docker-entrypoint-initaws.d/test-role-policy.json
echo "... Finished"
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
```python
import boto3

client = boto3.client("sts", endpoint_url=endpoint_url)
sts_response = client.assume_role_with_saml(RoleArn=role_arn, PrincipalArn=principal_arn, SAMLAssertion=assertion)
``` | https://github.com/localstack/localstack/issues/3317 | https://github.com/localstack/localstack/pull/3318 | 4f14113a7cf135498e7c7076a72332e37a141814 | 9cce0a73f918236ca5fa7846baf27091f8b0c2a8 | "2020-12-07T10:42:33Z" | python | "2020-12-08T00:46:21Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,281 | ["localstack/services/awslambda/lambda_api.py", "tests/unit/test_lambda.py"] | Create and update lambda function with publish=true doesn't publish function | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
When using java-aws-sdk `CreateFunctionRequest` and `UpdateFunctionCodeRequest` with `publish` flag set to `true` should publish new version of lambda function. When I tried such operations against localstack container in version `localstack/localstack:0.11.2` those operations didn't publish new function version.
## Expected behavior
`CreateFunctionResponse` and `UpdateFunctionCodeResponse`, when the `publish` flag is set to `true`, will return the new version of the changed AWS Lambda function in the `version` field.
## Actual behavior
Even though the `publish` flag is set to `true`, the responses of the above operations return the `$LATEST` value in the version field, which indicates that the operation didn't produce a new function version.
# Steps to reproduce
- check available versions of given function
- create function or update existing with `publish` flag set to `true`
```
CreateFunctionRequest.builder
.functionName(functionName)
.runtime(runtimeName)
.handler(handlerName)
.layers(functionLayers)
.role(lambdaExecutionRole)
.tags(cdsLambdaTags)
.memorySize(memorySize)
.code(functionCode)
.publish(true)
  .build()
//or
UpdateFunctionCodeRequest.builder()
.functionName(functionName)
.zipFile(functionCode.zipFile())
.publish(true)
.build()
```
- check available versions of given function
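A language-independent version of the check above (my addition): after a create or update with publish enabled, the response's `Version` field should be a numeric version string rather than `$LATEST`:

```python
def was_published(response):
    """True if a Lambda create/update response carries a published version."""
    return response.get("Version", "$LATEST") != "$LATEST"

# Illustrative response fragments: the reported behavior vs. what AWS returns.
localstack_response = {"FunctionName": "fn", "Version": "$LATEST"}
aws_response = {"FunctionName": "fn", "Version": "1"}

print(was_published(localstack_response))  # False -- the bug reported here
print(was_published(aws_response))         # True  -- expected with publish=true
```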
## Command used to start LocalStack
Used testcontainers java localstack library.
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
```
CreateFunctionRequest.builder
.functionName(functionName)
.runtime(runtimeName)
.handler(handlerName)
.layers(functionLayers)
.role(lambdaExecutionRole)
.tags(cdsLambdaTags)
.memorySize(memorySize)
.code(functionCode)
.publish(true)
  .build()
//or
UpdateFunctionCodeRequest.builder()
.functionName(functionName)
.zipFile(functionCode.zipFile())
.publish(true)
.build()
``` | https://github.com/localstack/localstack/issues/3281 | https://github.com/localstack/localstack/pull/3349 | b636364750d1f42554718be7fb65e18b08e4234a | 1017b406be41a4b8126f0f1e5801588a1633ee70 | "2020-11-26T11:32:35Z" | python | "2020-12-14T21:40:28Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,270 | ["localstack/services/awslambda/lambda_api.py", "localstack/utils/aws/aws_models.py", "localstack/utils/aws/aws_stack.py", "tests/integration/terraform/provider.tf"] | Terraform just broke on localstack | Actually, it is only broken for deploying Lambdas. You can also work around it by pinning the previous provider:
```
terraform {
required_providers {
aws = "<= 3.16.0"
}
}
```
The reason it broke is that it uses the new AWS feature for code-signing Lambdas. Those new APIs are not implemented or mocked in a reasonable way in LocalStack.
Is there a plan to mock or implement those APIs?
┆Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-347) by [Unito](https://www.unito.io/learn-more)
| https://github.com/localstack/localstack/issues/3270 | https://github.com/localstack/localstack/pull/3280 | da434cb05b1d2cbbbc85fd40aebd9b57da0ba57a | 40352dbc4be4aac5b4763cf37376061b3a568ea6 | "2020-11-24T07:00:16Z" | python | "2020-11-27T11:52:47Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,264 | ["localstack/services/awslambda/lambda_executors.py"] | Rust Lambda stuck in localstack | <!-- Love localstack? Please consider supporting our collective:
:point_right: https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
Simple Rust Lambda function got stuck in localstack.
## Expected behavior
Lambda works normally as it does in AWS.
## Actual behavior
Lambda can start, but it gets stuck as soon as it tries to read the request.
Via `docker logs <container id>` I can see:
```
Started!
START RequestId: 8736a8ca-e1b1-151b-ec25-1d825d277d49 Version: $LATEST
```
# Steps to reproduce
## Command used to start LocalStack
```yaml
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack
ports:
- "4566-4599:4566-4599"
- "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
environment:
- SERVICES=${SERVICES- }
- DEBUG=${DEBUG- }
- DATA_DIR=${DATA_DIR- }
- PORT_WEB_UI=${PORT_WEB_UI- }
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=${TMPDIR-/tmp/localstack}
- HOSTNAME_EXTERNAL=xxx.lan
volumes:
- "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
```
Then `docker-compose up`.
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
```rust
use lambda_runtime::error::HandlerError;
use lambda_runtime::lambda;
use serde::{Deserialize, Serialize};
use std::error::Error;
#[derive(Deserialize, Debug, Clone)]
struct FooRequest {
bar: String,
}
#[derive(Serialize, Clone)]
struct FooResponse {
success: bool,
}
fn main() -> Result<(), Box<dyn Error>> {
println!("Started!");
lambda!(hello_world);
Ok(())
}
fn hello_world(ev: FooRequest, _ctx: lambda_runtime::Context) -> Result<FooResponse, HandlerError> {
println!("{:?}", ev);
Ok(FooResponse { success: true })
}
```
```sh
$ cargo build --target=x86_64-unknown-linux-musl --release
$ cp target/x86_64-unknown-linux-musl/release/hello_world ./bootstrap
$ zip -j lambda.zip bootstrap
$ aws --endpoint-url http://xxx:4566 lambda create-function --function-name "helloworld" --handler "hello.world" --runtime provided.al2 \
--zip-file fileb://./lambda.zip --role "" --environment Variables={RUST_BACKTRACE=1} \
--timeout 30 --memory-size 512
$ aws --endpoint-url http://xxx:4566 lambda invoke --function-name helloworld --payload '{"bar":"bar"}' ./output.json
```
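The `zip -j lambda.zip bootstrap` step only needs to produce a flat archive whose single entry is named `bootstrap` with the executable bit set; an equivalent in Python, in case `zip` is not available (my addition):

```python
import io
import zipfile

def package_bootstrap(binary_bytes):
    """Zip a custom-runtime binary as the single top-level entry `bootstrap`."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        info = zipfile.ZipInfo("bootstrap")
        info.external_attr = 0o755 << 16  # preserve the executable bit
        zf.writestr(info, binary_bytes)
    return buf.getvalue()

archive = package_bootstrap(b"placeholder for the compiled Rust binary")
with zipfile.ZipFile(io.BytesIO(archive)) as zf:
    print(zf.namelist())  # ['bootstrap']
```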
┆Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-341) by [Unito](https://www.unito.io/learn-more)
| https://github.com/localstack/localstack/issues/3264 | https://github.com/localstack/localstack/pull/3344 | de360b516125a0280d7fcbf34c6194e0186b1d08 | d288c4b66895db4ee84c3b9d60b35f8669d009b9 | "2020-11-22T13:56:19Z" | python | "2020-12-11T09:39:56Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,260 | ["localstack/services/awslambda/lambda_api.py", "tests/integration/terraform/provider.tf", "tests/integration/test_lambda.py", "tests/unit/test_lambda.py"] | Localstack Pro + localstack-chalice + AWS Chalice not working | [x] bug report
[ ] feature request
# Detailed description
Using localstack-chalice with AWS Chalice is not working.
I thought the error was because Chalice uses Lambda layers, which is a LocalStack Pro feature, but after subscribing and adding the API key I still get the same error.
## Expected behavior
run `chalice-local deploy local` and get all chalice resources deployed to localstack
## Actual behavior
When I do `chalice-local deploy local` I get the following:
```
chalice.deploy.deployer.ChaliceDeploymentError: ERROR - While deploying your chalice application, received the following error:
An error occurred (405) when calling the DeleteFunctionConcurrency operation:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>405 Method Not
Allowed</title>
<h1>Method Not Allowed</h1>
<p>The method is not allowed for
the requested URL.</p>
```
my `.chalice/config-local.json`
```
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["ssm:GetParameter"],
"Resource": ["arn:aws:ssm:*:*:parameter/REDACTED/local/*"],
"Effect": "Allow"
},
{
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:*",
"Effect": "Allow"
}
]
}
```
Part of my app.py (only the functions)
```
@app.on_sqs_message(queue='my-queue', batch_size=1)
def handle_sqs_message(event):
print("event", event)
for record in event:
app.log.debug("Received message with contents: %s", record.body)
@app.route('/job-worker', methods=['POST'])
def job_worker():
body = app.current_request.json_body
print("request.post", body)
```
After I run `chalice-local deploy local` I check LocalStack, and there is just one function created there:
```
╰─± aws --endpoint-url=http://0.0.0.0:4566 lambda list-functions
{
"Functions": [
{
"FunctionName": "ftg_chalice-local-handle_sqs_message",
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:ftg_chalice-local-handle_sqs_message",
"Runtime": "python3.8",
"Role": "arn:aws:iam::000000000000:role/ftg_chalice-local-handle_sqs_message",
"Handler": "app.handle_sqs_message",
"CodeSize": 7452977,
"Description": "",
"Timeout": 60,
"MemorySize": 128,
"LastModified": "2020-11-20T18:11:58.196+0000",
"CodeSha256": "JidgtUqdyNwEQPlXX6evd3YSoq23JAjZ18qg843r14k=",
"Version": "$LATEST",
"VpcConfig": {
"SubnetIds": [],
"SecurityGroupIds": []
},
"Environment": {
"Variables": {
"ENV": "local"
}
},
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "336fffb3-940f-4901-803d-929d0878387d",
"Layers": [],
"State": "Active",
"LastUpdateStatus": "Successful"
}
]
}
```
localstack logs
```
2020-11-20T18:11:58:INFO:localstack.services.awslambda.lambda_api: Function not found: arn:aws:lambda:us-east-1:000000000000:function:ftg_chalice-local-handle_sqs_message
```
# Steps to reproduce
```
╰─± SERVICES=sqs,sts,iam,cloudformation,lambda,apigateway,ssm DATA_DIR=/tmp/localstack/data PORT_WEB_UI=8081 DEBUG=1 LOCALSTACK_API_KEY=REDACTED localstack start
```
have a chalice app with the configuration above and run `chalice-local deploy local`
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
┆Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-337) by [Unito](https://www.unito.io/learn-more)
| https://github.com/localstack/localstack/issues/3260 | https://github.com/localstack/localstack/pull/3269 | 1962d8e25e7fdb2f0a805c7692bc712b423c0272 | 1a7f97825471657f267765a07f604c1ece01e1ca | "2020-11-20T18:22:13Z" | python | "2020-11-25T08:10:41Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,243 | ["localstack/constants.py", "localstack/services/kinesis/kinesis_listener.py"] | AWS SDKv2 kinesis throws exception when using `getRecords` | <!-- Love localstack? Please consider supporting our collective:
:point_right: https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
Currently when using [software.amazon.awssdk:kinesis:2.15.27](https://search.maven.org/artifact/software.amazon.awssdk/kinesis/2.15.27/jar) and attempting to use [getRecords](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/kinesis/KinesisAsyncClient.html#getRecords-java.util.function.Consumer-) we end up getting a general aws client exception `software.amazon.awssdk.core.exception.SdkClientException: Unable to parse date : 1.605283809962E9`. This doesn't happen when using a live kinesis stream.
## Expected behavior
Should be able to get a response from localstack with the correct data.
## Actual behavior
An exception is thrown from inside aws sdk v2 with the message `software.amazon.awssdk.core.exception.SdkClientException: Unable to parse date : 1.605283809962E9`.
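A quick sanity check on the unparsable value (my addition): `1.605283809962E9` is an epoch timestamp in seconds, serialized in scientific notation, which Python parses happily while the Java SDK's date unmarshaller apparently does not accept that notation:

```python
from datetime import datetime, timezone

raw = "1.605283809962E9"  # the value from the error message
seconds = float(raw)      # scientific notation is a perfectly valid float literal
when = datetime.fromtimestamp(seconds, tz=timezone.utc)
print(when.isoformat())   # 2020-11-13T..., i.e. the day the issue was filed
```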
# Steps to reproduce
## Command used to start LocalStack
`docker run -d -p 4566:4566 -e "SERVICES=kinesis" localstack/localstack:0.12.2`
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
https://github.com/lukecollier/localstack-jvm-aws-sdk-v2
┆Issue is synchronized with this [Jira Task](https://localstack.atlassian.net/browse/LOC-53) by [Unito](https://www.unito.io/learn-more)
| https://github.com/localstack/localstack/issues/3243 | https://github.com/localstack/localstack/pull/3278 | 993df276882764c7149e6475ed21e1f7ea380c1a | 535eebe2496a8c1c815e09a1337d6463edf081bd | "2020-11-13T16:57:37Z" | python | "2020-11-29T14:23:30Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,242 | ["localstack/services/awslambda/lambda_executors.py"] | Usage of `docker inspect ..` is fragile, depends on how and what built the docker image | [x] bug report
[ ] feature request
# Detailed description
`lambda_executor.py` currently retrieves the container entrypoint from the Docker image via `docker inspect --format="{{ .ContainerConfig.Entrypoint }}" ..`. This is fragile, and the value may be missing depending on how the image in question is built. There is a `Config` block _and_ a `ContainerConfig` block that are mostly the same, but sometimes differ depending on what built the image and which version of that tool built it. For example, we are seeing the entrypoint missing on images built with Docker for Mac 2.5.0.1, but not on earlier versions; others using `podman` are noticing the fragility in other projects:
https://github.com/containers/podman/issues/2017
## Expected behavior
entrypoint value is picked up from a validly built container
## Actual behavior
entrypoint is sometimes an empty string, which then for a `provided` lambda executor ends up with a script error trying to execute the handler name.
The simple fix is to change `--format="{{ .ContainerConfig.Entrypoint }}"` to `--format="{{ .Config.Entrypoint }}"` which seems like the more canonical way of getting that value.
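Given a parsed `docker inspect` JSON object, the robust read is to prefer `.Config` and fall back to `.ContainerConfig` only if needed; a sketch (my addition):

```python
def image_entrypoint(inspect_obj):
    """Entrypoint from one `docker inspect` result; `.Config` is canonical,
    `.ContainerConfig` is builder-dependent and may be absent or empty."""
    for section in ("Config", "ContainerConfig"):
        entrypoint = (inspect_obj.get(section) or {}).get("Entrypoint")
        if entrypoint:
            return entrypoint
    return []

# Image whose builder only filled Config (illustrative entrypoint path):
image = {"Config": {"Entrypoint": ["/entrypoint.sh"]}, "ContainerConfig": {}}
print(image_entrypoint(image))  # ['/entrypoint.sh']
print(image_entrypoint({}))     # [] -- no entrypoint anywhere
```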
┆Issue is synchronized with this [Jira Task](https://localstack.atlassian.net/browse/LOC-54) by [Unito](https://www.unito.io/learn-more)
| https://github.com/localstack/localstack/issues/3242 | https://github.com/localstack/localstack/pull/3366 | b71b6e1c0e1072045737c4c2d0afed0b1579687d | fa16a64ab91df78b0251e017bee3068f68dda011 | "2020-11-12T23:09:17Z" | python | "2020-12-16T13:02:13Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,209 | ["localstack/services/apigateway/apigateway_starter.py", "localstack/services/awslambda/lambda_api.py"] | Terraform crashes when creating API Gateway stages | # Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
We're having trouble testing our serverless resources with Terraform and LocalStack. The problem seems related to API Gateway Stages, however I'm not able to narrow it down. I was able to create a minimal Terraform setup to reproduce the error.
## Expected behavior
The below code works in conjunction with LocalStack, as it does with AWS.
## Actual behavior
There seem to be two errors, but I'm not certain whether they are related or not. On the first run of Terraform against a fresh LocalStack container Terraform quits saying the REST API doesn't have any methods defined. When running Terraform again it now crashes with, at least to me, little information on what happened.
On the other hand LocalStack does not generate any relevant output at all, not even when the debugging flag is set.
### Output from first `terraform apply`
```
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_api_gateway_deployment.service_api_deployment will be created
+ resource "aws_api_gateway_deployment" "service_api_deployment" {
+ created_date = (known after apply)
+ execution_arn = (known after apply)
+ id = (known after apply)
+ invoke_url = (known after apply)
+ rest_api_id = (known after apply)
}
# aws_api_gateway_integration.service_proxy_method_integration will be created
+ resource "aws_api_gateway_integration" "service_proxy_method_integration" {
+ cache_namespace = (known after apply)
+ connection_type = "INTERNET"
+ http_method = "ANY"
+ id = (known after apply)
+ integration_http_method = "POST"
+ passthrough_behavior = (known after apply)
+ resource_id = (known after apply)
+ rest_api_id = (known after apply)
+ timeout_milliseconds = 29000
+ type = "AWS_PROXY"
+ uri = (known after apply)
}
# aws_api_gateway_method.service_proxy_method will be created
+ resource "aws_api_gateway_method" "service_proxy_method" {
+ api_key_required = false
+ authorization = "NONE"
+ http_method = "ANY"
+ id = (known after apply)
+ resource_id = (known after apply)
+ rest_api_id = (known after apply)
}
# aws_api_gateway_resource.service_proxy_resource will be created
+ resource "aws_api_gateway_resource" "service_proxy_resource" {
+ id = (known after apply)
+ parent_id = (known after apply)
+ path = (known after apply)
+ path_part = "{proxy+}"
+ rest_api_id = (known after apply)
}
# aws_api_gateway_rest_api.service_api will be created
+ resource "aws_api_gateway_rest_api" "service_api" {
+ api_key_source = "HEADER"
+ arn = (known after apply)
+ created_date = (known after apply)
+ execution_arn = (known after apply)
+ id = (known after apply)
+ minimum_compression_size = -1
+ name = "test"
+ root_resource_id = (known after apply)
+ endpoint_configuration {
+ types = (known after apply)
+ vpc_endpoint_ids = (known after apply)
}
}
# aws_api_gateway_stage.service_api_stage will be created
+ resource "aws_api_gateway_stage" "service_api_stage" {
+ arn = (known after apply)
+ cache_cluster_size = "0.5"
+ deployment_id = (known after apply)
+ execution_arn = (known after apply)
+ id = (known after apply)
+ invoke_url = (known after apply)
+ rest_api_id = (known after apply)
+ stage_name = "test"
}
# aws_iam_role.service_function_role will be created
+ resource "aws_iam_role" "service_function_role" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = "lambda.amazonaws.com"
}
+ Sid = ""
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ max_session_duration = 3600
+ name = "test"
+ path = "/"
+ unique_id = (known after apply)
}
# aws_lambda_alias.main will be created
+ resource "aws_lambda_alias" "main" {
+ arn = (known after apply)
+ description = "points to latest version"
+ function_name = (known after apply)
+ function_version = (known after apply)
+ id = (known after apply)
+ invoke_arn = (known after apply)
+ name = "main"
}
# aws_lambda_function.service_function will be created
+ resource "aws_lambda_function" "service_function" {
+ arn = (known after apply)
+ function_name = "test"
+ handler = "test"
+ id = (known after apply)
+ invoke_arn = (known after apply)
+ last_modified = (known after apply)
+ memory_size = 256
+ publish = true
+ qualified_arn = (known after apply)
+ reserved_concurrent_executions = -1
+ role = (known after apply)
+ runtime = "dotnetcore3.1"
+ s3_bucket = "test"
+ s3_key = "test.zip"
+ source_code_hash = (known after apply)
+ source_code_size = (known after apply)
+ timeout = 300
+ version = (known after apply)
+ tracing_config {
+ mode = "Active"
}
}
# aws_lambda_permission.service_function_api_invocation_permission will be created
+ resource "aws_lambda_permission" "service_function_api_invocation_permission" {
+ action = "lambda:InvokeFunction"
+ function_name = "test"
+ id = (known after apply)
+ principal = "apigateway.amazonaws.com"
+ qualifier = "main"
+ source_arn = (known after apply)
+ statement_id = "AllowExecutionFromAPIGateway"
}
# aws_s3_bucket.local-devops-bucket will be created
+ resource "aws_s3_bucket" "local-devops-bucket" {
+ acceleration_status = (known after apply)
+ acl = "private"
+ arn = (known after apply)
+ bucket = "test"
+ bucket_domain_name = (known after apply)
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = false
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ region = (known after apply)
+ request_payer = (known after apply)
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
+ versioning {
+ enabled = (known after apply)
+ mfa_delete = (known after apply)
}
}
# aws_s3_bucket_object.service_function_archive will be created
+ resource "aws_s3_bucket_object" "service_function_archive" {
+ acl = "private"
+ bucket = "test"
+ content_type = (known after apply)
+ etag = (known after apply)
+ force_destroy = false
+ id = (known after apply)
+ key = "test.zip"
+ kms_key_id = (known after apply)
+ server_side_encryption = (known after apply)
+ source = "test.zip"
+ storage_class = (known after apply)
+ version_id = (known after apply)
}
Plan: 12 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_api_gateway_rest_api.service_api: Creating...
aws_iam_role.service_function_role: Creating...
aws_s3_bucket_object.service_function_archive: Creating...
aws_s3_bucket.local-devops-bucket: Creating...
aws_iam_role.service_function_role: Creation complete after 0s [id=test]
aws_api_gateway_rest_api.service_api: Creation complete after 0s [id=ep4yjxqb8x]
aws_api_gateway_resource.service_proxy_resource: Creating...
aws_api_gateway_deployment.service_api_deployment: Creating...
aws_s3_bucket_object.service_function_archive: Creation complete after 0s [id=test.zip]
aws_lambda_function.service_function: Creating...
aws_api_gateway_resource.service_proxy_resource: Creation complete after 0s [id=4b9q58vlzd]
aws_api_gateway_method.service_proxy_method: Creating...
aws_api_gateway_method.service_proxy_method: Creation complete after 1s [id=agm-ep4yjxqb8x-4b9q58vlzd-ANY]
aws_s3_bucket.local-devops-bucket: Creation complete after 1s [id=test]
aws_lambda_function.service_function: Creation complete after 7s [id=test]
aws_lambda_alias.main: Creating...
aws_lambda_alias.main: Creation complete after 0s [id=arn:aws:lambda:eu-central-1:000000000000:function:test:main]
aws_lambda_permission.service_function_api_invocation_permission: Creating...
aws_api_gateway_integration.service_proxy_method_integration: Creating...
aws_api_gateway_integration.service_proxy_method_integration: Creation complete after 0s [id=agi-ep4yjxqb8x-4b9q58vlzd-ANY]
aws_lambda_permission.service_function_api_invocation_permission: Creation complete after 0s [id=AllowExecutionFromAPIGateway]
Error: Error creating API Gateway Deployment: : The REST API doesn't contain any methods
status code: 400, request id:
```
### Output from second `terraform apply`
```
aws_iam_role.service_function_role: Refreshing state... [id=test]
aws_s3_bucket_object.service_function_archive: Refreshing state... [id=test.zip]
aws_api_gateway_rest_api.service_api: Refreshing state... [id=ep4yjxqb8x]
aws_s3_bucket.local-devops-bucket: Refreshing state... [id=test]
aws_api_gateway_resource.service_proxy_resource: Refreshing state... [id=4b9q58vlzd]
aws_lambda_function.service_function: Refreshing state... [id=test]
aws_api_gateway_method.service_proxy_method: Refreshing state... [id=agm-ep4yjxqb8x-4b9q58vlzd-ANY]
aws_lambda_alias.main: Refreshing state... [id=arn:aws:lambda:eu-central-1:000000000000:function:test:main]
aws_lambda_permission.service_function_api_invocation_permission: Refreshing state... [id=AllowExecutionFromAPIGateway]
aws_api_gateway_integration.service_proxy_method_integration: Refreshing state... [id=agi-ep4yjxqb8x-4b9q58vlzd-ANY]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
~ update in-place
-/+ destroy and then create replacement
Terraform will perform the following actions:
# aws_api_gateway_deployment.service_api_deployment will be created
+ resource "aws_api_gateway_deployment" "service_api_deployment" {
+ created_date = (known after apply)
+ execution_arn = (known after apply)
+ id = (known after apply)
+ invoke_url = (known after apply)
+ rest_api_id = "ep4yjxqb8x"
}
# aws_api_gateway_integration.service_proxy_method_integration will be updated in-place
~ resource "aws_api_gateway_integration" "service_proxy_method_integration" {
cache_key_parameters = []
cache_namespace = "96ba41d5"
connection_type = "INTERNET"
http_method = "ANY"
id = "agi-ep4yjxqb8x-4b9q58vlzd-ANY"
integration_http_method = "POST"
passthrough_behavior = "WHEN_NO_MATCH"
request_parameters = {}
request_templates = {}
resource_id = "4b9q58vlzd"
rest_api_id = "ep4yjxqb8x"
~ timeout_milliseconds = 0 -> 29000
type = "AWS_PROXY"
uri = "arn:aws:apigateway:eu-central-1:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-central-1:000000000000:function:test:main/invocations"
}
# aws_api_gateway_stage.service_api_stage will be created
+ resource "aws_api_gateway_stage" "service_api_stage" {
+ arn = (known after apply)
+ cache_cluster_size = "0.5"
+ deployment_id = (known after apply)
+ execution_arn = (known after apply)
+ id = (known after apply)
+ invoke_url = (known after apply)
+ rest_api_id = "ep4yjxqb8x"
+ stage_name = "test"
}
# aws_lambda_alias.main will be updated in-place
~ resource "aws_lambda_alias" "main" {
arn = "arn:aws:lambda:eu-central-1:000000000000:function:test:main"
description = "points to latest version"
function_name = "arn:aws:lambda:eu-central-1:000000000000:function:test"
~ function_version = "1" -> (known after apply)
id = "arn:aws:lambda:eu-central-1:000000000000:function:test:main"
invoke_arn = "arn:aws:apigateway:eu-central-1:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-central-1:000000000000:function:test:main/invocations"
name = "main"
}
# aws_lambda_function.service_function will be updated in-place
~ resource "aws_lambda_function" "service_function" {
arn = "arn:aws:lambda:eu-central-1:000000000000:function:test"
function_name = "test"
handler = "test"
id = "test"
invoke_arn = "arn:aws:apigateway:eu-central-1:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-central-1:000000000000:function:test/invocations"
last_modified = "2020-11-02T12:02:20.471+0000"
layers = []
memory_size = 256
publish = true
~ qualified_arn = "arn:aws:lambda:eu-central-1:000000000000:function:test:1" -> (known after apply)
reserved_concurrent_executions = -1
role = "arn:aws:iam::000000000000:role/test"
runtime = "dotnetcore3.1"
s3_bucket = "test"
s3_key = "test.zip"
source_code_hash = "b8UrP6K2Q/w+4t0ZUTTh/AoI8bzMljmKvc/i1Ka+/Ho="
source_code_size = 2875930
tags = {}
timeout = 300
~ version = "1" -> (known after apply)
~ tracing_config {
~ mode = "PassThrough" -> "Active"
}
}
# aws_lambda_permission.service_function_api_invocation_permission must be replaced
-/+ resource "aws_lambda_permission" "service_function_api_invocation_permission" {
action = "lambda:InvokeFunction"
function_name = "test"
~ id = "AllowExecutionFromAPIGateway" -> (known after apply)
principal = "apigateway.amazonaws.com"
+ qualifier = "main" # forces replacement
source_arn = "arn:aws:execute-api:eu-central-1::ep4yjxqb8x/*/*/*"
statement_id = "AllowExecutionFromAPIGateway"
}
Plan: 3 to add, 3 to change, 1 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_lambda_permission.service_function_api_invocation_permission: Destroying... [id=AllowExecutionFromAPIGateway]
aws_api_gateway_deployment.service_api_deployment: Creating...
aws_lambda_function.service_function: Modifying... [id=test]
aws_api_gateway_deployment.service_api_deployment: Creation complete after 0s [id=053vuojcjh]
aws_api_gateway_stage.service_api_stage: Creating...
Error: rpc error: code = Unavailable desc = transport is closing
Error: rpc error: code = Unavailable desc = transport is closing
Error: rpc error: code = Unavailable desc = transport is closing
panic: runtime error: invalid memory address or nil pointer dereference
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x48d844c]
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5:
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: goroutine 89 [running]:
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: github.com/terraform-providers/terraform-provider-aws/aws.resourceAwsApiGatewayStageCreate(0xc001674780, 0x5f0a640, 0xc000179b80, 0x0, 0xffffffffffffffff)
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-aws/aws/resource_aws_api_gateway_stage.go:172 +0x85c
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc00041b3f0, 0x7688d40, 0xc0010d6700, 0xc001674780, 0x5f0a640, 0xc000179b80, 0x0, 0x0, 0x0)
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:268 +0x88
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc00041b3f0, 0x7688d40, 0xc0010d6700, 0xc0004b88c0, 0xc00218cb20, 0x5f0a640, 0xc000179b80, 0x0, 0x0, 0x0, ...)
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:386 +0x681
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: github.com/hashicorp/terraform-plugin-sdk/v2/internal/helper/plugin.(*GRPCProviderServer).ApplyResourceChange(0xc000c91780, 0x7688d40, 0xc0010d6700, 0xc0004b8770, 0xc000c91780, 0xc000c91790, 0x6c492b0)
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/internal/helper/plugin/grpc_provider.go:952 +0x8b2
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfplugin5._Provider_ApplyResourceChange_Handler.func1(0x7688d40, 0xc0010d6700, 0x6943a20, 0xc0004b8770, 0xc0010d6700, 0x6091660, 0xc000e58d01, 0xc00218c860)
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/internal/tfplugin5/tfplugin5.pb.go:3312 +0x86
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: github.com/hashicorp/terraform-plugin-sdk/v2/plugin.Serve.func3.1(0x7688e00, 0xc00119c300, 0x6943a20, 0xc0004b8770, 0xc00218c840, 0xc00218c860, 0xc000a77ba0, 0x1090018, 0x6737900, 0xc00119c300)
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/plugin/serve.go:76 +0x87
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: github.com/hashicorp/terraform-plugin-sdk/v2/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x69ca620, 0xc000c91780, 0x7688e00, 0xc00119c300, 0xc000e58de0, 0xc0018c7ea0, 0x7688e00, 0xc00119c300, 0xc00211aa00, 0x265)
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/internal/tfplugin5/tfplugin5.pb.go:3314 +0x14b
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: google.golang.org/grpc.(*Server).processUnaryRPC(0xc000bb6700, 0x76ac4e0, 0xc001064480, 0xc0002a8c00, 0xc000d264b0, 0xac87d80, 0x0, 0x0, 0x0)
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/google.golang.org/[email protected]/server.go:1171 +0x50a
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: google.golang.org/grpc.(*Server).handleStream(0xc000bb6700, 0x76ac4e0, 0xc001064480, 0xc0002a8c00, 0x0)
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/google.golang.org/[email protected]/server.go:1494 +0xccd
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000f96e20, 0xc000bb6700, 0x76ac4e0, 0xc001064480, 0xc0002a8c00)
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/google.golang.org/[email protected]/server.go:834 +0xa1
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: created by google.golang.org/grpc.(*Server).serveStreams.func1
2020-11-02T13:02:36.909+0100 [DEBUG] plugin.terraform-provider-aws_v3.13.0_x5: /opt/teamcity-agent/work/5d79fe75d4460a2f/pkg/mod/google.golang.org/[email protected]/server.go:832 +0x204
2020-11-02T13:02:36.912+0100 [WARN] plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2020/11/02 13:02:36 [DEBUG] aws_lambda_permission.service_function_api_invocation_permission: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalWriteState
2020/11/02 13:02:36 [DEBUG] aws_lambda_function.service_function: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2020/11/02 13:02:36 [TRACE] EvalWriteState: writing current state object for aws_lambda_permission.service_function_api_invocation_permission
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalMaybeTainted
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalWriteState
2020/11/02 13:02:36 [TRACE] EvalWriteState: recording 2 dependencies for aws_lambda_function.service_function
2020/11/02 13:02:36 [TRACE] EvalWriteState: writing current state object for aws_lambda_function.service_function
2020/11/02 13:02:36 [DEBUG] aws_api_gateway_stage.service_api_stage: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalMaybeTainted
2020/11/02 13:02:36 [TRACE] EvalMaybeTainted: aws_api_gateway_stage.service_api_stage encountered an error during creation, so it is now marked as tainted
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalWriteState
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalApplyPost
2020/11/02 13:02:36 [TRACE] EvalWriteState: removing state object for aws_api_gateway_stage.service_api_stage
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalApplyProvisioners
2020/11/02 13:02:36 [TRACE] EvalApplyProvisioners: aws_api_gateway_stage.service_api_stage has no state, so skipping provisioners
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalMaybeTainted
2020/11/02 13:02:36 [TRACE] EvalMaybeTainted: aws_api_gateway_stage.service_api_stage encountered an error during creation, so it is now marked as tainted
2020/11/02 13:02:36 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalWriteState
2020/11/02 13:02:36 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2020/11/02 13:02:36 [TRACE] EvalWriteState: removing state object for aws_api_gateway_stage.service_api_stage
2020/11/02 13:02:36 [ERROR] eval: *terraform.EvalOpFilter, err: rpc error: code = Unavailable desc = transport is closing
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalIf
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalIf
2020/11/02 13:02:36 [TRACE] [walkApply] Exiting eval tree: aws_lambda_permission.service_function_api_invocation_permission (destroy)
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalWriteDiff
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalApplyPost
2020/11/02 13:02:36 [TRACE] vertex "aws_lambda_permission.service_function_api_invocation_permission (destroy)": visit complete
2020/11/02 13:02:36 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2020/11/02 13:02:36 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2020/11/02 13:02:36 [TRACE] [walkApply] Exiting eval tree: aws_api_gateway_stage.service_api_stage
2020/11/02 13:02:36 [TRACE] vertex "aws_api_gateway_stage.service_api_stage": visit complete
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalApplyProvisioners
2020/11/02 13:02:36 [TRACE] EvalApplyProvisioners: aws_lambda_function.service_function is not freshly-created, so no provisioning is required
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalMaybeTainted
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalWriteState
2020/11/02 13:02:36 [TRACE] EvalWriteState: recording 2 dependencies for aws_lambda_function.service_function
2020/11/02 13:02:36 [TRACE] EvalWriteState: writing current state object for aws_lambda_function.service_function
2020-11-02T13:02:36.912+0100 [DEBUG] plugin: plugin process exited: path=.terraform/plugins/registry.terraform.io/hashicorp/aws/3.13.0/darwin_amd64/terraform-provider-aws_v3.13.0_x5 pid=5341 error="exit status 2"
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalIf
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalIf
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalWriteDiff
2020/11/02 13:02:36 [TRACE] eval: *terraform.EvalApplyPost
2020/11/02 13:02:36 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2020/11/02 13:02:36 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2020/11/02 13:02:36 [TRACE] [walkApply] Exiting eval tree: aws_lambda_function.service_function
2020/11/02 13:02:36 [TRACE] vertex "aws_lambda_function.service_function": visit complete
2020/11/02 13:02:36 [TRACE] dag/walk: upstream of "aws_lambda_alias.main" errored, so skipping
2020/11/02 13:02:36 [TRACE] dag/walk: upstream of "aws_lambda_permission.service_function_api_invocation_permission" errored, so skipping
2020/11/02 13:02:36 [TRACE] dag/walk: upstream of "aws_api_gateway_integration.service_proxy_method_integration" errored, so skipping
2020/11/02 13:02:36 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping
2020/11/02 13:02:36 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/hashicorp/aws\"] (close)" errored, so skipping
2020/11/02 13:02:36 [TRACE] dag/walk: upstream of "root" errored, so skipping
2020/11/02 13:02:36 [TRACE] statemgr.Filesystem: have already backed up original terraform.tfstate to terraform.tfstate.backup on a previous write
2020/11/02 13:02:36 [TRACE] statemgr.Filesystem: state has changed since last snapshot, so incrementing serial to 13
2020/11/02 13:02:36 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2020/11/02 13:02:36 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
2020/11/02 13:02:36 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate using fcntl flock
2020-11-02T13:02:36.922+0100 [DEBUG] plugin: plugin exited
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.
When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.
SECURITY WARNING: the "crash.log" file that was created may contain
sensitive information that must be redacted before it is safe to share
on the issue tracker.
[1]: https://github.com/hashicorp/terraform/issues
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
```
# Steps to reproduce
## Command used to start LocalStack
`LOCALSTACK_API_KEY=xxx DEBUG=1 localstack start`
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
This is the minimal Terraform code I could write that reproduces the error. Steps to reproduce:
1. Copy below code into a new file
2. Run `terraform init`
3. Run `TF_LOG=DEBUG terraform apply`. Terraform will create some resources and abort with:
`Error: Error creating API Gateway Deployment: : The REST API doesn't contain any methods
status code: 400, request id: `
4. Run `TF_LOG=DEBUG terraform apply` again. This time the AWS provider will crash as stated before.
```hcl
provider "aws" {
region = "eu-central-1"
access_key = "test"
secret_key = "test"
s3_force_path_style = true
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
apigateway = "http://localhost:4566"
cloudformation = "http://localhost:4566"
cloudwatch = "http://localhost:4566"
dynamodb = "http://localhost:4566"
es = "http://localhost:4566"
firehose = "http://localhost:4566"
iam = "http://localhost:4566"
kinesis = "http://localhost:4566"
lambda = "http://localhost:4566"
route53 = "http://localhost:4566"
redshift = "http://localhost:4566"
s3 = "http://localhost:4566"
secretsmanager = "http://localhost:4566"
ses = "http://localhost:4566"
sns = "http://localhost:4566"
sqs = "http://localhost:4566"
ssm = "http://localhost:4566"
stepfunctions = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
##############################
# API GATEWAY
##############################
resource "aws_api_gateway_rest_api" "service_api" {
name = "test"
api_key_source = "HEADER"
}
resource "aws_api_gateway_deployment" "service_api_deployment" {
rest_api_id = aws_api_gateway_rest_api.service_api.id
lifecycle {
create_before_destroy = true
}
}
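# Aside (not part of the original repro): on real AWS, an
# aws_api_gateway_deployment typically needs an explicit depends_on so it is
# created only after the method/integration exist, e.g.
#   depends_on = [aws_api_gateway_integration.service_proxy_method_integration]
# which is also the usual fix for the "The REST API doesn't contain any
# methods" error shown in the first apply.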
resource "aws_api_gateway_stage" "service_api_stage" {
stage_name = "test"
rest_api_id = aws_api_gateway_rest_api.service_api.id
deployment_id = aws_api_gateway_deployment.service_api_deployment.id
# needed to prevent Terraform from updating this resource every time
cache_cluster_size = "0.5"
}
resource "aws_api_gateway_resource" "service_proxy_resource" {
rest_api_id = aws_api_gateway_rest_api.service_api.id
parent_id = aws_api_gateway_rest_api.service_api.root_resource_id
path_part = "{proxy+}"
}
resource "aws_api_gateway_method" "service_proxy_method" {
rest_api_id = aws_api_gateway_rest_api.service_api.id
resource_id = aws_api_gateway_resource.service_proxy_resource.id
authorization = "NONE"
http_method = "ANY"
}
resource "aws_api_gateway_integration" "service_proxy_method_integration" {
rest_api_id = aws_api_gateway_rest_api.service_api.id
resource_id = aws_api_gateway_resource.service_proxy_resource.id
http_method = aws_api_gateway_method.service_proxy_method.http_method
/* credentials = "DO NOT SET" */
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_alias.main.invoke_arn
}
##############################
# S3 ARCHIVE
##############################
resource "aws_s3_bucket" "local-devops-bucket" {
bucket = "test"
}
resource "aws_s3_bucket_object" "service_function_archive" {
bucket = "test"
key = "test.zip"
source = "test.zip"
}
##############################
# LAMBDA FUNCTION
##############################
resource "aws_lambda_function" "service_function" {
s3_bucket = "test"
s3_key = aws_s3_bucket_object.service_function_archive.key
function_name = "test"
role = aws_iam_role.service_function_role.arn
handler = "test"
runtime = "dotnetcore3.1"
memory_size = 256
timeout = "300"
publish = true
tracing_config {
mode = "Active"
}
}
resource "aws_lambda_alias" "main" {
name = "main"
description = "points to latest version"
function_name = aws_lambda_function.service_function.arn
function_version = aws_lambda_function.service_function.version
}
resource "aws_lambda_permission" "service_function_api_invocation_permission" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.service_function.function_name
principal = "apigateway.amazonaws.com"
qualifier = aws_lambda_alias.main.name
# More: http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-control-access-using-iam-policies-to-invoke-api.html
//source_arn = "${aws_api_gateway_rest_api.service_api.execution_arn}/*/*/"
source_arn = "${aws_api_gateway_rest_api.service_api.execution_arn}/*/*/*" // must contain full path as defined by API
}
resource "aws_iam_role" "service_function_role" {
name = "test"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
``` | https://github.com/localstack/localstack/issues/3209 | https://github.com/localstack/localstack/pull/3231 | 3e7dd0808991ce642b07a42e98f8b3cb1e487dc7 | 78a72923e25700d7da53e3540126cda825c129f1 | "2020-11-02T12:25:24Z" | python | "2020-11-11T11:19:28Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,208 | ["localstack/services/kinesis/kinesis_listener.py"] | JSON parsing problems when using jackson 2.11 in a Kinesis Consumer |
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
When developing a Kinesis Data Stream consumer using `com.amazonaws:aws-java-sdk-kinesis:1.11.887`, I'm encountering the following error:
```
com.amazonaws.SdkClientException: Unable to marshall request to JSON: Jackson jackson-core/jackson-dataformat-cbor incompatible library version detected.
You have two possible resolutions:
1) Ensure the com.fasterxml.jackson.core:jackson-core & com.fasterxml.jackson.dataformat:jackson-dataformat-cbor libraries on your classpath have the same version number
2) Disable CBOR wire-protocol by passing the -Dcom.amazonaws.sdk.disableCbor property or setting the AWS_CBOR_DISABLE environment variable (warning this may affect performance)
```
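For resolution 2, the flag can be set either as an environment variable or as a JVM system property — a minimal sketch, using the names from the error message above:

```shell
# Disable the CBOR wire-protocol for the AWS Java SDK (v1).
# Either mechanism works: the env var is read by the SDK at runtime,
# the -D flag is passed to the JVM at startup.
export AWS_CBOR_DISABLE=true
JAVA_FLAG="-Dcom.amazonaws.sdk.disableCbor=true"

echo "env: $AWS_CBOR_DISABLE, jvm flag: $JAVA_FLAG"
```

With LocalStack specifically, disabling CBOR has historically been recommended for Java Kinesis clients, since the backing Kinesis emulation did not always speak CBOR.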
The direct cause of this is that I have different versions of the Jackson libraries on my classpath (both are transitive dependencies of the above-mentioned AWS library, and they are also included in our project for unrelated reasons, hence the mismatch):
* `com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:jar:2.6.7`
* `com.fasterxml.jackson.core:jackson-core:jar:2.11.2`
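One way to force a single version across both artifacts — a hedged sketch assuming the project uses Maven (the version number is the one chosen below; adjust as needed):

```xml
<!-- pom.xml: pin both Jackson artifacts to the same version -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.11.2</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.dataformat</groupId>
      <artifactId>jackson-dataformat-cbor</artifactId>
      <version>2.11.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```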
I can't downgrade the core version, so to address the problem I've forced both to version `2.11.2`. This produces the problem I want to report (and ask for help on):
```
com.amazonaws.SdkClientException: Unable to execute HTTP request: Current token (START_OBJECT) not VALUE_EMBEDDED_OBJECT, can not access as binary
at [Source: (com.amazonaws.event.ResponseProgressInputStream); line: -1, column: 297]
at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:373) ~[classes/:?]
at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:891) ~[?:1.8.0_202]
at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:346) ~[classes/:?]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_202]
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: Current token (START_OBJECT) not VALUE_EMBEDDED_OBJECT, can not access as binary
at [Source: (com.amazonaws.event.ResponseProgressInputStream); line: -1, column: 297]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1153) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2866) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2833) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2822) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.services.kinesis.AmazonKinesisClient.executeGetRecords(AmazonKinesisClient.java:1307) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.services.kinesis.AmazonKinesisAsyncClient$12.call(AmazonKinesisAsyncClient.java:757) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.services.kinesis.AmazonKinesisAsyncClient$12.call(AmazonKinesisAsyncClient.java:751) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_202]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_202]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_202]
... 1 more
Caused by: com.fasterxml.jackson.core.JsonParseException: Current token (START_OBJECT) not VALUE_EMBEDDED_OBJECT, can not access as binary
at [Source: (com.amazonaws.event.ResponseProgressInputStream); line: -1, column: 297]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1851) ~[jackson-core-2.11.2.jar:2.11.2]
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:707) ~[jackson-core-2.11.2.jar:2.11.2]
at com.fasterxml.jackson.dataformat.cbor.CBORParser.getBinaryValue(CBORParser.java:1660) ~[jackson-dataformat-cbor-2.11.2.jar:2.11.2]
at com.fasterxml.jackson.core.JsonParser.getBinaryValue(JsonParser.java:1495) ~[jackson-core-2.11.2.jar:2.11.2]
at com.amazonaws.transform.SimpleTypeCborUnmarshallers$ByteBufferCborUnmarshaller.unmarshall(SimpleTypeCborUnmarshallers.java:198) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.transform.SimpleTypeCborUnmarshallers$ByteBufferCborUnmarshaller.unmarshall(SimpleTypeCborUnmarshallers.java:196) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.services.kinesis.model.transform.RecordJsonUnmarshaller.unmarshall(RecordJsonUnmarshaller.java:61) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.services.kinesis.model.transform.RecordJsonUnmarshaller.unmarshall(RecordJsonUnmarshaller.java:29) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.transform.ListUnmarshaller.unmarshallJsonToList(ListUnmarshaller.java:92) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.transform.ListUnmarshaller.unmarshall(ListUnmarshaller.java:46) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.services.kinesis.model.transform.GetRecordsResultJsonUnmarshaller.unmarshall(GetRecordsResultJsonUnmarshaller.java:55) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.services.kinesis.model.transform.GetRecordsResultJsonUnmarshaller.unmarshall(GetRecordsResultJsonUnmarshaller.java:29) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:118) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:43) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:69) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1743) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleSuccessResponse(AmazonHttpClient.java:1463) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1371) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530) ~[aws-java-sdk-core-1.11.887.jar:?]
at com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2866) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2833) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2822) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.services.kinesis.AmazonKinesisClient.executeGetRecords(AmazonKinesisClient.java:1307) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.services.kinesis.AmazonKinesisAsyncClient$12.call(AmazonKinesisAsyncClient.java:757) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at com.amazonaws.services.kinesis.AmazonKinesisAsyncClient$12.call(AmazonKinesisAsyncClient.java:751) ~[aws-java-sdk-kinesis-1.11.887.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_202]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_202]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_202]
... 1 more
```
**I think this is a problem inside Localstack, because if I run the same on an actual AWS backend, then it works, no Exception is thrown.**
# Steps to reproduce
Just run [this test](https://github.com/jbartok/hazelcast-jet/blob/localstack-issue/extensions/kinesis/src/test/java/com/hazelcast/jet/kinesis/KinesisIntegrationTest.java
) and it will result in the exception I've described.
┆Issue is synchronized with this [Jira Task](https://localstack.atlassian.net/browse/LOC-69) by [Unito](https://www.unito.io/learn-more)
| https://github.com/localstack/localstack/issues/3208 | https://github.com/localstack/localstack/pull/3509 | 6cbd6124dae8bee166b0d55471229ceab90a6d7d | a44d36a00c527b051c79ed1d94bf771032df2cac | "2020-11-02T11:47:51Z" | python | "2021-01-27T18:56:28Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,195 | ["localstack/services/secretsmanager/secretsmanager_starter.py", "tests/integration/test_secretsmanager.py"] | Terraform triggers NotImplementedError: The delete_resource_policy action has not been implemented | Bug report.
I get frequent errors with terraform when trying to modify a secret. Concrete terraform script to follow. Terraform gets stuck in a loop. This seems to only happen when a resource gets modified. The only thing I've been able to do is tear down the docker container and start again.
aws_secretsmanager_secret.emulator_password: Still modifying.. [id=arn:aws:secretsmanager:eu-west-2:123456...secret:emulator-JqEOp, 30s elapsed]
And debug output from local stack keeps showing this:
```
local-aws_1 | 2020-10-28 22:56:20,853:API: 172.28.0.5 - - [28/Oct/2020 22:56:20] "POST / HTTP/1.1" 500 -
local-aws_1 | 2020-10-28 22:56:20,862:API: Error on request:
local-aws_1 | Traceback (most recent call last):
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/werkzeug/serving.py", line 323, in run_wsgi
local-aws_1 | execute(self.server.app)
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/werkzeug/serving.py", line 312, in execute
local-aws_1 | application_iter = app(environ, start_response)
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/server.py", line 168, in __call__
local-aws_1 | return backend_app(environ, start_response)
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 2464, in __call__
local-aws_1 | return self.wsgi_app(environ, start_response)
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 2450, in wsgi_app
local-aws_1 | response = self.handle_exception(e)
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask_cors/extension.py", line 165, in wrapped_function
local-aws_1 | return cors_after_request(app.make_response(f(*args, **kwargs)))
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask_cors/extension.py", line 165, in wrapped_function
local-aws_1 | return cors_after_request(app.make_response(f(*args, **kwargs)))
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1867, in handle_exception
local-aws_1 | reraise(exc_type, exc_value, tb)
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
local-aws_1 | raise value
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
local-aws_1 | response = self.full_dispatch_request()
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
local-aws_1 | rv = self.handle_user_exception(e)
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask_cors/extension.py", line 165, in wrapped_function
local-aws_1 | return cors_after_request(app.make_response(f(*args, **kwargs)))
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask_cors/extension.py", line 165, in wrapped_function
local-aws_1 | return cors_after_request(app.make_response(f(*args, **kwargs)))
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
local-aws_1 | reraise(exc_type, exc_value, tb)
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
local-aws_1 | raise value
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
local-aws_1 | rv = self.dispatch_request()
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
local-aws_1 | return self.view_functions[rule.endpoint](**req.view_args)
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/core/utils.py", line 151, in __call__
local-aws_1 | result = self.callback(request, request.url, {})
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/core/responses.py", line 202, in dispatch
local-aws_1 | return cls()._dispatch(*args, **kwargs)
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/core/responses.py", line 312, in _dispatch
local-aws_1 | return self.call_action()
local-aws_1 | File "/opt/code/localstack/.venv/lib/python3.8/site-packages/moto/core/responses.py", line 409, in call_action
local-aws_1 | raise NotImplementedError(
local-aws_1 | NotImplementedError: The delete_resource_policy action has not been implemented
```
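For context, terraform loops because the unimplemented action surfaces as an HTTP 500, which the provider keeps retrying. A simplified, illustrative sketch of the moto-style action dispatch involved (names are assumptions, not actual moto code):

```python
class SecretsManagerResponses:
    """Illustrative, simplified version of moto-style action dispatch."""

    def call_action(self, action):
        # The action name from the request is looked up as a method; a
        # missing method raises NotImplementedError, which the server
        # turns into a 500 response that terraform retries endlessly.
        method = getattr(self, action, None)
        if method is None:
            raise NotImplementedError(
                "The %s action has not been implemented" % action
            )
        return method()

    def delete_resource_policy(self):
        # Stubbing in the action is what breaks the retry loop.
        return {"Name": "emulator_pw"}


print(SecretsManagerResponses().call_action("delete_resource_policy"))  # {'Name': 'emulator_pw'}
```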
docker-compose
```
services:
local-aws:
image: localstack/localstack:latest
ports:
- "4566:4566"
- "8081:8080"
environment:
EDGE_PORT: 4566
SERVICES: sqs, secretsmanager, sts
DEFAULT_REGION: eu-west-2
HOSTNAME: local-aws
HOSTNAME_EXTERNAL: local-aws
START_WEB: 0
DEBUG: 1
networks:
- habitat-local-stack-network
```
Terraform script:
```
resource "aws_secretsmanager_secret_version" "emulator" {
secret_id = aws_secretsmanager_secret.emulator_password.id
secret_string = random_password.emulator_password.result
}
resource "aws_secretsmanager_secret" "emulator_password" {
name = "emulator_pw"
}
resource "random_password" "emulator_password" {
length = 16
upper = true
lower = true
special = true
number = true
}
terraform {
required_providers {
random = {
source = "hashicorp/random"
}
time = {
source = "hashicorp/time"
}
aws = {
source = "hashicorp/aws"
}
}
required_version = ">= 0.13"
}
```
| https://github.com/localstack/localstack/issues/3195 | https://github.com/localstack/localstack/pull/3215 | 711229640da9898daa81e6ffdb0cf17d0116132b | 764dba4a7c9c23b259a70202f42362f3a8cbd337 | "2020-10-28T23:08:31Z" | python | "2020-11-03T23:34:07Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,136 | [".travis.yml", "Dockerfile", "bin/Dockerfile.base", "doc/developer_guides/README.md", "tests/integration/lambdas/java/lambda-function-with-lib-0.0.1.jar", "tests/integration/lambdas/java/pom.xml", "tests/integration/test_lambda.py"] | Unable to invoke lambda function compiled in java 11. Localstack latest image still shows java version 8. | #2093 # Type of request: This is a ...
[*] bug report
[ ] feature request
# Detailed description
Unable to invoke lambda function compiled in java 11. Localstack latest image still shows java version 8.
## Expected behavior
Should be able to invoke lambda function compiled in java 11.
## Actual behavior
Getting error-
Exception in thread "main" java.lang.UnsupportedClassVersionError: publisFunction has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0
# Steps to reproduce
docker-compose up -d
Create lambda function
Create SQS
map lambda and sqs
send message to SQS to invoke lambda
Error message SS:


Similar issue was reported #2093. Couldn't find the answer if that was fixed and latest image of Localstack should compile in JAVA 11. | https://github.com/localstack/localstack/issues/3136 | https://github.com/localstack/localstack/pull/3166 | 819d90f6e7376d3cd3dcf59128854281ce2d70c1 | d0fbb06386ea59f0abc3f74102a1a7f53eeea1e4 | "2020-10-20T14:54:47Z" | python | "2020-10-26T23:45:33Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,111 | ["localstack/services/awslambda/lambda_executors.py", "tests/unit/test_lambda.py"] | Fix debug port parser for java lambda executor to support different formats | # Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
Support different ways of configuring debug ports for Java lambdas
```
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=_debug_port_
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=localhost:_debug_port_
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=127.0.0.1:_debug_port_
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:_debug_port_
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=1234
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=localhost:1234
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=127.0.01:1234
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:1234
```
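An illustrative parser covering all of the formats above (a sketch only, not LocalStack's actual implementation):

```python
import re

def parse_debug_port(agentlib_flag):
    # address= may be a bare port, host:port, ip:port or *:port; grab the
    # trailing digits after an optional "<host>:" prefix.
    match = re.search(r"address=(?:[^,:]*:)?(\d+)", agentlib_flag)
    return int(match.group(1)) if match else None

flags = [
    "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=1234",
    "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=localhost:1234",
    "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=127.0.0.1:1234",
    "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:1234",
]
print([parse_debug_port(f) for f in flags])  # [1234, 1234, 1234, 1234]
```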
## Expected behavior
Ports are parsed correctly
| https://github.com/localstack/localstack/issues/3111 | https://github.com/localstack/localstack/pull/3112 | 6d463be76ad416e0418645c9e8a0212dfdefec3a | 6738e4798818004cdec451d34dca046520ec627d | "2020-10-16T01:45:00Z" | python | "2020-10-16T11:40:06Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,059 | ["CHANGELOG.md", "localstack/constants.py", "localstack/services/events/events_listener.py", "localstack/services/iam/iam_listener.py", "localstack/services/sts/sts_listener.py", "localstack/utils/aws/aws_responses.py"] | IAM account password policy timeout |
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
Hi, I use localstack to test the IAM Account module, but I receive a timeout in `aws_iam_account_password_policy`.
## Expected behavior
Create the password policy
## Actual behavior
...

# Steps to reproduce
```
provider "aws" {
access_key = "mock_access_key"
region = "us-east-1"
s3_force_path_style = true
secret_key = "mock_secret_key"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
apigateway = "http://localhost:4566"
cloudformation = "http://localhost:4566"
cloudwatch = "http://localhost:4566"
dynamodb = "http://localhost:4566"
es = "http://localhost:4566"
firehose = "http://localhost:4566"
iam = "http://localhost:4566"
kinesis = "http://localhost:4566"
lambda = "http://localhost:4566"
route53 = "http://localhost:4566"
redshift = "http://localhost:4566"
s3 = "http://localhost:4566"
secretsmanager = "http://localhost:4566"
ses = "http://localhost:4566"
sns = "http://localhost:4566"
sqs = "http://localhost:4566"
ssm = "http://localhost:4566"
stepfunctions = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
##############
# IAM account
##############
module "iam_account" {
source = "../../modules/iam-account"
account_alias = var.iam_account_alias
max_password_age = var.iam_max_password_age
minimum_password_length = var.iam_minimum_password_length
password_reuse_prevention = var.iam_password_reuse_prevention
allow_users_to_change_password = true
require_lowercase_characters = true
require_uppercase_characters = true
require_numbers = true
require_symbols = true
}
variable "iam_account_alias" {}
variable "iam_minimum_password_length" {}
variable "iam_password_reuse_prevention" {}
variable "iam_max_password_age" {}
```
Module:
https://github.com/youse-seguradora/terraform-aws-iam/tree/master/modules/iam-account
## Command used to start LocalStack
localstack start
When I run terraform apply and terraform destroy in my own AWS account they work fine.
Can you guys help?
Thanks
| https://github.com/localstack/localstack/issues/3059 | https://github.com/localstack/localstack/pull/3064 | 13944ff3aee0e5c2ede7783bf39c2cc452c933c3 | 19f6d939e0d3479026f8fc69471f3468215a4dc8 | "2020-10-02T13:02:41Z" | python | "2020-10-02T20:05:09Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,040 | ["localstack/services/awslambda/lambda_api.py", "tests/integration/test_lambda.py"] | Issue with lambda add-permission function |
# Type of request: This is a ...
[x ] bug report
[ ] feature request
# Detailed description
I am using localstack on Windows 10. I can create a lambda function, load it to localstack and invoke it via the CLI. But when I try to modify the permissions via the add-permission function it fails with a long stack dump and the dreaded 'botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://e06d84562302:0/"' error. After looking at the lambda container the environment variable 'LOCALSTACK_HOSTNAME' = 'e06d84562302' which leads me to believe either I have a configuration issue or there is a translation that is not occurring in the add-permissions.
## Expected behavior
A success message.
## Actual behavior
The add-permissions function returns:
An error occurred (500) when calling the AddPermission operation (reached max retries: 2):
!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"
500 Internal Server Error
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
In the log is a long stack dump and the error:
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://e06d84562302:0/"
When examining the lambda container, the environment variable 'LOCALSTACK_HOSTNAME' = 'e06d84562302'
That hostname value will change if the current lambda is destroyed and a new lambda is created, but the error and the LOCALSTACK_HOSTNAME always match.
# Steps to reproduce
1) Start localstack via 'docker-compose up -d' command. The docker-compose.yml is listed below.
This succeeds.
2) Start lambda function via:
`awslocal lambda create-function --function-name ambProcessUASsim --zip-file fileb://./artifact/ambProcessUASsim.zip --handler=ambProcessUASsim::ambProcessUASsim.Function::FunctionHandler --runtime dotnetcore3.1 --role somerole --environment "Variables={Runtime=Debug,S3Port=4566}"`
This succeeds.
3) Create S3 bucket (if it does not already exist):
`awslocal s3 mb s3://demo-bucket`
This succeeds.
4) Add a permission via:
`awslocal lambda add-permission --function-name ambProcessUASsim --action "lambda:InvokeFunction" --principal s3.amazonaws.com --source-arn arn:aws:s3:::demo-bucket --statement-id 1`
This fails with:
An error occurred (500) when calling the AddPermission operation (reached max retries: 2):
!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"
500 Internal Server Error
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
## Command used to start LocalStack
docker-compose.yml:
version: '3.7'
services:
localstack-s3:
image: localstack/localstack-full:latest
container_name: localstack
environment:
- SERVICES=s3,dynamodb,lambda,sqs
- DEBUG=1
- DEFAULT_REGION=us-east-1
- DATA_DIR=/tmp/localstack/data
- HOST_TMP_FOLDER=${TMPDIR}
- LAMBDA_EXECUTOR=docker-reuse
- LAMBDA_REMOTE_DOCKER=false
- DOCKER_HOST=unix:///var/run/docker.sock
- START_WEB=1
- PORT_WEB_UI=8080
ports:
- "4563-4599:4563-4599"
- "8080:8080"
volumes:
- type: bind
source: /d/tmp/localstack
target: /tmp/localstack
- "/var/run/docker.sock:/var/run/docker.sock"
- "sqs:/tmp/localstack/sqs"
networks:
- localstack_default
volumes:
sqs: null
localstack-data:
name: localstack-data
networks:
localstack_default:
name: localstack_default
driver: bridge
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
The code is C# using the basic Visual Studio 2019 lambda template code for invoking from S3.
The only meaningful change is to create a local S3Client from environment variables.
```
var ddbconfig = new AmazonS3Config();
var ddbport = Environment.GetEnvironmentVariable("S3Port") ?? "4566";
var hostname = Environment.GetEnvironmentVariable("LOCALSTACK_HOSTNAME") ?? "http://localhost:";
ddbconfig.ServiceURL = string.Format("http://{0}:{1}", hostname, ddbport);
S3Client = new AmazonS3Client(ddbconfig);
```
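As a side note, the fallback in the snippet above (`"http://localhost:"`) would get double-prefixed by the `string.Format` call; the fallback should be a bare hostname. A small Python sketch of the same endpoint construction (illustrative names):

```python
import os

def localstack_endpoint(port="4566"):
    # Fall back to a bare hostname -- the scheme is added below, so a
    # fallback like "http://localhost:" would produce a malformed URL.
    hostname = os.environ.get("LOCALSTACK_HOSTNAME", "localhost")
    return "http://{0}:{1}".format(hostname, port)

os.environ["LOCALSTACK_HOSTNAME"] = "e06d84562302"  # as seen inside the container
print(localstack_endpoint())  # http://e06d84562302:4566
```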
| https://github.com/localstack/localstack/issues/3040 | https://github.com/localstack/localstack/pull/3149 | ca85774a9a7f6d269080503567ff127b97fed3b1 | 819d90f6e7376d3cd3dcf59128854281ce2d70c1 | "2020-09-28T23:39:50Z" | python | "2020-10-26T21:34:56Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 3,009 | ["localstack/services/infra.py", "localstack/services/ses/ses_starter.py", "tests/integration/test_ses.py"] | SES Java SDKv2 TemplatesMetadata CreatedTimestamp fails to parse |
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
SES template list returns a different format for CreatedTimestamp compared with the Java SDK (2.14.19)
```bash
software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Text '2020-09-19 00:53:47.289893' could not be parsed at index 10
```
It is also happening in `0.11.4` and `0.11.5`.
## Expected behavior
Returned CreatedTimestamp is one of the formats the sdk expects
## Actual behavior
Returned CreatedTimestamp is not well parsed. Fractional seconds might be needed
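To make the mismatch concrete: the value comes back as a naive, space-separated timestamp, while the Java SDK parses ISO-8601 instants. A sketch of the conversion (the target format here is an assumption):

```python
from datetime import datetime

returned = "2020-09-19 00:53:47.289893"  # what LocalStack returns
parsed = datetime.strptime(returned, "%Y-%m-%d %H:%M:%S.%f")
iso_instant = parsed.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
print(iso_instant)  # 2020-09-19T00:53:47.289893Z
```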
# Steps to reproduce
## Command used to start LocalStack
`docker run --rm -p 4566:4566 -e SERVICES=ses -e DEFAULT_REGION=us-east-1 localstack/localstack:latest`
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
command
`aws --profile localstack --endpoint-url=http://localhost:4566 ses create-template --cli-auto-prompt`
input
`--template: TemplateName=hell-world,SubjectPart=subject,TextPart=hello\nworld,HtmlPart=hello<br/>world`
```kotlin
fun listTemplates(): List<String> {
val listTemplateRequest = ListTemplatesRequest
.builder()
.build()
return this.syncClient.listTemplates(listTemplateRequest).templatesMetadata().map { list -> list.name() }
}
```
┆Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-268) by [Unito](https://www.unito.io/learn-more)
| https://github.com/localstack/localstack/issues/3009 | https://github.com/localstack/localstack/pull/3457 | f4e553168171e816615523df3d32569fe2f0ab50 | bbfa32635650fdfb74ec2afb80e403d5c77af24e | "2020-09-19T01:46:01Z" | python | "2021-01-19T18:49:44Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,819 | ["localstack/services/awslambda/lambda_executors.py"] | Latest lambci/docker-lambda image has removed hardcoded default credentials | # Type of request: This is a ...
[*] bug report
[ ] feature request
# Detailed description
Perhaps not a bug report per se, but this affected me at least. A new version of `lambci/docker-lambda` was pushed recently. [This commit](https://github.com/lambci/docker-lambda/commit/7ece2742a5b0e84eb09d6ed659f123474e619b27) removed hardcoded defaults for `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
I have a lambda that reads a value from a secret in `secretsmanager`. If I don't set dummy values for `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` when creating the lambda, invoking it will time out reading the secret.
## Expected behavior
A lambda function can read a secret from `secretsmanager`.
## Actual behavior
Lambda times out trying to read secret.
# Steps to reproduce
Create a lambda which tries to read a secret from `secretsmanager` without adding the environments variables mentioned above.
## Command used to start LocalStack
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
This will work:
```bash
awslocal lambda create-function \
--function-name foo \
--runtime go1.x \
--handler main \
--role arn:aws:iam::123456790:role/ignore \
--zip-file fileb://foo.zip \
--environment "Variables={FOO=bar,AWS_ACCOUNT_ID=123456790,AWS_ACCESS_KEY_ID=0,AWS_SECRET_ACCESS_KEY=0}"
```
This will time out while reading:
```bash
awslocal lambda create-function \
--function-name foo \
--runtime go1.x \
--handler main \
--role arn:aws:iam::123456790:role/ignore \
--zip-file fileb://foo.zip \
--environment "Variables={FOO=bar}"
``` | https://github.com/localstack/localstack/issues/2819 | https://github.com/localstack/localstack/pull/2829 | 6758d8d673b5071d678585c356bf47e663ee6170 | 1ad3fea2d58e9ffc1dbb4dfb2ec788e5fae8a39b | "2020-08-07T13:14:29Z" | python | "2020-08-09T19:57:02Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,777 | ["localstack/utils/bootstrap.py"] | Triple access log output |
# Type of request: This is a ...
[X] bug report
[ ] feature request
# Detailed description
Running localstack in a Docker (v19.03.8) container on macOS 10.15.6, some log output lines appear in triplicate. It appears that the messages, such as access log entries, are generated by Flask. The lines only appear if **DEBUG** is set in the environment; if **DEBUG** is not set, no access log messages appear, but if **DEBUG** is set, they appear three times each.
This is certainly not keeping me from getting things done -- it's just a little confusing when troubleshooting calls into localstack. Thank you in advance for this amazing project and any help you can offer!
## Expected behavior
One line of log output per event, such as an access log line.
## Actual behavior
Three identical copies of each log line are generated.
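For illustration only (this is not LocalStack's actual logging setup), a Python logger that ends up with the same handler attached multiple times reproduces this symptom:

```python
import io
import logging

def emit_with_handlers(num_handlers):
    stream = io.StringIO()
    logger = logging.getLogger("access-log-demo-%d" % num_handlers)
    logger.setLevel(logging.INFO)
    logger.propagate = False
    # Attaching N handlers to the same stream duplicates every record N times.
    logger.handlers = [logging.StreamHandler(stream) for _ in range(num_handlers)]
    logger.info('127.0.0.1 - - "PUT /stuff HTTP/1.1" 200 -')
    return stream.getvalue().splitlines()

print(len(emit_with_handlers(3)))  # 3 -- one log call, three identical lines
print(len(emit_with_handlers(1)))  # 1 -- attach the handler once and the duplicates disappear
```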
# Steps to reproduce
## Command used to start LocalStack
docker run -it -p 4566:4566 -e DEBUG=1 -e SERVICES=s3 localstack/localstack
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal s3 mb s3://stuff
## Console output
...
Running on 0.0.0.0:4566 over https (CTRL + C to quit)
2020-07-27T21:09:38:INFO:localstack.multiserver: Starting multi API server process on port 40641
Running on 0.0.0.0:4572 over http (CTRL + C to quit)
Running on 0.0.0.0:40641 over http (CTRL + C to quit)
2020-07-27 21:09:39,891:API: * Running on http://0.0.0.0:46165/ (Press CTRL+C to quit)
2020-07-27 21:09:39,891:API: * Running on http://0.0.0.0:46165/ (Press CTRL+C to quit)
2020-07-27 21:09:39,891:API: * Running on http://0.0.0.0:46165/ (Press CTRL+C to quit)
...
2020-07-27 21:09:48,942:API: 127.0.0.1 - - [27/Jul/2020 21:09:48] "PUT /stuff HTTP/1.1" 200 -
2020-07-27 21:09:48,942:API: 127.0.0.1 - - [27/Jul/2020 21:09:48] "PUT /stuff HTTP/1.1" 200 -
2020-07-27 21:09:48,942:API: 127.0.0.1 - - [27/Jul/2020 21:09:48] "PUT /stuff HTTP/1.1" 200 - | https://github.com/localstack/localstack/issues/2777 | https://github.com/localstack/localstack/pull/2821 | e774835386895ef3d840401d089235b1969cb1a8 | ad52baa86bc48cb8d9822456798caa273d1edf92 | "2020-07-27T21:19:36Z" | python | "2020-08-08T15:57:33Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,745 | ["localstack/services/dynamodb/dynamodb_listener.py", "tests/integration/test_dynamodb.py"] | Dynamodb returns 500 for TransactWriteItems with ConditionCheck |
# Type of request: This is a ...
- [x] bug report
# Detailed description
The DynamoDB service for some reason returns 500 with an empty-json-object body when sending
`TransactWriteItems` with `ConditionCheck` and some other write operation.
I have verified this behaviour with a single `ConditionCheck` in `TransactItems` and it works, but with any of the other `Put`, `Update`, `Delete` operations in the transaction it fails...
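For reference, the failing combination as plain request data — a `ConditionCheck` on one key plus a `Put` of another item (values mirror the reproduction snippets; no client call is made here):

```python
transact_items = [
    {
        "ConditionCheck": {
            "TableName": "localstack-global",
            "ConditionExpression": "attribute_not_exists(partition_key)",
            "Key": {"partition_key": {"S": "foo"}},
        }
    },
    {
        "Put": {
            "TableName": "localstack-global",
            "Item": {"partition_key": {"S": "bar"}},
        }
    },
]

# Each transaction entry names exactly one operation.
print([next(iter(item)) for item in transact_items])  # ['ConditionCheck', 'Put']
```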
## Expected behavior
It should not return an error, but it does.
I checked against dynamodb local with:
```
docker run --rm -p 8000:8000 amazon/dynamodb-local
```
And it works, but localstack's dynamodb doesnt...
## Actual behavior
It returns an unhelpful 500 response with an empty JSON object body (see the Rust SDK output)
# Steps to reproduce
Start localstack, wait until it is up and running and run any of JavaScript or Rust snippets bellow.
## Command used to start LocalStack
```
localstack start
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
JavaScript SDK gives little information about the underlying error, but it is less code:
<details>
<summary>JavaScript SDK</summary>
```ts
const aws = require("aws-sdk");

let tableName = "localstack-global";

const ddb = new aws.DynamoDB({
    region: "localstack",
    endpoint: "http://localhost:4566",
});

async function main() {
    await ddb.createTable({
        TableName: tableName,
        AttributeDefinitions: [{
            AttributeName: "partition_key",
            AttributeType: "S",
        }],
        KeySchema: [{
            AttributeName: "partition_key",
            KeyType: "HASH",
        }],
        BillingMode: "PAY_PER_REQUEST",
    }).promise();

    console.log("Table", tableName, "is created");

    const req = ddb.transactWriteItems({
        TransactItems: [
            {
                ConditionCheck: {
                    TableName: tableName,
                    ConditionExpression: "attribute_not_exists(partition_key)",
                    Key: {
                        partition_key: {
                            S: "foo",
                        },
                    },
                },
            },
            {
                Put: {
                    Item: {
                        partition_key: {
                            S: "bar",
                        },
                    },
                    TableName: tableName,
                },
            },
        ],
    });

    console.time("transact");
    await req.promise().then(
        result => console.log("Ok", result),
        err => console.error("Err", err),
    );
    console.timeEnd("transact");

    const result = await ddb.scan({ TableName: tableName });
    console.log("Items:", (await result.promise()).Items);
}

main().then(() => console.log("Done"), err => console.error("Whoops:", err));
```
For some reason it takes a lot of time for it to error out, here is the output:
```bash
~/junk/ts-sandbox $ node index.js
Table localstack-global-6 is created
Err Error [UnknownError]: null
at Request.extractError (/home/veetaha/junk/ts-sandbox/node_modules/aws-sdk/lib/protocol/json.js:51:27)
at Request.callListeners (/home/veetaha/junk/ts-sandbox/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/home/veetaha/junk/ts-sandbox/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/home/veetaha/junk/ts-sandbox/node_modules/aws-sdk/lib/request.js:688:14)
at Request.transition (/home/veetaha/junk/ts-sandbox/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/home/veetaha/junk/ts-sandbox/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /home/veetaha/junk/ts-sandbox/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/home/veetaha/junk/ts-sandbox/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/home/veetaha/junk/ts-sandbox/node_modules/aws-sdk/lib/request.js:690:12)
at Request.callListeners (/home/veetaha/junk/ts-sandbox/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
code: 'UnknownError',
time: 2020-07-18T14:41:19.788Z,
requestId: undefined,
statusCode: 500,
retryable: true
}
transact: 40.678s
Items: [ { partition_key: { S: 'bar' } } ]
Done
```
</details>
Rust SDK gives more info about the failed request and also doesn't have that weird 40s latency (it is instant)
<details>
<summary>Rust SDK</summary>
```rust
#![allow(unused)]

use dynamodb::DynamoDb;
use dynomite::{dynamodb, attr_map};
use rusoto_core::RusotoResult;

#[tokio::main]
async fn main() {
    let table_name = "localstack-global".to_owned();

    let client = dynamodb::DynamoDbClient::new(rusoto_core::Region::Custom {
        name: "localstack".to_owned(),
        endpoint: "http://localhost:4566".to_owned()
    });

    client.create_table(dynamodb::CreateTableInput {
        attribute_definitions: vec![
            dynamodb::AttributeDefinition {
                attribute_name: "partition_key".to_owned(),
                attribute_type: "S".to_owned(),
            },
        ],
        key_schema: vec![
            dynamodb::KeySchemaElement {
                attribute_name: "partition_key".to_owned(),
                key_type: "HASH".to_owned(),
            },
        ],
        table_name: table_name.clone(),
        billing_mode: Some("PAY_PER_REQUEST".to_owned()),
        ..Default::default()
    }).await.unwrap();

    let result = client.transact_write_items(dynamodb::TransactWriteItemsInput {
        transact_items: vec![
            dynamodb::TransactWriteItem {
                condition_check: Some(dynamodb::ConditionCheck {
                    table_name: table_name.clone(),
                    condition_expression: "attribute_not_exists(partition_key)".to_owned(),
                    key: attr_map! {
                        "partition_key" => "foo".to_owned(),
                    },
                    ..Default::default()
                }),
                ..Default::default()
            },
            dynamodb::TransactWriteItem {
                put: Some(dynamodb::Put {
                    item: attr_map! {
                        "partition_key" => "bar".to_owned()
                    },
                    table_name: table_name.clone(),
                    ..Default::default()
                }),
                ..Default::default()
            },
        ],
        ..Default::default()
    }).await;

    dbg!(result);
}
```
Output:
```bash
~/junk/rust-sandbox $ cargo run
Compiling crate_foo v0.1.0 (/home/veetaha/junk/rust-sandbox/crate_foo)
Finished dev [unoptimized + debuginfo] target(s) in 4.20s
Running `target/debug/crate_foo`
[crate_foo/src/main.rs:57] result = Err(
Unknown(
BufferedHttpResponse {status: 500, body: "{}", headers: {"content-type": "text/html; charset=utf-8", "content-length": "2", "access-control-allow-origin": "*", "access-control-allow-methods": "HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH", "access-control-allow-headers": "authorization,content-type,content-md5,cache-control,x-amz-content-sha256,x-amz-date,x-amz-security-token,x-amz-user-agent,x-amz-target,x-amz-acl,x-amz-version-id,x-localstack-target,x-amz-tagging", "access-control-expose-headers": "x-amz-version-id", "connection": "close", "date": "Sat, 18 Jul 2020 14:46:04 GMT", "server": "hypercorn-h11"} },
),
)
```
</details> | https://github.com/localstack/localstack/issues/2745 | https://github.com/localstack/localstack/pull/2805 | 493c7259df0527c21f3a867cfd0961d6c426277b | 9a53b446515b08d930faf47846d0805df8aa9445 | "2020-07-18T14:17:05Z" | python | "2020-08-03T19:36:38Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,742 | ["README.md", "localstack/services/plugins.py"] | When enabling SSL (USE_SSL), /health return {} |
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
I start localstack with USE_SSL=true, and when I try to get the services status from https://localhost:4566/health, I get {};
same with http://localhost:4566/health
...
## Expected behavior
I expect to see a JSON listing of my services with their status
...
## Actual behavior
curl over http or https to /health returns {}
...
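A small sketch of the kind of readiness check the compose healthcheck below relies on; the empty `{}` reported here fails it. The service names and the `"running"` status string are illustrative examples:

```python
import json

def all_running(health_body: str, required: set) -> bool:
    # /health normally returns {"services": {"<name>": "running", ...}}
    services = json.loads(health_body).get("services", {})
    return all(services.get(name) == "running" for name in required)

print(all_running('{"services": {"kinesis": "running", "dynamodb": "running"}}', {"kinesis", "dynamodb"}))  # True
print(all_running('{}', {"kinesis"}))  # False (the empty response reported in this issue)
```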
# Steps to reproduce
## Command used to start LocalStack
we use docker-compose:

```yaml
localstack:
  image: localstack/localstack-light
  container_name: localstack
  environment:
    HOSTNAME: localhost
    SERVICES: ${LOCALSTACK_SERVICES:-kinesis,cloudwatch,dynamodb}
    KINESIS_ERROR_PROBABILITY: ${KINESIS_ERROR_PROBABILITY:- }
    LAMBDA_REMOTE_DOCKER: 'false'
    AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID:-dev}
    AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY:-dev}
    DEFAULT_REGION: us-east-1
    AWS_CBOR_DISABLE: 'true'
    CBOR_ENABLED: 'true'
    USE_SSL: 'true'
  healthcheck:
    test: ["CMD", "curl", "-qf", "http://localhost:4566/health?reload"]
    interval: 30s
    timeout: 1s
    retries: 10
  networks:
    backend:
      aliases:
        - localstack
  ports:
    - 4566:4566
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
    - type: bind
      source: ./localstack/initaws.d
      target: /docker-entrypoint-initaws.d
      read_only: true
  tmpfs:
    - /tmp/localstack:exec,mode=600
```
I tried it with localstack/localstack image, localstack/localstack-light, and even a fresh image from a git clone of the repo.
...
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
...
| https://github.com/localstack/localstack/issues/2742 | https://github.com/localstack/localstack/pull/2744 | 0d67aad3990ba7a4c82cf2f6ff570018fcfcd3c7 | 9993989188007c8c555eead9abd02dda76834a94 | "2020-07-17T22:53:58Z" | python | "2020-07-18T10:56:40Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,735 | ["localstack/services/events/events_starter.py", "tests/integration/test_events.py"] | Regression between 0.11.2 & 0.11.3 causes events in event bridge to not serialize correctly |
# Type of request: This is a ...
[X] bug report
[ ] feature request
# Detailed description
Running docker-compose with `latest` or `0.11.3` causes messages to come out wrong. Going back to `0.11.2` they work fine.
Executing the lambda directly, the serialization happens correctly, so it's an issue somewhere in the event bridge.
## Expected behavior
Messages published to an event bus which are ruled to a Lambda should deserialize to the correct information.
## Actual behavior
The information is incorrect.
In 0.11.2 I get:
```
localstack_main | START RequestId: d4e76268-5617-169a-e9f1-0b0f76fe30e9 Version: $LATEST
localstack_main | Executing Lambda::BeginQCHandler | FileId: 957481
localstack_main | It's wrong: Exception
```
In 0.11.3 I get:
```
localstack_main | START RequestId: daa063db-42e8-1b69-3c64-34ced4eaecaf Version: $LATEST
localstack_main | Executing Lambda::BeginQCHandler | FileId: 0
localstack_main | It's wrong: Exception
```
The `FileId` is `0` when it should be the value from the event.
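One way such a regression can surface (a hypothetical illustration, not a confirmed diagnosis of this bug): if the bus forwards `Detail` as the raw JSON string instead of a parsed object, a typed deserializer never sees `fileId` and falls back to the type's default, which for an `int` is `0`:

```python
import json

detail_str = '{"fileId":957481}'  # Detail as supplied to put-events below

delivered_ok = {"detail": json.loads(detail_str)}  # detail parsed into an object
delivered_bad = {"detail": detail_str}             # detail forwarded verbatim as a string

def extract_file_id(event):
    detail = event["detail"]
    # A typed binder that finds no matching object falls back to the default value
    return detail["fileId"] if isinstance(detail, dict) else 0

print(extract_file_id(delivered_ok))   # 957481
print(extract_file_id(delivered_bad))  # 0, the symptom reported above
```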
# Steps to reproduce
## Command used to start LocalStack
`sudo docker-compose up` using the compose file from the repository.
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
Setup aws stuff:
```
awslocal iam create-role --role-name lambda-role --assume-role-policy-document file://testrole.json
dotnet tool run dotnet-lambda package -pl ./src/TestLocalStack.Workflows.MediaServices/ -o ./output/QCWorkflowBeginQCHandler.zip -c Release -f netcoreapp3.1 --msbuild-parameters "--self-contained true"
awslocal lambda create-function --function-name workflow-begin-qc --runtime dotnetcore3.1 --memory-size 256 --timeout 60 --zip-file fileb://output/QCWorkflowBeginQCHandler.zip --handler TestLocalStack.Workflows.MediaServices::TestLocalStack.Workflows.MediaServices.QCWorkflowHandler::BeginQCHandler --role arn:aws:iam::000000000000:role/lambda-role
awslocal events create-event-bus --name test-event-bus
awslocal events put-rule --name request-qc-event-rule --event-bus-name test-event-bus --event-pattern "{\"source\":[\"test.events\"],\"detail-type\":[\"QCRequestEvent\"]}"
awslocal events put-targets --rule request-qc-event-rule --event-bus-name test-event-bus --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:000000000000:function:workflow-begin-qc"
awslocal lambda add-permission --function-name workflow-begin-qc --statement-id request-qc-event-rule-statement --action 'lambda:InvokeFunction' --principal events.amazonaws.com --source-arn arn:aws:events:us-east-1:000000000000:rule/request-qc-event-rule
```
Lambda Code
```csharp
public class QCWorkflowHandler
{
    public QCWorkflowHandler()
    {
    }

    public async Task<RequestQCResponse> BeginQCHandler(QCRequestEvent request, ILambdaContext context)
    {
        context.Logger.LogLine($"Executing Lambda::BeginQCHandler | FileId: {request.FileId}");
        throw new Exception("It's wrong");
        return new RequestQCResponse
        {
            Message = $"The requested FileId = {request.FileId}"
        };
    }
}

public class RequestQCResponse
{
    public string MessageId { get; set; } = null!;
    public string Message { get; set; } = null!;
}

public class QCRequestEvent
{
    public int FileId { get; set; }
}
```
Publishing the event:
```
awslocal events put-events --entries file://testevent.json
```
testevent.json
```json
[
  {
    "Source": "test.events",
    "DetailType": "QCRequestEvent",
    "Detail": "{\"fileId\":957481}",
    "EventBusName": "test-event-bus"
  }
]
``` | https://github.com/localstack/localstack/issues/2735 | https://github.com/localstack/localstack/pull/2870 | 07ad751420e00dc75e1a74afea83f80075ef684c | 447668b7ee8eaea678a946920bd1420f882cc997 | "2020-07-17T08:05:48Z" | python | "2020-08-25T19:11:12Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,714 | ["Makefile", "localstack/utils/aws/aws_stack.py", "localstack/utils/testutil.py", "tests/integration/test_lambda.py", "tests/unit/test_message_transformation.py", "tests/unit/test_templating.py"] | Appsync - VTL - String Concatenation not supported |
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
A VTL request template containing an expression that evaluates to a string concatenation does not work.
### Query
```
mutation createShop {
  createShop(shop: { shopName: "Apple" } ) {
    shopName
  }
}
```
### VTL Template (**works when deployed to AppSync on AWS**)
```
{
    "version" : "2018-05-29",
    "operation" : "PutItem",
    "key" : {
        "PK": $util.dynamodb.toDynamoDBJson("SHOP#${ctx.args.shop.shopName}"),
        "SK": $util.dynamodb.toDynamoDBJson("PROFILE#$ctx.args.shop.shopName")
    },
    "attributeValues" : { "shopName": { "S": "Apple" }}
}
```
## Expected behavior
### Item stored in dynamodb
```
{
  "PK": {
    "S": "SHOP#Apple"
  },
  "SK": {
    "S": "PROFILE#Apple"
  },
  "shopName": {
    "S": "Apple"
  }
}
```
## Actual behavior
### Item stored in dynamodb
```
{
  "PK": {
    "S": "SHOP#${ctx.args.shop.shopName}"
  },
  "SK": {
    "S": "PROFILE#$ctx.args.shop.shopName"
  },
  "shopName": {
    "S": "Apple"
  }
}
```
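For intuition, the interpolation AWS performs inside quoted VTL strings can be emulated with a small substitution pass. This is an illustrative sketch only, not the real Velocity engine; it handles both the `${...}` and bare `$...` forms used in the template above:

```python
import re

def interpolate(template: str, ctx: dict) -> str:
    # Resolve a dotted path like ctx.args.shop.shopName against a nested dict
    def resolve(path: str) -> str:
        node = ctx
        for part in path.split("."):
            node = node[part]
        return str(node)

    # First the braced form ${path}, then bare $path references
    out = re.sub(r"\$\{([\w.]+)\}", lambda m: resolve(m.group(1)), template)
    out = re.sub(r"\$([\w.]+)", lambda m: resolve(m.group(1)), out)
    return out

ctx = {"ctx": {"args": {"shop": {"shopName": "Apple"}}}}
print(interpolate("SHOP#${ctx.args.shop.shopName}", ctx))   # SHOP#Apple
print(interpolate("PROFILE#$ctx.args.shop.shopName", ctx))  # PROFILE#Apple
```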
# Steps to reproduce
## Command used to start LocalStack
docker-compose
````
version: '2.1'
services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
    image: localstack/localstack
    ports:
      - "4566-4700:4566-4700"
      - "443:443"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=serverless,cloudformation,dynamodb,iam,s3,appsync,edge
      - DEBUG=1
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
      - HOST_TMP_FOLDER=${TMPDIR}
      - LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY}
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
````
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
...
| https://github.com/localstack/localstack/issues/2714 | https://github.com/localstack/localstack/pull/2716 | 0639d9632e794cc45bd6c7935b3afd77db444f25 | ca1a43a0b2735b95d259e580c39b0cab558feea6 | "2020-07-12T08:12:23Z" | python | "2020-07-12T20:33:16Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,676 | ["localstack/utils/bootstrap.py", "localstack/utils/server/http2_server.py", "tests/unit/test_common.py"] | [Bug Report] Dashboard Pro does not show resources | ## Detailed description
The pro version of the dashboard always shows that no resources were found. Looking at the requests, I see that it does a POST request to 0.0.0.0:4566/graph, but without parameters.
# Normal Dashboard

# Pro Dashboard

# Steps to reproduce
## Command used to start LocalStack
docker run -it -e LOCALSTACK_HOSTNAME="localhost" -e DEFAULT_REGION="us-east-1" -e TEST_AWS_ACCOUNT_ID="000000000000" -p 4566:4566 --rm --privileged --name localstack_main -p 4567-4620:4567-4620 -p 12121:12121 -p 8080-8081:8080-8081 -v "/tmp/localstack:/tmp/localstack" -v "/var/run/docker.sock:/var/run/docker.sock" -e DOCKER_HOST="unix:///var/run/docker.sock" -e HOST_TMP_FOLDER="/tmp/localstack" "localstack/localstack" -e SERVICES="edge,dynamodb,cognito"
| https://github.com/localstack/localstack/issues/2676 | https://github.com/localstack/localstack/pull/2746 | 9993989188007c8c555eead9abd02dda76834a94 | 7fe5c9a669918619721dfcc4858d6b8acc2e2319 | "2020-07-05T19:22:20Z" | python | "2020-07-18T16:56:26Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,640 | ["localstack/services/sqs/sqs_listener.py", "localstack/utils/aws/aws_responses.py"] | SqsClient fails with Crc32MismatchException | # Type of request: This is a ...
[X] bug report
[ ] feature request
# Detailed description
When using the AWS Java SDK v2 client to access e.g. SQS in localstack, the client throws a `software.amazon.awssdk.core.exception.Crc32MismatchException: Expected 3405306242 as the Crc32 checksum but the actual calculated checksum was 4230070430`.
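For context, the SDK validates the `x-amz-crc32` response header against a CRC32 it computes over the raw response body, so any proxy that rewrites the body without recomputing the header triggers exactly this exception. The checksum itself is plain CRC32 (sketch; the sample body is illustrative):

```python
import zlib

body = b'{"QueueUrl": "http://localhost:4566/000000000000/init-queue"}'

def crc32_header(payload: bytes) -> str:
    # Decimal string value the x-amz-crc32 header must carry for this payload
    return str(zlib.crc32(payload) & 0xFFFFFFFF)

original = crc32_header(body)
tampered = crc32_header(body.replace(b"4566", b"4576"))  # simulate a body rewrite
print(original != tampered)  # True: any body rewrite invalidates the header
```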
## Expected behavior
The V2 client should work in the same circumstances that the V1 client does.
## Actual behavior
Exception is thrown:
```
software.amazon.awssdk.core.exception.Crc32MismatchException: Expected 3934920683 as the Crc32 checksum but the actual calculated checksum was 3696428791
at software.amazon.awssdk.core.exception.Crc32MismatchException$BuilderImpl.build(Crc32MismatchException.java:88)
at software.amazon.awssdk.core.internal.util.Crc32ChecksumValidatingInputStream.validateChecksum(Crc32ChecksumValidatingInputStream.java:62)
at software.amazon.awssdk.core.internal.util.Crc32ChecksumValidatingInputStream.close(Crc32ChecksumValidatingInputStream.java:50)
at java.io.FilterInputStream.close(FilterInputStream.java:181)
at software.amazon.awssdk.protocols.query.internal.unmarshall.AwsQueryResponseHandler.lambda$handle$1(AwsQueryResponseHandler.java:63)
at java.util.Optional.ifPresent(Optional.java:159)
at software.amazon.awssdk.protocols.query.internal.unmarshall.AwsQueryResponseHandler.handle(AwsQueryResponseHandler.java:61)
at software.amazon.awssdk.protocols.query.internal.unmarshall.AwsQueryResponseHandler.handle(AwsQueryResponseHandler.java:41)
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler$Crc32ValidationResponseHandler.handle(AwsSyncClientHandler.java:94)
at software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$resultTransformationResponseHandler$5(BaseClientHandler.java:231)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleSuccessResponse(CombinedResponseHandler.java:97)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleResponse(CombinedResponseHandler.java:72)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:59)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:30)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:73)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:77)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:39)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:64)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:34)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:189)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:121)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:147)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:101)
at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:55)
at software.amazon.awssdk.services.sqs.DefaultSqsClient.getQueueUrl(DefaultSqsClient.java:752)
at software.amazon.awssdk.services.sqs.SqsClient.getQueueUrl(SqsClient.java:1299)
```
# Steps to reproduce
```java
AmazonSQS sqs = AmazonSQSClientBuilder
        .standard()
        .withEndpointConfiguration(container.getEndpointConfiguration(SQS))
        .withCredentials(container.getDefaultCredentialsProvider())
        .build();

sqs.createQueue(INIT_QUEUE_NAME);

// renamed from `sqs` to avoid redeclaring the v1 variable
SqsClient sqsV2 = SqsClient.builder()
        .endpointOverride(container.getEndpointOverride(service))
        .credentialsProvider(
                StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(
                                container.getAccessKey(),
                                container.getSecretKey()
                        )
                )
        )
        .region(Region.of(container.getRegion()))
        .build();

assertThat(sqsV2.getQueueUrl(b -> b.queueName(INIT_QUEUE_NAME)).queueUrl()).isNotBlank();
```
## Command used to start LocalStack
```
new LocalStackContainer("0.11.3")
        .withNetwork(Network.SHARED)
        .withNetworkAliases("localstack")
        .withEnv("HOSTNAME_EXTERNAL", "localhost")
        .withServices(S3, SQS, SNS)
        .withEnv("DEBUG", "1")
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
See "steps to reproduce".
| https://github.com/localstack/localstack/issues/2640 | https://github.com/localstack/localstack/pull/2949 | 2685fe48a8d13de5fe20531e44fb2c10ffe079d7 | 4766b6625fdb5e52b192a6ac98e4975731e2476b | "2020-06-29T09:01:53Z" | python | "2020-09-14T08:15:35Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,626 | ["localstack/services/awslambda/lambda_executors.py", "localstack/services/edge.py", "localstack/services/sqs/sqs_starter.py", "localstack/utils/common.py", "localstack/utils/server/http2_server.py", "tests/integration/test_sqs.py"] | Small deviation in SQS implementation |
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
In the `ReceiveMessageResult` of the SQS service, I saw this XML from localstack:
```
...
<MessageAttribute>
<Name>encoding</Name>
<Value>
<DataType>String</DataType>
<StringValue><![CDATA[gzip]]></StringValue>
</Value>
</MessageAttribute>
...
```
The `<![CDATA[gzip]]>` is slightly different from AWS, which doesn't wrap the value in a `<![CDATA[...]]>` section. This shouldn't be a problem for XML parsers that accept both plain character data and CDATA sections for this element, but the XML parser I was using did not, which causes an error when running the code against localstack but not when running the code against AWS. There is obviously a bug in that XML parser as well, but I thought I should at least post this here, since the behaviour differs from what AWS SQS does.
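A quick check with Python's standard XML parser shows why compliant parsers should treat the two forms identically; a CDATA section is just an alternative encoding of the same character data:

```python
import xml.etree.ElementTree as ET

plain = "<Value><DataType>String</DataType><StringValue>gzip</StringValue></Value>"
cdata = "<Value><DataType>String</DataType><StringValue><![CDATA[gzip]]></StringValue></Value>"

# Both parse to the same text content
a = ET.fromstring(plain).find("StringValue").text
b = ET.fromstring(cdata).find("StringValue").text
print(a, b)  # gzip gzip
```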
## Expected behavior
MessageAttribute::Value elements should not be wrapped in `CDATA` sections.
## Actual behavior
MessageAttribute::Value elements all seem to be wrapped in `CDATA` sections.
# Steps to reproduce
Send a message to the SQS service which contains some custom message attributes, then make a ReceiveMessageRequest.
## Command used to start LocalStack
`docker run --env "SERVICES=sqs" localstack/localstack`
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
...
| https://github.com/localstack/localstack/issues/2626 | https://github.com/localstack/localstack/pull/2648 | 19208b11902d97b307f6e9ae19930781a322742a | a0664b085d55a78284be6fc1db0d4d908e892920 | "2020-06-26T16:05:09Z" | python | "2020-06-30T09:33:53Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,617 | ["localstack/services/awslambda/lambda_executors.py", "localstack/services/cloudformation/cloudformation_starter.py"] | Error Creating Function with DynamoStream Event | # Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
Using Serverless and the localstack/localstack:latest Docker image (released 12 hours before this issue was created, according to Docker Hub), a previously working CloudFormation stack that creates a DynamoDB table with a DynamoDB Stream feeding a Lambda function's event source fails to deploy.
...
## Expected behavior
It should deploy the stack to localstack correctly.
...
## Actual behavior
It does not deploy the stack correctly; Serverless raises the error 'Invalid EventSourceArn'. This goes away if I pin my Docker image to version 0.11.2, after which the stack is deployed correctly.
...
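For context, 'Invalid EventSourceArn' typically means the ARN handed to the Lambda event source mapping is not a DynamoDB *stream* ARN (a table ARN with a `/stream/<timestamp>` suffix). A sketch of the expected shape; the regex and sample values are approximations for illustration:

```python
import re

# Approximate shape of a DynamoDB Streams ARN (the EventSourceArn being validated)
STREAM_ARN = re.compile(r"^arn:aws:dynamodb:[a-z0-9-]+:\d{12}:table/[^/]+/stream/.+$")

good = "arn:aws:dynamodb:us-east-1:000000000000:table/MyTable/stream/2020-06-25T00:00:00.000"
bad = "arn:aws:dynamodb:us-east-1:000000000000:table/MyTable"  # table ARN, no /stream suffix

print(bool(STREAM_ARN.match(good)))  # True
print(bool(STREAM_ARN.match(bad)))   # False: the kind of value "Invalid EventSourceArn" flags
```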
# Steps to reproduce
We use docker compose to run localstack. Our docker-compose file looks like:
```
version: '2.1'
services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
    image: localstack/localstack
    ports:
      - "4566-4599:4566-4599"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=${SERVICES- }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
      - HOST_TMP_FOLDER=${TMPDIR}
      - LAMBDA_REMOTE_DOCKER=${LAMBDA_REMOTE_DOCKER-}
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
```
We use a script that calls `docker-compose up -d`
We then call a standard `serverless deploy` command to deploy our stacks into localstack to create the application. Everything worked perfectly until this morning when one of my colleagues decided to prune his docker images and, therefore pulled the latest image down. Of course, the fact that the failure was caused by localstack wasn't immediately obvious, but this has cost an entire man-day to track down the problem.
Please revert the latest image to 0.11.2 until you have it working.
| https://github.com/localstack/localstack/issues/2617 | https://github.com/localstack/localstack/pull/2637 | 5d89ae7df0f7bb9e09a255d0b0bc4d511d11277f | 598200c7ea153287a92052d6038534b277e56e2d | "2020-06-25T11:37:51Z" | python | "2020-06-28T18:00:59Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,612 | ["localstack/services/edge.py", "localstack/services/generic_proxy.py", "localstack/services/sqs/sqs_listener.py", "localstack/utils/common.py", "tests/integration/test_sqs.py"] | Queue not found |
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
When I run localstack on Jenkins, I get:
```
<ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"> <Error> <Type>Sender</Type> <Code>AWS.SimpleQueueService.NonExistentQueue</Code> <Message>The specified queue does not exist for this wsdl version.</Message> <Detail/> </Error> <RequestId>F31Y6CVCNXCG3WA0U9JHKH734H2YOTJW0087VX8HEC5I716VNVQH</RequestId> </ErrorResponse>
```
for:
```
curl "localstack:4576?Action=GetQueueUrl&QueueName=jobs"
```
Although the queue was created:
```
localstack_1  | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initaws.d/create-sqs-queue.sh
localstack_1  | {
localstack_1  |     "QueueUrl": "http://localstack:4576/000000000000/jobs"
localstack_1  | }
```
This worked for us until a few days ago.
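For completeness, the error code can be pulled out of that namespaced `ErrorResponse` with any XML parser; a sketch using Python's standard library:

```python
import xml.etree.ElementTree as ET

error_xml = (
    '<ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">'
    "<Error><Type>Sender</Type>"
    "<Code>AWS.SimpleQueueService.NonExistentQueue</Code>"
    "<Message>The specified queue does not exist for this wsdl version.</Message>"
    "<Detail/></Error>"
    "<RequestId>F31Y6CVCNXCG3WA0U9JHKH734H2YOTJW0087VX8HEC5I716VNVQH</RequestId>"
    "</ErrorResponse>"
)

# The response elements are namespace-qualified, so find() needs the prefix map
ns = {"sqs": "http://queue.amazonaws.com/doc/2012-11-05/"}
code = ET.fromstring(error_xml).find("sqs:Error/sqs:Code", ns).text
print(code)  # AWS.SimpleQueueService.NonExistentQueue
```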
## Expected behavior
The queue will exist and `GetQueueUrl` will return its URL
## Actual behavior
Queue not found (404)
# Steps to reproduce
```
awslocal sqs create-queue --queue-name jobs;
```
## Command used to start LocalStack
docker
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
```
awslocal sqs create-queue --queue-name jobs;
```
| https://github.com/localstack/localstack/issues/2612 | https://github.com/localstack/localstack/pull/2622 | db388f64eedd78c334c091d73ebd6674016b06a8 | ab4bf6dc1d2e2edf77b368f11e5f69a2daa46a51 | "2020-06-24T15:32:33Z" | python | "2020-06-25T23:51:29Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,585 | ["localstack/services/s3/s3_listener.py"] | S3 list bucket CreationDate fails to parse w/ Java SDK 2 | # Type of request: This is a bug report
# Detailed description
S3 list buckets returns a different format for CreationDate than the one expected by the Java SDK and parsing fails. Using Java SDK 2.13.39. Also verified w/ .40, .38, .30 and .20. Will check previous Docker releases for localstack.
```
Unable to unmarshall response (Text '2020-06-17T15:26:46.032838+00:00' could not be parsed at index 19). Response Code: 200, Response Text:
software.amazon.awssdk.core.exception.SdkClientException: Unable to unmarshall response (Text '2020-06-17T15:26:46.032838+00:00' could not be parsed at index 19). Response Code: 200, Response Text: <truncated...>
```
## Expected behavior
Return CreationDate in same format as the actual S3 API call.
## Actual behavior
Returns CreationDate in slightly different format from the actual S3 API call. Fractional seconds are included.
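The transformation the response would need, sketched in Python. S3 itself emits an instant with a `Z` suffix; the millisecond precision shown here is an assumption based on the real API's output:

```python
from datetime import datetime, timezone

def to_s3_timestamp(dt: datetime) -> str:
    # Render like the real S3 ListBuckets response, e.g. 2020-06-17T15:26:46.032Z
    utc = dt.astimezone(timezone.utc)
    return utc.strftime("%Y-%m-%dT%H:%M:%S.") + f"{utc.microsecond // 1000:03d}Z"

localstack_value = "2020-06-17T15:26:46.032838+00:00"  # what LocalStack returned
print(to_s3_timestamp(datetime.fromisoformat(localstack_value)))  # 2020-06-17T15:26:46.032Z
```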
# Steps to reproduce
Run "aws s3api list-buckets" vs "aws s3api list-buckets --endpoint-url http://localstack:4566" and note different date format for CreationDate.
## Command used to start LocalStack
docker-compose:
```
localstack:
  image: localstack/localstack
  ports:
    - '4566:4566'
  environment:
    SERVICES: s3
    DEBUG: 1
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
See steps to reproduce. | https://github.com/localstack/localstack/issues/2585 | https://github.com/localstack/localstack/pull/2588 | edd4f346cf67c9f0a48e59eec68fe3e2862949a8 | 79607beded4a2918121f1d3e18e456125be0c8e6 | "2020-06-19T13:30:30Z" | python | "2020-06-19T21:39:31Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,582 | ["localstack/services/edge.py", "localstack/services/generic_proxy.py", "tests/integration/test_s3.py"] | S3 universal port 4566 does not allow to download file with SDK |
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
Using the Node.js AWS SDK, the `getObject` function returns an object with `0` bytes when using port 4566.
On port 4572 it works fine.
# Steps to reproduce
## Command used to start LocalStack
```
localstack:
  image: localstack/localstack
  networks:
    - localstack-network
  ports:
    - 4566:4566
    - 4572:4572
    - 8080:8080
  environment:
    - DEBUG=1
    - HOSTNAME_EXTERNAL=localstack
    - SERVICES=sqs,lambda,cloudwatch,s3,iam,ec2,stepfunctions,cloudwatchlogs
    - LAMBDA_REMOTE_DOCKER=false
    - LAMBDA_EXECUTOR=docker
    - LAMBDA_DOCKER_NETWORK=localstack-network
    - AWS_ACCESS_KEY_ID=123
    - AWS_SECRET_ACCESS_KEY=xyz
    - DEFAULT_REGION=us-east-1
    - DOCKER_HOST=unix:///var/run/docker.sock
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /tmp/localstack:/tmp/localstack
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
```
export const s3 = new AWS.S3({
endpoint: `http://localhost:4566`,
s3ForcePathStyle: true,
sslEnabled: false
});
const { Body } = await s3.getObject({
Bucket: 'bucket',
Key: 'file.json'
}).promise();
``` | https://github.com/localstack/localstack/issues/2582 | https://github.com/localstack/localstack/pull/2667 | 9a415e2067f6fafa3cdc9dd84f5b491b0b2a2acd | f8b2deb32c087853f7ff9546712ea0af5e19f1e4 | "2020-06-19T05:34:05Z" | python | "2020-07-03T21:16:19Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,571 | ["localstack/services/awslambda/lambda_api.py", "localstack/services/dynamodbstreams/dynamodbstreams_api.py", "localstack/services/edge.py", "localstack/utils/common.py", "tests/integration/test_dynamodb.py", "tests/integration/test_sqs.py"] | Unable to determine forwarding port for API "monitoring" |
# Type of request: This is a ...
[x ] bug report
[ ] feature request
# Detailed description
Not able to use the edge service to write metric data to CloudWatch via the JavaScript aws-sdk.
However, it **DOES** work when using the deprecated CloudWatch port.
## Expected behavior
Metric data to be visible when calling `awslocal cloudwatch list-metrics`
## Actual behavior
404 from edge service. No metric data is persisted, localstack instance logs the following info:
```
INFO:localstack.services.edge: Unable to determine forwarding port for API "monitoring" - please make sure this API is enabled via the SERVICES configuration
```
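The API name `monitoring` in the log is CloudWatch's real endpoint prefix (as in `monitoring.<region>.amazonaws.com`), while the service is enabled under the name `cloudwatch` in `SERVICES`; this suggests a missing alias in the port lookup. A sketch of such an alias resolution (the mapping below is hypothetical, not LocalStack's actual table):

```python
# Hypothetical alias table: "monitoring" is CloudWatch's real endpoint
# prefix, but the service is enabled under the name "cloudwatch".
API_ALIASES = {"monitoring": "cloudwatch"}

def resolve_service(api_name, enabled_services):
    # Map the endpoint prefix to the configured service name, then check
    # whether that service is actually enabled.
    name = API_ALIASES.get(api_name, api_name)
    return name if name in enabled_services else None

print(resolve_service("monitoring", {"cloudwatch"}))  # cloudwatch
print(resolve_service("monitoring", {"s3"}))          # None
```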
# Steps to reproduce
Run JS code in "Client code" section.
See 404 Error
## Command used to start LocalStack
`TMPDIR=/tmp/localstack docker-compose up`
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
```js
const AWS = require('aws-sdk')
const cw = new AWS.CloudWatch({
endpoint: 'http://localhost:4566',
})
const data = {
Namespace: 'ns',
MetricData: [
{
MetricName: 'metric-name',
Timestamp: new Date(),
Value: 12345,
Unit: 'Milliseconds',
Dimensions: [
{ Name: 'Environment', Value: 'local' },
]
}
]
}
cw.putMetricData(data, (err, data) => {
if (err) {
console.log('Error', err)
} else {
console.log('Success', JSON.stringify(data))
}
})
```
| https://github.com/localstack/localstack/issues/2571 | https://github.com/localstack/localstack/pull/2581 | 2bb21f0147be47d63f089952f8d8f41f0c55feaf | 37d9f6e7c4957ad15afd6e03702678cda488f01d | "2020-06-17T12:59:26Z" | python | "2020-06-19T09:40:48Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,569 | ["localstack/services/awslambda/lambda_api.py", "localstack/services/dynamodbstreams/dynamodbstreams_api.py", "localstack/services/edge.py", "localstack/utils/common.py", "tests/integration/test_dynamodb.py", "tests/integration/test_sqs.py"] | DynamoDB Stream shardId wrong format |
# Type of request: This is a ...
[ ] bug report
[X ] feature request
# Detailed description
I'm using localstack to create a DynamoDB table locally with a DynamoDB stream and a Flink application consuming data from the stream.
The shardId created by localstack follows this format:
`shardId-000000000000-798cedd84a2953c5af70a131f8701f35`
Flink expects this format:
`shardId-00000001592367572726-07776efd`
The difference is just after the `shardId` prefix.
Normally, a real shardId created by Amazon is composed of the `shardId` prefix, a 20-digit timestamp, and 0-36 or more characters, separated by `-`.
The current shardId format is not valid for Flink applications and does not follow the real pattern used by Amazon.
## Expected behavior
The shardId must follow the pattern: `shardId` prefix, a 20-digit timestamp, and 0-36 or more characters, separated by `-`.
## Actual behavior
The shardId follows the pattern: `shardId` prefix, twelve zeros, and 33 characters, separated by `-`.
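For illustration, a shard ID in the format described above could be generated like this (a sketch of the pattern only, not LocalStack's implementation):

```python
import time
import uuid

def make_shard_id():
    # AWS-style shard ID as described above: "shardId-" prefix, a 20-digit
    # zero-padded millisecond timestamp, and a short random suffix,
    # joined by "-".
    timestamp = str(int(time.time() * 1000)).zfill(20)
    return "shardId-%s-%s" % (timestamp, uuid.uuid4().hex[:8])

shard_id = make_shard_id()
print(shard_id)  # e.g. shardId-00000001592367572726-07776efd
```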
# Steps to reproduce
Create a dynamodb table
## Command used to start LocalStack
I'm using docker and docker compose to run localstack
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
aws --endpoint-url=http://localhost:4566 dynamodb create-table \
--table-name temp4 \
--attribute-definitions AttributeName=partition_key,AttributeType=S AttributeName=sort_key,AttributeType=S \
--key-schema AttributeName=partition_key,KeyType=HASH AttributeName=sort_key,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=100,WriteCapacityUnits=100 \
--stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES
| https://github.com/localstack/localstack/issues/2569 | https://github.com/localstack/localstack/pull/2581 | 2bb21f0147be47d63f089952f8d8f41f0c55feaf | 37d9f6e7c4957ad15afd6e03702678cda488f01d | "2020-06-17T05:29:28Z" | python | "2020-06-19T09:40:48Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,562 | ["localstack/services/awslambda/lambda_executors.py", "localstack/services/dynamodb/dynamodb_listener.py", "localstack/services/s3/s3_listener.py", "localstack/utils/common.py", "tests/integration/test_dynamodb.py", "tests/integration/test_route53.py", "tests/integration/test_s3.py", "tests/unit/test_lambda.py"] | Simulate S3 BadDigest and InvalidDigest errors | # Type of request: This is a ...
[ ] bug report
[x] feature request
# Detailed description
As a Developer,
I want to be able to simulate BadDigest and InvalidDigest errors,
So that I can test the error handling logic
## Expected behavior
https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
BadDigest: The Content-MD5 you specified did not match what we received.
InvalidDigest: The Content-MD5 you specified is not valid.
## Actual behavior
localstack returns InvalidDigest when the MD5 does not match the payload, where BadDigest would be expected.
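To make the distinction concrete, here is a sketch of the expected S3 semantics for the `Content-MD5` header (illustrative only, not LocalStack's actual code): a malformed header should yield InvalidDigest, while a well-formed header that does not match the payload should yield BadDigest.

```python
import base64
import hashlib

def check_content_md5(header_value, payload):
    # InvalidDigest: the Content-MD5 header is not valid base64 or does
    # not decode to a 16-byte MD5 digest.
    try:
        decoded = base64.b64decode(header_value, validate=True)
        if len(decoded) != 16:
            return "InvalidDigest"
    except Exception:
        return "InvalidDigest"
    # BadDigest: well-formed header that does not match the payload.
    if decoded != hashlib.md5(payload).digest():
        return "BadDigest"
    return "OK"

payload = b"hello"
good = base64.b64encode(hashlib.md5(payload).digest()).decode()
print(check_content_md5(good, payload))           # OK
print(check_content_md5(good, b"tampered"))       # BadDigest
print(check_content_md5("not-base64!", payload))  # InvalidDigest
```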
| https://github.com/localstack/localstack/issues/2562 | https://github.com/localstack/localstack/pull/2635 | 4014831e4dfcd41b3ea81d267c48c121bee6d0f9 | 5d89ae7df0f7bb9e09a255d0b0bc4d511d11277f | "2020-06-15T18:41:08Z" | python | "2020-06-28T10:42:59Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,543 | ["localstack/services/edge.py"] | DynamoDB streams describe-streams returns ResourceNotFoundException |
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
DynamoDB streams can be listed using `aws dynamodbstreams list-streams`, but doing `aws dynamodbstreams describe-stream` on a specific stream returns:
```
An error occurred (ResourceNotFoundException) when calling the DescribeStream operation: Requested resource not found: Stream: arn:aws:dynamodb:ca-central-1:000000000000:table/myspecialtable/stream/2020-06-10T18:51:23.742 not found
```
**_The issue is not present in 0.11.1 - it happens on 0.11.2_**
## Expected behavior
Invoking `aws dynamodbstreams describe-stream --stream-arn ${MY_STREAM_ARN}` should return a response similar to:
```
{
"StreamDescription": {
"StreamArn": "arn:aws:dynamodb:ca-central-1:000000000000:table/stream/2020-06-10T18:51:23.742",
"StreamLabel": "2020-06-10T18:51:23.742",
"StreamStatus": "ENABLED",
"StreamViewType": "NEW_AND_OLD_IMAGES",
...
}
}
```
## Actual behavior
Invoking `aws dynamodbstreams describe-stream --stream-arn ${MY_STREAM_ARN}` returns:
```
An error occurred (ResourceNotFoundException) when calling the DescribeStream operation: Requested resource not found: Stream: arn:aws:dynamodb:ca-central-1:000000000000:table/myspecialtable/stream/2020-06-10T18:51:23.742 not found
```
even though `aws dynamodbstreams list-streams` displays the stream.
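A DynamoDB stream ARN embeds the table name and the stream label, so a not-found error for a stream that `list-streams` can see usually points at the ARN-to-stream resolution. For illustration, parsing the ARN from the error above (standard stream ARN layout):

```python
def parse_stream_arn(arn):
    # Layout: arn:aws:dynamodb:<region>:<account>:table/<table>/stream/<label>
    # maxsplit=5 keeps the colons inside the stream label intact.
    resource = arn.split(":", 5)[5]
    _, table, _, label = resource.split("/", 3)
    return table, label

arn = ("arn:aws:dynamodb:ca-central-1:000000000000:"
       "table/myspecialtable/stream/2020-06-10T18:51:23.742")
print(parse_stream_arn(arn))  # ('myspecialtable', '2020-06-10T18:51:23.742')
```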
# Steps to reproduce
## Command used to start LocalStack
Using docker:
```
docker run --rm -e 'DEBUG=1' -e 'SERVICES=dynamodb,dynamodbstreams' -p 4566:4566 --name localstack localstack/localstack-light:0.11.2
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
1. Create a DynamoDB table with streams enabled:
```
aws --endpoint-url http://localhost:4566 dynamodb create-table --attribute-definitions AttributeName=MyAttribute,AttributeType=S --key-schema AttributeName=MyAttribute,KeyType=HASH --table-name MyTable --stream-specification StreamEnabled=true,StreamViewType=NEW_IMAGE --billing-mode PAY_PER_REQUEST
{
"TableDescription": {
"AttributeDefinitions": [
{
"AttributeName": "MyAttribute",
"AttributeType": "S"
}
],
"TableName": "MyTable",
"KeySchema": [
{
"AttributeName": "MyAttribute",
"KeyType": "HASH"
}
],
"TableStatus": "ACTIVE",
"CreationDateTime": 1591825040.59,
"ProvisionedThroughput": {
"LastIncreaseDateTime": 0.0,
"LastDecreaseDateTime": 0.0,
"NumberOfDecreasesToday": 0,
"ReadCapacityUnits": 0,
"WriteCapacityUnits": 0
},
"TableSizeBytes": 0,
"ItemCount": 0,
"TableArn": "arn:aws:dynamodb:us-east-1:000000000000:table/MyTable",
"BillingModeSummary": {
"BillingMode": "PAY_PER_REQUEST",
"LastUpdateToPayPerRequestDateTime": -1607826.226
},
"StreamSpecification": {
"StreamEnabled": true,
"StreamViewType": "NEW_IMAGE"
},
"LatestStreamLabel": "2020-06-10T21:37:20.590",
"LatestStreamArn": "arn:aws:dynamodb:us-east-1:000000000000:table/MyTable/stream/2020-06-10T21:37:20.590"
}
}
```
2. List streams:
```
aws --endpoint-url http://localhost:4566 dynamodbstreams list-streams
{
"Streams": [
{
"StreamArn": "arn:aws:dynamodb:us-east-1:000000000000:table/MyTable/stream/2020-06-10T21:37:20.590",
"TableName": "MyTable",
"StreamLabel": "2020-06-10T21:37:20.590"
}
]
}
```
3. Describe a stream:
```
aws --endpoint-url http://localhost:4566 dynamodbstreams describe-stream --stream-arn arn:aws:dynamodb:us-east-1:000000000000:table/MyTable/stream/2020-06-10T21:37:20.590
An error occurred (ResourceNotFoundException) when calling the DescribeStream operation: Requested resource not found: Stream: arn:aws:dynamodb:us-east-1:000000000000:table/MyTable/stream/2020-06-10T21:37:20.590 not found
```
| https://github.com/localstack/localstack/issues/2543 | https://github.com/localstack/localstack/pull/2559 | 4474ae17db342be0a379b3a344fc60638eb4b6dc | b4c478eafdb56f00647e21744f8e9c0aa8e11a43 | "2020-06-10T21:42:42Z" | python | "2020-06-14T18:16:27Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,535 | ["localstack/config.py", "localstack/services/awslambda/lambda_api.py", "localstack/services/generic_proxy.py", "localstack/services/s3/s3_listener.py", "localstack/utils/common.py", "localstack/utils/http_utils.py", "localstack/utils/server/http2_server.py", "tests/unit/test_misc.py"] | listObjects returns empty list when request come from php aws s3Client (regression bug) |
# Type of request: This is a ...
[x] bug report
# Detailed description
I have AWS S3 localstack declared in docker-compose as:
```
version: "3"
services:
...
localstack:
image: localstack/localstack
environment:
- SERVICES=s3
- USE_SSL=false
- DEBUG=1
- DATA_DIR=/tmp/localstack/data
ports:
- "4572:4572"
- "4566:4566"
- "8083:8080"
networks:
- mynetwork
```
After build everything works fine. I am able to connect to the image:
```
docker exec -ti my-project_localstack_1 /bin/bash
```
And make a new bucket using command line:
```
awslocal s3 mb s3://my-bucket
```
Initially I was able to put new objects into the bucket from my PHP app.
But I was not able to see/view a list of them from PHP/Postman/browser.
I did some research and found [this][1] solution.
```
awslocal s3 mb s3://my-bucket
awslocal s3api put-bucket-acl --bucket my-bucket --acl public-read
```
Now I am able to get the list of objects by prefix in anonymous mode (no credentials or tokens) in my Chrome browser and using Postman.
But `$s3Client->listObjects(...)` fails: it always returns an empty result.
Note: I am still able to execute `$s3Client->putObject(...)`.
I also checked other commands, `$s3Client->getBucketAcl(...)` and `$s3Client->getObjectUrl(...)`. They work fine.
What I want to say is that the connection to the localstack host from PHP is fine, and the instance is working and responding fine.
Here is the code on php side that I use to instantiate `$s3Client`:
```
class S3
{
/** @var \Aws\S3\S3Client */
private static $client = null;
private static function init() // Lazy S3client initiation
{
if (is_null (self::$client)) {
self::$client = new Aws\S3\S3Client ([
'region' => 'us-east-1',
'version' => '2006-03-01',
'credentials' => false,
'endpoint' => "http://localstack:4572",
'use_path_style_endpoint' => true,
'debug' => true
]);
}
}
...
public static function list_objects($bucket, array $options)
{
self::init();
return self::$client->listObjects([
'Bucket' => "my-bucket",
'Prefix' => "year/month/folder/",
'Delimiter' => $options['delimiter'] ? $options['delimiter'] : '/',
]);
}
...
}
```
This method returns `@metadata->effectiveUri`:
```
array (size=2)
'instance' => string '0000000040d78e4d00000000084dbdb3' (length=32)
'data' =>
array (size=1)
'@metadata' =>
array (size=4)
'statusCode' => int 200
'effectiveUri' => string 'http://localstack:4572/my-bucket?prefix=year%2Fmonth%2Ffolder%2F&delimiter=%2F&encoding-type=url'
```
If I take this URL and run it in the **browser**, in **Postman**, or via **curl** in the PHP Docker terminal, it returns the list of my files. It only returns an empty array when I call it through s3Client in PHP.
I have a feeling that something is wrong with permissions. But since I don't have that much knowledge and experience with the AWS S3 service, I can't figure it out. It seems confusing that some "default" permissions allow the client to put objects but restrict reading the index. And I can read the index of objects using a browser or curl, but not through the app.
I've asked [this question](https://stackoverflow.com/questions/62244318/cant-get-list-objects-from-localstack-s3-bucket-using-php-aws-s3client) on SO.
Someone there recommended downgrading the Docker image to `localstack/localstack:0.11.0`, and that fixed the issue.
## Expected behavior
Php aws s3client `listObjects` should return list of objects stored in bucket same as it is returned for curl or browser.
## Actual behavior
Empty list returned
| https://github.com/localstack/localstack/issues/2535 | https://github.com/localstack/localstack/pull/2537 | 59bf9ec61d2bdad919b7117f961f561a2c499980 | 6a380e5580d9fe6c14f0b328141c5b140b0bee2f | "2020-06-09T12:08:35Z" | python | "2020-06-10T00:53:00Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,506 | ["localstack/services/edge.py", "localstack/services/s3/s3_listener.py", "localstack/utils/common.py"] | HEAD request is not properly forwarded to S3 through edge router |
# Type of request: This is a ...
- [x] bug report
- [ ] feature request
# Detailed description
I get a 404 when I request a file I uploaded via a HEAD request.
This only happens when using the new edge router port.
I think there's faulty logic in the edge router when looking at HEAD requests.
## Expected behavior
I expect the behavior I get from requesting via the dedicated S3 port (HTTP 200 if a file exists)
## Actual behavior
A 404 on HEAD request through the edge router port.
# Steps to reproduce
```
version: '2.2'
services:
localstack:
image: localstack/localstack
ports:
- '4566:4566'
environment:
- SERVICES=s3
- HOSTNAME_EXTERNAL=s3.test
```
```
docker-compose up -d
```
Making a bucket and uploading a file:
```
AWS_ENDPOINT_URL=http://s3.test:4566
aws --endpoint-url=$AWS_ENDPOINT_URL s3 mb s3://attachments
aws --endpoint-url=$AWS_ENDPOINT_URL s3 cp test s3://attachments
```
Lastly, HEADing the file to see if it exists.
```
$ curl -I $AWS_ENDPOINT_URL/attachments/test -H 'Authorization: aaa'
HTTP/1.1 404 Not Found
Server: BaseHTTP/0.6 Python/3.8.2
Date: Wed, 03 Jun 2020 21:28:46 GMT
Content-Length: 21
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH
Access-Control-Allow-Headers: authorization,content-type,content-md5,cache-control,x-amz-content-sha256,x-amz-date,x-amz-security-token,x-amz-user-agent,x-amz-target,x-amz-acl,x-amz-version-id,x-localstack-target,x-amz-tagging
Access-Control-Expose-Headers: x-amz-version-id
```
Localstack logs from requesting via *:4566
```
localstack_1 | Starting edge router (https port 4566)...
localstack_1 | Starting mock S3 service in http ports 4566 (recommended) and 4572 (deprecated)...
localstack_1 | 2020-06-03T21:04:36:INFO:localstack.multiserver: Starting multi API server process on port 38679
localstack_1 | Ready.
localstack_1 | 2020-06-03T21:04:41:INFO:localstack.utils.persistence: Restored 10 API calls from persistent file: /tmp/localstack/recorded_api_calls.json
localstack_1 | 2020-06-03T21:05:09:INFO:localstack.services.edge: Unable to find forwarding rule for host "s3.test:4566", path "/attachments/test", target header "", auth header ""
```
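For context, the edge router has to infer the target service from the request; with a real SigV4 `Authorization` header the service name is embedded in the credential scope, whereas the dummy `Authorization: aaa` header above gives it nothing to dispatch on. A sketch of that kind of parsing (illustrative only, not LocalStack's actual routing logic):

```python
import re

def service_from_auth_header(auth):
    # SigV4 credential scope: <key>/<YYYYMMDD>/<region>/<service>/aws4_request
    match = re.search(
        r"Credential=[^/]+/\d{8}/([^/]+)/([^/]+)/aws4_request", auth)
    return match.group(2) if match else None

sigv4 = ("AWS4-HMAC-SHA256 Credential=test/20200603/us-east-1/s3/aws4_request, "
         "SignedHeaders=host, Signature=abc")
print(service_from_auth_header(sigv4))  # s3
print(service_from_auth_header("aaa"))  # None
```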
Switching the exposed port to 4572 and requesting through there gives me:
```
AWS_ENDPOINT_URL=http://s3.test:4572
```
```
$ curl -I $AWS_ENDPOINT_URL/attachments/test -H 'Authorization: aaa'
HTTP/1.1 200 OK
Server: BaseHTTP/0.6 Python/3.8.2
Date: Wed, 03 Jun 2020 21:39:38 GMT
Content-Type: binary/octet-stream
ETag: "d41d8cd98f00b204e9800998ecf8427e"
last-modified: Wed, 03 Jun 2020 21:31:11 GMT
Content-Length: 0
Access-Control-Allow-Origin: *
x-amz-request-id: 3144F10D603C9424
x-amz-id-2: MzRISOwyjmnup3144F10D603C94247/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp
Access-Control-Allow-Methods: HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH
Access-Control-Allow-Headers: authorization,content-type,content-md5,cache-control,x-amz-content-sha256,x-amz-date,x-amz-security-token,x-amz-user-agent,x-amz-target,x-amz-acl,x-amz-version-id,x-localstack-target,x-amz-tagging
Access-Control-Expose-Headers: x-amz-version-id
``` | https://github.com/localstack/localstack/issues/2506 | https://github.com/localstack/localstack/pull/2508 | a105b36b15f98d8fd102c2520d789ecba7284529 | 3187a303cb39b67cf7f3f41dca58f5549aebca0a | "2020-06-03T22:04:27Z" | python | "2020-06-04T07:30:52Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,488 | ["localstack/services/apigateway/apigateway_listener.py"] | ApiGateway doesn't honor api-key-required flag on methods | # Type of request: This is a ...
[x ] bug report
[ ] feature request
# Detailed description
When creating a method that requires an API key, the requirement is not enforced by the mock service.
## Expected behavior
Only requests with valid API keys should be allowed through.
## Actual behavior
No checking of API keys is performed.
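For reference, the real API Gateway rejects a request with 403 Forbidden when `--api-key-required` is set and the `x-api-key` header is missing or not attached to an enabled usage plan. A minimal sketch of that check (the key store below is hypothetical):

```python
# Hypothetical set of API keys attached to an enabled usage plan.
VALID_API_KEYS = {"my-enabled-key"}

def check_api_key(headers, api_key_required=True):
    # Real API Gateway responds 403 {"message": "Forbidden"} when the key
    # is required but missing or unknown.
    if not api_key_required:
        return 200
    if headers.get("x-api-key") in VALID_API_KEYS:
        return 200
    return 403

print(check_api_key({"x-api-key": "my-enabled-key"}))  # 200
print(check_api_key({}))                               # 403
```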
# Steps to reproduce
Create an APIG with method that requires API keys.
## Command used to start LocalStack
```
PORT_WEB_UI=8081 SERVICES=lambda,dynamodb,apigateway localstack start --docker
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
Create lambda.js:
```
'use strict'
const apiHandler = (payload, context, callback) => {
console.log(`Function apiHandler called with payload ${JSON.stringify(payload)}`);
callback(null, {
statusCode: 201,
body: JSON.stringify({
message: 'Hello World'
}),
headers: {
'X-Custom-Header': 'ASDF'
}
});
}
module.exports = {
apiHandler,
}
```
Zip it up
```
zip api-handler.zip index.js
```
Run this:
```
#!/bin/sh
API_NAME=test
REGION=us-east-1
STAGE=prod
function fail() {
echo $2
exit $1
}
awslocal lambda create-function \
--region ${REGION} \
--function-name ${API_NAME} \
--runtime nodejs8.10 \
--handler lambda.apiHandler \
--memory-size 128 \
--zip-file fileb://api-handler.zip \
--role arn:aws:iam::123456:role/irrelevant
[ $? == 0 ] || fail 1 "Failed: AWS / lambda / create-function"
LAMBDA_ARN=$(awslocal lambda list-functions --query "Functions[?FunctionName==\`${API_NAME}\`].FunctionArn" --output text --region ${REGION})
awslocal apigateway create-rest-api \
--region ${REGION} \
--name ${API_NAME}
[ $? == 0 ] || fail 2 "Failed: AWS / apigateway / create-rest-api"
API_ID=$(awslocal apigateway get-rest-apis --query "items[?name==\`${API_NAME}\`].id" --output text --region ${REGION})
PARENT_RESOURCE_ID=$(awslocal apigateway get-resources --rest-api-id ${API_ID} --query 'items[?path==`/`].id' --output text --region ${REGION})
awslocal apigateway create-usage-plan --name "basic" --description "basic" --quota limit=2000000,offset=0,period=MONTH --api-stages apiId=$API_ID,stage=prod
awslocal apigateway create-usage-plan --name "premium" --description "premium" --quota limit=5000000,offset=0,period=MONTH --api-stages apiId=$API_ID,stage=prod
awslocal apigateway create-resource \
--region ${REGION} \
--rest-api-id ${API_ID} \
--parent-id ${PARENT_RESOURCE_ID} \
--path-part "{somethingId}"
[ $? == 0 ] || fail 3 "Failed: AWS / apigateway / create-resource"
RESOURCE_ID=$(awslocal apigateway get-resources --rest-api-id ${API_ID} --query 'items[?path==`/{somethingId}`].id' --output text --region ${REGION})
awslocal apigateway put-method \
--region ${REGION} \
--rest-api-id ${API_ID} \
--resource-id ${RESOURCE_ID} \
--http-method GET \
--request-parameters "method.request.path.somethingId=true" \
--authorization-type "NONE" \
--api-key-required \
[ $? == 0 ] || fail 4 "Failed: AWS / apigateway / put-method"
awslocal apigateway put-integration \
--region ${REGION} \
--rest-api-id ${API_ID} \
--resource-id ${RESOURCE_ID} \
--http-method GET \
--type AWS_PROXY \
--integration-http-method POST \
--uri arn:aws:apigateway:${REGION}:lambda:path/2015-03-31/functions/${LAMBDA_ARN}/invocations \
--passthrough-behavior WHEN_NO_MATCH \
[ $? == 0 ] || fail 5 "Failed: AWS / apigateway / put-integration"
awslocal apigateway put-method \
--region ${REGION} \
--rest-api-id ${API_ID} \
--resource-id ${RESOURCE_ID} \
--http-method POST \
--request-parameters "method.request.path.somethingId=true" \
--authorization-type "NONE" \
--api-key-required \
[ $? == 0 ] || fail 4 "Failed: AWS / apigateway / put-method"
awslocal apigateway put-integration \
--region ${REGION} \
--rest-api-id ${API_ID} \
--resource-id ${RESOURCE_ID} \
--http-method POST \
--type AWS_PROXY \
--integration-http-method POST \
--uri arn:aws:apigateway:${REGION}:lambda:path/2015-03-31/functions/${LAMBDA_ARN}/invocations \
--passthrough-behavior WHEN_NO_MATCH \
[ $? == 0 ] || fail 5 "Failed: AWS / apigateway / put-integration"
awslocal apigateway create-deployment \
--region ${REGION} \
--rest-api-id ${API_ID} \
--stage-name ${STAGE} \
[ $? == 0 ] || fail 6 "Failed: AWS / apigateway / create-deployment"
ENDPOINT=http://localhost:4567/restapis/${API_ID}/${STAGE}/_user_request_/HowMuchIsTheFish
echo "API available at: ${ENDPOINT}"
echo "Testing GET:"
curl -i ${ENDPOINT}
echo "Testing POST:"
curl -iX POST ${ENDPOINT}
```
| https://github.com/localstack/localstack/issues/2488 | https://github.com/localstack/localstack/pull/2785 | 8b8b4288e149e9e64f9ff104ffd762c2d4cb2c6d | 409728b98e0e7b241cb76ce9ee51d946de30fe62 | "2020-05-31T20:18:30Z" | python | "2020-07-28T22:48:08Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,485 | ["localstack/utils/common.py"] | Can't get custom SSL certificate to work | # Type of request: This is a ...
[X] bug report
[ ] feature request
# Detailed description
I'm trying to configure SSL to work with a certificate bundle from my CA.
Here's an excerpt from my `docker-compose.yml`:
```
localstack:
image: localstack/localstack
ports:
- 4566:4566
- "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
environment:
- LOCALSTACK_SERVICES=s3,secretsmanager
- LOCALSTACK_DEFAULT_REGION=us-east-1
- LOCALSTACK_USE_SSL=true
- LOCALSTACK_DATA_DIR=/tmp/localstack/data
- LOCALSTACK_PORT_WEB_UI=8080
- LOCALSTACK_HOSTNAME=aws.local.domain.com
volumes:
- "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
- ./support/docker/localstack/aws.local.domain.com.pem:/tmp/localstack/server.test.pem
- ./support/docker/localstack/aws.local.domain.com.crt:/tmp/localstack/server.test.pem.crt
- ./support/docker/localstack/aws.local.domain.com.key:/tmp/localstack/server.test.pem.key
```
My `.pem` file has the following format:
```
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
```
Here's what I see in `localstack` container logs when I run `docker-compose up`:
```
localstack | Waiting for all LocalStack services to be ready
localstack | Starting edge router (https port 4566)...
localstack | Starting mock S3 service in https ports 4566 (recommended) and 4572 (deprecated)...
localstack | 2020-05-31T16:22:20:INFO:localstack.utils.common: Unable to store key/cert files for custom SSL certificate: substring not found
localstack | 2020-05-31T16:22:20:INFO:localstack.multiserver: Starting multi API server process on port 34651
localstack | 2020-05-31T16:22:20:INFO:localstack.utils.common: Unable to store key/cert files for custom SSL certificate: substring not found
localstack | Starting mock Secrets Manager service in https ports 4566 (recommended) and 4584 (deprecated)...
localstack | 2020-05-31T16:22:21:INFO:localstack.utils.common: Unable to store key/cert files for custom SSL certificate: substring not found
localstack | Waiting for all LocalStack services to be ready
localstack | Ready.
```
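The `substring not found` message in the log is the error Python's `str.index` raises, which suggests LocalStack could not find the key/cert markers it expects when splitting the combined bundle. A minimal sketch of such a split (illustrative only, not LocalStack's actual code):

```python
# Standard PEM boundary markers, matching the bundle layout shown above.
BEGIN_KEY = "-----BEGIN RSA PRIVATE KEY-----"
END_KEY = "-----END RSA PRIVATE KEY-----"
BEGIN_CERT = "-----BEGIN CERTIFICATE-----"

def split_bundle(pem):
    # str.index raises ValueError("substring not found") when a marker is
    # missing, the same message that appears in the LocalStack log above.
    key = pem[pem.index(BEGIN_KEY):pem.index(END_KEY) + len(END_KEY)]
    cert = pem[pem.index(BEGIN_CERT):]
    return key, cert

bundle = "\n".join([BEGIN_KEY, "<key data>", END_KEY,
                    BEGIN_CERT, "<cert data>", "-----END CERTIFICATE-----"])
key, cert = split_bundle(bundle)
print(key.splitlines()[0])   # -----BEGIN RSA PRIVATE KEY-----
print(cert.splitlines()[0])  # -----BEGIN CERTIFICATE-----
```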
I also have `aws.local.domain.com` in my `/etc/hosts` file pointing to `127.0.0.1`
## Expected behavior
I should be able to issue AWS CLI commands successfully.
## Actual behavior
Here's the output from AWS CLI:
```
$ aws --endpoint-url=https://aws.local.domain.com:4566 s3 ls
SSL validation failed for https://aws.local.domain.com:4566/ [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076)
```
Is there anything I'm missing?
┆Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-192) by [Unito](https://www.unito.io/learn-more)
| https://github.com/localstack/localstack/issues/2485 | https://github.com/localstack/localstack/pull/3749 | 1d9587525b23d262c76fe9ac4411b23eca5cf6cf | d71f99a7d4476d4b5ca6f0527935f1af0c3ba665 | "2020-05-31T16:44:18Z" | python | "2021-03-20T14:57:13Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,478 | ["localstack/services/edge.py"] | Unable to find forwarding rule for host - SQS | # Type of request: This is a ...
[x ] bug report
[ ] feature request
# Detailed description
When attempting to run in a Docker environment with Spring Cloud AWS Messaging, I get a 404 on the application side and `INFO:localstack.services.edge: Unable to find forwarding rule for host "awslocal:4566", path "/", target header "", auth header ""` on the Localstack container.
Using the following docker-compose entry:
```
localstack:
image: localstack/localstack:0.11.2
container_name: awslocal
hostname: awslocal
environment:
SERVICES: "sqs,sns"
HOST_TMP_FOLDER: /tmp/localstack
HOSTNAME_EXTERNAL: awslocal
ports:
- "4566:4566"
- "8055:8080"
volumes:
- "./.localstack:/tmp/localstack"
```
## Expected behavior
Application can listen to SQS messages on the `test` queue created externally with AWS CLI:
```
aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name test
{
"QueueUrl": "http://localhost:4566/queue/test"
}
aws --endpoint-url=http://localhost:4566 sns create-topic --name test
{
"TopicArn": "arn:aws:sns:us-east-1:000000000000:test"
}
aws --endpoint-url=http://localhost:4566 sns subscribe --topic-arn arn:aws:sns:us-east-1:000000000000:test --protocol sqs --notification-endpoint arn:aws:sqs:us-east-1:000000000000:test
{
"SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:test:de195743-b616-4c13-8539-fe17d8e00084"
}
aws --endpoint-url=http://localhost:4566 sns publish --topic-arn arn:aws:sns:us-east-1:000000000000:test --message "TEST"
```
## Actual behavior
Application fails with a 404 on this log line:
```
app_1 | 2020-05-30 01:02:36.955 DEBUG 6 --- [ main] com.amazonaws.request : Sending Request: POST http://awslocal:4566 / Parameters: ({"Action":["GetQueueUrl"],"Version":["2012-11-05"],"QueueName":["test"]}Headers: (User-Agent: aws-sdk-java/1.11.415 Linux/4.19.76-linuxkit OpenJDK_64-Bit_Server_VM/11.0.7+10 java/11.0.7, amz-sdk-invocation-id: 1ae4996f-7f4e-509a-0ee1-a5a7d7d274a0, )
app_1 | 2020-05-30 01:02:37.214 DEBUG 6 --- [ main] com.amazonaws.request : Received error response: com.amazonaws.services.sqs.model.AmazonSQSException: null (Service: AmazonSQS; Status Code: 404; Error Code: 404 Not Found; Request ID: null)
app_1 | 2020-05-30 01:02:37.220 WARN 6 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'simpleMessageListenerContainer' defined in class path resource [org/springframework/cloud/aws/messaging/config/annotation/SqsConfiguration.class]: Invocation of init method failed; nested exception is com.amazonaws.services.sqs.model.AmazonSQSException: null (Service: AmazonSQS; Status Code: 404; Error Code: 404 Not Found; Request ID: null)
app_1 | 2020-05-30 01:02:37.224 INFO 6 --- [ main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
app_1 | 2020-05-30 01:02:37.225 INFO 6 --- [ main] .SchemaDropperImpl$DelayedDropActionImpl : HHH000477: Starting delayed evictData of schema as part of SessionFactory shut-down'
app_1 | 2020-05-30 01:02:37.233 INFO 6 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
app_1 | 2020-05-30 01:02:37.235 INFO 6 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
app_1 | 2020-05-30 01:02:37.249 INFO 6 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
app_1 | 2020-05-30 01:02:37.253 INFO 6 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
app_1 | 2020-05-30 01:02:37.271 INFO 6 --- [ main] ConditionEvaluationReportLoggingListener :
app_1 |
app_1 | Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
app_1 | 2020-05-30 01:02:37.278 ERROR 6 --- [ main] o.s.boot.SpringApplication : Application run failed
app_1 | com.amazonaws.services.sqs.model.AmazonSQSException: null (Service: AmazonSQS; Status Code: 404; Error Code: 404 Not Found; Request ID: null)
app_1 | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1660)
app_1 | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1324)
app_1 | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1074)
app_1 | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:745)
app_1 | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:719)
app_1 | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:701)
app_1 | at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:669)
app_1 | at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:651)
app_1 | at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:515)
app_1 | at com.amazonaws.services.sqs.AmazonSQSClient.doInvoke(AmazonSQSClient.java:2147)
app_1 | at com.amazonaws.services.sqs.AmazonSQSClient.invoke(AmazonSQSClient.java:2116)
app_1 | at com.amazonaws.services.sqs.AmazonSQSClient.invoke(AmazonSQSClient.java:2105)
app_1 | at com.amazonaws.services.sqs.AmazonSQSClient.executeGetQueueUrl(AmazonSQSClient.java:1138)
app_1 | at com.amazonaws.services.sqs.AmazonSQSClient.getQueueUrl(AmazonSQSClient.java:1110)
```
# Steps to reproduce
A demo repository is here: https://github.com/MatthewEdge/boot-sqs-test
Relevant client code:
```java
@SpringBootApplication
@Slf4j
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
@SqsListener("test")
public void receiveMessage(@NotificationMessage String message, @Payload String payload) {
log.info(payload);
log.info(message);
}
@Value("${cloud.aws.endpoint.uri}")
private String endpointUrl;
@Value("${cloud.aws.region.static}")
private String region;
private AWSCredentialsProvider credentialsProvider() {
return new AWSStaticCredentialsProvider(new AnonymousAWSCredentials());
}
private EndpointConfiguration endpointConfiguration() {
log.info("Using endpoint: " + endpointUrl);
log.info("Region: " + region);
return new AwsClientBuilder.EndpointConfiguration(endpointUrl, region);
}
@Bean
public AmazonSQS amazonSQS() {
return AmazonSQSAsyncClientBuilder.standard()
.withCredentials(credentialsProvider())
.withEndpointConfiguration(endpointConfiguration())
.build();
}
}
```
## Command used to start LocalStack
```
docker-compose up -d localstack
docker-compose up --build app
```
┆Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-190) by [Unito](https://www.unito.io/learn-more)
| https://github.com/localstack/localstack/issues/2478 | https://github.com/localstack/localstack/pull/2489 | c57e55e7a6876d507d411198d59f4bc476d7a198 | c807267c9305afcd85b56afcf83b13c212fb627a | "2020-05-30T01:11:02Z" | python | "2020-06-01T08:13:48Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,472 | ["localstack/config.py", "localstack/services/awslambda/lambda_executors.py"] | Windows Pathing Issue with Volume Sharing Lambdas | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
When `mountCode: true` for a serverless.yml deployment to localstack on Windows, Localstack mounts the wrong path.
...
## Expected behavior
Docker expects the Windows client path value of the `-v` flag to begin with `/host_mnt/`.
...
## Actual behavior
Localstack does not add this to the Windows client path. This causes the volume sharing to fail. Once one tries to invoke a lambda without the proper volume sharing, the lambda fails with an error along the lines of "can't find no module 'lambda_name'".
...
# Steps to reproduce
1. In your `serverless.yml` set `mountCode: true`
2. In your `docker-compose.yml` set the environment variable `LAMBDA_REMOTE_DOCKER: "0"` for the localstack container
3. Deploy some test lambda to the localstack container
4. Attempt to invoke with `awslocal lambda invoke --function-name LAMBDA_NAME out.txt` and replace LAMBDA_NAME with the name of your test lambda.
5. Note the invocation failure.
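To illustrate the path translation in question, here is a rough sketch (the helper name and exact rules are illustrative, not LocalStack code) of converting a Windows host path into the form Docker Desktop expects for the `-v` flag:

```python
import ntpath

def to_docker_mount_path(win_path):
    # Hypothetical helper: convert a path like "C:\Users\me\code" into the
    # "/host_mnt/c/Users/me/code" form Docker Desktop on Windows expects.
    drive, rest = ntpath.splitdrive(win_path)
    return "/host_mnt/" + drive.rstrip(":").lower() + rest.replace("\\", "/")

print(to_docker_mount_path(r"C:\Users\me\code\lambda"))  # -> /host_mnt/c/Users/me/code/lambda
```

Without that prefix, the `-v` mount silently maps nothing useful into the Lambda container, which matches the "module not found" symptom above.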
## Command used to start LocalStack
Started via a `docker-compose.yml` file. Pretty basic stuff.
...
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
Just the `awslocal` invoke command outlined above.
...
| https://github.com/localstack/localstack/issues/2472 | https://github.com/localstack/localstack/pull/2474 | 1a184c006ed3d0110a56f0b9a51106776494a7bc | 34daf13e8f0c883eceb3ae3f83a6ab20e8fc9460 | "2020-05-28T17:18:49Z" | python | "2020-05-30T08:35:10Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,464 | ["localstack/services/edge.py", "tests/integration/test_edge.py", "tests/unit/test_edge.py"] | Edge Router doesn't handle S3 Presigned URL POSTs properly | # Bug Report
# Detailed description
This is a similar issue to #2329, specific to [S3 Presigned URLs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html). Using the edge router port (`:4566`), it is possible to use `generate_presigned_post`, but attempting to use the resultant URL to upload a file to localstack's S3 fails with a 404.
## Expected behavior
Localstack's edge router port should accept POST requests with an S3 Presigned URL (generated from that same Localstack instance) in the same way that the old S3 port (`:4572`) does.
## Actual behavior
While the old S3 port (`:4572`) handles this fine, the edge router responds with a 404 and a message similar to the following:
```
2020-05-24T15:36:54:INFO:localstack.services.edge: Unable to find forwarding rule for host "localhost:4566", path "/local-job-documents", target header "", auth header ""
```
(borrowed from @philippmalkov's comment on #2329 )
```
aws_1 | 2020-05-27T16:24:38:INFO:localstack.services.edge: Unable to find forwarding rule for host "aws:4566", path "/test-bucket", target header "", auth header ""
```
(our observed case with the below setup)
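For context, a presigned POST boils down to a base64-encoded policy document plus an HMAC signature over it, both submitted as form fields. A simplified stdlib-only sketch (toy key and policy values; real SigV4 derives a chained signing key from date, region, and service):

```python
import base64
import hashlib
import hmac
import json

# Toy policy document, roughly what generate_presigned_post embeds in the form.
policy = base64.b64encode(json.dumps({
    "expiration": "2020-05-28T00:00:00Z",
    "conditions": [{"bucket": "local-job-documents"}, ["starts-with", "$key", ""]],
}).encode()).decode()

# Toy signature over the encoded policy (real SigV4 uses a derived key, not a raw secret).
signature = hmac.new(b"secret-key", policy.encode(), hashlib.sha256).hexdigest()
print(signature[:8])  # first hex chars of the 64-char signature
```

The signature is computed entirely client-side, which is why generating the URL succeeds against the edge port even though the subsequent POST fails to route.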
# Steps to reproduce
## Command used to start LocalStack
```
# Used in docker-compose.yml as:
aws:
image: localstack/localstack-light
environment:
- SERVICES=dynamodb,s3
- HOSTNAME_EXTERNAL=aws
- DEBUG=1
- DATA_DIR=/tmp/localstack/data
expose:
- 4566
- 4572
volumes:
- ./.localstack:/tmp/localstack
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
Client code is a Django app using `boto3` to generate a Presigned URL and then upload a file to it using `requests`, effectively identical to the examples in [the boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html) | https://github.com/localstack/localstack/issues/2464 | https://github.com/localstack/localstack/pull/2499 | cecbb6aa7b0beca607364c49c01ae73c5a4d8301 | 8ca55b79899b1685b7d4f167ac5714bcdd98823b | "2020-05-27T20:14:37Z" | python | "2020-06-02T20:48:15Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,442 | ["localstack/services/edge.py", "tests/integration/test_edge.py"] | S3: POST/PUT to bucket URLs don't route correctly on port 4566 | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
We noticed this while converting a service to use Localstack for tests. The service generates S3 presigned post URLs. We're able to create and use presigned S3 URLs on port 4572 (deprecated S3 port), but not 4566, the new shared one. The same issue happens with PUT requests, which is the simplest to repro.
While just using 4572 works, this does force us to use the deprecated port and I figured it was worth opening an issue because of the discrepancy.
## Expected behavior
POST http://localhost:4566/hello (with appropriate form params) should return a 204, in the same way that POST http://localhost:4572/hello does.
PUT http://localhost:4566/hello should create a bucket and return a 200, in the same way that PUT http://localhost:4572/hello does.
## Actual behavior
Both PUT and POST http://localhost:4566/hello return a 404.
In the localstack logs:
2020-05-20T13:37:41:INFO:localstack.services.edge: Unable to find forwarding rule for host "localhost:4566", path "/hello", target header "", auth header ""
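A toy illustration of my guess at the failure mode (an assumption, not LocalStack's actual router code): a plain PUT/POST to a bucket URL carries no auth or target header, so only the path can identify it as an S3 request, and a router that ignores the path falls through to a 404:

```python
KNOWN_BUCKETS = {"hello"}  # hypothetical registry of bucket names

def classify(path, auth_header="", target_header=""):
    # Requests with an Authorization or X-Amz-Target header can be routed
    # from headers alone; bare PUT/POST to a bucket URL only has the path.
    if auth_header or target_header:
        return "service-from-headers"
    first_segment = path.lstrip("/").split("/")[0]
    return "s3" if first_segment in KNOWN_BUCKETS else None

print(classify("/hello"))    # -> s3
print(classify("/unknown"))  # -> None ("Unable to find forwarding rule")
```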
# Steps to reproduce
```bash
$ curl -i -XPUT http://localhost:4572/hello
HTTP/1.1 200 OK
Server: BaseHTTP/0.6 Python/3.8.2
Date: Wed, 20 May 2020 13:43:17 GMT
Content-Type: application/xml; charset=utf-8
content-length: 159
Access-Control-Allow-Origin: *
Last-Modified: Wed, 20 May 2020 13:43:17 GMT
x-amz-request-id: 0ABD347D7A4E0697
x-amz-id-2: MzRISOwyjmnup0ABD347D7A4E06977/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp
Access-Control-Allow-Methods: HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH
Access-Control-Allow-Headers: authorization,content-type,content-md5,cache-control,x-amz-content-sha256,x-amz-date,x-amz-security-token,x-amz-user-agent,x-amz-target,x-amz-acl,x-amz-version-id,x-localstack-target,x-amz-tagging
Access-Control-Expose-Headers: x-amz-version-id
<CreateBucketResponse xmlns="http://s3.amazonaws.com/doc/2006-03-01"><CreateBucketResponse><Bucket>hello</Bucket></CreateBucketResponse></CreateBucketResponse>%
$ curl -i -XPUT http://localhost:4566/hello
HTTP/1.1 404 Not Found
Server: BaseHTTP/0.6 Python/3.8.2
Date: Wed, 20 May 2020 13:43:22 GMT
Content-Length: 21
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH
Access-Control-Allow-Headers: authorization,content-type,content-md5,cache-control,x-amz-content-sha256,x-amz-date,x-amz-security-token,x-amz-user-agent,x-amz-target,x-amz-acl,x-amz-version-id,x-localstack-target,x-amz-tagging
Access-Control-Expose-Headers: x-amz-version-id
{"status": "running"}%
```
## Command used to start LocalStack
`localstack start` | https://github.com/localstack/localstack/issues/2442 | https://github.com/localstack/localstack/pull/2487 | 1f725599e2fb2a0c7cbd0fb9fc8a740a0b49250b | c57e55e7a6876d507d411198d59f4bc476d7a198 | "2020-05-20T13:45:53Z" | python | "2020-05-31T18:51:26Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,436 | ["Dockerfile", "localstack/services/edge.py", "localstack/utils/common.py"] | Best way to connect the different services and to access the dashboard when using single edge port | I'm trying to use below AWS services using docker-compose which looks like below:
```yaml
version: '2.1'
services:
localstack:
container_name: localstack-main
image: localstack/localstack
ports:
- "4566-4599:4566-4599"
- "9000:9000"
environment:
- SERVICES=sns,sqs,iam,sts or sns:4575,sqs:4576 etc.
- DEBUG=1
- DEFAULT_REGION=eu-west-1
- DATA_DIR=/tmp/localstack/data
- AWS_ACCESS_KEY_ID=dummyaccess
- AWS_SECRET_ACCESS_KEY=dummysecret
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=/tmp
- SNS_PORT=4575
- SQS_PORT=4576
- IAM_PORT=4593
- STS_PORT=4592
volumes:
- /tmp/localstack:/tmp/localstack
- "/var/run/docker.sock:/var/run/docker.sock"
```
What is the best way to access each AWS service? The following works fine:
```
aws sqs create-queue --endpoint-url=http://localhost:4566 --queue-name my_queue
{
"QueueUrl": "http://localhost:4566/queue/my_queue"
}
```
as well as
```
aws sqs create-queue --endpoint-url=http://localhost:4576 --queue-name my_queue_1
{
"QueueUrl": "http://localhost:4576/queue/my_queue_1"
}
```
Which one should I use? My understanding is that each individual service port will be disabled or unavailable in the future, and only the customised edge port will be available.
Also, if I open either of the QueueUrls posted above, I get the following in my Docker logs:
```
localstack-main | 2020-05-18T19:58:33:INFO:localstack.services.edge: Unable to find forwarding rule for host "localhost:4566", path "/queue/my_queue_1", target header "", auth header ""
localstack-main | 2020-05-18T20:01:28:INFO:localstack.services.edge: Unable to find forwarding rule for host "localhost:4566", path "/queue/my_queue", target header "", auth header ""
```
Also, in my data folder, `recorded_api_calls.json` is empty, even though I called `create-queue`.
This gives me the feeling that my setup is not quite right, or maybe I'm wrong.
Can anyone share a docker-compose configuration that LocalStack works well with?
Also, I can't access the dashboard using port 9000 (as mapped in my docker-compose). How can I access the dashboard?
BTW, when I click only `http://localhost:4566 `--> it shows `{"status": "running"}` | https://github.com/localstack/localstack/issues/2436 | https://github.com/localstack/localstack/pull/2504 | b71860abc1c13f4e95c7501962d67ef63b8f17d6 | a105b36b15f98d8fd102c2520d789ecba7284529 | "2020-05-18T20:05:45Z" | python | "2020-06-03T18:17:06Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,432 | ["localstack/services/sns/sns_listener.py", "tests/integration/test_sns.py", "tests/unit/test_sns.py"] | Binary MessageAttribute is corrupted when routed to SQS Queue | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x ] bug report
# Detailed description
When a message is posted directly to an SQS queue, the listener is able to read the payload and deserialize the binary message attribute.
If the same message is posted to an SNS topic first, and then gets routed to the SQS queue, the binary message attribute becomes corrupted and can no longer be deserialized by the listener.
Both scenarios work as expected (the message arrives intact and deserializes) when the same infrastructure is deployed to AWS.
## Expected behavior
Regardless of whether the message with a binary message attribute is posted to the SNS topic or the SQS queue, it should arrive as it was sent, including the binary message attribute.
## Actual behavior
When a message is sent to the SNS topic first, and then routed via a subscription with RawMessageDelivery to the SQS queue, the binary message attribute arrives corrupted.
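One plausible cause (my assumption; the report itself does not pinpoint it) is an extra base64 encoding applied when the already-encoded attribute is forwarded from SNS to SQS. A minimal sketch of that failure mode:

```python
import base64

original = bytes([0x00, 0x01, 0xFE, 0xFF])   # raw binary attribute value
wire = base64.b64encode(original)            # base64, as SNS/SQS carry binary attributes
double_encoded = base64.b64encode(wire)      # hypothetical bug: encoded a second time

# A consumer that decodes once gets base64 text instead of the original bytes:
assert base64.b64decode(double_encoded) == wire
assert base64.b64decode(double_encoded) != original
# Decoding the correctly-encoded value restores the payload:
assert base64.b64decode(wire) == original
```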
# Steps to reproduce
Please see example project and readme on [gruff4l0/localstack-sns-sqs-bug-report](https://github.com/gruff4l0/localstack-sns-sqs-bug-report) | https://github.com/localstack/localstack/issues/2432 | https://github.com/localstack/localstack/pull/2525 | bd777b7702abe49e936cec805a48e0c349d49a16 | 043074eed66737577d246f429cd1383f4744e77d | "2020-05-17T19:08:09Z" | python | "2020-06-10T20:17:50Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,387 | ["README.md", "localstack/utils/common.py"] | USE_SSL=1 breaks cloudwatch logging | [x] bug report
[ ] feature request
# Detailed description
CloudWatch logging doesn't work if USE_SSL is enabled. The problem seems to be in
https://github.com/localstack/localstack/blob/f9d5c1a77088bf65e10d05a298ef7ab61fc64314/localstack/utils/cloudwatch/cloudwatch_util.py#L57
If the protocol is https, the service-is-enabled resolver returns False. Logging works normally when USE_SSL=0.
This happens at least with 0.11.1, but older versions seem to have the same problem.
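A scheme-agnostic comparison would make the check independent of http vs https. A sketch (hypothetical names, not the actual cloudwatch_util code):

```python
from urllib.parse import urlparse

def same_endpoint(configured, observed):
    # Compare by host and port only, so toggling USE_SSL (http vs https)
    # does not change the result of the "is this service enabled" check.
    a, b = urlparse(configured), urlparse(observed)
    return (a.hostname, a.port) == (b.hostname, b.port)

print(same_endpoint("http://localhost:4582", "https://localhost:4582"))  # -> True
```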
## Expected behavior
Cloudwatch logging works
...
## Actual behavior
Nothing is logged to cloudwatch logs
# Steps to reproduce
set USE_SSL=1
## Command used to start LocalStack
docker-compose up
| https://github.com/localstack/localstack/issues/2387 | https://github.com/localstack/localstack/pull/2389 | f9d5c1a77088bf65e10d05a298ef7ab61fc64314 | ae9c131cd1482b01e239dc46541eb19ef4c06ea5 | "2020-05-04T07:21:47Z" | python | "2020-05-04T21:56:26Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,375 | ["setup.cfg"] | `generic_proxy.py`'s `Connection` header handling should be case insensitive | # Type of request:
- [x] bug report
- [ ] feature request
# Detailed description
In `0.10.8`, `localstack/services/generic_proxy.py` has regressed with respect to `Connection` header handling. HTTP headers are expected to be [case insensitive](https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2).
In version `0.10.7` of `generic_proxy.py`:
```python
if forward_headers.get('Connection', '').lower() != 'keep-alive':
self.close_connection = 1
```
in `0.10.8` [this was changed](https://github.com/localstack/localstack/commit/859cbc5ad1c23fefcec10a7a25570fe7581cbbb5) to
```python
if forward_headers.get('Connection') not in ['keep-alive', None]:
self.close_connection = 1
```
Note that the call to `lower()` was removed.
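A check that keeps `0.10.8`'s treatment of a missing header while restoring case insensitivity could look like this (a sketch, not the actual patch):

```python
def should_close_connection(forward_headers):
    # Treat a missing Connection header as keep-alive (0.10.8 behavior),
    # but compare the value case-insensitively (0.10.7 behavior).
    value = forward_headers.get('Connection')
    return value is not None and value.lower() != 'keep-alive'

print(should_close_connection({'Connection': 'Keep-Alive'}))  # -> False
print(should_close_connection({'Connection': 'close'}))       # -> True
```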
## Expected behavior
Keep case insensitive `Connection` header handling.
## Actual behavior
Case sensitive
# Steps to reproduce
Send a request with `Connection: Keep-Alive`
## Command used to start LocalStack
N/A
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
N/A
| https://github.com/localstack/localstack/issues/2375 | https://github.com/localstack/localstack/pull/5954 | d12d037398ec8c1db26be15ac60e481579397e36 | 01cb58478ee3a2329b86b0d88fbb12695ce1c3a6 | "2020-04-30T20:16:17Z" | python | "2022-04-29T09:18:38Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,359 | ["localstack/services/events/events_listener.py", "tests/integration/test_events.py"] | Cloudformation doesn't create Cloudwatch event rule | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[ ] bug report
[x] feature request
# Detailed description
Deploying a Cloudformation stack with a `Type: AWS::Events::Rule` resource doesn't deploy the rule.
## Expected behavior
Deploying the stack with `Type: AWS::Events::Rule` deploys a Cloudwatch events rule correctly
## Actual behavior
The rule doesn't get created. In the logs I see:
```
2020-04-27T10:51:35:WARNING:localstack.utils.cloudformation.template_deployer: Unable to extract name for resource type "Events::Rule"
2020-04-27T10:51:35:WARNING:localstack.utils.cloudformation.template_deployer: Unable to extract name for resource type "Events::Rule"
```
Based on this warning I've marked this a feature request as opposed to a bug report, though it could also be seen as a bug in that the deployed template doesn't perform correctly.
# Steps to reproduce
## Command used to start LocalStack
`localstack start`
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
template-scheduled.yaml:
<details>
<summary>Click to expand</summary>
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Workflow for testing scheduling state machines
Resources:
# IAM role to use for the step function
TestStateMachineExecutionRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service: !Sub states.${AWS::Region}.amazonaws.com
Action: "sts:AssumeRole"
Policies:
- PolicyName: InvokeLambdas
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action: "lambda:InvokeFunction"
Resource: "*"
# Step function describing the workflow
TestStateMachine:
Type: "AWS::StepFunctions::StateMachine"
Properties:
RoleArn: !GetAtt TestStateMachineExecutionRole.Arn
DefinitionString:
!Sub
- |-
{
"Comment": "A Hello World example of the Amazon States Language using Pass states",
"StartAt": "Hello",
"States": {
"Hello": {
"Type": "Pass",
"Result": "Hello",
"Next": "World"
},
"World": {
"Type": "Pass",
"Result": "World",
"End": true
}
}
}
- {}
# IAM role to use to invoke the step function
TestStateMachineScheduleTargetRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service: events.amazonaws.com
Action: "sts:AssumeRole"
Policies:
- PolicyName: InvokeTestStateMachine
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action: "states:StartExecution"
Resource: !Ref TestStateMachine
# Cloudwatch events rule to schedule the step function
TestStateMachineSchedule:
Type: AWS::Events::Rule
Properties:
ScheduleExpression: "cron(0/1 * * * ? *)"
State: "ENABLED"
Targets:
- Arn: !Ref TestStateMachine
Id: "TestStateMachine"
RoleArn: !GetAtt TestStateMachineScheduleTargetRole.Arn
```
</details>
```bash
$ awslocal cloudformation create-stack --stack-name test-scheduled-state-machine --template-body file://template-scheduled.yaml --capabilities CAPABILITY_IAM
{
"StackId": "arn:aws:cloudformation:us-east-1:000000000000:stack/test-scheduled-state-machine/7defb145-dd75-4979-a315-46011d1ab663"
}
$ awslocal events list-rules
{
"Rules": []
}
```
Note no rule has been created, although one exists in the template. | https://github.com/localstack/localstack/issues/2359 | https://github.com/localstack/localstack/pull/2494 | 88bebe1c0186a639eeece8702711035ec4332c08 | c30756ab111599dc9aa764b7d7eb2f067690d2f5 | "2020-04-27T11:08:27Z" | python | "2020-06-01T17:04:39Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,346 | [".travis.yml", "Makefile"] | Unit test coverage not happening | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[X ] bug report
[ ] feature request
# Detailed description
I created some test cases under https://github.com/localstack/localstack/blob/master/tests/unit/test_common.py
But these test cases are not covered in the latest build
https://coveralls.io/builds/30298765/source?filename=localstack%2Futils%2Fcommon.py
E.g., the test case below should have been covered, but the coverage didn't happen:
https://github.com/localstack/localstack/blob/master/tests/unit/test_common.py#L8
Coverage did not happen:
<img width="483" alt="coverage-error" src="https://user-images.githubusercontent.com/28680638/80177767-fc2b9e00-863f-11ea-88af-9bf4cd792290.png">
## Expected behavior
Coverage should be reported for the unit test cases.
## Actual behavior
...
# Steps to reproduce
## Command used to start LocalStack
...
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
...
| https://github.com/localstack/localstack/issues/2346 | https://github.com/localstack/localstack/pull/2353 | 1aa5d5241846f94801f7aa7b6b4c52306d649078 | 0f3b8cdcf3c04cc498ecaa3040ceb66337a48f95 | "2020-04-24T05:27:05Z" | python | "2020-04-25T14:51:15Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,340 | ["localstack/services/ec2/ec2_starter.py", "localstack/services/s3/s3_listener.py"] | Error terraform destroy vpc module using localstack | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
Hi, I'm trying to run some tests with LocalStack against my Terraform VPC module. `terraform apply` works, but when I run `terraform destroy` I get the following error and Terraform crashes:
```
Error: rpc error: code = Unavailable desc = transport is closin
```
# Steps to reproduce
```
provider "aws" {
access_key = "mock_access_key"
region = "us-east-1"
s3_force_path_style = true
secret_key = "mock_secret_key"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
apigateway = "http://localhost:4567"
cloudformation = "http://localhost:4581"
cloudwatch = "http://localhost:4582"
dynamodb = "http://localhost:4569"
es = "http://localhost:4578"
firehose = "http://localhost:4573"
iam = "http://localhost:4593"
kinesis = "http://localhost:4568"
lambda = "http://localhost:4574"
route53 = "http://localhost:4580"
redshift = "http://localhost:4577"
s3 = "http://localhost:4572"
secretsmanager = "http://localhost:4584"
ses = "http://localhost:4579"
sns = "http://localhost:4575"
sqs = "http://localhost:4576"
ssm = "http://localhost:4583"
stepfunctions = "http://localhost:4585"
sts = "http://localhost:4592"
ec2 = "http://localhost:4597"
}
}
data "aws_security_group" "default" {
name = "default"
vpc_id = module.vpc.vpc_id
}
module "vpc" {
source = "../../"
name = "vpc-terratest"
cidr = "10.120.0.0/16"
azs = ["eu-west-1a", "eu-west-1c"]
compute_public_subnets = ["10.120.3.0/24", "10.120.4.0/24"]
compute_private_subnets = ["10.120.0.0/24", "10.120.1.0/24"]
lb_subnets = ["10.120.5.0/24", "10.120.6.0/24"]
database_subnets = ["10.120.7.0/24", "10.120.8.0/24"]
create_database_subnet_group = false
enable_nat_gateway = true
single_nat_gateway = true
create_database_subnet_route_table = true
tags = {
Owner = "user"
Environment = "dev"
}
}
```
The module is
https://github.com/youse-seguradora/terraform-aws-vpc
## Command used to start LocalStack
localstack start
When I run `terraform apply` and `terraform destroy` against my own AWS account, they work fine.
Can you guys help?
Thanks
| https://github.com/localstack/localstack/issues/2340 | https://github.com/localstack/localstack/pull/2484 | 73683ac5d3ac64175620f682bd0a9f15a402ee11 | 1f725599e2fb2a0c7cbd0fb9fc8a740a0b49250b | "2020-04-22T21:57:47Z" | python | "2020-05-31T12:56:18Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,329 | ["localstack/services/edge.py", "tests/integration/test_lambda.py", "tests/unit/test_sns.py"] | s3.upload returns `Location: http://localhost:4566` | # Bug report
# Detailed description
The `AWS.s3.upload()` (official SDK - https://github.com/aws/aws-sdk-js) returns an object with the `Location` key that points to 4566 instead of 4572 (LocalStack S3 port).
## Expected behavior
The `Location` should point to the file on S3.
Example:
```
Location: http://localhost:4572/path/to/bucket.txt
```
## Actual behavior
The `Location` points to the LocalStack entrypoint.
Example:
```
Location: http://localhost:4566/path/to/bucket.txt
```
# Steps to reproduce
- Upload a file to S3 using the official AWS SDK (https://github.com/aws/aws-sdk-js).
- Check out the `Location` property.
## Client code
```javascript
const AWS = require('aws-sdk');
const s3 = new AWS.S3({
region: 'us-west-1',
endpoint: 'http://localhost:4566',
apiVersion: '2006-03-01',
s3ForcePathStyle: true,
});
(async () => {
await s3
.createBucket({ Bucket: 'my-bucket', ACL: 'private' })
.promise();
const { Location } = await s3
.upload({ Key: 'file.txt', Body: 'test', Bucket: 'my-bucket' })
.promise();
console.assert(Location === 'http://localhost:4572/my-bucket/file.txt');
})();
``` | https://github.com/localstack/localstack/issues/2329 | https://github.com/localstack/localstack/pull/2332 | 8433682f8ad29dc23a5e909cb229d0cb033beeaa | df8a1c0fc8cb4beecf824ff59274bb06540278a1 | "2020-04-21T12:41:05Z" | python | "2020-04-22T17:03:33Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,285 | [".travis.yml", "localstack/services/awslambda/lambda_api.py", "localstack/services/awslambda/lambda_executors.py", "tests/integration/lambdas/dotnetcore31/dotnetcore31.sln", "tests/integration/lambdas/dotnetcore31/dotnetcore31.zip", "tests/integration/lambdas/dotnetcore31/src/dotnetcore31/Function.cs", "tests/integration/lambdas/dotnetcore31/src/dotnetcore31/Readme.md", "tests/integration/lambdas/dotnetcore31/src/dotnetcore31/dotnetcore31.csproj", "tests/integration/test_lambda.py"] | Initial stack deployment fails on Update Stack | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
Using the serverless-local plugin, I cannot get the stack to deploy to the localstack environment; it always produces an error on Update Stack. I am using the latest image (as of 7 hours ago at the time of this issue report). I am also using the latest versions of serverless-localstack and the Serverless Framework.
```yaml
custom:
  localstack:
    stages:
      - local
    autostart: true
    host: http://localhost
    debug: true
```
```
❯ sls deploy -s local
Serverless: Using serverless-localstack
Serverless: Reconfiguring service apigateway to use http://localhost:4567
Serverless: Reconfiguring service cloudformation to use http://localhost:4581
Serverless: Reconfiguring service cloudwatch to use http://localhost:4582
Serverless: Reconfiguring service lambda to use http://localhost:4574
Serverless: Reconfiguring service dynamodb to use http://localhost:4569
Serverless: Reconfiguring service kinesis to use http://localhost:4568
Serverless: Reconfiguring service route53 to use http://localhost:4580
Serverless: Reconfiguring service firehose to use http://localhost:4573
Serverless: Reconfiguring service stepfunctions to use http://localhost:4585
Serverless: Reconfiguring service es to use http://localhost:4578
Serverless: Reconfiguring service s3 to use http://localhost:4572
Serverless: Reconfiguring service ses to use http://localhost:4579
Serverless: Reconfiguring service sns to use http://localhost:4575
Serverless: Reconfiguring service sqs to use http://localhost:4576
Serverless: Reconfiguring service sts to use http://localhost:4592
Serverless: Reconfiguring service iam to use http://localhost:4593
Serverless: Reconfiguring service ssm to use http://localhost:4583
Serverless: Reconfiguring service rds to use http://localhost:4594
Serverless: Reconfiguring service ec2 to use http://localhost:4597
Serverless: Reconfiguring service elasticache to use http://localhost:4598
Serverless: Reconfiguring service kms to use http://localhost:4599
Serverless: Reconfiguring service secretsmanager to use http://localhost:4584
Serverless: Reconfiguring service logs to use http://localhost:4586
Serverless: Reconfiguring service cloudwatchlogs to use http://localhost:4586
Serverless: Reconfiguring service iot to use http://localhost:4589
Serverless: Reconfiguring service cognito-idp to use http://localhost:4590
Serverless: Reconfiguring service cognito-identity to use http://localhost:4591
Serverless: Reconfiguring service ecs to use http://localhost:4601
Serverless: Reconfiguring service eks to use http://localhost:4602
Serverless: Reconfiguring service xray to use http://localhost:4603
Serverless: Reconfiguring service appsync to use http://localhost:4605
Serverless: Reconfiguring service cloudfront to use http://localhost:4606
Serverless: Reconfiguring service athena to use http://localhost:4607
Serverless: config.options_stage: local
Serverless: serverless.service.custom.stage: undefined
Serverless: serverless.service.provider.stage: local
Serverless: config.stage: local
Serverless: Packaging service...
Serverless Error ---------------------------------------
Bad Request
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: darwin
Node Version: 12.16.0
Framework Version: 1.67.3
Plugin Version: 3.6.6
SDK Version: 2.3.0
Components Version: 2.29.1
```
Inside the localstack container i see the following logs:
```
2020-04-11T18:33:46:DEBUG:localstack.services.cloudformation.cloudformation_listener: Error response for CloudFormation action "DescribeStacks" (400) POST /: b'Function not found: arn:aws:lambda:us-east-1:000000000000:function:phoenix-local-app-service-domain-projects'
2020-04-11T18:33:46:DEBUG:localstack.services.cloudformation.cloudformation_starter: Currently processing stack resource phoenix-local/ServerlessDeploymentBucketPolicy: None
2020-04-11T18:33:46:WARNING:moto: No Moto CloudFormation support for AWS::S3::BucketPolicy
2020-04-11T18:33:46:DEBUG:localstack.services.cloudformation.cloudformation_starter: Currently processing stack resource phoenix-local/AppDashserviceDashdomainDashprojectsLogGroup: False
2020-04-11T18:33:46:DEBUG:localstack.services.cloudformation.cloudformation_starter: Deploying CloudFormation resource (update=False, exists=False, updateable=False): {'Type': 'AWS::Logs::LogGroup', 'Properties': {'LogGroupName': '/aws/lambda/phoenix-local-app-service-domain-projects'}}
2020-04-11T18:33:46:DEBUG:localstack.utils.cloudformation.template_deployer: Running action "create" for resource type "Logs::LogGroup" id "AppDashserviceDashdomainDashprojectsLogGroup"
2020-04-11T18:33:46:DEBUG:localstack.utils.cloudformation.template_deployer: Request for resource type "Logs::LogGroup" in region us-east-1: create_log_group {'logGroupName': '/aws/lambda/phoenix-local-app-service-domain-projects'}
2020-04-11T18:33:46:WARNING:localstack.utils.cloudformation.template_deployer: Error calling <bound method ClientCreator._create_api_method.<locals>._api_call of <botocore.client.CloudWatchLogs object at 0x7fae34911160>> with params: {'logGroupName': '/aws/lambda/phoenix-local-app-service-domain-projects'} for resource: {'Type': 'AWS::Logs::LogGroup', 'Properties': {'LogGroupName': '/aws/lambda/phoenix-local-app-service-domain-projects'}}
2020-04-11T18:33:46:ERROR:localstack.services.cloudformation.cloudformation_starter: Unable to parse and create resource "AppDashserviceDashdomainDashprojectsLogGroup": An error occurred (ResourceAlreadyExistsException) when calling the CreateLogGroup operation: The specified log group already exists Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 195, in parse_and_create_resource
return _parse_and_create_resource(
File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 304, in _parse_and_create_resource
result = deploy_func(logical_id, resource_map_new, stack_name=stack_name)
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1017, in deploy_resource
return execute_resource_action(resource_id, resources, stack_name, ACTION_CREATE)
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1043, in execute_resource_action
result = configure_resource_via_sdk(resource_id, resources, resource_type, func, stack_name)
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1120, in configure_resource_via_sdk
raise e
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1117, in configure_resource_via_sdk
result = function(**params)
File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py", line 626, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceAlreadyExistsException: An error occurred (ResourceAlreadyExistsException) when calling the CreateLogGroup operation: The specified log group already exists
2020-04-11T18:33:46:DEBUG:localstack.services.cloudformation.cloudformation_listener: Error response for CloudFormation action "DescribeStackResource" (400) POST /: b'The specified log group already exists'
2020-04-11T18:33:46:WARNING:localstack.utils.cloudformation.template_deployer: Unable to get details for resource "ApiGatewayResourceProxyVar" in CloudFormation stack "phoenix-local": Unable to parse response (syntax error: line 1, column 0), invalid XML received:
b'The specified log group already exists'
2020-04-11T18:33:46:INFO:localstack.services.cloudformation.cloudformation_starter: Resource ApiGatewayDeployment1586630024025 cannot be deployed, found unsatisfied dependencies. {'Type': 'AWS::ApiGateway::Deployment', 'Properties': {'RestApiId': 'jac6pumuy6', 'StageName': 'local'}, 'DependsOn': ['ApiGatewayMethodProxyVarAny']}
2020-04-11T18:33:46:WARNING:localstack.services.cloudformation.cloudformation_starter: Unresolvable dependencies, there may be undeployed stack resources: {'AppDashserviceDashdomainDashprojectsLambdaFunction': ['AppDashserviceDashdomainDashprojectsLambdaFunction', {'Type': 'AWS::Lambda::Function', 'Properties': {'Code': {'S3Bucket': 'phoenix-local-serverlessdeploymentbucket-defkbstuynwj', 'S3Key': 'serverless/phoenix/local/1586629890845-2020-04-11T18:31:30.845Z/App.Service.Domain.Projects.zip'}, 'FunctionName': 'phoenix-local-app-service-domain-projects', 'Handler': 'App.Service.Domain.Projects::App.Service.Domain.Projects.LambdaEntryPoint::FunctionHandlerAsync', 'MemorySize': 512, 'Role': 'arn:aws:iam::123456789012:role/phoenix-local-us-east-1-lambdaRole', 'Runtime': 'dotnetcore3.1', 'Timeout': 6, 'Environment': {'Variables': {'DOTNET_URLS': 'http://*:80'}}, 'VpcConfig': {'SecurityGroupIds': ['sg-00a50ac2317aa947c'], 'SubnetIds': ['subnet-0ce40cbf66cd8bc36', 'subnet-0861e62180b7f2514']}}, 'DependsOn': ['AppDashserviceDashdomainDashprojectsLogGroup', 'IamRoleLambdaExecution']}, <moto.cloudformation.parsing.ResourceMap object at 0x7fae34bc3820>, 'us-east-1'], 'SnsDashtestLambdaFunction': ['SnsDashtestLambdaFunction', {'Type': 'AWS::Lambda::Function', 'Properties': {'Code': {'S3Bucket': 'phoenix-local-serverlessdeploymentbucket-defkbstuynwj', 'S3Key': 'serverless/phoenix/local/1586629890845-2020-04-11T18:31:30.845Z/SNSTest.zip'}, 'FunctionName': 'phoenix-local-sns-test', 'Handler': 'SNSTest::SNSTest.HandlerWrapper::HandleAsync', 'MemorySize': 128, 'Role': 'arn:aws:iam::123456789012:role/phoenix-local-us-east-1-lambdaRole', 'Runtime': 'dotnetcore3.1', 'Timeout': 6, 'VpcConfig': {'SecurityGroupIds': ['sg-00a50ac2317aa947c'], 'SubnetIds': ['subnet-0ce40cbf66cd8bc36', 'subnet-0861e62180b7f2514']}}, 'DependsOn': ['SnsDashtestLogGroup', 'IamRoleLambdaExecution']}, <moto.cloudformation.parsing.ResourceMap object at 0x7fae34bc3820>, 'us-east-1'], 
'ApiGatewayDeployment1586629849594': ['ApiGatewayDeployment1586629849594', {'Type': 'AWS::ApiGateway::Deployment', 'Properties': {'RestApiId': 'jac6pumuy6', 'StageName': 'local'}, 'DependsOn': ['ApiGatewayMethodProxyVarAny']}, <moto.cloudformation.parsing.ResourceMap object at 0x7fae34bc3820>, 'us-east-1'], 'ApiGatewayDeployment1586630024025': ['ApiGatewayDeployment1586630024025', {'Type': 'AWS::ApiGateway::Deployment', 'Properties': {'RestApiId': 'jac6pumuy6', 'StageName': 'local'}, 'DependsOn': ['ApiGatewayMethodProxyVarAny']}, <moto.cloudformation.parsing.ResourceMap object at 0x7fae34bc3820>, 'us-east-1']}
2020-04-11T18:33:46:WARNING:bootstrap.py: Thread run method <function apply_patches.<locals>.run_dependencies_deployment_loop.<locals>.run_loop at 0x7fae345e7ee0>(None) failed: Traceback (most recent call last):
File "/opt/code/localstack/localstack/utils/bootstrap.py", line 483, in run
self.func(self.params)
File "/opt/code/localstack/localstack/services/cloudformation/cloudformation_starter.py", line 743, in run_loop
raise Exception('Unable to resolve all CloudFormation resources after traversing ' +
Exception: Unable to resolve all CloudFormation resources after traversing dependency tree (maximum depth 40 reached): dict_keys(['AppDashserviceDashdomainDashprojectsLambdaFunction', 'SnsDashtestLambdaFunction', 'ApiGatewayDeployment1586629849594', 'ApiGatewayDeployment1586630024025'])
```
## Expected behavior
I expect the stack to be created and updated like it does on AWS directly
...
## Actual behavior
It creates the stack, but fails to update. Sometimes I start getting 502s from the edge endpoint as well when I use the AWS CLI to describe the stack.
```
❯ aws cloudformation describe-stacks --endpoint-url=http://localhost:4566
Unable to parse response (syntax error: line 1, column 0), invalid XML received:
b'Function not found: arn:aws:lambda:us-east-1:000000000000:function:phoenix-local-app-service-domain-projects'
```
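For anyone hitting the same `ResourceAlreadyExistsException` loop shown in the logs above, here is a hedged workaround sketch (hypothetical helper, not LocalStack's actual fix) that treats an already-existing log group as success during redeploys:

```python
# Hypothetical helper -- not the actual LocalStack code path. Treats an
# existing log group as success so repeated stack deployments don't abort.
def ensure_log_group(logs_client, name):
    """Create the log group if missing; ignore 'already exists' errors."""
    try:
        logs_client.create_log_group(logGroupName=name)
    except Exception as e:
        # botocore raises a ResourceAlreadyExistsException; matching on the
        # class name keeps this sketch usable without a live endpoint.
        if "ResourceAlreadyExists" not in type(e).__name__:
            raise
```

Calling it twice with the same name is then a no-op rather than a stack-deployment failure.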
# Steps to reproduce
## Command used to start LocalStack
serverless-localstack plugin autostart or `ENTRYPOINT=-d localstack start --docker`; I've tried both.
Let me know if you need any additional detail! | https://github.com/localstack/localstack/issues/2285 | https://github.com/localstack/localstack/pull/2366 | ae9c131cd1482b01e239dc46541eb19ef4c06ea5 | 745648a34ac1f0ea7ccf125e73de8819f7874137 | "2020-04-11T18:41:26Z" | python | "2020-05-04T23:32:47Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,272 | ["localstack/utils/bootstrap.py", "tests/unit/test_misc.py"] | Localstack start - duplicate -p 8080:8080, results in error | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
[x] bug report
[ ] feature request
# Detailed description
The LocalStack `docker run` command contains `-p 8080:8080` twice, which results in an error. When running the following command manually, I can successfully run LocalStack:
```
docker run -it -e LOCALSTACK_HOSTNAME="localhost" -e TEST_AWS_ACCOUNT_ID="000000000000" -e DEFAULT_REGION="us-east-1" \
-p 443:443 \
-p 4566:4566 \
-p 8081:8081 \
-p 4567-4617:4567-4617 \
-p 8080:8080 \
--rm --privileged --name localstack_main \
-v "/private/var/folders/01/wnz8g_c95fx81w85tpl2dx300000gn/T/localstack:/tmp/localstack" -v "/var/run/docker.sock:/var/run/docker.sock" -e DOCKER_HOST="unix:///var/run/docker.sock" -e HOST_TMP_FOLDER="/private/var/folders/01/wnz8g_c95fx81w85tpl2dx300000gn/T/localstack" "localstack/localstack"
```
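The underlying problem is mechanical: the generated argument list carries `-p 8080:8080` twice, and Docker refuses the duplicate port binding. A minimal sketch (hypothetical helper, not the actual LocalStack code) of deduplicating the mappings before invoking `docker run`:

```python
# Hypothetical sketch of deduplicating repeated "-p host:container"
# mappings; the duplicate "-p 8080:8080" is what makes Docker refuse
# to start the container.
def dedupe_port_mappings(args):
    """Return args with duplicate '-p X:Y' pairs removed, order preserved."""
    seen = set()
    result = []
    i = 0
    while i < len(args):
        if args[i] == "-p" and i + 1 < len(args):
            mapping = args[i + 1]
            if mapping not in seen:
                seen.add(mapping)
                result.extend(["-p", mapping])
            i += 2
        else:
            result.append(args[i])
            i += 1
    return result
```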
**OS:** MacOS Mojave 10.14.6
**Docker version:** Docker version 19.03.8, build afacb8b
## Expected behavior
Running `localstack start` should set up the stack as expected.
## Actual behavior
Running `localstack start` results in the following Docker error:
```
docker: Error response from daemon: Ports are not available: /forwards/expose/port returned unexpected status: 500.
ERRO[0000] error waiting for container: context canceled
```
# Steps to reproduce
1. Install Localstack - `pip install localstack`
2. Run Localstack - `localstack start`
3. Error is displayed
## Command used to start LocalStack
`localstack start`
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
N/A
| https://github.com/localstack/localstack/issues/2272 | https://github.com/localstack/localstack/pull/2280 | ec598b0d5b303d4d9d2f4f433871fa79cbeefd57 | 651f87eb51c36f7e58b421acf8e9966a8932feb1 | "2020-04-08T21:50:08Z" | python | "2020-04-11T00:17:24Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,242 | ["localstack/services/es/es_api.py", "tests/integration/test_elasticsearch.py"] | elastic search domain only 7.1? | It seems that `--elasticsearch-version` is not used. | https://github.com/localstack/localstack/issues/2242 | https://github.com/localstack/localstack/pull/2264 | f9ab90292d20eebb5d14198baf3b9a223c21d6f8 | db74cdde6b7c521a63087341db6e07959024db8e | "2020-04-02T17:45:04Z" | python | "2020-04-07T21:49:24Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,231 | [".dockerignore", "localstack/services/iam/iam_listener.py", "tests/integration/test_iam.py"] | [IAM] AmazonIdentityManagement with null message is thrown instead of EntityAlreadyExistsException | # Type of request: This is a ...
[X] bug report
# Detailed description
`EntityAlreadyExistsException` is not thrown correctly when creating IAM objects that are already present. `AmazonIdentityManagementException` with a null message is thrown instead
## Expected behavior
Localstack should throw `EntityAlreadyExistsException` with a populated message (not null)
## Actual behavior
```
com.amazonaws.services.identitymanagement.model.AmazonIdentityManagementException: null (Service: AmazonIdentityManagement; Status Code: 409; Error Code: 409 Conflict; Request ID: null)
```
# Steps to reproduce
- create an IAM role
- try to re-create it, catch `EntityAlreadyExistsException` but `AmazonIdentityManagementException` with null message is thrown instead
## Command used to start LocalStack
docker-compose up with `0.10.9`
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
```
try {
localStackIAMClient.createRole(createRoleRequest);
localStackIAMClient.createRole(createRoleRequest);
} catch (EntityAlreadyExistsException e) {
// AmazonIdentityManagementException with null is thrown instead
}
```
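For comparison, here is a sketch of the kind of IAM-style error body a backend would need to return (with HTTP status 409) so the AWS SDK can map it to `EntityAlreadyExistsException` instead of a generic null-message exception. The element names follow the public IAM API error format; the message wording and request ID are assumptions:

```python
# Hypothetical sketch of the IAM ErrorResponse shape; the real service
# returns a similar XML document with HTTP status 409.
def entity_already_exists_response(role_name,
                                   request_id="00000000-0000-0000-0000-000000000000"):
    return (
        '<ErrorResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">'
        "<Error><Type>Sender</Type>"
        "<Code>EntityAlreadyExists</Code>"
        f"<Message>Role with name {role_name} already exists.</Message>"
        "</Error>"
        f"<RequestId>{request_id}</RequestId>"
        "</ErrorResponse>"
    )
```

With a `<Code>` of `EntityAlreadyExists` present, the Java SDK's error unmarshaller can raise the specific exception with a populated message.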
| https://github.com/localstack/localstack/issues/2231 | https://github.com/localstack/localstack/pull/2316 | 28d3b76087979229f586911423307e6fd8995f19 | a7a669fa96685def97cdfdc69f1a5695fc8b1af0 | "2020-04-01T18:58:19Z" | python | "2020-04-19T00:20:25Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,206 | ["bin/supervisord.conf"] | Can't run latest localstack in docker as non-root user | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a bug report
# Detailed description
Cannot run localstack in docker with non-root user using latest docker image (0.10.9).
Supervisor doesn't allow non-root to switch to root, probably caused by this [commit](https://github.com/localstack/localstack/commit/1d222646c14fbe5c1e5088fb493fc0b473315066#diff-c54bdd2a91116a1c99267149fd2b5390)
Related supervisor issue:
https://github.com/Supervisor/supervisor/issues/1218
Latest docker image is version 0.10.9, which is not a release (?)
Image version 0.10.8 works fine, printing just a warning.
## Actual behavior
Service won't start
```
localstack_1 | Error: Can't drop privilege as nonroot user
```
# Steps to reproduce
## Command used to start LocalStack
`docker-compose up` with config:
```yaml
localstack:
image: localstack/localstack
user: ${NONROOT}
environment:
SERVICES: s3
```
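For reference, the failure comes from supervisord's privilege-dropping logic: when the config pins `user=root` but the process is already running as a non-root user, supervisord 4.x aborts with "Can't drop privilege as nonroot user". A hypothetical sketch of a non-root-friendly config (the paths and program name are assumptions, not the actual contents of `bin/supervisord.conf`):

```ini
[supervisord]
nodaemon=true
; note: no "user=root" directive here -- supervisord keeps the invoking
; user, so the container can be started with docker-compose "user: ${NONROOT}"

[program:infra]
; assumed entrypoint; the real command may differ
command=/opt/code/localstack/bin/localstack start --host
autorestart=true
```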
| https://github.com/localstack/localstack/issues/2206 | https://github.com/localstack/localstack/pull/2214 | 12279ee6f625310e5d078b47c21ea4a645722659 | 95127963f122359ca5b2e2cb770cda4f92189dcc | "2020-03-27T11:52:24Z" | python | "2020-03-29T01:08:02Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,178 | [".github/ISSUE_TEMPLATE.md", "README.md", "localstack/services/cloudformation/cloudformation_listener.py", "localstack/utils/cloudformation/template_deployer.py"] | CloudFormation - CreationTime is now supported in moto | Originally reported in https://github.com/localstack/localstack/issues/2099, the CreationTime for CF stacks was hardcoded. This was fixed in LocalStack itself (https://github.com/localstack/localstack/pull/2103), but is now also fixed in moto (https://github.com/spulec/moto/pull/2818).
I don't know what version of moto you're using, and whether it's acceptable to use dev-releases, but if you do:
moto 1.3.15.dev550 should fix this issue, so the LS fix can be removed if you upgrade :) | https://github.com/localstack/localstack/issues/2178 | https://github.com/localstack/localstack/pull/2182 | d4ddb08a3353e0cdf6e5cd84dc80d2c4acb97e54 | 770d11597da58394799819041f1af1612710c008 | "2020-03-20T07:24:56Z" | python | "2020-03-21T14:28:28Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,177 | [".github/workflows/asf-updates.yml"] | Can't create domain in ES service | I am getting `An error occurred (406) when calling the CreateElasticsearchDomain operation:`
Here are the logs:
```
Sending http request: <AWSPreparedRequest stream_output=False, method=POST, url=http://localhost:9200/2015-01-01/es/domain, headers={'Content-Length': '208', 'Authorization': 'AWS4-HMAC-SHA256 Credential=ASIAVIBP2Y7V2FIEWRQB/20200319/us-west-2/es/aws4_request, SignedHeaders=host;x-amz-date;x-amz-security-token, Signature=b52723f9a6fc8cddf65200b18074251cf38fb52e59f334ab4945d2fc6a4bf855', 'X-Amz-Security-Token': 'FwoGZXIvYXdzEPb//////////wEaDJFsgwOwLMAovXUMpiLhAXiJfJVZ6HfEDTaTfJSyEAd+tztc+2L45V4fboZO9Ae9z2gXsPl5DnatO6M8zAAj9/ICvWl7AMv+cTP6sjSf2VmO8157rgT6PwUCPqnBGkQOWSii2HdjfLYHGkg1UxG8AQYhJgFO34x5ANSqk/H3YND9XyDusUc5fVNmMDer+NeHJ1KER5nfPw0AD+60zB5YybU6DioBecIOBrVTTQ9JNlTs7KBXAOpLlF0RyTbXvFRB2XqtDEko6220zdlxciUbRMTJ7HtCfMiEgWN1slfYy6ZcV8Cg7+/xMr7OJodRfQJKzCjS9MvzBTIyXOB7gL4fcvVnjXsAMMdRKu0iZlsvHnYez6trsmZEHHRBLnyEJgWCK4XTWknG3Enjiww=', 'X-Amz-Date': '20200319T080759Z', 'User-Agent': 'aws-cli/1.16.110 Python/2.7.10 Darwin/17.7.0 botocore/1.12.100'}>
2020-03-19 13:37:59,661 - MainThread - urllib3.util.retry - DEBUG - Converted retries value: False -> Retry(total=False, connect=None, read=None, redirect=0, status=None)
2020-03-19 13:37:59,661 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTP connection (1): localhost:9200
2020-03-19 13:37:59,678 - MainThread - urllib3.connectionpool - DEBUG - http://localhost:9200 "POST /2015-01-01/es/domain HTTP/1.1" 406 64
2020-03-19 13:37:59,678 - MainThread - botocore.parsers - DEBUG - Response headers: {'content-length': '64', 'content-type': 'application/json; charset=UTF-8'}
2020-03-19 13:37:59,678 - MainThread - botocore.parsers - DEBUG - Response body:
{"error":"Content-Type header [] is not supported","status":406}
2020-03-19 13:37:59,679 - MainThread - botocore.hooks - DEBUG - Event needs-retry.elasticsearch-service.CreateElasticsearchDomain: calling handler <botocore.retryhandler.RetryHandler object at 0x10462c510>
2020-03-19 13:37:59,679 - MainThread - botocore.retryhandler - DEBUG - No retry needed.
2020-03-19 13:37:59,680 - MainThread - awscli.clidriver - DEBUG - Exception caught in main()
Traceback (most recent call last):
File "/usr/local/aws/lib/python2.7/site-packages/awscli/clidriver.py", line 207, in main
return command_table[parsed_args.command](remaining, parsed_args)
File "/usr/local/aws/lib/python2.7/site-packages/awscli/clidriver.py", line 348, in __call__
return command_table[parsed_args.operation](remaining, parsed_globals)
File "/usr/local/aws/lib/python2.7/site-packages/awscli/clidriver.py", line 520, in __call__
call_parameters, parsed_globals)
File "/usr/local/aws/lib/python2.7/site-packages/awscli/clidriver.py", line 640, in invoke
client, operation_name, parameters, parsed_globals)
File "/usr/local/aws/lib/python2.7/site-packages/awscli/clidriver.py", line 652, in _make_client_call
**parameters)
File "/usr/local/aws/lib/python2.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/aws/lib/python2.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (406) when calling the CreateElasticsearchDomain operation:
2020-03-19 13:37:59,681 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255
An error occurred (406) when calling the CreateElasticsearchDomain operation:
```
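The `406` in the logs comes from Elasticsearch itself: ES 6+ rejects body-carrying requests whose `Content-Type` header is missing or empty (note the `headers={...}` line above has no usable `Content-Type`). A hedged sketch of how a forwarding layer could default the header before relaying the request (hypothetical helper, not the actual LocalStack fix):

```python
# Hypothetical proxy-side sketch: default an empty or missing
# Content-Type to application/json when the request has a body.
def with_default_content_type(headers, body):
    """Return a copy of headers with Content-Type set when a body is present."""
    fixed = dict(headers)
    has_ct = any(k.lower() == "content-type" and v for k, v in fixed.items())
    if body and not has_ct:
        fixed["Content-Type"] = "application/json"
    return fixed
```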
┆Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-166) by [Unito](https://www.unito.io/learn-more)
| https://github.com/localstack/localstack/issues/2177 | https://github.com/localstack/localstack/pull/10178 | 75042c5c7feb0a3eafa2beea6cc30471e2b527a0 | 75db5a92d9f0264ace027bef216b3d0aeaeaf285 | "2020-03-20T04:33:07Z" | python | "2024-02-06T07:43:16Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 2,124 | ["README.md", "localstack/services/cloudformation/cloudformation_starter.py", "localstack/services/s3/s3_starter.py", "tests/integration/test_s3.py"] | Integrate S3 startup into multiserver.py | In [this PR](https://github.com/localstack/localstack/pull/1200/files) we introduced a new way of starting S3 in a separate process.
In the meantime, the service startup has been significantly reworked, and it seems that we can integrate the S3 starter into the [`multiserver.py`](https://github.com/localstack/localstack/blob/master/localstack/utils/server/multiserver.py) based loading approach that loads multiple APIs in a single process, for performance reasons. We should look into that. | https://github.com/localstack/localstack/issues/2124 | https://github.com/localstack/localstack/pull/2132 | 230b7e0d62659d40882f9eedd738dc341f4ce047 | a341fa7b7a35788cc3e3c6ddeb52bb8256c01df0 | "2020-03-04T12:55:26Z" | python | "2020-03-05T22:20:46Z" |
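A rough sketch of the single-process dispatch idea described above (hypothetical shape; the real `multiserver.py` differs): one listener process, with per-service handlers registered by API name instead of one process per service:

```python
# Hypothetical sketch of multi-API dispatch in a single process.
class MultiServer:
    def __init__(self):
        self.handlers = {}

    def register(self, api_name, handler):
        """Register the handler that serves one API (e.g. 's3')."""
        self.handlers[api_name] = handler

    def dispatch(self, api_name, request):
        """Route a request to the handler for the named API."""
        handler = self.handlers.get(api_name)
        if handler is None:
            raise KeyError(f"no handler registered for API '{api_name}'")
        return handler(request)
```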