status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | localstack/localstack | https://github.com/localstack/localstack | 1,235 | ["localstack/services/sns/sns_listener.py", "tests/integration/test_sns.py", "tests/unit/test_sns_listener.py"] | Feature : Honor attributes on SNS Subscription | Hello,
I am working on an SNS-to-SQS project in which we want to use the `RawMessageDelivery` feature when subscribing.
Looking into the code, I found that subscription attributes are not persisted; for example, `RawMessageDelivery` is always set to 'false'. Elsewhere in the code the attributes appear to be read correctly and used to transform the message accordingly.
I created a PR fixing this : https://github.com/localstack/localstack/pull/1234
Thank you | https://github.com/localstack/localstack/issues/1235 | https://github.com/localstack/localstack/pull/1234 | 6ff360533f66a170dcb4b4314d4e729236de24e5 | 6528d90c054b715ed9649de1b38ebb620e05fbbd | "2019-04-03T22:10:50Z" | python | "2019-04-29T21:07:02Z" |
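For anyone hitting this before the PR lands, here is a minimal boto3 sketch of subscribing with the attribute and reading it back. The endpoint URL, topic ARN, and queue ARN are placeholder assumptions for a default local setup; the network demo only runs when explicitly enabled.

```python
import os

def build_subscribe_kwargs(topic_arn, queue_arn, raw=True):
    """Build kwargs for sns.subscribe() so RawMessageDelivery is set at
    subscription time (SNS expects the string 'true'/'false')."""
    return {
        "TopicArn": topic_arn,
        "Protocol": "sqs",
        "Endpoint": queue_arn,
        "Attributes": {"RawMessageDelivery": "true" if raw else "false"},
    }

if os.environ.get("RUN_AGAINST_LOCALSTACK"):
    import boto3  # assumed available in your environment
    sns = boto3.client("sns", endpoint_url="http://localhost:4575",
                       region_name="us-east-1")
    kwargs = build_subscribe_kwargs(
        "arn:aws:sns:us-east-1:000000000000:my-topic",
        "arn:aws:sqs:us-east-1:000000000000:my-queue")
    sub_arn = sns.subscribe(**kwargs)["SubscriptionArn"]
    # Without the fix, this reads back 'false' even though we sent 'true'.
    attrs = sns.get_subscription_attributes(SubscriptionArn=sub_arn)["Attributes"]
    print(attrs.get("RawMessageDelivery"))
```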
closed | localstack/localstack | https://github.com/localstack/localstack | 1,225 | ["localstack/services/s3/s3_listener.py", "tests/integration/test_s3.py"] | Presigned S3 url doesnt notify sqs | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
I have configured an S3 bucket with an event configuration that notifies SQS on every object creation. When I upload via the AWS CLI, I get the notification correctly.
When I upload using a presigned URL with curl/Postman, I don't get the SQS notification. **Is this a known issue and are there any workarounds?** | https://github.com/localstack/localstack/issues/1225 | https://github.com/localstack/localstack/pull/1640 | 7f6efc9a765c461b23095b019ddcdbd8f51716e9 | f8ddf25b373af5a72d11e3caf036d2d74cce1388 | "2019-04-01T10:23:35Z" | python | "2020-02-29T12:45:27Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 1,216 | ["localstack/services/awslambda/lambda_executors.py", "localstack/services/cloudformation/cloudformation_listener.py", "localstack/services/kinesis/kinesis_listener.py", "localstack/services/s3/s3_listener.py", "tests/integration/test_kclpy.py", "tests/integration/test_kinesis.py", "tests/unit/test_lambda.py"] | Unable to get sqs notification on s3 object creation |
We have been trying to set up S3 and SQS on localstack: whenever an object is uploaded to S3, an SQS queue should be notified. Find below the commands used.
```
aws --endpoint-url=http://localhost:4572 s3 mb s3://march
aws --endpoint-url=http://localhost:4576 sqs create-queue --queue-name march
aws --endpoint-url=http://localhost:4572 s3api put-bucket-notification-configuration --bucket march --notification-configuration file://notification.json
aws --endpoint-url=http://localhost:4572 s3 cp test.csv s3://march
aws --endpoint-url=http://localhost:4576 sqs receive-message --queue-url http://localhost:4576/queue/march
```
notification.json content
```
{
"QueueConfigurations": [
{
"QueueArn": "http://localstack:4576/queue/march",
"Events": [
"s3:ObjectCreated:*"
]
}
]
}
```
Is there anything missing in my above steps? Is this a known issue? | https://github.com/localstack/localstack/issues/1216 | https://github.com/localstack/localstack/pull/1817 | a64e1f4e3324fc0b65d7251523fdcd7ba169178c | 78031dd65da9394f8b1b020be01ef02c63c433ee | "2019-03-27T16:53:02Z" | python | "2019-12-01T16:09:25Z" |
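One likely culprit in the notification.json above: the `QueueArn` field holds a queue URL rather than an ARN, which S3 expects. A hedged boto3 sketch of building the configuration with a derived ARN — the region and account ID are assumptions for a default localstack setup, and the network call is gated off by default:

```python
import os
from urllib.parse import urlparse

def queue_arn_from_url(queue_url, region="us-east-1", account="000000000000"):
    """Derive an SQS queue ARN from a localstack-style queue URL
    (region/account are assumptions for a default localstack setup)."""
    name = urlparse(queue_url).path.rsplit("/", 1)[-1]
    return f"arn:aws:sqs:{region}:{account}:{name}"

def notification_config(queue_url):
    # S3 expects an ARN in QueueArn, not the queue URL.
    return {
        "QueueConfigurations": [{
            "QueueArn": queue_arn_from_url(queue_url),
            "Events": ["s3:ObjectCreated:*"],
        }]
    }

if os.environ.get("RUN_AGAINST_LOCALSTACK"):
    import boto3  # assumed available
    s3 = boto3.client("s3", endpoint_url="http://localhost:4572")
    s3.put_bucket_notification_configuration(
        Bucket="march",
        NotificationConfiguration=notification_config(
            "http://localstack:4576/queue/march"))
```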
closed | localstack/localstack | https://github.com/localstack/localstack | 1,215 | ["localstack/services/awslambda/lambda_api.py", "localstack/services/sqs/sqs_listener.py", "tests/integration/test_sqs.py", "tests/unit/test_sqs_listener.py"] | Lambda event does not get SQS MessageAttributes. | Hi,
It seems that when an SQS queue invokes a Lambda function, the function does receive the SQS message; however, `messageAttributes` is always an empty object, even though message attributes were sent along.
| https://github.com/localstack/localstack/issues/1215 | https://github.com/localstack/localstack/pull/1239 | 2d89a688fe609ff0e572bcccb36f0b0b3679ea3c | 386112d3af789ae84afb55f74a64afb439213c6d | "2019-03-27T16:04:14Z" | python | "2019-04-10T17:55:40Z" |
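A handler-side sketch of where the attributes should appear in the event: the record field names follow the SQS-to-Lambda event record format, while the handler and the sample payload are hypothetical illustrations of the reported symptom (an empty `messageAttributes`).

```python
def get_message_attributes(record):
    """Pull messageAttributes off one SQS event record; per this issue,
    the dict shows up empty even when attributes were sent."""
    return record.get("messageAttributes") or {}

def handler(event, context=None):
    # Return the string value of each message attribute, per record.
    out = []
    for record in event.get("Records", []):
        attrs = get_message_attributes(record)
        out.append({k: v.get("stringValue") for k, v in attrs.items()})
    return out

# Shape of a record as SQS delivers it to Lambda (trimmed):
sample_event = {
    "Records": [{
        "body": "hello",
        "messageAttributes": {
            "trace-id": {"dataType": "String", "stringValue": "abc-123"},
        },
    }]
}
```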
closed | localstack/localstack | https://github.com/localstack/localstack | 1,203 | ["localstack/services/s3/s3_listener.py", "tests/integration/test_s3.py"] | response-content-disposition is not honored on download | When generating a pre-signed URL for download and specifying response-content-disposition to set a target filename, the actual download does not contain the Content-Disposition header; therefore the downloaded file does not have the requested filename.
I have confirmed that this works correctly in S3 itself.
I'd be happy to explore a fix if anyone can point me in the right direction/location. | https://github.com/localstack/localstack/issues/1203 | https://github.com/localstack/localstack/pull/1579 | 418e996c666288c9ef82a678244898cb427deaa0 | abd07163e659904842a535d6e113df2388a97f94 | "2019-03-18T17:11:57Z" | python | "2019-09-20T20:03:53Z" |
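For reference, a hedged boto3 sketch of how the presign is typically made: the `ResponseContentDisposition` value passed in `Params` is what should come back as the `Content-Disposition` header on download. Bucket, key, and filename are placeholders; the network call only runs when explicitly enabled.

```python
import os

def presign_params(bucket, key, filename):
    """Params for generate_presigned_url('get_object', ...); the value of
    ResponseContentDisposition should be echoed back as the
    Content-Disposition header on the actual download."""
    return {
        "Bucket": bucket,
        "Key": key,
        "ResponseContentDisposition": f'attachment; filename="{filename}"',
    }

if os.environ.get("RUN_AGAINST_LOCALSTACK"):
    import boto3  # assumed available
    s3 = boto3.client("s3", endpoint_url="http://localhost:4572")
    url = s3.generate_presigned_url(
        "get_object",
        Params=presign_params("test", "1", "report.csv"),
        ExpiresIn=3600)
    # Against real S3, a GET on this URL carries:
    #   Content-Disposition: attachment; filename="report.csv"
    print(url)
```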
closed | localstack/localstack | https://github.com/localstack/localstack | 1,198 | ["localstack/ext/java/src/main/java/cloud/localstack/Localstack.java", "localstack/ext/java/src/main/java/cloud/localstack/TestUtils.java", "localstack/ext/java/src/main/java/cloud/localstack/docker/Container.java", "localstack/ext/java/src/main/java/cloud/localstack/docker/annotation/LocalstackDockerAnnotationProcessor.java", "localstack/ext/java/src/main/java/cloud/localstack/docker/annotation/LocalstackDockerConfiguration.java", "localstack/ext/java/src/main/java/cloud/localstack/docker/annotation/LocalstackDockerProperties.java", "localstack/ext/java/src/main/java/cloud/localstack/docker/command/PortCommand.java", "localstack/ext/java/src/test/java/cloud/localstack/docker/ContainerTest.java", "localstack/ext/java/src/test/java/cloud/localstack/docker/PortBindingTest.java"] | JUnit random port test issue. | Hi,
I am seeing the following issue. I am running a test with the following annotation:
`@RunWith(LocalstackDockerTestRunner.class)
@LocalstackDockerProperties(randomizePorts = false, services = { "sqs:1120", "sns:232"})`
When the test start I am tailing the docker logs
`docker logs --tail all 67177a3832a2
...
Starting mock SQS (http port 1120)...
Starting mock SNS (http port 232)...
2019-03-15T20:30:34:WARNING:werkzeug: * Debugger is active!
2019-03-15T20:30:34:INFO:werkzeug: * Debugger PIN: 106-165-007
Waiting for all LocalStack services to be ready
Ready.
`
I am seeing the mocks being started on the ports expected per the properties. However, the port forwarding does not account for the ports of the services:
`PORTS
4584-4593/tcp, 0.0.0.0:4567-4583->4567-4583/tcp, 8080/tcp `
So while the services start on the specified ports, the port bindings do not follow.
Is this a bug? Or is there a way to change the default port bindings for selected services at run time?
Thanks | https://github.com/localstack/localstack/issues/1198 | https://github.com/localstack/localstack/pull/1843 | 7f494e7d2cd8b781cfedc0f3662681677fffe15e | 45bafca1889bda7593e282e1c5f92f059ca4cede | "2019-03-15T20:36:23Z" | python | "2019-12-08T17:01:04Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 1,191 | ["localstack/services/awslambda/lambda_api.py", "localstack/services/cloudformation/cloudformation_starter.py", "localstack/services/cloudformation/service_models.py", "localstack/utils/aws/aws_stack.py", "localstack/utils/cloudformation/template_deployer.py", "localstack/utils/common.py", "tests/integration/templates/template1.yaml", "tests/integration/test_cloudformation.py"] | AWS::SNS::Subscription Cloudformation support | Hi
I have issues creating sns subscriptions through Cloudformation. If I attempt to include a `AWS::SNS::Subscription` resource in a Cloudformation template and run `create-stack` then I get a 500 Internal Server error. The full template:
```
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Test'
Resources:
MySnsTopic:
Type: AWS::SNS::Topic
Properties:
TopicName: MySnsTopic
MySnsTopicSubscription:
Type: AWS::SNS::Subscription
Properties:
Protocol: sqs
TopicArn: !Ref MySnsTopic
Endpoint: !GetAtt
- MySqsQueue
- QueueArn
MySqsQueue:
Type: AWS::SQS::Queue
Properties:
QueueName: MySqsQueue
```
If I replace the references with hard-coded arns like this:
```
TopicArn: arn:aws:sns:eu-west-2:123456789012:MySnsTopic
Endpoint: arn:aws:sqs:elasticmq:000000000000:MySqsQueue
```
Then I get no errors but I get a warning:
`No moto Cloudformation support for AWS::SNS::Subscription`.
I can get around this issue by creating the subscription when creating the topic, but then I cannot add any attributes, and in my case I need `RawMessageDelivery` set to true. So my questions are:
1. Is there no support to create `AWS::SNS::Subscription` resources through Cloudformation or am I doing something wrong?
2. If support is not there can I request it as a new feature?
3. Any suggestions on how I can work around this? I know I could do this through a script at startup but would like to not have any additional script steps other than deploying the Cloudformation stack.
| https://github.com/localstack/localstack/issues/1191 | https://github.com/localstack/localstack/pull/1662 | a25cca53916567a6da672c89c6f7a699e2f4406c | dde3337305aac2ccdad4931d32823d6aef2e42da | "2019-03-13T16:14:18Z" | python | "2019-10-19T22:50:35Z" |
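On question 3, a hedged sketch of a tiny post-deploy step (rather than a full startup script): find the SQS subscription on the topic and flip `RawMessageDelivery` after the stack is up. Endpoint URL and ARNs are placeholders matching the template above; the network part only runs when explicitly enabled.

```python
import os

def find_subscription_arn(subscriptions, endpoint_arn):
    """Pick the SubscriptionArn matching a queue ARN out of the
    list returned by list_subscriptions_by_topic()."""
    for sub in subscriptions:
        if sub.get("Endpoint") == endpoint_arn:
            return sub.get("SubscriptionArn")
    return None

if os.environ.get("RUN_AGAINST_LOCALSTACK"):
    import boto3  # assumed available
    sns = boto3.client("sns", endpoint_url="http://localhost:4575",
                       region_name="eu-west-2")
    topic_arn = "arn:aws:sns:eu-west-2:123456789012:MySnsTopic"
    subs = sns.list_subscriptions_by_topic(TopicArn=topic_arn)["Subscriptions"]
    sub_arn = find_subscription_arn(
        subs, "arn:aws:sqs:eu-west-2:123456789012:MySqsQueue")
    if sub_arn:
        sns.set_subscription_attributes(
            SubscriptionArn=sub_arn,
            AttributeName="RawMessageDelivery",
            AttributeValue="true")
```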
closed | localstack/localstack | https://github.com/localstack/localstack | 1,170 | [".github/workflows/asf-updates.yml"] | S3 multipart upload discards ACLs | When uploading a large file (in my case 8MB), ACLs aren't respected:
```bash
dd bs=1M if=/dev/urandom of=small-file count=5
dd bs=1M if=/dev/urandom of=large-file count=8
alias awsl="aws --endpoint-url=http://localhost:4572"
awsl s3api create-bucket --bucket test
awsl s3 cp --acl public-read small-file s3://test/1
awsl s3 cp --acl public-read large-file s3://test/2
awsl s3api get-object-acl --bucket test --key 1
awsl s3api get-object-acl --bucket test --key 2
```
Key 1 returns:
```json
{
"Owner": {
"DisplayName": "webfile",
"ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a"
},
"Grants": [
{
"Grantee": {
"ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a",
"Type": "CanonicalUser"
},
"Permission": "FULL_CONTROL"
},
{
"Grantee": {
"Type": "Group",
"URI": "http://acs.amazonaws.com/groups/global/AllUsers"
},
"Permission": "READ"
}
]
}
```
Key 2 returns:
```json
{
"Owner": {
"DisplayName": "webfile",
"ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a"
},
"Grants": [
{
"Grantee": {
"ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a",
"Type": "CanonicalUser"
},
"Permission": "FULL_CONTROL"
}
]
}
```
Looking through the debug output, the big difference between the two is that the 5 MB upload uses PutObject, while the 8 MB upload uses a multipart upload.
As a workaround, currently switching over to the `aws s3api put-object` command.
| https://github.com/localstack/localstack/issues/1170 | https://github.com/localstack/localstack/pull/9067 | 16b7f53c98d303476e70a1956aa1f193e7b534a7 | dfbaa14635557ab5cf6cb39178f01e3bcbca7313 | "2019-03-05T04:25:55Z" | python | "2023-09-05T07:26:54Z" |
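A related hedged workaround using boto3 directly: the transfer layer switches to multipart above a configurable threshold, so raising `multipart_threshold` keeps an object of this size on the single-request PutObject path, where the ACL is applied. The threshold values mirror boto3's documented 8 MB default; filenames and bucket are from the reproduction above, and the upload only runs when explicitly enabled.

```python
import os

MB = 1024 * 1024

def will_multipart(size_bytes, threshold=8 * MB):
    """True if a transfer of this size would take the multipart path
    (where, per this issue, localstack drops the ACL)."""
    return size_bytes >= threshold

if os.environ.get("RUN_AGAINST_LOCALSTACK"):
    import boto3  # assumed available
    from boto3.s3.transfer import TransferConfig
    s3 = boto3.client("s3", endpoint_url="http://localhost:4572")
    # Raise the threshold so the 8 MB file stays a single PutObject.
    cfg = TransferConfig(multipart_threshold=64 * MB)
    s3.upload_file("large-file", "test", "2", Config=cfg,
                   ExtraArgs={"ACL": "public-read"})
```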
closed | localstack/localstack | https://github.com/localstack/localstack | 1,160 | ["localstack/services/awslambda/lambda_executors.py", "localstack/services/cloudformation/cloudformation_listener.py", "localstack/services/kinesis/kinesis_listener.py", "localstack/services/s3/s3_listener.py", "tests/integration/test_kclpy.py", "tests/integration/test_kinesis.py", "tests/unit/test_lambda.py"] | Java lambda function classpath mix up with localstack-utils-fat.jar | I'm writing a Lambda function with Java8 and try to use AWS SDK for Java to access a S3 bucket.
The AWS Java SDK for S3 is bundled with the java function in version 1.11.510. The function does work on AWS.
When executing the function on localstack the function fails with the following error when trying to access the S3 resource:
```
Exception: Lambda process returned error status code: 1. Output:
Exception in thread "main" java.lang.NoSuchFieldError: SERVICE_ID
at com.amazonaws.services.s3.AmazonS3Client.createRequest(AmazonS3Client.java:4655)
at com.amazonaws.services.s3.AmazonS3Client.createRequest(AmazonS3Client.java:4630)
at com.amazonaws.services.s3.AmazonS3Client.listBuckets(AmazonS3Client.java:976)
at com.amazonaws.services.s3.AmazonS3Client.listBuckets(AmazonS3Client.java:984)
at com.App.handleRequest(App.java:22)
at cloud.localstack.LambdaExecutor.main(LambdaExecutor.java:100)
```
When checking the class that defines the field `SERVICE_ID` with `HandlerContextKey.class.getProtectionDomain().getCodeSource()` the output is
`file:/home/user/.local/lib/python2.7/site-packages/localstack/infra/localstack-utils-fat.jar`, which is not the bundled jar file of the Lambda function but localstack's own jar file.
This results in incompatibilities between different class versions.
Is there a requirement that localstack only works with a specific AWS Java SDK version? If not, how can I get the function to work on localstack?
I'm using localstack v0.8.10 on Windows 10 via Windows Subsystem for Linux (WSL) Ubuntu 18.04.1 LTS.
possible duplicate of #952
UPDATE:
After changing the lambda function to use aws-java-sdk v1.11.310 which is the version included in localstack-utils-fat.jar the lambda function executes properly.
This is still an isolation issue that might result in issues with other classes where no easy workaround is available.
One would rather want to load the jar file in its own classpath (hacky) or apply a proper modularization via OSGi or Java 9 modules (Jigsaw).
closed | localstack/localstack | https://github.com/localstack/localstack | 1,159 | ["localstack/services/awslambda/lambda_executors.py", "localstack/services/cloudformation/cloudformation_listener.py", "localstack/services/kinesis/kinesis_listener.py", "localstack/services/s3/s3_listener.py", "tests/integration/test_kclpy.py", "tests/integration/test_kinesis.py", "tests/unit/test_lambda.py"] | Kinesis register-stream-consumer UnknownOperationException | I am trying to create a stream consumer in Kinesis. I created my stream using
`awslocal kinesis create-stream --stream-name mylogstream --shard-count 2`
which worked. Now I am trying to use lambda consumer, following the tutorial [here](https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html#services-kinesis-configure), hence I did
`awslocal kinesis register-stream-consumer --consumer-name con1 --stream-arn arn:aws:kinesis:us-east-1:000000000000:stream/mylogstream`
which resulted in following error:
>An error occurred (UnknownOperationException) when calling the RegisterStreamConsumer operation:
I tried again with `aws` instead of awslocal:
`aws kinesis register-stream-consumer --consumer-name con1 --stream-arn arn:aws:kinesis:us-east-1:000000000000:stream/mylogstream --endpoint-url=http://localhost:4568`
which again resulted in same error.
I've configured my AWS credentials, etc., but nothing seems to fix the issue.
| https://github.com/localstack/localstack/issues/1159 | https://github.com/localstack/localstack/pull/1817 | a64e1f4e3324fc0b65d7251523fdcd7ba169178c | 78031dd65da9394f8b1b020be01ef02c63c433ee | "2019-03-02T15:56:46Z" | python | "2019-12-01T16:09:25Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 1,139 | [".github/workflows/asf-updates.yml"] | Elasticsearch domain managed by Terraform cannot be updated; request for /tags/? returns 404 |
We use Terraform to create and update resources in Localstack, which has worked for services like S3 and Dynamo so far.
We hit an issue with Elasticsearch domains, where the domain is created successfully but Terraform fails to apply in subsequent runs, when it makes a request to:
```
logs: ---[ REQUEST POST-SIGN ]-----------------------------
logs: GET /2015-01-01/tags/?arn=arn%3Aaws%3Aes%3Aus-east-1%3A000000000000%3Adomain%2Fepdam-local-amd HTTP/1.1
logs: Host: localhost:4578
logs: User-Agent: aws-sdk-go/1.14.31 (go1.9.2; darwin; amd64) APN/1.0 HashiCorp/1.0 Terraform/0.11.8-dev
logs: Authorization: AWS4-HMAC-SHA256 Credential=mock_access_key/20190221/us-west-2/es/aws4_request, SignedHeaders=host;x-amz-date, Signature=26f42429e2af2240466635ab9202c8888617afe9be7b8ef91a8831d6b4160bd1
logs: X-Amz-Date: 20190221T191447Z
logs: Accept-Encoding: gzip
```
and the response is:
```
logs: ---[ RESPONSE ]--------------------------------------
logs: HTTP/1.0 404 NOT FOUND
logs: Connection: close
logs: Content-Length: 233
logs: Access-Control-Allow-Origin: *
logs: Content-Type: text/html
logs: Date: Thu, 21 Feb 2019 19:14:47 GMT
logs: Server: Werkzeug/0.14.1 Python/2.7.15
```
While a request to `localhost:4578/2015-01-01/tags/?arn=...` gets a 404, a request to `localhost:4578/2015-01-01/tags?arn=...` (without the `/` before the query params) is successful.
The reason we are reporting this against Localstack and not [terraform](https://github.com/hashicorp/terraform) or [terraform-provider-aws](https://github.com/terraform-providers/terraform-provider-aws) is that the AWS REST API apparently supports requests with slashes before query parameters, or else Terraform could not be used to manage Elasticsearch domains in AWS. | https://github.com/localstack/localstack/issues/1139 | https://github.com/localstack/localstack/pull/1842 | ee9ca7e0bee91f85c81b658b93751c0cc3edffeb | 7f494e7d2cd8b781cfedc0f3662681677fffe15e | "2019-02-21T22:26:26Z" | python | "2019-12-08T16:59:08Z" |
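Since the mismatch is just the trailing slash before the query string, a sketch of the kind of path normalization a route handler could apply (a model of the fix, not necessarily how localstack's ES API routing is actually structured):

```python
def normalize_path(path):
    """Collapse a trailing slash so '/2015-01-01/tags/' and
    '/2015-01-01/tags' hit the same handler, as the AWS endpoint
    evidently tolerates both forms."""
    # Note: the query string is not part of `path` in WSGI-style routing.
    return path if path == "/" else path.rstrip("/")
```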
closed | localstack/localstack | https://github.com/localstack/localstack | 1,131 | ["localstack/services/awslambda/lambda_api.py"] | Lambda Event Source Mapping From Kinesis Does Not Always Honor Batch Size | I have a setup where a node.js application writes two items to a Kinesis stream using putRecords. The Kinesis stream feeds a Lambda function that is configured with an event source mapping with a batch size of 1. However, when I inspect the records Kinesis feeds the Lambda in a single invocation, I see that the Lambda is processing two records. Both records are on the same shard and have sequence numbers that are close together.
Interestingly, if I switch the node.js publisher to use putRecord instead of putRecords, the Kinesis records get sequence numbers slightly farther apart and the Lambda is fed the records individually.
The java code to configure the event source mapping
```
AWSLambdaAsync lambdaClient = AWSResourceFactory.getLambdaClient();
CreateEventSourceMappingRequest request = new CreateEventSourceMappingRequest();
request.setEventSourceArn(kinesisArn);
request.setFunctionName(functionName);
request.setEnabled(true);
request.setStartingPosition(EventSourcePosition.TRIM_HORIZON);
request.setBatchSize(1);
lambdaClient.createEventSourceMapping(request);
```
The localstack lambda output that shows multiple records being read at a time:
```
{
"Records": [{
"eventID": "shardId-000000000000:49593014281924840790139112603493819506073986345293840386",
"eventSourceARN": "arn:aws:kinesis:us-east-1:000000000000:stream/MyStream",
"kinesis": {
"partitionKey": "ed0f756b97234f4fae9e3d1a886b8fbc",
"data": "myData1",
"sequenceNumber": "49593014281924840790139112603493819506073986345293840386"
}
}, {
"eventID": "shardId-000000000000:49593014281924840790139112603495028431893600974468546562",
"eventSourceARN": "arn:aws:kinesis:us-east-1:000000000000:stream/MyStream",
"kinesis": {
"partitionKey": "abee63a9299385699d65e89a290a9065",
"data": "myData2",
"sequenceNumber": "49593014281924840790139112603495028431893600974468546562"
}
}]
}
```
| https://github.com/localstack/localstack/issues/1131 | https://github.com/localstack/localstack/pull/2110 | a9b593c1699aeef9efbfcfc45ff13155fe8662e8 | a7f9ae2d2f1899d188fa2f4d36aea3160bcf69ca | "2019-02-16T01:01:49Z" | python | "2020-03-01T17:01:34Z" |
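The expected dispatch semantics can be modeled as a chunker the event-source poller would apply, so that `BatchSize=1` yields one record per invocation even when putRecords lands records with adjacent sequence numbers. This is an illustration of the contract, not localstack's actual code.

```python
def chunk_records(records, batch_size):
    """Split fetched shard records into per-invocation batches that
    honor the event source mapping's BatchSize."""
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    return [records[i:i + batch_size]
            for i in range(0, len(records), batch_size)]

# With BatchSize=1, two adjacent records mean two separate invocations:
records = [{"partitionKey": "p1", "data": "myData1"},
           {"partitionKey": "p2", "data": "myData2"}]
batches = chunk_records(records, batch_size=1)
```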
closed | localstack/localstack | https://github.com/localstack/localstack | 1,116 | ["localstack/services/awslambda/lambda_api.py", "localstack/utils/aws/aws_models.py", "localstack/utils/testutil.py", "tests/unit/test_lambda.py"] | lambda/TagResource method missing | The TagResource api call method does not seem to be available. Get a 404 (NOT FOUND) when calling this method. | https://github.com/localstack/localstack/issues/1116 | https://github.com/localstack/localstack/pull/1242 | 89ab5e3ea74aca8a44c07dd59c881e9e541fe46f | 101f34c039c5ed5b5f18eb2473df23fe86d82ef3 | "2019-02-08T13:44:33Z" | python | "2019-04-10T22:06:42Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 1,102 | ["README.md", "bin/docker-entrypoint.sh", "localstack/config.py"] | Consider adding a prefix to docker environment variables | Hi,
Since environment variables are global, I think they should be prefixed by `LOCALSTACK_` in the docker image. For example `SERVICES` environment variable might conflict at some point with something else, so naming it `LOCALSTACK_SERVICES` would be nicer.
This could be optional to avoid breaking changes. Localstack would look for `LOCALSTACK_SERVICES` first, and if it's not defined would fallback to `SERVICES` just as today.
As far as I know, most (maybe all) official docker images do this. I think it's a good practice. See:
- https://hub.docker.com/_/postgres
- https://hub.docker.com/_/nginx
- https://hub.docker.com/_/mysql
Thanks. | https://github.com/localstack/localstack/issues/1102 | https://github.com/localstack/localstack/pull/1181 | 1d1b3a0ca574af85766f54d1b30d26f777c71844 | 56b1cfedef29180d528d1de63d1ee95f4392ddb9 | "2019-01-29T18:21:26Z" | python | "2019-03-12T02:54:44Z" |
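The proposed fallback is essentially a one-line lookup; a sketch of what the config read could look like (function name and prefix handling are illustrative, not localstack's actual config code):

```python
import os

def env_config(name, environ=None, prefix="LOCALSTACK_"):
    """Prefer the prefixed variable; fall back to the bare name so
    existing setups keep working."""
    environ = os.environ if environ is None else environ
    value = environ.get(prefix + name)
    return value if value is not None else environ.get(name)
```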
closed | localstack/localstack | https://github.com/localstack/localstack | 1,073 | ["README.md", "localstack/constants.py", "requirements.txt"] | Update code climate and badge | https://codeclimate.com/github/atlassian/localstack is the old repo, is there a new code climate check for the new repo? The README is pointing to this old code climate project. | https://github.com/localstack/localstack/issues/1073 | https://github.com/localstack/localstack/pull/1075 | ff5091b46229d2940066c991301ef27192cd1377 | 66e11a09cbda5c792e417b4b9fa42e39b603fcfb | "2018-12-31T18:28:13Z" | python | "2019-01-04T09:58:13Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 1,066 | ["README.md", "bin/Dockerfile.base", "localstack/constants.py", "localstack/services/dynamodb/dynamodb_starter.py", "localstack/services/infra.py", "localstack/utils/cloudwatch/cloudwatch_util.py", "localstack/utils/common.py", "requirements.txt", "tests/integration/test_integration.py"] | Coverage is locked to 4.0.3 | Why has coverage been locked to `==4.0.3` for two years? Why not `>=4.0.0`?
If I add localstack to my pipenv, I can't lock because of this unnecessary dependency. | https://github.com/localstack/localstack/issues/1066 | https://github.com/localstack/localstack/pull/1072 | 26f44a79e9bf47c1a24053a32a83480e7eeb9480 | 75df19b29afaaf2e10c48013bfc214e650741aec | "2018-12-20T20:07:44Z" | python | "2019-01-01T15:01:09Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 1,061 | ["localstack/ext/java/src/test/java/cloud/localstack/S3FeaturesTest.java", "localstack/services/s3/s3_listener.py", "requirements.txt", "tests/integration/test_s3.py"] | Localstack S3 listNextBatchOfObjects pagination does not work as expected |
Found a pagination bug with localstack's S3, primarily against s3Client.listNextBatchOfObjects.
```
private void test() {
    s3Client.putObject(s3BucketName, "key1", "content");
    s3Client.putObject(s3BucketName, "key2", "content");
    s3Client.putObject(s3BucketName, "key3", "content");

    ListObjectsRequest listObjectsRequest = new ListObjectsRequest()
            .withBucketName(s3BucketName)
            .withPrefix("")
            .withDelimiter("/")
            .withMaxKeys(1); // 1 key per request; setMaxKeys() returns void and would break the chain

    ObjectListing objectListing = s3Client.listObjects(listObjectsRequest);

    List<SomeObject> someObjList = new ArrayList<>();
    someObjList.addAll(mapFilesToSomeObject(objectListing)); // puts at least 1 item into the list

    while (objectListing.isTruncated()) {
        objectListing = s3Client.listNextBatchOfObjects(objectListing);
        someObjList.addAll(mapFilesToSomeObject(objectListing));
    }

    assertEquals(3, someObjList.size());
}

private List<SomeObject> mapFilesToSomeObject(ObjectListing objectListing) {
    return objectListing.getObjectSummaries()
            .stream()
            .map(S3ObjectSummary::getKey)
            .map(x -> this.toSomeObject(x)) // some deserialization method
            .collect(Collectors.toList());
}
```
This while loop only retrieves the second item, on its first iteration. It enters the loop once more to grab the third item, since the condition is still true, but that second call to listNextBatchOfObjects returns an empty list, so the third item is never retrieved. I confirmed that listNextBatchOfObjects only ever retrieves the "second page" against localstack, across varying item counts and page sizes. Against AWS S3, this works as expected.
Refer to s3Client.listNextBatchOfObjects:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html#listNextBatchOfObjects-com.amazonaws.services.s3.model.ObjectListing- | https://github.com/localstack/localstack/issues/1061 | https://github.com/localstack/localstack/pull/1895 | db088d532780ea4aaf0df7802e332ab7f74b852c | 917d8f0ddc797e938aeaac11d196aeb12f378232 | "2018-12-17T17:27:13Z" | python | "2019-12-21T12:40:17Z" |
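The same loop in Python, run against a stub client, shows the contract the Java snippet relies on: each continuation call must advance from the previous listing's marker until truncation ends, never reset to the second page. Names loosely mirror the SDK; the stub stands in for S3 and is purely illustrative.

```python
class StubListing:
    def __init__(self, keys, next_marker, truncated):
        self.keys = keys
        self.next_marker = next_marker
        self.is_truncated = truncated

class StubS3:
    """Stands in for S3: pages of one key each."""
    def __init__(self, keys):
        self._keys = keys

    def list_objects(self, marker=""):
        idx = self._keys.index(marker) + 1 if marker else 0
        page = self._keys[idx:idx + 1]
        truncated = idx + 1 < len(self._keys)
        return StubListing(page, page[-1] if page else None, truncated)

    def list_next_batch_of_objects(self, prev):
        # Must continue from the previous page's marker, never reset —
        # the behavior the bug report says localstack violates.
        return self.list_objects(marker=prev.next_marker)

def collect_all_keys(s3):
    listing = s3.list_objects()
    keys = list(listing.keys)
    while listing.is_truncated:
        listing = s3.list_next_batch_of_objects(listing)
        keys.extend(listing.keys)
    return keys
```

Against this stub the loop yields all three keys; per the report, localstack's second continuation call instead returns an empty page.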
closed | localstack/localstack | https://github.com/localstack/localstack | 1,055 | [".travis.yml", "localstack/ext/java/src/main/java/cloud/localstack/docker/LocalstackDocker.java", "localstack/services/awslambda/lambda_executors.py", "localstack/utils/server/multiserver.py"] | Argument list too long when a lambda function event body is too large | I'm currently working on a process by which a Kinesis stream triggers a Lambda function to process a batch of records. Whenever the `putRecords` method is used with a large number of records, the call to trigger the Lambda fails because the argument list is too long. The command is:
```
CONTAINER_ID="$(docker create -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" "lambci/lambda:nodejs8.10" "index.handler")"
```
Instead of sending the event body to the Lambda via the argument list, the body should be written to a file that is copied into the container, and the command should reference that file in its arguments.
| https://github.com/localstack/localstack/issues/1055 | https://github.com/localstack/localstack/pull/1474 | 3f9862bacdf9678517d38920b7f811f6d96f4e26 | 53d8e7ee9ab782e9421b715495a2d9bea8d2661e | "2018-12-12T15:33:07Z" | python | "2019-08-13T20:57:01Z" |
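A hedged sketch of that suggestion: dump the event body to a temp file, bind-mount it, and point the container at the file path rather than inlining the body in the environment. The `AWS_LAMBDA_EVENT_BODY_FILE` variable and mount path are hypothetical names for illustration, not flags the `lambci/lambda` image actually reads.

```python
import json
import tempfile

def build_docker_cmd(event_body, image="lambci/lambda:nodejs8.10",
                     handler="index.handler"):
    """Pass the (possibly huge) event via a bind-mounted file instead of
    -e AWS_LAMBDA_EVENT_BODY=..., sidestepping the kernel's argv/env
    size limit. Returns (docker argv, path of the written event file)."""
    f = tempfile.NamedTemporaryFile("w", suffix=".json", delete=False)
    json.dump(event_body, f)
    f.close()
    cmd = [
        "docker", "create",
        "-v", f"{f.name}:/tmp/event.json",
        # Hypothetical variable: tells the entrypoint where to read the body.
        "-e", "AWS_LAMBDA_EVENT_BODY_FILE=/tmp/event.json",
        image, handler,
    ]
    return cmd, f.name

cmd, path = build_docker_cmd({"Records": []})
```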
closed | localstack/localstack | https://github.com/localstack/localstack | 1,052 | ["requirements.txt"] | PyYAML requirements inconsistent | Similar to #488. Using `localstack-0.8.8`, `Python 3.6.7` and `pipenv 2018.10.13`.
Installing via `pipenv install localstack`, there seems to be an internal inconsistency in the PyYAML requirements. Localstack wants ==4.2b4, but the AWS CLI wants >=3.10,<=3.13. Everything else seems to be happy with anything.
`pipenv graph` gives (search for PyYAML and pyyaml to find inconsistencies):
```
localstack==0.8.8
- airspeed [required: ==0.5.5.dev20160812, installed: 0.5.5.dev20160812]
- cachetools [required: ==0.8.0, installed: 0.8.0]
- six [required: Any, installed: 1.10.0]
- awscli [required: >=1.14.18, installed: 1.16.72]
- botocore [required: ==1.12.62, installed: 1.12.62]
- docutils [required: >=0.10, installed: 0.14]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- python-dateutil [required: >=2.1,<3.0.0, installed: 2.7.5]
- six [required: >=1.5, installed: 1.10.0]
- urllib3 [required: >=1.20,<1.25, installed: 1.22]
- colorama [required: >=0.2.5,<=0.3.9, installed: 0.3.9]
- docutils [required: >=0.10, installed: 0.14]
- PyYAML [required: >=3.10,<=3.13, installed: 4.2b4]
- rsa [required: >=3.1.2,<=3.5.0, installed: 3.4.2]
- pyasn1 [required: >=0.1.3, installed: 0.4.4]
- s3transfer [required: >=0.1.12,<0.2.0, installed: 0.1.13]
- botocore [required: >=1.3.0,<2.0.0, installed: 1.12.62]
- docutils [required: >=0.10, installed: 0.14]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- python-dateutil [required: >=2.1,<3.0.0, installed: 2.7.5]
- six [required: >=1.5, installed: 1.10.0]
- urllib3 [required: >=1.20,<1.25, installed: 1.22]
- boto [required: ==2.46.1, installed: 2.46.1]
- boto3 [required: >=1.4.5, installed: 1.9.62]
- botocore [required: >=1.12.62,<1.13.0, installed: 1.12.62]
- docutils [required: >=0.10, installed: 0.14]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- python-dateutil [required: >=2.1,<3.0.0, installed: 2.7.5]
- six [required: >=1.5, installed: 1.10.0]
- urllib3 [required: >=1.20,<1.25, installed: 1.22]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- s3transfer [required: >=0.1.10,<0.2.0, installed: 0.1.13]
- botocore [required: >=1.3.0,<2.0.0, installed: 1.12.62]
- docutils [required: >=0.10, installed: 0.14]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- python-dateutil [required: >=2.1,<3.0.0, installed: 2.7.5]
- six [required: >=1.5, installed: 1.10.0]
- urllib3 [required: >=1.20,<1.25, installed: 1.22]
- coverage [required: ==4.0.3, installed: 4.0.3]
- docopt [required: ==0.6.2, installed: 0.6.2]
- elasticsearch [required: ==6.2.0, installed: 6.2.0]
- urllib3 [required: >=1.21.1,<1.23, installed: 1.22]
- flake8 [required: ==3.6.0, installed: 3.6.0]
- mccabe [required: >=0.6.0,<0.7.0, installed: 0.6.1]
- pycodestyle [required: >=2.4.0,<2.5.0, installed: 2.4.0]
- pyflakes [required: >=2.0.0,<2.1.0, installed: 2.0.0]
- setuptools [required: >=30, installed: 40.6.2]
- flake8-quotes [required: >=0.11.0, installed: 1.0.0]
- flake8 [required: Any, installed: 3.6.0]
- mccabe [required: >=0.6.0,<0.7.0, installed: 0.6.1]
- pycodestyle [required: >=2.4.0,<2.5.0, installed: 2.4.0]
- pyflakes [required: >=2.0.0,<2.1.0, installed: 2.0.0]
- setuptools [required: >=30, installed: 40.6.2]
- flask [required: ==0.12.3, installed: 0.12.3]
- click [required: >=2.0, installed: 7.0]
- itsdangerous [required: >=0.21, installed: 1.1.0]
- Jinja2 [required: >=2.4, installed: 2.10]
- MarkupSafe [required: >=0.23, installed: 1.1.0]
- Werkzeug [required: >=0.7, installed: 0.14.1]
- flask-cors [required: ==3.0.3, installed: 3.0.3]
- Flask [required: >=0.9, installed: 0.12.3]
- click [required: >=2.0, installed: 7.0]
- itsdangerous [required: >=0.21, installed: 1.1.0]
- Jinja2 [required: >=2.4, installed: 2.10]
- MarkupSafe [required: >=0.23, installed: 1.1.0]
- Werkzeug [required: >=0.7, installed: 0.14.1]
- Six [required: Any, installed: 1.10.0]
- flask-swagger [required: ==0.2.12, installed: 0.2.12]
- Flask [required: >=0.10, installed: 0.12.3]
- click [required: >=2.0, installed: 7.0]
- itsdangerous [required: >=0.21, installed: 1.1.0]
- Jinja2 [required: >=2.4, installed: 2.10]
- MarkupSafe [required: >=0.23, installed: 1.1.0]
- Werkzeug [required: >=0.7, installed: 0.14.1]
- PyYAML [required: >=3.0, installed: 4.2b4]
- jsonpath-rw [required: ==1.4.0, installed: 1.4.0]
- decorator [required: Any, installed: 4.3.0]
- ply [required: Any, installed: 3.11]
- six [required: Any, installed: 1.10.0]
- localstack-client [required: ==0.6, installed: 0.6]
- boto3 [required: Any, installed: 1.9.62]
- botocore [required: >=1.12.62,<1.13.0, installed: 1.12.62]
- docutils [required: >=0.10, installed: 0.14]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- python-dateutil [required: >=2.1,<3.0.0, installed: 2.7.5]
- six [required: >=1.5, installed: 1.10.0]
- urllib3 [required: >=1.20,<1.25, installed: 1.22]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- s3transfer [required: >=0.1.10,<0.2.0, installed: 0.1.13]
- botocore [required: >=1.3.0,<2.0.0, installed: 1.12.62]
- docutils [required: >=0.10, installed: 0.14]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- python-dateutil [required: >=2.1,<3.0.0, installed: 2.7.5]
- six [required: >=1.5, installed: 1.10.0]
- urllib3 [required: >=1.20,<1.25, installed: 1.22]
- localstack-ext [required: >=0.8.6, installed: 0.8.6]
- dnslib [required: ==0.9.7, installed: 0.9.7]
- pyaes [required: ==1.6.0, installed: 1.6.0]
- pyminifier [required: ==2.1, installed: 2.1]
- srp-ext [required: ==1.0.7.1, installed: 1.0.7.1]
- six [required: Any, installed: 1.10.0]
- warrant-ext [required: >=0.6.1.1, installed: 0.6.1.1]
- boto3 [required: >=1.4.3, installed: 1.9.62]
- botocore [required: >=1.12.62,<1.13.0, installed: 1.12.62]
- docutils [required: >=0.10, installed: 0.14]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- python-dateutil [required: >=2.1,<3.0.0, installed: 2.7.5]
- six [required: >=1.5, installed: 1.10.0]
- urllib3 [required: >=1.20,<1.25, installed: 1.22]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- s3transfer [required: >=0.1.10,<0.2.0, installed: 0.1.13]
- botocore [required: >=1.3.0,<2.0.0, installed: 1.12.62]
- docutils [required: >=0.10, installed: 0.14]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- python-dateutil [required: >=2.1,<3.0.0, installed: 2.7.5]
- six [required: >=1.5, installed: 1.10.0]
- urllib3 [required: >=1.20,<1.25, installed: 1.22]
- envs [required: >=0.3.0, installed: 1.2.6]
- mock [required: >=2.0.0, installed: 2.0.0]
- pbr [required: >=0.11, installed: 5.1.1]
- six [required: >=1.9, installed: 1.10.0]
- python-jose-ext [required: >=1.3.2.4, installed: 1.3.2.4]
- ecdsa [required: <1.0, installed: 0.13]
- future [required: <1.0, installed: 0.17.1]
- pycryptodomex [required: ==3.4.9, installed: 3.4.9]
- six [required: <2.0, installed: 1.10.0]
- requests [required: >=2.13.0, installed: 2.20.0]
- certifi [required: >=2017.4.17, installed: 2018.11.29]
- chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
- idna [required: >=2.5,<2.8, installed: 2.7]
- urllib3 [required: >=1.21.1,<1.25, installed: 1.22]
- moto-ext [required: ==1.3.4, installed: 1.3.4]
- aws-xray-sdk [required: >=0.93,<0.96, installed: 0.95]
- jsonpickle [required: Any, installed: 1.0]
- requests [required: Any, installed: 2.20.0]
- certifi [required: >=2017.4.17, installed: 2018.11.29]
- chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
- idna [required: >=2.5,<2.8, installed: 2.7]
- urllib3 [required: >=1.21.1,<1.25, installed: 1.22]
- wrapt [required: Any, installed: 1.10.11]
- boto [required: >=2.36.0, installed: 2.46.1]
- boto3 [required: >=1.6.16, installed: 1.9.62]
- botocore [required: >=1.12.62,<1.13.0, installed: 1.12.62]
- docutils [required: >=0.10, installed: 0.14]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- python-dateutil [required: >=2.1,<3.0.0, installed: 2.7.5]
- six [required: >=1.5, installed: 1.10.0]
- urllib3 [required: >=1.20,<1.25, installed: 1.22]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- s3transfer [required: >=0.1.10,<0.2.0, installed: 0.1.13]
- botocore [required: >=1.3.0,<2.0.0, installed: 1.12.62]
- docutils [required: >=0.10, installed: 0.14]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- python-dateutil [required: >=2.1,<3.0.0, installed: 2.7.5]
- six [required: >=1.5, installed: 1.10.0]
- urllib3 [required: >=1.20,<1.25, installed: 1.22]
- botocore [required: >=1.9.16, installed: 1.12.62]
- docutils [required: >=0.10, installed: 0.14]
- jmespath [required: >=0.7.1,<1.0.0, installed: 0.9.3]
- python-dateutil [required: >=2.1,<3.0.0, installed: 2.7.5]
- six [required: >=1.5, installed: 1.10.0]
- urllib3 [required: >=1.20,<1.25, installed: 1.22]
- cookies [required: Any, installed: 2.2.1]
- cryptography [required: >=2.0.0, installed: 2.4.2]
- asn1crypto [required: >=0.21.0, installed: 0.24.0]
- cffi [required: >=1.7,!=1.11.3, installed: 1.11.5]
- pycparser [required: Any, installed: 2.19]
- idna [required: >=2.1, installed: 2.7]
- six [required: >=1.4.1, installed: 1.10.0]
- docker [required: >=2.5.1, installed: 3.6.0]
- docker-pycreds [required: >=0.3.0, installed: 0.4.0]
- six [required: >=1.4.0, installed: 1.10.0]
- requests [required: >=2.14.2,!=2.18.0, installed: 2.20.0]
- certifi [required: >=2017.4.17, installed: 2018.11.29]
- chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
- idna [required: >=2.5,<2.8, installed: 2.7]
- urllib3 [required: >=1.21.1,<1.25, installed: 1.22]
- six [required: >=1.4.0, installed: 1.10.0]
- websocket-client [required: >=0.32.0, installed: 0.54.0]
- six [required: Any, installed: 1.10.0]
- Jinja2 [required: >=2.7.3, installed: 2.10]
- MarkupSafe [required: >=0.23, installed: 1.1.0]
- jsondiff [required: ==1.1.1, installed: 1.1.1]
- mock [required: Any, installed: 2.0.0]
- pbr [required: >=0.11, installed: 5.1.1]
- six [required: >=1.9, installed: 1.10.0]
- pyaml [required: Any, installed: 18.11.0]
- PyYAML [required: Any, installed: 4.2b4]
- python-dateutil [required: >=2.1,<3.0.0, installed: 2.7.5]
- six [required: >=1.5, installed: 1.10.0]
- python-jose [required: <3.0.0, installed: 2.0.2]
- ecdsa [required: <1.0, installed: 0.13]
- future [required: <1.0, installed: 0.17.1]
- pycryptodome [required: >=3.3.1,<4.0.0, installed: 3.7.2]
- six [required: <2.0, installed: 1.10.0]
- pytz [required: Any, installed: 2018.7]
- requests [required: >=2.5, installed: 2.20.0]
- certifi [required: >=2017.4.17, installed: 2018.11.29]
- chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
- idna [required: >=2.5,<2.8, installed: 2.7]
- urllib3 [required: >=1.21.1,<1.25, installed: 1.22]
- responses [required: >=0.9.0, installed: 0.10.4]
- requests [required: >=2.0, installed: 2.20.0]
- certifi [required: >=2017.4.17, installed: 2018.11.29]
- chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
- idna [required: >=2.5,<2.8, installed: 2.7]
- urllib3 [required: >=1.21.1,<1.25, installed: 1.22]
- six [required: Any, installed: 1.10.0]
- six [required: >1.9, installed: 1.10.0]
- werkzeug [required: Any, installed: 0.14.1]
- xmltodict [required: Any, installed: 0.10.2]
- nose [required: ==1.3.7, installed: 1.3.7]
- psutil [required: ==5.4.3, installed: 5.4.3]
- pyopenssl [required: ==17.5.0, installed: 17.5.0]
- cryptography [required: >=2.1.4, installed: 2.4.2]
- asn1crypto [required: >=0.21.0, installed: 0.24.0]
- cffi [required: >=1.7,!=1.11.3, installed: 1.11.5]
- pycparser [required: Any, installed: 2.19]
- idna [required: >=2.1, installed: 2.7]
- six [required: >=1.4.1, installed: 1.10.0]
- six [required: >=1.5.2, installed: 1.10.0]
- python-coveralls [required: ==2.7.0, installed: 2.7.0]
- coverage [required: ==4.0.3, installed: 4.0.3]
- PyYAML [required: Any, installed: 4.2b4]
- requests [required: Any, installed: 2.20.0]
- certifi [required: >=2017.4.17, installed: 2018.11.29]
- chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
- idna [required: >=2.5,<2.8, installed: 2.7]
- urllib3 [required: >=1.21.1,<1.25, installed: 1.22]
- sh [required: Any, installed: 1.12.14]
- six [required: Any, installed: 1.10.0]
- pyyaml [required: ==4.2b4, installed: 4.2b4]
- requests [required: ==2.20.0, installed: 2.20.0]
- certifi [required: >=2017.4.17, installed: 2018.11.29]
- chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
- idna [required: >=2.5,<2.8, installed: 2.7]
- urllib3 [required: >=1.21.1,<1.25, installed: 1.22]
- requests-aws4auth [required: ==0.9, installed: 0.9]
- requests [required: Any, installed: 2.20.0]
- certifi [required: >=2017.4.17, installed: 2018.11.29]
- chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
- idna [required: >=2.5,<2.8, installed: 2.7]
- urllib3 [required: >=1.21.1,<1.25, installed: 1.22]
- six [required: ==1.10.0, installed: 1.10.0]
- subprocess32-ext [required: ==3.2.8.2, installed: 3.2.8.2]
- xmltodict [required: ==0.10.2, installed: 0.10.2]
``` | https://github.com/localstack/localstack/issues/1052 | https://github.com/localstack/localstack/pull/1062 | 5970a62426b0c6fbbf64f2c480fc32128f0de9f0 | 696a4ca575e0684b44ea47a3f9af9b8095f0cf1a | "2018-12-10T22:59:38Z" | python | "2018-12-21T15:38:06Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 1,046 | ["README.md", "bin/Dockerfile.base", "localstack/constants.py", "localstack/services/dynamodb/dynamodb_starter.py", "localstack/services/infra.py", "localstack/utils/cloudwatch/cloudwatch_util.py", "localstack/utils/common.py", "requirements.txt", "tests/integration/test_integration.py"] | docker build from repo clone error | After cloning the repo and running: `docker build -t localstack:latest .`
I can see the following errors:
```
Starting mock Lambda service (http port 4574)...
ERROR: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4c7b4e7c10>: Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/generic_proxy.py", line 216, in forward
headers=forward_headers, stream=True)
File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/api.py", line 116, in post
return request('post', url, data=data, json=json, **kwargs)
File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/sessions.py", line 524, in request
resp = self.send(prep, **send_kwargs)
File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/sessions.py", line 637, in send
r = adapter.send(request, **kwargs)
File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4c7b4e7c10>: Failed to establish a new connection: [Errno 111] Connection refused',))
```
I see the same thing when running `localstack start --docker` which is what I originally tried as I would like to use localstack for development purposes in an existing stack.
MacOS Mojave 10.14
Docker for Mac: 18.06.1-ce-mac73 (26764) | https://github.com/localstack/localstack/issues/1046 | https://github.com/localstack/localstack/pull/1072 | 26f44a79e9bf47c1a24053a32a83480e7eeb9480 | 75df19b29afaaf2e10c48013bfc214e650741aec | "2018-12-08T19:33:13Z" | python | "2019-01-01T15:01:09Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 1,037 | ["localstack/services/s3/s3_listener.py", "localstack/services/s3/s3_starter.py", "tests/integration/test_s3.py"] | S3 Put/Get Object is Collapsing XML, causes MD5 Digest validation errors | Loving this tool so far! I recently ran into an issue where it seems that performing a Put or a Get operation against my local S3 bucket is collapsing XML-like structures. This in turn causes MD5 digest validation errors.
I have an S3 bucket...
```
aws --endpoint-url=http://localhost:4572 s3api create-bucket --bucket my-bucket
```
I start with a file called **test.xml**:
```
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl"?>
<submission>
<field1>fwefwefewfew</field1>
<field2>fwefwefwe</field2>
<field3>fefwefwe</field3>
</submission>
```
I then put this object into S3 using the following command:
```
aws --endpoint-url=http://localhost:4572 s3api put-object --bucket my-bucket --key data/test1 --body test.xml
```
I'll grab the file back using the following command:
```
aws --endpoint-url=http://localhost:4572 s3api get-object --bucket my-bucket --key data/test1 test-back.xml
```
The file now looks like this:
```
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl"?>
<submission><field1>fwefwefewfew</field1><field2>fwefwefwe</field2><field3>fefwefwe</field3></submission>
```
I found this while troubleshooting the following exception stack:
```
com.amazonaws.SdkClientException: Unable to verify integrity of data download. Client calculated content hash didn't match hash calculated by Amazon S3. The data may be corrupt.
at com.amazonaws.services.s3.internal.DigestValidationInputStream.validateMD5Digest(DigestValidationInputStream.java:79)
at com.amazonaws.services.s3.internal.DigestValidationInputStream.read(DigestValidationInputStream.java:61)
at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:72)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at com.amazonaws.util.IOUtils.toByteArray(IOUtils.java:44)
at com.amazonaws.util.IOUtils.toString(IOUtils.java:58)
at com.amazonaws.services.s3.AmazonS3Client.getObjectAsString(AmazonS3Client.java:1485)
```
I was able to work around this bug by collapsing the XML structure myself prior to adding the file to S3. This is not ideal, though.
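For reference, my workaround looks roughly like this — a simplified sketch (the `collapse_xml` helper and its regex are my own approximation, not localstack's actual behavior):

```python
import hashlib
import re

def collapse_xml(text: str) -> str:
    """Collapse whitespace between tags before upload, so the bytes we PUT
    already match what the mock later returns (my guess at its behavior)."""
    return re.sub(r">\s+<", "><", text.strip())

def content_md5(text: str) -> str:
    """Hex MD5 of the exact bytes handed to put-object."""
    return hashlib.md5(text.encode("utf-8")).hexdigest()

original = """<submission>
  <field1>fwefwefewfew</field1>
  <field2>fwefwefwe</field2>
</submission>"""

collapsed = collapse_xml(original)
print(collapsed)
print(content_md5(collapsed))
```

With the content pre-collapsed, the digest computed locally matches the digest of what comes back from the mock, so the SDK's integrity check passes.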
Any ideas on why this is happening? I am using localstack version 0.8.8, installed with pip.
Thanks for any help!
| https://github.com/localstack/localstack/issues/1037 | https://github.com/localstack/localstack/pull/1814 | 03c69b1c1f29a57d865b70890347d9510ae22c3a | 9f7df7b74a1492f15d9f9515623a233ce7fcb38f | "2018-12-03T18:55:47Z" | python | "2019-11-28T20:47:28Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 1035 | ["bin/Dockerfile.base", "localstack/constants.py", "localstack/services/install.py"] | Localstack Elasticsearch plugin Ingest User Agent Processor not available | The plugin `Ingest User Agent Processor` is installed by default for Elasticsearch (ELK) on AWS. That is not the case in Localstack, and I think we would basically expect it.
In addition, I was not able to install it manually through command `bin/elasticsearch-plugin install ingest-user-agent` as bin/elasticsearch-plugin is missing. | https://github.com/localstack/localstack/issues/1035 | https://github.com/localstack/localstack/pull/1082 | 1c5f2e9650155a839cc842a9cd07faf3e76ed5d2 | 423106ea1b876fbfabe11bd34438c111f2d05ea8 | "2018-11-30T10:45:45Z" | python | "2019-01-27T22:53:21Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 1,003 | ["localstack/package.json"] | Kinesis persistence not working | Hi there,
We are executing localstack in a docker-compose environment defined in this docker-compose.yml:
```
version: '2.1'
services:
localstack:
image: localstack/localstack
ports:
- "4567-4583:4567-4583"
environment:
- SERVICES=s3,dynamodb,sns,sqs,kinesis,ses
- DEBUG=${DEBUG- } .
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- DOCKER_HOST=unix:///var/run/docker.sock
- HOSTNAME_EXTERNAL=localstack
- DATA_DIR=/tmp/localstack/data
volumes:
- ./localstack-persistence:/tmp/localstack/data
```
We enabled persistence in localstack to persist S3 and DynamoDB. As a side effect, Kinesis state (which we don't care about) gets persisted as well.
Kinesis persistence appears to be broken at startup:
How to reproduce:
Start a clean localstack container and create a Kinesis stream:
```
docker-compose up
aws kinesis --region us-east-1 --endpoint-url=http://localhost:4568 create-stream --stream-name test --shard-count 1
aws kinesis --region us-east-1 --endpoint-url=http://localhost:4568 list-streams
{
"StreamNames": [
"test"
]
}
^CGracefully stopping... (press Ctrl+C again to force)
```
Start the container again so it reads the persisted data:
```
docker-compose up
localstack_1 | 2018-11-08 13:37:44,050 CRIT Supervisor running as root (no user in config file)
localstack_1 | 2018-11-08 13:37:44,052 INFO supervisord started with pid 1
localstack_1 | 2018-11-08 13:37:45,056 INFO spawned: 'dashboard' with pid 10
localstack_1 | 2018-11-08 13:37:45,058 INFO spawned: 'infra' with pid 11
localstack_1 | (. .venv/bin/activate; bin/localstack web)
localstack_1 | (. .venv/bin/activate; exec bin/localstack start)
localstack_1 | Starting local dev environment. CTRL-C to quit.
localstack_1 | 2018-11-08T13:37:45:INFO:werkzeug: * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
localstack_1 | 2018-11-08T13:37:45:INFO:werkzeug: * Restarting with stat
localstack_1 | 2018-11-08T13:37:46:WARNING:werkzeug: * Debugger is active!
localstack_1 | 2018-11-08 13:37:46,196 INFO success: dashboard entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
localstack_1 | 2018-11-08 13:37:46,196 INFO success: infra entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
localstack_1 | 2018-11-08T13:37:46:INFO:werkzeug: * Debugger PIN: 319-990-396
localstack_1 | Starting mock DynamoDB (http port 4569)...
localstack_1 | Starting mock SES (http port 4579)...
localstack_1 | Starting mock Kinesis (http port 4568)...
localstack_1 | Starting mock S3 (http port 4572)...
localstack_1 | Starting mock SQS (http port 4576)...
localstack_1 | Starting mock SNS (http port 4575)...
localstack_1 | 2018-11-08T13:37:50:ERROR:localstack.services.generic_proxy: Error forwarding request: ('Connection aborted.', BadStatusLine("''",)) Traceback (most recent call last):
localstack_1 | File "/opt/code/localstack/localstack/services/generic_proxy.py", line 201, in forward
localstack_1 | headers=forward_headers, stream=True)
localstack_1 | File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/api.py", line 112, in post
localstack_1 | return request('post', url, data=data, json=json, **kwargs)
localstack_1 | File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/api.py", line 58, in request
localstack_1 | return session.request(method=method, url=url, **kwargs)
localstack_1 | File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/sessions.py", line 508, in request
localstack_1 | resp = self.send(prep, **send_kwargs)
localstack_1 | File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/sessions.py", line 618, in send
localstack_1 | r = adapter.send(request, **kwargs)
localstack_1 | File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/adapters.py", line 490, in send
localstack_1 | raise ConnectionError(err, request=request)
localstack_1 | ConnectionError: ('Connection aborted.', BadStatusLine("''",))
localstack_1 |
localstack_1 | 2018-11-08T13:37:51:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4565): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb0d286d290>: Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):
localstack_1 | File "/opt/code/localstack/localstack/services/generic_proxy.py", line 201, in forward
localstack_1 | headers=forward_headers, stream=True)
localstack_1 | File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/api.py", line 112, in post
localstack_1 | return request('post', url, data=data, json=json, **kwargs)
localstack_1 | File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/api.py", line 58, in request
localstack_1 | return session.request(method=method, url=url, **kwargs)
localstack_1 | File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/sessions.py", line 508, in request
localstack_1 | resp = self.send(prep, **send_kwargs)
localstack_1 | File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/sessions.py", line 618, in send
localstack_1 | r = adapter.send(request, **kwargs)
localstack_1 | File "/opt/code/localstack/.venv/lib/python2.7/site-packages/requests/adapters.py", line 508, in send
localstack_1 | raise ConnectionError(e, request=request)
localstack_1 | ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4565): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb0d286d290>: Failed to establish a new connection: [Errno 111] Connection refused',))
localstack_1 |
```
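In the meantime, a possible stopgap is to prune the persisted Kinesis state before restarting — a rough sketch that assumes the state files under DATA_DIR carry the service name (the actual file layout may differ):

```python
import pathlib

def drop_service_state(data_dir: str, service: str) -> list:
    """Delete persisted files for one service before localstack restarts.
    Hypothetical helper: assumes state file names contain the service name
    (e.g. 'kinesis'), which may not match the real DATA_DIR layout."""
    removed = []
    for path in pathlib.Path(data_dir).glob("*" + service + "*"):
        if path.is_file():
            path.unlink()
            removed.append(path.name)
    return sorted(removed)

# e.g. drop_service_state("./localstack-persistence", "kinesis")
```

This keeps the S3/DynamoDB state intact while dropping only the Kinesis files that break startup.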
Could someone take a look? At a minimum, could you provide a way to enable/disable persistence per service? | https://github.com/localstack/localstack/issues/1003 | https://github.com/localstack/localstack/pull/1711 | 9cba1010dfcbf2b7672df4d7da46b7e8ab287f64 | 2af21304eecda89405e042b322c1e9e2810e4583 | "2018-11-08T13:49:18Z" | python | "2019-11-01T00:22:24Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 891 | [".github/workflows/asf-updates.yml", ".github/workflows/marker-report.yml", ".github/workflows/tests-pro-integration.yml"] | Python 3 node-gyp and leveldown | I was just trying to install localstack with Python 3.6.
The installation of leveldown 1.6.0 fails; the failure is related to node-gyp, which is not compatible with Python 3:
````
> [email protected] install /Users//node_modules/leveldown
> prebuild-install || node-gyp rebuild
prebuild-install info begin Prebuild-install version 2.5.3
prebuild-install info looking for local prebuild @ prebuilds/leveldown-v1.6.0-node-v64-darwin-x64.tar.gz
prebuild-install info looking for cached prebuild @ /Users//.npm/_prebuilds/https-github.com-level-leveldown-releases-download-v1.6.0-leveldown-v1.6.0-node-v64-darwin-x64.tar.gz
prebuild-install http request GET https://github.com/level/leveldown/releases/download/v1.6.0/leveldown-v1.6.0-node-v64-darwin-x64.tar.gz
prebuild-install http 404 https://github.com/level/leveldown/releases/download/v1.6.0/leveldown-v1.6.0-node-v64-darwin-x64.tar.gz
prebuild-install WARN install No prebuilt binaries found (target=10.8.0 runtime=node arch=x64 platform=darwin)
gyp info it worked if it ends with ok
gyp verb cli [ '/usr/local/Cellar/node/10.8.0/bin/node',
gyp verb cli '/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js',
gyp verb cli 'rebuild' ]
gyp info using [email protected]
gyp info using [email protected] | darwin | x64
gyp verb command rebuild []
gyp verb command clean []
gyp verb clean removing "build" directory
gyp verb command configure []
gyp verb check python checking for Python executable "python2" in the PATH
gyp verb `which` failed Error: not found: python2
gyp verb `which` failed at getNotFoundError (/usr/local/lib/node_modules/npm/node_modules/which/which.js:13:12)
gyp verb `which` failed at F (/usr/local/lib/node_modules/npm/node_modules/which/which.js:68:19)
gyp verb `which` failed at E (/usr/local/lib/node_modules/npm/node_modules/which/which.js:80:29)
gyp verb `which` failed at /usr/local/lib/node_modules/npm/node_modules/which/which.js:89:16
gyp verb `which` failed at /usr/local/lib/node_modules/npm/node_modules/isexe/index.js:42:5
gyp verb `which` failed at /usr/local/lib/node_modules/npm/node_modules/isexe/mode.js:8:5
gyp verb `which` failed at FSReqWrap.oncomplete (fs.js:152:21)
gyp verb `which` failed python2 { Error: not found: python2
gyp verb `which` failed at getNotFoundError (/usr/local/lib/node_modules/npm/node_modules/which/which.js:13:12)
gyp verb `which` failed at F (/usr/local/lib/node_modules/npm/node_modules/which/which.js:68:19)
gyp verb `which` failed at E (/usr/local/lib/node_modules/npm/node_modules/which/which.js:80:29)
gyp verb `which` failed at /usr/local/lib/node_modules/npm/node_modules/which/which.js:89:16
gyp verb `which` failed at /usr/local/lib/node_modules/npm/node_modules/isexe/index.js:42:5
gyp verb `which` failed at /usr/local/lib/node_modules/npm/node_modules/isexe/mode.js:8:5
gyp verb `which` failed at FSReqWrap.oncomplete (fs.js:152:21)
gyp verb `which` failed stack:
gyp verb `which` failed 'Error: not found: python2\n at getNotFoundError (/usr/local/lib/node_modules/npm/node_modules/which/which.js:13:12)\n at F (/usr/local/lib/node_modules/npm/node_modules/which/which.js:68:19)\n at E (/usr/local/lib/node_modules/npm/node_modules/which/which.js:80:29)\n at /usr/local/lib/node_modules/npm/node_modules/which/which.js:89:16\n at /usr/local/lib/node_modules/npm/node_modules/isexe/index.js:42:5\n at /usr/local/lib/node_modules/npm/node_modules/isexe/mode.js:8:5\n at FSReqWrap.oncomplete (fs.js:152:21)',
gyp verb `which` failed code: 'ENOENT' }
gyp verb check python checking for Python executable "python" in the PATH
gyp verb `which` succeeded python /Users//localstack/bin/python
gyp verb check python version `/Users//localstack/bin/python -c "import platform; print(platform.python_version());"` returned: "3.6.3\n"
gyp ERR! configure error
gyp ERR! stack Error: Python executable "/Users//localstack/bin/python" is v3.6.3, which is not supported by gyp.
gyp ERR! stack You can pass the --python switch to point to Python >= v2.5.0 & < 3.0.0.
gyp ERR! stack at PythonFinder.failPythonVersion (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:501:19)
gyp ERR! stack at PythonFinder.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:483:14)
gyp ERR! stack at ChildProcess.exithandler (child_process.js:279:7)
gyp ERR! stack at ChildProcess.emit (events.js:182:13)
gyp ERR! stack at maybeClose (internal/child_process.js:962:16)
gyp ERR! stack at Socket.stream.socket.on (internal/child_process.js:381:11)
gyp ERR! stack at Socket.emit (events.js:182:13)
gyp ERR! stack at Pipe._handle.close (net.js:599:12)
````
Based on this bug report, node-gyp will probably never be compatible with Python 3:
https://bugs.chromium.org/p/gyp/issues/detail?id=36
It seems the replacement tool is now GN.
Also, the leveldown documentation recommends using levelup:
https://www.npmjs.com/package/leveldown
If I checked correctly, localstack uses both levelup and leveldown.
Any plan to switch only on levelup or GN to fully support python 3?
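For anyone else blocked on this, the gyp output above already names a stopgap — pointing the build at a Python 2 interpreter. A sketch (the interpreter path is an assumption; adjust to wherever python2.7 lives on your machine, and the npm invocation is shown as a comment only):

```shell
# Locate a Python 2 interpreter for node-gyp; fall back to a common path.
PYTHON2_BIN="$(command -v python2.7 || echo /usr/bin/python2.7)"
echo "node-gyp python: $PYTHON2_BIN"
# Then retry the native build with npm's python override:
# npm_config_python="$PYTHON2_BIN" npm install leveldown
```

npm honors the `npm_config_python` environment variable (equivalent to node-gyp's `--python` switch mentioned in the error output), so leveldown's native build can compile even when the active virtualenv is Python 3.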
| https://github.com/localstack/localstack/issues/891 | https://github.com/localstack/localstack/pull/10101 | 0217fedbf8ca7c4b8fc2516fc2a5a2a750ddd538 | f81c2d71bb4eef8145d79726f03b8caed839aa82 | "2018-08-10T14:06:21Z" | python | "2024-01-24T08:19:54Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 888 | [".github/workflows/asf-updates.yml", ".github/workflows/marker-report.yml", ".github/workflows/tests-pro-integration.yml"] | Application is unable to connect to localstack SQS | As a test developer, I am trying to use localstack to mock SQS for Integration Test.
Docker-compose:
```
localstack:
image: localstack/localstack:0.8.7
ports:
- "4567-4583:4567-4583"
- "9898:${PORT_WEB_UI-8080}"
environment:
- SERVICES=sqs
- DOCKER_HOST=unix:///var/run/docker.sock
- HOSTNAME=localstack
- HOSTNAME_EXTERNAL=192.168.99.101
- DEFAULT_REGION=us-east-1
volumes:
- "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
```
After spinning up localstack, I am able to connect, create a queue, and retrieve it via the AWS CLI. The Localstack Dashboard also displays the created queue.
```
$ aws --endpoint-url=http://192.168.99.101:4576 --region=us-west-1 sqs create-queue --queue-name myqueue
{
"QueueUrl": "http://192.168.99.101:4576/queue/myqueue"
}
```
The application is using com.amazon.sqs.javamessaging.SQSConnectionFactory to connect to SQS. It also uses com.amazonaws.auth.DefaultAWSCredentialsProviderChain for the AWS credentials
1) If I give "-e AWS_REGION=us-east-1 -e AWS_ACCESS_KEY_ID=foobar -e AWS_SECRET_ACCESS_KEY=foobar" while bringing up the application, I am getting
```
HTTPStatusCode: 403 AmazonErrorCode: InvalidClientTokenId
com.amazonaws.services.sqs.model.AmazonSQSException: The security token included in the request is invalid
```
2) If I give the ACCESS_KEY and SECRET_KEY of the AWS SQS, I am getting
```
HTTPStatusCode: 400 AmazonErrorCode: AWS.SimpleQueueService.NonExistentQueue
com.amazonaws.services.sqs.model.QueueDoesNotExistException: The specified queue does not exist for this wsdl version.
```
Below is the application code. The first 2 log messages are printing the connection and session it obtained. The error is coming from "Queue publisherQueue = sqsSession.createQueue(sqsName);"
```
sqsConnection = (Connection) context.getBean("outputSQSConnection");
LOGGER.info("SQS connection Obtained " + sqsConnection);
sqsSession = sqsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
LOGGER.info("SQS Session Created " + sqsSession);
Queue publisherQueue = sqsSession.createQueue(sqsName);
```
I tried both "http://localstack:4576/queue/myqueue" and "http://192.168.99.101:4576/queue/myqueue". The results are the same.
Can you please help me to resolve the issue? | https://github.com/localstack/localstack/issues/888 | https://github.com/localstack/localstack/pull/10101 | 0217fedbf8ca7c4b8fc2516fc2a5a2a750ddd538 | f81c2d71bb4eef8145d79726f03b8caed839aa82 | "2018-08-09T02:51:41Z" | python | "2024-01-24T08:19:54Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 833 | [".github/workflows/asf-updates.yml", ".github/workflows/marker-report.yml", ".github/workflows/tests-pro-integration.yml"] | Redshift database support | Hello! Sorry to file an issue like this but it's the best way I have to leave a message for the folks here.
A pain point in testability for my work has been testing code which hits Redshift as a database. Most companies I know provision a tiny Redshift instance to test things with, but that's impractical for reasons I'm sure everyone here is familiar with.
Redshift looks a lot like Postgres (as it's forked off Postgres 8.0.2). Not quite enough that Postgres can just be dropped in, however. The main differences (other than architectural) are some syntax extensions, extra sql features and a bunch of native functions (such as to operate on json text objects).
A few months ago I took a Postgres instance under a machete and cut it up until it looked enough like Redshift to please our tests. I recently turned it into a fully-functional Docker instance:
https://github.com/HearthSim/docker-pgredshift
There isn't much there yet feature-wise (as there isn't much needed to pass our tests; and we're using an ORM which can target postgres syntax in most places), so it's more of a proof of concept. But I'm happy to take PRs to implement more of it. I don't know if something like this will be interesting in the future for localstack if it ever gets to a more usable point but if it is, I'm happy to help -- and either way I figured I would give the heads up :)
| https://github.com/localstack/localstack/issues/833 | https://github.com/localstack/localstack/pull/10101 | 0217fedbf8ca7c4b8fc2516fc2a5a2a750ddd538 | f81c2d71bb4eef8145d79726f03b8caed839aa82 | "2018-07-04T01:13:47Z" | python | "2024-01-24T08:19:54Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 809 | [".github/workflows/asf-updates.yml", ".github/workflows/marker-report.yml", ".github/workflows/tests-pro-integration.yml"] | Cloudformation script unable to parse GlobalSecondaryIndexes Read/Write CapacityUnits given as string value | 
When the ProvisionedThroughput Read/Write CapacityUnits of a GlobalSecondaryIndexes entry are given as strings, the cloudformation service is unable to parse them; it needs an int or long.
(I was able to run the template once I changed the throughput values to numbers.)
But when the ProvisionedThroughput of a table itself is given as a string, it is parsed correctly.
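For reference, the fix on the parsing side amounts to coercing string capacity values to ints before validation. A minimal sketch (the helper name and template fragment are illustrative, not localstack code):

```python
def coerce_capacity_units(throughput):
    """Coerce string capacity values to ints, mirroring what the parser
    already does for table-level ProvisionedThroughput."""
    return {key: int(value) for key, value in throughput.items()}

gsi = {
    "IndexName": "MyIndex",
    "ProvisionedThroughput": coerce_capacity_units(
        # strings, as they arrive from a template
        {"ReadCapacityUnits": "5", "WriteCapacityUnits": "5"}
    ),
}
print(gsi["ProvisionedThroughput"])  # {'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5}
```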
closed | localstack/localstack | https://github.com/localstack/localstack | 804 | [".github/workflows/tests-pro-integration.yml"] | Localstack seems not to set policy attributes to SQS queue | Hello,
I am trying to set a policy attribute on an SQS queue I have created. I don't get any errors with these commands:
```
$ sqs_policy='{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Principal": { "AWS": "*" },
"Action":"sqs:SendMessage",
"Resource":"'$sqs_queue_arn'",
"Condition":{
"ArnEquals":{
"aws:SourceArn":"'$sns_topic_arn'"
}
}
}
]
}'
$ sqs_policy_escaped=$(echo $sqs_policy | perl -pe 's/"/\\"/g')
$ sqs_attributes='{"Policy":"'$sqs_policy_escaped'"}'
$ awslocal sqs set-queue-attributes \
--queue-url "$sqs_queue_url" \
--attributes "$sqs_attributes"
```
But when I check the attributes of the queue with these commands, it seems like there is no policy attribute:
```
$ awslocal sqs get-queue-attributes --queue-url $sqs_queue_url --attribute-names Policy
$ awslocal sqs get-queue-attributes --queue-url $sqs_queue_url --attribute-names All
{
"Attributes": {
"ApproximateNumberOfMessagesNotVisible": "1",
"ApproximateNumberOfMessagesDelayed": "0",
"CreatedTimestamp": "1528886541",
"ApproximateNumberOfMessages": "0",
"ReceiveMessageWaitTimeSeconds": "20",
"DelaySeconds": "0",
"VisibilityTimeout": "300",
"LastModifiedTimestamp": "1528886541",
"QueueArn": "arn:aws:sqs:elasticmq:000000000000:TestQueue"
}
}
```
However, when I subscribe the queue to an SNS topic and publish a message to the topic, that message is sent to the SQS queue.
I am using version 0.8.6.2 of localstack and running it with docker.
In the end, what I am trying to do is publish an event from S3 to SQS via SNS.
I don't know if I am doing something wrong or if it is a localstack bug/limitation.
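For reference, the same attributes payload can be built without the shell-escaping dance, which rules out quoting mistakes as the cause. A minimal Python sketch (the ARNs are placeholders):

```python
import json

sqs_queue_arn = "arn:aws:sqs:elasticmq:000000000000:TestQueue"   # placeholder
sns_topic_arn = "arn:aws:sns:us-east-1:000000000000:TestTopic"   # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "sqs:SendMessage",
        "Resource": sqs_queue_arn,
        "Condition": {"ArnEquals": {"aws:SourceArn": sns_topic_arn}},
    }],
}

# SetQueueAttributes takes the policy document as a JSON string.
attributes = {"Policy": json.dumps(policy)}
print(sorted(attributes))  # ['Policy']
```

With boto3 this would be passed as `sqs.set_queue_attributes(QueueUrl=queue_url, Attributes=attributes)`.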
| https://github.com/localstack/localstack/issues/804 | https://github.com/localstack/localstack/pull/9452 | ada0bc4595bcf38153b5b7e6771a131b3d3fc017 | d8741aec352ae136f3eebb4df45eabbb4afd693d | "2018-06-13T13:34:29Z" | python | "2023-10-24T05:39:01Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 794 | [".github/workflows/tests-pro-integration.yml"] | Lambda - Nodejs 8.10 |
I'm unable to successfully get Node.js 8.10 to execute through API Gateway.
I'm using the stock example from Lambda:
```javascript
exports.handler = async (event) => {
// TODO implement
return 'Hello from Lambda!'
};
```
However, this results in:
```
File "/opt/code/localstack/localstack/services/apigateway/apigateway_listener.py", line 224, in forward_request
response.status_code = int(parsed_result.get('statusCode', 200))
AttributeError: 'unicode' object has no attribute 'get'
```
If I pass a full object though: `{ statusCode: 200, body: 'Hello from Lambda!' }` it works fine.
Am I missing something regarding getting it set up to better emulate AWS?
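For anyone hitting this: the traceback matches the shape of the returned value. The listener apparently calls `.get('statusCode', 200)` on the parsed result, which only works when the Lambda returns a dict. A sketch of that step plus a tolerant fallback (this is my reconstruction for illustration, not the actual listener code; note that real AWS with *proxy* integration actually rejects a bare string with a 502 "malformed Lambda proxy response", so requiring the object shape is arguably the faithful behavior):

```python
def extract_status_code(parsed_result):
    """What the listener effectively does, plus a fallback that treats a
    bare string result as a plain 200 response."""
    if isinstance(parsed_result, dict):
        return int(parsed_result.get("statusCode", 200))
    return 200  # bare string (or other scalar) body

print(extract_status_code({"statusCode": 201, "body": "created"}))  # 201
print(extract_status_code("Hello from Lambda!"))                    # 200
```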
| https://github.com/localstack/localstack/issues/794 | https://github.com/localstack/localstack/pull/9452 | ada0bc4595bcf38153b5b7e6771a131b3d3fc017 | d8741aec352ae136f3eebb4df45eabbb4afd693d | "2018-05-31T00:34:10Z" | python | "2023-10-24T05:39:01Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 741 | [".github/workflows/asf-updates.yml", ".github/workflows/marker-report-issue.yml", ".github/workflows/marker-report.yml", ".github/workflows/tests-cli.yml", ".github/workflows/tests-podman.yml", ".github/workflows/tests-pro-integration.yml"] | Have issue with running on vm - ubuntu 16.04 with "localstack start" (windows 10) |
Output after starting in debug mode:
Starting local dev environment. CTRL-C to quit.
Starting mock API Gateway (http port 4567)...
Starting mock DynamoDB (http port 4569)...
Starting mock SES (http port 4579)...
Starting mock Kinesis (http port 4568)...
Error: Unable to access jarfile DynamoDBLocal.jar
Starting mock Redshift (http port 4577)...
Starting mock S3 (http port 4572)...
Starting mock CloudWatch (http port 4582)...
Starting mock CloudFormation (http port 4581)...
Starting mock SSM (http port 4583)...
Starting mock SQS (http port 4576)...
Starting local Elasticsearch (http port 4571)...
/usr/bin/env: ‘node’: No such file or directory
Starting mock SNS (http port 4575)...
Error: Unable to access jarfile /home/vagrant/.local/lib/python2.7/site-packages/localstack/infra/elasticmq/elasticmq-server.jar
Starting mock DynamoDB Streams service (http port 4570)...
Starting mock Firehose service (http port 4573)...
Starting mock Route53 (http port 4580)...
Starting mock ES service (http port 4578)...
Starting mock Lambda service (http port 4574)...
Killed
2018-04-28T11:31:09:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f528462b5d0>: Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):
File "/home/vagrant/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py", line 201, in forward
headers=forward_headers)
File "/home/vagrant/.local/lib/python2.7/site-packages/requests/api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "/home/vagrant/.local/lib/python2.7/site-packages/requests/api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "/home/vagrant/.local/lib/python2.7/site-packages/requests/sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "/home/vagrant/.local/lib/python2.7/site-packages/requests/sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "/home/vagrant/.local/lib/python2.7/site-packages/requests/adapters.py", line 508, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f528462b5d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
* Running on http://0.0.0.0:4583/ (Press CTRL+C to quit)
* Running on http://0.0.0.0:4577/ (Press CTRL+C to quit)
* Running on http://0.0.0.0:4559/ (Press CTRL+C to quit)
* Running on http://0.0.0.0:4580/ (Press CTRL+C to quit)
* Running on http://0.0.0.0:4566/ (Press CTRL+C to quit)
* Running on http://0.0.0.0:4563/ (Press CTRL+C to quit)
* Running on http://0.0.0.0:4579/ (Press CTRL+C to quit)
* Running on http://0.0.0.0:4582/ (Press CTRL+C to quit)
* Running on http://0.0.0.0:4562/ (Press CTRL+C to quit)
| https://github.com/localstack/localstack/issues/741 | https://github.com/localstack/localstack/pull/9848 | d68c9def509f0d948bf394d0f37cd1ce5c11a233 | 706d2823db41ded7cfb6eebf39f204a05a7c03c6 | "2018-04-28T11:37:21Z" | python | "2023-12-12T13:35:04Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 520 | ["localstack/services/s3/s3_listener.py", "tests/integration/test_s3.py"] | s3 api ignores ExposeHeaders CORS rule | looks like setting ExposeHeaders has no effect on requests to s3 (headers such as `x-amz-version-id` don't reach the clients) for browser-based uploads.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html | https://github.com/localstack/localstack/issues/520 | https://github.com/localstack/localstack/pull/1224 | 1cfc2641cfeb8274f4024df460a801dc92d70106 | 28ead938d6fd07405717917430d2cf2d32955b43 | "2017-12-20T13:30:03Z" | python | "2019-04-13T20:43:29Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 503 | [".github/workflows/tests-pro-integration.yml"] | Files posted to S3 as multipart/form-data do not trigger notifications | I have an S3 bucket set up with a notification to trigger a Lambda task when any file is added to the bucket with an `images` prefix. It works fine when I add the file via the s3 API or even via curl, but when I post the file through the browser (using [this library](https://github.com/CulturalMe/meteor-slingshot)), the notification ignores it, **even though the file makes it successfully into the bucket.** It works no problem on AWS. In case it helps I've pasted the HTTP requests and responses I'm seeing:
Request:
```
POST /my-uploads-dev HTTP/1.1
Host: localhost:4572
Connection: keep-alive
Content-Length: 19415
Origin: http://localhost:3000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryAKROadl9hSmCrpB2
Accept: */*
Referer: http://localhost:3000/dashboard/profile
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9,es;q=0.8,la;q=0.7,zu;q=0.6
```
Payload:
```
------WebKitFormBoundaryKbp7jAe69oUb5cGq
Content-Disposition: form-data; name="key"
images/uploads/ZrdLcEpmH95txSEP7/1512767815680-foo.png
------WebKitFormBoundaryKbp7jAe69oUb5cGq
Content-Disposition: form-data; name="bucket"
my-uploads-dev
------WebKitFormBoundaryKbp7jAe69oUb5cGq
Content-Disposition: form-data; name="Content-Type"
image/png
------WebKitFormBoundaryKbp7jAe69oUb5cGq
Content-Disposition: form-data; name="acl"
public-read
------WebKitFormBoundaryKbp7jAe69oUb5cGq
Content-Disposition: form-data; name="Content-Disposition"
inline; filename="foo.png"; filename*=utf-8''foo.png
------WebKitFormBoundaryKbp7jAe69oUb5cGq
Content-Disposition: form-data; name="x-amz-algorithm"
AWS4-HMAC-SHA256
------WebKitFormBoundaryKbp7jAe69oUb5cGq
Content-Disposition: form-data; name="x-amz-credential"
AKIAIKNDNJYOHNZLSCJA/20171208/us-west-2/s3/aws4_request
------WebKitFormBoundaryKbp7jAe69oUb5cGq
Content-Disposition: form-data; name="x-amz-date"
20171208T000000Z
------WebKitFormBoundaryKbp7jAe69oUb5cGq
Content-Disposition: form-data; name="policy"
eyJjb25kaXRpb25zIjpbWyJjb250ZW50LWxlbmd0aC1yYW5nZSIsMCwxNzI1NV0seyJrZXkiOiJpbWFnZXMvdXBsb2Fkcy9pbWFnZXMvcHJvZmlsZXMvYXZhdGFycy9acmRMY0VwbUg5NXR4U0VQNy8xNTEyNzY3ODE1NjgwLWF1cmVsaXVzLnBuZyJ9LHsiYnVja2V0IjoiaWZkYi11cGxvYWRzLWRldiJ9LHsiQ29udGVudC1UeXBlIjoiaW1hZ2UvcG5nIn0seyJhY2wiOiJwdWJsaWMtcmVhZCJ9LHsiQ29udGVudC1EaXNwb3NpdGlvbiI6ImlubGluZTsgZmlsZW5hbWU9XCJhdXJlbGl1cy5wbmdcIjsgZmlsZW5hbWUqPXV0Zi04JydhdXJlbGl1cy5wbmcifSx7IngtYW16LWFsZ29yaXRobSI6IkFXUzQtSE1BQy1TSEEyNTYifSx7IngtYW16LWNyZWRlbnRpYWwiOiJBS0lBSUtORE5KWU9ITlpMU0NKQS8yMDE3MTIwOC91cy13ZXN0LTIvczMvYXdzNF9yZXF1ZXN0In0seyJ4LWFtei1kYXRlIjoiMjAxNzEyMDhUMDAwMDAwWiJ9XSwiZXhwaXJhdGlvbiI6IjIwMTctMTItMDhUMjE6MjE6NTUuNjgwWiJ9
------WebKitFormBoundaryKbp7jAe69oUb5cGq
Content-Disposition: form-data; name="x-amz-signature"
2ae346dc1b8ed7fc74ce137a5587b365af3def914b289c482a4c13bd3b776449
------WebKitFormBoundaryKbp7jAe69oUb5cGq
Content-Disposition: form-data; name="file"; filename="foo.png"
Content-Type: image/png
```
Response:
```
HTTP/1.1 200 OK
Server: BaseHTTP/0.3 Python/2.7.10
Date: Fri, 08 Dec 2017 21:13:23 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Server: Werkzeug/0.12.2 Python/2.7.10
Date: Fri, 08 Dec 2017 21:13:23 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH
Access-Control-Allow-Headers: authorization,content-type,content-md5,x-amz-content-sha256,x-amz-date,x-amz-security-token,x-amz-user-agent
``` | https://github.com/localstack/localstack/issues/503 | https://github.com/localstack/localstack/pull/9800 | 0ca29a2e311cb8d1c8ae5df298976eb3d6f58920 | 7c82821ec0df521d64dbcbaf6f2e2501b80e04b9 | "2017-12-08T21:37:20Z" | python | "2023-12-05T07:38:02Z" |
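For anyone triaging: a browser POST should produce the same kind of notification record as a PUT, just with an `ObjectCreated:Post` event name. A trimmed sketch of the record shape the listener should emit (abbreviated; real S3 events carry many more fields):

```python
import json

def object_created_event(bucket, key, event_name="ObjectCreated:Post"):
    """Minimal S3 notification record, trimmed to the fields most
    consumers read."""
    return {
        "Records": [{
            "eventSource": "aws:s3",
            "eventName": event_name,
            "s3": {
                "bucket": {"name": bucket},
                "object": {"key": key},
            },
        }]
    }

event = object_created_event(
    "my-uploads-dev",
    "images/uploads/ZrdLcEpmH95txSEP7/1512767815680-foo.png",
)
print(json.dumps(event["Records"][0]["s3"]["object"]["key"]))
```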
closed | localstack/localstack | https://github.com/localstack/localstack | 480 | ["README.md"] | The path is not shared from OS X and is not known to Docker. | Get this error when start localstack on mac.
```
$ localstack start --docker
Starting local dev environment. CTRL-C to quit.
docker run -it -p 8080:8080 -p 443:443 -p 4567-4582:4567-4582 -p 4590-4593:4590-4593 -v "/var/folders/q5/s3zcbh6mzhwq4gxj/T/localstack:/tmp/localstack" -v "/var/run/docker.sock:/var/run/docker.sock" -e DOCKER_HOST="unix:///var/run/docker.sock" -e HOST_TMP_FOLDER="/var/folders/q5/s3zcbh6mzhwq4gxj/T/localstack" "localstack/localstack"
docker: Error response from daemon: Mounts denied:
The path /var/folders/q5/s3zcbh6mzhwq4gxj/T/localstack
is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
```
I installed `docker for mac`.
How to fix it? | https://github.com/localstack/localstack/issues/480 | https://github.com/localstack/localstack/pull/485 | 23877c7b83e3910380be9a3128263ed1af4bd286 | 46ca7432455c2a22b8cacd9bd67ec8d3ab466055 | "2017-11-23T21:58:59Z" | python | "2017-11-25T16:32:49Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 357 | [".github/workflows/pr-welcome-first-time-contributors.yml", ".github/workflows/rebase-release-prs.yml", ".github/workflows/rebase-release-targeting-prs.yml", ".github/workflows/tests-pro-integration.yml"] | SNS subject is empty | Hi!
I am having a problem where the subject is an empty string when receiving the notification. Is anyone else experiencing this? | https://github.com/localstack/localstack/issues/357 | https://github.com/localstack/localstack/pull/9622 | dfdf585cf3382e8166d338e3a17fd923b1b8dc27 | 218e855c680ce4f838169b7e7ec66ddfbd8c0a67 | "2017-09-28T23:41:27Z" | python | "2023-11-14T06:49:13Z" |
closed | localstack/localstack | https://github.com/localstack/localstack | 78 | [".github/workflows/sync-project.yml"] | Unable to use MessageAttributeValues with SQS | I'm trying to use localstack's SQS endpoint with a Java application and am running into some issues when sending MessageAttributeValues. Given this test code:
```java
@Test
public void testMessageAttributeValues() {
AmazonSQSAsync sqs = AmazonSQSAsyncClientBuilder.standard().withEndpointConfiguration(new AwsClientBuilder
.EndpointConfiguration("http://localhost:4576", "us-east-1")).build();
sqs.createQueue(new CreateQueueRequest("test-queue"));
MessageAttributeValue value= new MessageAttributeValue();
value.setDataType("Number.java.lang.Long");
value.setStringValue("1493147359900");
Map<String, MessageAttributeValue> mavs = new HashMap<>();
mavs.put("timestamp", value);
SendMessageResult result = sqs.sendMessage(new SendMessageRequest()
.withQueueUrl("http://localhost:4576/123456789012/test-queue")
.withMessageAttributes(mavs)
.withMessageBody("foo"));
System.out.println(result);
}
```
I get:
```
15:26:13.661 [main] INFO com.amazonaws.http.DefaultErrorResponseHandler - Unable to parse HTTP response (Invocation Id:25b666a4-1136-bae4-f39d-666e59b56b58) content to XML document 'The message attribute 'timestamp' has an invalid message attribute type, the set of supported type prefixes is Binary, Number, and String.'
org.xml.sax.SAXParseException: Content is not allowed in prolog.
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:203)
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:400)
...
```
Debugging the HTTP request, I see the actual response appears to be:
```
"The message attribute 'timestamp' has an invalid message attribute type, the set of supported type prefixes is Binary, Number, and String."
```
As you can see in my test, I'm actually sending `Number.java.lang.Long` (this simulates what the messaging library I'm using in my application sends). Based on [Amazon's docs](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-attributes.html#message-attributes-data-types-validation), this should be allowed as a "CustomType". I think this is maybe a case of localstack being overly strict during validation?
If I change my test to just send `Number`, I instead get a different error:
```
15:31:09.263 [main] DEBUG com.amazonaws.request - Received successful response: 200, AWS Request ID: 27daac76-34dd-47df-bd01-1f6e873584a0
15:31:09.263 [main] DEBUG com.amazonaws.requestId - x-amzn-RequestId: not available
15:31:09.263 [main] DEBUG com.amazonaws.requestId - AWS Request ID: 27daac76-34dd-47df-bd01-1f6e873584a0
15:31:09.263 [main] DEBUG com.amazonaws.services.sqs.MessageMD5ChecksumHandler - Message body: foo
15:31:09.264 [main] DEBUG com.amazonaws.services.sqs.MessageMD5ChecksumHandler - Expected MD5 of message body: acbd18db4cc2f85cedef654fccc4a4d8
15:31:09.264 [main] DEBUG com.amazonaws.services.sqs.MessageMD5ChecksumHandler - Message attribtues: {timestamp={StringValue: 1493147359900,StringListValues: [],BinaryListValues: [],DataType: Number}}
15:31:09.264 [main] DEBUG com.amazonaws.services.sqs.MessageMD5ChecksumHandler - Expected MD5 of message attributes: 235c5c510d26fb653d073faed50ae77c
com.amazonaws.AmazonClientException: MD5 returned by SQS does not match the calculation on the original request. (MD5 calculated by the message attributes: "235c5c510d26fb653d073faed50ae77c", MD5 checksum returned: "324758f82d026ac6ec5b31a3b192d1e3")
at com.amazonaws.services.sqs.MessageMD5ChecksumHandler.sendMessageOperationMd5Check(MessageMD5ChecksumHandler.java:120)
at com.amazonaws.services.sqs.MessageMD5ChecksumHandler.afterResponse(MessageMD5ChecksumHandler.java:80)
at com.amazonaws.handlers.RequestHandler2Adaptor.afterResponse(RequestHandler2Adaptor.java:49)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.afterResponse(AmazonHttpClient.java:971)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:745)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.sqs.AmazonSQSClient.doInvoke(AmazonSQSClient.java:1792)
at com.amazonaws.services.sqs.AmazonSQSClient.invoke(AmazonSQSClient.java:1768)
at com.amazonaws.services.sqs.AmazonSQSClient.executeSendMessage(AmazonSQSClient.java:1526)
at com.amazonaws.services.sqs.AmazonSQSClient.sendMessage(AmazonSQSClient.java:1503)
```
If I don't send any MessageAttributes at all, then the request works as expected.
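On the second error: the checksum mismatch suggests localstack computes the attribute MD5 differently from the SDK. For reference, the documented algorithm hashes, per attribute in sorted name order, the length-prefixed name, the length-prefixed data type, a one-byte transport marker (1 for string values), and the length-prefixed value. A Python sketch of my reading of that algorithm (verify against the SDK before relying on it):

```python
import hashlib
import struct

def sqs_attribute_md5(attributes):
    """MD5 over sorted attributes: 4-byte big-endian length + bytes for
    name and data type, then a transport marker (1 = string value) and
    the length-prefixed value itself."""
    digest = hashlib.md5()
    for name in sorted(attributes):
        data_type, string_value = attributes[name]
        for part in (name, data_type):
            encoded = part.encode("utf-8")
            digest.update(struct.pack(">I", len(encoded)))
            digest.update(encoded)
        digest.update(b"\x01")  # transport type: string value
        encoded_value = string_value.encode("utf-8")
        digest.update(struct.pack(">I", len(encoded_value)))
        digest.update(encoded_value)
    return digest.hexdigest()

checksum = sqs_attribute_md5({"timestamp": ("Number", "1493147359900")})
print(checksum)
```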
| https://github.com/localstack/localstack/issues/78 | https://github.com/localstack/localstack/pull/9224 | 82796a99ec085bdc4e1f7113d2ddbba52c449232 | e7a5afa4614656b2aec71509550978e32af4b629 | "2017-04-25T19:33:47Z" | python | "2023-09-26T05:40:03Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,731 | ["cmd/dagger/functions.go", "core/integration/module_functions_test.go"] | 🐞 [CLI]: `dagger functions $function_name` returns an error if return type is not a dagger object | ### What is the issue?
Calling `dagger functions foo` on the following function definition raises this error:
`Error: function 'foo' returns non-object type STRING_KIND`
```go
package main

type Testmodule struct{}

func (m *Testmodule) Foo() string {
	return "bar"
}
```
### Dagger version
dagger v0.9.11
### Steps to reproduce
Run `dagger functions foo` on the code given above.
### Log output
Error: function 'foo' returns non-object type STRING_KIND
| https://github.com/dagger/dagger/issues/6731 | https://github.com/dagger/dagger/pull/6733 | 58f4b8f7791786a740134ee36854002f9021c1fc | 4359dd91d1739cddbc83ca37328943f6187ce50a | "2024-02-25T14:27:41Z" | go | "2024-02-26T12:02:06Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,728 | ["internal/mage/util/engine.go", "sdk/python/runtime/.gitignore", "sdk/python/runtime/dagger.json", "sdk/python/runtime/go.mod", "sdk/python/runtime/go.sum", "sdk/python/runtime/main.go", "sdk/python/runtime/template/src/main.py"] | Zenith: default to use a class instead of top-level functions in Python | ## Summary
Change the `dagger init` template for Python to use a class instead of just functions. Also update documentation snippets to do the same.
## Motivation
In an earlier iteration of Dagger modules, when the concept of objects that contain functions was introduced, the Python SDK kept the convenience of being able to define simple top-level functions, without having to convert them into a class. Python allows this by transparently “creating” the object and associating the top-level functions with it, behind the scenes. It provides a nicer experience for simple use cases, where users don’t have to get the name of the class right, since it has to match the module name.
However, you can’t add a description to the main object this way. And over time we’ve added state to object types, and constructors to initialize that state. Using state has become a common pattern, so a user may **quite often need to convert the top-level functions into a class**.
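For illustration, the shift amounts to something like this (using no-op stand-in decorators so the snippet is self-contained; the real SDK's decorator names, which I assume to be `object_type` and `function`, would be imported from `dagger` instead):

```python
def object_type(cls):  # stand-in for the SDK's object_type decorator
    return cls

def function(fn):  # stand-in for the SDK's function decorator
    return fn

@object_type
class MyModule:
    """Main object: a description and state can live here, which bare
    top-level functions cannot express."""

    greeting: str = "hello"

    @function
    def hello(self, name: str) -> str:
        return f"{self.greeting}, {name}"

print(MyModule().hello("world"))  # hello, world
```

The class name matching the module name is the one piece of ceremony users take on in exchange for descriptions, state, and constructors.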
That takes me back to a few years ago, working with React. You used to be able to create components using either a function or a class. Most people prefer the simplicity of functions, but in React, if you reached a point where you needed to access some lifecycle hooks, you’d need to convert the function into a class. I had to do that a lot myself.
I remember the frustration in the community around this, growing as adoption increased[^1]. I’ve found myself doing this in Python a few times too, and I’ve grown concerned that it’ll add some of the same confusion, frustration or just friction for our users.
For this reason, I think we should default to talking about Python SDK modules using the class, instead of top-level functions, in the documentation and examples. It also makes it more consistent with Go and TypeScript.
[^1]: They eventually fixed it by making classes obsolete and adding full-featured hooks to functions.
## Deprecation
I have no plans to deprecate top-level functions at this point, nor do I suggest doing so. They’ll still be available, just more hidden. We’ll revisit at a later time.
\cc @vikram-dagger @jpadams | https://github.com/dagger/dagger/issues/6728 | https://github.com/dagger/dagger/pull/6729 | e64f185334734a37a2f18a95ec3cf21a27e32437 | 4b7ab1e217d1f2da30723905282ba3cf27de8cab | "2024-02-24T00:58:03Z" | go | "2024-02-26T19:06:17Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,725 | ["cmd/dagger/flags.go", "cmd/dagger/functions.go", "core/integration/module_call_test.go"] | Support passing modules as arguments in the CLI | Modules can accept other modules as arguments to functions. Use cases:
1. SDKs implemented as modules do this already by accepting `ModuleSource` as an arg
2. Support for running tests of modules could use this: https://github.com/dagger/dagger/issues/6724
3. Support for re-executing modules in specialized hosts (i.e. run my module on a "gpu-as-a-service" platform)
4. Combined with support for interfaces, you could pass a module as an implementation of an interface argument (if it matches)
* @vito mentioned one use case specifically around using this to construct a dev env that is defined across multiple modules and could be dynamically composed into one (IIUC, correct me if I misunderstood)
However, it's not yet possible to do this from the CLI with `dagger call`, which greatly limits its utility. We could likely just re-use the syntax we use for `-m`, e.g.:
* `dagger call go-test --module github.com/my-org/my-repo/my-mod`, where `--module` is an argument of type either `ModuleSource`, `Module` or an interface that the module must implement.
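If the `-m` syntax is reused, the argument-side handling could start from the same ref classification `-m` already does. A toy sketch (the host list and rules here are assumptions for illustration, not the actual resolution logic):

```python
def classify_module_ref(ref):
    """Rough sketch of classifying a module argument the way a CLI
    might: git-style refs vs local paths."""
    known_hosts = ("github.com/", "gitlab.com/", "bitbucket.org/")
    if ref.startswith(known_hosts) or "://" in ref:
        return "git"
    return "local"

print(classify_module_ref("github.com/my-org/my-repo/my-mod"))  # git
print(classify_module_ref("./my-mod"))                          # local
```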
That would in theory be extremely straightforward to implement while also unlocking all sorts of very interesting use cases | https://github.com/dagger/dagger/issues/6725 | https://github.com/dagger/dagger/pull/6761 | c5bf6978ba169abbc5cef54b3d7cd829f141d792 | e02ff3d2b50665275deb52902abc46ac0f6f138a | "2024-02-23T18:56:08Z" | go | "2024-02-27T22:55:04Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,719 | ["engine/server/buildkitcontroller.go", "engine/server/server.go"] | Very high memory usage in dagger engine | I noticed that on `main` when running `go test -run=TestModule ...` the engine ends up using over 6GB of RSS at times. By eye, it seems like it particularly spikes during `TestModuleLotsOfFunctions` (but did not fully confirm yet).
`pprof` is showing:
```
File: dagger-engine
Type: inuse_space
Time: Feb 22, 2024 at 10:18am (PST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 1811.45MB, 71.51% of 2533.09MB total
Dropped 846 nodes (cum <= 12.67MB)
Showing top 10 nodes out of 239
flat flat% sum% cum cum%
622.26MB 24.57% 24.57% 622.26MB 24.57% github.com/moby/buildkit/solver/pb.(*FileActionMkFile).Unmarshal
605.22MB 23.89% 48.46% 605.22MB 23.89% github.com/moby/buildkit/solver/pb.(*Op).Marshal
104.69MB 4.13% 52.59% 104.69MB 4.13% google.golang.org/grpc/internal/transport.newBufWriter
103.08MB 4.07% 56.66% 103.08MB 4.07% encoding/json.(*decodeState).literalStore
90.83MB 3.59% 60.25% 201.88MB 7.97% github.com/dagger/dagger/core.(*ModDeps).lazilyLoadSchema
90.26MB 3.56% 63.81% 90.26MB 3.56% bufio.NewReaderSize
75.57MB 2.98% 66.79% 75.57MB 2.98% google.golang.org/protobuf/internal/impl.consumeStringValidateUTF8
51.02MB 2.01% 68.81% 69.52MB 2.74% google.golang.org/grpc/internal/transport.(*http2Server).operateHeaders
34.51MB 1.36% 70.17% 34.51MB 1.36% github.com/moby/buildkit/client/llb.mergeMetadata
34.02MB 1.34% 71.51% 34.02MB 1.34% github.com/dagger/dagger/dagql.Class[go.shape.*uint8].Install
```
The top memory users seem pretty consistent across two runs.
I can't help but wonder to what extent the goroutine leak mentioned by @jedevc could be related: https://github.com/dagger/dagger/pull/6597. I.e. the goroutine leak could be resulting in memory allocated by the above still being reachable. This is just a superficial connection, in that goroutine leaks are a very common culprit of memory leaks, so it could be a red herring.
Also wondering if the recent changes to make more [heavy use of caching `OpDAG` modifications](https://github.com/dagger/dagger/pull/6505) could be related, just in that it could be connected to the heavy usage of `Op.Marshal` in the pprof output.
* Can test by going back before that commit and seeing if the memory usage was any different. | https://github.com/dagger/dagger/issues/6719 | https://github.com/dagger/dagger/pull/6760 | f71928d6e4baef0735e06afacdf2772880bf1536 | a0b622addceef9308b75c928935394c976c4872b | "2024-02-22T19:02:08Z" | go | "2024-02-27T19:34:42Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,687 | [".changes/unreleased/Fixed-20240307-145421.yaml", "go.mod", "go.sum"] | 🐞 Directory.glob with recursive patterns (`**/*.go`) returns duplicates and is very slow | ### What is the issue?
It looks like Buildkit returns weird results when you `ReadDir` with a glob like `**/*.go`. It actually seems to find everything recursively, but then trim each result to only have the first path segment. So for `a/1.go a/2.go b/3.go` it'll return `a a b`. We then recurse into `a` twice, which will also have this issue, so it quickly explodes into a huge number of results.
We could work around this issue by deduping, but it would probably be better to fix this upstream in Buildkit. It would be perfect if it just returned all the full paths, which it *seems* capable of doing. Then we wouldn't need to recurse ourselves. While we're there, maybe we could add support for exclude filters too (edit: interestingly it seems to support `!` prefixes for exclusions - maybe we should be phasing out `include: ["a"], exclude: ["b"]` in favor of `globs: ["a", "!b"]`?).
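Until that's fixed upstream, the dedup workaround mentioned above is cheap; a minimal sketch (illustrative only, not the actual engine code) of collapsing the repeated first-segment entries before recursing:

```go
package main

import "fmt"

// dedupe collapses the duplicate entries Buildkit returns for recursive
// globs (e.g. "a a b" for matches under a/ and b/), preserving order.
func dedupe(entries []string) []string {
	seen := make(map[string]bool, len(entries))
	out := make([]string, 0, len(entries))
	for _, e := range entries {
		if !seen[e] {
			seen[e] = true
			out = append(out, e)
		}
	}
	return out
}

func main() {
	// Buildkit trims each match to its first path segment, so
	// a/1.go, a/2.go, b/3.go come back as "a a b".
	fmt.Println(dedupe([]string{"a", "a", "b"})) // [a b]
}
```

Deduping before recursing avoids the exponential blow-up, though fixing ReadDir to return full paths would remove the need to recurse at all.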
cc @jedevc @TomChv for continuity from [https://github.com/dagger/dagger/pull/5824#discussion_r1350193214](https://github.com/dagger/dagger/pull/5824#discussion_r1350193214)
### Dagger version
v0.9.1+ (when the feature was added; not a regression)
### Steps to reproduce
```graphql
{
git(url:"https://github.com/dagger/dagger") {
branch(name:"main"){
tree {
glob(pattern:"**/dagger.json")
}
}
}
}
```
### Log output
```json
{
"data": {
"git": {
"branch": {
"tree": {
"glob": [
"ci/dagger.json",
"core/integration/testdata/modules/go/basic/dagger.json",
"core/integration/testdata/modules/go/broken/dagger.json",
"core/integration/testdata/modules/go/ifaces/dagger.json",
"core/integration/testdata/modules/go/ifaces/impl/dagger.json",
"core/integration/testdata/modules/go/ifaces/test/dagger.json",
"core/integration/testdata/modules/go/namespacing/dagger.json",
"core/integration/testdata/modules/go/namespacing/sub1/dagger.json",
"core/integration/testdata/modules/go/namespacing/sub2/dagger.json",
"core/integration/testdata/modules/go/basic/dagger.json",
"core/integration/testdata/modules/go/broken/dagger.json",
"core/integration/testdata/modules/go/ifaces/dagger.json",
"core/integration/testdata/modules/go/ifaces/impl/dagger.json",
"core/integration/testdata/modules/go/ifaces/test/dagger.json",
"core/integration/testdata/modules/go/namespacing/dagger.json",
"core/integration/testdata/modules/go/namespacing/sub1/dagger.json",
"core/integration/testdata/modules/go/namespacing/sub2/dagger.json",
"core/integration/testdata/modules/go/basic/dagger.json",
"core/integration/testdata/modules/go/broken/dagger.json",
"core/integration/testdata/modules/go/ifaces/dagger.json",
"core/integration/testdata/modules/go/ifaces/impl/dagger.json",
"core/integration/testdata/modules/go/ifaces/test/dagger.json",
"core/integration/testdata/modules/go/namespacing/dagger.json",
"core/integration/testdata/modules/go/namespacing/sub1/dagger.json",
"core/integration/testdata/modules/go/namespacing/sub2/dagger.json",
// you get the idea
``` | https://github.com/dagger/dagger/issues/6687 | https://github.com/dagger/dagger/pull/6852 | 0d815c84455006adb0187f6aa144bfe1356a35cc | 49863fb3638be2bfc351fc8db3d5b1a4fc5668e7 | "2024-02-18T01:09:37Z" | go | "2024-03-07T16:43:26Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,669 | ["cmd/codegen/generator/go/templates/format.go", "core/integration/module_test.go"] | 🐞 installed module is unusable (codegen'd with wrong capitalization) | ### What is the issue?
Created module for running remote commands via SSH https://daggerverse.dev/mod/github.com/samalba/dagger-modules/ssh@35ed3e343d7e6faa3eab44570ee7531914dd4e65
I initialized the module with:
`dagger init --name ssh --sdk go --source .`
The module code uses a struct named `Ssh`. It works fine as standalone. However when you install the module from another module, it's available via `dag.SSH()` (different capitalization), which then fails to compile, because it cannot find the module version.
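The mismatch comes from Go-initialism rewriting applied during codegen; a toy sketch of the kind of mapping involved (the table is abbreviated and this is not the SDK's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// initialisms mimics the Go-style rewriting that turns "Ssh" into "SSH"
// during codegen; the mismatch with the user's original "Ssh" is what
// broke the generated bindings.
var initialisms = map[string]string{"Ssh": "SSH", "Http": "HTTP", "Url": "URL"}

// goName rewrites a type name to its Go-initialism form, leaving
// unrecognized names unchanged.
func goName(name string) string {
	for from, to := range initialisms {
		if strings.EqualFold(name, from) {
			return to
		}
	}
	return name
}

func main() {
	fmt.Println(goName("Ssh")) // SSH
}
```

The bug is that one side of the codegen applied this rewriting while the other kept the original capitalization, so `dag.SSH()` referenced a symbol generated under a different name.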
Simple way to reproduce: install the module from any other module and try to use it.
### Dagger version
dagger v0.9.10 (registry.dagger.io/engine) darwin/arm64 | https://github.com/dagger/dagger/issues/6669 | https://github.com/dagger/dagger/pull/6692 | a659c04b9982ef90a999dc20efb9485b11eda556 | 15cb7d10a00d0e0b19ea1a2e8fc07cf8c360d04c | "2024-02-13T22:55:54Z" | go | "2024-02-20T11:15:21Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,664 | ["cmd/dagger/module.go", "core/integration/module_config_test.go"] | Unable to reference remote modules by short name | Version: 0.9.10
What I expect to work:
```sh
dagger init
dagger install github.com/shykes/daggerverse/hello
dagger -m hello functions
```
What I get:
`Error: failed to get configured module: failed to get local root path: input: resolve: moduleSource: resolveContextPathFromCaller: cannot resolve non-local module source from caller`
Additionally, when I run `dagger -m hello functions`, I get local files created mimicking the module's remote ref:
```sh
$ tree github.com
github.com
└── shykes
└── daggerverse
└── hello@ac880927d5368eaf2e5d94450e587732753df1a6
3 directories, 0 files
``` | https://github.com/dagger/dagger/issues/6664 | https://github.com/dagger/dagger/pull/6668 | 430ea3a7fec9f4e88584e1aa352a7e43c083e518 | 41a311347e4d8539b2206903a5e93acaf1108d34 | "2024-02-13T17:43:20Z" | go | "2024-02-19T17:06:53Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,662 | ["cmd/codegen/generator/go/templates/module_objects.go", "core/integration/module_test.go"] | Using embedded fields with go modules fails with unhelpful error | Take the following code:
```go
type Playground struct {
// This breaks
*Directory
// This doesn't
// Directory *Directory
}
func New() Playground {
return Playground{Directory: dag.Directory()}
}
```
When attempting to evaluate anything on the `Playground` object, we get the following error:
```
Error: response from query: input: resolve: playground: failed to convert return value: unexpected result value type string for object "Playground"
``` | https://github.com/dagger/dagger/issues/6662 | https://github.com/dagger/dagger/pull/6715 | 2f975ef29ad78e08c5b9328f6db3797b4c57da69 | b966257dbc24b714e6ee39f01158f10f8fa24fd3 | "2024-02-13T15:54:25Z" | go | "2024-02-26T15:11:43Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,659 | ["cmd/codegen/generator/go/generator.go", "core/integration/module_test.go"] | Error running `dagger init`: `package name is empty` | To reproduce this, try initializing a dagger module in a directory that already contains a `go.work`. For example:
```
$ git clone https://github.com/dagger/dagger.git
$ go work init
$ dagger init --sdk=go
✘ ModuleSource.asModule: Module! 2.0s
✘ Module.withSource(
source: ✔ ModuleSource.resolveFromCaller: ModuleSource! 0.0s
): Module! 2.0s
✘ Container.directory(path: "/src"): Directory! 1.4s
✔ rm /dagger/dagger.gen.go 0.4s
✘ exec /usr/local/bin/codegen --module-context /src --module-name dagger --propagate-logs=true --introspection-json-path /schema.json 0.9s
┃ Error: load package ".": package name is empty
✘ generating go module: dagger 0.4s
┃ writing dagger.gen.go
┃ writing go.mod
┃ writing go.sum
┃ writing main.go
┃ creating directory querybuilder
┃ writing querybuilder/marshal.go
┃ writing querybuilder/querybuilder.go
┃ needs another pass...
Error: failed to generate code: input: resolve: moduleSource: withName: withSDK: withSourceSubpath: resolveFromCaller: asModule: failed to create module: failed to update codegen and runtime: failed to generate code: failed to get modified source directory for go module sdk codegen: process "/usr/local/bin/codegen --module-context /src --module-name dagger --propagate-logs=true --introspection-json-path /schema.json" did not complete successfully: exit code: 1
Stderr:
Error: load package ".": package name is empty
```
Originally reported by @nipuna-perera on discord: <https://discord.com/channels/707636530424053791/1206727343918293042>
(reproduced in dagger v0.9.10) | https://github.com/dagger/dagger/issues/6659 | https://github.com/dagger/dagger/pull/6678 | b1afa431038bc0a96e4783b30d34b5c8f67f6488 | 31ddf2787ec1e05fb4ad00c33df767796063705f | "2024-02-13T13:35:20Z" | go | "2024-02-18T23:23:31Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,640 | ["cmd/dagger/module.go", "core/integration/module_test.go", "core/integration/suite_test.go"] | 🐞 CLI: dagger listen --disable-host-read-write fails in v0.9.9 | ### What is the issue?
There is a regression on dagger v0.9.9 with the dagger listen command when the flag `--disable-host-read-write` is specified. Prior to v0.9.8 this was working correctly. Since v0.9.9 it fails with:
```
Error: failed to get configured module: failed to get local root path: input: resolve: moduleSource: resolveContextPathFromCaller: failed to stat source root: failed to receive file bytes message: rpc error: code = Unimplemented desc = unknown service moby.filesync.v1.FileSync
```
If you run this command inside a module it also fails with the same error:
```
$ dagger mod init --name test --sdk go
$ dagger listen --disable-host-read-write
├ [0.01s] loading module
✘ directory ERROR [0.01s]
▶ directory ▶ host.directory /home/matipan/bin/test
✘ upload /home/matipan/bin/test from pop-os (client id: 09oib86lak5rdy6j1tbxf2gbu) ERROR [0.01s]
├ [0.00s] transferring /home/matipan/bin/test:
• Engine: 6241366fb45d (version v0.9.7)
⧗ 1.97s ✔ 6 ✘ 3
Error: failed to get loaded module ID: input: resolve: host: directory: host directory /home/matipan/bin/test: no local sources enabled
input: resolve: host: directory: host directory /home/matipan/bin/test: no local sources enabled
```
### Dagger version
dagger v0.9.9 ([registry.dagger.io/engine](http://registry.dagger.io/engine)) linux/amd64
### Steps to reproduce
```
$ dagger listen --progress plain --disable-host-read-write
```
### Log output
```
Connected to engine 5c478db0e017 (version v0.9.9)
Error: failed to get configured module: failed to get local root path: input: resolve: moduleSource: resolveContextPathFromCaller: failed to stat source root: failed to receive file bytes message: rpc error: code = Unimplemented desc = unknown service moby.filesync.v1.FileSync
``` | https://github.com/dagger/dagger/issues/6640 | https://github.com/dagger/dagger/pull/6732 | b966257dbc24b714e6ee39f01158f10f8fa24fd3 | ca447cd4d7ca6d25e62008d3e1f87100111709df | "2024-02-09T14:29:11Z" | go | "2024-02-26T17:12:49Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,625 | ["core/integration/module_test.go", "core/schema/module.go"] | 🐞 Modules: generated .gitxxx files are put next do dagger.json rather than on source path | Follow-up from:
- https://github.com/dagger/dagger/pull/6575
For example, in our root:
```
dagger init --sdk=python --source=dev
```
This will add, or overwrite, .gitattributes and .gitignore with `/sdk`, when it's actually in `dev/sdk`.
Every time I run `dagger develop`, these files get overwritten.
\cc @sipsma | https://github.com/dagger/dagger/issues/6625 | https://github.com/dagger/dagger/pull/6699 | 4a04803cfb834c39b39ef7bac57fcf7b74c35d38 | 77a53a85956942540fb2078ef490ac8eeac56e0e | "2024-02-08T19:20:56Z" | go | "2024-02-20T14:01:32Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,623 | ["core/integration/module_call_test.go", "core/integration/module_config_test.go", "core/integration/module_test.go", "core/modulesource.go", "core/schema/module.go", "core/schema/modulesource.go", "core/schema/sdk.go"] | Need integ tests for git modules | Right now all our integ tests only use local refs because git modules are locked into github repos and we've been trying to avoid tests depending on modules in an external git repo. However, I think we should just bite the bullet on that at this point since missing that test coverage is too big a gap. | https://github.com/dagger/dagger/issues/6623 | https://github.com/dagger/dagger/pull/6693 | 6a3689eeb680920fe5f830ac972be3dc1fa4f29b | a659c04b9982ef90a999dc20efb9485b11eda556 | "2024-02-08T18:15:16Z" | go | "2024-02-20T10:52:53Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,532 | ["cmd/codegen/generator/go/templates/module_funcs.go", "cmd/codegen/generator/go/templates/modules.go", "core/integration/module_test.go"] | 🐞 Confusing error message when `Context` is not the first parameter of a function | ### What is the issue?
Mostly what the title says. The error message I get from putting the `Context` in the wrong place is cryptic.
### Dagger version
dagger v0.9.7
### Steps to reproduce
Create any Zenith module function and put the `context.Context` in any location but the first parameter.
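For example, a function shaped like `bad` below triggers it. The reflection check is an illustrative sketch of the validation the codegen could perform to fail with a clear message instead of a nil-pointer panic (it is not the SDK's actual code):

```go
package main

import (
	"context"
	"fmt"
	"reflect"
)

// ctxIsFirstParam reports whether fn's context.Context parameter, if it
// has one, is the first parameter. A codegen could use a check like
// this to emit a clear error instead of crashing.
func ctxIsFirstParam(fn interface{}) bool {
	t := reflect.TypeOf(fn)
	ctxType := reflect.TypeOf((*context.Context)(nil)).Elem()
	for i := 0; i < t.NumIn(); i++ {
		if t.In(i).Implements(ctxType) {
			return i == 0
		}
	}
	return true // no context parameter at all is fine
}

func good(ctx context.Context, name string) string { return name }
func bad(name string, ctx context.Context) string  { return name }

func main() {
	fmt.Println(ctxIsFirstParam(good)) // true
	fmt.Println(ctxIsFirstParam(bad))  // false
}
```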
### Log output
```
Error: failed to automate vcs: failed to get vcs ignored paths: input: resolve: module: withSource: failed to get modified source directory for go module sdk codegen: process "/usr/local/bin/codegen --module . --propagate-logs=true --introspection-json-path /schema.json" did not complete successfully: exit code: 1
input: resolve: module: withSource: failed to get modified source directory for go module sdk codegen: process "/usr/local/bin/codegen --module . --propagate-logs=true --introspection-json-path /schema.json" did not complete successfully: exit code: 1
```
This was also in the output
```
Stderr:
internal error during module code generation: runtime error: invalid memory address or nil pointer dereference
``` | https://github.com/dagger/dagger/issues/6532 | https://github.com/dagger/dagger/pull/6551 | 5b273004464711d1efcf427da9cefa7dc389497d | dcee33ec84858610450ef30ddfcad60a9b9be053 | "2024-01-30T19:02:52Z" | go | "2024-02-07T17:20:29Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,510 | ["core/container.go", "core/integration/container_test.go"] | 🐞 `with_user` breaks `with_exec`'s `stdin` | ### What is the issue?
`with_exec`'s `stdin` parameter enables writing content to the command's standard input before it closes. However, if you switch to another user, it breaks without any notification, preventing the command from executing.
### Dagger version
dagger v0.9.7 (registry.dagger.io/engine) darwin/arm64
### Steps to reproduce
This works:
```python
import anyio
import dagger
from dagger import dag
async def main():
async with dagger.connection():
out = (
await dag.container()
.from_("alpine:latest")
.with_exec(
["sh"],
stdin="""
echo "Hello World!"
for i in $(seq 1 10); do
echo "$i"
sleep 1
done
""",
)
.stdout()
)
print(out)
anyio.run(main)
```
Outputs:
```
Hello World!
1
2
3
4
5
6
7
8
9
10
⠼ Disconnecting⏎
```
This fails:
```python
import anyio
import dagger
from dagger import dag
async def main():
async with dagger.connection():
out = (
await dag.container()
.from_("alpine:latest")
.with_exec(["adduser", "-D", "bob"])
.with_user("bob")
.with_exec(
["sh"],
stdin="""
echo "Hello World!"
for i in $(seq 1 10); do
echo "$i"
sleep 1
done
""",
)
.stdout()
)
print(out)
anyio.run(main)
```
Outputs:
```
⠇ Disconnecting⏎
```
### Log output
_No response_ | https://github.com/dagger/dagger/issues/6510 | https://github.com/dagger/dagger/pull/6511 | 30b22dd06e4366aed01f8f86d0a1729835b12aec | 6a31727f759d9137f5940458a06e196ab99b0717 | "2024-01-26T20:02:16Z" | go | "2024-01-29T12:22:43Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,501 | ["engine/buildkit/client.go"] | 🐞 Logs are duplicated when a vertex is used multiple times | ### What is the issue?
This issue is only on `main`; the current release, v0.9.7, is not affected.
If you run a pipeline with a vertex that is used multiple times in the same session, the log output will be duplicated, possibly with each extra use (i.e. cache hit).
### Dagger version
main
### Steps to reproduce
Once #6456 is merged, run a function that calls another function twice; you should see duplicate logs in its `codegen` vertex.
### Log output
`main`: *(screenshot showing the duplicated log output; image not preserved)*

`v0.9.7`: *(screenshot showing the logs appearing only once; image not preserved)* | https://github.com/dagger/dagger/issues/6501 | https://github.com/dagger/dagger/pull/6505 | 53179a064559cf376fa2ad7596e32bd4e4934c74 | 30b22dd06e4366aed01f8f86d0a1729835b12aec | "2024-01-26T02:30:45Z" | go | "2024-01-29T11:51:30Z"
closed | dagger/dagger | https://github.com/dagger/dagger | 6,430 | ["engine/buildkit/client.go"] | 🐞 Service stop semantics work differently between SDK and Modules | ### What is the issue?
Initially reported by @matipan here: https://discord.com/channels/707636530424053791/1120503349599543376/1196809237150584995
Given the current function
```go
func (m *Foo) Nginx(ctx context.Context) error {
svc := dag.Container().
From("nginx").
WithExposedPort(80).
AsService()
tunnel, err := dag.Host().Tunnel(svc).Start(ctx)
if err != nil {
return err
}
defer tunnel.Stop(ctx)
endpoint, err := tunnel.Endpoint(ctx)
if err != nil {
return err
}
res, err := http.Get("http://" + endpoint + "/")
if err != nil {
return err
}
defer res.Body.Close()
io.Copy(os.Stdout, res.Body)
return nil
}
```
upon calling `dagger call nginx`, dagger will hang for a long time in the `defer tunnel.Stop(ctx)` call. If instead of just stopping the tunnel, we stop both the service and the tunnel in the defer function, this works as intended.
This doesn't happen while using the SDK directly without modules, the `defer` statement described in the example returns immediately.
### Dagger version
v0.9.6
### Steps to reproduce
Run the snippet above withing a context of a module.
### Log output
_No response_ | https://github.com/dagger/dagger/issues/6430 | https://github.com/dagger/dagger/pull/6518 | 8483a5e7ace6174c60e37ba395ddd1ad9b849c1e | 6982b28be3c7b40fb5b5dae70601077f27bae1b8 | "2024-01-16T14:58:22Z" | go | "2024-01-29T19:37:23Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,416 | [".changes/unreleased/Added-20240116-163957.yaml", "core/integration/file_test.go", "core/schema/file.go", "docs/docs-graphql/schema.graphqls", "sdk/elixir/lib/dagger/gen/file.ex", "sdk/go/dagger.gen.go", "sdk/python/src/dagger/client/gen.py", "sdk/rust/crates/dagger-sdk/src/gen.rs", "sdk/typescript/api/client.gen.ts"] | ✨ Add name field to File | ### What are you trying to do?
Files passed to `dagger call` currently do not preserve their original file names.
### Why is this important to you?
Some tools require the original file name (for example: Spectral requires the file name to determine the file type)
Discussed it with @sipsma here: https://discord.com/channels/707636530424053791/1120503349599543376/1195832958150529165
### How are you currently working around this?
I pass the original file name to the module. | https://github.com/dagger/dagger/issues/6416 | https://github.com/dagger/dagger/pull/6431 | 61bf06b970402efa254a40f13c4aee98adfbdb42 | c716a042f290b11c6122d247f8b31651adb5f1d0 | "2024-01-13T21:04:31Z" | go | "2024-01-17T16:17:48Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,409 | ["cmd/dagger/call.go", "cmd/dagger/flags.go", "cmd/dagger/functions.go", "core/integration/module_call_test.go"] | ✨ Allow cache volumes as CLI parameters | ### What are you trying to do?
I have a module with the following constructor:
```go
func New(
// ...
// Disable mounting cache volumes.
disableCache Optional[bool],
// Module cache volume to mount at /go/pkg/mod.
modCache Optional[*CacheVolume],
// Build cache volume to mount at ~/.cache/go-build.
buildCache Optional[*CacheVolume],
) *Go {
// ...
}
```
Currently, running this module in the CLI fails due to CacheVolume being an unsupported CLI argument:
```shell
❯ dagger call -m "github.com/sagikazarmark/daggerverse/go@main" --help
✘ load call ERROR [1.21s]
├ [0.56s] loading module
├ [0.65s] loading objects
┃ Error: unsupported object type "CacheVolume" for flag: mod-cache
```
Given cache volumes are created by their names (and that other types of objects, like files, are also referenced simply by their path or ID), I think it would be useful to do the same for cache volumes. In this instance:
```shell
❯ dagger call -m "github.com/sagikazarmark/daggerverse/go@main" --mod-cache go-build
```
An alternative I can imagine is adding a pragma comment for ignoring certain arguments from the CLI, but including them in the generated code. That's a temporary measure, but would still be easier for module developers, so they can utilize constructors instead of falling back to method chaining for tasks like setting cache.
### Why is this important to you?
_No response_
### How are you currently working around this?
I will probably remove those arguments from the constructor for now.
I might add them to the module API, but honestly, I'd rather not. | https://github.com/dagger/dagger/issues/6409 | https://github.com/dagger/dagger/pull/6520 | 0b91d2a3e04c81ad0dad799d34af30d3801ae76f | 7350f0b8dd5ff74b7ae0ddfd05e35c37bf16712b | "2024-01-12T07:04:04Z" | go | "2024-01-31T15:53:55Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,369 | ["cmd/codegen/generator/go/templates/module_objects.go"] | 🐞 Optional type in module field yields error | ### What is the issue?
When using `Optional` in a module field type (combined with constructors) Dagger yields the following error:
> Error: query module objects: json: error calling MarshalJSON for type *dagger.Module: returned error 400 Bad Request: failed to get schema for module "ci": failed to create field: failed to get mod type for field "githubActor"
### Dagger version
dagger v0.9.5 (registry.dagger.io/engine) darwin/arm64
### Steps to reproduce
Create a module with optional fields:
```go
type Ci struct {
GithubActor Optional[string]
GithubToken Optional[*Secret]
}
func New(
// Actor the token belongs to.
githubActor Optional[string],
// Token to access the GitHub API.
githubToken Optional[*Secret],
) *Ci {
return &Ci{
GithubActor: githubActor,
GithubToken: githubToken,
}
}
```
Run `dagger call` with any parameters.
### Log output
Error: query module objects: json: error calling MarshalJSON for type *dagger.Module: returned error 400 Bad Request: failed to get schema for module "ci": failed to create field: failed to get mod type for field "githubActor" | https://github.com/dagger/dagger/issues/6369 | https://github.com/dagger/dagger/pull/6370 | 42d9870f1535cff19dc2ca85134aee6ffcd3f0dd | 25955caab25bc35543ebcd1c0746c857533c7021 | "2024-01-07T22:56:03Z" | go | "2024-01-09T11:09:04Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,368 | ["docs/current_docs/cookbook.mdx", "docs/current_docs/guides/723462-use-secrets.mdx", "docs/current_docs/quickstart/635927-caching.mdx"] | Document that changing secrets doesn't invalidate the cache | ### What is the issue?
This is coming from a Discord question here: https://discord.com/channels/707636530424053791/1193141267903815700
Our official secrets docs (https://docs.dagger.io/723462/use-secrets/) do not mention anything about the fact that changing secrets doesn't invalidate the cache. I think it's important to highlight this property so users are aware of this.
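The property follows from how operations are cached: the secret *reference* participates in the cache key, but the plaintext value does not. A toy, non-Dagger illustration of that idea:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// cacheKey is a toy illustration of content-addressed caching where the
// secret's *name* participates in the key but its value never does, so
// rotating the value yields the same key (a cache hit).
func cacheKey(command string, secretName string) string {
	h := sha256.Sum256([]byte(command + "\x00" + secretName))
	return fmt.Sprintf("%x", h[:8])
}

func main() {
	// Same command, same secret name, different secret values: same key.
	fmt.Println(cacheKey(`curl -H "Authorization: $TOKEN" api`, "TOKEN"))
	fmt.Println(cacheKey(`curl -H "Authorization: $TOKEN" api`, "TOKEN"))
}
```

Documenting this would help users understand why rotating a secret does not re-run previously cached steps that consumed it.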
| https://github.com/dagger/dagger/issues/6368 | https://github.com/dagger/dagger/pull/6472 | 71e6723bb75ebf1611d2fa7af39763119350bb1b | cd38b2f3bb2a6914705b0d7549d3fe0610cf7fd8 | "2024-01-06T15:26:48Z" | go | "2024-01-23T15:49:53Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,359 | ["cmd/dagger/call.go", "core/integration/module_call_test.go", "core/integration/module_test.go"] | CLI: update handling of various core types | As described in [this comment](https://github.com/dagger/dagger/issues/6229#issuecomment-1863649336) (and that issue generally), we need to make some adjustments to the CLIs behavior when `dagger call` ends in various core types.
There's a handful of related adjustments needed here. Creating as a checklist for now to save issue spam; can break down into more issues if useful though. I'm including my initial suggestions on how to handle these, but to be clear these are not finalized decisions and the final call can be made as part of implementing this
- [ ] Handle arbitrary user objects
- Print fields (as in `TypeDef` fields, so just trivially resolvable values) as a json object
- [x] Handle container
- call `sync` only, rely on progress output for anything else
- [x] Handle directory
- call `sync` only, rely on progress output for anything else
- [x] Handle file
- call `sync` only, rely on progress output for anything else
- [ ] Handle IDs returned by `sync` (i.e. the case where the user explicitly chains `sync`)
- just rely on progress output, don't show the id
| https://github.com/dagger/dagger/issues/6359 | https://github.com/dagger/dagger/pull/6482 | 2999573c5a34e85b8baac8c0150881d9c08a86b8 | d91ac42c196873830c2e0876b251f3bf4d62ea49 | "2024-01-03T20:19:06Z" | go | "2024-01-26T18:48:04Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,358 | ["cmd/dagger/cloud.go", "cmd/dagger/functions.go", "cmd/dagger/main.go", "cmd/dagger/module.go", "cmd/dagger/query.go", "cmd/dagger/run.go"] | CLI: improve display of core API in `--help` output | https://github.com/dagger/dagger/pull/6293 exposed the core API in `dagger call`, which results in `--help` and `dagger functions` output to be pretty overwhelming and messy when dealing with core types like `Container`.
There's most likely going to be a lot of little things needed to address this, possibly including but not limited to:
1. displaying multi-line doc strings better
2. sorting of functions more cleanly
I think we should start out trying to do this purely from the CLI without engine-side changes. But if that ends up not being feasible then we can consider whether typedefs could get optional support for annotating CLI specific presentation metadata. Preferable to avoid going down that road if possible though. | https://github.com/dagger/dagger/issues/6358 | https://github.com/dagger/dagger/pull/6549 | 7a63c3d019530d1f08b2f72cc28e753d99b5896d | 7c0ee45d762719005fad5981d41b038811ebb7f6 | "2024-01-03T20:03:17Z" | go | "2024-02-01T14:32:43Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,357 | ["cmd/dagger/call.go", "cmd/dagger/download.go", "cmd/dagger/functions.go", "core/integration/module_call_test.go"] | CLI: support for `-o` flag | As described in [this comment](https://github.com/dagger/dagger/issues/6229#issuecomment-1863649336), we want to add a `-o` flag for smartly redirecting output in `dagger call`.
---
This flag would be applicable for any return type and result in that result being written to the caller's filesystem rather than written to stdout.
For simple cases like a `string` return type, this would have the same end effect as a shell redirection. The difference is that we would allow *some* special case handling for it too. So if `myfunc` returns a `Directory`, this would work:
```
dagger run -o ./some/output/dir myfunc
```
and result in the directory returned by `myfunc` being written to `./some/output/dir`.
To start, `-o` would be exclusively for writing to the cli caller's filesystem.
* I suppose you could in theory expand to other output destinations via a `scheme://` type approach if we ever wanted to, but that's just a possible future extension point, not in scope now and just a random idea.
Also important to note that this would not inherently *remove* support for the full `... export --path ./some/output/dir` style either, it would just be a supplement to it. | https://github.com/dagger/dagger/issues/6357 | https://github.com/dagger/dagger/pull/6432 | 5201d6cf969a539189ec996eaa543233e20edd21 | 829fe2ca13b97ac3a4ee98401ebea7c7d1f4a9dd | "2024-01-03T19:54:37Z" | go | "2024-01-22T17:33:09Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,312 | ["cmd/codegen/introspection/introspection.go", "core/integration/module_test.go", "core/moddeps.go", "core/module.go", "core/schema/schema.go", "core/schema/sdk.go", "dagql/types.go"] | Modules: `dag.Host()` name is extremely confusing | When module code uses `dag.Host()`, the "host" it refers to is the module container's "host", not the CLI caller's host.
While this is technically consistent, it's very non-obvious at first glance and a constant source of confusion.
I think at a minimum we need to change `Host` to something with a more obvious name. TBD if this change also impacts non-module SDKs or if it's somehow scoped just to the codegen for modules.
There's certainly larger scope changes to the API that could also be worth discussing, but changing to a more clear name seems like the bare-minimum baseline thing to do here.
---
As a side-note, support for Module code being able to read files/env vars/etc. from the *CLI caller's host* is a separate topic, discussed in this other issue https://github.com/dagger/dagger/issues/6112 | https://github.com/dagger/dagger/issues/6312 | https://github.com/dagger/dagger/pull/6535 | c241d901a48f699a1d62f0daad882c88730e1fba | 7a63c3d019530d1f08b2f72cc28e753d99b5896d | "2023-12-21T19:46:28Z" | go | "2024-02-01T14:23:42Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,304 | ["sdk/rust/crates/dagger-sdk/tests/mod.rs"] | tests: flaky rust test | cc @dagger/sdk-rust :heart:
I frequently see this rust test failing: https://github.com/dagger/dagger/actions/runs/7263920431/job/19790293305 (which passes upon a re-run):
```
Diff < left / right > :
failed to query dagger engine: domain error:
Look at json field for more details
<unexpected status from HEAD request to https://mirror.gcr.io/v2/library/fake.invalid/manifests/latest?ns=docker.io: 429 Too Many Requests
>pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
```
Not quite sure why this shows up, but it appears reasonably frequently, so would be nice to clean up! Will take a look myself if I find some spare time :pray: | https://github.com/dagger/dagger/issues/6304 | https://github.com/dagger/dagger/pull/6385 | e355c57d5af509ba595f7fd0e851c561b0f724de | fcf2f9b1cae19d340f5f42ba788d13bc157e4198 | "2023-12-21T11:02:19Z" | go | "2024-01-10T11:06:34Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,303 | ["core/integration/module_up_test.go", "core/schema/service.go"] | Bug: `bind: address already in use` when using `dagger up --port` | Hit this while backfilling integ test coverage for `dagger up` as part of CLI changes. Repro below.
Module code:
```go
package main
import "context"
func New(ctx context.Context) *Test {
return &Test{
Ctr: dag.Container().
From("python").
WithMountedDirectory(
"/srv/www",
dag.Directory().WithNewFile("index.html", "hey there"),
).
WithWorkdir("/srv/www").
WithExposedPort(8000).
WithExec([]string{"python", "-m", "http.server"}),
}
}
type Test struct {
Ctr *Container
}
```
Everything works when using `--native`:
```console
sipsma@dagger_dev:~/repo/github.com/sipsma/daggerverse/test$ dagger up --native ctr
∅ dagger up ctr CANCELED [10.17s]
┃ 8000/TCP: tunnel 0.0.0.0:8000 -> cpi9mli1o4u78.b7d44b699trss.dagger.local:8000
┃ Error: context canceled
• Cloud URL: https://dagger.cloud/runs/fc1afa30-d2e8-473c-ade3-3910adbfb046
• Engine: 3b84e1257d0d (version v0.9.4)
⧗ 11.78s ✔ 46 ∅ 8
```
But if I run anything with `--port` I get bind errors, including with `--port 8000:8000` which I thought would have been the same as `--native`:
```console
sipsma@dagger_dev:~/repo/github.com/sipsma/daggerverse/test$ dagger up --port 23456:8000 ctr
✘ dagger up ctr ERROR [1.62s]
┃ Error: failed to start tunnel: input:1: host.tunnel.start host to container: failed to receive listen response: rpc error: code = Unknown desc = listen tcp 0.0.0.0:23456: bind: address already in use
✘ start ERROR [0.39s]
• Cloud URL: https://dagger.cloud/runs/849eceaf-86f3-4f1d-b8db-b23a31fb69b1
• Engine: 3b84e1257d0d (version v0.9.4)
⧗ 2.77s ✔ 44 ∅ 8 ✘ 2
sipsma@dagger_dev:~/repo/github.com/sipsma/daggerverse/test$ dagger up --port 8000:8000 ctr
✘ dagger up ctr ERROR [1.76s]
┃ Error: failed to start tunnel: input:1: host.tunnel.start host to container: failed to receive listen response: rpc error: code = Unknown desc = listen tcp 0.0.0.0:8000: bind: address already in use
✘ start ERROR [0.43s]
• Cloud URL: https://dagger.cloud/runs/bcd2c12b-66cb-4a0d-abcd-4236201617e5
• Engine: 3b84e1257d0d (version v0.9.4)
⧗ 3.08s ✔ 44 ∅ 8 ✘ 2
```
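For context, `bind: address already in use` is the OS-level `EADDRINUSE` error, raised whenever a second socket tries to bind a host:port that is already held — which suggests the tunnel path attempts a second bind on a port it (or something else) already holds. A minimal Python reproduction of the error itself (illustrative only, not the engine's tunnel code):

```python
import errno
import socket

def double_bind_hits_eaddrinuse() -> bool:
    # Bind and listen on an ephemeral port, then try to bind the same
    # port again; the second bind fails with EADDRINUSE -- the errno
    # behind "bind: address already in use".
    first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    first.bind(("127.0.0.1", 0))
    first.listen()
    port = first.getsockname()[1]
    second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        second.bind(("127.0.0.1", port))
    except OSError as exc:
        return exc.errno == errno.EADDRINUSE
    finally:
        second.close()
        first.close()
    return False
```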
---
Not sure if specific to the CLI somehow or more general, but the fact that `--native` works and `--port 8000:8000` doesn't is quite mysterious. | https://github.com/dagger/dagger/issues/6303 | https://github.com/dagger/dagger/pull/6626 | 7f31f89b7e121368199403ff36cbbece3b31a6cb | 5ee49b9e9a955432d688157893b299f8a6180b1d | "2023-12-20T21:23:29Z" | go | "2024-02-08T20:42:59Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,286 | ["cmd/codegen/generator/go/templates/module_funcs.go", "cmd/codegen/generator/go/templates/modules.go", "core/integration/module_test.go", "core/schema/usermod.go"] | 🐞 Zenith: camelCase automagic issues with PythonSDK | ### What is the issue?
When attempting to run the examples provided in the dagger module's documentation, I encountered two issues that presumably have the same underlying cause regarding the automatic camelCase conversion performed in the Python SDK.
## First issue
Take the following example (taken from [here](https://docs.dagger.io/zenith/developer/python/539756/advanced-programming#chain-modules-together)):
```python
"""A Dagger module for saying hello world!."""
from dagger import field, function, object_type
@object_type
class HelloWorld:
greeting: str = field(default="Hello")
name: str = field(default="World")
@function
def with_greeting(self, greeting: str) -> "HelloWorld":
self.greeting = greeting
return self
@function
def with_name(self, name: str) -> "HelloWorld":
self.name = name
return self
@function
def message(self) -> str:
return f"{self.greeting} {self.name}!"
```
And here is an example query for this module:
```graphql
{
helloWorld {
message
withName(name: "Monde") {
withGreeting(greeting: "Bonjour") {
message
}
}
}
}
```
The result is as expected
```plain
{
"helloWorld": {
"message": "Hello, World!",
"withName": {
"withGreeting": {
"message": "Bonjour, Monde!"
}
}
}
}
```
Now, if I rename `name` to `my_name`:
```python
@object_type
class HelloWorld:
greeting: str = field(default="Hello")
my_name: str = field(default="World")
@function
def with_greeting(self, greeting: str) -> "HelloWorld":
self.greeting = greeting
return self
@function
def with_my_name(self, my_name: str) -> "HelloWorld":
self.my_name = my_name
return self
@function
def message(self) -> str:
return f"{self.greeting} {self.my_name}!"
```
and use the following query:
```graphql
{
helloWorld {
message
withMyName(myName: "Monde") {
withGreeting(greeting: "Bonjour") {
message
}
}
}
}
```
I get this result
```plain
{
"helloWorld": {
"message": "Hello, World!",
"withMyName": {
"withGreeting": {
"message": "Bonjour World!"
}
}
}
}
```
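The mismatch above points at the snake_case↔camelCase mapping: `name` happens to survive the round trip unchanged, while `myName` apparently never gets mapped back onto the `my_name` attribute when the field is updated. The conversion pair itself is simple — a sketch with hypothetical helpers (not the SDK's actual code):

```python
import re

def to_camel(name: str) -> str:
    # my_name -> myName
    head, *rest = name.split("_")
    return head + "".join(part.title() for part in rest)

def to_snake(name: str) -> str:
    # myName -> my_name
    return re.sub(r"[A-Z]", lambda m: "_" + m.group(0).lower(), name)
```

A clean round trip (`to_snake(to_camel(x)) == x`) is exactly the invariant the field update above seems to violate.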
## Second issue
Take the following example (taken from [here](https://docs.dagger.io/zenith/developer/python/539756/advanced-programming#write-an-asynchronous-module-constructor-function)):
```python
"""A Dagger module for searching an input file."""
import dagger
from dagger import dag, object_type, field, function
@object_type
class Grep:
src: dagger.File = field()
@classmethod
async def create(cls, src: dagger.File | None = None):
if src is None:
src = await dag.http("https://dagger.io")
return cls(src=src)
@function
async def grep(self, pattern: str) -> str:
return await (
dag
.container()
.from_("alpine:latest")
.with_mounted_file("/src", self.src)
.with_exec(["grep", pattern, "/src"])
.stdout()
)
```
Similarly, if I alter this example by renaming `src` to `my_src`:
```python
...
@object_type
class Grep:
my_src: dagger.File = field()
@classmethod
async def create(cls, my_src: dagger.File | None = None):
if my_src is None:
my_src = await dag.http("https://dagger.io")
return cls(my_src=my_src)
...
```
I get the following error:
```shell
$ dagger call grep --pattern dagger
✘ dagger call grep ERROR [1.92s]
┃ Error: response from query: input:1: grep.grep failed to get function output directory: process "/runtime" did not complete successfully: exit code: 1
✘ grep(pattern: "dagger") ERROR [0.85s]
✘ exec /runtime ERROR [0.85s]
┃ ╭───────────────────── Traceback (most recent call last) ──────────────────────╮
┃ │ /runtime:8 in <module> │
┃ │ │
┃ │ 5 from dagger.mod.cli import app │
┃ │ 6 │
┃ │ 7 if __name__ == "__main__": │
┃ │ ❱ 8 │ sys.exit(app()) │
┃ │ 9 │
┃ │ │
┃ │ /sdk/src/dagger/mod/cli.py:32 in app │
┃ │ │
┃ │ 29 │ ) │
┃ │ 30 │ try: │
┃ │ 31 │ │ mod = get_module() │
┃ │ ❱ 32 │ │ mod() │
┃ │ 33 │ except FatalError as e: │
┃ │ 34 │ │ if logger.isEnabledFor(logging.DEBUG): │
┃ │ 35 │ │ │ logger.exception("Fatal error") │
┃ │ │
┃ │ /sdk/src/dagger/mod/_module.py:181 in __call__ │
┃ │ │
┃ │ 178 │ def __call__(self) -> None: │
┃ │ 179 │ │ if self._log_level is not None: │
┃ │ 180 │ │ │ configure_logging(self._log_level) │
┃ │ ❱ 181 │ │ anyio.run(self._run) │
┃ │ 182 │ │
┃ │ 183 │ async def _run(self): │
┃ │ 184 │ │ async with await dagger.connect(): │
┃ │ │
┃ │ /usr/local/lib/python3.11/site-packages/anyio/_core/_eventloop.py:66 in run │
┃ │ │
┃ │ 63 │ │
┃ │ 64 │ try: │
┃ │ 65 │ │ backend_options = backend_options or {} │
┃ │ ❱ 66 │ │ return async_backend.run(func, args, {}, backend_options) │
┃ │ 67 │ finally: │
┃ │ 68 │ │ if token: │
┃ │ 69 │ │ │ sniffio.current_async_library_cvar.reset(token) │
┃ │ │
┃ │ /usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py:1960 in │
┃ │ run │
┃ │ │
┃ │ 1957 │ │ debug = options.get("debug", False) │
┃ │ 1958 │ │ options.get("loop_factory", None) │
┃ │ 1959 │ │ options.get("use_uvloop", False) │
┃ │ ❱ 1960 │ │ return native_run(wrapper(), debug=debug) │
┃ │ 1961 │ │
┃ │ 1962 │ @classmethod │
┃ │ 1963 │ def current_token(cls) -> object: │
┃ │ │
┃ │ /usr/local/lib/python3.11/asyncio/runners.py:190 in run │
┃ │ │
┃ │ 187 │ │ │ "asyncio.run() cannot be called from a running event loop" │
┃ │ 188 │ │
┃ │ 189 │ with Runner(debug=debug) as runner: │
┃ │ ❱ 190 │ │ return runner.run(main) │
┃ │ 191 │
┃ │ 192 │
┃ │ 193 def _cancel_all_tasks(loop): │
┃ │ │
┃ │ /usr/local/lib/python3.11/asyncio/runners.py:118 in run │
┃ │ │
┃ │ 115 │ │ │
┃ │ 116 │ │ self._interrupt_count = 0 │
┃ │ 117 │ │ try: │
┃ │ ❱ 118 │ │ │ return self._loop.run_until_complete(task) │
┃ │ 119 │ │ except exceptions.CancelledError: │
┃ │ 120 │ │ │ if self._interrupt_count > 0: │
┃ │ 121 │ │ │ │ uncancel = getattr(task, "uncancel", None) │
┃ │ │
┃ │ /usr/local/lib/python3.11/asyncio/base_events.py:653 in run_until_complete │
┃ │ │
┃ │ 650 │ │ if not future.done(): │
┃ │ 651 │ │ │ raise RuntimeError('Event loop stopped before Future comp │
┃ │ 652 │ │ │
┃ │ ❱ 653 │ │ return future.result() │
┃ │ 654 │ │
┃ │ 655 │ def stop(self): │
┃ │ 656 │ │ """Stop running the event loop. │
┃ │ │
┃ │ /usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py:1953 in │
┃ │ wrapper │
┃ │ │
┃ │ 1950 │ │ │ _task_states[task] = TaskState(None, None) │
┃ │ 1951 │ │ │ │
┃ │ 1952 │ │ │ try: │
┃ │ ❱ 1953 │ │ │ │ return await func(*args) │
┃ │ 1954 │ │ │ finally: │
┃ │ 1955 │ │ │ │ del _task_states[task] │
┃ │ 1956 │
┃ │ │
┃ │ /sdk/src/dagger/mod/_module.py:185 in _run │
┃ │ │
┃ │ 182 │ │
┃ │ 183 │ async def _run(self): │
┃ │ 184 │ │ async with await dagger.connect(): │
┃ │ ❱ 185 │ │ │ await self._serve() │
┃ │ 186 │ │
┃ │ 187 │ async def _serve(self): │
┃ │ 188 │ │ mod_name = await self._mod.name() │
┃ │ │
┃ │ /sdk/src/dagger/mod/_module.py:193 in _serve │
┃ │ │
┃ │ 190 │ │ resolvers = self.get_resolvers(mod_name) │
┃ │ 191 │ │ │
┃ │ 192 │ │ result = ( │
┃ │ ❱ 193 │ │ │ await self._invoke(resolvers, parent_name) │
┃ │ 194 │ │ │ if parent_name │
┃ │ 195 │ │ │ else await self._register(resolvers, to_pascal_case(mod_na │
┃ │ 196 │ │ ) │
┃ │ │
┃ │ /sdk/src/dagger/mod/_module.py:266 in _invoke │
┃ │ │
┃ │ 263 │ │ ) │
┃ │ 264 │ │ │
┃ │ 265 │ │ resolver = self.get_resolver(resolvers, parent_name, name) │
┃ │ ❱ 266 │ │ return await self.get_result(resolver, parent_json, inputs) │
┃ │ 267 │ │
┃ │ 268 │ async def get_result( │
┃ │ 269 │ │ self, │
┃ │ │
┃ │ /sdk/src/dagger/mod/_module.py:279 in get_result │
┃ │ │
┃ │ 276 │ │ │ isinstance(resolver, FunctionResolver) │
┃ │ 277 │ │ │ and inspect.isclass(resolver.wrapped_func) │
┃ │ 278 │ │ ): │
┃ │ ❱ 279 │ │ │ root = await self.get_root(resolver.origin, parent_json) │
┃ │ 280 │ │ │
┃ │ 281 │ │ try: │
┃ │ 282 │ │ │ result = await resolver.get_result(self._converter, root, │
┃ │ │
┃ │ /sdk/src/dagger/mod/_module.py:325 in get_root │
┃ │ │
┃ │ 322 │ │ if not parent: │
┃ │ 323 │ │ │ return origin() │
┃ │ 324 │ │ │
┃ │ ❱ 325 │ │ return await asyncify(self._converter.structure, parent, origi │
┃ │ 326 │ │
┃ │ 327 │ def field( │
┃ │ 328 │ │ self, │
┃ │ │
┃ │ /usr/local/lib/python3.11/site-packages/anyio/to_thread.py:49 in run_sync │
┃ │ │
┃ │ 46 │ │ │ stacklevel=2, │
┃ │ 47 │ │ ) │
┃ │ 48 │ │
┃ │ ❱ 49 │ return await get_async_backend().run_sync_in_worker_thread( │
┃ │ 50 │ │ func, args, abandon_on_cancel=abandon_on_cancel, limiter=limite │
┃ │ 51 │ ) │
┃ │ 52 │
┃ │ │
┃ │ /usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py:2103 in │
┃ │ run_sync_in_worker_thread │
┃ │ │
┃ │ 2100 │ │ │ │ │ worker_scope = scope._parent_scope │
┃ │ 2101 │ │ │ │ │
┃ │ 2102 │ │ │ │ worker.queue.put_nowait((context, func, args, future, │
┃ │ ❱ 2103 │ │ │ │ return await future │
┃ │ 2104 │ │
┃ │ 2105 │ @classmethod │
┃ │ 2106 │ def check_cancelled(cls) -> None: │
┃ │ │
┃ │ /usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py:823 in │
┃ │ run │
┃ │ │
┃ │ 820 │ │ │ │ │ exception: BaseException | None = None │
┃ │ 821 │ │ │ │ │ threadlocals.current_cancel_scope = cancel_scope │
┃ │ 822 │ │ │ │ │ try: │
┃ │ ❱ 823 │ │ │ │ │ │ result = context.run(func, *args) │
┃ │ 824 │ │ │ │ │ except BaseException as exc: │
┃ │ 825 │ │ │ │ │ │ exception = exc │
┃ │ 826 │ │ │ │ │ finally: │
┃ │ │
┃ │ /usr/local/lib/python3.11/site-packages/cattrs/converters.py:332 in │
┃ │ structure │
┃ │ │
┃ │ 329 │ │
┃ │ 330 │ def structure(self, obj: Any, cl: Type[T]) -> T: │
┃ │ 331 │ │ """Convert unstructured Python data structures to structured │
┃ │ ❱ 332 │ │ return self._structure_func.dispatch(cl)(obj, cl) │
┃ │ 333 │ │
┃ │ 334 │ # Classes to Python primitives. │
┃ │ 335 │ def unstructure_attrs_asdict(self, obj: Any) -> Dict[str, Any]: │
┃ │ in structure_Grep:9 │
┃ ╰──────────────────────────────────────────────────────────────────────────────╯
┃ ClassValidationError: While structuring Grep (1 sub-exception)
• Engine: 9e91eb66912d (version v0.9.4)
⧗ 20.86s ✔ 112 ∅ 16 ✘ 3
```
### Dagger version
dagger v0.9.4 (registry.dagger.io/engine) darwin/arm64
### Steps to reproduce
_No response_
### Log output
_No response_ | https://github.com/dagger/dagger/issues/6286 | https://github.com/dagger/dagger/pull/6287 | 1502402c4c028f15165a14ea8f05260057c8141e | 62e02912129760bc86c309e5107dd7eb463ac2bf | "2023-12-16T16:19:10Z" | go | "2023-12-20T11:15:41Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,285 | ["sdk/nodejs/package.json", "sdk/nodejs/yarn.lock"] | 🐞 Test linear integration | ### What is the issue?
New issue to test linear integration.
### Dagger version
v0.9.4
### Steps to reproduce
_No response_
### Log output
_No response_ | https://github.com/dagger/dagger/issues/6285 | https://github.com/dagger/dagger/pull/4314 | 6dbef118931766411a10c8f06124be7b455bc7b9 | 57c4819807f4f2e0da741fe20c8cb8ccd3332fb3 | "2023-12-15T21:18:11Z" | go | "2023-01-09T08:20:52Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,252 | ["core/integration/module_test.go", "core/schema/coremod.go", "core/schema/usermod.go"] | 🐞 Module constructor returns pointer for empty object instead of returning nil for *Directory type | ### What is the issue?
When I created a module constructor with an `// +optional=true` parameter, calling it without the flag returned a pointer to an empty object instead of nil.
Example Code:
```go
package main
func New(
// +optional=true
src *Directory,
) *A {
return &A{Src: src}
}
type A struct {
Src *Directory
}
func (m *A) IsEmpty() bool {
return m.Src == nil
}
```
### Log output
```shell
dagger call is-empty
✔ dagger call is-empty [1.38s]
┃ false
• Cloud URL: https://dagger.cloud/runs/6ed9327f-a4e3-4257-9864-87733b642a5f
• Engine: 64fae4326b32 (version v0.9.4)
⧗ 2.79s ✔ 31 ∅ 7
```
### Steps to reproduce
The code from example can be used for reproducing the issue.
### SDK version
GO SDK v0.9.4
### OS version
macOS 14.0
cc: @jedevc | https://github.com/dagger/dagger/issues/6252 | https://github.com/dagger/dagger/pull/6257 | fd8922f8b964be83bd3fed1490fda114641ac480 | 3b82755058493c63d399bf095b1c3c4b4eba2834 | "2023-12-11T16:59:25Z" | go | "2023-12-13T11:16:00Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,163 | ["core/integration/remotecache_test.go", "engine/buildkit/filesync.go", "engine/sources/blob/blobsource.go"] | 🐞 WithMountedDirectory invalidating remote cache | ### What is the issue?
Given the following pipeline:
```go
func main() {
// initialize Dagger client
ctx := context.Background()
client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
if err != nil {
panic(err)
}
defer client.Close()
src := client.Host().Directory(".", dagger.HostDirectoryOpts{
Include: []string{"bar"},
})
mountCache := client.Host().Directory("mount")
_, err = client.Container().
From("alpine").
WithWorkdir("/root").
WithExec([]string{"apk", "add", "curl"}).
WithMountedDirectory("/mount", mountCache).
WithDirectory(".", src).
WithExec([]string{"cat", "bar"}).
Sync(ctx)
if err != nil {
panic(err)
}
}
```
Running that twice in my local setup correctly caches all operations:
```
130|marcos:tmp/test (⎈ |N/A)$ dagger run go run main.go
┣─╮
│ ▽ init
│ █ [1.28s] connect
│ ┣ [1.11s] starting engine
│ ┣ [0.17s] starting session
│ ┃ OK!
│ ┻
█ [1.36s] go run main.go
┣─╮
│ ▽ host.directory mount
│ █ [0.01s] upload mount from xps (client id: nf66pihk4bujhmj9nw5jrpzps)
│ ┣ [0.00s] transferring mount:
┣─┼─╮
│ │ ▽ host.directory .
│ │ █ [0.01s] upload . from xps (client id: nf66pihk4bujhmj9nw5jrpzps) (include: bar)
│ │ ┣ [0.00s] transferring .:
│ █ │ CACHED upload mount from xps (client id: nf66pihk4bujhmj9nw5jrpzps)
│ █ │ [0.00s] blob://sha256:afad735041289d8662a90d002343ddac7dfdeca7ebb85fd3b1a786ff1a02183c
│ ┣─┼─╮ blob://sha256:afad735041289d8662a90d002343ddac7dfdeca7ebb85fd3b1a786ff1a02183c
│ ┻ │ │
│ █ │ CACHED upload . from xps (client id: nf66pihk4bujhmj9nw5jrpzps) (include: bar)
│ █ │ [0.00s] blob://sha256:f4314233344fbbc7c171c06d5cace46282b4ee97b153ed06e0aa21ce6de98ae1
│ ╭─┫ │ blob://sha256:f4314233344fbbc7c171c06d5cace46282b4ee97b153ed06e0aa21ce6de98ae1
│ │ ┻ │
┣─┼─╮ │
│ │ ▽ │ from alpine
│ │ █ │ [1.11s] resolve image config for docker.io/library/alpine:latest
│ │ █ │ [0.01s] pull docker.io/library/alpine:latest
│ │ ┣ │ [0.01s] resolve docker.io/library/alpine@sha256:eece025e432126ce23f223450a0326fbebde39cdf496a85d8c016293fc851978
│ │ ┣─┼─╮ pull docker.io/library/alpine:latest
│ │ ┻ │ │
█◀┼───┼─╯ CACHED exec apk add curl
█◀╯ │ CACHED copy / /root
█◀────╯ CACHED exec cat bar
```
When using magicache, for some reason the last `exec cat bar` operation doesn't seem to be cached at all:
**output from a newly spawned engine** (local cache is empty):
```
┣─╮
│ ▽ init
│ █ [19.56s] connect
│ ┣ [19.42s] starting engine
│ ┣ [0.14s] starting session
│ ┃ OK!
│ ┻
█ [5.60s] go run main.go
┣─╮
│ ▽ host.directory .
│ █ [0.04s] upload . from xps (client id: fvl7ohtwyhom301qxtt817rpg) (include: bar)
│ ┣ [0.00s] transferring .:
┣─┼─╮
│ │ ▽ host.directory mount
│ │ █ [0.04s] upload mount from xps (client id: fvl7ohtwyhom301qxtt817rpg)
│ │ ┣ [0.00s] transferring mount:
│ █ │ [0.01s] upload . from xps (client id: fvl7ohtwyhom301qxtt817rpg) (include: bar)
│ ┣ │ [0.43s] █████████████ sha256:f4314233344fbbc7c171c06d5cace46282b4ee97b153ed06e0aa21ce6de98ae1
│ ┣ │ [0.01s] extracting sha256:f4314233344fbbc7c171c06d5cace46282b4ee97b153ed06e0aa21ce6de98ae1
│ │ █ [0.01s] upload mount from xps (client id: fvl7ohtwyhom301qxtt817rpg)
│ │ ┣ [0.82s] █████████████ sha256:afad735041289d8662a90d002343ddac7dfdeca7ebb85fd3b1a786ff1a02183c
│ │ ┣ [0.01s] extracting sha256:afad735041289d8662a90d002343ddac7dfdeca7ebb85fd3b1a786ff1a02183c
│ █ │ [0.00s] blob://sha256:f4314233344fbbc7c171c06d5cace46282b4ee97b153ed06e0aa21ce6de98ae1
│ ┣─┼─╮ blob://sha256:f4314233344fbbc7c171c06d5cace46282b4ee97b153ed06e0aa21ce6de98ae1
│ ┻ │ │
│ █ │ [0.00s] blob://sha256:afad735041289d8662a90d002343ddac7dfdeca7ebb85fd3b1a786ff1a02183c
│ ╭─┫ │ blob://sha256:afad735041289d8662a90d002343ddac7dfdeca7ebb85fd3b1a786ff1a02183c
│ │ ┻ │
┣─┼─╮ │
│ │ ▽ │ from alpine
│ │ █ │ [2.12s] resolve image config for docker.io/library/alpine:latest
│ │ █ │ [0.01s] pull docker.io/library/alpine:latest
│ │ ┣ │ [0.01s] resolve docker.io/library/alpine@sha256:eece025e432126ce23f223450a0326fbebde39cdf496a85d8c016293fc851978
│ │ ┣─┼─╮ pull docker.io/library/alpine:latest
│ │ ┻ │ │
█◀┼───┼─╯ CACHED exec apk add curl
█◀┼───╯ CACHED copy / /root
█◀╯ [0.30s] exec cat bar
┃ bar
┻
```
^ as you can see `exec cat bar` is not cached.
If I check Dagger cloud, it also shows the same:

Looking a bit more in-depth, it seems there is a `merge` op which is not cached when using magicache

Dagger cloud URL: https://dagger.cloud/runs/058cf603-4334-4b85-8fd2-80483e379df2
cc @RonanQuigley
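Background for the uncached `merge` above: a build cache keys each op on a digest of its own definition plus the digests of its inputs, so if any input (here, presumably one of the uploaded directories) resolves to a different digest after a remote-cache import, everything downstream of it — the `merge` included — re-executes. A toy content-addressed key (purely illustrative, not BuildKit's actual scheme):

```python
import hashlib
import json

def op_key(op: dict, input_keys: list[str]) -> str:
    # Any change to an input key changes this key, invalidating the op
    # and everything that depends on it.
    payload = json.dumps({"op": op, "inputs": sorted(input_keys)}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()
```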
### Log output
N/A
### Steps to reproduce
N/A
### SDK version
Go SDK 0.9.3
### OS version
linux | https://github.com/dagger/dagger/issues/6163 | https://github.com/dagger/dagger/pull/6211 | 099f2aebb0b486b6f584de1074f4ff1521541b07 | a789dbe3747ad3cef142102447194d3e59f9ed7f | "2023-11-27T18:54:01Z" | go | "2023-12-06T17:02:45Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,162 | ["cmd/codegen/generator/go/templates/modules.go", "cmd/codegen/generator/go/templates/modules_test.go", "core/integration/module_test.go", "core/integration/testdata/modules/go/minimal/main.go"] | Go SDK Modules: enable general type annotation support via comments | Discord discussion: https://discord.com/channels/707636530424053791/1177373379255349248
As we need to add more support in Go for things like marking a field as private, setting default values, etc.
The most obvious option was to require use of structs (including the `Opts` struct idea, the "inline anonymous struct for args", etc.) and annotate more information via struct tags.
However, after discussion we realized a better general solution would be to not rely on struct tags but instead add support for the equivalent functionality via specially formatted comments. The reasoning being that:
1. Struct tags are essentially just glorified comments (strings that get interpreted by reflection/ast parsing), so there's not much of a practical difference in that respect
2. Comments are more general in that they can be applied equivalently not only to fields of a struct but also to function args (similar to how we support doc strings on args today), functions as a whole, structs as a whole, etc.
There's prior art for this:
1. Go's support for pragmas: https://dave.cheney.net/2018/01/08/gos-hidden-pragmas
2. K8s support for annotations: https://github.com/kubernetes-sigs/controller-tools/blob/881ffb4682cb5882f5764aca5a56fe9865bc9ed6/pkg/crd/testdata/cronjob_types.go#L105-L125
TBD the exact format of comments (i.e. use of a special prefix `//dagger: default=abc` vs something simpler like `//+default abc`).
We should use this to support at least:
1. Optional (can co-exist w/ existing `Optional` type)
2. Default values (will be limited to what can be represented as a string, but that's ok)
3. Marking object fields as private (so not included in public API but still included in serialization of object)
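Whichever prefix wins, extracting such pragmas from a doc comment is cheap — a sketch of a parser for the `// +key=value` candidate form (illustrative only; the real format is still TBD above):

```python
import re

PRAGMA = re.compile(r"^//\s*\+(\w+)(?:=(.*))?$")

def parse_pragmas(comment_lines: list[str]) -> dict:
    # Bare pragmas ("// +optional") become True; valued ones keep the
    # string after "="; ordinary doc lines are ignored.
    out = {}
    for line in comment_lines:
        m = PRAGMA.match(line.strip())
        if m:
            out[m.group(1)] = m.group(2) if m.group(2) is not None else True
    return out
```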
cc @vito @jedevc | https://github.com/dagger/dagger/issues/6162 | https://github.com/dagger/dagger/pull/6179 | e8097f5798af7f8b22f2a1be94a27c9800185053 | a499195df3de436969c4477bb1e9f074aa586eb6 | "2023-11-27T16:51:18Z" | go | "2023-12-06T14:29:35Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,096 | ["sdk/python/src/dagger/mod/_utils.py", "sdk/python/tests/modules/test_utils.py"] | 🐞 Python modules: A regular union is documented as a fallback | ### What is the issue?
When not providing documentation with `Annotated[str | None, Doc("foobar")]`, the union itself is annotated:
<pre>
Usage:
dagger call foo [flags]
Flags:
<strong> --bar string Represent a PEP 604 union type
E.g. for int | str</strong>
-h, --help help for foo
</pre>
It should be empty in this case.
### Steps to reproduce
Create module with:
```python
from dagger.mod import function
@function
def foo(bar: str | None = None) -> str:
return bar or "foobar"
````
Check help:
```shell
❯ dagger call foo --help
```
### SDK version
Python SDK v0.9.3
### OS version
macOS 13.5.2 | https://github.com/dagger/dagger/issues/6096 | https://github.com/dagger/dagger/pull/6097 | f737550ed8e30b00b510dc07ef2df8a6f14618f3 | 40a445b12e2886d3305a56de4f14803412c158f1 | "2023-11-13T14:18:51Z" | go | "2023-11-14T16:27:11Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,094 | ["sdk/python/pyproject.toml", "sdk/python/src/dagger/mod/_converter.py", "sdk/python/src/dagger/mod/_utils.py", "sdk/python/tests/modules/test_utils.py"] | 🐞 Python modules: "unsupported union type" error when using `Optional[str]` in a parameter | ### What is the issue?
Can't use `Optional` in a function parameter (module).
Reported by user in [Discord](https://discord.com/channels/707636530424053791/1172173031007862835).
### Log output
```
TypeError: Unsupported union type: typing.Optional[str]
```
### Steps to reproduce
With this module:
```python
from typing import Optional
from dagger.mod import function
@function
def foo(bar: Optional[str] = None) -> str:
return bar or "foobar"
```
Run:
```shell
❯ dagger call
```
### SDK version
Python SDK v0.9.3
### OS version
macOs 13.5.2 | https://github.com/dagger/dagger/issues/6094 | https://github.com/dagger/dagger/pull/6095 | 40a445b12e2886d3305a56de4f14803412c158f1 | 86ea539f8a91914981f1be9593d6005c63b7aa7c | "2023-11-13T13:57:03Z" | go | "2023-11-14T16:52:07Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,091 | ["sdk/elixir/.changes/unreleased/Fixed-20231113-215135.yaml", "sdk/elixir/lib/dagger/codegen/elixir/function.ex", "sdk/elixir/lib/dagger/codegen/elixir/module.ex", "sdk/elixir/lib/dagger/codegen/elixir/templates/object_tmpl.ex", "sdk/elixir/lib/dagger/gen/client.ex", "sdk/elixir/lib/dagger/gen/container.ex", "sdk/elixir/lib/dagger/gen/host.ex", "sdk/elixir/test/dagger/client_test.exs"] | 🐞 Service bindings don't work with Elixir SDK | ### What is the issue?
I've tried converting the NodeJS docker with postgres example into Elixir, but the `.with_service_binding` doesn't seem to work properly - an error is always thrown. If I remove the service binding the workflow completes ok (although the tests fail, because they need the database). My workflow is:
```elixir
defmodule Mix.Tasks.ElixirWithDagger.Test do
use Mix.Task
@impl Mix.Task
def run(_args) do
Application.ensure_all_started(:dagger)
client = Dagger.connect!()
app =
client
|> Dagger.Client.host()
|> Dagger.Host.directory(".", exclude: ["deps", "_build"])
database =
client
|> Dagger.Client.container()
|> Dagger.Container.from("postgres:16")
|> Dagger.Container.with_env_variable("POSTGRES_PASSWORD", "test")
|> Dagger.Container.with_exec(~w"postgres")
|> Dagger.Container.with_exposed_port(5432)
|> Dagger.Container.as_service()
{:ok, _} =
client
|> Dagger.Client.container()
|> Dagger.Container.from("hexpm/elixir:1.15.7-erlang-26.1.2-debian-bullseye-20231009-slim")
|> Dagger.Container.with_exec(~w"apt-get update")
|> Dagger.Container.with_exec(~w"apt-get install -y build-essential git")
|> Dagger.Container.with_exec(~w"apt-get clean")
|> Dagger.Container.with_exec(~w"rm -f /var/lib/apt/lists/*_*")
|> Dagger.Container.with_mounted_directory("/app", app)
|> Dagger.Container.with_workdir("/app")
|> Dagger.Container.with_exec(~w"mix local.hex --force")
|> Dagger.Container.with_exec(~w"mix local.rebar --force")
|> Dagger.Container.with_exec(~w"mix deps.get")
|> Dagger.Container.with_service_binding("db", database)
|> Dagger.Container.with_env_variable("DB_HOST", "db")
|> Dagger.Container.with_env_variable("DB_USER", "postgres")
|> Dagger.Container.with_env_variable("DB_PASSWORD", "test")
|> Dagger.Container.with_env_variable("DB_NAME", "postgres")
|> Dagger.Container.with_exec(~w"mix test")
|> Dagger.Sync.sync()
Dagger.close(client)
IO.puts("Tests succeeded!")
end
end
```
### Log output
The output from the Dagger CLI is:
```
┃ ** (Protocol.UndefinedError) protocol String.Chars not implemented for {:query_timeout, :infinity} of type Tuple. This protocol is implemented for the following type(s): Atom, B
┃ itString, Date, DateTime, Decimal, Float, Integer, List, NaiveDateTime, Phoenix.LiveComponent.CID, Postgrex.Copy, Postgrex.Query, Time, URI, Version, Version.Requirement
┃ (elixir 1.15.7) lib/string/chars.ex:3: String.Chars.impl_for!/1
┃ (elixir 1.15.7) lib/string/chars.ex:22: String.Chars.to_string/1
┃ (dagger 0.9.3) lib/dagger/query_builder.ex:82: Dagger.QueryBuilder.Selection.encode_value/1
┃ (elixir 1.15.7) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
┃ (dagger 0.9.3) lib/dagger/query_builder.ex:65: Dagger.QueryBuilder.Selection.encode_value/1
┃ (dagger 0.9.3) lib/dagger/query_builder.ex:76: anonymous fn/1 in Dagger.QueryBuilder.Selection.encode_value/1
┃ (elixir 1.15.7) lib/enum.ex:1701: anonymous fn/3 in Enum.map/2
┃ (stdlib 5.0.2) maps.erl:416: :maps.fold_1/4
```
### Steps to reproduce
Run the above workflow with `dagger run mix elixir_with_dagger.test`
### SDK version
dagger v0.9.3 (registry.dagger.io/engine) darwin/arm64
### OS version
macOS | https://github.com/dagger/dagger/issues/6091 | https://github.com/dagger/dagger/pull/6099 | a6ba9e8016145ff3d19c25fba9247b4678ed3504 | 5989170918a9c365699bf020acc733445e3a7330 | "2023-11-10T12:20:50Z" | go | "2023-11-22T07:51:17Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,090 | ["sdk/python/.changes/unreleased/Fixed-20231113-164527.yaml", "sdk/python/src/dagger/_codegen/generator.py", "sdk/python/tests/client/test_codegen.py"] | Python SDK: module dependency w/ doc string that ends in `"` causes error | Repro:
```
mkdir -p test/submod
cd test/submod
dagger mod init --sdk=go --name=submod
cd ..
dagger mod init --sdk=python --name=test
dagger mod install ./submod
dagger functions
```
Error:
```
✘ exec /runtime ERROR [0.49s]
┃ Traceback (most recent call last):
┃ File "/runtime", line 3, in <module>
┃ from dagger.mod.cli import app
┃ File "/sdk/src/dagger/__init__.py", line 22, in <module>
┃ from .client.gen import *
┃ File "/sdk/src/dagger/client/gen.py", line 4254
┃ """example usage: "dagger call container-echo --string-arg yo""""
┃ ^
┃ SyntaxError: unterminated string literal (detected at line 4254)
```
The problem is that the dependency has a doc string that ends in `"`, which breaks when wrapped w/ `"""`
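One codegen-side fix is to escape quotes before wrapping — a sketch with a hypothetical helper (not the actual generator):

```python
def triple_quote(text: str) -> str:
    # Escape embedded triple quotes, plus a trailing quote that would
    # otherwise fuse with the closing delimiter into `""""`.
    text = text.replace('"""', '\\"\\"\\"')
    if text.endswith('"'):
        text = text[:-1] + '\\"'
    return f'"""{text}"""'
```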
cc @helderco | https://github.com/dagger/dagger/issues/6090 | https://github.com/dagger/dagger/pull/6104 | 758604428f70ac78df9106017e2dfa2f62436ecf | 0def6c777bbd4f7b284f9bee68d8f3f58e16c0b6 | "2023-11-08T16:55:48Z" | go | "2023-11-14T19:25:49Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 6,066 | ["cmd/dagger/module.go"] | dagger mod publish should switch to `PUT /crawl` instead of `GET /crawl` | There has been a recent change - 7pm GMT on Nov. 5, 2023 - in the <https://daggerverse.dev> route that publishes a module.
It used to be `GET /crawl`. It is now `PUT /crawl`. This private PR has more details: https://github.com/dagger/dagger.io/pull/3058#issuecomment-1793819422
This is a reminder that we need to change the route that `dagger mod publish` uses & also cut a new release.
As a temporary workaround publish directly from <https://daggerverse.dev> instead.
I will be travelling & will not have internet for the next 10 hours. Unless someone else doesn't beat me to it - @vito @sipsma @jedevc @helderco @marcosnils @aluzzardi - I will look into this when I get on the other side.
🛫 | https://github.com/dagger/dagger/issues/6066 | https://github.com/dagger/dagger/pull/6274 | 4d96689943afb13c23f5984dbe19e7967c912a7d | d63f3fb2f14314206d748a25a95abf8e9731bd52 | "2023-11-06T11:07:55Z" | go | "2023-12-15T21:29:50Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,991 | ["cmd/codegen/generator/go/templates/modules.go", "core/integration/module_test.go"] | Zenith: allow private fields in types | Creating an issue based on @shykes's comment here: https://discord.com/channels/707636530424053791/1120503349599543376/1166880734741549176
Today, an example type in Go might look something like:
```go
type X struct {
Foo string `json:"foo"`
}
```
In this example, `Foo` is exported and persisted between multiple calls (since each function call is a separate execution of the go binary) as well as being queryable by the user / calling module.
However, a relatively normal use case would be to want only that first property - a private field `Foo`, accessible and modifiable by the module, but inaccessible to the calling code.
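That "persisted but not exposed" split is SDK-agnostic; as a rough Python analogue, dataclass field metadata can stand in for whatever mechanism Go ends up with (purely illustrative, not the proposed implementation):

```python
from dataclasses import asdict, dataclass, field, fields

def private(default=None):
    return field(default=default, metadata={"private": True})

@dataclass
class X:
    foo: str = ""
    token: str = private("")

def public_schema(cls) -> list[str]:
    # Private fields still serialize between calls via asdict(), but
    # are left out of the generated public API.
    return [f.name for f in fields(cls) if not f.metadata.get("private")]
```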
There are two main parts of this:
- How should we mark a field as private at the API level?
- We could have a `WithPrivate` call for fields, similar to `WithOptional` for function parameters.
- This would mark the field as private, so we would always serialize/deserialize it, but would not generate it as part of the graphql schema for the module.
- How should we indicate that the field is private in Go/Python/etc?
- For Go, ideally we could use unexported fields as private - since `dagger.gen.go` is in the same package as `main.go`, then this wouldn't be a problem.
However, a couple caveats: this would be the first thing (as far as I'm aware) to require access to private fields in `main.go` from `dagger.gen.go`, so it would be difficult to split this into a separate package later (if we wanted to prevent the user from accessing private parts of `dagger.gen.go`). It would still be possible though! We'd just need the code that accessed the parts of `main.go` to be auto-generated into the same package (and extract the internals into a separate package).
Secondly, we'd then have no way to have fields with "forbidden types" - today, that's any structs (until we have module-defined IDable types), and any types declared outside the current package. This is similar in behavior to how args work today, so maybe this isn't a huge issue. | https://github.com/dagger/dagger/issues/5991 | https://github.com/dagger/dagger/pull/6224 | 9a0f81a7367a675b13854f0be82694d4ebc44dd3 | 69da5d811ce20d7abc1d4af3a9912bb63ce93baf | "2023-10-27T14:51:01Z" | go | "2023-12-11T13:49:57Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,975 | ["cmd/codegen/generator/go/generator.go", "core/integration/module_test.go"] | Zenith: error creating module named "Go" | ```
~/G/g/F/P/go ❯❯❯ dagger mod init --name Go --sdk go bugfix/prevent-stack-smashing ✱ ◼
✘ asModule(sourceSubpath: ".") ERROR [0.92s]
✘ .....
✘ generating go module: Go ERROR [0.09s]
├ [0.02s] go mod tidy
┃ writing dagger.gen.go
┃ writing go.mod
┃ writing go.sum
┃ writing main.go
┃ creating directory querybuilder
┃ writing querybuilder/marshal.go
┃ writing querybuilder/querybuilder.go
┃ panic: go: internal error: missing go root module
┃
┃ goroutine 1 [running]:
┃ cmd/go/internal/modload.mustHaveGoRoot(...)
┃ cmd/go/internal/modload/buildlist.go:104
┃ cmd/go/internal/modload.newRequirements(0xe0?, {0x4000218b40?, 0x543be0?, 0x40001720f0?}, 0x5aedc0?)
┃ cmd/go/internal/modload/buildlist.go:118 +0x584
┃ cmd/go/internal/modload.updateUnprunedRoots({0x71df30?, 0xab31e0?}, 0x0?, 0x40000f20a0, {0x0, 0x0, 0x3d0f40?})
┃ cmd/go/internal/modload/buildlist.go:1465 +0x820
┃ cmd/go/internal/modload.updateRoots({0x71df30?, 0xab31e0?}, 0x0?, 0x40000a80c0?, {0x0?, 0x2?, 0x40000a80c0?}, {0x0?, 0x0?, 0x40000a80d0?}, ...)
┃ cmd/go/internal/modload/buildlist.go:781 +0x6c
┃ cmd/go/internal/modload.loadModFile({0x71df30, 0xab31e0}, 0x40000fe280)
┃ cmd/go/internal/modload/init.go:939 +0x134c
┃ cmd/go/internal/modload.LoadPackages({0x71df30?, 0xab31e0}, {{0x0, 0x0}, 0x4000119ef0, 0x1, {0x0, 0x0}, 0x1, 0x1, ...}, ...)
┃ cmd/go/internal/modload/load.go:345 +0x31c
┃ cmd/go/internal/modcmd.runTidy({0x71df30, 0xab31e0}, 0x40000ea4c8?, {0x400009e1a0?, 0x54c8e0?, 0x10edbc?})
┃ cmd/go/internal/modcmd/tidy.go:127 +0x204
┃ main.invoke(0xa70ea0, {0x400009e1a0, 0x1, 0x1})
┃ cmd/go/main.go:268 +0x4f0
┃ main.main()
┃ cmd/go/main.go:186 +0x754
┃ needs another pass...
• Engine: c8919d4606c0 (version v0.9.1)
```
Not immediately sure why this would happen (other names like "Golang" are fine); need to dig through stack trace. | https://github.com/dagger/dagger/issues/5975 | https://github.com/dagger/dagger/pull/5983 | dfdb520e16167d2629fb005bfedb20522270e580 | 77600801b2a6bc4552cfc4fc5ae4307826e29334 | "2023-10-26T19:25:12Z" | go | "2023-10-27T09:22:51Z"
closed | dagger/dagger | https://github.com/dagger/dagger | 5,971 | ["cmd/codegen/generator/go/templates/modules.go", "core/integration/module_test.go"] | 🐞 `runtime error: index out of range` error when generating `go` modules | ### What is the issue?
Using a shared parameter type declaration, as in the following code block, causes a `runtime error: index out of range` while generating module code.
```go
func (ar *ActionRun) WithInput(name, value string) *ActionRun {
ar.Config.With = append(ar.Config.With, fmt.Sprintf("%s=%s", name, value))
return ar
}
```
### Log output
```shell
dagger mod sync
✘ asModule(sourceSubpath: "daggerverse/actions/runtime") ERROR [3.18s]
✘ exec /usr/local/bin/codegen --module . --propagate-logs=true ERROR [0.39s]
┃ Error: generate code: template: module:56:3: executing "module" at <ModuleMainSrc>: error calling ModuleMainSrc: runtime error: index out of range [1] with length 1
┃ Usage:
┃ codegen [flags]
┃
┃ Flags:
┃ -h, --help help for codegen
┃ --lang string language to generate (default "go")
┃ --module string module to load and codegen dependency code
┃ -o, --output string output directory (default ".")
┃ --propagate-logs propagate logs directly to progrock.sock
┃
┃ Error: generate code: template: module:56:3: executing "module" at <ModuleMainSrc>: error calling ModuleMainSrc: runtime error: index out of range [1] with length 1
✘ generating go module: actions-runtime ERROR [0.28s]
• Engine: d8c37de1d5af (version devel ())
⧗ 4.89s ✔ 238 ∅ 2 ✘ 3
Error: failed to automate vcs: failed to get vcs ignored paths: input:1: host.directory.asModule failed to call module "actions-runtime" to get functions: failed to get function output directory: process "/usr/local/bin/codegen --module . --propagate-logs=true" did not complete successfully: exit code: 1
```
### Steps to reproduce
- Create a go module
- Add a command to the module using a shared parameter type declaration like `func (ar *ActionRun) WithInput(name, value string) *ActionRun`
- Run `dagger mod sync`
### SDK version
Go SDK v0.9.1, Dagger CLI v0.9.1
### OS version
macOS 14.0 | https://github.com/dagger/dagger/issues/5971 | https://github.com/dagger/dagger/pull/5972 | c8990707600e665638707f19c12d4ecb80ef3e3a | 35442c542873fd260cddbcbed2ece93ce4b5a79f | "2023-10-26T15:48:07Z" | go | "2023-10-26T17:53:44Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,953 | ["cmd/dagger/module.go", "core/module.go", "core/modules/resolver.go"] | 🐞 `dagger call -m <git-module>` fails if remote module contains dependencies with local path | ### What is the issue?
When we call a module that has a dependency with a local path using a `git-ref`, the CLI isn't able to find the dependencies.
`dagger.json` for the following error log: https://github.com/aweris/gale/blob/0fcf43c126b03fc4296f05970328aaca085c18af/daggerverse/gale/dagger.json#L1-L9
### Log output
```shell
dagger call --mod github.com/aweris/gale/daggerverse/gale@0fcf43c126b03fc4296f05970328aaca085c18af
✘ build "dagger call" ERROR [4.15s]
├ [4.15s] loading module
✘ asModule ERROR [0.00s]
• Engine: 61b34bf9d33e (version v0.9.0)
⧗ 5.44s ✔ 27 ✘ 2
Error: failed to get loaded module ID: input:1: git.commit.tree.directory.asModule failed to create module from config `.::`: failed to get config file: lstat dagger.json: no such file or directory
```
### Steps to reproduce
- create a module that has a dependency on a local path.
- push the module to a repository
- run `dagger call -m <git-ref>`
### SDK version
Dagger CLI v0.9.0
### OS version
macOS 14.0 | https://github.com/dagger/dagger/issues/5953 | https://github.com/dagger/dagger/pull/5955 | 0b52104f252de11914914520948bf83c94d2068c | 51baf38f1c59be4d4935a2d2ca5af4b642f21a2e | "2023-10-23T16:39:21Z" | go | "2023-10-24T16:05:15Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,930 | ["core/secret.go", "core/secret_test.go"] | 🐞 Can't hide secrets in Git Clones | ### What is the issue?
Discord here -> https://discord.com/channels/707636530424053791/1164639553865383947
I must use HTTPS with a GH token; I can't use SSH for git clones.
There is no way for me to hide credentials that are in the git https url. The below snippet will print my `please-dont-print` password all over the place :(
```golang
package main
import (
"context"
"os"
"dagger.io/dagger"
log "github.com/sirupsen/logrus"
)
func main() {
var err error
ctx := context.Background()
c, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stdout), dagger.WithLogOutput(os.Stderr))
if err != nil {
log.Error(err)
}
gitUrlRaw := "https://username:[email protected]/dagger/dagger"
c.SetSecret("ghApiToken", gitUrlRaw)
src := c.Git(gitUrlRaw, dagger.GitOpts{KeepGitDir: true}).
Branch("main").
Tree()
c.Container().WithMountedDirectory("/src", src).WithExec([]string{"ls", "-la", "/src"}).Stdout(ctx)
}
```
### Log output
```
➜ dagger-debug dagger run go run .
┣─╮
│ ▽ init
│ █ [1.49s] connect
│ ┣ [1.16s] starting engine
│ ┣ [0.20s] starting session
│ ┻
█ [2.64s] go run .
█ [0.72s] git://stobias123:[email protected]/dagger/dagger#main
┃ 94c8f92d7dd99616e4c6db05e5d0e4cd94ab13d6 refs/heads/main
█ [0.22s] ERROR exec ls -la /src
```
### Steps to reproduce
run above snippet
### SDK version
0.8.7
### OS version
osx | https://github.com/dagger/dagger/issues/5930 | https://github.com/dagger/dagger/pull/5951 | 752f324037801df10c9598c50eee0577d22a7f24 | 69bd45f6f777c6e8a949176e2d6014fd6388b812 | "2023-10-19T20:20:00Z" | go | "2023-10-24T09:59:36Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,927 | ["sdk/python/pyproject.toml", "sdk/python/src/dagger/client/_core.py", "sdk/python/src/dagger/client/_guards.py", "sdk/python/tests/client/test_codegen.py", "sdk/python/tests/client/test_default_client.py", "sdk/python/tests/client/test_integration.py", "sdk/python/tests/modules/test_registration.py"] | Services v2: Python null types on PortForward | The optional types on PortForward, `frontend` and `protocol` are being sent as `null` to the API when they are not set in the Python SDK.
```
gql.transport.exceptions.TransportQueryError: {'message': 'Syntax Error GraphQL request (3:41) Unexpected Name "null"\n\n2: host {\n3: service: service(ports: [{frontend: null, backend: 3306, protocol: TCP}]) {\n ^\n4: id: id\n', 'locations': [{'line': 3, 'column': 41}]}
``` | https://github.com/dagger/dagger/issues/5927 | https://github.com/dagger/dagger/pull/6087 | ae77f7e930cb917df48850dd5a05b4b3023a9cf4 | 93fd238a29c2ddde605b9c3a2d9fafa3cf33d989 | "2023-10-19T16:07:22Z" | go | "2023-11-08T14:01:14Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,917 | ["internal/mage/sdk/python.go"] | ci: use separate cache mounts for different python versions | We are getting flaky pip cache errors like https://github.com/dagger/dagger/actions/runs/6552195369/job/17794987073#step:4:1007
@helderco suggested it may be due to separate python versions being used in parallel on the same cache mount: https://github.com/dagger/dagger/pull/5906#issuecomment-1767391090
(I'll squash this quick since the flakes are quite annoying; just filing issue since I'm currently squashing something else and don't want to forget) | https://github.com/dagger/dagger/issues/5917 | https://github.com/dagger/dagger/pull/5919 | d6b3585f481dba441b77f521fd2b094e483b31c7 | 7560f6ec3e061c94adcb75740124c670cd4d07c7 | "2023-10-18T21:04:46Z" | go | "2023-10-19T02:21:32Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,911 | ["engine/client/client.go"] | POST http://dagger/query rpc error | Every Dagger run has something like:
```
[0.08s] Failed to connect; retrying... name:"error" value:"make request: Post \"http://dagger/query\": rpc error: code = Unknown desc = server \"wu4a6pa0asl96zq39iaijneaq\" not found"
```
It seems like it can be safely ignored, but the error message is concerning. | https://github.com/dagger/dagger/issues/5911 | https://github.com/dagger/dagger/pull/5918 | 70609dd344b7703de9ea52e2162ea574358e3fda | 572bd3de9340dea4121627163f4f7b58c818beb4 | "2023-10-18T17:09:08Z" | go | "2023-10-19T17:12:45Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,871 | ["cmd/codegen/generator/go/templates/modules.go", "core/integration/module_test.go"] | Zenith: custom return types require at least one method defined | Custom function return types require at least one empty method defined on them to be detected by codegen (note the JSON tags are currently required, #5860).
```go
// This is for a module "foo"
package main
import (
"context"
)
type Foo struct{}
type X struct {
Message string `json:"message"`
}
// This function definition is required, or we don't detect and incorporate this into schema generation.
// func (x *X) Void() {}
func (m *Foo) MyFunction(ctx context.Context, stringArg string) X {
return X{Message: stringArg}
}
```
```
$ echo '{foo{myFunction(stringArg: "hello"){message}}}' | dagger query -m "."
Error: failed to get loaded module ID: input:1: host.directory.asModule.serve failed to install module schema: schema validation failed: input:1813: Undefined type X.
``` | https://github.com/dagger/dagger/issues/5871 | https://github.com/dagger/dagger/pull/5893 | c847458d6df1e184487e3a29f6d00a86ef67e661 | b74df2479214020502fcb917d77325f00bd4d4e1 | "2023-10-11T10:50:42Z" | go | "2023-10-18T12:22:03Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,864 | [".changes/unreleased/Breaking-20231024-152415.yaml", "cmd/codegen/generator/go/templates/modules.go", "cmd/codegen/generator/go/templates/src/header.go.tmpl", "core/integration/module_test.go", "core/integration/testdata/modules/go/minimal/main.go", "docs/current/labs/project-zenith.md", "go.mod", "go.sum", "sdk/go/dagger.gen.go"] | Zenith: Fix `Opts` types in Go SDK | From @jedevc:
---
Looks like `Opt` types can't be pointers, e.g. something like this doesn't work:
```go
type EchoOpts struct {
Suffix string `doc:"String to append to the echoed message." default:"..."`
Times int `doc:"Number of times to repeat the message." default:"3"`
}
func (m *Minimal) EchoOpts(msg string, opts *EchoOpts) string {
return m.EchoOptsInline(msg, opts)
}
```
The arg type needs to be changed to `opts EchoOpts` for it to work as intended.
Giving:
```
Error: failed to get loaded module ID: input:1: host.directory.asModule.serve failed to install module schema: schema validation failed: input:1809: Undefined type EchoOptsID.
```
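For reference, the working value-type form as a standalone sketch (the SDK-specific `doc:`/`default:` struct tags from the snippet above are omitted here):

```go
package main

import (
	"fmt"
	"strings"
)

type EchoOpts struct {
	Suffix string
	Times  int
}

type Minimal struct{}

// Taking the opts struct by value (not *EchoOpts) is the form codegen accepted.
func (m *Minimal) EchoOpts(msg string, opts EchoOpts) string {
	return strings.Repeat(msg+opts.Suffix, opts.Times)
}

func main() {
	m := &Minimal{}
	fmt.Println(m.EchoOpts("hi", EchoOpts{Suffix: "...", Times: 2})) // hi...hi...
}
```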
---
`Opts` structs cannot be nested:
```go
package main
import "fmt"
type Potato struct{}
type PotatoOptions struct {
Count int
TestingOptions TestingOptions
}
type TestingOptions struct {
Foo string
}
func (m *Potato) HelloWorld(opts PotatoOptions) string {
return fmt.Sprintf("Hello world, I have %d potatoes (%s)", opts.Count, opts.TestingOptions.Foo)
}
```
```
Error: failed to get loaded module ID: input:1: host.directory.asModule.serve failed to install module schema: schema validation failed: input:1803: Undefined type TestingOptionsID.
``` | https://github.com/dagger/dagger/issues/5864 | https://github.com/dagger/dagger/pull/5907 | db901c8fe4c70cc32336e304492ca37f12b8f389 | e8ad5c62275172e3d54ae46447e741ff5d603450 | "2023-10-10T22:55:05Z" | go | "2023-10-25T19:38:59Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,860 | ["cmd/codegen/generator/go/templates/modules.go", "core/function.go", "core/integration/module_test.go", "core/schema/module.go"] | Zenith: fix Go struct field json serialization | https://github.com/dagger/dagger/pull/5757#issuecomment-1744739721
Right now if you have something like:
```go
type Foo struct {
    Bar string `json:"blahblah"`
}
```
The graphql schema for `Foo` will be populated with a field called `Bar`, but when we do json serialization we use `blahblah`, which means that if you try to query for the value of the `Bar` field, our gql server hits the trivial resolver codepath and looks for `bar` in the json object and doesn't find it (since it's been serialized to `blahblah`).
My first thought was that if a json struct tag is present, we should use that as the graphql field name rather than the go name. That would fix this problem I think, but it's more debatable whether it's surprising behavior (or honestly if anyone will really care about this).
---
Possibly related issue from https://github.com/dagger/dagger/pull/5757#issuecomment-1752775608
```golang
package main
import (
"context"
)
type Foo struct{}
type Test struct {
X string
Y int
}
func (t *Test) Void() {}
func (m *Foo) MyFunction(ctx context.Context) Test {
return Test{X: "hello", Y: 1337}
}
```
```bash
$ echo '{foo{myFunction{x}}}' | dagger query -m ""
...
⧗ 1.17s ✔ 22 ∅ 20 ✘ 2
Error: make request: input:1: foo.myFunction.x Cannot return null for non-nullable field Test.x.
```
If I modify the struct with JSON field names, the above works:
```golang
type Test struct {
X string `json:"x"`
Y int `json:"y"`
}
```
---
cc @vito @jedevc | https://github.com/dagger/dagger/issues/5860 | https://github.com/dagger/dagger/pull/6057 | 2c554b2ac26c4c6863893bac94d85108cecb48d9 | e1d69edbfd0f94c93489edac4d9d6050c36fc3b7 | "2023-10-10T22:50:32Z" | go | "2023-11-14T23:56:25Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,791 | ["cmd/shim/secret_scrub.go", "cmd/shim/secret_scrub_test.go"] | 🐞 secrets scrubbing implementation causes excessive log output latency | ### What is the issue?
# Context
We have a dagger pipeline which, at some point, [mounts secrets](https://github.com/airbytehq/airbyte/blob/327d3c9ae8e14b28bf2bdf7528b22f86f510d629/airbyte-ci/connectors/pipelines/pipelines/actions/environments.py#L1053) to a container which is then runs [integration tests](https://github.com/airbytehq/airbyte/blob/327d3c9ae8e14b28bf2bdf7528b22f86f510d629/airbyte-ci/connectors/pipelines/pipelines/gradle.py#L157) by with_exec'ing a task in gradle.
We are just as bad computer programmers as anyone and sometimes we introduce regressions which cause our integration tests to hang. In our particular case, the bug we introduced was that we tried to connect a jdbc driver to a nonexistent host and we'd hit some absurdly large timeout in our code.
What's unexpected, and why I'm filing an issue right now, is that dagger would be completely silent about the output of this `with_exec` step until it eventually failed.
# The problem
I eventually found that dagger's secrets scrubbing implementation is to blame. In our case, we mount about 20 secrets files, which while a lot, doesn't seem that crazy. I read dagger's code [1] and it appears that if there's a nonzero number of secrets mounted (env vars or files) the output gets piped through `replace.Chain` which chains a bunch of `Transformer` instances from go's transform module [2]. There's one instance per secret and each appears to wrap a reader with a hardcoded buffer of 4kB [3].
In our case, our gradle process was simply not chatty enough to overcome this buffering and so it seemed like it didn't do anything. We're on an old version of dagger but the problem seems still present in the `main` branch from what I can tell.
# My expectations as a user
The need to scrub secrets is not in question. However, `transform` appears to be written in a way that biases throughput over latency, when in our case exactly the opposite is desirable: when scrubbing secrets my expectation is that the output becomes visible to me _as soon as possible_. I don't want to wait for some arbitrary unconfigurable buffer to fill up and I certainly don't want that behaviour to be `O(n)` to the number of secrets mounted.
I say as soon as possible but that's not strictly speaking true, it's not like each rune has to pass through as soon as it's determined that it's not part of a prefix of one of the secrets. This being textual output, I'd be fine if it was each _line_ that gets passed through ASAP. I'm guessing this would also be undoubtedly less complicated to implement on your end and I'm also guessing that most secrets are not multiline (or if they are, it's only a handful of lines).
tl;dr: no need to go crazy here, but I'll be happy with reasonable and predictable behaviour.
# Repro
I didn't look into how dagger is tested but the desired behaviour of the return value of `NewSecretScrubReader` seems adequately unit-testable from my naive standpoint:
1. mock mounting 1, 10 and 100 secrets in each test case;
2. kick off a goroutine with an `io.Reader` which mocks a `with_exec`'ed process' `stdout`;
3. have that `io.Reader` serve a fixed number of bytes and then block on a channel, then serve a bunch more bytes, then block again;
4. concurrently, expect the `io.Reader` returned by `NewSecretScrubReader` to read an expected subset of those bytes, then unblock the channel, then read another expected subset of bytes, then unblock again.
# Notes
Dear dagger maintainers: I hope you're not finding me overprescriptive here or feel that I'm babying you. Far from it! I'm being deliberately wordy because I'm reasonably hopeful that this issue is quite cheap to fix so I'm provide as much info as I can to that aim. On our end, it sure caused a lot of head-scratching.
This is only an issue in an interactive setting: developing locally, or looking at dagger.cloud, which is probably why it went unnoticed (I didn't find any similar issue, though I only did a quick search).
# References
[1] https://github.com/dagger/dagger/blob/v0.6.4/cmd/shim/main.go#L294
[2] https://golang.org/x/text/transform
[3] https://cs.opensource.google/go/x/text/+/refs/tags/v0.13.0:transform/transform.go;l=130
### Log output
_No response_
### Steps to reproduce
_No response_
### SDK version
0.6.4
### OS version
irrelevant, I believe, but macOS and some unremarkable linux distro | https://github.com/dagger/dagger/issues/5791 | https://github.com/dagger/dagger/pull/6034 | b28daae7531cdb2ed7c2ced9da2c69fd3b080ee7 | 80180aaaed1e1e91a13dbf7df7e0411a9a54c7d3 | "2023-09-18T13:41:28Z" | go | "2023-11-20T14:49:50Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,763 | ["core/docs/d7yxc-operator_manual.md", "docs/current/faq.md"] | Explain rootless limitations and why it's not possible for now | ### What is the issue?
Currently, Dagger runs using a Buildkit daemon with root privileged through `--privileged` option.
It's been more than a year that the subject of Dagger rootless in on the pipe and I would like to use this issue to gather all information we got so far and suggest a solution so we can close the debat for now.
## Privileges
The advantage of running rootless is that you run the engine as a `non-root` user, meaning
that you mitigate potential vulnerabilities in the runtime. However, this means you
can hit Kernel level limitations such as ports exposition, networks, volumes and anything
basically protected by the Kernel.
**Why it's important for some users to run as rootless?**
As explained in #1287, some CI environment cannot allow root container, for security purpose or simple restriction.
For example, [an uncontrolled environment that restrict the usage of privileges container](https://github.com/dagger/dagger/issues/1287#issuecomment-1168923051).
Basically, without rootless some users may not be able to integrate Dagger to their CI.
## Limitations
**TL;DR** Rootless could work but the limitations and tradeoffs involved are not worth it and will lead to a painful path.
### Network
It is known as an actual [limitation](https://github.com/dagger/dagger/issues/4675#issuecomment-1450645729) of rootless Buildkit:
> Network mode is set to network.host by default
Since we implement internal networking, we can use [slirp](https://github.com/rootless-containers/slirp4netns), but it really decreases performance compared to root.
See [network speed comparison](https://github.com/rootless-containers/rootlesskit/blob/master/docs/network.md#network-drivers).
### Distribution
Dagger aims to work on Windows, macOS and Linux no matter the distribution; however, running Buildkit without privileges seems to limit the distribution to Ubuntu.
> Using Ubuntu kernel is recommended.
Which leads to the question: what can we do for macOS and Windows users?
Also, as explained by Erik
> Which snapshotter will be usable inside the rootless containers involves an insane matrix of kernel version, configuration, fuse availability, upstream kernel patches used by certain distros, etc.
> Any users that end up not being able to use the default overlayfs snapshotter will most likely experience noticeable slowdowns (with either the fuse-overlayfs implementation or especially the native copy-based one).
### Volumes
Dagger relies on overlayfs, which has limitations with a rootless daemon; without it, runner execution might become extremely slow.
> Using the overlayfs snapshotter requires kernel >= 5.11 or Ubuntu kernel. On kernel >= 4.18, the fuse-overlayfs snapshotter is used instead of overlayfs. On kernel < 4.18, the native snapshotter is used.
These version constraints for overlayfs might create unstable behaviour and poor performance on Dagger's side.
Also, it seems `Rootlesskit` supports [multiple strategies](https://github.com/rootless-containers/rootlesskit/blob/master/docs/mount.md)
I dug into the [LWN mount doc](https://lwn.net/Articles/690679/) and I think this actually supports all the kinds of mounts we support in our API (with some limits, though)
> The propagation can be also set to rshared, but known not to work with --copy-up.
>
> Note that rslave and rshared do not work as expected when the host root filesystem isn't mounted with "shared". (Use findmnt -n -l -o propagation / to inspect the current mount flag.)
Are these limitations issues for Dagger? I think so, but I would prefer to have your opinions @vito @sipsma
An user tried on February to [run Dagger using rootless Docker](https://github.com/dagger/dagger/issues/1287#issuecomment-1424807337), but it hanged on host import/export operation.
### GPU implementation
Not possible for now without `--privileged`
:bulb: See [@sispma's comment on GPU](https://github.com/dagger/dagger/issues/4675#issuecomment-1450645729).
> The more annoying part of this is that the dagger engine is itself a docker container by default, isolated from the host.
> It should already have access to the device files of the host since it runs w/ --privileged, but it will also need access to the rest of the possible files.
It seems that work around are possible but it can indeed lead to unstable behaviour.
## Work around
You can run your own OCI container runtime following this [guide](https://docs.dagger.io/541047/alternative-runtimes/); however, if the Dagger engine isn't run with privileged access, we cannot guarantee that you will have access to the following features:
- Host interaction: `read/write` might fail or have some limitations.
- Internal network and services: it will most likely fail, since networking will be on `host` by default, or become really slow if you use [slirp](https://github.com/rootless-containers/slirp4netns).
- GPU: it's still experimental, but this cannot work for now without maximum privileges.
The generic solution used by rootless buildkit is [Rootlesskit](https://github.com/rootless-containers/rootlesskit) but that has performance penalties.
## Questions ?
- What about `rootless` with `--privileged`? Is that a safer option? The [documentation says](https://github.com/moby/buildkit/blob/master/docs/rootless.md#docker) that it's _almost safe_. Is it technically possible? @sipsma
- Do you disagree that Dagger network cannot be handled without `--privileged`? @vito
- Do you disagree that Dagger cannot implement GPU without `--privileged`? @sipsma
- Can rootless containers impact our shim? I'm scared of this side effect.
The purpose of these questions is to acknowledge that, for now, running without `--privileged` is not possible and would limit Dagger's capabilities.
Indeed, if some users are ready to lose some capabilities, we might consider implementing a rootless option, but at what cost?
## Source
- https://github.com/dagger/dagger/pull/2978#issuecomment-1234794009
- https://github.com/rootless-containers/slirp4netns
- https://github.com/rootless-containers/rootlesskit
- https://github.com/moby/buildkit/blob/master/docs/rootless.md
- https://pythonspeed.com/articles/podman-buildkit/
- https://github.com/dagger/dagger/issues/4675
- https://github.com/dagger/dagger/issues/151
- https://github.com/dagger/dagger/issues/1287
## Next steps
As discussed with @gerhard, I'll open a PR that explains why Dagger should not be run as `rootless`.
I wanted to gather your opinions first, which is why I started with this issue; then I'll include all your answers in the future doc.
WDYT? @vito @sipsma @dubo-dubon-duponey | https://github.com/dagger/dagger/issues/5763 | https://github.com/dagger/dagger/pull/5809 | 8f6c3125f14a31e39e251492897c86768147fe26 | e63200db6dd4da2ceff56447d99b01056140b482 | "2023-09-05T17:22:22Z" | go | "2023-10-12T22:04:22Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,666 | ["cmd/dagger/exec_unix.go"] | 🐞 Elixir SDK always hang when running with dagger run | ### What is the issue?
Found on Dagger 0.8.4. When using `dagger run elixir test.exs` (the script provided in steps to reproduce section). Instead of finish execution, it still hang.
<img width="1287" alt="Screenshot 2566-08-18 at 23 44 25" src="https://github.com/dagger/dagger/assets/484530/0259f47c-0310-4c12-83d8-3bd3b604af6e">
### Log output
_No response_
### Steps to reproduce
Use this script:
```elixir
Mix.install([{:dagger, "0.8.4"}])
client = Dagger.connect!()
client
|> Dagger.Client.container()
|> Dagger.Container.from("alpine")
|> Dagger.Container.with_exec(~w[echo hello])
|> Dagger.Sync.sync()
Dagger.close(client)
```
And then run `dagger run elixir test.exs`. It now hangs. :/
### SDK version
Elixir 0.8.4
### OS version
macOS | https://github.com/dagger/dagger/issues/5666 | https://github.com/dagger/dagger/pull/5712 | 28e09abe14bc4b27ed7515d22d59191c42194229 | 41634f3ad3f64c02cedc18c77fda1d4ca36c798a | "2023-08-18T16:14:17Z" | go | "2023-10-10T14:22:53Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,664 | ["sdk/elixir/.changes/unreleased/Fixed-20230818-231100.yaml", "sdk/elixir/lib/dagger/engine_conn.ex"] | 🐞 Dagger.connect!/1 always return :error when connect with stable engine | ### What is the issue?
When trying to execute an Elixir script without setting any experimental env vars, it crashes instead of downloading the stable engine.
### Log output
_No response_
### Steps to reproduce
Consider snippet:
```elixir
Mix.install([{:dagger, "0.8.4"}])
client = Dagger.connect!()
client
|> Dagger.Client.container()
|> Dagger.Container.from("alpine")
|> Dagger.Container.with_exec(~w[echo hello])
|> Dagger.Sync.sync()
Dagger.close(client)
```
Running with `elixir test.exs`. It got an error:
```
** (RuntimeError) Cannot connect to Dagger engine, cause: :error
(dagger 0.8.4) lib/dagger.ex:39: Dagger.connect!/1
test.exs:3: (file)
```
### SDK version
Elixir 0.8.4
### OS version
macOS | https://github.com/dagger/dagger/issues/5664 | https://github.com/dagger/dagger/pull/5665 | 8dafad024042b9cca548247a8c33dcf64dc97274 | 6423278953f02d53519a84a461ada285fadd02a7 | "2023-08-18T16:05:50Z" | go | "2023-08-24T10:27:47Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,651 | ["sdk/elixir/.changes/unreleased/Fixed-20230817-221054.yaml", "sdk/elixir/lib/dagger/engine_conn.ex"] | 🐞 Elixir SDK fallback to stable cli when local cli session timeout | ### What is the issue?
Currently, when any error happens in local CLI mode, the engine falls back to the stable CLI.
It should fall back only when no local CLI is available; otherwise, it should return an error.
### Log output
_No response_
### Steps to reproduce
_No response_
### SDK version
Elixir SDK 0.8.2
### OS version
macOS | https://github.com/dagger/dagger/issues/5651 | https://github.com/dagger/dagger/pull/5654 | 75cb4a9fc7a4596fad23ac0656044b157b853800 | a150dc6992cdcfc5e5e77105ee5e5ece4788038d | "2023-08-17T06:07:51Z" | go | "2023-08-17T16:20:51Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,642 | [".github/workflows/sdk-elixir.yml", "sdk/elixir/.changes/unreleased/Fixed-20230816-235838.yaml", "sdk/elixir/lib/dagger/session.ex"] | 🐞 Test flaky on sdk/elixir | ### What is the issue?
Found on:
* https://github.com/dagger/dagger/actions/runs/5868523844/job/15912901044?pr=5628
* https://github.com/dagger/dagger/actions/runs/5877596164/job/15939249914
* https://github.com/dagger/dagger/actions/runs/5877596164/job/15939953049
Some tests take too much time, causing test failures due to a timeout of over 60 seconds. Some tests hit session timeouts.
### Log output
_No response_
### Steps to reproduce
_No response_
### SDK version
Elixir SDK main
### OS version
macOS, linux | https://github.com/dagger/dagger/issues/5642 | https://github.com/dagger/dagger/pull/5646 | 5d2e2f14b623f6afb5c5710a42a4bb52980aa37c | f3dc810f2e730c982bf1cb7192c4d9b56c944199 | "2023-08-16T15:03:21Z" | go | "2023-08-16T17:44:35Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,575 | ["codegen/generator/nodejs/templates/functions.go", "codegen/generator/nodejs/templates/src/header.ts.gtpl", "codegen/generator/nodejs/templates/src/types.ts.gtpl", "sdk/nodejs/.changes/unreleased/Fixed-20230809-184725.yaml", "sdk/nodejs/api/client.gen.ts", "sdk/nodejs/api/test/api.spec.ts", "sdk/nodejs/api/utils.ts"] | 🐞 Node.js SDK - Enum wrongly interpreted | ### What is the issue?
There appears to be an issue with the interpretation of the Enum in the Node.js SDK.
In the example bellow ImageMediaTypes.Dockermediatypes should return `DockerMediaTypes` but instead it returns 0.
```ts
await client
.container()
.publish(gitLabImageRepo, {
platformVariants: seededPlatformVariants,
mediaTypes: ImageMediaTypes.Dockermediatypes,
});
```
### Log output
```shell
GraphQLRequestError: Argument "mediaTypes" has invalid value 0.
Expected type "ImageMediaTypes", found 0.
at file:///Users/doe0003p/Projects/it-service/qa/teamscale/base-images/tflint/node_modules/@dagger.io/dagger/dist/api/utils.js:155:23
at Generator.throw (<anonymous>)
at rejected (file:///Users/doe0003p/Projects/it-service/qa/teamscale/base-images/tflint/node_modules/@dagger.io/dagger/dist/api/utils.js:5:65)
at processTicksAndRejections (node:internal/process/task_queues:95:5) {
cause: ClientError: Argument "mediaTypes" has invalid value 0.
```
### Steps to reproduce
_No response_
### SDK version
Node.js SDK v0.8.0
### OS version
macOs 13.4.1 | https://github.com/dagger/dagger/issues/5575 | https://github.com/dagger/dagger/pull/5594 | 2ce7f970373569b476956b8a0d5c37b3a384ff49 | 6e4ba34afd975beef1e8cadb7b17003f810b1207 | "2023-08-04T13:28:35Z" | go | "2023-08-09T18:42:54Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,572 | [".changes/unreleased/Fixed-20230804-083720.yaml", ".changes/unreleased/Fixed-20230804-085622.yaml", "core/schema/query.go", "engine/client/client.go", "internal/mage/engine.go", "internal/mage/util/engine.go"] | 🐞 `dagger run` is reporting an incorrect Engine version | ### What is the issue?
When running `dagger run` with `v0.8.0`, the Engine version is reported incorrectly:
<img width="607" alt="image" src="https://github.com/dagger/dagger/assets/3342/4c6fcc9e-985b-4032-95e7-ebeed03f94b2">
For comparison, this is what the output looks like for `v0.6.4`:
<img width="509" alt="image" src="https://github.com/dagger/dagger/assets/3342/1f94f12c-a46b-4c2a-8cf4-cd078d446ebd">
<img width="587" alt="image" src="https://github.com/dagger/dagger/assets/3342/b27d01d7-c137-4485-9006-6deb7fa01574">
---
I suspect that this is related to https://github.com/dagger/dagger/pull/5315, but I did not spend time digging into it. cc @TomChv @vito
### Log output
_No response_
### Steps to reproduce
_No response_
### SDK version
Go SDK v0.8.0
### OS version
SDK / CLI on macOS 12.6 & Engine on NixOS 22.11 (Linux 5.15) | https://github.com/dagger/dagger/issues/5572 | https://github.com/dagger/dagger/pull/5578 | 0a46ff767287dfadaa403c0882d24983eb2c8713 | 1d802ce9b96ccac30bd179eae86c50d1a92698d6 | "2023-08-03T19:23:31Z" | go | "2023-08-04T16:07:38Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,507 | ["docs/current/162770-faq.md", "docs/current/sdk/python/866944-install.md", "website/static/img/current/faq/release-notes.png"] | How to upgrade | ### What is the issue?
We don't have clear documentation on how to upgrade to a new Dagger version
| https://github.com/dagger/dagger/issues/5507 | https://github.com/dagger/dagger/pull/5515 | 4a4ea2423baaaf3d1991b361305f219ccb44a0b4 | d649f2d277ae3610da7c5c22929795694b3d1f19 | "2023-07-25T23:00:31Z" | go | "2023-08-03T13:24:58Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,437 | ["core/directory.go", "core/integration/directory_test.go"] | 🐞 MergeOp corner case results in wrong contents | Noticed this while working on Zenith; just filing the issue for now so as not to get distracted, but I will fix it before the next release.
If you use `WithDirectory` where the dest path of the source dir is the same as the internal selector of the source dir, then direct merge is incorrectly triggered, which can result in contents from above the internal selector being revealed under the `/` of the dest dir.
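A minimal sketch of the scenario (pseudocode with a hypothetical API shape, not a verified reproduction):

```
// src is selected at /sub inside a larger directory
src = dir.withNewFile("/sub/inner.txt", ...)
         .withNewFile("/top.txt", ...)
         .directory("/sub")                  // internal selector: /sub

out = emptyDir.withDirectory("/sub", src)    // dest path == internal selector
// direct merge is wrongly triggered: contents above the selector
// (here /top.txt) can be revealed under out's root
```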
I think we just need to enforce that direct merge requires the source's selector to be `/`, so the fix itself is easy; the bigger task is adding the missing test coverage for this case.
This should be a release blocker (cc @gerhard should we add a label for release blockers?) | https://github.com/dagger/dagger/issues/5437 | https://github.com/dagger/dagger/pull/5448 | 736927938824e8c35d28aec7287e79c1f89ff3fd | 4842448132758a07be75491d73548ca9ec1edd5e | "2023-07-11T17:07:41Z" | go | "2023-07-14T15:21:49Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,434 | ["codegen/generator/nodejs/templates/src/object.ts.gtpl", "sdk/nodejs/.changes/unreleased/Breaking-20230727-235539.yaml", "sdk/nodejs/api/client.gen.ts", "sdk/nodejs/api/test/api.spec.ts", "sdk/nodejs/connect.ts", "sdk/nodejs/index.ts", "sdk/nodejs/provisioning/bin.ts", "sdk/nodejs/provisioning/engineconn.ts"] | Node.js: Export ´Client´ as a named export, instead of default | Import `Client` like the rest of codegen types:
```diff
-import Client, { connect, Container } from "@dagger.io/dagger"
+import { connect, Client, Container } from "@dagger.io/dagger"
```
Discussed in:
- https://github.com/dagger/dagger/pull/5141#discussion_r1196161481
Mentioned by users as well. Example in [discord](https://discord.com/channels/707636530424053791/1125996361117093958/1126050103422103572) in relation to https://github.com/dagger/dagger/issues/4036.
> **Warning**
> Breaking change. This is meant for a breaking release:
> - https://github.com/dagger/dagger/discussions/5374 | https://github.com/dagger/dagger/issues/5434 | https://github.com/dagger/dagger/pull/5517 | f81011d16a0a0a188e4eef94f924146e6ff6a69d | c7a2ec341ddc61e07fb69f4eae52d9fb41a520d5 | "2023-07-11T11:38:05Z" | go | "2023-07-31T12:28:39Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,433 | ["core/container.go", "core/integration/container_test.go", "core/schema/container.go", "core/schema/container.graphqls", "sdk/go/api.gen.go", "sdk/nodejs/api/client.gen.ts", "sdk/python/src/dagger/api/gen.py", "sdk/python/src/dagger/api/gen_sync.py"] | 🐞 Engine 0.6.3 breaks docker publish for non OCI supporting registry | ### What is the issue?
We are on Artifactory, and since the Dagger Engine v0.6.3 upgrade we can't push images to it. We are on Artifactory v6.x. Artifactory v7+ is supposed to support OCI images, but I don't have a way to test that. We don't have an upgrade planned in the near future, so we won't be able to use new features of Dagger if registries without OCI support don't work with new Dagger versions.
The specific PR that affected this: https://github.com/dagger/dagger/pull/5365
### Log output
HTTP 400 from Artifactory
### SDK version
Go SDK 0.7.2
### OS version
macOS | https://github.com/dagger/dagger/issues/5433 | https://github.com/dagger/dagger/pull/5467 | 13aea3dbfb15226f9b8ffede05e98c805cc2fe53 | e9d557a7ed6e8b6dd3cc8cae79df6f4f9bbff517 | "2023-07-10T19:05:44Z" | go | "2023-07-15T18:39:14Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,417 | ["go.mod", "go.sum"] | ✨ Incorporation of a Configurable Parameter to Specify Address, Name, and Password of the Registry from which the Dagger Engine Image is Downloaded | ### What are you trying to do?
In the context of our project, we are attempting to set up Dagger, a task runner and workflow orchestrator, in an environment where all services and resources are exclusively accessible through our intranet. To this end, we need to download the Dagger engine image from a private Docker or Podman registry hosted within our intranet.
Currently, Dagger's code is structured in such a way that the registry's domain (in this case, dl.dagger.io) is hardcoded as a non-configurable global variable in the download.py file. For instance:
```python
...
## omitted code
DEFAULT_CLI_HOST = "dl.dagger.io"
...
## omitted code
class Downloader:
"""Download the dagger CLI binary."""
CLI_BIN_PREFIX = "dagger-"
CLI_BASE_URL = f"https://{DEFAULT_CLI_HOST}/dagger/releases"
```
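One conventional way to make such a host configurable is an environment-variable override with a fallback to the public default. This is a hypothetical sketch of the pattern (the `DAGGER_CLI_HOST` variable name is invented for illustration; it is not an existing Dagger option):

```python
import os


def cli_base_url() -> str:
    # DAGGER_CLI_HOST is a hypothetical override for a private mirror;
    # fall back to the public default when it is not set.
    host = os.environ.get("DAGGER_CLI_HOST", "dl.dagger.io")
    return f"https://{host}/dagger/releases"


print(cli_base_url())  # https://dl.dagger.io/dagger/releases

os.environ["DAGGER_CLI_HOST"] = "registry.intranet.example"
print(cli_base_url())  # https://registry.intranet.example/dagger/releases
```

The same pattern could carry registry name and credentials through separate variables or a config file instead of hard-coded module-level constants.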
### Why is this important to you?
This feature is critical for us because all our services and resources are exclusively accessible through our private intranet. This network restriction means we need to establish parameters for a private repository from which the Dagger engine image can be downloaded, which is currently not possible due to hard-coded values in the software.
### How are you currently working around this?
For now, we do not have a satisfactory solution. Our current procedure involves opening access to the internet, which is far from ideal due to the need to maintain a separate Dagger code base and merge changes from the main project. This method is error prone and does not scale well with project growth.
We believe the addition of this feature could benefit others in similar environments or anyone who needs to download the Dagger engine image from a private repository for security or other reasons. | https://github.com/dagger/dagger/issues/5417 | https://github.com/dagger/dagger/pull/2793 | 31b0522dfa8fe7b41c43c63babb4285092421c81 | be8a77d02e956f5bb808f158d1d8d60ddd3c0338 | "2023-07-06T15:21:47Z" | go | "2022-07-13T23:30:23Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,412 | ["sdk/elixir/lib/dagger/codegen/elixir/templates/object_tmpl.ex", "sdk/elixir/lib/dagger/gen/container.ex", "sdk/elixir/lib/dagger/gen/directory.ex", "sdk/elixir/lib/dagger/gen/project.ex", "sdk/elixir/lib/dagger/gen/query.ex"] | 🐞 sdk(elixir): Adding a directory fails | ### What is the issue?
The following code fails with an error:
```elixir
Mix.install([{:dagger, github: "dagger/dagger", sparse: "sdk/elixir"}])
client = Dagger.connect!()
src =
client
|> Dagger.Query.git("https://github.com/hexpm/hexpm")
|> Dagger.GitRepository.branch("main")
|> Dagger.GitRef.tree()
client
|> Dagger.Query.container()
|> Dagger.Container.from("hexpm/elixir:1.15.2-erlang-26.0.2-alpine-3.18.2")
|> Dagger.Container.with_workdir("/src")
|> Dagger.Container.with_directory("/src", src)
|> Dagger.Container.with_exec("mix deps.get")
|> Dagger.Container.stdout()
```
### Log output
```
Connected to engine d7ea1f45c95e
** (Protocol.UndefinedError) protocol Jason.Encoder not implemented for {:ok, "eyJsbGIiOnsiZGVmIjpbIkdxSUJDaUZuYVhRNkx5OW5hWFJvZFdJdVkyOXRMMmhsZUhCdEwyaGxlSEJ0STIxaGFXNFNKd29VWjJsMExtRjFkR2hvWldGa1pYSnpaV055WlhRU0QwZEpWRjlCVlZSSVgwaEZRVVJGVWhJbENoTm5hWFF1WVhWMGFIUnZhMlZ1YzJWamNtVjBFZzVIU1ZSZlFWVlVTRjlVVDB0RlRoSXRDZ3RuYVhRdVpuVnNiSFZ5YkJJZWFIUjBjSE02THk5bmFYUm9kV0l1WTI5dEwyaGxlSEJ0TDJobGVIQnRXZ0E9IiwiQ2trS1IzTm9ZVEkxTmpveE5EUTBaRFppTXpZd05UaGpPVEl6TkRSa1l6RmhOMlV6TWpobE5EazVaakE0TXpGbU1qVXpOVFkyT1RCa1lqSm1aR1l4TWpsalkySTFOakl6WkdSaiJdLCJtZXRhZGF0YSI6eyJzaGEyNTY6MTQ0NGQ2YjM2MDU4YzkyMzQ0ZGMxYTdlMzI4ZTQ5OWYwODMxZjI1MzU2NjkwZGIyZmRmMTI5Y2NiNTYyM2RkYyI6eyJjYXBzIjp7InNvdXJjZS5naXQiOnRydWUsInNvdXJjZS5naXQuZnVsbHVybCI6dHJ1ZX0sInByb2dyZXNzX2dyb3VwIjp7ImlkIjoiW3tcIm5hbWVcIjpcIlwiLFwibGFiZWxzXCI6W3tcIm5hbWVcIjpcImRhZ2dlci5pby9lbmdpbmVcIixcInZhbHVlXCI6XCJkN2VhMWY0NWM5NWVcIn1dfV0ifX0sInNoYTI1NjphZjc1ZGJmNjMxMjIwNjRhMDhiNDYxYmU1ODBkNzdmNTcxZWZjOTVlZWZjMWZiOTY4MjAyMjhhZGM5NDVkNDY2Ijp7ImNhcHMiOnsiY29uc3RyYWludHMiOnRydWUsInBsYXRmb3JtIjp0cnVlfX19LCJTb3VyY2UiOnsibG9jYXRpb25zIjp7InNoYTI1NjoxNDQ0ZDZiMzYwNThjOTIzNDRkYzFhN2UzMjhlNDk5ZjA4MzFmMjUzNTY2OTBkYjJmZGYxMjljY2I1NjIzZGRjIjp7fX19fSwiZGlyIjoiIiwicGxhdGZvcm0iOnsiYXJjaGl0ZWN0dXJlIjoiYW1kNjQiLCJvcyI6ImxpbnV4In0sInBpcGVsaW5lIjpbeyJuYW1lIjoiIiwibGFiZWxzIjpbeyJuYW1lIjoiZGFnZ2VyLmlvL2VuZ2luZSIsInZhbHVlIjoiZDdlYTFmNDVjOTVlIn1dfV19"} of type Tuple, Jason.Encoder protocol must always be explicitly implemented. This protocol is implemented for the following type(s): Any, Atom, BitString, Date, DateTime, Decimal, Float, Integer, Jason.Fragment, Jason.OrderedObject, List, Map, NaiveDateTime, Time
(jason 1.4.0) lib/jason.ex:164: Jason.encode!/2
(dagger 0.0.0) lib/dagger/query_builder.ex:47: anonymous fn/1 in Dagger.QueryBuilder.Selection.build_args/1
(elixir 1.15.1) lib/enum.ex:1794: anonymous fn/2 in Enum.map_join/3
(elixir 1.15.1) lib/enum.ex:1763: anonymous fn/4 in Enum.map_intersperse/3
(stdlib 5.0.1) maps.erl:416: :maps.fold_1/4
(elixir 1.15.1) lib/enum.ex:2522: Enum.map_intersperse/3
(elixir 1.15.1) lib/enum.ex:1794: Enum.map_join/3
(dagger 0.0.0) lib/dagger/query_builder.ex:48: Dagger.QueryBuilder.Selection.build_args/1
```
### Steps to reproduce
1. Copy the script above into a file on your computer.
2. Run the script using `elixir script.exs`.
### SDK version
Elixir SDK pre-1.0
### OS version
macOS 13.3 (22E252) | https://github.com/dagger/dagger/issues/5412 | https://github.com/dagger/dagger/pull/5413 | 34f76deefbaf594bd7cc17af1be14bbeee65a7c3 | 169c79059217b16f910432d4dde4d53284597c63 | "2023-07-05T21:39:29Z" | go | "2023-07-14T16:10:05Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,368 | ["sdk/elixir/README.md"] | 🐞 sdk(elixir): Running basic example fails | ### What is the issue?
The example from the [readme](https://github.com/dagger/dagger/tree/main/sdk/elixir#running) fails with what appears to be a generic issue:
```elixir
Mix.install([{:dagger, github: "dagger/dagger", sparse: "sdk/elixir"}])
client = Dagger.connect!()
client
|> Dagger.Query.container([])
|> Dagger.Container.from(address: "hexpm/elixir:1.14.4-erlang-25.3-debian-buster-20230227-slim")
|> Dagger.Container.with_exec(args: ["elixir", "--version"])
|> Dagger.Container.stdout()
|> IO.puts()
```
### Log output
The output when running the above code is:
```
** (Protocol.UndefinedError) protocol Jason.Encoder not implemented for {:args, ["elixir", "--version"]} of type Tuple, Jason.Encoder protocol must always be explicitly implemented. This protocol is implemented for the following type(s): Any, Atom, BitString, Date, DateTime, Decimal, Float, Integer, Jason.Fragment, Jason.OrderedObject, List, Map, NaiveDateTime, Time
(jason 1.4.0) lib/jason.ex:164: Jason.encode!/2
(dagger 0.0.0) lib/dagger/query_builder.ex:47: anonymous fn/1 in Dagger.QueryBuilder.Selection.build_args/1
(elixir 1.15.0) lib/enum.ex:1794: anonymous fn/2 in Enum.map_join/3
(elixir 1.15.0) lib/enum.ex:1763: anonymous fn/4 in Enum.map_intersperse/3
(stdlib 5.0.1) maps.erl:416: :maps.fold_1/4
(elixir 1.15.0) lib/enum.ex:2522: Enum.map_intersperse/3
(elixir 1.15.0) lib/enum.ex:1794: Enum.map_join/3
(dagger 0.0.0) lib/dagger/query_builder.ex:48: Dagger.QueryBuilder.Selection.build_args/1
```
### Steps to reproduce
Run the code excerpt above and observe the error.
Trivia: I'm using Elixir 1.15 and OTP 26.0.1.
### SDK version
Elixir SDK 1.0
### OS version
macOS 13.3 (22E252) | https://github.com/dagger/dagger/issues/5368 | https://github.com/dagger/dagger/pull/5369 | 00f952aaeb456c232f9d1059223d3f0ec1da402b | 1cf71a5dcdc8251565b5f49b2f4c7eea18b002ae | "2023-06-25T19:48:29Z" | go | "2023-06-26T18:22:41Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,342 | ["sdk/elixir/lib/dagger/query_builder.ex"] | 🐞 sdk(elixir): with_env_variables return error rather than returning the list of env variables | ### What is the issue?
From the following example:
```elixir
Mix.install([
{:dagger, path: "."}
])
client = Dagger.connect!()
client
|> Dagger.Query.container()
|> Dagger.Container.from("alpine")
|> Dagger.Container.env_variables()
|> IO.inspect()
```
The API will return an error:
```elixir
{:error,
%Dagger.QueryError{
errors: [
%{
"locations" => [%{"column" => 40, "line" => 1}],
"message" => "Field \"envVariables\" of type \"[EnvVariable!]!\" must have a sub selection."
}
]
}}
```
This should return the list of environment variables, not an error.
### Log output
_No response_
### Steps to reproduce
_No response_
### SDK version
elixir
### OS version
macOS 13.1 (22C65) | https://github.com/dagger/dagger/issues/5342 | https://github.com/dagger/dagger/pull/5361 | f799e778fc786947075603bee83490c4e26aca74 | 61419451653e6ac2a6a4de72fcf1ce3bb7a99419 | "2023-06-20T17:16:32Z" | go | "2023-06-23T17:52:23Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,331 | ["go.mod", "go.sum"] | 🐞 sdk(elixir): any graphql api that accept id field is display the wrong type | ### What is the issue?
The codegen converts any ID field type into the corresponding object type (for example, `FileID` is converted to `File`), but this conversion is not applied to function arguments or documentation. Here is a screenshot example (borrowed from `dagger_ex`):

### Log output
_No response_
### Steps to reproduce
_No response_
### SDK version
-
### OS version
- | https://github.com/dagger/dagger/issues/5331 | https://github.com/dagger/dagger/pull/2546 | 88bbd51639cedf93cabed0af8fff0a9c0f95e4d2 | d1533d0deb0767aa018ba35c8819757327fb0000 | "2023-06-16T15:11:08Z" | go | "2022-07-04T09:27:11Z" |
closed | dagger/dagger | https://github.com/dagger/dagger | 5,287 | ["cmd/dagger/exec_unix.go"] | `dagger run` does not work with Gradle | ### What is the issue?
[Gradle](https://gradle.org/) does not work when run within a `dagger run` context.
### Log output
Example. I have a simple application in a gradle module called playground. I can run the app easily with the following
```
$ ./gradlew playground:run -q --console=plain
hi
```
However, feeding this through `dagger run ./gradlew playground:run -q --console=plain` causes the process to hang indefinitely.
Adding `--debug` does not offer much insight:
```
$ dagger run -s ./gradlew playground:run --debug --console=plain
Connected to engine b406cfb838a0
2023-06-04T14:00:30.865-0400 [INFO] [org.gradle.internal.nativeintegration.services.NativeServices] Initialized native services in: /Users/megame/.gradle/native
2023-06-04T14:00:30.890-0400 [INFO] [org.gradle.internal.nativeintegration.services.NativeServices] Initialized jansi services in: /Users/megame/.gradle/native
```
At this point, it just hangs indefinitely.
### Steps to reproduce
This can be reproduced even without an associated project. Install Gradle (my version is 8.1.1) and run `gradle tasks`. Assuming you are in a directory with no gradle project, this operation will fail, but it will show some output. Do the same via `dagger run -s gradle tasks --console=plain` and you will see that even this command hangs.
### SDK version
Dagger CLI 0.6.1
### OS version
Confirmed on both MacOS Mojave and Ubuntu 22.04 | https://github.com/dagger/dagger/issues/5287 | https://github.com/dagger/dagger/pull/5712 | 28e09abe14bc4b27ed7515d22d59191c42194229 | 41634f3ad3f64c02cedc18c77fda1d4ca36c798a | "2023-06-04T18:03:09Z" | go | "2023-10-10T14:22:53Z" |