The problem here is your approach. Docker does not have an init system like you are used to on traditional systems. What Docker does is replace PID 1 with the process you specify in the `CMD` or `ENTRYPOINT` Dockerfile instruction. For now, ignore `ENTRYPOINT`, because it replaces what your `CMD` is run with (normally, it's `/bin/sh -c`). You need to instruct Docker to start your mongod service in your Dockerfile with the `CMD` instruction, like:

```
CMD /usr/bin/mongod
```

When you run your container, mongod will be your PID 1. Now, you're probably wondering at this point, "But what about my SSH server?" The answer is: don't run an SSH server on your Docker containers. There are some use cases where running an SSH server is okay, but almost all of the "normal" reasons (debugging, C&C, etc.) are nullified by the best practice for getting a shell on your container:

```
docker exec -it myContainer /bin/bash
```

This will drop you into a shell on your running container. The recommendation here for managing configuration and changes in your Docker container is to use something like Ansible. However, remember that Docker containers are ephemeral, and you shouldn't be restarting services and changing configuration state on them. If you need a config change, change the Dockerfile or config data, and then start a new container. Good luck! Here is a little more information on Dockerizing MongoDB, but keep in mind that the method described there alters the `ENTRYPOINT` in the Dockerfile, which is a little more involved and requires a better understanding of what's going on in Dockerfiles.
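To make that concrete, a minimal Dockerfile could look like the sketch below; the base image and config path are assumptions, and the only point is that mongod, not an init system or sshd, is the container's main process:

```
# Minimal sketch: mongod becomes PID 1, no SSH server in the image.
FROM ubuntu:14.04
# ...install the mongodb packages here (e.g. via apt-get)...
CMD ["/usr/bin/mongod", "--config", "/etc/mongod.conf"]
```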
I have installed MongoDB on a Docker container together with OpenSSH on Ubuntu 14.04. The container is running with SSH, but when I SSH into the container I get the following error when trying to start mongod:

```
root@430f9502ba2d:~# service mongod start
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service mongod start

Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the start(8) utility, e.g. start mongod
```

Also, `start mongod` does not affect anything. I also tried looking at "Mongo daemon doesn't run by service mongod start", without it helping. `mongod --config /your/path/to/mongod.conf` doesn't seem to work either, it just locks up. The error below is standard, as of course there is no mongod server running:

```
root@430f9502ba2d:/# mongo
MongoDB shell version: 2.6.9
connecting to: test
2015-05-07T20:49:56.213+0000 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2015-05-07T20:49:56.214+0000 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
```
Docker container mongod error when starting via ssh
What about host mounted volumes? If each application is only reading the configuration and the requirement is that it lives in different locations within the container, you could do something like:

```
docker run --name app1 --volume /opt/shared/config_file.yml:/opt/app1/config_file.yml:ro app1image
docker run --name app2 --volume /opt/shared/config_file.yml:/opt/app2/config_file.yml:ro app2image
```

The file on the host can be mounted at a separate location per container. In Docker 1.9 you can actually have arbitrary volumes from specific plugins to hold the data (such as Flocker). However, both of these solutions are still per host, and the data isn't available on multiple hosts at the same time.
Suppose I have the following configuration file on my Docker host, and I want multiple Docker containers to be able to access this file:

```
/opt/shared/config_file.yml
```

In a typical non-Docker environment I could use symbolic links, such that:

```
/opt/app1/config_file.yml -> /opt/shared/config_file.yml
/opt/app2/config_file.yml -> /opt/shared/config_file.yml
```

Now suppose app1 and app2 are dockerized. I want to be able to update config_file.yml in one place and have all consumers (Docker containers) pick up this change without requiring the container to be rebuilt. I understand that symlinks cannot be used to access files on the host machine that are outside of the Docker container. The first two options that come to mind are:

1. Set up an NFS share from the Docker host to the Docker containers
2. Put the config file in a shared Docker volume, and use docker-compose to connect app1 and app2 to the shared config container

I am trying to identify other options and then ultimately decide upon the best course of action.
Sharing a configuration file to multiple docker containers
You're setting up a specific user and permissions for that user. OpenShift's default configuration is to run containers with a random UID. It's recommended to use the root GID (GID 0) when setting permissions, instead of UIDs, as OpenShift will automatically apply GID 0 to the user. You can find more guidelines on creating images for OpenShift in the documentation: https://docs.openshift.com/container-platform/3.11/creating_images/guidelines.html#openshift-specific-guidelines
I'm trying to deploy a Create React App webapp on OpenShift using a Dockerfile. The OpenShift build completes successfully, and when I visit the route I'm able to see the application running for 1 second and then this error comes on the screen:

```
Failed to compile
EACCES: permission denied, open '/home/node/app/.eslintcache'
```

I don't understand why the permission denied error is coming, because I've assigned the directory permissions needed to the `node` user provided by the node Docker image in my Dockerfile. Here's my Dockerfile:

```
FROM node:14-alpine
RUN mkdir -p /home/node/app &&\
    chmod -R 775 /home/node/app &&\
    chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json /home/node/app/
USER node
RUN npm install
COPY --chown=node:node . /home/node/app
EXPOSE 3000
CMD ["npm", "start"]
```

Software versions: react-scripts 4.0.1; OpenShift 4.2, 4.4, 4.5 (tried with all). Here's the tutorial I used as reference and the source repo.

Update: Thanks to Will Gordon's answer, I was able to figure it out. OpenShift expects you to specify the user ID and not the name. Also, OpenShift runs containers as a random ID belonging to group 0, so permission for that group needs to be specified. Here's the working Dockerfile:

```
FROM node:14-alpine
RUN mkdir -p /home/node/app &&\
    chown -R node:node /home/node/app
WORKDIR /home/node/app
RUN chgrp -R 0 /home/node/app &&\
    chmod -R g+rwX /home/node/app
COPY package*.json /home/node/app/
USER 1000
RUN npm install
COPY --chown=node:node . /home/node/app
EXPOSE 3000
CMD ["npm", "start"]
```
Deploying Create React App on OpenShift: EACCES: permission denied, open '/home/node/app/.eslintcache'
The problem is that, as you have used double quotes, the command substitution is being done at the time of the `alias` declaration, not afterwards. Use single quotes:

```
alias polymer='docker run --rm -it -v $(pwd):/home/node/app -u node fresnizky/polymer-cli polymer'
```

Also, instead of using the `pwd` command substitution, `$(pwd)`, you can use the environment variable `PWD` expansion, `$PWD`, which will expand to the same value. In fact, the `pwd` command also gets its value from the `PWD` variable.
I'm running Docker CE on Ubuntu 16.04. I've created a Docker image for the polymer-cli. The idea is to be able to run polymer commands from inside disposable Docker containers using bash aliases that mount the current directory, run the command and then destroy the container, like this:

```
docker run --rm -it -v $(pwd):/home/node/app -u node fresnizky/polymer-cli polymer
```

This works perfectly, but if I create a bash alias for this command:

```
alias polymer="docker run --rm -it -v $(pwd):/home/node/app -u node fresnizky/polymer-cli polymer "
```

Then `$(pwd)` points to my home directory instead of my current directory. Does anyone know how I can solve this?
Docker $(pwd) and bash aliases
You're trying to reference a variable named `pwd`. There is no such predefined variable in Azure Pipelines. You mention that this works on your local machine, but that's not because there's a `pwd` variable defined. (In fact, there's probably not a `pwd` environment variable in your environment.) That's because `$(pwd)` is POSIX command substitution: it's actually executing the `pwd` command. This POSIX shell syntax will not work in Azure Pipelines configuration. Instead, use one of the Azure Pipelines variables. For example, if you want to map the sources directory, you can set the volumes field to:

```
$(Build.SourcesDirectory):/src
```

Or map source and binaries:

```
$(Build.SourcesDirectory):/src
$(Build.BinariesDirectory):/build
```
I tried to run a docker command through the Docker task in Azure DevOps with the built-in Docker task. As the variable for the volume of the docker command I gave this as the value, but it keeps failing with the error:

```
/usr/bin/docker: Error response from daemon: create $(pwd)/out: "$(pwd)/out" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
```

I tried to run the same docker run command with the same -v value on my local machine and it works perfectly. So I guess this is an Azure DevOps problem or a value that I forgot to pass along. My host agent is an Ubuntu 16, and the build task is as follows.
Docker task in Azure DevOps won't accept "$(pwd)" as variable
Here's a solution I came up with that's hackish; please let me know if you can do better.

docker-compose-devel.yml:

```
server:
  image: node:0.10
  command: sleep infinity

client:
  image: node:0.10
  links:
    - server
```

In window 1:

```
docker-compose --file docker-compose-dev.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-dev.yml ps -q server) bash
```

In window 2:

```
docker-compose --file docker-compose-dev.yml run client bash
```
I'm developing a server and its client simultaneously and I'm designing them in Docker containers. I'm using Docker Compose to link them up, and it works just fine for production, but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one. My docker-compose-devel.yml:

```
server:
  image: node:0.10

client:
  image: node:0.10
  links:
    - server
```

I can do `docker-compose up client` or even `docker-compose run client`, but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively. I want to be able to do `docker-compose run server bash` in one window and `docker-compose run --no-deps client bash` in another window. The problem with this is that no address for the server is added to `/etc/hosts` on the client, because I'm using `docker-compose run` instead of `up`. The only solution I can figure out is to use `docker run` and give up on Docker Compose for development. Is there a better way?
Development workflow for server and client using Docker Compose?
As described on GitHub, you can do this:

```
watchOptions: {
  poll: true
}
```

Or, in the package.json, instead of `--watch` use `--watch --watch-poll`.
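In case it helps to see where that option lives, here is a minimal webpack.config.js sketch; everything except `watchOptions` is a placeholder:

```
// webpack.config.js -- sketch; entry/output values are placeholders
module.exports = {
  entry: './src/index.js',
  output: { filename: 'bundle.js' },
  watchOptions: {
    poll: 1000,             // poll the filesystem every second; useful when file-change
    aggregateTimeout: 300   // events don't propagate across the Docker volume mount
  }
};
```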
Changed to: hot loading does not work in Docker, and it looks like it is a Docker issue. Following this "React with webpack" guide or this "React hot loader" one on the local host machine, they work fine and, to me, they work the same (still I don't get why you would install React hot loader). But running them in a container, updating/"hot loading" does not work in either of them. So this might be a question for a Docker expert?
webpack and react jsx - hot loading not working with docker container
When you work with Docker containers/images, you need to set your configuration accordingly: you must change localhost to your container name. For example, instead of:

```
http://localhost:8020/security/register
```

use:

```
http://authentication:8020/security/register
```
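For container-name (or service-name) resolution to work, the containers have to share a user-defined network; the default bridge network does not provide it. A docker-compose sketch (the service and image names are taken from the question being answered and are otherwise assumptions):

```
# docker-compose.yml -- sketch: services on the same Compose network
# reach each other by service name, e.g. http://security:8020
version: "3"
services:
  security:
    image: security
    ports:
      - "8020:8020"
  registration:
    image: registration
    ports:
      - "8030:8030"
  main:
    image: main
    ports:
      - "8000:8000"
```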
I have three apps running in 3 containers on the same host:

```
CONTAINER ID   IMAGE          COMMAND                         PORTS
3f938111c1bf   registration   "java -jar registration.jar"   0.0.0.0:8030->8030/tcp
cb9c4782194e   security       "java -jar security.jar"       0.0.0.0:8020->8020/tcp
60005507a246   main           "java -jar main.jar"           0.0.0.0:8000->8000/tcp
```

I am able to access an endpoint of the security app from the main app using an Ajax request. The registration app calls an endpoint of the security app from a Java method using a RestTemplate object. This call is refused by the security app as follows:

```
I/O error on POST request for "http://localhost:8020/security/register": Connect to localhost:8020 [localhost/127.0.0.1] failed: Connection refused (Connection refused); nested exception is org.apache.http.conn.HttpHostConnectException: Connect to localhost:8020 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
```

I am not able to identify the issue. Note that this call is working perfectly fine when I run these apps locally through Eclipse. I am very new to Docker. Is there a possibility that I am missing out on something? Any leads would be helpful. Thanks a lot!
Connection refused by Spring boot app running in different docker container
In that particular use case the solution should be like below. The reason is that /bin/bash is usually used with `-ti` to enter a shell inside the container. The same thing can be done with Compose by using the run command with a particular service. Note that I am exposing the service ports too:

```
docker-compose run --service-ports server bash
```

https://docs.docker.com/compose/reference/run/

If the container is already running, then `exec` should be enough.
I'd like to enter a Docker container in interactive mode with the command /bin/bash using a docker-compose.yml only. There is a similar question here on Stack Overflow: "Interactive shell using Docker Compose". The answers provided there didn't work. This is what my docker-compose.yml looks like:

```
version: "3"
services:
  server:
    image: golang:1.11.1
    volumes:
      - './server:/go'
    ports:
      - '8080:8080'
    command: '-ti'
    entrypoint:
      - '/bin/bash'
```

This is my console input and output:

```
[bluebrown@firefly gowild]$ docker-compose up --build
Recreating gowild_server_1 ... done
Attaching to gowild_server_1
server_1 | bash: cannot set terminal process group (-1): Inappropriate ioctl for device
server_1 | bash: no job control in this shell
server_1 | root@d5884893075a:/go# exit
gowild_server_1 exited with code 0
```

Reading the above-mentioned post, I of course also tried to substitute `command: '-ti'` for these two lines:

```
stdin_open: true
tty: true
```

but when doing this docker-compose gets stuck while attaching:

```
[bluebrown@firefly gowild]$ docker-compose up --build
Recreating gowild_server_1 ... done
Attaching to gowild_server_1
```

And nothing happens further: no error and exit, nor a 'done' message. When trying it with `sh` instead of `bash`, it says the following for the `command: '-ti'`:

```
server_1 | /bin/sh: 0: Illegal option -t
```

and it also gets stuck while attaching, just like with bash, when substituting it. Note that I can build and run the server without the command and entrypoint simply using the following:

```
docker-compose up
docker-compose run --service-ports server
```

Still, my question is how to do it using docker-compose and an entrypoint, so it can be done with `docker-compose up` only. Update: I'm using Linux Manjaro.
How to set a docker-compose /bin/bash entrypoint?
It turned out that I needed to put the IP (and I also put the path) in quotes. After fixing that, the PVC goes to status Bound, and the pod can mount it correctly.
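For reference, the relevant fragment of the PersistentVolume from the question would then look roughly like this:

```
  nfs:
    path: "/data/u4"
    server: "10.30.136.79"
    readOnly: false
```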
I am trying to configure my Kubernetes cluster to use a local NFS server for persistent volumes. I set up the PersistentVolume as follows:

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hq-storage-u4
  namespace: my-ns
spec:
  capacity:
    storage: 10Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/u4
    server: 10.30.136.79
    readOnly: false
```

The PV looks OK in kubectl:

```
$ kubectl get pv
NAME            CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM          STORAGECLASS   REASON    AGE
hq-storage-u4   10Ti       RWX           Retain          Released   my-ns/pv-50g                            49m
```

I then try to create the PersistentVolumeClaim:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-50gb
  namespace: my-ns
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```

kubectl shows the PVC status is Pending:

```
$ kubectl get pvc
NAME       STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc-50gb   Pending                                                     16m
```

When I try to add the volume to a deployment, I get the error:

```
[SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected.]
```

How do I get the PVC to a working state?
Kubernetes NFS PersistentVolumeClaim has status Pending
The ID needs to be unique within a given Docker host among all containers that currently exist (including exited and created containers). Once deleted, the engine no longer tracks the container ID. A container could potentially reuse the same container ID as a previously existing container, but the odds of that are fairly low.

The full ID is a 64-character hex string, which gives 16^64 possible permutations (115792089237316195423570985008687907853269984665640564039457584007913129639936, if my calculator is correct). If you only track the short IDs, that's a 12-character hex string, with 16^12 (281,474,976,710,656) permutations. If you create a significant number of containers and need to track them historically and uniquely, then you may want to use the full container ID.
I have read how containers are assigned their container IDs: "How the docker container id is generated". How is the uniqueness of a Docker ID verified, and in which pool is it unique? Among all exited, among all running, among all deleted/removed, among all ever created by a specific Docker service? I was wondering whether the container ID is a reusable value; since it comes from a random number, how likely is it that a new container will have exactly the same container ID as another one (exited, deleted etc.)? Another related issue: https://forums.docker.com/t/docker-container-id-uniqueness/5253

UPDATE: Could you please point me to the code that verifies whether the container ID already exists and, if it does, creates a new one?
Docker container ID uniqueness in docker service
You could create a temporary container just before `junit` to extract the test result files, copy the test results to your workspace, and finally remove it:

```
sh 'docker create --name temporary-container spring-image'
sh 'docker cp temporary-container:/var/www/java/target/surefire-reports .'
sh 'docker rm temporary-container'
junit 'surefire-reports'
```

You could also take a look at the docker-pipeline documentation, which provides some abstractions for building Docker images.
I am trying to create a Pipeline where Jenkins builds my Docker image, runs tests, and then deploys the container if the tests pass. The problem is that I have Maven running inside the Docker container, and I can't actually access the published tests until I run the container. I want the Docker container to be run and deployed after the tests pass. This seems like a simple thing to do, but I can't think of a good way to do it. Am I misunderstanding something? Thanks.

Dockerfile:

```
FROM openjdk:10 as step-one
COPY ./ /var/www/java/
WORKDIR /var/www/java
RUN apt-get update -y && apt-get install -y maven
RUN mvn clean package -X
ENTRYPOINT ["java"]
CMD ["-jar", "target/gs-serving-web-content-0.1.0.jar"]
EXPOSE 8080
```

Jenkinsfile:

```
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        echo 'Building..'
        sh 'docker build -t spring-image .'
      }
    }
    stage('Test') {
      steps {
        echo 'Testing..'
        junit '/var/www/java/target/surefire-reports/TEST-ma.SpringTest.xml'
      }
    }
    stage('Deploy') {
      steps {
        echo 'Deploying....'
        sh 'docker run -i -d --name spring-container spring-image'
      }
    }
  }
}
```
Jenkins: How to use JUnit plugin when Maven builds occur within Docker container
I restarted docker and tried again and now it works fine.
ANSWER: I still don't know what was really wrong, but after I restarted Docker and ran it again (same Dockerfile, same everything), it worked fine.

I'm using Docker on Windows and my Dockerfile is:

```
FROM ubuntu:15.04
COPY . /src
RUN apt-get update
RUN apt-get install -y nodejs
...etc
```

but when I try to build the image I get:

```
WARN[0001] SECURITY WARNING: You are building a Docker image from Windows against a Linux Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories
...
Step 3: RUN apt-get -y update
--> Using cache
--->ccd4120f98dd
Removing intermediate container 255796bdef29
Step 4: RUN apt-get install -y nodejs
 ---> Running in f6c1d5a54c7a
Reading package lists...
Reading dependency tree...
Reading state information...
E: Unable to locate package nodejs
INFO[0029] The command [/bin/sh -c apt-get install -y nodejs] returned a non-zero code:100
```

apt-get works, but when I try to put other apt-get install lines in, those don't work either, so it doesn't seem to be a problem with that particular package.
apt-get not working in Dockerfile
You can solve this by using a different version, like 16.04:

```
docker run -d \
  -h ubuntu \
  --name ubuntu \
  --privileged \
  docker.io/library/ubuntu:16.04 /sbin/init
```

After it is running, you can access it using the following command:

```
docker exec -it ubuntu /bin/bash
```

This version uses systemd.
I'm trying to execute a Docker image built using 'ubuntu:latest', and I keep getting SystemD error messages when I run the container:

```
System has not been booted with systemd as init system (PID 1). Can't operate.
```

If I try this solution and spawn the container using `docker run -it -e container=docker your-image-name /sbin/init`, I get the following error:

```
Failed to mount tmpfs at /run: Operation not permitted
Failed to mount tmpfs at /run/lock: Operation not permitted
[!!!!!!] Failed to mount API filesystems, freezing.
Freezing execution.
```

What should I try differently?
How do I run a Docker container that uses SystemD from the latest version of Ubuntu (18.10)?
My day job uses image names with a similar structure (hosted on Amazon ECR) and they work fine with plain Docker, Compose, and Kubernetes. I would not expect to run into any trouble with this, unless the specific image repository has stricter rules.
I need to copy images from Docker Hub into a private registry. For example, I need `redislabs/rebloom:2.2.2`. Can I then name it `my-private-registry.com/my-organization/redislabs/rebloom:2.2.2`? (Notice there is `my-organization`, which I cannot modify.) In other words, is `a.com/b/c/d:v1.0` OK or not? I read this post and see that Docker can parse it. However, will some tools reject this? Will containerd reject this? I am afraid that they accept it but it fails somewhere, which may be very difficult to debug. Thank you very much!
Can I have an extra slash "/" in a Docker (and containerd) image name?
Start your mongod container:

```
docker run -d --name mymongod ... mongo ...
```

Start a second container for mongorestore, linking it to the first:

```
docker run --link mymongod:db ... mongo mongorestore -h db ...
```

`mongorestore` will connect to the `mymongod` container via the alias `db` that Docker creates based on the specified `--link`.
I'm trying to set up a mongodb server with Docker, let it download a dump from the web, and populate the database with that info. My problem is that I can make it run and fill the database, but after it's done that, it just closes. This was how I went about solving my problem:

```
sudo -u mongodb /usr/bin/mongod --config $conf $args &
mongorestore dump
```

The problem here is that I can't run `mongorestore` if `mongod` isn't running, but if I start mongod with `mongod &`, then the container will close down after `mongorestore` has finished running. In my Dockerfile, I'm running those commands by doing `CMD ["/etc/mongod/mongostart.sh"]`.
How to run mongorestore after mongod in docker
The solution is:

```
func writeDb(dbName string) {
	var mysqldumpPath string = "/usr/bin/mysqldump"
	cmd := exec.Command("docker", "exec", "some-mysql", mysqldumpPath,
		"-u", fmt.Sprintf("%s", USER), fmt.Sprintf("-p%s", PASSWORD), fmt.Sprintf("%s", dbName))
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	bytes, err := ioutil.ReadAll(stdout)
	if err != nil {
		log.Fatal(err)
	}
	err = ioutil.WriteFile("./backup/"+dbName+".sql", bytes, 0644)
	if err != nil {
		panic(err)
	}
}
```

Just without the `> dbname.sql` redirection.
I'm trying to call docker's mysqldump from the host system to save a MySQL dump from Go. It works correctly with the host mysqldump, but it doesn't work with docker's mysqldump.

```
func writeDb(dbName string) {
	var mysqldumpPath string = "/usr/bin/mysqldump"
	//var mysqldumpPath string = "/Applications/MAMP/Library/bin/mysqldump"
	//cmd := exec.Command(mysqldumpPath, fmt.Sprintf("-u%s", USER), fmt.Sprintf("-p%s", PASSWORD), dbName)
	cmd := exec.Command("docker", "exec", "some-mysql", mysqldumpPath,
		fmt.Sprintf("%s", USER), fmt.Sprintf("-p%s", PASSWORD), dbName,
		">", fmt.Sprintf("%s.sql", dbName))
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	bytes, err := ioutil.ReadAll(stdout)
	if err != nil {
		log.Fatal(err)
	}
	err = ioutil.WriteFile("./backup/"+dbName+".sql", bytes, 0644)
	if err != nil {
		panic(err)
	}
}
```

I got only an empty MySQL dump, even for a non-empty database.
How to redirect stdout from docker container to host
If you take a look at the tag for 8.0, you can see that the base uses a different version of Oracle Linux (8 vs 7). Yum is not installed in 8; instead, there's a minimal installer (microdnf). So this substitution should work for you:

```
microdnf install -y vim
```
I had been using MySQL version 5.7 and this was working.

Dockerfile (working):

```
FROM mysql:5.7
...
RUN yum update -y; yum install -y vim
```

Then I upgraded to MySQL 8 and now I'm getting this error:

```
$ gnt build
...
#5 [ 2/12] RUN yum update -y; yum install -y vim
#5 sha256:a564337cc7df72796c4c967652d420ef76ec98034de106834a473bceb4889532
#5 0.325 /bin/sh: yum: command not found
#5 0.326 /bin/sh: yum: command not found
#5 ERROR: executor failed running [/bin/sh -c yum update -y; yum install -y vim]: exit code: 127
------
 > [ 2/12] RUN yum update -y; yum install -y vim:
------
executor failed running [/bin/sh -c yum update -y; yum install -y vim]: exit code: 127
> Task :Server-mysql:buildDockerImage FAILED
FAILURE: Build failed with an exception.
```

Dockerfile (not working):

```
FROM mysql:8
...
RUN yum update -y; yum install -y vim
```
How to install vim in a docker image based on mysql version 8?
What about generating a private key and displaying it to the user? I use this snippet as part of the entrypoint script for an image:

```
KEYGEN=/usr/bin/ssh-keygen
KEYFILE=/root/.ssh/id_rsa
if [ ! -f $KEYFILE ]; then
  $KEYGEN -q -t rsa -N "" -f $KEYFILE
  cat $KEYFILE.pub >> /root/.ssh/authorized_keys
fi
echo "== Use this private key to log in =="
cat $KEYFILE
```
I set up a Docker image that supports SSH. No problem, lots of examples. However, most examples show setting a password using passwd. I want to distribute my image, and having a fixed password, especially for root, seems like a gaping security hole. Better, to me, is to set up the image with root having no password. When users get the image they would then copy their public SSH key to the image's /root/.ssh/authorized_keys file. Is there a recommended way to do this?

1. Provide a Dockerfile that builds on my image with an ADD command that the user can edit?
2. Provide a shell script that runs something like `cat ~/.ssh/authorized_keys | docker run -i sh -c 'cat > /root/.ssh/authorized_keys'`?
Setting ssh public keys on Docker image
I managed to replace the standard nginx.conf with a dynamically generated one following these steps:

1. Create a template config file with placeholders for dynamic data
2. Parse the file using Terraform's `template_file` data source
3. Store the parsed data in a ConfigMap and mount the map as a volume for the Nginx container

Step by step:

Create an nginx.conf template named nginx-conf.tpl:

```
events {
  worker_connections 4096;  ## Default: 1024
}

http {
  server {
    listen 80;
    listen [::]:80;
    server_name ${server_name};

    location /_plugin/kibana {
      proxy_pass https://${elasticsearch_kibana_endpoint};
    }

    location / {
      proxy_pass https://${elasticsearch_endpoint};
    }
  }
}
```

Parse the nginx-conf.tpl template with the following Terraform code:

```
data "template_file" "nginx" {
  template = "${file("${path.module}/nginx-conf.tpl")}"
  vars = {
    elasticsearch_endpoint        = "${aws_elasticsearch_domain.example-name.endpoint}"
    elasticsearch_kibana_endpoint = "${aws_elasticsearch_domain.example-name.kibana_endpoint}"
    server_name                   = "${var.server_name}"
  }
}
```

Create a ConfigMap and store the parsed template there under the nginx.conf key:

```
resource "kubernetes_config_map" "nginx" {
  metadata {
    name = "nginx"
  }
  data = {
    "nginx.conf" = data.template_file.nginx.rendered
  }
}
```

Finally, mount the ConfigMap key as a container volume:

```
# ...
spec {
  # ...
  container {
    # ...
    volume_mount {
      name       = "nginx-conf"
      mount_path = "/etc/nginx"
    }
  }
  volume {
    name = "nginx-conf"
    config_map {
      name = "nginx"
      items {
        key  = "nginx.conf"
        path = "nginx.conf"
      }
    }
  }
}
# ...
```

That's it. The Nginx server will start using the provided config. Useful links: Kubernetes ConfigMap as volume, Terraform template_file data source doc.
As a part of a bigger module, I want to deploy an nginx container and replace its default nginx.conf. The new config should be built using Terraform resources' data, which is generated at deployment time. Is there a way to do it?
How to modify a file in a Docker container when deploying with Terraform and Kubernetes?
Start your container like this:

```
docker run -e VAR=value -e ANOTHER_VAR=another_value ...
```

VAR and ANOTHER_VAR will be available in the container's environment.
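If the value really has to be computed at build time (the Ubuntu codename in the question), one common workaround is to read it and use it inside a single RUN instruction, because a RUN command cannot export an ENV for later layers. A sketch, assuming the base image's /etc/os-release defines VERSION_CODENAME (true on recent Ubuntu releases):

```
FROM ubuntu:latest
# Read the codename and use it in the same RUN layer; the repository line
# shown here is only an illustration.
RUN . /etc/os-release \
    && echo "deb http://archive.ubuntu.com/ubuntu ${VERSION_CODENAME} universe" >> /etc/apt/sources.list \
    && apt-get update
```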
I'm writing a Dockerfile to set up my customized WordPress environment. I'm starting with ubuntu:latest and would like to add some repositories dynamically, by setting an ENV variable to the codename of the current Ubuntu version. How can I do this, or is there a better way to achieve this?
Docker: Set value of ENV variable using RUN command?
Are you sure the command is really returning an error? The following Dockerfile doesn't get to the `echo foo`:

```
FROM alpine
RUN false
RUN echo foo
```

It just gets:

```
# docker build .
Sending build context to Docker daemon 3.072 kB
Step 0 : FROM alpine
 ---> 0a3b5ba3277d
Step 1 : RUN false
 ---> Running in 22485c5e763c
The command '/bin/sh -c false' returned a non-zero code: 1
```

To check whether your command is really failing, you could try something like this:

```
FROM alpine
RUN false || echo failed
RUN echo foo
```

which then gets me:

```
# docker build .
Sending build context to Docker daemon 3.072 kB
Step 0 : FROM alpine
 ---> 0a3b5ba3277d
Step 1 : RUN false || echo failed
 ---> Running in 674f09ae7530
failed
 ---> 232fd66c5729
Removing intermediate container 674f09ae7530
Step 2 : RUN echo foo
 ---> Running in c7b541fdb15c
foo
 ---> dd1bece67e71
Removing intermediate container c7b541fdb15c
Successfully built dd1bece67e71
```
If I have an error in a RUN command in my Dockerfile, the build just carries on to the next one.
How do you cancel a Dockerfile image build on the first error it encounters?
You want to add the volume to the container:

```
spec:
  containers:
    - name: discover
      image: docker:dind
      volumeMounts:
        - name: dockersock
          mountPath: "/var/run/docker.sock"
  volumes:
    - name: dockersock
      hostPath:
        path: /var/run/docker.sock
```
I'm setting up a Kubernetes deployment with an image that will execute docker commands (`docker ps` etc.). My yaml looks like the following:

```
kind: Deployment
apiVersion: apps/v1
metadata:
  name: discovery
  namespace: kube-system
  labels:
    discovery-app: kubernetes-discovery
spec:
  selector:
    matchLabels:
      discovery-app: kubernetes-discovery
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        discovery-app: kubernetes-discovery
    spec:
      containers:
        - image: docker:dind
          name: discover
          ports:
            - containerPort: 8080
              name: my-awesome-port
      imagePullSecrets:
        - name: regcred3
      volumes:
        - name: some-volume
          emptyDir: {}
      serviceAccountName: kubernetes-discovery
```

Normally I would run a docker container as follows:

```
docker run -v /var/run/docker.sock:/var/run/docker.sock docker:dind
```

Now, Kubernetes yaml supports `command` and `args`, but for some reason does not support options. What is the right thing to do? Perhaps I should configure a volume, but then, is it a volumeMount or just a volume? I am new to Kubernetes, so it is important for me to do it the right way. Thank you.
How to add "-v /var/run/docker.sock:/var/run/docker.sock" when running container from kubernetes deployment yaml
Reading the PR which added that line, it seems like it was added to fix an issue with Apple M1 support for the node-gyp package. A later PR took the line back out, but that change does not seem to be reflected on the Docker website. That does beg the question of why it breaks on M1, but I don't have an M1 laptop, so I can't answer that.
In the Docker tutorial I'm following, it tells me to put the following commands in my Dockerfile:

```
# syntax=docker/dockerfile:1
FROM node:12-alpine
RUN apk add --no-cache python2 g++ make
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
```

I understand what all of the lines are doing, except for:

```
RUN apk add --no-cache python2 g++ make
```

And everything seems to work without it. Can I delete this line from my Dockerfile? Will deleting this line cause problems for me down the road? Why do I need anything "python" in this Node project?
Can I remove `RUN apk add --no-cache python2 g++ make` from my Dockerfile?
Docker volumes mount files from the host into the container. So in this case, you've mounted the current directory of whatever host docker-machine is pointing to into the container. Unless you have some funky VM cross-mounting going on (like boot2docker does), this isn't going to match the directories on the machine you're running on.
So I have a simple containerised Django project, with another container for Sass CSS compilation. I use docker-compose with a docker-machine, but when I fire it up, the web container doesn't have any of my local files (manage.py etc.) in it, so it dies with a `file not found: manage.py` error. Let me explain more:

docker-compose.yml

```
web:
  build: .
  volumes:
    - .:/app
  ports:
    - "8001:5000"

sass:
  image: ubuntudesign/sass
  command: sass --debug-info --watch /app/static/css -E "UTF-8"
  volumes:
    - .:/app
```

Dockerfile

```
FROM ubuntu:14.04

# Install apt dependencies
RUN apt-get update && apt-get install -y python-dev python-pip git bzr libpq-dev pkg-config

FROM ubuntu:14.04

# Install apt dependencies
RUN apt-get update && apt-get install -y python-dev python-pip git bzr libpq-dev pkg-config

# Pip requirements files
COPY requirements /requirements

# Install pip requirements
RUN pip install -r /requirements/dev.txt

COPY . /app

WORKDIR /app

CMD ["python", "manage.py", "runserver", "0.0.0.0:5000"]
```

And a standard Django project in the local directory:

```
$ ls
docker-compose.yml  Dockerfile  manage.py  README.md  static  templates  webapp
```

And here's the error as isolated as I can make it:

```
$ docker-compose run web python
can't open file 'manage.py': [Errno 2] No such file or directory
```

Which is true:

```
$ docker-compose run web ls
static
```

I think this is a problem with working with remote docker-machines. I've tried to follow the simple Django tutorial, and I reckon local file sharing works differently. What works differently when using docker-machine?
local files missing from docker-machine container
The easiest way is to use the Singularity image's runscript and set `"python.pythonPath": "path/to/python.img"`. E.g.:

```
$ sudo singularity build py36.simg docker://python:3.6
Docker image path: index.docker.io/library/python:3.6
Cache folder set to /root/.singularity/docker
[9/9] |===================================| 100.0%
Importing: base Singularity environment
Exploding layer: sha256:6f2f362378c5a6fd915d96d11dda1e0223ccf213bf121ace56ae0f6616ea1dc8.tar.gz
Exploding layer: sha256:494c27a8a6b820f9167ec7e368b3a9bb47d7029f4dc8c97c67091f3757a5bc4e.tar.gz
Exploding layer: sha256:7596bb83081b6c8410df557d538a0ae45922cbf81e469c6f4cfa835247cb24ab.tar.gz
Exploding layer: sha256:372744b62d49eba993652ee4a1201801fe278b687d85489101e07e7b9a4900e0.tar.gz
Exploding layer: sha256:615db220d76c063138a2e6c5849703a7a80d682a682f7e1a841e6e7ed5f43879.tar.gz
Exploding layer: sha256:1865698adfb04b47d1aa53e0f8dac0a511d78285cb4dda39b4f3b0b3b091bb2e.tar.gz
Exploding layer: sha256:7159b3304cc0ff68a7903c2660aa37fdae97a02164449400c6ef283a6aaf3879.tar.gz
Exploding layer: sha256:ad0713808ef687d1e541819f50497506f5dce12604d1af54dbae153d61d5cf21.tar.gz
Exploding layer: sha256:7ba59390457320287875a9c381fee7936b50ecfd21abfe3c50278ac2f39b9786.tar.gz
Exploding layer: sha256:14b2fefd5f8a77dd860f2f455a2108a55836dd0062ced0df5fbd636ce3188ff7.tar.gz
Building Singularity image...
Singularity container built: py36.simg
Cleaning up...

$ ./py36.simg --version
Python 3.6.8

# this is equivalent to:
$ singularity exec py36.simg python3 --version
Python 3.6.8
```

If you're using a custom Singularity image with multiple versions of Python, you'll probably need to make a wrapper script and then use that. E.g.:

```
#!/bin/bash
exec singularity exec python.img python3.6 "$@"
```
I want to be able to use a Python interpreter inside a Singularity image from Visual Studio Code. It seems that all of the options to point VS Code to Python interpreters involve a direct path, but using Python within an image requires a command:

```
singularity exec path/to/image.img python3.6
```

I tried putting this in the VS Code settings.json file:

```
"[python]": {
    "python.pythonPath": "singularity exec /home/sryadgir/all/docker/py_dock/pydock_v0.img python3.6"
}
```

with no luck; running any Python code from VS Code uses the Python interpreter here: `/usr/bin/python3`
How can I use a python interpreter in a singularity/docker image in visual studio code
Finally, I figured out how to copy it and use the environment variable. Here is the updated YAML file:

```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      volumes:
        - name: google-cloud-keys
          secret:
            secretName: gac-keys
      containers:
        - name: my-app
          image: us.gcr.io/my-app
          volumeMounts:
            - name: google-cloud-keys
              mountPath: /var/secrets/google
              readOnly: true
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/new-file-name.json
```
I am using Firebase in my GoLang project hosted on Google Kubernetes Engine. Steps I followed:

Enable the Firebase admin SDK on the Firebase account. It generated a service account JSON for me. This also created a service account under my Google console service credentials.

Followed this answer and added a new secret key using `kubectl create secret generic google-application-credentials --from-file=./sample-project.json`.

Made changes to my deployment.YAML file (added volume mounts and an environment variable):

```
spec:
  containers:
    - image: gcr.io/sample-ee458/city:0.27
      name: city-app
      volumeMounts:
        - name: google-application-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/application-credentials.json
```

Set up the volume in the same file:

```
volumes:
  - name: google-application-credentials-volume
    secret:
      secretName: google-application-credentials
      items:
        - key: application-credentials.json # default name created by the create secret from-file command
          path: application-credentials.json
```

Run `kubectl apply -f deployment.yaml` and deploy using the `docker push` command.

It's throwing me the error `error getting credentials using google_application_credentials environment variable gke`. What am I missing here? Any hint would be appreciated.
How to set up an environment variables on google kubernetes engine?
From the commands in the question "Docker Minimal Image PyInstaller Binary File?", I got the links about how to make a Python binary static, like the Go application demo "say hello world in scratch". So I did a single, easy demo, app.py:

```
print("test")
```

Then do a docker build with this Dockerfile:

```
FROM bigpangl/python:3.6-slim AS complier

WORKDIR /app
COPY app.py ./app.py
RUN apt-get update \
    && apt-get install -y build-essential patchelf \
    && pip install staticx pyinstaller \
    && pyinstaller -F app.py \
    && staticx /app/dist/app /tu2k1ed

FROM scratch
WORKDIR /
COPY --from=complier /tu2k1ed /
COPY --from=complier /tmp /tmp
CMD ["/tu2k1ed"]
```

The resulting image is just 7.22 MB (I am not sure if you can see the pic). Running it with `docker run test` succeeds and prints the output.

PS: From my tests, the `CMD` must be written as `["xxx"]`, not `xxx` directly, and the /tmp directory is required in the demo. Other Python applications were not tested, just the demo code above with print.
I have a similar question to this one: "Is there a way to compile a Python program to binary and use it with a Scratch Dockerfile?" On that page, I saw that someone said a C application runs well when compiled with `-static`. So I have a new question: does pyinstaller have any parameter like `gcc -static` to make a Python application run well in a scratch Docker image?
Does pyinstaller have any parameters like gcc -static?
Answer to the first question: Vagrant is a way to quickly set up Docker-based containers on your local machine. To run Docker containers, you need a Linux kernel, which you can provide either by running the containers on your physical machine or inside a Vagrant-provisioned VM. Having Vagrant's provisioned VM run the containers benefits you in the following ways:

1. You can safely remove containers at any time.
2. You can automate all docker commands via a Vagrant script.

Answer to the second question: Vagrant directly communicates with the Docker containers. Each Docker container on the VM gets its own IP and space. You may modify the Vagrantfile for port forwarding per your machine's needs. Hope it helps.
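To make the interaction concrete, a typical day-to-day workflow looks roughly like this (a sketch; it assumes a Vagrantfile that provisions Docker inside the VM):

```
vagrant up          # boot the VM and run its provisioners (e.g. install Docker)
vagrant ssh         # open a shell inside the VM
docker ps           # containers run inside the VM, not on your physical host
exit
vagrant destroy -f  # throw the whole environment away and start clean later
```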
I have a set of microservices whose deployment I would like to automate and standardize using Docker. I have been reading about Vagrant and I have a couple of questions on using Vagrant for setting up the environment.

1. I understand that Vagrant is used for setting up VMs and Docker for creating containers. What is the benefit of running a Docker container inside a VM? Doesn't it defeat the purpose of using Docker in the first place?
2. How does the interaction between Vagrant and Docker happen? Does the VM that I create using Vagrant contain Docker running inside it?
Vagrant and Docker with Microservices
There is an issue in this line:

```
run: docker build ./api/Service/ --file Dockerfile --tag my-image-name:$(date +%s)
```

The usage of the `--file` flag is wrong: the path is resolved relative to the working directory, not the build context. The correct way would be:

```
run: docker build ./api/Service/ --file ./api/Service/Dockerfile --tag my-image-name:$(date +%s)
```
I'm adding a Dockerfile to my ASP.NET Core application and it's located in a subdirectory. I'm trying to create a GitHub action to run the Dockerfile, but the action is having difficulty finding it. My folder structure is:

```
api/
|--Data/
|--Service/
|--|--Dockerfile
|--Tests/
|--MyProject.sln
frontend/
```

My action.yml is:

```
name: Docker Image CI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Build the Docker image
      run: docker build ./api/Service/ --file Dockerfile --tag my-image-name:$(date +%s)
```

When the action runs, I get the following error on the docker build:

```
Run docker build ./api/Service/ --file Dockerfile --tag my-image-name:$(date +%s)
  docker build ./api/Service/ --file Dockerfile --tag my-image-name:$(date +%s)
  shell: /bin/bash -e {0}
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/runner/work/MyProject-actions/MyProject-actions/Dockerfile: no such file or directory
##[error]Process completed with exit code 1.
```

Any help would be appreciated!
How do I specify the dockerfile location in my github action?
From the logs it seems that it tries to connect to Redis on localhost (127.0.0.1). The Express Docker container can reach Redis by service name, which is `redis`. Try replacing localhost with `redis` in `redisConnectionString`. Something like:

```
redis://[[user][:password@]]redis:6379
```

Hopefully that will solve your problem.
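A small sketch of how the backend could pick that up, keeping the compose-provided variable as the primary source and only falling back to the service name (the fallback value is an assumption):

```
// Prefer the URL injected by docker-compose (REDIS_URL); never fall back to
// localhost inside a container, since that points at the container itself.
const redisConnectionString = process.env.REDIS_URL || 'redis://redis:6379';
const client = redis.createClient({ url: redisConnectionString });
```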
I have an Express app and a React app, and in the backend part I'm using Redis. I set up one Dockerfile for the frontend, and one for the backend. Additionally, I set up the docker-compose.yml file, which looks like this:

```
# Specify docker-compose version.
version: '3'

# Define the services/containers to be run.
services:
  react:
    build: admin
    ports:
      - '3000:3000'

  express:
    build: .
    container_name: api
    ports:
      - '3001:3001'
    depends_on:
      - redis
    links:
      - mongo
      - redis
    environment:
      - REDIS_URL=redis://cache
      - MONGO_URL=mongodb://db/tests

  mongo:
    image: mongo:4
    container_name: db
    ports:
      - '27017:27017'

  redis:
    image: redis:4
    container_name: cache
    ports:
      - '6379:6379'
```

And inside my backend, I call the Redis client as follows:

```
const bluebird = require('bluebird');
const config = require('config');
const logger = require('./logger');
const redis = require('redis');

bluebird.promisifyAll(redis);

const RedisService = function() {
  const redisConnectionString = process.env.REDIS_URL;
  this.client = redis.createClient({ url: redisConnectionString });
  this.client.on('error', (err) => {
    logger.error(err);
  });
};
```

where config reads the .json file inside my config folder. However, when I run `docker-compose up`, it throws the following error:

```
express_1 | [2019-06-10T20:14:38.169Z] error: "uncaughtException: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379 Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)
```

Any ideas how to properly connect Redis with docker-compose in my setup, where I read the connection string from the config .json file?
Setting up node with redis using docker-compose
This is a duplicate of "Run RSelenium in parallel". You can use the code in the above answer to do parallel execution:

```
library(RSelenium)
library(rvest)
library(magrittr)
library(foreach)
library(doParallel)

URLsPar <- c("http://www.bbc.com/", "http://www.cnn.com", "http://www.google.com",
             "http://www.yahoo.com", "http://www.twitter.com")
appHTML <- c()
(cl <- (detectCores() - 1) %>% makeCluster) %>% registerDoParallel

# open a remoteDriver for each node on the cluster
clusterEvalQ(cl, {
  library(RSelenium)
  remDr <- remoteDriver$new(remoteServerAddr = ip, port = port)
  remDr$open()
})

myTitles <- c()
ws <- foreach(x = 1:length(URLsPar),
              .packages = c("rvest", "magrittr", "RSelenium")) %dopar% {
  remDr$navigate(URLsPar[x])
  remDr$getTitle()[[1]]
}

# close browser on each node
clusterEvalQ(cl, {
  remDr$close()
})
stopImplicitCluster()
```
I have a simple yaml file:

```
seleniumhub:
  image: selenium/hub
  ports:
    - 4444:4444

firefoxnode:
  image: selenium/node-firefox-debug
  ports:
    - 4577
  links:
    - seleniumhub:hub

chromenode:
  image: selenium/node-chrome-debug
  ports:
    - 4578
  links:
    - seleniumhub:hub
```

that I have executed in Docker:

```
docker-compose up -d
```

I have one hub and two nodes running. Now I would like to run two very simple Selenium commands in parallel (written in RSelenium):

```
remDr$open()
remDr$navigate("http://www.r-project.org")
remDr$screenshot(display = TRUE)
```

I would like to know how I can run the above Selenium commands in Python or R, in parallel. I tried several ways but none works. For example, in R:

```
library(RSelenium)
remDr <- remoteDriver(remoteServerAddr = "192.168.99.100", port = 4444L)
remDr$open()
remDr$navigate("http://www.r-project.org")
remDr$screenshot(display = TRUE)
```

doesn't do anything. I have also tried to run two remoteDrivers, but that doesn't help either:

```
remDr <- remoteDriver(remoteServerAddr = "192.168.99.100", port = 4577L)
remDr$open()
remDr$navigate("http://www.r-project.org")
remDr$screenshot(display = TRUE)
```
Run yaml file for parallel selenium test from R or python
You should convert the file to the UNIX newline convention. You have a DOS file, which has an extra `\r` character before each `\n`, and that character is interpreted as part of the command. So the system will look for the program `php\r` and not `php`, and it fails.

```
tr -d '\15' < original_file > converted_file
```

should do the work (Stack Overflow has many other methods and tricks).
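Equivalent one-liners, in case tr is inconvenient (dos2unix only if that package is installed; the file name is taken from the question):

```
sed -i 's/\r$//' yii   # strip the trailing carriage returns in place
# or
dos2unix yii
```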
So, to go straight to the problem: when I run `./yii` it seems I get that error from Debian stretch, which I run from Docker. However, when I run `/usr/bin/env php -v` I get the correct output and there's no problem with it. It seems there's a problem with the newline being interpreted as part of the string, and I have no idea how to fix it. Sorry if my English is a bit messy, and thanks in advance.

Just a few notes:

- I've been trying to edit that file using nano within Debian, but it's useless; I'm getting the same error.
- I've checked the php files within /usr/bin, and both php and php7.1 exist.
- I can run `php -v` without problems as well.
Debian - /usr/bin/env: 'php\r': No such file or directory
In this case, Docker was waiting for containerd to start. The containerd pid is located at `/var/snap/docker/471/run/docker/containerd/containerd.pid`. This pid didn't exist, but the file was not deleted when the server was unceremoniously shut down. Deleting this file allows the containerd process to start again, and the problem is solved. I believe similar problems exist out there where a docker.pid file also points to a non-existent pid.
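For reference, the manual cleanup amounts to something like the following (the snap revision number, 471 here, will differ per system):

```
sudo rm /var/snap/docker/471/run/docker/containerd/containerd.pid
sudo snap restart docker
```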
I have Docker installed on Ubuntu 18.04.2 with snap. When I try to start Docker it fails with the following error log:

```
2020-07-16T23:49:14Z docker.dockerd[932]: failed to start containerd: timeout waiting for containerd to start
2020-07-16T23:49:14Z systemd[1]: snap.docker.dockerd.service: Main process exited, code=exited, status=1/FAILURE
2020-07-16T23:49:14Z systemd[1]: snap.docker.dockerd.service: Failed with result 'exit-code'.
2020-07-16T23:49:14Z systemd[1]: snap.docker.dockerd.service: Service hold-off time over, scheduling restart.
2020-07-16T23:49:14Z systemd[1]: snap.docker.dockerd.service: Scheduled restart job, restart counter is at 68.
2020-07-16T23:49:14Z systemd[1]: Stopped Service for snap application docker.dockerd.
2020-07-16T23:49:14Z systemd[1]: Started Service for snap application docker.dockerd.
```

It goes over and over into a restart loop. What should I do to get Docker working again?
Docker fails with "failed to start containerd: timeout waiting for containerd to start"
Finally I got this running. The error message coming from VS is very misleading and it has nothing to do with volume sharing. Eventually I realized that the problem was in running the debugger, because when I ran the solution with Ctrl+F5 everything was OK and the container started correctly. The problem occurred only when running with F5 and trying to attach a debugger. Then I found some clues in the console output. VS tries to download some tooling for debugging containers with a PowerShell script named GetVsDbg.ps1. When running this script I could observe errors like:

```
Add-Type : Cannot add type. The assembly 'System.IO.Compression.FileSystem' could not be found.
```

Finally I fixed this issue by updating the PowerShell version, which was somehow in collision with the .NET Framework installed on my machine.
I'm trying to get Docker support running with Visual Studio 2017 for a .NET Core 2.0 web app running on Linux containers. I'm working on a machine with Windows 7, so I must use Docker Toolbox with VirtualBox. I've already checked this question: "How to get docker toolbox to work with .net core 2.0 project", but I got stuck on the following problem when trying to run it with VS:

```
Volume sharing is not enabled. Enable volume sharing in the docker ce for windows settings
```

So far I know that there is a default volume mounted under C:\Users, so my project files should be copied somewhere under this folder in case I don't want to mount any other volume. So I copied them there. When I check the settings of my VirtualBox, the folder seems to be shared. I can even cd into this folder with the command line, but I still can't get over this problem. Any ideas about this?
Docker toolbox with Visual studio - Volume sharing is not enabled
This is the correct way to do it:

```
services:
  foo:
    ...
    volumes:
      - foo:/mnt
    deploy:
      mode: replicated
      replicas: 3

volumes:
  foo:
    name: 'foo-{{.Task.Slot}}'
    ...
```

Scaling the service will then create the volume(s) as needed. All credits go to @larsks.
I would like to mount individual volumes to each replica of my Docker service using the `{{.Task.Slot}}` syntax:

```
services:
  foo:
    ...
    volumes:
      - type: volume
        source: foo{{.Task.Slot}}
        target: /mnt
    deploy:
      mode: replicated
      replicas: 3

volumes:
  foo1:
    ...
  foo2:
    ...
  foo3:
    ...
```

However, Docker fails with:

```
service foo: undefined volume "foo{{.Task.Slot}}"
```

It seems that the Go template syntax is not interpreted in the `source` property. In the `target` property, it works smoothly:

```
services:
  foo:
    ...
    volumes:
      - type: volume
        source: foo1
        target: /mnt{{.Task.Slot}}
```

But that's obviously not what I need.
Using {{.Task.Slot}} in Docker volumes
tl;dr: The incorrect part is `address=127.0.0.1:8000`; it should be `address=0.0.0.0:8000`. Full command in the docker-compose file:

```
command: java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=0.0.0.0:8000 -jar gs-spring-boot-docker-0.1.0.jar
```

Long answer: Every container has its own network interface. Keeping that in mind, 127.0.0.1 means the loopback interface, and it's only accessible from the same host (i.e. if you are inside the container you can access it). In contrast, if you want the application to listen on every available network interface, you can swap it with 0.0.0.0, which is what we want in this case, because we are connecting from outside the container to the debugging port, which is 8000 inside the container, so the loopback interface is not sufficient.
I have a simple (dockerized) web application in Spring Boot. The app compiles correctly. The container builds fine without errors. The app is running fine on localhost:8080; it's a simple "Hello World". Now I'm trying to attach the Spring Tool Suite debugger to the containerized JVM with remote debugging, but without success. The failure message is:

```
Failed to connect to remote VM com.sun.jdi.connect.spi.ClosedConnectionException
```

This is my Dockerfile:

```
FROM openjdk:8-alpine
WORKDIR /
EXPOSE 8080 8000
COPY target /
```

and that's my docker-compose.yml:

```
version: '3.7'
services:
  web:
    build: .
    ports:
      - "8080:8080"
      - "8000:8000"
    command: java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=127.0.0.1:8000 -jar gs-spring-boot-docker-0.1.0.jar
```

In Spring Tool Suite I have these settings for remote debugging: Remote Java Application, Connection type: Standard (Socket attach), Host: localhost, Port: 8000. I'm using a MacBook Pro with OSX Mojave (10.14.6). Thanks for any suggestion.
Remote debug Spring Boot application
Regarding "when I try to pull the image manually it shows me an error that no image was found": the method you're following provides private registry credentials to the ECS Agent, but not the Docker CLI (the Docker CLI stores its credential data in a different place). Since you've configured credentials for the Agent, you should be able to run a task definition referencing an image in your private registry without manually pulling the image from the Docker CLI.

Edit: It looks like you probably have an error in your `/etc/ecs/ecs.config` file on the instance due to how you're quoting the `echo` command. You'll want to change this line:

```
echo "ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"my_name","password":"my_password","email":"[email protected]"}}" >>/etc/ecs/ecs.config
```

to:

```
echo 'ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"my_name","password":"my_password","email":"[email protected]"}}' >>/etc/ecs/ecs.config
```
I am writing a Terraform script for creating an ECS auto scaling cluster. I have created a cluster and added EC2 container instances to it. My task definition file contains an image that is from a private Docker repository. I went through the AWS official documentation, found a page on Private Registry Authentication, and tried both of the ways described there:

1. using dockercfg
2. the docker way

I put my ecs.config file in the S3 bucket, and during instance boot time I passed the user data as:

```
#!/bin/bash
yum install -y aws-cli
aws s3 cp s3:///ecs.config /etc/ecs/ecs.config
```

In my second approach I passed the user data as:

```
echo "ECS_ENGINE_AUTH_TYPE=docker" >>/etc/ecs/ecs.config
echo "ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"my_name","password":"my_password","email":"[email protected]"}}" >>/etc/ecs/ecs.config
```

I find the data in /etc/ecs/ecs.config when I log in to my container instance, but when I try to pull the image manually it shows me an error that no image was found. Then I tried the docker login command there, entered my credentials manually, tried to pull that image again, and eventually it was successful. I am not sure whether there is a way to achieve private Docker registry authentication in the ECS-optimized image automatically via user data, or whether I am doing something wrong. Please help me out with this.
Private docker registry authentication in aws ecs optimized AMI is not successful
Since you've mentioned that the Docker daemon runs inside the minikube VM, I assume that you might be hitting the Kubernetes garbage collection mechanism, which keeps system utilization at an appropriate level and reduces the number of unused containers (built from images) according to specific thresholds. These eviction thresholds are fully managed by kubelet, the k8s node agent, which cleans up unused images and containers according to the parameters (flags) propagated in the kubelet configuration file. Therefore, I guess you can investigate the K8s eviction behavior by looking at the relevant thresholds, which are adjusted in the kubelet config file generated by the minikube bootstrapper at the following path: /var/lib/kubelet/config.yaml.
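For reference, the image-GC related fields in a KubeletConfiguration look roughly like this (a sketch; the values shown are only examples, not minikube's defaults):

```
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 85   # start removing unused images above this disk usage
imageGCLowThresholdPercent: 80    # keep deleting until usage is back below this
evictionHard:
  imagefs.available: "15%"        # hard eviction threshold for the image filesystem
```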
I loaded some docker images runningdocker load --input I can then see these images when executingdocker image lsAfter a while images start disappearing. Every few minutes there are less and less images listed. I did not run any of images yet. What could be the cause of this issue?EDIT: This issue arises with docker inside minikube VM.
Docker images disappearing over time
I tried something similar and got:
Step 4 : RUN mkdir ~/.m2
 ---> Running in 9216915b2463
mkdir: cannot create directory '/home/jenkins/.m2': No such file or directory
Your useradd alone is not enough to create /home/jenkins. For my user gg I do:
RUN useradd -d /home/gg -m -s /bin/bash gg
RUN echo gg:gg | chpasswd
RUN echo 'gg ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers.d/gg
RUN chmod 0440 /etc/sudoers.d/gg
USER gg
ENV HOME /home/gg
WORKDIR /home/gg
The -m flag makes useradd create the home directory for the user gg.
I have the following in my dockerfile. (There is much more. but I have pasted the relevant part here)RUN useradd jenkins USER jenkins # Maven settings RUN mkdir ~/.m2 COPY settings.xml ~/.m2/settings.xmlThe docker build goes through fine and when I run docker image, I see NO errors.but I do not see.m2directory created at/home/jenkins/.m2in the host filesystem.I also tried replacing~with/home/jenkinsand still I do not see.m2being created.what am I doing wrong?Thanks
dockerfile is not creating directory and copying files?
As of Docker 1.10, DNS is managed differently for user-defined networks. DNS for the default bridge network is unchanged for backwards compatibility. In a user-defined network, docker daemon uses the embedded DNS server. According to the documentation found here:https://docs.docker.com/engine/userguide/networking/configure-dns/--dns=[IP_ADDRESS...] The IP addresses passed via the --dns option is used by the embedded DNS server to forward the DNS query if embedded DNS server is unable to resolve a name resolution request from the containers. These --dns IP addresses are managed by the embedded DNS server and will not be updated in the container’s /etc/resolv.conf file.So, the DNS nameserver will be used, it just is not visible in the container's /etc/resolv.conf.
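A quick way to confirm that the forwarding still works, even though /etc/resolv.conf only shows 127.0.0.11, is to resolve an external name from inside the container (the network name is arbitrary and 10.0.0.2 stands in for the DNS server from the question - substitute one that is actually reachable):
docker network create mynet
docker run --rm --dns 10.0.0.2 --network mynet busybox nslookup example.com
# the query is answered by the embedded server at 127.0.0.11,
# which forwards it to 10.0.0.2 behind the scenes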
I am trying to create a docker container with a custom network and DNS settings.
docker network create --driver=bridge --opt "com.docker.network.bridge.enable_ip_masquerade"="true" --opt "com.docker.network.bridge.enable_icc"="true" --opt="com.docker.network.driver.mtu"="1500" --opt="com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" net
docker run --dns 10.0.0.2 --network=net busybox cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
But if I use the standard network everything works fine:
docker run --dns 10.0.0.2 --network=bridge busybox cat /etc/resolv.conf
nameserver 10.0.0.2
Docker DNS settings
You're using Python 3 but installing the Python 2 packages. Change yourDockerfileto the following:FROM python:3.5 ENV HOME /root ENV PYTHONPATH "/usr/lib/python3/dist-packages:/usr/local/lib/python3.5/site-packages" # Install dependencies RUN apt-get update \ && apt-get upgrade -y \ && apt-get autoremove -y \ && apt-get install -y \ gcc \ build-essential \ zlib1g-dev \ wget \ unzip \ cmake \ python3-dev \ gfortran \ libblas-dev \ liblapack-dev \ libatlas-base-dev \ && apt-get clean # Install Python packages RUN pip install --upgrade pip \ && pip install \ ipython[all] \ numpy \ nose \ matplotlib \ pandas \ scipy \ sympy \ cython \ && rm -fr /root/.cache
I am trying to installscipyfrom aDockerfileand I cannot for the life of me figure out how.Here is theDockerfile:FROM python:3.5 ENV HOME /root # Install dependencies RUN apt-get update RUN apt-get install -y gcc RUN apt-get install -y build-essential RUN apt-get install -y zlib1g-dev RUN apt-get install -y wget RUN apt-get install -y unzip RUN apt-get install -y cmake RUN apt-get install -y python3-dev RUN apt-get install -y gfortran RUN apt-get install -y python-numpy RUN apt-get install -y python-matplotlib RUN apt-get install -y ipython RUN apt-get install -y ipython-notebook RUN apt-get install -y python-pandas RUN apt-get install -y python-sympy RUN apt-get install -y python-nose # Install Python packages RUN pip install --upgrade pip RUN pip install cython # Install scipy RUN apt-get install -y python-scipyThis builds an image, but when I run the container and try toimport scipyit says:Python 3.5.1 (default, Mar 9 2016, 03:30:07) [GCC 4.9.2] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import scipy Traceback (most recent call last): File "", line 1, in ImportError: No module named 'scipy'I have tried usingRUN pip install scipyandRUN pip install git+https://github.com/scipy/scipy.gitbut those throw an error before completing the build.
Can't install scipy
Add error_reporting(-1); and you'll see:
Notice: iconv(): Wrong charset, conversion from 'UTF-8' to 'UTF-8//IGNORE' is not allowed in /test.php on line 5
Because apparently the alpine images just don't work properly with iconv and the maintainers have simply given up on actually fixing it. I think it is important to note here that PHP does not provide any official docker images; these are "Docker Official" images for PHP that are maintained by the docker community. If you don't mind somewhat larger base images, just switch to a non-alpine image.
Edit: Yes, the noted workaround does seem to work. For the sake of not leaving useful information behind a link, an example Dockerfile:
FROM php:7.4-alpine
# fix work iconv library with alpine
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/edge/community/ --allow-untrusted gnu-libiconv
ENV LD_PRELOAD /usr/lib/preloadable_libiconv.so php
Example build:
docker build -t php:7.4-alpine-iconv ./
Given the following code:<?php $mb_name = "湊崎 紗夏"; $tmp_mb_name = iconv('UTF-8', 'UTF-8//IGNORE', $mb_name); if($tmp_mb_name != $mb_name) { echo "tmp_mb_name: {$tmp_mb_name}\n"; echo "mb_name: {$mb_name}\n"; exit; } else { echo "no problem!\n"; }I tested in3v4l.organd it outputsno problem!However, inphp:7.4-fpm-alpine dockerimage, it outputs the following:tmp_mb_name: mb_name: 湊崎 紗夏According tophp.net:If you append the string //IGNORE, characters that cannot be represented in the target charset are silently discarded.Why does$mb_namecannot be represented inUTF-8in php alpine image?
Why does iconv returns empty string in php:7.4-fpm-alpine docker
Expanding on my comment: your client code's network address will not work:
resp, err := http.Post("http://localhost:8000/orders", "application/json", bytes.NewBuffer(requestBody)) // broken
as it is literally talking to itself (localhost refers to the client docker container - not the host OS). The quickest way - for testing purposes - to expose the two containers to your host OS would be:
docker run -it --rm -p 8000:8000 dockweb # server container
docker run -it --rm --net host dockcli  # client container
Beyond trivial testing this can quickly get unwieldy, so I recommend using something like docker-compose, which allows you to trivially link containers and their networks.
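As a rough docker-compose sketch of that recommendation (the service names are arbitrary and the image names are the ones from the commands above), the client would then target http://web:8000/orders instead of localhost, because Compose puts both services on one network where the service name resolves to the server container:
version: '3'
services:
  web:
    image: dockweb      # server, listens on 8000
  client:
    image: dockcli      # client code changed to call http://web:8000/orders
    depends_on:
      - web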
I am trying to make an http request from one project to another both using GO. The project that is making the request has the following dockerfile:FROM golang:alpine as builder WORKDIR /build COPY . . RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o main . FROM scratch WORKDIR /app COPY --from=builder /build . CMD ["./main"]The project that is waiting for the request is running on localhost:8000 and it has the following dockerfile:FROM golang:1.13.8 AS build-env ADD . /dockerdev WORKDIR /dockerdev RUN go get -d -v ./... RUN go build -o /server # Final stage FROM debian:buster EXPOSE 8000 WORKDIR / COPY --from=build-env /server / CMD ["/server"]When I makeresp, err := http.Post("http://localhost:8000/orders", "application/json", bytes.NewBuffer(requestBody))it gives me the following errordial tcp 127.0.0.1:8000: connect: connection refusedI am new to docker so any improvements are welcome!
dial tcp 127.0.0.1:8000: connect: connection refused golang docker containers
At least as of this time (June 2018), Docker still doesn't support this. I was able to work around the issue utilizingenvsubst.envsubstis part ofgettextand it can be used to replace only environment variables you tell it to.Tweak thedocker-compose.ymlvalue to look like an array or map (either brackets or curly braces) but have the value be an environment variable.For examplegraylog: image: graylog2/server2 extra_hosts: [ ${EXTRA_HOSTS} ]Then, define your environment variable without brackets or curly braces.For example:export EXTRA_HOSTS="'host1:10.10.10.1','host2:10.10.10.2'"Then utilizeenvsubstenvsubst '${EXTRA_HOSTS}' < docker-compose.yml > docker-compose.subst.yml && docker stack deploy -c docker-compose.subst.yaml foobarNotice that you pass'${EXTRA_HOSTS}'toenvsubst. This tells it to only replace this environment variable. This ensures it doesn't accidentally replace some other variable that's utilizing the variable substitution syntax of Docker compose files.
How can I use variable substitution for a list, map, or array value in adocker-compose.ymlfile.For example:graylog: image: graylog2/server2 extra_hosts: ${EXTRA_HOSTS}andexport EXTRA_HOSTS="['host1:10.10.10.1','host2:10.10.10.2']"gives the following error:graylog.extra_hosts must be a mappingI've tried different variations of the above with no luck.I do see that there's an open issue about this here:https://github.com/docker/compose/issues/4249Is it just not possible? Does anyone know of a workaround?
docker-compose variable substitution / interpolation with list, map, or array value
Regarding this error:
Connect to localhost:4576 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
It seems you already have the setting in place for the service jc; you need the same for your problem application:
links:
  - localstack
I guess your application is running in another docker container as well, not directly on the host. So you can't access localhost:4567 from the application container, because the emulated AWS services are not reachable inside that container itself. Two solutions:
link the localstack container to your application: for example, if the link name is localstack, then you can reach the service at localstack:4567
get the real IP address of the host and access the service at IP:4567
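For instance, if the application reads its endpoint from configuration, a hedged sketch would be the following (the variable name AWS_SQS_ENDPOINT is made up - use whatever setting your app actually reads, and note 4576 is the SQS port from the error message):
services:
  jc:
    # ...existing settings from the question...
    environment:
      - SPRING_PROFILES_ACTIVE=local
      - AWS_SQS_ENDPOINT=http://localstack:4576
  localstack:
    image: localstack/localstack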
I am setting up an application inside a docker container. I want this application to be able to connect with the localstack stack containerlocalstack docs. When i rundocker-compose upthe containers start up successfully. I can run a seperate java application not included with in docker-compose file that will connect successfully to the localstack container. But the application that starts up along with the localstack cannot connect. Ive looked at the docker docs and localstack docs and I cant figure out how to get these things to communicate with one another. Any help would be greatly apprecaited. Here is mydocker-composefile:version: '3.4' networks: default: driver: bridge services: jc: build: context: . dockerfile: ./Dockerfile args: - PORT=5001 network: host image: jc depends_on: - localstack container_name: jc ports: - 5001:5001 links: - localstack environment: - SPRING_PROFILES_ACTIVE=local localstack: image: localstack/localstack ports: - "4567-4584:4567-4584"The error message that I get is:sqs.SqsPoller app=jc version=2.0.1.0 : An exception occurred while polling for messages: Unable to execute HTTP request: Connect to localhost:4576 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
How to make a docker container communicate with the localstack docker container with docker-compose?
This worked on my end (just replace the container ID):
docker exec 1d3595c0ce87 sh -c 'mysqldump -uroot -pSomePassword DBName > /dumps/MyNewDump.sql'
mysqldump: [Warning] Using a password on the command line interface can be insecure.
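For context on why the original attempts failed: without sh -c, the > redirection is evaluated by the shell on the host (or dropped entirely with -d), so /dumps/MyNewDump.sql is looked up on the host, which produces the "No such file or directory" error. You can also skip the bind-mounted directory and redirect on the host side instead (container name and credentials are the ones from the question):
docker exec mysql sh -c 'mysqldump -uroot -pSomePassword DBName' > MyNewDump.sql
# and the reverse direction, restoring a dump from the host:
docker exec -i mysql sh -c 'mysql -uroot -pSomePassword DBName' < MyNewDump.sql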
I want to create mysql dumps for a database which is running in docker container. However I do not want to get into the container and execute the command but do it from the host machine. Is there a way to do it. I tried few things but probably I am wrong with the commands.docker exec -d mysql sh mysqldump -uroot -pSomePassword DBName > /dumps/MyNewDump.sqldocker exec -d mysql sh $(mysqldump -uroot -pSomePassword DBName > /dumps/MyNewDump.sql)docker exec -d mysql mysqldump -uroot -pSomePassword DBName > /dumps/MyNewDump.sqlthedumpsdirectory is already bind to the host machine.These commands are seems not the right way to do it or probably not the right way to do it at all. These always ends up with an error:bash: /dumps/MyNewDump.sql: No such file or directoryBut if I just runmysqldump -uroot -pSomePassword DBName > /dumps/MyNewDump.sqlinside the container it works fine.
How to execute mysqldump command from the host machine to a mysql docker container
For that specific docker-compose.yml there is no redis on 127.0.0.1; you should use redis as the host, since services on the same Docker network can find each other using the service names as DNS names.
const Redis = require('ioredis');
const redis = new Redis({ host: 'redis' });
Furthermore, depends_on does not wait for the redis container to be ready before starting; it only launches it first, so it is your job to wait before starting app.js, or to handle that inside app.js. ioredis comes with a reconnection strategy, so you may want to try that first. You can see my answer here regarding that issue: Wait node.js until Logstash is ready using containers
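One way to express that wait in Compose itself is a healthcheck plus the long form of depends_on. This is only a sketch, not tested against your stack: redis-cli is present in the redis:alpine image, but whether the condition form of depends_on is honoured depends on your Compose version.
services:
  test-service:
    build: .
    depends_on:
      redis:
        condition: service_healthy
  redis:
    image: "redis:alpine"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5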
I'm trying to connect my app to redis, but i get:[ioredis] Unhandled error event: Error: connect ECONNREFUSED 127.0.0.1:6379when i do:docker exec -it ed02b7e19810 ping test_redis_1i've received all packets.also the redis container declares:* Running mode=standalone, port=6379* Ready to accept connections( i get the WARNINGS but i don't think its related:Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.confWARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128this is my docker-compose.yaml:version: '3' services: test-service: build: . volumes: - ./:/usr/test-service/ ports: - 5001:3000 depends_on: - redis redis: image: "redis:alpine"DockerFileFROM node:8.11.2-alpine WORKDIR /usr/test-service/ COPY . /usr/test-service/ RUN yarn install EXPOSE 3000 CMD ["yarn", "run", "start"]app.jsconst Redis = require('ioredis'); const redis = new Redis(); redis.set('foo', 'bar'); redis.get('foo').then(function (result) { console.log(result); });i've also tried withredispackage but still can't connect:var redis = require("redis"), client = redis.createClient(); client.on("error", function (err) { console.log("Error " + err); });getting:Error Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
can't connect to redis through node app, both in dockers
I ended up using this project - Ofelia (https://github.com/mcuadros/ofelia) - so you just add it to your docker-compose setup and have a config like:
[job-exec "task name"]
schedule = @daily
container = myprojectname_1
command = python ./manage.py clearsessions
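To wire that into the same Compose file, a hedged sketch looks like the following; the image name, config path and daemon flag follow the project's README at the time of writing, so double-check them against the current docs before relying on this:
  ofelia:
    image: mcuadros/ofelia
    command: daemon --config=/etc/ofelia/config.ini
    volumes:
      # needed so ofelia can exec commands inside the other containers
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./ofelia.ini:/etc/ofelia/config.ini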
What's the best practices for running periodic/scheduled tasks ( like manage.py custom_command ) when running Django with docker (docker-compose) ?f.e. the most common case -./manage.py clearsessionsDjango recommends to run it with cronjobs...But Docker does not recommend adding more then one running service to single container...I guess I can create a docker-compose service from the same image for each command that i need to run - and the command should run infinite loop with a needed sleeps, but that seems overkill doing that for every command that need to be scheduledWhat's your advice ?
Django + docker + periodic commands
This works for me nicely:FROM microsoft/dotnet:2.1.300-sdk RUN apt-get update && apt-get install -y openjdk-8-jre RUN dotnet tool install --global dotnet-sonarscanner --version 4.3.1 COPY SonarQube.Analysis.xml /root/.dotnet/tools/.store/dotnet-sonarscanner/4.3.1/dotnet-sonarscanner/4.3.1/tools/netcoreapp2.1/any/SonarQube.Analysis.xml ENV PATH="/root/.dotnet/tools:${PATH}" RUN dotnet sonarscanner begin /k:project-key RUN dotnet build RUN dotnet sonarscanner endObviously, it needs to be build in a context withSonarQube.Analysis.xmlfile present.
I would like to run SonarQube analysis in a Linux container usingtheir new supportfor dotnet global tools. I wonder though where is configuration (server URL, user credentials) located in such case?
How to run a SonarQube analysis of .NET Core solution in a Linux container?
Update 2023-11-29: DDEV uses the ddev-phpmyadmin add-on, and has had https support for PHPMyAdmin for years now,ddev describewill show you the URL. As explained below by @HEYDANNY, you install PhpMyAdmin withddev get ddev-phpmyadmin && ddev restartand can launch it withddev phpmyadmin, and it works fine with https.
I am using DDEv and Docker with Windows 10 pro to set up a localhost install of drupal 8.8 using Composer. I have set up and configured the local drupal installation (it is a fresh install) and it appears to be running correctly, but in the admin section of the drupal site I receive a warning to change write permissions of sites/default/settings.php.I tried to change settings using Filezilla, but it appears that local files in Filezilla do not provide access to write permissions? When I right-click the file in Filezilla, no permissions option appears.Following troubleshooting tips from ddev, I tried to access phpmyadmin athttps://mysitename.ddev.site:8036Instead of loading phpmyadmin, I got the following error message:Secure Connection FailedAn error occurred during a connection to dmckimep.ddev.site:8036. SSL received a record that exceeded the maximum permissible length.Error code: SSL_ERROR_RX_RECORD_TOO_LONGThe page you are trying to view cannot be shown because the authenticity of the received data could not be verified. Please contact the website owners to inform them of this problem.I've been searching around for a couple of hours now and do not find a solution to this. I ran ddev describe and all seems fine with the installation. The drupal site in the container seems to run okay. There are no port conflicts present so far as I have found, so I am not sure why I cannot get access to phpmyadmin.I am a relative newbie in terms of skills, but have successfully maintained drupal 4-7 on localhost with XAMPP and my web host. Now I am wrestling with the move to drupal 8/composer/docker/ddev. Any suggestions would be much appreciated.Thank you!
How to access phpmyadmin on DDEV Windows 10 pro localhost with SSL record too long error
According to the official documentation:
To confirm successful installation of both a hypervisor and Minikube, you can run the following command to start up a local Kubernetes cluster:
minikube start --vm-driver=<driver_name>
For setting the --vm-driver with minikube start, enter the name of the hypervisor you installed in lowercase letters where <driver_name> is mentioned below. A full list of --vm-driver values is available in the specifying the VM driver documentation.
So in your case it would be:
minikube start --vm-driver=<driver_name>
If you want to make sure your previous steps were correct, you can go through the whole tutorial. Please let me know if that helped.
EDIT: There is a Github thread showing the same issue. Basically you should still use minikube start --vm-driver=<driver_name>, but it will not work with v1.6.0 yet. Consider downgrading to v1.5.2 instead.
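Since the log in the question shows minikube already selecting the virtualbox driver, the concrete command would presumably be (on minikube v1.5.2 rather than v1.6.0, per the linked thread; deleting the broken profile first is optional but often needed after a failed start):
minikube delete
minikube start --vm-driver=virtualbox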
I have installedkubectlandminikubeon my windows environment, but when runningminikube startit creates the VM on vitualBox but I got this error when it trying to prepare kubernetes on Docker.C:\Users\asusstrix>minikube start * minikube v1.6.0 on Microsoft Windows 10 Home 10.0.18362 Build 18362 * Selecting 'virtualbox' driver from user configuration (alternates: []) * Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ... * Preparing Kubernetes v1.17.0 on Docker '19.03.5' ... * X Failed to setup kubeconfig: writing kubeconfig: Error writing file C:\Users\asusstrix/.kube/config: error acquiring lock for C:\Users\asusstrix/.kube/config: timeout acquiring mutex * * Sorry that minikube crashed. If this was unexpected, we would love to hear from you: - https://github.com/kubernetes/minikube/issues/new/choose
Failed to setup kubeconfig when starting minikube
Visual Studio can't find thedockercommand on your local computer. It needs this as a client to connect to the docker daemon on your Linux server. The easiest way to do this is to install Docker Toolbox from here:https://www.docker.com/products/docker-toolboxYou may have to uninstall and re-install "Visual Studio 2015 Tools for Docker" or manually add to Powershell's$env:Pathif the docker command still can't be found.Also, your Image Name must not contain uppercase characters. Usedockerdemorather thanDockerDemo.
I have installed docker engine on a Linux server. On my desktop's Visual Studio 2015, I created an asp.net application. Now I want to publish it to the Linux server and create a docker image.I followed thisstep.I don't have an azure account and I want to use my own Linux server. So next, I clicked theDocker Containers. The interface became:Then I clickedCustom Docker Hostand pressed OK button.The interface wasNow I input the image name asDockerDemo. Also I type the server url something liketcp://12.16.45.56:8080. Validate connection is okay then go to the next step.Finally I get this:However I get an error during publish.Severity Code Description Project File Line Suppression State Error An error occured during publish. The command [docker -H tcp://12.16.45.56:8080 build -t DockerDemo -f "C:\Users\me\AppData\Local\Temp\PublishTemp\DockerDemo63\approot\src\DockerDemo\Dockerfile" "C:\Users\me\AppData\Local\Temp\PublishTemp\DockerDemo63"] exited with code1: 'docker' is not recognized as an internal or external command, operable program or batch file. Please visithttp://go.microsoft.com/fwlink/?LinkID=529706for troubleshooting guide. DockerDemo 0By the way, the framework I am using is:"frameworks": { "dnx451": { }, "dnxcore50": { } }Thanks for help!
How to deploy asp.net application to docker container on Linux server?
TheDockerfile for that image(three and a half years old and not getting any updates!) has the line:VOLUME /var/www/htmlThis will prevent any subsequent RUN instructions from making any changes to that directory, even if it's in a derived image.There's no way to un-VOLUME a directory, so if you need this symlink to exist, you either need to create a new image from a base PHP image installing the software yourself, or wrap its entrypoint script with your own that creates the symlink at startup time.
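A sketch of that entrypoint-wrapper approach; the wrapper file name is illustrative, and the linked file must actually exist in the image or in a mounted volume:
# docker-entrypoint-wrapper.sh
#!/bin/sh
set -e
ln -sf /var/www/html/some_file /var/www/html/another_file
# hand off to whatever the image would normally run
# (if the base image has its own entrypoint script, exec that here instead of "$@")
exec "$@"

# Dockerfile
FROM piwik:3.2.1-apache
COPY docker-entrypoint-wrapper.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint-wrapper.sh
ENTRYPOINT ["docker-entrypoint-wrapper.sh"]
CMD ["apache2-foreground"]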
In my Dockerfile I have this line:RUN ln -s /var/www/html/some_file /var/www/html/another_fileWhen running docker build all the steps are executed including the creation of the symbolic link, but when I start a container using the image created and check the folder/var/www/html/I don't see the link there. I tried searching online if this is something supported by docker and couldn't find an answer. The content of the container is already available by another container image I am using with theFROMinstruction, so the file/var/www/html/some_fileis not on my machine.No Volumes are involved. This is the Dockerfile:FROM piwik:3.2.1-apache RUN apt update RUN ln -s /var/www/html/some_file /var/www/html/another_file CMD [ "apache2-foreground" ]
Symlink command in Dockerfile doesn't create the link in the container
With docker 1.3 there is a new command, docker exec. This allows you to enter a running container:
docker exec -it <container_id> bash
BackgroundI had build a npm server(sinopia) docker image(https://github.com/feuyeux/docker-atue/blob/master/docker-images/feuyeux_sinopia.md), and in the CMD line, it will run the start.sh every time when the container is generated.CMD ["/opt/sinopia/start.sh"]This shell will create a yaml file dynamically.sed -e 's/\#listen\: localhost/listen\: 0.0.0.0/' -e 's/allow_publish\: admin/allow_publish\: all/' /tmp/config.yaml > /opt/sinopia/config.yamlQuestionI wish I could edit this config.yaml when the container is running, because I hope the content should be changed on demand.see the snapshot photoAs shown above, the first line runs asinopiacontainer, and in this container, there's /opt/sinopia/config.yaml. But I don't know how to obtain this running container and edit and check this file. If I did as the line ofsinopia-ls, there's a new container runs instead of the before running one.Thanks guys!Answer(details please see below what I accepted)sudo nsenter --target $PID --mount --uts --ipc --net --pid root@58075317e47d:/# ls /opt/sinopia/ config.yaml config_gen.js start.sh storage root@58075317e47d:/# cat /opt/sinopia/config.yaml
How to edit a file dynamically in a running docker container
You can't run a container from another container using Fargate. Running a container from another one, as in your case, would require access to the docker daemon, and access to the docker daemon means root access to the host machine. This breaks the docker container isolation and is unsafe. Depending on your usage, I suggest you use an EC2 instance, use CodeBuild, or build an operator that talks to the API to spawn containers.
[Edit]: It seems that there is an open issue on this topic: [ECS,Fargate]: Support for building Docker containers #95
I created a task definition on Amazon ECS and want to run in with Fargate. I set up my task, network mode is awsvpc. I created a new container with a docker image (simple "Hello world" project) on Amazon ECR. Run the task - everything works fine. Now I need to run a docker container from hub.docker.com as a part of the taskDockerfileFROM ubuntu RUN apt-get update && apt-install ... ADD script.sh /script.sh RUN chmod +x /script.sh ENTRYPOINT ["/script.sh"]script.sh#!/bin/bash ...prepare data docker run -rm some_container_from_docker_hub ...continue process dataInitially, I got "command not found" error. OK, I installed docker into my image. Now I've got "Cannot connect to the Docker daemon". My question: is there any way to run a docker container inside of another docker container on Amazon Fargate?
Run docker inside of docker on AWS Fargate
You have to configure each program (container) in a separate file, and these files must be placed in the /etc/supervisor/conf.d/ folder, which is where supervisor looks for programs. In your case I propose:
# This is redis.conf
[program:redis]
command= /bin/bash -c "fig up redis" "fig logs redis"
directory=/path/of/fig_file
autostart=true
autorestart=true
stdout_logfile=/path/to/log/redis.log
redirect_stderr=true
And for pg:
# This is pg.conf
[program:pg]
command= /bin/bash -c "fig up pg" "fig logs pg"
directory=/path/of/fig_file
autostart=true
autorestart=true
stdout_logfile=/path/to/log/pg.log
redirect_stderr=true
And the same configuration (mongo.conf and app.conf) for the other programs (mongo and app). When you boot or restart your machine, each program will be brought up. In the example above you run the container and keep it alive because you follow its logs. You can check the state of each program with:
sudo supervisorctl
And see:
app RUNNING pid 17036, uptime 0:22:28
mongodb RUNNING pid 17018, uptime 0:22:29
pg RUNNING pid 17030, uptime 0:22:28
redis RUNNING pid 17019, uptime 0:22:29
good luck!!
I have a fig configuration for launchNdockers containers (app, redis, mongo, postgre, etc...)When I runfig upeverything is ok.Name Command State Ports -------------------------------------------------------------------------- my_mongodb_1 /usr/local/bin/run Up 28017/tcp, 27017/tcp my_redis_1 /usr/local/bin/run Up 6379/tcp my_pg_1 /usr/local/bin/run Up 5432/tcp my_app_1 ... Up 443->443/tcp, 80->80/tcpbut for one not important reason one of this containers could be turned off.Name Command State Ports -------------------------------------------------------------------------- my_mongodb_1 /usr/local/bin/run Up 28017/tcp, 27017/tcp my_redis_1 /usr/local/bin/run Exit 6379/tcp my_pg_1 /usr/local/bin/run Up 5432/tcp my_app_1 ... Up 443->443/tcp, 80->80/tcpIs possible to configuratesupervisordfor monitoring all containers and start the container which has been turned off
fig docker monitoring broken container
You can first change the restart policy with docker container update:
docker container update --restart="no" <container>
and then continue with:
docker container stop <container>
Restart policies (--restart):
no: Do not automatically restart the container when it exits. This is the default.
on-failure[:max-retries]: Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.
always: Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.
unless-stopped: Always restart the container regardless of the exit status, including on daemon startup, except if the container was put into a stopped state before the Docker daemon was stopped.
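A concrete run-through with a hypothetical container named web (the name is only for illustration):
docker container update --restart=no web
docker container stop web
# confirm the policy change and the final state
docker inspect --format '{{.HostConfig.RestartPolicy.Name}} {{.State.Status}}' web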
I would like to stop a container which is failing to restart (it is in status Restarting). The container has restart=always. Doing:
docker stop <container>
seems to succeed (no error message), but the container is restarted anyway. The same command actually stops containers with restart=always which have restarted normally.
If I try to kill the container:
docker kill <container>
I get the message: container is not running (which is true).
Removing the container works:
docker rm <container>
The container will not restart, since it does not exist anymore. But this is not what I wanted: I only wanted it to stop restarting.
How can I stop a failing, restarting container, without removing it?
Stop a failing container with restart=always
After two weeks of research, I finally stumbled upon a solution for this. The problem is related to the network - that much was obvious - but it is specifically about how containers are isolated from one another: the container had no outbound connection. A solution that works for a standalone container is to use the --network host parameter, which exposes the host network to the container. Note that using this removes the port mapping from the container, since the container's port 5000 is now bound directly to the host's port 5000. Hope this solution can help others.
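If exposing the whole host network feels too broad, another option that sometimes resolves this class of DNS failure - hedged, since it depends on why outbound resolution was failing in the first place - is to keep the normal bridge network but hand the container explicit resolvers (the image name here is a placeholder for the API image from the question):
docker run --dns 8.8.8.8 --dns 1.1.1.1 -p 5000:5000 your-api-image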
First, for some context: I am using .NetCore to develop an API with Identity. Everything is on a Cloud server, inside a Docker. When a user is created, an email is sent to the new User using a mailkit and the webmail server through Plesk (Hosted on the same machine). The docker is accessed via a redirection trough Apache using a ProxyPass from the subdomain to the port on localhostEverything works great while debugging trough JetBrain's Rider, but it is not able to process the email in the docker on the server.Here is the stack:System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (00000001, 11): Resource temporarily unavailable at System.Net.Dns.InternalGetHostByName(String hostName) at System.Net.Dns.ResolveCallback(Object context) --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw(Exception source) at System.Net.Dns.HostResolutionEndHelper(IAsyncResult asyncResult) at System.Net.Dns.EndGetHostAddresses(IAsyncResult asyncResult) at System.Net.Dns.<>c.b__25_1(IAsyncResult asyncResult) at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)I have yet to try and run the docker on another linux machine to test. One of my current guess would be a problem with the SSL certificate, but I don't think it would cause a problem with the DNS or any internal socket.Another guess is thats its a problem for the Docker to get the DNS Hostname, but since it works ok in a local.Edit: I tried multiple time to run the docker on the mac and the error is still triggered once in a while but not always. It is although always triggered on the server and never send the email
Dns.GetHostAddressesAsync: Resource temporarily unavailable
Just figured out how to solve this, nowElastic Beanstalk supports running a privileged containersand you just need to add the"privileged": "true"to yourDockerrun.aws.jsonas the following sample (please take a look at thecontainer-1):{ "AWSEBDockerrunVersion": 2, "containerDefinitions": [{ "name": "container-0", "image": "ubuntu", "memory": "512" }, { "name": "container-1", "image": "ubuntu", "memory": "512", "privileged": "true" }] }
I have tried numerous different ways to include the privileged flag in my task definition per the task definition documentation here:http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_securityI have also found forum postings over at amazon here:https://forums.aws.amazon.com/thread.jspa?threadID=180014&tstart=0&messageID=687696#687696where the amazon employee "ChrisB@AWS" said "ECS now supports privileged mode."I have successfully launched privileged containers on ECS using the aforementioned privileged key/val in the task definition and can confirm using docker command on the ec2 host. However the same task definition stanza does not prove successful on an elastic beanstalk multi-container solution stack host.I see a ~year old post on the amazon forum specifically about support in elasticbeanstalk here:https://forums.aws.amazon.com/thread.jspa?messageID=687694򧹎where amazon employee "DhanviK@AWS" says: "EB does not yet cleanly support the privileged mode of docker execution. We'll take this into account as feedback as we continue to release the next versions of our docker containers."I also see some old discussion from last april on github here:https://github.com/awslabs/eb-docker-virtual-hosting/issues/1where they say it's not supported on ECS. But clearly it has been implemented their at this point per my experiment above.So what gives? If EB multi-conatiner solution stack simply wraps the ECS service why can't my privileged flag be accepted by the ecs agent when passed from elasticbeanstalk? Is elasticbeanstalk simply deleting the flag before it get's to the ecs agent? If so that is wack. Can anyone shed any light on this?UPDATE: I found this SO question that pertains to the single container elasticbeanstalk solution stack. This is not what I am using. I am using the multi-container solution stack.How can I run a Docker container in AWS Elastic Beanstalk with non-default run parameters?
Is it possible to launch privileged docker containers on Amazon elasticbeanstalk?
Bind mounts with that syntaxalwaysoverwrite the files that are present in the image. This acts the same way as the normal Linuxmount(8) command: if you mount something like a USB disk over part of your source directory, the contents of the mounted device hide what was originally in the filesystem, and all reads and writes use the mounted device instead.When the container starts up, this means it can look to see if its data directory is empty, and if it is, install some initial data there. You cite theDocker Hubwordpressimage; that has anentrypoint scriptthat has an explicit checkif [ ! -e index.php ] && [ ! -e wp-includes/version.php ]; then echo >&2 "WordPress not found in $PWD - copying now..." # ... some code that creates sourceTarArgs and targetTarArgs ... tar "${sourceTarArgs[@]}" . | tar "${targetTarArgs[@]}" echo >&2 "Complete! WordPress has been successfully copied to $PWD" fiThemysqlimage has a similar check to see if its data directory is empty. If it is, it does its first-time initialization, including processing the/docker-entrypoint-initdb.ddirectory; if it isn't, it assumes there is pre-existing data there and completely skips the initialization step.In general you should try to reserve bind mounts and similar volumes for actual data, and not copy your code there. In the case of thewordpressimage, since it makes a copy of the application in the volume, it's not totally obvious what would happen if you tried to upgrade the underlying image: the volume takes precedence and the upgrade could get ignored.Named Docker volumes (as distinct from bind mounts) will copy content from the underlying image, butonlyif it's a named volume and not some other kind of mount,onlyon Docker proper (not in Kubernetes), andonlyif the volume is totally empty (it's the very first time you've run the container). Avoid relying on this behavior, since it's not especially portable and ignores updates in the underlying image.
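As a small illustration of the distinction (docker-compose syntax, names arbitrary): the bind mount always shows exactly what is in ./wp on the host, while the commented-out named-volume alternative is seeded from the image only the first time the volume is created and then persists on its own:
services:
  wordpress:
    image: wordpress:latest
    volumes:
      # bind mount: host directory always wins, even if it is empty
      - ./wp:/var/www/html
      # named volume alternative: copied from the image once, then kept as-is
      # - wp_data:/var/www/html
volumes:
  wp_data: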
Let's use thisdocker-compose.yml:version: '2' services: db: image: mysql:5.7 volumes: - ./mysql:/var/lib/mysql # <- important restart: always environment: MYSQL_ROOT_PASSWORD: somewordpress MYSQL_DATABASE: wordpress MYSQL_USER: wordpress MYSQL_PASSWORD: wordpress wordpress: depends_on: - db image: wordpress:latest volumes: - ./wp:/var/www/html # <- important ports: - "8000:80" restart: always environment: WORDPRESS_DB_HOST: db:3306 WORDPRESS_DB_USER: wordpress WORDPRESS_DB_PASSWORD: wordpressI noticed that:When doingmkdir wp docker-compose up # create a basic Wordpress website from the browser # stop the containers from the command-line with CTRL+Cthen,./wp/(initially empty) is filled with the new Wordpress files (**). This is normal.Then let's dodocker rm wordpress_1 db_1 # remove the existing containers but keep # ./wp/ as it has been created in the previous step (**) docker-compose upDuring the re-creation of the containers,./wp/is not overwritten with new Wordpress files, instead the previous files from previous step (**) are kept! Why?How does it magically know thatnewWordpress files shouldnotbe written, but that instead, the previous files should be kept?Question: How doesdockerdecide if/hostdir/:/containerdir/listed involumes:should override the files which are already present in the original docker image, or not?
How does "volumes" override the docker image's original files with docker-compose?
I've had a similar issue and ended up building my own binaries with no dependencies using the alpine gcc toolchain that supports building "static" PIE binaries. The reason was that I wanted no dependencies, hardened build and also support ASLR.https://hub.docker.com/r/mwader/static-ffmpeg/
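If you go the prebuilt-static-binaries route, that image's README shows copying the binaries in a multistage build roughly like this (pin a real tag instead of latest, and verify the in-image paths against the current README):
FROM mwader/static-ffmpeg:latest AS ffmpeg
FROM keymetrics/pm2:8-alpine
COPY --from=ffmpeg /ffmpeg /usr/local/bin/
COPY --from=ffmpeg /ffprobe /usr/local/bin/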
I'm trying to install ffmpeg via a multistage docker buildHere is the ffmpeg image that contains the ffmpeg binariesFROM jrottenberg/ffmpegHere is the pm2 image that I need to run my web serverFROM keymetrics/pm2:8-alpineI copy the bins into the current image, and I can see that ffmpeg, ffserver, and ffprobe all exist in /usr/local/bin.COPY --from=0 /usr/local /usr/localThe copy command appears to succeed, since those files exist when I run the container interactively.$# which ffmpeg /usr/local/bin/ffmpegHowever, when I try running the bins, it says the command isn't found.$# ffmpeg --version /bin/sh: ffmpeg: not found
Copy ffmpeg bins in multistage docker build
Stealing from this answer for keeping .NET apps alive, you can wait without using Console, which means you don't need to keep stdin open in Docker (docker run -i):
private ManualResetEvent Wait = new ManualResetEvent(false);
Wait.WaitOne();
Docker will send a SIGTERM on docker stop and a ctrl-c will send a SIGINT, so those signals should be trapped, and then you let the program end with Wait.Set();
I have a console application written in C# using servicestack that has the following form:static void Main(string[] args) { //Some service setup code here Console.ReadKey(); }This code works fine when run on windows as a console. The implementation is almost exactlyhttps://github.com/ServiceStack/ServiceStack/wiki/Self-hostingas this is a test projectI then compile this project using mono on linux and build into a docker file.I have no problems running up a container based on this image if it is interactivedocker run -it --name bob -p 1337:1337 The container runs in the foregroundHowever, if I omit the -it switch, the container exits straight away - I assume because there is no STDIN stream, so the Console.ReadKey() doesn't work.I am trying to get the service hosted in a swarm, so there is no notion of detached. I can spin up a loop within my main method to keep the console service alive, but this seems hacky to me...Is there a good way of keeping my service alive in the situation where I want to run my container detatched (docker run -d...)
Keep a self hosted servicestack service open as a docker swarm service without using console readline or readkey
Docker containers do not support named instances; this is mentionedhere:There is no concept of a named instance. Every container can have a unique name....Containers don't have a concept of running multiple SQL Server instances. So there is no option of running more than one instance name.Really, you should change the connection string. An alternative if you can't change the connection string (but you really should change the connection string) is to create an alias on each client using their local client network utility or configuration manager.For example, you can create an alias on the client that points toHostName\InstanceNamebut, underneath, the mapping would actually redirect toHostName,2700(assuming2700is the port you specified indocker run ... -p 2700:1433 ...).Aliases are talked about more thoroughlyhereand I talk about using custom and specific ports for Docker containershere.Did I mention it is much more logical to change your connection strings?Thatis probably the problem you want to fix.
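For example, following the pattern described above (the port, password and tag are placeholders - adjust to your setup), you publish the container on a fixed host port and connect with host-comma-port syntax, or hide that behind a client alias:
docker run -d -p 2700:1433 -e ACCEPT_EULA=Y -e SA_PASSWORD='YourStrong!Passw0rd' mcr.microsoft.com/mssql/server:2019-latest
sqlcmd -S HostName,2700 -U sa -P 'YourStrong!Passw0rd'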
How can I run a named SQL Server instance inside a Docker container?I have an application that has a connection string pointing to a named SQL Server instance, something likeData Source=HostName\InstanceName; this connection string is very problematic for me to change. I want to dockerize that SQL Server instance. I already configured it so that I can connect to the dockerized instance viasqlcmdusingsqlcmd -S HostNamebut when usingsqlcmd -S HostName\InstanceName(which should be equivalent to the connection string this application is using) it does not establish a connection.
SQL Server named instance in Docker
As mentioned in the previous answer, it isn't best practice to keep media and static inside your project or app directory; rather, you should use a file server or a file storage service. But I will try to answer your question here anyway.
Suppose you have a Django project directory named root, and inside it you are managing the media and static folders. As you are using Docker, every time your container is recreated it wipes out the contents of both these folders. So what you have to do is mount the media and static folders from inside your container onto local storage, i.e. /var/lib/docker/volumes/{volume_name}/_data, to persist your media and static files between restarts of your container.
I am describing the docker-compose version here:
version: "3.8"
services:
  app:
    image: {your django project image}
    #build: {direct build of your Dockerfile}
    volumes:
      - media:/src/media/
      - static:/src/static/
volumes:
  media:
  static:
My goal here is to point out the volume mount, so in the code above you have to define volumes this way, where /src is the working directory defined in your Dockerfile using the WORKDIR directive. In my case media and static are direct children of the src folder. Now you can run docker volume ls to see your volume names, and with a name you can inspect a volume using docker volume inspect {volume_name}. Usually you will find your volumes, i.e. media and static, here:
/var/lib/docker/volumes/{project_name}_media/_data
/var/lib/docker/volumes/{project_name}_static/_data
Hope this clears up the question.
I am using Django as a Web Framework. Azure NGINEX as WebServer. My Project is deployed with Docker Containers.In my Django project root structure will be as follows:root: - app1 - app2 - mediawhenever saving images, it will correctly save under media folder. But whenever doing "docker-compose up" it will replaces the source code, so that my media folder will be cleaned up everytime.In my settings.py file, I have added as follows:MEDIA_ROOT = os.path.join(BASE_DIR,'media') MEDIA_URL = 'media/'Kindly help me to maintain the media files with Docker based Environment
Maintain the media files after build with Docker Container - Django
Don't think you can specify memory constraints in the Dockerfile yet. So the way to do it is to override your entrypoint at the command line:
$ docker run -i -t --memory "100m" --entrypoint "java -Xmx`cat /sys/fs/cgroup/memory/memory.limit_in_bytes` -jar helloworld.jar" example/java-hello
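Note that --entrypoint only accepts the executable itself (arguments go after the image name), and backticks inside the quoted string are expanded by the shell on the host rather than in the container, so in practice the limit is usually read by a small wrapper script baked into the image. A hedged sketch, with illustrative file names; on cgroup v2 hosts the limit lives in /sys/fs/cgroup/memory.max instead:
# entrypoint.sh
#!/bin/sh
LIMIT=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
exec java -Xmx"$LIMIT" -jar /helloworld.jar

# Dockerfile
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Newer JVMs can also size the heap from the cgroup limit themselves via -XX:MaxRAMPercentage, which avoids the script entirely.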
Is it possible to get the max memory of a docker container at runtime?What I want to achieve is:docker run --memory "100m"and access the max memory in the docker file:ENTRYPOINT ["java", "-Xmx$memory", "-jar", "helloworld.jar"]
Get memory limit in docker file?
I've managed to fix the issue. The problem was caused by openssl versions. Both my windows 10 pc and ubuntu 18.04 vm run an older version that had no problem connecting to the website. The python docker images contain a newer version of openssl that refused to connect.
I'm trying to setup a python script that uses the requests library to get data from a website. The script works without issues running in a virtual environment on my windows 10 pc or on a azure vm.However, when I try to create a docker container using thepython:3.6-slimimage I get DH_KEY_TOO_SMALL errors. Testing the website on ssllabs.com revealed that it supports weak DH key exchange parameters. What could be causing this error and how can I fix it?
Docker python requests results in DH KEY TOO SMALL error
Ok, I got the solution from here: https://forums.docker.com/t/ip-address-for-xdebug/10460/9
I had to set my internal IP as xdebug.remote_host and disable remote connect back with xdebug.remote_connect_back=0. Seems this is an OSX thing. Hope this helps someone here.
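Applied to the ext-xdebug.ini from the question, that change would look roughly like this (10.254.254.254 is a placeholder - substitute the internal/host IP that actually works on your machine):
xdebug.remote_enable=1
xdebug.remote_port=9000
xdebug.remote_autostart=1
xdebug.idekey=PHPSTORM
xdebug.remote_connect_back=0
xdebug.remote_host=10.254.254.254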
I've read some posts about this but none helped in my case or simply overlooked the missing piece.I cannot get xdebug to work on PhpStorm using a Docker container.Docker-compose.ymlversion: '2' services: web: image: nginx:latest volumes: - .:/usr/share/nginx/html - ./nginx/nginx.conf:/etc/nginx/nginx.conf - ./nginx/logs:/var/logs/nginx - ./nginx/site-enabled/default.conf:/etc/nginx/sites-enabled/default.conf ports: - "80:80" depends_on: - php db: image: mysql:5.7 environment: MYSQL_ROOT_PASSWORD: 1234 MYSQL_DATABASE: local_db MYSQL_USER: root MYSQL_PASSWORD: 1234 ports: - "3306:3306" php: build: images/php volumes: - .:/usr/share/nginx/html - ./config/docker/php/php.ini:/usr/local/etc/php/php.ini - ./config/docker/php/ext-xdebug.ini:/usr/local/etc/php/conf.d/ext-xdebug.ini - ./config/docker/php/php-fpm.conf:/usr/local/etc/php-fpm.conf user: www-data depends_on: - dbconfig/docker/php/ext-xdebug.inizend_extension="/usr/lib/php7/xdebug.so" xdebug.remote_enable=1 xdebug.remote_port=9000 xdebug.overload_var_dump=1 xdebug.default_enable=1 xdebug.remote_autostart=1 xdebug.idekey=PHPSTORM xdebug.remote_connect_back=1 xdebug.remote_host=172.20.0.1 # ip of host inside docker container xdebug.remote_log=/usr/share/nginx/html/xdebug.logerror from xdebug.logLog opened at 2017-05-31 11:01:14 I: Checking remote connect back address. I: Checking header 'HTTP_X_FORWARDED_FOR'. I: Checking header 'REMOTE_ADDR'. I: Remote address found, connecting to 172.20.0.1:9000. W: Creating socket for '172.20.0.1:9000', poll success, but error: Operation now in progress (29). E: Could not connect to client. :-( Log closed at 2017-05-31 11:01:14In PhpStorm I'm using remote debugger with following settings:serverHost - 127.0.0.1 Port - 80Absolute path on server/usr/share/nginx/htmlIDE session keyPHPSTORM
Using xdebug through Docker container in PhpStorm
I had the same problem. This link solved it for me: https://improveandrepeat.com/2019/09/how-to-fix-network-errors-with-docker-and-windows-containers/
My default Ethernet adapter didn't have the lowest metric. Check with:
Get-NetIPInterface -AddressFamily IPv4 | Sort-Object -Property InterfaceMetric -Descending
Set with:
Set-NetIPInterface -InterfaceAlias 'Ethernet' -InterfaceMetric 4
VS2019, created a brand new mvc app with Windows Docker support.Dockerfile contents (created from template):FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-nanoserver-1809 AS base WORKDIR /app EXPOSE 80 FROM mcr.microsoft.com/dotnet/core/sdk:2.2-nanoserver-1809 AS build WORKDIR /src COPY ["mvc1.csproj", "mvc1/"] RUN dotnet restore "mvc1/mvc1.csproj" COPY . . WORKDIR "/src/mvc1" RUN dotnet build "mvc1.csproj" -c Release -o /app FROM build AS publish RUN dotnet publish "mvc1.csproj" -c Release -o /app FROM base AS final WORKDIR /app COPY --from=publish /app . ENTRYPOINT ["dotnet", "mvc1.dll"]When I execute:docker build -t mvc1 .I get the following errors:C:\Program Files\dotnet\sdk\2.2.401\NuGet.targets(123,5): error : Unable to load the service index for sourcehttps://api.nuget.org/v3/index.json. [C:\src\mvc1\mvc1.csproj] C:\Program Files\dotnet\sdk\2.2.401\NuGet.targets(123,5): error : No such host is known [C:\src\mvc1\mvc1.csproj] The command 'cmd /S /C dotnet restore "mvc1/mvc1.csproj"' returned a non-zero code: 1EDIT: I've added this line to Dockerfile:RUN ping google.comand get:Step 4/17 : RUN ping google.com ---> Running in 6633175b21a8 Ping request could not find host google.com. Please check the name and try again.EDIT 2:So, it turns out when I edit my .csproj file and remove this line: netcoreapp2.2 InProcess It DOES work. Why is that?
VS2019 Docker support and Dockerfile failing
A Docker image doesn't store the Dockerfile itself, but you can try to reverse engineer it. There are a few approaches to that:
Use docker history --no-trunc; it will display information on each layer:
> docker history golang:1.14.2-alpine3.11 --no-trunc --format="{{.CreatedBy}}"
/bin/sh -c #(nop) WORKDIR /go
/bin/sh -c mkdir -p "$GOPATH/src" "$GOPATH/bin" && chmod -R 777 "$GOPATH"
/bin/sh -c #(nop) ENV PATH=/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
/bin/sh -c #(nop) ENV GOPATH=/go
...
https://github.com/CenturyLinkLabs/dockerfile-from-image -- a project for reverse engineering a Dockerfile from a docker image. You can run it like this:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
  centurylink/dockerfile-from-image <image_id>
dive is actually a tool for debugging/optimizing docker images, but it can be helpful in this situation.
I am working on a Linux machine. I built a Docker image around 3-4 weeks ago but I don't remember where the Dockerfile is located.What's the best way to locate the Dockerfile? Is it possible to somehow get its location given the image?I tried using:$ docker image inspect but it does not show this information.
Locate Dockerfile given image
It turns out that the connection string in the base appsettings.json configuration file was not being overwritten by the environment-specific settings file (appsettings.Development.json). Our DevOps group set the environment variable for the container and it correctly connected using the SQL Server credentials.
A .Net Core 2.2 application running in a Linux Docker container fails to authenticate to SQL Server on a different machine using SQL Authentication. The error message is:Cannot authenticate using Kerberos. Ensure Kerberos has been initialized on the client with 'kinit' and a Service Principal Name has been registered for the SQL Server to allow Kerberos authentication.We have configured the connection string to use SQL Authentication (user name and password). We have tried setting trusted_connection=false, but the connection still attempts to use Kerberos authentication.The (redacted) connection string is:"server=ourfullyqualifiedserver.domain,1433;database=our-database;user=sql-user;password=sql-password;"I would expect to be able to connect to SQL Server from the container using SQL Authentication, but it is still attempting to use Kerberos. Why, and how do we make the connection use SQL Server Authentication?
.Net Core Linux Container Won't Connect to SQL Server Using SQL Authentication
Any ideas how to accomplish this? Or do I need to create a new docker image where the repo files have been added?
When specifying the Container using the YAML schema directly, the Azure DevOps service automatically runs an extra Initialize containers task before the checkout source repo task and your real tasks.
container:
  image: 'image-name'
steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script inside docker image'
During the Initialize containers task, the predefined variables like Agent.BuildDirectory, Build.Repository.LocalPath and Build.SourcesDirectory are not yet expanded (they are non-defined variables). So you can't use Build.SourcesDirectory this way, because the value of this variable is only expanded after the Initialize containers task.
1. About why the link you shared above can work: it is inside a docker task/step, so it can recognize the $(Build.SourcesDirectory) variable. (The real build tasks run after the build variables are defined.)
2. If you're using a specific Microsoft-hosted agent, you can try hard-coding the path. You can check this similar issue. Usually for a Windows-hosted agent: $(Build.SourcesDirectory) => D:\a\1\s. For an Ubuntu-hosted agent: $(Build.SourcesDirectory) => /home/vsts/work/1/s.
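Putting the hard-coded Ubuntu path into the pipeline from the question would look something like the sketch below - verify the path on your agent image first, since it can differ between agent pools:
container:
  image: 'image-name'
  endpoint: 'foo'
  options: '-v /home/vsts/work/1/s:/testing'
steps:
- script: ls /testing
  displayName: 'Repo contents are visible inside the container'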
I am running a docker image in Azure Devops yaml-pipeline using acontainer step. However, I have problems mounting the content of the repo so that this is accessible from inside the docker image.The Azure Devops pipeline.yml file is as follows:container: image: 'image-name' endpoint: 'foo' options: '-v $(Build.SourcesDirectory):/testing' steps: - script: echo Hello, world! displayName: 'Run a one-line script inside docker image'This fails with the error message:Error response from daemon: create $(Build.SourcesDirectory): "$(Build.SourcesDirectory)" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute pathI also tried replacing$(..)$with$[..](seeherebut this results in the same error. Also with${{..}}the pipeline will not even start (error: "A template expression is not allowed in this context" in the UI)If I removeoptionsthe script runs, but the repo is not mounted.For non-yaml pipelines, the question was addressedhere.Any ideas how to accomplish this? Or do I need to create a new docker image where the repo files have been add:ed?
Mount repo into docker image when running yaml-pipeline in Azure DevOps
The only way you could do this would be by having a Dockerfile.heroku file which contains:
FROM <your-dockerhub-image>
Then, in heroku.yml:
build:
  docker:
    worker: Dockerfile.heroku
With this process, Heroku will always build from source, but it will do so by pulling the image from Docker Hub and discarding everything else. There is no way to use Heroku's build system to only pull an image.
I want to create a 'Deploy to Heroku' button for an open source project. When the button is clicked, I want Heroku to deploy the latest image from Docker hub. How can I achieve this via myapp.jsonmanifest?Theapp.json schemaallows me to set"stack": "container"to specify that I want to run a container, yet all I have been able to achieve with this setting is to build the container from source, via aheroku.ymlfile.From myapp.json:"stack": "container", "formation": { "worker": { "quantity": 1 } }From myheroku.yml:build: docker: worker: DockerfileThe aboveapp.jsonandheroku.ymlfiles successfully build the container from master on deploy.How can I pull from Docker Hub on deploy, rather than building from source?
How can I run a Docker Hub container on Heroku via app.json?
You need to execute the S3 copy command via the AWS CLI or its equivalent in the boto3 Python client.
$ aws s3 cp /localfolder/localfile.txt s3://mybucket
Or the equivalent in Python:
import boto3

client = boto3.client('s3')
# upload the file's bytes, not the path string
with open('/localfolder/HappyFace.jpg', 'rb') as f:
    response = client.put_object(Body=f, Bucket='examplebucket', Key='HappyFace.jpg')
print(response)
In order for your container to have the right to upload files to S3, you need to set up an IAM role for the task (the task role that your application code assumes, as opposed to the task execution role used by the ECS agent) and assign it to your task.
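A minimal policy attached to that task role might look like this (the bucket name is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::examplebucket/*"
    }
  ]
}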
I have a containerized project, the output files are written in the local container (and are deleted when the execution completes), the container runs on Fargate, I want to write a Python script that can call the model that runs on Fargate and get the output file and upload it to an S3 bucket, I'm very new to AWS and Docker, can someone send me an example or share some ideas about how to achieve this?I think the answer by @jbleduigou makes things complicated, now I can use command to copy the file on my local machine from the container, I just need to write a script to call the model and copy this file out and upload it to S3, I know the concept but couldn't find an example.Anyone can give me an example to achieve this?
How to upload a file from a Docker container that runs on Fargate to S3 bucket?
Do you actually run docker run aldream/myApp? In that case, with the Dockerfile that you provided, it should run MongoDB, but not your app. Is there another CMD command, or another Dockerfile, or are you running docker run aldream/myApp with an extra command? In the latter case, it will override the CMD directive and MongoDB will not be started. If you want to run multiple processes in a single container, you need a process manager (e.g. Supervisor, god, monit) or you can start the processes in the background from a script; e.g.: #!/bin/sh mongod & node myapp.js & wait
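A minimal way to wire such a script into the Dockerfile from the question, assuming it is saved as start.sh next to the app, might be:

#!/bin/sh
# start.sh - start MongoDB in the background, then the Node app, and keep the container alive
/usr/bin/mongod --smallfiles &
node /src/start.js &
wait

and in the Dockerfile, the two CMD lines would be replaced with ADD start.sh /start.sh, RUN chmod +x /start.sh and CMD ["/start.sh"].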
I am trying to create a container for myNodeapp. This app usesMongoDBto ensure some data persistence. So I created thisDockerfile:FROM ubuntu:latest # --- Installing MongoDB # Add 10gen official apt source to the sources list RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list # Hack for initctl not being available in Ubuntu RUN dpkg-divert --local --rename --add /sbin/initctl RUN ln -s /bin/true /sbin/initctl # Install MongoDB RUN apt-get update RUN apt-get install mongodb-10gen # Create the MongoDB data directory RUN mkdir -p /data/db CMD ["usr/bin/mongod", "--smallfiles"] # --- Installing Node.js RUN apt-get update RUN apt-get install -y python-software-properties python python-setuptools ruby rubygems RUN add-apt-repository ppa:chris-lea/node.js # Fixing broken dependencies ("nodejs : Depends: rlwrap but it is not installable"): RUN echo "deb http://archive.ubuntu.com/ubuntu precise universe" >> /etc/apt/sources.list RUN echo "deb http://us.archive.ubuntu.com/ubuntu/ precise universe" >> /etc/apt/sources.list RUN apt-get update RUN apt-get install -y nodejs # Removed unnecessary packages RUN apt-get purge -y python-software-properties python python-setuptools ruby rubygems RUN apt-get autoremove -y # Clear package repository cache RUN apt-get clean all # --- Bundle app source ADD . /src # Install app dependencies RUN cd /src; npm install EXPOSE 8080 CMD ["node", "/src/start.js"]Then I build and launch the whole thing through:$ sudo docker build -t aldream/myApp $ sudo docker run aldream/myAppBut the machine displays the following error:[error] Error: failed to connect to [localhost:27017]Any idea what I am doing wrong? Thanks!
Docker - Node.js + MongoDB - "Error: failed to connect to [localhost:27017]"
It may very well be that port forwarding from appspot.com isn't performed, given that prior to the (relatively recent) release of managed VMs, the only traffic that went to appspot.com was on port 80 or 443. I'd suggest using the IP-of-instance method you found to work. If you don't find that fully satisfying, you should go to the public issue tracker for App Engine and post a feature request to have the appspot.com router detect whether a request is heading for a module that corresponds to a managed VM and attempt the port forwarding in that case. The thing is, putting the raw port on the end of the domain like that means that your browser will use the port you specified as a connection parameter to appspot.com, not as a query param, so appspot.com would have to listen on all ports and redirect if valid. This could be insecure/inefficient, so maybe the port number could be a query param or part of the domain string, similar to how version and module can be specified... At any rate, given the way ports work, and since your very simple example failed, I highly doubt that App Engine's appspot.com domain is currently set up to handle port forwarding to managed VM containers at all.
I'm using the Managed VM functionality to run a WebSocket server that I'd like to expose to the Internet on any port (preferably port 80) through a URL like: mvm.mydomain.comI'm not having much success yet. Here are the relevant parts of various files I'm using to accomplish this:Dockerfile:EXPOSE 8080 8081At the end of the Dockerfile, a Python app is started: it responds to health checks on port 8080 (I can verify this works) and responds to WebSocket requests on port 8081.app.yaml:module: mvm version: 1 runtime: custom vm: true api_version: 1 network: forwarded_ports: ["8081"]I deploy this app to the cloud using:$ gcloud preview app deploy .In the cloud console, I make sure TCP ports 8080 and 8081 are accepted for incoming traffic. I also observe the IP address assigned to the GCE instance (mvm:1) is: x.y.z.z.$ curl http://x.y.z.z:8080/_ah/health $ curl http://mvm.my-app-id.appspot.com/_ah/healthRepond both with200 OK.Connecting the WebSocket server using some JavaScript works as well:new WebSocket('ws://x.y.z.z:8081');So far so good. Except this didn't work (timeout):new WebSocket('ws://mvm.my-app-id.appspot.com:8081');I'd like to know why the above WebSocket command doesn't work. Perhaps something I don't understand in the GAE/GCE port forwarding interaction?If this could be made to work somehow, I envision the following would be the last steps to finish it.dispatch.yaml:dispatch: # Send all websocket traffic to the ManagedVM module. - url: "mvm.mydomain.com/*" module: mvmI also setup the GAE custom domain CNAME at mvm.mydomain.com.Connecting the WebSocket server using JavaScript should then work like:new WebSocket('ws://mvm.mydomain.com:8081');
Exposing multiple ports from within a ManagedVM
From the Docker success center: "At this time, no, Docker for Windows Server 2016 does not support GUI-based applications. This is because Windows containers are based on either Nano or Core Server, which do not allow users to start up a GUI-based interface nor RDP into the container." Concerning running Windows containers on Ubuntu, you can find other posts related to that: "Can Windows containers be hosted on Linux?" and "Linux machine with docker deploy windows container".
I would like to know if there is any way to start a container from a Windows Docker image. The idea would be to start a Windows container on my Ubuntu machine and then connect by RDP to that Windows machine. Is that possible?
Is there any way to run an image Windows Docker on Ubuntu? [closed]
Swarm doesn't give you the option to listen on a specific interface; it defaults to listening on all interfaces. This is an open issue. Modifying overlay networks inside of Docker will not change this behavior.
When I launch an app via Docker I can publish the app on a port, specifying the IP. Suppose that my server has two IPs (private 192.168.0.2 and public 200.168.0.2); I can expose an app on the private IP with this command: docker run -it -p 192.168.0.2:80:80 nginx How can I achieve something similar with docker swarm? I guess I must create a docker network layer first, but I don't understand what the right syntax is. Basically I would like to do something like this: docker network create \ --driver overlay \ --IP 192.168.0.2 \ --IP 192.167.0.1 \ private_net docker service create --replicas 2 \ --network private_net --name my-web nginx Where 192.168.0.2 and 192.167.0.1 are the IPs of the swarm cluster servers.
Docker Swarm and private IP
Immutable updates can be the way to go for you; they basically recreate the EC2 instances completely on every deploy. 1. Open the Elastic Beanstalk console. 2. Navigate to the management page for your environment. 3. Choose Configuration. 4. In the Rolling updates and deployments configuration category, choose Modify. 5. Select Immutable as the deployment policy. 6. Apply. You can check more on how it works here.
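If you prefer to keep this in the application bundle instead of clicking through the console, the same setting can (to my knowledge) be expressed in an .ebextensions config file, roughly:

# .ebextensions/immutable-deploys.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable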
I have a set of AWS Elastic beanstalk using Docker based configuration for both web server and worker server. The way we have setup is that the java process inside docker allocates 70% of the box memory when starting.Now the first deployment works fine, but when I try to update application version with in-place Rolling update, Elastic beanstalk tries to start an additional docker container with the java process before stopping the existing one. This fails the deploy as the Java server is not able to allocate the required memory. Is there a way that I can setup AWS to kill the old docker instance before starting the new one during deployment?I even tried Rolling with additional batch, but that one only works for the first batch and then fails for subsequent ones.
AWS Elastic Beanstalk - how to stop previous docker before starting new one
Just set the name at runtime, like: docker run --name MYCOOLCONTAINER alpine:latest Then: bashCommandName = "echo $NAME" output = subprocess.check_output(['bash', '-c', bashCommandName]) print output
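Note that --name alone does not create a $NAME environment variable inside the container; a common workaround (my addition, not part of the original answer, and CONTAINER_NAME is just a placeholder variable) is to pass the chosen name in explicitly and read it from the environment:

# on the host:
#   docker run --name MYCOOLCONTAINER -e CONTAINER_NAME=MYCOOLCONTAINER alpine:latest
# inside the container, in Python:
import os
print(os.environ.get('CONTAINER_NAME'))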
I need to get the container's name from within the running container, in Python. I could easily get the container ID from inside the container in Python with: bashCommand = """head -1 /proc/self/cgroup|cut -d/ -f3""" output = subprocess.check_output(['bash','-c', bashCommand]) print output Now I need the container name.
Python getting Docker Container Name from the inside of a docker container
Seems like flask is not found on the PATH. It is either not installed (is it in requirements.txt?) or just not added to the path. You could try setting CMD ["python", "-m", "flask", "run"] instead. Edit: the example here works well for me: https://docs.docker.com/compose/gettingstarted/ You could also pass the --no-cache option to build a clean image, just in case: docker build --no-cache -t test . and then run docker run test when testing the image, before moving on to docker-compose.
I'm learning docker. I try run a sample dockerfile on docker,com. But I have a problem is "Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"flask\": executable file not found in $PATH": unknown ".FROM python:3.7-alpine WORKDIR /code ENV FLASK_APP app.py ENV FLASK_RUN_HOST 0.0.0.0 RUN apk add --no-cache gcc musl-dev linux-headers COPY requirements.txt requirements.txt RUN pip install -r requirements.txt COPY . . CMD ["flask","run"]Many thanks.
Docker container build failed: "exec: \"flask\": executable file not found in $PATH": unknown
You can use docker diff container_name. This inspects changes to files or directories on a container's filesystem. It shows something like this: A /usr/local/lib/python2.7/email C /usr/local/lib/python2.7/email/mime D /usr/local/lib/python2.7/email/mime/audio.pyc A: a file or directory was added; C: a file or directory was changed; D: a file or directory was deleted. Hope this helps, good luck!
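If the files you care about live under a VOLUME (which, as the question's update notes, docker diff ignores), one workaround is to inspect the volume through a throwaway container that borrows the exited container's volumes (the container and path names below are placeholders):

# list the files the exited container left in its volume
docker run --rm --volumes-from my_exited_container busybox ls -la /path/declared/as/volume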
Is there an easy way to check what files were produced after a container exits? I saw recommendations to rewrite the Dockerfile and add ls commands to it, but that's not an easy way for me. UPDATE: I was using the VOLUME directive inside the Dockerfile, and docker diff doesn't show changes there.
List files in exited container
I had to resort to a scripted pipeline and combine all the stages: def pythons = ["2.7.14", "3.5.4", "3.6.2"] def steps = pythons.collectEntries { ["python $it": job(it)] } parallel steps def job(version) { return { docker.image("python:${version}").inside { checkout scm sh 'pip install pipenv' sh 'pipenv install --dev' sh 'pipenv run pytest --junitxml=TestResults.xml' junit 'TestResults.xml' } } } The resulting pipeline runs the three Python versions as parallel branches (shown in a screenshot in the original answer). Ideally we'd be able to break up each job into stages (Setup, Build, Test), but the UI currently doesn't support this (still not supported).
Using a declarative pipeline in Jenkins, how do I run stages across multiple versions of a docker image. I want to execute the following jenkinsfile on python 2.7, 3.5, and 3.6. Below is a pipeline file for building and testing a python project in a docker containerpipeline { agent { docker { image 'python:2.7.14' } } stages { stage('Build') { steps { sh 'pip install pipenv' sh 'pipenv install --dev' } } stage('Test') { steps { sh 'pipenv run pytest --junitxml=TestResults.xml' } } } post { always { junit 'TestResults.xml' } } }What is minimal amount of code to make sure the same steps succeed across python 3.5 and 3.6? The hope is that if a test fails, it is evident which version(s) the test fails on.Or is what I'm asking for not possible for declarative pipelines (eg. scripted pipelines may be what would most elegantly solve this problem)As a comparison, this is howTravis CI let's you specify runs across different python version.
Jenkins Pipeline Across Multiple Docker Images
While your two containers are both running and you have properly exposed the ports required for the NGINX container, you have not exposed any ports for the test-app container. The NGINX container has no way of talking to it. Exposing ports directly with docker run would likely defeat the point of using a reverse proxy in your situation. So instead, what you should do is create a network for both of your Docker containers and then add them to it. Then they will be able to communicate with one another over a bridge. For example: docker network create example docker run -d --network=example --name=test-app test-app docker run -d -p 80:80 --network=example --name=nginx-proxy nginx-proxy Now that you have both of your containers on the same network, Docker will enable DNS-based service discovery between them by container name and you will be able to resolve them from each other. You can test connectivity like so: docker exec -it nginx-proxy ping test-app. Well, that is, provided ping is installed in that Docker container.
I have my docker app running in the aws EC2 instance, and I am currently trying to map the app to the external IP address using Nginx. Here is a snap shot of the containers that I have running:My test-app is a fairly simple app that displays a static html website. And I deployed it using the following command:docker run -d --name=test-app test-appThe nginx-proxy has the followingproxy.confserver { listen 80; location / { proxy_pass http://test-app; } }Here is the Dockerfile for the nginx proxy:FROM nginx:alpine RUN rm /etc/nginx/conf.d/* COPY proxy.conf /etc/nginx/conf.d/nginx-proxy is run using the following command:docker run -d -p 80:80 --name=nginx-proxy nginx-proxyHowever, the nginx container never runs, and here the error log I get2020/03/27 15:55:19 [emerg] 1#1: host not found in upstream "test-app" in /etc/nginx/conf.d/proxy.conf:5 nginx: [emerg] host not found in upstream "test-app" in /etc/nginx/conf.d/proxy.conf:5
Docker Nginx: host not found in upstream
As I understand it, the best way is to keep the code in both containers: the one with the app code and everything else, and the one with Celery. It's useful to build something like a base image that contains almost all dependencies and the app code. Then you will be able to build both the code container and the Celery container from it. So if you need to build any other container with the code inside, just use this base image and update its Dockerfile with the appropriate processes.
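A rough sketch of that layout (image names, paths and commands are placeholders, not taken from the question):

# Dockerfile.base - dependencies plus application code, pushed as myorg/app-base
FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# Dockerfile.celery - built FROM the same base image, only the started process differs
FROM myorg/app-base:latest
CMD ["celery", "-A", "tasks", "worker", "--loglevel=info"]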
I want to make additional container for celery workers. So the structure should be the following:celery_container - Celery code_container - RabbitMQ, DB, code, everything elseI know how to organise a network, so celery is connected to Rabbit in another container.But I can't realize, should I keep my code in both containers?My tasks are done both with celery workers and synchronous. So, now I see only the option to run both containers with--volumeparam. Like this:docker run \ -tid \ -v $(pwd):/home \ --name code_container \ code_container docker run \ -tid \ -v $(pwd):/home \ --name celery_container \ celery_container
Docker. Celery and code in different containers
I had the same issue when using the latest weekly release, so I suggest using the LTS release instead, since when you specify jenkins in your command you are pulling the latest weekly image. Run your commands like this: docker pull jenkins/jenkins:lts docker run -p 8080:8080 jenkins/jenkins:lts See the jenkins image documentation.
I had launched jenkins through docker, it has been launched in administrator mode. After entering password when i selected to install suggested plugin it fails with most of the installation. Post that when i created jenkins user and navigated to jenkins home page it displays errors as shown in below screenshot.Installed docker and jenkins through below commandssudo yum install docker-ce systemctl start docker docker pull Jenkins docker run -p 8080:8080 jenkinsAlso when i go to manage jenkins and trying to install some other plugins like Git, it fails. I am not sure what is wrong with it? Why installation is failing.Below is the log being printed by jenkins while installation.Also below is the screenshot of warning mesage i am getting while installing through plugin manager
Unable to install suggested plugins of jenkins on docker
I believe you're running into trouble because the default shell run by Docker is not a login shell, according to this answer, which means scripts in /etc/profile.d/ don't get processed. If you need profile processing, try changing your last line to CMD ["/bin/sh", "-l", "-c", "php71-php-fpm"] to invoke a login shell.
I am working in a Dockerfile for PHP-FPM 7.1. I am ending the Dockerfile with the following line:CMD ["php71-php-fpm"]Because I am usingdocker-composethis is how I start up the container:docker-compose up -dThe container compiles fine (apparently) as per this lines:Successfully built 014e24455b53 WARNING: Image for service php was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`. Creating php71-fpmBut it ends with the following error:ERROR: for php Cannot start service php: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"php71-php-fpm\\\": executable file not found in $PATH\"\n" ERROR: Encountered errors while bringing up the project.I have tried the following:CMD php71-php-fpmAnd the error disappear but then the container exit with code 127:> docker-compose ps Name Command State Ports ------------------------------------------------------- php71-fpm /bin/sh -c php71-php-fpm Exit 127What I am missing here?UPDATEI have found the following answerhere:Value 127 is returned by /bin/sh when the given command is not found within your PATH system variable and it is not a built-in shell command. In other words, the system doesn't understand your command, because it doesn't know where to find the binary you're trying to call.Which makes me think that the filephp71-paths.shis not being executed so the paths are not setup properly.Once again, what I am missing?Thisphp71-fpmwill be linked with another container running Nginx (this is a WIP and my way to learn Docker)Here it's the completeDockerfilefor you to take a look.
Executable file not found in $PATH
For anyone wondering, I figured it out with a little help. The target definition inside the build part of the docker-compose.yml is NOT meant to define the target image; it defines the target stage. To specify an image, add the image entry to each of the services. Also, no blank lines between the commands inside the Dockerfile; the interpreter will stop after a blank line. Here is the corrected, working code. Dockerfile: FROM openjdk:8-alpine as x86 RUN mkdir -p /usr/src/app COPY project/generated/distributions/executable/launch.jar /usr/src/app WORKDIR /usr/src/app CMD java -jar launch.jar FROM arm32v7/adoptopenjdk:8-jre-hotspot-bionic as arm32 RUN mkdir -p /usr/src/app COPY project/generated/distributions/executable/launch.jar /usr/src/app WORKDIR /usr/src/app CMD java -jar launch.jar And docker-compose.yml: version: '3.7' services: x86: build: context: . dockerfile: Dockerfile target: x86 image: foo.bar.example:x86_64 arm32: build: context: . dockerfile: Dockerfile target: arm32 image: foo.bar.example:arm32
I am currently on the way to deploying a Java application with Docker and K8s. As I am using a Raspberry Pi Kubernetes Cluster I want to generate two images, one for the x86 platform, and one for the arm32v7 (for testing on the Raspberry cluster). The goal is to generate two differently tagged docker images with one Dockerfile and push the resulting images to Docker Hub. I use the following Dockerfile:FROM openjdk:8-alpine as x86 RUN mkdir -p /usr/src/app COPY project/generated/distributions/executable/launch.jar /usr/src/app WORKDIR /usr/src/app CMD java -jar launch.jar FROM arm32v7/adoptopenjdk:8-jre-hotspot-bionic as arm32 RUN mkdir -p /usr/src/app COPY project/generated/distributions/executable/launch.jar /usr/src/app WORKDIR /usr/src/app CMD java -jar launch.jarMydocker-compose.ymllooks like this:version: '3.7' services: x86: build: context: . dockerfile: Dockerfile target: project:x86_64 arm32: build: context: . dockerfile: Dockerfile target: project:arm32Usingdocker build .works, but results in two unnamed, untagged images. I have tried numerous things like hardcoding the path to the Dockerfile and such stuff. Despite my efforts I am getting the following error:ERROR: failed to reach build target project:x86_64Any idea is appreciated.Edit: I took the idea fromhere
Building two differently tagged docker images with docker-compose
If the instance has the correct permissions, all you need is to pass the following options to your docker run command: docker run -it --log-driver=awslogs --log-opt awslogs-region=us-west-2 --log-opt awslogs-group=myLogGroup --log-opt awslogs-create-group=true node:alpine You can check in the AWS console; you will see a log group named myLogGroup. Since you also mentioned that you are getting a timeout, verify connectivity with the command below: curl http://checkip.amazonaws.com If it's not responding, it means the instance does not have internet access and is in a private subnet.
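For reference, "correct permissions" here means the instance role needs roughly these CloudWatch Logs actions (a minimal sketch; scope the Resource down as needed):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}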
I have a container running on an EC2 instance and I would like to send my logs to CloudWatch in the same region. I was trying to use this tutorial: https://docs.docker.com/config/containers/logging/awslogs/ However, I have an issue with a connection timeout; even though the policy allows my EC2 instance to connect to CloudWatch, when I try to describe anything I don't receive any response. Do you know how to get my logs from a Docker container running on EC2 to CloudWatch? I have tried multiple tutorials but wasn't able to do it.
EC2 Docker container logs on CloudWatch
add option "-e PASSWORD=password" to set the environment variable. The set password is then the password for the jupyter login.
I run these commands in the following order to run TensorFlow in a Docker container after a successful installation on Ubuntu 16.04 (NVIDIA GPU GeForce 840M): 1. sudo service docker start 2. sudo nvidia-docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow:latest-gpu Then I try to access Jupyter in the Firefox browser by typing localhost:8888 and I am asked to enter the login password in the browser. What is the solution?
login password required to access jupyter notebook running in nvidia-docker container
Since you are using docker-compose.yml version 2, links should not be necessary. Containers within a compose network should be able to resolve other compose containers by service name. Reading the comments on your question, it seems like the networking and host name resolution work, so the problem appears to be in your web UI. I don't see you passing any type of configuration to the UI application saying where to find the API. Maybe there is a hard-coded URL to the API in your UI causing the error? Edit: Is your UI a client-side/JavaScript app? Are you sure the app isn't actually making the call from your browser? Your browser, running on your local machine and not in Docker, will not be able to resolve the badderer-api hostname.
tldr: I can't communicate with a docker composed service by its service name in order to make requests to an api running in networked containers.I have a single page application that makes requests to a json api. Its Dockerfile looks like this:FROM nginx:alpine COPY dist /usr/share/nginx/html EXPOSE 80A build process does it's thing and puts all the static assets in a dist directory which is then copied to the html directory of the nginx web server.I have a mock json api powered by json-server. Its Dockerfile looks like this:FROM node:7.10.0-alpine RUN mkdir -p /usr/src/app WORKDIR /usr/src/app COPY package.json /usr/src/app/ RUN npm install COPY . /usr/src/app EXPOSE 3000 CMD [ "npm", "start" ]I have a docker-compose file that looks like this:version: '2' services: badass-ui: image: mydocker-hub/badass-ui container_name: badass-ui ports: - "80:80" badderer-api: image: mydocker-hub/badderer-api container_name: badderer-api ports: - "3000:3000"I'm able to build both containers successfully, and am able to run "docker-compose up" with both containers running smoothly. Fetch requests from badass-ui to badderer-api:3000/users returns "net::ERR_NAME_NOT_RESOLVED". Fetch requests tohttp://192.168.99.100:3000/users(or whatever the container IP may be) work fine. I thought by using docker compose I would be able to reference the name of a service defined in docker-compose.yml as a domain name, and that would enable communication between the containers via domain name. This doesn't seem to work. Is there something wrong with my docker-compose.yml? I'm on Windows 10 Home edition, using the tools that come with the Docker Quickstart terminal for Windows. I'm using docker-compose version 1.13.0, docker version 17.05.0-ce, docker-machine version 0.11.0 and VirtualBox 5.1.20.
Docker composed services can't communicate by service name
It is not supported on the hosted agent of VSTS; check this issue: Docker images based on nanoserver-1709 not building on hosted VS2017 agent.
I'm trying to build a Docker image, which seems to build and run fine on my local machine, but it keeps failing with the following error:2018-05-06T13:56:15.2331697Z failed to register layer: re-exec error: exit status 1: output: ProcessUtilityVMImage C:\ProgramData\docker\windowsfilter\3b555fe81a5123419e06c66652d9e73adbbb17b10f52ddd9f59da3b7fb87adab\UtilityVM: The system cannot find the path specified. 2018-05-06T13:56:15.2531044Z ##[error]C:\Program Files\Docker\docker.exe failed with return code: 1It fails on the "Build an Image" step. I'm trying to use an Azure registry type.I'm trying to set up Continuous Deployment using Visual Studio Online. I selected the Hosted 2017 build agent (but have tried other ones with no success there either).My app is a .Net Core app. I think it's trying to use a Nano Server, and from what I read, that might be part of the problem (maybe the Hosted agent doesn't support the Nano Server).All of these technologies (.NET Core, Docker, Nano Server) are new to me (and probably new to mostly everyone), so I'm limited in my knowledge about them and where to start troubleshooting.Any ideas?The step of the Docker file that it's failing on is this oneFROM microsoft/aspnetcore:2.0-nanoserver-1709 AS base
Why is the "Build an image" step failing for Docker on Visual Studio Online?
One solution is to use an external shell script and use ENTRYPOINT. Contents of run.sh: #!/bin/bash echo "Input something!" read some_var echo "You wrote ${some_var}!" Contents of Dockerfile: FROM ubuntu COPY "run.sh" . RUN ["chmod", "+x", "./run.sh"] ENTRYPOINT [ "./run.sh" ] This will allow ./run.sh to run when the container is spun up: $ docker build -t test . Step 1/4 : FROM ubuntu ---> 4e2eef94cd6b Step 2/4 : COPY "run.sh" . ---> 37225979730d Step 3/4 : RUN ["chmod", "+x", "./run.sh"] ---> Running in 5f20ded00739 Removing intermediate container 5f20ded00739 ---> 41174edb932c Step 4/4 : ENTRYPOINT [ "./run.sh" ] ---> Running in bed7717c1242 Removing intermediate container bed7717c1242 ---> 554da7be7972 Successfully built 554da7be7972 Successfully tagged test:latest $ docker run -it test Input something! Test message You wrote Test message!
I want to make a Docker image that can perform the following:Get user input and store it in a local variable usingreadUtilize that variable for a later commandUsing that I have the following Dockerfile:FROM ubuntu RUN ["echo", "'Input something: '"] RUN ["read", "some_var"] RUN ["echo", "You wrote $some_var!"]which, when runningdocker build, yields the following output:Sending build context to Docker daemon 3.072kB Step 1/4 : FROM ubuntu ---> 4e2eef94cd6b Step 2/4 : RUN ["echo", "'Input something: '"] ---> Using cache ---> a9d967721ade Step 3/4 : RUN ["read", "some_var"] ---> Running in e1c603e2d376 OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"read\": executable file not found in $PATH": unknownreadseems to be a built-in bash "function" sincewhich readyields nothing. I replaced["read", "some_var"]with["/bin/bash -c read", "some_var"]and["/bin/bash", "-c", "read", "some_var"]but both yield the following:... Step 3/4 : RUN ["/bin/bash -c read", "some_var"] ---> Running in 6036267781a4 OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/bin/bash -c read\": stat /bin/bash -c read: no such file or directory": unknown... Step 3/4 : RUN ["/bin/bash", "-c", "read", "some_var"] ---> Running in 947dda3a9a6c The command '/bin/bash -c read some_var' returned a non-zero code: 1In addition, I also replaced it withRUN read some_varbut which yields the following:... Step 3/4 : RUN read some_var ---> Running in de0444c67386 The command '/bin/sh -c read some_var' returned a non-zero code: 1Can anyone help me with this?
Adding interactive user input e.g., `read` in a Docker container
Docker for Windows uses a CIFS/Samba network file share to bind-mount host files into the Linux VM running Docker. That is always done as root:root, so all bind-mounted files/dirs will always show that when seen from inside the container. This is a known limitation of the way Docker shares these files between the OS's. Workarounds: 1. In many cases, this isn't an issue. The host files are shared into the container world-readable, so local app development while running in the container is fine. For cache files, user uploads, etc., just be sure they are written into a container path that isn't the host bind-mount, so they stay in Linux where you can control the perms. 2. If needed, for development only, run the app in the container as root if it needs write permissions to host OS files. You can override this at runtime: e.g. docker run -u root, or user: root in docker-compose.yml. 3. For working with database files, don't bind-mount them, but use named volumes to keep the files in the Linux VM. You can always use docker cp to copy files in and out of volumes for a quick backup.
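As a docker-compose sketch of the last two workarounds (service names, image names and paths are placeholders):

version: '3'
services:
  app:
    image: my-php-app            # placeholder
    user: root                   # dev-only: lets the app write to host bind-mounts
    volumes:
      - ./src:/var/www/html      # code shared from the Windows host
  db:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql    # DB files stay in a named volume inside the Linux VM
volumes:
  dbdata: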
I'm using an Apache / MySql Docker-compose set up which is all good. However the issue comes when, as this is for local development, the web container points to a local folder, for which I need Apache to have permissions to.UsingRUN mkdir /www \ && chown -R apache:apache /www VOLUME ["/www"]is fine if I run the Apache dockerfile by itself or if I run it in docker-compose without specifying a volume. But this means that I can't point that volume at a local directory, in this scenario "www" exists inside the container but doesn't map to the host machine. If I specify a volume inside the docker-compose file then it maps as expected but doesn't allow me to CHOWN the folder / files (even if I exec into the container)Below is a proof of concept, I'm running on Windows 10 / Docker Desktop Community Version 2.0.0.0-win81 (29211)EDIT (commented exposing the port, built the dockerfile from docker-compose and changed the port to 80 from 81)EDIT (I've updated the following files, see bottom, I'm leaving these for posterity)docker-compose.ymlversion: '3.2' services: web: restart: always build: context: . ports: - 80:80 volumes: - ./:/wwwDockerfileFROM centos:centos6 as stage1 RUN yum -y update && yum clean all \ && yum --setopt=tsflags=nodocs install -y yum-utils \ httpd \ php FROM stage1 as stage2 RUN mkdir /www \ && chown -R apache:apache /www #VOLUME ["/www"] #EXPOSE 80 ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]UPDATED Proof of concept filesDocker-compose.ymlversion: '3.2' services: web: build: context: . ports: - 80:80 volumes: - ./:/wwwDockerfileFROM centos:centos6 RUN yum -y update && yum clean all \ && yum --setopt=tsflags=nodocs install -y yum-utils \ httpd \ php COPY ./entrypoint.sh / ENTRYPOINT ["/entrypoint.sh"]entrypoint.sh#!/bin/bash set -e #exit straight away if there's an issue chown -R apache:apache /www # Apache /usr/sbin/httpd -D FOREGROUND
Docker compose mapping local directory to dockerfile volume
I was using Bierbarbar's approach. I got it working after getting over the following two pitfalls. Firstly, $NEO4J_HOME/data was symlinked to /data, which seems to have permission issues. Changing the default data folder by adding a dbms.directories.data=mydata line to $NEO4J_HOME/conf/neo4j.conf fixed this. Secondly, make sure the data.cypher file is in the correct format for cypher-shell: 1) a semicolon is needed at the end of each Cypher statement; 2) there are :begin and :commit commands in some versions (or all versions?) of cypher-shell.
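For the second point, a data.cypher in the shape cypher-shell expects would look roughly like this (the CREATE statement is just an example, not from the question):

:begin
CREATE (n:Example {name: 'seed'});
:commit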
I tried create an docker image of neo4j that already provide some data, when you start an container. For my approach I inherited from the neo4j docker image, added some data via the neo4j cypher shell. But when i build the image and run a container from it the data did not appear in the database but the custom password is set. This is my current dockerfile:From neo4j:3.4 ENV NEO4J_AUTH=neo4j/password COPY data.cypher /var/lib/neo4j/import/ USER neo4j RUN bin/neo4j-admin set-initial-password password || true && \ bin/neo4j start && sleep 5 && \ cat /var/lib/neo4j/import/data.cypher | NEO4J_USERNAME=neo4j NEO4J_PASSWORD=password /var/lib/neo4j/bin/cypher-shell --fail-fast CMD [neo4j]I added also an match query to the data.cypher file to make sure that the shell added the data to neo4j. Maybe it has something to do that/datais defined as volume in the neo4j image?
Create custom Neo4j Docker image with initial data from cypher file
It's not caching. Once a file is copied into a container image (using the COPY instruction), modifying it from the host will have no effect - it's a different file. You've attempted to overwrite the file by bind-mounting a volume from the host using the -v argument to docker run. This will work - you will now be using the same file on host and container - except you made a typo: it should be /usr, not /user.
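With the typo fixed, the run command from the question becomes:

docker run --name app1 -p 80:80 -v $(pwd):/usr/share/nginx/html -d some-nginx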
I use the official nginx Docker image (https://registry.hub.docker.com/_/nginx/). When I modify the Index.html I don't see my change. Setting sendfile off in nginx.conf didn't help. I only see the change if I rebuild my image. Here is my Dockerfile: FROM nginx COPY . /usr/share/nginx/html COPY nginx/nginx.conf /etc/nginx/nginx.conf COPY nginx/default.conf /etc/nginx/conf.d/default.conf And these are the commands I use to build and run it: docker build -t some-nginx . docker run --name app1 -p 80:80 -v $(pwd):/user/share/nginx/html -d some-nginx Thank you
How to disable Nginx caching when running Nginx using Docker
I think I figured this out. It seems like the image 'mcr.microsoft.com/dotnet/core/runtime:3.1.1' is just a "layer" which only contains the recipe needed to install the runtime, but doesn't contain the underlying OS specification (please correct me if that is wrong). Therefore, I first need to provide the OS, and install to that, then apply the .NET Core runtime. This seems to work:FROM mcr.microsoft.com/windows/servercore:ltsc2019 WORKDIR /app ADD https://aka.ms/vs/16/release/vc_redist.x64.exe vc_redist.x64.exe RUN VC_redist.x64.exe /install /quiet /norestart /log vc_redist.log FROM mcr.microsoft.com/dotnet/core/runtime:3.1.1
I have seen lots of questions about exit code '3221225781' in response to docker RUN, but I am unable to find an answer still. Consider this dockerfile:FROM mcr.microsoft.com/dotnet/core/runtime:3.1 WORKDIR /app ADD https://aka.ms/vs/16/release/vc_redist.x64.exe vc_redist.x64.exe RUN VC_redist.x64.exe /install /quiet /norestart /log vc_redist.logWhen running this, I get the following output:C:\test>docker image build -t exitcodetest:1.0 . Sending build context to Docker daemon 113.2MB Step 1/4 : FROM mcr.microsoft.com/dotnet/core/runtime:3.1 ---> 3be5e0b7f3a5 Step 2/4 : WORKDIR /app ---> Using cache ---> 4508bead23e2 Step 3/4 : ADD https://aka.ms/vs/16/release/vc_redist.x64.exe vc_redist.x64.exe Downloading [==================================================>] 15.06MB/15.06MB ---> 37322d63b677 Step 4/4 : RUN VC_redist.x64.exe /install /quiet /norestart /log vc_redist.log ---> Running in c57b67befa33 The command 'cmd /S /C VC_redist.x64.exe /install /quiet /norestart /log vc_redist.log' returned a non-zero code: 3221225781Why would I be getting this exit code? What does it mean? I also confirmed that a vc_redist.log is not being written.Anybody know what I can do to get this to work?I should add that the command works when I run it on my local machine, and returns a zero %ERRORLEVEL%.Thanks!
Docker returning exit code 3221225781 installing vc_redist.x64.exe
You should absolutely restructure this to run one process per container and one container per pod. You do not typically need an init system or a process manager like supervisord or runit (there is an argument for having a dedicated init like tini that can do the special pid-1 things). You mention two concerns here: restarting failed processes and process placement in the cluster. For both of these, Kubernetes handles things automatically for you. If the main process in a Pod fails, Kubernetes will restart it. You don't need to do anything for this. If it fails repeatedly, it will start delaying the restarts. This functionality only works if the main process fails: if your container's main process is a supervisor process, you will never get a pod restart and you may not directly notice if a process can't start up at all. Typically you'll run containers via Deployments that have some number of identical replica Pods. Kubernetes itself takes responsibility for deciding which node will run each pod; you don't need to manually specify this. The smaller the pods are, the easier it is to place them. Since you're controlling the number of replicas of a pod, you also want to separate concerns like web servers vs. queue workers so you can scale these independently. Kubernetes has some ability to auto-scale, though the typical direction is to size the cluster based on the workload: in a cloud-oriented setup, if you add a new pod that requests more CPU than your cluster currently has available, it will provision a new node. The HorizontalPodAutoscaler is something of an advanced setup, but you can configure it so that the number of workers is a function of your queue length. Again, this works better if the only thing it's scaling is the worker pods, and not a collection of unrelated things packaged together.
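For illustration, a queue-worker Deployment that you scale independently of the web tier might look roughly like this (names, image and command are placeholders, not the asker's actual setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: etl-worker
spec:
  replicas: 3                      # scale workers independently of the web pods
  selector:
    matchLabels:
      app: etl-worker
  template:
    metadata:
      labels:
        app: etl-worker
    spec:
      containers:
      - name: worker
        image: registry.example.com/etl-worker:latest   # placeholder image
        command: ["php", "artisan", "queue:work", "sqs"] # one worker process per container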
a brief background to give context on the question.Currently my team and i are in the midst of migrating our microservices to k8s to lessen the effort of having to maintain multiple deployment tools & pipelines.One of the microservices that we are planning to migrate is an ETL worker that listens to messages on SQS and performs multi-stage processing.It is built using PHP Laravel and we use supervisord to control how many processes to run on each worker instance on aws ec2. Each process basically executes a laravel command to poll different queues for new messages. We also periodically adjust the number of processes to maximize utilization of each instance's compute power.So the questions are:is this method of deployment still feasible when moving to k8s? Is there still a need to "maximize" compute usage? Are we better off just running 1 process in each container using the "container way" (not sure what is the tool called. runit?)i read from multiple sources (e.ghttps://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container) that it is ideal that for a container to run only 1 process. There's also the case of recovering crashed processes and how running supervisord might interfere with how container performs recovery. But i am not very sure if it applies for our use case.
Does it make sense to run multiple similar processes in a container?
By the looks of things, people are handling this outside of Docker. They are adding Jenkins post-build steps that clean up orphaned Docker containers on aborted or failed builds. See Martin Kenneth's build script as an example.
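A common shape for such a cleanup step (a sketch of the general idea, not taken from the linked script; my-test-image is a placeholder) is to label test containers with the Jenkins BUILD_TAG and force-remove anything carrying that label in a post-build step that runs even on abort or failure:

# when the job starts containers:
docker run -d --label jenkins_build="${BUILD_TAG}" my-test-image

# post-build / cleanup step:
docker ps -aq --filter "label=jenkins_build=${BUILD_TAG}" | xargs -r docker rm -f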
I work at a large organization that runs hundreds of jobs in a shared Jenkins cluster. My Jenkins job needs to run integration tests against untrusted code running inside Docker containers. I am fearful that when my Jenkins job gets terminated abruptly (e.g. the job is aborted or times out) I will be left with orphaned containers. I have tried https://github.com/moby/moby/issues/1905 and ulimits does not work for me (this is because it only works for containers that run bash, and I cannot guarantee that mine will do so). I tried https://stackoverflow.com/a/26351355/14731 but --lxc-conf is not a recognized option for Docker for Windows (this needs to run across all platforms supported by Docker). Any ideas?
Cleaning up orphaned docker containers after Jenkins job is terminated
If I am not mistaken, the Commands element is what you are looking for. As per the ServiceManifest.xml schema, it lets you "Pass a comma delimited list of commands to the container." The schema excerpt documents two elements: ImageName ("The repo and image on https://hub.docker.com or Azure Container Registry") and Commands ("Pass a comma delimited list of commands to the container").
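A sketch of how that could look in the ServiceManifest.xml, using the image and flags from the question (the exact element layout and how the arguments must be comma-separated are assumptions on my part and should be checked against the schema):

<EntryPoint>
  <ContainerHost>
    <ImageName>sheyenrath/wiremock.net-nano</ImageName>
    <Commands>--ReadStaticMappings true,--AdminUsername x,--AdminPassword y,--RequestLogExpirationDuration 24</Commands>
  </ContainerHost>
</EntryPoint>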
I have a docker image, wiremock.net-nano, which accepts additional command-line parameters like --Port and --AdminUsername. The normal docker command line looks like: docker run --rm -p 9091:80 sheyenrath/wiremock.net-nano --ReadStaticMappings true --AdminUsername x --AdminPassword y --RequestLogExpirationDuration 24 But how can I configure these parameters in Azure Service Fabric? The ServiceManifest.xml file defines only the image name (sheyenrath/wiremock.net-nano) and the port forwarding.
How to specify commandline arguments to a docker container in Azure Service Fabric