You have similar issues illustrating the same error message in mongo issues 68 or issue 74. The host machine volume directory cannot be under /Users (or ~). Try:
docker run --name mongo -p 27017:27017 -v /var/lib/boot2docker/my-mongodb-data/:/data/db -d mongo --storageEngine wiredTiger
The PR 470 adds: "WARNING: because MongoDB uses memory mapped files it is not possible to use it through vboxsf to your host (vbox bug). VirtualBox shared folders are not supported by MongoDB (see docs.mongodb.org and the related jira.mongodb.org bug)." This means that it is not possible with the default setup using Docker Toolbox to run a MongoDB container with the data directory mapped to the host.
I am attempting to use the official Mongo Dockerfile to boot up a database, and I am using the -v option to map a local directory to /data inside the container. As part of the Dockerfile, it attempts to chown this directory to the user mongodb:
RUN mkdir -p /data/db /data/configdb \
    && chown -R mongodb:mongodb /data/db /data/configdb
VOLUME /data/db /data/configdb
However, this fails with the following error:
chown: changing ownership of '/data/db': Permission denied
What am I doing wrong here? I cannot find any documentation around this - surely the container should have full permissions to the mapped directory, as it was explicitly passed in the docker run command:
docker run -d --name mongocontainer -v R:\mongodata:/data/db -p 3000:27017 mongo:latest
Cannot call chown inside Docker container (Docker for Windows)
You can use container-transform with boto3; it will convert a docker-compose file to the equivalent ECS task definition, and it is also based on Python. container-transform is a small utility to transform various docker container formats to one another. Currently, container-transform can parse and convert: Kubernetes Pod specs, ECS task definitions, Docker-compose configuration files, Marathon Application Definitions or Groups of Applications, and Chronos Task Definitions. For example:
cat docker-compose.yml | container-transform -v
It is also the tool suggested on the AWS ECS roadmap: "we're unlikely to support the docker-compose format directly in our APIs. But, would a tool like container-transform to transform a docker-compose file into an ECS task definition work for you? Then you can use the resulting ECS task definition file in boto."
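As a rough sketch of the end-to-end flow (the file, cluster, service and task-family names below are hypothetical, not taken from the question):
    # convert the compose file into an ECS task definition JSON
    cat docker-compose.yml | container-transform -v > task-definition.json
    # register the task definition and roll it out to an existing service
    aws ecs register-task-definition --cli-input-json file://task-definition.json
    aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task-family
The same register/update calls are available in boto3 as register_task_definition() and update_service() on the ECS client.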
I have an application running using docker-compose. Now I'm migrating the application to be hosted on ECS. I'm translating the docker-compose settings to the boto3 ECS equivalents. Unfortunately I don't find an equivalent of docker-compose's command in the AWS CLI.
ECS equivalent of docker-compose's command
With a little magic, Docker Hub can do this! Pablo Chico de Guzmán helped me out. Steps: add a file called hooks/post_push; make hooks/post_push executable, commit and push; delete the "Branch" build, but leave the "Tag" build in place. Now, any tags I push (e.g. git push --tags) fire off an automated build, and the same image is also given the latest tag. Here's the change I had to make so the most recent "vX.Y"-tagged meonkeys/syncthing image is also tagged latest.
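For reference, a typical hooks/post_push for this pattern looks roughly like the sketch below; it assumes the IMAGE_NAME and DOCKER_REPO environment variables that Docker Hub's automated-build hooks provide, and is not necessarily the exact script from the linked commit:
    #!/bin/bash
    # Re-tag the image that was just pushed as "latest" and push that tag as well.
    docker tag $IMAGE_NAME $DOCKER_REPO:latest
    docker push $DOCKER_REPO:latest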
Docker Hub builds a Syncthing image for me from this source repo. I tagged the latest commit v0.13.5, but Docker built it twice: once for latest and once for v0.13.5. Why? Shouldn't it be able to figure out the source is the same? Am I just doing something dumb in my Dockerfile, breaking caching? Is there some way I need to hint to Docker Hub that this should really be two images with the same checksum but different tags? I'm thinking of the two Docker image tags latest and v0.13.5 like two git tags both pointing to the same commit. Shouldn't Docker Hub work that way too? If someone tries to pull latest they'd pull exactly the same image tagged v0.13.5? I know how to pull/re-tag/push, but again, seems like there just must be some way to get Docker Hub to do this automatically. Build settings:
How do I make Docker Hub use the same image for "latest" and "vX.Y"?
In your pom.xml, the copy-dependencies goal is bound to the install phase: that is too late, the packaging of the jar has already been done. "I am trying to dockerize a simple Spring Boot Application, built with Maven." You don't need to declare any plugin to create a fat jar with Spring Boot that could be run by a docker container. Declaring these plugins is error prone (and should be used only in corner cases), while the repackage goal of the Spring Boot Maven plugin, attached by default to the package phase of Maven, will create the fat jar for you: "Repackages existing JAR and WAR archives so that they can be executed from the command line using java -jar". Just remove these plugin declarations, execute mvn clean package, and it should be good. Side note: FROM openjdk:latest - don't use latest as the image version but favor a specific version of the image, otherwise you could have bad surprises. As you use JDK 8, you could specify a JRE or a JDK 8 such as: FROM openjdk:8-jre-alpine.
I am trying to dockerize a simple Spring Boot Application, built with Maven.Dockerfile:FROM openjdk:latest COPY target/backend-1.0-SNAPSHOT.jar app.jar ENTRYPOINT ["java","-jar","app.jar"]When I run the .jar without the container (java -jar target/backend-1.0-SNAPSHOT.jar), everything works fine and the app is running.Now I create the container withdocker build -t company/backend .But when I try to run the docker container withdocker run -p 8080:8080 company/backendthe following error occurs:Exception in thread "main" java.lang.NoClassDefFoundError: org/springframework/boot/SpringApplication at de.company.backend.Application.main(Application.java:10) Caused by: java.lang.ClassNotFoundException: org.springframework.boot.SpringApplication at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:602) at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522) ... 1 moreIt seems like docker does not find the main class, even though it is defined in my pom.xml: 1.8 1.8 de.elbdev.backend.Application maven-dependency-plugin install copy-dependencies ${project.build.directory}/lib maven-jar-plugin true lib/ ${mainClass} Main Class:package de.company.backend; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } }
Error when running docker container "NoClassDefFoundError"
A: No. Unfortunately this isn't possible (yet?) with App Engine. More than a few people have run into this issue. For some reason, the container default for /dev/shm is crazy small. ...but there are other options. If the process you want to run has the ability to configure the location of the tmpfs it uses, then you can create a tmpfs and simply point it there. Chromium can't do this.
Option 1: If you want to deploy a container to Google Cloud, one option is to use Container Engine. You can then mount a tmpfs volume to your pods like this:
spec:
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory
  containers:
  - image: gcr.io/project/image
    volumeMounts:
    - mountPath: /dev/shm
      name: dshm
Kubernetes has a fairly steep learning curve, but it will allow you to uncap the limit on /dev/shm.
Option 2: There is a new feature that will allow you to deploy containers to Compute Engine, but it's currently in alpha and you will need to apply to have your project whitelisted to use this feature.
Option 3: Of course, you could deploy containers to GCE in a more manual fashion by creating a GCE instance using COS (Container-Optimized OS).
Update from speedplane's comment - Option #4: If the goal is to run a full browser on App Engine Flexible, then the new versions of Firefox run headless just fine in Docker.
How do you change the size of the shared memory folder /dev/shm in an App Engine Flexible app? By default it is set to 64M, too low to run many apps (e.g., chrome). I don't see any way to change it. There are ways to change it if you have access to the docker run command, but we don't have such access when launching App Engine Flexible apps.
How to Change the Size of /dev/shm in App Engine Flexible
I can reproduce the issue you raise, while it does not show up when I replace the base image with debian:10, for example. It happens the issue is not due to alpine but to the gitlab/gitlab-runner:alpine image itself, namely this Dockerfile contains the following line:
STOPSIGNAL SIGQUIT
To be more precise, the line above means docker stop will send a SIGQUIT signal to the running containers (and wait for a "graceful termination time" before killing the containers, as if a docker kill were issued in the end). If this Dockerfile directive is not used, the default signal sent by docker stop is SIGTERM. Beware that SIGKILL would be a very poor choice for STOPSIGNAL, given that the KILL signal cannot be trapped. So, your first example should work if you use the following line:
trap deregister_runner SIGINT SIGQUIT SIGTERM
This way, your cleanup function deregister_runner will be triggered anytime you issue a docker stop, or use the Ctrl-C keybinding (thanks to SIGINT). Finally, two additional notes related to this question of Docker, bash and signals: The "graceful termination time" (between stop and kill) can be customized, and there are some pitfalls when using a Bash entrypoint (regarding the "signal propagation"). I explained both issues in more detail in this SO answer: Speed up docker-compose shutdown. Beware that in many alpine images, bash is not pre-installed, e.g.:
$ sudo docker run --rm -it alpine /bin/bash
/usr/bin/docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown.
(fortunately, this was not the case of gitlab/gitlab-runner:alpine, which indeed contains the bash package :)
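Putting this together with the script from the question, the corrected start.sh would look roughly like the sketch below (same logic as in the question, only the trap line changed):
    #!/bin/bash
    deregister_runner() {
      echo "even if nothing happened, something happened"
      exit
    }
    # catch Ctrl-C, this image's STOPSIGNAL (SIGQUIT) and the docker stop default (SIGTERM)
    trap deregister_runner SIGINT SIGQUIT SIGTERM
    while true; do
      sleep 10
    done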
I was trying to catch SIGTERM signal from a docker instance (basically when docker stop is called) but couldn't find a way since I have different results for each try I performed.Following is the setup I haveDockerfileFROM gitlab/gitlab-runner:alpine COPY ./start.sh /start.sh ENTRYPOINT ["/start.sh"]start.sh#!/bin/bash deregister_runner() { echo "even if nothing happened, something happened" exit } trap deregister_runner SIGTERM while true; do sleep 10 doneNow I build the docker image$ docker build -t dockertrapcatch . Sending build context to Docker daemon 51.71kB Step 1/3 : FROM gitlab/gitlab-runner:alpine ---> 9f8c39873bee Step 2/3 : COPY ./start.sh /start.sh ---> Using cache ---> ebb3cac0c509 Step 3/3 : ENTRYPOINT ["/start.sh"] ---> Using cache ---> 7ab67fe5a714 Successfully built 7ab67fe5a714 Successfully tagged dockertrapcatch:latestRun the docker$ docker run -it dockertrapcatchNow when I rundocker stop <>ordocker kill --signal=SIGTERM <>, myderegister_runnerfunction is not called.After that I changed thestart.shscript as following (SIGKILL ==> EXIT)#!/bin/bash deregister_runner() { echo "even if nothing happened, something happened" exit } trap deregister_runner EXIT while true; do sleep 10 doneAfter this change and creating the docker image and running itdocker stop <>still does not work butdocker kill --signal=SIGTERM <>works!$ docker run -it dockertrapcatch even if nothing happened, something happened$ docker kill --signal=SIGTERM 6b667af4ac6c 6b667af4ac6cI read that actuallydocker stopsends aSIGTERMbut I think this time it is not working? Any idea?
Catching SIGTERM from alpine image
After analysing the AWS ECS logs I found out that the problem was in the ECS Docker authentication. To solve that I've added the following data to the file /etc/ecs/ecs.config:
ECS_CLUSTER=default
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"auth":"YOUR_DOCKER_HUB_AUTH","email":"YOUR_DOCKER_HUB_EMAIL"}}
Just replace YOUR_DOCKER_HUB_AUTH and YOUR_DOCKER_HUB_EMAIL by your own information and it shall work properly. To find this information you can execute docker login on your own computer and then look for the data in the file ~/.docker/config.json. For more information on the Private Registry Authentication topic please look at http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
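As a small helper, the auth string can be pulled out of the local config with something like the sketch below (an assumption: it requires jq and a Docker setup that is not using a credential store, so the base64 auth value actually appears under the auths key of ~/.docker/config.json):
    # print the auth value to paste into ECS_ENGINE_AUTH_DATA
    docker login
    jq -r '.auths["https://index.docker.io/v1/"].auth' ~/.docker/config.json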
I use Docker Hub to store a private Docker image; the repository has a webhook that, once the image is updated, calls a service I built to: update the ECS task definition, update the ECS service, and deregister the old ECS task definition. The service is running accordingly. After it runs, ECS creates a new task with the new task definition, stops the task with the old task definition, and the service comes back with the new definition. The point is that the Docker image is not updated: once the service starts with the new task definition it remains on the old image. Am I doing something wrong? How do I ensure the docker image is updated?
How to ensure to update Docker image on AWS ECS?
Here's a working solution I found yesterday which triggers a release. You can keep your deployment with docker and just add this little script to your pipeline.
#!/bin/bash
imageId=$(docker inspect registry.heroku.com/$YOUR_HEROKU_APP/web --format={{.Id}})
payload='{"updates":[{"type":"web","docker_image":"'"$imageId"'"}]}'
curl -n -X PATCH https://api.heroku.com/apps/${YOUR_HEROKU_APP}/formation \
  -d "$payload" \
  -H "Content-Type: application/json" \
  -H "Accept: application/vnd.heroku+json; version=3.docker-releases" \
  -H "Authorization: Bearer $YOUR_HEROKU_API_KEY"
This solution comes from Kai Tödter and you can find it at https://toedter.com/2018/06/02/heroku-docker-deployment-update/
I would like to deploy my application as a container from a GitLab CI/CD pipeline. A few days ago I could deploy my docker image as written in the Heroku Dev Center:
docker login --username=_ --password=$(heroku auth:token) registry.heroku.com
and pushed it to the Heroku registry:
docker tag image registry.heroku.com/app/process-type
docker push registry.heroku.com/app/process-type
But then they changed the deploy into 2 steps:
heroku container:push
heroku container:release
Before the update it was deployed when the container was pushed into the container registry. Now I need to release it in some way. I tried to rename the image to release and tried to install the Heroku CLI, but then I cannot log into the Heroku registry. How did you solve it?
Heroku: How to release an existing image in gitlab CI/CD?
You can log in to multiple registries at the same time, but you have to push the images separately. In bash you can run the pushes in parallel by adding an ampersand & after each command, for example:
docker push my.private.registry/[MY-IMAGE] &
docker push my.private.registry2/[MY-IMAGE] &
I want to know if docker can log in to multiple registries at a time and if it can push images to them simultaneously. For example, push multiple images to AWS and Azure registries at the same time.
How can I login to multiple docker registries at same time?
You need to build TensorFlow from source; the typical wheels that you install using pip were built with the requirement of using Compute Capability 3.5, but TensorFlow does indeed support Compute Capability 3.0: https://www.tensorflow.org/install/install_sources states "GPU card with CUDA Compute Capability 3.0 or higher. See NVIDIA documentation for a list of supported GPU cards." You can build the latest TF version as this will also auto-detect the capabilities of your CPU and should not use AVX.
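A rough outline of such a source build (a sketch based on the linked instructions for the TF 1.x era; the exact version, flags and prompts may need adjusting for your setup):
    git clone https://github.com/tensorflow/tensorflow && cd tensorflow
    git checkout v1.5.0
    # answer the prompts; enable CUDA and set compute capability 3.0
    TF_CUDA_COMPUTE_CAPABILITIES=3.0 ./configure
    bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    pip install /tmp/tensorflow_pkg/tensorflow-*.whl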
I am running TensorFlow 1.5.0 in a docker container because I need to use a version that doesn't use the AVX bytecodes, because the hardware I am running on is too old to support them. I finally got tensorflow-gpu to import correctly (after downgrading the docker image to TF 1.5.0), but now when I run any code to detect the GPU it says the GPU is not there. I looked at the docker log and Jupyter is spitting out this message:
Ignoring visible gpu device (device: 0, name: GeForce GTX 760, pci bus id: 0000:01:00.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
The TensorFlow website says that GPUs with compute capability 3.0 are supported, so why does it say it needs compute capability 3.5? Is there any way to get a docker image for TensorFlow and Jupyter that uses TF 1.5.0 but supports GPUs with compute capability 3.0?
Ignoring visible gpu device with compute capability 3.0. The minimum required Cuda capability is 3.5
This is not how memory management works under Linux. If you run full virtualization, like QEMU, then all memory can be allocated and passed down into the VM. That VM then boots the kernel and the memory is managed by the kernel in the VM. In Docker, or any other container/namespace system, the memory is managed by the kernel that runs docker and the "containers". The process that is run in a container still runs like a normal process but in a different cgroup. Each cgroup has limits, like how much memory the kernel will hand out to userland, or what network interfaces it sees, but it still runs on the same kernel. An analogy of this is that docker is a "glorified ulimit". Processes under this limit still behave as normal Linux processes: they allocate memory as-needed, and they will cause OOM issues if they pass some limit, or the host runs out of memory. And just like you can't pre-allocate memory for Firefox, you can't pre-allocate memory for a Docker container.
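What you can do instead is cap the container and watch its usage; a minimal sketch using the existing docker flags (the image name is just a placeholder):
    # hard-cap the container at 1 GiB of RAM (and the same for RAM+swap)
    docker run --memory=1g --memory-swap=1g my-npm-image npm install
    # observe actual memory consumption of running containers
    docker stats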
I'm running npm inside a docker container and every so often it aborts because it cannot allocate enough memory. I see some flags like --memory (How do I set resources allocated to a container using docker?) for the docker run command that seem to limit the maximum amount of memory that a container can consume, but haven't seen anything yet that would allow me to reserve an amount of memory for the container and abort immediately if it cannot be allocated.
Docker reserve a certain amount of memory for container
As mentioned in the comments to the original question, the php:fpm image requires its volume to be set to /var/www/html. If you want to use a different dir you can work around that by using your own Dockerfile (based on php:fpm). That Dockerfile would look like this:
FROM php:fpm
WORKDIR /var/www
It seems like setting the workdir to the desired dir does the trick. Then in your docker-compose.yml you would build with that Dockerfile instead of using the php:fpm image directly:
version: "2"
services:
  # ...
  fpm:
    build: ./path/to/dockerfile
    volumes:
      - ./src:/var/www
I'm having an issue when trying to start multiple containers with docker-compose.
Dockerfile:
FROM nginx:1.9
ADD ./nginx-sites/default /etc/nginx/sites-available/default
docker-compose.yml:
version: "2"
services:
  web:
    build: .
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www
    links:
      - fpm
  fpm:
    image: php:7-fpm
    volumes:
      - ./src:/var/www
When I use docker-compose up to start the application, I get the following error:
ERROR: Container command not found or does not exist.
Would love some help with this issue.
Issue with docker compose: container command not found
Yes you can, at least some. Travis has a whitelist of allowed packages you can install from using the containerised environment. Instead of using wget and dpkg, or apt, you define the packages in your yaml under the addons section. Check https://docs.travis-ci.com/user/installing-dependencies/. In the yaml you'd have something like:
addons:
  apt:
    packages:
      - ncftp
ncftp is whitelisted here. If you need packages which are not whitelisted, you can set sudo: true and your build will be launched in a non-containerised environment, so you have root (sudo) access to install whatever you want. Alternatively you can raise an issue on their Github to add a whitelist for your package.
How can I install a package on Travis CI with sudo: false in travis.yml? I have in my travis.yml:
sudo: false
install:
  - wget http://security.ubuntu.com/ubuntu/pool/main/i/icu/libicu52_52.1-3ubuntu0.4_amd64.deb
  - sudo dpkg -i libicu52_52.1-3ubuntu0.4_amd64.deb
I have an error:
sudo: must be setuid root
The command "sudo dpkg -i libicu52_52.1-3ubuntu0.4_amd64.deb" failed and exited with 1 during .
Install package on Travis-ci with sudo:false [closed]
After quite a bit of research I didn't find any ready-made way for pytest to run a project's tests with OS-level isolation and in a disposable environment. Many approaches are possible and have advantages and disadvantages, but most of them have more moving parts than I would feel comfortable with. The absolute minimal (but opinionated) approach I devised is the following: build a Python docker image with a dedicated non-root user (pytest), all project dependencies from requirements.txt, and the project installed in develop mode; then run py.test in a container that mounts the project folder on the host as the home of the pytest user. To implement the approach, add the following Dockerfile to the top folder of the project you want to test, next to the requirements.txt and setup.py files:
FROM python:3
# setup pytest user
RUN adduser --disabled-password --gecos "" --uid 7357 pytest
COPY ./ /home/pytest
WORKDIR /home/pytest
# setup the python and pytest environments
RUN pip install --upgrade pip setuptools pytest
RUN pip install --upgrade -r requirements.txt
RUN python setup.py develop
# setup entry point
USER pytest
ENTRYPOINT ["py.test"]
Build the image once with:
docker build -t pytest .
Run py.test inside the container, mounting the project folder as a volume on /home/pytest, with:
docker run --rm -it -v `pwd`:/home/pytest pytest [USUAL_PYTEST_OPTIONS]
Note that -v mounts the volume as uid 1000, so host files are not writable by the pytest user, whose uid is forced to 7357. Now you should be able to develop and test your project with OS-level isolation.
Update: If you also run the tests on the host you may need to remove the python and pytest caches that are not writable inside the container. On the host run:
rm -rf .cache/ && find . -name __pycache__ | xargs rm -rf
I'm interested in executing potentially untrusted tests with pytest in some kind of sandbox, like docker, similarly to what continuous integration services do. I understand that to properly sandbox a python process you need OS-level isolation, like running the tests in a disposable chroot/container, but in my use case I don't need to protect against intentionally malicious code, only from dangerous behaviour of pairing "randomly" functions with arguments. So less strict sandboxing may still be acceptable. But I didn't find any plugin that enables any form of sandboxing. What is the best way to sandbox test execution in pytest?
Update: This question is not about python sandboxing in general, as the tests' code is run by pytest and I can't change the way it is executed to use exec or ast or whatever. Also, using pypy-sandbox is not an option unfortunately, as it is "a prototype only" as per the PyPy feature page.
Update 2: Holger Krekel on the pytest-dev mailing list suggests using a dedicated test user via pytest-xdist for user-level isolation:
py.test --tx ssh=OTHERUSER@localhost --dist=each
which made me realise that for my CI-like use case, having a "disposable" environment is as important as having an isolated one, so that every test or every session runs from the same initial state and is not influenced by what older sessions might have left in folders writable by the test user (/home/testuser, /tmp, /var/tmp, etc). So the testuser+xdist approach is close to a solution, but not quite there. Just for context, I need isolation to run pytest-nodev.
Is there a way to sandbox test execution with pytest, especially filesystem access?
Problem: You're trying to access the DB with the wrong IP/hostname. Accessing localhost in the Spring container would resolve to that container, and there's no 27017 port listening there. When you run the jar on the docker host, it has port 27017 available, which is why it works.
Solution: You can use the --hostname flag in the docker run command to set the hostname of the DB container so that you can connect to it from the Spring container using that hostname. The better solution, however, is to use a docker-compose file and start the containers using docker-compose up. First of all use MongoClient mongo = new MongoClient("db", 27017); in your Spring code and build an image of your code. Afterwards, follow the steps below to start the containers:
A) Create the Compose file. Create a file named docker-compose.yml with the following content:
version: "2.1"
services:
  app:
    # replace imageName with your image name (block in your case)
    image: imageName:tag
    ports:
      - 9876:4000 # Replace the port of your application here if used
    depends_on:
      - db
  db:
    image: mongo
    volumes:
      - ./database:/data
    ports:
      - "27017:27017"
B) Run the compose file. Execute the following command to run the compose file:
docker-compose up -d
I have tried many options to access the MongoDB image from docker. It works fine outside docker, but if I run the application in a docker container it shows me an error. Screenshots showed the exception while running the Spring Boot application and the MongoDB container running. Java code used for connecting to the docker MongoDB image:
MongoClient mongo = new MongoClient("mongodb//db", 27017));
I tried with alternative options also:
MongoClient mongo = new MongoClient("localhost", 27017));
It works fine if I run the jar directly but doesn't work inside the docker container. Kindly provide me the solution.
Can't Connect Mongodb to Springboot Container in docker
Set the execute permission on your binary and it should work:
RUN chmod +x ./main
# Command to run the executable
CMD ["./main"]
I have some troubles when I try to start my go application with docker.ERROR: for app Cannot start service app: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./main\": permission denied": unknownIt happenes when I try to dodocker-compose upIt is my mulristage Dockerfil:# Dockerfile References: https://docs.docker.com/engine/reference/builder/ # Start from the latest golang base image FROM golang:1.13 as builder # Set the Current Working Directory inside the container WORKDIR /memesbot # Copy go mod and sum files COPY go.mod go.sum ./ # Download all dependencies. Dependencies will be cached if the go.mod and go.sum files are not changed RUN go mod download # Copy the source from the current directory to the Working Directory inside the container COPY . . # Build the Go app RUN go build -o /memesbot/cmd/main . ######## Start a new stage from scratch ####### FROM alpine:latest RUN apk --no-cache add ca-certificates WORKDIR /root/ # Copy the Pre-built binary file from the previous stage COPY --from=builder /memesbot/cmd/main . # Command to run the executable CMD ["./main"]And docker-compose.ymlversion: '3' services: app: build: context: . dockerfile: Dockerfile ports: - "7777:7777" environment: TELEGRAM_TOKEN: xxxyyyDoes somebody know how can I fix this?
Cannot start service app: OCI runtime create failed: container_linux.go:349
In order to get Keycloak responding properly on port 443, I need to remove the KC_HOSTNAME_PORT configuration, leaving me with:
version: "3"
services:
  traefik:
    image: docker.io/traefik
    command:
      - --api.insecure=true
      - --providers.docker
      - --entrypoints.web.address=:80
      - --entrypoints.web-secure.address=:443
    ports:
      - "127.0.0.1:8080:8080"
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  keycloak:
    image: quay.io/keycloak/keycloak
    restart: always
    command: start
    environment:
      KC_PROXY_ADDRESS_FORWARDING: "true"
      KC_HOSTNAME_STRICT: "false"
      KC_HOSTNAME: auth.example.com
      KC_PROXY: edge
      KC_HTTP_ENABLED: "true"
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgres:5432/$POSTGRES_DB?ssl=allow
      KC_DB_USERNAME: $POSTGRES_USER
      KC_DB_PASSWORD: $POSTGRES_PASSWORD
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: password
    labels:
      - "traefik.http.routers.cloud-network-keycloak.rule=Host(`auth.example.com`)"
      - "traefik.http.routers.cloud-network-keycloak.tls=true"
      - "traefik.http.services.cloud-network-keycloak.loadbalancer.server.port=8080"
  postgres:
    image: docker.io/postgres:14
    environment:
      POSTGRES_USER: $POSTGRES_USER
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      POSTGRES_DB: $POSTGRES_DB
This works for me without errors when I connect to it as https://auth.example.com. If I re-introduce the KC_HOSTNAME_PORT setting, I get the same "infinite spinning wheel" that you reported in your question.
I have a domain example.org.I have docker running there with Traefik as proxy. Now I want to setup Keycloak. I want to access Keycloak on auth.example.org. This is my config (docker-compose):keycloak: image: quay.io/keycloak/keycloak restart: always command: start environment: KC_PROXY_ADDRESS_FORWARDING: true KC_HOSTNAME_STRICT: false KC_HOSTNAME: auth.example.org KC_HOSTNAME_PORT: 443 KC_HTTP_ENABLED: true KC_DB: postgres KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak?ssl=allow KC_DB_USERNAME: root KC_DB_PASSWORD: password KEYCLOAK_ADMIN: admin KEYCLOAK_ADMIN_PASSWORD: password labels: - "traefik.http.routers.cloud-network-keycloak.rule=Host(`auth.example.org`)" - "traefik.http.routers.cloud-network-keycloak.entrypoints=websecure" - "traefik.http.routers.cloud-network-keycloak.tls.certresolver=letsencryptresolver" - "traefik.http.routers.cloud-network-keycloak.tls=true" - "traefik.http.services.cloud-network-keycloak.loadbalancer.server.port=8080" depends_on: postgres: condition: service_healthy networks: - internal - traefikHowever, loading the Keycloak admin console onhttps://auth.example.org/admin/master/console/throws an error in the browser:URL:https://auth.example.org/realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=security-admin-console&origin=https%3A%2F%2Fauth.example.orgStatus: 403I have no clue ... how to resolve this?
I do not get Keycloak working in docker behind Traefik
I just ran into a similar issue. I realize this is 11 months old, but it's somewhat difficult to find information on this topic, so I will post information here. My issue turned out to be that the default subnet for the docker swarm overlay network was overlapping with my VPC's subnet, so the default Amazon EC2 DNS server (10.0.0.2 in my case) was confusing the docker daemon's IP address routing into thinking it was a swarm-overlay-local service (I think). Anyway, I resolved my issue by changing the default overlay subnet via my stack file's networks: section, and my docker daemon began resolving the 10.0.0.2 VPC DNS server again. If you put your node's docker daemon in debug mode (on Linux add "debug": true to the JSON in /etc/docker/daemon.json), you can monitor debug output by tailing the log for the daemon on your specific system. If the daemon is running via systemd, journalctl -u docker will give you the logs; -f will follow the logs. There I found information about the connectivity issues (the docker daemon was failing to get in touch with the DNS server on 10.0.0.2:53 -- the UDP DNS port). However, nslookup was working fine on the host OS, and /etc/resolv.conf looked appropriate. The problem was obvious if you used docker exec to get an interactive /bin/sh in one of the running services: nslookup fails for any external domain, and the docker daemon debug logs spit out more "connection refused" type messages regarding 10.0.0.2. After looking around docker support issues for DNS resolution for an hour or two, I found a comment stating that the docker swarm virtual networks are assigned addresses based on some defaults, and that sometimes those defaults overlap with how you've set up your local subnets. I reasoned that if they were overlapping with the DNS server on my VPC, it might be trying to route the DNS packets intra-swarm, instead of resolving via the VPC subnet routing.
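For illustration, changing the overlay subnet can look roughly like one of the following (the network name and the 172.28.0.0/16 range are placeholders; pick a range that does not overlap your VPC):
    # explicit overlay network with a chosen subnet
    docker network create --driver overlay --subnet 172.28.0.0/16 my_overlay
    # or, in the stack/compose file, declare the network with an ipam subnet:
    # networks:
    #   my_overlay:
    #     driver: overlay
    #     ipam:
    #       config:
    #         - subnet: 172.28.0.0/16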
I have set up an EC2 instance on AWS.Have set up my security groups properly so that the instance is able to reach the Internet, e.g.ubuntu@ip-10-17-0-78:/data$ ping www.google.com PING www.google.com (216.58.211.164) 56(84) bytes of data. 64 bytes from dub08s01-in-f4.1e100.net (216.58.211.164): icmp_seq=1 ttl=46 time=1.02 ms 64 bytes from dub08s01-in-f4.1e100.net (216.58.211.164): icmp_seq=2 ttl=46 time=1.00 msHowever, when I exec into a container, this is not possible:root@d1ca5ce50d3b:/app# ping www.google.com ping: www.google.com: Temporary failure in name resolutionupdate_1: the connectivity issue has to do with containers being initiated withdocker stack deploy, in specific stacks;When I just start a stand-alone container, connectivity to the Internet is there:ubuntu@ip-10-17-0-78:/data$ docker run -it alpine:latest /bin/ash / # ping www.google.gr PING www.google.gr (209.85.203.94): 56 data bytes 64 bytes from 209.85.203.94: seq=0 ttl=38 time=1.148 ms 64 bytes from 209.85.203.94: seq=1 ttl=38 time=1.071 msupdate_2: After some investigation, it turns out that:the stand-alone container,doesinherit the EC2 instance's dns-nameserver;the containers started viadocker stack deploydonot;i.e. this is from adocker swarm- initiated container:ubuntu@ip-10-17-0-78:~$ docker exec -it d1ca5ce50d3b bash root@d1ca5ce50d3b:/app# cat /etc/resolv.conf search eu-west-1.compute.internal nameserver 127.0.0.11 options ndots:0update_3: Same is the problem when I start the stack withdocker-composeinstead ofdocker stack deploy; does not seem to be aswarm- specific issue;update_4: I have explicitly added the gfile/etc/docker/daemon.jsonwith the following contents:{ "dns": ["10.0.0.2", "8.8.8.8"] }ubuntu@ip-10-17-0-78:/data$ docker run busybox nslookup google.com Server: 8.8.8.8 Address: 8.8.8.8:53Non-authoritative answer: Name: google.com Address: 216.58.211.174*** Can't find google.com: No answerbut lookup still fails:Any suggestions why this might be hapenning?
docker: containers in stacks within EC2 instance do not inherit dns nameserver
AFAIK, currently docker images do not hash to byte-exact hashes, since the metadata currently contains stateful information such as the created date. You can check out the design doc from 1.10. Unfortunately, it looks like the history metadata is an important part of image validity and identification. Don't get me wrong, I'm all about reproducible builds. However I don't believe hash-exactness is the best criterion for measuring reproducibility of a docker image. A docker image isn't a compiled binary. There is no way to guarantee the results of a stage will ever be able to be reproduced, so even if the datetime metadata was absent, it would not guarantee reproducible builds. Take this pathological example:
RUN curl "https://www.random.org/strings/?num=1&len=20&digits=on&unique=on&format=plain&rnd=new" -o nonce.txt
I'm trying to build Docker images and I would like my Docker images to be deterministic. Much to my surprise I found that even a trivial Dockerfile such as
FROM scratch
ENV a b
produces different IDs when built repeatedly using docker build --no-cache . How could I make my builds deterministic, and what is causing the changes in image IDs? When caching is enabled the same ID is produced. The reason I'm trying to get this reproducibility is to enable producing the same layers in a distributed build environment. I can not control where a build is run, therefore I can not know what is in the cache. Also, the Docker build downloads files using wget from an ftp which may or may not have changed; currently I can not easily tell Docker from within a Dockerfile if the results of a RUN should invalidate the cache. Therefore, if I could just produce the same ID for identical layers (when no cache is used) these layers would not have to be "push"ed and "pull"ed again. Also all the reasons listed here: https://reproducible-builds.org/
How to do deterministic builds of Docker images?
As of now (Docker 18.06+) UDP broadcasts work out of the box, as long as you are using the default bridge network and all containers run on the same host (and of course in the same docker network). Using docker-compose, services are automagically run in the same network, and thus the following docker-compose.yml:
version: '3.4'
services:
  master-cat:
    image: alpine
    command: nc -l -u -p 6666
  slave-cat:
    image: alpine/socat
    depends_on:
      - master-cat
    entrypoint: ''
    command: sh -c "echo 'Meow' | socat - UDP4-DATAGRAM:255.255.255.255:6666,so-broadcast"
with docker-compose up will show Meow on the master-cat (sic!). If you want to use broadcasts across multiple hosts, this is not possible with the default network plugins that docker ships with -> https://github.com/moby/moby/issues/17814. But a more sophisticated overlay network plugin, such as Weave, should work (I have not tested it...)
I've been trying to enable some UDP discovery between a few containers. It turned out that containers have broadcasts disabled by default, missing brd for inet in:
$ ip addr show dev eth0
27: eth0: mtu 1500 qdisc noqueue state UP
    link/ether 00:00:01:4f:6a:47 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.12/16 scope global eth0
       valid_lft forever preferred_lft forever
Stack: host: Ubuntu 14.04; container: Ubuntu 12.04; docker 1.8.3. How do I enable the broadcasts? Here's what I've tried so far: ip link set dev eth0 broadcast 172.17.255.255 gives RTNETLINK answers: Invalid argument; same with a --privileged container; same with NET_ADMIN and NET_BROADCAST container capabilities.
Enable broadcasts between docker containers
docker's --user parameter changes just the uid, not the group id, within the docker container. So, within the container I have:
id
uid=1002 gid=0(root) groups=0(root)
and it is not like in the original system where I have groups=1000(users). So, one workaround might be mapping the passwd and group files into the container:
-v /etc/docker/passwd:/etc/passwd:ro -v /etc/docker/group:/etc/group:ro
The other idea is to map a tmp directory owned by the running --user and, when docker's work is complete, copy the files to a final location:
TMPFILE=`mktemp`; docker run -v $TMPFILE:/working_dir/ --user=$(id -u); cp $TMPDIR $NEWDIR
This discussion, "Understanding user file ownership in docker: how to avoid changing permissions of linked volumes", brings some light to my question.
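Another option worth noting (not part of the answer above, just the documented behaviour of docker run's --user flag): --user also accepts a uid:gid pair, so the group can be passed explicitly instead of mapping passwd/group files. A sketch using the paths from the question:
    # run as user2 with the "users" group; look up the actual gid on the host
    docker run -it --volume=/test/dockervolume:/tmp/job_output \
      --user=$(id -u user2):$(getent group users | cut -d: -f3) \
      --workdir=/tmp/job_output ubuntu:15.04 touch test2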
I've played a lot with any rights combinations to make docker to work, but... at first my environment:Ubuntu linux 15.04 and Docker version 1.5.0, build a8a31ef.I have a directory '/test/dockervolume' and two users user1 and user2 in a group userschown user1.users /test/dockervolume chmod 775 /test/dockervolume ls -la drwxrwxr-x 2 user1 users 4096 Oct 11 11:57 dockervolumeEither user1 and user2 can write delete files in this directory. I use standard docker ubuntu:15.04 image. user1 has id 1000 and user2 has id 1002.I run docker with next command:docker run -it --volume=/test/dcokervolume:/tmp/job_output --user=1000 --workdir=/tmp/job_output ubuntu:15.04Within docker I just do simple 'touch test' and it works for user1 with id 1000. When I run docker with --user 1002 I can't write to that directory:I have no name!@6c5e03f4b3a3:/tmp/job_output$ touch test2 touch: cannot touch 'test2': Permission denied I have no name!@6c5e03f4b3a3:/tmp/job_output$Just to be clear both users can write to that directory if not in docker.So my question is this behavior by docker design or it is a bug or I missed something in the manual?
Docker with '--user' can not write to volume with different ownership
Note that the default entrypoint/cmd for an official CentOS 6 image is: no entrypoint, only CMD ["/bin/bash"]. If you are using the -c option, you need to pass one argument (which is the full command): "echo foo". Not a series of arguments (CMD ["echo", "foo"]). As stated in the Dockerfile CMD section: "If you use the shell form of the CMD, then the command will execute in /bin/sh -c":
FROM ubuntu
CMD echo "This is a test." | wc -
"If you want to run your command without a shell then you must express the command as a JSON array and give the full path to the executable." Since echo is a built-in command in the bash and C shells, the shell form here is preferable.
Here's a simple DockerfileFROM centos:6.6 ENTRYPOINT ["/bin/bash", "-l", "-c"] CMD ["echo", "foo"]Unfortunately it doesn't work. Nothing is echo'd when you run the resulting container that's built.If you comment out theENTRYPOINTthen it works. However, if you set theENTRYPOINTto/bin/sh -c, then it fails againFROM centos:6.6 ENTRYPOINT ["/bin/sh", "-c"] CMD ["echo", "foo"]I thought that was the defaultENTRYPOINTfor an container that didn't have one defined, why didn't that work?Finally, this also worksFROM centos:6.6 ENTRYPOINT ["/bin/bash", "-l", "-c"] CMD ["echo foo"]Before I submit an issue, I wanted to see if I'm doing something obviously wrong?I'm usingrvminside my container which sort of needs a login shell to work right.
Docker CMD weirdness when ENTRYPOINT is a shell script
Withdocker-compose,I don't believe there's any support for this. However, with swarm mode, which can use a similar compose file, you can pass{{.Task.Slot}}as an environment variable usingservice templates. E.g.version: '3' services: test: image: busybox command: /bin/sh -c "echo My task number is $$task_id && tail -f /dev/null" environment: task_id: "{{.Task.Slot}}" deploy: replicas: 5Instead ofdocker-compose up, I deploy withdocker stack deploy -c docker-compose.yml test. My local swarm cluster is just a single node created withdocker swarm init.Then, reviewing each of these running containers:$ docker ps --filter label=com.docker.swarm.service.name=test_test CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ccd0dbebbcbe busybox:latest "/bin/sh -c 'echo My…" About a minute ago Up About a minute test_test.3.i3jg6qrg09wjmntq1q17690q4 bfaa22fa3342 busybox:latest "/bin/sh -c 'echo My…" About a minute ago Up About a minute test_test.5.iur5kg6o3hn5wpmudmbx3gvy1 a372c0ce39a2 busybox:latest "/bin/sh -c 'echo My…" About a minute ago Up About a minute test_test.4.rzmhyjnjk00qfs0ljpfyyjz73 0b47d19224f6 busybox:latest "/bin/sh -c 'echo My…" About a minute ago Up About a minute test_test.1.tm97lz6dqmhl80dam6bsuvc8j c968cb5dbb5f busybox:latest "/bin/sh -c 'echo My…" About a minute ago Up About a minute test_test.2.757e8evknx745120ih5lmhk34 $ docker ps --filter label=com.docker.swarm.service.name=test_test -q | xargs -n 1 docker logs My task number is 3 My task number is 5 My task number is 4 My task number is 1 My task number is 2
I have a script that scrapes data by URLslist. This script is executing in a docker container. I would like to run it in multiple instances, for example, 20. For that, I wanted to usedocker-compose scale worker=20and to pass the INDEX to each instance so that the script knows which URLs should bescraped.Example.ID, URL 0 https://example.org/sdga2 1 https://example.org/fsdh34 2 https://example.org/fs4h35 3 https://example.org/f1h36 4 https://example.org/fs4h37 ...If there are 3 instances, 1st instance of script should process a url whose ID equals to 0, 3, 6, 9 i.e. ID = INDEX + INSTANCES_NUM * k.I don't know how to pass INDEX to script running in Docker container. Of course, I can duplicate services in docker-compose.yml with different INDEX in environment vars. But if instances number is greater 10 or even 50 it will be a very bad solution)Does anyone know how do this?
Parallel code execution in Docker containers
The upcoming Podman 3.0 supports the Docker REST API well enough to be used as the back-end for docker-compose. It is planned to be released in a few weeks (see Podman releases). Caveats: Running Podman as root is supported, but not yet running as a normal user, i.e. running "rootless" (see feature request). Functionality relating to Swarm is not supported. To enable Podman as the back-end for docker-compose, run
sudo systemctl enable --now podman.socket
Podman will then listen on the UNIX domain socket /var/run/docker.sock. See also: https://www.redhat.com/sysadmin/podman-docker-compose
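A quick usage sketch on such a setup (assuming docker-compose is installed and the podman.socket unit is active; the DOCKER_HOST line is only needed if your docker-compose does not already default to that socket path):
    sudo systemctl enable --now podman.socket
    export DOCKER_HOST=unix:///var/run/docker.sock
    sudo docker-compose up -d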
How do I use a docker-compose file with podman? For example:
version: '3.7'
services:
  gitea:
    image: gitea/gitea:latest
    environment:
      - DB_TYPE=postgres
      - DB_HOST=db:5432
      - DB_NAME=
      - DB_USER=
      - DB_PASSWD=
    restart: always
    volumes:
      - git_data:/data
    ports:
      - 3000:3000
Does this generate an image normally, as it would with a Dockerfile?
Docker-compose with podman?
I was able to reproduce this problem on both3.1-nanoserver-2009and3.1-nanoserver-2004for you.I think the problem is related to the warning printed out during build:warning NETSDK1074: The application host executable will not be customized because adding resources requires that the build be performed on Windows (excluding Nano Server).If that's the case, then it seems that it is a limitation of thenanoserverbase image, and unfortunately it looks like this problem still has not been resolved, because it's still present when building inmcr.microsoft.com/dotnet/nightly/sdk:5.0.Here'sa related pull request that might shed some light on the subject.Having said that, I think the only option for now is to use windows image other thannanoserver(alternatives can be foundhere). I didn't find any image that would come with .NET Core SDK preinstalled (I didn't put much effort into finding it though), but it should be fairly simple to set it up. In the following example I usedservercoreimage since it is much more lightweight thanwindowsimage.FROM mcr.microsoft.com/windows/servercore:20H2 AS sdk WORKDIR /dotnet # Download the official .NET Core install script RUN powershell -c "Invoke-WebRequest -Uri https://dot.net/v1/dotnet-install.ps1 -OutFile dotnet-install.ps1" # Run the install script RUN powershell -c "& ./dotnet-install.ps1 -InstallDir ." # Add the installed executable to PATH RUN setx PATH "%PATH%;/dotnet" FROM sdk AS build # Do your stuff hereHereyou'll find the documentation for the install script.I also did confirm that the produced application did not spawn console window when run.
I'm facing the issue that a .Net Core WPF application automatically opens a console window when started. This only happens when build inside a Docker container. When I build it directly on my PC, only the actual application window opens.My best guess is that this is an issue with the operating system the .Net Core image is based on. The .Net Core SDK Docker Hub Repo knows the following tags: 3.1-nanoserver-1809, 3.1-nanoserver-1903, 3.1-nanoserver-1909, 3.1-nanoserver-2004, 3.1-nanoserver-2009. I was able to confirm the issue with the first three tags, but the 2004 and 2009 tags do not run on my machine, so I need someone to try this out and either confirm my theory (which would mean that it should not happen on at least on of these images) or to come up with a better explanation of why this is happening.This is reproducible with the default .Net Core WPF app Visual Studio creates for you. Here is a Dockerfile to test it out:FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build WORKDIR /src COPY . ./ RUN dotnet build -c Debug -o out FROM stefanscherer/chocolatey WORKDIR /app RUN choco install -y 7zip # Depending on your project setup it might be src/[project name]/out COPY --from=build /src/out ./test RUN 7z a -y test.zip ./test/*You can build the image and extract the compiled program with the following commands:docker build -t testimage .docker run -d --name testcontainer testimagedocker stop testcontainerdocker cp testcontainer:app/test.zip .
App opens console window when being build with Docker
Actually, this modification is quite old, since it was made in 2021 and shipped in release 1.28.6, one of the last versions before Docker Compose V2. It was just a file renaming activity, with no new functionality associated; it simply follows the usual Docker Compose releases. The main difference is that if you execute docker-compose with no options it will search for compose.yml first and then, if not present, for docker-compose.yml for backwards compatibility with earlier versions.
I've noticed that the Docker documentation now recommends calling Docker Compose's file compose.yaml instead of docker-compose.yaml. Is there a reason for this change? Does the newer version provide more features?
Why should you call the Docker Compose file 'compose.yaml' instead of 'docker-compose.yaml'?
From the official documentation: "Only the last ENTRYPOINT instruction in the Dockerfile will have an effect."
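In a multi-stage build this means the earlier stage's base image contributes nothing to the final image's runtime configuration, and no entrypoint is ever executed during the build itself (only RUN instructions run at build time). You can verify what the built image ends up with (the image tag here is a placeholder):
    docker build -t myapp .
    docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' myapp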
If I define a multistage Dockerfile like so:
FROM exampleabc:latest
COPY app.go .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
would the exampleabc:latest image have its entrypoint executed?
Does Docker execute the entrypoint when a container is used in a multistage build?
I am still not able to push a docker image from my local machine, but authorizing a compute instance with my account and pushing an image from there works. If you run into this issue, I recommend creating a Compute Engine instance (for yourself), authorizing an account with gcloud auth that can push containers, and pushing from there. I have my source code in a Git repository that I can just pull from to get the code.
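Roughly, the workflow on the Compute Engine instance would look like the sketch below (using the project ID from the question; the repository URL is a placeholder, and gcloud auth configure-docker is the newer way to wire up Docker credentials, the era-appropriate alternative being gcloud docker -- push ...):
    gcloud auth login
    gcloud auth configure-docker
    git clone <your-repo> && cd <your-repo>
    docker build -t gcr.io/kubernetes-test-1367/myapp .
    docker push gcr.io/kubernetes-test-1367/myapp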
EDIT: I'm just going to blame this on platform inconsistencies. I have given up on pushing to the Google Cloud Container Registry for now, and have created an Ubuntu VM where I'm doing it instead. I have voted to close this question as well, for the reasons stated previously, and also as this should probably have been asked on Server Fault in the first place. Thanks for everyone's help!running$ gcloud docker push gcr.io/kubernetes-test-1367/myappresults in:The push refers to a repository [gcr.io/kubernetes-test-1367/myapp] 595e622f9b8f: Preparing 219bf89d98c1: Preparing 53cad0e0f952: Preparing 765e7b2efe23: Preparing 5f2f91b41de9: Preparing ec0200a19d76: Preparing 338cb8e0e9ed: Preparing d1c800db26c7: Preparing 42755cf4ee95: Preparing ec0200a19d76: Waiting 338cb8e0e9ed: Waiting d1c800db26c7: Waiting 42755cf4ee95: Waiting denied: Unable to create the repository, please check that you have access to do so.$ gcloud initresults in:Welcome! This command will take you through the configuration of gcloud. Settings from your current configuration [default] are: [core] account = @gmail.com disable_usage_reporting = True project = kubernetes-test-1367 Your active configuration is: [default]Note: this is a duplicate ofKubernetes: Unable to create repository, but I tried his solution and it did not help me. I've tried appending:v1,/v1, and usingus.gcr.ioEdit: Additional Info$ gcloud --version Google Cloud SDK 116.0.0 bq 2.0.24 bq-win 2.0.18 core 2016.06.24 core-win 2016.02.05 gcloud gsutil 4.19 gsutil-win 4.16 kubectl kubectl-windows-x86_64 1.2.4 windows-ssh-tools 2016.05.13+$ gcloud components update All components are up to date.+$ docker -v Docker version 1.12.0-rc3, build 91e29e8, experimental
How to push container to Google Container Registry (unable to create repository)
Based on the comments, the exact cause of the issue is undetermined. However, the problem was solved by creating a new Service Discovery in ECS.
I have six docker containers all running in their own Tasks (6 tasks), and each task running in a separate Fargate service (6 services) on ECS. I need the services to be able to communicate with each other, and some of them need to be publically accessible. I keep seeing info about using either Service Discovery or a Load Balancer assigned to each service. I would like to try and avoid having to set up 6 load balancers as it's more expensive and more effort to maintain.This is how I have set up Service Discovery currently:All Tasks are setup to use awsvpcAll services have been set up to use Service Discovery (set up from within the Service Creation page)All services are sharing the same Namespace, and they're all using the A DNS RecordWhen I try to ping.from within one of the docker containers I do not get a response. However, I can successfully ping another container when pinging the private IP Address.Can I achieve what I need to do with Service Discovery? If so, how exactly do the containers communicate with each other?Thanks heaps! Please let me know if I haven't provided enough info.EDIT: Recreating the services and setting them up with a new Service Discovery seemed to resolve the issue. No idea why the old discovery didn't work.
How to communicate between Fargate services on AWS ECS?
You will have to first detach the container from the custom network and then connect it back, providing the IP. You can follow these steps:
docker network disconnect [OPTIONS] NETWORK CONTAINER
docker network connect --ip 192.168.150.3 NETWORK CONTAINER
I have a docker container linked to a bridge with IP address 192.168.150.1/24. Once I create the docker instance from a docker image it gets an IP address, 192.168.150.2, but according to my requirement this IP address must be reserved, since I want to use it for something else. Now, I want to change the IP address of this docker instance to 192.168.150.3. Is it possible to do? If so, how? Please help.
How to change the IP address of a docker after creating it?
The solution is very simple: instead of using IPs or hostnames you can use the service's name. In your example, in the streamapp service you can access the other service by using http://storeapp:8080. Similarly, in the storeapp service you can access the other at http://streamapp:8080. Please note that you must use the internal ports, not the exported ones. This does not apply when you access the service from other machines, i.e. from the internet. In that case you must use the form http://{IP_OF_THE_MACHINE}:8090
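A quick way to check this from inside the containers (a sketch; it assumes curl is available in the images and reuses the service names, internal port and path from the question, and it also assumes both services share a network, which in the posted compose file requires adding streamapp to the backend network too):
    # from the streamapp container, call storeapp by its service name and internal port
    docker-compose exec streamapp curl http://storeapp:8080/doSomething
    # and the other way around
    docker-compose exec storeapp curl http://streamapp:8080/doSomething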
How it works now: Microservice X makes a REST API request to Microservice Y with a static IP: http://{ip-address}:{port}/doSomething
The problem: I can no longer guarantee that static IP. I want to solve this by using the docker hostname instead: http://hostname:{port}/doSomething
I tried achieving this by creating a user-defined network in docker-compose:
# part of docker-compose file
  streamapp:
    hostname: twitterstreamapp
    image: twitterstreamapp
    container_name: twitterstreamapp
    restart: always
    ports:
      - '8090:8080'
    build:
      context: ./TwitterStream
      dockerfile: Dockerfile
  storeapp:
    hostname: twitterstoreapp
    image: twitterstoreapp
    container_name: twitterstoreapp
    restart: always
    ports:
      - '8095:8080'
    build:
      context: ./TwitterStore
      dockerfile: Dockerfile
    depends_on:
      - 'mysql-db'
    networks:
      - backend
volumes:
  MyDataVolume:
networks:
  backend:
    driver: bridge
I can ping from Container X to Container Y, but not curl, for example. How can I fix this, or is this not the best way to achieve what I want?
Communication between two microservices by Docker hostname
I believe you can use the --platform parameter on docker buildx build or docker build to set the platform(s) to build the image for, which will be used within any FROM calls within the Dockerfile if nothing else is specified (see Dockerfile FROM), as mentioned in the documentation. You can then use the TARGETPLATFORM variable within your Dockerfile to get what platform it's being built for, if needed. If you want to change the default platform to build for, you can set the DOCKER_DEFAULT_PLATFORM environment variable.
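Concretely, one way to apply this to the Dockerfile in the question (a sketch, not the only option) is to drop the hardcoded --platform from the FROM line and choose the platform per build on the command line; the image tags below are placeholders:
    # production build from the M1 Mac, targeting the linux/amd64 VM
    docker buildx build --platform linux/amd64 -t myflaskapp:prod .
    # local dev build, using the machine's native platform (linux/arm64 on an M1)
    docker build -t myflaskapp:dev .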
Some context: I'm a Docker newbie (on it since 1 day); I've got a small VM running linux/AMD and I own an M1 Mac (ARM); I'd like to also use containers for dev (instead of a virtual env). For building my container for prod, being on an M1 Mac, I have the below Dockerfile. See the --platform=linux/amd64 arg in FROM; it works (= I'm able to deploy).
FROM --platform=linux/amd64 python:3.10-slim-bullseye
WORKDIR /usr/src/app
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]
However, how can I tell my Dockerfile not to use --platform=linux/amd64 if I want to build for local dev? I've seen some posts on SO with conditions in a Dockerfile, but only for the RUN command. Any ideas or best practices? Thanks.
Docker - How to build image for M1 Mac or AMD conditionally from Dockerfile?
To fix the my.cnf, you can use docker container cp. It works with stopped containers. To copy the file from your container to the current path:
docker container cp containerId:/etc/mysql/my.cnf container-my.cnf
Then edit container-my.cnf and copy it back from the path to the container:
docker container cp container-my.cnf containerId:/etc/mysql/my.cnf
To use the existing MySQL data with a new container:
docker container inspect -f '{{.Mounts}}' [container]
gives you the volume name (key volume) where the data is. Then start a new mysql container and mount the volume under /var/lib/mysql:
docker container run -d -v [volume_name]:/var/lib/mysql [image]
Afterwards you can remove the old container (actually you can remove it before creating the new one).
After making an edit to "my.cnf", I now get an error from Kitematic on the Mac when I attempt to start the container:mysqld: [ERROR] Found option without preceding group in config file /etc/mysql/my.cnf at line 19! mysqld: [ERROR] Fatal error in defaults handling. Program aborted!I've tried accessing the container via:docker exec -it [container] bash... but I get the error:Error response from daemon: Container [container] is not runningI was able to accesssomethingvia the image, but the file didn't appear to be the same, so I'm not sure what was happening (I'm not too conversant with Docker).At this stage, either making the appropriate edit and fixing the container, or somehow cloning the MySQL data to another container would be ideal.
Docker: Edit "my.cnf" file in stopped container
Sounds like there is some issue with network connectivity while building the Docker container. Use host as the network inside the compose file to resolve the issue. version: '3.4' services: django_image: build: context: . network: host Give it a try; it should solve the issue.
I'm new to docker and currently trying to build an image for my Django project. Here's myDockerfile:FROM python:3.8.5-alpine WORKDIR /my_project ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 RUN pip install --upgrade pip COPY ./requirements.txt . RUN pip install -r requirements.txt COPY . .When I rundocker-compose build, execution breaks at the second pip command with the following error;WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/asgiref/Some Context:OS: Ubuntu 20.04.1 LTSKernel: Linux 5.8.0-34-genericdocker --versionDocker version 20.10.2, build 2291f61docker-compose --versiondocker-compose version 1.27.4, build 40524192I have gone through a lot of similar questions online but none of their corresponding solutions work for me. I'll be more than glad to share any other info needed to assist in troubleshooting.
ConnectTimeoutError while running 'pip install' via docker-compose
This might be happening because stdout and stderr are buffered streams. When interactive, stdout and stderr streams are line-buffered. Otherwise, they are block-buffered like regular text files. You can override this value with the -u command-line option. Try adding the -u flag. CMD [ "python", "-u", "./your_script.py" ] Or, as pointed out by David Maze, you could set the PYTHONUNBUFFERED environment variable and achieve the same result. ENV PYTHONUNBUFFERED=1 You could also flush stdout every time you call print(): sys.stdout.flush() Alternatively, see the logging module.
When running a Docker container that calls a Python process, running docker logs ##### will return nothing, despite events happening inside the container which emit to stdout. Nothing appears in the logs until I run docker stop ######, in which case the expected output is returned. The same is true with docker logs -f #####: nothing appears, even when expected, until after the container is stopped. This is very cumbersome for debugging. Why might this happen? Is there a setting I can change to ensure the logs are updated in real time?
`Docker logs` erroneously appears empty until container stops
This is a known issue and the Container Apps team is working on it. As a workaround, use registry.hub.docker.com as the server value instead of docker.io.
I have just created an Azure Container App and I am trying to link it to a private repository on docker.io. It works if I make it pull a public image but not if it's private, even though I specified all the information. I have also used the automatic "continuous deployment" with GitHub (the Azure portal basically did everything, which one would assume would work). I entered the same information (docker.io, username and password and the image tag, all working when I test in the console); the generated GitHub Action is able to build and push to the Docker registry, proving the user and password are good, but it doesn't want to update the container app either. Here is the latest error I had trying to do it through the GitHub Action: The following field(s) are either invalid or missing. Invalid value: "docker.io/pasc32/companio:latest": GET https:: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasc32/companio Type:repository]]: template.containers.test.image. When I try to do it directly from the portal, it adds a notification at the top, which spins forever; if I refresh the page nothing has changed and the container is back to how it was before, with no trace of the notification in the activity log either. Does anyone have an idea of what the issue is? Thanks!
Azure container app, unable to pull from private registry
Your problem isn't that you're missing the mysqli extension.If you're doing something like this:namespace Listener; class Foo { public function bar() { $conn = new mysqli(...); } }Then PHP will interpretnew mysqli()asnew \Listener\mysqli()because you're currently in the\Listenernamespace. To fix this, you can just explicitly anchormysqli()to the root namespace:$conn = new \mysqli(...);
I'm runningphp:7-fpmin a docker container that is used by my nginx web server. Everything is working nicely except for when I'm trying to instantiate a mysqli connection in my PHP code. I receive the following error:"NOTICE: PHP message: PHP Fatal error: Uncaught Error: Class 'Listener\mysqli' not found in index.php:104Here's my Dockerfile for building the image, where I explicitly install the mysqli extension:FROM php:7-fpm RUN docker-php-ext-install mysqliIt appears to be installed given the phpinfo() output below. Do I need to configure or enable it somehow?
mysqli not found in (php-fpm) docker container
I do the same in my development environment. I have a production Dockerfile that ADDs the project folder and then I run all the tests against it. Since the only difference between the development container and the production container is when the code is added to the container, not the code or settings, they have the same behavior.
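As an illustration of that pattern (the paths, image and service names here are hypothetical, not taken from the question), the production Dockerfile bakes the code in, and a development compose override simply bind-mounts the working copy over the same location:

    # Dockerfile - used to build both the dev and the prod image
    FROM python:3.11-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # code is baked into the image, exactly what ships to production
    COPY . /app
    CMD ["python", "app.py"]

    # docker-compose.override.yml - dev only: mount the live source over the baked-in copy
    services:
      web:
        build: .
        volumes:
          - ./:/app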
I'm a recent user of Docker and I am about to migrate from VMs to containers in my production environment. But then, I suddenly realized that what works perfectly for my dev and QA environments is not ideal for production. On my dev and QA, I mount my versioned project folder into a Python/PHP (you name it) container and I consider this container as a "running service" for my code. This saves me from having huge containers, as the container history doesn't change when I change my code (git commit or else). In production, the ideal case is that I will have clean, self-contained containers with my code inside, not mounted like I do in dev. So, did I get it wrong? How do you do it? Do you use the same containers from dev to prod?
Embed code in docker container or mount it as a volume?
My understanding is that it is not possible todirectlyuse a Cloud Run revision'senvironment variablesin the Dockerfile because the build is managed by Cloud Build, which doesn't know about Cloud Run revision before the deployment.But I was able to use Secret Manager'ssecretsin the Dockerfile.Sources:Passing secrets from Secret Manager tocloudbuild.yaml:https://cloud.google.com/build/docs/securing-builds/use-secretsPassing an environment variable fromcloudbuild.yamltoDockerfile:https://vsupalov.com/docker-build-pass-environment-variables/Quick summary:In your case, forAPP_USRandAPP_PASS:Grant the Secret Manager Secret Accessor (roles/secretmanager.secretAccessor) IAM role for the secret to the Cloud Build service account (see first source).Add anavailableSecretsblock at the end of thecloudbuild.yamlfile (out of thestepsblock):availableSecrets: secretManager: - versionName: env: 'APP_USR' - versionName: env: 'APP_PASS'Pass the secrets to your build step (depends on how you summondocker build, Google's documentation uses 'bash', I use Docker directly):- id: Build name: gcr.io/cloud-builders/docker args: - build - '-f=Dockerfile' - '.' # Add these two `--build-arg` params: - '--build-arg' - 'APP_USR=$$APP_USR' - '--build-arg' - 'APP_PASS=$$APP_PASS' secretEnv: ['APP_USR', 'APP_PASS'] # <=== add this lineUse these secrets as standard environment variables in yourDockerfile:ARG APP_USR ENV APP_USR $APP_USR ARG APP_PASS ENV APP_PASS $APP_PASS RUN pip install https://$APP_USR:[email protected]/*****/master.zip
I have built a containerised python application which runs without issue locally using a.envfile and and adocker-compose.ymlfile compiled withcompose build.I am then able to use variables within the Dockerfile like this.ARG APP_USR ENV APP_USR ${APP_USR} ARG APP_PASS ENV APP_PASS ${APP__PASS} RUN pip install https://${APP_USR}:${APP_PASS}@github.org/*****/master.zipI am deploying to cloud run via a synced bitbucket repository, and have defined under"REVISIONS" > "SECRETS AND VARIABLES",(as described here:https://cloud.google.com/run/docs/configuring/environment-variables) but I can not work out how to access these variables in the Dockerfile during build.As I understand it, I need to create a cloudbuild.yaml file to define the variables, but I haven't been able to find a clear example of how to set this up using the Environment variables defined in cloud run.
How to access cloud run environment variables in Dockerfile
If it can help those in the same situation as me:Docker 19.03Google cloud SDK 288.0.0Important: My user is not in adockeruser group. I then have to prependsudobefore any docker commandWhengcloudanddockerare not using the same config.jsonWhen I use gcloud credential helper:gcloud auth configure-dockerit updates the JSON config file in my $HOME:[/home/{username}/.docker/config.json]. However, when logging out and login again from Docker CLI,sudo docker loginThe warning shows a different path, which makes sense as Isudo-ed:WARNING! Your password will be stored unencrypted in /root/.docker/config.json.sudoeverywhereTo fix it, I did the following steps:# Clear everything sudo docker logout sudo rm /root/.docker/config.json rm /home/{username}/.docker/config.json # Re-login sudo docker login sudo gcloud auth login --no-launch-browser # --no-launch-browser is optional # Check both Docker CLI and gcloud credential helper are here sudo vim /root/.docker/config.json # Just in case sudo gcloud config set project {PROJECT_ID}I can now push my Docker images to both GCR and Docker hub
I am trying to push a Docker image to GCP, but I am still getting this error: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication I followed this https://cloud.google.com/container-registry/docs/quickstart step by step and everything works fine until docker push. It's a clean GCP project. I've already tried: use gcloud as a Docker credential helper: gcloud auth configure-docker; reinstall Cloud SDK and gcloud init; add the Storage Admin role to my account. What am I doing wrong? Thanks for any suggestions.
Google Container Registry permission
Docker itself imposes very little overhead, it's just isolating the process from other processes on the host. However, there are lots of things you can do to degrade the performance of a container:Run it inside Windows/MacOS while only giving the embedded VM a fraction of the memory/CPU of the parent OS.Restrict CPU or memory resources inside the container.Launch a lot of containers on your host. Docker isn't magic, if 10 instances of Java each using 2 gigs of ram bring the host to a crawl outside of container, they won't run any better inside of containers.Networking complications. Each container is by default spun up on an isolated network bridge, where IO may take a little longer with the extra hops. And if your DNS isn't properly configured, you may see extra delays from failed lookups.Bare metal requirements like direct disk access aren't allowed by default in Docker. You can give access to specific devices, but otherwise the containerized version of the app is isolated intentionally.Data in volumes may reside in a less efficient location. By default it's your /var/lib/docker filesystem, but you could easily point this to an NFS mount where the performance would be even worse.Misconfigured DB, e.g. forgetting to create an index.In short, the container is unlikely to be the issue itself, but make sure you're doing an apples to apples comparison.
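A quick way to rule some of these out (the container and volume names below are placeholders) is to check which limits and storage paths are actually in effect:

    # live CPU/memory usage of the database container
    docker stats mydb
    # check whether a memory or CPU limit was set on it (0 means unlimited)
    docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' mydb
    # see where the data volume really lives on disk
    docker volume inspect mydb-data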
Has anyone noticed any performance issues running a database (MySQL or Postgres) in a docker container, I'm told that severe performance degradation occurs.Please advise.
Performance issues running a database in a docker container
By default docker starts a non-login shell. To read the .profile file you need a login shell: docker exec -it <container> ash -l. To read /etc/profile (or another startup file) every time, you have to set the ENV variable. Example Dockerfile: ARG PHPVERSION=7.4 FROM php:$PHPVERSION-fpm-alpine ARG PHPVERSION=7.4 ENV PHPVERSION_ENV=$PHPVERSION # copy composer from official image COPY --from=composer:latest /usr/bin/composer /usr/bin/composer #set $ENV ENV ENV=/etc/profile #copy aliases definition COPY /assets/alias.sh /etc/profile.d/alias.sh
I have written a Dockerfile which uses as its base a private adapted Alpine image that contains an nginx server. Note: Alpine uses ash (BusyBox), not bash. I love to have some shell aliases available when working in the container, and it drives me nuts when they are missing. So I copy a small prepared file to /root/.profile, which works; I can view the file and its contents. But the file is not loaded automatically: only if I manually run . ~/.profile in the container do I have the aliases available. What do I have to do so that my profile is automatically loaded after I start the container and connect to its shell? FROM myprivatealpineimage/base-image-php:7.4.13 ARG TIMEZONE COPY ./docker/shared/bashrc /root/.profile COPY ./docker/shared/ /tmp/scripts/ RUN chmod +x -R /tmp/scripts/ \ && /tmp/scripts/set_timezone.sh ${TIMEZONE}\ && apk update\ && apk add --no-cache git RUN install-ext pecl/apcu pecl/imagick pecl/zip pecl/redis RUN apk add --no-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing gnu-libiconv ENV LD_PRELOAD /usr/lib/preloadable_libiconv.so php WORKDIR /var/www
How to load shell aliases in an alpine docker container with start
This is a whole bunch of questions - let's try to answer them sequentially: 1. Do I really need to specify the volumes twice (inside and outside the services section)? This is not a duplicate specification: outside you declare the volume and inside you specify how to mount it into a container. A volume has an independent life cycle from services. It can be mounted by several services and it will retain data if services are restarted. 2. When the volume is specified outside the service block, it's almost always written as key: {} This key-only notation is the default and does not require any driver configuration. However, if you needed to e.g. connect to NFS, you would have something like: volumes: example: driver_opts: type: "nfs" o: "addr=10.40.0.199,nolock,soft,rw" device: ":/docker/example" Also, please differentiate between bind mounts and regular volumes. While regular volumes are managed independently from services (and containers), e.g. with docker volume ls, bind mounts are mere mappings between the host and the container file system. They are tied to the container they are mounted to. 3. When I run docker-compose down -v it actually only applies to volumes which are not explicitly specified as a folder on the host machine. Yes, this won't remove bind mounts, since bind mounts are mere host-container filesystem mappings and therefore Docker does not create an independent volume entity for them. For deeper understanding, please consider this excerpt from the documentation: Bind mounts have been around since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its absolute path on the host machine. By contrast, when you use a volume, a new directory is created within Docker's storage directory on the host machine, and Docker manages that directory's contents.
In adocker-compose.yml file, do I really need to specify thevolumestwice; insideandoutside a service? If yes, Why? (the docker-compose part of the doc doesn't have much information on that)I have thefeelingthat, in the case shown here where themyappvolume is not explicitly a folder on the host machine, wehaveto set it twice, but if it actuallyisa folder on the host machine, specifying it only inside thefrontendservice block is enough.In addition, when the volume is specified outside the service block, it's almost always written askey:without any actual value (or sometime askey: {}), which makes me confused.Moreover, when I rundocker-compose down -vit actually only applies on volumes which are not explicitly specified as a folder on the host machine, according to the doc:-v, --volumes Remove named volumes declared in the `volumes` section of the Compose file and anonymous volumes attached to containers.So maybe the declaration of a volumeoutsidea service is for making this volume identifiable, hence 'removable'. And on the other hand, it will never be removable if it's not set outside the service?
Why have docker-compose volumes to be declared twice when not pointing to an actual folder on the host?
You can export the data to STDOUT and pipe the result to a file on the client machine: docker exec -it -u database_user_name container_name \ psql -d database_name -c "COPY (SELECT * FROM table) TO STDOUT CSV" > output.csv -c tells psql to execute a given SQL statement when the connection is established. So your command should look like this: docker exec -it -u postgres pgdocker \ psql -d yourdb -c "COPY (SELECT * FROM test) TO STDOUT CSV" > test_1.csv
I'm not sure if this is possible of if I'm doing something wrong since I'm still pretty new to Docker. Basically, I want to export a query result inside PostgreSQL docker container as a csv file to my local machine.This is where I got so far. Firstly, I run my PostgreSQL docker container with this command:sudo docker run --rm --name pg-docker -e POSTGRES_PASSWORD=something -d -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data postgresThen I access the docker container with docker exec to run PostgreSQL command that would copy the query result to a csv file with specified location like this:\copy (select id,value from test) to 'test_1.csv' with csv;I thought that should export the query result as a csv file named test_1.csv in the local machine, but I couldn't find the file anywhere in my local machine, also checked both of these directories:$HOME/docker/volumes/postgres;/var/lib/postgresql/data postgres
Export Query Result as CSV file from Docker PostgreSQL container to local machine
Lets start with this:-XX:+UseG1GC -Xms512m -Xmx2048m -XX:MaxPermSize=256mThat says, use a heap that starts at 0.5Gb and can grow to 2GB, and also a permgen heap of 0.25GB. And that does not include the JVM's other non-heap usage; e.g. memory mapped files, thread stacks, cached JAR files, etc.Then you say that docker is reporting that the container is using 2.416 GB. That is not surprising. 2.42 - 2.25 is 0.17GB, and that is not excessive for non-heap memory usage.Finally, the 735736 RSS value is telling you the resident set size; i.e. the current amount of physical RAM that that process is using. The JVM arguments and the dockerstatscommand are measures of virtual memory size.Why is docker container using all the memory available if its content does not need it? I expected that docker would use a little more memory than myApp... not 100% of the available memory.I think that you are misreading theps auxoutput. The RSS is just the physical memory being used. In fact, the total memory usage of your process is given by the VSZ ... which is 5GB. Now that >does< look large, and it is not obvious why its is that large. But taking it on face value, that implies that Docker isunder-reportingthe containers true memory / virtual memory usage.The other thing is that a Docker container does not isolate an application in the container from resource demands by other things outside of the container. The JVM will be competing for physical RAM with other applications inside and outside of the container.For more information:https://goldmann.pl/blog/2014/09/11/resource-management-in-docker/explains how Docker resource management works, and what it can and cannot do.What is RSS and VSZ in Linux memory management.
I've a container which is running a java application with the following jvm arguments:-XX:+UseG1GC -Xms512m -Xmx2048m -XX:MaxPermSize=256mI'm using docker memory limit option:docker run -it -m 2304m foo bashRunningdocker stats myAppright after the container initialization will give me:CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O myApp 0.17% 660.5 MB/2.416 GB 27.34% 240.8 kB/133.4 kBBut after a few hours I've the following stats:CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O myApp 202.18% 2.416 GB/2.416 GB 100.00% 27.67 GB/19.49 GBAlthough, If I look into the process execution details of the running application inside the container, I have an usage of~735MBand myApp continues to compute requests without any problems:me@docker-container ~]$ ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND me+ 1 0.0 0.0 11636 1324 ? Ss 13:44 0:00 /bin/bash /home/bar/service/start-myApp.sh me+ 6 113 4.5 5014152 735736 ? Sl 13:44 438:46 java -XX:+UseG1GC -Xms512m -Xmx2048m -XX:MaxPermSize=256m -jar myApp-service-1.0-final.jar me+ 481 0.0 0.0 11768 1820 ? Ss 20:09 0:00 bash me+ 497 0.0 0.0 35888 1464 ? R+ 20:10 0:00 ps auxWorthy to mention that I've used jconsole to monitor process6and everything looks good.Why is docker container using all the memory available if its content does not need it? I expected that docker would use a little more memory than myApp... not 100% of the available memory.
Docker stats 100% memory
Firstly, make sure you are logged into hub.docker.comSimpleClick Repositories link (on blue menu bar) on topClick the name of repo to be deletedClick Settings link (on white sub menu bar)Click the 'Delete repository' buttonIn the confirmation dialog box, type the name of your repo to reconfirmClick DeleteDetailedClick "Repositories" link on top menu bar*Click on the repo you want to removeClick "Settings" sub-menuDecide from making it private or to delete and take action.Type the name of the repo to reconfirmWhen you have written the name of the repo, delete button would get enabled. Click it to delete your repo.
How do I delete a repository from Docker Hub entirely?Docker is evolving fast and so is their website. Here is the latest route to deleting your repo from docker hub web interface.
How to delete a repo from Docker Hub
I understand that you want to provide those credentials at build time and get rid of them afterwards. Well, the most secure way to handle this with pip would be by using a multi-stage build process. First, you would declare an initial build image with the file configurations and any dependency that could be needed to download/compile your desired packages; don't worry about the possibility of recovering those files, since you will only use them for the build process. Afterwards define your final image without the build dependencies and copy only the source code you want to run from your project and the dependencies from the build image. The resultant image won't have the configuration files and it's impossible to recover them, since they never were there. FROM python:3.10-slim as build RUN apt-get update RUN apt-get install -y --no-install-recommends \ build-essential gcc WORKDIR /usr/app RUN python -m venv /usr/app/venv ENV PATH="/usr/app/venv/bin:$PATH" [HERE YOU COPY YOUR CONFIGURATION FILES WITH CREDENTIALS] COPY requirements.txt . RUN pip install -r requirements.txt FROM python:3.10-slim WORKDIR /usr/app COPY --from=build /usr/app/venv ./venv [HERE YOU COPY YOUR SOURCE CODE INTO YOUR CURRENT WORKDIR] ENV PATH="/usr/app/venv/bin:$PATH" ENTRYPOINT ["python", "whatever.py"]
I am building a Docker image and need to run pip install vs a private PyPi with credentials. What is the best way to secure the credentials? Using various file configuration options (pip.conf, requirements.txt, .netrc) is still a vulnerability even if I delete them because they can be recovered. Environment variables are also visible. What's the most secure approach?
Securing credentials for private PyPi in Docker
One of the following plugins should work fine:CloudBees Docker Custom Build Environment PluginCloudBees Docker Pipeline PluginI normally run my builds on slave nodes that have docker pre-installed.
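For reference, a minimal sketch of both approaches (the image names are arbitrary): in a freestyle job the Execute Shell step can simply contain docker run hello-world as long as the agent has Docker installed and the jenkins user can access the Docker socket, and with the Docker Pipeline plugin a Jenkinsfile stage might look like this:

    // Jenkinsfile sketch using the Docker Pipeline plugin
    pipeline {
      agent any
      stages {
        stage('Smoke test') {
          steps {
            // same command as in the question, run on the agent
            sh 'docker run --rm hello-world'
          }
        }
        stage('Build inside a container') {
          steps {
            script {
              // run a step inside a throwaway container
              docker.image('node:18-alpine').inside {
                sh 'node --version'
              }
            }
          }
        }
      }
    }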
I'm new to Jenkins and I have been searching around but I couldn't find what I was looking for.I'd like to know how to run docker command in Jenkins (Build - Execute Shell):Example:docker run hello-worldI have set Docker Installation for "Install latest from docker.io" in Jenkins Configure System and also have installed several Docker plugins. However, it still didn't work.Can anyone help me point out what else should I check or set?John
How to run a docker command in Jenkins Build Execute Shell
I wasn't using docker-php-ext-install, which is required when adding PHP extensions within the container... FROM php:7-fpm-alpine # install extensions needed for Laravel RUN apk update \ && apk add libmcrypt-dev \ && docker-php-ext-install mcrypt mysqli pdo_mysql \ && rm /var/cache/apk/*
I'm looking at setting up laravel on an fpm-alpine container. Running into a snag where the below Dockerfile is producing some errors...FROM php:7-fpm-alpine # install extensions needed for Laravel RUN apk --update add \ php7-mysqli \ php7-mcrypt \ php7-mbstring \ rm /var/cache/apk/*Errors produced are:Building fpm Step 1 : FROM php:7-fpm-alpine ---> 9e6811cb8bac Step 2 : RUN apk --update add php7-mysqli php7-mcrypt php7-mbstring rm /var/cache/apk/* ---> Running in 87364957eb57 fetch http://dl-cdn.alpinelinux.org/alpine/v3.3/main/x86_64/APKINDEX.tar.gz fetch http://dl-cdn.alpinelinux.org/alpine/v3.3/community/x86_64/APKINDEX.tar.gz ERROR: unsatisfiable constraints: /var/cache/apk/* (missing): required by: world[/var/cache/apk/*] php7-mbstring (missing): required by: world[php7-mbstring] php7-mcrypt (missing): required by: world[php7-mcrypt] php7-mysqli (missing): required by: world[php7-mysqli] rm (missing): required by: world[rm] ERROR: Service 'fpm' failed to build: The command '/bin/sh -c apk --update add php7-mysqli php7-mcrypt php7-mbstring rm /var/cache/apk/*' returned a non-zero code: 5I can search for these package names andfind them on the alpine linux web site. Any thoughts on how I can work around this? It's like it's not updating the apt cache... but adding an LS I can see contents there:Building fpm Step 1 : FROM php:7-fpm-alpine ---> 9e6811cb8bac Step 2 : RUN apk update ---> Using cache ---> 9ef09f3aa2a2 Step 3 : RUN ls /var/cache/apk ---> Running in e126a083a306 APKINDEX.5a59b88b.tar.gz APKINDEX.7c1f02d6.tar.gzAny ideas on what I can do to resolve this?
ERROR: unsatisfiable constraints - on php:7-fpm-alpine
My specific question is if I can control my container so that it shows 'starting' until the setup is ready and that the health check can somehow be started immediately after that? I don't think that is possible with just K8s or Docker. Containers are not designed to communicate with the Docker daemon or Kubernetes to tell them that their internal setup is done. If the application takes time to set up, you could play with the readiness and liveness probe options of Kubernetes. You may indeed configure the readinessProbe to perform the initial check after a specific delay. For example, to specify 120 seconds as the initial delay: readinessProbe: tcpSocket: port: 8080 initialDelaySeconds: 120 periodSeconds: 5 Same thing for the livenessProbe: livenessProbe: httpGet: path: /healthz port: 8080 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 periodSeconds: 3 For Docker alone, while not as configurable, you could make it work with the --health-start-period parameter of the docker run subcommand: --health-start-period : Start period for the container to initialize before starting health-retries countdown For example you could specify a large value such as: docker run --health-start-period=120s ...
Background: My Docker container has a very long startup time, and it is hard to predict when it is done. And when the health check kicks in, it first may show 'unhealthy' since the startup is sometimes not finished. This may cause a restart or container removal from our automation tools.My specific question is if I can control my Docker container so that it shows 'starting' until the setup is ready and that the health check can somehow be started immediately after that? Or is there any other recommendation on how to handle states in a good way using health checks?Side question: I would love to get a reference to how transitions are made and determined during container startup and health check initiating. I have tried googling how to determine Docker (container) states but I can't find any good reference.
How can i get my container to go from starting -> healthy
Please refer to https://docs.docker.com/network/network-tutorial-standalone/ It should be configured as: ServerName localhost ProxyPass / http://172.17.0.1:8087 or: ServerName localhost ProxyPass / http://ip_address_of_my-server-container:8087 Use docker inspect container_id to see the IP address of the container.
I am trying to setup apache in front of a server application (JIRA) on my local machine. Somewhat based on:https://mimiz.github.io/2017/05/18/Configure-docker-httpd-image.htmlBoth apache and the server application are run as docker containers.Starting my server application works fine and I can access the web-ui at:http://localhost:8087But when I start apache and try to access it in my browser:http://localhost:80I get:Service Unavailableand when I look at the logs it says:H00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.18.0.5. Set the 'ServerName' directive globally to suppress this message [Mon Apr 01 09:08:50.408757 2019] [mpm_event:notice] [pid 1:tid 140140879032384] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Mon Apr 01 09:08:50.409320 2019] [core:notice] [pid 1:tid 140140879032384] AH00094: Command line: 'httpd -D FOREGROUND' [Mon Apr 01 09:09:53.094495 2019] [proxy:error] [pid 8:tid 140140638869248] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8087 (localhost) failed [Mon Apr 01 09:09:53.094571 2019] [proxy_http:error] [pid 8:tid 140140638869248] [client 172.18.0.1:53110] AH01114: HTTP: failed to make connection to backend: localhostThis ishttpd.confdetails I have enabled/added:LoadModule proxy_module modules/mod_proxy.so #LoadModule proxy_connect_module modules/mod_proxy_connect.so #LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so ... ServerName www.app1.lol ProxyPass / http://localhost:8087 And this is how I start my server application:docker run --network sample-network -p 0.0.0.0:8087:8087 -ti -d --name my-server-container my-server-imageAnd this is how I start apache:docker run -d -p 80:80 --network sample-network --name my-apache-container my-apache-imageIs the problem my configuration in thehttpd.conffile or in the docker run commands (or a combination of both)?
AH01114: HTTP: failed to make connection to backend: localhost (apache as docker container)
There is no easy way to processARGsin docker-compose file from a subshell. But you can do this withdocker buildcommand and docker-compose with key-value.using the docker-compose command:MY_KEY=$(aws ssm get-parameter --name "test" --output text --query Parameter.Value) docker-compose build --no-cachedocker-composeversion: "2.3" services: base: build: context: . args: - PYTHON_ENV=developmen - API_KEY=${MY_KEY}Define ARGs in Dockerfile and run subshell during build time to get the SSM parameter value.FROM alpine ARG API_KEY=default ENV API_KEY="$API_KEY" RUN echo "API_KEY is : $API_KEY"During build get the value usingaws-clidocker build --no-cache --build-arg API_KEY="$(aws ssm get-parameter --name "test" --output text --query Parameter.Value)" -t myimage .With docker-compose you can also try with system environment variable.version: "2.3" services: base: build: context: . args: - PYTHON_ENV=developmen - API_KEY=${MY_KEY}Export it as an ENV before docker-compose.export MY_KEY=$(aws ssm get-parameter --name "test" --output text --query Parameter.Value) && docker-compose build --no-cache
I am creating a docker compose file which requires some environment variables. One of the env var is from aws ssm parameter. So I need to query the value from aws ssm when I build the docker image and put the value as one of the environment variable. How can I do that in docker compose file?version: "2.3" services: base: build: context: . args: - PYTHON_ENV=developmen - API_KEY= # find the value from ssm
How can I set runtime variable for docker compose environment variable
Per user, you can configure this in the $HOME/.docker/config.json file. Add a JSON entry similar to: { "auths": { ... }, "detachKeys": "ctrl-x,x" } The "auths" line is just shown to give a relative location in the JSON; ignore it if you don't have any existing logins stored in this file. See this documentation for more details.
A Docker container's detach key sequence by default is Ctrl-p followed by Ctrl-q. There is an option to set the key sequence when starting a container using --detach-keys "" but I am looking for a permanent change. Is there a way to change this key sequence to something else?
How do you change default detach key sequence in docker?
The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a specific port or the -P flag to publish all exposed ports. Your docker run command should look like this: $ docker run -p 3000:3000 -t prasannarb/example-node-service Additionally, the docker inspect command gives you the container IP address, not the host IP address.
After dockerizing my demo Express js app and starting the container, I am unable to access the service due to a"Connection Timeout"Url for the for project before dockerizing (Which produced "Hello world!" on the browser):http://localhost:3000/cars/example/fetchResultUrl for the project after starting the docker container (Gives a "172.17.0.2 took too long to respond.")http://172.17.0.2:3000/cars/example/fetchResultDockerfileFROM node:argon # Create app directory RUN mkdir -p /usr/src/app WORKDIR /usr/src/app # Install app dependencies COPY package.json /usr/src/app/ RUN npm install # Bundle app source COPY . /usr/src/app EXPOSE 3000 CMD [ "node", "server.js" ]I built my docker image likedocker build -t prasannarb/example-node-serviceI started my docker image as a container likedocker run -t prasannarb/example-node-serviceThen when I,docker ps, it gives meCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7cf955f5d799 prasannarb/example-node-service "node server.js" About a minute ago Up About a minute 3000/tcp thirsty_perlmandocker inspect 7cf955f5d799gives me"IPAddress": "172.17.0.2"Since I did not explicitly give a port to start my container, I was assuming it would take the same as exposed by my docker container (3000) which is the same port where my service would listen too.What am I doing wrongly here?
Dockerized Node js app does not start
If you don't want to use a docker registry, you have to import the locally built image into the k3d cluster:k3d image import [IMAGE | ARCHIVE [IMAGE | ARCHIVE...]] [flags]But don't forget to configure in your deployment:imagePullPolicy: Never
Just study the core of K8S on local machine (Linux Mint 20.2).Created one node cluster locally with:k3d cluster create myclusterAnd now I want to run spring boot application in a container.I build local image:library:0.1.0And here is snippet fromDeployment.yml:spec: terminationGracePeriodSeconds: 40 containers: - name: 'library' image: library:0.1.0 imagePullPolicy: IfNotPresentDespite the fact that image is already built:docker images REPOSITORY TAG IMAGE ID CREATED SIZE library 0.1.0 254c13416f46 About an hour ago 462MBStarting the container fails:pod/library-867dfb64db-vndtj Pulling image "library:0.1.0" pod/library-867dfb64db-vndtj Failed to pull image "library:0.1.0": rpc error: code = Unknown desc = failed to pull and unpack image "library:0.1.0": failed to resolve reference "library:0.1.0": failed to do request: Head "https://...com/v2/library/manifests/0.1.0": x509: certificate signed by unknown authority pod/library-867dfb64db-vndtj Error: ErrImagePull pod/library-867dfb64db-vndtj Error: ImagePullBackOff pod/library-867dfb64db-vndtj Back-off pulling image "library:0.1.0"How to resolve local images visibility for k3d cluster?Solution:Update theDeployment.yml:spec: terminationGracePeriodSeconds: 40 containers: - name: 'library-xp' image: xpinjection/library:0.1.0 imagePullPolicy: NeverAnd import the image to cluster:k3d image import xpinjection/library:0.1.0 -c mycluster
k3d tries to pull Docker image instead of using the local one
Eventually, I managed to find a solution.Here it is:environment: - JAVA_TOOL_OPTIONS= -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
I got this list of JVM params from the following answerhttps://stackoverflow.com/a/35108974/7809534:-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=falseAnd I would like to run them in docker-compose.This is what I tried:environment: - JAVA_TOOL_OPTIONS="-Dcom.sun.management.jmxremote" - JAVA_TOOL_OPTIONS="-Dcom.sun.management.jmxremote.port=9010" - JAVA_TOOL_OPTIONS="-Dcom.sun.management.jmxremote.local.only=false" - JAVA_TOOL_OPTIONS="-Dcom.sun.management.jmxremote.authenticate=false" - JAVA_TOOL_OPTIONS="-Dcom.sun.management.jmxremote.ssl=false"But it is not working.How can I do it?
How to run multiple JVM params in docker-compose?
Containers each have their own network namespace by default. Compose will place all containers on a shared network and set a DNS alias for the service name. So to connect between containers, all you need to do is point to your service name instead of 127.0.0.1 (assuming mysql is your service name): "DefaultConnection": "Server=mysql;Database=mydatabase;UserId=SA;Password=mydbpassword" This is more portable and handles containers scaling/updating better than attaching containers to the same network namespace.
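A rough compose sketch of that wiring (the service names and password are illustrative; the database service name is what goes into the connection string's Server= value):

    # docker-compose.yml sketch - the app reaches the database at the host name "mysql"
    version: "3"
    services:
      webapp:
        build: .
        ports:
          - "9090:9090"
        depends_on:
          - mysql
      mysql:
        image: microsoft/mssql-server-linux:2017-latest
        environment:
          - ACCEPT_EULA=Y
          - SA_PASSWORD=mydbpassword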
I have a.net core 2.0project which usesmssql server. I have Created adocker imageand container for my.net core 2.0and running on9090:9090. I created it like below.docker container run --name mytestapp --publish 9090:9090 --detach my_.netapp_image_nameand below is my connection string in .net core 2.0 app."DefaultConnection": "Server=127.0.0.1;Database=mydatabase;UserId=SA;Password=mydbpassword"beforethis, I created a container formssql serverwith below,docker container run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=' \ -p 1433:1433 --name sql1 \ -d microsoft/mssql-server-linux:2017-latestmy .net core app has seeds for database. each time it gives me an error saysUnhandled Exception: System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 35 - An internal exception was caught) ---> System.AggregateException: One or more errors occurred. (Connection refused 127.0.0.1:1433) ---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException: Connection refused 127.0.0.1:1433NOTE: this works fine when I run my .net app via IDE(visual studio) and use db as docker mssql container. I ran these two containers separately. then I tried to run usingdocker-compose, but didn't work.What am I doing wrong here. hope your help with this.
How to communicate between two docker containers (mssql and .net core app) got Connection refused 127.0.0.1:1433
It seems the JAR file is not readable by the jboss user (the user coming from the parent image). The postgresql-9.4-1201.jdbc41.jar is added under the root user - find details in this GitHub discussion. You could either add permissions to the JAR file before adding it to the image, add permissions to the JAR file in the image after adding it, or change ownership of the file in the image. The simplest solution could be the first one. The other 2 solutions also need switching the user to root (USER root in the Dockerfile) and then back to jboss.
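For example, the third option could look roughly like this in the Dockerfile from the question (a sketch, not tested):

    FROM wildflyext/wildfly-camel
    RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
    ADD postgresql-9.4-1201.jdbc41.jar /tmp/
    ADD config.sh /tmp/
    ADD batch.cli /tmp/
    # files ADDed above end up owned by root; hand them to the jboss user
    USER root
    RUN chown jboss:jboss /tmp/postgresql-9.4-1201.jdbc41.jar /tmp/config.sh /tmp/batch.cli && \
        chmod +x /tmp/config.sh
    USER jboss
    RUN /tmp/config.sh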
I'm trying to create a Wildfly docker image with a postgres datasource.When I build the dockerfile it always fails with Permission Denied when I try to install the postgres module.My dockerfile looks look this:FROM wildflyext/wildfly-camel RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent ADD postgresql-9.4-1201.jdbc41.jar /tmp/ ADD config.sh /tmp/ ADD batch.cli /tmp/ RUN /tmp/config.shWhich calls the following:#!/bin/bash JBOSS_HOME=/opt/jboss/wildfly JBOSS_CLI=$JBOSS_HOME/bin/jboss-cli.sh JBOSS_MODE=${1:-"standalone"} JBOSS_CONFIG=${2:-"$JBOSS_MODE.xml"} function wait_for_wildfly() { until `$JBOSS_CLI -c "ls /deployment" &> /dev/null`; do sleep 10 done } echo "==> Starting WildFly..." $JBOSS_HOME/bin/$JBOSS_MODE.sh -c $JBOSS_CONFIG > /dev/null & echo "==> Waiting..." wait_for_wildfly echo "==> Executing..." $JBOSS_CLI -c --file=`dirname "$0"`/batch.cli --connect echo "==> Shutting down WildFly..." if [ "$JBOSS_MODE" = "standalone" ]; then $JBOSS_CLI -c ":shutdown" else $JBOSS_CLI -c "/host=*:shutdown" fiAndbatch module add --name=org.postgresql --resources=/tmp/postgresql-9.4-1201.jdbc41.jar --dependencies=javax.api,javax.transaction.api /subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource) run-batchThe output when building is:==> Starting WildFly... ==> Waiting... ==> Executing... Failed to locate the file on the filesystem copying /tmp/postgresql-9.4-1201.jdbc41.jar to /opt/jboss/wildfly/modules/org/postgresql/main/postgresql-9.4-1201.jdbc41.jar: /tmp/postgresql-9.4-1201.jdbc41.jar (Permission denied)What permissions are required, and where do I set the permission(s)?Thanks
How to add module to Wildfly using CLI
The -v volume parameter expects the path to be absolute. You need to pass the full path to the folder, like /var/share/Volume, not just a relative path as you did with Volume. I use this trick when I need a relative path: -v $(pwd)/Volume:/data/Volume
Hello, I have a problem with sharing resources with Docker. I have the folder Documents/Volume/ and in the folder Volume I have the file data.txt. Now when I run an image like this: docker run -v /Documents/Volume:/Volume -it busybox I would expect to see the file data.txt in the folder Volume, but the file is missing. So I create a new file in the folder Volume with the command: echo "Hello world" > test.txt Now I'm expecting the file test.txt to be visible in /Documents/Volume/ Why can't I see files created in Docker, and why can't I see the host OS files from Docker? Probably I'm missing something.
Docker shared folder with Linux
You could set the environment variables DEBIAN_FRONTEND=noninteractive and DEBCONF_NONINTERACTIVE_SEEN=true in your Dockerfile, before RUN sudo apt-get install php libapache2-mod-php -y. Your Dockerfile should look like this: FROM ubuntu:18.04 RUN apt-get update && \ apt-get install -y --no-install-recommends apt-utils && \ apt-get -y install sudo RUN sudo apt-get install apache2 -y RUN sudo apt-get install mysql-server -y ## for apt to be noninteractive ENV DEBIAN_FRONTEND noninteractive ENV DEBCONF_NONINTERACTIVE_SEEN true ## preseed tzdata, update package index, upgrade packages and install needed software RUN echo "tzdata tzdata/Areas select Europe" > /tmp/preseed.txt; \ echo "tzdata tzdata/Zones/Europe select Berlin" >> /tmp/preseed.txt; \ debconf-set-selections /tmp/preseed.txt && \ apt-get update && \ apt-get install -y tzdata RUN sudo apt-get install php libapache2-mod-php -y RUN rm -rf /var/www/html/ COPY . /var/www/html/ WORKDIR /var/www/html/ EXPOSE 80 RUN chmod -R 777 /var/www/html/app/tmp/ CMD systemctl restart apache2 You should change Europe and Berlin to whatever you want.
This question already has answers here: How to fill user input for interactive command for "RUN" command? (2 answers) Closed 1 year ago. I am writing a Dockerfile for my PHP application, and instead of taking one from Docker Hub I am creating it from scratch. E.g.: FROM ubuntu:18.04 RUN apt-get update && \ apt-get install -y --no-install-recommends apt-utils && \ apt-get -y install sudo RUN sudo apt-get install apache2 -y RUN sudo apt-get install mysql-server -y RUN sudo apt-get install php libapache2-mod-php -y RUN rm -rf /var/www/html/ COPY . /var/www/html/ WORKDIR /var/www/html/ EXPOSE 80 RUN chmod -R 777 /var/www/html/app/tmp/ CMD systemctl restart apache2 At this step: RUN sudo apt-get install php libapache2-mod-php -y I get stuck, because it asks for user input, like: Please select the geographic area in which you live. Subsequent configuration questions will narrow this down by presenting a list of cities, representing the time zones in which they are located. 1. Africa 4. Australia 7. Atlantic 10. Pacific 13. Etc 2. America 5. Arctic 8. Europe 11. SystemV 3. Antarctica 6. Asia 9. Indian 12. US Geographic area: I am not able to move past this. I tried this: RUN sudo apt-get install php libapache2-mod-php -y 9 But no result, please help.
how can i pass arguments or bypass it in docker build process? [duplicate]
The problem was in the netstat command; after adding the -anp flags, the ports are listed. $ sudo netstat -anp | grep 8080 tcp6 0 0 :::8080 :::* LISTEN 16341/docker-proxy
For some reasonnetstatis not listing ports exposed by docker. As suggestedhereI usedEXPOSEfor both ports 8080 and 5050. But none of them is visible from host.Dockerfile... FROM openjdk:11-jre-slim COPY --from=build /usr/src/app/api/target/track-metadata-api-*.jar /app/track-metadata-api.jar WORKDIR /app EXPOSE 8080 5050 CMD java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5050 -jar track-metadata-api.jardocker ps$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a6d3381a992d track-metadata_track-metadata "/bin/sh -c 'java -a…" 7 minutes ago Up 7 minutes 0.0.0.0:5050->5050/tcp, 0.0.0.0:8080->8080/tcp track-metadata_track-metadata_1netstat & curl$ sudo netstat --all | grep 8080 # returns nothing $ curl http://localhost:8080/v1/track-metadata/filtered [{"authorName":"AC/DC","duration":208,"id":1,"tags":"#rock","trackName":"Highway to Hell"},{"authorName":"Sum41","duration":209,"id":2,"tags":"#rock","trackName":"War"},{"authorName":"Ziggy Marley","duration":220,"id":3,"tags":"#ragge","trackName":"Beach in Hawaii"}]Docker & Ubuntu version$ docker --version Docker version 18.06.1-ce, build e68fc7a $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 18.10 Release: 18.10Codename: cosmic
Netstat not showing ports exposed by docker
You may read lines fromChannelFile(http://docs.paramiko.org/en/2.4/api/channel.html?highlight=stdout#paramiko.channel.ChannelFile).Example:stdin, stdout, stderr = client.exec_command('docker run ') while True: line = stdout.readline() if not line: break print(line, end="")
This question already has answers here:Paramiko with continuous stdout(2 answers)Closed2 years ago.NOTE: I have seen other posts on this, but not a single post can explain the answer, nor do they have one that works.Is there a way to get the output ofexec_command, specifically forexec_command('docker run ')in real-time for the Paramiko package?
Real-time output for Paramiko exec_command [duplicate]
You can executeosm2pgsqloutside of Docker:-H|--host Database server host name or socket location.As well aspsql:-h, --host=HOSTNAME database server host or socket directoryLike this:psql -h dockerIP -U postgres -d mydb -c 'create extension postgis' osm2pgsql -H dockerIP -U postgres -d mydb -s -S ./osm_stylesheet /home/ramnikov/Downloads/hessen-latest.osm
I am trying to use Docker, so I installed the PostgreSQL image in Docker. Until now, when I imported OSM data into PostgreSQL I used these commands: psql -U postgres mydb CREATE EXTENSION postgis; osm2pgsql -U postgres -d mydb -s -S ./osm_stylesheet /home/ramnikov/Downloads/hessen-latest.osm How can I do the same inside Docker after this command $ sudo docker exec -it postgresql sudo -u postgres psql or before this command? Tnx Andrey
Import osm data in Docker postgresql
To get rid of "dangling" images, run the following:$ docker rmi $(docker images -q -f dangling=true)That should clear out all the images marked "none". Be aware however, that images will share base layers, so the total amount of diskspace used by Docker will be considerably less than what you get by adding up the sizes of all your images.
docker ps -aq shows only 7-9 images. /var/lib/docker/graph shows me a large number of images. When I create a file, I get a write error because the filesystem is full. I tried to create a symbolic link, but I am not able to move all the Docker data. Is it OK to remove everything under /var/lib/docker/graph? What are the other possibilities besides creating a symbolic link and extending the disk? I would prefer deleting unnecessary things. 02a16288ef14 6 days ago 773.3 MB 21a606deee7e 6 days ago 773.3 MB 8a38f2888018 6 days ago 773.2 MB f41395b7637d 6 days ago 773.3 MB 8b82d707167c 6 days ago 773.3 MB
Docker images eats up lots of space?
You are running your server inside a shell, and the shell is the process receiving the signals. Your server doesn't exit until you force the shell to quit.When you use the "shell" form of CMD, it starts your server as an argument to/bin/sh -c. In order to exec the server binary directly, you need to provide an array of arguments to either CMD or ENTRYPOINT, starting with the full path of the executable.CMD ["/go/bin/simple_server"]A note from ENTRYPOINT in theDockerfile docs:The shell form prevents any CMD or run command line arguments from being used, but has the disadvantage that your ENTRYPOINT will be started as a subcommand of /bin/sh -c, which does not pass signals.
I am trying to run servers written in Go inside Docker containers. For example: package main import "net/http" func main() { http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("Hello")) }) http.ListenAndServe(":3000", nil) } If I run this code on my local machine, I can send it a SIGINT with Ctrl-C and it will close the application. When I run it inside a Docker container, I can't seem to kill it with Ctrl-C. # Dockerfile FROM ubuntu:14.04 RUN apt-get update && apt-get -y upgrade RUN apt-get install -y golang ENV GOPATH /go COPY . /go/src/github.com/ehaydenr/simple_server RUN cd /go/src/github.com/ehaydenr/simple_server && go install CMD /go/bin/simple_server I then proceeded to use docker to send signals to the container. docker kill --signal=INT 9354f574afd4 Still running... docker kill --signal=TERM 9354f574afd4 Still running... docker kill --signal=KILL 9354f574afd4 Finally dead. I'm not catching or ignoring any signals in my code. I've even tried augmenting the code above to catch signals and print them out (which works on my host, but in the container it's as if the signals never got to the program). Has anyone experienced this before? I haven't tried something like this in another language, but I am able to kill servers (e.g. mongo, nginx) using Ctrl-C while they're in a Docker container. Why isn't Go getting the signals? Not sure if this makes a difference, but I am on OSX and using docker-machine. Any help is much appreciated.
Sending signals to Golang application in Docker
You should assume systemd and systemctl just don't work in Docker, and find another approach to whatever your higher-level goals are. Best practices are to run one service and one service only in a Docker container, and to use multiple containers if you need multiple coordinating services; if you really must run multiple things in the same container thensupervisordis a common process manager.The biggest problem with systemd in Docker is that it by default wants to control a lot of things. Look at the graphic on thesystemd home page: it wants to do a bunch of kernel-level setup, manage filesystems, and launch several services, all of which have already been done on the host and are unnecessary in a Docker container. The "easy" way to run systemd in Docker involves giving it permission to reconfigure your host; the link you provide has a "hard" way that involves deleting most of its control files.In a Dockerfile context there's also a problem that each RUN line starts from a clean slate with no processes running at all. So yoursystemctl start ...command doesn't work because the systemd init isn't running; and even if it did, when that RUN command finished, the process would go away and the service wouldn't be running on the next line.You might be able to finda prebuilt syslog-ng imageby typing "syslog" into the search box onhttps://hub.docker.com, which would dodge this issue. It also might work to install syslog-ng on a CentOS base as you do, but skip systemd entirely and just run the service as the primary command the image runsCMD ["syslog-ng", "-F"]
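A minimal sketch of that last approach, reusing the copr repository from the question (untested; exact package paths may differ):

    FROM centos:7
    # fetch the same copr repo file used in the question, then install syslog-ng - no systemd involved
    RUN curl -o /etc/yum.repos.d/czanik-syslog-ng314-epel-7.repo \
            https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng314/repo/epel-7/czanik-syslog-ng314-epel-7.repo && \
        yum -y install epel-release && \
        yum -y install syslog-ng && \
        yum clean all
    # run syslog-ng in the foreground as the container's main process
    CMD ["/usr/sbin/syslog-ng", "-F"]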
I am trying to build a CentOS image that uses the systemctl command, but each time I build it I get this error: Step 5/7 : RUN systemctl enable syslog-ng ; systemctl start syslog-ng ---> Running in 8f5a357895e7 Failed to get D-Bus connection: Operation not permitted The command '/bin/sh -c systemctl enable syslog-ng ; systemctl start syslog-ng' returned a non-zero code: 1 My Dockerfile: FROM centos_systemctl:latest RUN yum -y update RUN yum -y install epel-release ; \ yum -y install vim ; \ yum -y install wget ; \ yum -y install rsync ; \ yum -y groupinstall "Development tools" # Install syslog-ng 3.14 RUN cd /etc/yum.repos.d/ ; \ wget https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng314/repo/epel-7/czanik-syslog-ng314-epel-7.repo ; \ yum -y install syslog-ng RUN systemctl enable syslog-ng ; systemctl start syslog-ng RUN yum -y remove rsyslog # COPY config syslog-ng CMD ["/usr/sbin/init"] centos_systemctl:latest is built according to this: https://github.com/docker-library/docs/tree/master/centos#systemd-integration Does someone know what I am doing wrong? Thanks,
Docker CentOS systemctl not permitted
One reason I can think of is forusing a tool or commandthat is not available in your container. This example below comes directly from thedocker rundocs:NETWORK: CONTAINERExample running a Redis container with Redis binding tolocalhostthen running theredis-clicommand and connecting to the Redis server over thelocalhostinterface.$ docker run -d --name redis example/redis --bind 127.0.0.1 $ # use the redis container's network stack to access localhost $ docker run --rm -it --network container:redis example/redis-cli -h 127.0.0.1In a similar way, one can use this technique todebuga container. For example, if your container doesn't havetcpdump, you can create an image which has it:docker build -t tcpdump - <<EOF FROM ubuntu RUN apt-get update && apt-get install -y tcpdump CMD tcpdump -i eth0 EOFandruna container to debug your app:docker run --rm --net=container:my-app tcpdumpIf your question was more aboutKubernetes, a few interesting links are:The Kubernetes Network ModelWhat is the role of apausecontainer?Understanding kubernetes networking: pods
Why would you connect two docker containers via network namespace, and not just through one network?As far as I know the only difference is that you can call the other container using localhost. I don't see any use case where this would be necessary.Does anyone have experience with this?
Why would anyone use the same network namespace for two docker containers?
You are right not to check in the node_modules folder; it is automatically populated when you run npm install. This should be part of your build pipeline in GitLab CI. The pipeline allows multiple steps and the ability to pass artifacts through to the next stage. In your case you want to save the node_modules folder that is created by running npm install; you can then use the dependencies for tests or deployment. Since npm v5 there is a lockfile to make sure what you are running locally will be the same as what you are running on the server. Also, you can use something like Renovate to automatically update your dependencies if you want to fix them and automatically manage security updates (Renovate is open source so it can be run on GitLab). A really simple GitLab CI pipeline could be: # .gitlab-ci.yml stages: - build - deploy build: stage: build script: - npm install artifacts: name: "${CI_BUILD_REF}" expire_in: 10 mins paths: - node_modules deploy: stage: deploy script: - some deploy command
I am new to Node.js and I was trying to deploy a Node.js project via GitLab CI. But after spotting the build error in the pipeline I realized I had added the node_modules folder to .gitignore, so I am not pushing node_modules to GitLab. The node_modules folder is 889MB locally and there is no way I will push it, so what approach should I use to get the node_modules folder from somewhere else? I.e. the node_modules path is always present and accessible on the remote server! Do I need to include that path in package.json? Can node_modules be maintained by using Docker? Then how would I keep it up to date for every project?
Deploying nodejs project from gitlab ci
This seems to have now resolved itself. Quite possibly it was caused by a problem at docker's end.
I'm new to docker and have followed the installation instructions on their sitehere.The installation completed successfully:docker -v Docker version 1.8.1, build d12ea79but when I try to runsudo docker run hello-worldI get the following:Unable to find image 'hello-world:latest' locally latest: Pulling from library/hello-world 535020c3e8ad: Pulling fs layer af340544ed62: Layer already being pulled by another client. Waiting. af340544ed62: Layer already being pulled by another client. Waiting.This then continues to hang indefinitely.I have tried restarting the service and my entire machine. I always get the same problem.Any idea what's causing this or how to resolve?
Error on Docker Pull - "Layer already being pulled by another client"
Add stderr/stdout to the logging stack in config/logging.php. This was discussed before here, and Taylor added an example of stderr output (php://stderr) in the config/logging.php shipped with Laravel: https://github.com/laravel/ideas/issues/126 Or just change the .env LOG_CHANNEL; quoting the original comment (https://github.com/laravel/ideas/issues/126#issuecomment-438548169): In recent versions (5.6+) the default config/logging.php appears to include a stderr config, so you can just inject a LOG_CHANNEL=stderr environment variable into the container. This will redirect all errors/logs, based on your logging level, to docker logs.
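In practice that means something like the following (the container name is a placeholder). If catch_workers_output = yes is set in www.conf, as it is in this setup, whatever PHP writes to stderr should end up in the container output:

# .env (or an environment entry for the app service in docker-compose)
LOG_CHANNEL=stderr
# then read the Laravel log together with the php-fpm log
docker logs -f my-php-fpm-container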
During developing we met some problems with getting the real error log of the code.Architecturenginx -> php-fpm with laravelProblemcan't get the logs of laravelEnviromentimage php:7.2.8-fpm-alpine3.7docker 18.06.1-celaravel 5.5www.conf[www] user = www-data group = www-data listen = 127.0.0.1:9000 clear_env = no catch_workers_output = yes pm = dynamic pm.max_children = 200 pm.start_servers = 80 pm.min_spare_servers = 50 pm.max_spare_servers = 80 pm.max_requests = 250 request_terminate_timeout = 60 slowlog = /var/log/error.log php_flag[display_errors] = on php_admin_value[error_log] = /var/log/error.log php_admin_flag[log_errors] = on php_value[session.save_handler] = files php_value[session.save_path] = /usr/local/lib/session php_value[soap.wsdl_cache_dir] = /usr/local/lib/wsdlcache ;php_value[opcache.file_cache] = /usr/local/lib/opcache ;monitoring pm.status_path = /phpfpm_status ping.path = /phpfpm_ping ping.response = pongphp.inierror_log = "/var/log/error.log" error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT & ~E_NOTICE display_errors = On display_startup_errors = On ...php-fpm.confinclude=/usr/local/etc/php-fpm.d/*.conf [global] error_log = "/var/log/error.log" log_level = notice events.mechanism = epolli already add full competence to the file /var/log/error.log & access.log right now,i only get php-fpm log in access.log and error.log/var/log # cat error.log [20-Mar-2019 06:08:34] NOTICE: fpm is running, pid 9 [20-Mar-2019 06:08:34] NOTICE: ready to handle connections /var/log # cat access.log 172.28.0.5 - 20/Mar/2019:06:34:12 +0000 "GET /index.php" 200 172.28.0.5 - 20/Mar/2019:06:34:18 +0000 "POST /index.php" 200 /var/log # pwd /var/loglooking for answers
How can I get Laravel logs in Docker behind php-fpm?
First: docker login related to Artifactory -> Configurations -> HTTP Settings I used "Docker access method" as "Repository path"docker login -u admin -p **** x.x.x.x:8081Second: Since i use HTTP, this ip "x.x.x.x:8081" should be added to "insecure-registries" in Docker client.or just add it to insecure registries in ~/.docker/config.json like below:{ "auths": { "x.x.x.x:8081": {} }, "HttpHeaders": { "User-Agent": "Docker-Client/18.09.0 (windows)" }, "credsStore": "wincred" }and then restart docker
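One note: on a plain Docker engine the insecure-registry whitelist normally lives in the daemon configuration rather than in ~/.docker/config.json. A sketch, assuming a Linux host and the registry address from this setup (Docker for Windows exposes the same setting in its daemon settings UI):

# /etc/docker/daemon.json
{
  "insecure-registries": ["x.x.x.x:8081"]
}
# restart the daemon afterwards
sudo systemctl restart docker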
I downloaded artifactory 6.6.0 on remote desktop with ip (x.x.x.x) and connect to port 8081.I can connect to artifactory from my computerhttp://x.x.x.x:8081/artifactory. I have docker client on my computer but I don't have docker on remote desktop.I have virtual docker repository named "docker".I want to login by docker client to my docker repository on artifactory -> "docker login " and then pull images in this repository.How can I log in and pull images from artifactory? Notice I don't have SSL so I'm using HTTP.
Pull Artifactory Docker Images
You should be able to use arolling updatespecifying the same image name that you are currently using:kubectl rolling-update --image=foobar/myimage:[branch]-latestThis will (behind the scenes) create a new replication controller that is a copy of your existing replication controller with the "new" image, and then stepwise resize each of the replication controllers until the old one has zero pods and the new one has the desired number of pods, finally deleting the old one and renaming the new one to use the old name.
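One caveat: when the tag itself stays the same (e.g. [branch]-latest), whether the node really re-pulls the image depends on the pod's imagePullPolicy. A sketch of the relevant fragment of the RC's pod template, reusing the image name from this thread:

spec:
  containers:
  - name: my-app
    image: foobar/my-image:master-latest
    imagePullPolicy: Always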
I have a kubernetes RC/pod consisting of containers with images like:foobar/my-image:[branch]-latestwhere "branch" is the git branch ("master", etc).What's the best way to use rolling-update to force the RC to re-pull the images to get the latest version? The brute force method is to simply delete the RC and re-create it, but that causes downtime for the service.Is rolling update only possible if you specify an exact image tag, rather than something like "latest"?
How to use rolling update to re-pull container image?
Nobody posted an answer so I will try to give my opinion on the second choice, because that's what I think I would do in your situation. The second setup seems the most flexible: you have access to the data and only need to open one port for the federating server, so it should still be secure. One other bonus of this type of setup is that even if the firewall stops working for one reason or another, you will still have a Prometheus instance scraping locally; you will get an alert because you won't be able to reach the server(s), but when the connection comes back you will have all the data. You won't have a hole in the Grafana dashboards because of missing data, apart from during the incident itself. The issue with this setup is that you need to maintain a number of servers equal to the number of networks. A solution for this would be to have a Packer image or maybe an Ansible playbook to deploy them.
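For the federation option, the central Prometheus scrapes each client-side Prometheus through its /federate endpoint. A rough scrape_configs sketch; job name, match[] selector, credentials and target address are all placeholders:

scrape_configs:
  - job_name: 'federate-client-a'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".+"}'
    scheme: https
    basic_auth:
      username: prometheus
      password: something-secret
    static_configs:
      - targets: ['prometheus.client-a.example.com:443']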
I love using Prometheus for monitoring and alerting. Until now, all my targets (nodes and containers) lived on the same network as the monitoring server. But now I'm facing a scenario where we will deploy our application stack (as a bunch of Docker containers) to several client machines in their networks. Nearly all of the client networks are behind a firewall or NAT, so scraping becomes quite difficult. As we're still accountable for our stack, I'd like to have a central monitoring server, alerting and dashboards. I was wondering what could be the best architecture if I want to implement it with Prometheus, but I couldn't find any convincing approaches. My ideas so far: Use a Pushgateway on our side and push all data out of the client networks. As the docs state, it's not intended to be used that way: https://prometheus.io/docs/practices/pushing/ Use a federation setup (https://prometheus.io/docs/prometheus/latest/federation/): place a Prometheus server in every client network behind a reverse proxy (to enable SSL and authentication) and aggregate relevant metrics there. Open/forward just a single port for federation scraping. Other more experimental setups, such as SSH tunneling (e.g. here https://miek.nl/2016/february/24/monitoring-with-ssh-and-prometheus/) or a VPN!? Thank you in advance for your help!
How to configure Prometheus in a multi-location scenario?
That won't work. TheACTIVEMQ_VERSIONhas already been used by thecloudesire/activemq:latestimage build to populate its image layers. All the ActiveMQ installation files based on version5.11.1are already extracted in their corresponding directories.In yourDockerfileyou only can build on top of what has already been build there and add your files. Your ownDockerfilebuild willnot re-runthe build instructions described in theirDockerfile.If you need to have your owncloudesire/activemqimage based on version 5.9.1 you need to clone theirDockerfile, adjust the version there and build it locally. So you could base your otherDockerfileon it.
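If you control the base image yourself, a build argument is the usual way to make such a version configurable at build time. A minimal sketch (this is not the actual cloudesire Dockerfile, and the download/extract steps are omitted):

ARG ACTIVEMQ_VERSION=5.9.1
ENV ACTIVEMQ_VERSION=${ACTIVEMQ_VERSION}
# ... download and unpack apache-activemq-${ACTIVEMQ_VERSION} here ...

built with: docker build --build-arg ACTIVEMQ_VERSION=5.9.1 -t my/activemq:5.9.1 .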
From the following image:https://registry.hub.docker.com/u/cloudesire/activemq/dockerfile/If I wanted to override the ACTIVEMQ_VERSION environment variable in my child docker file, I assumed I would be able to do something like the following:FROM cloudesire/activemq:latest MAINTAINER abc <[email protected]> ENV ACTIVEMQ_VERSION 5.9.1 ADD ./src/main/resources/* /opt/activemq/conf/However this does not seem to work. Admittedly I am new to Docker and have obviously misunderstood something. Please could someone explain why this does not work, and how/if I can achieve it another way?
Override FROM image's ENV in Dockerfile
Without knowing your exact configuration, I would use something like this...version: "2" services: maven: image: whatever volumes: - m2-repo:/home/foo/.m2/repository volumes: m2-repo:This will create a data volume calledm2-repothat is mapped to the/home/foo/.m2/repository(adjust path as necessary). The data volume will survive up/down/start/stop of the Docker Compose project.You can delete the volume by running something likedocker-compose down -v, which will destroy containers and volumes.
I have a Maven project. I'm running my Maven builds inside Docker. But the problem with that is it downloads all of the Maven dependencies every time I run it and it does not cache any of those Maven downloads.I found some work arounds for that, where you mount your local .m2 folder into Docker container. But this will make the builds depend on local setup. What I would like to do is to create a volume (long live) and link/mount that volume to.m2folder inside Docker. That way when I run the Docker build for the 2nd time, it will not download everything. And it will not be dependent on environment.How can I do this with docker-compose?
How to mount docker volume into my docker project using compose?
The problem is the firewall, in Ubuntu this worked for me:sudo ufw allow in on virbr1 sudo ufw reloadBut you need to figure out the correct interface name viaifconfig.In my case I didminikube ipto realize the interface wasvirbr1I found the solution because in the past I had connectivity problems with docker which got resolved withsudo ufw allow in on docker0
I have a use case where I need a Docker container under kubernetes to access a hostPath. I'm using minikube, and the container is able to access a folder in the minikube VirtualBox VM. But I can't figure out how to get it to access a folder on the host itself.I do these commands on the host to create /opt/foo for sharing in the VM:$ sudo touch /opt/foo/FOO $ ls /opt/foo FOO $ minikube mount -v 5 /opt/foo:/opt/foo Mounting /opt/foo into /opt/foo on the minikubeVM This daemon process needs to stay alive for the mount to still be accessible... ufs startingIn another window I look in the minikube VM$ minikube ssh -- sudo ls -la /opt/foo total 0 drwxrwxr-x 2 root root 0 Jun 1 14:44 . drwxr-xr-x 5 root root 0 Jun 1 14:44 ..Is there another step needed to make the files in that directory accessible?FYI - use case is a container process creating files that a host process is harvesting. Thus I do not want to use nfs or PersistentVolumes. Host is Centos7. minikube version: v0.19.0.
How to mount a Host folder in minikube VM
Since you are running the containers individually you have different options. Run django on the network of the postgres container: $ docker run -d ... postgres $ docker run -d ... --net container: django Then django can find postgres on localhost:5432. Run django and postgres as named containers: $ docker run --name postgresdb -d ... postgres $ docker run -d ... django Now django can find the db on postgresdb:5432. Run both containers on the host: $ docker run --net host -d ... postgres $ docker run -d ... --net host django Then django can find postgres on localhost:5432. Run the containers on the same network: $ docker network create mynet $ docker run --name postgresdb --net mynet -d ... postgres $ docker run --net mynet -d ... django Now django can find the db on postgresdb:5432. Connect to the host IP and mapped port: $ docker run -d -p 32770:5432 .... postgres $ docker run -d .... django Django can now connect to the DB on :32770. A better option is to run it using docker-compose. Learn more on https://docs.docker.com/compose/
I have two containers, the first one with adjangoand the second one withpostgresql.Well, in my first server I have runningdjangoand I'm trying to connect it with the second one. The second container has the port32770exposed but internally running in the port5432. In my local machine, I have the connection: Server: 'Localhost' Port: 32770 User: 'myuser' Password: ''And it's connecting, but with mydjangocontainer, I'm getting this error:could not connect to server: Connection refused Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 32770?The same happens for the port5432How I can connect both servers?
Connect two docker containers
You seem to have a keyboard mapping issue where the pipe | turns into a redirect symbol >. It seems to be related to DigitalOcean and the web console itself where your droplet is hosted - by the look of the image in the question - according to this thread. The first option is to use SSH to log into your droplet. Your second option is to do this process in two steps: wget https://download.docker.com/linux/ubuntu/gpg sudo apt-key add gpg
I'm trying to add the docker GPG key, and I'm unable to do so because it doesn't recognize that i'm trying to pipe the GPG key into the APT KEYI'm getting back the following error (see picture):curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Trying to install Docker GPG key, receiving error: curl: option '-' is unknown
The SMB protocol works with hosts in the same LAN. A docker container, by default, has a virtual network interface behind a NAT, so the container is no longer in the same LAN. This is why you can ping the target, but you can't access the shared folder.The easier solution is to add the option--network hostto thedocker runcommand. In this way the container has access to the same network interfaces as the host and no virtual interface is created.
I'm trying to access a remotely shared folder from within a docker container on Docker for Windows.While inside the container runningdir \\target\shareproduces "The network path was not found.". The target can be pinged from inside the container and from the host system the share is accessible.The image used ismicrosoft/dotnet-framework:4.7.2-sdkand I'm running it with just the-itoption for testing.What am I missing to get this to work?
Docker vs. Shared windows folders
Solution: The start command seems to need the -a (attach) parameter as describedin the documentationwhen used in a systemd script. I assume this is because it by default forks to the background, although the systemdexpect daemonfeaturedoesn't appear to fix the issue.from thedocker-startmanpage:-a, --attach=true|false Attach container's STDOUT and STDERR and forward all signals to the process. The default is false.The whole systemd script then becomes:[Unit] Description=MyContainer After=docker.service Requires=docker.service [Service] ExecStart=/usr/bin/docker start -a containername ExecStop=/usr/bin/docker stop containername [Install] WantedBy=multi-user.target
I'm having trouble getting a Docker container to stay up when it's started by systemd. When I start it manually withsudo docker start containername, it stays up without trouble, but when it's started via systemd withsudo systemctl start containername, it stays up for 10 seconds then mysteriously dies, leaving messages in syslog something like the following:Mar 13 14:01:09 hostname docker[329]: time="2015-03-13T14:01:09Z" level="info" msg="POST /v1.17/containers/containername/stop?t=10" Mar 13 14:01:09 hostname docker[329]: time="2015-03-13T14:01:09Z" level="info" msg="+job stop(containername)"I am making the assumption that it's systemd killing the process, but I can't work out why it might be happening. The systemd unit file (/etc/systemd/system/containername.service) is pretty simple, as follows:[Unit] Description=MyContainer After=docker.service Requires=docker.service [Service] ExecStart=/usr/bin/docker start containername ExecStop=/usr/bin/docker stop containername [Install] WantedBy=multi-user.targetDocker starts fine on boot, and it looks like it does even start the docker container, but no matter if on boot or manually, it then quits after exactly 10 seconds. Help gratefully received!
Docker and systemd - service stopping after 10 seconds
The issue was due to theINFLUXDB_ADMIN_ENABLED=trueline.The documentation states:The administrator interface is deprecated as of 1.1.0 and will be removed in 1.3.0.I was using thelatestversion which is (currently) the1.4so it seems that there was a problem with that deprecatedINFLUXDB_ADMIN_ENABLEDvariable.Removing that line, everything worked perfectly.docker run -p 8086:8086 \ -e INFLUXDB_DB=defaultdb \ -e INFLUXDB_ADMIN_USER=admin \ -e INFLUXDB_ADMIN_PASSWORD=adminpass \ -e INFLUXDB_USER=user \ -e INFLUXDB_USER_PASSWORD=userpass \ -v influxdb:/var/lib/influxdb \ influxdb:latest
I'm running the following command to launch an InfluxDB container. This should create a new database with the name defaultdb. docker run -p 8086:8086 \ -e INFLUXDB_DB=defaultdb -e INFLUXDB_ADMIN_ENABLED=true \ -e INFLUXDB_ADMIN_USER=admin -e INFLUXDB_ADMIN_PASSWORD=adminpass \ -e INFLUXDB_USER=user -e INFLUXDB_USER_PASSWORD=userpass \ -v influxdb:/var/lib/influxdb \ influxdb:latest But it doesn't create the default database defaultdb; it creates the database db0 instead of defaultdb. What am I doing wrong? https://hub.docker.com/_/influxdb/ Thanks in advance.
Launching a InfluxDB container in docker with a default database name
You're mounting /app/logs but the env variable indicates that logs are written to /logs, not /app/logs.Change one path or the other and see if the problem is still there.In general your approach is correct. But for production usage, especially in clustered environments, it's better to use serilog, splunk, application insights or other service-based log collector, as files might be difficult to maintain when you scale up your containers.
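Concretely, keeping the bind mount from this setup and just making the two paths agree could look like this:

environment:
  - Logger:FilePath=/app/logs/logs.txt
volumes:
  - type: bind
    source: /Users/grzegorz/Desktop/logs
    target: /app/logs/

(Equivalently, leave the environment variable at /logs/logs.txt and change the mount target to /logs.)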
I have .net core app which Serilog as log framework. Right now Serilog are logging to file. I want to expose this file outside container and have a simple access as with other files.I tried with volume and volume-bind according to docker-compose reference:https://docs.docker.com/compose/compose-file/#volume-configuration-referenceMoreover, directory from host are shared. But log file appears in container in specific directory, but directory on host are still empty.I can manually copy this file using DOCKER COPY command. Is there any approach to make this file synchronized in my volumed directory on host?Or should I apply some additional approach e.g. ELK to achieve my goal?This is my docker-compose file. Frontend part is commented now because of tests.version: '3.7' services: backend: build: './backend' ports: - "80:80" networks: - gateway environment: - Logger:FilePath=//logs//logs.txt - Database:Name=Data Source = ./database.db volumes: - type: bind source: /Users/grzegorz/Desktop/logs target: /app/logs/ #frontend: # build: './frontend' # depends_on: # - backend # ports: # - "4200:4200" # networks: # - gateway networks: gateway: {}When I open terminal and go into app/logs directory I can see logs.txt file with current application logs.
Expose log file outside docker container
You just need to provide thedockerfileargument:clients.images.build( path="../../../.", dockerfile="path_to_my_dockerfile/Dockerfile", tag="mytag" )Note that thedockerfileshould be relative topath, not your current working directory.If you have dockerfile and context relative to your current working directory, you may be able to compute the relative path with something like this:dockerfile = pathlib.Path(os.path.normpath(dockerfile)).absolute() context = pathlib.Path(os.path.normpath(context)).absolute() path = dockerfile.relative_to(context)Alternatively, you can also provide a custom build context, by creating a tarfile yourself withcustom_context. This is the most flexible way, as you don't even need a filesystem that way.
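For the custom_context route, here is an untested sketch against the Docker SDK for Python, reusing the paths from this thread:

import io
import tarfile
import docker

client = docker.from_env()

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    tar.add("../../../.", arcname=".")  # the build context
    tar.add("path_to_my_dockerfile/Dockerfile", arcname="Dockerfile")
buf.seek(0)

image, logs = client.images.build(fileobj=buf, custom_context=True, tag="mytag")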
I am trying to emulate the following CLI command using the docker python-sdk:docker build -t mytag -f path_to_my_dockerfile/Dockerfile ../../../.So in this case I want it to build the Dockerfile using the build context../../../.. I tried using the python-sdk for docker but it seems each time the build context is not the right one, I tried various combinations like :import docker client = docker.from_env() clients.images.build(path="../../../.", fileobj="path_to_my_dockerfile/Dockerfile", tag="mytag")but nothing seems to work. Looking intothe docker-py repois not helping.
Docker: provide custom context in python-sdk
First of all, linking is a legacy feature. Create a user-defined network first: docker network create mynetwork --driver=bridge Now use mynetwork for the containers you want to be able to communicate with each other: docker run -p 5601:5601 --name kibana -d --network mynetwork kibana docker run -p 9200:9200 -p 9300:9300 --name elasticsearch -d --network mynetwork elasticsearch Docker will run a DNS server for your user-defined network, so you can ping other containers by name: docker exec -it kibana /bin/bash ping elasticsearch You can use telnet or curl to verify kibana->elasticsearch connectivity from the kibana container. P.S. I used the official (library) docker images for the ELK stack with user-defined networking recently and it worked like a charm.
I have the following docker containers running on my box...CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 5da7523e527b kibana "/docker-entrypoint.s" About a minute ago Up About a minute 0.0.0.0:5601->5601/tcp elated_lovelace 20aea0e545ca elasticsearch "/docker-entrypoint.s" 3 hours ago Up 3 hours 0.0.0.0:9200->9200/tcp, 9300/tcp sad_meitnerMy aim was to get kibana to link to my elasticsearch container however when I hit kibana it's telling me that I do not have any document stores. I know this is not right because I definitely have documents in elasticsearch. I'm guessing my link command is wrong.This is the docker command I used to start the kibana container.docker run -p 5601:5601 --link sad_meitner:elasticsearch -d kibanaCan someone tell me what I've done wrong?thanks
linking kibana with elasticsearch
It’s easy to make a Docker-hosted service only accessible to other containers on the same host. If you:Set the server to bind to or listen on 0.0.0.0 or ::0 (all addresses);Create a non-default Docker network (Docker Compose will do this automatically);Launch the server container and any associated client containers on that Docker network (Docker Compose will do this by default); andDonotset adocker run -por Docker Composeports:optionthen the client containers can reach the server container using its container name as a host name, but non-Docker processes on the host and other hosts can’t reach the server.If your host has multiple network interfaces and binding to one of those would make a service “private” then you can do the same thing withdocker run -p. If your host has public IP address 10.20.30.40/16 and also private IP address 192.168.144.128/24, thendocker run -p 192.168.144.128:6379:6379will make it available to the private network (and other Docker containers as above) but not the public network. (The server itself, inside the container, still needs to bind to 0.0.0.0.)If you otherwise need the server to be visible off-host, but only to some IP addresses, I think you’re down toiptablesmagic that’s not native to Docker.
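In docker-compose terms, "only reachable from the other container" usually just means giving the Redis service no ports: mapping at all. A sketch with placeholder service names:

version: "3"
services:
  app:
    build: .
    ports:
      - "8000:8000"   # only the app is published to the host
  redis:
    image: redis
    # no "ports:" section: redis is reachable as redis:6379 on the
    # compose network, but not from outside the Docker network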
So, I'm doing a project where I have two Docker containers, one for the main app and one for Redis (using docker compose btw). Naturally I wanted to connect both and tried the default bind setting, but of course the app couldn't connect to the db due to them being in two different containers. Then I just went with 0.0.0.0 after readingthis. However, I still feel like asking if there's a way to bind Redis to my local network, so that only machines running inside it would be able to connect.Thisisn't really what I want. Maybe incorporate something likethis?Does anyone have a good solution to how I could make Redis only accept connections from the other container (linked by Docker Compose) or binding Redis to 0.0.0.0 and using strong security measures is the only way?Thanks in advance!
Connecting a Redis container with another container (Docker)
Docker joins ENTRYPOINT and CMD into a single command line, if both use JSON notation, like in your example. This is JSON notation: CMD [ "dotnet", "/app/netcore/Somename.dll"] This is shell notation: CMD dotnet /app/netcore/Somename.dll Another thing you need to know: whatever is written in the docker run command after the image name is treated as the CMD. So, to conclude: the constant (immutable) part of the command line, like dotnet foo.dll, you can put in ENTRYPOINT. The variable part, like additional arguments, you supply with docker run, and you can optionally put defaults in CMD in the Dockerfile. Example Dockerfile: ... ENTRYPOINT ["dotnet", "/app/netcore/Somename.dll"] CMD ["--help"] Command line 1: docker run ... --environment=Staging --port=8080 will result in dotnet /app/netcore/Somename.dll --environment=Staging --port=8080 Command line 2: docker run ... will result in dotnet /app/netcore/Somename.dll --help. The --help comes from the default value defined in the Dockerfile.
Working on my first Docker image. It is a dotnet program that uses CMD to launch (only one CMD allowed in Docker). I would like to pass the program an argument (an API key) at runtime. After some googling, not finding a clear answer. Entrypoint doesn't seem helpful. Maybe ENV, but it seems ENV is only for Docker. My Dockerfile:FROM microsoft/dotnet WORKDIR /app COPY . /app CMD [ "dotnet", "/app/netcore/Somename.dll"]Thanks
How to pass command line arguments to a dotnet dll in a Docker image at run time?
Bind your container port to 127.0.0.1:5000.By default, if you don't specify an interface on port mapping, Docker bind that port to all available interfaces (0.0.0.0). If you want to bind a port only for localhost interface (127.0.0.1), you have to specify this interface on port binding.Dockerdocker run ... -p 127.0.0.1:5000:5000 ...Docker Composeports: - "127.0.0.1:5000:5000"For further information, check Docker docs:https://docs.docker.com/engine/userguide/networking/default_network/binding/
After Idocker-compose buildanddocker-compose up, if I go tolocalhost:5000in my browser (Which is the port I exposed in the yml file), I get:This site can’t be reached. localhost refused to connect.However, if I go to192.168.99.100:5000, the container loads. Is there a way I can fix this issue?
Docker refused to connect
The problem is not caused by Go but by the Alpine image. The default Alpine image does not have CA certificates, so the app cannot call an https address (in this case https://accounts.google.com/o/oauth2/token). To fix this problem, install the two packages openssl and ca-certificates. Example in a Dockerfile: apk add --no-cache ca-certificates openssl
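A minimal Dockerfile sketch for a Go binary on Alpine (the binary name and path are placeholders):

FROM alpine:latest
RUN apk add --no-cache ca-certificates openssl
COPY myapp /usr/local/bin/myapp
CMD ["/usr/local/bin/myapp"]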
I have a web app written in Go, use oauth2 (packagegolang.org/x/oauth2) to sign user in by Google (follow this tutorialhttps://developers.google.com/identity/sign-in/web/server-side-flow).When I test app on local, it works fine but when I deploy app and run inside a Docker container (base onalpine:latest, run binary file), it has an error:Post https://accounts.google.com/o/oauth2/token: x509: certificate signed by unknown authorityHere is my code to exchange the accessToken:ctx = context.Background() config := &oauth2.Config{ ClientID: config.GoogleClientId, ClientSecret: config.GoogleClientSecret, RedirectURL: config.GoogleLoginRedirectUrl, Endpoint: google.Endpoint, Scopes: []string{"email", "profile"}, } accessToken, err := config.Exchange(ctx, req.Code) if err != nil { log.Println(err.Error()) // Error here }
Cannot exchange AccessToken from Google API inside Docker container
echo $HOMEis being evaluated on your host because you haven't got the syntax of the switch to bash correct. It's Linux so you need single quotes.Try replacing your double quotes with single quotes.eg. This is what I get:bash-3.2$ docker run ubuntu /bin/bash -c 'echo $HOME' /root
I've got a problem with environment variables in docker. When I run command:$ docker run ubuntu /bin/bash -c "echo $HOME"I've got response:/Users/bylekBut when I run:$ docker run -it ubuntu /bin/bashand then:root@5e079c47affa:/# echo $HOMEI've got:/rootSecond response is correct. Why first command return $HOME value from my host?
Environment variables in docker when exec docker run
A similar docker official tomcat image (8.0.40) runs: CMD ["catalina.sh", "run"] With catalina.sh made to start tomcat in the foreground, the process won't exit immediately. If your tomcat installation does include that script, you should use it instead of startup.sh. Or run a tomcat image directly for testing: $ docker run -it --rm -p 8080:8080 tomcat:8.0 You can test it by visiting http://container-ip:8080 in a browser.
I have a docker image which I create from my Dockerfile. When I run the image it is able to start the tomcat server, but then the command prompt comes back. That means the process has terminated and I think the container stops. So when I open http://localhost:8080 no tomcat page appears, and I am not able to find what the actual problem is. I am trying to build a custom java8, tomcat8 and maven environment and I want to deploy my maven project in that tomcat server. Below is the Dockerfile used to create the image: FROM scratch FROM ubuntu:16.04 RUN mkdir /opt/java8 RUN mkdir /opt/tomcat8 RUN mkdir /opt/maven3 ENV JAVA_HOME /opt/java8 ENV CATALINA_HOME /opt/tomcat8 ENV M2_HOME /opt/maven3 ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/bin:$M2_HOME/bin ADD jdk1.8.0_112 /opt/java8 ADD apache-tomcat-8.0.38 /opt/tomcat8 ADD apache-maven-3.3.9 /opt/maven3 EXPOSE 8080 CMD ["startup.sh", "run"] I put the 3 folders for java, tomcat and maven next to the Dockerfile so those are added. Now when I build the image and run it, the log below appears. root@dhavalbhoot:/home/veni/Documents/dhaval_bhoot/docker_images/tomcat1# docker run -it -p 8080:8080 dhaval/tomcat:8.0.38 Output: Using CATALINA_BASE: /opt/tomcat8 Using CATALINA_HOME: /opt/tomcat8 Using CATALINA_TMPDIR: /opt/tomcat8/temp Using JRE_HOME: /opt/java8 Using CLASSPATH: \#/opt/tomcat8/bin/bootstrap.jar:/opt/tomcat8/bin/tomcat-juli.jar Tomcat started. root@dhavalbhoot:/home/veni/Documents/dhaval_bhoot/docker_images/tomcat1# So the prompt comes back, and when I check http://localhost:8080 in the browser no tomcat page appears. Please help me solve the problem.
I run the docker image which starts the tomcat8 server but it doesn't start
The problem was in incorrect build agent type. "Hosted VS2017" build agent failed to build project because it uses docker with windows containers (and powershell as a default shell). But on my dev machine I use docker with linux containers (with /bin/sh as a default shell). Choosing the correct build agent type fixed the problem (because /bin/sh understands '&&').
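If you ever do need the same Dockerfile to build under both shells, a simple workaround is to avoid the operator and split the instruction into two RUN lines (at the cost of an extra image layer):

RUN dotnet restore QuizService.sln
RUN dotnet publish QuizService.sln -c Release -o obj/Docker/publish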
I configuring CI for my ASP.NET Core application in VSTS (visual studio online). I've added "docker-compose build" task to build definition but it fails with errors:Step 4/9 : RUN dotnet restore QuizService.sln && dotnet publish QuizService.sln -c Release -o obj/Docker/publish ---> Running in 7ea0cf1881d1 ... rence = 'SilentlyContinue'; dotnet restore QuizService.sln && dotnet ... The token '&&' is not a valid statement separator in this version. + CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException + FullyQualifiedErrorId : InvalidEndOfLine Service 'quizservice' failed to build: The command 'powershell -Command $ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue'; dotnet restore QuizService.sln && dotnet publish QuizService.sln -c Release -o obj/Docker/publish' returned a non-zero code: 1 ##[error]Building quizservice ##[error]Service 'quizservice' failed to build: The command 'powershell -Command $ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue'; dotnet restore QuizService.sln && dotnet publish QuizService.sln -c Release -o obj/Docker/publish' returned a non-zero code: 1 ##[error]C:\ProgramData\Chocolatey\bin\docker-compose.exe failed with return code: 1The problem is with my dockerfile on line:RUN dotnet restore QuizService.sln && dotnet publish QuizService.sln -c Release -o obj/Docker/publishSomehow docker does not understand '&&' operator:The token '&&' is not a valid statement separator. The exception itself seems to be related to powershell rather than to docker. Poweshell has no '&&' syntax but why it starts using powershell for RUN command here instead of cmd.exe?It works like a charm when I build locally on my dev machine.Do somebody faced the same problem?
VSTS docker task failed on '&&' token in docker RUN command
You can find your solution herehttps://pkgs.alpinelinux.org/package/edge/community/x86_64/php7-redis
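Assuming the package exists for your Alpine release, installing it in the image is a one-liner; if it is only available in the edge/community repository you may need to point apk at that repository explicitly (URL shown as an example):

RUN apk add --no-cache php7-redis
# or, if the package is only in edge/community:
RUN apk add --no-cache php7-redis --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community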
I crafted a docker image using alpine 3.5 as the base image. I want my PHP application running inside the container to communicate with a Redis server, but I can't find any php7 redis client in Alpine. Is there a way to work around it? I tried to use pecl to install redis but there is no pecl package in alpine. I tried with pear but pear doesn't have a redis package. Any thoughts on this issue?
Php7 Redis Client on Alpine OS
We resolved the same issue by using "printf" instead of "echo"; the problem with echo is that it leaves a trailing newline character in the docker secret. You can refer to the example in docker secret create => https://docs.docker.com/engine/reference/commandline/secret_create/ Also I have an example that loads docker secrets directly into spring properties, such as "spring.datasource.password" => https://github.com/kwonghung-YIP/spring-boot-docker-secret
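For example, recreating the secret without the trailing newline, using the name the stack expects:

docker secret rm db-root-password
printf "db_secured_password" | docker secret create db-root-password -
# echo -n "db_secured_password" | docker secret create db-root-password -   also works in most shells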
I'm trying to set the environment variables for DB password for MySQL container and spring boot application which is commonly declared in the docker secrets.echo "db_secured_password" | docker secret create secret -here are the configuration files :spring boot application's -> application.ymldb: name: my-db host: localhost port: 3306 username: root password: /run/secrets/db-root-password spring: application: name: core-backend datasource: url: jdbc:mysql://${db.host}:${db.port}/${db.name} username: ${db.username} password: ${db.password}used for docker stack in docker swarm mode -> docker-compose.ymlversion: '3.1' services: mysql-db: container_name: mysql-db image: mysql:8.0.12 deploy: restart_policy: condition: on-failure volumes: - ./data/mysql:/var/lib/mysql - ./conf/mysql/my.cnf:/etc/mysql/conf.d/my.cnf environment: - MYSQL_ROOT_PASSWORD=/run/secrets/db-root-password - MYSQL_DATABASE=my_db ports: - "3306:3306" secrets: - db-root-password spring-boot-app: container_name: spring-boot-app image: spring-boot-app:local environment: - DB_PASSWORD=/run/secrets/db-root-password # Also tried adding with the file as property name # - DB_PASSWORD_FILE=/run/secrets/db-root-password ports: - "8080:8080" environment: HOST_NAME: localhost secrets: - db-root-password depends_on: - mysql-db secrets: db-root-password: external: trueI run the docker stack by using the following command:docker stack deploy --with-auth-registry -c docker-compose.yml test-stackI'm unable to get the value of thedb-root-passwordproperty exactly in spring boot app. When I inspect the value ofdb-root-passwordI get the value as/run/secrets/db-root-password.Is there something missing? If I want to override the value of Environment variable differently?
docker secret with spring boot application is not working in docker swarm mode /run/secrets
I have to point something out here. I also faced such an issue, and while the solution of removing the volume works, you can't delete a volume that is in use, which means you have to remove the container using the volume first. For most containers/volumes that's not an issue, but with Redis, if for example you are trying to downgrade your version, it forces you to do the same operation with your database container, meaning you have to dump your database because you will have to rebuild it again. So my point is: do not run these commands blindly, and prepare a dump of your database first. Even better, if you need to work on multiple versions of an environment for the same project, instead of rebuilding your docker setup all the time, consider using 2 different projects so that your database stays safe.
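If the Redis data really is disposable (or you have a dump to restore), the usual sequence is roughly the following; the volume name depends on your compose project name, so check docker volume ls first:

docker-compose down
docker volume ls                     # find something like <project>_redis
docker volume rm <project>_redis
docker-compose up -d

The alternative that keeps the data is to switch to a Redis image new enough to read the dump, since the RDB file was evidently written by a newer server than the redis:5.0 used here.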
I can't start redis container in my docker-compose file. I know that docker-compose file is OK, because my colleagues can start the project successfully. I read that there is a solution to delete dump.rdb file. But I can't find it. I use Windows machine. Any suggestions will be very helpful.Error 2023-02-09 16:41:28 1:M 09 Feb 2023 13:41:28.699 # Can't handle RDB format version 10Redis in docker_compose:redis:container_name: redis hostname: redis image: redis:5.0 ports: - "6379:6379" volumes: - redis:/data restart: always
Redis Docker compose Can't handle RDB format version 10
AWS Lambda is not a generic docker runner. The docker containers you deploy to Lambda have to comply with the AWS Lambda runtime environment.The docker image you are using is trying to write to the path/home/sbx_user1051apparently. On AWS Lambda the file system is always read-only except for the/tmppath. You will have to modify the code running in the docker image to prevent it from writing anywhere else but/tmp/.
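Based on the traceback (the failure happens in nltk.download('stopwords')), pointing NLTK at /tmp is usually enough. A sketch:

import nltk

# /tmp is the only writable path inside the Lambda execution environment
nltk.download("stopwords", download_dir="/tmp/nltk_data")
nltk.data.path.append("/tmp/nltk_data")

from nltk.corpus import stopwords
words = stopwords.words("english")

A cleaner option is to bake the corpus into the image at build time (e.g. a RUN python -c "import nltk; nltk.download('stopwords', download_dir='/usr/share/nltk_data')" step in the Dockerfile) so nothing has to be downloaded at invocation time.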
I get the following error{ "errorMessage": "[Errno 30] Read-only file system: '/home/sbx_user1051'", "errorType": "OSError", "stackTrace": [ " File \"/var/lang/lib/python3.8/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n", " File \"/var/lang/lib/python3.8/imp.py\", line 171, in load_source\n module = _load(spec)\n", " File \"\", line 702, in _load\n", " File \"\", line 671, in _load_unlocked\n", " File \"\", line 843, in exec_module\n", " File \"\", line 219, in _call_with_frames_removed\n", " File \"/var/task/app.py\", line 3, in \n nltk.download('stopwords')\n", " File \"/var/task/nltk/downloader.py\", line 777, in download\n for msg in self.incr_download(info_or_id, download_dir, force):\n", " File \"/var/task/nltk/downloader.py\", line 642, in incr_download\n yield from self._download_package(info, download_dir, force)\n", " File \"/var/task/nltk/downloader.py\", line 699, in _download_package\n os.makedirs(download_dir)\n", " File \"/var/lang/lib/python3.8/os.py\", line 213, in makedirs\n makedirs(head, exist_ok=exist_ok)\n", " File \"/var/lang/lib/python3.8/os.py\", line 223, in makedirs\n mkdir(name, mode)\n" ] }when testing my lambda function. I don't understand what this error is telling me to do about the docker image I am using, if that even is the correct route to explore. What should I do
AWS Lambda function returns "errorMessage": "[Errno 30] Read-only file system: '/home/sbx_user1051'"
Bind mounts in Linux do not perform any namespacing on the uid or gid, and host mounts are running a bind mount under the covers. So if the uid inside the container is different from the uid on the host, you'll get permission issues. I've worked around this in other containers with a fix-perms script. Implementing that looks like the following Dockerfile:FROM selenium/node-chrome-debug:3.141.59-neon COPY --from=sudobmitch/base:scratch /usr/bin/gosu /usr/bin/fix-perms /usr/bin/ COPY entrypoint.sh /entrypoint.sh # use a chmod here if you cannot fix permissions outside of docker RUN chmod 755 /entrypoint.sh USER root ENTRYPOINT [ "/entrypoint.sh" ]The entrypoint.sh looks like:#!/bin/sh if [ "$(id -u)" = "0" -a -d "/home/seluser/Downloads" ]; then fix-perms -r -u seluser /home/seluser/Downloads exec gosu seluser /opt/bin/entry_point.sh "$@" else exec /opt/bin/entry_point.sh "$@" fiWhat's happening here is the container starts as root, and the fix-perms script adjust theseluserinside the container to match the uid of the/home/seluser/Downloadsdirectory. Theexec gosuthen runs your container process as the seluser as the new pid 1.You can see the code used to implement this at:https://github.com/sudo-bmitch/docker-baseI've discussed this method in several of my presentations, including:https://sudo-bmitch.github.io/presentations/dc2019/tips-and-tricks-of-the-captains.html#fix-perms
I have the following setup:selenium-chrome: image: selenium/node-chrome-debug:3.141.59-neon container_name: chrome-e2e depends_on: - selenium-hub environment: - HUB_HOST=selenium-hub - HUB_PORT=4444 - SHM-SIZE=2g - GRID_DEBUG=false - NODE_MAX_SESSION=1 - NODE_MAX_INSTANCES=5 - TZ=Europe/Brussels hostname: chrome-e2e networks: - build-network ports: - 5900:5900 volumes: - ./target:/home/seluser/DownloadsSelenium tests are run inside the container, the actual test code is outside of the container. Using Maven we handle the lifecycle of the containers. As you can see I mounted the Chrome download folder (inside the container) to thetarget-folder of my application. All is mounted well but when Chrome tries to download a file, permission is denied to write to/home/seluser/Downloads. The UID and GID of/home/seluser/Downloadsis set to 2100:2100 by Docker. Chrome itself is run via theseluseruser.What do I need to do to giveseluserthe permission to write to a folder owned by 2100?Thanks in advance. Regards
Docker user cannot write to mounted folder
Using github.com/fsouza/go-dockerclient, you first have to create a container, using the CreateContainerOptions to set the same options that you can via the command line: container, err := client.CreateContainer(createContainerOptions) Once you have the container, you start it, with any extra options or overrides in the HostConfig: client.StartContainer(container.ID, hostConfig) To connect to the std io streams of a container, you need to use client.AttachToContainer and assign the appropriate streams in the AttachToContainerOptions.
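Putting those pieces together, a rough, untested sketch with github.com/fsouza/go-dockerclient; the exact field names may vary between versions of that client, so treat this as an outline rather than the definitive way:

package main

import (
	"log"
	"os"

	docker "github.com/fsouza/go-dockerclient"
)

func main() {
	client, err := docker.NewClientFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	pwd, _ := os.Getwd()
	c, err := client.CreateContainer(docker.CreateContainerOptions{
		Name: "my-python-container",
		Config: &docker.Config{
			Image:      "python:2-slim",
			Cmd:        []string{"python", "test.py"},
			WorkingDir: "/usr/src/myapp",
		},
		HostConfig: &docker.HostConfig{
			Binds:      []string{pwd + ":/usr/src/myapp"},
			AutoRemove: true, // the --rm part
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if err := client.StartContainer(c.ID, nil); err != nil {
		log.Fatal(err)
	}
}

The interactive -it part would additionally need Tty/OpenStdin on the Config plus client.AttachToContainer, as mentioned above.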
How can I achieve the equivalent ofsudo docker run -it --rm --name my-python-container -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:2-slim python test.pyusing the Docker API for Golang?Eitherhttps://github.com/fsouza/go-dockerclientorhttps://github.com/samalba/dockerclientis fine.
Emulating `docker run` using the golang docker API
I have read this: https://azure.microsoft.com/blog/2015/04/16/docker-client-for-windows-is-now-available/ As you can read there, so far Windows only gets a client interface for managing Docker containers that still run inside Linux.
I have a hard time finding information about this. Somewhere I've seen news that Docker has now been natively integrated into Windows. So apparently this means they are not "Linux containers" but some kind of "Windows containers"? Does anyone have more information on this?
Docker native Windows support?
Okay, so after quite some time I've come up with an solution for my problem. I could simplify the "" a bit:version: '2.1' services: db: image: sath89/oracle-12c:r1 healthcheck: test: ["CMD-SHELL", "if [ \"`echo \\\"SELECT ACCOUNT_STATUS FROM DBA_USERS WHERE USERNAME = 'ANONYMOUS' AND ACCOUNT_STATUS = 'EXPIRED';\\\"|/u01/app/oracle/product/12.1.0/xe/bin/sqlplus -S sys/oracle as sysdba|grep ACCOUNT_STATUS`\" = \"ACCOUNT_STATUS\" ];then true;else false;fi"] interval: 30s timeout: 3s # start_period: 900s retries: 30Right now "docker-compose" does not support the start_period option so the numbers of retries (and the interval) have to be quite high so the container isn't reported as "unhealthy". ThePull Requesthas already been merged so hopefully it will be in the next release.
I'm usingsath89/oracle-12cfor automated tests against a oracle db. This works fine, the only problem is that this container takes several minutes to start (~10-15 depending on the hardware). I tried to come up with a healthcheck for this container.I managed to come up withstatus=`su oracle -c "echo -e \"SELECT ACCOUNT_STATUS FROM DBA_USERS WHERE USERNAME = 'ANONYMOUS' AND ACCOUNT_STATUS = 'EXPIRED';\" | /u01/app/oracle/product/12.1.0/xe/bin/sqlplus -S / as sysdba | grep ACCOUNT_STATUS"`; if [ "$status" == "ACCOUNT_STATUS" ]; then true; else false; fiwhich returns 0 when theANONYMOUSaccount is unlocked, which is the last step in theentrypointscript of the image:entrypoint.sh. I tested this usingdocker exec -it bash.I am now stuck with converting this horribly long line into a healthcheck command for docker (docker-compose):version: "2" services: db: image: sath89/oracle-12c:r1 healthcheck: test: ["CMD", ""] interval: 10s timeout: 3s retries: 3Any help is appreciated - if you can improve the command itself I'm happy to here. I am aware of "select 1 from dual" as a validation query for Oracle (source), but this reports an operational DB after ~8 minutes but it resets connections a little bit later. I don't want to modify the container itself - if there's an update I just want to be able to pull it from the hub.
how to turn bash command into docker(-compose) healthcheck
You can map the external certs into a container atdocker runtime usingbind mounts. Assuming your certs are in/etc/docker/certson the host, and you want them to be at/etc/ssl/certsin the container, then add either of the following:-v /etc/docker/certs:/etc/ssl/certs:roor--mount type=bind,src=/etc/docker/certs,dst=/etc/ssl/certs,readonlyYour Tomcat config would use/etc/ssl/certsas its path in this case.
I'm new to Tomcat and Docker, and am stuck trying to enable https on my website. First on the server, not in any container:a) I generated a CSRb) Acquired a commercial SSL certificatec) Placed the certificates in a folder on the server /etc/docker/certsd) Then created my Docker containers with the configuration belowI can use the commanddocker exec -it shto navigate my container. I can editserver.xmlandweb.xmlbut I realize I should install the certificates at the OS level outside the container if I want https configuration to persist past individual containers. In other words, I should be able to remove a container, and create another one without needing to reinstall the ssl.How can I do this? Any ideas?. Thanks in advance! Below are my configurations:1.Databasedocker run -d --name=example-db --restart=always --net=example-net --mount type=volume,src=mydbdata,target=/example-db --hostname=example-db -e POSTGRES_DB=mydb -e POSTGRES_USER=myuser -e POSTGRES_PASSWORD=secret myapp/db2.Applicationdocker run -d --name=example-app --restart=always --mount type=volume,src=mydata,target=/example-app -p 80:8080 --net=example-net -e DB_HOST=example-db -e DB_NAME=mydb -e DB_USER=myuser -e DB_PASSWORD=secret myapp/myappAgain thanks for your help. Art
How to enable HTTPS on Tomcat in a Docker Container?