Response | Instruction | Prompt
---|---|---|
So there was some mix-up with the names of images and containers. Obviously, the cp operation was acting on a different container than I brought up with the run command. In any case, the correct procedure is:
# Build the image, call it foo-build
docker build -q -t foo-build .
# Create a container from the image called foo-tmp
docker create --name foo-tmp foo-build
# Run the copy command on the container
docker cp /src/path foo-tmp:/dest/path
# Commit the container as a new image
docker commit foo-tmp foo
# The new image will have the files
docker run foo ls /dest | I have a dockerized project. I build, copy a file from the host system into the docker container, and then shell into the container to find that the file isn't there. How is docker cp supposed to work?
$ docker build -q -t foo .
Sending build context to Docker daemon 64 kB
Step 0 : FROM ubuntu:14.04
---> 2d24f826cb16
Step 1 : MAINTAINER Brandon Istenes <[email protected]>
---> Using cache
---> f53a163ef8ce
Step 2 : RUN apt-get update
---> Using cache
---> 32b06b4131d4
Successfully built 32b06b4131d4
$ docker cp ~/.ssh/known_hosts foo:/root/.ssh/known_hosts
$ docker run -it foo bash
WARNING: Your kernel does not support memory swappiness capabilities, memory swappiness discarded.
root@421fc2866b14:/# ls /root/.ssh
root@421fc2866b14:/# | `docker cp` doesn't copy file into container |
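A side note on the workflow in the answer above: if the file is already available at build time, it can be simpler to bake it into the image with a COPY instruction instead of the create/cp/commit sequence. A minimal hedged sketch (the known_hosts file name is only an illustration; it must sit in the build context):
FROM ubuntu:14.04
# copy a file from the build context into the image
COPY known_hosts /root/.ssh/known_hosts
docker cp remains the right tool when the target container already exists.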
The issue is that the Docker Rancher installer does not create the docker group.
Use the following commands:
sudo addgroup --system docker
sudo adduser $USER docker
newgrp docker
# And something needs to be done so $USER always runs in group `docker` on the `Ubuntu` WSL
sudo chown root:docker /var/run/docker.sock
sudo chmod g+w /var/run/docker.sock
Thanks to https://github.com/rancher-sandbox/rancher-desktop/issues/1156#issuecomment-1017042882 | I have installed Docker Rancher on Windows 10 with the dockerd option and WSL enabled for my current WSL distribution (Ubuntu).
When I try to use Docker in WSL2, I get the following error:
fpapi@xxx:~$ docker ps
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied
The command works fine in the cmd shell. Which permission am I missing? | Docker Rancher - Permission Denied when using docker from WSL |
In the Dockerfile, add ENV AzureFunctionsJobHost__Logging__Console__IsEnabled=true to enable logging; the setting is omitted in the base image, so we have to set it manually for now. If you get 401 Unauthorized, find the file function.json and change authLevel to anonymous if it was function (the default value in the template). We can't access an HTTP trigger in a local container with an authLevel other than anonymous, because we don't have function keys yet; those only become available after we create a Function App using the container. As for why we can access the HTTP trigger with the function authLevel when we use func host start outside the container: authorization is disabled regardless of the specified authentication level when running locally. | Recently I created a docker image with an Azure Function (Node) that has an HttpTrigger. This is the basic HttpTrigger generated by default. I'm developing on a MacBook Pro (Mojave) and I have the following tools installed:
NodeJs - node/10.13.0
.NET Core 2.1 for macOS
Azure Function Core Tools (via brew)
When I run the function locally with "func host start", it all works fine and I can see the function loading messages. I was also able to call the Azure Function's trigger endpoint. However, when I build and run the Docker container, I can load the app's home page but cannot reach the function endpoint. In the log I only see the following:
Hosting environment: Production
Content root path: /
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
My Dockerfile is as below (generated by Azure Core Tools):
FROM mcr.microsoft.com/azure-functions/node:2.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY . /home/site/wwwroot
When I try to use 'microsoft/azure-functions-runtime:v2.0.0-beta1' as the base image, I can see the function loading and I am able to access the HTTP trigger. Is there anything missing, or do I need to use a different image? | Azure Function Docker not working with http trigger |
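A concrete sketch of the two changes described in the answer above (the bindings follow the shape of the default HttpTrigger template; adjust names to your project): the Dockerfile line that enables console logging, and a function.json with authLevel switched to anonymous:
ENV AzureFunctionsJobHost__Logging__Console__IsEnabled=true
{
  "bindings": [
    { "authLevel": "anonymous", "type": "httpTrigger", "direction": "in", "name": "req", "methods": ["get", "post"] },
    { "type": "http", "direction": "out", "name": "res" }
  ]
}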
Run:
gcloud config unset container/use_client_certificate
After this, log out and log in again. It should work. This happens when you disable Legacy Authorisation in the cluster settings, because the client certificate that you are using is a legacy authentication method. | I was following this tutorial on continuous integration using GitLab and Kubernetes (in my case on Google Cloud): https://about.gitlab.com/2016/12/14/continuous-delivery-of-a-spring-boot-application-with-gitlab-ci-and-kubernetes/. At some point in the tutorial you have to first delete and then create a secret for the GitLab image registry:
- kubectl delete secret registry.gitlab.com
- kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=$REGISTRY_USERNAME --docker-password=$REGISTRY_PASSWD --docker-email=$EMAIL
Things go wrong in this step; I get the following error:
Error from server (Forbidden): secrets "registry.gitlab.com" is forbidden: User "client" cannot delete secrets in the namespace "default": Unknown user "client"
Error from server (Forbidden): secrets is forbidden: User "client" cannot create secrets in the namespace "default": Unknown user "client"
I get the exact same error in the Google Cloud Shell. Adding the following line does not really help; I still get the creation error (I am also 100% sure that the deletion 'crashes' too, but the '2>/dev/null' just makes it move on to the creation step):
kubectl delete secret registry.gitlab.com 2>/dev/null || echo "secret does not exist"
What am I doing wrong? Thanks in advance! | kubectl delete/create secret forbidden (Google cloud platform) |
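One practical note on the answer above: after unsetting the legacy client certificate, the kubectl credentials usually need to be refreshed. A hedged sketch, where CLUSTER_NAME and ZONE are placeholders for your own cluster:
gcloud config unset container/use_client_certificate
gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE
After that, kubectl commands run with your Google identity instead of the legacy "client" certificate user.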
I ended up resolving this during the week; hopefully the answer will help others. When using VSTS hosted build agents to produce images based on the microsoft/aspnetcore:latest base image, unless you use the (Linux Preview) hosted build agent you will end up with a Windows container, which will not run on the Linux App Service. Once I switched to the hosted Linux build agent, the container loads successfully and my issue is resolved. | I have followed the tutorials for building a .NET Core web application into a Docker image, publishing to an Azure container registry, and then setting up my VSTS release template to deploy the container to the App Service. This all appears to work: I can view my image in the container registry, and the deployment appears to succeed, but when navigating to the App Service site all I get is an HTTP 503 - Service Unavailable. The App Service is started and I can see deployments in my file system via bash, so I wondered if I am missing something. I do not have a 'startup' command in any of my templates; could it be this? The site works perfectly from VS2017, including debugging via Docker, so it really is just a case of... how do I get the App Service to actually load and execute the image? Thank you!
EDIT: Further to this, I have got access to the Docker diagnostics logs, which claim "image operating system "windows" cannot be used on this platform". My base image is microsoft/aspnetcore:2.0, which runs perfectly fine in my Linux container in my development environment... but appears not to work in the Linux App Service? Is the aspnetcore:2.0 base image not suitable for a Linux App Service? | Azure app service docker container 'Service Unavailable' |
The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>. That means the image has a snapshot of the files as they were when the image was built. When you start a container from that image, it will see the copy of your files in its file system. Modifying the original files will not have any effect on the copies inside the container, so the app doesn't see those changes and doesn't restart. If you want the files to change inside the container, you can mount a host directory as a volume. Also from the docs:
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container's image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again.
The Docker run command will probably look something like this:
docker run -d -v /absolute/path/to/src:/app
Then your file changes should be reflected in the files inside the container (because they will be the same files) and everything should restart as expected. You may also be interested in the post "Dockerize a Flask, Celery, and Redis Application with Docker Compose". It takes it one step further and uses Docker Compose to orchestrate a Flask development environment. | I am using flask-script to run my app:
if __name__ == "__main__":
manager.run()
In Docker I have the following:
CMD [ "python", "manage.py", "runserver", "-h", "0.0.0.0", "-p", "5000"]
Now when I build and run my container the app runs fine. However, if I make changes to my code and save, the app does not restart despite my env having a DEBUG=True variable set. Am I missing something here? Dockerfile:
FROM python:3.4-slim
RUN apt-get update -y && \
apt-get install -y \
python-pip \
python-dev \
pkg-config \
libpq-dev \
libfreetype6-dev
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
CMD [ "python", "manage.py", "runserver"] | Restart flask app in docker on changes |
You can start your container in a detached mode:
docker run -it -d my_container
The -d option here means your container will run in "detached" mode, in the background. If you want to attach to the container and drop to a shell, you can use:
docker exec -it my_container /bin/bash
Note, if your container is based on an Alpine image, you need to use sh, i.e.:
docker exec -it my_container /bin/sh | I have a Dockerfile:
FROM centos:7
So I have no entrypoint in the Dockerfile.
Then I build it into an image:
sudo docker build -t my_container .
Then I start it:
sudo docker run -t my_container
and I get an open tty into the container:
root@my_container_id/
If I start it without -t it stops immediately after starting.
How can I run the docker container without starting a tty and without an entrypoint? | How can I run docker container without entering into container |
If you post your Dockerfile it will be helpful, but there are multiple options for you. 1 - Instead of using export, set your NODE_EXTRA_CA_CERTS with the ARG option in the Dockerfile; it will be used for all users, no matter whether you change your user between builds, like this:
FROM node:alpine AS deps
ARG NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt
COPY my.crt /usr/local/share/ca-certificates/
RUN cat /usr/local/share/ca-certificates/my.crt >>/etc/ssl/certs/ca-certificates.crt
RUN npm install --global pm2
But if you set a variable with export, it will be used only for the RUN entry in which you used export. Remember that if you are doing a multi-stage build, ARG is scoped to its stage, and if you need to set this in different stages, you have to use your ARG in each stage. 2 - Use http instead of https (it is not secure, but usable). You can set it within your configuration like:
npm config set registry http://registry.npmjs.org/
3 - Add your CA certificate to the trusted certificates within your Dockerfile like:
...
COPY ca.crt /usr/local/share/ca-certificates/ca.crt
RUN apt update && \
apt install -y ca-certificates && \
update-ca-certificates
... | I am getting an error running npm as root in a Dockerfile.
> [runner 5/10] RUN npm install --global pm2:
#0 71.79 npm ERR! code UNABLE_TO_GET_ISSUER_CERT_LOCALLY
We have an antivirus/corporate firewall that we can't turn off, which substitutes SSL certificates to inspect traffic. My problem is that because npm install --global pm2 runs as root, it does not honor export NODE_EXTRA_CA_CERTS=/path/to/my-cacert.crt. I tried RUN npm config set cafile /path/to/my-cacert.crt, but that also didn't work for some reason. How can I fix UNABLE_TO_GET_ISSUER_CERT_LOCALLY when running npm as root in a Docker container? This Dockerfile reproduces the issue:
FROM node:alpine AS deps
COPY my.crt /usr/local/share/ca-certificates/
RUN cat /usr/local/share/ca-certificates/my.crt >>/etc/ssl/certs/ca-certificates.crt
RUN npm install --global pm2 | npm UNABLE_TO_GET_ISSUER_CERT_LOCALLY in docker behind corporate firewall |
Editing the start.sh file may lead to other errors. Instead, just put your boot2docker.iso in the location below:
c:\user\USERNAME\.docker\machine\cache
and restart your Docker terminal. | I'm trying to install Docker on a Windows computer but I get this message:
Running pre-create checks...
(default) No default Boot2Docker ISO found locally, downloading the latest release...
Error with pre-create check: "Get https://api.github.com/repos/boot2docker/boot2docker/releases/latest: dial tcp 192.30.252.124:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."
Looks like something went wrong in step 'Checking if machine default exists'... Press any key to continue...
Any suggestions on how to resolve this? | Docker Installation Error on Windows behind Firewall |
I recommend you set up an HPA (Horizontal Pod Autoscaler) for your workers. It will require setting up support for the metrics API. For custom metrics on later versions of Kubernetes, heapster has been deprecated in favor of the metrics server. If you are using a public cloud like AWS, GCP, or Azure, I'd also recommend setting up an autoscaling group so that you can scale your VMs or servers based on metrics like average CPU utilization. Hope it helps! | I am going to deploy a Python Flask server with Docker on Kubernetes, using Gunicorn and Gevent/Eventlet as asynchronous workers. The application will: subscribe to around 20 different topics on Apache Kafka; score some machine learning models with that data; and upload the results to a relational database. Each topic in Kafka will receive 1 message per minute, so the application needs to consume around 20 messages per minute from Kafka. For each message, the handling and execution take around 45 seconds. The question is: how can I scale this in a good way? I know that I can add multiple workers in Gunicorn and use multiple replicas of the pod when I deploy to Kubernetes. But is that enough? Will the workload be automatically balanced between the available workers in the different pods? Or what can I do to ensure scalability? | How is Python scaling with Gunicorn and Kubernetes? |
Disable Hyper-V as follows: run a command prompt as administrator and execute:
dism /Online /Disable-Feature:Microsoft-Hyper-V
then reboot and run:
bcdedit /set hypervisorlaunchtype off | After restarting the machine and opening the Docker Quickstart Terminal, I get the following error:
Unable to start the VM: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe startvm default --type headless failed:
VBoxManage.exe: error: Raw-mode is unavailable courtesy of Hyper-V. (VERR_SUPDRV_NO_RAW_MODE_HYPER_V_ROOT)
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component ConsoleWrap, interface IConsole
Details: 00:00:02.064418 Power up failed (vrc=VERR_SUPDRV_NO_RAW_MODE_HYPER_V_ROOT, rc=E_FAIL (0X80004005))
Looks like something went wrong in step ´Checking status on default´... Press any key to continue... | Docker Quickstart Terminal. Unable to start the VM |
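A follow-up to the answer above: if Hyper-V is needed again later (for example for Docker Desktop), the change can be reversed from an administrator prompt, followed by a reboot; a hedged sketch using the documented long-form dism syntax:
dism /Online /Enable-Feature /FeatureName:Microsoft-Hyper-V /All
bcdedit /set hypervisorlaunchtype auto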
The problem lies in the Pycharm 'limit' of managing a docker machine on a remote host 'under the hood'. When inserting the volume mapping in the run / debug configuration, it is interpreted as a local path and therefore, in this case, a path that must be present on the remote server. So, for now, the only option is to mount the local path (the folder where the project is located) on the remote host of the Docker service by first sharing it through an SSHFS or NFS service.
So... (1) I shared the PyCharm project folder (local machine IP 192.168.1.10) using NFS; (2) I mounted the shared folder on the server host (on server IP 192.168.1.22) with mount -t nfs 192.168.1.10:/home/user/PythonProjects /home/ext-user/mnt/projects; then (3) in the Run/Debug configuration of PyCharm I mapped the volumes with the path mounted on the remote server... Run... the program now runs without any errors. [Run result screenshot] These are some specifications of my new configuration (screenshots: Run/Debug Configuration, and the docker container settings with the volume mapping in the Run/Debug Configuration). I hope the solution can be useful to other people. I also hope that there are better solutions than mine :-) | As specified in the title, I am trying to use PyCharm Professional (2018.2) with a Python remote interpreter in a Docker machine hosted on a remote server in my LAN. I created a very simple example by following the help at 'https://www.jetbrains.com/help/pycharm/using-docker-as-a-remote-interpreter.html'. PyCharm 2018.2 is installed on a LAN PC (192.168.1.10) on a Debian distro; Docker is installed on a LAN Debian server (192.168.1.22). I was able to configure Docker as a remote interpreter and to connect with the Docker service through the PyCharm tool, but when I try to run (or debug) main.py in the Docker container I always get this:
37073edcd9d2:python -u /opt/project/main.py (null): can't open file '/opt/project/main.py': [Errno 2] No such file or directory
Process finished with exit code 2
The execution is certainly done in the remote Docker container, but it seems that the file to be executed is not found. I manually attached the local volume as described on various blogs, with all possible variations, but I always get the same error.
These are some specifications of my configuration (screenshots: Docker tool settings, project interpreter settings, Run/Debug Configuration, and the docker container settings with the volume mapping in the Run/Debug Configuration). Am I missing something? Thanks. Any help is appreciated! | Pycharm Remote interpreter on Docker remote: [Errno 2] No such file or directory |
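For reference, the NFS share described in the workaround above is typically declared on the local machine in /etc/exports; a hedged sketch reusing the IPs and path from the question:
/home/user/PythonProjects 192.168.1.22(rw,sync,no_subtree_check)
followed by exportfs -ra (or restarting the NFS server) before mounting the share on the Docker host.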
Just as David has suggested in his comment, you need to add port mapping in docker-compose.yml. So, your modified docker-compose.yml would be something like this:
version: '3'
services:
db:
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: somewordpress
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
ports:
- "3306:3306"
wordpress:
depends_on:
- db
image: wordpress:latest
ports:
- "8028:80"
- "8029:8029"
volumes:
- ./themes/travelmatic:/var/www/html/wp-content/themes/yadayada
restart: always
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: wordpress
VIRTUAL_HOST: leasepilot.local
volumes:
db_data:
And you have already provided the creds in the docker-compose.yml as environment variables. | I have created a local docker wordpress instance and I am trying to connect to the database with a SQL client (in my case TablePlus) but I am having trouble. I created the docker containers from the docker-compose.yml file shown here:
version: '3'
services:
db:
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: somewordpress
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
wordpress:
depends_on:
- db
image: wordpress:latest
ports:
- "8028:80"
- "8029:8029"
volumes:
- ./themes/travelmatic:/var/www/html/wp-content/themes/yadayada
restart: always
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: wordpress
VIRTUAL_HOST: leasepilot.local
volumes:
db_data:
I have tried every combination of wordpress and somewordpress in these fields (screenshot of the connection form omitted). I also have the option to connect over SSH but I don't feel I would need to do that.
1) What is the best way to debug this type of issue?
2) What are the creds? lol | Connecting to my local docker Database Instance from Table Plus |
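Before configuring TablePlus, a quick way to verify the port mapping from the answer above is the mysql command-line client on the host; a hedged sketch using the credentials from the compose file:
mysql -h 127.0.0.1 -P 3306 -u wordpress -p
entering the password wordpress when prompted; if this connects, the same host, port and credentials go into the SQL client.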
You should connect via the container's name instead:
db:
image: mysql:5.7.26
container_name: godockerDB
environment:
MYSQL_USER: docker
MYSQL_ROOT_PASSWORD: password
MYSQL_PASSWORD: password
MYSQL_DATABASE: godocker
ports:
- "3306:3306"Then you can conenct via container namefunc dbConnect() *gorm.DB {
db, err := gorm.Open("mysql", "docker:password@tcp(godockerDB)/godocker")
if err != nil {
panic(err.Error())
}
return db
} | I'm trying to connect to a MySQL server from Docker using Golang, Gin and GORM.
db, err := gorm.Open("mysql", "docker:password@/godocker")
if err != nil {
panic(err.Error())
}
return db
}docker-compose.ymldb:
image: mysql:5.7.26
environment:
MYSQL_USER: docker
MYSQL_ROOT_PASSWORD: password
MYSQL_PASSWORD: password
MYSQL_DATABASE: godocker
ports:
- "3306:3306"Result of hittingdocker-compose pscommandName Command State Ports
------------------------------------------------------------------------------------------
gin-docker_api_1 /bin/sh -c gin -i run Up 0.0.0.0:3001->3001/tcp
gin-docker_db_1 docker-entrypoint.sh mysqld Up 0.0.0.0:3306->3306/tcp, 33060/tcpThanks | panic: dial tcp 127.0.0.1:3306: connect: connection refused |
Sincedocker-compose v2.5.0this is now possible.Dockerfile:# syntax=docker/dockerfile:1.2
RUN --mount=type=secret,id=mysecret,target=/root/mysecret cat /root/mysecretdocker-compose.ymlservices:
my-app:
build:
context: .
secrets:
- mysecret
secrets:
mysecret:
file: ~/.npmrc | I am trying to build a docker container with private node packages in it. I have followedthis guideto use secrets to reference npmrc file securely to install the dependencies. I can get this to work when building the image directly using a command like this:docker build --secret id=npm,src=$HOME/.npmrc .but I cannot get this working with docker compose. When running adocker compose buildit acts like there is no npmrc file and gives me a 401 when trying to download dependencies.I provided a stripped down version of Dockerfile and docker-compose.yml below.Dockerfile# syntax = docker/dockerfile:1.2
FROM node:14.17.1
COPY . .
RUN --mount=type=secret,id=npm,target=/root/.npmrc yarn --frozen-lockfile --production
EXPOSE 3000
CMD [ "npm", "start" ]docker-compose.ymlversion: '3.7'
services:
example:
build: packages/example
ports:
- "3000:3000"
secrets:
- npm
secrets:
npm:
file: ${HOME}/.npmrc | How to use file from home directory in docker compose secret? |
Jess Frazellewould not disagree with you.In herblog post "Docker Containers on the Desktop", she is containerizing everything.Everything.LikeChrome itself:$ docker run -it \
--net host \ # may as well YOLO
--cpuset-cpus 0 \ # control the cpu
--memory 512mb \ # max memory it can use
-v /tmp/.X11-unix:/tmp/.X11-unix \ # mount the X11 socket
-e DISPLAY=unix$DISPLAY \ # pass the display
-v $HOME/Downloads:/root/Downloads \ # optional, but nice
-v $HOME/.config/google-chrome/:/data \ # if you want to save state
--device /dev/snd \ # so we have sound
--name chrome \
jess/chromeBut Docker containers are not limited to that usage, and are mainly a way to represent a stable well-defined and reproducible execution environment, for one service per container, that you can use from a development workstation up to a production server. | After playing around with Docker for the first time over the week-end and seeing tiny images for everything from irssi, mutt, browsers, etc, I was wondering if local installs of packages are making way for dozens of containers instead?I can see the benefit in keeping the base system very clean and having all these containers that are all self-contained and could be easily relocated to different desktops, even Windows. Each running a tiny distro like Alpine, with the app e.g. irssi, etc....Is this the way things are moving towards or am I missing the boat here? | Docker: containers vs local installs |
You can do this using Docker volumes: https://docs.docker.com/userguide/dockervolumes/ For example:
docker run -v /var/log/docker:/var/log your-image
will mount the log directory onto your local file system. You can also get much fancier, creating containers just for data. It's all explained in the link above. | I have an application running inside a Docker container. The application writes log messages into local log files. How can I make the log files persistent in case the Docker container stops or crashes? Since containers are run-time entities, when I stop the container my logs/data are gone. Thanks,
Sohan | How can i persist my logs/data to local filesystem in docker |
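As an alternative to the bind mount in the answer above, a named volume also survives container removal; a hedged sketch with an illustrative volume name (app-logs) and log path:
docker volume create app-logs
docker run -v app-logs:/var/log your-image
The logs then live in the app-logs volume and can be inspected later, for example with docker run --rm -v app-logs:/data alpine ls /data.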
I don't think this is going to work - you're effectively trying to use two X servers - the host and the container - and I suspect they are both expecting to have exclusive use of the video card.What you can do instead is use the X server on the host from the container by bind-mounting the X Server socket. This SO answer explains how:https://stackoverflow.com/a/25334301/4332I'm not sure this will help in your particular case, but I don't entirely understand why you need an XServer running in the container at all. I think you should still have access to the GPU with --privileged. | I created docker container with X server inside. I use it for some off-screen OpenGL rendering.
This container should work on any system (with or without X server running) and it should use hardware GPU if it exists (so I cannot use xvfb).When I use this container on server-like system without GUI, everything works perfectly. But when I run the container on Ubuntu 14.04 Desktop, the screen turns off each time I start X server in my container.I start container with --priviliged so /dev folder is shared with container. I believe it involves some kind of conflict.Is there a way to start X inside the container such as host X server is still working?UPDATE:I see the following in Xorg.0.log:AIGLX: Suspending AIGLX clients for VT switch
(II) NOUVEAU(0): NVLeaveVT is called.
UPDATE: Can I use xvfb instead of a real Xorg server? Does it support actual hardware GPU rendering? | Host screen turns off when I start X server in docker container |
Yes, it is possible. If you use docker run then you should do the following:
docker run -v /path/on/host:/wwwroot/path/in/container
If you use docker-compose then you should add the following to the service:
version: "3"
services:
myapp:
build: .
volumes:
- /path/on/host:/wwwroot/path/in/container
If you are using this on Docker for Windows then you may have to do some path translation, like /c/path/on/host:/wwwroot/path/in/container | I am trying to run an ASP.NET Core application using Docker and I would like to expose the external wwwroot folder to the container, so that when I make changes to it from the outside, they are automatically available to my app. Is this possible, using volumes? | ASP.NET Core + Docker + Expose wwwroot |
To get a PHP Docker container with the intl extension, you need to extend the official PHP image. To do so, declare the use of your own Dockerfile for your PHP image in docker-compose.yml:
services:
php:
# Remove this line
# image: php:7-fpm
# Add this one instead
build: './docker/php'
# ...
Then, add the following Dockerfile to the docker/php folder:
FROM php:7.1-fpm
RUN apt-get update && apt-get install -y \
libicu-dev \
&& docker-php-ext-install \
intl \
&& docker-php-ext-enable \
intl
You can now run docker-compose build to get your PHP container built with the intl extension. A few notes: I prefer to state explicitly which PHP version I use (here "7.1.x") rather than the more generic "7.x" you defined with php:7-fpm. I also preferred to use the docker-php-ext-install and docker-php-ext-enable command utilities provided by the official PHP image (see the "How to install more PHP extensions" section in the PHP image documentation). | I am a beginner with docker and docker-compose and I need your help. I'm making a PHP-NGINX-PostgreSQL Symfony development environment using docker-compose. Here it is:
web:
image: nginx:1.13.5
ports:
- "80:80"
volumes:
- ./html:/html
- ./site.conf:/etc/nginx/conf.d/default.conf
links:
- php
php:
image: php:7-fpm
volumes:
- ./html:/html
links:
- postgres
postgres:
image: postgres:9.6.5
ports:
- "5432:5432"
environment:
POSTGRES_PASSWORD: postgresNow, i would like to install php7.2-intl into my php container. So i would like to execute something like :$ sudo LC_ALL=C.UTF-8 add-apt-repository ppa:ondrej/php
$ sudo apt-get update
$ sudo apt-get install php7.2-intlCould you help me? I'm really stuck and also I dont have a Dockerfile file, just a docker-compose.yml file. | install packages from docker-compose.yml into docker container |
With a named volume (not with a host volume, aka bind mount), Docker will initialize an empty named volume with the contents of the image at the location where you mount it. So if you have files in your image at /datavolume1, and DataVolume1 is empty, Docker will copy those files into the named volume. | I want to share file storage between two containers. From the documentation, I've seen that you can create and use volumes like this:
docker volume create --name DataVolume1
docker run -ti --rm -v DataVolume1:/datavolume1 ubuntu
However, I want containers to be able to access an initial set of shared data. Does docker support publishing of volumes? If not, does this mean I should write the initial data manually, after creating the volume, or is there another solution for publishing the data along with the images? | In docker, can I publish a volume with initial data? |
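To make the behaviour described in the answer above concrete, a hedged sketch: bake the seed data into an image at the mount path, then mount an empty named volume there and Docker copies the image's files into it on first use (the image name seed-image and the seed-data folder are illustrative):
# Dockerfile for an image that carries the seed data
FROM alpine
COPY seed-data/ /datavolume1/
# build, seed the volume, then reuse it from any container
docker build -t seed-image .
docker volume create DataVolume1
docker run --rm -v DataVolume1:/datavolume1 seed-image true
docker run -ti --rm -v DataVolume1:/datavolume1 ubuntu ls /datavolume1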
Just to add my 2 cents, as I've also recently been through those GitLab documents to get the Docker GitLab Runner working. The 'Docker image installation and configuration' guide tells you to start that container first; however, I believe that is a mistake, and you want to do that after registering the Runner. If you did run:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
just remove the docker container with docker rm -f gitlab-runner, and move on to registering the runner:
docker run --rm -t -i -v /srv/gitlab-runner/config:/etc/gitlab-runner --name gitlab-runner gitlab/gitlab-runner register
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest(NB, if this doesn't work because of the name being in use again - just run thedocker rm -f gitlab-runnercommand again - you won't lose the gitlab-runner configuration).And that would stand up the Docker gitlab-runner with the configuration set from the register command.Hope this helps! | I'm following thisguideto install docker for my GitLab server running on Ubuntu 16.4.When I execute the following command:docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latestSo far so good. However, when I run the next command to register the runner from thisguide:docker run --rm -t -i -v /srv/gitlab-runner/config:/etc/gitlab-runner --name gitlab-runner gitlab/gitlab-runner registerI keep getting the message:docker: Error response from daemon: Conflict. The container name "/gitlab-runner" is already in use by container "b055ded012f9d0ed085fe84756604464afbb11871b432a21300064333e34cb1d". You have to remove (or rename) that container to be able to reuse that name.However, when I rundocker container listto see the list of containers, it's empty.Anyone know how I can fix this error? | Conflict. The container name "/gitlab-runner" is already in use by container |
As mentioned in "IPs for all the Things" (byJess Frazelle), you should be able, with docker 1.10, to run your registry with a fixed IP address.It uses the--net= --ip=options of docker run.# create a new bridge network with your subnet and gateway for your ip block
$ docker network create --subnet 203.0.113.0/24 --gateway 203.0.113.254 iptastic
# run a nginx container with a specific ip in that block
$ docker run --rm -it --net iptastic --ip 203.0.113.2 nginx
# curl the ip from any other place (assuming this is a public ip block duh)
$ curl 203.0.113.2
You can adapt this example to your registry's docker run parameters. | I have a Docker image I want to push to my registry (hosted on localhost). I do:
docker push localhost:5000/my_image
This issue revolved around the host's network configuration. The eth0 interface was improperly configured. The following commands helped me determine it was a DNS issue.$ docker run --rm debian:jessie ping -c 5 google.com
ping: unknown host
$ docker run --rm debian:jessie ping -c 5 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=38 time=37.147 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=38 time=32.917 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=38 time=31.475 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=38 time=30.692 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=38 time=31.180 ms
--- 8.8.8.8 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 30.692/32.682/37.147/2.352 ms | During my Docker container build process, I attemptted to install a few packages using apt-get install. However the process failed to complete because the 3 of the 4 packages could not be found.Step 1 : RUN apt-get update && apt-get install -y netcat ca-certificates build-essential libssl-dev
---> Running in 38d22d97ec4a
Err http://http.debian.net jessie InRelease
Err http://http.debian.net jessie-updates InRelease
Err http://security.debian.org jessie/updates InRelease
Err http://http.debian.net jessie Release.gpg
Could not resolve 'http.debian.net'
Err http://security.debian.org jessie/updates Release.gpg
Could not resolve 'security.debian.org'
Err http://http.debian.net jessie-updates Release.gpg
Could not resolve 'http.debian.net'
Reading package lists...
W: Failed to fetch http://http.debian.net/debian/dists/jessie/InRelease
W: Failed to fetch http://http.debian.net/debian/dists/jessie-updates/InRelease
W: Failed to fetch http://security.debian.org/dists/jessie/updates/InRelease
W: Failed to fetch http://http.debian.net/debian/dists/jessie/Release.gpg Could not resolve 'http.debian.net'
W: Failed to fetch http://http.debian.net/debian/dists/jessie-updates/Release.gpg Could not resolve 'http.debian.net'
W: Failed to fetch http://security.debian.org/dists/jessie/updates/Release.gpg Could not resolve 'security.debian.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package netcat
E: Unable to locate package build-essential
E: Unable to locate package libssl-dev
Removing intermediate container 38d22d97ec4a
2015/08/10 12:03:07 The command [/bin/sh -c apt-get update && apt-get install -y netcat ca-certificates build-essential libssl-dev] returned a non-zero code: 100At first, I thought this was an issue with my base image, however I have no issues building the container on another VM. Thoughts? | Unable to locate package while building Docker image |
This does in fact work. I erroneously had the Papertrail details misconfigured, so I was not seeing the logs. | I have the following docker-compose configuration:
version: '3'
services:
worker:
image: // image
logging:
driver: syslog
options:
syslog-address: "udp://XXX.papertrailapp.com:XXXX"
tag: "{{.Name}}/{{.ID}}"When I deploy this to DigitalOcean under Ubuntu, I can successfully run thedocker-compose upcommand like so:docker-compose -f docker-compose.yml upWhen that command runs I can see this output:worker_2_844fc7675414 | WARNING: no logs are available with the 'syslog' log driver
worker_1_5c91a3426046 | WARNING: no logs are available with the 'syslog' log driverIt appears that syslog is correctly configured for thedocker-compose upcommand to run, but that perhaps the syslog driver is not available?All the instructions I can find for using syslog with docker refer todocker runcommands. But how can I get syslog working with docker-compose? | docker-compose logging is not working with syslog option |
You can use docker commit to build a new image based on the container. A better approach, however, is to use a Dockerfile that builds an image based on verdaccio/verdaccio with the necessary changes in it. This makes the process easily repeatable (for example if a new version of the base image comes out). A further option is the use of volumes, as you already mentioned. | Do I understand Docker correctly?
docker run -it --rm --name verdaccio -p 4873:4873 -d verdaccio/verdaccio
verdaccio -p 4873:4873 \
-v $V_PATH/conf:/verdaccio/conf \
-v $V_PATH/storage:/verdaccio/storage \
-v $V_PATH/plugins:/verdaccio/plugins \
verdaccio/verdaccioHowever this command would throwfatal--- cannot open config file /verdaccio/conf/config.yaml: ENOENT: no such file or directory, open '/verdaccio/conf/config.yaml' | Docker basics, how to keep installed packages and edited files? |
In the end, the problem "suddenly" went away andoc new-appnow indeed exposes the port (as the documentation says). I am so far using a trivialDockerfilesuch as thisFROM debian:stretch
EXPOSE 5432
COPY start.sh /usr/local/bin/start.sh
CMD ["start.sh"]wherestartup.shcallssleep infinity. In terms of explanation, I can only guess that I had made some secondary and transient error that caused an interference.Here are lessons learned while attempting to diagnose and solve the issue (big thanks to @GrahamDumpleton):If all goes well withoc new-app,oc get allshould indicate port5432/TCPfor resourcesvc/my_appand should also list new OpenShift (and Kubernetes) resources of typesdeploymentconfigs,buildconfigs,builds,imagestreams,po, andrc.This automatic mechanism exposes the port only inside the
cluster, i.e.svc/my_apphas (and listens on) a cluster-IP (not: external-IP).Additional arguments--dry-run -output jsoncauseoc new-appto conduct a dry-run and print an exact description (in JSON format) of what resources it would normally create. | I'm exploring OpenShift 3.9 and have managed to get a first container built and running withoc new-appand the Docker build strategy. My Dockerfile includes the commandEXPOSE 5432.After the rolloutoc describe istag/my_app:latest | grep ^ExposesreportsExposes Ports: 5432/tcp, so that looks good: the image exposes port 5432. Butoc describe po/my_app-1-some_id | grep "^\s*Port"reportsPort: , so overall it seems as if the port is exposed at the level of Docker, but not yet the level of Kubernetes/OpenShift.TheOpenShift documentationsays the following:The new-app command attempts to detect exposed ports in input images.
It uses the lowest numeric exposed port to generate a service that
exposes that port. In order to expose a different port, after new-app
has completed, simply use the oc expose command to generate additional
services.Why doesoc new-appnot expose port 5432 in this situation (in fact it does not create anyserviceresource either) and how can I make it do so automatically, as the input image already does and as seems possible judging from the documentation?UPDATEHere is more detail on how the new application was created:oc new-app ssh://my_account@my_git_server/my_path/my_repo.git
--context-dir=my_dir --strategy=docker --name my_appThe Git repository contains a so far trivialmy_dir/Dockerfile, and it in turn contains the commandEXPOSE 5432. | "oc new-app" does not expose port altough the input image does |
Ok - at this stage I have to admin, that I had a thorough misunderstanding about artifacts in build pipelines.Theupload(deprecated) andpublishtasks are short-hand for the Publish Pipeline Artifact task (https://learn.microsoft.com/en-us/azure/devops/pipelines/artifacts/pipeline-artifacts?view=azure-devops&tabs=yaml)Second of all, the path behind thepublishkeyword is the relative folder, that will be published. If you use the Kubernetes service template from Azure DevOps, it will create amanifestfolder on the root of your repo and pre-fill it with a deployment and service yml file.I don't want them there (and must have deleted the folder subconciously), so I moved them and amended the path:- publish: $(System.DefaultWorkingDirectory)/Code/Database/Docker
artifact: sql-drop(The Docker folder is where I keep Dockerfiles, docker-compose, overrides and all te other jazz for spinning up containers)Now in my deployment task, I need to be aware, that I cannot use the literal path from the repository. I need to download the artifact first, and then use the name of the artifact - "sql-drop" as the foldername:steps:
#current means the current pipline. shorthand sytnax
- download: current
# name from the aritfact specifed above
artifact: sql-drop
- task: KubernetesManifest@0
displayName: Create imagePullSecret
inputs:
action: createSecret
secretName: $(imagePullSecret)
dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
- task: KubernetesManifest@0
displayName: Deploy to Kubernetes cluster
inputs:
action: deploy
# Use sql-drop as folder name and specify manifests
manifests: |
$(Pipeline.Workspace)/sql-drop/his-sql.dev.deployment.yml
$(Pipeline.Workspace)/sql-drop/his-sql.dev.service.yml
imagePullSecrets: |
$(imagePullSecret)
containers: |
$(containerRegistry)/$(imageRepository):$(tag) | I am trying to push an image to AKS using the default Azure DevOps template:stages:
- stage: Build
displayName: Build stage
jobs:
- job: Build
displayName: Build
pool:
vmImage: $(vmImageName)
steps:
- task: Docker@2
displayName: Build and push an image to container registry
inputs:
command: buildAndPush
repository: $(imageRepository)
dockerfile: $(dockerfilePath)
containerRegistry: $(dockerRegistryServiceConnection)
tags: |
$(tag)
- upload: manifests
artifact: manifestsWhich results in the following error:##[error]Path does not exist: /home/vsts/work/1/s/manifestsI've tried using the default Publish task- task: PublishPipelineArtifact@1
inputs:
artifactName: 'manifests'
path: 'manifests'but this did not change anything. Can somebody please explain, what is happening here and why the default template from Msft is not working? | Publish build artifact task results 'path does not exist' error |
Running docker-compose run ... starts a new container and executes the command in there. Then when you run docker-compose up it creates ANOTHER new container... which doesn't have the changes from your previous command. What you want to do is start up a data container to hold your static files. Add another container to your compose file like this:
web-static:
build: .
volumes:
- /usr/src/app/static
env_file: .env
command: manage.py collectstaticand add web-static to the 'volumes-from' list on your nginx container | I have a Django environment that I create with Docker Compose, and I'm trying to usemanage.py collectstaticto copy my site's static files to a directory in the container. This directory (/usr/src/app/static) is also a Docker Volume.After building my docker containers (docker-compose build), I rundocker-compose run web python manage.py collectstatic, which works as expected, but my web server (Nginx) is not finding the files, nor are there any files when I rundocker-compose run web ls -la /usr/src/app/static.Any ideas on what I'm doing wrong?(Note: I don't havemanage.py collectstaticin my Dockerfile because my setup needs my ".env" file loaded, and I didn't see a way to load this in the Dockerfile. In either case, I would like to know why Docker Compose doesn't work as I'm expecting it to.)Here are my config files:## docker-compose.yml:
web:
restart: always
build: .
expose:
- "8000"
links:
- postgres:postgres
volumes:
- /usr/src/app/static
- .:/code
env_file: .env
command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload
nginx:
restart: always
build: ./config/nginx
ports:
- "80:80"
volumes:
- /www/static
volumes_from:
- web
links:
- web:web
postgres:
restart: always
image: postgres:latest
volumes:
- /var/lib/postgresql
ports:
- "5432:5432"
## Dockerfile:
FROM python:3.4.3
RUN mkdir /code
WORKDIR /code
ADD . /requirements/ /code/requirements/
RUN pip install -r /code/requirements/docker.txt
ADD . /code/ | Volume changes not persistent after "docker-compose run" command (Django's collectstatic) |
It is actually possible to just untar Oracle Java in /opt, but that's just a kind of last resort. The Oracle binaries of the JRE and JDK don't require any system libraries, so it's pretty easy anywhere. I have written some pretty small JRE and JDK images, with which I was able to run Elasticsearch and other major open-source applications. I also wrote some containers that allow me to compile jars on CoreOS (errordeveloper/mvn, errordeveloper/sbt & errordeveloper/lein). As @ISanych pointed out, running multiple Java containers will not impact disk usage; it's pretty much equivalent to running multiple JVMs on the host. If you find that running multiple JVMs is not quite your cuppa tea, then the answer is really that the JVM wouldn't have to be as complex as it is if containers had existed before it. However, Java in a container is still pretty good, as you can have one classpath that is fixed forever and you won't get into dependency hell. Perhaps instead of building uberjars (which is what I mostly do, despite them being known to be not exactly perfect, but I am lazy) one could instead bundle jars in a tarball and then use ADD jars.tar /app/lib/ in their Dockerfile. | I'm learning CoreOS/Docker and am trying to wrap my mind around a few things. With Java infrastructure, is it possible to use the JVM in its own container and have other Java apps/services use this JVM container? If not, I'm assuming the JVM would have to be bundled in each container, so essentially you have to pull the Java dockerfile and merge my Java services; essentially creating a Linux machine + Java + service container running on top of the CoreOS machine. The only other thought I had was that it might be possible to run the JVM on CoreOS itself, but it seems like this isn't possible. | Java JVM on Docker/CoreOS |
When starting Compose, make sure all environment variables are set for Flask. Also provide --host=0.0.0.0 in your entrypoint or command, to listen on all network interfaces. The updated docker-compose file:
version: "3"
services:
web:
build: ./web
volumes:
- './application:/application'
environment:
FLASK_DEBUG: 1
FLASK_ENV: development
FLASK_APP: web_app.py
ports:
- '5000:5000'
entrypoint:
- flask
- run
- --host=0.0.0.0When you want to run the container in interactive mode for development purposes, you could run it withdocker-compose runthe below command.--service-portsis required to expose the containers ports as specified in the compose file. If this flag isn't provided, no external traffic will reach the app. That was my original problem.docker-compose run --service-ports web bashAlternatively you could publish the port manuallydocker-compose run --publish 5000:5000 web bash | Hello I'm trying to setup flask-socketio in a docker container.
It seems to run but I get an error( from the browser) when I try to access localhost on port 5000 like I'm used to do with flask apps. It say's: unable to connect!I will show you the 5 important files: Dockerfile, requirements.txt, docker-compose.yml, web_app.py and index.htmlDockerfile:FROM python:3.6.5
WORKDIR /code
COPY * /code/
RUN pip install -r requirements.txtrequirements.txt:Flask==1.0.2
Flask-SocketIO==3.0.1
eventlet==0.24.1docker-compose.yml:version: "3"
services:
web:
build: ./web
ports:
- '5000:5000'
volumes:
- './web:/code'I use the commandsdocker-compose up --buildanddocker-compose run web /bin/bashto enter this container in interactive mode.web_app.py:from flask import Flask, render_template
from flask_socketio import SocketIO, emit
app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)
@app.route('/')
def index():
return render_template('index.html')
@socketio.on('my event')
def log_message(message):
emit('my response', {'data': 'got it!'})
if __name__ == '__main__':
socketio.run(app)index.html:
SocketIO
Once inside the container I simple run:python web_app.pybut nothing happens. No error and no working page.I feel like I'm missing so steps to initialize everything correctly but I cant find out what it is. The web is full of very different examples and I'm confused. What makes it even harder is that I'm using eventlet here but not every example goes this route. Some use gevent or other things.I would be really glad if someone gave me a little hint.
Cheers | How to setup flask-socketio in a docker container? |
If anybody else needs an answer to this, the answer lies in creating a separate NGINX service and then directing the front-end rules to the static location (xyz.com/static), e.g. see below (part of docker-compose.yml):
nginx:
image: nginx:alpine
container_name: nginx_static_files
restart: always
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
- ./saleor/static/:/static
labels:
- "traefik.enable=true"
- "traefik.backend=nginx"
- "traefik.frontend.rule=Host:xyz.co;PathPrefix:/static"
- "traefik.port=80"You also need to ensure that your Nginx config file (default.conf) is appropriately configured:server {
listen 80;
server_name _;
client_max_body_size 200M;
set $cache_uri $request_uri;
location = /favicon.ico { log_not_found off; access_log off; }
location = /robots.txt { log_not_found off; access_log off; }
ignore_invalid_headers on;
add_header Access-Control-Allow_Origin *;
location /static {
autoindex on;
alias /static;
}
location /media {
autoindex on;
alias /media;
}
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
}All credit goes to Pedro Rigotti on the Traefik slack channel for helping me arrive at the solution. | I have a web application (Django based) that is utilising multiple containers:Web Application (Django + Gunicorn)Traefik (acting as the reverse proxy and SSL termination)Database which is used with the Web applicationRedis which is used with the Web applicationAccording to some of the documentation I have read, I should be serving my static content using something like NGINX. But I don't have any idea on how I would do that. Would I install NGINX on my Web Application container or as a seperate NGINX container. How do I pass the request from Traefik? As far as I am aware you cannot server static content with Traefik.This is what my docker-compose.yml looks like:traefik:
image: traefik
ports:
- 80:80
- 8080:8080
- 443:443
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./traefik/traefik.toml:/etc/traefik/traefik.toml:ro
- ./traefik/acme:/etc/traefik/acme
web:
build: .
restart: always
depends_on:
- db
- redis
- traefik
command: python3 /var/www/html/applications/py-saleor/manage.py makemigrations --noinput
command: python3 /var/www/html/applications/py-saleor/manage.py migrate --noinput
command: python3 /var/www/html/applications/py-saleor/manage.py collectstatic --noinput
command: bash -c "cd /var/www/html/applications/py-saleor/ && gunicorn saleor.wsgi -w 2 -b 0.0.0.0:8000"
volumes:
- .:/app
ports:
- 127.0.0.1:8000:8000
labels:
- "traefik.enable=true"
- "traefik.backend=web"
- "traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE}"
environment:
- SECRET_KEY=changemeinprod
redis:
image: redis
db:
image: postgres:latest
restart: always
environment:
POSTGRES_USER: saleoradmin
POSTGRES_PASSWORD: **
POSTGRES_DB: **
PGDATA: /var/lib/postgresql/data/pgdata
volumes:
- ~/py-saleor/database:/app | How to serve static content with Nginx and Django Gunicorn when using Traefik |
In case you don't use the Docker Desktop app and have installed Docker in the WSL2 Ubuntu instance, edit/create the config file /etc/docker/daemon.json and set a default DNS:
{
"dns": ["8.8.8.8"]
}Restart the Docker service:service docker restart | I am trying to use curl to download releases from github and it cannot seem to resolve the domain.I get the errorcurl: (6) Could not resolve host: objects.githubusercontent.comI am running Docker on WSL 2. Part of my Dockerfile is below and it doesn't get past thecurlcommandFROM alpine:latest
WORKDIR /app
RUN apk update && apk add curl unzip
RUN curl -LO https://github.com/oven-sh/bun/releases/download/bun-v0.1.3/bun-linux-x64.zip && unzip bun-linux-x64.zip
COPY ["package.json", "bun.lockb", "./"]
RUN echo ls
RUN /usr/local/bin/bun-linux-x64/bun installAny help is appreciated | Curl could not resolve host using Docker on WSL 2 |
Add the following to docker-compose.yml under the services: key, and set your host in .env.testing to mysql_test:
mysql_test:
image: "mysql:8.0"
environment:
MYSQL_ROOT_PASSWORD: "${DB_PASSWORD}"
MYSQL_DATABASE: "${DB_DATABASE}"
MYSQL_USER: "${DB_USERNAME}"
MYSQL_PASSWORD: "${DB_PASSWORD}"
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
networks:
- sail | I'm using Laravel Sail as my development environment. According to thedocs,when the MySQL container is starting, it will ensure a database exists whose name matches the value of your DB_DATABASE environment variable.This works perfectly for my development environment, but not so much when it comes to testing since my.env.testingdefines a separate database, and it seems this database does not get created - when Isail mysqlinto the container and runshow databases;it is not listed. As a result, runningsail testfails every test where the database is concerned.SQLSTATE[HY000] [1045] Access denied for user ...My.envfile contains this:DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=devMy.env.testingfile contains this:DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=testDB_USERNAMEandDB_PASSWORDare the same in both files.How can I create this database so that it's available when runningsail test?EDIT:As I dug through therepository codeI found that the database is being created when themysqlcontainer image is built, butit doesn't look like there's an option for creating multiple databases.MYSQL_DATABASEThis variable is optional and allows you to specify the name of a database to be created on image startup. If a user/password was supplied (see below) then that user will be granted superuser access (corresponding to GRANT ALL) to this database. | Using Laravel Sail with a separate testing database |
When you need to run multiple processes in your docker container, a solution is to use supervisord as the main instruction. Docker will start and monitor supervisord, which in turn will start your other processes. Dockerfile example:
FROM debian:9
...
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/my.conf"]Supervisord config example (/etc/supervisor/my.conf):[supervisord]
nodaemon=true
[program:cron]
command=/usr/sbin/crond -f -l 8
stdout_logfile=/dev/stdout
stderr_logfile=/dev/stderr
stdout_logfile_maxbytes=0
stderr_logfile_maxbytes=0
autorestart=true
[program:php-fpm]
command=docker-php-entrypoint php-fpm
Note that it is desirable to configure supervisord to send the logs to /dev/stdout and /dev/stderr, to allow Docker to handle these logs. Otherwise you risk your container slowing down over time as the amount of file writes increases. | Hi, I don't know how to run a cron job inside this container. I've found this: "How to run a cron job inside a docker container". But that overrides the CMD, and I don't know how to keep php-fpm working. | How to run cron jobs inside php-fpm-alpine docker container? |
If you use the sudo command to create a folder outside of your home directory structure for use by Docker, then that folder is going to be owned by the root user, e.g.:
$ sudo mkdir /var/mssql-data
$ ls -la /var/mssql-data
total 0
drwxr-xr-x 2 root wheel 64B 26 May 11:31 ./
drwxr-xr-x 30 root wheel 960B 26 May 11:31 ../
When you try to launch an SQL Server container using a volume mapping with that folder, the container will fail to start - because the Docker backend process doesn't have access - and you will see the "system directory could not be created" error message, e.g.:
$ docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=StrongPassw0rd" -p 1433:1433 -v /var/mssql-data:/var/opt/mssql --name sqlservercontainer -d mcr.microsoft.com/mssql/server:2019-latest
9d6bf76a91af08329ea07fafb67ae68410d5320d9af9db3b1bcc8387821916da
$ docker logs 9d6bf76a91af08329ea07fafb67ae68410d5320d9af9db3b1bcc8387821916da
SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
/opt/mssql/bin/sqlservr: Error: The system directory [/.system] could not be created. File: LinuxDirectory.cpp:420 [Status: 0xC0000022 Access Denied errno = 0xD(13) Permission denied]
To correct the situation you need to give your own account access to the folder, and then a container using that volume mapping will start successfully:
$ sudo chown $USER /var/mssql-data
$ docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=StrongPassw0rd" -p 1433:1433 -v /var/mssql-data:/var/opt/mssql --name sqlservercontainer -d mcr.microsoft.com/mssql/server:2019-latest
3b6634f234024e07af253e69f23971ab3303b3cb6b7bc286463e196dae4de82e | I am trying to run a SQL Server container on my mac through Docker. I ran the following command:
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=strongpassword" -p 1433:1433 --name sqlservercontainer -d mcr.microsoft.com/mssql/server:2019-latest
But the container is immediately exiting. The docker logs for the container look like this:
SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
/opt/mssql/bin/sqlservr: Error: The system directory [/.system] could not be created. File: LinuxDirectory.cpp:420 [Status: 0xC0000022 Access Denied errno = 0xD(13) Permission denied]
/opt/mssql/bin/sqlservr: Error: The system directory [/.system] could not be created. File: LinuxDirectory.cpp:420 [Status: 0xC0000022 Access Denied errno = 0xD(13) Permission denied]
Any idea what needs to be done to solve this? | SQL Server Docker container immediately exiting
Sure, because Docker uses root as the default user. You should create a user in your docker container, switch to that user and then make the folder; then you will get the files without root permissions on your host machine.
Dockerfile:
FROM rust:latest
...
RUN useradd -ms /bin/bash myuser
USER myuser
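If you also want files written into a mounted volume to show up under your own user on the host, the UID inside the container has to match your host UID; a sketch of that idea (the UID build argument is an assumption, not part of the original answer):
FROM rust:latest
ARG UID=1000
# create the user with the UID passed in at build time
RUN useradd -m -u $UID -s /bin/bash myuser
USER myuser
# build with: docker build --build-arg UID=$(id -u) -t myimage .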
 | When I run a docker image, for example like docker run -v /home/n1/workspace:/root/workspace -it rust:latest bash, and I create a directory in the container like mkdir /root/workspace/test, it's owned by root on my host machine. Which means I have to change the permissions every time after I turn off the container to be able to operate with that directory. Is there a way to tell Docker to handle directories and files from my machine's (host machine's) point of view, under a certain user? | Docker volume and host permissions
Docker containers will persist on disk until they are explicitly deleted withdocker rm. If your server restarts you may need to restart your service containers, but your data containers will continue to exist and their volumes will be available to other containers. | I'm a bit confused about data-only docker containers. I read it's a bad practice to mount directories directly to the source-os:https://groups.google.com/forum/#!msg/docker-user/EUndR1W5EBo/4hmJau8WyjAJAnd I get how I make data-only containers:http://container42.com/2014/11/18/data-only-container-madness/And I see somewhat similar question like mine:How to deal with persistent storage (e.g. databases) in dockerBut what if I have a lamp-server setup.. and I have everything nice setup with data-containers, not linking them 'directly' to my source-os and make a backup once a while..Than someone comes by, and restarts my server.. How do I setup my docker (data-only)-containers again, so I don't lose any data? | How persistent are docker data-only containers |
Personal access token is the way to go | I'm trying to use docker-compose to fetch, build and run multiple services from their git repositories. I made a simple docker-compose.yml to test it:
version: '3'
services:
test-service:
build:[email protected]:dan-poltherm/partservicego.git
ports:
- 8005:443
It seems that docker-compose can't fetch the repository. I get the following error when calling docker-compose up --build:
ERROR: error fetching: fatal: cannot run ssh: No such file or directory
I have OpenSSH Client installed (Windows 10 port) and %SYSTEMROOT%\System32\OpenSSH\ added to PATH, I also set GIT_SSH to C:\Windows\System32\OpenSSH\ssh.exe. I can clone the repo with git clone and ssh also works from powershell. | Docker Compose: cannot run ssh: No such file or directory
ISE hosted powershell does not work properly with a bunch of things. So try to do this either inside a standalone powershell console or use VS Code. | I want to check the content of my docker container. I want to run a powershell or command prompt inside the container so I can list directories. This container image is hosting an ASP.NET Web API application using the ASP.NET 4.6.1 framework. I ran the following commands:
docker container ls - to list containers
docker exec -i -t a1da40af6b3c powershell
But nothing happens (as shown in the image). Am I missing anything? | Running powershell or cmd on docker container
When you connect your docker build task to ACR, the docker image will be prefixed by your registry URI. The *** in ***/myrepository is your registry URI, masked in the logs. So you need to specify your image as Your_ACR_URI/myrepository:$(Build.BuildId) in the docker run command. See the example below:
- bash: docker run 'leviacr.azurecr.io/alpinelevi:$(Build.BuildId)' | I am running a Docker task 'buildAndPush' in an Azure Pipeline yaml file, to build my repository image and push it to ACR. This works perfectly fine and I can see my image in docker and ACR. However, I want to break this task up. I want to build the image in docker, run the docker image locally once, then run the test script in the docker image (a python file). Only after the test results are successful should I be pushing this to ACR. So I started with the build task.
- task: Docker@2
inputs:
containerRegistry: 'mycontainerRegistry'
repository: 'myrepository'
command: 'build'
Dockerfile: '**/Dockerfile'
tags: $(Build.BuildId)
This successfully builds my image. Now I run a bash command to list my images.
- bash: docker image ls
I could see my image built, but it shows as '***/myrepository'. This is where the problem lies. I want to use this image and run my newly built docker image to ensure the run completes successfully
- bash: docker run 'myrepository:$(Build.BuildId)'
I get the error that the repo is not found. The Build id is correct as I see it in the image's tag. I cannot use docker run ***/myrepository:$(Build.BuildId) as it throws an invalid format error.
[error]invalid argument "***/myrepository:7167" for "-t, --tag" flag: invalid reference format
Is there a way to resolve this? Is this the right approach I am following? Thanks for your time! | Azure Devops: Docker Build task image name starts with ***/
I had a similar issue a few weeks ago. IIRC confluent-kafka-go requires a recent version of librdkafka-dev, which simply was not yet released to alpine or others.
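If you want to double-check what version a given base image would actually hand you before going further, a quick probe (my own, not from the original answer; the golang images are Debian-based, so apt-cache is available) is:
docker run --rm golang:1.11 apt-cache policy librdkafka-dev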
I was able to find it for ubuntu though, so my solution (which was more involved than I hoped for, but it worked) was to start from clean ubuntu, install librdkafka-dev, install the Go version that I want, and compile inside docker.
Here's how it looks:
FROM ubuntu
# Install the C lib for kafka
RUN apt-get update
RUN apt-get install -y --no-install-recommends apt-utils wget gnupg software-properties-common
RUN apt-get install -y apt-transport-https ca-certificates
RUN wget -qO - https://packages.confluent.io/deb/5.1/archive.key | apt-key add -
RUN add-apt-repository "deb [arch=amd64] https://packages.confluent.io/deb/5.1 stable main"
RUN apt-get update
RUN apt-get install -y librdkafka-dev
# Install Go
RUN add-apt-repository ppa:longsleep/golang-backports
RUN apt-get update
RUN apt-get install -y golang-1.11-go
# build the library
WORKDIR /go/src/gitlab.appsflyer.com/rantav/kafka-mirror-tester
COPY *.go ./
# COPY the rest of your go files here; you may copy recursively if you want
COPY vendor vendor
RUN GOPATH=/go GOOS=linux /usr/lib/go-1.11/bin/go build -a -o main .
EXPOSE 8000
ENTRYPOINT ["./main"] | I am trying to create a docker image with my go application. The application (which was developed on MacOS) depends onconfluent-kafka-gowhich in turn depends onlibrdkafka-devwhich I install in the Docker image like so:FROM golang:1.1
RUN apt-get update
RUN apt-get -y install librdkafka-dev
VOLUME /workspace
WORKDIR /workspace/src/my/app/folder
ENTRYPOINT ["/bin/sh", "-c"]I am getting the following error:my/app/folder/vendor/github.com/confluentinc/confluent-kafka-go/kafka
../folder/vendor/github.com/confluentinc/confluent-kafka-go/kafka/00version.go:44:2: error: #error "confluent-kafka-go requires librdkafka v0.11.5 or later. Install the latest version of librdkafka from the Confluent repositories, seehttp://docs.confluent.io/current/installation.html"As far as I understand the latest versionisinstalled.
How can I fix it? | Building Go Application using confluent-kafka-go on Linux |
If you want to delete all custom added images from the built-in library, you can do this:# get all images that start with localhost:32000, output the results into image_ls file
sudo microk8s ctr images ls name~='localhost:32000' | awk {'print $1'} > image_ls
# loop over file, remove each image
cat image_ls | while read line || [[ -n $line ]];
do
microk8s ctr images rm $line
done;Put it into a .sh file and run the script | I'm getting a low disk space warning on a server where my microk8s and applications are installed. When I run the microk8s ctr image ls command, multiple images appear for an application. Does the "docker image prune -f" command in Docker have an equivalent in microk8s? Or is there a way possible? | How to make microk8s ctr image prune |
You can use docker ps, get container id and write:$docker inspect container_idlike here:"Volumes": {
..
},
"VolumesRW": {
..
}It would give you all volumes of container. | How can I list all the volumes of a Docker container? I understand that it should be easy to get but I cannot find how.Also, is it possible to get the volumes of deleted containers and remove them? | List volumes of Docker container |
You can authenticate with jane,secret against the admin db, not mydb. Running mongo -u jane -p secret is equivalent to running mongo -u jane -p secret --authenticationDatabase admin. Check the container logs to verify it. MONGO_INITDB_DATABASE is for a different purpose. As the docs state:
MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD These
variables, used in conjunction, create a new user and set that user's
password. This user is created in theadmin authentication databaseand given the role of root, which is a "superuser" role.MONGO_INITDB_DATABASE This variable allows you to specify the name of
a database to be used for creation scripts in
/docker-entrypoint-initdb.d/*.js (see Initializing a fresh instance
below). MongoDB is fundamentally designed for "create on first use",
so if you do not insert data with your JavaScript files, then no
database is created.
Initializing a fresh instance
When a container is started for the first time it will execute files
with extensions .sh and .js that are found in
/docker-entrypoint-initdb.d. Files will be executed in alphabetical
order. .js files will be executed by mongo using the database
specified by the MONGO_INITDB_DATABASE variable, if it is present, or test otherwise. You may also switch databases within the .js script.
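So one way to end up with a user that can authenticate directly against mydb is to drop a small script into /docker-entrypoint-initdb.d; a sketch using the names from the question (the file name is made up):
// create-user.js - executed against MONGO_INITDB_DATABASE (mydb) on first startup
db.createUser({
  user: "jane",
  pwd: "secret",
  roles: [{ role: "readWrite", db: "mydb" }]
});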
 | So I'm trying to set up a MongoDB using the official mongo Docker image, version 4.2. What I want to achieve is to use the server with authentication enabled, and I want to have a custom database with a custom user and password. So, I'm setting the following environment variables:
MONGO_INITDB_DATABASE: mydb
MONGO_INITDB_ROOT_USERNAME: jane
MONGO_INITDB_ROOT_PASSWORD: secret
In the documentation it states that if you provide the latter two environment variables, authentication is enabled automatically. However, when I then try to access the mydb database using the credentials jane and secret, all I get is an error:
Supported SASL mechanisms requested for unknown user 'jane@mydb'
SASL SCRAM-SHA-1 authentication failed for jane on mydb from client 172.17.0.1:54702 ; UserNotFound: Could not find user "jane" for db "mydb"
Why is that? What am I missing? My guess is that the user created only has access to the admin database, and I need to grant access for the user jane to the database mydb. I tried to do that using the following command:
mongo admin -u jane -p secret --eval "db.grantRolesToUser('jane', [{role: 'dbOwner', db: 'mydb'}])"
But this didn't work either. What am I missing? | User not found on MongoDB Docker image with authentication
Looks like a known issue. Link
If you are using some editor like vim, when you save the file it does
not save the file directly, rather it creates a new file and copies it
into place. This breaks the bind-mount, which is based on inode. Since
saving the file effectively changes the inode, changes will not
propagate into the container. When the container is restarted it will pick up the new inode. If you edit the file in place you should see changes propagate. This is a known limitation of file mounts and is not fixable. Further down in the comments you can find some workarounds for various editors; check if any works.
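For vim specifically, the workaround that usually comes up is telling it to overwrite the file in place instead of renaming a new copy over it (a common suggestion, not quoted from the linked issue):
" in ~/.vimrc
set backupcopy=yes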
 | I'm a newbie in Docker and I have created an image with this Dockerfile:
FROM node:8.12.0
LABEL version="1.0"
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["node", "index.js"]I run the image and it works. But If I run the image mapping host directory with WORKDIR when I update index.js in host directory this updating is not propagated into WORKDIR.I run the image with this command:docker run --name basketmetrics -v /home/josecarlos/Workspace/nodejs/basketmetrics2:/usr/src/app -p 8080:8080 -d basketmetrics2/node-app:1.0This is my host directory /home/josecarlos/Workspace/nodejs/basketmetrics2And this is the target directory in the container /usr/src/app. If I inspect the container I can see that the host directory is mapped with the WORKDIRWhat am I doing wrong?Update I:I have stoped my container and modify the file index.js in my host directory. If I run again the image, then I can see the content updated!!!Why my content is not updated on the fly? | Docker: Files from volume not updated in target |
There is nothing wrong with your service, you should be able to access it using <node-ip>:32436. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service. So, on your node, port 32436 is open and will receive all the external traffic on this port and forward it to the login service.
EDIT: nodePort is the port that a client outside of the cluster will "see". nodePort is opened on every node in your cluster via kube-proxy. With iptables magic Kubernetes (k8s) then routes traffic from that port to a matching service pod (even if that pod is running on a completely different node).
apiVersion: v1
metadata:
name: login
spec:
selector:
app: login
ports:
- protocol: TCP
name: http
port: 5555
targetPort: login-http
type: NodePortI wrote service type astype: NodePortbut when i hit command as below it does not show the external ip as 'nodes' :'kubectl get svc'here is output:NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 443/TCP 7h
login NodePort 10.100.70.98 5555:32436/TCP 5mplease help me to understand the mistake. | Kubernetes cluster is not exposing external ip as <nodes> |
You must be from restricted countries which are banned by docker (from403status code). only way is to use proxies in your docker service.[Service]...Environment="HTTP_PROXY=http://proxy.example.com:80/
HTTPS_PROXY=http://proxy.example.com:80/"...after that you should issue:$ systemctl daemon-reload
$ systemctl restart docker | When i runsudo docker-compose buildi getBuilding web
Step 1/8 : FROM python:3.7-alpine
ERROR: Service 'web' failed to build: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "403 Forbidden\nSince Docker is a US company, we must comply with US export control regulations. In an effort to comply with these, we now block all IP addresses that are located in Cuba, Iran, North Korea, Republic of Crimea, Sudan, and Syria. If you are not in one of these cities, countries, or regions and are blocked, please reach out to https://support.docker.com\n\n\n"I need to set proxy fordocker-composefor buildthings i have tried:looking athttps://docs.docker.com/network/proxy/#configure-the-docker-clienti have tried setting~/.docker/config.json{
"proxies":
{
"default":
{
"httpProxy": "http://127.0.0.1:9278"
}
}
}tried with--envargumenttried setting proxy variables on the server with no resulti also have tried thislinkservices:
myservice:
build:
context: .
args:
- http_proxy
- https_proxy
- no_proxybut i get this onversion: '3.6'Unsupported config option for services.web: 'args'these settings seem to be set on docker and not docker-composei also don't need to set any proxy on my local device (i don't want to loose portability if possible)docker-compose version 1.23.1, build b02f1306
Docker version 18.06.1-ce, build e68fc7a | Using proxy on docker-compose in server |
Finally, this solution worked for me. I have to move the VirtualBox VMs folder from my network home directory to my local machine and change their permissions.So on my VirtualBox GUI, under Settings>Storage, I changed the file locations from/Network/Servers/servername/Volumes/cal/Users/username/VirtualBox VMs/boot2docker.iso
/Network/Servers/servername/Volumes/cal/Users/username/VirtualBox VMs/boot2docker-vm/boot2docker-vm.vmdkto/Applications/VirtualBox VMs/boot2docker.iso
/Applications/VirtualBox VMs/boot2docker-vm/boot2docker-vm.vmdkand under the Settings>Portsfrom/Network/Servers/servername/Volumes/cal/Users/username/VirtualBox VMs/boot2docker-vm.sockto/Applications/VirtualBox VMs/boot2docker-vm.sockSo I think it’s a directory and permission issue. | I am new to docker and I'm attempting to run boot2docker on my work computer. I'm logged in to the computer running OS X version 10.10.1 (Yosemite) with a user account that mounts the home directory from the office network.I installed Docker v1.4.1 fromhttps://github.com/boot2docker/osx-installer/releasesand VirtualBox 4.3.20 for OS X hosts fromhttps://www.virtualbox.org/wiki/DownloadsI followed the instructions on docker.com mac installation but I didn't get the supposed to be results.The docker terminal gives this error:bash-3.2$ /usr/local/bin/boot2docker init
Virtual machine boot2docker-vm already exists
bash-3.2$ /usr/local/bin/boot2docker up
error in run: Failed to start machine "boot2docker-vm" (run again with -v for details)
bash-3.2$ $(/usr/local/bin/boot2docker shellinit)
error in run: VM "boot2docker-vm" is not running.
bash-3.2$ docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): darwin/amd64
FATA[0000] Get http:///var/run/docker.sock/v1.16/version: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?Starting boot2docker-vm on VirtualBox returns the following error:Failed to open a session for the virtual machine boot2docker-vm.
NamedPipe#0 failed to bind to local socket
/Network/Servers/servername/Volumes/cal/Users/username/.boot2docker/boot2docker-vm.sock (VERR_NOT_SUPPORTED) | Boot2Docker for OS X fails to start |
The problem is that you are using twoFROMinstructions, which is referred to as amulti-stage build. The final image will be based on thenodeimage that doesn't contain themongodatabase.* Edit *here are more details about what is happening:FROM mongo:latestthe base image ismongo:latestFROM nodenow the base image isnode:latest. The previous image is just standing there...RUN mongodCOPY . .RUN node ./scripts/import-data.jsnow you runmongodand the other commands in your final image that is based onnode(which doesn't contain mongo) | I'm building a dockerfile. But I meet a problem. It says that :/bin/sh: 1: mongod: not foundMy dockerfile:FROM mongo:latest
FROM node
RUN mongod
COPY . .
RUN node ./scripts/import-data.jsHere is what happen when docker build:Sending build context to Docker daemon 829.5MB
Step 1/8 : FROM rabbitmq
---> e8261c2af9fe
Step 2/8 : FROM portainer/portainer
---> 00ead811e8ae
Step 3/8 : FROM docker.elastic.co/elasticsearch/elasticsearch:6.5.1
---> 32f93c89076d
Step 4/8 : FROM mongo:latest
---> 5976dac61f4f
Step 5/8 : FROM node
---> b074182f4154
Step 6/8 : RUN mongod
---> Running in 0a4b66a77178
/bin/sh: 1: mongod: not found
The command '/bin/sh -c mongod' returned a non-zero code: 127Any idea ? | How to start mongodb from dockerfile |
If you are usingdocker-compose down/up, keep in mind thatthis is not a "restart"because:docker-compose upcreates newcontainers anddocker-compose downremoves them:docker-compose upBuilds, (re)creates, starts, and attaches to containers for a service.docker-compose downStops containers and removes containers, networks, volumes, and images created by up.So,removing the containers+not using a mechanism to persist data(such asvolumes) means that you lose your data ☹️On the other hand, if you keep using:docker-compose startdocker-compose stopdocker-compose restartyou deal with the same containers, the ones created when you randocker-compose up. | I'm running influxdb and grafana on Docker with Windows 10.Every time I shut down Docker, I lose my database.Here's what I know:I have tried adjusting the retention policies, with no effect on the
outcomeI can shut down and restart the containers (docker-compose down) and the database is still there. Only when I shut down Docker for Windows do I lose the database.I don't see any new folders on the mapped directory when I create a new database (/data/influxdb/data/)'. Only the '_internal' folder persists, which I assume corresponds to the persisting database called '_internal'Here's my yml file. Any help greatly appreciated.version: '3'
services:
# Define an InfluxDB service
influxdb:
image: influxdb
volumes:
- ./data/influxdb:/var/lib/influxdb
ports:
- "8086:8086"
- "80:80"
- "8083:8083"
grafana:
image: grafana/grafana
volumes:
- ./data/grafana:/var/lib/grafana
container_name: grafana
ports:
- "3000:3000"
env_file:
- 'env.grafana'
links:
- influxdb
# Define a service for using the influx CLI tool.
# docker-compose run influxdb-cli
influxdb-cli:
image: influxdb
entrypoint:
- influx
- -host
- influxdb
links:
- influxdb | database lost on docker restart |
At a purely mechanical level, the quotes are causing trouble. When you sayRUN "sh test.sh"it tries to run a single command namedsh\ test.sh; it does not try to runshwithtest.shas a parameter. Any of the following will actually run the scriptRUN ["sh", "test.sh"]
RUN sh test.sh
RUN chmod +x test.sh; ./test.shAt an operational level you'll have a lot of trouble running that command in the server container at all. The big problem is that you need to run that command after the server is already up and running. So you can't run it in the Dockerfile at all (no services are ever running in aRUNcommand). A container runs a single process and you need that process to be the Elasticsearch server itself, so you can't do this directly inENTRYPOINTorCMDeither.The easiest path is to run this command from the host:docker build -t my/elasticsearch .
docker run -d --name my-elasticsearch -p 9200:9200 my/elasticsearch
curl http://localhost:9200 # is it alive?
./test.shIf you have a Docker Compose setup, you could also run this from a separate container, or you could run it as part of the startup of your application container. There are some good examples of running database migrations in anENTRYPOINTscript for your application container running around, and that's basically the pattern you're looking for.(It istheoreticallypossible to run this in an entrypoint script. You have to start the server, wait for it to be up, run your script, stop the server, and then finallyexec "$@"to run theCMD. This is trickier for Elasticsearch, where you might need to connect to other servers in the same Elasticsearch cluster lest your state get out of sync. The official Docker Hubmysqldoes this, for a non-clustered database server; seeits rather involved entrypoint scriptfor ideas.) | I'm making a dockerfile to install elasticsearch:6.5.4 and add few files to required locations and run a script named test.sh to create a new index in elasticsearch while elasticsearch is running.I'm not sure whether i should use RUN, CMD or ENTRYPOINT to do that.I've successfully built an image and run a container by commenting my last line (containing RUN/CMD/ENTRYPOINT test.sh). I was able to run the test.sh from bash of container and get the desired result.but when i try to build an image for same process, i get the following error:$ docker build -t es .
Sending build context to Docker daemon 7.499MB
Step 1/8 : FROM elasticsearch:6.5.4
---> 93109ce1d590
Step 2/8 : WORKDIR /app
---> Running in 6b6412093d53
Removing intermediate container 6b6412093d53
---> a374ab69eb1a
Step 3/8 : ADD . /app
---> 6ed98ee7ad49
Step 4/8 : COPY test.sh .
---> 42184ec64c09
Step 5/8 : ADD analysis /usr/share/elasticsearch/config/analysis
---> 5a96f2098dd7
Step 6/8 : EXPOSE 9202
---> Running in 6c44b54dcc77
Removing intermediate container 6c44b54dcc77
---> d8723189c843
Step 7/8 : EXPOSE 9200
---> Running in c571b4cba1fa
Removing intermediate container c571b4cba1fa
---> 8fa11b03051e
Step 8/8 : RUN "sh test.sh"
---> Running in cf2e8cb3fd37
/bin/sh: sh test.sh: command not found
The command '/bin/sh -c "sh test.sh"' returned a non-zero code: 127I've tried different combinations of RUN, CMD and ENTRYPOINT for STEP 8my dockerfile is as follows :FROM elasticsearch:6.5.4
WORKDIR /app
ADD . /app
COPY test.sh .
ADD analysis /usr/share/elasticsearch/config/analysis
EXPOSE 9202
EXPOSE 9200
RUN "sh test.sh"I want to run elasticsearch in container and make a new index for elasticsearch | how to run .sh file when container is running using dockerfile |
Elaborating on @k0pernikus's comment, I would recommend to use a separate container that runs cron. The cronjobs in that container can then work with your mysql database.Here's how I would approach it:1. Create a Cron Docker ContainerYou can set up a cron container fairly simply. Here's an example Dockerfile that should do the job:FROM alpine
COPY ./crontab /etc/crontab
RUN crontab /etc/crontab
RUN touch /var/log/cron.log
CMD crond -fJust put your crontab into acrontabfile next to that Dockerfile and you should have a working cron container.An example crontab file:* * * * * mysql -h mysql --execute "INSERT INTO database.table VALUES 'v';"2. Add the cron container to your docker-compose.yml as a serviceMake sure you add your cron container to the docker-compose.yml, and put it in the same network as your mysql service:networks:
my_network:
services:
mysql:
image: mariadb
networks:
- my_network
cron:
image: my_cron
depends_on:
- mysql
build:
context: ./path/to/my/cron-docker-folder
networks:
- my_network | I want to include a cron task in a MariaDB container, based on the latest imagemariadb, but I'm stuck with this.I tried many things without success because I can't launch both MariaDB and Cron.Here is my actual dockerfile:FROM mariadb:10.3
# DB settings
ENV MYSQL_DATABASE=beurre \
MYSQL_ROOT_PASSWORD=beurette
COPY ./data /docker-entrypoint-initdb.d
COPY ./keys/keys.enc home/mdb/
COPY ./config/encryption.cnf /etc/mysql/conf.d/encryption.cnf
# Installations
RUN apt-get update && apt-get -y install python cron
# Cron
RUN touch /etc/cron.d/bp-cron
RUN printf '* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1\n#' >> /etc/cron.d/bp-cron
RUN touch /var/log/cron.log
RUN chmod 0644 /etc/cron.d/bp-cron
RUN cronWith its settings, the database starts correctly, but "Cron" is not initialized. To make it work, I have to get into the container and execute the "Cron" command, and everything works perfectly.So I'm looking for a way to launch both the db and cron from my Dockerfile used in my docker-compose.If this is not possible, maybe there is another way to do tasks planned? The purpose being to execute a script of the db. | How can I run a cron in MariaDB container? |
Ok, so StackOverflow is being a pain about posting this answer (Seems to not like all the config snippets). So here's the link to the Githubhttps://github.com/AndrewSmiley/django-docker-eb. Basically the README is the post I tried to add here to StackOverflow, but unsuccessfully. | I have a conceptual question here-
I'm looking to deploy a Django application on Elastic Beanstalk (which I've successfully done before) using a Docker (which I have yet to succeed with). I know the Elastic Beanstalk image prebuilt with Docker uses Ngnix, which I've deployed Django with before, but I'm a little lost on the accomplishing this on Elastic Beanstalk. I've used Amazon's documentation and successfully deployed a Dockerfile to elastic beanstalk using their code, but have yet to get it going on my own. Has anyone been successful with this? Can anyone point me in the right direction to find out how to accomplish this specific task? Thank you | Deploying Django with Docker on Amazon Elastic Beanstalk |
I had the same issue yesterday.
Since I am behind a company proxy, I had to define the http-proxy for the docker daemon in:/etc/systemd/system/docker.service.d/http-proxy.confThe problem was, that I misconfigured the https_proxy, how it is describedhere.
I usedhttps://in the https_proxy environment variable, which caused this error.This configuration works for me:cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment=http_proxy=http://IP:PORT/
Environment=no_proxy=localhost,127.0.0.1
Environment=https_proxy=http://IP:PORT/Remember that you have to restart the docker daemon after changing this configuration. You can achieve this by using:systemctl daemon-reload
systemctl restart docker | I am having issue with Docker on Ubuntu 18.04 withdocker-ce. While pulling a container:$ docker pull nginxor while trying to login$ docker loginI get the following message:Error response from daemon: Gethttps://registry-1.docker.io/v2/:
proxyconnect tcp: tls: oversized record received with length 20527I also purged and reinstalled today with latest version but it didn't help. Does anyone get it resolved? | Ubuntu 18.04 - Error response from daemon: Get https://registry-1.docker.io/v2/: proxyconnect tcp: tls: oversized record received with length 20527 |
This answer results from the comments ...The problem is the following bug/issue:https://github.com/ronmamo/reflections/issues/373- 'Reflections does not detect any classes, if base class (or package prefix) is passed as argument, and application is running as a jar'But it works (for me) with the workaround, suggested in the above mentioned issue.Reflections reflections = new Reflections(new ConfigurationBuilder().forPackages("my.package"));
Set> classes = reflections.getSubTypesOf(MyService.class); | I used org.reflections (latest):new Reflections("my.package").getSubTypesOf(MyService.class);It works well running in IntelliJ and returns all implementations ofMyService.class.But running in a docker container, it returns an emptySet.(Anything else works well in the docker-container)Any ideas? TIA! | org.reflection.Reflections 0.10.2 fails when running as jar (e.g. in a docker container) |
sort of, but i wouldn't.docker isn't meant for interactive / gui based applications at this point. there are some workarounds for this, but all of them are difficult from what I've read.it's better to think of Docker as a server. you don't have a person sitting at a server all day long, clicking things to respond to requests that come into the server. you have code that runs, listening for requests and doing things in response.Docker apps should be this type of app where it runs on it's own, exposes an API and can respond to requests.... i would bet that this becomes possible in the not-so-distant future. but right now, i don't think it's something Docker officially supports. | I have heard that docker solves the "works on my machine" issue for application deployment and that SQL Server can be run inside a docker container, running in Docker for Windows.I have a C# Windforms application that I would like to deploy without Dll Hell.Is it possible to use Docker for this? | Can (should) Docker be used for winforms applications? |
The exact list would depend on your environment/ops team requirements, but this is what seems to be useful besides ports/existing volumes:NetworksThe default network might not work for your prod environment.
As an example, your ops team might decide to put nginx/php-fpm/mariadb on different networks like in the following example (https://docs.docker.com/compose/networking/#specify-custom-networks) or even use a pre-existing networkMysql configsThey usually reside in a separate dir i.e./etc/my.cnfand/etc/my.cnf.d.
These configs are likely to be different between prod/dev.
Can’t see it in your volumes pathsPhp-fpm7Haven’t worked withphp-fpm7, but inphp-fpm5it also had a different folder with config files (/etc/php-fpm.confand/etc/php-fpm.d) that is missing in your volumes. These files are also likely to differ once your handle even a moderate load (you’ll need to configure number of workers/timeouts etc)NginxSame as forphp-fpm, ssl settings/hostnames/domains configurations are likely to be differentLoggingThink on what logging driver might fit your needs best.
Fromhere:Docker includes multiple logging mechanisms to help you get
information from running containers and services. These mechanisms are
called logging drivers.You can easily configure it in docker-compose, here's an example bring up a dedicatedfluentdcontainer for logging:version: "3"
services:
randolog:
image: golang
command: go run /usr/src/randolog/main.go
volumes:
- ./randolog/:/usr/src/randolog/
logging:
driver: fluentd
options:
fluentd-address: "localhost:24224"
tag: "docker.{{.ID}}"
fluentd:
build:
context: ./fluentd/
ports:
- "24224:24224"
- "24224:24224/udp" | I have adocker-composesetup for development, and I need to replicate the same file for production or staging.Currently, aside fromvolumesportsandenvironmentI am not quite sure what settings "may need" to be changed for production/environment.To clarify:I have to changevolumes, because I usually mount a USB drive to my docker container ex:d:/var/wwwThe issue withportsis, because there may be other services that use port 80 on my local machine, so I may need to change that.environmentis of course, different for prod/dev .. (mainly authentication and database access)Any more tips would be nice to know. | creating a separate docker-compose configuration for production and development |
The problem is that docker needs to be run as root user, so maven commands need to be run as root user,No, a docker run can be done with a-u(--user) parameterin order to use a non-root user inside the container.Either run docker as non-root userYour user (on the host) needs to be part of thedockergroup. Then you can run the docker service with that user.As commented, this is not very secure.See:"chrisfosterelli/dockerrootplease""Understanding how uid and gid work in Docker containers"That last links ends with the following findings:If there’s a known uid that the process inside the container is executing as, it could be as simple as restricting access to the host system so that the uid from the container has limited access.The better solution is to start containers with a known uid using the--user(you can use a username also, but remember that it’s just a friendlier way of providing a uid from the host’s username system), andthen limiting access to the uid on the host that you’ve decided the container will run as.Because of how uids and usernames (and gids and group names) map from a container to the host, specifying the user that a containerized process runs as can make the process appear to be owned by different users inside vs outside the container.Regarding that last point, you now haveuser namespace (userns) remapping(since docker 1.10, but I would advice 17.06, because ofissue 33844). | I am trying to build a docker image using docker-maven plugin, and plan to execute the mvn command using jenkins. I have jenkins.war deployed on a tomcat instance instead of a standalone app, which runs as a non-root user.
The problem is that docker needs to be run as root user, so maven commands need to be run as root user, and hence jenkins/tomcat needs to run as root user which is not a good practice (although my non-root-user is also sudoer so I guess won't matter much).So bottom line, I see two solutions : Either run docker as non-root user (and need help on how to do that)
OR
Need to run jenkins as root (And not sure how to achieve that as I changed environment variable /config and still its not switching to root).Any advice on which solution to choose and how to implement it ? | Running docker as non-root user OR running jenkins on tomcat as root user |
Update:There is a very useful tool calleddivethat allows you to navigate through the Docker layers and view the filesystem. | Each docker image consists of a series of layers.Ex: custom-elasticsearch:lastest$: docker history custom-elasticsearch
IMAGE CREATED CREATED BY SIZE COMMENT
5f14f49e0f6b 8 days ago /bin/sh -c #(nop) EXPOSE 9091/tcp 9200/tcp 9 0 B
c1b5b6bdc8d8 8 days ago /bin/sh -c /usr/share/elasticsearch/bin/plugi 3 MB
a406ab7ba4ed 8 days ago /bin/sh -c #(nop) COPY file:cf296a4961a04abc0 489 B
6b0d046baaa8 8 days ago /bin/sh -c #(nop) COPY file:81c04951307f0688f 83 B
6f609da577b7 20 months ago /bin/sh -c #(nop) CMD ["elasticsearch"] 0 B
20 months ago /bin/sh -c #(nop) EXPOSE 9200/tcp 9300/tcp 0 B
20 months ago /bin/sh -c #(nop) ENTRYPOINT &{["/docker-entr 0 B
20 months ago /bin/sh -c #(nop) COPY file:d25889029dd34582c 672 B
//...Can I show, copy file in image at fourth layer with id (6b0d046baaa8)?
Thanks | Can I show data in specific layer from an docker image? And how? |
You can do this with a Supervisorevent listener. Subscribe it to the eventPROCESS_STATE_FATAL, and respond to the event by sending a SIGTERM to supervisord, which you are presumably running as PID 1 within the container. | I'm currently using Supervisor inside my Docker images to start and manage my services and I would like to configure Supervisor to exit if at least one of these services entered FATAL state.Doing that, I want to avoid to have Docker containers in running state when nothing except Supervisor has succeeded to start. | Supervisor & Docker: How to exit Supervisor if a service doesn't start? |
CMD ["uvicorn", "main:app", "--host=0.0.0.0" , "--reload" , "--port", "8000"]Your work directory is /app and the main.py file is already there. So you don't need to call app.main module. Just call main.py script directly in CMD. | FROM python:3.8
WORKDIR /app
COPY requirements.txt /
RUN pip install --requirement /requirements.txt
COPY ./app /app
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host=0.0.0.0" , "--reload" , "--port", "8000"]when i useddocker-compose up -dModuleNotFoundError: No module named 'app'the folders in Fastapi framework:fastapiapp-main.pylanguage_detector.pyDockerfiledocker-compose | ModuleNotFoundError: No module named 'app' fastapi docker |
This is a PATH related issue and profile. When you usesh -corbash -cthe profile files are not loaded. But when you usebash -lcit means load the profile and also execute the command. Now your profile may have the necessary path setup to run this command.Edit-1So the issue with the original answer was that it cannot work. When we hadENTRYPOINT ["/bin/bash", "-lc", "ocp-indent"]
CMD ["--help"]It finally translates to/bin/bash -lc ocp-indent --helpwhile for it to work we need/bin/bash -lc "ocp-indent --help". This cannot be done by directly by using command in entrypoint. So we need to make a newentrypoint.shfile#!/bin/sh -l
ocp-indent "$@"Make sure tochmod +x entrypoint.shon host. And update the Dockerfile to belowFROM ocaml/opam
WORKDIR /workdir
RUN opam init --auto-setup
RUN opam install --yes ocp-indent
SHELL ["/bin/sh", "-lc"]
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["--help"]After build and run it works$ docker run f76dda33092a
NAME
ocp-indent - Automatic indentation of OCaml source files
SYNOPSISOriginal answerYou can easily test the difference between both using below commandsdocker run -it --entrypoint "/bin/sh" env
docker run -it --entrypoint "/bin/sh -l" env
docker run -it --entrypoint "/bin/bash" env
docker run -it --entrypoint "/bin/bash -l" envNow either you bash has correct path by default or it will only come when you use the-lflag. In that case you can change the default shell of your docker image to belowFROM ocaml/opam
WORKDIR /workdir
RUN opam init --auto-setup
RUN opam install --yes ocp-indent
SHELL ["/bin/bash", "-lc"]
RUN ocp-indent --help
ENTRYPOINT ["/bin/bash", "-lc", "ocp-indent"]
CMD ["--help"] | I'm trying to build the below Dockerfile, but it keeps failing onRUN ocp-indent --helpsayingocp-indent: not found The command '/bin/sh -c ocp-indent --help' returned a non-zero code: 127FROM ocaml/opam
WORKDIR /workdir
RUN opam init --auto-setup
RUN opam install --yes ocp-indent
RUN ocp-indent --help
ENTRYPOINT ["ocp-indent"]
CMD ["--help"]I bashed into the image that ran before it viadocker run -it bash -iland ranocp-indent --helpand it ran fine. Not sure why it's failing, thoughts? | The command returned a non-zero code: 127 |
The container itself is usingRedHats Universal Base Imageand seems to usemicrodnffor managing software.Check the dockerfile of jboss/keycloak (https://hub.docker.com/r/jboss/keycloak/dockerfile) to check, how it's done. The interesting part is:RUN microdnf update -y && microdnf install -y glibc-langpack-en gzip hostname java-11-openjdk-headless openssl tar which && microdnf clean allSo you may try adding additional software using themicrodnfcommand.A better solution might be to create your own Dockerfile deriving from jboss/keycloak to add your additional software. | When running locally ajboss/keycloakcontainer, I try to add more software.So far, I have tried:~# yum install jq
bash: yum: command not found
~# apt-get install jq
apt-get: command not foundDoes anybody know how can I install more software?# uname -a
Linux 935559ef2e4c 4.19.76-linuxkit #1 SMP Tue May 26 11:42:35 UTC 2020 x86_64 x86_64 x86_64 GNU/LinuxUpdate #1It looks likemicrodnfis what I have to use, but I am still getting errors:root@276cdd5cc962 /]# microdnf update -y
(microdnf:1614): librhsm-WARNING **: 20:38:39.628: Found 0 entitlement certificates
(microdnf:1614): librhsm-WARNING **: 20:38:39.630: Found 0 entitlement certificates
(microdnf:1614): libdnf-WARNING **: 20:38:39.630: Loading "/etc/dnf/dnf.conf": IniParser: Can't open file
Downloading metadata...
Downloading metadata...
Downloading metadata...
Nothing to do.The file "/etc/dnf/dnf.conf" does not exist. | Install packages on jboss/keycloak |
I just had to add tty: true to my docker-compose.ymlversion: '2'
services:
ubuntu:
image: ubuntu:16.04
tty: trueDocker version 1.12.5, build 7392c3bdocker-compose version 1.7.1, build 0a9ab35 | Q. How to run docker-compose in detach modeI am trying to run docker-compose in detach mode but itwill exits after just it's run, but I am able run same image in detach mode using 'docker run' command.Run image using 'docker run' command(works in detach mode)docker run -itd ubuntu:16.04below is output of 'docker ps -a' commandCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d84edc987359 ubuntu:16.04 "/bin/bash" 4 seconds ago Up 3 seconds romantic_albattaniRun same image using 'docker-compose up -d' command(didn't work in detach mode)below is my docker-compose.yml fileversion: '3'
services:
ubuntu:
image: ubuntu:16.04'docker-compose ps' command outputName Command State Ports
----------------------------------------------------
composetesting_ubuntu_1 /bin/bash Exit 0Update: When using tty: true parameter in docker-compose.yml file as belowversion: '3'
services:
ubuntu:
image: ubuntu:16.04
tty: truethen console will not execute any command, like if I type 'ls -l' command console will not responding. | Docker compose detached mode not working |
You don't need to share the sock file with different docker container.The simple solution is share the socket port between dockers.In the uwsgi file need to addsocket=:3000.[uwsgi]
master=true
chdir=.
module=flaskapp
harakiri=60
callable=app
thunder-lock=true
socket=:3000
workers=12
threads=4
chmod-socket=666
vacuum=true
die-on-term=true
pidfile=uwsgi.pid
max-requests=5000
post-buffering=65536
post-buffering-bufsize=524288the define the nginx conf. theuwsgi_passsectionflask:3000means (docker server name):(docker expose port).server {
listen 80;
charset utf-8;
client_max_body_size 20M;
location / {
try_files $uri @slack;
}
location @slack {
include uwsgi_params;
uwsgi_pass flask:3000;
uwsgi_read_timeout 60s;
uwsgi_send_timeout 60s;
uwsgi_connect_timeout 60s;
}
}In docker-compose file:nginx:
container_name: nginx
build:
context: .
dockerfile: ./Dockerfile-Nginx
ports:
- "80:80"
depends_on:
- flask
flask:
tty: true
container_name: flask
build:
context: .
dockerfile: ./Dockerfile
expose:
- "3000"
command: uwsgi --ini ./uwsgi.iniSo just let the docker run one process for each docker container. | I'm using docker-compose and now have two docker containers - one is a nginx webserver, whereas the other one is ubuntu with Python uwsgi and Flask.As I know, the best way to connect nginx and uWSGI is done by sharing a *.sock file between them and pass the requests into the file (And that what I do in older projects where I did not use dockers).I'm wondering how can I share the sock file between the dockers in order to enable the communication between them?And at all.. I'm wondering if this scenario of two containers - one for nginx and one for the Flask framework and uWSGI - is best practice and right to do.Thanks | How to share .sock file between nginx docker and uwsgi docker? |
Kubernetes config file describes 3 objects:clusters,users, andcontexts.cluster- cluster name + details - the host and the certificates.user- user name and credentials, to authorise you against any cluster host.thecontextrole is to make the connection between auserand acluster, so when you use that context,kubectlwill authorise you against the cluster specified in the context object, using the credentials of the user specified in the context object. an examplecontextobject:apiVersion: v1
current-context: ""
kind: Config
preferences: {}
clusters:
- cluster:
certificate-authority: xxxxxxxxx
server: xxxxxxxxx
name: gke_dev-yufuyjfvk_us-central1-a_standard-cluster-1
users:
- name: efrat-dev
user:
client-certificate: xxxxxxxxx
client-key: xxxxxxxxx
contexts:
- context:
cluster: gke_dev-yufuyjfvk_us-central1-a_standard-cluster-1
user: efrat-dev
name: gke-devthekubectl configsubcommand has a set of commands togenerate cluster, user & context entriesin the config file.multiple k8s clusters from docker-desktopunder the hood, when you enable k8s, docker desktop downloads kubernetes components as docker images, and the server listenshttps://localhost:6443. it is all done automatically so unless you have any intention to run the entire structure by yourself i dont suppose you can configure it to run multiple clusters.about your further questions:when you set a context,kubectlwill setcurrent-contextto that one, and everykubectlyou run will go to the context's cluster, using the context's user credentials. it doesnt mean the clusters are dead. it wont affect them at all. | I can't seem to figure out how to create a totally new Kubernetes cluster on a Docker Desktop running instance on my computer. (It shouldn't matter if this was a Mac or PC).I know how to -set- the current cluster context, but I only have one cluster so I can't set anything else.### What's my current context pointing to?
$ kubectl config current-context
docker-for-desktop
### Set the context to be "docker-for-desktop" cluster
$ kubectl config use-context docker-for-desktop
Switched to context “docker-for-desktop”Further questions:If I have multiple clusters, then only one of them (the currently 'set' one) will be running at once with the other's stopped/sleeping?Clusters are independent from each other, so if i can muck around and play with one cluster, then this should not impact another cluster | How to create a new Kubernetes cluster on Docker Desktop? |
Check the string "Image is up to date" to know whether the local image was updated:sudo docker pull my-example-registry.com:5050/web-client:latest |
grep "Image is up to date" ||
(echo Already up to date. Exiting... && exit 0)So change your script to:#!/usr/bin/env bash
set -e
sudo docker pull my-example-registry.com:5050/web-client:latest |
grep "Image is up to date" ||
(echo Already up to date. Exiting... && exit 0)
echo '>>> Get old container id'
CID=$(sudo docker ps --all | grep "web-client" | awk '{print $1}')
echo $CID
echo '>>> Stopping and deleting old container'
if [ "$CID" != "" ];
then
sudo docker stop $CID
sudo docker rm $CID
fi
echo '>>> Starting new container'
sudo docker run --name=web-client -p 8080:80 -d my-example-registry.com:5050/web-client:latest | I have a starting docker script here:#!/usr/bin/env bash
set -e
echo '>>> Get old container id'
CID=$(sudo docker ps --all | grep "web-client" | awk '{print $1}')
echo $CID
echo '>>> Stopping and deleting old container'
if [ "$CID" != "" ];
then
sudo docker stop $CID
sudo docker rm $CID
fi
echo '>>> Starting new container'
sudo docker pull my-example-registry.com:5050/web-client:latest
sudo docker run --name=web-client -p 8080:80 -d my-example-registry.com:5050/web-client:latestThe fact is this script has umproper result. It deletes the old container everytime the script is run.The "starting new container" section will pull the most recent image. Here is an example output of docker pull if the image locally is up to date:Status: Image is up to date for
my-example-registry:5050/web-client:latestIs there any way to improve my script by adding a condition:Before anything, check via docker pull the local image is the most recent version available on registry. Then if it's the most recent version, proceed the stop and delete old container action and docker run the new pulled image.In this script, how to parse the status to check the local image corresponds to the most up to date available on registry?Maybe a docker command can do the trick, but I didn't manage to find a useful one. | Bash parse docker status to check if local image is up to date |
Do not use the-tiflags to start an interactive session, just execute the script directly via thedocker execcommanddocker exec website powershell -command "C:\inetpub\wwwroot\addApplication.ps1" | I have a powershell script in host which copy some files and starts the container.#Copy File
docker cp "D:\addApplication.ps1" website:/inetpub/wwwroot/
#Start Container
docker start website
Write-Host 'Process has started'
#Execute Container
docker exec -ti website powershell
#Run Script
Invoke-Expression "C:\inetpub\wwwroot\addApplication.ps1"Second last command executes fine but last command will only execute when I exit the container session and returns error(File Not Found which is because it finds that file on host)Question: Is there anyway I can execute the command in container session from the script. Or execute any command from script in any process(confused)Any help is appreciated.Thanks | RUN Powershell Script in Docker Container From Host Powershell Script |
Does anyone know which variable name I need to use on my docker composer file?Fargate does not allow you to specify thehostorsourcePathfor a bind mount. You cancheck the docs for bind volumesandthe overview for Fargate task storage docsto learn more.The big premise of Fargate is it obfuscates the underlying host from the task, so you as an end user have very little options for interacting with the host - you can't ssh to it, you can't touch its filesystem. In the case of bind mounts, you can't specify thehostbecause you don't know the name or location of the host at deploy time, and you can't further specify thesourcePathbecause you can't know anything about the file system on the host.In the instance of trying to mount thedocker.sockespecially, that would give you access toevery container running on the host, which likely belongs to other accounts/aws users. That would be very bad all around.Can I use a bind mount with Fargate?Yes. Though it might be of limited usefulness since you won't be able to access the file system of the underlying host to retrieve any files passed from the container to the host.If the sourcePath value does not exist on the host container instance, the Docker daemon creates it.So the answer for a bind mount is essentially to not specifyhost, and the Docker daemon will just create a path for you. Is that helpful? Probably not in your case. | I am having a problem with my docker compose file:
This is my docker compose file:version: '3'
services:
nginx-proxy:
image: xxxxx.dkr.ecr.xxxxx.amazonaws.com/xxxx:latest
container_name: "nginx-proxy"
restart: always
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
...This is the following error:ClientException: host.sourcePath should not be set for volumes in FargateMy task Definition:"mountPoints": [],
...
"volumes": [],
...
"readonlyRootFilesystem": false,I also want my volume to be "read only".Does anyone know which variable name I need to use on my docker composer file?Can someone help me?Thanks | AWS Fargate - Volumes |
I have been fighting with HSTS headers in Traefik for multiple days, when I learned something important about HSTS:Your browser will ignore any STS headers when the certificate you are using is considered not trustworthy/safe by your browser. You can verify this (in Chrome) with the security tab in the developer tools.For HSTS (HTTP Strict Transport Security) to work, I had to solve the next few things in my particular scenario:The certificate I was using for development, wasself-signedand installed onto my machine. But because it was self-signed, it was not put in the "Trusted Root Certification Authorities" directory. My browser complained that it could not find my certificate in that directory, so I had to put it there, otherwise the browser will still consider the certificate unsafe.Note that this was only meant for development purposes, official certificates were on the way.At first I created my certificate, putting my domain in theCN(Common Name) section. Nowadays, browser kinda ignore that section and look forSAN(Subject Alternative Names). I had to create a new certificate with my domain in that section.Those two things were the things I missed, after solving those, my STS headers (used in docker-compose service labels) were working. The labels (Traefik v1.7) look as following:my_service:
deploy:
labels:
- "traefik.frontend.headers.STSPreload=true"
- "traefik.frontend.headers.STSSeconds=31536000"Hope it helps anybody. | This is an issue I have been fighting with for days, but I could not find any help on stackoverflow, not even close to it. I hope to help people with similar issues in the future. Any elaboration on this question/answer is very much welcome.I have been trying to setSTS-headersto http-requests when usingTraefikas a proxy in aDockerenvironment. Somehow, no matter how I try to set the headers, my browser (Google Chrome) ignores them. What am I doing wrong? | How to use STS headers with Traefik when using Docker |
The link (https://hub.docker.com/r/rocker/shiny/) covers how to deploy the Shiny Server.
The simplest way would be:
docker run --rm -p 3838:3838 rocker/shinyIf you want to extend the Shiny Server, you can write your own Dockerfile and start with the shiny image as the base image (https://docs.docker.com/engine/reference/builder/).Dockerfile:
FROM rocker/shiny:latest | Well, I'm new atDockerand I need to implement a Shiny app in a Docker Container.I have the image fromhttps://hub.docker.com/r/rocker/shiny/, that includesShiny Server, but I don't know how to deploy my app in the server.I want to deploy the app in the server, install the required packages for my app into the Docker, save the changes and export the image/container.As I said, I'm new atDockerand I don't know how it really works.Any idea? | Deploy shiny app in rocker/shiny docker |
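To make the answer above concrete, here is a minimal sketch of such an extended image; the package name and app folder are only examples, and it assumes rocker/shiny serves apps from /srv/shiny-server:
FROM rocker/shiny:latest
RUN R -e "install.packages('ggplot2')"
COPY ./myapp /srv/shiny-server/myapp
After docker build -t my-shiny-app . and docker run --rm -p 3838:3838 my-shiny-app, the app should be reachable at http://localhost:3838/myapp.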
HyperV is used to spin up a Linux VM to run containers. Docker is still running Linux containers under the covers, the native Windows containers are still being developed. | I installed Docker for windows on a windows 10 box. It required me to enable the HyperV feature on it. Everything installed correctly and is running fine.Although one thing took me by surprise. I am actually able to run a linux container on docker windows. I thought cross-containerization is not possible conceptually. Can anyone please help me understand how does this work? | Running linux container on docker windows |
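A quick way to see this for yourself (assuming a reasonably recent Docker CLI) is to ask the daemon which OS it targets:
docker version --format '{{.Server.Os}}/{{.Server.Arch}}'
With Linux containers selected this typically prints linux/amd64 even though the client runs on Windows.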
I believe you've mixed up syntax, try:environment:
- booleanvar=${MY_BOOLEAN_VAL}orenvironment:
booleanvar: ${MY_BOOLEAN_VAL} | As docker documentationsuggestsboolean values in a docker-compose file should be enclosed in single quotes to avoid misinterpretation by the YAML parser.
I have docker-compose file that populates some of the values with environment variables of the shell where it gets invokedmyservice:
environment:
- firstvar: ${MY_FIRST_VAL}
- ...
- booleanvar: ${MY_BOOLEAN_VAL}MY_BOOLEAN_VALcan be eithertrueorfalseand is exposed via a config file.
I tried'${MY_BOOLEAN_VAL}'and"${MY_BOOLEAN_VAL}"instead of${MY_BOOLEAN_VAL}hoping fordocker stack deployto force a bash-like mechanism for neutralising the YAML parser to no avail.How can I pass a boolean value using an environment variable to compose file? | Setting boolean value in docker-compose.yaml using environment variable |
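A minimal sketch of that pattern end to end; the variable and service names are only examples, and the value simply arrives in the container as the string "true" or "false":
export MY_BOOLEAN_VAL=true
# docker-compose.yml
services:
  myservice:
    environment:
      booleanvar: "${MY_BOOLEAN_VAL}"
Quoting here only keeps the YAML parser from reinterpreting the substituted value; environment variables are always plain strings inside the container anyway.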
I got the same issue and the settings below worked for me. Set the proxy address in the Docker settings as username:password@proxyAddress:port. I also set the DNS server to automatic rather than pinning it to 8.8.8.8. | I've installed docker on my office windows 10 Pro machine. I'm facing a dial tcp lookup issue while trying to pull from the registry.Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.65.1:53: no such hostI've tried many possible solutions from online, but I couldn't figure out the issue. Can someone please help me with this issue? Thanks. | dial tcp lookup: no such host issue on docker windows desktop
Actually it was easier than I thought: you just need to add a dot to the host path and it will work as expected, copying all files and folders within the /my_data folder: docker cp /tmp/my_data/. my_container:/my_data | It may sound trivial, but I couldn't find an easy way to copy multiple files into the root folder of a docker volume. I am using Ubuntu Xenial 16.04 and Docker 1.12.1. For example, if I have an Ubuntu container with the volume /my_data:docker run --name my_container -v /my_data -d ubuntu:latestOn my host machine I have a folder called /tmp/my_data/ with multiple files inside, and I would like to copy all those files into the volume /my_data in my_container. I have tried the following approaches but none of them work:docker cp /tmp/my_data my_container:/docker cp /tmp/my_data/* my_container:/my_data/Does someone know a workaround for this issue? | How to copy multiple files into a docker data volume
TheDocker Remote APIhas aPING endpoint. You can use the endpoint to check whether you can successfully connect to the Docker daemon.docker-machine envsets the environment variableDOCKER_HOST, so you can useDOCKER_HOSTas host to ping. Usingnc, you can ping the host as follows:$ eval "$(docker-machine env default)"
$ echo -e "GET /_ping HTTP/1.1\r\n" | nc $DOCKER_HOST
HTTP/1.1 200 OK
Server: Docker/1.10.2 (linux)
Date: Thu, 03 Mar 2016 07:05:58 GMT
Content-Length: 2
Content-Type: text/plain; charset=utf-8
OKYou will need to check the return value. If it returns 'OK', the connection was successful. A simple check could look as follows (this probably needs more refinement):#!/bin/bash
if [ "$(echo -e "GET /_ping HTTP/1.1\r\n" | nc $DOCKER_HOST | tail -n 1)" == 'OK' ] ; then
echo "You are connected"
fi | I am writing a script that will boot docker-compose automatically.However, sometimes, doingeval "$(docker-machine env default)"doesn't cause the docker daemon to be connected immediatly and when the next line comes (docker-compose up) I getCannot connect to the Docker daemon. Is the docker daemon running on this host?If I usesleepfor a few seconds the issue resolves.Is there a way to test the connect to daemon via some system tool (checking if a process exists, if a network connect was made, port listened to, etc)? I want to test the docker daemon externally and not usedockercli | Testing connection to docker daemon |
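Building on the check above, here is a retry-loop sketch that waits for the daemon instead of using a fixed sleep; it assumes DOCKER_HOST looks like tcp://192.168.99.100:2376 and that the endpoint answers without TLS, so treat it as a starting point rather than a drop-in:
host_port=${DOCKER_HOST#tcp://}
for i in $(seq 1 30); do
  if echo -e "GET /_ping HTTP/1.1\r\n" | nc "${host_port%:*}" "${host_port#*:}" | grep -q OK; then
    echo "Docker daemon is reachable"
    break
  fi
  sleep 1
done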
Use a named volume instead of a bind mount: docker run -v tmphome:/root whatever. With a named volume the files will still persist over container restarts, but the contents of the directory in the image will be copied to the volume at creation time. Docker chooses where to store the data depending on the driver in use; local is the default, and its data goes under the volumes directory in the Docker data dir, usually /var/lib/docker/volumes
MAINTAINER AfterWorkGuinness
RUN apt-get update
RUN apt-get install -y openssh-server
RUN mkdir /root/.ssh
RUN cd /root/.ssh
RUN ssh-keygen -t rsa -N "" -f id_rsa
VOLUME /root
EXPOSE 22Build image:docker build -t ubuntu-ssh --no-cache .Testing the directory when I run the container:docker run -it -v c:/users/awg/dev/tmp/home:/root ubuntu-ssh
root@39eec8fa51ad:/# cd ~/.ssh
bash: cd: /root/.ssh: No such file or directory
root@39eec8fa51ad:/# cd /root/.ssh
bash: cd: /root/.ssh: No such file or directory | Docker doesn't create directory during build |
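To see the named-volume behaviour from the answer above in action (the volume name is just an example):
docker volume create tmphome
docker run -it -v tmphome:/root ubuntu-ssh
docker volume inspect tmphome   # the Mountpoint field shows where Docker stores the data on the host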
Finally, after trying to prove myself wrong (and nearly pulling my hair out), I found the cause of and solution for my problem. We are living in a world of illusion, and what you see is not what you get! I decided to inspect my data over the mongo shell client
rather than the MongoDB Compass GUI. I figured out that the data that arrived in the database contained the correct UTC date. This ruled out all my previous
assumptions that there had to be something wrong with my Python application or with the environment the application is living in. What was left was MongoDB Compass itself.
After changing the time zone on my machine to a random time zone and refreshing the collection within MongoDB Compass, the displayed UTC date changed to a date that fits the random time zone. Be aware that MongoDB Compass displays whatever is saved in the database Date field, shifted by your machine's time zone. For example, if you saved a UTC time equivalent to 8:00 am,
and your machine's time zone is Europe/Warsaw, then MongoDB Compass will display 10:00 am. | I package my python (flask) application with docker. Within my app I'm generating a UTC date with the datetime library using datetime.utcnow(). Unfortunately, when I inspect the saved data with MongoDB Compass the UTC date is offset by two hours (to my local time zone). All my docker containers have the time zone set to Etc/UTC. Moreover, the mongoengine connection to MongoDB uses tz_aware=False and tzinfo=None, which prevents on-the-fly date conversions. Where does the offset come from and how do I fix it? | Incorrect UTC date in MongoDB Compass
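If you want to repeat the check described in the answer, inspecting the raw document from the shell bypasses Compass's local-time display; the container, database and collection names below are placeholders, and depending on the image version the shell binary is mongo or mongosh:
docker exec -it my_mongo mongo --quiet --eval 'db.getSiblingDB("mydb").events.findOne()'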
I solved it myself. putting the answer herefirst doaws configureThis will ask you some questions like security id and key. you should be able to get this information from the aws dashboard.aws ec2 describe-subnetsThis will list a bunch of subnet information. Just look at the first one and make note of AvailabilityZone and Subnet Iddocker-machine create --driver amazonec2 --amazonec2-subnet-id=xxxx --amazonec2-zone=c aws01Here enter the subnet ID you noted from step two and only the last character of the Availability Zone (so if the value is us-east-1c just enter c)Now you will seeRunning pre-create checks...
Creating machine...
(aws01) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env aws01 | I am reading this article which shows me how I can configure my docker VM on top of amazon ec2https://docs.docker.com/machine/drivers/aws/I got to the stepdocker-machine create --driver amazonec2 aws01but now I get an errorError with pre-create check: "unable to find a subnet in the zone: us-east-1a"I googled and found this threadhttps://github.com/docker/machine/issues/1771but did find anything which worked for me.Has anyone been able to successfully create a VM on top of AWS using docker-machine? | Integrating Docker-Machine with Amazon EC2 |
I found a solution for me:mkdir /opt/docker && cd /opt/docker
wget https://get.docker.com/builds/Linux/i386/docker-1.11.2.tgz
wget https://get.docker.com/builds/Linux/i386/docker-1.11.0.tgz
wget https://get.docker.com/builds/Linux/i386/docker-1.10.0.tgz # versions you want
tar -xzf docker-1.11.2.tgz -C 1.11.2
tar -xzf docker-1.11.0.tgz -C 1.11.0
tar -xzf docker-1.10.0.tgz -C 1.10.0add something like this to your.bashrcPATH_DOCKER=$PATH
dmenter() {
case $1 in
swarm)
eval $(dm env --swarm swarm)
VERSION=$(docker-machine version swarm)
export PATH=/opt/docker/$VERSION/usr/local/bin:$PATH_DOCKER
;;
"")
eval $(docker-machine env --unset)
export PATH=$PATH_DOCKER
;;
*)
eval $(docker-machine env $*)
VERSION=$(docker-machine version $*)
export PATH=/opt/docker/$VERSION/usr/local/bin:$PATH_DOCKER
;;
esac
}Now you can enter your docker withdmenter and always have the right client version available. | As I'm working with docker and docker-machine a lot, I have to work with several docker versions at the same time.
And we all know how hard this can be:$ docker ps
Error response from daemon: client is newer than server (client API version: 1.23, server API version: 1.22)So, my question: (How) is it possible to run multiple versions of docker client on my Ubuntu 16.04? Ideally it would be to automatically select the right version, once I enter a host withdocker-machine.Side note: I know how to update the client or the server. But I still have to work with different versions. | multiple docker clients on the same machine |
What you got is as expected. Microsoft does not support running the Docker daemon (also known as the service) within the WSL instance. You can refer to this discussion. What you can do is use the docker client in WSL to connect to a remote docker engine, which means the docker daemon still runs somewhere else. But if you use WSL 2, which was announced on May 6th, 2019, then, according to Microsoft's announcement, it should be possible (there is also a demo in the announcement you can have a look at): Today we're unveiling the newest architecture for the Windows Subsystem for Linux: WSL 2! Changes in this new architecture will allow for: dramatic file system performance increases, and full system call compatibility, meaning you can run more Linux apps in WSL 2 such as Docker. | I'm running Ubuntu as a subsystem on Windows 10.I have just followed the steps to install Docker on Linux:https://docs.docker.com/install/linux/docker-ce/ubuntu/And am now at the step to test the hello-world app:$ sudo docker run hello-worldWhere I get this error:docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.I have narrowed it down to the fact that it actually is the service that is not running - despite lots of other solutions online that more or less fix this type of error.When I check the status:$ sudo service docker status
* Docker is not runningIt says it's not running so I start it successfully:$ sudo service docker start
* Starting Docker: docker [ OK ]If I check the status immediately it says it's running. But when I check it again a few second later, it's not runnning:$ sudo service docker status
* Docker is running
$ sudo service docker status
* Docker is not runningWhy is the Docker service stopping and how can I keep it running? | Why is the Docker service stopping? |
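As a concrete sketch of the "client in WSL, daemon elsewhere" setup from the answer above: if Docker Desktop runs on the Windows host with "Expose daemon on tcp://localhost:2375 without TLS" enabled in its settings, the WSL client can be pointed at it like this (the port and the Docker Desktop assumption are specific to that setup):
export DOCKER_HOST=tcp://localhost:2375
docker ps
Putting the export into ~/.bashrc makes it permanent for new WSL shells.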
Generally it is sometimes necessary or more useful to use one container for more than one process, as in this situation. Such a situation happens when the processes are used together to fulfill a single task. I can imagine, for example, a situation where somebody wants to add logging to a web application by using ELK (Elasticsearch, Logstash, Kibana). Those things run together and can have a supervisor monitoring the processes inside one container. But in most cases it is better to use one process per container. What is more, the docker command should start the process itself, for example running a java application with /usr/bin/java -jar application.jar rather than via an external script: ./launchApplication.sh See the discussion on http://www.reddit.com/r/docker/comments/2t1lzp/docker_and_the_pid_1_zombie_reaping_problem/ where the problem is concerned. | I sometimes use Docker for my development work. When I do, I usually work on an out-of-the-box LAMP image from tutum. My question is: Doesn't it defeat the purpose of working with Docker if it runs multiple processes in one container (like the container started off Tutum's LAMP image)? Isn't the whole idea of Docker to separate each process into a separate container? | Docker - one process per container?
First of all the celery image is deprecated in favour of standard python image more infohere.WORKDIRsets the working directory for all the command after it is defined in the Dockerfile, which means the command which you are try to run will run from that directory. Docker image for celery sets the working directory to/home/user.Since your code is mounted on/celery_smapleand the working directory is/home/user, Celery is not able to find your python module.One alternative is to cd into the mounted directory and execute the command:celery:
image: celery:3.1.25
command: "cd /celery_sample && celery worker -A my_celery -l INFO"
volumes:
- .:/celery_sample
networks:
- webnetnotice the commandAnd another one is to create your own image withWORKDIRset to/celery_sampleeg:FROM python:3.5
RUN pip install celery==3.1.25
WORKDIR /celery_sampleafter building you own image you can use the compose file by changing theimageof celery serviceEditYou need to link the services to one another in order to communicate:version: "3"
services:
web:
build:
context: .
dockerfile: Dockerfile
command: "python my_celery.py"
ports:
- "8000:8000"
networks:
- webnet
volumes:
- .:/celery_sample
links:
- redis
redis:
image: redis
networks:
- webnet
celery:
image: celery:3.1.25
command: "celery worker -A my_celery -l INFO"
volumes:
- .:/home/user
networks:
- webnet
links:
- redis
networks:
webnet:and your configuration file should be:## Broker settings.
BROKER_URL = 'redis://redis:6379/0'
## Using the database to store task state and results.
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'once you have linked the services in compose file you can access the service by using the service name as the hostname. | I have Flask app with Celery worker and Redis and it's working normally as expected when running on local machine. Then I tried to Dockerize the application. When I trying to build/start the services ( ie, flask app, Celery, and Redis) usingsudo docker-compose upall services are running except Celery and showing an error asImportError: No module named 'my_celery'But, the same code working in local machine without any errors. Can any one suggest the solution?DockerfileFROM python:3.5-slim
WORKDIR celery_sample
ADD . /celery_sample
RUN pip install -r requirements.txt
EXPOSE 8000docker-compose.ymlversion: "3"
services:
web:
build:
context: .
dockerfile: Dockerfile
command: "python my_celery.py"
ports:
- "8000:8000"
networks:
- webnet
volumes:
- .:/celery_sample
redis:
image: redis
networks:
- webnet
celery:
image: celery:3.1.25
command: "celery worker -A my_celery -l INFO"
volumes:
- .:/celery_sample
networks:
- webnet
networks:
webnet:requirements.txtflask==0.10
redis
requests==2.11.1
celery==3.1.25my_celery.py( kindly ignore the logic)from flask import Flask
from celery import Celery
flask_app = Flask(__name__)
celery_app = Celery('my_celery')
celery_app.config_from_object('celeryconfig')
@celery_app.task
def add_celery():
return str(int(10)+int(40))
@flask_app.route('/')
def index():
return "Index Page"
@flask_app.route('/add')
def add_api():
add_celery.delay()
return "Added to Queue"
if __name__ == '__main__':
flask_app.debug = True
flask_app.run(host='0.0.0.0', port=8000)celeryconfig.py## Broker settings.
BROKER_URL = 'redis://localhost:6379/0'
## Using the database to store task state and results.
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0' | couldn't start Celery with docker-compose |
docker-compose run will start a new container on the same network with a name likefolder_db_container_run_1. This is not running mysql since you passed it a command. So it is running that command. So you connect from this container to the original db containerdocker-compose run db_container mysql -uuser -ppass db_name -h db_containerWhile when you do exec you get inside the running container. And not specifying host means local mysqldocker-compose exec db_container mysql -uuser -ppass db_nameThat is why it works. No extra container is launched in this case | Why do you need to specify a host when calling withdocker-compose run?e.g.docker-compose run db_container mysql -uuser -ppass db_name -h db_containerseems to be the direct equivalent ofdocker-compose exec db_container mysql -uuser -ppass db_nameWhen omitting the hostname flag from the first example, mysql fails with a "can't connect to socket" error.What is the difference between the two examples? | Mysql client called with `docker-compose run` vs. `docker-compose exec` |
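You can watch this difference directly; --rm removes the extra container that run creates once the command exits:
docker-compose run --rm db_container mysql -uuser -ppass db_name -h db_container
docker ps -a   # without --rm, a leftover container named like folder_db_container_run_1 shows up here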
You can create an AMI based on the AWS provided AMI, and customize it. It will still be managed since the Batch and/or ECS daemon is running on it.As a side note I’m trying to do the same thing but no luck so far. I may end up creating a custom AMI and include the configure script in the AMI itself in /etc/rc.local. Not ideal but I don’t think Batch can pass a user data script other than what it needs. I am still looking into this. | I would like to create aManaged Compute EnvironmentforAWS Batch, but useEC2 User Datato configure the instances as they are brought into the ECS fleet that Batch is scheduling jobs onto.It shouldn't matter, but the purpose of the User Data script is to pull down large data files onto an InstanceStore that the Docker containers will reference.Thisis possible in ECS, but I have found no way to pass User Data to a Managed Batch Compute Environment.At most, I can specify the AMI. But since we're going with Managed, we must use theAmazon ECS-optimized AMI.I'd prefer to useEC2 User Dataas the solution, as it gives a entry-point for any other bootstrapping we wish to perform. But I'm open to other hacks or solutions, so long as they are applicable to aManaged Compute Environment. | Create AWS Batch Managed Compute Environment passing UserData to Container Instances |
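A rough sketch of the custom-AMI route mentioned above, baking the bootstrap into the image instead of user data; the script name and path are hypothetical and this assumes an ECS-optimized base AMI where /etc/rc.local is honoured:
sudo cp configure-instance.sh /usr/local/bin/configure-instance.sh
sudo chmod +x /usr/local/bin/configure-instance.sh /etc/rc.local
echo '/usr/local/bin/configure-instance.sh' | sudo tee -a /etc/rc.local
# then create an AMI from this instance and select it in the Batch compute environment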
Instead of creating additional databases in docker-compose file, just create them in SQL files instead:version: '3.3'
services:
web:
build:
context: ./php56
dockerfile: Dockerfile
container_name: php56
depends_on:
- db
volumes:
- ../www:/var/www/html/
ports:
- 8000:80
db:
container_name: mysql
image: mysql:5.7.21
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: dkum
MYSQL_USER: devuser
MYSQL_PASSWORD: devpass
volumes:
- ../sql/baze/dkum.sql:/docker-entrypoint-initdb.d/dkum.sql
- ../sql/baze/dkum_joomla.sql:/docker-entrypoint-initdb.d/dkum_joomla.sql
- ../sql/baze/dkum_test.sql:/docker-entrypoint-initdb.d/dkum_test.sql
ports:
- 6033:3306dkum.sqlCREATE TABLE dkum_table (
DkumID int,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255)
);dkum_joomla.sqlCREATE DATABASE IF NOT EXISTS dkum_joomla;
USE dkum_joomla;
CREATE TABLE dkum_joomla_table (
DkumJoomlaID int,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255)
);dkum_test.sqlCREATE DATABASE IF NOT EXISTS dkum_test;
USE dkum_test;
CREATE TABLE dkum_test_table (
DkumTestID int,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255)
); | I want to create Docker container and import 3 databases into it. I've tried with the following code:version: '3.3'
services:
web:
build:
context: ./php56
dockerfile: Dockerfile
container_name: php56
depends_on:
- db
volumes:
- ../www:/var/www/html/
ports:
- 8000:80
db:
container_name: mysql
image: mysql:5.7.21
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: dkum
MYSQL_USER: devuser
MYSQL_PASSWORD: devpass
entrypoint:
sh -c "
echo 'CREATE DATABASE IF NOT EXISTS dkum_joomla; CREATE DATABASE IF NOT EXISTS dkum_test;' > /docker-entrypoint-initdb.d/init.sql;
/usr/local/bin/docker-entrypoint.sh --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
"
volumes:
- ../sql/baze/dkum.sql:/docker-entrypoint-initdb.d/dkum.sql
- ../sql/baze/dkum_joomla.sql:/docker-entrypoint-initdb.d/dkum_joomla.sql
- ../sql/baze/dkum_test.sql:/docker-entrypoint-initdb.d/dkum_test.sql
ports:
- 6033:3306This code creates only 1 database (dkum) filled with data from the dkum.sql volume. If I remove dkum_joomla.sql and dkum_test.sql volumes then it creates 3 databases (dkum, dkum_joomla and dkum_test) with only dkum database filled with data.Here are my SQL files. I will later expand them.dkum.sqlCREATE TABLE dkum_table (
DkumID int,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255)
);dkum_joomla.sqlCREATE TABLE dkum_joomla_table (
DkumJoomlaID int,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255)
);dkum_test.sqlCREATE TABLE dkum_test_table (
DkumTestID int,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255)
); | Docker compose with multiple databases in one container |
I can definitely reproduce this with an empty php folder, i.e. missing the Dockerfile, with the following minimal example.File hierarchy:.
├── docker-compose.yml
└── php
## ^-- mind this is an empty folder, not a fileAnd the minimaldocker-compose.yml:version: "3.9"
services:
php-apache-environment:
container_name: php-apache
build: ./phpRunningdocker compose upyields the same error as yours:failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount2757070869/Dockerfile: no such file or directorySo, if you create aDockerfilein thephpfolder, e.g.:.
├── docker-compose.yml
└── php
└── DockerfileWith a content likeFROM php:fpmThen the service starts working:$ docker compose up
[+] Running 1/0
⠿ Container php-apache Created 0.1s
Attaching to php-apache
php-apache | [14-Apr-2023 08:42:10] NOTICE: fpm is running, pid 1
php-apache | [14-Apr-2023 08:42:10] NOTICE: ready to handle connectionsAnd if your file describing the image inside the folderphphas a different name than the standard one, which isDockerfile, then you have to adapt yourdocker-compose.yml, using the object form of thebuildparameter:version: "3.9"
services:
php-apache-environment:
container_name: php-apache
build:
context: ./php
dockerfile: Dockefile.dev # for exampleRelated documentation:https://docs.docker.com/compose/compose-file/build/#build-definition | On Windows 11, with this rather simpledocker-compose.yamlfileversion: '3.0'
services:
php-apache-environment:
container_name: php-apache
build: ./php
volumes:
- ./php/src:/var/www/html/
ports:
- 8000:80
db:
image: mysql:5.6.27
restart: always
environment:
MYSQL_ROOT_PASSWORD: PassWord
MYSQL_DATABASE: test
MYSQL_USER: test
MYSQL_PASSWORD: 9yI2G0s-sZf37SS5Ml1Kj
ports:
- "9906:3306"
phpmyadmin:
image: phpmyadmin/phpmyadmin
restart: always
environment:
PMA_HOST: db
PMA_PORT: 9906
PMA_USER: test
PMZ_PASSWORD: 9yI2G0s-sZf37SS5Ml1Kj
ports:
- '8080:80'
depends_on:
- dbAnd the commanddocker compose up --detachthe images are cloned but I get the following error:failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount1583816350/Dockerfile: no such file or directoryIn Docker desktop I see the images but as unused.I googled this error and came up withthisbut the linedockerfile: Dockerfileis rejected with:services.phpmyadmin Additional property dockerfile is not allowed | Docker compose fails with "failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount[...]/Dockerfile: no such file or directory" |
After searching the entire web, I didn't encounter any solution for this problem, so I contacted AWS Support. They told me that the issue is the missing "amazon-efs-utils" package on the EC2 instances created by Elastic Beanstalk, and I then fixed the error by creating a file named efs.config inside the .ebextensions folder:.ebextensions/efs.configpackages:
yum:
amazon-efs-utils: 1.2Finally, I zipped the .ebextensions folder and my Dockerrun.aws.json file before uploading and the problem has been resolved. | I am trying to mount my EFS to a multi-docker Elastic Beanstalk environment using task definition with Dockerrun.aws.json. Also, I have configured the security group of EFS to accept NFS traffic from EC2 (EB environment) security group.However, I am facing with the error:ECS task stopped due to: Error response from daemon: create
ecs-awseb-SeyahatciBlog-env-k3k5grsrma-2-wordpress-88eff0a5fc88f9ae7500:
VolumeDriver.Create: mounting volume failed: mount: unknown filesystem
type 'efs'.I am uploading this Dockerrun.aws.json file using AWS management console:{
"AWSEBDockerrunVersion": 2,
"authentication": {
"bucket": "seyahatci-docker",
"key": "index.docker.io/.dockercfg"
},
"volumes": [
{
"name": "wordpress",
"efsVolumeConfiguration": {
"fileSystemId": "fs-d9689882",
"rootDirectory": "/blog-web-app/wordpress",
"transitEncryption": "ENABLED"
}
},
{
"name": "mysql-data",
"efsVolumeConfiguration": {
"fileSystemId": "fs-d9689882",
"rootDirectory": "/blog-db/mysql-data",
"transitEncryption": "ENABLED"
}
}
],
"containerDefinitions": [
{
"name": "blog-web-app",
"image": "bireysel/seyehatci-blog-web-app",
"memory": 256,
"essential": false,
"portMappings": [
{"hostPort": 80, "containerPort": 80}
],
"links": ["blog-db"],
"mountPoints": [
{
"sourceVolume": "wordpress",
"containerPath": "/var/www/html"
}
]
},
{
"name": "blog-db",
"image": "mysql:5.7",
"hostname": "blog-db",
"memory": 256,
"essential": true,
"mountPoints": [
{
"sourceVolume": "mysql-data",
"containerPath": "/var/lib/mysql"
}
]
}
]
}AWS Configuration Screenshots:EC2 Security Group (Automatically created by EB)EFS Security GroupEFS networking | AWS Elastic Beanstalk EFS Mount Error: unknown filesystem type 'efs' |
I've been struggling with this for a while, and finally got it to work. I found the solution to the 17002 error was to run the setup.exe /configure config.xml while running the docker image interactively, and then committing that container. This worked for me on the windows:1809 image.Full writeupI'm copying the downloaded office files into the docker image separately, as I was having issues with getting the download to work from inside the docker file(see here). So I have the folderOfficeat the same level as the docker file with the contents ofsetup.exe /download config.xml. Then the docker file below builds a base image. I randocker run -it {IMAGEID} powershell, navigate to C:\\odtsetup and with the interactive console runsetup.exe /configure config.xml, exit the container and rundocker stop {CONTAINERID}
docker commit {CONTAINERID}`I now have a base windows server image with docker installed, and can use it in the dockerfile for my application. If I need to update the server image or excel version I'll need to do this manually again, but I'm just thankful it's working.DOCKERFILEFROM mcr.microsoft.com/windows:1809
WORKDIR C:\\odtsetup
ADD https://download.microsoft.com/download/2/7/A/27AF1BE6-DD20-4CB4-B154-EBAB8A7D4A7E/officedeploymenttool_13426-20308.exe odtsetup.exe
RUN odtsetup.exe /quiet /norestart /extract:C:\\odtsetup
ADD config.xml .
ADD Office Office\\config.xml
| Error MessageODT (Office Deployment Tool) log reported error when installing into Windows Container (Server Core): C2R client returned failing error code, error code: 17002EnvironmentsBehavior in Windows Server 2019 (1809) with Desktop Experience installed.ODT installation Result: Succeeded.test-o365.ps1: Succeeded.Behavior in Container (mcr.microsoft.com/windows/servercore:ltsc2019)ODT installation Result: Negative (C2R client returned failing error code, error code: 17002)test-o365.ps1: Negative: HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)DockerfileFROM mcr.microsoft.com/windows/servercore:ltsc2019
WORKDIR C:/setup
COPY . .
ENTRYPOINT startup.cmdstartup.cmdcurl.exe https://download.microsoft.com/download/2/7/A/27AF1BE6-DD20-4CB4-B154-EBAB8A7D4A7E/officedeploymenttool_12325-20288.exe --output .\
officedeploymenttool_12325-20288.exe
officedeploymenttool_12325-20288.exe /quiet /passive /extract:.
setup.exe /configure o365.xml
powershell -file test-o365.ps1
pauseo365.xml
test-o365.ps1# Write current datetime into result.xlsx to verify that Office COM component is working.
$filename = [System.Environment]::CurrentDirectory + "\result.xlsx"
$filename
if ([System.IO.File]::Exists($filename )) {
Remove-Item $filename
}
$xl=New-Object -ComObject Excel.Application
$xl.Visible=$false
$wb=$xl.WorkBooks.Add()
$ws=$wb.WorkSheets.item(1)
$ws.Cells.Item(1,1)= [System.DateTime]::Now
$wb.SaveAs($filename)
$xl.Quit()More informationWe are already aware of 'server-side Automation of Office' issues as mentioned in article [3]. At current stage, we are evaluating the possibility on running legacy ASP.NET application in Windows container, with Office/COM inter-operation enabled.ReferencesOverview of the Office Deployment ToolWhat is the Server Core installation option in Windows Server?Considerations for server-side Automation of Office | Installing Office into Windows Container (servercore:ltsc2019) failed with error code 17002 |
You need to secure the registry before you can access it remotely, or explicitly allow all your Docker daemons to access insecure registries.To secure the registry the easiest choice is to buy an SSL certificate for your server, but you can also self-sign the certificate and distribute to clients.To allow insecure access add the argument--insecure-registry myregistrydomain.com:5000to all the daemons who need to access the registry. (Obviously replace the domain name and port with yours).The full instructions (including an example of your error message) are available at:https://github.com/docker/distribution/blob/master/docs/deploying.mdRegarding the error message, IguessDocker tries to use v2 first, fails because of the security issue then tries v1 and fails again. | I'm trying to use a self hosted docker registry v2. I should be able to push a docker image, which does work locally on the host server (coreos) running the registry v2 container. However, on a separate machine (also coreos, same version) when I try to push to the registry, it's try to push to v1, giving this error:Error response from daemon: v1 ping attempt failed with error: Get
https://172.22.22.11:5000/v1/_ping: dial tcp 172.22.22.11:5000: i/o timeout.
If this private registry supports only HTTP or HTTPS with an unknown CA
certificate, please add `--insecure-registry 172.22.22.11:5000` to the
daemon's arguments. In the case of HTTPS, if you have access to the registry's
CA certificate, no need for the flag; simply place the CA certificate at
/etc/docker/certs.d/172.22.22.11:5000/ca.crtboth machine's docker executable is v1.6.2. Why is it that one works and is pushing to v2 but the other is v1?Here's the repo for the registry:https://github.com/docker/distribution | docker is using the v1 registry api when it should use v2 |
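The Docker 1.6 in the question predates this, but on current engines the same setting can also live in /etc/docker/daemon.json on each client host instead of a daemon flag; the registry address here is the one from the question:
echo '{ "insecure-registries": ["172.22.22.11:5000"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
If the file already exists, merge the key into it rather than overwriting.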
You need a local version of gulp as well as a global one.Adding this line should fix your issueRUN npm i gulp | I am attempting to use gulp inside a Docker container.I have the followingDockerfileFROM golang:alpine
RUN apk --update add --no-cache git nodejs
RUN npm install --global gulp
ENV GOPATH=/go PATH=$PATH:/go/bin
VOLUME ["/go/src/github.com/me/sandbox", "/go/pkg","/go/bin"]
WORKDIR /go/src/github.com/me/sandbox
CMD ["gulp"]and I have the followingdocker-compose.ymlversion: '2'
services:
service:
build: ./service
volumes:
- ./service/src/:/go/src/github.com/me/sandboxdocker-compose buildbuilds successfully, but when I rundocker-compose up, I get the following error messageRecreating sandbox_service_1
Attaching to sandbox_service_1
service_1 | [22:03:40] Local gulp not found in /go/src/github.com/me/sandbox
service_1 | [22:03:40] Try running: npm install gulpI have tried several different things to try to fix it.Tried also installinggulp-cliglobally and locallyTried installinggulplocally withnpm install gulpTried moving thenpm install --global gulpafter theWORKDIRTried different paths for volumes.My guess is that it has something to do with the volumes, because when I get rid of anything having to do with a volume, it doesn't complain.Mr project structure is shown in screenshot below: | Docker Compose w/ Gulp - Local gulp not found |
Things changed a little with the introduction of Docker Toolbox. Now you do not directly interact with boot2docker, but instead use docker-machine. Although boot2docker still exists as a VM, there is no CLI tool any longer; it was replaced by Docker Machine. Thus you should be able to get hold of the VM's IP address by typing docker-machine ip <machine-name>. If you have the default installation, your machine name will be default. With docker-machine active you can have a look at which VM is currently active. With that name you can also use docker-machine inspect <machine-name>. You can find more about Docker Machine in the official docs. | I used this official guide to set up Docker on a Windows 7 machine:https://docs.docker.com/windows/started/I successfully pulled an image from the docker hub and I can run my own docker image.Now I am stuck trying to run and access a webserver with docker on Windows. Apparently, behind boot2docker I can't reach my docker container the way I was used to.Once I added -p 3007:80 to the docker run command, the port forwarding showed up in the container list (docker ps) as 0.0.0.0:3007 -> 80. And with -p 127.0.0.1:3007:80 I get a more meaningful ip address. I cannot, however, reach the container with a browser on the Windows host.Moreover, docker inspect does not reveal an ip address for the running container (which also seems wrong).I also tried --net=host to no avail. | Networking with Docker on Windows
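Applied to the port-forwarding example from the question, that typically looks like this (192.168.99.100 is just the usual Toolbox default; yours may differ):
docker-machine ip default        # e.g. 192.168.99.100
# then browse to http://192.168.99.100:3007 from the Windows host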
Addtty: trueto thepgadminservice in the docker-compose.yml file.pgadmin:
image: dpage/pgadmin4:4.19
restart: always
ports:
- 8001:8080/tcp
environment:
- PGADMIN_LISTEN_ADDRESS=0.0.0.0
- PGADMIN_LISTEN_PORT=8080
- PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
- PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
networks:
- db_network
# ADD THIS LINE
tty: trueSo the complete file will look as follows:version: '3'
services:
############################
# Setup database container #
############################
postgres_db:
image: postgres
restart: always
ports:
- ${POSTGRES_PORT}:${POSTGRES_PORT}
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- PGDATA=/var/lib/postgresql/data/pgdata
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- ./data:/var/lib/postgresql/data
networks:
- db_network
pgadmin:
image: dpage/pgadmin4:4.19
restart: always
ports:
- 8001:8080/tcp
environment:
- PGADMIN_LISTEN_ADDRESS=0.0.0.0
- PGADMIN_LISTEN_PORT=8080
- PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
- PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
networks:
- db_network
# ADD THIS LINE, TO BE ABLE TO LOGIN
tty: true
networks:
db_network:
driver: bridge | This is thedocker-compose.ymlfile:version: '3'
services:
############################
# Setup database container #
############################
postgres_db:
image: postgres
restart: always
ports:
- ${POSTGRES_PORT}:${POSTGRES_PORT}
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- PGDATA=/var/lib/postgresql/data/pgdata
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- ./data:/var/lib/postgresql/data
networks:
- db_network
pgadmin:
image: dpage/pgadmin4:4.19
restart: always
ports:
- 8001:8080/tcp
environment:
- PGADMIN_LISTEN_ADDRESS=0.0.0.0
- PGADMIN_LISTEN_PORT=8080
- PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
- PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
networks:
- db_network
networks:
db_network:
driver: bridgeThere is a.envfile in the same directory.# The above refers to the name of the postgres container since using docker-compose
# This is because docker-compose creates a user-defined network. Kubernetes also does this.
POSTGRES_PORT=5432
POSTGRES_USER=website
POSTGRES_PASSWORD=website
POSTGRES_DB=wikifakes_main[email protected]PGADMIN_DEFAULT_PASSWORD=my-secure-passwordWhen executingdocker-compose up --buildboth docker start and I can access the pgAdmin4 website vialocalhost:8001.
However, after entering the credentials, I get the following response:Specified user does not existWhy does the specified user not exist and how should I change my environment so that I can log in?The login on anpgadmin4docker created viadocker run --rm -e PGADMIN_DEFAULT_EMAIL="[email protected]" -e PGADMIN_DEFAULT_PASSWORD="my-secure-password" -p 8001:80 dpage/pgadmin4works alright though. | Docker dpage/pgadmin4 error: specified user does not exist |
Eventually, I figured out the reason. The Visual Studio resources tool (which I assume is responsible for generating the .resx file content) assumes that file paths are case insensitive and generates all file paths in lower case (i.e. data\example.yaml). At the same time, the base docker image used for the build is based on Linux, where paths are case sensitive.
The generated file reference inside the .resx looks like this (note the lower-cased path): data\example.yaml;System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089;utf-8
Bottom line: although a working solution was to manually edit the .resx file (or use lower-case file names so the generated paths match), we decided to avoid using resources at all. It seems there is no proper support for this scenario anymore. | "dotnet build" builds a project with no errors, and at the same time docker build gives the following error:/src/Audit.Worker/Example/Resources.resx : error MSB3103: Invalid Resx file. System.IO.DirectoryNotFoundException: Could not find a part of the path '/src/Audit.Worker/Example/data/example.yaml'. [/src/Audit.Worker/Audit.Worker.csproj]DockerfileFROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["./Audit.Worker/Audit.Worker.csproj", "Audit.Worker/"]
RUN dotnet restore "Audit.Worker/Audit.Worker.csproj"
COPY . /src/
WORKDIR "/src/Audit.Worker/"
RUN dotnet build "Audit.Worker.csproj" -c Release -o /apps
FROM build AS publish
RUN dotnet publish "Audit.Worker.csproj" -c Release -o /apps
FROM base AS final
WORKDIR /apps
COPY --from=publish /apps .
ENTRYPOINT ["dotnet", "Audit.Worker.dll"] | .Net resource with files content make docker build fail |
There are two issues I've identified so far. Maya G points out a third in the comments below.Incorrect conditional logicYou need to replace:if len(sys.argv) >= 2:
sys.exit('ERROR: Received 2 or more arguments. Expected 1: Input file name')With:if len(sys.argv) > 2:
sys.exit('ERROR: Received more than two arguments. Expected 1: Input file name')Bear in mind that the first argument given to the script is always its own name. This means you should be expecting either 1 or 2 arguments insys.argv.Issues with locating the default fileAnother problem is that your docker container's working directory is/home/aws, so when you execute your Python script it will try to resolve paths relative to this.This means that:with open('inputfile.txt') as f:Will be resolved as/home/aws/inputfile.txt, not/home/aws/myapplication/inputfile.txt.You can fix this by either changing the code to:with open('myapplication/inputfile.txt') as f:Or (preferred):with open(os.path.join(os.path.dirname(__file__), 'inputfile.txt')) as f:(Sourcefor the above variation)UsingCMDvs.ENTRYPOINTIt also seems like your script apparentlyisn'treceivingmyapplication/inputfile.txtas an argument. This might be a quirk withCMD.I'm not 100% clear on the distinction between these two operations, but I always useENTRYPOINTin my Dockerfiles and it's given me no grief. Seethis answerand try replacing:CMD ["python", "/myapplication/script.py", "/myapplication/inputfile.txt"]With:ENTRYPOINT ["python", "/myapplication/script.py", "/myapplication/inputfile.txt"](thanks Maya G) | I've successfully built a Docker container and copied my application's files into the container in the Dockerfile. However, I am trying to execute a Python script that references an input file (that was copied into the container during the Docker build). I can't seem to figure out why my script is telling me it cannot locate the input file. I am including the Dockerfile I used to build the container below, and the relevant portion of the Python script that is looking for the input file it cannot find.Dockerfile:FROM alpine:latest
RUN mkdir myapplication
COPY . /myapplication
RUN apk add --update \
python \
py2-pip && \
adduser -D aws
WORKDIR /home/aws
RUN mkdir aws && \
pip install --upgrade pip && \
pip install awscli && \
pip install -q --upgrade pip && \
pip install -q --upgrade setuptools && \
pip install -q -r /myapplication/requirements.txt
CMD ["python", "/myapplication/script.py", "/myapplication/inputfile.txt"]Relevant portion of the Python script:if len(sys.argv) >= 2:
sys.exit('ERROR: Received 2 or more arguments. Expected 1: Input file name')
elif len(sys.argv) == 2:
try:
with open(sys.argv[1]) as f:
topics = f.readlines()
except Exception:
sys.exit('ERROR: Expected input file %s not found' % sys.argv[1])
else:
try:
with open('inputfile.txt') as f:
topics = f.readlines()
except:
sys.exit('ERROR: Default inputfile.txt not found. No alternate input file was provided')Docker command on host resulting in error:sudo docker run -it -v $HOME/.aws:/home/aws/.aws discursive python \
/discursive/index_twitter_stream.pyThe error from the command above:ERROR: Default inputfile.txt not found. No alternate input file was providedThe AWS stuff is drawn from a tutorial on how to pass your host's AWS credentials into the Docker container for use in interacting with AWS services. I used elements from here:https://github.com/jdrago999/aws-cli-on-CoreOS | Docker Python script can't find file |
As you already noticed, by design all containers in a pod are destined to live and die together. It's a bit hard to tell what your best alternative would be without knowing what kind of maintenance task your sidekick needs to perform exactly. Generally speaking, I can think of three approaches:Keep your maintenance container running. This is probably a fairly ugly solution as it wastes resources. It really only makes sense if the maintenance task can benefit from running periodically.Move the maintenance task over to your primary container, effectively converting your multi-container pod into a single-container one. I assume that you can run the task asynchronously (as you would already be able to run it in a separate container); if, for some reasons, you cannot, consider modifyingreadiness and liveness probesaccordingly so that your container is given enough time to finish any boot-up procedures before becoming eligible for termination.Consider adjusting your design so that the maintenance task may run as a separate pod (or maybe even as ajob). You'd then need to manage any dependencies and wiring yourself by putting together Kubernetes primitives properly. | I have a ReplicationController containing two containers in a pod, the first is a long-living pod, the second does a few maintenance tasks when the RC starts up a POD. However as the second container is short lived, it stops itself when it finishes its start tasks.
When Kuberbetes notices this, it kills off the POD and starts a new one...What is the correct way to handle this in Kuberbetes? | Short lived kubernetes container (/sidekick) in a pod (in a Replication Controller) |
Your application shouldnt run either way because of thisdocker run -it -p 4000:80 kubernatesimageit should bedocker run -it -p 4000:8080 kubernatesimageNow concerning the issue : Your runtime version is 8 : because of your dockerfile is "FROM openjdk:8" so your application will be running in java 8 environment =>version 52 ..
and you have compiled your application to jar file "spring-boot-web-0.0.1-SNAPSHOT.jar" by another version 55 which is java 11. So you have java version mismatch => The key is to make sure both the compile and runtime is using the same JDK.One proposed fix is change your java version in pom.xml file
...
1.8
1.8
...
Another fix is to change the runtime version in dockerfile to java 11 , there is not possible to base your image on openjdk:11 however you can use thisFROM adoptopenjdk/openjdk11:alpine-jre
ARG JAR_FILE=target/*.jar
WORKDIR /opt/app
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","app.jar"]note: you can still run a smaller compiled java version in bigger runtime env, i.e running a compiled 8 java version on java 11 runtime environmentI hope I helped | I created a web application using Springboot and now I'm going to dockerize it and upload it into docker hub. So myDockerfileis,FROM openjdk:8
EXPOSE 8080
ADD target/spring-boot-web-0.0.1-SNAPSHOT.jar spring-boot-web-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","spring-boot-web-0.0.1-SNAPSHOT.jar"]After creating.jarinside my target I'm building docker image using the following command,docker build -t kubernatesimageIt builds the docker image successfully and when I run thedocker imagesI can see the created image. But before uploading it into docker hub I need to run and check so I'm executing,docker run -it -p 4000:80 kubernatesimageAnd this returns the following exception,Exception in thread "main" java.lang.UnsupportedClassVersionError: guru/springframework/SpringBootWebApplication has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0According to @Nithin's answer inthisStackOverflow question, I found this happens due to version missmatch and the java version codes,49 = Java 5
50 = Java 6
51 = Java 7
52 = Java 8
53 = Java 9
54 = Java 10
55 = Java 11
56 = Java 12
57 = Java 13
58 = Java 14But still, I have no idea what do I need to perform to solve the issue. I mentioned openjdk:8 in myDockerfileand I runjava -versionto get the local JDK version and it returnedjava version "1.8.0_271"So do I need to change java version in my local machine or change myDockerfile? | Docker run returns an exception: Application has been compiled by a more recent version of the Java Runtime |
They are solving two different problems.--cache-to/fromis used to store the result of a build step and reuse it in future builds, avoiding the need to run the command again. This is stored in a persistent location outside of the builder, like on a registry, so that other builders can skip already completed steps of a build even if the image wasn't built on the local system.--mount type=cachecreates a mount inside the temporary container that's executed in a RUN step. This mount is reused in later executions of the build when the step itself is not cached. This is useful when a step pulls down a lot of external dependencies that do not need to be in the image and can safely be reused between builds. The storage of the mount cache is local to the builder and is an empty directory on first use. | According to the officialdocumentation, in order to leverage a cache backend indocker buildx build, you need to use the--cache-from/toflags.This makes sense as intuitively it signifies the place where the build result will be cached to (--cache-to) and what cache it will use to speed up the build process (--cache-from).However, there is another alternative (?) of using cache with themountoption within theRUNdirective, as in: (official example)RUN \
--mount=type=cache,target=/var/cache/apt \
apt-get update && apt-get install -y gitIn the last example where these (aptpackages in our case) will be retrieved from?Is the cache backend (s3,ghaetc) applicable in this case?Are these two cases complementary or orthogonal? | Difference between --cache-to/from and --mount type=cache in docker buildx build |
Do this in your Dockerfile to enable MySQLi:RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli | I build some containers with the following code:version: '3'
services:
db:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: test
MYSQL_DATABASE: test
MYSQL_USER: test
MYSQL_PASSWORD: test
ports:
- "9906:3306"
web:
image: php:7.3-apache
container_name: php_web
depends_on:
- db
volumes:
- ./:/var/www/html/
ports:
- "8100:80"This works like a charm. The only problem what I have is that I need the mysqli module. This one is not included in the php:7.3-apache image.So what I tried was to add this in the dockerfile:FROM php:7.3-apache
RUN docker-php-ext-install mysqliThat is not working. So how can I add the mysqli module to my container? | Enable mysqli in docker container |
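One detail worth spelling out: with image: php:7.3-apache in the compose file, the Dockerfile above is never used, so the web service has to be switched to a build (assuming the Dockerfile sits next to docker-compose.yml):
  web:
    build: .        # instead of image: php:7.3-apache
Then run docker-compose up --build so the extension is actually compiled into the rebuilt image.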