Response | Instruction | Prompt |
---|---|---|
Thanks to @aman-tuladhar and some hours lost on the internet I've found out that you just need to make sure storageClassName is set for your PersistentVolume and PersistentVolumeClaim. As per the documentation, if you want to avoid Kubernetes dynamically generating PersistentVolumes without considering the one you statically declared, you can just set an empty string " ". In my case I've set storageClassName: manual.
PersistentVolume
kind: PersistentVolume
metadata:
name: wordpress-volume
spec:
# ...
storageClassName: manual
hostPath:
path: /tmp/wordpress-volume
PersistentVolumeClaim
kind: PersistentVolumeClaim
metadata:
name: wordpress-volume-claim
spec:
storageClassName: manual
# ...
This works out of the box with the docker-for-desktop cluster (as long as mountPath is set to an absolute path).
References:
Kubernetes: Binding PersistentVolumes and PersistentVolumeClaims
Storing data into Persistent Volumes on Kubernetes | I'd like to access and edit files in my Kubernetes PersistentVolume on my local computer (macOS), but I cannot understand where to find those files! I'm pointing my hostPath to /tmp/wordpress-volume but I cannot find it anywhere. What is the hidden secret I'm missing? I'm using the following configuration on a docker-for-desktop cluster, Version 2.0.0.2 (30215).
PersistentVolume
kind: PersistentVolume
metadata:
name: wordpress-volume
spec:
# ...
hostPath:
path: /tmp/wordpress-volume
PersistentVolumeClaim
kind: PersistentVolumeClaim
metadata:
name: wordpress-volume-claim
# ...
Deployment
kind: Deployment
metadata:
name: wordpress
# ...
spec:
containers:
- image: wordpress:4.8-apache
# ...
volumeMounts:
- name: wordpress-volume
mountPath: /var/www/html
volumes:
- name: wordpress-volume
persistentVolumeClaim:
claimName: wordpress-volume-claim | How to access PersistentVolume files on docker-for-desktop? |
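A quick way to check that the statically declared volume is the one actually bound, shown here as a hedged sketch using the names from the Q&A above (this is not part of the original answer):
kubectl get pv wordpress-volume
kubectl get pvc wordpress-volume-claim
# Both should report STATUS "Bound"; if an extra, dynamically provisioned
# PersistentVolume appears instead, the storageClassName of the PV and the PVC
# are not matching.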
It's possible that it takes some time for postgres to start accepting connections. The way you've written it, it will call CREATE USER immediately after the start function returns. Try putting a sleep in there and see if it's still a problem. | I'm trying to create a simple postgres server with docker. I use the official postgres image as a base for my container. My Dockerfile contains these commands:
FROM postgres
USER postgres
RUN /etc/init.d/postgresql start &&\
psql --command "CREATE USER user WITH SUPERUSER PASSWORD 'user';" &&\
createdb -O user app
And when I try to run it I have an error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
What am I doing wrong? | Postgres on Docker. How can I create a database and a user? |
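A sketch of the answer's suggestion applied to the Dockerfile from the question: instead of a fixed sleep, poll until postgres accepts connections. The pg_isready loop is my assumption of a reasonable wait, not something stated in the answer (pg_isready ships with the official postgres image):
FROM postgres
USER postgres
RUN /etc/init.d/postgresql start &&\
    until pg_isready; do sleep 1; done &&\
    psql --command "CREATE USER user WITH SUPERUSER PASSWORD 'user';" &&\
    createdb -O user app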
You only need to delete the Kitematic folder in %APPDATA% (C:\Users\{User}\AppData\Roaming) and run Kitematic again. | I have installed Docker for Windows (running Windows 10). Out of the box, Docker would not install an image on Hyper-V but I was able to get it to work. Edit: I acquired Kitematic via the link from this screen: Upon clicking download, I get a zip file via http. Next, I copied the Kitematic zip contents to c:\program files\docker\kitematic. When I run Kitematic from the Docker menu, it gives me an error stating: "VirtualBox is not installed. Please install it via the Docker Toolbox." I don't want to use VirtualBox, if at all possible, because I have other software that uses Hyper-V. Is it possible to get Kitematic to use Hyper-V? Thanks, | How to use Kitematic with Hyper-V enabled? |
For further investigation of this question, I would like to note that I've "solved" my issue with the same approach as @Kai Hofstetter in the following post: How to mount a directory in the docker container to the host? | Well, basically I want to create a symbolic link ("ln -s") from my host to my container. To sum up: the .m2 folder of the host must have a symbolic link to the .m2 folder inside my container, something like: $ ln -s containerIp:/root/.m2 myContainerAlias. I've seen the posts below but they didn't help me, since I don't want to copy the files to my local host: Docker - copy file from container to host; Apache in Docker says: Symbolic link not allowed; https://omarabid.com/symlink-to-a-mounted-volume-in-docker/. Edited: I've found another valuable issue here: How to mount a directory in the docker container to the host? Thanks... | Symbolic Link Host to Docker Container |
If the postgres version doesn't matter, try changing the Postgres image to this one; it works for me. Also make sure that you add ports in docker-compose.yml:
postgres:
image: postgres
restart: always
environment:
POSTGRES_USER: prisma
POSTGRES_PASSWORD: prisma
ports:
- "5432: 5432"
volumes:
- postgres:/var/lib/postgresql/data
P.S. just updated the answer for readability | I'm trying to use Postico to connect to a docker PostgreSQL container on my local machine. I've tried connecting to 0.0.0.0, localhost, and 127.0.0.1. Each gives me the following error:
could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
0.0.0.0 gives me a similar, but smaller error:
could not connect to server: Connection refused
Is the server running on host "0.0.0.0" and accepting
TCP/IP connections on port 5432?
Here is my docker-compose file:
version: '3'
services:
prisma:
image: prismagraphql/prisma:1.23
restart: always
ports:
- "4466:4466"
environment:
PRISMA_CONFIG: |
port: 4466
databases:
default:
connector: postgres
host: postgres
port: 5432
user: prisma
password: prisma
migrations: true
postgres:
image: postgres:10.5
restart: always
environment:
POSTGRES_USER: prisma
POSTGRES_PASSWORD: prisma
volumes:
- postgres:/var/lib/postgresql/data
volumes:
postgres:
Solution found thanks to Egor! I forgot to specify ports: - "5432:5432" inside my docker-compose file. Rookie mistake ;) | Cannot connect to postgreSQL docker container via postico |
Update for 2017-05-05: Docker just released 17.05.0-ce with this PR #31236 included. Now the above command creates an image:
$ docker build -t test-no-df -f - . < 00f017a8c2a6
Step 2/2 : CMD echo just a test
---> Running in 45fde3938660
---> d6371335f982
Removing intermediate container 45fde3938660
Successfully built d6371335f982
Successfully tagged test-no-df:latest
The same can be achieved in a single line with:
$ printf 'FROM busybox:latest\nCMD echo just a test' | docker build -t test-no-df -f - .
Original Response
docker build requires the Dockerfile to be an actual file. You can use a different filename with:
docker build -f Dockerfile.temp .
They allow the build context (aka the . or current directory) to be passed by standard input, but attempting to pass a Dockerfile with this syntax will fail:
$ docker build -t test-no-df -f - . <<EOF
FROM busybox:latest
CMD echo just a test
EOF
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/bmitch/data/docker/test/-: no such file or directory | Is there a way to build docker containers completely from the command line? Namely, I need to be able to set things like FROM, RUN and CMD. I'm in a scenario where I have to use docker containers to run everything (git, npm, etc.), and I'd like to build containers on the fly that have prep-work done (such as one with npm install already run). There are lots of different cases, and it'd be overkill to create an actual Dockerfile for each. I'd like instead to be able to just create command-line commands in my script. | docker build purely from command line |
If you have Windows Server 2016, you will be able to launch Windows containers (and you will need a Linux server to launch Linux containers). See these links:
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/manage_docker
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/container_setup
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/containers_welcome
In Windows, your Dockerfile will start with FROM windowsservercore instead of the more usual FROM debian or FROM ubuntu.
See some examples of IIS in (Windows) docker:
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/manage_docker
or a SQL Server in docker:
http://26thcentury.com/2016/01/03/dockerfile-to-create-sql-server-express-windows-container-image/ | I'm a little bit confused about the concept of Docker for Windows.
Can I create a docker container for Windows (on a Windows host like Server 2016) and install a normal Windows application into that container (simple: notepad.exe; advanced: some more complex application programmed in Delphi)?
And can I run this container on every Docker-enabled (Windows) host? Does the container automatically start the application inside? Or can a Windows docker container only provide services or web-based applications like an IIS website? | Run normal Win32 applications in Docker for Windows |
One option is to use subprocess.check_output setting shell=True (thanks slezica!):
s = subprocess.check_output('docker ps', shell=True)
print 'Results of docker ps' + s
If the docker ps command fails (for example, you didn't start your docker-machine) then check_output will throw an exception.
A simple find can then verify whether your container is found / not found:
if s.find('containername') != -1:
print 'found!'
else:
print 'not found.'
I would recommend using the container hash id and not the container name in this case, too, as the name may be duplicated in the image name or other results of the docker ps. | I am using Python to start docker instances. How can I identify if they are running? I can pretty easily use docker ps from the terminal, like:
docker ps | grep myimagename
and if this returns anything, the image is running. If it returns an empty string, the image is not running. However, I cannot understand how to get subprocess.Popen to work with this - it requires a list of arguments, so something like:
p = subprocess.Popen(['docker', 'ps', '|', 'grep', 'myimagename'], stdout=subprocess.PIPE)
print p.stdout
does not work because it tries to take the "docker ps" and make it "docker" and "ps" commands (which docker doesn't support). It doesn't seem I can give it the full command, either, as Popen tries to run the entire first argument as the executable, so this fails:
p = subprocess.Popen('docker ps | grep myimagename', stdout=subprocess.PIPE)
print p.stdout
Is there a way to actually run docker ps from Python? I don't know if trying to use subprocess is the best route or not. It is what I am using to run the docker containers, however, so it seemed to be the right path. How can I determine if a docker instance is running from a Python script? | How to check if a docker instance is running? |
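A slightly more robust variant of the answer's idea, as a sketch that avoids shell=True by letting docker ps do the filtering itself; myimagename is a placeholder container name:
import subprocess

def is_running(name):
    # prints one name per line for containers whose name matches the filter
    out = subprocess.check_output(
        ['docker', 'ps', '--filter', 'name=' + name, '--format', '{{.Names}}'])
    return name in out.decode().split()

print(is_running('myimagename'))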
docker image ls lists the images. docker images also lists the images. docker images ls lists the images with the repository name ls. And as you don't have any images named ls, it is returning an empty list.
Reference: https://docs.docker.com/engine/reference/commandline/images/ | I looked up the docs to understand the difference between the commands docker image (managing images) and docker images (list images). So the second option seems to be a shortcut for docker image ls, which also lists images. What I noticed is, when running docker image ls or docker images I get a list of all my images as expected, but when I accidentally mixed those two up and ran docker images ls, I get an empty table without any entries:
REPOSITORY TAG IMAGE ID CREATED SIZE
I would expect it either to be an invalid command, since it is redundant, or to show the same list of all my images. So what does docker images ls actually show? | What does 'docker images ls' do? |
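To see the filtering behaviour the answer describes, compare against a repository you actually have locally -- for example, assuming an ubuntu image has been pulled:
docker images ubuntu   # only images whose repository is "ubuntu"
docker images ls       # only images whose repository is literally "ls" - normally an empty table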
Aug. 2022: brandt points out in the comments the updated docker-compose documentation.
Note August 2017: with docker-compose version 3, regarding volumes:
The top-level volumes key defines a named volume and references it from each service's volumes list. This replaces volumes_from in earlier versions of the Compose file format. See Use volumes and Volume Plugins for general information on volumes.
Example:
version: "3.2"
services:
web:
image: nginx:alpine
volumes:
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
db:
image: postgres:latest
volumes:
- "/var/run/postgres/postgres.sock:/var/run/postgres/postgres.sock"
- "dbdata:/var/lib/postgresql/data"
volumes:
mydata:
dbdata:This example shows a named volume (mydata) being used by thewebservice, and a bind mount defined for a single service (first path underdbservice volumes).Thedbservice also uses a named volume calleddbdata(second path underdbservice volumes), but defines it using the old string format for mounting a named volume.Named volumes must be listed under the top-level volumes key, as shown.February 2016:Thedocs/compose-file.mdmentions:Mount all of the volumes from another service or container, optionally specifying read-only access(ro) or read-write(rw).(If no access level is specified, then read-write will be used.)volumes_from:
- service_name
- service_name:ro
- container:container_name
- container:container_name:rwFor instance (from this issueorthis one)version: "2"
services:
...
db:
image: mongo:3.0.8
volumes_from:
- dbdata
networks:
- back
links:
- dbdata
dbdata:
image: busybox
volumes:
- /data/db | I'm trying to create a docker-compose.yml file that contains a --volumes-from instruction. Does anyone know the syntax? I have been looking online for some time now, and it appears that the --volumes-from command is only available as a docker command. I hope I'm wrong. | Volumes and docker-compose |
After reading the PR that added this option, I realised that I misunderstood how it was supposed to work. --log-opt labels=a,b,c (same with env) defines keys to include in the GELF event. The values are actually retrieved from docker labels and environment variables respectively. --log-opt labels=foo --label foo=bar will include foo: bar in the event. | Docker's GELF log driver allows env and labels log-opts:
The labels and env options are supported by the gelf logging driver. It adds additional keys on the extra fields, prefixed by an underscore (_) (ref).
I want to use this in my index name for the elasticsearch output but I couldn't figure out how I can access these values or said extra fields. Assuming that I have these options running a container:
docker run -it \
--log-driver gelf \
--log-opt gelf-address=udp://127.0.0.1:12201 \
--log-opt tag=some-app \
--log-opt env=staging \
--log-opt labels=staging \
ubuntu:16.04 /bin/bash -c 'echo Hello World'
I'd like to use the env value that I passed in my logstash config, as such:
input {
gelf { }
}
output {
elasticsearch {
hosts => ["http://127.0.0.1:9200"]
index => "logstash-%{env-value-here}-%{tag}-%{+YYYY.MM.dd}"
}
}
There seems to be another question about env/labels with Graylog: Docker GELF driver env option | Using docker GELF driver env/labels in logstash |
The --net=host option for the docker run command should enable the behavior you are seeking -- note that it is considered insecure, but I really don't see any other means of doing this. See the docker run man page:
--net="bridge"
Set the Network mode for the container
'bridge': create a network stack on the default Docker bridge
'none': no networking
'container:': reuse another container's network stack
'host': use the Docker host network stack. Note: the host mode gives the container full access to local system services such as D-bus
and is therefore considered insecure.
'|': connect to a user-defined network | I have an application which, after making some connections using its default ports, starts opening (listening on) new RANDOM ports to handle just the existing connection and then drops them (video calls). It also exchanges its IP address and ports inside the communication protocol. I was able to solve the IP address issue, but I'm still not able to find a way to dynamically tell IPTABLES on the host machine to open the same ports when they are being opened inside the Docker container. Does anybody have any ideas? | Dynamic listening ports inside Docker container |
Good questions! All your Docker images are stored in a Google Cloud Storage bucket called artifacts.[PROJECT-ID].appspot.com (replace [PROJECT-ID] with your project's ID). To find the total space, run:
gsutil du gs://artifacts.[PROJECT-ID].appspot.com | I have pushed container images using gcloud docker push to the Google Container Registry. Two questions: How do I see how much space all my images use? (I can see individual images but I want a total, in order not to navigate to all of them and make a sum.) | How can I find out how much space is used by my container images from the Google Container Registry |
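To get a single human-readable total instead of one line per object, gsutil's summary flags can be added -- a sketch, with [PROJECT-ID] again standing in for your own project:
gsutil du -s -h gs://artifacts.[PROJECT-ID].appspot.com
# -s prints only the grand total for the bucket, -h prints it in KiB/MiB/GiB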
This answer is based on this comment from the #553 issue discussion on the official nginx-proxy repo. First, you have to create the default_location file with the static location:
location /static/ {
alias /var/www/html/static/;
}
and save it, for example, into an nginx-proxy folder in your project's root directory. Then, you have to add this file to the /etc/nginx/vhost.d folder of the jwilder/nginx-proxy container. You can build a new image based on jwilder/nginx-proxy with this file copied in, or you can mount it using the volumes section. Also, you have to share static files between your webapp and nginx-proxy containers using a shared volume. As a result, your docker-compose.yml file will look something like this:
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./nginx-proxy/default_location:/etc/nginx/vhost.d/default_location
- static:/var/www/html/static
webapp:
build: ./webapp
expose:
- 8080
volumes:
- static:/path/to/webapp/static
environment:
- VIRTUAL_HOST=webapp.docker.localhost
- VIRTUAL_PORT=8080
- VIRTUAL_PROTO=uwsgi
volumes:
static:
Now, the server block in /etc/nginx/conf.d/default.conf will always include the static location:
server {
server_name webapp.docker.localhost;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
include uwsgi_params;
uwsgi_pass uwsgi://webapp.docker.localhost;
include /etc/nginx/vhost.d/default_location;
}
}
which will make Nginx serve static files for you. | I have a web app (Django served by uwsgi) and I am using nginx for proxying requests to specific containers.
Here is a relevant snippet from my default.conf:
upstream web.ubuntu.com {
server 172.18.0.9:8080;
}
server {
server_name web.ubuntu.com;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
include uwsgi_params;
uwsgi_pass uwsgi://web.ubuntu.com;
}
}
Now I want the static files to be served from nginx rather than the uwsgi workers. So basically I want to add something like:
location /static/ {
autoindex on;
alias /staticfiles/;
}
to the automatically generated server block for the container. I believe this should make nginx serve all requests to web.ubuntu.com/static/* from the /staticfiles folder. But since the configuration (default.conf) is generated automatically, I don't know how to add the above location to the server block dynamically :( I think a location block can't be outside a server block, right, and there can be only one server block per server? So I don't know how to add the location block there unless I add it dynamically to default.conf after nginx comes up and then reload it, I guess. I did go through https://github.com/jwilder/nginx-proxy and I only see an example of how to change location settings per-host and default, but nothing about adding a new location altogether. I already posted this in the Q&A for jwilder/nginx-proxy and didn't get a response. Please help me if there is a way to achieve this. | serving static files from jwilder/nginx-proxy |
You mentioned that you are using docker. I think the reason that it doesn't work locally but does in production could be that there is a configuration shift between the two deployments.
#...
nginx:
#...
volumes:
- "./public:/var/www/html/public:ro"
- "./storage/app:/var/www/html/storage/app:ro"
#... | I am using the default Laravel folder structure and the public filesystem:
'public' => [
'driver' => 'local',
'root' => storage_path('app/public'),
'url' => env('APP_URL') . '/storage',
'visibility' => 'public',
],
Everything runs on docker and the complete Laravel folder is mounted into /var/www/html:
#... Laravel PHP-FPM service definition
volumes:
- "./:/var/www/html"
#...
When I run php artisan storage:link and then cd /var/www/html/public and ls -la, I see that the symlink exists:
lrwxr-xr-x 1 root root 32 May 5 11:19 storage -> /var/www/html/storage/app/public
total 512
drwxr-xr-x 4 www-data www-data 128 May 5 11:21 .
drwxr-xr-x 11 www-data www-data 352 Apr 19 14:03 ..
-rw-r--r-- 1 www-data www-data 14 Feb 16 16:58 .gitignore
-rw-r--r-- 1 www-data www-data 519648 May 5 10:58 sample.png
However, when opening the URL /storage/sample.png the server returns 404. The remaining contents of /var/www/html/public work fine, so for example /var/www/html/public/test.png is visible under localhost/test.png. In production, everything works fine though. Any ideas why the symbolic link is not working on the local system while the link is actually correct? | Laravel storage sym:link not working in local environment |
Unless you have enabled "CPU is always allocated", background threads and processes might stop receiving CPU time after all HTTP requests return. This means background threads and processes can fail, connections can time out, etc. I cannot think of any benefits to running background workers with Cloud Run except when setting the --no-cpu-throttling flag. Cloud Run instances that are not processing requests can be terminated.
Signal 6 means abort, which terminates processes. This probably means your container is being terminated due to a lack of requests to process.
Run more workloads on Cloud Run with new CPU allocation controls
What if my application is doing background work outside of request processing? | I have a Python (3.x) webservice deployed in GCP. Every time Cloud Run is shutting down instances, most noticeably after a big load spike, I get many logs like these:
Uncaught signal: 6, pid=6, tid=6, fault_addr=0.
together with
[CRITICAL] WORKER TIMEOUT (pid:6)
They are always signal 6. The service is using FastAPI and Gunicorn running in Docker with this start command:
CMD gunicorn -w 2 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8080 app.__main__:app
The service is deployed using Terraform with 1 GiB of RAM, 2 CPUs, and the timeout set to 2 minutes:
resource "google_cloud_run_service" {
name =
location =
template {
spec {
service_account_name =
timeout_seconds = 120
containers {
image = var.image
env {
name = "GCP_PROJECT"
value = var.project
}
env {
name = "BRANCH_NAME"
value = var.branch
}
resources {
limits = {
cpu = "2000m"
memory = "1Gi"
}
}
}
}
}
autogenerate_revision_name = true
}
I have already tried tweaking the resources and timeout in Cloud Run, and using the --timeout and --preload flags for gunicorn, as that is what people always seem to recommend when googling the problem, but all without success. I also don't know exactly why the workers are timing out. | Lots of "Uncaught signal: 6" errors in Cloud Run |
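For reference, a hedged sketch of switching an existing service to "CPU always allocated"; the service name and region are placeholders, and the Terraform annotation mentioned in the comment is my assumption of the equivalent setting rather than something stated in the answer:
gcloud run services update my-service --region=us-central1 --no-cpu-throttling
# In Terraform this is typically expressed on the revision template metadata as the
# annotation "run.googleapis.com/cpu-throttling" = "false".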
This problem should have been resolved by the JDK Enhancement Proposal JEP 123, Configurable Secure Random-Number Generation. According to the JDK 8 Security Enhancements official Oracle document, the /dev/./urandom workaround is no longer necessary from JDK 8 on:
SHA1PRNG and NativePRNG were fixed to properly respect the SecureRandom seed source properties in the java.security file. (The obscure workaround using file:///dev/urandom and file:/dev/./urandom is no longer required.) | I used to configure -Djava.security.egd=file:/dev/./urandom in my Dockerfile for Spring Boot applications. In https://spring.io/guides/gs/spring-boot-docker/ (or GitHub https://github.com/dsyer/gs-spring-boot-docker) a comment was added that this is not required any more for newer versions:
To reduce Tomcat startup time we added a system property pointing to "/dev/urandom" as a source of entropy. This is not necessary with more recent versions of Spring Boot, if you use the "standard" version of Tomcat (or any other web server).
I am looking for any references for this change in the Tomcat or Spring Boot repos, and which Spring Boot versions are affected. | Deprecated java.security.egd=file:/dev/./urandom for Spring Boot applications? |
The issue has to do with the container user. By default a scheduled task is created with the current user. It's possible the container user is a special one that the Scheduled Task command cannot parse into XML. So you have to pass the user /ru (and if needed the password /rp) to the schtasks command in a Windows container. This works:
FROM microsoft/windowsservercore
RUN schtasks /create /tn "hellotest" /sc daily /tr "echo hello" /ru SYSTEM
It will run the command under the system account. If you are a fan of PowerShell (like me), you can use this:
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN $action = New-ScheduledTaskAction -Execute 'echo ""Hello World""'; \
$trigger = New-ScheduledTaskTrigger -Daily -At '1AM'; \
Register-ScheduledTask -TaskName 'Testman' -User 'SYSTEM' -Action $action -Trigger $trigger -Description 'Container Scheduled task test'; | I am trying to build a container which would include a custom scheduled task.
This is my Dockerfile:
FROM microsoft/windowsservercore
RUN schtasks /create /tn hello /sc daily /st 00:00 /tr "echo hello"
I get the following error:
ERROR: The task XML contains a value which is incorrectly formatted or
out of range. (43,4):Task:
I also get the same error when attaching to a running default Windows core container and running the command. Needless to say, the command works well on a standard Windows 2016 server. It seems like a bug in Windows containers, but I didn't find any known issue about it. I'd appreciate any leads which may help figure this out. | Error trying to create a scheduled task within Windows 2016 Core container |
There is a docker image for it:
docker run mikesplain/telnet | When I run the telnet command in Docker it does not run. Could you please tell me how to install telnet in Docker for Windows? | How to install telnet in Docker for Windows 10 |
For multiple services, it can often be easier to create a docker-compose.yml file that will launch all the services and any networks needed to connect them.
version: '3'
services:
my-db:
image: my-db-image
ports:
- "3306:3306"
networks:
- mynetwork
my-app:
image: my-app-image
ports:
- "8000:80"
networks:
- mynetwork
networks:
mynetwork:
From the project folder, you run docker-compose up or docker-compose up -d to make the services run in the background. In this scenario, the magic of Docker provisions a network with hostname "mynetwork". It should expose default ports to other services on that network. If you want to remap the ports, the pattern is target:source.
I don't know that you need the 'ports' config here, but I'm trying to map your config to the compose file. Also, I'm assuming you need to expose the app on some port; I'm using 8000 as it's a pretty common setup.
What are the parameters here? See the Docker Compose reference. | I have two docker containers: a database, and an app that consumes the database. I run my database container like this:
docker run --name my-db -p 127.0.0.1:3306:3306 my-db-image
And my app container like this:
docker run --name my-app --network host -it my-app-image
This works fine on Linux. I can access the DB from both the host system and the app container. Perfect. However, --network host does not work on Mac and Windows:
The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server. (source: https://docs.docker.com/network/host/)
I can still access the database via 127.0.0.1:3306 from the main host, but I cannot access it from the app container. How can I solve this issue? How can I let the app container connect to the database (and keep access to the DB from the main host using 127.0.0.1:3306)? I've tried using host.docker.internal and gateway.docker.internal but it doesn't work. I've also tried to launch both containers using --network my-network after creating my-network with docker network create my-network, but it doesn't work. I can't figure out how to solve this issue. | Docker alternative to --network host on macOS and Windows |
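To make the answer's two access paths concrete, a small sketch assuming the my-db/my-app service names above and a MySQL client (credentials are placeholders):
# from the host, through the published port:
mysql -h 127.0.0.1 -P 3306 -u root -p
# from inside the my-app container, via Docker's DNS name for the db service:
mysql -h my-db -P 3306 -u root -p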
The first thing is to set up GitLab CI to provide credentials for the private docker registry when needed. To do that there is a specific section in the docs you should follow; to be a complete answer, that is:
Get the docker registry URL, username and password using docker login or some other manner (I had to spend some time to figure out the registry for the
docker hub).
Define the DOCKER_AUTH_CONFIG variable in the GitLab CI variables section; it would look like:
{
"auths": {
"registry.hub.docker.com": {
"auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx" // base 64 encoded username:password
}
}
}
Declare the image/service: image: registry.hub.docker.com/ruwanka/helloworld:0.1 in .gitlab-ci.yml.
That should fulfill the requirement of pulling images. There is another section in the docs that lists the requirement for the runner to allow a list of services. If it doesn't specify any then it should be fine; you may have to tweak it if it doesn't work.
The final yaml looks like below:
image: registry.hub.docker.com/ruwanka/helloworld:0.1
build:
script:
- echo "hello"
# more steps
services:
- registry.hub.docker.com/ruwanka/helloworld:0.1
snippet of GitLab job's logs | I've been trying to pull a private (custom) MySQL image from my Docker Hub repository into the gitlab-ci.yml pipeline as a service. I have added a before_script that tries to log in to Docker Hub with my username and password (CI variables). There's no output in the failed build log suggesting whether the login to Docker Hub was successful or otherwise, but I'm assuming not, because the pulling of my image fails with the following message (edit: or it's never even attempted because GitLab tries to get the services before it runs the before_script?):
repository does not exist or may require 'docker login' (executor_docker.go:168:0s)
I've seen quite a few mentions of the gitlab ci token for docker but I've found no documentation explaining how to facilitate this.I'm sure I'm just overlooking something/not understanding or coming across the appropriate solution in my searches so apologies if I'm just being inexperienced and thanks in advance for any help.My gitlab-ci (the maven variables are because this project's build has a dependency on a private maven repo. The database and redis host variables are injected into my app at runtime so they know which container to point to)image: maven:3.5.0-jdk-8
before_script:
- "docker login -u$DOCKER_USER -p$DOCKER_PASS" #pipeline variables
variables:
MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
DATABASE_HOST: mysql
REDIS_HOST: redis
services:
- name: privaterepo/private-mysql-schema
alias: mysql
- name: redis:latest
alias: redis
stages:
- build
maven-build:
stage: build
script: "mvn $MAVEN_CLI_OPTS package -B"
artifacts:
paths:
- target/*.jar | Can't Access Private MySQL Docker Image From Gitlab CI |
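The "auth" value in DOCKER_AUTH_CONFIG above is just the base64 of username:password (or username:access-token). A sketch with obviously fake credentials:
printf 'myuser:mysecrettoken' | base64
# paste the output into the "auth" field of the DOCKER_AUTH_CONFIG CI/CD variable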
This is totally OK, and you're not the only one to do it :-) Another example is to use the management container to handle authentication for the Docker REST API. It would accept connections on an EXPOSEd TCP port, itself published with -p, and proxy requests to the UNIX socket. | I have experimented with packaging my site-deployment script in a Docker container. The idea is that my services will all be inside containers, and then I use the special management container to manage the other containers. The idea is that my host machine should be as dumb as absolutely possible (currently I use CoreOS, with the only state being a systemd config starting my management container). The management container would be used as a push target for creating new containers based on the source code I send to it (using SSH, I think, at least that is what I use now). The script also manages persistent data (database files, logs and so on) in a separate container and manages back-ups for it, so that I can tear down and rebuild everything without ever touching any data. To accomplish this I forward the Docker Unix socket using the -v option when starting the management container. Is this a good or a bad idea? Can I run into problems by doing this? I did not read anywhere that it is discouraged, but I also did not find a lot of examples of others doing this. | Is it feasible to control Docker from inside a container? |
RUN in a Dockerfile will fail if the exit code of the command is non-zero. If that happens, docker build will also fail with a non-zero exit code. Your npm test script needs to return a non-zero exit code when the tests fail. For reference, you can check the exit code like this:
$ npm test
$ echo $? | Dockerfile
FROM node:carbon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
RUN npm install gulp -g
COPY . .
run gulp build --build
run npm test
EXPOSE 80
CMD [ "npm", "start" ]Tests are ran usingmocha --recursivebuild.shdocker build -t my-app .
echo $?
How can I detect that a mocha test fails, so that npm test is not OK, and neither is docker build? I may have missed something here. | Make docker build fail if tests fail |
You should be able to build your own base image. I'm not aware of any reason why it should not work. Check out the documentation at http://docs.docker.io/en/latest/use/baseimages/ for a starting point, and keep us posted :) | I've got a binary application that won't work on versions of Ubuntu later than Feisty. Is it possible to build a Docker image containing Feisty and run it on my modern system? | Run old Linux release in a Docker container? |
This behaviour is caused by Apache and it is not an issue with Docker. Apache is designed to shut down gracefully when it receives the SIGWINCH signal. When running the container interactively, the SIGWINCH signal is passed from the host to the container, effectively signalling Apache to shut down gracefully. On some hosts the container may exit immediately after it is started. On other hosts the container may stay running until the terminal window is resized.
It is possible to confirm that this is the source of the issue after the container exits by reviewing the Apache log file as follows:
# Run container interactively:
docker run -it
# Get the ID of the container after it exits:
docker ps -a
# Copy the Apache log file from the container to the host:
docker cp :/var/log/apache2/error.log .
# Use any text editor to review the log file:
vim error.log
# The last line in the log file should contain the following:
AH00492: caught SIGWINCH, shutting down gracefully
Sources:
https://bz.apache.org/bugzilla/show_bug.cgi?id=50669
https://bugzilla.redhat.com/show_bug.cgi?id=1212224
https://github.com/docker-library/httpd/issues/9 | Consider the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && \
apt-get install -y apache2 && \
apt-get clean
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]When running the container with the commanddocker run -p 8080:80 , then the container starts and remains running, allowing the default Apache web page to be accessed onhttps://localhost:8080from the host as expected. With this run command however, I am not able to quit the container usingCtrl+C, also as expected, since the container was not launched with the-itoption. Now, if the-itoption is added to the run command, then the container exits immediately after startup. Why is that? Is there an elegant way to have apache run in the foreground while exiting onCtrl+C? | Docker container exits when using -it option |
According to the MongoDB docker documentation, you can use this combination to init your db:
Environment variable MONGO_INITDB_DATABASE
This variable allows you to specify the name of a database to be used
for creation scripts in /docker-entrypoint-initdb.d/*.js (see
Initializing a fresh instance below). MongoDB is fundamentally
designed for "create on first use", so if you do not insert data with
your JavaScript files, then no database is created.And init .js files in /docker-entrypoint-initdb.d/Initializing a fresh instanceWhen a container is started for the first time it will execute files
with extensions .sh and .js that are found in
/docker-entrypoint-initdb.d. Files will be executed in alphabetical
order. .js files will be executed by mongo using the database
specified by the MONGO_INITDB_DATABASE variable, if it is present, or
test otherwise. You may also switch databases within the .js script.
Note that you can skip setting the environment variable and set your database in the js file. See the doc for more explanations. Hope it helps. | Below is the docker-compose.yml I use to dockerize my MongoDB instance:
version: '3.3'
services:
mongo:
image: 'mongo:latest'
ports:
- '27017:27017'
volumes:
- 'data-storage:/data/db'
networks:
mynet:
volumes:
data-storage:
networks:
mynet:
The container is created correctly and it starts without any problem. Is it possible to create a Mongo collection and populate it with some documents the first time the container starts? For instance, I'd like to run a few statements like these:
db.strategyitems.insert( { symbol: "chf", eval_period: 15, buy_booster: 8.0, sell_booster: 5.0, buy_lot: 0.2, sell_lot: 0.2 } )
db.strategyitems.insert( { symbol: "eur", eval_period: 15, buy_booster: 8.0, sell_booster: 5.0, buy_lot: 0.2, sell_lot: 0.2 } )
db.strategyitems.insert( { symbol: "usd", eval_period: 15, buy_booster: 8.0, sell_booster: 5.0, buy_lot: 0.2, sell_lot: 0.2 } )... | How to Initialize a Collection in Dockerized Mongo DB |
First, you have to add both services to the same network in order to connect them. So, the latter compose file should be something like:
version: "3.5"
services:
service-to-connect-from:
build: .
networks:
- my-external-network
networks:
my-external-network:
external: true
Now that both services are on the same network they can find each other using the container's name. The container name is by default the same as the service name, BUT docker compose also prefixes it with the project name, which is by default the directory name where the compose file exists. You can see this if you first start the services with docker-compose up -d and then see how the containers get named by running docker ps. The container name could be, for example, project1_service-to-connect-to. With this name you can connect from another service.
If you like, you can also set the container's name explicitly using the container_name option for the service. When used, compose doesn't prefix the container name anymore. | Here is a compose file with the config of the container that I wish to connect to from an external container (defined in another compose file):
version: '3.5'
services:
service-to-connect-to:
build: .
networks:
- my-external-network
networks:
my-external-network:
external: true
and another compose file that contains the config for the container from which I wish to connect to service-to-connect-to:
version: "3.5"
services:
service-to-connect-from:
build: .
I tried to connect to service-to-connect-to via these domains:
service-to-connect-to
service-to-connect-to.my-external-network
my_external_network.service-to-connect-to
but none of them worked. Where am I wrong? Thanks | How to access container from another compose that connected to external network? |
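One detail the answer leaves implicit: a network declared with external: true is not created by either compose file, so it has to exist before the first docker-compose up -- roughly:
docker network create my-external-network
# then, in each project's directory:
docker-compose up -d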
There's no difference to the paths inside the container when you move your local directory. So you only need to change the local references. The volume mount should come from ./client:
version: "2"
services:
client:
build: ./client
ports:
- "3000:3000"
volumes:
- ./client:/code | I created an app using create-react-app and set up docker compose to set up the container and start the app. When the app is in the root directory, the app starts and the live reload works. But when I move the app to a subdirectory, I can get the app to start, but the live reload does not work. Here's the working setup:
Dockerfile
FROM node:7.7.2
ADD . /code
WORKDIR /code
RUN npm install
EXPOSE 3000
CMD npm start
docker-compose.yml
version: "2"
services:
client:
build: .
ports:
- "3000:3000"
volumes:
- .:/code
Directory structure
app
- node_modules
- docker-compose
- Dockerfile
- package.json
- src
- public
Here's the structure that I would like:
app
- server
- client
/ node_modules
/ Dockerfile
/ package.json
/ src
/ public
- docker-compose.yml
I've tried every variation that I can think of, but the live reload will not work. The first thing I had to do was change the build location:
version: "2"
services:
client:
build: ./client
ports:
- "3000:3000"
volumes:
- .:/code
Then I got an error when trying to run docker-compose up:
npm ERR! enoent ENOENT: no such file or directory, open '/code/package.json'
So I changed the volume to - .:/client/code, rebuilt, ran the command, and the app started, but with no live reload. Is there any way to do this when the app is in a subdirectory? | Docker compose with subdirectory and live reload |
For your issue, you should know there are differences between Azure Web App and Azure Container Instances. In an Azure Web App, you can use only two ports: 80 and 443, and they are exposed by default. You just need to listen on one of them, or both, in the container. But in Azure Container Instances, you can expose all the ports that you use in the container as you wish. So for Web App for Containers, if the two ports are not 80 and 443, then you cannot expose them. | Our web app runs on two ports. Azure Web App exposes port 80 by default, which we have used for part 1, but for part two we need another port. How can we expose it? Our web app runs perfectly locally. Our web app runs perfectly on a container instance on two ports (there is an option in Azure for multiple ports while creating the container instance). Update: I contacted the Azure support team about this and they replied:
"Web App for Containers currently allows you to expose only one port to the outside world. That means that your container can only listen for HTTP requests on a single port. Some apps need multiple ports. For example, you might have one port that is used for requests into the app and a separate port that is used for a dashboard or admin portal. As of today, that configuration isn't possible in Web App for Containers. We will attempt to detect which port to bind to your container, but you can also use the WEBSITES_PORT app setting and configure it with a value for the port you want to bind to your container. So, I'm sorry, but you cannot use 2 ports for the same web app." | How to expose web app for container on two different ports in azure? |
Linux Alpine does not have timezone information natively built in.
You need to update your Dockerfile to get that information, and add the command apk --no-cache add tzdata to the RUN line. E.g., for me I have a line that looks like the following:
RUN apk update && apk add bash && apk --no-cache add tzdata
This fixed the issue for me. | time.LoadLocation works regularly but throws an error on my docker instance! How do I fix it? I ran
t, err := time.LoadLocation("America/New_York")
and it returns an error, even though it works just fine on my computer and on play.golang.org (https://play.golang.org/p/4VHlaku26T3). However, when I run it on my docker instance, I get an error returned:
unknown time zone America/New_York
Why doesn't it detect my requested time zone? | time.LoadLocation works regularly but throws an error on my docker instance! How do I fix it? |
Found DockerCli.exe in the Docker Desktop package. | I'm trying to switch Docker to Windows containers on my Windows Server Core 1903 machine (no desktop). This page says DockerCli should be able to do so:
& $Env:ProgramFiles\Docker\Docker\DockerCli.exe -SwitchDaemon
There is no DockerCli.exe after a fresh Docker installation:
Directory: C:\Program Files\Docker
---- ------------- ------ ----
d----- 30-09-2019 17:57 cli-plugins
-a---- 03-09-2019 21:58 69282136 docker.exe
-a---- 03-09-2019 21:58 76065624 dockerd.exe
-a---- 21-08-2019 00:05 2454016 libeay32.dll
-a---- 11-05-2017 22:32 56978 libwinpthread-1.dll
-a---- 03-09-2019 16:24 6124 licenses.txt
-a---- 30-09-2019 17:57 142 metadata.json
-a---- 21-08-2019 00:05 357888 ssleay32.dll
-a---- 09-06-2016 22:53 87888 vcruntime140.dll
I also tried to install docker-cli or docker-toolbox via Choco but the needed tool is still missing. Any clue where to find it? | Where do I find DockerCli.exe |
After a lot of research, trial and error, I found an answer to my own question. AWS provides an extension to VSTS with build tasks and service endpoints. You need to configure an AWS service endpoint using an account number, application ID, and secret. Then, in your build/release definition:
Build the docker image using the out-of-the-box docker build task, or a shell/bash command (for example: docker build -t your:tag .)
Then add another build step to push the image into the AWS registry; for this you can use the AWS extension task (Amazon Elastic Container Registry Push Image). The Amazon Elastic Container Registry Push Image build task will generate a token and log in the docker client every time you run this build definition. You don't have to worry about updating the username/token every 12 hours; the AWS extension build task will do that for you. | We have a python docker image which needs to be built/published (CI/CD) into the AWS container registry.
At the moment AWS does not support running docker tasks using docker hub private repositories, therefore we have to use ECR instead of docker hub. Our CI/CD pipeline uses docker build and push tasks. Docker authentication is done via a Service Endpoint in the VSTS project.
There are a few steps we should follow to set up a VSTS service endpoint for ECR. This requires executing an AWS CLI command (locally or in the cloud) to get a user and password for the docker client to log in; it looks like:
aws ecr get-login --no-include-email
The above command outputs a docker login command with a username (AWS) and a password (token). The issue with this approach is that the access token will last only for 12 hours. Therefore the CI/CD task requires updating the Service Endpoint every 12 hours, otherwise the build fails with an unauthorised token exception.
The other option we have is to run some shell commands to execute the aws get-login command and run the docker build/push commands in the same context. This option requires installing the aws cli into the build agent (we are using a public linux agent). In addition, the shell command involves awkward task configuration with environment/variables; otherwise we would be exposing the aws application id and secret in the build steps. Could you please advise if you have solved a VSTS CI/CD pipeline using docker with AWS ECR? Thanks, Mahi | Single Docker image push into AWS elastic container registry (ECR) from VSTS build/release definition |
As others have mentioned, /tmp is the only writable directory in any AWS Lambda environment, whether using containers or not. Having said that, you should move your entire library (during the lambda runtime -- during container image build time doesn't work) to that directory -- such that everything remains connected within the library -- and then reference your new library directory in the library path environment for Lambda: LD_LIBRARY_PATH.
Referencing your new library directory in the library path environment for Lambda should be done because Lambda looks at the /opt/ directory by default; and since you just moved your library to /tmp, you should also update LD_LIBRARY_PATH to contain that location. This can be done in the Dockerfile:
# Set the LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH="/opt/my-lib-folder/:$LD_LIBRARY_PATH"
or during Lambda runtime:
os.environ['LD_LIBRARY_PATH'] = '/tmp/my-lib-folder:' + os.environ['LD_LIBRARY_PATH']
def lambda_handler(event, context):
# your code ...
If there are still problems, it may be related to linking problems in your library, or you didn't update your LD_LIBRARY_PATH correctly.
EDIT: As pointed out by @rok, you cannot move your libraries during the container image build time, because the /tmp folder will be erased by AWS automatically. | I am using a docker container image on lambda to run my ML model. My lambda function has an S3 trigger to fetch images. I am trying to run my lambda function but I am getting this error. Can someone please help me? | AWS lambda read-only file system error, using docker image to store ML model |
The documentation mentions:
docker ps
You should see output similar to this:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2b799c529e73 prismagraphql/prisma:1.7 "/bin/sh -c /app/sta…" 17 hours ago Up 7 hours 0.0.0.0:4466->4466/tcp myapp_prisma_1
757dfba212f7 mysql:5.7 "docker-entrypoint.s…" 17 hours ago
(Here shown with mysql, but valid with postgresql too.)
The point is: there should be two containers running, not one. Check docker-compose logs to see why the second one (the database) did not start. | These are the steps I have done:
prisma init
I set postgresql as the database on my local machine (it does not exist). It created 3 files: datamodel.graphql, docker-compose.yml, prisma.yml.
docker-compose up -d
I confirmed it is running successfully. But if I call prisma deploy, it shows me the error:
Could not connect to server at http://localhost:4466. Please check if your server is running.
All I have done is the standard procedure described in the manual and there is no customization in https://www.prisma.io/docs/tutorials/deploy-prisma-servers/local-(docker)-meemaesh3k
And this is docker-compose.yml:
version: '3'
services:
prisma:
image: prismagraphql/prisma:1.11
restart: always
ports:
- "4466:4466"
environment:
PRISMA_CONFIG: |
port: 4466
# uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
# managementApiSecret: my-secret
databases:
default:
connector: postgres
host: localhost
port: '5432'
database: databasename
schema: public
user: postgres
password: root
migrations: true
What am I missing? | Prisma Deploy Docker error "Could not connect to server" |
Add dd($exception->getMessage()); to the exception handler class right before line 37. Run the request and check the response. If that doesn't reveal anything, verify the request is hitting the webserver by checking the access and error logs. Also check system logs using dmesg and similar. Since you mention Docker, if you're using nginx, be sure your site configuration is not being overwritten when running docker-compose up. | First of all, according to stackoverflow, this problem occurs when something is wrong with the permissions of the bootstrap/cache and storage directories. And I tried literally every piece of advice on that with no luck. I was a happy user of Xubuntu 16.04 on my old laptop, and developed one project using docker-compose to set up the development environment. Yesterday I bought a brand new PC, installed Kubuntu 18.04, installed docker and everything I need to work. I cloned the repository, ran composer install, docker-compose up, then php artisan migrate and php artisan storage:link. But when I try to open the website in a browser, I get a 500 error with an empty body response. APP_DEBUG is set to true. 6 hours later I'm here with literally zero results, having tried dozens of solutions found here and on forums. I even did a little experiment: removed the project directory from my old laptop, cloned it from scratch, installed everything required, and it worked. Without any permission problem. And what kills me more: there are no logs inside the docker containers, no logs inside the laravel directory, just nothing. Please help! What's wrong? Maybe it's Kubuntu? Maybe it's 18.04? Maybe it's a newer docker version? P.S. Right now the bootstrap/cache and storage directories are owned by alex:alex and have 775 permissions. Exactly the same as on my laptop. | Laravel 500 error no logs |
It is a known issue. You need to add the following properties to the localstack image in docker-compose:
HOSTNAME_EXTERNAL
hostname: localstack
so the original docker-compose will look like:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack}"
image: localstack/localstack
hostname: localstack
networks:
- anynet
ports:
- "4566:4566"
environment:
- SERVICES=sqs,sns
- DEBUG=1
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=${TMPDIR}
- HOSTNAME_EXTERNAL=localstack
volumes:
- ./data:/tmp/localstack
- "/var/run/docker.sock:/var/run/docker.sock"And it will not work if you put localhost to both of properties!!! You need to choose another name. I put localstack for hostname and HOSTNAME_EXTERNAL and it works for me | My team is trying to get a local setup for our project. We are running the same docker-compose file with imagelocalstack/localstack:0.8.10. We are running the same shell script. Our script looks like this...awslocal sns subscribe \
--topic-arn arn:aws:sns:us-east-1:123456789012:cx-clientcomm-traffic-controller-sent \
--protocol sqs \
--notification-endpoint http://localhost:4576/queue/cx-clientcomm-request-processor-queue
For whatever reason, two of the developers are getting this error:
Could not connect to the endpoint URL: http://localhost:4566
for the SQS. I know this port is used for the latest versions of localstack, but they're running the same image as us. Any ideas?? | localstack trying to connect to localhost:4566 when we explicitly have the url set to 4576 |
Your docker container keeps running only as long as its last command has not finished. You are booting up your tomcat as a daemon. This makes docker stop the running container as soon as tomcat is started. You can change your last line to:
CMD service tomcat start && tail -f /var/lib/tomcat/logs/catalina.out
Or just try using one of the pre-created tomcat containers from Docker Hub: https://registry.hub.docker.com/search?q=tomcat&s=downloads | I am trying to build a docker container running tomcat from a docker file. Please find the Dockerfile content below:
FROM ubuntu:trusty
MAINTAINER karthik.jayaraman
VOLUME ["/tomcat/files"]
ADD /files/tar/apache-tomcat-7.0.47.tar.gz /usr/local/tomcat
ADD /files/scripts/. /tmp/tomcat_temp
RUN ls /tmp/tomcat_temp
RUN cp -a /tmp/tomcat_temp/. /etc/init.d
RUN chmod 755 /etc/init.d/tomcat
RUN chkconfig --add tomcat && chkconfig --level 234 tomcat on
ADD /files/config /usr/local/tomcat/apache-tomcat-7.0.47/conf/
ADD /files/lib /usr/local/tomcat/apache-tomcat-7.0.47/lib/
ENV CATALINA_HOME /usr/local/tomcat/apache-tomcat-7.0.47
ENV PATH $PATH:$CATALINA_HOME/bin
EXPOSE 8080
CMD ["service","tomcat","start"]When i create the image and run a bash in the container, with the command "Service tomcat start", the server is started. I checked the catalina.out file and ensured that its running. But when i try the host IP on which docker is installed and access the port using the port number 8080, i could connect to tomcat page. But when i specify the internal IP address of the container - 172.24.0.7:8080, i could view the tomcat page. I guess the port forwarding is not properly. Can someone tell me the error i am making here. | Docker container running tomcat - could not access the server using the host IP address |
Answering my own question: you can override the entry point in the Dockerfile and run an ls or cat command to see inside, with something like:
ENTRYPOINT ls /etc/fluentd | This question already has answers here: Exploring Docker container's file system (33 answers). Closed 3 years ago. Sometimes running the docker image fails, so ssh'ing into the container is not an option. In those cases, how do we see the content inside the container? There is an existing question but it was mistakenly marked as a duplicate: how to browse docker image without running it? NOTE: To stupid Moderators with stupid EGO, please read the question PROPERLY before making a judgement about closing the problem. Don't think you know better than others. | How to view files inside docker image without running it? (NOTE: THIS QUESTION IS HOW TO READ FILES WITHOUT RUNNING THE CONTAINER) [duplicate] |
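An alternative that avoids starting the image at all, offered as a hedged sketch rather than as the answer's own method: create a container without running it, then list or copy files out of it (myimage and the paths are placeholders):
docker create --name peek myimage          # creates, but never starts, a container
docker export peek | tar -tvf - | less     # list the image's whole filesystem
docker cp peek:/etc/fluentd ./fluentd-copy # or copy one path out for inspection
docker rm peek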
There is some information on this on the docker-registry website. In short, it seems designed to support multiple registries talking to the same data store, so you shouldn't see any problems. If reliability is a real issue for you, it might be wise to look at one of the commercial offerings, e.g. the enterprise Hub or the CoreOS Enterprise Registry. (Although these seem to stress security and access controls rather than HA.) | We are currently running a private registry on one server hosting all our images.
If the server crashes, we basically lose all our images. We would like to find a way to enable high availability for our images.
An easy solution I see would be to have a registry instance per server.
A load balancer would redirect (round robin) the traffic to one of the available registry instances. Registry instances would share the same network data drive (NFS) to store the images. Do you see any problems with this solution?
i.e.: If a user pushes an image to one instance, and another pushes to a different one (load balancer round-robin decision), would it create any lock files on the NFS? Thanks for your feedback | Private docker registry and high availability
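A minimal sketch of the setup the question proposes, using the newer registry:2 image as a stand-in for the registry mentioned in the question; the NFS mount point is an assumption:

# Two registry instances sharing one (NFS-backed) storage directory;
# a round-robin load balancer in front would forward to ports 5001 and 5002
docker run -d --name registry-a -p 5001:5000 \
  -v /mnt/registry-nfs:/var/lib/registry registry:2
docker run -d --name registry-b -p 5002:5000 \
  -v /mnt/registry-nfs:/var/lib/registry registry:2

/var/lib/registry is the default filesystem storage path of the registry:2 image, so both instances read and write the same image data.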
You have typographic quotes in CMD (“ ”); use straight quotes ("). – Dan Lowe | I have a simple Dockerfile:

FROM ubuntu
RUN apt-get update
RUN apt-get install -y apache2
RUN apt-get install -y apache2-utils
RUN apt-get clean
RUN apt-get upgrade -y
EXPOSE 80
CMD [“apache2ctl”, “-D FOREGROUND”]

I build it with the following statement: docker build -t mywebserver . That works quite well, but when I want to execute it with docker run -p 80:80 mywebserver it returns the error message you can see in the headline.
I also tried /usr/sbin/apache2ctl instead of apache2ctl to make sure it was not a PATH issue, but that did not help. So thanks in advance for your help. | "/bin/sh: 1: [“apache2ctl”,: not found" in docker
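For reference, a corrected last line with plain ASCII quotes (splitting -D and FOREGROUND into separate list items is a common way to write it), followed by the same build and run commands from the question:

# Last line of the Dockerfile, retyped with straight quotes:
#   CMD ["apache2ctl", "-D", "FOREGROUND"]
docker build -t mywebserver .
docker run -p 80:80 mywebserver

The curly quotes make the JSON exec form unparseable, which is why the shell was trying to run the literal string [“apache2ctl”, as a command.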
selenium_hub:
image: selenium/hub
ports: ["4444:4444"]
selenium_firefox_node:
image: selenium/node-firefox
links:
- "selenium_hub:hub"Whilek0pernikus' answerdoes work, I just wanted to elaborate on the reason why it was failing.The node containers expect to connect to a hub which is resolvable as simply:hubrather than in their example where it will be resolvable as:selenium_hub | I can start a selenium hub image via:docker run --rm=true -P -p 4444:4444 --name selenium-hub selenium/huband add a firefox worker via:docker run --rm=true --link selenium-hub:hub selenium/node-firefoxGoing onhttp://localhost:4444/grid/consolethen will show the grid just fine.I don't want to use docker each time but have the same setup viadocker-compose.Hence, I thought I could just do this in mydocker-compose.yml:selenium_hub:
image: selenium/hub
ports: ["4444:4444"]
links:
- selenium_firefox_worker
selenium_firefox_worker:
image: selenium/node-firefoxYet after runningdocker-compose upI get the message:selenium_firefox_node_1 | Not linked with a running Hub container
selenium_firefox_node_1 exited with code 1and hence the grid doesn't show any node.I thought that I may be doing the linking in the wrong order, yet even:selenium_hub:
image: selenium/hub
ports: ["4444:4444"]
selenium_firefox_node:
image: selenium/node-firefox
links:
- selenium_hub

yields in the same error. What am I doing wrong? | How to start selenium hub and one linked node via docker-compose instead of using docker?
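A quick way to try the corrected compose file from the answer above (the one that aliases the hub as "hub"); the grep is only a rough check of the grid console output:

docker-compose up -d
# The hub's console should list the firefox node once it has registered
curl -s http://localhost:4444/grid/console | grep -i firefox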
You have a typo and are not mounting your nginx.conf file correctly. You spell it "ngnix" in a couple of places in your volumes section, so the container runs with the default config (hence the default home page). Once you fix that, you will probably hit the error mentioned by @Federkun (nginx won't be able to resolve the 3 domain names you're proxying). You also have your server directive in the wrong place (it needs to be within the http section). This should be the modified version of your file:

events { worker_connections 1024;}
http {
upstream app {
server chat-server:5000;
}
server {
listen 80;
location / {
proxy_pass http://app;
}
}
}

Notice this is better than needing nginx to be aware of the replica count. You can run docker-compose up with --scale chat-server=N and resize at any time by running the same command with a different N, without downtime. | I have a simple flask app running on port 5000 inside the container, and I'm trying to add an nginx load balancer to scale the app (3 instances). Here is my docker-compose file:

version: "3.7"
services:
chat-server:
image: chat-server
build:
context: .
dockerfile: Dockerfile
volumes:
- './chat_history:/src/app/chat_history'
networks:
- "chat_net"
ngnix-server:
image: nginx:1.13
ports:
- "8080:80"
volumes:
- './ngnix.conf:/etc/ngnix/nginx.conf'
networks:
- "chat_net"
depends_on:
- chat-server
networks:
chat_net:

And here is my nginx.conf file:

events { worker_connections 1024;}
http {
upstream app {
server chat-server_1:5000;
server chat-server_2:5000;
server chat-server_3:5000;
}
}
server {
listen 80;
location / {
proxy_pass http://app;
}
}

Both services are on the same chat_net network, but when I hit localhost:8080 in my browser I'm getting the nginx default page. Why is that? What am I missing? | nginx load balancer - Docker compose
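A short usage sketch of the scaling approach mentioned in the answer above; the replica count of 3 is just an example:

# After fixing the "ngnix" typos and mounting ./nginx.conf to /etc/nginx/nginx.conf:
docker-compose up -d --scale chat-server=3

# nginx proxies to the single "chat-server" service name; Docker's embedded DNS
# resolves that name to the running replicas
curl http://localhost:8080/

Because nginx only ever sees the service name, replicas can be added or removed later by re-running the same command with a different number.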
Take a look at your /etc/hosts inside the backend container. You will see 192.0.18.1 dir_db_1 or something like that. The IP will be different, and dir will represent the directory you're in. Therefore, you must change TYPEORM_HOST=localhost to TYPEORM_HOST=dir_db_1. However, I suggest you set static names for your containers:

services:
db:
container_name: project_db
...
backend:
container_name: project_backend

In this case you can always be sure that your container will have a static name; you can set TYPEORM_HOST=project_db and never worry about the name again. | I'm using nestjs for my backend and typeorm as the ORM.
I tried to define my database and my application in a docker-compose file. If I'm running my database as a container and my application from my local machine it works well: my program connects and creates the tables etc. But if I try to connect to the database from within my container, or start the container with docker-compose up, it fails. I always get an ECONNREFUSED error. Where is my mistake?

docker-compose.yml

version: '3.1'
volumes:
dbdata:
services:
db:
image: postgres:10
volumes:
- ./dbData/:/var/lib/postgresql/data
restart: always
environment:
- POSTGRES_PASSWORD=${TYPEORM_PASSWORD}
- POSTGRES_USER=${TYPEORM_USERNAME}
- POSTGRES_DB=${TYPEORM_DATABASE}
ports:
- ${TYPEORM_PORT}:5432
backend:
build: .
ports:
- "3001:3000"
command: npm run start
volumes:
- .:/src

Dockerfile

FROM node:10.5
WORKDIR /home
# Bundle app source
COPY . /home
# Install app dependencies
#RUN npm install -g nodemon
# If you are building your code for production
# RUN npm install --only=production
RUN npm i -g @nestjs/cli
RUN npm install
EXPOSE 3000

.env

# .env
HOST=localhost
PORT=3000
NODE_ENV=development
LOG_LEVEL=debug
TYPEORM_CONNECTION=postgres
TYPEORM_HOST=localhost
TYPEORM_USERNAME=postgres
TYPEORM_PASSWORD=postgres
TYPEORM_DATABASE=mariokart
TYPEORM_PORT=5432
TYPEORM_SYNCHRONIZE=true
TYPEORM_DROP_SCHEMA=true
TYPEORM_LOGGING=all
TYPEORM_ENTITIES=src/database/entity/*.ts
TYPEORM_MIGRATIONS=src/database/migrations/**/*.ts
TYPEORM_SUBSCRIBERS=src/database/subscribers/**/*.tsI tried to use links but it don't work in the container. | Docker Compose cannot connect to database |
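For the compose question above, the fix from the answer boils down to pointing the app at the database service's container name instead of localhost. A sketch, assuming the container_name values suggested in the answer:

# Inside the backend container, "localhost" is the container itself,
# so point TypeORM at the database container's name instead
sed -i 's/^TYPEORM_HOST=localhost/TYPEORM_HOST=project_db/' .env

docker-compose up -d db backend
# Quick check that the backend container can resolve the database host
docker-compose exec backend getent hosts project_db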
AWS ALB vs AWS Network LB depends on who you want to handle SSL. If you have a wildcard certificate and all your services are subdomains of the same domain, an ALB may be a good choice. If you want to use Let's Encrypt with traefik, a Network LB may be a better choice. In both cases your setup will look something like this:

[Internet]
|
[LB]
|
[Target group]
|
[Traefik]
| |
[service1] [service2]

In both cases, the easiest way to get this is to make the traefik ECS service auto-register to the target group. This can be done at service creation (network configuration section) and cannot be done later. Link to documentation. Screen of configuration console. | In short: I've managed to run Traefik locally and on AWS ECS, but now I'm wondering how I should set up some sort of load balancing to make my two services with random IPs available to the public. My current setup on ECS:

[Internet]
|
[Load balancer on port 443 + ALB Security group on 443]
|
[Target group on port 443 + Security group from *any* port]
|
[cluster]
|
[service1 container ports "0:5000"]

While this works, I'd now like to add another container, e.g. service2, also with random ports, e.g. 0:8000. And that's why I need something like Traefik. What I did: here's the Toml file:

[api]
address = ":8080"
[ecs]
clusters = ["my-cluster"]
watch = true
domain = "mydomain.com"
region = "eu-central-1"
accessKeyID = "AKIA..."
secretAccessKey = "..."

Also I've added the host entries in /etc/hosts:

127.0.0.1 service1.mydomain.com
127.0.0.1 service2.mydomain.com

And the relevant labels on the containers, and I can curl service1.mydomain.com/status and get a 200. Now my last bit is just the following question: how should I publish all this to the internet? AWS ALB? AWS Network LB? Network bridge/host/other? | How should I setup Traefik on ECS?
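A hedged sketch of the registration step the answer describes, attaching the traefik ECS service to the load balancer's target group at service-creation time (the answer notes it cannot be added to an existing service afterwards). Every name and ARN below is a placeholder, not something from the original post:

aws ecs create-service \
  --cluster my-cluster \
  --service-name traefik \
  --task-definition traefik:1 \
  --desired-count 1 \
  --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/traefik/abc123,containerName=traefik,containerPort=80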
Trick with bash concatenation ability:

shell: "docker inspect --format '{''{ .NetworkSettings.IPAddress }''}' consul"

This will stick together {+{ .NetworkSettings.IPAddress }+} into a single string in bash. Update: the root cause of this behaviour is described here. | I have the following playbook task:

- name: Docker | Consul | Get ip
shell: "docker inspect --format {% raw %}'{{ .NetworkSettings.IPAddress }}' {% endraw %} consul"
register: consul_ipAfter run ansible return follow error:fatal: [192.168.122.41]: FAILED! => {"failed": true, "msg": "{u'cmd':
u\"docker inspect --format '{{ .NetworkSettings.IPAddress }}'
consul\", u'end': u'2017-01-18 16:52:18.786469',
u'stdout': u'172.17.0.2', u'changed': True, u'start': u'2017-01-18
16:52:18.773819', u'delta': u'0:00:00.012650', u'stderr': u'', u'rc':
0, 'stdout_lines': [u'172.17.0.2'], u'warnings': []}: template error
while templating string: unexpected '.'. String: docker inspect
--format '{{ .NetworkSettings.IPAddress }}' consul"}Ansible version:ansible 2.2.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides

What is the right way to get the IP address of the container? | Get IP address of Docker with Ansible
This is due to unresolvedhostnamefrom Docker host. In Docker, the instancesmongo1,mongo2, andmongo3are reachable by those names. However, these names are not reachable from the Docker host. This is evident by this line:Addr: mongo2:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: dial tcp: lookup mongo2: no such hostMongoDB driver will attemptserver discoveryfrom given a replica set member(s); it will find all of other nodes within the replica set (viars.conf). The problem here is the replica set is set with namemongo, the driver (run in Docker host) would not be able to resolve these names. You can confirm this by trying to pingmongo1from Docker host.You can either try running the application from another Docker instance sharing the same Docker network as the replica set. Or, modify the Docker networking as such to allow resolvable hostnames.UPDATE:Regarding your comment on why usingmongoshell, orPyMongoworks.This is due to the difference in connection mode. When specifying a single node, i.e.mongodb://node1:27017in shell or PyMongo, server discovery are not being made. Instead it will attempt to connect to that single node (not as part as a replica set). The catch is that you need to connect to the primary node of the replica set to write (you have to know which one). If you would like to connect as a replica set, you have to define the replica set name.In contrast to themongo-go-driver, by default it would perform server discovery and attempt to connect as a replica set. If you would like to connect as a single node, then you need to specifyconnect=directin the connection URI. See alsoExample Connect Direct | I have a MongoDB replica set up and running using Docker and I can access through console, or Robo3T client, in order to run my queries.These are the containers:$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
efe6ae03323d mongo "docker-entrypoint.s…" 10 minutes ago Up 10 minutes 0.0.0.0:30001->27017/tcp mongo1
57d2701c8a43 mongo "docker-entrypoint.s…" 10 minutes ago Up 10 minutes 0.0.0.0:30002->27017/tcp mongo2
7553966b9ff5 mongo "docker-entrypoint.s…" 10 minutes ago Up 10 minutes 0.0.0.0:30003->27017/tcp mongo3The problem is an error when I try to make a ping using themongo-go-driver(I tried with version 1.0.0 and 1.0.2)// Create MongoDB client
client, err := mongo.NewClient(options.Client().ApplyURI("mongodb://localhost:30001"))
if err != nil {
t.Fatalf("Exit error: %v", err)
}
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
err = client.Connect(ctx)
if err != nil {
t.Fatalf("Exit error: %v", err)
}
ctx, cancel = context.WithTimeout(context.Background(), time.Minute)
defer cancel()
// Ping
err = client.Ping(ctx, readpref.Primary())
if err != nil {
t.Fatalf("Exit error Ping: %v", err)
}the error is the following:Exit error Ping: server selection error: server selection timeout
current topology: Type: ReplicaSetNoPrimary
Servers:
Addr: mongo2:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: dial tcp: lookup mongo2: no such host
Addr: mongo3:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: dial tcp: lookup mongo3: no such host
Addr: mongo1:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: dial tcp: lookup mongo1: no such host | Docker and mongo-go-driver "server selection error" |
You can install from PPA and use it as usual:FROM nvidia/cuda
RUN apt-get update && apt-get install -y --no-install-recommends software-properties-common \
libsm6 libxext6 libxrender-dev curl \
&& rm -rf /var/lib/apt/lists/*
RUN echo "**** Installing Python ****" && \
add-apt-repository ppa:deadsnakes/ppa && \
apt-get install -y build-essential python3.5 python3.5-dev python3-pip && \
curl -O https://bootstrap.pypa.io/get-pip.py && \
python3.5 get-pip.py && \
rm -rf /var/lib/apt/lists/*
COPY requirements.txt requirements.txt
RUN pip3.5 install -r requirements.txt
CMD ["python3.5", "app.py"] | I want to create a docker image with specifically python 3.5 on a specific base image which is the nvidia/cuda (9.0-base image) the latter has no python environment.The reason I need specific versions is to support running cuda10.0 python3.5 and a gcc version<7 to compile the driver all together on the same boxWhen I try and build the docker environments (see below) I always end up with the system update files which load python3.6The first version I run (below) runs a system update dependencies which installs python 3.6 I have tried many variants to avoid this but always end up 3.6 in the final image.Any suggestions for getting this running with python3.5 are welcomeThanksFROM nvidia/cuda
RUN apt-get update && apt-get install -y libsm6 libxext6 libxrender-dev python3.5 python3-pip
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
ENTRYPOINT [ "python3" ]
CMD [ "app.py" ]Another variant (below) I have tried is with virtualenv and here again I can't seem to force a python 3.5 environmentFROM nvidia/cuda
RUN apt-get update && apt-get install -y --no-install-recommends libsm6 libxext6 libxrender-dev python3.5 python3-pip python3-virtualenv
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m virtualenv --python=/usr/bin/python3 $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
ENTRYPOINT [ "python3" ]
CMD [ "app.py" ] | Setting specific python version in docker file with specfic non-python base image |
We use Postgres and Docker where I work and we ended up doing the following:Copy the Dockerfile from the official Postgres repo so you can make your own image.Modify docker-entrypoint.sh (https://github.com/docker-library/postgres/blob/8f80834e934b7deaccabb7bf81876190d72800f8/9.4/docker-entrypoint.sh), which is what is called when the container starts.At the top of docker-entrypoint.sh, I put in the following:# Get the schema
url=$(curl -s -u ${GIT_USER}:${GIT_PASSWORD} "${SQL_SCRIPT_URL}" | python -c 'import sys, json; print json.load(sys.stdin)["download_url"]')
curl ${url} > db.sh
chmod +x db.sh
cp db.sh ./docker-entrypoint-initdb.dThis basically downloads a shell script from Github that initializes the schema for the database. We do this to manage versions of the schema, so when you start your container you can tell it which schema to use via an ENV variable.Some notes about the code:We need to refactor to pull stuff from Github using a private key instead of user credentials.The ./docker-entrypoint-initdb.d directory is a place where docker-entrypoint.sh will look to run init scripts for the database. You can move files to that location however you want. Do this if downloading from Github is not applicable. | I'm developing an open source application consisting of a Java web application and a postgresql database. Ideally it would be deployable similar to the process detailed in theshipyard quickstart:run a data-only containerrun the DB containerrun the application containerIs there a recommended time to set up the database schema? I was thinking on making the Dockerfile for the database image create the schema when it is built but postgres isn't running at this time obviously. | "correct" way to manage database schemas in docker |
I've had the same problem trying to work with camera interface from docker container. With suggestions in this thread I've managed to get it working with the below dockerfile.FROM node:12.12.0-buster-slim
EXPOSE 3000
ENV PATH="$PATH:/opt/vc/bin"
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf
COPY "node_modules" "/usr/src/app/node_modules"
COPY "dist" "/usr/src/app"
CMD ldconfig && node /usr/src/app/app.jsThere are 3 main points here:Add/opt/vc/binto your PATH so that you can callraspistillwithout referencing the full path.Add/opt/vc/libto your config file so thatraspistillcan find all dependencies it needs.Reload config file (ldconfig) during container's runtime rather than build-time.The last point is the main reason why Anton's solution didn't work.ldconfigneeds to be executed in a running container so either use similar approach to mine or go withentrypoint.shfile instead. | I've been trying out my Node.js app on a Raspberry Pi 3 Model B using Docker and it runs without any troubles.The problem comes when an app dependency (raspicam) requiresraspistillto make use of the camera to take a photo. Raspberry is running Debian Stretch and the pi camera is configured and tested. But I cant access it when running the app via Docker.Basically, I build the image with Docker Desktop on a win10 64bit machine using this Dockerfile:FROM arm32v7/node:10.15.1-stretch
ENV PATH /opt/vc/bin:/opt/vc/lib:$PATH
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
# Create the app directory
ENV APP_DIR /home/app
RUN mkdir $APP_DIR
WORKDIR $APP_DIR
# Copy both package.json and package-lock.json
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
EXPOSE 3000
CMD ["npm", "start"]Then in the Raspberry, if I pull the image and run it with:docker run --privileged --device=/dev/vchiq -p 3000:3000 [my/image:latest]I get:Error: spawn /opt/vc/bin/raspistill ENOENTAfter some researching, I also tried running with:docker run --privileged -v=/opt/vc/bin:/opt/vc/bin --device=/dev/vchiq -p 3000:3000 [my/image:latest]And with that command, I get:stderr: /opt/vc/bin/raspistill: error while loading shared libraries: libmmal_core.so: cannot open shared object file: No such file or directoryCan someone share some thoughts on what changes do I have to make to the Dockerfile so that I'm able to access the pi camera from inside the Docker container? Thanks in advance. | Access raspistill / pi camera inside a Docker container |
As shownin this issue, this represents abuild-arg(ie the number of args used by to build the image)A good example ishttp_proxyor source versions for pulling intermediate files.TheARGinstruction letsDockerfileauthors define values that users can set at build-time using the--build-argflag:$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile.Here's an example of build args going through 1.10+:[
"|4 a=1 b=2 c=3 d=4 /bin/sh -c echo $a $b $c $d"
] | Given thisDockerfile:FROM debian:8.3
ARG TEST=123
RUN echo $TESTWhat does the|1represent in the Docker history?$ docker history 2feee0d8320f
IMAGE CREATED CREATED BY SIZE COMMENT
2feee0d8320f About a minute ago |1 TEST=123 /bin/sh -c echo $TEST 0 B
ac4872d0de0b About a minute ago /bin/sh -c #(nop) ARG TEST=123 0 B
f50f9524513f 9 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0 B
9 months ago /bin/sh -c #(nop) ADD file:b5391cb13172fb513d 125.1 MB | What does |1 mean in Docker history |
Rather than playing around with a default output, just print exactly what you are looking for from start. Most docker sub-commands accept a--formatoption which will take ago templateexpression to specify what you exactly want.In your case, I believe the following command should give what you are looking for:$ docker container ls -la --format "{{.Names}}"
recursing_liskovOf course, you can add more columns if you wish, in whatever order best suits your needs. You can easily get a list of all keys available with something like:$ docker container ls -la --format "{{json .}}" | jq
{
"Command": "\"tail -f /dev/null\"",
"CreatedAt": "2022-01-15 23:43:54 +0100 CET",
"ID": "a67f0c2b1769",
"Image": "busybox",
"Labels": "",
"LocalVolumes": "0",
"Mounts": "",
"Names": "recursing_liskov",
"Networks": "bridge",
"Ports": "",
"RunningFor": "20 minutes ago",
"Size": "0B (virtual 1.24MB)",
"State": "running",
"Status": "Up 20 minutes"
}Here is an example just to illustrate using some of the fields:$ docker container ls -la --format "container {{.Names}} is in state {{.State}} and has ID {{.ID}}"
container recursing_liskov is in state running and has ID a67f0c2b1769Some random references out of several I used:https://devcoops.com/filter-output-of-docker-image-ls/https://windsock.io/customising-docker-cli-output/ | When issuing thedocker container ls -lacommand, the output looks like this:CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a67f0c2b1769 busybox "tail -f /dev/null" 26 seconds ago Up 25 seconds recursing_liskovI'd like to get only the container's name present in theNamescolumn.My idea was to use bash to reverse the columns in the command output so as to get theNamescolumn in first position and cut the name of container.This is what I tried:sudo docker container ls -la | rev |tail -n +2 | tr -s ' ' | cut -d ' ' -f 1However this does not give the expected result and I get the following error:rev: stdin: Invalid or incomplete multibyte or wide characterI'm stuck as I don't know how to handle this error. Can it be fixed to obtain the result I expect ? Or is there any other way to obtain my result ? | Print only `Names` column from `docker-container ls -la` output |
Docker does cache the layer built from theADD(preferablyCOPY) instruction, provided the sources haven't changed. You could make use of that and get your dependencies cached by copying theCargo.tomlin first, and doing a build.But unfortunately you need something to build, so you could do it with a single source file and a dummylibtarget in your manifest:[lib]
name = "dummy"
path = "dummy.rs"In your Dockerfile build the dummy separately:COPY Cargo.toml /app/Cargo.toml
COPY dummy.rs /app/dummy.rs
RUN cargo build --libThe output of this layer will be cached, with all the dependencies installed, and then you can go on to add the rest of your code (in the sameDockerfile):COPY /src/ app/src/
RUN cargo buildThedummystuff is ugly, but it means your normal build will be quick, as it comes from the cached layer, and when you change dependencies in yourCargo.tomlthen Docker will pick it up and build a new layer with updated dependencies. | I am developing an API with Rust, and am managing the environments, including the external database with Docker. Every time I make a change to the API code, cargo rebuilds, and since Docker doesn't cache anything to do with theADDstatement to copy the Rust directory over to the container, it re-downloads all the packages, which is a fairly lengthy process since I'm using Nickel, which seems to have a boatload of dependencies.Is there a way to bring those dependencies in prior to runningcargo build? At least that way if the dependencies change it will only install what's required, similar to Cargo compiling locally.Here's the Dockerfile I currently use:FROM ubuntu:xenial
RUN apt-get update && apt-get install curl build-essential ca-certificates file xutils-dev nmap -y
RUN mkdir /rust
WORKDIR /rust
RUN curl https://sh.rustup.rs -s >> rustup.sh
RUN chmod 755 /rust/rustup.sh
RUN ./rustup.sh -y
ENV PATH=/root/.cargo/bin:$PATH SSL_VERSION=1.0.2h
RUN rustup default 1.11.0
RUN curl https://www.openssl.org/source/openssl-$SSL_VERSION.tar.gz -O && \
tar -xzf openssl-$SSL_VERSION.tar.gz && \
cd openssl-$SSL_VERSION && ./config && make depend && make install && \
cd .. && rm -rf openssl-$SSL_VERSION*
ENV OPENSSL_LIB_DIR=/usr/local/ssl/lib \
OPENSSL_INCLUDE_DIR=/usr/local/ssl/include \
OPENSSL_STATIC=1
RUN mkdir /app
WORKDIR /app
ADD . /app/
RUN cargo build
EXPOSE 20000
CMD ./target/debug/apiAnd here's my Cargo.toml[profile.dev]
debug = true
[package]
name = "api"
version = "0.0.1"
authors = ["Vignesh Sankaran <[email protected]>"]
[dependencies]
nickel = "= 0.8.1"
mongodb = "= 0.1.6"
bson = "= 0.3.0"
uuid = { version = "= 0.3.1", features = ["v4"] } | Optimising cargo build times in Docker |
This works for me:docker run -t -i ubuntu //bin/bashThe double // avoids the conversion[1][1]http://www.mingw.org/wiki/Posix_path_conversion | I'm working through "The Docker Book", am on chapter 3, installing and running an Ubuntu container. I'm on Windows 7.1, using Boot2Docker.Here's what happens when I try to run it (this is the second attempt, so it already has a local copy of the image):$ docker run -i -t ubuntu /bin/bash
exec: "C:/Program Files (x86)/Git/bin/bash": stat C:/Program Files (x86)/Git/bin/bash: no such file or directory
FATA[0000] Error response from daemon: Cannot start container 5e985b0b101bb9584ea3e40355089a54d1fba29655d5a1e0900c9b32c4f7e4c4: [8] System error: exec: "C:/Program Files (x86)/Git/bin/bash": stat C:/Program Files (x86)/Git/bin/bash: no such file or directoryStatus:$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5e985b0b101b ubuntu:latest "C:/Program Files (x 21 minutes ago loving_mayerIt's complaining about lack of C:/Program Files (x86)/Git/bin/bash, but I certainly have that on my machine:$ ls -l "c:/Program Files (x86)/Git/bin/bash"
-rwxr-xr-x 1 neilw Administ 598016 May 4 09:27 c:/Program Files (x86)/Git/bin/bashAny thoughts? | Boot2docker/Windows: can't run bash on Ubuntu container |
You've asked for adeadcontainer.TL;DR: This is how to create a dead containerDon't do this at home:ID=$(docker run --name dead-experiment -d -t alpine sh)
docker kill dead-experiment
test "$ID" != "" && chattr +i -R /var/lib/docker/containers/$ID
docker rm -f dead-experimentAnd voila, docker could not delete the container root directory, so it falls to astatus=dead:docker ps -a -f status=dead
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
616c2e79b75a alpine "sh" 6 minutes ago Dead dead-experimentExplanationI've inspected thesource code of docker and saw this state transition:container.SetDead()
// (...)
if err := system.EnsureRemoveAll(container.Root); err != nil {
return errors.Wrapf(err, "unable to remove filesystem for %s", container.ID)
}
// (...)
container.SetRemoved()So, if docker cannot remove the container root directory, it remain as dead and does not continue to the Removed state. So I've forced the file permissions to not permit root remove files (chattr -i).PS: to revert the directory permissions do this:chattr -i -R /var/lib/docker/containers/$ID | I need to get some containers to dead state, as I want to check if a script of mine is working. Any advice is welcome. Thank you. | How to get a docker container to the state: dead for debugging? |
Short answer: It would be fairer to compare the differences betweengolang:alpineandalpine.At the time of writing, thegolangimage is built off of Debian, a different distribution than Alpine.I'll quote the documentation from Docker Hub:golang:This is the defacto image. If you are unsure about
what your needs are, you probably want to use this one. It is designed
to be used both as a throw away container (mount your source code and
start the container to start your app), as well as the base to build
other images off of.andgolang:alpineThis image is based on the popular Alpine Linux project,
available in the alpine official image. Alpine Linux is much smaller
than most distribution base images (~5MB), and thus leads to much
slimmer images in general.This variant is highly recommended when final image size being as
small as possible is desired. The main caveat to note is that it does
use musl libc instead of glibc and friends, so certain software might
run into issues depending on the depth of their libc requirements.
However, most software doesn't have an issue with this, so this
variant is usually a very safe choice. See this Hacker News comment
thread for more discussion of the issues that might arise and some
pro/con comparisons of using Alpine-based images.In summary, images built off of Alpine will tend to be smaller than the Debian ones. But, they won't contain various system tools that you may find useful for development and debugging. A common compromise is to build your binaries with thegolangflavor and deploy to production with eithergolang:alpine,alpine, or as mentioned in a comment above,scratch. | Size of the imagesgolangandalpinevary by around300Mb.What are the advantages of usinggolangimage instead of plainalpine? | Choosing Golang docker base image |
I found the alternate solution. The reason that it shows binary not compatible is because I have one nginx pre-installed under the target route, and it is not compatible with the header-more module I am using. That means I cannot simply install the third party library from Alpine package.So I prepare a clean Alpine OS, and follow theGitHub repositoryto build Nginx from the source with additional feature. The path of build result is the prefix path you specified. | When I usecurl --headto test my website, it returns the server information.I followedthis tutorialto hide the nginx server header.
But when I run the commandyum install nginx-module-security-headers, it returnsyum: not found.I also triedapk add nginx-module-security-headers, and it shows that the package is missing.I have usednginx:1.17.6-alpineas my base docker image. Does anyone know how to hide the server from header under this Alpine? | Edit / hide Nginx Server header under Alpine Linux |
Okay, just adding a dummy service port to the labels workslabels:
- traefik.enable=true
- traefik.http.services.justAdummyService.loadbalancer.server.port=1337
- traefik.http.routers.traefikRouter.rule=Host(`127.0.0.11`)
- traefik.http.routers.traefikRouter.service=api@internal
- traefik.http.routers.traefikRouter.entrypoints=httpI was struggling with traefik for more than 24h now... This can't be the solution, right?
Guess I have to report this as an error. Can someone confirm that this is not how it should work? | I'm trying to use traefik 2.0 (!) in docker swarm mode. This is my stack:version: '3.7'
services:
traefik:
image: traefik:latest
ports:
- 80:80
- 443:443
deploy:
replicas: 1
placement:
constraints:
- node.role == manager
preferences:
- spread: node.id
labels:
- traefik.enable=true
- traefik.http.routers.traefikRouter.rule=Host(`127.0.0.11`)
- traefik.http.routers.traefikRouter.service=api@internal
- traefik.http.routers.traefikRouter.entrypoints=http
volumes:
- /var/run/docker.sock:/var/run/docker.sock
command: >
--providers.docker
--providers.docker.exposedbydefault=false
--providers.docker.swarmmode=true
--entryPoints.http.address=":80"
--entryPoints.https.address=":443"
--accesslog
--log.level=DEBUG
--api=true
--api.dashboard=true
networks:
- traefik-public
whoami:
image: containous/whoami
deploy:
replicas: 2
labels:
- traefik.enable=true
- traefik.http.services.whoami.loadbalancer.server.port=80
- traefik.http.routers.whoami.rule=Host(`127.0.0.12`)
- traefik.http.routers.whoami.service=whoami
- traefik.http.routers.whoami.entrypoints=http
networks:
- traefik-public
# Run on Host: docker network create --driver=overlay traefik-public
networks:
traefik-public:
external: trueAccess tohttp://127.0.0.12/works, I see the whoami page.
Access tohttp://127.0.0.11/orhttp://127.0.0.11/dashboard/should show traefiks internal dashboard, if I readthe docsright. But I get traefiks 404.Thedocker service logshows one error:level=error msg="port is missing" container=traefik-traefik-z8kz9w91yw7pm6tp5os5vxrnv providerName=dockerWhat's the Problem? I suspect it's missing a port for the serviceapi@internal... But that's its internal service - I can't configure that?!Any ideas? Thx | Traefik 2.0 "port is missing" for internal dashboard |
A docker container is "chroot on steroids". Anyway, the kernel is the same between all docker containers and the host system. So all the kernel calls share the same kernel.So we can do on our host (in any folder, as root):mknod -m 444 urandom_host c 1 9and in some linux chroot:wget | tar -x
chroot
mknod -m 444 urandom_in_chroot c 1 9and we can dodocker run -ti --rm alpine sh -l
mknod -m 444 urandom_in_docker c 1 9Then all callsopen(2)andread(2)by any program to anyurandom_in_dockerandurandom_in_chrootandurandom_hostwill go into the same kernel into the same kernelurandommodule binded to special character file with major number 1 and minor number 9, which is according tothis listthe random number generator.As for virtual machine, the kernel is different (if there is any kernel at all). So all the calls to any block/special character files are translated by different kernel (also maybe using different, virtualized architecture and different set of instructions). From the host the virtualmachine is visible as a single process (implementation depended) which may/or may not call the hosts /dev/urandom if the virtualized system/program calls /dev/urandom. In virtualization anything can happen, and that is dependent on particular implementation.So, the requests to /dev/urandom in docker are handled the same way as on the host machine. As how urandom is handled in kernel, maybehereis a good start.If you require entropy, be sure to use and install haveged. | fordocumentation purposeson our project I am looking for the following information:We are using Docker to deploy various applications which require entropy for SSL/TLS and other stuff. These applications may use /dev/random, /dev/random, getrandom(2), etc.. I would like to know how these requests are handled in Docker containers as opposed to one virtual machine running all services (and accessing one shared entropy source).So far I have (cursorily) looked into libcontainer and runC. Unfortunately I have not found any answers to my question, although I do have a gut feeling that these requests are passed through to the equivalent call on the host.Can you lead me to any documentation supporting this claim, or did I get it wrong and these requests are actually handled differently? | How are requests to /dev/(u)random etc. handled in Docker? |
Can we use this ?import os
os.system('docker run -it --rm ubuntu bash') | I am working with a Docker image which I launch in interactive mode like so:docker run -it --rm ubuntu bashThe actual image I work with has many complicated parameters, which is why I wrote a script to construct the fulldocker runcommand and launch it for me. As the logic grew more complicated, I want to migrate the script from bash to Python.Usingdocker-py, I prepared everything to run the image. Seems like usingdocker.containers.runfor interactive shells isnot supported, however. Usingsubprocessinstead seems logical, so I tried the following:import subprocess
subprocess.Popen(['docker', 'run', '-it', '--rm', 'ubuntu', 'bash'])But this gives me:$ python3 docker_run_test.py
$ unable to setup input stream: unable to set IO streams as raw terminal: input/output error
$Note that the error message appears in a different shell prompt from the python command.How do I makepython3 docker_run_test.pyto do equivalent of runningdocker run -it --rm ubuntu bash? | How do I use Python to launch an interactive Docker container? |
You cannot copy files that are outside the build context when building a docker image. The build context is the path you specify to the docker build command. In the case of the instructionC:\temp\docker_posh> docker build --rm -f Dockerfile -t docker_posh:latest .The.specifies that the build context isC:\temp\docker_posh. ThusC:/temp/somedirectorycannot be accessed. You can either move the Dockerfile to temp, or run the same build command
underC:\temp. But remember to fix the Dockerfile instructions to make the path relative to the build context. | I want to build a Docker image including my custom Powershell modules. Therefore I use Microsoftsmicrosoft/powershell:latestimage, from where I wanted to create my own image, that includes my psm1 files.For simple testing I've the following docker file:FROM microsoft/powershell:latest
RUN mkdir -p /tmp/powershell
COPY C:/temp/somedirectory /tmp/powershellI want to copy the files included in C:\temp\somedirectory to the docker linux container. When building the image I get the following error:C:\temp\docker_posh> docker build --rm -f Dockerfile -t docker_posh:latest .Sending build context to Docker daemon 2.048kBStep 1/3 : FROM microsoft/powershell:latest
---> 9654a0b66645Step 2/3 : RUN mkdir -p /tmp/powershell
---> Using cache
---> 799972c0dde5Step 3/3 : COPY C:/temp/somedirectory /tmp/powershell
COPY failed: stat /var/lib/docker/tmp/docker-builder566832559/C:/temp/somedirectory: no such file or directoryOf course I know that Docker says that I can't find the file/directory. Therefore I also triedC:/temp/somedirectory/.,C:/temp/somedirectory/*, andC:\\temp\\somedirectory\\as alternativ source paths in the Dockerfile -> Result: none of them worked.docker version
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:05:22 2017
OS/Arch: windows/amd64
Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:12:29 2017
OS/Arch: linux/amd64
Experimental: trueHow can I copy a folder including subfolder and files via a Dockerfile? | Dockerfile: Copy directory from Windows host to docker container |
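A sketch of the fix the answer describes for the Windows COPY question above: make C:\temp the build context so the directory can be copied with a path relative to it. The Dockerfile location below is assumed:

# The COPY instruction then uses a path relative to the context:
#   COPY somedirectory /tmp/powershell
docker build --rm -f "C:\temp\docker_posh\Dockerfile" -t docker_posh:latest "C:\temp"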
This is not an answer to this specific question. It is a possible answer to "port mapping doesn't work"I've been caught by this twice.The image name must come last when creating a container from the command lineThis syntax:docker run --name MyContainer MyImage -p 8080:80will create containerMyContainerfromMyImagewithout issueBut the -p 8080:80 part will be silently ignored and your port mapping won't workThis syntax will work - you'll see exactly the same outcome except that port mapping will actually work.docker run --name MyContainer -p 8080:80 MyImageSame for this:docker run MyImage --name MyContainerThis will create a container from MyImage but it won't give it the explicit name, it'll assign a random nameI hope this saves someone some time. | Running a Jenkins image in my container which is bound to the host port 9090sudo docker run -itd -p 9090:8080 -p 50000:50000 --name=myjenkins -t jenkins-custom /bin/bashThe output of running$docker port myjenkins50000/tcp -> 0.0.0.0:50000
8080/tcp -> 0.0.0.0:9090I can also see the binding from the host perspectiveps -Af | grep proxyroot 15314 15194 0 17:52 ? 00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 50000 -container-ip 172.17.0.2 -container-port 50000
root 15325 15194 0 17:52 ? 00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9090 -container-ip 172.17.0.2 -container-port 8080After starting my jenkins server i try connect to the container using the host ip and the forwarded port (9090).I'm new to Docker so may have missed something however would appreciate suggestionsUpdate: including dockerfileFrom local-artifiactory/jenkinsci/jenkins:2.9
ENV java_opts="-Xmx8192m" | Docker port binding not working as expected |
The problem is that Docker by default blocks a list of system calls, including perf_event_open, which perf relies heavily on.Official docker reference:https://docs.docker.com/engine/security/seccomp/Solution:Download the standard seccomp(secure compute)filefor docker. It's a json file.Find "perf_event_open", it only appears once, and delete it.Add a new entry in syscalls section:{ "names": [ "perf_event_open" ], "action": "SCMP_ACT_ALLOW" },Add the following to your command to run the container:
--security-opt seccomp=path/to/default.jsonThat did it for me. | First things first:Alpine Version 3.9.0perf[from:http://dl-cdn.alpinelinux.org/alpine/edge/testing]4.18.13Docker 18.09.3 build 774a1f4My DockerfileFROM alpine:latest
# Set the working directory to /app
WORKDIR /app/
# Install any needed packages specified in requirements.txt
RUN yes | apk add vim
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" | tee -a /etc/apk/repositories
RUN apk add --update perfThe problem, these are commands ran inside the container:/ # cat /proc/sys/kernel/perf_event_paranoid
-1
/ # perf stat -d sleep 1
Error:
No permission to enable task-clock event.
You may not have permission to collect stats.
Consider tweaking /proc/sys/kernel/perf_event_paranoid,
which controls use of the performance events system by
unprivileged users (without CAP_SYS_ADMIN).
The current value is -1:
-1: Allow use of (almost) all events by all users
Ignore mlock limit after perf_event_mlock_kb without CAP_IPC_LOCK
>= 0: Disallow ftrace function tracepoint by users without CAP_SYS_ADMIN
Disallow raw tracepoint access by users without CAP_SYS_ADMIN
>= 1: Disallow CPU event access by users without CAP_SYS_ADMIN
>= 2: Disallow kernel profiling by users without CAP_SYS_ADMIN
To make this setting permanent, edit /etc/sysctl.conf too, e.g.:
kernel.perf_event_paranoid = -1
/ #The command for launching the image:docker run -it --mount type=tmpfs,tmpfs-size=512M,destination=/app/ alpyI've worked with perf for a long time. But, this is a first. Does anyone know why perf knows I have permission to profile, but won't let me do so?Thank you. | Docker Alpine and perf not getting along in docker container |
you can go:docker exec -it bashonce inside the container you can thenkill . This will kill the process but keep the container runningunlessthis is the process the container was started with. | i want to kill a running process like a Django webserver inside a Docker container without killing the container itself but for some reason if i dodocker exec -it ps -auxand thendocker exec kill it will kill my docker instance and i don't want that.How can i address this issue? | Kill a running process like a webserver inside a Docker container without killing the container |
I would say: try ;).At the moment, docker as no control whatsoever on the process once started as itexecve(3)without fork. It is not possible to update the env, that's why the links need to be done before the container runs and can't be edited afterward.Docker will try to reassign the same port to B, but there is no warranty as an other container could be using it.What do you mean by 'broken'? If you disabled the networking between unlinked container, it should still be working if you stop/start a container.No, you can't link container across network yet. | Docker allows you tolink containersby name.I have two questions on this:SupposedA(client) is linked toB(service), andB's port is exposed dynamically (i.e. the actual host port is determined by Docker, not given by the user). What happens ifBgoes down and is being restarted?Does Docker update the environment variable onA?Does Docker assign the very same port again toB?IsAlink toBbroken?…?Besides that, it's quite clear that this works fine if both containers are run on the same host machine. Does linking containers also work across machine boundaries? | Linking containers in Docker |
Depending on your version, you may need to include the scheme in the insecure registry definition. Newer versions of buildkit should not have this issue, so an upgrade may also help....
"insecure-registries" : [
"insecure.registry.local",
"http://insecure.registry.local"
]
... | Is there a way to build a docker image from a Dockerfile that uses a base image from a local, insecure registry hosted in Gitlab. For example, if my Dockerfile were:FROM insecure.registry.local:/mygroup/myproject/image:latestWhen I rundocker build .I get the following error:failed to solve with frontend dockerfile.v0: failed to create LLB definition:.... http: server gave HTTP response to HTTPS clientWhen I've been interacting with our registry and received similar types http/https errors, I would alter the docker daemon configuration file to include:...
"insecure-registries" : ["insecure.registry.local"]
...and everything would work when executing a particular docker command that would trigger it. I'm able to rundocker login insecure.registry.localsuccessfully. Is there a way either in the Dockerfile itself, through thedocker buildcommand or other alternative to have it pull the image successfully in theFROMstatement from an insecure registry? | Dockerfile FROM Insecure Registry |
UpdateDocker can now be installed on Windows 10 Home (version 2004 or higher).
Refer to this article for installation instructionshttps://docs.docker.com/docker-for-windows/install-windows-home/Old AnswerDocker for Windows requires Hyper-V, and Hyper-V requires Windows 10 Pro (or Windows Server). So no, you can't run Docker without upgrading.https://docs.docker.com/docker-for-windows/install/README FIRST for Docker Toolbox and Docker Machine users:Docker for Windows requires Microsoft Hyper-V to run. The Docker for Windows installer enables Hyper-V for you, if needed, and restart your machine.https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-vCheck RequirementsWindows 10 Enterprise, Professional, or Education64-bit Processor with Second Level Address Translation (SLAT).CPU support for VM Monitor Mode Extension (VT-c on Intel CPU's).Minimum of 4 GB memory.The Hyper-V rolecannotbe installed on Windows 10 Home. | I need to install Docker on my pc with Windows 10 home. I read that I can only install Docker Toolbox. Is there any way to have the latest Docker version instead without upgrading my pc to windows 10 pro?Thanks | Is it possible to use Docker without Windows 10 pro? |
it looks like a problem with the folder permissions. Try to execute the following:chmod -R 755 ~/docker-share/htmlWhen you map a host folder into the container, the files' ownership is maintained. e.g.If you execute the followingdocker run -it --rm -v "~/docker-share/html:/usr/share/nginx/html" nginx:alpineYou'll get something like this:total 12
drwx--x--x 2 1000 1000 4096 Oct 20 07:48 .
drwxr-xr-x 3 root root 4096 Jan 9 2020 ..
-rwx--x--x 1 1000 1000 83 Oct 20 07:48 index.htmlIn my case the folder is owned by 1000 (in your case you'll find your uid). The nginx container will use thenginxuser (uid: 101) for its workers. | I was try to mount a folder into "/usr/share/nginx/html/" and the Docker consoler shows an error of "[error] 28#28: *1 directory index of /usr/share/nginx/html/ is forbidden". I use this command to mounted volume "docker-share dilrukshi$ docker run -d -p 8080:80 --name web -v ~/docker-share/html:/usr/share/nginx/html nginx" and also I used nginx/1.19.3 Official NGINX Docker Image. In a web page show, a "403 Forbidden" error and also "/usr/share" folder doesn't have "/nginx/html". Wha's wrong with? and How can I fix it?Docker consoler/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
172.17.0.1 - - [20/Oct/2020:07:09:41 +0000] "GET / HTTP/1.1" 403 555 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.80 Safari/537.36" "-"
2020/10/20 07:09:41 [error] 28#28: *1 directory index of "/usr/share/nginx/html/" is forbidden, client: 172.17.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost:8080"Browser Error | *1 directory index of "/usr/share/nginx/html/" is forbidden, in mac catalina os |
Not easily, consideringADD or COPYuses theDockerfilecontext(the current folder or below) to seek their resources.It would be easier tocpthat file first to theDockerfilefolder (before adocker build .), and leave in saidDockerfileaCOPY myfile /opt/files/filedirective.Or you could run the container, and use adocker cp //somenetwork/somefiles/myfile /opt/files/fileto add that file at rintime | I have a file hosted (can't change that) at//somenetwork/somefiles/myfileIn myDockerfileI would like to:COPY //somenetwork/somefiles/myfile /opt/files/fileIs there anyway to achieve that withDockerfile?
thanks | In Dockerfile how to copy file from network drive |
You have a few options. Using something likedocker-compose, you could automatically build a unique image for each container using your base image as a template. For example, if you had adocker-compose.ymlthat look liked:container0:
build: container0
container1:
build: container1And then insidecontainer0/Dockerfileyou had:FROM larsks/thttpd
COPY index.html /index.htmlAnd insidecontainer0/index.htmlyou had whatever content you
wanted, then runningdocker-compose buildwould generate unique
images for each entry (and runningdocker-compose upwould start
everything up).I've put together an example of the abovehere.Using just the Docker command line, you can use host volume mounts,
which allow you to mountfilesinto a container as well as
directories. Using mythttpdas an example again, you could use the
following-vargument to override/index.htmlin the container
with the content of your choice:docker run -v index.html:/index.html larsks/thttpdAnd you could accomplish the same thing withdocker-composevia thevolumeentry:container0:
image: larsks/thttpd
volumes:
- ./container0/index.html:/index.html
container1:
image: larsks/thttpd
volumes:
- ./container1/index.html:/index.htmlI would suggest that using thebuildmechanism makes more sense if you are trying to override many files, while using volumes is fine for one or two files.A key difference between the two mechanisms is that when building images, each container will have acopyof the files, while using volume mounts, changes made to the file within the image will be reflectedon the host filesystem. | Maybe I'm missing this when reading the docs, but is there a way to overwrite files on the container's file system when issuing adocker runcommand?Something akin to theDockerfileCOPYcommand? The key desire here is to be able to take a particularDocker image, and spin several of the same image up, but with different configuration files. (I'd prefer to do this with environment variables, but the application that I'm Dockerizing is not partial to that.) | Overwrite files with `docker run` |
There are three different things happening, and none of them are specifically compose syntax, rather they are yaml syntax.First is defining an anchor with the&followed by a name. That's similar to defining a variable to use later in the yaml, with the value matching the value of the yaml object where it appears.Next is the alias, specified with*and the same name as the anchor. That uses the anchor in the second location in the yaml file.Last is a mapping merge using the<<syntax, which merges all of the mapped values in the alias with the rest of the values in the current map, allowing you to override values in the saved anchor with values specific to that section of the compose file.To dig more into this, try searching on "yaml anchors and aliases". The first hit for me is this blog post:https://medium.com/@kinghuang/docker-compose-anchors-aliases-extensions-a1e4105d70bd | Trying to understand how the docker-compose file was created as I want to replicate this into a kubernetes deployment yaml file.In reference to acookiecutter-django's docker-composeproduction.yamlfile:...
services:
django: &django
...By docker-compose design, the name of service here is already defined asdjangobut then I noticed this extra bit&django. This made me wonder why its here. Further down, I noticed the following:...
celeryworker:
<<: *django
...I don't understand how that works. The docker-compose docs have no reference or mention for using<<let alone, making a reference to a named service like*django.Can anyone explain how the above work and how do I replicate it to a kubernetes deployment or services yaml file (or both?) if possible?Edit:Thequestion that @jonsharpe sharedwas similar but the answer wasn't clear to me on how its used. | Explain how `<<: *name` makes a reference to `&name` in docker-compose? |
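To make the three pieces from the answer concrete, here is a tiny compose file you can resolve locally; the service names and values are made up, and docker-compose config simply prints the merged result:

# A minimal example of an anchor (&), an alias (*) and a merge (<<)
cat > docker-compose.yml <<'EOF'
version: "3.7"
services:
  django: &django
    image: myapp:latest
    environment:
      - DJANGO_SETTINGS_MODULE=config.settings
  celeryworker:
    <<: *django
    command: celery -A myapp worker -l info
EOF

# Print the fully resolved file; celeryworker shows django's keys plus its own command
docker-compose config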
The usual workaround is to mount/etc/localtime, as inissue 3359$ docker run --rm busybox date
Thu Mar 20 04:42:02 UTC 2014
$ docker run --rm -v /etc/localtime:/etc/localtime:ro busybox date
Thu Mar 20 14:42:20 EST 2014
$ FILE=$(mktemp) ; echo $FILE ; echo -e "Europe/Brussels" > $FILE ; docker run --rm -v $FILE:/etc/timezone -v /usr/share/zoneinfo/Europe/Brussels:/etc/localtime:ro busybox date
/tmp/tmp.JwL2A9c50i
Thu Mar 20 05:42:26 CET 2014The same thread mentions (for ubuntu-based image though), but you already tried it.RUN echo Europe/Berlin > /etc/timezone && dpkg-reconfigure --frontend noninteractive tzdata(AndI referred before to a similar solution)Another option would be to build your owngliderlabs/docker-alpineimage withbuilder/scripts/mkimage-alpine.bash.That script allows you toset a timezone.[[ "$TIMEZONE" ]] && \
cp "/usr/share/zoneinfo/$TIMEZONE" "$rootfs/etc/localtime"You can see that image builder script used inDigital Ocean: Alpine Linux:Generate Alpine root file systemEnsure Docker is running locally.Download and unzipgliderlabs/docker-alpine.wget -O docker-alpine-master.zip https://github.com/gliderlabs/docker-alpine/archive/master.zip
unzip docker-alpine-master.zipBuild the builder (export the right timezone first).export TIMEZONE=xxx
docker build -t docker-alpine-builder docker-alpine-master/builder/Build the root file system (change v3.3 to the Alpine version you want to build).docker run --name alpine-builder docker-alpine-builder -r v3.4Copy the root file system from the container.docker cp alpine-builder:/rootfs.tar.gz .Once you have therootfs.tar.gzon your own filesystem, you can use it (asmentioned here) to build your own Alpine image, with the following Dockerfile:FROM SCRATCH
ADD rootfs.tar.gz /Once built, you can use that Alpine image with the right timezone. | My Dockerfile is:FROM gliderlabs/alpine:3.3
RUN set -x \
&& buildDeps='\
python-dev \
py-pip \
build-base \
' \
&& apk --update add python py-lxml py-mysqldb $buildDeps \
&& rm -rf /var/cache/apk/* \
&& mkdir -p /app
ENV INSTALL_PATH /app
ENV TZ=Asia/Shanghai
WORKDIR $INSTALL_PATH
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
COPY requirements-docker.txt ./
RUN pip install -r requirements-docker.txt
COPY . .
RUN apk del --purge $buildDeps
ENTRYPOINT ["celery", "-A", "tasks", "worker", "-l", "info", "-B"]I setted the timezone asAsia/Shanghai, but it did not work and gave me the UTC which had 8 hours deviation, the result is :2016-01-24 11:25:07:[2016-01-24 03:25:07,893: WARNING/Worker-2] 2016-01-24 03:25:07.892718
2016-01-24 11:25:08:[2016-01-24 03:25:08,339: INFO/MainProcess] Task tasks.crawl[98c9a9fc-0817-45cb-a2fc-40320d63c41a] succeeded in 0.447403368002s: None
2016-01-24 11:27:07:[2016-01-24 03:27:07,884: INFO/Beat] Scheduler: Sending due task spider (tasks.crawl)Then I tried other methods like:RUN echo "Asia/Shanghai" > /etc/timezone && dpkg-reconfigure -f noninteractive tzdataandRUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtimenone of them did work, how can set the timezone? Thanks very much. | How can I set the time zone in Dockerfile using gliderlabs/alpine:3.3 |
How could I reference to ghcr stored image in From statement of Dockerfile?Image references have the registry in front of them, and when not included, will default to Docker Hub. So for a registry like ghcr you want:FROM ghcr.io/path/to/image:tag | I would like to use ghcr as cache to store docker image with part which almost do not change in my project (Ubuntu, miniconda and bunch of Python packages) and then use this image in Dockerfile which adds volumes and code of the project to it. Dockerfile is run by Github Actions. How could I reference to ghcr stored image in From statement of Dockerfile? | Use ghcr in Dockerfile in GHA |
You need to replace the ":" with a "/"gcloud builds submit --tag gcr.io/mycompany.com/project-id/helloworldMore info can be found here:https://cloud.google.com/container-registry/docs/overview#domain-scoped_projects | I'm trying to build a container with GCP's Cloud Build. I'm using the simple template from thequickstart doc. I've done this before successfully.However, this time I am using a project which is under an "organization". So the project ID ismycompany.com:projectX, rather than simplyprojectX.I am unable to get the build to complete.When I run:gcloud builds submit --tag gcr.io/mycompany.com:project-id/helloworldI get the following error:(gcloud.builds.submit) INVALID_ARGUMENT: invalid build: invalid image name "gcr.io/mycompany.com:projectX/helloworld"I suspect that since the--tagflag callsdocker build -t $TAG .under the hoodanddocker image names use:to specify versions, this format may be invalid.Any ideas what I am supposed to do when working with organization projects? I cannot find relevant info in the Cloud Build or GCP IAM docs.Some things I've tried:using acloudbuild.yamlconfig filewith a$PROJECT_IDsubstitutionto ensure I have the correct formatusing the project number instead of the project ID (Using the project number in the image path is not supported. Project ID must be used instead)omitting the organization name altogether, which is denied withToken exchange failed for projectchecking my permissions - I haveCloud Build EditorandCloud Run Invokerroles, where the former specifies that I can "create and cancel builds" | invalid image name in cloud build when using domain-scoped project |
This problem can be solved by running the github actions runner as root, which somewhat reduces security.A better solution is using rootless docker:Remove docker from your system if you have previously installed it through Ubuntu's default repositories.install docker from Docker's repositoriesas directed here(I also recommend
enabling cgroupsV2,as described here) & reboot. This will give you the script in /usr/bin needed to setup rootless docker in the next step.setup rootless dockeras described here.don't forget to run the following, so docker remains running after you logout (as described in the guide)systemctl --user enable docker
systemctl --user start docker
sudo loginctl enable-linger $(whoami)Also make sure to create the rootless contextas described on that same page. This will make your own docker commands and the github actions runner automatically use rootless docker.install the self hosted runner:https://docs.github.com/en/actions/hosting-your-own-runners/adding-self-hosted-runners(skip if already installed)Add theDOCKER_HOSTenv var to the .env file in the runner directory. The file might already be created by default. The line you add should look as follows (change the 1000 if your UID is not 1000):DOCKER_HOST=unix:///run/user/1000/docker.sockre(start) the actions runner. This can by done by restarting its systemd service. Your runner should now work with rootless dockerIf you're having issues with the new docker build github action using buildx, also seeHow to solve error with rootless docker in github actions self hosted runner: write /proc/sys/net/ipv4/ping_group_range: invalid argument: unknown | Github recommending running their runner as a non-root user gives rise to someissues surrounding mixing docker and non-docker actions.This is quite annoying because it results in the checkout action not being able to run because it can't access the files created by actions run in docker containers.Can this be solved by running the actions runner with rootless docker? | How to enable non-docker actions to access docker-created files on my self hosted github actions runner? (rootless docker) |
Here is my working .yml fileversion: '3.7'
services:
fix-redis-volume-ownership: # This service is to authorise redis-master with ownership permissions
image: 'bitnami/redis:latest'
user: root
command: chown -R 1001:1001 /bitnami
volumes:
- ./data/redis:/bitnami
- ./data/redis/conf/redis.conf:/opt/bitnami/redis/conf/redis.conf
redis-master: # Setting up master node
image: 'bitnami/redis:latest'
ports:
- '6329:6379' # Port 6329 will be exposed to handle connections from outside server
environment:
- REDIS_REPLICATION_MODE=master # Assigning the node as a master
- ALLOW_EMPTY_PASSWORD=yes # No password authentication required/ provide password if needed
volumes:
- ./data/redis:/bitnami # Redis master data volume
- ./data/redis/conf/redis.conf:/opt/bitnami/redis/conf/redis.conf # Redis master configuration volume
redis-replica: # Setting up slave node
image: 'bitnami/redis:latest'
ports:
- '6379' # No port is exposed
depends_on:
- redis-master # will only start after the master has booted completely
environment:
- REDIS_REPLICATION_MODE=slave # Assigning the node as slave
- REDIS_MASTER_HOST=redis-master # Host for the slave node is the redis-master node
- REDIS_MASTER_PORT_NUMBER=6379 # Port number for local
- ALLOW_EMPTY_PASSWORD=yes # No password required to connect to node | I want to create Redis cluster in my docker based environment, Any docker base image that supports replication and allow me to create cluster using docker-compose would be helpful. | How to create redis-cluster in docker based environment |
For anyone struggling with this: unfortunately this can't be done via docker-compose.yml yet. Refer to the issue Start Redis cluster #79. The only way to do this is to get the IP addresses and ports of all the nodes that are running Redis and then run this command on any of the swarm nodes.
docker run --rm -it thesobercoder/redis-trib
# This creates all master nodes
docker run --rm -it thesobercoder/redis-trib create 172.17.8.101:7000 172.17.8.102:7000 172.17.8.103:7000
# This creates slaves nodes. Note that this requires at least six nodes running master
docker run --rm -it thesobercoder/redis-trib create --replicas 1 172.17.8.101:7000 172.17.8.102:7000 172.17.8.103:7000 172.17.8.104:7000 172.17.8.105:7000 172.17.8.106:7000 | I'm just learning docker and all of its goodness like swarm and compose. My intention is to create a Redis cluster in docker swarm.Here is my compose file -version: '3'
services:
redis:
image: redis:alpine
command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 60000","--cluster-require-full-coverage no"]
deploy:
replicas: 5
restart_policy:
condition: on-failure
ports:
- 6379:6379
- 16379:16379
networks:
host:
external: trueIf I add thenetwork: - hostthen none of the containers start, if I remove it then the containers start but when I try to connect it throws an error likeCLUSTERDOWN Hash slot not served.Specs -Windows 10
Docker Swarm Nodes -
2 Virtual Box VMs running Alpine Linux 3.7.0 with two networks
VirtualBox VM Network -
eth0 - NAT
eth1 - VirtualBox Host-only network
Docker running inside the above VMs -
17.12.1-ce | Redis cluster with docker swarm using docker compose |
Java's "portability" is mostly marketing hogwash.Java programs can make system calls (like filesystem access or forking subprocesses) just like anything else, so the JVM doesn't isolate much of anything unless you're doing fancy things with the security manager.There isn't a single "the JVM", but rather a series of incompatible JVM releases every few years. There are also competing implementations, and occasionally it matters whether you're using Oracle's JVM or OpenJDK.Docker also isn't just about isolation and reproducability. For example, the JVM has nothing resembling UnionFS. | I'm new to using java and have just started getting a grasp of the build process and dependency management system of Maven and Gradle.From what I understand, Docker is a great tool for deploying containers inside of a docker host. I imagine this is useful in the same way Vagrant is (although not functionally the same) in that it removes the issue of the environment not being what was expected.It was my understanding that when building things with Maven or Gradle that you could create a JAR file or a WAR file that included the binaries inside them so it could run wherever the JVM was. With Docker, this seems redundant. I imagine that Docker is could be used for other things like programs running from C++ or anything else? Is Docker redundant for Java usage? | Why use docker? Aren't java files like WAR files already running on JVM? |
sed is usually the weapon of choice for such tasks. Taken from the official mysql Dockerfile:
RUN sed -Ei 's/^(bind-address|log)/#&/' /etc/mysql/my.cnf
The command comments out lines starting with bind-address or log in my.cnf or conf.d/*. | I would like to build a container that enables binding to multiple IP addresses. The bind address is stored in my.cnf, that part is fine. How can I define it, or comment it out, from a Dockerfile to grant remote access? | MySQL bind-address in a Docker container
If 2 containers are in the same network, are the same ENV vars automatically exposed on the containers as if they were linked?no, you would now have to use the container names as their hostnames. The new network feature has no idea which ports will be used. Think of this as 2 computers plugged on the same network hub. Both can address the other one by its hostname.is the hosts file updated with the correct container name / ip addresses ? Even after a docker restart ?yes,/etc/hostsfiles for all containers which are part of a network will be updated live by the docker engine.I can't see in the docs how a container can find the location of another in its network?Using the container name. See theConnect containerssection of theWork with network commandsdoc:Once connected, the containers can communicate using another container’s IP address or name.Also, compose looks to have a simple set up for linking containers, and may automate some of this - would compose be the way to go for defining multi container apps? Or is it too soon to run it in production?Compose supports the new network feature as beta by offering the--x-networkingoption. Youshould not useit in production yet (current Compose version is 1.5).Furthermore, the current implementation is a bit inconvenient as we must use the full container name which is composed of theproject name+_+container name+_1. Thedocumentationsays the next version (current one is 1.5) will improve this so that we should not have to worry about theproject nameto address containers.Does compose support multiple host configuration as well?Yes, in conjonction with Swarm as detailed in theoverlay network documentation | I have an existing app that comprises of 4 docker containers running on the same host. They have been linked together using thelinkcommand.However, after some upgrades of docker, thelinkbehaviour has been deprecated, and changed it seems. We are having issues where containers are loosing the link to each other now.So, docker says to use the newNetworkfeature overlinked containers. But I can't see how this works.If 2 containers are in the same network, are the sameENVvars automatically exposed on the containers as if they were linked?Or is the hosts file updated with the correct container name / ip addresses ? Even after adocker restart?I can't see in the docs how a container can find the location of another in its network?Also,composelooks to have a simple set up for linking containers, and may automate some of this - would compose be the way to go for defining multi container apps? Or is it too soon to run it in production?Doescomposesupport multiple host configuration as well?at some point in the future we will probably need to move one of the containers to a different host.... | Docker linked containers, Docker Networks, Compose Networks - how should we now 'link' containers |
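To make the name-resolution point above concrete, here is a minimal sketch using a user-defined bridge network; the application image name my-web-app is a placeholder, not something from the question:
docker network create mynet
docker run -d --name redis --net mynet redis
docker run -d --name web --net mynet my-web-app
Inside the web container, the hostname redis now resolves to the Redis container, and Docker keeps that mapping up to date when containers restart.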
I think your problem will be your-p(publish) flag. Assuming your container is actually listening on port 8080 - try-p 8080:8080which will map localhost:8080 to your container. (Well, technically it'll map0.0.0.0:8080which is all addresses - including external)But I think if you're not specifying something on the left hand side, you're getting a random port number mapped - you should be able to see this indocker psor using thedocker portcommand.When you rundocker run -ityou start it interactively - and it should start 'whatever is defined in the docker file' unless you specify otherwise. I assume this will be a service you want, but I don't know that app. You can also use the-dflag, that runs the container in the background. | Just using Docker for the first time so I'm probably making a rookie mistake, but here goes. I am trying to use thereactioncommerce/reactionimage, and it appears to run correctly. However, I cannot seem to connect to the server from the host.I am runningdocker run -p :8080 -it reactionas suggested on theDocker Hub page, then trying to access it by going tohttp://localhost:8080on a browser on the host, but no connection can be made. Where am I going wrong?I'm running on a Linux Mint host. | Unable to connect to Docker container from host |
Thedocker daemoncan listen on three different types of Socket:unix,tcpandfd.By default,docker daemonjust listen on unix socket.If you need to access the Docker daemon remotely, you need to enable the tcp socket.When creating docker swarm cluster, the swarm manager need to access the docker daemon of swarm agent nodes remotely.Therefore, you need to re-configure thedocker daemonvim /etc/default/dockerAdd following line:DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"Restartdocker daemonsudo restart dockerBy doing this, thedocker daemoncan be accessed remotely.References:Docker document: docker daemonDocker document: create a swarm for development | docker version 1.9.1
swarm version 1.0.1why on connecting 3 VMs (bridged net) to swarm. "docker info" shows me all nodesStatus pending.1 of 3 hosts ismanagerall output is from this host. I don't know where to look for.On runningswarm --debug manage token://XXXXXoutput >>*INFO[0000] Listening for HTTP addr=127.0.0.1:2375 proto=tcp
DEBU[0000] Failed to validate pending node: Cannot connect to the docker engine endpoint Addr=10.32.1.38:2375
DEBU[0000] Failed to validate pending node: Cannot connect to the docker engine endpoint Addr=10.32.1.4:2375
DEBU[0000] Failed to validate pending node: Cannot connect to the docker engine endpoint Addr=10.32.1.33:2375Thenroot@ubuntu:~# ps -ef | grep swarm
root 2158 1391 0 12:28 pts/2 00:00:00 swarm join token://xxxxxxx --addr 10.32.1.4:2375
root 2407 1213 0 13:57 pts/1 00:00:00 swarm manage token://xxxxxxx -H 0.0.0.0:4243
root 2413 1391 0 13:57 pts/2 00:00:00 grep --color=auto swarmThenroot@ubuntu:~# swarm list token://xxxxxxxxxxx
10.32.1.4:2375
10.32.1.33:2375
10.32.1.38:2375Thenroot@ubuntu:~# ps -ef | grep docker
root 2330 1 0 12:52 ? 00:00:00 /usr/bin/docker daemon
root 2421 1391 0 14:10 pts/2 00:00:00 grep --color=auto dockerheartbeat sorted - runs in background, checked ports, name resolution, pingable from manager. | Docker-swarm >> Cannot connect to the docker engine endpoint |
So it turns out, the ECS agent was only able to pull images with version 1.7, and that's where mine was falling. Updating the agent resolves my issue, and hopefully it helps someone else. | We are switching from Docker Hub to ECR and I'm curious how to structure the Dockerrun.aws.json file to use this image. I attempted to modify the name as/:but this is not successful. I also saw the details of private registries using an authentication file on S3 but this doesn't seem like the correct route whenaws ecr get-loginis the recommended way to authenticate with ECR.Can anyone point me to how I can use an ECR image in a Beanstalk Dockerrun.aws.json file?If I look at the ECS Task Definition,there's a required attribute calledcom.amazonaws.ecs.capability.ecr-auth, but I'm not setting that anywhere in myDockerrun.aws.jsonfile and I'm not sure what needs to be there. Perhaps it is an S3 bucket? Something is needed as every time I try to run the Elastic Beanstalk created tasks from ECS, I get:Run tasks failed
Reasons : ATTRIBUTEAny insights are greatly appreciated.UpdateI see from some otherthreadsthat this used to occur with earlier versions of the ECS agent but I am currently runningAgent version 1.6.0andDocker version 1.7.1, which I believe are the recommended versions. Is this possibly an issue with the Docker version? | Dockerrun.aws.json structure for ECR Repo |
Known issue:https://docs.docker.com/engine/reference/builder/#known-issues-runhttps://github.com/docker/docker/issues/783#issuecomment-123705753Upgrading to docker 1.9.1 solved it. | I installed Docker. Now, when my Ubuntu 14.04 Trusty system tries to boot, I get the following messageaufs au_opts_parse:1155:docker[2010] unknown option dirperm1What does this mean, and how can I get my system back to a stable stage to where I can start it up normally. If this would help: I have a container that is set to--restart on-failureand that is set to access H/W devices.I also have minikube installed, which had a VirtualBox-based Docker engine running in it. | aufs au_opts_parse:1155:docker[2010] unknown option dirperm1 |
Currently Cloud Run (fully managed) itself runs on a gVisor sandbox itself, so its support for low-level Linux APIs for creating further container environments using cgroups or Linux namespace APIs are probably not going to be possible.However, since gVisor is technically an user-space sandboxing technology (though I'm not sure what level of privileges it requires), you might be able to run a gVisor sandbox inside gVisor, though I would not hold my hopes high as it's probably not designed for that. I'm guessing that gVisor sandbox does not provideptracecapabilities for nested sandboxes to work, though you can probably ask this on gVisor’s own GitHub repository.For a use case like this, I recommend checking out Cloud Run for Anthos on GKE, it's a similar developer experience to Cloud Run, but runs your applications on GKE nodes (which are GCE VMs) which have full Linux system call suite available to them. Since Kubernetes podspec is available there, you can actually create privileged containers, and run VMs inside them etc.Usually containers themselves are supposed to be the sandbox, so attempting to create further sandboxes (like you asked earlier) is going to be a lot of platform-dependent work, even if you can get it running somehow. | Let's say I would to let the user upload some python or bash script, execute it in the cloud run and get the result back. To do this I would create a Cloud Run service with a service account that has no permissions to access project resources. I would as well run the script within the nested container so the user cannot interfere with the server code and manipulate consecutive requests from other users.How would I make gvisor runsc or some other sandbox runtime available within the container running on Cloud Run?I found some resources mentioning using the privileged flag on the original container, but that is not possible with Cloud Run. Also, I cannot find any information on how to run rootless containers with runsc. Let me know if I am on the right track or if this is even possible with cloud run or should I use another service?Thank you. | Can you run a sandbox container within a Cloud Run container? |
I found a workaround (I'm not willing to call it a solution):Windows Container Network Drivers: create a 'transparent' network:docker network create -d transparent transAttach container to this networkdocker run --network=trans ...Important: Please note, that with this network, your container needs to obtain an IP Adress from the Host Subnet and it is directly exposed to it.maybe related (this is about access the containers from the host):According tohttps://github.com/Microsoft/Virtualization-Documentation/issues/253#issuecomment-217975932(JMesser81):This is a known limitation in our Windows NAT implementation (WinNAT) that you cannot access the external port in a static port mapping directly from the container (NAT) host. | I am running a windows docker container on a Windows Server 2016 host, running default configuration.When running the docker container using the command:docker run -it microsoft/windowsservercore powershellWhen I run the command:ping It just says that the request times out.
I have checked that I can ping 8.8.8.8 and google.com etc... and even other machines on the same subnet. The only one I cannot ping is the host.I have added '--dns ' to the 'docker run' command but this only allows me to ping the host machine via hostname and not IP.Has anyone else seen this problem and have a solution? | Windows docker container cannot ping host |
In sonartype version 3.21.1 this feature has been added. When the Disable redeploy policy is selected , we get new option: Allow redeploying the 'latest' tag but defer to the Deployment Policy for all other tags.Link:https://issues.sonatype.org/browse/NEXUS-18186 | I'm using nexus to host both maven and docker artifacts. For the docker production artifacts I'd like to turn on "disable redeploy" to ensure the image can never change on the nexus server once it is potentially in production.However, enabling "disable redeploy" appears to make it impossible to re-publish the "latest" tag to point to the latest version.When trying to push I get obscure errors on the client such asblob upload invalid: blob upload invalid.Is it possible to disable redeploy to concrete version tags, while allowing on tags like "latest" | Allow redeploy for "latest" docker tag in Nexus OSS |
You can use grep in linux to fetch the relevant log messages you want:kubectl log bino | grep "error unable-to-access-website" >> John/Doe/Bino/log.txtHope this helps. | I'm new to kubernetes and am still trying to extract log from a few lines and write it, if anyone can help me what commands i should execute.If the pod is named bino, and i wanted to extract the lines corresponding to the error unable-to-access-website, and then write them to a certain location, say John/Doe/bino. How would i do this is there a easy command?I tried using kubectl log bino, but it just dumps all the output on the terminal, if i wanted to write certain parts how can i do it? Thanks!Or if anyone has played around in katacoda i would appreciate a link to a similar example. | Extract lines from Kubernetes log |
It's failing becausepasswdmanipulates a temporary file, and then attempts to rename it to/etc/shadow. This fails because/etc/shadowis a mountpoint -- which cannot be replaced -- which results in this error (captured usingstrace):102 rename("/etc/nshadow", "/etc/shadow") = -1 EBUSY (Device or resource busy)You can reproduce this trivially from the command line:# cd /etc
# touch foo
# mv foo shadow
mv: cannot move 'foo' to 'shadow': Device or resource busyYou could work around this by mounting a directory containingmy_shadowandmy_passwdsomewhere else, and then symlinking/etc/passwdand/etc/shadowin the container appropriately:$ docker run -it --rm -v $PWD/my_etc:/my_etc centos
[root@afbc739f588c /]# ln -sf /my_etc/my_passwd /etc/passwd
[root@afbc739f588c /]# ln -sf /my_etc/my_shadow /etc/shadow
[root@afbc739f588c /]# ls -l /etc/{shadow,passwd}
lrwxrwxrwx. 1 root root 17 Oct 8 17:48 /etc/passwd -> /my_etc/my_passwd
lrwxrwxrwx. 1 root root 17 Oct 8 17:48 /etc/shadow -> /my_etc/my_shadow
[root@afbc739f588c /]# passwd root
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@afbc739f588c /]# | Example of the problem:docker run -ti -v my_passwd:/etc/passwd -v my_shadow:/etc/shadow --rm centos
[root@681a5489f3b0 /]# useradd test # does not work !?
useradd: failure while writing changes to /etc/passwd
[root@681a5489f3b0 /]# ll /etc/passwd /etc/shadow # permission check
-rw-r--r-- 1 root root 157 Oct 8 10:17 /etc/passwd
-rw-r----- 1 root root 100 Oct 7 18:02 /etc/shadowThe similar problem arises when using passwd:[root@681a5489f3b0 /]# passwd test
Changing password for user test.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: Authentication token manipulation errorI have tried using the ubuntu image, but the same problem arises.I can manually edit passwd file and shadow file from within container.I am getting the same problem on following two machines:Host OS: CentOS 7 - SELinux DisabledDocker Version: 1.8.2, build 0a8c2e3Host OS: CoreOS 766.4.0Docker version: 1.7.1, build df2f73d-dirtyI've also opened issue on GitHub:https://github.com/docker/docker/issues/16857 | Can not add new user in docker container with mounted /etc/passwd and /etc/shadow |
I believe you're experiencingthis problem. There's a couple possible solutions there, but I haven't tried them myself as I don't have Docker on Windows:Solution 1 by shayneRemoverestart:alwaysfrom your container. Instead run this command once, it'll create a container that will start your container when the mount is ready:docker run --name holdup
--restart always
-v 'C:\data\mysql_db:/var/lib/mysql/'
-v //var/run/docker.sock:/var/run/docker.sock
shaynesweeney/holdupThis will however have an effect of startingallyour stopped containers on reboot.Solution 2 by evolartCreate the following Powershell script (adjust your location to where your docker-compose.yml is):Do {
$dockerps = (docker ps)
Start-Sleep -S 5
} While (! $dockerps -contains "mysql")
Set-Location D:\Docker\MySQL
docker-compose restartThen:Add a Task Scheduler Task with the action Start a program to run the script.Program/script: powershell.exeAdd arguments: -windowstyle hidden -file D:\Docker\MySQL-Restart.ps1 | I'm using a docker container for MySQL with docker-compose that works just fine.The only problem is that I get the errorunknown database "database_name"the first time I run it every day (after Windows startup)After that, if I stop it and re-run it I get no errors and everything works fine.yaml configuration:version: "2.0"
services:
mysql:
container_name: mysql
restart: always
image: mysql:5.7
command: --max_allowed_packet=32505856
ports:
- "3306:3306"
volumes:
- 'C:\data\mysql_db:/var/lib/mysql/'
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
networks:
- shared
networks:
shared:
external:
name: sharedEDIT: here is a pastebin of the logs of a startup:https://pastebin.com/aJiKJ4aE | MYSQL Docker container gives "unknown database" error |
Is Django running in a seperate container that is linked to the Redis container? If so, you should have some environment variables with the Ip and port that Django should use to connect to the Redis container. Set BROKER_URL to use the redis Ip and port env vars and you should be in business. Ditto for RESULT_BACKEND.Reference docs for the env vars are here:Docker Compose docsHere's some example code for how we use the automatically added env vars in one of our projects at OfferUp:BROKER_TRANSPORT = "redis"
_REDIS_LOCATION = 'redis://{}:{}'.format(os.environ.get("REDIS_PORT_6379_TCP_ADDR"), os.environ.get("REDIS_PORT_6379_TCP_PORT"))
BROKER_URL = _REDIS_LOCATION + "/0"
CELERY_RESULT_BACKEND = _REDIS_LOCATION + "/1" | I'm trying to use Redis as a broker for Celery for my Django project that uses Docker Compose. I can't figure out what exactly I've done wrong, but despite the fact that the console log messages are telling me that Redis is running and accepting connections (and indeed, when I dodocker ps, I can see the container running), I still get an error about the connection being refused. I even diddocker exec -it redis-cli
pingand saw that the response wasPONG.Here are the Celery settings in mysettings.py:BROKER_URL = 'redis://localhost:6379/0'
BROKER_TRANSPORT = 'redis'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ENABLE_UTC = True
CELERY_TIMEZONE = "UTC"Here are the Redis container settings in mydocker-compose.yml:redis:
image: redis
ports:
- "6379:6379"I remembered to link therediscontainer with mywebcontainer as well. I can start up the server just fine, but I get the connection refused error when I try to upload anything to the site. What exactly is going wrong?EDIT: I remembered to use VBoxManage to port forward such that I can go to my browser and access my site atlocalhost:8000, so it doesn't seem like I need to use the VM's IP instead oflocalhostfor mysettings.py.EDIT 2: If I replacelocalhostin the settings with either the IP address of thedocker-machineVM or the IP address of the Redis container, then what happens is that I really quickly get a false success message on my website when I upload a file, but then nothing actually gets uploaded. The underlying upload function,insertIntoDatabase(), usesdelay. | Redis+Docker+Django - Error 111 Connection Refused |
The other solution given is perfectly valid but I wanted to share my solution:Apparently dind will mount the /build directory so subcontainers can "see" its contents. So by placing the key in"./"it is viewable by those containers. I use$(pwd)because docker run doesn't accept~or.test run:
stage: deploy
script:
- *docker_login
- mkdir ./key
- echo $GCP_SVC_KEY > ./key/application_default_credentials.json
- docker run --rm -v "$(pwd)/key:/.config/gcloud/" $CONTAINER_TEST_IMAGE
tags:
- docker | I'm using docker in docker to host my containers as they work through the pipeline. The container I create from my code is setup to have a volume to pass in a gcloud key to the container. This works perfectly on my local machine, but on the gitlab-runner it doesn't link correctly.From reading this appears to be because it links the host to my container, rather than the dind host to my container.How do I link the directory that is inside dind to my container?(Also ignore any minor issues with tagging and such, this ci file is very early in development)GitLab ci belowimage: docker:latest
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay2
SPRING_PROFILES_ACTIVE: gitlab-ci
CONTAINER_TEST_IMAGE: registry.gitlab.com/fdsa
CONTAINER_RELEASE_IMAGE: registry.gitlab.com/asdf
stages:
- build_test_image
- deploy
.docker_login: &docker_login | # This is an anchor
docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
build test image:
stage: build_test_image
script:
- *docker_login
- docker build -t $CONTAINER_TEST_IMAGE .
- docker push $CONTAINER_TEST_IMAGE
test run:
stage: deploy
script:
- *docker_login
- mkdir /key
- echo $GCP_SVC_KEY > /key/application_default_credentials.json
# BROKEN LINE HERE
- docker run --rm -v "/key:/.config/gcloud/" $CONTAINER_TEST_IMAGE
tags:
- docker | GitLab CI docker in docker can't create volume |
There isno link parameter available in the container manifest, so unfortunately you can't do it that way.However, have you tried just setting the REDIS_MASTER_SERVICE_HOST environment variable to 127.0.0.1? I believe that should allow the frontend container to talk to the redis container through the standard networking stack. | TLDR: Is it possible to link two containers with the container manifest?I'm trying to port theGuestbook Sample app from the Google Container Engine docsto acontainer vm. I'm having troubles to connect the two container vms so the web app can access the redis service.It works, if I'm using the docker command line on the instance:start the instance and ssh into it:gcloud compute instances create guestbook-vm --image container-vm --machine-type g1-small
gcloud ssh guestbook-vmcreate the containers:sudo docker run -d --name redis -p 6379:6379 dockerfile/redis
sudo docker run -d --name guestbook -p 3000:80 --link redis:redis -e "REDIS_MASTER_SERVICE_HOST=redis" -e "REDIS_MASTER_SERVICE_PORT=6379" brendanburns/php-redisI'm using the --link to connect the guestbook to the redis container.Can this also be accomplished with the container manifest?this is my start command:gcloud compute instances create guestbook-vm --image container-vm --machine-type g1-small --metadata-from-file google-container-manifest=containers.yamlEDIT: Solution from Alex to use 127.0.0.1 works fine, so that's the right containers.yaml:version: v1beta2
containers:
- name: redis
image: dockerfile/redis
ports:
- name: redis-server
containerPort: 6379
hostPort: 6379
- name: guestbook
image: brendanburns/php-redis
ports:
- name: http-server
containerPort: 80
hostPort: 3000
env:
- name: REDIS_MASTER_SERVICE_HOST
value: 127.0.0.1
- name: REDIS_MASTER_SERVICE_PORT
value: 6379 | How to link docker containers on Container VM with an manifest? |
For an example how we setup our project template you may have a look atphundament/appand its testing setup.We are using a dockerizedGitLabinstallation with acustomized runner, which is able to executedocker-compose.Note! The runner itself is running on a separate Docker host.We are usingdocker-compose.ymlto define theservicesin a stack with adjustments fordevelopmentandtesting.TheCI configurationis optimized to handle multiple concurrent tests of isolated stacks, this is just done by specifying a customCOMPOSE_PROJECT_NAME.Some in-depth documentation about our testing process and useful information aboutdocker-composeand dockerized CI.#testing README#testing DocsCI buildsExtending services and Compose filesDocker-in-Docker for CI?Finally,Travis CIalso supports Docker since a while, but I haven't tested this approach at all. | I want to setup a unit test environment for my product. I have a web application build on nginx in Lua which use mysql and redis.I think docker will be good for this although i am new to docker. My application runs on centos server (production server).I am planning to setup different container for mysql,redis and webapp and then write UT application (unit test for Lua using Busted framework) in my mac (My development machine is MAC) or VM to test it. The UT application will talk to docker container nginx and nginx will use container mysql and redis. Is this good ? If yes ,can someone guide me how to do this? maybe some good link? If no , what could be better way. I have already tried using vagrant but that took too much time which shouldn't be in my UT case. | docker unit test setup |
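To illustrate the COMPOSE_PROJECT_NAME idea mentioned above, a rough sketch of one isolated test run per build might look like this (the service name web and the busted invocation are assumptions, not details from the linked project):
export COMPOSE_PROJECT_NAME=myapp_build_42
docker-compose up -d
docker-compose run --rm web busted spec/
docker-compose down -v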
The hotspot sources do not currently support static linking. See http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-September/010810.html for more info. | I am trying to create an image using a JRE without any OS. I tried this Dockerfile, which does not work.
FROM openjdk:11.0.1-jdk-oraclelinux7 as JDK
RUN jlink --no-header-files --no-man-pages --add-modules java.base,java.desktop,java.logging,java.sql --output /jre
FROM scratch
#FROM oraclelinux:7-slim
COPY --from=JDK /jre /jre
ARG JAR_FILE
COPY ${JAR_FILE} /app.jar
CMD ["/jre/bin/java", "-jar", "/app.jar"]I am getting following error:standard_init_linux.go:190: exec user process caused "no such file or directory"If I replace scratch with oraclelinux, it works fine. Any clue why I cannot use scratch. The reason to use scratch is to reduce the size of the image.Any help is appreciated. | Create Docker Image For JRE FROM scratch |
While you can do udp with-p 1982:1982/udpI don't believe docker's port forwarding currently supports multicast. You may have better luck if you disable the userland proxy on the daemon (dockerd --userland-proxy=false ...), but that's just a guess.The fast/easy solution, while removing some of the isolation, is to use the host network withdocker run --net=host .... | I am using docker compose and hypriotos on a raspberry pi running a node container. I would like to receive udp multicast messages send to 239.255.255.250:1982 in local network. My code is working on other machines outside of docker so I think it's a docker issue.I already exposed port 1982/udp in the docker file and in the docker-compose file I added "1982:1982" and "239.255.255.250:1982:1982/udp" to ports. I still can't receive anything. Am I missing something?My concrete plan is to receive advertisement messages from a yeelight. These messages are documentedhereAny help would be nice. Thanks. | Receive UDP Multicast in Docker Container |
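A minimal compose-flavoured sketch of the host-networking fallback mentioned above (service and image names are placeholders; with network_mode: host the ports: mapping is not used):
services:
  listener:
    image: my-multicast-listener
    network_mode: host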
Theerlang:20-alpineimage (Dockerfile), which is used as base forelixir:1.6.6-alpine(Dockerfile), has been recently updated from Alpine 3.8 to 3.9 (Github commit).The following has changed between Alpine 3.8 and 3.9:Thelibssl1.0package has been removed, and superseded bylibssl1.1.Thepdftkpackage has been removed in 3.9, and is only available in theedgebranchcommunityrepository (and older Alpine branches).libssl:This one is easily fixed: just replace thelibssl1.0package withlibssl1.1.pdftk:pdftkis more problematic. It depends onlibgcj6, the Java runtime for GCC 6.
However, the Java runtime was completely removed from GCC 8 and onwards.libgcj6is the Java runtime for GCC 6, and is not compatible with GCC 8. Installinglibgcj6also pulls the GCC 6 C++ runtime,libstdc++6 (6.4.0-r9).An attempt to installpdftkalong withlibgcj6, for example:RUN apk add --no-cache libgcj6 pdftk --repository=http://dl-cdn.alpinelinux.org/alpine/edge/communityFails with:ERROR: unsatisfiable constraints:
so:libgcj.so.17 (missing):
required by: pdftk-2.02-r1[so:libgcj.so.17]Unfortunately, I'm not familiar with a workaround, currently.There's an active open Alpine ticket for this issue:https://bugs.alpinelinux.org/issues/10136, so it's worth keeping an eye for possible updates. | I have thisDockerfilefor my Phoenix application. When running a promotion with Semaphore CI, my deployment fails and returns this error:ERROR: unsatisfiable constraints:
libssl1.0 (missing):
required by: world[libssl1.0]
pdftk (missing):
required by: world[pdftk]How come it can't fetch these two packages? | Docker - Alpine Elixir container has unsatisfiable constraints |
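For the libssl half of the problem, a one-line sketch of the fix on Alpine 3.9 is to request the new package name (pdftk has no such drop-in replacement):
RUN apk add --no-cache libssl1.1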
apt-get updateensures all package sources and dependencies are at their latest version, it does not update existing packages that have been installed. It's recommended that you always runapt-get updateprior to running anapt-get installthis is so when theapt-get installis run, the latest version of the package should be used.RUN apt-get update -q -y && apt-get install -q -y (the -q -y flags just mean that the apt process will run quietly without asking you for confirmations as this would cause the Docker process to fail) | Sorry, very new to server stuff, but very curious. Why run apt-get update when building a container?My guess would be that it's for security purposes, if that the case than that'll answer the question. | In Docker, why is it recommended to run `apt-get` update in the Dockerfile? |
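As a minimal sketch of how this usually looks in practice (the package names here are placeholders), the update and install are chained in a single RUN so that a stale cached update layer can never be combined with a newer install line, and the package lists are removed afterwards to keep the image small:
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*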
There is an issue with Docker and the latest version of react-scripts.
Here is a Github thread about it :https://github.com/facebook/create-react-app/issues/8688The (temporary and fastest) solution for your case is to downgrade the version of react-scripts in your package.json file.
From :"dependencies": {
...
"react-scripts": "3.4.1"
}To :"dependencies": {
...
"react-scripts": "3.4.0"
}I tested your project with this configuration and it works well now.From the above Github Thread it seems to be another solution with docker-compose andstdin_open: trueoption (which basically correspond to the-itflag of thedocker runcommand. You can try that too if the react-scripts version matter for you (and you want to keep the last version of it) | I am trying to run a react app using docker. Here are my steps:I have created a react app usingreact-native-cliand addedDockerfile.devfile. My Dockerfile.dev file contains this code:# Specify a base image
FROM node:alpine
WORKDIR '/app'
# Install some depenendencies
COPY package.json .
RUN yarn install
COPY . .
# Uses port which is used by the actual application
EXPOSE 3000
# Default command
CMD ["yarn", "run", "start"]Then I execute this command and get this output. But it doesn't show any port to access it.docker build -f Dockerfile.dev .OP: Successfully built ad79cd63eba3docker run ad79cd63eba3OP:yarn run v1.22.4
$ react-scripts start
ℹ 「wds」: Project is running at http://172.17.0.2/
ℹ 「wds」: webpack output is served from
ℹ 「wds」: Content not from webpack is served from /app/public
ℹ 「wds」: 404s will fallback to /
Starting the development server...
Done in 2.02s.Can anybody tell me how I start the development server and it shows me the port likeHttp://localhost:3000to access it.Full code:https://github.com/arif2009/frontend.git | How to build react app using Dockerfile.dev and Yarn |
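One more detail related to the -it / stdin_open note above: EXPOSE 3000 in the Dockerfile does not publish the port by itself, so the container still has to be started with an explicit mapping, for example (the image id is the one from the question's build output):
docker run -it -p 3000:3000 ad79cd63eba3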
Docker is executing the install-deps.sh script. The issue is that a command inside install-deps.sh is not recognized when Docker attempts to run the script. As you can see, the script returns an error code of 127, meaning that a command within the file does not exist. For instance, try this:
touch test.sh
echo "not-a-command" >> test.sh
chmod 755 test.sh
/bin/sh -c "./test.sh"Output:/root/test.sh: line 1: not-a-command: command not foundNow check the exit code:echo $?
127I would suggest running the script inside an interactive environment to identify/install the command that is not found. | So I can't seem to figure this out, but I'm getting error code 127 when running a Dockerfile. What causes this error?MyDockerfile:FROM composer as comp
FROM php:7.4-fpm-alpine
COPY --from=comp /usr/bin/composer /usr/bin/composer
COPY ./docker/install-deps.sh /tmp/install-deps.sh
RUN echo $(ls /tmp)
RUN /tmp/install-deps.sh
COPY . /var/www
WORKDIR /var/www
RUN composer install -o --no-devThe results after building the Dockerfile:Building php
Step 1/9 : FROM composer as comp
---> 433420023b60
Step 2/9 : FROM php:7.4-fpm-alpine
---> 78e945602ecc
Step 3/9 : COPY --from=comp /usr/bin/composer /usr/bin/composer
---> 46117e22b4de
Step 4/9 : COPY ./docker/install-deps.sh /tmp/install-deps.sh
---> 7e46a2ee759c
Step 5/9 : RUN echo $(ls /tmp)
---> Running in aa1f900032f9
install-deps.sh
Removing intermediate container aa1f900032f9
---> eb455e78b7f6
Step 6/9 : RUN /tmp/install-deps.sh
---> Running in 6402a15cccb2
/bin/sh: /tmp/install-deps.sh: not found
ERROR: Service 'php' failed to build: The command '/bin/sh -c /tmp/install-deps.sh' returned a non-zero code: 127Theinstall-deps.sh:#!/bin/sh
set -e
apk add --update --no-cache \
postgresql-dev \
mysql-client \
yaml-dev \
git \
openssl
docker-php-ext-install pcntl pdo_mysql pdo_pgsql
# yaml
apk add --no-cache --virtual .build-deps g++ make autoconf
pecl channel-update pecl.php.net
pecl install yaml
docker-php-ext-enable yaml
apk del --purge .build-deps | Docker: Error code 127 when executing shell script |
You can have a mongodb replica-set with this docker-compose services:mongodb-primary:
image: "bitnami/mongodb:4.2"
user: root
volumes:
- ./mongodb-persistence/bitnami:/bitnami
networks:
- parse_network
environment:
- MONGODB_REPLICA_SET_MODE=primary
- MONGODB_REPLICA_SET_KEY=123456789
- MONGODB_ROOT_USERNAME=admin-123
- MONGODB_ROOT_PASSWORD=password-123
- MONGODB_USERNAME=admin-123
- MONGODB_PASSWORD=password-123
- MONGODB_DATABASE=my_database
ports:
- 27017:27017
mongodb-secondary:
image: "bitnami/mongodb:4.2"
depends_on:
- mongodb-primary
environment:
- MONGODB_REPLICA_SET_MODE=secondary
- MONGODB_REPLICA_SET_KEY=123456789
- MONGODB_PRIMARY_HOST=mongodb-primary
- MONGODB_PRIMARY_PORT_NUMBER=27017
- MONGODB_PRIMARY_ROOT_USERNAME=admin-123
- MONGODB_PRIMARY_ROOT_PASSWORD=password-123
networks:
- parse_network
ports:
- 27027:27017
mongodb-arbiter:
image: "bitnami/mongodb:4.2"
depends_on:
- mongodb-primary
environment:
- MONGODB_ADVERTISED_HOSTNAME=mongodb-arbiter
- MONGODB_REPLICA_SET_MODE=arbiter
- MONGODB_PRIMARY_HOST=mongodb-primary
- MONGODB_PRIMARY_PORT_NUMBER=27017
- MONGODB_PRIMARY_ROOT_PASSWORD=password-123
- MONGODB_REPLICA_SET_KEY=123456789
networks:
- parse_network
ports:
- 27037:27017
networks:
parse_network:
driver: bridge
ipam:
driver: default
volumes:
mongodb_master_data:
driver: local | I have tried to run mongodb replicaSet in local with mongoldb-community in my Mac I followmongodb docI can run it by this commandmongod --port 27017 --dbpath /usr/local/var/mongodb --replSet rs0 --bind_ip localhost,127.0.0.1but it doesn't run on background, so every time I want to start replica set mongodb I should run that command, before I run itI should stop mongofirst, then on the next tab console I should runmongo --eval "rs.initiate()"to create to replicaSet againhere is my docker compose:version: "3.7"
services:
mongodb_container:
image: mongo:latest
ports:
- 27017:27017
volumes:
- mongodb_data_container:/data/db
volumes:
mongodb_data_container:how to convert that into docker-compose ? is it possible ?can I dodocker exec CONTAINER_ID [commands]? to run command mongo like above , but must stop the mongodb run in that docker ? | how to run mongodb replica set in docker compose |