Dataset columns: Response, Instruction, Prompt
I think you have missed something in the docker-compose file. This is a working sample we use:

```yaml
nginx:
  image: "nginx:alpine"
  ports:
    - 5000:443
  links:
    - registry:registry
  volumes:
    - ./auth:/etc/nginx/conf.d
    - ./auth/nginx.conf:/etc/nginx/nginx.conf:ro
registry:
  image: registry:2.7.0
  volumes:
    - ./data:/var/lib/registry
```

Keep an eye on this part:

```yaml
volumes:
  - ./auth:/etc/nginx/conf.d
  - ./auth/nginx.conf:/etc/nginx/nginx.conf:ro
```

Here the `auth` folder holds the certificate and key files, as well as the htpasswd file for docker registry login. In `nginx.conf` we reference them directly at their paths inside the `nginx` container:

```nginx
# SSL
ssl_certificate     /etc/nginx/conf.d/csr.pem;
ssl_certificate_key /etc/nginx/conf.d/csr.key;
```
I am trying to run a private docker registry using thistutorial. But after I did everything and run the docker-compose, I get the following error from thenginxcontainerno "ssl_certificate_key" is defined for certificate "/home/user/registry/nginx/ssl/key.pem"Here is the registry.conf file:upstream docker-registry { server registry:5000; } server { listen 80; server_name example.com; return 301 https://example.com$request_uri; } server { listen 443 ssl http2; server_name privatesecurereg.netspan.com; ssl_certificate /home/user/registry/nginx/ssl/csr.pem; ssl_certificate /home/user/registry/nginx/ssl/key.pem; # Log files for Debug error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; location / { # Do not allow connections from docker 1.5 and earlier # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) { return 404; } proxy_pass http://docker-registry; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_read_timeout 900; } }What is the rpobelom and how to fix it ?UPDATE:Here is my docker-compose:nginx: image: nginx:alpine container_name: nginx restart: unless-stopped tty: true ports: - "80:80" - "443:443" volumes: - ./nginx/conf.d/:/etc/nginx/conf.d/ - ./nginx/ssl/:/etc/nginx/ssl/ networks: - mynet
Nginx Container: no "ssl_certificate_key" is defined for certificate
The umask is a property of a process, not a directory. Like other process-related characteristics, it gets reset at the end of each `RUN` command.

If you're trying to make a directory writeable by a non-root user, the best option is to `chown` it to that user. ("How to set umask for a specific folder" on Ask Ubuntu has some further alternatives.) None of this will matter if the directory is eventually a bind mount or volume mount point; all of the characteristics of the mounted directory will replace anything that happens in the Dockerfile.

If you did need to change the umask, the only place you can really do it is in an entrypoint wrapper script. The main container process can also set it itself.

```sh
#!/bin/sh
# entrypoint.sh
umask 000
# ... other first-time setup ...
exec "$@"
```
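For context, a minimal sketch of how such a wrapper is typically wired into a Dockerfile; the base image and the final command here are placeholders, not taken from the question:

```dockerfile
FROM debian:bullseye-slim
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# Every container start goes through the wrapper, so the umask it sets
# is inherited by whatever command ends up running as PID 1.
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["sleep", "infinity"]
```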
I want some directory in my docker to have a specific umask value, say 000. I tried to set that in my dockerfile and in the ENTRYPOINT shell script, but they both failed to work,... RUN umask 000 /var/www/html/storage/logs //the directory ENTRYPOINT ["/etc/start.sh"] #in the /etc/start.sh #!/bin/sh umask 000 /var/www/html/storage/logs ...When I log into docker container and check/var/www/html/storage/logsumask, it is still the default 0022/var/www/html # cd storage/logs/ /var/www/html/storage/logs # umask 0022Why is that? How do I make it work ? Thanks!
Why is umask setting in dockerfile not working?
Try to edit your Dockerfile like this:

```dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

and remove `command: python manage.py runserver 0.0.0.0:8000` from the compose file. I assumed that `manage.py` is in the `/code/` folder, since you have `WORKDIR /code` in the Dockerfile; the server image is then created at the build stage and the files are copied into it.
I am new to Docker and I want to dockerise the Django app to run as a container. Followed as below.Here is theDockerfileFROM python:3 ENV PYTHONUNBUFFERED 1 RUN mkdir /code WORKDIR /code COPY requirements.txt /code/ RUN pip install -r requirements.txt COPY . /code/Here isdocker-compose.ymlconfversion: '3' networks: mynetwork: driver: bridge services: db: image: postgres ports: - "5432:5432" networks: - mynetwork environment: POSTGRES_USER: xxxxx POSTGRES_PASSWORD: xxxxx web: build: . networks: - mynetwork links: - db environment: SEQ_DB: cath_local SEQ_USER: xxxxx SEQ_PW: xxxxx PORT: 5432 DATABASE_URL: postgres://xxxxx:xxxxx@db:5432/cath_local command: python manage.py runserver 0.0.0.0:8000 volumes: - .:/code ports: - "8000:8000" depends_on: - dbwell on my docker shell i point to Dockerfile directory, if i run an ls command from y path i see the manage.py file, but if i run:docker-compose upi get this error:web_1 | python: can't open file 'manage.py': [Errno 2] No such file or directory core_web_1 exited with code 2Why my app don't find manage.py file that is in the same position as the "docker-compose up" command is?PS: No /code folder is created when i run docker-compose command. Is it correct?So many thanks in advance
Issue with Dockerising Django app using docker-compose
You should create a configuration file for NATS, push it into the container as a Docker volume, and set the command to `-c nats-server.conf`.

`nats-server.conf`:

```
max_payload: 4Mb
```

Start the container:

```sh
docker run -d -p 4222:4222 -v ~/nats-server.conf:/nats-server.conf nats -c /nats-server.conf
```
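If the NATS service is managed with docker-compose (as is common in a Moleculer project), the same idea can be expressed there; this is only a sketch mirroring the `docker run` command above, and the host path of the config file is an assumption:

```yaml
services:
  nats:
    image: nats
    command: ["-c", "/nats-server.conf"]
    ports:
      - "4222:4222"
    volumes:
      # The host file must exist before starting; otherwise Docker creates
      # a directory at this path and the mount fails with "not a directory".
      - ./nats-server.conf:/nats-server.conf:ro
```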
My problem is that I need to increase max_payload value that NATS receive but I have no idea where I can do it.The project is using Moleculer and NATS is created as a container with docker.When I try to make a request which is bigger than 1MB NATS returns:ERROR - NATS error. 'Maximum Payload ViolationInside dockstation logs NATS returns:cid:1 - maximum payload exceeded: 1341972 vs 1048576I tried the following items:Changing tranporter inside Moleculer Broker configs (https://moleculer.services/docs/0.12/transporters.html);Add an config file for NATS to modify some options (https://hub.docker.com/_/nats);Code example of Moleculer Broker configs:const brokerConfig: BrokerOptions = { ..., transporter: "NATS", transit: { maxQueueSize: 100000, disableReconnect: false, disableVersionCheck: false, }, ... }Code example of nats config file:{ max_payload: 1000000 }Error when I run docker with NATS config file:docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/home/matheus/nats-server.conf\\\" to rootfs \\\"/var/lib/docker/overlay2/08959b2fce0deb2abea27e103f7f4426b7ed6f3ef64b214f713ebb993c2373e6/merged\\\" at \\\"/var/lib/docker/overlay2/08959b2fce0deb2abea27e103f7f4426b7ed6f3ef64b214f713ebb993c2373e6/merged/nats-server.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type. error Command failed with exit code 125.
NATS with moleculer. How can I change NATS max_payload value?
There are two ways to create and share volumes: 1. using theVOLUMEinstruction on theDockerfile. 2 Specifying the-v option during container runtime and later using--volumes-from=with every subsequent container which need to share the data. Here is an ex with the later:Start your first container with-v, then add a test file under the directory of the shared volume.docker run -it -v /test-volume --name=testimage1 ubuntu:14.04 /bin/bash root@ca30f0f99401:/# ls bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys test-volume ===> test-volume dir got created here root@ca30f0f99401:/# touch test-volume/1 root@ca30f0f99401:/# cat > test-volume/1 Test Message!From the host OS, you can get details of the volume by inspecting your container:docker inspect ca30f0f99401 | grep -i --color -E '^|Vol'"Mounts": { "Name": "025835b8b47d282ec5f27c53b3165aee83ecdb626dc36b3b18b2e128595d9134", "Source": "/var/lib/docker/volumes/025835b8b47d282ec5f27c53b3165aee83ecdb626dc36b3b18b2e128595d9134/_data", "Destination": "/test-volume", "Driver": "local", "Mode": "", "RW": true "Image": "ubuntu:14.04", "Volumes": { "/test-volume": {} }Start another container with a shared volume and check if the shared folder/files exists.$ docker run -it --name=testimage2 --volumes-from=testimage1 ubuntu:14.04 /bin/bash root@60ff1dcebc44:/# ls bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys test-volume tmp usr var root@60ff1dcebc44:/# cat test-volume/1 Test Message!Goto step-3 to share volume with a new container.
I have a directory(maybe later volume), that I would like to share with all my interactive containers. I know, that native Docker volumes are stored under/var/lib/docker/volumesanddocker run -vseems the easiest way, but I thinkData Volume Containeris a much more standardized way. I don't know, how to create this volume container from a directory or an existing another volume. Maybe is it wrong method?
Share directory or volume with container from host
In this case, theentrypointis copying the files if they don't already exist. Note in the Dockerfile thatthe wordpress source is added to /usr/src/wordpress. Then, when the container starts, the entrypointchecks if some files existand if they don't, itcopies the wordpress source into the current directory, which isWORKDIR, which is /var/www/html.General Docker Volume StuffWith/var/www/htmlspecified as aVOLUME, the only way to get files into there from the container's perspective is to attach a docker volume with files to that. Think of it as a mountpoint.You can either attach a local filesystem to that volume:docker run -v /path/to/local/webroot:/var/www/html wordpressor you can create a docker volume and use it for a persistent, more docker-esque object:docker volume create webrootAnd then move the files into it with a transient container:docker run --rm -v /path/to/local/webroot:/var/www/html \ -v webroot:/var/www/html2 \ ubuntu cp -a /var/www/html/ /var/www/html2at which point you havewebrootas a docker volume you can attach to any container.docker run -v webroot:/var/www/html wordpress
I am learning Docker and trying to understandvolumes. Looking at this example ofwordpress composeand itsdockerfileI don't get which command is responsible for populating wordpress files into/var/www/html.I do see that there isVOLUME /var/www/htmlcommand in the dockerfile to create a mount point.There is command to download wordpress files and put in/usr/src/wordpressdirectory.But what I don't get is how does files get into/var/www/html?Is it just that mounting to this directory cause all the wordpress files magically stored in this?Is it somewhere else docker is doing this?EDIT:These wordpress files are already moved or copied when randocker-compose up. I'm not asking how can move/mount files into/var/www/html. But question is how this things happened referring to the dockerfile and docker compose file above.Thanks
From where/how the files get populated in /var/www/html?
I think it is not possible at the moment, see buildkit issue #1472. But BuildKit still caches all layers, so you could use a workaround.

To inspect the image *before* the failing `RUN` command: comment out the failing and all subsequent `RUN` commands, rerun `docker build`, and then do `docker run` to inspect the image.

To inspect the image *after* the failing `RUN` command: add `|| true` at the end of your `RUN` command to force the command to succeed, rerun `docker build`, and then do `docker run` to inspect the image.
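As a concrete sketch of that second workaround (the base image and package name are placeholders, not from the question):

```dockerfile
FROM ubuntu:20.04
RUN apt-get update
# Force the failing step to "succeed" so the layer is still produced and cached.
RUN apt-get install -y some-missing-package || true
# Comment out everything after the failing step while debugging.
# RUN ./configure && make
```

```sh
DOCKER_BUILDKIT=1 docker build . -t experimental
docker run -it --rm experimental bash   # poke around and fix the dependency by hand
```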
I recently heard about Buildkit and have been trying to use it with Docker.I'm usingDOCKER_BUILDKIT=1 docker build . -t experimentalto build my Dockerfile.MyDockerfiledoesn't build properly because of some missing dependant packages.What I want to do is to attach to the last working intermediate container and fix the problem with say,apttools.When building without Buildkit, this would have been possible with the hash values of intermediate containers from the terminal output.However, the output from Buildkit is not providing me such values. So, is there any way for me to access them?Thanks in advance.
How do we run from an intermediary layer with docker buildkit? [duplicate]
The following command should do it:

```sh
kubectl run my-app --image=my_docker_test -- argument_1
```
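Since the question mentions roughly 50 deployments that differ only in the argument, a small shell loop around that same command is one way to avoid writing a YAML file per value; this is just a sketch, and the argument values are made up:

```sh
# One pod per argument value, named after the value it receives.
for arg in value_1 value_2 value_3; do
  kubectl run "my-app-${arg}" --image=my_docker_test -- "${arg}"
done
```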
I have a bash script in a Docker image to which I can pass a command line argument throughdocker run(having specified the bash script inENTRYPOINTand a default parameter inCMDlike in thisanswer). So I would run something likedocker run my_docker_test argument_1Now I would like to deploy multiple (ca. 50) containers to OpenShift or Kubernetes, each with a different value of the argument. I understand that in Kubernetes I could specify thecommandandargsin the object configuration yaml file. Is there a possibility to pass the argument directly from the command line like indocker run, e.g. passing tokubectloroc, without the need to create a new yaml file each time I want to change the value of the argument?
How to pass arguments to Docker container in Kubernetes or OpenShift through command line?
You can run it with:

```sh
docker run -it -p 8888:8888 jupyter/pyspark-notebook start.sh jupyter notebook --NotebookApp.token=''
```

assuming you're in a secured environment (see more info here).
I'm running docker withdocker run -it -p 8888:8888 jupyter/pyspark-notebook/usr/local/bin/start-notebook.sh: running hooks in /usr/local/bin/before-notebook.d /usr/local/bin/start-notebook.sh: running /usr/local/bin/before-notebook.d/spark-config.sh /usr/local/bin/start-notebook.sh: done running hooks in /usr/local/bin/before-notebook.d Executing the command: jupyter notebook [I 12:52:25.086 NotebookApp] Writing notebook server cookie secret to /home/jovyan/.local/share/jupyter/runtime/notebook_cookie_secret [I 12:52:26.010 NotebookApp] JupyterLab extension loaded from /opt/conda/lib/python3.8/site-packages/jupyterlab [I 12:52:26.010 NotebookApp] JupyterLab application directory is /opt/conda/share/jupyter/lab [I 12:52:26.014 NotebookApp] Serving notebooks from local directory: /home/jovyan [I 12:52:26.014 NotebookApp] Jupyter Notebook 6.2.0 is running at: [I 12:52:26.014 NotebookApp] http://bfd1d14020b6:8888/?token=08fe978d71a160ec97096e68a455eda5c06d411d6fe0a666 [I 12:52:26.014 NotebookApp] or http://127.0.0.1:8888/?token=08fe978d71a160ec97096e68a455eda5c06d411d6fe0a666 [I 12:52:26.014 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). [C 12:52:26.019 NotebookApp] To access the notebook, open this file in a browser: file:///home/jovyan/.local/share/jupyter/runtime/nbserver-7-open.html Or copy and paste one of these URLs: http://bfd1d14020b6:8888/?token=08fe978d71a160ec97096e68a455eda5c06d411d6fe0a666 or http://127.0.0.1:8888/?token=08fe978d71a160ec97096e68a455eda5c06d411d6fe0a666I copied the below link, it asked me to input the Password or Token, but when I copied the token showing on the console, it can not be authenticated.Anyway, I don't want password to login in the notebook. Is there environment or argument passing toDocker run command? It is an official image from jupyter, but I cannot see any relevant document.
How to disable password or token login on jupyter-notebook with Docker image jupyter/pyspark-notebook
2 scenarios:

Copy via ssh:

```sh
$ sudo docker save myImage:tag | ssh user@IPhost:/remote/dir docker load -
```

Copy via scp:

```sh
# Host A
$ docker save Image > myImage.tar
$ scp myImage.tar IPhostB:/tmp/myImage.tar
# Host B
$ docker load -i /tmp/myImage.tar
```

And then you need to copy the docker-compose.yml to host B too. The containers only have the original build's own configuration; they don't save the environment that we generate with the docker-compose.yml file.

Bye
I have a series of containers created withdocker-compose. Some of these containers communicate between each other with some rules defined in thedocker-compose.ymlfile.I need to move those containers from aserverAtoserverB(same OS) but i'm having issues in understanding how this works.I tried both with theexportand thesavemethods following tutorials i've found on the web but I was not able to get the port configurations and networking rules after theexport-importorsave-loadoperations (there's a chance I didn't really get how they work...)The only way I've found to succesfully do this is to copy the whole docker-compose folder and rundocker-compose upin serverB.The question:Is there a way to preserve the whole configuration of the containers and move them from a server to another using the export or save function?Thank you for any help you can provide
Exporting a container created with docker-compose
Is it maybe faster or why is worth it?It sounds like you already have a Hadoop cluster. So you have to ask yourself, how long does it take to reproduce this environment? How often do you need to reproduce this environment?If you are not needing a way to reproduce the environment repeatedly and and contain dependencies that may be conflicts with other applications on the host, then I don't yet see a use case for you.What are advantages?If you are running Hadoop in an environment where you may need mixed Java versions, then running it as a container could isolate the dependencies (in this case, Java) from the host system. In some case, it would get you a more easily reproducible artifact to move around and set up. But Java apps are already so simple with all their dependencies included in the JAR.Maybe should be only multi node Cassandra cluster dockerized?I don't think it really comes down to whether is is a multi-node environment or not. It comes down to the problems it solves. It doesn't sound like you have any pain point in deploying or reproducing Hadoop environments (yet), so I don't see the need to "dockerize" something just because it is the hot new thing on the block.When you do have the need to reproduce the Hadoop environment easily, you might look at Docker for some of the orchestration and management tools (Kubernetes, Rancher, etc.) which make deploying and managing clusters of applications on an overlay network much more appetizing than just regular Docker. Docker is just the tool in my eyes. It really starts to shine when you can leverage some of the neat overlay multi-host networking, discovery, and orchestration that other packages are building on top of it.
Closed. This question isopinion-based. It is not currently accepting answers.Want to improve this question?Update the question so it can be answered with facts and citations byediting this post.Closed8 years ago.Improve this questionI have aHadoopbased environment. I useFlume,HueandCassandrain this system. There is a big hype aroundDockernowadays, so would like to examine, what are pros and cons in dockerization in this case. I think it should be much more portable, but it can be set usingCloudera Managerwith a few clicks. Is it maybe faster or why is worth it? What are advantages? Maybe should be only multi nodeCassandracluster dockerized?
Is Hadoop in Docker container faster/worth it? [closed]
So I finally managed to solve it. I have no idea why COPYing the cron file wasn't working (maybe someone smarter than me can explain it). But I solved it very simply by appending my commands to the `/etc/crontab` file, and now it works.

P.S. A crontab file requires a newline character at the end, and using `echo` adds it automatically.

Here is my updated Dockerfile (I deleted all the other lines where I copied the crontab):

```dockerfile
RUN echo "* * * * * root php /var/www/artisan schedule:run >> /var/log/cron.log 2>&1" >> /etc/crontab

# Create the log file to be able to run tail
RUN touch /var/log/cron.log
```
I am using thephp:7.4-fpmDocker image and I'm trying to set up cron to run but it's not running.Here is my Dockerfile:FROM php:7.4-fpm # Set working directory WORKDIR /var/www # Install dependencies RUN apt-get update && apt-get install -y \ cron \ build-essential \ libpng-dev \ libjpeg62-turbo-dev \ libfreetype6-dev \ locales \ libzip-dev \ libmcrypt-dev \ libonig-dev \ zlib1g-dev \ zip \ jpegoptim optipng pngquant gifsicle \ vim \ unzip \ git \ graphviz \ curl \ supervisor # Install Imagick RUN apt-get update && \ apt-get install -y libmagickwand-dev --no-install-recommends && \ pecl install imagick && \ docker-php-ext-enable imagick # Clear cache RUN apt-get clean && rm -rf /var/lib/apt/lists/* # Install extensions RUN docker-php-ext-install pdo_mysql zip exif pcntl # Permissions for Laravel RUN chown -R www-data:www-data /var/www RUN chmod -R 777 /var/www # Copy crontab file to the cron.d directory COPY ./docker/php-server/crontab /etc/cron.d/crontab # Give execution rights on the cron job RUN chmod 0644 /etc/cron.d/crontab # Apply cron job RUN crontab /etc/cron.d/crontab # Create the log file to be able to run tail RUN touch /var/log/cron.log EXPOSE 9000 CMD bash -c "cron && php-fpm"When I enter the container and check the contents of/etc/cron.d/crontabit is correct* * * * * php /var/www/artisan schedule:run >> /var/log/cron.log 2>&1 # An empty lineBut it's not being run. I'm not sure what's going on here..When I runservice cron statusit says[ ok ] cron is running.But nothing is happening.
Cron does not run in a PHP Docker container
You can increase the HTTP client's timeout by using a custom HTTP client for Arango. The default is set here to 60 seconds.

```python
from arango.http import DefaultHTTPClient

class MyCustomHTTPClient(DefaultHTTPClient):
    REQUEST_TIMEOUT = 1000  # set the timeout you want, in seconds

# Pass an instance of your custom HTTP client to Arango:
client = ArangoClient(
    http_client=MyCustomHTTPClient(),
)
```
I have a problem. I am usingArangoDB enterprise:3.8.6viaDocker. But unfortunately my query takes longer than30s. When it fails, the error isarangodb HTTPConnectionPool(host='127.0.0.1', port=8529): Read timed out. (read timeout=60).My collection is aroung 4GB huge and ~ 1.2 mio - 900k documents inside the collection.How could I get the complete collection with all documents without any error?Python code (runs locally on my machine)from arango import ArangoClient # Initialize the ArangoDB client. client = ArangoClient() # Connect to database as user. db = client.db(, username=, password=) cursor = db.aql.execute(f'FOR doc IN students RETURN doc', batch_size=10000) result = [doc for doc in cursor] print(result[0]) [OUT] arangodb HTTPConnectionPool(host='127.0.0.1', port=8529): Read timed out. (read timeout=60)docker-compose.yml for ArangoDBversion: '3.7' services: database: container_name: database__arangodb image: arangodb/enterprise:3.8.6 environment: - ARANGO_LICENSE_KEY= - ARANGO_ROOT_PASSWORD=root - ARANGO_CONNECT_TIMEOUT=300 - ARANGO_READ_TIMEOUT=600 ports: - 8529:8529 volumes: - C:/Users/dataset:/var/lib/arangodb3What I triedcursor = db.aql.execute('FOR doc IN RETURN doc', stream=True) while cursor.has_more(): # Fetch until nothing is left on the server. cursor.fetch() while not cursor.empty(): # Pop until nothing is left on the cursor. cursor.pop() [OUT] CursorNextError: [HTTP 404][ERR 1600] cursor not found # A N D cursor = db.aql.execute('FOR doc IN RETURN doc', stream=True, ttl=3600) collection = [doc for doc in cursor] [OUT] nothing # Runs, runs and runs for more than 1 1/2 hoursWhat workedbutonly for 100 documents# And that worked cursor = db.aql.execute(f'FOR doc IN LIMIT 100 RETURN doc', stream=True) collection = [doc for doc in cursor]
ArangoDB Read timed out (read timeout=60)
After many days I managed to add hot reload by adding this config to the webpack configuration file:

```js
devServer: {
  public: '0.0.0.0:8080'
}
```

After digging into the official Vue.js repo, specifically the `serve.js` file, I found the `public` option, which: "specify the public network URL for the HMR client".

If you do not want to edit your webpack config, you can do this directly from the docker-compose file in the command:

```yaml
command: npm run serve -- --public 0.0.0.0:8080
```
Dockerized Vue app loads normally to the browser, when applying changes to the code are not reflected without refresh.DockerfileFROM node:14-alpine # make the 'app' folder the current working directory WORKDIR /app # copy 'package.json' COPY package.json . # install project dependencies RUN npm install # copy project files and folders to the current working directory (i.e. 'app' folder) #COPY . . EXPOSE 8080 CMD ["npm", "run", "serve"]docker-compose.ymlversion: '3.9' services: frontend: container_name: 'frontend' build: ./ stdin_open: true tty: true ports: - '8080:8080' volumes: - ./:/app - /app/node_modules environment: - HOST=0.0.0.0 - CHOKIDAR_USEPOLLING=truepackage.json{ "name": "project", "version": "1.6.0", "private": true, "scripts": { "serve": "vue-cli-service serve", }, "dependencies": { "vue": "^2.6.12", "vue-axios": "^3.2.2", "vuetify": "2.3.18", "vuex": "^3.6.0", }, "devDependencies": { "@vue/cli-plugin-babel": "^4.5.10", "@vue/cli-plugin-eslint": "^4.5.11", "@vue/cli-plugin-router": "^4.5.10", "@vue/cli-plugin-unit-jest": "^4.5.10", "@vue/cli-plugin-vuex": "^4.5.10", "@vue/cli-service": "^4.5.10", "@vue/eslint-config-prettier": "^6.0.0", "@vue/test-utils": "1.1.2", "babel-eslint": "^10.1.0", "node-sass": "^5.0.0", "sass": "^1.32.4", "sass-loader": "^10.1.1", "vuetify-loader": "^1.6.0", "webpack": "^4.46.0" } }When I'm running the project locally, the hot reload works great!Any idea what might be the issue on the docker?EDITSince this is a docker for development purposes, I have tried as well to remove theCOPY . .without result.
Dockerized Vue app - hot reload does not work
As mentioned, running multiple processes in one container is not a suggested practice. Nevertheless, in some scenarios it is required to have multiple processes. In those cases the usual approach is to use a process manager like supervisor.
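A minimal sketch of that approach; the two script names stand in for the client and server programs mentioned in the question, and the base image is an assumption:

```dockerfile
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y --no-install-recommends supervisor
COPY client.sh server.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/client.sh /usr/local/bin/server.sh
COPY app.conf /etc/supervisor/conf.d/app.conf
# Run supervisord in the foreground; it starts and supervises both programs.
CMD ["supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
```

with an `app.conf` along these lines:

```ini
[program:server]
command=/usr/local/bin/server.sh

[program:client]
command=/usr/local/bin/client.sh
```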
Hi, I am wondering if it is possible to run two scripts at the same time, automatically, on docker container start. The first script has to run a client application, and the second runs a server app in the background.
Run multiple scripts in docker image
Autoscaling of containers is not yet supported and is not part of the near-term 1.0 roadmap for Kubernetes (meaning that the core team isn't going to add it soon, but external contributions are certainly welcome).
is it possible to autoscale docker containers, which contain application servers (like wildfly/tomcat/jetty/) within kubernetes ? For example at cpu & ram use or based on http requests ? If there is a build in feature for that i can't find it, or is it possible to write something like a configuration script for this ? If so where does the magic happen ?
Kubernetes Autoscaling Containers
Your best bet is passing (optional) environment variables to your docker container that can be processed by your startup script.

docker-compose.yml:

```yaml
version: '2.1'
services:
  www:
    image: somenginx
    environment:
      - ${UID}
      - ${GID}
```

Then use the values of `$UID`/`$GID` in your entrypoint script for updating the user's uid/gid.

Sadly docker-compose does not provide the basic ability to reference the current user's UID/GID (relevant issue), so this approach requires each user to ensure the environment variables exist on the host. Example `~/.bashrc` snippet that would take care of that:

```sh
export UID
export GID="$(id -g $(whoami))"
```

Not quite optimal, but at the moment there is no better way unless you have some other host orchestration besides docker/docker-compose. A shell script that handles container start, for example, would make this trivial. Another approach is templating your docker-compose.yml via some external build tool like gradle that wraps docker-compose and takes care of inserting the current UID/GID before the containers are started.
I have the following script as the ENTRYPOINT of my Dockerfile and therefore Docker image:#!/bin/bash set -e # Setup permissions data_dir="/var/www/html" usermod -u 1000 www-data && groupmod -g 1000 www-data chown -R www-data:root "$data_dir" if [ -d "$data_dir" ]; then chgrp -R www-data "$data_dir" chmod -R g+w "$data_dir" find "$data_dir" -type d -exec chmod 2775 {} + find "$data_dir" -type f -exec chmod ug+rw {} + fi # Enable rewrite a2enmod rewrite expires # Apache gets grumpy about PID files pre-existing rm -f /var/run/apache2/apache2.pid source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND "$@"Everything is running fine for Linux since the GUI and UID for most of the distros is1000(we're using Fedora and Ubuntu). Windows - I think - doesn't care about it, but again the script works properly and everything goes well.The problem comes when I try to run this in Mac (OSX) since the GUI and UID for the first user is500. That makes the permissions not to work properly.I know I can always change the values from 500 to 1000 but ....Is there any way to get this from inside the script so this is transparent for the user?UPDATEAS per the answer below, this is how my script looks like:#!/bin/bash set -e # Setup permissions data_dir="/var/www/html" usermod -u ${UID} www-data && groupmod -g ${GUID} www-data chown -R www-data:root "$data_dir" if [ -d "$data_dir" ]; then chgrp -RH www-data "$data_dir" chmod -R g+w "$data_dir" find "$data_dir" -type d -exec chmod 2775 {} + find "$data_dir" -type f -exec chmod ug+rw {} + fi # Enable rewrite a2enmod rewrite expires # Apache gets grumpy about PID files pre-existing rm -f /var/run/apache2/apache2.pid source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND "$@"What would happen if I hasn't defined theUIDorGUID? Is there any way to rely on default values as1000:1000(first created user)?
Dynamically pick the user GUI and UID who's running Docker at the host from entrypoint
My advice is to discard the IP from the environment variables and properties altogether.

`--link myMongo:mongodb` links the myMongo container to the host name 'mongodb'; Docker manages this inside your container's hosts config. Now adjust your properties as follows:

```properties
spring.data.mongodb.host=mongodb
spring.data.mongodb.port=27017
```

Now there is no need to manage IPs inside the container.
I have an springboot application container and mongodb container in docker.docker run -p 27017:27017 -d --name myMongo mongoSo I'm running mongodb container first and after springboot container.docker run -p 8080:8080 --name mySpringApp --link myMongo:mongodb mySpringAppAfter that I want to get that environment variables in my springboot app.MONGODB_PORT=tcp://172.17.0.5:27017 MONGODB_PORT_5432_TCP=tcp://172.17.0.5:27017 MONGODB_PORT_5432_TCP_PROTO=tcp MONGODB_PORT_5432_TCP_PORT=27017 MONGODB_PORT_5432_TCP_ADDR=172.17.0.5In application.properties file normally I have like that constant configuration for ip and port, so it connect mongodb container without any problem.spring.data.mongodb.host=172.17.0.56 spring.data.mongodb.port=27017But in that application.properties file i there a way to get that environment variables , btw i tried#{systemEnvironment['MONGODB_PORT_5432_TCP_ADDR']}like this notation. But my app couldn't connect to mongodb container. Is there a way any good practise for this situation , also i tried to implementAbstractMongoConfigurationget systemEnvironment variables with@Valueannotation.
Docker linking db container with spring boot and get environment variables
Persistent storage in the containerized world is still in its infancy, and can be problematic in high-traffic environments when running more than one replica of your database (in this case, MariaDB).

Running more than one MariaDB replica with shared persistent data storage (e.g. NFS), regardless of the number of databases you use, could cause some corruption issues.

I have not experienced these things myself, but you should research running databases in containers further before doing anything in production. There are lots of articles on the web about this. A lot of people still run their databases in VMs, or on bare metal, and only run databases in containers for local development.
I have an architectural question.Suppose we have a system that has multiple sub-systems:A,B, and so on. Each of these sub-system needs to persist their data and they all useMariaDB. Sub-systemAmay need adatabase(as increate database ...) calleda_db; and Sub-systemBmay need a database calledb_db. Furthermore, there are no data sharing acrossAandBIn a monolithic world before microservice and docker, it is common to set up one centralMariaDBinstance and ask each sub-system to use it and tojust use your owndatabasewhile on the shared instance (That is,Ausesa_db,Busesb_db, and so on)With docker, I think we could also have multiple mariadb containers running, and each maps their own volume for storage (e.g./data/mdb_aand/data/mdb_b, respectively).An obvious advantage would be complete isolation betweenAandB. There would be no worries thatAmay accidentally mess withB's data. And the two sub-systems can independently choose to shutdown/restart their own MariaDB container or even upgrade their MariaDB binary.On the other hand, some of my colleagues argue that running multiple MariaDB containers is inefficient and this approach imposes waste of resources.Are their good empirical measurements and articles discuss the trade-offs between the two approaches?
Multiple independent mariadb usages: multiple containers or one? Isolation vs efficiency?
The GNU Build System restricts the features that `configure` is supposed to use, for maximum compatibility: how to write shell code, which utilities are available for use, and which features you can expect out of those utilities. `file` is not in that list and should be tested for with `AC_PATH_PROG` (or something like that), so this is really a bug in the package. Even if the test in `configure` absolutely requires `file` to work, getting an `AC_MSG_ERROR` saying "Install the file program" would be preferable to decoding the error message.

So to answer your questions:

Is 'file' a very basic utility or was it recently added to the autotools package? No, `file` is not part of autotools, but it's a pretty common utility to have installed. Basic might not be the best adjective to describe `file`.

Should I install it? Your `configure` script relies on it, so I guess you have to. But its usage is kind of silly: most of the time you can determine the architecture from the host triplet. Also, `objdump` is one of the standard compiler tools (`objdump -f conftest.o` will display similar info), and at least that one is going to be installed.
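To make both suggestions concrete: on Ubuntu/Debian the utility ships in the package named `file`, so `apt-get update && apt-get install -y file` in the Docker image is enough to make the warning go away. And a `configure.ac` that genuinely needs the tool could guard for it roughly like this (a sketch, not taken from the package in question; the variable name is made up):

```m4
# configure.ac
AC_PATH_PROG([FILECMD], [file])
AS_IF([test -z "$FILECMD"],
      [AC_MSG_ERROR([the 'file' program is required; please install it])])
```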
When I use my configure in a ubuntu OS (16), there seems to be no problem. I have installed the autoconf tool and dependencies.When I run the same configure file in a ubuntu (16 or latest) The problems is that I did not install any autotools. I am getting the following error message../configure: line 7022: /usr/bin/file: No such file or directoryThis is harmless to the build process. I just want to understand what's going on. The configure file:7022 case `/usr/bin/file conftest.o` in 7023 *32-bit*)It looks that my docker does not have /usr/bin/file. Which ubuntu packages contain the file utility. The problem with finding any useful information about '/usr/bin/file' is that file is such a common term, it is not easy to find more info. On my system with the file utility, I can get the following info from the man page of file:AVAILABILITY You can obtain the original author's latest version by anonymous FTP on ftp.astron.com in the direc‐ tory /pub/file/file-X.YZ.tar.gz.My question: is 'file' a very basic utility or it is recently added to the autotolls package? Should I install it?
autoconf configure warning: /usr/bin/file: No such file or directory
I've done a bit of research on it now and I have not found a satisfactory answer, other than that it does not seem to be possible at this time to disable the parallelism.

I did find a workaround that works for me and steps nicely around this issue: I now use actual remote servers to build the target platforms I need. In essence, you define a remote (through ssh) server with Docker installed on it and configure it to build specific targets. That way the build can actually run in parallel, as the physically different machines can each handle the formerly overlapping port number (which was the problem in my use case).

Read the full blog post on it here.
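A rough sketch of what that setup looks like with `docker buildx create`; the host names are placeholders and not part of the original answer:

```sh
# One node per target platform, each running on its own machine over ssh.
docker buildx create --name remote --platform linux/amd64 ssh://user@amd64-host
docker buildx create --name remote --append --platform linux/arm64 ssh://user@arm64-host

# Build with that builder; each node only handles "its" platform,
# so the temporarily opened port no longer collides.
docker buildx build --builder remote \
  --platform linux/amd64,linux/arm64/v8 --push -t ivonet/payara .
```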
I have a docker build that during the build needs to run the server for some admin configuration. By running the server it claims a port and during multi-platform build this conflicts with thedocker buildxcommand as it claims that the port is already in use.Now I would like to run the build sequentially instead of in parallel but that does not seem to be an option?I've tried to make this work by setting the cpus to 1 (--cpuset-cpus 1) but that does not seem to make a difference.docker buildx build --platform=linux/amd64,linux/arm64/v8 --cpuset-cpus 1 --push -t ivonet/payara .from git repohttps://github.com/IvoNet/docker-payaraI'm working on an Apple M1 (aarch64)So is it possible to run this build with parallel disabled?
docker buildx disable parallel build for multiplatform
When you link the redis container to the node container, Docker will already modify the hosts file for you. You should then be able to connect to the redis container via:

```js
var redisClient = require('redis').createClient(6379, 'redis'); // 'redis' is the alias for the link -> what's in the hosts file
```

From https://docs.docker.com/userguide/dockerlinks/:

```sh
$ sudo docker run -d -P --name web --link db:db training/webapp python app.py
```

This will link the new web container with the db container you created earlier. The `--link` flag takes the form `--link name:alias`, where name is the name of the container we're linking to and alias is an alias for the link name. You'll see how that alias gets used shortly.
I had deployed a simple redis based nodejs application on the digital ocean cloud.Here is the node.js app.var express = require('express'); var app = express(); app.get('/', function(req, res){ res.send('hello world'); }); app.set('trust proxy', 'loopback') app.listen(3000); var redisClient = require('redis').createClient(6379,'localhost'); redisClient.on('connect',function(err){ console.log('connect'); })In order to deploy the application, I used one node.js container and one redis container respectively,and linked node.js container with redis container.The redis container could be obtained bydocker run -d --name redis -p 6379:6379 dockerfile/redisand the node.js container is based on google/nodejs, in which Dockerfile is simply asFROM google/nodejs WORKDIR /src EXPOSE 3000 CMD ["/bin/bash"]my node.js image is named as nodejs and built bydocker build -t nodejs Dockerfile_pathand the container is run by copying my host applications files to the src folder in the container and linking the existing redis containerdocker run -it --rm -p 8080:3000 --name app -v node_project_path:/src --link redis:redis nodejsfinally I got into the container successfully, then installed the npm modules bynpm installand then start the application bynode app.js.But I got a error saying:Error: Redis connection to localhost:6379 failed - connect ECONNREFUSEDAs redis container is exposed to 6379, and my nodejs container is linking to redis container. in my node.js app, connecting to localhost redis server with port 6379 is supposed to be ok, why in fact it is not working at all
fail to link redis container to node.js container in docker
Figured it out, posting here if anyone else stumbles on this.The answer is simple - you need to embed thego_libraryrule within thego_imagerule. Here is mycmd/hello/BUILD.bazelwhere I also embed the go image in a docker containerload("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library") load("@io_bazel_rules_docker//go:image.bzl", "go_image") load("@io_bazel_rules_docker//container:container.bzl", "container_image") go_library( name = "hello_lib", srcs = ["main.go"], importpath = "go-example/cmd/hello", visibility = ["//visibility:private"], deps = ["//pkg/echo"], ) go_binary( name = "hello", embed = [":hello_lib"], visibility = ["//visibility:public"], ) go_image( name = "hello_go_image", embed = [":hello_lib"], goarch = "amd64", goos = "linux", pure = "on", ) container_image( name = "docker", base = ":hello_go_image", )Now it works to runbazel build //cmd/hello:docker
Let me start by saying I'm new to Bazel. I am trying to build a Docker container from a golang project that contains local module references.First I'm creating a local golang module:go mod init go-exampleHere is the general project structure:. ├── BUILD.bazel ├── WORKSPACE ├── cmd │   └── hello │   ├── BUILD.bazel │   └── main.go ├── go.mod ├── go.sum └── pkg    └── echo    ├── BUILD.bazel    └── echo.goInmain.goI am importingpkg/echofrom the local module.import ( "go-example/pkg/echo" )(top level BUILD.bazel) ... # gazelle:prefix go-example✅ Default bazel build works$ bazel run //:gazelle $ bazel build //cmd/hello❌ Docker build fails. I get the following error:(cmd/hello/BUILD.bazel) ... go_image( name = "docker", srcs = ["main.go"], importpath = "go-example/cmd/hello", )$ bazel build //cmd/hello:docker ... compilepkg: missing strict dependencies: /private/var/tmp/_bazel[...]/__main__/cmd/hello/main.go: import of "go-example/pkg/echo" No dependencies were provided.
Bazel build docker container with local golang module
It's because you have not published the port on the host. You may try:

```yaml
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: redis
    ports:
      - "6379:6379"
```
I'm having trouble making HTTP requests to my docker container (it's a Node.js API that communicates with a Redis database), which runs inside a VM (Docker Toolbox).I've set up my Dockerfile and docker-compose.yml with the desired ports. Built them and ran ("up") them successfully.FROM node:8.15 WORKDIR /redis_server COPY package.json package-lock.json ./ RUN npm install COPY . ./ EXPOSE 8080 CMD ["npm", "start"]version: '3' services: web: build: . depends_on: - db db: image: redis ports: - "6379:6379"redis.jsconst PORT = 6379 const HOST = 'db'server.js (express.js)const PORT = '0.0.0.0:8080'I build the container succesfully, then use a HTTP request service to test a GET. Since I run Docker Toolbox and that the VM is on host 192.168.99.100, I send my requests tohttp://192.168.99.100:8080.This does not work, the error message that appears in my Visual Studio Code is "Connection is being rejected. The service isn't running on the server, or incorrecte proxy settings in vscode, or a firewall is blocking requests. Details: Error: connect ECONNREFUSED 192.168.99.100:8080."Not sure where to go from here. I don't consider myself knowledgeable on things network.
How to send HTTP requests to my docker container from localhost?
There is an issue with version 3.4.1 of react-scripts, so I added a docker-compose file and specified this line, which solved the problem and saved my day: `stdin_open: true`.

So my docker-compose.yml file looks like this:

```yaml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    stdin_open: true
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
```
I'm new to Docker and I tried to run a container of thecreate-react-appimage so these are the steps that I have done:npx create-react-app frontendI created aDockerfile.devlike below:FROM node:alpine WORKDIR '/app' COPY package.json . RUN npm install COPY . . CMD ["npm" , "run" , "start"]I used this command to build the image:docker build -f Dockerfile.dev .When i run the container using the image id provided:docker run -p 3000:3000 my_docker_image_idNothing happens,as seen in this screenshot.But when I add the-iargument to my command everything works fine,as seen in this screenshot:docker run -p 3000:3000 -i my_docker_image_idAny idea please?
I can't run a docker container of my reactjs app
Since you have a Dockerfile, you are required to do 4 additional steps:

1. `docker build -t myapp .`: build your image
2. `docker images`: check your image
3. `docker run -d -p 2222:8080 myapp`: run your image
4. `docker ps`: check the running container

Refer to the Docker docs for more details.
I am looking at a Dockerfile like this:FROM microsoft/aspnetcore:2.0 AS base # Install the SSHD server RUN apt-get update \ && apt-get install -y --no-install-recommends openssh-server \ && mkdir -p /run/sshd \ && echo "root:Docker!" | chpasswd #Copy settings file. See elsewhere to find them. COPY sshd_config /etc/ssh/sshd_config COPY authorized_keys root/.ssh/authorized_keys # Install Visual Studio Remote Debugger RUN apt-get install zip unzip RUN curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l ~/vsdbg EXPOSE 2222I am trying to create an image using this dockerfile. How can I do this? I have read many webpages over the last few hours e.g. this onehttps://odewahn.github.io/docker-jumpstart/building-images-with-dockerfiles.html, however they all seem to overcomplicate it as I believe that there is only one command needed.
Create an image from a Dockerfile
Check your pg_hba.conf file in the Postgres data folder. The default configuration is that you can only log in from localhost (which I assume Adminer is doing) but not from external IPs.

In order to allow access from all external addresses via password authentication, add the following line to your pg_hba.conf:

```
host  all  all  0.0.0.0/0  md5
```

Then you can connect to your postgres DB running in the docker container from outside, given you expose the port (5432).
I have a PostgreSQL container set up that I can successfully connect to with Adminer but I'm getting an authentication error when trying to connect via something like DBeaver using the same credentials.I have tried exposing port 5432 in the Dockerfile and can see on Windows for docker the port being correctly binded. I'm guessing that because it is an authentication error that the issue isn't that the server can not be seen but with the username or password?Docker Compose file and Dockerfile look like this.version: "3.7" services: db: build: ./postgresql image: postgresql container_name: postgresql restart: always environment: - POSTGRES_DB=trac - POSTGRES_USER=user - POSTGRES_PASSWORD=1234 ports: - 5432:5432 adminer: image: adminer restart: always ports: - 8080:8080 nginx: build: ./nginx image: nginx_db container_name: nginx_db restart: always ports: - "8004:8004" - "8005:8005"Dockerfile: (Dockerfile will later be used to copy ssl certs and keys)FROM postgres:9.6 EXPOSE 5432Wondering if there is something else I should be doing to enable this to work via some other utility?Any help would be great.Thanks in advance.Update:Tried accessing the database through the IP of the postgresql container 172.28.0.3 but the connection times out which suggests that PostgreSQL is correctly listening on 0.0.0.0:5432 and for some reason the user and password are not usable outside of Docker even from the host machine using localhost.
Connecting to Postgres Docker server - authentication failed
I had the same issue and I use this in a cron:

```sh
# KUBECTL='kubectl --dry-run=client'
KUBECTL='kubectl'
ENVIRONMENT=sandbox
# yes, typo
AWS_DEFAULT_REGION=moon-west-1

EXISTS=$($KUBECTL get secret "$ENVIRONMENT-aws-ecr-$AWS_DEFAULT_REGION" | tail -n 1 | cut -d ' ' -f 1)
if [ "$EXISTS" = "$ENVIRONMENT-aws-ecr-$AWS_DEFAULT_REGION" ]; then
  echo "Secret exists, deleting"
  $KUBECTL delete secrets "$ENVIRONMENT-aws-ecr-$AWS_DEFAULT_REGION"
fi

PASS=$(aws ecr get-login-password --region $AWS_DEFAULT_REGION)
$KUBECTL create secret docker-registry $ENVIRONMENT-aws-ecr-$AWS_DEFAULT_REGION \
  --docker-server=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com \
  --docker-username=AWS \
  --docker-password=$PASS \
  --docker-email=[email protected] \
  --namespace collect
```
I am facing the issue while pulling the docker image from AWS ECR repository, earlier i usedkubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=kammana --docker-password=[email protected]The deployment YAML fileapiVersion: v1 kind: Pod metadata: name: private-reg spec: containers: - name: privateapp image: kammana/privateapp:0.0.1 imagePullSecrets: - name: regcredbut now the secret password is only valid for 12 hours when you generate for ECR, i will have to manually change the secret everytime. This is hectic and i read a Mediumarticle.It can creates kind of cron Job but i want to pull the image at runtime by logging in to ECR.It would be helpful if you could provide some relevant example with respect ECR direct login via Kubernetes and my cluster is not in the same AWS account so AWS IAM Roles is out of question.
Pull image from ECR to Kubernetes deployment file
Difference 1. If you want to use ssh, you need to have ssh installed on the Docker image and running on your container. You might not want to because of extra load or from a security perspective. One way to go is to keep your images as small as possible - avoids bugs like heartbleed ;). Whether you want ssh is a point of discussion, but mostly personal taste. I would say only use it for debugging, and not to actually change your image. If you would need the latter, you'd better make a new and better image. Personally, I have yet to install my first ssh server on a Docker image.Difference 2. Using ssh you can start your container as specified by the CMD and maybe ENTRYPOINT in your Dockerfile. Ssh then allows you to inspect that container and run commands for whatever use case you might need. On the other hand, if you start your container with the bash command, you effectively overwrite your Dockerfile CMD. If you then want to test that CMD, you can still run it manually (probably as a background process). When debugging my images, I do that all the time. This is from a development point of view.Difference 3. An extension of the 2nd, but from a different point of view. In production, ssh will always allow you to check out your running container. Docker has other options useful in this respect, likedocker cp,docker logsand indeeddocker attach.According to the docs "The attach command will allow you to view or interact with any running container, detached (-d) or interactive (-i). You can attach to the same container at the same time - screen sharing style, or quickly view the progress of your daemonized process." However, I am having trouble in actually using this in a useful manner. Maybe someone who uses it could elaborate in that?Those are the only essential differences. There is no difference for image layers, committing or anything like that.
Could you please point out the difference between installing openssh-server and starting an ssh session with a given docker container, versus running `docker run -t -i ubuntu /bin/bash` and then performing some operations? How does `docker attach` compare to those two methods?
Connecting to a running docker container - differences between using ssh and running a command with "-t -i" parameters
Depending on your use case, what you could do, instead of passing a user to the `psql` command, is to define the environment variable `PGUSER` on the container at boot time. This way it will be the default user for PostgreSQL if you do not specify any, so you won't even have to specify it in order to connect:

```sh
$ docker run --name postgres -e POSTGRES_PASSWORD=bar -e POSTGRES_USER=foo -e PGUSER=foo -d postgres
e250f0821613a5e2021e94772a732f299874fc7a16b340ada4233afe73744423
$ docker exec -ti postgres psql -d postgres
psql (12.4 (Debian 12.4-1.pgdg100+1))
Type "help" for help.

postgres=#
```
I have multiple Environment Variables defined on my Postgres container, such as POSTGRES_USER. The container is running and I want to connect to Postgres from the command line using exec.I'm unable to connect with the following:docker exec -it psql -U $POSTGRES_USER -d I understand that the variable is defined on the container and the following does work:docker exec -it bash -c 'psql -U $POSTGRES_USER -d 'Is there a way for me to execute the psql command directly from docker exec and call the environment variable on the container?docker exec -it psql -U ????? -d
Docker exec - cannot call postgres with environment variables
The docker image reference is the combination of the REPOSITORY and TAG in the format `REPOSITORY:TAG`, where the two are separated by `:`. So if you have an image with a REPOSITORY of `IMAGE1` and a tag of `latest`, the image reference would be `IMAGE1:latest`. Knowing the image reference lets you filter the docker image list by reference, by running:

```sh
docker images --filter=reference='myDocker*:*dev'
```

The above command will return all docker images whose repository name starts with `myDocker` and whose tag name ends with `dev`.
The Docker documentation mentions "image reference" in many places. However, running the `docker images` command gives the list of images with the following properties: REPOSITORY, TAG, IMAGE ID, CREATED, SIZE - no reference. Is 'reference' a synonym for ID or digest, or something else?
What is docker image reference?
According to your comment, I understand you'd be interested in adopting a monorepo configuration. In this case, for the question "Where do I manage/put the docker-compose file?", you could just put the docker-compose.yml file at the root of your GitLab CI project, which would lead to a directory structure like this:

```
monorepo-project/
├── backend/
│   ├── Dockerfile
│   ├── .dockerignore
│   └── src/
├── frontend/
│   ├── Dockerfile
│   ├── .dockerignore
│   └── src/
├── docker-compose.yml
├── .git/
├── .gitignore
└── .gitlab-ci.yml
```

As pointed out in https://docs.gitlab.com/ee/user/packages/workflows/monorepo.html (the original version of this page, deleted by this commit, is still available at this URL), you can tweak your configuration using the `changes:` key, so that if just one part of the project changes (e.g., the frontend), the CI behaves accordingly.

Further pointers: for more examples, see e.g. this article on Medium, which specifically relies on Docker, or that blog article, which takes advantage of the `needs:` key. Finally, the semantics of the GitLab CI YAML configuration file is well documented in https://docs.gitlab.com/ee/ci/yaml/ (to be bookmarked!).
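A minimal `.gitlab-ci.yml` sketch of that `changes:` idea, assuming the directory layout shown above (the job names, stage, and image tags are made up for illustration):

```yaml
stages:
  - build

build-frontend:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE/frontend:$CI_COMMIT_SHORT_SHA" frontend/
  rules:
    # Only rebuild the frontend image when something under frontend/ changed.
    - changes:
        - frontend/**/*

build-backend:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE/backend:$CI_COMMIT_SHORT_SHA" backend/
  rules:
    - changes:
        - backend/**/*
```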
Currently I have a project (repo) in Gitlab which is an angular app. I'm using Gitlab CI/CD to build, test, release and deploy. Releasing will build a new docker image pushing it to the Gitlab registry and after deploying it on NGinx in a docker container on my Digital Ocean droplet. This works fine.Let's say I want to add a backend to it like the MEAN stack so I would have 2 containers running using a docker-compose file.container 1 - Angularcontainer 2 - Node.js, Express.js and MongoDBThe 2 gitlab projects (repo's) will have to be build separately when a change occurs (own Dockerfile and gitlab-ci.yml file) but deployed together using the docker-compose file.Where do I manage/put the docker-compose file?I hope my explanation is clear and if I'm assuming correctly.Thanks in advance.
Gitlab CI/CD to Digital Ocean for multiple repos using docker-compose
This looks like it might be related to the issue mentioned here: https://github.com/dotcloud/docker/issues/4767

It sounds like you've tried stopping and removing the container. Have you tried restarting the docker daemon and/or restarting the host?
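For reference, restarting the daemon is usually one of the following, depending on the init system of the host:

```sh
sudo systemctl restart docker    # systemd-based hosts
sudo service docker restart      # older SysV/Upstart-based hosts
```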
When pushing to the official registry, I get the following error:

```
Failed to generate layer archive: Error mounting '/dev/mapper/docker-202:1-399203-ed78b67d527d993117331d27627fd622ffb874dc2b439037fb120a45cd3cb9be' on '/var/lib/docker/devicemapper/mnt/ed78b67d527d993117331d27627fd622ffb874dc2b439037fb120a45cd3cb9be': device or resource busy
```

The first time I tried to push the image, I ran out of space on my hard drive. After that I cleaned up and should now have enough space to push it, but the first try somehow locked the image. How can I free it again? I have stopped and removed the container running the image, but that didn't help. I have also restarted the docker service, without any results.
"device or resource busy" error when trying to push image with docker
You can add an instruction to install a fake `tensorflow` "package" that only writes the metadata, without adding the duplicate sources:

```sh
$ python -c 'from setuptools import setup; setup(name="tensorflow", version="2.2.0")' install
```

In the docker image this would look like this:

```dockerfile
FROM tensorflow/tensorflow:2.2.0-gpu
RUN python -c 'from setuptools import setup; setup(name="tensorflow", version="2.2.0")' install
RUN pip install my-requirements
RUN pip uninstall -y tensorflow  # cleaning up
```
In the Docker image for Tensorflow with GPU support (for example:tensorflow/tensorflow:2.2.0-gpu) the installed python package istensorflow-gpu(as shown inpip freeze).Installing any python package that depends ontensorflowtriggers the installation of tensorflow itself, although it's already installed under a different name (because -- correctly --tensorflow-gpu!=tensorflow).Is there a way to avoid this?
Tensorflow: docker image and -gpu suffix
The Azure Web App for Containers is different from the container itself. When you create it, it is a web app service; the difference is just that it is built from a container image. So you cannot execute docker commands against the web app, you can only execute the web app commands. For example, if you want to check the container image, the command is:

az webapp config container show --resource-group groupName --name webName

and the result looks like this:

For more details about Web App commands, see Web App commands.
I'm using the Azure resource "Web app for containers" with a Linux docker image. I would like to use docker commands such as "docker inspect" but I'm not sure how this is possible. Via the Kudo interface this doesn't seem possible. I cannot even get the SHA256 hash of the image currently deployed. All I have is the initial docker run command executed by the app service itself.Does anyone know how such operations can be executed with app containers in Azure ?
Azure web app container and docker commands
Each container has its own log file; you can find out where it is by using:

docker inspect --format='{{.LogPath}}' <container>

It will tell you the path to the log file.

References:
https://docs.docker.com/config/containers/logging/json-file/
https://docs.docker.com/config/containers/logging/configure/#configure-the-default-logging-driver
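As a quick check (a minimal sketch; "mynginx" is a placeholder for your container name or ID, and reading the file under /var/lib/docker usually needs root):

mynginx=nginx-container          # placeholder: your container name or ID
docker inspect --format='{{.LogPath}}' "$mynginx"
sudo tail -f "$(docker inspect --format='{{.LogPath}}' "$mynginx")"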
I use the default nginx image and Filebeat to read logs and send them to ELK. Both containers (the nginx container and the Filebeat container) are on the same host machine.

Here is the Dockerfile for the nginx image:

FROM nginx
COPY . /usr/share/nginx/html/
EXPOSE 80

In my nginx container the access log goes to STDOUT and the error log goes to STDERR.

When I run docker logs from the host machine I can see the logs from the nginx container. But there is nothing in the container's folder on the host machine (/var/lib/docker/containers/nginx-container-id).

How can I set up Filebeat to read the logs?
Where are the logs of the docker nginx container stored on the host
Or are multi stage builds only useful for preparing some files and then copying those into another base image?

This is the main use-case discussed in "Use multi-stage builds":

The main goal is to reduce the number of layers by copying files from one image to another, without including the build environment needed to produce said files.

But another goal could be to not rebuild the entire Dockerfile, including every stage. Then your suggestion (not copying) could still apply.

You can specify a target build stage. The following command assumes you are using the previous Dockerfile but stops at the stage named builder:

$ docker build --target builder -t alexellis2/href-counter:latest .

A few scenarios where this might be very powerful are:

Debugging a specific build stage
Using a debug stage with all debugging symbols or tools enabled, and a lean production stage
Using a testing stage in which your app gets populated with test data, but building for production using a different stage which uses real data
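For example, assuming your Dockerfile names its first stage "base" as in your snippet (image names here are placeholders), a rough sketch of building one stage versus the whole file:

# build and cache only the early stage, e.g. to inspect the build environment
docker build --target base -t myapp:base .

# build the final stage (reuses the cached layers of the earlier stage)
docker build -t myapp:latest .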
does it have any advantages to use a multistage build in Docker, if you don't copy any files from the previously built image? eg.FROM some_base_image as base #Some random commands RUN mkdir /app RUN mkdir /app2 RUN mkdir /app3 #ETC #Second stage starts from first stage FROM base #Add some files to image COPY foo.txt /appDoes this result in a smaller image or offer any other advantages compared to a non multi-stage version? Or are multi stage builds only useful for preparing some files and then copying those into another base image?
Docker multistage build without copying from previous image?
Found a solution for the CSS files here:

app.css.append_css({"external_url": "./assets/xyz.css"})
I have a multi-page dash application that works as expected when running it locally with:waitress-serve --listen=0.0.0.0:80 web_app.wsgi:applicationso all the assets within the assets folder loads correctly, the images ar loaded withsrc=app.get_asset_url('xyz.png')and have setapp.css.config.serve_locallytotrue, as shown here everything loadsworkingBut when loading the same app within a docker container the assets don't loadnot workingand so the local css don't load either.Have checked the files and folders within docker and everything is were it is expected to be.I guess I'm missing something somewhere but don't find what, any suggestions on how to get it to work?DockerfileFROM python:3 RUN apt-get update && apt-get install -qq -y \ build-essential libpq-dev --no-install-recommends ENV INSTALL_PATH /gtg_analytics-master ENV PYTHONPATH "${PYTHONPATH}:$INSTALL_PATH/web_app" RUN mkdir -p $INSTALL_PATH WORKDIR $INSTALL_PATH COPY requirements.txt requirements.txt RUN pip install -r requirements.txt COPY web_app $INSTALL_PATH/web_appdocker-compose:version: "3" services: web_app: image: patber/gtg:dev build: . command: > waitress-serve --listen=0.0.0.0:80 web_app.wsgi:application environment: PYTHONUNBUFFERED: 'true' volumes: - '.:/web_app' ports: - '80:80'
Plotly dash in docker does not load assets
From https://docs.docker.com/storage/bind-mounts/:

Mount into a non-empty directory on the container
If you bind-mount into a non-empty directory on the container, the directory's existing contents are obscured by the bind mount. This can be beneficial, such as when you want to test a new version of your application without building a new image. However, it can also be surprising and this behavior differs from that of docker volumes.

So, if host os's directory is empty, then container's directory will override, is that right?

Nope, it doesn't compare them for which one has files; it just overrides the directory inside the container with the directory on the host, no matter what.
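A quick way to see this for yourself (a sketch using the stock nginx:alpine image; paths are just examples): a bind mount of an empty host directory hides the image's files, while a named volume gets the image's files copied into it on first use.

mkdir -p /tmp/empty_dir

docker run --rm nginx:alpine ls /usr/share/nginx/html
# 50x.html  index.html          <- files baked into the image

docker run --rm -v /tmp/empty_dir:/usr/share/nginx/html nginx:alpine ls /usr/share/nginx/html
# (empty: the bind mount hides the image's content)

docker run --rm -v html_data:/usr/share/nginx/html nginx:alpine ls /usr/share/nginx/html
# 50x.html  index.html          <- named volume: image content copied in on first use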
Inside of docker image has several files in/tmpdirectory.Example/tmp # ls -al total 4684 drwxrwxrwt 1 root root 4096 May 19 07:09 . drwxr-xr-x 1 root root 4096 May 19 08:13 .. -rw-r--r-- 1 root root 156396 Apr 24 07:12 6359688847463040695.jpg -rw-r--r-- 1 root root 150856 Apr 24 06:46 63596888545973599910.jpg -rw-r--r-- 1 root root 142208 Apr 24 07:07 63596888658550828124.jpg -rw-r--r-- 1 root root 168716 Apr 24 07:12 63596888674472576435.jpg -rw-r--r-- 1 root root 182211 Apr 24 06:51 63596888734768961426.jpg -rw-r--r-- 1 root root 322126 Apr 24 06:47 6359692693565384673.jpg -rw-r--r-- 1 root root 4819 Apr 24 06:50 635974329998579791105.pngWhen I type the command to run this image -> container.sudo docker run -v /home/media/simple_dir2:/tmp -d simple_backupExpected behavioris if I runls -al /home/media/simple_dir2then the files show up.Butactual behavioris nothing exists in/home/media/simple_dir2.On the other hand, if I run the same image without the volume option such as:sudo docker run -d simple_backupAnd enter that container using:sudo docker exec -it /bin/sh ls -al /tmpThen the files exist.TL;DRI want to mount a volume (directory) on the host, and have it filled with the files which are inside of the docker image.My envUbuntu 18.04Docker 19.03.6
Files inside a docker image disappear when mounting a volume
Actually I found the best/easiest way is to just add this argument to the docker-compose.yml file:

env_file:
  - .env
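If you also rely on ${VAR} placeholders inside docker-compose.yml, note that docker stack deploy does not read .env automatically, so a common workaround (a sketch; "mystack" is a placeholder stack name) is to export the variables first:

set -a; . ./.env; set +a          # export everything defined in .env
docker stack deploy -c docker-compose.yml mystack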
I am trying to use the same docker-compose.yml and .env files for both docker-compose and swarm. The variables from the .env file should get parsed, via sed, into a config file by a run.sh script at boot. This setup works fine when using the docker-compose up command, but they're not getting passed when I use the docker stack deploy command.

How can I pass the variables into the container so that the run.sh script will parse them at boot?
How can I pass the variables I have placed in the .env file to the containers in my docker swarm?
You can achieve this most easily by running a privileged container: e.g. compare:

docker run alpine ls -la /dev

vs

docker run --privileged alpine ls -la /dev
I have a working docker-compose where I now need to bind not only one specific device but all available devices.So instead of having something like:devices - '/dev/serial0:/dev/serial0'I would like to do something like:devices - '/dev:/dev'This gives me the following error:container init caused \"rootfs_linux.go:70: creating device nodes caused \\\"open /var/lib/docker/devicemapper/mnt/6a4...05af/rootfs/dev/pts/0: permission denied\\\"\"": unknownHow could I map all devices to my container?
mapping all available devices in docker-compose
You could set up an nginx reverse proxy on the host and bind each app to a separate port. The question and answer in this article explain it quite nicely, so I won't repeat it all: https://www.digitalocean.com/community/questions/how-to-bind-multiple-domains-ports-80-and-443-to-docker-contained-applications
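As a rough sketch of that setup (image names and ports here are placeholders): publish each container on its own local port, then let the host nginx proxy each domain to the matching port.

docker run -d --name siteA -p 127.0.0.1:8081:3000 sitea-image
docker run -d --name siteB -p 127.0.0.1:8082:3000 siteb-image
# host nginx: server_name domain-a.com -> proxy_pass http://127.0.0.1:8081;
#             server_name domain-b.com -> proxy_pass http://127.0.0.1:8082;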
I have a simple NodeJS site running inside a Docker container, with its ports mapped to port 80 on the host. I have a domain pointing to the IP of the EC2 instance, and everything is working as expected.If I want to run another, separate NodeJS site from a Docker container on the same instance, how can I map specific domain names to specific Docker containers?Eg, let's assume the IP of my EC2 instance is 22.33.44.55 and my domains are domain-a.com and domain-b.com. My dockerized NodeJS applications are siteA and siteB.domain-a.com is configured to point to 22.33.44.55. siteA is listening on that IP address to port 80 - this is what I currently have.domain-b.com is configured to point to 22.33.44.55. I want this traffic mapped to the siteB Docker container.
Point different domains to different Docker containers on a single EC2 instance?
I just figured this out. It requires "switching to Windows containers" in Docker Desktop.

1) Follow: https://docs.docker.com/machine/drivers/hyper-v/#example

2) Start Hyper-V (you may need to enable it): https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v

3) Then in Hyper-V create an external virtual switch. Select your wifi adapter. (Should work with VPN on or off.)

4) Reboot.

5) I used this image as it has to match my local Windows 10 version, 1809:

docker pull mcr.microsoft.com/windows:1809   # takes an hour to finish

6) Start the container and attach it to the new network:

docker run -dit --name win1809 mcr.microsoft.com/windows:1809 powershell
docker network ls
docker network connect "John Windows Container Switch" win1809
docker network inspect "John Windows Container Switch"

shows:

"Containers": {
    "b8c4ae07761fdf082602f836654013b8d83a717cce9156880a80c7542d855842": {
        "Name": "win1809",
        "EndpointID": "e84652fc93fd1fa2970c3bdcad513d8928fc35823a9f8cf0e638926b6091a60c",
        "MacAddress": "00:15:5d:fb:77:dd",
        "IPv4Address": "",
        "IPv6Address": ""

7) Connect to the container and ping something:

docker exec -it win1809 powershell
ping www.google.com

Pinging www.google.com [172.217.10.36] with 32 bytes of data:
Reply from 172.217.10.36: bytes=32 time=19ms TTL=118
Reply from 172.217.10.36: bytes=32 time=18ms TTL=118
Reply from 172.217.10.36: bytes=32 time=18ms TTL=118
Reply from 172.217.10.36: bytes=32 time=14ms TTL=118
I have set up two new projects in Visual Studio using the Docker tooling. The first is a asp.net site running against a Linux container. The second is an asp.net site running against a Windows container.In the former, I can ping hostnames (ex: google.com) and it resolves just fine.However, when running the windows container I cannot do the same thing.I am running a custom network so that I can ensure the container starts up on the subnet I want:docker network create --driver=nat --subnet=192.168.221.0/24To be clear, I can ping just fine by using an IP but since I want to connect to a database via hostname, this isn't especially helpful during development.
Docker - Windows Container Not Resolving Hosts
My best guess is your hub client is trying to connect to "the-public-url-out-of-docker/chatHub":

_hubConnection = new HubConnectionBuilder()
    .WithUrl(NavigationManager.ToAbsoluteUri("/chatHub"))
    .Build();

NavigationManager.ToAbsoluteUri(...) will convert /chatHub to a public url which is exposed to the end user. For example, if you're using a reverse proxy, it might be the domain name.

Note there are urls at three different levels (the original diagram showed: Browser -> reverse proxy (nginx, https://www.myexample.com/chatHub) -> HOST (5000) -> port mapping -> Container (80); Blazor sees only the public url via NavigationManager):

the domain name that is exposed to the public
the host ip & port
the container ip & port

However, when running in docker, the host's network is not always accessible from the container's network.

If that's the case, there are several approaches that should work:

Avoid using the public url like .WithUrl(NavigationManager.ToAbsoluteUri("/chatHub")). Hard-code it to the container ip & port. For example, if your container listens on 80, it should be http://localhost/chatHub.

Configure a network for docker, or add a --network when running docker. For more details, see this thread
I created default blazor server side app. Then addedMicrosoft.AspNetCore.SignalR.ClientandChatHubclass. Then edited startup.cs file (addservices.AddSignalR()andendpoints.MapHub("/chatHub")) andindex.razorpage. Then run by IIS express. it is okey.Then added docker support and run Docker host. it is not working. Because only don't work hub connection StartAsync method. How to run it? Help me? Thank you very much guys.Error is:An unhandled exception occurred while processing the request. SocketException: Cannot assign requested address System.Net.Http.ConnectHelper.ConnectAsync(string host, int port, CancellationToken cancellationToken)HttpRequestException: Cannot assign requested address System.Net.Http.ConnectHelper.ConnectAsync(string host, int port, CancellationToken cancellationToken)index.razor code:@code { private HubConnection _hubConnection; protected override async Task OnInitializedAsync() { _hubConnection = new HubConnectionBuilder() .WithUrl(NavigationManager.ToAbsoluteUri("/chatHub")) .Build(); _hubConnection.On("ReceiveMessage", (user, message) => { var encodedMsg = $"{user}: {message}"; StateHasChanged(); }); await _hubConnection.StartAsync(); // **DON'T WORK IN DOCKER HOST.** } }Docker file:FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base WORKDIR /app EXPOSE 80 FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build WORKDIR /src COPY ["BlazorApp1/BlazorApp1.csproj", "BlazorApp1/"] RUN dotnet restore "BlazorApp1/BlazorApp1.csproj" COPY . . WORKDIR "/src/BlazorApp1" RUN dotnet build "BlazorApp1.csproj" -c Release -o /app/build FROM build AS publish RUN dotnet publish "BlazorApp1.csproj" -c Release -o /app/publish FROM base AS final WORKDIR /app COPY --from=publish /app/publish . ENTRYPOINT ["dotnet", "BlazorApp1.dll"]
how to run StartAsync connection of signalr blazor client in docker image?
You can use variables in the docker-compose file for the image & tag:

version: '3'
services:
  redis-server:
    image: ${IMAGE_NAME}:${IMAGE_TAG}

And pass the arguments in the docker-compose task:

You can also define pipeline variables and check "Settable at release time":

So when you click on "Create Release" you can replace the values:
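Outside of the release pipeline you can check the substitution the same way locally, since docker-compose reads these variables from the shell environment (the values below are made up):

export IMAGE_NAME=myregistry.azurecr.io/app1   # placeholder registry/repository
export IMAGE_TAG=82
docker-compose config      # shows the compose file with ${IMAGE_NAME}:${IMAGE_TAG} resolved
docker-compose up -d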
I have a docker-compose.yml file present in the repository. I have added the image attribute in one of the services to pull the docker image. I have not hard coded the docker image and docker tag, and I am planning to pass these arguments at runtime to the docker-compose.yml file.

How do I pass runtime arguments like IMAGE_TAG=82, IMAGE_NAME=app1 to the docker-compose.yml file?
How to provide docker image tag dynamically to docker-compose.yml in Azure Release Pipeline Task?
Agree with @senderle's comment, Alpine is not the best choice here, especially if you plan to use scientific Python packages that rely on numpy. If you absolutely need to use Alpine, you should have a look at other questions like Installing numpy on Docker Alpine.

Here is a suggestion. I've also replaced the ENTRYPOINT by CMD in order to be able to overwrite it, which eases debugging (for example to run a shell). If the ENTRYPOINT is python it will not be possible to overwrite it and you will not be able to run anything other than python commands.

FROM python:3.8-slim

COPY . /app
WORKDIR /app

RUN pip install --quiet --no-cache-dir -r requirements.txt

EXPOSE 5001
CMD ["python", "main.py"]

Build, run, debug.

# build
$ docker build --rm -t my-app .

# run
$ docker run -it --rm my-app
# This is a test

# debug
$ docker run -it --rm my-app pip list
# Package         Version
# --------------- -------
# click           7.1.2
# Flask           1.1.2
# itsdangerous    1.1.0
# Jinja2          2.11.2
# joblib          0.17.0
# MarkupSafe      1.1.1
# numpy           1.19.2
# pandas          1.1.3
# ...
I am trying to dockerize my python application. Errors are showing inside building Dockerfile and installing dependencies ofscikit-learnie.numpy.DockerfileFROM python:alpine3.8 RUN apk update RUN apk --no-cache add linux-headers gcc g++ COPY . /app WORKDIR /app RUN pip install --upgrade pip RUN pip install --no-cache-dir -r requirements.txt EXPOSE 5001 ENTRYPOINT [ "python" ] CMD [ "main.py" ]requirements.txtscikit-learn==0.23.2 pandas==1.1.3 Flask==1.1.2ERROR: Could not find a version that satisfies the requirement setuptools (from versions: none) ERROR: No matching distribution found for setuptoolsFull Error
Installing python numpy module inside python alpine docker
Max heap is being capped at 256MB.

You mean via -m in docker? If so, this is not the java heap you are specifying, but the total memory.

I tried updating the MaxRAMFraction setting to 1

MaxRAMFraction is deprecated and un-used, forget about it.

UseCGroupMemoryLimitForHeap is deprecated and will be removed. Use UseContainerSupport, which was ported to java-8 also.

MaxRAM=2g

Do you know what this actually does? It sets the value for the "physical" RAM that the JVM is supposed to think you have.

I assume that you did not set -Xms and -Xmx on purpose here, since you do not know how much memory the container will have? If so, we are in the same shoes. We do know that the minimum we are going to get is 1g, but I have no idea of the max, so I prefer not to set -Xms and -Xmx explicitly.

Instead, we do:

-XX:InitialRAMPercentage=70
-XX:MaxRAMPercentage=70
-XX:+UseContainerSupport
-XX:InitialHeapSize=0

And that's it. What does this do?

InitialRAMPercentage is used to calculate the initial heap size, BUT only when InitialHeapSize/Xms are missing.

MaxRAMPercentage is used to calculate the maximum heap. Do not forget that a java process needs more than just heap, it needs native structures also; that is why that 70 (%).
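A quick way to see what heap the JVM will actually pick inside a memory-limited container (a sketch; the image tag and the 1g limit are just examples):

docker run --rm -m 1g openjdk:8-jre \
  java -XX:+UseContainerSupport -XX:MaxRAMPercentage=70.0 -XX:+PrintFlagsFinal -version \
  | grep -Ei "maxheapsize|maxrampercentage"
# MaxHeapSize should come out around 70% of the 1g limit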
I am running a Springboot application in the alpine-OpenJDK image and facing OutOfMemory issues. Max heap is being capped at 256MB. I tried updating the MaxRAMFraction setting to 1 but did not see it getting reflected in the Java_process. I have an option to increase the container memory limit to 3000m but would prefer to use Cgroup memory with MaxRamfraction=1. Any thoughts?Java-Version openjdk version "1.8.0_242" OpenJDK Runtime Environment (IcedTea 3.15.0) (Alpine 8.242.08-r0) OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode) bash-5.0$ java -XX:+PrintFlagsFinal -version | grep -Ei "maxheapsize|MaxRAMFraction" uintx DefaultMaxRAMFraction = 4 {product} uintx MaxHeapSize := 262144000 {product} uintx MaxRAMFraction = 4 {product} openjdk version "1.8.0_242" OpenJDK Runtime Environment (IcedTea 3.15.0) (Alpine 8.242.08-r0) OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode) Container Resource limits ports: - containerPort: 8080 name: 8080tcp02 protocol: TCP resources: limits: cpu: 350m memory: 1000Mi requests: cpu: 50m memory: 1000Mi securityContext: capabilities: {}Container JAVA_OPTS screenshot
OpenJDK 1.8.0_242, MaxRAMFraction setting not reflecting
The answer is somewhat buried levels deep but I found multiple ways of doing it, starting with the most elegant:

Name your container when running it so you can attach to its process logging, and couple that with a process monitor such as upstart/systemd/supervisord

docker run -itd --name=test ubuntu

upstart example (/etc/init/test.conf):

description "My test container"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
  /usr/bin/docker start -a test
end script

Less elegant: watch for changes in cidfile contents

docker run -itd --name=test --cidfile=/tmp/cidfile_path ubuntu

An hourly cron maybe...

#!/bin/bash
RUNNING=$(docker ps -a --no-trunc | awk '/test/ && /Up/' | awk '{print $1}')
CIDFILE=$(cat /tmp/cidfile_path)

if [ "$RUNNING" != "$CIDFILE" ]
then
    :  # do something wise
fi

Similar to the above, you can see if a given container is running... in a loop/cron/whatever

#!/bin/bash
RUNNING=$(docker inspect --format '{{.State.Running}}' test)

if [ "$RUNNING" == false ]
then
    :  # do something wise
fi

You can combine commands to do whatever checking script you like. I went with upstart because it suits my situation, but these examples could be used for all possible scenarios should you need more control.
I'm using docker on quite a lot of servers right now but sometimes some of the containers I use crash due to heavy load. I was thinking on adding a cron that checks every minute of the container is running or not but I didn't find any satisfactory method on doing that.I'm starting the container with a cidfile that saves the id of the running container. If the container crashes the cidfile stays there with the id inside and I was just wondering how do you guys make sure a container is running or not and respawn it in case it went down. Should I just parse the output ofdocker ps -aor is there more elegant solution?
making sure a given docker container is running
1) What docker is all about from a 10000 ft bird's eye point of view?

From the website:

Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.

Drill down a little bit more and a thorough explanation of the what/why docker addresses:

https://www.docker.io/the_whole_story/
https://www.docker.io/the_whole_story/#Why-Should-I-Care-(For-Developers)
https://www.docker.io/the_whole_story/#Why-Should-I-Care-(For-Devops)

Further depth can be found in the technology documentation: http://docs.docker.io/introduction/technology/

2) What exactly is the meaning of a container? Is it a synonym for image?

An image is the set of layers that are built up and can be moved around. Images are read-only.

http://docs.docker.io/en/latest/terms/image/
http://docs.docker.io/en/latest/terms/layer/

A container is an active (or inactive if exited) stateful instantiation of an image.

http://docs.docker.io/en/latest/terms/container/

See also: In Docker, what's the difference between a container and an image?

3) I remember reading somewhere that it allows you to deploy applications. Is this correct? In other words will it behave like IIS for deploying the .net applications?

Yes, Docker can be used to deploy applications. You can deploy single components of the application stack or multiple components within a container. It depends on the use case. See the First steps with Docker page here: http://docs.docker.io/use/basics/

See also:
http://docs.docker.io/examples/nodejs_web_app/
http://docs.docker.io/examples/python_web_app/
http://docs.docker.io/examples/running_redis_service/
http://docs.docker.io/examples/using_supervisord/
I am just one day old to docker , so it is relatively very new to me .I read the docker.io but could not get the answers to few basic questions . Here is what it is:Docker is basically a tool which allows you to make use of the images and spin up your own customised images by installing softwares so that you can use to create the VMs using that .Is this what docker is all about from a 10000 ft bird's eye piont of view?2 . What exactly is the meaning of a container ? Is it synonymn for image?3 . I remember reading somewhere that it allows you to deploy applications. Is this correct ? In other words will it behave like IIS for deploying the .net applications?Please answer my questions above , so that I can understand it better and take it forward.
Understanding docker from a layman point of view
To run pip for python3, use pip3, not pip.
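So in your Dockerfile, the RUN instruction would execute something like this (a minimal sketch):

pip3 install pipenv
# or, to avoid depending on which pip is on PATH:
python3 -m pip install pipenv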
I am getting the error using pip in my docker image.FROM ubuntu:18.04 RUN apt-get update && apt-get install -y \ software-properties-common RUN add-apt-repository universe RUN apt-get install -y \ python3.6 \ python3-pip ENV PYTHONUNBUFFERED 1 RUN mkdir /api WORKDIR /api COPY . /api/ RUN pip install pipenv RUN ls RUN pipenv syncI installed python 3.6 and pip3 but gettingStep 9/11 : RUN pip install pipenv ---> Running in b184de4eb28e /bin/sh: 1: pip: not found
Can't install pip in ubuntu 18.04 docker: /bin/sh: 1: pip: not found
If you want to manipulate the node's iptables then you definitely need to put the pod on the host's network (hostNetwork: true within the pod's spec). After that, granting the container the NET_ADMIN and NET_RAW capabilities (in containers[i].securityContext.capabilities.add) is sufficient. Example json slice:

"spec": {
  "hostNetwork": true,
  "containers": [{
    "name": "netadmin",
    "securityContext": {
      "capabilities": { "add": ["NET_ADMIN", "NET_RAW"] }
    }

I'm not sure if privileged mode has anything to do with manipulating the host's iptables these days.
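For comparison, a rough docker-level analogue of the same capability set (not Kubernetes, just plain docker; alpine is only used here as a throwaway test image):

docker run --rm --net=host --cap-add NET_ADMIN --cap-add NET_RAW alpine \
  sh -c 'apk add --quiet iptables && iptables -L -n'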
I have enabled the privileged mode in the container and add a rule to it,iptables -N udp2rawDwrW_191630ce_C0 iptables -F udp2rawDwrW_191630ce_C0 iptables -I udp2rawDwrW_191630ce_C0 -j DROP iptables -I INPUT -p tcp -m tcp --dport 4096 -j udp2rawDwrW_191630ce_C0andkt execinto the container and useiptables --table filter -L, I can see the added rules./ # iptables --table filter -L Chain INPUT (policy ACCEPT) target prot opt source destination udp2rawDwrW_191630ce_C0 tcp -- anywhere anywhere tcp dpt:4096 Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain udp2rawDwrW_191630ce_C0 (1 references) target prot opt source destination DROP all -- anywhere anywhereWhile when I logged into the node where the container lives, and runsudo iptalbes --table filter -L, I cannot see the same result.I was thinking by default theprevilegedis dropped because the container might leverage it to change something like the iptables in the node, however it looks not like that.So my question is "what is the relationship between K8S iptables and the one of a container inside a pod" and "why we stop user from modifying the container's iptables without theprivilegedfield"?
relationship between K8S iptables and the one of a container inside a pod
I usually use the -v "$PWD:$PWD" -w "$PWD" trick: run the container and volume-mount the current host working directory into the container at the same path, and set the working directory to that same path.

So for example, if I want to transcode a wav file on the host to an mp3 file using ffmpeg running in a container, I would do:

docker run --rm -v "$PWD:$PWD" -w "$PWD" mwader/static-ffmpeg:4.2.2 -i file.wav file.mp3

You can also add -u $UID:$GROUPS if you're unsure what default user the image runs as.
I would like to pass a file from the host system to a container at runtime. I want to run a CLI tool within a container and use the file as an argument to the CLI tool. Is it possible to modify the following command:docker run -it --rm --name to achieve what I want to do. Thedocker cpcommand doesn’t work for what I need since it doesn’t run from within the container and I need to pass the file name as an argument.
How do I pass a file into a Docker container to be used with the container?
My approach was not good, as there is a great tool in VS Code called "Remote Development". It's an extension that allows you to attach to a container directly in VS Code.

First, I had to change the way I start my node app to enable inspecting. As ts-node does not support the inspect option, you have to use this:

node --inspect=0.0.0.0:9229 -r ts-node/register src/

Then, use Remote Development to get into your container. Once you're inside, you can debug your app as you would normally do in a "classic" node environment. Personally, I used these settings in launch.json:

{
  "type": "node",
  "request": "attach",
  "name": "Attach",
  "port": 9229,
  "skipFiles": [
    "<node_internals>/**",
    "node_modules/**"
  ]
}

Everything works fine, my breakpoints are hit appropriately and I can debug my app efficiently :)
I'm running a Node application in Docker, withdocker-compose. I'm using Traefik as a proxy. I would like to be able to debug it in VS Code but I don't manage to connect to my app:connect ECONNREFUSED 127.0.0.1:9229Here are my files:docker-compose.yml:version: '3' services: traefik: image: traefik:1.7 command: --docker --docker.exposedbydefault=false ports: - '80:80' - 9229:9229 volumes: - /var/run/docker.sock:/var/run/docker.sock core: image: node:alpine labels: - traefik.enable=true - traefik.port=4001 - traefik.backend=core - traefik.frontend.rule=Host:core.localhost volumes: - ./leav_core:/app working_dir: /app command: [sh, -c, 'npm start'] expose: - '9229' volumes: arango_data: driver: localThe command actually executed bynpm startis:ts-node --inspect=0.0.0.0:9229 --type-check src/`The debug settings in VSCode:{ "version": "0.2.0", "configurations": [ { "name": "Docker: Attach to Node", "type": "node", "request": "attach", "remoteRoot": "/app" } ] }I access to my application with the URL defined on Traefikhttp://core.localhostbut I don't know how to attach the debugger to itThanks!
Debug in VS Code a Node Typescript app running in Docker
To run PHP 5.6 with NGINX you will need to do the following:

Directory layout. All web files go in your local src/ directory.

For nginx/default.conf use the following:

server {
    listen 80;
    index index.php index.html;
    server_name localhost;
    error_log  /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/html;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

For src/index.php (test to make sure PHP is working)

For your docker-compose.yml I have removed a lot of things that you will not need:

version: "3"
services:
  nginx:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./src/:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
  php:
    image: mikolatero/php5.6-fpm-alpine
    volumes:
      - ./src/:/var/www/html

Execute docker-compose up. Navigate to http://localhost:8080/index.php and you should be greeted with the PHP info page:

What Changed?

In this case, I opted for the latest NGINX and located a good image for PHP 5.6-FPM and used those for the stack.

For the mounted volumes, I moved the directories into the same context as the Docker Compose file. Not necessary, but maybe more portable when running from a laptop. Your mounted web source may/should be the location of your web repo. I also used the well-known location for the web files in the NGINX image: /var/www/html.

The PHP 5.6-FPM volume is mounted to the same directory as the web source, so PHP is available to the files in that directory.

Lastly, I got rid of the networks: unless you have a specific reason, it is not necessary, as these images will use the default Docker network.
Hello I need to setup php5.6 on my local machine. Following are the docker-compose.yml fileversion: '3' networks: laravel: services: nginx: image: nginx:stable-alpine container_name: nginx ports: - "8000:80" volumes: - ./src:/var/www - ./nginx/default.conf:/etc/nginx/conf.d/default.conf depends_on: - php networks: - laravel php: image: gotechnies/php-5.6-alpine container_name: php volumes: - ./src:/var/www ports: - "9000:9000" networks: - laravelngnix configuration fileserver { listen 80; index index.php index.html; server_name localhost; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; root /var/www/public; location / { try_files $uri $uri/ /index.php?$query_string; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass php:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; } }after running docker-compose up -d command following is the output.but when i am trying to accesshttp://localhost:8000i am unable to render page.
I am trying to use docker for php5.6 with nginx but there is an issue in the configuration
Unfortunately, it seems it's impossible to mount blobfuse or Azure Blob Storage into an Azure Container Instance. There are just four types of volume that can be mounted. You can take a look at the Azure Template for Azure Container Instance; it shows the whole set of properties of the ACI. And you can see all the volume objects here.

Maybe other volumes which we can mount into a Docker container will be supported in the future. Hope this will help you.
I'm deploying to azure container instances from the azure container registry (azure cli and/or portal). Azure blobfuse (on ubuntu 18) is giving me the following error:device not found, try 'modprobe fuse' first.The solution to this would be to use the--cap-add=SYS_ADMIN --device /dev/fuseflags when starting the container (docker run):can't open fuse device in a docker container when mounting a davfs2 volumeHowever, the--cap-addflag is not supported by ACI:https://social.msdn.microsoft.com/Forums/azure/en-US/20b5c3f8-f849-4da2-92d9-374a37e6f446/addremove-linux-capabilities-for-docker-container?forum=WAVirtualMachinesforWindowsAzureFiles are too expensive for our scenario.Any suggestion on how to use blobfuse or Azure Blob Storage (quasi-natively from nodejs) from a Docker Linux container in ACI?
Azure Container Instances with blobfuse or Azure Storage Blobs
I use a hacky solution to manage this problem for my development environments.

To use on development environments only!

The images I use for development environments contain a script that looks like this:

#!/bin/sh
# In usr/local/bin/change-dev-id
# Change the "dev" user UID and GID

# Retrieve new ids to apply
NEWUID=$1
NEWGID=$1
if [ $# -eq 2 ]
then
    NEWGID=$2
elif [ $# -ne 1 ]
then
    echo "Usage: change-dev-id NEWUID [NEWGID]"
    echo "If NEWGID is not provided, its value will be the same as NEWUID"
    exit 1
fi

# Retrieve old ids
OLDUID=`id -u dev`
OLDGID=`id -g dev`

# Change the user ids
usermod -u ${NEWUID} dev
groupmod -g ${NEWGID} dev

# Change the files ownership
find / -not \( -path /proc -prune \) -user ${OLDUID} -exec chown -h ${NEWUID} {} \;
find / -not \( -path /proc -prune \) -group ${OLDGID} -exec chgrp -h ${NEWGID} {} \;

echo "UID and GID changed from ${OLDUID}:${OLDGID} to ${NEWUID}:${NEWGID} for \"dev\""
exit 0

In the Dockerfile of my base image, I add it and make it executable:

# Add a script to modify the dev user UID / GID
COPY change-dev-id /usr/local/bin/change-dev-id
RUN chmod +x /usr/local/bin/change-dev-id

Then, instead of changing the owner of the mounted folder, I change the ID of the container's user to match the ID of my user on the host machine:

# In the Dockerfile of the project's development environment, change the ID of
# the user that must own the files in the volume so that it matches the ID of
# the user on the host
RUN change-dev-id 1234

This is very hacky but it can be very convenient. I can own the files of the project on my machine while the user in the container has the correct permissions too.

You can update the code of the script to use the username you want (mine is always "dev") or modify it to pass the username as an argument.
I want to start using Docker for my Rails development, so I'm trying to put together a skeleton I can use for all my apps.

However, I've run into an issue with Docker volumes and permissions.

I want to bind-mount the app's directory into the container, so that any changes get propagated to the container without the need to re-build it.

But if I define it as a volume in my docker-compose.yml, I can't chown the directory anymore. I need the directory and all its contents to be owned by the app user in order for Passenger to work correctly.

I read that it's not possible to chown volumes.

Do you know of any workarounds?
Permissions issue with Docker volumes
With some help from the AWS support service we were able to find the problem. The docker image I used to run my code on was, as I said, tensorflow/tensorflow:latest-gpu-py3 (available on https://github.com/aws/sagemaker-tensorflow-container). The "latest" tag refers to version 1.12.0 at this time. The problem was not in my own code, but in this version of the docker image.

If I base my docker image on tensorflow/tensorflow:1.10.1-gpu-py3, it runs as it should and uses the GPU fully.

Apparently the default runtime is set to "nvidia" in docker's daemon.json on all GPU instances of AWS SageMaker.
I have some python code that trains a Neural Network using tensorflow.I've created a docker image based on a tensorflow/tensorflow:latest-gpu-py3 image that runs my python script. When I start an EC2 p2.xlarge instance I can run my docker container using the commanddocker run --runtime=nvidia cnn-userpattern trainand the container with my code runs with no errors and uses the host GPU.The problem is, when I try to run the same container in an AWS Sagemaker training job with instance ml.p2.xlarge (I also tried with ml.p3.2xlarge), the algorithm fails with error code:ImportError: libcuda.so.1: cannot open shared object file: No such file or directoryNow I know what that error code means. It means that the runtime environment of the docker host is not set to "nvidia". The AWS documentation says that the command used to run the docker image is alwaysdocker run image trainwhich would work if the default runtime is set to "nvidia" in the docker/deamon.json. Is there any way to edit the host deamon.json or tell docker in the Dockerfile to use "--runtime=nvidia"?
How do I start an AWS Sagemaker training job with GPU access in my docker container?
To confirm setup:

apk update && apk add build-base unixodbc-dev freetds-dev
pip install pyodbc

Why install both unixodbc and freetds? Pyodbc's pip install requires the packages in unixodbc-dev and the gcc libraries in build-base, so no getting around that. The freetds driver tends to have fewer issues with pyodbc, and is leaned on heavily by pymssql, which I've been using in docker in lieu of pyodbc. That's a personal preference, though; you could just include the unixodbc driver. Now, to find the driver:

import pyodbc
pyodbc.drivers()
# []

Pyodbc can't locate them, but they are definitely installed, so we can find them with a shell script:

find / -name *odbc.so
# /usr/lib/libtdsodbc.so
# /usr/lib/libodbc.so

Now, we can automate this with the subprocess library to set the driver location manually:

import subprocess

s = subprocess.Popen('find / -name *odbc.so -type f',
                     stdout=subprocess.PIPE, shell=True).communicate()
f, _ = s

# You can change this particular loop to select whatever driver you prefer
driver = [driver for driver in f.decode().split() if 'tds' in driver][0]
driver
# '/usr/lib/libtdsodbc.so'

username = 'someuser'
server = 'someserver'
database = 'somedatabase'

conn = pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password)

Alternatively, you can add the configuration to /etc/odbcinst.ini as mentioned here:

[FreeTDS]
Description=FreeTDS Driver
Driver=/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so

Then:

import pyodbc
pyodbc.drivers()
# ['FreeTDS']
I have an alpine based docker image, with support for Python, through which I am trying to connect to Azure SQL service. Here is my simple connection code.import pyodbc server = 'blah1.database.windows.net' database = 'mydb1' username = 'myadmin' password = 'XXXXXX' driver= 'ODBC Driver 17 for SQL Server' conn = pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password) c = conn.cursor() c.execute("SELECT * FROM dbo.customers") print(c.fetchall()) print(type(c.fetchall())) conn.commit() conn.close()When connecting to SQL server in Azure, the code generates the following error:pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 17 for SQL Server' : file not found (0) (SQLDriverConnect)")nect)")Here is my Dockerfile:FROM tiangolo/uwsgi-nginx:python3.7-alpine3.8 RUN apk update RUN apk add gcc libc-dev g++ libffi-dev libxml2 unixodbc-dev LABEL Name=code9 Version=0.0.1 EXPOSE 8000 ENV LISTEN_PORT=8000 ENV UWSGI_INI uwsgi.ini WORKDIR /app ADD . /app RUN chmod g+w /app RUN chmod g+w /app/db.sqlite3 RUN python3 -m pip install -r requirements.txtI am assuming that I am unixODBC will take care of the connection with SQL server in Azure or do I need to install MS SQL driver for alpine? Is there one available? I couldn't find one. Please help.
Cannot connect to Azure SQL using an alpine docker image with Python
To pass AWS credentials locally you either need to mount ~/.aws/credentials in your container:

docker run -v ~/.aws/credentials:/root/.aws/credentials:ro ...

Or pass your credentials as env vars:

docker run \
  -e AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id --profile profilename) \
  -e AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key --profile profilename) \
  ...

For the Fargate side you need to create an IAM Role and add a Policy that has access to that bucket.

This needs to be assigned to the Task Role. This is different from the Task Execution Role, which is used to pull the docker image. The Task Role is used at run time, and that's where you need to add the policy for S3 access.
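A quick local sanity check of the mounted-credentials approach (a sketch; it uses the public amazon/aws-cli image, whose entrypoint is the aws command):

docker run --rm -v ~/.aws:/root/.aws:ro amazon/aws-cli sts get-caller-identity
# should print the account/ARN the credentials resolve to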
I have a Python script to upload a file to S3; the code is the same as in this question.

I have a bash script that passes the AWS credentials. The file I want to upload is generated by a model running on Fargate (in a container), so I tried to run this Python script within the container to upload to S3. I've built the image, but when I run docker run containername it gives me this error:

INFO:root:Uploading to S3 from test.csv to bucket_name test.csv
  File "/usr/local/lib/python3.6/dist-packages/botocore/auth.py", line 357, in add_auth
    raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials

Can someone give me some hints? How can I fix it? Thanks in advance.
File can be uploaded to S3 locally but can't within a container (Unable to locate credential)
No, there is not.

Between versions 2.x and 3.x... several options have been removed... mem_limit, memswap_limit: These have been replaced by the resources key under deploy. deploy configuration only takes effect when using docker stack deploy, and is ignored by docker-compose.

See Compose: Upgrading from 2 to 3.

And also, you don't have to upgrade; you don't even have any reason to upgrade if you don't use swarm.

Sadly, the official docker docs state "Version 3 (most current, and recommended)", which isn't actually really true: if you use docker-compose without swarm, there is hardly any reason to switch, or to use v3 on a new project. In the official repository you can see comments like this [2][3]. Also, in the compatibility-matrix you can see that v2 is still upgraded even though v3 has been out for quite some time, and only v1 is marked as deprecated.
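One partial workaround worth knowing about (assuming docker-compose 1.20 or newer): the --compatibility flag makes docker-compose translate deploy.resources limits back into their v2-style equivalents on a best-effort basis.

docker-compose --compatibility up -d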
Recently, I tried upgrading a version 2 docker-compose yaml file to version 3. Specifically, I was going from 2.1 to 3.4, using docker-compose version 1.18.0 and docker version 18.06.01.

The first attempt caused docker-compose to abort because of the presence of the version 2 option mem_limit. Reading these Version 3 docs, it clearly states mem_limit was removed and to see "upgrading" to guide usage away from this option. Those instructions tell you to use the deploy section with resources. Making these changes to the docker-compose.yml file, the system started normally.

Unfortunately, I missed the disclaimer in there where it states that deploy is ignored by docker-compose! My question: is there a way to use Compose file reference 3 and docker-compose while still enforcing a container memory limit?
Docker-compose: how to do version 2 "mem_limit" in version 3?
Take a look at the ENTRYPOINT command. This specifies a command to run when the container starts, regardless of what someone provides as a command on the docker run command line. In fact, it is the job of the ENTRYPOINT script to interpret any command passed to docker run.
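A minimal sketch of such an entrypoint wrapper (the paths and the seeding step are hypothetical; wire it up with ENTRYPOINT ["/entrypoint.sh"] and keep your normal CMD):

#!/bin/sh
# /entrypoint.sh - runs when the container starts, not at build time

# hypothetical first-run initialization of a mounted volume
if [ ! -f /data/.initialized ]; then
    cp -a /app/seed/. /data/
    touch /data/.initialized
fi

# hand control to whatever CMD (or docker run arguments) asked for
exec "$@"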
Is it possible to add instructions like RUN in the Dockerfile that, instead of running at docker build time, execute when a new container is created with docker run? I think this could be useful to initialize a volume attached to the host file system.
Run commands on create a new Docker container
With a lot of fragmented documentation it was difficult to solve this.

My first attempt was to create the daemon.json with:

{
  "hosts": [
    "unix:///var/run/docker.sock",
    "tcp://127.0.0.1:2376"
  ]
}

This did not work; I got the error "docker[5586]: unable to configure the Docker daemon with file /etc/docker/daemon.json" after trying to restart the daemon with service docker restart. Note: there was more to the error that I failed to copy.

What this error meant is that at daemon start there was a conflict between a command-line flag and the configuration in daemon.json.

When I looked into it with service docker status, this was the parent process: ExecStart=/usr/bin/docker daemon -H fd://.

This was strange because it differs from the configuration in /etc/init.d/docker, which I thought held the service configuration. The strange part was that the file in init.d does not contain any reference to the daemon argument nor to -H fd://.

After some research and a lot of searching through the system directories, I found this directory (with help from the discussion in docker github issue #22339).

Solution

Edited the ExecStart in /lib/systemd/system/docker.service to this new value:

/usr/bin/docker daemon

And created /etc/docker/daemon.json with:

{
  "hosts": [
    "fd://",
    "tcp://127.0.0.1:2376"
  ]
}

Finally restarted the service with service docker start and now I get the "green light" on service docker status.

Tested the new configuration with:

$ docker run hello-world
Hello from Docker!
(...)

And,

$ curl http://127.0.0.1:2376/v1.23/info
[JSON]

I hope that this will help someone with a similar problem as mine! :)
Disclaimer:On a old machine with Ubuntu 14.04 with Upstart as init system I have enabled the HTTP API by definingDOCKER_OPTSon/etc/default/docker. It works.$ docker version Client: Version: 1.11.2 (...) Server: Version: 1.11.2 (...)Problem:This does solution does not work on a recent machine with Ubuntu 16.04 with SystemD.As stated on the top of the recent file installed/etc/default/docker:# Docker Upstart and SysVinit configuration file # # THIS FILE DOES NOT APPLY TO SYSTEMD # # Please see the documentation for "systemd drop-ins": # https://docs.docker.com/engine/articles/systemd/ # (...)As I checked this information on theDocker documentation pagefor SystemD I need to fill adaemon.jsonfile but as stated on thereferencethere are some properties self-explanatory but others could be under-explained.That being said, I'm looking for help to convert this:DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -G myuser --debug"to thedaemon.jsonobject?NotesPS1:I'm aware that thedaemon.jsonhave adebug: trueas default.PS2:Probably thegroup: "myuser"it will work like this or with an array of strings.PS3:My main concern is to use SOCK and HTTP simultaneous.EDIT (8/08/2017)After reading the accepted answer, check the @white_gecko answer for more input on the matter.
Docker - Enable Remote HTTP API with SystemD and "daemon.json"
So after some tweaking and better understanding, I came to the conclusion, after testing, that docker-compose is the way to go. The first folder contains a my.cnf file that does the configuration, and the other folder, which @Farhad identified, is used to initialize the database from the .sql file.

version: "2"
services:
  mariadb_a:
    image: mariadb:latest
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=111111
    volumes:
      - c:/some_folder:/etc/mysql/conf.d
      - c:/some_other_folder:/docker-entrypoint-initdb.d
I am just learning the basics of docker, but have come stuck on importing an SQl file from the local system. I am on windows 10 and have allowed my docker containers to access my shared drives. I have an SQL file located on D i would like to import to the base image of Maria DB i got from docker hub.I have found a command to install that sql file on my image and tried to directly import the image from inside the container sql command prompt, but get a failed to open file error.Below are the two methods i have tried, but where do i store my sql dump and how do i import it?Method 1tried via mysql command linewinpty docker run -it --link *some-mariadb*:mysql --rm mariadb sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'Thenuse *database-created* // previously createdThensource d:/sql_dump_file.sqlMethod 2docker exec -i my-new-database mysql -uroot -pnew-db-password --force < d:/sql_dump_file.sqlUpdate 5/12/2016So after a disappointing weekend of playing around. I currently change the drive to C: drive as there seemed to be some unknown issue i can't debug with getting D: drive to work.Then i found the inspect command to see what volumes are mounted to a container. I have the following, but i can't import to SQL as it says file does not exist, but it clearly says in docker that it is there and mounted, the SQL file is inside the map_me folder. I created a folder called map_me in the main directory of C:docker inspect dbcontainer "Mounts": [ { "Source": "/map_me", "Destination": "/docker-entrypoint-initdb.d/sl_dump_fil‌​e.sql", "Mode": "", "RW": true, "Propagation": "rprivate" } ]
Install an sql dump file to a docker container with mariaDB
I finally solved it. Thank you @yamenk for your answer, it gave me the idea, upvoting.Finally what I did:I created a simple script on host which is getting host ip and writting it into another file.I set that script to be launched on every host boot before docker start.I mapped the file with the ip into the container using-von docker run command.I set my entrypoint to get the ip from that file containing the ip and modifying with sed the needed container config filesEverything working! if I boot the vm (the host machine) on another different network, it gets the ip and the container is able to reconfigure itself before starting with the new ip.
I have a very special scenario. A virtual machine containing some docker containers. One of this containers needs to know the host ip. The problem is if I pass the host ip on container build or using-eon docker run command, it remains "static" (always the same, the one of that moment) on the container.That vm can be on a laptop and the laptop is moving from different networks and the vm host ip can be different each reboot.This special container has the--restart=alwaysand is not built or "docker run" again... only once. And as I said, I need the host's ip on each reboot to configure the service inside the container on it's entrypoint because the container has a bind dns server which must load a zone with some dns entries that must be pointing to itself (the host's ip). An environment var would be great if possible. These are my data:The "normal" lauchThe end of my Dockerfile:.... .... ENTRYPOINT ["/etc/bind/entrypoint.sh"] CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]Entrypoint file (the regex works fine if the var could have the right value):#!/bin/bash sed -ri "s/IN A.*/IN A $HOSTIP/" /etc/bind/db.my.zone exec "$@"Docker run cmd:docker run --name myContainer -d --restart=always -p 53:53 -p 53:53/udp myImageWhat I tried:I guess the entrypoint is ok and shouldn't be modified if I can provide to it a var with the right value.If I put a-eon docker run command, it is "hardcoded" forever with the same ip always even if the host is on different networks:docker run -e HOSTIP=$(ifconfig eth0 | grep netmask | awk '{print $2}') --name myContainer \ -d --restart=always -p 53:53 -p 53:53/udp myImageI tried unsuccessfully also modifying the CMD on Dockerfile:CMD (export HOSTIP=$(ifconfig eth0 | grep netmask | awk '{print $2}'));/usr/sbin/named -g -c /etc/bind/named.conf -u bindIs possible to achieve something like this? Thanks.
Docker. Add dynamic host ip to env var on container
You need to run the data container for once to make it persistent:sudo docker run -v /var/lib/mysql --name bbdd ubuntu:trusty /bin/true sudo docker run -v /var/www/html --name wordpress ubuntu:trusty /bin/trueThis is an old bug of Docker describedhere. You may be affected if your Docker version is old.
Trying to dockerise wordpress I figure out this scenenario:2 data volume containers, one for the database (bbdd) and another for wordpress files (wordpress):sudo docker create -v /var/lib/mysql --name bbdd ubuntu:trusty /bin/true sudo docker create -v /var/www/html --name wordpress ubuntu:trusty /bin/trueThen I need a container for mysql so I use theofficial mysql imagefrom docker hub and also the volume /var/lib/mysql from the first data container:docker run --volumes-from bbdd --name mysql -e MYSQL_ROOT_PASSWORD="xxxx" -d mysql:5.6Then I need a container for apache/php so I useofficial wordpress imagefrom docker hub and also the volume /var/lib/mysql from the first data container:docker run --volumes-from wordpress --name apache --link mysql:mysql -d -p 8080:80 wordpress:4.1.2-apacheWhat I understand from docker docs is that if I don't remove the data containers, I'll have persistance.Howeverif I stop and delete running containers (apache and mysql) and recreate them again with last commands, data get lost:docker run --volumes-from bbdd --name mysql -e MYSQL_ROOT_PASSWORD="xxxx" -d mysql:5.6 docker run --volumes-from wordpress --name apache --link mysql:mysql -d -p 8080:80 wordpress:4.1.2-apacheHowever if I create the containers without data containers, it seems to work as I expected:docker run -v /home/juanda/project/mysql:/var/lib/mysql --name mysql -e MYSQL_ROOT_PASSWORD="juanda" -d mysql:5.6 docker run -v /home/juanda/project/wordpress:/var/www/html --name apache --link mysql:mysql -d -p 8080:80 wordpress:4.1.2-apache
Dockerize wordpress
You passed --dns 172.17.42.1 to docker_opts, so since then you should be able to resolve the container hostnames from inside other containers.

But obviously you're doing docker pull from the host, not from a container, aren't you? Therefore it's not surprising that you cannot resolve the container's hostname from your host, because the host is not configured to use 172.17.42.1 for resolving.

I see two possible solutions here:

Force your host to use 172.17.42.1 as DNS (/etc/resolv.conf etc).

Create a special container with the Docker client inside and mount docker.sock into it. This will let you use all client commands, including pull:

docker run -d -v /var/run/docker.sock:/var/run/docker.sock:rw --name=client ...
docker exec -it client docker pull registry.service.consul:5000/test
I am trying to force the docker daemon to use my DNS server which is binded to bridge0 interface. I have added --dns 172.17.42.1 in my docker_opts but no successDNS server reply ok with dig command:dig @172.17.42.1 registry.service.consul SRV +short 1 1 5000 registry2.node.staging.consul.But pull with this domain fails:docker pull registry.service.consul:5000/test FATA[0000] Error: issecure: could not resolve "registry.service.consul": lookup registry.service.consul: no such hostPS: By adding nameserver 172.17.42.1 in my /etc/resolv.conf solve the issue but the DNS has to be exclusively for docker commands.Any idea ?
Docker daemon and DNS
I'm not sure that linked answer is entirely appropriate.The simple fact is that containers are just processes: you can't do anything inside a container that you can't do in a normal subprocess. You can muck about with timezones and such, but they are still referencing the same kernel clock as anything else.If you really want to play with time skew, you will probably need to investigate some sort of virtualization solution.
I want to verify the effects of clock skew on a distributed system and the simplest way for me to do that is using multiple docker containers linked together.Can I modify the clocks from individual docker containers so that they are decoupled from the host machine?
Creating clock skew with docker
Yep - that step is no longer required. The docs should be fixed shortly. You might wish to look at "My java App Engine Managed VMs build doesn't deploy after 4/14/2015 update" for additional info.

Our images are now available on the public Google Container Registry. For python, you can grab the image at gcr.io/google_appengine/python-compat.

It is important to note that you must do the following to use that image:

docker pull gcr.io/google_appengine/python-compat

You can change the FROM line in your Dockerfile. Also, it's important to note that this image does not have the GAE SDK included, but you can add most python libraries yourself.
Following Googleinstructions to install managed VMs, everything seems to work smoothly until I get to this step:gcloud preview app setup-managed-vmsThe result is the following error:ERROR: (gcloud.preview.app) Invalid choice: 'setup-managed-vms'.I've made sure all the other dependent components are up to date. The environment is:Windows 7 x64Google Cloud SDK 0.9.56boot2docker 1.4.1/1.5 (tried both)Is there anything obvious I'm missing trying to get these managed VMs working?
Cannot seem to install Google Cloud Managed VMs
"an app running in a container has access to the external network by default"

It could have access only if a valid IP address is assigned to the container. Sometimes the IP which Docker chooses for the container can conflict with external networks. By default, containers run in the bridge network, so look at it:

docker network inspect bridge

Find the container and check its IP. To resolve conflicts, it is possible to customize the bridge network and set the bip parameter to change the network's IP range (the config file location depends on the host's OS):

"bip": "192.168.1.5/24"

Or create a new docker network. Or experiment with the net=host option (see "docker run network settings").
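As an illustration of the "create a new docker network" option, a sketch could be (the subnet and image name are only examples — pick a range that does not overlap your real networks):

# user-defined bridge with an explicit, non-conflicting subnet
docker network create --driver bridge --subnet 192.168.100.0/24 mynet

# run the app container attached to that network
docker run --network mynet my-dotnet-app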
I have a .NET Core 1.1 app running in a Docker container on Ubuntu 14.04, and it fails to connect to the SQL Server database running on a separate server.The error is:Unhandled Exception: System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 25 - Connection string is not valid)I have deployed the same image with the same command line on another Ubuntu 14.04 server, and it connects fine.A console app running on the problem server (outside of Docker) can connect with the same connection string.As far as I can see from the documentation, an app running in a container has access to the external network by default, so what could be blocking this connection?
What would prevent code running in a Docker container from connecting to a database on a separate server?
Try:

docker run -d -it -e "myvar=blah" myimage123

The problem here is that -e is a flag and myimage123 is an argument: the flags must come before the image name, because anything after the image name is passed to the container as its command.
according to thedocs:Additionally, the operator can set any environment variable in the container by using one or more -e flags, even overriding those mentioned above, or already defined by the developer with a Dockerfile ENV. If the operator names an environment variable without specifying a value, then the current value of the named variable is propagated into the container’s environment:$ export today=Wednesday $ docker run -e "deep=purple" -e today --rm alpine env PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=d2219b854598 deep=purple today=Wednesday HOME=/rootI tried to run docker run -e with my container:docker run -d -it myimage123 -e "myvar=blah"I get this error:[FATAL tini (7)] exec -e failed: No such file or directory
docker run -e not working, bug?
Had the same issue, I filed a radar, and Apple answered:When piped to another process print is buffered, so no characters appear until the buffer is filled up. (When piped to the terminal we only buffer until we hit a newline.)You can get the behavior you want by callingsetbuf(stdout, nil)once at startup:import Darwin setbuf(stdout, nil)
So locally, when run in dev through Xcode or compiled with SPM, the console logs appear as expected. I.e. with SPM locally everything is fine:

swift build --configuration release
.build/release/Myapp   # prints to console

But when I run the executable through a docker container on ECS (Linux, I suppose), I don't see the logs generated by my Swift code, but I do see stderr being printed by 3rd party libraries (i.e. libssl is printing errors) as well as shell logs when starting the application. For example:

Dockerfile

FROM swift
WORKDIR /app
COPY Package.swift ./
COPY Sources ./Sources
COPY Tests ./Tests
RUN swift package clean
RUN swift build --configuration release
RUN chmod +x start.sh
CMD ["start.sh"]   # just a wrapper to see if "echo" works

in start.sh

# prints as expected
echo "hi this will print"
# nothing in the executable will print though
.build/release/MyApp
swift "print" doesn't appear in STDOut but 3rd party c library logs do when running in docker on ECS
Run:$ docker-machine start default $ eval $(docker-machine env default)And try again.Those environment variables point your local Docker client to the Docker engine running in the VM. The above commands will set them appropriately.
Mac here. I installed Docker viathe Toolboxand all Docker commands yield the same error:myuser@mymachine:~/tmp$docker info Get http:///var/run/docker.sock/v1.20/info: dial unix /var/run/docker.sock: no such file or directory. * Are you trying to connect to a TLS-enabled daemon without TLS? myuser@mymachine:~/tmp$sudo docker info Password: Get http:///var/run/docker.sock/v1.20/info: dial unix /var/run/docker.sock: no such file or directory. * Are you trying to connect to a TLS-enabled daemon without TLS? * Is your docker daemon up and running?Interestingly enough, however:myuser@mymachine:~/tmp$docker -v Docker version 1.8.1, build d12ea79Google results|for this errorindicate that the Toolbox did not install correctly, and that one of (or all) of the following env vars need to be set:DOCKER_HOST; and/orDOCKER_CERT_PATH; and/orDOCKER_TLS_VERIFYI have verifiedDOCKER_HOSTis not set on my machine (neitherenvnorecho $DOCKER_HOSTshow it). So my concerns:What are these env vars and what do they do? What are their proper values?How do I permanently set them so that they persist machine restarts?UpdateRunning the commands suggested by the one answer so far:myuser@mymachine:~/tmp$docker-machine start default Error: Host does not exist: default myuser@mymachine:~/tmp$eval $(docker-machine env default) Error: Host does not exist: defaultIdeas?
Setting DOCKER_HOST after Docker Toolbox/Mac install
Toolbox works via docker-machine. The way the docker client is directed to the virtual machine is via a number of environment variables, which you can see by running docker-machine env default:

SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.100:2376
SET DOCKER_CERT_PATH=/user/.docker/machine/machines/default
SET DOCKER_MACHINE_NAME=default
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('docker-machine env --shell cmd default') DO @%i

Docker for Mac connects directly to the /var/run/docker.sock socket, which is mapped into the Docker VM, so this is easy to detect by the lack of environment variables. I believe Docker for Windows uses a named pipe in the same way (//./pipe/docker_engine), so you should also be able to tell by the lack of DOCKER_HOST in the environment. If Docker for Windows does still use the environment, there will be differences between the Toolbox and Docker for Windows variables: DOCKER_HOST would be on a different range, DOCKER_CERT_PATH won't include machine, etc.
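Based on that, a hedged detection sketch for a bash-style script could be (the function name is made up, and it only covers the env-var cases described above):

# returns "toolbox" when docker-machine style env vars are present, otherwise "desktop"
detect_docker_flavor() {
  if [ -n "$DOCKER_MACHINE_NAME" ] || { [ -n "$DOCKER_HOST" ] && [ -n "$DOCKER_CERT_PATH" ]; }; then
    echo "toolbox"
  else
    echo "desktop"
  fi
}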
On my current team, we're still transitioning fromDocker ToolboxtoDocker Desktop for Windows. A lot of our scripts still assume that you're running Docker Toolbox on VirtualBox (like how to mount drives, how slashes or drive names work for those mounts).Is there a reliable way to tell, from inside a script, whetherdockeris coming from Docker Toolbox or Docker Desktop for Windows?
How can a script distinguish Docker Toolbox and Docker for Windows?
There are a few ways to handle this:

1. The most work, but the better design: move the code into each image, possibly changing your architecture to have specific pieces of the code in only one image, rather than having all the pieces in every image. Having the code shared creates a tight dependency that is very much against the micro-services design.
2. Continue to use the named volume, but initialize it on startup of one key container. Less ideal (see above) but would work with the least change. To initialize it, you'd add the code to your image in one directory, e.g. /var/www/html-cache, mount your volume in /var/www/html, and make the first step of the entrypoint cp -a /var/www/html-cache/. /var/www/html/. (a sketch of this is shown below).
3. Create a code sync image that updates the volume on demand from your version control. This would just be a git pull on the volume location.
4. Use a volume that points to the code outside of Docker, e.g. a host directory or even an nfs mount, that manages the code synchronization outside of Docker. This is commonly done for development, but I wouldn't recommend it for production.

Version 3 of the docker-compose.yml to me is synonymous with swarm mode right now. If you try to do this in swarm mode, then you either need to run the volume synchronization on every host where a container may run, or point to an external volume in a shared directory (e.g. nfs). Without upgrading to swarm mode, there's no immediate need to switch to version 3.
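A rough sketch of option 2 (the script name is an assumption; the paths mirror the ones in the question):

#!/bin/sh
# entrypoint.sh baked into the code image:
# refresh the shared named volume from the image's copy, then hand off to the real command
cp -a /var/www/html-cache/. /var/www/html/
exec "$@"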
I have a PHP application that I need to containerize. I am setting up the following:container for varnishcontainer for nginxcontainer for php-fpmcontainer for croncontainer for toolingcontainer with PHP code baked intoContainer 2,3,4,5 all need to have access to the same PHP application codebase that is baked into container 6.I would like to set this up to be able to revert to previous releases for the application by just changing the version tag of the codebase container.My current composer file is something like ->version "3" services: web: image:nginx links: - phpfpm volumes: - code:/var/www/html phpfpm: image:php-fpm links: - db volumes: - code:/var/www/html code: build: context: ./ dockerfile: Dockerfile volumes: - code:/var/www/html volumes: code: driver: localAt this point thecode volumewas created. Copy of the php code from container code was copied to the volume.This is good as all new changes will be persisted to the volume although when I pull a new version of the codebase my volume will not get updated.What I would like to achieve is that my nginx and cron and tooling continer all see the latest version of the codebase container's content and as well I want to be able to run several one of containers calls using that php code that is in container 6.How do I need to do to go about that using v3 syntax?Thanks
I want to share code content across several containers using docker-compose volume directive
There are several issues in your question:

1. Do not run docker with sudo. If your own user is not allowed to run docker, you should add yourself to the docker group: sudo usermod -aG docker $(whoami)
2. Some of your RUN commands have no meaning, or at least not the meaning you intend. For example, RUN cd anything will just change to the directory inside that specific RUN step; it does not propagate to the next step. Use && to chain several commands in one RUN, or use WORKDIR to set the working directory for the next steps.
3. In addition, you were missing the wget package.

Here is a working version of your Dockerfile:

FROM ubuntu:18.04
RUN apt-get update && apt-get -y install \
    build-essential libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev wget
RUN wget http://nginx.org/download/nginx-1.15.12.tar.gz
RUN tar -xzvf nginx-1.15.12.tar.gz
WORKDIR nginx-1.15.12
RUN ./configure \
    --sbin-path=/usr/bin/nginx \
    --conf-path=/etc/nginx/nginx.conf \
    --error-log-path=/var/log/nginx/error.log \
    --http-log-path=/var/log/nginx/access.log \
    --with-pcre \
    --pid-path=/var/run/nginx.pid \
    --with-http_ssl_module
RUN make && make install
My docker host is Ubuntu 19.04. I installed docker using snap. I created a Dockerfile as follows:FROM ubuntu:18.04 USER root RUN apt-get update RUN apt-get -y install build-essential libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev RUN wget http://nginx.org/download/nginx-1.15.12.tar.gz RUN tar -xzvf nginx-1.15.12.tar.gz RUN cd nginx-1.15.12 RUN ./configure --sbin-path=/usr/bin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --with-pcre --pid-path=/var/run/nginx.pid --with-http_ssl_module RUN make RUN make installI run it with this command:sudo docker build .I get this output:Sending build context to Docker daemon 3.584kB Step 1/10 : FROM ubuntu:18.04 ---> d131e0fa2585 Step 2/10 : USER root ---> Running in 7078180cc950 Removing intermediate container 7078180cc950 ---> 2dcf8746bcf1 Step 3/10 : RUN apt-get update ---> Running in 5a691e679831 OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:109: jailing process inside rootfs caused \\\"permission denied\\\"\"": unknownAny help would be greatly appreciated!
Building Dockerfile that has "RUN apt-get update" gives me "jailing process inside rootfs caused 'permission denied'"
I had the same issue, seems like it related to certificates on your registry host. Check here on how to fix that:https://github.com/docker/docker/issues/23620
When I try to pull an image from a private Docker Registry I get the errorError response from daemon: Get https://XX.XX.XX.XXX:5000/v1/_ping: dial tcp XX.XX.XX.XXX:5000: getsockopt: connection refusedThe docker registry is definitely listening on the correct port. Runningss --listen --tcp -n -pGives the resultState Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 *:22 *:* LISTEN 0 128 :::22 :::* LISTEN 0 128 :::5000 :::*Does anyone have any suggestions for how to solve this problem? I'm really stumped by it!Thanks in advance!
Error response from daemon: getsockopt: connection refused
There is an issue with native swarm mode when it comes to binding to a non-system IP address, as of docker 1.12.5. There have been multiple GitHub issues about it, but the problem still persists. To define "non-system IP address": an IP address used with technologies like DNAT. Such addresses are not set on a local interface and are not visible to the underlying operating system. Sources: link1, link2, link3.
Some providers, such as ScaleWay will give your server an IP that is not attached to a local interface on the box.# docker swarm init --advertise-addr :2377 --listen-addr 0.0.0.0:2377 Error response from daemon: must specify a listening address because the address to advertise is not recognized as a system addressWhile# docker swarm init --advertise-addr eth0:2377will advertise a private IP address.How is docker swarm supposed to be setup in such an environment?
How can I use a docker swarm mode manager behind a floating IP
It was anOut Of Memory error(OOM). The leak was caused byelastic apmmiddleware. I removed it, and leak disappeared.
I'm currently usingFastApiwithGunicorn/Uvicornas my server engine.I'm using the following config forGunicorn:TIMEOUT 0 GRACEFUL_TIMEOUT 120 KEEP_ALIVE 5 WORKERS 10Uvicornhas all default settings, and is started in docker container casually:CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]Everything is packed in docker container.The problem is the following:After some time (somewhere between 1 day and 1 week, depending on load) my app stops responding (even simplecurl http://0.0.0.0:8000command hangs forever). Docker container keeps working, there are no application errors in logs, and there are no connection issues, but none of my workers are getting the request (and so I'm never getting my response). It seems like my request is lost somewhere between server engine and my application. Any ideas how to fix it?UPDATE: I've managed to reproduce this behaviour with customlocustload profile:The scenario was the following:In first 15 minutes ramp up to 50 users (30 of them will send requests requiring GPU at 1 rps, and 20 will send requests that do not require GPU at 10 rps)Work for another 4 hours As the plot shows, in about 30 minutes API stops responding. (And still, there are no error messages/warnings in output)UPDATE 2: Can there be any hidden memory leak or deadlock due to incorrectGunicornsetup or bug (such ashttps://github.com/tiangolo/fastapi/issues/596)?UPDATE 4: I've got inside my container and executedpscommand. It shows:PID TTY TIME CMD 120 pts/0 00:00:00 bash 134 pts/0 00:00:00 psWhich means myGunicornserver app just silently turned off. And also there is binary file namedcorein the app directory, which obviously mens that something has crashed
FastApi with gunicorn/uvicorn stops responding
I found a solution that works for both Windows (in PowerShell) and bash. The secret is to use the"provide a password using stdin".cat gcr_registry-ro.json | docker login -u _json_key --password-stdin https://gcr.io Login SucceededHelp text and versions:PS C:\Users\andy> docker login --help Usage: docker login [OPTIONS] [SERVER] Log in to a Docker registry Options: -p, --password string Password --password-stdin Take the password from stdin -u, --username string Username PS C:\Users\andy> docker -v Docker version 18.06.1-ce, build e68fc7a PS C:\Users\andy>
I'm trying to log in to Google's Container Registry on Windows 10 by using aJSON Key file. I have this working without issues on my Mac so the keyfile is definitely working.First off I had issues getting the docker login function to accept the contents of the JSON key file. I've tried running the "/set /p PASS..." command in CMD and I've tried something along the lines of this in Powershell:docker login -u _json_key -p "$(Get-Content keyfile.json)"https://gcr.ioThese all result in either an error or this:"docker login" requires at most 1 argument.Since I couldn't get this to work, I ran:docker login -u _json_keyhttps://gcr.ioAnd then just removed all breaks from the JSON file manually, copied it to clipboard and pasted it when prompted for my password.That results in:Login SucceededProblem solved! Right?Well apparently not. I was still unable to pull my images and when I randocker infothe only registry listed was "https://index.docker.io/v1/".I've tried starting Powershell as admin, and restarted and reset docker but nothing seems to help.Anyone got any clue what is going on? How do I debug this?I'm running Docker version 17.12.0 (stable)
Docker Login to gcr.io in Powershell
So I searched for a few things online and found one solution: install Chocolatey via PowerShell inside the Dockerfile. I got this approach from this post by Anthony Chu. So I used:

# Install Chocolatey
RUN @powershell -NoProfile -ExecutionPolicy Bypass -Command "$env:ChocolateyUseWindowsCompression='false'; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
RUN powershell add-windowsfeature web-asp-net45 \
    && choco install dotnet4.7 --allow-empty-checksums -y \

in my Dockerfile, and now everything works fine.
I'm new to .Net Environment, I'm trying to implement docker here for my firm. They were using 4.5 earlier so I used the following statement in my dockerfile:RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \ Install-WindowsFeature Web-Asp-Net45Now, I want to do the same for framework 4.7.2 - I thought it will work if I run the statements like :RUN Install-WindowsFeature NET-Framework-472-ASPNET ; \ Install-WindowsFeature Web-Asp-Net472But it's not working this way instead shows the following error :Install-WindowsFeature : ArgumentNotValid: The role, role service, or feature name is not valid: 'NET-Framework-472-ASPNET'. The name was not found. At line:1 char:1 + Install-WindowsFeature NET-Framework-472-ASPNET ; Install-WindowsFeat ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (NET-Framework-472-ASPNET:Strin g) [Install-WindowsFeature], Exception + FullyQualifiedErrorId : NameDoesNotExist,Microsoft.Windows.ServerManager .Commands.AddWindowsFeatureCommandPlease help me with the same. I am really stuck and can't find anything on the internet.
install .net framework 4.7.2 in docker
You can do that through a combination of ENTRYPOINT and CMD. The ENTRYPOINT specifies a command that will always be executed when the container starts. The CMD specifies arguments that will be fed to the ENTRYPOINT. So, with the Dockerfile:

FROM node:boron
...
ENTRYPOINT ["node", "src/akamai-client.js"]
CMD ["purge", "https://www.example.com/main.css"]

The default behavior of a running container:

docker run -it akamaiapi

would be like the command:

node src/akamai-client.js purge "https://www.example.com/main.css"

And if you do:

docker run -it akamaiapi queue

the underlying execution in the container would be like:

node src/akamai-client.js queue
I have got below Dockerfile.FROM node:boron # Create app directory RUN mkdir -p /usr/src/akamai WORKDIR /usr/src/akamai # Install app dependencies COPY package.json /usr/src/akamai/ RUN npm install # Bundle app source COPY . /usr/src/akamai #EXPOSE 8080 CMD ["node", "src/akamai-client.js", "purge", "https://www.example.com/main.css"]Below is the command which I run from CMD after the docker image builddocker run -it "akamaiapi" //It executes the CMD command as given in above Dockerfile.CMD ["node", "src/akamai-client.js","purge","https://www.example.com/main.css"] //I want these two arguments directly passed from docker command instead hard-coded in Dockerfile, so my Docker run commands could be like these:docker run -it "akamaiapi" queue docker run -it "akamaiapi" purge "https://www.example.com/main.css" docker run -it "akamaiapi" purge-status "b9f80d960602b9f80d960602b9f80d960602"
Passing arguments from CMD in docker
It looks like you are trying to unzip the archive in the / folder. In fact, the unzip command unzips the archive into the current directory by default. Plus, the zip file is downloaded as root and might not be readable by the kong user. Try changing your Dockerfile as follows:

RUN useradd -ms /bin/bash kong
RUN echo "kong:password" | chpasswd
RUN adduser kong sudo
USER kong
WORKDIR /home/kong
RUN wget "${url}/my-archive.zip" -P /home/kong
RUN unzip /home/kong/my-archive.zip

This way, the .zip file will be owned by the kong user and you will unzip it in his home directory.
I keep getting the below error when I try to unzip a zip file in my DockerFilecheckdir error: cannot create my-archive Permission denied unable to process my-archive/data/sample.jar The command '/bin/sh -c unzip /home/kong/my-archive.zip' returned a non-zero code: 2In my DockerFile I have:RUN useradd -ms /bin/bash kong RUN echo "kong:password" | chpasswd RUN adduser kong sudo USER kong RUN wget "${url}/my-archive.zip" -P /home/kong RUN unzip /home/kong/my-archive.zipIt works if I do:USER root RUN unzip /home/kong/my-archive.zipbut I would like to be able to do this as a non root user.Why does it fail as non root userkong?
Cannot unzip zip file in DockerFile as non root user
The problem can be solved by applying the solution from the answer to another question:"answer for .bashrc at ssh login"I followed the instructions and added the following to~/.bash_profile:if [ -f ~/.bashrc ]; then . ~/.bashrc fiBe sure that theexportforDOCKER_HOSTbeing added to the~/.bashrcfile does not contain$UID, but$(id -u $USER)instead, because$UIDwill not work insh.# inside the ~/.bashrc export DOCKER_HOST=unix:///run/user/$(id -u $USER)/docker.sockNowAttach Visual Studio Codeworks within the VSCode window being connected via SSH to the remote system as expected.Connect to remote using VSCode remote explorerStart containerAttach to running container usingVisual Studio Code Attacheither from the context menu of the Docker extension or usingCMD/CTRL+SHIFT+Pselecting>Dev Containers: Attach to running containers
I am using the following setupRemote systemRunning "rootless Docker"Docker context named "rootless" being activeVS Code Docker extension being installedVS Code - Connecting via SSH to the remote machine using "Remote Extension"Building and runing the Docker container using rootless DockerChecking that the "rootless" Docker context is selectedTrying to use "right-click" option on container "Attach Visual Studio Code", which will fail with the following error message:Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?VS Code tries to refer to the "default" Docker context, althouhgh the "rootless" Docker context is selected, which in my case is:$ echo $DOCKER_HOST unix:///run/user/1001/docker.sockAlso when the "docker.host" is set to the one of the rootless Docker or if the "docker.context" is set to the "rootless", using "Attach Visual Studio Code" will fail with the same error message.Update 2023-03-06:When runningdocker context use rootlessI get the following output:And the VSCode Docker extension shows the following:But when I run then rundocker context lsin the command line I get the following output:But the value ofDOCKER_HOSTis set to the rootless context.$ echo DOCKER_HOST unix:///run/user/1001/docker.sockWhen runningdocker psit lists the container running in the rootless context.Is this the reason it does not work as expected?Workaround- What did work out is the following:On local machine defining Docker context for remotedocker context create --docker host="ssh://@VS Code - connected to local machineSelecting the Docker context of the remote machineUsing "Attach Visual Studio Code" to attach to remote containerDoes someone know how to fix the issue, so that it is possible to use "Attach Visual Studio Code" directly from the VS Code window being connected via ssh to the remote system?
Rootless Docker (remote system) and VS Code - Attach VS Code to container failing
Docker Hub limits the number of Docker image downloads ("pulls") based on the account type of the user pulling the image. Pull rate limits are based on individual IP address. For anonymous users, the rate limit is set to 100 pulls per 6 hours per IP address. For authenticated users, it is 200 pulls per 6 hour period. There are no limits for users with a paid Docker subscription. Docker Pro and Docker Team accounts enable 5,000 pulls in a 24 hour period from Docker Hub. Please read:

https://docs.docker.com/docker-hub/download-rate-limit/
https://www.docker.com/increase-rate-limits
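A hedged way to authenticate during the build is to log in to Docker Hub in a pre_build phase of buildspec.yml before anything is pulled (the variable names below are assumptions; keep the real credentials in a secrets store, not in the file):

phases:
  pre_build:
    commands:
      # authenticate so the higher pull-rate limit applies
      - echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
  build:
    commands:
      - docker-compose -f docker-compose-deploy.yml up --build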
I'm deploying my dockerized Django app using AWS Code Pipeline but facing some errors of Docker.error:Service 'proxy' failed to build : toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limitdocker-compose-deploy.ymlversion: "3.8" services: db: container_name: db image: "postgres" restart: always volumes: - postgres-data:/var/lib/postgresql/data/ app: container_name: app build: context: . restart: always volumes: - static-data:/vol/web depends_on: - db proxy: container_name: proxy build: context: ./proxy restart: always depends_on: - app ports: - 80:8000 volumes: - static-data:/vol/static volumes: postgres-data: static-data:buildspec.ymlphases:versions. build: commands: - docker-compose -f docker-compose-deploy.yml up --build
Error using docker compose in AWS Code Pipeline
The solution is to be explicit with the registry name. The documentation is misleading, as it first states that the containerregistrytype is Azure Container Registry by default, and the example then gives Contoso as the value for azureContainerRegistry. This is wrong. You need to explicitly set this to the "Login server" value from Azure, so the registry should be "contoso.azurecr.io". The full example should be:

variables:
  azureContainerRegistry: contoso.azurecr.io
  azureSubscriptionEndpoint: Contoso

steps:
- task: DockerCompose@0
  displayName: Container registry login
  inputs:
    containerregistrytype: Azure Container Registry
    azureSubscriptionEndpoint: $(azureSubscriptionEndpoint)
    azureContainerRegistry: $(azureContainerRegistry)

This is why the push repo it was referring to was in fact docker.io (the public Docker Hub), as that must actually be the default, which explains the access denied error.
When pushing containers into a private Azure Container Registry using Docker Compose the Azure DevOps pipeline returns the following error:Pushing [container] ([registry]/[app]:latest)...The push refers to repository [docker.io/[registry]/[container]]denied: requested access to the resource is deniedTheazure-pipeline.ymlfile is taken from the Docker Compose example shown in the Microsoft Microservices eShopOnContainer example,here:variables: azureContainerRegistry: myregistry azureSubscriptionEndpoint: My Service Principle ... task: DockerCompose@0 displayName: Compose push customer API inputs: containerregistrytype: Azure Container Registry azureSubscriptionEndpoint: $(azureSubscriptionEndpoint) azureContainerRegistry: $(azureContainerRegistry) dockerComposeCommand: 'push [container]' dockerComposeFile: docker-compose.yml qualifyImageNames: true projectName: "" dockerComposeFileArgs: | TAG=$(Build.SourceBranchName)The service principle is in theAcrPush role.
Resource access denied when pushing container to Azure Container Registry
Docker Desktop for Mac uses HyperKit (see https://docs.docker.com/docker-for-mac/install/), which in turn uses xhyve, and that requires CPU EPT support (https://en.wikipedia.org/wiki/Second_Level_Address_Translation#EPT, https://github.com/moby/hyperkit). People say that nested virtualization is not yet supported by VirtualBox - see https://forums.virtualbox.org/viewtopic.php?f=7&t=86922. So I suspect that VirtualBox does not expose the EPT feature to the guest, and thus Docker Desktop cannot run inside the VM.
I have a Mac Sierra 10.12 OS virtual machine, hosted on Windows 10 Home using VirtualBox.I would like to run Docker inside this Mac VM, but when I try, I get the below error message:ErrorIncompatible CPU detected.We are sorry, but your hardware is incompatible with Docker Desktop.Docker requires a processor with virtualization capabilities and hypervisor support.To learn more about this issue, see:https://docs.docker.com/docker-for-mac/troubleshootI know that my machine (HP Envy, intel core i5) has Hyper-V enabled. As far as I can tell, it is NOT a hardware issue. My i5 processor supports Hyper-V therefore supports SLAT ie EPT. I am very sure it is something to do with my VM settings which is causing the issue.I am unable to use Docker Toolbox instead, as I need Docker Desktop for Mac specifically to run some Beta software inside my VM.If anyone is able to help me run Docker using my Mac VM the help would be greatly appreciated.PS. My knowledge is very limited as I am not techy, so noob-compatible instructions would be great! Thanks!
Can I install Docker inside a Mac VirtualBox VM?
ENTRYPOINT string_here

has Docker run:

["sh", "-c", "string_here"]

The problem with this is that when you add more arguments, they're added as new elements on the argument vector, as in:

["sh", "-c", "string_here", "arg1", "arg2", "arg3..."]

which means they're ignored, because string_here, when invoked as a script, doesn't look at what further arguments it's given. Thus, you can use:

ENTRYPOINT string_here "$0" "$@"

where "$@" in shell expands to "$1" "$2" "$3" ..., and $0 is the first argument following -c (which is typically the name of the script or executable, and used in error messages written by the shell itself).
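Applied to the Spring Boot example from the question, that would look roughly like this:

# Dockerfile, shell form, forwarding runtime arguments
ENTRYPOINT java org.springframework.boot.loader.JarLauncher "$0" "$@"

# the profile argument is now passed through to the JVM
docker run --rm my_image:1.0.0 --spring.profiles.active=local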
When I have a Docker image with the following line (a Spring Boot microservice):ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]I can start the container using e.g.:docker run --rm my_image:1.0.0 --spring.profiles.active=localand it works, the parameter--spring.profiles.active=localis used. However, when the shell form of ENTRYPOINT is used:ENTRYPOINT java org.springframework.boot.loader.JarLauncherthis doesn't work any more, the parameters are ignored. I believe the parameters are passed to the/bin/sh -cthat is what is used by the shell form.So how do I pass arguments to the app I want to start when using the shell form?
Docker ENTRYPOINT shell form with parameters
The answer of @jordanm was right and it fixed my problem: change

host = 'http://127.0.0.1:8001'

to

host = 'http://container_2:8000'
I keep getting this error:HTTPConnectionPool(host='127.0.0.1', port=8001): Max retries exceeded with url: /api/v1/auth/sign_in (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))I searched through the stackoverflow and couldn't find the solution that would help me.Here's my code example:host = 'http://127.0.0.1:8001' response = requests.request(method=request_data['method'], url=f'{host}/{settings.ACCOUNTS_API_PREFIX}{request_data["url"]}', json=json_data, params=params, headers=headers, )Basically I'm trying to send a POST request to authenticate myself on the service, however I keep getting the above error.I have 2 containers - one is a web application (Django), another one is accounts that stores all details of the users to authenticate them.Both containers are up and running, I can open the website, I can open the API swagger for accounts, however I can't send the POST request and get any response.Containers settings as follows:container_1: build: context: ./container_1 dockerfile: Dockerfile env_file: - '.env' stdin_open: true tty: true ports: - '8000:8000' expose: - 8000 volumes: - ./data:/data working_dir: /data command: [ "./start.sh" ] networks: - web container_2: context: ./container_2 dockerfile: Dockerfile env_file: 'accounts/settings/.env' stdin_open: true tty: true environment: - 'DJANGO_SETTINGS_MODULE=project.settings' expose: - 8000 ports: - "8001:8000" volumes: - ./data:/app networks: - webCan someone assist me to figure it out?
Max retries exceeded with url: Failed to establish a new connection: [Errno 111] Connection refused'
According to thekubernetes v1.8.0 changelogContinuous integration builds use Docker versions 1.11.2, 1.12.6, 1.13.1, and 17.03.2. These versions were validated on Kubernetes 1.8.So any of these version should work fine.
I'm going to upgrade my Kubernetes cluster to the version1.8.7. Does anybody know which docker version is best compatible with it?This is what I found on the Kubernetes official page, but I suppose it might be for the latest k8s release (1.9)?On each of your machines, install Docker. Version v1.12 is recommended, but v1.11, v1.13 and 17.03 are known to work as well. Versions 17.06+ might work, but have not yet been tested and verified by the Kubernetes node team.Thank you!
Docker version supported in Kubernetes 1.8
Yes! You can use Supervisor, monit, runit, or even a "real" init system (including upstart or systemd) to run multiple processes. You can even use a cheap shell script like the following:#!/bin/sh ( while true; do run-process-1; done; ) & ( while true; do run-process-2; done; ) & wait
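If you prefer Supervisor over the shell loop, a minimal sketch could be (program names and commands are placeholders):

; /etc/supervisor/conf.d/app.conf
[program:process1]
command=run-process-1
autorestart=true

[program:process2]
command=run-process-2
autorestart=true

The container would then be started with supervisord -n; the -n (no-daemon) flag keeps Supervisor in the foreground so Docker treats it as the main process.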
It appears that Docker is better suited for single process applications and services, but is it capable to offer a stable containment for a more complex application ( that has multiple processes, listening ports, considerable storage usage ) ?
Is it possible to install a complex server inside a Docker container?
I solved this by following:

mkdir -p /etc/docker/certs.d/myregistry:5043
cp myregistry.crt /etc/docker/certs.d/myregistry:5000/ca.crt
cp myregistry.crt /usr/local/share/ca-certificates/ca.crt
update-ca-certificates
I had followed the steps given here to create "Authenticate proxy with nginx". Certificates were created using openssl:

openssl req -newkey rsa:4096 -nodes -sha256 -keyout myregistry.key -x509 -days 365 -out myregistry.crt

Then docker-compose up --build brings the docker registry up. When I try to push an image to the registry (from the same PC running docker-registry):

docker push myregistry:5043/test

I get the following error:

Error response from daemon: Get https://myregistry:5043/v2/: proxyconnect tcp: x509: certificate is valid for Sachith, not myregistry

I tried with insecure-registry in daemon.json, but this does not solve it, and the solution discussed here is not clear to me. Also, here it says to add certificates to the docker config.
proxyconnect tcp: x509: certificate is valid for Sachith, not myregistry
On Docker

Because a Docker container has its own filesystem namespace, the /dev/log socket from the host won't be available within it -- and because best practice in the Docker world is to have each container running only a single service, there generally won't exist a separate log daemon inside the local container. Best practice is to just log to the container's stdout and stderr. Thus, instead of running logger -s -t conf_nginx "this is a log message", just echo "this is a log message" >&2.

On Shell Aliases

Aliases are a facility intended for interactive use, not scripting, and in some shells they are turned off by default in non-interactive mode. Use a function instead:

log() { logger -s -t conf_nginx "$@"; }
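Putting that together, the script from the question could be reduced to something like this sketch:

#!/bin/sh
# write tagged messages to stderr; `docker logs` will collect them
log() { echo "conf_nginx: $*" >&2; }

log "this is a log message"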
I am trying to configure logger to my script#! /bin/bash alias log="logger -s -t conf_nginx" exec >> /var/log/myscript/file.log exec 2>&1 log "this a log message"when I am executing this script, I only see this line in the log file$ cat /var/log/myscript/file.log logger: socket /dev/log: No such file or directoryPlease help$cat /etc/os-release PRETTY_NAME="Debian GNU/Linux 9 (stretch)" NAME="Debian GNU/Linux" VERSION_ID="9" VERSION="9 (stretch)" ID=debian HOME_URL="debian.org/" SUPPORT_URL="debian.org/support" BUG_REPORT_URL="bugs.debian.org/"
Logging from bash in Docker: logger error: socket /dev/log: No such file or directory
Based on the advice of jpetazzo (see https://github.com/jpetazzo/nsenter/issues/27#issuecomment-53799568) I started a different container that used the volumes of the original container. Here is how:

docker run --volumes-from <broken-container> -it busybox

This will start a busybox shell. In there you have vi and other tools to inspect and fix the configuration files.
Use-case: I started some nice docker image and my container needs some playing around (configuration file changes for research). I edit a file (using sed or vim ;-) ) and then I stop the container and try to start it. Now I made a mistake in the configuration and the docker container does not come up when I do:docker restart How can I edit the configuration-file to fix the mistake?
How can I edit files in a docker container when it's down/not-started
I output the go-nsq log and then found the root cause: I should add -broadcast-address=127.0.0.1 to the nsqd command. If not, nsqd registers its hostname with nsqlookupd, and that hostname cannot be resolved by the client.
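In the docker-compose.yml from the question, that amounts to changing the nsqd service roughly like this:

  nsqd:
    image: nsqio/nsq
    # advertise an address the host-side go-nsq client can actually reach
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160 --broadcast-address=127.0.0.1
    depends_on:
      - nsqlookupd
    ports:
      - "4150:4150"
      - "4151:4151"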
I tried to use docker-compose to run nsq, thedocker-compose.ymlas below:version: '3' services: nsqlookupd: image: nsqio/nsq command: /nsqlookupd ports: - "4160:4160" - "4161:4161" nsqd: image: nsqio/nsq command: /nsqd --lookupd-tcp-address=nsqlookupd:4160 depends_on: - nsqlookupd ports: - "4150:4150" - "4151:4151" nsqadmin: image: nsqio/nsq command: /nsqadmin --lookupd-http-address=nsqlookupd:4161 depends_on: - nsqlookupd ports: - "4171:4171"I am using the nsq clientgo-nsqto produce and consume messages, the messages can be consumed by connecting to nsqd directly, but cannot be consumed by connecting to nsqlookupd:consumer.ConnectToNSQD("127.0.0.1:4150") # success (output the consumed messages) consumer.ConnectToNSQLookupd("127.0.0.1:4161") # failed 2018/01/31 16:39:12 ERR 1 [test/liu] (967fcc2c88ae:4150) error connecting to nsqd - dial tcp: i/o timeoutI can connect to nsqlookup instance:➜ test_nsq curl http://127.0.0.1:4161/ping OK% ➜ test_nsq curl http://127.0.0.1:4161/nodes {"producers":[{"remote_address":"172.22.0.3:59988","hostname":"967fcc2c88ae","broadcast_address":"967fcc2c88ae","tcp_port":4150,"http_port":4151,"version":"1.0.0-compat","tombstones":[false],"topics":["test"]}]}%the source code link:https://gist.github.com/liuzxc/1baf85cff7db8dee8c26b8707fc48799Env:OS: Mac EI Capitan 10.11.6 go version: 1.9.2 nsq: 1.0.0-compat(latest)Any idea for this?
nsq cannot consume message by connecting to nsqlookupd
When you use the feature to deploy a container directly on Compute Engine, you are limited to defining:

- the entry point
- args to pass to the entry point
- environment variables

That's all; you can't add additional/custom params. One solution is, instead of using the built-in feature, to use the Container-Optimized OS (COS) on your Compute Engine instance and to create a startup script that downloads and runs the container with the docker args that you want:

METADATA=http://metadata.google.internal/computeMetadata/v1
SVC_ACCT=$METADATA/instance/service-accounts/default
ACCESS_TOKEN=$(curl -H 'Metadata-Flavor: Google' $SVC_ACCT/token | cut -d'"' -f 4)
docker login -u oauth2accesstoken -p $ACCESS_TOKEN https://gcr.io
docker run … gcr.io/your-project/your-image

On the last line, you can customize the run params in your startup script. So now, for an update, you have to update the startup script and reset your VM (or create a new Compute Engine instance with COS and the new startup script, and delete the previous one). It's a matter of tradeoff between the convenience of a built-in feature and the customization capacity.
I need my image to start with this command:docker run -it --rm --security-opt seccomp=./chrome.json I'm deploying it to Google Compute Engine:https://cloud.google.com/compute/docs/containers/deploying-containersAs far as I understand, I can't specify arguments there, so Google Cloud starts it with justdocker runcommand.How do I pass these arguments? Maybe I can specify those args in Dockerfile somehow?
How do I pass arguments to docker run in a CLI (Command Line Interface)?
Simply use Docker volumes to mount your persistent data outside of the container. More info can be found here.
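For example, assuming MEDIA_ROOT points at /usr/src/app/media inside the container (that path and the image name are assumptions), a named volume in docker-compose could look like:

services:
  web:
    image: my-django-app
    volumes:
      - media-data:/usr/src/app/media

volumes:
  media-data:

The media-data volume survives removing and recreating the container, so uploaded files are kept across image updates.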
I have a Django app where media files are uploading to the my_project_directory/media/images/ (so nothing special, just a common approach). The problem raised after dockerizing my app. Every time i need to update my container after pulling latest docker image, old container is removed(including, of course media files) and the new, empty one is built. So the question is - how to make my Django app stateless and where/how to store media files? Is it possible to store them in a special docker container? If yes, how? if no, what could you suggest?
Where to store media files of Django app in order to save them after docker container updating?
Almost there. In your Dockerfile you have defined environment variables, therefore you need to reference them as environment variables in your WildFly config. The easiest way is to prefix the variable name with env. — so for your env vars HOST, SSL, USERNAME, ... you can reference them in standalone.xml as expressions like ${env.HOST}. Without the env. prefix, JBoss/WildFly will try to resolve the expression as a JVM property, which you'd have to specify as a JVM -D flag. You can also use a default value fallback in your expressions, such as: ssl="${env.SSL:true}". This way, ssl will be set to the value of the environment variable named SSL, and if such a variable does not exist, the server will fall back to true. Happy hacking
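As an illustration, the mail-related pieces of standalone-myapp.xml could look roughly like this (a sketch only — the exact elements depend on your existing config, not on a verified schema):

<!-- mail session -->
<smtp-server outbound-socket-binding-ref="mail-smtp"
             ssl="${env.SSL:true}"
             username="${env.USERNAME}"
             password="${env.PASSWORD}"/>

<!-- socket binding for the mail host/port -->
<outbound-socket-binding name="mail-smtp">
    <remote-destination host="${env.HOST:smtp.gmail.com}" port="${env.PORT:465}"/>
</outbound-socket-binding>

With this in place, the -D flags in the Dockerfile CMD are no longer needed; the values come straight from docker-compose's environment section.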
I'm trying to pass values from docker-compose.yml file to Wildfly configuration dynamically. I want to have flexibility of mail configuration - just for quick change of addres, or username, or port..In this case, I tried to do that by forwarding environment variables from docker-compose.yml, by dockerfile as arguments "-Dargumentname=$environmentvariable. Currently wildfly interupts on start with error:[org.jboss.as.controller.management-operation] (ServerService Thread Pool -- 45) WFLYCTL0013: Operation ("add") failed - address: ([ ("subsystem" => "mail"), ("mail-session" => "default") ]) - failure description: "WFLYCTL0097: Wrong type for ssl. Expected [BOOLEAN] but was STRING"Same situation, if I try to pass PORT as value in outbound-socket-binding block.I have no idea how to pass integers/booleans from docker-compose file to Wildfly configuration.docker-compose.yml (part)... services: some_service: image: image_name:tag environment: - USERNAME=some_username@... - PASSWORD=some_password - SSL=true // I also tried with value 1 - HOST=smtp.gmail.com - PORT=465 // also doesn't work ...Dockerfile:FROM some_wildfly_base_image # install cgroup-bin package USER root RUN apt-get update RUN apt-get install -y cgroup-bin RUN apt-get install -y bc USER jboss ADD standalone-myapp.xml /opt/jboss/wildfly/standalone/configuration/ ADD standalone.conf /opt/jboss/wildfly/bin/ ADD modules/ /opt/jboss/wildfly/modules/ RUN wildfly/bin/add-user.sh usr usr --silent # Set the default command to run on boot # This will boot WildFly in the standalone mode and bind to all interface CMD [ "/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-myapp.xml", "-Dmail.username=$USERNAME", "-Dmail.password=$PASSWORD", "-Dmail.ssl=$SSL", "-Drm.host=$HOST", "-Drm.port=$PORT" ]standalone-myapp.xml:... ... ...
How to pass variable as attribute to xml configuration file in Wildfly with Docker
Does docker check during the build process if my local version of imagename is still the latest (similar to docker pull)?No, docker will not do this by default because of the build cache. It will use whatever existing image it has locally in the cache [1].You can however enable the behavior you desire by using the--no-cacheoption:$ docker build --no-cache .
I have a CI Runner that automatically builds a docker image from a Dockerfile. My docker image is based on another docker image. So the beginning of my Dockerfile looks like this:

FROM linktoimage/imagename:latest

Does docker check during the build process if my local version of imagename is still the latest (similar to docker pull)? Because I noticed that my CI runner shows an older version of imagename if I run docker images on it.
Does Docker FROM Keyword in Dockerfile look for the newest image?
According to Saurabh Singh from Microsoft:The Instance name support is available in v 1.1 of .Net Core. In v1.0 of .Net Core, Instance names are not supported on OS other than Windows.So I don't think you can connect from .Net Core 1.0 running on Linux to an SQL Server using instance name.Your choices seem to be:don't use instance namewait for .Net Core 1.1 (planned for "Fall 2016")use pre-release version of .Net Core 1.1
I'm publishing an application to docker imagemicrosoft/dotnet:1.0.1-corethat reference Sql Server instance in connection string:"Data Source=host\instance;Initial Catalog=database;User ID=user;Password=pass;"In Windows environment it work's as well, but using docker, the application cannot connect to the database. Changing theData Sourceto useportinstead ofinstanceit works."Data Source=host,port;Initial Catalog=database;User ID=user;Password=pass;"How can I connect, from docker, to Sql Server using instance instead port?
SQL Server instance string connection in Linux Docker
It is the same as Express, but the generated application will by default use feathers-configuration to pull in your application settings. From the error message it looks like you are not providing a proper NODE_ENV environment variable, which has to be set to production when deploying to Heroku.
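If that is the cause, setting the config var on Heroku should be enough, for example:

heroku config:set NODE_ENV=production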
I'm trying to deploy my feathersjs web app on Heroku, and since feathers is simply an Express wrapper I thought it would be like deploying an ordinary Node app. I have the "npm start" script in my package.json, and I added the heroku remote to my git repo; when I push, Heroku runs "yarn install" and then the "npm start" script. But just when the app starts, an error occurs (see the heroku logs). I can't figure out what happened, any suggestions? Maybe I could dockerize my app — could someone help me find the proper implementation? Thanks everybody
Deploy FeathersJS App on Heroku