Columns: Response (string, 8–2k chars) | Instruction (string, 18–2k chars) | Prompt (string, 14–160 chars)
The syntax you are using, ${secret}, was deprecated in Drone 0.6 and replaced with the following syntax:

pipeline:
  notify:
    image: drillster/drone-email
    from: [email protected]
    recipients: [ [email protected] ]
    secrets: [ EMAIL_HOST, EMAIL_PORT, EMAIL_USERNAME, EMAIL_PASSWORD ]

The above syntax instructs Drone to provide the requested secrets to the plugin. The secrets are exposed into the container as environment variables and consumed by the plugin.

Further reading:
http://docs.drone.io/manage-secrets/
http://docs.drone.io/secrets-not-working/#variable-expansion
http://docs.drone.io/release-0.6.0 (see the breaking changes section)
I'm using drone-ci (0.8.0-rc.5) as my CI tool and the drone-email plugin for sending emails. I would like to send notifications when a build succeeds or fails. I use the Gmail SMTP server for sending emails. My .drone.yml file:

notify:
  image: drillster/drone-email
  host: ${EMAIL_HOST}
  port: ${EMAIL_PORT}
  username: ${EMAIL_USERNAME}
  password: ${EMAIL_PASSWORD}
  from: [email protected]
  recipients: [ [email protected] ]

Secrets are configured like on the picture below. When the build finishes, I receive the following exception:

time="2017-09-20T02:14:10Z" level=error msg="Error while dialing SMTP server: dial tcp :587: getsockopt: connection refused" dial tcp :587: getsockopt: connection refused

When I hardcode the values in the yml file, notifications work. So I'm wondering what I'm doing wrong with secrets, or how to fix this situation?
Drone CI does not see secret variables when using drone-email plugin
I found that the JBoss server running inside the container was not listening on 0.0.0.0. One way to fix this is to pass the bind address when starting the standalone server:

./bin/standalone.sh -b 0.0.0.0
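If the image is built from a Dockerfile, the bind flag can also be baked into the start command so every container listens on all interfaces. A minimal sketch, assuming a WildFly-style image; the image name and install path are assumptions, adjust them to your own image:

FROM jboss/wildfly
EXPOSE 8080
# Bind to 0.0.0.0 so the port published by "docker run -P" is reachable from the host
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]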
I have started a docker container using the command sudo docker run -it -P -d plcdimage. The image is built from a Dockerfile which has the instruction EXPOSE 8080. The container runs a JBoss server with an application deployed on it. Port mappings are:

Command: sudo docker port be1837e849dc
Output: 8080/tcp -> 0.0.0.0:32771

When I try to access the web application running on JBoss in the container from the mapped host port using the URL http://IPAddressOfHost:32771/ I get a connection refused error. Following is the result of the command "netstat -tulpn":

(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address   State    PID/Program name
tcp        0      0 0.0.0.0:22       0.0.0.0:*         LISTEN   -
tcp6       0      0 :::9999          :::*              LISTEN   -
tcp6       0      0 :::22            :::*              LISTEN   -
tcp6       0      0 :::32771         :::*              LISTEN   -
udp        0      0 0.0.0.0:68       0.0.0.0:*                  -

I tried doing telnet hostip 32771 and it also results in connection refused. Docker version 1.12.1, build 23cf638. What could be the possible reason for this? Thanks in advance.
Cannot access port on host mapped to docker container port
I used envsubst for the environment replacement, and this utility tried to swap $host and the other nginx variables as well. Solved with:

envsubst '$WP $PMA' < nginx.template.conf > nginx.ready.conf; rm nginx.template.conf

This will replace only the $WP and $PMA variables in the nginx.template.conf file and write the output to nginx.ready.conf, avoiding a general replacement of all nginx variables like $host, $remote_addr and others. If you don't specify the variables that you want to replace, the nginx variables will be replaced with empty values.
I am trying to redirect to a proxy server in nginx:

location /phpmyadmin {
    proxy_http_version 1.1;
    proxy_pass https://${PMA}:5000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}

But I get this error:

nginx: [emerg] invalid number of arguments in "proxy_set_header" directive in /etc/nginx/nginx.conf:26

The full code to inspect the error is in this listing, because I really can't find the mistake (${env} is correctly replaced by the script):

user root;
worker_processes auto;
pcre_jit on;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    keepalive_timeout 3000;
    sendfile on;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        ssl_certificate /etc/ssl/nginx.crt;
        ssl_certificate_key /etc/ssl/nginx.key;
        root /home;
        index default.html /default.html;
        location /phpmyadmin {
            proxy_http_version 1.1;
            proxy_pass https://${PMA}:5000/;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto https;
        }
        location /wordpress {
            return 307 http://${WP}:5050/;
        }
        location / {
            try_files /default.html default.html default.htm;
        }
    }
    server {
        listen 80;
        listen [::]:80;
        return 301 https://$host$request_uri;
    }
}
daemon off;

(How many symbols do I need for this post?)
How to fix nginx error "invalid number of arguments"?
Do not worry. This line was introduced in 1.3 by r/58557/ and will be fixed in 1.4 (r/59817). This log line is not usually a problem, and reporting it as an error can cause needless debugging.
I am just trying to upgrade mesos version to 1.3.1 from 1.0.3.Chronos scheduler is able to schedule the JOB thru mesos. The job runs fine and able to see mesos stdout logs. But, still seeing the following in mesos stderr logs. The docker jobs runs fine, but still the status is showing as failed with the below logs.I0905 22:05:00.824811 456 exec.cpp:162] Version: 1.3.1 I0905 22:05:00.829165 459 exec.cpp:237] Executor registered on agent c63c93dc-3d9f-4322-9f82-0553fd1324fe-S0 E0905 22:05:11.773236 465 process.cpp:956] Failed to accept socket: future discarded
Mesos task - Failed to accept socket: future discarded
One possible option to try to recover your data from the container (I'm not 100% sure it can work in your specific case...):

Create an image from the stopped container state:
docker commit my_stopped_container my_recovery_img:latest

Delete the current container:
docker rm my_stopped_container

Recreate the container from the dumped image:
docker run [your_options] --name my_stopped_container my_recovery_img:latest [your command]

Meanwhile, you should make sure you don't get in this situation again by securing your critical data on a volume or a bind mount.
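For example, a named volume keeps the data around even if the container itself is removed. A minimal sketch, assuming hypothetical names; the volume name and the path that holds your critical data are placeholders:

# Keep the critical path on a named volume from now on
docker volume create my_app_data
docker run -d --name my_stopped_container \
  -v my_app_data:/var/lib/myapp \
  my_recovery_img:latest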
I'm working with docker on a local machine (all Windows). To allow my containers to access other resources in my network, I created a new network and gave it the needed routing/gateway info. After restarting my machine to install a VPN (unrelated to my docker containers), the network was gone and all the containers connected to that network refuse to start with this error:

DockerDo : Error response from daemon: network 0935c770e7e107c64e3255eaa56de2d2fce90aab108682196d4e2960a2fe5726 not found

Is there any way to disconnect the network "post reboot" from the container? Or recreating the network with that ID would be fine too.

Edit: I already tried using the disconnect command; either the ID is not translated, or I don't know what I should tell docker to disconnect. This is copied right from my PS console:

C:\WINDOWS\system32> docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4661e7886520        nat                 nat                 local
bad4235f0598        none                null                local
C:\WINDOWS\system32> docker start BACH-dev
Error response from daemon: network 0935c770e7e107c64e3255eaa56de2d2fce90aab108682196d4e2960a2fe5726 not found
Error: failed to start containers: BACH-dev
C:\WINDOWS\system32> docker network disconnect 0935c770e7e107c64e3255eaa56de2d2fce90aab108682196d4e2960a2fe5726 BACH-dev
Error response from daemon: container 24bfd23804c7a95a923d0626c41f4c949317cb34a45cb81bc430dc2fa96037ae is not connected to the network 0935c770e7e107c64e3255eaa56de2d2fce90aab108682196d4e2960a2fe5726
I need to remove a deleted network from a docker container
Yes, it is doable. Just define your application in one container and nginx in another container, both in the same docker-compose.yml. Link them, and only expose the 443 port on the nginx container.

docker-compose.yml:

nginx:
  image: nginx
  links:
    - node1:node1
    - node2:node2
    - node3:node3
  ports:
    - "443:443"
node1:
  build: ./node
node2:
  build: ./node
node3:
  build: ./node

More info: http://anandmanisankar.com/posts/docker-container-nginx-node-redis-example/

Regards
I'd like to know if it's possible to use nginx with docker compose as an api gateway / reverse proxy / ssl termination point without exposing any ports on the containers behind it. I.e. I want to use only the intranet created by docker compose when the containers are linked to communicate past nginx. Ideally the only publicly accessible port will be port 443 (ssl) on nginx. Is this doable? Or do I have to expose ports on my containers?
Can you use nginx reverse proxy to docker containers without exposing any ports?
Simply launch your container with something like:

docker run -it -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker ...

and it should do the trick.
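With the host's Docker socket (and client binary) mounted this way, the docker CLI inside bob talks to the host daemon, so the command from the question works unchanged from inside bob:

docker exec my_container bash myscript.sh

One caveat on this approach: bind-mounting the host's /usr/bin/docker can break if the binary's shared libraries are missing in bob's image, so installing the docker CLI in bob's image (and only mounting the socket) is a common alternative.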
If I am on my host machine, I can kick off a script inside a Docker container using:

docker exec my_container bash myscript.sh

However, let's say I want to run myscript.sh inside my_container from another container, bob. If I run the command above while I'm in the shell of bob, it doesn't work (Docker isn't even installed in bob). What's the best way to do this?
Run shell script inside Docker container from another Docker container?
It surely has something to do with the venv and the path. Here is an old FastAPI Dockerfile combined with your code:

FROM python:3.8-slim-buster
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# install system dependencies
RUN apt-get update \
    && apt-get -y install gcc make \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /code
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY ./requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt
EXPOSE 8000
COPY ./app /code/app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

Bear in mind that this Dockerfile is not very current and may be a bit heavy for what you are trying to do; you really should customize it.
I am trying to deploy my fastAPI applications using Docker. It's part of a bigger system which I am trying to connect with each other using a docker-compose later on. It works fine locally but when I try deploying it, it doesn't found my sub directories. I have__init__.pyfiles in all directories.This is my project structure:And this is my Dockerfile:FROM python:3.8 WORKDIR /code COPY ./requirements.txt /code/requirements.txt RUN python3.8 -m pip install --no-cache-dir --upgrade -r /code/requirements.txt COPY ./app /code/app CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]The Dockerfile is taken from the FastAPI Documentation and I've also tried manually copying the main.py and recognition_service directory (and setting a python env) but it didn't change the Error (might have done it wrong though). Building goes fine, no errors, running only works until the importfrom recognition_service.json_reader import Json_Readerin my main.py, which causes the ErrorFile "/code/./app/main.py", line 3, in from recognition_service.json_reader import Json_Reader ModuleNotFoundError: No module named 'recognition_service'This has probably some obvious solution I am overseeing since it's the first time I have a "bigger" application with Python, but I am hoping to find some help here.As a side note, within the json_reader.py there are also imports for files in the spacy subdirectories, each interacting with their respective spacy models.EDIT: Was asked for the requirements.txt:fastapi==0.79.0 scipy==1.7.3 pydantic==1.8.2 uvicorn==0.17.6 uvloop==0.16.0
Deploying my Python (FastAPI) Application with Docker: ModuleNotFoundError: No module named 'FolderInStructure'
Build a custom image that includes those options.

Create a directory for your docker image:
mkdir my_elasticsearch
cd my_elasticsearch

Create an elasticsearch.yml with all the options, including:
script.inline: on
script.indexed: on

Create a Dockerfile that copies the config file:
FROM elasticsearch
COPY elasticsearch.yml /container/path/to/elasticsearch.yml

Build and tag the image:
docker build -t my/elasticsearch .

Then run your image:
docker run -d -p 9200:9200 -p 9300:9300 --name elasticsearch-pb my/elasticsearch

You might want to publish your image to the Docker Hub or another registry so you only need to build it once. You can also use docker-compose to manage the build process and multiple containers.
I can start elasticsearch with Kibana using the following 2 docker commands...docker run -d -p 9200:9200 -p 9300:9300 --name elasticsearch-pb elasticsearch docker run -d -p 5601:5601 --name kibana-pb --link elasticsearch-pb:elasticsearch -e ELASTICSEARCH_URL=http://elasticsearch:9200 kibanaBut how do I start es with script support using docker?Usually this is done by adding 2 lines to elasticsearch.yml file.script.inline: on script.indexed: onhow do I change the config file within docker image?
enable scripting within docker image
Depending on your base image (used by your container), you would need to add to your Dockerfile (or to make one, starting with a FROM line):

RUN apt-get update && apt-get install gnupg

(as in this docker-vault-init Dockerfile)

Then check out "Adding GPG key inside docker container causes 'no valid OpenPGP data found'".

This could also be needed:

RUN apt-get install -y ca-certificates wget
I have an encrypted file with gpg that I want to decrypt from inside a docker container. gpg is not found in the container; how would I add it?
Decrypt with gpg from inside a docker container
All seems to be working fine, but in the last step you're not accessing the actual file. Since I don't know docker so well, first I start an aarch64 shell:

docker run -it quay.io/pypa/manylinux2014_aarch64 bash
[root@637db2c1af5e /]# uname -m
aarch64

Then from inside of the container, I just build the program like I would normally:

git clone https://github.com/gatagat/lap

Then install some dependencies:

python3.8 -m pip install numpy cython

Then I can build the wheel:

python3.8 setup.py bdist_wheel

Then I have a "dist" folder with:

-rw-r--r--. 1 root root 1.7M Aug 22 11:59 lap-0.5.dev0-cp38-cp38-linux_aarch64.whl
I am trying to build python wheels for a package (lap) for theaarch64architecture. My host environment is WSL2 with Ubuntu 20.04 anddocker. Target is BuildrootGNU/Linux. So no compiler is available on the target. My goal is to setup a cross-build environment foraarch64usingqemu. As described inRun a AArch64 native container on x86 with emulationwe can use acontainerized environment available to run on AArch64 to build wheels to the current specificationwithQEMU emulator. Steps I am doing:installing qemu packages in WSL2sudo apt-get install qemu binfmt-support qemu-user-staticregistering scripts:docker run --rm --privileged multiarch/qemu-user-static --reset -p yesTesting the emulation environmentdocker run --rm -t arm64v8/ubuntu uname -mand it returnsaarch64so, I believe the installation was successful, the emulation is working. Also,qemu-aarch64-staticis available in/usr/bin/Now I clone the projectlap(inWSL2) andcd lap/and it containssetup.pybut when I execute below command to build the wheelsdocker run --rm -v `pwd`:/io quay.io/pypa/manylinux2014_aarch64 bash -c '/opt/python/cp38-cp38/bin/python ./setup.py bdist_wheel'I get below error``` WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v4) and no specific platform was requested /opt/python/cp38-cp38/bin/python: can't open file './setup.py': [Errno 2] No such file or directory ```Now I am not sure how to passqemu-aarch64-staticto abovedockercommand?Can any one please let me know how to resolve this and build python wheels using QEMU?Thanks in advance.P.S: Please let me know if any info is missing.
how to compile and build a python package for aarch64 using qemu?
The /bin/sh -c form only takes one argument, the script to run. Everything after that argument is a shell variable $0, $1, etc., that can be parsed by the script. While you could do this with the /bin/sh -c syntax, it's awkward and won't grow with you in the future.

Rather than trying to parse the variables there, I'd move this into an entrypoint.sh that you include in your image:

#!/bin/sh
exec spark-submit --master $SPARK_MASTER script.py "$@"

And then change the Dockerfile to define:

COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

The exec syntax replaces the shell script in PID 1 with the spark-submit process, which allows signals to be passed through. The "$@" will pass through any arguments from docker run, with each arg quoted in case you have spaces in the parameters. And since it's run by a shell script, the $SPARK_MASTER will be expanded.
We are creating a simpleDockerfile, the last line of that file isENTRYPOINT ["sh", "-c", "spark-submit --master $SPARK_MASTER script.py"]Thescript.pyis a simple pyspark app (is not important for this discussion), this pyspark app receives some parameters that we are trying to pass using thedockercommand as followsdocker run --rm my_spark_app_image --param1 something --param2 something_elseButscript.pyis not getting any parameter, i.e. the container executed:spark-submit --master $SPARK_MASTER script.pyThe expected behaviour is that the container executes:spark-submit --master $SPARK_MASTER script.py --param1 something --param2 something_elseWhat am I doing wrong?
ENTRYPOINT with environment variables is not acepting new params
Not sure if this article will help you with this issue: JENKINS DECLARATIVE PIPELINES WITH KUBERNETES. This article shows a full stack for setting up Jenkins in Kubernetes and also covers the idea of Docker in Docker.

For the discussion, let's mark the pod container as container1 and the container created inside the pod as container2.

I think container1 and container2 should be located on the same host and share the same docker engine, so the flannel network and the docker network should be set up together. As I see it, the network flow for container2 should go from container2 -> docker0 -> host, not through container1.

Just let me know if this sounds reasonable, or we could discuss it together; I think this question is very interesting.
I have a Kubernetes pod based on a jenkins/slave container, to which I mount the docker socket and docker binary file, with the necessary kernel module, in privileged mode. Inside that pod I build a Docker image, and from that image I run a docker container. Inside that container I don't have an Internet connection at all, because the pod container uses the flannel network (198.x.x.x) while that container uses the bridged docker network (172.x.x.x), which is not available inside the pod container. How can I make the Internet available inside the second container, the one being created inside the Kubernetes pod container? Using the Docker API in a Jenkins pipeline is not a solution for me, as it limits the output of error logs, and I cannot commit changes made in the second container because that container is removed immediately after the build.
Internet connection inside Docker container in Kubernetes
I just created an init.sql file and added it to /docker-entrypoint-initdb.d; this is how I did it. It's better to create a psql folder and put the init.sql file in it.

Remove - POSTGRES_DB=develop_db from the base-compose file and change it like this:

version: '3.3'
networks:
  shared_network:
    driver: bridge
services:
  testdb:
    image: postgres:latest
    volumes:
      - postgres_data:/var/lib/postgresql/data
      # add here
      - ./psql/init.sql:/docker-entrypoint-initdb.d/init.sql
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - "5432:5432"
    networks:
      - shared_network
volumes:
  postgres_data:

Here is my init.sql file; I simply created the databases. Make sure you create each DATABASE using your USER and PASSWORD.

-- Creation of DATABASE
CREATE DATABASE test_db;
CREATE DATABASE test_prod_db;
Dockerfile:FROM python:3.9 ENV PYTHONUNBUFFERED=1 RUN apt-get update && apt-get upgrade -y \ && apt-get install -y gcc gunicorn3 libcurl4-gnutls-dev librtmp-dev libnss3 libnss3-dev wget \ && apt-get clean \ && apt -f install -y WORKDIR /App COPY requirements.txt /App/ RUN pip install -r requirements.txt COPY . /App/ >! RUN pip install django~=4.1.1 RUN mkdir -p /App/data/db/ RUN chown -R 1000:1000 /App/data/db/ EXPOSE 7000 EXPOSE 8001`I havebase-composefile that contain database image, here is file content:version: '3.3' networks: shared_network: driver: bridge services: testdb: image: postgres:latest volumes: - postgres_data:/var/lib/postgresql/data environment: - POSTGRES_DB=develop_db - POSTGRES_USER=postgres - POSTGRES_PASSWORD=postgres ports: - "5432:5432" networks: - shared_network volumes: postgres_data:Here is my docker-compose file:version: '3.3' include: - ./base-compose.yml services: test_develop: container_name: test_develop build: . command: python manage.py runserver 0.0.0.0:7000 ports: - "7000:7000" env_file: - ./environments/.env.develop depends_on: - testdb networks: - shared_network links: - testdbHere is my docker-compose-prod.yml file:version: '3.3' include: - ./base-compose.yml services: test_production: container_name: test_production build: . command: python manage.py runserver 0.0.0.0:8001 ports: - "8001:8001" env_file: - ./environments/.env.prod depends_on: - testdb networks: - shared_network links: - testdbwhen i run thedocker-compose up --buildit createdevelop_dbbut i want to createprod_dbtoo.I try to create two database names develop_db and prod_db, when docker-compose up --build.I used these two commands to run both docker-compose file.docker-compose -f docker-compose up --build docker-compose -f docker-compose-prod.yml up --build
How to create two postgres databases when running docker-compose up?
You can try with a Docker image like yukinying/chrome-headless-browser or similar: https://hub.docker.com/r/yukinying/chrome-headless-browser/

From the description: this docker image contains the Linux Dev channel Chromium (https://www.chromium.org/getting-involved/dev-channel), with the required dependencies and the command line arguments for running in headless mode.
I am building a crawler with a headless browser, but right now I want to dockerize my app. I've installed Chrome in my docker image, but it throws an error when I run the script.

StartChrome.js:

const chromeLauncher = require('chrome-launcher');

chromeLauncher.launch({
  port: 9222,
  chromeFlags: ['--headless','--proxy-server=54.171.181.204:8888','--disable-web-security','--disable-gpu']
}).then(chrome => {
  console.log(`Chrome debugging port running on ${chrome.port}`);
});

Error:

(node:415) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: connect ECONNREFUSED 127.0.0.1:9222
(node:415) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

And when I run it on the command line it throws an error like this:

Failed to move to new namespace: PID namespaces supported, Network namespace supported, but failed: errno = Operation not permitted
Trace/breakpoint trap
How to run headless browser inside docker?
MariaDB documentation does have an "upgrade from 10.1 -> 10.2" page that is worth reading. Although most of it is about package upgrades, there are some notes around an optional SET GLOBAL innodb_fast_shutdown=0 and finishing with mysql_upgrade.

A docker volume inspect to look at the mountpoint and take a copy of the datadir is prudent, especially if you don't have recent backups or have a quick-restoration business requirement (though if this is the case you should test the later version with a restore from backup as well as the in-place upgrade procedure).

An in-place upgrade without SET GLOBAL innodb_fast_shutdown=0 prior to shutdown will cause InnoDB to begin recovery and apply the redo log to the datadir. There is a small risk that this could do something differently to what it was doing before.

With the new container started, you can check that the data exists. When you are ready, run mysql_upgrade (I'd normally do docker exec -i {container} mysql_upgrade). This will hopefully be automated (gh#350, MDEV-25670) when I think of a reliable way to do this.

As Monty says, "You should be able to trivially upgrade from ANY earlier MariaDB version to the latest one" (or any intermediate one), so don't feel as though you have to go 10.1 -> 10.2 and eventually 10.3.
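A copy of the datadir can be taken with a throwaway container before the upgrade. A minimal sketch, assuming the data volume is named mariadb_data (substitute the volume name reported by docker volume ls / docker inspect for your 10.1 container):

# Back up the (stopped) 10.1 container's datadir volume into the current directory
docker run --rm \
  -v mariadb_data:/var/lib/mysql:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/mariadb-datadir-backup.tar.gz -C /var/lib/mysql .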
I have MariaDB 10.1 running in a Docker container and I want to upgrade to 10.2. My data is persisted in a volume which /var/lib/mysql is mapped to, my.cnf, is not mapped and unchanged. What is the correct procedure to end up with a Maria 10.2 container with my data intact?The procedure I considering is as follows:Stop the 10.1 containerDuplicate the data volumeCreate a new 10.3 container, mapping the data directory to the duplicated volumeStart the new containerMy concern in this is step 3. During a 'standard' (non-Docker) upgrade, might the upgrade process not alter the data directory in some way? And if so, any changes that should be made to the /var/lib/mysql directory during upgrade would not be made to the volume, as its outside Docker.Is my procedure correct? Is my concern justified?
How to upgrade MariaDB running as a docker container
By default nifi listens only on port 8443 (using an HTTPS connection). If you want to connect using unsecure HTTP, you need to set the HTTP port:

docker run -itd -p 8443:8080 -e NIFI_WEB_HTTP_PORT=8080 --name nifi apache/nifi

In this case the HTTPS connection will be disabled and you will be able to connect with http://localhost:8443/nifi instead of secured HTTPS.

* It is not possible to have both the 8080 (HTTP) and 8443 (HTTPS) connections active at the same time. You have to edit the container entrypoint script (/opt/nifi/scripts/start.sh) to activate both connections.
I am very new to docker and Nifi, so please understand if my question doesn't sound refined. When I downloaded Nifi from the official Apache NiFi website and fired it up, it was accessible via http://localhost:8443/nifi. But when I created a docker container using the following command:

docker run -itd -p 8433:8080 --name nifi apache/nifi

it runs without a problem but it's not accessible via the web UI. When I used docker logs d7 | grep "JettyServer":

2022-07-07 23:17:13,334 INFO [main] org.apache.nifi.web.server.JettyServer NiFi has started. The UI is available at the following URLs:
2022-07-07 23:17:13,334 INFO [main] org.apache.nifi.web.server.JettyServer https://d723418f16d5:8443/nifi

the above message was shown, which to my understanding means that Nifi is running.

I have tried:
- localhost:8433
- host IP:8433
- bridge network IP:8433
but none of those work. Is this possibly because of the update in version 1.14.0, since it accesses the UI via https rather than http and now requires an ID and password? Or am I just missing something very simple? Thank you all for your help in advance.
Nifi container running but not accessible via UI
It was a stupid mistake: the arguments were in the wrong order. With docker run -it -p 8444:8444 (the options placed before the image name) it worked :\
Hi people have been looking at this for far too long and need some help.I have made a ASP.NET core website nothing fancy just the template that goes with VS 2017 (v 1.1). I publish the site using dotnet core cli and build an image using this dockerfile:FROM microsoft/dotnet:1.1-runtime COPY /Publish /dotnetapp WORKDIR /dotnetapp EXPOSE 8444 ENTRYPOINT ["dotnet", "Inqu.dll"]When i run the image created with:docker run -it -p 8444:8444The image starts up waiting for request:Hosting environment: Production Content root path: /dotnetapp Now listening on: http://*:8444 Application started. Press Ctrl+C to shut down.but i can't reach the site and getting an ERR_CONNECTION_REFUSED when trying to access the site thoughthttp://local-ip:8444/I have modified the WebHostBuild to:var host = new WebHostBuilder() .UseKestrel() .UseUrls("http://*:8444") .UseContentRoot(Directory.GetCurrentDirectory()) .UseStartup() .UseApplicationInsights() .Build(); host.Run();So it should listen to port 8444, I have also tried to set:ENV ASPNETCORE_URLS http://*:8444in the Dockerfile but it doesnt help.I have some other images in docker up and running (gogs and mysql) and i can access them with my local ip:port with no problems, but i can't connect to the kestrel server.Can somebody please help me out?
Can't connect to ASP.NET core through docker
Minikube profiles are a way of getting different isolated environments (VMs), which can be helpful in a handful of scenarios (testing how the application behaves on different networks, testing different K8s versions, etc).

By default, minikube start will start a VM with a profile named minikube that can be referenced through -p minikube or --profile minikube, or simply by omitting the profile. So in practice minikube -p minikube docker-env and minikube docker-env are the same command, but minikube -p otherkube docker-env points to a different profile.

The command minikube -p <profile> docker-env prints out a set of environment variables that, when evaluated, will point your local docker commands to the docker agent inside the specified profile's VM. The eval command runs these exports in the current shell. Selecting a different profile changes some of the variables slightly (namely the docker host and the active docker daemon VM).

minikube -p <profile> docker-env will fail if the specified profile is stopped. In the same way, minikube docker-env will fail if the minikube profile is stopped.

You can get a list of existing profiles using the following command:

minikube profile list

You can run the following commands to better understand the difference between the outputs when using different profiles:

minikube -p minikube start
minikube -p otherkube start
minikube docker-env
minikube -p minikube docker-env
minikube -p otherkube docker-env
I've set up the Docker Engine locally to run on minikube (rather than using Docker Desktop). I know that I need to make sure that the Engine "talks to" the minikube cluster. I've consulted two tutorials, which have slightly different instructions. Specifically for this question, I want to understand the difference between the command:

eval $(minikube -p minikube docker-env)

referenced here, and

eval $(minikube docker-env)

referenced here. What does the profile flag -p do in this case?
Command `eval $(minikube docker-env)` vs `Using eval $(minikube -p minikube docker-env)`
The node part of your docker-compose.yml doesn't declare any volumes, so how should docker know which part of your node image should be shared? Try adding something like this to the node service in your compose yaml:

volumes:
  - /usr/src/app
I have the below compose file which starts 2 containersservices: nginx: container_name: nginx build: ./nginx/ ports: - "80:80" links: - node:node volumes_from: - node node: container_name: node build: . env_file: .env command: npm run packageThe dockerfile for nodeFROM node:6.0 # Create app directory RUN mkdir -p /usr/src/app WORKDIR /usr/src/app # Install app dependencies COPY package.json /usr/src/app/ RUN npm install # Bundle app source COPY . /usr/src/app EXPOSE 8000docker-compose updoesnt seem to mount the node volumes into nginx. I require the volume to serve the static files from nodelocation / { #The location setting lets you configure how nginx responds to requests for resources within the server. proxy_pass http://node:8000; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } location ~* \.(js|css|png|jpeg)$ { root /usr/src/app/public; expires 30d; }the volume is present in nodeavernus@avernus-VirtualBox:~$ docker exec -it node bash root@127bddea4e31:/usr/src/app# ls Dockerfile docker-compose.override.yml migrations postgres-test shared webpack.config.js Makefile docker-compose.prod.yml node_modules public socketcluster webpack.production.config.js client docker-compose.yml package.json server testBut Nginx doesnt seem to have the volumesavernus@avernus-VirtualBox:~$ docker exec -it nginx bash root@47ca17fac4b3:/# cd /usr/src/app bash: cd: /usr/src/app: No such file or directoryIs there something else i'm missing?
volume not getting mounted in nginx container
It seems there is an issue installing the keys like that. Similar problems here and here.

The suggested solution is to split the command like this:

wget -q https://artifacts.elastic.co/GPG-KEY-elasticsearch
apt-key add GPG-KEY-elasticsearch

In your case, I suspect the output of the wget command is not the GPG key. It might be something else (e.g. a proxy response) or an error. Try removing the silent flag (-q) to see what's really going on. Hope that helps.
Trying to get elasticsearch installed and running into an error here in my dockerfile. Looks like it's unable to run bin.#JDK 1.8 on Ubuntu for ElasticSearch RUN add-apt-repository -y ppa:webupd8team/java RUN apt-get -y update RUN apt-get -y install openjdk-8-jre RUN wget -qO – https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add – RUN apt-get install apt-transport-https RUN echo “deb https://artifacts.elastic.co/packages/6.x/apt stable main” | tee -a /etc/apt/sources.list.d/elastic-6.x.list RUN apt-get update RUN apt-get install elasticsearch RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-icu RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-phonetic RUN -service elasticsearch start RUN gedit /etc/elasticsearch/jvm.options RUN gedit /etc/elasticsearch/elasticsearch.yml RUN curl -XGET ‘http://localhost:9200/_cat/health?v&pretty’Step 21/70 : RUN wget -qO – https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add – ---> Running in 7558b8a264b8 Warning: apt-key output should not be parsed (stdout is not a terminal) gpg: no valid OpenPGP data found. The command '/bin/sh -c wget -qO – https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add –' returned a non-zero code: 2Kind of new to docker so any help would be greatly appreciated. I'm running root user so i don't need to add sudo in front of any of these commands.
Dockerfile: not able to run bin command ubuntu
I prefer the notation:

RUN cd usr/app/ssl/certs/ && \
    keytool -delete -alias my-cert-name -keystore my-cert-name.jks -storepass password123! && \
    keytool -export -alias my-cert-name -keystore my-cert-name.jks \
        -file my-cert-name.crt -storepass password123! && \
    keytool -importcert -keystore trustStore.jks -alias my-cert-name -storepass password123! \
        -file my-cert-name.crt -noprompt

It is easier to double-check that you are importing the same name you have deleted (since -delete is a good way to force-update an existing certificate).

But the gist is: you delete in my-cert-name.jks, while you import into trustStore.jks. If the import fails, that means trustStore.jks already has a certificate for that name. If that certificate was already in the copied keystore, I would not export/re-import it. (I only imported it in my previous answer.)

Make sure "usr/app/ssl/certs" is the right path: I would rather use an absolute path than a relative one.

The OP fongfong confirms in the comments: "I should delete the existing alias from trustStore.jks, not my-cert-name.jks."
I useDockerfileto create an image for our web app which requiresHTTPS. However, I am gettingCertificate not imported, alias already existsJava exception. When I tried without usingDockerfile, just from command line, I was able to delete the existing alias andexport,importworked. But not withDockerfile. Any ideas? Thanks!Dockerfile:FROM openjdk:8-alpine #Starting https and certs configuration #Make directory for certs inside the container RUN mkdir -p usr/app/ssl/certs/ #Copy certs from local to the container COPY myWebApp/src/main/resources/PT/certificates/my-cert-name.jks usr/app/ssl/certs/ COPY myWebApp/src/main/resources/PT/certificates/trustStore.jks usr/app/ssl/certs/ #Export/Import certificate RUN cd usr/app/ssl/certs/ && \ keytool -delete -alias my-cert-name -keystore my-cert-name.jks -storepass password123! && \ keytool -export -alias my-cert-name -keystore my-cert-name.jks -file my-cert-name.crt -storepass password123! && \ keytool -importcert -keystore trustStore.jks -alias my-cert-name -storepass password123! -file my-cert-name.crt -noprompt #Ending https and certs configuration RUN mkdir -p /usr/app/myweb COPY myWebApp/target/myWeb.war /usr/app/myweb CMD java -Xms512M -Xmx6144M -XX:MaxMetaspaceSize=3072M -jar /usr/app/myweb/myWeb.war EXPOSE 8080Docker build command>docker build -it test-https-image .Env:Using Docker desktop on windows 10.Thanks in advance!
Dockerfile keytool: getting "Certificate alias <name> already exists" even using "keytool - delete"
Probably docker-compose doesn't exist in your $PATH env variable.

First you should remove any conflicting docker-compose:

rm /usr/local/bin/docker-compose

On most Linux systems, below is how I prefer installing docker & docker-compose (run the commands as root):

curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh
curl -L https://github.com/docker/compose/releases/download/1.17.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
usermod -aG docker $YOUR_USER
systemctl enable docker

Exit the current tty & log back in with $YOUR_USER. This will always install the latest docker engine CE & docker-compose (v1.17).
I have a docker compose fileversion: "3" services: mysql: image: mysql:latest container_name: locations-service-mysql environment: MYSQL_ROOT_PASSWORD: root MYSQL_USERNAME: root MYSQL_DATABASE: 'locations_schema' restart: always volumes: - mysql_data:/var/lib/mysql:rw phpmyadmin: image: phpmyadmin/phpmyadmin:latest ports: - 8181:80 environment: MYSQL_USERNAME: root MYSQL_ROOT_PASSWORD: root PMA_HOST: mysql depends_on: - mysql links: - mysql:mysql dropwizard: build: context : ../locations-service/ ports: - 8080:8080 - 8081:8081 depends_on: - mysql links: - mysql:mysql restart: always container_name: locations-service volumes: mysql_data:And i've configure a jenkins job to execute this file by calling another shell file "environment.sh", but it attempts to execute the following error appears:23:51:57 ./environment.sh: line 3: docker-compose: command not found 23:51:57 ./environment.sh: line 4: docker-compose: command not found 23:51:57 ./environment.sh: line 6: docker-compose: command not found 23:51:57 FAILED 23:51:57 23:51:57 FAILURE: Build failed with an exception. 23:51:57 23:51:57 * What went wrong: 23:51:57 Execution failed for task ':startDockerEnvironment'. 23:51:57 > Process 'command './environment.sh'' finished with non-zero exit value 127How can i download and configure docker-compose in jenkins server, also there's no plugin available!, for docker-compose
docker-compose: command not found on jenkins
Edit your Dockerfile:

FROM python:alpine3.7
RUN apk update && apk add --no-cache gcc g++ python3-dev unixodbc-dev
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
CMD python ./index.py

Edit your requirements.txt:

flask
SQLAlchemy
pyodbc
pandas
numpy
Hi, I have created a Dockerfile for my app as below, but it fails when I try to build the docker image.

FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
CMD python ./index.py

Here is the content of requirements.txt:

flask
numpy
pandas
SQLAlchemy
pyodbc

When it gets to RUN pip install -r requirements.txt, it can install flask, but after that, somewhere between numpy and pandas, it starts generating errors for many, many pages. Any help?
pip install in Dockerfile is failing [closed]
You need to indicate to docker that it is the UDP protocol.

From:
- 1338:1338
To:
- 1338:1338/udp
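For reference, this is how the ports list from the question could look with that change applied (a sketch; only the 1338 mapping actually changes):

ports:
  - "5672:5672"
  - "15672:15672"
  - "1337:1337"
  - "1338:1338/udp"
  - "5556:5556"
  - "3000:3000"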
I have some middleware running in a docker container.When I run this middlewareon my host machine everything works fine.When I ran it on thedockercontainer with all the necessaryports exposed and published:Dockerfile:EXPOSE 5672 15672 1337 1338 5556 3000Docker-compose.ymlports: - "5672:5672" - "15672:15672" - "1337:1337" - "1338:1338" - "5556:5556" - "3000:3000"It’s weird because I have rabbitmq and mule in that image. Rabbit works well beacause I can access the management console and my mule app publish in it.I have a flow, that with a quartz component publish in rabbitmq a keep alive each 30ms, and works well.But I have other flow which receives information in an UDP inbound endpoint and publish that on a rabbitmq queue.The inbound endpoind doesn´t receive anything, this endpoint listens in 0.0.0.0 and port 1338, and I am binding 1338:1338.So if I receive packages on my localhost:1338 in my host machine, the inbound endpoint should receive it no?Also in other flow I have a java client socket which gives me connection refeused.The strange thing is that nothing of this happens when I run this on my host machine, and in docker I have the ports exposed and published.Thanks everyone
No connection in docker with ports exposed and published
I finally managed to fix this by resetting docker back to factory defaults from Docker menu > Preferences > Uninstall / Reset > Reset to factory defaults (I'm using Docker for Mac beta). Note that this operation also wipes all docker images, volumes, networks, etc.
I've been tinkering with new Docker swarm mode. I can't fully recall the steps that I did, but now I'm stuck in situation where my docker engine is as a worker in a non-existing swarm:$ docker info ... Swarm: active NodeID: 1vndsuqa0r3paswufs7eq4po3 Is Manager: false Node Address: 192.168.65.2 ... $ docker swarm leave Error response from daemon: context deadline exceeded $ docker version Client: Version: 1.12.0 API version: 1.24 Go version: go1.6.3 Git commit: 8eab29e Built: Thu Jul 28 21:04:48 2016 OS/Arch: darwin/amd64 Experimental: true Server: Version: 1.12.0 API version: 1.24 Go version: go1.6.3 Git commit: 8eab29e Built: Thu Jul 28 21:04:48 2016 OS/Arch: linux/amd64 Experimental: trueHow could I get out the swarm mode?
Cannot leave swarm mode
Docker doesn't handle the CPU at all. It is just a composition of kernel namespacing, file-system layering (e.g. UnionFS) and process quotas.

When you run something in a docker container, it is just an executable running on your OS, without virtualisation; it has access only to a selected set of kernel objects (e.g. devices) and it is chrooted to an FS hierarchy resulting from overlaying various FSs (including the one in the docker container). Hence, Docker doesn't handle the CPU at all; it is completely orthogonal to your problem.

As Peter commented, there are essentially two ways to CPU-dispatch:

1. You load the right dynamic library (but every function call into the library uses a pointer).
2. You build multiple versions of the same statically-linked binary and run the right one.

The main issue is that sometimes ISA extensions are orthogonal, and this makes the number of combinations (i.e. the number of libraries/binaries) grow exponentially. So, considering that you are dealing with Docker's userbase, you can simplify the approach a bit (if combinations are a problem):

- Either make some ISA extensions required (if their absence would degrade performance too much). For the optional extensions you can use one of the approaches above.
- Or create only a few baseline containers. E.g. one for generic amd64, one for amd64-avx, one for amd64-avx2-aesni-tsx and similar. The idea is to create only a few containers that cover all, most and few of your users.

EDIT

As BeeOnRope pointed out in the comments, Docker has a version running on Windows. It uses Hyper-V to run a Linux VM with the Linux version of docker. As Hyper-V is a native VMM, apart from an extra layer, the same considerations apply. Similarly, there is a macOS version too; this time it uses a hypervisor framework based on xhyve.
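As a rough illustration of option 2, an image could ship several builds plus a small launcher that inspects the CPU flags visible in the container at start-up and execs the best matching binary. A sketch only; the binary names are made up:

#!/bin/sh
# Entrypoint sketch: pick the most optimised build the host CPU supports.
if grep -qw avx2 /proc/cpuinfo; then
    exec /app/myapp-avx2 "$@"
elif grep -qw avx /proc/cpuinfo; then
    exec /app/myapp-avx "$@"
else
    exec /app/myapp-generic "$@"
fi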
My application benefits greatly from advanced CPU features that gcc can use when it is run with -march=native. Docker can smooth over differences in OS, but how does it handle different CPUs? To build an application that can run on any CPU I would have to build for generic amd64, losing out on a lot of performance. Is there a good way to distribute Docker images when the application needs to be compiled separately for each CPU architecture?
Docker and -march native
The Telegraf Docker images now run the telegraf process as the telegraf user/group and no longer as the root user. In order to monitor the docker socket, which is traditionally owned by the root:docker group, you need to pass that group into the telegraf user. This can be done via:

--user telegraf:$(stat -c '%g' /var/run/docker.sock)
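The same idea expressed in the compose file from the question might look like this (a sketch; the numeric gid 998 is only an example, substitute the value printed by stat -c '%g' /var/run/docker.sock on your host, since compose does not run command substitution for you):

telegraf:
  image: telegraf
  user: "telegraf:998"
  volumes:
    - ./data/telegraf:/etc/telegraf
    - /var/run/docker.sock:/var/run/docker.sock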
I try to gather some metrics about my Docker containers using Telegraf. I have mounted the docker sock to it but I still receive an error message. What am I missing here?volumes: - ./data/telegraf:/etc/telegraf - /var/run/docker.sock:/var/run/docker.sock2021-10-29T20:11:30Z E! [inputs.docker] Error in plugin: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http:///var/run/docker.sock/v1.21/containers/json?filters={"status":["running"]}&limit=0": dial unix /var/run/docker.so[[inputs.docker]] endpoint = "unix:///var/run/docker.sock" gather_services = false container_names = [] source_tag = false container_name_include = [] container_name_exclude = [] timeout = "5s" perdevice = true total = false docker_label_include = [] docker_label_exclude = [] tag_env = ["JAVA_HOME", "HEAP_SIZE"]
Telegraf can not connect to Docker sock
There are two problems here.

First, nginx considers headers which contain underscores invalid, so the SCRIPT_NAME header is not accepted by the nginx in the container because it's invalid from nginx's point of view. Luckily, the nginx directive underscores_in_headers is here to help: just add underscores_in_headers on; to the server section of the nginx inside Docker (not to the host one).

Once this is done there is yet another issue: nginx forwards the header with HTTP_ prepended to its name. So now from the Django side you will see HTTP_SCRIPT_NAME instead of SCRIPT_NAME. But again, luckily for us, it can be easily fixed by adding a uwsgi_param SCRIPT_NAME $http_script_name; line in the nginx inside Docker.

Thus, the final nginx config inside Docker should look like:

server {
    underscores_in_headers on;  # <---- (1)
    listen 80;
    location / {
        include uwsgi_params;
        uwsgi_pass web:3031;
        uwsgi_param SCRIPT_NAME $http_script_name;  # <--- (2)
    }
}

In Django settings.py:

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = '/static'

# Bug in Django of not using SCRIPT_NAME header...
# See https://code.djangoproject.com/ticket/25598
# Let's implement dirty workaround for now
if os.getenv('SCRIPT_NAME'):
    STATIC_URL = os.getenv('SCRIPT_NAME') + STATIC_ROOT
I'm trying to make my web app (Django/wsgi-based) available from some subfolder of the main domain. I'm using docker for my app, and static files, so I have main nginx on my server as reverse proxy, another nginx in "nginx" container which routes the stuff for my app and uWSGI in the second container which serves actual Django dataAnd I want my app to be available externally asmyserver.com/mytool, in the same time I do not want to hardcodemytoolanywhere in my app. UsuallySCRIPT_NAMEheader is used for this type of stuff, so here is nginx configuration on the host:server { listen 80; # Just for sake of simplicity, of course in production it's 443 with SSL location /mytool/ { proxy_pass http://127.0.0.1:8000/; include proxy_params; proxy_set_header SCRIPT_NAME /mytool; # <--- Here I define my header which backend should use } }Then in mydocker-composeI expose 8000:80 for nginx and here is internal nginx configuration:server { listen 80; location / { include uwsgi_params; uwsgi_pass web:3031; } }With this configuration I would expect that my Django app receives SCRIPT_NAME header, but apparently it does not.In the same time if I define custom headers likeproxy_set_header X-something something;then this gets forwarded correctly and I can see it from Django.How should I passSCRIPT_NAMEto avoid path hardcode in my code?
uwsgi_pass does not forward SCRIPT_NAME header
Have you tried running this yourself to see what the error is? Like so:

$ docker run --rm -it ubuntu:16.10
[...]
root@96117efa0948:/# apt-get update
[...]
root@96117efa0948:/# apt-get install -y curl
[...]
root@96117efa0948:/# curl -sL https://deb.nodesource.com/setup_6.x | bash -
[...]
## Your distribution, identified as "Ubuntu Yakkety Yak (development branch)", is a pre-release version of Ubuntu. NodeSource does not maintain official support for Ubuntu versions until they are formally released. You can try using the manual installation instructions available at https://github.com/nodesource/distributions and use the latest supported Ubuntu version name as the distribution identifier, although this is not guaranteed to work.
root@96117efa0948:/#

So basically that blurb is telling you that your version of Ubuntu isn't supported yet. Try changing your config file to use ubuntu:16.04, or work out some other way to install node.
I'm trying to build a docker image with the following dockerfile:FROM ubuntu:16.10 MAINTAINER Fátima Alves COPY ./dist /myprogram/ WORKDIR /myprogram RUN apt-get update \ && \ apt-get install -y \ curl \ && \ curl -sL https://deb.nodesource.com/setup_6.x | bash - \ && \ apt-get install -y \ python-dev \ libxml2-dev \ libxslt1-devAnd no matter what i do, this message is appearing in the terminal:curl -sL https://deb.nodesource.com/setup_6.x | bash -' returned a non-zero code: 1I'm not finding anything related in google.Thanks!
installation of nodejs returned a non-zero code: 1 with docker build
Due to the small amount of information, and to clarify everything, I am posting a general Community wiki answer.

The solution to this problem was to use a reverse proxy server. In this documentation there is a definition of what exactly a reverse proxy server is:

A proxy server is a go-between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.

Common uses for a reverse proxy server include:
- Load balancing
- Web acceleration
- Security and anonymity

This is the guide where one can find a basic configuration of a proxy server. See also this article.
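A minimal nginx reverse-proxy sketch for this kind of setup, assuming the API is reachable on the minikube host at the URL printed by minikube service wedeliverapi --url (the address, port and path below are placeholders, not values from the question):

server {
    listen 8080;                               # port exposed to other devices on the LAN
    location /api {
        proxy_pass http://192.168.49.2:31234;  # placeholder: the URL/NodePort reported by minikube
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}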
I've created a service inside minikube (an expressjs API) running on my local machine, so when I launch the service using minikube service wedeliverapi --url I can access it from my browser with localhost:port/api. But I also want to access that service from another device so I can use my API from a Flutter mobile application. How can I achieve this goal?
How to expose a service from minikube to be able to access it from another device in the same network?
Your environment variables from the Jenkins shell will not be imported automatically. Add the environment variables through a .env file under your Jenkins job's workspace:

$ cat .env
registryUrl=zhcjie.distribution.ata.com:8652
image_version=1.0-SNAPSHOT

Then run docker-compose up.
I have a docker-compose.yml file with different variables:

version: "2"
services:
  data:
    image: "${registryUrl}/data:${image_version}"

In my shell, I export registryUrl & image_version:

export registryUrl=zhcjie.distribution.ata.com:8652
export image_version=1.0-SNAPSHOT
docker-compose up

That works on my local machine (I'm using boot2Docker), but it doesn't work in Jenkins, where I get the following message:

The registryUrl variable is not set. Defaulting to a blank string.
The image_version variable is not set. Defaulting to a blank string.

I tried to pass the env variables with the EnvInject plugin, but it doesn't work either.
docker-compose: exported environment variables are not working in Jenkins
From the definition of the devcontainer.json schema:

{
  "containerUser": {
    "type": "string",
    "description": "The user the container will be started with. The default is the user on the Docker image."
  }
}

So, containerUser is the same as the user on the Docker image.
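For example, the following two configurations should end up with the same effective user inside the dev container (a sketch; the image name is an illustrative placeholder, and "user-name" is the value from the question):

devcontainer.json:
{
  "image": "my-dev-image",
  "containerUser": "user-name"
}

Equivalent Dockerfile ending:
USER user-name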
From the doc:

containerUser: Overrides the user for all operations run inside the container. Defaults to either root or the last USER instruction in the related Dockerfile used to create the image.

Does it mean that setting containerUser in devcontainer.json as below:

"containerUser": "user-name"

is just the same as USER in a Dockerfile, as below?

USER user-name
VS Code devcontainer - what is the difference between containerUser and USER in Dockerfile?
When setting up a Server, the Host needs to match the server host name. For my case I set the server host to zrdn.

The web server needs to have the server name configured as well. In my case, I configured nginx like so:

server {
    listen 8080;
    server_name zrdn;
    ...

Thanks a million, @LazyOne!
I'm setup a docker container with SSH and FTP access.My local project looks like this:/Users/gezimhome/projects/ziprecipes.net/zip-recipesis my project dir. The source code for my WordPress plugin is insrcfolder. I have wordpress downloaded and extracted locally here in/Users/gezimhome/projects/ziprecipes.net/workdir/wordpress.Here are my deployment settings:My mappings:My server:In the docker container, wordpress is downloaded and uncompressed here:/usr/share/nginx/html/wordpress/and I map/Users/gezimhome/projects/ziprecipes.net/zip-recipes/srcto/usr/share/nginx/html/wordpress/wp-content/plugins/zip-recipeswhen creating the container.Xdebug is setup properly because I get thisIncoming Connection from Xdebugscreen:So, question is, since I already have the mapping why does it keep bothering me to do a mapping for wordpress files?!And the bigger question is, why are my breakpoints in my plugin not being hit at all?!Please help :(
PhpStorm mapping paths
Try changing the file permissions using an init container, as in the official Bitnami helm chart, where they also update file permissions and manage the security context.

Helm chart: https://github.com/bitnami/charts/blob/master/bitnami/mysql/templates/master-statefulset.yaml

UPDATE:

initContainers:
  - command:
      - /bin/bash
      - -ec
      - |
        chown -R 1001:1001 /bitnami/mysql
    image: docker.io/bitnami/minideb:buster
    imagePullPolicy: Always
    name: volume-permissions
    resources: {}
    securityContext:
      runAsUser: 0
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
      - mountPath: /bitnami/mysql
        name: data
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
  fsGroup: 1001
  runAsUser: 1001
serviceAccount: mysql
I was usingthisimage to run my application indocker-compose. However, when I run the same on a Kubernetes cluster I get the error[ERROR] Could not open file '/opt/bitnami/mysql/logs/mysqld.log' for error logging: Permission deniedHere's my deployment fileapiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.21.0 () creationTimestamp: null labels: io.kompose.service: common-db name: common-db spec: replicas: 1 selector: matchLabels: io.kompose.service: common-db strategy: type: Recreate template: metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.21.0 () creationTimestamp: null labels: io.kompose.service: common-db spec: containers: - env: - name: ALLOW_EMPTY_PASSWORD value: "yes" - name: MYSQL_DATABASE value: "common-development" - name: MYSQL_REPLICATION_MODE value: "master" - name: MYSQL_REPLICATION_PASSWORD value: "repl_password" - name: MYSQL_REPLICATION_USER value: "repl_user" image: bitnami/mysql:5.7 imagePullPolicy: "" name: common-db ports: - containerPort: 3306 securityContext: runAsUser: 0 resources: requests: memory: 512Mi cpu: 500m limits: memory: 512Mi cpu: 500m volumeMounts: - name: common-db-initdb mountPath: /opt/bitnami/mysql/conf/my_custom.cnf volumes: - name: common-db-initdb configMap: name: common-db-config serviceAccountName: "" status: {}The config map has the configmy.cnfdata. Any pointers on where I could be going wrong? Specially if the same image works in thedocker-compose?
Mysql container not starting up on Kubernetes
I suggest you add a health check directly at the container level (here). By doing so, docker pings the endpoint you specify periodically, and if the container is found unhealthy it will 1) stop routing traffic to it and 2) kill the container and restart a new one. Therefore your upstream will be resolved to one of the healthy containers, and there is no need to retry.

As for your additional questions: for the first one, docker won't start routing to a container until it is healthy. For the second, nginx is still useful to distribute traffic according to the endpoint URL. But personally nginx + swarm VIP mode is not a great choice, because the swarm load balancer is poorly documented, it doesn't support sticky sessions and you can't have proxy-level health checks; I would use traefik instead, which has its own load balancer.
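A sketch of such a container-level health check in a compose/stack file (the image name, endpoint and timings here are placeholders to adapt, and the image must contain curl for this exact test command to work):

services:
  service1:
    image: my-backend:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 30s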
We usedocker swarmwithservice discoveryfor BackendRESTapplication. The services in swarm are configured withendpoint_mode: vipand are running inglobalmode. Nginx is proxy passed with service discovery aliases. When we update Backend services sometimes nginx throws 502 as service discovery may point to the updating service.In such case, We wanted to retry the same endpoint again. How can we achieve this?According to thiswe added upstream with the host's private IP and usedproxy_next_upstream error timeout http_502;but still the problem persists.nginx.confupstream servers { server 192.168.1.2:443; #private ip of host machine server 192.168.1.2:443 backup; } server { listen 443 ssl http2 default_server; listen [::]:443 ssl http2 default_server; proxy_next_upstream http_502; location /endpoint1 { proxy_pass http://docker.service1:8080/endpoint1; } location /endpoint2 { proxy_pass http://docker.service2:8080/endpoint2; } location /endpoint3 { proxy_pass http://docker.service3:8080/endpoint3; } }Here ifhttp://docker.service1:8080/endpoint1throws502we want to hithttp://docker.service1:8080/endpoint1again.Additional queries:Is there any way in docker swarm to make it stop pointing to updating service in service discovery till that service is fully up?Is upstream necessary here since we directly use docker service discovery?
Nginx retry same end point on http_502 in Docker service Discovery
So I finally managed to solve this usingthis answer:What we want to do is invalidate the cache for a specific block in the Docker file and then run our update command. This is done by adding a build argument to the command (CLI or Makefile) like so:docker-compose -f docker-compose-dev.yml build --build-arg CACHEBUST=0And then Adding thisadditionalblock to the Docker file:ARG CACHEBUST=1 USER node RUN npm update @myorg/myorg-common-repoThis does what we want.TheARG CACHEBUST=1invalidates the cache and thenpm updatecommand runs without it.
Background:I'm writing code innode.js, usingnpmanddocker. I'm trying to get my docker file to use cache when I build it so it doesn't take too long.We have a "common" repo that we use to keep logic that is used in a variety of repositories and this gets propagated is npm packages.The problem:I want the docker file NOT use the cache on my "common" package.Docker file:FROM node:12-alpine as X RUN npm i npm@latest -g RUN mkdir /app && chown node:node /app WORKDIR /app RUN apk add --no-cache python3 make g++ tini \ && apk add --update tzdata USER node COPY package*.json ./ COPY .npmrc .npmrc RUN npm install --no-optional && npm cache clean --force ENV PATH /app/node_modules/.bin:$PATH COPY . .package.json has this line:"dependencies": { "@myorg/myorg-common-repo": "~1.0.13",I have tried adding these lines in a variety of places and nothing seems to work:RUN npm uninstall @myorg/myorg-common-repo && npm install @myorg/myorg-common-repoRUN npm update @myorg/myorg-common-repo --forceAny ideas on how I can get docker to build and not use the cache on@myorg/myorg-common-repo?
Run npm update in docker without using the cache on that specific update
You're mounting the volume incorrectly, specifically the path. It should be -v ~/[absolute path from $HOME]/src/work:~/notebooks/ Explanation: since your working directory is /notebooks, which places it at $HOME/notebooks, you use ~ to get to $HOME.
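For reference, a sketch of a corrected run command, assuming the goal is simply to have the host's work folder appear where the TensorFlow image keeps its notebooks (/notebooks); adjust the host side to wherever your work folder actually lives:

sudo docker run -it -v /src/work:/notebooks -p 8888:8888 tensorflow/tensorflow:1.3.0

Files created in the notebook UI then land in /src/work on the host, and anything already in /src/work shows up in Jupyter.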
Hi, I am trying to get the TensorFlow notebook folder mounted to the /src/work folder in Ubuntu: sudo docker run -it -v /src/work:/HOME/notebooks -p 8888:8888 tensorflow/tensorflow:1.3.0 I have tried many combinations of -v flags. It is not reading the files already in my work folder or saving new files to it.
volume mount tensorflow container for persistance storage
This is the solution on Windows 7, 8 and 10 Home: Find the docker machine environment variables. Go to the docker shell and type: docker-machine env. The docker host and the certificate path are important. Add the following properties to your pom.xml (maven) file: (e.g.) tcp://192.168.99.100:2376 and (e.g.) a path. Then, in your build plugin, just after configuration, reference ${docker.host.url} and ${docker.host.certPath}.
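The XML element names were stripped from the answer above; a sketch of how this usually looks with the fabric8 plugin (dockerHost and certPath are the plugin's standard configuration options, the property names come from the answer, and the certificate path value is only an example):

<properties>
    <docker.host.url>tcp://192.168.99.100:2376</docker.host.url>
    <!-- example path; take the real one from "docker-machine env" -->
    <docker.host.certPath>C:/Users/you/.docker/machine/machines/default</docker.host.certPath>
</properties>
...
<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <configuration>
        <dockerHost>${docker.host.url}</dockerHost>
        <certPath>${docker.host.certPath}</certPath>
        <!-- existing <images> configuration stays as-is -->
    </configuration>
</plugin>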
Via Maven I would like to build a Docker image from a Springboot project. I run: mvn clean package docker:build Issue:ERROR] Failed to execute goal io.fabric8:docker-maven-plugin:0.21.0:build (default-cli) on project spring-boot-docker: Execution default-cli of goal io.fabric8:docker-maven-plugin:0.21.0:build failed: An API incompatibility was encountered while executing io. fabric8:docker-maven-plugin:0.21.0:build: java.lang.UnsatisfiedLinkError: unknown [ERROR] ----------------------------------------------------- [ERROR] realm = plugin>io.fabric8:docker-maven-plugin:0.21.0 [ERROR] strategy = org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy [ERROR] urls[0] = file:/C:/Users/Johan/.m2/repository/io/fabric8/docker-maven-plugin/0.21.0/docker-maven-plugin-0.21.0.jar EtcThe maven pom.xml file contains: UTF-8 1.8 springframeworkguru springbootdocker unix:///var/run/docker.sock The build plugin section contains: org.springframework.boot spring-boot-maven-plugin io.fabric8 docker-maven-plugin 0.21.0 ${docker.host.url} true ${docker.image.prefix}/${docker.image.name} ${project.basedir}/src/main/docker/ artifact latest ${project.version} As suggested, I removed my maven repository, which did not help. Using other dockerHost values (likehttp://127.0.0.1:2375) did not help.I really hope you can help!
Docker maven fabric8 plugin (on Windows): building image gives incompatibility issues ?
So my question is: is there a way to pull the mcr.microsoft.com/windows:2004 docker image from the hosted agent?

I am afraid there is no way to pull the mcr.microsoft.com/windows:2004 docker image from the hosted agent.

That is because of matching container host versions with container image versions: Windows Server containers and the underlying host share a single kernel, so the container's base image must match that of the host. If the versions are different, the container may start, but full functionality isn't guaranteed. In other words, Windows requires the host OS version to match the container OS version. If you want to run a container based on a newer Windows build, make sure you have an equivalent host build. Otherwise, you can use Hyper-V isolation to run older containers on newer host builds.

So, we cannot pull the image windows:2004 (2004) from the hosted agent windows-latest or windows-2019 (1809). We can only pull the image windows:1809 with the hosted agent: docker pull mcr.microsoft.com/windows/servercore:1809. However, if I pull the image windows:1903 with the hosted agent, I get the error "no matching manifest for windows/amd64...". To verify my answer, I used a private agent hosted on Windows version 1903 (OS build 18362), and it works fine.

In summary, we cannot pull the windows:2004 (2004) image on the hosted agent (1809). The workaround for this request is to use a private agent. BTW, I have tested the solutions mentioned in the link in your question with a private agent. Neither switching to Linux containers nor setting "experimental": true solves this error.

Reference links: List of Microsoft Windows versions; Unable to pull images from microsoft
When usingWindows-2019hosted agent(Agent installed with 1809 windows version -Microsoft Windows Server 2019 Datacenter) as Agent Specification, We can't pullmcr.microsoft.com/windows:2004docker image.Exception:I'm familiar withthis solution(Which works perfectly locally). But, since Docker Desktop doesn't install on the agent I can't switch to Windows containers. Moreover, Install Docker Desktop is not an option since reboot required.Currently, Creating a build machine is not an option.So my question is: There is a way to pullmcr.microsoft.com/windows:2004docker image from the hosted agent?
Azure DevOps hosted-agent failed to pull windows:2004
First you have to start your composed containers with docker-compose up, which starts all of your defined services. Then you can attach to your running containers by name. You get the names of running containers from the output of docker ps, e.g.:

CONTAINER ID   IMAGE        COMMAND   CREATED        STATUS        PORTS                    NAMES
d6b317a4c10b   image        "..."     27 hours ago   Up 27 hours   0.0.0.0:4284->4284/tcp   container1
4fe15ab206b5   postgresql   "..."     27 hours ago   Up 27 hours   5432/tcp                 container2

So in this example container2 is my database, but I want to connect to my web application. I can directly start a shell in the running container: docker exec -it container1 bash, which starts a bash inside container1. From there you can run any command you like, e.g. your rails console. Also, you should use version 2 of docker-compose files, since version 1 lacks some features.
I've the next docker-compose container:# docker-compose.yml version: '2' services: web: build: . ports: - "80:80" volumes: - .:/home/app/NAME_OF_MY_APP db: image: postgres:9.4 ports: - "5432" environment: POSTGRES_USER: 'postgres'I cannot figure out how can I run the rails console. I'm using the passenger/nginx image and everything is working. However, my DB is on another container and I'd like to entry at rails console to create manually a couple of users.I tried with:sudo docker-compose run web rails cBut it appears the next error:ERROR: Cannot start service web: oci runtime error: exec: "rails": executable file not found in $PATHAlso, I tried:sudo docker-compose run web "rails c"But it stills showing the same output.I'd like to entry at console, entry some users and store it on the postgres DB.Thanks in advance!
I can't run rails console with Docker and Passenger/nginx image
Try dropping in a RUN ln -s fnizz.webapi.dll entrypoint.dll and changing your ENTRYPOINT to ENTRYPOINT [ "dotnet", "entrypoint.dll" ]. I believe dotnet might be finicky about DLL names and extensions. This pattern also lets you genericize the assembly name, which is sometimes useful.
I'm trying to dockerize a aspnetcore webapi. I followed the tutorial here:https://docs.docker.com/engine/examples/dotnetcore/But when I run my container I have this message:Did you mean to run dotnet SDK commands? Please install dotnet SDK from: http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409I download the code and build the image from the Dockerfile the github site:https://github.com/dotnet/dotnet-docker-samples/tree/master/aspnetappI run the container... and it works... I compared two Dockerfile and they are very similar:Mine:FROM microsoft/aspnetcore-build:2.0.5-2.1.4 AS build-env WORKDIR /app # Copy csproj and restore as distinct layers COPY *.csproj ./ RUN dotnet restore # Copy everything else and build COPY . ./ RUN dotnet publish -c Release -o out # Build runtime image FROM microsoft/aspnetcore:2.0.5 WORKDIR /app COPY --from=build-env /app/out . ENTRYPOINT ["dotnet", "fnizz.webapi.dll"]And the one from the github sample:FROM microsoft/aspnetcore-build:2.0 AS build-env WORKDIR /app # copy csproj and restore as distinct layers COPY *.csproj ./ RUN dotnet restore # copy everything else and build COPY . ./ RUN dotnet publish -c Release -o out # build runtime image FROM microsoft/aspnetcore:2.0 WORKDIR /app COPY --from=build-env /app/out . ENTRYPOINT ["dotnet", "aspnetapp.dll"]If there is missing info, tell me, I'll add it. Thank you !
Dockerize an asnet core webapi
As@Kamelia Ymentioned about thehttps://issuetracker.google.com/issues/137517429There is a mention on workaround used @type parser format json key_name message reserve_data false emit_invalid_record_to_error false The above snippet parses the logs into JSON and injest to Cloud Logging.In this discussion inGoogle Groupson Stackdriver, we have discussed on how to use it with startup-script.Here is the snippet for startup script.cp /etc/stackdriver/logging.config.d/fluentd-lakitu.conf /etc/stackdriver/logging.config.d/fluentd-lakitu.conf-save # Shorter version of the above: cp /etc/stackdriver/logging.config.d/fluentd-lakitu.conf{,-save} ( head -n -2 /etc/stackdriver/logging.config.d/fluentd-lakitu.conf-save; cat < @type parser format json key_name message reserve_data false emit_invalid_record_to_error false EOF ) > /etc/stackdriver/logging.config.d/fluentd-lakitu.conf sudo systemctl start stackdriver-loggingThis image can be used to generate random JSON logs.https://hub.docker.com/repository/docker/patelathreya/json-random-logger
I am able to injest logs to Google Log Viewer with the help of stackdriver logging agent from Container Optimized OS as JSON.It injests logs as a value to message, but not as json payload with the default configurationWhat I have tried?I have changed the fluentd config in /etc/stackdriver/logging.config.d/fluentd-lakitu.conf to the following: @type tail format json path /var/lib/docker/containers/*/*.log @type json pos_file /var/log/google-fluentd/containers.log.pos tag reform_contain read_from_head true But its unable to send logs to Log viewerOS:Container Optimized OS cos-81-12871-1196-0
Injest logs as JSON in Container Optimized OS
The environment variables passed in the docker-compose.yml are strings, so you don't need to pass the quotes. InfluxDB is looking for the certificate under "/etc/ssl/influxdb-selfsigned.crt"... literally, with the quotes as part of the path. Simply remove the quotes and the DB will start:

...
      - INFLUXDB_HTTP_HTTPS_ENABLED=true
      - INFLUXDB_HTTP_HTTPS_CERTIFICATE=/etc/ssl/influxdb-selfsigned.crt
      - INFLUXDB_HTTP_HTTPS_PRIVATE_KEY=/etc/ssl/influxdb-selfsigned.key
...
I'm having some troubles trying to configure SSL with InfluxDB v1.8 running on Docker Compose.I followed theofficial documentationto enable HTTPS with self-signed certificate, but the container crashes with the following error:run: open server: open service: open "/etc/ssl/influxdb-selfsigned.crt": no such file or directoryIt works if I run this configuration usingdocker runcommand:docker run -p 8086:8086 -v $PWD/ssl:/etc/ssl \ -e INFLUXDB_DB=db0 \ -e INFLUXDB_ADMIN_USER=admin \ -e INFLUXDB_ADMIN_PASSWORD=supersecretpassword \ -e INFLUXDB_HTTP_HTTPS_ENABLED=true \ -e INFLUXDB_HTTP_HTTPS_CERTIFICATE="/etc/ssl/influxdb-selfsigned.crt" \ -e INFLUXDB_HTTP_HTTPS_PRIVATE_KEY="/etc/ssl/influxdb-selfsigned.key" \ -d influxdbMy docker-compose.yml is the following:version: "3" services: influxdb: image: influxdb ports: - "8086:8086" volumes: - influxdb:/var/lib/influxdb - ./ssl:/etc/ssl/ environment: - INFLUXDB_DB=db0 - INFLUXDB_ADMIN_USER=admin - INFLUXDB_ADMIN_PASSWORD=supersecretpassword - INFLUXDB_HTTP_HTTPS_ENABLED=true - INFLUXDB_HTTP_HTTPS_CERTIFICATE="/etc/ssl/influxdb-selfsigned.crt" - INFLUXDB_HTTP_HTTPS_PRIVATE_KEY="/etc/ssl/influxdb-selfsigned.key" - INFLUXDB_HTTP_AUTH_ENABLED=true volumes: influxdb:If I setINFLUXDB_HTTP_HTTPS_ENABLEDto false, I can see that cert and key files are mounted as they should in/etc/sslin the container (docker exec -it airq_influxdb_1 ls -la /etc/ssl)Do you have any idea why this is happening and how to solve it?
InfluxDB on Docker-Compose can't read SSL cert file
It seems you are using a filesystem that is not supported by the overlay2 storage driver. Please have a look at the supported filesystems for each storage driver. So, first, check which filesystem you're using with df -h (df -T shows the filesystem type). Then you have two options: change the Docker storage driver in the file /etc/docker/daemon.json and use one that your filesystem supports, or change the filesystem used for that location to one that supports OverlayFS as the storage driver.
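A sketch of the first option, i.e. switching the storage driver in /etc/docker/daemon.json (the data-root path is only an example of pointing Docker at the external drive, and vfs is shown merely because it works on almost any filesystem, at the cost of speed and disk usage):

{
  "data-root": "/mnt/external/docker",
  "storage-driver": "vfs"
}

After editing the file, restart the daemon with systemctl restart docker. Keeping the data on an ext4-formatted drive and staying on overlay2 is usually the better long-term choice.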
I've tried to move my Docker's directory from/var/lib/dockerto an external hard drive, which is formatted with NTFS. I've followedthis guide. However, when I dosystemctl start dockerI get an error, and in the journal I find these:Jun 15 11:38:32 lampo.sial kernel: overlayfs: upper fs does not support tmpfile. Jun 15 11:38:32 lampo.sial kernel: overlayfs: upper fs does not support RENAME_WHITEOUT. Jun 15 11:38:32 lampo.sial kernel: overlayfs: upper fs missing required features. Jun 15 11:38:32 lampo.sial dockerd[7728]: time="2023-06-15T11:38:32.910051824+01:00" level=error msg="failed to mount overlay: invalid argument" storage-driver=overlay2 Jun 15 11:38:32 lampo.sial dockerd[7728]: time="2023-06-15T11:38:32.910356041+01:00" level=error msg="[graphdriver] prior storage driver overlay2 failed: driver not supported" Jun 15 11:38:32 lampo.sial dockerd[7728]: failed to start daemon: error initializing graphdriver: driver not supportedI'm doubting whether I can move that folder to an NTFS filesystem. What can I do?
Error after moving Docker's dir to NTFS: overlayfs: upper fs does not support <xxx>
Ensure these libraries are installed (in particular, libssl-dev):

RUN apt install -y libmemcached-dev zlib1g-dev libssl-dev

Credit to AKorezin: https://github.com/php-memcached-dev/php-memcached/issues/541#issuecomment-1624041385

Then you can follow the usual PECL install process:

RUN yes '' | pecl install -f memcached-3.2.0 \
    && docker-php-ext-enable memcached
I have aDockerfilerelying onPHP:8.1-apache, running since months.OncePHP:8.1-apachestarted to use Debian bookworm, the memcached client started to give an error while building the image.TheDockerfilerows involved areFROM php:8.1-apache ... RUN apt-get update --fix-missing -q \ && apt-get install -y curl mcrypt gnupg build-essential software-properties-common wget vim zip unzip libxml2-dev libz-dev libpng-dev libmemcached-dev \ && pecl install memcached \ && docker-php-ext-enable memcached \ ...The error at image build time is:checking for libmemcached location... configure: error: memcached support requires libmemcached. Use --with-libmemcached-dir= to specify the prefix where libmemcached headers and library are located ERROR: `/tmp/pear/temp/memcached/configure --with-php-config=/usr/local/bin/php-config --with-libmemcached-dir=no --with-zlib-dir=no --with-system-fastlz=no --enable-memcached-igbinary=no --enable-memcached-msgpack=no --enable-memcached-json=no --enable-memcached-protocol=no --enable-memcached-sasl=yes --enable-memcached-session=yes' failedPinning the oldstable version solves the problem,FROM php:8.1-apache-bullseyeAnd that clearly indicates that the issue is caused by the switch to new Debian Version.What could be done to usebookwormand continue to use the same libraries and process ?
Problem adding Memcached support in Docker for PHP8.1 using bookworm
I prefer using named volumes, as you can easily mount them into a new container. But for an unnamed volume, I: run my container (the VOLUME directive makes it create a new volume at a new path that you can get by inspecting it), then move the contents of the old volume to that new path. Before the docker volume commands existed, I used to do that with a script: updateDataContainerPath.sh. But again, these days none of my images have a VOLUME in them: I create separately named volumes (docker volume create) and mount them into containers at runtime (docker run -v my-named-volume:/my/path).
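A sketch of reusing the existing anonymous "VolumeA" with a new container; the container and image names are placeholders, and the volume id comes from inspecting the old container (or from docker volume ls):

# find the anonymous volume the old container was using and its mount point
docker inspect -f '{{ range .Mounts }}{{ .Name }} -> {{ .Destination }}{{ "\n" }}{{ end }}' old-container

# start a new container that mounts that same volume at the same path
docker run -d -v <volume-id-from-above>:/path/to/something my-image

Since anonymous volume ids are unwieldy, it is usually nicer to create a named volume once and mount it explicitly, as described above.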
I'm using a docker volume, specified in my dockerfile so that my data can persist on the host. The dockerfile looks something like this:FROM base-image VOLUME /path/to/something RUN do_stuff ....When I run the container it creates a volume (call itVolumeA) which I can see when I do adocker volume ls.If I stop and remove the container, theVolumeAsticks around as expected.My question is, if I run a new version of the container, is there a way to useVolumeArather than have it create a new one?
Reattaching orphaned docker volumes
You can use the tags configuration element: https://dmp.fabric8.io/#build-configuration ... ${project.version} ... repo/something/%a:%l ... ${docker.image-tag} ... This will tag your image with both the %l behavior and the custom ${docker.image-tag} you set: mvn docker:build -Ddocker.image-tag=mytag
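The XML markup was stripped from the answer above; a sketch of what the intended build configuration usually looks like with the fabric8 docker-maven-plugin (the extra ${docker.image-tag} property is the one passed on the command line; name and version values are illustrative):

<configuration>
    <images>
        <image>
            <name>repo/something/%a:%l</name>
            <build>
                <tags>
                    <!-- additional tags on top of the %l placeholder behavior -->
                    <tag>${docker.image-tag}</tag>
                </tags>
                <!-- rest of the existing build configuration -->
            </build>
        </image>
    </images>
</configuration>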
I have the fabric8 docker-maven-plugin configured in my pom.xml as follows: ... ... io.fabric8 docker-maven-plugin ${docker.plugin.version} package build ${docker.image.prefix}/${project.artifactId}:%l Dockerfile artifact ... ... I'm using the%lplaceholder which tags the image with thelatestlabel if the version contains-SNAPSHOT, otherwise it uses the pom version. When building from CI, I'd like to include some additional tags (possibly more then one) to my image (e.g. build number / branch name) but I'd like to keep%lplaceholder behavior. I think that it should be possible using maven properties from command line, but I couldn't figure it out from the plugin docs (https://dmp.fabric8.io/)How can I include additional tags when executing the docker:build goal?
fabric8 docker-maven-plugin: include additional tags on build
I ended up having a few options. If the docker container needs to run multiple services, writing the env vars to /etc/environment makes them available to all users. I added the following line to my Dockerfile CMD:

CMD ["env | grep _ >> /etc/environment"]

If the docker container runs a single service, it's best to set the entrypoint to the desired application; the env vars are then automatically visible to the application's user. This is my Dockerfile CMD & ENTRYPOINT to run apache:

ENTRYPOINT ["/usr/sbin/httpd"]
CMD ["-D", "FOREGROUND"]
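Note that the exec-form CMD above is not run through a shell, so the pipe and redirection need a shell wrapper. A sketch that combines both ideas in one command, assuming httpd lives at /usr/sbin/httpd as in the answer:

# write the container's env vars where Apache/PHP users can read them,
# then start Apache in the foreground as PID 1
CMD ["sh", "-c", "env | grep _ >> /etc/environment && exec /usr/sbin/httpd -D FOREGROUND"]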
Im having issues accessing OS environment variables in php I have apache/php installed on a centos 6.3 imagein httpd.conf mod mod_env.so is loaded in php.ini I have set variables_order = "EGPCS" restarted httpd (many times)in shell if I type "env" I getDB_PORT_28017_TCP_PROTO=tcp HOSTNAME=c6188a8bd77f DB_NAME=/rockmongo/db DB_PORT_27017_TCP=tcp://172.17.0.36:27017 TERM=xterm DB_PORT_28017_TCP_PORT=28017 DB_PORT=tcp://172.17.0.36:27017 DB_PORT_27017_TCP_PORT=27017 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin PWD=/etc/php.d DB_PORT_27017_TCP_PROTO=tcp DB_PORT_28017_TCP_ADDR=172.17.0.36 DB_PORT_28017_TCP=tcp://172.17.0.36:28017 SHLVL=1 HOME=/ DB_PORT_27017_TCP_ADDR=172.17.0.36 container=lxc _=/usr/bin/env OLDPWD=/etcwhich has the variables im after, however if I executeprint_r($_ENV);in php I getArray ( [TERM] => xterm [PATH] => /sbin:/usr/sbin:/bin:/usr/bin [PWD] => / [LANG] => C [SHLVL] => 2 [_] => /usr/sbin/httpd )have also looked in $_SERVER & $GLOBALS.Interestingly if I executephp -iin shell I see the env variables set correctly in _ENVI should note im running this in a docker container, however I dont believe it is a issue as variables display correctly in #env & #php -i. I think I have a issue with my httpd/php configAnyone have advice for this? thanks
Cant access environment variables in php
It looks like pgsql is not included in the PHP Docker image. I used docker-php-extension-installer to add the extensions I need to my Docker image. I added the following two lines to my Dockerfile and everything is working as expected now:

ADD https://raw.githubusercontent.com/mlocati/docker-php-extension-installer/master/install-php-extensions /usr/local/bin/
RUN chmod uga+x /usr/local/bin/install-php-extensions && sync && \
    install-php-extensions pdo_pgsql
I am trying to setup a docker image for an app using laravel and postgres but I'm running into difficulties trying to install the php driver for postgres.My Dockerfile:FROM php:7.4-fpm # Arguments defined in docker-compose.yml ARG user ARG uid # Install system dependencies RUN apt-get update && apt-get install -y \ git \ curl \ libpng-dev \ libonig-dev \ libxml2-dev \ zip \ unzip \ postgresql-client \ libpq-dev \ php7.4-pgsql # Clear cache RUN apt-get clean && rm -rf /var/lib/apt/lists/* # Install PHP extensions RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd # Get latest Composer COPY --from=composer:latest /usr/bin/composer /usr/bin/composer # Create system user to run Composer and Artisan Commands RUN useradd -G www-data,root -u $uid -d /home/$user $user RUN mkdir -p /home/$user/.composer && \ chown -R $user:$user /home/$user # Set working directory WORKDIR /var/www USER $userThe error I am receiving:E: Unable to locate package php7.4-pgsql E: Couldn't find any package by glob 'php7.4-pgsql' E: Couldn't find any package by regex 'php7.4-pgsql' ERROR: Service 'app' failed to build: The command '/bin/sh -c apt-get update && apt-get install -y php7.4-pgsql' returned a non-zero code: 100
Unable to locate package in docker image
EDIT: You can simply use the hostname of the docker container in the uwsgi_pass directive, as both docker containers are on the same network:

location / {
    include uwsgi_params;
    uwsgi_pass flaskapp:8080;
}

0.0.0.0 isn't the IP address of the server; it essentially tells the server to listen on every IP the device has. To connect to it from nginx, you need to use the IP address of the container instead. You can find the IP address of the container running uWSGI with the following command: docker inspect CONTAINER_ID, where CONTAINER_ID is the ID of the container you started uWSGI in. From there you can update the nginx config as follows: uwsgi_pass IP_ADDRESS:8080; where IP_ADDRESS is the one you found with the command above. You can also set the IP address of the container when you start it with the --ip option. Be careful, however, to ensure that the IP address you set is in the same subnet as the addresses Docker normally assigns.
I am trying to setup two docker containers(yes separate without docker-compose): one with nginx and one with uwsgi with basic flask app.I run containers in same network within dockerMy nginx config for site added/linked to sites-enabled(everything else is default):server { listen 80; server_name 127.0.0.1; location / { include uwsgi_params; uwsgi_pass 0.0.0.0:8080; } }My uwsgi.ini[uwsgi] module = app:app master = true processes = 2 socket = 0.0.0.0:8080uwsgi entry point in docker looks like.local/bin/uwsgi --ini uwsgi.iniContainers run fine on their own - uwsgi receives request on 8080 and nginx receives expected requests. How ever when I try to access 127.0.0.1 i get 502 status code and nginx logs error:1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.4.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "uwsgi://0.0.0.0:8080", host: "127.0.0.1"By googling i find solution that rather use one container and some_socket.sock as file or use docker compose. Apparently problem with permissions, but I do not know how to solve them or diagnose.I launch containers with these commands:docker run --network app_network --name nginx --rm -p 80:80 my_nginx docker run --network app_network --name flaskapp --rm -p 8080:8080 my_uwsgi
Connection refused: when uwsgi and nginx in different containers
By default it is not possible to run docker-in-docker (DIND), as a security measure. This section in the Gitlab docs is your solution: you must use Docker-in-Docker. After configuring your runner to use DIND, your .gitlab-ci.yml will look like this:

#gitlab-ci
image: docker:latest

variables:
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

before_script:
  - docker info

stages:
  - build
  - deploy

build_application:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA . -f Dockerfile
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA-test
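A sketch of the runner side mentioned above, i.e. configuring the runner with privileged mode so the docker:dind service can start (URL and token are placeholders; the file lives under the config volume you mounted at /srv/gitlab-runner/config):

# config.toml
[[runners]]
  name = "dind-runner"
  url = "https://gitlab.example.com/"
  token = "REGISTRATION_TOKEN"   # placeholder
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true
    volumes = ["/cache"]

With privileged = true set, the dind service has what it needs, and the job's docker CLI talks to that service instead of the host socket.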
I'm testing gitlab-ci and trying to generate an image on the registry from the Dockerfile.I have the same code just to test:#gitlab-ci image: docker:latest tages: - build - deploy build_application: stage: build script: - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA . -f Dockerfile - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA-testoutput:Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?docker is running the image is being pulled but I can not execute docker commands.In my local environment if a run:docker run -it docker:latestI stay inside the container and run docker info i have the same problem. I had to fix it by running the container on this way:docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latestbut I do not know how to fix it on gitlab-ci. I configured my runner so:docker run -d --name gitlab-runner --restart always \ -v /srv/gitlab-runner/config:/etc/gitlab-runner \ -v /var/run/docker.sock:/var/run/docker.sock \ gitlab/gitlab-runner:latestMaybe someone can put me in the right direction. thanks
can not run docker latest on gitlab-ci runner
No, you don't get fully automated scaling with basic ECS. What you can do is create an alarm for when load gets high and have the alarm trigger an update to increase the cluster size.

Update Nov 29, 2017: AWS Fargate is a technology for Amazon ECS and EKS* that allows you to run containers without having to manage servers or clusters. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. This lets containers scale without worrying about the underlying infrastructure, using ECS service-level scaling configurations.
Amazon'sEC2 Container Serviceallows you to run any amount of containers you want, it will choose an EC2 instance(s) to run the containers on automatically. Which are great features. However, we are really concerned aboutautomatic scalability.Scenario:I launch a container via AWS ECS Console.The HTTP requests are starting to come up.The HTTP load increases significantly with time.CPU (or RAM) usage of the container is getting closer to 100%.Question 1: Will ECS run one more containerautomatically?Question 2: Will ECS automatically shut one of the containers down when CPU (or RAM) load gets low?
Does AWS ECS support per container dynamic scalability?
You named your container some-redis but are trying to connect with the name redis. Try: docker exec -it some-redis redis-cli
Something simillar (Unable to connect to MYSQL from Docker Instanceandredis connect timeout to remote server in a dockerandCalling redis-cli in docker-compose setup) I tried to run for the Redis on Docker.I start theDocker servicelike this:docker run --name some-redis -d redisOutput:docker run --name some-redis -d redis d2ea8a77ba543b3e85020de6bc450e0d50ce9f60e0307e52fd4ae394bd29722I re-verified usingdocker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d2ea8a77ba54 redis "docker-entrypoint.sh" 6 minutes ago Up 6 minutes 6379/tcp some-redis 1be4f5dde2fb mysql/mysql-server:latest "/entrypoint.sh mysql" About an hour ago Up About an hour (healthy) 0.0.0.0:3306->3306/tcp, 33060/tcp mysql e7d9e3713f5c ubuntu "/bin/bash" 6 days ago Up 6 days angry_hodgkinWhen I execute the below commands, its not workingdocker exec -it redis redis-cli Error response from daemon: No such container: redis
Unable to connect to Redis from Docker
Consider Docker images similar to android/iOS mobile apps. You are never quite sure if they are safe to run, but the probability of it being safe is higher when it's from an official source such as Google play or App Store. More concretely Docker images coming from Docker hub go through security scans details of which are undisclosed as yet. So chances of a malicious image pulled from Docker hub are rare. However, one can never be paranoid enough when it comes to security. There are two ways to make sure all images coming from any source are secure:Proactive security: Do security source code review of each Dockerfile corresponding to Docker image, including base images which you have already expressed in questionReactive security: Run Docker bench, open sourced by Docker Inc., which runs as a privileged container looking for runtime known malicious activities by containers.In summary, whenever possible use Docker images from Docker hub. Perform security code reviews ofDockerFiles. Run Docker bench or any other equivalent tool that can catch malicious activities performed by containers.References:Docker security scanning formerly known as Project Nautilus:https://blog.docker.com/2016/05/docker-security-scanning/Docker bench:https://github.com/docker/docker-bench-securityBest practices for Dockerfile:https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
How to ensure, that docker container will be secure, especially when using third party containers or base images?Is it correct, when using base image, it may initiate any services or mount arbitrary partitions of host filesystem under the hood, and potentially send sensitive data to attacker?So if I use third party container, which Dockerfile proves the container to be safe, should I traverse the whole linked list of base images (potentially very long) to ensure the container is actually safe and does what it intends of doing?How to ensure the trustworthy of docker container in a systematic and definite way?
Docker security concerns using unofficial images
Yaml files are space sensitive. You tried to define external_links at the top level of the file rather than as part of a service. This should be syntactically correct:

version: '3.4'
services:
  local-app:
    build: ./app/
    command: node app
    ports:
      - '7001:7001'
    links:
      - search-svc
    external_links:
      - search-svc
networks:
  docker_app-network:
    external: true

That said, linking is deprecated in docker; it is preferred to use a common network (excluding the default bridge network named bridge) and then use the integrated DNS server for service discovery. It looks like you have defined your common network but didn't use it. This would place your service on that network and rely on DNS:

version: '3.4'
services:
  local-app:
    build: ./app/
    command: node app
    ports:
      - '7001:7001'
    networks:
      - docker_app-network
networks:
  docker_app-network:
    external: true
I have the following docker-compose file content:version: '3.4' services: local-app: build: ./app/ command: node app ports: - '7001:7001' links: - search-svc networks: docker_app-network: external: true external_links: -search-svcBasically what I 'm trying to do is to link the ' local-app ' container with another already running container the ' search-svc '. By running the docker compose I get the following error:The Compose file './docker-compose.yaml' is invalid because: Invalid top-level property "external_links". Valid top-level sections for this Compose file are: secrets, version, volumes, services, configs, networks, and extensions starting with "x-". You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under theserviceskey, or omit theversionkey and place your service definitions at the root of the file to use version 1.I have read the documentation but I can't find any solution to my problem. Can anyone suggest anything that might help?Thanks in advance
Invalid top-level property "external_links"
As mentioned in Docker links: Docker also defines a set of environment variables for each port exposed by the source container. Each variable has a unique prefix of the form name_PORT_port_protocol, where the components are: the alias specified in the --link parameter (for example, webdb), the port number exposed, and the protocol, which is either TCP or UDP. That means you need to make sure that Container1 exposes the right port with the right protocol (in your case, UDP): see "How do I expose a UDP Port on Docker?"
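For illustration, a sketch of what this looks like for a UDP port (image names are placeholders; the link alias "c1" and the exact addresses are examples, not values from the question):

# container 1 must expose/publish the port as UDP
docker run -d --name container1 -p 5043:5043/udp my-image-1

# container 2, linked with alias "c1", then sees variables such as:
#   C1_PORT_5043_UDP=udp://172.17.0.2:5043
#   C1_PORT_5043_UDP_ADDR=172.17.0.2
#   C1_PORT_5043_UDP_PORT=5043
#   C1_PORT_5043_UDP_PROTO=udp
docker run -d --name container2 --link container1:c1 my-image-2

If the image only declares EXPOSE 5043 without /udp, only the TCP variables are generated, which matches the "connection refused" symptom in scenario 2.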
I have two docker containers in the following setup on a host machine:Container 1- UDP Port 5043 is mapped to host port 5043 (0.0.0.0:5043:5043)Container 2- Needs to send data to Container 1 on port 5043 as UDP.Scenario 1I start Container 1 and obtain it's IP address.I use this IP address and configure Container 2 with it and start it.Container 2 is able to send UDP data to Container 1 by callingudp://Container_1_IP:5043EVERYTHING WORKS!!Scenario 2I start Container 1 by mapping 5043 UDP port to host's 5043 port (0.0.0.0:5043:5043)I link Container 2 and Container 1 using '--links'.Now, when Container 2 invokes the URLudp://Container_1_IP:5043, an error is thrown "Connection refused".I did verify that I am able to ping the Container 1 from inside the Container 2 using the IP.Any help to get the Scenario 2 working for me would be really appreciated!!
Communication between linked docker containers
I ended up getting MAVProxy on host and dronekit-python in the docker flask container properly connected.Seemus790's answer in thisgitter threaddid the trick.Working solution: MAVProxy on host machine (Mac OS in my case)mavproxy.py --master=127.0.0.1:14550 --out udp:127.0.0.1:14551 --out udp:10.55.222.120:14550 --out=tcpin:0.0.0.0:14552dronekit-python command in docker container:vehicle = connect('tcp:host.docker.internal:14552', wait_ready=True)The trick was the --out=tcpin:0.0.0.0:14552 part of the mavproxy command which is documentedhere
I am using dronekit-python in a docker container and am attempting to connect to an instance of MAVProxy running on my host machine (Mac OSX) using the following command:vehicle = connect('udp:host.docker.internal:14551', wait_ready=True)but am getting the following error:File "/usr/local/lib/python3.7/site-packages/pymavlink/mavutil.py", line 1015, in __init__ self.port.bind((a[0], int(a[1]))) OSError: [Errno 99] Cannot assign requested addressDoes anyone know what the issue is here? I am able to successfully connect using the above command when I run the python script locally on host but not when I have it running in a docker container.I found a similar stackoverflow questionherebut the accepted answer did not work for me. Not sure if I need to be exposing ports or something like that.Here is the command that I am running on my host machine to kick off MAVProxy:mavproxy.py --master=127.0.0.1:14550 --out udp:127.0.0.1:14551 --out udp:10.55.222.120:14550 --out udp:127.0.0.1:14552
Dronekit-python running in docker connecting to MAVProxy on host
You should use a volume for that. First, create a volume:

docker volume create --name shared

Then run the containers like this (each with its own image; the image names here are placeholders):

docker run -v shared:/shared-folder <image-1>
docker run -v shared:/shared-folder <image-2>

This way, /shared-folder is shared between these two containers. Read more about it here. Hope it helps.
I have 2 docker containers running on my system.I wanted to copy the data from one container to another container from my host system itself.i know that to copy data from container to host we have to usedocker cp :path in containerNow i am trying to copy the data directly from one container to another, is there any way to do that ??i tried doing this.docker cp :/usr/local/nginx/vishnu/vishtest.txt :/home/smadmin/vishnusource/but the above command failed saying its not supported.i should not copy the data to my local machine, thats my requirement.anybody have an idea to do this, thanks in advance ?
Copying data from and to Docker containers
You can try using a custom network with the --internal option and then attaching your container to this network:

$ docker network create --internal internal-network
$ docker run --rm -it -p 8000:8000 --network=internal-network python bash
I want to run a docker container that has no access to the outside internet. I've been using--network=nonefor this successfully. But now I want to host a web server from that container, and access it from outside. When I try, I find that the port mapping is totally ignored:$ docker run --rm -it -p 8000:8000 --network=none python bash # python -m http.server Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...Now from outside the container:$ docker port 981f253788ad $ curl localhost:8000 curl: (7) Failed to connect to localhost port 8000: Connection refused
How can I get a docker container to expose a port while blocking the internet at large?
I had to remove https:// from the URL; now it works fine.
Im trying to make a jenkins pipline that clones code from git and build a docker image then push it to nexus registry so thats what in my jenkins file :pipeline{ agent any environment{ DOCKERHUB_CREDENTIALS=credentials('docker_hub') NEXUS_CREDENTIALS = credentials('nexus') } stages{ stage('Build'){ steps{ sh 'docker build -t my-app .' } } stage('Login'){ steps{ sh 'echo $NEXUS_CREDENTIALS_PSW | docker login -u $NEXUS_CREDENTIALS_USR --password-stdin http://localhost:8095/repository/docker-private-repo/' } } stage('Push'){ steps{ sh 'docker tag my-app:latest http://localhost:8095/docker-private-repo/my-app:latest' sh 'docker push http://localhost:8095/docker-private-repo/my-app:latest' } } } post{ always{ sh 'docker logout' } } }for cloning the git code im using pipeline SCM , anyway the build stage and login stage are working fine but for the pushing stage i get this error "Error parsing reference: "http://localhost:8095/docker-private-repo/my-app:latest" is not a valid repository/tag: invalid reference format" i dont know what wrong with the tag command ? how can i solve this ?
Error parsing reference: is not a valid repository/tag: invalid reference format
According to createsuperuser -h and this doc, the createsuperuser command does not support a --password flag. To read the arguments from environment variables, you should use the command with the --noinput flag and set the required fields (username, email and password) as DJANGO_SUPERUSER_* variables in your env file:

python manage.py createsuperuser --noinput
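A sketch of how the dev branch of the entrypoint could look with this; the variable names match the env file already shown in the question, and the || true guard is an assumption so repeated container starts don't fail once the user exists:

if [ "$RTE" = "dev" ]; then
    python manage.py makemigrations --merge
    python manage.py migrate --noinput
    # DJANGO_SUPERUSER_USERNAME / DJANGO_SUPERUSER_EMAIL / DJANGO_SUPERUSER_PASSWORD
    # are read from the environment by --noinput
    python manage.py createsuperuser --noinput || true
    python manage.py runserver 0:8000
fi

Note that runserver blocks, so createsuperuser has to run before it, not after.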
I have 3 environments set up and cannot createsuperuser. The way I migrate and runserver now follows the container so I have an entrypoint.sh:#!/bin/sh 1 2 echo "${RTE} Runtime Environment - Running entrypoint." 3 4 if [ "$RTE" = "dev" ]; then 5 6 python manage.py makemigrations --merge 7 python manage.py migrate --noinput 8 python manage.py runserver 0:8000 9 10 python manage.py createsuperuser --username "admin" --email "[email protected]" --password "superuser" 11 echo "created superuser" 12 13 elif [ "$RTE" = "test" ]; then 14 15 echo "This is tets." 16 python manage.py makemigrations 17 python manage.py migrate 18 python manage.py runserver 19 20 elif [ "$RTE" = "prod" ]; then 21 22 python manage.py check --deploy 23 python manage.py collectstatic --noinput 24 gunicorn kea_bank.asgi:application -b 0.0.0.0:8080 -k uvicorn.workers.UvicornWorker 25 26 filines 10/11 is what I want to make work, I think I have a syntax issue. Originally I wanted the password and username to be variables stored in an env file:1 RTE=dev 1 POSTGRES_USER=pguser 2 POSTGRES_PASSWORD=pgpassword 3 POSTGRES_DB=devdb 4 DJANGO_SUPERUSER_PASSWORD=superuser 5[email protected]6 DJANGO_SUPERUSER_USERNAME=adminBut now I just want it to work, I need to create a superuser account in order to develop. Can anyone spot what's wrong? I can see my application on localhost:8000, but how do I create the superuser in this scenario?
Django deployment with docker - create superuser
So, it turns out this is homebrew's fault, with a really questionable design decision. You start up mysql-server in homebrew by running the recommended launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist. But then, when examining this file, you'll find the bind-address is hardcoded! /usr/local/opt/mysql/bin/mysqld_safe --bind-address=127.0.0.1 --datadir=/usr/local/var/mysql So, no matter what you do in any of your my.cnf files, it will always be bound to 127.0.0.1, and you'll never be able to query from a container. My fix is to edit this file directly so it does not provide a bind address, letting /etc/my.cnf do it for us. Alternatively, though I wouldn't recommend it, you can just change the bind-address directly in this file.
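For illustration, a sketch of the relevant ProgramArguments block in that plist after removing the hardcoded flag (exact keys and paths may differ between Homebrew versions, so treat this as an assumption about the layout rather than the literal file contents):

<key>ProgramArguments</key>
<array>
    <string>/usr/local/opt/mysql/bin/mysqld_safe</string>
    <!-- removed: <string>--bind-address=127.0.0.1</string> -->
    <string>--datadir=/usr/local/var/mysql</string>
</array>

After editing, unload and reload the plist with launchctl so mysqld_safe picks up the change, and keep bind-address = 0.0.0.0 in /etc/my.cnf as the question already does.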
So, I'm able to generally contact my localhost through Docker by running a container with--add-host=localbox:192.168.59.3.ping localboxworks just fine. Problem is, I can't seem to be able to even get a response from MySQL Server.mysql -h localbox, which works fine from outside of the docker container, just gets meERROR 2003 (HY000): Can't connect to MySQL server on 'localbox' (111)from within.I've doneGRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password' WITH GRANT OPTION;I've addedbind-address = 0.0.0.0into /etc/my.cnf. None of this helps. What gives?Context: I'm running all of this through boot2docker on OS X Yosemite.
Connecting to MySQL Server on localhost through Docker
As @mscdex has pointed out, libicu was looking for the libicu52 package. Somehow the repository got updated allowing me to pull the new libicu which depends on libicu52 that isn't available in the repository of 12.04, but in 14.04. Since there is no official trusted build of 14.04 in the docker registry, I made my own "base" ubuntu14.04 docker image which starts with 13.10 and upgrades to 14.04;FROM ubuntu:saucy ENV DEBIAN_FRONTEND noninteractive # Work around initramfs-tools running on kernel 'upgrade': ENV INITRD No # Update OS. RUN sed -i 's/saucy/trusty/g' /etc/apt/sources.list RUN apt-get update -y RUN apt-get upgrade -y RUN apt-get dist-upgrade -y # Install basic packages. RUN apt-get install -y software-properties-common RUN apt-get install -y curl git htop unzip vim wget # Add files. ADD root/.bashrc /root/.bashrc ADD root/.gitconfig /root/.gitconfig ADD root/scripts /root/scripts RUN apt-get clean # Set working directory. ENV HOME /root WORKDIR /root CMD ["/bin/bash"]Then in the Dockerfile of my worker, I installed libicu52 instead of libicu48 thus fixing all issues
I have been using libicu to detect charset in my node app that runs inside of docker, ubuntu. this is done through the modulenode-icu-charset-detectorthat uses thelibicu-devpackage, which I install prior to the npm package.It all worked fine but I suddently get the errormodule.js:356 Module._extensions[extension](this, filename); ^ Error: libicui18n.so.52: cannot open shared object file: No such file or directory at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Module.require (module.js:364:17) at require (module.js:380:17) at Object. (/app/node_modules/node-icu-charset-detector/node-icu-charset-detector.js:1:82)Looking into my /usr/lib/, I don't find anything icu related, but libicu-dev is installed.This is my docker file;# Pull base image. FROM dockerfile/ubuntu WORKDIR / ADD run.sh /run.sh #make dirs RUN mkdir /log RUN mkdir /app RUN apt-get install -y supervisor libssl-dev pkg-config wget # Install Node.js RUN apt-get install -y software-properties-common RUN add-apt-repository -y ppa:chris-lea/node.js RUN apt-get update RUN apt-get install -y nodejs # Append to $PATH variable. RUN echo '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bash_profile ADD /supervisord.conf /etc/supervisor/conf.d/supervisord.conf #get phantomJS RUN apt-get install libfreetype6 libfontconfig -y RUN cd /app RUN npm install phantomjs &>/dev/null #ICU RUN apt-get install libicu-dev libicu48 -y RUN npm install --loglevel silent &>/dev/null RUN npm update --loglevel silent &>/dev/null #GET NODE-Supervisor RUN cd / RUN npm install --loglevel silent -g supervisor RUN chmod 755 /*.sh CMD ["/run.sh"]Thank you for any help regarding this issue, as I am at the end of my linux knowledge :(
libicui18n.so.52: cannot open shared object file
Basically vagrant will try to install the latest version available from the repo. You can review in thesource codemachine.communicate.tap do |comm| comm.sudo("apt-get update -qq -y") comm.sudo("apt-get install -qq -y --force-yes curl apt-transport-https") comm.sudo("apt-get purge -qq -y lxc-docker* || true") comm.sudo("curl -sSL https://get.docker.com/ | sh") endIf you prefer to have a specific version installed you would need to run a shell provisioner before your docker provisioner (provisioner are run in order) and install the version you want to work with
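A sketch of the "shell provisioner before the docker provisioner" approach in the Vagrantfile; the install script path is a placeholder for whatever command installs the exact engine version you need on your box, e.g. via your distribution's package repository:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # runs first: install the pinned Docker engine version yourself
  config.vm.provision "shell", path: "scripts/install-docker-pinned.sh"

  # runs second: since docker is already present, the docker provisioner
  # only manages images/containers instead of installing the latest engine
  config.vm.provision "docker" do |d|
    d.pull_images "nginx"
  end
end

This keeps the staging and Vagrant environments on the same daemon version regardless of what get.docker.com currently ships.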
I am trying to understand which is the version that Vagrant installs on its VM (my specific case: using box ubuntu/trusty64) if a Docker provisioner is selected. In particular, I would like it to be a fixed version since it has to reflect my staging environment.Unfortunately, in thedocumentation of the provisionernothing is mentioned about which version of the Docker daemon will be installed. Same by searching for my question, either on google or on github issues.Can somebody point me to the right directions/docs?
What is the docker daemon version on Vagrant provisioner?
Here is the Service I created for my redis cluster inside k8s:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: my-redis-svc
  namespace: default
spec:
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
    protocol: TCP
  selector:
    app: redis
  type: ClusterIP

If you create that service, your redis pods become reachable from other pods in the same namespace via the hostname my-redis-svc.default.svc.cluster.local. That means in your app code you have to change that line to:

rDB = redis.Redis(host='my-redis-svc.default.svc.cluster.local', port=6379, db=0)
I've just setup a redis instance however I can't seem to get the two containers to talk to each-other, the setupworks over local machine with docker-compose but does not seem to be working with kubernetes.My logs tell me flask can't find the service, so the error must be my configuration filesFlask code:rDB = redis.Redis(host='redis', port=6379, db=0)Flask server:apiVersion: apps/v1beta2 kind: Deployment metadata: name: dashboard namespace: default labels: run: dashboard spec: replicas: 2 selector: matchLabels: run: dashboard template: metadata: labels: run: dashboard spec: containers: - image: gcr.io/******/dashboard_server:v102 name: dashboard livenessProbe: httpGet: path: / port: 8000 initialDelaySeconds: 300 timeoutSeconds: 5 periodSeconds: 300 failureThreshold: 3 ports: - containerPort: 8000 name: http protocol: TCPRedis instance:apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1 kind: Deployment metadata: name: redis namespace: default spec: selector: matchLabels: run: dashboard role: master tier: backend replicas: 1 template: metadata: labels: run: dashboard role: master tier: backend spec: containers: - name: redis image: redis # or just image: redis resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379Service codeapiVersion: v1 kind: Service metadata: name: dash-service namespace: default labels: run: frontend spec: selector: run: dashboard ports: - name: http protocol: TCP port: 80 targetPort: 8000 type: ClusterIP
Connecting a flask container to a redis container over kubernetes
Change this: dataSource.setUrl("jdbc:mysql://mysqldb:3306/$dbName") to: dataSource.setUrl("jdbc:mysql://database:3306/$dbName") Your service name in compose is database, so you need to use it.
I'm trying to run a spring boot app (as a simple REST api) and mysql server in two separate docker containers. But, I can't get the jdbc connection in the spring app to connect to mysql. They are both working independently and the implementation works when I run spring boot and mysql locally.docker-compose.ymlversion: '3' services: database: image: mysql:latest container_name: mysqldb command: --default-authentication-plugin=mysql_native_password restart: always environment: - MYSQL_ROOT_PASSWORD=password expose: - 3306 ports: - 3306:3306 networks: - backend volumes: - "dbdata:/var/lib/mysql" web: container_name: springboot build: . depends_on: - database expose: - 8080 ports: - 8080:8080 networks: - backend networks: backend: volumes: dbdata:In the spring boot app:val dataSource = DriverManagerDataSource() dataSource.setDriverClassName("com.mysql.jdbc.Driver") dataSource.setUrl("jdbc:mysql://mysqldb:3306/$dbName?characterEncoding=latin1") dataSource.username = "dev" dataSource.password = "password" val jdbcTemplate = JdbcTemplate(dataSource)Error returned from spring boot:{ "status": 500, "error": "Internal Server Error", "message": "Failed to obtain JDBC Connection; nested exception is com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server." }I AM able to connect to the mysql container from the spring boot container via the mysql cli. So, it appears the springboot container is able to resolve "mysqldb."This seems like it should be pretty simple. I'm not sure where the error lies but I would guess it has something to do with spring boots inner workings that I am unfamiliar with.
Spring boot JDBC can't connect to mysql in docker container
Publishing ports can only be done when creating a new container, not on an existing container. So you need to stop the container and create a new one with the port mapping you need.
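A sketch of one way to keep the current container's state while getting the port published: docker commit snapshots the stopped container into an image you can run again with -p (the snapshot and container names below are placeholders):

docker stop agitated_fermat
docker commit agitated_fermat kali-with-state      # placeholder image name
docker run -it -p 8834:8834 --name nessus kali-with-state

If the state inside the old container doesn't matter, simply docker run the original image again with -p 8834:8834.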
So I just updated Docker on my Mac and getting adjusted to Docker seems to be quite challenging and confusing.A few weeks ago, I was able to mind port 8834 on the docker container to port 8834 on my local host by running the following commands (this is my command line history):8450 docker attach -p 8834:8834 compassionate_chandrasekhar 8452 docker start -p 8834:8834 compassionate_chandrasekharToday, if I try to do the same thing, the following happens:[user:test.local:]$ docker container ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 225146ec71d6 myuser/kali:kali "/usr/bin/zsh" 9 minutes ago Exited (0) 2 minutes ago agitated_fermat e4389cac288a myuser/kali:kali "/usr/bin/zsh" 2 weeks ago Exited (255) 2 weeks ago suspicious_hypatia 265f2c9215c5 myuser/kali:kali "/usr/bin/zsh" 2 weeks ago Exited (0) 2 weeks ago hungry_poincare 34b36b4d8a7e myuser/kali:kali "/usr/bin/zsh" 2 weeks ago Created amazing_stonebrakerfollowed by:[user:test.local:]$ docker start -p 8834:8834 agitated_fermat unknown shorthand flag: 'p' in -p See 'docker start --help'.What am I doing wrong? Extremely confusing
Docker doesn't recognize the -p command all of a sudden
By default most docker images have empty package lists to save on image size. This is why you need to run apt-get update first. It will not update any software (that would be apt-get upgrade); it just refreshes the package list. The command is actually also in Microsoft's instructions you linked.
I'm trying to get a Laravel Sail Docker to be compatible with sqlsrv (MSSQL). I've come a long way with the config and got it to install sqlsrv and the pdo_sqlsrv. So now I need to install msodbcsql17. For that I'm following the microsoft guide (https://learn.microsoft.com/nl-nl/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver15) for Ubuntu 20.04 (as that is my version).That specific documentation says to download and run. Translating that to the Sail Dockerfile, that part of my Dockerfile looks like this:... && curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \ && curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list > /etc/apt/sources.list.d/mssql-release.list \ && apt-get install -y msodbcsql17 \ ...So just downloading the file and putting it in the recommended location. But no matter what I do it always comes back with a code 100:Unable to locate package msodbcsql17. So my best guess is that the location is not by default read by the apt-get install. Any suggestions are welcome.Update: So thanks to answer, this is the solution:... && curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \ && curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list > /etc/apt/sources.list.d/mssql-release.list \ && apt-get update \ && ACCEPT_EULA=Y apt-get install -y msodbcsql17 \ && ACCEPT_EULA=Y apt-get install -y mssql-tools \ ...
Laravel Sail/Docker - Unable to locate package msodbcsql17
You may use Terraform resourcenull_resourceand execute your own logic in Terraform.Example:resource "azurerm_resource_group" "rg" { name = "example-resources" location = "West Europe" } resource "azurerm_container_registry" "acr" { name = "containerRegistry1" resource_group_name = azurerm_resource_group.rg.name location = azurerm_resource_group.rg.location sku = "Premium" admin_enabled = true georeplication_locations = ["East US", "West Europe"] } resource "azurerm_azuread_application" "acr-app" { name = "acr-app" } resource "azurerm_azuread_service_principal" "acr-sp" { application_id = "${azurerm_azuread_application.acr-app.application_id}" } resource "azurerm_azuread_service_principal_password" "acr-sp-pass" { service_principal_id = "${azurerm_azuread_service_principal.acr-sp.id}" value = "Password12" end_date = "2022-01-01T01:02:03Z" } resource "azurerm_role_assignment" "acr-assignment" { scope = "${azurerm_container_registry.acr.id}" role_definition_name = "Contributor" principal_id = "${azurerm_azuread_service_principal_password.acr-sp-pass.service_principal_id}" } resource "null_resource" "docker_push" { provisioner "local-exec" { command = <<-EOT docker login ${azurerm_container_registry.acr.login_server} docker push ${azurerm_container_registry.acr.login_server} EOT } }
I am a beginner in Terraform/Azure and I want to deploy a docker image in ACR using terraform but was unable to find internet solutions. So, if anybody knows how to deploy a docker image to an azure container registry using Terraform, please share. Tell me whether this is possible or not.
How to push a docker image to Azure container registry using terraform?
Of course docker inspect is the way to go, but if you just want to "reconstruct" the docker run command, there is https://github.com/nexdrew/rekcod which says: "Reverse engineer a docker run command from an existing container (via docker inspect)."
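A sketch of using it, assuming Node.js/npm is available on the host (the container name comes from the question):

# one-off, without installing globally
npx rekcod sonarqube

# or install it and run it against a container name or id
npm install -g rekcod
rekcod sonarqube

The output is an approximation of the original docker run command (ports, env vars, links), which you can then adjust before recreating or moving the container.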
For example, I run a docker bydocker run -d --name sonarqube -p 19000:9000 -p 19002:9002 -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD=123 --link sonarqube-mysql:mysql.Then I lost my shell command history, but I want to know all my arguments. How can I get them? (I need the arguments to copy/move/restart container)
How to get `docker run` full arguments?
For part 4, when you deploy to your swarm, you get a URL with docker-machine ls: NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS myvm1 * virtualbox Running tcp://192.168.99.100:2376 v17.10.0-ce myvm2 - virtualbox Running tcp://192.168.99.101:2376 v17.10.0-ce. Change 80:80 to 4000:80 in the docker-compose.yml file, use 192.168.99.100:4000, and it should be working.
Using the Docker tutorial I'm stuck at this part:https://docs.docker.com/get-started/part3/#run-your-new-load-balanced-appI usecurl -4 http://localhostbut i get acurl: (7) Failed to connect to localhost port 80: Connection refusederror.output of previous step:docker service ps getstartedlab_webID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS kqu5qggifnlm getstartedlab_web.1 s1mpl3/get-started:part2 moby Running Running 29 minutes ago prhrmm6hpop3 getstartedlab_web.2 s1mpl3/get-started:part2 moby Running Running 29 minutes ago ytrwy5gxp2rk getstartedlab_web.3 s1mpl3/get-started:part2 moby Running Running 29 minutes ago mayvauijghbj getstartedlab_web.4 s1mpl3/get-started:part2 moby Running Running 29 minutes ago r625x2k7n6ta getstartedlab_web.5 s1mpl3/get-started:part2 moby Running Running 29 minutes agoSoerrorandportsare empty.What should I analyse to fix this issue?
How to use curl -4 http://localhost in the Docker part 3 tutorial?
Thetype:field says whether it's a namedvolume, abindmount, or a couple of other things. Since you're mounting a host directory, you need to specifytype: bindin the extended syntax.volumes: - type: bind # <-- not "volume" source: /host/folder target: /container/folder/according to the docs, "volumes are [...] preferred...."IMHO the Docker documentation is very enthusiastic about named volumes and glosses over their downsides. Since you can't access the contents of a named volume from outside of Docker, they're harder to back up and manage, and a poor match for tasks like injecting config files and reviewing logs. I would not automatically reach for a named volume because the Docker documentation suggests it's preferred.version: '3.8' services: some-application: volumes: # Use a bind mount to inject configuration files; you need to # directly edit them on the host - type: bind source: ./config target: /app/config # Use a bind mount to read back log files; you need to read them # on the host - type: bind source: ./log target: /app/data # Use a named volume for opaque application data; you do not # need to manipulate files on the host, and on MacOS/Windows # a named volume will be faster - type: volume source: app-data # when type: volume, this is a volume name target: /app/data # Do not mount anything over /app or the code in the image. volumes: app-data:
When configuring adocker-compose.yml, I can easily mount a volume that maps a folder from the host machine to the container:... volumes: - "/host/folder/:/container/folder/" ...However, if I try to use thelong syntax, that doesn't work anymore:... volumes: - type: volume source: /host/folder target: /container/folder/ ...According to thedocssource: the source of the mount, a path on the host for a bind mount, or thename of a volume defined in the top-level volumes key. Not applicable for a tmpfs mount.So, it seems that in the long syntax I have to use the bind type to mount a host path. Does this make sense, different features according to syntax?Furthermore, again, according to thedocs,Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure and OS of the host machine, volumes are completely managed by Docker.So, I much prefer having volumes instead of binds. Do I have to use theshort syntaxfor that?
Mount volume from host in Dockerfile long format
I see a few things wrong:You should use a Docker registry service instead of SCPing an image. On AWS there is EC2 Container Registry or you can also use Docker Hub as well. This will make it much easier to get your images onto your instances.I'm not sure why you weren't able to start your container using the console. I assume you are using AWS ECS? You might try thetroubleshooting guideto help you figure out why your task wasn't running.It sounds like you are starting a docker container in the foreground (attached to your shell) rather than in the background. Try addingthe-dflagto your docker run command to run the container in the background so you can close your SSH session. Note that if the application process inside your container crashes the container will still stop. This is one reason for using an orchestrator such as AWS ECS to define a service that will attempt to always run a certain number of your tasks. ECS also helps with getting the docker container onto the instance and starting it for you in the background automatically.
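A hedged sketch of those two suggestions (pushing through ECR instead of scp, and running detached); the account id, region, and image names are placeholders and the login command assumes AWS CLI v2:
# push the image to ECR rather than scp'ing a tarball
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my_image:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my_image:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my_image:latest

# on the instance: pull and run detached so the container keeps running after the SSH session ends
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my_image:latest
docker run -d --restart unless-stopped --name my_app 123456789012.dkr.ecr.us-east-1.amazonaws.com/my_image:latest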
I'm a beginner with microservices and have spent hours on the most tiny painful things of AWS today, would appreciate any expert advice as I suspect the next step is very small but could take me hours to work it out otherwise.So I deployed a nano instance thensshinto it. Had to actually redo it to fix the security group but anyway it worked eventually. Usedscpto put my docker image up there per the instructionshere, in summarydocker saveto make a .tar out of the image locally anddocker loadto put it into the system remotely after waiting 15 minutes forscpto upload. Then typed docker run at the command prompt.Had resorted to these (linux) terminal measures as over the last 3 days had twice tried and failed to do it from the AWS console, as in it uploaded but wouldn't run.Now it runs fantastically when I typedocker run my_imageand I can see it in there with both the commandsdocker imagesanddocker ps -a!But the command prompt on my AWS instance is busy while it runs.. if I close the terminal window it will surely die. Now that I know it works there, how can I 'deploy' it, ie let it run and continue to run for a month or until further notice? I think it might need some kind of json file called 'task definition' but don't really know at all what to do next. Can this task definition and all remaining tasks be done from within a terminal logged into the instance?
deploy docker container on AWS EC2 instance without being logged in
I ended up solving this by mounting the lib folder into the container using docker-compose, like so: version: '2' services: frontend_web: build: . volumes: - ../../lib:/app/lib. I then just had to add /app/lib to the container's PYTHONPATH and I could import any module from that folder.
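For reference, a minimal sketch of that combination, the bind mount plus the PYTHONPATH entry (the /app/lib path is just the mount target chosen here and must match on both lines):
version: '2'
services:
  frontend_web:
    build: .
    environment:
      - PYTHONPATH=/app/lib   # lets imports resolve against the mounted folder
    volumes:
      - ../../lib:/app/lib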
I have several services running in their own Docker containers. In my project I also have alibfolder containing some small modules that all the services need.What is the best way to include these modules into the Docker containers? Obviously third party modules I just useRUN pip install -r requirements.txt, is there a similar way I can include my own modules?
Installing custom modules into docker container
The problem ended up being the -t flag in the docker run command: it asks Docker to allocate a pseudo-TTY, which fails when the command is run non-interactively from a user-data script with no terminal attached. Remove the flag and it runs fine.
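A sketch of the corrected run line for that user-data script, with -t dropped and -d added so the container detaches (image and container names follow the question; with -d the app output then lives in docker logs rather than in a redirected file):
docker run -d --log-opt max-size=1g --net host --name myserver mybuild
docker logs myserver > /runlog 2>&1   # inspect output afterwards instead of redirecting the run itself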
I'm trying to create a new Droplet and then kick off a Docker command via a UserData bash script. I set the user data via the Java API when creating the droplet and observe that the test files and logs I made are created.newDroplet.setUserData("#!/bin/bash\n" + "touch /test.txt;"+ "docker login --username=myname--password=mypass > /loginlog;"+ "docker pull mybuild > /pulllog;"+ "docker run --log-opt max-size=1g --net host --name myserver -t -i mybuild > /runlog;");loginlog and pulllog both show successful outcomes. However nothing exists in the file runlog.I can ssh into the droplet and then run the exact same docker command and it runs as expected. Why can't it be run from a userdata script? Why is no output generated?
DigitalOcean: How to run Docker command on newly created Droplet via Java API
@sxm1972 Thank you for your effort and help.You are probably using Windows Pro or a server edition. I am usingWindows 10 Home editionHere is how I solved it, so other people using same setup can solve their issue.There may be a better way to solve this, please comment if there is an efficient way.So...First, the question... Why I don't see my shared volume from PC in my container.Ans: If we use docker's Boot2Docker with VirtualBox (which I am) then whenever a volume is mounted it refers to a folderinside the Boot2Docker VMImage: Result of -v with docker in VirtualBox Boot2DockerSo with this if we try to use$ lsit will show an empty folder which in my case it did.So we have to actually mount the folder toBoot2Docker VMif we want to share our files from Windows environment to Container.Image: Resulting Mounts Window <-> Boot2Docker <-> ContainerTo achieve this we have to manually mount the folder to VM with the followingcommandvboxmanage sharedfolder add default --name "" --hostpath "" --automountIF YOU GET ERROR RUNNING THE COMMAND, SAYING vboxmanager NOT FOUND ADD VIRTUAL BOX FOLDER PATH TO YOUR SYSTEM PATH. FOR ME IT WASC:\Program Files\Oracle\VirtualBoxAfter running the command, you'll seeonroot. You can check it bydocker-machine ssh defaultand thenls /. After confirming that the folderexist, you can use it as volume to your container.docker run -it -v /:/source shHope this helps...!P.S If you are feeling lazy and don't wan't to mount a folder, you can place your project inside yourC:/Usersfolder as it is mounted by default on the VM as show in theimage.
I am trying to setup my project with docker. I am using Docker Toolbox on Windows 10 Home. I am very new to docker. To my understanding I have to copy my files to new container and add a volume so that I can persist changes made by gulp.Here is my folder structure-- src |- dist |- node-modules |- gulpfile.js |- package.json |- DockerfileThe Dockerfile codeFROM node:8.9.4-alpine RUN npm install -g gulp CMD [ "ls", 'source' ]I tried many solutions for*docker run -v *e.gdocker run -v /$(pwd):/source docker run -v //c/Users/PcUser/Desktop/proj:/source docker run -v //c/Users/PcUser/Desktop/proj:/source docker run -v //d/proj:/source docker run -v /d/proj:/source * But No luck *Can anyone describe how would you set it up for yourself with the same structure. And why am I not able to mount my host folder.P.S: If I use two containers one for compiling my code with gulp and one with nginx to serve the content ofdistfolder. How will I do that.
Unable to share/mount Volume with Docker Toolbox on Windows 10
What version of Bamboo are you using? This problem was fixed in Bamboo 6.1.0: "Unable to use variables in Container name field in Run docker task". Workaround: create a Script Task that runs before the Docker Task and run commands like echo "export sourcepath=$ini_source_path" > scriptname.sh followed by chmod +x scriptname.sh. The Docker Task maps ${bamboo.working.directory} to the Docker \data volume, so the just-created scriptname.sh script is available in the Docker container. The script will be executed and will set the variable correctly.
I'm using Docker plugin for bamboo and I need to execute a script in the docker container.The sh script contains:echo \"ini_source_path\": \"${bamboo.ini_source_path}\",and if I put this line directly in Container Command, the ${bamboo.ini_source_path} will be replaced with value of this variable.The problem in when I put /bin/bashscript.sh in Container Command because I'm getting a error:script.sh: line 35: \"${bamboo.ini_source_path}\",: bad substitutionIs there a way I can reach bamboo.ini_source_path variable from my script in docker container?Thanks!
How to send bamboo variables from Bamboo script to docker container?
Java usually holds on to memory it has previously reserved and only releases it back to the operating system when you restart the process. See this post for the full explanation: java.exe process uses more memory and does not free it up
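If the goal is simply to keep the container's reported usage bounded, one option is to cap the heap explicitly in the Dockerfile's start command; the values below are illustrative, not tuned for this app:
# fixed cap
CMD exec java -Xmx256m -jar "target/app.jar"
# or, on JDK 10+ (and 8u191+), size the heap relative to the container's memory limit
# CMD exec java -XX:MaxRAMPercentage=50.0 -jar "target/app.jar"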
I'm running a Java REST app withApache JavaSparkin this container, but I noticed that each request is adding the memory usage and not decreasing after the request is done. My first guess was that I had forgotten to close some stream/buffer (this app deal with a lot of file manipulation), but I reviewed all the code and looks like everything is being closed.Here is my Dockerfile:FROM maven:3.5-jdk-8-alpine WORKDIR /code ADD pom.xml /code/pom.xml RUN ["mvn", "dependency:resolve"] ADD src /code/src RUN ["mvn", "package"] EXPOSE 1337 CMD exec java -jar "target/app.jar"Here is the docker stats:CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 2db8b2f5fd72 0.16% 66.36 MiB / 1.952 GiB 3.32% 12.9 kB / 106 kB 0 B / 0 B 23
Why the docker container memory usage doesn't decrease?
As discussed in the comments on the original post, this was a DNS issue. Configuring DNS was a little too involved for the use case of this project, so I solved my problem by using an environment variable to set the URL used for calls to my API container, based on whether I'm running in a dev or prod environment: (process.env.REACT_APP_URL_ROOT || '/app'). I set REACT_APP_URL_ROOT to my localhost address when running locally, and have an nginx container configured to proxy /app when I build and deploy the React app.
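A rough sketch of the nginx side of that setup; the upstream service name api and port 3000 are assumptions, not values from the original post:
location /app/ {
    proxy_pass http://api:3000/;   # "api" resolves via Docker's network DNS in production
    proxy_set_header Host $host;
}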
I have two Docker containers, one running a React app (built using create-react-app) and another with a Node app API. I have a docker-compose file set up, and according to thedocumentationI should be able to use the names of the services to communicate between containers.However, when I try to send a request to the/loginendpoint of my API from the React app I received anet::ERR_NAME_NOT_RESOLVEDerror. I'm using Unirest to send the request.I've done a bunch of digging around online and have come across a few things describing similar issues but still haven't been able to find a solution. When I runcat /etc/resolve.conf(seethisissue) in my React container the container with my API doesn't show up, but Docker is still fairly new to me so I'm not sure if that's part of the issue. I've also tried using links and user-defined networks in my compose file but to no avail.I've included gists of mydocker-compose.ymlfile as well as the code snippet of my request. Any help is much appreciated!docker-compose.ymlUnirest request to /login
Unable to have Docker containers communicate with each other
As of now there is no such provision in either the ctr or crictl CLI to copy a host file into a running container the way the docker CLI allows (e.g. docker cp). However, there is a project under containerd known as nerdctl, a Docker-compatible CLI for containerd, which is trying to provide the same functionality. Link for reference: nerdctl cp command
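A usage sketch with nerdctl, assuming a container named mycontainer and placeholder paths (the syntax mirrors docker cp):
# host -> running container
nerdctl cp ./config.yaml mycontainer:/etc/app/config.yaml
# container -> host
nerdctl cp mycontainer:/var/log/app.log ./app.log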
I find that I can usectr snapshot mountto copy a file from a container to a host. But how can I copy a file from a host to the container using containerd? I used golang to write some code to start a container, but I can't find any documentation about copying host files to a running container.
How containerd copy a file from host to a running container?
The docker run command can be passed a user and group (or uid/gid): docker run --user 2000:2000 acme. Or, via compose, the user: attribute can be used. compose.yml: services: acme: image: my-alpine:latest user: 2000:2000. When numeric ids are used, neither the user id nor the group id needs to exist in the container.
I want to change an alpine-based container user's UID and GID.But there's nousermodandgroupmod. Are there equivalents?(This is for a running container, not an image.)
Change UID and GID in alpine docker container
It looks like ASP.NET Core listens on port 80 in production, so when I changed the Dockerfile to expose port 80 instead of 5000 and also changed port 5000 to port 80 in docker-compose, it works as expected. So I assume that in order to run it on port 5000, some configuration is needed on the .NET side.
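If you would rather keep the container on port 5000 instead of switching everything to 80, one common way (shown here as a sketch, not taken from the original setup) is to tell Kestrel which URL to bind via ASPNETCORE_URLS in the compose file:
services:
  web:
    build: .
    environment:
      - ASPNETCORE_URLS=http://+:5000   # make the app listen on 5000 inside the container
    ports:
      - "5000:5000"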
Im new to Docker and im trying to set up 2 containers, one running mongoDB and one running the web application.The problem is that I can not access my .NET core application via localhost:5000 or 0.0.0.0:5000.Mongo is running fine.Here is my docker-compose.ymlversion: '3' services: web: build: . ports: - "5000:5000" mongo: build: ../../../docker/wishare-mongo/ volumes: - ../../../docker/wishare-mongo:/data/db ports: - "27017:27017"Dockerfile for .NET Core appFROM microsoft/aspnetcore-build:2.0 AS build-env WORKDIR /app ENV ASPNETCORE_URLS="http://*:5000" # Copy csproj and restore as distinct layers COPY *.csproj ./ RUN dotnet restore # Copy everything else and build COPY . ./ RUN dotnet publish -c Release -o out # Build runtime image FROM microsoft/aspnetcore:2.0 WORKDIR /app COPY --from=build-env /app/out . EXPOSE 5000 ENTRYPOINT ["dotnet", "Wishare-Integration-Api.dll"]The docker-compose build and docker-compose up runs without an error with this resulting for web container:web_1 | Hosting environment: Production web_1 | Content root path: /app web_1 | Now listening on:http://[::]:80web_1 | Application started. Press Ctrl+C to shut down.And this is the result of docker ps, port 5000 should be exported based on docker ps result.Any ideas?
Docker-compose with .NET Core unreachable
Your PHP app has no clue about your public IP as it is in the private network. Public IP is assigned to you by your ISP/Router. Router NATs the private IPs so only 1 IP is allocated to your private network."What is my IP" website is in the public internet, so it sees your public IP and it has no clue about your 172.xxx.xxx.xxx IP. Router takes care of the translation.So, from within the app, you will always get your private IP.If from within your app, you need to know your public IP for what ever reason, you may call the API provided by ipify.org.curl 'https://api.ipify.org?format=json'For more details on how NAT worksReference:https://www.geeksforgeeks.org/network-address-translation-nat/
I'm trying to get my ip address.Here is the code, thegetClientIp()method uses a$_SERVER['REMOTE_ADDR']global variable internally, so$request->getClientIp()and$_SERVER['REMOTE_ADDR']are the same.getClientIp())->json();I have php deployed in docker on my local machine. So I send a request to localhosthttp://localhost/api/v1/ip_addressand get a response.{ "message": "IP address:172.ХХ.Х.Х", // I replaced my ip numbers with x. "data": [] }But there is a problem, the ip address that I get is different from the ip address that applications like "get my ip" give me.You can just open Google and type in the search "find out my ip online" or "what is my ip" and they will give the correct ip, but the ip that I get from php is not.I think this is due to the fact that I am making a request to my own computer, and not to a remote server. Can anyone explain why this is happening and if I can get around it?Update: From php I get the internal ip address because it starts with 172.X.X...
How can i get client real ip address in PHP?
From the discussion athttps://github.com/moby/moby/issues/21814, there are two main reasons that layers are not extracted in parallel:It would not work on all storage drivers.It would potentially use lots of CPu.See the related comments below:Note that not all storage drivers would be able to support parallel extraction. Some are snapshotting filesystems where the first layer must be extracted and snapshotted before the next one can be applied.@aaronlehmannWe also don't really want apulloperation consuming tons of CPU on a host with running containers.@cpuguy83And the user who closed the linked issue wrote the following:This isn't going to happen for technical reasons. There's no room for debate here. AUFS would support this, but most of the other storage drivers wouldn't support this. This also requires having specific code to implement at least two different code paths: one with this parallel extraction and one without it.An image is basically something like this graph A->B->C->D and most Docker storage drivers can't handle extracting any layers which depend on layers which haven't been extracted already.Should you want to speed up docker pull, you most certainly want faster storage and faster network. Go itself will contribute to performance gains once Go 1.7 is out and we start using it.I'm going to close this right now because any gains from parallel extraction for specific drivers aren't worth the complexity for the code, the effort needed to implement it and the effort needed to maintain this in the future.@unclejack
Does extracting (untarring) of docker image layers bydocker pullhave to be conducted sequentially or could it be parallelized?Exampledocker pull mirekphd/ml-cpu-r40-base- an image which had to be split into more than 50 layers for build performance reasons - it contains around 4k R packages precompiled as DEB's (the entire CRAN Task Views contents), that would be impossible to build in docker without splitting these packages into multiple layers of roughly equal size, which cuts the build time from a whole day to minutes. Extraction stage - if parallelized - could become up to 50 times faster...ContextWhen you observedocker pullfor a large multi-layer image (gigabytes in size), you will notice that thedownloadof each layer can be performed separately, in parallel. Not so for subsequent extracting (untarring) of each of these layers, which is performed sequentially. Do we know why?From my anecdotal observations with such large images it would speed up the execution of thedocker pulloperation considerably.Moreover, if splitting image into more layers would let you spin up containers faster, people would start writingDockerfiles that are more readable and faster tobothdebug/test andpull/run, rather than trying to pile all instructions onto a single slow-bulding, impossibly convoluted and cache-busting string of instructions only to save a few megabytes of extra layers overhead (which will be easily recouped by parallel extraction).
Why is docker pull not extracting layers in parallel?
You can use the following docker command to get all containers that are running from a specific image: docker ps --filter ancestor="imagename:tag". Example: docker ps --filter ancestor="drone/drone:0.5". Example output: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3fb00087d4c1 drone/drone:0.5 "/drone agent" 6 days ago Up 26 minutes 8000/tcp drone_drone-agent_1. This approach uses the docker API and the docker daemon, so it doesn't matter whether the run command was executed in the background or in another terminal. Another approach, if you have a single container from a single image: try naming your containers; you can't have 2 containers with the same name: docker run --name uniquecontainer Image_a. The next time you run the above command you will get an error. By the way, consider using -d so you don't have to switch terminals: docker run -d --name uniquecontainer Image_a
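Building on the ancestor filter, a small wrapper script can give you the "run only if not already running" behaviour; Image_a is kept as the placeholder name from the question (real image names must be lowercase):
if [ -z "$(docker ps -q --filter ancestor=Image_a)" ]; then
  docker run -d --name image_a_singleton Image_a
else
  echo "A container from Image_a is already running"
fi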
I'm new to docker.I have an image that I want to run, but I want docker to see if that image is already running from another terminal...if it is running I don't want it to load another one...is this something that can be done with docker?if it helps, I'm running the docker with a privileged mode.I've tried to search for singleton docker or something like that, but no luck.updates- 1.working from ubuntu. My scenario- from terminal X I rundocker run Image_afrom terminal Y I rundocker run Image_awhen trying to run from terminal Y, I want docker to check if there is already a docker running with Image_a, and the answer is true - I want docker not to run in terminal Y
How to run docker image as singleton
"With Kubernetes, I understand that I need to use the service name from Kubernetes, something like "rest" (to make the service itself transparent), but that name would only be visible from the docker container serving the static resources." Your understanding is correct. As long as you have a kube-dns add-on running in your cluster, your service name is resolvable as a domain name within the same Kubernetes cluster and namespace. In other words, as you said, "rest" will work only within the Kubernetes cluster. "My question: do I need to forward traffic from NGINX to the REST Api? Does Kubernetes expose a public service name usable from Javascript, for example?" Forwarding from NGINX is one way to achieve this. The advantage of this approach is that you avoid all the Same Origin Policy/CORS headaches, and your microservice's (Express) authentication details are abstracted away from the user's browser (this is not necessarily an advantage). The disadvantage of this approach is that your backend microservice (Express) becomes tightly coupled to the front end (or vice-versa, depending on how you look at it), which makes scaling the backend dependent on the front end. Your backend is also not exposed, so if you have another consumer (say, an Android app) it will not be able to access your service. Another solution: create an Ingress (and use an ingress controller in your cluster) and expose your microservice (Express) through it.
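A minimal Ingress sketch for that last suggestion; the service names rest and static and port 80 are assumptions based on the question, and an ingress controller must already be installed in the cluster:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /api          # browser JavaScript calls the REST service through this public path
        pathType: Prefix
        backend:
          service:
            name: rest
            port:
              number: 80
      - path: /             # everything else goes to the NGINX static container
        pathType: Prefix
        backend:
          service:
            name: static
            port:
              number: 80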
I have a doubt regarding how to structure my dockerized stack, simplified in two containers to get help here:static: NGINX serving static resources (JS/HTML).rest: express.js backend for the REST Api.Without Kubernetes, just docker-compose on a node,restis simply listening on a different port and, from Javascript, the requests go tosame_host:rest_port, no problem here.With Kubernetes, I understand that I need to use the service name from Kubernetes, something like "rest" (to make transparent the service itself), but that name would only be visible from the docker container serving the static resources.My question: do I need to forward traffic from NGINX to the REST Api? Does Kubernetes expose a public service name usable from Javascript, for example?Thank you.
How to define/use endpoints to connect to Kubernetes from Javascript
You are probably using Dockers Overlay Network feature (or Ingress network for loadbalanced services), which is based on Linux IP Virtual Server (IPVS), a.k.a.Linux Virtual Server. This uses a default 900 second (15 minutes) timeout for idle TCP connections.See:https://github.com/moby/moby/issues/31208Default Linux TCP Keep-Alive settings only start sending packets much later (if enabled at all) and thus you are left with the options of:change the TCP Keep-Alive settings on the server or clientchange the Docker networking to use the host network directlychange your software to avoid idle TCP connections, e.g. configure connection pools for databases to remove idle connections or check health more oftenchange the Kernel IPVS defaults or TCP defaults
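As a sketch of the first option (tuning keepalives at the container level rather than in psycopg2), Compose lets you set the namespaced TCP keepalive sysctls per service; the values are illustrative, the image name is a placeholder, and swarm-deployed services need a reasonably recent engine for this:
services:
  app:
    image: my-app:latest
    sysctls:
      - net.ipv4.tcp_keepalive_time=600   # start probing well before the 900s IPVS idle timeout
      - net.ipv4.tcp_keepalive_intvl=30
      - net.ipv4.tcp_keepalive_probes=5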
We just transitioned to using Docker for development and are using theubuntu:18.04image. We noticed that queries usingpsycopg2failed after a few minutes.This answersolved the problem using the followingkeepalivesparams:self.db = pg.connect( dbname=config.db_name, user=config.db_user, password=config.db_password, host=config.db_host, port=config.db_port, keepalives=1, keepalives_idle=30, keepalives_interval=10, keepalives_count=5 )This works for us as well, butwhydoes this work?The psycopg2 docsdo not give insight into what the params do, however,this third partydocumentation does, andthis postgresdocumentation does.The question is, what is different in the docker environment vs the host environment which makes these non-default settings required? They work in a standard Ubuntu 18.04 environment too, but not in docker. I am hoping we could reconfigure our docker image so that these non-standard parameters aren't necessary in the first place.Postgres version:PostgreSQL 13.4 (Ubuntu 13.4-1.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bitpsycopg2 version:2.8.5Host OS: Windows 10Docker Image OS: Ubuntu 18:04
Why are the `keepalives` params in `psycopg2.connect(...)` required to run long running postgres queries in docker (ubuntu:18.04)?
I resolved this issue by not copying the node_modules folder into the image. I built the image from a fresh clone of the GitHub repository, which does not contain the node_modules folder, and after that the total size was only 14.48 MB. So removing node_modules from the build resolved my issue.
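An equivalent way to get the same effect without changing where you build from is a .dockerignore file next to the Dockerfile, so node_modules never enters the build context; the entries below are typical guesses for this kind of Angular project:
# .dockerignore
node_modules
dist
.git
The RUN npm install step in the Dockerfile then installs the dependencies inside the image instead of copying the local folder in as well.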
I have Angular application running locally with V10. I am trying to build an Docker image with help of Dockerfile.But while building images, my Docker image size is building huge as 1.32GB. Is there any way to reduce its size ?Below is the Dockerfile which i wrote# base image FROM node:12.2.0 # set working directory (also creates two folders needed for cypress) RUN mkdir /usr/src/app && mkdir /usr/src/app/cypress && mkdir /usr/src/app/cypress/plugins WORKDIR /usr/src/app # add `/usr/src/app/node_modules/.bin` to $PATH ENV PATH /usr/src/app/node_modules/.bin:$PATH # install app and cache app dependencies COPY . /usr/src/app RUN npm install --silent EXPOSE 4200 # start app CMD ["npm", "run", "ng serve"]Please Note:- Locally the root folder is showing the property as 1,14,774 items, totaling 1.3 GB.
Is there any way to optimize size of Docker Image?
You should start the Docker container with the --user parameter. If you do this and set the same uid:gid as the owner of the MySQL storage, you will have no permission problems. You will have to check how exactly to do this in Docker Compose, because the example below is for a plain command-line run.
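A command-line sketch of that (the bind-mount path matches the question; the id lookups assume a mysql user exists on the host and owns /home/mysql):
docker run -d \
  --user "$(id -u mysql):$(id -g mysql)" \
  -v /home/mysql:/var/lib/mysql \
  mariadb
In Compose the equivalent is the user: key on the service, as the question's own update shows.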
I have the mysql database stored in/home/mysqlinstead of/var/lib/mysql. The directory used to be owned bymysql. However, when I run the commanddocker-compose upwith this yml file:version: '3' services: mariadb: image: mariadb restart: always volumes: - /home/mysql:/var/lib/mysql elasticsearch: image: docker.elastic.co/elasticsearch/elasticsearch:5.6.4 environment: - "ES_JAVA_OPTS=-Xms750m -Xmx750m" - bootstrap.memory_lock=false site: build: . volumes: - "./app:/app" links: - mariadb:mysql environment: - DOCKER_IP=172.19.0.2 depends_on: ['elasticsearch','mariadb'] ports: - "3000:3000"The docker container is able to run, but the entire folder and files in/home/mysqlare owned bysystemd-journal-remote, which causes thenodeserver fails to connect tomariadb. I have to stop the docker instance, restore the mysql folder ownership and deleteib_logfile0andib_logfile1.Why does mounting/home/mysqlcause such a fatal problem?Update:My solution is to adduser: "mysql":version: '3' services: mariadb: image: mariadb restart: always volumes: - /home/mysql:/var/lib/mysql user: "mysql" elasticsearch: image: docker.elastic.co/elasticsearch/elasticsearch:5.6.4 environment: - "ES_JAVA_OPTS=-Xms750m -Xmx750m" - bootstrap.memory_lock=false site: build: . volumes: - "./app:/app" links: - mariadb:mysql environment: - DOCKER_IP=172.19.0.2 depends_on: ['elasticsearch','mariadb'] ports: - "3000:3000"
Why did mysql data ownership change to systemd-journal-remote after running a docker container
Your code is for the most part just fine you just have a path problem and a invalid test class.Also it is not good practice to have your test package into your api package.you should do something like this:. ├── api │ ├── __init__.py │ ├── main.py │ └── routers │ │ ├── __init__.py │ │ ├── something.py │ │... ├── tests │ ├── __init__.py │ └── test_something.py ├── requirements.txt ├── DockerfileAlso you made a syntax error with youapi/__init__.py, you did put 3x _ at the biginningyour test.py file doesn't respect the norm, please checkfastapi official test doc:from fastapi.testclient import TestClient from api.main import app client = TestClient(app) def test_login(): response = client.post( "/login", data={ "USERNAME": "", "PASSWORD": "" } ) assert response.status_code == 200in the parent folder you call:pytestwith above structure and a simple fastapi api in main.py:FAILED tests/test_something.py::TestApiGet::test_login - assert 404 == 200When you have a basic test structure that is running fine, you can use test classes but here is a simpler/easier solution for the beginning.
I've got an error when I try to run pytest command. The error is just when I running the application on docker, when I did it locally, it works. There's another curiosity about it, the swagger and de requests are working fine, just the test file doesn't. I have already tried :python -m pytest tests/pytest tests/test_api.pydocker-compose exec api pytestI`ve got this error :ImportError while importing test module 'C:\Users\mathe\Desktop\Teste\bluestorm-api\api\tests\test_api.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: ..\..\..\..\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests\test_api.py:5: in import api.database.create_engine E ModuleNotFoundError: No module named 'api'My files structure:My test.py file:from sys import api_version from fastapi.testclient import TestClient from sqlalchemy import text, orm from sqlmodel import MetaData, Table from api.main import app from api.database import engine client = TestClient(app) class TestApiGet: def test_login(self): response = app.post( "/login", data = { "USERNAME": "", "PASSWORD": "" }) assert response.status_code == 200My docker-compose file:version: "3.8" services: api: build: . command: ["uvicorn", "api.main:app", "--host=0.0.0.0", "--port=8000"] volumes: - ./api:/code/api ports: - "8000:8000"My docker file:FROM python:3.9-slim-buster WORKDIR /code # TODO: Multstage build, so the container does not runs with a compiler RUN apt-get update && apt-get install curl build-essential unixodbc-dev wait-for-it -y COPY ./requirements.txt . RUN pip install -r requirements.txt RUN apt-get remove build-essential -y COPY ./api /code/api/ EXPOSE 8000
GET error : ModuleNotFoundError: No module named 'api'
You could use variables in your POM and pass them when calling maven. You should store the credentials in jenkins credentials managere.g.:... ${DOCKER_REGISTRY}/${project.artifactId}:${project.version} ... org.springframework.boot spring-boot-maven-plugin ${DOCKER_IMAGE_NAME} true ${DOCKER_REGISTRY_USER} ${DOCKER_REGISTRY_PASSWORD} ${DOCKER_REGISTRY} ...Then you could call maven like this in your jenkins pipeline:stage('Build Docker Image') { steps { withCredentials([usernamePassword(credentialsId: 'YOUR_CREDENTIALS_ID', passwordVariable: 'NEXUS_PASSWORD', usernameVariable: 'NEXUS_USER')]) { sh "mvn -DskipTests=true spring-boot:build-image -DDOCKER_REGISTRY=SUBDOMAIN.DOMAIN.COM -DDOCKER_REGISTRY_USER=$NEXUS_USER -DDOCKER_REGISTRY_PASSWORD=$NEXUS_PASSWORD" } } }
I want to create docker images with CI/CD (Jenkins) of my spring boot application and push the image to a private nexus docker registry. How to avoid adding my docker credentials to POM file and have them in GIT? Where should I pass/place the credentials instead?Or should I just push the image manually in jenkins withdocker login,docker push?I followed this tutorial (https://docs.spring.io/spring-boot/docs/2.4.0/maven-plugin/reference/htmlsingle/#build-image-example-publish) and my POM looks like this: org.springframework.boot spring-boot-maven-plugin docker.example.com/library/${project.artifactId} true user secret https://docker.example.com/v1/ [email protected]
Remove private docker registry credentials from Spring Boot POM file
Doing the following may work, and is consistent with your error message:yum groups mark install "Development Tools" yum groups mark convert "Development Tools" yum groupinstall "Development Tools"Source:https://access.redhat.com/discussions/1262603
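If this is going into the Dockerfile from the question, the three commands can be chained in one RUN layer; this is a sketch of that workaround, not an officially documented fix:
RUN yum groups mark install "Development Tools" \
 && yum groups mark convert "Development Tools" \
 && yum groupinstall -y "Development Tools"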
I am using CentOS image inside a docker container everyyum install works but when i try to runyum groupinstall "Development tools"it just raise error saying:There is no installed groups file. Maybe run: yum groups mark convert (see man yum)Here is my Dockerfile# Starting from base CentOS image FROM centos:7 RUN yum install -y epel-release RUN yum install -y http://rpms.famillecollet.com/enterprise/remi-release-7.rpm RUN yum groupinstall -y "Development tools"Can some one suggest possible solution to this? I have never experienced any such issue in normal CentOS but seen that first time in docker
No Install group file - CentOS 7 - group install
Do this and be happy:ENTRYPOINT ["/entry.sh"] CMD node index.jsentry.sh:#!/bin/bash #entry.sh #step 1 npm install #step 2 npm run watch & #step 3 compass watch & #step n exec "$@"Be sure of:chmod +x entry.shAnd in Dockerfile:COPY entry.sh /
I have spent some time to grasp the difference betweenENTRYPOINTandCMDin a dockerfile. In this case I am doing some research, so even if the Idea here might not be the best, that is more about getting how that works.If I understood everything right, than:ENTRYPOINT ["/bin/bash", "-l", "-c"] CMD ["node index.js"]should result in that command:/bin/bash -l -c node index.jsright?What I would like to do, is create a script for theENTRYPOINTwhich should basically look like that:#entry.sh #step 1 npm install #step 2 npm run watch & #step 3 compass watch & #step n #that line bothers me /bin/bash -l -c $*So what I would like to accomplish is: If theCMDchanges all the »Steps 1 -n« should be executed and the resultingCMDshould finally look like:/bin/bash -l -c node index.jsInstead I get:node index.js: entry.sh: command not foundThanks for Help!DETAILS#entry.sh npm install #more to come here /bin/bash -l -c $* #dockerfile ENTRYPOINT ["/bin/bash", "-l", "-c", "./entry.sh"] CMD ["node index.js"]UPDATE#entry.sh #stuff from above echo "$*" echo "$@" /bin/bash "$@" #dockerfile ENTRYPOINT ["/bin/bash", "-l", "./entry.sh"] CMD ["node", "index.js"] /usr/bin/node: /usr/bin/node: cannot execute binary file #Result: node src/index.js node src/index.js /usr/bin/node: /usr/bin/node: cannot execute binary fileUPDATE #2 -- seams to work, but I don't if that's a good idea#entry.sh #stuff from above $@ #dockerfile ENTRYPOINT ["/bin/bash", "-l", "./entry.sh"] CMD ["node", "index.js"]
ENTRYPOINT in Combination with CMD
According to docker help run: -p, --publish list: publish a container's port(s) to the host; -P, --publish-all: publish all exposed ports to random ports. Command 1 uses -P (the short form of --publish-all) followed by the image name; -P takes no argument. Command 2 uses -p (the short form of --publish), which expects an argument, so docker most likely takes the image name as the argument for -p and then still expects an image name after it.
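Two corrected sketches of Command 2: either keep -P (no argument) or give lowercase -p an explicit host:container mapping; the 8080:80 pair is just an example, not a value from the question:
docker run --name mvcdotnet -e AUTHOR="Mathi2" -d -P valkyrion/mvcdotnet
docker run --name mvcdotnet -e AUTHOR="Mathi2" -d -p 8080:80 valkyrion/mvcdotnet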
I'm learning docker, and trying to run the existing images. The first command is working finecommand 1: docker run --name static-site -e AUTHOR="Mathi1" -d -P dockersamples/static-siteBut the below command is throwing errorCommand 2: docker run --name mvcdotnet -e AUTHOR="Mathi2" -d -p valkyrion/mvcdotnetError:"docker run" requires at least 1 argument.See 'docker run --help'.Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]Run a command in a new container
Docker run not working it says requires at least 1 argument
You should stick with Compose's default network (by removing network_mode: bridge; the default per-project network is what gives you DNS resolution of service names) and change - SERVICE1=http://localhost:1001 to - SERVICE1=http://services1:80 # services1 will be resolved by docker. Because services1 and services2 run in separate containers, there is no open port 1001 inside the services2 container, so localhost:1001 cannot work from there.
I am currently working on two inter related ASP.NET Core WebAPI services (Service1 & Service2) in a solution. Both are having docker files and exposing port 80.Service1 is an independent service and required to be called from Service2. I have given both docker-compose.yml and docker-compose.override.yml.docker-compose.ymlversion: '3.4' services: services1: image: ${DOCKER_REGISTRY-}services1 network_mode: bridge build: context: . dockerfile: Services1/Dockerfile ports: - "501:80" services2: image: ${DOCKER_REGISTRY-}services2 network_mode: bridge build: context: . Services2/Dockerfile ports: - "502:80"docker-compose.override.ymlversion: '3.4' services: services1: build: context: . dockerfile: Services1/Dockerfile environment: - ASPNETCORE_ENVIRONMENT=Development ports: - "1001:80" services2: build: context: . dockerfile: Services2/Dockerfile environment: - ASPNETCORE_ENVIRONMENT=Development - SERVICE1=http://localhost:1001 ports: - "1002:80"When i callhttp://localhost:1001fromhttp://localhost:1002withnetwork_mode: bridge, I am getting the below exception.{System.Net.Http.HttpRequestException: Cannot assign requested address ---> System.Net.Sockets.SocketException: Cannot assign requested addresswhen i change the network_mode to host (network_mode:host), i am getting the below exception.System.IO.IOException: 'Failed to bind to addresshttp://[::]:80: address already in use.'So please let me know how to resolve the issue.
Docker multiple same port issue
The easiest way (IMO) is to set up your development environment to mirror as closely as possible your production environment. If you want your production application to work with 20 microservices, each running in a separate container, then do that with your development machine. That way, when you deploy to production, you don't have to change from using ports to using hostnames.The easiest way to set up a large set of microservices in a bunch of different containers is probably withFigor with Docker's upcomingintegrated orchestration tools. Since we don't have all the details on what's coming, I'll use Fig. This is afig.ymlfile for a production server:application: image: application-image links: - service1:service1 - service2:service2 - service3:service3 ... service1: image: service1-image ... service2: image: service2-image ... service3: image: service3-image ...This abbreviatedfig.ymlfile will set up links between the application and all the services so that in your code, you can refer to them via hostnameservice1,service2, etc.For development purposes, there's lots more that needs to go in here: for each of the services you'll probably want to mount a directory in which to edit the code, you may want to expose some ports so you can test services directly, etc. But at it's core, the development environment is the same as the production environment.It does sound like a lot, but a tool like Fig makes it really easy to configure and run your application. If you don't want to use Fig, then you can do the same with Docker commands - the key is the links between containers. I'd probably create a script to set things up for both the production and development environments.
BackgroundMy Environment - Java, Play2, MySqlI've written 3 stateless Restful Microservices on Play2 -> /S1,/S2,/S3S1 consumes data from S2 and S3. So when user hits /S1, that service asynchronously calls /S2, /S3, merges data and returns final json output. Side note - The services will be shipped eventually as docker images.For testing in developer environment, I run /s1,/s2,/s3 on ports 9000, 9001 and 9002 respectively. I pickup the port numbers from a config file etc. I hit the services and everything works fine. But there is a better way to setup the test env on my developer box correct? Example - What if I want to run 20 services etc..So with that said, on production they will be called just like mydomain.com/s1, mydomain.com/s2, mydomain.com/s3 etc. I want to accomplish this on my developer environment box....I guess there's some reverse proxying involved I would imagine.QuestionSo the question is, how do I call /S2 and /S3 from within S1 without specifying or using the port number on developer environment. How are people testing microservices on their local machine?Extra BonusKnowing that I'll be shipping my services as docker images, how do I accomplish the same thing with docker containers (each container running one service)
Developer environment - how to call/consume other micro services
Thedocker runsyntaxis:docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...], everything you're passing after theIMAGE[:TAG|@DIGEST]is being passed as[COMMAND] [ARG...]to theENTRYPOINTof the container.Adocker inspect mongo:3.4-xenial --format {{.Config.Entrypoint}}shows theENTRYPOINTasdocker-entrypoint.sh(e.g. you're essentially trying to executedocker-entrypoint.sh --expose ...).You can trace the execution i.e.:docker run --name mongodb --entrypoint bash mongo:3.4-xenial -c "bash -x docker-entrypoint.sh --expose 27017"+ set -Eeuo pipefail + '[' - = - ']' + set -- mongod --expose 27017 + originalArgOne=mongod + [[ mongod == mongo* ]] ++ id -u + '[' 0 = 0 ']' + '[' mongod = mongod ']' + find /data/configdb /data/db '!' -user mongodb -exec chown mongodb '{}' + + chown --dereference mongodb /proc/1/fd/1 /proc/1/fd/2 + exec gosu mongodb /usr/local/bin/docker-entrypoint.sh mongod --expose 27017 Error parsing command line: unrecognised option '--expose' try 'mongod --help' for more informationdocker run --name mongodb --expose 27017 -d mongo:3.4-xenialis passing--expose 27017in thedocker run[OPTIONS].
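Two sketches of the corrected command: put --expose (a docker run option) before the image name, or publish the port to the host with -p if you actually want to reach it from outside:
docker run --name mongodb --expose 27017 -d mongo:3.4-xenial
docker run --name mongodb -p 27017:27017 -d mongo:3.4-xenial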
Running a docker container ...docker run --name mongodb -d mongo:3.4-xenial --expose 27017Results in the error "Error parsing command line: unrecognised option '-p'" in the log.However, moving the--exposeparameter to the left works fine:docker run --name mongodb --expose 27017 -d mongo:3.4-xenialI don't understand why, though.
Docker: "unrecognised option '-p'"
A different answer than most sharing it anyway for those who need that IP.I agree with @david-maze that you can get away in most cases without ever knowing the IP address. And withdocker-composecreating theyamlfile will have a friendly name to all the services.With that said in events when you just need the IP address, let's say to configure a load balancer (one lame use case that I could think of), you should lean on configuration over code.Here's a small usecase.The solution has three components:docker-compose(with a network).env file at the same level asdocker-composeConfiguring environment variables in .NET code1.docker-composeversion: '3.3' services: web_api: . . . networks: public_net: ipv4_address: ${WEB_API_1_IP} . . . networks: public_net: driver: bridge ipam: driver: default config: - subnet: ${NETWORK_SUBNET}2..envfile. . . WEB_API_1_IP=192.168.0.10 WEB_API_2_IP=192.168.0.11 . . . NETWORK_SUBNET=192.168.0.0/243. Program.csAddEnvironmentVariablespublic static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args) .ConfigureAppConfiguration((hostingContext, config) => { config.AddEnvironmentVariables(); }) .ConfigureWebHostDefaults(webBuilder => { webBuilder.UseStartup(); });
This seemingly simple task turns out very difficult.I am trying to get docker container's IP from .net project, in my case using c#.What I have tried so far (This returns docker engine's IP (DockerNAT), not the container's IP):Dns.GetHostEntry(name).AddressList.FirstOrDefault(x => x.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork);If I use ipconfig, the list does not contain the container's IP address, which you can find using docker network inspect network_name (Below list doesn't contain container's IP):var networkInterfaces = System.Net.NetworkInformation.NetworkInterface.GetAllNetworkInterfaces();Any other idea to access the container's IP in C#?
Docker container IP address from .net project
You should check out the docker-image-resource. You can define a Dockerfile with all of the dependencies that you want, and then push the built image as a resource that can be used in later builds. We wrote a tutorial on this that might clear things up a bit.
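A minimal pipeline sketch of that idea; the resource names, registry credentials, and the source-code resource are all placeholders, and source-code is assumed to be a git resource defined elsewhere in the pipeline:
resources:
- name: prebaked-image
  type: docker-image
  source:
    repository: myorg/prebaked-image
    username: ((registry-user))
    password: ((registry-pass))

jobs:
- name: bake-image
  plan:
  - get: source-code            # git resource holding the Dockerfile that downloads the binary once
  - put: prebaked-image
    params:
      build: source-code        # build context; later jobs can then use prebaked-image as their image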
I am using concourse for our build system.Concourse caches docker images so that we don't need to go through the download process each on subsequent runs.I want to add a binary file to the docker image which I will pull from the internet, but I only want to do it the first time the docker image is pulled and created by concourse.Any ideas how I can do this?
Concourse add file to docker image just once