Response | Instruction | Prompt
---|---|---|
As this pull request shows, previously Liquibase did not warn that column ordering might not be applied because the database does not support it. So it failed silently in older versions, but now it shows an error. Quoting the most important line: "Breaking change: Because I fixed the validation logic, anyone who has a beforeColumn or afterColumn or position attribute on addColumn for a database that doesn't support that will now get a validation error vs. it just being ignored." | To fix vulnerabilities I upgraded the docker image version in my Dockerfile: old: FROM liquibase/liquibase:4.4, new: FROM liquibase/liquibase:4.20. But I started to get the error "addAfterColumn is not allowed on postgresql". I started to investigate this error and found out that addAfterColumn is used in some changesets.
I also found this topic: https://github.com/liquibase/liquibase/issues/3091. So I just removed afterColumn, but I am not sure what side effects I could experience. I am not familiar with the reason to use afterColumn; if it can simply be removed it may have been useless, but I can't find any useful information. Are there any other options to fix the issue? Editing Liquibase scripts will break checksums for existing databases. | addAfterColumn is not allowed on postgresql |
You can try doing it this way:

$ docker run --rm -p 4444:4444 -p 5900:5900 \
-v /tmp/chrome_profiles:/tmp/chrome_profiles \
-e JAVA_OPTS selenium/standalone-chrome:latest

or

# To execute this docker-compose yml file use `docker-compose -f up`
# Add the `-d` flag at the end for detached execution
version: '2'
services:
chrome:
image: selenium/node-chrome:latest
volumes:
- /dev/shm:/dev/shm
- /tmp/chrome:/tmp/chrome_profiles
ports:
- "5900:5900"
depends_on:
- hub
environment:
HUB_HOST: hub
hub:
image: selenium/hub:latest
ports:
- "4444:4444"The profile path then needs to be passed through ChromeOptions, keep in mind that this is the path inside the container. Example code:ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.addArguments("-profile", "/tmp/chrome_profiles/.selenium");
WebDriver driver = new RemoteWebDriver(new URL("http://hub:4444/wd/hub"), chromeOptions); | I need to launch selenium inside a docker container. It's important to pass a browser profile to the webdriver. Here's the docker-compose:

version: '2'
services:
worker_main:
build: ./app
volumes:
- /Users/username/Library/Application Support/Google/Chrome/Profile 1:/profile
restart: always
env_file:
- config.env
networks:
- backend
depends_on:
- chrome
chrome:
image: selenium/standalone-chrome
restart: always
ports:
- 4444:4444
hostname: chrome
networks:
- backend
networks:
backend:

Here's the driver code:

from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument("user-data-dir=/profile")
driver = webdriver.Remote("http://chrome:4444/wd/hub", options=options)

As a result I catch this error: selenium.common.exceptions.WebDriverException: Message: unknown error: cannot create default profile directory | What's the right way to pass browser profile to selenium inside docker container? |
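Following up on the answer above, a minimal single-container sketch of the same idea in shell form; the host profile path and names are reused from the question and should be treated as assumptions:

# assumption: host profile path and port mapping taken from the question's compose file
docker run -d --name chrome -p 4444:4444 \
  -v "/Users/username/Library/Application Support/Google/Chrome/Profile 1":/profile \
  -v /dev/shm:/dev/shm \
  selenium/standalone-chrome
# the driver must then reference the path as it appears inside the container,
# e.g. user-data-dir=/profile, not the host path

The key design point is the same as in the answer: the profile argument given to ChromeOptions is resolved inside the browser container, so it must match the mount target, not the host directory.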
Your --publish option is backwards: it's -p <host port>:<container port>, so for your setup you'd want --publish 8018:5000. Startup issues aside, you do need the option to cause the container to listen on 0.0.0.0 (or ::0, if IPv6 works). If it binds to localhost it will be unreachable from outside its container, including from other containers and from the host. | I created an image for .NET core:

FROM microsoft/dotnet:2.1-sdk AS build-env
WORKDIR /app
EXPOSE 80 443 5000 5001 5010 5011 7000 22676
#ENTRYPOINT [ "bash"]
CMD ["bash"]I run a container from itdocker container run -it --publish 5000:8018 --name versie3001 -v //c/tijd/mount:/app michel03What goes well is that I see the mounted files.
When I create a new website with dotnet new razor and I run it with dotnet run, it tries to run on port 5000/5001 (the default ports) but I get this error:

warn: Microsoft.AspNetCore.Server.Kestrel[0]
Unable to bind to https://localhost:5001 on the IPv6 loopback interface: 'Cannot assign requested address'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address'.

Actually it only says warn, but the result is the same: when I go to localhost:8018 I get no result (ERR_CONNECTION_REFUSED). What am I doing wrong here? I saw an answer saying I should do this in my container file: ENTRYPOINT [ "dotnet", "watch", "run", "--no-restore", "--urls", "https://0.0.0.0:5000"]. It does not give me the error (the output is "Now listening on: https://0.0.0.0:5000"), which is good, but it also does not connect from https://localhost:8018 on my local machine. | dotnet core docker container - Unable to bind to https://localhost:5001 on the IPv6 loopback interface |
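A short sketch of the answer's two points, reusing the image and container names from the question (treat them as assumptions):

# host port first, container port second
docker container run -it --publish 8018:5000 --name versie3001 -v //c/tijd/mount:/app michel03
# inside the container, bind Kestrel to all interfaces instead of localhost
dotnet run --urls "http://0.0.0.0:5000"
# then, from the host machine
curl http://localhost:8018

With both changes in place, traffic to host port 8018 is forwarded to the container's port 5000, where Kestrel is actually listening.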
When you override the command section, you must remember to keep the existing behavior that is set by the image author. So in your case you can install the Kibana plugin this way, but you must also start Kibana at the end of the command, e.g. by using && to run the process after the plugin installation. In your case it should be:

command: sh -c './bin/kibana-plugin install https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.4/elastalert-kibana-plugin-1.0.4-7.0.1.zip && exec /usr/local/bin/kibana-docker' | I am new to using docker and trying to add the elastalert plugin to my kibana image. I am using Kibana 7.0.1 and Elasticsearch 7.0.1 and trying to use the elastalert 7.0.1 kibana plugin from github. When I run docker-compose up using the below docker-compose.yml file it does seem to install the plugin, but it doesn't actually start up kibana. Am I missing another command? Thanks

services:
...
kibana:
image: docker.elastic.co/kibana/kibana:7.0.1
...
command: ./bin/kibana-plugin install https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.4/elastalert-kibana-plugin-1.0.4-7.0.1.zip | Adding Plugin to Kibana Image in docker-compose.yml |
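A quick way to verify that the combined command from the answer above actually installs the plugin and then launches Kibana (the service name kibana is taken from the question's compose file):

docker-compose up -d kibana
docker-compose logs -f kibana   # the plugin install output should be followed by normal Kibana startup logs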
Use bash (or your preferred shell, if not bash) in the entrypoint:

ENTRYPOINT [ "bash", "-c", "./entrypoint.sh" ]

This will run the entrypoint script even if you haven't set the script as executable (which I see you have). You can also use this similarly with other scripts, for example with Python:

ENTRYPOINT [ "python", "./entrypoint.py" ]

You could also try calling the script with the full executable path:

ENTRYPOINT [ "/opt/app/entrypoint.sh" ] | I have a multistage Dockerfile which I'm deploying in k8s, with the script set as ENTRYPOINT ["./entrypoint.sh"]. Deployment is done through Helm and the environment is Azure.
While creating the container it errors out with "./entrypoint.sh": permission denied: unknown

Warning Failed 14s (x3 over 31s) kubelet Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused:
exec: "./entrypoint.sh": permission denied: unknown
Warning BackOff 1s (x4 over 30s) kubelet Back-off restarting failed container

I have run chmod +x to make it executable and chmod 755 for permissions.

Dockerfile

##############
## Build #
##############
FROM repo.azurecr.io/maven:3.8.1-jdk-11 AS BUILD
ARG WORKDIR=/opt/work
COPY . $WORKDIR/
WORKDIR ${WORKDIR}
COPY ./settings.xml /root/.m2/settings.xml
RUN --mount=type=cache,target=/root/.m2/repository \
mvn clean package -pl app -am
RUN rm /root/.m2/settings.xml
RUN rm ./settings.xml
#################
### Runtime #
#################
FROM repo.azurecr.io/openjdk:11-jre-slim as RUNTIME
RUN mkdir /opt/app \
&& useradd -ms /bin/bash javauser \
&& chown -R javauser:javauser /opt/app \
&& apt-get update \
&& apt-get install curl -y \
&& rm -rf /var/lib/apt/lists/*
COPY --from=BUILD /opt/work/app/target/*.jar /opt/app/service.jar
COPY --from=BUILD /opt/work/entrypoint.sh /opt/app/entrypoint.sh
RUN chmod +x /opt/app/entrypoint.sh
RUN chmod 755 /opt/app/entrypoint.sh
WORKDIR /opt/app
USER javauser
ENTRYPOINT ["./entrypoint.sh"]PS: Please don't make it duplicate ofhttps://stackoverflow.com/a/46353378/2226710as i have addedRUN chmod +x entrypoint.shand it didn't solved the issue. | Permission denied while executing script entrypoint.sh from dockerfile in Kubernetes |
EXPOSE is just metadata added to the image (as noted in "Docker ports are not exposed"). It does not actually publish the port. You need to make sure you docker run the image with the -p option, in order to actually publish the container port to a host port.

-p=[] Publish a container's port or a range of ports to the host
format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort | I wrote a Dockerfile with EXPOSE 2181 2888 3888 and docker ps shows:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abc644fe1ad0 00088267fb34 "/opt/startzookeeper…" 2 seconds ago Up 1 second 2181/tcp, 2888/tcp, 3888/tcp hopeful_curie

but when I telnet localhost 2181:

Trying ::1... telnet: connect to address ::1: Connection refused
Trying 127.0.0.1... telnet: connect to address 127.0.0.1: Connection
refused telnet: Unable to connect to remote host

Why can't I telnet to the exposed port? What should I add to the Dockerfile? Thanks for any suggestion. | dockerfile expose port cannot telnet |
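To make the answer above concrete, a hedged example that reuses the image ID from the question's docker ps output:

# publish the ports to the host instead of only EXPOSE-ing them in the Dockerfile
docker run -d -p 2181:2181 -p 2888:2888 -p 3888:3888 00088267fb34
# the PORTS column should now show mappings like 0.0.0.0:2181->2181/tcp, and this should connect
telnet localhost 2181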
The depends_on section is only for controlling startup order. A links or networks section is also required to allow the containers to talk to each other. Update the web section of the docker-compose.yml file to add the link to the mongo_service container:

...
web:
depends_on:
- mongo_service
links:
- mongo_service
environment:
PYTHONUNBUFFERED: 'true'
...

Update: The final RUN instruction will execute at build time. You need to use CMD instead for it to execute at runtime:

CMD curl "mongo_service:27017" | I'm trying to use Docker to containerize a web application that uses a Flask web server and a MongoDB database. Within the Flask server, I attempt to connect to Mongo using an environment variable named MONGO_URI:

db = MongoClient(os.environ['MONGO_URI'], connect=False)['cat_database']

Within the container, I attempt to connect to Mongo by setting a MONGO_URI environment variable that references a service name. Full docker-compose.yml:

version: '2'
services:
mongo_service:
image: mongo
web:
# link the web container to the mongo_service container
links:
- mongo_service
# explicitly declare service dependencies
depends_on:
- mongo_service
# set environment variables
environment:
PYTHONUNBUFFERED: 'true'
volumes:
- docker-data/app/
# use the image from the Dockerfile in the cwd
build: .
command:
- echo "success!"
ports:
- '8000:8000'

Full Dockerfile:

# Specify base image
FROM andreptb/oracle-java:8-alpine
# Specify author / maintainer
MAINTAINER Douglas Duhaime <[email protected]>
# Add the cwd to the container's app directory
ADD . "/app"
# Use /app as the container's working directory
WORKDIR "/app"
# Test that the mongo_service host is defined
RUN apk add --update --no-cache curl
RUN curl "mongo_service:27017"This returns:Could not resolve host: mongo_serviceDoes anyone know what I'm doing wrong, or what I can do to get the server to connect to Mongo? I'd be very grateful for any advice others can offer!Docker version:Docker version 17.12.0-ce, build c97c6d6Docker-compose version:docker-compose version 1.18.0, build 8dd22a9 | Docker-Compose: Can't Connect to Mongo |
To make it faster I recommend creating a custom Dockerfile based on python:3.7 that installs all the dependencies during the build. This will save you time, and your job will not need to install dependencies during each job build.

FROM python:3.7
RUN python --version
# Create app directory
WORKDIR /app
# copy requirements.txt
COPY local-src/requirements.txt ./
# Install app dependencies
RUN pip install -r requirements.txt
# Bundle app source
COPY src /app

You can read more about this practice in docker-python-pip-requirements and write-effective-docker-files-with-python. Another option is to add a git client in the Dockerfile and pull the code when creating the container. | Common advice (example) for carrying out CI is to use an image with pre-installed dependencies. Unfortunately for a n00b like me, the link in question doesn't go into further detail. When I look for docker tutorials, it seems they usually teach you how to containerise an app rather than, say, Python with some pre-installed dependencies. For example, if this is what my .gitlab-ci.yml file looks like:

image: "python:3.7"
before_script:
- python --version
- pip install -r requirements.txt
stages:
- Static Analysis
flake8:
stage: Static Analysis
script:
- flake8 --max-line-length=120

how can I containerise Python with some pre-installed dependencies (here, the ones in requirements.txt), and how should I change the .gitlab-ci.yml file, so that the CI process runs faster? | Docker image with dependencies pre-installed for CI |
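To complete the picture from the answer above, the pre-built image has to live in a registry the GitLab runners can pull from; the registry path below is an assumption:

docker build -t registry.example.com/mygroup/ci-python:3.7 .
docker push registry.example.com/mygroup/ci-python:3.7
# then point the `image:` key in .gitlab-ci.yml at that tag instead of python:3.7,
# and drop `pip install -r requirements.txt` from before_script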
Looks like the iframes acting as a browser are receiving the hostname instead of the full path to the resources. Can you set up the following reverse-proxy headers and give it a go:

proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

Basically you have a proxy at the moment, and we want a reverse proxy too. Let me know if this works. | We are trying to update our internal server infrastructure and to proxy all accesses to our R Shiny webservers through an Nginx server. I'm able to get a response from the Shiny server but I'm not able to get related files like CSS/JS through the Nginx server. Setup:
- 2 docker containers (1 hosting nginx, 1 running R for a Shiny application)
- both docker containers are members of a docker network
- the Shiny server listens on port 7676 (internal IP address 172.18.0.3)
- the nginx server hosts a few static HTML files with iframes (legacy, can't get rid of them), which should show content of the Shiny server
- accessing nginx-server/QueryLandscape.html loads the page with the iframe
- the iframe partly works: it loads the static part of the R Shiny application, but it doesn't load the related JS/CSS (e.g. http://nginx-server:8001/ilandscape/shared/shiny.css)
- within the nginx docker container I can access this css file: wget 172.18.0.3:7676/shared/shiny.css

Nginx.conf

location /ilandscape/ {
proxy_pass http://172.18.0.3:7676/;
#proxy_redirect http://172.18.0.3:7676/ $scheme://$host/;
# websocket headers
proxy_set_header Upgrade $http_upgrade;
proxy_http_version 1.1;
proxy_read_timeout 20d;
proxy_set_header Host $host;
}

What am I missing in my nginx conf to proxy/redirect http://nginx-server:8001/ilandscape/shared/shiny.css --> 172.18.0.3:7676/shared/shiny.css? Thanks for your help,
Tobi | Nginx: Proxy pass / proxy redirect to shiny web applications |
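A quick way to verify the header change suggested in the answer above, using the hostname and path from the question:

# after reloading nginx, the stylesheet should come back through the proxy with a 200
curl -I http://nginx-server:8001/ilandscape/shared/shiny.css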
After some investigation, I found out this seemed to be an issue when using the Alpine container for my Go application. To fix this I had to add the binutils-gold dependency within my Dockerfile. My Dockerfile now looks like the below and has fixed the issue:

FROM golang:1.15.3-alpine3.12 AS builder
RUN apk update && apk add gcc make git libc-dev binutils-gold
ADD ./ /src/
WORKDIR /src/
RUN make build
FROM alpine:3.12
COPY --from=builder /src/static /app/static
COPY --from=builder /src/huski-go /app/
ENTRYPOINT ["/app/huski-go"]You can read more about this where I found the answer here:https://github.com/nodejs/node/issues/4212 | I am attempting to build a Docker image for my application to use within Integration tests.The image can be built fine on my old 2017 Macbook but fails when trying on my new Macbook with the M1 chip.The error I receive is:unable to build image:
The command '/bin/sh -c make build' returned a non-zero code: 2
{"version": "TEST", "output": "Step 1/9 : FROM golang:1.15.3-alpine3.12 AS builder---> 9701aa6ab80a
Step 2/9 : RUN apk update && apk add gcc make git libc-dev ---> Using cache ---> 87ff8d250e2d
Step 3/9 : ADD ./ /src/ ---> Using cache ---> ef95bb030ff7
Step 4/9 : WORKDIR /src/ ---> Using cache\n ---> 3b982c9ab004
Step 5/9 : RUN make build ---> Running in f7596e65a80b\u001b[91m# github.com/qadre/huski.go\n/usr/local/go/pkg/tool/linux_arm64/link: running gcc failed: exit status 1
collect2: fatal error: cannot find 'ld'\ncompilation terminated.\n\n\u001b[0m\u001b[91
mmake: *** [Makefile:8: build] Error 2\n\u001b[0mRemoving intermediate container f7596e65a80b\n"}My make build isbuild:
@go build -race -o huski-go -ldflags="-X 'main.Version=${VERSION}'"When I run ld -v I get:@(#)PROGRAM:ld PROJECT:ld64-609.8 BUILD 15:07:46 Dec 18 2020
configured to support archs: armv6 armv7 armv7s arm64 arm64e arm64_32
i386 x86_64 x86_64h armv6m armv7k armv7m armv7em LTO support using:
LLVM version 12.0.0, (clang-1200.0.32.29) (static support for 27,
runtime is 27) TAPI support using: Apple TAPI version 12.0.0
(tapi-1200.0.23.5)Has anyone encountered this with the new Macbooks? | Unable to build Docker image using new Mac M1 |
You are missing a semicolon after BEGIN TRANSACTION. | I am trying to create two tables using docker-compose and a Dockerfile with Postgres SQL. However, I get the following error:

psql:/docker-entrypoint-initdb.d/tables/users.sql:11: ERROR: syntax error at or near "CREATE" postgres_1 | LINE 2: CREATE TABLE users

I am not sure what I am doing wrong, but I checked my SQL query via eversql.com/sql-syntax-check-validator/ and it seems to be valid SQL. Could it be the version of the postgres image I am using, or something else? My dockerfiles look correct to me but please do let me know if I am missing something. Here is my Dockerfile:

FROM postgres:latest
ADD /tables/ /docker-entrypoint-initdb.d/tables/
ADD /deploy_schemas.sql/ /docker-entrypoint-initdb.d/Here is mydeploy_schemas.sql-- Deploy login and users tables
\i '/docker-entrypoint-initdb.d/tables/users.sql'
\i '/docker-entrypoint-initdb.d/tables/login.sql'Here is myusers.sqlBEGIN TRANSACTION
CREATE TABLE users (
id serial PRIMARY KEY,
name VARCHAR(100),
email text UNIQUE NOT NULL,
entries BIGINT DEFAULT 0,
joined TIMESTAMP NOT NULL
);
COMMIT;Here is mylogin.sqlBEGIN TRANSACTION
CREATE TABLE login (
id serial PRIMARY KEY,
hash VARCHAR(100) NOT NULL,
email text UNIQUE NOT NULL,
);
COMMIT;and finally here isdocker-compose.ymlversion: "3.8"
services:
#Backend API
smart-brain-api:
container_name: backend
build: ./
command: npm start
working_dir: /usr/src/test-api
environment:
POSTGRES_URI: postgres://admin:password@postgres:5432/test-api
links:
- postgres
ports:
- "3000:3000"
volumes:
- ./:/usr/src/test-api
#Postgres
postgres:
environment:
POSTGRES_USER: admin
POSTGRES_DB: docker-test-api
POSTGRES_HOST: postgres
POSTGRES_PASSWORD: password
build: ./postgres
ports:
- "5432:5432"any ideas? | ERROR: syntax error at or near "CREATE" in Docker-compose Postgres |
While using --link, point to postgres (i.e., your postgresql container name) instead of the IP:

jdbc:postgresql://postgres:5432/dBName

So for a full solution, run your postgresql and tomcat containers:

docker run -d --name postgres me/postgresql:v1
docker run -d -p 8080:8080 --name tomcat --link postgres:postgres me/tomcat:v1(Notice here I didn't put port for postgres container since it will already have 5432 exposed internally, unless you want to hit it from outside your host, you don't need to specify a port here)And your server war file will the jdbc address above, postgres will automatically resolve to the container's IP address when they are linked.Many thanks to @larsks for pointing it out. | I have a alpine docker with postgres, with listen address '*' and listening to 5432, which I'm deploying usingdocker run -d --name postgres me/postgres:v1and my tomcat container with oracle jre8, on which I'm deploying my rest web service using:# Set environment
ENV CATALINA_HOME /opt/tomcat
EXPOSE 8080
# Launch Tomcat on startup
CMD ${CATALINA_HOME}/bin/catalina.sh run
RUN rm -rf ${CATALINA_HOME}/webapps/docs \
${CATALINA_HOME}/webapps/examples \
${CATALINA_HOME}/webapps/ROOT
# Deploying war file
ADD myapp.war ${CATALINA_HOME}/webapps/ROOT.war
# Restarting server after deploying
CMD ${CATALINA_HOME}/bin/catalina.sh runAnd deploying it withdocker run -d -p 8080:8080 --name tomcat --link postgres:postgres me/tomcat:v1Both are being executed on my laptop, with IP address 192.168.x.x, and I checked the port is listening.Unfortunately my web service on tomcat cannot connect to the postgres service usingjdbc:postgresql://192.168.x.x:5432/dBNameAlternate I already tried:I launched postgres on it's own port using,docker run -d -p 5432:5432 --name postgres me/postgres:v1
docker run -d -p 8080:8080 --name tomcat me/tomcat:v1Then usedjdbc:postgresql://192.168.x.x:5432/dBNameandjdbc:postgresql://localhost:5432/dBNamebut neither seems to work.In both cases I can see my web server running in tomcat manager, and I am able to access my dB usingpsql -h localhost -p 5432 -d dBName -U myUseras well as pgAdmin.Any help in resolving this is appreciated.Solution Update:While using --link, point to postgres (i.e., your postgresql container name) instead of IPjdbc:postgresql://postgres:5432/dBNameMany thanks to @larsks for pointing it out. | Docker Tomcat container unable to access Postgres container |
After updating Dotnet DSK to 3.1.425 (release date November 8th 2022) the issue was fixed. | I'm having a problem with my .Net Core 3.1 Project. I'm using Docker for hosting the MS SQL Database (image azure-sql-edge) and I run it on a MacBook Pro M1 Max.When starting the project with Dotnet Watch Run everything works ok but after a save in Visual Studio Dotnet Watch Run restarts and gives me an error:rosetta error:/var/db/oah/0cbcd548c398ac044cf47633c4e5aa068c1a0416a18ad1861a768ac56fd1d33b/68b61c75aa9514f21db1470814e91bac8c95ea1a32f4e42fc88601dc4eeac1fc/Project.aot: attachment of code signature supplement failed: 1And Dotnet watch gives a:dotnet watch ❌ Exited with error code 133Anybody has a clue what's going wrong here? | Dotnet Watch Run gives me a Rosetta Error: attachment of code signature supplement failed: 1 after save |
This doesn't really accomplish much since things will be re-downloaded if they are requested again. But if you insist on a silly thing, the best bet is a DaemonSet that runs with the host docker control socket hostPath-mounted in and runs docker system prune as you mentioned. You can't use a cron job, so you need to write the loop yourself, probably just bash -c 'while true; do docker system prune && sleep 3600; done' or something. | Due to some internal issues, we need to remove unused images as soon as they become unused. I do know it's possible to use Garbage collection but it doesn't offer the strict policy that we need.
I've come across this solution but it's deprecated, and it also removes containers and possibly mounted volumes. I was thinking about setting a cron job directly on the nodes to run docker prune, but I hope there is a better way. No idea if it makes a difference, but we are using AKS | Kubernetes: How to automatically clean up unused images |
You can use Alpine, which is less than 5MB. In the case of a multi-stage build, you can have the same bonus of 5MB. Stage one: compiling the source code to generate an executable binary, and stage two: running the result.

# use alpine as base image
FROM alpine as build-env
# install build-base meta package inside build-env container
RUN apk add --no-cache build-base
# change directory to /app
WORKDIR /app
# copy all files from current directory inside the build-env container
COPY . .
# Compile the source code and generate hello binary executable file
RUN gcc -o hello helloworld.c
# use another container to run the program
FROM alpine
# copy binary executable to new container
COPY --from=build-env /app/hello /app/hello
WORKDIR /app
# at last run the program
CMD ["/app/hello"]helloworld.c or replace with your own one#include
int main(){
printf("Hello World!\n");
return 0;
}Another way to copy compiled code to your image which is also in just5MB,FROM alpine:latest
RUN mkdir -p /app
COPY hello /app
WORKDIR /app
CMD ["/app/hello"] | I have a small c program application, I want to build a docker image for that and push it to docker hub and access on any platform.
I want to achieve this within 50MB of image size. i.e. should be able to pack c application and run it without GCC compiler.Please, it will be a great help if one can suggest a way to build an image within the size of 50MB. i.e without GCC compiler which is a dependency for c program to compile.Also, suggest which is a best suitable base image for the c application.NOTE: To build docker image i am using windows as host OS for docker.
NOTE: it is a basic c program to add two number which i want to pack and ship.I have already tried to create a docker image for c application which is of size 307MB. my goal is to build a docker image for c application in less than 50MBMY dockerfile:FROM busybox
COPY --from=rakeshchahar/rc-docker:my-image /usr/src/myapp usr/src/app/
WORKDIR /usr/src/app/
CMD ["./myapp"]I expect to build a image of size 50MB or less and want to access it on any platform. | How to pack and ship a simple c application in docker without the gcc compiler? |
You can run the container as the user ID matching the host user ID owning the directory. Often this is the current user:

docker run -u $(id -u) -v /host/path:/container/path ...

For this to work, your image needs to do a couple of things:
- The data needs to be kept somewhere completely separate from the application code. A top-level /data directory, as you show, is a good choice.
- The application proper should be owned by root, and world-readable but not world-writeable; do not RUN chown ... the application, just COPY it in and run its build sequence as root.
- The image should create a non-root user, but it does not need to match any particular host user.
- The image needs to create the data directory, but it should be completely empty.
- The image startup (often an entrypoint wrapper script) needs to be able to populate the data directory if it is totally empty at startup time.

FROM some-base-image
# Do all of the initial setup and build as root
WORKDIR /app
COPY . .
RUN ...
# Create some non-root user that owns the data directory by default
RUN useradd -r myuser # no specific user ID
RUN mkdir /data && chown myuser /data
# VOLUME ["/data"] # optional, the only place VOLUME makes sense at all
# Specify how to run the container
USER myuser # not earlier than this
EXPOSE ... # optional but good practice
ENTRYPOINT ["/entrypoint.sh"] # knows how to seed /data, then `exec "$@"`
CMD my_app ... # a complete command line | I have the following Dockerfile:...
RUN groupadd -r myuser&& useradd -r -g myuser myuser
RUN mkdir /data && chmod a+rwx /data
USER myuser
...Running the image withdocker runworks fine (I mean the usermyuserhas writing rights in the/datadirectory).If I run the image withdocker run -v /host/path:/data, the usermyusercannot write in the/datadirectory.Question: How to give the usermyuserpermission to write in the/datadirectory in the second case?The reason is the fact that the/host/pathdirectory is own by root and the usermyuserdoes not have permission to write in such directories. | Docker: non-root user does not have writing permissions when using volumes |
After asking in the Kind Slack channel in the Kubernetes workspace I could finally find the answer to my question: whole thread here. TL;DR: Kind was unable to load images with architectures that don't match the host architecture because it lacked a required --all-platforms argument in the call to the ctr tool used by kind load docker-image to load the docker images into the cluster. A PR to fix the issue was filed and it will be fixed in future releases of Kind. | I have an Apple Macbook Pro with an M1 chip, where I have a local kubernetes cluster running through Kind. The thing is I don't understand how Kind deals with docker images from different platforms/architectures. I have an application installed via Helm that points to some docker images with linux/amd64 architecture, and when I install it through helm (helm install -n [namespace] [repo] [app]), it works like a charm (I can actually look at the downloaded images and see that their architecture is amd64). However, when I download the same image to my local docker registry with docker pull [my-image], and then try to load it into the kind cluster with kind load docker-image [my-image] [my-cluster], it doesn't work and I get the following error:
ERROR: failed to load image: command "docker exec --privileged -i [my-cluster]-control-plane ctr --namespace=k8s.io images import --digests --snapshotter=overlayfs -" failed with error: exit status 1
Command Output: ctr: image might be filtered outAnd after googling the error a little bit, I could see that it is due to the mismatch of architectures between the image and thekindcluster.Could someone explain to me the reason for these different behaviors? | Unable to load local docker image in kind kubernetes cluster |
ECS containers can mount volumes, so you would define:

{
"containerDefinitions": [
{
"mountPoints": [
{
"sourceVolume": "logs",
"containerPath": "/tmp/clogs/"
},
}
],
"volumes": [
{
"name": "logs",
}
]
}ECS also has a nice UI you can click around to set up the volumes at the task definition level, and then the mounts at the container level.Once that's set up, ECS will mount a volume at the container path, and everything inside that path will be available to all other containers that mount the volume.Further reading:https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html | I have a Sumologic log collector which is a generic log collector. I want the log collector to see logs and a config file from a different container. How do I accomplish this? | How to share file or directory with other container on ECS? |
Docker's official releaseno longer supports RHEL/Centos 6. I think that stopped with 1.7.1 and the official release is at 1.10. I would suggest updating to Centos 7 or anything with a 3.10+ kernel to use the latestdocker-engineas it has improved quite a bit.If you are stuck with Centos 6.5 then either continue with the the EPEL docker-io package or installthe 1.7.1 rpm.Completely remove the Centos 6 packageyum remove docker-ioRemovealldocker data (andneverget it back!)rm -rf /var/lib/dockerRemove the Docker repo configrm /etc/yum.repos.d/docker.repoEither installdocker-ioagainyum install docker-ioOr install thedocker-engine-1.7.1 rpmyum install http://yum.dockerproject.org/repo/main/centos/6/Packages/docker-engine-1.7.1-1.el6.x86_64.rpmStart it and dockerservice docker start
docker run hello-world | CentOS version: lsb_release -d
Description: CentOS release 6.5 (Final)My repo looks like thiscat /etc/yum.repos.d/docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpgI have some old version of docker and when I try to install it, I get an error. When I try the skip option, even after that the docker service does not even exist on my centosyum install docker-enginehas the following problemProcessing Conflict: docker-engine-1.7.1-1.el6.x86_64 conflicts docker-io
--> Finished Dependency Resolution
Error: docker-engine conflicts with docker-io-0.6.2-1.el6.x86_64
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest"Thenservice docker startdoes not exist when I try to start it.How do I do clean of all docker stuff and do this from scratch? | yum install error docker |
starting from Charles Xu's answer, the correct sequence when setting the variable isthis uses the "state" container instance variable instead of "provisioning state". The latter is about the creation of the container group, the first is about the state of the container instance, which is what I need.
I added a delay to decrease the number of (paid) runs of the connector. | I have an Azure logic app that correctly creates an Azure Container Instance. The container starts, does its job and terminates. I need to collect its logs with the appropriate connector and write them to an azure blob.I have all the pieces in place but I do not know how to wait for the container to terminate before using the "get logs of container" connector to collect logs.If the container job would last a predictable amount of time, I could use the Delay connector before getting the logs and it would suffice (I've tried with short jobs and it works well).
But my jobs may last several hours, depending on some external factors, so the Delay technique does not work.I've tried with the "Until" connector, together with delay and the "get properties of a container group" container to wait until the state of the container is not "terminated", but without success (maybe I did it wrong). Anyway this can be quite expensive, since every "check" is billed.How can I wait for the container to terminate before asking for its logs?thanks. | azure logic apps: waiting for an ACI container to terminate to get its logs |
The docker hub registry contains a number of official language images, which you can use as your base image.https://hub.docker.com/_/python/The instructions tell you how you can build your python project, including the importation of dependencies.├── Dockerfile <-- Docker build file
├── requirements.txt <-- List of pip dependencies
└── your-daemon-or-script.py <-- Python script to runImage supports both Python 2 and 3, you specify this in the Dockerfile:FROM python:3-onbuild
CMD [ "python", "./your-daemon-or-script.py" ]The base image uses specialONBUILD instructionsto all the hard work for you. | I'm pretty new to Docker, and I need to create the container to run Docker container as an Apache Mesos task.The problem is that I can't find any relevant examples. They all are centered around Web development, which is not my case.I have a pure Python project with large number of dependencies ( like Berkeley Caffe or OpenCV ).
How to write a Docker file to properly enroll all dependecies ( and how to find them out?) | How to write a Dockerfile for a custom python project? |
Docker doesn't have a built-in software-based solution to share volumes across multiple machines yet. There's work on infinit, but they haven't released anything for production usage. There are 3rd party storage solutions that you can use. If you're on a cloud provider, their solution is typically the best for your use case. For a self-hosted software solution, you could use something like glusterfs. Applications that handle data replication themselves are ideal for containers, e.g. cockroachdb. The typical self-hosted solution is to fall back on to NFS. Even with cloud providers, I typically use their NFS method to mount the storage. From docker, this looks like the following:

# create a reusable volume
$ docker volume create --driver local \
--opt type=nfs \
--opt o=nfsvers=4,addr=192.168.1.1,rw \
--opt device=:/path/to/dir \
foo
# or from the docker run command
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,\"volume-opt=o=nfsvers=4,addr=192.168.1.1\",volume-opt=device=:/host/path \
foo
# or to create a service
$ docker service create \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,\"volume-opt=o=nfsvers=4,addr=192.168.1.1\",volume-opt=device=:/host/path \
foo
# or inside a docker-compose file
...
volumes:
nfs-data:
driver: local
driver_opts:
type: nfs
o: nfsvers=4,addr=192.168.1.1,rw
device: ":/path/to/dir"
...Note that the IP addresses in each of those can be hostnames as long as you keep the type of nfs. | https://nickjanetakis.com/blog/docker-tip-28-named-volumes-vs-path-based-volumesseems to suggest that bothnamed volumesandpath based volumesare stored in the docker host (where containers are run)Suppose I havewebandnginxservice.I thought I could runwebservice in one host andnginxin another host (two different machines) .
(Although I'm just beginning to learn basics of docker, and it'll be a long time before I could separate services to different hosts)Is there a way fornginxcontainer to serve static files thatwebservice has by sharing volumes between the two? | Share docker volumes from multiple hosts? |
"Volumes, but it stores the data both in container and host." Not really; it should only store data on the host (and makes it visible in the container through a bind mount). "if there is a way to stream apache log data to stdout" Possible, yes, through configuration, but that would not be persistent. | I use Docker to build an Apache image, and then use docker-compose to run it. I set up Apache access.log and error.log and want to store them outside of the container. Currently I use volumes, but it stores the data both in the container and on the host.

docker-compose.yml

version: '2'
services:
web:
image: apache
build: .
container_name: my-image
volumes:
- "/var/log/my-app:/var/log/apache2"
restart: always
ports:
- "8000:80"My question is how to only store apache log data in a host, and It woule be better if there is a way to stream apache log data to stdout so that I don't need to store in the host.Thanks in advance! | Docker apache image, store logs in host? |
Zeppelin Docker documentation is missing. You can find some recent fixes in their repo, e.g. the env variable ZEPPELIN_ADDR=0.0.0.0:

docker run --rm -ti \
-p 8080:8080 \
-e ZEPPELIN_ADDR=0.0.0.0 \
--name zeppelin \
apache/zeppelin:0.8.2 | First issue I´m having is that I can not pull the base image without specifying the version tag, not a big deal... but I find it odd, after thatdocker pull apache/zeppelin:0.8.2After that I´m able to get the image, but one I try to run it as:docker run -p 8080:8080 apache/zeppelin:0.8.2ordocker run -p 8080:8080 --rm --name zeppelin apache/zeppelin:0.8.2The browser just don´t show any result at the corresponding port: localhost:8080/In the terminal I get a series of warnings an the following error:org.glassfish.jersey.internal.Errors logErrors docker zeppelin | Zeppelin fails to load on docker: logErrors docker zeppelin |
Macvlan does not generally work over wireless interfaces. It just took me hours to discover that, as it is nowhere mentioned in most macvlan documentation. See: http://hicu.be/macvlan-vs-ipvlan. From my understanding, access points don't like getting packets from MAC addresses that haven't previously authenticated with them. ipvlan L2 works: just replace the macvlan driver with ipvlan and specify ipvlan_mode: 2 under driver_opts. | I'm running Docker containers on a host (A) which is in a local network and gets its IP address from the WLAN router via DHCP. I'd like to access the docker containers via IP address from another host (B) which is in the same local network. I've configured a macvlan docker network in my docker compose file. However, if I scan the network for IP addresses with e.g. nmap -sP XXX.XXX.XXX.0/24 with XXX.XXX.XXX as subnet mask, I don't find new IP addresses. In general: do I have to consider something special in case I create a setup like this? Reference to a similar, simplifying question on forums.docker.com. | How can I make docker container IP addresses accessible in a WLAN? |
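A hedged CLI equivalent of the compose-level change described in the answer above (the subnet, gateway, and wireless interface name are assumptions for illustration):

docker network create -d ipvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=wlan0 -o ipvlan_mode=l2 wlan_ipvlan
# containers attached to this network share the host's MAC address, which is what
# lets their traffic pass the access point's MAC filtering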
Below steps do work for Docker to be installed on OEL 6.10 with a user having super user privileges.Create a user with SUDO Access as suggested in Red-Hat Docs ([Link][1] speaks well on this process). For instance I created an user as docker with group as docker.groupadd docker
useradd -m -g docker dockerAdd docker repository for installing latest copy of Docker for RHEL/Centos 6yum update -y
yum install epel-release
vi /etc/yum.repos.d/docker.repoAdd below contents to /etc/yum.repos.d/docker.repo[docker-repo]
name=Docker Repo
baseurl=https://yum.dockerproject.org/repo/main/centos/6/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpgSwitch to "docker" user and execute below commands:sudo yum install -y docker-enginePost Installation start docker using below commands.sudo chkconfig docker on
sudo service docker start
Starting cgconfig service: [ OK ]
Starting docker: [ OK ]
sudo service docker status
docker (pid 26925) is running...
ps -ef | grep docker
root 25590 14123 0 Jul27 ? 00:00:00 sshd: docker [priv]
docker 25594 25590 0 Jul27 ? 00:00:00 sshd: docker@pts/1
docker 25595 25594 0 Jul27 pts/1 00:00:00 -bash
root 26925 1 2 00:00 pts/1 00:00:00 /usr/bin/docker -d
docker 27106 25595 0 00:00 pts/1 00:00:00 ps -ef
docker 27107 25595 0 00:00 pts/1 00:00:00 grep docker
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES[1]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/ch02s03.html | I am following the documentationhttps://docs.docker.com/engine/installation/rhel/to install docker on RHEL 6.7.
When I run the commandsudo yum install docker-engineI get the following errorError: Package: docker-engine-1.9.1-1.el7.centos.x86_64 (dockerrepo)
Requires: libsystemd-journal.so.0(LIBSYSTEMD_JOURNAL_196)(64bit)
Error: Package: docker-engine-1.9.1-1.el7.centos.x86_64 (dockerrepo)
Requires: libsystemd-journal.so.0(LIBSYSTEMD_JOURNAL_195)(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigestAs per the suggestion I tried to run the commandsudo yum install docker-engine --skip-brokenHere is the outputPackages skipped because of dependency problems:
audit-libs-python-2.3.7-5.el6.x86_64 from RHEL-67-x86_64
docker-engine-1.9.1-1.el7.centos.x86_64 from dockerrepo
docker-engine-selinux-1.9.1-1.el7.centos.noarch from dockerrepo
libsemanage-python-2.0.43-5.1.el6.x86_64 from RHEL-67-x86_64
policycoreutils-python-2.0.83-24.el6.x86_64 from RHEL-67-x86_64
setools-libs-3.3.7-4.el6.x86_64 from RHEL-67-x86_64
setools-libs-python-3.3.7-4.el6.x86_64 from RHEL-67-x86_64How can I fix above problems and install docker on RHEL 6.7 ? | Install docker on RedHatLinux 6.7 |
finally this problem is solved.
I changed .env
DB_HOST=127.0.0.1 to DB_HOST=db then it's work!!" DB_HOST= service name of mysql container on docker-compose.yml "this time my mysql container name is db, so needed to DB_HOST to be db. | I'm very new to laravel and docker and trying to connect mysql to php container(laravel).
I thought set right my docker-compose.yml and env file in laravel project.Also, I can connect to mysql db inside the container.Here is a error when I did php artisan migrate :SQLSTATE[HY000] [2002] Connection refused (SQL: select * from information_schema.tables where table_schema = myapp and table_name = migrations and table_type = 'BASE TABLE')Can anyone know what happened?docker-compose.ymlversion: '3'
services:
php:
container_name: php
build: ./docker/php
volumes:
- ./myapp/:/var/www
nginx:
image: nginx:latest
container_name: nginx
ports:
- 80:80
volumes:
- ./myapp/:/var/www
- ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
depends_on:
- php
db:
image: mysql:8.0
container_name: db
environment:
MYSQL_ROOT_PASSWORD: root1234
MYSQL_DATABASE: myapp
MYSQL_USER: docker
MYSQL_PASSWORD: docker
TZ: 'Asia/Tokyo'
command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
ports:
- 3306:3306envDB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=myapp
DB_USERNAME=docker
DB_PASSWORD=docker | Can not connect mysql with laravel (docker) |
Let's say your application inside docker is now working on port 8000
You want to expose your application to internet.
The request would go: internet -> router -> physical computer (host machine) -> docker.You need to export your application to your host machine, this could be done viaEXPOSE 8000instruction in Dockerfile.
That port should be accessible from your host machine first, so, when starting your docker image as docker container, you should add-pparameter, such assudo docker run -d -it -p 8000:8000 --name docker_contaier_name docker_image_nameFrom now on, your docker application can be access within your host machine, let's say it is your physical computer.Forward port from your router to your host machine
This time, you may want to do as what you did in your question.Access your application from internet.
If I am thinking correctly, the ip address10.0.0.140is just your computer LAN IP address, it cannot accessible from internet.
You can only able to connect to your app via an internet IP, to do that, you can check your router to see what is your WAN IP address, which will be assigned to your router by your internet service provider. Or go google with "what is my IP" | I deployed a ghost blogging platform on my server using docker. Now I want to expose it to the internet but I'm having some difficulties doing so.I opened port8000in my router a forwarded it to port32769which is the one assign to that container. Using port32769inside my network I can access the website fine but when I try to access it from the internet it gives atook too long to responderror.Local IP + PORT:http://10.0.0.140:32769/Docker port configPort testerRouter settingsThis post was also added toSuper Usersince it has been said that it would be responded better in there. | Exposing a docker container to the internet |
The lsb-release package is not included in the minimal Ubuntu image, but you could make use of the /etc/lsb-release or /etc/os-release file instead (the second one is in common use; refer to this answer for a comparison). For the Dockerfile, just change $(lsb_release -cs) to $(. /etc/os-release && echo $VERSION_CODENAME), so you won't waste time updating and installing packages. | While building a docker image, it's possible to set a custom apt mirror by overwriting /etc/apt/sources.list, e.g.

FROM ubuntu:focal
RUN echo "deb mirror://mirrors.ubuntu.com/mirrors.txt focal main restricted universe multiverse" > /etc/apt/sources.list && \
echo "deb mirror://mirrors.ubuntu.com/mirrors.txt focal-updates main restricted universe multiverse" >> /etc/apt/sources.list && \
echo "deb mirror://mirrors.ubuntu.com/mirrors.txt focal-security main restricted universe multiverse" >> /etc/apt/sources.list
...If the base image is a variable, e.g.FROM ${DISTRO}, thesources.listshould be adjusted based on the ubuntu release.I tried$(lsb_release -cs)like below:RUN echo "deb mirror://mirrors.ubuntu.com/mirrors.txt $(lsb_release -cs) main restricted universe multiverse" > /etc/apt/sources.list && \
echo "deb mirror://mirrors.ubuntu.com/mirrors.txt $(lsb_release -cs)-updates main restricted universe multiverse" >> /etc/apt/sources.list && \
echo "deb mirror://mirrors.ubuntu.com/mirrors.txt $(lsb_release -cs)-security main restricted universe multiverse" >> /etc/apt/sources.listBut it sayslsb_release: not found.The workaround is to install the package before running it.RUN apt-get update && apt-get install -y lsb-releaseHowever, the install oflsb-releasepackage could be very slow in some areas.So the question is, is there a proper way to set the apt source mirror before using apt? | Dockerfile: How to set apt mirror based on the ubuntu release |
Generally, a container-in-container setup involves linking/var/run/docker.sockanddockeritself.For example,in this thread:docker run --name jenkins --privileged=true -t -i --rm -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/bin/docker -p 8080:8080 jenkinsThis is not exactly your case, since you don't need to run Jenkins itself in a "cic" (container in container").But that illustrates how you would run any container in a container, with docker available in it.Make sure the user in that container is part of the docker group (if you don't want to use root), as in thisjenkins/setup-docker-and-start-jenkins.shscript#!/bin/sh
set -e
JUSER="jenkins"
DOCKER_GID=$(ls -aln /var/run/docker.sock | awk '{print $4}')
if ! getent group $DOCKER_GID; then
echo creating docker group $DOCKER_GID
addgroup --gid $DOCKER_GID docker
fi
if ! getent group $GID; then
echo creating $JUSER group $GID
addgroup --gid $GID $JUSER
fi
if ! getent passwd $JUSER; then
echo useradd -N --gid $GID -u $UID $JUSER
useradd -N --gid $GID -u $UID $JUSER
fi
DOCKER_GROUP=$(ls -al /var/run/docker.sock | awk '{print $4}')
if ! id -nG "$JUSER" | grep -qw "$DOCKER_GROUP"; then
adduser $JUSER $DOCKER_GROUP
fi
chown -R $JUSER:$JUSER /var/jenkins_home/Note that this setup usestinito launch Jenkins (as I described in "Jenkins does not run automatically after install in Docker container")exec su $JUSER -c "/bin/tini -- /usr/local/bin/jenkins.sh"Again, those scripts are for using Jenkins in "cic".In your case, you can use those scripts for the containers that your Jenkins will have to run. | I'm working on Centos7. I have a Docker container which is running Jenkins. In that Jenkins-container I have to build and run other Docker containers. But Jenkins doesn't know docker. I'm able to execute a shell and install docker inside the container. But isn't it possible to let the container use my docker-engine on the host? How can I use it?What is the best option to install Docker inside a Jenkins-(docker)-container? | How to run Docker inside Jenkins which is running as container |
This dates backfrom June 2015, whenDocker announced"New Apt and Yum Repos"That is when new packages (like the one for CentOS) were named docker-engine (initially to replacelxc-docker*) | When it comes to install Docker on centos, i found 2 different ways to do it.The first one is :yum install docker-engineThe second one is:yum install docker-ioAnd in case i installed docker using the first one, it i continue with the second one the error appeared, like this:Error: docker-engine conflicts with docker-1.8.2-10.el7.centos.x86_64
Error: docker-engine-selinux conflicts with docker-selinux-1.8.2-10.el7.centos.x86_64So anyone can tell me what's the difference between them? | What's the difference when installing docker with 2 of these following command? |
I think redis is expecting lines terminated by \r or \r\n. If you're doing this on Linux, you'll get \n-terminated lines, which redis can't parse. Try this, in the same directory where you entered the other commands:

# rm $$
# for i in {0..10} ; do printf "SET Key$i Value$i\r\n" >> $$ ; done
# cat $$ | redis-cli --pipeWhoever wrote that tutorial was probably working on Mac or Windows, which happened to produce the appropriate line terminators. | I'm trying to followRedis Mass Insertion – RediswithRedisand something is amiss(.root@f7ca5eef4a4c:~# redis-cli --version
redis-cli 3.0.6
root@f7ca5eef4a4c:~# redis-cli
127.0.0.1:6379> flushall
OK
127.0.0.1:6379>
root@f7ca5eef4a4c:~# for i in {0..10} ; do echo "SET Key$i Value$i" >> $$ ; done
root@f7ca5eef4a4c:~# cat $$ | redis-cli --pipe
All data transferred. Waiting for the last reply...
ERR unknown command 'ET'
ERR unknown command 'ET'
ERR unknown command 'ET'
ERR unknown command 'ET'
ERR unknown command 'ET'
ERR unknown command 'ET'
ERR unknown command 'ET'
ERR unknown command 'ET'
ERR unknown command 'ET'
ERR unknown command 'ET'
Last reply received from server.
errors: 10, replies: 11
root@f7ca5eef4a4c:~# cat $$
SET Key0 Value0
SET Key1 Value1
SET Key2 Value2
SET Key3 Value3
SET Key4 Value4
SET Key5 Value5
SET Key6 Value6
SET Key7 Value7
SET Key8 Value8
SET Key9 Value9
SET Key10 Value10
root@f7ca5eef4a4c:~#What am I doing wrong? Why is it failing? | Redis Mass Insertion - Errors out |
You can address either of the two blockers mentioned as such:With regards to the dynamic DHCP IPs, you can follow this resin.io guide about setting up static IPs:https://docs.resin.io/reference/resinOS/network/2.x/#setting-a-static-ip. After setting up a static ip, you should be able to use it in theportsconfiguration.Another option is to use iptables, within yourmosquittoapplication container. This can be achieved by:a) setting thenetwork_mode: hostandprivileged: truesettings for the mosquitto serviceb) installingiptablesas part of aRUNinstruction in your Dockerfile (e.g.RUN apt-get update && apt-get install iptables)c) configuring iptables (e.g.iptables -A INPUT -i eth0 -p tcp --destination-port 1883 -j DROPto drop connections to port 1883 on thewlan0interface)As a side-note, I'd encourage you to have a look at our community forum (https://forums.resin.io) for any resin.io questions you might have. Our user base is pretty active there and chances are that more people will have a similar question or helpful suggestions for you.Thanks! | My server has two network interfaces, eth0 and wlan0, one connected to the internet and the other to an internal network. The current solution of exposing Docker container ports with docker-compose to a specific interface is to use:version: '2'
services:
mosquitto:
ports:
- "192.168.0.1:1883:1883"This makes it brittle since the IP addresses are distributed via DHCP. Several devices are used, of which each may have a different IP address. Therefore, is it possible to expose ports to only a specific interface? In addition, everything runs onResin.io, limiting the configuration of iptables and co. | Is it possible to expose docker ports to a specific interface |
I don't know if this help, but if your using a Dockerfile you can addRUN usermod -u 1000 nginxor if your using Apache you can sub. nginx for apache.This seems to be only an issue for OS X and the issue is actually something to do with VirtualBox and not directly related to Docker. I had this issue with Docker v1.9.x and now again with v1.10.3. This time I was not able to solve it with the above solution but was able to solve it by writing my cache to a database. In this case it was MySQL/MariaDB but could have easily been memcache or redis.Oddly, creation of log files and writing to them wasn't an issue even though the volume is mounted a separately but originated in the same folder '/Users' of my Mac. | I have a docker-compose.yml file that runs the following (create image called mmm/nginx):web:
image: mmm/nginx
ports:
- "80:80"
volumes:
- ./var:/var/www
- ./etc/nginx/sites-enabled:/etc/nginx/sites-enabled/
links:
- php
- db
php:
image: rossriley/php56-fpm
volumes:
- ./var:/var/www
- ./etc/php5/php-fpm.conf:/etc/php5/fpm/php-fpm.conf
links:
- db
db:
image: sameersbn/mysql
ports:
- "3306:3306"
volumes:
- /var/lib/mysql
environment:
- DB_NAME=tables
- DB_USER=table
- DB_PASS=passit serves up the websites nicely that are stored in/var/wwwThe issue happens when it tries to write to the logs and tries to write session files. Whileit doescreate the files, it can't write them.The folder for thestorageand its nested directories have the permissions set to777.In order for laravel to write to them, I have to$ chmod 777 <.log|sessionfile>and it works nicely. Clearly, this is not the way to develop as I need to start new sessions regularly and create new logs daily.How can I give laravel and the docker containers permission to write the files they create?Update:This is what laravel's log says:local.ERROR: exception 'ErrorException' with message 'file_put_contents(/var/www/com.mtrinteractive.sandbox.form/storage/framework/sessions/e0117b8ca17af9c19572ddb305a272b4c22bd18d): failed to open stream: Permission denied' in /var/www/com.mtrinteractive.sandbox.form/vendor/laravel/framework/src/Illuminate/Filesystem/Filesystem.php:81Update #2Here's the project directory:Update #3Here are the project's permissions and owners: | Laravel installed on a local volume (Mac) from docker nginx/php-fpm can't write session files |
There is no built-in way as of today (might be worth checking with team on Github, as they might have this on the roadmap).However, you can build your own solution using the newlog-pull feature:Write a small time-triggered Azure Function that pulls the logs every few minutes for the containers you are interested in (or all containers). The logs will be written to a storage accountA second blob-triggered Function picks up the uploaded logs and sends them into Log Analytics.//Edit: Very new feature (still in Release Candidate for Edge 1.0.9):https://github.com/veyalla/ehmThis might be exactly what you are looking for | I am looking for a solution to send the application logs generated on iot edge devices to an azure log analytics workspace.I have tried using the Microsoft Monitoring agent using which I was able to send logs generated by running docker containers. However, on an edge device, we are using the moby engine instead of the docker daemon because of which monitoring agent is not collecting the log records(followed this set up to run with docker -https://learn.microsoft.com/en-us/azure/azure-monitor/insights/containers#install-and-configure-windows-container-hosts). Moreover, since I am running my edge environment on windows, I didn't find any container image of monitoring agent targeted for windows.(present for Linuxhttps://hub.docker.com/r/microsoft/oms/)I am looking for a completely automated way of streaming application logs , generated on the edge device, to azure log analytics workspace. | Is there a direct way to send container logs to azure log analytics workspace from iot edge device? |
The image label is not a target label, it's on the metrics themselves. Thus you should use metric_relabel_configs rather than relabel_configs. My blog on Life of a Label explains how this works. | I use Prometheus, together with cAdvisor, to monitor my environment. Now, I tried to use Prometheus' "target relabeling" and create a label whose value is the Docker container's image name, without a tag. It is based on the originally scraped image label. It doesn't work, for some reason, showing no errors when running on debug log level. I can see metrics scraped from cAdvisor (for example container_last_seen) but my newly created label isn't there. My job configuration:
- job_name: "cadvisor"
scrape_interval: "5s"
dns_sd_configs:
- names: ['cadvisor.marathon.mesos']
relabel_configs:
- source_labels: ['image']
# [REGISTRYHOST/][USERNAME/]NAME[:TAG]
regex: '([^/]+/)?([^/]+/)?([^:]+)(:.+)?'
target_label: 'image_tagless'
replacement: '${1}${2}${3}'
My label - image_tagless - is missing from the scraped metrics. Any help would be much appreciated. | Use Prometheus "target relabeling" to extract cAdvisor's Docker image name without tag
You can't pass it as environment variables, but you can specify it as part of your Docker startup by passing in a custom command. Here's an example of doing it with Docker Compose. If you're callingdocker runitself you'd need to rework this into an appropriate structure:kafka-connect:
image: confluentinc/cp-kafka-connect:5.3.1
environment:
CONNECT_REST_PORT: 18083
CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
[…]
volumes:
- $PWD/scripts:/scripts
command:
- bash
- -c
- |
/etc/confluent/docker/run &
echo "Waiting for Kafka Connect to start listening on kafka-connect ⏳"
while [ $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) -eq 000 ] ; do
echo -e $$(date) " Kafka Connect listener HTTP state: " $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) " (waiting for 200)"
sleep 5
done
nc -vz kafka-connect 8083
echo -e "\n--\n+> Creating Kafka Connect Elasticsearch sink"
/scripts/create-es-sink.sh
sleep infinity
This calls a connector script, but if you want to embed it directly you can do it like this. | This is the docker image we use to host docker-connect with the plugins:
FROM confluentinc/cp-kafka-connect:5.3.1
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/name/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
I run this docker via docker-compose and then I have specified some common env variables defined here: https://docs.confluent.io/current/installation/docker/config-reference.html#kafka-connect-configuration But I would also like to specify connector-related config from env variables; for example I have done this:
- CONNECT_NAME=snmp-connector
- CONNECT_CONNECTOR_CLASS=com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector
- CONNECT_TOPIC=fm_snmp
What I am trying to do is, instead of calling curl -X POST -H "Content-Type: application/json" --data '{"name":"","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' http://localhost:8083/connectors I want to just specify it via env variables. BUT!! unfortunately it's not working. So when I try seeing the list of active connectors with curl localhost:8083/connectors/, I don't see it listed there. So finally, my question: can I configure it via env variables or is curl the only way? | Can the kafka connectors be configured via env variables passed when launching docker? Or curl is the only way?
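For reference, the /scripts/create-es-sink.sh invoked in the answer above is not shown there; a minimal sketch of such a script might look like the following. The connector name, class and settings are placeholders for illustration, not taken from the original.

#!/bin/sh
# Register (or update) a connector through the Kafka Connect REST API once the worker is up
curl -s -X PUT -H "Content-Type: application/json" \
  http://localhost:8083/connectors/my-sink/config \
  -d '{
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "topics": "my_topic",
        "connection.url": "http://elasticsearch:9200"
      }'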
There is no way to set the hostname to that value. If you need a unique identifier, I would use the unique container id, which you can get by running $(hostname). | I've got a docker-compose.yml:
master:
build: .
slave:
image: master
hostname: slave
command: run_slave
How can I make docker-compose scale slave=5 generate machines with unique hostnames? ...e.g. something like this:
slave1
slave2
slave3
slave4
slave5 | Scaling with docker-compose and appending a number to the hostname? |
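A small sketch of how the container id could be turned into a per-replica identity at start-up. The entrypoint script and the idea that run_slave accepts a name argument are assumptions for illustration, not part of the original setup.

#!/bin/sh
# Docker sets each container's hostname to its short container ID by default,
# so every scaled replica already has a unique value here.
SLAVE_ID="slave-$(hostname)"
echo "starting ${SLAVE_ID}"
exec run_slave "${SLAVE_ID}"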
That can happen if your docker build sequence does not go all the way, meaning it stops on an error at some point in the Dockerfile. The result of that interrupted process is the last intermediate image built by the Dockerfile line that succeeded, just before the Dockerfile line that failed to execute properly. Other reasons are listed in "What are Docker <none>:<none> images?": Each docker image is composed of layers, with these layers having a parent-child hierarchical relationship with each other. All docker file system layers are by default stored at /var/lib/docker/graph. Docker calls it the graph database. <none>:<none> images stand for intermediate images and can be seen using docker images -a. Another kind of <none>:<none> images are the dangling images, which can cause disk space problems. A dangling image needs to be pruned; when our hello_world image was rebuilt using the Dockerfile, its reference to the old Fedora became untagged and dangling. You can see an example of a dangling image in "What are <none> repository and <none> tags? Why do they appear when I use docker build?". The next command can be used to clean up these dangling images: docker rmi $(docker images -f "dangling=true" -q) In your case, if this was the first time you used docker build, this should not be a dangling image, but an intermediate one as I explained first in this answer. | I am new here trying to learn docker, I started this tutorial: https://docs.docker.com/engine/examples/nodejs_web_app/ Building your image: $ docker build -t mlotfi/centos-node-hello . (mlotfi is my username in https://hub.docker.com/). When I did docker images, I got:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> sha256:189cb 27 seconds ago 485.1 MB
centos centos6 sha256:d0a31 12 days ago 228.9 MBinstead of:REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mlotfi/centos-node-hello latest sha256:189cb 27 seconds ago 485.1 MB
centos centos6 sha256:d0a31 12 days ago 228.9 MBUPDATE:
at the end of the build I see :Complete!
---> e053d8f57e5c
Removing intermediate container 060f921fd08c
Step 4 : COPY package.json /src/package.json
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and
directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and r
eset permissions for sensitive files and directories.
lstat package.json: no such file or directoryBut I already have package.json in the src directory. | REPOSITORY <none> TAG <none> |
I released bindfs 1.13.10 with a workaround for this.Explanation for why it didn't work:https://github.com/mpartel/bindfs/issues/66#issuecomment-428323548 | Bindfs doesn't work for folder inside "/proc"...[root@some_host some_folder]# bindfs --map=root/ "/proc//" "/home//"
Failed to resolve source directory `/proc//': No such file or directory
[root@some_host some_folder]# ls "/proc//"
some_fileWhy?Thanks!UPDATE:Example with Docker container...I ended up finding out that for some reason this command...sudo bindfs --map=root/eduardo "/proc/$(docker inspect --format {{.State.Pid}} 255d)/root" "/home/eduardo/Data/Temp/20180329.1/root"... make bindfs mount the host's file system (root directory) on the mount point and not the container's file system.However the command...ls "/proc/$(docker inspect --format {{.State.Pid}} 255d)/root"... show the contents of the container's file system (root folder).I can not see an explanation for this! It makes no sense! =| | bindfs - Doesn't work for folder inside "/proc" |
In your Dockerfile, run the following command:
RUN groupadd -r -g 1234 newusername && useradd -r -u 1234 -g newusername newusername
USER newusername
This will create a user newusername with GID 1234 and UID 1234, then run the container as the default user newusername. | Where can I configure the start user UID for Docker containers? By default it uses UID 999 which conflicts with some other users on my system. | Where can I configure the start user UID for Docker containers?
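If rebuilding the image is not an option, the UID/GID can also be overridden at container start time; a minimal example (image name is a placeholder):

# Run the container process as UID 1234 / GID 1234 without changing the image
docker run --user 1234:1234 your-image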
The docker run -it command brings up a bash shell in a container where TensorFlow is installed. Once you are at the root@2e87064f0743:/# prompt you can start an interactive TensorFlow session by starting ipython, as the following example shows:
$ docker run -it b.gcr.io/tensorflow/tensorflow
root@2e87064f0743:/# ipython
Python 2.7.6 ...
In [1]: import tensorflow as tf
In [2]: c = tf.constant(5.0)
In [3]: sess = tf.InteractiveSession()
I tensorflow/core/...
In [4]: c.eval()
Out[4]: 5.0 | I followed the directions to install TensorFlow on Docker on Google Cloud here :http://tensorflow.org/get_started/os_setup.html#docker-based-installationThe first time, it did work and showed the tensorflow prompt.
Now that I have logged out and back in, I get this:technologiclee@docker-playground:~$ docker run -it b.gcr.io/tensorflow/tensorflow
root@2e87064f0743:/#I also tried this:root@2e87064f0743:/# docker run b.gcr.io/tensorflow/tensorflow-full
bash: docker: command not found
Is there a different way to start TensorFlow on Docker after it is installed? | Starting TensorFlow on Docker on Google Cloud
I followed the same tutorial, and entered instead: tcp://172.17.0.18:2345/ The test did work: Version = 1.10.0, API Version = 1.22 | I'm following this online tutorial line by line. But at step 3 - Task: Configure Plugin - I'm getting this error message when I press the "Test connection" button: Unsupported protocol scheme found: http://172.17.0.59:2345 Here is a screen of what I've done: So, what is wrong with that and what is the right way of configuring a Docker image with Jenkins? | Building Docker images using Jenkins results in "Unsupported protocol scheme found"
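A quick way to confirm that the daemon really answers on that address before configuring the plugin is to query the Docker Remote API directly; the IP below is the one from the answer and will likely differ on your host.

curl -s http://172.17.0.18:2345/version
# In the Jenkins plugin the same endpoint must then be entered with the tcp:// scheme:
#   tcp://172.17.0.18:2345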
/remi/enterprise/7/php73/aarch64/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
Remi's Repository is only for the x86_64 architecture for now. You can try the drpixel repository (a rebuild of remi's packages), see https://repo.drpixel.fr/ | I tried to build the docker compose on the M1 chipset and I am getting an error such as: /remi/enterprise/7/php73/aarch64/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found But on my Intel chip I haven't encountered this problem. Apparently there are not many solutions on the internet either. Has anyone encountered the same problem? Here is the Dockerfile that causes the problem:
FROM centos:7
WORKDIR /home/project/source
RUN yum -y install epel-release yum-utils && \
yum -y install http://rpms.remirepo.net/enterprise/remi-
release-7.rpm && \
yum-config-manager --disable remi-php54 && \
yum-config-manager --enable remi-php73 && \
yum -y install \
nginx \
jq \
php \
php-fpm \
php-cli \
php-opcache \
php-msgpack \
php-redis \
php-mbstring \
php-intl \
php-xml \
php-gettext \
php-imagick \
php-pgsql \
php-soap \
php-pdo \
php-mysqlnd \
php-apcu \
php-igbinary \
php-json \
php-memcache \
php-xdebug \
php-mysqlnd \
php-openssl \
php-opcache
RUN yum -y update && yum clean all
COPY config/php-fpm.d/www.conf /etc/php-fpm.d/www.conf
COPY config/php.d/90-project-php.ini /etc/php.d/90-project-php.ini
RUN mkdir /var/run/php-fpm && \
chmod -R 777 /var/lib/php && \
ln -sf /dev/stdout /var/log/php-fpm/access.log && \
ln -sf /dev/stderr /var/log/php-fpm/error.log
EXPOSE 9000
CMD ["php-fpm", "-F"]
It is constantly trying to find mirrors with no success. | Can't build docker compose on M1 chipset
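As a workaround when you cannot switch repositories, you could force an x86_64 build under emulation on the Apple Silicon host; a sketch (slower than a native build, and the image tag is a placeholder):

# Build the image for linux/amd64 on an M1/M2 Mac (runs under QEMU emulation)
docker buildx build --platform linux/amd64 -t myapp:latest .
# or, in docker-compose.yml, add under the service:
#   platform: linux/amd64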
An image is just a set of files; there are no processes in it, so the question does not quite make sense as stated. When you start a container from an image, that is where the process starts - processes exist only in an executing container, and when the container stops there are no processes anymore, only the files from the container's filesystem. | Is it possible to commit a container with postgresql running so that it is ready immediately? I have tried using a startup script, CMD and bashrc to start postgresql, which all start it fine when using docker run -it [containerID], but it takes approximately 3-5 seconds for postgresql to come up once logged in. I unfortunately need postgresql running on login. Using this approach... docker build -t [name], docker run -it [containerId]. Inside of the container I then run service postgresql start and detach with ctrl p + q. Once detached I commit with docker commit [containerId] [name]. Upon running the new image, postgresql is not running and the lock file is left over. Is it possible to commit a running service like this or is there a way to have postgresql ready upon running the image? | Docker - commit container with running processes (postgresql)
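The usual pattern, not spelled out in the answer above, is to let the image's CMD/ENTRYPOINT start PostgreSQL every time a container is created, for example with the official image; the container name, tag, password and volume name below are placeholders.

# PostgreSQL starts as PID 1 when the container starts; data persists in the named volume
docker run -d --name mydb \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  postgres:15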
The problem was with the parameters passed to the --mount option.I was trying to pass the source as a host directory, when I should be passing a docker volume according todocker swarm documentation. To correct the problem I did the following.docker volume create --name jenkins_home
docker service create --replicas 1 --name jenkins -p 8080:8080 -p 50000:50000 --mount source=jenkins_home,dst=/var/jenkins_home jenkins:alpineAs a side-note, I think it would be useful for docker to show an error message when a mount source could not be found. | I'm trying to run a fault tolerant Jenkins in a docker swarm using the following command:docker service create --replicas 1 --name jenkins -p 8080:8080 -p 50000:50000 --mount src=/home/ubuntu/jenkins_home,dst=/var/jenkins_home jenkins:alpineBut checking the service status and containers running I see that the replicas stay in 0.ubuntu@ip-172-30-3-81:~$ docker service create --replicas 1 --name jenkins -p 8080:8080 -p 50000:50000 --mount src=/home/ubuntu/jenkins_home,dst=/var/jenkins_home jenkins:alpine
14kwt6xoxorn62irnv9y2wm3r
ubuntu@ip-172-30-3-81:~$ docker service ls
ID NAME REPLICAS IMAGE COMMAND
14kwt6xoxorn jenkins 0/1 jenkins:alpine
87ovyhkparou helloworld 1/1 alpine ping docker.com
ubuntu@ip-172-30-3-81:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
739f8180c989 alpine:latest "ping docker.com" 21 minutes ago Up 21 minutes helloworld.1.4rz08cygb7whjmzm3922h0bfbI've tried running Jenkins in a container (without swarm) and it works perfectly, and also the swarm service example fromdocker's tutorialalso works perfectly.What is preventing the Jenkins service from running? | Jenkins service in Docker swarm stays at 0/1 replicas |
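Relating to the Jenkins-on-swarm answer above: if you really need the host directory instead of a named volume, --mount also supports a bind type, provided the path already exists on whichever node runs the task. A hedged sketch of that variant:

docker service create --replicas 1 --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  --mount type=bind,source=/home/ubuntu/jenkins_home,destination=/var/jenkins_home \
  jenkins:alpine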
If you're using VirtualBox, configure port forwarding like:$ VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port5000,tcp,,5000,,5672"
$ VBoxManage modifyvm "boot2docker-vm" --natpf1 "udp-port5000,udp,,5000,,5672"Read more:http://www.deadcodersociety.org/blog/forwarding-a-range-of-ports-in-virtualbox/https://github.com/dotcloud/docker/issues/4007#issuecomment-34573044 | I have been playing with Docker for a while (on OSX via Vagrant) which worked really nice. In order to access my apps running in the docker containers I had to setup Vagrant to use static IPs ("private_network" setup).While this worked well I think the new approach to use boot2docker is a little lighter and more convenient as I can run docker directly in OSX. However, if I run docker with the usual port forwarding I get this error:docker run -p :5672 -p :15672 mikaelhg/docker-rabbitmq
2014/02/09 10:12:47 Error: start: Cannot start container fecd0f0225f49a889e63e9b113bff36305e9b9ab146ada6730d6cfffe9a10e0b: Process could not be startedSo then if I explicitly map this to a different host port it startsdocker run -p 5000:5672 -p 15000:15672 mikaelhg/docker-rabbitmqHowever I am unable to open this in my OSX host. I am aware that this setup is different to Vagrant as it does not use static IPs but rather NAT but somehow I cannot find proper docs on how I can access my apps from the OSX host.Can anyone point me to the right docs or give me an example what setup I need to use to get boot2docker setup the portforwarding for me? | Map ports so you can access docker running apps from OSX host |
The VOLUME instruction used within a Dockerfile does not allow us to do a host mount, that is, where we mount a directory from the host OS into a container. However, other containers can still mount into the volumes of a container by using --volumes-from=<container>, created with the VOLUME instruction in the Dockerfile. | I understand that using the VOLUME command within a Dockerfile defines a mount point within the container.
FROM centos:6
VOLUME /html
However, I noticed that without that VOLUME definition it's still possible to mount on that VOLUME point regardless of defining it: docker run -ti -v /path/to/my/html:/html centos:6 What is the purpose of defining VOLUME mount points in the Dockerfile? I suspect it's for readability, so people can read the Dockerfile and instantly know what is meant to be mounted? | What is the purpose of defining VOLUME mount points within DockerFile rather than adhoc cmd-line -v?
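A short sketch of the --volumes-from point made in the answer above; "myimage" stands for an image built from the question's Dockerfile and the container name is a placeholder.

# The VOLUME /html declared in the image creates an anonymous volume at run time
docker run -d --name web myimage tail -f /dev/null
# A second container can reuse web's volumes without knowing any host path
docker run --rm --volumes-from web centos:6 ls /html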
After some experimenting I re-read the original question and also took into account the fact that it is independent of the type of program being launched, that is, Java, C++, etc.: the reason why it works in the one case (when invoked with bash -c) and not when you directly invoke it is that ulimit is a bash built-in command and the docs for docker run are not entirely transparent about it. | I want to make sure the process gets killed after 10 seconds of CPU time. The docker run command accepts the flag --ulimit cpu=10 that is supposed to do that. However, when I run a java command using this, the ulimit setting is ignored. The java process with an infinite loop continues even after 10s (actually for minutes until I kill it).
Here is the command I used to test.docker run --rm -i -v /usr/local/src:/classes --ulimit cpu=10 java:8 \
java -cp /classes/ InfiniteLoopInstead of invoking java directly, if I start bash and then run java c, it works as expected.docker run --rm -i -v /usr/local/src:/classes --ulimit cpu=10 java:8 \
bash -c 'date; java -cp /classes/ InfiniteLoop'Why does invoking java program directly does not respect ulimit option?Edit 1:$ docker --version
Docker version 1.9.1, build a34a1d5The java program is, InfiniteLoop.javaimport java.util.*;
class InfiniteLoop {
public static void main(String[] args) throws Exception {
for (long i = 0; i < 1000_000_000_000L; i++) {
if (i % 1_000_000_000 == 0) {
System.out.println(new Date() + ", i = " + i);
}
}
}
}Edit 2:The following doesn't work either. That is, with only java executed in the bash.docker run --rm -i -v /usr/local/src:/classes --ulimit cpu=10 java:8 \
bash -c 'java -cp /classes/ InfiniteLoop'But, adding any noop or ':' command works. Or even an arbitrary word that prints "command not found" also works.docker run --rm -i -v /usr/local/src:/classes --ulimit cpu=10 java:8 \
bash -c ':; java -cp /classes/ InfiniteLoop'and this works too.docker run --rm -i -v /usr/local/src:/classes --ulimit cpu=10 java:8 \
bash -c 'ArbirtraryCommandNotFound; java -cp /classes/ InfiniteLoop'Edit 3:Similar to using the no-op (:), invoking the process with time also makes the process to be killed exactly after the CPU time is exceeded.docker run --rm -i -v /usr/local/src:/classes --ulimit cpu=10 java:8 \
bash -c 'time java -cp /classes/ InfiniteLoop' | docker run --ulimit cpu=10 does not kill java process after timeout |
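A quick way to check what limit actually lands inside the container is to print the shell's CPU-time limit (ulimit -t reports it in seconds); using the same java:8 image as in the question:

docker run --rm --ulimit cpu=10 java:8 bash -c 'ulimit -t'
# expected output: 10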
You can try playing with Cloudera QuickStart Docker Image to get started. Please take a look athttps://hub.docker.com/r/cloudera/quickstart/. This docker image supports single-node deployment of Cloudera's Hadoop platform, and Cloudera Manager. Also this docker image supports spark too. | I want to use Big Data Analytics for my work. I have already implemented all the docker stuff creating containers within containers. I am new to Big Data however and I have come to know that using Hadoop for HDFS and using Spark instead of MapReduce on Hadoop itself is the best way for websites and applications when speed matters (is it?). Will this work on my Docker containers? It'd be very helpful if someone could direct me somewhere to learn more. | Using Hadoop and Spark on Docker containers |
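A hedged example of pulling and starting that image; the flags follow the image's documentation at the time and may need adjusting (the extra published ports are only for the Hue and Cloudera Manager UIs).

docker pull cloudera/quickstart:latest
docker run --hostname=quickstart.cloudera --privileged=true -t -i \
  -p 8888:8888 -p 7180:7180 \
  cloudera/quickstart:latest /usr/bin/docker-quickstart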
I found the solution: for anyone who has the same problem, we need to provide the model path on the local computer and in docker: docker run --name=the_name -p 9000:9000 -it -v "/path_to_the_model_in_computer:/path_to_model_in_docker" tensorflow/serving:1.15.0 --model_name=MODEL_NAME --port=9000 | I have a problem when trying to run a docker container using the docker image tensorflow/serving. I run the cmd: docker run --name=tf_serving -it tensorflow/serving The result is:
2019-10-28 04:23:56.858540: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2019-10-28 04:23:56.858571: I tensorflow_serving/model_servers/server_core.cc:573] (Re-)adding model: model
2019-10-28 04:23:56.858852: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:362] FileSystemStoragePathSource encountered a filesystem access error: Could not find base path /models/model for servable model
I've been digging to resolve it but the error is still there. Does anyone have any idea about this? Thanks so much! | Run docker container Error: Could not find base path /models/model for servable model
You can edit the configuration for docker daemon.
Add a daemon.json file in the following path: %ProgramData%\docker\configThe file should contain something like this:{
"hosts": ["tcp://0.0.0.0:4243"]
}
Then restart the docker service (e.g. PowerShell: Restart-Service docker).
References: How to use Remote API with Windows Container; Configuration File reference | I have Docker Desktop for Windows 1.12.1-stable (build: 7135) installed on my Windows 10 machine. I want to access docker using the remote API through port 4243. I guess this port is not enabled by default. Do you have any idea how to open it? | How to enable docker remote API in "Docker for Windows"
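Once the daemon has restarted, you can verify the endpoint from the same machine, for example:

curl http://localhost:4243/version
# Note: exposing the API on 0.0.0.0 without TLS is insecure outside a trusted network.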
It seems that yum is not available in this image. It uses microdnf as the package manager. Simply use the following dockerfile to install python 3.6:
FROM openjdk:15
RUN microdnf install python36
After building and running a container with a shell process I received:
bash-4.4# python3 -V
Python 3.6.8 | I want to create an image of openjdk15 and pythonI am trying the Dockerfile for buidFROM openjdk:15
RUN yum install -y oracle-epel-release-el7
RUN yum install -y python36
But when I try to build the image it shows:
/bin/sh: yum: command not found
The command '/bin/sh -c yum install -y oracle-epel-release-el7' returned a non-zero code: 127I checked the image also$ docker run --rm -it --entrypoint "" openjdk:15 sh -c "cat /etc/os-release"
NAME="Oracle Linux Server"
VERSION="8.3"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="8.3"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Oracle Linux Server 8.3"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:8:3:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
ORACLE_BUGZILLA_PRODUCT_VERSION=8.3
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=8.3 | docker image: openjdk:15: how to install python inside it |
| is a shell symbol which only works within a shell environment. CMD command param1 param2 (shell form): this works, because it runs as CMD [ "sh", "-c", "command param1 param2"]. CMD ["executable", "param1", "param2"] (exec form, this is the preferred form): this will not invoke a shell, so | will not function. You may reference something from here. For your situation, you need to use a shell to leverage |, so the correct way could be something like this: CMD ["bash", "-c", "/usr/games/fortune -a | cowsay"] | I am running through a Docker tutorial, and the Dockerfile contains the following line: CMD /usr/games/fortune -a | cowsay When using hadolint to lint the file, I get this recommendation: DL3025 Use arguments JSON notation for CMD and ENTRYPOINT arguments So I update the CMD line with JSON notation for the arguments: CMD ["/usr/games/fortune", "-a", "|", "cowsay"] Now, after I (re)build the image and run it, I get this error: (null)/|: No such file or directory What is the correct way to use proper JSON notation syntax when I need to pipe output from one command to another on a CMD line? | Proper JSON notation syntax in a Dockerfile when piping output through multiple commands on a `CMD` line?
This error occurs when using Laravel Sail on Macs with the Apple M1 chip. The docker-compose file provided by Laravel Sail uses MySQL by default. As configured, the docker-compose file is attempting to use an unknown version of MySQL (linux/arm64/v8). This fails with the error message above.This can be solved by opening the docker-compose.yml file in the Laravel project root folder, searching the section named mysql and adding the following below theimage:lineplatform: 'linux/amd64'Adding this line will run an Intel image under emulation on the Mac M1. You can read some background information about this in the officialDocker document about Apple Siliconandhere.If possible for your use case this can also be resolved by switching the image to MariaDB instead of MySQL. MariaDB is basically binary compatible with MySQL. Using MariaDB may be a better option if possible because, as mentioned in the Docker documentAttempts to run Intel-based containers on Apple Silicon machines under
emulation can crash as qemu sometimes fails to run the container.Using the MySQL container in emulation on an M1 Mac could cause issues such as a segmentation fault when starting Sail - in fact I saw this issue in one case. Switching to MariaDB resolved this. You can switch Laravel Sail to MariaDB instead of MySQL by changing theimage:line for the mysql service in the docker-compose.yml file to:image: 'mariadb' | I am attempting to setup a basic project in Laravel using Laravel Sail. According to theofficial Laravel documentationthe following commands will create a new Laravel application called "example-app" and start Laravel Sail.curl -s "https://laravel.build/example-app" | bash
cd example-app
./vendor/bin/sail upHowever, after running these commands I see the following error message:ERROR: no matching manifest for linux/arm64/v8 in the manifest list entries | No Matching Manifest Error when using Sail on Laravel |
As Henry already stated, common layers are downloaded only once and are stored only once. So this has benefits for download as well as storage. Additionally, building an image will reuse layers if the creating command allows. This reduces the build time. For example, if you copy a file into your image and the file is the same as in the last build, the old layer will be reused. See the best practices for writing Dockerfiles for more details. | Let's say I have two different Dockerfiles. Image one, called nudoc/my-base-image:1.1:
FROM ubuntu:16.10
COPY . /test.warImage two called nudoc/my-testrun-image:1.1FROM acme/my-base-image:1.1
CMD /test/start.sh
Both have layers in common. What are the advantages of having layers in a docker image? Does it benefit from pulling from the registry? | what are the advantages of having layers in a docker image?
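You can inspect the layer reuse described in the answer above yourself; a small example using the image names from the question:

# Shows the layers of each image; the base layers of nudoc/my-base-image:1.1
# reappear unchanged in the image that is built FROM it
docker history nudoc/my-base-image:1.1
docker history nudoc/my-testrun-image:1.1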
Have you tried --progress=plain? Example Dockerfile:
FROM alpine
RUN ps aux
build command:
DOCKER_BUILDKIT=1 docker build --progress=plain -t test_buildkit .
#5 digest: sha256:e2e4ae1e7db9bc398cbcb5b0e93b137795913d2b626babb0f148a60017379d86
#5 name: "[2/2] RUN ps aux"
#5 started: 2019-04-19 09:02:58.922035874 +0000 UTC
#5 0.693 PID USER TIME COMMAND
#5 0.693 1 root 0:00 ps aux
#5 completed: 2019-04-19 09:02:59.721490002 +0000 UTC
#5 duration: 799.454128ms
👉 Also, check the very useful answer by @Cocowalla below about BUILDKIT_PROGRESS=plain | When building Docker images with DOCKER_BUILDKIT=1, there is a very cool progress indicator but no command output. How do I see the command output to debug my build? | Dockerfile: RUN ls -l [duplicate]
It happens that Compose expands $TYPE before it gets to the inside of the container. Compose looks for the $TYPE environment variable in the shell or host environment and substitutes its value in. This will work with the following terminal command: docker-compose.yml: command: sh -c 'echo $TYPE' terminal command: TYPE='hello world' docker-compose run web When there is no $TYPE environment variable on the host machine, Compose sets the value of $TYPE to an empty string and outputs a warning. Compose needs to be informed not to expand $TYPE, since we want it expanded inside the shell running in the container. For this, use in docker-compose.yml: command: sh -c "echo $$TYPE" Prepending a dollar symbol to $TYPE escapes it. Reference: Variable Substitution in Compose | Running the command docker-compose run -e TYPE=result mongo_db_backup should give me the value of the given TYPE variable:
mongo_db_backup:
image: 'mongo:3.4'
volumes:
- '/backup:/backup'
command: sh -c '$$(echo $TYPE)'
But instead I get the error "The TYPE variable is not set. Defaulting to a blank string." What am I doing wrong? | Usage of env variable in docker compose run command
Run OpenVPN with the daemon option in the Dockerfile:
CMD openvpn --daemon --config config/fremsyn.ovpn --auth-user-pass config/login.txt --askpass config/password.conf && python3 src/cli/getStatus.py
To run the service, use a docker-compose.yml like this:
docker-compose.yml
services:
name_of_your_service:
image: your_image_from_Dockerfile_build
restart: always
sysctls:
- net.ipv6.conf.all.disable_ipv6=0
cap_add:
- NET_ADMIN
devices:
- /dev/net/tun
volumes:
- /etc/timezone:/etc/timezone:ro
Run command:
$ docker-compose up -d | I am trying to create a docker image which has a python script that connects to an API through a VPN using OpenVPN; however, I cannot seem to get OpenVPN working. In my docker file I have:
# Install openVPN and get config files
RUN mkdir /config
ADD ./config/. /config
RUN apt-get install -y openvpn
# Run openvpn and script
CMD openvpn --config config/fremsyn.ovpn --auth-user-pass config/login.txt --askpass config/password.conf && python3 src/cli/getStatus.pyBut I keep getting the error:ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)Is there a solution to this problem?As a side note, I need to run the container as container instance in Azure. | openVPN inside docker image |
You will need to enable docker remote API on Ubuntu Docker Host by adding below settings in daemon.json or your startup script[root@localhost ~]# cat /etc/docker/daemon.json
{
"hosts": [ "unix:///var/run/docker.sock", "tcp://0.0.0.0:2376" ]
}Once you restart docker you can connect to docker host locally by socket file and remotely by listening port (2376).
Verify the listening port of docker on Ubuntu[root@localhost ~]# netstat -ntlp | grep 2376
tcp6 0 0 :::2376 :::* LISTEN 1169/dockerdNow you can connect to this docker from Windows machine by setting the DOCKER_HOST env variable in Windows like thisPS C:\Users\YellowDog> set DOCKER_HOST=tcp://:2376
PS C:\Users\YellowDog> docker psIt will list docker containers running on Ubuntu Docker Host | I have following scenario.Two Machine ( Physical Machine)One is Windows 10 With Docker On Windows Installer and same way ubuntu 18.04 with docker-ce installed.I can run command on individual and that is fine.I want to connect Ubuntu Docker Host from Docker on Windows machine. So Docker CLI on Windows Point to deamon at Ubuntu Host. | Connect to remote docker host |
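From a Linux or macOS client the same idea applies; the IP below is only a placeholder for your Ubuntu host's address.

export DOCKER_HOST=tcp://192.0.2.10:2376
docker ps
# Anything that can reach this port controls the Docker host, so restrict access
# (firewall it or enable TLS client certificates) before exposing it beyond a trusted network.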
Okay, here is how I solved my current scenario. As updated in the question, I was able to read the certificate from key vault. The next piece was to access the cert within the docker file; since docker doesn't know the location (because it's not part of the context), it's not able to read the cert. So, what I have done is used a copy task to add the cert to the source directory when the docker context is set. Then docker is able to see the certificate and access it (because it's now in the docker context). Below is the copy task, if that helps.
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
inputs:
Contents: |
**\ps-test-cert.crt
TargetFolder: '$(Build.SourcesDirectory)/Source/Logging.API/'
And in the docker file, I just have to use the name because it's available in the context:
COPY ps-test-cert.crt /usr/local/share/ca-certificates/ps-test-cert.crt
RUN chmod 644 /usr/local/share/ca-certificates/ps-test-cert.crt
RUN update-ca-certificates | I am using a ssl certificate while building the docker image to communicate with other different services with in the Kubernetes. right now I have the ssl certificate in my repo and will be published as part of the artifact. we are planning to move the cert to key vault and fetch it while executing our pipeline. I am not sure how can I fetch it while building the docker image. I have tried the default azure key vault task and I am able to get the cert but its not a file(.crt or pfx).Below is my final step in Docker ImageFROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
COPY $(ps-test-cert) /usr/local/share/ca-certificates/ps-test-cert
RUN chmod 644 /usr/local/share/ca-certificates/ps-test-cert
RUN update-ca-certificates
ENTRYPOINT ["dotnet", "Logging.API.dll"]and the cert name in the key vault isps-test-certHere is my key vault task- task: AzureKeyVault@1
inputs:
azureSubscription: 'ARMDeployment-Service-Conn'
KeyVaultName: 'OneK-KeyVault'
SecretsFilter: 'ps-test-cert'
RunAsPreJob: falseDo I have to get the cert and publish as artifact? since I need this in the build time not sure how should I import the cert so that I can use.UpdateI am able to get the certificate using azure cli with the below command. but I am not sure how will I use that inside docker file.When I publish I can see that the certificate is there in the published items.> az keyvault certificate download --vault-name one-KeyVault -n
> ps-test-cert -f cert.pem openssl x509 -outform der -in cert.pem -out
> ps-test-cert.crtin the publish task, I can use it like this.- task: PublishPipelineArtifact@1
displayName: 'Publish Pipeline Artifact'
inputs:
targetPath: 'ps-test-cert.crt'
artifact: testHow can I use it in docker file? | How to fetch Certificate from Azure Key vault to be used in docker image |
Based on theDocker documentation:Compose uses Docker links to expose services containers to one
another. Each linked container injects a set of environment variables, each of which begins with the uppercase name of the container. Docker Compose would create an environment variable representing the full URL of the container using the name_PORT format, e.g. REDIS_PORT=tcp://172.17.0.5:6379. And based on your docker-compose.yml file:
redis:
image: tutum/redis
ports:
- "6379:6379"
volumes:
- /data
You would have an environment variable named REDIS_PORT with a value equal to tcp://172.17.0.3:6379. Since OS environment variables take precedence over profile-specific application properties, Spring Boot would pick up the REDIS_PORT value over redis.port, hence the error:
Caused by: org.springframework.beans.factory.BeanCreationException:
Could not autowire field: private int
com.inkdrop.config.cache.CacheConfiguration.redisPort; nested
exception is org.springframework.beans.TypeMismatchException: Failed
to convert value of type [java.lang.String] to required type [int];
nested exception is java.lang.NumberFormatException: For input string:
"tcp://172.17.0.3:6379"As a workaround for this problem, you either should override theREDIS_PORTenvironment variable with your port value or rename your config name fromredis.nameto anything less controversial.Kinda off topic but just quoting fromtutum-docker-redisGithub repository:This image will be deprecated soon. Please use the docker official
image:https://hub.docker.com/_/redis/ | I have a Spring boot app that connects to a Redis instance that works as a cache. When I'm in dev environment, I have the following:---
spring:
profiles: default
redis:
host: localhost
port: 6379And my cache configuration class is like this:@Configuration
@EnableCaching
public class CacheConfiguration {
@Value("${redis.host}")
String redisHost;
@Value("${redis.port}")
int redisPort;In production, this app is Dockerized, and I have the followingdocker-compose.ymlfile:redis:
image: tutum/redis
ports:
- "6379:6379"
volumes:
- /data
app:
build: .
ports:
- "8080:8080"
links:
- redisAnd theapplication.ymlis:---
spring:
profiles: docker
redis:
host: redis
port: 6379To start the app on Docker, I run with-Dspring.profiles.active=docker, but when the app is starting up, the following error happens:Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: private int com.inkdrop.config.cache.CacheConfiguration.redisPort; nested exception is org.springframework.beans.TypeMismatchException: Failed to convert value of type [java.lang.String] to required type [int]; nested exception is java.lang.NumberFormatException: For input string: "tcp://172.17.0.3:6379"For some reason, Spring Boot is reading theredis.portastcp://172.17.0.3:6379. So for tests proposes, I removed the@Valueannotations fromCacheConfigurationclass, and set it manually toredisas host and6379as port and it worked. Seems like when using environment variables and@Value, Spring get lost. Anyone have an idea? | Environment variables and @Value can't work together on Spring Boot |
You can only mount host directories as volumes, and not individual files in Docker.In yourvolumesdefinition, instead of this:- ${PWD}/frontend/conf/nginx/mysite.template:/etc/nginx/conf.d/default.confyou should do this:- ${PWD}/frontend/conf/nginx:/etc/nginx/conf.d | I rundocker-compose -f docker-compose.prod.yml upand I immediately get the error:ERROR: for frontend Cannot start service frontend: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/c/Users/James/Projects/mysite/frontend/conf/nginx/mysite.template\\\" to rootfs \\\"/var/lib/docker/aufs/mnt/e7a2a699ae3e9ede0dd60b7cfdebb7f2d3adf71e8175157f3c9e88d3285796d2\\\" at \\\"/var/lib/docker/aufs/mnt/e7a2a699ae3e9ede0dd60b7cfdebb7f2d3adf71e8175157f3c9e88d3285796d2/etc/nginx/conf.d/default.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected typeMymysite.templatefile exists. I am using digital ocean withdocker-machine. In development I didn't have this issue running on the same OS (ubuntu 16). I use docker toolbox on windows for development.Here's the frontend config fromdocker-compose.prod.yml:frontend:
image: nginx:stable-alpine
restart: always
networks:
- web
- default
volumes:
- ${PWD}/frontend/conf/nginx/mysite.template:/etc/nginx/conf.d/default.conf
- ${PWD}/frontend/public:/var/www/html
labels:
- "traefik.enable=true"
- "traefik.basic.frontend.rule=Host:mysite.com"
- "traefik.basic.port=80"I followed the instructions from thedocs. Any ideas what's going wrong? | Unable to mount a file with docker-compose |
Add tcp option to sys config as shown here:vi /etc/sysconfig/docker
OPTIONS="--host=tcp://0.0.0.0:2375"After restarting docker, I could connect to remote docker server using python. | How do I connect to remote docker host using python?>>> from docker import Client
>>> cli = Client(base_url='tcp://52.90.216.176:2375')
>>>
>>> cli.containers()
Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python2.7/site-packages/docker/api/container.py", line 69, in containers
res = self._result(self._get(u, params=params), True)
File "/usr/local/lib/python2.7/site-packages/docker/utils/decorators.py", line 47, in inner
return f(self, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/docker/client.py", line 112, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 480, in get
return self.request('GET', url, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/adapters.py", line 437, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='52.90.216.176', port=2375): Max retries exceeded with url: /v1.21/containers/json?all=0&limit=-1&trunc_cmd=0&size=0 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))If I log-in to 52.90.216.176 and use the following:>>> cli = Client(base_url='unix://var/run/docker.sock')this works. But how do I connect to docker running on another server? | connect to docker hosted on remote server |
By default, docker-compose with a v2 yml will spin up a network for your project. Any networks you define will also be created unless you explicitly tell it otherwise. Here's an example docker-compose.yml:version: '2'
networks:
dbnet:
appnet:
services:
db:
image: busybox
command: tail -f /dev/null
networks:
- dbnet
app:
image: busybox
command: tail -f /dev/null
networks:
- dbnet
- appnet
proxy:
image: busybox
command: tail -f /dev/null
ports:
- 80
networks:
- appnetAnd then when you spin it up, you'll see that it creates the networks defined:$ docker-compose up -d
Creating network "test_dbnet" with the default driver
Creating network "test_appnet" with the default driver
Creating test_app_1
Creating test_db_1
Creating test_proxy_1
Note that linking containers also created an implicit dependency, so you may want to use depends_on in your yml to be explicit about any dependencies after removing your link. | Docker 'link' feature will be deprecated as the new 'networking' feature has been released (link). I'm making a docker-compose setup with some containers, and it was fine with 'link' to connect them to each other (without any other commands). Since I need to change the link configuration to networks, I have to create a docker network before 'docker-compose up'. Is there any docker-compose feature that creates the docker network automatically? Or any other way of connecting the containers with some configuration? | Docker-compose network link
You don't need to bridge them: what you want is a superset server (that you happen to be running via docker) to connect to a clickhouse database (that you also happen to be running via docker).You also shouldn't need to install SQLAlchemy for Clickhouse: looking at the dockerfile athttps://hub.docker.com/r/amancevice/superset/~/dockerfile/that image has alreadysqlalchemy-clickhouseinstalled for you.Your steps should be as follow:When youdocker run --detach --name superset [options] amancevice/supersetyou should have your superset instance running athttp://localhost:8088/Similarly, when you run$ docker run -d --name some-clickhouse-server --ulimit nofile=262144:262144 -v /path/to/your/config.xml:/etc/clickhouse-server/config.xml yandex/clickhouse-serveryou should end-up with a clickhouse instance that you can access via SQLAlchemy atsomething likeclickhouse://default:@some-clickhouse-server/testYou'd need to modify that connection URI based on your config.xml - and you should be able to double-check that it works by connecting to it in your python console.You should then be able to connect superset to your clickhouse db in the same way you'd connect to any other DB: by navigating into Superset's menu > Sources > Databases > [new] | I'm trying to setup Apache Superset for Clickhouse.
My understanding so far is that I need to install SQLAlchemy for Clickhousehttps://github.com/xzkostyan/clickhouse-sqlalchemyI'm in Ubuntu 16.04 LTS, and using the Docker vanilla version of Clickhouse and of Superset:https://store.docker.com/community/images/yandex/clickhouse-serverhttps://hub.docker.com/r/amancevice/superset/without special settingsAny idea how I can bridge the two docker containers with clickhouse-sqlalchemy ?
Where and how in that case to install that?
(if you have sample command line that I can reuse that will be great) | Superset for Clickhouse in docker with SQLAlchemy |
I had the same problem and it was caused by running tutorial code from a later version (eg v0.12) against an older version of tensorflow which was in my docker container (v0.11 in my case).
The same problem is discussed here:https://github.com/tensorflow/tensorflow/issues/5643The app.run() method didn't have the argv parameter until v0.12. | I installed Tensorflow on Ubuntu 16.04 LTS following the tutorial given here (with GPU support):Docker Installation for TensorflowManaged to run docker with this command:nvidia-docker run -it -p 8888:8888 -v /home/myusername/notebooks:/notebooks gcr.io/tensorflow/tensorflow:latest-gpu
docker exec -it [my_DOCKER_ID] bashOnce I managed to get into the docker bash successfully, I found that there is tensorflow directory here:cd /usr/local/lib/python2.7/dist-packages/tensorflow/models/image/mnist/I proceeded to try the example code and successfully reached Test error of 0.8%:python convolutional.pyNext, followinghttps://www.tensorflow.org/versions/r0.11/tutorials/mnist/pros/index.htmltutorial page, I would like to try mnist_softmax.py. So I cloned tensorflow's package to /notebooks:cd /notebooks
git clone https://githubcom/tensorflow/tensorflow.gitHowever, I found problem when running the code:cd tensorflow/tensorflow/examples/tutorials/mnist/
python mnist_softmax.py --data_dir /notebooks/tensorflow/tensorflow/examples/tutorials/mnistTraceback (most recent call last):File "mnist_softmax.py", line 78, in
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
TypeError: run() got an unexpected keyword argument 'argv'At this point I'm pretty clueless whether the error was caused by bad installation or it's because there are steps that I havent done. My questions:Is my installation complete? I assumed I had a clean installation knowing that I can run docker and get into the docker bash. Plus, I managed to run convolution.pyIf I understand Docker correctly, I do not need to clone and build tensorflow package at all? | Running mnist_softmax.py on Tensorflow Installed with Docker |
The last arg todocker build, often something likedocker build .is the build context in docker. This directory is sent to the server where the build runs and allCOPYandADDcommands are performed using this context. These commands do not run on the client, and docker is a client/server application, so anything not in that context simply doesn't exist for the purpose of building an image.So in the above example,docker build .the current directory is the build context and if that's run while you're inside of the.settingsdirectory, only those files are part of the build context. Therefore yourbuild.shscript needs to pass a different directory, and also reference where theDockerfileis inside of that build context. That would look like:docker build -f .settings/Dockerfile ..When you do this, all of theCOPYandADDcommands will now be relevant to parent directory, so you may need to adjust yourDockerfileto compensate.For your$(pwd)reference, you can eithercd ..before running yourdocker runcommand or update the command to look like:docker run \
-u root \
--rm \
-v $(pwd)/..:/app \
| I'm pretty new to the bash/shell script world, I'm trying to do the below and it could be pretty simple but I wasn't able to figure out the command, would be great if someone could help me out here and also point me to some documentation wrt to shell script topics. Thank you in advance.My build.sh and Dockerfile resides under a folder called .settings and this folder lives directly under the app root. Now inside my build.sh and Dockerfile when I refer something like $(pwd) or COPY . /apps/ it might not work since my build.sh and Dockerfile does not live directly under the app root.What command I can use in this scenario inside the files that I referenced above. Hope I made it clear. Once again this could be very simple since I'm a newbie in this arena I find it a little difficult.inside build.sh, reference to $(pwd)
docker run \
-u root \
--rm \
-v $(pwd):/app \ ----> this $(pwd) references the application root, but if I
move this build.sh inside a folder called .settings then the $(pwd) context
would change and I still want to refer it to the root.
| $(pwd) - one level up |
Link: Manage data in containers. The basic run command you want is ... docker run -dt --name containerName -v /path/on/host:/path/in/container The problem is that mounting the volume will, for your purposes, overwrite the volume in the container. The best way to overcome this is to create the files (inside the container) that you want to share AFTER mounting. The ENTRYPOINT command is executed on docker run. Therefore, if your files are generated as part of your entrypoint script AND not as part of your build, THEN they will be available from the host machine once mounted. The solution is therefore to run the commands that create the files in the ENTRYPOINT script. Failing this, during the build copy the files to another directory and then COPY them back in your ENTRYPOINT script. | I'm running a docker container with a volume /var/my_folder. The data there is persistent: when I close the container it is still there.
But also want to have the data available on my host, because I want to work on code with an IDE, which is not installed in my container.So how can I have a folder /var/my_folder on my host machine which is also available in my container?I'm working on Linux Mint. | Sharing files between container and host |
fakesystemd is a special package in the CentOS Docker image that satisfies the dependency on systemd without actually installing systemd (after all, you don't usually need an init system within a container). yum info fakesystemd tells a bit more: Minimal docker-specific package to satisfy systemd Provides: without installing systemd in Docker images. It is intended strictly for use in Docker images/containers. It doesn't provide any functionality from the systemd package - it only contains a few important directories and files. fakesystemd is definitely not applicable for a full bootable operating system! To install the real systemd in the image you need to run the yum swap command in this form: yum swap -- remove fakesystemd -- install systemd systemd-libs You need to swap the fakesystemd package with the "real" systemd package, and can then also install systemd-devel:
RUN yum swap -y fakesystemd systemd && \
yum install -y systemd-devel | I'm trying to update a Docker image based on the official CentOS7 image. It is used as a builder for Node.js projects.I need to add thesystemd-develpackage for compiling some dependencies, but this fails with the following error:fakesystemd-1-17.el7.centos.noarch has installed conflicts systemd: fakesystemd-1-17.el7.centos.noarchThanks | Docker as a builder, can't install systemd header files |
Docker allows you to isolate applications running on a host; it does not provide a different OS to run those applications on (with the exception of the client products that include a Linux VM, since Docker was originally a Linux-only tool). If the application runs on Linux, it can typically run inside a container. If the application cannot run on Linux, then it will not run inside a Linux container. An exe is a Windows binary format. This binary format is incompatible with Linux (unless you run it inside an emulator or VM). I'm not aware of any easy way to accomplish your goal. If you want to run this binary, then skip Docker on Linux and install a Windows VM on your host. | I am currently trying to understand and learn Docker. I have an app, an .exe file, and I would like to run it on either Linux or OSX by creating a Docker. I've searched online but I can't find anything allowing one to do that, and I don't know Docker well enough to try and improvise something. Is this possible? Would I have to use Boot2Docker? Could you please point me in the right direction? Thank you in advance, any help is appreciated. | How do you run an .exe file on Docker?
This appears to be the behavior of internal networking. Since the only network attached to the container is an internal network which doesn't permit external traffic, the container becomes isolated by design. To publish a port, you need the container to be attached to a non-internal bridged network. And as soon as you connect a non-internal bridged network to the container, you will see the published port reappear. | I have created two docker networkschnetworkdocker network create --subnet=172.19.0.0/16 chnetworkInternal-networkdocker network create --internal --subnet 10.1.1.0/24 internal-networkwhile create docker container I usechnetwork,docker run -it -d --name containerone -h www.cone.net -v /var/www/html -p 3006:80 --net chnetwork --ip 172.19.0.40 --privileged magentolater I have changed toInternal-networkand disconnect container fromchnetworkdocker network connect internal-network containerone
docker network disconnect chnetwork containeronenow the problem isdocker pscommand does not display port of that container, I mean port is not accessible ininternal-network.when I change network tochnetworkthat time onlydocker psdisplay ports. what I need to do for port is accessible in all the docker networks? | is port not common for all the docker networks? |
Entrypoint cannot have a variable. You can either move it to CMD or directly access it in docker-entrypoint.sh:
ARG db
ENV database ${db}
ENTRYPOINT ["/docker/entrypoint.sh"]
CMD ["${db}"]
-----------ENTRYPOINT---------------------
#!/usr/bin/env bash
echo "Entrypoint stuff"
echo "----------------"
echo "NEW APP DB CLONE FROM $1 or same as $database"
echo "sites/files permission changes"
echo "--------------------------------------"Even if you don't use CMD,$databasewill get you the value you need | This question already has answers here:How to pass ARG value to ENTRYPOINT?(5 answers)Closed4 years ago.I tried to pass an argument to my docker entry point , but it fails ,
these are steps i followedDocker Build Command : docker build -t "DBDNS" --build-arg db=sampleIn DockerfileARG db
ENV database ${db}
ENTRYPOINT ["/docker/entrypoint.sh", ${db}]Error for this
bash: 1: bash: [/var/www/html/.docker/entrypoint.sh,: not foundActually file exists and passing an argument for entrypoint.sh causing issue.
Any clues for this-----------ENTRYPOINT---------------------
#!/usr/bin/env bash
echo "Entrypoint stuff"
echo "----------------"
echo "NEW APP DB CLONE FROM $1"
echo "sites/files permission changes"
echo "--------------------------------------" | Docker Passing an argument Docker Entrypoint with entrypoint.sh [duplicate] |
The reason removing directories fails is that the backing (xfs) filesystem was not formatted with d_type support ("ftype=1"); you can find a discussion on github;https://github.com/docker/docker/issues/27358.To verify ifd_typesupport is available on your system, check the output ofdocker info;Server Version: 1.13.1
Storage Driver: overlay
Backing Filesystem: xfs
Supports d_type: false
Logging Driver: json-fileThis requirement is also described in therelease notes for RHEL/CentOSNote that XFS file systems must be created with the-n ftype=1option enabled for use as an overlay. With the rootfs and any file systems created during system installation, set the--mkfsoptions=-n ftype=1parameters in the Anaconda kickstart. When creating a new file system after the installation, run the# mkfs -t xfs -n ftype=1 /PATH/TO/DEVICEcommand. To determine whether an existing file system is eligible for use as an overlay, run the# xfs_info /PATH/TO/DEVICE | grep ftypecommand to see if theftype=1option is enabled.To resolve the issue, either;re-format the device withftype=1use a different storage driver. Note that the default device mapper configuration (which uses loopback devices) is not recommended for production use, so requires manual configuration.For backward-compatibility (older versions of docker allowed running overlay on systems withoutd_type), docker 1.13 will only log awarningin the daemon logs (https://github.com/docker/docker/pull/27433), but will no longer be supported in a future version. | My DockerFile contains the following instruction:rm -f plugins.7zThis command worked as expected in earlier versions of docker but fails with version 1.13. I see the error:cannot access plugins.7z: No such file or directoryIf I bring up a container with the base image and execute the command manually, I see the same error.Trying to list the folder contents displays:# ls -lrt
ls: cannot access plugins.7z: No such file or directory
total 12
??????????? ? ? ? ? ? plugins.7zThis is not listed as a known issue inDocker Issues. How do I debug the issue further?Edit:For reasons of IP, I cannot post the full Dockerfile here. Also, it may not be necessary. As I mentioned, I am able to simulate the issue even by manually running the container and trying to execute the commandThe file exists before I attempt to delete itI was wrong about there not being a similar bug in the issues list. Here isoneThe issue may not be to do with that file. Deleting other files/folders in the folder also makes them appear with ??? permissionsThe user performing the operation is root | Docker is unable to delete a file when building images |
This would be considered a bad practice or anti-pattern in docker. RVM is trying to solve a similar problem that docker is solving, but with a very different approach. RVM is designed for a host or VM with all the tools installed in one place. Docker creates an isolated environment where only the tools you need to run your single application are included.Containers are ideally minimalistic, only containing the prerequisites needed for your application, making them more portable. Docker also uses layers and a union filesystem to reuse common base images for each image, so any copy of something like Ruby version X is only downloaded and written to disk once, ever (ignoring updates to that image). | I'm new to using docker and so far I'm unable to find many ruby/rails images that containRVMorrbenv.The most common thing I see is that eachcontainerhas multipletagsand each tagged image version hasonly oneversion of Ruby installed. See thisimagefor example.The only way to use another version is to use another tag for the image you are using as you can not install a new version with RVM nor with rbenv.Is this done on purpose?Is it a bad practice to use version managers for programming languages inside docker containers?Why? | Is it a bad practice to use version managers like RVM inside docker containers? |
To accomplish this using docker-compose there are two things you should consider:Set your resolver in HAProxy to use Docker's internal DNS at127.0.0.11.Use aserver-templatein your HAProxy configuration.Using Docker's DNS in the configuration will allow HAProxy to use it as a service discovery mechanism when we define the server template in the backend. You can create theresolverin HAProxy like so:resolvers docker
nameserver dns1 127.0.0.11:53Server templatesare a really powerful feature in HAProxy that allows the configuration to update (add/remove) servers based on the DNS response from the resolver. You can create a server-template with the following:backend all
mode http
server-template nginx- 4 ws:80 check resolvers docker init-addr libc,noneYou can read about each flag used in theserver-templatebut I'll walk you through the relevant ones in your configuration. The first item is the server name prefixnginx-, you can set this to any string you want, HAProxy will append it with a number based on the total number of responses from the resolver. The next item4is the max servers you want HAProxy to configure, you can adjust this higher or lower as you need. Next is the server:port you have configured for your backend service. And finally setting the resolver for this backend todocker. | I have a simple haproxy.cfg that looks like this:frontend http
bind *:8080
mode http
use_backend all
backend all
mode http
server s1 ws:8080Now I have a docker-compose file that looks like this:version : '3.9'
services:
lb:
image: haproxy
ports:
- "8080:8080"
volumes:
- ./haproxy:/usr/local/etc/haproxy
ws:
image: myserverThis works fine, but now I want to use replica to scale my server (ws) instance up to 4.
I can do this, providing this docker-compose file:version : '3.9'
services:
lb:
image: haproxy
ports:
- "8080:8080"
volumes:
- ./haproxy:/usr/local/etc/haproxy
ws:
image: myserver
deploy:
mode: replicated
replicas: 4Callingdocker-compose upgives me this:Recreating test_server_ws_1 ... done
Recreating test_server_ws_2 ... done
Recreating test_server_ws_3 ... done
Recreating test_server_ws_4 ... done
Recreating test_lb_1 ... doneBut how can I reference now in my haproxy.cfg those 4 replicas? Using anything else thanws:8080will give melb_1 | [ALERT] (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:11] : 'server s1' : could not resolve address 'ws_1'.Just using ws as address in the config (like before) will always forward to ws_4.How can I configure haproxy correctly to forward to ws_1, ws_2, ws_3 and ws_4 ? | How to access docker-compose created replicas in haproxy config |
.NET 8 ASP.NET Core Docker images have a breaking change -Default ASP.NET Core port changed from 80 to 8080:The default ASP.NET Core port configured in .NET container images has been updated from port 80 to 8080.
We also added the newASPNETCORE_HTTP_PORTSenvironment variable as a simpler alternative toASPNETCORE_URLS.Previous behaviorPrior to .NET 8, you could run a container expecting port 80 to be the default port and be able to access the running app.For example, running the following command allowed you to access the app locally at port 8000, which is mapped to port 80 in the container:docker run --rm -it -p 8000:80 So you need either to change port mapping8860:8080or change the port for the container (for example by passing-e ASPNETCORE_HTTP_PORTS=80argument todocker runor addingENV ASPNETCORE_HTTP_PORTS=80afterFROM mcr.microsoft.com/dotnet/aspnet:8.0). | I migrated my application to .NET 8.0, ran it locally and it works perfectly.Then I created an image, the container. As a result, the page that was working before now returns a "1.2.3.4 refused to connect."Before, when I was in .NET 7.0, the API worked.My basic DockerFileFROM mcr.microsoft.com/dotnet/sdk:8.0 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
RUN chmod -R 755 /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "myapi.dll"]My api runs on port 886049de1a3ce47c myapi "dotnet myapi.dll" 13 minutes ago Up 13 minutes 0.0.0.0:8860->80/tcp, :::8860->80/tcpLogs:info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://[::]:8080
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /app | .NET 8.0 WebAPI/Swagger Docker Refused to connect |
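A quick sketch of the two fixes described in the answer above (pick one; the image name and host port come from the question):
# Option 1: move the container back to port 80
FROM mcr.microsoft.com/dotnet/aspnet:8.0
ENV ASPNETCORE_HTTP_PORTS=80
# run with: docker run -d -p 8860:80 myapi
# Option 2: keep the new default port 8080 and change only the mapping
# run with: docker run -d -p 8860:8080 myapi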
You need to run the following command with your container name to obtain the IP of the container:docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' You can then access the container at http://IP_Obtained:Port. A detailed explanation can be found at https://docs.docker.com/docker-for-windows/troubleshoot/#limitations-of-windows-containers-for-localhost-and-published-ports | I'm totally new to docker and I tried to run the example image from the "get started" tutorial.My OS is Windows 10 Home (64 bit) and I used Docker Toolbox to install it.
I created the 3 files like the demo told me to do and copied the content into them to avoid typing errors.
When I start the image withdocker run -p 4000:80 friendlyhellothere seems to be no problem, but when I try to connect in the browser with
localhost:4000
the browser (Google Chrome most actual version) tells me that localhost refuses the connection.
Even with Microsoft Edge the same error appears.I also tried to change the windows firewall with an ingoing rule to allow the docker-engine.exe all ports, but it did not help.Has anyone a hint for me how to solve the problem? I really want to get the example run :-)Link to the get started example:https://docs.docker.com/get-started/part2/#pull-and-run-the-image-from-the-remote-repositoryThe docker process is also running:Update:
It seems that I had the wrong version of OracleVM VirtualBox installed, and that the starting of the default VM didn't work because of an error. I installed a newer version and started the default image again and it worked.After starting the docker container with:
docker run -d -p 4000:80 friendlyhelloI was able to call the demo app inside the VirtualBox with port 4000:unfortunately this leaves me behind totally confused about how docker should work :-/. I thought after running docker I would be able to access it on my Windows OS because it's just another process but now it seems I still need a virtual machine? Can someone please explain me what I'm missing at this point? | localhost refuses connection with docker |
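Tying this back to the answer above: with Docker Toolbox the containers live inside the VirtualBox VM, so the published port is reached on the VM's IP instead of localhost. A minimal check (192.168.99.100 is just the usual Toolbox default):
docker-machine ip default
curl http://192.168.99.100:4000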
Double-quotes need to be escaped for them to work as expected, like so: someVar=\"2.60.3\". | I'm building a Docker Desktop for Windows image.
I try to pass a variable to a Powershell command, but it does not work.Dockerfile# escape=`
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN $someVar="2.60.3" ; echo $someVarDocker buildSending build context to Docker daemon 2.048kB
Step 1/3 : FROM microsoft/windowsservercore
---> 2c42a1b4dea8
Step 2/3 : SHELL powershell -Command $ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';
---> Using cache
---> ebd40122e316
Step 3/3 : RUN $someVar="2.60.3" ; echo $someVar
---> Running in dd28b74bdbda
---> 94e17242f6da
Removing intermediate container dd28b74bdbda
Successfully built 94e17242f6da
Successfully tagged secrets:latestExprected resultI can workaround this by using ENV variable and, possibly, a multistage build to avoid keeping this variable:# escape=`
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ENV someVar="2.60.3"
RUN echo $env:someVar
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM microsoft/windowsservercore
---> 2c42a1b4dea8
Step 2/4 : SHELL powershell -Command $ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';
---> Using cache
---> ebd40122e316
Step 3/4 : ENV someVar "2.60.3"
---> Running in 8ac10815ff6d
---> 9073ec3256e0
Removing intermediate container 8ac10815ff6d
Step 4/4 : RUN echo $env:someVar
---> Running in 43a41df36f92
2.60.3
---> 09e48901bea9
Removing intermediate container 43a41df36f92
Successfully built 09e48901bea9
Successfully tagged secrets:latest | How to read Powershell variable inside Dockerfile? |
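To make the accepted fix concrete, the escaped version of the original RUN line would look roughly like this (untested sketch):
RUN $someVar=\"2.60.3\" ; echo $someVar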
What you should use is ENTRYPOINT:FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN pip install numpy==1.12.0
ENTRYPOINT ["python", "t_1.py"]Now when you run the docker commanddocker run -v ./t_1.json:/data/t_1.json /data/t_1.jsonThis will make it equivalent topython t_1.py /data/t_1.json | Below is my Dockerfile content:FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN pip install numpy==1.12.0
CMD ["python", "t_1.py", "t_1.json"]I want to pass this file(t_1.sjon) as argument with docker run command at runtime so that CMD ["python", "t_1.py", "RUN TIME ARGUMENT"]. I tried mounting volumes but fails as json file is independent and I want as argument.Please help. | How to pass json file as an argument using docker run command |
So it seems that I need to access AsyncResult only via my Celery app instance, instead of through Celery, or pass the Celery app instance as an argument.So, this doesn't work:from celery.result import AsyncResult
@app.route('/status/')
def get_status(task_id):
task = AsyncResult(task_id)
return task.stateThis works:from app import my_celery # Your own Celery Application Instance
@app.route('/status/')
def get_status(task_id):
task = my_celery.AsyncResult(task_id)
return task.stateThis also works:from app import my_celery
from celery.result import AsyncResult
@app.route('/status/')
def get_status(task_id):
task = AsyncResult(task_id, app=my_celery)
return task.stateI'm guessing what happens is that by callingAsyncResultdirectly from Celery, it doesn't access Celery's configurations, hence it thinks that there's no backend configured to query results to.But that would only explain complete failure of the function, and not the erratic behavior. I'm guessing this is because of different threads, and situations in which the app instance is being importante, so Celery finds it, not too sure though.I've ran a couple of tests and seems to be working fine again after changing the importedAsyncResult, but I'll keep digging. | I've been using Celery for a while a now, in production I use RabbitMQ as the broker and Redis for the backend in a K8s cluster with no problems so far. Locally, I run a docker compose with a few services (Flask API, 2 different Workers, Beat, Redis, Flower, Hasura), using Redis as both the Broker and the Backend.I haven't experienced problems with this setup for the past months, but yesterday I started getting erratic behavior while accessing task results.Tasks are sent to queue, the worker recognizes it and performs the task, but while querying for the task state I sometimes getDisabledBackend. Normally on the first request, and then it works. Couldn't find a pattern of when it works and when it doesn't, it's erratic.I've read somewhere that Celery didn't work very well with flask's builtin server so I switched to uWSGI with pretty much the same setup I have in production:[uwsgi]
wsgi-file = app/uwsgi.py
callable = application
http = :8080
processes = 4
threads = 2
master = true
chmod-socket = 660
vacuum = true
die-on-term = true
buffer-size = 32768
enable-threads = true
req-logger = python:uwsgiI've seen asimilar questionin Django in which the problem seemed to be on WSGI Mod with Apache, which is not my case, but the behavior seems similar. Every other question I've seen was related to misconfiguration of the backend, which is not my case.Any ideas on what might be causing this?
Thanks. | DisabledBackend: Erratic Behavior with Celery, Redis & Flask |
Though you mention using Hyper-V, because of your screenshot (notably the WSL Integration tab), I suspect you may be running Docker Desktop in WSL2 mode, instead of HyperV mode. (WSL2 to my understanding is the newer, faster option in many cases).With that assumption, to alter the RAM in your WSL 2 VM, you have to create aC:\Users\username\.wslconfigfile with the VM settings. The details are described onthis pagewhich is actually linked to by the page you mentioned.This is an example of a.wslconfigfile:[wsl2]
memory=9GB # Limits VM memoryNote that this applies toallWSL2 VMs (I guess they are called distros?), which I'm not sure is exactly the right answer, since Docker seems to produce 2 distros by itself, plus whatever other distros you have (seewsl --list). Do you want to increase the RAM foralldistros?However, to quotethis page:WSL 2's memory usage grows and shrinks as you use it. When a process frees memory this is automatically returned to Windows.This sounds to me like the.wslconfigmemory setting is a max size, which is only allocated when needed, so I assume setting it for all WSL distros won't cause all of them to balloon up to 9GB immediately upon distro startup unless those distros try to use all that memory.They go on to say:However, as of right now WSL 2 does not yet release cached pages in memory back to Windows until the WSL instance is shut down. If you have long running WSL sessions, or access a very large amount of files, this cache can take up memory on Windows. We are tracking the work to improve this experience on theWSL Github repository issue 4166I have experienced this ballooning memory issue on large ML jobs, so just something to be aware of.So, the.wslconfigchange has seemed to work for me. Another option that has helped me is increasing theswapsize via.wslconfig, since my machine has limited memory. | I am following this tutorialhttps://docs.docker.com/docker-for-windows/#docker-settings-dialogto install docker in windows. I am stuck on the Settings section under Resources tab. My view of resources does not show how it is showing on that link. Is there a way to increase my Ram so I can have ELK to run. I installed the Docker Desktop application with the Hyper-V.This is what I see in my settings.What I should be seeing, but am not. | Incrementing GB of Ram for Docker Container in Windows |
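For completeness, a slightly fuller .wslconfig sketch including the swap option mentioned at the end of the answer, plus the command to apply it (the values are just examples):
[wsl2]
memory=9GB
swap=8GB
# from PowerShell, restart WSL so the settings take effect:
# wsl --shutdown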
It doesn't. docker ps only shows running containers; docker-compose ps shows all containers related to the current compose file, running and stopped. docker-compose kill just force-stops the container, and it can be restarted with docker-compose start; it will therefore be visible when running docker-compose ps but not docker ps. To list all containers with docker, use docker ps -a. To remove stopped containers related to a compose file, run docker-compose rm; if you want to stop and remove all containers, have a look at docker-compose down. | Why does docker compose create containers that are only accessible from docker-compose ps and that persist after killing the running container? | Why is docker-compose ps different from docker ps?
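A short illustration of the commands mentioned in the answer above:
docker ps               # running containers only
docker ps -a            # all containers, including stopped ones
docker-compose ps       # all containers belonging to this compose file
docker-compose rm       # remove stopped containers from this compose file
docker-compose down     # stop and remove the compose file's containers (and networks)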
Create a file/etc/docker/daemon.json{
"dns": ["89.101.160.5", "89.101.160.4"]
}Restart the docker service and try again and see if this works for you.You are probably on office network which has its own DNS servers that you should be using. So you need to tell the Docker daemon which DNS server its containers should be using. That is what is creating the issue. Thedaemon.jsonfile can be used to change the daemon configuration. | I'm trying to send some emails from a docker container running express through register365.This is the code usedexport class Emailer {
transporter: nodemailer.Transporter;
constructor() {
this.transporter = nodemailer.createTransport(smtpTransport({
host: 'smtp.reg365.net',
auth: {
user: 'myuser',
pass: mypassword'
}
}));
}
public async sendEmail(to,body) {
try {
return await this.transporter.sendMail({to,from: '"TEST" <[email protected]>',text: body, subject: ' WE NEED THE CONTENT AND DESIGN OF THIS EMAIL!!!!'});
}
catch(error) {
console.log('Email error');
console.dir(error);
}
}
}That's working all fine if I run the express with npm start but If I run it with docker it'll fail with this errorError: Connection closedIt only fails using smtp.reg.356.net, if I use Gmail it'll work perfectlyThis is the docker file I'm usingFROM node:8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g nodemon
COPY package.json /usr/src/app/
RUN npm install
COPY ./dist /usr/src/app/dist
EXPOSE 3005
EXPOSE 25
CMD [ "npm", "start" ]Many thanks.EDIT:
As requested, running telnet smtp.reg365.net 25 returns thistelnet: could not resolve smtp.reg.356.net/25: Name or service not knownOutput of cat /etc/resolv.conf on the host machinedomain Hitronhub.home
nameserver 89.101.160.5
nameserver 89.101.160.4On the docker containersearch hitronhub.home
nameserver 127.0.0.11
options ndots:0 | Nodemailer with Docker |
The issue was with the ELB Health Check. The default location for the health check was on path '/', and due to the design of the web app, that location was not returning 200 OK. Configuring the health check path to something that returns 200 OK solved the issue. Also, considering the health check grace period on the ECS service can be relevant too in some instances. | The app deploys and runs just fine locally for long periods of time without issue. On Amazon ECS, however, it seems to always crash after running idle for roughly 2:30 min. What's wrong?Dockerfile# Set the node alpine base image
FROM node:15-alpine
# Establish app working directory
WORKDIR /app
# Setup app workspace
COPY app.js .
COPY package.json .
COPY package-lock.json .
COPY app/ app
# Install app dependencies
RUN npm install
# Document listener port
EXPOSE 80
# Run listener
CMD [ "npm", "start" ]Amazon ECS task logs2021-06-05 17:33:20 npm ERR! A complete log of this run can be found in:
2021-06-05 17:33:20 npm ERR! /root/.npm/_logs/2021-06-05T15_33_20_563Z-debug.log
2021-06-05 17:33:20 npm ERR! command failed
2021-06-05 17:33:20 npm ERR! signal SIGTERM
2021-06-05 17:33:20 npm ERR! command sh -c node app
2021-06-05 17:33:20 npm ERR! path /app
2021-06-05 17:33:20 npm notice
2021-06-05 17:33:20 npm notice New minor version of npm available! 7.7.6 -> 7.16.0
2021-06-05 17:33:20 npm notice Changelog:
2021-06-05 17:33:20 npm notice Run `npm install -g[email protected]` to update!
2021-06-05 17:33:20 npm notice
2021-06-05 17:30:50 Server started at 0.0.0.0:80 ..
2021-06-05 17:30:46 >[email protected]start
2021-06-05 17:30:46 > node app | Node app docker image runs locally and fails on Amazon ECS |
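To illustrate the health-check fix from the answer above, one common pattern is to expose a dedicated route that always returns 200 and point the load balancer's health check at it (the route name and the ALB setting are assumptions, not from the original post):
// in app.js
app.get('/health', (req, res) => res.status(200).send('OK'));
Then set the target group's health check path to /health (or point it at any existing route that returns 200 OK).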
Some mix of @Krishas and @Hans Jespersen. Here is the code of my docker yml:version: '2'
services:
zookeeper:
image: wurstmeister/zookeeper:3.4.6
ports:
- 2181:2181
kafka:
image: wurstmeister/kafka:0.10.1.1
environment:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.10.10.10:9092
KAFKA_ADVERTISED_HOST_NAME: 10.10.10.10
KAFKA_ADVERTISED_PORT: 9092
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181That needs the "PLAINTEXT:// prefix !
And config the "host_name" + "port", or "listeners"The next step is decovery how i will configure another nodes | I´m starting with Apache Kafka and i´m facing problems when i try to conect from an external machine.With this configuration bellow, all works fine if the application and the docker are running at the same machine.but when i put the application in machine A and docker at machine B, the application cant connect.My spring Kafka @Configuration have this line to @Bean consumerFactory and producerFactory (imagine my machine with docker ip = 10.10.10.10)props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.10.10.10:9092");And my docker file is this:version: '2'
services:
zookeeper:
image: wurstmeister/zookeeper:3.4.6
ports:
- 2181:2181
kafka:
image: wurstmeister/kafka:0.10.1.1
environment:
KAFKA_ADVERTISED_HOST_NAME: 0.0.0.0
KAFKA_ADVERTISED_PORT: 9092
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_CREATE_TOPICS: "topic-jhipster:1:1,PROCESS_ORDER:1:1, PROCESS_CHANNEL:1:1"
JMX_PORT: 9999
KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.rmi.port=9999"
ports:
- 9092:9092
- 9999:9999
kafka-manager:
image: sheepkiller/kafka-manager
ports:
- 9000:9000
links:
- zookeeper
environment:
ZK_HOSTS: zookeeper:2181i get this error:org.springframework.kafka.core.KafkaProducerException: Failed to send;
nested exception is org.apache.kafka.common.errors.TimeoutException:
Expiring 1 record(s) forEdit, add some information..I think its any configuration about the zookeeper i´m missing .. because if i have only the zookeeper started at my machine A .. and the kafka in machine B.. that works.. i only don´t know how :( | cant connect to kafka from external machine |
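As a quick sanity check for the advertised-listener fix above, the advertised address must be reachable from the client machine, e.g. (address taken from the question):
nc -vz 10.10.10.10 9092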
When you run the command docker run -it -p 8080:8080 codercom/code-server --auth none locally, it means you add the parameter --auth none to the command in the link you provide. But when you run the CLI command with the parameter --auth none, the Azure CLI treats it as a parameter of the CLI command az container create, and that parameter is not supported by the CLI.So what you need to do is change the CLI command like this:az container create --resource-group learn-deploy-vsCode \
--name code-server \
--image codercom/code-server \
--command-line "/usr/local/bin/code-server --host 0.0.0.0 . --auth none" \
--ports 8080 \
--dns-name-label san-codeserver \
--location eastus | When I run docker locally"docker run -it -p 8080:8080 codercom/code-server --auth none"I am using --auth none argument, but how can i use this in azure container create commands.If I run normally like"az container create --resource-group learn-deploy-vsCode --name code-server --image codercom/code-server --auth none --ports 8080 --dns-name-label san-codeserver --location eastus" it is throwing error "az: error: unrecognized arguments: --auth none". | How can I pass container level arguments in azure container create |
The secrets definition in the docker-compose.yml file, as of version 3.3 of the file format, does not support passing the content of the secret inside the docker-compose.yml file itself. The secret needs to be either external (predefined with docker secret create secret_name -) or taken from the contents of a separate file.The syntax with an externally defined secret is:secrets:
my_first_secret:
file: ./secret_data
my_second_secret:
external: trueAnd the syntax for a separate file containing your secret is:secrets:
my_first_secret:
file: ./secret_data
my_second_secret:
external:
name: redis_secret | I need to store the server_key_rsa of my sftpServer in a docker-compose.yml but I don't know how to store itIt's look like that for now :-----BEGIN RSA PRIVATE KEY-----
***********************My Key bla bla bla.......
**********************************************
**********************************************
**********************************************
**********************************************
-----END RSA PRIVATE KEY-----And I would like to store it like that:server_key_rsa = Here should be the key.I tried with "|" just before my key, I tried to change my key file to Base64, I tried "\n" between lines, I tried "the\nrsa\nkey", but those solutions failed..Any idea please ? | How to store server_key_rsa in docker-compose.yml? |
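Putting the answer above together for this particular case, a sketch might look like the following (the file name and service name are made up):
secrets:
  server_key_rsa:
    file: ./server_key_rsa
services:
  sftp:
    image: my-sftp-image
    secrets:
      - server_key_rsa
The key then lives in its own ./server_key_rsa file next to docker-compose.yml, and the container reads it from /run/secrets/server_key_rsa at runtime.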
If you added a resource memory limit to each GKE Deployment, then when the memory limit was hit the pod was killed and rescheduled; it should restart, and the other pods on the node should be fine.You can find more information by running these commands:kubectl describe pod
kubectl top podsPlease note if you put in a memory request that is larger than the amount of memory on your nodes, the pod will never be scheduled.And if the Pod cannot bescheduledbecause of insufficient resources or some configuration error You might encounter an error indicating a lack memory or another resource. If a Pod is stuck in Pending it means that it can not be scheduled onto a node. In this case you need to delete Pods, adjust resource requests, or add new nodes to your cluster. You can find more informationhere.Additionally, as per thisdocument,Horizontal Pod Autoscaling(HPA) scales the replicas of your deployments based on metrics like memory or CPU usage. | I am using python flask in GKE contianer and moemory is increasing inside pod. I have set limit to pod but it's getting killed.I am thinking it's memory leak can anybody suggest something after watching this. As disk increase memory also increase and there are some page faults also.Is there anything container side linux os (using python-slim base). Memory is not coming back to os or python flask memory management issue ?To check memory leak i have added stackimpact to application.Please help...!
Thanks in advance | Kubernetes deployment high memory usage |
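For reference, memory requests/limits of the kind discussed in the answer above are set on the container spec of the Deployment, roughly like this (the numbers are placeholders):
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"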
There is no option in docker-compose to allow you to run a command after a container is started.What you can do is build your own image that will execute the actions you want on startup. To do this you need to:Find out the default startup of the container (the ENTRYPOINT and CMD combined).Create a shell script that will invoke the entrypoint with the desired parameters, after which it will invoke your commands.Create a Dockerfile that is based on the original image, copies the shell script into the image, and changes the entrypoint to your script.Add your image and Dockerfile to the docker-compose file (change the current consul image to point to your image and build script).Here is an example of an entrypoint shell script that can be used to kickstart your specific script. Place your code in the execute_after_start() function.entrypoint.sh#!/bin/bash
set -e
execute_before_start() {
echo "Execute befor start" > /running.txt
}
execute_after_start() {
sleep 1
echo "Execute after start" >> /running.txt
}
execute_before_start
echo "CALLING ENTRYPOINT WITH CMD: $@"
exec /old_entrypoint.sh "$@" &
daemon_pid=$!
execute_after_start
wait $daemon_pid
echo "Entrypoint exited" >> running.txtThe script will start theexecute_before_start. When this commands are over, will start the original entry point with the arguments provided withCMDand in parallel (this is the&at the end ofexecute) it will startexecute_after_start. Whenexecute_after_startis over, it will wait for the original entry point to stop.I usesleepin the example as a simples way to assure some delay so the entry point can take the commands. Depending on the entrypoint, there might be smarter ways to assure that the entrypoint is ready to take the commands. | I have a consul docker image which is a part of adocker-composeenvironment.I have to run the commandconsul acl bootstrapinside the docker container, I believe mentionining it incommandorentrypointwill override the default commands set for consul, how do I execute it in addition to the default commands? | How to run commands when a docker image runs? |
Since you didn't post your compose file, I am making a few assumptions. The assumed compose file is below:version: '3'
services:
nginx:
image: nginx
ports:
- 80:80
- 443:443
depends_on:
- jenkins
- sonar
jenkins:
image: jenkins
sonar:
image: sonarqubeAnd all of these run on10.10.10.50. Now if you set the DNS to10.10.10.20inside and outside, bothjenkins.network.comwill resolve to10.10.10.50. But inside the docker network you wantjenkins.network.comto resolved to the IP of the container.So if all above is correct then below is the simplest solutionversion: '3'
service:
nginx:
image: nginx
ports:
- 80:80
- 443:443
depends_on:
- jenkins
- sonar
jenkins:
image: jenkins
networks:
default:
aliases:
- jenkins.network.com
sonar:
image: sonar
networks:
default:
aliases:
- sonar.network.comOn the nginx image i can reachjenkins.network.comroot@be6492f18851:/# telnet jenkins.network.com 8080
Trying 172.23.0.3...
Connected to jenkins.network.com.
Escape character is '^]'.
Connection closed by foreign host.And you can do that from both jenkins and sonar containers and get the same resultsEdit-1If you want the DNS to go through proxy, you can change the aliases to that networkversion: '3'
service:
nginx:
image: nginx
ports:
- 80:80
- 443:443
depends_on:
- jenkins
- sonar
networks:
default:
aliases:
- sonar.network.com
- jenkins.network.com
jenkins:
image: jenkins
sonar:
image: sonar | I'll try to explain and draw this outWhat I want to achieve:Sorry for the crappy paint diagram. Right now, it works perfectly if I hit it from the 10.10.10.0 network. The problem is DNS resolves jenkins.network.com to the 10.10.10.0 network. I want to go back through the proxy though as that has SSL termination to get to the sonarqube server. Is there a good way to accomplish this to keep the services behind the proxy? Do I need to create a second DNS server with the docker network on it? Is this possible to do with consul to have both the external and internal services point to the same domain name?Edit:
Doing something like this would work, since everything goes through the proxies. So when jenkins hits sonar, it think's its ip really is 10.10.10.51 and it can hit it through there.What I need it to do:I need it to go out of the proxy, then come back in through the proxy. IE:172.16.10.2 ---- 172.16.10.1 ----- 10.10.10.50 ----- Proxy then takes over to route to proper location (172.16.10.3:8080 or something) | docker reverse proxy DNS/networking issues |
Ok, this works. I changed myDockerfile.devto the following:FROM node:alpine
WORKDIR '/app'
COPY ./shared /shared
COPY ./web /app
RUN npm install
CMD ["npm", "run", "start"]From the base project directory (where/sharedand/webreside), I run:docker build -t sockpuppet/client -f ./web/Dockerfile.dev . | I have an npm module I'm working on locally that is a dependency in a client app.Directory structure is basically the following:/app
/client
/src
App.js
package.json
Dockerfile.dev
/shared
/contexts
package.json
test.js
/hooksMypackage.jsonis the following:{
"name": "web",
"version": "0.1.0",
"private": true,
"dependencies": {
"contexts": "file:../shared/contexts",
"react": "^16.10.2",
"react-dom": "^16.10.2",
"react-scripts": "3.2.0"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
"eslintConfig": {
"extends": "react-app"
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}Importing with the following intoclient/src/App.js:import { testImport } from 'contexts/test';Works as expected when I runnpm start.The issue I'm running into is with running:docker build -t sockpuppet/testapp -f Dockerfile.dev .It fails and I get an error:npm ERR! Could not install from "../shared/contexts" as it does not contain a package.json file.Here is he Dockerfile.devFROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]How should I be handling local npm dependencies?Also, adding something like the following toCOPYthe/sharedinto the image generates aCOPY failed: Forbidden path outside the build context: ../shared/contexts ()error:COPY ../shared ./ | Local npm dependency "does not a contain a package.json file" in docker build, but runs fine with npm start |
The problem was caused by SELinux, which prevented Docker from accessing the file system.If someone has the same problem as in this post, here is how to check whether it's the same situation:1/ Check the SELinux status: sestatus. If the mode is enforcing, it may block Docker from accessing the filesystem.# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Memory protection checking: actual (secure)
Max kernel policy version: 312/ Change mode topermissive:setenforce 0. There should be no more restrictions on Docker. | I'm trying to start a Nginx container that serve static content located on the host, in /opt/content.The container is started with :docker run -p 8080:80 -v /opt/content:/usr/share/nginx/html nginx:alpineAnd Nginx keeps giving me 403 Forbidden. Moreover, when trying to inspect the content of the directory, I got strange results :$ $ docker exec -i -t inspiring_wing /bin/sh
/ # ls -l /usr/share/nginx/
total 4
drwxrwxrwx 3 root root 4096 Aug 15 08:08 html
/ # ls -l /usr/share/nginx/html/
ls: can't open '/usr/share/nginx/html/': Permission denied
total 0Ichmod -R 777 /opt/to be sure there are no restriction on the host, but it doesn't change anything. I also try to add:roflag to the volume option with no luck.How can I make the mounted volume readable by the container ?UPDATE : here are the full steps I done to reproduce this problem (as root, and with another directory to start from a clean config) :mkdir /public
echo "Hello World" > /public/index.html
chmod -R 777 /public
docker run -p 8080:80 -d -v /public:/usr/share/nginx/html nginx:alpine
docker exec -i -t inspiring_wing /bin/sh
ls -l /usr/share/nginx/htmlAnd this last command inside the container returns me :ls -l /usr/share/nginx/html. Of course, replaceinspiring_wingby the name of the created container. | Docker permission denied with volume |
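As an alternative to the setenforce 0 shown in the answer above (which relaxes SELinux system-wide), the volume can instead be relabeled for container access with the :z (shared) or :Z (private) mount option:
docker run -p 8080:80 -d -v /public:/usr/share/nginx/html:Z nginx:alpine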
Installing nodejs on top of the jenkins image is the way to go. Adding an instruction to install nodejs inside the Dockerfile is a standard thing to do in Docker when packaging dependencies.Adding nodejs (later) automatically at Jenkins build time is not a
good thing, because it slows the build process down.This is not always true. Docker builds use a cache for layers being created when building a Dockerfile. Thus if you install nodejs at the top of your Dockerfile, you will only have to wait once for the installation and the next build commands will just use the cache and there won't be any additional time required to install nodejs inside the Jenkins image.I would recommend that you install nodejs inside the jenkins image usingdocker multi-stage builds. Since there already exists aDocker image for node, you can use that to install node inside the jenkins image.FROM node as nodejs
FROM jenkins/jenkins
COPY --from=nodejs /usr/local/bin/node /usr/local/bin/nodeBy building the Dockerfile above, you will get an image with jenkins and node installed using the official node Docker image. | How can I (best) install/add nodejs permanently into a (Jenkins) docker image?The result is a docker image with both Jenkins and nodejs.The purpose is to install nodejs as a Global Tool in the Jenkins container. To achieve theinstallation folder of nodejshas to be known.I saw e.g. this solution, but what is the installation folder of Nodejs?RUN curl -sLhttps://deb.nodesource.com/setup_8.x| sudo -E bash && \
sudo apt-get install -y nodejsAdding nodejs (later) automatically at Jenkins build time is not a good thing, because it slows the build process down. | Install / add nodejs into (Jenkins) docker image permanently |
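An alternative sketch (not from the answer above): install Node.js with the distribution packages inside the Jenkins image, switching to root only for the installation; this also gives a predictable install location (usually /usr/bin/node or /usr/bin/nodejs, check with which node inside the container):
FROM jenkins/jenkins
USER root
RUN apt-get update && apt-get install -y nodejs npm && rm -rf /var/lib/apt/lists/*
USER jenkins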
Just solved the mystery. It's indeed related to the docker base image, and not to the build step.It'll work perfectly if I do:FROM debian
RUN apt-get update
RUN apt-get install -y ca-certificatesAs my goal is to use the alpine image, I'm using the following right now:FROM alpine
RUN apk --no-cache add ca-certificatesHope that helps someone with the same problem. For more information, see:http://blog.cloud66.com/x509-error-when-using-https-inside-a-docker-container/PS.: mgo (no reachable servers) error message was pointing me out in the wrong direction. | Depending on where my binary is being executed, I get different results on mgo Dial.Right now, I'm building on my machine (Fedora: uname -a: Linux localhost.localdomain 4.15.6-300.fc27.x86_64 #1 SMP Mon Feb 26 18:43:03 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux) using the following command:$ CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -ldflags '-s' -o myProgramSo, if I build my docker image using:FROM centos
COPY myProgram "/usr/local/bin/myProgram"
ENTRYPOINT ["/usr/local/bin/myProgram"]It works perfectly. That means I'm connected to the database. But, if I change to:FROM debian
COPY myProgram "/usr/local/bin/myProgram"
ENTRYPOINT ["/usr/local/bin/myProgram"]I'm gettingno reachable servers. My goal is to compile the application on gitlab-ci using the golang image, and run it on a alpine container.The question is:Why the same executable get different results on different base images?Does mgo (or go) use something related to the OS? I mean, it seems that my binary will run only on red hat based distribution (just a guess, it doesn't make much sense to me right now.)Dial source code:dialInfo := &mgo.DialInfo{
Addrs: config.Addr,
Database: config.Auth,
Username: config.User,
Password: config.Pass,
ReplicaSetName: config.ReplicaSet,
Timeout: time.Second * 10,
}
dialInfo.DialServer = func(addr *mgo.ServerAddr) (net.Conn, error) {
return tls.Dial("tcp", addr.String(), &tls.Config{})
}
session, err := mgo.DialWithInfo(dialInfo)
if err != nil {
log.Fatal(err.Error())
} | No reachable servers on static linked go binary |
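Building on the fix above, a common pattern is a multi-stage build so the certificates are always present in the final image; a rough sketch (paths and Go version assumed):
FROM golang:1.20 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /myProgram
FROM alpine
RUN apk --no-cache add ca-certificates
COPY --from=build /myProgram /usr/local/bin/myProgram
ENTRYPOINT ["/usr/local/bin/myProgram"]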
The sudo command, because it is designed as a tool for privilege escalation, intentionally sanitizes the environment before switching to a new user id. If you take a look at the sudo man page, you'll find:-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their existing
environment variables. The security policy may return an error if the user does not
have permission to preserve the environment.So instead ofsudo -u appuser somecommand, just usesudo -E -u appuser somecommand.Therunusercommand is provided by theutil-linuxpackage in recent versions of Ubuntu, and does not perform any environment initialization by default. For example:$ docker pull ubuntu
$ docker run -it --rm ubuntu /bin/bash
root@ded49ffde72e:/# runuser --help
Usage:
runuser [options] -u
runuser [options] [-] [ [...]]
[...]This is with Ubuntu Xenial (but therunusercommand also appears to be available onubuntu:vivid, butis notavailable underubuntu:trusty).So your options are:Usesudo -E, orUse a more recent Ubuntu image | In my docker container I am running a command as a specific user like this fromentrypoint.sh:sudo -u appuser "$@"This works fine, however, it doesn't set any of the environment variables that get created by using the--linkoption while running the container.QuestionIs it possible to set all environment variables that exist for a root user to some other specific user (in this exampleappuser)Note: related question to this discussion. This is the reason I can't just use theUSERcommandHow to give non-root user in Docker container access to a volume mounted on the host | How to set copy all environment variables from root user to another specific user |
In the case of docker volumes, you don't have control over where docker saves its volumes; all you can do is change the docker root directory. So it's better to mount your new partition under a directory and then change the docker root directory to this mount point; this way you can achieve what you want. You should also consider that by doing this, all of your docker data will be stored on this new partition.To change your docker root directory, first create a file named daemon.json at the path below:/etc/docker/daemon.jsonand then add the config below to it:{
"data-root": "/path/to/new/directory"
}then restart docker daemon:systemctl restart dockerthen you can run command below to check current docker root directory:docker info | I have a Docker container running on my PC. The main functionality of the container is to scrape data, and this accumulates 0.3GB/day. I'll only be needing this data for the last 30 days, and after this I plan to store it archived on Hard Disk Drives for historical purposes. However after few hours of trials and errors, I've failed to create a Docker Volume on another partition, and the_datafolder always appears in the/var/lib/docker/volumes/folder, while the partition drive is always empty.I also tried creating the volume withdocker run -v, but it still creates the volume in the main volumes folder.The operating system isPop!_OS 20.04 LTSI'll provide data about the partition:I'll provide data about the partition: | Docker Named Volume on another Partition on another hard drive |
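If existing images and volumes should survive the move, the usual sequence (an assumption about your setup; double-check paths before running) is to copy the old data root to the new partition before restarting the daemon:
systemctl stop docker
rsync -aP /var/lib/docker/ /path/to/new/directory/
# edit /etc/docker/daemon.json as shown above, then:
systemctl start docker
docker info | grep 'Docker Root Dir'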
Thedocker inspect -foption uses the Gotext/templatelanguage, with fairly few extensions. I don't think it's directly possible to print only the first network name, but it is possible to print out all of the network names and no other details. The trick here is to iterate over thatNetworksobject as a map and print out the keys:docker inspect jolly_hodgkin \
-f '{{range $k, $v := .NetworkSettings.Networks}}{{printf "%s\n" $k}}{{end}}'If you have a purpose-built command-line tool for JSON manipulation (likejq) its query language might be more powerful and more suited for the data manipulation you need. Ajqinvocation to specifically get the name of the first Docker network might look likedocker inspect jolly_hodgkin \
| jq -r '.[].NetworkSettings.Networks | keys | first' | I have command like this:$ docker inspect reacthublh_mysql_1 -f "{{json .NetworkSettings.Networks }}"which extract for me output:{"reacthublh-network":{"IPAMConfig":null,"Links":null,"Aliases":["1b905711e127","mysql"],"NetworkID":"d2b6bd4815a2eb48a57d05e5d219894f453c15e3f8b5a331a5f0668ed98f4730","EndpointID":"c71240571cc1cfb7bd50119aaf6aaef3dfbc2dc56732e0fd6f593ebe00861edc","Gateway":"172.30.0.1","IPAddress":"172.30.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:1e:00:02","DriverOpts":null}}the question is how to extract onlyreacthublh-networkwhich is first key of json object?UPDATE:The only way that I found now is:$ docker inspect reacthublh_mysql_1 -f "{{json .NetworkSettings.Networks }}" | cut -d '"' -f2which outputs exactly what I need but I'm curious if it's still possible to do it in --format parameter | How to get first key from object using docker inspect --format (get name of network of container) |
Docker network drivers have no IGMP/PIM support, so you should really establish a direct Layer 2 connection from the container to the physical switch/router.As you have found out yourself, docker's default bridge network will not help you here.I haven't tested it with multicast, but you should be able to achieve that withPipework.macvlan drivershould help you with your problem, but is currently experimental as of Docker Engine 1.11 | I have an application that sends messages over UDP multicast that I've been attempting to put under docker. I've been running into much headwind trying to send multicast packets from a docker container.I have been able to send messages through the--net=hostoption on running the docker container. I would, however, like to stick with a bridge configuration.I would like to get some insight in what needs to be done in order to publish messages through the standard docker bridge configuration. I'm attempting to publish messages on239.9.60.250with port16000. I have tried publishing udp port16000through the following argument ondocker run.-P 0.0.0.0:16000:16000/udpThis doesn't give me any change in behavior and my host doesn't see any multicast traffic. | Sending Multicast Packets from Docker Container (to multicast group) |
See the docshere:Eachrunkeyword represents a new process and shell in the runner environment. When you provide multi-line commands, each line runs in the same shell.This means that the working directory isn't persisted after thecdstep. Yourlsstep works because you explicitly set the working directory for it.You have tocdin the same run step as the build command:- name: Build docker image
run: |
cd app
docker build . -t app_name -f DockerfileOr you could set a working directory:- name: Build docker image
working-directory: ./app
run: docker build . -t app_name -f DockerfileOr you can give docker the path to your dockerfile:- name: Build docker image
run: docker build app -t app_nameThe default for-fisPATH/Dockerfile, wherePATHisappabove. | We intend to use Git Actions to build our Docker on every commit.This is our current Git Actions yml:# This is a basic workflow to help you get started with Actions
name: CI
# Controls when the workflow will run
on:
push:
branches:
- '**'
pull_request:
branches:
- '**'
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
docker-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Navigate to app folder
run: cd app
- name: Open Directory
working-directory: app
run: |
ls -la
- name: Build docker image
run: docker build . -t app_name -f DockerfileThe error I get is:unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/runner/work/git-root/app/Dockerfile: no such file or directoryBut in myls -lai see the Dockerfile is present:total 48
drwxr-xr-x 4 runner docker 4096 Sep 15 13:03 .
drwxr-xr-x 6 runner docker 4096 Sep 15 13:03 ..
-rw-r--r-- 1 runner docker 93 Sep 15 13:03 .env-template
-rw-r--r-- 1 runner docker 655 Sep 15 13:03 DockerfileI have tried:Using bothactions/checkout@v1andactions/checkout@v2cd into the directory with the Dockerfilesetting Dockerfile directory to working-directoryWhy does not the docker build find my Dockerfile? | Build Docker image using GitHub Actions: No such file or directory |