Top 3 Things to Avoid When Using Containers
Published July 29th, by Srdjan Grubor

When talking about increasing development velocity for your teams, containers are at the forefront of the conversation about modern development practices. By using them as deployment artifacts, developers can be sure that their code works on the target platforms, and security personnel can rest assured that the application code is much less likely to permanently affect the machines it was deployed to.
As with all technological advancement, any new tool introduced into your environment also means new security-related worries. With that in mind, in the following sections we will cover the top 3 pitfalls that you should keep an eye out for when working with containers.

Mounting the Docker Daemon Socket into the Container

Even though the Docker Engine is slowly being phased out in many of the orchestration engines, for the foreseeable future it will still be the dominant container runtime.
Unfortunately, it is also at the center of what is probably the most egregious and pervasive security omission seen in the wild: mounting the Docker daemon socket into a container. By doing this, you indirectly give root-level host permissions to that container.
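To make the pattern concrete, here is a sketch of what to watch for; the image name is purely a placeholder, and the command is only echoed here rather than executed:

```shell
# The risky pattern: handing the host's Docker socket to a container.
# Anything inside the container can then drive the Docker daemon and,
# through it, the host (e.g. by starting a privileged container).
# "some/unaudited-image" is a placeholder, not a real image.
SOCKET_MOUNT="-v /var/run/docker.sock:/var/run/docker.sock"
echo "docker run $SOCKET_MOUNT some/unaudited-image"
```

If you see that `-v /var/run/docker.sock:...` flag on a container you did not audit, treat it as host-level access.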
While this practice is generally dangerous, it does have some rare valid uses in container-management platforms. Given its potential for misuse, though, you should always ensure that the images you run with this type of configuration are ones that you have audited and that you trust, to avoid potential breaches.
Neglecting Your Container Images

The second biggest security lapse usually involves outdated dependencies and tools within the container images themselves. With unmaintained apps, infrequent builds, and overzealous image layer caching, there is a very high risk that the images you create, or even the base images you depend on, are stale.
These images directly increase the risk of having your containers compromised by attackers using the latest exploits against the unpatched and outdated libraries that your containers use.
Even though your hosts are secured from your container by kernel protections to some extent, the data your container has access to, as well as its processing resources, can still be hijacked to suit the attacker's needs. Due to this pervasive attack vector and its potential to create a jump point for host compromises, you should ensure that you: continuously update your images (including the ones you use as bases for other images), frequently update your code dependencies, and occasionally invalidate the container build cache.
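One low-effort way to act on all three points at once is to make your builds always pull fresh base images and periodically skip the layer cache; a sketch (the image and tag names are illustrative):

```shell
# Rebuild an image while refusing stale inputs:
#   --pull      always re-fetches the base image instead of using a cached copy
#   --no-cache  ignores cached layers so RUN steps (e.g. apt-get upgrade) re-run
BUILD_CMD="docker build --pull --no-cache -t myorg/myapp:latest ."
echo "$BUILD_CMD"
```

Running that variant on a schedule (rather than on every build, which is slow) is a reasonable compromise between build speed and freshness.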
Use of Unverified Images

The third (but by no means the last!) pitfall is the use of unverified images. By carefully using only verified images from the author(s) of the tool you need and auditing the image layers, you can in most cases avoid this type of attack. However, you can really only reduce the risk, not eliminate it completely, as image tags are mutable and uploader credentials are liable to compromise.
For example, with only a few lines in a Dockerfile you can have a fully functioning OpenSSL CLI based on almost any of the big verified distributions, yet sometimes you will see someone opt for an image that is not from a known organization and could contain any number of problems, both intentional and not, in tooling that is meant to secure your infrastructure.
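A minimal Dockerfile along those lines might look like this; the choice of Debian as the verified base is ours, any official distribution image works the same way:

```dockerfile
# Build on a verified official base instead of an unknown third-party image
FROM debian:stable-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends openssl && \
    rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["openssl"]
```

Those few lines give you the same OpenSSL CLI an unverified image would, but with a supply chain you can actually account for.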
Are the keys such an image creates safe? Keep this in mind when you choose what images to include in your infrastructure.

Final Thoughts

Containerization has been taking over the tech world as the next step in improving both cycle times and the security of your infrastructure, but relying on containerization alone to provide all of your security, without being careful, is usually a recipe for breaches.
Hopefully, this short list of common issues will help your organization improve its security posture when using these new technologies. This is by no means an exhaustive list, but rather an attempt to make you aware that every new tool has its quirks and problems, and containers are no exception. If you decide to embark on this new paradigm, you should make sure to keep informed about what new attack vectors this technology adds and how to avoid them.
Srdjan Grubor enjoys breaking things just to see how they work, tinkering, and solving challenging problems.
Besides the 2x20 GPIO pin extension, the Raspberry Pi also needs a small 2x2 pin extension, and it took a while to find the needed parts in the right size. This way the fans just spin at full speed whenever the Raspberry Pi is turned on. I am aware that there are much more elegant ways to deploy and install on multiple devices at once (hello, Ansible!).
So we need to download the latest Raspbian OS Lite image from the official website and flash it to an SD card with our favorite tool, Etcher.
Also make sure you have an empty "ssh" file in the boot partition to enable SSH access right away. Now get the IP address of your Pi 4 and SSH into it. First make sure that you have the absolute latest updates and firmware for the Pi. Confirm the changes and, when the tool asks you, reboot. In my case the official Pi image didn't boot, so I downloaded an image from an Ubuntu forum thread to be able to boot directly into Ubuntu Server.

Update Ubuntu, set hostname, timezone and static IP first

Now turn the Pi back on and it should boot directly from your USB SSD flash drive. SSH into it with ssh ubuntu@<IP address>. The password on first boot is ubuntu.
It will prompt you right away to change it to your own personal password. You get disconnected after successfully changing it and need to reconnect, this time with your newly set password. We want to name our Pis in a consistent scheme like pi-cluster-1 and so on:

sudo hostnamectl set-hostname pi-cluster-1

Verify your changed hostname with hostnamectl. Now we change the time zone.
To get a list of time zones you can use timedatectl list-timezones.

Set up the firewall and harden the server

First we harden our SSH login by disallowing root login and changing the default port. We start with our first node, pi-cluster-1, which we want to be the first manager and leader. On pi-cluster-1 we take its fixed IP address and use it with the command below to initialize the swarm.
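A sketch of that SSH hardening step, run here against a throwaway copy of the config so it is safe to execute anywhere. On the Pi you would edit /etc/ssh/sshd_config itself and restart sshd, and the port 2222 is only illustrative:

```shell
# Work on a scratch copy so this sketch never touches a real sshd config
CFG=$(mktemp)
printf 'PermitRootLogin yes\nPort 22\n' > "$CFG"   # stand-in for the stock file

# Disallow root logins and move SSH off the default port
sed -e 's/^PermitRootLogin.*/PermitRootLogin no/' \
    -e 's/^Port.*/Port 2222/' "$CFG" > "$CFG.hardened"

cat "$CFG.hardened"
# on the real host afterwards: sudo systemctl restart ssh
```

Remember that after changing the port you must also allow it through the firewall and connect with ssh -p 2222 from then on.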
In our case it's:

sudo docker swarm init --advertise-addr <fixed IP of pi-cluster-1>

Best is to copy and save the join command this prints for now. Don't copy the example from any guide, though: you have to use your OWN generated token command! To join a node as a manager we also need a manager token. Enter this on pi-cluster-1:

sudo docker swarm join-token manager

This will give you a token command similar to the one we already got for our workers.
Copy and save this as well, along with your individual manager token. Now you can SSH into pi-cluster-2 and use the saved commands to add it to the cluster (again: your own token command, not the example!). You now have two nodes joined together as managers! Repeat the join on the remaining Pis. Let's verify the cluster by entering sudo docker node ls again on one of the nodes. Our four-node Docker Swarm cluster is up and running!
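Put together, the whole join dance looks roughly like this; the IPs and tokens are of course your own, so the commands are shown as comments rather than run:

```shell
# On pi-cluster-1 (first manager) -- <ip> stands for its fixed address:
#   sudo docker swarm init --advertise-addr <ip>
#   sudo docker swarm join-token worker    # prints the worker join command
#   sudo docker swarm join-token manager   # prints the manager join command
# On each additional Pi, paste the printed command, e.g.:
#   sudo docker swarm join --token <token> <ip>:2377
# Back on any manager, verify the membership:
#   sudo docker node ls
JOIN_PORT=2377   # Swarm's default cluster-management port
echo "swarm joins happen over TCP port $JOIN_PORT"
```

Port 2377 must be reachable between the nodes, so keep it in mind when configuring the firewall.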
Set up NFS shared storage for the cluster

To have persistent data available to every node and container, we need shared storage that can be accessed by all our nodes.

Deploy Traefik

To route and secure our services we use Traefik. It will automatically secure all the services and applications running in the cluster with a generated wildcard SSL certificate. To deploy Traefik to our Docker Swarm cluster we need to do some prework.

Prerequisites

First create a local DNS entry for the domain you want to use for your local services.
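On the client side, each node would mount the share at boot via /etc/fstab; a sketch where the server IP and export path are placeholders, and the NFS server setup itself is out of scope here:

```shell
# Line each node would add to /etc/fstab to mount the shared export.
# 192.168.1.10:/export/cluster is a placeholder for your NFS server.
FSTAB_LINE='192.168.1.10:/export/cluster  /mnt/cluster  nfs  defaults  0  0'
echo "$FSTAB_LINE"
# then, on each node:
#   sudo apt-get install nfs-common
#   sudo mkdir -p /mnt/cluster
#   sudo mount -a
```

With the same mount point on every node, containers can be rescheduled to any node and still find their data.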
I use Pi-hole for that and point all docker service domains in Pi-hole to the IP of my Docker Swarm — so, for example, the domain traefik.<your domain>. Now we have to pre-generate a hashed password and user for the Traefik web interface to be accessible.
You could do this with environment variables, but for ease of use and better understanding in this guide I don't use these more advanced setup techniques.
Instead we will write all of the variables into our traefik-swarm-compose file. So let's generate the basic auth password; replace "password" with whatever password you want to use, in plain text. You also need your Cloudflare API key, which you find under "My Account" behind the top-right user icon in your Cloudflare dashboard. Next we edit the traefik configuration file. You should now also be able to access the Traefik dashboard via your browser.

Deploy Portainer to manage our cluster

For easier browser-based management of our cluster we install portainer-agent and portainer-ce.
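One way to generate that hash, assuming an admin user (any htpasswd-compatible hash works; openssl is used here so no extra package is needed). Note the doubled $ signs in the second variant: in a compose file a literal $ must be escaped as $$:

```shell
# Create an htpasswd-style (apr1/MD5) hash for Traefik basic auth.
# "admin" and "password" are placeholders -- use your own values.
HASH=$(openssl passwd -apr1 "password")
echo "admin:${HASH}"

# Escaped variant for pasting into a docker-compose/stack file:
echo "admin:${HASH}" | sed -e 's/\$/\$\$/g'
```

Paste the escaped line into the basic-auth label of your Traefik service.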
For this we SSH into pi-cluster-2, a manager. Because we want to use our previously created shared NFS storage as a persistent volume for Portainer, we have to create a Portainer data folder on the NFS share and also edit the default portainer-agent-stack file. It is important to change these variables before you save and exit: under labels, the traefik.* entries (notably the host rule for your domain). Now we are ready to deploy Portainer onto our cluster using the portainer-swarm-compose file.
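Deployment itself is then one command on a manager node; the file and stack names follow the ones used above, and the commands are guarded so this sketch is a no-op on a machine without a Docker daemon:

```shell
# Deploy the Portainer stack from its compose file on a Swarm manager.
if command -v docker >/dev/null 2>&1; then
    sudo docker stack deploy -c portainer-agent-stack.yml portainer
    sudo docker stack services portainer   # check that the services come up
fi
STACK_NAME=portainer
echo "deploy sketch for stack: $STACK_NAME"
```

docker stack deploy is idempotent, so re-running it after editing the compose file updates the stack in place.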
After that you can see and manage your swarm, and inside it the already created traefik and portainer services.

Appendix 1: Completely remove Docker

To completely uninstall Docker, first identify which Docker packages are installed:

dpkg -l | grep -i docker

Then purge them:

sudo apt-get purge -y docker-engine docker docker.io
Appendix 2: Install fail2ban fail2ban is a UNIX service daemon that automatically detects malicious behaviour and bans offenders by updating the firewall rules.
Ignore any IP addresses that I own and that do not change often, like my home IP address and the server itself, and enable the jail for sshd.
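A minimal jail.local capturing both points; the ignored addresses are placeholders for your own, and it is written to a temp file here, while on the server it belongs at /etc/fail2ban/jail.local:

```shell
# Write a minimal fail2ban override file (placeholder IPs throughout)
JAIL=$(mktemp)
cat > "$JAIL" <<'EOF'
[DEFAULT]
# never ban these: localhost, the server itself, my home IP (placeholders)
ignoreip = 127.0.0.1/8 192.168.1.2 203.0.113.7

[sshd]
enabled = true
EOF
cat "$JAIL"
# on the real host afterwards: sudo systemctl restart fail2ban
```

If you moved SSH off port 22 earlier, also add a port = line to the [sshd] section so fail2ban watches the right port.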
Docker: Working with local volumes and tmpfs mounts
Running docker volume returns usage help for the available volume-management commands. [screenshot: docker volume command usage] Let's start with creating a new volume; I'm going to go ahead and create one called logdata.
[screenshot: listing of Docker volumes] Similar to the bind mount, we can use the --mount flag to mount the volume into a new container called volume1.
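The commands behind those steps look roughly like this; the alpine image is our choice, and everything is guarded so the sketch is harmless on a machine without a Docker daemon:

```shell
if command -v docker >/dev/null 2>&1; then
    # create the named volume and list volumes to confirm it exists
    docker volume create logdata
    docker volume ls
    # mount it into a new container called volume1
    docker run -d --name volume1 \
        --mount source=logdata,target=/var/log \
        alpine sleep 600
fi
VOLUME_NAME=logdata
echo "volume sketch for: $VOLUME_NAME"
```

If logdata does not exist yet, docker run creates it on the fly, so the explicit docker volume create is mostly for clarity.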
[screenshot: empty logdata volume mounted inside a new container] Let's create a new file in the mounted directory called LogFile. Both the volume1 container and a second container, volume2, mounted the same way, now have access to write to and read from the logdata volume.
Now I'm going to run docker inspect against a container named bindmount to look at how the bind mount is represented in its configuration data, and then compare that to running docker inspect on the volume1 container. [screenshot: volume mount returned by docker inspect] You'll notice the source for both mounts contains a physical path to a directory on the container host's filesystem; the difference is that Docker completely manages the volume's source.

Sometimes you need a shared directory or volume that you can mount in multiple Docker containers so that all of them have shared access to a particular file or directory.
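The comparison is easiest with a Go-template filter so that only the mount data prints; the container names follow the ones above, and the commands are guarded so the sketch is a no-op without a Docker daemon:

```shell
if command -v docker >/dev/null 2>&1; then
    # print only the Mounts section of each container's config
    docker inspect -f '{{json .Mounts}}' bindmount
    docker inspect -f '{{json .Mounts}}' volume1
fi
FILTER='{{json .Mounts}}'
echo "inspect filter: $FILTER"
```

For the bind mount the Source is the host path you chose; for the volume it is a path under Docker's own data directory (typically /var/lib/docker/volumes/...).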
Docker allows you to mount shared volumes in multiple containers. In this article, we will mount a volume into different containers and check whether changes to a file are shared among all the containers.
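A sketch of that experiment; the names and the alpine image are our choices, and the whole thing is guarded so it is harmless on a machine without a Docker daemon:

```shell
if command -v docker >/dev/null 2>&1; then
    docker volume create shared-data
    # two containers mounting the same named volume
    docker run -d --name app1 -v shared-data:/data alpine sleep 600
    docker run -d --name app2 -v shared-data:/data alpine sleep 600
    # write a file from one container, read it from the other
    docker exec app1 sh -c 'echo hello > /data/test.txt'
    docker exec app2 cat /data/test.txt
fi
SHARED_VOLUME=shared-data
echo "shared-volume sketch: $SHARED_VOLUME"
```

If the volume is truly shared, the cat in app2 returns the text written by app1, confirming that both containers see the same underlying directory.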