Containerization adoption is spreading fast across industries and is set to accelerate further through 2022. Many enterprises are moving away from their existing monoliths in favor of container-based stacks because containers package applications in an easily portable, software-defined environment.
When coupled with an orchestration tool such as Kubernetes or Docker Swarm, containerization offers further benefits, including lower infrastructure operating costs, flexible routing between services, and scaling at the individual microservice level.
As with any other advancement in application development, containerization brings its own set of security challenges for the IT department to manage. Chief among emerging container risks are kernel-level threats: because every container shares the host's kernel, a single kernel vulnerability can expose all containers on the machine at once.
However, most of these risks can be reduced significantly if they are managed early, during the deployment process. With that in mind, let's explore several key practices that boost container security when baked into the deployment phase.
Practical Steps for Secure Container Deployment
1. Manage privilege flags
Privileged mode is among the most powerful features in Docker containerization. At the most basic level, it makes running Docker inside Docker possible by giving a container nearly all of the host machine's capabilities.
Allowing containers to have all the host machine's capabilities is similar to giving unrestricted administrative powers to every user on a server, and it's well established that this is rarely good security practice.
There are instances where a particular container needs direct hardware access or additional privileges to perform its task. In general, though, privileged containers are not recommended, for the safety of your architecture.
While containers have undeniable application-security benefits over VMs, flagging a container to run with extra privileges makes it a possible attack vector in itself. A misconfigured privileged container becomes an easy avenue for attackers to compromise the host and spread malicious code.
Docker containers are unprivileged by default, and it's advisable to keep them that way. Instead of giving them unrestricted access to the host, consider granting granular access and capabilities within the container environment.
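As a sketch of that granular approach, a Kubernetes pod spec can drop every capability and add back only what the workload actually needs (the names, image, and capability below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                          # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25              # illustrative image
      securityContext:
        privileged: false            # the default; avoid flipping it to true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]              # start from zero capabilities
          add: ["NET_BIND_SERVICE"]  # add back only what's needed (bind port 80)
```

With plain Docker, the same idea is expressed as `docker run --cap-drop ALL --cap-add NET_BIND_SERVICE ...`.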
2. Deploy static analysis and unit-testing tools in your containers
Developers are under pressure to deliver quality projects on time while meeting coding and compliance standards. While containerization offers fast and efficient delivery, mistakes still have to be caught as early as possible. That's why a static code analysis tool should be at the heart of any container-based project.
Static analysis is a method of automatically examining an application's source code without running the program. The analysis is performed in the early development stages, or the "create" phase for organizations that practice DevOps.
One of the major benefits of static code analysis is early feedback on your progress. It gives you timely insight into every completed piece of functionality, letting you know whether there's a flaw that could lead to a security vulnerability or a crash.
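To make the idea concrete, here is a minimal sketch of what a static-analysis check does; a real project would use a dedicated tool (pylint, Bandit, SonarQube, and similar), but the toy checker below uses Python's `ast` module to flag string literals assigned to suspicious variable names, without ever executing the code it inspects:

```python
import ast

# Names that commonly indicate a hardcoded secret (illustrative list)
SUSPICIOUS_NAMES = {"password", "secret", "api_key", "token"}

def find_hardcoded_secrets(source: str):
    """Return (line, variable) pairs where a suspicious name is assigned
    a string literal -- a classic static-analysis check."""
    findings = []
    tree = ast.parse(source)  # parse only; the inspected code never runs
    for node in ast.walk(tree):
        if (isinstance(node, ast.Assign)
                and isinstance(node.value, ast.Constant)
                and isinstance(node.value.value, str)):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id.lower() in SUSPICIOUS_NAMES:
                    findings.append((node.lineno, target.id))
    return findings

sample = "API_KEY = 'abc123'\nport = 8080\n"
print(find_hardcoded_secrets(sample))  # -> [(1, 'API_KEY')]
```

Because the check runs on the source text alone, it can sit in a CI step or a container build stage and fail the build before a flawed image is ever created.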
3. Set container resource limits
Containers have no resource limits by default, and you're not required to set any. However, doing so is a critical safety practice, especially if you're running your containers on a shared host or an orchestration platform like Kubernetes.
If you don't set limits for your containers, they may end up consuming all the available resources on the host, including CPU, RAM, and I/O. When that happens, the kernel may start killing processes to reclaim memory (the out-of-memory killer). This creates a loophole that malicious attackers can exploit: overloading a single container can starve and bring down every app on the host.
If the machine is hosting multiple containers, it's advisable to specify how much RAM and CPU each container can use. If a container exceeds its memory limit, only that container is shut down. While you don't want any of your containers to shut down, at least you're assured that the host won't run out of memory and cause multiple containers to crash.
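The limits themselves are cheap to set. With plain Docker, `docker run --cpus=0.5 --memory=256m ...` caps a container at half a CPU core and 256 MB of RAM; in a Kubernetes pod spec the equivalent looks like this (names and values are illustrative):

```yaml
containers:
  - name: api                # illustrative name
    image: my-api:1.0        # illustrative image
    resources:
      requests:              # what the scheduler reserves for the container
        cpu: "250m"          # 0.25 of a CPU core
        memory: "128Mi"
      limits:                # hard caps; exceeding the memory limit kills the container
        cpu: "500m"
        memory: "256Mi"
```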
4. Mind third-party image safety
When you pull containers from public repositories, you're trusting a third party with your entire project's security. The problem is that you can't be sure the original authors were deliberate about the security of their images.
Additionally, there's the risk that a corrupt or malicious file inside the image goes undetected until it's too late. That's why it's hard to overstate the need to use trustworthy images only.
One surefire way of getting trustworthy images is using a paid service, such as a Docker Hub paid plan. Such a service should give you confidence that the images you're pulling have been scanned for safety and won't increase your attack surface.
Another worthy recommendation is to prefer popular official images, including Python, Ubuntu, Redis, Alpine, and BusyBox. Docker says it sponsors a team of security experts and upstream software maintainers to ensure these images' security and reliability.
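One concrete habit that reinforces this: pin the image you build on to an explicit version tag, or better, to an immutable digest, so a silently updated `latest` can never slip into your build. A sketch (the digest is a placeholder, not a real value):

```dockerfile
# Risky: "latest" can change underneath you at any time
# FROM python:latest

# Better: an explicit version tag of an official image
FROM python:3.12-slim

# Best: an immutable digest, which identifies exactly one image
# (look it up with `docker images --digests`; placeholder shown)
# FROM python@sha256:<digest>
```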
5. Keep your secrets safe
So far, you've managed container privileges, deployed static analysis, and ensured the safety of your images. The next major consideration is keeping your sensitive information secret.
In containerization, any secret value, such as an API key, password, or access token, will eventually get into the container in one of two common ways. First, you could embed the secret in the code itself. Second, you could build it into the container image using Docker.
The problem with these methods is that anyone with access to the code or the image can read the secret, which is a bad idea. A second reason not to embed secrets in the container image is that it makes future changes to them complicated. For instance, should you want to rotate a password, you'd need to rebuild and redeploy the entire image.
The safest method of passing secrets to your container is the volume-mount approach. In this technique, the container reads each secret value it requires from a file mounted from another location. Kubernetes and other container orchestration tools have built-in secret storage for this purpose. If you're building your project on cloud services such as Google Cloud, AWS, or Azure, there's also the option of an encrypted storage component (a managed secrets service) to hold your secrets.
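As a minimal sketch of the reading side, assume the orchestrator mounts each secret as a file under `/run/secrets` (the convention Docker Swarm uses; the directory and the secret name below are illustrative):

```python
from pathlib import Path

def read_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    """Read a secret mounted as a file, so it never lives in the
    image layers or the source code."""
    secret_file = Path(secrets_dir) / name
    if not secret_file.is_file():
        raise FileNotFoundError(f"secret {name!r} not mounted at {secret_file}")
    return secret_file.read_text().strip()

# Usage (assumes the orchestrator mounted a 'db_password' secret):
# db_password = read_secret("db_password")
```

Rotating a secret then becomes a matter of updating the mounted file, with no rebuild or redeploy of the image required.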