What are containers?
Containers represent a transformational change in the way applications are built and run. A container image packages an application and all of its dependencies into a single artifact that can be promoted from development to test to production without change. Containers therefore provide consistency across environments and across multiple deployment targets: physical servers, virtual machines (VMs), and private or public clouds.
Containers also enable customers to simplify multi-tenant deployments by running multiple applications on a single host, using the shared kernel and the Docker runtime to spin up each container in isolation.
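As an illustrative sketch of that multi-tenancy model (the image names and ports here are hypothetical), two unrelated applications can share one host, each isolated in its own container:

```shell
# Two tenant applications on the same host, each in its own container.
# Kernel namespaces and cgroups keep them isolated from one another.
docker run -d --name tenant-a -p 8080:8080 example/tenant-a-app:1.0
docker run -d --name tenant-b -p 9090:8080 example/tenant-b-app:2.3

# Each container gets its own filesystem, network stack, and process tree:
docker exec tenant-a ps aux   # shows only tenant-a's processes
```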
Container Threat Scenario Examples
According to NIST SP 800-190, the Application Container Security Guide, the following are example threat scenarios for containers:
- Exploit of a Vulnerability within an Image
- Exploit of the Container Runtime
- Running a Poisoned Image
It is critically important to plan carefully before installing, configuring, and deploying container technologies. Considering security across the container technology life cycle helps designers prioritize the controls needed to secure containers.
How to Secure Containers
Cloud Foundry secures containers through the following measures:
- Running application instances in unprivileged containers by default
- Hardening containers by limiting functionality and access rights
- Only allowing outbound connections to public addresses from application containers. This is the original default; administrators can change this behavior by configuring Application Security Groups (ASGs)
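For example, an administrator can widen or restrict outbound access by defining an ASG. The rules below are a hedged sketch of the ASG JSON rule format; the destination subnet and description are hypothetical:

```json
[
  {
    "protocol": "tcp",
    "destination": "10.0.1.0/24",
    "ports": "5432",
    "description": "Allow outbound PostgreSQL to the internal services subnet"
  }
]
```

Such a rules file would then be registered and bound with the cf CLI, e.g. `cf create-security-group` followed by `cf bind-security-group`.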
The OpenShift Container Platform and Kubernetes APIs authenticate users who present credentials, and then authorize them based on their role. Both developers and administrators can be authenticated via a number of means, primarily OAuth tokens and SSL certificate authorization.
Developers (clients of the system) typically make REST API calls from a client program such as oc, or to the web console via their browser, and use OAuth bearer tokens for most communications. Infrastructure components (like nodes) use client certificates generated by the system that contain their identities. Infrastructure components that run in containers use a token associated with their service account to connect to the API.
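The bearer-token flow can be sketched with curl; the API host and project name below are placeholders, and this assumes an existing oc login session:

```shell
# Print the current user's OAuth bearer token, then call the REST API directly.
TOKEN=$(oc whoami -t)
curl -k -H "Authorization: Bearer $TOKEN" \
  https://api.example.openshift.local:6443/api/v1/namespaces/my-project/pods
```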
Authorization is handled in the OpenShift Container Platform policy engine, which defines actions like “create pod” or “list services” and groups them into roles in a policy document. Roles are bound to users or groups by the user or group identifier. When a user or service account attempts an action, the policy engine checks for one or more of the roles assigned to the user (e.g., cluster administrator or administrator of the current project) before allowing it to continue.
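A role of this kind can be sketched as a standard Kubernetes RBAC document; the namespace, role, and user names here are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-project
  name: pod-manager
rules:
- apiGroups: [""]            # "" is the core API group
  resources: ["pods"]
  verbs: ["create", "list"]  # e.g. the "create pod" and "list pods" actions
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: my-project
  name: pod-manager-binding
subjects:
- kind: User
  name: alice                # bound by user identifier
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-manager
  apiGroup: rbac.authorization.k8s.io
```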
Since every container that runs on the cluster is associated with a service account, it is also possible to associate secrets with those service accounts and have them automatically delivered into the container. This enables the infrastructure to manage secrets for pulling and pushing images, builds, and the deployment components, and also allows application code to easily leverage those secrets.
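For instance, a registry pull secret can be attached to a service account so that every pod running under that account can pull private images; the secret and account names below are hypothetical:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-builder
  namespace: my-project
imagePullSecrets:
- name: private-registry-credentials   # delivered automatically to pods using this account
```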
Aqua’s comprehensive, purpose-built platform for container security provides full visibility and control over containerized environments, with tight runtime security controls and intrusion prevention capabilities, at any scale. The platform provides programmatic access to all of its functions through an API.
Docker containers are, by default, quite secure, especially if you run the processes inside your containers as non-privileged (i.e., non-root) users. You can add an extra layer of safety by enabling AppArmor, SELinux, GRSEC, or your favorite hardening solution.
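In practice that means starting the container process as an unprivileged user and layering on a kernel hardening profile. The flags below are standard docker run options; the image name is a placeholder:

```shell
# Run as a non-root UID/GID, drop all Linux capabilities, mount the
# root filesystem read-only, and apply the default AppArmor profile:
docker run -d \
  --user 1000:1000 \
  --cap-drop ALL \
  --read-only \
  --security-opt apparmor=docker-default \
  example/myapp:1.0
```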
There are four major areas to consider when reviewing Docker security:
- the intrinsic security of the kernel and its support for namespaces and cgroups;
- the attack surface of the Docker daemon itself;
- loopholes in the container configuration profile, either by default or when customized by users;
- the “hardening” security features of the kernel and how they interact with containers.
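The kernel's cgroup support in the first of these areas is also what enforces per-container resource ceilings, limiting how much damage a compromised container can do to its neighbors. These are standard docker run flags, with a placeholder image name:

```shell
# cgroups cap each container's resource usage; namespaces give each
# container its own view of processes, network, and filesystem.
docker run -d \
  --memory 256m \
  --cpus 0.5 \
  --pids-limit 100 \
  example/myapp:1.0
```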