Introduction

Welcome to the fifth part of our Kubernetes blog series. Focusing on Amazon Elastic Kubernetes Service (EKS), this blog discusses essential security measures that safeguard Kubernetes deployments, along with the key tactics and tools for implementing them. If you missed our conversation on load balancing and scalability, please see Part 4 of the series. Let us take a close look at the critical security components for protecting your Kubernetes environment and get started with Amazon EKS.

Role-Based Access Control (RBAC)

Role-Based Access Control (RBAC) is the foundation of access management within a Kubernetes cluster. It ensures that only authorised users or services may carry out specific activities in the environment. Under this security regime, roles are carefully defined to set the scope of, and constraints on, the access granted to different entities: users, groups, or service accounts.

The first step is identifying what activities need to be performed within the Kubernetes cluster and, therefore, what permissions those activities require. Each role is then defined to contain only the specific permissions that allow its holder to perform the described activities and nothing more. These may range from simple read access to specific resources up to broader permissions for creating or modifying system components.

Before granting these roles, each user, group, or service account is reviewed against the access level its job duties actually require. For example, a system administrator will likely need broad access to monitor and manage the security state of the entire cluster, while a developer might only be allowed to deploy and maintain applications within a development namespace.

Periodic audits are necessary to ensure these responsibilities remain appropriate and safe. Audits verify that all access permissions still match current security requirements and operational expectations. Auditors look for excessive privileges that could be abused and check whether organisational changes in roles and responsibilities call for updating access limits.

In other words, RBAC is an organised way of securing a Kubernetes cluster: it defines role-based rules and permissions that adapt to the operational needs of different sets of users and services. This preserves the security and integrity of the whole system while ensuring everyone has the access needed to do their work.

While implementing RBAC, several essential rules must be followed to guarantee that the security and access control measures remain practical and operational over time.

    • Defining Roles: The roles within an organisation must be clearly defined, and each role must be granted only the privileges needed to execute its tasks.

    • Applying the Least Privilege Principle: Users and services get the minimum access required to perform a task.

    • Conducting Regular Audits: Permission and role settings must be audited regularly to meet the organisation’s security needs.

To implement RBAC, you can create a role that only allows reading pods and services in a specific namespace. Here’s how you can define this in a YAML file:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]

When configuring access control with RBAC, you define a role that permits viewing certain data within a certain scope. In the example above, the role “pod-reader” lets its holder view information about pods and services in the default namespace. The configuration file states exactly which resources may be accessed and limits the allowed verbs to read-only operations, with no edits or deletions.

Then bind this role to a user:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: example-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

After creating the role, it is essential to bind it to users, service accounts, and other entities within the Kubernetes environment. This binding is accomplished through an additional configuration associating the role with various entities. For instance, if a role named “pod-reader” is created, it can be assigned to a user like “example-user“, a service account, or even a group, depending on the permissions needed. In practical terms, binding the “pod-reader” role to the user “example-user” enables this user to access and view information as permitted by the role. Similarly, when this role is bound to a service account, any pods running under that account gain the same level of access. This setup is crucial for enforcing security policies and ensuring only authorised entities have the appropriate access levels within a Kubernetes cluster.
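
As a sketch of how the same role can be bound to different subject kinds, the following Python helper (hypothetical, shown only to illustrate the manifest shape) builds RoleBinding manifests for a user and for a service account:

```python
def role_binding(name, namespace, role, subject_kind, subject_name):
    """Build a RoleBinding manifest binding `role` to a single subject."""
    subject = {"kind": subject_kind, "name": subject_name}
    if subject_kind == "ServiceAccount":
        # Service accounts are namespaced and belong to the core API group,
        # so they carry a namespace field instead of an apiGroup.
        subject["namespace"] = namespace
    else:
        # Users and groups are identified through the RBAC API group.
        subject["apiGroup"] = "rbac.authorization.k8s.io"
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": name, "namespace": namespace},
        "subjects": [subject],
        "roleRef": {
            "kind": "Role",
            "name": role,
            "apiGroup": "rbac.authorization.k8s.io",
        },
    }

# Bind "pod-reader" to a user and, separately, to a service account
# (the service account name "app-sa" is illustrative).
user_binding = role_binding("read-pods", "default", "pod-reader", "User", "example-user")
sa_binding = role_binding("read-pods-sa", "default", "pod-reader", "ServiceAccount", "app-sa")
```

Serialising either dictionary to YAML reproduces a manifest of the same form as the RoleBinding above.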

Implementation in AWS EKS

AWS EKS integrates closely with AWS IAM (Identity and Access Management), a service built for secure access management. This improves security by setting boundaries on which AWS resources Kubernetes pods and other system components can use.

AWS EKS leverages a powerful AWS IAM feature known as IRSA (IAM Roles for Service Accounts), which allows IAM roles to be associated with Kubernetes service accounts rather than directly attaching them to pods. This association provides a granular level of control over AWS resources. Here is how it works: a specific IAM role is connected to a service account in Kubernetes, which is then linked to the pods. As a result, each pod that uses this service account inherits the IAM role, enabling access to only those essential AWS resources for its functions. This method ensures that security is tight and access is meticulously controlled, helping to prevent unauthorised activities or access within the system.

Example

Imagine a situation in which a Kubernetes pod needs to read data from an S3 bucket for its application. Without IRSA, this is managed less securely, by embedding AWS credentials directly into the application or by using overly permissive roles. IRSA dramatically simplifies and secures the process:

    1. Create an IAM policy that specifies the permissions required. For instance, this policy could allow actions like s3:GetObject on a specific S3 bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
        }
    ]
}

    2. Create an IAM role and attach the above policy to it. Set the trust relationship of the IAM role to allow the Kubernetes service account to assume this role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::aws-account-id:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E:sub": "system:serviceaccount:namespace:service-account-name"
                }
            }
        }
    ]
}

To enable safe interactions between Kubernetes and your AWS environment, you first create the IAM role and associate it with the policy. You then establish a trust relationship so that a Kubernetes service account may assume this role through AWS IAM. IAM Roles for Service Accounts (IRSA) makes this connection possible using OpenID Connect (OIDC), an identity layer on top of the OAuth 2.0 protocol. When you create an Amazon EKS cluster, it can automatically provision an OIDC identity provider, and this configuration allows IAM roles to be linked directly to Kubernetes service accounts. Through OIDC, you authorise AWS to accept the service account’s assumption of an IAM role, which improves security and manageability by giving Kubernetes pods specific AWS permissions without embedding AWS credentials in applications. Implementing this involves two steps: configuring the EKS cluster to use an OIDC provider, and then creating IAM roles whose trust relationships reference that provider. Because the trust relationship is scoped to a specific Kubernetes service account, only workloads using that account can assume the role, letting you safely connect Kubernetes operations with AWS services.
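
The trust policy shown above follows a fixed pattern, so it can be generated programmatically. The following sketch assumes placeholder values for the account ID, OIDC provider URL, namespace, and service account name:

```python
import json

def irsa_trust_policy(account_id, oidc_provider, namespace, service_account):
    """Build an IRSA trust policy allowing one service account to assume a role.

    `oidc_provider` is the cluster's OIDC issuer host and path without the
    scheme, e.g. "oidc.eks.us-west-2.amazonaws.com/id/EXAMPLE...".
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    # The federated principal is the cluster's OIDC provider ARN.
                    "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc_provider}"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    # Restrict assumption to exactly one service account.
                    "StringEquals": {
                        f"{oidc_provider}:sub": f"system:serviceaccount:{namespace}:{service_account}"
                    }
                },
            }
        ],
    }

policy = irsa_trust_policy(
    "111122223333",
    "oidc.eks.us-west-2.amazonaws.com/id/EXAMPLE",
    "my-namespace",
    "my-service-account",
)
print(json.dumps(policy, indent=4))
```

Tightening the `sub` condition to a single `system:serviceaccount:<namespace>:<name>` value is what keeps the role from being assumable by any other workload in the cluster.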

    3. Associate the IAM role with the Kubernetes service account used by the pod. This is typically done through the EKS console or using a manifest file.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: my-namespace
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::aws-account-id:role/my-role"

    4. Deploy a pod with the associated service account. The pod will now be able to access the S3 bucket using the permissions granted by the IAM role, without needing to manage AWS credentials explicitly.

This approach significantly enhances security by relying on AWS’s robust IAM systems to control access directly. It reduces the risk of leaked credentials and ensures each pod has only the permissions it needs.

Image Scanning

Scanning images during software updates and deployment is a critical practice. It identifies security holes in container images and source code that would otherwise result in software flaws. Detecting and fixing these flaws early reduces the potential security problems that could arise once the product is live.

Think of container images as blueprints for software. They contain the settings and dependencies necessary to run the software in any environment, ensuring uniform behaviour across different platforms. Since these containers can be widely shared and reused, it is crucial to prioritise security. To enhance it, building on base images with zero known Common Vulnerabilities and Exposures (CVEs), such as Google’s Distroless images or Alpine Linux, is recommended. Using 0-CVE images helps minimise the risk of exploitable security vulnerabilities in the containerised environment, maintaining the integrity and security of the applications.

Benefits of Image Scanning

    • Prevention of Security Breaches: Image scanning ensures that weaknesses are caught during the early development cycle. Otherwise, they could be exploited for unauthorised access, data breaches, and other security and integrity incidents.

    • Maintenance of Software Integrity: Regular scanning and updating of images keeps the software running on secure and stable versions, maintaining its integrity and reliability.

    • Compliance with Security Standards: Most industries operate under regulations and standards, many of which contain stringent security procedures, including vulnerability scanning. Frequent image scanning guarantees compliance with these guidelines, assisting in avoiding fines and other consequences.

Proactive security measures like image scanning help guarantee software safety and integrity, enabling deployments free of known vulnerabilities. Image scanning procedures ensure software security and dependability in the following ways:

    • Automated Scanning: Automated tools scan container images before deployment for known vulnerabilities. These tools help identify security risks that must be addressed, ensuring that only secure and reviewed images are used in production.

    • Regular Updates: Images used for all containers should regularly be updated with the latest patches and versions. This is a security practice because it helps protect against newly discovered vulnerabilities and fixes them, lowering the risk of a breach incident.

    • Private Registries: Using private registries, such as AWS ECR (Elastic Container Registry), adds further security. AWS ECR integrates with AWS EKS environments for image vulnerability scanning and provides a well-managed, secure solution for storing and handling container images.

These safeguards guarantee ongoing observation and management of the software’s operational environment, strengthening the security strategy.

Image Scanning with AWS ECR

To integrate automated image scanning into a software management system, the following commands are used with AWS ECR, a service that manages container images securely.

aws ecr start-image-scan --repository-name my-repo --image-id imageTag=latest

This command initiates a vulnerability scan on the specified image in your repository. You can then fetch the results to review and address any vulnerabilities.

aws ecr describe-image-scan-findings --repository-name my-repo --image-id imageTag=latest

This command fetches the scan findings, providing detailed results from the image scan and showing which vulnerabilities were found, which is crucial for deciding what further actions are needed to secure the image.
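
The JSON returned by that command can be summarised by severity. Here is a small, hypothetical Python helper that tallies findings from a response of that shape (the sample data below is illustrative, not real scanner output):

```python
from collections import Counter

def severity_counts(scan_response):
    """Count findings per severity in a describe-image-scan-findings response."""
    findings = scan_response.get("imageScanFindings", {}).get("findings", [])
    return Counter(f["severity"] for f in findings)

# Illustrative sample shaped like an ECR scan response.
sample = {
    "imageScanFindings": {
        "findings": [
            {"name": "CVE-2024-0001", "severity": "HIGH"},
            {"name": "CVE-2024-0002", "severity": "LOW"},
            {"name": "CVE-2024-0003", "severity": "HIGH"},
        ]
    }
}
counts = severity_counts(sample)
```

A CI pipeline could use such a summary to fail a build whenever the HIGH or CRITICAL count is non-zero, gating deployments on scan results.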

In addition to AWS ECR, several other tools that serve similar purposes are available for image scanning. Some of these include:

    1. Clair: Popular in open-source environments, Clair is known for integrating with other container registries and CI/CD tools. It’s particularly favoured for its simplicity and effectiveness in scanning vulnerabilities in Docker and OCI images.

    2. Aqua Security: This tool is widely adopted for its comprehensive security solutions, including not only image scanning but also runtime protection, compliance checks, and secret management. It is designed to secure container-based applications from development to production.

    3. Anchore Engine: Anchore is used by organisations that need a detailed and customisable scanning solution. It allows teams to define custom policies that images must comply with before deployment, making it a strong choice for environments requiring strict compliance and governance.

    4. Sysdig Secure: Known for its deep visibility into container activity, Sysdig Secure provides security scanning, compliance controls, and forensics in container environments. It is widely adopted for its ability to integrate security into DevOps workflows and provide detailed insights into container health and security status.

    5. Snyk: Snyk is favoured for its developer-friendly approach to security. It integrates directly into the development process, providing real-time feedback and automated fixes for vulnerabilities in dependencies. Its strong focus on developer tools and IDE integration makes it a preferred choice in environments where developers lead the security integration.

Secrets Management

Sensitive information should always be handled with care to ensure the security of the Kubernetes environment. Such information includes, but is not limited to, passwords, encryption keys, and access tokens; exposing any of it is a serious security breach that can lead to data theft, service disruption, and loss of confidence.

Sensitive information, generally known as secrets in IT and cybersecurity, is an important resource for maintaining system integrity and therefore calls for special treatment:

    • Storage of Secrets: Secrets must not be stored in plaintext or in an easily accessible manner. Specialised tools can encrypt secrets and store them safely, so that even if an unauthorised entity accesses the storage, the secrets remain secure.

    • Access Control: Access to each secret should be limited to the specific people or services that genuinely need it for their work. Keeping this circle small reduces the risk of sensitive information being exposed from within.

    • Auditing and Rotation: Regular auditing verifies that access controls are effective and that no unauthorised access has been granted. Rotating secrets regularly also reduces the risk that old, possibly compromised credentials can be used to gain unauthorised access.
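
As a sketch of the rotation audit described above (the 90-day threshold is an assumed policy, not a Kubernetes or AWS default), a script might flag secrets whose last rotation is too old:

```python
from datetime import datetime, timedelta, timezone

def stale_secrets(last_rotated, max_age_days=90, now=None):
    """Return names of secrets last rotated more than `max_age_days` ago.

    `last_rotated` maps secret names to timezone-aware rotation timestamps.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return sorted(name for name, ts in last_rotated.items() if ts < cutoff)

# Illustrative audit data: one secret rotated long ago, one recently.
now = datetime(2024, 8, 21, tzinfo=timezone.utc)
rotations = {
    "db-credentials": datetime(2024, 3, 1, tzinfo=timezone.utc),
    "api-token": datetime(2024, 8, 1, tzinfo=timezone.utc),
}
overdue = stale_secrets(rotations, max_age_days=90, now=now)
```

Running such a check on a schedule turns the rotation policy from a guideline into something that is actually enforced.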

Even with proper management, known security risks can still threaten confidential information. Continuous vigilance and adherence to best data security practices are essential to maintain safety. Moreover, incorporating advanced, trustworthy technologies can strengthen these measures, providing stronger protection against emerging threats. This proactive strategy is critical for safeguarding business data and preparing for future challenges.

    • Kubernetes Secrets: This tool stores and manages sensitive data like passwords, OAuth tokens, and SSH keys within a Kubernetes environment. It helps protect this data from unauthorised access.

    • AWS Secrets Manager: This service improves security by integrating with systems to manage secrets more effectively. It offers enhanced control over who can access these secrets and automates the encryption and rotation of secrets, adding an extra layer of security.

Example

Here is a simple example of creating a Kubernetes secret to store an application’s database credentials:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: dXNlcm5hbWU=  # base64 encoded 'username'
  password: cGFzc3dvcmQ=  # base64 encoded 'password'
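
The data values in a Kubernetes Secret are base64-encoded, not encrypted. The strings above can be reproduced, and decoded for verification, like this:

```python
import base64

# Encode credentials the way `data` fields in a Secret manifest expect.
username_b64 = base64.b64encode(b"username").decode("ascii")
password_b64 = base64.b64encode(b"password").decode("ascii")

# Decoding recovers the original values, which is why base64 is an
# encoding, not a security measure in itself.
assert base64.b64decode(username_b64) == b"username"
```

Because the encoding is trivially reversible, restricting who can read Secret objects remains essential.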

To use AWS Secrets Manager, you can retrieve a secret value using the AWS CLI:

aws secretsmanager get-secret-value --secret-id my-secret-id

To integrate this directly with your Kubernetes pods, inject the retrieved values using environment variables or an external secrets operator.
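
The CLI returns a JSON document whose SecretString field commonly holds a JSON-encoded map of keys. A hypothetical parsing step (the field layout of the secret itself is an assumption for illustration) might look like:

```python
import json

def parse_secret(response):
    """Extract key/value pairs from a get-secret-value style response."""
    return json.loads(response["SecretString"])

# Illustrative response shaped like AWS CLI / boto3 output.
response = {
    "Name": "my-secret-id",
    "SecretString": '{"username": "dbuser", "password": "s3cr3t"}',
}
creds = parse_secret(response)
```

In practice an external secrets operator performs this retrieval and parsing for you, materialising the values as native Kubernetes Secrets.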

Best Practices:

    • Encrypt data using Kubernetes secrets encryption or through AWS KMS (Key Management Service).

    • Limit access to secrets using RBAC.

Network Policies

Network policies regulate data flow within a system, maintaining systematic communication between its parts, such as pods, services, ingress controllers, nodes, and external databases, APIs, load balancers, and applications. These policies ensure the components of a system communicate only when necessary, hardening it against unnecessary exposure and illegal access.

Network policies help establish an organised network architecture by segmenting the network and limiting communication between pods to approved services only. This configuration creates a safe space for data exchange by closing off hostile access points.

Furthermore, these policies define specific egress and ingress rules for each service:

    • Ingress Rules control incoming data to a pod, ensuring that only data from trusted sources or necessary services can enter. This selective permission helps maintain the integrity and security of the system by preventing unauthorised access.

    • Egress Rules govern the data leaving a pod, permitting information to be sent only to essential and secure destinations. This careful management of outgoing data prevents potential security vulnerabilities and ensures that sensitive information remains protected.

By systematically approaching incoming and outgoing data, the system is more secure against internal leaks and external threats, ensuring secure and efficient operations.

Example

This example describes a basic network policy that manages how certain parts of a system communicate with each other. The policy limits which components can send data to a particular group of pods, labelled ‘frontend’: only pods labelled ‘backend’ are allowed to reach them.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-backend
spec:
  podSelector:
    matchLabels:
      role: frontend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: backend
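
By the same pattern, a complementary egress rule can restrict where backend pods may send traffic. The following sketch assumes hypothetical ‘database’ labels and a PostgreSQL port; adjust both to your environment:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-egress-to-db
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          role: database
    ports:
    - protocol: TCP
      port: 5432
```

Combining ingress and egress rules in this way gives each tier an explicit, minimal set of allowed connections.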

Hardening Hosts

Securing the hosts that run these applications is as important as the application-level security measures. The following best practices help ensure that the hosts are hardened against potential threats:

    • Regular Patching: Regularly update host operating systems and all installed applications with the most up-to-date security patches. This demands constant maintenance so that vulnerabilities are addressed promptly, reducing the likelihood of security incidents.

    • Firewall Rules: Enforce strict rules for incoming and outgoing traffic and monitor them closely. Acting as a boundary that controls what data can enter and exit the hosts, a firewall provides an extra layer of defence against unwanted access.

    • Configuration Management: Tools such as Ansible, Chef, and Puppet keep configurations secure and consistent across different hosts. They reduce human error in configuration and, through automation, ensure all security best practices are adhered to, keeping the servers well-configured and in the proper state.

These activities dramatically improve an organisation’s overall security posture by making its hosts tough to attack and ensuring applications operate in a secure environment.

Example

Here is a simple Ansible playbook for setting up the hosts that run Kubernetes safely and correctly:

---
- hosts: all
  tasks:
  - name: Ensure the latest security patches are installed
    apt:
      upgrade: 'dist'
      update_cache: yes
      cache_valid_time: 86400  # 24 hours
  - name: Install and enable a firewall
    apt:
      name: ufw
      state: present
    notify:
    - enable ufw

  handlers:
  - name: enable ufw
    command: ufw enable

This Ansible playbook outlines a series of steps to enhance the security of the hosts running the system. It includes two main tasks:

    • Updating Security Patches: The playbook ensures all security updates are installed on the computers. It checks for updates and applies them to keep the system protected against known vulnerabilities.

    • Firewall Installation and Activation: It also handles the setup and activation of a firewall using a tool called ufw. This firewall helps block unauthorised access to the hosts, securing them from potential threats.

The playbook automates these tasks, ensuring they are consistently applied across all computers and maintaining a high-security standard.

AWS EKS Considerations

AWS EKS provides managed node groups as an integrated feature. With these, AWS manages node-level updates and patches, which reduces the work needed to keep the system current with security fixes. Organisations can entrust these tasks to AWS, whose extensive, regular security maintenance keeps the system compliant with modern standards. This reduces both the risk of security vulnerabilities and the administrative workload on teams, giving them more time to concentrate on application development rather than maintenance.

Conclusion

This blog has focused on the primary security measures and best practices required to safeguard a Kubernetes environment, specifically one hosted on AWS EKS. It began by considering how role-based permissions manage access, ensuring that only authorised users or services can operate within the system. It then covered the need to scan images for security flaws before they go live, which is crucial for preventing possible breaches.

Furthermore, it looked at practical methods for handling confidential information with tools designed for safe storage and access, the necessity of maintaining an updated and securely configured system foundation, and strategies for managing data flow inside the cluster. Together, these tactics guarantee the safety and effectiveness of a Kubernetes environment and its security management.

If you need any further information or clarity on Kubernetes security, AWS EKS, or cloud computing in general, feel free to visit the official CloudZenia website.

Aug 21, 2024