Microservices are small, self-contained units of code that can be easily isolated and updated without affecting the rest of a system. This modularity can make them more resilient to attack than monolithic applications, but it also introduces new security challenges.
For an organization to effectively secure microservices, it is important to understand how they work, threats to microservices, and how to prevent typical misconfigurations that lead to exploits.
This article discusses the key aspects of microservices security and best practices that help prevent misconfigurations and vulnerabilities, but stops short of providing instructions for implementing the recommended measures.
Key aspects of securing microservices
Each service in a microservice architecture operates independently and often requires distinct security controls. In addition, communication between components can be complex, making it difficult to secure access to data and resources. The result is often an incomplete picture of the security risks associated with a microservice runtime and its dependencies.
Securing microservices requires distributing security controls across the core categories described in the sections below.
Access Control
Access control is the process of identifying the legitimacy of a user or requesting service before assigning it the appropriate level of access. In a microservices framework, this can be accomplished by having each service expose an API that can be used to authenticate and authorize users. These services can further enforce their security policies autonomously, subsequently allowing granular control over who has access to which service.
The sections below cover key considerations when implementing access control for microservices security.
RBAC/ABAC
Role-Based (RBAC) and Attribute-Based Access Controls (ABAC) are security mechanisms that regulate user access based on defined roles (such as business functions) or attributes (unique set of user features and environmental conditions), respectively.
Recommended best practices:
- Enforce a deny-by-default principle that restricts all access, unless explicitly allowed
- Apply the principle of least privilege to assign the lowest user privilege required to complete business functions
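The deny-by-default principle can be sketched as a lookup that returns True only for explicitly granted permissions; unknown roles and unlisted permissions fall through to denial. The role and permission names here are hypothetical:

```python
# Minimal deny-by-default RBAC check (illustrative sketch; role and
# permission names are hypothetical).
ROLE_PERMISSIONS = {
    "billing-admin": {"invoices:read", "invoices:write"},
    "support-agent": {"invoices:read"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the permission."""
    # Unknown roles and unlisted permissions fall through to denial.
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Because access is granted only by explicit membership in the permission set, adding a new service or role starts from zero access, which is exactly the least-privilege posture described above.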
Identification & authentication
Identification is the process of defining and regulating unique identities for user validation and service access. Authentication verifies if a user is who they claim to be by checking the credentials associated with their entity. The identity server determines the user’s identity and then issues a token as proof of trust. Most authentication systems rely on a username-password combination, while others leverage the knowledge, inherence, and possession principle to validate user identity.
Recommended best practices:
- Assign each service a unique identifier to prevent conflicts between services with similar functionality
- Use Access Control Lists (ACLs) to define which users or groups of users have access to which services
- Perform regular audits of the identities database for security monitoring and compliance
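The issue-a-token-as-proof-of-trust flow described above can be sketched with a compact HMAC-signed token. A production system would typically use a standard format such as JWT via a maintained library; the signing key below is a placeholder that would come from a secret store:

```python
import base64
import hashlib
import hmac
import json
import time

# Assumption: in practice this key is loaded from a secret manager, not hard-coded.
SECRET = b"replace-with-a-managed-signing-key"

def issue_token(subject: str, ttl_seconds: int = 300) -> str:
    """Issue a compact HMAC-signed token as proof of identity."""
    payload = json.dumps({"sub": subject, "exp": time.time() + ttl_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str):
    """Return the claims if the signature is valid and unexpired, else None."""
    payload_b64, sig_b64 = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return None
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None
```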
Authorization
Coupled with the authentication process, the authorization mechanism helps determine a user's access scope for a specific resource, API endpoint, or data.
Recommended best practices:
- Enforce authorization for individual APIs for enhanced service-level security
- Deploy authorization engine on a sidecar for individual services
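Per-API authorization enforcement can be sketched as a decorator that checks the caller's granted scopes before the handler runs. The request shape and scope names here are illustrative assumptions:

```python
import functools

def require_scope(scope: str):
    """Decorator enforcing that the caller's token grants a scope for this endpoint."""
    def wrap(handler):
        @functools.wraps(handler)
        def inner(request):
            # Deny unless the scope was explicitly granted to the caller.
            if scope not in request.get("scopes", ()):
                return {"status": 403, "body": "forbidden"}
            return handler(request)
        return inner
    return wrap

@require_scope("orders:read")
def list_orders(request):
    return {"status": 200, "body": ["order-1"]}
```

In a sidecar deployment, the same check would run in the sidecar process rather than in the service code, but the decision logic is the same.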
API security
Each microservice exposes a set of independent Application Programming Interfaces (APIs) that can be used by other services or applications to interact or integrate with it. Because of the sensitive information APIs carry and their integration with other services, a compromised API typically leads to severe enterprise-level threats.
The sections below describe key aspects of API security.
Utilize a centralized gateway
An API gateway is a central point of entry for all API requests that can be used to enforce security policies such as authentication, rate limiting, and parameter validation. As a reverse proxy, a gateway enhances security by validating requests from the public internet, routing client requests to the appropriate microservices and controlling how clients utilize the API.
Recommended best practices:
- Use OAuth or JSON Web Tokens (JWT) for secure authentication
- Use an encrypted HTTPS connection between the API Gateway and the microservices
- Deploy the gateway in a DMZ or other protected network zone
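Rate limiting, one of the gateway policies mentioned above, is commonly implemented as a per-client token bucket. A minimal sketch (the rate and capacity parameters are illustrative):

```python
import time

class TokenBucket:
    """Per-client token bucket, a common gateway rate-limiting scheme."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A gateway would keep one bucket per API key or client IP and return HTTP 429 when `allow()` is False.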
Isolate microservices and embed security for each service
Each service should reside on its own virtual private network (VPN), with security rules and protocols applied to each VPN. This limits the blast radius of a malicious user who has obtained access tokens for one API within the deployment. Virtual isolation of microservices also protects an organization's APIs from internal threats.
Recommended best practices:
- Design microservices to operate independently and not share dependencies with other services
- Use a language that supports concurrency to allow interservice communication
Secure coding practices
A secure coding practice encompasses various principles, tools, and processes that help design a secure architecture at the foundational level. The practice also acts as an essential enabler of a DevSecOps model that sets security standards since the earliest stages of application delivery.
Key considerations for secure coding of microservice applications are described in the sections below.
Refresh tokens
Refresh tokens offset the short-lived nature of access tokens: they reduce the number of calls your microservices need to make to the auth server and allow users to obtain new access tokens without re-entering their credentials each time. Issuing each service its own tokens also keeps a compromised token scoped to a single service.
Recommended best practices:
- Secure the codebase and storage of refresh tokens to prevent malicious use
- Only use HTTPS when making calls to the service that uses the refresh token
- Regularly rotate your refresh tokens
- Balance token lifetimes (short-lived access tokens, longer-lived refresh tokens) for optimum user experience and security
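The rotation practice above can be sketched as a rotate-on-use store, assuming refresh tokens are persisted only as hashes so a database leak does not expose usable tokens. The store layout and names are illustrative:

```python
import hashlib
import secrets

class RefreshTokenStore:
    """Rotate-on-use refresh tokens; only hashes are persisted."""

    def __init__(self):
        self._hashes = {}  # user -> SHA-256 hash of the current refresh token

    def issue(self, user: str) -> str:
        """Mint a fresh token and remember only its hash."""
        token = secrets.token_urlsafe(32)
        self._hashes[user] = hashlib.sha256(token.encode()).hexdigest()
        return token

    def redeem(self, user: str, token: str):
        """Exchange a valid refresh token for a new one; the old one is invalidated."""
        if self._hashes.get(user) != hashlib.sha256(token.encode()).hexdigest():
            return None
        return self.issue(user)
```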
Input and URL validation
Input validation is the process of ensuring that data supplied by a user or system meets the requirements of your application. This is important because it helps to ensure that data is clean and consistent before it gets stored or processed. URL validation operates similarly but deals with ensuring that URLs are appropriately formatted and meet the security requirements of your framework.
Recommended best practices:
- Use regex patterns to match allowed URL formats
- Do not trust any external data sources, including user-controlled files, environment variables, network interfaces and command-line arguments
- Identify all data sources and categorize them into trusted and untrusted inputs
- Validate all input data parameters, including header values, expected data types, data range, length, etc.
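The regex allow-list approach above might look like the following sketch. The allowed URL shape is an assumption; tighten the pattern to your framework's requirements:

```python
import re
from urllib.parse import urlparse

# Assumption: only HTTPS URLs with simple host/path shapes are acceptable.
URL_PATTERN = re.compile(r"^https://[A-Za-z0-9.-]+(/[\w./-]*)?$")

def is_valid_url(url: str) -> bool:
    """Allow only well-formed HTTPS URLs matching the expected shape."""
    if not URL_PATTERN.match(url):
        return False
    # Double-check the URL parses and actually has a hostname.
    return bool(urlparse(url).hostname)

def validate_length(value: str, max_len: int = 256) -> bool:
    """Reject empty or oversized input parameters."""
    return 0 < len(value) <= max_len
```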
Combine OAuth and SSL for enhanced security
OAuth (for delegated authentication and authorization) and SSL/TLS (for encrypting inter-service communication) enforce protection at different layers of a microservice framework. Combining the two protocols shields tokens and data in transit from potential exploits.
Recommended best practices:
- Choose an OAuth provider that supports strong encryption methods
- Encrypt all data before sending it to the OAuth provider
- Use SSL for all communication between the microservices and the OAuth provider
- Never store sensitive information within URL parameters when using OAuth
Encryption
Several different approaches can be taken when encrypting inter-service communication and data of microservices. The most common approach is to use a combination of public key infrastructure (PKI) and symmetric key cryptography. However, encryption can add debugging complexity and latency to microservices if not administered appropriately.
Key considerations for encrypting data and inter-service communication of microservices include:
Use standardized encryption algorithms
Bring-your-own-encryption (BYOE) solutions often excel at addressing test cases and expected inputs but fail to tackle novel threat patterns. Using a well-vetted, trusted algorithm allows you to apply modern approaches to tackle emerging threats and ensures your microservices are compatible with other components of a distributed ecosystem.
Recommended best practices:
- Use tried and tested cryptographic schemes, such as the Advanced Encryption Standard (AES), which eliminate errors and vulnerabilities associated with BYOE arrangements
- Ensure the encryption algorithm retains the interoperability of microservices by allowing different services to communicate with each other using the same encryption scheme
- If administering custom encryption is unavoidable, use established high-level library interfaces to build cryptographic primitives, ensuring they cover all encryption tasks in a precisely defined and reliable manner
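As an example of leaning on vetted primitives rather than BYOE, Python's standard library exposes PBKDF2-HMAC for key derivation; AES encryption itself would typically come from a maintained library such as `cryptography`. The iteration count below follows common current guidance and is an assumption to tune, not a mandate:

```python
import hashlib
import secrets

def derive_key(passphrase: str, salt: bytes = None):
    """Derive a 256-bit key with PBKDF2-HMAC-SHA256, a standardized, well-vetted KDF."""
    if salt is None:
        salt = secrets.token_bytes(16)  # fresh random salt per derivation
    # 600,000 iterations is a commonly recommended work factor for SHA-256.
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)
    return key, salt
```

The derived key would then feed a standard cipher (e.g., AES-GCM from a vetted library) rather than any home-grown scheme.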
Enforce mutual TLS (mTLS) to encrypt communications between services
Mutual TLS enforces service-to-service encryption using x.509 certificates for the identification and authentication of microservices. A certificate contains the service’s identity and a public encryption key validated by a trusted certificate authority.
Recommended best practices:
- All services should have valid SSL/TLS certificates and keys
- All communications should be authenticated by using signed certificates or a shared secret
- Each microservice should verify the other's certificate and use the attached public key to create session keys unique to each interaction
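A minimal sketch of a server-side mTLS configuration using Python's standard `ssl` module; the certificate paths are placeholders that would be supplied by your PKI or certificate manager:

```python
import ssl

def make_mtls_server_context(certfile=None, keyfile=None, cafile=None) -> ssl.SSLContext:
    """Server-side TLS context that requires and verifies client certificates (mTLS)."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH, cafile=cafile)
    ctx.verify_mode = ssl.CERT_REQUIRED          # reject clients without a valid certificate
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # disallow legacy protocol versions
    if certfile:
        # Paths are placeholders; in practice they come from your PKI/cert manager.
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx
```

In a service mesh, the mesh sidecars typically build and rotate these contexts automatically, so application code rarely touches them directly.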
Secrets management
Secrets – such as SSL certificates, session tokens, client credentials, database connection strings, etc. – in a container-native microservices framework tend to be more ephemeral than in other architectures. As a result, they are more difficult to track and often lead to secret sprawl as the application scales up. Additionally, because every microservice module maintains its own autonomous set of secrets, seamless, centralized management, sharing, and rotation of secrets becomes harder.
Key considerations for managing secrets are outlined in the sections below.
Avoid hard-coding secrets into source code/configurations
Including secrets in source code and configuration manifests leads to the exposure of sensitive information and is a common attack target for unauthorized access to user accounts, project repos, or system information. Although this is most relevant in open-source software where the code is publicly available, attackers can also reverse engineer closed-source binary files to extract secrets.
Recommended best practices:
- Store secrets as environment variables
- Segregate secrets into separate files and databases that are never pushed to the repository or committed to source control
- List secret-bearing files in .gitignore to prevent committing secrets to repositories
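The environment-variable practice above can be sketched as a fail-fast lookup, so a missing secret stops the service at startup instead of surfacing later as a confusing runtime error (the variable name is illustrative):

```python
import os

def get_secret(name: str) -> str:
    """Read a secret from the environment, failing fast if it is missing."""
    value = os.environ.get(name)
    if not value:
        # Crash early: a service without its secrets should not start.
        raise RuntimeError(f"required secret {name!r} is not set")
    return value
```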
Utilize Vaults to encrypt secrets
Vaults enable centralized encryption of secrets either in transit or at rest. They allow you to store secrets out of your codebase and retrieve them from a central location when needed. This reduces the risk of secrets being accidentally committed to your code repository or exposed through a security breach.
Vaults also make it easier to rotate secrets regularly, as you can simply update the Vault with the new secret and then update your application to use the new secret. This is much simpler than updating each microservice that uses the secret.
Recommended best practices:
- Use strong encryption algorithms (e.g., AES-256) to encrypt secrets
- Leverage a key management system (KMS) to manage encryption keys to ensure only authorized users have access to the keys needed to decrypt data
- Use a shared secret or an asymmetric key to ensure each service has its own key that it uses to encrypt and decrypt data; and that each service only has access to the data it needs
Vulnerability scanning and threat modeling
Vulnerability scanning helps identify, classify, and prioritize vulnerabilities, while threat modeling helps detect missing security safeguards and enumerate potential threats in a deployment pipeline. Together, these processes help design the security posture blueprint of an organization by analyzing attack surfaces, tracking changing threat patterns, detecting potential data leakage vectors, and prioritizing mitigations.
Despite the benefits, comprehensive scanning in complex microservice architectures is naturally challenging. The processes often generate a lot of false positives, which can lead to overwhelming security alerts that are difficult to triage. Additionally, because of the distributed nature of the architecture, it can be difficult to identify all potential attack vectors, so critical mitigations may be missed.
The sections below cover key considerations for vulnerability scanning and threat modeling.
Scan all open-source dependencies
As modern application delivery extensively relies on open-source software components for modular development and rapid workflows, it is important to administer comprehensive scanning of all components and dependencies.
Recommended best practices:
- Stay up-to-date on the latest security advisories and patches for your dependencies by leveraging security benchmarks (such as CIS)
- Use a centralized dependency management tool (such as npm for JavaScript projects) to manage and update dependencies for quick mitigation of vulnerabilities
Enforce automated, continuous scanning and threat modeling
As the threat landscape constantly changes, malicious actors devise new approaches to discover and exploit security vulnerabilities. To counter this, it is critical to enforce continuous scanning to identify and fix potential security flaws before a threat actor exploits them.
Recommended best practices:
- Integrate scanning as part of CI/CD pipelines of a DevOps workflow
- Leverage automated tools for comprehensive scans
- Perform tests against known vulnerability databases, such as NVD or OWASP
- Use data flow diagrams to model how data flows through the system. This helps to identify potential data leakage points, where sensitive information could be inadvertently exposed
- Regularly review and update automated threat models to counter emerging threats where existing models may no longer be adequate
- Implement a remediation workflow that ensures vulnerabilities are prioritized and remediated in a timely manner
Incident response
Incident management enables a swift response to security events while helping developers to learn from recurring patterns and build preventive mechanisms for future occurrences. Incident management tools for microservices can also be used to build a service catalog that collects performance data of all services and dependencies of the deployment pipeline for a unified view.
The service catalog can also track the ownership of each microservice, which is particularly helpful for operational escalation during troubleshooting. The owners are typically the software developers who designed and coded the microservice, and they serve as the highest level of escalation during an incident.
Another benefit of organizing the dependencies of microservices is that service health and status can be determined and associated with each microservice. Health can be represented by a score between 0 and 100, where a perfect score indicates the microservice is error-free and running with minimal latency; status indicates whether the service is up or down.
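The 0-100 health score described above could be computed, for example, from error rate and tail latency. The weights and latency budget below are illustrative assumptions, not a standard formula:

```python
def health_score(error_rate: float, p99_latency_ms: float,
                 latency_budget_ms: float = 500.0) -> int:
    """Map error rate and p99 latency to a 0-100 health score (weights are illustrative)."""
    # Errors dominate the score: a fully failing service scores at most 40.
    error_penalty = min(error_rate, 1.0) * 60
    # Latency penalty saturates once the budget is exhausted.
    latency_penalty = min(p99_latency_ms / latency_budget_ms, 1.0) * 40
    return round(100 - error_penalty - latency_penalty)
```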
Key considerations for incident management in microservices include:
Continuously fine-tune alerts
Alerts form the foundation of an incident response cycle, as they notify software teams when a breach has occurred. When creating alerts and notifications, some recommended practices include:
- Incidents should be scoped to the lowest-tiered teams possible by default, allowing higher-level teams to focus on critical issues
- Minimize escalations and false positives by triggering alerts only for events that impact the security or critical services of the workload
- Configure alerts to be actionable for quicker corrective action
Leverage managed incident response channels
Managed incident response platforms automate the incident registration and ticket allocation processes for easier issue resolution. These platforms centralize event information from all services of a service mesh for real-time incident response.
Recommended best practices:
- Define clear roles and responsibilities for everyone involved in the incident response process.
- Define robust Service Level Agreements (SLAs) for critical resources and services
- Identify and expedite the configuration of response plans for high-severity incidents
- Use automation to help with coordination and communications between team members
Conduct postmortem analysis to prevent recurring problems
It is essential to record and analyze all security events to fine-tune security posture and prevent recurring problems. A thorough analysis also enables easier identification of attack patterns and proactive remediation of an ongoing breach.
Recommended best practices:
- All lessons learnt from a security incident should be entered in a shared knowledge base to reduce the likelihood and impact of a future incident
- Postmortem analysis (also known as retrospectives) should include the incident’s root cause and steps taken for remediation
- Create actionable plans for preventing recurring problems
Document runbooks to have a systematic and automated response to security problems
Runbooks are automated sequences containing steps to perform incident response actions, such as alerting and threat containment. By documenting lessons learned, runbooks are typically used to improve an incident response system continuously and to help eliminate manual interventions in future occurrences.
Recommended best practices:
- Ensure that the security incident response team is familiar with runbook processes and procedures
- Regularly review and update runbooks to ensure they remain relevant as new threats emerge
- Regularly test runbooks to identify potential gaps in the documented procedures
Conclusion
Despite their many advantages over traditional monolithic applications, microservices come with their own challenges, one of which is security.
The technology and threat landscape will continue to evolve. Even so, when the measures above are applied and the deployment is properly hardened, microservices can be made at least as secure as traditional architectures.