Google’s 90-Day TLS Certificate Validity Proposal

  1. What is Google’s 90-Day TLS Validity Proposal?
  2. Why Shorter Certificate Lifespans?
  3. When Will the Proposal Take Effect?
  4. What is the impact of 90-Day TLS?
  5. What is the call to action?

1. What is Google’s 90-Day TLS Validity Proposal?

In March 2023, Google announced plans, in its Moving Forward, Together statement, to reduce the maximum validity period for public TLS certificates to 90 days (down from 398 days). In this statement, Google explains that reducing certificate lifespans to 90 days “will allow for faster adoption of emerging security capabilities and best practices, and promote the agility required to transition the ecosystem to quantum-resistant algorithms quickly.”

In addition to decreasing the maximum validity period for TLS, Google also intends to shorten the domain validation reuse period to 90 days, down from 398 days. Requiring more frequent domain validation is intended to protect domain owners and minimize the risk of Certificate Authorities (CAs) inadvertently using outdated or inaccurate information, which could lead to certificate mis-issuance and potential abuse.


2. Why Shorter Certificate Lifespans?

Reducing certificate lifespans aims to drive automation, streamline issuance processes, and mitigate potential security risks associated with longer certificate validity periods. Adopting shorter certificate validity periods will ultimately help:

  • Enhance Security
  • Encourage Automation
  • Promote Agility
  • Support Compliance
  • Encourage Post-Quantum Cryptography (PQC) Readiness

3. When Will the Proposal Take Effect?

The effective date for Google’s proposal is still to be determined. Google states that it plans to introduce this change either through a future policy update of Google’s Chrome Root Program or through a CA/Browser (CA/B) Forum ballot proposal.

The industry has been pushing for shorter TLS certificate validity periods for over a decade, and maximum validity has steadily come down. As part of a collective effort to fortify security and encourage streamlined certificate management practices, the CA/B Forum has reduced maximum certificate validity, through updates to its Baseline Requirements, approximately every three years.

4. What is the impact of 90-Day TLS?

Reducing TLS certificate validity to 90 days has both positive and negative implications for organizations of all sizes:

Positive Impact:

  • Agility: Shorter certificate lifespans promote quicker response times to security incidents or policy changes, with certificates being frequently rotated and renewed.
  • Enhanced Security: Shorter certificate validity reduces the window of opportunity for potential attackers to exploit compromised or weak certificates.
  • Improved Compliance: Many regulatory frameworks and standards, including PCI DSS and GDPR, encourage strong cryptographic hygiene and key management, which shorter certificate validity supports.

Negative Impact:

  • Increased Operational Overhead: Managing certificates with shorter lifespans requires more frequent renewals and rotations (more than 4 times per year), which can drastically increase operational overhead if automation is not implemented.
  • Increased Risk of Outages/Service Disruptions: If certificate renewal processes are not automated or effectively managed, there is a greater risk of outages, security vulnerabilities or service disruption due to expired certificates.
  • Compatibility Challenges: Some legacy systems or devices may not support certificates with shorter validity periods, requiring updates or workarounds to maintain compatibility.

While transitioning to 90-day TLS certificates presents challenges, the advantages far outweigh the drawbacks, particularly in the context of today’s dynamic threat landscape. Fortunately, comprehensive, automated Certificate Lifecycle Management (CLM) solutions are readily available to help organizations overcome 90-Day TLS validity hurdles, achieve crypto-agility and reinforce security.
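
To illustrate what the automation core of such a solution does, here is a minimal sketch, using the `pyca/cryptography` package, of the renewal-window check a CLM tool would run continuously; `needs_renewal` and the file path are hypothetical.

```python
import datetime

from cryptography import x509

def needs_renewal(pem_bytes: bytes, window_days: int = 30) -> bool:
    """Flag a certificate that expires within the renewal window."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    remaining = cert.not_valid_after - datetime.datetime.utcnow()
    return remaining <= datetime.timedelta(days=window_days)

# With 90-day certificates and a 30-day window, each certificate is
# renewed roughly every 60 days -- more than four renewals per year.
with open("server.pem", "rb") as f:  # hypothetical certificate path
    print(needs_renewal(f.read()))
```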

5. What is the call to action?

Although the effective date is still undecided, it’s crucial for organizations to start preparing now for the transition to 90-day TLS certificate validity. Google publicly announced the proposal in 2023 to give organizations ample warning and to encourage them to take action early. Proactive planning, including implementing effective certificate lifecycle management (CLM) strategies and building crypto-agility, will help ensure a smooth transition to reduced certificate lifespans.


Post-Quantum Cryptography (PQC)

  1. What is Post-Quantum Cryptography (PQC)?
  2. Why is Post-Quantum Cryptography Important?
  3. Who is Developing Post-Quantum Cryptography?
  4. How Can Organizations Prepare for PQC?
  5. A Brief History of Quantum Computers
  6. How Do Quantum Computers Work?

1. What is Post-Quantum Cryptography (PQC)?

Post-quantum cryptography (PQC) refers to a new set of cryptographic algorithms designed to protect data and systems against threats posed by quantum computers capable of running Shor’s algorithm, which could break classical public key cryptographic algorithms. While classical computers use bits as the basic unit of information, quantum computers use quantum bits (qubits). This allows quantum computers to perform complex mathematical calculations and solve certain problems exponentially faster than classical computers. On the other hand, this unparalleled processing power also allows them to potentially break today’s widely used key exchange algorithms, such as RSA and ECDH, and digital signature algorithms, such as RSA, ECDSA, and EdDSA. Post-quantum cryptography aims to develop new cryptographic algorithms that can withstand quantum computing attacks and ensure the authenticity, confidentiality, and integrity of digital communications.


2. Why is Post-Quantum Cryptography Important?

Quantum computing is a double-edged sword. While the massive computational power of quantum computers promises ground-breaking advancements in various fields such as science, finance, pharmaceuticals, and energy, it also poses a serious threat to current public-key cryptography.

Quantum computers operate on the principles of quantum mechanics, such as superposition and entanglement, which help them perform complex mathematical calculations at speeds inconceivable for today’s classical computers. While this promises enormous benefits, it also means that quantum computers have the power to break the cryptographic algorithms that underpin much of today’s digital security infrastructure.

Today, digital transactions and communications are protected through public key cryptography algorithms such as RSA and ECDH, and digital signature algorithms like RSA, ECDSA, and EdDSA. These algorithms help protect data at rest and in transit, regardless of location, and create a safe environment for Internet communications. Breaking these algorithms by solving the complex math problems behind them requires massive computational power that today’s classical computers do not have.

However, the arrival of large-scale quantum computers, such as a CRQC (cryptographically relevant quantum computer), could have dire implications. With their immense processing power, these machines could potentially break today’s encryption algorithms, particularly RSA, with alarming ease. This scenario would potentially leave much of today’s sensitive encrypted data vulnerable and at risk of exposure.

In 1994, Peter Shor developed an algorithm (now called Shor’s algorithm) that could factorize large integers exponentially faster than the best-known classical algorithms. This demonstrated the real-world possibility of quantum computing breaking the RSA algorithm.

With the promise of quantum computing growing stronger, threat actors are not waiting. They have already begun intercepting and storing encrypted data, even though they lack the means to decrypt it with current technology. This strategy, known as data harvesting (or “harvest now, decrypt later”), banks on the day when powerful quantum computers capable of decrypting this data become readily available.

The impact of quantum computing also goes beyond encryption. Grover’s algorithm, another quantum algorithm, can significantly speed up brute-force search processes, effectively halving the security strength of symmetric key algorithms such as AES. This means that keys previously considered secure would need to be doubled in size to maintain the same level of security against quantum attacks.
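
To make the “halving” arithmetic concrete: Grover’s algorithm finds a key among \(2^{n}\) candidates in roughly \(2^{n/2}\) evaluations.

```latex
% Grover's quadratic speedup applied to an n-bit symmetric key
T_{\text{classical}} \approx 2^{n}
\qquad\longrightarrow\qquad
T_{\text{Grover}} \approx \mathcal{O}\!\left(\sqrt{2^{n}}\right) = 2^{n/2}
```

So AES-128’s nominal 128-bit strength falls to roughly 64 bits against a quantum adversary, while AES-256 retains about 128 bits, which is why doubling symmetric key sizes is the standard recommendation.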

In light of the very real quantum threat, the adoption of post-quantum cryptography (PQC) is not just a suggestion, it’s a necessity. PQC is the key to building quantum resilience and safeguarding our digital infrastructures, all while harnessing the power of quantum computing.

3. Who is Developing Post-Quantum Cryptography?

Post-quantum cryptography algorithms are being developed as part of a global effort involving a diverse array of stakeholders, including scientists, academic researchers, cryptography experts, governmental bodies, and the private sector.

To make the most of this transformative technology, the White House published two presidential directives in 2022: the first defined policies and initiatives for investments in core QIS (quantum information science) research programs, and the second laid out specific goals to effectively address the potential security risks of quantum computers.

Leading the charge for PQC is the National Institute of Standards and Technology (NIST) in the United States, which has organized an international competition to standardize post-quantum cryptographic algorithms. This competition has attracted submissions from cryptographic researchers and institutions worldwide, including prominent universities, research labs, and tech companies.

Big technology firms such as Google, IBM, and Intel are also actively researching and developing post-quantum cryptographic solutions, integrating these advancements into their products and services to future-proof their security infrastructures. Collaborations across these sectors are driving the development, analysis, and implementation of robust post-quantum cryptographic standards, ensuring they meet security, performance, and interoperability needs in a post-quantum world.

Standardization Efforts

NIST is working on standardizing post-quantum cryptographic algorithms. The process includes multiple rounds of evaluation, considering security, performance, and implementation aspects. The aim is to provide a suite of cryptographic algorithms that can replace or augment current standards in the face of the threat posed by quantum computing.

In July 2022, NIST announced the first four PQC algorithms, chosen for their ability to withstand quantum-enabled attacks. These four recommended algorithms are expected to be finalized as standards in 2024.

The four algorithms are designed for the two chief use cases of public-key cryptography: general encryption and digital signatures.

  • For general encryption used to secure websites, NIST has selected the CRYSTALS-Kyber algorithm due to its speed of operation and comparatively small encryption keys.
  • For digital signatures used to verify identities, NIST has selected three algorithms—CRYSTALS-Dilithium, FALCON, and SPHINCS+.

NIST recommends CRYSTALS-Dilithium as the primary algorithm. The FALCON algorithm will be used for applications that need smaller signatures than Dilithium can provide. The third algorithm, SPHINCS+, is larger and slower than the other two but has been selected as a backup because it is based on a different mathematical approach (hash-based rather than lattice-based) than the other three.
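
As a hands-on illustration of the KEM (key encapsulation mechanism) interface behind CRYSTALS-Kyber, here is a minimal sketch assuming the open-source liboqs Python bindings (the `oqs` package); the algorithm identifier and method names follow that project’s documentation and may differ across versions (newer builds expose the standardized name ML-KEM).

```python
import oqs  # liboqs Python bindings; API per the project's README

KEM_NAME = "Kyber768"  # may appear as "ML-KEM-768" in newer liboqs builds

with oqs.KeyEncapsulation(KEM_NAME) as client:
    public_key = client.generate_keypair()

    with oqs.KeyEncapsulation(KEM_NAME) as server:
        # Server encapsulates a fresh shared secret against the client's key.
        ciphertext, server_secret = server.encap_secret(public_key)

    # Client decapsulates the ciphertext to recover the same shared secret.
    client_secret = client.decap_secret(ciphertext)

assert client_secret == server_secret  # both sides now share a secret key
```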


4. How Can Organizations Prepare for PQC?

Once NIST announces the standard PQC algorithms, today’s cryptographic standards will eventually be deprecated and replaced with newer quantum-safe standards. Protecting digital assets and preventing cyberattacks will then come down to how quickly organizations can migrate their cryptographic systems to quantum-safe standards. The longer the transition takes, the greater the risk of exposure. Given the complexity, costs, and time involved in upgrading cryptographic systems, organizations need to plan and prepare their PKI proactively for PQC implementation.

Considerations for Implementing Post-Quantum Cryptography Algorithms

  • Performance: PQC algorithms often require more computational power than traditional algorithms. It’s essential to assess their impact on processing speed, power consumption, and overall system performance, especially for resource-constrained devices.
  • Key and Ciphertext Sizes: Many PQC algorithms use larger key sizes and produce larger ciphertexts than current algorithms. This can affect storage requirements and transmission times, necessitating careful evaluation of how these size increases impact system resources and network performance.
  • Interoperability: Integrating PQC algorithms with existing protocols and systems requires ensuring compatibility and smooth interaction. This includes updating standards and frameworks to support new cryptographic methods without disrupting current operations.
  • Security Assurance: Ensuring the security of new algorithms against both quantum and classical attacks is crucial. This involves rigorous analysis, peer review, and testing to identify and mitigate potential vulnerabilities.
  • Regulatory Compliance: Compliance with regulatory requirements and industry standards is essential. Organizations must ensure that the adoption of post-quantum cryptographic algorithms meets legal and regulatory obligations related to data protection and security.

Action Plan for Post-Quantum Cryptography Implementation

  • Assessment: Evaluating the potential impact of quantum computing on existing cryptographic systems.
  • Inventory: Building complete visibility of all crypto assets – where they are, how many there are, and what they protect. Identifying critical systems that rely on potentially vulnerable cryptographic algorithms and will need to move to post-quantum standards as a priority (see the scan sketch after this list).
  • Testing: Implementing and testing post-quantum cryptographic algorithms in controlled environments to identify potential systemic issues and ensure they meet security and performance requirements.
  • Planning: Developing a roadmap for transitioning to quantum-safe algorithms, including updating software and hardware and setting clear policies and procedures for migration to minimize operational disruption.
  • Vendor Collaboration: Assessing vendor capabilities to ensure they support you in your quantum transition.
  • Training: Educating key stakeholders and the relevant teams about the rapid advancements in quantum computing and the security risks it brings. Training them on the specific challenges involved in the transition.
  • Establishing Crypto-Agility: Building the ability to quickly switch between cryptographic algorithms to ensure a rapid response against cryptographic threats. Choosing an enterprise-grade certificate lifecycle management solution to provide visibility, automation, and control that can enable crypto-agility.
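
As a starting point for the inventory step above, here is a minimal sketch, using Python’s standard `ssl` module and the `pyca/cryptography` package, that pulls a server’s certificate and reports which public-key algorithm protects the endpoint; `inspect_endpoint` is a hypothetical helper name.

```python
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def inspect_endpoint(host: str, port: int = 443):
    """Fetch a server certificate and report its public-key algorithm."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        algorithm = f"RSA-{key.key_size}"      # quantum-vulnerable
    elif isinstance(key, ec.EllipticCurvePublicKey):
        algorithm = f"ECDSA/{key.curve.name}"  # quantum-vulnerable
    else:
        algorithm = type(key).__name__
    return algorithm, cert.not_valid_after

print(inspect_endpoint("example.com"))
```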

5. A Brief History of Quantum Computers

While the discussion on quantum mechanics as a branch of physics goes back to the early 1900s, the idea of quantum computers emerged in the 1980s, largely due to the groundbreaking observations of physicists Richard Feynman and David Deutsch. Feynman’s insight that classical computers struggle to model quantum phenomena accurately due to their exponential complexity, and his suggestion of a computer built using quantum mechanical principles, laid the foundation for quantum algorithms and the broader field of quantum information science. Deutsch later formulated the idea of a universal quantum computer, capable of performing any computational task more efficiently than classical computers for specific problems. These seminal contributions set the stage for the future of quantum computing.

1994 was a pivotal year for quantum computing, as mathematician Peter Shor developed an algorithm that would change the game. Shor’s algorithm, capable of efficiently factoring large integers, demonstrated that a sufficiently powerful quantum computer could solve in seconds complex mathematical problems that would take the best-known classical algorithms years. This revelation not only highlighted the potential of quantum computing to break traditional encryption algorithms, but also sparked a wave of increased research and investment in the development of quantum computers and quantum-resistant encryption methods.

In 1996, computer scientist Lov Grover introduced an important quantum algorithm known as Grover’s algorithm. This algorithm allows quantum computers to search through an unsorted database at unprecedented speed when compared to classical computers. The practical implications of this are significant. For instance, quantum computers can perform brute-force searches—like trying out every possible key to break an encryption—much more quickly. If it would take a classical computer a million tries to find the right key, a quantum computer might only need a thousand. This breakthrough in cryptography catalyzed further research in the development of quantum-resistant algorithms, highlighting the real-world impact of quantum computing.

Over the subsequent decades, quantum research made significant progress, marked by experimental demonstrations of small-scale quantum processors, increased qubit coherence times, and advancements in error correction and quantum algorithms, bringing the theoretical promise of quantum computing closer to practical realization.

In 2019, Google announced the first-generation Sycamore processor, which marked a significant milestone in the quantum journey. Google claimed that the 53-qubit processor could perform a calculation in 200 seconds, which would have taken the world’s most powerful supercomputer 10,000 years.

In November 2022, IBM unveiled its most powerful quantum computing processor yet, with 433 qubits (quantum bits). Nicknamed the Osprey, the processor has more than three times as many qubits as the company’s 127-qubit Eagle processor, unveiled in 2021, and more than eight times as many as Google’s 53-qubit Sycamore processor.

At present, the field of quantum computing continues to advance rapidly, with researchers and tech giants working on developing more powerful and stable quantum processors. Additionally, new algorithms and error-correction techniques are being developed to enhance the practical usability of quantum computers and realize their transformative potential across various industries.

6. How Do Quantum Computers Work?

Quantum computers operate on the principles of quantum mechanics, utilizing qubits instead of classical bits to represent and process information. While classical bits can be either 0 or 1, qubits can be 0, 1, or both at the same time. Additionally, qubits can become entangled, a phenomenon where the state of one qubit influences the state of another, regardless of the distance between them. These two properties, superposition and entanglement, enable quantum computers to perform multiple calculations at once and process complex problems much faster than classical computers.

However, quantum computing also faces significant challenges. Maintaining qubit coherence has been a persistent challenge; quantum states are fragile and easily disrupted by external noise and environmental factors, requiring precise control and isolation techniques at extremely low temperatures. Another challenge is scaling quantum systems to larger numbers of qubits while maintaining coherence and reducing error rates, which currently limit the complexity and reliability of computations. Additionally, quantum error correction is crucial to mitigate errors that arise from imperfect operations and noise in quantum systems.

The Bottom Line

Post-quantum cryptography is an essential area of research and development in the field of cybersecurity. As quantum computing advances, current cryptographic systems face serious threats. Addressing these threats demands developing and adopting new cryptographic algorithms that are resilient against quantum attacks. Through standardization efforts and careful planning, organizations can confidently navigate towards the post-quantum era while maintaining the security and integrity of digital communications.


The Role of CA/B Forum in Code Signing

What is the role of the CA/B Forum in setting code signing requirements?

The CA/Browser Forum (CA/B) is a voluntary group that focuses on establishing standards and guidelines for Certificate Authorities (CAs) on issuing and managing digital certificates used to secure websites and online communication, particularly SSL/TLS certificates.

The CA/B Forum is made up of leading Certificate Authorities (CAs), such as GlobalSign, Sectigo, Entrust, and DigiCert; internet browser vendors, such as Google Chrome and Apple Safari; and other application vendors, who work together to define standards and industry best practices for secure web communications.

While the CA/B Forum primarily focuses on web security, it also extends its influence to setting code signing requirements, given that the same certificate authorities (CAs) that issue SSL/TLS certificates also issue code signing certificates.

Similar to the security standards developed for SSL/TLS certificates, the code signing baseline requirements are focused on enforcing strict validation procedures and revocation protocols as well as strong cryptographic algorithms, key lengths, private key protection, etc. This helps to ensure that code signing certificates remain secure and reliable, bolstering the overall integrity of software distribution in the digital landscape.

Software that is signed using a valid code signing certificate issued by a publicly trusted CA (that adheres to the CA/B Forum requirements), will be trusted by Operating Systems and other software platforms.

In light of increasing code signing-related attacks, the CA/B Forum recently issued new code signing baseline requirements that mandate generating and storing private keys in crypto hardware modules to prevent private key compromises. This puts more onus on public CAs to ensure that the organizations they issue code signing certificates to strictly adhere to strong and compliant private key protection.


What are CSP and PKCS#11?

What are CSP and PKCS#11?

CSP (Cryptographic Service Provider) and PKCS#11 (Public-Key Cryptography Standard #11) are both cryptographic frameworks used to provide secure access to cryptographic functions and devices, such as hardware tokens, hardware security modules (HSMs), smart cards, and software-based cryptographic modules.

CSP (Cryptographic Service Provider):

A Cryptographic Service Provider (CSP) is a Microsoft Windows-specific framework that allows applications to utilize cryptographic functionality, including encryption, decryption, digital signatures, and hashing. CSPs provide a standardized interface for interacting with cryptographic algorithms and hardware devices on Windows systems. They enable applications to leverage the security features of the underlying Operating System.

CSPs offer a way for applications to access cryptographic functions without having to interact directly with the underlying hardware or cryptographic modules. They can interact with various types of cryptographic devices, including hardware security modules (HSMs), smart cards, and software-based cryptographic libraries.

PKCS#11 (Public-Key Cryptography Standard #11):

PKCS#11 is a cross-platform API standard created by RSA Security for accessing and managing cryptographic tokens and devices. These tokens can be hardware security modules (HSMs), smart cards, USB tokens, and other types of cryptographic hardware. Unlike CSP, which is Windows-specific, PKCS#11 is designed to be platform-independent and is widely used in various Operating Systems, including Windows, Linux, and macOS.

PKCS#11 defines a standardized set of functions and data types for interacting with cryptographic tokens and performing operations such as encryption, decryption, digital signatures, and key management. It allows applications to be written in a way that is agnostic to the specific hardware or software cryptographic module being used, as long as the module conforms to the PKCS#11 standard.
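
As an illustration of the PKCS#11 model, here is a minimal sketch assuming the third-party `python-pkcs11` package and a SoftHSM2 module; the module path, token label, and PIN are placeholders for whatever your vendor provides.

```python
import pkcs11
from pkcs11 import KeyType, Mechanism

# Load a vendor's PKCS#11 module (path shown is a SoftHSM2 placeholder).
lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")
token = lib.get_token(token_label="demo-token")  # placeholder label

with token.open(user_pin="1234") as session:     # placeholder PIN
    # Keys are generated inside the token; the private key never leaves it.
    public, private = session.generate_keypair(KeyType.RSA, 2048)
    signature = private.sign(b"artifact digest to sign",
                             mechanism=Mechanism.SHA256_RSA_PKCS)
    print(len(signature))  # 256 bytes for a 2048-bit RSA key
```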


Code Signing in DevOps

Where does code signing fit into the DevOps process?

Code signing is a crucial security practice in the DevOps process that helps ensure the integrity and authenticity of code as it moves through different stages of development, testing, and deployment. It helps establish trust between different stages of the DevOps pipeline and with end-users and customers.

Code signing typically occurs during the build, release and deployment stages of DevOps (the Continuous Delivery phase in CI/CD). When a development team packages software or code into a deployable artifact, such as a container image, installer, or application package, they digitally sign the artifact with a code signing certificate. This signature serves as a tamper-evident seal, assuring end users that the artifact’s integrity is maintained. Any unauthorized modification to the signed code will break the signature, alerting users to potentially harmful code.

Once the code-signed artifact is ready, it is deployed to the target environment, such as a production server or a cloud platform. Code signing allows the receiving system to verify the authenticity and integrity of the artifact, preventing the execution of malicious or tampered code. It ensures that only trusted and authorized code is deployed, promoting a more secure and reliable software delivery process. Additionally, code signing fits seamlessly into DevOps automation, enabling the rapid and consistent deployment of signed code across various environments.
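
Here is a minimal sketch of the underlying sign-and-verify mechanics, using the `pyca/cryptography` package with RSA-PSS; real pipelines would use a platform signing tool and a CA-issued certificate, with the private key held in an HSM rather than generated in memory.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Throwaway key pair for illustration only; production signing keys belong
# in an HSM or a managed signing service, never on a build agent's disk.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

artifact = b"contents of a build artifact"  # stand-in for an image/installer
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = private_key.sign(artifact, pss, hashes.SHA256())

# verify() raises InvalidSignature if even one byte of the artifact changes.
private_key.public_key().verify(signature, artifact, pss, hashes.SHA256())
print("signature verified")
```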

Although code signing is typically carried out during the build, release, and deployment stages, it is now considered essential even during the initial stages of the DevOps lifecycle. Threat actors are no longer interested only in the software code but also in the various third-party tools that developers use to write, build, and test code.

Enforcing code signing in the code, build, and test stages helps secure all artifacts used in the pipeline, such as source code, dependencies, libraries, and tools. It gives developers the confidence that only trusted and unaltered artifacts are being used, reducing the risk of potential tampering. Organizations can also rest assured that the code is secure through all the stages of the DevOps pipeline.

How does code signing help strengthen software supply chain security?

Code signing helps strengthen software supply chain security by providing a robust mechanism for authentication, integrity assurance, and trust throughout the software development and distribution lifecycle. It helps users verify the legitimacy of software, confirm its origin, and detect any unauthorized modifications or malware before downloading and installing it.

Code signing makes it possible to secure software updates, validate the software supply chain, and protect against zero-day vulnerabilities. It also facilitates compliance and auditing requirements. By integrating code signing into DevOps practices, organizations can automate security checks and enhance developer trust, ultimately insulating the entire software supply chain from threats and vulnerabilities, fostering a safer digital landscape for themselves and the end-users.


Code Signing Challenges

1. What are the common code signing challenges organizations face?

  • Private key theft or misuse – Private keys are the heart of the code signing process and must be protected at all times. If the private keys linked to code signing certificates are stolen, attackers could use the compromised certificate to sign malware and then distribute the software under a verified publisher name. Despite the awareness, many developers still store code signing keys on their local machines or build servers, exposing their organizations to private key theft and data breaches.
  • No visibility or control over code signing events – Modern enterprises have development teams working in several locations across the world. Different teams use different tools for signing and often leave private keys and certificates on developer endpoints or build servers. InfoSec teams have no visibility into these code signing events – who is accessing the private key, where keys are stored, and what code was signed – in turn creating security blind spots and auditing and compliance issues.
  • Making code signing DevOps-friendly – Code signing has to be easy for developers to use, which means it needs to be integrated with DevOps processes, tool chains, and automation workflows. It must support the various signing tools that distributed development teams use. Access to private keys must be easy, seamless, and secure so developers can sign code at speed without worrying about private key protection and storage.
  • Signing breaches – Code signing ensures the integrity of software, but it does not guarantee that the signed code itself is free from vulnerabilities. It is important to remember that hackers don’t always need private keys to sign malware. Build servers or developer endpoints with unregulated access to code signing systems can also be hacked to get malicious code signed and distributed to users without detection.

2. What is the best approach to code signing to prevent attacks?

  • Build visibility – Take stock of all the code signing keys and certificates used across your organization to help security teams stay on top of vulnerabilities.
  • Protect private keys – Private keys are the most important part of code signing. The CA/B Forum requires private keys to be generated and secured on crypto hardware modules, such as hardware security modules (HSMs), that are at least FIPS 140-2 Level 2 or Common Criteria EAL 4+ certified. To adhere to the CA/B Forum mandate and prevent the misuse or theft of certificates, store private keys in secure vaults and compliant HSMs.
  • Perform code integrity checks – Perform a full code review before signing to ensure the code is free of vulnerabilities. Once signed, verify all developer signatures to ensure the final code published is safe for end-users and customers.
  • Timestamp code – Apply a timestamp to the code to ensure that the digital signature remains valid even after the certificate used for signing expires.
  • Use test-signing certificates – Employ private trust test certificates, or those issued by an internal CA, to sign pre-release code.
  • Rotate keys – Rotate keys regularly. Use unique, separate keys for signing different releases across multiple DevOps teams to limit the damage a breach can cause in the event of key theft.
  • Centralize key and certificate management – Centralized management of code signing keys and certificates reduces complexity and ensures consistent practices across different teams and projects.
  • Control and govern code signing operations – Define and enforce code signing policies to standardize the code signing process across the organization. Implement RBAC (role-based access control) to regulate access to private keys and mitigate the risk of unauthorized access and theft.
  • Simplify code signing for DevOps – Integrate with DevOps processes and pipelines for automated, consistent, and fast code signing practices throughout the software development lifecycle.
  • Streamline audits and compliance – Maintain audit logs and reports to closely track code signing activities, detect anomalies, and ensure compliance with industry regulations.


Code Signing Certificates

  1. What is a code signing certificate?
  2. What are the different types of code signing certificates?
  3. What is the difference between using public trust vs private trust certificates for code signing?

1. What is a code signing certificate?

A code signing certificate is a type of digital certificate that helps identify and authenticate a software provider to the end users. This certificate is issued by trusted Certificate Authorities (CAs) and includes information, such as the name and location of the organization distributing the software, the public key associated with the organization’s identity, and a timestamp (recording the time of signing). 
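
For illustration, a code signing certificate is distinguished from, say, a TLS server certificate by its Extended Key Usage extension. Here is a minimal sketch using the `pyca/cryptography` package; `is_code_signing_cert` is a hypothetical helper name.

```python
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def is_code_signing_cert(pem_bytes: bytes) -> bool:
    """Check for the Code Signing EKU (OID 1.3.6.1.5.5.7.3.3)."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
    except x509.ExtensionNotFound:
        return False
    return ExtendedKeyUsageOID.CODE_SIGNING in eku.value
```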

2. What are the different types of code signing certificates?

There are different types of code signing certificates based on the level of trust and the intended use. The two main types of publicly trusted code signing certificates are:

  • Standard or Organization Validation (OV) Certificates:

This is the default type of code signing certificate and involves basic validations of the publisher or developer by the CA. To get a standard code signing certificate, software publishers need to meet some basic requirements such as minimum key length, maximum validity period, and time stamping for digital signatures.

  • Extended Validation (EV) Certificates:

EV code signing certificates involve a higher level of validation and vetting of the software publisher by the CA and are usually issued on a hardware token for additional security. To get an EV certificate, apart from the basic requirements of standard certificates, software publishers also need to conform to much more stringent requirements – for example, maintaining private keys in a Hardware Security Module (HSM) that is compliant with FIPS (Federal Information Processing Standards) 140-2 Level 2 or equivalent.

EV code signing certificates build on the existing benefits of standard code signing certificates to offer stronger levels of assurance that the identity of the publisher is correct and has been verified.

3. What is the difference between using public trust vs private trust certificates for code signing?

Public Trust Certificates:

Public trust certificates are issued by well-known and established Certificate Authorities (CAs), such as DigiCert, GlobalSign, and Sectigo, that are widely recognized by most operating systems and browsers. These certificates provide a higher level of trust and assurance to end-users because they are issued by recognized and trusted CAs after stringent verification processes. This is also why public trust certificates generally come at a higher cost, with pricing that varies based on the type of certificate and the level of authentication.

Software signed with public trust certificates is more likely to be trusted by default on various platforms, reducing the likelihood of security warnings for users when installing or running the software. Public trust certificates are suitable for distributing software on the internet, where the users may not have any direct relationship with the software vendor.

Private Trust Certificates:

Private trust certificates are issued by Certificate Authorities that are managed and controlled internally by the organization itself. These CAs are not publicly recognized.

Since private CAs are not publicly recognized, private trust certificates are not trusted by default on external platforms and browsers. Private trust certificates are more suitable for signing and distributing internal applications and software within a controlled environment, such as within an organization. Further, private trust certificates can be more cost-effective compared to public trust certificates, as they don’t carry the same level of reputation and global recognition.

In summary, the main difference lies in the level of trust and the scope of distribution. Public trust certificates provide a higher level of assurance and are recognized by a broader range of platforms and users. Private trust certificates are more suitable for controlled environments where the organization can manage trust settings and where the added cost of public trust might not be necessary. The choice between public and private trust certificates depends on factors such as the intended audience, the level of trust required, and the distribution context of the signed software.


Kubernetes Security Risks and Attack Vectors

  1. Insecure Cluster Configuration: Misconfiguring a Kubernetes cluster’s access controls or permissions can lead to severe security risks. For example, leaving default credentials or weak passwords for cluster components, such as the API server or etcd, can allow unauthorized individuals to gain access and potentially control the cluster. Additionally, inadequate network policies can enable unauthorized communication between containers, potentially compromising the security of sensitive data and services within the cluster.
  2. Vulnerabilities in Container Images: Container images are critical in Kubernetes deployments. However, using outdated or vulnerable images can introduce security risks. Attackers often target known vulnerabilities within container images to gain unauthorized access or execute malicious code. It is essential to regularly update and patch container images to mitigate these risks. Furthermore, downloading images from untrusted or unofficial sources increases the likelihood of introducing malicious code into the cluster, making it crucial to use trusted image repositories.
  3. Insider Threats: Insider threats pose a significant risk to Kubernetes security. Rogue or compromised users who have legitimate access to the cluster can abuse their privileges to access or modify sensitive data, compromise containerized applications, or disrupt cluster operations. Insufficient segregation of duties, weak access controls, and inadequate monitoring can exacerbate these risks. Implementing proper user access management, regular monitoring and auditing, and separating responsibilities within the cluster can help mitigate insider threats.
  4. Pod-to-Pod Communication: Kubernetes orchestrates the communication between pods within a cluster. However, inadequate network segmentation between pods can lead to unauthorized access and lateral movement. A compromised pod may enable an attacker to move laterally across other pods, potentially compromising the entire cluster. Encrypting pod-to-pod communication helps protect sensitive data from eavesdropping and ensures that only authorized pods can communicate with each other (a minimal network-policy sketch follows this list).
  5. Denial-of-Service (DoS) Attacks: Denial-of-Service attacks can disrupt the availability and performance of a Kubernetes cluster. Attackers can launch resource exhaustion attacks, overwhelming the cluster’s capacity and causing service disruptions. Additionally, the Kubernetes control plane, responsible for managing the cluster, can be targeted. By exploiting vulnerabilities in the control plane components, attackers can disrupt cluster operations and compromise the cluster’s integrity. Implementing proper resource management, limiting resource quotas, and employing network-level protections can help mitigate the risks of DoS attacks.
  6. Cluster API and Configuration Stores: The Cluster API and configuration stores, such as etcd, store critical information about the Kubernetes cluster. Weak authentication or access controls for these components can lead to unauthorized changes in the cluster’s configuration. Attackers who gain access to the Cluster API or compromise the configuration stores can manipulate the cluster’s settings, potentially causing widespread damage. Ensuring strong authentication, encrypting communications, and applying proper access controls to these components are essential for maintaining the security of the cluster.
  7. Insecure Secrets Management: Kubernetes provides the Secrets API to manage sensitive information, such as passwords, API keys, or certificates. However, if secrets are stored in plain text within Kubernetes secrets or if weak encryption methods are used, they can be easily compromised. Unauthorized access to secrets can lead to data breaches, unauthorized access to services, or even a complete compromise of the cluster. Implementing proper secrets management practices, such as encrypting secrets at rest and in transit, using strong encryption algorithms, and restricting access to secrets, helps mitigate these risks.
  8. Container Breakouts: Container breakouts occur when an attacker exploits vulnerabilities within container runtimes, such as Docker, to escape the confines of a container and gain unauthorized access to the underlying host or other containers within the same cluster. Inadequate isolation between containers or misconfigurations in container runtime settings can enable these attacks. Implementing proper container isolation mechanisms, regularly updating container runtimes, and following security best practices for container deployments can mitigate container breakout risks.
  9. Software Supply Chain Attacks: Software supply chain attacks involve compromising or manipulating the software supply chain, including container images and third-party dependencies. Attackers may introduce malicious code, backdoors, or vulnerable components into the supply chain, which can then be unknowingly deployed within a Kubernetes cluster. It is crucial to use trusted image registries, perform security checks on container images, and regularly update and patch third-party dependencies to minimize the risks of software supply chain attacks.
  10. Privilege Escalation: Privilege escalation refers to the exploitation of vulnerabilities within Kubernetes components or misconfigurations that allow an attacker to escalate their privileges within the cluster. By gaining higher privileges, attackers can access sensitive resources, compromise other pods or nodes, and perform unauthorized actions. Regularly applying security patches, limiting privileges based on the principle of least privilege, and conducting security assessments can help mitigate privilege escalation risks and ensure a more secure Kubernetes environment.
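
To illustrate the network segmentation point in item 4, here is a minimal sketch, using the official Kubernetes Python client, that applies a default-deny ingress policy to a namespace; the namespace name is a placeholder, and a real cluster would layer specific allow rules on top.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config()

# Default-deny ingress for the (placeholder) "prod" namespace: pods accept
# traffic only when a more specific NetworkPolicy explicitly allows it.
deny_all = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = every pod
        policy_types=["Ingress"],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy("prod", deny_all)
```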


4 C’s of Cloud-Native Security in Kubernetes 

The 4 C’s of Kubernetes Security refer to four important aspects to consider when addressing security in a Kubernetes environment. Here’s a simple explanation of each C:

  1. Cloud: The cloud refers to the underlying infrastructure where Kubernetes clusters are deployed. It is important to ensure the security of the cloud environment by implementing proper access controls, securing network configurations, and employing security measures provided by the cloud provider, such as firewalls and encryption. 
  2. Cluster: The cluster refers to the Kubernetes infrastructure itself, including the control plane and worker nodes. Securing the cluster involves implementing proper access controls, strong authentication mechanisms, and regular updates to address any vulnerabilities. It also includes monitoring and auditing activities within the cluster to detect any suspicious behavior.
  3. Containers: Containers are at the heart of Kubernetes deployments, housing the applications and services. Securing containers involves using trusted container images from reliable sources, regularly updating and patching them to address vulnerabilities, and implementing strong isolation mechanisms to prevent container breakout attacks. Proper management of secrets and sensitive data within containers is also crucial to protect against unauthorized access.
  4. Code: Code refers to the applications and microservices running within the Kubernetes cluster. Secure coding practices, such as input validation, output sanitization, and secure authentication and authorization mechanisms, should be followed when developing applications for Kubernetes. Regular code reviews, vulnerability scanning, and penetration testing help identify and fix any security issues in the code.


By focusing on these four areas – Cloud, Cluster, Containers, and Code – organizations can enhance the security of their Kubernetes environments and mitigate potential risks and vulnerabilities.

Best Practices for Kubernetes Security

  1. Secure Cluster Configuration: Ensure that the cluster is configured with strong security measures. This includes implementing robust authentication and authorization mechanisms, enabling encryption for data in transit and at rest, and enforcing proper network policies to control communication between pods. Regularly review and update the cluster’s configuration to address any security vulnerabilities.
  2. Regular Updates and Patching: Stay up to date with the latest Kubernetes releases and security patches. Regularly update the cluster’s components, including the control plane, worker nodes, and container runtimes, to protect against known vulnerabilities. Implement a process for timely patching to ensure that any security updates are promptly applied to the cluster.
  3. Secure Container Images: Use trusted container images from reputable sources. Regularly scan and update the container images to address any known vulnerabilities. Implement an image verification process to ensure the integrity and authenticity of the images used in the cluster. Avoid running containers with unnecessary privileges and limit access to sensitive host resources.
  4. Efficient Certificate Management: Efficient certificate management is crucial for securing communications with and within the cluster. Generate and manage TLS certificates for secure Ingress traffic, pod-to-pod communications and for the Kubernetes components, such as the API server and etcd, using strong encryption algorithms. Implement proper key management practices, including secure storage and rotation of certificates. Regularly monitor and audit the certificate infrastructure to detect any unauthorized or expired certificates.
  5. Role-Based Access Control (RBAC): Implement RBAC to enforce the least privileged access control within the cluster. Define granular roles and permissions for users and service accounts based on their specific responsibilities. Regularly review and update the RBAC policies to ensure they align with the organization’s security requirements. Monitor and audit RBAC configurations to identify any unauthorized access attempts or misconfigurations.

Additionally, it’s crucial to regularly conduct security assessments, penetration testing, and vulnerability scanning to identify and address any security gaps in the cluster. 


Importance of PKI and TLS Certificates in Kubernetes

Public Key Infrastructure (PKI) is crucial for authentication, encryption, and identity management in Kubernetes. With PKI, digital certificates are used to verify the identity of various components, such as nodes, users, and services within the cluster. Certificates serve as digital identities, enabling secure communication, encryption and establishing trust between different entities. PKI helps prevent unauthorized access to the cluster, ensuring that only trusted entities can interact with the Kubernetes infrastructure and its resources.

When using Kubernetes, network traffic must be secured using TLS certificates. TLS offers trust, data integrity and encryption, preventing unauthorized access to and tampering with sensitive data. TLS certificates secure transactions across the network by encrypting communication routes between nodes, pods, and services. By doing so, the cluster is protected from eavesdropping and interception by hostile threat actors while also ensuring the security and privacy of data and applications shared within the cluster. 

Certificates for Kubernetes Servers: 

  • Kube API server: The Kube API server receives and processes API calls and exposes the HTTPS service that various components and users employ to manage the Kubernetes cluster. It needs TLS certificates to safeguard all communications with its clients over HTTPS.
  • etcd server: A certificate is needed to safeguard the data on the Kubernetes cluster’s etcd server, the database that houses all of the information about the cluster and is accessed by clients such as the Kube API server, external users, and service accounts.
  • Kubelet server: The kubelet is the primary node agent that runs on each node. The API server communicates with the HTTPS endpoints exposed by the kubelet, so the kubelet also needs certificates to serve those endpoints and to authenticate to the Kube API server.

Certificates for Kubernetes Clients: 

  • Admin: To operate the Kubernetes cluster, the administrator needs access to it. To access the cluster by sending HTTPS requests to the Kube API server, the admin must be authenticated using certificates.
  • Kube scheduler: When pods need to be scheduled, the Kube scheduler asks the Kube API server to schedule the pods to the appropriate nodes. The scheduler is therefore a client of the Kube API server and needs certificates to authenticate with it.
  • Kube controller manager: The Kubernetes controller manager embeds the core control loops shipped with Kubernetes. It, too, communicates with the Kube API server as a client and needs certificates to authenticate with it.
  • Kube proxy: Each node in a cluster runs kube-proxy, a network proxy that maintains network rules on each node. These rules allow network sessions inside or outside the cluster to reach your pods. Kube-proxy is also a client of the Kube API server and requires certificate-based authentication.


Certificate Authority (CA) in Kubernetes: 

Every certificate must be signed by a certificate authority (CA), and you must have at least one CA in your Kubernetes cluster. The certificate and key pair owned by the CA are used to sign and validate all other certificates.

Challenges of Managing Certificates in Kubernetes 

Managing digital certificates in Kubernetes can present certain challenges due to the distributed and dynamic nature of the platform. Here are some common challenges:

  1. Certificate Lifecycle Management: Kubernetes deployments involve a large number of components, including nodes, services, and users, each requiring a unique digital certificate. Managing the lifecycle of these certificates, including issuance, renewal, and revocation, can become complex and error-prone without proper tools and processes in place.
  2. Scalability and Automation: As the number of nodes and services in a Kubernetes cluster scales up, managing certificates manually becomes impractical. Ensuring the automated provisioning and renewal of certificates at scale requires robust certificate management solutions that integrate seamlessly with Kubernetes.
  3. Certificate Distribution and Trust: Distributing and maintaining trust across the various components in a Kubernetes cluster can be challenging. Ensuring that each component trusts the appropriate certificate authorities (CAs) and verifying the authenticity of certificates can become cumbersome, especially in large and distributed clusters.
  4. Ephemeral pod volumes: Certificates in ephemeral pod volumes pose challenges for management due to their short-lived and dynamic nature. The misalignment of certificate lifespans with ephemeral volumes makes it difficult to coordinate expiration and renewal processes. Automating certificate management becomes essential to handle the rapid creation and deletion of certificates for each ephemeral pod. Distributing and securely storing private keys associated with these certificates adds complexity. Additionally, ensuring proper certificate revocation when pods are terminated requires careful tracking and coordination. Specialized solutions and integration with Kubernetes orchestration are pivotal to effectively manage certificates in ephemeral pod volumes.
  5. Secure Storage and Access Control: Storing certificates securely is crucial to protect them from unauthorized access or misuse. Implementing proper access controls, such as RBAC (Role-Based Access Control), to restrict certificate management privileges and ensure secure storage solutions are essential for maintaining certificate security.
  6. Visibility and Monitoring: Tracking and monitoring the health and expiration status of certificates across the Kubernetes cluster is vital. Without proper visibility and monitoring tools, it can be difficult to identify expiring certificates, potential vulnerabilities, or issues related to certificate management (a minimal expiry-scan sketch follows this list).

To overcome these challenges, organizations can leverage certificate management solutions designed specifically for Kubernetes environments. These solutions provide automation, scalability, and centralized management of certificates, easing the burden of certificate lifecycle management in Kubernetes deployments.

What is Cert-Manager? 

In Kubernetes, cert-manager is an open-source tool that provides basic management capabilities of digital certificates within a cluster. It helps automate the provisioning, renewal, and revocation of certificates for various Kubernetes resources such as nodes, services, and users.


cert-manager typically integrates with a certificate authority (CA) to obtain and manage certificates from trusted sources. It handles the complexities of certificate lifecycle management, including certificate generation, distribution, and renewal, making it easier for administrators to handle the security aspects of their cluster.

Cert-manager for Kubernetes often provides additional features like secure storage of certificates, integration with Kubernetes APIs for seamless certificate management, and integration with Ingress controllers for automatic TLS termination and certificate provisioning.
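
For a flavor of how this looks in practice, here is a minimal sketch of a cert-manager Certificate resource, expressed as a Python dict and applied with the official Kubernetes client; the names (`demo-tls`, `letsencrypt-prod`) are placeholders, and the referenced issuer must already exist in the cluster.

```python
from kubernetes import client, config

config.load_kube_config()

# Declarative request: cert-manager watches Certificate resources and
# reconciles each one into a ready-to-use TLS Secret ("demo-tls-secret").
certificate = {
    "apiVersion": "cert-manager.io/v1",
    "kind": "Certificate",
    "metadata": {"name": "demo-tls", "namespace": "default"},
    "spec": {
        "secretName": "demo-tls-secret",
        "dnsNames": ["demo.example.com"],
        "issuerRef": {"name": "letsencrypt-prod", "kind": "ClusterIssuer"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="cert-manager.io", version="v1",
    namespace="default", plural="certificates", body=certificate,
)
```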

Relation between Cert-Manager and Kubernetes Services 

The relationship between cert-manager and Kubernetes services is that the cert-manager is responsible for managing the certificates used by Kubernetes services. Here’s how they are related:

  1. Certificate Provisioning: Cert-manager in Kubernetes is responsible for provisioning the necessary certificates for Kubernetes services. It automates the process of obtaining and distributing certificates to the relevant services within the cluster.
  2. Certificate Lifecycle Management: Cert-manager handles the entire lifecycle of certificates used by Kubernetes services. It manages the issuance, renewal, and revocation of certificates, ensuring that they remain up-to-date and valid.
  3. Integration with Kubernetes APIs: Cert-manager integrates with Kubernetes APIs to interact with the cluster and retrieve relevant information about services. It utilizes the Kubernetes API to request and configure certificates for services, ensuring seamless integration.
  4. Secure Communication: Kubernetes services often require TLS certificates to enable secure communication.  Cert-manager plays a crucial role in generating and managing these certificates, ensuring that services can establish secure connections and encrypt their traffic.
  5. Ingress Controllers: Cert-manager often integrates with ingress controllers, which handle incoming traffic to Kubernetes services. Cert-manager can automatically provision TLS certificates for ingress controllers, enabling secure communication with external clients.

Overall, cert-manager and Kubernetes services have a symbiotic relationship, where cert-manager facilitates the secure operation of services by provisioning and managing the necessary certificates required for secure communication within the Kubernetes cluster.

Limitations of Cert-Manager

  1. Complexity: Cert-manager can be complex to set up and configure, especially for users who are new to Kubernetes and managing SSL/TLS certificates. It requires a solid understanding of Kubernetes concepts and resources, as well as the Certificate Authority (CA) infrastructure.
  2. Steep Learning Curve: The learning curve for cert-manager can be steep, as it involves understanding and managing various components such as Issuers, Certificates, and ACME challenges. Users may need to invest time and effort in learning and troubleshooting the tool to use it effectively.
  3. Lack of Robustness: While cert-manager is a widely used tool, it may have occasional stability issues or bugs that can impact its functionality. Users may encounter issues during certificate issuance, renewal, or revocation, which may require troubleshooting and seeking community support.
  4. External Dependencies: Cert-manager relies on external services, such as DNS providers or ACME-based Certificate Authorities, for certificate issuance and renewal (see the sketch after this list). This dependency can introduce additional complexity and potential points of failure into the certificate management process.
  5. Limited Certificate Management Features: Cert-manager focuses primarily on certificate issuance and automation, which means it offers limited functionality for other aspects of certificate management, such as monitoring certificate health, expiration notifications, auditing, uniform policy enforcement, self-service capabilities, integrations with DevOps tools, and comprehensive reporting. Users may need to integrate cert-manager with other tools or build custom solutions to fulfill these requirements.
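
To illustrate the external dependencies in point 4, here is a hedged sketch of an ACME ClusterIssuer that delegates domain validation to a DNS provider. The Let’s Encrypt server URL is that CA’s public production endpoint; the email address, GCP project, and Secret names are illustrative assumptions. If the DNS provider or the ACME CA is unavailable, issuance and renewal stall, which is exactly the failure mode described above.

    # Hypothetical ACME ClusterIssuer using a DNS-01 solver (Google Cloud DNS).
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: admin@example.com          # ACME account contact (assumed)
        privateKeySecretRef:
          name: letsencrypt-account-key   # Secret holding the ACME account key
        solvers:
          - dns01:
              cloudDNS:
                project: my-gcp-project   # assumed GCP project ID
                serviceAccountSecretRef:
                  name: clouddns-sa       # assumed credentials Secret
                  key: key.json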

How does a robust Certificate Lifecycle Management (CLM) solution enhance cert-manager’s functionality?

The primary benefits of using a robust certificate lifecycle management solution over the open-source cert-manager tool are:

  • Enhanced Functionality: A robust certificate lifecycle management solution often offers a broader range of features and capabilities beyond what cert-manager provides. It includes advanced certificate discovery, monitoring, alerting, reporting, and centralized management features that streamline the entire certificate lifecycle, including issuance, provisioning, renewal, and revocation. An end-to-end automated CLM solution standardizes PKI policy and governance, meets regulatory compliance mandates, and enables strong access control.
  • Simplified Setup and Configuration: Unlike cert-manager, which can be complex to set up and configure, a dedicated certificate lifecycle management solution often provides a user-friendly interface and intuitive workflows that simplify the initial setup and ongoing management tasks.
  • Scalability and Performance: A robust certificate lifecycle management solution is designed to handle large-scale certificate deployments and complex environments efficiently. It can offer scalability, high availability, and optimized performance to meet the needs of growing organizations and their certificate management requirements.
  • Vendor Support and Expertise: Opting for an efficient certificate lifecycle management solution often provides access to dedicated vendor support and expertise. This support can be valuable in troubleshooting issues, getting timely assistance, and receiving guidance on best practices for certificate management.
  • Compliance and Security: A comprehensive certificate lifecycle management solution often includes built-in compliance and security features. It offers auditing capabilities, policy enforcement, and integration with security frameworks to ensure certificates are managed in accordance with industry standards and regulatory requirements.
  • Integration Capabilities: A dedicated solution may have better integration capabilities with other tools and systems within an organization’s infrastructure. It can seamlessly integrate with identity and access management (IAM) systems, monitoring tools, and automation frameworks, providing a unified approach to certificate management.
  • Long-term Reliability and Maintenance: A powerful certificate lifecycle management solution is typically backed by a vendor committed to ongoing maintenance, updates, and bug fixes. This ensures that the solution remains reliable, secure, and compatible with evolving industry standards and technologies.

While cert-manager is a popular open-source tool, organizations with more complex certificate management needs or those seeking additional features, scalability, support, and compliance may find a robust certificate lifecycle management solution to be a better fit. 

Why is Kubernetes Important for DevOps?

Kubernetes is important for DevOps because it provides a powerful platform for managing and orchestrating containerized applications. Here’s why:

  1. Simplified Application Management: Kubernetes simplifies the deployment, scaling, and management of applications. It abstracts away the underlying infrastructure complexities, allowing DevOps teams to focus on application logic rather than infrastructure details.
  2. Automation and Efficiency: With Kubernetes, DevOps teams can automate the entire application lifecycle. They can define and manage their infrastructure as code, leveraging declarative configuration files. This automation streamlines processes, reduces manual tasks, and improves efficiency.
  3. Portability and Consistency: Kubernetes enables portability and consistency across different environments. It provides a standardized way to deploy applications, making them runnable on various platforms, such as on-premises data centers or public cloud providers. This flexibility allows for easier migration and reduces vendor lock-in.
  4. Collaboration and DevOps Culture: Kubernetes promotes collaboration between development and operations teams. Its declarative nature and infrastructure as code approach facilitate better communication and alignment between these teams, fostering a DevOps culture of collaboration, continuous integration, and continuous deployment.
  5. Scalability and High Availability: Kubernetes supports the automatic scaling of applications based on demand. It can dynamically scale the number of replicas based on resource utilization, ensuring applications can handle varying workloads. Kubernetes also provides features like load balancing and service discovery, enhancing high availability and fault tolerance.
  6. Container Orchestration: Kubernetes excels at container orchestration, allowing efficient resource utilization. It schedules containers on nodes, optimizes resource allocation, and ensures workload distribution across the cluster. This capability maximizes resource usage, reduces costs, and improves overall performance.
  7. Security: Kubernetes offers built-in security features to protect applications and infrastructure. It integrates with PKI-based (Public Key Infrastructure) certificate management systems, allowing for secure communication between components. Certificates can be used for authentication, encryption, and securing network traffic within the cluster, as the sketch after this list shows.
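
Tying the declarative-configuration and security points together, the hedged sketch below mounts a TLS Secret (for example, the cert-manager-issued demo-tls from earlier) into a Deployment so the workload can serve encrypted traffic. The image name and mount path are illustrative assumptions.

    # Hypothetical Deployment consuming a cert-manager-maintained TLS Secret.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app
      namespace: demo
    spec:
      replicas: 3                        # declarative scaling target
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
            - name: web
              image: nginx:1.25          # assumed container image
              volumeMounts:
                - name: tls
                  mountPath: /etc/tls    # tls.crt and tls.key appear here
                  readOnly: true
          volumes:
            - name: tls
              secret:
                secretName: demo-tls     # Secret kept fresh by cert-manager

Because the Secret is rotated in place by cert-manager, the kubelet eventually propagates the renewed files into the mounted volume, though the serving process may need to reload them to pick up the new certificate.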

Popular Use Cases of Kubernetes:

  • Container Orchestration: Kubernetes is primarily used for container orchestration, managing and automating the deployment, scaling, and management of containerized applications.
  • Microservices Architecture: Kubernetes is ideal for deploying and managing microservices-based applications, allowing each service to be independently scaled and updated.
  • Scalable Web Applications: Kubernetes enables the horizontal scaling of web applications, ensuring they can handle increased traffic and maintain performance during peak times.
  • Continuous Integration/Continuous Deployment (CI/CD): Kubernetes integrates seamlessly with CI/CD pipelines, allowing for automated testing, building, and deploying of applications.
  • Hybrid and Multi-cloud Deployments: Kubernetes facilitates the deployment of applications across hybrid and multi-cloud environments, providing portability and flexibility.
  • Big Data and Analytics: Kubernetes can be used to manage big data workloads built on distributed data processing frameworks such as Apache Spark or Apache Hadoop.
  • Internet of Things (IoT): Kubernetes supports the deployment and management of IoT edge devices, allowing for efficient management and processing of data at the edge.
  • Machine Learning and AI: Kubernetes provides a scalable and flexible infrastructure for deploying and managing machine learning models and AI workloads.
  • High-performance Computing (HPC): Kubernetes can be leveraged in HPC environments to manage large-scale simulations, scientific computing, and data-intensive workloads.

Simplify certificate lifecycle management in Kubernetes and containers with AppViewX