Long-lived authentication tokens are a ticking time bomb. Everyone knows they're a risk, yet convenience ("it was just the easiest way") or technical debt keeps them alive.
Admittedly, they seem harmless and are easy to handle. But they are a nightmare for every IT manager: What happens during offboarding? Is every token really rotated? How do we prove who had access to what, and when, in the next audit? This risk lurks in the shadows until it's too late.
The good news: There's a better way that aligns with modern CI/CD Security Best Practices. Switching to short-lived, dynamic credentials via an OIDC trust between GitLab and AWS eliminates this risk and automates access – all without any manual password juggling.
Author: David Roth
Date: October 23, 2025
Reading time: 11 minutes
The Misery of Long-Lived Tokens
Whether we're publishing a package to NuGet, creating a resource in the cloud, or deploying a container – the fastest way is often to request a token from the target system and store it as a "secret." The core problem: This token is often created without an expiration date and then never touched again. This leads to several problems:
- High Value, Eternal Validity: A stolen token is like a master key that often never gets changed. Attackers potentially have full access to the resource for months or even years.
- Lack of Traceability: Who used the token, when, and for what? With a static token, it's nearly impossible to trace. It's like a shared keycard – you know someone was in the building, but not who.
- Manual Administrative Overhead: Ideally, tokens should be rotated manually on a regular basis. This is error-prone, time-consuming, and quickly "forgotten" in the hustle of daily business.
- Offboarding Reality: When an employee leaves the company or a contract with an external partner ends, the great hunt begins: All potentially compromised tokens must be found and replaced. This is a massive and often incomplete effort.
- Supply-Chain Attacks: Attacks on repositories like NPM have shown just how vulnerable CI/CD pipelines can be. A malicious package can grab supposedly securely stored tokens and send them to the outside world.
The Classic: Kubernetes Deployment with Service Account Tokens
Traditionally, the common way to deploy to Kubernetes was this: You create a long-lived Kubernetes Service Account, give it the necessary rights via RBAC, and pack the corresponding token into GitLab's CI/CD variables. To make things a bit more secure, in the best case separate tokens are created for staging and production, and access is restricted via "Protected Branches."
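For illustration, the classic setup might look roughly like the following manifests. This is a sketch, not a recommendation; names such as ci-deployer are hypothetical, and the built-in edit ClusterRole stands in for whatever RBAC the deployment actually needs.

```yaml
# Hypothetical sketch of the classic, long-lived setup.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: production
---
# Since Kubernetes 1.24, a long-lived token must be requested explicitly
# via a Secret of this type:
apiVersion: v1
kind: Secret
metadata:
  name: ci-deployer-token
  namespace: production
  annotations:
    kubernetes.io/service-account.name: ci-deployer
type: kubernetes.io/service-account-token
---
# RBAC: allow the account to manage workloads in this namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-edit
  namespace: production
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: production
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

The token from ci-deployer-token would then be copied by hand into a GitLab CI/CD variable – exactly the manual, long-lived step that OIDC eliminates.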
This works, but in practice, it's not the best way: Rotating the tokens is tedious, especially with many projects. For every new project, the tokens have to be stored by hand all over again. And if a token is compromised despite all precautions, an attacker potentially has long-term, undetected access to the cluster.
In short: This approach is a relic and a prime example of outdated Kubernetes Secrets Management. Fortunately, there is an elegant, standardized solution to this problem.
The Solution: Passwordless Deployment with OIDC Trust
So, how can we deploy securely from GitLab to Kubernetes without using a single long-lived token? What sounds like magic at first ("Authentication without a key?") is made possible by a widespread standard: OpenID Connect (OIDC).
OIDC is an identity layer on top of the OAuth 2.0 protocol. Anyone who has ever clicked "Sign in with Google" has used OIDC. In that case, Google vouches for your identity to another application. This principle can be wonderfully transferred from human sign-ins to machine-to-machine communication. Luckily, Google isn't the only OIDC Identity Provider: GitLab offers one as well, which opens up a multitude of possibilities for us.
Step 1: Making AWS Aware of GitLab as an OIDC Provider
How do we get AWS to trust a GitLab deploy-job, and only our specific job? First, we need to teach AWS who GitLab even is.
GitLab acts as a public OIDC Provider that can issue signed ID-tokens. Our first task is to enable AWS to verify the authenticity of these tokens. In AWS, this is done by setting up an Identity Provider in the IAM service.
So, we create a new Identity Provider of type "OpenID Connect" in AWS IAM. We set the Provider URL to https://gitlab.com and define an "Audience," e.g., gitlab-aws-deploy. This audience is like a magic word that ensures the token was actually issued for this specific purpose.
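In Infrastructure-as-Code, this provider could be sketched as a CloudFormation resource. A minimal sketch under assumptions: the logical name is hypothetical, and depending on your setup a ThumbprintList may also need to be supplied.

```yaml
# Hypothetical CloudFormation sketch of the IAM OIDC provider for GitLab.
GitLabOidcProvider:
  Type: AWS::IAM::OIDCProvider
  Properties:
    Url: https://gitlab.com
    ClientIdList:
      - gitlab-aws-deploy   # the "Audience" our tokens must carry
    # ThumbprintList may also be required, depending on your account setup
```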
The first hurdle is cleared. AWS and GitLab are now aware of each other, and AWS knows how to validate OIDC tokens issued by GitLab. However, no access has been granted yet. We've merely cracked open the door for authentication.
Step 2: The Trust Relationship, the Heart of it All
Next, we need an IAM role in AWS that our GitLab pipeline is allowed to assume. This role contains the actual permissions. Let's call it gitlab-deployer. The special thing about this role is its Trust Relationship. It defines who or what is allowed to assume this role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/gitlab.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "gitlab.com:aud": "gitlab-aws-deploy"
        },
        "StringLike": {
          "gitlab.com:sub": "project_path:my_group/my-fancy_project:ref_type:branch:ref:production"
        }
      }
    }
  ]
}
Let's take a closer look:
- Principal: Allows identities from the OIDC provider gitlab.com.
- Action: Permits the AssumeRoleWithWebIdentity action, i.e., assuming the role using a web identity token.
- Condition: This is the most important part. Here we define the crucial rules:
  - The token must be issued for our audience (gitlab-aws-deploy).
  - The sub (Subject) claim of the token must match a specific pattern. In this example, only a job from the project path my_group/my-fancy_project and the production branch is allowed to assume this role.
This sub information is written directly into the token by GitLab and protected by GitLab's signature, so it is tamper-proof. An attacker cannot simply claim to be from our project.
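To build intuition for how the StringLike condition behaves, here is a tiny, hypothetical shell sketch: the shell's own case-pattern matching stands in for AWS's server-side wildcard comparison of the sub claim (AWS evaluates this internally; the function name match_sub is an illustration, not an AWS API).

```shell
# Hypothetical illustration only: AWS performs this comparison server-side.
# The trust policy's StringLike pattern for the sub claim:
pattern="project_path:my_group/my-fancy_project:ref_type:branch:ref:production"

match_sub() {
  # Shell case-patterns use the same * wildcard semantics as StringLike
  case "$1" in
    $pattern) echo "allowed" ;;
    *)        echo "denied"  ;;
  esac
}

# A token from our production branch is accepted...
ours=$(match_sub "project_path:my_group/my-fancy_project:ref_type:branch:ref:production")
# ...while a token from a foreign project is rejected.
theirs=$(match_sub "project_path:evil_group/my-fancy_project:ref_type:branch:ref:production")
echo "ours=$ours theirs=$theirs"
```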
A Word on GitLab Security: Trust is Good, Control is Better
The OIDC trust relationship with AWS is only one side of the coin. For the whole system to be truly secure, we must ensure that the rules are also clearly defined within GitLab. After all, what good is the most secure door if everyone can get their hands on the key?
The fine-grained Condition in our IAM role, which restricts access to a specific branch like production, is the linchpin. But this also means you have to watch this branch in GitLab like a hawk. Here is your non-negotiable homework in GitLab:
- Protected Branches: The production or main branch must be configured as a "Protected Branch." This allows you to define exactly who can push or merge to it. Ideally, no one can push directly – everything must go through a Merge Request.
- Granular Roles for Environments: In practice, you should create a separate IAM role for each environment. Set up a production-deployer role that can only be assumed by the production branch, and a separate stage-deployer role tied to the stage branch. This ensures that a job from a feature branch can never accidentally deploy to production.
- Merge Request Policies & Approvals: Define clear rules for Merge Requests. Enforce code reviews from at least one or two team members before code can be merged into a protected branch. The four-eyes principle is one of the most effective measures against faulty or malicious code.
Only when the processes within GitLab itself are configured cleanly and securely can the OIDC trust mechanism unleash its full security potential. This ensures that only verified and approved code finds its way into the infrastructure.
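The per-environment role split could be reflected in the .gitlab-ci.yml roughly like this. A sketch under assumptions: job names, the deploy.sh script, and the role ARNs are hypothetical.

```yaml
# Hypothetical sketch: one job per environment, each bound to its own IAM role.
deploy_stage:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: "gitlab-aws-deploy"
  variables:
    EKS_ROLE_ARN: "arn:aws:iam::ACCOUNT_ID:role/stage-deployer"
  rules:
    - if: '$CI_COMMIT_BRANCH == "stage"'
  script:
    - ./deploy.sh

deploy_production:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: "gitlab-aws-deploy"
  variables:
    EKS_ROLE_ARN: "arn:aws:iam::ACCOUNT_ID:role/production-deployer"
  rules:
    - if: '$CI_COMMIT_BRANCH == "production"'
  script:
    - ./deploy.sh
```

Combined with the branch-specific trust policies on the AWS side, a feature-branch job never even gets a token that the production role would accept.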
Step 3: Requesting the OIDC Token in the GitLab Job
Now it gets real in our .gitlab-ci.yml. GitLab makes it super easy for us to generate an OIDC token:
deploy_job:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: "gitlab-aws-deploy"
  script:
    - echo "Magic happens here"
With this small id_tokens block, a valid, signed JWT (JSON Web Token) is made available in the GITLAB_OIDC_TOKEN pipeline variable. This token contains claims like the audience (aud) and the crucial subject (sub), e.g.: "sub": "project_path:my_group/my-fancy_project:ref_type:branch:ref:production".
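A JWT's claims can be inspected without any special tooling: the token is three base64url segments joined by dots, and the second segment holds the claims. The following sketch builds a synthetic, unsigned token purely for illustration – a real GITLAB_OIDC_TOKEN is signed by GitLab and base64url-encoded, so decoding it may need slightly different handling.

```shell
# Synthetic token for illustration; real tokens are signed by GitLab.
payload='{"aud":"gitlab-aws-deploy","sub":"project_path:my_group/my-fancy_project:ref_type:branch:ref:production"}'
token="header.$(printf '%s' "$payload" | base64 -w0).signature"

# Segment 2 of a JWT carries the claims
claims=$(printf '%s' "$token" | cut -d. -f2 | base64 -d)
echo "$claims"
```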
In the pipeline, we now exchange this GitLab token for a temporary AWS token:
export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" \
  $(aws sts assume-role-with-web-identity \
      --role-arn "${EKS_ROLE_ARN}" \
      --role-session-name "GitLabRunner-${CI_PIPELINE_ID}" \
      --web-identity-token "${GITLAB_OIDC_TOKEN}" \
      --duration-seconds 900 \
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
      --output text))
The result: We have AWS credentials that are valid for 15 minutes and are bound to our gitlab-deployer role. For an EKS deployment, the eks:DescribeCluster permission is often sufficient for this role.
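The permissions policy attached to gitlab-deployer can stay correspondingly minimal. As a sketch – the cluster ARN is an assumption and should be scoped to your actual cluster:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "arn:aws:eks:eu-central-1:ACCOUNT_ID:cluster/my-cluster"
    }
  ]
}
```

eks:DescribeCluster is what aws eks update-kubeconfig needs; the actual Kubernetes permissions come from the EKS access configuration in the next step.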
Step 4: The Final Handshake from IAM to Kubernetes
So now we have temporary AWS access. But how do we get into our Kubernetes cluster with it? Amazon EKS offers an elegant way to do this via EKS Access Entries. This allows us to assign permissions in the Kubernetes cluster directly to IAM roles, without having to manually maintain the old aws-auth-configmap.
We create an Access Entry for our gitlab-deployer IAM role and assign it a suitable Access Policy, for example, AmazonEKSEditPolicy for a specific namespace like production.
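In CloudFormation, such an Access Entry could be sketched like this (logical name, cluster name, and account ID are assumptions):

```yaml
# Hypothetical sketch of an EKS Access Entry for the deploy role.
GitLabDeployerAccess:
  Type: AWS::EKS::AccessEntry
  Properties:
    ClusterName: my-cluster
    PrincipalArn: arn:aws:iam::ACCOUNT_ID:role/gitlab-deployer
    AccessPolicies:
      - PolicyArn: arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy
        AccessScope:
          Type: namespace
          Namespaces:
            - production
```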
# Set up config for kubectl:
aws eks update-kubeconfig --name "my-cluster" --region "eu-central-1"
kubectl config set-context --current --namespace="production"
# Ship it:
kubectl apply -f deployment.yaml
What's actually happening in the background?
The aws eks update-kubeconfig command is more than just a simple config entry. It sets up kubectl to execute the aws eks get-token command with every request. This process is the crucial translator between the AWS and Kubernetes worlds:
- Signed Request: kubectl calls aws eks get-token. This tool uses the temporary IAM credentials from the pipeline to create a request to the AWS Security Token Service (STS). This request is cryptographically signed and contains the identity of our IAM role (gitlab-deployer).
- Validation by EKS: The signed request is sent to the Kubernetes API server of the EKS cluster. A special authentication webhook on the EKS server receives this request and validates the signature against AWS STS. This provides tamper-proof confirmation: "Yes, this request really comes from the IAM role gitlab-deployer."
- Mapping to a Kubernetes Identity: After the AWS identity is confirmed, Kubernetes needs to know who this "visitor" is and what they are allowed to do. There are two ways:
  - The classic way: the aws-auth ConfigMap: In older EKS setups, a ConfigMap named aws-auth exists in the kube-system namespace. It's used to manually define which IAM role is mapped to which Kubernetes user or group. This is functional, but error-prone and scales poorly.
  - The modern way: EKS Access Entries: This is the newer, more elegant solution from AWS. Instead of a ConfigMap in the cluster, the mappings are managed directly via the EKS API. We create an "Access Entry" for our gitlab-deployer IAM role and assign it a predefined Kubernetes permission (e.g., AmazonEKSEditPolicy) or a specific Kubernetes group directly. This is cleaner, more manageable via Infrastructure-as-Code (IaC), and is the recommended standard.
Through this seamless process, our temporary IAM role is translated into a fully-fledged, authorized Kubernetes identity. All without storing a single long-lived secret.
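Concretely, aws eks update-kubeconfig writes an exec stanza like the following into the kubeconfig, so every kubectl call fetches a fresh, short-lived token. This is a simplified sketch of the generated file; the real output contains a few more arguments.

```yaml
# Simplified excerpt of the kubeconfig generated by `aws eks update-kubeconfig`
users:
  - name: arn:aws:eks:eu-central-1:ACCOUNT_ID:cluster/my-cluster
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args:
          - eks
          - get-token
          - --cluster-name
          - my-cluster
          - --region
          - eu-central-1
```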
Conclusion: OIDC as the Key to a Passwordless Future
This walkthrough shows how powerful OIDC is as a tool for secure, passwordless authentication. The best part: This approach doesn't just work for AWS and Kubernetes. Whether it's about retrieving secrets from a Vault or publishing packages to NuGet via Trusted Publishing – the possibilities are numerous.
It's time for some spring cleaning in our system configurations. Let's banish long-lived tokens for good and bet on a more secure, automated future!
Take Your Deployments to the Next Level!
Feeling inspired but unsure where to start? Transitioning to modern, secure CI/CD processes can be a challenge. But don't worry, that's exactly where we come in.
As experts in DevOps consulting, we have successfully implemented transformations just like this for numerous clients. We help you avoid the pitfalls, select the right technologies, and implement modern CI/CD Security Best Practices to make pipelines not only more secure but also more efficient.
Let's have a no-obligation chat about how we can get your DevOps processes fit for the future.

