TL;DR: Needed to give teams scoped kubectl access to EKS without handing out long-lived credentials. Used Hoop as the access gateway, IRSA for pod-level identity, and IAM role chaining to map users to Kubernetes RBAC roles. Every session is audited, every permission is scoped, and nobody gets more access than they need.
## The Problem: kubectl for Everyone (Safely)
At some point, you’re going to have people who need to run kubectl commands against your clusters. Developers debugging pods, on-call engineers tailing logs, platform folks checking node status. The question isn’t if they need access — it’s how you give it to them without creating a security nightmare.
I didn’t want to hand out AWS credentials. I didn’t want shared kubeconfigs floating around in Slack channels. And I definitely didn’t want to be the bottleneck every time someone needed to check why a pod was crashlooping.
What I wanted was something like:
- Role-based: admin sees everything, developer sees their namespace, viewer gets read-only
- Auditable: full session recording of who ran what and when
- No long-lived credentials: everything temporary, everything scoped
- Self-service: people pick their connection, they get access, done
Hoop gave me the access gateway. The rest was wiring up IAM and RBAC to make it all work together.
## The Architecture
Here’s the flow from a user typing a kubectl command to it actually hitting the cluster:
```mermaid
flowchart LR
    User["User"] --> Hoop["Hoop Gateway"]
    Hoop --> IRSA["IRSA<br/>(hoop-agent role)"]
    IRSA --> AssumeRole["Assume Role<br/>(eks-admin, eks-viewer, etc.)"]
    AssumeRole --> EKSAPI["EKS API<br/>(aws-auth / Access Entries)"]
    EKSAPI --> RBAC["K8s RBAC<br/>(Roles + RoleBindings)"]

    style IRSA fill:#339af0,stroke:#1971c2,color:#fff
    style AssumeRole fill:#339af0,stroke:#1971c2,color:#fff
    style RBAC fill:#51cf66,stroke:#2f9e44
```
The key insight is role chaining. The Hoop pod uses IRSA to get its own identity, then assumes a specific IAM role based on which connection the user picked. That IAM role maps to a Kubernetes user via aws-auth or EKS Access Entries, and RBAC takes it from there.
## Step 1: Define EKS RBAC Roles in Terraform
First, create IAM roles that represent each permission level. Each role needs two trust relationships — one for IAM users/roles in the account (for direct access), and one for the Hoop agent’s service account via IRSA.
```hcl
locals {
  hoop_agent_sa_namespace = "hoop"
  hoop_agent_sa_name      = "hoopgateway"
}

# Platform Admin - Full cluster access
resource "aws_iam_role" "eks_platform_admin" {
  name = "${terraform.workspace}-eks-platform-admin"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Action    = "sts:AssumeRole"
        Principal = { AWS = "arn:aws:iam::${var.account_id}:root" }
      },
      {
        Effect    = "Allow"
        Action    = "sts:AssumeRoleWithWebIdentity"
        Principal = { Federated = module.eks.oidc_provider_arn }
        Condition = {
          StringEquals = {
            "${replace(module.eks.cluster_oidc_issuer_url, "https://", "")}:sub" = "system:serviceaccount:${local.hoop_agent_sa_namespace}:${local.hoop_agent_sa_name}"
            "${replace(module.eks.cluster_oidc_issuer_url, "https://", "")}:aud" = "sts.amazonaws.com"
          }
        }
      }
    ]
  })
}

# Same pattern for: eks_cluster_viewer, eks_admin, eks_developer, eks_viewer
```
I ended up with five roles per environment: platform-admin, cluster-viewer, admin, developer, and viewer. The first two are cluster-scoped, the last three are namespace-scoped. That covers pretty much every access pattern we’ve needed.
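Rather than copy-pasting the trust policy four more times, the remaining roles can be stamped out with `for_each`. This is a sketch under the naming scheme above; the `eks_access` resource name and the reuse of the platform-admin trust policy are my own shorthand, not the original config:

```hcl
locals {
  # The cluster-viewer and namespace-scoped roles share the same trust policy
  eks_access_roles = toset(["cluster-viewer", "admin", "developer", "viewer"])
}

resource "aws_iam_role" "eks_access" {
  for_each = local.eks_access_roles

  name = "${terraform.workspace}-eks-${each.key}"

  # All five roles trust the same account root and the same Hoop service
  # account via IRSA, so the platform-admin trust policy can be reused.
  assume_role_policy = aws_iam_role.eks_platform_admin.assume_role_policy
}
```

Individual roles are then referenced as `aws_iam_role.eks_access["developer"].arn`.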
```mermaid
flowchart TB
    subgraph IAM["IAM Roles (per environment)"]
        PA["eks-platform-admin<br/><i>cluster-wide</i>"]
        CV["eks-cluster-viewer<br/><i>cluster-wide, read-only</i>"]
        AD["eks-admin<br/><i>namespace-scoped</i>"]
        DEV["eks-developer<br/><i>namespace-scoped</i>"]
        VW["eks-viewer<br/><i>namespace-scoped, read-only</i>"]
    end

    subgraph K8s["Kubernetes Users (via aws-auth / Access Entries)"]
        UPA["platform-admin"]
        UCV["cluster-viewer"]
        UAD["admin"]
        UDEV["developer"]
        UVW["viewer"]
    end

    PA --> UPA
    CV --> UCV
    AD --> UAD
    DEV --> UDEV
    VW --> UVW

    style IAM fill:#339af0,stroke:#1971c2,color:#fff
    style K8s fill:#51cf66,stroke:#2f9e44
```
## Step 2: Create the Hoop Agent IAM Role (IRSA)
The Hoop pod needs its own IAM role — this is the starting point of the role chain. It needs permission to assume the EKS RBAC roles and describe the cluster.
```hcl
resource "aws_iam_policy" "hoop_agent_eks_policy" {
  name = "${terraform.workspace}-hoop-agent-eks-policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = ["sts:AssumeRole"]
        Resource = [
          aws_iam_role.eks_platform_admin.arn,
          aws_iam_role.eks_cluster_viewer.arn,
          aws_iam_role.eks_admin.arn,
          aws_iam_role.eks_developer.arn,
          aws_iam_role.eks_viewer.arn
        ]
      },
      {
        Effect   = "Allow"
        Action   = ["eks:DescribeCluster"]
        Resource = [module.eks.cluster_arn]
      }
    ]
  })
}

module "hoop_agent_iam" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
  version = "5.16.0"

  create_role      = true
  role_name        = "${terraform.workspace}-hoop-agent"
  provider_url     = replace(module.eks.cluster_oidc_issuer_url, "https://", "")
  role_policy_arns = [aws_iam_policy.hoop_agent_eks_policy.arn]

  oidc_fully_qualified_subjects = [
    "system:serviceaccount:${local.hoop_agent_sa_namespace}:${local.hoop_agent_sa_name}"
  ]
}
```
## Step 3: Map IAM Roles to Kubernetes Identities
This is where IAM roles become Kubernetes users. EKS gives you two ways to do this: the legacy aws-auth ConfigMap or the newer EKS Access Entries API. Either works — pick the one that fits your setup.
### Option A: aws-auth ConfigMap (Legacy)
The traditional approach. You manage a ConfigMap in kube-system that maps IAM ARNs to Kubernetes usernames:
```hcl
module "eks" {
  # ...
  manage_aws_auth_configmap = true

  aws_auth_roles = [
    {
      rolearn  = aws_iam_role.eks_platform_admin.arn
      username = "platform-admin"
      groups   = ["system:authenticated"]
    },
    {
      rolearn  = aws_iam_role.eks_cluster_viewer.arn
      username = "cluster-viewer"
      groups   = ["system:authenticated"]
    },
    {
      rolearn  = aws_iam_role.eks_admin.arn
      username = "admin"
      groups   = ["system:authenticated"]
    },
    {
      rolearn  = aws_iam_role.eks_developer.arn
      username = "developer"
      groups   = ["system:authenticated"]
    },
    {
      rolearn  = aws_iam_role.eks_viewer.arn
      username = "viewer"
      groups   = ["system:authenticated"]
    }
  ]
}
```
This works fine, but it has some well-known downsides: it’s a single ConfigMap that can be accidentally deleted or corrupted, there’s no API-level audit trail for changes, and if you mess it up you can lock yourself out of the cluster.
### Option B: EKS Access Entries (Recommended)
AWS introduced Access Entries as a first-class API for managing cluster authentication. It’s managed through the EKS API rather than a ConfigMap inside the cluster, which means better auditability via CloudTrail and no risk of accidental lockout.
With the terraform-aws-modules/eks/aws module (v20+), access entries are a first-class config block:
```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"
  # ...

  authentication_mode                      = "API_AND_CONFIG_MAP" # or "API" to go all-in
  enable_cluster_creator_admin_permissions = true

  access_entries = {
    platform-admin = {
      principal_arn     = aws_iam_role.eks_platform_admin.arn
      kubernetes_groups = []
      policy_associations = {
        admin = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
    cluster-viewer = {
      principal_arn     = aws_iam_role.eks_cluster_viewer.arn
      kubernetes_groups = []
      policy_associations = {
        viewer = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
          access_scope = {
            type = "cluster"
          }
        }
      }
    }
    admin = {
      principal_arn     = aws_iam_role.eks_admin.arn
      kubernetes_groups = []
      policy_associations = {
        admin = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy"
          access_scope = {
            namespaces = ["myapp"]
            type       = "namespace"
          }
        }
      }
    }
    developer = {
      principal_arn     = aws_iam_role.eks_developer.arn
      kubernetes_groups = []
      policy_associations = {
        edit = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy"
          access_scope = {
            namespaces = ["myapp"]
            type       = "namespace"
          }
        }
      }
    }
    viewer = {
      principal_arn     = aws_iam_role.eks_viewer.arn
      kubernetes_groups = []
      policy_associations = {
        viewer = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
          access_scope = {
            namespaces = ["myapp"]
            type       = "namespace"
          }
        }
      }
    }
  }
}
```
Warning: The access policies used above (`AmazonEKSClusterAdminPolicy`, `AmazonEKSAdminPolicy`, `AmazonEKSEditPolicy`, `AmazonEKSViewPolicy`) are AWS-managed and their permissions may change over time. Before applying these to your cluster, review what each policy actually grants by checking the AWS documentation. Make sure the permissions align with what you intend for each role — especially for the admin and edit policies, which can be broader than you’d expect. When in doubt, use custom Kubernetes Roles instead for tighter control.
The nice thing about Access Entries is that you can use these AWS-managed access policies with built-in namespace scoping via `access_scope`. This means you can handle authorization directly in the access entry without needing separate Kubernetes Roles and RoleBindings for common patterns. If you need more granular permissions, you can still layer custom RBAC on top.
If you’re starting fresh, go with Access Entries. If you’re on an existing cluster with aws-auth, you can migrate by setting `authentication_mode` to `API_AND_CONFIG_MAP` first, adding your Access Entries, verifying everything works, then switching to `API` once you’re confident.
Note: With either approach, I’m deliberately keeping permissions scoped tightly. Platform admin gets cluster-wide access, but namespace-scoped roles only see their assigned namespaces. The identity mapping handles authentication — RBAC (whether via access policies or custom K8s roles) handles authorization.
## Step 4: Kubernetes RBAC (aws-auth only)
If you’re using Access Entries with policy associations (Option B above), you can skip this step. The AWS-managed access policies handle authorization at the EKS API level — no in-cluster Roles or RoleBindings needed. Namespace scoping, read-only vs admin: it’s all handled by the `access_scope` and `policy_arn` in your access entry config.
If you went with aws-auth (Option A), you’ll need to create the actual Kubernetes Roles and RoleBindings yourself, since aws-auth only handles authentication (mapping IAM → K8s user), not authorization. I built a small Helm chart that takes a values file per environment:
```yaml
# values.yaml
users:
  - name: platform-admin
    clusterRoles:
      - platform-admin
  - name: admin
    namespacedRoles:
      - namespace: myapp
        role: admin
      - namespace: argocd
        role: admin
  - name: developer
    namespacedRoles:
      - namespace: myapp
        role: developer
  - name: viewer
    namespacedRoles:
      - namespace: myapp
        role: viewer
```
The chart generates the appropriate ClusterRoleBindings and RoleBindings from this config. Nothing fancy — just templating that saves me from writing dozens of RBAC manifests by hand.
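For a sense of how small that templating is, here is a minimal sketch of the RoleBinding template; it is not the actual chart, and the file name is illustrative:

```yaml
# templates/rolebindings.yaml -- renders one RoleBinding per user/namespace pair
{{- range $user := .Values.users }}
{{- range $binding := $user.namespacedRoles }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ $user.name }}-{{ $binding.role }}
  namespace: {{ $binding.namespace }}
subjects:
  - kind: User
    name: {{ $user.name }}
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: {{ $binding.role }}
  apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
```

A near-identical template handles the `clusterRoles` list with ClusterRoleBindings.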
## Step 5: Build the Kubeconfigs
Here’s where the role chaining actually happens. Each kubeconfig targets a specific role and includes the IRSA environment variables so it works from inside the Hoop pod.
```yaml
apiVersion: v1
kind: Config
clusters:
  - name: <cluster-name>
    cluster:
      server: https://<eks-endpoint>.<region>.eks.amazonaws.com
      certificate-authority-data: <base64-encoded-ca-cert>
contexts:
  - name: <cluster-name>-admin
    context:
      cluster: <cluster-name>
      user: <cluster-name>-admin
current-context: <cluster-name>-admin
users:
  - name: <cluster-name>-admin
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args:
          - eks
          - get-token
          - --cluster-name
          - <cluster-name>
          - --role-arn
          - arn:aws:iam::<account-id>:role/<cluster-name>-eks-admin
        env:
          - name: AWS_ROLE_ARN
            value: "arn:aws:iam::<account-id>:role/<cluster-name>-hoop-agent"
          - name: AWS_WEB_IDENTITY_TOKEN_FILE
            value: "/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
```
The magic is in those env variables. When `aws eks get-token` runs, it first picks up the IRSA token from the mounted service account, assumes the hoop-agent role, then chains into the `--role-arn` target. Two hops, all temporary credentials.
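What kubectl actually receives from that exec plugin is an `ExecCredential` object. Its shape looks roughly like this (values illustrative, token abbreviated):

```json
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "spec": {},
  "status": {
    "expirationTimestamp": "2024-01-01T00:15:00Z",
    "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8..."
  }
}
```

The `expirationTimestamp` is why nothing long-lived ever exists here: kubectl re-invokes the plugin once the token lapses.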
I created one kubeconfig per role per environment:
| File | Role | Scope |
|---|---|---|
| `production-myapp-admin.kubeconfig` | admin | `myapp` namespace (full) |
| `production-myapp-viewer.kubeconfig` | viewer | `myapp` namespace (read-only) |
| `staging-myapp-developer.kubeconfig` | developer | `myapp` namespace (dev access) |
| `production-platform-admin.kubeconfig` | platform-admin | cluster-wide |
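Writing these files by hand gets tedious. A small script can stamp them out from a template; this is a sketch under an assumed convention, where `kubeconfig.template` and its `__ROLE__` / `__CLUSTER__` placeholders are my own invention, not part of the setup above:

```shell
# A stand-in template, inlined so the sketch is self-contained.
# In practice this would be the full kubeconfig from Step 5 with placeholders.
cat > kubeconfig.template <<'EOF'
current-context: __CLUSTER__-__ROLE__
EOF

# Generate one kubeconfig per role by substituting the placeholders.
CLUSTER=production
for role in platform-admin admin developer viewer; do
  sed -e "s/__ROLE__/${role}/g" \
      -e "s/__CLUSTER__/${CLUSTER}/g" \
      kubeconfig.template > "${CLUSTER}-myapp-${role}.kubeconfig"
done
```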
## Step 6: Mount Kubeconfigs to the Hoop Pod

All kubeconfig files need to end up at `/home/hoop/.kube/` in the Hoop agent pod. I used External Secrets Operator with AWS Secrets Manager for this — keeps everything centralized and auto-synced.
```mermaid
flowchart LR
    SM["AWS Secrets Manager<br/><i>base64-encoded kubeconfigs</i>"] --> ESO["External Secrets<br/>Operator"]
    ESO --> K8S["K8s Secret<br/><i>hoop-kubeconfigs</i>"]
    K8S --> VOL["Volume Mount<br/>/home/hoop/.kube/"]
    VOL --> HP["Hoop Agent Pod"]

    style SM fill:#ff922b,stroke:#e8590c,color:#fff
    style ESO fill:#339af0,stroke:#1971c2,color:#fff
    style K8S fill:#845ef7,stroke:#7048e8,color:#fff
    style HP fill:#51cf66,stroke:#2f9e44
```
### Storing in AWS Secrets Manager

One gotcha: kubeconfigs need to be base64-encoded before storing in Secrets Manager.
```shell
cat production-myapp-admin.kubeconfig | base64 -w 0 > production-myapp-admin.kubeconfig.b64

aws secretsmanager create-secret \
  --name production-hoop-kubeconfigs \
  --secret-string '{
    "platform-admin": "<base64-encoded-kubeconfig>",
    "platform-viewer": "<base64-encoded-kubeconfig>",
    "ns-admin": "<base64-encoded-kubeconfig>",
    "ns-developer": "<base64-encoded-kubeconfig>",
    "ns-viewer": "<base64-encoded-kubeconfig>",
    "HOOP_KEY": "<hoop-grpc-key>"
  }'
```
The `HOOP_KEY` is the gRPC API key the Hoop agent uses to connect to the gateway — I stash it in the same secret for convenience.
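Rather than pasting base64 blobs into that JSON by hand, you can assemble the payload with `jq`. A sketch, assuming GNU `base64` and abbreviated to two entries for illustration:

```shell
# Stand-in files so the sketch runs end to end; use your real kubeconfigs.
echo "kind: Config" > production-myapp-admin.kubeconfig
echo "kind: Config" > production-myapp-viewer.kubeconfig

# Build the Secrets Manager payload from local kubeconfig files.
# base64 -w 0 disables line wrapping (GNU coreutils).
payload=$(jq -n \
  --arg admin "$(base64 -w 0 < production-myapp-admin.kubeconfig)" \
  --arg viewer "$(base64 -w 0 < production-myapp-viewer.kubeconfig)" \
  '{"ns-admin": $admin, "ns-viewer": $viewer}')
echo "$payload"
```

Pass `$payload` as the `--secret-string` value in the `create-secret` call above.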
### ExternalSecret to Decode and Mount
The ExternalSecret pulls from Secrets Manager, decodes the base64, and creates a Kubernetes Secret:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: hoop-kubeconfigs
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets
    kind: ClusterSecretStore
  target:
    name: hoop-kubeconfigs
    creationPolicy: Owner
    template:
      engineVersion: v2
      data:
        platform-admin.kubeconfig: "{{ .platformadmin | b64dec }}"
        ns-admin.kubeconfig: "{{ .nsadmin | b64dec }}"
        ns-developer.kubeconfig: "{{ .nsdeveloper | b64dec }}"
        ns-viewer.kubeconfig: "{{ .nsviewer | b64dec }}"
  data:
    - secretKey: platformadmin
      remoteRef:
        key: production-hoop-kubeconfigs
        property: platform-admin
    - secretKey: nsadmin
      remoteRef:
        key: production-hoop-kubeconfigs
        property: ns-admin
    - secretKey: nsdeveloper
      remoteRef:
        key: production-hoop-kubeconfigs
        property: ns-developer
    - secretKey: nsviewer
      remoteRef:
        key: production-hoop-kubeconfigs
        property: ns-viewer
```
Then mount it into the Hoop agent:
```yaml
# hoop-agent values.yaml
extraVolumes:
  - name: kubeconfigs
    secret:
      secretName: hoop-kubeconfigs

extraVolumeMounts:
  - name: kubeconfigs
    mountPath: /home/hoop/.kube
    readOnly: true
```
Don’t forget to annotate the Hoop service account with the IRSA role:
```yaml
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<cluster-name>-hoop-agent
```
## Step 7: Create Hoop Connections

Last piece — create a Hoop connection for each kubeconfig. The trick is using `xargs -L 1 kubectl` as the command, which lets users type raw kubectl subcommands without the `kubectl` prefix:
```shell
# Production - admin access
hoop admin create connection production-myapp-admin \
  --agent default \
  --type command-line \
  --command "xargs -L 1 kubectl" \
  --env "KUBECONFIG=/home/hoop/.kube/ns-admin.kubeconfig"

# Production - viewer access
hoop admin create connection production-myapp-viewer \
  --agent default \
  --type command-line \
  --command "xargs -L 1 kubectl" \
  --env "KUBECONFIG=/home/hoop/.kube/ns-viewer.kubeconfig"
```
Now users just pick a connection in Hoop and type:
```
get pods -n myapp
describe pod my-pod -n myapp
logs my-pod -n myapp
exec -it my-pod -n myapp -- /bin/sh
```
No more wrestling with kubeconfig files, AWS credentials, and role assumptions. They don’t even need `kubectl` installed locally.
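If the `xargs -L 1 kubectl` trick looks opaque, you can see exactly what it does locally by substituting `echo kubectl` for `kubectl`:

```shell
# xargs -L 1 runs the command once per input line, appending that line's
# words as arguments -- each line the user types becomes one kubectl call.
printf 'get pods -n myapp\nlogs my-pod -n myapp\n' | xargs -L 1 echo kubectl
# prints:
#   kubectl get pods -n myapp
#   kubectl logs my-pod -n myapp
```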
## Going Further: Runbooks for Locked-Down Access
Sometimes even viewer access is more than you want to hand out. Maybe you have an on-call engineer who only needs to restart a specific deployment, or a support team that should be able to tail logs but nothing else. Giving them a full kubectl passthrough — even scoped to a namespace — is overkill.
That’s where Hoop runbooks come in. Instead of a freeform kubectl connection, you create a connection that runs a specific, pre-defined command:
```shell
# Restart a specific deployment
hoop admin create connection production-myapp-restart \
  --agent default \
  --type command-line \
  --command "kubectl rollout restart deployment/myapp -n myapp" \
  --env "KUBECONFIG=/home/hoop/.kube/ns-admin.kubeconfig"

# Tail logs for a specific service
hoop admin create connection production-myapp-logs \
  --agent default \
  --type command-line \
  --command "kubectl logs -f deployment/myapp -n myapp --tail=100" \
  --env "KUBECONFIG=/home/hoop/.kube/ns-viewer.kubeconfig"

# Check pod status
hoop admin create connection production-myapp-status \
  --agent default \
  --type command-line \
  --command "kubectl get pods -n myapp -o wide" \
  --env "KUBECONFIG=/home/hoop/.kube/ns-viewer.kubeconfig"
```
The user clicks a button, the command runs, they see the output. No free-form input, no chance of running something they shouldn’t. You can even pair these with Hoop’s approval workflows — so restarting a production deployment requires a second person to approve it before the command executes.
This layered approach lets you give teams exactly the level of access they need: full kubectl for platform engineers, namespace-scoped passthrough for developers, and single-command runbooks for everyone else.
## How It All Fits Together
```mermaid
sequenceDiagram
    participant U as User
    participant H as Hoop Gateway
    participant P as Hoop Agent Pod
    participant STS as AWS STS
    participant EKS as EKS API
    participant K8s as K8s RBAC

    U->>H: "get pods -n myapp"
    H->>P: Forward command
    P->>P: Read kubeconfig<br/>(IRSA token + role ARN)
    P->>STS: AssumeRoleWithWebIdentity<br/>(hoop-agent role)
    STS-->>P: Temporary credentials
    P->>STS: AssumeRole<br/>(eks-admin role)
    STS-->>P: Scoped credentials
    P->>EKS: get-token
    EKS-->>P: K8s token (user: admin)
    P->>K8s: kubectl get pods -n myapp
    K8s-->>P: Pod list
    P-->>H: Output
    H-->>U: Results + audit log
```
Let me walk through what actually happens when someone runs `get pods`:

- User opens Hoop and selects `production-myapp-admin`
- The Hoop agent reads the kubeconfig at `/home/hoop/.kube/ns-admin.kubeconfig`
- The IRSA token is read from `/var/run/secrets/eks.amazonaws.com/serviceaccount/token`
- First hop: `aws eks get-token` uses the token to assume the `production-hoop-agent` role
- Second hop: it chains into `production-eks-admin` via `--role-arn`
- EKS validates the role and maps it to the K8s user `admin` (via `aws-auth` or Access Entries)
- RBAC evaluates — user `admin` has a RoleBinding in the `myapp` namespace, so access is granted
- Hoop records the entire session for audit
All credentials are temporary (the chained STS credentials expire automatically, and the token `aws eks get-token` returns is only valid for about 15 minutes). Nothing is stored, nothing persists.
## Troubleshooting
A few things that tripped me up along the way:
### “Forbidden” errors
Usually a missing piece in the chain. Check in order:
- The IAM role is mapped to a K8s user — check `aws-auth` (`kubectl get configmap aws-auth -n kube-system -o yaml`) or Access Entries (`aws eks list-access-entries --cluster-name <cluster>`)
- A RoleBinding exists in the target namespace: `kubectl get rolebindings -n myapp`
- The Role actually grants what you think it does
### “AssumeRoleWithWebIdentity access denied”
IRSA trust policy issues. Verify:
- The trust policy on the target role includes the service account
- The pod annotation matches the IRSA role ARN
- The token file actually exists: `kubectl exec -it <hoop-pod> -- ls -la /var/run/secrets/eks.amazonaws.com/serviceaccount/`
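A quick way to eyeball the `sub` condition without clicking through the console is to filter the trust policy with `jq`. The `trust.json` fixture below is inlined so the sketch is runnable; in practice you would fetch the real document with something like `aws iam get-role --role-name <role> --query Role.AssumeRolePolicyDocument`:

```shell
# Inline sample of a decoded trust policy; fetch yours from IAM instead.
cat > trust.json <<'EOF'
{"Statement":[{"Effect":"Allow","Action":"sts:AssumeRoleWithWebIdentity",
 "Condition":{"StringEquals":{
   "oidc.eks.example.amazonaws.com/id/ABC123:sub":
     "system:serviceaccount:hoop:hoopgateway"}}}]}
EOF

# Print every :sub condition -- this must exactly match the namespace and
# service account name of the Hoop agent pod.
jq -r '.Statement[].Condition.StringEquals // {}
       | to_entries[]
       | select(.key | endswith(":sub"))
       | .value' trust.json
```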
### “Could not assume role”

The hoop-agent policy might not include the target role in its `sts:AssumeRole` resources. Or the target role’s trust policy doesn’t allow the hoop-agent role.
### Connection not working at all

Start from the bottom — verify the kubeconfig is mounted and usable:

```shell
kubectl exec -it <hoop-pod> -- ls -la /home/hoop/.kube/
kubectl exec -it <hoop-pod> -- kubectl --kubeconfig=/home/hoop/.kube/ns-admin.kubeconfig get pods -n myapp
```
## Security Considerations
A few things that made this setup feel production-ready:
- Least privilege everywhere: each role only gets the permissions it needs, scoped to specific namespaces
- Full audit trail: Hoop records every session — who connected, what they ran, what they saw
- No long-lived credentials: everything runs on temporary STS credentials and short-lived EKS auth tokens (roughly 15 minutes)
- Environment isolation: each environment gets its own set of roles, no cross-contamination
- Namespace boundaries: developers can’t accidentally `kubectl delete` something in a namespace they shouldn’t be in
## Wrapping Up
Setting this up took some effort — there are a lot of moving pieces between IAM, IRSA, aws-auth, RBAC, and Hoop. But once it’s wired together, it just works. People get the access they need, scoped to exactly what they should see, with every command logged.
The best part? When someone asks “can I get kubectl access to production?”, the answer isn’t a complicated runbook anymore. It’s “pick the right connection in Hoop.”