Cloud Storage Testing Guide

How to Test S3 Presigned URL Upload with Playwright: Complete 2026 Guide

A scenario-by-scenario walkthrough of testing S3 presigned URL uploads with Playwright. CORS configuration, progress events, multipart upload, URL expiry handling, content-type validation, error retry logic, and the pitfalls that break real upload test suites.

Amazon S3 stores over 100 trillion objects and regularly peaks at tens of millions of requests per second, according to the AWS re:Invent 2023 keynote.


S3 Presigned URL Upload Flow

  • Browser → Your API: request upload URL
  • Your API → AWS S3: generate presigned PUT URL
  • AWS S3 → Your API: return signed URL + fields
  • Your API → Browser: return presigned URL
  • Browser → AWS S3: PUT file directly to S3
  • AWS S3 → Browser: 200 OK (ETag header)
  • Browser → Your API: confirm upload complete
  • Your API → Browser: return CDN URL (served via CloudFront)

1. Why Testing S3 Presigned URL Uploads Is Harder Than It Looks

S3 presigned URL uploads are the standard pattern for letting browsers upload files directly to Amazon S3 without routing binary data through your backend. Your API generates a time-limited signed URL, hands it to the browser, and the browser PUTs the file straight to S3. It sounds simple, but testing it with Playwright surfaces at least six structural problems that do not exist with normal form submissions.

First, the upload is a cross-origin request. The browser sends a PUT (or POST for presigned POST policies) to a completely different domain, typically your-bucket.s3.amazonaws.com or a custom domain. That means CORS must be configured correctly on the S3 bucket, and a misconfigured CORS policy is the single most common cause of upload failures in production. Second, presigned URLs expire. A URL generated with a 15-minute TTL becomes invalid if the user takes too long selecting a file, and your frontend must detect the expiry and request a fresh URL. Third, large files require multipart uploads, which split the file into chunks, upload each chunk with its own presigned URL, and finalize the upload with a separate API call. A test that only covers single PUT uploads will miss the entire multipart code path.

Fourth, content-type validation is enforced at the S3 level. If the presigned URL was generated with a specific content-type condition and the browser sends a different one, S3 returns a 403 Forbidden with an XML error body that most frontend code does not parse correctly. Fifth, progress events for upload tracking use the XMLHttpRequest upload.onprogress event or the newer fetch with ReadableStream, neither of which Playwright can observe directly without intercepting network traffic. Sixth, retry logic for transient failures (network drops, S3 503 SlowDown responses) must be tested under controlled failure conditions, which requires route interception to simulate S3 errors.

S3 Presigned Upload: Where Things Break

  • Request URL: API generates the signed URL
  • CORS Preflight: browser sends OPTIONS to the S3 bucket
  • PUT File: binary upload to S3
  • Validate: content-type and size check
  • Store Object: S3 writes to the bucket
  • Confirm: API saves the upload metadata

Multipart Upload Sub-Flow

  • Initiate: CreateMultipartUpload
  • Chunk 1: upload part 1
  • Chunk 2: upload part 2
  • Chunk N: upload part N
  • Complete: CompleteMultipartUpload
  • Abort (on failure): AbortMultipartUpload

A thorough S3 presigned upload test suite must cover all six of these surfaces. The sections below walk through each scenario with runnable Playwright TypeScript code you can paste directly into your project.

2. Setting Up a Reliable Test Environment

Before writing scenarios, you need a test bucket with proper CORS configuration, a backend endpoint that generates presigned URLs, and Playwright configured to handle cross-origin uploads. You can use a real S3 bucket in a dedicated test AWS account, or use LocalStack for fully offline testing. Both approaches work; the key difference is that LocalStack does not enforce CORS headers the same way, so you should run CORS-specific tests against a real S3 bucket at least once.

S3 Upload Test Environment Checklist

  • Create a dedicated S3 test bucket with versioning enabled
  • Configure CORS to allow PUT from localhost and CI preview domains
  • Set bucket lifecycle rules to auto-delete test objects after 24 hours
  • Create an IAM role with scoped PutObject and GetObject permissions
  • Set up a backend endpoint that returns presigned URLs for testing
  • Install LocalStack (optional) for offline multipart upload tests
  • Configure Playwright to intercept S3 requests for controlled failure tests
  • Create sample test files: 1KB text, 5MB image, 100MB binary for multipart

S3 Bucket CORS Configuration

cors-config.json
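A minimal CORS policy sketch for the test bucket. The origins listed are placeholders for your own app and CI preview domains, not values from this guide. Note that ExposeHeaders must include ETag, or the browser will hide it from JavaScript even though the PUT succeeds.

```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT", "POST", "GET"],
    "AllowedOrigins": [
      "http://localhost:3000",
      "https://*.vercel.app"
    ],
    "ExposeHeaders": ["ETag", "x-amz-request-id"],
    "MaxAgeSeconds": 3000
  }
]
```

Apply it with `aws s3api put-bucket-cors --bucket <your-test-bucket> --cors-configuration file://cors-config.json`.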

Environment Variables

.env.test
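A sketch of the test environment file. Every variable name here is an assumption; adapt them to whatever your app and backend actually read.

```
APP_BASE_URL=http://localhost:3000
AWS_REGION=us-east-1
S3_TEST_BUCKET=my-app-uploads-test
PRESIGN_EXPIRY_SECONDS=900
```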

Backend Presigned URL Endpoint

Your backend needs an endpoint that accepts a filename and content-type, validates them, and returns a presigned PUT URL. This is the endpoint your frontend calls before each upload. In tests, you can also call it directly to generate URLs for API-level assertions.

api/upload/presign.ts
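An illustrative sketch of the validation half of such an endpoint. The allowed types, key scheme, and 900-second TTL are assumptions, not this guide's actual implementation; the signing call itself (shown as a comment) would use `getSignedUrl` from `@aws-sdk/s3-request-presigner`, which needs credentials and is therefore left out of the runnable part.

```typescript
// Content types this hypothetical app accepts for upload.
const ALLOWED_TYPES = new Set(['image/png', 'image/jpeg', 'application/pdf']);

interface PresignRequest {
  filename: string;
  contentType: string;
}

interface PresignParams {
  key: string;          // S3 object key the URL will be signed for
  contentType: string;  // locked into the signature as a condition
  expiresIn: number;    // TTL in seconds
}

function validatePresignRequest(req: PresignRequest): PresignParams {
  if (!ALLOWED_TYPES.has(req.contentType)) {
    throw new Error(`content-type not allowed: ${req.contentType}`);
  }
  // Sanitize the client-supplied filename before using it in the object key.
  const safe = req.filename.replace(/[^\w.-]/g, '_').slice(0, 128);
  return {
    key: `uploads/${Date.now()}-${safe}`,
    contentType: req.contentType,
    expiresIn: 900, // 15 minutes
  };
}

// The endpoint would then sign and return the URL, roughly:
// const url = await getSignedUrl(
//   s3Client,
//   new PutObjectCommand({ Bucket, Key: params.key, ContentType: params.contentType }),
//   { expiresIn: params.expiresIn },
// );
```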

Playwright Configuration for S3 Uploads

playwright.config.ts
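A sketch of the relevant config options, assuming the `APP_BASE_URL` variable from the environment file above. The timeout bump matters: large-file uploads routinely blow past Playwright's 30-second default.

```typescript
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  timeout: 60_000, // large uploads need more than the 30s default
  use: {
    baseURL: process.env.APP_BASE_URL ?? 'http://localhost:3000',
    trace: 'retain-on-failure', // traces capture the S3 request/response pairs
  },
  expect: { timeout: 10_000 },
});
```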
Verify S3 Test Bucket Setup

3. Scenario: Basic Single-File Upload Happy Path

The fundamental upload scenario is the happy path: the user selects a file, the frontend requests a presigned URL from your API, the browser PUTs the file directly to S3, and your app confirms the upload succeeded. This is the smoke test. If this breaks, no user can upload anything. The test needs to verify three things beyond just “no error appeared”: the file actually landed in S3, the correct content-type was set on the object, and your backend recorded the upload metadata.

1

Basic Single-File Upload Happy Path

Straightforward

Goal

Starting from the upload page, select a file using the file chooser, wait for the presigned URL request and the subsequent PUT to S3, and verify the upload completed successfully with the correct metadata.

Preconditions

  • App running at APP_BASE_URL with authenticated session
  • S3 test bucket exists with correct CORS configuration
  • A test file (test-image.png, ~50KB) in the test fixtures directory

Playwright Implementation

s3-upload.spec.ts

What to Assert Beyond the UI

  • The presigned URL response contains a valid S3 URL and object key
  • The PUT request to S3 uses the correct content-type header
  • The S3 response returns HTTP 200 with a non-empty ETag header
  • Your backend confirmation endpoint recorded the object key and size

Basic Upload: Playwright vs Assrt

import { test, expect } from '@playwright/test';
import path from 'path';

test('basic upload: happy path', async ({ page }) => {
  await page.goto('/upload');

  // Register every waiter before triggering the upload so no
  // request or response can slip past between awaits.
  const presignResponse = page.waitForResponse(
    (resp) => resp.url().includes('/api/upload/presign')
      && resp.status() === 200
  );
  const s3PutRequest = page.waitForRequest(
    (req) => req.url().includes('.s3.') && req.method() === 'PUT'
  );
  const s3PutResponse = page.waitForResponse(
    (resp) => resp.url().includes('.s3.')
      && resp.request().method() === 'PUT'
  );

  const fileChooserPromise = page.waitForEvent('filechooser');
  await page.getByRole('button', { name: /upload/i }).click();
  const fileChooser = await fileChooserPromise;
  await fileChooser.setFiles(
    path.join(__dirname, 'fixtures', 'test-image.png')
  );

  const presignResp = await presignResponse;
  const data = await presignResp.json();
  expect(data.url).toContain('.s3.');

  const putReq = await s3PutRequest;
  expect(putReq.method()).toBe('PUT');

  const s3Response = await s3PutResponse;
  expect(s3Response.status()).toBe(200);
  expect(s3Response.headers()['etag']).toBeTruthy();

  await expect(
    page.getByText(/uploaded successfully/i)
  ).toBeVisible({ timeout: 10_000 });
});

4. Scenario: Content-Type Validation and Rejection

Presigned URLs are typically generated with a content-type condition. If your API generates a URL for image/png and the browser sends application/octet-stream, S3 rejects the PUT with a 403 Forbidden response. The error body is XML, not JSON, which confuses most frontend error handling code. Your test needs to verify two things: that the frontend validates file types before requesting a presigned URL (client-side validation), and that S3 rejection is handled gracefully if a mismatched content-type somehow reaches S3 (server-side validation).

The client-side validation is straightforward to test: select a disallowed file type and assert the UI shows an error without making any network request. The server-side validation requires route interception to simulate what happens when the content-type condition fails at the S3 level.

2

Content-Type Validation and Rejection

Moderate

Playwright Implementation: Client-Side Rejection

s3-content-type.spec.ts

Playwright Implementation: S3-Level 403 Rejection

s3-content-type-403.spec.ts

Content-Type Validation: Playwright vs Assrt

test('rejects disallowed file types', async ({ page }) => {
  await page.goto('/upload');
  const fileChooserPromise = page.waitForEvent('filechooser');
  await page.getByRole('button', { name: /upload/i }).click();
  const fileChooser = await fileChooserPromise;
  await fileChooser.setFiles({
    name: 'malware.exe',
    mimeType: 'application/x-msdownload',
    buffer: Buffer.from('fake-exe-content'),
  });
  await expect(
    page.getByText(/file type not allowed/i)
  ).toBeVisible();
});

test('handles S3 403 on mismatch', async ({ page }) => {
  await page.goto('/upload');
  await page.route('**/*.s3.*/**', async (route) => {
    if (route.request().method() === 'PUT') {
      await route.fulfill({
        status: 403,
        contentType: 'application/xml',
        // Browsers only surface the status to JS when CORS headers are present
        headers: { 'Access-Control-Allow-Origin': '*' },
        body: '<Error><Code>AccessDenied</Code></Error>',
      });
    } else {
      await route.continue();
    }
  });
  // ... trigger upload and assert error shown
  await expect(
    page.getByText(/upload failed/i)
  ).toBeVisible({ timeout: 5_000 });
});


5. Scenario: Upload Progress Events and Cancellation

Most upload UIs display a progress bar or percentage indicator. The browser fires XMLHttpRequest.upload.onprogress events as bytes transfer to S3, and your frontend converts those events into a visual progress indicator. Testing this with Playwright is tricky because Playwright's network interception does not directly expose upload progress events. You need a different approach: intercept the S3 PUT request to slow it down artificially, then assert that the UI shows intermediate progress states before the upload completes.
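Since Playwright can only observe the resulting UI, it helps to pin down exactly what the frontend computes from those events. A hedged sketch of that reducer (the names are illustrative, not from this guide's app):

```typescript
interface UploadProgress {
  loaded: number; // bytes sent so far
  total: number;  // total bytes in the file
}

// Percentage a progress bar would render for one onprogress event.
function toPercent(e: UploadProgress): number {
  if (e.total <= 0) return 0;
  return Math.min(100, Math.round((e.loaded / e.total) * 100));
}

// Deduplicated sequence of states the bar would actually display;
// this is what "shows intermediate progress states" means concretely.
function progressStates(events: UploadProgress[]): number[] {
  const states: number[] = [];
  for (const e of events) {
    const pct = toPercent(e);
    if (states[states.length - 1] !== pct) states.push(pct);
  }
  return states;
}
```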

Cancellation is the companion scenario. When a user clicks “Cancel” during an upload, the frontend should abort the XMLHttpRequest, and the partially uploaded object should not persist in S3 (or should be cleaned up). For multipart uploads, cancellation requires calling AbortMultipartUpload to avoid leaving orphaned parts that incur storage charges.

3

Upload Progress Events and Cancellation

Complex

Playwright Implementation: Progress Tracking

s3-progress.spec.ts

Playwright Implementation: Upload Cancellation

s3-cancel.spec.ts

6. Scenario: Multipart Upload for Large Files

Files larger than 5MB (or whatever threshold your application sets) should use S3 multipart upload. The flow is fundamentally different from a single PUT: your backend initiates a multipart upload with S3, generates a presigned URL for each chunk, the browser uploads each chunk in parallel or sequentially, and finally your backend calls CompleteMultipartUpload with the ETags from each part. If any chunk fails, the entire upload must be retried or aborted.

Testing multipart uploads requires verifying four things: the correct number of parts were uploaded, each part returned a valid ETag, the completion call succeeded, and the final assembled object in S3 has the correct total size. You also need to test the abort path to confirm that cancelling a multipart upload cleans up all uploaded parts.

4

Multipart Upload for Large Files

Complex

Playwright Implementation

s3-multipart.spec.ts

Testing Multipart Abort

s3-multipart-abort.spec.ts

7. Scenario: Presigned URL Expiry and Refresh

Presigned URLs have a configurable TTL, commonly set between 5 and 60 minutes. If a user selects a file, walks away for coffee, and comes back to click “Upload” after the URL has expired, S3 returns a 403 with an ExpiredToken or AccessDenied error. A well-designed frontend detects this, requests a fresh presigned URL, and retries the upload transparently. Testing this requires simulating time-based expiry without actually waiting 15 minutes.

The approach is to intercept the first S3 PUT and return a 403 expiry error, then let the second PUT through normally. This mimics the exact behavior the frontend sees when a presigned URL expires. Your test verifies that the frontend handles the 403, requests a new presigned URL, retries the PUT with the new URL, and succeeds without showing an error to the user (or shows a brief “refreshing” indicator).

5

Presigned URL Expiry and Refresh

Moderate

Playwright Implementation

s3-expiry.spec.ts

Presigned URL Expiry: Playwright vs Assrt

test('presigned URL expiry: auto-refreshes', async ({ page }) => {
  await page.goto('/upload');
  let putAttempt = 0;
  let presignCallCount = 0;
  page.on('request', (req) => {
    if (req.url().includes('/api/upload/presign')) {
      presignCallCount++;
    }
  });
  await page.route('**/*.s3.*/**', async (route) => {
    if (route.request().method() === 'PUT') {
      putAttempt++;
      if (putAttempt === 1) {
        await route.fulfill({
          status: 403,
          contentType: 'application/xml',
          body: '<Error><Code>AccessDenied</Code></Error>',
        });
      } else {
        await route.fulfill({
          status: 200,
          headers: {
            ETag: '"fresh123"',
            // Without these, the browser hides the fulfilled response from JS
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Expose-Headers': 'ETag',
          },
        });
      }
    } else { await route.continue(); }
  });
  // ... select file, assert success
  expect(presignCallCount).toBe(2);
  expect(putAttempt).toBe(2);
});

8. Scenario: Network Error Retry Logic

S3 occasionally returns 503 SlowDown responses under heavy load, and network interruptions during large uploads are inevitable on mobile connections. A production upload component should implement exponential backoff retry for transient errors (5xx, network failures) while treating 4xx errors as permanent failures. Testing this requires precise control over which requests fail and which succeed.

Playwright's route interception is perfect for this. You can fail the first N PUT requests with 503, then let subsequent requests through, and assert that the frontend retried the correct number of times without showing a permanent error to the user. You should also test the case where retries are exhausted and the frontend gives up with a meaningful error message.
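The policy described above can be sketched as a small helper. The function names and default timings are illustrative, not a prescribed implementation; the point is the transient/permanent split and the exponential backoff between attempts.

```typescript
// Treat network failures (status null) and 5xx as transient.
function isTransient(status: number | null): boolean {
  return status === null || status >= 500;
}

// Exponential backoff capped at maxMs: 500ms, 1s, 2s, 4s, ...
function backoffMs(attempt: number, baseMs = 500, maxMs = 8000): number {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}

// doPut returns an HTTP status, or throws on a network-level failure.
async function putWithRetry(
  doPut: () => Promise<number>,
  maxAttempts = 4,
): Promise<number> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    let status: number | null;
    try { status = await doPut(); } catch { status = null; }
    if (status !== null && status < 400) return status;      // success
    if (!isTransient(status)) {
      throw new Error(`permanent failure: HTTP ${status}`);  // 4xx: give up
    }
    if (attempt < maxAttempts - 1) {
      await new Promise((r) => setTimeout(r, backoffMs(attempt)));
    }
  }
  throw new Error('retries exhausted');
}
```

A route-interception test can then fail the first two PUTs with 503 and assert that exactly three attempts were made before success.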

6

Network Error Retry Logic

Complex

Playwright Implementation: Successful Retry

s3-retry.spec.ts

Playwright Implementation: Exhausted Retries

s3-retry-exhausted.spec.ts

9. Common Pitfalls That Break S3 Upload Test Suites

After working with dozens of teams that test S3 presigned uploads, these are the recurring failures that waste the most debugging time. Each one comes from real GitHub issues and Stack Overflow threads.

CORS Misconfiguration in CI

The most common failure is a CORS error that only appears in CI but never locally. This happens because your S3 bucket CORS configuration allows http://localhost:3000 but not the CI preview URL (which is typically a random subdomain like deploy-preview-42.vercel.app). The browser blocks the PUT request entirely and your test sees a generic “network error” with no S3 response body. Fix this by adding a wildcard origin pattern or dynamically updating the CORS config in CI setup. This is documented in the AWS S3 CORS documentation.

Missing ETag in Exposed Headers

S3 returns an ETag header on successful PUT, and your frontend likely reads it to confirm the upload or pass it to your completion endpoint. But if your CORS configuration does not include ETag in the ExposeHeaders list, the browser hides it from JavaScript. The PUT succeeds (HTTP 200), the file lands in S3, but your frontend code reads undefined for the ETag and throws an error. This silent failure is responsible for a significant share of "upload works in Postman but fails in the browser" reports on Stack Overflow.

Content-Length Mismatch on Multipart Parts

When slicing a file into chunks with file.slice(start, end), the last chunk is often smaller than the part size. If your presigned URL was generated with a fixed Content-Length condition matching the standard part size, the final chunk upload fails with a 403. The fix is to either omit the Content-Length condition from the presigned URL or calculate the exact size of each chunk and generate a matching URL.

Race Condition Between Presign and Upload

Some frontends request the presigned URL and immediately start the PUT in parallel, assuming the URL will be ready. Under slow network conditions or when the API server is under load, the upload can start before the presigned URL response arrives, causing the PUT to use an undefined URL. In your test, this manifests as intermittent failures that only happen under simulated slow conditions. Always await the presigned URL response before starting the PUT.
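A minimal sketch of the correct ordering, with `getPresignedUrl` and `putToS3` as hypothetical stand-ins for your app's own functions:

```typescript
// The PUT must not start until the presign response has resolved.
async function uploadFile(
  getPresignedUrl: () => Promise<{ url: string }>,
  putToS3: (url: string) => Promise<void>,
): Promise<void> {
  const { url } = await getPresignedUrl(); // await the URL first
  await putToS3(url);                      // only then start the PUT
}
```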

S3 Upload Pitfalls Checklist

  • CORS config includes CI preview URLs, not just localhost
  • ExposeHeaders includes ETag and x-amz-request-id
  • Last multipart chunk size matches presigned URL conditions
  • Presigned URL response is awaited before starting PUT
  • Content-Type in presigned URL matches the actual file MIME type
  • Multipart abort cleans up orphaned parts on cancellation
  • Retry logic distinguishes 4xx (permanent) from 5xx (transient)
  • URL expiry is tested with route interception, not actual time delay
S3 Upload Test Suite Run

10. Writing These Scenarios in Plain English with Assrt

Every scenario above is 30 to 70 lines of Playwright TypeScript, heavy on route interception boilerplate and request tracking callbacks. Multiply that by the six scenarios you actually need and you have several hundred lines of test code that becomes brittle the moment your upload component changes its selector structure, renames a button, or switches from XMLHttpRequest to the Fetch API. Assrt lets you describe the upload scenario in plain English and handles the route interception, request tracking, and assertion wiring automatically.

The multipart upload scenario from Section 6 shows this clearly. In raw Playwright, you need to track individual PUT requests, count parts, wait for initiation and completion API calls, and handle timeouts for large file transfers. In Assrt, you describe the intent and let the framework manage the interception layer.

scenarios/s3-upload-suite.assrt

Assrt compiles each scenario block into the same Playwright TypeScript you saw in the preceding sections, committed to your repo as real tests you can read, run, and modify. When your upload component switches from a “Choose File” button to a drag-and-drop zone, or when your API endpoint path changes from /api/upload/presign to /api/v2/upload/presign, Assrt detects the failure, analyzes the new DOM and network patterns, and opens a pull request with updated locators and route patterns. Your scenario files stay untouched.

Start with the basic single-file upload happy path. Once it is green in your CI, add the content-type validation scenario, then the multipart upload, then the expiry refresh, then the retry logic. Within a single afternoon you can have complete S3 presigned upload coverage that most production applications never achieve by hand.

Related Guides

Ready to automate your testing?

Assrt discovers test scenarios, writes Playwright tests from plain English, and self-heals when your UI changes.

$ npm install @assrt/sdk