DevOps

How to Auto-Deploy a Static Site to S3 with GitHub Actions

Mark Borden
March 17, 2026
5 min read

If you're deploying a static site manually, you've probably run into these problems: wrong AWS profile, forgetting to deploy after a commit, or accidentally syncing draft files to production. Here's how to fix all three with a single GitHub Actions workflow.


Why Bother Automating This

Manual deploys are fine when you're shipping once a month. But once you're making frequent updates — blog posts, copy tweaks, case study edits — the friction adds up fast.

Here's what tends to go wrong with manual deploys:

  • Wrong AWS profile. If you have multiple AWS profiles, running aws s3 sync with the wrong one either fails silently or deploys to the wrong bucket.
  • Forgetting to deploy. You commit, push, and assume the site was updated. It wasn't.
  • Syncing draft files. Local files that aren't ready for production — draft blog posts, test data, config files — end up on the live site.

The fix is simple: push to main, and the site deploys itself. No manual steps, no room for error.


The GitHub Actions Workflow

The entire workflow lives in a single file at .github/workflows/deploy.yml:

.github/workflows/deploy.yml
name: Deploy to S3
on:
  push:
    branches: [main]
  schedule:
    # Daily at 9 AM EST (14:00 UTC) for scheduled posts
    - cron: '0 14 * * *'

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build exclude args from schedule
        id: excludes
        run: |
          # Static excludes that always apply
          EXCLUDES="--delete --exclude '.git/*' ..."

          # Read publish-schedule.json
          # Exclude posts whose publishDate is still in the future
          TODAY=$(date -u +%Y-%m-%d)
          while IFS= read -r entry; do
            PUB_DATE=$(echo "$entry" | jq -r '.publishDate')
            POST_PATH=$(echo "$entry" | jq -r '.path')
            if [[ "$PUB_DATE" > "$TODAY" ]]; then
              EXCLUDES="$EXCLUDES --exclude '$POST_PATH'"
            fi
          done < <(jq -c '.[]' publish-schedule.json)
          echo "args=$EXCLUDES" >> "$GITHUB_OUTPUT"

      - name: Sync to S3
        uses: jakejarvis/s3-sync-action@v0.5.1
        with:
          args: ${{ steps.excludes.outputs.args }}
        env:
          AWS_S3_BUCKET: your-bucket-name
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: us-east-1

Let me break down what's happening:

  1. Triggers: The workflow runs on every push to main and on a daily cron schedule. GitHub cron runs in UTC, so 14:00 UTC is 9 AM EST (10 AM Eastern during daylight saving time). Push deploys immediately; the cron picks up any scheduled posts that are due.
  2. Checkout: actions/checkout@v4 pulls the repo into the runner.
  3. Exclude args: A shell step reads publish-schedule.json and builds the list of --exclude flags, adding one for every post whose publish date is still in the future.
  4. S3 Sync: jakejarvis/s3-sync-action wraps the AWS CLI's s3 sync command. It syncs the repo contents to the S3 bucket, deleting files from S3 that no longer exist in the repo (subject to the exclusions).

Why jakejarvis/s3-sync-action?

You could install the AWS CLI yourself and run aws s3 sync directly. This action just wraps that in a clean Docker container with the credentials pre-configured. Less boilerplate, same result.


Setting Up AWS Credentials

The workflow needs AWS access to write to your S3 bucket. You store these as GitHub repository secrets — they're encrypted and never exposed in logs.

Here's the setup:

  1. Go to your GitHub repo → Settings → Secrets and variables → Actions
  2. Click New repository secret
  3. Add AWS_ACCESS_KEY_ID with your IAM user's access key
  4. Add AWS_SECRET_ACCESS_KEY with the corresponding secret key

IAM Best Practice

Create a dedicated IAM user with only s3:PutObject, s3:DeleteObject, s3:ListBucket, and s3:GetObject permissions on your specific bucket. Don't use your root account keys or an admin user. Least privilege matters, even for a personal site.
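As a sketch, that scoped policy could look like the following (the bucket ARN is a placeholder; note that s3:ListBucket attaches to the bucket itself, while the object actions attach to its contents via /*):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DeployObjects",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    },
    {
      "Sid": "DeployList",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::your-bucket-name"
    }
  ]
}
```

If you ever rotate the keys, you only have to update the two repository secrets; the workflow file never changes.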

The workflow references these secrets with ${{ secrets.AWS_ACCESS_KEY_ID }} syntax. GitHub injects them at runtime — they never appear in your workflow file or logs.


Excluding Files You Don't Want Deployed

The --delete flag is critical — it removes files from S3 that no longer exist in your repo. But you need to be careful. Some files should never be synced, and some files on S3 shouldn't be deleted.

Here's what I exclude and why:

Exclude Patterns
# Dev/build files - not needed on the live site
--exclude '.git/*'
--exclude 'node_modules/*'
--exclude 'cypress/*'
--exclude 'package*.json'
--exclude '.gitignore'
--exclude '.github/*'

# Claude/AI artifacts - development only
--exclude '*.jsonl'
--exclude '.claude*'

# Draft content - not ready for production
--exclude 'blog_post_content/*'
--exclude 'benchmarks.csv'

# Existing S3 content - don't delete other apps
--exclude 'other-app/*'

That last one is important. If a subdirectory on S3 (here, other-app/) hosts a separate app that doesn't exist in the git repo, --delete would wipe it out on the next sync. The exclude tells s3 sync to leave it alone.

Draft Blog Posts

Draft posts live in the repo but stay excluded from deploy until their scheduled publish date arrives. More on that in the next section.


Scheduled Publishing (Like WordPress)

One limitation of static sites is that there's no server-side scheduler — but GitHub Actions can fill that role.

The trick is a simple JSON file at the root of the repo:

publish-schedule.json
[
  {
    "path": "blog/my-draft-post/*",
    "publishDate": "2026-03-17",
    "title": "My Draft Post"
  },
  {
    "path": "blog/another-post/*",
    "publishDate": "2026-03-24",
    "title": "Another Post"
  }
]

The deploy workflow reads this file before every sync. For each entry, it compares publishDate against today's date. If the date is in the future, that post's path gets added to the --exclude list. Once the date arrives, the exclude drops off and the post goes live on the next run.
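The workflow builds the excludes with a bash loop, but the same date filter can be sketched as a single jq call. The schedule entries below are made up for the demo, and it assumes jq is installed:

```shell
# Throwaway schedule with one past and one future entry (made-up paths)
cat > /tmp/publish-schedule.json <<'EOF'
[
  {"path": "blog/past-post/*",   "publishDate": "2020-01-01"},
  {"path": "blog/future-post/*", "publishDate": "2099-01-01"}
]
EOF

TODAY=$(date -u +%Y-%m-%d)
# Plain string comparison works because ISO 8601 dates sort lexicographically
jq -r --arg today "$TODAY" \
  '.[] | select(.publishDate > $today) | .path' /tmp/publish-schedule.json
# → blog/future-post/*
```

Every path the filter prints is a post that would get an --exclude flag on the next sync.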

Combined with the daily cron trigger at 9 AM EST, this means you can:

  1. Write a blog post any time
  2. Add it to publish-schedule.json with a future date
  3. Push to main — the post is in the repo but not deployed
  4. On the scheduled date, the daily cron run deploys it automatically

No manual intervention. No remembering to push on Monday morning. The posts just appear when they're supposed to.

Cleanup

Once a post is live, you can remove it from publish-schedule.json. It won't hurt anything to leave it — past dates are ignored — but keeping the file clean makes it easier to see what's actually queued.
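If you'd rather automate that housekeeping, a hypothetical jq one-liner can prune past entries. The demo below works on a throwaway file in /tmp; jq can't edit a file in place, hence the temp-file shuffle:

```shell
# Demo schedule: one already-published entry, one still queued (made-up paths)
cat > /tmp/schedule.json <<'EOF'
[
  {"path": "blog/old-post/*",    "publishDate": "2020-01-01"},
  {"path": "blog/queued-post/*", "publishDate": "2099-01-01"}
]
EOF

TODAY=$(date -u +%Y-%m-%d)
# Keep only entries whose publishDate is still in the future
jq --arg today "$TODAY" '[.[] | select(.publishDate > $today)]' \
  /tmp/schedule.json > /tmp/schedule.tmp && mv /tmp/schedule.tmp /tmp/schedule.json

jq -r '.[].path' /tmp/schedule.json
# → blog/queued-post/*
```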


What's Next

This started as a simple "push to deploy" workflow and ended up solving the scheduling problem too. The whole thing is one YAML file, one JSON schedule, and zero infrastructure beyond what GitHub gives you for free.

If you wanted to extend this, you could add CloudFront cache invalidation after deploy, run HTML validation or link checking before sync, or add a Slack notification on success/failure. But for a personal portfolio site, this covers everything. Push to main for instant deploys, add a date for scheduled posts. Done.

GitHub Actions · AWS S3 · CI/CD · DevOps · Static Site
Mark Borden

CTO & Technology Consultant. Building the systems behind 10x faster eLearning at KnowledgeNow.
