Cloudflare Workers run at the edge, milliseconds from your users. Cold starts are nearly instant. Perfect for global applications.
```sh
npx alepha build --cloudflare
cd dist && npx wrangler deploy
```
Here's a minimal package.json:
```json
{
  "scripts": {
    "dev": "alepha dev",
    "build": "alepha build --cloudflare",
    "deploy": "alepha build --cloudflare && wrangler deploy -c=dist/wrangler.jsonc"
  },
  "dependencies": {
    "alepha": "^0.13.0",
    "react": "^19.0.0"
  },
  "devDependencies": {
    "wrangler": "^4.0.0"
  }
}
```
```txt
dist/
├── index.js             # Your bundled application
├── main.cloudflare.js   # Worker entry point
├── public/              # Static assets (served via the ASSETS binding)
└── wrangler.jsonc       # Wrangler configuration
```
For complex deployments (migrations, environment loading), create an `alepha.config.ts`:
```ts
import { $command } from "alepha/command";
import { loadEnv } from "vite";

export default () => ({
  deploy: $command({
    handler: async ({ run, root }) => {
      // Build for Cloudflare
      await run("npx alepha build --cloudflare");

      // Run database migrations
      await run("npx alepha db:migrate --mode=production");

      // Load Cloudflare credentials from .env.production
      Object.assign(process.env, loadEnv("production", root, "CLOUDFLARE"));

      // Deploy
      await run("npx wrangler deploy -c=dist/wrangler.jsonc");
    },
  }),
});
```
Now `npx alepha deploy` handles everything.
D1 is Cloudflare's serverless SQLite database. It runs at the edge, so your database is as fast as your workers.
```sh
npx wrangler d1 create my-database
# Note the database ID from the output
```
```sh
# .env.production
DATABASE_URL=cloudflare-d1://my-database:your-database-id
```
The format is `cloudflare-d1://binding-name:database-id`.
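To illustrate, the connection string splits into a binding name and a database ID. A sketch of that parsing — `parseD1Url` is a name of my own for illustration, not part of Alepha's public API:

```typescript
// Hypothetical helper showing how a cloudflare-d1:// URL decomposes.
// Alepha performs this parsing internally.
function parseD1Url(url: string): { binding: string; databaseId: string } {
  const match = /^cloudflare-d1:\/\/([^:]+):(.+)$/.exec(url);
  if (!match) throw new Error(`Not a D1 connection string: ${url}`);
  return { binding: match[1], databaseId: match[2] };
}
```

For example, `parseD1Url("cloudflare-d1://my-database:your-database-id")` yields the binding `my-database` and the database ID `your-database-id`.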
Alepha automatically generates the `d1_databases` binding in the emitted `wrangler.jsonc`.

Your entities and repositories work exactly the same:
```ts
import { t } from "alepha";
import { $entity, $repository, db } from "alepha/orm";

const userEntity = $entity({
  name: "users",
  schema: t.object({
    id: db.primaryKey(),
    email: t.email(),
    name: t.text(),
  }),
});

class Db {
  users = $repository(userEntity);

  // Works on Postgres locally, D1 in production
  createUser() {
    return this.users.create({ email: "hello@example.com", name: "Alice" });
  }
}
```
The generated `wrangler.jsonc` looks like:

```jsonc
{
  "name": "my-app",
  "main": "./main.cloudflare.js",
  "compatibility_flags": ["nodejs_compat"],
  "compatibility_date": "2025-11-17",
  "assets": {
    "directory": "./public",
    "binding": "ASSETS"
  },
  // Auto-generated if DATABASE_URL starts with cloudflare-d1://
  "d1_databases": [
    {
      "binding": "my-database",
      "database_name": "my-database",
      "database_id": "your-database-id"
    }
  ]
}
```
You can extend this in `vite.config.ts`:

```ts
viteAlepha({
  cloudflare: {
    // Additional wrangler config
    vars: {
      STRIPE_PUBLIC_KEY: "pk_live_...",
    },
    kv_namespaces: [
      { binding: "CACHE", id: "your-kv-id" },
    ],
  },
});
```
Use Cloudflare KV for caching:
```sh
# Create a KV namespace
npx wrangler kv namespace create CACHE
```
Add to your config:
```ts
viteAlepha({
  cloudflare: {
    kv_namespaces: [
      { binding: "CACHE", id: "your-kv-id" },
    ],
  },
});
```
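At runtime, KV bindings expose the standard `KVNamespace` methods (`get`, and `put` with an optional `expirationTtl`). As one sketch of cache-aside usage — the `cached` helper and the minimal `KVLike` interface are my own, and how the `CACHE` binding reaches your handler depends on your Alepha setup:

```typescript
// Minimal slice of the KVNamespace surface this helper needs.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

// Cache-aside: return the cached value if present, otherwise compute,
// store it with a TTL, and return it.
async function cached(
  kv: KVLike,
  key: string,
  ttlSeconds: number,
  compute: () => Promise<string>,
): Promise<string> {
  const hit = await kv.get(key);
  if (hit !== null) return hit;
  const value = await compute();
  await kv.put(key, value, { expirationTtl: ttlSeconds });
  return value;
}
```

Called as `cached(cacheBinding, "user:1", 60, loadUser)`, the second call within the TTL skips `loadUser` entirely.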
For file uploads, use Cloudflare R2 (S3-compatible):
```sh
# Create R2 bucket
npx wrangler r2 bucket create uploads
```
Configure in your app:
```ts
import { AlephaBucketS3 } from "@alepha/bucket-s3";

alepha.with(AlephaBucketS3);
```
Then point the S3 client at your R2 endpoint:

```sh
S3_ENDPOINT=https://account-id.r2.cloudflarestorage.com
S3_ACCESS_KEY_ID=...
S3_SECRET_ACCESS_KEY=...
S3_REGION=auto
```
Set secrets via Wrangler:
```sh
npx wrangler secret put APP_SECRET
npx wrangler secret put DATABASE_URL
```
Or use `.dev.vars` for local development:
```sh
# .dev.vars (gitignored)
APP_SECRET=dev-secret
DATABASE_URL=postgres://localhost:5432/dev
```
Run D1 migrations before deploying:
```sh
# Generate migration
npx alepha db:generate

# Apply locally
npx wrangler d1 execute my-database --local --file=drizzle/0001_migration.sql

# Apply to production
npx wrangler d1 execute my-database --file=drizzle/0001_migration.sql
```
```sh
# Deploy to production
npx alepha build --cloudflare && cd dist && npx wrangler deploy

# Deploy to preview
npx alepha build --cloudflare && cd dist && npx wrangler deploy --env preview

# Tail logs
npx wrangler tail
```