
#Platform Command

Deploying a full-stack app to the cloud means provisioning databases, creating storage buckets, configuring queues, pushing secrets, and running migrations -- before you even think about deploying code. Most teams glue this together with shell scripts, Terraform files, and CI pipelines that drift out of sync with the application.

alepha platform does all of it in one command.

bash
alepha p up

It reads your code, figures out what cloud resources you need, provisions them, builds, migrates, deploys, and pushes secrets. If your app uses $entity, it creates a database. If it uses $bucket, it creates a storage bucket. No Terraform. No YAML. No manual resource declarations.

Alias: alepha p is shorthand for alepha platform.

#How It Works

Alepha introspects your application at build time. It scans for primitives -- $entity, $bucket, $cache, $queue, $scheduler -- and maps them to cloud resources on the target platform.

The deployment lifecycle runs in a fixed order:

bash
authenticate → provision → build → migrate → deploy → secrets

Each step is handled by an adapter. The adapter knows how to talk to a specific cloud provider. Currently, the Cloudflare adapter is the recommended choice. Vercel and Docker adapters are experimental.
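The fixed-order lifecycle can be sketched as a loop over an adapter interface. This is an illustrative model only -- the interface and method names are assumptions, not Alepha's actual adapter API:

```typescript
// Hypothetical adapter interface: one method per lifecycle step.
interface Adapter {
  authenticate(): void;
  provision(): void;
  build(): void;
  migrate(): void;
  deploy(): void;
  secrets(): void;
}

// The six steps always run in this order.
const steps: (keyof Adapter)[] = [
  "authenticate", "provision", "build", "migrate", "deploy", "secrets",
];

// Run each step exactly once, in order, and return the executed sequence.
function runPipeline(adapter: Adapter): string[] {
  const log: string[] = [];
  for (const step of steps) {
    adapter[step]();
    log.push(step);
  }
  return log;
}
```

Swapping the adapter (Cloudflare, Vercel, Docker) changes how each step talks to the provider, not the order in which the steps run.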

#Configuration

Add a platform section to alepha.config.ts:

typescript
import { defineConfig } from "alepha/cli";

export default defineConfig({
  platform: {
    environments: {
      production: {
        adapter: "cloudflare",
        domain: "myapp.com",
      },
      staging: {
        adapter: "cloudflare",
        domain: "staging.myapp.com",
      },
    },
  },
});

#Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| name | string | package.json name | Project name. Used as prefix for all resource names. |
| apps | string[] | -- | Monorepo app paths (e.g. ["apps/api", "apps/web"]). Omit for standalone. |
| default | string | "production" | Default environment when --env is omitted. |
| environments | Record | -- | Named environments with adapter and optional domain. |

#Environment Options

| Option | Type | Description |
| --- | --- | --- |
| adapter | string | Cloud provider: "cloudflare", "vercel", "docker" |
| domain | string | Primary custom domain |
| domains | Record | Per-app domain mapping (monorepo) |
| ip | string | VPS IP address (Docker remote mode) |

#Secrets

Secrets are read from .env.{env} files. If you deploy to the production environment, create a .env.production file:

bash
# .env.production
STRIPE_SECRET_KEY=sk_live_...
SENDGRID_API_KEY=SG...

Variables that are handled by platform bindings (DATABASE_URL, R2_BUCKET_NAME, etc.) are filtered out automatically. The rest are pushed to the cloud provider's secret store.

#Resource Naming

All cloud resources follow a deterministic naming convention:

bash
<project>-<env>[-<app>]

For a project named acme deployed to production:

| Resource | Name |
| --- | --- |
| Worker | acme-production |
| D1 Database | acme-production |
| R2 Bucket | acme-production |
| KV Namespace | acme-production |
| Queue | acme-production |

In a monorepo with an api app: acme-production-api.

Names are slugified -- lowercase, alphanumeric and dashes, max 63 characters.
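A minimal sketch of the naming rule above -- the helper names are hypothetical, but the behavior follows the stated convention (lowercase, alphanumerics and dashes, 63-character cap):

```typescript
// Slugify: lowercase, replace non-alphanumerics with dashes,
// collapse dash runs, trim edges, cap at 63 characters.
function slugify(s: string): string {
  return s
    .toLowerCase()
    .replace(/[^a-z0-9-]+/g, "-")
    .replace(/-+/g, "-")
    .replace(/^-|-$/g, "")
    .slice(0, 63);
}

// <project>-<env>[-<app>]
function resourceName(project: string, env: string, app?: string): string {
  return slugify([project, env, app].filter(Boolean).join("-"));
}
```

So resourceName("acme", "production", "api") yields acme-production-api, matching the monorepo example above.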

#Commands

#plan

Preview the deployment topology without touching anything. No authentication required.

bash
alepha p plan
alepha p plan --env staging
alepha p plan --json

Shows: project name, mode (standalone/monorepo), environments, detected resources, resource names, and secret count.

#up

Full deployment pipeline. Runs all six lifecycle steps.

bash
alepha p up
alepha p up --env staging
alepha p up --app api          # monorepo: deploy a single app

#down

Tear down all resources for an environment. Requires --env.

bash
alepha p down --env staging

Prompts for confirmation before deleting. Environments starting with tmp skip the confirmation.
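The confirmation rule reduces to a one-line predicate (the helper name is illustrative, not Alepha's internal API):

```typescript
// Environments whose names start with "tmp" are torn down without prompting.
function requiresConfirmation(env: string): boolean {
  return !env.startsWith("tmp");
}
```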

#status

Inspect what is currently deployed. Alias: alepha p s.

bash
alepha p status
alepha p status --env staging
alepha p status --json

Shows: workers (deployed/not deployed, version, date), databases, buckets, KV namespaces, queues, and secrets (pushed/missing).

#build

Build only. No deployment.

bash
alepha p build --env production

#deploy

Deploy only. Assumes already built.

bash
alepha p deploy --env production

#migrate

Run database migrations only.

bash
alepha p migrate --env production

#Flags

All commands accept --env (-e) to target an environment and --app (-a) to target a specific app in a monorepo. --verbose (-v) enables detailed output. plan and status accept --json for machine-readable output.

#Cloudflare Adapter

The Cloudflare adapter deploys your application as a Cloudflare Worker. It uses the Cloudflare REST API for resource provisioning and the Wrangler CLI for deployment, migrations, and secret management.

#Prerequisites

  • A Cloudflare account
  • The wrangler CLI (installed automatically if missing)

On first run, alepha p up opens the Wrangler OAuth flow in your browser. Credentials are cached for 4 hours.

#Resource Mapping

Alepha detects primitives in your code and maps them to Cloudflare resources:

| Primitive | Cloudflare Resource | Condition |
| --- | --- | --- |
| $entity / $repository | D1 (SQLite) | DATABASE_URL is absent or not Postgres |
| $entity / $repository | Hyperdrive | DATABASE_URL starts with postgres: |
| $bucket | R2 | Any $bucket primitive detected |
| $cache | KV | Any $cache with non-memory provider |
| $queue | Queue | Any $queue primitive detected |
| $scheduler | Cron Triggers | Any $scheduler primitive detected (configured at build time, not provisioned) |

D1, Hyperdrive, R2, KV, and Queue are provisioned via the Cloudflare REST API during the provision step. Cron triggers are written into wrangler.jsonc during the build step.

All provisioning is idempotent. If a resource already exists with the expected name, it is reused.
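The mapping table above can be sketched as a decision function. This is a simplified model under the stated conditions -- the type and function names are assumptions, not Alepha's internals:

```typescript
type Primitive = "$entity" | "$bucket" | "$cache" | "$queue" | "$scheduler";

// Map a detected primitive to its Cloudflare resource.
function mapResource(p: Primitive, databaseUrl?: string): string {
  switch (p) {
    case "$entity":
      // A Postgres DATABASE_URL selects Hyperdrive; anything else falls back to D1.
      return databaseUrl?.startsWith("postgres:") ? "hyperdrive" : "d1";
    case "$bucket":
      return "r2";
    case "$cache":
      return "kv"; // in practice, only for non-memory cache providers
    case "$queue":
      return "queue";
    case "$scheduler":
      return "cron"; // configured at build time, not provisioned
  }
}
```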

#Database: D1 vs Hyperdrive

The adapter chooses the database strategy based on DATABASE_URL in .env.{env}:

D1 (default) -- If no DATABASE_URL is set, or it does not start with postgres:, the adapter provisions a Cloudflare D1 database (SQLite at the edge). Migrations run via wrangler d1 migrations apply.

Hyperdrive -- If DATABASE_URL points to an external PostgreSQL database (postgres://...), the adapter provisions a Hyperdrive config instead. Hyperdrive accelerates connections from Workers to your Postgres database through connection pooling and caching. Migrations run via alepha db migrations apply directly against the database.

bash
# .env.production — D1 (no DATABASE_URL, or d1:// protocol)
# Nothing to set. D1 is created and wired automatically.

# .env.production — Hyperdrive (external Postgres)
DATABASE_URL=postgres://user:[email protected]:5432/mydb

#Build

The adapter runs alepha build -t cloudflare with environment variables injected from provisioned resources:

| Variable | Set When |
| --- | --- |
| DATABASE_URL | D1 provisioned (format: d1://name:id) |
| HYPERDRIVE_ID | Hyperdrive provisioned |
| R2_BUCKET_NAME | R2 provisioned |
| CLOUDFLARE_KV_NAME | KV provisioned |
| CLOUDFLARE_KV_ID | KV provisioned |
| CLOUDFLARE_QUEUE_NAME | Queue provisioned |
| CLOUDFLARE_DOMAIN | Domain configured |

You do not set these manually. The adapter injects them between provisioning and build.

For details on what the Cloudflare build produces (wrangler.jsonc, worker entry, bindings), see Cloudflare Workers Deployment.

#Deploy

Deploys via wrangler deploy using the generated dist/wrangler.jsonc. Returns the live Worker URL.

#Secrets

After deployment, secrets from .env.{env} are pushed via wrangler secret:bulk. The following keys are excluded (handled by bindings or build config):

  • DATABASE_URL, HYPERDRIVE_ID, POSTGRES_SCHEMA
  • R2_BUCKET_NAME, CLOUDFLARE_DOMAIN
  • NODE_ENV
  • Any VITE_* variable
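The exclusion rules above amount to a simple filter over the parsed .env file. A sketch (the exact implementation inside Alepha may differ):

```typescript
// Keys handled by platform bindings or build config -- never pushed as secrets.
const EXCLUDED = new Set([
  "DATABASE_URL", "HYPERDRIVE_ID", "POSTGRES_SCHEMA",
  "R2_BUCKET_NAME", "CLOUDFLARE_DOMAIN", "NODE_ENV",
]);

// Return only the variables that should reach the provider's secret store.
function pushableSecrets(env: Record<string, string>): Record<string, string> {
  return Object.fromEntries(
    Object.entries(env).filter(
      ([key]) => !EXCLUDED.has(key) && !key.startsWith("VITE_"),
    ),
  );
}
```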

#Teardown

alepha p down deletes resources in dependency order:

  1. Queue consumers (unbind from worker)
  2. Workers
  3. Queues
  4. KV namespaces
  5. R2 buckets (fails if not empty -- empty manually)
  6. D1 databases / Hyperdrive configs

#Full Example

typescript
// alepha.config.ts
import { defineConfig } from "alepha/cli";

export default defineConfig({
  platform: {
    environments: {
      production: {
        adapter: "cloudflare",
        domain: "myapp.com",
      },
    },
  },
});
bash
# .env.production
STRIPE_SECRET_KEY=sk_live_...
bash
alepha p up

This authenticates with Cloudflare, provisions D1 + R2 + KV + Queue (based on your code), builds for Cloudflare Workers, runs D1 migrations, deploys the worker, and pushes STRIPE_SECRET_KEY as a secret.

#Temporary Environments

Prefix an environment name with tmp to create a throwaway deployment. Teardown skips the confirmation prompt.

typescript
environments: {
  production: { adapter: "cloudflare", domain: "myapp.com" },
  staging: { adapter: "cloudflare", domain: "staging.myapp.com" },
  "tmp-pr-42": { adapter: "cloudflare" },
}
bash
alepha p up --env tmp-pr-42
# ... test ...
alepha p down --env tmp-pr-42   # no confirmation

#Monorepo Support

For monorepos, define app paths in the config:

typescript
export default defineConfig({
  platform: {
    apps: ["apps/api", "apps/web"],
    environments: {
      production: {
        adapter: "cloudflare",
        domains: {
          api: "api.myapp.com",
          web: "myapp.com",
        },
      },
    },
  },
});

Each app gets its own Worker, KV namespace, and Queue. Database (D1/Hyperdrive) and R2 bucket are shared across apps.
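The per-app vs shared split can be sketched as follows, using the naming convention from earlier. The helper is hypothetical and takes short app names (api, web) rather than full paths:

```typescript
// Per-app resources get the app suffix; database and bucket stay shared.
function resourcesFor(project: string, env: string, apps: string[]) {
  const shared = {
    database: `${project}-${env}`, // D1/Hyperdrive shared across apps
    bucket: `${project}-${env}`,   // R2 shared across apps
  };
  const perApp = apps.map((app) => ({
    worker: `${project}-${env}-${app}`,
    kv: `${project}-${env}-${app}`,
    queue: `${project}-${env}-${app}`,
  }));
  return { shared, perApp };
}
```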

Deploy a single app:

bash
alepha p up --env production --app api

#Other Adapters

#Vercel (experimental)

Deploys to Vercel serverless. Handles project creation, deployment, and environment variable management.

typescript
environments: {
  production: { adapter: "vercel" },
}

Limitations: no resource provisioning (database, storage), no native queue support. Prefer Cloudflare for new projects.

#Docker (experimental)

Two modes based on whether ip is set:

Local mode -- Generates docker-compose.yml with Postgres and Redis for local development.

typescript
environments: {
  local: { adapter: "docker" },
}

Remote mode -- Builds a Docker image, uploads to a VPS via SSH, and deploys behind a Traefik reverse proxy with automatic TLS.

typescript
environments: {
  production: {
    adapter: "docker",
    ip: "203.0.113.1",
    domain: "myapp.com",
  },
}