# CryptFlare Docs (full corpus)

> Developer documentation for CryptFlare, a secrets management platform built on Cloudflare. Covers REST API, CLI, SDK, Terraform, sync integrations, and security architecture. This file is the full `llms-full.txt` corpus: every documentation page concatenated in reading order with their source URLs as headings. Safe to ingest wholesale into an LLM context window for grounding.

OpenAPI spec: https://api.cryptflare.com/v1/openapi.json

---

# Authentication

Source: https://docs.cryptflare.com/api-reference/authentication

Passwordless authentication via email OTP. Session cookies, CSRF protection, and session lifecycle.

# Authentication

CryptFlare uses **passwordless authentication** via email OTP. There are no passwords anywhere in the system - users enter their email, receive a 6-digit code via Resend, and exchange it for a session cookie.

```mermaid
sequenceDiagram
    Browser->>API: POST /v1/auth/login with email
    API->>API: Generate 6-digit OTP, hash with 10m TTL
    API->>Store: Persist OTP hash keyed by email
    API->>Email: Send OTP to user mailbox
    Email-->>Browser: OTP arrives in inbox
    Browser->>API: POST /v1/auth/verify with email + code
    API->>Store: Compare code to stored hash
    Store-->>API: Match within TTL
    API->>API: Mint session JWT, create session row
    API-->>Browser: Set cf_session cookie (HttpOnly, Secure)
    Browser->>API: Subsequent requests carry cookie
    API-->>Browser: Authorised response
```

The OTP only crosses email; the session cookie is HttpOnly so no page script can read it, and both sides reference the same server-side session record.

This page is an **overview** - every endpoint is documented on its own page.
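The generate → hash → verify-within-TTL steps of the flow can be modelled as below. This is an illustrative sketch only: the helper names and the SHA-256 construction are assumptions, not CryptFlare's actual hashing scheme.

```python
import hashlib
import hmac
import secrets

OTP_TTL_SECONDS = 600  # 10-minute TTL, per the flow above

def generate_otp() -> str:
    # Uniformly random, zero-padded 6-digit code.
    return f"{secrets.randbelow(10**6):06d}"

def store_otp(store: dict, email: str, code: str, now: float) -> None:
    # Persist only a hash of the code, keyed by email.
    # (Hash construction is illustrative, not the real scheme.)
    digest = hashlib.sha256(f"{email}:{code}".encode()).hexdigest()
    store[email] = {"hash": digest, "expires_at": now + OTP_TTL_SECONDS}

def verify_otp(store: dict, email: str, code: str, now: float) -> bool:
    # Only a hash match within the TTL is exchanged for a session,
    # and a successful code is single-use.
    entry = store.get(email)
    if entry is None or now > entry["expires_at"]:
        return False
    digest = hashlib.sha256(f"{email}:{code}".encode()).hexdigest()
    if hmac.compare_digest(digest, entry["hash"]):
        del store[email]
        return True
    return False
```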
## Endpoints

| Method | Endpoint | Description |
|---|---|---|
| POST | [`/auth/login`](/api-reference/authentication/request-otp) | Request a login OTP by email (no user enumeration) |
| POST | [`/auth/verify`](/api-reference/authentication/verify-otp) | Verify the OTP and issue a session cookie |
| GET | [`/auth/me`](/api-reference/authentication/get-me) | Get the authenticated user with organisations and permissions |
| POST | [`/auth/logout`](/api-reference/authentication/logout) | Destroy the current session |

## Session details

### Cookies

| Cookie | Purpose | HttpOnly | Secure | SameSite |
|---|---|---|---|---|
| `cf_session` | Session identifier | Yes | Yes (production) | Lax |
| `cf_csrf` | CSRF token (JS-readable) | No | Yes (production) | Lax |

### Session lifecycle

| Property | Value |
|---|---|
| Session expiry | 1 hour of inactivity |
| Sliding refresh | Auto-extended every 5 minutes of activity |
| Maximum lifetime | 48 hours (forces re-login regardless of activity) |
| Cookie Max-Age | 48 hours (browser retention) |
| Storage | Server-side in database |

Sessions are validated on every protected request. Expired sessions are rejected with `401` and the session cookie is cleared.

## CSRF protection

CryptFlare uses the **double-submit cookie pattern**:

On login, the API generates a cryptographically random CSRF token and stores it in the session. Two cookies are set: `cf_session` (HttpOnly) and `cf_csrf` (JavaScript-readable). The frontend reads `cf_csrf` and sends it as the `x-csrf-token` header on every `POST`, `PUT`, `PATCH`, and `DELETE` request. Middleware validates that the header matches the CSRF token stored in the session. Mismatches return `403`.

An attacker on another domain cannot forge state-changing requests: `SameSite=Lax` stops the browser attaching the session cookie to cross-site subrequests, and any request that arrives without a matching `x-csrf-token` header is rejected with `403`.
### CSRF token rotation The CSRF token rotates automatically when a privilege-changing action occurs: | Event | Effect | |---|---| | TOTP enabled | CSRF token rotated, all other sessions invalidated | | TOTP disabled | CSRF token rotated, all other sessions invalidated | | Member role changed | CSRF token rotated for the affected user, session cache flushed (user stays logged in with a fresh token) | | Ownership transfer accepted | CSRF token rotated for both old and new owner | | SSO JIT role change | CSRF token rotated for the affected user on callback | When rotated, the API updates the `cf_csrf` cookie and includes the new token in the `X-CSRF-Token` response header. ## Two-factor authentication (TOTP) When a user enables TOTP, subsequent logins require a valid 6-digit code (or a recovery code) after the email OTP step. CryptFlare applies **progressive delay** on failed attempts rather than a hard account lockout: | Failed attempts (15 min rolling window) | Behaviour | |---|---| | 1 through 5 | No penalty, typical typo tolerance | | 6 through 10 | Server-side delay of 2s, 5s, 10s, 20s, 40s before responding | | 11 and above | Throttled to 1 attempt per minute, responds with `429 TOO_MANY_ATTEMPTS` | At the 5th failure the user also receives a one-time "suspicious login attempt" email so a legitimate owner knows someone is probing their account. The counter resets on any successful verification. Because the server never locks the account, an attacker who knows a user's email cannot deny them access by spamming failed codes. TOTP recovery codes are hashed with PBKDF2-SHA256 (600,000 iterations, 16-byte per-code salt) and are single-use. ## MCP server The Model Context Protocol server at `mcp.cryptflare.com` uses the same Bearer token flow as the REST API - both workspace tokens and service tokens authenticate via `Authorization: Bearer cf_live_...`. The extra check is the `mcp:use` permission gate. See [MCP access](/security/mcp-access) for details. 
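The progressive-delay schedule from the TOTP section above can be sketched as follows (hypothetical helper; the real enforcement lives server-side):

```python
def totp_throttle(failed_attempts: int) -> dict:
    # Failed-attempt counts are per 15-minute rolling window and
    # reset on any successful verification.
    if failed_attempts <= 5:
        return {"delay_seconds": 0, "throttled": False}
    if failed_attempts <= 10:
        # Delays of 2s, 5s, 10s, 20s, 40s for attempts 6 through 10.
        delay = {6: 2, 7: 5, 8: 10, 9: 20, 10: 40}[failed_attempts]
        return {"delay_seconds": delay, "throttled": False}
    # 11+: one attempt per minute; excess attempts get 429 TOO_MANY_ATTEMPTS.
    return {"delay_seconds": 60, "throttled": True}
```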
## IP and User-Agent binding Each session records the client's IP address and User-Agent string at creation time. These are kept for audit purposes and can be used for anomaly detection (e.g. flagging a session that suddenly appears from a different country). --- # Get current user Source: https://docs.cryptflare.com/api-reference/authentication/get-me GET /auth/me - returns the authenticated user with organisations, roles, and resolved permissions. # Get current user Returns the authenticated user along with every organisation they belong to, their role in each, and the **resolved** permission set (including any overrides from [role customisation](/api-reference/role-permissions)). This is the endpoint dashboards typically call first after sign-in to hydrate the UI. Authentication is by session cookie - the `cf_session` cookie set by [Verify OTP](/api-reference/authentication/verify-otp). --- ## Request --- --- # Log out Source: https://docs.cryptflare.com/api-reference/authentication/logout POST /auth/logout - destroys the current session and clears the session cookie. # Log out Destroys the current session row in the database and clears the `cf_session` cookie. Any subsequent request presenting the cleared cookie returns `401`. This endpoint only ends the current session - other sessions the user has in other browsers or devices are untouched. Use role changes or TOTP toggles when you need to invalidate every active session for a user. --- ## Request --- --- # Request a login OTP Source: https://docs.cryptflare.com/api-reference/authentication/request-otp POST /auth/login - sends a 6-digit OTP to the provided email. Identical response whether the email exists or not. # Request a login OTP Sends a 6-digit one-time code to the caller's email via Resend. CryptFlare uses passwordless auth exclusively - there are no passwords anywhere in the system. The response shape is **identical** whether the email exists or not. You always receive a `requestId`. 
This prevents attackers from enumerating valid user accounts by probing the login endpoint.

---

## Request

---

---

# Verify an OTP code

Source: https://docs.cryptflare.com/api-reference/authentication/verify-otp

POST /auth/verify - validates the 6-digit OTP code and issues a session cookie.

# Verify an OTP code

Validates the 6-digit OTP code against the request ID. On success, CryptFlare creates or finds the user account, issues a session cookie (`cf_session`, HttpOnly), and returns the user record with an `isNewUser` flag so the client can trigger onboarding if needed.

---

## Request

---

---

# Database Backups

Source: https://docs.cryptflare.com/api-reference/backups

Manage automated and manual database backups via the console API.

# Database Backups

The backup system creates full snapshots of the platform databases on a regular schedule, stored in geo-redundant object storage. A rolling window of **20 versions** is retained - the oldest is pruned automatically when a new one is created. Snapshots can be downloaded as JSON (for programmatic restore) or SQL (via the console UI, for manual database recovery). Use the manual trigger endpoint if you need a snapshot ahead of a risky migration.

Append-only and ephemeral tables (audit logs, session state, status metrics) are protected through dedicated mechanisms outside the snapshot system, so they are not duplicated in scheduled backups.

## How it works

Scheduled and manual backups share the same pipeline: a worker streams each regional D1 to JSONL, compresses and encrypts the output with the organisation's backup key, and writes the object to R2 with retention metadata attached. Restore walks the same pipeline in reverse.

```mermaid
graph LR
    Export --> Encrypt
    Encrypt --> Store
    Store --> List
    List --> Select
    Select --> Replay
```

The encryption step means the R2 object is useless without the org-scoped key, and the retention metadata is what lets the rolling window prune old snapshots automatically.
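The retention metadata drives a simple rolling-window prune: after each new snapshot, everything beyond the newest 20 is dropped. A sketch (names assumed):

```python
MAX_RETAINED_BACKUPS = 20  # rolling window, per the overview above

def prune_backups(keys_oldest_first: list[str]) -> list[str]:
    # Keep only the newest MAX_RETAINED_BACKUPS snapshots; manual and
    # automated backups count against the same cap.
    excess = len(keys_oldest_first) - MAX_RETAINED_BACKUPS
    return keys_oldest_first[excess:] if excess > 0 else keys_oldest_first
```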
Every Backups endpoint is gated behind a valid console session and `console:database:read` or `console:database:manage` permission. These endpoints are not available to customer sessions.

This page is an **overview** - every endpoint is documented on its own page.

## Endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | [`/console/database/backups`](/api-reference/backups/list-backups) | List every backup with schedule info |
| POST | [`/console/database/backups`](/api-reference/backups/trigger-backup) | Trigger an immediate manual backup |
| GET | [`/console/database/backups/:key`](/api-reference/backups/download-backup) | Download a backup as JSON |
| POST | [`/console/database/backups/restore`](/api-reference/backups/restore-backup) | Restore tables from a backup snapshot |
| DELETE | [`/console/database/backups/:key`](/api-reference/backups/delete-backup) | Permanently delete a backup from storage |

## Backup format

Each backup is a JSON file with this structure:

```json
{
  "version": 1,
  "createdAt": "2026-04-09T13:00:00.000Z",
  "databases": {
    "platform": {
      "tables": {
        "users": [{ "id": "...", "email": "..." }],
        "organisations": [ ... ]
      },
      "tableCount": 28,
      "totalRows": 1250
    },
    "console": {
      "tables": { "console_users": [ ... ] },
      "tableCount": 4,
      "totalRows": 48
    }
  }
}
```

## SQL export

Backups can be downloaded as SQL from the console UI. The SQL format generates `DELETE FROM` + `INSERT INTO` statements per table, ready to execute against the database:

```sql
-- Table: users (5 rows)
DELETE FROM users;
INSERT INTO users (id, email, name) VALUES ('uuid', 'user@example.com', 'User');
```

## Automation

Automated snapshots run on a regular weekly schedule. The 20-backup retention cap covers automated and manual snapshots combined. For more granular point-in-time recovery between snapshots, the immutable audit log archive can be replayed to reconstruct any intermediate state.
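The SQL export shape shown earlier on this page can be approximated from the backup JSON with a few lines. This is a sketch only - the console's real exporter and its escaping rules may differ:

```python
def backup_to_sql(backup: dict) -> str:
    # Walk every database and table in the backup JSON, emitting a
    # DELETE + INSERT block per table, mirroring the SQL export format.
    lines = []
    for db in backup["databases"].values():
        for table, rows in db["tables"].items():
            lines.append(f"-- Table: {table} ({len(rows)} rows)")
            lines.append(f"DELETE FROM {table};")
            for row in rows:
                cols = ", ".join(row.keys())
                vals = ", ".join(
                    "NULL" if v is None else "'" + str(v).replace("'", "''") + "'"
                    for v in row.values()
                )
                lines.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
    return "\n".join(lines)
```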
---

# Delete backup

Source: https://docs.cryptflare.com/api-reference/backups/delete-backup

DELETE /console/database/backups/:key - permanently removes a backup snapshot.

# Delete backup

Permanently deletes a backup snapshot. There is no recycle bin - once deleted, the backup is gone. The `key` must start with `db-backups/` - arbitrary paths are rejected. Double-check the key before deleting; accidental removal during an incident could lose the last good snapshot.

---

## Required permission

---

## Request

---

---

# Download backup

Source: https://docs.cryptflare.com/api-reference/backups/download-backup

GET /console/database/backups/:key - downloads the raw backup JSON file.

# Download backup

Downloads the raw backup JSON file. The response is a `Content-Disposition: attachment` download so browsers save it to disk rather than rendering it. The `key` must start with `db-backups/` - anything else is rejected.

---

## Required permission

---

## Request

---

## Backup file format

```json
{
  "version": 1,
  "createdAt": "2026-04-09T13:00:00.000Z",
  "databases": {
    "platform": {
      "tables": {
        "users": [{ "id": "...", "email": "..." }],
        "organisations": [ ... ]
      },
      "tableCount": 28,
      "totalRows": 1250
    },
    "console": {
      "tables": { "console_users": [ ... ] },
      "tableCount": 4,
      "totalRows": 48
    }
  }
}
```

---

---

# List backups

Source: https://docs.cryptflare.com/api-reference/backups/list-backups

GET /console/database/backups - returns every available backup with schedule info.

# List backups

Returns every available backup with metadata, the next scheduled automated backup time, and a summary of the most recent backup. Used by the console UI to render the backup dashboard.

Every Backups endpoint is gated behind a valid console session and `console:database:*` permission. These endpoints are not available to customer sessions.
---

## Required permission

---

## Request

---

---

# Restore from backup

Source: https://docs.cryptflare.com/api-reference/backups/restore-backup

POST /console/database/backups/restore - restores tables from a backup snapshot. Supports full, per-table, and per-org restore modes.

# Restore from backup

Restores tables from a backup snapshot. Three modes are supported:

| Mode | How it works |
|---|---|
| **Full restore** | Omit `tables` and `orgId`. Deletes every row from every table in the backup, then re-inserts from the snapshot. |
| **Table-specific** | Provide `tables`. Only the listed tables are cleared and restored. |
| **Org-specific** | Provide `orgId`. Only rows with matching `organisation_id`, `owner_id`, or `id` fields are deleted and re-inserted. Other organisations' data is untouched. |

Restore is a **delete-then-insert** operation. Rows in the target database that don't exist in the backup will be gone. Test against a non-prod database first or use org-specific mode to limit the blast radius.

---

## Required permission

---

## Request

---

---

# Trigger manual backup

Source: https://docs.cryptflare.com/api-reference/backups/trigger-backup

POST /console/database/backups - creates an immediate backup of both databases.

# Trigger manual backup

Creates an immediate backup of both the platform and console databases. The key includes a `-manual` suffix to distinguish it from scheduled cron backups.

Only 20 backups are retained in total. Manual backups count towards that limit - the oldest backup is pruned automatically if the new one pushes the count over.

---

## Required permission

---

## Request

---

---

# Compliance Reports

Source: https://docs.cryptflare.com/api-reference/compliance

Generate audit-ready compliance reports for SOC 2, PCI DSS, HIPAA, ISO 27001, GDPR, and NIST 800-53.
# Compliance Reports

The Compliance API generates audit-ready reports that package your organisation's security posture, access controls, policy coverage, encryption configuration, and audit trail into a single downloadable document. Reports can target a specific compliance framework or cover all frameworks at once.

Reports are generated asynchronously - the API returns a job ID that you poll until the report is ready for download.

```mermaid
sequenceDiagram
    User->>API: POST /v1/compliance/report
    API->>Queue: Enqueue job with framework and org scope
    API-->>User: 202 Accepted with jobId
    Queue->>Renderer: Trigger render job
    Renderer->>Renderer: Aggregate audit, policies, encryption posture
    Renderer->>R2: Write HTML and JSON artefacts
    Renderer->>API: Update job status to completed
    User->>API: GET /v1/compliance/report/:jobId
    API-->>User: Status completed with download URL
    User->>API: GET /v1/compliance/report/:jobId/download
    API->>R2: Stream artefact bytes
    R2-->>User: Report HTML or JSON
```

The request path returns immediately with a job ID; rendering happens in the background, and the client polls until the artefact is ready to stream from R2.

## Endpoints

| Method | Endpoint | Description |
|---|---|---|
| POST | [`/compliance/report`](/api-reference/compliance/generate-report) | Generate a compliance report |
| GET | [`/compliance/report/:jobId`](/api-reference/compliance/report-status) | Poll report generation status |
| GET | [`/compliance/report/:jobId/download`](/api-reference/compliance/download-report) | Download the generated report |

## Supported frameworks

## Report sections

---

# Download report

Source: https://docs.cryptflare.com/api-reference/compliance/download-report

GET /compliance/report/:jobId/download - download the generated compliance report.

# Download report

Returns the generated compliance report as HTML or JSON content. Only available after the report status is `completed`. The response is the raw report content, not wrapped in a JSON envelope.
HTML reports are self-contained single-file documents with inline CSS that print cleanly and can be opened in any browser.

---

## Required permission

---

## Request

An example HTML response (truncated) carries the document title `Compliance Report - Acme Inc - CryptFlare`, a top-level `Compliance Report` heading, and the subtitle `Acme Inc - SOC 2 Trust Services Criteria`.
---

---

# Generate a compliance report

Source: https://docs.cryptflare.com/api-reference/compliance/generate-report

POST /compliance/report - queue generation of an audit-ready compliance report.

# Generate a compliance report

Queues generation of a compliance report for the specified framework and date range. The report is generated asynchronously via a queue and stored in R2 for download. Returns a job ID for polling.

This endpoint returns `202 Accepted`. Poll `GET /compliance/report/:jobId` for completion, then download via `GET /compliance/report/:jobId/download`.

---

## Required permission

---

## Supported frameworks

## Supported sections

---

## Request

---

## Use cases

### SOC 2 audit preparation

Generate a quarterly evidence pack for your SOC 2 auditor:

```bash
curl -X POST https://api.cryptflare.com/v1/organisations/$ORG/compliance/report \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "framework": "soc2",
    "dateRange": { "from": "2026-01-01", "to": "2026-03-31" },
    "format": "html",
    "sections": ["access", "audit", "encryption", "policies", "controls"]
  }'
```

### PCI DSS quarterly review

Generate a PCI-focused report for your QSA:

```bash
curl -X POST .../compliance/report \
  -H "Content-Type: application/json" \
  -d '{
    "framework": "pci_dss",
    "dateRange": { "from": "2026-01-01", "to": "2026-03-31" },
    "sections": ["access", "encryption", "rotation", "controls"]
  }'
```

### Scheduled compliance dashboard

Automate monthly report generation via cron or CI/CD:

```bash
# Generate, poll, download (bail out if generation fails)
JOB=$(curl -s -X POST .../compliance/report -d '...' | jq -r '.data.jobId')
while true; do
  STATUS=$(curl -s .../compliance/report/$JOB | jq -r '.data.status')
  [ "$STATUS" = "completed" ] && break
  [ "$STATUS" = "failed" ] && { echo "report generation failed" >&2; exit 1; }
  sleep 2
done
curl -s .../compliance/report/$JOB/download > report.html
```

---

---

# Get report status

Source: https://docs.cryptflare.com/api-reference/compliance/report-status

GET /compliance/report/:jobId - poll for the status of a compliance report.
# Get report status

Polls the status of a compliance report generation job. Returns `processing` while the report is being generated, `completed` when ready for download, or `failed` if generation encountered an error.

Reports expire after 24 hours.

---

## Required permission

---

## Request

---

---

# Data Residency

Source: https://docs.cryptflare.com/api-reference/data-residency

Set and manage data residency regions for your organisation via the API.

# Data Residency

The Data Residency API controls which geographic region stores your organisation's operational data. Required for GDPR, data sovereignty laws, and internal governance policies.

Data residency is a Team-plan feature. Free and Pro plans use the default global region.

Every write is routed using the organisation's pinned `data_region`. The API resolves the region from the request's org context, binds the matching regional D1 and R2 for the handler, and keeps reads and writes isolated to that region for the lifetime of the call.

```mermaid
graph LR
    Resolve --> Bind
    Bind --> Regions
    Regions --> Write
```

Billing and account metadata stay in the global store; only operational records (secrets, audit rows, pods) follow the regional binding.

This page is an **overview** - every endpoint is documented on its own page.

## Endpoints

| Method | Endpoint | Description |
|---|---|---|
| POST | [`/data-region`](/api-reference/data-residency/set-region) | Set the region (starts a migration if data exists) |
| GET | [`/data-region/status`](/api-reference/data-residency/get-status) | Get current region and migration progress |

## Available regions

## Role permissions

Setting the data region requires `org:update` (owner only). Any member can read status.
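The per-request region resolution described above can be sketched as a lookup from the org's pinned region to a pair of regional storage bindings. The binding names and the `eu` region key are invented for illustration:

```python
# Hypothetical map of pinned regions to regional storage bindings.
REGIONAL_BINDINGS = {
    "global": {"d1": "D1_GLOBAL", "r2": "R2_GLOBAL"},
    "eu": {"d1": "D1_EU", "r2": "R2_EU"},
}

def bind_region(org: dict) -> dict:
    # Resolve the org's pinned data_region and hand the handler the
    # matching regional D1 + R2 for the lifetime of the call.
    # Free/Pro orgs without a pinned region fall back to global.
    region = org.get("data_region") or "global"
    return REGIONAL_BINDINGS[region]
```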
## Migration statuses | Status | Meaning | |---|---| | `pending` | Migration queued, not yet started | | `in_progress` | Migration is running | | `copying` | Rows are being transferred | | `verifying` | Destination is being verified against source | | `completed` | Migration finished successfully | | `failed` | Migration hit an error (see `error` field) | --- # Get data region status Source: https://docs.cryptflare.com/api-reference/data-residency/get-status GET /data-region/status - returns the current region and active migration progress. # Get data region status Returns the current data region and any active or recently completed migration. Poll this endpoint after calling [Set region](/api-reference/data-residency/set-region) with a migration response to track progress - `tables_copied` and `rows_copied` increment as data moves. --- ## Required permission --- ## Request --- --- # Set data residency region Source: https://docs.cryptflare.com/api-reference/data-residency/set-region POST /data-region - sets the geographic region where organisation data is stored. Starts a migration if data already exists. # Set data residency region Sets the geographic region where the organisation's data is stored. If the organisation has no existing data, the change is **instant**. If data already exists, an **asynchronous migration** is started and the response includes a migration object you can poll via [Get migration status](/api-reference/data-residency/get-status). Data residency is a Team-plan feature. Free and Pro plans get the default global region and cannot call this endpoint. --- ## Required permission --- ## Request --- --- # Dynamic Secrets Source: https://docs.cryptflare.com/api-reference/dynamic-secrets Mint short-lived credentials on demand from upstream cloud providers with strict TTL and quota enforcement. # Dynamic Secrets Dynamic secrets are short-lived credentials minted on demand from upstream cloud providers (Azure AD, AWS IAM, GCP Service Accounts). 
Each lease has a strict TTL and is automatically revoked when it expires - the application that requested it never sees a long-lived credential, and CryptFlare never stores the issued credential value. Operators configure a **config** with root credentials and a TTL / quota policy. Applications request **leases** on demand, each one bound to the issuing identity (session, service token, or access token). When the parent identity is revoked, every lease it spawned cascade-revokes at the upstream provider within the same request. This page is an **overview** - every endpoint is documented on its own page. Use the sidebar or the endpoint index below to jump to a specific operation. ## Plan availability | Feature | Free | Pro | Team | |---|---|---|---| | Dynamic secrets | - | - | Yes | | Concurrent leases per config | - | - | Configurable | | Identity-bound cascade revoke | - | - | Yes | | Provider-side TTL enforcement | - | - | Yes | | BYOK-encrypted root credentials | - | - | Yes | Dynamic secrets are a Team-plan feature. The platform enforces a system-wide ceiling of **24 hours** on any single lease TTL. Every org gets at most **10** dynamic secret configurations. ## Default role permissions These are the default permissions for a new organisation. Owners can override them in **Organisation Settings > Roles**. 
## Supported providers See the setup guides for step-by-step instructions on registering the upstream identity, granting Graph / IAM permissions, and creating your first config: - [Azure Service Principal setup](/guides/dynamic-secrets/azure) - [AWS IAM (AssumeRole) setup](/guides/dynamic-secrets/aws) ### Azure provider has two modes The `azure_sp` provider supports two strategies for minting credentials, selected via `providerConfig.mode` at create-config time: | Mode | What each lease does | `AZURE_CLIENT_ID` | Issue latency | Best for | |---|---|---|---|---| | **`static_sp`** (default) | Rotates a password credential on one pre-existing App Registration you control | Same for every lease | ~500ms | Fast, simple, no blast-radius trade-offs - the default Vault-compatible path | | **`dynamic_sp`** | Creates a brand new App Registration + Service Principal + role assignments per lease, deletes on revoke | Unique per lease | ~2-5s + up to 30s propagation delay | Per-lease identity in Azure activity logs, per-lease role assignments, compliance / forensic attribution | The full field schema for each mode is on the [Create a config](/api-reference/dynamic-secrets/create-config) page. Missing `mode` defaults to `static_sp` so configs created before dynamic mode existed keep working without a migration. ## TTL resolution rules Every lease's effective TTL is computed at issuance time by clamping the caller's request against every ceiling that applies: ``` effective_ttl = min( caller_requested_ttl ?? default_ttl, max_ttl, system_max_ttl, parent_token_remaining_lifetime, ) ``` The result must be at least **60 seconds** or the request is rejected with `DYNAMIC_TTL_INVALID`. 
### Worked example With `default_ttl = 1800s`, `max_ttl = 3600s`, `system_max_ttl = 86400s`: | Caller asks for | Effective TTL | Why | |---|---|---| | nothing | **1800s** (30 min) | Used the default | | `ttl: 900` | **900s** (15 min) | Below default, accepted | | `ttl: 2700` | **2700s** (45 min) | Between default and max, accepted | | `ttl: 7200` | **3600s** (60 min) | Clamped to max | | `ttl: 86400` | **3600s** (60 min) | Still clamped to max | | `ttl: 1800`, parent token expires in 600s | **600s** | Clamped to parent token's remaining lifetime | The response always includes both `expiresAt` and `requestedTtl` so the caller can detect when their request was clamped. ### Renewal caps Renewal follows the same formula but uses `max_expires_at - now` (anchored to original issue time) as the ceiling. A lease **cannot outlive its `max_expires_at`** no matter how many times it is renewed - this is what prevents a runaway client from keeping a credential alive forever. See [Renew a lease](/api-reference/dynamic-secrets/renew-lease) for the full rules. ## Endpoints ### Configurations Configurations hold the root credentials and TTL/quota policy for one upstream provider integration. Encrypted at rest with AES-256-GCM (platform master secret by default, or the org's BYOK customer key when `useByok: true`). 
| Method | Endpoint | Description | |---|---|---| | | [`/dynamic-secrets/configs`](/api-reference/dynamic-secrets/list-configs) | List dynamic secret configurations | | | [`/dynamic-secrets/configs/:configId`](/api-reference/dynamic-secrets/get-config) | Get a single configuration | | | [`/dynamic-secrets/configs`](/api-reference/dynamic-secrets/create-config) | Create a configuration | | | [`/dynamic-secrets/configs/:configId`](/api-reference/dynamic-secrets/update-config) | Update editable fields on a configuration | | | [`/dynamic-secrets/configs/:configId/validate`](/api-reference/dynamic-secrets/validate-config) | Re-run the provider permission check against the stored root credentials | | | [`/dynamic-secrets/configs/:configId`](/api-reference/dynamic-secrets/delete-config) | Delete a configuration (drains active leases first) | ### Leases Leases record issued credentials. The credential itself is returned in the issue response exactly once and never stored. | Method | Endpoint | Description | |---|---|---| | | [`/dynamic-secrets/configs/:configId/lease`](/api-reference/dynamic-secrets/issue-lease) | Issue a new lease (optionally wrapped) | | | [`/dynamic-secrets/leases/:leaseId/renew`](/api-reference/dynamic-secrets/renew-lease) | Renew an active lease | | | [`/dynamic-secrets/leases`](/api-reference/dynamic-secrets/list-leases) | List leases with optional filters | | | [`/dynamic-secrets/leases/:leaseId`](/api-reference/dynamic-secrets/revoke-lease) | Revoke a lease | | | [`/dynamic-secrets/leases/:leaseId/force-revoke`](/api-reference/dynamic-secrets/force-revoke-lease) | Force-revoke an irrevocable lease (operator escape hatch) | | | [`/dynamic-secrets/unwrap/:token`](/api-reference/dynamic-secrets/unwrap-credentials) | Exchange a wrap token for credentials | ## Lease lifecycle Each lease moves through a small state machine: | State | Meaning | |---|---| | `pending` | Row created but workflow not yet started. Transient. 
| | `active` | Workflow is sleeping until TTL. The credential is valid at the upstream provider. | | `expired` | The workflow's `step.sleep` fired and the credential was successfully revoked. | | `revoked` | A user, cascade, or operator manually revoked the lease before its TTL. | | `irrevocable` | All revoke attempts failed. Credential may still be valid at the provider - ops investigation required. | ### Identity binding (cascade revoke) Every lease is bound to the identity that issued it via `parent_token_id` and `parent_token_type`. When that parent identity is revoked, every lease created under it cascade-revokes at the upstream provider: | Trigger | Effect on active leases | |---|---| | User logs out | All session-bound leases are revoked | | Service token deleted | All leases issued under that service token are revoked | | Access token deleted | Same | | Session expires by sliding window | Leases die naturally - their TTL was clamped to fit the session at issue time | ## Quotas Each config has two independent quotas, both enforced at issuance time: | Quota | What it limits | |---|---| | `maxConcurrentLeases` | Total active leases for the config across all callers | | `maxLeasesPerIdentity` | Active leases per session, service token, or access token | Either quota tripping returns `429 DYNAMIC_LEASE_QUOTA_EXCEEDED` with a count in the error message. ## Security model - **Root credentials encrypted at rest** with AES-256-GCM. The encryption key is derived per-config via HKDF salt `dynamic_root_` from the platform master secret (or the org's BYOK customer key when `useByok: true`). - **Lease credentials returned exactly once.** CryptFlare does not store them. - **Audit log records only `configId`, `leaseId`, `expiresAt`, and `externalId`** - never the credential value. - **Tenant isolation** at every query layer by `organisation_id`. The `external_id` is opaque to CryptFlare and only meaningful in the customer's own cloud account. 
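The lease lifecycle forms a small state machine. A sketch of the legal transitions, inferred from the state table and the force-revoke escape hatch (names assumed):

```python
# Legal transitions inferred from the lifecycle table above.
LEASE_TRANSITIONS = {
    "pending": {"active"},                            # workflow starts
    "active": {"expired", "revoked", "irrevocable"},  # TTL fires / revoke / all revokes fail
    "expired": set(),                                 # terminal
    "revoked": set(),                                 # terminal
    "irrevocable": {"revoked"},                       # operator force-revoke
}

def transition(state: str, new_state: str) -> str:
    # Reject any move the lifecycle does not allow.
    if new_state not in LEASE_TRANSITIONS[state]:
        raise ValueError(f"illegal lease transition {state} -> {new_state}")
    return new_state
```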
--- # Create a dynamic secret config Source: https://docs.cryptflare.com/api-reference/dynamic-secrets/create-config POST /dynamic-secrets/configs - register a new upstream provider integration and store encrypted root credentials. # Create a dynamic secret config Creates a new dynamic secret configuration. CryptFlare **validates the supplied root credentials against the upstream provider before persisting them**. If validation fails, the row is not stored and a `400 DYNAMIC_PROVIDER_ERROR` is returned with the upstream error message. Root credentials are encrypted at rest with AES-256-GCM using a per-config key derived via HKDF from the platform master secret (or, when `useByok: true`, the organisation's customer-managed key). They are never returned by any endpoint. --- ## Required permission --- ## Per-provider shapes ### `azure_sp` The Azure provider supports **two modes**, selected via `providerConfig.mode`. The backend defaults to `static_sp` when `mode` is omitted so configs created before dynamic mode existed continue to work unchanged. 
| | **`static_sp`** (default, Vault "existing SP" pattern) | **`dynamic_sp`** (Vault "dynamic SP" pattern) | |---|---|---| | Per-lease behaviour | Mints a new password credential on a pre-existing App Registration you control | Creates a brand new App Registration + Service Principal + role assignments + password per lease | | `AZURE_CLIENT_ID` in lease | Same across every lease (the root App's appId) | Unique per lease | | Issue latency | ~500ms (2 Graph calls) | ~2-5 seconds (5-7 Graph / ARM calls) | | Revoke | `removePassword` by `keyId` | `DELETE /applications/{id}` - cascades to SP, passwords, role assignments | | Azure activity-log attribution | Every lease looks identical | Per-lease identity | | Root permission - Graph | `Application.ReadWrite.All` | `Application.ReadWrite.All` | | Root permission - ARM | None | `Microsoft.Authorization/roleAssignments/write` (typically via **User Access Administrator**) at each target scope | | Propagation delay | None | Up to 30 seconds - fresh SPs are not immediately visible on the management plane | | Scale ceiling | Unlimited | Azure soft-limits tenants to ~50k App Registrations and throttles creation rate | #### `static_sp` fields | Field | Description | |---|---| | `providerConfig.mode` | `"static_sp"` (or omitted - this is the default) | | `providerConfig.tenantId` | Azure AD tenant ID (UUID) | | `providerConfig.appObjectId` | Object ID of the root App Registration (NOT the Application ID) | | `providerConfig.displayName` | Optional display name returned to lease consumers as `AZURE_DISPLAY_NAME` | | `rootCredentials.clientId` | The root App Registration's Application (client) ID | | `rootCredentials.clientSecret` | Long-lived client secret. Encrypted at rest, never returned. | #### `dynamic_sp` fields | Field | Description | |---|---| | `providerConfig.mode` | `"dynamic_sp"` | | `providerConfig.tenantId` | Azure AD tenant ID (UUID) | | `providerConfig.displayNamePrefix` | Optional. 
Prefix for the display name of each minted App Registration. Defaults to `cryptflare-lease`. The lease id is appended automatically. | | `providerConfig.roleAssignments` | **Required.** Array of `{ scope, roleDefinitionId }` pairs. Every minted Service Principal receives all of these role assignments at issue time. At least one entry is required - otherwise the lease credentials authenticate but have zero resource access. | | `providerConfig.roleAssignments[].scope` | ARM resource scope the role is assigned at. Must start with `/`. Examples: `/subscriptions/<subscriptionId>`, `/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>`, or a specific resource URI. | | `providerConfig.roleAssignments[].roleDefinitionId` | ARM role definition ID. Accepts both the short form (`/providers/Microsoft.Authorization/roleDefinitions/<roleDefinitionId>`) and the subscription-qualified form (`/subscriptions/<subscriptionId>/providers/Microsoft.Authorization/roleDefinitions/<roleDefinitionId>`). | | `rootCredentials.clientId` | The root App Registration's Application (client) ID | | `rootCredentials.clientSecret` | Long-lived client secret. Encrypted at rest, never returned. | Common built-in Azure role definition IDs (stable GUIDs): | Role | `roleDefinitionId` | |---|---| | Reader | `/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7` | | Contributor | `/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c` | | Storage Blob Data Reader | `/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1` | | Key Vault Secrets User | `/providers/Microsoft.Authorization/roleDefinitions/4633458b-17de-408a-b874-0445c86b69e6` | See the [Azure built-in roles reference](https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles) for the full list. ### `aws_iam` | Field | Description | |---|---| | `providerConfig.region` | AWS region (e.g. 
`us-east-1`) | | `providerConfig.roleArn` | ARN of the target IAM role CryptFlare will assume for each lease | | `providerConfig.externalId` | Optional. Must match the `sts:ExternalId` condition in the target role's trust policy if set. | | `providerConfig.sessionPolicy` | Optional JSON policy that further restricts the lease credentials beyond the target role. | | `rootCredentials.accessKeyId` | IAM user access key id with `sts:AssumeRole` permission on the target role | | `rootCredentials.secretAccessKey` | Matching secret | | `rootCredentials.sessionToken` | Optional. For role-chained root auth. | ## Request --- --- # Delete a dynamic secret config Source: https://docs.cryptflare.com/api-reference/dynamic-secrets/delete-config DELETE /dynamic-secrets/configs/:configId - drain active leases at the provider, then hard-delete the configuration. # Delete a dynamic secret config Deletes a configuration. **The endpoint drains active leases at the upstream provider before removing the row.** This is the path the Terraform provider takes on `terraform destroy` - it is safe by default and requires no `force` flag. Order of operations on the server: 1. New lease issuance is blocked first, so no new leases can be issued during the drain. 2. Loop through the lease table and call `provider.revoke()` for each - failed revocations flip the offending lease to `irrevocable` for ops review but do not abort the deletion. 3. FK `onDelete: cascade` then wipes any remaining lease rows. The response includes the drain breakdown so the caller can audit what was removed. --- ## Required permission --- ## Request --- --- # Force-revoke an irrevocable lease Source: https://docs.cryptflare.com/api-reference/dynamic-secrets/force-revoke-lease POST /dynamic-secrets/leases/:leaseId/force-revoke - operator escape hatch for leases stuck in irrevocable state. # Force-revoke an irrevocable lease Operator escape hatch for leases stuck in `irrevocable` state. 
Marks the lease as `revoked` in CryptFlare's database **without calling the upstream provider**. Use this only after you have manually removed the credential at the provider yourself - for example, deleted the password in the Azure portal using the `keyId` from the lease's `external_id`. This endpoint will not invalidate the credential at the provider. It only clears our database state so the dashboard stops flagging the lease for attention. The endpoint refuses to act on leases that are not in `irrevocable` state. For healthy leases use the standard [revoke endpoint](/api-reference/dynamic-secrets/revoke-lease) instead. --- ## Required permission --- ## Request --- --- # Get a dynamic secret config Source: https://docs.cryptflare.com/api-reference/dynamic-secrets/get-config GET /dynamic-secrets/configs/:configId - fetch a single configuration without root credentials. # Get a dynamic secret config Returns a single configuration. Encrypted root credentials are never included in the response - they are only decrypted server-side at lease issue / revoke time. --- ## Required permission --- ## Request --- --- # Issue a dynamic secret lease Source: https://docs.cryptflare.com/api-reference/dynamic-secrets/issue-lease POST /dynamic-secrets/configs/:configId/lease - mint a fresh short-lived credential at the upstream provider. # Issue a lease Mints a fresh credential at the upstream provider and starts a durable Cloudflare Workflow that will revoke it at the resolved TTL. The credential is returned in the response body **exactly once** - CryptFlare does not store it. Use it immediately and discard. The lease is bound to the requesting identity (session, service token, or access token). When that identity is revoked, the lease cascade-revokes at the upstream provider. --- ## Required permission --- The effective TTL is `min(requested_ttl ?? default_ttl, max_ttl, system_max_ttl, parent_token_remaining_lifetime)`. 
The result must be at least 60 seconds or the request is rejected with `DYNAMIC_TTL_INVALID`. See the [overview](/api-reference/dynamic-secrets#ttl-resolution-rules) for the full rules. When the caller includes `wrap: { ttl }`, the credentials are not returned in the response body. Instead, CryptFlare stores them in a short-lived encrypted KV entry and returns a single-use exchange token. A separate process (or the same caller, later) redeems the token via [`POST /unwrap/:token`](/api-reference/dynamic-secrets/unwrap-credentials) to retrieve the credentials. Useful for handing credentials off through insecure channels (CI logs, pastebin relays, webhooks) where a short-lived token is safer than the raw values. ## Request ## Wrapped issuance example When `wrap` is present, credentials do not appear in the response body. The caller redeems the returned token via [`POST /unwrap/:token`](/api-reference/dynamic-secrets/unwrap-credentials). ## AWS IAM example Same endpoint, different provider. The AWS IAM provider returns `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, and `AWS_REGION` - the standard env vars the `aws` CLI and `terraform-provider-aws` pick up automatically. --- --- # List dynamic secret configs Source: https://docs.cryptflare.com/api-reference/dynamic-secrets/list-configs GET /dynamic-secrets/configs - returns all dynamic secret configurations for the organisation, without root credentials. # List dynamic secret configs Returns all dynamic secret configurations for the organisation. Encrypted root credentials are never included in the response. 
--- ## Required permission --- ## Request --- --- # List dynamic secret leases Source: https://docs.cryptflare.com/api-reference/dynamic-secrets/list-leases GET /dynamic-secrets/leases - returns lease history for the organisation with optional filters. # List leases Returns lease history for the organisation. Lease records never include credential values. --- ## Required permission --- ## Request --- --- # Renew a dynamic secret lease Source: https://docs.cryptflare.com/api-reference/dynamic-secrets/renew-lease POST /dynamic-secrets/leases/:leaseId/renew - Vault-style lease renewal bounded by max_expires_at. # Renew a lease Extends an active lease. Vault-style renewal semantics apply: - The new expiry is `min(now + increment, max_expires_at, now + system_max_ttl, parent_token_remaining_lifetime)`. - `max_expires_at` is **anchored to the original issue time** and NEVER advanced - a lease cannot outlive its hard cap regardless of how many times it is renewed. - If the resolved TTL is below 60s (the floor), the request returns `400 DYNAMIC_TTL_INVALID` and the caller should issue a new lease instead. - If `now >= max_expires_at`, the request returns `400 DYNAMIC_LEASE_EXHAUSTED` and the caller MUST issue a new lease. ## Per-provider behaviour Both production providers (Azure SP and AWS STS) mint **immutable** credentials. Renewal therefore revokes the old credential and issues a new one under the same lease id. The response includes the fresh values in `credentials` and `credentialsRotated: true` - callers must replace their environment variables with the new values and re-run anything that was holding the old secret. Providers that support in-place extension (none in v1) would return `credentialsRotated: false` and `credentials: null` - the existing credential values keep working until the new deadline. 
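The renewal arithmetic above can be sketched directly. This is an illustrative model only - the function and field names are assumptions, and `parent_token_remaining_lifetime` is modelled here as an absolute `parentExpiresAt` boundary:

```typescript
type RenewInput = {
  now: number;                // epoch seconds
  incrementSeconds: number;   // requested extension
  maxExpiresAt: number;       // anchored to the original issue time, never advanced
  systemMaxTtlSeconds: number;
  parentExpiresAt: number;    // when the parent identity's lifetime ends
};

type RenewResult =
  | { ok: true; newExpiresAt: number }
  | { ok: false; error: 'DYNAMIC_LEASE_EXHAUSTED' | 'DYNAMIC_TTL_INVALID' };

function resolveRenewal(input: RenewInput): RenewResult {
  const { now, incrementSeconds, maxExpiresAt, systemMaxTtlSeconds, parentExpiresAt } = input;
  // Past the hard cap: the lease cannot be renewed, only reissued.
  if (now >= maxExpiresAt) return { ok: false, error: 'DYNAMIC_LEASE_EXHAUSTED' };
  const newExpiresAt = Math.min(
    now + incrementSeconds,
    maxExpiresAt,
    now + systemMaxTtlSeconds,
    parentExpiresAt,
  );
  // 60-second floor on the resolved TTL.
  if (newExpiresAt - now < 60) return { ok: false, error: 'DYNAMIC_TTL_INVALID' };
  return { ok: true, newExpiresAt };
}
```

Note how `maxExpiresAt` clamps the result even when the requested increment is large - repeated renewals asymptotically approach the cap but never pass it.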
--- ## Required permission --- ## Request --- --- # Revoke a dynamic secret lease Source: https://docs.cryptflare.com/api-reference/dynamic-secrets/revoke-lease DELETE /dynamic-secrets/leases/:leaseId - manually revoke an active lease at the upstream provider. # Revoke a lease Manually revokes a lease. The endpoint calls `provider.revoke()` synchronously so the caller knows it succeeded, then terminates the underlying Cloudflare Workflow so it does not fire again at TTL. The endpoint is **idempotent**: calling delete on an already-revoked or expired lease returns success with `alreadyRevoked: true`. AWS STS tokens are self-expiring and cannot be revoked at AWS before their `DurationSeconds` expires. For AWS leases this endpoint still marks the lease revoked in our database and kills the workflow, but the credential itself remains valid at AWS until its natural expiry. Use short `maxTtlSeconds` and session policies to limit blast radius. --- ## Required permission --- ## Request --- --- # Unwrap a credential token Source: https://docs.cryptflare.com/api-reference/dynamic-secrets/unwrap-credentials POST /dynamic-secrets/unwrap/:token - exchange a single-use wrap token for the underlying credentials. # Unwrap a credential token Exchanges a short-lived wrap token (minted earlier by a [`POST /configs/:configId/lease`](/api-reference/dynamic-secrets/issue-lease) request with `wrap: { ttl }`) for the underlying credentials. The exchange is **single-use**: once the token is redeemed, the KV entry is deleted atomically so the same token cannot be unwrapped twice. The token is also **org-scoped** - attempting to unwrap a token issued under a different organisation returns `404 DYNAMIC_WRAP_NOT_FOUND`. This endpoint still requires a valid CryptFlare session or service token with `dynamic_secrets:issue` permission. 
The wrap token is not a bearer credential - it's a capability that only works alongside an authenticated request. Combined with 32 bytes of entropy, the token space is effectively unguessable within the exchange window. --- ## Required permission --- ## Typical flow 1. `POST /configs/:configId/lease` with `wrap: { ttl: 60 }`. The response contains a `wrapped.token` instead of the credentials. 2. Hand the token to the consumer through any channel - CI output, paste buffer, Slack message, task queue. The token alone is useless to anyone without CryptFlare auth. 3. The consumer calls `POST /unwrap/:token` with their own CryptFlare auth. The KV entry is atomically read and deleted. Subsequent unwrap attempts on the same token return 404. ## Request --- --- # Update a dynamic secret config Source: https://docs.cryptflare.com/api-reference/dynamic-secrets/update-config PATCH /dynamic-secrets/configs/:configId - update editable fields on an existing configuration. # Update a dynamic secret config Updates editable fields on a configuration. If `rootCredentials` is supplied, the new credentials are validated against the upstream provider before being persisted. The `useByok` flag is **immutable** - it cannot be toggled after creation, because re-encryption would require the plaintext root credentials which CryptFlare does not keep. To move a config across encryption sources, delete and recreate it. --- ## Required permission --- ## Request --- --- # Re-run provider permission check Source: https://docs.cryptflare.com/api-reference/dynamic-secrets/validate-config POST /dynamic-secrets/configs/:configId/validate - re-run the provider adapter's validate() hook against the stored root credentials and return pass/fail without modifying state. # Re-run provider permission check Re-runs the provider adapter's `validate()` hook against the **currently-stored** root credentials and `providerConfig`. 
Returns `{ valid, error, checkedAt, provider }` so an operator can confirm the configured identity still has every permission it needs - before or after a lease-issue failure - without having to attempt a lease issue just to surface the problem. This endpoint is **read-only**: it does not mutate any state. For the Azure provider it acquires a Graph token, creates and immediately deletes a throwaway Application to probe write access, then (in `dynamic_sp` mode) acquires an ARM token and probes role-assignment read permission at every configured scope. Each step produces a targeted error message pointing at the exact misconfiguration to fix. The dashboard's **Check permissions** button on the configuration edit page is a thin wrapper around this endpoint. Use it from the UI whenever you grant a new permission in your cloud console and want to verify it before the next lease request. Calling this endpoint emits a `dynamic_config.validated` audit log entry carrying the pass/fail outcome and (when failed) the error message, so your audit trail shows who ran the check and what came back. --- ## Required permission --- ## Response shape The endpoint always returns `200 OK` when the check runs, regardless of whether validation passed. A `valid: false` result is not an HTTP error - it is a structured answer. The HTTP status codes are used only for errors outside the check itself (404 missing config, 403 plan gate, 500 unexpected server fault). If the provider adapter's `validate()` function throws instead of returning `{ valid: false, error }`, the handler wraps the exception as `{ valid: false, error: "Provider validate() threw: <message>" }` so the UI still renders a useful result. | Field | Description | |---|---| | `data.valid` | `true` if every probe passed. `false` if any step failed. | | `data.error` | `null` on success. On failure, a human-readable string identifying the exact problem (missing permission, wrong scope, invalid credentials, etc.) and often pointing at the fix. 
| | `data.provider` | Provider key for the config - `azure_sp`, `aws_iam`, etc. | | `data.checkedAt` | ISO 8601 timestamp of when the check completed. | --- ## Common error messages (azure_sp) These are the strings the `azure_sp` provider returns in `data.error` for the most frequent misconfigurations. Each one includes a fix hint. | Error | What it means | Fix | |---|---|---| | `Root credentials rejected by Azure (401)` | The clientId/clientSecret pair is wrong, expired, or typoed | Regenerate the client secret on the root App Registration and PATCH the config's `rootCredentials` | | `Root credentials cannot read the App Registration` | `Application.ReadWrite.All` is not granted, or was granted as Delegated instead of Application | Grant `Application.ReadWrite.All` under **Application permissions** with admin consent. Search the Entra picker for "Read and write all applications" - the dotted identifier will not match the search box | | `Root credentials can read but not write to the App Registration` | Read permission is granted but write is missing, OR you are using `Application.ReadWrite.OwnedBy` and the SPN is not listed as an owner | Switch to `Application.ReadWrite.All` (recommended) or add the SPN as an owner of its own App Registration | | `App Registration not found - check appObjectId` | The `providerConfig.appObjectId` does not exist in the tenant, or you supplied the Application ID instead of the Object ID | Copy the **Object ID** (not the Application ID) from the App Registration Overview page | | `Root credentials could not exchange for an Azure Resource Manager token` (dynamic_sp only) | The client credentials work for Graph but fail against ARM - rare, usually a tenant-level condition policy | Check the Entra sign-in logs for the root SP; look for a conditional access policy blocking the ARM resource | | `Root credentials do not have permission to read role assignments at scope ` (dynamic_sp only) | The root SP is missing 
`Microsoft.Authorization/roleAssignments/write` at that scope | Assign the root App **User Access Administrator** (or Owner) at the listed scope | | `Scope not found: <scope>` (dynamic_sp only) | The `scope` path in a role assignment entry is wrong | Verify the scope exists. Format: `/subscriptions/<subscriptionId>` or `/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>` - lowercase `resourceGroups`, no trailing slash | --- ## Request --- --- # Environments Source: https://docs.cryptflare.com/api-reference/environments Isolated secret containers within a workspace. Create, list, and delete environments. # Environments Environments are the isolated containers where [secrets](/api-reference/secrets) actually live. Each environment is a fully separate keyspace - an environment named `production` in one workspace cannot see keys from the `production` environment of another. Typical setups are `development`, `staging`, and `production`. Some teams add `preview` for per-PR environments or `canary` for gradual rollouts. This page is an **overview** - every endpoint is documented on its own page. ## Endpoints | Method | Endpoint | Description | |---|---|---| | | [`/environments`](/api-reference/environments/list-environments) | List every environment in a workspace | | | [`/environments`](/api-reference/environments/create-environment) | Create an environment | | | [`/environments/:env`](/api-reference/environments/delete-environment) | Delete an environment and all its secrets | | | [`/environments/:env/resolve-path`](/api-reference/environments/resolve-path) | Resolve a nested URL path to an env, pod, or secret | ## Scoping rules - **Keyspace isolation:** a secret `DATABASE_URL` in `production` has no relationship with `DATABASE_URL` in `staging`. Rotations, versions, and values are independent per environment. - **Plan limits:** the number of environments allowed per workspace depends on your plan. Exceeding it returns `403 PLAN_LIMIT_REACHED`. 
- **Cascade delete:** deleting an environment removes every secret, pod, version history row, and service token scoped to it. --- # Create an environment Source: https://docs.cryptflare.com/api-reference/environments/create-environment POST /environments - creates a new environment inside a workspace. # Create an environment Creates a new environment inside a workspace. Common setups are `development`, `staging`, and `production`, but any slug is allowed - teams often add `preview` for PR environments. Environments are subject to your plan's per-workspace environment limit. Exceeding the limit returns `403 PLAN_LIMIT_REACHED`. --- ## Required permission --- ## Request --- --- # Delete an environment Source: https://docs.cryptflare.com/api-reference/environments/delete-environment DELETE /environments/:env - permanently deletes an environment and all its secrets, pods, and version history. # Delete an environment Permanently deletes an environment along with every secret, pod, and version history row scoped to it. Service tokens and rotation policies bound to the environment are revoked as part of the cascade. There is no soft delete. Secrets inside the environment are gone the instant the call succeeds - rotate them to a safe placeholder value first if you have any doubt. --- ## Required permission --- ## Request --- --- # List environments Source: https://docs.cryptflare.com/api-reference/environments/list-environments GET /environments - returns all environments inside a workspace. # List environments Returns every environment inside a workspace. The response includes live `secrets_count` and `pods_count` metrics so dashboards can render the environment picker with usage indicators in one round-trip. --- ## Required permission --- ## Request --- --- # Resolve a path Source: https://docs.cryptflare.com/api-reference/environments/resolve-path GET /environments/:env/resolve-path - resolves a slash-joined URL tail to an env, pod, or secret. 
# Resolve a path Resolves a slash-joined `path` query argument (for example `ops/database/APP_SECRET`) to whichever of the environment root, a pod, or a secret it points at. This endpoint powers the nested URL shape used by the CryptFlare vault web UI: every pod segment in the URL is a pod slug, and the final segment may be either a pod slug or a secret key. Resolution rules: - An empty `path` returns `type=env`. - Each segment is tried first as a pod slug under the current scope (env root, then the deepest pod resolved so far). - If the pod lookup fails on the **last** segment, it is re-tried as a secret key scoped to whatever pod or env-root the walk reached. - If a slug matches both a pod and a secret in the same scope, the pod wins - pods are containers and the user can always navigate into the pod to open the secret from there. - A maximum of five pod segments (`MAX_POD_DEPTH`) plus an optional trailing secret key is accepted. Longer paths return `404`. --- ## Required permission --- ## Request --- --- # Errors Source: https://docs.cryptflare.com/api-reference/errors Error response format, codes, and handling best practices # Errors The CryptFlare API returns errors in a consistent format following [RFC 9457](https://www.rfc-editor.org/rfc/rfc9457) (Problem Details for HTTP APIs). Every error response includes a machine-readable error code, a human-readable message, and the HTTP status code. 
## Error response format All errors share this structure: ```json { "error": "RESOURCE_NOT_FOUND", "message": "The requested secret was not found", "status": 404, "requestId": "550e8400-e29b-41d4-a716-446655440000" } ``` The flow behind that envelope: the client sends an authenticated request; the API invokes the handler, which validates inputs, runs its work, and throws `ApiException(code, status)` on failure; the error middleware maps the code to an RFC 9457 body and attaches the `requestId` from context; the client receives `problem+json` carrying `error`, `message`, `status`, and `requestId`, branches on the error code, and logs the `requestId`. Every failure takes the same path, so clients can rely on a stable envelope regardless of which handler threw. ## Validation errors When request body or query parameters fail validation, the API returns a `422` status with field-level error details: ```json { "error": "VALIDATION_FAILED", "message": "Request validation failed", "status": 422, "requestId": "550e8400-e29b-41d4-a716-446655440000", "details": [ { "path": "key", "message": "Must be UPPER_SNAKE_CASE (e.g., DATABASE_URL)" }, { "path": "value", "message": "Required" } ] } ``` ## Error codes by category ### Authentication | Code | Status | Description | |------|--------|-------------| | `AUTH_INVALID_TOKEN` | 401 | The provided API token is invalid or revoked | | `AUTH_TOKEN_EXPIRED` | 401 | The API token has expired | | `AUTH_SESSION_EXPIRED` | 401 | The session cookie has expired | | `AUTH_MISSING_HEADER` | 401 | No Authorization header or session cookie provided | | `AUTH_TOTP_REQUIRED` | 401 | Two-factor authentication code required | | `AUTH_TOTP_INVALID` | 401 | Invalid TOTP code | ### Authorization | Code | Status | Description | |------|--------|-------------| | `RBAC_FORBIDDEN` | 403 | You do not have permission to perform this action | | `RBAC_INSUFFICIENT_ROLE` | 403 | Your role does not have the required permission | | `RBAC_OWNERSHIP_REQUIRED` | 403 | Only the organisation owner can perform this 
action | ### Resources | Code | Status | Description | |------|--------|-------------| | `RESOURCE_NOT_FOUND` | 404 | The requested resource does not exist | | `RESOURCE_CONFLICT` | 409 | A resource with this identifier already exists | | `RESOURCE_GONE` | 410 | The resource has been permanently deleted | ### Validation | Code | Status | Description | |------|--------|-------------| | `VALIDATION_FAILED` | 422 | Request body or query parameters failed validation. Check the `details` array for field-level errors. | ### Idempotency | Code | Status | Description | |------|--------|-------------| | `IDEMPOTENCY_KEY_COLLISION` | 422 | An `Idempotency-Key` header was reused with a different request body. Generate a fresh key for a different payload. See the [idempotency guide](/guides/idempotency). | ### Rate limiting | Code | Status | Description | |------|--------|-------------| | `RATE_LIMITED` | 429 | Too many requests. Check `Retry-After` header for seconds to wait. | | `QUOTA_EXCEEDED` | 429 | Daily organisation quota exceeded. Resets at midnight UTC. | ### Billing | Code | Status | Description | |------|--------|-------------| | `BILLING_PLAN_LIMIT` | 403 | Your current plan does not support this feature or limit has been reached | | `BILLING_PAYMENT_REQUIRED` | 402 | Payment is required to continue using this feature | ### Pods | Code | Status | Description | |------|--------|-------------| | `POD_MAX_DEPTH_EXCEEDED` | 400 | Pod nesting exceeds the maximum of 5 levels | | `POD_NOT_EMPTY` | 400 | Pod contains secrets or sub-pods and cannot be deleted | ### Secrets | Code | Status | Description | |------|--------|-------------| | `SECRET_APPROVAL_REQUIRED` | 403 | This secret requires approval before it can be modified | | `SECRET_DECRYPTION_FAILED` | 500 | Failed to decrypt the secret value. Contact support. 
| ### Audit | Code | Status | Description | |------|--------|-------------| | `AUDIT_CHAIN_BROKEN` | 500 | Audit log integrity verification detected a broken hash chain. Contact support. | ### MCP | Code | Status | Description | |------|--------|-------------| | `MCP_NOT_GRANTED` | 403 | Token lacks the `mcp:use` permission required to reach [`mcp.cryptflare.com`](/security/mcp-access). | | `MCP_TOOL_NOT_FOUND` | 404 | Requested tool name does not exist in the MCP registry. | ### SSO | Code | Status | Description | |------|--------|-------------| | `SSO_PROVIDER_ERROR` | 502 | The SSO identity provider returned an error | | `SSO_ASSERTION_INVALID` | 401 | The SSO assertion or token is invalid | ### Internal | Code | Status | Description | |------|--------|-------------| | `INTERNAL_ERROR` | 500 | An unexpected internal error occurred. If this persists, contact support with the `requestId`. | ## HTTP status code summary | Status | Meaning | When it happens | |--------|---------|-----------------| | `400` | Bad Request | Malformed request syntax | | `401` | Unauthorized | Missing or invalid authentication | | `402` | Payment Required | Feature requires a paid plan | | `403` | Forbidden | Authenticated but insufficient permissions | | `404` | Not Found | Resource does not exist | | `409` | Conflict | Duplicate resource (e.g., slug already taken) | | `410` | Gone | Resource permanently deleted | | `422` | Unprocessable Entity | Validation failed on request body/params | | `429` | Too Many Requests | Rate limit or quota exceeded | | `500` | Internal Server Error | Unexpected server error | | `502` | Bad Gateway | External service (SSO provider) error | ## Handling errors ### Example: handling errors in JavaScript ```typescript const res = await fetch('https://api.cryptflare.com/v1/organisations/org_xyz/workspaces', { headers: { Authorization: `Bearer ${token}` }, }); if (!res.ok) { const error = await res.json(); switch (error.error) { case 'AUTH_TOKEN_EXPIRED': // 
Refresh the token and retry break; case 'RATE_LIMITED': // Wait and retry after Retry-After seconds const retryAfter = res.headers.get('Retry-After'); await sleep(Number(retryAfter) * 1000); break; case 'VALIDATION_FAILED': // Show field-level errors to the user error.details.forEach((d) => { console.error(`${d.path}: ${d.message}`); }); break; default: console.error(`${error.error}: ${error.message}`); } } ``` ### Best practices - **Always check the `error` field** for programmatic handling, not the `message` (messages may change) - **Log the `requestId`** when reporting issues to support - **Implement exponential backoff** for `429` responses using the `Retry-After` header - **Validate client-side** before sending requests to avoid `422` errors - **Handle `401` gracefully** by redirecting to login or refreshing tokens --- # Event Subscriptions Source: https://docs.cryptflare.com/api-reference/event-subscriptions Subscribe to audit events and receive real-time HTTP notifications with HMAC-SHA256 signed payloads. # Event Subscriptions Event subscriptions are HTTPS webhooks that CryptFlare POSTs to whenever something interesting happens - a secret is rotated, a member is invited, an SSO connection changes. Every payload is signed with HMAC-SHA256 using a per-subscription secret so you can verify authenticity on your side. This page is an **overview** - every endpoint is documented on its own page. 
## Subscription endpoints | Method | Endpoint | Description | |---|---|---| | | [`/events/subscriptions`](/api-reference/event-subscriptions/list-subscriptions) | List every subscription in the organisation | | | [`/events/subscriptions`](/api-reference/event-subscriptions/create-subscription) | Create a new subscription | | | [`/events/subscriptions/:id`](/api-reference/event-subscriptions/update-subscription) | Update name, URL, events, headers, or active state | | | [`/events/subscriptions/:id`](/api-reference/event-subscriptions/delete-subscription) | Permanently delete a subscription and its delivery log | | | [`/events/subscriptions/:id/test`](/api-reference/event-subscriptions/test-subscription) | Send a `test.ping` event to verify the endpoint | | | [`/events/subscriptions/:id/rotate-secret`](/api-reference/event-subscriptions/rotate-secret) | Rotate the HMAC signing secret with a 24h grace period | | | [`/events/subscriptions/:id/replay`](/api-reference/event-subscriptions/replay) | Re-send events from a time window (up to 100) | ## Delivery log endpoints | Method | Endpoint | Description | |---|---|---| | | [`/events/deliveries`](/api-reference/event-subscriptions/list-deliveries) | List recent delivery attempts across every subscription | | | [`/events/deliveries/:deliveryId/redeliver`](/api-reference/event-subscriptions/redeliver) | Retry a single failed delivery | ## Organisation-level endpoints | Method | Endpoint | Description | |---|---|---| | | [`/events/status`](/api-reference/event-subscriptions/get-status) | Check whether events are enabled for this organisation | | | [`/events/toggle`](/api-reference/event-subscriptions/toggle-events) | Enable or disable events org-wide (owner only) | ## How matching works When CryptFlare fires an event, the dispatcher scans every active subscription for the organisation, filters by `event_types` and optional `resource_filter`, and fans out the payload to each match in parallel. 
Failed deliveries retry independently without blocking other subscribers. The dispatch pipeline is: scan active subscriptions → event-type filter → resource filter → dispatch. Non-matching subscriptions short-circuit at the filter step, so adding narrow subscriptions never slows down unrelated event types.

## Payload format

Every delivery sends a JSON body with this shape:

```json
{
  "id": "evt_abc123",
  "type": "secret.rotated",
  "timestamp": "2026-04-11T12:00:00Z",
  "organisation": { "id": "org_xyz" },
  "actor": { "id": "usr_456", "role": "developer" },
  "resource": { "type": "secret", "id": "sec_789" },
  "metadata": { "key": "DATABASE_URL", "version": 3 },
  "source": "dashboard"
}
```

Secret **values** are never included in event payloads. Only key names and metadata appear.

## Verifying signatures

Each delivery includes an `X-CryptFlare-Signature` header with the format `sha256=<hex digest>`. To verify:

1. Read the raw request body as a UTF-8 string
2. Compute HMAC-SHA256 of the body using your signing secret
3. Compare the hex digest to the value after `sha256=` (use a constant-time comparison)

```javascript
import crypto from 'node:crypto';

function verifySignature(body, secret, signature) {
  const expected = crypto
    .createHmac('sha256', secret)
    .update(body)
    .digest('hex');
  const a = Buffer.from(`sha256=${expected}`);
  const b = Buffer.from(signature);
  // timingSafeEqual prevents timing attacks; it requires equal-length inputs
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```

## Delivery headers

| Header | Description |
|---|---|
| `Content-Type` | `application/json` |
| `X-CryptFlare-Signature` | `sha256=<hex digest>` |
| `X-CryptFlare-Event` | Event type (e.g. `secret.rotated`) |
| `X-CryptFlare-Delivery` | Unique delivery ID |

## Retry behaviour

Failed deliveries are retried **up to 3 times** with exponential backoff (1s, 2s). A delivery is considered failed when the target returns a non-2xx status code or the request times out (10s). After all retries fail, the subscription's `failedCount` increments. **At 10 consecutive failures the subscription is auto-disabled** to protect CryptFlare's delivery queue from a broken endpoint. Re-enabling it resets the counter.
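To make the receiving side concrete, here is a hedged sketch of a delivery handler built on the verification steps above. `isAuthentic` and `handleDelivery` are illustrative names, not part of any CryptFlare SDK; wire the handler into whatever HTTP server you run, and make sure it sees the **raw** request body, not a re-serialised one.

```javascript
import crypto from 'node:crypto';

// Constant-time check of the X-CryptFlare-Signature header against the raw body.
function isAuthentic(rawBody, secret, header) {
  const expected =
    'sha256=' + crypto.createHmac('sha256', secret).update(rawBody).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(header ?? '');
  // During a secret rotation grace period, also accept a match against
  // headers['x-cryptflare-signature-previous'] computed with the old secret.
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}

// Decide how to answer one delivery. Responding quickly matters: a delivery
// counts as failed after 10s, so queue heavy processing for later.
function handleDelivery(headers, rawBody, secret) {
  if (!isAuthentic(rawBody, secret, headers['x-cryptflare-signature'])) {
    return { status: 401 }; // not signed by CryptFlare - reject
  }
  const event = JSON.parse(rawBody);
  if (headers['x-cryptflare-event'] === 'test.ping') {
    return { status: 200 }; // connectivity test, nothing to process
  }
  // ...enqueue real work, de-duplicated by headers['x-cryptflare-delivery']...
  return { status: 200, type: event.type };
}
```

Returning 2xx before doing the real work keeps a slow downstream system from tripping the retry and auto-disable thresholds described above.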
## Plan availability

## Supported events

## Default role permissions

## Integration examples

---

# Create an event subscription

Source: https://docs.cryptflare.com/api-reference/event-subscriptions/create-subscription

POST /events/subscriptions - creates a new webhook subscription with HMAC signing.

# Create an event subscription

Creates a new webhook subscription. Every delivery to `url` is signed with HMAC-SHA256 using the `secret` you supply - store it securely: CryptFlare hashes it immediately and the raw value is never returned again.

Use `events: ['*']` to subscribe to every event type, or pass an explicit list (e.g. `['secret.created', 'member.invited']`). Wildcards within a type (`secret.*`) are not supported.

The `url` must be a **public HTTPS endpoint**. To prevent server-side request forgery, the following are rejected at both creation time and delivery time:

- Non-HTTPS schemes (`http://`, `file://`, `javascript:`, `data:`, etc.)
- Literal or DNS-resolved private IPs (RFC 1918, CGNAT, loopback, link-local, IPv6 ULA)
- `localhost`, `0.0.0.0`, `127.x.x.x`, `::1`
- Cloud metadata endpoints (`169.254.169.254`, `metadata.google.internal`, `metadata.goog`)
- URLs containing user info (e.g. `https://example.com@attacker.com`)

If your webhook destination fails validation, the API returns `400 VALIDATION_FAILED` with a descriptive message. To test a webhook locally, tunnel your dev server through a public HTTPS proxy (ngrok, Cloudflare Tunnel, etc.).

---

## Required permission

---

## Request

---

---

# Delete an event subscription

Source: https://docs.cryptflare.com/api-reference/event-subscriptions/delete-subscription

DELETE /events/subscriptions/:subscriptionId - permanently deletes a subscription and its delivery logs.

# Delete an event subscription

Permanently deletes a subscription and every delivery log entry that belongs to it. In-flight deliveries already queued will fail with a dropped-subscription error.
--- ## Required permission --- ## Request --- --- # Get events status Source: https://docs.cryptflare.com/api-reference/event-subscriptions/get-status GET /events/status - returns whether event subscriptions are enabled for this organisation. # Get events status Returns whether event subscriptions are enabled at the organisation level. When disabled, no events are delivered and the subscription endpoints return `403 EVENTS_DISABLED`. --- ## Required permission --- ## Request --- --- # List delivery log Source: https://docs.cryptflare.com/api-reference/event-subscriptions/list-deliveries GET /events/deliveries - returns recent delivery attempts across every subscription. # List delivery log Returns recent delivery attempts across every subscription in the organisation, newest first. Each entry captures the HTTP status, attempt number, duration, and error (if any) for one attempt - so a retried delivery shows up multiple times. --- ## Required permission --- ## Request --- --- # List event subscriptions Source: https://docs.cryptflare.com/api-reference/event-subscriptions/list-subscriptions GET /events/subscriptions - returns every event subscription in the organisation. # List event subscriptions Returns every event subscription registered in the organisation. Signing secrets are never returned - they're stored as salted hashes after creation. --- ## Required permission --- ## Request --- --- # Redeliver a failed event Source: https://docs.cryptflare.com/api-reference/event-subscriptions/redeliver POST /events/deliveries/:deliveryId/redeliver - retries a single failed delivery. # Redeliver a failed event Resends the original payload of a failed delivery to its subscription's URL. Creates a new delivery log entry with the outcome - the original row is left alone so you can track the retry history. Unlike [Replay events](/api-reference/event-subscriptions/replay), this endpoint operates on a single delivery rather than a time range. 
--- ## Required permission --- ## Request --- --- # Replay events Source: https://docs.cryptflare.com/api-reference/event-subscriptions/replay POST /events/subscriptions/:subscriptionId/replay - re-sends audit events from a time range to a subscription. # Replay events Re-sends audit events from a time window to a subscription. Up to **100 events** per replay call. Each delivery includes an `X-CryptFlare-Replay: true` header so consumers can distinguish replays from live events and de-duplicate if needed. Replays are the recovery path when your consumer was down during live delivery. Use a narrow time window, verify the outcome, and call again if more events need redelivery. --- ## Required permission --- ## Request --- --- # Rotate signing secret Source: https://docs.cryptflare.com/api-reference/event-subscriptions/rotate-secret POST /events/subscriptions/:subscriptionId/rotate-secret - rotates the HMAC signing secret with a 24h grace period. # Rotate signing secret Generates a new HMAC signing secret and returns it **exactly once**. The previous secret remains valid for **24 hours** so consumers can migrate without downtime - during the grace period, deliveries include both: - `X-CryptFlare-Signature` (new secret) - `X-CryptFlare-Signature-Previous` (old secret) Update your consumer to accept either, then remove the old one once the grace period lapses. The plaintext secret is returned exactly once. CryptFlare stores only a hash. If you lose it, call this endpoint again (which restarts the grace window). --- ## Required permission --- ## Request --- --- # Send a test event Source: https://docs.cryptflare.com/api-reference/event-subscriptions/test-subscription POST /events/subscriptions/:subscriptionId/test - sends a test.ping event to verify connectivity and HMAC signing. # Send a test event Sends a `test.ping` event to the subscription URL. 
Useful immediately after creation to confirm that the target endpoint is reachable and responds with 2xx, and to exercise your HMAC signature verification on your side.

---

## Required permission

---

## Request

---

---

# Toggle events

Source: https://docs.cryptflare.com/api-reference/event-subscriptions/toggle-events

POST /events/toggle - enables or disables event subscriptions org-wide. Owner only.

# Toggle events

Enables or disables event subscriptions for the entire organisation. When disabled, no events are delivered and every subscription endpoint returns `403 EVENTS_DISABLED`.

**Only the organisation owner can call this endpoint** - even managers with `events:manage` will get a 403. Use this as a platform-wide kill-switch during an incident. Toggling off does not delete subscriptions - they resume exactly where they left off when you re-enable events.

---

## Required permission

---

## Request

---

---

# Update an event subscription

Source: https://docs.cryptflare.com/api-reference/event-subscriptions/update-subscription

PATCH /events/subscriptions/:subscriptionId - update name, URL, events, secret, headers, or active state.

# Update an event subscription

Updates an existing subscription. Every field is optional - omit fields you don't want to change. Supplying a new `secret` immediately replaces the old one without a grace period; use [Rotate signing secret](/api-reference/event-subscriptions/rotate-secret) if you need consumers to transition gradually.

---

## Required permission

---

## Request

---

---

# Feedback

Source: https://docs.cryptflare.com/api-reference/feedback

Submit and retrieve documentation page feedback.

# Feedback

The Feedback API powers the thumbs-up / thumbs-down widget at the bottom of every documentation page. It lets users submit ratings on pages and retrieve their previous feedback when authenticated. This is an internal endpoint used by the docs site itself. Anonymous submits are allowed but retrieval requires a CryptFlare session.
## Endpoints

| Method | Endpoint | Description |
|---|---|---|
| POST | [`/feedback`](/api-reference/feedback/submit-feedback) | Submit a rating for a page (auth optional) |
| GET | [`/feedback`](/api-reference/feedback/get-feedback) | Get the authenticated user's rating for a page |

---

# Get feedback

Source: https://docs.cryptflare.com/api-reference/feedback/get-feedback

GET /feedback - returns the authenticated user's feedback for a specific documentation page.

# Get feedback

Returns the authenticated user's feedback for a specific documentation page. Unlike [Submit feedback](/api-reference/feedback/submit-feedback), this endpoint **requires authentication** - there is no anonymous retrieval.

---

## Required permission

---

## Request

---

---

# Submit feedback

Source: https://docs.cryptflare.com/api-reference/feedback/submit-feedback

POST /feedback - submits a rating for a documentation page. Anonymous feedback is accepted.

# Submit feedback

Submits a rating for a documentation page. **Authentication is optional** - anonymous feedback is accepted, and authenticated feedback is linked to the user account so it can be retrieved later via [Get feedback](/api-reference/feedback/get-feedback). This endpoint powers the thumbs-up / thumbs-down widget at the bottom of every docs page. Console users see extended analytics in the dashboard; external callers can still submit but cannot retrieve.

---

## Request

---

---

# Idempotency

Source: https://docs.cryptflare.com/api-reference/idempotency

Safely retry mutations with the Idempotency-Key header

# Idempotency

Network hiccups, CI retries, and Terraform re-plans can all cause a mutation request to be sent twice. Without idempotency keys, the server has no way to tell a retry from a new request - you end up with two of the same secret, two service tokens, two webhook subscriptions. The CryptFlare API supports the `Idempotency-Key` header on every authenticated mutation endpoint.
Attach a unique key to a `POST`, `PUT`, `PATCH`, or `DELETE` request and the server records the response for 24 hours. Any retry with the same key and the same body replays the cached response instead of re-executing the mutation. Requests without an `Idempotency-Key` header are unaffected - the mutation runs as before. Existing clients do not need to change to benefit from the rest of the API. ## How to send a key Send a client-generated key on any mutation request: ```bash curl -X POST https://api.cryptflare.com/v1/organisations/$ORG/workspaces/$WS/environments/$ENV/secrets \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -H "Idempotency-Key: 01HX7N4QJ9P8W2B3K5VY6Z4EDA" \ -d '{"key":"DATABASE_URL","value":"postgres://..."}' ``` The server stores the response keyed on: - the authenticated caller (user id or token id) - the HTTP method and request path - the idempotency key you sent ## Request header ## Replays A retry with the same key and the same request body returns the cached response unchanged. The replayed response always includes an `Idempotency-Replayed: true` header so clients can tell it came from the cache. 
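As a sketch of the client side, the helper below derives a deterministic key per logical operation, retries on `5xx` (which is never cached, so re-execution is safe), and surfaces the `Idempotency-Replayed` header. The `doFetch` parameter and the `createSecret` name are illustrative, not part of any CryptFlare SDK; in practice `doFetch` would be `fetch` plus your auth headers.

```javascript
import crypto from 'node:crypto';

// Deterministic key: one key per logical operation (hypothetical scheme).
// 64 hex characters, comfortably under the 255-character limit.
function idempotencyKey(...parts) {
  return crypto.createHash('sha256').update(parts.join(':')).digest('hex');
}

// doFetch is injected so the sketch stays self-contained and testable.
async function createSecret(doFetch, path, body, runId) {
  const key = idempotencyKey(runId, body.key);
  for (let attempt = 1; attempt <= 3; attempt++) {
    const res = await doFetch(path, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'Idempotency-Key': key },
      body: JSON.stringify(body),
    });
    if (res.status < 500) {
      return {
        status: res.status,
        // true when the response was served from the idempotency cache
        replayed: res.headers.get('Idempotency-Replayed') === 'true',
        data: await res.json(),
      };
    }
    // 5xx responses are not cached, so the same key safely re-executes.
  }
  throw new Error('mutation failed after 3 attempts');
}
```

Because the key is derived from the run ID and the secret name, re-running the whole job replays cached responses instead of creating duplicates, with no state persisted between retries.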
First call: the API records the key as in-progress (acquiring a lock), executes the mutation, caches the response body and status, and returns the result. A duplicate call with the same key and body hits the cache and receives the stored response with `Idempotency-Replayed: true`. A concurrent duplicate waits on the lock until the first call finishes, then receives the same cached response. Every retry resolves to one handler execution: first call writes the cache, duplicates read it, concurrent duplicates block on the lock and then read it.

```http
HTTP/1.1 201 Created
Idempotency-Replayed: true
Content-Type: application/json

{"key":"DATABASE_URL","version":1}
```

## Collisions

If you reuse a key with a **different** request body, the server rejects the request with `422 IDEMPOTENCY_KEY_COLLISION`. This catches a common class of client bug where the same key is accidentally reused for a different payload.

```json
{
  "error": "IDEMPOTENCY_KEY_COLLISION",
  "message": "The Idempotency-Key you sent was previously used with a different request body. Use a fresh key for a different payload.",
  "status": 422,
  "requestId": "550e8400-e29b-41d4-a716-446655440000"
}
```

The comparison is done over a canonical hash of the request body, so JSON objects whose keys appear in a different order (`{a:1,b:2}` vs `{b:2,a:1}`) are treated as equivalent and do not trigger a collision.

## Choosing a key

Use any client-generated identifier that is unique per logical operation:

- **UUIDv4 or ULID** - simplest option for one-shot scripts and SDK calls
- **Deterministic hash of the operation** - e.g.
`sha256(workflow_run_id + secret_name)` lets an entire CI job be safely replayed without the client having to persist state between retries Keys can be up to 255 characters. Shorter keys are fine. ## What is cached | Response | Cached? | Why | |---|---|---| | `2xx` success | Yes | A replay returns the same body and status | | `4xx` client error | Yes | Prevents a client bug (e.g. invalid JSON) from spamming the system on retry | | `5xx` server error | **No** | Transient server failures should be retryable - the next attempt re-executes the mutation | | Body larger than 1 MB | No | Only a handful of bulk export endpoints exceed this; normal CRUD responses are far smaller | ## What is not cached - Requests without an `Idempotency-Key` header. - Authentication endpoints (`/v1/auth/*`, `/v1/console/auth/*`). These have their own anti-replay logic - OTP codes expire and login tokens rotate, so caching a failed verify would break retries. - The Stripe billing webhook (`/v1/billing/webhook`), which enforces idempotency upstream via Stripe's signed event IDs. ## TTL Cached responses expire after **24 hours**. Retries after that window execute the mutation normally. Choose a fresh key for operations that are expected to happen more than once a day. ## Scope Keys are scoped **per caller**. Two different service tokens using the same key create two independent cache entries. A user on the dashboard and the same user on a service token are treated as separate callers. This means you do not need to coordinate key generation across teams or tokens - pick a scheme that makes sense for your own client. ## Side effects Cached replays return the stored response body. Side effects scheduled via `waitUntil` during the first execution - audit log emission, cache invalidation, analytics counters - do **not** fire again on a replay. This is correct: an idempotent replay represents one logical operation, not two. 
If a logical operation changes (different body, different target resource), generate a fresh key. Reusing a key with a new body returns `422 IDEMPOTENCY_KEY_COLLISION` and the mutation does not run. ## Response headers | Header | When it appears | Meaning | |---|---|---| | `Idempotency-Replayed: true` | On any replay of a cached response | The response body was served from the idempotency cache; the handler did not re-execute | ## Error code | Code | Status | Description | |---|---|---| | `IDEMPOTENCY_KEY_COLLISION` | 422 | An `Idempotency-Key` header was reused with a different request body. Use a fresh key. | --- # Notifications Source: https://docs.cryptflare.com/api-reference/notifications In-app notification management for organisation members. # Notifications The Notifications API manages in-app notifications for organisation members. Notifications are created by the system in response to events like support ticket replies, JIT access requests, policy changes, and automated secret rotations. This page is an **overview** - every endpoint is documented on its own page. 
## Endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | [`/notifications`](/api-reference/notifications/list-notifications) | List paginated notifications with unread filter |
| PATCH | [`/notifications/:id/read`](/api-reference/notifications/mark-read) | Mark a single notification as read |
| POST | [`/notifications/mark-all-read`](/api-reference/notifications/mark-all-read) | Mark every unread notification as read |

## Notification types

| Type | Description |
|---|---|
| `ticket_reply` | A support ticket received a new reply |
| `access_request` | A member submitted a JIT access request |
| `access_approved` | Your access request was approved |
| `access_denied` | Your access request was denied |
| `policy_change` | A policy was created, updated, or deleted |
| `member_invited` | A new member was invited to the organisation |
| `member_removed` | A member was removed from the organisation |
| `secret_rotated` | A secret was automatically rotated |
| `workspace_created` | A new workspace was created |

---

# List notifications

Source: https://docs.cryptflare.com/api-reference/notifications/list-notifications

GET /notifications - returns paginated in-app notifications for the authenticated user.

# List notifications

Returns paginated in-app notifications for the authenticated user within the organisation, newest first. Use `unread=true` to filter to unread items only - handy for the notification badge in the dashboard.
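A minimal sketch of that badge query. Two assumptions are baked in: the org-scoped `/v1/organisations/:org` path prefix (this page shows only the relative `/notifications` path) and a `total` field in the response, following the shape on the Pagination page; `doFetch` stands in for `fetch` with your session cookie.

```javascript
// Ask for one unread notification and read the server-side total -
// cheaper than fetching the whole list just to count it.
async function unreadCount(doFetch, org) {
  const res = await doFetch(
    `/v1/organisations/${org}/notifications?unread=true&limit=1`
  );
  const page = await res.json();
  return page.total; // drives the dashboard badge
}
```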
--- ## Required permission --- ## Request --- ## Notification types | Type | Description | |---|---| | `ticket_reply` | A support ticket received a new reply | | `access_request` | A member submitted a JIT access request | | `access_approved` | Your access request was approved | | `access_denied` | Your access request was denied | | `policy_change` | A policy was created, updated, or deleted | | `member_invited` | A new member was invited to the organisation | | `member_removed` | A member was removed from the organisation | | `secret_rotated` | A secret was automatically rotated | | `workspace_created` | A new workspace was created | --- --- # Mark all as read Source: https://docs.cryptflare.com/api-reference/notifications/mark-all-read POST /notifications/mark-all-read - marks every unread notification as read in one call. # Mark all as read Marks every unread notification as read for the authenticated user within this organisation. The response includes the count of rows that were updated so the client can clear its badge optimistically. --- ## Required permission --- ## Request --- --- # Mark as read Source: https://docs.cryptflare.com/api-reference/notifications/mark-read PATCH /notifications/:id/read - marks a single notification as read. # Mark as read Marks a single notification as read. Users can only mark **their own** notifications - trying to update another user's notification returns `404` (the server hides existence to prevent enumeration). --- ## Required permission --- ## Request --- --- # Organisations Source: https://docs.cryptflare.com/api-reference/organisations Manage organisations, members, and ownership transfers. # Organisations Organisations are the top-level container for every resource in CryptFlare. Each organisation has a plan, an owner, members with roles, and owns workspaces, environments, pods, secrets, and tokens. Most API calls are scoped to an organisation via the `:org` path parameter. 
This page is an **overview** - every endpoint is documented on its own page. Use the sidebar or the tables below to jump straight to a specific operation.

## Organisation endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | [`/organisations`](/api-reference/organisations/list-organisations) | List organisations the caller belongs to |
| POST | [`/organisations`](/api-reference/organisations/create-organisation) | Create an organisation (caller becomes owner) |
| GET | [`/organisations/:org`](/api-reference/organisations/get-organisation) | Get full details for one organisation |
| PATCH | [`/organisations/:org`](/api-reference/organisations/update-organisation) | Update the organisation name |
| DELETE | [`/organisations/:org`](/api-reference/organisations/delete-organisation) | Permanently delete the organisation and all its resources |
| GET | [`/organisations/:org/tree`](/api-reference/organisations/get-tree) | Get the full resource hierarchy in one call |

## Member endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | [`/organisations/:org/members`](/api-reference/organisations/list-members) | List all members with role and profile info |
| POST | [`/organisations/:org/members/invite`](/api-reference/organisations/invite-member) | Invite a user by email |
| PATCH | [`/organisations/:org/members/:userId/role`](/api-reference/organisations/update-member-role) | Change a member's role (role ceiling enforced) |
| DELETE | [`/organisations/:org/members/:userId`](/api-reference/organisations/remove-member) | Remove a member from the organisation |

## Ownership transfer endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | [`/organisations/:org/transfer`](/api-reference/organisations/get-transfer) | Get the pending transfer status |
| POST | [`/organisations/:org/transfer`](/api-reference/organisations/initiate-transfer) | Initiate an ownership transfer by email |
| POST | [`/organisations/:org/transfer/cancel`](/api-reference/organisations/cancel-transfer) | Cancel a pending ownership transfer |

## Feature flag endpoints

Org-wide toggles for high-risk capabilities (AI, outbound events, secret sync, sync takeover). All members can read the current state; only the owner can flip the toggles.

| Method | Endpoint | Description |
|---|---|---|
| GET | [`/organisations/:org/features`](/api-reference/organisations/get-features) | Read the current state of every feature flag |
| POST | [`/organisations/:org/features`](/api-reference/organisations/toggle-feature) | Enable or disable a feature (owner-only) |

## Roles and permissions

| Role | Secrets | Members | Billing | Org settings |
|---|---|---|---|---|
| **Owner** | Full access | Manage all | Manage | Full access |
| **Biller** | None | View only | Manage | None |
| **Manager** | Full access | Invite / remove (ceiling) | View | None |
| **Developer** | Read / write | View only | None | None |
| **Employee** | Read only | View only | None | None |
| **Viewer** | List only (no values) | None | None | None |

See [Role permissions](/api-reference/role-permissions) for the full capability matrix and [Access control](/security/access-control) for how role ceiling is enforced across the member invite and role-assign endpoints.

---

# Cancel pending transfer

Source: https://docs.cryptflare.com/api-reference/organisations/cancel-transfer

POST /organisations/:org/transfer/cancel - cancels a pending ownership transfer.

# Cancel pending transfer

Cancels a pending ownership transfer. Only the original initiator (the current owner) may cancel. After cancellation the recipient's accept link stops working and a new transfer can be started.

---

## Required permission

---

## Request

---

---

# Create an organisation

Source: https://docs.cryptflare.com/api-reference/organisations/create-organisation

POST /organisations - creates a new organisation and assigns the caller as owner.

# Create an organisation

Creates a new organisation and automatically assigns the authenticated caller as its `owner`.
The new organisation starts on the `free` plan; upgrade via the billing endpoints once it exists. --- ## Required permission --- ## Request --- --- # Delete an organisation Source: https://docs.cryptflare.com/api-reference/organisations/delete-organisation DELETE /organisations/:org - permanently deletes the organisation and every resource it owns. # Delete an organisation Permanently deletes the organisation and every workspace, environment, pod, secret, token, audit log, and membership it owns. Only the current owner may call this endpoint. There is no soft delete, no recycle bin, and no grace period. Every resource scoped to the organisation is removed within a single transaction. If you need a reversible "pause" instead, downgrade the plan or suspend all active tokens. --- ## Required permission --- ## Request --- --- # Get organisation features Source: https://docs.cryptflare.com/api-reference/organisations/get-features GET /organisations/:org/features - returns the org-wide enabled/disabled state of every toggleable feature. # Get organisation features Returns the current on/off state of every toggleable organisation feature. Feature flags are centrally defined in `packages/shared/src/constants/features.ts` so there's one source of truth across the API, the vault `Org Settings → Features` tab, and any runtime feature guards. Adding a new flag to the shared constants file automatically surfaces it in both this endpoint and the UI. Anyone who can read the organisation (i.e. any member) can call this endpoint. Only org owners can flip the toggles via [Toggle a feature](/api-reference/organisations/toggle-feature). --- ## Required permission --- ## Request --- --- # Get an organisation Source: https://docs.cryptflare.com/api-reference/organisations/get-organisation GET /organisations/:org - returns full details for a single organisation. # Get an organisation Returns the full details of a single organisation. 
The caller must be a member - non-members get a `403`, not a `404`, so you cannot enumerate existence by scanning IDs. --- ## Required permission --- ## Request --- --- # Get transfer status Source: https://docs.cryptflare.com/api-reference/organisations/get-transfer GET /organisations/:org/transfer - returns the pending ownership transfer for the organisation, if any. # Get transfer status Returns the currently pending ownership transfer for the organisation, or `data: null` if there isn't one. Useful for rendering the "Transfer pending" banner in the dashboard without polling a webhook. --- ## Required permission --- ## Request --- --- # Get the organisation tree Source: https://docs.cryptflare.com/api-reference/organisations/get-tree GET /organisations/:org/tree - returns the full workspace / environment / pod hierarchy with secret counts. # Get the organisation tree Returns the full hierarchy of the organisation in one call: workspaces, environments, and pods with per-environment secret counts. This is the endpoint the dashboard uses to build the left-hand resource tree, and it's intentionally one round-trip so the UI can hydrate the whole tree without N+1 queries. Secret *values* are never included. Only metadata and counts. --- ## Required permission --- ## Request --- --- # Initiate ownership transfer Source: https://docs.cryptflare.com/api-reference/organisations/initiate-transfer POST /organisations/:org/transfer - initiates an ownership transfer to another user by email. # Initiate ownership transfer Initiates an ownership transfer. Only the current owner may call this endpoint. CryptFlare emails the recipient with an accept link; the transfer expires automatically 7 days after creation if not accepted. When the recipient accepts, the current owner is demoted to `manager` and the recipient becomes `owner`. The recipient's existing membership (if any) is replaced. Billing, SSO, and data residency settings are preserved. 
--- ## Required permission --- ## Request --- --- # Invite a member Source: https://docs.cryptflare.com/api-reference/organisations/invite-member POST /organisations/:org/members/invite - invites a user to the organisation by email. # Invite a member Adds a user to the organisation by email. If the target user already has a CryptFlare account they join immediately; otherwise an invitation email goes out and the membership is created on first sign-in. `manager` callers cannot assign the `owner` or `biller` roles, even if they have `members:invite`. Role ceiling is enforced server-side - an attempt returns `403` with error code `ROLE_CEILING_EXCEEDED`. --- ## Required permission --- ## Request --- --- # List members Source: https://docs.cryptflare.com/api-reference/organisations/list-members GET /organisations/:org/members - returns every member of the organisation with their role. # List members Returns every member of the organisation, with user profile data joined so you can render a members table without a second fan-out. --- ## Required permission --- ## Request --- --- # List your organisations Source: https://docs.cryptflare.com/api-reference/organisations/list-organisations GET /organisations - returns every organisation the authenticated user belongs to along with the caller's role in each. # List your organisations Returns every organisation where the authenticated user is a member. Each record carries the caller's role in that organisation, so the dashboard can decide which menus to render without a second round-trip. --- ## Required permission --- ## Request --- --- # Remove a member Source: https://docs.cryptflare.com/api-reference/organisations/remove-member DELETE /organisations/:org/members/:userId - removes a member from the organisation. # Remove a member Removes a member from the organisation. The member's personal sessions survive, but their access tokens and service tokens scoped to this organisation are immediately invalidated. 
**The owner cannot be removed** - move ownership first via the [transfer endpoints](/api-reference/organisations/initiate-transfer). --- ## Required permission --- ## Request --- --- # Toggle an organisation feature Source: https://docs.cryptflare.com/api-reference/organisations/toggle-feature POST /organisations/:org/features - enable or disable a feature flag for the organisation. Owner-only. # Toggle an organisation feature Enables or disables a toggleable organisation feature. Only the organisation **owner** can call this - managers, billers, and developers get a `403` even if they can otherwise administer the org. The endpoint is idempotent: setting a feature to its current state is a no-op from the user's perspective (still records an audit entry for traceability). Every toggle is recorded in the audit log as `organisation.feature_toggled` with the feature key and new state in metadata. Disabling `sync` suppresses all trigger endpoints and auto-sync fan-out without deleting existing connections; disabling `syncTakeover` hides the drift panel's "Take over" buttons org-wide while leaving read-only drift detection intact (the common single-switch control for SOC 2 / ISO 27001 / PCI-DSS environments). Disabling `ai` blocks Cipher and inference calls; disabling `events` suppresses outbound webhook delivery. Existing data is never touched - features resume when re-enabled. See the [Secret Sync security guide](/security/sync) for the full compliance model. --- ## Required permission Feature flags gate access to high-risk capabilities (secret sync, AI inference, webhook delivery, destination overwrite). Only the org owner can flip them so an attacker with a compromised manager/developer session cannot silently re-enable a feature the owner has disabled. 
---

## Request

---

---

# Change member role

Source: https://docs.cryptflare.com/api-reference/organisations/update-member-role

PATCH /organisations/:org/members/:userId/role - changes a member's role in the organisation.

# Change member role

Changes a member's role. The owner cannot be demoted through this endpoint - move ownership first via the [transfer endpoints](/api-reference/organisations/initiate-transfer). `manager` callers may only assign `developer`, `employee`, or `viewer`. Attempting to promote a member to `biller`, `manager`, or `owner` returns `403` with error code `ROLE_CEILING_EXCEEDED`.

---

## Required permission

---

## Request

---

---

# Update an organisation

Source: https://docs.cryptflare.com/api-reference/organisations/update-organisation

PATCH /organisations/:org - update the organisation name. Owner only.

# Update an organisation

Updates the organisation's display name. The `slug`, `plan`, and `owner_id` cannot be changed through this endpoint - plan changes go through billing, and ownership moves through the [transfer endpoints](/api-reference/organisations/initiate-transfer).

---

## Required permission

---

## Request

---

---

# Pagination

Source: https://docs.cryptflare.com/api-reference/pagination

How to paginate through list endpoints

# Pagination

List endpoints that can return large result sets support offset-based pagination via query parameters. Cursor-based pagination is being rolled out for high-volume list endpoints where a stable walk across inserts and deletes matters more than jumping to an arbitrary page. The client loops until `nextCursor` is null; because the cursor carries the last-seen sort key and a tie-breaker, concurrent inserts and deletes never cause skipped or duplicated rows.
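A client-side cursor walk over such an endpoint might look like the sketch below. The `{ data, nextCursor }` response shape and the `fetchPage` signature are illustrative assumptions, not a documented SDK API - substitute your own HTTP client.

```typescript
// Illustrative cursor pagination loop: fetch a page, follow nextCursor,
// stop when it comes back null. fetchPage is a stand-in for an HTTP call.
interface CursorPage<T> {
  data: T[];
  nextCursor: string | null;
}

async function fetchAll<T>(
  fetchPage: (cursor: string | null) => Promise<CursorPage<T>>,
): Promise<T[]> {
  const items: T[] = [];
  let cursor: string | null = null;
  do {
    const page = await fetchPage(cursor);
    items.push(...page.data);
    cursor = page.nextCursor; // carries the last-seen sort key + tie-breaker
  } while (cursor !== null);
  return items;
}
```

Because each page request carries the previous page's cursor, the walk stays stable even while rows are inserted or deleted between requests.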
## Query parameters ## Response format Paginated endpoints return a `data` array and a `total` count: ```json { "data": [ { "id": "item_1", "..." : "..." }, { "id": "item_2", "..." : "..." } ], "total": 142 } ``` ## Example: paginating through audit logs ### First page Fetch the first 20 audit logs: ``` GET /v1/organisations/:org/audit?limit=20&offset=0 ``` Audit list responses include an `integrity` object that reports whether the current page passes hash-chain verification. `verified: true` means every row on this page self-hashes correctly. `verified: false` means at least one row has been tampered with since write - `firstBrokenId` points at the earliest break. `verified: null` means the page contains only pre-migration rows without a stored hash and cannot be checked. See [Audit log integrity](/security/audit-logs) for the full model. ### Next page Use `offset` to fetch subsequent pages. For page 2 with 20 items per page: ``` GET /v1/organisations/:org/audit?limit=20&offset=20 ``` ### Calculating pages ```typescript const pageSize = 20; const totalPages = Math.ceil(total / pageSize); const currentPage = Math.floor(offset / pageSize) + 1; const hasNextPage = offset + pageSize < total; ``` ## Paginated endpoints | Endpoint | Default limit | Max limit | |----------|--------------|-----------| | `GET /v1/organisations/:org/audit` | 20 | 100 | > Other list endpoints (secrets, workspaces, members, environments) currently return all items without pagination. Pagination will be added as data volumes grow. 
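Putting the `limit`/`offset` parameters and the `{ data, total }` envelope together, a client can walk an entire paginated list as sketched below. `fetchPage` is a stand-in for your HTTP client calling an endpoint such as `GET /v1/organisations/:org/audit`; the helper itself is illustrative, not part of an official SDK.

```typescript
// Walk every page of an offset-paginated endpoint until the reported total
// is exhausted. fetchPage stands in for the actual HTTP request.
interface OffsetPage<T> {
  data: T[];
  total: number;
}

async function fetchAllPages<T>(
  fetchPage: (limit: number, offset: number) => Promise<OffsetPage<T>>,
  limit = 20, // the documented default page size for audit logs
): Promise<T[]> {
  const items: T[] = [];
  let offset = 0;
  let total = Infinity;
  while (offset < total) {
    const page = await fetchPage(limit, offset);
    total = page.total; // total is reported on every page
    items.push(...page.data);
    offset += limit;
  }
  return items;
}
```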
## Filtering Some paginated endpoints support filters via query parameters: ### Audit logs | Parameter | Type | Description | |-----------|------|-------------| | `action` | string | Filter by action (e.g., `secret.created`, `token.revoked`) | | `actorId` | string | Filter by the user who performed the action | | `resourceType` | string | Filter by resource type (e.g., `secret`, `access_token`, `workspace`) | ``` GET /v1/organisations/:org/audit?action=secret.created&limit=50 ``` ## Response envelope patterns The API uses consistent response envelopes across all endpoints: ### List response ```json { "data": [ /* array of items */ ] } ``` Used by: organisations, members, workspaces, environments, secrets, tokens ### Paginated list response ```json { "data": [ /* array of items */ ], "total": 142 } ``` Used by: audit logs (and future paginated endpoints) ### Single item response ```json { "data": { /* item object */ } } ``` Used by: get organisation, get workspace, reveal secret ### Success response ```json { "success": true } ``` Used by: delete operations, logout ### Action response ```json { "key": "DATABASE_URL", "version": 3 } ``` Used by: create secret, rotate secret (returns domain-specific fields) --- # Pods Source: https://docs.cryptflare.com/api-reference/pods Hierarchical folders for organising secrets inside an environment. Up to 5 levels deep. # Pods Pods are folders for [secrets](/api-reference/secrets). They let you group related keys inside an environment without creating more environments, and they nest up to **5 levels deep** so you can model organisation charts, service trees, or feature areas. ``` production/ databases/ DATABASE_URL REDIS_URL services/ stripe/ STRIPE_SECRET_KEY STRIPE_WEBHOOK_SECRET resend/ RESEND_API_KEY API_KEY ``` In this example `databases`, `services`, `stripe`, and `resend` are pods. `API_KEY` lives at the environment root. 
Pods nest up to five levels under an environment; secrets can attach at any level, and anything deeper than level five returns `POD_MAX_DEPTH_EXCEEDED`. This page is an **overview** - every endpoint is documented on its own page.

## Endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | [`/pods`](/api-reference/pods/list-pods) | List pods at one level of the hierarchy |
| GET | [`/pods/:pod`](/api-reference/pods/get-pod) | Get a single pod with its ancestor breadcrumb |
| POST | [`/pods`](/api-reference/pods/create-pod) | Create a pod, optionally nested |
| PATCH | [`/pods/:pod`](/api-reference/pods/update-pod) | Rename, re-slug, or edit the description |
| DELETE | [`/pods/:pod`](/api-reference/pods/delete-pod) | Delete an empty pod |

## Nesting limits

Pods support up to **5 levels** of nesting. Attempting to create a pod deeper than 5 levels returns `400 POD_MAX_DEPTH_EXCEEDED`.

```
Level 1: databases/
Level 2:   postgres/
Level 3:     primary/
Level 4:       read-replicas/
Level 5:         region-us/ (maximum depth)
```

## Nested path resolution

Pod slugs are unique within their parent, which makes it possible to address any pod or secret by its full slash-separated slug path:

```
/workspaces/{wsSlug}/{envSlug}/databases/postgres/primary/APP_SECRET
```

The vault dashboard uses this URL shape directly so pod context is preserved across navigation, refreshes, and shared links. For API clients that need to resolve a path to IDs, use the dedicated [resolve-path endpoint](/api-reference/environments/resolve-path), which walks the slug chain and tells you whether the final segment lands on an environment, a pod, or a secret.

## Deletion behaviour

- Pods can only be deleted when empty (no secrets, no sub-pods).
- Attempting to delete a non-empty pod returns `400 POD_NOT_EMPTY`.
- To delete a pod with contents, move or delete every secret and sub-pod inside it first.
- When a parent environment is deleted the `POD_NOT_EMPTY` guard does not apply - the entire subtree goes away with the environment in one cascade. --- # Create a pod Source: https://docs.cryptflare.com/api-reference/pods/create-pod POST /pods - creates a new pod, optionally nested under a parent pod. # Create a pod Creates a new pod in an environment. Omit `parentId` to create the pod at the environment root, or pass a parent pod's ID to nest it underneath. Pods can be nested up to **5 levels deep**. Trying to create a pod at level 6 returns `400 POD_MAX_DEPTH_EXCEEDED`. This keeps pathfinding cheap and the UI tree manageable. --- ## Required permission --- ## Request --- --- # Delete a pod Source: https://docs.cryptflare.com/api-reference/pods/delete-pod DELETE /pods/:pod - deletes an empty pod. The pod must contain no secrets or sub-pods. # Delete a pod Deletes a pod. **The pod must be empty** - any secrets or sub-pods inside it must be moved or deleted first, or the call returns `400 POD_NOT_EMPTY`. This is deliberate; we don't provide a "recursive delete" to prevent accidentally destroying an entire subtree. If a pod is removed as part of an [environment delete](/api-reference/environments/delete-environment), the `POD_NOT_EMPTY` guard does not apply - the entire subtree goes away with the parent environment. Secrets whose `podId` pointed at a deleted pod are not cascaded (they live at the environment root under normal delete rules). --- ## Required permission --- ## Request --- --- # Get a pod Source: https://docs.cryptflare.com/api-reference/pods/get-pod GET /pods/:pod - returns pod details including the ancestor breadcrumb chain. # Get a pod Returns details for a single pod, with the full ancestor breadcrumb chain included so dashboards can render the path without recursive lookups. The `ancestors` array is ordered from closest parent to root. 
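Because `ancestors` is ordered from closest parent to root, rendering a slash-separated breadcrumb is a reverse-and-join. A minimal sketch (the `PodRef` shape is an assumption for illustration):

```typescript
// Build a root-to-pod breadcrumb from the closest-parent-first ancestors
// array returned by GET /pods/:pod.
interface PodRef {
  slug: string;
}

function breadcrumb(pod: PodRef, ancestors: PodRef[]): string {
  // Reverse a copy so we go root -> ... -> parent, then append the pod itself.
  return [...ancestors].reverse().concat(pod).map((p) => p.slug).join("/");
}
```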
--- ## Required permission --- ## Request --- --- # List pods Source: https://docs.cryptflare.com/api-reference/pods/list-pods GET /pods - lists pods at a given level within an environment. # List pods Returns pods at a single level of the hierarchy. Pass `parentId` to walk into a sub-tree, or omit it to list pods at the environment root. Pods are never returned recursively - traverse one level at a time so you can lazy-load deep trees in the dashboard. --- ## Required permission --- ## Request --- --- # Update a pod Source: https://docs.cryptflare.com/api-reference/pods/update-pod PATCH /pods/:pod - update a pod's name, slug, or description. # Update a pod Updates a pod's name, slug, or description. The `parentId` cannot be changed through this endpoint - moving a pod (and all its contents) between parents is a separate operation we haven't exposed yet. Pass `null` for `description` to clear an existing description. --- ## Required permission --- ## Request --- --- # Policies Source: https://docs.cryptflare.com/api-reference/policies Attribute-based access policies and just-in-time access requests. Deny-first, priority-ordered, fully auditable. # Policies The Policies API lets you layer fine-grained, deny-first access policies on top of RBAC roles. Policies can target specific resources via glob patterns, gate access by time window or IP range, and override role-based permissions. They're evaluated in a strict order - see the [policy evaluation guide](/security/policies) for the full sequence. This page is an **overview** - every endpoint is documented on its own page. 
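The deny-first layering can be illustrated with a minimal sketch. This deliberately omits glob resource matching, time windows, IP ranges, and the full evaluation sequence (see the [policy evaluation guide](/security/policies) for that); the shapes and the `evaluate` helper are assumptions for illustration only.

```typescript
// Toy deny-first evaluation: any matching DENY wins, an explicit ALLOW
// overrides the role, and with no matching policy the RBAC role decides.
interface Policy {
  effect: "ALLOW" | "DENY";
  permission: string;
  priority: number; // higher priority is evaluated first in the real engine
  enabled: boolean;
}

function evaluate(policies: Policy[], permission: string, roleAllows: boolean): boolean {
  const matching = policies
    .filter((p) => p.enabled && p.permission === permission)
    .sort((a, b) => b.priority - a.priority);
  if (matching.some((p) => p.effect === "DENY")) return false; // deny-first
  if (matching.some((p) => p.effect === "ALLOW")) return true; // policy grant
  return roleAllows; // no policy matched - fall back to RBAC
}
```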
## Policy endpoints | Method | Endpoint | Description | |---|---|---| | | [`/policies`](/api-reference/policies/list-policies) | List policies ordered by priority | | | [`/policies`](/api-reference/policies/create-policy) | Create a policy | | | [`/policies/:id/toggle`](/api-reference/policies/toggle-policy) | Enable or disable a policy | | | [`/policies/:id`](/api-reference/policies/delete-policy) | Delete a policy | | | [`/policies/simulate`](/api-reference/policies/simulate-policy) | Dry-run policy evaluation | | | [`/policies/export`](/api-reference/policies/export-policies) | Export every policy as a JSON document | | | [`/policies/import`](/api-reference/policies/import-policies) | Import policies (additive, non-destructive) | ## JIT access endpoints | Method | Endpoint | Description | |---|---|---| | | [`/policies/access-requests`](/api-reference/policies/list-access-requests) | List pending and resolved JIT access requests | | | [`/policies/access-requests`](/api-reference/policies/create-access-request) | Submit a new access request | | | [`/policies/access-requests/:id/approve`](/api-reference/policies/approve-access-request) | Approve a request and create a grant | | | [`/policies/access-requests/:id/deny`](/api-reference/policies/deny-access-request) | Deny a pending request | | | [`/policies/access-grants`](/api-reference/policies/list-access-grants) | List every active grant | | | [`/policies/access-grants/:id/revoke`](/api-reference/policies/revoke-access-grant) | Revoke an active grant before it expires | ## Resource tags Tags are free-form labels attached to resources for policy scoping and compliance classification. See the [Tags API](/api-reference/tags) for full documentation. 
| Method | Endpoint | Description | |---|---|---| | | [`/tags`](/api-reference/tags/create-tag) | Attach a tag to a resource | | | [`/tags`](/api-reference/tags/delete-tag) | Remove a tag from a resource | | | [`/tags`](/api-reference/tags/list-tags) | List tags for a specific resource | | | [`/tags/org`](/api-reference/tags/list-org-tags) | List every distinct tag in the organisation | ## Plan availability --- # Approve an access request Source: https://docs.cryptflare.com/api-reference/policies/approve-access-request POST /policies/access-requests/:id/approve - approves a pending request and creates a time-limited grant. # Approve an access request Approves a pending access request. The platform creates a time-limited access grant for the requester that automatically expires after the originally requested `durationMinutes`. --- ## Required permission --- ## Request --- --- # Create an access request Source: https://docs.cryptflare.com/api-reference/policies/create-access-request POST /policies/access-requests - submits a JIT access request for elevated permissions. # Create an access request Submits a just-in-time access request. The request stays `pending` until a member with `approvals:approve` either approves or denies it. On approval, a time-limited access grant is created that expires after `durationMinutes`. JIT access is a Team-plan feature. Any authenticated member may create requests. --- ## Required permission --- ## Request --- --- # Create a policy Source: https://docs.cryptflare.com/api-reference/policies/create-policy POST /policies - creates a new global or team-scoped access policy. # Create a policy Creates a new access policy. Policies layer on top of RBAC roles and can either `ALLOW` or `DENY` specific permissions on matching resources. The deny-first evaluator makes `DENY` policies the simplest way to carve out sensitive resources. Free plans cannot create policies. Pro plans are limited to 5 global policies. 
Team plans have unlimited policies plus team scoping and simulation. --- ## Required permission --- ## Request --- --- # Delete a policy Source: https://docs.cryptflare.com/api-reference/policies/delete-policy DELETE /policies/:id - permanently deletes a policy. Active JIT access grants are unaffected. # Delete a policy Permanently deletes a policy. Active JIT access grants that were approved under the policy are not revoked - they expire naturally at their scheduled time. --- ## Required permission --- ## Request --- --- # Deny an access request Source: https://docs.cryptflare.com/api-reference/policies/deny-access-request POST /policies/access-requests/:id/deny - denies a pending access request. No grant is created. # Deny an access request Denies a pending access request. No grant is created, and the requester is notified. Optionally include a `reason` so the requester knows what to do next (e.g. "use staging instead"). --- ## Required permission --- ## Request --- --- # Export policies Source: https://docs.cryptflare.com/api-reference/policies/export-policies GET /policies/export - exports all policies as a JSON document for backup or migration. # Export policies Exports every policy in the organisation as a versioned JSON document. Suitable for backup, migration between organisations, or check-in to source control for peer review. Policy import / export is a Team-plan feature. --- ## Required permission --- ## Request --- --- # Import policies Source: https://docs.cryptflare.com/api-reference/policies/import-policies POST /policies/import - imports policies from an export document. Non-destructive. # Import policies Imports policies from an export document. Policies with the same `name` as an existing one are updated in place; new policies are created. **Existing policies that are not in the import document are left alone** - this is an additive operation, not a sync. Policy import / export is a Team-plan feature. 
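The additive merge semantics can be sketched as follows (the `PolicyDoc` shape and `importPolicies` helper are illustrative, not the server implementation):

```typescript
// Additive import: update policies matched by name, create the rest,
// and never delete policies that are absent from the import document.
interface PolicyDoc {
  name: string;
  effect: "ALLOW" | "DENY";
}

function importPolicies(existing: PolicyDoc[], imported: PolicyDoc[]): PolicyDoc[] {
  const byName = new Map(existing.map((p) => [p.name, p]));
  for (const incoming of imported) {
    byName.set(incoming.name, incoming); // same name: update in place; new name: create
  }
  return [...byName.values()]; // policies not in the import survive untouched
}
```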
--- ## Required permission --- ## Request --- --- # List access grants Source: https://docs.cryptflare.com/api-reference/policies/list-access-grants GET /policies/access-grants - returns every active JIT access grant. Expired grants are excluded. # List access grants Returns every currently active JIT access grant. Expired grants are filtered out - query the audit log if you need history. --- ## Required permission --- ## Request --- --- # List access requests Source: https://docs.cryptflare.com/api-reference/policies/list-access-requests GET /policies/access-requests - lists pending and resolved JIT access requests. # List access requests Returns pending and resolved just-in-time (JIT) access requests. JIT lets members request temporary elevated permissions that require approval before taking effect. JIT access requests are a Team-plan feature. --- ## Required permission --- ## Request --- --- # List policies Source: https://docs.cryptflare.com/api-reference/policies/list-policies GET /policies - returns all global policies ordered by priority (highest first). # List policies Returns every global policy for the organisation, ordered by `priority` (highest first). Disabled policies are included so the response matches the dashboard. --- ## Required permission --- ## Request --- --- # Revoke an access grant Source: https://docs.cryptflare.com/api-reference/policies/revoke-access-grant POST /policies/access-grants/:id/revoke - immediately revokes an active JIT access grant before it expires. # Revoke an access grant Immediately revokes an active JIT access grant before its scheduled expiry. The member loses the elevated permissions on their next request. --- ## Required permission --- ## Request --- --- # Simulate policy evaluation Source: https://docs.cryptflare.com/api-reference/policies/simulate-policy POST /policies/simulate - dry-run policy evaluation for a given member, action, and resource. 
# Simulate policy evaluation Dry-runs the policy evaluator for a given member, permission, and resource. Returns the result plus every evaluation step so you can debug why a policy matched (or didn't). This is the best way to test a new policy before enabling it in production. Policy simulation is a Team-plan feature. Pro and Free plans cannot call this endpoint. --- ## Required permission --- ## Request --- --- # Toggle a policy Source: https://docs.cryptflare.com/api-reference/policies/toggle-policy POST /policies/:id/toggle - enable or disable a policy without deleting it. # Toggle a policy Flips the `enabled` flag on a policy. Disabled policies are skipped during evaluation but remain in the database, so you can re-enable them later without losing the definition. --- ## Required permission --- ## Request --- --- # Rate limiting Source: https://docs.cryptflare.com/api-reference/rate-limits Learn about the rate limits the CryptFlare API enforces # Rate limiting Learn about the rate limits the CryptFlare API enforces. The CryptFlare API uses a two-tier rate limiting system: a **per-request sliding window** and a **daily organisation quota**. Both are enforced at the edge via distributed key-value storage. ## Response headers Every API response includes headers that show your current rate limit status. | Header Name | Description | |---|---| | `X-RateLimit-Limit` | The maximum number of requests permitted in the current window (60) | | `X-RateLimit-Remaining` | The number of requests remaining in the current window | | `X-RateLimit-Reset` | The time at which the current window resets, in UTC epoch seconds | | `X-Quota-Warning` | Appears at 80% daily quota usage, formatted as `current/limit` | ## Sliding window limit API requests are subject to a sliding window rate limit. The window is tracked per authenticated user, or by IP address for unauthenticated requests. 
The happy path spends a token and returns; the rejected path tells the client exactly how long to wait before the next attempt.

### Default limit

Most API endpoints use the default limit:

| Parameter | Value |
|---|---|
| **Max requests** | 60 |
| **Window** | 60 seconds (sliding) |
| **Identifier** | User ID (authenticated) or IP address (unauthenticated) |
| **Storage** | Durable Object (edge-local state) |

### Stricter limits for sensitive endpoints

Authentication, two-factor, and high-value data endpoints have significantly tighter limits to protect against brute-force attacks and bulk exfiltration:

| Endpoint group | Max requests | Window | Notes |
|---|---|---|---|
| `POST /v1/auth/login` | 5 | 10 minutes | OTP request |
| `POST /v1/auth/verify` | 5 | 10 minutes | OTP verification |
| `POST /v1/auth/verify-totp` | 5 | 5 minutes | TOTP code during login |
| `POST /v1/auth/totp/setup` | 5 | 5 minutes | Start 2FA setup |
| `POST /v1/auth/totp/verify-setup` | 5 | 5 minutes | Confirm 2FA setup |
| `POST /v1/auth/totp/disable` | 5 | 5 minutes | Disable 2FA |
| `GET /v1/.../secrets/:key` (reveal value) | 30 | 60 seconds | Includes version reveal |
| `GET /v1/.../secrets/export` | 5 | 60 minutes | Full environment dump |
| Rotation policy create/update/toggle/delete | 20 | 60 minutes | Configuration mutations |
| `POST /v1/.../access-requests/:id/approve` | 10 | 60 seconds | JIT access approval |
| `POST /v1/.../subscriptions/:id/test` + `/deliveries/:id/redeliver` | 20 | 60 seconds | Webhook manual triggers |
| `POST /v1/console/decrypt` | 10 | 60 minutes | Console support decrypt tool |
| All other endpoints | 60 | 60 seconds | Default CRUD operations |

Each endpoint group maintains its own rate limit bucket, so hitting the auth limit does not affect your ability to use other API endpoints.
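The per-bucket sliding window can be sketched in a few lines. This is an in-memory toy keyed per `(identifier, endpoint group)` - production enforcement lives in a Durable Object at the edge, and the class below is purely illustrative.

```typescript
// Toy sliding-window limiter. Each key (e.g. "tokenId:auth") gets its own
// bucket, mirroring the separate per-endpoint-group buckets described above.
class SlidingWindow {
  private hits = new Map<string, number[]>();

  constructor(private max: number, private windowMs: number) {}

  allow(key: string, now: number): boolean {
    // Keep only hits that are still inside the window.
    const recent = (this.hits.get(key) ?? []).filter((t) => now - t < this.windowMs);
    if (recent.length >= this.max) {
      this.hits.set(key, recent);
      return false; // caller should respond 429 with Retry-After
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```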
MCP traffic through [`mcp.cryptflare.com`](/security/mcp-access) inherits the same per-token rate limits as the REST API, because every MCP tool call delegates to the matching REST endpoint with the caller's token. There is no separate MCP quota. Sensitive-endpoint limits are keyed by **user ID** (session auth) or **token ID** (access/service tokens), not IP. Two different service tokens held by the same user draw from separate buckets, so a rogue automation does not burn through your interactive budget. Auth and sensitive-endpoint limiters are **fail-closed**: if the underlying rate-limit store is briefly unavailable, requests to these endpoints are rejected with `429` rather than passed through. This prevents a transient outage from becoming a brute-force bypass. Generic CRUD routes stay fail-open so short infrastructure blips do not cascade into platform-wide errors. ### Limits for public endpoints A small set of endpoints is **public** - no `Authorization` header required - so the rate limiter identifies callers by IP address instead of user ID. These are kept tight to prevent enumeration, email flooding, and abuse of signed-token mechanics. | Endpoint | Max requests | Window | Identifier | Notes | |---|---|---|---|---| | `GET /v1/status` | unlimited | - | edge cache | Served from Cloudflare edge cache (60s TTL). Cache hits never touch the worker or the rate limiter. | | `POST /v1/status/subscribe` | 10 | 60 seconds | IP address | Adds an email to status notifications. | | `POST /v1/status/unsubscribe` | 10 | 60 seconds | IP address | Requires a signed HMAC token from a notification email. Idempotent. | | `POST /v1/status/unsubscribe/check` | 10 | 60 seconds | IP address | Pre-check for the unsubscribe page - verifies token validity without mutating state. | All three public `POST` endpoints share a single `status-public:` bucket keyed per IP, so a caller hitting `subscribe` and `unsubscribe` back-to-back draws from the same 10/minute budget. 
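The fail-closed / fail-open split described above can be sketched as follows (the `guard` helper and its shape are illustrative assumptions, not the actual middleware):

```typescript
// When the rate-limit store errors, sensitive endpoints reject the request
// (fail-closed) while generic CRUD routes pass it through (fail-open).
type Verdict = "allow" | "reject";

function guard(
  checkLimit: () => boolean, // throws when the rate-limit store is unavailable
  sensitive: boolean,
): Verdict {
  try {
    return checkLimit() ? "allow" : "reject";
  } catch {
    // Store outage: fail-closed for auth/sensitive routes, fail-open otherwise.
    return sensitive ? "reject" : "allow";
  }
}
```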
Public endpoints are designed to be called directly from the status page, from marketing sites, and from unsubscribe links in email. They use signed tokens and edge caching instead of API keys so they can run on machines that have never authenticated to CryptFlare. See the [Public API overview](/api-reference/status) for the full list.

When you exceed any rate limit, the API returns a `429` response with a `Retry-After` header indicating how many seconds to wait. The response body includes `retryAfter` (same value as the header, for clients that cannot easily read headers) and `endpoint` (a human-readable label identifying which limiter tripped):

```http
HTTP/2 429
Content-Type: application/json
Retry-After: 12

{
  "error": "RATE_LIMITED",
  "message": "You have hit the rate limit for revealing secrets. Try again in 12 seconds.",
  "status": 429,
  "retryAfter": 12,
  "endpoint": "reveal_secret",
  "requestId": "550e8400-e29b-41d4-a716-446655440000"
}
```

## Daily organisation quota

Protected API routes (secrets, workspaces, environments) enforce a **daily request quota** per organisation, based on your plan. The quota resets at **midnight UTC** each day.

At **80% usage**, a warning header is added to responses:

```http
X-Quota-Warning: 8000/10000
```

When the quota is exceeded:

```http
HTTP/2 429
Content-Type: application/json
Retry-After: 3600

{
  "error": "QUOTA_EXCEEDED",
  "message": "Daily request quota exceeded for this organisation. Resets at midnight UTC.",
  "status": 429,
  "requestId": "550e8400-e29b-41d4-a716-446655440000"
}
```

The `Retry-After` value is the number of seconds until UTC midnight.

## Error format

All rate limit errors follow a consistent problem-details shape modelled on [RFC 9457](https://www.rfc-editor.org/rfc/rfc9457):

```json
{
  "error": "RATE_LIMITED",
  "message": "You have hit the rate limit for this endpoint. Try again in 12 seconds.",
  "status": 429,
  "retryAfter": 12,
  "endpoint": "reveal_secret",
  "requestId": "uuid"
}
```

The `error` field is one of:

| Error code | Meaning |
|---|---|
| `RATE_LIMITED` | Sliding window limit exceeded |
| `QUOTA_EXCEEDED` | Daily organisation quota exhausted |

## Best practices

- **Cache responses** where possible to reduce API calls
- **Use exponential backoff** when you receive a `429` response
- **Monitor the `X-RateLimit-Remaining` header** to stay within limits
- **Check `X-Quota-Warning`** to proactively alert when nearing the daily limit
- **Upgrade your plan** if you consistently hit the daily quota

## Middleware stack

Rate limiting is applied in the following middleware order for protected routes: database → auth → organisation → RBAC → quota check. Requests under the limit reach the handler; requests over it receive a `429` response before the handler runs.

Auth routes (login, verify OTP, TOTP) are not subject to the daily quota but have their own stricter per-endpoint rate limits (see table above).
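Following the best practices above, a client can honour `Retry-After` when the server sends one and fall back to exponential backoff otherwise. The helper below is illustrative, not an SDK function:

```typescript
// Decide how long to wait before retrying a 429 response.
// Prefer the server's Retry-After header; otherwise back off exponentially.
function backoffSeconds(attempt: number, retryAfterHeader?: string): number {
  const retryAfter = retryAfterHeader ? Number(retryAfterHeader) : NaN;
  if (Number.isFinite(retryAfter) && retryAfter > 0) {
    return retryAfter; // the server told us exactly how long to wait
  }
  return Math.min(2 ** attempt, 60); // 1s, 2s, 4s, ... capped at 60s
}
```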
## Request lifecycle

1. The client sends an API request, which arrives at the edge.
2. The edge checks the sliding window against the rate-limit store. Over 60 requests per minute, the client receives `429 RATE_LIMITED` with a `Retry-After` header.
3. Under the window, the edge checks the daily organisation quota. Over the daily limit, the client receives `429 QUOTA_EXCEEDED`.
4. Under both limits, the request is forwarded to the handler and the response is returned with rate limit headers.

---

# Role Permissions

Source: https://docs.cryptflare.com/api-reference/role-permissions

View and customise the permissions granted to each role in your organisation.

# Role Permissions

The Role Permissions API lets organisation owners inspect the effective permission set for each role and customise non-owner roles without affecting other organisations. See [Roles and permissions](/security/roles) for the full RBAC model and default permission sets. This page is an **overview** - every endpoint is documented on its own page.

## Endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | [`/role-permissions`](/api-reference/role-permissions/get-role-permissions) | Get the effective permissions for every role |
| PATCH | [`/role-permissions`](/api-reference/role-permissions/toggle-role-permission) | Grant or revoke a single permission for a role |

## Available permissions

---

# Get role permissions

Source: https://docs.cryptflare.com/api-reference/role-permissions/get-role-permissions

GET /role-permissions - returns the effective permissions for every role in the organisation.

# Get role permissions

Returns the effective permissions for every role in the organisation. If the owner has customised any role, the response reflects those overrides; otherwise platform defaults are returned. See [Roles and permissions](/security/roles) for the full RBAC model and default permission sets.
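One way to picture overrides layering on platform defaults (an illustrative sketch; the storage shape is an assumption, not the documented API):

```typescript
// Effective permissions = platform defaults, with per-role overrides applied.
// Anything not explicitly overridden falls back to the default set.
function effectivePermissions(
  defaults: Set<string>,
  overrides: Map<string, boolean>, // permission -> enabled (grant or revoke)
): Set<string> {
  const effective = new Set(defaults);
  for (const [permission, enabled] of overrides) {
    if (enabled) effective.add(permission);
    else effective.delete(permission);
  }
  return effective;
}
```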
--- ## Required permission --- ## Request --- --- # Toggle a role permission Source: https://docs.cryptflare.com/api-reference/role-permissions/toggle-role-permission PATCH /role-permissions - grants or revokes a specific permission for a non-owner role. # Toggle a role permission Grants or revokes a single permission for a non-owner role. The change takes effect immediately for every member carrying that role in the organisation, and every change is audit-logged. The `owner` role always holds every permission. Attempting to modify it returns `400 Bad Request`. To restrict what owners can do, transfer ownership to a different user. There is no single "reset to defaults" call - each permission must be toggled individually. Granting a permission that is already granted is a no-op; the same holds for revoking an already-revoked permission. --- ## Required permission --- ## Request --- ## Audit log Every permission change creates an audit log entry: | Field | Value | |---|---| | `action` | `role_permission.updated` | | `resource_type` | `role` | | `resource_id` | The role name (e.g. `developer`) | | `metadata.permission` | The permission string that was toggled | | `metadata.enabled` | `true` on grant, `false` on revoke | --- --- # Rotation Policies Source: https://docs.cryptflare.com/api-reference/rotation-policies Configure automated secret rotation on a schedule with generated values. # Rotation Policies Rotation policies automate secret rotation on a fixed schedule. When a policy is due, CryptFlare generates a new random value, encrypts it, and rotates the target secret - incrementing the version and preserving the previous value in version history. This page is an **overview** - every endpoint is documented on its own page. ## How it works Attach a rotation policy to any secret with an interval (e.g. every 30 days). Every 6 hours, the scheduler scans for policies where `next_rotation_at` has passed. Due policies are pushed onto a dedicated rotation queue. 
The consumer generates a new value, encrypts it, and rotates the target secret. An audit log entry is written and optional email / in-app notifications fire. Every rotation bumps the secret version on main. Previous versions stay addressable on a rollback branch so an operator can cherry-pick a known-good value back onto main without losing history. Rollback never mutates history - the restore lands as a new version on main, so the audit chain stays intact and every prior value remains retrievable for compliance review. ## Endpoints | Method | Endpoint | Description | |---|---|---| | | [`/rotation-policies`](/api-reference/rotation-policies/list-policies) | List every rotation policy in the organisation | | | [`/:ws/:env/:key/rotation-policy`](/api-reference/rotation-policies/get-policy) | Get the policy attached to a specific secret | | | [`/:ws/:env/:key/rotation-policy`](/api-reference/rotation-policies/create-policy) | Attach a new rotation policy | | | [`/rotation-policies/:policyId`](/api-reference/rotation-policies/update-policy) | Update interval or generation settings | | | [`/rotation-policies/:policyId/toggle`](/api-reference/rotation-policies/toggle-policy) | Pause or resume a policy | | | [`/rotation-policies/:policyId`](/api-reference/rotation-policies/delete-policy) | Permanently remove a policy (secret is untouched) | ## Supported intervals | Interval | Typical use case | |---|---| | 7 days | High-security API keys | | 14 days | Internal service credentials | | 30 days | Standard rotation (recommended default) | | 60 days | Low-risk configuration values | | 90 days | Quarterly compliance rotation | | 180 days | Semi-annual rotation | | 365 days | Annual rotation | ## Character sets | Charset | Characters | Typical use case | |---|---|---| | `alphanumeric` | A-Z, a-z, 0-9 | API keys, database passwords | | `hex` | 0-9, a-f | Cryptographic tokens, hashes | | `base64` | A-Z, a-z, 0-9, +, / | Encoded secrets, JWT signing keys | | `ascii` | Alphanumeric 
+ special characters | High-entropy passwords | ## Error handling If a rotation fails (secret locked, encryption error, database issue), the policy's `retry_count` increments and `last_error` is populated. The scheduler retries on the next 6-hour cycle. **After 5 consecutive failures** the policy stops retrying until manually re-enabled with [Toggle](/api-reference/rotation-policies/toggle-policy) - re-enabling resets the counter. ## Default role permissions --- # Create a rotation policy Source: https://docs.cryptflare.com/api-reference/rotation-policies/create-policy POST /:ws/:env/:key/rotation-policy - attaches a rotation policy to a secret. # Create a rotation policy Attaches an automated rotation policy to a secret. Only **one policy per secret** is allowed - a conflicting request returns `409`. The first rotation fires at `now + intervalDays`, and CryptFlare generates the replacement value using the configured length and charset. `intervalDays` must be one of `7`, `14`, `30`, `60`, `90`, `180`, `365`. Anything else is rejected. --- ## Required permission --- ## Request --- --- # Delete a rotation policy Source: https://docs.cryptflare.com/api-reference/rotation-policies/delete-policy DELETE /rotation-policies/:policyId - permanently removes a rotation policy. The secret is untouched. # Delete a rotation policy Permanently removes the rotation policy. **The underlying secret is not affected** - its current value remains valid until you rotate it manually or attach a new policy. Version history is preserved. --- ## Required permission --- ## Request --- --- # Get policy for a secret Source: https://docs.cryptflare.com/api-reference/rotation-policies/get-policy GET /:ws/:env/:key/rotation-policy - returns the rotation policy attached to a specific secret, or null. # Get policy for a secret Returns the rotation policy attached to a single secret, or `data: null` if the secret has no policy. 
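Rotation values are generated from the configured length and charset. A minimal sketch of that generation step using a CSPRNG - the exact special-character pool for `ascii` is an assumption, since the docs only say "special characters":

```typescript
import { randomInt } from "node:crypto";

// Character pools matching the documented charsets. The `ascii` specials
// below are an assumption - the docs only say "special characters".
const CHARSETS: Record<string, string> = {
  alphanumeric: "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789",
  hex: "0123456789abcdef",
  base64: "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",
  ascii: "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*()-_=+",
};

// Generate a replacement value of the configured length from the chosen pool,
// drawing each character with crypto-grade randomness.
function generateSecretValue(length: number, charset: string): string {
  const pool = CHARSETS[charset];
  if (!pool) throw new Error(`Unknown charset: ${charset}`);
  let out = "";
  for (let i = 0; i < length; i++) out += pool[randomInt(pool.length)];
  return out;
}
```

`randomInt` avoids the modulo bias a naive `Math.random()` mapping would introduce.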
Useful when rendering a secret's detail view in the dashboard without having to filter a list response client-side. --- ## Required permission --- ## Request --- --- # List rotation policies Source: https://docs.cryptflare.com/api-reference/rotation-policies/list-policies GET /rotation-policies - returns every rotation policy across every workspace and environment. # List rotation policies Returns every rotation policy in the organisation across every workspace and environment. The response includes scheduling state (`next_rotation_at`, `last_rotated_at`, `retry_count`, `last_error`) so dashboards can render health indicators without a second round-trip. --- ## Required permission --- ## Request --- --- # Toggle a rotation policy Source: https://docs.cryptflare.com/api-reference/rotation-policies/toggle-policy POST /rotation-policies/:policyId/toggle - pauses or resumes a rotation policy. # Toggle a rotation policy Pauses or resumes a rotation policy. Re-enabling a policy **resets `retry_count` to 0 and clears `last_error`**, so it's how you un-stick a policy that hit the 5-failure retry ceiling after fixing the underlying problem. --- ## Required permission --- ## Request --- --- # Update a rotation policy Source: https://docs.cryptflare.com/api-reference/rotation-policies/update-policy PATCH /rotation-policies/:policyId - update interval, generation, or notification settings. # Update a rotation policy Updates a rotation policy's interval, generation settings, or notification preferences. **Changing `intervalDays` recalculates `next_rotation_at` from the current time** - if you set a 7-day interval at noon, the next rotation fires at noon a week from now. --- ## Required permission --- ## Request --- --- # Search Source: https://docs.cryptflare.com/api-reference/search Permission-aware search across workspaces, environments, secrets, and members. # Search Unified, permission-aware search across every resource in an organisation. 
Results are filtered through full RBAC and policy evaluation, so callers only see resources they actually have access to. Deny policies are also evaluated - if a deny matches a result for the caller, that result is silently excluded. This endpoint powers the Cmd+K palette in the CryptFlare dashboard and is exposed only to console users. --- ## Required permission Results are further filtered per resource type: | Result type | Required permission | |---|---| | Workspace | `workspace:read` on the workspace | | Environment | `environment:read` on the parent workspace | | Secret | `secrets:read` on the parent environment | | Member | `members:read` on the organisation | --- ## Request --- --- # Secrets Source: https://docs.cryptflare.com/api-reference/secrets Create, read, rotate, and delete encrypted secrets. Organize with pods. # Secrets The Secrets API lets you manage encrypted key-value pairs within an [environment](/api-reference/environments). All values are encrypted with AES-256-GCM before storage - plaintext never touches disk and CryptFlare workers decrypt on-demand at read time. Secrets can be organised into [pods](/api-reference/pods) for hierarchical grouping within an environment. This page is an **overview** - every endpoint is documented on its own page. Use the sidebar or the endpoint index below to jump to a specific operation. 
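The storage model described above (a per-environment key derived via HKDF, values sealed with AES-256-GCM) can be sketched as follows. This is illustrative only - the salt/info inputs and the `iv || tag || ciphertext` layout are assumptions, not CryptFlare's actual wire format:

```typescript
import { createCipheriv, createDecipheriv, hkdfSync, randomBytes } from "node:crypto";

// Derive a 32-byte per-environment data key from the platform master secret.
// The salt and info strings here are illustrative assumptions.
function deriveEnvKey(masterSecret: Buffer, envId: string): Buffer {
  return Buffer.from(hkdfSync("sha256", masterSecret, "demo-salt", `env:${envId}`, 32));
}

// Seal a plaintext value: random 96-bit nonce, AES-256-GCM, then pack as
// iv || authTag || ciphertext (layout is an assumption for this sketch).
function encryptValue(key: Buffer, plaintext: string): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

// Reverse of encryptValue: unpack, set the auth tag, decrypt and verify.
function decryptValue(key: Buffer, blob: Buffer): string {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const ct = blob.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

GCM's auth tag means any tampering with the stored blob fails decryption rather than yielding garbage plaintext.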
## Endpoints | Method | Endpoint | Description | |---|---|---| | | [`/secrets`](/api-reference/secrets/list-secrets) | List secret metadata (values excluded) | | | [`/secrets`](/api-reference/secrets/create-secret) | Create a new encrypted secret | | | [`/secrets/:key`](/api-reference/secrets/reveal-secret) | Decrypt and return a single secret value | | | [`/secrets/:key/rotate`](/api-reference/secrets/rotate-secret) | Encrypt a new value and increment version | | | [`/secrets/:key`](/api-reference/secrets/delete-secret) | Permanently delete a secret and its history | | | [`/secrets/:key/move`](/api-reference/secrets/move-secret) | Move a secret into or out of a pod | | | [`/secrets/:key/settings`](/api-reference/secrets/get-settings) | Get settings, metadata, and validation rules | | | [`/secrets/:key/settings`](/api-reference/secrets/update-settings) | Update settings including validation rules | | | [`/secrets/:key/rules`](/api-reference/secrets/get-rules) | Get validation rules | | | [`/secrets/:key/rules`](/api-reference/secrets/set-rules) | Set validation rules | | | [`/secrets/:key/rules`](/api-reference/secrets/remove-rules) | Remove all validation rules | | | [`/secrets/batch/create`](/api-reference/secrets/batch-create) | Create up to 100 secrets in one call | | | [`/secrets/batch/update`](/api-reference/secrets/batch-update) | Rotate up to 100 secrets in one call | | | [`/secrets/batch/delete`](/api-reference/secrets/batch-delete) | Delete up to 100 secrets in one call | | | [`/secrets/batch/:jobId`](/api-reference/secrets/batch-status) | Poll batch job status and results | ## Batch operations Batch endpoints process operations asynchronously via a queue for reliability on all Cloudflare Workers tiers. The HTTP handler validates input and encrypts values immediately (CPU-only, fast), then enqueues the DB work. The response returns a `jobId` with status `processing`. 
Poll the [batch status](/api-reference/secrets/batch-status) endpoint until the job completes. ## Validation rules Secrets can have per-secret validation rules that enforce value constraints on write and rotate. If the new value violates any rule, the request is rejected with `400 SECRET_VALIDATION_FAILED` before the value is encrypted. Rules are managed via dedicated endpoints ([get](/api-reference/secrets/get-rules), [set](/api-reference/secrets/set-rules), [remove](/api-reference/secrets/remove-rules)) and require the `secrets:rules` permission. ## How secrets are stored - **Encrypted at rest** with AES-256-GCM. The encryption key is derived per-environment via HKDF from the platform master secret (or the organisation's BYOK customer key when BYOK is enabled). - **Plaintext never persisted.** Values are decrypted on demand inside a Cloudflare Worker and never written to disk or logs. - **Versioned.** Every rotation increments a monotonically-increasing `version` counter. Previous versions are retained up to your plan's history limit. - **Audit-logged.** Reveal calls record the caller identity, environment, and secret key. Write calls additionally record the resulting version. --- # Batch create secrets Source: https://docs.cryptflare.com/api-reference/secrets/batch-create POST /secrets/batch/create - create up to 100 secrets in a single request via an async job. # Batch create secrets Creates multiple secrets in a single API call. Values are encrypted immediately in the request handler, then the DB inserts are processed asynchronously via a queue. The response returns a `jobId` that the client polls for completion. Duplicate keys are skipped (not rejected), so the operation is partially idempotent. Each item in the result reports its own status: `created`, `skipped`, or `error`. This endpoint returns `202 Accepted` with a job ID. The actual DB inserts happen asynchronously. Poll `GET /secrets/batch/:jobId` for results. Jobs expire after 1 hour. 
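As a client-side sketch of preparing a migration payload: parse dotenv-style text into `{ key, value }` items, drop duplicate keys locally (the API would skip them anyway), and split into batches respecting the 100-item-per-call limit. The helper names are illustrative, not part of any SDK:

```typescript
type SecretItem = { key: string; value: string };

// Parse dotenv-style text; blank lines and #-comments are ignored, and only
// the first occurrence of a duplicate key is kept.
function dotenvToSecrets(text: string): SecretItem[] {
  const seen = new Set<string>();
  const secrets: SecretItem[] = [];
  for (const raw of text.split("\n")) {
    const line = raw.trim();
    if (!line || line.startsWith("#")) continue;
    const eq = line.indexOf("=");
    if (eq === -1) continue;
    const key = line.slice(0, eq).trim();
    if (seen.has(key)) continue;
    seen.add(key);
    secrets.push({ key, value: line.slice(eq + 1).trim() });
  }
  return secrets;
}

// Split into order-preserving chunks of at most `limit` items, matching the
// documented 100-secret cap per batch call.
function chunkForBatch(secrets: SecretItem[], limit = 100): SecretItem[][] {
  const batches: SecretItem[][] = [];
  for (let i = 0; i < secrets.length; i += limit) batches.push(secrets.slice(i, i + limit));
  return batches;
}
```

Each chunk would then be POSTed to `/secrets/batch/create` as `{ "secrets": [...] }`.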
--- ## Required permission --- ## Request --- ## Polling for results Once you have a `jobId`, poll the status endpoint until `status` is `completed`: --- ## Use cases ### Migrating from another secrets manager Import all secrets from Doppler, Vault, or AWS Secrets Manager in one call instead of 200 sequential requests: ```bash # Export from source, transform to CryptFlare format, batch create curl -X POST https://api.cryptflare.com/v1/organisations/$ORG/workspaces/my-app/environments/production/secrets/batch/create \ -H "Authorization: Bearer $TOKEN" \ -d '{ "secrets": [ { "key": "DB_HOST", "value": "prod-db.rds.amazonaws.com" }, { "key": "DB_PORT", "value": "5432" }, { "key": "DB_USER", "value": "app_user" }, { "key": "DB_PASS", "value": "s3cur3-p4ss!" } ] }' ``` ### Terraform provider bulk provisioning The Terraform provider uses batch create internally when `terraform apply` provisions multiple secrets in a single plan: ```hcl resource "cryptflare_secret" "db" { for_each = var.database_secrets workspace = "my-app" environment = "production" key = each.key value = each.value } ``` ### Seeding a new environment Clone secrets from staging to a new production environment: ```bash # Fetch staging secrets, batch create in production SECRETS=$(cf secrets list --workspace my-app --env staging --format json) curl -X POST .../environments/production/secrets/batch/create \ -d "$SECRETS" ``` --- --- # Batch delete secrets Source: https://docs.cryptflare.com/api-reference/secrets/batch-delete POST /secrets/batch/delete - delete up to 100 secrets by key in a single request. # Batch delete secrets Deletes multiple secrets by key in a single API call. Non-existent keys and locked secrets are caught upfront and reported in `preValidationErrors`. Valid keys are processed asynchronously via a queue. Tags and version history are cascade-deleted with each secret. This endpoint returns `202 Accepted` with a job ID. The actual deletions happen asynchronously. 
Poll `GET /secrets/batch/:jobId` for results.

---

## Required permission

---

## Request

---

## Polling for results

---

## Use cases

### Environment teardown

When decommissioning a staging environment, remove all secrets in one call:

```bash
# List all keys, then batch delete
KEYS=$(cf secrets list --workspace my-app --env old-staging --format json | jq '[.[].key]')
curl -X POST .../environments/old-staging/secrets/batch/delete \
  -H "Authorization: Bearer $TOKEN" \
  -d "{\"keys\": $KEYS}"
```

### Post-rotation cleanup

After rotating to a new credential scheme, remove all legacy secrets that are no longer referenced:

```bash
curl -X POST .../secrets/batch/delete \
  -d '{"keys": ["LEGACY_DB_PASSWORD", "LEGACY_API_KEY_V1", "OLD_SMTP_PASS"]}'
```

### IaC destroy operations

The Terraform provider uses batch delete when `terraform destroy` removes multiple `cryptflare_secret` resources in a single plan.

---

---

# Get batch job status

Source: https://docs.cryptflare.com/api-reference/secrets/batch-status

GET /secrets/batch/:jobId - poll for the results of a batch operation.

# Get batch job status

Polls the status and results of a batch create, update, or delete operation. Batch operations are processed asynchronously via a queue - this endpoint lets the client check whether the job has completed and retrieve per-item results.

Jobs expire after 1 hour. Polling a non-existent or expired job returns `404`.

---

## Required permission

---

## Request

---

## Polling pattern

A typical client polls every 1-2 seconds until `status` is no longer `processing`:

```typescript
async function waitForBatch(jobId: string): Promise<any> {
  while (true) {
    const res = await fetch(`${API}/secrets/batch/${jobId}`);
    const { data } = await res.json();
    if (data.status !== 'processing') return data;
    await new Promise(r => setTimeout(r, 1500));
  }
}
```

Most batch jobs complete in under 5 seconds. Jobs processing 100 items with complex validation rules may take up to 30 seconds.
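Once polling reports completion, the per-item statuses (`created`, `skipped`, `error`, per the batch-create docs) can be rolled up for display. A small sketch - the exact item shape is an assumption:

```typescript
// Per-item result shape assumed from the batch-create documentation.
type BatchItem = { key: string; status: "created" | "skipped" | "error"; error?: string };

// Count items by status so a CLI or dashboard can print a one-line summary.
function summarizeBatch(items: BatchItem[]): Record<string, number> {
  const summary: Record<string, number> = { created: 0, skipped: 0, error: 0 };
  for (const item of items) summary[item.status] += 1;
  return summary;
}
```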
--- --- # Batch update secrets Source: https://docs.cryptflare.com/api-reference/secrets/batch-update POST /secrets/batch/update - rotate up to 100 secrets to new values in a single request. # Batch update secrets Rotates multiple secrets to new values in a single API call. Validation rules are enforced per-secret in the request handler before encryption - a value that violates a rule is rejected immediately (reported in `preValidationErrors`) without reaching the queue. Locked secrets and missing keys are also caught upfront. Previous versions are archived per the normal rotation flow. Each secret's version counter increments independently. This endpoint returns `202 Accepted` with a job ID. The actual DB rotations happen asynchronously. Poll `GET /secrets/batch/:jobId` for results. --- ## Required permission --- ## Request --- ## Polling for results --- ## Use cases ### Credential rotation across environments When a database password changes, rotate the connection string in every environment: ```bash for ENV in staging production canary; do curl -X POST .../environments/$ENV/secrets/batch/update \ -H "Authorization: Bearer $TOKEN" \ -d '{ "secrets": [ { "key": "DATABASE_URL", "value": "postgres://user:rotated@host/db" }, { "key": "DATABASE_READONLY_URL", "value": "postgres://ro:rotated@replica/db" } ] }' done ``` ### CI/CD pipeline secret refresh Rotate all build-time secrets in one deployment step: ```yaml # GitHub Actions - name: Rotate build secrets run: | curl -X POST $CRYPTFLARE_API/secrets/batch/update \ -H "Authorization: Bearer $CRYPTFLARE_TOKEN" \ -d '{"secrets": [ {"key": "NPM_TOKEN", "value": "${{ secrets.NPM_TOKEN }}"}, {"key": "SENTRY_AUTH_TOKEN", "value": "${{ secrets.SENTRY_AUTH_TOKEN }}"} ]}' ``` ### Validation rule enforcement Secrets with validation rules are checked before encryption. 
Invalid values are returned immediately in `preValidationErrors` - they never reach the queue and never get encrypted: ```json { "preValidationErrors": [ { "key": "WEAK_SECRET", "error": "Validation failed: Value must be at least 14 characters (got 5)" } ] } ``` --- --- # Create a secret Source: https://docs.cryptflare.com/api-reference/secrets/create-secret POST /secrets - creates a new secret with version 1. Value is encrypted with AES-256-GCM before storage. # Create a secret Creates a new secret in the environment with `version: 1`. The plaintext value is encrypted with AES-256-GCM before it ever touches durable storage. Optionally place the new secret inside a pod for hierarchical grouping. --- ## Required permission --- ## Request --- --- # Delete a secret Source: https://docs.cryptflare.com/api-reference/secrets/delete-secret DELETE /secrets/:key - permanently deletes a secret and all its version history. # Delete a secret Permanently deletes a secret, including every archived version. **This cannot be undone.** Applications that still reference the secret by key will start receiving `404 Not Found` on reveal. There is no soft-delete and no grace period. If you need a reversible hide, rotate the secret to an empty placeholder value first and delete only after you've confirmed no consumers still depend on it. --- ## Required permission --- ## Request --- --- # Get validation rules Source: https://docs.cryptflare.com/api-reference/secrets/get-rules GET /secrets/:key/rules - returns the validation rules configured on a secret. # Get validation rules Returns the validation rules currently configured on a secret. Rules are enforced on every write and rotate operation - if no rules are set, the response contains an empty array. 
--- ## Required permission --- ## Request --- --- # Get secret settings Source: https://docs.cryptflare.com/api-reference/secrets/get-settings GET /secrets/:key/settings - returns metadata, version policy, validation rules, and auto-delete configuration. # Get secret settings Returns the full settings object for a secret including custom metadata, version policy, validation rules, and auto-delete configuration. --- ## Required permission --- ## Request --- --- # List secrets Source: https://docs.cryptflare.com/api-reference/secrets/list-secrets GET /secrets - returns secret key names and metadata. Values are never included in list responses. # List secrets Returns secret key names and metadata for the environment. Plaintext values are **never** included in list responses - use [`GET /secrets/:key`](/api-reference/secrets/reveal-secret) to decrypt a single secret when you need its value. Optionally filter by pod ID. --- ## Required permission --- ## Request --- --- # List secret versions Source: https://docs.cryptflare.com/api-reference/secrets/list-versions GET /secrets/:key/versions - returns the full version history of a secret (metadata only, no plaintext). # List secret versions Returns every version of a secret, newest first. The response is metadata only: version numbers, who created each version, and when. Plaintext values are never included here - use [Reveal a secret](/api-reference/secrets/reveal-secret) with `?version=N` (or [Reveal a specific version](/api-reference/secrets/reveal-version)) to pull the decrypted value for a given row. Pair this endpoint with the reveal endpoint's `?version=N` query parameter to build a rollback UI: show the history here, let the user pick, then call reveal with the chosen version number. --- ## Required permission --- ## Request --- --- # Move a secret to a pod Source: https://docs.cryptflare.com/api-reference/secrets/move-secret PATCH /secrets/:key/move - moves a secret into a pod, or back to the root level. 
# Move a secret to a pod

Moves an existing secret into a [pod](/api-reference/pods) for hierarchical grouping, or back to the environment root. The value and version number are untouched - this is a pure metadata move and does not count as a rotation in the audit log.

---

## Required permission

---

## Request

---

---

# Remove validation rules

Source: https://docs.cryptflare.com/api-reference/secrets/remove-rules

DELETE /secrets/:key/rules - removes all validation rules from a secret.

# Remove validation rules

Removes all validation rules from a secret. After removal, the secret will accept any value on subsequent writes and rotations without constraint.

---

## Required permission

---

## Request

---

---

# Reveal a secret

Source: https://docs.cryptflare.com/api-reference/secrets/reveal-secret

GET /secrets/:key - decrypts and returns the secret value. Logged in the audit trail.

# Reveal a secret

Decrypts and returns the plaintext value of a single secret. **Every call is logged in the audit trail** including the caller identity, environment, and key name - use this endpoint deliberately and prefer service tokens scoped to a single environment when possible.

Append `?version=N` to reveal a specific historical version. Omit the param to get the current version. Version `N` must match a row in [`/secrets/:key/versions`](/api-reference/secrets/list-versions).

Append `?format=base64` to get the value base64-encoded. Useful for binary secrets, TLS certificates, or embedding values in configs that don't support special characters.

---

## Required permission

---

## Request

---

---

# Reveal a specific version

Source: https://docs.cryptflare.com/api-reference/secrets/reveal-version

GET /secrets/:key/versions/:version - decrypts and returns a specific historical version.
# Reveal a specific version Decrypts and returns the plaintext value of a specific historical version of a secret. **Every call is logged in the audit trail** with the requested version number. Use this to implement rollback flows or compliance-driven point-in-time reveals. You can also get a specific version from the main reveal endpoint: `GET /secrets/:key?version=N`. Both paths are equivalent and log identically to the audit trail - pick whichever fits your SDK / routing shape better. --- ## Required permission --- ## Request --- --- # Rotate a secret Source: https://docs.cryptflare.com/api-reference/secrets/rotate-secret POST /secrets/:key/rotate - encrypts a new value, increments the version, and archives the old version. # Rotate a secret Encrypts a new plaintext value, increments the secret's `version`, and archives the previous version. Previous versions are retained according to your plan's version history limit - older versions are pruned automatically when the limit is exceeded. If the secret has [validation rules](/api-reference/secrets/validation-rules) configured, the new value is checked against every rule before encryption. A value that violates any rule is rejected with `400 SECRET_VALIDATION_FAILED` and the secret is not modified. --- ## Required permission --- ## Request --- --- # Set validation rules Source: https://docs.cryptflare.com/api-reference/secrets/set-rules PUT /secrets/:key/rules - replaces all validation rules on a secret. # Set validation rules Replaces all validation rules on a secret. Rules are enforced server-side at write and rotate time before encryption. If the new value violates any rule, the request is rejected with `400 SECRET_VALIDATION_FAILED` and the value is never stored. Setting rules does not retroactively validate the current secret value. Rules are checked only on subsequent writes and rotations. 
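The write-time check can be sketched as a pure function that collects every violated rule before anything is encrypted. Rule shapes follow the documented rule types; treating `entropy_min` as total Shannon bits across the whole value (rather than per character) is an assumption:

```typescript
// A subset of the documented rule types, as a discriminated union.
type Rule =
  | { type: "min_length"; value: number }
  | { type: "max_length"; value: number }
  | { type: "regex"; pattern: string; label?: string }
  | { type: "entropy_min"; bits: number };

// Shannon entropy of the character distribution, scaled to total bits for
// the whole value (interpretation is an assumption of this sketch).
function shannonEntropyBits(value: string): number {
  const counts = new Map<string, number>();
  for (const ch of value) counts.set(ch, (counts.get(ch) ?? 0) + 1);
  let perChar = 0;
  for (const n of counts.values()) {
    const p = n / value.length;
    perChar -= p * Math.log2(p);
  }
  return perChar * value.length;
}

// Return every violation; an empty array means the write may proceed.
function validateValue(value: string, rules: Rule[]): string[] {
  const errors: string[] = [];
  for (const rule of rules) {
    if (rule.type === "min_length" && value.length < rule.value)
      errors.push(`Value must be at least ${rule.value} characters (got ${value.length})`);
    if (rule.type === "max_length" && value.length > rule.value)
      errors.push(`Value must be at most ${rule.value} characters`);
    if (rule.type === "regex" && !new RegExp(rule.pattern).test(value))
      errors.push(`Value must match ${rule.label ?? rule.pattern}`);
    if (rule.type === "entropy_min" && shannonEntropyBits(value) < rule.bits)
      errors.push(`Value entropy below ${rule.bits} bits`);
  }
  return errors;
}
```

A non-empty result maps naturally to the documented `400 SECRET_VALIDATION_FAILED` rejection.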
--- ## Required permission --- ## Request ### Rule types | Type | Fields | Description | |------|--------|-------------| | `min_length` | `value: number` | Value must be at least N characters | | `max_length` | `value: number` | Value must be at most N characters | | `regex` | `pattern: string`, `label?: string` | Value must match the regex pattern | | `format` | `format: string` | Must be `uuid`, `json`, `base64`, `url`, `pem`, or `connection_string` | | `no_common_passwords` | _(none)_ | Value must not be in the HIBP top list | | `entropy_min` | `bits: number` | Minimum Shannon entropy in bits | --- --- # Update secret settings Source: https://docs.cryptflare.com/api-reference/secrets/update-settings PATCH /secrets/:key/settings - update metadata, version policy, and auto-delete configuration. # Update secret settings Updates one or more settings on a secret. All fields are optional - only include the fields you want to change. The secret must not be locked. --- ## Required permission --- ## Request --- --- # Service Tokens Source: https://docs.cryptflare.com/api-reference/service-tokens Organisation-level API tokens for CI / CD pipelines. Not tied to any user account. # Service Tokens Service tokens are **organisation-owned** API tokens. Unlike [API tokens](/api-reference/tokens) they are not tied to any user account, so they keep working when team members leave, making them the right choice for CI / CD pipelines, Terraform providers, long-lived AI agents connecting through [`mcp.cryptflare.com`](/security/mcp-access) (tick `mcp:use` at creation), and shared service accounts. Service tokens always use the `cf_live_...` prefix - there is no test-mode equivalent. They also support an IP allowlist for defence-in-depth. This page is an **overview** - every endpoint is documented on its own page. 
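The service-token IP allowlist accepts single IPs and CIDR ranges, with an empty or `null` list admitting any caller. A minimal IPv4-only matcher as a sketch (CryptFlare's actual middleware may also handle IPv6):

```typescript
// Pack a dotted-quad IPv4 address into an unsigned 32-bit integer.
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0;
}

// True if `ip` matches any allowlist entry (single IP or CIDR range).
// An empty or null allowlist admits every IP, as documented.
function ipAllowed(ip: string, allowlist: string[] | null): boolean {
  if (!allowlist || allowlist.length === 0) return true;
  return allowlist.some((entry) => {
    const [base, bits] = entry.includes("/") ? entry.split("/") : [entry, "32"];
    const prefix = parseInt(bits, 10);
    const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0;
    return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
  });
}
```

A mismatch would map to the documented `403 SERVICE_TOKEN_IP_BLOCKED` response.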
## Endpoints | Method | Endpoint | Description | |---|---|---| | | [`/service-tokens`](/api-reference/service-tokens/list-service-tokens) | List every service token in the organisation | | | [`/service-tokens`](/api-reference/service-tokens/create-service-token) | Create a service token and return the one-shot secret | | | [`/service-tokens/:tokenId`](/api-reference/service-tokens/update-service-token) | Update name, description, scopes, or IP allowlist | | | [`/service-tokens/:tokenId/toggle`](/api-reference/service-tokens/toggle-service-token) | Enable or disable a token (reversible kill-switch) | | | [`/service-tokens/:tokenId`](/api-reference/service-tokens/revoke-service-token) | Permanently revoke a service token | ## Differences from API tokens | Feature | API token | Service token | |---|---|---| | Scope | Single workspace | Entire organisation | | Linked to user | Yes | No (audit only) | | Survives user removal | No | Yes | | IP allowlist | No | Yes | | Description field | No | Yes | | Required permission | `tokens:create` | `service_tokens:create` | | Available to roles | Owner, Manager, Developer | Owner, Manager only | ## Authentication Service tokens authenticate via the standard `Authorization` header: ``` Authorization: Bearer cf_live_a1b2c3d4e5f6... ``` The token auth middleware validates, in order: (1) the token exists and is not disabled, (2) it has not expired, (3) the request IP matches the allowlist if one is configured, and (4) the token's scopes include the required permission for the endpoint. IP mismatches return `403 SERVICE_TOKEN_IP_BLOCKED`. ## IP allowlist Accepts single IPs (`10.0.1.42`), CIDR ranges (`10.0.0.0/8`), or a mixed array. When the allowlist is empty or `null`, any IP may authenticate. --- # Create a service token Source: https://docs.cryptflare.com/api-reference/service-tokens/create-service-token POST /service-tokens - generates a new organisation-level service token. Secret returned exactly once. 
# Create a service token Generates a new organisation-level service token. Service tokens survive user removal and support an optional IP allowlist, making them the right choice for CI / CD pipelines, Terraform providers, and shared service accounts. The `token` field in the response is the only time you will ever see the plaintext secret. Write it to your CI environment or secret manager before you close the tab. --- ## Required permission --- ## Request --- --- # List service tokens Source: https://docs.cryptflare.com/api-reference/service-tokens/list-service-tokens GET /service-tokens - returns every service token in the organisation. # List service tokens Returns every service token registered in the organisation. Only the first 12 characters of each secret are exposed via `tokenPrefix` - full values are stored as salted hashes. --- ## Required permission --- ## Request --- --- # Revoke a service token Source: https://docs.cryptflare.com/api-reference/service-tokens/revoke-service-token DELETE /service-tokens/:tokenId - permanently deletes a service token. CI / CD pipelines using it will lose access immediately. # Revoke a service token Permanently deletes a service token. Any CI / CD pipeline, Terraform run, or script using this token loses access immediately - and the next request they make will fail with `401`. Use [Toggle a service token](/api-reference/service-tokens/toggle-service-token) instead if you want a reversible kill-switch. Revoked service tokens cannot be recovered. The audit trail retains the token ID and prefix for forensic review, but the token itself is gone. --- ## Required permission --- ## Request --- --- # Enable or disable a service token Source: https://docs.cryptflare.com/api-reference/service-tokens/toggle-service-token POST /service-tokens/:tokenId/toggle - disable a service token without permanently revoking it (or re-enable it). # Enable or disable a service token Flips a service token's `disabled` flag. 
A disabled token cannot authenticate but still exists in the database, so you can re-enable it later without losing the audit history. Use this as the incident-response kill-switch when revoke-and-recreate would lose too much context. --- ## Required permission --- ## Request --- --- # Update a service token Source: https://docs.cryptflare.com/api-reference/service-tokens/update-service-token PATCH /service-tokens/:tokenId - update the name, description, scopes, or IP allowlist of a service token. # Update a service token Updates a service token's name, description, scopes, or IP allowlist. The token value itself cannot be changed - revoke and recreate if you need to rotate the secret. --- ## Required permission --- ## Request --- --- # SSO Source: https://docs.cryptflare.com/api-reference/sso Configure OIDC Single Sign-On connections, group-to-role mappings, and the auth flow endpoints. # SSO The SSO API configures OIDC-based Single Sign-On. Endpoints split into two groups: the **auth flow** (public, called during login) and **config management** (org-scoped, requires authentication). SSO is a Team-plan feature. Pro and Free plans cannot create connections and receive `403 SSO_PLAN_REQUIRED`. This page is an **overview** - every endpoint is documented on its own page. 
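Role assignment from IdP groups (covered under group mapping management) can be sketched as: walk mappings by `priority` (lower number wins), fall back to the connection's `defaultRole` when nothing matches, and never grant `owner` via a mapping. Field names like `group` are assumptions; filtering `owner` at resolve time is belt-and-braces, since the docs say it cannot be assigned via mappings at all:

```typescript
// Mapping shape is an assumption; `priority` and `defaultRole` are documented.
type Mapping = { group: string; role: string; priority: number };

// Pick the role from the highest-priority mapping whose group the user holds,
// skipping any (hypothetical) owner mapping, else fall back to defaultRole.
function resolveRole(userGroups: string[], mappings: Mapping[], defaultRole: string): string {
  const groups = new Set(userGroups);
  const match = [...mappings]
    .sort((a, b) => a.priority - b.priority)
    .find((m) => groups.has(m.group) && m.role !== "owner");
  return match ? match.role : defaultRole;
}
```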
## Auth flow endpoints | Method | Endpoint | Description | |---|---|---| | | [`/auth/sso/check`](/api-reference/sso/check-domain) | Check whether a domain has force-SSO enabled (public) | | | [`/auth/sso/initiate`](/api-reference/sso/initiate-login) | Redirect to the IdP authorization endpoint | | | [`/auth/sso/callback/oidc`](/api-reference/sso/oidc-callback) | Handle the OIDC authorization code callback | ## Connection management | Method | Endpoint | Description | |---|---|---| | | [`/sso`](/api-reference/sso/list-connections) | List SSO connections | | | [`/sso`](/api-reference/sso/create-connection) | Create a new connection (starts disabled) | | | [`/sso/:connectionId`](/api-reference/sso/update-connection) | Update config, allowed domains, or force-SSO | | | [`/sso/:connectionId`](/api-reference/sso/delete-connection) | Delete a connection and its mappings | | | [`/sso/:connectionId/toggle`](/api-reference/sso/toggle-connection) | Enable or disable a connection | | | [`/sso/:connectionId/test`](/api-reference/sso/test-connection) | Dry-run OIDC discovery against the issuer | ## Group mapping management | Method | Endpoint | Description | |---|---|---| | | [`/sso/:connectionId/mappings`](/api-reference/sso/list-mappings) | List IdP group-to-role mappings | | | [`/sso/:connectionId/mappings`](/api-reference/sso/create-mapping) | Map an IdP group to a CryptFlare role | | | [`/sso/:connectionId/mappings/:mappingId`](/api-reference/sso/delete-mapping) | Remove a group-to-role mapping | ## Error codes | Code | Description | |---|---| | `SSO_NOT_CONFIGURED` | No SSO connection is configured or enabled for the organisation | | `SSO_DOMAIN_MISMATCH` | The user's email domain is not in the connection's allowed domains | | `SSO_FORCE_SSO_ENABLED` | OTP login is blocked because the organisation requires SSO | | `SSO_PLAN_REQUIRED` | SSO requires the Team plan | | `SSO_CALLBACK_FAILED` | The OIDC callback failed (missing claims, token exchange error) | | 
`SSO_STATE_INVALID` | The state parameter is invalid or expired | | `SSO_PROVIDER_ERROR` | The identity provider returned an error | ## Permissions | Permission | Roles | Description | |---|---|---| | `sso:read` | Owner, Manager | View SSO connections and group mappings | | `sso:manage` | Owner | Create, update, delete, and toggle SSO connections and mappings | --- # Check force-SSO status Source: https://docs.cryptflare.com/api-reference/sso/check-domain GET /auth/sso/check - returns whether a given email domain has force-SSO enabled. No authentication required. # Check force-SSO status Public endpoint used by the login page to detect whether an email domain is tied to an SSO-only organisation. When `forceSso: true`, the sign-in form should redirect straight into the IdP flow instead of showing the OTP prompt. This endpoint does not require a session or a token. It is intentionally rate-limited so it cannot be used to enumerate organisation slugs. --- ## Request --- --- # Create an SSO connection Source: https://docs.cryptflare.com/api-reference/sso/create-connection POST /sso - creates a new SSO connection. Starts in a disabled state until tested. # Create an SSO connection Creates a new SSO connection for the organisation. The connection is created **disabled** - test it with the [test endpoint](/api-reference/sso/test-connection) first, then flip `enabled` with [toggle](/api-reference/sso/toggle-connection) once OIDC discovery succeeds. SSO is a Team-plan feature. Pro and Free plans return `403 SSO_PLAN_REQUIRED`. --- ## Required permission --- ## Request --- --- # Create a group mapping Source: https://docs.cryptflare.com/api-reference/sso/create-mapping POST /sso/:connectionId/mappings - maps an IdP group to a CryptFlare role. # Create a group mapping Maps an IdP group to a CryptFlare role. When a user signs in via SSO, CryptFlare walks their group list and assigns the role from the highest-priority matching mapping. 
If no mapping matches, they receive the connection's `defaultRole`. The `owner` role cannot be assigned via group mappings. Use the [ownership transfer endpoint](/api-reference/organisations/initiate-transfer) for that instead.

---

## Required permission

---

## Request

---

---

# Delete an SSO connection

Source: https://docs.cryptflare.com/api-reference/sso/delete-connection

DELETE /sso/:connectionId - deletes an SSO connection and every group mapping associated with it.

# Delete an SSO connection

Deletes an SSO connection and every group mapping associated with it. Members who were provisioned via this connection retain their accounts and current memberships - they simply lose the ability to sign in via SSO until another connection is configured.

Deleting the only connection on a force-SSO domain re-enables OTP login for that domain. Add a replacement connection first if you want to keep OTP disabled.

---

## Required permission

---

## Request

---

---

# Delete a group mapping

Source: https://docs.cryptflare.com/api-reference/sso/delete-mapping

DELETE /sso/:connectionId/mappings/:mappingId - removes an IdP group-to-role mapping.

# Delete a group mapping

Removes an IdP group-to-role mapping. **Existing members keep their current role** until their next SSO login - only new logins re-evaluate mappings.

---

## Required permission

---

## Request

---

---

# Initiate SSO login

Source: https://docs.cryptflare.com/api-reference/sso/initiate-login

GET /auth/sso/initiate - redirects the user to the IdP authorization endpoint with PKCE.

# Initiate SSO login

Redirects the user to the identity provider's authorization endpoint. CryptFlare generates a PKCE code challenge and stores the state in a short-lived key-value entry (10 minute TTL) for CSRF validation.

Accepts either `org` (slug) or `domain` as a query parameter - pass exactly one.

---

## Request

```
/oauth2/v2.0/authorize?
  response_type=code&
  client_id=...&
  redirect_uri=...&
  scope=openid+email+profile&
  state=...&
  code_challenge=...&
  code_challenge_method=S256
```

---

---

# List SSO connections

Source: https://docs.cryptflare.com/api-reference/sso/list-connections

GET /sso - returns every SSO connection for the organisation. Client secrets are redacted.

# List SSO connections

Returns every SSO connection for the organisation. Client secrets are always redacted as `****` - CryptFlare never exposes them through the API after they're stored.

---

## Required permission

---

## Request

---

---

# List group mappings

Source: https://docs.cryptflare.com/api-reference/sso/list-mappings

GET /sso/:connectionId/mappings - returns every IdP group-to-role mapping, ordered by priority.

# List group mappings

Returns every IdP group-to-role mapping for an SSO connection, ordered by `priority` (lower number = higher priority). Mappings determine which CryptFlare role a user receives based on the IdP groups in their ID token.

---

## Required permission

---

## Request

---

---

# OIDC callback

Source: https://docs.cryptflare.com/api-reference/sso/oidc-callback

GET /auth/sso/callback/oidc - handles the authorization code callback from the identity provider.

# OIDC callback

Handles the authorization code callback from the identity provider. Validates the `state` parameter against the KV entry created during [initiate](/api-reference/sso/initiate-login), exchanges the code for tokens, verifies the ID token, provisions or updates the user (JIT), and redirects to the app with session cookies set.

Your application never calls this endpoint directly. The identity provider redirects the user's browser here after the authorization step of the OIDC flow.

---

## Request

---

---

# Test an SSO connection

Source: https://docs.cryptflare.com/api-reference/sso/test-connection

POST /sso/:connectionId/test - tests OIDC discovery against the configured issuer. No user login performed.
# Test an SSO connection Tests an SSO connection by running OIDC discovery against its configured `issuer` URL. This verifies the issuer is reachable, returns a valid `.well-known/openid-configuration`, and advertises the expected endpoints. No actual user login happens. Run this after [create](/api-reference/sso/create-connection) and before [toggle](/api-reference/sso/toggle-connection) - it's the fastest way to catch a typo in the issuer URL. --- ## Required permission --- ## Request --- --- # Toggle an SSO connection Source: https://docs.cryptflare.com/api-reference/sso/toggle-connection POST /sso/:connectionId/toggle - enables or disables an SSO connection. # Toggle an SSO connection Enables or disables an SSO connection. When enabling, CryptFlare validates that every required field (`issuer`, `clientId`, `clientSecret`) is present. Disabling a connection immediately stops new SSO logins - existing sessions are unaffected. --- ## Required permission --- ## Request --- --- # Update an SSO connection Source: https://docs.cryptflare.com/api-reference/sso/update-connection PATCH /sso/:connectionId - updates an existing SSO connection's configuration. # Update an SSO connection Updates an existing SSO connection. Only the fields you pass are updated - omit a field to leave it unchanged. Flipping `forceSso` to `true` immediately blocks OTP login for matching domains, so do that after verifying the connection works. --- ## Required permission --- ## Request --- --- # Status Source: https://docs.cryptflare.com/api-reference/status Public, unauthenticated endpoints for service health, incident data, and email notification subscriptions. # Status The Status API is a small set of **public, unauthenticated** endpoints that power [status.cryptflare.com](https://status.cryptflare.com) and the email notification flow. No API token, session cookie, or organisation context is required to call these. This page is an **overview** - every endpoint is documented on its own page. 
Use the sidebar or the endpoint index below to jump to a specific operation.

## Endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | [`/status`](/api-reference/status/get-status) | Current service health, incidents, maintenance, and metrics |
| POST | [`/status/subscribe`](/api-reference/status/subscribe) | Subscribe an email to incident and maintenance notifications |
| POST | [`/status/unsubscribe`](/api-reference/status/unsubscribe) | Unsubscribe using a signed token from a notification email |
| POST | [`/status/unsubscribe/check`](/api-reference/status/check-unsubscribe) | Validate a token and check current subscription state |

## No authentication

Every endpoint in this group is **public**. Callers do not need a CryptFlare account, API token, or session cookie. This is intentional: the status API is consumed by anonymous visitors (the status page), webhooks (your monitoring stack), and notification email recipients (the unsubscribe flow).

Public endpoints are protected by three layers of defence:

1. **Rate limiting** - 10 requests per minute per IP address on all mutating endpoints (`subscribe`, `unsubscribe`, `unsubscribe/check`). The `get-status` endpoint is edge-cached with a 60-second TTL and has no per-request rate limit.
2. **Signed tokens** - unsubscribe flows require an HMAC-SHA256 signed token that can only be generated server-side. Anyone with a valid token can unsubscribe, but nobody can forge a token without the signing key.
3. **Idempotent responses** - subscribing with an already-registered email is a no-op and returns success. Unsubscribing an already-removed email returns success with `alreadyUnsubscribed: true`. This prevents the endpoints from being used for email enumeration.
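The per-IP limit in the first defence layer can be sketched as a fixed-window counter. This is an illustrative sketch, not CryptFlare's actual middleware; the function and type names (`checkRateLimit`, `Window`) are invented, and it uses an in-memory `Map` where the real service would use shared state.

```typescript
// Fixed-window rate limiter sketch: 10 requests per 60-second window per IP.
// Names and the in-memory Map are illustrative assumptions.
type Window = { count: number; windowStart: number };

function checkRateLimit(
  windows: Map<string, Window>,
  ip: string,
  nowSec: number,
  limit = 10,
  windowSec = 60,
): { allowed: boolean; retryAfter?: number } {
  const w = windows.get(ip);
  if (!w || nowSec - w.windowStart >= windowSec) {
    // First request in a fresh window
    windows.set(ip, { count: 1, windowStart: nowSec });
    return { allowed: true };
  }
  w.count += 1;
  if (w.count > limit) {
    // Retry-After: seconds until the window resets
    return { allowed: false, retryAfter: w.windowStart + windowSec - nowSec };
  }
  return { allowed: true };
}
```

The `retryAfter` value is what a `429 RATE_LIMITED` response would surface in its `Retry-After` header.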
## Rate limits

| Endpoint | Per-IP limit | Additional limit |
|---|---|---|
| `GET /v1/status` | Edge-cached (60s) | - |
| `POST /v1/status/subscribe` | 10 / minute | 5 / hour per email |
| `POST /v1/status/unsubscribe` | 10 / minute | - |
| `POST /v1/status/unsubscribe/check` | 10 / minute | - |

Exceeding a per-IP limit returns `429 RATE_LIMITED` with a `Retry-After` header indicating the number of seconds until the window resets.

---

# Check unsubscribe token

Source: https://docs.cryptflare.com/api-reference/status/check-unsubscribe

POST /status/unsubscribe/check - validates a signed token and returns the current subscription state without modifying anything.

# Check unsubscribe token

Validates an unsubscribe token and returns the current subscription state for the email inside it. This is a **read-only probe** - nothing is modified, no email is sent, no counter is incremented.

Used by the unsubscribe page on load to route between three states:

- Valid token + still subscribed -> show the confirm prompt
- Valid token + already unsubscribed -> show "Already unsubscribed" state (no confirm)
- Invalid/expired token -> show error state

---

## Authentication

No CryptFlare account is required, but the request body must contain a **valid signed token** generated by the server. See [`POST /status/unsubscribe`](/api-reference/status/unsubscribe#token-format) for the token format.

---

## Rate limits

| Limit | Window |
|---|---|
| 10 requests | 1 minute per IP address |

Exceeding the limit returns `429 RATE_LIMITED` with a `Retry-After` header.

---

## Request

---

## Why this endpoint exists

Without a pre-check, the unsubscribe page would always show the confirm prompt - even when the email has already been unsubscribed. Users who click an old link months later would see a misleading "Are you sure?" dialog for an action that has nothing left to do. The check endpoint lets the page route directly to an "Already unsubscribed" state, which is both more accurate and less confusing.
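The three-state routing described above can be sketched in a few lines. The response field names (`valid`, `alreadyUnsubscribed`) and the function name are illustrative assumptions, not the documented response schema.

```typescript
// Sketch of the unsubscribe page's routing on load.
// Field and function names are illustrative, not CryptFlare's real API shape.
type CheckResult = { valid: boolean; alreadyUnsubscribed?: boolean };

function routeUnsubscribePage(res: CheckResult): "confirm" | "already-unsubscribed" | "error" {
  if (!res.valid) return "error";                              // invalid or expired token
  if (res.alreadyUnsubscribed) return "already-unsubscribed";  // skip the confirm prompt
  return "confirm";                                            // still subscribed
}
```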
Since the endpoint is read-only, it is safe to call eagerly on page load without any risk of state mutation or side effects.

---

---

# Get service status

Source: https://docs.cryptflare.com/api-reference/status/get-status

GET /status - returns live service health checks, 90-day history, active incidents, maintenance windows, and platform metrics. Public and edge-cached.

# Get service status

Returns a full snapshot of the CryptFlare platform's current health plus all active incidents, scheduled maintenance, and platform-wide metrics. This is the endpoint that powers [status.cryptflare.com](https://status.cryptflare.com) and the status banner in every CryptFlare app.

No authentication is required. The response is **edge-cached for 60 seconds** via the Cloudflare Cache API, so high traffic from the public status page does not impact API worker capacity.

---

## Authentication

None - this endpoint is public. No `Authorization` header, cookie, or organisation context is required.

---

## Request

---

## Caching

The response is served from three tiers of cache before touching the database:

1. 60-second TTL. Zero latency within a Cloudflare colo. Served before the worker runs.
2. 60-second TTL. Cross-colo fallback at around 10ms. Populated on edge cache miss.
3. D1 databases and Durable Objects. Only runs on full cache miss, then backfills both upper tiers.

Polling this endpoint at 60-second intervals is fine. Polling faster than that just returns the same cached response.

---

---

# Subscribe to notifications

Source: https://docs.cryptflare.com/api-reference/status/subscribe

POST /status/subscribe - subscribes an email to receive incident and maintenance notifications. Public, idempotent, rate-limited.

# Subscribe to notifications

Subscribes an email address to receive notifications when CryptFlare creates or updates an incident, or schedules maintenance. The email is stored with a subscription timestamp and starts receiving notifications immediately. No account is created.
The endpoint is **idempotent**: calling it multiple times with the same email is a no-op and always returns the same response shape. This prevents the endpoint from being used for email enumeration. --- ## Authentication None - this endpoint is public. No `Authorization` header, cookie, or organisation context is required. --- ## Rate limits | Limit | Window | |---|---| | 10 requests | 1 minute per IP address | | 5 requests | 1 hour per email address | Exceeding either limit returns `429 RATE_LIMITED` with a `Retry-After` header. --- ## Request --- ## Storage model Subscribers are stored in Cloudflare KV under the key `status-sub:{email}`. Each record contains: ```json { "email": "oncall@acme.com", "subscribedAt": "2026-04-11T09:15:00Z", "emailsSent": 0, "lastEmailAt": null } ``` The `emailsSent` and `lastEmailAt` fields are incremented when a notification is delivered. We do not store IP addresses, names, or any other profile data. See the [Status Notifications guide](/security/status-notifications#privacy) for the full privacy model. --- --- # Unsubscribe Source: https://docs.cryptflare.com/api-reference/status/unsubscribe POST /status/unsubscribe - unsubscribe an email using a signed token from a notification email. Public, idempotent, rate-limited. # Unsubscribe Removes a subscriber from the status notification list using a signed token. The token is an HMAC-SHA256 signature over the email address, generated server-side and embedded in every notification email. Without a valid token, the request is rejected with `401`. This endpoint is **idempotent**: calling it for an already-unsubscribed email returns `success: true` with `alreadyUnsubscribed: true`. This lets the unsubscribe page handle double-clicks, email forwards, and link scanners gracefully without error responses. --- ## Authentication No CryptFlare account is required, but the request body must contain a **valid signed token** generated by the server. 
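For illustration, a token in the format documented under "Token format" below can be minted and verified like this. This is a sketch only: real tokens are minted server-side with the production signing key, and a real verifier should use a constant-time comparison.

```typescript
import { createHmac } from "node:crypto";

// Sketch of the documented token format:
//   base64url(email).base64url(hmac-sha256(secret, "status-unsub:" + email))
// The helper names are illustrative; only the format comes from the docs.
function mintUnsubscribeToken(secret: string, email: string): string {
  const normalised = email.trim().toLowerCase(); // first segment is the lowercase, trimmed email
  const sig = createHmac("sha256", secret)
    .update(`status-unsub:${normalised}`) // namespace prefix scopes the key to this purpose
    .digest("base64url");
  return `${Buffer.from(normalised).toString("base64url")}.${sig}`;
}

function verifyUnsubscribeToken(secret: string, token: string): string | null {
  const [emailPart, sig] = token.split("."); // base64url never contains "."
  if (!emailPart || !sig) return null;
  const email = Buffer.from(emailPart, "base64url").toString("utf8");
  const expected = createHmac("sha256", secret)
    .update(`status-unsub:${email}`)
    .digest("base64url");
  // NOTE: a production verifier should compare in constant time (timingSafeEqual)
  return sig === expected ? email : null;
}
```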
Tokens are embedded in the unsubscribe links of every notification email and have no expiry - they remain valid indefinitely for the email they were issued to. --- ## Rate limits | Limit | Window | |---|---| | 10 requests | 1 minute per IP address | Exceeding the limit returns `429 RATE_LIMITED` with a `Retry-After` header. --- ## Request --- ## Token format Tokens consist of two base64url-encoded segments joined by a dot: ``` base64url(email).base64url(hmac-sha256(secret, "status-unsub:" + email)) ``` - **First segment** is the lowercase, trimmed email address - **Separator** is a literal dot (`.`) - **Second segment** is the HMAC-SHA256 signature of `status-unsub:` + email, base64url-encoded The `status-unsub:` namespace prefix in the HMAC input prevents the signing secret from being used to forge tokens for other purposes. Tokens are **not** JWTs - there is no header, no expiry, and no payload beyond the email. --- ## Idempotency and re-subscription Clicking the same unsubscribe link twice is safe. The first click removes the subscriber and returns `alreadyUnsubscribed: false`. The second click returns `alreadyUnsubscribed: true` and does nothing else. No duplicate work, no error. If a user re-subscribes via [`POST /status/subscribe`](/api-reference/status/subscribe) after unsubscribing, the same old token will work again because tokens are stateless - they are bound only to the email, not to a particular subscription record. --- --- # Support Source: https://docs.cryptflare.com/api-reference/support Create and manage support tickets, upload attachments, and communicate with the CryptFlare team. # Support The Support API lets you create tickets, send messages, and upload attachments programmatically. Useful for integrating CryptFlare support into your internal tools or alerting systems. This page is an **overview** - every endpoint is documented on its own page. 
## Plan limits

| Plan | Max priority | Response SLA | Concurrent tickets |
|---|---|---|---|
| Free | Medium | - | 2 |
| Pro | High | 24h | 5 |
| Team | Urgent | 4h | Unlimited |

## Endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | [`/support`](/api-reference/support/list-tickets) | List every ticket in the organisation |
| POST | [`/support`](/api-reference/support/create-ticket) | Create a new support ticket |
| GET | [`/support/:ticketId`](/api-reference/support/get-ticket) | Get a ticket with its full message thread |
| POST | [`/support/:ticketId/messages`](/api-reference/support/add-message) | Add a message to a ticket thread |
| POST | [`/support/:ticketId/close`](/api-reference/support/close-ticket) | Close an open ticket |
| POST | [`/support/:ticketId/upload`](/api-reference/support/upload-attachment) | Upload a file attachment |

## Attachments

Attachments are a two-step flow:

1. Call `POST /support/:ticketId/upload` with `multipart/form-data`. The response returns an opaque `key`.
2. Pass `{ key, name, size, type }` in the `attachments` array of an [Add message](/api-reference/support/add-message) call.

Max file size **10MB**. Allowed types: PNG, JPEG, GIF, WebP, PDF, TXT, CSV, Markdown, JSON, ZIP.

---

# Add a message

Source: https://docs.cryptflare.com/api-reference/support/add-message

POST /support/:ticketId/messages - adds a message to an existing ticket thread.

# Add a message

Adds a message to an existing ticket thread. To attach files, [upload them first](/api-reference/support/upload-attachment) and pass the returned `key`, `name`, `size`, and `type` in the `attachments` array.

---

## Required permission

---

## Request

---

---

# Close a ticket

Source: https://docs.cryptflare.com/api-reference/support/close-ticket

POST /support/:ticketId/close - closes an open ticket. Closed tickets cannot receive new messages.

# Close a ticket

Closes an open ticket.
Closed tickets cannot receive new messages - create a new ticket if you need to follow up on a previously-closed issue. --- ## Required permission --- ## Request --- --- # Create a ticket Source: https://docs.cryptflare.com/api-reference/support/create-ticket POST /support - creates a new support ticket. Priority is validated against your plan tier. # Create a ticket Creates a new support ticket. The `priority` you can request depends on your plan: | Plan | Max Priority | Response SLA | Concurrent Tickets | |---|---|---|---| | Free | Medium | - | 2 | | Pro | High | 24h | 5 | | Team | Urgent | 4h | Unlimited | Requesting a priority above your plan's ceiling returns `403`. --- ## Required permission --- ## Request --- --- # Get ticket detail Source: https://docs.cryptflare.com/api-reference/support/get-ticket GET /support/:ticketId - returns a ticket with its full message thread. # Get ticket detail Returns a single ticket with its full message thread. Employee messages are marked with `isEmployee: true` and include the CryptFlare employee's name. --- ## Required permission --- ## Request --- --- # List tickets Source: https://docs.cryptflare.com/api-reference/support/list-tickets GET /support - returns every support ticket in the organisation, newest first. # List tickets Returns every support ticket in the organisation, ordered by most recent. Use [Get ticket](/api-reference/support/get-ticket) to pull the full message thread for a specific one. --- ## Required permission --- ## Request --- --- # Upload attachment Source: https://docs.cryptflare.com/api-reference/support/upload-attachment POST /support/:ticketId/upload - uploads a file as multipart/form-data. Use the returned key when adding messages. # Upload attachment Uploads a file to a ticket. The response returns an opaque `key` - pass it in the `attachments` array when calling [Add a message](/api-reference/support/add-message) to actually attach the file to a message. Max file size: **10MB**. 
Allowed types: PNG, JPEG, GIF, WebP, PDF, TXT, CSV, Markdown, JSON, ZIP. Anything else is rejected with `400`.

---

## Required permission

---

## Request

---

---

# Sync Connections

Source: https://docs.cryptflare.com/api-reference/sync-connections

Push secrets to third-party platforms like GitHub, Vercel, and AWS Secrets Manager.

# Sync Connections

Sync connections push CryptFlare secrets to external platforms. When a secret changes, the destination is updated automatically (auto mode) or on demand (manual mode). Credentials for the destination are encrypted at rest with the same AES-256-GCM key CryptFlare uses for your secrets.

This page is an **overview** - every endpoint is documented on its own page.

## Endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | [`/sync-connections`](/api-reference/sync-connections/list-connections) | List every sync connection in the organisation |
| POST | [`/sync-connections`](/api-reference/sync-connections/create-connection) | Create a new sync connection (credentials validated before save) |
| POST | [`/sync-connections/:connectionId/trigger`](/api-reference/sync-connections/trigger-sync) | Enqueue an immediate sync job |
| GET | [`/sync-connections/:connectionId/logs`](/api-reference/sync-connections/list-logs) | View execution history for a connection |
| GET | [`/sync-connections/:connectionId/drift`](/api-reference/sync-connections/get-drift) | Detect drift between CryptFlare and the destination (names only, never values) |

## Plan availability

| Feature | Free | Pro | Team |
|---|---|---|---|
| Sync connections | 0 | 3 | 10 |
| Auto sync | - | Yes | Yes |
| Pod-level scoping | - | No | Yes |
| Key filtering | - | Yes | Yes |

## How sync works

CryptFlare acts as a **central hub** for secret propagation. Update a secret once and every connected destination (GitHub, Vercel, AWS, ...) receives the new value automatically.
The flow, end to end: a secret mutation or manual trigger enqueues a sync job with its scope; the queue hands the job to the matcher, which resolves the connections in scope and hands off to an adapter per matching connection; the adapter decrypts the secrets, applies the filters, and pushes the values to the destination over HTTPS; the destination acks with a status, and the run outcome is recorded.

Audit events fire on every secret mutation, and manual triggers hit the same queue, so auto mode and on-demand sync share one code path and one log table.

1. Configure the destination provider, credentials, and scope (environment or pod).
2. Trigger the sync manually via the trigger endpoint, or automatically when a secret in scope changes (auto mode).
3. CryptFlare reads all secrets in scope, applying key filters and pod hierarchy.
4. Secret values are decrypted server-side using your encryption key (BYOK-aware).
5. Values are re-encrypted for the destination's API (e.g. GitHub's sealed box) and pushed over HTTPS.
6. Each run records counts, duration, and outcome in the sync log.

## Sync modes

| Mode | Behaviour |
|---|---|
| **Manual** | Only syncs when you click "Sync now" or call the trigger endpoint. |
| **Auto** | Syncs automatically whenever a secret in the scoped environment / pod is created, rotated, deleted, imported, locked, unlocked, or auto-rotated. |

Auto mode is powered by the audit queue: every secret mutation creates an audit event, and the queue consumer fans out to matching sync connections. No polling - typical latency is under 30 seconds.

## Pod hierarchy

When a sync connection targets a pod, all secrets in that pod **and every descendant pod** are included. CryptFlare uses a recursive query to walk the pod tree.

```
infrastructure/
  databases/
    postgres/          ← sync targets this pod
      DATABASE_URL
      DATABASE_PASSWORD
    redis/
      REDIS_URL
```

Syncing the `postgres` pod pushes `DATABASE_URL` and `DATABASE_PASSWORD` but not `REDIS_URL`.
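The descendant walk can be sketched as a small recursion over an in-memory child map. This is illustrative only: CryptFlare's docs say a recursive database query does this work, and the function name and `Map` representation here are assumptions.

```typescript
// Sketch of the pod-tree walk: the target pod plus every descendant.
// The in-memory child map stands in for the recursive SQL query.
function collectPodIds(podId: string, childrenOf: Map<string, string[]>): string[] {
  const descendants = (childrenOf.get(podId) ?? [])
    .flatMap((child) => collectPodIds(child, childrenOf));
  return [podId, ...descendants];
}
```

With the tree from the example above, targeting `databases` would include `postgres` and `redis`, while targeting the leaf pod `postgres` includes only itself.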
## Key filtering | Option | Purpose | Example | |---|---|---| | `keyPrefix` | Prefix applied at the destination | `PROD_` turns `DATABASE_URL` into `PROD_DATABASE_URL` | | `keyFilter` | Allowlist - only sync these keys | `["DATABASE_URL", "API_KEY"]` | | `excludeKeys` | Blocklist - never sync these keys | `["INTERNAL_SECRET"]` | Filters apply after pod scoping. `keyFilter` takes priority if both are set. ## Drift detection & reconciliation CryptFlare sync is one-way (CryptFlare → destination). Destinations like GitHub Actions Secrets don't return secret values over their REST APIs, so you can never pull values back into CryptFlare. What you *can* do is compare **secret names** on both sides to see whether state has drifted. ### Drift report [`GET /sync-connections/:connectionId/drift`](/api-reference/sync-connections/get-drift) returns three classifications: | Bucket | Meaning | What to do | |---|---|---| | **`inSync`** | Destination has a name that matches a CryptFlare source key. | Nothing. | | **`unmanaged`** | Destination has a name CryptFlare isn't managing. Manual addition, another tool, or predates the connection. | Take it over (see below) or leave it. | | **`orphaned`** | Previously-managed key is missing from the destination. Someone deleted it there after CryptFlare pushed it. | Trigger a sync to re-push, or edit the source to reflect the new state. | The endpoint reads names only, never values. Safe to call from the UI on demand. ### Take over an unmanaged secret CryptFlare exposes a UI flow (`sync → drift report → Take over`) for adopting an unmanaged destination-side secret: Click the drift icon on the connection row to fetch the current state. Each row in the unmanaged section has a "Take over" button. CryptFlare cannot read the existing value from the destination - you have to provide it. A dialog asks for the CryptFlare key name (pre-filled by reversing the connection's key prefix) and the secret value. 
On save, CryptFlare creates a new secret at the connection's exact source scope with the value you typed, then runs the sync. This **overwrites** the destination's existing value with whatever you just typed. If you mis-type, the destination now holds the wrong value. **Pod placement:** the taken-over secret is always created at the connection's exact source scope (`workspaceId` / `environmentId` / `podId`). If the connection is pod-scoped to `test/production/api-keys`, taken-over secrets land in `api-keys`, same as any other secret pushed by this connection. For nested pod chains, the leaf pod is used. This keeps the new secret in sync scope permanently - if it were placed anywhere else, subsequent syncs would ignore it as out-of-scope. **Disabling takeover per connection:** set `allowTakeover: false` via the edit form or `PATCH /sync-connections/:id` to hide the takeover UI without disabling drift detection. Regulated orgs use this to let operators audit drift without giving them the ability to overwrite existing destination values. **Disabling takeover org-wide:** flip **Sync Takeover** off under `Org settings → Features`. This is a single kill switch that disables the takeover action on every sync connection in the org regardless of per-connection settings. Drift detection itself stays available. The effective permission is `orgSyncTakeoverEnabled AND connectionAllowTakeover` - both have to be on for the button to appear. The takeover dialog overwrites whatever currently exists on the destination with the value the user types. For regulated environments (SOC 2, ISO 27001, PCI-DSS), this is an audit-relevant operation. Turning off `allowTakeover` on production connections and restricting it to a specific rollover window is a reasonable control. Drift detection itself is read-only (name comparison only, zero value exposure) and is always available. 
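The three drift buckets reduce to a pure name-set comparison, which is why the report can never expose values. The sketch below follows the bucket definitions in the table above literally; the function and parameter names are illustrative, not CryptFlare's internals.

```typescript
// Sketch of drift classification: names only, never values.
function classifyDrift(
  sourceKeys: string[],  // in-scope CryptFlare keys (after prefix and filters)
  destKeys: string[],    // names reported by the destination
  managedKeys: string[], // keys CryptFlare has previously pushed
) {
  const source = new Set(sourceKeys);
  const dest = new Set(destKeys);
  const managed = new Set(managedKeys);
  return {
    inSync: destKeys.filter((k) => source.has(k)),      // destination name matches a source key
    unmanaged: destKeys.filter((k) => !source.has(k)),  // destination name CryptFlare isn't managing
    orphaned: managedKeys.filter((k) => !dest.has(k)),  // previously managed, missing at destination
  };
}
```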
### Delete on source delete By default, deleting a secret in CryptFlare does **not** remove it from the destination - the destination keeps the last-pushed value. This is safe but leaves stale secrets around if you rely on CryptFlare as the source of truth. Enable `deleteOnSourceDelete` on a connection (via the edit form or `PATCH /sync-connections/:id`) to change that behaviour. With the flag on, every sync's reconcile pass computes `(previously-managed - current-source)` and calls the provider's `deleteSecret` for each missing key. CryptFlare only ever deletes secrets it can prove it pushed. The consumer tracks this in `sync_connections.managed_keys` - an allow-list updated at the end of every successful sync. A destination-side secret CryptFlare never pushed (e.g. a manually-added `TEST` secret on GitHub) will never be deleted by this path regardless of the flag state, because it was never in `managed_keys` to begin with. The connection row shows a small red `auto-del` badge when the flag is on so operators can spot it from the list view. The edit form renders the toggle in a red-tinted card with explicit copy about what it does. ## Default role permissions ## Supported providers --- # Create a sync connection Source: https://docs.cryptflare.com/api-reference/sync-connections/create-connection POST /sync-connections - creates a new sync connection. Credentials are validated against the provider before saving. # Create a sync connection Creates a new sync connection. Credentials are validated against the destination provider before being persisted - if the PAT or API key is wrong, you'll get a `400` back with a provider-specific error message instead of silently storing invalid credentials. Free plans cannot create sync connections. Pro plans are limited to 3 connections per organisation. Team plans get 10 connections plus pod-level scoping. 
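The per-plan ceiling stated above can be expressed as a simple lookup. The constant and function names are illustrative; only the numbers (0 / 3 / 10) come from the docs.

```typescript
// Sketch of the per-plan sync connection ceiling.
const SYNC_CONNECTION_LIMITS: Record<string, number> = {
  free: 0, // sync connections unavailable on Free
  pro: 3,
  team: 10,
};

function canCreateSyncConnection(plan: string, existingConnections: number): boolean {
  return existingConnections < (SYNC_CONNECTION_LIMITS[plan] ?? 0);
}
```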
--- ## Required permission --- ## Request --- --- # Get drift report Source: https://docs.cryptflare.com/api-reference/sync-connections/get-drift GET /sync-connections/:connectionId/drift - classifies destination secrets against CryptFlare source scope. Names only, never values. # Get drift report Compares the current in-scope CryptFlare source secrets to whatever the provider's `listSecrets` method reports from the destination, and classifies the delta into three buckets: `inSync`, `unmanaged`, and `orphaned`. Used by the drift panel in the vault sync page to surface state mismatches between CryptFlare and the destination. Takeover is not done via this endpoint - the vault UI orchestrates it as a `createSecret` + `triggerSync` sequence so the adopted value goes through the standard secret create path. See the [Secret Sync security guide](/security/sync) for the full classification model and compliance controls. Destinations like GitHub Actions Secrets don't return secret values over their REST APIs, so drift detection is fundamentally limited to comparing **names**. This endpoint never reads, returns, or logs any secret value. It's safe to expose to any member with `sync:read`, including compliance auditors who should not have write access. Providers whose adapter doesn't implement `listSecrets` return `501`. --- ## Required permission --- ## Request --- --- # List sync connections Source: https://docs.cryptflare.com/api-reference/sync-connections/list-connections GET /sync-connections - returns every sync connection in the organisation. Credentials are never exposed. # List sync connections Returns every sync connection in the organisation. Destination credentials (PATs, API keys) are never returned - they're encrypted at rest with the same AES-256-GCM key used for secrets. 
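For context, an AES-256-GCM round trip looks like the sketch below. This is a generic illustration of encrypt-at-rest with an authenticated cipher, not CryptFlare's actual credential storage code; the helper names and envelope shape are assumptions.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Illustrative AES-256-GCM round trip for a stored credential.
function encryptCredential(key: Buffer, plaintext: string) {
  const iv = randomBytes(12); // 96-bit IV, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptCredential(key: Buffer, box: { iv: Buffer; ciphertext: Buffer; tag: Buffer }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag); // any tampering makes final() throw
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()]).toString("utf8");
}
```

The GCM auth tag is what makes redaction safe to enforce: a stored credential can only be decrypted intact, never partially.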
--- ## Required permission --- ## Request --- --- # List sync logs Source: https://docs.cryptflare.com/api-reference/sync-connections/list-logs GET /sync-connections/:connectionId/logs - returns execution history for a sync connection. # List sync logs Returns the execution history for a sync connection, newest first. Each entry includes counts of created / updated / deleted secrets, the outcome, duration, and how the run was triggered. --- ## Required permission --- ## Request --- --- # Trigger a manual sync Source: https://docs.cryptflare.com/api-reference/sync-connections/trigger-sync POST /sync-connections/:connectionId/trigger - enqueues an immediate sync job for the connection. # Trigger a manual sync Enqueues a sync job for immediate processing. The sync itself runs asynchronously via the sync queue - use [List sync logs](/api-reference/sync-connections/list-logs) to poll for the result. Typical queue latency is under 30 seconds. Transient provider errors (`429`, `5xx`, network timeouts) are automatically retried with exponential backoff (base 30s, cap 15m) up to three attempts; terminal errors (`401`, `403`, `404`) ack immediately and stamp `credentialStatus = invalid` so the UI surfaces a 'creds expired' badge without waiting for the daily credential-health cron. See the [Secret Sync security guide](/security/sync) for the full classification model. Connections with `enabled: 0` return `400` instead of queuing. Enable them first. --- ## Required permission --- ## Request --- --- # Tags Source: https://docs.cryptflare.com/api-reference/tags Attach free-form labels to resources for policy scoping and compliance classification. # Tags The Tags API lets you attach free-form labels to workspaces, environments, pods, and secrets. Tags are consumed by the policy engine's `resource_tag_any`, `resource_tag_all`, and `resource_tag_none` conditions, making it possible to write compliance-scoped policies like "deny writes on PCI-tagged resources without MFA." 
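The three tag conditions can be sketched as set membership checks. The function signature is illustrative, not the policy engine's real API; only the condition names and the lowercase-storage rule come from the docs.

```typescript
// Sketch of the three tag conditions as set membership checks.
// Tags are stored lowercase, so comparison normalises case first.
function evalTagCondition(
  condition: "resource_tag_any" | "resource_tag_all" | "resource_tag_none",
  wanted: string[],
  resourceTags: string[],
): boolean {
  const tags = new Set(resourceTags.map((t) => t.toLowerCase()));
  const hits = wanted.filter((t) => tags.has(t.toLowerCase()));
  if (condition === "resource_tag_any") return hits.length > 0;
  if (condition === "resource_tag_all") return hits.length === wanted.length;
  return hits.length === 0; // resource_tag_none
}
```

A policy like "deny writes on PCI-tagged resources without MFA" would key off something like `evalTagCondition("resource_tag_any", ["pci"], resourceTags)`.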
Tags are also useful for audit filtering and general resource classification. ## Tag endpoints | Method | Endpoint | Description | |---|---|---| | | [`/tags`](/api-reference/tags/create-tag) | Attach a tag to a resource | | | [`/tags`](/api-reference/tags/delete-tag) | Remove a tag from a resource | | | [`/tags`](/api-reference/tags/list-tags) | List tags for a specific resource | | | [`/tags/org`](/api-reference/tags/list-org-tags) | List every distinct tag in the organisation | ## Tag format Tags must match `[a-z0-9][a-z0-9._:-]{0,62}` - 1 to 63 characters, starting with an alphanumeric character. Tags are case-insensitive and stored lowercase. --- # Attach a tag Source: https://docs.cryptflare.com/api-reference/tags/create-tag POST /tags - attach a tag to a workspace, environment, pod, or secret. # Attach a tag Attaches a free-form tag to a resource. Tags are normalised to lowercase. Attempting to attach a tag that already exists on the resource returns `409 TAG_ALREADY_EXISTS`. --- ## Required permission --- ## Request --- --- # Remove a tag Source: https://docs.cryptflare.com/api-reference/tags/delete-tag DELETE /tags - remove a tag from a resource. # Remove a tag Removes a tag from a resource. Returns `404 TAG_NOT_FOUND` if the tag is not applied to the resource. --- ## Required permission --- ## Request --- --- # List org tags Source: https://docs.cryptflare.com/api-reference/tags/list-org-tags GET /tags/org - list every distinct tag used in the organisation. # List org tags Returns every distinct tag string used across the organisation, sorted alphabetically. This powers the tag autocomplete in the vault dashboard and policy builder. --- ## Required permission --- ## Request --- --- # List tags for a resource Source: https://docs.cryptflare.com/api-reference/tags/list-tags GET /tags - list all tags attached to a specific resource. # List tags for a resource Returns every tag attached to a given workspace, environment, pod, or secret. 
Pass the resource type and ID as query parameters. --- ## Required permission --- ## Request --- --- # API Tokens Source: https://docs.cryptflare.com/api-reference/tokens Create, update, toggle, and revoke workspace-scoped API tokens for programmatic access. # API Tokens API tokens give you programmatic access to CryptFlare. Every token is **scoped to a single workspace** and tied to the user who created it - when that user leaves the organisation, their tokens are automatically revoked. Token format: `cf_live_...` (production) or `cf_test_...` (development). Only the first 12 characters are ever readable after creation; the full secret is returned exactly once in the create response and then stored as a salted hash. API tokens are tied to a user. For organisation-owned tokens suitable for CI / CD, Terraform providers, or shared service accounts, use [Service Tokens](/api-reference/service-tokens) instead. This page is an **overview** - every endpoint is documented on its own page. ## Endpoints | Method | Endpoint | Description | |---|---|---| | | [`/tokens`](/api-reference/tokens/list-tokens) | List every API token in the organisation | | | [`/tokens`](/api-reference/tokens/create-token) | Create a token and return the one-shot secret | | | [`/tokens/:tokenId`](/api-reference/tokens/update-token) | Update name and / or scopes | | | [`/tokens/:tokenId/toggle`](/api-reference/tokens/toggle-token) | Enable or disable a token (reversible kill-switch) | | | [`/tokens/:tokenId`](/api-reference/tokens/revoke-token) | Permanently revoke a token | ## Available scopes | Scope | What it grants | |---|---| | `secrets:list` | List secret metadata (no values) | | `secrets:read` | Decrypt and return secret values | | `secrets:write` | Create and rotate secrets | | `secrets:delete` | Permanently delete secrets | | `secrets:rotate` | Rotate existing secrets (narrower than `secrets:write`) | | `mcp:use` | Call tools via [`mcp.cryptflare.com`](/security/mcp-access) for AI agents 
(Claude, Cursor, Zed) | Scopes are additive and do not imply one another - a token with `secrets:read` cannot list, a token with `secrets:list` cannot read values. Always grant the narrowest set your caller actually needs. MCP access is opt-in: new tokens start without `mcp:use` even if their creator has the role permission. ## Authentication precedence Every inbound request hits a single auth resolver that inspects the credential presented in the `Authorization` header (or the session cookie) and routes to the matching auth type. The resolved actor and organisation are attached to the request, permissions are evaluated, and the matching rate limiter is applied before the handler runs.

```mermaid
flowchart TD
    Resolver -->|"cf_sess_ cookie"| Session
    Resolver -->|"cf_svc_ bearer"| Service
    Resolver -->|"cf_live_ bearer"| Personal
    Session --> Context
    Service --> Context
    Personal --> Context
    Context --> RateLimit
    RateLimit --> Handler
```

Session cookies carry the fullest permission set (dashboard users), service tokens are organisation-owned and unaffected by member churn, and personal access tokens inherit the creator's current role at evaluation time. --- # Create an API token Source: https://docs.cryptflare.com/api-reference/tokens/create-token POST /tokens - generates a new API token. The full secret is returned exactly once. # Create an API token Generates a new API token and returns the full secret **exactly once** in the response body. CryptFlare only stores a salted hash, so if you lose the value you'll need to revoke the token and create a new one. The `token` field in the response is the only time you will ever see the plaintext secret. Write it to your vault / secret manager / CI env before you close the tab. --- ## Required permission --- ## Request --- --- # List API tokens Source: https://docs.cryptflare.com/api-reference/tokens/list-tokens GET /tokens - returns every API token in the organisation. Token secrets are never exposed.
# List API tokens Returns every API token registered in the organisation. Only the first 12 characters of each token are exposed via `tokenPrefix` - the full secret is stored as a salted hash and can never be re-read. --- ## Required permission --- ## Request --- --- # Revoke a token Source: https://docs.cryptflare.com/api-reference/tokens/revoke-token DELETE /tokens/:tokenId - permanently deletes a token. Cannot be undone. # Revoke a token Permanently deletes a token row. The token's hash is removed from the database immediately and every request presenting it starts failing with `401`. If you want a reversible kill-switch instead, use [Toggle a token](/api-reference/tokens/toggle-token). Revoked tokens cannot be recovered. The audit trail retains the token's ID and prefix for forensic review, but the token itself is gone. --- ## Required permission --- ## Request --- --- # Enable or disable a token Source: https://docs.cryptflare.com/api-reference/tokens/toggle-token POST /tokens/:tokenId/toggle - disable a token without permanently revoking it (or re-enable it). # Enable or disable a token Flips a token's `disabled` flag. A disabled token cannot authenticate but still exists in the database, so you can re-enable it later without losing its audit history. Use this as a kill-switch during an incident when revoke-and-recreate would lose too much context. --- ## Required permission --- ## Request --- --- # Update an API token Source: https://docs.cryptflare.com/api-reference/tokens/update-token PATCH /tokens/:tokenId - update the name and / or scopes of an existing token. # Update an API token Updates a token's display name and / or permission scopes. The token secret, workspace binding, and environment (`live` / `test`) cannot be changed - revoke and recreate the token if you need to move it or rotate the secret. 
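The one-shot secret model these token pages describe - plaintext returned once, only a salted hash and the first 12 characters retained - can be sketched as follows. The salted SHA-256 construction and record layout here are assumptions, not CryptFlare's actual scheme:

```python
import hashlib
import os
import secrets

# Minimal sketch of the one-shot token model: store only a salted hash
# plus the first 12 characters (tokenPrefix), and return the plaintext
# exactly once at creation. Hash/salt choices are assumptions.

def create_token(env: str = "live") -> tuple[str, dict]:
    plaintext = f"cf_{env}_{secrets.token_hex(20)}"
    salt = os.urandom(16)
    record = {
        "tokenPrefix": plaintext[:12],   # the only readable part after creation
        "salt": salt.hex(),
        "hash": hashlib.sha256(salt + plaintext.encode()).hexdigest(),
    }
    return plaintext, record             # plaintext is shown exactly once

def verify_token(presented: str, record: dict) -> bool:
    """Re-derive the salted hash and compare in constant time."""
    digest = hashlib.sha256(bytes.fromhex(record["salt"]) + presented.encode()).hexdigest()
    return secrets.compare_digest(digest, record["hash"])
```

Because only the hash survives, a lost secret can never be recovered - hence the revoke-and-recreate guidance above.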
--- ## Required permission --- ## Request --- --- # Usage Source: https://docs.cryptflare.com/api-reference/usage Check organisation resource usage, remaining quota, and plan limits. # Usage The Usage API returns real-time resource consumption for your organisation. Use it to monitor limits in CI / CD pipelines, dashboards, or alerting systems. Returns current usage, plan limits, remaining quota, and percentage for every resource type. Available to any authenticated member - no special permission required. --- ## Required permission --- ## Request --- ## Example usage ### CI / CD quota check Before deploying, check if you have headroom to create more secrets: ```bash title="Bash" #!/bin/bash ORG_ID="your-org-id" TOKEN="cf_live_..." REMAINING=$(curl -s \ -H "Authorization: Bearer $TOKEN" \ "https://api.cryptflare.com/v1/organisations/$ORG_ID/usage" \ | jq '.usage.secrets.remaining') if [ "$REMAINING" -eq 0 ]; then echo "Error: Secret limit reached" exit 1 fi echo "Secrets remaining: $REMAINING" ``` ### Monitoring and alerting ```javascript title="JavaScript" async function checkQuota(orgId, token) { const res = await fetch( `https://api.cryptflare.com/v1/organisations/${orgId}/usage`, { headers: { Authorization: `Bearer ${token}` } } ); const { usage } = await res.json(); for (const [resource, data] of Object.entries(usage)) { if (data.percentage && data.percentage > 80) { console.warn(`${resource} at ${data.percentage}% capacity`); } } } ``` ```python title="Python" import requests def check_quota(org_id: str, token: str): res = requests.get( f"https://api.cryptflare.com/v1/organisations/{org_id}/usage", headers={"Authorization": f"Bearer {token}"}, ) for resource, data in res.json()["usage"].items(): if isinstance(data, dict) and data.get("percentage", 0) > 80: print(f"Warning: {resource} at {data['percentage']}%") ``` --- --- # Workspaces Source: https://docs.cryptflare.com/api-reference/workspaces Create, list, and delete workspaces - the top-level project 
container inside an organisation. # Workspaces Workspaces are the project-level container that sits between an organisation and its environments. A typical setup has one workspace per application or service, each containing `development`, `staging`, and `production` environments where secrets actually live. This page is an **overview** - every endpoint is documented on its own page. Use the sidebar or the table below to jump straight to a specific operation. ## Endpoints | Method | Endpoint | Description | |---|---|---| | | [`/workspaces`](/api-reference/workspaces/list-workspaces) | List every workspace in an organisation | | | [`/workspaces`](/api-reference/workspaces/create-workspace) | Create a workspace | | | [`/workspaces/:ws`](/api-reference/workspaces/get-workspace) | Get details for one workspace (by ID or slug) | | | [`/workspaces/:ws`](/api-reference/workspaces/delete-workspace) | Delete a workspace and cascade every resource it owns | ## Where workspaces fit ``` organisation └── workspace (this resource) └── environment ├── pod (optional folder) └── secret ``` - **Organisation:** billing + member container - **Workspace:** project-level grouping, owns environments - **Environment:** isolated secret store (`production`, `staging`, ...) - **Pod:** optional hierarchical folder inside an environment - **Secret:** the encrypted key-value pair itself --- # Create a workspace Source: https://docs.cryptflare.com/api-reference/workspaces/create-workspace POST /workspaces - creates a new workspace within an organisation. # Create a workspace Creates a new workspace inside the organisation. A workspace is a named project container; it owns environments, which in turn own secrets and pods. --- ## Required permission --- ## Request --- --- # Delete a workspace Source: https://docs.cryptflare.com/api-reference/workspaces/delete-workspace DELETE /workspaces/:ws - permanently deletes the workspace and every environment, pod, secret, and token it owns. 
# Delete a workspace Permanently deletes a workspace and every environment, pod, secret, service token, and rotation policy scoped to it. The deletion runs in a single cascade inside one transaction. There is no soft delete and no grace period. If you need to temporarily hide a workspace instead, suspend the tokens scoped to it or move its secrets into a different workspace first. --- ## Required permission --- ## Request --- --- # Get a workspace Source: https://docs.cryptflare.com/api-reference/workspaces/get-workspace GET /workspaces/:ws - returns details for a single workspace. Can look up by ID or slug. # Get a workspace Returns details for a single workspace. The `:ws` parameter accepts either the workspace ID or its slug, so dashboards can deep-link from friendly URLs without maintaining their own slug → ID table. --- ## Required permission --- ## Request --- --- # List workspaces Source: https://docs.cryptflare.com/api-reference/workspaces/list-workspaces GET /workspaces - returns all workspaces in an organisation. # List workspaces Returns every workspace in the organisation. Workspaces group environments by project, so this is typically the first call a dashboard makes after resolving the organisation. --- ## Required permission --- ## Request --- --- # Complete API reference Source: https://docs.cryptflare.com/changelog/2026-04-08-api-reference The API reference now covers every endpoint with interactive documentation. 
### Documented endpoints - **Authentication** - Login OTP, verify, current user, logout - **Secrets** - List, create, reveal, rotate, move, delete (with pod support) - **Pods** - List, get, create, update, delete - **Organisations** - List, create, get, update, delete, members, invite, role change, remove - **Workspaces** - List, create, get, delete, environments ### Reference pages - **Errors** - All 52 error codes with HTTP status mapping - **Pagination** - Offset/limit patterns with filtering - **Rate Limits** - Sliding window and daily quota details with Mermaid flow diagrams ### API Playground Each endpoint has a "Try it" button that opens a full-screen API client with: - Authenticated requests using your session - Organisation/workspace/environment selectors from your real data - Live request/response viewer with syntax highlighting - Endpoint sidebar for quick navigation --- # CLI device flow authentication Source: https://docs.cryptflare.com/changelog/2026-04-08-cli-device-flow The CryptFlare CLI now authenticates via a browser-based device flow. Run `cf auth login`, approve in your browser, and the CLI automatically receives a scoped API key. No tokens to copy. ### How it works 1. Run `cf auth login` in your terminal 2. The CLI opens your browser with a one-time code 3. Approve the request in the vault dashboard 4. 
The CLI saves the API key automatically ### What's new - Device flow endpoints: `/v1/cli/device`, `/v1/cli/token`, `/v1/cli/approve` - Browser approval page with code verification, permission preview, and token details - Upfront code validation (expired/already used detection before approval) - API key created on behalf of the user with scoped permissions ```bash cf auth login # -> Opens browser # -> Approve in vault dashboard # -> CLI authenticated automatically ``` --- # Dark mode Source: https://docs.cryptflare.com/changelog/2026-04-08-dark-mode The documentation site now supports dark mode with three options: light, system (follows OS preference), and dark. The theme selector is in the footer. Full Material Design 3 dark color tokens are applied across all components, code blocks, tables, and diagrams. Mermaid charts automatically switch between light and dark themes. --- # Health endpoint and status indicator Source: https://docs.cryptflare.com/changelog/2026-04-08-health-status New health check endpoint and a reusable status indicator component. ### Health endpoint `GET /v1/health` checks database and KV store connectivity, returns overall status (ok/degraded) with response time. No authentication required. ### Status indicator A shared UI component (`StatusBanner`) polls the health endpoint and displays a pulsing dot with the current status. Available in `@cryptflare/ui` for use across all apps. Currently shown in the docs site footer. - Green pulse: All systems operational - Amber pulse: Partial degradation - Red pulse: Major outage - Labels are i18n-aware --- # Pods - organize secrets into folders Source: https://docs.cryptflare.com/changelog/2026-04-08-pods-support Pods are hierarchical folders for organizing secrets within an environment. Group related secrets together with up to 5 levels of nesting. 
### API - `GET/POST /pods` - List and create pods - `GET/PATCH/DELETE /pods/:pod` - Read, update, delete individual pods - `PATCH /secrets/:key/move` - Move secrets between pods - `POST /secrets` now accepts optional `podId` - `GET /secrets` supports `?podId` filtering ### CLI - `cf pod list`, `get`, `create`, `update`, `delete` - `cf pod tree` - View full hierarchy as an ASCII tree - `cf secret set --pod` - Create secrets inside a pod - `cf secret move --pod` - Move secrets between pods ### Vault App - Pod browsing with breadcrumb navigation - Create, rename, and delete pods - Drag secrets between pods --- # Terraform Provider Source: https://docs.cryptflare.com/changelog/2026-04-08-terraform-provider Manage your CryptFlare secrets infrastructure as code with the official Terraform provider. Create workspaces, environments, pods, and secrets - all version-controlled and repeatable. ### Resources - `cryptflare_workspace` - Manage workspaces within your organisation - `cryptflare_environment` - Create environments (production, staging, development) - `cryptflare_secret` - Store encrypted secrets with AES-256-GCM - `cryptflare_pod` - Organize secrets into hierarchical folders ### Data Sources - `cryptflare_workspace` / `cryptflare_workspaces` - Look up existing workspaces - `cryptflare_secret` - Read secrets for use in other providers ### Features - Import support on all resources - Sensitive value handling for secrets - Variable validation with `sensitive()`, `format()`, `for_each` - Automated releases with conventional commits and GPG signing - Full documentation on the [Terraform Registry](https://registry.terraform.io/providers/BuunGroup-IaC/cryptflare/latest) ```hcl resource "cryptflare_secret" "database_url" { workspace_id = cryptflare_workspace.backend.id environment_id = cryptflare_environment.production.id key = "DATABASE_URL" value = var.database_url pod_id = cryptflare_pod.databases.id } ``` --- # CLI authentication Source: 
https://docs.cryptflare.com/cli/authentication How the CryptFlare CLI authenticates using the device authorization flow # CLI authentication The CryptFlare CLI uses a **device authorization flow** to authenticate. You run a command in your terminal, approve it in your browser, and the CLI automatically receives a scoped API key. No tokens to copy, no passwords to type.

```mermaid
sequenceDiagram
    CLI->>API: cryptflare auth login
    API-->>CLI: device_code, user_code, verification_url
    CLI->>Browser: Open verification URL with user code
    loop Poll for approval
        CLI->>API: Exchange device_code for token
        API-->>CLI: AUTH_PENDING
    end
    Browser->>App: User signs in and reviews scopes
    App->>API: Approve user_code
    API-->>App: Approval recorded
    CLI->>API: Exchange device_code for token
    API-->>CLI: API key, user, organisation
    CLI->>CLI: Save key to ~/.cryptflare/config
```

The polling loop and the browser approval run in parallel, so the CLI receives the key as soon as you click "Authorize CLI" in the browser. ## How it works ``` Terminal API Browser | | | | cf auth login | | |------------------------->| | | POST /v1/cli/device | | |<-------------------------| | | deviceCode + userCode | | | | | | Opens browser --------->|------------------------->| | | /cli/auth?code=ABCD-EFGH| | | | | Polling... | User logs in and | | POST /v1/cli/token | clicks "Authorize" | |------------------------->| | | "AUTH_PENDING" |<-------------------------| | | POST /v1/cli/approve | | Polling... | | | POST /v1/cli/token | | |------------------------->| | |<-------------------------| | | apiKey + user + org | | | | | | Saves to config | | | Done!
| | ```

### Step by step

1. You run `cf auth login` in your terminal
2. The CLI calls `POST /v1/cli/device` to get a **device code** (for polling) and a **user code** (for display)
3. The CLI shows the code and opens your browser to `vault.cryptflare.com/cli/auth?code=ABCD-EFGH`
4. In the browser, you log in (if not already) and see the authorization page with your code
5. You click **"Authorize CLI"** - the browser calls `POST /v1/cli/approve` with your user code
6. Back in the terminal, the CLI has been polling `POST /v1/cli/token` every 5 seconds. Once approved, the API creates a scoped API token and returns it
7. The CLI saves the API key to `~/.config/cryptflare/config.json`

## What you see in the terminal

```bash
CryptFlare CLI
────────────────────────────────────
Authorize this device

Code: ABCD-EFGH
URL: https://vault.cryptflare.com/cli/auth?code=ABCD-EFGH

✓ Browser opened automatically
⠹ Waiting for browser approval...
✓ Authenticated successfully
────────────────────────────────────
Key: cf_live_abc1...cdef
User: jane@acme.com
Org: org_xyz789
Saved to: ~/.config/cryptflare/config.json
```

## What you see in the browser

The authorization page shows:

- The **one-time code** matching what the CLI displayed - verify they match
- **Who you are signed in as** (email)
- A security notice explaining what the CLI will have access to
- An **"Authorize CLI"** button to approve
- A **"Cancel"** button to deny

After approval, the page confirms the token was created and shows:

- The scopes the CLI token has access to
- A link to the **API tokens page** where you can view or revoke the token

## What the token can do

The CLI token is a real API key (starting with `cf_live_`) scoped to your first organisation and workspace.
It has the following permissions:

| Resource | Permissions |
|----------|-------------|
| Secrets | List, read, write, rotate, delete |
| Workspaces | Read, create |
| Environments | Read, create |
| Tokens | Read |

The token does **not** have access to:

- Billing or subscription management
- Member management (invite, remove, role changes)
- Organisation settings (update, delete)
- Audit log export

## Token lifecycle

### Storage

The API key is saved to your config file:

| OS | Path |
|----|------|
| Linux | `~/.config/cryptflare/config.json` |
| macOS | `~/Library/Preferences/cryptflare/config.json` |
| Windows | `%APPDATA%/cryptflare/config.json` |

### Expiry

CLI tokens do not expire by default. They remain valid until you revoke them.

### Revoking

You can revoke a CLI token in two ways:

```bash
# Remove the local key (does not revoke on server)
cf auth logout
```

Or revoke it server-side from the vault dashboard:

1. Go to **Settings > API Tokens**
2. Find the token named `CLI (your@email.com)`
3. Click **Revoke**

Server-side revocation immediately invalidates the key. The CLI will get `401` errors until you run `cf auth login` again.

## CI/CD environments

The device flow requires a browser, so it's designed for developer workstations. For CI/CD pipelines, use the `CF_TOKEN` environment variable instead:

```bash
# In GitHub Actions
env:
  CF_TOKEN: ${{ secrets.CF_TOKEN }}

# In your script
cf secret list -w my-app -e production
```

Create a dedicated CI token in the vault dashboard with only the scopes your pipeline needs.
## Security considerations

- The **device code expires after 5 minutes** - if not approved, the flow must restart
- The **user code is 8 characters** (e.g., `ABCD-EFGH`) - short enough to verify visually
- The CLI **polls every 5 seconds** - it does not receive a callback
- The **API key is a real token** stored as a hash on the server - the full value only exists in your config file
- Only approve requests **you initiated** - if you see a code you don't recognize, click Cancel
- The token is scoped to a **single organisation** - it cannot access other organisations you belong to

## Troubleshooting

### "Device code expired"

The code is valid for 5 minutes. If the CLI shows this error, run `cf auth login` again.

### "User has no organisations"

You need to complete onboarding and create an organisation before using the CLI. Log in at the vault dashboard first.

### Browser doesn't open

If the CLI can't open your browser, manually open the URL shown in the terminal. The code is the same.

### Already authenticated

If you're already logged in, the CLI shows your existing key and asks you to logout first:

```bash
cf auth logout  # Remove local key
cf auth login   # Start fresh
```

---

# Command reference

Source: https://docs.cryptflare.com/cli/commands

Complete reference for all CryptFlare CLI commands

# Command reference

Full reference for every `cf` command. All commands support `--json` for machine-readable output and `--help` for usage info.

---

## cf auth login

Authenticate the CLI via browser-based device flow. Opens your browser, you approve, and the CLI automatically receives an API key.

```bash
cf auth login
# -> Opens browser
# -> Approve in vault dashboard
# -> CLI saves API key to ~/.config/cryptflare/config.json
```

```
✓ Authenticated successfully
Key: cf_live_abc123...cdef
User: jane@acme.com
Saved: ~/.config/cryptflare/config.json
```

If you already have a key saved, the CLI will ask before replacing it.
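The waiting step behind this command can be sketched as a polling loop. Here `request_token` stands in for `POST /v1/cli/token`; the 5-second interval, 5-minute device-code TTL, and `AUTH_PENDING` response come from the docs, while everything else about the response shape is assumed:

```python
import time

# Illustrative polling loop for the device flow: exchange the device code
# every 5 seconds until the browser approval lands or the 5-minute TTL
# passes. `request_token` is a stand-in for POST /v1/cli/token.

POLL_INTERVAL_S = 5
DEVICE_CODE_TTL_S = 5 * 60

def poll_for_token(request_token, sleep=time.sleep):
    """Poll until approval arrives or the device code expires."""
    deadline = time.monotonic() + DEVICE_CODE_TTL_S
    while time.monotonic() < deadline:
        result = request_token()      # POST /v1/cli/token with the device code
        if result != "AUTH_PENDING":
            return result             # the scoped API key
        sleep(POLL_INTERVAL_S)
    raise TimeoutError("Device code expired - run `cf auth login` again")
```

Injecting `sleep` keeps the sketch testable; the real CLI would simply block between polls.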
## cf auth status Show authentication status, including the masked key, source, and user email. ```bash cf auth status ``` ``` ✓ Authenticated Key: cf_live_abc1...cdef Source: config file User: jane@acme.com Organisations: Acme Corp (pro) - owner ``` ``` Authenticated as jane@acme.com Name: Jane Smith Organisations: Acme Corp (pro) - owner ``` ## cf auth logout Clear stored credentials from the config file. ```bash cf auth logout ``` --- ## cf secret list List all secret keys in an environment. Values are not shown. ```bash cf secret list -w my-app -e production ``` ``` KEY VERSION UPDATED DATABASE_URL v3 2h ago API_KEY v1 3d ago ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--pod` | `-p` | Filter by pod ID. Use `"root"` for root-level secrets only. | | `--json` | | Output as JSON | | `--quiet` | `-q` | Minimal output | ## cf secret set Create a new secret. If the key already exists, the value is rotated (version incremented). Optionally place it in a pod. ```bash # Create at root level cf secret set DATABASE_URL "postgres://user:pass@db/mydb" -w my-app -e production # Create inside a pod cf secret set STRIPE_KEY "sk_live_..." -w my-app -e production --pod pod_abc123 ``` ``` ✓ Set DATABASE_URL (version 1) ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--pod` | `-p` | Pod ID to place the secret in | | `--json` | | Output as JSON | ## cf secret get Retrieve and decrypt a secret value. 
```bash cf secret get DATABASE_URL -w my-app -e production ``` ``` DATABASE_URL (v3) postgres://user:pass@db/mydb ``` With `--quiet`, outputs only the value (for scripting): ```bash DB=$(cf secret get DATABASE_URL -w my-app -e production -q) ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | | `--quiet` | `-q` | Output value only | ## cf secret rotate Rotate a secret to a new value, incrementing the version. ```bash cf secret rotate DATABASE_URL --value "postgres://user:newpass@db/mydb" -w my-app -e production ``` ``` ✓ Rotated DATABASE_URL to version 4 ``` | Flag | Short | Description | |------|-------|-------------| | `--value` | | New secret value (required) | | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | ## cf secret delete Permanently delete a secret and all its version history. ```bash cf secret delete DATABASE_URL -w my-app -e production --yes ``` Prompts for confirmation unless `--yes` is passed. | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--yes` | `-y` | Skip confirmation prompt | ## cf secret move Move a secret into a pod, or back to root level. 
```bash # Move into a pod cf secret move DATABASE_URL --pod pod_abc123 -w my-app -e production # Move back to root level cf secret move DATABASE_URL --pod root -w my-app -e production ``` ``` ✓ Moved DATABASE_URL to pod pod_abc123 ``` | Flag | Short | Description | |------|-------|-------------| | `--pod` | | Target pod ID, or `root` to move to root level (required) | | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | --- ## cf pod list List pods at a given level. Omit `--parent` for root-level pods. ```bash # Root-level pods cf pod list -w my-app -e production # Sub-pods inside a specific pod cf pod list -w my-app -e production --parent pod_abc123 ``` ``` NAME SLUG DESCRIPTION ID Databases databases DB connections pod_abc123 Services services - pod_def456 ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--parent` | `-p` | Parent pod ID (omit for root) | | `--json` | | Output as JSON | | `--quiet` | `-q` | Minimal output | ## cf pod get Show pod details with the full breadcrumb path. ```bash cf pod get pod_ghi789 -w my-app -e production ``` ``` Stripe Path: Services / Stripe Slug: stripe Desc: Stripe payment integration ID: pod_ghi789 ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | ## cf pod create Create a new pod. Supports nesting up to 5 levels deep. 
```bash # Root-level pod cf pod create -n "Databases" -s databases -w my-app -e production # Nested pod cf pod create -n "Postgres" -s postgres -w my-app -e production --parent pod_abc123 # With description cf pod create -n "Stripe" -s stripe -w my-app -e production --parent pod_def456 -d "Stripe payment secrets" ``` ``` ✓ Created pod Databases (databases) ``` | Flag | Short | Description | |------|-------|-------------| | `--name` | `-n` | Pod name (required) | | `--slug` | `-s` | URL-safe slug (required) | | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--parent` | `-p` | Parent pod ID for nesting | | `--description` | `-d` | Pod description | | `--json` | | Output as JSON | ## cf pod update Update a pod's name, slug, or description. ```bash cf pod update pod_abc123 -n "Database Credentials" -w my-app -e production ``` | Flag | Short | Description | |------|-------|-------------| | `--name` | `-n` | New name | | `--slug` | `-s` | New slug | | `--description` | `-d` | New description | | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | ## cf pod delete Delete an empty pod. The pod must have no secrets or sub-pods inside it. ```bash cf pod delete pod_abc123 -w my-app -e production --yes ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--yes` | `-y` | Skip confirmation | ## cf pod tree Show the full pod and secret hierarchy as an ASCII tree. ```bash cf pod tree -w my-app -e production ``` ``` production/ ├── API_KEY ├── databases/ │ ├── DATABASE_URL │ └── REDIS_URL └── services/ ├── SENDGRID_KEY └── stripe/ ├── STRIPE_SECRET_KEY └── STRIPE_WEBHOOK_SECRET ``` Pods are shown in yellow with a trailing `/`. Secrets are shown in cyan. 
The tree is fetched recursively from the API. | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--json` | | Output root pods and secrets as JSON | --- ## cf run Run a command with all secrets from an environment injected as environment variables. ```bash cf run -w my-app -e development -- node server.js ``` Existing environment variables are preserved. Use `--override` to let secrets overwrite existing values. | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--override` | | Overwrite existing env vars | ## cf env Export secrets in various formats. ```bash # Shell export cf env -w my-app -e production -f shell # → export DATABASE_URL="postgres://..." # Dotenv cf env -w my-app -e production -f dotenv # → DATABASE_URL=postgres://... # JSON cf env -w my-app -e production -f json ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--format` | `-f` | Output format: `shell`, `dotenv`, `json` (default: dotenv) | --- ## cf org list List organisations you belong to. The active organisation is marked with a bullet. ```bash cf org list ``` | Flag | Description | |------|-------------| | `--json` | Output as JSON | ## cf org select Set the active organisation. Pass an ID directly or omit for a list. ```bash cf org select org_xyz789 ``` ## cf org current Print the active organisation ID. ```bash cf org current ``` --- ## cf workspace list List all workspaces in the active organisation. 
```bash cf workspace list ``` | Flag | Short | Description | |------|-------|-------------| | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | Alias: `cf ws list` ## cf workspace create Create a new workspace. ```bash cf workspace create -n "Backend API" -s backend-api ``` | Flag | Short | Description | |------|-------|-------------| | `--name` | `-n` | Workspace name (required) | | `--slug` | `-s` | URL-safe slug (required) | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | ## cf workspace delete Delete a workspace and all its environments and secrets. ```bash cf workspace delete my-app --yes ``` | Flag | Short | Description | |------|-------|-------------| | `--org` | `-o` | Organisation ID | | `--yes` | `-y` | Skip confirmation | --- ## cf environment list List environments in a workspace. ```bash cf environment list -w my-app ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug (required) | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | ## cf environment create Create a new environment. ```bash cf environment create -n "Staging" -s staging -w my-app ``` | Flag | Short | Description | |------|-------|-------------| | `--name` | `-n` | Environment name (required) | | `--slug` | `-s` | URL-safe slug (required) | | `--workspace` | `-w` | Workspace ID or slug (required) | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | --- ## cf token list List API tokens for the active organisation. ```bash cf token list ``` | Flag | Short | Description | |------|-------|-------------| | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | ## cf token create Generate a new API token. The full token is shown once and cannot be retrieved again. ```bash cf token create -n "CI Deploy" -w my-app -s secrets:read ``` ``` ✓ Created token CI Deploy Token: cf_live_abc123def456... Save this token now. It will not be shown again. 
``` | Flag | Short | Description | |------|-------|-------------| | `--name` | `-n` | Token name (required) | | `--workspace` | `-w` | Workspace to scope the token to (required) | | `--scope` | `-s` | Permission scope, repeatable (required) | | `--expires` | | Expiry date (ISO 8601) | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | ## cf token revoke Permanently revoke an API token. ```bash cf token revoke tkn_abc123 --yes ``` | Flag | Short | Description | |------|-------|-------------| | `--org` | `-o` | Organisation ID | | `--yes` | `-y` | Skip confirmation | --- ## cf config list Show all configuration. ```bash cf config list ``` | Flag | Description | |------|-------------| | `--json` | Output as JSON | ## cf config get Get a configuration value. ```bash cf config get defaults.workspace ``` ## cf config set Set a configuration value. ```bash cf config set defaults.workspace my-app cf config set defaults.environment development ``` ## cf config unset Remove a configuration value. ```bash cf config unset defaults.workspace ``` --- # Auth commands Source: https://docs.cryptflare.com/cli/commands/auth Authenticate and manage CLI sessions # Auth commands Manage authentication for the CryptFlare CLI. ## cf auth login Authenticate via the browser-based device flow. Opens your browser, you approve, and the CLI automatically receives an API key. ```bash cf auth login ``` ```bash CryptFlare CLI ──────────────────────────────────── Authorize this device Code: ABCD-EFGH URL: https://vault.cryptflare.com/cli/auth?code=ABCD-EFGH ✓ Browser opened automatically ⠹ Waiting for browser approval... ✓ Authenticated successfully ──────────────────────────────────── Key: cf_live_abc1...cdef User: jane@acme.com Org: org_xyz789 Saved to: ~/.config/cryptflare/config.json ``` If you already have a key saved, the CLI will ask you to logout first. See [CLI Authentication](/cli/authentication) for the full flow details. 
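The `Key:` line in the output above is masked - the CLI shows only the prefix and the last few characters. If you want the same masking when logging keys from your own scripts, here is a minimal POSIX shell sketch; the `mask_key` helper is illustrative and not part of the CLI:

```shell
# Mask an API key for display: keep the first 12 characters
# (the cf_live_/cf_test_ prefix plus a few more) and the last 4,
# eliding everything in between.
mask_key() {
  prefix=$(printf '%.12s' "$1")   # first 12 characters
  suffix=${1#"${1%????}"}         # last 4 characters
  printf '%s...%s\n' "$prefix" "$suffix"
}

mask_key "cf_live_abc1234567890cdef"
# → cf_live_abc1...cdef
```

This only changes how the key is displayed; the full key still lives in the config file or `CF_TOKEN`.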
--- ## cf auth status Show authentication status, including the masked key, source, and user email. ```bash cf auth status ``` ``` CryptFlare CLI ──────────────────────────────────────────────── ● Authenticated Key: cf_live_abc1...cdef Source: config file User: jane@acme.com Organisations: ● Acme Corp (pro) - owner ``` --- ## cf auth logout Remove the saved API key from the config file. ```bash cf auth logout ``` ``` ✓ API key removed ``` This removes the local key. To also revoke it server-side, go to Settings > API Tokens in the vault dashboard. --- # Environment commands Source: https://docs.cryptflare.com/cli/commands/environment Inject secrets, export in various formats, and manage environments # Environment commands Inject secrets into running processes and export in various formats. ## cf run Run a command with all secrets from an environment injected as environment variables. ```bash cf run -w my-app -e development -- node server.js ``` Existing environment variables are preserved. Use `--override` to let secrets overwrite existing values. | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--override` | | Overwrite existing env vars | --- ## cf env Export secrets in various formats. ```bash # Shell export format cf env -w my-app -e production -f shell # → export DATABASE_URL="postgres://..." # Dotenv format cf env -w my-app -e production -f dotenv # → DATABASE_URL=postgres://... # JSON format cf env -w my-app -e production -f json ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--format` | `-f` | Output format: `shell`, `dotenv`, `json` (default: dotenv) | --- ## cf environment list List environments in a workspace. 
```bash cf environment list -w my-app ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace (required) | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | --- ## cf environment create Create a new environment. ```bash cf environment create -n "Staging" -s staging -w my-app ``` | Flag | Short | Description | |------|-------|-------------| | `--name` | `-n` | Environment name (required) | | `--slug` | `-s` | URL-safe slug (required) | | `--workspace` | `-w` | Workspace (required) | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | --- # Pod commands Source: https://docs.cryptflare.com/cli/commands/pods Organize secrets into hierarchical folders from the CLI # Pod commands Pods are folders for organizing secrets within an environment. They support up to 5 levels of nesting. ## cf pod list List pods at a given level. Omit `--parent` for root-level pods. ```bash # Root-level pods cf pod list -w my-app -e production # Sub-pods inside a pod cf pod list -w my-app -e production --parent pod_abc123 ``` ``` NAME SLUG DESCRIPTION ID Databases databases DB connections pod_abc123 Services services - pod_def456 ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--parent` | `-p` | Parent pod ID (omit for root) | | `--json` | | Output as JSON | | `--quiet` | `-q` | Minimal output | --- ## cf pod get Show pod details with the full breadcrumb path. 
```bash cf pod get pod_ghi789 -w my-app -e production ``` ``` Stripe Path: Services / Stripe Slug: stripe Desc: Stripe payment integration ID: pod_ghi789 ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | --- ## cf pod create Create a new pod. Supports nesting up to 5 levels deep. ```bash # Root-level pod cf pod create -n "Databases" -s databases -w my-app -e production # Nested pod cf pod create -n "Postgres" -s postgres -w my-app -e production --parent pod_abc123 # With description cf pod create -n "Stripe" -s stripe -w my-app -e production \ --parent pod_def456 -d "Stripe payment secrets" ``` ``` ✓ Created pod Databases (databases) ``` | Flag | Short | Description | |------|-------|-------------| | `--name` | `-n` | Pod name (required) | | `--slug` | `-s` | URL-safe slug (required) | | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--parent` | `-p` | Parent pod ID for nesting | | `--description` | `-d` | Pod description | | `--json` | | Output as JSON | --- ## cf pod update Update a pod's name, slug, or description. ```bash cf pod update pod_abc123 -n "Database Credentials" -w my-app -e production cf pod update pod_abc123 -d "All database connection strings" -w my-app -e production ``` | Flag | Short | Description | |------|-------|-------------| | `--name` | `-n` | New name | | `--slug` | `-s` | New slug | | `--description` | `-d` | New description | | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | --- ## cf pod delete Delete an empty pod. The pod must have no secrets or sub-pods inside it. 
```bash cf pod delete pod_abc123 -w my-app -e production --yes ``` Move or delete all contents first, then delete the pod. | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--yes` | `-y` | Skip confirmation | --- ## cf pod tree Show the full pod and secret hierarchy as an ASCII tree. ```bash cf pod tree -w my-app -e production ``` ``` production/ ├── API_KEY ├── databases/ │ ├── DATABASE_URL │ └── REDIS_URL └── services/ ├── SENDGRID_KEY └── stripe/ ├── STRIPE_SECRET_KEY └── STRIPE_WEBHOOK_SECRET ``` Pods are shown with a trailing `/`. The tree is fetched recursively from the API. | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--json` | | Output root pods and secrets as JSON | --- # Resource commands Source: https://docs.cryptflare.com/cli/commands/resources Manage organisations, workspaces, tokens, and config # Resource commands Manage organisations, workspaces, API tokens, and CLI configuration. ## cf org list List organisations you belong to. The active organisation is marked. ```bash cf org list ``` | Flag | Description | |------|-------------| | `--json` | Output as JSON | ## cf org select Set the active organisation. ```bash cf org select org_xyz789 ``` ## cf org current Print the active organisation ID. ```bash cf org current ``` --- ## cf workspace list List all workspaces in the active organisation. ```bash cf workspace list ``` Alias: `cf ws list` | Flag | Short | Description | |------|-------|-------------| | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | ## cf workspace create Create a new workspace. 
```bash cf workspace create -n "Backend API" -s backend-api ``` | Flag | Short | Description | |------|-------|-------------| | `--name` | `-n` | Workspace name (required) | | `--slug` | `-s` | URL-safe slug (required) | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | ## cf workspace delete Delete a workspace and all its environments and secrets. ```bash cf workspace delete my-app --yes ``` | Flag | Short | Description | |------|-------|-------------| | `--org` | `-o` | Organisation ID | | `--yes` | `-y` | Skip confirmation | --- ## cf token list List API tokens for the active organisation. ```bash cf token list ``` | Flag | Short | Description | |------|-------|-------------| | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | ## cf token create Generate a new API token. The full token is shown once. ```bash cf token create -n "CI Deploy" -w my-app -s secrets:read ``` ``` ✓ Created token CI Deploy Token: cf_live_abc123def456... Save this token now. It will not be shown again. ``` | Flag | Short | Description | |------|-------|-------------| | `--name` | `-n` | Token name (required) | | `--workspace` | `-w` | Workspace scope (required) | | `--scope` | `-s` | Permission scope, repeatable (required) | | `--expires` | | Expiry date (ISO 8601) | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | ## cf token revoke Permanently revoke an API token. ```bash cf token revoke tkn_abc123 --yes ``` | Flag | Short | Description | |------|-------|-------------| | `--org` | `-o` | Organisation ID | | `--yes` | `-y` | Skip confirmation | --- ## cf config list Show all configuration. ```bash cf config list ``` ## cf config get Get a configuration value. ```bash cf config get defaults.workspace ``` ## cf config set Set a configuration value. ```bash cf config set defaults.workspace my-app cf config set defaults.environment development ``` ## cf config unset Remove a configuration value. 
```bash cf config unset defaults.workspace ``` --- # Secret commands Source: https://docs.cryptflare.com/cli/commands/secrets Create, read, rotate, move, and delete secrets from the CLI # Secret commands Manage encrypted secrets within an environment. ## cf secret list List all secret keys in an environment. Values are not shown. ```bash cf secret list -w my-app -e production ``` ``` KEY VERSION UPDATED DATABASE_URL v3 2h ago API_KEY v1 3d ago STRIPE_SECRET v2 1w ago ``` Filter by pod: ```bash # Secrets in a specific pod cf secret list -w my-app -e production --pod pod_abc123 # Root-level secrets only cf secret list -w my-app -e production --pod root ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--pod` | `-p` | Filter by pod ID. Use `"root"` for root-level only. | | `--json` | | Output as JSON | | `--quiet` | `-q` | Minimal output | --- ## cf secret set Create a new secret. If the key already exists, the value is rotated. ```bash # Create at root level cf secret set DATABASE_URL "postgres://user:pass@db/mydb" -w my-app -e production # Create inside a pod cf secret set STRIPE_KEY "sk_live_..." -w my-app -e production --pod pod_abc123 ``` ``` ✓ Set DATABASE_URL (version 1) ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--pod` | `-p` | Pod ID to place the secret in | | `--json` | | Output as JSON | --- ## cf secret get Retrieve and decrypt a secret value. 
```bash cf secret get DATABASE_URL -w my-app -e production ``` ``` DATABASE_URL (v3) postgres://user:pass@db/mydb ``` With `--quiet`, outputs only the value: ```bash DB=$(cf secret get DATABASE_URL -w my-app -e production -q) ``` | Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | | `--quiet` | `-q` | Output value only | --- ## cf secret rotate Rotate a secret to a new value, incrementing the version. ```bash cf secret rotate DATABASE_URL --value "postgres://user:newpass@db/mydb" -w my-app -e production ``` ``` ✓ Rotated DATABASE_URL to version 4 ``` | Flag | Short | Description | |------|-------|-------------| | `--value` | | New secret value (required) | | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | --- ## cf secret move Move a secret into a pod, or back to root level. ```bash # Move into a pod cf secret move DATABASE_URL --pod pod_abc123 -w my-app -e production # Move back to root cf secret move DATABASE_URL --pod root -w my-app -e production ``` ``` ✓ Moved DATABASE_URL to pod pod_abc123 ``` | Flag | Short | Description | |------|-------|-------------| | `--pod` | | Target pod ID, or `root` for root level (required) | | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--json` | | Output as JSON | --- ## cf secret delete Permanently delete a secret and all its version history. ```bash cf secret delete DATABASE_URL -w my-app -e production --yes ``` Prompts for confirmation unless `--yes` is passed. 
| Flag | Short | Description | |------|-------|-------------| | `--workspace` | `-w` | Workspace ID or slug | | `--env` | `-e` | Environment ID or slug | | `--org` | `-o` | Organisation ID | | `--yes` | `-y` | Skip confirmation | --- # Configuration Source: https://docs.cryptflare.com/cli/configuration Configure the CryptFlare CLI defaults, credentials, and output preferences # Configuration The CLI stores configuration in a JSON file managed by the `conf` package. You can set defaults to avoid repeating common flags. ## Config file Location: `~/.config/cryptflare/config.json` ```json { "token": "cf_live_abc123...", "org": "org_xyz789", "defaults": { "workspace": "my-app", "environment": "development" } } ``` | Key | Description | |-----|-------------| | `token` | API token (set by `cf auth login`) | | `org` | Active organisation ID (set by `cf org select`) | | `defaults.workspace` | Default workspace for secret commands | | `defaults.environment` | Default environment for secret commands | ## Managing config ```bash # View all config cf config list # Get a value cf config get defaults.workspace # Set a value cf config set defaults.workspace my-app cf config set defaults.environment development # Remove a value cf config unset defaults.workspace ``` ## Environment variables Environment variables override the config file. Useful for CI/CD where you don't want a config file on disk. | Variable | Overrides | Example | |----------|-----------|---------| | `CF_TOKEN` | `token` | `cf_live_abc123...` | | `CF_ORG` | `org` | `org_xyz789` | | `CF_WORKSPACE` | `defaults.workspace` | `my-app` | | `CF_ENVIRONMENT` | `defaults.environment` | `production` | | `CF_API_URL` | API base URL | `http://localhost:5488` | | `NO_COLOR` | Disables colored output | `1` | ## Credential resolution The CLI checks credentials in this order (first match wins): 1. `CF_TOKEN` environment variable 2. 
`token` in config file (set by `cf auth login`)

## Context resolution

For commands that need `--workspace` and `--env`:

1. Flag on the command (`-w`, `-e`)
2. Environment variable (`CF_WORKSPACE`, `CF_ENVIRONMENT`)
3. Config file (`defaults.workspace`, `defaults.environment`)

If none are set, the command exits with an error message explaining what to pass.

## Config file paths

| OS | Path |
|----|------|
| Linux | `~/.config/cryptflare/config.json` |
| macOS | `~/Library/Preferences/cryptflare/config.json` |
| Windows | `%APPDATA%/cryptflare/config.json` |

The exact path depends on the `conf` package's platform detection. Run `cf config list` to see the resolved path.

## Resetting

```bash
# Remove a specific key
cf config unset token

# Clear everything
cf config unset token
cf config unset org
cf config unset defaults
```

Or delete the file directly:

```bash
rm ~/.config/cryptflare/config.json
```

---

# CLI
Source: https://docs.cryptflare.com/cli/overview

Manage CryptFlare secrets from your terminal

# CryptFlare CLI

The CryptFlare CLI (`cf`) lets you manage secrets, workspaces, and environments from your terminal. It works with any CI/CD system, supports JSON output for scripting, and uses the same API as the vault dashboard.

Every command follows the same lifecycle: the CLI resolves your config and auth token, attaches them to an HTTPS request, and renders the API response in your terminal.

```mermaid
flowchart LR
    Config --> CLI
    Auth --> CLI
    CLI --> Request
    Request --> API
    API --> Render
```

Flags beat environment variables, which beat the config file, so any command can be run with zero stored state by passing everything inline.

## Install

```bash
npm install -g @cryptflare/cli
```

Or with other package managers:

```bash
pnpm add -g @cryptflare/cli
yarn global add @cryptflare/cli
```

Verify the installation:

```bash
cf --version
```

## Quick start

```bash
# 1. Authenticate (opens browser, creates API key automatically)
cf auth login

# 2. Store a secret
cf secret set DATABASE_URL "postgres://localhost/mydb" -w my-app -e production

# 3. Organize with pods
cf pod create -n "Databases" -s databases -w my-app -e production
cf secret move DATABASE_URL --pod pod_abc123 -w my-app -e production

# 4. View the hierarchy
cf pod tree -w my-app -e production

# 5. Inject secrets into a command
cf run -w my-app -e development -- node server.js
```

## Authentication

`cf auth login` uses a device authorization flow. It opens your browser, you approve the request, and the CLI automatically receives an API key.

```bash
cf auth login
# -> Opens browser to vault.cryptflare.com/cli/auth?code=ABCD-EFGH
# -> You log in (if not already) and click "Authorize CLI"
# -> CLI receives an API key and saves it automatically
# -> Key stored at ~/.config/cryptflare/config.json
```

This will:

1. Call `POST /v1/cli/device` to get a device code
2. Open your browser to the vault app with the code
3. You authenticate normally (OTP or SSO) and click **"Authorize CLI"**
4. The CLI picks up the API key and saves it to config

The generated API key has scopes for secrets, workspaces, environments, and tokens. It does not expire by default.

### Credential resolution

The CLI resolves your API key in this order:

1. `CF_TOKEN` environment variable (highest priority, recommended for CI/CD)
2. Config file (`~/.config/cryptflare/config.json`)

```bash
# Option A: Device flow (recommended for development)
cf auth login

# Option B: Environment variable (recommended for CI/CD)
export CF_TOKEN=cf_live_abc123...
cf secret list -w my-app -e production
```

API keys start with `cf_live_` (production) or `cf_test_` (development).

## Setting defaults

Avoid repeating flags by setting defaults:

```bash
cf config set defaults.workspace my-app
cf config set defaults.environment development
```

Now these are equivalent:

```bash
cf secret list --workspace my-app --env development
cf secret list
```

Flags always override defaults.
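The precedence chain just described (explicit flag, then environment variable, then config default) can be sketched as a tiny resolver. This illustrates the ordering only - `resolve_workspace` is a hypothetical helper, not how the CLI is actually implemented:

```shell
# Resolve the workspace using the CLI's precedence order:
# 1. explicit --workspace flag value
# 2. CF_WORKSPACE environment variable
# 3. defaults.workspace from the config file
resolve_workspace() {
  flag_value="$1"        # value passed via -w/--workspace (may be empty)
  config_default="$2"    # defaults.workspace from config (may be empty)
  if [ -n "$flag_value" ]; then
    echo "$flag_value"
  elif [ -n "${CF_WORKSPACE:-}" ]; then
    echo "$CF_WORKSPACE"
  else
    echo "$config_default"
  fi
}

CF_WORKSPACE=ci-app
resolve_workspace "" "my-app"           # → ci-app (env var beats config)
resolve_workspace "other-app" "my-app"  # → other-app (flag beats both)
```

The same ordering applies to the API key (`CF_TOKEN` before the config file) and the environment (`CF_ENVIRONMENT` before `defaults.environment`).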
## Output modes Every command supports three output modes: ```bash # Default: human-readable, colored table cf secret list -w my-app -e production # JSON: machine-readable cf secret list -w my-app -e production --json # Quiet: values only cf secret get API_KEY -w my-app -e production --quiet ``` ## Environment injection Inject all secrets as environment variables into any command: ```bash cf run -w my-app -e development -- node server.js ``` Or export in various formats: ```bash # Shell export eval $(cf env -w my-app -e production -f shell) # Write .env file cf env -w my-app -e production -f dotenv > .env # JSON cf env -w my-app -e production -f json ``` ## CI/CD ### GitHub Actions ```yaml - name: Install CryptFlare CLI run: npm install -g @cryptflare/cli - name: Deploy with secrets env: CF_TOKEN: ${{ secrets.CF_TOKEN }} run: | eval $(cf env -w my-app -e production -f shell) npm run deploy ``` ### Docker ```dockerfile FROM node:20-alpine RUN npm install -g @cryptflare/cli CMD ["sh", "-c", "eval $(cf env -w my-app -e production -f shell) && node server.js"] ``` ## Global flags | Flag | Short | Description | |------|-------|-------------| | `--json` | | Output as JSON | | `--quiet` | `-q` | Minimal output | | `--version` | `-V` | Show version | | `--help` | `-h` | Show help | --- # Meet Cipher Source: https://docs.cryptflare.com/getting-started/cipher CryptFlare's AI assistant that helps you manage secrets, understand policies, and navigate the platform. # Meet Cipher Born from the intersection of cryptography and intelligence, Cipher was created to make secrets management effortless. Named after the fundamental building block of encryption - the cipher - it embodies CryptFlare's core mission: making security simple, accessible, and invisible. > Cipher never has access to your secret values. It can help you find, organise, and manage secrets, but it cannot read or display plaintext data. ## Personality Cipher isn't a generic chatbot. 
It has a distinct personality shaped by the world of cryptography and security engineering. - **Concise and direct** - No filler, no fluff. Every word earns its place. - **Security-first** - Always suggests the secure path and warns about risky patterns. Vigilant by default. - **Technically precise** - Uses correct terminology. Knows the difference between encryption and hashing, authentication and authorisation. - **Honestly transparent** - Says "I don't know" rather than guessing. Directs you to the right resource when unsure. - **Friendly but professional** - Like a senior engineer who remembers what it was like to be new. Technical without being intimidating. ## What Cipher can do - **Search secrets** - Find secrets by describing what you need, not just exact key names. Ask "find all Stripe keys" and Cipher locates them across workspaces. - **Explain policies** - Ask "what permissions does the developer role have?" or "who can access production secrets?" and get a clear breakdown. - **Guide workflows** - Step-by-step help with rotation, BYOK setup, SSO configuration, team policies, and more. - **Audit insights** - Ask "who accessed the database credentials this week?" for natural language audit queries. - **Platform help** - Questions about encryption, billing, API usage, CLI commands, or anything in the docs. ## What Cipher will never do - Read, display, or generate secret values, API keys, or passwords - Execute actions on your behalf (create secrets, change permissions, modify billing) - Access data outside your organisation's scope - Provide advice that would weaken your security posture - Store or remember conversations across sessions ## How to use Cipher Cipher is available in the vault dashboard. Click the chat icon in the bottom-right corner to open a conversation. Open the vault at [vault.cryptflare.com](https://vault.cryptflare.com) and sign in. Click the **Cipher** icon in the bottom-right corner of any page. 
Type your question or pick one of the suggested prompts to get started.

### Example conversations

**You:** Where are my Stripe API keys?

**Cipher:** Found 3 secrets matching "Stripe" across your workspaces:

- `STRIPE_SK_LIVE` in Backend / production
- `STRIPE_PK_LIVE` in Backend / production
- `STRIPE_SK_TEST` in Backend / development

**You:** Can developers delete production secrets?

**Cipher:** No. The developer role has `secrets:read` and `secrets:write` permissions but not `secrets:delete`. Only managers and owners can delete secrets.

If a developer needs to delete a secret, they can request elevated access via a just-in-time access grant.

**You:** How do I rotate a secret without downtime?

**Cipher:** To rotate a secret safely:

1. Go to the secret in the vault UI
2. Click **Rotate** - this creates a new version
3. The previous version stays accessible during rollout
4. Update your application to use the new value
5. Once confirmed, the old version is archived

You can also rotate via CLI: `cf secret rotate MY_SECRET --value <new-value> -w <workspace> -e production`

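The rotate-then-verify portion of that workflow can be wrapped in a small script around the CLI. A hedged sketch, assuming `cf` is installed and authenticated - the `rotate_and_verify` wrapper and the hard-coded workspace/environment flags are illustrative, not a prescribed workflow:

```shell
# Rotate a secret, then read it back before rolling out,
# so a failed write never reaches your deploy step.
rotate_and_verify() {
  key="$1"
  new_value="$2"
  cf secret rotate "$key" --value "$new_value" -w my-app -e production || return 1
  current=$(cf secret get "$key" -w my-app -e production -q)
  [ "$current" = "$new_value" ]   # non-zero exit if the read-back differs
}

# Usage (run your deploy only after verification succeeds):
#   rotate_and_verify DATABASE_URL "postgres://user:newpass@db/mydb" \
#     && npm run deploy
```

Because the previous version stays accessible during rollout, running the deploy after a successful verification gives you the zero-downtime behaviour described above.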
## Under the hood

Cipher is powered by CryptFlare's multi-provider AI infrastructure, routed through Cloudflare's AI Gateway for caching, rate limiting, and observability. It uses lightweight models optimised for speed and cost efficiency - not large language models that take seconds to respond.

```mermaid
sequenceDiagram
    UI->>Cipher: Send user message
    Cipher->>Cipher: Build system prompt + org context
    Cipher->>LLM: Call model with tools schema
    LLM-->>Cipher: Tool call request (e.g. listSecrets)
    Cipher->>API: Invoke tool with caller scope
    API-->>Cipher: Tool result (metadata only, no values)
    Cipher->>LLM: Return tool result to model
    LLM-->>Cipher: Final reply token stream
    Cipher-->>UI: Stream reply to chat surface
```

Cipher never exchanges secret values with the model; tools return metadata scoped to the caller's RBAC view, and the final reply is streamed back token by token for a responsive feel.

Cipher uses the same tool registry that powers [`mcp.cryptflare.com`](/security/mcp-access), so new capabilities ship to both the in-app assistant and external MCP clients at the same time. Cipher inherits your session permissions instead of a bearer token, which is why it does not require the `mcp:use` scope.

### How Cipher keeps your data safe

| Protection | How it works |
|---|---|
| **No secret access** | Cipher only sees key names, descriptions, and metadata. Encrypted values never enter the AI pipeline. |
| **RBAC enforcement** | Cipher respects your role-based access control. It only surfaces information you have permission to view. |
| **Audit trail** | Every AI interaction is logged in your organisation's audit log with actor, timestamp, and token usage. |
| **No training** | Your conversations are not stored permanently or used to train AI models. |
| **Gateway isolation** | All AI calls route through Cloudflare's AI Gateway with caching and rate limiting. No third-party AI providers see your data. |

## Plan availability

Cipher is included on all plans.
AI token usage counts toward your plan's daily AI allowance. Token usage resets daily at midnight UTC. Cached responses (repeated questions) do not consume tokens.

---

# Pricing
Source: https://docs.cryptflare.com/getting-started/pricing

CryptFlare plans and pricing - from free to team

# Pricing

Choose the plan that fits your team. All plans include AES-256-GCM encryption, audit logging, and the full API.

## Free

Get started with no commitment. Perfect for personal projects and small experiments.

- 1 organisation
- 1 workspace with up to environments
- secrets
- API requests per day
- version history per secret
- days audit log retention

## Pro - /month

For growing teams that need more capacity and longer history.

- Unlimited organisations
- workspaces with environments each
- secrets
- API requests per day
- team members
- version history per secret
- days audit log retention

## Team - /month

For larger teams with enterprise requirements.

- Unlimited organisations, workspaces, environments, and secrets
- API requests per day
- team members
- Unlimited version history
- days audit log retention
- Enterprise SSO (OIDC/SAML) - coming soon

## All plans include

- **AES-256-GCM encryption** at rest for all secrets
- **TLS 1.2+** encryption in transit
- **Role-based access control** with 6 roles
- **Audit logging** for all actions
- **CLI** with device flow authentication
- **REST API** with OpenAPI documentation
- **Pods** for hierarchical secret organization
- **Secret versioning** with rotation support

## FAQ

**Can I change plans at any time?**
Yes. Upgrading takes effect immediately. Downgrading takes effect at the end of your billing period. If you exceed the new plan's limits, you'll need to reduce usage before the downgrade completes.

**What happens if I exceed my API request limit?**
The API returns a `429` status with a `Retry-After` header. Requests resume at midnight UTC. You'll receive a warning header at 80% usage.

**Do you offer annual billing?**
Not yet. We plan to add annual billing with a discount in the future.
**Is there a free trial?** We don't offer trials, but you can start on the Free plan and upgrade when you need more capacity. There's no lock-in.

**What payment methods do you accept?** We accept all major credit cards via Stripe. Enterprise invoicing is available on the Team plan.

**Can I self-host CryptFlare?** CryptFlare is a managed platform built on a global edge network. Self-hosting is not available. Your secrets are encrypted with AES-256-GCM and we cannot read them.

---

# Quickstart

Source: https://docs.cryptflare.com/getting-started/quickstart

Get up and running with CryptFlare in under 5 minutes.

# Quickstart

Get your first secret stored and retrieved in under 5 minutes. Before you run the first command, here is the shape of the system you are about to talk to: clients at the top, one API layer in the middle, regional + global storage underneath.

Every client calls the same API layer; your secrets land in the regional store that your organisation is pinned to, while auth and billing live in the global cluster. You can also connect AI agents like Claude, Cursor, or Zed to your vault via `mcp.cryptflare.com` - see [MCP access](/security/mcp-access) for the `mcp:use` permission gate.

## Prerequisites

Before you begin, make sure you have:

- A CryptFlare account (sign up at [vault.cryptflare.com](https://vault.cryptflare.com))
- Node.js 18 or later
- A terminal with `npm` or `pnpm` available

## Install the CLI

Install the CryptFlare CLI globally:

```bash
npm install -g @cryptflare/cli
```

Verify the installation:

```bash
cf --version
```

## Authenticate

Run the login command - this opens your browser:

```bash
cf auth login
```

Approve the request in your browser. The CLI automatically receives an API key.
Verify you're authenticated:

```bash
cf auth status
```

## Store your first secret

Add a secret to your workspace's `production` environment:

```bash
cf secret set DATABASE_URL "postgres://localhost:5432/mydb" -w my-app -e production
```

Retrieve the secret to verify it was stored:

```bash
cf secret get DATABASE_URL -w my-app -e production
```

You should see:

```
DATABASE_URL (v1)
postgres://localhost:5432/mydb
```

## Use in your application

### Environment injection

The easiest way to use secrets is to inject them as environment variables. Run your application with secrets injected:

```bash
cf run -w my-app -e production -- node server.js
```

All secrets from the environment are set as environment variables before your command runs. Or export secrets to a `.env` file:

```bash
cf env -w my-app -e production -f dotenv > .env
```

### SDK usage

For programmatic access, use the SDK:

```typescript
import { CryptFlare } from '@cryptflare/sdk';

const cf = new CryptFlare({ token: process.env.CF_TOKEN });

const secret = await cf.secrets.get('DATABASE_URL', {
  workspace: 'my-app',
  environment: 'production',
});

console.log(secret.value);
```

## Next steps

- Learn about [environments](/secrets/environments) to separate dev, staging, and production secrets
- Organise secrets with [pods](/api-reference/pods) (folders)
- Explore the full [CLI reference](/cli/commands/secrets)
- Read about [encryption](/security/encryption) to understand how your secrets are protected

---

# Dynamic secrets with AWS IAM (AssumeRole)

Source: https://docs.cryptflare.com/guides/dynamic-secrets/aws

Use AWS STS AssumeRole to mint short-lived IAM credentials on demand with session policies and scoped durations

# Dynamic secrets with AWS IAM (AssumeRole)

This guide walks through creating the AWS IAM identities that CryptFlare uses to mint short-lived AWS credentials via `sts:AssumeRole`. The setup has two sides:

1.
A **root** IAM user (or assumed role) with permission to call `sts:AssumeRole` on the target role. CryptFlare authenticates to AWS as this identity to mint leases.
2. A **target** IAM role that the root identity assumes. Every CryptFlare lease produces a temporary credential set for this role. The role's permission policies and session policies determine what the lease can do inside your AWS account.

Unlike Azure, the AWS credentials CryptFlare hands out are **self-expiring STS tokens**. They cannot be revoked at AWS before their `DurationSeconds` expires - revocation at the CryptFlare side just removes the lease from our records. For safety, use session policies to further restrict what the credentials can do and set a conservative `maxTtlSeconds`.

## What CryptFlare does

- **At issue time** - signs a `POST sts.{region}.amazonaws.com AssumeRole` request with SigV4, using the root credentials. AWS returns a fresh `AccessKeyId` / `SecretAccessKey` / `SessionToken` triple valid for the requested `DurationSeconds`. CryptFlare returns these to the caller exactly once.
- **At revoke time** - no-op. AWS STS tokens cannot be invalidated before their natural expiry. CryptFlare still marks the lease `revoked` in the dashboard and audit log, and the workflow stops running, but the credential stays valid at AWS until its `Expiration` timestamp.
- **Never stores the issued credential** - it appears in the lease response exactly once and is then forgotten. CryptFlare only keeps the `AccessKeyId` as an audit correlation handle.

## Prerequisites

- An AWS account where you can create IAM users and IAM roles
- A CryptFlare organisation on the **Team plan**
- CryptFlare role with `dynamic_secrets:manage` (Owner or Manager by default)

## Setup

This is the role CryptFlare's leases will assume. Its policies determine what the lease credentials can do.

1. Sign in to the [AWS IAM console](https://console.aws.amazon.com/iam/home).
2. Go to **Roles > Create role**.
3.
**Trusted entity type**: `Custom trust policy`.
4. Paste the following trust policy, which grants `sts:AssumeRole` to a root user we will create in step 2:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::<account-id>:user/cryptflare-dynamic-root" },
    "Action": "sts:AssumeRole",
    "Condition": { "StringEquals": { "sts:ExternalId": "<external-id>" } }
  }]
}
```

Replace `<account-id>` with your 12-digit AWS account id and pick a random string for `<external-id>` (this protects against the [confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html) and you'll paste the same string into CryptFlare in step 4).
5. **Permissions**: attach whichever AWS managed policies (or your own customer-managed policies) describe what the lease credentials should be allowed to do. Start narrow - `ReadOnlyAccess` is a good first choice. You can always widen later.
6. **Role name**: `cryptflare-lease-reader` (or whatever describes its scope)
7. **Maximum session duration**: set to the longest TTL you want CryptFlare leases to request. Default is 1 hour. Max is 12 hours. CryptFlare will clamp its lease TTL to this value at issue time.
8. Create the role and copy the **Role ARN** from its summary page - you will need it in step 4.

CryptFlare authenticates to AWS as this user. Its only permission is `sts:AssumeRole` on the target role created in step 1.

1. Still in the IAM console, go to **Users > Create user**.
2. **User name**: `cryptflare-dynamic-root`. Match this exactly to the ARN you pasted into the trust policy in step 1.
3. **Permissions**: `Attach policies directly > Create policy` and paste this inline policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::<account-id>:role/cryptflare-lease-reader"
  }]
}
```

Replace `<account-id>` with your account id and `cryptflare-lease-reader` with the actual role name from step 1.
Note that this is **exactly one permission**, scoped to **exactly one role ARN** - the root user cannot assume anything else, cannot list roles, cannot call any other AWS API. Principle of least privilege. 4. Name the policy `CryptflareDynamicRootAssumeRole` and attach it to the user. 5. Finish creating the user. Still on the user page: 1. Go to **Security credentials > Access keys > Create access key**. 2. **Use case**: `Application running outside AWS`. Confirm the warning. 3. Optionally add a description tag. 4. **Copy the `Access key ID` and `Secret access key` immediately** - the secret is only shown once. Store both in your password manager briefly; you will paste them into CryptFlare in the next step. > Rotate this access key on a schedule (see the "Rotating the root access key" section below). It is a long-lived AWS credential, exactly the kind of thing dynamic secrets exist to eliminate further down the stack. Treat it with the same care as any other prod AWS root credential. Open the CryptFlare dashboard, navigate to **Dynamic Secrets**, and click **New configuration**. In the 4-step wizard: **Step 1 - Provider**: pick `AWS IAM (AssumeRole)`. **Step 2 - Configuration and root credentials**: | Field | Value | |---|---| | Configuration name | `aws-prod-reader` (any short identifier your team will recognise) | | Description | Free-form | | Region | The AWS region to sign STS requests for, e.g. `us-east-1` | | Role ARN | The full ARN of the target role from step 1, e.g. 
`arn:aws:iam::123456789012:role/cryptflare-lease-reader` | | External ID | The string you chose for `sts:ExternalId` in the trust policy | | Session policy | Optional JSON policy to further restrict the lease credentials beyond what the role's own policies allow - see below | | Access key ID | The access key ID from step 3 | | Secret access key | The secret access key from step 3 | **Step 3 - TTL policy and quotas**: | Field | Suggested value | Notes | |---|---|---| | Default TTL | 30 min | Used when a lease request does not specify a TTL | | Max TTL | 60 min | Hard cap per lease. Must be ≥ 15 min (the STS minimum) | | System max TTL | 12 h | Must not exceed the target role's "Maximum session duration" from step 1 | | Max concurrent leases | 50 | Total active leases for this config | | Max leases per identity | 5 | Per session / service token | **Step 4 - Review and create**. CryptFlare calls `STS GetCallerIdentity` with your root credentials to validate them before persisting. If the credentials are wrong or the root user lacks STS access, the wizard fails with the AWS error. From the configuration card on the Dynamic Secrets page, click **Issue lease**. Pick a TTL and click `Issue lease`. The credentials view shows: | Field | Value | |---|---| | `AWS_ACCESS_KEY_ID` | The STS-minted access key id (begins with `ASIA`) | | `AWS_SECRET_ACCESS_KEY` | The STS-minted secret | | `AWS_SESSION_TOKEN` | The STS session token - required for every call | | `AWS_REGION` | Your configured region | > **These credentials are shown exactly once.** CryptFlare does not store them. If you close the dialog without copying them, issue a fresh lease. 
Paste them into your terminal or CI environment:

```bash
export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>
export AWS_SESSION_TOKEN=<session-token>
export AWS_REGION=us-east-1

# Any AWS SDK / CLI picks these up automatically
aws sts get-caller-identity
```

See [Using dynamic secrets](/guides/dynamic-secrets/usage) for CI / Terraform / local-dev consumption patterns.

## Session policies

A session policy is an AWS-IAM JSON policy that is applied to the STS-minted credentials **in addition to** the target role's own policies. The effective permissions of the lease credential are the intersection of (a) the role's trust + permission policies and (b) the session policy, if supplied.

This is useful when one target role covers a broad area (e.g. `PowerUserAccess`) but a specific CryptFlare config should hand out narrower credentials. The role stays broad; the session policy does the scoping per config.

Example session policy that allows read-only access to a single S3 bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": [
      "arn:aws:s3:::my-bucket",
      "arn:aws:s3:::my-bucket/*"
    ]
  }]
}
```

Paste this into the "Session policy" field in step 4. CryptFlare passes it verbatim to AWS on every `AssumeRole` call.

## TTL notes specific to AWS

- **STS minimum `DurationSeconds` is 900 (15 minutes)**. If you set `defaultTtlSeconds` below 900, CryptFlare clamps the AWS-side duration up to 900. The lease's own TTL (tracked by CryptFlare) can still be shorter - the workflow will "revoke" the lease in our DB at the customer-configured TTL even though the credential remains valid at AWS for up to 15 min.
- **STS maximum is 43200 (12 hours)** or the target role's "Maximum session duration" setting, whichever is smaller.
- **Renewal produces a new credential**. AWS STS tokens are immutable - renewing a lease revokes the old token in our DB and mints a fresh one via a new `AssumeRole` call.
The response includes the new credential values, which callers must substitute for the old ones.

## Rotating the root access key

The root user's access key is a long-lived AWS credential. Rotate it on a schedule (AWS recommends every 90 days):

1. Generate a second access key on the root user (IAM allows 2 keys at once per user).
2. In CryptFlare, open the configuration and click **Edit**. Replace `accessKeyId` and `secretAccessKey` with the new values.
3. Issue a test lease to confirm the new root key works.
4. Disable and then delete the old access key in the IAM console.

Existing active leases are unaffected by root key rotation - they hold STS tokens minted with the old root but don't depend on it being current. The STS tokens continue working until their natural expiry.

## Troubleshooting

| Symptom | Cause | Fix |
|---|---|---|
| `AWS STS 403: InvalidClientTokenId` | Root access key is wrong, deleted, or belongs to a different account | Verify the access key id / secret in step 3, check the key is still active in IAM |
| `AWS STS 403: AccessDenied ... sts:AssumeRole` | Root user lacks the assume-role permission or the target role's trust policy does not list the root user as a principal | Re-check both policies in steps 1 and 2 - they must reference each other exactly |
| `AWS STS 400: Unable to validate the ExternalId` | The external ID in the CryptFlare config does not match the `sts:ExternalId` in the role's trust policy | Paste the same string into both places |
| `AWS STS 400: DurationSeconds exceeds MaxSessionDuration` | The lease TTL is longer than the target role's maximum session duration | Raise the role's "Maximum session duration" in IAM, or lower `maxTtlSeconds` in the CryptFlare config |
| `DYNAMIC_LEASE_QUOTA_EXCEEDED` | You've hit the concurrent or per-identity quota | Wait for leases to expire, revoke some, or raise the quotas on the config |

## Security properties

- **Root credentials encrypted at rest** with AES-256-GCM.
The key is derived per-config via HKDF with salt `dynamic_root_` from the platform master secret. With BYOK, the root key source is the organisation's customer-managed key instead. - **Lease credentials never leave the API worker** except in the response body to the requesting user. - **AWS STS tokens cannot be invalidated early** - they expire at their scheduled time regardless of what CryptFlare does. This is why session policies and short `maxTtlSeconds` matter: they are the main line of defense if a lease credential leaks. - **Audit logs** record `{configId, leaseId, expiresAt, externalId}` only - never the credential value. - **Tenant isolation** is enforced at every query layer by `organisation_id`. AWS-side isolation is enforced by AWS - one customer's role ARN is opaque to all other customers. ## Next steps - **[Using dynamic secrets](/guides/dynamic-secrets/usage)** - consumption patterns from CLI, CI, and Terraform - **[Dynamic Secrets API reference](/api-reference/dynamic-secrets)** - full endpoint documentation - **[Dynamic secrets with Azure Service Principals](/guides/dynamic-secrets/azure)** - same feature, different provider --- # Dynamic secrets with Azure Service Principals Source: https://docs.cryptflare.com/guides/dynamic-secrets/azure Register an Azure AD App Registration, grant the right Microsoft Graph permissions, and mint short-lived Azure credentials on demand # Dynamic secrets with Azure Service Principals This guide walks through registering an App Registration in Microsoft Entra ID (formerly Azure AD), granting it the right Microsoft Graph permissions, connecting it to CryptFlare, and issuing your first lease. Every lease produces a fresh `AZURE_CLIENT_ID / AZURE_CLIENT_SECRET / AZURE_TENANT_ID` that your code uses to authenticate against any Azure service the lease identity has been granted access to. ## Two modes, pick one The Azure provider supports **two strategies** for how lease credentials are minted. 
Pick one before you start - the setup is nearly identical, but Dynamic SP mode requires one extra Azure RBAC step. | | **Static SP** (default, recommended) | **Dynamic SP** | |---|---|---| | What each lease does | Mints a new client secret on a pre-existing App Registration you control | Creates a brand new App Registration + Service Principal + role assignments per lease | | `AZURE_CLIENT_ID` | Same across every lease (the root App's appId) | Unique per lease | | Issue latency | ~500ms | ~2-5 seconds + up to 30s propagation delay | | Revoke | `removePassword` by `keyId` | `DELETE /applications/{id}` - cascades to SP, passwords, role assignments | | Azure activity log | All leases attributed to the same SP | Per-lease attribution in activity logs | | Root RBAC on Azure resources | `Reader` / `Contributor` / etc. directly on the root App | `User Access Administrator` on target scopes (so it can delegate roles to minted SPs) | | Azure quotas | Unlimited | Soft ~50k App Registrations per tenant, app-creation rate-limited | | Best for | Most use cases - it's the Vault-compatible default | Compliance / forensic attribution, per-lease role variance | **If you're not sure, pick Static SP.** Dynamic SP is the compliance-oriented escape hatch - it's the right answer if your auditors specifically require per-lease identity in Azure activity logs or if different leases need different Azure roles. It's not the default and it's not what Vault uses out of the box. ## What CryptFlare does ### Static SP mode - **At issue time** - calls Microsoft Graph `POST /applications/{id}/addPassword` to add a new client secret to the pre-existing App Registration, with `endDateTime = lease_ttl + 5 minutes` baked in by Azure for defense in depth - **At revoke time** - calls `POST /applications/{id}/removePassword` with the `keyId` it stored at issue time - **Every lease returns the same `AZURE_CLIENT_ID`** - the root App's appId. Only `AZURE_CLIENT_SECRET` rotates. 
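In Static SP mode, the whole issue path is a single Graph call. As a sketch of what that call carries - the `graphAddPasswordBody` helper, buffer constant, and display-name convention are illustrative rather than CryptFlare's actual code, while the endpoint and `passwordCredential` body shape are Microsoft Graph's documented `addPassword` contract:

```typescript
// Sketch only: build the Graph addPassword request body for a Static SP
// lease. Helper name and naming convention are illustrative.
const EXPIRY_BUFFER_MS = 5 * 60 * 1000; // the "+ 5 minutes" Azure-side buffer

interface PasswordCredentialBody {
  passwordCredential: {
    displayName: string;
    endDateTime: string; // ISO 8601 - Azure expires the secret at this time
  };
}

function graphAddPasswordBody(
  leaseId: string,
  ttlSeconds: number,
  now: Date = new Date(),
): PasswordCredentialBody {
  const end = new Date(now.getTime() + ttlSeconds * 1000 + EXPIRY_BUFFER_MS);
  return {
    passwordCredential: {
      displayName: `cryptflare-lease-${leaseId}`,
      endDateTime: end.toISOString(),
    },
  };
}

// POST https://graph.microsoft.com/v1.0/applications/{objectId}/addPassword
// Graph returns { keyId, secretText, ... }: keyId is what revoke later
// passes to removePassword; secretText is shown to the caller once.
const body = graphAddPasswordBody("abc123", 1800, new Date("2025-01-01T00:00:00Z"));
console.log(body.passwordCredential.endDateTime); // 2025-01-01T00:35:00.000Z
```

Because the expiry is baked into the password credential itself, even a failed revoke call leaves a secret that Azure kills on schedule.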
### Dynamic SP mode

- **At issue time** - `POST /applications` to create a new Application, `POST /servicePrincipals` to create the SP, `POST /applications/{id}/addPassword` for the credential, then `PUT /roleAssignments/{uuid}` for each configured role assignment. Any failure after the Application is created triggers a best-effort `DELETE /applications/{id}` rollback.
- **At revoke time** - `DELETE /applications/{id}` which cascades to the SP, all passwords, and all role assignments
- **Every lease returns a unique `AZURE_CLIENT_ID`** - a fresh GUID that did not exist before the lease was issued

Both modes never store the issued credential - it appears in the lease response exactly once and is then forgotten.

## Prerequisites

- An Azure AD tenant where you can create App Registrations
- Permission to grant **admin consent** for Microsoft Graph application permissions in your tenant
- A CryptFlare organisation on the **Team plan**
- CryptFlare role with `dynamic_secrets:manage` (Owner or Manager by default)

## Setup

Sign in to the [Azure portal](https://portal.azure.com) and navigate to `Microsoft Entra ID > App registrations > New registration`.

| Field | Value |
|---|---|
| **Name** | `cryptflare-dynamic-root-prod` (this is the *root* App - in Static SP mode leases are minted under it; in Dynamic SP mode it creates the per-lease Apps) |
| **Supported account types** | Accounts in this organizational directory only (Single tenant) |
| **Redirect URI** | Leave blank |

Click `Register`.
On the Overview page, note these values - you will need all three later: | Field | Where | Destination in CryptFlare | |---|---|---| | **Application (client) ID** | Overview top section | `rootCredentials.clientId` | | **Directory (tenant) ID** | Overview top section | `providerConfig.tenantId` | | **Object ID** | Overview top section under "Essentials" | `providerConfig.appObjectId` | > **The Object ID is not the same as the Application ID.** Microsoft Graph uses the Object ID in the URL `/applications/:objectId/addPassword`, but the Application ID is what the App uses when authenticating to Azure. Both are UUIDs, both are visible on the Overview page, and mixing them up is the #1 footgun. Copy both and double-check. Still on the App Registration, go to `Certificates & secrets > Client secrets > New client secret`. | Field | Value | |---|---| | **Description** | `cryptflare-root` | | **Expires** | 24 months (this is the *root* - rotate rarely, leases are the short-lived ones) | Click `Add`. **Copy the `Value` column immediately** - not the `Secret ID` column. The Value is your root client secret and it is only shown once. Store it in your password manager temporarily; you will paste it into CryptFlare in the next step. > If you miss copying the Value, delete the secret and create a new one. There is no way to recover the Value after leaving the page. The root App needs Graph write access to create passwords (Static SP) or create whole new Applications (Dynamic SP). The simplest and most reliable choice for both modes is `Application.ReadWrite.All` - the same permission HashiCorp Vault's Azure secrets engine documents. Go to `API permissions > Add a permission > Microsoft Graph > Application permissions`. > **The Entra search box is broken for dotted identifiers.** Searching for `Application.ReadWrite.All` returns "No results" because Entra's permission picker only matches **display names**, not identifiers. 
Type **`Read and write all applications`** instead, or click **expand all** in the top-right and scroll.

Tick the `Read and write all applications` row, then click `Add permissions`. **Confirm you're on the "Application permissions" tab, not "Delegated permissions"**. They share the same name but delegated only works with signed-in users and will cause every lease issue to fail with `Authorization_RequestDenied`.

Back on the API permissions list, click the **Grant admin consent** button for your tenant and confirm. Verify the permission shows **Status: Granted for <your tenant>** (green check) with **Admin consent required: Yes**.

### Advanced: least-privilege alternative (Application.ReadWrite.OwnedBy)

If you want to constrain the root SP's blast radius to only App Registrations it explicitly owns, use `Application.ReadWrite.OwnedBy` instead. This is least-privilege but carries a sharp edge: the SPN must be **listed as an owner of every App Registration it manages**, otherwise `addPassword` returns `Authorization_RequestDenied` even though the permission is granted. The Entra portal's Owners page only accepts user principals, so you must add the SPN as an owner via [Azure Cloud Shell](https://portal.azure.com/#cloudshell) or [Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer):

```bash
# Cloud Shell - easiest. <app-id> is the root App's Application (client) ID.
SPN_OBJECT_ID=$(az ad sp show --id <app-id> --query id -o tsv)
az ad app owner add --id <app-id> --owner-object-id $SPN_OBJECT_ID
az ad app owner list --id <app-id>   # verify
```

Most users should stick with `Application.ReadWrite.All` - Vault does, and the ownership-dance workarounds exist only because Microsoft never surfaced app-ownership management in the Entra UI.

**Dynamic SP mode does not support `OwnedBy`** - it creates fresh Applications on every lease, and the SP cannot be an owner of an App that doesn't exist yet.

The Azure RBAC setup differs by mode. Do exactly one of the two paths below.
### Static SP mode Every lease uses the same root App's `clientId/clientSecret` to authenticate, so whatever RBAC roles you assign to **the root App** are the roles every lease inherits. 1. Open the **subscription**, resource group, or specific resource where leases will operate 2. Go to `Access control (IAM) > Role assignments > Add > Add role assignment` 3. Pick the role - `Reader`, `Storage Blob Data Contributor`, `Key Vault Secrets User`, etc. 4. Assign access to `User, group, or service principal` 5. Search for your root App Registration name and select it 6. Save > Grant the **most-restricted role** that satisfies your use case. Every lease inherits this scope - if you give the root App `Owner` on a subscription, every CryptFlare lease is a subscription owner credential. Start with `Reader` and widen only as needed. ### Dynamic SP mode Dynamic SP mode creates **new** Service Principals at issue time and assigns roles to them from the root App's own authority. The root App therefore needs TWO permissions at each target scope: 1. **The role you want leases to get** (e.g. `Reader`) - so the root App can delegate it 2. **`User Access Administrator`** - so the root App can create the role assignment in the first place. This is the minimal built-in role that grants `Microsoft.Authorization/roleAssignments/write`. `Owner` also works but grants far more than necessary. Do both assignments at the scope where you want leases to operate: 1. Open the **subscription** / resource group / resource 2. `Access control (IAM) > Role assignments > Add > Add role assignment` 3. Assign **`Reader`** (or whatever role your leases need) to the root App 4. Repeat and assign **`User Access Administrator`** to the root App `User Access Administrator` on a subscription lets the holder grant any role at that scope, including roles it does not itself hold. 
Only grant it on scopes where you accept that CryptFlare (or anyone with write access to your CryptFlare config) could theoretically elevate. For the tightest security, scope it narrowly (one resource group rather than the whole subscription).

You'll reference each target scope by its **ARM resource path** when you create the config in Step 5:

| Target | Scope path |
|---|---|
| A whole subscription | `/subscriptions/<subscription-id>` |
| A resource group | `/subscriptions/<subscription-id>/resourceGroups/<resource-group>` |
| A specific resource | `/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/<provider>/<resource-name>` |

Common built-in role definition IDs (you'll paste these verbatim into the CryptFlare config):

| Role | `roleDefinitionId` |
|---|---|
| Reader | `/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7` |
| Contributor | `/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c` |
| Storage Blob Data Reader | `/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1` |
| Key Vault Secrets User | `/providers/Microsoft.Authorization/roleDefinitions/4633458b-17de-408a-b874-0445c86b69e6` |

See the [Azure built-in roles reference](https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles) for the full list.

Navigate to **Dynamic Secrets** in the CryptFlare dashboard (sidebar under "Overview") and click **New configuration**. You'll see a 4-step wizard.

**Step 1 - Provider:** pick `Microsoft Azure Service Principal`.

**Step 2 - Azure SP mode + configuration and root credentials:** at the top of this step you'll see two mode cards. Pick the one you committed to at the start of this guide.
**If you picked Static SP**, fill in: | Field | Value | |---|---| | Configuration name | `azure-prod-reader` (any short identifier your team will recognise) | | Description | `Read-only Azure access for the prod subscription` | | Tenant ID | The Directory (tenant) ID from the Overview page | | App registration object ID | The **Object ID** (not the Application ID) | | Display name | Optional - returned to lease consumers as `AZURE_DISPLAY_NAME` | | Client ID | The Application (client) ID | | Client secret | The Value column from the client secret you generated | **If you picked Dynamic SP**, the wizard swaps `App registration object ID` for a role-assignments editor. Fill in: | Field | Value | |---|---| | Configuration name | `azure-prod-dynamic-reader` | | Description | `New SP per lease - Reader on prod subscription` | | Tenant ID | The Directory (tenant) ID | | Display name prefix | Optional - used to name minted Applications. Defaults to `cryptflare-lease`. | | Role assignments | Click **Add role assignment**. For each entry, paste the ARM scope path (from Step 4) into `Scope` and the role definition ID into `Role definition ID`. Add one row per scope-role pair you want every lease's SP to receive. | | Client ID | The root App's Application (client) ID | | Client secret | The Value column from the client secret you generated | **Step 3 - TTL policy and quotas:** pick sensible defaults for your use case. | Field | Suggested value | Notes | |---|---|---| | Default TTL | 30 min | Used when the caller does not request a specific TTL | | Max TTL | 60 min | Hard cap per lease | | System max TTL | 24 h | Org-wide ceiling (platform hard-caps at 24h) | | Max concurrent leases | 50 | Total active leases for this config | | Max leases per identity | 5 | Per session / service token | **Step 4 - Review and create:** confirm everything and click `Create configuration`. 
CryptFlare calls Microsoft Graph with your root credentials to validate the connection before persisting anything. If validation fails, you will see the exact Graph error message and nothing is saved. From the configuration card on the Dynamic Secrets page, click **Issue lease**. Pick a TTL in minutes and click `Issue lease`. The dialog will switch to a **credentials view** showing: | Field | Value (Static SP) | Value (Dynamic SP) | |---|---|---| | `AZURE_CLIENT_ID` | Your root App Registration's Application ID - same for every lease | A fresh per-lease Application ID - different every time | | `AZURE_CLIENT_SECRET` | A fresh secret valid for the TTL you chose + 5 minutes | Same | | `AZURE_TENANT_ID` | Your tenant ID | Same | | `AZURE_DISPLAY_NAME` | The display name you configured (optional) | The display name prefix + the lease id | > **These credentials are shown exactly once.** CryptFlare does not store them. If you close the dialog without copying them, issue a fresh lease - you will get new credentials, and the old ones will continue to be valid until their TTL expires (though you will have no way to use them). > **Dynamic SP propagation delay:** fresh Service Principals take **up to 30 seconds** to replicate across Azure AD and become visible to the management plane. If you use the lease credentials against Azure ARM immediately after issue, you may see `AuthorizationFailed` or "principal not found" for a few seconds. Retry with backoff. Static SP mode does not have this delay - use it for latency-sensitive workloads. Click the copy icon next to each value, paste into your terminal or CI environment, and use the credentials immediately. See [Using dynamic secrets](/guides/dynamic-secrets/usage) for common usage patterns. ## What your leases can do Every lease inherits the Azure RBAC roles assigned to the root App Registration in step 4. 
Some examples:

- **`Reader` on a subscription** → leases can list and read every resource but change nothing
- **`Storage Blob Data Contributor` on a storage account** → leases can read/write blobs in that account only
- **`Key Vault Secrets User` on a key vault** → leases can read secrets from that vault (cannot read keys or certificates)
- **`Contributor` on a resource group** → leases can fully manage every resource in that resource group

CryptFlare never grants more than you configured in Azure. The lease credentials are authenticated as the same App Registration, so Azure enforces the exact same RBAC you set up.

## Scoping patterns

### Per environment

Create one config per environment (`azure-prod-reader`, `azure-staging-contributor`, `azure-dev-owner`). Each config has its own root App Registration and its own role assignments in Azure. This is the recommended default.

### Per team

Create one config per team (`azure-platform-prod`, `azure-data-prod`). Attach service tokens to each team and grant `dynamic_secrets:issue` so only that team's CI can mint leases.

### Per workload

Create one config per workload that needs credentials (`azure-terraform-apply`, `azure-backup-job`). Restrict the role assignment to exactly the resources that workload touches.

## Rotating the root client secret

Azure client secrets expire (you picked 24 months in step 2). Rotate before the deadline:

1. Generate a new client secret on the same App Registration (step 2 again). Copy the Value.
2. In CryptFlare, open the configuration and click **Edit**. Update the `clientSecret` field with the new value. CryptFlare re-validates against Microsoft Graph before saving - if the new secret is wrong, the edit fails and the existing root stays in place.
3. Issue a test lease to confirm the new root credential works.
4. Delete the old client secret in Azure.
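The test-lease step has a useful manual equivalent: the root App is an ordinary client-credentials identity, so you can ask Entra for a Graph token with the new secret directly. A sketch - the `graphTokenRequest` helper is illustrative, but the endpoint and form fields are Azure AD's standard OAuth2 v2.0 client-credentials flow:

```typescript
// Sketch: build the standard Azure AD v2.0 client-credentials token
// request for the root App. A 200 response proves the new secret works.
function graphTokenRequest(tenantId: string, clientId: string, clientSecret: string) {
  const url = `https://login.microsoftonline.com/${tenantId}/oauth2/v2.0/token`;
  const form = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: clientId,
    client_secret: clientSecret,
    scope: "https://graph.microsoft.com/.default",
  });
  return { url, form };
}

// const { url, form } = graphTokenRequest(tenantId, clientId, newSecret);
// await fetch(url, { method: "POST", body: form }); // 200 => secret is live
```

A response containing `access_token` confirms the rotation; an `invalid_client` error usually means the secret was copied incorrectly (e.g. the `Secret ID` column instead of `Value`).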
> Existing active leases are unaffected by root secret rotation - each lease has its own short-lived secret that was minted with the old root but does not depend on the old root being current. The leases continue working until their TTLs expire naturally. ## Deleting a config Deleting a configuration from CryptFlare (dashboard button, API call, or `terraform destroy`) does three things in order: 1. **Disables** the config so no new leases can be issued during the drain 2. **Drains every active lease** by calling Microsoft Graph `removePassword` for each one, killing them at the upstream provider 3. **Hard-deletes the config row**, which cascades and wipes the lease history This is safe by default - you do not need to manually revoke leases first. The response includes the drain count so you know exactly how many credentials were torn down. ## Troubleshooting | Symptom | Cause | Fix | |---|---|---| | Entra permission picker shows "No results" when you type `Application.ReadWrite.All` | Entra search matches display names only, not dotted identifiers | Type `Read and write all applications` instead, or click **expand all** and scroll | | `Root credentials cannot read the App Registration` | `Application.ReadWrite.All` not granted, or granted as Delegated instead of Application, or admin consent missing | Grant under **Application permissions** and click Grant admin consent. 
Use the **Check permissions** button on the config edit page to re-verify | | `Root credentials can read but not write to the App Registration` | Read-only permission granted, OR using `OwnedBy` without listing the SPN as an owner | Switch to `Application.ReadWrite.All` (recommended) or add the SPN as an owner via Cloud Shell / Graph Explorer | | `App Registration not found - check appObjectId` (static_sp only) | You used the Application (client) ID instead of the Object ID | Copy the **Object ID** under "Essentials" on the App Registration Overview page | | `Root credentials rejected by Azure (401)` | The client secret is wrong, expired, or copied incorrectly | Generate a fresh client secret in Azure and PATCH the config's `rootCredentials` | | `Root credentials do not have permission to read role assignments at scope ` (dynamic_sp only) | Root App is missing `User Access Administrator` at that scope | Assign the root App `User Access Administrator` (or Owner) at the listed scope, then click **Check permissions** again | | `Scope not found: ` (dynamic_sp only) | Wrong or misspelled scope in `roleAssignments` | Verify the scope exists. Format: `/subscriptions/` or `/subscriptions//resourceGroups/` - lowercase `resourceGroups`, no trailing slash | | Lease credentials return "principal not found" for the first few seconds (dynamic_sp only) | Azure AD propagation delay - fresh SPs take up to 30s to replicate | Retry the Azure API call with backoff. 
For latency-sensitive workloads use `static_sp` mode | | `DYNAMIC_LEASE_QUOTA_EXCEEDED` | You have hit `maxConcurrentLeases` or `maxLeasesPerIdentity` | Wait for leases to expire, manually revoke some, or raise the quotas in the config | | `Effective TTL is Ns, below the minimum of 60s` | Your CryptFlare session or service token is about to expire | Refresh your session or use a long-lived service token for lease issuance | | Lease stuck in `irrevocable` state | Revoke attempts failed (Azure outage, deleted App Registration, etc.) | For Static SP: delete the password manually in Azure using the keyId shown in audit. For Dynamic SP: delete the Application manually. Then click **Force revoke** in the CryptFlare dashboard to clear the DB row | ## Security properties - **Root credentials encrypted at rest** with AES-256-GCM. The key is derived per-config via HKDF with salt `dynamic_root_` from the platform master secret. A leaked database row alone cannot be decrypted. - **Lease credentials never leave the API worker** except in the response body to the requesting caller. - **Every Azure secret CryptFlare mints has `endDateTime = lease_ttl + 5 minutes`** baked in by Azure itself. Even if CryptFlare disappears entirely, your credentials die at Azure on schedule. - **Audit logs** record `{configId, leaseId, expiresAt, externalId}` only - never the credential value. - **Tenant isolation** is enforced at every query layer by `organisation_id`. Azure-side isolation is enforced by Azure - one customer's `appObjectId` is opaque to all other customers. 
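The "retry with backoff" advice for the dynamic-SP propagation delay can be sketched as a small wrapper. This is an illustrative helper, not a CryptFlare feature; the `az account show` call in the usage comment is just an example of a first Azure call:

```shell
# Illustrative helper (not CryptFlare tooling): retry a command with
# exponential backoff to ride out the ~30 s Azure AD propagation delay
# for freshly minted dynamic Service Principals.
retry_with_backoff() {
  max_attempts="${MAX_ATTEMPTS:-6}"
  delay="${INITIAL_DELAY:-2}"
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "retry_with_backoff: giving up after $max_attempts attempts" >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))   # 2s, 4s, 8s, 16s... comfortably covers the 30s window
    attempt=$((attempt + 1))
  done
}

# Usage, right after exporting a fresh dynamic-SP lease:
#   retry_with_backoff az account show --output none
```

Static SP leases do not need this - the root App Registration already exists, so its fresh secrets are honoured immediately.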
## Next steps

- **[Using dynamic secrets](/guides/dynamic-secrets/usage)** - how to actually consume a lease from the CLI, from CI/CD, and from Terraform
- **[Dynamic Secrets API reference](/api-reference/dynamic-secrets)** - full endpoint documentation with request and response examples
- **[Terraform provider](/integrations/terraform)** - manage dynamic secret configs as Terraform resources

---

# Using dynamic secrets

Source: https://docs.cryptflare.com/guides/dynamic-secrets/usage

How to actually consume a dynamic secret lease from the CLI, your CI pipeline, local development, and Terraform

# Using dynamic secrets

You have a dynamic secret configuration set up. Your operator has connected Azure (or AWS, GCP, etc.), granted the right RBAC roles, and everything validates. Now what? **How do your apps, CI pipelines, and developers actually get credentials out of it?**

This guide is the end of the operator's setup and the start of the developer's daily workflow. It covers the four most common consumption patterns:

1. **Local developer workflow** - a developer grabs a 1-hour Azure credential before running `terraform plan`
2. **CI/CD pipeline** - a service token requests a fresh credential at the start of every pipeline
3. **One-shot command wrapping** - run a single command with a dynamic credential and auto-revoke on exit
4. **Terraform integration** - feed a dynamic lease into another provider like `azurerm`

All four patterns use the same underlying API. Pick the one that matches your situation.

## Core model

Before you pick a pattern, the mental model matters:

- **An identity**: a user session (if you are a developer) or a service token (if you are CI). That identity has the `dynamic_secrets:issue` permission granted by your org owner.
- **One call**: `POST /v1/organisations/:org/dynamic-secrets/configs/:id/lease` with an optional `{ "ttl": }` body. Everything else is wiring.
- **One rule**: use the credentials immediately. CryptFlare does not store them - if you lose them, issue a fresh lease.
Pass the credentials to whatever tool needs them - Azure CLI, Terraform, kubectl, a migration script, `curl`. CryptFlare automatically revokes the credential at the upstream provider when the TTL expires. You don't do anything. If you finish early, you can `DELETE` the lease to revoke it immediately - but you don't have to. > You never see the lease's underlying identity (CryptFlare's encrypted root credentials). The upstream provider (Azure, AWS, ...) only sees your short-lived credential, not CryptFlare. Your cloud audit logs will show the App Registration / IAM role acting on your resources - exactly as if you had configured it directly. ## Pattern 1: Local developer workflow **When to use it**: you are a developer running commands from your laptop that need Azure/AWS/GCP credentials. You want short-lived creds scoped to exactly what you need, logged against your identity. ### Using the dashboard (simplest) 1. Sign in to CryptFlare and navigate to **Dynamic Secrets** in the sidebar 2. Find the configuration you need (e.g. `azure-prod-reader`) 3. Click **Issue lease**, pick a TTL in minutes, click the big blue button 4. Copy each credential field by clicking the icon 5. Paste into your terminal as environment variables: ```bash export AZURE_CLIENT_ID= export AZURE_CLIENT_SECRET= export AZURE_TENANT_ID= # Now any Azure tool that respects these env vars just works az login --service-principal \ --username $AZURE_CLIENT_ID \ --password $AZURE_CLIENT_SECRET \ --tenant $AZURE_TENANT_ID terraform plan ``` 6. When you're done, close your terminal. The credentials expire automatically at the TTL you chose - you don't need to do anything. 
### Using `curl` directly ```bash # Replace with your org slug or ID, config ID, and session cookie ORG_ID="org_xyz" CONFIG_ID="ds_abc" LEASE=$(curl -sX POST \ "https://api.cryptflare.com/v1/organisations/$ORG_ID/dynamic-secrets/configs/$CONFIG_ID/lease" \ -H "Content-Type: application/json" \ --cookie "cf_session=" \ -d '{"ttl": 1800}') export AZURE_CLIENT_ID=$(echo "$LEASE" | jq -r .data.credentials.AZURE_CLIENT_ID) export AZURE_CLIENT_SECRET=$(echo "$LEASE" | jq -r .data.credentials.AZURE_CLIENT_SECRET) export AZURE_TENANT_ID=$(echo "$LEASE" | jq -r .data.credentials.AZURE_TENANT_ID) export CF_LEASE_ID=$(echo "$LEASE" | jq -r .data.leaseId) # Use the credentials terraform plan ``` If you want to revoke the lease early when you're done (so quota is freed up for your teammates): ```bash curl -sX DELETE \ "https://api.cryptflare.com/v1/organisations/$ORG_ID/dynamic-secrets/leases/$CF_LEASE_ID" \ --cookie "cf_session=" ``` This is idempotent - you can call it on an already-expired lease and nothing breaks. ## Pattern 2: CI/CD pipeline **When to use it**: your GitHub Actions / GitLab CI / CircleCI workflow needs cloud credentials to run `terraform apply`, push container images, or upload release artifacts. You want the pipeline to get a fresh credential that dies when the pipeline ends. ### Prerequisites - A CryptFlare **service token** with `dynamic_secrets:issue` scope (Organisation Settings → Service Tokens) - The service token stored as a CI secret (e.g. 
`CRYPTFLARE_TOKEN` in GitHub Actions) - Your dynamic secret config ID (CryptFlare dashboard) ### GitHub Actions example ```yaml name: Deploy on: push: branches: [main] jobs: deploy: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - name: Mint Azure credentials id: cryptflare run: | LEASE=$(curl -sfX POST \ "https://api.cryptflare.com/v1/organisations/${{ vars.ORG_ID }}/dynamic-secrets/configs/${{ vars.DYNAMIC_CONFIG_ID }}/lease" \ -H "Authorization: Bearer ${{ secrets.CRYPTFLARE_TOKEN }}" \ -H "Content-Type: application/json" \ -d '{"ttl": 1800}') # Mask each credential in logs echo "::add-mask::$(echo "$LEASE" | jq -r .data.credentials.AZURE_CLIENT_SECRET)" # Export as step outputs for later steps { echo "AZURE_CLIENT_ID=$(echo "$LEASE" | jq -r .data.credentials.AZURE_CLIENT_ID)" echo "AZURE_CLIENT_SECRET=$(echo "$LEASE" | jq -r .data.credentials.AZURE_CLIENT_SECRET)" echo "AZURE_TENANT_ID=$(echo "$LEASE" | jq -r .data.credentials.AZURE_TENANT_ID)" echo "LEASE_ID=$(echo "$LEASE" | jq -r .data.leaseId)" } >> $GITHUB_OUTPUT - name: Terraform apply env: ARM_CLIENT_ID: ${{ steps.cryptflare.outputs.AZURE_CLIENT_ID }} ARM_CLIENT_SECRET: ${{ steps.cryptflare.outputs.AZURE_CLIENT_SECRET }} ARM_TENANT_ID: ${{ steps.cryptflare.outputs.AZURE_TENANT_ID }} ARM_SUBSCRIPTION_ID: ${{ vars.AZURE_SUBSCRIPTION_ID }} run: | cd infrastructure terraform init terraform apply -auto-approve - name: Revoke lease if: always() # run even if terraform failed run: | curl -sX DELETE \ "https://api.cryptflare.com/v1/organisations/${{ vars.ORG_ID }}/dynamic-secrets/leases/${{ steps.cryptflare.outputs.LEASE_ID }}" \ -H "Authorization: Bearer ${{ secrets.CRYPTFLARE_TOKEN }}" || true ``` > The `if: always()` revoke step is **optional** - the lease would die on its own at the 30-minute TTL anyway. But revoking early frees up quota and tightens the audit trail. For long-running pipelines (>30 min) you can either raise the TTL in the lease request or re-mint partway through. 
### GitLab CI example ```yaml deploy: stage: deploy image: hashicorp/terraform:latest before_script: - apk add --no-cache curl jq - | LEASE=$(curl -sfX POST \ "https://api.cryptflare.com/v1/organisations/$CRYPTFLARE_ORG/dynamic-secrets/configs/$DYNAMIC_CONFIG_ID/lease" \ -H "Authorization: Bearer $CRYPTFLARE_TOKEN" \ -H "Content-Type: application/json" \ -d '{"ttl": 1800}') export ARM_CLIENT_ID=$(echo "$LEASE" | jq -r .data.credentials.AZURE_CLIENT_ID) export ARM_CLIENT_SECRET=$(echo "$LEASE" | jq -r .data.credentials.AZURE_CLIENT_SECRET) export ARM_TENANT_ID=$(echo "$LEASE" | jq -r .data.credentials.AZURE_TENANT_ID) export LEASE_ID=$(echo "$LEASE" | jq -r .data.leaseId) script: - terraform init - terraform apply -auto-approve after_script: - | curl -sX DELETE \ "https://api.cryptflare.com/v1/organisations/$CRYPTFLARE_ORG/dynamic-secrets/leases/$LEASE_ID" \ -H "Authorization: Bearer $CRYPTFLARE_TOKEN" || true ``` ## Pattern 3: One-shot command wrapping **When to use it**: you want to run a single command with temporary credentials and have them torn down immediately when the command finishes - success or failure. This is the canonical replacement for `vault write ... | sh`. Here's a reusable shell function you can drop into your `~/.bashrc` or a team script: ```bash # cryptflare-run # Mints a lease, runs the command with the credentials exported, # and revokes the lease on exit regardless of success. 
cryptflare-run() {
  local config_id="$1"
  shift

  # Run everything in a subshell so the trap and the exported credentials
  # stay contained and do not leak into your interactive shell
  (
    lease=$(curl -sfX POST \
      "https://api.cryptflare.com/v1/organisations/$CRYPTFLARE_ORG/dynamic-secrets/configs/$config_id/lease" \
      -H "Authorization: Bearer $CRYPTFLARE_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"ttl": 3600}')

    if [ -z "$lease" ]; then
      echo "Failed to mint lease" >&2
      exit 1
    fi

    lease_id=$(echo "$lease" | jq -r .data.leaseId)

    # Trap ensures the lease is revoked even on Ctrl+C or subprocess crash
    trap "curl -sX DELETE 'https://api.cryptflare.com/v1/organisations/$CRYPTFLARE_ORG/dynamic-secrets/leases/$lease_id' -H 'Authorization: Bearer $CRYPTFLARE_TOKEN' > /dev/null 2>&1 || true" EXIT INT TERM

    # Export every credential field from the response
    while IFS= read -r line; do
      export "$line"
    done < <(echo "$lease" | jq -r '.data.credentials | to_entries[] | "\(.key)=\(.value)"')

    # Run the wrapped command
    "$@"
  )
}
```

Usage:

```bash
cryptflare-run azure-prod-reader terraform plan
cryptflare-run azure-prod-admin az vm list
cryptflare-run aws-migrations ./run-migrations.sh
```

When the wrapped command exits (normal exit, error exit, Ctrl+C, SIGTERM), the `trap` fires and revokes the lease. Because the work happens in a subshell, neither the trap nor the exported credentials linger in your interactive shell afterwards. Even if your laptop dies, the TTL ceiling still kills the credential at Azure on schedule.

> A native `cryptflare dynamic run` command is on the roadmap - it will do exactly this in the official CLI with better error handling and signal masking. Until then, the shell function above is a good drop-in.

## Pattern 4: Terraform integration

**When to use it**: your Terraform configuration needs to authenticate with Azure/AWS/GCP but you don't want long-lived cloud credentials in state files, CI secrets, or local `~/.azure` caches.

There are two approaches depending on whether you want CryptFlare's Terraform provider itself to mint the lease, or whether you want to mint the lease before running Terraform.
### Approach A: External mint, Terraform consumes (available today) Mint the lease outside Terraform (via curl, CLI, or a wrapper script) and pass the credentials in via environment variables. This works with **any** Terraform Azure provider version. ```bash #!/bin/bash set -euo pipefail # Mint a 1-hour Azure credential LEASE=$(curl -sfX POST \ "https://api.cryptflare.com/v1/organisations/$CRYPTFLARE_ORG/dynamic-secrets/configs/$CONFIG_ID/lease" \ -H "Authorization: Bearer $CRYPTFLARE_TOKEN" \ -H "Content-Type: application/json" \ -d '{"ttl": 3600}') # Export as the env vars the azurerm provider reads natively export ARM_CLIENT_ID=$(echo "$LEASE" | jq -r .data.credentials.AZURE_CLIENT_ID) export ARM_CLIENT_SECRET=$(echo "$LEASE" | jq -r .data.credentials.AZURE_CLIENT_SECRET) export ARM_TENANT_ID=$(echo "$LEASE" | jq -r .data.credentials.AZURE_TENANT_ID) export ARM_SUBSCRIPTION_ID=$(echo "$LEASE" | jq -r .data.credentials.AZURE_SUBSCRIPTION_ID) # Run Terraform - the azurerm provider picks up ARM_* from env automatically terraform init terraform apply # The lease expires automatically after 1 hour ``` Your Terraform config needs no special configuration: ```hcl provider "azurerm" { features {} # No client_id / client_secret needed - provider reads ARM_* env vars } ``` ### Approach B: Ephemeral resource (Terraform 1.10+, coming soon) The CryptFlare Terraform provider will ship an **ephemeral resource** - a Terraform resource type introduced in 1.10 that never writes to state, never logs values, and is computed fresh on every plan and apply. 
```hcl # Not yet shipped - roadmap ephemeral "cryptflare_dynamic_lease" "azure_admin" { config_name = "azure-prod-reader" ttl_seconds = 1800 } provider "azurerm" { features {} client_id = ephemeral.cryptflare_dynamic_lease.azure_admin.client_id client_secret = ephemeral.cryptflare_dynamic_lease.azure_admin.client_secret tenant_id = ephemeral.cryptflare_dynamic_lease.azure_admin.tenant_id subscription_id = var.azure_subscription_id } ``` The credentials never touch the state file, never appear in logs, and get re-minted on every Terraform run. This is the recommended pattern once Terraform 1.10+ is universal. > **Do not** write a `resource "cryptflare_dynamic_lease"` (non-ephemeral). That would store the credentials in the state file, which defeats the point of dynamic secrets. Use `data` sources or `ephemeral` blocks only. ### AWS variant The same external-mint pattern works for AWS. The `aws_iam` provider returns STS-minted credentials in the env vars the `aws` and `terraform-provider-aws` tools read natively: ```bash LEASE=$(curl -sfX POST \ "https://api.cryptflare.com/v1/organisations/$CRYPTFLARE_ORG/dynamic-secrets/configs/$CONFIG_ID/lease" \ -H "Authorization: Bearer $CRYPTFLARE_TOKEN" \ -H "Content-Type: application/json" \ -d '{"ttl": 3600}') export AWS_ACCESS_KEY_ID=$(echo "$LEASE" | jq -r .data.credentials.AWS_ACCESS_KEY_ID) export AWS_SECRET_ACCESS_KEY=$(echo "$LEASE" | jq -r .data.credentials.AWS_SECRET_ACCESS_KEY) export AWS_SESSION_TOKEN=$(echo "$LEASE" | jq -r .data.credentials.AWS_SESSION_TOKEN) export AWS_REGION=$(echo "$LEASE" | jq -r .data.credentials.AWS_REGION) # Any AWS tool picks these up automatically terraform init terraform apply ``` AWS STS tokens are self-expiring - you don't need to revoke them when you're done, they die at the provider regardless. CryptFlare-side revocation just removes the lease from the dashboard. 
## Pattern 5: Renewing a lease for long-running workloads **When to use it**: your workload outlives the initial TTL but you don't want to issue a fresh credential from scratch (which would mean updating every running process's environment). Examples: an overnight data migration, a 90-minute Terraform apply, a long-running CI pipeline that can't predict its own runtime. Vault-style renewal extends the lease's active window without touching the hard cap. For Azure and AWS (whose credentials are immutable), renewal means CryptFlare revokes the old credential and issues a new one under the same lease id - the response includes the fresh values which you must plug into your workload. Request a lease with a TTL matching the short end of what you expect the job to take. ```bash LEASE=$(curl -sfX POST \ "https://api.cryptflare.com/v1/organisations/$ORG/dynamic-secrets/configs/$CONFIG/lease" \ -H "Authorization: Bearer $CF_TOKEN" \ -d '{"ttl": 1800}') LEASE_ID=$(echo "$LEASE" | jq -r .data.leaseId) export AZURE_CLIENT_SECRET=$(echo "$LEASE" | jq -r .data.credentials.AZURE_CLIENT_SECRET) # ... etc ``` Call `POST /leases/$LEASE_ID/renew` with an `increment`. If the provider rotates credentials on renewal, capture the new values from the response. ```bash RENEWAL=$(curl -sfX POST \ "https://api.cryptflare.com/v1/organisations/$ORG/dynamic-secrets/leases/$LEASE_ID/renew" \ -H "Authorization: Bearer $CF_TOKEN" \ -d '{"increment": 1800}') if [ "$(echo "$RENEWAL" | jq -r .data.credentialsRotated)" = "true" ]; then export AZURE_CLIENT_SECRET=$(echo "$RENEWAL" | jq -r .data.credentials.AZURE_CLIENT_SECRET) echo "Credentials rotated - downstream processes need the new secret" fi ``` Renewal is bounded by `max_expires_at`, which is anchored to the original issue time. Once you hit the hard cap, the next renewal returns `400 DYNAMIC_LEASE_EXHAUSTED` and you must issue a brand-new lease. 
Check `maxExpiresAt` in the response to know how much headroom you have: ```bash MAX=$(echo "$RENEWAL" | jq -r .data.maxExpiresAt) echo "Credential cannot be renewed past $MAX" ``` > **When to renew vs reissue**: renew when you want to extend an active job's credential. Reissue (a fresh `POST /lease`) when the old credential has already leaked into environment variables of processes that have moved on. Renewal of a rotating-credential provider means every process holding the old secret must be updated. ## Pattern 6: Wrapping credentials for handoff **When to use it**: you need to hand credentials from one process to another through a channel you don't fully trust. CI logs, task queue messages, webhooks, pastebin relays, email. A wrapped response gives you a single-use **exchange token** that is useless on its own - the recipient must have a valid CryptFlare session or service token to redeem it. This is safer than passing raw credentials because: - The wrap token is a capability, not a credential. Intercepting the token without also having CryptFlare auth gets you nothing. - The token is single-use. Once redeemed, it's gone forever. Replay attacks fail. - The token has a short window (default 60 seconds, max 5 minutes). Even if intercepted and leaked to an attacker with CryptFlare auth, the window to use it is tiny. Set `wrap` on the issue request. The response contains a wrap token instead of credentials. ```bash LEASE=$(curl -sfX POST \ "https://api.cryptflare.com/v1/organisations/$ORG/dynamic-secrets/configs/$CONFIG/lease" \ -H "Authorization: Bearer $CF_TOKEN" \ -d '{"ttl": 1800, "wrap": {"ttl": 60}}') WRAP_TOKEN=$(echo "$LEASE" | jq -r .data.wrapped.token) echo "Hand this token to the consumer: $WRAP_TOKEN" ``` Post it to the Slack channel, write it to the CI artifact, return it from the HTTP handler. The token alone is useless to anyone without CryptFlare auth. On the other end, the consumer exchanges the token for the real credentials. 
```bash UNWRAPPED=$(curl -sfX POST \ "https://api.cryptflare.com/v1/organisations/$ORG/dynamic-secrets/unwrap/$WRAP_TOKEN" \ -H "Authorization: Bearer $CF_TOKEN_CONSUMER") export AZURE_CLIENT_ID=$(echo "$UNWRAPPED" | jq -r .data.credentials.AZURE_CLIENT_ID) export AZURE_CLIENT_SECRET=$(echo "$UNWRAPPED" | jq -r .data.credentials.AZURE_CLIENT_SECRET) export AZURE_TENANT_ID=$(echo "$UNWRAPPED" | jq -r .data.credentials.AZURE_TENANT_ID) # The token is now consumed - a second unwrap attempt would 404 ``` ### Canonical use case: cross-pipeline handoff ```yaml # Producer job - runs on a trusted runner, mints a wrapped credential - name: Mint wrapped Azure lease run: | LEASE=$(curl -sfX POST "$CF_URL/.../lease" \ -H "Authorization: Bearer $CF_TOKEN" \ -d '{"ttl": 1800, "wrap": {"ttl": 120}}') echo "WRAP_TOKEN=$(echo "$LEASE" | jq -r .data.wrapped.token)" >> $GITHUB_OUTPUT # Consumer job - runs on a less-trusted runner (e.g. a PR from a fork) - name: Unwrap and use env: TOKEN: ${{ needs.producer.outputs.WRAP_TOKEN }} run: | CREDS=$(curl -sfX POST "$CF_URL/.../unwrap/$TOKEN" \ -H "Authorization: Bearer $CF_CONSUMER_TOKEN") export AZURE_CLIENT_SECRET=$(echo "$CREDS" | jq -r .data.credentials.AZURE_CLIENT_SECRET) terraform apply ``` Both jobs need their own CryptFlare authentication - the producer mints the wrap, the consumer redeems it. If an attacker reads the GitHub Actions output between jobs and sees `WRAP_TOKEN`, they still need a CryptFlare service token with `dynamic_secrets:issue` to actually unwrap it. ## TTL trade-offs Picking the right TTL matters. Too short and your CI pipelines fail halfway through. Too long and the blast radius grows. 
Some guidelines: | Use case | Suggested TTL | Reasoning | |---|---|---| | Local dev, quick command | **5-15 min** | Matches how long you're actually at the keyboard | | Local dev, longer session | **60 min** | Covers a focused working session without constant re-minting | | CI pipeline, short job | **10-30 min** | Just longer than the job's median runtime | | CI pipeline, `terraform apply` | **30-60 min** | Accounts for slow providers and large plans | | CI pipeline, full e2e test suite | **60-120 min** | Longest acceptable without chunking the pipeline | | Scheduled batch job | **TTL = job duration × 1.5** | Cushion for retries and variance | > **Request slightly less than you think you need, not more.** It's better to re-mint mid-pipeline than to leave a 4-hour credential sitting in an expired runner. If your median pipeline is 25 minutes, request 30 minutes - not 2 hours. ## Observability Every lease action lands in the [audit log](/security/audit-logs) as one of: - `dynamic_lease.issued` - credential was minted, includes TTL and requester - `dynamic_lease.expired` - workflow's TTL fired and the credential was revoked at the provider - `dynamic_lease.revoked` - manual or cascade revoke - `dynamic_lease.irrevocable` - revoke attempts exhausted, ops investigation required You can subscribe to these via [event subscriptions](/security/event-subscriptions) to forward to Slack, PagerDuty, or your SIEM: ```json { "name": "Dynamic lease alerts", "events": ["dynamic_lease.irrevocable"], "destination": { "type": "slack", "url": "https://hooks.slack.com/services/..." } } ``` In the CryptFlare dashboard, the Dynamic Secrets page shows every active lease with its requester, expiry countdown, and status. You can manually revoke any lease from this view with one click. ## Troubleshooting ### "Effective TTL is 30s, below the minimum of 60s" Your CryptFlare session or service token is about to expire. 
TTLs are clamped to the parent token's remaining lifetime - if the parent is about to die, so must every lease. Fix: - **Sessions**: log out and back in to refresh - **Service tokens**: check the token's `expires_at` field. Rotate it or set a longer expiry - **Access tokens**: same as service tokens ### "Concurrent lease quota reached (50/50)" Every slot in the config's `max_concurrent_leases` is taken. Options: - Wait for existing leases to expire naturally (check the dashboard for expiry countdowns) - Manually revoke leases you no longer need (`DELETE /leases/:id`) - Ask your operator to raise the quota (`PATCH /configs/:id` with `maxConcurrentLeases`) ### "Per-identity lease quota reached (5/5)" You (specifically - bound to your current session or token) have five active leases on this config. Same remedies as above, but scoped to your identity. ### Lease was issued but Azure says 401 Unauthorized Most likely causes: - **You waited too long** - the lease TTL has expired. Check the lease detail in the dashboard. Issue a fresh lease. - **Clock skew** - your local clock is wrong. Azure rejects tokens based on `iat`/`exp` claims. Fix your NTP. - **Wrong subscription** - the credential is valid but the role assignment is on a different subscription than the one you're targeting. Check `ARM_SUBSCRIPTION_ID`. - **Permission too narrow** - the role assigned to the root App Registration doesn't cover the operation you're attempting. Widen the role in Azure or pick a different config with broader permissions. ### Lease stuck in `irrevocable` state The workflow exhausted all 6 revoke attempts. The credential may still be valid at the upstream provider. Steps: 1. **Go to the upstream provider directly** (Azure portal for azure_sp) and manually delete the credential using the `externalId` shown in the lease detail 2. **Return to CryptFlare** and click **Force revoke** on the irrevocable lease, or call `POST /leases/:id/force-revoke`. 
This marks the lease as revoked in CryptFlare's DB without attempting another provider call. Irrevocable leases are logged as audit events so your ops team can investigate the root cause (provider outage, deleted App Registration, network policy blocking Graph, etc.). --- # Idempotency Keys Source: https://docs.cryptflare.com/guides/idempotency Safely retry mutations without creating duplicates by attaching an Idempotency-Key header # Idempotency Keys Network hiccups, CI retries, and terraform re-plans can all cause a mutation request to be sent twice. Without idempotency keys, the server has no way to tell a retry from a new request: you end up with two of the same secret, two service tokens, two webhook subscriptions. CryptFlare supports the `Idempotency-Key` header on every authenticated mutation endpoint. Attach a unique key to a request and the server will record the response for 24 hours. Any retry with the same key and the same body replays the cached response instead of re-executing the mutation. ## How it works Send a client-generated key on any `POST`, `PUT`, `PATCH`, or `DELETE`: ```bash curl -X POST https://api.cryptflare.com/v1/organisations/$ORG/secrets \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -H "Idempotency-Key: 01HX7N4QJ9P8W2B3K5VY6Z4EDA" \ -d '{"name":"DATABASE_URL","value":"postgres://..."}' ``` The server stores the response keyed on: - the authenticated caller (user id or token id) - the HTTP method and path - the idempotency key you sent ### Replays A retry with the same key and the same request body returns the cached response unchanged. The replayed response includes an `Idempotency-Replayed: true` header so you can tell it came from the cache. ``` HTTP/1.1 201 Created Idempotency-Replayed: true Content-Type: application/json {"id":"sec_abc123","name":"DATABASE_URL",...} ``` ### Collisions If you reuse a key with a different request body, the server returns `422 IDEMPOTENCY_KEY_COLLISION`. 
This catches a common class of client bug where the same key is reused for a different payload. ```json { "error": "IDEMPOTENCY_KEY_COLLISION", "message": "The Idempotency-Key you sent was previously used with a different request body. Use a fresh key for a different payload.", "status": 422 } ``` ## Choosing a key Use any client-generated identifier that is unique per logical operation: - UUIDv4 or ULID is the simplest option - A deterministic hash of the operation (e.g. `sha256(workflow_run_id + secret_name)`) lets an entire CI job be safely replayed Keys can be up to 255 characters. Anything shorter than that is fine. ## What is cached - `2xx` responses are cached. A replay returns the same body and status. - `4xx` responses are also cached so a client bug (e.g. sending invalid JSON) does not loop endlessly. - `5xx` responses are **not** cached. A retry after a transient server error re-executes the mutation so it has a chance to succeed. - Response bodies larger than 1 MB are not cached. This only affects a handful of bulk export endpoints; normal create/update responses are far smaller. ## What is not cached - Responses to requests without an `Idempotency-Key` header (backwards compatible — older clients are unaffected). - Responses to authentication endpoints (`/v1/auth/*`, `/v1/console/auth/*`), which have their own anti-replay logic. - Responses to the Stripe billing webhook, which has upstream idempotency of its own. ## TTL Cached responses expire after **24 hours**. Retries after that window execute the mutation normally. Choose a fresh key for operations that are expected to happen more than once a day. ## Side effects Cached replays return the stored response body. Side effects that run inside `waitUntil` during the first execution — audit log emission, cache invalidation, analytics counters — do **not** fire again on a replay. This is the correct behaviour: an idempotent replay represents one logical operation, not two. 
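The deterministic-key option from "Choosing a key" above can be sketched in shell. The `WORKFLOW_RUN_ID` and `SECRET_NAME` values here are hypothetical stand-ins for whatever uniquely identifies your logical operation:

```shell
# Derive a deterministic Idempotency-Key from the logical operation, so a
# replayed CI job sends the same key and gets the cached response back.
# WORKFLOW_RUN_ID / SECRET_NAME are hypothetical stand-ins.
WORKFLOW_RUN_ID="run_1234"
SECRET_NAME="DATABASE_URL"

IDEMPOTENCY_KEY=$(printf '%s%s' "$WORKFLOW_RUN_ID" "$SECRET_NAME" | sha256sum | cut -d' ' -f1)
echo "$IDEMPOTENCY_KEY"   # 64 hex characters - well under the 255-character limit
```

Pass it as `-H "Idempotency-Key: $IDEMPOTENCY_KEY"` on the mutation; a replay comes back with `Idempotency-Replayed: true`.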
## Scope Keys are scoped per caller. Two different service tokens using the same key create two independent cache entries. A user on the dashboard and the same user on a service token are treated as separate callers. --- # SSO with Auth0 Source: https://docs.cryptflare.com/guides/sso/auth0 Step-by-step guide to configuring OIDC-based SSO with Auth0 in CryptFlare # SSO with Auth0 This guide walks through creating an application in Auth0 and connecting it to CryptFlare for SSO. ## Prerequisites - Auth0 tenant with admin access - A CryptFlare organisation on the **Team plan** - Organisation **owner** permissions in CryptFlare ## Setup Sign in to the [Auth0 Dashboard](https://manage.auth0.com). Go to `Applications` then `Applications` in the sidebar. Click `Create Application`. - **Name**: `CryptFlare` - **Application type**: `Regular Web Applications` Click `Create`. In the application `Settings` tab, scroll to `Application URIs`. Set the `Allowed Callback URLs` to: ``` https://api.cryptflare.com/v1/auth/sso/callback/oidc ``` Click `Save Changes`. From the `Settings` tab, copy: | Field | Where to find it | |-------|-----------------| | `Domain` | Settings, top section | | `Client ID` | Settings, top section | | `Client Secret` | Settings, top section | Your **Issuer URL** is your Auth0 domain with a trailing slash: ``` https://{your-auth0-domain}/ ``` For example: `https://acme.us.auth0.com/` > The trailing slash is required. Without it, CryptFlare will report an issuer mismatch. Under `Connections`, enable the identity sources you want users to authenticate with (e.g., `Database`, `Google`, enterprise connections). In CryptFlare, navigate to `Organisation Settings` and open the `SSO` tab. Click `Add Connection` and select `Auth0` as the provider. 
| Field | Value | |-------|-------| | **Connection name** | e.g., `Acme Corp Auth0` | | **Issuer URL** | `https://{your-auth0-domain}/` | | **Client ID** | From step 3 | | **Client Secret** | From step 3 | | **Allowed domains** | Your company email domain, e.g., `acme.com` | | **Default role** | The role for users who do not match any group mapping | Click `Create Connection`. Click the `Test` button to verify the connection, then toggle it to `Enabled`. ## Troubleshooting | Issue | Solution | |-------|---------| | `callback_url_mismatch` | Ensure the callback URL in Auth0 exactly matches `https://api.cryptflare.com/v1/auth/sso/callback/oidc` | | `issuer mismatch` | Make sure the issuer URL includes the trailing `/` | | Users cannot log in | Check that the correct `Connections` are enabled in the Auth0 application | ## Next steps - [Set up group mappings](/security/sso#role-mapping-rules) to assign roles - [Enable Force SSO](/security/sso#force-sso) to require SSO for all users - [SSO API Reference](/api-reference/sso) for programmatic configuration --- # SSO with Microsoft Entra ID Source: https://docs.cryptflare.com/guides/sso/entra-id Step-by-step guide to configuring OIDC-based SSO with Microsoft Entra ID (Azure AD) in CryptFlare # SSO with Microsoft Entra ID This guide walks through creating an OIDC application in Microsoft Entra ID (formerly Azure Active Directory) and connecting it to CryptFlare. ## Prerequisites - A Microsoft Entra ID tenant with admin access - A CryptFlare organisation on the **Team plan** - Organisation **owner** permissions in CryptFlare ## Setup Sign in to the [Azure portal](https://portal.azure.com) and navigate to `Microsoft Entra ID`. Go to `App registrations` and click `New registration`. - **Name**: `CryptFlare SSO` (or any name your team will recognise) - **Supported account types**: Select `Accounts in this organizational directory only (Single tenant)` - this is the most common choice. 
Only select multi-tenant if your users span multiple Entra ID tenants. - **Redirect URI**: Select `Web` as the platform and enter: ``` https://api.cryptflare.com/v1/auth/sso/callback/oidc ``` Click `Register`. After registration, you land on the `Overview` page. Copy these two values - you will need them both: | Field | Where to find it | Example | |-------|-----------------|---------| | `Application (client) ID` | Overview page, top section | `8512a48e-f285-4128-98e6-ab65bb0caa4b` | | `Directory (tenant) ID` | Overview page, top section | `f0ad64be-1eab-4495-9457-87b041ab39e1` | > **Important**: The `Directory (tenant) ID` must match the tenant where the app is registered. If your Overview page says "Default Directory" but your users are in a different tenant, you have registered the app in the wrong tenant. Go to your profile icon in the Azure portal, click `Switch directory`, and re-register in the correct tenant. Your **Issuer URL** follows this pattern: ``` https://login.microsoftonline.com/{tenant-id}/v2.0 ``` Replace `{tenant-id}` with your `Directory (tenant) ID`. For example: ``` https://login.microsoftonline.com/f0ad64be-1eab-4495-9457-87b041ab39e1/v2.0 ``` In the app registration sidebar, go to `Certificates & secrets`. Click `New client secret`. - **Description**: `CryptFlare SSO` - **Expires**: Choose a duration (6 months, 12 months, or 24 months) Click `Add`. **Copy the `Value` column immediately** - this is your client secret. It is only shown once. The `Secret ID` column is not the secret - you need the `Value`. In the sidebar, go to `API permissions`. By default, Entra ID only adds `User.Read`. 
You need to add the OIDC scopes manually: - Click `Add a permission` - Select `Microsoft Graph` - Select `Delegated permissions` - Search for and add each of these: - `openid` - Sign users in - `email` - View users' email address - `profile` - View users' basic profile Your permissions list should now show: | Permission | Type | Status | |-----------|------|--------| | `User.Read` | Delegated | Granted | | `openid` | Delegated | Granted | | `email` | Delegated | Granted | | `profile` | Delegated | Granted | Click `Grant admin consent for {your tenant}` to approve all permissions. > For **group-based role mapping** with more than 200 groups, also add `GroupMember.Read.All` (delegated) and grant admin consent. If you want to use group-based role mapping: - In the sidebar, go to `Token configuration` - Click `Add groups claim` - Select `Security groups` (or `All groups` depending on your directory) - Under `ID` token, ensure the claim is included - Click `Add` > For organisations with more than 200 groups, CryptFlare automatically falls back to the Microsoft Graph API to fetch group memberships. No extra configuration is needed - just ensure `GroupMember.Read.All` is granted in step 4. In CryptFlare, navigate to `Organisation Settings` and open the `SSO` tab. Click `Add Connection` and select `Microsoft Entra ID` as the provider. | Field | Value | |-------|-------| | **Tenant ID** | The `Directory (tenant) ID` from step 2 | | **Client ID** | The `Application (client) ID` from step 2 | | **Client Secret** | The secret `Value` from step 3 | | **Connection name** | e.g., `Acme Corp Entra ID` | | **Allowed domains** | Your company email domain, e.g., `acme.com` | | **Default role** | The role for users who do not match any group mapping | CryptFlare builds the issuer URL automatically from the tenant ID. Click `Create Connection`. Click the `Test` button to verify CryptFlare can reach your Entra ID OIDC discovery endpoint. 
This validates:

- The tenant ID is correct
- The issuer URL resolves to a valid OIDC discovery document
- The endpoint is reachable from CryptFlare

Once the test passes, toggle the connection to `Enabled`. Optionally, set up [group mappings](/security/sso#role-mapping-rules) to map Entra ID groups to CryptFlare roles.

## Common errors

| Error | Cause | Fix |
|-------|-------|-----|
| `AADSTS700016: Application not found in directory` | The `Directory (tenant) ID` in your issuer URL does not match the tenant where the app is registered | Go to `Overview` in your app registration, copy the correct `Directory (tenant) ID`, and update the issuer URL in CryptFlare |
| `AADSTS50011: Reply URL does not match` | The redirect URI registered in Entra ID does not match what CryptFlare sends | In `Authentication`, verify the redirect URI is exactly `https://api.cryptflare.com/v1/auth/sso/callback/oidc` |
| `AADSTS65001: User or admin has not consented` | API permissions have not been admin-consented | Go to `API permissions` and click `Grant admin consent` |
| `AADSTS7000218: Request body must contain client_assertion or client_secret` | The client secret is missing or expired | Go to `Certificates & secrets`, create a new secret, and update it in CryptFlare |
| Test fails with `invalid issuer` | The issuer URL is malformed or uses the wrong tenant ID | Verify the URL matches `https://login.microsoftonline.com/{tenant-id}/v2.0` exactly |
| Only `User.Read` in permissions | The OIDC scopes were not added | Add `openid`, `email`, and `profile` as delegated Microsoft Graph permissions |
| `IdP did not return an email claim` | The OIDC token does not include the user's email | Add `openid`, `email`, `profile` permissions (step 4) and grant admin consent. Also verify the user has an `Email` field set in their Entra ID profile (see below) |

## Email claim not returned

If you have added the `email` permission and granted admin consent but still get `IdP did not return an email claim`, the user's Entra ID profile may not have the `Email` field populated.

To check:

- In the Azure portal, go to `Users` and click the user
- Go to `Properties` then `Contact info`
- Verify the `Email` field has a value

Some Entra ID accounts (especially admin-created ones) have a `User principal name` (e.g., `user@acme.com`) but no value in the `Email` field. The OIDC `email` claim is sourced from the `Email` property, not the UPN.

To fix, either:

- Edit the user's profile and set the `Email` field
- Or configure an **optional claim** to map the UPN to the email claim: go to your app registration, then `Token configuration`, click `Add optional claim`, select `ID` token type, and add the `email` claim. If prompted, check `Turn on the Microsoft Graph email permission`.
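When debugging a missing claim, it helps to look at what the IdP actually put in the ID token rather than guessing. The snippet below is a generic JWT-payload decode (base64url, second dot-separated segment), not a CryptFlare tool; `ID_TOKEN` is a placeholder for a token you captured from the IdP, and captured tokens should be treated as secrets:

```bash
# Decode the payload (second dot-separated segment) of a JWT and check for
# the email claim. JWTs use unpadded base64url, so restore '+', '/' and '='
# before handing it to base64.
ID_TOKEN="$1"   # placeholder: an ID token captured from the IdP

PAYLOAD=$(printf '%s' "$ID_TOKEN" | cut -d. -f2 | tr '_-' '/+')
case $(( ${#PAYLOAD} % 4 )) in
  2) PAYLOAD="${PAYLOAD}==" ;;
  3) PAYLOAD="${PAYLOAD}=" ;;
esac

printf '%s' "$PAYLOAD" | base64 -d
# If the decoded JSON has no "email" key, the IdP is not sending the claim
# and the fix belongs on the IdP side, not in CryptFlare.
```

If `email` is absent from the decoded payload, work through the `Email` field and optional-claim fixes above.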
## Troubleshooting checklist If SSO is not working, verify each of these: - [ ] App is registered in the **correct tenant** (not "Default Directory" if that is a personal tenant) - [ ] `Application (client) ID` in CryptFlare matches the `Overview` page - [ ] `Directory (tenant) ID` in the issuer URL matches the `Overview` page - [ ] Redirect URI in `Authentication` matches exactly (no trailing slash, correct protocol) - [ ] API permissions include `openid`, `email`, `profile` with admin consent granted - [ ] Admin consent has been granted (green checkmark in the `Status` column) - [ ] Client secret `Value` (not `Secret ID`) is copied into CryptFlare - [ ] Client secret has not expired - [ ] User has an `Email` field set in their Entra ID profile (not just a UPN) ## Next steps - [Set up group mappings](/security/sso#role-mapping-rules) to automatically assign roles based on Entra ID groups - [Enable Force SSO](/security/sso#force-sso) to require SSO for all users on matching domains - [SSO API Reference](/api-reference/sso) for programmatic configuration --- # SSO with Generic OIDC Source: https://docs.cryptflare.com/guides/sso/generic-oidc Step-by-step guide to configuring SSO with any OpenID Connect provider in CryptFlare # SSO with Generic OIDC CryptFlare supports any identity provider that implements the OpenID Connect standard. This guide covers the general setup process. 
## Prerequisites - Admin access to your identity provider - A CryptFlare organisation on the **Team plan** - Organisation **owner** permissions in CryptFlare ## Setup Create a new application or client in your identity provider with the following settings: | Setting | Value | |---------|-------| | **Application type** | `Web application` | | **Grant type** | `Authorization Code` | | **Redirect URI** | `https://api.cryptflare.com/v1/auth/sso/callback/oidc` | | **Scopes** | `openid`, `email`, `profile` | You need three values from your identity provider: | Field | Description | |-------|------------| | **Issuer URL** | Must serve a valid `/.well-known/openid-configuration` document | | **Client ID** | The application or client identifier | | **Client Secret** | The application secret | You can verify the issuer URL by fetching the discovery document: ```bash curl https://your-idp.com/.well-known/openid-configuration ``` The response should include `authorization_endpoint`, `token_endpoint`, and `jwks_uri` fields. If your identity provider supports group claims and you want to use group-based role mapping, configure the ID token to include a `groups` claim. The expected format is a JSON array of group names or IDs: ```json { "groups": ["Engineering", "Platform Team", "Finance"] } ``` > If your provider uses a different claim name (e.g., Auth0 uses namespaced claims like `https://myapp.com/groups`), you can configure the groups claim name in the connection settings. In CryptFlare, navigate to `Organisation Settings` and open the `SSO` tab. Click `Add Connection` and select `Generic OIDC` as the provider. 
| Field | Value | |-------|-------| | **Connection name** | A descriptive name for this connection | | **Issuer URL** | From step 2 | | **Client ID** | From step 2 | | **Client Secret** | From step 2 | | **Allowed domains** | Restrict to specific email domains (optional) | | **Default role** | The role for users who do not match any group mapping | Click `Create Connection`. Click the `Test` button to verify CryptFlare can reach the OIDC discovery endpoint, then toggle the connection to `Enabled`. ## Security CryptFlare enforces the following security measures for all OIDC connections: - **PKCE** - Proof Key for Code Exchange with `SHA-256` is used on every authorization request - **State parameter** - Cryptographically random, validated on callback (one-time use, `10 minute` TTL) - **Domain enforcement** - Prevents users with non-matching email domains from accessing the organisation - **Org isolation** - SSO connections are strictly scoped to a single organisation ## Troubleshooting | Issue | Solution | |-------|---------| | Test fails | Verify the issuer URL serves a valid `/.well-known/openid-configuration` | | `invalid_redirect_uri` from IdP | Ensure the redirect URI is exactly `https://api.cryptflare.com/v1/auth/sso/callback/oidc` | | Token errors | Verify the `Client ID` and `Client Secret` are correct and not expired | | Groups not mapping | Ensure the ID token includes a `groups` claim with the expected format | ## Next steps - [Set up group mappings](/security/sso#role-mapping-rules) to assign roles - [Enable Force SSO](/security/sso#force-sso) to require SSO for all users - [SSO API Reference](/api-reference/sso) for programmatic configuration --- # SSO with Google Workspace Source: https://docs.cryptflare.com/guides/sso/google Step-by-step guide to configuring OIDC-based SSO with Google Workspace in CryptFlare # SSO with Google Workspace This guide walks through creating an OAuth 2.0 client in Google Cloud and connecting it to CryptFlare for SSO. 
## Prerequisites - Google Workspace or Cloud Identity with admin access - A CryptFlare organisation on the **Team plan** - Organisation **owner** permissions in CryptFlare ## Setup Go to the [Google Cloud Console](https://console.cloud.google.com) and select (or create) a project. Navigate to `APIs & Services` then `OAuth consent screen`. - Select `Internal` if all users are in your Workspace domain, or `External` if you need broader access - Fill in the required fields (`App name`, `User support email`, `Developer contact email`) - On the `Scopes` page, add `openid`, `email`, and `profile` - Click `Save and Continue` through the remaining steps Go to `APIs & Services` then `Credentials`. Click `Create Credentials` and select `OAuth client ID`. - **Application type**: `Web application` - **Name**: `CryptFlare SSO` - **Authorised redirect URIs**: Add: ``` https://api.cryptflare.com/v1/auth/sso/callback/oidc ``` Click `Create` and copy the `Client ID` and `Client secret` from the dialog. The Google OIDC issuer URL is always: ``` https://accounts.google.com ``` In CryptFlare, navigate to `Organisation Settings` and open the `SSO` tab. Click `Add Connection` and select `Google Workspace` as the provider. | Field | Value | |-------|-------| | **Connection name** | e.g., `Acme Corp Google` | | **Issuer URL** | `https://accounts.google.com` | | **Client ID** | From step 2 | | **Client Secret** | From step 2 | | **Allowed domains** | Your Workspace domain, e.g., `acme.com` | | **Default role** | The role for users who do not match any group mapping | Click `Create Connection`. Click the `Test` button to verify the connection, then toggle it to `Enabled`. 
## Troubleshooting | Issue | Solution | |-------|---------| | `Access blocked` during login | Verify the OAuth consent screen is properly configured and the app is published (or user is a test user) | | `redirect_uri_mismatch` | Ensure the redirect URI in Google Cloud exactly matches `https://api.cryptflare.com/v1/auth/sso/callback/oidc` | | Users outside your domain can log in | Add your domain to the `Allowed domains` field in CryptFlare | ## Next steps - [Set up group mappings](/security/sso#role-mapping-rules) to assign roles based on Google groups - [Enable Force SSO](/security/sso#force-sso) to require SSO for all users - [SSO API Reference](/api-reference/sso) for programmatic configuration --- # SSO with Okta Source: https://docs.cryptflare.com/guides/sso/okta Step-by-step guide to configuring OIDC-based SSO with Okta in CryptFlare # SSO with Okta This guide walks through creating an OIDC application in Okta and connecting it to CryptFlare. ## Prerequisites - Okta admin dashboard access - A CryptFlare organisation on the **Team plan** - Organisation **owner** permissions in CryptFlare ## Setup Sign in to your [Okta admin dashboard](https://login.okta.com). Navigate to `Applications` then `Applications` and click `Create App Integration`. - **Sign-in method**: `OIDC - OpenID Connect` - **Application type**: `Web Application` Click `Next`. Set the `App integration name` (e.g., `CryptFlare`). Under `Sign-in redirect URIs`, add: ``` https://api.cryptflare.com/v1/auth/sso/callback/oidc ``` Under `Assignments`, select who can access the application (everyone, or specific groups). Click `Save`. 
Go to the `General` tab of the application and copy: | Field | Where to find it | |-------|-----------------| | `Client ID` | Client Credentials section | | `Client secret` | Client Credentials section (click the eye icon to reveal) | Your **Issuer URL** depends on your authorization server: ``` https://{your-okta-domain}/oauth2/default ``` Replace `{your-okta-domain}` with your Okta domain (e.g., `acme.okta.com`). > If you use a custom authorization server, use its issuer URL instead. To use group-based role mapping in CryptFlare: - Go to `Security` then `API` then `Authorization Servers` - Select the authorization server you are using (e.g., `default`) - Go to the `Claims` tab and click `Add Claim` | Setting | Value | |---------|-------| | **Name** | `groups` | | **Include in token type** | `ID Token`, `Always` | | **Value type** | `Groups` | | **Filter** | Matches regex `.*` (or a more specific filter) | Click `Create`. In CryptFlare, navigate to `Organisation Settings` and open the `SSO` tab. Click `Add Connection` and select `Okta` as the provider. | Field | Value | |-------|-------| | **Connection name** | e.g., `Acme Corp Okta` | | **Issuer URL** | `https://{your-okta-domain}/oauth2/default` | | **Client ID** | From step 3 | | **Client Secret** | From step 3 | | **Allowed domains** | Your company email domain, e.g., `acme.com` | | **Default role** | The role for users who do not match any group mapping | Click `Create Connection`. Click the `Test` button to verify the connection, then toggle it to `Enabled`. 
## Troubleshooting

| Issue | Solution |
|-------|---------|
| Test fails with `issuer not found` | Verify your Okta domain and authorization server name in the issuer URL |
| Users not assigned to the app | Check the application `Assignments` tab in Okta |
| Groups not appearing | Verify the `groups` claim is configured on the correct authorization server |

## Next steps

- [Set up group mappings](/security/sso#role-mapping-rules) to assign roles based on Okta groups
- [Enable Force SSO](/security/sso#force-sso) to require SSO for all users
- [SSO API Reference](/api-reference/sso) for programmatic configuration

---

# Sync to AWS Secrets Manager (federated)

Source: https://docs.cryptflare.com/guides/sync/aws-federated

Step-by-step guide to setting up keyless IAM OIDC federation between AWS and CryptFlare so sync connections push secrets without storing IAM access keys

# Sync to AWS Secrets Manager (federated)

This guide walks through creating an IAM OpenID Connect (OIDC) identity provider in AWS and connecting it to CryptFlare. When complete, CryptFlare pushes secrets to AWS Secrets Manager using short-lived STS session tokens minted per sync via `sts:AssumeRoleWithWebIdentity` - nothing to rotate, no IAM access key stored on our side.

For the concept behind federation, see [Federated Identity](/security/federated-identity).

## Prerequisites

- An AWS account where secrets should land
- Account-level IAM: `iam:CreateOpenIDConnectProvider`, `iam:CreateRole`, `iam:PutRolePolicy`, `secretsmanager:*` (or equivalent managed policies such as `IAMFullAccess` + `SecretsManagerReadWrite`)
- A CryptFlare organisation on the **Pro** or **Team plan**
- Organisation **owner** or **admin** permissions in CryptFlare
- The `aws` CLI v2 installed and authenticated against the target account

## Setup

AWS needs the issuer URL and a TLS thumbprint before it verifies any signed JWTs we send.
Calculate the thumbprint from the current CryptFlare certificate chain:

```bash
THUMBPRINT=$(
  openssl s_client -servername api.cryptflare.com -showcerts \
    -connect api.cryptflare.com:443 </dev/null 2>/dev/null |
  openssl x509 -fingerprint -sha1 -noout |
  tr -d ':' | tr 'A-F' 'a-f' | cut -d= -f2
)
```

Create the provider:

```bash
aws iam create-open-id-connect-provider \
  --url "https://api.cryptflare.com" \
  --client-id-list "sts.amazonaws.com" \
  --thumbprint-list "$THUMBPRINT"
```

| Field | Value |
|-------|-------|
| **Provider URL** | `https://api.cryptflare.com` |
| **Audience** | `sts.amazonaws.com` |
| **Thumbprint** | SHA-1 fingerprint of the intermediate CA (calculated above) |

Copy the `Arn` from the response - needed in step 3. It looks like:

```
arn:aws:iam::123456789012:oidc-provider/api.cryptflare.com
```

> If Cloudflare rotates its intermediate CA, AWS will reject the next STS exchange with `thumbprint does not match`. Re-calculate with the `openssl` snippet and run `aws iam update-open-id-connect-provider-thumbprint` with the new value.

This is the link that activates federation. You are telling AWS: "when someone holds an assertion with this exact subject, let them assume this role."

Copy the **federated subject** from the vault dashboard's sync connection configuration page.
It follows this shape: ``` cryptflare:org:org_abc123:sync:conn_xyz789:v1 ``` Save this as `trust-policy.json`, replacing the two placeholders: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::123456789012:oidc-provider/api.cryptflare.com" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "api.cryptflare.com:aud": "sts.amazonaws.com", "api.cryptflare.com:sub": "cryptflare:org:org_abc123:sync:conn_xyz789:v1" } } } ] } ``` | Condition | Required value | |-----------|----------------| | `:aud` | Always `sts.amazonaws.com` (AWS convention) | | `:sub` | Exact federated subject from the vault dashboard, copy-paste | ```bash aws iam create-role \ --role-name CryptFlareSync \ --assume-role-policy-document file://trust-policy.json \ --description "Let CryptFlare write to Secrets Manager via OIDC federation" ``` Attach a least-privilege Secrets Manager policy: ```bash cat > secrets-policy.json <<'EOF' { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:CreateSecret", "secretsmanager:PutSecretValue", "secretsmanager:DescribeSecret", "secretsmanager:GetSecretValue", "secretsmanager:ListSecrets", "secretsmanager:TagResource", "secretsmanager:UntagResource", "secretsmanager:DeleteSecret" ], "Resource": "*" } ] } EOF aws iam put-role-policy \ --role-name CryptFlareSync \ --policy-name SecretsManagerAccess \ --policy-document file://secrets-policy.json ``` | Field | Value | |-------|-------| | **Role name** | `CryptFlareSync` | | **Trust policy** | `trust-policy.json` from step 2 | | **Inline policy** | `SecretsManagerAccess` scoped to the 8 listed actions | Capture the role ARN: ```bash aws iam get-role --role-name CryptFlareSync --query 'Role.Arn' --output text ``` Example output: ``` arn:aws:iam::123456789012:role/CryptFlareSync ``` > For tighter isolation, pin `"Resource"` to an ARN pattern like 
`arn:aws:secretsmanager:us-east-1:123456789012:secret:cryptflare/*` and set the connection's key prefix to `cryptflare/`. Every pushed secret lands inside the allowed namespace. In the vault dashboard, navigate to `Organisation Settings` and open the `Integrations` tab. Find `AWS Secrets Manager` and click `Register AWS integration`. Select `Workload Identity Federation` as the auth mode and fill in: | Field | Value | |-------|-------| | **Region** | Region where Secrets Manager lives (e.g. `us-east-1`) | | **Role ARN** | The role ARN from step 3 | | **KMS key** (optional) | Alias or ARN of a customer-managed KMS key for SecretString encryption | Click `Register`. The integration appears in the list with auth mode `federated` and no stored credentials. Open a pod or environment, click `Sync` > `New connection`, and pick `AWS Secrets Manager` as the provider. Select the integration registered in step 4 (region auto-populates from it). Decide on delete semantics: - **Off (default)** - cascade deletes schedule secrets with AWS's 30-day recovery window (soft delete) - **Force delete without recovery window** - cascades remove immediately, skipping the tombstone Click `Create connection`. Trigger a sync manually from the connection's overview page. Expect: - Connection status flips to `healthy` in the CryptFlare dashboard - Secrets appear in `aws secretsmanager list-secrets --region us-east-1` - CloudTrail shows `AssumeRoleWithWebIdentity` followed by `CreateSecret` / `PutSecretValue` events under the `CryptFlareSync` role If anything fails, the sync log contains the verbatim AWS error (usually a trust policy mismatch or a missing Secrets Manager permission). ## How it works at sync time Every sync kicks off a fresh single-hop token exchange so the worker holds nothing long-lived. CryptFlare mints a short-lived JWT, calls `AssumeRoleWithWebIdentity` on AWS STS, and uses the returned temporary credentials to call Secrets Manager. 
In sequence:

1. The sync worker requests a signed JWT for the connection subject from CryptFlare's OIDC signing service.
2. The worker calls `AssumeRoleWithWebIdentity` on AWS STS with the JWT.
3. STS returns a temporary access key, secret access key, and session token.
4. The worker calls `CreateSecret` or `PutSecretValue` with the temporary credentials and receives the secret version ARN.

The temp credentials expire within the hour and are never persisted, so revoking the IAM role's trust policy or deleting the role cuts CryptFlare off at the next sync.

## Common errors

| Error | Cause | Fix |
|-------|-------|-----|
| `Not authorized to perform sts:AssumeRoleWithWebIdentity` | Trust policy does not match the assertion CryptFlare mints | Verify `sub` condition matches the vault dashboard's federated subject character-for-character, `aud` is exactly `sts.amazonaws.com`, and `Federated` principal ARN matches your OIDC provider |
| `User is not authorized to perform: secretsmanager:CreateSecret` | Role has STS trust but lacks Secrets Manager permissions | Re-apply the inline policy from step 3. IAM changes take up to 60s - wait then retry |
| `InvalidSignatureException` | SigV4 timestamp drift or wrong region | Check laptop clock (`sudo ntpdate pool.ntp.org`) and verify the integration's region matches where Secrets Manager lives |
| `OpenID Connect provider thumbprint does not match` | Cloudflare rotated the intermediate CA | Re-calculate with the `openssl` snippet and run `aws iam update-open-id-connect-provider-thumbprint` |
| `ResourceExistsException` on every push | Expected - secret already exists | Not an error; CryptFlare falls through to `PutSecretValue` automatically |
| Secret names mangled with underscores | Source keys contain chars outside `A-Za-z0-9/_+=.@-` | Expected - CryptFlare sanitises to Secrets Manager's allowed set. Use a `keyPrefix` on the connection to namespace |

## Troubleshooting checklist

If sync is not working, verify each of these:

- [ ] Region matches where Secrets Manager is enabled (CLI: `aws configure get region`)
- [ ] OIDC provider URL is exactly `https://api.cryptflare.com` (no trailing slash, no path)
- [ ] OIDC provider client-id list contains `sts.amazonaws.com`
- [ ] OIDC provider thumbprint is current (re-calculate with `openssl` if unsure)
- [ ] Trust policy `Federated` principal ARN matches the OIDC provider ARN
- [ ] Trust policy `:aud` condition is exactly `sts.amazonaws.com`
- [ ] Trust policy `:sub` condition matches the vault dashboard subject (copy-paste, do not retype)
- [ ] Role has the 8 Secrets Manager actions listed in step 3
- [ ] Role ARN in CryptFlare matches `aws iam get-role --role-name CryptFlareSync`
- [ ] Laptop / CI clock within 15 minutes of UTC (SigV4 rejects beyond that window)

## Static access keys (alternative)

If OIDC federation is not possible for your organisation, CryptFlare also supports a classic IAM user access-key flow:

1. Create an IAM user (`aws iam create-user --user-name cryptflare-sync`) or reuse an existing one
2. Attach the same Secrets Manager policy from step 3 to the user
3. Create an access key (`aws iam create-access-key --user-name cryptflare-sync`) and copy the key + secret
4. In the CryptFlare integration modal, pick `Access key + secret` as the auth mode and paste both values - they're encrypted at rest with a per-integration AES-GCM key derived from the platform master secret

> **Federation is the recommended path.** With static keys, you're back to managing credential rotation (typically every 90 days per AWS best-practice guidance). Federation removes that burden entirely.
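The name sanitisation noted in the common-errors table (characters outside `A-Za-z0-9/_+=.@-` become underscores) can be reproduced locally to predict how a source key will land in Secrets Manager. This mirrors the documented behaviour but is an illustrative sketch, not CryptFlare's actual code:

```bash
# Map every character outside Secrets Manager's allowed set
# (A-Za-z0-9/_+=.@-) to an underscore, as the sync engine does.
sanitize() {
  printf '%s' "$1" | sed 's|[^A-Za-z0-9/_+=.@-]|_|g'
}

sanitize 'billing api:stripe key!'
# → billing_api_stripe_key_
```

Running your intended key names through a check like this before the first sync avoids surprises in the AWS console; combined with a `keyPrefix`, it tells you the exact final secret name.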
## Revoking access Cut CryptFlare off by deleting the IAM role: ```bash aws iam delete-role-policy --role-name CryptFlareSync --policy-name SecretsManagerAccess aws iam delete-role --role-name CryptFlareSync ``` Future syncs fail at the STS exchange within seconds. For a stronger cut, delete the OIDC provider entirely: ```bash aws iam delete-open-id-connect-provider \ --open-id-connect-provider-arn arn:aws:iam::123456789012:oidc-provider/api.cryptflare.com ``` After this CryptFlare cannot federate into this account until the provider is re-registered. ## Next steps - [Federated Identity concept doc](/security/federated-identity) - trust model, rotation, and cross-provider overview - [Secret Sync overview](/security/sync) - how CryptFlare's sync engine works end-to-end - [Sync Connections API Reference](/api-reference/sync-connections) for programmatic configuration --- # Sync to GCP Secret Manager (federated) Source: https://docs.cryptflare.com/guides/sync/gcp-federated Step-by-step guide to setting up keyless Workload Identity Federation between GCP and CryptFlare so sync connections push secrets without a service-account JSON key # Sync to GCP Secret Manager (federated) This guide walks through creating a Workload Identity Federation (WIF) pool in Google Cloud and connecting it to CryptFlare. When complete, CryptFlare pushes secrets to GCP Secret Manager using short-lived STS tokens minted per sync - nothing to rotate, no credential stored on our side. For the concept behind federation, see [Federated Identity](/security/federated-identity). 
## Prerequisites - A Google Cloud project where secrets should land - Project-level IAM: `roles/iam.workloadIdentityPoolAdmin`, `roles/iam.serviceAccountAdmin`, `roles/resourcemanager.projectIamAdmin` - A CryptFlare organisation on the **Pro** or **Team plan** - Organisation **owner** or **admin** permissions in CryptFlare - The `gcloud` CLI installed and authenticated against the target account ## Setup Set your project and enable the APIs CryptFlare calls: ```bash gcloud config set project YOUR_PROJECT_ID gcloud services enable \ iam.googleapis.com \ iamcredentials.googleapis.com \ sts.googleapis.com \ secretmanager.googleapis.com ``` > **Tip**: the same flow works in the GCP Console at `IAM & Admin` > `Workload Identity Federation`. Every `gcloud` command below maps one-to-one. A pool is a logical container for one or more external identity providers. Create one dedicated to CryptFlare so it can be audited and revoked independently: ```bash gcloud iam workload-identity-pools create cryptflare \ --location=global \ --display-name="CryptFlare sync" ``` | Field | Value | |-------|-------| | **Name** | `cryptflare` (or any team-recognisable identifier) | | **Location** | `global` | | **Display name** | `CryptFlare sync` | Tell GCP to trust CryptFlare's issuer. 
The `attribute-mapping` flag carries our JWT `sub` claim into the IAM binding you create in step 5: ```bash gcloud iam workload-identity-pools providers create-oidc cryptflare-oidc \ --location=global \ --workload-identity-pool=cryptflare \ --display-name="CryptFlare OIDC" \ --issuer-uri="https://api.cryptflare.com" \ --attribute-mapping="google.subject=assertion.sub,attribute.aud=assertion.aud" ``` | Field | Value | |-------|-------| | **Provider ID** | `cryptflare-oidc` | | **Issuer URI** | `https://api.cryptflare.com` (no trailing slash) | | **Attribute mapping** | `google.subject=assertion.sub,attribute.aud=assertion.aud` | > For defence in depth, append `--attribute-condition="assertion.sub.startsWith('cryptflare:org:YOUR_ORG_ID:')"` so only assertions for your CryptFlare org are accepted. Replace `YOUR_ORG_ID` with the `org_...` prefix visible in the vault dashboard URL. Grant permissions to a service account, then let the federated identity impersonate it. All auditing and scope lives on the SA's IAM bindings: ```bash gcloud iam service-accounts create cryptflare-sync \ --display-name="CryptFlare Secret Sync" gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \ --member="serviceAccount:cryptflare-sync@YOUR_PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/secretmanager.admin" ``` | Field | Value | |-------|-------| | **Service account ID** | `cryptflare-sync` | | **Display name** | `CryptFlare Secret Sync` | | **Roles** | `roles/secretmanager.admin` (or narrower, see note) | > For tighter scope use `roles/secretmanager.secretAccessor` + `roles/secretmanager.secretVersionManager` instead of admin. CryptFlare surfaces a clear error in the sync log if it hits a missing permission. This is the link that activates federation. You are telling GCP: "when someone holding an assertion with this exact subject asks to impersonate this service account, let them." 
Open the sync connection you are setting up in the CryptFlare dashboard and copy the **federated subject** shown on the configuration page. It follows this shape: ``` cryptflare:org:org_abc123:sync:conn_xyz789:v1 ``` Run: ```bash PROJECT_NUMBER=$(gcloud projects describe YOUR_PROJECT_ID --format='value(projectNumber)') SUBJECT="cryptflare:org:org_abc123:sync:conn_xyz789:v1" gcloud iam service-accounts add-iam-policy-binding \ cryptflare-sync@YOUR_PROJECT_ID.iam.gserviceaccount.com \ --role="roles/iam.workloadIdentityUser" \ --member="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/cryptflare/subjects/${SUBJECT}" ``` | Field | Where it comes from | |-------|---------------------| | `PROJECT_NUMBER` | `gcloud projects describe` - numeric, not the project ID | | `SUBJECT` | Vault dashboard > sync connection > configuration page | | Role | `roles/iam.workloadIdentityUser` (exactly this, not `roles/iam.serviceAccountUser`) | In the vault dashboard, navigate to `Organisation Settings` and open the `Integrations` tab. Find `GCP Secret Manager` and click `Register GCP integration`. Select `Workload Identity Federation` as the auth mode and fill in: | Field | Value | |-------|-------| | **Project ID** | `YOUR_PROJECT_ID` | | **Service account email** | `cryptflare-sync@YOUR_PROJECT_ID.iam.gserviceaccount.com` | | **WIF provider resource** | `projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/cryptflare/providers/cryptflare-oidc` | Click `Register`. The integration appears in the list with auth mode `federated` and no stored credentials. Open a pod or environment, click `Sync` > `New connection`, and pick `GCP Secret Manager` as the provider. Select the integration registered in step 6 (region + project auto-populate from it). Choose a replication mode: - **Automatic** - GCP replicates the secret globally (default) - **User-managed** - pin specific regions (enter comma-separated `us-central1, europe-west1`) Click `Create connection`.
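The `--member` string from step 5 is where most setups go wrong - using the project ID instead of the project number, or retyping the subject. As a mental model, the member composes like this. This is a hypothetical helper for illustration only, not part of any CryptFlare tooling:

```typescript
// Hypothetical helper showing how the IAM binding member string composes.
// projectNumber must be the numeric project number, never the project ID.
function wifMember(projectNumber: string, pool: string, subject: string): string {
  if (!/^\d+$/.test(projectNumber)) {
    throw new Error("use the numeric project number, not the project ID");
  }
  return (
    `principal://iam.googleapis.com/projects/${projectNumber}` +
    `/locations/global/workloadIdentityPools/${pool}/subjects/${subject}`
  );
}
```

Copy-pasting the subject from the dashboard and letting `gcloud projects describe` supply the number avoids both failure modes.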
Trigger a sync manually from the connection's overview page. Expect: - Connection status flips to `healthy` in the CryptFlare dashboard - Secrets appear in `gcloud secrets list --project=YOUR_PROJECT_ID` - GCP Cloud Audit Logs (Admin Activity) show `GenerateAccessToken` + `AddSecretVersion` calls against your service account If anything fails, the sync log contains the verbatim GCP error response (usually a missing role or subject mismatch). ## How it works at sync time Every sync kicks off a fresh two-hop token exchange so the worker holds nothing long-lived. CryptFlare mints a short-lived JWT, swaps it at Google STS, then impersonates the configured service account via the IAM Credentials API before calling Secret Manager.

```mermaid
sequenceDiagram
    Worker->>OIDC: Request signed JWT for connection subject
    OIDC-->>Worker: Short-lived JWT
    Worker->>STS: Exchange JWT for federated token
    STS-->>Worker: Federated access token
    Worker->>IAM: generateAccessToken for service account
    IAM-->>Worker: Impersonated short-lived token
    Worker->>SM: Create or update secret version
    SM-->>Worker: Version metadata
```

The impersonated token is scoped to the configured service account's IAM bindings, so revoking the `workloadIdentityUser` binding cuts CryptFlare off at step 5 within seconds. ## Common errors | Error | Cause | Fix | |-------|-------|-----| | `The caller does not have permission` | Service account lacks the Secret Manager role | Re-run the role binding from step 4. Wait 60s for IAM propagation then retry | | `Subject does not match any configured principal` | Subject in your IAM binding does not match what CryptFlare mints | Re-copy the subject from the vault dashboard and re-run the binding (step 5).
Verify the `principal://` URL uses the **project number**, not project ID | | `Token issuer is not valid` | Pool provider's issuer URL is wrong | Re-create the provider with `--issuer-uri="https://api.cryptflare.com"` (no trailing slash, no path) | | `Requested entity was not found` | Workload Identity Pool or provider name mismatch | Verify the `WIF provider resource` in CryptFlare matches the `projects/{num}/locations/global/workloadIdentityPools/{pool}/providers/{provider}` format exactly | | `Permission 'secretmanager.secrets.create' denied` | SA role narrower than required | Upgrade to `roles/secretmanager.admin` or add `roles/secretmanager.secretVersionManager` | | Secret names mangled with underscores | Source keys contain chars outside `A-Za-z0-9_-` | Expected - CryptFlare sanitises to GCP's allowed character set. Use a `keyPrefix` on the connection to namespace | ## Troubleshooting checklist If sync is not working, verify each of these: - [ ] Project ID matches where the WIF pool lives (`gcloud config get-value project`) - [ ] `iam.googleapis.com`, `iamcredentials.googleapis.com`, `sts.googleapis.com`, `secretmanager.googleapis.com` are all enabled - [ ] Pool name is `cryptflare` in the `global` location - [ ] Provider's issuer URI is exactly `https://api.cryptflare.com` - [ ] Service account email in CryptFlare matches the SA you created - [ ] IAM binding uses `roles/iam.workloadIdentityUser` (not `roles/iam.serviceAccountUser`) - [ ] Binding `principal://` URL uses the project **number**, not project ID - [ ] Subject in the binding matches the vault dashboard exactly (copy-paste, do not retype) - [ ] SA has `roles/secretmanager.admin` (or equivalent access + version-manager roles) ## Revoking access Cut CryptFlare off by removing the IAM binding: ```bash gcloud iam service-accounts remove-iam-policy-binding \ cryptflare-sync@YOUR_PROJECT_ID.iam.gserviceaccount.com \ --role="roles/iam.workloadIdentityUser" \ 
--member="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/cryptflare/subjects/${SUBJECT}" ``` Future syncs fail at the service-account impersonation step (`generateAccessToken`) within seconds. For a stronger cut, delete the service account - existing tokens expire within the hour. ## Next steps - [Federated Identity concept doc](/security/federated-identity) - trust model, rotation, and cross-provider overview - [Secret Sync overview](/security/sync) - how CryptFlare's sync engine works end-to-end - [Sync Connections API Reference](/api-reference/sync-connections) for programmatic configuration --- # Sync to GitHub Actions Source: https://docs.cryptflare.com/guides/sync/github Step-by-step guide to syncing CryptFlare secrets to GitHub Actions (or Codespaces / Dependabot) secrets using a PAT or GitHub App # Sync to GitHub Actions This guide walks through setting up a sync connection that pushes CryptFlare secrets to GitHub Actions (or Codespaces / Dependabot) repository secrets. Two auth paths are supported - a Personal Access Token (PAT) for quick setup, or a GitHub App for org-wide installs with short-lived installation tokens minted fresh on every sync. For a deep dive on the sync engine, see [Secret Sync](/security/sync). ## Prerequisites - A CryptFlare organisation on the **Pro** or **Team plan** - Secret Sync enabled (`Organisation Settings` > `Features`) - Organisation **owner** or **admin** permissions in CryptFlare - A GitHub repository you can administer ## Setup (Personal Access Token) Best for a quick single-repo setup. Requires manual rotation when the token expires. Navigate to `GitHub` > `Settings` > `Developer settings` > `Personal access tokens` > `Fine-grained tokens` and click `Generate new token`.
| Field | Value | |-------|-------| | **Token name** | `CryptFlare Sync` (or any team-recognisable identifier) | | **Expiration** | Up to 1 year (default 90 days) | | **Repository access** | `Only select repositories` - pick the repos to sync to | | **Permissions** | `Repository permissions` > `Secrets` > `Read and write` | Click `Generate token` and copy the `github_pat_...` value. It will only be shown once. > Classic tokens also work if fine-grained is disabled in your org. Use the `repo` scope, which implicitly covers Actions secrets. In the vault dashboard, navigate to `Organisation Settings` and open the `Integrations` tab. Find `GitHub` and click `Connect GitHub`. This takes you through GitHub's App install flow, which also accepts a PAT via the `Use a token instead` link. Paste the PAT from step 1. > For a pure PAT flow without an integration, you can also paste the token directly on the sync connection form. Integration-mode is preferred because it lets you manage one credential across many connections. Open a pod or environment, click `Sync` > `New connection`, and pick `GitHub` as the provider. | Field | Value | |-------|-------| | **Auth method** | `Personal access token` | | **Owner** | GitHub username or org (e.g. `acme-corp`) | | **Repository** | Repo name without the owner prefix (e.g. `api-gateway`) | | **Environment** (optional) | GitHub Environment name like `production` - leaves secrets at repo scope when empty | | **Sync mode** | `Auto` for fan-out on every change, `Manual` for on-demand | Click `Create connection`. CryptFlare validates the token by fetching the repo's Actions public key before persisting. Click the `Sync now` button on the connection. Expect: - Connection status flips to `healthy` - Sync log shows each key as `pushed` - Secrets appear in `GitHub` > `Settings` > `Secrets and variables` > `Actions` If you configured an Environment, the secrets show under that Environment's secret list instead of the repo-level list. 
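Step 3 notes that CryptFlare validates the token by fetching the repo's Actions public key before persisting the connection, and the Common errors table further down maps the fetch failures to causes. A minimal sketch of that mapping - a hypothetical helper, not CryptFlare's actual code:

```typescript
// Hypothetical helper: interpret the Actions public-key fetch status the way
// the docs describe (200 = ok, 403 = missing secrets write, 404 = no access).
function interpretKeyFetch(status: number): string {
  switch (status) {
    case 200:
      return "ok";
    case 403:
      return "token can read the repo but lacks secrets write permission";
    case 404:
      return "repository not found or token lacks access";
    default:
      return `unexpected status ${status}`;
  }
}
```

Fetching the public key is a cheap preflight: it exercises both repo access and the `Secrets` permission in a single read-only call before any secret is pushed.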
## Setup (GitHub App) Best for org-wide installs. No token to rotate - CryptFlare mints installation tokens on every sync and they self-expire after 1 hour. In the vault dashboard, navigate to `Organisation Settings` > `Integrations` > `GitHub` and click `Install GitHub App`. The popup redirects to GitHub's install page. Choose: | Field | Value | |-------|-------| | **Account** | The GitHub org or user that owns the repos | | **Repository access** | `All repositories` or `Only select repositories` | | **Permissions requested** | `Repository secrets` (read/write) - auto-requested by the App manifest | Click `Install` then `Authorize`. The callback returns you to CryptFlare with the integration registered and listed under `GitHub`. Open a pod or environment, click `Sync` > `New connection`, and pick `GitHub` as the provider. With the App installed, the UI defaults to the App flow: | Field | Value | |-------|-------| | **Auth method** | `GitHub App` (locked) | | **Repository** | Picker sourced from the App's installed repos | | **Environment** (optional) | GitHub Environment name | | **Sync mode** | `Auto` or `Manual` | Click `Create connection`. CryptFlare mints an installation token on the first sync using the App's private key - nothing is stored on the connection beyond the `installation_id` and the repo selector. Same as step 4 of the PAT flow. Connection status flips to `healthy`, sync log shows `pushed`, secrets appear under the repo's Actions settings. 
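Whichever auth path you use, pushed keys are sanitised to GitHub's Actions naming rules: uppercased, non-alphanumerics replaced with `_`, optional `keyPrefix` prepended. A sketch of that mapping - illustrative only, not CryptFlare's actual implementation:

```typescript
// Sketch of the documented key -> Actions secret-name mapping:
// uppercase, each non-alphanumeric (except "_") replaced with "_",
// optional keyPrefix prepended. Illustrative only.
function toActionsSecretName(key: string, keyPrefix = ""): string {
  return keyPrefix + key.toUpperCase().replace(/[^A-Z0-9_]/g, "_");
}
```

Because the mapping is lossy (`api-key` and `api.key` both become `API_KEY`), keep source keys distinct after sanitisation or use a `keyPrefix` per connection.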
## Secret name mapping CryptFlare secret keys are sanitised for GitHub's Actions secret naming rules: - Converted to **UPPERCASE** - Non-alphanumeric characters (except underscores) replaced with `_` - Optional `keyPrefix` prepended | CryptFlare key | GitHub secret name | With prefix `PROD_` | |----------------|--------------------|--------------------| | `database_url` | `DATABASE_URL` | `PROD_DATABASE_URL` | | `api-key` | `API_KEY` | `PROD_API_KEY` | | `my.secret.value` | `MY_SECRET_VALUE` | `PROD_MY_SECRET_VALUE` | ## Common errors | Error | Cause | Fix | |-------|-------|-----| | `Invalid token - authentication failed` | PAT is expired or was revoked | Generate a new PAT (PAT flow step 1) and paste into the integration | | `Token does not have access to this repository` | PAT scope too narrow, or fine-grained token excludes the repo | Re-issue with `Repository access` covering the target repo | | `Repository not found or token lacks access` | Owner / repo mismatch, or private repo without access | Verify owner + repo names on the connection | | `Failed to get public key: 403` | Token can read the repo but lacks secrets write permission | Add `Repository permissions` > `Secrets` > `Read and write` | | `Bad credentials (GitHub App)` | App private key rotated without re-deploying CryptFlare | Reach out to support - CryptFlare rotates the platform key quarterly | | `Installation not found` | GitHub App was uninstalled from the owner | Reinstall via `Organisation Settings` > `Integrations` > `GitHub` | ## Troubleshooting checklist If sync is not working, verify each of these: - [ ] Secret Sync is enabled on the org (`Organisation Settings` > `Features`) - [ ] Owner field matches the GitHub login that holds the repo - [ ] Repository field is the repo name only, not `owner/repo` - [ ] Environment name (if set) exists in `Settings` > `Environments` on the repo - [ ] PAT or App installation covers the target repo - [ ] PAT has `Secrets` > `Read and write`, or the App 
requests `Repository secrets` permission - [ ] PAT is not expired (default 90 days from creation) ## Bidirectional flow CryptFlare supports a hub-and-spoke model. External services can push updated secret values to CryptFlare, and auto-mode connections fan the change out to every connected destination: ``` Your rotation lambda rotates a database password | +-- POST /v1/.../secrets/DATABASE_URL/rotate | +-- CryptFlare auto-syncs to GitHub, Vercel, AWS... ``` Set the connection's sync mode to `Auto` - no extra config needed. Any API call that creates, rotates, or deletes a secret triggers the fan-out. ## Next steps - [Secret Sync overview](/security/sync) - how CryptFlare's sync engine works end-to-end - [Sync Connections API Reference](/api-reference/sync-connections) for programmatic configuration - [Drift detection](/security/sync#drift-detection) - identify unmanaged or orphaned GitHub secrets --- # Terraform Provider Source: https://docs.cryptflare.com/integrations/terraform Manage CryptFlare secrets, workspaces, environments, and pods as infrastructure-as-code with Terraform # Terraform Provider The CryptFlare Terraform provider lets you manage your entire secrets infrastructure as code. Create workspaces, environments, pods, and secrets - all version-controlled, repeatable, and auditable. The provider is the right fit for declarative infra in CI; for AI agents that need to query or mutate secrets at runtime, pair a service token with the `mcp:use` permission and connect through [`mcp.cryptflare.com`](/security/mcp-access). 
- [Terraform Registry](https://registry.terraform.io/providers/BuunGroup-IaC/cryptflare/latest) - [GitHub Repository](https://github.com/BuunGroup-IaC/terraform-provider-cryptflare) - [Source Code & Examples](https://github.com/BuunGroup-IaC/terraform-provider-cryptflare/tree/main/examples) ## Install ```hcl terraform { required_providers { cryptflare = { source = "BuunGroup-IaC/cryptflare" version = "~> 0.2" } } required_version = ">= 1.13" } provider "cryptflare" {} ``` ## Authentication Set your API token and organisation ID via environment variables: ```bash export CF_TOKEN="cf_live_..." export CF_ORG="org_..." ``` Or configure in the provider block: ```hcl provider "cryptflare" { api_token = var.cryptflare_token org_id = var.cryptflare_org } ``` > Never hardcode tokens in `.tf` files. Use environment variables, `terraform.tfvars`, or a secrets manager. ## How it works Terraform treats CryptFlare like any other cloud provider. The provider plugin turns resource definitions into API calls and writes the results into your configured state backend. The diagram below traces one full `plan` and `apply` cycle from HCL source to a materialised secret in the vault. Plan Plan --> Provider Provider -->|"Read + dry-run calls"| API API --> Diff Diff --> Apply Apply --> Provider Provider -->|"Create, update, delete"| API API --> Vault Apply -->|"Write state"| Backends `} /> The state backend stores secret values after apply, so always pick an encrypted backend (S3 with KMS, HCP Terraform, or another encrypted remote store) for production workspaces. 
## Resources | Resource | Description | |---|---| | `cryptflare_workspace` | Manages a workspace within an organisation | | `cryptflare_environment` | Manages an environment within a workspace | | `cryptflare_secret` | Manages an encrypted secret (AES-256-GCM) | | `cryptflare_pod` | Manages a pod (folder) for organizing secrets | ## Data Sources | Data Source | Description | |---|---| | `cryptflare_workspace` | Look up a workspace by ID or slug | | `cryptflare_workspaces` | List all workspaces in the organisation | | `cryptflare_secret` | Read a secret value (for passing to other providers) | ## Quick Example Create a workspace with environments: ```hcl resource "cryptflare_workspace" "backend" { name = "Backend API" slug = "backend-api" } locals { environments = { production = "Production" staging = "Staging" development = "Development" } } resource "cryptflare_environment" "this" { for_each = local.environments workspace_id = cryptflare_workspace.backend.id name = each.value slug = each.key } ``` Organize secrets with pods: ```hcl resource "cryptflare_pod" "databases" { workspace_id = cryptflare_workspace.backend.id environment_id = cryptflare_environment.this["production"].id name = "Databases" slug = "databases" description = "Database connection strings." } ``` Store secrets with validation: ```hcl variable "database_url" { type = string sensitive = true validation { condition = can(regex("^postgres(ql)?://", var.database_url)) error_message = "Must be a valid PostgreSQL connection string." 
} } resource "cryptflare_secret" "database_url" { workspace_id = cryptflare_workspace.backend.id environment_id = cryptflare_environment.this["production"].id key = "DATABASE_URL" value = var.database_url pod_id = cryptflare_pod.databases.id } ``` ## Using Data Sources Read secrets managed outside Terraform and pass them to other providers: ```hcl data "cryptflare_secret" "database_url" { workspace_id = "backend-api" environment_id = "production" key = "DATABASE_URL" } # Pass to AWS SSM resource "aws_ssm_parameter" "database_url" { name = "/app/database-url" type = "SecureString" value = data.cryptflare_secret.database_url.value } ``` ## Import All resources support `terraform import`: ```bash # Workspace terraform import cryptflare_workspace.example ws_abc123 # Environment (workspace_id/env_id) terraform import cryptflare_environment.example ws_abc123/env_def456 # Secret (workspace_id/env_id/key) terraform import cryptflare_secret.example ws_abc123/env_def456/DATABASE_URL # Pod (workspace_id/env_id/pod_id) terraform import cryptflare_pod.example ws_abc123/env_def456/pod_ghi789 ``` ## Best Practices - **Use variables with `sensitive = true`** for all secret values - **Add `validation` blocks** to catch bad inputs before they hit the API - **Use `for_each`** to create environments and pods from maps - **Use `format()` and `sensitive()`** to construct connection strings from parts - **Use data sources** to reference secrets managed by other teams or processes - **Store state encrypted** - secret values are in Terraform state after apply ## Plan Limits Resources you can manage are subject to your plan limits. ## More Resources - [Full Registry Documentation](https://registry.terraform.io/providers/BuunGroup-IaC/cryptflare/latest/docs) - [Complete Example (GitHub)](https://github.com/BuunGroup-IaC/terraform-provider-cryptflare/tree/main/examples/complete) - [Report an Issue](https://github.com/BuunGroup-IaC/terraform-provider-cryptflare/issues) ## FAQ **Does the provider work with HCP Terraform?** Yes.
Set `CF_TOKEN` and `CF_ORG` as workspace environment variables in your HCP Terraform workspace settings. **Are secret values stored in Terraform state?** Yes. After `terraform apply`, secret values are in your state file. Use an encrypted state backend (S3 with KMS, HCP Terraform, etc.) to protect them. **What happens when I change a secret's value?** The provider calls the rotation API - the version increments and the old value is archived per your plan's version history limit. **Can the provider manage members or billing?** Not yet. The provider currently supports workspaces, environments, secrets, and pods. Member and billing resources are on the roadmap. --- # Dynamic Secrets - Internal Architecture Source: https://docs.cryptflare.com/internal/dynamic-secrets-architecture Schema, workflow lifecycle, cascade revoke, and provider abstraction for the dynamic secrets subsystem. Engineering team only. # Dynamic Secrets - Internal Architecture > This page is internal engineering documentation. If you're a customer looking for how the feature works, see [Dynamic Secrets](/security/dynamic-secrets) instead. This is the single-page walkthrough of how dynamic secrets work inside CryptFlare. Use it when you need to ship a new provider, debug a stuck lease, or review a PR that touches the subsystem. The canonical deep reference is `docs/guides/dynamic-secrets-architecture.md` in the repo root - this MDX is the living summary. ## Design in one sentence **One Cloudflare Workflow instance per lease** is the expiration timer, and **`provider.revoke()` is idempotent** so it doesn't matter which of the three revocation paths fires. ## Schema Two tables, both in the global D1 database (same as `sync_connections`, not regional). Lives in `apps/api/src/db/schema.ts`. | Table | Purpose | |---|---| | `dynamic_secret_configs` | Root credentials (AES-256-GCM encrypted per-config) + TTL/quota policy | | `dynamic_leases` | One row per issued credential. FK `onDelete: cascade` to configs.
| Key columns on leases that matter during debugging: - `parent_token_id` + `parent_token_type` - identity binding, used by `cascadeRevokeLeasesForToken` - `external_id` + `external_metadata` - provider's own handle (Azure `keyId`), opaque to us - `max_expires_at` - **anchored to issue time**, never advanced. Used by the orphan-sweep cron. - `status` - `pending | active | expired | revoked | irrevocable` - `workflow_instance_id` - the CF Workflow handle used by manual revoke to call `instance.terminate()` ## The three revocation paths `apps/api/src/workflows/dynamic-lease.ts` runs `step.sleep(ttlSeconds)` then `step.do('revoke', { retries: 6, backoff: exponential })`. Engine hibernates during sleep - $0 CPU until the timer fires. Six retries then `status='irrevocable'` (Vault's exact rule). `apps/api/src/lib/dynamic-cascade.ts:cascadeRevokeLeasesForToken` is wired into: - `routes/auth/handlers.ts:logout` - `routes/service-tokens/handlers.ts:handleRevokeServiceToken` - `routes/tokens/handlers.ts:handleRevokeToken` Fired from `c.executionCtx.waitUntil()` so user-facing responses aren't blocked by provider latency. `cascadeRevokeLeasesForConfig` runs **synchronously** inside `handleDeleteDynamicConfig`. Must run synchronously because the config row holds the root credentials needed to call `provider.revoke()`. Order: disable → drain → hard delete → cascade FK wipes leases. Two hourly cron safety nets: - **Expired sweep** (`apps/cron/src/jobs/dynamic-expired-sweep.ts`): catches leases past `expires_at + 5 min` that are still `active`/`pending`. Happens when the per-lease Workflow runtime misses its `step.sleep` (local `wrangler dev` restarts are the main trigger in practice; prod sees this only during Workflow-runtime outages). Flips status to `expired` with `revoked_by='orphan_sweep'` and emits a `dynamic_lease.expired` audit event so the cleanup shows up in the Activity feed. 
Pure DB bookkeeping - does NOT call the provider, because every credential we mint has `endDateTime = lease_ttl + 5 min` baked in at the provider side (Azure's own expiry kicks in by the time this sweep runs). - **Orphan sweep** (`apps/cron/src/jobs/dynamic-orphan-sweep.ts`): catches leases past the hard `max_expires_at + 10 min` cap still in an active state. This is a serious-problem signal - normally zero rows. Flips to `irrevocable` so an operator investigates. ## TTL resolution (Vault-style) ``` effective_ttl = min( caller_requested_ttl ?? default_ttl, max_ttl, system_max_ttl, parent_token_remaining_lifetime, ) ``` Implemented in `apps/api/src/lib/dynamic-ttl.ts`. Throws `DYNAMIC_TTL_INVALID` if result is below `DYNAMIC_MIN_TTL_SECONDS` (60s). Tested exhaustively in `dynamic-ttl.test.ts` - 11 cases including the "parent token expiring in 30s" rejection. ## Provider abstraction Mirrors `SyncProviderAdapter`. Three methods per provider: ```ts type DynamicProviderAdapter = { validate(config, root): Promise<{ valid: boolean; error?: string }>; issue(config, root, lease): Promise<{ credentials, externalId, externalMetadata? }>; revoke(config, root, externalId, externalMetadata): Promise<void>; }; ``` **Key rule for new providers**: `revoke()` must be idempotent. Return cleanly on 404 / already-deleted. All three revocation paths assume this. **Defense in depth**: if the upstream platform supports baked-in credential expiry (Azure's `endDateTime`), set it to `lease_ttl + 5 minutes`. This gives the provider itself as a fallback if CryptFlare misses the revoke for any reason. ### Azure provider has two modes (static_sp + dynamic_sp) The Azure adapter is split into a dispatcher + two mode-specific implementations so each mode stays under ~300 lines and can be tested in isolation: | File | Purpose | |---|---| | `azure-sp.ts` | Public adapter. Parses `providerConfig` via `parseAzureConfig` and dispatches to the right mode.
Missing `mode` defaults to `static_sp` for backwards compat with configs created before dynamic mode existed. | | `azure-sp-shared.ts` | Token cache (keyed by `tenantId:clientId:audience` so Graph + ARM tokens don't collide), discriminated-union types, parsers, `ENTRA_SEARCH_HINT` constant. | | `azure-sp-static.ts` | Static SP mode: `validateStatic` / `issueStatic` / `revokeStatic`. Rotates password credentials on a single root App Registration. Every lease shares the root `clientId`; only the secret is unique. | | `azure-sp-dynamic.ts` | Dynamic SP mode: creates a new Application + Service Principal + ARM role assignments per lease; revoke `DELETE`s the whole Application (cascades to SP + passwords + role assignments). Includes the compensating-transaction rollback: any failure after `POST /applications` triggers a best-effort `DELETE` so partial state never leaks. | The two-mode shape is NOT a generic pattern every provider needs to follow. Azure happens to support both Vault-style patterns and we wanted feature parity; AWS IAM and any future provider can stay as single-file adapters if one strategy is enough. Dynamic SP mode has ~30s propagation delay from the Azure side - fresh Service Principals aren't immediately visible to the management plane. Documented in the setup guide, not mitigated in code (polling would double issue latency). ### Permission re-check endpoint `POST /v1/organisations/:org/dynamic-secrets/configs/:id/validate` re-runs the provider adapter's `validate()` hook against the currently-stored root credentials and returns `{ valid, error, checkedAt, provider }`. Handler lives at `routes/dynamic-secrets/config-handlers.ts:handleValidateDynamicConfig`. Used by the dashboard "Check permissions" button on the edit page. Emits `dynamic_config.validated` to audit_logs with the pass/fail outcome; the console dashboard's Errors tab surfaces failed checks. ### Adding a new provider 1. `apps/api/src/lib/dynamic-providers/<provider>.ts` - implement the adapter 2.
`apps/api/src/lib/dynamic-providers/index.ts` - register in the registry 3. `packages/shared/src/constants/dynamic-secrets.ts` - add to `DYNAMIC_PROVIDERS` enum and `DYNAMIC_PROVIDER_META` 4. `apps/api/src/db/schema.ts` - extend the `provider` enum on `dynamic_secret_configs` 5. Run `pnpm db:generate` to create a migration updating the CHECK constraint 6. Write tests mirroring `dynamic-providers/mock.test.ts` 7. Ship a consumer setup guide at `/guides/dynamic-secrets/<provider>` The mock provider (`dynamic-providers/mock.ts`) is your test harness - it supports `failNextIssue` and `failNextRevoke` for lifecycle tests. ## Key entry points | File | Purpose | |---|---| | `apps/api/src/workflows/dynamic-lease.ts` | `DynamicLeaseWorkflow` - the per-lease timer | | `apps/api/src/lib/dynamic-cascade.ts` | `revokeOneLease` primitive + both cascade entry points | | `apps/api/src/lib/dynamic-ttl.ts` | Pure TTL resolver, exhaustively tested | | `apps/api/src/lib/dynamic-root-crypto.ts` | Per-config AES key derivation | | `apps/api/src/lib/dynamic-providers/index.ts` | Provider registry (static + dynamic modes for Azure, plus AWS IAM, plus mock) | | `apps/api/src/lib/dynamic-providers/azure-sp.ts` | Azure adapter dispatcher - branches on `providerConfig.mode` | | `apps/api/src/lib/dynamic-providers/azure-sp-shared.ts` | Azure token cache + type guards + constants shared by both modes | | `apps/api/src/lib/dynamic-providers/azure-sp-static.ts` | Static SP mode (rotate-password-on-root-app) | | `apps/api/src/lib/dynamic-providers/azure-sp-dynamic.ts` | Dynamic SP mode (new SPN + role assignments per lease, with rollback) | | `apps/api/src/routes/dynamic-secrets/` | Tenant-scoped API surface (including `/configs/:id/validate`) | | `apps/api/src/routes/console/dynamic-secrets/` | Platform-wide console API (summary, status breakdown, issue series, providers, irrevocable watchlist, error feed, activity feed) | | `apps/api/src/db/queries/dynamic-secrets.ts` | All tenant DB queries | |
`apps/cron/src/jobs/dynamic-expired-sweep.ts` | Hourly sweep for workflow-missed leases past `expires_at` | | `apps/cron/src/jobs/dynamic-orphan-sweep.ts` | Hourly sweep for hard-cap overruns past `max_expires_at` | ## Debugging playbook ### "Lease stuck in `irrevocable`" 1. Check audit log for `dynamic_lease.irrevocable` - shows the error message that exhausted retries 2. If provider is reachable: manually revoke at the provider using `external_id` 3. POST `/leases/:id/force-revoke` or click **Force revoke** in the dashboard to clear the DB row ### "Credentials leaked in audit log" Shouldn't be possible - audit records `configId`, `leaseId`, `expiresAt`, and `externalId` only. If you see credential values, that's a bug in a new provider's `metadata` field. Scrub with `LIKE '%AZURE_CLIENT_SECRET%'` and file a security issue. ### "Workflow never fires at TTL" 1. Check CF Workflows dashboard for the instance by name (lease id) 2. If instance is running and stuck in `step.sleep` past its scheduled time: CF runtime issue, file ticket 3. If instance errored: check `onError` path - lease should be `irrevocable` 4. If instance doesn't exist: issuance path broke between `createDynamicLease` and `workflow.create()`. The orphan sweep will catch this within the hour. ### "DELETE config times out" Large drain - `cascadeRevokeLeasesForConfig` revokes sequentially. Each provider call can take 1-3 seconds. For configs with >50 active leases consider adding a **disable + background drain** path as a follow-up. Hasn't come up in practice yet. 
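For reference while debugging `DYNAMIC_TTL_INVALID` rejections, the Vault-style clamp from the TTL resolution section looks roughly like this in executable form. A simplified sketch - the real resolver is `apps/api/src/lib/dynamic-ttl.ts`, and the option names here are illustrative:

```typescript
// Simplified sketch of the Vault-style TTL clamp described above.
// Mirrors the documented formula; not the production resolver.
const DYNAMIC_MIN_TTL_SECONDS = 60;

function resolveTtl(opts: {
  requested?: number;      // caller_requested_ttl (optional)
  defaultTtl: number;      // config default_ttl
  maxTtl: number;          // config max_ttl
  systemMaxTtl: number;    // platform-wide cap
  parentRemaining: number; // parent token's remaining lifetime in seconds
}): number {
  const effective = Math.min(
    opts.requested ?? opts.defaultTtl,
    opts.maxTtl,
    opts.systemMaxTtl,
    opts.parentRemaining,
  );
  if (effective < DYNAMIC_MIN_TTL_SECONDS) {
    throw new Error("DYNAMIC_TTL_INVALID"); // e.g. parent token expiring in 30s
  }
  return effective;
}
```

The `parentRemaining` term is why a lease can never outlive the token that requested it: a parent with 30 seconds left drives the min below the floor and the issue call is rejected outright.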
## Test coverage | Test file | What | |---|---| | `dynamic-ttl.test.ts` | 11 TTL clamping cases | | `dynamic-root-crypto.test.ts` | Encryption roundtrip, IV uniqueness, wrong-salt rejection | | `dynamic-providers/mock.test.ts` | Registry + lifecycle with failure injection | | `dynamic-cascade.test.ts` | 10 cases for cascade by token and cascade by config | Real Azure integration tests live outside the repo - run manually against a test tenant before each release. ## Pricing and quotas on Cloudflare One workflow instance per lease. CF Workflows bills per **step invocation** at $0.30 per million, not per workflow instance. Each lease has ~4 steps. Sleeping instances cost $0 CPU and don't count toward the concurrency cap. - Free Workers plan: 100k instances/day shared with all Workers traffic - Paid Workers plan: 10M+ instances/month - Max sleep duration: 365 days - Max concurrent *running* instances: 100 free / 10,000 paid (sleeping doesn't count) Marginal cost per lease is effectively zero until you're running millions per day. ## See also - [`docs/guides/dynamic-secrets-architecture.md`](https://github.com/buun-group/cryptflare-platform/blob/main/docs/guides/dynamic-secrets-architecture.md) - the full repo-level architecture doc with every detail - [Consumer docs: Dynamic Secrets overview](/security/dynamic-secrets) - [Consumer docs: Azure setup](/guides/dynamic-secrets/azure) - [API reference: Dynamic Secrets](/api-reference/dynamic-secrets) --- # Internal Engineering Docs Source: https://docs.cryptflare.com/internal/index Private documentation for the CryptFlare platform team. Requires a valid console session. # Internal Engineering Docs This section is only visible to authenticated Buun Group console users. If you can see this page, you have a valid `cf_console_session` cookie. Customer accounts never see these pages in the sidebar and cannot navigate to them directly - the route redirects to the public docs root. 
## What lives here | Area | Purpose | |---|---| | **Feature architecture** | How a feature is wired end-to-end - schema, queues, workflows, cascade paths, test strategy | | **Incident playbooks** | Runbooks for on-call engineers - how to diagnose, mitigate, and post-mortem specific failure classes | | **Platform internals** | Things that are not features but matter for engineers: migration strategies, backfill jobs, long-running cron jobs | | **Security reviews** | Internal security analysis of high-risk features and cross-app interactions | Consumer-facing guides belong in `/guides/...`, not here. If you're unsure where a doc should live: **if it mentions internal file paths, database schemas, or private environment variables, it's internal.** ## Access control This section uses the `` route wrapper, which checks for a valid CryptFlare **console** session via `GET /v1/console/auth/me`. The check runs on: 1. **Sidebar rendering** - internal sections are filtered out of the left-hand nav for non-console users 2. **Route resolution** - typing an internal URL directly (e.g. `/internal/dynamic-secrets-architecture`) redirects to the docs root if you're not logged in to the console Both layers enforce the same gate. A customer who knows the URL pattern still cannot load the content. ## Marking new internal docs The pattern is documented in `apps/docs/src/data/nav.ts`: ```ts { titleKey: 'sidebar.internal', internal: true, // hides the whole section items: [ { labelKey: 'sidebar.internalOverview', href: '/internal', icon: 'lock', internal: true, // hides the item }, ], } ``` And in `apps/docs/src/app.tsx`: ```tsx }> } /> } /> ``` Both the nav flag and the route guard are required. Forgetting the guard means a customer can still open the URL directly; forgetting the nav flag means the link shows up in the sidebar even for customers. ## Elevated roles `` accepts an optional `minRole` prop. Default is `viewer` (any console user). 
Set to `engineer` or `administrator` to restrict further:

```tsx
}> } />
```

Use this sparingly - most engineering docs are safe for any console user to read.

## Available docs

| Doc | Audience |
|---|---|
| [Dynamic secrets architecture](/internal/dynamic-secrets-architecture) | Engineers working on dynamic secrets, rotation, or cascade revoke |

---

# Environments

Source: https://docs.cryptflare.com/secrets/environments

Separate your secrets across development, staging, and production

# Environments

Environments let you maintain different sets of secrets for different stages of your deployment pipeline. A workspace might have `development`, `staging`, and `production` environments - each with their own values for the same secret keys.

## How environments fit together

CryptFlare organises resources in a simple hierarchy: **Organisation** > **Workspace** > **Environment** > **Secrets**

- An **organisation** is your team or company
- A **workspace** is a project (e.g., "Backend API", "Mobile App")
- An **environment** is a deployment stage within that project
- **Secrets** live inside an environment

For example:

| Organisation | Workspace | Environment | Secret |
|-------------|-----------|-------------|--------|
| Acme Corp | Backend API | development | `DATABASE_URL` = `postgres://localhost/dev` |
| Acme Corp | Backend API | staging | `DATABASE_URL` = `postgres://staging.db/app` |
| Acme Corp | Backend API | production | `DATABASE_URL` = `postgres://prod.db/app` |

The same key (`DATABASE_URL`) exists in all three environments with different values. Your application loads the right value based on which environment it runs in.

## Default environment

When you create a workspace through the onboarding flow, a **production** environment is created automatically. You can then add additional environments as needed, up to your plan's limit.

## Environment limits

On the Free plan, you get two environments per workspace - for example, `production` and `development`.
Upgrade to Pro or Team for more. ## Creating environments ### Via the CLI ```bash cf environment create preview --workspace my-app ``` ### Via the API ```bash curl -X POST https://api.cryptflare.com/v1/organisations/org_xyz/workspaces/my-app/environments \ -H "Authorization: Bearer YOUR_TOKEN" \ -H "Content-Type: application/json" \ -d '{"name": "Preview", "slug": "preview"}' ``` ## Environment isolation Secrets are fully isolated between environments. Creating a secret in `production` does not affect `development` or any other environment. There is no automatic syncing or inheritance between environments. This means: - A secret that exists in `production` may not exist in `development` - The same key can hold completely different values in each environment - Deleting a secret in one environment does not affect other environments ## Using environments in your application ### Environment injection via CLI The simplest approach - inject all secrets from an environment as environment variables: ```bash # Development cf run --workspace my-app --env development -- node server.js # Production cf run --workspace my-app --env production -- node server.js ``` ### SDK ```typescript import { CryptFlare } from '@cryptflare/sdk'; const cf = new CryptFlare({ token: process.env.CF_TOKEN }); const dbUrl = await cf.secrets.get('DATABASE_URL', { workspace: 'my-app', environment: 'production', }); ``` ### GitHub Actions ```yaml - name: Load production secrets uses: cryptflare/secrets-action@v1 with: workspace: my-app environment: production token: ${{ secrets.CF_TOKEN }} ``` ## Best practices - **Keep environment names consistent** across workspaces so your CI/CD pipelines can use the same configuration - **Use `development` for local secrets** that are safe to share across the team (not personal credentials) - **Restrict production access** - use roles to limit who can read production secret values - **Create preview environments** for feature branch deployments that need their own 
isolated secrets

---

# Secret rotation

Source: https://docs.cryptflare.com/secrets/rotation

Rotate secrets safely and plan for automated rotation with service syncing

# Secret rotation

Rotation is the process of replacing a secret's value with a new one. The old value is archived as a previous version, and all future reads return the new value. Rotation is essential for maintaining security - it limits the window of exposure if a credential is ever compromised.

## How rotation works

When you rotate a secret, CryptFlare:

1. Encrypts the new value with AES-256-GCM
2. Increments the version number
3. Archives the previous value (retained per your plan's [version history](/secrets/versioning))
4. Logs the rotation in the [audit trail](/security/audit-logs)

The operation is atomic - either the rotation completes fully or nothing changes.

## Rotating a secret

### Via the CLI

```bash
cf secret rotate DATABASE_URL \
  --value "postgres://user:newpass@db.example.com/mydb" \
  --workspace my-app \
  --env production
```

### Via the API

### Via the SDK

```typescript
import { CryptFlare } from '@cryptflare/sdk';

const cf = new CryptFlare({ token: process.env.CF_TOKEN });

const result = await cf.secrets.rotate('DATABASE_URL', {
  workspace: 'my-app',
  environment: 'production',
  value: 'postgres://user:newpass@db.example.com/mydb',
});

console.log(`Rotated to version ${result.version}`);
```

## When to rotate

### Immediately

- A secret value has been exposed (committed to git, leaked in logs, shared insecurely)
- A team member with access to production secrets has left the organisation
- You detect suspicious access in the [audit logs](/security/audit-logs)

### On a schedule

Many compliance frameworks and security best practices recommend regular rotation:

| Secret type | Recommended frequency |
|-------------|----------------------|
| Database credentials | Every 90 days |
| API keys for third-party services | Every 90 days |
| Encryption keys | Every 12 months |
| CI/CD tokens | Every 30 days |

## Graceful rotation

When you rotate a secret that's actively used by running services, there's a window where the old value is still in memory. To avoid downtime:

1. **Rotate the secret** in CryptFlare
2. **Deploy your services** so they pick up the new value
3. **Verify** the new value is working
4. **Revoke the old credential** at the source (e.g., change the database password, regenerate the API key)

For database credentials, consider using a connection pool that can reconnect with updated credentials without restarting the application.

## Version history

Every rotation creates a new version. Previous versions are retained based on your plan:

## Automated rotation

Rotation policies let you schedule automatic rotation for any secret. CryptFlare generates a new random value, encrypts it, and creates a new version - all without manual intervention.

### Setting up a rotation policy

From the secret detail page, click **Set Up Rotation** on the overview tab. Configure:

- **Interval** - how often to rotate (7, 14, 30, 60, 90, 180, or 365 days)
- **Character set** - alphanumeric, hex, base64, or ASCII (with special characters)
- **Value length** - 8 to 128 characters
- **Notifications** - receive an email and in-app alert each time rotation occurs

### How scheduling works

1. You attach a rotation policy to a secret. The first rotation is scheduled for now + interval days.
2. Every 6 hours, a scheduled job checks for policies where `next_rotation_at` has passed.
3. Due policies are enqueued to a dedicated rotation queue for parallel processing.
4. A new random value is generated using your configured charset and length.
5. The value is encrypted and the secret is rotated, creating a new version in history.
6. The next rotation date is set to now + interval days. Audit log and notifications are sent.

### Error handling

If a rotation fails (secret is locked, encryption error, database issue), the policy's retry count increments and the error is recorded. The scheduler retries on the next 6-hour cycle.
After **5 consecutive failures**, the policy stops retrying until you manually re-enable it. ### Via the API See the full [Rotation Policies API reference](/api-reference/rotation-policies) for programmatic management. ### Notifications and event subscriptions When a secret is auto-rotated, CryptFlare: - Creates an audit log entry with action `secret.auto_rotated` - Sends an email notification to the policy creator (if enabled) - Triggers any matching [event subscriptions](/security/event-subscriptions) so downstream systems are notified ## Best practices - **Rotate on compromise** - do not wait for a scheduled rotation if a secret may have been exposed - **Use separate secrets per environment** - rotating a production credential should not require changes to development - **Keep version history** - upgrade your plan if you need to retain more previous versions for rollback - **Audit after rotation** - check the [audit logs](/security/audit-logs) to confirm the rotation was performed by an authorized user - **Test rotation in staging first** - verify your services handle credential changes gracefully before rotating in production --- # Secret versioning Source: https://docs.cryptflare.com/secrets/versioning How CryptFlare tracks every change to your secrets # Secret versioning Every time a secret value changes, CryptFlare creates a new version. The previous value is archived, giving you a complete history of changes and the ability to understand when and why a secret was updated. ## How it works When you first create a secret, it starts at **version 1**. Each time you rotate (update) the value, the version number increments: | Action | Version | What happens | |--------|---------|-------------| | Create `DATABASE_URL` | v1 | Value encrypted and stored | | Rotate with new value | v2 | New value stored, v1 archived | | Rotate again | v3 | New value stored, v2 archived | The current version is always the one returned when you reveal a secret. 
Previous versions are retained based on your plan's version history limit.

## Rotating a secret

### Via the CLI

```bash
cf secret rotate DATABASE_URL \
  --value "postgres://user:newpass@db.example.com/mydb" \
  --workspace my-app \
  --env production
```

### Via the API

```bash
curl -X POST https://api.cryptflare.com/v1/organisations/org_xyz/workspaces/my-app/environments/production/secrets/DATABASE_URL/rotate \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"value": "postgres://user:newpass@db.example.com/mydb"}'
```

The response confirms the new version number:

```json
{ "key": "DATABASE_URL", "version": 3 }
```

## Version retention

How many previous versions are kept depends on your plan:

When a version is outside the retention window, it is permanently deleted and cannot be recovered.

## Audit trail

Every version change is recorded in the [audit log](/security/audit-logs). The log captures who rotated the secret, when, and from which IP address. Combined with versioning, this gives you a complete picture of a secret's lifecycle.

## When to rotate secrets

- **Credential compromise** - if you suspect a secret has been exposed, rotate immediately
- **Team member offboarding** - when someone with access to production leaves the team
- **Regular schedule** - many compliance frameworks recommend rotating secrets every 30-90 days
- **Deployment** - some teams rotate database credentials as part of their deployment process

## What versioning does not do

- **No automatic rollback** - CryptFlare does not automatically revert to a previous version if something breaks. Version history is for reference and manual recovery.
- **No scheduled rotation built into versioning** - creating versions never triggers a rotation by itself; automatic rotation is configured separately via [rotation policies](/secrets/rotation).
- **No cross-environment sync** - rotating a secret in `production` does not affect the same key in `staging` or `development`.
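The lifecycle above - start at v1, increment on every rotation, prune versions that fall outside the retention window - can be modelled in a few lines. A sketch only; the retention limit of 2 used below is illustrative, not a real plan quota:

```typescript
// Minimal model of CryptFlare's versioning semantics: the current
// version is always the latest, and archived versions beyond the
// retention window are permanently dropped.
type SecretHistory = { version: number; archived: number[] };

// New secrets always start at version 1 with no history.
function createSecret(): SecretHistory {
  return { version: 1, archived: [] };
}

// Rotating archives the current version (newest first) and keeps only
// `retain` previous versions, mirroring the plan retention window.
function rotate(s: SecretHistory, retain: number): SecretHistory {
  const archived = [s.version, ...s.archived].slice(0, retain);
  return { version: s.version + 1, archived };
}
```

Three rotations with `retain = 2` leave the secret at v4 with only v3 and v2 recoverable - v1 is gone for good, which is exactly why the docs recommend upgrading if you need deeper rollback history.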
--- # Access control Source: https://docs.cryptflare.com/security/access-control Role-based permissions and how CryptFlare controls who can do what # Access control CryptFlare uses role-based access control (RBAC) to manage who can access secrets, invite members, manage billing, and perform other actions within your organisation. ## How it works Every member of an organisation is assigned a single role. Each role grants a specific set of permissions. When a member makes a request, CryptFlare checks their role's permissions before allowing the action. ## Roles | Role | Best for | Summary | |------|----------|---------| | **Owner** | Founders, CTOs | Full access to everything. One per organisation. Cannot be removed. | | **Biller** | Finance teams | Manages subscription and payment. No access to secrets. | | **Manager** | Team leads | Manages members and secrets. Cannot promote above their own level. | | **Developer** | Engineers | Reads and writes secrets. Can create API tokens. | | **Employee** | Non-technical staff | Read-only access to secrets. Cannot reveal values in bulk. | | **Viewer** | Auditors, contractors | Can list secret names but cannot reveal values. 
|

## Permission breakdown

### Secrets

| Action | Owner | Manager | Developer | Employee | Viewer |
|--------|-------|---------|-----------|----------|--------|
| List secret names | Yes | Yes | Yes | Yes | Yes |
| Reveal secret values | Yes | Yes | Yes | Yes | No |
| Create/update secrets | Yes | Yes | Yes | No | No |
| Rotate secrets | Yes | Yes | Yes | No | No |
| Delete secrets | Yes | Yes | No | No | No |
| View version history | Yes | Yes | Yes | No | Yes |
| Restore previous versions | Yes | Yes | No | No | No |
| Lock/unlock secrets | Yes | Yes | No | No | No |

### Members

| Action | Owner | Manager | Developer | Employee | Viewer |
|--------|-------|---------|-----------|----------|--------|
| View member list | Yes | Yes | Yes | No | Yes |
| Invite members | Yes | Yes | No | No | No |
| Remove members | Yes | Yes | No | No | No |
| Change roles | Yes | Yes | No | No | No |

### Organisation

| Action | Owner | Biller | Manager |
|--------|-------|--------|---------|
| Update org name | Yes | No | No |
| Delete organisation | Yes | No | No |
| Manage billing | Yes | Yes | No |
| View subscription | Yes | Yes | View only |

## Role ceiling

Managers can only assign roles strictly below their own level. A manager can invite someone as a `developer`, `employee`, or `viewer` - but cannot create another `manager`, `biller`, or `owner`. This prevents privilege escalation.

## API tokens

API tokens inherit the scopes you assign at creation time. You can restrict a token to specific operations (e.g., read-only access to secrets) regardless of your own role. Tokens are scoped to a single workspace and expire on the date you set.

Reaching AI agents through [`mcp.cryptflare.com`](/security/mcp-access) requires the separate `mcp:use` permission group, which is opt-in per token.
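The role ceiling reads naturally as a rank comparison. A minimal sketch, assuming an illustrative rank order - the numbers and the `canAssign` helper are not CryptFlare's actual implementation:

```typescript
// Illustrative rank numbers; higher rank = more privilege. Biller sits
// at manager level because a manager may not create billers either.
const roleRank: Record<string, number> = {
  viewer: 1,
  employee: 2,
  developer: 3,
  manager: 4,
  biller: 4,
  owner: 5,
};

// The role ceiling: a member may only assign roles ranked strictly
// below their own, which blocks a manager from minting another manager,
// a biller, or an owner.
function canAssign(actorRole: string, targetRole: string): boolean {
  return roleRank[targetRole] < roleRank[actorRole];
}
```

Under this model a manager can assign `developer`, `employee`, or `viewer`, but nothing at or above their own rank - matching the escalation rule described above.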
## Principle of least privilege We recommend: - **Give developers `developer` role** - they can read and write secrets without being able to manage members or billing - **Use `viewer` for auditors** - they can verify what secrets exist without seeing values - **Create scoped API tokens** - limit each token to the minimum permissions needed for its purpose - **Rotate tokens regularly** - set expiration dates and revoke tokens that are no longer needed ## Search respects access control The global search bar (`Ctrl+K` / `Cmd+K`) is permission-aware. Every search result is filtered through the same access control layers described above, so members only see resources they're authorised to access. ### What the search checks | Layer | How it applies to search | |-------|-------------------------| | **RBAC role** | If your role doesn't include `workspaces:read`, no workspaces appear. If it doesn't include `secrets:list`, no secrets appear. Each resource type requires its own permission. | | **Global deny policies** | Any workspace, environment, or pod that matches an active global deny policy is hidden from your results, even if your role would normally allow access. | | **Team deny policies** | If your team has a deny policy on a specific resource, that resource is hidden from your search results. | | **JIT access grants** | Temporary grants do not add resources to search results. If you have a time-limited grant to an otherwise restricted environment, you can access it directly via the link in your notification, but it won't appear in search. This is by design - search shows your baseline access, not temporary overrides. 
| ### What you can search | Resource | Example matches | Permission needed | |----------|-----------------|-------------------| | Workspaces | Name or slug matches your query | `workspaces:read` | | Environments | Environment name matches, shown with parent workspace | `environments:read` | | Secrets | Secret key name matches, shown with workspace and environment path | `secrets:list` | | Members | Name or email matches | `members:read` | Secret **values** are never included in search results - only key names. To reveal a secret value, navigate to the secret page where a separate `secrets:read` permission check and audit log entry occur. ### Search is debounced and rate-limited Search queries are debounced by 300ms on the client and only fire when you type 2 or more characters. Results are limited to 5 workspaces, 5 environments, 10 secrets, and 5 members per query. This prevents excessive API calls and keeps search fast. --- # Audit logs Source: https://docs.cryptflare.com/security/audit-logs Track every action taken in your organisation # Audit logs CryptFlare records every significant action in your organisation. Audit logs help you answer questions like "who accessed this secret?" and "when was this member invited?" - essential for security reviews, incident response, and compliance. ## What gets logged Every action that creates, reads, modifies, or deletes a resource is recorded. Each log entry captures who did what, when, and from where. 
| Field | Description | |-------|-------------| | Who | The user who performed the action and their role at the time | | What | The action performed and the resource affected | | When | Timestamp of the action (UTC) | | Where | The IP address of the request | | Context | Additional metadata relevant to the action | ## Logged actions ### Secrets | Action | When this is logged | |--------|-------------------| | `secret.created` | A new secret is stored in an environment | | `secret.revealed` | Someone decrypts and views a secret value | | `secret.rotated` | A secret value is updated to a new version | | `secret.rolled_back` | A secret is restored to a previous version | | `secret.moved` | A secret is moved to a different pod | | `secret.deleted` | A secret is permanently removed | | `secret.settings_updated` | Secret metadata (e.g. rotation policy) is changed | | `secret.locked` | A secret is locked to prevent changes | | `secret.unlocked` | A secret is unlocked | | `secrets.exported` | Secrets are exported from an environment | | `secrets.imported` | Secrets are imported into an environment | ### Members | Action | When this is logged | |--------|-------------------| | `member.invited` | A new user is invited to the organisation | | `member.removed` | A user is removed from the organisation | | `member.role_changed` | A member's role is updated | | `invitation.revoked` | A pending invitation is cancelled | ### Organisation | Action | When this is logged | |--------|-------------------| | `organisation.created` | A new organisation is created | | `organisation.updated` | Organisation settings are changed | | `organisation.deleted` | An organisation is permanently removed | | `organisation.transfer_initiated` | Ownership transfer is started | | `organisation.transfer_accepted` | Ownership transfer is accepted by the recipient | | `organisation.transfer_cancelled` | Ownership transfer is cancelled by the initiator | | `role_permission.updated` | A role's permissions 
are customised by the owner | ### Workspaces and environments | Action | When this is logged | |--------|-------------------| | `workspace.created` | A new workspace is set up | | `environment.created` | A new environment is added to a workspace | | `environment.deleted` | An environment is removed | | `environment.delete_requested` | An environment deletion is requested (confirmation pending) | ### Pods | Action | When this is logged | |--------|-------------------| | `pod.created` | A new pod (folder) is created in an environment | | `pod.updated` | A pod is renamed or moved | | `pod.deleted` | A pod is removed | ### Tokens and auth | Action | When this is logged | |--------|-------------------| | `token.created` | A new API token is generated | | `onboarding.complete` | A user finishes the onboarding flow | ### Teams and policies | Action | When this is logged | |--------|-------------------| | `team.created` | A new team is created | | `team.deleted` | A team is removed | | `team.policy_created` | A team-scoped policy is added | | `policy.created` | A global policy is created | | `policy.updated` | A global policy is modified | | `policies.imported` | Policies are imported from a file | | `access.granted` | A JIT access request is approved | ### Encryption | Action | When this is logged | |--------|-------------------| | `encryption.byok_enabled` | Bring-your-own-key encryption is activated | | `encryption.byok_disabled` | BYOK encryption is deactivated | ### Support | Action | When this is logged | |--------|-------------------| | `support.ticket_created` | A support ticket is opened | ## How audit events are recorded Every authenticated request flows through a short pipeline before it lands in the audit table. The hand-off to a queue means the request path never blocks on audit writes, while the chain hash guarantees tamper detection on the way in. 
*Sequence: user sends an authenticated request → API executes the action (read/write/rotate) and emits an audit event → middleware enqueues it with actor, action, resource, and metadata while the response returns immediately → the queue reads the organisation's last entry, computes SHA-256 over the canonical fields plus `prev_hash`, and inserts the row with both hashes → the chain is verified on the next read, and a critical email plus persistent banner go out if a break is detected.*

The queue absorbs bursts (e.g. mass secret imports) and guarantees in-order insertion per organisation, so the chain hash always references the genuine predecessor.

## Viewing audit logs

Audit logs are available in the vault dashboard under your organisation settings. You can filter by:

- **Action type** - see only secret access events, or only member changes
- **User** - see all actions performed by a specific team member
- **Resource** - see the history of a specific secret, workspace, or token
- **Date range** - narrow down to a specific time window
- **Source** - isolate activity from a particular channel, including `source: mcp` to review tool calls made via [`mcp.cryptflare.com`](/security/mcp-access)

## API access

You can also query audit logs programmatically. See the [API reference](/api-reference/organisations) for the full endpoint documentation.

```bash
curl "https://api.cryptflare.com/v1/organisations/org_xyz/audit?action=secret.revealed&limit=50" \
  -H "Authorization: Bearer YOUR_TOKEN"
```

## Who can access audit logs

By default, audit log access is determined by the `audit:read`, `audit:export`, and `audit:verify` permissions assigned to each role.

> Organisation owners can customise which roles have audit log access, including who can run integrity verification. See [Roles and permissions](/security/roles) for details on how to override defaults.

## Retention and archival

Logs older than the plan retention period are **archived, not deleted**. A daily job exports expiring rows to immutable object storage as JSONL (one file per region per day) before removing them from the live database. Archived logs remain available for compliance review via Support.

The hash chain integrity guarantee survives archival: verification walks forward from the first surviving live entry, and archived entries retain their original `entry_hash` in the exported JSONL files for independent offline verification.

## Immutability

Audit logs cannot be edited or deleted by any user, including organisation owners. This ensures the integrity of the audit trail for compliance purposes.

## Integrity verification

Every audit log entry is cryptographically chained to the previous entry for the same organisation using SHA-256 hash chaining. This provides tamper detection - if any entry is modified, inserted, or deleted, the chain breaks and the tampering is detectable.

### How it works

Each audit log entry contains two integrity fields:

| Field | Description |
|-------|-------------|
| `prev_hash` | The hash of the immediately preceding entry for this organisation |
| `entry_hash` | SHA-256 hash of this entry's canonical fields combined with `prev_hash` |

The hash is computed over a canonical representation of the entry: `id`, `organisation_id`, `actor_id`, `action`, `resource_type`, `resource_id`, `metadata`, `created_at`, and `prev_hash`. The first entry for an organisation has a null `prev_hash`, starting the chain.

*Diagram: each entry's `entry_hash` feeds the next entry's `prev_hash`; a tamper attempt (modify, delete, insert, reorder) breaks both the entry's own hash and the downstream links.*

Each `entry_hash` is an input to the next row's `prev_hash`. Any mutation - field edit, row deletion, out-of-order insert - invalidates every hash downstream, so tampering cannot stay hidden once the chain is verified.
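The chaining can be sketched with `node:crypto`. The canonical serialisation used here (a JSON array in a fixed field order) is an assumption - the real encoding is internal - but the field list and the null-`prev_hash` chain start match the documentation:

```typescript
import { createHash } from 'node:crypto';

type AuditEntry = {
  id: string; organisation_id: string; actor_id: string;
  action: string; resource_type: string; resource_id: string;
  metadata: string; created_at: string;
  prev_hash: string | null; entry_hash?: string;
};

// Hash the canonical fields together with prev_hash, so every entry
// commits to its entire history. The JSON-array encoding is illustrative.
function hashEntry(e: AuditEntry): string {
  const canonical = JSON.stringify([
    e.id, e.organisation_id, e.actor_id, e.action,
    e.resource_type, e.resource_id, e.metadata, e.created_at,
    e.prev_hash, // null for the first entry in an organisation's chain
  ]);
  return createHash('sha256').update(canonical).digest('hex');
}

// Append a new entry, linking it to the current tail of the chain.
function appendEntry(
  chain: AuditEntry[],
  e: Omit<AuditEntry, 'prev_hash' | 'entry_hash'>,
): AuditEntry[] {
  const prev_hash = chain.length ? chain[chain.length - 1].entry_hash! : null;
  const entry: AuditEntry = { ...e, prev_hash };
  entry.entry_hash = hashEntry(entry);
  return [...chain, entry];
}
```

Because `prev_hash` is an input to `entry_hash`, editing any field of an earlier entry changes its hash and breaks the link stored in its successor.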
### What this protects against - **Modification** - changing any field in an entry breaks its `entry_hash` - **Deletion** - removing an entry breaks the `prev_hash` reference in the next entry - **Insertion** - inserting an entry between two existing entries breaks the chain - **Reordering** - swapping entries breaks both their hashes ### Verification Users with the `audit:verify` permission can verify the integrity of the audit chain from the Audit Log page in the vault dashboard. The verification process walks all entries in chronological order, recomputes each hash, and reports any break in the chain. By default, the `audit:verify` permission is granted to **Owner**, **Biller**, and **Manager** roles. Organisation owners can customise this via [role permissions](/security/roles). ### Continuous read-path verification In addition to the on-demand verification endpoint, **every call to list audit logs** automatically re-hashes the returned page and compares against the stored `entry_hash`. The list response includes an `integrity` field: ```json { "data": [ /* audit rows */ ], "total": 1284, "integrity": { "verified": true, "firstBrokenId": null, "checkedRows": 50 } } ``` - `verified: true` - every row on the page self-hashes correctly - `verified: false` - at least one row has been tampered with; `firstBrokenId` points at the earliest break - `verified: null` - the page contains only pre-migration rows without a stored hash and cannot be checked Read-path verification catches in-place tampering of visible rows with zero extra database load. Cross-page boundary checks still require the full-chain verification endpoint above. 
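The read-path check can also be reproduced client-side for defence in depth: re-hash each returned row and compare it to its stored `entry_hash`. A sketch, assuming the rows carry the documented canonical fields and that the serialisation below matches the server's (it is illustrative):

```typescript
import { createHash } from 'node:crypto';

type AuditRow = {
  id: string; organisation_id: string; actor_id: string;
  action: string; resource_type: string; resource_id: string;
  metadata: string; created_at: string;
  prev_hash: string | null; entry_hash: string | null;
};

// Mirrors the read-path check: every row must self-hash to its stored
// entry_hash. A page of only pre-migration rows (no stored hash) is
// unverifiable, reported as verified: null; how mixed pages treat
// individual hashless rows is an assumption here (they are skipped).
function verifyPage(rows: AuditRow[]) {
  if (rows.length > 0 && rows.every((r) => r.entry_hash === null)) {
    return { verified: null as boolean | null, firstBrokenId: null as string | null, checkedRows: 0 };
  }
  for (const row of rows) {
    if (row.entry_hash === null) continue; // pre-migration row, skipped
    const canonical = JSON.stringify([
      row.id, row.organisation_id, row.actor_id, row.action,
      row.resource_type, row.resource_id, row.metadata,
      row.created_at, row.prev_hash,
    ]);
    const recomputed = createHash('sha256').update(canonical).digest('hex');
    if (recomputed !== row.entry_hash) {
      return { verified: false as boolean | null, firstBrokenId: row.id, checkedRows: rows.length };
    }
  }
  return { verified: true as boolean | null, firstBrokenId: null as string | null, checkedRows: rows.length };
}
```

As the docs note, this only catches in-place tampering of the rows you fetched; a deleted or reordered row at a page boundary still needs the full-chain endpoint.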
### Integrity failure alerts When read-path verification detects a break (`verified: false`), CryptFlare automatically notifies everyone with `audit:read` permission in the organisation: - **Critical-priority email** with the subject "URGENT: Audit log integrity check failed", including `firstBrokenId` and recommended next steps - **Persistent in-app notification** on the dashboard that does not auto-dismiss Notifications are deduplicated per organisation within a 1-hour window. A new audit entry is also written recording the detection (`audit.integrity_failure_detected`) so the break is itself part of the chain going forward. ### API endpoint ```bash GET /v1/organisations/:org/audit/verify ``` Requires the `audit:verify` permission. Returns: ```json { "valid": true, "checked": 1284 } ``` If the chain is broken: ```json { "valid": false, "checked": 847, "brokenAt": "entry-id-here", "expected": "a1b2c3...", "actual": "d4e5f6..." } ``` ### Downloadable integrity report After running a verification, you can download a compliance-ready integrity report in **PDF** or **JSON** format directly from the Audit Log page. 
The report includes:

- Organisation name and ID
- Who requested the verification and when
- Verification result (passed/failed) with entry count
- Time range covered (first and last audit entry timestamps)
- Latest hash chain value
- A SHA-256 report hash for verifying the report itself has not been altered

The report endpoint is also available via the API:

```bash
GET /v1/organisations/:org/audit/verify/report
```

Returns a structured JSON report:

```json
{
  "report": "audit_integrity",
  "version": "1.0",
  "organisation": { "id": "org_xyz", "name": "Acme Corp", "slug": "acme-corp" },
  "generatedAt": "2026-04-12T09:15:00Z",
  "generatedBy": { "id": "usr_abc", "email": "jane@acme.com", "name": "Jane Smith" },
  "result": { "valid": true, "checked": 1284, "chainStatus": "intact" },
  "coverage": {
    "firstEntry": "2025-11-03T08:22:15Z",
    "lastEntry": "2026-04-12T09:14:58Z",
    "latestHash": "a1b2c3d4..."
  },
  "reportHash": "sha256:d4e5f6..."
}
```

## Use cases

### Security incident response

If you suspect unauthorized access, filter audit logs by `secret.revealed` to see exactly which secrets were accessed, by whom, and from which IP address.

### Compliance audits

Export audit logs to demonstrate that access to sensitive data is controlled, monitored, and attributable to individual users.

### Onboarding and offboarding

When a team member joins or leaves, review their audit trail to understand what resources they accessed and ensure proper handover.

---

# Bring Your Own Key (BYOK)

Source: https://docs.cryptflare.com/security/byok

Use your own encryption key to control how secrets are encrypted at rest

# Bring Your Own Key (BYOK)

By default, CryptFlare encrypts all secrets using a platform-managed encryption key. With BYOK, organisation owners can provide their own 256-bit AES encryption key. All new secrets in the organisation will be encrypted using the customer key instead of the platform key.
BYOK gives you direct control over the root of trust for your secrets without changing how you use CryptFlare day-to-day.

## How it works

1. The organisation owner generates a 256-bit AES key (or uses CryptFlare's built-in key generator)
2. The key is wrapped (encrypted) with the platform key before being stored - it is never saved in plaintext
3. All new secrets created after enabling BYOK are encrypted using the customer key
4. Existing secrets remain encrypted with the platform key until they are individually rotated

## Key hierarchy

CryptFlare derives a unique encryption key per environment from the root key (either platform or customer). This means no two environments share the same derived key, even when using the same root.

The customer controls only the KEK tier - every tier below it is derived fresh per environment, so a single leaked DEK exposes one environment and nothing more.

```
Customer Key (provided by org owner)
 └─ HKDF (salt = environment ID)
     └─ AES-256-GCM per secret (unique IV each time)
```

When BYOK is disabled, the same hierarchy applies using the platform key instead.

## Security properties

| Property | Detail |
|----------|--------|
| Algorithm | AES-256-GCM (authenticated encryption) |
| Key size | 256 bits (32 bytes) |
| Key wrapping | Customer key is encrypted with the platform key before storage |
| Plaintext storage | Never - the raw key is only held in memory during encryption/decryption |
| Key fingerprint | SHA-256 hash (first 16 hex chars) for identification without exposing the key |
| Key derivation | HKDF-SHA256 with environment ID as salt |

## Enabling BYOK

All encryption management endpoints require the **owner** role. BYOK operations are available under `/v1/organisations/:org/encryption`.

### Generate a key

Use this endpoint to generate a cryptographically random 256-bit key.
You can also generate your own key externally - it must be exactly 32 bytes, base64-encoded.

### Enable BYOK

Submit your key to enable BYOK. The key is wrapped with the platform key and stored securely. BYOK must not already be enabled - disable it first if you need to change keys.

### Check encryption status

Returns whether BYOK is currently enabled and the key fingerprint.

### Disable BYOK

Reverts the organisation to platform-managed encryption. New secrets will use the platform key. Existing secrets encrypted with the BYOK key remain readable (decryption automatically selects the correct key) but should be rotated to re-encrypt with the platform key.

## Plan availability

BYOK is available on the **Team plan** only. Free and Pro plans use platform-managed encryption exclusively. See the [pricing page](/getting-started/pricing) for current plan details.

## Important notes

- **Owner only** - Only the organisation owner can enable, disable, or generate BYOK keys
- **Mixed key state** - When BYOK is enabled or disabled, existing secrets stay encrypted with their original key. Each secret tracks which key version was used, and decryption automatically selects the correct key
- **Key rotation** - To rotate a BYOK key, disable BYOK, rotate all secrets (re-encrypts with platform key), then enable BYOK with the new key and rotate secrets again
- **Key backup** - CryptFlare stores the customer key in wrapped (encrypted) form. If both the wrapped key and the platform key are lost, secrets encrypted with the BYOK key cannot be recovered. Keep a secure backup of your key
- **Audit trail** - All BYOK operations (enable, disable) are recorded in the organisation audit log with the key fingerprint

---

# Security and compliance

Source: https://docs.cryptflare.com/security/compliance

How CryptFlare protects your secrets, audit capabilities, and compliance posture

# Security and compliance

CryptFlare is built with security as a foundational requirement.
Every secret is encrypted at rest, every access is logged, and every action is permission-controlled. This document covers our security architecture, audit capabilities, and compliance posture.

## Encryption

### Secrets encryption

All secret values are encrypted before they touch storage using **AES-256-GCM** with per-secret initialization vectors.

| Property | Value |
|----------|-------|
| Algorithm | AES-256-GCM |
| Key derivation | HKDF with SHA-256 |
| Key info | `cryptflare-v1` |
| IV length | 12 bytes (random per encryption) |
| Storage format | Base64-encoded ciphertext + IV |

The encryption key is derived from a master secret stored as an environment variable in the serverless runtime - it never enters the database. Each encrypt operation generates a fresh random IV, ensuring identical plaintext values produce different ciphertext.

### Token storage

API tokens are hashed before storage. The full token is only returned once at creation time and cannot be retrieved again.

| Property | Value |
|----------|-------|
| Stored fields | Token hash + 12-char prefix |
| Hash visible | No (only prefix shown in UI) |
| Recovery | Not possible - generate a new token |

### Two-factor authentication

CryptFlare supports TOTP-based two-factor authentication following RFC 6238.

| Property | Value |
|----------|-------|
| Standard | RFC 6238 (TOTP) |
| Algorithm | HMAC-SHA1 |
| Period | 30 seconds |
| Digits | 6 |
| Clock drift | +/- 1 time window tolerance |
| Recovery codes | Random alphanumeric, PBKDF2-SHA256 hashed (600,000 iterations, per-code salt) |

## Transport security

All traffic between clients and CryptFlare is encrypted in transit.
| Header | Value | Purpose |
|--------|-------|---------|
| Strict-Transport-Security | `max-age=31536000; includeSubDomains` | Forces HTTPS for 1 year |
| X-Content-Type-Options | `nosniff` | Prevents MIME sniffing |
| X-Frame-Options | `DENY` | Prevents clickjacking |
| Content-Security-Policy | `default-src 'none'; frame-ancestors 'none'` | Strict content policy |
| Referrer-Policy | `strict-origin-when-cross-origin` | Privacy-aware referrers |

## Session management

Sessions are server-side with short-lived expiry and sliding refresh.

| Property | Value |
|----------|-------|
| Session cookie | `cf_session` (HttpOnly, Secure, SameSite=Lax) |
| CSRF cookie | `cf_csrf` (JS-readable, for double-submit pattern) |
| Storage | Server-side (edge database) |
| Inactivity timeout | 1 hour |
| Sliding refresh | Every 5 minutes of activity |
| Maximum lifetime | 48 hours (forced re-login) |
| IP binding | Client IP recorded at session creation |
| User-Agent binding | Browser fingerprint recorded at session creation |
| Validation | Session, CSRF token, and expiry checked on every protected request |

### CSRF protection

All state-changing requests (`POST`, `PUT`, `PATCH`, `DELETE`) require a valid `x-csrf-token` header matching the session's stored CSRF token. The token is delivered via the `cf_csrf` cookie (JS-readable) and verified server-side against the session. The `SameSite=Lax` attribute stops the browser from attaching the session cookies to cross-site state-changing requests.

CSRF tokens are automatically rotated when privilege-changing actions occur (enabling/disabling 2FA, role changes). On role changes, all sessions for the affected user are invalidated, forcing re-authentication with the new permissions.

## Access control

CryptFlare uses role-based access control (RBAC) with granular permissions.

### Roles

| Role | Description |
|------|-------------|
| **Owner** | Full access to all resources. Cannot be removed or demoted. |
| **Biller** | Manages billing and subscription. Read-only access to members and audit. |
| **Manager** | Manages members, workspaces, secrets, and tokens. Cannot elevate roles above their own. |
| **Developer** | Read/write access to secrets and environments. Can create tokens. |
| **Employee** | Read-only access to secrets. Can view workspaces and environments. |
| **Viewer** | List-only access. Cannot reveal secret values. |

### Permissions

Permissions follow the `{resource}:{action}` pattern. Each role maps to a set of permissions:

| Resource | Actions |
|----------|---------|
| Organisation | `org:read`, `org:update`, `org:delete` |
| Billing | `billing:read`, `billing:update`, `billing:cancel`, `billing:upgrade` |
| Members | `members:read`, `members:invite`, `members:remove`, `members:role_assign` |
| Workspaces | `workspaces:read`, `workspaces:create`, `workspaces:update`, `workspaces:delete` |
| Environments | `environments:read`, `environments:create`, `environments:update`, `environments:delete` |
| Secrets | `secrets:list`, `secrets:read`, `secrets:write`, `secrets:rotate`, `secrets:delete`, `secrets:versions`, `secrets:restore` |
| Tokens | `tokens:read`, `tokens:create`, `tokens:revoke` |
| Approvals | `approvals:read`, `approvals:approve`, `approvals:deny` |
| Audit | `audit:read`, `audit:export` |
| Analytics | `analytics:read`, `analytics:export` |

### Manager role ceiling

Managers can only assign roles at or below their level: `developer`, `employee`, `viewer`. They cannot create other managers, billers, or owners. This prevents privilege escalation.

## Audit logging

Every significant action in CryptFlare is recorded in an immutable audit log.
### What is logged

| Field | Description |
|-------|-------------|
| `actor_id` | User who performed the action |
| `actor_role` | Role at the time of the action |
| `action` | Operation performed (e.g., `secret.created`, `member.invited`) |
| `resource_type` | Type of resource affected (e.g., `secret`, `workspace`) |
| `resource_id` | Specific resource identifier |
| `metadata` | Additional context (JSON) |
| `ip_address` | Client IP address |
| `created_at` | ISO 8601 timestamp |

### Example actions

| Action | When |
|--------|------|
| `secret.created` | A new secret is stored |
| `secret.read` | A secret value is revealed/decrypted |
| `secret.rotated` | A secret value is rotated |
| `secret.deleted` | A secret is permanently deleted |
| `member.invited` | A user is added to the organisation |
| `member.removed` | A user is removed from the organisation |
| `member.role_changed` | A member's role is changed |
| `token.created` | An API token is generated |
| `token.revoked` | An API token is revoked |
| `workspace.created` | A workspace is created |
| `workspace.deleted` | A workspace is deleted |

### Integrity verification

Every audit log entry is cryptographically chained to the previous entry using SHA-256 hash chaining. Each entry stores an `entry_hash` (hash of its own canonical fields) and a `prev_hash` (the hash of the previous entry). This ensures any modification, deletion, or insertion of entries is detectable. See [Audit logs](/security/audit-logs) for details.

### Querying audit logs

Audit logs can be filtered by action, actor, and resource type. See the [Audit API reference](/api-reference/organisations) for details.

### Retention

Audit log retention varies by plan; a retention value of `-1` means entries are kept indefinitely.

## Rate limiting

Two layers of rate limiting protect the API from abuse: a per-request sliding window and a daily organisation quota. Sensitive endpoints have stricter limits.
| Endpoint group | Limit | Window |
|---|---|---|
| Authentication (login, OTP verify) | 5 requests | 10 minutes |
| TOTP (verify, setup, disable) | 5 requests | 5 minutes |
| All other endpoints | 60 requests | 60 seconds |

Daily quota limits vary by plan; see the [Rate limits reference](/api-reference/rate-limits) for full details.

## Internal service authentication

Internal services (cron jobs, status checks) authenticate using HMAC-SHA256 signatures with:

- Timestamp freshness check (60-second maximum age)
- Nonce for replay prevention
- Service allowlist (`cron`, `status`)

## Infrastructure

CryptFlare runs on a global edge network:

| Component | Service | Purpose |
|-----------|---------|---------|
| API | Edge Workers | Request handling at the edge |
| Database | Edge Database (SQLite) | User accounts, secrets metadata, audit logs |
| Key-value | Edge KV Store | Rate limiting, OTP storage, session cache |
| Object storage | Object Storage | Audit log archives, large exports |
| Encryption keys | Worker environment | Master secret for AES-256-GCM |
| DNS | Edge DNS | DDoS protection, WAF |

### Edge-first architecture

Every API request is handled at the edge location nearest to the caller. There is no single origin server - the serverless runtime executes globally, reducing latency and eliminating single points of failure.
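The internal service authentication scheme described above (HMAC-SHA256 signature, 60-second timestamp freshness, nonce replay prevention, service allowlist) can be sketched as follows. This is a minimal illustration, not CryptFlare's wire format: the field names, the newline-joined canonical string, and the in-memory nonce set are assumptions for the example.

```python
import hashlib
import hmac
import secrets
import time

ALLOWED_SERVICES = {"cron", "status"}  # service allowlist from this page
MAX_AGE_SECONDS = 60                   # documented timestamp freshness window


def sign(service, method, path, key, now=None):
    """Build the signature fields an internal caller would attach to a request.

    The canonical string (newline-joined fields) is an assumption for this sketch.
    """
    ts = str(int(now if now is not None else time.time()))
    nonce = secrets.token_hex(16)
    msg = "\n".join([service, method, path, ts, nonce]).encode()
    return {
        "service": service,
        "timestamp": ts,
        "nonce": nonce,
        "signature": hmac.new(key, msg, hashlib.sha256).hexdigest(),
    }


def verify(fields, method, path, key, seen_nonces, now=None):
    """Reject unknown services, stale timestamps, replayed nonces, and bad MACs."""
    now = now if now is not None else time.time()
    if fields["service"] not in ALLOWED_SERVICES:
        return False
    if abs(now - int(fields["timestamp"])) > MAX_AGE_SECONDS:
        return False  # older than the 60-second freshness window
    if fields["nonce"] in seen_nonces:
        return False  # replayed request
    msg = "\n".join(
        [fields["service"], method, path, fields["timestamp"], fields["nonce"]]
    ).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, fields["signature"]):
        return False  # constant-time MAC comparison failed
    seen_nonces.add(fields["nonce"])  # burn the nonce only after a successful check
    return True
```

With this shape, the same signed request verifies exactly once and is rejected on replay, and a signature older than 60 seconds fails regardless of whether the MAC itself is valid.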
## Planned security enhancements

The following features are on our roadmap:

| Feature | Status | Target |
|---------|--------|--------|
| Audit log export to R2 | Planned | Phase 4 |
| Audit archive cron job | Planned | Phase 4 |
| SOC 2 Type II audit | Planned | 2027 |

## Reporting vulnerabilities

If you discover a security vulnerability, please report it responsibly:

- **Email**: security@cryptflare.com
- **Response time**: We aim to acknowledge within 24 hours
- **Do not** disclose publicly until we have issued a fix

---

# Compliance Reports

Source: https://docs.cryptflare.com/security/compliance-reports

Generate and share audit-ready compliance evidence reports with auditors and compliance teams.

# Compliance Reports

Compliance reports package your organisation's security posture into a single downloadable document that auditors can review without access to your vault. The report aggregates data from across the platform - access controls, audit trail, encryption configuration, policy coverage, secret hygiene, and control evidence - into a structured, printable HTML file.
## When to use compliance reports

- **SOC 2 audit preparation** - generate a quarterly evidence pack for your auditor covering Trust Services Criteria CC6 (logical access), CC7 (monitoring), and CC8 (change management)
- **PCI DSS assessment** - provide your QSA with a single document showing encryption posture, key management, access controls, and rotation coverage
- **Internal compliance reviews** - schedule monthly reports to track your security posture over time
- **Vendor security questionnaires** - attach the report to customer security review requests

## Supported frameworks

| Framework | What the report covers |
|---|---|
| **SOC 2** | Trust Services Criteria CC6.1-CC6.7 (logical access), CC7.2-CC7.3 (monitoring), CC8.1 (change management) |
| **HIPAA** | Administrative safeguards (164.308), Technical safeguards (164.312), Access controls, audit controls |
| **ISO 27001** | Annex A controls: A.5.15-A.5.18 (access), A.8.2-A.8.24 (operations), logging, cryptography |
| **PCI DSS** | Requirements 3.5-3.6 (keys), 7.2-7.3 (access), 8.2 (user ID), 10.2 (audit logs) |
| **GDPR** | Articles 25 (data protection by design) and 32 (security of processing) |
| **NIST 800-53** | AC-2, AC-3, AC-5, AC-6 (access), AU-2 (audit), IA-5 (authenticator), SC-12, SC-28 (cryptography) |

Use `all` to generate a combined report covering every framework in one document.

## Report sections

### Access control

Shows your current member count, team structure, role distribution, and any custom role permission overrides. Auditors use this to verify the principle of least privilege and that access reviews are happening.

### Audit trail

Aggregates your audit log for the specified date range: total event count, breakdown by action type (top 15), top actors, and failed access attempts. This is the "proof of monitoring" section that SOC 2 CC7.2 and HIPAA 164.312(b) require.
### Encryption posture

Documents your encryption configuration: whether BYOK is enabled, the key derivation method (HKDF-SHA256 per environment for platform-managed, customer-managed AES-256 for BYOK), the data residency region, and the encryption algorithm (AES-256-GCM). Required by PCI DSS 3.5-3.6 and NIST SC-12/SC-28.

### Policy coverage

Lists your active global and team-scoped policies, how many were created from compliance templates, and which frameworks are covered by your current policy set. Shows the gap between "frameworks you care about" and "frameworks your policies actually address."

### Secret hygiene

Quantifies your rotation discipline: total secrets, how many have active rotation policies (with coverage percentage), overdue rotations, and validation rule adoption. PCI DSS 3.7 and NIST SC-12 specifically require provable key rotation practices.

### Control evidence

The most audit-relevant section. Maps each framework's controls to specific CryptFlare features and policy templates that satisfy them. Controls without coverage are flagged as gaps. This is what your auditor will spend the most time reviewing.

## How to generate a report

Reports are generated asynchronously via the API. The flow is:

1. Call `POST /v1/organisations/:org/compliance/report` with your chosen framework, date range, and sections. The API returns a job ID.
2. Poll `GET /v1/organisations/:org/compliance/report/:jobId` every 1-2 seconds. Most reports complete in under 10 seconds.
3. Call `GET /v1/organisations/:org/compliance/report/:jobId/download` to retrieve the HTML file. Open it in any browser, print to PDF, or email it directly to your auditor.

See the [API reference](/api-reference/compliance) for full endpoint documentation.

## Sharing with auditors

HTML reports are self-contained single-file documents with inline CSS. They contain no JavaScript, no external dependencies, and no embedded secrets.
You can safely:

- Email the HTML file as an attachment
- Upload to a shared drive (Google Drive, SharePoint, Dropbox)
- Print to PDF from any browser
- Commit to an evidence repository

Reports do NOT contain secret values, encrypted data, or API tokens. They contain aggregate counts, member lists (names + roles), policy names, and audit event summaries.

## What reports do not prove

A compliance report documents your current configuration and activity. It does not constitute a SOC 2, PCI DSS, HIPAA, or ISO 27001 certification. Compliance depends on your full security posture - including areas outside CryptFlare (network security, physical access, employee training, incident response procedures). Use these reports as evidence in your broader compliance program, not as a substitute for it.

## Best practices

- **Generate reports quarterly** for SOC 2 and PCI DSS evidence
- **Include all sections** unless you have a specific reason to exclude one
- **Save reports to an evidence repository** with the date and framework noted
- **Review the "Control evidence" section** to identify gaps before your audit
- **Use `all` framework** for internal reviews to get a complete picture

---

# Data Residency

Source: https://docs.cryptflare.com/security/data-residency

Control where your organisation's data is stored to meet regulatory and compliance requirements.

# Data Residency

CryptFlare allows organisations on the **Team plan** to choose the geographic region where their data is stored. This ensures compliance with data sovereignty laws, industry regulations, and internal governance policies.
## Why data residency matters

Many industries and jurisdictions require that sensitive data remains within specific geographic boundaries:

- **GDPR (EU)** requires personal data of EU residents to be processed and stored within the European Economic Area unless adequate safeguards are in place
- **APAC regulations** such as Australia's Privacy Act and Singapore's PDPA impose data localization requirements for certain categories of data
- **US compliance** frameworks like FedRAMP, HIPAA, and SOC 2 may require data to remain within US borders
- **Internal governance** policies at large enterprises often mandate that secrets and credentials stay within approved regions

## Available regions

## What data is regionalized

When you select a region, the following data is stored exclusively in that region's infrastructure:

- **Workspaces and environments** - your project structure
- **Secrets and secret versions** - all encrypted values and their history
- **Pods** - folder organization for secrets
- **Audit logs** - every access and modification event
- **Teams and policies** - access control rules and team membership
- **Access requests and grants** - just-in-time access records
- **Support tickets** - any tickets filed by your organisation

## What stays global

Some data must remain accessible globally for authentication and account management:

- **User accounts and sessions** - login credentials and active sessions
- **Organisation metadata** - name, plan, billing information
- **API tokens and service tokens** - token hashes for authentication lookup
- **SSO connections** - single sign-on configuration

This separation ensures that your secrets and operational data stay in your chosen region while authentication works seamlessly from any location.

## How it works

Every organisation has a single `data_region` selector that pins its regional data to one jurisdiction.
Authentication and billing stay in a separate global cluster so a user can log in from anywhere and still reach only their own regional store.

(Diagram: the organisation's `data_region` selector routes operational data to exactly one regional primary - EU, US, or APAC, each a D1 database plus R2 storage - while billing, auth, and account data live in a shared global D1 plane.)

Secrets, audit rows, and workspace metadata land in the regional cluster the selector points at; only identity and billing state is shared across regions.

Organisation owners on the Team plan can select a data region in **Organisation Settings**. New organisations default to the Oceania region. Every API request for your organisation is automatically routed to the correct regional database.

All secrets are encrypted with AES-256-GCM before storage, regardless of region. Each environment uses a unique derived encryption key. Organisations with Bring Your Own Key (BYOK) enabled retain full control of their encryption keys in every region.

## Changing regions

### New organisations (no data)

If your organisation has no workspaces or secrets yet, changing regions is instant. The region is updated immediately and all future data is written to the new region.

### Organisations with existing data

When an organisation with existing data changes regions, CryptFlare performs an automated migration:

1. **Migration initiated** - a background process begins copying your data to the new region
2. **Data copied** - all regional tables are transferred in dependency order (workspaces, then environments, then secrets, etc.)
3. **Verification** - row counts are verified to ensure completeness
4. **Region switched** - the organisation's routing is updated to point to the new region
5. **Cleanup** - data is removed from the previous region

You can monitor migration progress in **Organisation Settings** under **Data Residency**.
The migration status API is also available at `GET /v1/organisations/:org/data-region/status`.

## Region guarantees

- **Write locality** - all writes for your organisation go to a single primary database in your chosen region
- **Read replicas** - read replicas are automatically distributed but are restricted to the same jurisdiction (EU databases only replicate within the EU)
- **No cross-region data leakage** - your organisation's operational data never leaves the selected region
- **Audit trail** - every region change is recorded in the audit log with the actor, timestamp, and source/target regions

## Compliance certifications

CryptFlare's infrastructure is built on SOC 2 Type II certified data centers with the following security controls:

- **Encryption in transit** - TLS 1.3 for all API communications
- **Encryption at rest** - AES-256-GCM with per-environment key derivation (HKDF-SHA256)
- **Access control** - deny-first RBAC with team-scoped policies and just-in-time access
- **Audit logging** - every secret access, modification, and administrative action is logged with actor, IP, and timestamp
- **Key management** - optional BYOK allows organisations to control their own 256-bit encryption keys

## API reference

For full API documentation including request/response examples, see the [Data Residency API Reference](/api-reference/data-residency).

## Plan availability

| Feature | Free | Pro | Team |
|---|---|---|---|
| Data residency | - | - | Included |
| Region selection | Default only | Default only | All regions |
| Region migration | - | - | Included |

Upgrade to the Team plan to enable data residency controls for your organisation.

---

# Dynamic Secrets

Source: https://docs.cryptflare.com/security/dynamic-secrets

Mint short-lived, auto-revoked credentials on demand from upstream cloud providers. No long-lived secrets, no manual rotation, bounded blast radius.
# Dynamic Secrets

Dynamic secrets are short-lived credentials that CryptFlare mints on demand from upstream cloud providers (Azure, AWS, GCP, ...) and automatically revokes when they expire. Instead of storing a long-lived API key that rotates every 90 days and hoping nobody leaks it, your applications request a fresh credential every time they need one - and that credential dies on its own at a time you control.

This is the same model [HashiCorp Vault](https://developer.hashicorp.com/vault/docs/concepts/lease) pioneered. An operator configures a "root" identity in the upstream provider once, applications request leases on demand, and every lease is bound to the identity that requested it.

## Why dynamic secrets

Your CI, your developers, and your production workloads never hold a credential that outlives its purpose. A `terraform apply` gets a 30-minute Azure token; when Terraform finishes, the token dies.

If a credential leaks (CI log, developer laptop, accidental commit), the window of exposure is capped by the lease TTL. A 60-minute lease means the worst case is 60 minutes of attack surface.

The credentials rotate themselves on every request. You never run a runbook to "change the DB password before Friday's audit" - every request is the fresh password.

Every lease is recorded with who requested it, when, from where, and for what TTL. When something goes wrong you can trace the exact credential back to the exact developer or CI run.

When a developer leaves and their session is revoked, every credential they minted dies immediately. When a rotated service token is deleted, every credential issued under it dies. This is Vault's identity-binding rule, built in from day one.

## How it works

An admin (via the dashboard or the Terraform provider) registers a "root" identity with CryptFlare - for Azure that's an App Registration with `Application.ReadWrite.All` on Microsoft Graph (plus `User Access Administrator` at target scopes if they pick Dynamic SP mode).
CryptFlare validates the credentials against the provider before persisting them **encrypted at rest with AES-256-GCM**. The config holds three TTL numbers (Vault-style): `default_ttl`, `max_ttl`, and `system_max_ttl`. Callers can request any TTL up to `max_ttl`; omitting the TTL falls back to `default_ttl`; and the system ceiling clamps everything. Two quotas prevent runaways: `max_concurrent_leases` and `max_leases_per_identity`.

Your CI, CLI, or local developer makes a single API call: `POST /v1/organisations/:org/dynamic-secrets/configs/:id/lease`. CryptFlare resolves the effective TTL, checks quotas, decrypts the root credentials, calls the upstream provider to mint a fresh credential, and returns it.

The credential is returned exactly once in the response body. Use it immediately - export it as an env var, pass it to a subprocess, plug it into another Terraform provider. **CryptFlare never stores it.**

Behind the scenes, CryptFlare starts a durable Cloudflare Workflow that sleeps until the TTL expires, then calls the upstream provider to revoke the credential. The credential also has a provider-side deadline baked in (where supported) so even if CryptFlare vanishes, your cloud provider kills the credential on schedule.

## Supported providers

Microsoft Azure and Amazon Web Services are supported today; more providers are in development. Want one we haven't built yet? [Let us know](/support).

## Setup guides

| Provider | Guide |
|---|---|
| **Microsoft Azure** | [Dynamic secrets with Azure Service Principals](/guides/dynamic-secrets/azure) |
| **Amazon Web Services** | [Dynamic secrets with AWS IAM (AssumeRole)](/guides/dynamic-secrets/aws) |

Each guide walks through registering the upstream identity, granting the right permissions, creating the config in CryptFlare, and issuing your first lease.

## Using a lease

Once an operator has set up a config, your applications can request leases.
The [**Using dynamic secrets**](/guides/dynamic-secrets/usage) guide walks through the common patterns:

- **Local developer workflow** - grab a 1-hour credential from the CLI before running `terraform plan`
- **CI/CD pipeline** - service token requests a fresh credential at the start of every pipeline run
- **Inline one-shot usage** - wrap a single command with automatic revoke on exit
- **Long-running processes** - [renew the lease](/api-reference/dynamic-secrets#renew-a-lease) periodically, bounded by the hard `max_expires_at` cap
- **Credential handoff** - [wrap](/api-reference/dynamic-secrets#issue-a-lease) credentials in a single-use exchange token and pass the token through an insecure channel, unwrap at the destination

Every pattern uses the same API - it's just wiring.

## Lease renewal

For long-running workloads that outlive their initial TTL (a multi-hour CI pipeline, an overnight migration, a lengthy Terraform apply), you can extend an active lease via [`POST /leases/:leaseId/renew`](/api-reference/dynamic-secrets#renew-a-lease). The renewal follows Vault's rule: you can extend the lease as many times as you want, but **never past `max_expires_at`**, which is anchored to the original issue time. A runaway client cannot keep a credential alive forever - the hard cap always wins.

For providers whose credentials are immutable (Azure SP, AWS STS), renewal means revoke-and-reissue: the response includes fresh credential values which the caller must swap in. For providers that support in-place extension (none in v1), renewal keeps the same credential values and just pushes the expiry deadline forward.

## Response wrapping

When your application needs to hand credentials from one process to another through a channel it doesn't fully trust (CI logs, task queues, paste buffers, webhooks), you can request the credentials in a **wrapped** form.
The initial `POST /configs/:configId/lease` returns a short-lived single-use exchange token instead of the credential values, and a separate process redeems the token via [`POST /unwrap/:token`](/api-reference/dynamic-secrets#unwrap-a-credential-token) to get the actual credentials.

1. **Request a wrapped lease** - `POST /configs/:id/lease` with `{ wrap: { ttl: 60 } }`. CryptFlare mints the credential, stores it in a KV entry encrypted with a per-token derived key, and returns a 32-byte random exchange token. The credentials do not appear in the response body.
2. **Hand the token over** - through whatever channel is convenient: an environment variable, CI output, a posted message. The token alone is useless to anyone who cannot also authenticate to CryptFlare with `dynamic_secrets:issue`, so brief exposure is safer than passing the raw credential.
3. **Unwrap at the destination** - `POST /unwrap/:token`. CryptFlare atomically reads and deletes the KV entry, returning the credentials. Any subsequent unwrap attempt on the same token fails - single-use by design.

Wrap TTL is configurable between 10 and 300 seconds (default 60). The wrap token does NOT extend the lease's own TTL - the underlying credential still expires at its originally-scheduled time. Wrapping only affects how the credential travels from mint to consumer.

## Default role permissions

By default, any role that can issue leases can also read them. Only `manager` and above can create, update, or delete the underlying config. Owners can override these defaults in **Organisation Settings > Roles**.

## Lease lifecycle

Each lease moves through a small state machine. The workflow starts the moment a lease is minted, sleeps until the TTL approaches, then cleans up at the upstream provider. Any identity change along the way can cascade drain before the natural expiry.
- `pending` → `active` once the workflow starts
- `active` → `expiring_soon` as the TTL approaches
- `expiring_soon` → `expired` when the provider revoke succeeds
- `active` or `expiring_soon` → `revoked` when a user or operator revokes
- `active` or `expiring_soon` → `drained` when the config is deleted or a cascade fires

The terminal states (expired, revoked, drained) all converge on a revoked upstream credential, so the only difference between them is which event got there first.

| State | Meaning | Can revoke? |
|---|---|---|
| `pending` | Row created, workflow not yet started. Transient - usually under 1 second. | Yes |
| `active` | Workflow is sleeping until TTL. Credential is valid at the upstream provider. | Yes |
| `expired` | The workflow's sleep fired and the provider successfully revoked the credential. | - |
| `revoked` | A user, operator, or cascade-revoke killed the lease before TTL. | - |
| `irrevocable` | Six revoke attempts failed. Credential may still be valid at the provider - operator investigation required. | Force only |

## Identity binding

Every lease is bound to the identity that issued it. When that identity is revoked, **all of its leases cascade-revoke at the upstream provider**. This is the Vault rule and it is what makes dynamic secrets actually safer than long-lived credentials.

| Trigger | Effect on active leases |
|---|---|
| User logs out of CryptFlare | All session-bound leases revoked within seconds |
| Service token deleted | All leases issued under that service token revoked |
| Access token deleted | Same |
| Session expires by sliding window timeout | Leases die naturally - their TTL was clamped to fit the session at issue time |

The cascade runs in the background so user-facing responses are not blocked by upstream provider latency. If a cascade revoke fails (e.g.
Azure unreachable), the offending lease is flipped to `irrevocable` and **every owner and manager in the organisation receives an in-app notification** linking straight to the lease detail page with the force-revoke button. The credential may still be valid at the provider - use the notification as your prompt to clean it up manually and then force-revoke to clear the database state. ## TTL resolution Effective TTL is clamped by four independent ceilings at issue time. The response always reflects what you actually got. ``` effective_ttl = min( caller_requested_ttl ?? default_ttl, max_ttl, system_max_ttl, parent_token_remaining_lifetime, ) ``` Worked example with `default_ttl = 30m`, `max_ttl = 60m`, `system_max_ttl = 24h`: | Caller asks for | Effective TTL | Why | |---|---|---| | nothing | **30 min** | Used the default | | `ttl: 900` | **15 min** | Below default, accepted | | `ttl: 2700` | **45 min** | Between default and max, accepted | | `ttl: 7200` | **60 min** | Clamped to max | | `ttl: 86400` | **60 min** | Clamped to max (max < system_max) | | `ttl: 1800`, session expires in 600s | **10 min** | Clamped to parent token's remaining lifetime | The effective TTL must be at least **60 seconds** or the request is rejected with `DYNAMIC_TTL_INVALID`. If your session is about to expire, refresh it before requesting a long-lived lease. ## Dashboard Every dynamic secrets config and its leases are visible in the vault at **Dynamic Secrets**. The page is tabbed into **Configurations** and **Analytics**: - **Configurations tab** - table of every dynamic secret config in your organisation with provider badge, TTL window, active-lease count / quota, and a row of actions (Issue lease, View leases, Edit, Delete). A search bar filters by name or description, and a segmented control filters by enabled/disabled status and by provider when multiple providers are in use. 
- **Analytics tab** - four summary metric cards (issued in window, revoked in window, active now, irrevocable count), a daily issue-rate chart, a revoke-reason breakdown, an active-by-config top-10, and a provider usage summary. The window picker (7 / 30 / 90 days) filters the window server-side. Two modal / detail surfaces drill deeper from the Configurations tab: - **View leases modal** (eye icon on a config row) - opens a compact list of every lease for that config. Includes a search bar matching lease id / session id / user id, a status segmented filter, client-side pagination at 25 per page, and a capped-height scroll container so it stays usable even with hundreds of leases. Manager / owner roles see revoke + force-revoke buttons on each row. - **Lease detail page** (click any lease in the list) - opens `/dynamic-secrets/leases/:leaseId` with the full audit timeline for that one lease, every metadata field, the workflow instance id, revocation attempts, and (for active leases) a revoke button. Irrevocable leases expose the force-revoke action here. **Permission check** - the config edit page has a "Check permissions" button that re-runs the provider adapter's validate() hook against the currently-stored root credentials and returns pass/fail with the exact error string. Use it after you grant a new cloud permission to verify before the next lease request, or after a lease-issue failure to confirm what's actually missing. Empty organisations (no configs yet) see only a hint to create one - analytics / modal surfaces only populate once a config exists. ## Rate limiting On top of the hard `maxConcurrentLeases` and `maxLeasesPerIdentity` quotas, every lease issuance is rate-shaped at **10 issuances per 60 seconds per parent token** (session, service token, or access token). Two service tokens held by the same user are rate-limited independently - so two CI pipelines running in parallel don't interfere with each other. 
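The per-token shaping can be sketched as a fixed-window counter. This is a minimal illustration of the documented behaviour (10 issuances per 60 seconds per parent token), not CryptFlare's actual limiter - the class and method names are invented:

```javascript
// Hypothetical sketch: 10 lease issuances per 60-second window, counted
// independently per parent token (session, service token, or access token).
class LeaseRateLimiter {
  constructor(limit = 10, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.windows = new Map(); // tokenId -> { start, count }
  }

  // Returns true if the issuance is allowed, false if the caller
  // should surface `429 RATE_LIMITED`.
  tryIssue(tokenId, now = Date.now()) {
    const w = this.windows.get(tokenId);
    if (!w || now - w.start >= this.windowMs) {
      // First issuance in a fresh window for this token.
      this.windows.set(tokenId, { start: now, count: 1 });
      return true;
    }
    if (w.count < this.limit) {
      w.count += 1;
      return true;
    }
    return false;
  }
}
```

Because windows are keyed by parent token, two service tokens held by the same user never share a budget - matching the behaviour described above.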
Rotation policies, CI jobs, and Terraform runs all fit comfortably inside that budget. Hitting the rate limit (`429 RATE_LIMITED`) almost always means a runaway loop - back off, investigate, and the limiter clears 60 seconds later.

## Defense in depth

Dynamic secrets use three independent revocation layers. Each one exists so the others don't have to be perfect.

1. **Durable expiry workflow** - the primary mechanism. Every lease gets its own durable workflow instance that sleeps until the exact TTL expiry, then calls the provider's revoke API with 6-attempt exponential backoff. Workflows survive worker deploys and are precise to the second.
2. **Provider-side TTL enforcement** - where the upstream platform supports it, CryptFlare bakes a deadline into the credential itself at issue time. Azure SP passwords use `endDateTime = lease_ttl + 5 min`. So even if CryptFlare's workflow never fires, your cloud provider kills the credential on schedule.
3. **Hourly reconciliation cron** - a cron job runs once an hour and flips any lease whose `max_expires_at` has passed but is still marked active to `irrevocable`. In normal operation it processes zero rows - it exists so a Workflow runtime outage cannot leave permanently-active credentials.

## Plan availability

> Dynamic secrets are a **Team plan** feature. Upgrade to enable short-lived credential minting for your organisation.

| Feature | Free | Pro | Team |
|---|---|---|---|
| Dynamic secrets | - | - | Yes |
| Concurrent leases per config | - | - | Configurable |
| Identity-bound cascade revoke | - | - | Yes |
| Provider-side TTL enforcement | - | - | Yes |
| Durable workflow-based expiration | - | - | Yes |
| Terraform provider integration | - | - | Yes |

The platform enforces a hard ceiling of **24 hours** on any single lease TTL regardless of plan.

## Security model

- **Root credentials encrypted at rest** with AES-256-GCM, per-config key derived via HKDF from the platform master secret. A leaked D1 row alone cannot be decrypted.
- **BYOK option** - when `useByok: true` is set at config creation, root credentials are encrypted with the organisation's customer-managed key instead of the platform master. Flipping the org's BYOK state afterwards does NOT re-encrypt existing configs; each config stays encrypted under whichever source it was created with. - **Lease credentials returned exactly once** - never stored, never logged, never cached. - **Wrap tokens** - stored in KV with a short TTL (default 60s, max 300s) encrypted with a per-token derived key. Single-use: atomically read-and-delete on unwrap. Cross-org scoped. - **Audit log** records `configId`, `leaseId`, `expiresAt`, and `externalId` only. No credential values ever touch the hash-chained audit log. - **Tenant isolation** enforced at every query layer by `organisation_id`. The upstream provider handle (e.g. Azure `keyId`, AWS access key id) is opaque to CryptFlare and only meaningful in the customer's own cloud account. - **Vault-style identity binding** - every lease is bound to its issuing session or token. Cascade revoke fires on logout, service token delete, or access token delete. - **Renewal hard cap** - `max_expires_at` is anchored to issue time. Renewal can extend the active window but can never push past this ceiling, so a runaway client cannot keep a credential alive forever. ## Azure Dynamic SP mode - additional security properties Dynamic SP mode (one new Service Principal per lease, rather than rotating a password on a shared root SP) has a few security properties and trade-offs worth calling out separately. ### Per-lease identity in Azure activity logs In Static SP mode, every lease authenticates to Azure as the same root App Registration, so Azure activity logs show all lease-driven actions attributed to one principal. In Dynamic SP mode, each lease carries a **unique `AZURE_CLIENT_ID`** that didn't exist before the lease was issued and won't exist after revoke. 
Azure activity logs attribute actions per lease, which means: - You can reconstruct "who did what" at the lease level from Azure-side logs alone, without needing to cross-reference CryptFlare's audit trail - Compliance regimes that require per-principal attribution (SOC 2 CC6.1, ISO 27001 A.9.4.2) are satisfied by the Azure logs directly - A compromised lease credential can be traced to a specific lease id via the SP's display name (`cryptflare-lease-`) The trade-off is operational: Dynamic SP mode is ~5x slower per issue (~2–5 seconds vs ~500ms) and carries a propagation delay (next section). ### Compensating-transaction rollback on partial failure Dynamic SP issuance is a multi-step Graph + ARM sequence: create Application → create Service Principal → add password → create role assignment(s). Any failure after the Application is created triggers a best-effort `DELETE /applications/{id}` which cascades to the SP, all passwords, and all role assignments. The original error is preserved and re-raised to the caller. If the rollback itself fails (rare - only if Graph is throwing 5xx on both create and delete), we log `azure_dynamic_sp_rollback_failed` to the Workers runtime log with the application object id so an operator can clean up manually. The orphaned App is named `cryptflare-lease-` making it easy to find in the Entra portal. **What this means for the security model**: partial-failure recovery never leaves a dangling Application with valid credentials. Either the lease succeeds end-to-end, or every resource we created is gone. There is no "half-minted lease" state where an attacker could race us. ### Propagation delay trust boundary Fresh Service Principals take **up to 30 seconds** to replicate across Azure AD and become visible to the Azure management plane. During this window the minted credentials authenticate successfully but an ARM call may fail with "principal not found" or `AuthorizationFailed`. 
This is **not a security weakness** - the lease credential can't do anything it shouldn't during the delay, because Azure's authorisation layer is the one rejecting it. It's a usability constraint. For latency-sensitive workloads (short-lived CI jobs that need credentials to work within seconds of issue) use Static SP mode, which has no propagation delay.

### User Access Administrator blast radius

Dynamic SP mode requires the root App to hold `Microsoft.Authorization/roleAssignments/write` at every scope at which it delegates roles. The minimal built-in role that grants this is `User Access Administrator`. Assigning it to your root App means:

- The root App can create role assignments at that scope for **any role it holds** - not just the one you're delegating to leases. If you grant the root App `User Access Administrator` + `Reader`, you're effectively granting it the ability to delegate any role up to and including `User Access Administrator` itself at that scope.
- Anyone with write access to your CryptFlare config (owner / manager roles with `dynamic_secrets:manage`) could in theory add a role assignment line item for a more-privileged role and have future leases carry it.

**Mitigations we recommend**:

1. **Scope narrowly** - grant `User Access Administrator` at a single subscription or resource group rather than tenant-wide. Blast radius is bounded to that scope regardless of what the root App decides to delegate.
2. **Principle of least privilege on the root App** - only assign the root App the roles you actually plan to delegate to leases. If leases only need `Reader`, the root App should only hold `Reader` + `User Access Administrator`, never `Contributor` or `Owner`.
3. **Audit `dynamic_config.updated` events** - any change to `roleAssignments` on a config emits this event. Watch it in the console dashboard's Activity feed or in your org's audit log.
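The third mitigation lends itself to automation against the audit/webhook event stream. A hedged sketch - the `{resource}.{verb}` event type is documented, but the exact `metadata` field name (`roleAssignments`) is an assumption here:

```javascript
// Flag config changes that touch role assignments in a batch of audit or
// webhook events. Event payloads carry metadata only (never secret values),
// so this filter is safe to run anywhere events are delivered.
function findRoleAssignmentChanges(events) {
  return events.filter(
    (e) =>
      e.type === 'dynamic_config.updated' &&
      e.metadata != null &&
      'roleAssignments' in e.metadata
  );
}
```

Feeding this from an event subscription gives you an alert path for exactly the privilege-escalation scenario described above.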
Static SP mode does not have this concern - the root App's RBAC is assigned once up front and applies to every lease uniformly; there is no per-lease role delegation at issue time. ### Clean-up on revoke cascades to Azure-side resources In Static SP mode, revoke removes one password credential from the root App; the SP and its role assignments persist untouched. In Dynamic SP mode, revoke `DELETE`s the entire Application, which cascades to: - The Service Principal (removed from Entra ID) - All password credentials on the Application - All role assignments targeting the SP's object id (Azure automatically orphans these) This means a revoked Dynamic SP lease leaves **zero footprint** in your Azure tenant - there is no way for a stale role assignment to grant access to a principal that no longer exists, because the principal is gone. Orphan role assignment records may briefly linger in ARM diagnostic logs but they are functionally dead. ## API reference For full endpoint documentation with request and response examples, see the [Dynamic Secrets API reference](/api-reference/dynamic-secrets). 
Key endpoints: - [`POST /dynamic-secrets/configs`](/api-reference/dynamic-secrets#create-a-dynamic-secret-config) - create a provider config (optional `useByok` to encrypt root creds with the org's customer key) - [`POST /dynamic-secrets/configs/:id/lease`](/api-reference/dynamic-secrets#issue-a-lease) - mint a fresh credential (optional `wrap` for handoff through insecure channels) - [`POST /dynamic-secrets/leases/:id/renew`](/api-reference/dynamic-secrets#renew-a-lease) - Vault-style lease renewal, bounded by `max_expires_at` - [`POST /dynamic-secrets/unwrap/:token`](/api-reference/dynamic-secrets#unwrap-a-credential-token) - redeem a wrap token for the underlying credentials - [`GET /dynamic-secrets/leases`](/api-reference/dynamic-secrets#list-leases) - list lease history - [`DELETE /dynamic-secrets/leases/:id`](/api-reference/dynamic-secrets#revoke-a-lease) - manually revoke a lease - [`DELETE /dynamic-secrets/configs/:id`](/api-reference/dynamic-secrets#delete-a-dynamic-secret-config) - drain and delete a config (Terraform-destroy-safe) --- # Encryption Source: https://docs.cryptflare.com/security/encryption How CryptFlare encrypts your secrets at rest and in transit # Encryption Your secrets are encrypted before they ever reach storage. CryptFlare uses industry-standard encryption to ensure that even in the unlikely event of a data breach, your secret values remain unreadable. ## Encryption at rest Every secret value is encrypted using **AES-256-GCM** before being written to the database. The plaintext value exists only in memory during the encryption/decryption operation and is never persisted. | Property | Detail | |----------|--------| | Algorithm | AES-256-GCM (authenticated encryption) | | Key length | 256 bits | | IV | Unique 12-byte random value per secret | | Integrity | GCM authentication tag prevents tampering | Each time a secret is created or rotated, a fresh initialization vector is generated. 
This means storing the same value twice produces completely different ciphertext. ## Encryption in transit All communication with CryptFlare is encrypted using TLS 1.2 or higher. We enforce HTTPS across all endpoints and subdomains with a strict transport security policy that lasts one year. Our global edge network terminates TLS at the edge, meaning your data is encrypted from the moment it leaves your application until it reaches our serverless runtime. ## Key management Encryption keys are stored as environment secrets in the serverless runtime. They are never committed to source control, stored in the database, or exposed through the API. Keys are derived using HKDF (HMAC-based Key Derivation Function) with SHA-256, providing an additional layer of separation between the master secret and the per-operation encryption key. ## Token security API tokens are hashed before storage. When you create a token, the full value is shown exactly once. After that, only the first 12 characters (the prefix) are visible. There is no way to retrieve the full token from our systems - if you lose it, you must generate a new one. ## Recovery code security TOTP recovery codes are hashed using **PBKDF2-SHA256** with 600,000 iterations and a unique random salt per code. This makes brute-force attacks computationally infeasible even if the database is compromised. Recovery codes are shown once at 2FA setup and cannot be retrieved afterwards. ## What we cannot see CryptFlare staff cannot read your secret values. The encryption key is isolated within the serverless runtime and is not accessible through any administrative interface or database query. We can see metadata (key names, versions, timestamps) but never plaintext values. --- # Event Subscriptions Source: https://docs.cryptflare.com/security/event-subscriptions Receive real-time HTTP notifications when actions occur in your CryptFlare organisation. 
# Event Subscriptions

Event subscriptions let you receive real-time HTTP notifications (webhooks) when actions happen in your organisation. Every event that appears in your [audit log](/security/audit-logs) can trigger a delivery to your endpoint.

## How it works

CryptFlare's audit system already records every action - secret creation, member changes, policy updates, and more. Event subscriptions tap into this pipeline:

1. A user or service token performs an action (e.g. rotates a secret)
2. The action is recorded in the audit log via a queue
3. The queue consumer checks for active subscriptions matching the event type
4. Matching subscriptions receive an HTTP POST with the event payload
5. Each delivery is signed with HMAC-SHA256 for verification

## What's in a payload

Event payloads contain **metadata only** - the same information that appears in the audit log. Secret values, encrypted data, and tokens are **never** included.

```json
{
  "id": "evt_abc123",
  "type": "secret.rotated",
  "timestamp": "2026-04-11T12:00:00Z",
  "organisation": { "id": "org_xyz" },
  "actor": { "id": "usr_456", "role": "developer" },
  "resource": { "type": "secret", "id": "sec_789" },
  "metadata": { "key": "DATABASE_URL", "version": 3 },
  "source": "dashboard"
}
```

The `metadata` field contains contextual information like key names, version numbers, and setting changes - but never the actual secret value.

## Available events

You can subscribe to all events (`*`) or pick specific event types. Events follow the `{resource}.{verb}` naming convention used by the audit log.

## Verifying deliveries

Every delivery includes an `X-CryptFlare-Signature` header containing an HMAC-SHA256 signature of the request body, computed with your signing secret:

```
X-CryptFlare-Signature: sha256=5d41402abc4b2a76b9719d911017c592...
```

Always verify this signature before processing the payload.
Example in Node.js:

```javascript
import crypto from 'node:crypto';

function verifySignature(rawBody, secret, signatureHeader) {
  const expected = `sha256=${crypto
    .createHmac('sha256', secret)
    .update(rawBody)
    .digest('hex')}`;
  // Compare in constant time to avoid leaking timing information.
  const a = Buffer.from(signatureHeader ?? '');
  const b = Buffer.from(expected);
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```

## Permissions

These are the default permissions for a new organisation. Owners can customise which roles have these permissions in **Organisation Settings > Roles**. Developers can view subscriptions but cannot create or modify them. This prevents unauthorised registration of external URLs while maintaining visibility.

## Organisation-level control

Organisation owners can enable or disable event subscriptions for the entire organisation. When disabled:

- All event subscription API endpoints return `403 EVENTS_DISABLED`
- The queue consumer skips delivery for that organisation
- Existing subscriptions are preserved but inactive

Events are enabled by default. Owners can toggle this from the Event Subscriptions page or via the [toggle API endpoint](/api-reference/event-subscriptions#enabledisable-events).

## Plan limits

## Delivery behaviour

Every delivery walks a short pipeline: the audit queue feeds a subscription matcher, the payload is HMAC-signed, and the webhook is POSTed to your endpoint. A failed POST escalates through a fixed retry ladder - 1 minute, 5 minutes, 15 minutes, 1 hour, then 6 hours - before it lands in the dead letter sink. The retry ladder is bounded in wall-clock time, so a flapping endpoint spends at most a few hours retrying before the subscription is paused and on-call is notified.
| Setting | Value | |---|---| | Timeout | 10 seconds per attempt | | Max retries | 3 attempts with exponential backoff (1s, 2s) | | Success criteria | Any 2xx HTTP status code | | Failure tracking | Consecutive failures increment `failedCount` | | Auto-disable | Subscription paused after 10 consecutive failures | | Re-enabling | Resets the failure counter | ### Auto-disable If a subscription accumulates 10 consecutive delivery failures (across multiple events, not just retries), it is automatically paused to prevent wasting resources on a broken endpoint. The subscription card in the dashboard shows an **Auto-disabled** badge. When auto-disable fires, CryptFlare notifies everyone in your organisation with the `events:manage` permission (owners and managers) via: - **Email** summarising the subscription name, destination URL, and the last three failure reasons with timestamps - **In-app notification** with a deep link to the subscription detail page Notifications are deduplicated per subscription within a 24-hour window, so a flapping endpoint cannot spam the inbox. To recover: fix the endpoint issue, then re-enable the subscription. The failure counter resets to zero on re-enable. ### Manual redeliver Failed deliveries can be retried from the delivery log. Click the retry button on any failed delivery to resend the original payload to the subscription URL. A new delivery log entry is created with the result. See the [redeliver API endpoint](/api-reference/event-subscriptions#redeliver-a-failed-event) for programmatic usage. ### Signing secret rotation Rotate the signing secret without downtime using the [rotate secret endpoint](/api-reference/event-subscriptions#rotate-signing-secret). A new secret is generated automatically, and the previous secret remains valid for **24 hours** so consumers can migrate. 
During the grace period, deliveries include both headers:

- `X-CryptFlare-Signature` - signed with the new secret
- `X-CryptFlare-Signature-Previous` - signed with the old secret

Update your consumer to verify against the new secret, then remove the old one after the grace period expires.

### Event replay

Resend historical audit events to a subscription using the replay button or the [replay endpoint](/api-reference/event-subscriptions#replay-events) with a date range. Up to 100 events per replay. Replay deliveries include an `X-CryptFlare-Replay: true` header so consumers can distinguish replays from live events. This is useful for backfilling a new endpoint or recovering from an outage.

## Integrations

CryptFlare supports pre-built integrations for popular platforms. When creating a subscription, select the destination format to automatically transform event payloads into the platform's native format. For custom endpoints, select **Raw JSON** - the audit event payload is delivered as documented in [payload format](#payload-format).
## Common use cases

- **Custom API endpoints** - Receive events at your own HTTP endpoint for processing
- **Compliance logging** - Forward events to an external SIEM or log aggregator
- **Automation triggers** - Kick off CI/CD pipelines when configurations change
- **Audit trail backup** - Mirror audit events to your own data store
- **Chat notifications** - Get formatted alerts in Slack, Discord, or Teams
- **Incident management** - Trigger PagerDuty or Opsgenie alerts on critical events

## Getting started

1. Navigate to **Event Subscriptions** in the vault sidebar
2. Click **New Subscription** in the top right
3. Enter a name, your delivery URL, and a signing secret (or generate one)
4. Choose to receive all events or pick specific event types
5. Click **Create**, then use the **Send test event** button to verify your endpoint receives the payload

For API usage, see the [Event Subscriptions API reference](/api-reference/event-subscriptions).

---

# Federated Identity

Source: https://docs.cryptflare.com/security/federated-identity

CryptFlare runs its own OIDC issuer so sync connections can push to AWS, GCP, Azure, and Kubernetes using short-lived federated credentials instead of long-lived API keys.

# Federated Identity

CryptFlare runs its own [OpenID Connect (OIDC)](https://openid.net/specs/openid-connect-core-1_0.html) identity provider. Every cloud destination we sync to (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, Kubernetes clusters) supports OIDC federation as a native auth mode, so you can grant CryptFlare access **without ever handing us a static credential**. You register our issuer URL in your cloud account once, bind a specific subject to a specific role/service account, and from that moment on every sync mints a short-lived signed assertion, exchanges it for a provider token, and pushes your secrets. No API keys to rotate. No secret material for an attacker to steal.
This is the same trust model GitHub Actions uses with AWS, Terraform Cloud uses with GCP, and every major CI/CD platform has standardised on. CryptFlare is just another relying party - except the "workload" that needs to talk to your cloud is our sync consumer, and the scope of what it can do is limited to whatever you chose to bind to the federated subject. ## Why federated identity In the service-account-key model, you export a JSON key from your cloud provider, paste it into CryptFlare, and we store it encrypted at rest. If our database is exfiltrated *and* the per-integration AES key is compromised, the attacker holds a valid credential until you manually rotate it. Federation eliminates that blast radius entirely: the only thing stored on our side is the name of your Workload Identity pool. Nothing there is a credential. The actual trust lives in the signed JWT we mint at sync time, valid for five minutes. Want to cut CryptFlare off immediately? Remove the IAM binding on the target role/service-account in your cloud console. No need to wait for us to propagate a revocation, no need to log in to our dashboard, no dependency on CryptFlare being available. The trust is a policy you control. Every sync connection in CryptFlare mints assertions under a unique subject like `cryptflare:org:{orgId}:sync:{connId}:v1`. Your IAM binding pins to *that exact subject*, meaning connection A can only impersonate the role you granted to connection A. A different connection in the same CryptFlare org - even one you created yourself - cannot reach the same role unless you explicitly bind it too. The tenant isolation lives in the cloud provider, not in our code. Service account keys need to rotate. You either remember to do it manually, or you set a calendar reminder, or you ship a rotation pipeline. With federation there is nothing to rotate on your side. 
CryptFlare rotates its own signing keys automatically (every 60 days, with a 48h overlap window) and the JWKS endpoint we publish handles key discovery transparently for your cloud provider. Every federated assertion is logged on our side (`audit_logs` rows with the subject + audience + issued-at + jti), and on your side your cloud provider logs every STS token exchange and every API call made with the resulting token. Pairing the two gives you end-to-end traceability: "which CryptFlare connection produced which API call in my cloud, at what time, for which secret."

## How it works

At sync time the worker walks a three-hop token exchange. It mints a signed assertion, trades it for a short-lived cloud token, and uses that token to call the destination secret store. Nothing static ever reaches your cloud:

1. The sync worker asks the OIDC issuer for a signed JWT (`sub`, `aud`, 5 min TTL) and receives an RS256 JWT carrying the current `kid`.
2. The worker exchanges the JWT at the cloud provider's STS, which verifies the signature via its cached JWKS and matches the subject binding, then returns a federated access token.
3. On GCP (the second hop) the worker impersonates the target service account to obtain a short-lived provider token.
4. The worker calls the destination secret store (Secret Manager or Key Vault) and receives an ack on write or list.

Each hop shortens the credential lifetime, so even a token captured in transit is useful for minutes at most against one pinned audience.

Our API worker publishes the two well-known endpoints the OIDC spec mandates:

```
https://api.cryptflare.com/.well-known/openid-configuration
https://api.cryptflare.com/.well-known/jwks.json
```

The first advertises the issuer URL, the JWKS URI, the signing algorithms (RS256), and the supported claims. The second exposes the public keys currently in use - typically two at a time, so in-flight tokens signed with the previous key continue to validate during a rotation window.

One-time setup in your cloud provider's IAM console:

- **GCP**: Create a Workload Identity Pool + an OIDC provider pointing at `https://api.cryptflare.com`.
Grant `roles/iam.workloadIdentityUser` on the target service account to the federated subject (see below). - **AWS**: Create an IAM OIDC provider with our issuer URL + audience. Attach a role with a trust policy that pins the `sub` claim. - **Azure**: Add a federated credential to a multi-tenant App Registration with our issuer and subject. - **Kubernetes**: Configure the API server's `--service-account-issuer` flag to trust our issuer, or register our issuer with your cluster's projected service account token configuration. The resource identifier you paste into CryptFlare (the WIF provider name, the AWS role ARN, etc.) is all we need to know about your side. When a sync fires, the consumer calls the OIDC issuer module: ```ts const assertion = await mintFederationAssertion({ issuer: 'https://api.cryptflare.com', subject: 'cryptflare:org:org_abc123:sync:conn_xyz789:v1', audience: 'projects/123/locations/global/workloadIdentityPools/cryptflare/providers/cryptflare-oidc', ttlSeconds: 300, }); ``` The JWT header contains `alg: RS256` and `kid: `. The payload carries `iss`, `sub`, `aud`, `iat`, `exp`, and a `jti` for replay protection. It is signed with our current RSA-2048 private key and lives for five minutes. The consumer posts the JWT to your cloud provider's token endpoint. Your provider: 1. Fetches `https://api.cryptflare.com/.well-known/jwks.json` (cached for ~24h). 2. Verifies the JWT signature against the advertised public key. 3. Checks the subject against your pre-configured binding. 4. Issues a short-lived provider access token scoped to what your binding allows. For GCP this is a two-hop flow (STS → IAM Credentials), for AWS/Azure a single-hop exchange. CryptFlare handles the protocol differences inside each provider adapter. With the provider token in hand, CryptFlare calls the native secret-store API (Secret Manager, Key Vault, Secrets Manager, the Kubernetes API, ...) to list, create, or update secrets. 
The token expires within the hour - even if we cached it wrong or our environment leaked, the window of exposure is narrow and no static credential is ever present. ## Subject format Every federated assertion CryptFlare mints uses a deterministic subject you can paste directly into an IAM binding: ``` cryptflare:org:{orgId}:sync:{connId}:v1 ``` - `orgId` - your CryptFlare organisation ID (prefix `org_`). - `connId` - the sync connection ID (prefix `conn_`). - `:v1` - version suffix. Reserved for future format changes. The vault dashboard shows the full subject string on every connection's configuration page so you can copy-paste it into the cloud-provider binding without typing. We could issue assertions under `cryptflare:org:{orgId}`, but that would mean *any* sync connection in your org could impersonate *any* role you bound to that subject. By scoping per connection, a compromised connection can only reach exactly what that connection was granted - even if two connections are in the same CryptFlare org, they cannot see each other's cloud permissions unless you configure them to. ## Key rotation CryptFlare's OIDC signing keys rotate automatically. A rotation cron runs on a schedule: The cron generates a fresh RSA-2048 keypair, moves the current key into the PREVIOUS slot, and publishes the new key as CURRENT. JWKS now advertises both keys, so cloud providers caching the old public key continue to validate assertions signed with it. The PREVIOUS key is deleted from the API worker's secrets and dropped from JWKS. Any cloud provider that cached the keyset within the past 24h has already refreshed by now; anything still holding the old key will retry and pick up the new one. Your IAM binding is against the **subject**, not against any specific key. Because our JWKS endpoint advertises the current valid keys, cloud providers re-fetch automatically on cache expiry and pick up whichever key we used to sign the newest assertion. 
There is nothing you need to do, update, or rotate. ## Supported providers | Provider | Auth mode | Setup guide | |---|---|---| | GCP Secret Manager | Workload Identity Federation | Set up a WIF pool + provider, bind `roles/iam.workloadIdentityUser` on the target SA | | Azure Key Vault | Workload Identity Federation | Add a federated credential to a multi-tenant App Registration | | AWS Secrets Manager | IAM OIDC provider + role | Create an OIDC provider, attach a role with a trust policy pinning `sub` | | Kubernetes | Projected SA tokens (federated mode) | Configure cluster `--service-account-issuer` + RBAC | Each provider still supports the static-credential path (service account JSON key, API token, bearer token) for teams that cannot or prefer not to configure federation. The dashboard shows both options side-by-side when you register an integration, and we mark federated mode as the recommended choice. ## Security considerations Every JWT we mint has a five-minute expiry. Even if an assertion leaks in transit, it is useful for at most five minutes, and only against the specific audience it was minted for. Assertions carry a unique `jti` (JWT ID) claim. Cloud providers typically reject reuse within the assertion's TTL, and we never re-use the same `jti` across assertions. The `aud` claim is tied to your specific WIF provider / role / app registration. An assertion minted for connection A targeting one audience cannot be replayed against a different audience. The entire flow is signature-based. CryptFlare's private key never leaves our API worker's sealed environment; your cloud provider only ever sees the public key via JWKS. Your cloud provider's STS / token-exchange endpoints enforce their own rate limits. A runaway CryptFlare sync cannot exceed those limits - the worst case is our sync queue backs off and retries. Federation controls **who** can call your cloud API; it does not control **what** that caller can do. 
Follow the principle of least privilege: grant the target role (or SA) only the minimum permissions needed to write to the specific secret-store namespace you want CryptFlare to manage. Don't give CryptFlare `roles/secretmanager.admin` - give it `roles/secretmanager.secretAccessor` + `roles/secretmanager.secretVersionManager` scoped to the project (or secret) you want synced. ## OIDC discovery endpoints If your IAM tooling needs to verify our issuer before you can configure it, hit the endpoints directly: ```bash # Discovery document curl -s https://api.cryptflare.com/.well-known/openid-configuration # Current signing keys curl -s https://api.cryptflare.com/.well-known/jwks.json ``` The responses conform to [RFC 8414](https://datatracker.ietf.org/doc/html/rfc8414) (OAuth 2.0 Authorization Server Metadata) and [RFC 7517](https://datatracker.ietf.org/doc/html/rfc7517) (JSON Web Key), so every standards-compliant OIDC client library will discover and verify them automatically. --- # MCP Access Source: https://docs.cryptflare.com/security/mcp-access Control which tokens can reach the Model Context Protocol server at mcp.cryptflare.com. One permission gate, full audit trail, opt-in per token. # MCP Access CryptFlare exposes a [Model Context Protocol](https://modelcontextprotocol.io) server at **mcp.cryptflare.com** so AI agents like Claude, Cursor, and Zed can discover and call a typed subset of the REST API. Access is gated by a dedicated permission, `mcp:use`, which is independent of any resource-level grant. Owners audit and revoke AI-agent access without touching REST permissions. ## Why a separate permission REST permissions define WHAT a token can do. `mcp:use` defines WHERE it can act - the MCP channel at `mcp.cryptflare.com`, in addition to regular `api.cryptflare.com` traffic. - **Kill switch.** Revoke `mcp:use` from every token in one query to cut agent traffic without disrupting CI or humans. 
- **Compliance.** Auditors filter "which tokens can reach AI agents" in one field instead of walking every permission set.
- **Opt-in by default.** New tokens start without the permission. You grant it explicitly when the token needs to reach the MCP server.

## How it works

```mermaid
flowchart TD
    Check{"Token has mcp:use?"}
    Check -- No --> Reject["403 MCP_NOT_GRANTED"]
    Check -- Yes --> Tool{"Tool-level permission granted?"}
    Tool -- No --> ToolReject["403 PERMISSION_DENIED"]
    Tool -- Yes --> Allow["Tool executes via REST"]
```

The MCP worker enforces `mcp:use` in its bearer middleware before any tool is even listed. A token that lacks the gate cannot discover the tool list, let alone call one. Defence in depth: once past the gate, every individual tool still checks its own resource-level permission inside the handler.

## Default role grants

Users inherit `mcp:use` based on their org role. Owners can override any default through [role permissions](/security/roles). Service tokens and personal access tokens always start without the permission regardless of the creator's role. The token creation UI surfaces an opt-in checkbox; tick it to grant, leave it for a read-only token that cannot reach MCP.

## Adding MCP access to a token

Open the vault dashboard, expand the **Account** section in the left sidebar, and click **API Tokens**. The page hosts two tabs: **Workspace Tokens** (user-scoped, pinned to one workspace) and **Service Tokens** (org-scoped, ideal for CI and long-lived agents).

### Workspace token

From the vault sidebar, click **API Tokens** under **Account**. The **Workspace Tokens** tab is active by default. Pick the workspace the agent should bind to. Workspace tokens inherit only the permissions you tick; they never span other workspaces. In the **Permissions** section, enable **Call tools via mcp.cryptflare.com for AI agents** (robot icon).
Grant the resource permissions the agent actually needs (secrets read, pods read, and so on). The MCP scope alone opens the channel; it does nothing without operation perms. After clicking **Create**, copy the `cf_live_...` or `cf_test_...` value. Paste it into your MCP client's configuration. CryptFlare shows the value once. ### Service token On the same **API Tokens** page, click the **Service Tokens** tab. Service tokens are preferred for long-lived agents because they outlive the creator's session and can scope to a specific environment. Open the creation dialog and give the token a descriptive name (for example, `claude-desktop-agent`). The permission picker is grouped by category. Find the **MCP** group at the bottom and tick **mcp:use**. Select the workspace and optional environment the token should bind to. A compromised service token cannot move laterally beyond its scope. Click **Create**, copy the `cf_live_...` or `cf_test_...` value once, paste into your agent's MCP config. ## Configuring your MCP client CryptFlare exposes the Streamable HTTP transport, which every modern MCP client supports. Pick your client below for a copy-paste config snippet. Swap `cf_live_...` for the token you minted above; the rest is ready for production. Not listed? Any client that speaks the MCP Streamable HTTP transport can connect with `url = https://mcp.cryptflare.com/mcp` and an `Authorization: Bearer` header. For stdio-only clients, wrap the endpoint with [`mcp-remote`](https://www.npmjs.com/package/mcp-remote). ## Revoking access Open **Account** > **API Tokens** in the vault sidebar. Switch between the **Workspace Tokens** and **Service Tokens** tabs, find the token, click **Revoke**. The MCP worker rejects that token within seconds (no cache beyond the current request). To stop a whole role from minting MCP-capable tokens, open **Organisation Settings** > **Roles** and uncheck **mcp:use**. 
Existing tokens that already hold the permission remain active until revoked individually; new tokens minted by that role skip the scope automatically. For a hard org-wide kill, remove `mcp:use` from every role, then revoke every active token that carries it. Use the audit log to sweep: filter by `action = token.created` and inspect the metadata's `mcp` flag. ## Audit trail Every MCP tool call emits a standard audit entry with `source = mcp` plus the tool name in `metadata.tool`. Filter in the audit log UI, or query the REST API: ```bash curl "https://api.cryptflare.com/v1/organisations/$ORG/audit?source=mcp&limit=50" \ -H "Authorization: Bearer $TOKEN" ``` The MCP worker delegates every operation to the REST API using the caller's token, so every tool call is audited as the corresponding REST action. There is no way for MCP activity to escape the audit chain. ## Security considerations A token with MCP access plus secrets read can reveal plaintext values through the reveal tool. Rotate it like any other high-privilege credential, store it in your password manager or secret vault, never commit it to source control. Service tokens scope to one workspace and environment and can be revoked independently. Personal access tokens are user-bound and carry the user's full org membership. For CI or always-on agents, mint a service token with the MCP scope. Write-class MCP tools default to a dry-run preview unless the caller passes `confirm: true`. Agents that blindly call `tools/call` get the preview back first so you can review the effect before it applies. Organisation IP allowlists apply to MCP traffic too. Limit MCP access to your corporate egress so a leaked token cannot be used from arbitrary networks. ## How Cipher relates to MCP [Cipher](/getting-started/cipher), the AI assistant built into the vault dashboard, can run the same set of actions external MCP clients can. 
You work with Cipher inside the dashboard using your normal login; external agents like Claude Desktop or Cursor connect through `mcp.cryptflare.com` with a token.

```mermaid
flowchart LR
    Agent["External MCP client"] -->|"Bearer token with mcp:use"| API
    Cipher -->|"your logged-in session"| API
```

**The short version:**

- **Cipher does not need an MCP token.** It runs inside the dashboard and inherits your logged-in permissions, so if you can already see a secret in the UI, Cipher can help you search or manage it.
- **External agents need a token with `mcp:use`.** Claude Desktop, Cursor, Zed, the OpenAI Agents SDK, and similar clients all use the same permission gate + bearer token flow described earlier on this page.
- **Both see the same actions.** New capabilities we ship to MCP show up for Cipher at the same time. You don't have to choose where to do something.
- **Cipher still never reveals plaintext.** Even if a tool could decrypt a value, Cipher is built to refuse it. See [Meet Cipher](/getting-started/cipher) for the full safety rules.

Logging a user out of the vault cuts their Cipher access immediately - no extra lever to pull. Revoking `mcp:use` on a token only stops that token from reaching `mcp.cryptflare.com`; it has no effect on Cipher because Cipher is not using a token.

---

# Notifications
Source: https://docs.cryptflare.com/security/notifications
In-app notifications for access requests, member invites, secret rotations, and more

# Notifications

CryptFlare sends in-app notifications to keep you informed about important events in your organisation. Notifications appear in the bell icon dropdown in the vault dashboard header.
## Notification types | Type | When it fires | |------|---------------| | `ticket_reply` | Someone replies to a support ticket you are involved in | | `access_request` | A team member submits a JIT access request that you can approve | | `access_granted` | Your JIT access request is approved | | `access_denied` | Your JIT access request is denied | | `member_invited` | You are invited to an organisation | | `secret_rotated` | A secret you have access to is rotated to a new version | | `policy_change` | A policy affecting your access is created, updated, or removed | ## How notifications work ### Delivery Notifications are delivered in real-time within the vault dashboard. The notification bell icon shows an unread count badge when you have unread notifications. ### Auto-polling The dashboard polls for new notifications every 30 seconds. You do not need to refresh the page to see new notifications. ### Reading and dismissing Click a notification to mark it as read. Each notification includes a link to the relevant resource (e.g., the access request, the rotated secret, or the policy that changed). You can mark all notifications as read at once from the dropdown. ## Notification content Each notification includes: | Field | Description | |-------|-------------| | Title | A short summary of the event | | Message | Additional context about what happened | | Link | A direct link to the relevant resource in the dashboard | | Timestamp | When the event occurred | ## When notifications are created Notifications are created server-side when the triggering action occurs. For example, when a manager approves a JIT access request, the requester receives an `access_granted` notification immediately. Notifications are scoped to an organisation. You only receive notifications for events in organisations where you are a member. 
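The reading model described above - an unread badge, per-item mark-as-read, and mark-all - can be sketched as a small client-side store. The type shape, class, and method names here are hypothetical illustrations, not CryptFlare's published SDK:

```typescript
// Fields mirror the notification content table: title, message, link, timestamp.
type Notification = {
  id: string;
  type: 'ticket_reply' | 'access_request' | 'access_granted' | 'access_denied'
      | 'member_invited' | 'secret_rotated' | 'policy_change';
  title: string;
  message: string;
  link: string;      // deep link to the relevant resource
  createdAt: string; // when the event occurred
  read: boolean;
};

class NotificationStore {
  private items: Notification[] = [];

  // Poll results may overlap earlier pages, so dedupe by id.
  ingest(batch: Notification[]): void {
    const known = new Set(this.items.map((n) => n.id));
    this.items.push(...batch.filter((n) => !known.has(n.id)));
  }

  // Drives the badge on the bell icon.
  unreadCount(): number {
    return this.items.filter((n) => !n.read).length;
  }

  // Clicking a notification marks it as read.
  markRead(id: string): void {
    const n = this.items.find((x) => x.id === id);
    if (n) n.read = true;
  }

  // "Mark all as read" from the dropdown.
  markAllRead(): void {
    this.items.forEach((n) => (n.read = true));
  }
}
```

Wiring `ingest` to a 30-second timer around your fetch call would mirror the dashboard's polling cadence.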
## Use cases ### JIT access workflow A developer submits an access request for production secrets Managers and owners receive an `access_request` notification A manager approves the request The developer receives an `access_granted` notification with a link to the granted resource ### Secret rotation alerts When a secret is rotated, members with access to that secret receive a `secret_rotated` notification. This helps teams stay aware of credential changes that might affect their deployments. ### Policy change awareness When a global or team policy is created, updated, or removed, affected members receive a `policy_change` notification. This is especially useful for understanding why access was suddenly granted or revoked. --- # Ownership transfer Source: https://docs.cryptflare.com/security/ownership-transfer Transfer organisation ownership to another member # Ownership transfer Every organisation has a single owner. The owner has full control over the organisation, including billing, member management, and the ability to delete it. If the owner needs to hand off control, they can initiate an ownership transfer. ## How it works The current owner initiates a transfer by specifying the recipient's email address CryptFlare sends a secure email invitation to the recipient The recipient accepts or declines the transfer from the vault dashboard If accepted, ownership changes immediately ## Who can initiate Only the current organisation owner can initiate a transfer. Managers, billers, and other roles cannot start a transfer. ## Who can receive The recipient must be identified by email. If they are already a member of the organisation, their role is promoted to owner. If they are a CryptFlare user but not yet a member, they are added as a member and promoted to owner. ## Accepting a transfer After logging in, the recipient sees pending transfers on their dashboard. They can choose to accept or decline. 
When a transfer is accepted: The recipient becomes the new owner with full permissions The previous owner is demoted to the **manager** role The previous owner retains membership and can continue working in the organisation ## Declining a transfer The recipient can decline the transfer. The current owner remains the owner and is notified. The transfer record is marked as declined. ## Expiration Transfer invitations expire after **7 days**. If the recipient does not respond within that window, the transfer is automatically marked as expired. The owner can initiate a new transfer at any time after expiration. ## Cancelling a transfer The owner can cancel a pending transfer at any time before the recipient responds. This is useful if the transfer was initiated by mistake or if plans change. ## Constraints - Only one transfer can be pending per organisation at a time - The owner cannot transfer to themselves - The transfer is tied to the recipient's email, not their user ID ## Transfer statuses | Status | Description | |--------|-------------| | `pending` | Transfer initiated, waiting for recipient response | | `accepted` | Recipient accepted, ownership has changed | | `declined` | Recipient declined the transfer | | `expired` | 7-day window passed without a response | | `cancelled` | Owner cancelled the transfer before a response | ## Audit trail All transfer actions are logged: - `organisation.transfer_initiated` - When the owner starts the transfer - `organisation.transfer_accepted` - When the recipient accepts - `organisation.transfer_cancelled` - When the owner cancels See [audit logs](/security/audit-logs) for details on viewing and filtering these events. ## API reference ### Initiate ownership transfer ### Accept a transfer ### Decline a transfer ### Cancel a pending transfer ### Get pending transfer status ### Get pending transfers for current user See the [organisations API reference](/api-reference/organisations) for additional details. 
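The lifecycle above can be modelled as a tiny state machine, which is handy for reasoning about which transitions are legal. The statuses and the 7-day expiry come from this page; the function and field names are hypothetical:

```typescript
type TransferStatus = 'pending' | 'accepted' | 'declined' | 'expired' | 'cancelled';

interface Transfer {
  recipientEmail: string; // transfers bind to the recipient's email, not their user ID
  status: TransferStatus;
  initiatedAt: Date;
}

const TRANSFER_TTL_DAYS = 7; // invitations expire after 7 days without a response

// Resolve the effective status, applying the 7-day expiry window lazily.
function effectiveStatus(t: Transfer, now: Date = new Date()): TransferStatus {
  if (t.status !== 'pending') return t.status;
  const ageDays = (now.getTime() - t.initiatedAt.getTime()) / 86_400_000;
  return ageDays > TRANSFER_TTL_DAYS ? 'expired' : 'pending';
}

// Only a pending, unexpired transfer can move to a terminal state.
function resolve(t: Transfer, action: 'accept' | 'decline' | 'cancel', now: Date = new Date()): Transfer {
  if (effectiveStatus(t, now) !== 'pending') {
    throw new Error('transfer is no longer pending');
  }
  const next: Record<typeof action, TransferStatus> = {
    accept: 'accepted',  // recipient becomes owner; previous owner drops to manager
    decline: 'declined', // current owner stays owner and is notified
    cancel: 'cancelled', // owner withdrew before the recipient responded
  };
  return { ...t, status: next[action] };
}
```

Every terminal state is reachable only from `pending`, which matches the constraint that one transfer can be pending per organisation at a time.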
---

# Policies
Source: https://docs.cryptflare.com/security/policies
Fine-grained, deny-first access policies that layer on top of RBAC roles

# Policies

Policies give you fine-grained control over who can do what, where, and when - beyond what roles alone can express. While [RBAC roles](/security/access-control) determine a member's baseline permissions, policies let you add targeted overrides that deny or allow specific actions on specific resources.

Policies affect everything - API requests, the vault dashboard, and the [global search bar](/security/access-control#search-respects-access-control). If a deny policy blocks access to a resource, that resource is hidden from search results, invisible in the tree view, and inaccessible via the API.

Policies follow a **deny-first** evaluation model. If any deny policy matches, the request is blocked regardless of other allow policies. This makes it straightforward to lock down sensitive resources and then selectively grant access where needed.

## When to use policies

- **Restrict production access** - deny write operations to production environments for most members
- **Time-based access** - allow deployments only during business hours
- **IP restrictions** - limit admin operations to your office network
- **Temporary elevated access** - use JIT (just-in-time) access requests for break-glass scenarios
- **Team-scoped rules** - apply different policies to different teams within the same organisation

## Evaluation order

When a member makes a request, CryptFlare evaluates access through a 7-step chain. The first match wins - evaluation stops as soon as a step produces a definitive result.

| Step | Layer | Description |
|------|-------|-------------|
| 1 | **JIT access grants** | Active time-limited grants from approved access requests. If a grant matches, access is allowed immediately. |
| 2 | **Global DENY policies** | Organisation-wide deny policies, ordered by priority (highest first). If any matches, the request is denied. |
| 3 | **Team DENY policies** | Deny policies scoped to the member's team. If any matches, the request is denied. |
| 4 | **Team ALLOW policies** | Allow policies scoped to the member's team. If any matches, the request is allowed. |
| 5 | **Global ALLOW policies** | Organisation-wide allow policies. If any matches, the request is allowed. |
| 6 | **Org role permissions** | Falls back to standard RBAC. If the member's role grants the permission, access is allowed. |
| 7 | **Default: DENY** | If nothing matched, the request is denied. |

JIT grants take the highest priority because they represent an explicit, approved, time-limited override. Deny policies always come before allow policies at the same scope level, which is what makes the system deny-first.

## Resource patterns

Policies target resources using a `{type}:{pattern}` syntax where `pattern` supports `*` glob matching.

| Pattern | Matches |
|---------|---------|
| `workspace:*` | All workspaces |
| `workspace:payments` | Only the workspace named "payments" |
| `environment:prod-*` | Environments starting with "prod-" (e.g. prod-us, prod-eu) |
| `environment:staging` | Only the "staging" environment |
| `pod:*` | All pods |
| `pod:database-*` | Pods starting with "database-" |
| `*:*` | Everything - all resource types and names |

A single policy can include multiple resource patterns. The policy matches if **any** of its patterns match the target resource.

## Permissions reference

Policies reference the same permission strings used throughout CryptFlare. The table below is generated from the shared permissions constant - when new permissions are added to the platform, they appear here automatically.

## Conditions

Conditions let you narrow when a policy is active. A policy with conditions only applies when **all** conditions are met.

### Time windows

Restrict a policy to specific hours using UTC times.
| Condition | Format | Description | |-----------|--------|-------------| | `time_window` | `"HH:MM-HH:MM"` | Policy is only active during this UTC time range | | `time_window_exclude` | `"HH:MM-HH:MM"` | Policy is inactive during this UTC time range | For example, a deny policy with `time_window: "00:00-09:00"` would block the action only between midnight and 9 AM UTC. Outside that window, the policy is skipped entirely. ### IP ranges Restrict a policy to specific networks using CIDR notation. | Condition | Format | Description | |-----------|--------|-------------| | `ip_range` | `"CIDR"` | Policy is only active when the request comes from this IP range | For example, an allow policy with `ip_range: "10.0.0.0/8"` would only grant access from internal network addresses. ### Days of week Restrict a policy to specific weekdays (UTC). | Condition | Format | Description | |-----------|--------|-------------| | `days_of_week` | `["mon","tue",...]` or CSV string | Policy is only active on the listed days | The accepted day tokens are `mon`, `tue`, `wed`, `thu`, `fri`, `sat`, `sun`. For example, a deny policy with `days_of_week: ["sat","sun"]` blocks the action on weekends and is completely skipped on weekdays. Day-of-week is evaluated against the server clock in UTC - mix it with `time_window` if you need a specific local window. ### Country restrictions Restrict a policy based on the country the request originates from. Country codes are ISO 3166-1 alpha-2 (`US`, `GB`, `AU`, etc.) and are read from the Cloudflare edge `cf.country` value, which is tamper-proof and free for every request. 
| Condition | Format | Description | |-----------|--------|-------------| | `country_allow` | `["US","CA"]` or CSV string | Policy is only active when the request originates from one of the listed countries | | `country_deny` | `["KP","IR"]` or CSV string | Policy is only active when the request originates from one of the listed countries (use on DENY policies) | Both conditions check **list membership** - a request whose country is in the list causes the condition to be met. The difference between `country_allow` and `country_deny` is only semantic: pair `country_allow` with allow policies and `country_deny` with deny policies so the policy reads cleanly to operators and auditors. If the request has no detectable country (for example a local CLI request that bypasses the Cloudflare edge), the condition is treated as **not met** and the policy is skipped. This is the fail-safe default - a missing country never silently satisfies a geo-scoped rule. ### Require recent MFA Gate a policy behind a recent multi-factor authentication step. | Condition | Format | Description | |-----------|--------|-------------| | `require_mfa` | `true` | Policy is only active when the actor completed an MFA challenge within the recent session window | Use this to build step-up auth flows for high-value resources. When the actor has not recently completed MFA, the condition fails and the policy is skipped - combine with a deny-by-default fallthrough for maximum safety. ### Secret age Block reveals on stale secrets to enforce rotation hygiene. | Condition | Format | Description | |-----------|--------|-------------| | `max_secret_age_days` | integer | Policy is active when the target secret has not been rotated in at least this many days | For example, a deny policy with `max_secret_age_days: 90` refuses secret reveal requests on anything that has not rotated in the past 90 days. The operator sees a clean error message prompting them to rotate first. 
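Conditions like these reduce to simple predicates over the request context, all of which must pass for the policy to be active. Below is a sketch covering time windows, days of week, countries, and secret age; the field names on the context object are assumptions for illustration, not the real engine's internal shapes:

```typescript
interface Conditions {
  time_window?: string;     // "HH:MM-HH:MM", UTC
  days_of_week?: string[];  // ["mon", ..., "sun"], evaluated in UTC
  country_allow?: string[]; // ISO 3166-1 alpha-2 codes
  country_deny?: string[];
  max_secret_age_days?: number;
}

interface RequestContext {
  now: Date;              // server clock (UTC)
  country?: string;       // from the Cloudflare edge; absent for e.g. local CLI calls
  secretAgeDays?: number; // only present on secret-level actions
}

const DAYS = ['sun', 'mon', 'tue', 'wed', 'thu', 'fri', 'sat'];
const toMinutes = (hhmm: string): number => {
  const [h, m] = hhmm.split(':').map(Number);
  return h * 60 + m;
};

// A policy with conditions only applies when ALL conditions are met.
function conditionsMet(c: Conditions, ctx: RequestContext): boolean {
  if (c.time_window) {
    const [start, end] = c.time_window.split('-').map(toMinutes);
    const nowMin = ctx.now.getUTCHours() * 60 + ctx.now.getUTCMinutes();
    if (nowMin < start || nowMin >= end) return false;
  }
  if (c.days_of_week && !c.days_of_week.includes(DAYS[ctx.now.getUTCDay()])) return false;
  // Geo conditions check list membership; a missing country never satisfies
  // a geo-scoped rule (the fail-safe default described above).
  if (c.country_allow && (!ctx.country || !c.country_allow.includes(ctx.country))) return false;
  if (c.country_deny && (!ctx.country || !c.country_deny.includes(ctx.country))) return false;
  // Active only when the secret has gone unrotated for at least the threshold.
  if (c.max_secret_age_days !== undefined) {
    if (ctx.secretAgeDays === undefined || ctx.secretAgeDays < c.max_secret_age_days) return false;
  }
  return true;
}
```

A deny policy whose conditions are not met is skipped entirely, so evaluation falls through to the next step of the deny-first chain.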
The `max_secret_age_days` condition only applies to secret-level actions where the engine has access to the secret's age.

### Resource tags

Condition a policy on arbitrary tags attached to the target resource. Tags are free-form labels you attach via the tags API (see [Tagging](/security/policies#tagging)). The policy engine loads the target resource's tags once per evaluation and checks them against the condition.

| Condition | Format | Description |
|-----------|--------|-------------|
| `resource_tag_any` | `["pci-scope","sensitive"]` or CSV string | Policy fires when the resource carries at least one of the listed tags |
| `resource_tag_all` | `["pci-scope","production"]` | Policy fires when the resource carries every one of the listed tags |
| `resource_tag_none` | `["public","demo"]` | Policy fires when the resource carries none of the listed tags |

Tags are the cleanest way to express compliance scoping - tag all PCI-scoped pods with `pci-scope` and write a single policy that blocks cross-region syncs on anything carrying that tag.

## Tagging

Resources (workspaces, environments, pods, secrets) can carry free-form tags. Tags power the `resource_tag_*` conditions above and also make compliance-scoped filtering across the audit log possible.

**API endpoints**

| Method | Path | Description |
|--------|------|-------------|
| `POST` | `/v1/organisations/:org/tags` | Attach a tag to a resource |
| `DELETE` | `/v1/organisations/:org/tags` | Remove a tag from a resource |
| `GET` | `/v1/organisations/:org/tags?resourceType=pod&resourceId=...` | List tags for a single resource |
| `GET` | `/v1/organisations/:org/tags/org` | List every distinct tag in the organisation (for autocomplete) |

**Rules**

- Tags are **case-insensitive** and stored lowercase - `PCI-Scope` and `pci-scope` are the same tag.
- Tags must match `[a-z0-9][a-z0-9._:-]{0,62}` - 1-63 characters, starting alphanumeric.
- Duplicate tags on the same resource return `409 TAG_ALREADY_EXISTS` (idempotent from the operator's view). - Deleting a workspace, environment, pod, or secret cascades to its tags - no orphan rows are left behind. - Tag operations are audit-logged as `tag.added` and `tag.removed`. ## Plan limits Policy features and limits are determined by your organisation's plan. These values are enforced on both the API and dashboard. ## Best practices **Start with deny, then add exceptions.** Create broad deny policies for sensitive resources first, then layer targeted allow policies for the teams and members that need access. This is safer than starting open and trying to close gaps. **Use the principle of least privilege.** Grant the minimum permissions needed for each team or workflow. If a CI/CD pipeline only needs to read secrets, do not give it write access. **Use conditions for time-based access.** Instead of manually toggling policies, use `time_window` conditions to automatically restrict production access outside business hours. **Test with simulation before enabling.** On Team plans, use the [simulate endpoint](/api-reference/policies) to verify that a new policy behaves as expected before enabling it. This prevents accidental lockouts. **Use JIT access for break-glass scenarios.** Rather than giving permanent production write access, have developers submit access requests when they need to deploy a hotfix. The approval trail is logged for audit purposes. **Name policies clearly.** Use descriptive names like "Deny prod writes outside business hours" rather than "Policy 1". Clear names make it easier to understand and audit your policy set. ## Examples ### Deny all writes to production Block any write, delete, or rotate operation on production environments for everyone. Individual exceptions can be added with higher-priority allow policies or JIT access grants. 
```json { "name": "Deny all prod writes", "effect": "DENY", "permissions": [ "secrets:write", "secrets:delete", "secrets:rotate", "secrets:lock", "secrets:restore" ], "resources": ["environment:prod-*", "environment:production"], "conditions": {}, "priority": 100 } ``` ### Allow read-only access to staging Grant list and read access to all staging environments. Because deny policies are evaluated first, this will not override any deny rules. ```json { "name": "Allow staging reads", "effect": "ALLOW", "permissions": [ "secrets:list", "secrets:read", "secrets:versions" ], "resources": ["environment:staging-*", "environment:staging"], "conditions": {}, "priority": 50 } ``` ### Time-restricted production access Allow write access to production only during business hours (9 AM to 5 PM UTC). Outside this window, the policy does not apply and the deny-first chain takes over. ```json { "name": "Prod writes - business hours only", "effect": "ALLOW", "permissions": [ "secrets:write", "secrets:rotate" ], "resources": ["environment:prod-*"], "conditions": { "time_window": "09:00-17:00" }, "priority": 80 } ``` ### IP-restricted admin access Allow organisation management operations only from the corporate network. Requests from other IP addresses will fall through to the next evaluation step. ```json { "name": "Admin ops - office network only", "effect": "ALLOW", "permissions": [ "org:update", "org:delete", "members:invite", "members:remove", "members:role_assign" ], "resources": ["*:*"], "conditions": { "ip_range": "10.0.0.0/8" }, "priority": 90 } ``` ### Freeze writes on weekends Block every write, rotation, and delete on Saturdays and Sundays (UTC). Reads still work so on-call engineers can debug. 
```json { "name": "Weekend change freeze", "effect": "DENY", "permissions": [ "secrets:write", "secrets:rotate", "secrets:delete", "pods:delete", "environments:delete" ], "resources": ["*:*"], "conditions": { "days_of_week": ["sat", "sun"] }, "priority": 70 } ``` ### Block high-risk countries Deny every action from a list of sanctioned or high-risk jurisdictions. Country codes are resolved from the Cloudflare edge. ```json { "name": "Deny sanctioned countries", "effect": "DENY", "permissions": ["*"], "resources": ["*:*"], "conditions": { "country_deny": ["KP", "IR", "SY", "CU"] }, "priority": 85 } ``` ### Require recent MFA for production reveals Forces a step-up MFA check before an operator can decrypt a production secret. Pair with your existing business-hours policy for layered defence. ```json { "name": "Prod reveals require recent MFA", "effect": "DENY", "permissions": ["secrets:read"], "resources": ["environment:prod-*"], "conditions": { "require_mfa": true }, "priority": 90 } ``` ### Deny reveals on stale secrets Refuses reveal requests on any secret older than 90 days without rotation - a hard-edged rotation-hygiene control. ```json { "name": "Deny reveals on stale secrets", "effect": "DENY", "permissions": ["secrets:read"], "resources": ["*:*"], "conditions": { "max_secret_age_days": 90 }, "priority": 60 } ``` ### PCI scope isolation with tags Tag your PCI-scoped pods and bind an MFA-gated policy to the `pci-scope` tag. This is the cleanest way to express a PCI DSS audit boundary because the rule can be pointed at by auditors and mapped to PCI DSS controls 7.2, 7.3, and 8.2. 
```json { "name": "PCI scope - MFA required for writes", "effect": "DENY", "permissions": [ "secrets:write", "secrets:rotate", "secrets:delete" ], "resources": ["*:*"], "conditions": { "resource_tag_all": ["pci-scope"], "require_mfa": true }, "priority": 95 } ``` ## Compliance mapping Every policy template in the library ships with a compliance mapping that documents which external controls the template materially helps satisfy. Hover the "Compliance" column in the policy library UI to see the exact controls per framework. The supported frameworks are: | Framework | Standard | Why it matters | |-----------|----------|----------------| | **SOC 2** | AICPA Trust Services Criteria | Common Criteria 6.1-6.7 (logical access), 7.2-7.3 (monitoring) | | **HIPAA** | 45 CFR Part 164 Subpart C | Administrative and technical safeguards for ePHI | | **ISO 27001** | ISO/IEC 27001:2022 | Annex A.5.15-A.8.24 (access control, logging, cryptography) | | **PCI DSS** | v4.0.1 | Requirements 3.5-3.6 (keys), 7.2-7.3 (access), 8.2 (user identification) | | **GDPR** | Regulation (EU) 2016/679 | Articles 25 and 32 (data protection by design, security of processing) | | **NIST 800-53** | Rev. 5 | AC-2, AC-3, AC-5, AC-6, AU-2, IA-5, SC-12, SC-28 | | **CIS v8** | Critical Security Controls | 3.11, 5.4, 6.1-6.2, 8.2 | | **FedRAMP** | Moderate baseline | FedRAMP-enhanced NIST AC-2, AU-2, SC-12 | Compliance mappings are advisory, not certifying. Applying a template contributes to a control but does not alone satisfy it - your full policy stack, access review process, and audit evidence still matter. --- # Roles and permissions Source: https://docs.cryptflare.com/security/roles Understand the built-in roles, what each one can do, and how organisation owners can customise permissions. # Roles and permissions Every member of a CryptFlare organisation is assigned a role. Roles determine what actions a member can perform - from viewing secrets to managing billing. 
Organisation owners can customise the default permissions for each role to match their team's needs. ## Built-in roles CryptFlare ships with six roles. Each role has a default set of permissions designed for a specific function within your organisation. ## Role summary > The **Owner** role always has full access and cannot be restricted. This ensures that at least one person can always recover the organisation. ## How permissions work Permissions follow the format `resource:action` - for example, `secrets:read` allows revealing secret values, and `members:invite` allows inviting new members. Permissions are grouped by resource type (Organisation, Billing, Secrets, etc.) and each group contains granular actions. The full matrix below shows exactly which permissions each role has by default. ## Default permission matrix This matrix is generated from the platform's source of truth. When new permissions are added, they appear here automatically. ## Customising role permissions Organisation owners can override the default permissions for any non-owner role. This lets you tailor access to your team's workflow - for example, granting developers audit log access, or restricting managers from deleting workspaces. Navigate to **Organisation Settings** and select the **Roles** tab. Click any permission toggle to grant or revoke it for a role. Changes take effect immediately for all members with that role. The permission matrix shows green checkmarks for granted permissions and grey crosses for revoked ones. Owner permissions are always locked. ### What happens when you customise - **Changes are org-specific** - customisations only affect your organisation. Other organisations using the same roles keep the platform defaults. - **RBAC enforcement** - the API enforces customised permissions on every request. If you revoke `secrets:read` from the developer role, developers in your org will get a 403 error when trying to reveal secrets. 
- **Frontend gating** - the vault dashboard hides features that the current user doesn't have permission to access. - **Audit logged** - every permission change is recorded in the audit log with the actor, the role affected, and the permission toggled. - **Policy interaction** - team policies and global policies still layer on top of role permissions. A policy can grant additional access (e.g. JIT access to a specific workspace) or deny access that the role would normally allow. ### Limitations - **Owner cannot be restricted** - the owner role always has all permissions. This is a safety net to prevent lockouts. - **Cannot add new roles** - you can only customise the permissions of the six built-in roles. Custom named roles are not yet supported. - **Cannot restrict yourself** - if you are the only owner, you cannot transfer ownership and then restrict the owner role (it's always full access). ## Roles and other features ### Service tokens Service tokens have their own scoped permissions that are independent of roles. When you create a service token, you select exactly which permissions it has. Service tokens can only access a subset of permissions: Workspaces, Environments, Secrets, Pods, Audit, and the MCP channel. The `mcp:use` permission is granted by default to Owner, Manager, and Developer roles; see [MCP access](/security/mcp-access) for the full breakdown. ### Teams and policies Teams group members together and policies define what those teams can access. Policies evaluate on top of role permissions: 1. **JIT access grants** - temporary permissions from approved access requests (checked first) 2. **Global deny policies** - org-wide deny rules (highest priority) 3. **Team deny policies** - team-scoped deny rules 4. **Team allow policies** - team-scoped allow rules 5. **Global allow policies** - org-wide allow rules 6. 
**Role permissions** - the base RBAC layer (fallback) This means a policy can override role permissions in both directions - granting access that the role doesn't have, or denying access that it does. ### SSO group mappings When SSO is configured, external identity provider groups can be mapped to CryptFlare roles. Members who authenticate via SSO are automatically assigned the role mapped to their group. The customised permissions for that role apply to them just like any other member. ## API reference For full endpoint documentation including request/response examples, see the [Role Permissions API Reference](/api-reference/role-permissions). --- # Single Sign-On (SSO) Source: https://docs.cryptflare.com/security/sso Authenticate your team through your corporate identity provider with OIDC-based SSO # Single Sign-On (SSO) CryptFlare supports Single Sign-On so organisations on the **Team plan** can authenticate members through their corporate identity provider. Instead of email OTP codes, your team signs in with their existing company credentials - same login they use for everything else. ## Why SSO - **One login for everything** - your team uses the same credentials they already know - **Centralised access control** - onboarding and offboarding happens in your IdP, not in CryptFlare - **Compliance** - meet enterprise security requirements that mandate centralised authentication - **Automatic provisioning** - new team members get access the moment they join your IdP directory ## Supported providers CryptFlare supports any identity provider that speaks OpenID Connect (OIDC). This covers the vast majority of enterprise identity platforms. 
| Provider | Protocol | Notes | |----------|----------|-------| | **Microsoft Entra ID** (Azure AD) | OIDC | Full support including group claims and large group handling (200+ groups via Microsoft Graph API) | | **Google Workspace** | OIDC | Works with Google Cloud Identity and Workspace directories | | **Okta** | OIDC | Supports Okta groups for automatic role mapping | | **Auth0** | OIDC | Works with Auth0 organisations and connections | | **Generic OIDC** | OIDC | Any provider that implements the OpenID Connect standard | > SAML 2.0 support is planned for a future release. OIDC covers all major providers listed above. ## How it works An SSO login is a standard OIDC Authorization Code flow with PKCE. The browser bounces through the IdP, CryptFlare redeems the code server-side, and a session cookie comes back - no password ever reaches CryptFlare. >App: Click sign in App->>API: Start SSO (email domain) API-->>B: Redirect to IdP with state + PKCE challenge B->>IdP: Authenticate (password, MFA, biometrics) IdP-->>B: Redirect to callback with code B->>API: Callback with code + state API->>IdP: Exchange code for id_token IdP-->>API: id_token with groups claim API->>API: Provision user, map groups to role API-->>B: Set session cookie, redirect into vault `} /> Group claims resolved at step 9 decide the CryptFlare role, which means onboarding, offboarding, and role changes all live in your IdP. An organisation owner sets up an SSO connection in **Organisation Settings**, providing the IdP's issuer URL, client ID, and client secret. They can also restrict SSO to specific email domains and map IdP groups to CryptFlare roles. When a team member enters their email on the login page, CryptFlare checks if their email domain has SSO enabled. If it does, they are redirected to the IdP instead of receiving an OTP code. The user signs in with their corporate credentials (password, MFA, biometrics - whatever the IdP requires). CryptFlare never sees these credentials. 
After the IdP confirms the user's identity, CryptFlare automatically creates or updates their account, assigns them to the organisation with the correct role based on their IdP groups, and issues a session. ## Setting up SSO ### Prerequisites - Your organisation must be on the **Team plan** - You must be the **organisation owner** to configure SSO - You need admin access to your identity provider to create an OIDC application ### 1. Create an OIDC application in your IdP In your identity provider, register a new application with the following settings: | Setting | Value | |---------|-------| | **Application type** | Web application | | **Grant type** | Authorization Code | | **Redirect URI** | `https://api.cryptflare.com/v1/auth/sso/callback/oidc` | | **Scopes** | `openid`, `email`, `profile` | For **Entra ID**, also request the `GroupMember.Read.All` scope if you plan to use group-based role mapping with more than 200 groups. Copy the **Issuer URL**, **Client ID**, and **Client Secret** from your IdP. See the [provider-specific guides](#provider-setup-guides) below for step-by-step instructions for each identity provider. ### 2. Add the connection in CryptFlare Navigate to **Organisation Settings** and create a new SSO connection: - **Name** - a display name for this connection (e.g., "Acme Corp Entra ID") - **Provider** - select your identity provider - **Issuer URL** - the OIDC issuer URL from your IdP - **Client ID** - the application/client ID - **Client Secret** - the application secret - **Allowed domains** - restrict SSO to specific email domains (e.g., `acme.com`) - **Default role** - the role assigned to users who don't match any group mapping (default: `employee`) ### 3. Test the connection Use the **Test** button to verify that CryptFlare can reach your IdP's OIDC discovery endpoint. This validates the issuer URL and confirms the configuration is reachable. ### 4. Set up group mappings (optional) Map your IdP's groups to CryptFlare roles. 
When a user signs in, their IdP group memberships determine their role in the organisation. | IdP group | CryptFlare role | |-----------|----------------| | `Engineering` | `developer` | | `Platform Team` | `manager` | | `Finance` | `biller` | | `Contractors` | `viewer` | If a user belongs to multiple mapped groups, the mapping with the highest priority (lowest number) wins. If no group mapping matches, the user receives the **default role** configured on the connection. ### 5. Enable the connection Once configured and tested, enable the SSO connection. Members with matching email domains will be redirected to SSO on their next login. ## Force SSO When **Force SSO** is enabled on a connection, email OTP login is disabled for users whose email domain matches the connection's allowed domains. They must authenticate through the IdP - there is no fallback to OTP codes. This is useful for organisations that require all authentication to go through their identity provider for compliance or security policy reasons. > The organisation owner can always access the account through email OTP regardless of the Force SSO setting. This prevents lockout if the IdP becomes unavailable. ## Just-in-time provisioning CryptFlare automatically provisions users the first time they sign in via SSO: - **New users** - a CryptFlare account is created with their email and name from the IdP - **New memberships** - the user is added to the organisation with a role determined by group mapping - **Existing users** - if the user already has a CryptFlare account (e.g., from another org), they are added to this organisation without creating a duplicate account - **Role updates** - if a user's IdP groups change, their CryptFlare role is updated on the next SSO login ## Role mapping rules - **Owner role is never assignable via SSO** - this is a hardcoded security guard. 
If a group mapping or default role resolves to `owner`, it is automatically downgraded to `manager` - **Priority-based resolution** - when a user belongs to multiple mapped groups, the mapping with the lowest priority number wins - **Tie-breaking** - if multiple mappings share the same priority, the most privileged role is selected (manager > developer > employee > viewer) - **Default role fallback** - users with no matching group mappings receive the connection's default role ## Provider setup guides We have step-by-step guides for each supported identity provider: | Provider | Guide | |----------|-------| | **Microsoft Entra ID** (Azure AD) | [SSO with Microsoft Entra ID](/guides/sso/entra-id) | | **Google Workspace** | [SSO with Google Workspace](/guides/sso/google) | | **Okta** | [SSO with Okta](/guides/sso/okta) | | **Auth0** | [SSO with Auth0](/guides/sso/auth0) | | **Generic OIDC** | [SSO with Generic OIDC](/guides/sso/generic-oidc) | Each guide covers application registration, credential gathering, group claims configuration, and connecting to CryptFlare. ## Entra ID specifics ### Large group handling Microsoft Entra ID has a limit of 200 groups in the OIDC token claims. If a user belongs to more than 200 groups, the groups claim is omitted and CryptFlare automatically falls back to the Microsoft Graph API to fetch the user's full group membership. This happens transparently - no additional configuration is required. The OIDC application in Entra ID must have the `GroupMember.Read.All` permission for this fallback to work. ### Tenant configuration For Entra ID connections, the issuer URL follows the pattern: ``` https://login.microsoftonline.com/{tenant-id}/v2.0 ``` Replace `{tenant-id}` with your Azure AD tenant ID. ## Security ### PKCE All OIDC flows use Proof Key for Code Exchange (PKCE) with SHA-256 code challenges. This prevents authorization code interception attacks even if the redirect URI is compromised. 
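As a concrete illustration of the S256 method, here is a minimal sketch of the challenge derivation using Node's crypto module. The helper names (`base64url`, `createPkcePair`) are illustrative, not CryptFlare's actual implementation:

```typescript
import { createHash, randomBytes } from "node:crypto";

// base64url without padding, as RFC 7636 requires. Helper names here are
// illustrative - this is not CryptFlare's production code.
const base64url = (buf: Buffer): string =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// The client keeps `verifier` private and sends only the SHA-256 `challenge`
// on the authorize redirect; the verifier is revealed only at token exchange.
export function createPkcePair(): { verifier: string; challenge: string } {
  const verifier = base64url(randomBytes(32)); // 43-char high-entropy string
  const challenge = base64url(createHash("sha256").update(verifier).digest());
  return { verifier, challenge };
}
```

At token exchange the IdP re-hashes the presented verifier and compares it to the challenge it saw on the authorize request, so an intercepted authorization code is useless without the verifier.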
### State parameter Every SSO initiation generates a cryptographically random state parameter stored in a secure key-value store with a 10-minute TTL. The state is validated and consumed (one-time use) when the callback is received, preventing CSRF and replay attacks. ### Domain enforcement When allowed domains are configured on an SSO connection, CryptFlare verifies that the authenticated user's email domain matches before provisioning access. A user authenticating through the IdP with an email outside the allowed domains is rejected. ### Org isolation SSO connections are strictly scoped to a single organisation. One organisation's SSO configuration is never visible to or accessible by another organisation. The callback handler re-validates the connection ownership before issuing a session. ## API reference For full API documentation including request and response examples, see the [SSO API Reference](/api-reference/sso). ## Plan availability | Feature | Free | Pro | Team | |---|---|---|---| | SSO (OIDC) | - | - | Included | | Force SSO | - | - | Included | | Group-to-role mapping | - | - | Included | | JIT provisioning | - | - | Included | Upgrade to the Team plan to enable SSO for your organisation. --- # Status Notifications Source: https://docs.cryptflare.com/security/status-notifications Subscribe to email alerts for incidents and scheduled maintenance on the CryptFlare status page. # Status Notifications Status notifications are email alerts that let anyone - customers, partners, integrators, on-call engineers - stay informed when CryptFlare experiences an incident or schedules maintenance. No account required. Unlike [event subscriptions](/security/event-subscriptions), which target actions inside your own organisation, status notifications are about the CryptFlare platform itself: if the API goes down, a regional database degrades, or we schedule planned maintenance, subscribers receive an email within seconds. 
## How it works

1. Enter your email on the [status page](https://status.cryptflare.com) and click subscribe. No login required.
2. An incident is created or updated, or a maintenance window is scheduled.
3. Every active subscriber receives a branded email with the event details and a link back to the status page.

Every email includes a signed unsubscribe link that works without a password.

## What you'll receive

Subscribers receive emails for two event types:

### Incidents

Any time CryptFlare platform staff create or update an incident, subscribers receive an email with:

- The incident title and severity (minor, major, critical)
- The current status (investigating, identified, monitoring, resolved)
- An update message written by the on-call engineer
- A link to the full status page for live updates
- A personal unsubscribe link

Each update to the same incident generates a new email so subscribers always have the latest information.

### Scheduled maintenance

When maintenance is scheduled, subscribers receive an email with:

- The maintenance title and description
- The scheduled start and end time (in UTC)
- The affected services
- A link to the status page

If the maintenance status changes (for example, from `scheduled` to `in_progress`, or when it's marked `completed`), subscribers receive a follow-up email.

## Subscribing

There are two ways to subscribe:

**From the status page footer** - The quickest path. Visit [status.cryptflare.com](https://status.cryptflare.com), scroll to the footer, enter your email, and click subscribe.

**Via the API** - Useful if you're building a tool that wants to automatically subscribe a team alias.

```bash
curl -X POST https://api.cryptflare.com/v1/status/subscribe \
  -H "Content-Type: application/json" \
  -d '{"email":"oncall@acme.com"}'
```

The response is always the same shape regardless of whether the email was already subscribed - this prevents the endpoint from being used for email enumeration.
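That uniform-response behaviour can be sketched as follows. The handler name and the in-memory `Set` store are illustrative stand-ins, not CryptFlare's code:

```typescript
// A minimal sketch of the anti-enumeration contract: the response shape
// never depends on whether the email was already subscribed.
type SubscribeResponse = { ok: boolean; message: string };

export function subscribeHandler(store: Set<string>, email: string): SubscribeResponse {
  if (!store.has(email)) {
    store.add(email); // new subscriber: persist the record
  }
  // Whether or not the email was already subscribed, the response is
  // byte-for-byte identical, so the endpoint leaks no membership signal.
  return { ok: true, message: "Check your inbox for status updates." };
}
```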
A single email address can only submit 5 subscribe requests per hour, and each IP address can only make 10 requests per minute across all public status endpoints. This prevents abuse while leaving plenty of headroom for legitimate retries. ## Unsubscribing Every notification email contains an unsubscribe link that looks like: ``` https://vault.cryptflare.com/unsubscribe?token=c2FjaGFAZXhhbXBsZS5jb20.aMAc2V... ``` The token is an HMAC-SHA256 signature of the recipient's email, generated on our servers. Without a valid token, nobody can unsubscribe anyone, and the token can only have been issued by us. Click the "Unsubscribe" button or link in any CryptFlare status email. The unsubscribe page opens at `vault.cryptflare.com/unsubscribe` and verifies the token. If the token is valid and the email is still subscribed, you'll see a confirmation prompt with the email address. Click "Confirm unsubscribe". The subscriber is removed from our system immediately. The page shows a success state. If you click an old unsubscribe link for an email that's already been removed, the page shows an "Already unsubscribed" state instead of the confirmation prompt. No duplicate requests are made. Tokens never expire - you can save an old email and click the link months later. They only stop working if the email has been removed from our system (unsubscribed) or if our signing secret is rotated. ## Signed token security The unsubscribe flow is protected by stateless HMAC-SHA256 tokens. Here's why we use this model instead of the more common "enter your email to unsubscribe" form: | Concern | Mitigation | |---|---| | Anyone unsubscribing anyone via enumeration | The endpoint requires a signed token. Without the server-side signing key, nobody can forge a token. | | Tokens leaked via email forwarding | Worst case: someone unsubscribes the recipient. Recoverable by re-subscribing. No account access. | | Replay attacks after unsubscribe | The page pre-checks KV state. 
If the email is already unsubscribed, the confirm prompt doesn't appear. | | Token reuse across purposes | Each HMAC includes a namespace prefix (`status-unsub:`) so the signing key can't be used for other token types. | | Brute-force guessing tokens | HMAC-SHA256 output is 256 bits. With the public rate limit of 10 req/min per IP, an attacker gets fewer than 15k attempts per day - infeasible against 2^256 possible signatures. | ## Privacy We store the **minimum** data needed to deliver notifications and nothing else: ```json { "email": "oncall@acme.com", "subscribedAt": "2026-04-11T09:15:00Z", "emailsSent": 3, "lastEmailAt": "2026-04-12T14:22:00Z" } ``` - **No account linkage** - subscribing to status notifications does not create a CryptFlare account. - **No name or profile data** - just the email address and delivery metadata. - **No IP logging per subscription** - IPs are only used for rate limiting and discarded. - **Email content is delivered via Resend** and is subject to their processor agreements. See [Privacy Policy](https://cryptflare.com/legal/privacy) for details. - **Unsubscribing removes your record immediately** - no soft delete, no retention period. ## Rate limits Public status endpoints are rate limited per IP address via a sliding window. If you exceed the limit, you'll get a `429 RATE_LIMITED` response with a `Retry-After` header. | Endpoint | Limit | Window | |---|---|---| | `POST /v1/status/subscribe` | 10 | 1 minute per IP + 5 per hour per email | | `POST /v1/status/unsubscribe` | 10 | 1 minute per IP | | `POST /v1/status/unsubscribe/check` | 10 | 1 minute per IP | ## API Full request/response details for every status endpoint - including the public `GET /status` health endpoint and the token-authenticated subscribe/unsubscribe flow - live in the [Status API reference](/api-reference/status). 
| Endpoint | Reference |
|---|---|
| `POST /v1/status/subscribe` | [Subscribe](/api-reference/status/subscribe) |
| `POST /v1/status/unsubscribe` | [Unsubscribe](/api-reference/status/unsubscribe) |
| `POST /v1/status/unsubscribe/check` | [Check token](/api-reference/status/check-unsubscribe) |
| `GET /v1/status` | [Get status](/api-reference/status/get-status) |

## FAQs

**Does subscribing create a CryptFlare account?**
No. Status notifications are intentionally decoupled from accounts. Any email address can subscribe, and subscribing does not create an account or tie you to one.

**How do I re-subscribe after unsubscribing?**
Visit [status.cryptflare.com](https://status.cryptflare.com) and submit your email again in the footer. Re-subscribing creates a fresh record with a new `subscribedAt` timestamp.

**Can I subscribe a shared team alias?**
Yes. We recommend using a team alias like `platform-oncall@yourcompany.com` rather than individual emails so your entire on-call rotation gets alerts without anyone having to forward them.

**What happens if I click an unsubscribe link twice?**
The page checks the current subscription state first. If the email is still subscribed, you'll see a confirm prompt. If it's already been unsubscribed, you'll see an "Already unsubscribed" state and no request is made.

**Why does unsubscribing require an explicit confirmation click?**
To prevent accidental unsubscribes. Some email clients and link scanners pre-fetch URLs to check for malware, which would trigger an immediate unsubscribe. Requiring an explicit click on the page makes the action intentional.

**How do status notifications differ from event subscriptions?**
Status notifications are about the **CryptFlare platform** (incidents, maintenance) and are delivered via email to anyone who subscribes. [Event subscriptions](/security/event-subscriptions) are about **your own organisation** (secrets, members, policies) and are delivered via signed HTTP webhooks to your infrastructure.

**Can I choose which event types I receive?**
Not currently. Every subscriber receives both incident and maintenance emails. If you want more granular control, subscribe to the [RSS/Atom feed](https://status.cryptflare.com) (coming soon) or integrate via event subscriptions for organisation-level events.
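To make the "Signed token security" model above concrete, here is a minimal sketch of a namespaced HMAC-SHA256 unsubscribe token. The key handling, helper names, and exact token layout are illustrative assumptions, not the production implementation:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of a namespaced, stateless unsubscribe token. Names are
// illustrative assumptions, not CryptFlare's actual code.
const NAMESPACE = "status-unsub:"; // binds the signing key to one purpose

export function signUnsubscribeToken(email: string, key: string): string {
  const mac = createHmac("sha256", key).update(NAMESPACE + email).digest("base64url");
  return `${Buffer.from(email).toString("base64url")}.${mac}`;
}

// Returns the email on a valid token, or null on any tampering.
export function verifyUnsubscribeToken(token: string, key: string): string | null {
  const [emailB64, mac] = token.split(".");
  if (!emailB64 || !mac) return null;
  const email = Buffer.from(emailB64, "base64url").toString("utf8");
  const expected = createHmac("sha256", key).update(NAMESPACE + email).digest("base64url");
  const a = Buffer.from(mac);
  const b = Buffer.from(expected);
  // Constant-time comparison avoids leaking how many leading bytes matched.
  return a.length === b.length && timingSafeEqual(a, b) ? email : null;
}
```

Because verification only needs the signing key, no per-subscriber token state is stored, which is why such tokens never expire on their own: they stop working only when the subscription record is gone or the key is rotated.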
--- # Secret Sync Source: https://docs.cryptflare.com/security/sync Push CryptFlare secrets to third-party destinations like GitHub Actions, Vercel, and AWS Secrets Manager. One-way, auto-reconciled, identity-aware. # Secret Sync Secret Sync pushes CryptFlare secrets to external destinations (GitHub Actions, Vercel, AWS Secrets Manager, and more) so your applications can read them from their native secret store. You configure a connection once - pointing at a specific CryptFlare scope on one side and a destination repo/project on the other - and every change to a secret in scope flows automatically. When a rotation policy rotates a database password in CryptFlare, every connected GitHub repo gets the new value within seconds. You never maintain duplicate copies by hand. This is the same centralised fan-out model [HashiCorp Vault agents](https://developer.hashicorp.com/vault/docs/agent-and-proxy/agent/template), [Doppler](https://docs.doppler.com/docs/integrations), and [Infisical](https://infisical.com/docs/integrations) popularised. CryptFlare sits as the authoritative source and propagates outward. ## Why secret sync Your secrets live in CryptFlare - encrypted at rest, audit-logged, access-controlled by your roles and policies. Every destination just receives a projection of the current state. No more "I rotated the DB password but forgot to update the Vercel env variable." When a secret is created, rotated, or deleted inside the sync scope, every auto-mode connection watching that scope fires within a few seconds. A 10-line audit-queue consumer fans out to the right set of connections, and Cloudflare Queues handle retries and batching for you. Different secrets for different pipelines? Point connection A at `production/api-keys`, connection B at `production/database`, and connection C at the whole `production` environment. Each connection sees only its own slice of secrets. The pod hierarchy is walked with a recursive CTE, so nesting (up to 5 levels) is free. 
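The scope walk described above can be sketched as a fixed-point pass over the pod table - a TypeScript stand-in for the recursive CTE. The `Pod` shape and function name are illustrative, not the real schema:

```typescript
// Collect a pod and every descendant, mirroring the recursive CTE's
// semantics: seed with the root, then repeatedly pull in children.
interface Pod {
  id: string;
  parentId: string | null; // null = directly under the environment root
}

export function descendantPodIds(pods: Pod[], rootId: string): Set<string> {
  const inScope = new Set<string>([rootId]);
  let grew = true;
  while (grew) { // iterate to a fixed point, like the CTE's recursive step
    grew = false;
    for (const pod of pods) {
      if (pod.parentId !== null && inScope.has(pod.parentId) && !inScope.has(pod.id)) {
        inScope.add(pod.id);
        grew = true;
      }
    }
  }
  return inScope;
}
```

A connection pointed at a mid-tree pod therefore picks up every nested descendant automatically, while sibling pods stay out of scope.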
For GitHub, CryptFlare ships with first-party GitHub App integration. Install the App on your GitHub org once, and every sync connection targeting repos inside that org uses a short-lived installation token minted fresh at sync time. No personal access tokens in CI, no 90-day rotation reminders. Destinations like GitHub Actions Secrets are write-only by design - once you push a value, nobody (including CryptFlare) can read it back. Drift detection works around this by comparing secret **names** on both sides and classifying each destination key as `inSync`, `unmanaged`, or `orphaned`. The drift panel is safe to run on demand and never touches plaintext. When a secret is deleted from CryptFlare, the connection can (opt-in) also delete it from the destination. The safety mechanism: CryptFlare only ever removes destination keys it can prove it pushed itself, tracked in a `managed_keys` allow-list updated after every successful sync. A destination-side secret CryptFlare never pushed (e.g. a manually-added `TEST` on GitHub) will never be touched, even when the flag is on. ## How it works A sync connection glues one CryptFlare scope to one external destination. When a secret changes, an audit event fans out through the queue layer to every matching connection, and the provider adapter pushes the new state. >API: Mutate secret (create / rotate / delete) API->>AQ: Emit secret.updated event AQ->>SQ: Match auto-mode connections in scope SQ->>Prov: Load connection, decrypt secrets Prov->>Dest: Push values via provider API Dest-->>Prov: Ack or reject Prov->>API: Stamp status, counts, managed keys API->>AQ: Emit sync.completed or sync.failed Note over SQ,Dest: Drift reconciler runs on next tick `} /> The queue hop keeps the operator request path fast while guaranteeing every matching connection eventually pushes, even through transient provider errors. 
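The fan-out step in the flow above can be sketched as follows. This is a simplified model: the types are illustrative, and the real scope check also walks the pod hierarchy, which is omitted here for brevity:

```typescript
// Only sync-triggering actions fire, and only auto-mode connections whose
// scope contains the changed secret are enqueued.
const SYNC_TRIGGERING = new Set([
  "create", "update", "rotate", "delete", "import", "lock", "unlock", "auto-rotate",
]);

interface SyncConnection { id: string; mode: "auto" | "manual"; environmentId: string }
interface SecretEvent { action: string; environmentId: string }

export function matchConnections(
  event: SecretEvent,
  connections: SyncConnection[],
): SyncConnection[] {
  if (!SYNC_TRIGGERING.has(event.action)) return []; // e.g. reads never fan out
  return connections.filter(
    (c) => c.mode === "auto" && c.environmentId === event.environmentId,
  );
}
```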
1. **Create the connection** - from the vault dashboard, an org member with `sync:manage` picks a destination (GitHub, Vercel, AWS, ...), points it at a source workspace → environment → optional pod, configures options (key prefix, filters, auto/manual), and provides credentials. CryptFlare validates the credentials against the destination provider **before** persisting them so typos and missing permissions surface at creation time, not on the first sync.

   Destination credentials are sealed with AES-256-GCM using a per-connection key derived from the platform master secret via HKDF. A leaked D1 row alone cannot be decrypted. For GitHub App installs specifically, **no credentials are stored at all** - CryptFlare mints a fresh installation token at sync time using the platform-wide App private key.

2. **A secret changes** - a user creates, updates, rotates, or deletes a secret in the connection's workspace / environment / pod. Every secret mutation publishes an event to the audit queue.

3. **The audit queue fans out** - the audit queue consumer reads the event, checks whether the source action is sync-triggering (create/update/rotate/delete/import/lock/unlock/auto-rotate), resolves the affected environment + pod, and looks up every auto-mode connection whose scope contains the changed secret. Each matching connection is enqueued to `QUEUE_SYNC` with `triggered_by: 'auto'`.

4. **The sync consumer pushes** - a dedicated Cloudflare Queue consumer picks up the job: loads the connection, reads in-scope secrets via a recursive CTE, decrypts them server-side with the environment's derived key (or BYOK customer key), resolves destination credentials, and calls `provider.sync(secrets, config, credentials)`. The provider handles platform-specific bits - GitHub's libsodium sealed-box encryption, AWS's `PutSecretValue`, Vercel's env var API.

5. **Delete reconciliation** - when `deleteOnSourceDelete` is on, the consumer computes `managed_keys − current_source_keys` and calls `provider.deleteSecret` for the difference. The `managed_keys` allow-list is then updated with whatever was just pushed so the next sync knows the canonical set.
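The allow-list arithmetic behind `deleteOnSourceDelete` can be sketched as a single set difference (function and parameter names are illustrative):

```typescript
// Only keys in the managed_keys allow-list that have left the source scope
// are candidates for deletion at the destination.
export function keysToDelete(
  managedKeys: string[],        // keys CryptFlare has provably pushed before
  currentSourceKeys: string[],  // transformed keys being pushed right now
): string[] {
  const source = new Set(currentSourceKeys);
  // managed_keys − current_source_keys: destination keys CryptFlare never
  // pushed itself (e.g. a manually-added TEST secret) are never returned,
  // because they were never recorded in managed_keys.
  return managedKeys.filter((key) => !source.has(key));
}
```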
The connection's `last_sync_status` / `last_sync_count` / `last_sync_error` columns are updated, a `sync_logs` row is inserted, and a `sync.completed` or `sync.failed` audit event is written to the hash-chained audit log. The vault sync page polls every 5 seconds while any connection is pending so the UI reflects live state. ## Supported providers More providers are in development. Want one we haven't built yet? [Let us know](/support). ## Sync modes | Mode | Trigger | Use case | |---|---|---| | **Manual** | Only runs when someone clicks the ▶ play button in the UI or calls [`POST /sync-connections/:id/trigger`](/api-reference/sync-connections/trigger-sync) | Staging environments, one-off migrations, controlled rollouts | | **Auto** | Runs automatically whenever a secret in the scoped environment / pod is created, rotated, deleted, imported, locked, unlocked, or auto-rotated | Production pipelines where CryptFlare is the source of truth | Auto mode is powered by the audit queue - no polling, no cron dependencies. Typical end-to-end latency from "operator rotates a secret" to "GitHub repo has the new value" is under 30 seconds. There's also a cron safety net that re-runs auto connections whose `last_synced_at` is more than 6 hours old, so a brief queue outage cannot leave a connection permanently stale. ## Pod-level targeting Every connection has a source scope made up of a workspace, environment, and **optional pod**. When a pod is set, the sync includes secrets in that pod *and every descendant pod* - a recursive CTE walks the tree at query time, so nesting comes free. ``` infrastructure/ ← env root databases/ ← pod (L1) postgres/ ← pod (L2) ← sync connection watches this DATABASE_URL DATABASE_PASSWORD replica/ ← pod (L3) REPLICA_URL ← also included (descendant of postgres) redis/ ← pod (L2) ← sibling - NOT included REDIS_URL ``` Picking `postgres` as the source pod syncs `DATABASE_URL`, `DATABASE_PASSWORD`, and `REPLICA_URL`. 
`REDIS_URL` is ignored because it lives in a sibling pod. The vault wizard caps the pod drill-down at **5 levels** for UI clarity, but the backend's recursive CTE has no depth limit - if you ever need deeper trees, it's a one-line change. ## Key filtering | Option | Purpose | Example | |---|---|---| | `keyPrefix` | Prefix applied at the destination (never in CryptFlare) | `PROD_` turns `DATABASE_URL` into `PROD_DATABASE_URL` on GitHub | | `keyFilter` | Allowlist - only sync these keys | `["DATABASE_URL", "API_KEY"]` | | `excludeKeys` | Blocklist - never sync these keys | `["INTERNAL_SECRET", "DEBUG_TOKEN"]` | Filters are applied after pod scoping. `keyFilter` takes priority over `excludeKeys` when both are set (allowlist first, blocklist on the result). ## Drift detection & reconciliation CryptFlare sync is fundamentally **one-way**. Destinations like GitHub Actions Secrets don't return secret values over their REST APIs, so you cannot pull values back into CryptFlare from any destination - that's a design choice of the destination, not CryptFlare. What you *can* do is compare secret **names** on both sides, which is enough to drive drift detection and delete reconciliation. The drift endpoint (`GET /v1/organisations/:org/sync-connections/:id/drift`) returns three classifications: | Bucket | Meaning | Action | |---|---|---| | **`inSync`** | Destination key matches a transformed CryptFlare source key | Nothing - state is correct | | **`unmanaged`** | Destination has a key CryptFlare isn't pushing. Manual addition, another tool, or predates the connection. | Optional: take it over (see below) | | **`orphaned`** | CryptFlare's `managed_keys` remembers a key that's now missing from the destination. Someone deleted it there after a successful sync. | Trigger a new sync to re-push, or fix the source | Drift is **read-only** (name comparison only, never values) and can be safely run on demand from the UI. 
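The three buckets can be computed from key names alone. A hedged sketch — function and field names are illustrative, not the API's actual shapes:

```typescript
// Drift classification by name comparison only; values are never read.
interface DriftReport {
  inSync: string[];
  unmanaged: string[];
  orphaned: string[];
}

function classifyDrift(
  sourceKeys: string[],      // in-scope CryptFlare keys
  destinationKeys: string[], // names returned by provider.listSecrets
  managedKeys: string[],     // source-key allow-list from the last successful sync
  keyPrefix = "",            // transform applied at the destination
): DriftReport {
  const transformed = new Set(sourceKeys.map((k) => keyPrefix + k));
  const dest = new Set(destinationKeys);
  return {
    // Destination key matches a transformed source key → state is correct.
    inSync: destinationKeys.filter((k) => transformed.has(k)),
    // Destination has a key CryptFlare isn't pushing → takeover candidate.
    unmanaged: destinationKeys.filter((k) => !transformed.has(k)),
    // Previously pushed, now missing at the destination → deleted there.
    orphaned: managedKeys.filter((k) => !dest.has(keyPrefix + k)),
  };
}
```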
Each call to the endpoint hits `provider.listSecrets` once, so in regulated environments you can allow `sync:read` members to audit drift without granting them any write capability.

## Takeover flow

When drift surfaces an `unmanaged` destination secret, operators can **take it over** - create a matching secret in CryptFlare at the connection's exact source scope and overwrite the destination value with it in one step. The flow is:

1. Click the drift icon on any sync connection row. The panel fetches a fresh report from `provider.listSecrets` and classifies every destination key.
2. CryptFlare cannot read the existing value from the destination - you have to paste it yourself. A dialog asks for the CryptFlare key name (pre-filled by reversing the connection's key prefix) and the secret value.
3. On save, CryptFlare creates a new secret at the connection's exact source scope (workspace / environment / pod) with the value you typed, then triggers a sync. The connection pushes the new secret, which **overwrites** whatever was on the destination.

**Important semantic: takeover overwrites the destination value.** It's not an import. If you mis-type the value, the destination now holds the wrong value. The dialog has an explicit amber warning about this.

The taken-over secret always lands at `connection.podId` (the leaf pod the connection watches), so future syncs keep it in scope permanently - placing it anywhere else would leave it out of sync scope and the takeover would be broken.

The takeover dialog overwrites whatever currently exists on the destination with the value the user types. For regulated environments (SOC 2, ISO 27001, PCI-DSS), this is an audit-relevant operation. See [Gating takeover](#gating-takeover-org-wide--per-connection) below for the two-tier control model.

## Delete on source delete

By default, deleting a secret in CryptFlare does **not** remove it from the destination - the destination keeps the last-pushed value until someone touches it.
This is safe and predictable, but it can leave stale values behind if CryptFlare is your source of truth. Enable `deleteOnSourceDelete` on a connection (via the edit form or `PATCH /sync-connections/:id`) to change that behaviour. With the flag on, every sync's reconcile pass computes `(previously-managed − current-source)` and calls the provider's `deleteSecret` for each key in the difference. The critical safety mechanism is **`managed_keys`** - a JSON-encoded allow-list of CryptFlare source keys the connection successfully pushed on its last run, stored on `sync_connections.managed_keys`. The reconcile pass only ever removes keys in this list, so: - A destination-side secret CryptFlare never pushed (e.g. a `TEST` you created directly on GitHub before the connection existed) will **never** be deleted, regardless of the flag state - A pod-scope change that drops keys from the source → reconcile pass deletes them from the destination on the next sync - A disabled connection → no pushes, no reconcile, destination untouched `managed_keys` is NULL before the first successful sync (reconcile is a no-op), updated at the end of every non-failed sync, and never updated on full failures (to avoid losing the canonical set if the destination state is unknown). Failure handling: individual `deleteSecret` calls are wrapped in try/catch and their errors are merged into the aggregate error list on the sync result. A sync where push succeeded but some deletes failed finishes as `partial` so the UI can surface the failures without pretending success. ## Gating takeover org-wide + per-connection Regulated environments often need to prevent end-users from overwriting production destination secrets through a UI action. 
CryptFlare gates the takeover action at **two levels**, ANDed together:

| Tier | Where | Scope | Default | Use case |
|---|---|---|---|---|
| **Org-wide kill switch** | `Org Settings → Features → Sync Takeover` | All connections in the org | ON | Single-switch "no user can overwrite destination values in this org" for regulated environments |
| **Per-connection flag** | Edit sync connection → "Allow takeover" card | One specific connection | ON | Fine-grained gating when the org-wide switch is ON but specific high-risk connections should be locked |

Effective permission: `effectiveAllowTakeover = org.syncTakeoverEnabled && connection.allowTakeover`. Either one being OFF hides the "Take over" button across the affected scope. Drift detection itself stays available regardless of either flag - it's read-only (name comparison only, zero value exposure). Only the overwrite action is gated.

The drift panel renders different copy depending on which level is blocking:

- **Org-wide off** → "Takeover is disabled org-wide in Settings > Features - ask an org owner to enable it."
- **Per-connection off (org-wide on)** → "Takeover is disabled on this connection - an org owner would need to enable it to adopt these secrets."

A small amber `🔒 locked` badge appears on the sync connection row when the per-connection flag is off, so operators can spot locked connections from the list view without opening each one.

## Default role permissions

## Sync queue lifecycle

Every sync goes through `sync_connections.last_sync_status` values in this order:

1. **`pending`** - set the moment the job is placed on `QUEUE_SYNC`. The sync page's amber banner shows queue depth, and each pending row displays a live-ticking "queued 12s ago" pill with a position-aware number. Pending rows poll every 5 seconds for updates.
2. **Terminal status** - the consumer picks up the job, runs the sync, and writes the terminal status. `success` means every in-scope secret pushed successfully with count > 0.
`failed` means the connection could not complete at all (bad credentials, provider outage, encryption error). `partial` means some secrets pushed but others failed, OR the push succeeded but the delete reconcile pass had errors. A `success` result with `last_sync_count: 0` is technically correct (the consumer ran and nothing was in scope) but almost always means a scope mismatch - the user added a secret outside the connection's workspace/env/pod. The UI renders this as an amber `⚠ 0 secrets` pill instead of a green `success` pill, with a clickable disclosure explaining the likely cause. ## Cron safety net Auto connections are **primarily** driven by the audit queue fan-out - push the moment a secret changes, no polling. But the audit queue can lose messages (retention windows, consumer outages, mid-flight worker deploys), so a backup mechanism exists. `apps/cron/src/jobs/sync-resync.ts` runs every 6 hours at `00:00 / 06:00 / 12:00 / 18:00 UTC` and re-enqueues any auto-mode connection whose `last_synced_at` is older than 6 hours. In steady state this job processes zero rows. It exists so a queue outage can never leave an auto connection permanently stale - worst case, a connection resyncs on the next cron tick within 6 hours of the missed event. The vault sync page shows a live countdown banner to the next cron tick, framed for end users as *"Next automated check in 2h 14m - Auto-mode connections push secrets the moment they change. This is just the periodic safety check that catches anything that fell behind."* ## Credential health checks Expired PATs and revoked GitHub App installations are the single most common cause of a sync quietly breaking between rotations. CryptFlare runs a dedicated daily probe so an operator finds out *before* the next rotation lands on a dead credential, not during it. `apps/cron/src/jobs/sync-credential-check.ts` enqueues a `sync_credential_check` message for every enabled sync connection whose org has `sync_enabled = 1`. 
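A sketch of how that probe batch might be selected — never-checked connections first, then oldest-checked, capped per tick. Field names and the helper itself are illustrative, not the actual cron code:

```typescript
// Illustrative connection shape; null means the credential was never probed.
interface Conn {
  id: string;
  credentialCheckedAt: number | null; // epoch seconds of last probe
}

// Mirror of "ORDER BY credential_checked_at ASC NULLS FIRST LIMIT <cap>":
// never-checked rows sort first, then oldest-checked, capped per cron tick.
function probeBatch(conns: Conn[], cap = 500): Conn[] {
  return [...conns]
    .sort((a, b) => {
      if (a.credentialCheckedAt === null) return b.credentialCheckedAt === null ? 0 : -1;
      if (b.credentialCheckedAt === null) return 1;
      return a.credentialCheckedAt - b.credentialCheckedAt; // oldest first
    })
    .slice(0, cap);
}
```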
Messages are ordered by `credential_checked_at ASC NULLS FIRST` so never-checked and oldest-checked connections are probed first. The batch cap is 500 per tick, which covers every current customer on a single run with room to spare. The sync queue consumer picks up `sync_credential_check` messages separately from regular sync jobs and calls the provider's `validate(config, credentials)` hook. This is a pure read-only probe - it never touches secret values, never decrypts anything beyond the destination credential itself, and never pushes data. For GitHub, `validate()` issues a `GET /repos/:owner/:repo` and then fetches the secrets public key to confirm write access. For GitHub App connections, a fresh installation token is minted at probe time so revoked installs fail immediately. The outcome is persisted via `updateCredentialStatus`: `credential_status` set to `valid` or `invalid`, `credential_checked_at` to the current timestamp, and `credential_error` to the provider error string (truncated to 512 chars) on failure. `credential_status` defaults to `unknown` before the first probe runs. The vault sync page reads `credentialStatus` from the list endpoint and renders a small rose-coloured `🔑 creds expired` pill next to the mode column on any row where `credential_status === 'invalid'`. Hovering the badge shows the full `credential_error` so an operator can see exactly why the provider rejected the credential (e.g. `401: Bad credentials`, `403: Resource not accessible by integration`). Two additional paths keep `credential_status` fresh without waiting for the daily cron: - **Opportunistic refresh on success** - every successful sync stamps `credential_status = valid` at the end of the run, so an active connection effectively self-heals and clears a stale badge from a previous transient outage the moment the next push succeeds. 
- **Terminal-error shortcut** - if a regular sync job fails with a classified-as-terminal error (`401`, `403`, `404`), the consumer immediately stamps `credential_status = invalid` and records the error. The operator does not have to wait for the next cron tick to see the badge. The cron job is safe to run against every enabled connection on every tick because `validate()` is cheap (a single HEAD-style API call per provider) and never exposes secret values. ## Retry with backoff for transient provider errors Not every sync failure is permanent. GitHub occasionally returns `502 Bad Gateway` or `429 Too Many Requests` during high-traffic periods, and a single blip should not leave a connection marked failed until a human notices. The queue consumer classifies every provider error through `apps/api/src/lib/sync-providers/retry.ts` into one of two buckets: | Class | Status codes | Example messages | Consumer behaviour | |---|---|---|---| | **Terminal** | `400`, `401`, `403`, `404`, `422` | `invalid token`, `not found`, `forbidden` | Ack the message, mark `last_sync_status = failed`, stamp `credential_status = invalid` if it was an auth failure. Requeueing would just reproduce the error. | | **Transient** | `408`, `425`, `429`, `500`, `502`, `503`, `504` | `rate limit`, `timeout`, `ECONNRESET`, unclassified errors | Call `msg.retry({ delaySeconds: backoffSeconds(msg.attempts) })` with exponential backoff (base 30s, doubling to a 15-minute cap, plus jitter). Cloudflare Queues handle the next delivery automatically. | The queue is configured with `max_retries = 3` in `apps/api/wrangler.toml`. After three failed transient attempts, the message is dropped and the connection stays marked failed until the next trigger (manual, auto-queue fan-out, or the 6-hourly safety-net cron). A `partial` result is **never** retried automatically because some secrets did land and a second full push would re-write the ones that already succeeded. 
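The classifier and backoff schedule from the table above can be sketched as follows — names are illustrative (the real logic lives in `retry.ts`), and the 10% jitter factor is an assumption; the docs specify only "plus jitter":

```typescript
// Status-code buckets from the retry table.
const TERMINAL = new Set([400, 401, 403, 404, 422]);
const TRANSIENT = new Set([408, 425, 429, 500, 502, 503, 504]);

function classifyStatus(status: number): "terminal" | "transient" {
  if (TERMINAL.has(status)) return "terminal";
  // Defensive default: anything unrecognised is retried as transient.
  return "transient";
}

// Exponential backoff: base 30s, doubling per attempt, capped at 15 minutes,
// plus jitter (here: up to 10% of the capped delay — an assumption).
function backoffSeconds(attempt: number): number {
  const capped = Math.min(30 * 2 ** (attempt - 1), 15 * 60);
  return capped + Math.floor(Math.random() * capped * 0.1);
}
```

The consumer would then call `msg.retry({ delaySeconds: backoffSeconds(msg.attempts) })` for transient errors and ack terminal ones.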
Classification is string-based with a defensive default: any unrecognised error is treated as transient, so a one-off unclassified failure gets a retry rather than immediately leaving the connection marked failed.

## Failure notifications via audit events

Every sync run emits an audit event into `QUEUE_AUDIT` regardless of outcome: `sync.completed` on success, `sync.failed` on failure. Both are wired into the `SUBSCRIBABLE_EVENT_GROUPS` constant under the "Sync" category, so any active webhook subscription with `events: ['*']` or `events: ['sync.failed']` fans out the failure to the customer's URL automatically - no extra registration needed.

Event metadata includes `provider`, `status`, `secretsSynced`, `secretsCreated`, `secretsUpdated`, and `triggeredBy`, so downstream alerting can distinguish between a one-off transient blip and a persistently failing connection. The delivery path runs through the existing event consumer (`apps/api/src/queue-consumer.ts`), which HMAC-signs each payload, retries with exponential backoff three times, and auto-disables a subscription after 10 consecutive delivery failures.

Ops teams typically subscribe `sync.failed` alongside `secret.rotation_failed` into the same PagerDuty or Slack incident channel.

## Plan availability

> Secret Sync is a **Pro / Team plan** feature. Upgrade to enable outbound sync for your organisation.
| Feature | Free | Pro | Team |
|---|---|---|---|
| Sync connections | - | 3 | 10 |
| Auto sync (audit-queue fan-out) | - | ✓ | ✓ |
| Pod-level scoping | - | - | ✓ |
| Key prefix + filter + exclude | - | ✓ | ✓ |
| Drift detection | - | ✓ | ✓ |
| Takeover action (overwrite destination) | - | ✓ | ✓ |
| Delete on source delete | - | ✓ | ✓ |
| GitHub App installation auth | - | ✓ | ✓ |
| Org-wide takeover kill switch | - | ✓ | ✓ |

The org owner must explicitly enable the **Secret Sync** feature flag under `Org Settings → Features` before any sync connections can be created - the flag defaults to OFF because sync is a higher-risk surface (it decrypts secrets server-side and forwards them to external APIs).

## Security model

- **Destination credentials encrypted at rest** with AES-256-GCM using a per-connection derived key (`sync_cred_{connectionId}`). A leaked D1 row alone cannot be decrypted.
- **GitHub App installations** store no credentials at all. CryptFlare mints a short-lived (~1 hour) installation token at sync time using the platform-wide App private key, and the token is never persisted. Rotating the App's private key rotates every customer's sync auth simultaneously.
- **Secret values decrypted server-side only** during sync execution. Values never appear in logs, audit entries, sync logs, or the response body of any administrative endpoint.
- **BYOK respected end-to-end** - if the org uses a customer-managed encryption key, secrets are decrypted with it before being handed to the provider. Flipping BYOK on or off does not re-encrypt existing secrets; each environment stays encrypted under whichever key source it was created with.
- **Provider-native sealed-box encryption** - GitHub in particular requires libsodium `crypto_box_seal` on every pushed value (BLAKE2b nonce derivation - a known pitfall where SHA-512 implementations fail validation). CryptFlare uses `tweetnacl-sealedbox-js` to guarantee byte-for-byte compatibility with libsodium.
- **Feature flag gating** - org owners can disable sync entirely via `sync_enabled` and disable takeover via `sync_takeover_enabled`. Both toggles are owner-only and audit-logged as `organisation.feature_toggled`. - **RBAC enforcement** - `sync:manage` required to create, edit, trigger, or delete connections. `sync:read` required to view connection state or run drift detection. Plan limits on connection count enforced on create. - **Audit trail per sync** - `sync.completed` / `sync.failed` events with provider, status, counts, and triggering method. Every feature toggle (`sync_enabled`, `sync_takeover_enabled`) and every connection mutation (create/update/delete/trigger) is audit-logged. - **GitHub App locked mode** - when an org has connected GitHub via the org integrations tab, new sync connections targeting GitHub are **forced** to use the installation auth and can only point at repos inside a connected GitHub org. Personal repos and PATs are blocked at create time. This makes it impossible for a developer to create a sync connection that bypasses the org's approved GitHub org list. ## Setup guides Each destination provider has its own step-by-step setup guide covering how to register credentials, create the first connection, and verify it end-to-end: | Provider | Guide | |---|---| | **GitHub Actions Secrets** | [Sync secrets to GitHub Actions](/guides/sync/github) | More provider guides are in development as the provider list grows. 
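The per-connection credential key derivation described in the security model (`sync_cred_{connectionId}` via HKDF from the platform master secret) can be sketched with Node's built-in HKDF. This is a minimal sketch: the SHA-256 digest and empty salt are assumptions, and the production Workers code would use Web Crypto's `deriveBits` rather than Node's API:

```typescript
import { hkdfSync } from "node:crypto";

// Derive a per-connection 256-bit key for AES-256-GCM. The info string comes
// from the docs; digest and salt handling here are assumptions for the sketch.
function deriveConnectionKey(masterSecret: Buffer, connectionId: string): Buffer {
  const info = `sync_cred_${connectionId}`;
  // 32 bytes → a 256-bit key; HKDF is deterministic for a given (secret, info).
  return Buffer.from(hkdfSync("sha256", masterSecret, Buffer.alloc(0), info, 32));
}
```

Because each connection gets its own derived key, a leaked ciphertext row is useless without both the master secret and the connection id.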
## API reference Every sync operation is available via the REST API: - [**List connections**](/api-reference/sync-connections/list-connections) - returns every connection for the org plus joined workspace / environment / pod names - [**Create connection**](/api-reference/sync-connections/create-connection) - validates credentials against the provider before persisting; auto-triggers an initial sync - [**Trigger sync**](/api-reference/sync-connections/trigger-sync) - enqueues a one-off sync job for a connection - [**Get drift report**](/api-reference/sync-connections) - classifies destination keys into `inSync` / `unmanaged` / `orphaned` without exposing values - [**Get feature flags**](/api-reference/organisations/get-features) - read `syncEnabled` and `syncTakeoverEnabled` - [**Toggle feature**](/api-reference/organisations/toggle-feature) - owner-only endpoint for flipping the org-wide switches For a deeper look at the queue architecture, fan-out logic, cron safety net, and the reconcile pass internals, see the [Sync architecture](/guides/sync-architecture) guide. --- # Teams Source: https://docs.cryptflare.com/security/teams Group members into teams for scoped access policies and collaboration # Teams Teams let you group organisation members and apply access policies at the group level. Instead of managing permissions for each member individually, you create teams like "Backend Engineers" or "DevOps" and attach policies to those teams. 
## Why use teams

- **Scoped access** - Apply deny or allow policies to an entire team instead of individual members
- **Easier onboarding** - Add a new member to a team and they inherit all the team's policies automatically
- **Separation of concerns** - Give backend engineers access to backend workspaces and frontend engineers access to frontend workspaces, without complex per-user rules
- **Audit clarity** - See which team's policy granted or denied an action in the evaluation chain

## Plan availability

Teams are not available on the Free plan. Pro plans include a limited number of teams, and Team plans have unlimited teams. See the [pricing page](/getting-started/pricing) for current limits.

## Creating a team

Teams belong to an organisation. To create a team, you need the `members:invite` permission (owner or manager). Each team has a name, a URL-safe slug, and an optional description.

## Managing team members

### Adding a member

Any existing organisation member can be added to a team. A member can belong to multiple teams.

### Removing a member

Removing a member from a team immediately removes all team-scoped policies for that member. Their org-level role permissions are unaffected.

## Team policies

Each team can have policies that allow or deny specific permissions on specific resources. Policy effect flows top-down through org, team, and pod scope, with JIT grants overlaying everything below them. A deny anywhere on the chain wins over an allow at the same level.

*(Diagram: org deny/allow flows into team deny/allow, which flows into the pod scope; JIT grants point directly at the pod.)*

Evaluation walks deny first at every level, so a single team deny on a production pod overrides a global allow without needing to edit org-wide rules.
Team policies are evaluated as part of the [policy evaluation chain](/security/policies#evaluation-order):

1. JIT access grants
2. Global DENY policies
3. **Team DENY policies**
4. **Team ALLOW policies**
5. Global ALLOW policies
6. Org role permissions
7. Default: DENY

Team deny policies are checked before team allow policies, which means you can set a broad allow on a team and then add targeted denies for sensitive resources.

### Adding a team policy

### Team policy vs global policy

| Aspect | Team policy | Global policy |
|--------|------------|---------------|
| Scope | Applies only to team members | Applies to all org members |
| Priority | Evaluated after global DENY | Evaluated after team policies |
| Conditions | Not supported | Supports time windows and IP ranges |
| Resource targeting | By resource type and ID | By glob pattern |

Team policies target a specific resource type (`workspace`, `environment`, or `pod`) and resource ID. Global policies use glob patterns to match multiple resources at once.

## Teams and org roles

Teams add granularity on top of the existing [RBAC role system](/security/access-control). A member's org role (developer, manager, etc.) defines their baseline permissions. Team policies can then:

- **Deny** actions the role would normally allow (e.g., deny `secrets:delete` on production for the developer role)
- **Allow** actions on specific resources (e.g., allow read access to a workspace the role can already access, overriding a global deny)

The org role is always the fallback. If no team or global policy matches a request, the system defers to the member's role permissions.

## Deleting a team

Deleting a team removes all team memberships and team policies. Members keep their org-level role and any global policies still apply.
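The evaluation order above can be sketched as a straight-line walk — type and field names here are illustrative, assuming each "matches" check has already been resolved to a boolean:

```typescript
type Decision = "allow" | "deny";

// Pre-resolved match results for one (member, permission, resource) request.
interface PolicyContext {
  jitGrant: boolean;    // an active JIT access grant covers this resource
  globalDeny: boolean;  // a global DENY policy matches
  teamDeny: boolean;    // a team DENY policy matches
  teamAllow: boolean;   // a team ALLOW policy matches
  globalAllow: boolean; // a global ALLOW policy matches
  rolePermits: boolean; // the member's org role includes the permission
}

function evaluate(ctx: PolicyContext): Decision {
  if (ctx.jitGrant) return "allow";    // JIT grants overlay everything below
  if (ctx.globalDeny) return "deny";
  if (ctx.teamDeny) return "deny";     // team deny beats team allow
  if (ctx.teamAllow) return "allow";
  if (ctx.globalAllow) return "allow";
  if (ctx.rolePermits) return "allow"; // org role is the fallback
  return "deny";                       // default: DENY
}
```

Note how a team allow can override a global deny only via the explicit JIT path — in the plain chain, global DENY is checked first.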
## API reference | Method | Path | Description | |--------|------|-------------| | `GET` | `/v1/organisations/:org/teams` | List teams | | `POST` | `/v1/organisations/:org/teams` | Create team | | `GET` | `/v1/organisations/:org/teams/:team` | Get team with members and policies | | `PATCH` | `/v1/organisations/:org/teams/:team` | Update team | | `DELETE` | `/v1/organisations/:org/teams/:team` | Delete team | | `GET` | `/v1/organisations/:org/teams/:team/members` | List team members | | `POST` | `/v1/organisations/:org/teams/:team/members` | Add member to team | | `DELETE` | `/v1/organisations/:org/teams/:team/members/:userId` | Remove member from team | | `GET` | `/v1/organisations/:org/teams/:team/policies` | List team policies | | `POST` | `/v1/organisations/:org/teams/:team/policies` | Add team policy | | `DELETE` | `/v1/organisations/:org/teams/:team/policies/:policyId` | Delete team policy |
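As a worked example of the create-team call, a minimal payload builder. The slug rules shown (lowercase, hyphen-separated) are an assumption beyond the docs' "URL-safe", and the helper names are hypothetical:

```typescript
// Hypothetical helper: derive a URL-safe slug from a team name for
// POST /v1/organisations/:org/teams. Exact server-side slug validation
// is an assumption here — the docs only require a URL-safe slug.
function teamSlug(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
}

// Build the create-team body: name, URL-safe slug, optional description.
function createTeamPayload(name: string, description?: string) {
  return { name, slug: teamSlug(name), ...(description ? { description } : {}) };
}
```

The resulting object would be sent as the JSON body of `POST /v1/organisations/:org/teams` (with the `x-csrf-token` header required for all mutating requests).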