
Run provider SDKs in dedicated microVMs, keep their secrets scoped with `secretEnv`, and let workflow VMs call them through manager-routed proxies instead of sharing credentials across the whole runtime.
Provider VMs
One SDK, one secret scope
Keep the Google, Microsoft Graph, or any other SDK in its own VM and inject only the env vars that provider needs.
Manager Bridge
No direct guest-to-guest trust
Consumer proxies call the manager bridge, which validates the alias and then invokes the provider VM over vsock.
Workflow VMs
Discover, compose, and sync
Read peer manifests and README files, optionally mount mirrored source, and compose multi-SDK workflows locally.
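Inside a consumer VM, discovery starts from `/workspace/peers/index.json`. A minimal sketch of reading it, assuming a hypothetical index shape with a `peers` list (the real schema generated by the manager may differ):

```python
import json

# Hypothetical index contents; the real file is generated by the manager
# at /workspace/peers/index.json inside the consumer VM.
sample_index = """
{
  "peers": [
    {"alias": "google", "vmId": "vm-google", "sourceMode": "hidden"},
    {"alias": "outlook", "vmId": "vm-outlook"}
  ]
}
"""

def list_peer_aliases(index_text: str) -> list[str]:
    """Return the aliases of all linked provider VMs."""
    index = json.loads(index_text)
    return [peer["alias"] for peer in index["peers"]]

print(list_peer_aliases(sample_index))  # ['google', 'outlook']
```

Each alias then maps to a generated proxy module the workflow can import, so composition stays local while SDK calls route through the manager.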
# Consumer/workflow VM linked to isolated provider SDK VMs
curl -X POST http://localhost:3000/v1/vms \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cpu": 1,
    "memMb": 256,
    "allowIps": [],
    "outboundInternet": false,
    "peerLinks": [
      { "alias": "google", "vmId": "vm-google", "sourceMode": "hidden" },
      { "alias": "outlook", "vmId": "vm-outlook" }
    ]
  }'
# → 200 OK • VM ready to execute
The current platform is built around provider VMs, consumer VMs, proxy generation, and manager-routed calls.
Provider SDKs live in separate microVMs with their own `secretEnv`, so workflow code never gets those credentials locally.
Consumer VMs get `/workspace/peers/index.json`, per-alias manifests, generated README files, and importable proxy modules.
Consumer proxies call the manager bridge, which validates the peer link and executes the real SDK code inside the provider VM.
Snapshot provider VMs after SDK upload and snapshot consumer VMs after peer sync to cut repeated setup and boot time.
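The split between the two VM roles shows up directly in the create-VM bodies: `secretEnv` only ever appears on the provider side, `peerLinks` only on the consumer side. A sketch of both payloads, using the fields from the examples on this page (the secret value and the provider's `outboundInternet` setting are illustrative assumptions):

```python
def provider_vm_body(secret_env: dict) -> dict:
    """Create-VM body for a provider SDK VM: credentials live here only."""
    return {"cpu": 1, "memMb": 256, "outboundInternet": True, "secretEnv": secret_env}

def consumer_vm_body(peer_links: list[dict]) -> dict:
    """Create-VM body for a workflow VM: no credentials, only peer links."""
    return {
        "cpu": 1,
        "memMb": 256,
        "allowIps": [],
        "outboundInternet": False,
        "peerLinks": peer_links,
    }

provider = provider_vm_body({"GOOGLE_CLIENT_SECRET": "example-placeholder"})
consumer = consumer_vm_body([{"alias": "google", "vmId": "vm-google", "sourceMode": "hidden"}])
assert "secretEnv" not in consumer  # workflow code never sees provider credentials
```

Because the consumer body carries no secrets, snapshotting it after peer sync is safe to reuse freely; the sensitive state stays in the provider VM's snapshot.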
Deploy on any Linux server with KVM support
# Install via Docker Compose (recommended)
# 1) Create .env
# - API_KEY: required for all /v1/* requests (send as X-API-Key header)
# - VM_SECRET_KEY: required for peer SDK VMs (peerLinks)
# - ADMIN_EMAIL / ADMIN_PASSWORD: Admin UI login credentials
# - RUN_DAT_SHEESH_DATA_DIR: host directory to persist manager state (DB, VM storage)
# - RUN_DAT_SHEESH_IMAGES_DIR: host directory to store uploaded guest images (vmlinux + rootfs.ext4)
# - ROOTFS_CLONE_MODE: "auto" is fine for most setups (advanced)
# - SNAPSHOT_TEMPLATE_*: sizing for legacy template snapshot VMs (optional)
cat > .env <<'ENV'
API_KEY=dev-key
VM_SECRET_KEY=change-me
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=admin
RUN_DAT_SHEESH_DATA_DIR=./data
RUN_DAT_SHEESH_IMAGES_DIR=./images
ROOTFS_CLONE_MODE=auto
# Warm pool is optional and needs a default image.
# Warm pool is skipped for VMs created with secretEnv or peerLinks.
ENABLE_WARM_POOL=false
WARM_POOL_TARGET=1
WARM_POOL_MAX_VMS=4
SNAPSHOT_TEMPLATE_CPU=1
SNAPSHOT_TEMPLATE_MEM_MB=256
ENV
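Before running `docker compose up`, it is worth checking that the required keys are actually set. A small sketch that parses the flat KEY=VALUE format above and flags anything missing (the sample string is deliberately incomplete to show the check firing):

```python
def parse_env(text: str) -> dict:
    """Parse a flat KEY=VALUE .env file, skipping blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key] = value
    return env

sample = "API_KEY=dev-key\nVM_SECRET_KEY=change-me\n# comment\nENABLE_WARM_POOL=false\n"
env = parse_env(sample)

# API_KEY gates every /v1/* call and VM_SECRET_KEY is needed for peer SDK VMs,
# so treat an empty value for either as a setup error.
missing = [k for k in ("API_KEY", "VM_SECRET_KEY", "ADMIN_EMAIL", "ADMIN_PASSWORD") if not env.get(k)]
print(missing)  # ['ADMIN_EMAIL', 'ADMIN_PASSWORD'] for this truncated sample
```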
# 2) Create host directories
mkdir -p ./data ./images
# 3) Create docker-compose.yml (published image)
cat > docker-compose.yml <<'YAML'
version: "3.9"
# Runs the manager API directly on http://127.0.0.1:3000 (no proxy/TLS).
services:
  manager:
    image: lelemm/rundatsheesh:latest
    # Keep dev aligned with integration + prod compose hardening.
    read_only: true
    security_opt:
      - no-new-privileges:true
      - seccomp=unconfined
      - apparmor=unconfined
    cap_drop:
      - ALL
    cap_add:
      - NET_ADMIN
      # Required by Firecracker jailer (mount namespace + chroot + privilege drop + dev setup).
      - SYS_ADMIN
      - SYS_CHROOT
      - SETUID
      - SETGID
      - MKNOD
      - CHOWN
      - DAC_OVERRIDE
      - DAC_READ_SEARCH
    tmpfs:
      - /tmp
      - /run
    sysctls:
      net.ipv4.ip_forward: "1"
      net.ipv4.conf.all.forwarding: "1"
      net.ipv4.conf.default.forwarding: "1"
    environment:
      API_KEY: ${API_KEY:-dev-key}
      VM_SECRET_KEY: ${VM_SECRET_KEY}
      ADMIN_EMAIL: ${ADMIN_EMAIL:-admin@example.com}
      ADMIN_PASSWORD: ${ADMIN_PASSWORD:-admin}
      PORT: 3000
      STORAGE_ROOT: /var/lib/run-dat-sheesh
      IMAGES_DIR: /var/lib/run-dat-sheesh/images
      AGENT_VSOCK_PORT: 8080
      ROOTFS_CLONE_MODE: ${ROOTFS_CLONE_MODE:-auto}
      ENABLE_WARM_POOL: ${ENABLE_WARM_POOL:-false}
      WARM_POOL_TARGET: ${WARM_POOL_TARGET:-1}
      WARM_POOL_MAX_VMS: ${WARM_POOL_MAX_VMS:-4}
      SNAPSHOT_TEMPLATE_CPU: ${SNAPSHOT_TEMPLATE_CPU:-1}
      SNAPSHOT_TEMPLATE_MEM_MB: ${SNAPSHOT_TEMPLATE_MEM_MB:-256}
    ports:
      - "3000:3000"
    volumes:
      - ${RUN_DAT_SHEESH_IMAGES_DIR:-./images}:/var/lib/run-dat-sheesh/images
      - ${RUN_DAT_SHEESH_DATA_DIR:-./data}:/var/lib/run-dat-sheesh
    devices:
      - /dev/kvm:/dev/kvm
      - /dev/vhost-vsock:/dev/vhost-vsock
      - /dev/net/tun:/dev/net/tun
      # Optional (some hosts expose this; integration script mounts it when present)
      # - /dev/vsock:/dev/vsock
YAML
# 4) Download guest images from GitHub releases OR build them yourself
# Option A: Download from releases (recommended)
# Visit https://github.com/lelemm/rundatsheesh/releases
# Download the guest image zip (e.g. alpine.zip) and extract to ./images/
# Option B: Build from source (requires Docker)
# git clone https://github.com/lelemm/rundatsheesh.git
# cd rundatsheesh && ./scripts/build-guest-images.sh
# cp -r dist/images/* ./images/
# 5) Start
docker compose up -d
# 6) Open:
# - Admin UI: http://localhost:3000/login/
# - Docs: http://localhost:3000/docs/
# - Swagger: http://localhost:3000/swagger

Requires Linux with KVM enabled. See system requirements for details. Peer SDK VMs require `VM_SECRET_KEY`, and the warm pool only works when a default image is configured.
RESTful endpoints with SDKs for Python, Node.js, Go, and more
# Run a command inside a VM
curl -X POST http://localhost:3000/v1/vms/{vm_id}/exec \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "cmd": "echo hello && id -u"
  }'

# Snapshot a VM
curl -X POST http://localhost:3000/v1/vms/{vm_id}/snapshots \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{}'
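The same calls translate directly to any HTTP client. A minimal Python sketch that builds the authenticated request (the `vm-123` ID and base URL are illustrative; sending is left to `urlopen` once the manager is up):

```python
import json
import urllib.request

BASE = "http://localhost:3000"

def build_request(path: str, api_key: str, body: dict) -> urllib.request.Request:
    """POST request carrying the X-API-Key header, matching the curl examples."""
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode(),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical VM id for illustration only.
req = build_request("/v1/vms/vm-123/exec", "dev-key", {"cmd": "echo hello && id -u"})
print(req.full_url)  # http://localhost:3000/v1/vms/vm-123/exec
# Send with urllib.request.urlopen(req) against a running manager.
```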