Independent AI compute operator. 120 GPUs in production (H100, B200, B300 pilot) across 15 servers, with investor-backed capital funding further expansion. 7+ years operating distributed infrastructure.
One current workload is Gonka AI, a decentralized AI infrastructure network where GPU compute is rewarded in a native token. Gonka is not the only deployment target: architecturally, we position as a compute operator, not a token holder, which decouples our infrastructure economics from the price of any single asset.
Tenant-side deals are paid upfront with investor capital. We take capacity (H100, B200, B300) on short-term commitments of 1–12 months, ideally 1–6, and lock the full block for the term rather than pay-as-you-go. For tier-1 providers, this typically monetizes flex/idle capacity in the windows between long leases; for tier-2/3 providers, it acts as a stable anchor tenant with predictable cash flow.
Before AI compute: validator operations for major L1/L2 networks (Cosmos Hub, Ethereum, Solana, Celestia, Avalanche, Aptos, and others), working in direct contact with core teams on pre-mainnet testnets, security reviews of node configurations, and 24/7 production ops. The same operational rigor (high-load ops, observability, incident response) carries directly into today's AI compute setup.
The team combines engineers with production experience running high-load distributed infrastructure, DevOps, operational management, and a commercial function covering neocloud providers, data centers, and hardware vendors. Actively hiring against current and upcoming partnerships.