Alexander Tirovskiy

Independent operator · AI compute infrastructure · since 2018

Independent AI compute operator. 120 GPUs in production (H100, B200, B300 pilot) across 15 servers, with investor-backed capital funding expansion. 7+ years operating distributed infrastructure.

120 GPUs in production · 15 servers, 8 GPUs each, three hardware generations
72× H100 · 9 servers
40× B200 · 5 servers
8× B300 · 1 server, pilot

One current workload is Gonka AI, a decentralized AI infrastructure network where GPU compute is rewarded in a native token. Gonka is not the only deployment target: architecturally, I position as a compute operator, not a token holder, which decouples infrastructure economics from the price of any single asset.

Capital
Investor-backed
Growth funding in place. Independent of token emission.
Payment
Prepayment
Paid upfront — zero credit exposure for the counterparty.
Term
1–12 months
Ideally 1–6. Shorter than a long-term lease, but committed through the term.

Looking for GPU capacity from neocloud providers and data centers in Europe, the UK, the US, the UAE, India, and Japan.

Tenant-side deal, paid upfront with investor capital. We take capacity (H100, B200, B300) on short-term commitments of 1–12 months, ideally 1–6, and lock it whole for the term, not pay-as-you-go. For tier-1 providers, this typically monetizes flex/idle capacity in the windows between long leases; for tier-2/3 providers, it acts as a stable anchor tenant with predictable cash flow.

01
Prepayment
Capital from investors, not from token emission. Paid upfront — zero credit exposure on the counterparty side, no net-30 reconciliation rounds.
02
High utilization
Workload profile is high-density AI compute, 24/7 near peak. Capacity gets used, not paid-for-and-idle.
03
Short-to-mid term fit
If you have an idle flex-capacity window between long contracts — we take it whole. 1–6 months at a premium rate for speed and predictable close.
04
Clean counterparty
7+ years of production distributed-infrastructure operations, 9-person team. References and technical diligence on request.

Seven-plus years of continuous distributed-infrastructure operations.

Before AI compute — validator operations for major L1/L2 networks (Cosmos Hub, Ethereum, Solana, Celestia, Avalanche, Aptos, and others). Work done in direct contact with core teams: pre-mainnet testnets, security reviews of node configurations, 24/7 production ops. The same operational rigor — high-load ops, observability, incident response — carries directly into today's AI compute setup.

A team of nine.

Engineers with production experience running high-load distributed infrastructure; DevOps; a commercial function covering neocloud providers, data centers, and hardware vendors; and operational management. Actively hiring against current and upcoming partnerships.

If you operate GPU capacity for AI compute — H100, B200, B300 — and would value a prepaying tenant for 1–6 months, let's talk.