Frequently Asked Questions

Everything you need to know about our GPU cloud platform. Can't find the answer you're looking for? Reach out to our team.

What does the platform offer?

One API, many models, with APAC-native routing. We host open-weight models in Singapore today, with proprietary model routes planned for Sydney and Singapore. Regional teams generally see lower response latency than with US-routed paths.
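As a rough illustration of what "one API, many models" can look like, here is a minimal sketch of building an OpenAI-compatible chat payload for a routing layer. The endpoint URL, model ID, and `preferred_region` hint are assumptions for illustration only, not the platform's documented API; consult the actual API docs for real base URLs and model names.

```python
import json

# Hypothetical endpoint -- illustrative only, not the platform's real URL.
ROUTER_URL = "https://api.example-apac-cloud.com/v1/chat/completions"

def build_router_request(model: str, prompt: str,
                         region: str = "ap-southeast-1") -> dict:
    """Build an OpenAI-compatible chat payload. A router behind one API
    would dispatch this to the nearest replica of the requested model."""
    return {
        "model": model,  # e.g. an open-weight model hosted in Singapore
        "messages": [{"role": "user", "content": prompt}],
        # Assumed routing hint; field name is hypothetical.
        "metadata": {"preferred_region": region},
    }

payload = build_router_request("llama-3.1-8b-instruct", "Hello from Sydney")
print(json.dumps(payload, indent=2))
```

The OpenAI-compatible shape is a common convention among model-routing services, which is why this sketch uses it; it lets existing client SDKs target a different base URL without code changes.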

Is the platform available now?

Yes! Our platform is live and ready for production workloads. You can sign up here for instant access to GPU instances and the router API across our APAC infrastructure.

Which GPUs do you offer?

We offer NVIDIA T4, L4, A100, and H100 GPUs. See our pricing page for full specs and per-hour rates.

Where is your infrastructure located?

Our infrastructure is live in Singapore today. We are expanding to additional APAC regions; follow our changelog for region announcements.

How does your pricing compare to global providers?

Our pricing is competitive with global providers, and APAC users see significantly better performance thanks to reduced latency. We also provide transparent billing with no hidden costs.

What workloads is the platform suited for?

Our platform is optimized for AI/ML training and inference, batch processing, rendering, and other GPU-accelerated workloads. It's particularly well suited to businesses serving customers in the APAC region.

Can I host proprietary models?

Yes. We provide secure, isolated environments for hosting proprietary models with enterprise-grade security, including dedicated infrastructure options for larger deployments.

What support do you offer?

We offer tiered support options, from community support on our starter tier to dedicated account managers for enterprise customers. All support is provided in APAC-friendly time zones.

How do you handle data regulations and compliance?

Our infrastructure is designed with APAC data regulations in mind. We maintain strict data separation, offer data residency options, and provide compliance documentation for relevant regional standards.

Still have questions?

Contact our team