Hi HN,
YC W24 company here. We just pivoted from drone delivery to build gpudeploy.com, a website that routes on-demand traffic for GPU instances to idle compute resources.
The experience is similar to Lambda Labs, which we’ve really enjoyed for training our robotics models, but their GPUs are never available on-demand. We’re also trying to be more no-nonsense (no hidden fees, no H100s behind “contact sales”, etc.).
The tech to make this work is actually kind of nifty; we may do an in-depth HN post on it soon.
Right now, we have H100s, a few RTX 4090s, and a GTX 1080 Ti online. Feel free to try it out!
Also, if you’ve got compute sitting around (a GPU cluster, a crypto-mining operation, or even a single GPU), or if you’re an AI company with idle compute (hopefully not in a Stability AI way) and want to see some ROI, hooking it up to our site is simple and flexible, and you might get a few researchers using your compute.
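For anyone curious what “routing on-demand traffic to idle compute” means in the abstract, here’s a toy sketch of the idea: providers register spare GPU hosts, and incoming requests get matched to whichever registered host is idle. All names and structure here are hypothetical illustrations, not gpudeploy’s actual API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Host:
    # A registered machine offering one GPU type (hypothetical model).
    name: str
    gpu: str
    busy: bool = False

class Router:
    def __init__(self) -> None:
        self.hosts: List[Host] = []

    def register(self, host: Host) -> None:
        # A provider "hooks up" spare compute by registering it.
        self.hosts.append(host)

    def acquire(self, gpu: str) -> Optional[Host]:
        # Route the request to the first idle host with a matching GPU.
        for h in self.hosts:
            if h.gpu == gpu and not h.busy:
                h.busy = True
                return h
        return None  # nothing idle right now

    def release(self, host: Host) -> None:
        # Returning a host to the pool makes it routable again.
        host.busy = False

router = Router()
router.register(Host("rig-1", "RTX 4090"))
router.register(Host("dc-1", "H100"))

job = router.acquire("H100")
```

The real system presumably also handles health checks, billing, and scheduling fairness, but the core matchmaking is this simple in spirit.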
Nice rest of the week!
Comments URL: https://news.ycombinator.com/item?id=40260259
Points: 58
# Comments: 22