
I am super interested in AI on a personal level and have been involved for a number of years.

I have never seen a GPU crunch quite like the one right now. To anyone interested in hobbyist ML, I highly recommend vast.ai.
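For example, here is a minimal sketch of searching their marketplace with the vastai CLI (pip install vastai). The filter fields and syntax here are from memory and should be treated as assumptions; verify against `vastai search offers --help`:

    import subprocess

    # Hypothetical query: single-GPU offers with >= 24 GB VRAM from
    # reliable hosts. Field names are assumptions -- check the CLI docs.
    subprocess.run([
        "vastai", "search", "offers",
        "reliability > 0.98 num_gpus = 1 gpu_ram >= 24",
    ])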


Additional clouds:

For H100s and A100s: Lambda, FluidStack, RunPod; also CoreWeave, Crusoe, Oblivus, and Latitude.

For non-A100/H100s: Vast, TensorDock, and RunPod here too.


Depends on what you class as hobbyist, but I've been running a T4 for a few minutes at a time to get acquainted with tools and concepts, and I found modal.com really good for this. They resell AWS and GCP at the moment. They also have A100s, but a T4 is all I need for now.
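For reference, a minimal sketch of what that looks like on Modal. This assumes their current Python API (modal.App, gpu="T4"); the app and function names are just illustrative:

    import modal

    app = modal.App("t4-scratchpad")  # hypothetical app name

    # Request a T4 and a container image with PyTorch installed.
    @app.function(gpu="T4", image=modal.Image.debian_slim().pip_install("torch"))
    def check_gpu():
        import torch
        # Confirm the GPU is visible before doing any real work.
        print(torch.cuda.get_device_name(0))

    @app.local_entrypoint()
    def main():
        check_gpu.remote()  # runs remotely; invoke with `modal run <file>.py`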


Significantly more expensive than an equivalent 3090 configuration, if you can do model parallelism.


What do you mean by this? I use less than the $30/mo of free included usage.

I am guessing you mean that at some point it's cheaper to just buy your own 3090 than to keep paying a cloud by the second for a server-grade Nvidia setup.


I think this applies more to training use cases. If you can get by with less than $30/mo in AWS compute (which is quite expensive per unit of work), then it likely does not make a difference.

What I mean is that you can rent four 3090 GPUs for much less than an A100 on AWS, because you are not paying Nvidia's "cloud tax" on flops/$.
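To make the flops/$ point concrete, here's a back-of-the-envelope sketch. All prices are made-up placeholders (spot prices move constantly), and the throughput figures are rough dense FP16 tensor numbers, so treat everything as an assumption:

    # Rough FP16 tensor throughput; exact figures depend on accumulate
    # mode and clocks. Prices are illustrative placeholders, not quotes.
    options = {
        "4x RTX 3090 (marketplace)": {"usd_per_hr": 4 * 0.25, "tflops": 4 * 71},
        "1x A100 40GB (big cloud)":  {"usd_per_hr": 3.00,     "tflops": 312},
    }
    for name, o in options.items():
        print(f"{name}: {o['tflops'] / o['usd_per_hr']:.0f} TFLOPS per $/hr")

Even with generous big-cloud numbers, the marketplace 3090s come out well ahead on raw flops per dollar.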


Many thanks for posting about vast.ai, which I had never heard of! It's a sort of gig-economy marketplace for GPUs. The first machine I tried just now worked fine: 512 GB of RAM, 256 AMD CPU cores, and an A100 GPU, and I got about 4 minutes for $0.05 (which they provided for free).


The only caveat is that it is not really appropriate for privacy-sensitive use cases.

Also, many of the available options are clearly recycled crypto-mining rigs with somewhat odd configurations (poor GPU bandwidth, low CPU RAM), so it's worth a quick sanity check after renting, like the one below.
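A minimal sketch of such a check, assuming nvidia-smi is available on the rented box (pcie.link.gen.current and pcie.link.width.current are standard --query-gpu fields):

    import subprocess

    # Mining rigs often hang GPUs off x1 risers, which shows up as a
    # narrow and/or old PCIe link and poor host<->device bandwidth.
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,pcie.link.gen.current,pcie.link.width.current",
         "--format=csv"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)  # a current link width of 1 is a red flag for training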



