Actually, the TPU Research Cloud program is still going strong! We've expanded the compute pool significantly to include Cloud TPU v4 Pod slices, and larger projects still use hundreds of chips at a time. (TRC capacity has not been reclaimed for internal use.)
Demand for Cloud TPUs is definitely intense, so if you're using preemptible capacity, you're probably seeing more frequent interruptions, but reserved capacity is also available. Hope you email the TRC support team to say hello!
Zak, I love you buddy, but you should have some of your researchers try to use the TRC program. They should pretend to be a nobody (like I was in 2019) and try to do any research with the resources they’re granted. I guarantee you those researchers will all tell you “we can’t start any training runs anymore because the TPUs die after 45 minutes.”
This may feel like an anime betrayal, since you basically launched my career as a scientist. But it’s important for hobbyists and tinkerers to be able to participate in the AI ecosystem, especially today. And TRC just does not support them anymore. I tried, many times, over the last year and a half.
You don’t need to take my word for it. Here’s some unfiltered DMs on the subject: https://imgur.com/a/6vqvzXs
Notice how their optimism dries up, and not because I was telling them how bad TRC has become. It’s because their TPUs kept dying.
I held out hope for so long. I thought it was temporary. It ain’t temporary, Zak. And I vividly remember when it happened. Some smart person at Google proposed a new allocation algorithm back near the end of 2021, and poof, overnight the number of TPUs we could create went from dozens to a handful. It was quite literally overnight; we had monitoring graphs that flatlined. I can probably still dig them up.
I’ve wanted to email you privately about this, but given that I am a small fish in a pond that’s grown exponentially bigger, I don’t think it would’ve made a difference. The difference is in your last paragraph: you allocate reserved instances to those who deserve them, and leave everybody else to fight over 45 minutes of TPU time, when it takes 25 minutes just to create a TPU and fill it with your research data.
Your non-preemptible TPUs are frankly a lie. I didn’t want to drop the L word, but a TPUv3 in euw4a will literally delete itself — aka preempt — after no more than a couple hours. I tested this over many months. That was some time ago, so maybe things have changed, but I wouldn’t bet on it.
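For anyone who wants to rerun that kind of test, here's a rough sketch of the polling loop (not my exact script; the TPU name and zone are placeholders, and it just assumes the standard gcloud CLI and records how long a TPU VM stays READY):

    # Hypothetical uptime probe: poll a TPU VM's state every few minutes and
    # note when it stops being READY (preempted, terminated, or deleted).
    # The TPU name and zone below are placeholders.
    import subprocess
    import time
    from datetime import datetime

    TPU_NAME = "my-trc-tpu"        # placeholder
    ZONE = "europe-west4-a"        # placeholder

    def tpu_state():
        result = subprocess.run(
            ["gcloud", "compute", "tpus", "tpu-vm", "describe", TPU_NAME,
             "--zone", ZONE, "--format", "value(state)"],
            capture_output=True, text=True,
        )
        # If the describe call fails, the TPU is most likely gone entirely.
        return result.stdout.strip() or "DELETED"

    started = datetime.now()
    while True:
        state = tpu_state()
        print(f"{datetime.now().isoformat()} state={state}", flush=True)
        if state != "READY":
            print(f"TPU left READY after {datetime.now() - started}")
            break
        time.sleep(300)  # check every 5 minutes

Spin up a TPU, start that running, and you get a timestamped record of exactly when the allocation dies.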
There’s some serious “left hand doesn’t know that right hand detached from its body and migrated south for the winter” energy in the TRC program. I don’t know where it embedded itself, but if you want to elevate any other engineers from software devs to researchers, I urge you to make some big changes.
One last thing. The support staff of TRC is phenomenal. Jonathan Colton has worked more miracles than I can count, along with the rest of his crew. Ultimately he had to send me an email like “by the way, TRC doesn’t delete TPUs. This distinction probably won’t be too relevant, but I wanted to let you know” (paraphrasing). Translation: you took the power away from the people who knew where to put it (Jonathan) and gave it to some really important researchers, probably in Brain or some other division of Google. And the rest is history. So I don’t want to hear that one of the changes is “ok, we’ve punished the support staff” - as far as I can tell, they’ve moved mountains with whatever tools they had available, and I definitely wouldn’t have been able to do any better in their shoes.
Also, hello. Thanks for launching my career. Sorry that I had to leave this here, but my duty is to the open source community. The good news is that you can still recover, if only you’d revert this silly “we’ll slip you some reserved TPUs that don’t kamikaze themselves after 45 minutes if you ask in just the right way” stuff. That wasn’t how the program was in 2019, and I guarantee that I couldn’t have done the work I did then under the current conditions.
> But it’s important for hobbyists and tinkerers to be able to participate in the AI ecosystem
Totally agree! This was a big part of my original motivation for creating the TPU Research Cloud program. People sometimes assume that e.g. an academic affiliation is required to participate, but that isn't true; we want the program to be as open as possible. We should find a better way to highlight the work of TRC tinkerers - for now, the GitHub and Hugging Face search buttons near the top of https://sites.research.google/trc/publications/ provide some raw pointers.
I'm sorry to hear that you've personally had a hard time getting TPU v3 capacity in europe-west4-a. In general, TRC TPU availability varies by region and by hardware generation, and we've experimented with different ways of prioritizing projects. It's possible that something was misconfigured on our end if your TPU lifetimes were so short. Could you email Jonathan the name of the project(s) you were using and any other data you still have handy so we can figure out what was going wrong?
Also, thanks for the kind words for Jonathan and the rest of the TRC team. They haven't lost any power or control, and they are allocating a lot more Cloud TPU capacity than ever. However, now that everyone wants to train LLMs, diffusion models, and other exciting new things, demand for TPU compute is way up, so juggling all of the inbound TRC requests is definitely more challenging than it used to be.
It’s not euw4a. It’s everywhere. The allocation algorithm across the board kills off TPUs after no more than a couple hours. usc1f, usc1a, usc1c, euw4a; it makes no difference.
It would be funny if someone had flagged gpt-2-15b-poetry (our project) in some special way that prevents us from creating TPUs that ever last more than a few hours, but from what I’ve heard from other people, this isn’t the case. That’s what I mean about the left hand not knowing what’s going on with the right hand. It’s not a misconfiguration. Again, pretend to be some random person who just wants to apply for TPU access, fill out your form, then try to do research with the TPUs that are available to you. You’ll have a rough time, but it’ll also cure the misconception that this is a special case or that it was just me.
Again, no need to take my word for it; here’s an organic comment from someone who was rolling their eyes whenever I was cheerleading TRC, because their experience was so bad: https://news.ycombinator.com/item?id=36936782
I think that the experience is probably great for researchers who get special approval. And that’s fine, if that’s how the program is designed to be. But at least tell people that they shouldn’t expect more than an hour or two of TPU time.
It sounds like you're primarily using preemptible TPU quota, which doesn't come with any availability or uptime expectations at all.
By default, the TRC program grants both on-demand quota and preemptible quota. If you are able to create a TPU VM with your on-demand quota, it should last quite a bit longer than a few hours. (There are situations in which on-demand TRC TPU VMs can be interrupted, but these ought to be rare.) If your on-demand TPU VMs are being interrupted frequently, please email TRC support and provide the names of the TPU hosts that were interrupted so folks can try to help.
When there is very high demand for Cloud TPUs, it's certainly possible for preemptible TPU VMs to be interrupted frequently. It would be an interesting engineering project to make a very robust training system that could make progress even with low TPU VM uptime, and I hope someone does it! Until then, though, you should have a better experience with on-demand resources when you're able to create them. Reserved capacity is even better since it provides an expectation of both availability and uptime.
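To sketch the idea (purely illustrative, not an official recipe; the model, optimizer, paths, and step counts below are all placeholders), a training loop that checkpoints frequently and resumes from the most recent checkpoint at startup can still make progress even when individual TPU VMs don't live long:

    # Minimal illustrative sketch of a preemption-tolerant training loop in JAX.
    # Everything here (paths, model, data, step counts) is a placeholder.
    import os
    import pickle

    import jax
    import jax.numpy as jnp
    import optax  # assumes optax is installed; any optimizer library works similarly

    CKPT_DIR = "/tmp/ckpts"   # in practice this would be a Cloud Storage path
    CKPT_EVERY = 100          # checkpoint often enough to survive short uptimes

    def save_ckpt(step, params, opt_state):
        os.makedirs(CKPT_DIR, exist_ok=True)
        with open(os.path.join(CKPT_DIR, f"step_{step:08d}.pkl"), "wb") as f:
            pickle.dump({"step": step,
                         "params": jax.device_get(params),
                         "opt_state": jax.device_get(opt_state)}, f)

    def load_latest_ckpt():
        files = sorted(os.listdir(CKPT_DIR)) if os.path.isdir(CKPT_DIR) else []
        if not files:
            return None
        with open(os.path.join(CKPT_DIR, files[-1]), "rb") as f:
            return pickle.load(f)

    def loss_fn(params, batch):
        # Toy linear model standing in for whatever is actually being trained.
        pred = batch["x"] @ params["w"]
        return jnp.mean((pred - batch["y"]) ** 2)

    def train(num_steps=10_000):
        optimizer = optax.sgd(1e-3)
        params = {"w": jnp.zeros((8, 1))}
        opt_state = optimizer.init(params)
        start = 0

        ckpt = load_latest_ckpt()
        if ckpt is not None:  # resume after a preemption or interruption
            start, params, opt_state = ckpt["step"], ckpt["params"], ckpt["opt_state"]

        @jax.jit
        def step_fn(params, opt_state, batch):
            grads = jax.grad(loss_fn)(params, batch)
            updates, opt_state = optimizer.update(grads, opt_state)
            return optax.apply_updates(params, updates), opt_state

        for step in range(start, num_steps):
            batch = {"x": jnp.ones((32, 8)), "y": jnp.ones((32, 1))}  # fake data
            params, opt_state = step_fn(params, opt_state, batch)
            if (step + 1) % CKPT_EVERY == 0:
                save_ckpt(step + 1, params, opt_state)

    if __name__ == "__main__":
        train()

Most of the interesting engineering would be in everything around a loop like this: fast checkpoint writes to Cloud Storage, data pipeline resumption, and automatically recreating TPU VMs when they disappear.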
I was using on-demand TPUs primarily, and preemptible TPUs secondarily. Neither would last more than an hour or two. And two was something of a minor miracle by the end.
For future reference, the team looked into this, and it appears that the interruptions you experienced were specific to your project and a small number of other projects. The vast majority of TRC projects should see much longer Cloud TPU uptimes when they are able to create on-demand TPUs.
I'm sorry that you had such a frustrating time and that we weren't able to sort it out via email while it was happening. If you decide to try TRC again and run into issues like this, please be sure to engage with TRC support!
> You don’t need to take my word for it. Here’s some unfiltered DMs on the subject: https://imgur.com/a/6vqvzXs
> Notice how their optimism dries up, and not because I was telling them how bad TRC has become. It’s because their TPUs kept dying.
Unless I'm misreading this, they sound pretty happy and you sound pessimistic? Their last substantial comment was "I'm sure Zak could hook you up with something better"?
TRC is supposed to be the “something better”. This insider TPU stuff is for the birds. If TRC can only offer 4 hours with no preemptions, that’s fine, but they need to be up front about that. Saying that TPUs preempt every 24 hours and then killing them off after 45 minutes is… not very productive.
As for their comments, the third screenshot is the key; they’re agreeing that the situation is bad. They’re a friend, and they’re a little indirect with the way they phrase things. (If you’ve ever had a friend who really doesn’t want to be wrong, you know what I mean; they kind of say things in a circular way in order to agree without agreeing. After a while it’s pretty cute and endearing though.)
I was particularly pessimistic in those DMs because it came a couple months after I thought I’d give TRC one last try, back in January, which was roughly a year after I’d started my “ok, I’m losing hope, but I’ll wait and see” journey. In the meantime I kept cheerleading TRC and driving people to their signup page. But after the TPUs all died in less than two hours yet again, that was that.
I have a really high tolerance for faulty equipment. This is free compute; me complaining is just ungrateful. But I saw what things were like in 2019. “Different” would be the understatement of the century. If my baby wasn’t being incubated in the NICU today, I’d show the charts where our usage went from thousands of cores down to almost zero, and not for lack of trying.
It also would’ve been fine to say “sorry, this is unsustainable, the new limits are one tpu per person per project” and then give me a rock solid tpu. We had those in 2021. One of our TPUv3s stayed online for so long that I started to host my blog on it just to show people that TPUs were good for more than AI; the uptime was measured in months. Then poof, now you can barely fire one up.
I don't have a qualified opinion on the subject of TPU availability.
I'm just pointing out that your summary of the DMs ("Notice how their optimism dries up, and not because I was telling them how bad TRC has become. It’s because their TPUs kept dying") is the opposite of what the DMs show.
As mentioned in another comment, it sounds like you're using preemptible TRC TPU quota. If you use on-demand TRC TPU quota instead, that should improve your uptime substantially.
Frankly, it sounds to me like they're having severe yield+reliability problems with the TPUv4s that aren't getting caught by wafer-level testing, and have binned the flakiest ones for use by outsiders.
A lot of yield issues show up as spontaneous resets/crashes.
It's more likely Google preempting researchers who are on a preemptible research grant, and it's happening a lot more often because there are more paying customers.
The main problem with the TPU Research Cloud is that you get dragged down a LOT by the buggy TPU APIs: not just the Google Cloud API being awful, but the TensorFlow/JAX/PyTorch support being awful too. You also basically must use Google Cloud Storage, which is slow and can be really expensive to get anything into or out of.
The Googlers maintaining the TPU GitHub repo also just basically don't care about your PR unless it's somehow gonna help them in their own perf review.
In contrast, with a GPU-based grid you can not only run the latest & greatest out of the box, but also do a lot of local testing, which saves tons of time.
Finally, the OP here appears to be offering real customer engagement, which is totally absent from my own GCloud experiences across several companies.
Could you share a few technical details about the issues you've encountered with TF / JAX / PyTorch on Cloud TPUs? The overall Cloud TPU user experience improved a whole lot when we enabled direct access to TPU VMs, and I believe the newer JAX and PyTorch integrations are improving very rapidly. I'd love to know which issues are currently causing the most friction.
Check out this list of recent TRC-supported publications: https://sites.research.google/trc/publications/