Yeah, I think "AI" as a term has always been super compromised to the point of uselessness. ML in general is firmly in steps 4-5 - it's integrated into our lives in so many places and generally without the users having to think about it.
Car crash detection, automatic photo editing, heart rate sensing, etc. We use this stuff daily but there's generally little hype about the underlying tech (though some hype about specific applications).
What's in step 2 is "Generative AI", which IMO is also a misnomer for "large language models". The viability and uses of these models are far from proven out yet.
The LLM hype is maybe blinding us to all the other use cases that the new powerful GPUs will provide. Maybe the real progress was the ability to train increasingly large models. I don't think LLMs will solve most problems, but there will be other models that can learn from their success.
This is why I hate the term "AI". AI pathfinding, AI upscaling, and generative art are completely different pieces of technology that fall under the same marketing term. Strictly speaking, the decades of machine learning research in academia are all "AI" as well.