El Capitan being much faster than my desktop doesn't mean that my desktop is useless. Same with LLMs.
I've been using Mistral Small 3.x for a bunch of tasks on my own PC and it has been very useful, especially after I wrote a few custom tools around llama.cpp to make it more "scriptable".
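By "scriptable" I just mean wrapping the model so other programs can call it. A minimal sketch of one way to do that, assuming llama.cpp's llama-server is running locally with its OpenAI-compatible endpoint on the default port (the URL, port, and helper names here are my own, not anything official):

```python
import json
import urllib.request

# Assumes a local llama-server was started with something like:
#   llama-server -m mistral-small.gguf --port 8080
# llama-server exposes an OpenAI-compatible chat completions endpoint.
LLAMA_URL = "http://127.0.0.1:8080/v1/chat/completions"

def build_payload(prompt: str, temperature: float = 0.2) -> dict:
    """Build the JSON body for a single-turn chat completion."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        LLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With something like `ask()` in place, the model plugs into ordinary shell pipelines and scripts, which is most of what makes a local model useful day to day.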
Local models could get 10x better next year; it still won't matter to me whether the frontier models are ahead.
And while we technically can run those frontier models locally (heavily quantized, and thus less capable), they're unusably slow on that 10k dead-weight hardware.