The usability has been plummeting with every single redesign for quite a while, though.
Used to be everything could be found and done in two or three clicks... now it's five minutes clicking and scrolling through the useless single-windowed chaos of the configuration app looking for wherever the last update randomly moved it (finding one or two options that are almost what you're looking for, but can't do what used to take a couple of clicks), five minutes looking it up on what's left of the internet while dodging ads, spam, and hallucinating LLMs, only to find out that the setting you and everyone you know had been using almost daily was removed in the last update “to improve usability”, and five minutes writing eldritch incantations into the registry, group policies, or PowerShell to finally configure the fucking setting...
The other day a younger colleague and I were going over some SQL query, and I went “wait, what was the function for the length of a string in SQL Server?”. He typed the whole question into ChatGPT, which replied (extremely slowly) with some unrelated garbage.
I asked him to let me take the keyboard, typed “sql server string length” into Google, saw LEN in the excerpt from the first result, and went on to do what I'd wanted to do, while in another tab ChatGPT was still spewing nonsense.
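For the record, it's LEN(), with the classic T-SQL gotcha that it ignores trailing spaces (DATALENGTH() gives you raw bytes instead). A quick sanity check with throwaway literals:

    SELECT LEN('abc   ');        -- 3: trailing spaces are ignored
    SELECT DATALENGTH('abc   '); -- 6: raw byte count, spaces included
    SELECT DATALENGTH(N'abc');   -- 6: nvarchar uses 2 bytes per character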
LLMs are slower, several orders of magnitude less accurate, and harder to use than existing alternatives, but they're extremely good at convincing their users that they know what they're doing and what they're talking about.
That causes the people using them to blindly copy their useless, buggy code: code that, even if it weren't incomplete and full of bugs, would be solving a completely different problem, since users rarely manage to ask for what they actually want, and LLMs produce the wrong code most of the time even when asked properly. Everyone's time gets wasted and nobody learns anything.
Not that blindly copying from Stack Overflow is any better, of course, but Stack Overflow or Reddit answers come with comments and alternative answers that, if you read them, go a long way toward telling you whether the code you're copying will work for your particular situation.
LLMs give you none of that context, and are fundamentally incapable of the reasoning (and learning) you'd do when comparing several commented answers.
They'll just very convincingly tell you that their code is right, correct, and adequate to your requirements, and leave it to you (or whoever has to deal with your pull requests) to figure out, without any hints, why it isn't.