Computers aren't people. AI "learning" is a metaphorical usage of that word. Human learning is a complex mystery we've barely begun to understand, whereas we know exactly what these computer systems are doing; though we use the word "learning" for both, it is a fundamentally different process. Conflating the two is fine for normal conversation, but for technical questions like this, it's silly.
It's perfectly consistent to decide that computers "learning" breaks the rules while human learning doesn't, because they're different things. Computer "learning" is a new thing, and it's a lot more like creating replicas than human learning is. I think we should treat it as such.
From that same article stub:
This is a very dangerous path. I recognize it thanks to Dan McQuillan, who writes about this a lot. Governments using algorithmic tools to figure out who needs special services ends up becoming automated neoliberal austerity. He frequently collects examples; I just dug up his Mastodon, and here's a recent toot with three: https://kolektiva.social/@danmcquillan/111207202749078945
Also, the main headline is about automated text translation for calls, which apparently now counts as AI. Ever since ChatGPT melted reporters' brains, everything has become AI. Every time I bring this up, some pedantic person tells me that NLP (or machine vision, or LLMs) is a subfield of AI. Do you do this for any other field? "Doctors use biology to treat disease," or "Engineers use physics to build bridges." Of course not, because it's ridiculous marketing talk that journalists should stop repeating.