While the energy consumption of AI training can be large, there are arguments to be made for its net effect in the long run.
The article's last section gives a few examples that are interesting to me from an environmental perspective. Using smaller, problem-specific models can have a large effect in reducing AI emissions, since emissions do not scale linearly with model size. AI assistance can indeed increase worker productivity, which does not necessarily decrease emissions, but we have to keep in mind that our bodies are pretty inefficient meat bags. Last but not least, AI literacy can lead to better legislation and regulation.
IMO it's not about what metric is used, but how it is used. The current approach, completely avoiding any karma-like mechanism, solves the farming issue, but IMO does not cater to the needs of every user.
For example, I have ADHD, and if accumulating karma gives me much-needed motivation and feel-good chemicals, I am going to take them.
At the same time, holding a user in higher regard because of their karma is stupid; it's better to build real connections with usernames you recognise through continuous communication.
Personally, karma was an easily digestible piece of information about how my outreach on social media was performing. Accumulating karma helps me feel connected with the community, feel accepted.