arc99

joined 3 days ago
[–] [email protected] 1 points 22 hours ago

The ribbon was contentious, but most people are familiar with it and it has advantages like task-centricity and less clutter. LibreOffice has an experimental ribbon-style interface that I think should be worked on, mainstreamed, and offered as a choice during installation or in the settings.

UX in other areas should be improved too. Lots of little annoyances add up for new users and can sour their opinion. It's not hard to look over the UI and spot things which have no business being there, or should only appear in certain contexts, or could be implemented in better ways. I think the project should get some volunteer MS Office users into a lab, ask them to do things, and observe their problems. I'd have power Word, Excel, and PowerPoint users come in, do the non-trivial things they normally do, and see where they trip up, or even whether they can do what they need at all.

[–] [email protected] 2 points 22 hours ago

Good evidence of astroturfing on Reddit. That Reddit took action and banned the Palantir agents only shows that exposure of the op was the problem, not that Reddit acts in good faith.

A good question to ask is what would happen if Lemmy were the victim of astroturfing. It's decentralized for starters, and communities might not even reside on the same instance in the fediverse. I also expect Reddit has monitoring, analytics, and tools that can flag suspicious behaviour automatically, whereas on Lemmy somebody would have to go through logs by hand looking for patterns.

I think Lemmy and other federated platforms have escaped having to deal with these issues simply because anyone attempting to astroturf will do it on the biggest platform. So Lemmy escapes not by any technical or administrative virtue, but by being small fry.

[–] [email protected] 1 points 22 hours ago* (last edited 22 hours ago)

An LLM is an ordered series of parameterized, weighted nodes which are fed a bunch of tokens; millions of calculations later, out comes the next token, which is appended to the input so the process can repeat. It's like turning the handle on some complex Babbage-esque machine. LLMs use a tiny bit of randomness ("temperature") when choosing the next token so the responses are not identical each time.

But it is not thinking. Not even remotely. It's a simulacrum. If you want to see this for yourself, run ollama with the temperature set to 0, e.g.

ollama run gemma3:4b
>>> /set parameter temperature 0
>>> what is a leaf

You will get the same answer every single time.
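To see why temperature 0 removes the randomness, here's a minimal sketch in Python (a toy, not ollama's actual internals): the model's forward pass always maps the same input to the same logits, so with temperature 0 the sampler degenerates to argmax and the same token wins every run. The `logits` values here are made up for illustration.

```python
import math
import random

def sample_next_token(logits, temperature):
    """Pick a token id from raw logits.
    temperature 0 -> greedy argmax, fully deterministic;
    temperature > 0 -> softmax sampling with randomness."""
    if temperature == 0:
        # No randomness at all: always the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Scale logits, then softmax (shifted by max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

# Toy logits: the same prompt always yields the same logits,
# so at temperature 0 the same token is chosen every single time.
logits = [1.2, 3.4, 0.5, 2.9]
print(sample_next_token(logits, 0))  # always token 1
```

With a nonzero temperature the softmax weights let lower-scoring tokens win occasionally, which is the whole "creativity" knob.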

[–] [email protected] 12 points 23 hours ago (3 children)

I think if I were any non-US government I'd be thinking very seriously about not using Microsoft software right now, particularly anything that connects to the cloud. And the same goes for companies with government contracts, or simply for companies that are potential targets of industrial espionage.

That said, LibreOffice needs to tap the EU for funding to broaden its features and improve the UX, because honestly it's not great. It can be extremely frustrating to use LibreOffice after MS Office, partly because the UI is so different: noisy with esoteric actions and unrefined compared to its MS counterpart. Fixing that needs funding, and the goal should be that somebody can pick up LibreOffice for the first time and not be surprised or get stuck by the way it behaves.

[–] [email protected] 2 points 3 days ago

It's even worse when an AI soaks up some project whose APIs are constantly changing. Try using AI to code against Jetty, for example, and you'll be weeping.

[–] [email protected] 8 points 3 days ago (2 children)

All AIs are the same. They're just scraping content from GitHub, Stack Overflow etc., with a bunch of guardrails slapped on, to spew out sentences that conform to their training data, but there is no intelligence there. They're super handy for basic code snippets, but anyone using them for anything remotely complex or nuanced will regret it.

[–] [email protected] 20 points 3 days ago (2 children)

Hardly surprising. LLMs aren't *thinking*, they're just shitting out the next token for any given input of tokens.