this post was submitted on 22 Jun 2024
11 points (73.9% liked)

Technology

This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.
[–] [email protected] -2 points 4 months ago (5 children)

Interesting video, and glad to see open source suggested as a potential solution at the end... yet it does not solve hallucinations (for LLMs), energy consumption (any form of AI), or the fact that the hype itself is an economic and political tool at the service of a few. On the final point about regulators, I believe it's damaging to imply that regulators are ignorant. They are not technical, indeed, but they are not supposed to be. Regulators didn't need to know how to build a plane to dictate rules that improved safety in that industry, nor did they need to be engineers to make the seatbelt mandatory. Instead, they learn from technical experts, e.g. in Europe the JRC, which informs the European Commission, Parliament, etc.

[–] [email protected] 8 points 4 months ago (4 children)

Open source does actually pave the way toward addressing many of these problems. For example, Petals is a torrent-style system for running models that lets regular people pool their resources to run models together.
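The torrent-style idea can be sketched in miniature: each volunteer peer hosts a slice of the model's layers, and a request hops from peer to peer through the swarm. The code below is a toy illustration with made-up names, not the actual Petals API:

```python
# Toy sketch of pipeline-sharded inference across volunteer peers.
# Each "peer" hosts a contiguous slice of the model's layers; a request
# is chained through them, like chunks shared in a torrent swarm.

def make_layer(weight):
    # Stand-in for a transformer block: a simple affine transform.
    return lambda x: x * weight + 1

class Peer:
    def __init__(self, layers):
        self.layers = layers  # the slice this peer volunteers to host

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

def run_inference(peers, x):
    # Chain the partial forward passes across the swarm.
    for peer in peers:
        x = peer.forward(x)
    return x

# A "model" of 6 layers, sharded across 3 volunteer peers.
layers = [make_layer(w) for w in (1, 2, 1, 3, 1, 2)]
peers = [Peer(layers[0:2]), Peer(layers[2:4]), Peer(layers[4:6])]

full = run_inference(peers, 5)   # distributed result
local = Peer(layers).forward(5)  # same model run on one machine
assert full == local             # sharding doesn't change the math
```

The real system adds fault tolerance, peer discovery, and actual transformer blocks, but the core trick is the same: no single participant needs to hold the whole model.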

Problems like hallucinations and energy consumption aren't inherent either. These problems are actively being worked on, and people keep finding ways to make models more efficient. For example, by using the same techniques Google used to solve Go (MCTS and backprop), LLaMA-3 8B gets 96.7% on the math benchmark GSM8K. That's better than GPT-4, Claude, and Gemini, with 200x fewer parameters. https://arxiv.org/pdf/2406.07394
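The MCTS part is a generic search technique, easy to show on a toy problem. Below is a minimal UCT search for a take-1-or-2-stones game (whoever takes the last stone wins); this is a plain MCTS sketch for illustration, not the paper's MCT Self-refine pipeline:

```python
import math
import random

TAKE = (1, 2)  # legal moves: take 1 or 2 stones

class Node:
    def __init__(self, stones, to_move, parent=None, move=None):
        self.stones, self.to_move = stones, to_move
        self.parent, self.move = parent, move
        self.children, self.wins, self.visits = [], 0.0, 0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in TAKE if m <= self.stones and m not in tried]

def uct_child(node, c=1.4):
    # UCB1: balance win rate (exploitation) against exploration.
    return max(node.children, key=lambda ch:
               ch.wins / ch.visits + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones, to_move):
    # Random playout; the player who takes the last stone wins.
    while stones > 0:
        stones -= random.choice([m for m in TAKE if m <= stones])
        to_move = 1 - to_move
    return 1 - to_move  # the player who just moved took the last stone

def mcts(stones, to_move, iters=3000):
    root = Node(stones, to_move)
    for _ in range(iters):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while not node.untried_moves() and node.children:
            node = uct_child(node)
        # 2. Expansion: add one untried child, if any.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            node = Node(node.stones - m, 1 - node.to_move, parent=node, move=m)
            node.parent.children.append(node)
        # 3. Simulation from the new (or terminal) state.
        winner = rollout(node.stones, node.to_move) if node.stones else 1 - node.to_move
        # 4. Backpropagation: credit a node when the player who moved into it won.
        while node.parent is not None:
            node.visits += 1
            if winner != node.to_move:
                node.wins += 1
            node = node.parent
        root.visits += 1
    return max(root.children, key=lambda ch: ch.visits).move

best = mcts(10, 0)  # with enough iterations this converges to taking 1,
                    # leaving the opponent a multiple of 3
```

The linked paper wraps this same select/expand/simulate/backpropagate loop around LLM answer refinement instead of game moves, which is how a small model can search its way to strong math scores.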

And here's an approach being explored for making models more reliable https://www.wired.com/story/game-theory-can-make-ai-more-correct-and-efficient/

The reality is that we can't put the toothpaste back in the tube now. This tech will be developed one way or the other, and it's much better if it's developed in the open.

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago) (1 children)

FWIW I do have my own page on FLOSS AI (cf. https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence), so I do believe it's at least interesting, even important, to understand what it is.

Still, AFAIK both the electricity usage https://www.bloomberg.com/graphics/2024-ai-data-centers-power-grids/ and the limited potential for correction https://arxiv.org/abs/2404.04125, which stems from intrinsic properties of the dataset and the learning process but also from how it's marketed today https://link.springer.com/article/10.1007/s10676-024-09775-5, make me reiterate: FLOSS AI does not automatically solve all the problems of closed-source or proprietary AI.

Edit: I know of Petals; I've even talked with some of the people working on it, and I learned about federated AI and federated learning back then, since at least 2019 (proof), so this isn't new to me.

[–] [email protected] 2 points 4 months ago (1 children)

Again, I'm not arguing that open source automatically solves problems, just that since AI is obviously going to continue being developed, it's better if it's done in the open.

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago) (1 children)

Well, that's one position; another is to say that AI, as it is currently being developed, is:

  • not working due to hallucinations
  • wasteful in terms of resources
  • creates problematic behaviors in terms of privacy
  • creates more inequality

and other problems, and is thus in most cases (say, outside of e.g. numerical optimization as already done at e.g. the DoE, i.e. the "traditional" sense of AI rather than the LLM craze) better ignored entirely.

Edit: what I mean is that the argument of inevitability is itself dangerous, and often abused.

[–] [email protected] 2 points 4 months ago

not working due to hallucinations

It's pretty clear that hallucinations are an issue only for specific use cases. This problem certainly doesn't make ML useless. For example, I find it far faster to use a code-oriented model to get an idea of how to solve a problem than to go to Stack Overflow. The output of the model doesn't need to be perfect; it just needs to get me moving in the right direction.

Furthermore, there is nothing to suggest that the problem of hallucinations is fundamental and can't be addressed going forward. I've linked an example of a research team doing precisely that above.

wasteful in terms of resources

Sure, but so are plenty of other things. And as I've illustrated above, there are already drastic improvements happening in this area.

creates problematic behaviors in terms of privacy

Not really a unique problem either.

creates more inequality

Don't see how that's the case. In fact, I'd argue the opposite to be true, especially if the technology is open and available to everyone.

and other problems, and is thus in most cases (say, outside of e.g. numerical optimization as already done at e.g. the DoE, i.e. the "traditional" sense of AI rather than the LLM craze) better ignored entirely.

There is a lot of hype around this tech, and some of it will die down eventually. However, it would be a mistake to throw the baby out with the bath water.

what I mean is that the argument of inevitability is itself dangerous, and often abused.

The argument of inevitability stems from the fact that people have already found many commercial uses for this tech, and there is a ton of money being poured into it. This is unlikely to stop regardless of what your personal opinion on the tech is.
