this post was submitted on 26 Jun 2024
81 points (100.0% liked)

Technology

top 6 comments
[–] [email protected] 25 points 4 months ago (1 children)

Someday, we’ll have the technology to generate an image of a centaur with 4 boobs without using more energy than a small hospital. Very exciting stuff.

[–] [email protected] -4 points 4 months ago* (last edited 4 months ago) (1 children)

Very exciting stuff.

...NOT!

:)

EDIT: wow, no one got it? OP is named after a joke on Wayne's World

[–] [email protected] 3 points 4 months ago

I obviously got it. But not everyone appreciates high culture.

[–] [email protected] 13 points 4 months ago

This is the best summary I could come up with:


Researchers claim to have developed a new way to run AI language models more efficiently by eliminating matrix multiplication from the process.

The technique has not yet been peer-reviewed, but the researchers—Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, Peng Zhou, and Jason Eshraghian—claim that their work challenges the prevailing paradigm that matrix multiplication operations are indispensable for building high-performing language models.

They argue that their approach could make large language models more accessible, efficient, and sustainable, particularly for deployment on resource-constrained hardware like smartphones.

In the paper, the researchers mention BitNet (the so-called "1-bit" transformer technique that made the rounds as a preprint in October) as an important precursor to their work.

According to the authors, BitNet demonstrated the viability of using binary and ternary weights in language models, successfully scaling up to 3 billion parameters while maintaining competitive performance.

Limitations of BitNet served as a motivation for the current study, pushing them to develop a completely "MatMul-free" architecture that could maintain performance while eliminating matrix multiplications even in the attention mechanism.
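The core trick behind ternary-weight networks can be sketched in a few lines: when every weight is restricted to {-1, 0, +1}, each multiply in a matrix-vector product collapses to an addition, a subtraction, or a no-op. This is a toy illustration of that idea only (not the authors' actual implementation; the function name is made up):

```python
def ternary_matvec(W, x):
    """Compute y = W @ x for a ternary weight matrix W (entries in {-1, 0, +1})
    using only additions and subtractions -- no multiplications."""
    y = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi      # weight +1: add the input
            elif w == -1:
                acc -= xi      # weight -1: subtract the input
            # weight 0: skip the input entirely
        y.append(acc)
    return y

W = [[1, 0, -1],
     [-1, 1, 0]]
x = [2.0, 3.0, 5.0]
print(ternary_matvec(W, x))  # [-3.0, 1.0]
```

The paper's claimed architecture goes further, eliminating matmuls from the attention mechanism as well, but the arithmetic savings come from this same substitution.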


The original article contains 499 words, the summary contains 177 words. Saved 65%. I'm a bot and I'm open source!

[–] [email protected] 12 points 4 months ago

This is interesting but I'll reserve judgement until I see comparable performance past 8 billion params.

All sub-4-billion-parameter models seem to have roughly the same performance regardless of quantization nowadays, so it's hard to see the potential at 3 billion.

[–] [email protected] 4 points 4 months ago