this post was submitted on 18 Apr 2025
212 points (98.6% liked)

Technology

top 21 comments
[–] [email protected] 11 points 6 days ago (1 children)

So... How many cycles can it withstand?

[–] [email protected] 12 points 6 days ago
[–] [email protected] 51 points 1 week ago (3 children)

For those, like me, who wondered how much data was written in 400 picoseconds, the answer is a single bit.

If I'm doing the math correctly, that's about 2.5 Gbit/s per cell, which scales into the 10s-100s GB/s range once you assume many bits written in parallel.
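(A quick sanity check of that math in Python; the 512-bit width below is an assumption for illustration, not a figure from the article.)

```python
# Back-of-envelope: how fast is one bit per 400 picoseconds?
CELL_WRITE_TIME_S = 400e-12               # 400 ps per bit, from the article
bits_per_second = 1 / CELL_WRITE_TIME_S   # 2.5e9 -> 2.5 Gbit/s per cell
print(f"per cell: {bits_per_second / 1e9:.1f} Gbit/s "
      f"({bits_per_second / 8 / 1e9:.3f} GB/s)")

# The 10s-100s GB/s figure only appears once you write many cells at
# once, e.g. a hypothetical 512-bit-wide array:
BUS_WIDTH_BITS = 512                      # assumption, not from the article
parallel_bytes_per_s = bits_per_second * BUS_WIDTH_BITS / 8
print(f"512-wide: {parallel_bytes_per_s / 1e9:.0f} GB/s")   # 160 GB/s
```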

[–] [email protected] 6 points 6 days ago* (last edited 6 days ago)

1 bit / 400 picoseconds is 2.5 Gbit/s, about 10x slower per pin than GDDR7 (the 5090 runs its GDDR7 at 28 Gbit/s per pin across a 512-bit bus).

To be fair, this is non-volatile memory, so the closest real comparison might be Intel Optane. The speeds actually seem somewhat comparable to DDR5, though even that is starting to run into physical-distance and timing issues. The real questions will be around density, cost, and reliability.
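(Working that comparison through, per pin, with the 5090's published GDDR7 numbers:)

```python
# Rough comparison against the RTX 5090's GDDR7 (28 Gbit/s per pin).
# Compared per pin, since the new device's figure is per cell.
new_cell_gbps = 1 / 400e-12 / 1e9     # 2.5 Gbit/s per cell
gddr7_pin_gbps = 28.0                 # GDDR7 per-pin data rate on the 5090
print(f"GDDR7 per pin is ~{gddr7_pin_gbps / new_cell_gbps:.0f}x faster")

# Aggregate over the 5090's full 512-bit bus:
bus_gbs = gddr7_pin_gbps * 512 / 8    # ~1792 GB/s
print(f"5090 bus: ~{bus_gbs:.0f} GB/s")
```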

[–] [email protected] 2 points 6 days ago

You can always parallelize for throughput; the bigger benefit here would be latency.
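(A toy illustration of the point, with made-up bus widths: widening the array multiplies bandwidth, but every individual write still takes the same 400 ps.)

```python
# Parallelism scales bandwidth, not latency: each write still takes
# 400 ps no matter how many cells you drive at once.
LATENCY_S = 400e-12
for width_bits in (1, 64, 512, 4096):   # hypothetical widths
    bandwidth_gbs = width_bits / LATENCY_S / 8 / 1e9
    print(f"{width_bits:>5}-bit bus: {bandwidth_gbs:>8.1f} GB/s, "
          f"latency still {LATENCY_S * 1e12:.0f} ps")
```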

[–] [email protected] 22 points 1 week ago

If it's sustainable.

[–] [email protected] 31 points 1 week ago (2 children)

Still about 100 picoseconds too slow for my taste.

[–] [email protected] 17 points 1 week ago

400 picoseconds too slow for my use case; we're trying to violate causality.

[–] [email protected] 2 points 1 week ago (1 children)

The human eye can’t even perceive faster than 1000 picoseconds, so…

[–] [email protected] 9 points 1 week ago (1 children)

Really? I would have guessed the eye was 6 orders of magnitude slower than that.

[–] [email protected] 7 points 1 week ago (1 children)

What, you can't measure the size of a room by timing the bounces of light hitting the walls?

[–] [email protected] 7 points 1 week ago (1 children)

No! I didn't know that's how you guys were doing it. I feel silly for using perspective and the slight differences from my right and left eyes to judge distance this whole time!

[–] [email protected] 2 points 5 days ago* (last edited 5 days ago)

Sheesh, you’re living in the stone age my dude.

[–] [email protected] 19 points 1 week ago (4 children)

Other than just making everything generally faster, what would be a use-case that really benefits the most from something like this? My first thought is something like high-speed cameras; some Phantom cameras can capture hundreds, even thousands of gigabytes of data per second, so I think this tech could probably find some great applications there.

[–] [email protected] 21 points 1 week ago (1 children)

There are some servers using SSDs as a direct extension of RAM. SSDs don't currently have the write endurance or the latency to fully replace RAM; this solves one of those.

Imagine, though, if we could unify RAM and mass storage. That's a major assumption in the memory hierarchy that goes away.
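(For a feel of what "SSD as RAM extension" looks like today: memory-mapping a file on fast storage and treating it like a byte array. The path and size here are placeholders for illustration.)

```python
import mmap
import os

# Minimal sketch: map a file on an NVMe drive and use it like memory.
PATH = "/mnt/nvme/scratch.bin"   # placeholder path
SIZE = 1 << 30                   # 1 GiB

fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SIZE)           # size the backing file
with mmap.mmap(fd, SIZE) as mem:
    mem[0:5] = b"hello"          # looks like ordinary memory...
    print(bytes(mem[0:5]))       # ...but every miss is an SSD page-in
os.close(fd)
```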

[–] [email protected] 2 points 6 days ago

This was actually the main market for Intel Optane. It's got great write endurance and better latency than flash. I think they ended up discontinuing it because it wasn't cost-effective. I'm actually using some old Optane drives as the OS boot drive in my server.

[–] [email protected] 8 points 1 week ago

The article highlights on-device AI processing. Could be game-changing in a lot of ways.

[–] [email protected] 8 points 1 week ago

I doubt it would work for the buffer memory in a high speed camera. That needs to be overwritten very frequently until the camera is triggered. They didn't say what the erase time or write endurance is. It could work for quickly dumping the RAM after triggering, but you don't need low latency for that. A large number of normal flash chips written in parallel will work just fine.
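(For the curious, the pre-trigger buffer described above is just a ring buffer that gets overwritten continuously until the trigger fires; a toy sketch:)

```python
from collections import deque

# Toy model of a high-speed camera's pre-trigger buffer: frames are
# overwritten continuously until the trigger, which is why erase time
# and write endurance matter so much here.
PRE_TRIGGER_FRAMES = 4  # tiny for illustration; real buffers hold thousands

ring = deque(maxlen=PRE_TRIGGER_FRAMES)  # old frames fall off the back
for frame_id in range(10):
    ring.append(f"frame-{frame_id}")
    if frame_id == 8:          # trigger event
        print(list(ring))      # dump the last N frames to bulk storage
        break
```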

[–] [email protected] 6 points 1 week ago (1 children)

The speed of many machine learning models is bound by the bandwidth of the memory they're loaded on, so that's probably the biggest one.
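(Back-of-envelope for why: each generated token has to stream roughly all the model's weights through the memory bus, so bandwidth caps tokens/s. Numbers below are illustrative assumptions, not from the article.)

```python
# Memory-bound inference: tokens/s is capped at bandwidth / model size.
MODEL_BYTES = 14e9          # e.g. a 7B-parameter model at 16-bit weights
BANDWIDTH_BPS = 1.79e12     # ~1.79 TB/s, a 5090-class GDDR7 bus

max_tokens_per_s = BANDWIDTH_BPS / MODEL_BYTES
print(f"upper bound: ~{max_tokens_per_s:.0f} tokens/s")   # ~128
```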

[–] [email protected] 1 points 6 days ago

Unfortunately, this 1 bit / 400 picoseconds figure is about 10x slower than GDDR7 per pin. The applications for this will be limited to things that need non-volatile memory.

[–] [email protected] 16 points 1 week ago

It's using graphene, so we'll see this as soon as the hundreds of other promised graphene innovations arrive, which is to say: who knows when?