this post was submitted on 21 May 2025
169 points (99.4% liked)

A team from MIT and the Woods Hole Oceanographic Institution (WHOI) has developed an image-analysis tool that cuts through the ocean's optical effects and generates images of underwater environments that look as if the water had been drained away, revealing an ocean scene's true colors. The team paired the color-correcting tool with a computational model that converts images of a scene into a three-dimensional underwater "world" that can then be explored virtually.
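
(For intuition: tools in this family typically "remove the water" by inverting a physics-based image-formation model. Below is a minimal sketch assuming the common attenuation-plus-backscatter formulation; the function, coefficient names, and the assumption that per-pixel range is already known are illustrative, not SeaSplat's actual formulation.)

```python
import numpy as np

def restore_color(I, depth, beta_D, beta_B, B_inf):
    """Invert a simple per-channel underwater image-formation model:
        I = J * exp(-beta_D * z) + B_inf * (1 - exp(-beta_B * z))
    I      : observed image, shape (H, W, 3), values in [0, 1]
    depth  : per-pixel range to the scene in metres, shape (H, W)
    beta_D : attenuation coefficients per channel, shape (3,)
    beta_B : backscatter coefficients per channel, shape (3,)
    B_inf  : veiling-light colour per channel, shape (3,)
    Returns the estimated "water-removed" colours J.
    """
    z = depth[..., None]
    backscatter = B_inf * (1.0 - np.exp(-beta_B * z))  # light added by the water column
    direct = np.clip(I - backscatter, 0.0, None)       # subtract the veiling light
    J = direct * np.exp(beta_D * z)                     # undo range- and colour-dependent attenuation
    return np.clip(J, 0.0, 1.0)
```

The hard part in practice is estimating those coefficients and the per-pixel range; presumably that is where pairing the correction with a 3D model of the scene helps.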

The researchers have dubbed the new tool SeaSplat, in reference to both its underwater application and a method known as 3D Gaussian splatting (3DGS), which takes images of a scene and stitches them together to generate a complete, three-dimensional representation that can be viewed in detail, from any perspective.
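
(For readers unfamiliar with 3DGS: the scene is represented as a cloud of translucent coloured Gaussians that are projected into each view and alpha-composited. The snippet below is a deliberately tiny 2D, isotropic, pre-sorted toy showing only the compositing idea; real implementations project anisotropic 3D Gaussians, depth-sort them per view, and optimize all their parameters against the input photos.)

```python
import numpy as np

def composite_splats(means, colors, opacities, sigmas, H, W):
    """Alpha-composite 2D isotropic Gaussian 'splats' onto an image.
    Splats are assumed already sorted nearest-first, standing in for the
    projected, depth-sorted 3D Gaussians a real 3DGS rasterizer handles.
    means: (N, 2) pixel coords, colors: (N, 3), opacities: (N,), sigmas: (N,)
    """
    ys, xs = np.mgrid[0:H, 0:W]
    image = np.zeros((H, W, 3))
    transmittance = np.ones((H, W))  # fraction of each pixel not yet covered
    for mu, c, alpha, sigma in zip(means, colors, opacities, sigmas):
        a = alpha * np.exp(-0.5 * ((xs - mu[0])**2 + (ys - mu[1])**2) / sigma**2)
        image += (transmittance * a)[..., None] * c   # front-to-back "over" compositing
        transmittance *= (1.0 - a)                    # this splat occludes everything behind it
    return image
```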

For now, the method requires hefty computing resources in the form of a desktop computer that would be too bulky to carry aboard an underwater robot. Still, SeaSplat could work for tethered operations, where a vehicle tied to a ship can explore and send its images up to the ship's computer.

[–] [email protected] 7 points 2 days ago (2 children)

Can't wait to try this on my dive photos without red filters

[–] [email protected] 1 points 2 days ago (1 children)

I get the impression this is a video-only thing because you need multiple vantage points of the scene. You can still extract a single frame at the end, of course (as the article itself does), but you'll need to shift around meaningful distances, like attack submarines do with Target Motion Analysis.

[–] [email protected] 3 points 1 day ago

Yep, since this is using Gaussian Splatting, you'll need multiple camera views and an initial point cloud. You get both for free from video via COLMAP.
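
For anyone wanting to try that route on their own dive footage, the usual frames-to-poses preprocessing looks roughly like the sketch below. The paths, the 2 fps sampling rate, and the choice of exhaustive matching are placeholder assumptions; the COLMAP commands themselves are just the standard sparse-reconstruction pipeline, nothing SeaSplat-specific.

```python
import subprocess
from pathlib import Path

video = "dive.mp4"                                  # placeholder input
frames = Path("frames"); frames.mkdir(exist_ok=True)
work = Path("colmap_out"); work.mkdir(exist_ok=True)
sparse = work / "sparse"; sparse.mkdir(exist_ok=True)
db = work / "database.db"

# 1. Sample still frames from the video. Pick a rate that leaves real
#    parallax between neighbouring frames (2 fps here is just a guess).
subprocess.run(["ffmpeg", "-i", video, "-vf", "fps=2",
                str(frames / "%05d.jpg")], check=True)

# 2. Standard COLMAP pipeline: detect features, match them across frames,
#    then solve for camera poses plus a sparse point cloud -- the two
#    inputs a Gaussian-splatting trainer expects.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(db), "--image_path", str(frames)], check=True)
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", str(db)], check=True)
subprocess.run(["colmap", "mapper", "--database_path", str(db),
                "--image_path", str(frames), "--output_path", str(sparse)], check=True)
```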