this post was submitted on 13 Nov 2023
-6 points (44.4% liked)

[–] [email protected] 44 points 10 months ago (2 children)

I am calling bullshit on all of their points.

  1. No screen, but a projector to project on your hand? WTF? So not only will it show far less information, it will also be a pain to use...

  2. Voice commands? Meaning I will need to tell everyone around me what I am doing? Also calling bullshit on getting them to work in a busy area.

  3. No it can't; there is no way to detect nutrition from a picture of a piece of food.

  4. Privacy? Yeah, get back with me in 20 years when it has been proven to not sell, leak or have data stolen, then I'll be impressed.

In conclusion, this is as real as the Skarp laser razor is.

[–] [email protected] 7 points 10 months ago (1 children)

> No it can't; there is no way to detect nutrition from a picture of a piece of food.

Why not? At least to the extent that a human can: some AI model recognizes the type of food, estimates the amount, and calculates nutrition based on that (hopefully verified with actual data, unlike in this demo).

All three of these functions already exist; all that remains is to put them together.
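
As a rough sketch of what that pipeline would look like (the classifier and portion estimator below are made-up stand-ins, not real libraries, and the nutrition values are samples):

```python
# Hypothetical pipeline: recognize the dish, estimate the portion,
# then look the rest up in a nutrition table. Stand-in functions only.

NUTRITION_PER_100G = {  # sample values: kcal and grams of protein per 100 g
    "mashed potatoes": {"kcal": 88, "protein_g": 1.9},
    "banana": {"kcal": 89, "protein_g": 1.1},
}

def classify_food(image):
    """Stand-in for an image-classification model."""
    return "mashed potatoes"

def estimate_grams(image):
    """Stand-in for a portion-size estimator."""
    return 250.0

def estimate_nutrition(image):
    food = classify_food(image)
    grams = estimate_grams(image)
    per100 = NUTRITION_PER_100G[food]
    return {k: v * grams / 100 for k, v in per100.items()}

print(estimate_nutrition(image=None))  # {'kcal': 220.0, 'protein_g': 4.75}
```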

[–] [email protected] 10 points 10 months ago (2 children)

OK: take any book and keep it closed. How many times do the letters s, q, d, and r appear in it?

There is no way to know without opening the book and counting. Sure, you could make a statistical estimate based on the language used, but that doesn't take into account the font size and spacing, nor the number of pages.
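
To put numbers on that statistical estimate: all you can do from the outside is multiply average English letter frequencies by a guessed character count, and the guess dominates the answer (frequencies below are rough averages):

```python
# Closed-book estimate: typical English letter frequencies times a
# guessed character count. The page-count guess dominates the result.

LETTER_FREQ = {"s": 0.063, "q": 0.001, "d": 0.043, "r": 0.060}

def estimate_counts(pages, chars_per_page=1800):
    total = pages * chars_per_page
    return {letter: round(freq * total) for letter, freq in LETTER_FREQ.items()}

# Is it 150 pages or 600? Font size and spacing are unknown from the outside.
print(estimate_counts(150))  # {'s': 17010, 'q': 270, 'd': 11610, 'r': 16200}
print(estimate_counts(600))  # {'s': 68040, 'q': 1080, 'd': 46440, 'r': 64800}
```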

Since the machine only has a photo to analyze, it can only give extremely generic results, making them effectively useless.

You would need to open the food up and actually analyze a part of the inside with something like a mass spectrometer to get any useful data.

[–] [email protected] 7 points 10 months ago* (last edited 10 months ago) (2 children)

I agree with you, but disagree with your reasoning.

If you take 1lb of potatoes, boil and mash them with no other add-ins, you can reasonably estimate the nutritional information through visual inspection alone, assuming you have enough reference to see there is about a pound of potatoes. There are many nutrition apps out there that utilize this, and it’s essentially just lopping off the extremes and averaging out the rest.
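
Something like this, as a minimal sketch of the "lop off the extremes and average" approach (the database entries are invented):

```python
# A trimmed mean over the database's entries for a dish: drop the
# outliers, average the rest. Values below are invented samples.

entries_kcal_per_100g = [83, 88, 90, 92, 95, 113, 174]  # last two: butter/cheese-heavy recipes

def trimmed_mean(values, trim=1):
    """Drop the `trim` lowest and highest values, average the rest."""
    kept = sorted(values)[trim:-trim]
    return sum(kept) / len(kept)

print(trimmed_mean(entries_kcal_per_100g))  # 95.6 -- the app's single answer
```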

The problem with this is, it’s impossible to accurately guess the recipe, and therefore the ingredients. Take the aforementioned mashed potatoes. You can’t accurately tell what variety of potato was used. Was water added back during the mashing? Butter? Cream cheese? Cheddar? Sour cream? There’s no way to tell visually, assuming uniform mashing, what is in the potatoes.

Not to mention, the pin sees two pieces of bread on top of each other… what is in the bread? Who the fuck knows!

[–] [email protected] 1 points 10 months ago

It isn't as magical (or accurate) as it looks. It's just an extension of how various health tracking apps track food intake. There's usually just one standard entry in the database for mashed potatoes based on whatever their data source thinks a reasonable default value should be. It doesn't know if what you're eating is mostly butter and cheese.
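
As a toy illustration of that single-default-entry behaviour (values invented):

```python
# One default database row per recognized dish name; the actual recipe
# never enters the calculation. Sample value only.

FOOD_DB_KCAL_PER_100G = {"mashed potatoes": 88}

def log_meal(recognized_dish, grams):
    return FOOD_DB_KCAL_PER_100G[recognized_dish] * grams / 100

# Plain mash and butter-and-cheddar-loaded mash both classify as
# "mashed potatoes", so both log the same value for a 250 g serving.
print(log_meal("mashed potatoes", 250))  # 220.0 either way
```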

How useful a vague and not particularly accurate nutrition profile really can be is an open question, but it seems to be a popular feature for smartwatches.

[–] [email protected] 0 points 10 months ago (2 children)

I see what you mean, and while you raise a few excellent points, you seem to forget that a human looking at mashed potatoes has far more data than a computer looking at an image.

A human gets data about smell, temperature, texture, and weight in addition to a simple visual impression.

This is why I picked the book/letter example: I wanted to reduce the variables available to a human, to get closer to what a computer has from a photo.

[–] [email protected] 7 points 10 months ago* (last edited 10 months ago) (1 children)

It needn't be exact. A ballpark calorie/sugar estimate that's 90% accurate would be sufficient. There's some research that suggests that's possible: https://arxiv.org/pdf/2011.01082.pdf

[–] [email protected] 1 points 10 months ago (1 children)

But what use would it be then? You wouldn't be able to compare one potato to another; both would register the same values.

[–] [email protected] 3 points 10 months ago (1 children)

I think the use case is not people doing potato studies but people who want to lose weight and need to know the number of calories in the piece of cake that's offered at the office cafeteria.

[–] [email protected] 1 points 10 months ago (1 children)

And that means the feature is useless; there are so many things in a cake that can't be seen from a simple picture.

And if it is just a generic "cake" value, it will show incorrect data.

[–] [email protected] 2 points 10 months ago

The paper I linked earlier disagrees.

[–] [email protected] 1 points 10 months ago (1 children)

You are correct, but you are speaking for yourself and not, for example, for the disabled community, who may lack senses or the capacity to calculate a result. While AI keeps improving its capabilities, they are the first to benefit.

[–] [email protected] 1 points 10 months ago

I get what you are saying, but this specific device has no future.

[–] [email protected] 2 points 10 months ago (1 children)

If I had a big list or directory of well-known books and how many times s, q, d, and r appear in them, then sure, I could make a very good estimate just from looking at the cover, with a slight variance depending on the edition. Almost like how a specific type of food will likely have a certain amount of protein, fibre, etc., with slight variations based on how the cook prepared the food.
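
In code, that directory idea is a plain lookup, exactly like a nutrition table (titles and counts invented for illustration):

```python
# Someone counted the letters once per edition; afterwards you only
# look the title up. Counts below are invented.

LETTER_DIRECTORY = {
    "Moby-Dick": {"s": 63000, "q": 1100, "d": 43000, "r": 55000},
    "Dracula":   {"s": 52000, "q":  900, "d": 36000, "r": 46000},
}

def counts_from_cover(title):
    """No opening the book; just read the precomputed entry."""
    return LETTER_DIRECTORY[title]

print(counts_from_cover("Dracula"))  # {'s': 52000, 'q': 900, 'd': 36000, 'r': 46000}
```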

[–] [email protected] 1 points 10 months ago (1 children)

But then you have opened the book, missing the point.

[–] [email protected] 1 points 10 months ago (1 children)

I didn't open the book; someone else looked inside it and wrote the counts down for me to read when needed, just like someone would enter the data for a program to look up when asked.

[–] [email protected] 1 points 10 months ago (1 children)

That changes nothing; you had the book inspected and got the data.

[–] [email protected] 2 points 10 months ago (1 children)

I think you're missing what I'm trying to say.

[–] [email protected] 1 points 10 months ago (1 children)

No, you are hung up on trying to read the book without actually reading it.

That breaks the puzzle: the device would not be able to analyze the inside of an item of food, since it only has a picture of the outside, and so it can only use highly generic data based on what it can assume from that image.

[–] [email protected] 2 points 10 months ago (1 children)

Re-read the first one I sent.

You can get a pretty good generalisation if you know what the food is. How do you think current apps for tracking nutrition work? All this will do is try to figure out what the food is from the picture rather than the user typing it in. With most foods, you can tell what they are without "looking inside". I'm pretty sure there are apps that do that now; this isn't something new and groundbreaking.

And for nutrition tracking you don't need to be 100% exact, because you can't be 100% exact even if you do know the ingredients and how much of each one. Everything always has a variance. This method doesn't need to be perfect to meet the needs of most people who will use it.

[–] [email protected] 2 points 10 months ago

I agree that you can get a generic nutrition value from a photo of a simple fruit or vegetable, but a pie or cake contains so much stuff that looks identical to other stuff that any photographic analysis is useless.

So yes, you can get some idea of the nutrition of some foods, but with accuracy way too low to be useful.

[–] [email protected] 6 points 10 months ago

You have to talk aloud so that you're included in the distinct lack of privacy this thing has.