this post was submitted on 22 Jun 2024
675 points (98.1% liked)

Technology

[–] [email protected] 8 points 4 months ago (2 children)

AI did boom, but people don't realize the peak happened a year ago. Now all we have is latecomers with FOMO. It's gonna be all incremental gains from here on.

[–] [email protected] 3 points 4 months ago* (last edited 4 months ago)

I think the true use case for these AI technologies is yet to come. What most people are doing with the "AI" tools available today is just playing around. But working with personal computers could change fundamentally in the coming years.

[–] [email protected] 1 points 4 months ago (1 children)

> AI did boom, but people don't realize the peak happened a year ago.

A simple control algorithm like "if temperature > LIMIT then turnOffHeater()" is AI, albeit an incredibly limited one.
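For illustration, that thermostat rule could be written as a complete program. This is a minimal sketch; the names `LIMIT` and `heater_should_run` are mine, not from the comment:

```python
LIMIT = 22.0  # hypothetical target temperature in °C

def heater_should_run(temperature: float) -> bool:
    """The entire 'AI': one rule tying a sensor reading to an action."""
    # The rule encodes a real causal relationship: heating raises temperature.
    if temperature > LIMIT:
        return False  # too warm: turn the heater off
    return True       # otherwise keep heating

print(heater_should_run(25.0))  # False
print(heater_should_run(18.0))  # True
```

The whole "intelligence" is one comparison, yet it acts on the world based on a model of it, which is the point being made.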

LLMs are not AI. Please don't parrot marketing bullshit.

The former has an intrinsic understanding of a relationship grounded in reality; the latter has nothing of the sort.

[–] [email protected] 2 points 4 months ago (1 children)

I can see what you're getting at: LLMs don't necessarily solve a problem, they just mimic patterns in data.

[–] [email protected] 3 points 4 months ago

That is indeed exactly my point. LLMs are just a language-tailored expression of deep learning, which can be incredibly useful, but should never be confused with any kind of intelligence (i.e. the ability to draw logical conclusions).

I appreciate that you see my point and admit that it makes some sense :)

Example where I think pattern recognition by deep learning can be extremely useful:

  • re-check medical imaging data of patients who have already been screened by a doctor, and flag some scans for review by a second doctor. This could improve the chances of e.g. early cancer detection, without a real risk of false positives reaching patients, because again, a real doctor examines the flagged results in detail before a patient is even alerted to a potential diagnosis
  • pre-filter large amounts of data for potential matches, e.g. searching for exoplanets by certain transit patterns (Planet Hunters lets humans do this as crowdsourcing)
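Both use cases above boil down to the same pattern: a model assigns scores, and only items above a threshold go to a human. A trivial sketch, with hypothetical item names and scores, and assuming some trained model has already produced the scores:

```python
def pre_filter(scores: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag items whose model score crosses a threshold for human review.

    'scores' maps an item id to a match/anomaly score from some trained
    model (assumed here; the model itself is out of scope).
    """
    return [item for item, score in scores.items() if score >= threshold]

# Hypothetical scores for three medical scans:
scores = {"scan_001": 0.93, "scan_002": 0.12, "scan_003": 0.85}
flagged = pre_filter(scores)
print(flagged)  # ['scan_001', 'scan_003'] — only these go to a second doctor
```

The value is purely in reducing the human workload; the human still makes every actual decision, which is why the false-detection risk stays low.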

But what I'm afraid is happening with people who don't see why a very simple algorithm is already AI, yet do consider LLMs to be AI, is that they have mentally decided to call AI only whatever seems "AGI" or "human-like". They mistake the pattern output of LLMs for a conscious being, and that is incredibly dangerous in terms of trusting the answers LLMs give.

Why do I think they subconsciously imply (self-)awareness / consciousness? Because refusing to count a control mechanism like a simple room thermostat as (very limited) AI means viewing it as "too simple" to be AI. A person with that view draws a qualitative distinction between control laws and "AI", where a quantitative distinction between "simple AI" and "advanced AI" would be appropriate.

And such a qualitative distinction, one that elevates a complex word-guessing machine to "intelligence", can only be made by people who actually believe there is understanding behind those word predictions.

That's my take on this.