[–] [email protected] 1 points 4 hours ago (1 children)

> I’m excited for the future, but not as excited for the transition period.

I have similar feelings.

I discovered LLMs before the hype ever began (I used GPT-2 well before ChatGPT even existed), and the same with image generation models, just barely before that hype took off (I was an early closed beta tester of DALL-E).

And as my initial fascination grew, along with the interest of my peers, the hype took off. Suddenly, instead of being an interesting technology with some novel use cases, it became yet another technology for companies to slap into a product (in a way no user would ever enjoy) and show off to investors to boost the stock price.

Just as with the dot-com bubble you mentioned, I think a lot of good will come out of this. LLMs have been great for asking specialized questions where I need a better explanation, or for rewording/reformatting my notes, but I've never once felt the need to have my email client generate every email for me, as Google seems to think I'd want.

If we can just get all the over-hyped corporate garbage out and replace it with more common-sense development, maybe we'll actually see it used in a way that's beneficial for us.

[–] [email protected] 2 points 2 hours ago* (last edited 2 hours ago)

I started with natural language processing (small language models?) in school, which is a much simpler form of text generation that operates on whole words instead of the subword tokens modern LLMs use. So when modern LLMs came out, I basically registered them as, "oh, a better version of NLP," with all the associated limitations and issues, and that seems to be what they are.
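
To make that difference concrete, here's a rough sketch of word-level splitting versus subword tokens, assuming the Hugging Face transformers package and using the GPT-2 tokenizer purely as an example (the sample text and expected output are illustrative, not exact):

```python
# Rough illustration: word-level "tokens" vs. the subword tokens modern LLMs use.
# Assumes the Hugging Face "transformers" package is installed; GPT-2's BPE
# tokenizer is just one example of a subword tokenizer.
from transformers import AutoTokenizer

text = "Tokenization splits uncommon words into pieces."

# Old-school word-level view: split on whitespace.
print(text.split())
# ['Tokenization', 'splits', 'uncommon', 'words', 'into', 'pieces.']

# Modern LLM view: subword (BPE) tokens, so rarer words break into pieces.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
print(tokenizer.tokenize(text))
# Output looks something like ['Token', 'ization', 'Ġsplits', ...],
# where 'Ġ' marks a leading space.
```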

So yeah, I think it's pretty neat, and I can certainly see some interesting use cases, but it's really not how I want to interface with computers. I like searching with keywords, and I prefer the process of creation more than the product of creation, so image and text generation aren't particularly interesting to me. I'll certainly use them if I need to, but as a software engineer, I just find LLMs in all forms (so far) annoying to use. I don't even like full-text search in many cases and prefer regex searches, so I guess I'm old-school like that.

I'll eventually give in and adopt it into my workflow, probably before the average person does, but what I see and what the media hypes it up to be really don't match. I'm planning to set up a local Llama model, if only because I have the spare hardware for it and it's an interesting novelty.
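
For what it's worth, a minimal sketch of what that local setup might look like, assuming the llama-cpp-python bindings and an already-downloaded quantized GGUF model (the file path, model name, and settings below are placeholders, not recommendations):

```python
# Minimal local-inference sketch using the llama-cpp-python bindings.
# Assumes a quantized GGUF Llama model has already been downloaded;
# the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

output = llm(
    "Summarize these notes in three bullet points:\n- (notes go here)",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```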