this post was submitted on 15 Sep 2023
Technology
you are viewing a single comment's thread
Anyone surprised by this wasn't paying attention. This is the "AI" apocalypse everyone has been wringing their hands over and dumbass executives have been salivating over. This is exactly the problem with LLMs: they produce very convincing-looking content, but not actually factual content. You need teams of fact checkers and editors to review all of their output if you care at all about accuracy.
As with software development, actually writing the stuff down is the easiest part of the work. If you already have someone fact-checking and editing... why do you need AI to shit out crap just for the writing? It would be easier to gather the facts first, fact-check them, then wrangle them through the AI if you don't want to hire a writer (plus another pass for editing).
LLMs look like magic at a glance, but people who think they are going to produce high-quality content (or code, for god's sake) are delusional.
Yeah. I'm a programmer. Everyone has been telling me that I'm about to be out of a job any day now because the "AI" is coming for me. I'm really not worried. It's way harder to correct bad code than it is to just throw it all away and start fresh, and I can't even imagine how difficult it's going to be to debug whatever garbage some "AI" has spewed out. If you employ a dozen programmers now, then once you start using AI to generate your code, you're going to need two dozen programmers to debug and fix its output.
The promise of "AI" (more accurately machine learning, as this is not AI) as far as code is concerned is as a sort of smart copy and paste: you take a chunk of code, say "duplicate this but with these changes", and then verify and tweak its output. As a smart refactoring tool it shows a lot of promise, but it's not like you're going to sit down, say "write me an app", and suddenly it's done. Well, unless you want Hello World, and even then I'm sure it would find a way to introduce a bug or two.
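To make that concrete, here's a minimal sketch of the "duplicate this but with these changes" workflow. The `llm_complete()` helper, the prompt wording, and the example loader are all hypothetical stand-ins, not any particular library's API:

```python
# Sketch of LLM-as-smart-copy-paste. llm_complete() is a hypothetical
# stand-in for a real model API call.

def llm_complete(prompt: str) -> str:
    # A real implementation would send the prompt to a model and return
    # its text; this stub just returns a placeholder so the sketch runs.
    return "<model output goes here>"

def duplicate_with_changes(source_code: str, changes: str) -> str:
    """Ask the model to copy a chunk of code with the requested tweaks.
    The result is a draft: a human still has to verify and tweak it."""
    prompt = (
        f"Duplicate the following code, but with these changes: {changes}\n\n"
        f"{source_code}\n\n"
        "Return only the modified code."
    )
    return llm_complete(prompt)

# Example: clone a CSV loader into a JSON loader, then review by hand.
original = '''
def load_csv(path):
    with open(path) as f:
        return [line.strip().split(",") for line in f]
'''
draft = duplicate_with_changes(original, "parse JSON instead of CSV")
print(draft)  # verify and tweak before this goes anywhere near a commit
```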
People have been saying programming would become redundant since the first 4GLs came out in the 1980s.
Maybe it'll actually happen some day... but I see no sign of it so far.
Yep, had this argument a bunch; the conversation always goes the same way.
Devil's advocate, though. With things like 4GLs, it was still all on the human to come up with the detailed spec. Best case scenario was that you worked very hard, wrote a lot of things down, generated the code, saw that it didn't work, and then ???. That "???" at the end was you as the programmer sitting alone in a room trying to figure out what a non-responsive black box might have wanted you to say instead.
It's qualitatively different if you can just talk to the black box as though it were a programmer. It's less of a black box at that point. It understands your language, and it understands the code. So you can start with the spec, but when something inevitably doesn't work, the "???" step no longer comes back to you figuring out, with no help, what you did wrong. You can ask it questions and make suggestions. You can run experiments. Today's LLMs hit the wall pretty quickly there, and maybe they always will. There's certainly the viewpoint that "all they do is model text, and they can't really learn anything".
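Here's a rough sketch of what that conversational "???" loop could look like, again with a hypothetical `llm_complete()` standing in for a real model call. The point is just that the error goes back into the conversation instead of back onto a lone programmer:

```python
# Sketch of a spec -> code -> run -> feed-the-error-back loop.
import subprocess
import sys
import tempfile

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for a real model API; returns a trivial
    # script here so the sketch runs end to end.
    return "print('hello from the model')"

def run_draft(code: str) -> tuple[bool, str]:
    """Execute the model's draft in a subprocess; capture any traceback."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=30
    )
    return proc.returncode == 0, proc.stderr

def spec_to_code(spec: str, max_rounds: int = 3) -> str:
    """Start from a spec; when a run fails, hand the error to the model."""
    code = llm_complete(f"Write a Python script that does the following:\n{spec}")
    for _ in range(max_rounds):
        ok, error = run_draft(code)
        if ok:
            break
        # This is the step a 4GL never had: ask the black box what went wrong.
        code = llm_complete(
            f"This script failed with:\n{error}\n\nFix this code:\n{code}"
        )
    return code

print(spec_to_code("print a greeting"))
```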
I think that viewpoint is fundamentally wrong. I'm a pretty solid programmer: I have a PhD in Computer Science, and I've worked as a software engineer and an architect throughout a pretty long career. And everything I've ever learned has basically been through language, through reading, writing, speaking, and listening to English and a few other languages. To say that I can learn what I've learned, but that it's fundamentally impossible for a robot to learn it, is to resort to mysticism. At some point we will have AIs that can do what I do today. I think that's inevitable.
Well, that particular conversation typically happens in relation to something like a business rules engine, or sometimes one of those drag-and-drop visual programming languages that everyone touts as letting you get rid of programmers (but that in reality just limit you to a really hard-to-work-with programming language). But there's a lot of overlap with the current LLM-based hype.
If we ever do get an actual AI, then yes, AI will probably end up writing most of the programs, although it's possible programmers will still exist in some capacity, maybe to create flow charts or something to hand to the AIs. But we're a long way off from true AI, so everyone acting like it's going to happen any day now is as laughable as everyone promising cold fusion was going to happen any day now back in the late '80s. Ironically, I think we're more likely to see workable fusion before we see true AI; some of the hot fusion experiments happening lately are very promising.