The one colleague using AI at my company produced CUDA code with so many memory leaks that it took two expert developers to fix. LLMs produce code based on vibes rather than on language semantics and proper coding practices. Maybe that would be OK in a more forgiving high-level language, but I don't trust them at all for low-level languages.
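To make the failure mode concrete, here's a hypothetical minimal sketch (not the actual code from that incident) of the pattern we kept finding: a helper that allocates a device buffer on every call and never frees it, so GPU memory drains a little more each time it runs.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Trivial kernel: double every element in place.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

// The leaky pattern: a fresh device allocation on every call, never freed.
void leaky_scale(float *host, int n) {
    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    // BUG: missing cudaFree(dev) -- leaks n * sizeof(float) bytes per call.
    // The one-line fix: cudaFree(dev);
}

int main() {
    float data[4] = {1.f, 2.f, 3.f, 4.f};
    for (int call = 0; call < 3; ++call)
        leaky_scale(data, 4);   // each iteration strands 16 bytes on the GPU
    printf("%f\n", data[0]);    // 8.0 after three doublings
    return 0;
}
```

Running it under `compute-sanitizer --leak-check full` flags the unfreed allocations, but you still need someone who knows to reach for that tool.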
Who's said bye to DVDs?
Because it reports sources known to be unreliable (like Jerusalem Post and EuroNews) as Highly Trustworthy
> without compromising on visual fidelity
But it does compromise. Netflix has the worst banding issues in low-light scenes of any of the streaming services I've tried. It's hard not to notice and it's very annoying.
Yeah, Startpage and Ecosia too.
"Aren't I", as in "I'm still going with you, aren't I?", which, when uncontracted, becomes "are I not?" It should be "ain't I" since "ain't" is a proper contraction for "amn't", but there's been an irrational suppression of "ain't".
That is the neutral answer. It's objectively and demonstrably correct.
to join lemmy.ml you need to copy and paste a paragraph from a communist text, so, no
Maybe the soldiers were landlords
Straw man
I'm not trained in formal computer science, so I can't evaluate the quality of this paper's argument, but there's a preprint out that claims to prove that current computing architectures will never be able to advance to AGI, and that improvements, rather than accelerating, will only slow down, because the resources needed for each incremental advance grow exponentially (the authors frame it as an NP-hard problem). That doesn't prove LLMs are the end of the line, but it does suggest that further improvements are likely to be marginal.
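To give a back-of-the-envelope sense of what "exponential increase in resources" would mean (my own illustration, not the paper's math): suppose each fixed increment of capability multiplies the required resources by some constant b > 1.

```latex
% Illustration only (assumed notation, not from the paper): R(k) is the
% resource cost of the k-th fixed capability increment, growing by a
% constant factor b > 1 per step.
\[
  R(k) = R_0\, b^{k}
  \quad\Longrightarrow\quad
  \sum_{j=0}^{n-1} R(j) \;=\; R_0\,\frac{b^{n}-1}{b-1} \;<\; R(n)
  \quad \text{for } b \ge 2,
\]
% i.e. for b >= 2 the next increment alone costs more than all previous
% increments combined, so gains flatten even as spending keeps growing.
```

Whether LLM scaling actually behaves like that is exactly the paper's claim, and exactly what I can't verify myself.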
Reclaiming AI as a theoretical tool for cognitive science