I've met people with C++ Stockholm Syndrome, and I think their trajectory is different. There's no asymptotic approach toward zero; their appreciation just grows or stays steady, even decades into their career.
The logo and "join our Discord" text are more than half cut off for me. Is that the original cropping, or is it a client (Jerboa) issue?
Rust is extremely geared toward maintainability at the cost of other values such as learnability and iteration speed. Whether it succeeds is of course somewhat a matter of opinion (at least until we figure out how to do good quantitative studies on software maintainability), and it is of course possible to write brittle Rust code. But it does make a lot of common errors (including ones Go facilitates) hard or impossible to commit.
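For instance (a minimal sketch of my own, not anything from the thread): in Go, looking up a missing map key silently yields a zero value you can keep using, whereas Rust's `HashMap::get` returns an `Option`, and the compiler won't let you touch the value without deciding what to do in the missing case.

```rust
use std::collections::HashMap;

fn main() {
    let mut ages: HashMap<&str, u32> = HashMap::new();
    ages.insert("alice", 30);

    // This would not compile: `get` returns Option<&u32>, not u32.
    // let age: u32 = ages.get("bob");

    // The compiler forces an explicit decision about the missing case:
    match ages.get("bob") {
        Some(age) => println!("bob is {age}"),
        None => println!("no entry for bob"),
    }
}
```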
It also strongly pushes toward specific types of abstractions and architectural decisions, which is pretty unique among mainstream languages, and is of course a large part of what critics dislike about it (since that's extremely limiting compared to the freedom most languages give you). But the ability for the compiler to influence the high-level design and code organization is a large part of what makes Rust uniquely maintainability-focused, at least in theory.
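As a toy illustration of that (again my own sketch, not anything from the thread): holding a reference into a collection while mutating the collection is routine in most languages, but the borrow checker rejects it outright, which tends to push designs toward shorter borrows, indices, or different ownership boundaries.

```rust
fn main() {
    let mut names = vec![String::from("a"), String::from("b")];
    let first = &names[0]; // immutable borrow of `names`

    // Does not compile while `first` is still in use:
    // names.push(String::from("c")); // error: cannot borrow `names` as mutable

    println!("{first}");

    // Once the borrow ends, mutation is fine again; the compiler has
    // effectively dictated the order and structure of the code.
    names.push(String::from("c"));
    println!("{}", names.len());
}
```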
At the time, she called it a "compiler", but its function was more akin to what we'd call a linker or assembler today.
Do you mean Grace Hopper, who wrote the first assembler?
I agree that a symbolic representation of the splatters would probably be more interesting. The whole point is that random character sequences are often valid Perl, though, so changing the generation method wouldn't change that aspect.
Perl programs are, by definition, text. So "paint splatters are valid Perl" implies that there's a mapping from paint splatters to text.
Do you have a suggested mapping of paint splatters to text that would be more "accurate" than OCR? And do you really think it would result in fewer valid Perl programs?
No, you leapt directly from what I said, which was relevant on its own, to an absurdly stronger claim.
I didn't say that humans and AI are the same. I think the original comment, that modern AI is "smart enough to be tricked", is essentially true: not in the conscious sense in which humans are "tricked", but in a way similar to how humans can be misled or can misunderstand a rule they're supposed to be following. That's certainly a property of the complexity of the system, and the comment below it, to which I originally responded, seemed to imply that being "too complicated to have precise guidelines" somehow demonstrates that AI is not "smart". But of course "smart" entities, such as humans, share that exact property of being "too complicated to have precise guidelines", which was my point!
...I didn't say that it does.
> We can create rules and a human can understand if they are breaking them or not...
So I take it you are not a lawyer, nor any sort of compliance specialist?
> They aren't thinking about it and deciding it's the right thing to do.
That's almost certainly true; and I'm not trying to insinuate that AI is anywhere near true human-level intelligence yet. But it's certainly got some surprisingly similar behaviors.
Have you considered that one property of actual, real-life human intelligence is being "too complicated to have precise guidelines"?
Well, except "robust", unless you have very strict code standards, review processes, and static analysis.
(And arguably it's never elegant, though that's almost purely a matter of taste.)