1 fast 1 furious
theluddite
This is completely contentless. Not one new idea or insight. I'm pretty sure chatgpt wrote it. It even has the short sections with header titles and the repetitive conclusion. Trust me on this one, because if anyone here would know about clogging up the Internet with LLM generated blog spam, it's me.
I found it very difficult to find an objective injury rate for driverless cars, probably because there are five levels of automation, and many of them allow human error to come into play. The numbers are also self-reported by the driverless car companies themselves.
This is an important point, but I think you're interpreting it backwards. The current system relies on companies with a profit motive to do the testing internally, and on the rest of us to trust their honesty and openness in working with regulatory authorities to make that rollout safe. They violated that trust.
Also, fwiw, companies used to publish the injury-rate data from their internal testing, and by and large they were way worse than humans. In the last couple of years, they've mostly stopped reporting it. Afaik there doesn't exist a single shred of actual, empirical evidence that we can make self-driving cars better than humans, outside of faith in technological improvement. Maybe that faith is warranted, maybe it's not (I think it's not), but either way, safety must be the number one priority. If these companies can't be trusted to work collaboratively with safety authorities, then we should pull the plug, hard and fast.
Different article about the same "report" being discussed here: https://lemmy.ml/post/6609951?scrollToComments=true
No one should take any of these articles seriously. They all do the same thing: they purposefully reduce a complex task to generating some plausible text, then act shocked when the LLM generates plausible text. Then the media credulously reports what the researchers supposedly found.
I wrote a whole thing responding to this entire genre of AI hype articles. I focused on the "AI can do your entire job in 1 minute for 95 cents" style of article, but most of the analysis carries over. It's the same fundamental flaw -- none of this research is real science.
The purpose of a system is what it does. "There is no point in claiming that the purpose of a system is to do what it constantly fails to do." These articles about how social media is broken are constant. It's just not a useful way to think about it. For example:
It relies on badly maintained social-media infrastructure and is presided over by billionaires who have given up on the premise that their platforms should inform users
These platforms are systems. They don't have intent; there's no mens rea or anything. There is no point in saying that social media is supposed to inform users when it constantly fails to inform users. In fact, it has never informed users.
Any serious discussion about social media must accept that the system is what it is, not that it's supposed to be some other way, and is currently suffering some anomaly.
It's actually very confusing. I think the only good definition is that it's a cultural designation for any company that was focused on digital technology at its inception, which comes with a certain cultural package, and even that has some problems. Netflix is a tech company, not a movie studio, but HBO is not a tech company, even though it also has a streaming platform, and Netflix produces a lot of its own stuff, which is even more confusing because Netflix started as a company that would mail you DVDs. Amazon is a tech company, but WalMart is not, even though Amazon has many physical stores and WalMart does more and more of its business online.
Mechanical engineering can be a part of tech, but again I think it's a cultural designation before anything else at this point. Plenty of mechanical engineers work at Apple, which is definitely a tech company, but if you're a mechanical engineer working on an oil rig, that's not tech.
Adding to the confusion, Twitter is a tech company. At this point, what technology is Twitter really developing? Isn't technology about innovation? No doubt a platform of that size has substantial daily engineering problems to overcome, but like... is that really what we mean when we say technology? Plenty of non-tech companies also deal with the same thing.
I wrote a whole thing fleshing out my theory, if you're curious.
edit: just under this post in my feed is one about how Netflix is going to open physical stores.
Word. Personally, I really like St. Augustine's writings, which is a weird thing for an atheist and socialist living 1600 years later to say. I got really into his stuff during the pandemic for some reason. I also recommend some of Trotsky's writing about war, especially in the run-up to WW1 while they were trying to hold the second international together. Lots of really wonderful stuff about international solidarity, and the role of socialists in a time of capitalist war, that I think would do people good to read today, 100 years later. He also wrote some stuff once he was in power after WW1 that I personally found less cool, but interesting in a "no one can reign innocently" way.
People have been coming up with theories about this forever, from perspectives and time periods as diverse as Aristotle, St. Augustine, Gandhi, and Trotsky. You put a lot of very difficult questions in your post, but you didn't put forth criteria for what "justified" means to you. I think you're going to need to interrogate that before being able to even think about any of these questions. For example, is violence justified by better outcomes, or by some absolute individual right to fight your oppressor? Is justification a question of morality, legality, tactical value, or something else entirely?
If you like this, you should check out xenobots. They're this, but actually real. They're made of muscle cells, can move on their own, and apparently reproduce.
That is only true if you use capitalist metrics to measure poverty.