Windex007
Considering he asked Twitter programmers to print out their pull requests, I'm not even sure he isn't cosplaying as a programmer
I completely agree that if there are tools that can allow a vehicle to "see" better than a human, it's absurd not to implement them. Even if Musk could make a car exactly as good as a human, that's a low bar. It isn't good enough.
As for humans: if you are operating a vehicle such that you could not avoid killing an unexpected person on the road, you are not operating the vehicle safely. In this case it's known as "overdriving your headlights": driving at a speed that precludes you from reacting appropriately by the time you can perceive a hazard.
Imagine if it wasn't a deer but a chunk of concrete that would kill you if struck at speed. Perhaps a boulder on a mountain pass, or a vehicle that has broken down.
Does Musk's system operate safely? No. The fact that it was a deer is completely irrelevant.
Yeah. I mean, I understand the premise, I just think it's flawed. Like, you and I as vehicle operators use two cameras when we drive (our two eyes). It's hypothetically sufficient in terms of raw data input.
Where it falls apart is that we also have brains which have evolved in ways we don't even understand to consume those inputs effectively.
But most importantly, why aim for parity at all? Why NOT give our cars the tools to "see" better than a human? I want that!
If you watch the video, the deer was standing on a strip of off-coloured pavement, and was also about the same length as the dotted lane line. Not sure how much colour information comes through at night on those cameras.
The point here isn't actually "should it have stopped for the deer", it's "if the system can't even see the deer, how could it be expected to distinguish between a deer and a child?"
The calculus changes dramatically between a deer and a child.
Yeah I got a pretty nauseating explanation of "The June 4th Incident"
To be fair, if they're driven by an LLM I would still expect it to be wrong.
I didn't realize that LoRa didn't care about carrier frequency, that's for sure the root of my faulty assumption! Thanks for taking the time to explain
I don't think it's "just" LoRa on 2.4 GHz, because if it were, existing LoRa devices wouldn't be able to decode the signals off the shelf, as the article claims. From the perspective of the receiver, the messages must "appear" to be in a LoRa band, right?
How do you make a device whose hardware operates in one frequency band emulate messages in a different band? I think that's the nature of this research.
And like, we already know how to do that in the general sense. For all intents and purposes, that's what AM radio does. Hacking a specific piece of consumer hardware to do it entirely in software is what makes it a research paper.
Sounds like they basically crafted some special messages such that it's nonsense at 2.4 GHz but smooths out to a LoRa message on a much, much lower (sub-GHz) frequency band.
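The AM-radio analogy can be sketched numerically. This is a toy illustration with scaled-down frequencies (a 2400 Hz carrier standing in for 2.4 GHz, a 20–60 Hz chirp standing in for a LoRa chirp), not the actual technique from the paper: a fast carrier whose amplitude envelope is a slow chirp, which a crude rectify-and-smooth receiver recovers at the lower frequency.

```python
import numpy as np

fs = 100_000                      # sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)    # 0.5 s of signal
f_carrier = 2400.0               # toy stand-in for the 2.4 GHz band

# Slow LoRa-like up-chirp (20 -> 60 Hz) used as the amplitude envelope.
f0, f1, T = 20.0, 60.0, t[-1]
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * T))
envelope = 0.5 * (1 + np.cos(phase))  # kept non-negative for AM

# "Transmitted" signal: looks like noise-ish high-frequency content,
# but its envelope carries the slow chirp.
tx = envelope * np.cos(2 * np.pi * f_carrier * t)

# Crude envelope detector: rectify, then moving-average low-pass
# (window of a few carrier periods smooths away the carrier).
win = int(fs / f_carrier) * 4
recovered = np.convolve(np.abs(tx), np.ones(win) / win, mode="same")

# The recovered waveform tracks the slow envelope closely.
corr = np.corrcoef(recovered, envelope)[0, 1]
print(f"correlation with original envelope: {corr:.3f}")
```

The real work in the paper is presumably getting an off-the-shelf 2.4 GHz chip to emit something whose low-frequency structure a stock LoRa receiver accepts, but the principle is the same: the information rides on structure at a frequency far below the carrier.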
Still calling it "The Chat Gippity" though
One of them is EXACTLY 8 ASCII characters, may not contain any English dictionary word, no repeating characters, at least 1 number, and at least 1 special character. Just obliterates the search space.
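A back-of-the-envelope sketch of how much those rules shrink the space an attacker has to search, assuming a 94-character printable-ASCII alphabet (10 digits, 32 specials) and ignoring the dictionary-word rule, which only cuts further:

```python
from math import perm

ALPHABET = 94    # printable ASCII (assumption)
NO_DIGIT = 84    # alphabet minus the 10 digits
NO_SPECIAL = 62  # alphabet minus the 32 specials
NEITHER = 52     # letters only

# Unconstrained 8-character passwords.
total = ALPHABET ** 8

# "No repeating character" = all 8 characters distinct:
# ordered selections without replacement.
no_repeat = perm(ALPHABET, 8)

# Inclusion-exclusion for "at least 1 digit AND at least 1 special"
# on top of the no-repeat constraint.
valid = no_repeat - perm(NO_DIGIT, 8) - perm(NO_SPECIAL, 8) + perm(NEITHER, 8)

print(f"unconstrained:     {total:.3e}")
print(f"policy-compliant:  {valid:.3e}")
print(f"fraction surviving: {valid / total:.2%}")
```

Since the policy is public, an attacker only has to enumerate the compliant subset, so every rule that "strengthens" the password also strictly shrinks the brute-force space.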