From the pictures, the tank's only a couple meters from a tree itself. "supplement" would be a better word than "alternative" in the headline
Oops
That just means you can catch some sick air
I think it's worth noting that the OBD-II protocol required by law ONLY covers emissions-related parameters: knock, air/fuel ratio, throttle position, things like that. A lot of manufacturers expose data that's not emissions related (like transmission codes) through the OBD-II port, but over a different/extended proprietary protocol that requires a proprietary (very expensive) scanner.
Basically, I think OBD-II should be expanded to cover these other systems, and EV systems as well. That would standardize EV diagnostics and non-emissions-related ICE diagnostics too, which would be a boon for repairability.
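For context, the standardized part (SAE J1979 "mode 01") really is just a small set of emissions-related PIDs with published scaling formulas. A minimal sketch of decoding a few of them; the example response bytes are made up for illustration:

```python
# Decode a few standardized OBD-II (SAE J1979) mode 01 PIDs.
# Scaling formulas are from the public standard; everything beyond
# these PIDs is where the proprietary protocols take over.

def decode_pid(pid: int, data: bytes) -> float:
    """Decode the response data bytes for a handful of mode 01 PIDs."""
    a = data[0]
    if pid == 0x0C:                       # engine RPM: ((A*256)+B)/4
        return (a * 256 + data[1]) / 4
    if pid == 0x11:                       # throttle position: A*100/255 (%)
        return a * 100 / 255
    if pid == 0x05:                       # coolant temp: A-40 (deg C)
        return a - 40
    raise ValueError(f"PID {pid:#04x} not handled in this sketch")

# Example: response bytes A=0x1A, B=0xF8 for the RPM PID
print(decode_pid(0x0C, bytes([0x1A, 0xF8])))  # 1726.0
```

Note this only handles the decode step; actually getting the bytes off the port (ELM327 adapter, CAN frames, etc.) is a separate layer.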
Plus if you're young it builds a credit score. Get a credit card, pretend that it's your debit card. Set up automatic payments.
I'm only finishing the class now and it's pretty wild to hear "We're only learning this model to help you understand a fundamental concept, the model itself is ancient and obsolete", and said model came out in 2018. Wild
I'm with you that LLMs don't work like the human brain. They were built for a very specific task. But that's a model architecture problem (and being limited to one dimension of awareness, arguably two if you count "self attention", is another limiting factor in its depth of understanding; see my post history if you want). I wouldn't bet against us making it to AGI, however we define it, through incremental improvements over the next decade or two.
Actually, most models are already doing some form of filtering AFAIK, though I don't know how comparable it is to our sensory system. CNNs, for example, work roughly the way our eyes do. The short of it is that image data goes through a few layers, each node in the next layer collecting the aggregate data of several nodes from the last (usually a 3x3 grid). Each of these layers has filters that determine the outputs of its nodes, which need to be trained to collectively recognize specific patterns in the data, like a dog. Source: lecture notes and homework from my applied neural networks class
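The 3x3 aggregation step above can be sketched in a few lines. This is a toy "valid" convolution with one hand-picked filter (a vertical-edge detector for illustration; in a real CNN these weights are learned during training):

```python
# Each output node sums a 3x3 neighborhood of the previous layer,
# weighted by the filter. No padding, single channel, single filter.

def conv3x3(image, kernel):
    """2D convolution of a grid of numbers with one 3x3 filter."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(3) for dj in range(3))
            row.append(s)
        out.append(row)
    return out

# Hand-picked vertical-edge filter (learned in practice, not hand-picked).
edge = [[1, 0, -1],
        [1, 0, -1],
        [1, 0, -1]]

# 5x5 "image": left two columns dark (0), right three bright (1).
img = [[0, 0, 1, 1, 1] for _ in range(5)]
print(conv3x3(img, edge))  # nonzero magnitude where the edge sits
```

A real CNN stacks many of these layers (plus nonlinearities and pooling), so later layers respond to patterns of patterns, which is where the "recognizes a dog" behavior comes from.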
LLMs have no idea what a cookie is
The large language model takes in language, so it only understands things in terms of language. This isn't surprising. Personally, I've tasted a cookie. I've crushed one in my fist and watched it crumble, and I remember the sound. I've seen how they're made, and I've made them myself. It feels good when I eat one; apparently that's the dopamine. Why can't the LLM understand cookies the way I do? The most glaring difference is that it doesn't have my body. It doesn't have all of my different senses constantly feeding data into it, and it doesn't have a body with muscles to manipulate its environment and observe the results. I argue that we shouldn't assume human consciousness has a "special sauce" until our model's inputs and outputs are similar to our own, the model's been scaled/modified sufficiently, and it's still not sentient/sapient by our standards, whatever those are.
My problem with the Chinese room is that how it applies depends on scale. Where do you draw the line between understanding and executing a program? An atom bonding with another atom? A lipid snuggling next to a neighboring lipid? A single neuron firing to its neighbor? One section of the nervous system sending signals to another? One Homo sapiens speaking to another? Hell, let's go one further: one culture influencing another? Do we actually have free will and sapience, or are we just complicated enough, through layers and layers of Chinese rooms inside of Chinese buildings inside of Chinese cities inside of China itself, that we assume we do for practical purposes?
I mean, I think so?
A.) Do you have proof for all of these claims about what LLMs aren't, with definitions for key terms? B.) Do you have proof that these claims don't apply to yourself? We can't base our understanding of intelligence, artificial or biological, on circular reasoning and ancient assumptions.
It can't do a single thing without human input.
That's correct, which is why I said that ChatGPT isn't there yet. What are you without input, though? Is a human nervous system floating in a vacuum conscious? What could it have possibly learned? It doesn't even have the concept of having sensations at all, let alone vision, let alone the ability to visualize anything specific. What are you without an environment to take input from and manipulate/output to in turn?
The perceived quality of human intelligence is held up by so many assumptions, like "having free will" and "understanding truth". Do we really? Can anyone prove that? (Edit: this works the other way too. Assuming that we do understand truth and have free will - if those terms can even be defined in a testable way - can you prove that the LLM doesn't?)
At this point I'm convinced that the difference between an LLM and human-level intelligence is dimensions of awareness, scale, and further development of the model's architecture. Fundamentally, though, I think we have all the pieces.
Edit: I just want to emphasize, I think. I hypothesize. I don't pretend to know
On the flip side, you just taught me that the extra length can be wound up underneath!