I thought for so long about whether to get a 1440p UW or just a standard 1440p. In the end I opted for the standard one. Maybe one of these days I'll try it, but the price point just wasn't there imo.
AnonStoleMyPants
Man, I hate gamepads. If a game is clearly meant to be played with a pad and the kb+m is a garbage afterthought, I immediately uninstall. Can't stand them.
Wait, really? So you think Matrix is the ultimate form of secure and private "chat" communities? Because if it is not then it is a compromise.
This Lemmy instance sure as hell is not the most private and secure.
Yeah same here. I like fake meat. I mean, if it tastes good and has no animal parts in it, it goes into my mouth. It's not that complicated.
Try freezing it. Makes the texture a lot different.
That does not mean you need to log in. If you press continue it'll go to the website.
The same thing as with tooooooons of things: scale.
Nobody cares if one dude steals office supplies at work. Now, if everyone starts doing it, or if the single guy steals everything, then action is taken.
Nobody cares if a random person draws in the same style and with the same characters as you, but if they start to sell them, or god forbid, out-sell you, then there is a problem.
Nobody cares (except police I guess) if a random driver drives double the speed limit and annoys people living next to the road on the weekends, but when tons of people do it, you get speed bumps.
Nobody cares if a few people pirate movies, but when it goes mainstream and companies notice that money might be being lost, then you get whatever we have now.
Nobody cares if the mudhill behind your house erodes a bit and you get mud on your shoes. Have a bunch of that erode and you realise the danger...
You have been fine-tuning your own writing style for a decade and random schmuck starts to write similarly, you probably don't care. No harm done. Now, get an AI to write 10 000 books in a weekend and someone starts to sell them... well now you have a completely different problem.
On a fundamental level the exact same thing is happening, yet action is only taken after a certain threshold is stepped over.
Article here (from reader mode in Firefox)
Everyone’s favorite chatbot can now see and hear and speak. On Monday, OpenAI announced new multimodal capabilities for ChatGPT. Users can now have voice conversations or share images with ChatGPT in real-time.
Audio and multimodal features have become the next phase in fierce generative AI competition. Meta recently launched AudioCraft for generating music with AI and Google Bard and Microsoft Bing have both deployed multimodal features for their chat experiences. Just last week, Amazon previewed a revamped version of Alexa that will be powered by its own LLM (large language model), and even Apple is experimenting with AI generated voice, with Personal Voice.
Voice capabilities will be available on iOS and Android. Like Alexa or Siri, you can tap to speak to ChatGPT and it will speak back to you in one of five preferred voice options. Unlike current voice assistants out there, ChatGPT is powered by more advanced LLMs, so what you’ll hear is the same type of conversational and creative response that OpenAI’s GPT-4 and GPT-3.5 are capable of creating with text. The example that OpenAI shared in the announcement is generating a bedtime story from a voice prompt. So, exhausted parents at the end of a long day can outsource their creativity to ChatGPT.
Use your voice to engage in a back-and-forth conversation with ChatGPT. Speak with it on the go, request a bedtime story, or settle a dinner table debate. Sound on 🔊 pic.twitter.com/3tuWzX0wtS — OpenAI (@OpenAI) September 25, 2023
Multimodal recognition is something that’s been forecasted for a while, and is now launching in a user-friendly fashion for ChatGPT. When GPT-4 was released last March, OpenAI showcased its ability to understand and interpret images and handwritten text. Now it will be a part of everyday ChatGPT use. Users can upload an image of something and ask ChatGPT about it — identifying a cloud, or making a meal plan based on a photo of the contents of your fridge. Multimodal will be available on all platforms.
As with any generative AI advancement, there are serious ethics and privacy issues to consider. To mitigate risks of audio deepfakes, OpenAI says it is only using its audio recognition technology for the specific “voice chat” use case. Also, it was created with voice actors they have “directly worked with.” That said, the announcement doesn’t mention whether users’ voices can be used to train the model, when you opt in to voice chat. For ChatGPT’s multimodal capabilities, OpenAI says it has “taken technical measures to significantly limit ChatGPT’s ability to analyze and make direct statements about people since ChatGPT is not always accurate and these systems should respect individuals’ privacy.” But the real test of nefarious uses won’t be known until it’s released into the wild.
Voice chat and images will roll out to ChatGPT Plus and Enterprise users in the next two weeks, and to all users “soon after.”
A 30/70 water/IPA mix can dissolve materials that are not normally dissolvable in IPA alone. It is a cosolvent system, which changes the polarity of the mix and hence can make things miscible in it that normally are not.
Just FYI, if pure IPA is not working, the diluted version might!
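The mixing arithmetic for that tip can be sketched out. This is a hypothetical helper (not from the comment above): it computes how much stock IPA and how much water you need to hit a target IPA fraction by volume, accounting for the stock bottle not being 100% pure.

```python
# Sketch: volumes for a water/IPA cosolvent mix, assuming volumes
# add linearly (a simplification; real alcohol/water mixes contract slightly).

def mix_volumes(total_ml: float, ipa_fraction: float = 0.70,
                stock_purity: float = 0.99) -> tuple[float, float]:
    """Return (stock_ml, water_ml) so the final mix is ipa_fraction IPA.

    stock_purity is the IPA fraction of the bottle you start from,
    e.g. 0.99 for 99% IPA or 0.91 for common 91% rubbing alcohol.
    """
    stock_ml = total_ml * ipa_fraction / stock_purity
    water_ml = total_ml - stock_ml
    return stock_ml, water_ml

# 100 ml of a 30/70 water/IPA mix from a 99% IPA bottle:
stock, water = mix_volumes(100.0)
print(round(stock, 1), round(water, 1))  # 70.7 29.3
```

Note that with lower-purity stock (e.g. 91%) you need more stock and less added water, since the bottle already contains some water.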
Lots of plastics are attacked by acetone. If there are other plastic parts inside the actuator then you might just nuke them too.
Sure, not denying it. But the point was that you can leave the TV disconnected from the Internet and still use Netflix etc. You stream the content through the Chromecast when you need to.
Man, I hate these. They make water warm up instantly (unless vacuum insulated), and I could just use a single glass the whole day, or over multiple days.