tee9000

joined 6 days ago
[–] [email protected] 1 points 4 minutes ago

Can't blame me for asking :)

Seems like tools that recognize AI content, used to keep synthetic input out of training data, would avoid model degradation.

If those tools are up to the task, then I would agree it probably doesn't hinder model training. Not sure what the reality is, or whether the need for those tools creates a barrier to entry for a significant portion of the people trying to build models from internet-crawled data.
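
For what it's worth, here's a minimal sketch of the filtering idea I mean. The detector is a toy placeholder (any real `score_ai_likelihood` would be a trained classifier, not a one-line heuristic):

```python
# Sketch: score each crawled document with an AI-content detector and
# keep only likely-human text for the training corpus.

def score_ai_likelihood(text: str) -> float:
    """Toy placeholder for a real detector: 0.0 = human-like, 1.0 = AI-like.
    Here, lower vocabulary diversity just scores higher; a real pipeline
    would use a trained classifier instead."""
    words = text.lower().split()
    if not words:
        return 1.0
    diversity = len(set(words)) / len(words)
    return 1.0 - diversity

def filter_crawled_docs(docs: list[str], threshold: float = 0.5) -> list[str]:
    """Drop documents the detector flags as likely AI-generated."""
    return [doc for doc in docs if score_ai_likelihood(doc) < threshold]

crawled = [
    "The quick brown fox jumps over the lazy dog near the river bank.",
    "good good good good good good good good good good good good",
]
print(filter_crawled_docs(crawled))  # keeps only the first document
```

Whether detectors like this actually work at scale is exactly the open question.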

[–] [email protected] 1 points 44 minutes ago (2 children)

By chance, is that based on other people's succinct social media comments on AI?

[–] [email protected] 9 points 11 hours ago (4 children)

Kind of like how genuine thoughts and opinions on complex topics get boiled down into digestible concepts for others to understand. People then perpetuate those concepts without understanding them, the meaning degrades, and we don't think anymore; we just repeat stuff in social media comments.

Side note... this article sucks and seems like it was AI-generated. Repetitive, and no author credit? It just says it was originally posted elsewhere.

Generative AI isn't in danger of being killed, as this clickbait title suggests... just hindered.

[–] [email protected] 3 points 2 days ago

Sorry, but a new Pico headset wouldn't do much of anything. A new Meta headset or a new Valve headset would give a bump.

It really needs better content. The hardware is almost there (in terms of cost and accessibility of the experience).

It's slowly getting there. But the current population of VR users is limited to the people who will play the same narrow set of experiences over and over, with hardware that's often cumbersome and loading screens that aren't super long but become your entire existence, which gets annoying.

Meta sucks, but they have been a boon for VR development.

[–] [email protected] 12 points 2 days ago* (last edited 2 days ago)

I really, truly suggest diversifying to news feeds without comment sections, like Techmeme, for a bit.

Increasing complexity is overwhelming, and there's plenty of bad shit going on, but a lot in your post is overblown.

Sorry for the long edit: I personally felt my mental health improve when I did this for six months or so. Because seriously, whatever disinformation is happening in American news is exhausting. We need to think whatever we want and then engage with each other once our thoughts are more individualized. Don't be afraid to ask questions that might seem like you are questioning some holy established Lemmy/Reddit consensus. If you are being honest about your opinions and aren't afraid to look dumb, then you are doing the internet a HUGE service. We need more dumb questions and vulnerability to combat the obsession with appearing to be an expert. So thank you for making a genuine post of concern.

[–] [email protected] 1 points 3 days ago

It's what ChatGPT calls it.

[–] [email protected] 1 points 3 days ago

Me. Moderate AI enthusiast and software engineer.

[–] [email protected] 1 points 4 days ago

I think it's the company's responsibility to incorporate the technology in a way that carries out their policy accurately. They can't just use a stock LLM from a vendor; they have to adapt it to their needs and get acceptable results. If an LLM isn't considerably more accurate than humans, then it's a disservice to their customers, and the company should be held responsible for that. There should be regulations that keep companies from using models that don't work.

[–] [email protected] 1 points 4 days ago (2 children)

I agree it's a bit of an ethical minefield to employ it for decisions that affect people's livelihoods. But my point is that if a company uses it to decide whether an insurance claim should be paid out, the model's ability to make those decisions isn't changed by what we call the steps it takes to reach a decision.

If an insurance company can dissect any particular claim decision and agree with each step the model took, then is it really different from having a person do it? Might it even be better in some ways? A real concern is that AI isn't perfect, and its mistakes are pretty hard to accept... that seems pretty dystopian, I get it. But if fewer mistakes are made and you can still appeal decisions, then maybe it's overblown?
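
To make that concrete, here's a hypothetical sketch of what "dissecting each step" could look like: the model's intermediate conclusions get recorded alongside the outcome, so a reviewer (or an appeals process) can check each one. All of the names and checks here are made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ClaimDecision:
    """Hypothetical audit record: every intermediate step the model took
    is stored with the final outcome, so a reviewer can examine the chain
    rather than just the payout decision."""
    claim_id: str
    steps: list[tuple[str, str]] = field(default_factory=list)
    approved: bool = False

    def add_step(self, check: str, conclusion: str) -> None:
        self.steps.append((check, conclusion))

# Illustration only: a reviewer walks the recorded steps and can agree
# with or dispute each one before the decision stands.
decision = ClaimDecision(claim_id="C-123")
decision.add_step("policy check", "water damage is covered under the policy")
decision.add_step("evidence check", "photos are consistent with a burst pipe")
decision.approved = True

for check, conclusion in decision.steps:
    print(f"{check}: {conclusion}")
print("approved:", decision.approved)
```

The point isn't the data structure; it's that the appeal process only works if the steps are actually recorded and reviewable.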

[–] [email protected] 3 points 4 days ago

Sorry, but that's not an explanation of your position; that's restating what you just said.

[–] [email protected] 2 points 5 days ago (6 children)

Why does AI that has a "reasoning" step become dangerous?

[–] [email protected] 4 points 5 days ago (1 children)

100-250 per month
