CodeInvasion

joined 1 year ago
[–] [email protected] 19 points 3 months ago (2 children)

You do realize that every posted on the Fediverse is open and publicly available? It’s not locked behind some API or controlled by any one company or entity.

Fediverse is the Wikipedia of encyclopedias and any researcher or engineer, including myself, can and will use Lemmy data to create AI datasets with absolutely no restrictions.

[–] [email protected] 3 points 4 months ago

Agreed.

Nevertheless, the Federal regulators will have an uphill battle as mentioned in the article.

Neither "puffery" nor "corporate optimism" counts as fraud, according to US courts, and the DOJ would need to prove that Tesla knew its claims were untrue.

The big thing they could get Tesla on is the safety record for autosteer. But again there would need to be proof it was known.

[–] [email protected] 4 points 4 months ago (2 children)

I am a pilot and this is NOT how autopilot works.

There is some autoland capabilities in the larger commercial airliners, but autopilot can be as simple as a wing-leveler.

The waypoints must be programmed by the pilot in the GPS. Altitude is entirely controlled by the pilot, not the plane, except when on a programming instrument approach, and only when it captures the glideslope (so you need to be in the correct general area in 3d space for it to work).

An autopilot is actually a major hazard to the untrained pilot and has killed many, many untrained pilots as a result.

Whereas when I get in my Tesla, I use voice commands to say where I want to go and now-a-days, I don’t have to make interventions. Even when it was first released 6 years ago, it still did more than most aircraft autopilots.

[–] [email protected] 3 points 6 months ago

"If it wasn't hard, it wouldn't be worth doing"

[–] [email protected] 4 points 7 months ago

I'm convinced that we should use the same requirements to fly an airplane as driving a car.

As a pilot, there are several items I need to log on regular intervals to remain proficient so that I can continue to fly with passengersor fly under certain conditions. The biggest one being the need for a Flight Review every two years.

If we did the bare minimum and implemented a Driving Review every two years, our roads would be a lot safer, and a lot less people would die. If people cared as much about driving deaths as they did flying deaths, the world would be a much better place.

[–] [email protected] 8 points 10 months ago (1 children)

I’m an AI researcher at one of the world’s top universities on the topic. While you are correct that no AI has demonstrated self-agency, it doesn’t mean that it won’t imitate such actions.

These days, when people think AI, they mostly are referring to Language Models as these are what most people will interact with. A language model is trained on a corpus of documents. In the event of Large Language Models like ChatGPT, they are trained on just about any written document in existence. This includes Hollywood scripts and short stories concerning sentient AI.

If put in the right starting conditions by a user, any language model will start to behave as if it were sentient, imitating the training data from its corpus. This could have serious consequences if not protected against.

[–] [email protected] 1 points 11 months ago

Not quite, this was made with a ControlNet. A hybrid image wouldn't work as well as this does. But the underlying visual phenomena is the same.

[–] [email protected] 17 points 11 months ago

This is done by combining a Diffusion model with ControlNet interface. As long as you have a decently modern Nvidia GPU and familiarity with Python and Pytorch it's relatively simple to create your own model.

The ControlNet paper is here: https://arxiv.org/pdf/2302.05543.pdf

I implemented this paper back in March. It's as simple as it is brilliant. By using methods originally intended to adapt large pre-trained language models to a specific application, the author's created a new model architecture that can better control the output of a diffusion model.

[–] [email protected] 12 points 11 months ago

I am a satellite software engineer turned program manager. This is not unexpected in this current environment, however the conditions that created the environment are abnormal.

This solar cycle is much stronger than past cycles. I'm on mobile, so I can't get a good screenshot, but you can go here to see this cycle and the last cycle, as well as an overlay of a normal cycle https://www.swpc.noaa.gov/products/solar-cycle-progression

As solar flux increases, the atmosphere expands considerably, causing more drag than predicted. During periods of solar minimum, satellites can remain in a very low orbit with minimal station keeping. However, at normal levels of solar maximum, 5 year orbits can easily degrade to 1 year orbits. Forecasters says we are still a year away from solar maximum, and flux is already higher than last cycle's all time high (which was also an anomalously strong cycle). So it will get worse before it gets better.

TLDR: Satellites are falling out of the sky because the sun is angy