anachronist

joined 1 year ago
[–] [email protected] 11 points 1 year ago (1 children)

Actually this makes sense from a corporate asshole perspective: the need for call center employees is seasonal. So you hire call center employees before the holiday season, and then the system auto-fires all the excess employees for missing their quota after the end of the season.

[–] [email protected] 2 points 1 year ago (1 children)

Amtrak is a US-government-owned corporation and is probably subject to Buy American rules. However, those rules govern where things are made, not the nationality of the contracting company. And companies can play plenty of games with how sub-components are counted and how those domestic-content percentages are calculated.

There aren't any (US) American passenger trainset manufacturers left. The main one operating in North America is Bombardier in Canada, but they've been producing horrible trains and screwing customers for decades. In the 2000s China Railway Rolling Stock Corporation (CRRC) moved into the US market and started beating Bombardier on trainset bids left and right, partly on price but also because Bombardier has been a really horrible vendor.

But CRRC hasn't been much better, and there are now a lot of questions about working with a Chinese state-run company even if the assembly happens in the US. So the US is still a ripe market for a vendor to conquer just by producing a reliable product at a reasonable price. American trainsets are, on average, really old and need to be replaced. This should be a big opportunity for Alstom, but not if they go to war with the FRA.

[–] [email protected] 5 points 1 year ago

Exactly. This seems like a case of the trainset manufacturer going to war against the regulator, with the operator and the public left in the lurch.

[–] [email protected] 13 points 1 year ago (5 children)

just order some trains from Europe. We know how to build them.

Alstom is a French multinational rolling stock manufacturer

Sounds like they did that.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

You can quote a work under fair use, but whether it's legal depends on your intent. You have to be quoting it for such uses as "commentary, criticism, news reporting, and scholarly reports."

There is no cheat code here. There is no loophole that LLMs can slide on through. The output of LLMs is illegal. The training of LLMs without consent is probably illegal.

The industry knows that its activity is illegal, and its strategy is not to win but rather to make litigation expensive, complex, and slow through tactics such as:

  1. Diffusion of responsibility: the companies compiling the list of training works, gathering those works, training on those works, and prompting the generation of output are all intentionally different entities. The strategy is that each entity can claim "I was only doing X; the actual infringement is when that guy over there did Y."
  2. Diffusion of infringement: so many works are being infringed that it becomes difficult, especially on the output side, to say who has been infringed and who has standing. What's more, even in clear-cut cases, for instance when I give an LLM a prompt and it regurgitates some nontrivial, recognizable copyrighted work, the LLM trainer will say you caused the infringement with your prompt! (see point 1)
  3. Pretending to be academic in nature, so they can wrap themselves in the thick blanket of affirmative defense that fair use doctrine affords the academy, and then, after the training portion of the infringement has occurred (insisting that was fair use because it happened in an academic context), "whoopsee-ing" it into a commercial product.
  4. Just being super cagey about the details of which training sets were actually used and how they were used. This kind of stuff is discoverable, but you have to get to discovery first.
  5. And finally, magic-brain-box arguments. These are typically some variation of "all artists have influences." It's a rhetorical argument that would be blown right past in court, but it muddies the public discussion and is useful to them in that way.

Their purpose is not to win. It's to slow everything down and to limit the number of infringed parties with the resources to pursue them. The goal is that if they can get LLMs to "take over" quickly, they become, you know, too big and too powerful to be shut down even after the inevitable adverse rulings. It's classic "ask for forgiveness, not permission" Silicon Valley strategy.

Sam Altman's goal in creeping around Washington is to get laws changed to carve out exceptions for exactly the kinds of things he is already doing. It's the same thing SBF was doing when he was creeping around Washington trying to get a law declaring his securitized Ponzi tokens to be commodities.

[–] [email protected] 10 points 1 year ago (4 children)

OpenAI is trying to argue that the whole work has to be similar to infringe, but that's never been true. You can write a novel, infringe only on page 302, and that's still a copyright infringement. OpenAI is trying to change the meaning of copyright; otherwise, the output of their model is oozing with various infringements.

[–] [email protected] 3 points 1 year ago (1 children)

Models don’t get bigger as you add more stuff.

They will get less coherent and/or "forget" the earlier data if you don't scale up the parameter count along with the training set.

There are two-gigabyte networks that have been trained on hundreds of millions of images

You can take a huge TIFF of an image, run it through JPEG with the quality cranked all the way down, and get a tiny file out the other side that is still a recognizable derivative of the original. LLMs are extremely lossy compression of their training set.
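The lossy-compression point can be sketched with a toy quantizer (illustrative only, not a real codec like JPEG): throw away the low bits of each sample and the stored data shrinks dramatically, yet the reconstruction is still a recognizable, degraded copy of the original.

```python
# Toy lossy compression: quantize 8-bit samples down to 2 bits, then
# reconstruct. Fine detail is gone for good, but the result is still a
# recognizable derivative of the original -- the same sense in which a
# low-quality JPEG (or, the argument goes, an LLM's weights) retains a
# distorted copy of its input. All names here are illustrative.

original = [12, 30, 64, 100, 130, 180, 220, 250]  # 8-bit samples

def compress(samples, bits=2):
    # Keep only the top `bits` bits of each sample (lossy quantization).
    shift = 8 - bits
    return [s >> shift for s in samples]

def decompress(codes, bits=2):
    # Scale back up to the 8-bit range; the discarded detail is unrecoverable.
    shift = 8 - bits
    return [c << shift for c in codes]

codes = compress(original)
restored = decompress(codes)
print(codes)     # [0, 0, 1, 1, 2, 2, 3, 3] -- a quarter of the bits
print(restored)  # [0, 0, 64, 64, 128, 128, 192, 192] -- coarse but recognizable
```

The restored values track the shape of the original sequence even though 75% of the information was discarded, which is the sense of "lossy derivative" being argued here.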

[–] [email protected] 1 points 1 year ago

Nah, you just accused human drivers of being meth-heads.

[–] [email protected] 12 points 1 year ago (5 children)

Yes, companies will definitely choose to pay more to keep people safe. 🙄

[–] [email protected] 1 points 1 year ago

They're all dead already.

[–] [email protected] 8 points 1 year ago

Considering how hard they're pushing for human testing, this isn't far off.

[–] [email protected] 47 points 1 year ago (2 children)

Animal testing is awful in the best case, agreed.

What this article and other articles about Neuralink allege is that the company blew right past the ethical guidelines the industry has in its desire to move fast. The industry standard is to avoid any "undue suffering": it's acknowledged that animals will suffer, but every effort must be made to minimize it.

What whistleblowers have exposed is that Neuralink started putting devices in primates' brains when they knew the devices wouldn't work and were deadly in predictable ways. For instance, a lot of monkeys had their brains cooked alive because the device put out too much waste heat. This was done because Elon was getting impatient and wanted to see progress in primate trials, so they just YOLOed a bunch of obviously deadly devices into a bunch of primate brains, needlessly torturing and killing all the animals in the process.
