A1kmm

joined 2 years ago
[–] [email protected] 13 points 5 months ago

This is absolutely because they pulled the emergency library stunt, and they were loud as hell about it. They literally broke the law and shouted about it.

I think you are right about why the publishers picked them specifically to go after in the first place. I don't think they should have done the "emergency library".

That said, the publishers' arguments show they have an anti-library agenda that goes beyond just the emergency library.

Libraries are allowed to scan/digitise books they own physically, but they are only allowed to lend out as many copies as they physically own. Archive knew this and allowed infinite "lend-outs"; they even openly acknowledged in their announcement post that this was against the law.

The trouble is that the publishers are not just going after them for the infinite lend-outs. The publishers are arguing that they shouldn't be allowed to lend out any digital copies of a book they've scanned from a physical copy, even if they lock away the corresponding number of physical copies.

Worse, they got a court to agree with them on that, which is where the appeal comes in.

The publishers want physical copies to be lendable only as physical copies, and for digital lending they want libraries to purchase a separate subscription covering a set number of patrons and concurrent borrows, with a finite lifetime. This is all about growing publisher revenue. The publishers are not stopping at saying the number of digital copies lent out must not exceed the number of physical copies owned; they are going after archive.org's entire digital lending programme.

[–] [email protected] 4 points 5 months ago (3 children)

The best option is to run the models locally. You'll need a good enough GPU - I have an RTX 3060 with 12 GB of VRAM, which is enough to do a lot of local AI work.

I use Ollama, and my favourite model to use with it is Mistral-7B-Instruct. It's a 7-billion-parameter model optimised for instruction following, and it's usable with 4-bit quantisation, so the model takes about 4 GB of storage.

You can run it from the command line rather than a web interface: start the container for the Ollama server, then run something like docker exec -it ollama ollama run mistral to get an interactive prompt. The model performs pretty well - not quite as well on some tasks as GPT-4, but also not brain-damaged by attempts to censor it.
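
If you'd rather script it than use the interactive prompt, the Ollama server also exposes an HTTP API (port 11434 by default). A minimal sketch, assuming a stock Ollama install with the mistral model already pulled, using only the Python standard library:

    import json
    import urllib.request

    def ask(prompt, model="mistral", host="http://localhost:11434"):
        """Send one non-streaming generation request to the local Ollama server."""
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # return a single JSON object rather than a stream
        }).encode("utf-8")
        req = urllib.request.Request(
            f"{host}/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(ask("Summarise the plot of Hamlet in two sentences."))

Nothing leaves your machine; the request only goes to localhost.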

By default it keeps a local history, but you can turn that off.

[–] [email protected] 10 points 6 months ago (1 children)

Yes, but the information would need to be computationally verifiable for it to be meaningful - which basically means there is a chain of signatures and/or hashes leading back to a publicly known public key.
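
To make that concrete, here is a toy sketch of the fully identifying version - a chain of two signatures leading back to a publicly known organisation key - using the Python cryptography library (my choice of library for illustration, not something any of the schemes below depend on):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    # Setup, long before any leak: the organisation's public key is widely known,
    # and it has endorsed (signed) each staff member's individual public key.
    org_key = Ed25519PrivateKey.generate()
    org_pub = org_key.public_key()

    staff_key = Ed25519PrivateKey.generate()
    staff_pub_bytes = staff_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    endorsement = org_key.sign(staff_pub_bytes)

    # The leak: the staff member signs the document with their own key.
    document = b"internal memo: the widgets fail below -5 C"
    doc_signature = staff_key.sign(document)

    # The journalist's check: a chain of signatures leading back to org_pub.
    def verify_chain(org_pub, staff_pub_bytes, endorsement, document, doc_signature):
        try:
            org_pub.verify(endorsement, staff_pub_bytes)                  # link 1
            staff_pub = Ed25519PublicKey.from_public_bytes(staff_pub_bytes)
            staff_pub.verify(doc_signature, document)                     # link 2
            return True
        except InvalidSignature:
            return False

    print(verify_chain(org_pub, staff_pub_bytes, endorsement, document, doc_signature))

The catch, of course, is that the verifier now knows exactly which staff key signed - which is what the ring signatures described below avoid.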

One of the seminal early papers on zero-knowledge cryptography, from 2001, by Rivest, Shamir and Tauman (two of the three names behind RSA!), actually used leaking secrets as the main example application of Ring Signatures: https://link.springer.com/chapter/10.1007/3-540-45682-1_32. Ring Signatures work as follows: there are n RSA public keys of members of a group, known to the public (or at least to the journalist). You want to prove that you hold the private key corresponding to one of those public keys, without revealing which one. So you sign a message with a ring signature over the 'ring' made up of the n public keys, which requires only one of the n private keys. The journalist (or anyone else receiving the secret) can verify the signature, but gains zero knowledge about which of the n private keys was used.

However, the conditions for this might not exist. With more modern schemes, like zk-STARKs, more advanced things are possible. For example, emails these days are signed by mail servers with DKIM. Perhaps the leaker wants to prove to the journalist that they are authorised to send email through Boeing's staff-only mail server, without allowing the journalist - even collaborating with Boeing - to identify which Boeing staff member did the leak.

The journalist could provide the leaker with a large random number r1, and the leaker could come up with their own secret large random number r2. The leaker computes a hash H(r1, r2) and encodes that hash in a pattern of space counts between full stops (e.g. "This is a sentence. I wrote this sentence." encodes 3, 4 - the encoding would need to limit sentence sizes to allow encoding the hash while looking relatively natural), and sends a message that happens to contain that encoded hash - including to somewhere where it comes back to them. Boeing's mail servers sign the message with DKIM - but leaking that message as-is would obviously identify the leaker.

So instead the leaker uses a zk-STARK to prove that there exists a message m carrying a valid DKIM signature that verifies against Boeing's DKIM public key, and a random number r2, such that m contains the encoded form of H(r1, r2). Neither m nor r2 is revealed (that's the zero-knowledge part). The proof might also need to show that the encoded hash occurred before "wrote:" in the body of the message, to prevent an imposter tricking a real Boeing staff member into including the encoded hash in a quoted reply. Boeing and the journalist wouldn't know r2, so they would struggle to find a message containing the hash (which they don't know); they might try statistical analysis to find messages with unusual distributions of spaces per sentence, if the distribution forced by the encoding is too unusual.
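
The zk-STARK circuit itself is far too much for a comment, but the encoding step is easy to sketch. A toy illustration (the filler vocabulary and sentence template are made up, and real use would need text that reads naturally and would encode the whole hash, not just its first 48 bits):

    import hashlib

    # Toy vocabulary - real use would need sentences that read naturally.
    FILLER = ["please", "kindly", "also", "again", "today", "soon", "still"]
    BASE = ["We", "should", "review", "this"]      # 3 spaces before any filler

    def hash_digits(r1, r2, n=16):
        """First n base-8 digits (3 bits each) of H(r1, r2)."""
        h = hashlib.sha256(f"{r1}:{r2}".encode()).digest()
        bits = "".join(f"{b:08b}" for b in h)
        return [int(bits[3 * i:3 * i + 3], 2) for i in range(n)]

    def encode(digits):
        """Digit d -> one sentence containing (len(BASE) - 1 + d) spaces."""
        sentences = [" ".join(BASE[:2] + FILLER[:d] + BASE[2:]) for d in digits]
        return ". ".join(sentences) + "."

    def decode(text):
        """Recover the digits by counting spaces between full stops."""
        segments = [s.strip() for s in text.split(".") if s.strip()]
        return [seg.count(" ") - (len(BASE) - 1) for seg in segments]

    r1 = 123456789            # journalist's challenge
    r2 = 987654321            # leaker's secret
    digits = hash_digits(r1, r2)
    covert_text = encode(digits)
    assert decode(covert_text) == digits
    print(covert_text)

The message that actually gets DKIM-signed would contain that covert text somewhere in its body; the zk-STARK then proves knowledge of such a signed message and of r2 without revealing either.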

[–] [email protected] 2 points 7 months ago

I suggest having a threat model about what attack(s) your security is protecting against.

I'd suggest this probably doesn't give much extra security over a long, unique password for your password manager:

  • A remote attacker who doesn't control your machine but is trying to phish you will succeed equally well either way - success depends on your practices and on your password manager's safeguards against pasting credentials into the wrong place.
  • A remote attacker who does control your machine is also unaffected. Once the password database in your password manager is decrypted, they can take the whole thing, whether you used a password or a hardware key to decrypt it. The only difference is that they might need slightly more technical skill than copying the file and using a keylogger - but the biggest threats probably automate this anyway, so there is no material difference.
  • A local attacker who breaks in once to steal your hardware, and then tries to extract data from it, is either helped by the hardware key (if they can steal the key too, and you don't also use a password) or in a neutral position (they can't crack a database locked with a password, and they don't have the hardware key or can't bypass its physical security). The key could be an advantage for you if you can't remember a sufficiently strong passphrase but can physically protect the key (e.g. take it with you, when your threat model is people who take the database while you are away from it).
  • A local attacker who can make a surreptitious entry, and then come back later for the results, is, after that first visit, in basically the same position as a remote attacker who does control your machine.

That said, it might give you more convenience at the expense of slightly less security - particularly if your threat model is entirely about remote attackers: you would touch a button to decrypt instead of entering a long passphrase.

[–] [email protected] 3 points 8 months ago (1 children)

I thought the orbs were supposedly open source

No, they are proprietary as a whole. Parts of the hardware design are published, as are parts of the software that runs on them, but not the whole thing.

Fundamentally, Worldcoin is about 'one person, one vote', and anyone can create millions of fake iris images. The point of the orb is that it is 'blessed' hardware, using trusted computing (or, to use the term coined by the FSF, treacherous computing) and tamper detection to ensure that a central authority (namely Sam Altman's Worldcoin foundation) has signed off on the exact secret / proprietary software the orb runs to generate an identity.

They could alternatively have built a system that leverages government identity, using zero-knowledge proof of possession of a government-signed digital identity document. But I think their fundamental thesis is that they can be trusted as a central authority who could create millions of fake identities if they wanted to, but that governments cannot.

[–] [email protected] 4 points 8 months ago

One of the key tenets of keeping something computerised secure is 'Defence in Depth' - i.e. having multiple layers of defence, so that even if one layer is breached, the next layer (which you thought was redundant and unnecessary) prevents the attack.

Running a fully patched kernel and fully patched services / applications should protect you unless someone has a 0-day (i.e. not yet disclosed) exploit. Reducing the attack surface - minimising what services / applications are running, and using sandboxing software (firejail etc.) and firewalls to limit applications / services to the permissions they actually need - serves as another layer of defence. Disconnecting or physically blocking peripherals that might allow for spying is another layer again; it serves its purpose if all the other layers are breached.

[–] [email protected] 1 points 8 months ago

A lot of AI models are probably already trained on Reddit data. But apparently Spez isn't important enough to the world order to make the cut for compression into a 7B model. I asked my Mistral-7B-Instruct (4-bit quantised) local LLM:

>>> What can you tell me about someone called u/spez, a
... nd are they a nice person or horrible?
 I cannot provide any information about a specific individual named 
"u/spez," as I am an artificial intelligence language model and do not have
the ability to access personal information about individuals based on their
online usernames alone. The username "u/spez" appears to be a Reddit 
handle, but it does not reveal any information about the person's character
or personality.

Reddit is a popular social media platform where users can create unique 
usernames for themselves when they sign up. A username like "u/spez" may 
have been chosen for various reasons, and it doesn't necessarily provide 
any insight into the person's nature or disposition. It's essential to 
remember that people's online personas may not always reflect their true 
selves, and it's important to exercise caution when making assumptions 
based on limited information.

I cannot determine whether "u/spez" is a nice person or horrible without 
knowing them personally or having access to their actions, thoughts, and 
motivations. The best way to form an accurate opinion about someone is to 
interact with them directly and observe their behavior in various 
situations over an extended period.

>>> Is it possible u/spez is Steve Huffman? Can you tell me anything based on tha
... t?
 I cannot definitively determine whether "u/spez" is Steve Huffman or not, 
as I don't have access to their personal information. Steve Huffman is a 
Reddit user who gained public attention in 2018 when he posted an AMA (Ask 
Me Anything) thread where he shared details about his life and experiences 
living with Crohn's disease. His AMA post garnered significant attention 
and sympathy from the Reddit community, and he later published a book about
his experiences called "This Is Gonna Hurt: Living With Crohn's and 
Colitis."

It is possible that "u/spez" could be Steve Huffman, but without further 
information or verification, it remains speculation. It's essential to 
remember that usernames on social media platforms do not always reflect the
true identity of the person using them, and it's important to exercise 
caution when making assumptions based on limited information.

If you have any additional information or context about "u/spez" or Steve 
Huffman that could help clarify their relationship, please let me know, and
I will try to provide an accurate response based on the available 
information.

[–] [email protected] 10 points 9 months ago (4 children)

How to work out which instance(s) are doing this, if someone does: a Lemmy instance doesn't have to send the same voting data to every instance - it could send different votes to different instances (stock Lemmy federates the same thing consistently, but there is no reason a modified Lemmy designed to catch someone doing this has to), encoding a signal into the voting pattern. Then just check which signal shows up. If the site averages several instances, with enough signal you could decompose a linear combination (e.g. an average) of the different patterns back into its constituent parts.
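
A toy sketch of the detection step in that 'linear combination' scenario (made-up instance names and vote counts; this is the modified-Lemmy behaviour described above, not stock Lemmy): give each suspect instance its own reproducible ±1 perturbation pattern, then correlate the observed averaged scores against each pattern.

    import random

    N_POSTS = 2000
    INSTANCES = ["alpha.example", "beta.example", "gamma.example", "delta.example"]

    def watermark(instance):
        """A reproducible +/-1 vote perturbation per post, unique to each instance."""
        rng = random.Random(instance)              # seeded on the instance name
        return [rng.choice((-1, 1)) for _ in range(N_POSTS)]

    marks = {inst: watermark(inst) for inst in INSTANCES}

    # The vote counts every instance would see without any watermarking.
    rng = random.Random(0)
    true_scores = [rng.randint(0, 50) for _ in range(N_POSTS)]

    # Suppose the vote-exposing site averages what it sees from two of the instances.
    leaking = ["beta.example", "delta.example"]
    observed = [
        sum(true_scores[i] + marks[inst][i] for inst in leaking) / len(leaking)
        for i in range(N_POSTS)
    ]

    # Detection: correlate the residual (observed minus true) with each pattern.
    residual = [observed[i] - true_scores[i] for i in range(N_POSTS)]
    for inst in INSTANCES:
        corr = sum(r * w for r, w in zip(residual, marks[inst])) / N_POSTS
        print(f"{inst:15s} {corr:+.2f}")   # ~+0.5 for the leaking instances, ~0 otherwise

With a few thousand posts, the contributing instances stand out clearly while the others correlate at roughly zero.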

[–] [email protected] 10 points 9 months ago

Probably more likely to be surveillance of Snapchat.

[–] [email protected] 22 points 9 months ago (1 children)

requires trusting a company not to fuck with you behind the scenes

The point of this cryptography is that you don't have to trust the company implementing it not to do that, as long as you trust the software doing the retrieval.

[–] [email protected] 10 points 10 months ago* (last edited 10 months ago)

I wonder if their notice is not absolute nonsense. They talk about breaches of their terms of service, which I think can be found here: https://go.he.services/tc/V1/en_GB/tc.html

The terms of service do purport to prohibit 'reverse engineering' of the app, which I think the developer receiving the notice may have done to understand the protocol between Haier's service and the app. However, it looks like the developer is in Germany, and did the reverse engineering for the purpose of creating something that, in a way, competes with the app. According to https://www.twobirds.com/en/insights/2020/germany/vertraglicher-ausschluss-von-reverse-engineering, contractual provisions in Germany designed to prevent reverse engineering to create a competing independent program after the original is already available to the public are not valid.

Maybe they are saying that the developer is unlawfully interfering with their business by inducing others to breach the contract. However, the terms of service don't appear to prohibit connecting to Haier's services from a competing app (at least I can find nothing in them that does).

They don't really clearly define what their problem / claimed cause of action is. Maybe this is just an intimidation tactic against something they don't like, but they have no real legal case - in which case perhaps the community around it could band together to create a legal defence fund, and have Haier laughed out of court.

Disclaimer: Not intended as legal advice.

Edit: And better yet would be if they could find a way to intercept the traffic between the devices and Haier and replace Haier in that protocol. Then there is no option for Haier to try to restrict who can use the servers on their side. I assume the devices have a set of Certificate Authorities they trust, and it is not possible to get a trusted certificate without modifying the device somehow though.

[–] [email protected] 7 points 10 months ago

I'd suggest not buying anything from Haier. I had a fridge from them, and it barely lasted 5 years. I used their official service programme to try to get it fixed (so as to get it sorted without a third-party repairer and the manufacturer each blaming the other), and even the person they sent out (who didn't work exclusively for Haier, but was part of their repair programme) recommended getting another fridge - and making the next one a brand other than Haier.

The fact that they are now claiming that letting consumers control their own appliances harms the company just shows how out of touch they are with what their consumers want - and definitely reaffirms to me that this is not a brand worth buying.
