Linux is still unable to catch up with NTFS when it comes to filename length, sadly. 255 bytes in an era of Unicode is ridiculous.
NTFS also has a 255 limit, but it's 255 UTF-16 code units, so for Unicode you get more out of it. That's a high price to pay for UTF-16, though: Windows is basically moving strings between UTF-16 and ASCII all the time, since most apps are ASCII while Windows is natively UTF-16. Every other modernly maintained OS does UTF-8, which "won" Unicode.
The fact that all major Unix (not just Linux) filesystems stick to 255 bytes says it's not a feature in demand.
I'd much rather have COW subvolume snapshotting and incremental backup of btrfs or zfs. Plus all the other things Linux has over Windows of course.
I think that's a biased way of putting it. The NTFS way is easy to understand and therefore to manage. What's more important is that ASCII basically means English only. I've seen enough of that kind of "discrimination" (stuff breaking, etc.) based on the language used in software and technology, and it should end for good.
UTF-8 is Unicode. UTF-8 characters can take more than one byte.
There are also encryption methods that slash the maximum length of each filename even further.
Of course UTF-8 is Unicode. The cool thing about UTF-8 is that it is ASCII, until it isn't. It covers all of Unicode, but doesn't add any bloat if you are just doing Latin characters. Plus UTF-8 will pass seamlessly through ASCII-only code: things that understand it do, and the rest just show patches of gibberish but otherwise keep working. It's a much better approach, with better legacy handling and more efficient packing for Latin languages, which is why it "won" out. UTF-16 pretty much only exists on Windows because it's legacy that Windows will find hard to escape.
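To make the packing point concrete, here's a quick Python sketch (the sample filenames are just made up for illustration):

```python
# Compare how many bytes the same filenames take in UTF-8 vs UTF-16.
# UTF-16-LE is used here because that's what NTFS stores (no BOM).
for name in ["report_2024.txt", "ようこそ追放者ギルドへ.epub"]:
    utf8 = name.encode("utf-8")
    utf16 = name.encode("utf-16-le")
    print(f"{name}: {len(name)} chars, "
          f"{len(utf8)} bytes in UTF-8, {len(utf16)} bytes in UTF-16")
```

The ASCII name costs half as many bytes in UTF-8 as in UTF-16, while the Japanese one costs a bit more (3 bytes per character instead of 2), which is exactly the trade-off being argued about here.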
LUKS is by far the most common encryption setup on Linux. It's done at the block layer and the filesystem doesn't know about it, so it has no effect on filename length, or anything else.
None of that helps with, or refutes, anything I've said above. But it does let you say the NTFS limit can effectively be around 1024 bytes. Just because you like what UTF-8 offers doesn't solve the hurdles with Linux's limits.
LUKS is commonly used, but it's not the only option.
Linux's VFS is where the 255 limit is hard. Some Linux filesystems, like ReiserFS, go way beyond it. If it were a big deal, it would be patched and widely spread. The magic of Linux is that you can try it yourself, run your own fork, and submit patches.
LUKS is the one to talk about as the others aren't as good an approach in general. LUKS is the recommended approach.
Edit: oh, and NTFS is 510 bytes. UTF-16 = 16 bits = 2 bytes per code unit, and 255 × 2 = 510.
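If anyone wants to check what their own system enforces, Python can ask the kernel directly on Linux (the "/" is just an example mount point; the answer is in bytes, not characters):

```python
import os

# NAME_MAX for the filesystem mounted at "/", as reported by the kernel.
# On ext4, XFS, btrfs, etc. this prints 255, and it counts bytes.
print(os.pathconf("/", "PC_NAME_MAX"))
```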
Well, it should probably go further and offer another kind of magic, where stuff works the way the user expects it to work.
As for submitting patches, it sounds like you're suggesting people play around with core parts responsible for filesystem operations. That advice is not going to work for everyone. Open source software is not ideal. It can be ideal in theory, but that's it.
It looks like there are enough use cases where some people would prefer not to use LUKS.
I have lived quite happily on pretty much only open source for over 12 years now, professionally and at home (longer at home). I put Debian alongside Wikipedia as an example of what humans can be.
There are no gatekeepers on who can do what where, only on who will accept the patches. Projects fork for all kinds of reasons, though even Google failed to fork the Linux kernel. If there is a good patch to extend the filename limit, it will get in. With enough pressure, maybe the core team of that subsystem will do it themselves.
Open source already won, I'm afraid. Most of the internet, from IoT to supercomputers, runs open source, and it has been that way for a while. If you use Windows, fine, but it is just a consumer end-node OS for muggles. 😉
If you set up a new install and say you want encryption, LUKS is what you get.
Does it look like I advocate for Windows? Nah.
Open source is great when it works. "If there is a good patch..." and "with enough pressure, maybe..." is the sad reality of it. Why should people need to apply pressure in order for Linux to start supporting features long available in filesystems it already supports? Why should I, specifically, spend time on it? Does Linux want to become an OS for everyone, or only for people experimenting with dangerous stuff that sometimes makes them lose data?
Don't get me wrong, Linux is good even now. But there is no need to actively deny points of possible improvement. When someone asks how great XFS is compared to the others, you shouldn't throw the word "exbibytes" around; you should first think about what problems people might have with it, especially if they want to switch from Windows.
And if I only want to encrypt some files? I need to create a volume specifically for that, right? Or I could just use something else.
Open source clearly works, given the scale and breadth of its use. That's the modern world, and its use is only increasing. This is a good thing for multiple reasons.
Unicode filename length clearly isn't as big an issue as you feel it is, or it would have been fixed. There is some BIG money that could be spent to fix this for the countries and companies that need Unicode.
How you encrypt depends on your aim. If your worry is losing filename length to encryption, there are ways around it. If it's read-only, you do a GPG tarball; LUKS if you want a live system. You can just create a file and LUKS-format it.
Then set it up, close it, and reopen it whenever you need it.
Basically the same as systemd-homed does for you: https://wiki.archlinux.org/title/Systemd-homed
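For anyone who hasn't done it, a rough sketch of that file-backed container (Python calling cryptsetup; the file name, mapper name, size and mount point are all just examples, and most of these steps need root):

```python
import subprocess

def run(*cmd):
    # Run each step and fail loudly instead of carrying on after an error.
    subprocess.run(cmd, check=True)

# Create a 1 GiB file to act as the encrypted container.
run("truncate", "-s", "1G", "container.img")

# Format it as LUKS, then open it as /dev/mapper/secret (asks for a passphrase).
run("cryptsetup", "luksFormat", "container.img")
run("cryptsetup", "open", "container.img", "secret")

# Put a filesystem inside and mount it somewhere.
run("mkfs.ext4", "/dev/mapper/secret")
run("mkdir", "-p", "/mnt/secret")
run("mount", "/dev/mapper/secret", "/mnt/secret")

# ... use it, then close it; reopening later is just "cryptsetup open" + mount again.
run("umount", "/mnt/secret")
run("cryptsetup", "close", "secret")
```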
But there are many ways. A good few filesystems offer folder/file encryption natively. Though I'd argue that's less secure.
I might have agreed with such statements 20 years ago, but not anymore. I can't count the times I've seen some piece of software, a game, a system, or a service literally brick itself when a use case involves non-ASCII, non-English, or non-Unicode characters, paths, or regions. Not Linux-related only or specifically, but it almost always looks and feels embarrassing. I've seen some related global improvements in Windows, NTFS, and some products, but all of that is still not enough in my opinion. The idea that people shouldn't need more than 255 bytes (or characters) sounds no different from that 640K RAM quote.
I doubt the Linux kernel bricks itself when filenames are too long, regardless of encoding. It doesn't do characters, just bytes; if there are too many bytes, they just get trimmed. At the user level above that, I can certainly believe it, on all platforms. The difference is that in the open world you can fix it and throw a patch. An embarrassing crash with a simple fix will get in. With closed products, well, maybe you can log it, maybe they will fix it, but you're in serfdom unless you have real money and other options.
The other thing that makes me think this can't be as big an issue as you say is that the example you gave still looks bloody long. Seems like you're doing it wrong if the filename is a sentence. It's a filename, not a filesentence.
This tiny, and seemingly silly, thing doesn't make Windows and NTFS any less laughable in 2024.
You aren't getting it.
It's not about bricking, it's about relying on "standards" (limitations, actually) that should be obsolete in 2024, in a multinational technology world. It's about the fact that they effectively limit how people from all around the world can use characters, words, names, etc. anywhere.
It's not about money, not about patches or developing them. It's about what users expect. They surely don't expect to be told "fix it yourself if you don't like it".
This is by no means a "big" issue, because it affects less than 1 percent of users, sure. Not many people hit the NTFS limit on Windows either, yet you can find thousands of places where people discuss that long-paths setting: people who need to overcome it, people who may even be grateful that such an option appeared in later Windows versions.
😒 Yep, that argument is useless. What's next, "hey, Linux doesn't support .exe; those are Windows games, so play them on Windows"?
You want unlimited filename length?? Yeah... that's a bad idea. Everything has a limit, set for good reason.
Yes, with open source you can do it yourself, but you can also pay someone to do it. Skills plus time, or money to pay someone with the skills: that's what is needed. There is nothing stopping what you want from happening. Yet it doesn't happen. There isn't even talk of it, by the looks of it.
This is a mountain out of a molehill.
I have no idea what you are on about with Windows game exes. I assume you know of Steam's Proton and plain Wine.
No. But a limit at least better than what Windows has to offer would help a lot (if only because switching is a common thing and should be made a breeze for everyone). And 255 bytes is bad no matter how you look at it.
No, that's not needed, I think. Some filesystems supported by Linux already allow longer names; it's the Linux VFS that is limiting them. It's basically an artificial limit. It will be changed eventually; I'm only saying it's long overdue already.
I assume you know it wasn't always like that. Surely a lot of Linux developers never thought it was a good idea to support many more Windows-related things (one could say it would have been implemented if it were a big issue), but here we are.
Come on.
'Welcome to the Exile Guild ~The incompetent S-rank party will banish more and more talented adventurers, so collect the weakest and create the strongest guild~ 1 (Dragon Comics Age) - Yusuke Araki'
Is not a reasonable name.
I get:
ようこそ『追放者ギルド』へ ~無能なSランクパーティがどんどん有能な冒険者を追放するので、最弱を集めて最強ギルドを創ります~ 1 (ドラゴンコミックスエイジ) - 荒木 佑輔
That's 87 Unicode characters and 241 bytes in UTF-8.
So this unreasonable name does fit.
I don't see this limit changing any time soon, because by hitting it you're naming files unmanageably. Pretty sure that is what the main devs will say before concentrating on more important stuff. If you present them with nice code for it, maybe they will take it. If not, it will mean carrying those patches on your own fork, though maybe you could get them to take bits of it to make the carrying easier.
People doing it for themselves is very common. I've fixed bugs in all kinds of things, including the Linux kernel. People doing it for money is a world I don't know personally, but I know of it. Example: https://console.algora.io/
You can also just hire a contractor, or team, to do open source. I've done that, at the developer end (Qt4 Windows port work).
Wine is old. It's from 1993. The code is great, though. Over 12 years ago, when I was stuck on Windows for work, I used to use it as a reference when MSDN didn't cover something. But I wouldn't recommend it as a way of living on a UNIX. If you are dependent on Windows apps, you aren't ready to leave. Wine does not make a UNIX into Windows. Changing the underlying implementation brings out bugs in the software above, and with closed shit you can't fix them. Wine does, however, give you a route to running a piece of Windows software, if you have the time to give that software the set of Windows bugs it expects, "bug for bug". Valve have basically lovingly wrapped Windows games with what each game needs.
You aren't addressing what I've said. But that's expected. No need to spend more of your time.
It does feel like we are talking past each other. Probably coming from very different places. All the best anyway.
Linux might have a similar filename restriction, but what's more important, IMO, is the obnoxious file path restriction NTFS has.
Keeping a filename under 255 chars is a lot easier than keeping its whole path down.
Limiting the filename is one thing, but dealing with limited path lengths when trying to move a customer's folder full of subdirs upon subdirs is obnoxious, especially when the share name it's being transferred to makes it just too long.
Can't you work around that with the extended-length prefix \\?\ (i.e. \\?\C:\whateverlongpathhere\)? Though admittedly, it is a pain in the ass to use. (edited for clarity and formatting)
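For what it's worth, here's roughly what that looks like from Python on Windows (the path is made up; the prefix only works with absolute paths):

```python
import os

# A made-up path well past the classic 260-character MAX_PATH limit.
long_dir = r"C:\temp\customer" + r"\really_long_subfolder_name" * 12
long_file = long_dir + r"\report.txt"

# Prefixing an absolute path with \\?\ tells the Win32 API to skip the
# MAX_PATH check, so this works even without the long-paths setting enabled.
prefix = "\\\\?\\"
os.makedirs(prefix + long_dir, exist_ok=True)
with open(prefix + long_file, "w") as f:
    f.write("it fits")
```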
You can also enable long paths in Windows 10/11 (30,000+ characters). Instructions are here:
https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry
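If it helps, you can check whether that setting is already on without touching anything (Python on Windows; the value may simply not exist on older builds, which reads the same as disabled):

```python
import winreg

# Read the LongPathsEnabled value described on the page linked above.
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\FileSystem",
)
try:
    value, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
except FileNotFoundError:
    value = 0  # value absent: treat as disabled
print("Long paths enabled:", bool(value))
```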
That would unfortunately require me to edit a GPO I have no control over. I could temporarily knock it out with regedit, but I don't know if it'd be tossed on the next gpupdate; I'd have to check.
Bummer. The '\\?\' prefix will work regardless of the registry setting, though it's a pain to remember each time.
True. The problem is, moving from a more restricted system to a less restricted one is a breeze, but it's painful the other way around. Linux is in a position where it would benefit from every little thing. People trying to switch to Linux will find the path length feels like an upgrade, but the filename limitation is clearly a downgrade.
What are you guys naming your files anyways? No more than four words in lower snake case, as the Machine Spirit intended.
I guess something like
ようこそ『追放者ギルド』へ ~無能なSランクパーティがどんどん有能な冒険者を追放するので、最弱を集めて最強ギルドを創ります~ 1 (ドラゴンコミックスエイジ) - 荒木 佑輔.epub
- 92 characters, but 246 bytes. Where on Windows this file hits 35% of the limit, on Linux it hits 96%. The file is not some rare case; it's from a torrent, uploaded somewhere just today. There are tons of files like this with slightly or much longer names. As of 2024, they can't be served by Linux. Not in pure file form, that is.
Yeah I suppose that would get in the way.
The Linux filesystem is shit? Otherwise I don't get why you used the word "because". NTFS is certainly not shit.
I re-read your comment and I completely misunderstood it, sorry, it's 4am.