Kalcifer

joined 11 months ago
[–] [email protected] 6 points 4 days ago* (last edited 4 days ago)
[–] [email protected] 1 points 1 week ago

Make it work, then make it better.

I really like this one. It's borderline a mantra.

[–] [email protected] 12 points 1 week ago* (last edited 1 week ago) (3 children)

As a counter to perfectionism:

If it's worth doing, then it's worth doing poorly. [source: a reddit user]

[–] [email protected] 10 points 1 week ago* (last edited 1 week ago)

Very clever use case!

[–] [email protected] 42 points 2 weeks ago* (last edited 2 weeks ago)

As of 2024-09-03T22:10:25.545Z, Starlink is now complying with Brazil's X ban [1].

References

  1. "Starlink says it will block X in Brazil". Emma Roth. The Verge. Published: 2024-09-03T22:10:25.545Z. Accessed: 2024-09-04T04:17Z. https://www.theverge.com/2024/9/3/24235204/starlink-block-x-brazil-comply-elon-musk.

    “We immediately initiated legal proceedings in the Brazilian Supreme Court explaining the gross illegality of this order and asking the Court to unfreeze our assets,” Starlink says in a post on X. “Regardless of the illegal treatment of Starlink in freezing of our assets, we are complying with the order to block access to X in Brazil.”

[–] [email protected] 1 points 2 weeks ago

Not really as those are public things.

Would you mind citing an example of exactly what you are referring to? I feel like I'm presuming a lot of things in your statements here.


Dhcp is more of a issue.

I don't know if it's "more", or "less" of an issue, but all these things are worthy of concern.

[–] [email protected] 2 points 2 weeks ago (2 children)

That would certainly also be worthy of concern.

[–] [email protected] 2 points 2 weeks ago* (last edited 2 weeks ago)

have the machine pretend you’re in UTC.

That is a possible solution, though not exactly the most convenient, imo. That is, if I understand you correctly that you are talking about setting the OS timezone to be UTC.

[–] [email protected] 1 points 2 weeks ago

could be defeated by doing an analysis of when the commits were made on average vs other folks from random repositories to find the average time of day and then reversing that information into a time zone

This is the first thing I thought of upon reading the title

It's also in the post body.

[–] [email protected] 0 points 2 weeks ago

Any given time zone there are going to be millions if not billions of people.

One more bit of identifying information is still one more bit of identifying information.


Git also “leaks” your system username and hostname IIRC by default which might be your real name.

This only happens as part of a fallback if a username and email are not provided [1].

References

  1. Git. Reference Manual. git-commit. "COMMIT INFORMATION". Accessed: 2024-08-31T23:30Z. https://git-scm.com/docs/git-commit#_commit_information.

    In case (some of) these environment variables are not set, the information is taken from the configuration items user.name and user.email, or, if not present, the environment variable EMAIL, or, if that is not set, system user name and the hostname used for outgoing mail (taken from /etc/mailname and falling back to the fully qualified hostname when that file does not exist).


A fake name and email would pretty much be sufficient to make any “leaked” time zone information irrelevant.

Perhaps, but only within a context where one is fine with being completely unidentifiable. This doesn't consider the circumstance where a user does want their username to be known, but simply doesn't want it to be personally identifiable.


UTC seems like it’s just “HEY LOOK AT ME! I’M TRYING TO HIDE SOMETHING!”

This is a fair argument. Ideally, imo, recording dates for commits would be an optional QoL setting rather than a mandatory one. Better yet, if Git simply recorded UTC by default, this would be much less of an issue overall.


if you sleep like most people, could be defeated by doing an analysis of when the commits were made on average vs other folks from random repositories to find the average time of day and then reversing that information into a time zone.

I mentioned this in my post.


It’s better to be “Jimmy Robinson in Houston Texas” than “John Smith in UTC-0”

That decision is contextually dependent.

[–] [email protected] 2 points 2 weeks ago (4 children)

How do you mean?

81
submitted 2 weeks ago* (last edited 2 weeks ago) by [email protected] to c/[email protected]
 

Git records the local timezone when a commit is made [1]. Knowledge of the timezone in which a commit was made could be used as a bit of identifying information to de-anonymize the committer.

Setting one's timezone to UTC can help mitigate this issue [2][3] (though, ofc, one must still be wary of time-of-day commit patterns being used to deduce a timezone).
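The per-command override from [2] can be sketched end-to-end; this is a minimal demo in a throwaway repository (all names and paths below are placeholders):

```shell
# Minimal sketch of the TZ=UTC trick from reference [2], demonstrated in a
# throwaway repository so it is safe to run anywhere.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.name "Example"
git config user.email "example@example.invalid"
echo "hello" > file.txt
git add file.txt

# TZ=UTC applies only to this single command; the recorded offset becomes +0000.
TZ=UTC git commit -q -m "example commit"

# The stored author date carries a +0000 offset regardless of the local timezone:
git log -1 --pretty=%ad
```

To apply this to every Git invocation, something like alias git='TZ=UTC git' in your shell profile would also work, at the cost of zeroing the offset on everything Git timestamps.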

References

  1. Git documentation. git-commit. "Date Formats: Git internal format". Accessed: 2024-08-31T07:52Z. https://git-scm.com/docs/git-commit#Documentation/git-commit.txt-Gitinternalformat.

    It is <unix-timestamp> <time-zone-offset>, where <unix-timestamp> is the number of seconds since the UNIX epoch. <time-zone-offset> is a positive or negative offset from UTC. For example CET (which is 1 hour ahead of UTC) is +0100.

  2. jthill. "How can I ignore committing timezone information in my commit?". Stack Overflow. Published: 2014-05-26T16:57:37Z. (Accessed: 2024-08-31T08:27Z). https://stackoverflow.com/questions/23874208/how-can-i-ignore-committing-timezone-information-in-my-commit#comment36750060_23874208.

    to set the timezone for a specific command, say e.g. TZ=UTC git commit

  3. Oliver. "How can I ignore committing timezone information in my commit?". Stack Overflow. Published: 2022-05-22T08:56:38Z (Accessed: 2024-08-31T08:30Z). https://stackoverflow.com/a/72336094/7934600

    For each commit Git stores an author date and a commit date. So you have to omit the timezone for both dates.

    I solved this for my self with the help of the following Git alias:

    [alias]
    co = "!f() { \
        export GIT_AUTHOR_DATE=\"$(date -u +%Y-%m-%dT%H:%M:%S%z)\"; \
        export GIT_COMMITTER_DATE=\"$(date -u +%Y-%m-%dT%H:%M:%S%z)\"; \
        git commit \"$@\"; \
        git log -n 1 --pretty=\"Author: %an <%ae> (%ai)\"; \
        git log -n 1 --pretty=\"Committer: %cn <%ce> (%ci)\"; \
    }; f"
    



80
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

I use Workman.

EDIT (2024-08-10T19:23Z): I should clarify that I am referring to the layout that you use for a physical computer keyboard, not a mobile/virtual keyboard.

 

I drink PG Tips Original.

 

Cross-posted from: https://sh.itjust.works/post/19987854


We have previously highlighted the importance of not losing your account number, encouraging it to be written down in a password manager or similar safe location.

For the sake of convenience, account numbers have been visible when users logged into our website. This has led to potential concerns that a malicious observer could:

  • Use up all of a user's connections
  • Delete a user's devices

From the 3rd June 2024 you will no longer be able to see your account number after logging into our website.


 

Danish banks have implemented significant restrictions on how Danish kroner (DKK) used outside Denmark can be repatriated back into Denmark.

Due to these circumstances, which are unfortunately beyond Mullvad’s control, Mullvad will no longer be able to accept DKK from its customers. We will continue to credit DKK received until the end of the month, but considering postal delays, it is best to stop sending it immediately.

 
 

I'm looking to buy some wireless earbuds. The following is what I am looking for in them:

  • <=$100USD
  • No third party companion app requirement
  • Compatible with Android (Pixel 6)
  • Won't fall out of ears while exercising
  • ~8h playback
  • Sweat/water resistant
  • (optional, but would be nice) Active noise cancelling
 

I thought I'd share my experience doing this, as it was quite a pain, and maybe this will help someone else. It covers the process I took to set it all up, along with the workarounds and solutions that I found along the way.

  1. Hardware that I used: Raspberry Pi 1 Model B rev 2.0, SanDisk Ultra SD Card (32GB).
  2. I had issues using the Raspberry Pi Imager (v1.8.5, Flatpak): It initially flashed pretty quickly, but the verification process was taking an unreasonably long time — I waited ~30 mins before giving up and cancelling it; so, I ended up manually flashing the image to the SD card:
    1. I connected the SD card to a computer (running Arch Linux).
    2. I located what device corresponded to it by running lsblk (/dev/sdd, in my case).
    3. I downloaded the image from here. I specifically chose the "Raspberry Pi OS Lite" option, as it was 32-bit, it had Debian Bookworm, which was the version needed for podman-compose (as seen here), and it lacked a desktop environment, which I wanted, as I was running it headless.
    4. I then flashed the image to the SD card with dd if=<downloaded-raspbian-image> of=<drive-device> bs=50M status=progress
      • <downloaded-raspbian-image> is the path to the file downloaded from step 3.
      • <drive-device> is the device that corresponds to the SD card, as found in step 2.2.
      • bs=50M: I found that 50M is an adequate block size. I tested sizes from 1M to 100M.
      • status=progress is a neat option that shows you the live status of the command's execution (write speed, how much has been written, etc.).
  3. I enabled SSH for headless access. This was rather poorly documented (which was a theme for this install).
    1. To enable SSH, as noted here, one must put an empty file named ssh at the "root of the SD card". This is, unfortunately, rather misleading. It does not mean the directory /boot inside the root partition, rootfs; the file must instead be placed in the root of the boot partition, bootfs (bootfs and rootfs are the two partitions written to the SD card when you flash the downloaded image). So the proper path would be <bootfs>/ssh. I simply mounted bootfs within my file manager; without that, I would have had to manually locate which partition corresponded to it, and mount it manually, to be able to create the file. The ownership of the file didn't seem to matter — it was owned by my user, rather than root (as was every other file in that directory, it seemed).
    2. One must then enable password authentication in the SSH daemon, otherwise one won't be able to connect via SSH using a password (I don't understand why this is not the default):
      1. Edit <rootfs>/etc/ssh/sshd_config (note that sshd_config lives in the root partition, rootfs, not in bootfs)
      2. Set PasswordAuthentication yes (I just found the line that contained PasswordAuthentication, uncommented the line, and set it to yes).
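That edit can also be made non-interactively with sed. A hedged sketch, demonstrated on a scratch copy so it is safe to run anywhere; on the real card, point it at sshd_config under the mounted rootfs instead:

```shell
# Scratch copy standing in for <rootfs>/etc/ssh/sshd_config.
cfg=$(mktemp)
printf '#PasswordAuthentication no\n' > "$cfg"

# Uncomment the directive (if commented) and force it to yes:
sed -i -E 's/^#?PasswordAuthentication.*/PasswordAuthentication yes/' "$cfg"

cat "$cfg"
```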
  4. I changed the hostname by editing <rootfs>/etc/hostname and replacing it with one that I wanted.
  5. I created a user (the user is given sudo privileges automatically)
    1. Create a file at <bootfs>/userconf.txt — that is, create a file named userconf.txt in the bootfs partition (again, poorly documented here).
    2. As mentioned in that documentation, add a single line to that file of the format <username>:<password>, where
      • <username> is the chosen username for the user.
      • <password> is the salted hash of your chosen password, which is generated by running openssl passwd -6 and following its prompts.
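Steps 5.1 and 5.2 can be sketched as follows; the username and password are placeholders, and the file should be written to the real bootfs mount point rather than the scratch directory used here:

```shell
# Scratch directory standing in for the mounted bootfs partition.
dir=$(mktemp -d)

# Generate the salted hash of the chosen password (placeholder value):
hash=$(openssl passwd -6 'change-me')

# Write the single <username>:<password-hash> line (placeholder username):
printf '%s:%s\n' 'pi-admin' "$hash" > "$dir/userconf.txt"

cat "$dir/userconf.txt"
```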
  6. Plug the SD card into the Pi, plug in power, and wait for it to boot. This is an old Pi, so it takes a good minute to boot fully and become available. You can ping it with ping <hostname>.local to see when it comes online (where <hostname> is your chosen hostname).
  7. SSH into the Pi with ssh <username>@<hostname>.local (You'll of course need mDNS, like Avahi, setup on your device running SSH).
  8. Make sure that everything is updated on the Pi with sudo apt update && sudo apt upgrade
  9. Install Podman with sudo apt install podman (the socket gets automatically started by apt).
  10. Install Podman Compose with sudo apt install podman-compose.
  11. Create the compose file compose.yaml. Written using the official Pi-hole example as a reference, it contains the following:
version: "3"
services:
  pihole:
    container_name: pihole
    image: docker.io/pihole/pihole:latest
    ports:
      - "<host-ip>:53:53/tcp"
      - "<host-ip>:53:53/udp"
      - "80:80/tcp"
    environment:
      TZ: '<your-tz-timezone>'
    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'
  • <host-ip> is the ip of the device running the container. The reason for why this is needed can be found in the solution of this post.
  • <your-tz-timezone> is your timezone as listed here.
  • For the line that contains image: docker.io/pihole/pihole:latest, docker.io is necessary, as Podman does not default to using hub.docker.com.
  • Note that there isn't a restart: unless-stopped policy. Apparently, podman-compose currently doesn't support restart policies. One would have to create a Systemd service (which I personally think is quite ugly to expect of a user) to be able to restart the service at boot.
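As an assumed workaround (not from the original post), a small user-level systemd unit could bring the compose project up at boot; every path and name below is a placeholder:

```ini
# Hedged sketch: ~/.config/systemd/user/pihole-compose.service
[Unit]
Description=Pi-hole via podman-compose
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/pi-admin/pihole
ExecStart=/usr/bin/podman-compose up -d
ExecStop=/usr/bin/podman-compose down

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable pihole-compose.service, and, for rootless Podman, run loginctl enable-linger <user> so the unit starts without a login session.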
  12. (NOTE: if you want to skip step 13, run this command as sudo) Pull the image with podman-compose --podman-pull-args="--arch=arm/v6" pull
    • --podman-pull-args="--arch=arm/v6" is necessary as podman-compose doesn't currently support specifying the platform in the compose file.
      • Specifying the architecture itself is required as, from what I've found, Podman appears to have a bug where it doesn't properly recognize the platform of this Pi, so you have to manually specify the architecture, i.e. arm/v6 (you can see this architecture mentioned here under "latest").
    • This took a little while on my Pi. The download rate was well below what my connection normally manages, so I assume the single-threaded CPU was just too bogged down to keep up.
    • Don't be concerned if it stays at the "Copying blob..." phase for a while. This CPU is seriously slow.
  13. Allow Podman to use ports below 1024, so that it can run rootless:
    • Edit /etc/sysctl.conf, and add the line net.ipv4.ip_unprivileged_port_start=53. This allows all non-privileged users to bind ports >=53. Not great, but it's what's currently needed. You can avoid this step by running steps 12 and 14 as sudo.
    • Apply it with sysctl -p
  14. (NOTE: if you want to skip step 13, run this command as sudo) Start the container with podman-compose up -d.
    • It will take a while to start. Again, this Pi is slow.
    • Don't worry if podman-compose ps shows that the container is "unhealthy". This should go away after a minute or so; I think it's just in that state while it starts up.
  15. Access the Pi-hole admin panel in a browser at http://<host-ip>/admin.
    • The password is found in the logs. You can find it with podman-compose logs | grep random. The password is randomly generated every time the container starts. If you want to set your own password, then you have to specify it in the compose file as mentioned here.
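For example, the environment section of the compose file could pin the password. A sketch, where WEBPASSWORD is the variable the linked documentation describes for this image and the value is a placeholder:

```yaml
    environment:
      TZ: '<your-tz-timezone>'
      WEBPASSWORD: 'change-me'
```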
 

Solution

It was found (here, and here) that Podman uses its own DNS server, aardvark-dns, which is bound to port 53 (this explains why I was able to bind to 53 with nc on the host while the container would still fail). So the solution is to publish that port on the host's IP specifically. In the compose file, the ports section would become:

ports:
  - "<host-ip>:53:53/tcp"
  - "<host-ip>:53:53/udp"
  - "80:80/tcp"

where <host-ip> is the ip of the machine running the container — e.g. 192.168.1.141.


Original Post

I so desperately want to bash my head into a hard surface. I cannot figure out what is causing this issue. The full error is as follows:

Error: cannot listen on the UDP port: listen udp4 :53: bind: address already in use

This is my compose file:

version: "3"
services:
  pihole:
    container_name: pihole
    image: docker.io/pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"
    environment:
      TZ: '<redacted>'
    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'
    restart: unless-stopped

and the result of # ss -tulpn:

Netid       State        Recv-Q       Send-Q                             Local Address:Port               Peer Address:Port       Process                                         
udp         UNCONN       0            0                    [fe80::e877:8420:5869:dbd9]:546                           *:*           users:(("NetworkManager",pid=377,fd=28))       
tcp         LISTEN       0            128                                      0.0.0.0:22                      0.0.0.0:*           users:(("sshd",pid=429,fd=3))                  
tcp         LISTEN       0            128                                         [::]:22                         [::]:*           users:(("sshd",pid=429,fd=4))        

I have looked for possible culprit services like systemd-resolved. I have tried disabling Avahi. I have looked for other potential DNS services. I have rebooted the device. I am running the container as sudo (so it has access to all ports). I am quite at a loss.

  • Raspberry Pi Model 1 B Rev 2
  • Raspbian (bookworm)
  • Kernel v6.6.20+rpt-rpi-v6
  • Podman v4.3.1
  • Podman Compose v1.0.3

EDIT (2024-03-14T22:13Z)

For the sake of clarity, # netstat -pna | grep 53 shows nothing on 53, and # lsof -i -P -n | grep LISTEN shows nothing listening to port 53 — the only listening service is SSH on 22, as expected.

Also, as suggested here, I tried manually binding to port 53, and I was able to without issue.

 

I use nftables to set my firewall rules. I typically configure the rules manually. Recently, I happened to dump the ruleset, and, much to my surprise, my config was gone, replaced with an enormous number of extremely cryptic firewall rules. After a quick examination of the rules, I found that it was Docker that had modified them. And after some brief research, I found a number of open issues, just like this one, of people complaining about this behaviour. I think it's an enormous security risk to have Docker silently do this by default.

I have heard that Podman doesn't suffer from this issue, as it is daemonless. If that is true, I will certainly be switching from Docker to Podman.

 

Cross-posted to: https://sh.itjust.works/post/15859195


From other conversations that I've read through, people usually say "Yes, because it's easy on Windows", or "Yes, because they simply don't trust the webcam". But neither of these arguments is enough for me. The former feels irrelevant when one is talking about Linux, and the latter is just doing something for the sake of doing it, which is not exactly a rational argument.

Specifically for Linux (although, I suppose this partially also depends on the distro, and, of course, vulnerabilities in whatever software that you might be using), how vulnerable is the device to having its webcam exploited? If you trust the software that you have running on your computer, and you utilize firewalls (application layer, network layer, etc.), you should be resistant to such types of exploits, no? A parallel question would also be: how vulnerable is a Linux device if you don't take extra precautions like firewalls?

If this is the case, what makes Windows so much more vulnerable?

112
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]
 

My Nextcloud has always been sluggish — navigating and interacting isn't snappy/responsive, changing between apps is very slow, loading tasks is horrible, etc. I'm curious what the experience is like for other people. I'd also be curious to know how you have your Nextcloud set up (install method, server hardware, any other relevant special configs, etc.). Mine is essentially just a default install of Nextcloud Snap.

Edit (2024-03-03T09:00Z): I should clarify that I am specifically talking about the web interface and not general file sync capabilities. Specifically, I notice the sluggishness the most when interacting with the calendar and tasks.
