The dumpster fire that is the Raspberry Pi

The Raspberry Pi 4 (image courtesy of Michael Henzler via Wikimedia Commons)

For a couple of little home projects I need an always-on computer. In an ideal world, perhaps, this would be something like a Mac Mini. Powerful enough, easy to install & maintain, runs anything & everything (including anything Linux through Docker or at worst a straight VM). Unfortunately, Mac Minis are surprisingly expensive – even nine-year-old models are a couple of hundred dollars at a minimum.

So, I decided to instead explore this Raspberry Pi thing.

I very quickly started wishing I hadn’t.

The whole process thus far has just been a series of absurd errors & frustration.

Acquiring a Raspberry Pi

Step zero – merely buying a Raspberry Pi – is stupidly difficult. Virtually none of the retailers officially listed on raspberrypi.org actually had the Raspberry Pi 4 in stock. Later I discovered that some of these same retailers, despite listing no stock on their own websites, are actively selling the Pi on Amazon. So I bought one through there, which is fine, but why doesn’t raspberrypi.org just say to use Amazon, if that’s really the only way to get them?

Next up was all the peripherals – the Pi by default doesn’t even come with a power supply, so it’s useless out of the box. A cursory internet search reveals a huge amount of FUD about power supplies for the Pi. I have no idea if it’s accurate or not, but given some relevant, egregious design flaws in the Raspberry Pi 4, it seems plausible.

Plus you need at a minimum some stand-offs, if not a full case, to prevent the Pi damaging the surface it’s placed on, or damaging itself through shorts.

And addressing those bootstrapping problems ended up sending me down a rabbit hole trying to find a cooling solution too, since it turns out the Raspberry Pi 4 is infamous for overheating and suffering severe performance – and presumably reliability – problems as a result.

In the end, I spent several hours just figuring out how & what to buy, and what is nominally “the $35 computer” cost over $100. Still without a case, even.

Sidenote: the Pimoroni Fan Shim for Raspberry Pi, while a little fiddly to assemble, does seem to work very well, and is quite quiet.

Booting a Raspberry Pi

This is the one part of the process thus far that’s actually worked mostly as it should. I downloaded the full Raspbian Buster image, following the installation guide, and used balenaEtcher to plop the image onto an SD card. It all worked, even with the Etcher app being a tad dodgy – e.g. it lets you select non-removable volumes, which you cannot possibly intend to flash Raspbian onto, making it unnecessarily dangerous. The Raspberry Pi 4 booted first time.
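(If you’d rather skip Etcher entirely, plain old dd works too – a sketch from macOS, where /dev/disk2 & the image filename are placeholders; triple-check diskutil list first, since dd will cheerfully destroy whichever disk you point it at:)

$ diskutil list                      # identify the SD card – carefully
$ diskutil unmountDisk /dev/disk2
$ sudo dd if=2019-09-26-raspbian-buster-full.img of=/dev/rdisk2 bs=1m
$ diskutil eject /dev/disk2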

I tried to discern whether booting it headless from first boot would work. Officially it does not, but I found that baffling and dug further, reading countless online guides, which seemed to suggest it is possible.
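For what it’s worth, the recipe those guides converge on is just two files dropped into the SD card’s boot partition before first boot – a sketch, assuming the freshly-flashed card is mounted at /Volumes/boot, and with placeholder network credentials:

$ touch /Volumes/boot/ssh        # an empty file named ‘ssh’ enables the SSH server on first boot
$ cat > /Volumes/boot/wpa_supplicant.conf <<'EOF'
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="MyNetwork"
    psk="MyPassword"
}
EOF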

I learnt that the raspi-config tool exists for headless setup, but it was unclear whether it would really, fully work. Though I did the GUI setup process to be conservative, I’ve since used raspi-config quite a bit. Turns out, not only does it work just fine, it’s actually necessary, because the GUI install doesn’t do some important things (like resize the root file system to fill the SD card).
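(Speaking of which, raspi-config also has a largely undocumented non-interactive mode covering exactly that sort of thing – a sketch, noting that the nonint sub-commands are really internal function names & may change between releases:)

$ sudo raspi-config nonint do_expand_rootfs   # resize the root file system to fill the SD card
$ sudo raspi-config nonint do_ssh 0           # 0 means enable, counter-intuitively
$ sudo reboot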

One thing which nearly blew the whole enterprise was when it came time to join a wifi network. I have multiple wifi networks, all with different emoji for names. The GUI setup tool can’t handle emoji, rendering them as octal escape sequences. I don’t happen to have memorised the four-byte octal escapes of each emoji, so it was a tedious game of trial-and-error in which I tried every permutation of unreadable SSID & password.

Worse, it took multiple attempts before it finally worked – I have no idea why it failed to join the network the first time or two, despite using the right password. To this day it still arbitrarily fails to join one of the networks, yet joins the other just fine – both are in the same frequency bands from the exact same router.
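One workaround I wish I’d known about sooner: wpa_supplicant accepts SSIDs as raw hex, which sidesteps the whole escaping circus – a sketch, with a hypothetical single-emoji network name:

$ echo -n '🦄' | xxd -p      # the SSID’s UTF-8 bytes, as hex
f09fa684
$ # then, in /etc/wpa_supplicant/wpa_supplicant.conf, give the ssid unquoted:
network={
    ssid=f09fa684
    psk="the-actual-password"
}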

Aside: Raspberry Pi 4 as a desktop computer

Since my intended use is as a headless, touchless server, I played only briefly with it in the GUI, using a makeshift setup involving my TV (the only HDMI viewing device I’ve ever owned – lucky I had that at least!). It’s fine, but very sluggish – it was immediately apparent that nobody with any other options would ever try to actually use a Raspberry Pi 4 as a desktop machine. Just cold-launching the web browser, before you even navigate to a website, takes up to a minute. And everything is uncomfortably small, with no apparent system configuration options available to adjust render scaling. Clearly Raspbian is not really intended to be operated at UHD resolutions.
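(If you did want to persevere with the Pi-as-desktop idea, the closest thing I’m aware of to render scaling is forcing a lower output resolution in /boot/config.txt – a sketch, with mode numbers taken from the Pi’s video documentation:)

hdmi_group=1   # CEA mode table
hdmi_mode=16   # 1080p @ 60 Hz, instead of native UHD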

Enabling Remote Access

Though I ultimately intended to use only SSH to interact with the Pi, I did want to have VNC available as an option in case I ran into anything which required using the GUI (again, based on the heavy bias in all the official documentation, and the uncertainty created by that as to whether GUI interaction is required or merely an option).

Turns out, VNC doesn’t work out of the box on a Raspberry Pi, unless you buy commercial, proprietary VNC software. A baffling collusion on the part of the Raspberry Pi / Raspbian people. You have to do additional work to make it actually work – work that’s completely undocumented in any official Raspberry Pi / Raspbian documentation. (At best you’ll find the interwebs littered with accounts & instructions on installing a non-proprietary VNC server as a replacement, which presumably also solves this problem.)
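By way of example, one such non-proprietary route – a sketch; note that TightVNC serves up a separate virtual desktop rather than mirroring the physical display:

$ sudo apt install -y tightvncserver
$ vncserver :1 -geometry 1920x1080 -depth 24   # serves display :1 on port 5901
$ # then point any VNC client at raspberrypi.local:5901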

Installing Homebridge

It wasn’t actually my purpose in buying the Raspberry Pi, but I decided that – before I go through the meat grinder that is presumably getting Swift to work on the Pi, since the Pi sadly lacks Swift support out of the box – I’d just real quickly install Homebridge, since I do have a couple of devices I’ve long wished would work with HomeKit.

Ugh.

What a fucking dumpster fire.

You can install Homebridge raw, but since it’s written in Node.js, I didn’t want it going into my real, bare system – infecting it with npm and JavaScript and all that horror.

This would be a perfect opportunity for Docker, and as one might expect there are many guides on how to install Homebridge via Docker.

Installing Homebridge via Docker

Sadly – and frankly bizarrely – running Homebridge through Docker isn’t officially supported.

This third-party guide appeared to be the best, based on this third-party project to support Homebridge in Docker. Step zero, of course, is to install Docker itself. Surely that’s trivial. It’s Docker. What doesn’t run Docker these days? Hell, macOS runs Docker and it doesn’t even support containers. I was baffled that Raspbian didn’t include Docker pre-installed.
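For reference, the end state that guide aims for is roughly this docker-compose.yml – going off that third-party project’s own docs at the time, so the image tag & options may well have drifted since:

version: '2'
services:
  homebridge:
    image: oznu/homebridge:raspberry-pi
    restart: always
    network_mode: host    # HomeKit discovery relies on mDNS, so no port mapping
    volumes:
      - ./homebridge:/homebridge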

Many, many hours later, it still wasn’t working. One would think that Docker, of all things, would be a seamless thing to sudo apt install, but far from it. For example, the official Docker apt repo for Raspbian tries to install ‘aufs-dkms’ as a dependency, even though – turns out – it’s not a real dependency and doesn’t even compile on Raspbian. WTF?!

So that wasted hours in figuring that out – predominantly consumed in trawling the interwebs for a solution. After many hours and reading through dozens if not hundreds of StackOverflow, blog, and similar sources reporting similar issues and offering bogus remedies, I finally found a thread that’s actually helpful.
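For the record, the two fixes you’ll most commonly see cited – a sketch; I can’t promise either survives future packaging changes:

$ # option 1: Docker’s convenience script, the install route Docker’s own docs pointed Raspbian users at
$ curl -fsSL https://get.docker.com | sh
$ # option 2: install from the apt repo, but skip the broken ‘aufs-dkms’ recommendation
$ sudo apt-get install --no-install-recommends docker-ce docker-ce-cli containerd.io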

The worst was yet to come.

At some point in this process something also screwed with my Pi’s boot settings to force the root file system to be mounted – at boot – as an overlay with writes going to tmpfs (i.e. nowhere). That wasted yet more hours as I painstakingly root-caused why my Raspberry Pi suddenly had Alzheimer’s (and lost a lot of progress on installing Homebridge, too).

The web utterly failed in this case, as I couldn’t even find how to disable overlayfs. All I got, mockingly, was endless articles explaining how to enable it and voluntarily ruin your day.

Even just figuring out that it was overlayfs that was screwing me took quite some time, since the first failure symptom was a baffling error message when trying to start dockerd (in hindsight it makes sense: rename(2) can’t move things across file-system boundaries, and overlayfs introduces exactly such a boundary inside /var/lib/docker):

failed to start daemon: rename /var/lib/docker/runtimes /var/lib/docker/runtimes-old: invalid cross-device link

With love and fuck you, dockerd

Ultimately I found a fix, in part thanks to this forum post which had enough transparency on enabling this bullshit situation that I could deduce how to disable it – long story short you need to mount the SD card on another, working computer and remove ‘boot=overlay‘ from /boot/cmdline.txt and ‘initramfs initrd.img-4.19.75-v7l+-overlay‘ from /boot/config.txt.
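In concrete terms – assuming the card’s boot partition is mounted at /Volumes/boot; adjust to wherever yours actually lands:

$ sudo sed -i.bak 's/ boot=overlay//' /Volumes/boot/cmdline.txt
$ sudo sed -i.bak '/^initramfs initrd.img-4.19.75-v7l+-overlay/d' /Volumes/boot/config.txt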

How that ever got enabled I have no idea. Absolutely no commands I ran had anything to do with that at all. Evidently something buried inside Docker installation and/or execution performs this system lobotomy. Even so, I’ve since reviewed every single command I ran, and nothing seems even remotely like it could or should have caused that.

Despite ultimately defeating all this failure, I was greeted merely by another fatal failure, just as inscrutable as the last:

$ docker-compose up -d
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?

If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
$ ps auxww | grep docker
root       427  0.6  1.4 966720 58856 ?        Ssl  15:42   0:01 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
pi        1942  0.0  0.0   7348   472 pts/0    S+   15:46   0:00 grep --color=auto docker

Turns out this was because my user (‘pi’, the default) wasn’t a member of the ‘docker’ group. I’d added it previously, but that must have been under the tyrannical overlayfs regime, so all memory of that event was purged. Adding it again (then logging out & back in) fixed it (sudo usermod -aG docker pi, btw).
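In full, for anyone following along – the group change only applies to new login sessions, hence the log out & back in (or a cheeky newgrp docker):

$ sudo usermod -aG docker pi
$ # ...log out & back in, then sanity-check:
$ docker run --rm hello-world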

This is why Linux can’t have nice things

In summary, Linux in general, and certainly Raspbian specifically, continues to be the same giant clusterfuck it’s always been. I’m no Linux novice – I’ve been writing software for Linux for over a decade as my day job. I’ve just had the luxury of teams of dozens if not hundreds of other engineers to insulate me from the bare wiring that is installing, configuring, & maintaining a Linux installation.

At this point I’m two days in and have only just gotten Docker working. For all the time I’ve wasted I’ve completely blown the price savings between a Raspberry Pi and even a brand new, $800 Mac Mini.

And I still haven’t even started installing Swift, let alone actually running my Swift app on the Raspberry Pi, which – contrary to where all my time has gone on this project – is the actual purpose of this whole sad enterprise.

Truly deleting ‘removed’ files from Lightroom

When you tell Lightroom to delete rejected photos, it pops up a dangerous dialog box:

Screen shot of Lightroom dialog asking if you want to actually delete rejected photos, or merely lose track of them

Though it does explain itself well – i.e. if you want to actually delete the photos, you need to click “Delete from Disk” – the default option is that misleading “Remove” button, which doesn’t really remove the files at all – it merely makes Lightroom lose track of them. They’ll still be there on disk, wasting space forever.

And, you can’t directly undo this operation, so if you hit return a little too quickly, or misread the dialog at any point, you’re seemingly pretty screwed (if you have a Lightroom catalog of any significant size).

Luckily, there is a way to find these undead files – one that doesn’t require walking through every single file on disk one by one & comparing against Lightroom’s view of the world.

1. In the left-side panel, under the “Folders” section, select all the folders and right-click on them (if you have multiple volumes listed under “Folders”, you’ll have to do this one volume at a time, as Lightroom won’t let you select folders across multiple volumes simultaneously). You’ll get a contextual menu:

Screen shot of the contextual menu from right-clicking on an entry in the 'Folders' section of the Lightroom left-side panel

2. Click “Synchronize Folder…”. A dialog will appear:

Screenshot of the "Synchronize Folder" dialog

You probably want to uncheck “Remove missing photos from catalog” (if it’s not already disabled) and “Scan for metadata updates”, as those are unrelated to the purpose here and have their own ramifications. Instead, just select “Import new photos” and “Show import dialog before importing”. Then, click “Synchronize”.

3. Lightroom’s standard import dialog will now appear, and will slowly sort through all the files in the folder(s) you selected, filtering them down to just those that exist on disk yet are not tracked in Lightroom – e.g. all those rejects you accidentally “Removed” but didn’t really remove previously. You can now review those and see what you’ve got – it’s possible you’ll find media in there that you didn’t intend to delete, but that was somehow misplaced by Lightroom at some point.

You might want to, in the import dialog, change your preview generation setting to ‘Minimal’ in order to minimise import time & wasted preview generation. You could also choose to add some keywords to the imports, e.g. “to be deleted” or “recovered” or “undead”, if you’re not going to just immediately delete them anyway.

In any case, you can now import some or all the undead files. Importing them might seem counter-productive, since the goal here is to delete them – but it’s necessary for the final step…

4. Once they’re imported, you can immediately mark them as rejects and delete all rejects again – this time correctly choosing “Delete from Disk”.

So while it’s a bit roundabout, it does get the job done pretty quickly and easily. Now if only Lightroom would fix that stupid dialog to make the default option the one that actually does what you told Lightroom to do to begin with. 🙄

Full Disk Access is required to access Time Machine backups in Mojave

I’ve been struggling since Mojave came out to deal with its overbearing expansion of SIP (“System Integrity Protection”), which is basically a super-root notion that blocks access – even to root – to lots of basic parts of the system, including obvious & mostly sensible ones like /System and /Library, but also, less usefully, things like any & all Time Machine backups.

Blocking access to Time Machine makes it very difficult to actually use Time Machine, since it’s then difficult to retrieve files from a backup (you have to then use the stupid ‘warp’ Time Machine interface, which is slow, ugly, and buggy).

Luckily, it turns out there is a fairly simple solution that isn’t disabling SIP entirely (which requires multiple reboots, so is typically quite disruptive & slow). It appears that any application granted Full Disk Access (System Preferences → Security & Privacy → Full Disk Access) can read Time Machine backups.

In case you’re unfamiliar, the symptoms of this problem include:

  • Being unable to navigate into Time Machine backups in the Open / Save / etc dialogs.
  • Being unable to see – through ls or similar tools – the contents of Time Machine backups via Terminal.
  • Apps reporting errors like “The file “Foo” couldn’t be opened because you don’t have permission to view it” or bluntly “Operation not permitted” when trying to read something in a Time Machine backup.
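By way of illustration, the Terminal variant looks something like this – with a hypothetical backup volume name, before granting Terminal Full Disk Access:

$ ls "/Volumes/Time Machine Backups/Backups.backupdb"
ls: /Volumes/Time Machine Backups/Backups.backupdb: Operation not permitted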

There’s a strange & ironically very bad security quirk though – curiously, any tools run via Terminal inherit Terminal’s Full Disk Access (or lack thereof). They don’t use whatever setting might be specified for them in the Security & Privacy preferences. This is pretty baffling, as it means that to give Full Disk Access to anything you run via Terminal, you have to give it to everything you run via Terminal. And anything you specifically give Full Disk Access won’t actually receive it if it happens to be launched via the Terminal (which confused me for a while, since it’s so unintuitive).

I’m guessing whatever mechanism enforces all this so-called security is based in LaunchServices or some such – while the Finder and most things in general will launch apps via LaunchServices, as detached & independent process sessions, Terminal doesn’t – everything it runs, from the shells down, runs under it in the process hierarchy, and seemingly shares its security & privacy settings.