There are multiple ways to do it, apparently. One would think that there’d be the correct & working packages already available through apt, but just as with Docker, one would be wrong. In the case of Swift, there is the Swift-ARM site & group that does provide a package repo with pre-built binaries (and various tutorials on using them), but oddly they don’t provide the current version of Swift.
An alternative is the buildSwiftOnARM Github repo. They merely provide tarballs, which is slightly suboptimal, but very straightforward and they have tarballs for essentially every version – major and minor – of Swift to date. The git history also indicates that they’re very prompt about building tarballs as new versions are released.
Better yet, they provide a couple of shell scripts to build Swift from scratch from source. Only a couple of dependencies (e.g. clang) need be pre-installed, which can be done quickly & painlessly via apt.
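For the record, the rough shape of that from-source route, reconstructed from memory rather than from the repo's README – so the dependency list, the repo path, and the script names (clone.sh, build.sh) are all assumptions to verify against the repo itself:

```shell
# Sketch of the buildSwiftOnARM from-source flow; package list and script
# names are reconstructions, not gospel -- defer to the repo's README.
# Written to a file first so it can be reviewed before running on the Pi.
cat > build-swift.sh <<'EOF'
sudo apt-get install -y git cmake ninja-build clang python uuid-dev \
    libicu-dev icu-devtools libbsd-dev libedit-dev libxml2-dev \
    libsqlite3-dev swig libpython-dev libncurses5-dev pkg-config
git clone https://github.com/uraimo/buildSwiftOnARM.git
cd buildSwiftOnARM
./clone.sh    # pulls Swift and all its core dependencies (takes a while)
./build.sh    # builds the toolchain -- hours, not days
EOF
wc -l build-swift.sh   # reviewable before you commit hours to it
```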
Installing from source is presumably the least reliable approach, but since I had already resigned myself to a miserable experience, I figured I might as well go all in.
However, it works. Perfectly. Sure, it takes some time to pull down the huge source base for Swift and all its core dependencies, and some time to build it (though not that long – hours, not days, contrary to what I read online). But the end result was a working toolchain.
It remains to be seen exactly how good or bad Swift development on Linux is, given the absence of the numerous Apple system libraries which are what actually elevate macOS development above other platforms, but at least getting Swift itself installed is painless.
Sidenote: SATA to USB
The only real speedbump in the whole process, for me, had nothing actually to do with installing Swift itself, but rather the external storage situation on the Raspberry Pi.
The Raspberry Pi doesn’t offer SATA directly, unfortunately (let alone any form of pluggable PCIe). MicroSD is a low-performance, low-reliability, and high-cost option. So to attach any significant storage you’re basically going through either Ethernet (e.g. NAS) or USB.
USB to SATA adaptors are a shitshow. I’ve tried at least half a dozen different vendors’ offerings over the years, and every single one has been super buggy. The one I newly acquired for my Raspberry Pi use proved to be no exception.
Long story short, the symptoms were I/Os taking incredibly long to complete (many seconds each, serialised) and generally unusable performance. /var/log/messages contained countless pages of:
Oct 13 11:13:45 applepi kernel: [ 234.087294] sd 0:0:0:0: [sda] tag#2 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD IN
Oct 13 11:13:45 applepi kernel: [ 234.087306] sd 0:0:0:0: [sda] tag#2 CDB: opcode=0x28 28 00 77 3b ce 00 00 00 d0 00
Oct 13 11:13:45 applepi kernel: [ 234.126541] scsi host0: uas_eh_device_reset_handler start
Oct 13 11:13:45 applepi kernel: [ 234.277450] usb 2-2: reset SuperSpeed Gen 1 USB device number 2 using xhci_hcd
Oct 13 11:13:45 applepi kernel: [ 234.312541] scsi host0: uas_eh_device_reset_handler success
Oct 13 11:14:15 applepi kernel: [ 264.805760] sd 0:0:0:0: [sda] tag#7 uas_eh_abort_handler 0 uas-tag 2 inflight: CMD IN
Oct 13 11:14:15 applepi kernel: [ 264.805778] sd 0:0:0:0: [sda] tag#7 CDB: opcode=0x28 28 00 77 3b d1 b8 00 00 48 00
Turns out this is an incredibly common problem with USB to SATA adaptor chipsets, documented as such all over the web. Finding how to solve it was less trivial, because a lot of the advice given is either outright wrong or at least doesn’t work on Raspbian. The solution I found, via this random thread, was to simply add:
…to the end of the existing line in /boot/cmdline.txt (where 152d:1561 are the vendor & device IDs for the particular chipset used in my case). All other variations on this, involving adding similar magic incantations to files in /etc/modprobe.d etc., simply do not do anything on Raspbian.
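For reference, the standard kernel-parameter form for blacklisting UAS on a given chipset is usb-storage.quirks=&lt;vendor&gt;:&lt;product&gt;:u, which is presumably what was added here. A sketch of the edit, performed on a local sample copy (the root= value is a placeholder; on the Pi you edit /boot/cmdline.txt itself, and everything must stay on one line):

```shell
# Sample of what /boot/cmdline.txt looks like (PARTUUID is a placeholder):
printf 'console=serial0,115200 root=PARTUUID=abcd1234-02 rootwait\n' > cmdline.txt

# Append the quirk flag to the end of the single existing line;
# ":u" tells usb-storage to ignore UAS for that vendor:product pair.
sed -i '1 s/$/ usb-storage.quirks=152d:1561:u/' cmdline.txt
cat cmdline.txt
```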
For a couple of little home projects I need an always-on computer. In an ideal world, perhaps, this would be something like a Mac Mini. Powerful [enough], easy to install & maintain, runs anything & everything (including anything Linux through Docker or at worst a straight VM). Unfortunately, Mac Minis are surprisingly expensive – even nine-year-old models are a couple of hundred dollars at a minimum.
So, I decided to instead explore this Raspberry Pi thing.
I very quickly started wishing I hadn’t.
The whole process thus far has just been a series of absurd errors & frustration.
Acquiring a Raspberry Pi
Step zero, of merely buying a Raspberry Pi, is stupidly difficult. Virtually all the retailers officially listed on raspberrypi.org did not actually have the Raspberry Pi 4 in stock. Later I discovered that some of these same retailers, that list no stock on their own websites, are actively selling the Pi on Amazon. So I bought one through there, which is fine, but why doesn’t raspberrypi.org just say to use Amazon, if that’s really the only way to get them?
Next up was all the peripherals – the Pi by default doesn’t even come with a power supply, so it’s useless out of the box. A cursory internet search reveals a huge amount of FUD about power supplies for the Pi. I have no idea if it’s accurate or not, but given some relevant, egregious design flaws in the Raspberry Pi 4, it seems plausible.
Plus you need at a minimum some stand-offs, if not a full case, to prevent the Pi damaging the surface it’s placed on, or damaging itself through shorts.
And addressing those bootstrapping problems ended up sending me down a rabbit hole trying to find a cooling solution too, since it turns out the Raspberry Pi 4 is infamous for overheating and suffering severe performance – and presumably reliability – problems as a result.
In the end, I spent several hours just figuring out how & what to buy, and what is nominally “the $35 computer” cost over $100. Still without a case, even.
Installing Raspbian
This is the one part of the process thus far that’s actually worked mostly as it should. I downloaded the full Raspbian Buster image, following the installation guide, and used balenaEtcher to plop the image onto an SD card. It all worked, even with the Etcher app being a tad dodgy (e.g. it lets you select non-removable volumes, which you cannot possibly intend to flash Raspbian onto – unnecessarily dangerous). The Raspberry Pi 4 booted first time.
I tried to discern whether setting it up headless from first boot would work. Officially it does not, but I found that baffling and dug further, reading countless online guides, which seemed to suggest it is possible.
I learnt that there exists the raspi-config tool for headless setup, but it was unclear whether it would really work, fully. Though I did the GUI setup process to be conservative, I’ve since used raspi-config quite a bit. Turns out, it not only works just fine, but it’s actually necessary, because the GUI install doesn’t do some important things (like resize the root file system to fill the SD card).
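As a concrete sketch of the headless route: raspi-config has a noninteractive mode, nonint. The subcommand names below (do_expand_rootfs, do_ssh) are quoted from memory, so verify them against raspi-config itself before relying on this:

```shell
# Headless-setup commands, written to a file for review first; the nonint
# subcommand names are believed correct but should be verified locally.
cat > headless.sh <<'EOF'
sudo raspi-config nonint do_expand_rootfs   # grow root fs to fill the SD card
sudo raspi-config nonint do_ssh 0           # 0 = enable the SSH server
EOF
sh -n headless.sh && echo "syntax OK"
```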
One thing which nearly sank the whole enterprise was joining a wifi network. I have multiple wifi networks, all with [different] emoji for names. The GUI setup tool can’t handle emoji, rendering them as octal escape sequences. I don’t happen to have memorised the byte sequences of each emoji, so it was a tedious game of trial and error in which I tried every permutation of unreadable SSID & password.
Worse, it took multiple attempts, too, before it finally worked – I have no idea why it failed to join the network the first time or two, despite using the right password. To this day it still arbitrarily fails to join one of the networks, yet joins the other just fine – both are in the same frequency bands from the exact same router.
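One way around the emoji problem entirely, for what it's worth: wpa_supplicant accepts the SSID as a bare hex string (no quotes), so you can skip the GUI and hand it the raw UTF-8 bytes yourself. The network name and passphrase here are illustrative examples, not the author's actual networks:

```shell
# Convert an emoji SSID to the hex form wpa_supplicant understands.
ssid_hex=$(printf '🍕' | od -An -tx1 | tr -d ' \n')
echo "$ssid_hex"   # f09f8d95

# Emit a wpa_supplicant.conf network block using the unquoted hex SSID.
cat <<EOF
network={
    ssid=$ssid_hex
    psk="example-passphrase"
}
EOF
```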
Aside: Raspberry Pi 4 as a desktop computer
Since my intended use is as a headless, touchless server, I played only briefly with it in the GUI, using a makeshift setup involving my TV (the only HDMI viewing device I’ve ever owned – lucky I had that at least!). It’s fine, but very sluggish – it was immediately apparent that nobody with any other options would ever try to actually use a Raspberry Pi 4 as a desktop machine. Just [cold] launching the web browser, before you even navigate to a website, takes up to a minute. And everything is uncomfortably small, with no apparent system configuration options available to adjust render scaling. Clearly Raspbian is not really intended to be operated at UHD resolutions.
Enabling Remote Access
Though I ultimately intended to use only SSH to interact with the Pi, I did want to have VNC available as an option in case I ran into anything which required using the GUI (again, based on the heavy bias in all the official documentation, and the uncertainty created by that as to whether GUI interaction is required or merely an option).
Turns out, VNC doesn’t work out of the box on a Raspberry Pi, unless you buy commercial, proprietary VNC software. A baffling collusion on the part of the Raspberry Pi / Raspbian people. You have to do additional work to make it actually work – work that’s completely undocumented in any official Raspberry Pi / Raspbian documentation. (At best you’ll find the interwebs littered with accounts & instructions on installing a non-proprietary VNC server instead, which presumably also solves this problem.)
Installing Homebridge
It wasn’t actually my purpose in buying the Raspberry Pi, but – before going down the meat grinder that is presumably getting Swift to work on the Pi, since the Pi sadly lacks support for Swift out of the box – I figured I’d real quickly install Homebridge, since I do have a couple of devices I’ve long wished would work with HomeKit.
What a fucking dumpster fire.
This would be a perfect opportunity for Docker, and as one might expect there are many guides on how to install Homebridge via Docker.
Many, many hours later, it still wasn’t working. One would think that Docker, of all things, would be a seamless thing to sudo apt install, but far from it. For example, the official Docker apt repo for Raspbian tries to install an ‘aufs-dkms’ package as a dependency, even though – turns out – it’s not a real dependency and doesn’t even compile on Raspbian. WTF?!
So figuring that out wasted hours, predominantly consumed in trawling the interwebs for a solution. After reading through dozens if not hundreds of StackOverflow answers, blog posts, and similar sources reporting similar issues and offering bogus remedies, I finally found a thread that’s actually helpful.
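I won't claim this is exactly what that thread said, but the commonly cited working route on Raspbian is Docker's own convenience script, which bypasses the broken apt-repo dependency entirely. A sketch, written to a file so it can be read before being run (never pipe it blind into a shell):

```shell
# Commonly cited Raspbian workaround: Docker's get.docker.com script.
cat > docker-install.sh <<'EOF'
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
EOF
grep -c . docker-install.sh   # count of commands, ready to review
```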
The worst was yet to come.
At some point in this process something also screwed with my Pi’s boot settings to force the root directory to be mounted – at boot – as an overlay with writes going to tmpfs (i.e. nowhere). That wasted yet more hours as I painstakingly root-caused why my Raspberry Pi suddenly had Alzheimer’s (and lost a lot of progress on installing Homebridge, too).
The web utterly failed in this case, as I couldn’t even find how to disable overlayfs. All I got, mockingly, was endless articles explaining how to enable it and voluntarily ruin your day.
Even just figuring out that it was overlayfs that was screwing me took quite some time, since the first failure symptom was a baffling error message when trying to start dockerd:
failed to start daemon: rename /var/lib/docker/runtimes /var/lib/docker/runtimes-old: invalid cross-device link
With love and fuck you, dockerd
Ultimately I found a fix, in part thanks to this forum post which had enough transparency on enabling this bullshit situation that I could deduce how to disable it – long story short you need to mount the SD card on another, working computer and remove ‘boot=overlay’ from /boot/cmdline.txt and ‘initramfs initrd.img-4.19.75-v7l+-overlay’ from /boot/config.txt.
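To save the next victim some time: you can confirm the overlay situation directly, and the cmdline.txt half of the fix is a one-line removal, demonstrated here on a local sample copy (the root= value is a placeholder; on the real file, do it with the SD card mounted on another machine as described above):

```shell
# On an afflicted Pi this prints "overlay" instead of the usual ext4:
findmnt -n -o FSTYPE / || true

# The cmdline.txt half of the fix, shown on a sample copy of the file:
printf 'console=serial0,115200 root=PARTUUID=abcd1234-02 boot=overlay rootwait\n' > cmdline.txt
sed -i 's/ boot=overlay//' cmdline.txt
cat cmdline.txt   # boot=overlay is gone
```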
How that ever got enabled I have no idea – evidently something buried inside Docker installation and/or execution performs this system lobotomy. I’ve since reviewed every single command I ran, and nothing seems even remotely like it could or should have caused that.
Having ultimately defeated all that, I was greeted by merely another fatal failure, just as inscrutable as the last:
$ docker-compose up -d
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
$ ps auxww | grep docker
root 427 0.6 1.4 966720 58856 ? Ssl 15:42 0:01 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
pi 1942 0.0 0.0 7348 472 pts/0 S+ 15:46 0:00 grep --color=auto docker
Turns out this was because my user (‘pi’, the default) wasn’t a member of the ‘docker’ group. I’d added it previously, but it must have been under the tyrannical overlayfs regime, and all memory of that event purged. Adding it again (then logging out & back in) fixed it (sudo usermod -aG docker pi btw).
This is why Linux can’t have nice things
In summary, Linux in general, and certainly Raspbian specifically, continues to be the same giant clusterfuck it’s always been. I’m no Linux novice – I’ve been writing software for Linux for over a decade as my day job. I’ve just had the luxury of teams of dozens if not hundreds of other engineers to insulate me from the bare wiring that is installing, configuring, & maintaining a Linux installation.
At this point I’m two days in and have only just gotten Docker working. For all the time I’ve wasted I’ve completely blown the price savings between a Raspberry Pi and even a brand new, $800 Mac Mini.
And I still haven’t even started installing Swift, let alone actually running my Swift app on the Raspberry Pi, which – contrary to where all my time has gone on this project – is the actual purpose of this whole sad enterprise.
The recent kerfuffle with Microsoft Edge vs YouTube was particularly interesting, since while I have no specific knowledge of that instance, I certainly do have some cultural insight from nearly eight years working inside the so-called Chocolate Factory – though not on any web stuff, to be clear, so my experience speaks to the culture broadly rather than to any particular web team.
Like everyone else last week, I was trying to determine how much intent or malice was behind Google’s actions, but with a marginally more informed perspective – or at least a relatively unusual one.
Permit me to first provide some larger context, though.
When I worked at Apple, back in ~2006-2010, I insisted on using Camino, because it was superior to Safari at the time (which, among other flaws, was particularly crashtastic in its early years – an attribute which thankfully is long gone, but has burned itself permanently into my emotional memory).
That choice to not use Safari caused periodic issues because some Apple-operated websites wouldn’t work properly with anything but Safari. When I reported those issues internally, the typical response was “we only support Safari”. From the perspective of Apple, once they had their own web browser, that was all that mattered. Thankfully it wasn’t a huge issue since the web wasn’t that important to day-to-day work at Apple, as they used native applications for most things (sidenote: I still miss Radar… I didn’t miss Xcode for a long time, until I was forced more recently to use IntelliJ). And certainly the world outside Apple didn’t care about this cute little ‘Safari’ thing, at the time.
My experience at Google was essentially the same.
The vast majority of Google’s internal websites do not work properly in any browser except Chrome. This is a very real problem since it’s practically impossible to perform any job function at Google without using their internal websites heavily, since Google is so dogmatically opposed to native applications. Google has worked extremely hard to [try to] make it possible to do almost anything through Chrome (often to the point of absurdity).
Ironically even Microsoft – whom I currently work for, via LinkedIn – are on the Chrome bandwagon, as some of their websites – that I am required to use for work – require Chrome.
Most interestingly – and distinct from Apple’s behaviour, where dysfunction in browsers outside their own was predominantly based on actual functional differences between them – this ‘requirement’ to use Chrome is often not because of any actual, functional dependency on Chrome, but because Google’s (and Microsoft’s) web developers will specifically require a Chrome user-agent, and explicitly block any other browser. While this is easily worked around by spoofing the user-agent field – and is how I know that the purported Chrome requirement is usually a fiction – it emphasises the mentality at Google:
There is no web, there is only Chrome.
This is, I believe, the crux of the matter in not just this Edge vs YouTube issue, but with web development broadly in space & time. The vast majority of web developers don’t create content for The Web, they create content for a browser. One browser, usually.
I saw it unashamedly unfiltered inside Google, but it inevitably leaks out in time – through things like carelessly & needlessly crippling other browsers’ performance on YouTube.
While today that browser happens to be Chrome, before Chrome existed there was still always that browser – e.g. the 90s and much of the 00s was defined by Microsoft Internet Explorer’s dominance and the refusal by the majority of so-called web developers to create content for The Web rather than just Internet Explorer. (Of course, back then The Web really was almost synonymous with Internet Explorer, with ≥90% marketshare for many years, so at least it was more pragmatic back then than Chrome obsession is now.)
So, I’m actually sceptical that the YouTube team explicitly sabotaged Edge – rather, I think it’s just one of endless cases of web developers not really caring about The Web – ignorance & indifference, in other words, rather than [outright] malice. But just as caustic & dangerous.
What particularly concerns me today is that it’s not quite the same as the terrible 90s and 00s. Then, when Internet Explorer dominated, the vast majority of important websites were not operated by Microsoft.
Today, Google’s web properties are dominant in mindshare if not marketshare to the point of essentially being monopolies (certainly in the case of YouTube, at least) – far more so than anything Microsoft ever operated.
More to the point, 90s web developers chose to develop for Internet Explorer exclusively – they were not coerced into it for the most part, nor firmly bound to that choice, because their corporate masters did not have a horse in the browser race and were pragmatically & unemotionally going for audience reach. A meritocracy was possible, and existed to a degree, and was essential to the rise of Firefox, my nostalgic Camino, and yes, even Chrome.
Google very much does have a horse in that race, and I know – from many years of experience inside Google seeing their unfiltered opinions – that they absolutely do want Chrome to become the only horse in that race. Not because of some comically-evil secret council scheming at the heart of Mountain View, but because they culturally & corporately just don’t care about anyone else. Modern Google is just as paranoid, fearful, power-hungry, and ruthless as 90s-era Microsoft ever appeared to be – Google want control, and the browser today is as fundamental to control as operating systems ever were.
Given all that, my fear is that there’s no longer a practical way for another browser to compete on merit with Chrome – any more than a third-party app store can compete on iOS, for example.
Chrome is open source in the literal sense, but not in the more important governance & existential senses. The only way to give The Web a chance is to remove any corporate browser bias from the minds of the top websites’ developers – Google’s web developers. (Or, technically, to just supplant Google’s numerous dominant web properties. Good luck with that.)
This assuredly won’t happen anytime soon by way of government intervention, given the current U.S. political circumstance, but it is conceivable that Google themselves would perform this surgical separation voluntarily, for the good of The Web.
Sadly, I fear that’s unlikely in an era post-“Don’t be evil”.