Attention trumps experience

It’s always bemused me that I did better in electronic engineering than computer science.

I studied them simultaneously, receiving degrees in both after five years (some classes counted towards both, which is why it wasn’t seven or eight years).

I grew up playing on, dabbling with, and programming computers. From BASIC II to Hypercard to RealBasic and onwards. I don’t recall ever questioning the apparent inevitability of going to university to study computer science, and then into a permanent career in software development. It was as if I were born to it.

In contrast, I picked electronic engineering basically on a whim while choosing my university course. I had effectively zero background in it. It just looked interesting.

Though I observed the curious results – the unexpected inversion in my academic grades – it took me quite a while to learn the lesson. I did better in EE despite my inexperience because I paid attention. It was interesting and I knew I was starting from scratch there, so I worked hard at it. I gave it more time.

I’ve increasingly appreciated the importance of this as I’ve gathered other life anecdotes. Why does the new hire straight out of school often do a better job than the senior engineer? Why do “fresh eyes” on a long troublesome area of the project suddenly find all sorts of bugs and flaws that had been overlooked?

Reverting to an older version of Safari Technology Preview

Apple try to make it impossible to revert to a prior version of Safari Technology Preview (STP) – and they also try to force updates to the latest version immediately, without user consent. This is bafflingly hostile behaviour for what is supposed to be a beta version of the browser that users voluntarily, out of charity, help Apple debug.

It’s also highly problematic when new versions are flat-out broken. Around STP 124 I started experiencing consistent crashes on some websites, making them completely unusable in STP. For the time being I visited those sites in regular Safari instead, on the assumption that these egregious issues would be quickly fixed in the next STP version. Well, three versions later and those bugs have not been fixed. Not even close.

Now, with STP 126, it crashes on launch. Every time.

Well, thankfully there are places where you can obtain the prior versions, even if Apple won’t provide them. My preference is The Wayback Machine – start with this calendar, pick a date, and (with any luck) the archived download page for that date will point to the version you want. Download the disc image, delete the current copy of STP from your Applications folder (otherwise the pkg installer for the older version will refuse to work), and re-install the older version.
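For the command-line inclined, here’s a rough Python sketch of those same steps. The disc image, volume, and package names are assumptions – check them against what you actually downloaded before running anything – and `installer` needs to be run as root.

```python
#!/usr/bin/env python3
# Rough sketch of the manual steps above, assuming the disc image from
# The Wayback Machine is sitting in ~/Downloads. The file, volume, and
# package names below are illustrative guesses – adjust to match your
# actual download. Run the whole thing with sudo, since `installer`
# (and deleting the existing app) need root.

import shutil
import subprocess
from pathlib import Path

DMG = Path.home() / "Downloads" / "SafariTechnologyPreview.dmg"  # assumed name
APP = Path("/Applications/Safari Technology Preview.app")
VOLUME = Path("/Volumes/Safari Technology Preview")              # assumed mount point

# 1. Delete the current (broken) copy, otherwise the older pkg refuses to install.
if APP.exists():
    shutil.rmtree(APP)

# 2. Mount the downloaded disc image.
subprocess.run(["hdiutil", "attach", str(DMG)], check=True)

# 3. Run the installer package inside it (the pkg name is an assumption).
pkg = VOLUME / "Safari Technology Preview.pkg"
subprocess.run(["installer", "-pkg", str(pkg), "-target", "/"], check=True)

# 4. Unmount the disc image again.
subprocess.run(["hdiutil", "detach", str(VOLUME)], check=True)
```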

Once you’ve done that, make sure to turn Automatic Updates off in System Preferences, otherwise Apple will just trash your working version with the broken one again.

If you appreciate that – I certainly did; I like having a web browser that doesn’t crash on launch – remember that The Wayback Machine is run by the Internet Archive, a non-profit group, and they can always use monetary support as well as volunteers.

P.S. Yes, I’m aware that their donation page is sadly a bit janky. If you’re a web developer or designer, maybe you could volunteer some of your time to improve it? 😁

The Abstracted Unreliability Anti-pattern

I don’t know if it’s genuinely on the uptick, or just coincidence, or merely runaway pattern recognition on my part, but I keep seeing the same logical fallacy applied over and over again: if I add more layers to a system it will become more reliable.

A recent example: this set of services has poor uptime, so we need to put another service in front of them, and this will – apparently, magically – give them better uptime.

This might be true in specific cases. If the abstraction layer utilises caching, for example, it’s conceivable it’ll be able to continue operating in at least some capacity while an underlying component is (briefly) unavailable. Or maybe the reduced load on the downstream service(s) will alleviate pressure on racy code and dodgy GC. But in the real-world examples I see, this is practically never a factor. It really does seem to be “step 1, collect underpants … step 3, profit”.
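To make that caching caveat concrete, here’s a minimal Python sketch – the backend URL, timeout, and staleness window are all invented for illustration – of a front-end layer that keeps limping along on a stale copy while the underlying service is briefly down. Note that it only helps if the backend has answered successfully at some point recently; the layer manufactures no reliability of its own.

```python
# Minimal sketch of the one case where the extra layer genuinely helps:
# cache successful responses, and serve the stale copy when the backend
# falls over. URL, timeout, and staleness window are placeholders.

import time
import urllib.request

CACHE = {}            # path -> (timestamp, body)
STALE_OK_FOR = 300    # seconds we're willing to serve a stale copy

def fetch(path):
    try:
        with urllib.request.urlopen("https://backend.example.com" + path,
                                    timeout=2) as resp:
            body = resp.read()
        CACHE[path] = (time.time(), body)   # remember the good response
        return body
    except OSError:
        # Backend is down or too slow – fall back to the cache, if fresh enough.
        cached = CACHE.get(path)
        if cached and time.time() - cached[0] < STALE_OK_FOR:
            return cached[1]
        raise  # no usable copy; the extra layer adds nothing here
```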

The best you could say is that it accidentally arrives at a practical benefit, despite itself.

It’s important to separate the actual causes of improvement from the red herrings. I can only assume that the confusion between the two is what has allowed this blatantly illogical practice to gain some ground. “We implemented better input validation and an extra layer, and things got better”, and bafflingly nobody ever seems to question how “and an extra layer” contributed.

Now, adding an abstraction layer might have other direct benefits – e.g. the opportunity for use-case-specific APIs, better alignment with organisational structure, etc – but reliability is unlikely to be one of them. Particularly if by ‘reliability’ one largely means availability. Again, it’s crucial to understand the actual causality involved, not miscategorise coincidence.

The logical thing to do in the face of an unreliable component is to simply make it reliable. Anything else is just making the situation worse.