Attention trumps experience

It’s always bemused me that I did better in electronic engineering than computer science.

I studied them simultaneously, receiving degrees in both after five years (some classes counted towards both, which is why it wasn’t seven or eight years).

I grew up playing on, dabbling with, and programming computers. From BASIC II to Hypercard to RealBasic and onwards. I don’t recall ever questioning the apparent inevitability of going to university to study computer science, and then into a permanent career in software development. It was as if I were born to it.

In contrast, I chose to do electronic engineering basically on a whim, while I was choosing my university course. I had effectively zero background in it. It just looked interesting.

Though I observed the curious results – the unexpected inversion in my academic grades – it took me quite a while to learn the lesson. I did better in EE despite my inexperience because I paid attention. It was interesting and I knew I was starting from scratch there, so I worked hard at it. I gave it more time.

I’ve increasingly appreciated the importance of this as I’ve gathered other life anecdotes. Why does the new hire straight out of school often do a better job than the senior engineer? Why do “fresh eyes” on a long-troublesome area of the project suddenly find all sorts of bugs and flaws that had been overlooked?

The Abstracted Unreliability Anti-pattern

I don’t know if it’s genuinely on the uptick, or just coincidence, or merely runaway pattern recognition on my part, but I keep seeing the same logical fallacy applied over and over again: if I add more layers to a system it will become more reliable.

A recent example: this set of services has poor uptime, so we need to put another service in front of them, and this will – apparently, magically – give them better uptime.

This might be true in specific cases. If the abstraction layer utilises caching, for example, it’s conceivable it’ll be able to continue operating in at least some capacity while an underlying component is (briefly) unavailable. Or maybe the reduced load on the downstream service(s) will alleviate pressure on racy code and dodgy GC. But this is practically never a factor in the real-world examples I see. It really does seem to be “step 1, collect underpants … step 3, profit”.
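To make the caching case concrete, here’s a minimal sketch of the one mechanism by which a fronting layer genuinely can mask brief downstream outages: serving stale cached responses when the backend fails. All names here are hypothetical illustrations, not drawn from any particular service.

```python
import time


class StaleCacheProxy:
    """Fronts an unreliable backend. Fresh cache hits skip the backend
    entirely; when the backend fails, the last known-good value is
    served stale. This is the narrow sense in which an extra layer
    can improve availability -- and only for keys seen before."""

    def __init__(self, backend, ttl_seconds=30.0):
        self._backend = backend   # callable: key -> value, may raise
        self._ttl = ttl_seconds
        self._cache = {}          # key -> (value, fetched_at)

    def get(self, key):
        entry = self._cache.get(key)
        now = time.monotonic()
        # Fresh hit: answer without touching the backend at all.
        if entry is not None and now - entry[1] < self._ttl:
            return entry[0]
        try:
            value = self._backend(key)
        except Exception:
            # Backend down: fall back to stale data if we have any.
            # For never-seen keys the outage is still fully visible.
            if entry is not None:
                return entry[0]
            raise
        self._cache[key] = (value, now)
        return value
```

Note the limits this sketch makes explicit: the layer only helps for previously cached keys, only while staleness is acceptable, and it does nothing whatsoever for the backend’s underlying reliability – which is exactly the point.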

The best you could say is that it accidentally arrives at a practical benefit, despite itself.

It’s important to separate the actual causes of improvement from the red herrings. I can only assume that the confusion between the two is what has allowed this blatantly illogical practice to gain some ground. “We implemented better input validation and an extra layer, and things got better”, and bafflingly nobody ever seems to question how “and an extra layer” contributed.

Now, adding an abstraction layer might have other direct benefits – e.g. the opportunity for use-case-specific APIs, better alignment with organisational structure, etc – but reliability is unlikely to be one of them. Particularly if by ‘reliability’ one largely means availability. Again, it’s crucial to understand the actual causality involved, not miscategorise coincidence.

The logical thing to do in the face of an unreliable component is to simply make it reliable. Anything else is just making the situation worse.

People vs Products

I’ve experienced an interesting arc over my twenty or so years (thus far) of software development.

I started out as a one-person shop, doing my own things, selling shareware. I had no manager nor technical lead. I had to make all my own decisions, in all aspects, without guidance or assistance.

Subsequently, during my four years at Apple, I did have a manager, but they focused on people, not the technical – myself and/or my colleagues collectively made the technical decisions, and provided technical leadership, and effectively set the product direction. My managers were there to make that as easy as possible for us.

Over my nearly eight years at Google, I observed the tail half of a major cultural transition for Google. Long before I started, Google had explicitly laid down a culture where managers were not product / technical leads. The two roles were physically separated, between different people, and they operated independently. Managers focused on people – career growth, happiness, basic productivity, & skills – while tech leads focused on the technical, the product. In fact the manager role was so principled about focus on people that managers would sometimes help their direct reports leave the company, if that was simply what was best for those people for their own success & growth. And, to be clear, not in a “you aren’t working out” sense, but for engineers that were excellent and simply didn’t have deserved opportunities available to them at Google.

By the time I joined, that culture was half-gone, but still present enough in my division for me to experience it. But by the time I left the culture was heavily weighted towards managers being technical leads.

In my nearly three years now at LinkedIn, I’ve completed that arc. LinkedIn culturally & executively emphasises managers as technical / product leads even more so than Google ever did. As far as I’ve been told, LinkedIn always has (meaning, this is presumably the culture Yahoo had too, from which LinkedIn forked).

Having experienced most of this spectrum, I finally feel qualified to pass judgement on it.

Managers should not be leads.

I immediately, intuitively recognised & appreciated this at Google, but now I’m certain of it.

People management & (technical) product leadership are fundamentally at odds with each other. The needs of individuals are often at odds with the needs of the product. The product might need Natalie to really focus on churning through a bunch of menial tasks, but to evolve, Natalie might really need design experience & leadership opportunities.

Having one person (in authority) try to wear both hats creates conflict, bias, and inefficiency. It discourages dialogue, because you can never really trust where the polymorph stands. The roles require different skillsets, which rarely coexist in a single person and in any case are difficult to keep up to date in parallel. Context-switching between them is burdensome. It creates a power imbalance and perverse incentives.

Even if an individual is exceptionally talented at mitigating those problems, they simply don’t have the time to do both well. Being a product or technical lead is at least a full-time job. Likewise, helping a team of any real size grow as individuals requires way more hands-on, one-on-one attention than most people realise. It’s hard enough being good at either one of them alone – anyone who attempts to do both simultaneously ends up doing neither effectively.

I’ve had the opportunity to be both a technical lead only and a manager only. This is quite rare in the tech industry. I deeply appreciated being able to focus on just one of those roles at a time. I could be consistent, deliberate, and honest. I could, as a manager, tell people exactly what I thought they should or shouldn’t work on, irrespective of what the product(s) need, because I knew the technical lead(s) would worry about those angles. Conversely, when I was a technical lead, I could lay out what was simply, objectively best for the project, uncomplicated by individuals’ interests. In either case, there was a real, separate human being who could be debated with, as necessary, to find happy mediums.

Yet beyond just being more efficient and effective, the serendipitous consequence was that it gave agency to the individuals – whenever a conflict arose between people and products, it was revealed to them, and the implicit decision about it was at least in part theirs to make. Most importantly, they knew that whichever way they leaned they had someone in their corner who had their back.

(Of course, sometimes they didn’t like having to make that decision, but putting it on them forced them to take control and responsibility for themselves, and evolve into more confident, happy, motivated developers.)

I suppose it’s no surprise that companies tend this way – to conflate people with products. These days, for many big tech companies, people literally are the products, and their humanity is inevitably stripped away in the process. People are “promoted” into management from technical positions, and, often by way of the Peter Principle, are not actually good people managers, nor able to relinquish their former role and ways of thinking. A hierarchy of technical leads in managers’ clothing becomes self-sustaining, self-selecting, and self-enforcing.

The question is: what’s the antidote?

Acknowledgement: I was inspired to pen this post by reading Tanner Wortham’s Why Manager as Product Owner Will Usually Fail, which essentially posits the same thing, albeit in different terminology.