People vs Products

I’ve experienced an interesting arc over my twenty or so years (thus far) of software development.

I started out as a one-person shop, doing my own things, selling shareware. I had neither a manager nor a technical lead. I had to make all my own decisions, in all aspects, without guidance or assistance.

Subsequently, during my four years at Apple, I did have a manager, but they focused on people, not the technical – my colleagues and I collectively made the technical decisions, provided technical leadership, and effectively set the product direction. My managers were there to make that as easy as possible for us.

Over my nearly eight years at Google, I observed the tail half of a major cultural transition. Long before I started, Google had explicitly laid down a culture in which managers were not product / technical leads. The two roles were held by different people, deliberately kept separate, and operated independently. Managers focused on people – career growth, happiness, basic productivity, & skills – while tech leads focused on the technical, the product. In fact the manager role was so principled about its focus on people that managers would sometimes help their direct reports leave the company, if that was simply what was best for those people’s own success & growth. And, to be clear, not in a “you aren’t working out” sense, but for engineers who were excellent and simply didn’t have deserved opportunities available to them at Google.

By the time I joined, that culture was half-gone, but still present enough in my division for me to experience it. By the time I left, though, the culture was heavily weighted towards managers being technical leads.

In my nearly three years now at LinkedIn, I’ve completed that arc. LinkedIn culturally & executively emphasises managers as technical / product leads even more so than Google ever did. As far as I’ve been told, LinkedIn always has (meaning, this is presumably the culture Yahoo had too, from which LinkedIn forked).

Having experienced most of this spectrum, I finally feel qualified to pass judgement on it.

Managers should not be leads.

I immediately, intuitively recognised & appreciated this at Google, but now I’m certain of it.

People management & (technical) product leadership are fundamentally at odds with each other. The needs of individuals often conflict with the needs of the product. The product might need Natalie to focus on churning through a bunch of menial tasks, but to evolve, Natalie might really need design experience & leadership opportunities.

Having one person (in authority) try to wear both hats creates conflict, bias, and inefficiency. It discourages dialogue, because you can never really trust where the polymorph stands. The roles require different skillsets, which rarely coexist in a single person and in any case are difficult to keep up to date in parallel. Context-switching between them is burdensome. It creates a power imbalance and perverse incentives.

Even if an individual is exceptionally talented at mitigating those problems, they simply don’t have the time to do both well. Being a product or technical lead is at least a full-time job. Likewise, helping a team of any real size grow as individuals requires far more hands-on, one-on-one attention than most people realise. It’s hard enough being good at either role alone – anyone who attempts both simultaneously ends up doing neither effectively.

I’ve had the opportunity to be a technical lead only, and at other times a manager only. This is quite rare in the tech industry. I deeply appreciated being able to focus on just one of those roles at a time. I could be consistent, deliberate, and honest. As a manager, I could tell people exactly what I thought they should or shouldn’t work on, irrespective of what the product(s) needed, because I knew the technical lead(s) would worry about those angles. Conversely, when I was a technical lead, I could lay out what was simply, objectively best for the project, uncomplicated by individuals’ interests. In either case, there was a real, other human being who could be debated with, as necessary, to find happy mediums.

Yet beyond just being more efficient and effective, the serendipitous consequence was that it gave agency to the individuals – whenever a conflict arose between people and products, it was revealed to them, and the decision about it was at least in part theirs to make. Most importantly, they knew that whichever way they leaned, they had someone in their corner who had their back.

(Of course, sometimes they didn’t like having to make that decision, but putting it on them forced them to take control and responsibility for themselves, and evolve into more confident, happy, motivated developers.)

I suppose it’s no surprise that companies tend this way – towards conflating people with products. These days, for many big tech companies, people literally are the products, and their humanity is inevitably stripped away in the process. People are “promoted” into management from technical positions and, often by way of the Peter Principle, are not actually good people managers, nor able to relinquish their former role and ways of thinking. A hierarchy of technical leads in managers’ clothing becomes self-sustaining, self-selecting, and self-enforcing.

The question is: what’s the antidote?

Acknowledgement: I was inspired to pen this post by reading Tanner Wortham’s Why Manager as Product Owner Will Usually Fail, which essentially posits the same thing, albeit in different terminology.

The Myth of The Web

The recent kerfuffle between Microsoft Edge and YouTube was particularly interesting to me: while I have no specific knowledge of that instance, I certainly do have some cultural insight from nearly eight years working inside the so-called Chocolate Factory – though not on any web stuff, to be clear, so my experience is in the broadest internal sense.

Like everyone else last week, I was trying to determine how much intent or malice was behind Google’s actions, but with a marginally more informed perspective – or at least a relatively unusual one.

Permit me to first provide some larger context, though.

When I worked at Apple, back in ~2006-2010, I insisted on using Camino, because it was superior to Safari at the time (which, among other flaws, was particularly crashtastic in its early years – an attribute which thankfully is long gone, but has burned itself permanently into my emotional memory).

That choice to not use Safari caused periodic issues because some Apple-operated websites wouldn’t work properly with anything but Safari. When I reported those issues internally, the typical response was “we only support Safari”. From the perspective of Apple, once they had their own web browser, that was all that mattered. Thankfully it wasn’t a huge issue since the web wasn’t that important to day-to-day work at Apple, as they used native applications for most things (sidenote: I still miss Radar… I didn’t miss Xcode for a long time, until I was forced more recently to use IntelliJ). And certainly the world outside Apple didn’t care about this cute little ‘Safari’ thing, at the time.

My experience at Google was essentially the same.

The vast majority of Google’s internal websites do not work properly in any browser except Chrome. This is a very real problem: it’s practically impossible to perform any job function at Google without using their internal websites heavily, because Google is so dogmatically opposed to native applications. Google has worked extremely hard to [try to] make it possible to do almost anything through Chrome (often to the point of absurdity).

Ironically, even Microsoft – whom I currently work for, via LinkedIn – are on the Chrome bandwagon: some of the websites that I am required to use for work require Chrome.

Most interestingly – and distinct from Apple’s behaviour, where dysfunction in browsers outside their own was predominantly down to actual functional differences between them – this ‘requirement’ to use Chrome often isn’t because of any actual, functional dependency on Chrome, but because Google’s (and Microsoft’s) web developers will specifically require a Chrome user-agent, and explicitly block any other browser. While this is easily worked around by spoofing the user-agent field (sketched below) – and is how I know that the purported Chrome requirement is usually a fiction – it emphasises the mentality at Google:

There is no web, there is only Chrome.
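To make that concrete, here is a minimal sketch of the kind of user-agent gate I’m describing (mine, purely for illustration; not code I’ve seen inside Google or Microsoft), in TypeScript. The function name and the browser tokens it checks are my own assumptions; the point is simply that a gate like this admits anything that claims to be Chrome, which is exactly why spoofing the user-agent string defeats it.

```typescript
// Illustrative only: a naive "supported browser" check of the kind described
// above, not actual Google or Microsoft code.
function isBlessedBrowser(userAgent: string): boolean {
  // Must claim to be Chrome, and must not carry the tell-tale tokens of
  // other browsers that also embed "Chrome" in their user-agent strings.
  return userAgent.includes("Chrome/") && !/\b(Edge|Edg|OPR)\//.test(userAgent);
}

// Pre-Chromium Edge advertises Chrome compatibility, but also says "Edge/",
// so a gate like this turns it away regardless of what it can actually render:
const edgeUA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
  "(KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36 Edge/18.17763";

// Any browser whose user-agent preference has been overridden to a stock
// Chrome string sails straight through, even though nothing else has changed:
const spoofedUA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
  "(KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36";

console.log(isBlessedBrowser(edgeUA));    // false – "unsupported browser"
console.log(isBlessedBrowser(spoofedUA)); // true  – works fine
```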

This is, I believe, the crux of the matter not just in this Edge vs YouTube issue, but in web development broadly, across space & time. The vast majority of web developers don’t create content for The Web; they create content for a browser. One browser, usually.

I saw it unashamedly unfiltered inside Google, but it inevitably leaks out in time – through things like carelessly & needlessly crippling other browsers’ performance on YouTube.

While today that browser happens to be Chrome, before Chrome existed there was still always that browser – e.g. the 90s and much of the 00s were defined by Microsoft Internet Explorer’s dominance, and the refusal of the majority of so-called web developers to create content for The Web rather than just Internet Explorer. (Of course, back then The Web really was almost synonymous with Internet Explorer, with ≥90% marketshare for many years, so at least that fixation was more pragmatic than the Chrome obsession is now.)

So, I’m actually sceptical that the YouTube team explicitly sabotaged Edge – rather, I think it’s just one of endless cases of web developers not really caring about The Web – ignorance & indifference, in other words, rather than [outright] malice. But just as caustic & dangerous.

What particularly concerns me today is that it’s not quite the same as the terrible 90s and 00s. Then, when Internet Explorer dominated, the vast majority of important websites were not operated by Microsoft.

Today, Google’s web properties are dominant in mindshare, if not marketshare, to the point of essentially being monopolies (certainly in the case of YouTube, at least) – far more so than anything of Microsoft’s ever was.

More to the point, 90s web developers chose to develop for Internet Explorer exclusively – they were not coerced into it for the most part, nor firmly bound to that choice, because their corporate masters did not have a horse in the browser race and were pragmatically & unemotionally going for audience reach. A meritocracy was possible, and existed to a degree, and was essential to the rise of Firefox, my nostalgic Camino, and yes, even Chrome.

Google very much does have a horse in that race, and I know – from many years of experience inside Google seeing their unfiltered opinions – that they absolutely do want Chrome to become the only horse in that race. Not because of some comically-evil secret council scheming at the heart of Mountain View, but because they culturally & corporately just don’t care about anyone else. Modern Google is just as paranoid, fearful, power-hungry, and ruthless as 90s-era Microsoft ever appeared to be – Google want control, and the browser today is as fundamental to control as operating systems ever were.

Given all that, my fear is that there’s no longer a practical way for another browser to compete on merit with Chrome – any more than a third-party app store can compete on iOS, for example.

Chrome is open source in the literal sense, but not in the more important governance & existential senses. The only way to give The Web a chance is to remove any corporate browser bias from the minds of the top websites’ developers – Google’s web developers. (Or, technically, to just supplant Google’s numerous dominant web properties. Good luck with that.)

This assuredly won’t happen anytime soon by way of government intervention, given the current U.S. political circumstance, but it is conceivable that Google themselves would perform this surgical separation voluntarily, for the good of The Web.

Sadly, I fear that’s unlikely in an era post-“Don’t be evil”.