Episode 6: On the death of First Principles Problem Solving

What I've lost to the b-school model of summarizing and generalizing

Apparently I am the kind of person who learns best by doing things and reading about things. In some sense, this should have prepared me to be the perfect business school candidate because I can just ingest a large enough volume of cases and then be able to generate the appearance of intellect by regurgitating them in consultant-style jargon.

I'm often told that my neurotic tendency to poke at details, to find out how things really work, or to keep asking questions in class reflects an inability to look at the "big picture." I find this accusation offensive. A lot of my (poorly contained) disdain for Twitter Tech VC culture and for many peers at business school stems from the fact that comprehensive understanding of an industry or a job comes from the people actually working "on the ground," not from the loudest "senior" voices in the room. I have not been shy about how I feel about consultants on this blog before, and I think that unless consultants can show actual experience in the thing they are consulting on, their opinion is generalized GPT garbage.

Also, there is power in having information at all levels of a business or organization. Having worked at a fairly cohesive small startup before, I can tell you that the critical mass at which the CEO/Founder starts to lose touch with "what's actually happening" is a surprisingly small number (in my team of 12, it was when we hired our 5th engineer). This is exacerbated if the founder is non-technical. At some point, the executive staff at any organization (unless it's a five-person startup) starts losing context on what's happening on the ground, because middle managers start playing their own politics of what they think their superiors want to hear.

My time as a boots-on-the-ground engineer, and as someone who has had to face aggressive client teams as part of a go-to-market strategy, was marked by many days when my peers resented the fact that the CEOs, salespeople, marketers, and middle management had "no idea what was really going on." This resentment is fed by the fact that many engineers have struggled to represent the value of their work to their leadership, or they inherently know that their skills could be deployed in more useful directions than the ones dictated to them from Above.

Arguably, software engineers also make it hard to be understood, because they are wrapped in an ivory tower of complex phraseology and bad documentation that they deem the rest of society too stupid to comprehend. Managers are meant to serve as liaisons in this capacity, and they aren't going to be respected by their engineering teams if they cannot earn that respect by showing that they bear similar scars from their own time pushing code.

Regardless, cultish behavior is not out of place at a business school. I am in at least 8 group chats that constantly talk about AI, totaling maybe 250-ish people. Only about 16 or so have ever grappled with the horrors of Python by hand, let alone developed models. There's constant cross-chatter about "AI," with articles from mainstream news media being shared. But not once has anyone tried to share the actual published research behind the hype. This is why LLMs are dominating the discourse on what we think "AI" is, instead of the myriad other ways in which AI can be useful.

And, look, I get it. Reading CS research papers isn't fun or easy. Engaging with the broader engineering community at Penn outside of Wharton is also not easy, especially given how tightly interwoven the social fabric of Wharton is. But how can you expect to work in tech and not speak to the people doing the work?

I hear some classmates say things like "oh, if I can't make it in banking, I'll just work at Google." This makes sense if you intend to work in a Finance or Corporate Development capacity at a Big Tech corporation. At that point, you are so removed from what is actually happening on the ground that your job doesn't require that expertise (or so I am told). After all, Corporate Development is predominantly a function of Finance departments, and (allegedly) Finance people don't need to know how engineering or research works to drive the next pivotal step for an organization.

Which is why we get incredibly stupid decisions like hype-culture investments and hype-driven growth and acquisitions. I was employed at IBM when they decided they needed a blockchain team (I know this because that team used to share office desks with mine). I cannot tell you what possessed IBM to do this, as the decision was clearly made many paygrades above my level. The team was unceremoniously divested/dissolved within a few months. All of those people went on to other web3 jobs, contributing to hideous ape NFTs or whatever.

But I think we're watching that wave of stupidity engulf AI development and engineering now. Because OpenAI has somehow played the opening gambit of launching ChatGPT for everyone and their dogs, every other tech company (including Apple, the one that made the device this is being written on) is trying to do "generative AI." Significant engineering effort and investment is being redirected toward an "AI-based" future because everyone has decided that this technology is the future. But does that mean we all have to compete with GPT? Does that mean we all have to be doing the same thing just because someone else is doing it, and has made a significant impact by doing it (reasonably) well?

Like, instead of looking at TechCrunch, why aren't we trying to engage more with the actual whitepapers and research? Because we're losing the ability to look at information from primary sources. We're being taught to generalize, to stuff extremely nuanced details into frameworks, when those details can have outsized effects in the aggregate. I cannot be the only person at business school who has heard of Lorenz's chaos theory and how large drifts in the Big Picture start with small deviations in the Small Picture.
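For the curious, here is a minimal Python sketch of that idea. It is not from any coursework, just the textbook Lorenz system with its standard parameters and a crude Euler integrator I picked for brevity: two trajectories that start a hair's width apart end up nowhere near each other.

```python
# Sensitive dependence on initial conditions in the Lorenz system.
# Standard parameters (sigma=10, rho=28, beta=8/3) and a simple Euler
# step, both chosen purely for illustration.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])  # a "Small Picture" deviation

for step in range(1, 4001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 800 == 0:
        print(f"t = {step * 0.01:4.1f}, separation = {np.linalg.norm(a - b):.2e}")
```

The gap between the two trajectories grows by orders of magnitude well before the loop ends, which is exactly the point: the small details the frameworks tell us to round away are the ones that decide where the system ends up.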

And I hate that we are being forced to surrender nuance for the sake of "understanding" rapidly and then immediately thinking about generating revenue. This is common even outside of Wharton, where having a quick, slick, smart, soundbite-y opinion gets you more "clout" than actually considering real data on the ground. Many external observers of Google note that the engineering culture is no longer the same. Those within (and apparently without) speak of a rush to move everything to AI and label everything as AI, at the cost of good engineering practices. Google is just one animal that I know really well; I'm sure the others aren't all that different. I'm sure there are corporate-level claims that every organization, including the one that makes paper clips, is now an AI organization.

As I mentioned before, some of my classmates are operating under the assumption that simply having the Wharton name can open doors. I don't feel confident enough to test that assumption, because I have always felt that flashing a credential means you somehow need to prove why you have it. Maybe it's a form of imposter syndrome. But I stand by the belief that we're not going to get through those doors (if any appear) without putting in the time and skills to learn and practice first-principles problem-solving.
