I think the challenge is largely a matter of scale. Once you have thousands or tens of thousands of people, communication becomes by far the dominant factor in outcomes. You can set up different structures and cultural norms to try to nudge things in the right direction, but there is no org structure that can solve this because local details always matter. The trick is understanding which details matter and why.
Ultimately I think this depends on management competence and judgment. I also think IC leadership is a critical counterweight to the distortions of empire building, which incentivize creating messes. And finally, as a competent individual you need the maturity to recognize the difference between unavoidable but tolerable organizational dysfunction and broken leadership where it's impossible to do good work, and hence time to cut bait and leave.
That's a weird take; he's talking about youthful ambitions. Tenured professor is a pretty standard bar for academic success, and startup is shorthand for building a scalable tech company from scratch. These seem like perfectly cromulent ambitions to me; I'm not sure how insecurity enters into it or why I would care if it did.
This is such a low-effort, learned-helplessness response. Look, there are good CEOs and bad CEOs, and I'm not here to defend them, but one thing you have to understand is that CEO action is an extremely blunt instrument. Of all the problems in an org, the vast majority cannot be directly solved by the CEO; the CEO can only broadly steer the culture in the right direction, while folks down the chain solve problems at their own level. Of course there are tradeoffs in an organization, so not every problem can be solved, but if the folks who understand the details can't propose any kind of solution that doesn't A) require CEO action or B) require every other person to act exactly the way they propose, then they're not really helping.
I understand there are a lot of toxic environments where it's not worth trying to improve things, but a blanket statement pointing at CEOs en masse as the root of all problems is just as stupid and reductive as CEOs who do nothing to empower ICs and learn from front-line expertise.
I have no way of knowing if this is true, but supposedly Musk gets down and dirty at the front line, on the factory floor, every single week trying to solve problems:
This kind of math is nonsense, even as a back-of-the-napkin exercise.
Engineer productivity is not linear, in either time or team size. In fact there may be productivity improvements just from freezing hiring, since adding too many people becomes net negative if the architecture and domain complexity don't support it. Also, writing code is not the bottleneck on value; making the correct changes is what adds value. While AI can accelerate simple and repetitive code production, this could easily add more technical debt and be net negative over time if engineers aren't thinking about the big picture. On the other hand, AI could add a lot of value not directly related to coding, as it can process and "understand" more breadth of information (including code); that can magnify engineer productivity if used thoughtfully, but it may have no direct relationship to coding per se.
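A toy model (my own illustration, not anything from the parent comment) of why team output can go nonlinear: if raw capacity grows linearly with headcount while coordination cost grows with the number of pairwise communication paths, net output peaks and then declines. All the constants here are made up.

```python
def net_output(n, per_engineer=1.0, overhead_per_path=0.01):
    """Net team output: linear capacity minus a quadratic coordination cost.

    overhead_per_path is an invented constant for illustration; real
    coordination costs depend heavily on architecture and team structure.
    """
    paths = n * (n - 1) / 2  # every pair of engineers is a potential channel
    return n * per_engineer - paths * overhead_per_path

# With these made-up constants, output peaks around 100 engineers and
# declines beyond that -- the "hiring freeze improves productivity" regime.
peak = max(range(1, 300), key=net_output)
print(peak, round(net_output(peak), 2))  # → 100 50.5
```

The shape, not the numbers, is the point: past the peak, each additional hire destroys more output through coordination overhead than they add through their own work.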
The point of this announcement is to boost sales, not to save costs. He's selling AI hype. The only thing that matters is whether customers are buying the hype, not the reality of AI's limitations, which I'm sure the board is well aware of.
I imagine if you demonstrate that you can have AI agents as viciously competitive COOs, CFOs, and CTOs, who never need time off, never sleep, it would be something that would set Wall Street on fire.
I'm surprised this hasn't caught on yet... Well, not really I suppose -- for obvious power-tripping reasons.
However, I could see the issue of AI 'hallucinations' being a non-issue in this domain because many in C-Suite positions have been 'hallucinating' for decades.
From there it only follows: (camera pans to board of directors) "What would you say… ya' do here?"
We can have a funding agent provision an agent-based board as well. What's the point of a board that can't react to real-time market information 24x7x365?
The humans who lead companies are rarely held personally responsible (in a legal sense) for decisions they make anyway, so having LLMs at the helm wouldn’t really be that much of a change.
You're talking past my point. I understand there's cynicism at multiple levels.
The cynicism about AI's capabilities is well understood; we're at the peak of the hype cycle. People are selling AI everywhere, but the reality will fall short of the sales pitch in innumerable ways across the board whether that be programmer productivity or anything else.
Then there's the meta-cynicism about the sales pitches of AI, reinforced by CEOs speaking to Wall Street about how AI will enable staff reductions. The rank and file is understandably very angry about this, coupled with the understanding among technical folks that AI is far from being able to replace the function of actual deep-thinking humans.

This is when the temptation to minimize the value of executives and "call their bluff" comes in. But here is where you need a dose of reality. Executives aren't as stupid as you think, and they don't get paid what they do for no reason; despite the bad and anti-humanitarian decisions they make in the name of shareholders, they actually can't be replaced by AI. Both executives and boards understand this, so it's not really a topic of discussion. You are free to disagree with this of course, but at some point it's just toothless wishful thinking.
> Executives aren't as stupid as you think, they don't get paid what they do for no reason
I agree, but I think the reason they get paid what they do isn't because they're highly skilled (not to say they don't have skills, but those skills are mostly good ol' boys networking and the ability to do basic analysis), but rather because they're part of an insular class that protects its own. However, they're expensive and inefficient, and if we're going to practice honest capitalism then the first group to rip that bandaid off and automate away their (mostly) dead weight will actually be competitively superior to the backwards holdouts, and will proceed to dominate the market.
This is of course if we're practicing actual capitalism and not a dressed-up form of neo-feudalism.
> but the reality will fall short of the sales pitch in innumerable ways across the board whether that be programmer productivity or anything else
Remains to be seen. We are with AI where the web was in 1996, when plenty of trusted thought leaders were sagely telling us that the web was just a place for glorified brochures.
The gap between the claimed capability of "AI Agents" and what you're actually able to build with tools like AutoGen and Crew (and presumably AgentForce, it's been a few months since I saw a demo) is the largest gap I have ever seen in the field and I've been working with NLP/Conversational AI professionally since 2016.
Yeah 100%. Honestly style / technique / language consistency are implementation details, it helps with engineer fungibility and ramp up, but it also works against engineers applying local judgement. This is something to briefly consider when starting new services/features, but definitely not something to optimize for in an existing system.
On the other hand, data and logic consistency can be really important, but you still have to pick your battles because it's all tradeoffs. I've done a lot of work in pricing over the decades, and it tends to be an area where the logic is complex and you need consistency across surfaces owned by many teams. At the same time, pricing interacts with so many local features that you don't want to turn pricing libraries/services into god objects, or you start bottlenecking all kinds of tangentially related projects. It's a very tricky balance to get right. My general rule of thumb is to anchor on user impact as the first-order consideration; developer experience is important as a second-order one, but many engineers will over-index on things they are deeply familiar with and not be objective in evaluating the impact/cost to other teams who pay an interaction cost but are not experts in the domain.
A common experience I have had (mostly in the Pacific Northwest) is to implement a feature in a straightforward manner that works with minimal code, for some backlog issue. Then I'm told the PR will be looked at.
A couple days later I am told this is not the way to do X: you must do it with Y. Why Y? Because of historical battles won and lost, not because of a specific characteristic. My PR doesn't work with Y, and making it work would be more complicated... who knows what multiplier of code. Well, that makes it a harder task than your estimate, which is why nobody ever took it up before and everyone was really excited about your low estimate.
How does Y work? Well, it works specifically to prevent features like X. How am I supposed to know how to modify Y in a way that satisfies the invisible soft requirements? Someone more senior takes over my ticket, while I'm assigned unit tests. They end up writing a few hundred lines of code for Y 2.0, then implement X with a copy-paste of a few lines.
I must not be "a good fit". Welcome to the next 6-12 months of not caring about this job at all, while I find another job before my resume starts to look like patchwork.
Challenging people's egos by providing a simpler implementation for something someone says is very hard has been effective at getting old, stagnant issues completed. Unnaturally effective. Of course, those new "right way" features are just as ugly as any existing feature, ensuring the perpetuation of the code complexity: teams continually write themselves into corners they don't want to mess with.
Hard for me to comment definitively here since I don't have the other side of the story, but I will say that I have seen teams operating based on all kinds of assumed constraints, where we lose sight of the ultimate objective of building systems that serve human needs. I've definitely seen cases where the true cost of technical debt is over-represented due to a lack of trust between business stakeholders and engineering, and those kinds of scenarios could definitely lead to this kind of rote engineering policy detached from reality. Without knowledge of your specific company and team I can't offer more specific advice, other than to say that your viewpoint of the big picture sounds reasonable and would resonate in a healthy software company with competent leadership. Your current company may not be that, but rest assured that such companies do exist! Never lose that common-sense grounding, as that way madness lies. Good luck finding a place where your experience and pragmatism are valued and recognized!
Generally, companies filter out candidates who request to look at any measurable amount of source code as part of the process. Larger companies leverage the 6-12 month contract-to-hire. You are still stuck there until you are not.
These topics are common knowledge if you have interviewed in the last 5 to 10 years. I have been working for 25, so I find the attempts by some to redirect the blame misguided.
Yes, you can't directly look at the source code (unless you pick companies that open source a lot). But I was more thinking of trying to develop some proxy metrics that you can measure; the most common being asking the right questions in the interview. But you can also try to look for other tells.
Sure, except the author is an IC, so by definition it's not from middle management.
As a middle manager myself, though, I understand why you jumped to that conclusion, because this is an example of highly efficient behavior in a system of imperfect humans, which all large teams are. Typically in large cross-functional teams, engineers are not incentivized to question product requirements because it's not technically their job. But in practice, the engineers closest to the problem are best positioned to recognize requirement flaws and directly contribute to better solutions. When I got my start in web development this was just the norm, because no one understood the web yet; both traditional designers and human-computer interaction specialists were out of their depth, and product manager wasn't yet a career path people could enter to make big bucks without any technical knowledge, the way it became after the web and mobile revolutions went mainstream and tech became the new finance for the blindly ambitious.