Write less code, be more responsible (orhun.dev)
155 points by orhunp_ 22 hours ago | 106 comments



I’m working as a solo developer of a tiny video game. I’m writing it in C with raylib. No coding assistants, no agents, not even a language server.

I only work on it for a few hours during the week. And it’s progressing at a reasonable pace that I’m happy with. I got cross-compilation from Linux to Windows going early on in a couple of hours. Wasn’t that hard.

I’ve had to rework parts of the code as I’ve progressed. I’ve had to live with decisions I made early on. It’s code. It’s fine.

I don’t really understand the “more, better, faster” cachet, to be honest. Writing the code hasn’t been the bottleneck to developing software for a long time. It’s usually the thinking that takes most of the time, and if that goes away, well… I dunno, that’s weird. I will understand it even less.

Agree with writing less code though. The economics of throwing out 37k lines of code a week is… stupid in the extreme. If we got paid by the line, we could’ve optimized for this long before LLMs were invented. It’s not like more lines of code means more inventory to sell. It’s usually the opposite: more bugs to fix, more frustrated customers, higher churn of exhausted developers.


>I don’t really understand the “more, better, faster” cachet, to be honest. Writing the code hasn’t been the bottleneck to developing software for a long time. It’s usually the thinking that takes most of the time, and if that goes away, well… I dunno, that’s weird. I will understand it even less.

This is what I've always found confusing as well about this push for AI. The act of typing isn't the hard part - it's understanding what's going on, and why you're doing it. Using AI to generate code is only faster if you try to skip that step - which leads to an inevitable disaster.


> The act of typing isn't the hard part - it's understanding what's going on, and why you're doing it. Using AI to generate code is only faster if you try to skip that step - which leads to an inevitable disaster

It’s more than just typing though. A simple example: remembering the exact incantation of CSS classes to style something that you can easily describe in plain English.

Yes, you could look them up or maybe even memorize them. But there’s no way you can make wholesale changes to a layout faster than a machine.

It lowers the cost for experimentation. A whole series of “what if this was…” can be answered with an implementation in minutes. Not a whole afternoon on one idea that you feel a sunk cost to keep.


> It’s more than just typing though. A simple example: remembering the exact incantation of CSS classes to style something that you can easily describe in plain English.

Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.


imo a question is, do you still need to understand the codebase? What if that process changes and the language you’re reading is a natural one instead of code?

> What if that process changes and the language you’re reading is a natural one instead of code?

Okay, when that happens, then sure, you don't need to understand the codebase.

I have not seen any evidence that that is currently the case, so my observation that "Continue letting the LLM write your code for you, and soon you won't be able to spot errors in its output" is still applicable today.

When the situation changes, then we can ask if it is really that important to understand the code. Until that happens, you still need to understand the code.


The same logic applies to your statement:

> Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.

Okay, when that happens, then sure, you'll have a problem.

I have not seen any evidence that that is currently the case, i.e. I have no problems correcting LLM output when needed.

When the situation changes, then we can talk about pulling back on LLM usage.

And the crucial point is: me.

I'm not saying that everyone who uses LLMs to generate code will avoid ending up "not able to use LLM generated code".

I now generate 90% of the code with an LLM and I see no issues so far. Just implementing features faster. Fixing bugs faster.


You do have a point but as the sibling comment pointed out, the negative eventuality you are describing also has not happened for many devs.

I quite enjoy being much more of an architect than I could be for 90% of my career so far (24 years in total). I have coded my fingers and eyes out, and I spot idiocies in LLM output, from the trivially easy to catch to those needing an hour of careful review.

So, I don't see the "soon" in your statement happening, ahem, anytime soon for me, and for many others.


What happens when your LLM of choice goes on an infinite loop failing to solve a problem?

What happens when your LLM provider goes down during an incident?

What happens when you have an incident on a distributed system so complex that no LLM can maintain a good enough understanding of the system as a whole in a single session to spot the problem?

What happens when the LLM providers stop offering loss leader subscriptions?


AFAIK everything I use has timeouts, retries, and some way of throwing up its hands and turning things back to me.

I use several providers interchangeably.

I stay away from overly complex distributed systems and use the simplest thing possible.

I plan to wait for some guys in China to train a model on traces that I can run locally, benefitting from their national “diffusion” strategy and lack of access to bleeding-edge chips.

I’m not worried.


> What if that process changes and the language you’re reading is a natural one instead of code?

Natural language is not a good way to specify computer systems. This is a lesson we seem doomed to forget again and again. It's the curse of our profession: nobody wants to learn anything if it gets in the way of the latest fad. There's already a historical problem in software engineering: the people asking for stuff use plain language, and there's a need to convert it to a formal spec, and this takes time and is error-prone. But it seems we are introducing a whole new layer of lossy interpretation to the whole mess, and we're doing this happily and open-eyed because fuck the lessons of software engineering.

I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.


> But it seems we are introducing a whole new layer of lossy interpretation to the whole mess (...)

I recommend you get acquainted with LLMs and code assistants, because a few of your assertions are outright wrong. Take for example any of the mainstream spec-driven development frameworks. All they do is walk you through the SRS process using a set of system prompts to generate a set of documents featuring usecases, functional requirements, and refined tasks in the form of an actionable plan.

Then you feed that plan to a LLM assistant and your feature is implemented.

I seriously recommend you check it out. This process is far more structured and thought through than any feature work that your average SDE ever does.


> I recommend you get acquainted with LLMs and code assistants

I use them daily, thanks for your condescension.

> I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.

Did you read this part of my comment?

> Take for example any of the mainstream spec-driven development frameworks. All they do is walk you through the SRS process using a set of system prompts to generate a set of documents featuring usecases, functional requirements, and refined tasks in the form of an actionable plan.

I'm not criticizing spec-driven development frameworks, but how battle-tested are they? Does it remove the inherent ambiguity in natural language? And do you believe this is how most people are vibe-coding, anyway?


> Did you read this part of my comment?

Yes, and your comment contrasts heavily with the reality of using LLMs as code assistants, as conveyed in comments such as "a whole new layer of lossy interpretation". This is profoundly wrong, even if you use LLMs naively.

I repeat: LLM assistants have been used to walk users through software requirements specification processes that not only document exactly what usecases and functional requirements your project must adhere to, but also create tasks and implement them.

The deliverable is both a thorough documentation of all requirements considered up until that point and the actual features being delivered.

To drive the point home, even Microsoft of all companies provides this sort of framework. This isn't an arcane, obscure tool. This is as mainstream as it can be.

> I'm not criticizing spec-driven development frameworks, but how battle-tested are they?

I really recommend you get acquainted with this class of tools, because your question is in the "not even wrong" territory. Again, the purpose of these tools is to walk developers through a software requirements specification process. All these frameworks do is put together system prompts to help you write down exactly what you want to do, break it down into tasks, and then resume the regular plan+agent execution flow.

What do you think "battle tested" means in this topic? Check if writing requirements specifications is something worth pursuing?

I repeat: LLM assistants lower the cost of formal approaches to the software development lifecycle by orders of magnitude, to the point you can drive each and every single task with a formal SRS doc. This isn't theoretical, it's months-old stuff. The focus right now is to remove human intervention from the SRS process as well, with the help of agents.


> Yes, and your comment contrasts heavily with the reality of using LLMs as code assistants, as conveyed in comments such as "a whole new layer of lossy interpretation". This is profoundly wrong, even if you use LLMs naively.

Most people, when told they sound condescending, try to reframe their argument in order to remove this and become more convincing.

Sadly, you chose to double down instead. Not worth pursuing.

> This isn't theoretical, it's months-old stuff

Hahaha! "Months old stuff"!

Disengaging from this conversation. Over and out.


That's a bold assertion without any proof.

It also means you're so helpless as a developer that you could never debug another person's code, because how would you recognize the errors, you haven't made them yourself.


> It lowers the cost for experimentation. A whole series of “what if this was…”

Anecdotal, but I've noticed that while this is true, it also adds the danger of not knowing when to stop.

Early on I would take forever trying to get something to match exactly what's in my head, which meant I would spend more time in one sitting than if I had built it by hand.

Now I try to time box with the mindset "good enough".


> But there’s no way you can make wholesale changes to a layout faster than a machine.

You lost me here. I can make changes very quickly once I understand both the problem and the solution I want to go with. Modifying text is quite easy. I spend very little time doing it as a developer.


This is not correct. CSS is the style rules for all rendering situations of that HTML, not just your single requirement that it "looks about right" in your narrow set of test cases.

Nobody writing production CSS for a serious web page can avoid rewriting it. Nobody is memorizing anything. It's deeply intertwined with the requirements as they change. You will eventually be forced to review every line of it carefully as each new test is added or when the HTML is changed. No AI is doing that level of testing or has the training data to provide those answers.

It sounds like you're better off not using a web page at all if this bothers you. This isn't a deficiency of CSS. It's the main feature. It's designed to provide tools that can cover all cases.

If you only have one rendering case, you want an image. If you want to skip the code, you can just not write code. Create a mockup of images and hand it off to your web devs.


Eh, I've written so much CSS and I hate it so much that I use AI to write it now - not because it's faster or better at doing so, but just so I don't need to do it.

So AI is good for CSS? That’s fine, I always hated CSS.

Don't worry. In a few years we'll be like the COBOL programmers who still understand how things work, our brains haven't atrophied, and we make good money fixing the giant messes created by others.

Sounds awful. I'm not interested in fixing giant messes. I'll just be tinkering away making little things (at scale) where the scope is very constrained and the fixing isn't needed.

People can do their vibecoding to make weird rehackings of stuff I did, almost always to make it more mainstream, limited, and boring, and usually to some mainstream acclaim. And they can flame out, not my problem.

I'm not fixing anybody's giant mess. I'm doing the equivalent of simply refusing to give up COBOL. To stop me, people will have to EOL a huge amount of working useful stuff for no good reason and replace it with untrustworthy garbage.

I am aware this is exactly the plan on so many levels. Bring it. I don't think it's going to be popular, or rather: I think only at this historical moment can you get away with that and not immediately be called on it, as a charlatan.

When our grandest celebrity charlatans go in the bin, the time for vibecoding will truly be over.


AI doesn't just type code for you. It can assist with almost every part of software development: design, bug hunting, code review, prototyping, testing.

It can even create a giant ball of mud ten times faster than you can.

A Luddite farm worker can assist in all those things; the question is, can they assist in a useful manner?

Not only can it, but it does.

Just as I was reading this, Claude implemented drag & drop of images out of SumatraPDF.

I asked:

> implement dragging out images; if we initiate drag action and the element under cursor is an image, allow dragging out the image and dropping on other applications

then it didn't quite work, so I followed up:

> I'm testing it by trying to drop on a web application that accepts dropped images from file system but it doesn't work for that

Here's the result: https://github.com/sumatrapdfreader/sumatrapdf/commit/58d9a4...

It took me less than 15 mins, with testing.

Now you tell me:

1. Can a farm worker do that?

2. Can you improve this code in a meaningful way? If you were doing a code review, what would you ask to be changed?

3. How long would it take you to type this code?

Here's what I think: No. No. Much longer.


The code is really bad, so I'd have a lot to say about it in a review. Couldn't do it in 15 minutes, though.

Why is it using a temp file? Is there really no more elegant way to pass around pointers to images than spilling to disk?

Of course there is, but slop generators be slopping

What is it, o wise person stingy with the information?

I admire you for what you've created wrt Sumatra. It's an excellent piece of software. But, as a matter of principle, I refuse to knowingly contribute to codebases using AI to generate code, including drive-by hints, suggestions, etc.

You, or rather Claude, are not the first to solve this problem and there are examples of better solutions out there. Since you're willing to let Claude regurgitate other people's work, feel free to look it up yourself or have Claude do it for you.


It always seemed to me like it's lootbox behavior. Highly addictive for the dopamine hit you get.

"This is what I've always found confusing as well about this push for AI."

I think it's a few things converging. One is that software developers have become more expensive for US corporations for several reasons and blaming layoffs on a third party is for some reason more palatable to a lot of people.

Another is that a lot of decision makers are pretty mediocre thinkers and know very little about the people they rule over, so they actually believe that machines will be able to automate what software developers do rather than what these decision makers do.

Then there's the ever-present allure of the promise that middle managers will somehow wrestle control over software crafts from the nerds, i.e. what has underpinned low-code business solutions for ages and always, always comes with very expensive consultants, commonly software developers, on the side.


> This is what I've always found confusing as well about this push for AI.

They want you to pay for their tokens at their casino and rack up a 5-6 figure bill.


> This is what I've always found confusing as well about this push for AI. The act of typing isn't the hard part - its understanding what's going on, and why you're doing it.

This is a very superficial and simplistic analysis of the whole domain. Programmers don't "type". They apply changes to the code. Pressing buttons on a keyboard is not the bottleneck. If that were the case, code completion and templating would have been a revolutionary, world-changing development in the field.

The difficult part is understanding what to do and how to do it, and why. It turns out LLMs can handle all these types of tasks. You are onboarding onto a new project? Hit an LLM assistant with /explain. You want to implement a feature that matches a specific requirement? You hit your LLM assistant with /plan followed by apply. You want to cover some code with tests? You hit your LLM assistant with /tests.

In the end you review the result, and do with it whatever you want. Some even feel confident enough to YOLO the output of the LLM.

So while you still try to navigate through files, others already have features out.


I've been developing a moderately popular (for an indie) game for over 4 years at this point (full time). C++, SFML, SQLite. Same as you: no coding assistants, no agents, etc. I also don't use git. [1]

One of the largest speedups is from how much of the codebase I can keep in my head. Because I started from an empty C++ file, the engine reflects how I reason about and organize concepts (lossless compression). Thus most of the codebase is in my brain's RAM.

I don't see how LLM agents are going to improve my productivity in the long run. The less a person understands their code (organized logic), the more abstracted the conversation is going to become when directing an agent. The higher up the abstraction ladder you go, the less distinct your product becomes.

[1] And very, very rarely have I wished I had it for a moment. Not using git simplifies abstracted parts of development. No branches, no ballooning of conceptual tangents, etc. Focus on one thing at a time. Daily backups and a log of what I worked on for the day suffice should I need to revisit/remember earlier changes. I've never been in a situation where a change I made over a week ago interfered with today's work.


I definitely feel like understanding the system is a big part of what makes it relatively easy to maintain/understand.

My game is just a spare-time project while I'm looking for work, and the scope is small so that I can finish it, release it, and start working on the next one. I'm not trying to build an engine or anything. Just a game. Not even the best game I can make.

I can iterate on it fast because I know the structures. I can refactor it fast because I've built up an intuition over time for a process that keeps code amenable to change. I know I'm not going to make the right decisions at the start, so I avoid committing to generalizations, etc.

Editing code is pretty fast for me. Again, years working with a particular setup. I still have expandable snippets, multi-cursor editing and a host of macros for common editing motions.

Checking changes... pretty fast. I'm getting to the point where I might invest in using dynamic reloading for my in-development builds. I suspect it will take a few hours to do at most. Not a big deal. For now I have a basic system that just watches for file changes and recompiles/re-runs the program.

In a different context, working on a team in a large multi-million-line codebase... I dunno what other people find it's like, but I've never found it terribly slow to write/edit code or ship features. I can usually knock most tasks out at a reasonable pace, especially as my familiarity with the area of the code I'm working in increases. I usually find my priorities shift with the demands of users, the business, etc. Sometimes I work on shipping new features quickly. Other times it's making sure we ship the right things, done well, so that we don't leak PII.

Either way... actually writing the code isn't the slowest part, in my experience. It's all the other stuff: the meetings, the design, maintenance, documentation, understanding the problem domain, actually shipping the code to production, etc that takes up the most time for me.


“more, better, faster,”

I have heard these words, almost verbatim, from manager-yes-men coming from a FAANG background, and surprisingly concentrated in a certain demographic (if someone finds this offensive, I'll remove this part).

My CTO wants us to "deliver as fast as possible", and my VP wants us to "go much faster, and more ownership". "Better" or anything related to quality was mentioned too, but always in second place.

To this day, I consider these yes-men to be a major red flag, so I always tried to probe for such information during interviews.


Honestly I think you can tell pretty quickly if a company or person is approaching AI from the viewpoint of accelerating development and innovation or just looking to do the same amount of work with fewer people. The space has been flooded by mean-spirited people who love the idea of developers becoming obsolete, which is a viewpoint that isn't working out for a lot of companies right now... many are already scrambling to rehire. Approaching the situation practically, integrating AI as a tool and accelerator, is the much smarter way and if done right will pay for itself anyway.

Those mean spirited people are actually capitalists and they've been chasing the dream of perpetual labor since the 1800s.

This:

> I don’t really understand the, “more, better, faster,” cachet to be honest

And this:

> I’m working as a single solo developer

...I believe explain it all here. You likely are not beholden to PMs, CEOs and the like. Of course you can go at your own pace. I am actually puzzled that you don't understand that aspect yourself.

> The economics of throwing out 37k lines of code a week is… stupid in the extreme

Again, bosses. CEOs have 14 calls a week with potential prospects and sometimes want demos, sometimes they sign quickly and want a prototype, and sometimes they arrange a collab with a friend or family. Then 3 weeks later the whole thing falls apart and you have to throw it away because it's getting in the way of delivering what actually still pays the bills.

I am not the CEO. I try to make his visions come true. I don't get to make the calls on whether 37k lines of code will be quickly churned out and then deleted some weeks later.

I think your comment is overly focused only on the coding/programming aspect of things. We don't exist in a vacuum. May I ask how you make your living? That might shed extra light on your trouble understanding the inevitable churn when writing code for money.

---

All of this does not even mention the fact that I 100% agree that fewer lines of code == less trouble. Code is generally a liability, I believe every mature dev understands that. But often we are not given a choice, so we have to produce more code and periodically compress it / re-architect it (while never making the mistake of asking to be given time to do so, because we never will).


> Writing the code hasn’t been the bottleneck to developing software for a long time. It’s usually the thinking that takes most of the time

Does your coding not involve thinking? And if not, why are you not delighted to have AI take that over? Writing unthinking boilerplate is tedious garbage work.

Today I wanted to address a bug I found on a product I work on. At the intersection of platform migration and backwards compatibility I found some customers getting neither. I used an LLM to research the code paths and ensure that my understanding of the break was correct and what the potential side effects of my proposed fix would be. AI saved me stepping through code for hours to understand the side effects. I asked it for a nice description of the flow and it gave it to me, including the pieces I didn’t really know because I’d never even touched that code before. I could have done this. Would it have been a better use of my time than moving on to the next thing? Probably not. Stepping through function calls in an IDE is not my idea of good “thinking” work. Tracing through glue to understand how a magical property gets injected is a great job for a machine.


>> Writing the code hasn’t been the bottleneck to developing software for a long time. It’s usually the thinking that takes most of the time

> I used an LLM to research the code paths and ensure that my understanding of the break was correct and what the potential side effects of my proposed fix would be.

Using the LLM for understanding is very different to using the LLM for codegen.

You are not really disagreeing with the author here; it's just that for the specific project he is talking about, he already understands it just fine, so the advantage of LLM help in understanding is tiny.


My point is that these are not separate activities. They are drawing a false distinction between thinking and coding and then asserting that code speed doesn’t matter and implying that AI only helps with the coding bit.

None of this is actually true, though. Coding and thinking are often tightly intertwined, as rarely is the coding piece so straightforward that it requires no interesting thought. Coding speed does matter, even if it’s not the primary bottleneck for many things. And AI can be very helpful outside the context of pure coding.


> My point is that these are not separate activities. They are drawing a false distinction between thinking and coding

I agree.

> and implying that AI only helps with the coding bit.

They did imply that. Do you think that AI only helps with the coding bit, helps with the thinking bits, or helps with neither?

> Coding and thinking are often tightly intertwined, as rarely is the coding piece so straightforward that it requires no interesting thought.

I agree with this too.

> Coding speed does matter, even if it’s not the primary bottleneck for many things.

Up to a point, sure. But without AI, we read code once while writing it, we read it again while testing it/finding errors during tests, we read it again during review.

With AI code we read it during review. Maybe.

If AI generates code faster than the time it takes to read it more than once, then it isn't "helping" in terms of sustainability. Churning out code is easy; maintaining that code is not.

> And AI can be very helpful outside the context of pure coding.

Isn't this how the author is using it? Outside the context of pure coding? I admit this is how I use it - to understand some new thing that I have to implement before I implement it.


> Do you think that AI only helps with the coding bit, helps with the thinking bits, or helps with neither?

Both. I’m effectively using AI to generate code and to help me reason through design options.

> If AI generates code faster than the time it takes to read it more than once, then it isn't "helping" in terms of sustainability

This seems rather reductionist, especially as in the scenario you described we went from reading code 3 times to reading it once. If reading the code is actually the bottleneck, then you've described a 3X speed-up.

> Isn't this how the author is using it?

Which author are you referring to? agentultra seems to be not using it at all.


You can use an LLM to write less code too. It just takes more intention. Which is kind of the whole point.

>Writing the code hasn’t been the bottleneck to developing software for a long time.

Then we're doing different things.

I didn't like GitHub so I wrote my own. 60k lines of code later... yes, writing code was the bottleneck, and it has been eliminated. The bottleneck is now design, review, and quality assessments that can't be done trivially.

This isn't even the project I wanted to be doing, the tools that were available were holding me back so I wrote my own. It also consumes a few hours a week.

If you think writing code isn't the bottleneck then you aren't thinking big enough. If you don't WANT to think big enough, that's fine, I also do things for the joy of doing them.


We do different things; I write code for other people to use.

Once we tried shipping features and updates every week, because we could ideate, code, test and deploy that fast.

No user wanted that - product owners and business wanted that, or thought they wanted it, until users came with torches and pitchforks.

Don’t forget there is user adoption and education.

Churning out features no one will use because they don't know about them is useless.


> Writing the code hasn’t been the bottleneck to developing software for a long time.

For who? There's no lack of professional programmers who couldn't clear FizzBuzz now coding up company-sized systems using agents. This is all good as long as agents can stick to the spec/req & code it all up with decent enough abstractions... as the professional approving it is in no position to clue them in on code organization or bugs or edge cases. I think we (as a society) are looking at something akin to a "reproducibility crisis" (software full of Heisenbugs) as such "vibe coded" systems get widely sold & deployed, 'cause the "pros" who excel at this are also good at... selling.


> Writing the code hasn’t been the bottleneck to developing software for a long time.

I see this on HN just so much and I am not sure what this is, almost seems like a political slogan that followers keep repeating.

I had to do some rough math in my head, but in the last 5 years I have been involved with hiring roughly 40 SWEs. Every single one of them was hired because writing the code was THE bottleneck (the only one) and we needed more people to write the code.


If you’ve never read Fred Brooks, I’d recommend it. The aphorism is a bit dated but rings true: you can’t add another developer and make the process go faster. It usually slows teams down.

I’ve seen it time and again: startups move from their market-fit phase into an operational excellence phase on the backing of VC funding and they start hiring a ton of people. Most of those developers are highly educated, specialized people with deep technical skills and they’re often put to work making the boxes more blue or sitting in meetings with PMs for hours. Teams slow down output when you add more people.

You don’t have a quota. It’s not like you’ll have fewer units to sell if you don’t add that 30k lines of code by Friday.

This is knowledge work. The work is understanding problems and knowing how to develop solutions to them. You add more people and you end up adding overhead. Communication, co-ordination, operations overhead.

The real bottlenecks are people and releasing systems into production. Every line of code change is a liability. There’s risk tolerance to manage in order to achieve five-nines.

A well-sized team that has worked together a long time can outperform a massive team any day in my experience.


> they’re often put to work making the boxes more blue or sitting in meetings with PMs for hours

Haha, this is exactly my experience.

I'll never forget the best candidate I ever interviewed - my feedback was to absolutely hire him and put him on the most interesting and challenging problems. They put him in a marketing team tweaking upsell popups. He left after 2 months.


> If you’ve never read Fred Brooks, I’d recommend it. The aphorism is a bit dated but rings true: you can’t add another developer and make the process go faster.

He didn’t say that. He said adding developers to a late project makes it slower, explained why, and even added some charts to illustrate it. The distinction matters.

By your interpretation, no company should have more than a few developers, which is obviously false. You can argue team organization, but that’s not what Brooks was saying, either.

On top of that, parent never said he hired 40 devs for one project at one time. He was talking in general terms, over the course of years, perhaps in multiple companies.

Finally, let me invoke another aphorism: hours of planning can save you weeks of development. Right here you have the bottleneck staring you into the face.

Of course it’s development. And unless you’re in a really dysfunctional environment, most of that development is coding, testing and debugging, where AI can help a lot.


> He didn’t say that.

Actually he did, or something very close to it.

Obviously SOMETIMES you can add more developers to a project to successfully speed it up, but Brooks point was that it can easily also have the opposite effect and slow the project down.

The main reason Brooks gives for this is the extra overhead you've just added to the project in terms of communications, management, etc. In fact, increasing team size always makes the team less efficient - it adds more overhead - and the question is whether the person added brings enough value to offset or overcome this.

Most experienced developers realize this intuitively - always faster to have the smallest team of the best people possible.

Of course some projects are just so huge that a large team is unavoidable, but don't think you are going to get linear speedup by adding more people. A 20 person team will not be twice as fast as a 10 person team. This is the major point of the book, and the reason for the title "the mythical man month". The myth is that men and months can be traded off, such that a "100 man month" project that would take 10 men 10 months could therefore be accomplished in 1 month if you had a team of 100. The team of 100 may in fact take more than 10 months since you just turned a smallish efficient team into a chaotic mess.

Adding an AI "team member" is of course a bit different to adding a human team member, but maybe not that different, and the reason is basically the same - there are negatives as well as positives to adding that new member, and it will only be a net win if the positives outweigh the negatives (extra layers of specifications/guardrails, interaction, babysitting and correction - knowing when context rot has set in and time to abort and reset, etc).

With AI, you are typically interactively "vibe coding", even if in responsible fashion with specifications and guardrails, so the "new guy" isn't working in parallel with you, but is rather taking up all your time, and now his/its prodigious code output needs reviewing by someone, unless you choose to omit that step.


> The aphorism is a bit dated but rings true: you can’t add another developer and make the process go faster. It usually slows teams down.

I've been doing this for 30 years and this is another political slogan of sorts. This is true in every single imaginable job - new people slow you down, until they do not and become part of the well-oiled machine that is hopefully your team. Not sure why people insist on saying this; it is like saying "read this book, it says that the Sun will rise tomorrow morning"

> I’ve seen it time and again: startups move from their market-fit phase into an operational excellence phase on the backing of VC funding and they start hiring a ton of people. Most of those developers are highly educated, specialized people with deep technical skills and they’re often put to work making the boxes more blue or sitting in meetings with PMs for hours. Teams slow down output when you add more people.

I wasn't talking about startups or developers making boxes more blue, I was talking about personal experience. The bottleneck, if you are doing amazing shit and not burning some billionaire's money on some silly "startup", is always the code, which is why we hire developers to write the code. Everything else is just coming up with silly unrelated examples - of course there are people (at every job, again) doing nothing or menial tasks - this isn't what I was talking about.

> You don’t have a quota. It’s not like you’ll have fewer units to sell if you don’t add that 30k lines of code by Friday.

I do have customers that want features that would make their lives easier and are willing to handsomely pay for it, that good enough?

> This is knowledge work. The work is understanding problems and knowing how to develop solutions to them. You add more people and you end up adding overhead. Communication, co-ordination, operations overhead.

This is only on super shitty teams with super shitty co-workers (especially senior ones) and super shitty managers. I feel for the careers in this industry where this is/was the case. A whole lot of people are terrible at their jobs in places like this - a whole lot of people...

> A well-sized team that has worked together a long time can outperform a massive team any day in my experience.

a well-sized team was at one point (well-sized team - 1) and (well-sized team - 2) and (well-sized team - 3), and in the future, if it is the right team, it will be even more amazing as (well-sized team + 1), (well-sized team + 2)


If you’ve heard it a number of times and refuse to consider what people are saying then maybe I can’t help you.

I’m talking from personal experience of well over twenty years as both a developer, and for a while, a manager.

The slow part isn’t writing code.

It’s shipping it. You can have everyone vibe coding until their eyes bleed and you’ve drained their will to live. The slowest part will still be testing, verifying, releasing, and maintaining the ball of technical debt that’s been accumulating. You will still have to figure out what to ship, what to fix, what to rush out and what to hold back until it’s right, etc. The more people you have, the slower that goes, in my experience. AI tools don’t make that part faster.


> If you’ve heard it a number of times and refuse to consider what people are saying then maybe I can’t help you.

When someone says “I’ve heard this a thousand times, but…”, it could be that the person is just stupidly obstinate, but it could also mean that they have a considered opinion that it might benefit you to learn.

“More people slow down projects” is an oversimplified version of the premise in The Mythical Man Month. If that simplistic viewpoint held, Google would employ a grand total of maybe a dozen engineers. What The Mythical Man Month says is that more engineers slow down a project that is already behind. i.e. You can’t fix a late project by adding more people.

This does not mean that the amount of code/features/whatever a team can produce or ship is unrelated to the size of the team or the speed at which they can write code. Those are not statements made in the book.


Sure, I’m not writing a whole critical analysis of TMMM here and am using an aphorism to make a point.

Let’s imagine we’re going to make a new operating system to compete with Linux.

If we have a team of 10 developers we’re probably not going to finish that project in a month.

If we’re going to add 100 developers we’re not going to finish that project in a month.

If we add a thousand developers we’re still not going to finish that project in a month.

But which team should ship first? And keep shipping and release fastest?

My bet would be on the smaller team. The exact number of developers might vary but I know that if you go over a certain threshold it will slow down.

People trying to understand management of software projects like to use analogies to factory lines or building construction to understand the systems and processes that produce software.

Yet it’s not like adding more code per unit of time is adding anything to the process.

Even adding more people to a factory line had diminishing returns in efficiency.

There’s a sweet spot, I find.

As for Google… it’s not a paragon of efficiency from what I hear. Though I don’t work there. I’ve heard stories that it takes a long time to get small changes to production. Maybe someone who does work there could step in and tell us what it’s like.

As a rule though, I find that smaller teams with the right support, can ship faster and deliver higher quality results in the long run.

My sample size isn’t large though. Maybe Windows is like the ultimate operating system that is fast, efficient, and of such high quality because they have so many engineers working on it.


> using an aphorism to make a point.

But your “aphorism” is not true. You made a claim that more developers make a project slower. And you pointed to TMMM in support of that claim.

Now you seem to be saying “I know this isn’t really true, but my point hinges on us pretending it is.”

> Let’s imagine we’re going to make a new operating system to compete with Linux.

This is a nonsensical question. “Would you rather be set up to fail with 10 engineers or 1000?” Your proposed scenario is that it’s not possible to succeed; there is no choice to be made on technical merit.

> But which team should ship first? And keep shipping and release fastest?

Assuming you are referring to shipping after that initial month where we have failed, the clear option is the largest of the teams. A team of 10 will never replicate the Linux kernel in their lifetimes. The Linux kernel has something like 5000 active contributors.

> I’ve heard stories that it takes a long time to get small changes to production.

There are many reasons it’s slow to ship changes in a company like Google. This doesn’t change the fact that no one is building Chrome or Android with a team of ten.


You’re right, I’m not making my point well.

You do need enough people to make complex systems. We can do more together than we can on our own. Linux started out with a small team but it is large today.

It runs against my experience though and I can’t seem to explain why.

My observation in my original post is that I don’t see why writing code is the bottleneck. It can be when you have too much of it but I find all the ancillary things around shipping code takes more time and effort.

Thanks for the discussion!


> It runs against my experience though and I can’t seem to explain why.

Your experiences are probably correct, but incomplete. More engineers on a project do come with more cost. Spinning up a new engineer is a net loss for some time (making the late project later) and output per engineer added (even after ramp up) is not linear. 5000 engineers working on Linux do not produce 5000x as much as Torvalds by himself. But they probably do produce more than 2500 engineers.

> Thanks for the discussion!

You too


> It’s shipping it. You can have everyone vibe coding until their eyes bleed and you’ve drained their will to live. The slowest part will still be testing, verifying, releasing, and maintaining the ball of technical debt that’s been accumulating. You will still have to figure out what to ship, what to fix, what to rush out and what to hold back until it’s right, etc. The more people you have, the slower that goes, in my experience. AI tools don’t make that part faster.

This type of comment is everything that is wrong with our industry. If "shipping it" is an issue, there is a colossal failure throughout the entire organization. My team "shipped" 11 times yesterday, 7 on Monday, 21 on Friday... "shipping" is a non-event if you know what the F you are doing. If you don't, you should learn. If adding more people to help you with the amazing shit you are doing makes you slower, you have a lot of work to do up and down your ladder.


Maybe it's just my luck, but most engineering teams I've worked with that were building some kind of network-facing service in the last 16-some-odd years have tried to implement continuous delivery of one kind or another. It usually started off well, but it ended up being just as slow as the versioned-release system they used before.

It sounds like your team is the exception? Many folks I talk to have similar stories.

I've worked with teams to build out a well-oiled continuous delivery system. With code reviews, integration gating, feature flags, a blue-green deployment process, and all of the fancy o11y tools... we shipped several times a day. And people were still afraid to ship a critical feature on a Friday in case there had to be a roll-back... still a pain.

And all of that took way more time and effort than writing the code in the first place. You could get a feature done in an afternoon and it would take days to get through the merge queue, get through reviews, make it through the integration pipeline and see the light of production. All GenAI had done there was increase the input volume to the slowest part of the system.

People were still figuring out the best way to use LLM tools at that time though. Maybe there are teams who have figured it out. Or else they just stop caring and don't mind sloppy, slow, bloated software that struggles to keep one nine of availability.


When people say code is the bottleneck, they don’t always mean the lack of code: it’s also the accumulation of code, which becomes like plaque clogging your arteries. When you have too many people pumping out too much code, it can be the death of your startup. Startups have failed from writing way too much code.

amazing how many comments in these discussions talk about startups as if that is all there is to it; you're either at a startup or you're a plumber... [mind blown...]

I've worked in two different types of environments - one where what you said is absolutely true (most of my jobs), and another where it's not true and the quote holds up.

The difference, I think is:

- Code factories where everything is moving fast - there's no time to think about how to simplify a problem, just gotta get it done. These companies tended to hire their way out of slowness, which led to more code, more complexity, and more code needed to deal with and resolve edge cases introduced by the complexity. I can count many times I was handed directives to implement something that I knew was far more complex than it had to be, but because of the pressure to move forward it was virtually impossible to push back. Maybe it's the only way they can make the business case work, but IMO it undoubtedly led to far, far more code than would've been necessary if it were possible to consider problems more carefully and if engineers had more autonomy. In these companies also a lot of time was consumed by meetings trying to "sync up" with the 100 other people moving in one direction.

- Smaller shops, open source projects, or indie development where there isn't a rush to get something out the door. Here, it's possible to think through a problem and come up with a solution that reduces code surface area. This was about solving the largest number of problems with the least amount of complexity. Most of my time at this company was spent thinking through how to solve the problem and considering edge cases and exploratory coding, the actual implementation was really quick to write. It really helped that I had a boss who understood and encouraged this, and we were working on safety critical systems. My boss liked to say "you can't birth a baby in less than 9 months just by adding another woman".

I think most of the difference is in team size. A larger team inherently results in more code to do less, because of the n*(n-1)/2 communication overhead [1].

Recently I learned the Navy SEALs saying "Slow is smooth, smooth is fast" which I feel sums up my experience well.

[1] https://en.wikipedia.org/wiki/The_Mythical_Man-Month
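As an aside, the n*(n-1)/2 figure above is just the number of pairwise channels in a team of n people; a few lines of Python (my illustration, not from the comment) show how fast it grows:

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

# Overhead grows quadratically while headcount grows linearly:
for size in (5, 10, 20, 40):
    print(size, channels(size))  # 5→10, 10→45, 20→190, 40→780
```

Doubling the team from 10 to 20 roughly quadruples the channels, which is the intuition behind Brooks's point.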


I think your mind might be blown when you discover a third type of environment. It's neither a small shop of yak-shaving idealists, nor a desperate code factory.

The third environment is a large business maintaining services long term. These services do not change in fundamental ways for well over a decade and they make a shit ton of money, yet the requirements never stop changing in subtle ways for the clients. Bugs pop up constantly, but there's more than enough time to fix them the right way as outlined by their contract where expectations have been corrected over the years. There's no choice to do it any other way. The requirements and deadlines are firm. Reliability is the priority.

These are the stable businesses of the broader working world and they're probably what will remain after AI has driven the tech industry into the ground.


The second environment I was describing fits what you’re describing more than “yak shaving idealists”.

We were working on control systems for large industry that had to work reliably and with minimum intervention. A lot of these systems were being renewed but the plant was often 30+ years old. We were also dealing with quite limited hardware.


HN is a tough crowd for comments like these :) The absolute best work in our industry is this, but people (especially younger people, which is a shame) are chasing FAANGs and shit like that. I have been blessed (also it wasn't by accident, but still blessed) to spend a lot of my career in exactly these kinds of places (minus the bugs popping up constantly :) ).

Even if the entire totem pole of decision makers in a company thinks writing code is the bottleneck, that doesn't make it true that writing code is the bottleneck.

On the extreme end to prove the point, the suits intentionally abstract out reality into neat forecasts and spreadsheet cells.

It's hard for me to think of something concrete that will convince you. Does code map directly to business outcomes in your experience? Because it's overwhelmingly not even remotely true in my experience.

even just "all lines of code are not created equal" tells me there's no direct correlation with business value.


But how much time per week does an SWE actually spend writing code?

another one; this is the 2nd most frequent thing people write here, and I'm not sure how to even approach answering :)

so I’ll do what I was taught in first grade never to do and answer a question with a question - how much time per week does a bricklayer spend laying bricks? They are looking at these new “robots” laying bricks automatically and talking on BrickLayerNews: “man, the brick laying has not been a bottleneck for a long time.”

But to answer your question directly, a lot of time if other people do their job well. Last week I had about 7 hours of meetings, the rest of the time I was coding (so say 35 hours) minus breaks I had to take to stretch and rest my eyes


Interesting! I guess it really varies between jobs, roles, and companies.

That's never been my experience, but I have an odd skill set that mixes design and dev.

I’ve always spent a lot of time planning, designing, thinking, etc.

How detailed are the tickets if you spend all your time coding? You never have to think through architecture, follow up on edge cases the ticket writers didn’t anticipate, help coworkers with their tasks, review code, etc.?


I think this is it: you're a bricklayer. No, the bottleneck for erecting buildings is not bricklaying.

Without taking all the time to write a dissertation to try to convince you (because why would I?), how about we just start with the fact that even zoning laws and demographic analysis precede the laying of the bricks.

is it so unreasonable to think it is not about the laying of the bricks?


I think you comparing software development to brick laying says all anyone needs to hear about your approach to software development.

It's like saying the bottleneck in mathematics is arithmetic.


writing software, if you know what you are doing, is very similar to laying bricks: write the smallest possible functions that do one thing and do it well, and then compose them, like bricks, to make a house (which is what bricklayers do).

comments like this come from places where it is more like a bunch of chefs in an Italian restaurant making spaghetti (code) :)


No, that's a common mechanistic view of building software, but it's not really accurate. Unlike with bricks, the way you arrange your components and subcomponents has an effect on the entire system. It's a complex phenomenon.

Of course your view is quite common especially in the managerial class, and often leads to broken software development practices and the idea that you can just increase output by increasing input. One step away from hiring 9 pregnant women to make a baby in a month.


boy I am glad I had good fortune in my 30 years hacking to not work with people like you :)


It sounds like you just aren't very good at managing teams of programmers tbh. If your bottleneck is producing code, very rarely does hiring more programmers actually help.

this makes just so much sense, I'll manage my programmers to work 37 hours per day, imma try this next week and will let you know how it goes

Try having your programmers work 30 hours a week instead of 40 and measure their output. You might be surprised.

I agree the slogan isn't very true. It's similar to another line of commentary that would suggest soft skills are more important than the hard skills of actually being able to program, i.e. the primary service being paid for.

There is some truth to it, like Brooks' Law (https://en.wikipedia.org/wiki/Brooks's_law) about how adding people to an already late project will just make it later. There are many factors in how long a software engineering task takes beyond pure typing speed, which suggests there are factors beyond code produced per day as well. But some typing has to be done, and some code has to be produced, and those can absolutely be bottlenecks.

Another way of looking at it that I like is Hickey's hierarchy of the problems of programming and their relative costs, from slide 22: https://github.com/matthiasn/talk-transcripts/blob/master/Hi... If you have inherent domain complexity, or a misconception on how to apply programming to a domain, those are 10x worse costs than any day-to-day practice of programming concerns ("the code"), and there's a 10x further reduction for trivialisms like typos.

I think some of it must be cope since so many are in organizations where the more they get promoted the less they program, trending towards (and sometimes reaching) 0. In such an organization sure, code isn't the bottleneck per se, it's a symptom of an underlying cause. The bottleneck would be the bad incentives that get people to schedule incessant unnecessary meetings with as many people as they can to show leadership of stakeholders for promotion doc material, and other questionable things shoved on the best engineers that take them away from engineering. Remove those, and suddenly productivity can go way up, and code produced will go up as well.

I've also always been amused by estimates of what constitutes "good" productivity if you try to quantify it in lines of code. There's a paper from 1994 by Jim Coplien, "Borland Software Craftsmanship: A New Look at Process, Quality, and Productivity". It's summarized in the free book by Richard Gabriel, "Patterns of Software". (https://www.dreamsongs.com/Files/PatternsOfSoftware.pdf pg 135) They were making new spreadsheet software for Windows, and had a group of "very high caliber" professionals, with a core group of 4 people (2 with important prior domain expertise) and then 4 more team members added after a year. "The QPW group, consisting of about eight people, took 31 months to produce a commercial product containing 1 million lines of code. This elapsed time includes the prototypes, but the line count does not. That is, each member of this group produced 1,000 lines of final code per week."

Later on, Coplien was asked "what he thought was a good average for US software productivity", and the answer was "1000 to 2000 non-commentary source lines per programmer per year". Also: "this number was constant for a large in-house program over its 15-year lifetime -- so that original development and maintenance moved at the same pace: slowly". An average of 1k lines a year is 19 lines a week, or about 4 lines a day for a work-week. This was considered acceptable for an average, whereas for an exceptional team you could get 200 a day. Might there not be ways to boost the average from 4 to something like 12 or 20? If your organization is at 4, there is clearly a bottleneck. (For extra context, the QPW group was in C++, and Gabriel notes he had personal experience with several groups demonstrating similar productivity levels. "I watched Lisp programmers produce 1000 lines of Lisp/CLOS code per month per person, which is roughly equivalent to 350 to 1000 lines of C++ code per week." Of course language matters in lines of code comparisons.)
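For what it's worth, the arithmetic in those figures checks out; a quick sketch (the 5-day work-week split is my own assumption):

```python
# Coplien's "good average": 1000 non-comment lines per programmer per year
per_week = 1000 / 52        # about 19 lines per week
per_day = per_week / 5      # just under 4 lines per work day

# The QPW team: ~1M lines, ~8 people, 31 months
weeks = 31 * 52 / 12                    # about 134 weeks
qpw_per_week = 1_000_000 / 8 / weeks    # roughly 930, i.e. ~1000 lines/week
```

So the "exceptional" QPW team really was running at about 50x the industry average by this measure.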


> Writing the code hasn’t been the bottle neck to developing software for a long time

It was!

Pre-2022, people needed developers to build software for them; now, with platforms like Replit and Lovable, people are creating their own tiny software projects, which wasn't easily accessible in the past.

If you say coding wasn't a bottleneck, then indirectly you are also saying you don't need developers. If you need developers, the outcome of their other types of work (thinking, designing based on existing tools, and so on) is actually CODE.


I am a machine learning engineer. I've been in the domain almost 12 years now (different titles and roles).

In my current role (and by no means that is unique), I don't know how to write less code.

Here are the problems I am facing:

- DS generating a lot of code
- Managers who have therapy sessions with Gemini, in which their ideas have been validated
- No governance on DS (you want this package? import it)
- No governance on infrastructure (I spent a couple of months upskilling in a pipeline technology we were using: reading documentation and creating examples, until I became very good at it... just for the whole tech to be ditched)
- Libraries and tools that have bad documentation or are too complex (GCP, for example)

The cognitive overload is immense.

Back few years ago, when I was doing my PhD, immersing in PyTorch and Scipy stack had a huge return on investment. Now, I don't feel it.

So, how do I even write less code? Slowly, I am succumbing to the fact that my tools and methods are inappropriate. I am steadily shifting towards offloading this to Claude and its like.

Is it introducing risks? For sure. It's going to be a disaster at one point. But I don't know what to do. Do I need a better abstraction? Different way to think about it? No clue


I've seen some success teaching data scientists how to write better code. SWE concepts like modularity, testing, and reuse. Things that they normally ignore or choose to throw out the window.

(Disclosure: I'm a corporate trainer)


I appreciate that. I am not a position though to advocate for such a change :)

What is DS?

Data Scientists

> Nowadays many people are pushing AI-assisted code, some of them in a responsible way, some of them not. So... what do we do?

You hold them accountable.

Once upon a time we used to fire people from their jobs for doing things poorly. Perhaps we could return to something approximating this model.


My current take is that AI is helping me experiment much faster. I can get less involved with the parts of an application that matter less and focus more (manually) on the parts that do. I agree with a lot of the sentiment here - even with the best intentions of reviewing every line of AI code, when it works well and I'm working fast on low stakes functionality, that sometimes doesn't happen. This can be offset however by using AI efficiencies to maintain better test coverage than I would by hand (unit and e2e), having documentation updated with assistance and having diagrams maintained to help me review. There are still some annoyances, when the AI struggles with seemingly simple issues, but I think that we all have to admit that programming was difficult, and quality issues existed before AI.

I'm not entirely sure I can trust the opinions of someone on LLMs when their blog is sponsored by an AI company. Am I not simply seeing the opinions that the AI company is paying for?

I think that generally creators being responsible for what they ship applies across the board. That doesn't change because AI has its fingers in it.

Code Complete came out in '93 and even then they acknowledge most of the work around development wasn't actually programming but architecture, requirements, and design.

Sure you can let Claude have a field day and churn out whatever you want but the question is: a) Did you read the diffs and provide the necessary oversight to make sure it actually does what you want properly, b) Is the feature actually useful?

If you've worked on legacy systems you know there's so much garbage floating around that the bar isn't that high generally for code as long as it seems to work. If you read the code and documentation Claude makes thoroughly and aren't blindly accepting every commit there is not really a problem as long as you are responsible and can put your stamp of approval on it. If you are pushing garbage through it doesn't matter if a junior dev, yourself, or Claude wrote it, the problem isn't the code but your CI/CD process.

I think the problem is expectations. I know some devs at 'AI-native' organizations that have Claude do a lot for them. Which is fine, for a lot of boiler plate or standard requests they can now ship 2X code. The problem is the expectation is now that they ship 2X code. I think if you leave timelines relatively the same as pre-AI then having an agent generate, document, refactor, test, and evaluate code with you can lead to a better product.


My repos for personal projects are split in two. One side contains code of better quality than I could write myself. The other side is throwaway vibe-coded shit that works somehow.

This resonates. Smaller codebases are easier to audit, easier to maintain, and usually faster. The best code is the code you don't write.

For various internal tools & other projects, I started using config-only tools and avoiding code as much as possible.

https://avilpage.com/2026/03/config-first-tools.html


I like this. Thanks for sharing.

I think "config first" is an understatement. The more general term here is "data driven".

It's sort of obvious that agents are way better and faster when writing data that can be validated easily against a schema and understood and reviewed in far less time. Data driven also gives you leverage, because it is far easier for a program to produce data than code.

The same applies to humans as well. Sort of ironic that we are now rediscovering and celebrating robust approaches like writing well designed CLIs, data driven programming, actionable error messages and good documentation.

Maybe AI agents are a sort of reality check or even evolutionary pressure that forces us to do the right things.
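To make the "data driven" point concrete, here is a minimal sketch of validating configuration data before anything runs; the schema keys and the `validate` helper are hypothetical, just to illustrate why data is easier to check than code:

```python
# Hypothetical schema: required keys and their expected types
CONFIG_SCHEMA = {
    "name": str,
    "retries": int,
    "endpoints": list,
}

def validate(config: dict, schema: dict) -> list:
    """Return human-readable errors; an empty list means the config is valid."""
    errors = []
    for key, expected in schema.items():
        if key not in config:
            errors.append(f"missing key: {key}")
        elif not isinstance(config[key], expected):
            errors.append(f"{key}: expected {expected.__name__}")
    return errors

# A bad config fails fast with actionable messages, before any
# behavior driven by it ever executes:
errors = validate({"name": "sync-job", "retries": "3"}, CONFIG_SCHEMA)
# two errors: retries has the wrong type, endpoints is missing
```

Nothing equivalent exists for arbitrary generated code: you cannot check it against a schema, you have to read it.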


Yeah, many newbies think all AI-generated code is safe, while it can poison the next-gen AI by being trained on as wrong data.

A similar post with more emphasis on validating changes: https://bower.sh/thinking-slow-writing-fast

After experimenting with various approaches, I arrived at Power Coding (like Power Armor). This requires:

- small codebases (whole thing is injected into context)

- small, fast models (so it's realtime)

- a custom harness (because everything I tried sucks: it takes 10 seconds to load half my program into context instead of just doing it at startup lmao)

The result is interactive, realtime, doesn't break flow (no waiting for "AI compile", small models are very fast now), and most importantly: active, not passive.

I make many small changes. The changes are small, so small models can handle them. The changes are small, so my brain can handle them. I describe what I want, so I am driving. The mental model stays synced continuously.

Life is good.


Good framing. I’d add that “be responsible” extends well beyond code quality - it’s about product responsibility.

AI making code cheaper to produce doesn’t make the decisions around it any cheaper. What to build, for whom, and why — that’s still fully on you. It should free up more time for strategy, user understanding, and saying “no” to things that shouldn’t exist regardless of how easy they are to ship.

The maintainability concern Orhun raises is real, but I think the root cause isn't AI — it's ownership. If you don't understand what was built, you can't evolve it. It's the same failure mode as a PM who doesn't grasp the technical implementation — they end up proposing expensive features that fight the architecture instead of working with it. Eventually, someone has to pay for that disconnect, and it's usually the team.


It was always possible to write large amounts of crappy code if you were motivated or clueless enough (see https://github.com/radian-software/TerrariaClone). It's now just easier, and the consequences less severe, as the agent has code comprehension superpowers and will happily extend your mud ball of a codebase.

There are still consequences, however. Even with an agent, development slows, cost increases, bugs emerge at a higher rate, etc. It's still beneficial to focus on code quality instead of raw output. I don't think this is limited to writing it yourself, mind - but you need to actually have an understanding of what's being generated so you can critique and improve it.

Personally, I've found the accessibility aspect to be the most beneficial. I'm not always writing more code, but I can do much more of it on my phone, just prompting the agent, which has been so freeing. I don't feel this is talked about enough!


> It's something ethical that I don't know the answer to. In my case, it was the guy's first ever open source project and he understandably went for the quickest way of creating an app. While I appreciate their contribution to open source, they should be responsible for the quality of what they put out there.

Pitching this is the exact opposite of the maintainer burden of expectation.

> Sometimes I discover a project that is truly wonderful but visibly vibe-coded. I start using it without any guarantee that the next release won't run rm -rf and wipe my system.

For me this is on you, not the developer.


> So you are saying that the quality of the projects is going down?

The website seems, at the least, to be semi-generated via AI. But I think the statement that the quality of many projects has gone downwards is true.

I am not saying all projects became worse, per se, but if you search for some project these days, you often land on a GitHub page only, or primarily. How is the documentation there? Usually there is a README.md, and some projects have useful documentation, but most of the open source projects I found have incredibly poor documentation. Documentation is not code, so the code could be great, but I am increasingly noticing that even when the code gets better, the documentation gets worse: rarely updated, if at all. Even when you file requests for specific improvements, often there is no response or change, probably because the author just lacks the time anyway.

But I am also seeing that the code gets worse. AI-generated slop is often unreadable and unmaintainable. I have even recently seen AI slop spam on mailing lists - look here:

https://lists.ffmpeg.org/archives/list/ffmpeg-devel@ffmpeg.o...

Michael Niedermayer does not seem to understand why AI slop is a problem; one comment reveals that. I don't read mailing lists myself (I was never able to keep up with the traffic), but I would be pissed to no end if AI spam like that landed in my mailbox and wasted my time. Yet the people who send AI spam don't seem to understand why that is a problem, which is interesting: they suddenly think spam is OK if AI generated it. So the overall trend is that quality goes down more and more; not in all projects, but in many of them.



