Diseconomies of AI scale

In IBM there's a religion in software that says you have to count K-LOCs, and a K-LOC is a thousand lines of code. How big a project is it? Oh, it's sort of a 10K-LOC project. This is a 20K-LOCer. And this is 50K-LOCs. And IBM wanted to sort of make it the religion about how we got paid. How much money we made off OS/2, how much they did. How many K-LOCs did you do? And we kept trying to convince them - hey, if we have - a developer's got a good idea and he can get something done in 4K-LOCs instead of 20K-LOCs, should we make less money? Because he's made something smaller and faster, less K-LOC. K-LOCs, K-LOCs, that's the methodology. Ugh! Anyway, that always makes my back just crinkle up at the thought of the whole thing. - Steve Ballmer (yes, that guy who held a funeral for the iPhone)

The ability to produce large volumes of code is a solved problem. Just a few words, and any of the LLMs will produce large volumes of code. This ability has led various AI CEOs to declare the end of the humble, overpriced software developer. Funny how making this claim benefits these tech bros' bottom line, but that could be just a coincidence. I'm sure the likes of people who call themselves "angel investors" abide by a strict moral code. I'm sure they never lie to make money.

Despite these same companies paying over €300K for Android developers, the end is nigh for my kind, apparently. AI can write code. Employers will give the likes of Sam Altman lots of money to add to the data-centre firepit, and in turn, they will save loads of money on those pesky software engineers. Right? Right...

My time in the industry has been short (just over 10 years), and yet I have seen this claim many times. Before ChatGPT, there was the "no-code" movement, with tools such as Webflow and Zapier promising to replace expensive software teams. And there were tools before my time, such as Visual Basic and Lotus Notes, that made the same promise.

Yet I'm still here writing code, and not just me but millions of us, a number that is set to keep growing. It's strange that I'm still writing code, because I was told, 4 years ago, that my job would be replaced within 6 months.

In classic Drucker-style management theory, people fall into two categories: cost centres and profit centres. Software Engineers, despite building the product that is sold, are usually seen as a cost. Of course, anyone who has worked in a small company knows that every employee is part of the profit centre: customer service keeps people happy, sales brings more people in, and the product team finds new pain points to solve. Yet despite this blurred line, a large chunk of our upper corporate society just doesn't want to admit that making computers do what we want is hard.

It doesn't stop them from trying, though, and in the face of this effort, a whole new line of work has opened up for me.

Fixing what the AI wrote.

It turns out that making it easier to produce large volumes of code leads to more problems. Who could have seen this coming (aside from every Software Engineer worth their salt)?

It's not just me; many open-source maintainers have spoken about the sheer number of pull requests asking them to accept AI-written code that doesn't align with their projects' goals.

It's this observation that has led me to wonder whether there is a tipping point at which AI-generated code, produced at industrial scale, becomes an example of diseconomies of scale: each marginal unit of code costing more than the value it returns.
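A toy model can make that tipping point concrete. Suppose, purely for illustration (every number here is invented, not measured), that each new K-LOC adds a fixed amount of value while maintenance and coordination cost compounds quadratically with total codebase size. The marginal benefit of one more K-LOC then eventually goes negative:

```python
# Toy model of diseconomies of scale in code volume.
# All parameters are illustrative assumptions, not real data.

def net_benefit(klocs: float,
                value_per_kloc: float = 10.0,
                cost_coefficient: float = 0.1) -> float:
    """Value is linear in size; cost is quadratic, a stand-in for
    review, integration, and bug-fixing effort compounding."""
    return value_per_kloc * klocs - cost_coefficient * klocs ** 2

def marginal_benefit(klocs: float) -> float:
    """Benefit of adding one more K-LOC at the current size."""
    return net_benefit(klocs + 1) - net_benefit(klocs)

# With these made-up parameters, the tipping point sits around 50 K-LOC:
for size in (10, 30, 50, 70):
    print(size, round(marginal_benefit(size), 1))
# 10 7.9
# 30 3.9
# 50 -0.1
# 70 -4.1
```

The exact shape of the cost curve is debatable; the point is only that if cost grows faster than linearly with volume, which is what maintainers drowning in AI-generated pull requests are describing, then producing code faster does not just have diminishing returns, it has negative ones past some scale.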