Why AI Writing Code Will Require More People Coding

Welcome to another edition of “In the Minds of Our Analysts.”

At System2, we foster a culture that encourages our team to investigate, write down, and share their perspectives on a wide range of topics. This series gives our analysts a space to showcase their insights.

All opinions expressed by System2 employees and their guests are solely their own and do not reflect the opinions of System2. This post is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of System2 may maintain positions in the securities discussed in this post.

Today’s post was written by David Cheng.


Have you seen the headlines these days? Recently…

Or if you want to see the speech itself, it’s here. Some key bits:

With coding taken care of by AI, humans can instead focus on more valuable expertise like biology, education, manufacturing, or farming

How many grains of salt do I need?

Or a little more?

Why do we listen to what Jensen says about AI and coding?

He’s the CEO of Nvidia, whose chips are a key component of the latest developments in generative AI. Presumably that makes Jensen an expert in AI (maybe) or coding (no). After all, Nvidia’s current valuation is around 3 trillion dollars!

How Nvidia started. Source

Since 1993, Nvidia has made GPUs (graphics processing units), chips specialized for computer graphics computations. Computer graphics boils down to insane amounts of parallel linear algebra done quickly. In 2007, Nvidia released CUDA, a software library that lets you use the GPU for general-purpose linear algebra. By 2011, academics were using CUDA to make impractically slow neural nets practical. Last year, Nvidia announced chips specialized for AI. They’d like you to forget about their specialized chips for crypto mining (2022).
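To make “parallel linear algebra done quickly” concrete, here’s a rough sketch of the workload, written with NumPy on a CPU. (CUDA libraries run the same kind of operation spread across thousands of GPU cores; the matrix sizes here are made up for illustration.)

```python
import numpy as np

# Graphics and neural nets both reduce to operations like this:
# multiply large matrices, apply a simple function element-wise, repeat.
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256))
b = rng.standard_normal((256, 256))

c = a @ b                     # dense matrix multiply: ~256^3 multiply-adds
activated = np.maximum(c, 0)  # element-wise ReLU, trivially parallel

print(activated.shape)  # (256, 256)
```

Every multiply-add in the matrix product is independent of the others, which is exactly why GPUs, built to shade millions of pixels at once, turned out to be a good fit.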

You tell them! Source

They’re certainly in the middle of a lot of exciting AI conversations, and they’re positioned to make a lot of money selling their fancy, fast linear algebra chips. But they make the engine, not the car. Universities, Google, and Facebook figured out how to make the car. OpenAI sold it to the public.

Why you should listen to me

I’ve been writing code, building systems, and running them for almost as long as Nvidia’s been around. I’ve worked on everything from big enterprise systems to phone apps, for everyone from the military to startups. By no means am I a hardware expert (like Nvidia), but I have been an end user (CUDA and OpenCL). I’m not an AI expert either, but I’m a happy user. ChatGPT and Copilot have been helpful for getting boilerplate code out of the way or generating fantastic mock data for testing.
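The mock-data boilerplate these tools are genuinely good at looks something like this (the function name and record fields are made up for illustration):

```python
import random
import string

def make_mock_users(n: int, seed: int = 42) -> list[dict]:
    """Generate n fake user records for testing. Seeded, so runs are repeatable."""
    rng = random.Random(seed)
    users = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        users.append({
            "id": i,
            "username": name,
            "email": f"{name}@example.com",
            "age": rng.randint(18, 90),
        })
    return users

print(len(make_mock_users(5)))  # 5
```

Tedious to write by hand, easy to describe in a prompt, and low-stakes if the tool gets a detail wrong: the sweet spot for today’s AI assistants.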

I’m generally terrified of how wrong non-coders are about how AI will impact coding (see Jensen), and I already see AI coding tools giving people a false sense of capability, because so many people misunderstand what coding is actually about.

What is coding actually about?

When I started, I thought coding was about building software systems. It was a bunch of folks writing, testing, and deploying code. How very wrong I was.

Building software is a big experiment in human organizational behavior. Every project kicks off by establishing a governance structure. We usually call them methodologies. Are we doing Extreme Programming, or Agile with Scrum? Are we doing planning poker? Open source projects are a whole other level, with various roles, codes of conduct, and community processes (see Python’s PEPs). There are older frameworks like CMM (the Capability Maturity Model). Setting up and running a governance model that fits the team and its objectives is a big part of a successful project.

Communication breakdowns. Source

Building software is a constant negotiation between building for today, building for tomorrow, and maintainability. Do we do something quick and dirty now that will require more maintenance and be impossible to build on top of? Or do we take more time to build flexibility or lay the foundation for future work? We call short-sighted calls and mistakes “technical debt” in the industry. Like “real” debt, it accumulates and compounds over time until you pay it down.
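A toy illustration of the trade-off (hypothetical code, but the pattern shows up everywhere): the quick version works on today’s one input file and becomes debt the moment the format shifts.

```python
import csv
import io

# Quick and dirty: works for today's file, silently breaks on anything else.
def parse_price_quick(line: str) -> float:
    return float(line.split(",")[2])  # assumes column 3, no header, no quoting

# Paying down the debt: explicit about the format, fails loudly, easy to extend.
def parse_prices(csv_text: str, column: str = "price") -> list[float]:
    reader = csv.DictReader(io.StringIO(csv_text))
    prices = []
    for line_num, row in enumerate(reader, start=2):
        try:
            prices.append(float(row[column]))
        except (KeyError, TypeError, ValueError) as exc:
            raise ValueError(f"bad {column!r} value on line {line_num}") from exc
    return prices
```

The second version costs more up front; the first costs more every time someone touches it later. That’s the compounding.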

Applies to humans and AI. Source

If you’re successful, the majority of your total cost of ownership will be maintenance and new features. In the worst case, it will be all maintenance. Most software systems don’t run themselves; they require upkeep to stay running and to fix problems as they’re found. Some portion of the development team will be left behind to maintain and fix, while the rest move on to new features or new projects.

It’s scary how much of the modern world actually looks like this. Source

Tying the three points together: why don’t firms fire a big chunk of the team when a product ships? Why does headcount tend to keep growing? On the flip side, how can Elon fire so many folks at X (formerly known as Twitter) and have it keep running? It’s because coding is much more than just writing working code. Systems require people to keep them running, and even more people to build on top of them.

After a few decades of building character through coding, I can tell you that writing working code is usually the fun and cheapest part. You spend the other 90% of the time finding out how good or bad your code truly is: from how happy your end users are, how constipated your coworkers look as you explain things, and how tense your QA team is.

How does AI fit into this?

If Jensen or the pundits are to be believed, it’ll do the fun and small part of coding for you. The rest is up to you.

You can be the one to maintain it, though. AI generates bugs and misses edge cases just like we do. With AI, we can generate more code quickly, which makes it incredibly cheap to generate technical debt. Did you review all the generated code so that you’re aware of the artifacts it leaves behind (memory that isn’t cleaned up, temp files that aren’t deleted, connections that aren’t closed)? When someone asks you to build a dashboard to monitor what’s going on, is one AI going to figure out what the other AI wrote and either instrument it or put together a live dashboard? Or are you going to have to read through a ton of generated code to figure it out?
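These artifacts are the classic resource leaks. A hypothetical sketch of the pattern you’d be reviewing generated code for:

```python
import os
import tempfile

# The kind of code that "works" in a demo but leaves artifacts behind:
def leaky_report(lines: list[str]) -> str:
    f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt")
    f.write("\n".join(lines))
    return f.name  # handle never closed, temp file never deleted

# The reviewed version: the context manager guarantees the handle is
# closed even on errors, and the temp file is removed when we're done.
def tidy_report(lines: list[str]) -> int:
    with tempfile.NamedTemporaryFile("w+", delete=False, suffix=".txt") as f:
        f.write("\n".join(lines))
        f.seek(0)
        size = len(f.read())
    os.unlink(f.name)  # explicit cleanup
    return size
```

Neither version fails a happy-path test, which is exactly why this class of debt slips through when nobody reads the generated code.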

You can also be the one to try to build on top of it. So far, I haven’t seen any AI demo understand an existing code base and bolt features onto it. That job is best done when you can work with someone who wrote the code in the first place, so you can understand the trade-offs made and the areas missed. If they’re gone, productivity can get cut in half, because figuring out someone else’s code is usually twice as hard as writing it. A favorite quote from Brian Kernighan, co-author of the book The C Programming Language:

"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." — Brian W. Kernighan

So AI will take the fun out of code, help you generate technical debt cheaply, and leave you to maintain it and try to build on top of it. In this AI-driven future, we’re going to need a lot more people who can write code.

Wrapping it up

Trying to explain to college students or big corporate types that GPT doesn’t “solve” the need for engineers feels like swimming upstream. I’m sure the next five years will bring broken expectations and GenAI-built piles of terrible systems at a scale that wasn’t humanly possible before. After that, engineers will be in high demand to maintain and build upon the mess. On the upside, here’s something from Jensen that I can agree with…

Or if we’re lucky, this hyperbole-spewing PR genius will be right, and this will kill us all.
