
Ah, the tech industry. The same industry that once worshipped programmers now treats them like relics from an ancient civilisation, like scribes who refuse to accept the printing press. Companies are convinced AI is the answer to everything, and programmers? Well, they’re just expensive, opinionated, and worst of all, human. But here’s the thing – if you think cutting programmers in favour of AI is a genius move, you might want to remember what happened the last time a company fired all its engineers: lawsuits, product failures, and a desperate rehiring spree. But sure, go ahead. Lay them off. You’ll regret it faster than you can say ‘syntax error.’
Let’s break this down properly. Three things are about to happen, and none of them are good for companies that think AI will replace programmers:
Once upon a time, aspiring programmers cut their teeth on real problems – fixing code, breaking systems, and learning from grizzled veterans who’d been through a thousand production crises. They learned how to optimise performance, deal with weird hardware bugs, and – most importantly – how to think like an engineer, not just type words into a compiler.
But with the AI craze, companies aren’t investing in junior developers. Why train people when you can have a model spit out boilerplate? Why mentor young engineers when AI promises to handle everything?
Spoiler alert: this is a terrible idea.
The next generation of programmers will grow up expecting AI to do the hard parts for them. They won’t know why an algorithm is slow, they won’t be able to debug cryptic race conditions (assuming they’ve even heard of the concept), and they certainly won’t know how to build resilient systems that survive real-world chaos. It’s like teaching kids to drive but only letting them use Teslas on Autopilot – one day, the software will fail, and they’ll have no idea how to handle it.
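If ‘cryptic race condition’ sounds abstract, here’s a minimal sketch in Python – a deliberately contrived toy, not anyone’s production code – of the kind of bug that autocomplete never teaches you to smell. The read and the write are two separate steps, so another thread can slip in between them:

```python
import threading
import time

counter = 0  # shared mutable state, no lock anywhere


def racy_increment(n: int) -> None:
    """Classic read-modify-write race on a shared counter."""
    global counter
    for _ in range(n):
        current = counter      # 1. read the shared value
        time.sleep(0)          # 2. yield the GIL – stands in for any real work or I/O
        counter = current + 1  # 3. write back, possibly clobbering another thread's update


threads = [threading.Thread(target=racy_increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Four threads x 10,000 increments should give 40,000, but lost updates
# mean the printed number is almost always lower – and different every run.
print(f"expected 40000, got {counter}")
```

The nasty part is exactly what makes it a teaching moment: the code looks obviously correct, passes any single-threaded test, and only falls over under concurrent load.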
The result? We’ll have a whole wave of programmers who are more like AI operators than real engineers. And when companies realise AI isn’t magic – just a statistical engine predicting the next token in a line of text (prove me wrong on that) – they’ll scramble to find actual programmers who know what they’re doing. Too bad they spent years not hiring them.
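And if ‘predicting the next token’ sounds like an exaggeration, here’s a crude illustration of the core mechanic – a toy bigram model that counts which word follows which and samples the next one. Real LLMs are incomparably larger and cleverer, but the generation loop has this shape (a sketch for intuition only, not how any production model is built):

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus": learn which token tends to follow which.
corpus = "the model predicts the next token and the next token again".split()

follows: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1


def generate(start: str, length: int = 8) -> str:
    """Predict a next token, append it, repeat – that's the whole loop."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)


print(generate("the"))  # e.g. "the next token and the next token again"
```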
Imagine a company that fires its software engineers, replaces them with AI-generated code, and then sits back, expecting everything to just work. This is like firing your entire fire department because you installed more smoke detectors. It’s fine until the first real fire happens.
Let’s say you’re a big fintech company. You fired half your dev team because ‘AI can write code.’ Now, six months later, you realise that your AI-generated software is riddled with security holes. Whoops! Your database is leaking private financial data like a sieve, regulators are breathing down your neck, and customers are fleeing faster than rats from a sinking ship. The AI that wrote your software? It doesn’t care. It doesn’t fix bugs. It doesn’t ‘own’ the problem. It just generates more broken code, like a toddler smashing LEGO bricks together and calling it a house.
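What do those security holes tend to look like? Often something like this – a hypothetical sketch of the classic SQL injection that generated code loves to produce, because the vulnerable version looks perfectly fine in a demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0), ('bob', 250.0)")


def get_balance_unsafe(user: str) -> list:
    # The "works in the demo" version: user input is pasted straight into
    # the SQL string, so crafted input becomes part of the query itself.
    query = f"SELECT user, balance FROM accounts WHERE user = '{user}'"
    return conn.execute(query).fetchall()


def get_balance_safe(user: str) -> list:
    # The boring, correct version: a parameterised query, so the driver
    # treats the input as data, never as SQL.
    return conn.execute(
        "SELECT user, balance FROM accounts WHERE user = ?", (user,)
    ).fetchall()


payload = "alice' OR '1'='1"
print(get_balance_unsafe(payload))  # dumps every row in the table
print(get_balance_safe(payload))    # returns nothing, as it should
```

Both functions pass the happy-path test with a normal username. Only one of them survives contact with a hostile user – and telling them apart is precisely the judgement you just laid off.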
What do you do? You try to rehire the programmers you laid off. But guess what? They’ve moved on. The good ones are at startups or working on their own projects. Some are consulting for obscene rates. And now your company is left with AI-generated spaghetti code and no one to fix it.
Now, let’s talk about the real winners in all this: the programmers who saw the chaos coming and refused to play along. The ones who didn’t take FAANG jobs but instead went deep into systems programming, AI interpretability, or high-performance computing. These are the people who actually understand technology at a level no AI can replicate.
And guess what? They’re about to become very expensive. Companies will soon realise that AI can’t replace experienced engineers. But by then, there will be fewer of them. Many will have started their own businesses, some will be deeply entrenched in niche fields, and others will simply be too busy (or too rich) to care about your failing software department.
Want to hire them back? Hope you have deep pockets and a good amount of luck. The few serious programmers left will charge rates that make executives cry. And even if you do manage to hire them, they won’t stick around to play corporate politics or deal with useless middle managers. They’ll fix your broken systems, invoice you an eye-watering amount, and walk away.
The tech industry is making a massive mistake. By believing AI can replace programmers, it’s killing the very ecosystem that keeps innovation alive. We’re about to enter a world where:
- junior developers never learn to engineer, only to prompt;
- companies sit on mountains of AI-generated code that nobody understands or owns;
- the few experienced engineers left charge whatever they like, and leave when the invoice clears.
But hey, if tech companies really want to dig their own grave, who are we to stop them? The rest of us will be watching from the sidelines, popcorn in hand, as they desperately try to hire back the programmers they so carelessly discarded.
Good luck, tech industry. You’re going to need it.
