
James Heggs – co-founder of Tech Returners – on how AI might impact platform and cloud engineers

Much has already been said about AI and its effect on industry, not least the tech sector. It will take a few years to fully understand its impact, but one of the areas that most interests me is how platform and cloud engineers are leveraging it, and what the future might hold for these types of roles.

We’ve already seen cloud ops teams adopting AI-style approaches, including using Pulumi’s AI tool to automate infrastructure-as-code generation, and Kubiya using LLMs for ChatOps-style interactions such as pulling logs from a cluster. However, the inevitable question is: where will that leave us humans? I certainly have my own moments of concern, but what about the platform engineers… What will they do? What about their skills… will they still be needed? Is technical leadership even going to be required?
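
To make that idea concrete, here is a minimal sketch – not Pulumi AI’s actual output, just a hypothetical illustration – of the kind of infrastructure-as-code a tool like this might generate from a plain-English prompt such as “create a private bucket for application logs”:

```typescript
import * as aws from "@pulumi/aws";

// A private bucket for application logs – the sort of resource an
// AI assistant might scaffold from a plain-English prompt.
// (Hypothetical example; names and tags are illustrative only.)
const logsBucket = new aws.s3.Bucket("app-logs", {
    tags: {
        environment: "dev",
        managedBy: "pulumi",
    },
});

// Export the bucket name so humans (and other stacks) can reference it.
export const bucketName = logsBucket.id;
```

Generating a snippet like this is the easy part; deciding whether it belongs in your estate at all is where the humans come in.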

All of this can seem like a daunting and somewhat scary prospect, but my tonic is keeping the importance of human relationships front and centre of my mind. Organisations embarking on, navigating through or maintaining a CloudOps transformation will always need humans. Specifically, they’ll need humans with skills in empathy, collaboration and communication if they want to have a really authentic cloud approach.

Most of us will have seen “the machines” writing code, so we might inevitably jump to questioning our own relevance. If you consider that tools like GitHub Copilot or ChatGPT can produce code, then an inevitable next question is: well… I’m paid to do that, so where does this leave me? We are, after all, lead actors in our own play.

We know that machines learn at a much faster rate than we do, and in future they will accumulate knowledge even more rapidly, eclipsing us mere mortals. But is it all doom and gloom? I don’t think so, because where we mere mortals excel is context.

We understand the context in which the code is being applied, and we understand that a certain approach might be good, but only in the right context. We have the emotional agility and creativity to recognise that the optimal way (as defined by a machine) might have a detrimental effect on the customer experience. For example, a machine could decide that a Kubernetes cluster is the optimal way to deploy a service, but if we know our people have no experience with Kubernetes, is that still optimal? Context, again.

To borrow a word from the title of Jonathan Smart’s “Sooner, Safer, Happier” book, we’ll always need the happiness factor. People will need to understand how soon is soon enough and how safe is safe enough, and we’ll always want to arrive there happy. Understanding that context is, right now, a human trait, and one that I think will persist, because sometimes “enough” is just right.