That's roughly the number of parameters in a 120,000-cell 2G/3G/4G/5G network running single-vendor radio. Not all of these parameters are tunable, of course. Some of them are hardware-related, others are system-configured ... but enough of them are.
For over 8 years, we've been involved in managing those millions of parameters that make up a network's RAN configuration. During that time, things just kept getting more complicated. Networks upgraded from 3G to 4G, implemented single RAN, added twin beam sectors, had to build a layer strategy, dived into carrier aggregation, installed massive MIMO ... and then ... 5G.
Keeping track of RAN configuration
Over time, thousands of rules were implemented to manage an increasing number of optimisation scenarios across different regions (highways, borders, stadiums and many others). It happened so gradually that we hardly noticed at first. And it became harder to keep RAN configuration management software manageable.
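To make the scale of the problem concrete, here is a toy sketch of what a scoped configuration rule can look like. The parameter name and scope keys are invented for illustration (real rule engines are vendor- and operator-specific); the point is that tens of thousands of entries like these, each scoped to a scenario, quickly become hard to reason about.

```python
# Toy illustration of scoped RAN configuration rules. Names and
# values are hypothetical; real systems hold tens of thousands
# of these, scoped to highways, borders, stadiums and so on.
rules = [
    {"scope": {"tech": "4G", "scenario": "highway"},
     "param": "a3Offset", "value": 2},
    {"scope": {"tech": "4G", "scenario": "stadium"},
     "param": "a3Offset", "value": 6},
]

def resolve(cell, rules):
    """Return the parameter values whose scope matches this cell.
    Later rules win on conflict - one of many possible policies,
    and itself a source of surprises at 50,000-rule scale."""
    out = {}
    for r in rules:
        if all(cell.get(k) == v for k, v in r["scope"].items()):
            out[r["param"]] = r["value"]
    return out
```

Even in this tiny form, the conflict-resolution choice ("later rules win") is a policy decision that engineers have to keep in their heads.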
As a user, how does one keep track of 50,000 different rules in a network no matter how good the interface is? What happens to KPIs when a couple of rules are changed, especially if the impact takes a few days to notice? Which of the hundreds of counters are relevant? And how does one deliver all those reports and visualisation requests coming from engineers and managers, especially when they keep changing?
Is this really sustainable?
But first, a little bit of background
It is likely an understatement to say that there have been a lot of changes to Radio Access Networks since the days of 2G. Today, you have higher bandwidth, lower latency, more capacity and much better voice and data quality than ever before.
All great for end users but for mobile network operators, this evolution comes with a price tag called complexity. Old tech doesn't just disappear when new tech like 5G comes in. It hangs around for a while, slowly becoming the thing no one wants to look after ... but has to.
And to add spice to the mix, most 2G/3G/4G/5G networks today are a combination of Ericsson, Huawei, Nokia and ZTE equipment that all work somewhat differently despite broadly conforming to 3GPP specs.
This fractured, multi-vendor topology makes it hard to maintain peaceful coexistence between technologies and to keep costs low.
Making the RAN more efficient
Optimisation has always been a slow burn, needing lots and lots of careful analysis to squeeze out some improvement in a network, or sometimes simply to maintain quality even as conditions change.
This process of monitoring key performance indicators, figuring out if there are coverage, capacity or performance issues, and then adding cells or tuning parameters to "fix" problems has traditionally been labour-intensive and expensive.
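The traditional loop described above can be sketched in a few lines. The KPI names, thresholds and remedies below are invented for illustration, not taken from any real system; the sketch only shows the shape of the monitor-diagnose-act cycle that engineers run by hand.

```python
# A toy sketch of the traditional monitor-diagnose-tune loop.
# KPI names, thresholds and remedies are hypothetical examples.
def diagnose(kpis):
    """Map observed KPIs to suspected problem categories."""
    issues = []
    if kpis.get("rsrp_avg_dbm", 0) < -110:
        issues.append("coverage")       # weak signal, e.g. at cell edge
    if kpis.get("prb_utilisation", 0) > 0.85:
        issues.append("capacity")       # cell running hot
    if kpis.get("drop_rate", 0) > 0.02:
        issues.append("performance")    # too many dropped sessions
    return issues

def propose_actions(issues):
    """Translate issues into candidate remedies for engineer review."""
    remedies = {
        "coverage": "adjust antenna tilt / add cell",
        "capacity": "add carrier / offload to another layer",
        "performance": "tune handover and power parameters",
    }
    return [remedies[i] for i in issues]
```

In practice, of course, each branch of this toy `diagnose` hides days of analysis, which is exactly why the process is labour-intensive and expensive.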
Imagine, then, the promise of optimisation without human involvement, and you have Self-Organising Networks (SON).
Now over 10 years old, SON has had some success, but in limited scope. Most prominent is Automatic Neighbour Relations (ANR) - a tricky but repeatable algorithmic process of refreshing neighbour lists based on User Equipment (UE) measurement reports. One operator we know took well over 9 months to deploy SON ANR, wary of the risk to the network after ANR-related traffic degradation.
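The core of an ANR-style refresh is simple to state, even if it is tricky to deploy safely. Here is a minimal sketch under assumed data shapes (cell IDs and report counts; the thresholds and the cap are invented), not any vendor's actual algorithm:

```python
# Minimal sketch of an ANR-style neighbour list refresh.
# Thresholds, cap and data shapes are illustrative assumptions.
from collections import Counter

def refresh_neighbours(current, reports, add_threshold=50,
                       drop_threshold=5, max_size=32):
    """current: set of neighbour cell IDs for a serving cell.
    reports: list of cell IDs seen in UE measurement reports.
    Returns an updated neighbour set."""
    counts = Counter(reports)
    # Candidates: existing relations plus frequently reported cells.
    candidates = set(current) | {c for c, n in counts.items()
                                 if n >= add_threshold}
    # Drop relations that UEs no longer report often enough.
    kept = {c for c in candidates if counts[c] >= drop_threshold}
    # Cap the list, keeping the most-reported cells.
    return set(sorted(kept, key=lambda c: -counts[c])[:max_size])
```

The deployment risk lies in the pruning step: drop a relation that traffic still depends on and handovers start failing, which is why operators roll ANR out so cautiously.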
As a result and especially over the last 3-4 years, much focus has been on the potential of Artificial Intelligence and Machine Learning (AI/ML) to help operators reduce costs and dramatically improve the quality of the network.
The industry promise with AI/ML
... is that it will solve RAN's planning and optimisation complexity challenge. Ingest all that data, automate and optimise. For some reason, we're captive to the idea that having large amounts of telco data ingested by AI/ML systems will somehow spit out answers to RAN problems.
Vendors, of course, tend to encourage this. Nokia, Huawei and Ericsson have over the years spent large amounts of marketing dollars touting the benefits. Even Intel got involved, commissioning an impressive report from analyst firm Senza Fili's Monica Paolini to argue that the "time is now" for AI & Machine Learning.
What's always been questioned is how real this promise is, or perhaps, how exaggerated.
Is AI/ML for RAN configuration a real alternative to rule-based systems?
A few years ago, one of our colleagues looked at all the RAN data we were crunching every night and asked "Shouldn't it be possible for an AI/ML system to automatically figure out the best parameter values?".
Three years later, despite some technical success, many of our AI/ML projects struggled to justify business value. What these projects did uncover though seemed to contradict a few commonly held beliefs.
- One, we found that using AI/ML actually required MORE rather than LESS involvement by human engineers on the ground.
- Two, that most problems don't really need as much data as one would imagine.
- Three, that real value came as soon as we were able to properly frame a problem.
Over the next few posts, we'll explore these points in detail and outline why we believe an organic approach, where focus is placed on relationships and interaction, is the most potent way to handle RAN configuration complexity.
Walking through a RAN configuration problem