Artificial Intelligence (AI) is certainly a popular buzzword in the industry today and, according to surveys by TM Forum, many companies have now “deployed” AI and are looking to scale. But deploying AI means different things to different organisations, depending on the strategy each has adopted for implementing it.
For some, it means a proof-of-concept (POC) has been successfully executed and a model built using machine learning (ML) algorithms. In the context of telecom customer value management (CVM), for example, this is often a model that predicts churn or decay, or the date and value of a customer’s next recharge.
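As a rough illustration of what such a POC model might look like, here is a minimal churn-prediction sketch using scikit-learn. The feature names and data are entirely hypothetical; a real CVM dataset would contain hundreds of behavioural attributes.

```python
# Minimal churn-prediction sketch (hypothetical schema and toy data).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical subscriber features; invented for illustration only.
df = pd.DataFrame({
    "days_since_last_recharge": [3, 45, 12, 60, 7, 90, 1, 30],
    "avg_recharge_value":       [10, 4, 8, 2, 12, 1, 15, 5],
    "voice_minutes_30d":        [120, 10, 80, 5, 200, 2, 300, 40],
    "churned":                  [0, 1, 0, 1, 0, 1, 0, 1],
})

X, y = df.drop(columns="churned"), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

# Train on historical behaviour, then score how well churners are ranked.
model = GradientBoostingClassifier().fit(X_train, y_train)
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```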
For others, it means the technical capability has been put in place to build, evaluate and execute ML-driven models across multiple use cases – our own AI at Scale platform is a case in point. It can be used to build, evaluate and run both CVM and non-CVM models in production.
Typically, the AI platform will sit on top of an existing big data lake, or incorporate a data fusion layer to capture data from multiple sources (covering real-time and batch, structured and unstructured data).
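To make the data fusion idea concrete, here is a toy sketch of the essential step: reconciling records from heterogeneous sources onto a single subscriber key. All field names and values are hypothetical.

```python
# Toy data-fusion sketch: merging a structured batch extract with
# semi-structured event records onto one subscriber key (all hypothetical).
import json
from io import StringIO
import pandas as pd

# Structured, batch source (e.g. a nightly CRM extract).
crm = pd.read_csv(StringIO(
    "msisdn,plan,tenure_months\n"
    "27820000001,prepaid,14\n"
    "27820000002,postpaid,36\n"
), dtype={"msisdn": str})

# Semi-structured, near-real-time source (e.g. network event logs).
events = [json.loads(line) for line in [
    '{"msisdn": "27820000001", "event": "recharge", "value": 10}',
    '{"msisdn": "27820000001", "event": "data_session", "mb": 250}',
    '{"msisdn": "27820000002", "event": "recharge", "value": 25}',
]]
event_df = pd.json_normalize(events)

# Fuse both views on the subscriber key for downstream feature engineering.
fused = crm.merge(event_df, on="msisdn", how="left")
print(fused)
```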
A fully functioning AI platform provides a graphical user interface (GUI)-based workbench for data scientists to rapidly build models using ML algorithms. It also supports the full lifecycle of model creation and execution: data ingestion, data exploration, feature engineering, model development and evaluation, and deployment and execution in production.
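The lifecycle stages that such a workbench automates can be sketched in a few lines of scikit-learn. This is a simplified illustration on synthetic data, not a depiction of any particular platform.

```python
# Sketch of the lifecycle stages a workbench automates
# (ingestion -> features -> model -> evaluation); data is synthetic.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # ingested raw attributes
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)

pipeline = Pipeline([
    ("features", StandardScaler()),              # feature engineering step
    ("model", LogisticRegression()),             # model development step
])

# Evaluation step: cross-validated AUC before promoting to production.
scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print("Mean AUC:", scores.mean())
```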
But, along with the technology challenges associated with scaling for AI, there are larger implications for the “people” and “processes” that go along with it. Often, our clients will point out that one of the biggest challenges they face is the hiring and retention of key talent in data science. Understandable, really, as data scientists are in a “hot” market. This is why companies will often turn to partners to assist.
In fact, this people-centric factor is not limited to the data science team. In the context of CVM, marketers need to adapt. It is one thing to design and implement campaigns based on business rules created by a marketer: the criteria for the segment, and the offer to make to that segment. It is quite a different thing to be prepared to leave this to a “black box” solution, which is exactly what an ML model is. With an ML-driven model, it is next to impossible to determine exactly why a particular offer is made to a particular customer.
Therefore, a scaling strategy is required that allows confidence to be built gradually: for example, applying the model to a proportion of the base while the traditional business-rules approach is applied to the remainder. As positive results are seen, the proportion of the base that receives model-driven offers can be increased.
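A minimal sketch of such a ramped rollout follows. The deterministic hashing keeps each customer’s assignment stable from one campaign run to the next; the identifiers and the 20% starting share are hypothetical.

```python
# Sketch of a ramped rollout: route a configurable share of the base to the
# ML model and the rest to traditional business rules (names hypothetical).
import hashlib

MODEL_SHARE = 0.20  # start small; increase as confidence in the model grows

def assignment(customer_id: str, model_share: float = MODEL_SHARE) -> str:
    """Deterministically bucket a customer so assignment is stable run to run."""
    bucket = int(hashlib.md5(customer_id.encode()).hexdigest(), 16) % 100
    return "ml_model" if bucket < model_share * 100 else "business_rules"

for cid in ["cust-001", "cust-002", "cust-003", "cust-004", "cust-005"]:
    print(cid, "->", assignment(cid))
```

Once results support it, widening the model’s reach is then a one-line change to the configured share.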
An ancillary consequence of this is that the robustness of the methodology for measuring performance becomes critical: when an improved result is seen, it must be trustworthy. This is why effort is required to ensure the universal control group (UCG) is highly representative of the base. Performance is measured as the difference in outcomes between the universal target group, which receives offers, and the UCG, which is held out.
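In arithmetic terms, the measurement reduces to a difference in a KPI between the two groups, as in this illustrative calculation (all figures invented):

```python
# Sketch of UCG-based measurement: performance is the difference in a KPI
# between the universal target group (UTG) and the UCG. Figures invented.
utg_customers, utg_revenue = 95_000, 1_045_000.0   # received offers
ucg_customers, ucg_revenue = 5_000, 52_500.0       # deliberately held out

utg_arpu = utg_revenue / utg_customers   # 11.00 per customer
ucg_arpu = ucg_revenue / ucg_customers   # 10.50 per customer

# Incremental value per customer attributable to the campaigns; this number
# is only trustworthy if the UCG is genuinely representative of the base.
print(f"Uplift per customer: {utg_arpu - ucg_arpu:.2f}")
```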
The “process” side is equally important. This covers the governance and practices in place to ensure data integrity is maintained, and that data is updated when expected. Critically, it also covers the processes associated with putting a new model into production. This DevOps side is particularly challenging for many organisations because new practices have to be developed. An ML-driven model is not constant: by its very nature, its behaviour shifts as the data it sees in production changes, which is very different from “normal” software deployment. In fact, this is so different that the TM Forum has a Catalyst stream in progress for “AIOps” to develop frameworks for supporting the operations of AI. Engage with this Catalyst programme to keep abreast of the thinking as it develops – and perhaps become a contributor.
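One concrete example of the kind of practice an AIOps framework covers is monitoring a live model for drift. Below is a minimal sketch using the population stability index (PSI), a common drift metric; the distributions and the alert threshold are illustrative assumptions, not a prescription from the Catalyst.

```python
# Sketch of one common AIOps practice: monitoring a production model's score
# distribution for drift with the population stability index (PSI).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between deployment-time and live scores; ~0.2+ is often taken
    as a warning sign worth investigating (rule of thumb, not a standard)."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
train_scores = rng.beta(2, 5, 10_000)          # scores at deployment time
live_scores = rng.beta(2.5, 4.5, 10_000)       # scores a month later
print("PSI:", psi(train_scores, live_scores))  # review/retrain if too high
```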