This is our story of leveraging cutting-edge Machine Learning Operations (MLOps) methodologies to modernize organizational processes for a leading hospitality firm.
As we know…
In a dynamic business landscape, the travel and hospitality industry is focused on enhancing customer experiences and driving revenue growth. To achieve these objectives, companies are developing Machine Learning (ML) models for applications such as price optimization, market adaptation, customer travel propensity analysis, partnership tier upgrades, revenue forecasting, personalized destination recommendations and brand affinity modeling.
Furthermore, companies are improving operational efficiency through HR analytics, predictive modeling for staff attrition and call center volume forecasting to ensure seamless customer service. Consequently, they seek an integrated approach to managing the end-to-end ML lifecycle.
The challenge for the hospitality firm was…
Addressing the complexities of ML lifecycle management. The client's substantial investments in AI/ML and data infrastructure within the Google Cloud Platform (GCP) environment had produced a diverse repository of models addressing critical aspects of customer strategy. However, as the model inventory expanded, costs escalated across Exploratory Data Analysis (EDA), model building, deployment, training and monitoring.
The client sought a comprehensive solution to efficiently manage the end-to-end ML lifecycle. The objective was clear: Capture core ML capabilities in a repeatable framework, speed up decision-making and prevent decisions from stagnating as the model inventory continued to expand.
In the pursuit of an effective solution…
Our focus crystallized into adopting MLOps to revolutionize the model building, monitoring and implementation processes. The strategic roadmap involved the following key elements:
- Streamlining and automating deployment
- Enabling continuous monitoring of model performance
- Managing the model and experiment registry
- Facilitating continuous model training
- Deciding between batch deployment and API deployment
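To make the monitoring and continuous-training elements of this roadmap concrete, the sketch below shows how a retraining trigger can tie the two together: when a monitored quality metric degrades or input drift exceeds a threshold, the training pipeline is kicked off again. This is a minimal illustration only; the function, metric names and thresholds are hypothetical and not the client's actual configuration.

```python
from dataclasses import dataclass

@dataclass
class MonitoringSnapshot:
    """One observation from the model-monitoring system (illustrative fields)."""
    auc: float          # latest evaluation metric on freshly labelled data
    drift_score: float  # e.g. a PSI-style drift score on serving inputs

def should_retrain(snapshot, min_auc=0.75, max_drift=0.2):
    """Return the reasons (if any) to trigger continuous training.

    Thresholds here are purely illustrative defaults.
    """
    reasons = []
    if snapshot.auc < min_auc:
        reasons.append("metric_degraded")
    if snapshot.drift_score > max_drift:
        reasons.append("feature_drift")
    return reasons  # a non-empty list means: re-run the training pipeline

# A healthy model: no retraining needed.
healthy = should_retrain(MonitoringSnapshot(auc=0.82, drift_score=0.05))
# A drifting model: the training pipeline should be triggered.
drifting = should_retrain(MonitoringSnapshot(auc=0.82, drift_score=0.31))
```

In practice, the returned reasons would be logged alongside the experiment registry entry so each retraining run is auditable.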
The goal was to showcase the benefits of MLOps through a Proof of Concept (PoC) before scaling across all models.
As the strategic partner…
WNS Analytics (WNS’ data, analytics and AI practice) deployed seasoned experts to create a comprehensive high-level schematic, providing a clear overview of operations and showing how information flows between the different systems.
Leveraging the client's existing data and workflow infrastructure on the GCP, we harnessed the advanced capabilities of Vertex AI (a sophisticated ML platform) to create an end-to-end ML workflow. Key aspects of this holistic solution included:
The deployment phase was meticulously managed, leveraging BigQuery ML (BQML) to deliver real-time predictions and streamlined batch processes. Simultaneously, a robust Vertex AI Model Monitoring mechanism was employed to vigilantly watch models for training-serving skew and prediction drift, ensuring ongoing reliability and performance and mitigating potential discrepancies in real-world scenarios.
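Vertex AI Model Monitoring computes skew and drift natively, but the underlying idea can be illustrated with a common drift statistic: the Population Stability Index (PSI), which compares the distribution of predictions (or features) at training time against what the model sees in serving. A PSI above roughly 0.2 is a widely used alarm threshold. The sketch below is our own illustration of the concept, not the managed service's implementation; the synthetic data and threshold are assumptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline ("expected") and a serving ("actual") sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty buckets at a small epsilon to avoid log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.5, 0.1, 10_000)  # predictions at training time
serve_scores = rng.normal(0.6, 0.1, 10_000)  # shifted serving-time predictions
psi = population_stability_index(train_scores, serve_scores)
# A mean shift of one standard deviation pushes PSI well past the 0.2 alarm level.
```

In a managed setup, a monitoring job would compute this kind of statistic on a schedule and raise an alert, which in turn can feed a continuous-training trigger.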
The deployment of MLOps…
Propelled the ML initiative to new heights of efficiency and effectiveness. Key quantifiable advantages for the client included:
- percent improvement in ML development productivity, driven by streamlined workflows powered by the Vertex AI pipeline
- percent cost savings annually