How Eidos-Montréal uses machine learning to leverage the Voice of the Customer and make better strategic decisions
Training models on a large volume of high quality, unstructured data
Open-source annotation platform not suited to rapid scaling
Ensuring quality of annotations
One centralized, scalable training hub to address all use cases and data types
Advanced quality review workflow to control and scale labeling projects (consensus)
A comprehensive interface to manage projects and track progress
Eidos-Montréal is a Canadian video game developer based in Montréal, Canada. The studio, created in 2007 and counting 400+ employees today, is responsible for developing some of the most popular video games of all time including the Deus Ex series, the recent Marvel’s Guardians of the Galaxy, Shadow of the Tomb Raider, and many more.
The Eidos-Montréal Analytics and Data Science department was created in 2018 with a set of clear and ambitious objectives: to evangelize the value of analytics services and the use of data in making business and design decisions. Today, the team provides invaluable insights into what customers and fans think about the games developed by the studio, helping drive the Eidos-Montréal production support department’s strategy and development for their current and future games.
Marie de Léséleuc, the Data Science and Analytics Director at Eidos-Montréal, created the department of the same name.
To set up the pipeline, the team leveraged a pre-trained model with general text categorization and sentiment capabilities as the base for their machine learning project. They still had work to do to fine-tune the model to the gaming industry and the specific terms their customers use to talk about games, but it is worth noting that they did not have to start from scratch: more and more pre-trained models are readily available online, a clear trend in the ML world. Once their dataset and model were ready, the next phase of the project consisted of training the model to get accurate and consistent results.
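The pre-trained-base-plus-fine-tuning approach can be sketched as follows. The checkpoint name, label set, and use of the Hugging Face `transformers` Trainer API are assumptions for illustration; the article does not say which framework or model the team actually used.

```python
# Sketch of fine-tuning a pre-trained text classifier on labeled reviews.
# Checkpoint, label set, and hyperparameters are illustrative assumptions,
# not Eidos-Montréal's actual setup.

LABELS = ["negative", "neutral", "positive"]

def encode_label(name: str) -> int:
    """Map a sentiment name to the integer id the model trains on."""
    return LABELS.index(name)

def fine_tune(train_texts, train_labels, checkpoint="distilbert-base-uncased"):
    """Not executed here: downloads a checkpoint and needs training time.
    Uses the Hugging Face `transformers` Trainer API."""
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, DataCollatorWithPadding,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=len(LABELS))
    # Tokenize the raw review sentences and attach integer labels.
    ds = Dataset.from_dict({
        "text": train_texts,
        "label": [encode_label(l) for l in train_labels],
    }).map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)
    Trainer(
        model=model,
        args=TrainingArguments(output_dir="checkpoints", num_train_epochs=3),
        train_dataset=ds,
        data_collator=DataCollatorWithPadding(tokenizer),
    ).train()
    return model
```

The key point is the last paragraph of the section above: the base model already encodes general language understanding, so the team only supplies domain-specific labeled examples rather than training from scratch.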
The Data Science team at Eidos-Montréal started collecting thousands of reviews and articles from Steam, Metacritic, and many other specialized websites.
Then, they defined a protocol to classify these reviews and articles. The first data point is the article or review’s topic (which specific aspect of a game is being commented on), and the second is the sentiment associated with it. The team broke topics down into 18 categories, including aesthetics, AI, animation, audio, combat, puzzles, performance, playtime, etc.
For sentiment analysis, they defined three types: positive, neutral, and negative. Then, after a long and thorough analysis of thousands of reviews and articles, they created a list of keywords for each category. These keywords are used to classify any sentence from a review or article, determine which category the sentence belongs to, and identify the sentiment associated with it.
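The keyword protocol can be sketched as a small classifier. The category names, keyword lists, and sentiment lexicons below are invented for illustration; the team’s actual 18-category taxonomy and keyword lists are not published in this article.

```python
# Illustrative keyword-based sentence classification. Categories, keywords,
# and sentiment lexicons are invented examples, not the team's real protocol.

KEYWORDS = {
    "combat": {"fight", "weapon", "boss", "attack"},
    "audio": {"soundtrack", "music", "sound", "voice"},
    "performance": {"fps", "framerate", "lag", "crash"},
}

POSITIVE = {"great", "love", "amazing", "smooth"}
NEGATIVE = {"bad", "hate", "boring", "broken"}

def classify_sentence(sentence: str) -> tuple[str, str]:
    """Return an illustrative (category, sentiment) pair for one sentence."""
    words = set(sentence.lower().split())
    # Pick the category whose keyword list overlaps the sentence the most.
    category = max(KEYWORDS, key=lambda c: len(words & KEYWORDS[c]))
    if not words & KEYWORDS[category]:
        category = "other"
    # Simple lexicon-based sentiment: count positive vs negative cues.
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return category, sentiment

print(classify_sentence("The soundtrack is amazing"))  # ('audio', 'positive')
```

As the next paragraph explains, a pure keyword match like this cannot handle sarcasm or negation, which is exactly why the team needed a trained model rather than rules alone.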
To make their machine learning model accurate, it needed to be trained on thousands of reviews and articles. Furthermore, text, especially the voice of the customer, is complex data to analyze: the model needs to recognize negative words that convey a positive sentiment, identify sarcasm, and handle many other forms of language specific to the gaming industry, countries, and cultures.
At first, the team chose to train their model themselves. They used Doccano, an open-source annotation tool for machine learning practitioners. It enabled them to create the labeled data they needed to start training the model.
To do the annotations, they relied on the goodwill of everyone at Eidos-Montréal. However, relying on internal resources to annotate hundreds, if not thousands, of reviews and articles was simply not scalable, and the data science team had to spend a lot of time reviewing the quality of the annotations.
It appeared quickly that Kili would be able to answer all our needs for our use case. On top of this, the availability of the customer support team made Kili’s integration into our ML workflow painless and quick.
The limitations of the previous solution
Lack of machine-assisted features to scale annotation to thousands of data points
Working on an interface that does not fit the use case
Annotations done in-house instead of by a dedicated workforce
No review workflow or tools to check quality and ensure consistency
No role-based access control
No customer support
This is why they started looking for a more professional approach. After looking at various providers, they decided to run a POC with Kili to test the solution on their use case.
For Eidos-Montréal, the key to the success of their project was the quality of annotations at scale, so their model could provide accurate insights to the different business departments.
Boosting efficiency and scalability with external workforce
The data science team decided to use a professional workforce provided by Kili as part of Kili’s service agreement and partnerships. The labelers were familiar with the game industry’s jargon and were trained directly on the Kili platform.
Checking quality with consensus
With the quality-control tools that Kili offers on its platform, they easily reached consistency at scale. As Nika explains, one of the most important features for them was consensus. By leveraging consensus, they make sure to get consistent labels across all their training datasets. It also helps the labelers improve their skills much faster.
The labelers that Kili provided to support our project are very good. They are familiar with the jargon of the gaming industry, and they understand very well what we expect of them. Also, Kili’s support team made sure to train them on the platform. This accelerated considerably our progress and improved the quality of the annotations.
The consensus feature on Kili’s platform is exactly what we needed. It made a tremendous amount of difference to control the quality of the labelers’ work and get to a level of quality and consistency at a scale that we could not achieve before.
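Kili computes consensus on its platform; the underlying idea can be sketched as a simple majority-agreement check on assets labeled by several annotators. The threshold value below is an arbitrary assumption for illustration, not Kili’s actual metric.

```python
# Illustrative majority-agreement check: when several labelers annotate the
# same asset, accept the label only if enough of them agree. The 0.66
# threshold is an invented example value.
from collections import Counter

def consensus(labels, threshold=0.66):
    """Return (majority_label, agreement_ratio) for one asset; the label is
    None when agreement falls below the threshold, flagging the asset for
    review."""
    top_label, count = Counter(labels).most_common(1)[0]
    agreement = count / len(labels)
    return (top_label if agreement >= threshold else None, agreement)

# Three labelers agree -> the label is accepted.
print(consensus(["positive", "positive", "positive"]))  # ('positive', 1.0)
# Three-way disagreement -> flagged for review (agreement 1/3).
print(consensus(["positive", "neutral", "negative"]))
```

This also explains the skill-improvement effect mentioned above: low-agreement assets surface exactly the cases where labelers interpret the guidelines differently.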
Autonomy to accelerate project delivery
With a small team, being autonomous and dedicating their time to value-adding activities is very important. By leveraging Kili’s API, they can easily add new data to the platform and to the labeling queue, or create new projects.
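A minimal sketch of what such an API integration might look like, using Kili’s Python SDK as I understand it: the `append_many_to_dataset` call and its parameters may differ across SDK versions, so treat those names as assumptions rather than a reference.

```python
# Sketch of pushing new text assets to a Kili labeling project. The `kili`
# package and `append_many_to_dataset` reflect Kili's Python SDK as I
# understand it; exact names and parameters may vary by version.

def build_payload(reviews, prefix="review"):
    """Pair each non-empty review with a unique external id so assets can
    be tracked across the pipeline."""
    contents = [r.strip() for r in reviews if r.strip()]
    external_ids = [f"{prefix}-{i}" for i in range(len(contents))]
    return contents, external_ids

def push_to_kili(contents, ids, project_id):
    """Not executed here: requires real credentials and network access."""
    from kili.client import Kili  # pip install kili

    kili = Kili(api_key="YOUR_API_KEY")  # placeholder credentials
    kili.append_many_to_dataset(
        project_id=project_id,
        content_array=contents,
        external_id_array=ids,
    )

contents, ids = build_payload(["Great combat!", "  ", "Audio is flat."])
print(ids)  # ['review-0', 'review-1']
```

Scripting the ingestion this way is what keeps a small team autonomous: scraped reviews flow into the labeling queue without anyone touching the UI.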
Intuitive interfaces and support if needed
The team also widely credits the easy, intuitive Kili interface for creating and following up on projects without needing any help. The dashboard provides all the information they need: how many annotations were completed, what’s left in the queue, and what is ready for review.
Our understanding of how to ship highly efficient ML models is maturing and Kili plays a great role in this process. With Kili, we are in control of our datasets and the quality of the training we give to our models. The platform also enables us to quickly and easily iterate and ship new projects.
Kili’s documentation is also very helpful. On various occasions, it helped me quickly solve issues I encountered without having to contact the support team.
With such rapid success, the team is rightfully eager to evolve its capabilities. Ultimately, the team’s roadmap is to keep scaling: analyzing more reviews and articles, adding more granularity, and opening new sources, such as more countries and languages. Then, their goal is to build an automated, continuous-improvement flow to train and run their models, all in the cloud for maximum agility, from scraping the data to running the model and providing actionable, valuable insights to the teams.
This initiative is a real success for us. We deliver valuable insights about market trends to our strategy teams. We help marketers better understand what our customers and the market like or dislike about our products and the competition. We support our developers by providing them with direct feedback from our customer base about very specific features in our games.