Machine learning meets marketing analytics
In the past couple of years, we have been working with a genuinely game-changing technique that combines evolutionary computing with machine learning to produce much more accurate predictive models. It allows us to reach levels of predictive accuracy with ‘white box’ algorithms that were previously unobtainable except through ensemble and other non-transparent methods. For marketing analytics this is especially valuable, because predictions need to be interpretable: the all-important ‘why’ behind data-driven decision making. Marketing Multi-Touch Attribution (MTA) benefits particularly from this technique.
In data science terms, the technique is the application of genetic algorithms to both feature selection and parameter selection. Genetic algorithms are an underutilized early machine learning technique now experiencing something of a revival. The best way to understand the kind of challenge they solve is to think of a genetic algorithm as an intelligent search technique that can find the best model in a universe of many trillions of possibilities.
How genetic algorithms help
To explain why this is necessary at all, it is important to understand the scale of the problem. Before running any machine learner on training data, the data scientist must choose which features to use, how to represent those features as predictors in the model, and which hyperparameters to allow for model configuration. In reality, most machine learning models bake these decisions in through somewhat arbitrary assumptions made early in the modeling process.
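To make that concrete, here is a hypothetical sketch of the kind of search space those early decisions define; every name and value below is illustrative only, not taken from any real project:

```python
# Hypothetical model search space: every combination of these choices is,
# in principle, a different model that could be trained on the same data.
search_space = {
    # which candidate features to include as predictors
    "features": ["channel_spend", "sessions", "recency_days", "device_mix"],
    # how each chosen feature is represented in the model
    "encodings": ["raw", "log_transform", "binned"],
    # which learner family to fit
    "model_types": ["logistic_regression", "decision_tree"],
    # hyperparameters allowed for model configuration
    "hyperparameters": {"max_depth": [2, 4, 8], "regularisation": [0.1, 1.0, 10.0]},
}
```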
Good data scientists seek justification for their feature selection and model configuration in terms of theory and a heuristic understanding of the problem. Far too many simply accept the first model that achieves sufficient accuracy after trying only a few alternatives. While there are statistically grounded methods for feature reduction, very few analysts are even aware of the sheer scale of possible alternative models that could legitimately be trained on the very same data source.
For example, in a dataset with, say, just 50 candidate features, and a single model requiring between 3 and 10 predictors selected from them, there are already over 13 billion different feature combinations that could be modelled. Add in multiple alternative model types, and allow each model to be configured in different ways, and you quickly reach hundreds of trillions of possible models. Many problems have a far wider range of candidate features than this. Finding the guaranteed best model is what data scientists refer to as an ‘NP-hard’ problem: no known efficient algorithm exists, and exhaustive search takes time proportional to the number of possible options. It’s like looking for a needle in the biggest haystack you can imagine.
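The arithmetic is easy to check with a few lines of Python (nothing beyond the standard library; the model-type and configuration multipliers are illustrative):

```python
from math import comb

N_FEATURES = 50

# Number of ways to choose between 3 and 10 predictors from 50 candidates.
n_subsets = sum(comb(N_FEATURES, k) for k in range(3, 11))
print(f"{n_subsets:,}")  # 13,432,734,280 -- over 13 billion feature subsets

# Multiply by, say, 10 model types and 1,000 configurations each, and the
# space already exceeds a hundred trillion candidate models.
print(f"{n_subsets * 10 * 1000:,}")
```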
Enter the genetic algorithm. While a genetic algorithm does not guarantee you will find the optimal solution, it gives you a great chance of getting close without having to try every possible option. It works by generating a ‘population’ of random candidate solutions and then evaluating each of these against objective criteria for desirable model properties, such as predictive accuracy, normally distributed errors, and so on. There is a lot that could be said about getting these objective criteria right, as there are potential pitfalls here, but let’s leave that for another post.
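As an illustration only, here is a minimal sketch of this ‘generate and evaluate’ step applied to feature selection, assuming scikit-learn and a synthetic dataset; the scoring choice and all sizes are placeholders, not our production setup:

```python
import random
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

N_FEATURES = 50
X, y = make_classification(n_samples=500, n_features=N_FEATURES,
                           n_informative=8, random_state=0)

def random_individual(min_k=3, max_k=10):
    """One candidate solution: a random subset of 3-10 feature indices."""
    return frozenset(random.sample(range(N_FEATURES), random.randint(min_k, max_k)))

def fitness(individual):
    """Objective criteria for a candidate: here, cross-validated accuracy of
    a simple 'white box' model trained only on the selected features."""
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, sorted(individual)], y, cv=3).mean()

# Generation zero: a population of random candidate solutions.
population = [random_individual() for _ in range(30)]
```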
The best models in the population are used to seed a new population in a subsequent ‘generation’ of models, and random ‘mutation’ is occasionally applied to change these models and keep the population fresh. The process runs over many generations, and when we run our genetic algorithms we ensure they report each new winner as it appears instead of simply arriving at one final winning solution. The effect is to move towards an optimal solution without merely settling on a ‘local maximum’, i.e. the best solution within a limited range of solutions.
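Continuing the sketch above, a bare-bones evolutionary loop might look like this; the elite size, mutation rate, and generation count are arbitrary placeholders:

```python
def crossover(parent_a, parent_b, min_k=3, max_k=10):
    """Child inherits a random mix of both parents' features."""
    pool = sorted(parent_a | parent_b)
    k = random.randint(min_k, min(max_k, len(pool)))
    return frozenset(random.sample(pool, k))

def mutate(individual, rate=0.2):
    """Occasionally swap one selected feature for an unselected one,
    keeping the population fresh."""
    genes = set(individual)
    if random.random() < rate:
        genes.remove(random.choice(sorted(genes)))
        genes.add(random.choice([f for f in range(N_FEATURES) if f not in genes]))
    return frozenset(genes)

best, best_score = None, -1.0
population_size = 30
for generation in range(20):
    scores = {ind: fitness(ind) for ind in set(population)}
    ranked = sorted(scores, key=scores.get, reverse=True)
    if scores[ranked[0]] > best_score:  # report each new winner as it appears
        best, best_score = ranked[0], scores[ranked[0]]
        print(f"generation {generation}: accuracy {best_score:.3f} "
              f"with features {sorted(best)}")
    elite = ranked[:5]  # the best models seed the next generation
    population = list(elite) + [
        mutate(crossover(random.choice(elite), random.choice(elite)))
        for _ in range(population_size - len(elite))
    ]
```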
Improved predictive accuracy
The difference between a best-guess model and a model derived in this way is simply stunning. Gains of 10-20% in predictive accuracy are not uncommon. For marketing attribution, this means we can produce a model that is validated by its ability to predict on a blind sample, while at the same time being transparent and simple enough to explain how specific marketing interactions work to drive sales. There are numerous other applications: at Metageni, for example, we use different types of genetic algorithm to select data samples that match the characteristics of cross-device matched data, addressing that particular attribution challenge.
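To give a flavour of that validation step, the winning feature subset from the sketch above could be checked against a blind holdout like this (in real work the blind sample would be set aside before the search ever sees the data):

```python
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

cols = sorted(best)  # winning feature subset from the evolutionary search above
X_train, X_blind, y_train, y_blind = train_test_split(
    X[:, cols], y, test_size=0.3, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"blind-sample accuracy: {accuracy_score(y_blind, model.predict(X_blind)):.3f}")

# Because the model is a simple 'white box', its coefficients can be read
# directly to explain how each predictor drives the outcome.
for col, coef in zip(cols, model.coef_[0]):
    print(f"feature {col}: coefficient {coef:+.2f}")
```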
I expect we will hear a lot more about this technique in the next few years, as one of several meta-analytics processes in the toolkit that help scale and optimize analytics across many domains. We are very keen to hear about the experiences of others using genetic or evolutionary approaches in machine learning, so please do get in touch if you are working in this area.
Gabriel Hughes PhD
Can we help unlock the value of your analytics and marketing data? Metageni is a London, UK-based marketing analytics and optimisation company offering support for developing in-house capabilities.
Please email us at [email protected]