Motivation for recommendation solutions
Rapid changes in customer behaviour require businesses to adapt at an ever-increasing pace. The recent changes to our work and personal lives have forced entire nations to work remotely and do all non-essential shopping online. With every challenge in business, there is opportunity on the other side.
According to a TechCrunch report, US e-commerce sales went up by 49% in April compared with the baseline period in early March. This subsequently led to a flood of digital campaigns and initiatives fighting for attention.
We think that businesses that want to stay relevant in such a competitive environment must understand how best to add value for their customers and, more importantly, respect their time.
In a rapidly changing environment it is critical for an organisation to develop situational awareness, which feeds into a business adjustment process. The adjustment process, in this context, is the time an organisation or business requires to react to a change in its underlying market conditions.
One example of an adjustment process framework is the OODA Loop (Observe, Orient, Decide, Act) which was developed by John Boyd—a military strategist whose theories are now widely applied in law enforcement and business.
Boyd suggests that every one of our decision-making processes runs in a recurring cycle of Observe, Orient, Decide, Act. An individual or business capable of moving through this cycle quickly (observing, orienting and deciding rapidly to reach an action with ease) is able to ‘get inside’ a competitor’s cycle and gain a strategic advantage.
OODA Loop for businesses
The entire process begins with the observation of data from the world, environment and situation. For a business, situational awareness means establishing a baseline for its customers through methods such as market analysis and research.
Orient is the moment we take in all the information garnered during the observation phase and begin to process it, typically in the form of dashboards and reports to support business analysis. Within larger organisations, this information tends to be distributed across different systems. Here we typically see the biggest divergence in how quickly a business can produce those reports and metrics to keep up with trend changes.
Decide is the outcome of, and entirely reliant on, your orienting phase. If your orientation lacks information or uses inaccurate information, a potential decision might be ineffective or, worse, work against its purpose. The idea here is to derive a hypothesis on how best to react to the situation. The likely outcomes of certain practices and decisions often overlap with others, so it is important that your previous experience and your orienting phase get you as close as possible to an accurate prediction of your future actions and their impacts.
Act is the final part of the OODA Loop. However, as it is a cycle, any information which you gather from the results of your actions can be used to restart the analytical process. This would include making changes to your business strategy and starting the observation phase again.
Speeding up the loop
For businesses, an efficient loop can provide a competitive advantage over others in the market. If you can think faster than your competitors, you can act before they do.
When organisations automate the collection and analysis of observations, they build a powerful capability to surface relevant products and content to customers. This makes recommender systems a key component of every successful online business. From e-commerce, streaming and news to online advertising, recommender systems are today’s automated OODA loops.
On a general level, recommender systems are algorithms that predict relevant items to users.
Recommender systems are critical for certain online industries and are worth significant revenue to large corporations such as Netflix.
On a high level, there are two classes of recommender systems—collaborative filtering methods and content based methods.
Collaborative filtering methods
Collaborative filtering methods for recommender systems are based solely on the past interactions recorded between the users and items in order to produce new recommendations. Those interactions are typically stored in a user-item interactions matrix.
The main idea is that past user-item interactions are sufficient to detect similar users and/or similar items and make predictions based on these estimated relationships.
Collaborative Filtering User-Item Interactions Matrix
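To make the interactions matrix concrete, here is a minimal sketch using hypothetical ratings (the values and dimensions are made up for illustration):

```python
import numpy as np

# Hypothetical ratings: 4 users x 5 items, 0 means "no interaction yet".
# Rows are users, columns are items -- the user-item interactions matrix.
interactions = np.array([
    [5, 3, 0, 1, 0],
    [4, 0, 0, 1, 0],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
])

n_users, n_items = interactions.shape
print(n_users, n_items)  # 4 users, 5 items

# In practice this matrix is mostly empty, which is why predicting
# the missing entries is the core of collaborative filtering.
sparsity = (interactions == 0).mean()
print(f"{sparsity:.0%} of the matrix is empty")  # 40% of the matrix is empty
```

In real systems the matrix is far larger and far sparser, so it is usually stored in a sparse format rather than as a dense array.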
Collaborative filtering can be distinguished into two main categories—memory based approach and model based approach.
Memory based approach
Memory based collaborative filtering again splits into two subcategories: user-to-user and item-to-item based recommendations.
User-to-user collaborative filtering: Users who are similar to you also liked…
Item-to-item collaborative filtering: Users who liked this item also liked…
The main idea is that we are not learning any parameter using gradient descent (or any other optimisation algorithm). Similar users or items are calculated only by using distance metrics such as Cosine Similarity or Pearson Correlation Coefficients, which are based on arithmetic operations.
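As a sketch of the memory based idea, user-to-user similarity reduces to pure arithmetic over a small hypothetical ratings matrix (no parameters are learned):

```python
import numpy as np

# Hypothetical user-item ratings matrix (rows = users, columns = items).
interactions = np.array([
    [5, 3, 0, 1, 0],
    [4, 0, 0, 1, 0],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
], dtype=float)

def cosine_similarity(a, b):
    # cos(theta) = a . b / (|a| * |b|) -- arithmetic only, nothing is trained
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# User-to-user: which other user is most similar to user 0?
sims = [cosine_similarity(interactions[0], interactions[u]) for u in range(1, 4)]
most_similar = int(np.argmax(sims)) + 1
print(most_similar)  # user 1
```

Items user 1 has interacted with (but user 0 has not) would then become candidate recommendations for user 0; item-to-item filtering applies the same metric to the matrix columns instead of the rows.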
Model based approach
Model based collaborative filtering uses machine learning algorithms to build a representation of the information. Predictions are then generated using the trained models, rather than the distance metrics used in the memory based approach.
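As an illustration of the model based idea (a classic matrix factorisation, not the specific algorithm any given product uses), a truncated SVD learns a low-rank representation of a hypothetical interactions matrix and produces a score for every user-item pair:

```python
import numpy as np

# Hypothetical user-item ratings matrix (rows = users, columns = items).
interactions = np.array([
    [5, 3, 0, 1, 0],
    [4, 0, 0, 1, 0],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
], dtype=float)

# Factorise the matrix, then keep only k latent factors.
U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
k = 2
pred = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# pred now holds a predicted score for every (user, item) pair,
# including pairs with no recorded interaction.
print(pred.shape)  # (4, 5)
```

The unobserved entries with the highest predicted scores become the recommendations for each user.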
The main advantage of collaborative filtering approaches is that they require no information about users or items, so they can be used in many situations. These algorithms improve with the number of interactions recorded over time (increased dataset).
The biggest disadvantage of collaborative filtering is that it only learns from past data points and hence suffers from the cold start problem. It’s impossible to recommend anything new (new content, new items), and items with few user interactions are challenging. In those cases, recommender systems typically use fallback strategies such as random recommendations or most popular items.
Content based methods
On the other hand, content based methods use additional features and information about the users or items to learn a model. A feature in this context simply means information about a user or item. If we consider typical user features such as age, gender or other personal information, we can learn the relationships between user details and their preferences.
The same is true for additional item characteristics which help to understand commonalities or differences between items.
The main idea is building a model (representation) which explains user behaviour (interactions) using additional features data.
Content based methods are less vulnerable to the cold start problem because they can use similarities in user/item characteristics to infer recommendations. This limits the cold start problem to only new users/items which also have new (unknown) features. Using a model to predict recommendations has a cost, which is described as the bias-variance tradeoff.
Without explicitly explaining the implications of bias and variance, the tradeoff can be understood as a compromise between model complexity and data volume. This usually requires more effort in carefully fine tuning your models.
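A minimal sketch of the content based idea, using hypothetical one-hot genre features: a user profile is built from the features of items the user liked, and new items are scored against that profile, so even a brand-new item can be recommended without any interaction history:

```python
import numpy as np

# Hypothetical item features (one-hot genres: action, comedy, drama).
item_features = np.array([
    [1, 0, 0],  # item 0: action
    [0, 1, 0],  # item 1: comedy
    [1, 0, 1],  # item 2: action/drama
], dtype=float)

# Build a user profile from the features of items the user liked (items 0 and 2).
user_profile = item_features[[0, 2]].mean(axis=0)  # [1.0, 0.0, 0.5]

def score(item):
    # Cosine similarity between the item's features and the user profile.
    return item @ user_profile / (np.linalg.norm(item) * np.linalg.norm(user_profile))

scores = [score(f) for f in item_features]

# A brand-new action item scores well despite having zero interactions --
# this is why content based methods handle cold start better.
new_item = np.array([1.0, 0.0, 0.0])
print(round(score(new_item), 2))  # 0.89
```

Only a new item whose features are themselves unknown would still face the cold start problem, which matches the limitation described above.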
The hybrid approach combines collaborative filtering and the content based approach. One way to achieve this is simply to run two models and mix the recommendations from both, ensuring a more robust recommendation system. The second, more sophisticated way is to combine both approaches in a single model; this is often done using machine learning concepts such as neural networks.
AWS realised that many customers were struggling to build recommendation systems on top of their own customer data. So, they focused on solving a very common, but complex problem for their customers.
Amazon Personalize allows developers with no prior machine learning experience to easily build sophisticated personalisation capabilities into their applications, using machine learning technology by leveraging Amazon’s experience.
Amazon Personalize supports collaborative filtering, the content based approach and a very powerful hybrid approach using hierarchical recurrent neural networks.
What is Amazon Personalize trying to solve?
AWS realised that the end-to-end workflow for developing and deploying recommendation systems is a challenging process. If recommendation systems are built from scratch, it typically requires experts and a higher level of technical maturity from an organisation to successfully deploy recommendation models in production.
Amazon Personalize identifies a typical development workflow to build highly capable recommender systems. Whilst the focus is on ease of use and abstraction of complexity, the model configuration itself allows expert level access and control of model parameters. From our research so far at DiUS, we’ve found that this flexibility has great potential, whereby as ML practitioners working with our customers, we will be able to leverage the more managed aspects for some use cases but then break away and implement something more bespoke for others.
Here are a few examples of how personalisation could be used in applications:
- Product and content recommendations tailored to a user’s profile (preferences).
- Search results that consider each user’s preferences.
- Surfacing products that are relevant to the individual.
- Promotions based on a user’s behaviour.
- Selecting the most appropriate mobile app notification to send based on a user’s location, buying habits and discount amounts.
How quickly could you get up and running?
The answer to that question…it depends!
Setting up the required AWS resources and uploading some data can be done in a couple of hours. Training models takes time, which is proportional to the amount of data provided. However, it can also be achieved within the same day. We’ve found that deploying an Amazon Personalize endpoint and testing the recommendation on top of a successful model can be done in minutes!
However, we’ve also learned that finding and using the right data for your recommendation solution is an entirely different problem.
The timeframe will be largely impacted by organisational data readiness, meaning easy access to data exports and high quality datasets. Getting data (repeatedly) into a clean format can take anywhere from a day up to weeks.
This is a basic overview of the key technical concepts. For more of the technical detail, you might like to refer to the official Amazon Personalize documentation.
Amazon Personalize can be considered a complete solution for building an advanced recommender system, covering the entire lifecycle of a recommendation system.
The above image—from the AWS website—lists the activities involved in setting up a workflow.
Typically, there are three main activities involved.
1. Data preparation
- Identifying a suitable dataset
- Removing incorrect or augmenting missing data points
- Exporting data in required format
- Defining a dataset schema
2. Model training
- Selecting the appropriate solution (algorithm)
- Model configuration
- Model training
- Model evaluation
3. Model deployment
- Creating a model endpoint (API access to predictions)
Datasets and dataset groups
Each model is dependent on the dataset group which contains the individual datasets. You can build multiple models against the same dataset group and select the best option.
- Dataset group: the relationship between datasets, solutions and campaigns
- Interactions dataset: a record of interactions between users and items, with a timestamp
- Users dataset: user specific details such as age, gender, address etc. User_ID is used to map back to the interactions
- Items dataset: item details such as genre, price, description etc. Item_ID is used to map back to the interaction dataset
For more information see Amazon Personalize documentation.
Amazon Personalize guides the data selection process by requiring the data to conform to a predefined schema. Schemas are defined as JSON structures and are versioned by Amazon Personalize. Each dataset schema has mandatory fields which have to match the CSV column names. The mandatory fields help to map common user and item data so the model training can convert them accordingly.
Each dataset has some reserved keywords which you would typically use in the context of a recommendation scenario.
The key takeaway from the schema concept is that while there is some flexibility in defining your own schema, Amazon Personalize asks users to convert/map their data to its predefined schema. This guides data selection and also puts focus on what’s important. If you define additional fields in your user/item data, you have to declare whether that data is categorical or quantitative.
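As a sketch, an interactions schema follows the Avro-style JSON structure from the Amazon Personalize documentation; USER_ID, ITEM_ID and TIMESTAMP are the mandatory fields, and EVENT_TYPE is one of the optional reserved keywords:

```python
import json

# Minimal interactions dataset schema. Field names must match the
# column names in the CSV file that will be imported.
interactions_schema = {
    "type": "record",
    "name": "Interactions",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {"name": "USER_ID", "type": "string"},    # mandatory
        {"name": "ITEM_ID", "type": "string"},    # mandatory
        {"name": "TIMESTAMP", "type": "long"},    # mandatory
        {"name": "EVENT_TYPE", "type": "string"}, # optional reserved keyword
    ],
    "version": "1.0",
}

# The schema is registered with Personalize as a JSON string, e.g. via
# boto3: personalize.create_schema(name="interactions", schema=json.dumps(interactions_schema))
print(json.dumps(interactions_schema, indent=2))
```

User and item schemas look analogous, with USER_ID and ITEM_ID respectively as their mandatory keys plus your own metadata fields.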
Recipes (aka algorithms)
Amazon Personalize offers a mixture of collaborative filtering and content based algorithms, named recipes, which can be categorised into three classes.
User Personalization Recipes
Predict which items a user will most likely interact with:
- HRNN is a hierarchical recurrent neural network (hybrid approach)
- HRNN with metadata (user/item dataset)
- HRNN cold-start aware model
- Popularity-Count Recipe
- Most popular items based on interaction count
Personalized-Ranking Recipe (collaborative filtering)
Rank a given list of items according to a user’s predicted preferences:
- HRNN a hierarchical recurrent neural network
- Filters and reranks results
Related Items Recipe
- SIMS Recipe (collaborative filtering)
- Item-to-item similarities (SIMS)
The documentation is straightforward on how to select a recipe for training a model. Amazon Personalize follows good practice by selecting sensible default configurations, while also allowing the user to go deep into tuning their models.
Model configuration parameters are also referred to as hyperparameters. These values are set before training rather than learned from the data; sometimes they are referred to as ‘good guesses’.
Amazon Personalize offers automatic tuning of these hyperparameters, essentially a search over parameter configurations to find the most successful one. This, however, is abstracted away from the user as an optional step. Again, we like that this is available, but only as required for the problem at hand.
Solutions (model training)
A solution version is the term Amazon Personalize uses for a trained machine learning model.
The creation of a solution requires the user to select one of the recipes, as well as provide the required datasets and their corresponding schemas.
Amazon Personalize supports various numerical metrics to measure the model performance.
Depending on the choice of recipe (algorithm), certain metrics will be generated. We are just looking at two of them here; if you would like more detail, you can refer to the documentation.
precision_at_K – The number of relevant items within the top K recommendations, divided by K.
normalized_discounted_cumulative_gain_at_K – Considers positional effects by applying inverse logarithmic weights based on the positions of relevant items, normalised by the largest possible scores from ideal recommendations.
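A minimal sketch of both metrics, with a hypothetical recommendation list and relevance set:

```python
import math

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations that are relevant."""
    return sum(1 for item in recommended[:k] if item in relevant) / k

def ndcg_at_k(recommended, relevant, k):
    """Discount each relevant item by 1/log2(position + 2), then normalise
    by the best achievable score (all relevant items ranked first)."""
    dcg = sum(1 / math.log2(i + 2)
              for i, item in enumerate(recommended[:k]) if item in relevant)
    ideal = sum(1 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0

recommended = ["A", "B", "C", "D", "E"]  # model output, best first
relevant = {"B", "E"}                    # items the user actually interacted with

print(precision_at_k(recommended, relevant, 5))      # 0.4
print(round(ndcg_at_k(recommended, relevant, 5), 3)) # 0.624
```

NDCG rewards placing relevant items near the top: moving "B" and "E" to the first two positions would raise it to 1.0 while precision_at_K stayed at 0.4.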
From our experience, recommendation is a challenging problem, and these models typically have relatively low accuracy at the scale of individual predictions. However, because we typically get many opportunities to recommend, a small increase can have a big impact.
Campaigns (model deployment)
Creating a campaign packages your solution together with some wrapping code into an HTTP endpoint. This is all done automatically by Amazon Personalize. The user simply has to select a solution version and the minimum provisioned transactions per second (TPS).
Model predictions are available via HTTP or AWS SDK and can be either a single recommendation or batch predictions.
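A sketch of fetching single recommendations with the AWS SDK (the campaign ARN and user id are placeholders; the live call requires AWS credentials, so it is shown here as a comment alongside the documented response shape):

```python
# The live request against a deployed campaign would look like:
#
#   import boto3
#   runtime = boto3.client("personalize-runtime")
#   response = runtime.get_recommendations(
#       campaignArn="arn:aws:personalize:<region>:<account>:campaign/<name>",
#       userId="user-42",
#       numResults=5,
#   )
#
# The response contains an itemList of scored items, for example:
sample_response = {
    "itemList": [
        {"itemId": "item-1", "score": 0.31},
        {"itemId": "item-7", "score": 0.22},
    ]
}

def recommended_item_ids(response):
    # Extract just the item ids, ordered best-first as returned by the API.
    return [item["itemId"] for item in response["itemList"]]

print(recommended_item_ids(sample_response))  # ['item-1', 'item-7']
```

Batch predictions follow a different path: input lists are read from and results written to Amazon S3, which suits offline scenarios such as pre-computing recommendations for email campaigns.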
We are now observing rapid change in consumer behaviour in real time, due to a drastic impact on our environment. We looked at the OODA Loop as a framework for how organisations have to analyse and react to those underlying changes.
We think that powerful recommender systems are one way for organisations to shorten their OODA loop and react quickly to rapid changes. Looking at the two types of recommender systems, collaborative filtering and content based approaches, helped us understand the challenges we have to solve.
Amazon Personalize is a new contender in the recommendation market and has some advantages for existing Amazon cloud customers, since most of their data is already in the cloud. It uses state-of-the-art recommendation algorithms that address the cold start problem.
Amazon Personalize is trying to provide easier access to the complex world of custom recommendation systems (trained on your own customer data).
Part 2 of this blog post will look into the challenges you might face when building on top of Amazon Personalize and how to automate the entire process.