Resources


Payment transaction monitoring
Blog Post 5 min read

Increase approval rates with AI-based payment transaction monitoring

Autonomous monitoring creates the real-time visibility critical for faster detection of transaction issues, optimized approval rates, and increased revenue
Blog Post 5 min read

Proactively monitoring customer experience and network performance in real-time

Leading CSPs use AI-based autonomous monitoring of granular CX, performance and telemetry data to provide a flawless user experience by proactively mitigating incidents and service degradation
Blog Post 6 min read

Network monitoring: The build vs. buy dilemma

CSPs opting to adopt an AI-based approach to network monitoring need to balance time to value and return on investment, while securing the best possible solution for their specific needs. Here’s what you need to know.
Blog Post 7 min read

Anodot vs. AWS: Which Has the Most Accurate Cloud Cost Forecasts?

When forecasting cloud costs, accuracy is key. So how did Amazon Forecast fare against Anodot? Read on for the results.
Blog Post 5 min read

Transforming the Gaming Industry with AI Analytics

In 2020, the gaming market generated over $177 billion, marking an astounding 23% growth from 2019. While it may be incredible how much revenue the industry generates, what's more impressive is the massive amount of data generated by today's games.

The Enormous Data Challenge in Gaming

There are more than 2 billion gamers globally, generating over 50 terabytes of data each day. The largest game companies in the world can host 2.5 billion unique gaming sessions in a single month and host 50 billion minutes of gameplay in the same period.

The gaming industry and big data are intrinsically linked. Companies that develop capabilities in using that data to understand their customers will have a sizable advantage in the future. But doing so comes with its own unique challenges.

Games have many permutations, with different game types, devices, user segments, and monetization models. Traditional analytics approaches, which rely on manual processes and interventions by operators viewing dashboards, are insufficient in the face of the sheer volume of complex data generated by games.

Unchecked issues lead to costly incidents or missed opportunities that can significantly impact the user experience or the company's bottom line. That's why many leading gaming companies are turning to AI and machine learning to address these challenges.

Gaming Analytics AI

Gaming companies have all the data they need to understand who their users are, how they engage with the product, and whether they are likely to churn. The challenge is gaining valuable business insights from the data and taking action before opportunities pass and users leave the game.

AI/ML helps bridge this gap by providing real-time, actionable insights on near-limitless data streams so companies can design around these analytics and act more quickly to resolve issues.
There are two fundamental categories companies should home in on to make the best use of their gaming data: customer engagement and monetization.

The revenue-generating opportunities in the gaming industry are one reason it's a highly competitive market. Keeping gamers engaged requires emphasizing the user experience and continuous delivery of high-quality content personalized to a company's most valued customers.

Customer Engagement and User Experience

Graphics and creative storylines are still vital, and performance issues, in particular, can be a killer for user enjoyment and drive churn. But with a market this competitive, it might not be enough to focus strictly on these issues.

Games can get an edge on the competition by investing in gaming AI analytics to understand user behaviors, likes, dislikes, and seasonality impacts, and even to home in on what makes users churn or come back to the game after a break.

AI-powered business monitoring solutions deliver value to the customer experience and create actionable insights to drive future business decisions and game designs that acquire new customers and prevent churn.

AI-Enhanced Monetization and Targeted Advertising

All games need a way to monetize. That's especially true in today's market, where users expect games to always be on and regularly deliver new content and features. A complex combination of factors influences how monetization practices and models enhance or detract from a user's experience with a game.

When monetization frustrates users, it's typically because of aggressive, irrelevant advertising campaigns or models that aren't well suited to the game itself or its core players. Observe the most successful products in the market, and one thing you will consistently see is highly targeted interactions.

Developers can use metrics gleaned from AI analytics, combined with performance marketing, to appeal to their existing users and acquire new customers.
With AI/ML, games can use personalized ads that cater to users' or user segments' behavior in real time, optimizing the gaming experience and improving monetization outcomes.

Using AI-based solutions, gaming studios can also quickly identify growth opportunities and trends with real-time insight into high-performing monetization models and promotions.

Mobile Gaming Company Reduces Revenue Losses from Technical Incident

One mobile gaming company suffered a massive loss when a bug in a software update disrupted a marketing promotion in progress. The promotion involved automatically pushing special offers and opportunities for in-app purchases across various gaming and marketing channels. When the bug disrupted the promotions process, the analytics team couldn't take immediate action because they were unaware of the issue.

Their monitoring process was ad hoc, relying on the manual review of multiple dashboards, and unfortunately, by the time they discovered the problem, it was too late. The result was a massive loss for the company: a loss of users, a loss of installations, and in the end, more than a 15% revenue loss from in-app purchases.

The company needed a more efficient and timely way to track its cross-promotional metrics, installations, and revenue. A machine learning-based approach, like Anodot's AI-powered gaming analytics, provides notifications in real time so teams can quickly find and react to any breakdowns in the system, and would have prevented the worst of the impacts.

Anodot's AI-Powered Analytics for Gaming

The difference between success and failure is how companies respond to the ocean of data generated by their games and their users. Anodot's AI-powered gaming analytics solutions can learn expected behavior in the complex gaming universe across all permutations of gaming, including devices, levels, user segments, pricing, and ads.
Anodot's Gaming AI platform is specifically designed to monitor millions of gaming metrics and help ensure a seamless gaming experience. Anodot monitors every critical metric and establishes a baseline of standard behavior patterns to quickly alert teams to anomalies that might represent issues or opportunities. Analytics teams see how new features impact user behavior, with clear, contextual alerts for spikes, drops, purchases, and app store reviews, without the need to comb through dashboards trying to find helpful information.

The online gaming space represents one of the more recent areas where rapid data collection and analysis can provide competitive differentiation. Studios using AI-powered analytics will keep themselves and their players ahead of the game.
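To make the idea of a learned baseline concrete, here is a minimal sketch of threshold-based anomaly detection: learn a mean and spread from historical values of a metric, then flag incoming points that fall far outside that band. This is a toy illustration, not Anodot's actual model (which learns richer seasonal patterns); the metric values below are invented.

```python
from statistics import mean, stdev

def detect_anomalies(history, incoming, threshold=3.0):
    """Flag incoming points that deviate more than `threshold`
    standard deviations from the baseline learned on `history`."""
    baseline = mean(history)
    spread = stdev(history)
    return [
        (i, value) for i, value in enumerate(incoming)
        if abs(value - baseline) > threshold * spread
    ]

# Hourly in-app purchase counts for one user segment (illustrative numbers).
history = [120, 118, 125, 122, 119, 121, 124, 117, 123, 120]
incoming = [119, 122, 60, 121]  # the 60 is a sudden drop worth alerting on

alerts = detect_anomalies(history, incoming)  # flags the drop at index 2
```

In practice a system like this would maintain one baseline per metric-dimension combination (device, level, segment, and so on), which is exactly why the number of monitored time series grows into the millions.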
Blog Post 11 min read

How Influencing Events Impact the Accuracy of Business Monitoring

Businesses are flooded with constantly changing thresholds brought on by seasonality, special promotions and changes in consumer habits. Manual monitoring with static thresholds can't account for events that do not occur in a regularly timed pattern. That's why historical context of influencing events is critical in preventing false positives, wasted resources and disappointed customers.

Typically, when forecasting a metric's future values, its past values are used to learn patterns of behavior. However, in many cases, it is not possible to produce accurate forecasts without additional knowledge about what influences a metric's behavior. Specifically, the observed patterns in the metric are often influenced by special events that are known to occur at specific times, both in the past and the future.

What is an Event?

An event is a special occurrence in time that is known in advance of it happening. An event recurs in the lifecycle of a metric, but not necessarily on a regular basis or cadence. This is better explained with a few examples:

Holidays that aren't a fixed date but rather are dependent on a time of the month or year. Consider the U.S. observation of Thanksgiving, which is always the fourth Thursday of November; the Jewish observation of Hanukkah, which may occur at any time from late November to late December; or the Muslim observation of Eid al-Fitr, whose date of celebration is dependent on the cycle of the moon.

Sales and marketing events that are often tied to a season or special celebration. Examples would be the Black Friday shopping day(s) that follow the U.S. Thanksgiving holiday; back-to-school sales; post-holiday clearance sales; or Amazon's annual "Prime Day" sale.

Sporting events, which may be local or regional. For example, both the Super Bowl and the World Cup have a big effect on sales of beer and snack foods, attendance at sports bars, and sales of team merchandise.
In a more local case, regional sporting events can have a similar effect. Other examples of events include weather (blizzard days, heavy rains, hurricanes, etc.); financial events (earnings releases, bank holidays, changes in interest rates, etc.); and technical events (deployment of a new software version, hardware upgrades, etc.).

These events are generally known (or at least can be anticipated) in advance. Even weather predictions have become accurate enough to know when significant weather events are going to happen in a particular locale.

In the context of a machine learning (ML) based business monitoring system, events are useful in two ways:

1. Understanding why an incident occurred, for the purpose of root cause analysis (e.g., an increase in app crashes right after a new version release indicates that a bug in the new version caused the errors).

2. Improving the accuracy of the ML-based monitoring. By taking into account the expected influence of an event on the metrics being monitored, you can avoid false positives, reduce false negatives, and improve forecasting accuracy.

What is an Influencing Event?

An influencing event is an event that has a predictable and measurable impact on a metric's behavior when it occurs. For example, Cyber Monday is an influencing event on metrics that measure revenues for many e-commerce companies in the U.S. The impact of that event is almost universally a dramatic increase in revenues during the event.

If a machine learning business monitoring system does not consider the influence of such an event on the revenue metrics of an e-commerce company, the spike in revenues would appear to be an anomaly, and a false positive alert might be sent. On the other hand, when the influence of the event is accounted for, it can help identify real anomalies.
For example, if this year's revenue on Cyber Monday is lower than the expectation learned by the system, an alert highlighting a drop in expected revenues will be sent, ideally in real time, so remediation actions can be taken to bring revenue back to expected levels.

An influencing event can impact the baseline of a metric before, during and after the event takes place. To understand the logic of that statement, consider this example: Christmas is an annual event. Depending on the metrics you are monitoring, this event has multiple effects, both good and bad, that happen before Christmas Day, on Christmas Day, and after Christmas Day has passed. For merchants measuring revenue from sales, the days before Christmas are spike days. Christmas Day itself is a slow sales day for those merchants who are open for business. The days immediately following Christmas Day can see spiking sales again as shoppers look for post-holiday bargains. For rideshare companies, there can be an uptick in riders before the holiday as people socialize and get out and about, but Christmas Day is a drop day, as people tend to stay at home.

Sample Patterns in a Real Business Scenario

Consider a computer gaming company that occasionally runs events (called "quests") to stimulate interest in the game. Quests happen multiple times per month at irregular intervals, and each quest spans several days. For example, a quest might run for five days and be done, the next one starts ten days later, and the one after that starts 15 days after the second quest ends. An object of the game is to collect "coins," and the total coin count is one of the metrics the company measures. During a quest, the coin count has an interesting pattern: high on the first few days of the quest, then a smaller spike, and then a return to a normal steady pattern at the end of the quest.
It looks something like this:

The gaming company wants to monitor the coin counts during a quest to learn if there is anything unusual happening with the game. For example, if coin counts are down considerably from the normal usage pattern, it could mean that gamers are having trouble logging into the game. That would certainly be something to look into and remedy as soon as possible. This is why anomaly detection and alerting are so important.

In the scheme of machine learning and anomaly detection, these quests are influencing events that occur at irregular times. We can't apply a seasonality model to the machine learning process because the quests aren't seasonal; nor are they completely random. They are irregular, but important nonetheless.

If the machine learning took place without consideration for the influencing events, the forecast of the coin counts compared to the actual coin counts would look something like the graph below. The shaded area represents the forecasted (baseline) range, and the solid line is the actual data. It's a very inaccurate forecast, to say the least. There are many false positives in the timeline, and if a real issue with the game occurred during this period, it would not be detected as an anomaly.

However, if the ML model were told when a quest is going to start – after all, quests are scheduled, not impromptu – the model could learn the pattern of the event. The baseline could incorporate the pattern, and it could be taken into account each time there is another quest. The resulting forecast versus actual looks something like this:

You can see the forecast is much more accurate, even with a very complicated pattern. Take note of the small square marker (circled in red) at the bottom left of the graph. This is the indicator that tells the model a quest is starting. When this marker is sent before the start of a quest, the forecast algorithm understands how to treat the data coming in because it has seen this pattern before.
In mathematical terms, the influencing event is called a regressor, and it's critical to incorporate it into the forecast algorithm to ensure better accuracy. The example below shows a real issue that happened during a quest. Because the baseline was accurate, the drop in activity was detected, and the issue was quickly fixed to get the game up and running as normal.

Challenges of Learning the Impact of Influencing Events

You can see just how important it is for accuracy that a model learn the impact of an influencing event. This is far easier said than done. There are some relatively difficult challenges in having the mathematical model accurately and automatically learn the impact of influencing events. The three main challenges are:

1. Automatically identifying whether a group of historical events has any influence on a given time series

To an ML model, it's not inherently apparent whether a group of events – like Black Friday occurring over a period of years, or the gaming company's quests over the span of a year – has an actual impact on the metrics. The first part of the challenge is to figure out if that group of events does have an impact. The second part is, if the group of events is shown to have an influence, how can occurrences of the events be automatically identified without human intervention? For example, the gaming company measures many other metrics besides the coin count, so how can you tell that it is indeed the quest that has an influence on the coin count and not something else? And how can this be recognized automatically?

2. If the group does have an influence, accurately and robustly identifying the pattern of the influence, both before and after the event date

So you've determined that the group of events has an influence on the metric's pattern. An event has a pattern, and the challenge is to learn this pattern robustly and accurately.
There are two main factors making it hard:

Separating the event effect from the normal pattern: The pattern of the event needs to be separated from the normal pattern of the metric occurring at the same time. For example, a metric measuring viewership during an event like the Super Bowl is composed of the normal viewership pattern and the added viewership due to the Super Bowl itself. To accurately and robustly learn the pattern of influence of the event, techniques such as blind source separation are required – and the assumptions behind those techniques require validation during the learning process.

Causal and non-causal effects: A complication is that sometimes there is an impact even before the event starts. You can't assume the impact of an event will start only when the event starts.

3. Given a set of many events, automatically grouping them into multiple groups, where each group has a similar influence pattern and a clear mechanism for identifying, from the event description, to which group an event belongs

All sorts of groups of events can have an influence on a particular metric. Sometimes different events can have an almost identical pattern. If these events can be grouped together, learning the pattern and its impact can be faster and easier. Say you are measuring revenue for a rideshare company. This company sees spikes on New Year's Eve in all major cities and on St. Patrick's Day in New York and Boston, because people like to go out to celebrate these days. The patterns of ridership for these events are almost identical. When you have lots of these types of events with similar patterns, you want to group them, because that makes learning about them more accurate. What's more, the groupings provide more data samples, so you need less historical data to learn the pattern.
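The grouping idea in the third challenge can be sketched in a few lines: normalize each event's impact pattern so only its shape matters, then greedily group events whose shapes are close. This is an illustrative toy, not Anodot's actual method; the event names and lift numbers below are invented.

```python
def normalize(pattern):
    """Scale an event's impact pattern so shape, not magnitude, is compared."""
    peak = max(abs(v) for v in pattern) or 1.0
    return [v / peak for v in pattern]

def group_events(events, tolerance=0.2):
    """Greedily group events whose normalized daily impact patterns are
    within `tolerance` of each other (mean absolute difference)."""
    groups = []
    for name, pattern in events.items():
        shape = normalize(pattern)
        for group in groups:
            ref = group["shape"]
            if sum(abs(a - b) for a, b in zip(shape, ref)) / len(ref) <= tolerance:
                group["members"].append(name)
                break
        else:
            groups.append({"shape": shape, "members": [name]})
    return [g["members"] for g in groups]

# Illustrative daily revenue lift (%) around each event for a rideshare metric.
events = {
    "new_years_eve_nyc": [5, 40, 80, 10],
    "st_patricks_boston": [4, 38, 76, 9],
    "christmas_day": [30, -20, -40, 25],  # different shape: a drop day
}
groups = group_events(events)
```

The two celebration-night events share a shape and land in one group, while the Christmas drop pattern forms its own group, so each group's pattern can be learned from the pooled occurrences.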
Despite the challenges highlighted above, being able to automatically include influencing events in the machine learning model is critically important for two key reasons. First, it reduces the generation of false positive alerts; second, it enables capturing when a special event you are running is not performing as expected. Consider an e-commerce company whose Black Friday sale event has lower sales than expected. By incorporating this influencing event in the ML model, the merchant can see that sales are off for some reason and can begin investigating the cause before revenues are significantly impacted.
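Returning to the quest example, the core of treating an event as a regressor can be sketched simply: learn the average lift that past occurrences added on top of the plain baseline, then add that lift back into the forecast whenever a new occurrence is scheduled. This is a simplified illustration of the idea, not the actual algorithm; all numbers are invented.

```python
def learn_event_lift(occurrences):
    """Average, position by position, the residual lift each past event
    occurrence added on top of the plain baseline."""
    length = len(occurrences[0])
    return [
        sum(occ[i] for occ in occurrences) / len(occurrences)
        for i in range(length)
    ]

def forecast_with_event(baseline, event_lift, event_start):
    """Add the learned event pattern onto the baseline forecast,
    starting at the day the event is known to begin."""
    forecast = list(baseline)
    for offset, lift in enumerate(event_lift):
        day = event_start + offset
        if day < len(forecast):
            forecast[day] += lift
    return forecast

# Coin counts above baseline observed during three past quests (illustrative).
past_quests = [
    [100, 60, 20, 0],
    [110, 58, 22, 2],
    [90, 62, 18, 1],
]
lift = learn_event_lift(past_quests)  # averaged quest pattern
baseline = [50, 50, 50, 50, 50, 50, 50]
forecast = forecast_with_event(baseline, lift, event_start=2)
```

The `event_start` argument plays the role of the scheduling marker described above: telling the model in advance when the next quest begins is what lets the forecast absorb the quest's pattern instead of flagging it as an anomaly.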
Blog Post 4 min read

Curb network incidents fast with cross-domain correlation analysis

AI-based cross-domain correlation analysis reduces the time to detect (TTD) and time to resolve (TTR) customer-impacting incidents, preventing revenue loss and damage to brand reputation
Blog Post 4 min read

Webinar Recap: Lessons learned from T-Mobile Netherlands’ road to zero touch

By implementing Anodot’s autonomous monitoring on top of its network, T-Mobile Netherlands reduced time to resolution of incidents and progressed to proactive incident and customer experience management
Ecommerce monitoring
Blog Post 5 min read

Preventing Shopping Cart Abandonment with Anomaly Detection

The global pandemic has changed B2C markets in many ways. In the U.S. market alone in 2020, consumers spent more than $860 billion with online retailers, driving up sales by 44% over the previous year. eCommerce sales are likely to remain high long after the pandemic subsides, as people have grown accustomed to the convenience of ordering online and having their goods – even groceries – delivered to their door.

Despite the growth in online sales, eCommerce companies continue to deal with the perennial problem of shopping cart abandonment: online consumers who quit the checkout process before completing a purchase. According to Dynamic Yield, the average shopping cart abandonment rate globally is 70.05%.

Why Shoppers Abandon Carts

There are numerous reasons why shoppers might forgo their prospective purchases. Perhaps they were only comparison shopping among products or different e-commerce sites. It could be that the checkout process is too cumbersome, or the store doesn't accept the forms of payment shoppers prefer.

But sometimes system errors or technical issues lead people to leave the purchase funnel or abandon their carts. Perhaps the payment platform isn't working right, or the product page isn't loading, or a new customer can't complete the account creation process. In cases like these, shoppers tend to abandon their carts and leave the merchant's site in frustration and disappointment – and the merchant may never know why it is losing sales.

Why Traditional Monitoring Falls Short

Online shopping issues are almost impossible for eCommerce companies to detect manually. There are just too many metrics and dimensions – different products, devices, campaigns, sessions, etc. – to manually detect issues that impact customer experience and revenue. Another challenge with traditional dashboards is the time it takes to realize there's a problem in the path to purchase. Delays between anomaly occurrence and detection can take hours, sometimes days.
That's a delay eCommerce businesses can't afford if they want to remain competitive and reduce customer churn.

How to Catch Online Shopping Incidents Quickly

The best way to detect incidents in the purchase funnel is to put all business-level KPIs and data points into a machine learning system that can learn normal patterns and automatically detect anomalies. For system metrics, it's important to learn shopping behaviors across every dimension in a very intimate way. For example, the ML platform would need to learn how many times shoppers search for a product, how often it is added to a cart, what the average cart size is, whether the product was on a special promotion, what time of day or day of week the product sells best, and so on. There might even be "seasonality" to these metrics, such as the pattern of sales over a week, a month, a year, or during special promotions that are seasonal (think "back to school" shopping).

After learning the normal patterns of the various metrics and dimensions, the anomaly detection system can analyze incoming data feeds of all the metrics and quickly identify anomalies in real time. An alert notifies data analysts that something is amiss and needs attention.

In the example below, the alert reveals a significant drop (the solid orange line) in the approval rate of payments using the PayPal widget as compared to the typical approval rate (shaded blue area). The drop is correlated to the time when a spike in downtime of the PayPal widget is occurring, indicating a problem with the payment platform. To prevent a loss of sales when such a glitch is happening, the merchant can advise customers to use another form of payment to complete their sale.

The next example shows a lengthy drop in purchase completions for a particular product when the shopper is on an Android device. The cause of the problem was a software upgrade for the Android app.
Once the anomaly was detected due to a low volume on the purchase metric, the developer was able to correct the app and prevent extended losses in sales.

How Correlation Helps with Investigations

But why is an anomaly happening? Where to begin to investigate? Correlation analysis helps with these questions. By having related anomalies and events grouped into one alert, you can get to the root cause of conversion issues faster.

In the graphic below, the top circle is an anomaly that has been detected. The right circle shows the related events (e.g., holidays or promotions) and anomalies that are happening concurrently and grouped in that alert. The left circle identifies the leading dimensions within the monitored metrics. This is real-time insight that helps merchants find and fix the root cause of problems before they impact the bottom line.

The image below is a good example of how correlated anomalies can point to the source of a system problem. In this example, a drop in successful orders corresponds to a drop in finished orders, which seems to indicate a problem with the process to finish orders. That would be the first place to investigate if there is anything wrong with the underlying process.

eCommerce businesses can't afford to wait until the end of the sales day, week, or quarter to understand if there is a problem in their system that is resulting in cart abandonment. An anomaly detection solution with real-time alerts can let you know if problems or errors are occurring that should be investigated promptly, to prevent shoppers from leaving items in their carts before the sale can be completed.
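One simple way to picture how related anomalies end up in a single alert is to group anomalies whose start times fall close together. This is a toy approximation of correlation analysis (a real system also weighs metric relationships and dimensions); the metric names and timestamps are illustrative.

```python
def group_related_anomalies(anomalies, max_gap_minutes=10):
    """Group anomalies whose start times fall within `max_gap_minutes`
    of the previous anomaly into a single alert, so related symptoms
    of one incident are investigated together."""
    ordered = sorted(anomalies, key=lambda a: a["start"])
    alerts = []
    for anomaly in ordered:
        if alerts and anomaly["start"] - alerts[-1][-1]["start"] <= max_gap_minutes:
            alerts[-1].append(anomaly)
        else:
            alerts.append([anomaly])
    return alerts

# Start times in minutes since midnight; metric names are illustrative.
anomalies = [
    {"metric": "paypal_approval_rate", "start": 600},
    {"metric": "paypal_widget_downtime", "start": 604},
    {"metric": "android_purchase_completions", "start": 900},
]
alerts = group_related_anomalies(anomalies)
```

Here the approval-rate drop and the widget downtime spike land in the same alert, pointing the analyst straight at the payment platform, while the unrelated Android issue stays in its own alert.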