
Correlation analysis in analytics
Blog Post 7 min read

Why Use Correlation Analysis in Data Analytics?

When organizations track metrics by the thousands, millions, or even billions, it's helpful to understand which metrics have close relationships: when one metric behaves in a certain way, one or more additional metrics can be expected to behave in a similar or opposite way.

What is Correlation Analysis?

Correlation analysis measures how much one variable changes in relation to another. If two variables or metrics show a strong correlation, and one of them is observed acting in a particular way, you can conclude that the other is being affected in a similar manner. Finding relationships between disparate events and patterns can reveal a common thread, an underlying cause of occurrences that, on the surface, may appear unrelated and unexplainable. A high correlation points to a strong relationship between the two metrics, while a low correlation means the metrics are weakly related. A positive correlation means both metrics increase in relation to each other, while a negative correlation means that as one metric increases, the other decreases.

Why Correlation Analysis is Important

Correlation analysis can reveal meaningful relationships between different metrics or groups of metrics. Information about those connections can provide new insights and reveal interdependencies, even if the metrics come from different parts of the business. It also makes it possible to group related metrics together, reducing the need to process each metric individually.

The Benefits of Correlation Analysis

Reduce Time to Detection
In anomaly detection, working with a vast number of metrics and surfacing correlated anomalous metrics helps draw relationships that not only reduce time to detection (TTD) but also shorten time to remediation (TTR). As data-driven decision-making has become the norm, early and robust detection of anomalies is critical in every industry, because delayed detection hurts customer experience and revenue.

Reduce Alert Fatigue
Another important benefit of correlation analysis in anomaly detection is reducing alert fatigue by filtering irrelevant anomalies (based on the correlation) and grouping correlated anomalies into a single alert. Alert storms and false positives are significant challenges for organizations today: hundreds, even thousands of separate alerts from multiple systems, many of which stem from the same incident.

Reduce Costs
Correlation analysis significantly reduces the cost of the time spent investigating meaningless or duplicative alerts. In addition, the time saved can be spent on more strategic initiatives that add value to the organization.

Example Use Cases for Correlation Analysis

Marketing professionals use correlation analysis to evaluate the efficiency of a campaign by monitoring and testing customers' reactions to different marketing tactics; in this way, they can better understand and serve their customers. Financial planners assess the correlation of an individual stock to an index such as the S&P 500 to determine whether adding the stock to an investment portfolio would increase the portfolio's systematic risk.
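As a minimal illustration of the computation behind these use cases (a sketch with made-up prices, not Anodot's implementation), the Python snippet below estimates the Pearson correlation between a stock's daily returns and an index's daily returns:

```python
import numpy as np

# Hypothetical daily closing prices; in practice these would come from a market data feed.
stock_close = np.array([101.2, 102.5, 101.8, 103.9, 104.4, 103.1, 105.0])
index_close = np.array([4012.0, 4051.0, 4039.0, 4102.0, 4120.0, 4088.0, 4141.0])

# Correlate daily returns rather than raw prices to avoid spurious trend correlation.
stock_ret = np.diff(stock_close) / stock_close[:-1]
index_ret = np.diff(index_close) / index_close[:-1]

# Pearson coefficient: +1 = move together, -1 = move opposite, ~0 = unrelated.
r = np.corrcoef(stock_ret, index_ret)[0, 1]
print(f"correlation of daily returns: {r:.2f}")
```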
For data scientists and those tasked with monitoring data, correlation analysis is incredibly valuable for root cause analysis and for reducing time to detection (TTD) and time to remediation (TTR). Two unusual events or anomalies happening at the same time or rate can help pinpoint an underlying cause of a problem. The organization incurs a lower cost from a problem if it can be understood and fixed sooner rather than later. Technical support teams can reduce the number of alerts they must respond to by filtering irrelevant anomalies and grouping correlated anomalies into a single alert. Tools such as Security Information and Event Management (SIEM) systems do this automatically to facilitate incident response.

How Anodot Uses Correlation of Metrics in Business Monitoring

Business monitoring is the process of collecting, analyzing, and using metrics and key performance indicators (KPIs) to track an organization's progress toward its business objectives and to guide management decisions. Anomaly detection is a key method for identifying when a business process is experiencing an unexpected change that may indicate an underlying issue is derailing the process. As organizations become more data-driven, they find themselves unable to scale their analytics capabilities without the help of automation. When an organization has thousands of metrics (or more), analyzing individual metrics can obscure key insights. A faster method is to use machine learning based correlation analysis to group related metrics together. In this way, when a metric becomes anomalous, all the related events and metrics that are also anomalous are grouped together in a single incident. This reduces data processing time, reveals the root cause of an incident, and ties events together to reduce alert fatigue. On average, customers using Anodot have found correlation analysis helps reduce alert noise by up to 99%.

An Example of Correlation in Business Monitoring

Consider the applicability of correlation analysis to eCommerce promotions. For many retailers, the last quarter of the year accounts for more than 50 percent of their annual sales. Most merchants run various promotions to boost sales around Black Friday, Cyber Monday, and other holiday-related events. Multiple factors are at play with any promotion, including the promotion type, promotional pricing, audience targeting, purchase intent, timeliness, and the media used for the promotion. Correlation analysis is a natural fit for determining which factors play a key role in driving the top and bottom lines in sales, and the ability to identify strong correlations helps marketers double down on the corresponding promotions.

To illustrate, consider the figure below, which shows two correlated anomalies in page views (top chart) and add-to-carts (bottom chart) for an eCommerce site, pointing to an anomalous sales pattern. The shaded area (the baseline) is the normal pattern of sales for a promotional event of this nature. Clearly, the add-to-cart metric is underperforming. Correlating the relevant event (the sale) with the related metrics (page views and add-to-carts) underscores the irregularity of the drop in both metrics. When the event started, the team was alerted that the sales event did not yield the expected increase in either of the correlated metrics; in fact, page views actually dropped 46 percent compared to the expected spike, leading to a drop of 66 percent in add-to-carts. These drops were identified because the effect of the sales event (a variable external to the metric) was correlated with the values of the metric. If the correlation between the metrics and the event were not taken into account, the drop would have seemed like an increase.

Discovering the relationships among data metrics has many practical applications in business monitoring. Correlation analysis can help identify the root cause of a problem and vastly reduce the time to remediate the issue. It also helps group events together to reduce the number of alerts generated, in turn reducing alert fatigue among support personnel and the cost of investigating duplicative alerts.
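To make the grouping idea concrete, here is a minimal sketch (an illustrative toy, not Anodot's algorithm) that merges anomalous metrics into a single incident when their series are strongly correlated; the metric names and data are hypothetical:

```python
import numpy as np
from itertools import combinations

def group_anomalies(names, series, threshold=0.8):
    """Group anomalous metrics into incidents when their series are strongly correlated.

    A toy stand-in for ML-based grouping: metrics whose pairwise |correlation|
    exceeds `threshold` land in the same incident (via simple union-find).
    """
    parent = list(range(len(names)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(len(names)), 2):
        r = np.corrcoef(series[i], series[j])[0, 1]
        if abs(r) >= threshold:
            parent[find(i)] = find(j)  # merge the two metrics into one incident

    incidents = {}
    for i, name in enumerate(names):
        incidents.setdefault(find(i), []).append(name)
    return list(incidents.values())

# Hypothetical anomalous metrics observed over the same window.
t = np.linspace(0, 1, 50)
page_views = 100 - 40 * t + np.random.default_rng(0).normal(0, 1, 50)
add_to_cart = 60 - 25 * t + np.random.default_rng(1).normal(0, 1, 50)
cpu_load = np.random.default_rng(2).normal(50, 5, 50)

print(group_anomalies(["page_views", "add_to_cart", "cpu_load"],
                      [page_views, add_to_cart, cpu_load]))
# page_views and add_to_cart fall together, so they form one incident;
# the uncorrelated cpu_load metric stays separate.
```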
Payment gateway
Blog Post 4 min read

Payment gateway analytics for payment service providers

Anodot's payment gateway analytics provides clear visibility into the payments environment, enabling fast detection of transaction performance issues, anomalies and trends so that no revenue or customers are lost.
Payment transaction monitoring
Blog Post 5 min read

Increase approval rates with AI-based payment transaction monitoring

Autonomous monitoring creates the real-time visibility that is critical for faster detection of transaction issues and for optimizing approval rates and revenue.
Blog Post 5 min read

Proactively monitoring customer experience and network performance in real-time

Leading CSPs use AI-based autonomous monitoring of granular CX, performance and telemetry data to provide a flawless user experience by proactively mitigating incidents and service degradation.
Blog Post 6 min read

Network monitoring: The build vs. buy dilemma

CSPs opting to adopt an AI-based approach to network monitoring need to balance time to value and return on investment, while securing the best possible solution for their specific needs. Here’s what you need to know.
Blog Post 7 min read

Anodot vs. AWS: Which Has the Most Accurate Cloud Cost Forecasts?

When forecasting cloud costs, accuracy is key. So how did Amazon Forecast fare against Anodot? Read on for the results.
Blog Post 5 min read

Transforming the Gaming Industry with AI Analytics

In 2020, the gaming market generated over 177 billion dollars, marking an astounding 23% growth from 2019. While the revenue the industry generates is incredible, what's more impressive is the massive amount of data generated by today's games.

The Enormous Data Challenge in Gaming

There are more than 2 billion gamers globally, generating over 50 terabytes of data each day. The largest game companies in the world can host 2.5 billion unique gaming sessions and 50 billion minutes of gameplay in a single month.

The gaming industry and big data are intrinsically linked. Companies that develop capabilities in using that data to understand their customers will have a sizable advantage in the future. But doing this comes with its own unique challenges. Games have many permutations, with different game types, devices, user segments, and monetization models. Traditional analytics approaches, which rely on manual processes and operators viewing dashboards, are insufficient in the face of the sheer volume of complex data generated by games. Unchecked issues lead to costly incidents or missed opportunities that can significantly impact the user experience or the company's bottom line. That's why many leading gaming companies are turning to AI and machine learning to address these challenges.

Gaming Analytics AI

Gaming companies have all the data they need to understand who their users are, how they engage with the product, and whether they are likely to churn. The challenge is extracting valuable business insights from the data and taking action before opportunities pass and users leave the game. AI/ML helps bridge this gap by providing real-time, actionable insights on near-limitless data streams so companies can design around these analytics and act more quickly to resolve issues. There are two fundamental categories that companies should focus on to make the best use of their gaming data: customer engagement and user experience, and monetization.

Customer Engagement and User Experience

The revenue-generating opportunity in the gaming industry is one reason it's a highly competitive market. Keeping gamers engaged requires emphasizing the user experience and continuously delivering high-quality content personalized to a company's most valued customers. Graphics and creative storylines are still vital, and performance issues in particular can be a killer for user enjoyment and drive churn. But with a market this competitive, it might not be enough to focus strictly on these issues. Games can get an edge on the competition by investing in gaming AI analytics to understand user behaviors, likes, dislikes, seasonality impacts, and even what makes users churn or come back to the game after a break. AI-powered business monitoring solutions deliver value to the customer experience and create actionable insights that drive future business decisions and game designs, helping acquire new customers and prevent churn.

AI-Enhanced Monetization and Targeted Advertising

All games need a way to monetize. That's especially true in today's market, where users expect games to always be on and regularly deliver new content and features. A complex combination of factors influences how monetization practices and models enhance or detract from a user's experience with a game. When monetization frustrates users, it's typically because of aggressive, irrelevant advertising campaigns or models that aren't well suited to the game itself or its core players.
Observe the most successful products in the market, and one thing you will consistently see is highly targeted interactions. Developers can use metrics gleaned from AI analytics, combined with performance marketing, to appeal to their existing users and acquire new customers. With AI/ML, games can serve personalized ads that cater to the behavior of users or user segments in real time, optimizing the gaming experience and improving monetization outcomes. Using AI-based solutions, gaming studios can also quickly identify growth opportunities and trends with real-time insight into high-performing monetization models and promotions.

Mobile Gaming Company Reduces Revenue Losses from Technical Incident

One mobile gaming company suffered a massive loss when a bug in a software update disrupted a marketing promotion in progress. The promotion involved automatically pushing special offers and opportunities for in-app purchases across various gaming and marketing channels. When the bug disrupted the promotions process, the analytics team couldn't take immediate action because they were unaware of the issue. Their monitoring process was ad hoc, relying on manual review of multiple dashboards, and by the time they discovered the problem, it was too late. The result was a massive loss for the company: lost users, lost installations, and in the end, more than a 15% revenue loss from in-app purchases. The company needed a more efficient and timely way to track its cross-promotional metrics, installations, and revenue. A machine learning-based approach like Anodot's AI-powered gaming analytics provides real-time notifications to quickly find and react to any breakdowns in the system, and would have prevented the worst of the impacts.

Anodot's AI-Powered Analytics for Gaming

The difference between success and failure is how companies respond to the ocean of data generated by their games and their users. Anodot's AI-powered gaming analytics solutions can learn expected behavior in the complex gaming universe across all permutations of gaming, including devices, levels, user segments, pricing, and ads. Anodot's gaming AI platform is specifically designed to monitor millions of gaming metrics and help ensure a seamless gaming experience. Anodot monitors every critical metric and establishes a baseline of standard behavior patterns to quickly alert teams to anomalies that might represent issues or opportunities. Analytics teams see how new features impact user behavior, with clear, contextual alerts for spikes, drops, purchases, and app store reviews, without the need to comb through dashboards looking for helpful information. The online gaming space represents one of the more recent areas where rapid data collection and analysis can provide competitive differentiation. Studios using AI-powered analytics will keep themselves and their players ahead of the game.
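To ground the baseline idea in something concrete, the sketch below is a deliberately simplified illustration (not Anodot's model, which also learns seasonality and event context): it flags points that fall outside a rolling mean-and-deviation band for a hypothetical in-app purchase metric:

```python
import numpy as np

def rolling_baseline_alerts(values, window=24, k=3.0):
    """Flag points outside mean +/- k*std of the trailing window.

    A toy baseline: real monitoring systems also model seasonality,
    trend, and influencing events.
    """
    alerts = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mu, sigma = history.mean(), history.std()
        if abs(values[i] - mu) > k * max(sigma, 1e-9):
            alerts.append(i)
    return alerts

# Hypothetical hourly in-app purchase counts with an injected incident.
rng = np.random.default_rng(42)
purchases = rng.normal(1000, 30, 200)
purchases[150:160] -= 400  # simulated incident: purchases collapse

print("anomalous hours:", rolling_baseline_alerts(purchases))
```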
Blog Post 11 min read

How Influencing Events Impact the Accuracy of Business Monitoring

Businesses are flooded with constantly changing thresholds brought on by seasonality, special promotions and changes in consumer habits. Manual monitoring with static thresholds can't account for events that do not occur in a regularly timed pattern. That's why historical context of influencing events is critical in preventing false positives, wasted resources and disappointed customers.

Typically, when forecasting a metric's future values, its past values are used to learn patterns of behavior. However, in many cases it is not possible to produce accurate forecasts without additional knowledge about what influences a metric's behavior. Specifically, the observed patterns in the metric are often influenced by special events that are known to occur at specific times, both in the past and the future.

What is an Event?

An event is a special occurrence in time that is known in advance of it happening. An event recurs in the lifecycle of a metric, but not necessarily on a regular basis or cadence. This is better explained with a few examples:

Holidays that aren't a fixed date but rather depend on a time of the month or year. Consider the U.S. observation of Thanksgiving, which is always the fourth Thursday of November; the Jewish observation of Hanukkah, which may occur at any time from late November to late December; or the Muslim observation of Eid al-Fitr, whose date of celebration depends on the cycle of the moon.

Sales and marketing events that are often tied to a season or special celebration. Examples include the Black Friday shopping day(s) that follow the U.S. Thanksgiving holiday, Back to School sales, post-holiday clearance sales, and Amazon's annual "Prime Day" sale.

Sporting events, which may be local or regional. For example, both the Super Bowl and the World Cup have a big effect on sales of beer and snack foods, attendance at sports bars, and sales of team merchandise. Regional sporting events can have a similar, more local effect.

Other examples of events include weather (blizzard days, heavy rains, hurricanes, etc.); financial events (earnings releases, bank holidays, changes in interest rates, etc.); and technical events (deployment of a new software version, hardware upgrades, etc.).

These events are generally known, or at least can be anticipated, in advance. Even weather predictions have become accurate enough to know when significant weather events are going to happen in a particular locale. In the context of a machine learning (ML) based business monitoring system, events are useful in two ways: first, to understand why an incident occurred for root cause analysis (e.g., an increase in app crashes right after a new version release indicates that a bug in the new version caused the errors); and second, to improve the accuracy of the ML-based monitoring itself. By taking into account the expected influence of an event on the metrics being monitored, you can avoid false positives, reduce false negatives, and improve forecasting accuracy.

What is an Influencing Event?

An influencing event is an event that has a predictable and measurable impact on a metric's behavior when it occurs. For example, Cyber Monday is an influencing event on metrics that measure revenues for many e-commerce companies in the U.S. The impact of that event is almost universally a dramatic increase in revenues during the event.
If a machine learning business monitoring system does not consider the influence of such an event on the revenue metrics of an e-commerce company, the spike in revenues would appear to be an anomaly, and a false positive alert might be sent. On the other hand, when the influence of the event is accounted for, it can help identify real anomalies. For example, if this year's revenue on Cyber Monday is lower than the expectation learned by the system, an alert highlighting a drop in expected revenues will be sent, ideally in real time, so remediation actions can be taken to bring revenue back to the expected levels.

An influencing event can impact the baseline of a metric before, during and after the event takes place. To understand the logic of that statement, consider this example: Christmas is an annual event. Depending on the metrics you are monitoring, this event has multiple effects, both good and bad, that happen before Christmas Day, on Christmas Day, and after Christmas Day has passed. For merchants measuring revenue from sales, the days before Christmas are spike days. Christmas Day itself is a slow sales day for those merchants who are open for business. The days immediately following Christmas can see spiking sales again as shoppers look for post-holiday bargains. For rideshare companies, there can be an uptick in riders before the holiday as people socialize and get out and about, but Christmas Day is a drop day, as people tend to stay at home.

Sample Patterns in a Real Business Scenario

Consider a computer gaming company that occasionally runs events (called "quests") to stimulate interest in the game. Quests happen multiple times per month at irregular intervals, and each quest spans several days. For example, a quest might run for five days and be done, the next one starts in ten days, and the one after that starts 15 days after the second quest ends. An object of the game is to collect "coins," and the total coin count is one of the metrics the company measures. During a quest, the coin count has an interesting pattern: high on the first few days of the quest, then a smaller spike, and then a return to a normal steady pattern at the end of the quest. It looks something like this:

The gaming company wants to monitor the coin counts during a quest to learn if anything unusual is happening with the game. For example, if coin counts are down considerably from the normal usage pattern, it could mean that gamers are having trouble logging into the game. That would certainly be something to look into and remedy as soon as possible. This is why anomaly detection and alerting are so important.

In the scheme of machine learning and anomaly detection, these quests are influencing events that occur at irregular times. We can't apply a seasonality model to the machine learning process because the quests aren't seasonal; nor are they completely random. They are irregular, but important nonetheless. If the machine learning took place without consideration for the influencing events, the forecast of the coin counts compared to the actual coin counts would look something like the graph below. The shaded area represents the forecasted (baseline) range and the solid line is the actual data. It's a very inaccurate forecast, to say the least. There are many false positives in the timeline, and if a real issue with the game occurred during this period, it would not be detected as an anomaly.
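To make the idea of "telling the model about the event" concrete before looking at the fix, here is a minimal sketch under simplified assumptions (synthetic data and a plain least-squares fit, not Anodot's forecasting algorithm): a 0/1 quest-day indicator is added as an extra input alongside weekly seasonality terms, letting the model attribute the lift to the quest rather than treating it as noise:

```python
import numpy as np
from numpy.linalg import lstsq

# Toy daily metric: weekly seasonality plus a big lift on "quest" days.
rng = np.random.default_rng(0)
days = np.arange(120)
quest_days = np.zeros(120)
for start in (10, 37, 71, 101):          # irregularly scheduled quests
    quest_days[start:start + 5] = 1.0
coins = (1000 + 50 * np.sin(2 * np.pi * days / 7)
         + 400 * quest_days + rng.normal(0, 20, 120))

# Design matrix: intercept + weekly sine/cosine + the event regressor.
X = np.column_stack([
    np.ones(120),
    np.sin(2 * np.pi * days / 7),
    np.cos(2 * np.pi * days / 7),
    quest_days,                          # the influencing-event regressor
])
beta, *_ = lstsq(X, coins, rcond=None)
print(f"estimated quest lift: {beta[3]:.0f} coins/day")  # recovers ~400
```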
However, if the ML model is told when a quest is going to start (after all, quests are scheduled, not impromptu), the model can learn the pattern of the event. The baseline can incorporate that pattern and take it into account each time there is another quest. The resulting forecast versus actual looks something like this:

You can see the forecast is much more accurate, even with a very complicated pattern. Take note of the small square marker (circled in red) at the bottom left of the graph. This is the indicator that tells the model a quest is starting. When this marker is sent before the start of a quest, the forecast algorithm understands how to treat the incoming data because it has seen this pattern before. In mathematical terms, the influencing event is called a regressor, and it's critical to incorporate it into the forecast algorithm to ensure better accuracy. The example below shows a real issue that happened during a quest. Because the baseline was accurate, the drop in activity was detected and the issue was quickly fixed to get the game up and running as normal.

Challenges of Learning the Impact of Influencing Events

You can see just how important it is for accuracy that a model learn the impact of an influencing event. This is far easier said than done. There are some relatively difficult challenges in having the mathematical model accurately and automatically learn the impact of influencing events. The three main challenges are:

1. Automatically identifying whether a group of historical events has any influence on a given time series

To an ML model, it's not inherently apparent whether a group of events, like Black Friday occurring over a period of years or the gaming company's quests over the span of a year, has an actual impact on the metrics. The first part of the challenge is to figure out whether the group of events has an impact. The second part is, if it does, how occurrences of the events can be identified automatically, without human intervention. For example, the gaming company measures many other metrics besides the coin count, so how can you tell that it is indeed the quest that influences the coin count and not something else? And how can this be recognized automatically?

2. If the group does have an influence, identifying the pattern of that influence accurately and robustly, both before and after the event date

Suppose you've determined that the group of events influences the metric's pattern. The event has a pattern of its own, and the challenge is to learn that pattern robustly and accurately. Two main factors make this hard:

Separating the event effect from the normal pattern: The pattern of the event needs to be separated from the normal pattern of the metric occurring at the same time. For example, a metric measuring viewership during an event like the Super Bowl is composed of the normal viewership pattern plus the added viewership due to the Super Bowl itself. To accurately and robustly learn the pattern of influence of the event, techniques such as blind source separation are required, and the assumptions behind those techniques must be validated during the learning process.

Causal and non-causal effects: A complication is that sometimes there is an impact even before the event starts. You can't assume the impact of an event will begin only when the event itself does.
3. Given a set of many events, automatically grouping them into multiple groups, where each group has a similar influence pattern and there is a clear mechanism for identifying, from an event's description, which group it belongs to

All sorts of groups of events can have an influence on a particular metric, and sometimes different events have almost identical patterns. If these events can be grouped together, learning the pattern and its impact becomes faster and easier. Say you are measuring revenue for a rideshare company. This company sees spikes on New Year's Eve in all major cities and on St. Patrick's Day in New York and Boston, because people like to go out to celebrate on those days. The ridership patterns for these events are almost identical. When you have many events with similar patterns, you want to group them, because grouping makes learning about them more accurate. What's more, the groupings provide more data samples, so less historical data is needed to learn the pattern.

Despite the challenges highlighted above, automatically including influencing events in the machine learning model is critically important for two key reasons. First, it reduces the generation of false positive alerts. Second, it enables capturing when a special event you are running is not performing as expected. Consider an e-commerce company whose Black Friday sale event has lower sales than expected. By incorporating this influencing event in the ML model, the merchant can see that sales are off for some reason and can begin investigating the cause before revenues are significantly impacted.
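As a toy illustration of the third challenge (hypothetical event names and impact profiles, and a greedy rule standing in for a real clustering algorithm), the sketch below groups events whose day-by-day impact profiles are strongly correlated:

```python
import numpy as np

def group_event_profiles(profiles, threshold=0.9):
    """Greedily group events whose impact profiles are strongly correlated.

    `profiles` maps event name -> per-day metric lift around the event.
    A toy stand-in for automatic event grouping; a real system must also
    learn the profiles themselves and validate them statistically.
    """
    groups = []
    for name, prof in profiles.items():
        for group in groups:
            rep = profiles[group[0]]          # compare against group representative
            if np.corrcoef(prof, rep)[0, 1] >= threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

# Hypothetical lift profiles for (day -1, day 0, day +1) around each event.
profiles = {
    "new_years_eve_nyc":  np.array([1.4, 2.0, 0.7]),
    "st_patricks_boston": np.array([1.3, 1.9, 0.8]),
    "christmas_day":      np.array([1.6, 0.4, 1.2]),
}
print(group_event_profiles(profiles))
# -> [['new_years_eve_nyc', 'st_patricks_boston'], ['christmas_day']]
```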
Blog Post 4 min read

Curb network incidents fast with cross-domain correlation analysis

AI-based cross-domain correlation analysis reduces the TTD and TTR of customer-impacting incidents, minimizing revenue loss and damage to brand reputation.