Anodot Resources

Anomaly detection template
Blog Post 6 min read

9 Key Areas to Cover in Your Anomaly Detection RFP

Learn how to evaluate anomaly detection solutions: why you need an RFP template, which key topics you should cover, and why they're important. Download an RFP template here.
ecommerce analytics
Blog Post 8 min read

How Correlation Analysis Boosts the Efficacy of eCommerce Promotions

We continue our series on correlation analysis by examining use cases in eCommerce, especially in optimizing promotions and ad spend.
Blog Post 8 min read

Correlation Analysis: A Natural Next Step for Anomaly Detection

The first article in a three-part series on correlation analysis explains why pairing it with anomaly detection greatly facilitates root cause analysis, reducing time to detection (TTD) and time to remediation (TTR) and alleviating alert fatigue.
Blog Post 10 min read

The Rise of FinOps

Business Monitoring: The Future is Here and it’s Autonomous
Blog Post 8 min read

The Future of Business Monitoring is Here & it’s Autonomous

Many companies have tried to feed business data, such as business activity, into IT or APM monitoring solutions, only to discover the data is too dynamic for static thresholds. Some companies choose to rely on BI dashboards to find issues, but that leaves anomaly detection to chance. As companies have tried to solve these challenges, AI is driving a future where business data is monitored autonomously.
Blog Post 6 min read

Stay Ahead of Cloud Costs: Real-Time AWS Monitoring and Forecasting with Anodot

Don't settle for daily cloud cost reports: Rein in your AWS cloud costs with real-time, AI-driven monitoring.
Good Catch Cloud Cost Monitoring
Blog Post 5 min read

Good Catch: Cloud Cost Monitoring

Aside from ensuring each service is working properly, one of the most challenging parts of managing a cloud-based infrastructure is cloud cost monitoring. There are countless services to keep track of (storage, databases, cloud computing, and more), each with its own complex pricing structure. Cloud cost monitoring is essential for both cloud cost management and optimization. But monitoring cloud spend is quite different from monitoring other organizational costs, because it can be difficult to detect anomalies in real time and to accurately forecast monthly costs.

Many cloud providers, such as AWS, Google Cloud, and Azure, provide a daily cost report, but in most cases this is not enough. For example, if someone incorrectly queries a database for a few hours, costs can skyrocket, and with a daily report you wouldn't detect the spike until it's too late.

There are cloud cost management tools that help you interpret costs, but these often fall short as well, because they don't provide the granularity that real-time monitoring requires. Without a real-time alert to detect and resolve the anomaly, the potential to negatively impact the bottom line is significant.

As we'll see from the examples below, only an AI-based monitoring solution can effectively monitor cloud costs. Anodot's holistic cloud monitoring solution consists of three layers:

Cost monitoring: Instead of just providing generic cloud costs, AI-based monitoring reports costs specific to the service, region, team, and instance type. When anomalies do occur, this level of granularity allows for a much faster time to resolution.

Usage monitoring: The next layer consists of monitoring usage on an hourly basis. If usage spikes, you don't need to wait a full day to resolve the issue and can actively prevent cost increases.

Cost forecasting: Finally, the AI-based solution can take in every cloud-based metric, even in multi-cloud environments, learn its normal behavior on its own, and create cost forecasts that allow for more effective budget planning and resource allocation.

Now that we've discussed the three layers of AI-based cloud cost monitoring, let's review several real-world use cases.

Network Traffic Spikes

In the example below, the service is an AWS EC2 instance monitored on an hourly basis. The service experienced a 1,000+ percent increase in network traffic, from 292.5M to 5.73B, over the course of three hours. If the company had relied on a daily cloud cost report, the spike would have been missed and costs would have skyrocketed, since the network traffic would likely have stayed at this heightened level at least until the end of the day. With a real-time alert sent to the appropriate team, paired with root cause analysis, the anomaly was resolved promptly, ultimately resulting in cost savings for the company.

Spike in Average Daily Bucket Size

The next use case is from an AWS S3 service on an hourly time frame. In this case, the first alert was sent regarding a spike in head requests by bucket. Bucket sizes can go up and down frequently, but if you only look at the current bucket size, you often don't know how much you're using relative to normal levels.

The key difference in the example below is that, instead of simply looking at absolute values, Anodot's anomaly detection was looking at the average daily bucket size. The spike in bucket size is not larger than the typical spikes; what is anomalous is the time of day at which it occurred. By looking at the average daily bucket size and monitoring on a shorter time frame, the company received a real-time alert and was able to resolve the issue before it incurred a significant cost.

Spike in Download Rates

A final example of cloud cost monitoring comes from the AWS CloudFront service, again monitored on an hourly timescale. In this case, there was an irregular spike in the rate of CloudFront bytes downloaded. As in the other examples, if the company had only been monitoring costs reactively at the end of the day, this could have severely impacted the bottom line. By taking a proactive approach to cloud cost management with AI and machine learning, the anomaly was quickly resolved and the company saved a significant amount of otherwise wasted spend.

Summary: Cloud Cost Monitoring

As we've seen from these three examples, managing a cloud-based infrastructure requires a highly granular solution that can monitor 100 percent of the data in real time. If unexpected cloud activity isn't tracked in real time, it opens the door to runaway costs, which in most cases are entirely preventable.

In addition, it is critical that cloud teams understand the business context of their cloud performance and utilization. An increase in cloud costs might be a result of business growth, but not always. Understanding whether a cost increase is proportionately tied to revenue growth requires context that can be derived only through AI monitoring and cloud cost management.

AI models allow companies to become proactive, rather than reactive, in their cloud financial management by catching and alerting on anomalies as they occur. Each alert is paired with a deep root cause analysis so that incidents can be remediated as quickly as possible. By distilling billions of events into a single scored metric, IT teams can focus on what matters, leave alert storms, false positives, and false negatives behind, gain control over their cloud spend, and proactively work toward cloud cost optimization.
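Anodot does not publish its detection models, but the core pattern described above, tracking each service/region/team/instance-type combination at hourly granularity and comparing new samples against a continuously learned baseline instead of waiting for a daily report, can be sketched in a few lines of Python. The code below is a hypothetical, simplified illustration: the class and method names are made up, and an exponentially weighted mean and variance stands in for a real learned model.

```python
"""Minimal sketch of hourly, per-dimension cloud cost monitoring.

This illustrates the idea described above and is NOT Anodot's algorithm:
each (service, region, team, instance type) series is scored hourly against a
continuously updated baseline, so a spike surfaces within an hour or two
instead of in the next day's cost report. All names are hypothetical.
"""
from collections import defaultdict
from datetime import datetime
import math


class HourlyCostMonitor:
    def __init__(self, alpha: float = 0.1, sigmas: float = 4.0, warmup: int = 24):
        self.alpha = alpha            # how quickly the baseline adapts
        self.sigmas = sigmas          # how many std devs count as anomalous
        self.warmup = warmup          # hours of history required before alerting
        self.mean = defaultdict(float)
        self.var = defaultdict(float)
        self.count = defaultdict(int)

    def ingest(self, ts: datetime, service: str, region: str, team: str,
               instance_type: str, hourly_cost: float):
        """Score one hourly cost sample; return an alert dict or None."""
        key = (service, region, team, instance_type)
        alert = None
        if self.count[key] >= self.warmup:
            std = max(math.sqrt(self.var[key]),
                      0.01 * abs(self.mean[key]) + 1e-9)   # avoid zero-variance noise
            deviation = abs(hourly_cost - self.mean[key]) / std
            if deviation > self.sigmas:
                alert = {"dimensions": key, "time": ts.isoformat(),
                         "cost": hourly_cost, "deviation": round(deviation, 1)}
        # Update the baseline (exponentially weighted mean and variance),
        # so "normal" keeps tracking gradual changes in spend.
        m = self.mean[key] if self.count[key] else hourly_cost
        self.mean[key] = (1 - self.alpha) * m + self.alpha * hourly_cost
        self.var[key] = (1 - self.alpha) * (self.var[key]
                                            + self.alpha * (hourly_cost - m) ** 2)
        self.count[key] += 1
        return alert
```

A production system would also model daily and weekly seasonality and correlate related metrics before alerting; this sketch only shows why hourly scoring catches incidents that a daily report hides.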
Autonomous Monetization Monitoring in Gaming
Blog Post 6 min read

A Guide to Autonomous Monetization Monitoring for the Gaming Industry

Similar to other companies in the entertainment industry, gaming companies typically drive revenue from three sources: in-app purchases, ads, and subscriptions. Examples include different in-app purchase options for each game and various ad units from multiple ad networks. While this diversity in revenue streams may be advantageous from a business perspective, from a technical standpoint it creates numerous challenges. In particular, every time an in-app purchase fails, an ad isn't displayed correctly, or a user isn't converted to a paying customer, revenue is lost.

Navigating Permutational Complexities

Experienced game developers understand each of their games' permutational complexities, spanning operating systems, user segments, devices, promotional strategies, and more. Each of these permutations not only presents unique technical challenges but also must be monitored constantly in order to prevent monetization failures.

The Limitations of Traditional Monitoring

In the past, many companies have tried traditional monitoring and alerting methods, but the inherent complexities mentioned above often make this unfeasible. Either the dashboards and manual thresholds miss an anomaly because it is too granular, or the system generates too many false positives. As we'll discuss in this guide, autonomous AI-based proactive monitoring is the solution to dealing with the complexities of gaming analytics.

What is Autonomous Monetization Monitoring?

Proactively monitoring the revenue streams of gaming companies often involves tracking thousands of metrics and billions of events each day. Autonomous monitoring not only observes each individual metric but also automatically learns the normal behavior of each one on its own, using a branch of machine learning called unsupervised learning. As described in our guide on unsupervised anomaly detection:

"Unsupervised machine learning algorithms, however, learn what normal is, and then apply a statistical test to determine if a specific data point is an anomaly. A system based on this kind of anomaly detection technique is able to detect any type of anomaly, including ones that have never been seen before."

In other words, unsupervised learning can be used to monitor 100% of the data, identify anomalies that lead to revenue losses, and alert the relevant team in real time. Keep in mind that none of these alerts are threshold-based; instead, they constantly adjust on their own based on the learned normal behavioral patterns. Each anomaly is also paired with a root cause analysis of the incidents that affect revenue streams or the user experience, which allows the technical team to identify what's causing the incident and achieve the fastest possible time to resolution.

In the context of gaming, Anodot's autonomous monitoring solution has proven to reduce losses associated with monetization errors by up to 70% for game studios such as King, Gamesys, Outfit7, Moonactive, and more. To do this, the AI-based solution monitors three core monetization channels:

In-app purchases
Ads
Subscriptions

Now that we've discussed what autonomous monetization monitoring is, let's look at several real-world examples.

Use Case: Autonomous Monitoring for Gaming

In this section, we'll review two use cases of autonomous monitoring for gaming: in-app purchases and drops in ad impressions.

Monitoring In-App Purchase Funnels

One game studio started using autonomous monitoring for its in-app purchase monetization. Here are a few highlights from their experience:

In just the past 6 months, 57 anomalies triggered alerts in real time based on spikes in purchase failures.
These purchase failures resulted from various technical bugs, version updates, payment gateway issues, and more.
With the use of AI-based correlation analysis, the team was able to remediate the issues within hours instead of days.
They estimate that, because of their faster time to resolution, they saved $800K in the past 6 months.

Below, you can see three of the purchase failure rate metrics, measured across different games, platforms, and versions. The shaded blue area represents the normal range of each metric, and each anomaly outside this normal behavior is highlighted in orange. As the purchase failure metrics shift over time, the normal range is adjusted automatically by Anodot's algorithms. During the same period, there were a total of 64 purchase-failure-related alerts, which means the detection rate was 92%.

Real-Time Alerts on Drops in Ads Shown

Another example of autonomous monitoring for gaming comes from a game studio with over 500 million monthly active users (MAU). The company wanted to monitor ad impressions in its games, which are of course tied to the company's bottom line. Here are a few highlights from their experience implementing autonomous monitoring:

In the past 6 months, 18 alerts were triggered indicating a drop in ad impressions across various games, platforms, and ad networks.
17 of the 18 alerts were confirmed to be significant revenue-impacting incidents, making the detection rate 94.4%.
The company estimates that it saved $153K by detecting and resolving these 17 incidents in near real time.

As you can see below, the blue shaded area represents the normal ad impression range. The drop in ads shown occurred for multiple games and platforms, apart from a single ad provider (Facebook). These anomalies were not only alerted on in real time but were also correlated into a single alert, resulting in faster remediation and avoiding an alert storm.

Summary: Autonomous Monetization Monitoring

As we've discussed, monitoring the various revenue streams of gaming companies is a highly complex undertaking. While experienced developers understand each of their games' permutational complexities, monitoring them in real time still presents a unique challenge. Some companies have tried using manual thresholds in the past, but that approach either lacks sufficient granularity or triggers false positives, leaving the technical team with an alert storm. Instead, AI-based autonomous monetization monitoring is the only solution that can observe every single metric, learn its normal behavior on its own, and identify anomalies in real time. Catching and resolving these anomalies not only drastically enhances the user experience but ultimately helps companies improve their bottom line.
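The guide quoted above describes unsupervised detection only at a high level. As a rough, hypothetical illustration of the two ideas it mentions, learning each metric's normal behavior without labels and then grouping concurrent anomalies into a single correlated alert, a sketch in Python might look like the following. The metric-naming convention and the simple 3-sigma test are stand-ins, not how Anodot actually models metrics.

```python
"""Simplified sketch of the two ideas described above: (1) learn each metric's
normal behavior without labeled examples and flag statistical outliers, and
(2) group anomalies that fire at the same time into one correlated alert so
the team gets a single incident instead of an alert storm. This is an
illustration, not Anodot's implementation."""
from collections import defaultdict
from statistics import mean, stdev


def detect_anomalies(history: dict[str, list[float]],
                     latest: dict[str, float],
                     sigmas: float = 3.0) -> list[str]:
    """Return the metrics whose latest value falls outside their learned range."""
    anomalous = []
    for metric, values in history.items():
        current = latest.get(metric)
        if current is None or len(values) < 10:   # not enough data to define "normal"
            continue
        mu, sd = mean(values), stdev(values) or 1e-9
        if abs(current - mu) > sigmas * sd:
            anomalous.append(metric)
    return anomalous


def correlate(anomalous: list[str]) -> dict[str, list[str]]:
    """Group concurrent anomalies by a shared dimension (here, the game) so one
    incident produces one alert, e.g. 'game_a.ios.purchase_fail_rate'."""
    groups = defaultdict(list)
    for metric in anomalous:
        groups[metric.split(".")[0]].append(metric)
    return dict(groups)


# Hypothetical usage: score the latest hourly values for every tracked metric,
# then send one alert per correlated group rather than one per metric.
# alerts = correlate(detect_anomalies(history, latest))
```

In a real system, "normal" would be a seasonal, continuously updated model rather than a flat mean and standard deviation, and correlations would be learned from the data rather than read off a naming convention.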
Good Catch
Blog Post 4 min read

Good Catch: Monitoring Revenue When it Matters Most

Revenue monitoring not only involves monitoring huge amounts of data in real time but also finding correlations between thousands, if not millions, of customer experience and other metrics. Are traditional monitoring methods capable of detecting a correlation between a drop in user log-ins and a drop in revenue as it's happening? For many reasons, the answer is no.

The Power of AI-Based Monitoring

To kick off our "Good Catch" series, we're sharing anomalies that Anodot caught for our customers, who flagged each of them as a "good catch" in our system. For an online gaming customer, Anodot alerted them to a drop in log-ins and correlated the anomaly to a spike in command errors, an incident that negatively impacted revenue. A traditional monitoring system might have been able to catch a drop in revenue as it occurred, but without machine learning, this company would only have caught the connection between these two anomalies if an analyst had happened to stumble upon them. An unlikely scenario. The customer managed to release a fix within 3 hours, saving them a significant amount of otherwise lost revenue.

Adapting to Market Changes with AI

Given the subtlety of this alert at the start, static thresholds would have taken longer to fire. With the impact of COVID-19 on the travel industry, affected businesses that rely on static thresholds have had to manually adjust those settings to the new norm. They would need to readjust those settings again as travel bookings pick up, although at this time no one can accurately predict when that will be. An AI-based monitoring solution, on the other hand, can adapt to the new normal without any human intervention. In particular, Anodot's unsupervised learning algorithms are able to monitor thousands of metrics simultaneously and understand the normal behavior of each individual one. This ability to adapt to changing market conditions and consumer behavior can drastically improve a company's ability to adjust growth and demand forecasts in real time, both of which can significantly contribute to the bottom line.

As you can see below, the shaded blue area represents the normal range of the data. As the COVID-19 closures occurred in mid-March, the AI monitoring solution was able to adjust its normal range and catch up with the global changes in bookings within days. Toward the end of the graph, there is also an increase back toward the original range, which happened without any human intervention or a need to adjust a static threshold.

Real-Time Detection in a Complex Environment

A final example of the difficulty of building your own monitoring system is the fact that you're dealing with human-generated data, which is incredibly volatile, irregular, and seasonal. For example, the image below is from a gaming company, and you can clearly see the seasonal pattern of gamers playing more on weekends and evenings. In this example, someone on the team released a hotfix containing a critical bug that prevented players from completing in-game purchases. Luckily, their anomaly detection solution was able to detect and alert on the error in real time, and root cause analysis led the developers directly to the recent release. Since there is such a high degree of seasonality in this user-generated data, unlike a traditional BI tool, an AI solution is able to calculate normal usage for each hour and day and adapt accordingly.

These incidents give you an under-the-hood look at the complexity of monitoring business metrics, which requires:

an adaptive baseline
seasonality
granular visibility
monitoring at scale to correlate related anomalies and events
real-time detection

In the next post, we'll look at why these aspects also come into play for finding hidden incidents that might otherwise go undetected in your partner networks and affiliate programs.
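The adaptive, seasonality-aware baseline described above can be sketched in a few lines. The class below is a hypothetical illustration, not Anodot's model: it keeps one expected value per hour of the week (so weekend evenings are compared with weekend evenings), and it keeps re-learning from new data, which is how a shift like the mid-March booking drop moves the normal range within days without anyone editing a threshold.

```python
"""Illustrative sketch of an adaptive, seasonal baseline, assuming an
exponentially weighted mean/variance per hour of the week as a stand-in for a
real learned model. Not Anodot's algorithm."""
from datetime import datetime
import math


class SeasonalBaseline:
    def __init__(self, alpha: float = 0.05, sigmas: float = 3.0):
        self.alpha = alpha        # higher alpha = faster adaptation to shifts
        self.sigmas = sigmas      # width of the "normal range" in std devs
        self.mean = {}            # hour-of-week bucket -> expected value
        self.var = {}             # hour-of-week bucket -> expected spread
        self.count = {}           # hour-of-week bucket -> samples seen so far

    @staticmethod
    def _bucket(ts: datetime) -> int:
        return ts.weekday() * 24 + ts.hour    # 168 buckets: Mon 00:00 .. Sun 23:00

    def normal_range(self, ts: datetime):
        """Return (low, high) for this hour of the week, or None while learning."""
        b = self._bucket(ts)
        if self.count.get(b, 0) < 4:          # still learning this hour of the week
            return None
        half_width = self.sigmas * math.sqrt(self.var[b])
        return self.mean[b] - half_width, self.mean[b] + half_width

    def observe(self, ts: datetime, value: float) -> bool:
        """Score one sample against the current range, then let the range adapt."""
        rng = self.normal_range(ts)
        anomalous = rng is not None and not (rng[0] <= value <= rng[1])
        b = self._bucket(ts)
        m = self.mean.get(b, value)
        v = self.var.get(b, 0.0)
        self.mean[b] = (1 - self.alpha) * m + self.alpha * value
        self.var[b] = (1 - self.alpha) * (v + self.alpha * (value - m) ** 2)
        self.count[b] = self.count.get(b, 0) + 1
        return anomalous
```

A drop like the one in mid-March is flagged the first few times it crosses the learned range; as the new level persists, the exponentially weighted update pulls the range toward it, which is the "catching up within days" behavior described above.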