Resources


Good Catch
Blog Post 4 min read

Good Catch: Monitoring Revenue When it Matters Most

Revenue monitoring involves not only tracking huge amounts of data in real time but also finding correlations between thousands, if not millions, of customer experience and other metrics. Are traditional monitoring methods capable of detecting a correlation between a drop in user log-ins and a drop in revenue as it's happening? For many reasons, the answer is no.

The Power of AI-Based Monitoring

To kick off our "Good Catch" series, we're sharing anomalies that Anodot caught for our customers, who flagged each of them as a "good catch" in our system. For an online gaming customer, Anodot alerted on a drop in log-ins and correlated the anomaly to a spike in command errors, an incident that negatively impacted revenue. A traditional monitoring system might have caught the drop in revenue as it occurred, but without machine learning, this company would only have caught the connection between the two anomalies if an analyst had happened to stumble upon them. An unlikely scenario. The customer released a fix within three hours, saving a significant amount of otherwise lost revenue.

Adapting to Market Changes with AI

Given how subtle this alert was at the start, static thresholds would have taken longer to fire. With the impact of COVID-19 on the travel industry, affected businesses that rely on static thresholds have had to manually adjust those settings to the new norm. They will need to readjust those settings again as travel bookings pick up, although at this time no one can accurately predict when that will be. An AI-based monitoring solution, on the other hand, adapts to the new normal without any human intervention. In particular, Anodot's unsupervised learning algorithms monitor thousands of metrics simultaneously and learn the normal behavior of each individual one. This ability to adapt to changing market conditions and consumer behavior can drastically improve a company's ability to adjust growth and demand forecasts in real time, both of which contribute significantly to the bottom line.

As you can see below, the shaded blue area represents the normal range of the data. When the COVID-19 closures occurred in mid-March, the AI monitoring solution adjusted its normal range and caught up with the global change in bookings within days. Toward the end of the graph, there is also an increase back toward the original range, which happened without any human intervention or adjustment of a static threshold.

Real-Time Detection in a Complex Environment

A final example of the difficulty of building your own monitoring system is the fact that you're dealing with human-generated data, which is incredibly volatile, irregular, and seasonal. The image below, from a gaming company, clearly shows the seasonal pattern of gamers playing more on weekends and evenings. In this example, someone on the team released a hotfix containing a critical bug that prevented players from completing in-game purchases. Luckily, their anomaly detection solution detected and alerted on the error in real time, and root cause analysis led the developers directly to the recent release. Because this user-generated data is so seasonal, an AI solution, unlike a traditional BI tool, can calculate the normal usage for each hour and day and adapt accordingly.
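To make the idea of a seasonal, adaptive baseline concrete, here is a minimal sketch of learning a normal range per hour of day and day of week. It illustrates the general technique only, not Anodot's actual algorithm; the function names and the 3-sigma rule are our own simplifications.

```python
# Minimal sketch of a seasonal baseline: learn the mean and spread of a
# metric per (day-of-week, hour) bucket, then flag points that fall far
# outside that bucket's learned range. Illustration only.
from collections import defaultdict
from statistics import mean, stdev

def fit_seasonal_baseline(points):
    """points: iterable of (datetime, value) pairs.
    Learns a (mean, std) pair per (day-of-week, hour) bucket."""
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[(ts.weekday(), ts.hour)].append(value)
    return {bucket: (mean(vals), stdev(vals))
            for bucket, vals in buckets.items() if len(vals) > 1}

def is_anomalous(baseline, ts, value, n_sigma=3.0):
    """Flag a point that falls outside its bucket's normal range."""
    stats = baseline.get((ts.weekday(), ts.hour))
    if stats is None:
        return False          # not enough history for this bucket yet
    mu, sigma = stats
    return abs(value - mu) > n_sigma * sigma
```

With hour and weekday baked into the baseline, a weekend-evening spike in gaming traffic falls inside its own bucket's learned range instead of triggering a false alert.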
These incidents give you an under-the-hood look at the complexity of monitoring business metrics, some of which includes:
an adaptive baseline
seasonality
granular visibility
monitoring at scale to correlate related anomalies/events
real-time detection
In the next post, we'll look at why these aspects also come into play for finding hidden incidents that might otherwise go undetected in your partner networks and affiliate programs.
Subscription Payment Monitoring
Blog Post 7 min read

Using AI to Autonomously Monitor Your Subscription Payment Model

While single-transaction revenue models tend to fluctuate with the seasonality of markets, subscription plans offer more predictable revenues. And while that consistency can certainly be advantageous over one-off transactions, it's notoriously challenging to keep subscribers active. The cornerstone of managing and scaling a subscription-based business is monitoring the KPIs that influence top-line revenue, such as:
conversion rate
churn rate
retention rate
One of Anodot's customers runs a subscription-based business with transactions from various countries, in different languages and on different devices. Before working with Anodot, the company experienced an error in the customer sign-up process where the SMS verification was broken, but only for customers in Russia using Android devices. Without the SMS verification step, anybody in this demographic was unable to subscribe or process their payment for three weeks. Instead of the anomaly being detected in real time, the bug went unnoticed by their traditional monitoring methods, resulting in three weeks of frustrated customers and hundreds of thousands in lost revenue. As soon as they realized how much time and money could have been saved with machine learning, the company immediately went looking for an AI-based revenue monitoring solution. While you can theoretically use traditional monitoring methods such as statistical models or BI tools, they lack the granularity, scalability, and accuracy to find and alert on an anomaly like the one described above.

What is Revenue Monitoring?

If you're running a subscription-based business, you know that you need to constantly monitor performance metrics such as website traffic, bounce rate, time on site, and many others. The same concept applies to revenue data. Revenue monitoring refers to the process of tracking the KPIs and metrics that influence overall revenue, such as conversion rate, subscriber growth, subscribers per location, and so on.

Revenue data is often made up of billions of data points that are influenced by human behavior. In particular, as discussed in our white paper on business monitoring, these metrics pose a unique challenge for three main reasons:
Context: Revenue-related business metrics often can't be evaluated in absolute terms, with set maximum and minimum thresholds. They have to be evaluated in relation to a set of changing conditions.
Topology: It's much easier to track the relationships between machines; the same is not true for business metrics, where the relationships and correlations between metrics are dynamic and volatile.
Volatility: Business metrics often have irregular sampling rates, which requires significant adaptation of how data is stored and how the algorithms work.
Due to the dynamic nature of revenue-related metrics, monitoring that data manually or with static thresholds can easily deluge teams with false positives or, worse yet, allow false negatives to go unnoticed. So how can companies overcome these hurdles to scalable monitoring and ensure the consistency and predictability of their revenue? Many companies are enhancing their analytics stack with machine learning, automating their monitoring and anomaly detection.

Applying AI to Revenue Monitoring

AI-based revenue monitoring means that the solution learns the usual behavior of each performance metric on its own, without being given static thresholds.
Machine learning's ability to process huge amounts of data and derive insights, patterns, and correlations means that it can provide the granularity and scalability that subscription-based businesses need to effectively monitor their revenue. The AI-based solution can then notify the appropriate team in real time when anomalous events occur. Returning to our earlier example of monitoring location-based changes in subscription renewals, below is a visualization of how AI-based anomaly detection can be used for revenue monitoring. In particular, the model is monitoring active subscribers of a particular plan for a given location.

In this example, the blue line is the actual user behavior, and the shaded blue area is the expected behavior based on what the machine learning algorithm has previously learned. This is an example of unsupervised learning, in which the machine learning algorithms derive patterns and structure from unlabeled data. Anodot, for example, uses sequential adaptive learning to learn the normal behavior of each metric; each new data point is then evaluated in relation to this learned behavior going forward. If we wanted to monitor additional metrics, such as each individual traffic source, referrals from affiliates, and so on, you can see how this would simply be too complex for a traditional BI tool.

As highlighted in our guide on Revenue Monitoring with AI, a few reasons why an AI-based solution is so advantageous over traditional monitoring include:
AI can learn and monitor each revenue stream by itself: Since the revenue streams from each subscription plan are unique, it's crucial to monitor each one on its own instead of simply monitoring revenue as a single metric.
AI can monitor metrics such as traffic and conversion rates simultaneously: If an anomalous event does occur, for example a significant drop in revenue, being able to monitor not only top-line revenue but also the events that lead up to a purchase (such as traffic, conversion rates, etc.) means you can immediately identify the root cause.
AI can correlate metrics and events in real time for the shortest time to resolution: In the same way that AI can monitor multiple metrics simultaneously, it can also find correlations between metrics and events, so you know exactly what is causing the anomaly and can resolve it as quickly as possible.

Real World Examples of Revenue Monitoring for Subscription-Based Businesses

These are among the most common revenue-related metrics to monitor in a subscription business model.

Churn Rate Monitoring

Churn rate helps you identify how many customers you're losing over a given time period. Monitoring this metric is one example of leveraging customer experience monitoring for subscription-based businesses, as a high churn rate indicates that users either aren't getting enough value from the subscription or don't know how to use the product properly.

New Subscriber Monitoring

New subscriber monitoring is another use case of AI-based revenue monitoring. As you can see below, the anomaly detection solution constantly monitors for spikes or drops in registrations. The platform is also fully autonomous, so you don't need to set up and update typical subscriber thresholds. The AI can autonomously monitor billions of events, distill them into a single score, and send impact alerts when you need them most.
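As a rough illustration of what sequential adaptive learning can look like in practice, the sketch below keeps an exponentially weighted estimate of a metric's mean and variance, scores each new point against the current estimate, and only then folds the point into the estimate. This is a simplified stand-in for the idea described above, not Anodot's proprietary model; the class name, the smoothing factor, and the 3-sigma alert rule are all illustrative assumptions.

```python
# Illustrative sketch of sequential adaptive learning: each metric keeps
# an exponentially weighted estimate of its mean and variance, scores
# every new point against the current estimate, then adapts to it.
class SequentialBaseline:
    def __init__(self, alpha=0.05):
        self.alpha = alpha        # how quickly the baseline adapts
        self.mean = None
        self.var = 0.0

    def score(self, x):
        """Return a z-like anomaly score for x, then adapt to it."""
        if self.mean is None:     # first observation: just initialize
            self.mean = x
            return 0.0
        delta = x - self.mean
        sigma = self.var ** 0.5
        z = abs(delta) / sigma if sigma > 0 else 0.0
        # Because the baseline keeps learning, a lasting shift (a "new
        # normal") stops alerting after a handful of updates.
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta ** 2)
        return z

detector = SequentialBaseline()
for x in [10, 11, 9, 10, 12, 8, 10, 11, 9, 10] * 5:  # warm-up history
    detector.score(x)
print(detector.score(40) > 3)  # True: the spike stands out
print(detector.score(10) > 3)  # False: back within the learned range
```

The key design choice is that the baseline never stops updating, which is why this kind of detector can absorb a COVID-style "new normal" without anyone retuning a threshold.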
Conversion Rate Monitoring

Conversion rate is another metric with a high impact on a company's revenue. A sudden drop in conversion rate could mean that something is broken on the website or, as in the example mentioned earlier, there could be a simple translation error. Catching these issues in real time can often save a significant amount of potentially lost revenue. Aside from monitoring purchase conversions, as discussed in our guide on Use Cases for Machine Learning, subscription-based businesses can also monitor existing customers from the time they log in until the time they log out. This allows you to identify anomalous user behavior, such as how customers are interacting with features in the product.

Revenue Monitoring for Subscription-Based Businesses

As we've discussed, revenue streams for subscription-based businesses are highly complex and often fragmented across products, plans, and locations. They are also highly susceptible to changes in conversion rate, churn rate, and many other performance metrics. The dynamic nature of this revenue data means that traditional monitoring methods, such as BI tools, simply don't offer the capabilities required. Instead, an AI-based anomaly detection solution can learn the usual behavior of each individual metric on its own and provide real-time alerts when it matters most. Regardless of the number of products offered or the number of active subscribers, catching these incidents in real time can often mean saving a significant amount of otherwise lost revenue.
Online Payments
Blog Post 7 min read

Monitoring Micro-Transaction Payment Models with AI

See how online businesses can use machine learning to more intelligently support teams as they monitor micro-transaction revenue.
Blog Post 3 min read

Anodot Raises $35M Led by Intel Capital

I’m very pleased to announce that we’ve just secured $35 million in funding, bringing our total capital raised to $62.5 million.
Download our Hilarious Zoom Virtual Backgrounds for Free
Blog Post 1 min read

Download Our Hilarious Virtual Backgrounds to Set the Stage for Your Zoom Meetings

All the Zoom meetings can get tiresome. Break free from the generic white wall as a background with this fun collection of virtual backgrounds, tried and tested by the Anodot team.
Blog Post 8 min read

Now's the Time to Perfect Your Customer Experience

Customer experience is tied to so many different areas of an app - product, customer support, and payments. How do you find small breaks in the chain? Most tools can't. Machine learning solutions are changing that.
Blog Post 5 min read

AI/ML - Are We Using It in the Right Context?

There used to be a distinct, technical separation between terms such as AI and machine learning (ML), but only while these technologies remained largely theoretical. As soon as they became practical in the real world, and then commodifiable into products, the marketers stepped in. Widespread overuse of the terms AI/ML in marketing has thoroughly confused the meanings of these words. You might think of this as a relatively minor issue, until you realize that it's been at the core of some deceptive practices. Research reported by The Verge has shown that up to 40 percent of European startups claiming to use AI are actually lying about or exaggerating their capabilities. In short, if you don't know what AI and ML are, or what the difference is between them, you're that much more likely to be sold a bill of goods when shopping for a product based on these technologies.

https://youtu.be/-Bouv9Q8YOI

What is Artificial Intelligence?

There's an automatic association between AI and sci-fi. When people think of artificial intelligence, they tend to think of the Terminator, Data from Star Trek, HAL from 2001, etc. These represent a very specific form of AI known as Artificial General Intelligence (also known as Strong AI): a digital form of consciousness that can match or exceed human-like performance across any number of tasks. An AGI would be equally good at solving math equations, conducting a humanlike conversation, or composing a sonnet. Currently, there is no working example of an AGI, and the likelihood of ever creating such a system remains low. Attempts to create AGIs currently revolve around scanning and modeling the human brain, then replicating it in software. This is a top-down approach: humans are the only example of working sentience, so in order to create other sentient systems, it makes sense to start from our brains and attempt to copy them.

If you take the bottom-up approach instead, you end up with what's known as Narrow or Weak Artificial Intelligence. This is the kind of AI you see every day: AI that excels at a single specific task. AI powers apps that help you find music to listen to, tag your friends in social media photos, and so on. Behind the scenes, it may help protect you or your company from fraud, malware, or malicious activity. This kind of narrow AI does only one thing, but it does it much faster and better than a human. Imagine scanning a million purchase orders a day to make sure there are no forgeries: you'd quickly get bored and start to make mistakes. AI could process those orders in a relative eyeblink and catch more errors and suspicious activity than even a trained human observer ever could.

What is Machine Learning?

Machine learning and artificial intelligence are not the same thing. But if you're looking to create a narrow AI the easy way, machine learning is increasingly the only game in town. Machine learning works by getting it wrong, and then eventually getting it right. Here's a layman's explanation of how it works. Let's say you're creating an image-recognition program to find pictures of cute dogs. First, you give the software some idea of what a dog looks like. Then you show it a dataset of images, some with dogs, some without, and tell the software to pick out the dogs. In all likelihood, the software will get it mostly wrong. That's okay.
You tell the software which pictures it got right, and then repeat with different datasets until the software starts picking out dogs with confidence. This demonstrates a central tenet of the machine learning advantage: at no point do you have to get into the weeds of the software program and hand-code it to recognize dogs. Instead, the machine "codes itself," generating mathematical models to find dogs and then refining them as they're trained on additional data. That is the basic gist of how it works.

When you use machine learning, you save time and effort on creating narrow artificial intelligence. Instead of building a complex, branching decision tree by hand, your decision tree grows on its own and improves its usefulness every time it encounters and categorizes new data. By taking the grunt work out of creating models and categorizing data, machine learning vastly increases the effectiveness of data scientists. Machine learning is also the driving force behind augmented analytics, a class of analytics powered by AI and ML to automate data preparation, insight generation, and data explanation. Because not all business problems can be solved purely by machine learning, augmented analytics combines human curiosity and machine learning to automatically generate insights from data.

AI/ML for Better Performance

The difference between machine learning and AI is that machine learning represents one of, but not the only, precursors to creating a narrow AI. Specifically, machine learning is the best and fastest way to create a narrow AI model for the purpose of categorizing data, detecting fraud, recognizing images, or making predictions about the future (among other things). Although hyperbolic marketing has in many ways distorted the meaning behind machine learning and AI, the advantage of the technology becoming a commodity is that it's now easier than ever to create and use machine learning models, assuming you're working with a company that's selling the genuine article.
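A toy version of the dog-finding workflow described above, using scikit-learn: train a classifier on one labeled dataset, then retrain as more labeled data arrives. Real image recognition would use a deep network on raw pixels; the numeric "features" and labels here are hypothetical stand-ins for illustration only.

```python
# Toy version of the dog-finding example: fit a classifier on labeled
# feature vectors, then refit on an accumulated, larger dataset. The
# feature construction and labels are made up for illustration.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
# Pretend each image is summarized by 5 numeric features.
X_round1 = rng.normal(size=(200, 5))
y_round1 = (X_round1[:, 0] + X_round1[:, 1] > 0).astype(int)  # 1 = "dog"

model = LogisticRegression().fit(X_round1, y_round1)

# "Tell the software which pictures it got right": gather a fresh labeled
# dataset and fit again. Accuracy improves as labeled data accumulates,
# without anyone hand-coding rules for what a dog looks like.
X_round2 = rng.normal(size=(400, 5))
y_round2 = (X_round2[:, 0] + X_round2[:, 1] > 0).astype(int)
model = LogisticRegression().fit(np.vstack([X_round1, X_round2]),
                                 np.concatenate([y_round1, y_round2]))
print("accuracy:", model.score(X_round2, y_round2))
```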
Blog Post 5 min read

The Price You Pay for Poor Data Quality

When vacation-goers booked flights with Hawaiian Airlines last spring, they were surprised to find that their tickets, which were intended to be free award flights, actually cost tens of thousands of dollars. The culprit was a faulty airline booking application that accidentally charged customer accounts in dollars instead of airline miles. A ticket that was supposed to be redeemed for 674,000 miles turned into a sky-high price of $674,000 USD! This is yet another example of the impact that poor data quality can have, sometimes with embarrassing results. The value of a company can be measured by the performance of its data; poor data quality, however, carries heavy costs in financial losses, lost productivity, missed opportunities, and reputational damage.

The Financial Cost of Data Quality

Erroneous decisions made from bad data are not only inconvenient but also extremely costly. According to Gartner research, "the average financial impact of poor data quality on organizations is $9.7 million per year." IBM also discovered that in the US alone, businesses lose $3.1 trillion annually due to poor data quality.

Data Quality's Cost to Productivity

This all goes beyond dollars and cents. Bad data slows employees down, so much so that they feel their performance suffers. For example, every time a salesperson picks up the phone, they rely on the belief that they have the correct data, such as a phone number, for the person on the other end. If they don't, they've called a number that no longer reaches that person, something that wastes more than 27 percent of their time.

Accommodating bad data is both time-consuming and expensive. The data needed may have plenty of errors, and in the face of a critical deadline, many individuals simply make corrections themselves to complete the task at hand. Data quality is such a pervasive problem, in fact, that Forrester reports that nearly one-third of analysts spend more than 40 percent of their time vetting and validating their analytics data before it can be used for strategic decision-making.

The crux of the problem is that as businesses grow, their business-critical data becomes fragmented. There is no big picture because the data is scattered across applications, including on-premise applications. As all this change occurs, business-critical data becomes inconsistent, and no one knows which application has the most up-to-date information. These issues sap productivity and force people to do too much manual work. The New York Times noted that this leads to what data scientists call "data wrangling," "data munging," and "data janitor" work. Data scientists, according to interviews and expert estimates, spend from 50 to 80 percent of their time mired in this mundane labor of collecting and preparing unruly digital data before it can be explored for useful nuggets.

Data Quality's Reputational Impact

Poor data quality is not just a monetary problem; it can also damage a company's reputation. According to the Gartner report Measuring the Business Value of Data Quality, organizations make (often erroneous) assumptions about the state of their data and continue to experience inefficiencies, excessive costs, compliance risks, and customer satisfaction issues as a result. In effect, data quality in their business goes unmanaged. The impact on customer satisfaction undermines a company's reputation, as customers can take to social media (as in the example at the opening of this article) to share their negative experiences.
Employees, too, can start to question the validity of the underlying data when data inconsistencies are left unchecked. They may even ask a customer to validate product, service, and customer data during an interaction, increasing handle times and eroding trust.

Case Study: Poor Data Quality at a Credit Card Company

Every time a customer swipes their credit card at any location around the world, the information reaches a central data repository. Before being stored, however, the data is analyzed according to multiple rules and translated into the company's unified data format. With so many transactions, changes can often fly under the radar:
A specific field (e.g., "brand name") is changed by a merchant.
Field translation prior to reporting fails, and the field is reported as "null."
An erroneous drop in transactions appears for that merchant's brand name.
The drop goes unnoticed for weeks, lost in the averages of hundreds of other brands the company supports.
Setting back the data analytics effort, the data quality team had to fix the initial data and start analyzing again. In the meantime, the company was pursuing misguided business strategies, costing lost time for all teams, damaging the credibility of the data analytics team, adding uncertainty as to the reliability of their data, and producing lost or incorrect decisions based on the incorrect data. Anodot's AI-Powered Analytics solution automatically learns the normal behavior of each data stream, flagging any abnormal behavior. Using Anodot, changes leading to issues such as null fields would be immediately alerted on so they could be fixed. This prevents wasted time and energy and ensures that decisions are made based on complete and correct data.

Applying Autonomous Business Monitoring to Ensure Good Data Quality

Reducing the causes of poor data is crucial to stopping the negative impact of bad data. An organization's data quality is ultimately everyone's business, regardless of whether or not they have direct supervision over the data. Artificial intelligence can rapidly transform vast volumes of big data into trusted business information. Machine learning can automatically learn your data metrics' normal behavior, then discover any anomaly and alert on it. Anodot uses machine learning to rapidly transform vast volumes of critical data into trusted business information. Data scientists, business managers, and knowledge workers alike share a responsibility to implement the best tools to ensure that false data doesn't impact critical decisions.
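A small sketch of why the per-brand view in this case study matters: a failed field translation zeroes out one brand's transaction counts, which barely moves the aggregate total but is unmistakable when each brand is monitored against its own history. The brand names, counts, and 3-sigma rule are made up for illustration.

```python
# Sketch of per-dimension monitoring: a failed field translation turns one
# brand's counts into "null" (here, 0), invisible in the total but obvious
# when each brand is tracked separately. Data is hypothetical.
import statistics

history = {                 # daily transaction counts per brand
    "brand_a": [1000, 1020, 980, 1010],
    "brand_b": [500, 490, 510, 505],
    "brand_c": [200, 210, 190, 0],   # translation failed -> null -> 0
}

totals = [sum(day) for day in zip(*history.values())]
# totals: [1700, 1720, 1680, 1515], a dip easily lost among hundreds of
# brands, while the per-brand view flags brand_c immediately:
for brand, counts in history.items():
    mu = statistics.mean(counts[:-1])
    sigma = statistics.stdev(counts[:-1])
    if abs(counts[-1] - mu) > 3 * max(sigma, 1.0):
        print(f"anomaly: {brand} dropped to {counts[-1]} (expected ~{mu:.0f})")
```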
Blog Post 4 min read

Predictive Maintenance: What’s the Economic Value?

The global predictive maintenance market is expected to grow to $6.3 billion by 2022, according to a report by Market Research Future. However, a new paradigm is required for analyzing real-time IoT data.

The Impact of Predictive Maintenance in Manufacturing

Predictive maintenance, the ability to use data-driven analytics to optimize capital equipment upkeep, is already in use or will be used by 83 percent of manufacturing companies in the next two years. And it's considered one of the most valuable applications of the Internet of Things (IoT) on the factory floor.

Benefits of Predictive Maintenance

The CXP Group report, Digital Industrial Revolution with Predictive Maintenance, revealed that 91 percent of manufacturers using predictive maintenance see a reduction in repair time and unplanned downtime, and 93 percent see improvement of aging industrial infrastructure. According to a PwC report, predictive maintenance in factories could:
Reduce costs by 12 percent
Improve uptime by 9 percent
Reduce safety, health, environment, and quality risks by 14 percent
Extend the lifetime of an aging asset by 20 percent

Challenges in Leveraging IoT Data for Predictive Maintenance

The CXP Group report provides examples in which, for companies like EasyJet, Transport for London (TfL), and Nestle, predictive maintenance can boost the efficiency of technicians, benefit the customer experience, and reduce unplanned downtime. Realizing these advantages is not without challenges. Most IoT data, according to Harvard Business Review, is not currently leveraged through machine learning, squandering immense economic benefit: less than one percent of unstructured data is analyzed or used at all, and less than half of structured data is actively used in decision-making. Because traditional Business Intelligence (BI) platforms were not designed to handle the plethora of IoT data streams and don't take advantage of machine learning, BI reports and dashboards are consulted only periodically, leading to late detection of issues. In addition, alerts today are set with static thresholds, producing false alerts when thresholds are low and failed detections when thresholds are high. What's more, data may change by time of day, week, or season, and those irregularities fall outside the limited scope of static thresholds. Finally, BI platforms were not designed to correlate between the myriad sensor data streams, a correlation that exponentially boosts the likelihood of detecting issues. An engine that is about to fail may rotate faster than usual, show an unusual temperature reading, and have a low oil level. Connecting the dots between these sensor readings through machine learning multiplies the likelihood of detection.

Automated Anomaly Detection

By addressing the deficiencies of existing BI platforms, Anodot's automated anomaly detection paves the way for factories to realize the full value of predictive maintenance. Analyzing big data from production floors and machinery to deliver timely alerts, Anodot's visualizations and insights facilitate optimization, empower predictive maintenance, and deliver bottom-line results. Anodot ingests IoT-generated stream data such as meter readings, sensor data, error events, voltage readings, and more. Monitoring data over time, and in real time, it uses machine learning to learn each metric stream's normal behavior.
It then automatically detects out-of-the-ordinary data and events, serving up diagnoses and making preemptive recommendations that represent significant cost savings on upkeep and downtime. Rejecting old-school static thresholds, Anodot identifies anomalies in data whose behavior changes over time, for example with built-in seasonality. This dynamic way of understanding what is happening in real time detects and alerts on real issues, often much earlier than a static threshold would have. Without manual configuration, data selection, or threshold settings, the platform uses machine learning to automatically calibrate itself for the best results. Anodot's algorithms can handle data of any size or complexity, including seasonality, trends, and changing behavior, because they are robust enough to process an army of data variables, intelligently correlating anomalies whose connections may escape the limitations of a human observer.

Conclusion: The Future of Predictive Maintenance with Anodot

Predictive maintenance offers factories a hefty opportunity to save money on maintenance and downtime while extending the life of their capital equipment. Automated anomaly detection, such as Anodot's, offers the best way to expose and preempt issues that require real maintenance in real time.
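As a closing illustration of the sensor-correlation idea from the engine example, the sketch below flags each sensor whose latest reading deviates sharply from its own history and raises a combined alert only when several sensors are anomalous at once. The readings, sensor names, and thresholds are hypothetical, not Anodot's actual model.

```python
# Illustration of correlating sensor anomalies: any single reading may be
# noise, but simultaneous anomalies in rotation speed, temperature, and
# oil level are strong evidence of impending failure. Data is made up.
import statistics

def zscore(history, latest):
    """Deviation of the latest reading from the sensor's own history."""
    mu, sigma = statistics.mean(history), statistics.stdev(history)
    return abs(latest - mu) / max(sigma, 1e-9)

readings = {   # sensor -> (recent history, latest reading)
    "rpm":       ([3000, 3010, 2990, 3005], 3400),
    "temp_c":    ([70, 71, 69, 70], 95),
    "oil_level": ([0.90, 0.88, 0.91, 0.90], 0.45),
}

anomalous = [name for name, (hist, latest) in readings.items()
             if zscore(hist, latest) > 3]
if len(anomalous) >= 2:    # correlated anomalies across sensors
    print("predictive-maintenance alert:", ", ".join(anomalous))
```

Requiring agreement across sensors is what cuts the false alarms a single noisy threshold would generate, while still catching the compound signature of a failing machine.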