Anodot Resources

What is AI/ML?
Blog Post 5 min read

AI/ML - Are We Using It in the Right Context?

Widespread overuse of the terms AI/ML in marketing has managed to thoroughly confuse the meanings of these words. The trouble is that if you don't know what AI and ML are, or what the difference between them is, you're that much more likely to be sold a bill of goods when shopping for a product based on those technologies.
Documents 1 min read

TDWI Guide: 6 steps for improving cloud cost management

Documents 1 min read

Anomaly Detection Guide - Building an anomaly detection system

Telecom AI
Blog Post 3 min read

Extending the Competitive Advantage in Telecom

As the telecom industry navigates upcoming tech changes, machine learning and artificial intelligence are poised to drive the next frontier.
Documents 1 min read

Top 5 EC2 Savings Recommendations

Anomaly Detection Techniques in Focus
Blog Post 4 min read

Anomaly Detection Techniques in Focus: Multivariate and Univariate

The Hybrid Approach: Benefit from Both Multivariate and Univariate Anomaly Detection Techniques

In our previous post, we explained what time series data is and how the Anodot real-time anomaly detection system spots anomalies in time series data. We also discussed the importance of choosing a model for a metric's normal behavior, which includes all of the metric's seasonal patterns, and the specific algorithm Anodot uses to find those patterns.

A concise explanation of conciseness

At the end of that post, we concluded that it's possible to get a sense of the bigger picture from many individual anomalies. Conciseness is a requirement of any large-scale anomaly detection system because monitoring millions of metrics is guaranteed to generate a flood of reported anomalies, even if there are zero false positives. Stable applications and operating systems often ship with errors, even if those errors don't result in a failure state right away. For context, Google's bug bounty program paid out nearly $3 million in 2017; this doesn't mean Google's programs were constantly crashing. The bugs were present but dormant, activated only under certain conditions. An anomaly detection system might detect and flag them even if they're not currently causing your application to crash. Achieving conciseness in this context is analogous to distilling many individual symptoms into a single diagnosis, much the way a mechanic might diagnose a car problem by observing the pitch, volume and duration of all the sounds it makes, in addition to watching all the dials and indicator lights on the dashboard.

Univariate and multivariate anomaly detection techniques

After employing the anomaly detection techniques described in our last post (and in our previous series), how does a practical real-time system like Anodot's actually achieve concise reporting of detected anomalies? How does the system arrive at the diagnosis? The answer is that after the system detects anomalies in individual metrics, a second layer of machine learning takes over and groups anomalies from related metrics together. This grouping condenses the original flood of individual alerts into a smaller, more manageable number of underlying incidents.

This two-step approach combines two different anomaly detection techniques: univariate and multivariate. Univariate anomaly detection looks for anomalies in each individual metric, while multivariate anomaly detection learns a single model for all the metrics in the system. Univariate methods are simpler, so they are easier to scale to many metrics and large datasets. However, someone then needs to unravel the causal relationships between the anomalies in the resulting alert storm. Companies need to understand quickly what all those alerts mean before they can decide how to respond, and many simply don't have the time. With outages costing up to $5,600 per minute, every second counts when determining the response to an anomaly.

Multivariate approaches, on the other hand, detect anomalies as complete incidents, yet they are difficult to scale both in terms of computation and in the accuracy of the models. The alerts they produce are also hard to interpret, because all the metrics are inputs that generate a single output from the anomaly detection system. With multivariate methods, each added metric introduces interactions between itself and all the other metrics. Since multivariate anomaly detection has to model this entire complex system, the computational cost increases rapidly as the number of modeled metrics grows. In addition, the individual metrics need to have similar statistical behavior for multivariate methods to work accurately.

Revolutionizing anomaly detection techniques: Anodot's two-step approach

Anodot combines the strengths of each of these techniques into a hybrid approach. Because univariate anomaly detection is applied first, on individual metrics, Anodot's approach benefits from its scalability and simplicity. By then using other advanced AI techniques to discover relationships between metrics, Anodot applies the multivariate approach to group and interpret related anomalies, satisfying the requirement of conciseness described earlier. This blending of univariate and multivariate anomaly detection is similar to Anodot's combination of the supervised and unsupervised anomaly detection techniques we discussed in our previous series. At each layer of its anomaly detection system, Anodot uses the most appropriate data science and machine learning techniques for that layer, even combining them in sophisticated ways to provide businesses with actionable information in real time. Stay tuned to learn how your business can sift through thousands or even millions of metrics in real time using automated anomaly detection.
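To make the two-step idea concrete, here is a minimal Python sketch of the hybrid workflow: a simple rolling z-score detector stands in for the univariate layer, and correlation-based grouping stands in for the second layer that discovers relationships between metrics. Both are illustrative placeholders, not Anodot's actual seasonal models or relationship-learning algorithms.

```python
# Minimal sketch of a two-step (univariate detection -> incident grouping) workflow.
# The rolling z-score detector and correlation-based grouping are illustrative
# stand-ins, NOT Anodot's seasonal models or AI-based relationship discovery.
import numpy as np


def univariate_anomalies(series: np.ndarray, window: int = 50, threshold: float = 3.0) -> list[int]:
    """Step 1: flag points deviating from a trailing baseline by more than `threshold` sigmas."""
    anomalies = []
    for t in range(window, len(series)):
        history = series[t - window:t]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(series[t] - mu) / sigma > threshold:
            anomalies.append(t)
    return anomalies


def group_into_incidents(anomalous: dict[str, list[int]],
                         metrics: dict[str, np.ndarray],
                         corr_threshold: float = 0.8) -> list[set[str]]:
    """Step 2: merge anomalous metrics whose series are strongly correlated into one incident."""
    incidents: list[set[str]] = []
    for name, anoms in anomalous.items():
        if not anoms:
            continue  # metric had no anomalies, nothing to group
        for incident in incidents:
            representative = next(iter(incident))
            if abs(np.corrcoef(metrics[name], metrics[representative])[0, 1]) >= corr_threshold:
                incident.add(name)
                break
        else:
            incidents.append({name})
    return incidents


# Usage: two related metrics spike together; the unrelated one stays quiet.
rng = np.random.default_rng(0)
base = rng.normal(size=500)
metrics = {
    "requests": base.copy(),
    "latency": base + 0.1 * rng.normal(size=500),  # correlated with "requests"
    "disk_io": rng.normal(size=500),               # unrelated metric
}
metrics["requests"][400] += 10
metrics["latency"][400] += 10

per_metric = {name: univariate_anomalies(series) for name, series in metrics.items()}
print(group_into_incidents(per_metric, metrics))  # e.g. [{'requests', 'latency'}]
```

In this toy run, the correlated requests and latency anomalies collapse into a single incident while the unrelated disk metric raises nothing, which is the conciseness property described above.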
Videos & Podcasts 13 min read

The next big thing for CSP Network Monitoring - Autonomous Remediation

CSPs aiming to achieve zero-touch networks are looking for monitoring solutions that go beyond autonomous detection to autonomous remediation. Ira Cohen, Co-Founder and Chief Data Scientist at Anodot, spoke with Ken Wieland, Contributing Editor at Light Reading, about the principles behind autonomous remediation and the building blocks and capabilities required to achieve it.
Watch: https://youtu.be/dToELNIox78
Blog Post 5 min read

The Price You Pay for Poor Data Quality

When vacation-goers booked flights with Hawaiian Airlines last spring, they were surprised to find that their tickets, which were supposed to be free award flights, actually cost tens of thousands of dollars. The culprit was a faulty booking application that charged customer accounts in dollars instead of airline miles. A ticket that should have been redeemed for 674,000 miles instead carried a sky-high price of $674,000! This is yet another example of the impact poor data quality can have, sometimes with embarrassing results. The value of a company can be measured by the performance of its data; however, poor data quality carries heavy costs in the form of financial losses, lost productivity, missed opportunities and reputational damage.

The Financial Cost of Data Quality

Erroneous decisions made from bad data are not only inconvenient but also extremely costly. According to Gartner research, "the average financial impact of poor data quality on organizations is $9.7 million per year." IBM also found that in the US alone, businesses lose $3.1 trillion annually due to poor data quality.

Data Quality's Cost to Productivity

This all goes beyond dollars and cents. Bad data slows employees down so much that they feel their performance suffers. For example, every time a salesperson picks up the phone, they rely on having correct data, such as a phone number, for the person on the other end. If they don't, they've called a number that no longer belongs to that person, something that wastes more than 27 percent of their time. Accommodating bad data is both time-consuming and expensive. The data needed may contain plenty of errors, and in the face of a critical deadline, many individuals simply make corrections themselves to complete the task at hand. Data quality is such a pervasive problem, in fact, that Forrester reports that nearly one-third of analysts spend more than 40 percent of their time vetting and validating their analytics data before it can be used for strategic decision-making.

The crux of the problem is that as businesses grow, their business-critical data becomes fragmented. There is no big picture because the data is scattered across applications, including on-premises applications. As all this change occurs, business-critical data becomes inconsistent, and no one knows which application has the most up-to-date information. These issues sap productivity and force people to do too much manual work. The New York Times noted that this leads to what data scientists call "data wrangling," "data munging" and "data janitor" work: according to interviews and expert estimates, data scientists spend from 50 to 80 percent of their time mired in this mundane labor of collecting and preparing unruly digital data before it can be explored for useful nuggets.

Data Quality's Reputational Impact

Poor data quality is not just a monetary problem; it can also damage a company's reputation. According to the Gartner report Measuring the Business Value of Data Quality, organizations make (often erroneous) assumptions about the state of their data and continue to experience inefficiencies, excessive costs, compliance risks and customer satisfaction issues as a result. In effect, data quality in their business goes unmanaged. The impact on customer satisfaction undermines a company's reputation, as customers can take to social media (as in the example at the opening of this article) to share their negative experiences.

Employees, too, can start to question the validity of the underlying data when inconsistencies are left unchecked. They may even ask a customer to validate product, service, and customer data during an interaction, increasing handle times and eroding trust.

Case Study: Poor Data Quality at a Credit Card Company

Every time a customer swipes their credit card anywhere in the world, the information reaches a central data repository. Before being stored, the data is analyzed according to multiple rules and translated into the company's unified data format. With so many transactions, changes can easily fly under the radar:

1. A merchant changes a specific field (e.g., "brand name").
2. Field translation prior to reporting fails, and the field is reported as "null".
3. An erroneous drop in transactions appears for that merchant's brand name.
4. The drop goes unnoticed for weeks, lost in the averages of the hundreds of other brands the company supports.

The data quality team had to fix the initial data and start the analysis again, setting back the data analytics effort. In the meantime, the company was pursuing misguided business strategies, costing every team time, damaging the data analytics team's credibility, adding uncertainty about the reliability of the data and leading to incorrect decisions based on it.

Anodot's AI-Powered Analytics solution automatically learns the normal behavior of each data stream and flags any abnormal behavior. With Anodot, changes leading to issues such as null fields would trigger an immediate alert so they could be fixed, preventing wasted time and energy and ensuring that decisions are made on complete, correct data.

Applying Autonomous Business Monitoring to Ensure Good Data Quality

Reducing the causes of poor data is crucial to stopping the negative impact of bad data. An organization's data quality is ultimately everyone's business, regardless of whether or not they have direct supervision over the data. Artificial intelligence can rapidly transform vast volumes of big data into trusted business information: machine learning automatically learns the normal behavior of your data metrics, then discovers anomalies and alerts on them. Anodot uses machine learning to turn vast volumes of critical data into trusted business information, and data scientists, business managers and knowledge workers alike share the responsibility to implement the best tools to ensure that false data doesn't influence critical decisions.
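As a rough illustration of the monitoring described in the case study above, the hypothetical sketch below tracks one brand's daily transaction counts and raises an alert as soon as a day's volume falls far below its recent baseline, so a "null" brand-name field would surface immediately instead of hiding in the averages for weeks. The rolling-median baseline and fixed drop ratio are simplified assumptions, not Anodot's learned behavioral models.

```python
# Rough illustration of monitoring a brand's daily transaction count for sudden drops,
# such as one caused by a renamed field being translated to "null".
# The rolling-median baseline and fixed drop ratio are simplified assumptions,
# not Anodot's learned behavioral models.
import numpy as np


def detect_drop(daily_counts: list[float], baseline_days: int = 28, drop_ratio: float = 0.5) -> bool:
    """Alert when the latest count falls below `drop_ratio` of the recent baseline median."""
    if len(daily_counts) <= baseline_days:
        return False  # not enough history to establish a baseline
    baseline = np.median(daily_counts[-baseline_days - 1:-1])
    return daily_counts[-1] < drop_ratio * baseline


# Usage: ~1,000 transactions per day, then a collapse after the faulty field change.
rng = np.random.default_rng(1)
history = [1000 + int(x) for x in rng.normal(0, 30, size=30)]
history.append(120)  # today's count, after "brand name" started arriving as null
if detect_drop(history):
    print("ALERT: transaction volume for this brand dropped far below its baseline")
```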
Case Studies 1 min read

Reducing revenue loss with proactive monitoring

“We used Anodot to reduce time to detection of root causes from up to a week to less than a day. We were able to save lots of money for both Xandr and our customers.”