Resources

Webinars 4 min read

Intelligent Payment Operations

In today's payment ecosystem, the ability to monitor and use payment data effectively represents a real competitive advantage. Intelligent payment operations enable organizations to build a future-proof operations infrastructure. In a recent webinar hosted by Anodot, we talked to a panel of experts in payment operations about how to leverage data to optimize payment processes. Experts from Thunes, Payoneer, 888 Holdings and Anodot joined the roundtable.

Liron Diamant, Anodot's Global Payment expert, set the stage by describing today's environment, in which payment data is becoming a commodity - a digital product. She said payment companies and financial institutions are realizing that smart operations aren't related only to performance but also to the company's ability to learn and adapt using automation and complex data analysis. The panel opened the webinar by discussing the process of collecting data, and specifically which data they find most useful to analyze.

Collecting useful data for payment operations

Elie Bertha, Product Director at Thunes, said it's most useful to collect and monitor payment data that enables users to detect issues as fast as possible and communicate them properly. He also said it's important to link all data sources together for a 360-degree view of the business and the customer.

Ari Kohn, the Risk Team Leader at Payoneer, said data that is managed and measured properly is the foundational layer of a successful payments business. He said Payoneer's approach to using data for analysis is constantly evolving: the company has multiple sources of data stored in multiple formats, and his teams have to wrangle all of that to get a 360-degree view of what's going on in order to identify risk.

Anodot's Chief Data Scientist, Ira Cohen, discussed what happens on the other side of data collection - machine learning. He agreed it's important to be notified as soon as possible when something is happening, and said the speed of incident detection has a lot to do with the volume and velocity of data. The challenge in data collection that feeds into AI and machine learning, Cohen said, is understanding what level of granularity to use. The two axes of granularity are time and space. For example, you can break down transactions by location - down to a particular user - and you can also aggregate transactions in time, in windows of one minute, five minutes, one hour, and so on. A good monitoring system, Cohen said, lets you adjust both of these attributes: the dimensionality of the data and its timescale resolution.

Payment use cases

Elie Bertha from Thunes said one of the company's interesting use cases is to segment customers and compare the segments, which helps detect anomalies from a business perspective. Amit Levy at 888 Holdings said they strive for end-to-end monitoring that correlates technical issues with business KPIs such as revenue.

Ari Kohn from Payoneer discussed use cases in risk management. Different products carry different risks: when Payoneer is issuing a debit card, the primary concern is fraud, so to protect customers from card theft they have to look for signals that indicate that kind of behavior. When issuing capital to a seller that needs an advance, however, they worry more about delinquency. Kohn said both of these use cases rely heavily on the availability of data - data that is specific to the types of risk they're monitoring.

The panel also discussed how they prioritize payment incident alerts and how they democratize data across the company for self-service analytics. You can watch the roundtable discussion in its entirety here.
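Cohen's point about granularity in time and space can be made concrete with a small sketch. The example below is illustrative only - hypothetical column names and data, pandas assumed, not Anodot's implementation - and rolls the same raw transaction stream up by five-minute windows and by country, producing one metric series per combination:

```python
# Illustrative sketch of the "time and space" granularity Cohen describes:
# the same raw transaction stream can be rolled up per time window and per
# dimension (here, country). Column names and data are hypothetical.
import pandas as pd

transactions = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-01 10:00:12", "2024-01-01 10:03:45",
        "2024-01-01 10:06:02", "2024-01-01 10:07:30",
    ]),
    "country": ["US", "US", "SG", "US"],
    "amount": [120.0, 80.0, 45.0, 200.0],
})

# Aggregate in space (per country) and time (5-minute windows) at once.
per_country_5min = (
    transactions
    .groupby(["country", pd.Grouper(key="timestamp", freq="5min")])["amount"]
    .agg(["count", "sum"])
)
print(per_country_5min)
```

Widening the window or dropping the country dimension trades detection speed for noise reduction, which is the tuning exercise Cohen describes.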
Blog Post 4 min read

3 Reasons Why Machine Learning Anomaly Detection is Critical for eCommerce

Running machine learning anomaly detection on streaming data can have a significant impact on your overall revenue. Here's why.
Blog Post 6 min read

Business Monitoring: If You Can't Measure It, You Can't Improve It

A jumping-off point for improving your business monitoring capabilities and the way you measure its effectiveness.
Blog Post 5 min read

Performance Monitoring: Are All Ecommerce Metrics Created Equal?

Traditional Analytics Tools for eCommerce Can't Include Each and Every Metric

Number of sessions, total sales, number of transactions, competitor pricing, clicks by search query, cart abandonment rate, total cart value… the analytics tools commonly used by eCommerce companies for performance monitoring can't include every metric, and even if they did, the analysts using them wouldn't be able to keep up with the amounts of changing data. This, of course, inevitably leads to overlooked business incidents and lost revenue whenever these tools are used in the fast-paced world of eCommerce.

In eCommerce, minutes matter. Your infrastructure and your competitors' ad bidding strategies can change in an instant, and any metric can signal an important business incident. When these tools are the foundation of your performance monitoring, business incident detection doesn't occur anywhere near the speed of business, so your analysts spend less time analyzing and more time head-scratching.

The need to go granular with performance monitoring

Traditional analytics tools like KPI dashboards and lists fall flat when it comes to performance monitoring in the fast-paced, multi-faceted world of eCommerce. These tools take a high-level approach that tries to simplify the complex through generalization, causing BI teams to overlook plenty of metrics for eCommerce analytics. This is a design flaw: even though those tools may automate reporting and visualization, they still require humans to manually monitor the visualized data and spot the anomalies that point to business incidents. Many interesting things can happen in the metrics you're not monitoring, leading you to miss incidents completely or discover them too late, after the financial and reputation damage is already done.

Also, missing just one of a metric's many dimensions can cause you to miss significant business incidents. Think of metrics as the general quantity being measured and dimensions as the specific slices of that data (e.g. daily sales per brand, daily sales per browser). In effect, monitoring each dimension multiplies the number of metrics that could be monitored, easily resulting in far too many eCommerce analytics metrics for a single person, or even a team, to constantly monitor.

A performance monitoring horror story

To illustrate why etailers need to take this granular approach to performance monitoring, consider an eCommerce company that sells physical goods in the US. Like many online retailers, this one accepts a wide variety of payment options, from PayPal and credit cards to e-wallets like Google Wallet and Apple Pay. The etailer's BI team notices on their dashboard that total daily revenue dropped very slightly. The almost imperceptible dip in this high-level KPI gets passed over by the analysts, who have about five other dashboards to monitor anyway, so they attribute it to statistical noise. Meanwhile, a crucial payment processor has changed their API, breaking the etailer's ability to process orders made with American Express cards and causing those customers to abandon their carts. Since orders with AMEX cards make up such a small portion of the total order volume for this merchant, the total daily revenue barely budges, glossing over the frustration of those AMEX cardholders.
Had this company been monitoring daily revenue not as a single KPI but broken out across each payment option (daily revenue from AMEX orders, daily revenue from Apple Pay orders, etc.), the sudden, drastic drop in successful AMEX orders would have been obvious. Even if the team had been using a reasonable static threshold on this metric (an approach which doesn't scale, as we've discussed before), they would have been alerted and could have contacted the payment provider to fix the broken API or implemented a workaround in their own code.

Problems like these, which impact a small subset of your target market or existing customer base, occur quite often in eCommerce and can paralyze a company's growth. And what if the company in our hypothetical scenario had just launched a line of premium smartphone accessories for international business travelers - the exact demographic most likely to shop with an American Express card? Good luck recovering from that misstep.

The value of real-time monitoring of every eCommerce metric

With every passing day that the problem goes undetected, lost revenue piles up and the merchant's chances of breaking into that wealthier clientele shrink. Missed problems lurking in overlooked eCommerce analytics metrics can stop growth in its tracks. The only performance monitoring solution adequate for eCommerce is one that can monitor all the dimensions of a given metric in real time. By missing the crucial business incidents that can make or break eCommerce success, analytics tools that overlook many vertical-specific metrics imperil the merchants who use them. As we'll see in the next article of this series, this is just as true in fintech as it is in eCommerce.
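As a toy illustration of the per-dimension monitoring described above - with hypothetical data and a deliberately simple static 50% rule, even though the article notes that static thresholds don't scale - a sketch in pandas might look like this:

```python
# Minimal sketch: track daily revenue per payment method instead of one
# blended KPI, and flag any method whose latest day falls far below its own
# recent level. Data and the 50% rule are illustrative only.
import pandas as pd

daily_revenue = pd.DataFrame({
    "date": pd.to_datetime(["2024-03-01"] * 3 + ["2024-03-02"] * 3),
    "payment_method": ["visa", "amex", "apple_pay"] * 2,
    "revenue": [50_000, 4_000, 6_000,   # baseline day
                51_000, 200, 6_200],    # AMEX flow breaks; total barely moves
})

per_method = daily_revenue.pivot(index="date", columns="payment_method", values="revenue")
baseline = per_method.iloc[:-1].mean()  # average of prior days, per method
latest = per_method.iloc[-1]

# Simple static per-dimension rule: alert if the latest day is below 50% of baseline.
alerts = latest[latest < 0.5 * baseline]
print(alerts)  # only 'amex' is flagged, even though total revenue looks normal
```

The point of the sketch is the breakout itself: the blended total moves by a fraction of a percent, while the per-method view makes the broken AMEX flow unmistakable.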
Blog Post 5 min read

Can AI Analytics Weed Out Fake News?

Social Media Platforms Promote Fake News and Spread Unreliable Content

Picture this: you're at work and your boss has given you an assignment to research a possible new product. So you do some googling and find several blog posts, including a very intriguing one with several quotes from industry leaders. You fetch yourself a cup of coffee and settle in to read. There's one very big problem with this post, however: it's completely fake.

According to a recent post in the Wall Street Journal, "[r]eal-sounding but made-up news articles have become much easier to produce thanks to a handful of new tools powered by artificial intelligence." This could be one more instance where 'fake news' has penetrated mainstream venues, underscoring how fake news can flourish online. In fact, since the 2016 presidential election, awareness of fake news has soared. Detecting and preventing the spread of unreliable media content is a difficult problem, especially given the rate at which news can spread online. Google and Facebook have blamed algorithm errors for such failures.

Overwhelming Amounts of Data Challenge Social Media to Take Action on Fake News

The reach and speed of social media networks (Facebook alone has nearly two billion users) make it easy for such stories to spread before they can be debunked. Part of the challenge lies in how Facebook and Google rely on algorithms, especially when it comes to making complex news decisions. Already in the 2020 presidential campaign, we've seen disinformation spread, including manufactured sex scandals against former Mayor Pete Buttigieg of South Bend, Ind., and Sen. Elizabeth Warren (D-Mass.), and a smear campaign claiming Sen. Kamala Harris is "not an American black" because of her mixed-race heritage.

Further examples illustrate the impact of fake news on both mainstream media and the public's mind share:

10 most-viewed 'fake news' stories on Facebook
Fake bushfire images and maps spreading in Australia
Fake news leading to violence in Hong Kong protests
Local 'fake news' factory spreads disinformation
Fake news used to sell diet supplements
Climate disaster denialism in Australia

While the algorithms are geared to support the social media giants' business model for generating traffic and engagement, they're largely run by engineers who rely on data to choose which content will trend.

Are Machine Learning Algorithms Reliable, or Are More Human Editors the Answer?

While computer programs may be cheaper than real-life human editors, Fortune asserts, "The reality is that Facebook needs to hire humans to edit and review the content it promotes as news—and it needs to hire a lot of them." Facebook was using human editors, but in 2016 the company fired them after it was reported that they routinely suppressed conservative news stories from trending topics. Now, however, Facebook has brought back human editors to curate certain news content. Appeasing all audiences won't be easy, though. As New York magazine explains, "the algorithms are biased, and if Facebook hires editors and moderators to double-check decisions made by algorithms, those editors will be denounced as biased too."

With the sheer volume of data and the speed at which it appears, MIT has suggested that artificial intelligence tools could help.
But artificial intelligence alone isn't the answer, writes Samuel Woolley, who argues that the future will involve "some combination of human labor and AI that eventually succeeds in combating computational propaganda, but how this will happen is simply not clear. AI-enhanced fact-checking is only one route forward."

AI-powered Analytics Using Anomaly Detection Can Hold Back the Spread of Fake News

The problem lies with the trending algorithms the social media platforms use: these are machine learning algorithms that have no context and therefore make these errors. In light of the recent South Park-motivated Alexa mishap, we suggested there should be systems in place to detect when something out of place happens and let the right people know. AI-powered analytics tools would include stance classification to determine whether a headline agrees with the article body, text processing to analyze the author's writing style, and image forensics to detect Photoshop use. To determine the reliability of an article, algorithms could extract even relatively simple data features, like image size, readability level, and the ratio of reactions versus shares on Facebook.

The fake news issue can also be addressed by focusing on anomalies. When a social media algorithm starts pushing a post or article to the top, AI-powered analytics tracking the sudden surge of a new topic - correlated with the source site or Facebook page - would surface it as an obvious anomaly. The item could then be paused from gaining further momentum until a human at Facebook or Google validates it, rather than requiring human review of every topic.

You can't prevent anyone from writing fake news, but by applying AI-powered analytics that employ anomaly detection, we can prevent the "simple-AI" algorithms from spreading and promoting fake news stories. The power of this application of AI-powered analytics to spot anomalies far faster than humans could comes into its own when working with thousands or millions of metrics. Real-time anomaly detection can catch even the most subtle, yet important, deviations in data.
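As a toy sketch of the surge-detection idea above - synthetic mention counts and a simple rolling z-score standing in for a production anomaly detection model - a topic could be held for human review like this:

```python
# Toy sketch: flag an hourly topic-mention count as anomalous when it sits far
# above its own rolling baseline. A simple z-score stands in for the
# seasonality-aware models a production system would use; counts are synthetic.
import pandas as pd

mentions = pd.Series(
    [110, 95, 120, 105, 100, 115, 98, 2500],  # sudden surge in the last hour
    index=pd.date_range("2024-05-01", periods=8, freq="h"),
    name="topic_mentions",
)

window = 6
baseline_mean = mentions.rolling(window).mean().shift(1)  # exclude the current hour
baseline_std = mentions.rolling(window).std().shift(1)
z_score = (mentions - baseline_mean) / baseline_std

# Hold anomalous topics for human review instead of auto-promoting them.
needs_review = z_score > 6
print(mentions[needs_review])  # the 2,500-mention spike is the only hour flagged
```

Only the flagged hours would need a human decision, which is the argument the article makes for anomaly detection over reviewing every trending topic.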
Blog Post 8 min read

How Businesses are Using Machine Learning Anomaly Detection to Scale Partner and Affiliate Tracking

Monitoring partner networks with machine learning anomaly detection has a number of big advantages over traditional BI tools.
Videos & Podcasts 0 min read

Proactive Payment Monitoring for Puma

To protect revenue and reduce lost sales, global ecommerce companies like Puma rely on Anodot's autonomous payment monitoring solution. Learn how Puma uses Anodot to monitor and detect payment issues across 45 global ecommerce sites.
Blog Post 14 min read

The Key Principles of a Successful Time Series Forecasting System for Business

This in-depth article covers the value of using machine learning to create highly accurate, real-time, scalable forecasts for your business demand and growth.
Blog Post 3 min read

The Top 10 Anomalies of the Last Decade

After much debate, we ranked the most noteworthy anomalies of the 2010s - the most unexpected people, events and trends to shake the spheres of business, politics, entertainment and pop culture. Find out what - and who - made the list.