There is no denying that the pandemic has created a breeding ground for fraud. Cybercriminals are thriving in organizations' blind spots, and the rapid shift to digital services has opened loopholes for bad actors to exploit.
The expanding fraud economy has allowed online fraud to grow far beyond isolated attacks by lone operators. In 2020 alone, more than 37 billion records were leaked, and bad actors armed with that data are executing larger and more destructive attacks.
However, recent crackdowns on darknet markets have pushed fraudsters toward new underground venues for illegal activity. Forced off dark web forums, cybercriminals have set their sights on secure messaging applications such as Telegram. (The dark web is part of the deep web, the portion of the internet not indexed by search engines.) Secure messaging apps offer professional criminals a safe haven where they can remain anonymous while causing damage and turning a profit.
Cybercriminals, however, are not the only ones who benefit from this new era of messaging-app fraud. Because these applications are accessible to almost anyone in the world, they have become attractive tools for drawing in new fraudsters looking to make low-risk first attempts.
A new era of fraud
Today’s bad actors are less concerned with operating cautiously and covertly than with getting what they want. This shift in mindset is a key factor behind the flood of scams into messaging applications and forums.
Messaging applications often provide exactly the security features fraudsters need to stay undetected. Knowing that these apps’ privacy-focused features and strong encryption can shield them, cybercriminals are increasingly gathering on messaging forums to resell stolen credentials and commit fraud. But they are not alone: over the past year, many would-be criminals have flooded in, using messaging apps to attempt fraud for the first time.
Through messaging forums, individuals can essentially test the waters of fraud and gauge how much risk they are willing to take. Knowing that many novices are lurking in these apps, professional cybercriminals advertise their services to them on the same platforms.
One example is a Telegram fraud scheme recently identified by my company, Sift, in which professional bad actors steal from restaurants and food delivery services. By advertising their ability to place food and beverage orders with stolen information, they offer meals to opportunistic diners at steep discounts. The would-be diner pays the cybercriminal in cryptocurrency, and the cybercriminal uses stolen credit card details or a hacked account to purchase the meal and have it delivered to the diner's location.
The scam involves two distinct types of fraudsters: professional cybercriminals offering the cheap food-purchasing service, and more passive fraudsters who simply want absurdly cheap meals, all at the expense of the victimized restaurants that fulfill the orders.
The low cost of a meal reduces the perceived risk for these accidental fraudsters. With so little at stake, they are more willing to get involved. From there, they can decide whether they are willing to purchase other services offered on fraud forums, such as fake COVID-19 test results or vaccine cards.
Although it is nearly impossible for security teams to shut down this kind of fraud on messaging applications, they can mitigate the risk by moving beyond traditional methods and adopting a digital trust and safety strategy. This approach builds risk detection into the entire decision-making process and treats customer safety and experience holistically, so companies no longer have to choose between growing revenue and reducing fraud.
By implementing new processes and technologies such as machine learning, companies can fight fraud more effectively at scale. Machine learning is essential not only for identifying new trends but also for adjusting risk thresholds. By ingesting thousands of different signals beyond purchase data, a machine learning system can quickly adapt and detect suspicious activity in real time without manual intervention.
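The core idea of combining many signals into a single risk score with tunable thresholds can be sketched in a few lines. The signal names, weights, and thresholds below are hypothetical and purely illustrative; a production system would learn its weights from labeled transaction data and retrain them continuously rather than hard-coding them:

```python
import math

# Hypothetical weights a trained model might assign to a handful of
# signals; real systems ingest thousands of signals per transaction.
WEIGHTS = {
    "order_total_vs_account_avg": 1.8,   # order far larger than usual
    "new_device": 1.2,                   # first time this device is seen
    "shipping_billing_mismatch": 2.1,    # addresses do not match
    "account_age_days": -0.02,           # older accounts look lower risk
}
BIAS = -3.0

def fraud_score(signals: dict) -> float:
    """Logistic score in [0, 1]; higher means more suspicious."""
    z = BIAS + sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def decide(signals: dict, block_at: float = 0.9, review_at: float = 0.5) -> str:
    """Thresholds are parameters, so risk tolerance can shift per business."""
    score = fraud_score(signals)
    if score >= block_at:
        return "block"
    if score >= review_at:
        return "review"
    return "accept"

# A takeover-style order: new device, mismatched addresses, oversized order.
risky = {"order_total_vs_account_avg": 3.0, "new_device": 1.0,
         "shipping_billing_mismatch": 1.0, "account_age_days": 10}
print(decide(risky))   # → "block"

# An established customer on a known device placing a typical order.
normal = {"order_total_vs_account_avg": 1.0, "account_age_days": 400}
print(decide(normal))  # → "accept"
```

The point of the sketch is the shape of the decision, not the numbers: scoring happens inline with every transaction, and tightening or loosening the thresholds changes risk appetite without touching the model itself.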
The fraud economy's expansion into secure messaging applications shows how quickly bad actors can change tactics. Frustratingly, there is little any single company can do to stop bad actors from advertising their services to people interested in fraud. However, by taking a more holistic approach to fraud and understanding the signals that precede fraudulent purchases, security teams can ensure that their home turf, their own websites and apps, stays protected.
Brittany Allen has more than ten years of experience fighting e-commerce and marketplace fraud at companies such as Etsy, Airbnb, 1stdibs and letgo. Her expertise in fraud prevention, policy leadership and dispute management has led her to speak at numerous industry conferences.