
OUR RESEARCH

Data Analysis

Statistics from our primary survey across India:

  • We studied the methods commonly used by scamsters to commit such frauds and conducted a survey to understand public awareness of these crimes. The survey covered around 600 respondents across multiple cities and a range of cultural, economic, and linguistic backgrounds.

  • The survey showed widespread usage of digital payments, with ~98% of those surveyed using apps/cards to make payments and engaging in at least one digital transaction daily on average.

  • Over a third of survey respondents reported attempts by scamsters to defraud them. Over a quarter of the respondents have received telephone calls from scammers, and 14% report receiving links to fake websites or being subjected to phishing attacks. Scammers are also mounting increasingly sophisticated scams that combine social engineering with technology.

  • The prevalence of such scams has eroded trust, with 90% of survey respondents expressing concern over the safety of digital payments.

  • People in the 18-35 year age group (young working adults) and those without a university education report a much higher prevalence of such financial frauds, indicating this segment's vulnerability to scams, partly due to its higher level of trust in technology.

  • A majority of the respondents remain unaware of how to report such financial crimes.


Sharing is NOT caring for Financial Transactions

[Photos: distributing brochures and collecting survey responses]

As we grow up, money management is a necessary skill that has been added to our list of must-learns. However, in a world where technology has integrated itself into every aspect of life, this skill has become so much more than just budgeting. It gives us immense pride to note that India is at the cutting edge of digital payments, with the country contributing roughly one in two digital payments globally. India’s technology stack handled ~90 billion digital payment transactions in 2022, and this number continues to rise exponentially as more people embrace the convenience of paying each other via their phones. However, this growth has been accompanied by an increase in the number of schemes designed to defraud an unsuspecting population. As part of our Service Project, we studied the modus operandi of such frauds and conducted a survey to understand the level of awareness of these crimes and of the protective measures available to the general consumer.
 

The survey covered around 600 respondents across five cities and a range of linguistic and socioeconomic backgrounds. It indicated widespread usage of digital payments, with ~98% of those surveyed using apps/cards to make payments and engaging in at least one digital transaction daily. However, over 90% of the survey respondents expressed concern about the safety of such payments and fears of online scams. People in the 18-35 year age group (young working adults) without a university education reported a much higher prevalence of such financial frauds; we attribute this to high smartphone ownership in this segment combined with low awareness of the available security measures. Similarly, senior citizens expressed a sense of vulnerability and a desire to learn how to avoid being defrauded. Over a quarter of the respondents have received telephone calls from scamsters, and 14% report receiving links to fake websites or being subjected to phishing attacks. Scamsters have also moved to more sophisticated schemes, such as using voice-changing software and deepfakes to defraud people in the name of their near and dear ones. Most of the respondents remain unaware of how to identify and report such financial crimes.
 

To raise awareness, we summarised our research into easy-to-understand brochures, which we have distributed to over 200 people. Awareness sessions were also conducted, and material was distributed amongst senior citizens at their request. While it was a truly enriching experience to research and create awareness about financial scams, the sense of fulfilment we achieved after interacting with the staff and senior citizens in our community was unparalleled.

Articles By MoneySatark Club

Welcome to the MoneySatark Club's fraud awareness section. Here, our dedicated student volunteers provide articles on the latest scams, emerging fraud trends, and notable case studies. Stay informed to protect yourself from financial deception.

AI Ransomware: What is this newly emerging technology and how can it impact you?

By Samaira Aich, Meha Mehrotra, Paarth Agarwal

28th January 2026

As the age of technology prospers, so does malware in the world of cybercrime. A rapidly advancing method of system infiltration in 2026 is AI-driven malware/ransomware: malicious software that uses artificial intelligence to adapt autonomously. Instead of relying on fixed code, it can generate new code, such as Lua scripts, to analyse a system’s vulnerabilities and devise attack strategies. By behaving differently each time it interacts with a system, it evades detection by traditional security tools while working to corrupt files and data.


Although cybercrime is already widespread, AI-driven malware is different because it holds advantages over human-driven attacks. It can rewrite its own code, something human attackers otherwise have to do manually to avoid detection, for example by jumbling its Lua scripts or padding them with meaningless code to change their appearance. And unlike conventional malware that waits for commands from an operator, AI-driven malware analyses the system it is infiltrating and determines the most effective method of attack, tilting the outcome heavily in the attacker’s favour. Current examples such as PromptLock and MalTerminal are still developing, raising concerns about how much more malicious they may become.


AI-driven malware can also carry out ransomware attacks using SPECK 128-bit encryption while scanning sensitive data to identify high-value ransom opportunities. Encryption locks information so that files appear as scrambled characters to their owner. SPECK 128-bit encryption uses a 128-bit key, which is computationally infeasible to guess by brute force. Only the malware has access to this key, meaning the files can only be restored with it. The AI identifies valuable data by searching for patterns such as credit card numbers, bank details, or keywords like “tax” and “invoice.” Once data is found and encrypted, a ransom note appears through pop-ups, wallpaper changes, and the like, stating that important documents are locked and can only be recovered by following instructions, usually transferring money, while threatening permanent data loss if the victim does not comply. So far, the only known proof of concept exhibiting this behaviour is the PromptLock software, discovered by ESET researchers.
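
To make the role of the key concrete, here is a minimal sketch in Python using the cryptography library's Fernet recipe. Fernet encrypts with AES (a 128-bit key) rather than SPECK, which has no mainstream Python implementation, but the principle is identical: without the key, the ciphertext is unrecoverable noise.

```python
# Minimal sketch of why encrypted files are unrecoverable without the
# key. Fernet (AES, 128-bit encryption key) stands in for SPECK here;
# the principle is the same.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # secret key held only by the attacker
cipher = Fernet(key)

document = b"Invoice #4821 - account 1234-5678, tax year 2025"
locked = cipher.encrypt(document)    # what the victim now sees on disk
print(locked[:40], b"...")           # scrambled, meaningless bytes

# A wrong key simply fails: brute-forcing 2**128 keys is infeasible.
try:
    Fernet(Fernet.generate_key()).decrypt(locked)
except InvalidToken:
    print("Wrong key: the data stays locked.")

print(cipher.decrypt(locked).decode())   # only the real key restores it
```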


Research projects such as PromptLock demonstrate how large language models, or LLMs (AI systems trained on vast amounts of text to generate language and code), could automate complete attack lifecycles, including encryption and the creation of persuasive ransom messages, with minimal human input. Real-world cases already reflect this shift, with cybercriminals using generative AI to write malicious code and some ransomware groups testing AI chatbots to automate ransom negotiations and pressure victims. To reduce risk, users should maintain regular offline backups and use behaviour-based antivirus software, which detects suspicious actions rather than fixed malware signatures, making it especially effective against constantly evolving AI ransomware. While systems like PromptLock are coming into the picture, none has yet been used in a genuine ransomware attack. These systems are still developing, however, and could soon become a highly malicious and prevalent threat to the future of cybersecurity.
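
To illustrate what detecting suspicious actions rather than fixed signatures means, below is a deliberately simplified sketch, not a real antivirus. It uses the open-source watchdog file-monitoring library for Python; the folder path and alert threshold are our own illustrative assumptions, not values from any real product. Production behaviour-based engines weigh many more signals, but the core idea of flagging an anomalous burst of file modifications is the same.

```python
# Toy behaviour-based detector: alert when files change faster than a
# human plausibly would - the burst pattern of bulk encryption.
# Requires: pip install watchdog. Threshold and path are illustrative.
import time
from collections import deque
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

THRESHOLD = 20   # modifications allowed per window (assumed value)
WINDOW = 5.0     # sliding window length in seconds

class BurstDetector(FileSystemEventHandler):
    def __init__(self):
        self.events = deque()

    def on_modified(self, event):
        now = time.time()
        self.events.append(now)
        # Discard events that have fallen out of the sliding window.
        while self.events and now - self.events[0] > WINDOW:
            self.events.popleft()
        if len(self.events) > THRESHOLD:
            print(f"ALERT: {len(self.events)} file changes in "
                  f"{WINDOW}s - possible ransomware activity")

observer = Observer()
observer.schedule(BurstDetector(), path="./documents", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```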

AI Voice Cloning: How Scammers Are Using Artificial Intelligence

By Ailish Garg, Aayush Seshagiri, Paarth Agarwal, Daanish Sachdev, Ria Varma, Samaira Aich, Ruhaan Sharma, Siddhanth Ramanujam

13th March 2026

What is AI Voice Cloning & How does it work?

AI voice cloning is a form of artificial intelligence that replicates a person’s voice using machine learning models trained on recorded audio samples. The system analyses vocal features such as pitch, tone, rhythm, accent, and pronunciation patterns, converting them into mathematical representations known as voice vectors. Using deep learning techniques, particularly neural networks, it generates synthetic speech that closely mimics the original speaker. In scams, fraudsters obtain short audio clips from social media or calls, train the model, and produce realistic voice messages to impersonate victims convincingly.
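
As a rough illustration of what a voice vector is, the open-source Resemblyzer library condenses an utterance into a 256-dimensional embedding; two clips of the same speaker land close together. The same representation is used defensively in speaker verification to test whether a voice matches a known speaker. A minimal sketch follows (the .wav file names are placeholders of our own, not from any real case):

```python
# Minimal sketch of voice vectors with Resemblyzer
# (pip install resemblyzer). An utterance is mapped to a
# 256-dimensional embedding; the file names are placeholders.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

known = encoder.embed_utterance(preprocess_wav("known_speaker.wav"))
claim = encoder.embed_utterance(preprocess_wav("incoming_call.wav"))

# Embeddings are L2-normalised, so the dot product is the cosine
# similarity: near 1.0 for the same voice, lower for different ones.
similarity = float(np.dot(known, claim))
print(f"voice similarity: {similarity:.2f}")
if similarity < 0.75:   # illustrative threshold, not a calibrated one
    print("Voices likely differ - treat the call with suspicion.")
```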

How is AI Voice Cloning Used in Cybercrime Scams

AI voice cloning can be applied in multiple ways. On the positive side, it can generate advertisements and voice-overs for realistic videos when people do not have the time to record the audio themselves. On the negative side, it can be used to clone people’s voices without consent, an immediate breach of privacy that is illegal in some jurisdictions. Scammers can also use “your voice” to demand a ransom or to extort your family members into paying them large sums of money.

Examples of AI Voice Cloning’s Impact in Real Life


Real-world cases demonstrate the severe harm caused by AI voice cloning scams. Jennifer DeStefano narrowly escaped losing $1 million in 2023 when fraudsters cloned her daughter’s voice from social media to simulate a kidnapping, demanding a ransom in a chilling call she described as “completely her voice” with matching inflexion. In a corporate breach, scammers targeted WPP executives by cloning CEO Mark Read’s voice in a Microsoft Teams meeting to extract funds and data, nearly succeeding before detection. In India, Delhi resident Lakshmi Chand Chawla lost ₹50,000 in 2024 after a cloned child’s voice, purportedly her cousin’s son, begged for help via WhatsApp in a fake abduction ploy, part of a rising trend in which 47% of surveyed Indian adults report having been affected themselves or knowing a victim. These incidents underscore the emotional and financial devastation such scams cause.

Governmental / Regulatory Aspect


Addressing the significant issue of AI voice cloning involves both legal and ethical challenges. Chief among them is the near-total lack of laws defining where legitimate use ends and unethical or outright illegal use begins. Legally, voices and voice recognition are treated as biometric data identifiers rather than as property owned by the speaker, which means that in many places it is not illegal to use another person’s voice for purposes to which they never consented.

Precautions & Prevention Methods

If you suspect you’ve been targeted by a voice-cloning scam, stop the conversation immediately. Contact your bank to freeze accounts and cancel any pending transfers. Verify the situation by calling the person or organisation back on a trusted, pre-saved number; never use the one that just called you. Report the incident to your local authorities and the FTC (or your national fraud centre) to help track the scammer’s tactics. Finally, notify your family and friends. If a scammer has your voice sample, they may use it to target your inner circle next.

Citations

  1. Mandloi, Akshat. “AI Voice Cloning in Real-Time: A Deep Learning Approach.” Smallest.ai, 18 Dec. 2025, smallest.ai/blog/real-time-ai-voice-cloning-deep-learning-tts-clone. Accessed 5 Mar. 2026.
     

  2. Mannie, Kathryn. “AI Kidnapping Scam Copied Teen Girl’s Voice in $1M Extortion Attempt.” Global News, 18 Apr. 2023, globalnews.ca/news/9629883/ai-kidnapping-scam-teen-girl-voice-cloned-extortion-arizona-jennifer-destefano/. Accessed 5 Mar. 2026.
     

  3. Thakur, Anjali. “Woman Claims AI Cloned Her Daughter’s Voice in $1 Million Kidnapping Scam.” NDTV, 17 Apr. 2023, www.ndtv.com/feature/woman-claims-ai-cloned-her-daughters-voice-in-1-million-kidnapping-scam-3954384. Accessed 5 Mar. 2026.
     

  4. Mishra, Pankaj. “AI Scams Surge: Voice Cloning and Deepfake Threats Sweep India.” NDTV, 10 Oct. 2024, www.ndtv.com/ai/ai-scams-surge-voice-cloning-and-deepfake-threats-sweep-india-6759260. Accessed 5 Mar. 2026.
