In this scam operation, AI handles the dating conversations and also forges the lawyer licenses.

Most of the work in a scam compound is done with just one ChatGPT account.

Author: Curry, Deep Tide TechFlow

OpenAI recently released a report describing how it caught people using ChatGPT for malicious purposes.

The report is lengthy, listing a large number of AI abuse cases. Some involve Russian disinformation campaigns, some involve suspected spies using social engineering, but today I want to talk about one case:

Cambodian “Pig Butchering” scams.

Pig butchering scams are not new; everyone has heard plenty of stories about the scam compounds in Cambodia. What’s new is AI’s role in these schemes.

In this scam gang, ChatGPT handles dating conversations, translates supervisor instructions, writes daily work reports, and estimates the value of each victim.

In pig butchering, there’s an internal term called “kill value,” which is the estimated amount of money they can squeeze out of you.

Across the entire pipeline, ChatGPT might be the busiest employee.

OpenAI gave this case a codename: Operation Date Bait.

The process goes like this:

The scam gang first creates a fake high-end dating service called Klub Romantis, with a logo made by ChatGPT. Then they run paid ads on social media, targeting keywords like golf, yachts, and fine dining, specifically aimed at young Indonesian men.

When you click the ad, you first chat with an AI chatbot. The bot, posing as a sexy receptionist, asks what type of girl you like. After you choose, it gives you a Telegram link with a special invite code.

Once on Telegram, a real person takes over.

The handler continues using ChatGPT to generate flirtatious messages, gradually becoming more explicit, then leads you to two fake dating platforms, called LoveCode and SexAction.

These platforms have fake profiles of women and a scrolling message bar constantly announcing “Congratulations to so-and-so for completing a task, unlocking a bonus.” All fabricated. Experienced internet users might see through this immediately, but not all targets will.

When the conversation heats up, the handler transfers you to a “mentor.” The mentor then assigns you “tasks,” each requiring payment, with increasing amounts. Buying VIP cards, voting for “favorite girls,” paying hotel deposits—there are many excuses.

The final step is called “kill” internally.

They invent a reason, like data processing errors or deposit verification, to get you to transfer a large sum at once. OpenAI included a letter from the scam gang to victims in the report, demanding 20.5 million Indonesian rupiah, roughly $1,200 USD, promising a 35% bonus after payment.
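The report’s currency conversion is easy to sanity-check. A quick sketch, assuming an exchange rate of roughly 17,000 IDR per USD (the exact rate is not stated in the report):

```python
# Sanity-check the report's currency figures.
# Assumption: ~17,000 IDR per USD; the report does not give the rate used.
IDR_PER_USD = 17_000

demand_idr = 20_500_000                 # amount demanded in the letter
demand_usd = demand_idr / IDR_PER_USD   # compare against the ~$1,200 cited
promised_bonus_idr = demand_idr * 0.35  # the "35% bonus" dangled after payment

print(f"demand is about ${demand_usd:,.0f}")
print(f"promised bonus = {promised_bonus_idr:,.0f} IDR")
```

The result lands right around the $1,200 figure the report cites, so the numbers are internally consistent.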

Once the money is received, the scammer on Telegram blocks you and marks the case as closed.

Seeing this, you might think it’s nothing new.

The scam techniques themselves aren’t innovative; pig butchering schemes have been exposed many times over the past few years. What’s truly shocking is the backend.

OpenAI investigators pieced together a complete organizational structure from the usage logs of these ChatGPT accounts:

The scam compound is divided into three departments: the traffic generation team, the reception team, and the management team. The traffic team runs ads to attract targets; the reception team builds trust through chat; the management team handles the final harvest.

They produce daily reports. Each report lists every active victim, the responsible person, the current progress, and a number called:

kill value.

This is the estimated amount the management expects to extract from each person.

They also use ChatGPT to analyze financial accounts, generate work reports, and even ask ChatGPT how to connect to APIs or modify dating website code. When the managers speak Chinese and the staff speak Indonesian, ChatGPT handles the translation.

Amusingly, one scam worker, after making some money, openly asked ChatGPT about tax issues, honestly listing “scammer” as their occupation.

OpenAI’s report is quite restrained, stating that based on the scam gang’s own input records, they might be handling hundreds of targets simultaneously, earning thousands of dollars daily. But the report also admits that these figures cannot be independently verified.

However, I think it’s unnecessary to worry about the accuracy of these numbers. Just looking at this management process alone is revealing:

Traffic, conversion, customer value, daily reports, departmental division—change the terminology, and it looks like a SaaS company’s operational manual.

And all the activities—dating, translation, daily reporting, coding, accounting—are mostly handled by a single ChatGPT account.

The story doesn’t end there.

In the same report, OpenAI also uncovered a second operation, codenamed Operation False Witness, also originating from Cambodia.

This line targets victims who have already been scammed.

The logic is simple: if you’ve been defrauded by a pig butchering scam and want to recover your losses, you search online for solutions.

Then you see an ad for a law firm claiming to help victims recover damages. You click.

The website looks very legitimate. Some lawyer photos are stolen from social media, others are AI-generated. Each law firm has an address, a license, and a profile. ChatGPT generated a fake New York State Bar Association membership card and a fake lawyer registration record.

OpenAI found at least six fake law firms.

There’s also a website that impersonates the FBI Internet Crime Complaint Center. It has a “Submit Complaint” button, which redirects to a Telegram account.

On Telegram, the “lawyer” begins chatting with you. The script is generated by ChatGPT, specifically crafted in “American English” with a professional tone. They tell you they cooperate with the International Criminal Court and that recovery services are free before payment.

But you need to pay a 15% service fee upfront via cryptocurrency to activate your account.

They also ask you to sign a confidentiality agreement. This agreement is also written by ChatGPT, designed to prevent you from verifying the information elsewhere.

Later, the FBI issued a public warning about this scam, noting it mainly targets elderly people, exploiting their urgent desire to recover losses.

After reading these two cases, I believe that in today’s environment where AI has become standard, the most ironic part is this:

The first time they scam you, you are just a target. The second time, you are a better target—because you’ve already proven you can be fooled.

Finally, OpenAI summarized the scam process into three steps in their report:

The first step is called ping—cold outreach to get the target’s attention.
The second is zing—creating emotional responses, making you excited, nervous, or scared.
The third is sting—harvesting the money and walking away.

The framework is neatly summarized. Look closely: at which of these three steps can AI not stand in for a human?

In traditional pig butchering scams, the biggest cost was human labor. You had to hire rooms full of people to sit at computers and chat, and they had to speak the target’s language. Early on, Cambodia’s scam compounds even recruited English speakers at high wages.

Now, looking at the dating scams described in the report, the managers speak Chinese, the staff speak Indonesian, and the targets are Indonesians. With language barriers, this work was impossible before. But with ChatGPT, it’s all seamless.

Language is just one aspect.

The report also mentions that some scam workers even asked ChatGPT how to connect to OpenAI’s API, aiming to fully automate the chat process.
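That automation requires almost no engineering. As a hedged sketch, the “glue code” amounts to building one request body in the shape the Chat Completions endpoint expects; no network call is made here, and the model name and persona text are illustrative, not taken from the report:

```python
import json

# Build (but do not send) a chat-completions request body.
# Each incoming Telegram message would be appended to the history
# and POSTed to the API; the reply comes back as generated text.
def build_chat_request(history: list[dict], user_message: str) -> dict:
    messages = history + [{"role": "user", "content": user_message}]
    return {
        "model": "gpt-4o-mini",  # illustrative model name
        "messages": messages,
    }

request = build_chat_request(
    [{"role": "system", "content": "You are a friendly pen pal."}],
    "Hi, how are you today?",
)
print(json.dumps(request, indent=2))
```

The point is the barrier to entry: a few lines like these turn a one-on-one chat job into a loop that scales with server capacity rather than headcount.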

In other words, AI isn’t making the scams more sophisticated; the scams are still the same. AI is making them cheaper.

According to OpenAI, this gang might be handling hundreds of scams simultaneously. As scale increases, the labor cost per victim decreases, allowing them to target more people with smaller amounts.
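The unit economics behind that claim can be made concrete with entirely made-up numbers (the report gives no per-worker figures):

```python
# Illustrative unit economics; all numbers are hypothetical.
monthly_wage = 500            # hypothetical wage per chat worker, USD
manual_chats_per_worker = 10  # conversations one human can juggle alone
ai_chats_per_worker = 100     # with AI drafting and translating every message

cost_per_target_manual = monthly_wage / manual_chats_per_worker
cost_per_target_ai = monthly_wage / ai_chats_per_worker

print(cost_per_target_manual)  # cost per target, manual chatting
print(cost_per_target_ai)      # cost per target, AI-assisted
```

Whatever the real figures are, the direction is the same: a 10x increase in conversations per worker cuts the labor cost per victim by 10x, which is exactly what makes small-ticket, high-volume scams viable.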

And there’s another point worth pondering:

OpenAI can detect these scams because the scam gang uses ChatGPT, and the chat logs are stored on OpenAI’s servers.

But what about those using locally deployed open-source models?

What this report shows is only a small piece of the puzzle that OpenAI can see. The parts they can’t see—how big is that? No one knows.
