This is a rather long article, and not every reader will find it enjoyable; I would still like you to give it a chance.

Some of this is written in a manner that can be understood by the less experienced. Maybe because I am too stupid myself : D

The following methods were used to help stop/close channels such as BotSquad and HiddenVault on YouTube.

The dangers of YouTube malware campaigns and how to help prevent further distribution

Part 1: Introduction

Even though YouTube is commonly referred to as “dead” (spreading-wise) by many, that statement is quite false.
YouTube has to be one of the few platforms capable of mass-harvesting victims without utilizing exploits (excluding macros).
As a result of this, it is crammed with malware.
Each day, at least one video per popular niche is uploaded as part of a spreading operation; that’s a minimum of 365 spreading videos a year.
For any recent successful video, the installation rate is way above that of videos in the past. This is because there are fewer videos now, yet still enough to cause a lot of damage.
Fewer videos means less competition between spreaders, so one video can get an insane amount of attention, which is why it is important to shut such channels down.
So, the reason why YouTube is still a problem: most victims are below the legal school-leaving age (which may vary depending on your country/state).
As a result of this, a majority of the machines the victims use will be a family/shared computer.
Even if the computer isn’t shared, chances are that the victim has requested to use their parents’ credit/debit card for an online transaction.
Many of these teens add their parents’ card or bank account to a PayPal account to make transactions even quicker.
Stores like eBay, Amazon and so on will store the card upon request; this feature is used more often than not.
Once the attacker has the credentials to the victim’s PayPal, making transactions directly from the account is a fairly simple process. “I’ve heard all of this before.” Of course, but that does not stop this from still being a problem.


Part 2: Social Engineering aspects of YouTube spreading operations

– A proper SE is the root of YouTube spreading success.

If an attacker fails to master the SE, they will gather minimal installs.

Common spreading videos consist of the following SEs:

Deception: this applies to all SEs; no further explanation needed.

Channel Maintenance: spotting a maintained spreading video can be done by checking 4 signs:

– Attacker has regular communication with viewers (comments)

– Comments are actively removed

– Large view/like spike

– Comments that indicate a tool update

The attacker will speak/type a set of instructions to match the software the victim is trying to install.
Once the attacker shows the tool working, the soon-to-be victim will usually check the likes and comments (refer to SE 2) in the hope of confirming that the tool is in fact legitimate.
If the comments back up the software in the video, a sense of trust is created between the victim and the attacker. Once this trust is built, the victim will install a working program delivered alongside a payload (usually downloaded by a function within the original program the victim intended to install).

Comments from viewers:

A viewer comment isn’t the attacker directly socially engineering his targets; rather, it’s his past victims doing it for him without knowing.
The attacker will deliver a working tool which also downloads/loads a payload (working tool EG: a ‘CCleaner Crack’ which functions just like any other CCleaner release). This results in victims commenting something along the lines of “This is working”, etc.
This is effective long term, since the accounts posting these comments are often legitimate YouTubers with a following above 20 subscribers, which makes the comments look authentic.
Once all 4 SEs are put to use, the video will be very effective for both long-term and short-term operations.


Part 3: Identifying a persistent distributor

Attackers tend to stick to old habits when it comes to distribution and the manner in which they deliver the malware.
As you progress in hunting on YouTube, you will start seeing specific patterns between separate channels and videos.
When you start seeing patterns, this is a sign of an attacker either returning to the YouTube scene, or just creating a new channel for greater spreading efficiency.
It is important to take note of such patterns if you intend to build a report on a specific distributor.

Identifying your spreader:

Extracting a path from debug information (PDB)

Failing to strip PDB debug information is very common amongst new malware authors.
Using any string extractor (or a hex viewer), we can find the attacker’s local PDB path (which usually points to the build directory of the malware).
To find the PDB path, simply use your string extractor’s search function (if applicable) and look for “PDB”.
The search should return a string formatted like so:

c:\users\Admin\documents\visual studio 2015\Projects\Illegal Malware\Illegal Malware\obj\Debug\Illegal Malware.pdb
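The string search above can be sketched in a few lines of Python. This is a minimal illustration of the same idea, not a replacement for a proper string extractor; the sample bytes and the “Evil” project path are made up:

```python
import re

# CodeView debug records embed the build machine's PDB path as a plain
# printable string, so a regex over the raw bytes is often enough.
# A run of 5+ printable characters ending in ".pdb" is treated as a path.
PDB_RE = re.compile(rb"[ -~]{5,}\.pdb", re.IGNORECASE)

def find_pdb_paths(blob: bytes):
    return [m.decode("ascii", "replace") for m in PDB_RE.findall(blob)]

# Made-up bytes standing in for a compiled sample ("RSDS" marks a
# CodeView PDB 7.0 debug record in real binaries).
blob = b"\x00RSDS\x01\x02c:\\users\\Admin\\Projects\\Evil\\obj\\Debug\\Evil.pdb\x00"
print(find_pdb_paths(blob))
```

The same one-liner works over a whole file read with `open(path, "rb").read()`.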

A quicker way to do this would be simply dragging the executable into IDA.

If debug information was found, you will see a box that looks like the following image:

Once a path has been found, you can store it alongside any other information found so far.
If a second sample is found, make sure to save the new information alongside the old report, including the path.
If the path starts with something along the lines of “c:\Users\User” or “c:\Users\Admin”, look for further information, such as:
Common build path – does the author build in the default VS path, or does he build somewhere like “c:\Users\admin\Projects”?
You should pair this with the next segment (Identifying an attacker from code habits) for maximum identification potential.
The same principle can be applied to build locations on external drives.


Identifying an attacker from code habits

Code habits only apply if the malware was coded by the distributor.
Things to look out for include how certain tasks are performed.
You’re not going to get too far looking at that alone, but you will start noticing much more subtle features in the same developer’s code which you can use to identify a future sample.
An example would be a developer with 2 different pieces of malware in distribution.
Say the malware was delivered alongside a “booter”; the booter would usually have to contact an API.
Inexperienced developers tend to use something along the lines of:

WebClient.DownloadString("" + TextBox1.text + "&time?=120");

A more experienced developer will use something better suited to the job instead of slapping in whatever works (such as WebClient.DownloadString). Spotting such details will help you with future identification.

Identifying an attacker from contacted Domain/IP

This is a fairly obvious one.

You could also check if the sample is using the same server provider as a sample from the past.

Identifying an attacker from malware/packer family

While attackers do tend to change their tool often (at least in this line of work), it is still a good idea to take notes on what the attacker often uses.


Does the attacker pack the tool with ConfuserEx or a purchased crypter?

Is the attacker using a customized variant?


Identifying an attacker from channel habits

This is the quickest way of identifying a prior attacker since the channel is the second page you will usually see while hunting.

The main things to look out for are channel art and video consistency:

– Channel art similar to that of a past channel

– Strict upload schedule similar to that of a past channel

– Title formatting similar to that of a past channel

– Description formatting similar to that of a past channel

– Comments from recognizable accounts (accounts that commented on a past channel)

– Specific uncommon host used on a past channel

All details can help, so try to be informative in your reports.


Part 4: Detection of spreading channels

Telling apart a legitimate channel from a spreading channel is usually a fairly straightforward process.

Things to look out for:
– Multiple uploads in a short time span.
– Multiple similar uploads. EG: “CCleaner Crack”, “Sony Vegas Cracked”, “Other Tool Cracked” and so on, all uploaded on the same channel.
– Booters, crypters, and other malware/”hacking” related videos are a big red flag for backdoors.
– Password-protected zips in the description. A password-protected zip prevents VirusTotal from being able to scan the executable, resulting in 0 detections being returned to the user.
– If something looks too good to be true, it probably is. (This is true outside of technology too 🙂 .)
– “Modding” or “Hacking” related channels.
– Silly like:dislike ratio. EG: 100 likes and 0 dislikes.
– Odd views/likes for the publication date. EG: the publication date was 3 days ago, but the video has above 100 likes. All videos need time to grow; no video should have a huge amount of views/likes if it is a very recent upload.
– Like count above (or close to) the view count. This occurs with fake likes, and fake likes indicate malware.
– Video/s can be found on “like4like” websites. Attackers usually choose these sites to get free traffic and likes for their video.
– Video/s contain multiple “tags” (keywords) in the description.
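A few of the numeric red flags above (like:dislike ratio, likes vs. views, video age) can be turned into a rough scoring heuristic. This is only a sketch to illustrate the idea; the thresholds are my own assumptions, not tested values:

```python
def suspicion_score(views, likes, dislikes, days_since_upload):
    """Rough heuristic: count how many numeric red flags a video trips.

    Thresholds are illustrative assumptions, not tuned values.
    """
    score = 0
    # Silly like:dislike ratio, EG: 100 likes and 0 dislikes.
    if likes >= 100 and dislikes == 0:
        score += 1
    # Odd likes for the publication date: very recent yet heavily liked.
    if days_since_upload <= 3 and likes > 100:
        score += 1
    # Like count above (or close to) the view count suggests fake likes.
    if views == 0 or likes >= 0.9 * views:
        score += 1
    return score

# A 2-day-old "crack" video with 110 likes, 0 dislikes and 120 views
# trips all three checks.
print(suspicion_score(views=120, likes=110, dislikes=0, days_since_upload=2))
```

In practice you would still eyeball the channel; a score is just a way to triage large numbers of search results.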

Part 5: Usual samples and what to expect

– Spreading Channel Example –

Due to nearly all attackers using public malware, identification can be rather easy.
Identifying your sample can help when we get to Part 6.

What to expect:

– Most samples are MSIL/CIL based (usually NJRat, Imminent Monitor or XtremeRat), with the odd Win32 sample (usually DarkComet or other Delphi variants).
– Nearly all samples will contact a free domain. This makes Part 6 of this article very easy to follow along with.
– Samples will usually attempt to evade execution in a virtual environment or sandbox.
– Nearly all samples will be packed with some sort of HF crypter.
– Many CNCs will be inactive. This is usually due to the attacker moving on or simply forgetting about the channel.
– Not everything is malware! Yep. Odd, but true. This is fairly rare, however.
– Contacted domain/s resolve to a residential IP. This is usually found in the lower-quality campaigns and can help with Part 6.

Usual Suspects:


– Imminent Monitor

– LuminosityLink (should only be found on fairly old campaigns due to KFC’s arrest)

– Remcos

Part 6: Preventing further attacks & Sinkholing contacted domains

You would probably think this is as simple as using the built-in YouTube report tool, right? Wrong. YouTube almost never pays attention to malware-related reports.
Reporting the video may still help prevent further attacks, but you will need multiple people to report legitimate information for the removal to take place, and it will take several days.


Sinkholing these kinds of operations is almost always very straightforward. Almost no prior malware analysis skills are required, but it can help to know the basics.

If you wish to find a domain/IP quickly and easily, you could either use a network analysis tool (Wireshark, TCPView, Netmon etc.) or upload the sample to an online sandbox such as Deepviz or Payload-Security.

If you wish to do this statically, you can use a string viewer such as D.I.E, strings (UNIX) or the built-in IDA strings tool. (Only do this with an unpacked sample.)
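The strings-then-filter step can be sketched in Python. This is a rough illustration, not a hunting tool: the regexes are approximations and the sample bytes (including the attacker.no-ip.org host) are made up. Note that file-name lookalikes such as “gate.php” will also slip through, so the output still needs a human eye:

```python
import re

# Pull printable ASCII runs out of a binary blob, the way the `strings`
# utility does, then keep anything shaped like a domain name.
STRING_RE = re.compile(rb"[ -~]{6,}")
DOMAIN_RE = re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b", re.IGNORECASE)

def extract_domains(blob: bytes):
    found = set()
    for run in STRING_RE.findall(blob):
        found.update(DOMAIN_RE.findall(run.decode("ascii")))
    return sorted(found)

# Made-up bytes standing in for an unpacked sample.
blob = b"\x00\x01MZ\x90\x00attacker.no-ip.org\x00GET /gate.php\x00"
print(extract_domains(blob))
```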

Once a domain has been located, you will want to view the whois information to find the registrar.
Search for the abuse email of the domain registrar and forward as much information as you deem necessary for them to deactivate the domain. (You can also do this with server hosts)
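Pulling the abuse contact out of raw whois output can be sketched like this. The whois text below is a made-up fragment (the domain and addresses are hypothetical), and real registrar output varies, so treat the regex as a starting point:

```python
import re

# Registrar whois output usually carries an "Abuse Contact Email" line;
# a case-insensitive line scan is enough to pull the address out.
ABUSE_RE = re.compile(r"abuse[^:\n]*:\s*(\S+@\S+)", re.IGNORECASE)

def abuse_contacts(whois_text: str):
    return ABUSE_RE.findall(whois_text)

# Made-up whois fragment; real output differs per registrar.
whois_text = """\
Domain Name: EXAMPLE-C2.COM
Registrar: Example Registrar, LLC
Registrar Abuse Contact Email: abuse@example-registrar.com
Registrar Abuse Contact Phone: +1.5555550100
"""
print(abuse_contacts(whois_text))
```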

Try to include the following:

– Link to spreading video

– Contacted domain/IP

– Virustotal Scan

– Sandbox results
(Deepviz, Payload-Security etc.)


Preventing Further Attacks.

Although I said reporting the video usually doesn’t have much effect, it sometimes works. If it works, the whole channel could be shut down, resulting in a win for you.
You can also upload the sample to VirusTotal after any update. This will of course raise detections and make the attacker’s whole campaign a lot harder.

Part 7: “Rat Trolling” channels & the issues they cause.

“Rat Trolling” is a rising issue and isn’t slowing down.
The peak of Rat Trolling channels was mid-to-late 2017, as a result of a channel named “BotSquad”. Ouch!

(Totally didn’t just copy MMD)

BotSquad, as the name implies, was a group of several individuals, all with the same intention: gain an online following by annoying RAT victims.


– The issues of “Rat Trolling” channels.

The whole niche is targeted towards a young audience. As a result, a majority of the viewers are inspired to distribute their own malware.
Once a viewer gets a few victims, he creates a trolling channel of his own.
A young kid is now essentially a cybercriminal and is publicly recording his crimes.

This whole process repeats, meaning hundreds of kids will be distributing malware.



Hopefully this has inspired some of you to take a look at YouTube spreading operations.

The end.


This post is a guest post by Purple Worm.

Send me your hard-earned BTC for this abysmal post!: 1EpavyxQwRoCB6599moNBhf17HHUcomae1

Feel free to comment if you have any questions regarding this subject.

For those of you who wish to contact me privately:

Thank you to the following:

Mr. Krabs

Mr. Dionysus