Are you an interesting target? In the rack and stack of threat intelligence, the tactics used by an adversary are illuminating. It's easier to flip atomic indicators in and out than to flip over the way you do business. It's a matter of investment. That domain? Pffft. No problem. The method you use to acquire and employ those domains? The one you've invested resources, tools, infrastructure, and people in to smooth out and make just right? That's a little harder to justify for a quick drop and switch.
Consider the recent post by ESET, a follow-on to a previous paper from March of this year discussing the Turla adversary and their use of watering hole techniques. See this Forcepoint article about it as well.
The point is that tactics tend to be one of those wonderful elements to fingerprint and identify. They are pretty durable, as I pointed out above. Right now, let's talk about a particular case: the use of the watering hole tactic. It's an excellent social engineering gambit: targeting a select group of end users by compromising a place they are known to visit and congregate.
Why change what works?
Even animals know predators hunt at the places where they go to drink water. It doesn't keep them from going there, though they do take greater care as they approach, more so if an attack was recent or spectacular in some way. Switch that thought to people. This watering hole attack focused on embassy (and other) sites isn't exactly new. In fact, if you had a chance to poke through the linked articles and their references, you'll note this type of activity has been present, with a few variations, since 2014. It's rather a ridiculous time frame when you think of it. Slightly sad as well, especially given that we know it is happening. A portion of it is fatigue. Alert fatigue. Exhaustion from processing an overwhelming abundance of atomic indicators that switch at the drop of a hat. Tiredness, if you will.
Another issue is understanding.
How exactly does this tactic work, and how can I prevent, detect, and if necessary mitigate its use? An integral effort in threat intelligence is not just identifying the tactic used by the adversary but also providing the information and understanding to deter, stop, and hunt for it. In this case, the technical side of the watering hole leverages a malicious redirect, just not for everyone. It also wraps in the concept of filtering: serving malicious content only to those of interest. As a hat tip toward the title, they really are only interested if you are interesting. Interesting, in this case, means your IP address fits within the ranges of their interest. So, if you step away from the rest of the technical details needed to set up, deliver, and take advantage of the watering hole (all, by the way, well covered in the linked articles), you'll quickly stumble over the question of how the targets were, well, interesting enough to target.
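To make the filtering concrete, here is a minimal sketch of how that server-side selection might look. The ranges, function names, and page strings are all hypothetical stand-ins (the target ranges below use documentation address space), not the adversary's actual code:

```python
import ipaddress

# Hypothetical "interesting" ranges; a real watering hole would carry the
# adversary's curated list of targeted networks.
TARGET_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_interesting(client_ip: str) -> bool:
    """Return True if the visitor's address falls inside a targeted range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in TARGET_RANGES)

def serve(client_ip: str) -> str:
    # Everyone gets the legitimate page; only targets also get the
    # malicious redirect injected into it.
    if is_interesting(client_ip):
        return "clean page + injected redirect"
    return "clean page"
```

The point of the sketch is the asymmetry: two visitors requesting the same URL receive different content, and only the visitor inside the targeted range ever sees the redirect.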
That particular rabbit hole is pretty deep; I'll be the first to admit it. However, since the adversary in this case is being particular, it bears asking both "why" and "how". The "why" of things is a bit too far out of the wheelhouse, but the "how" is definitely derivable. The technical side of "how" is documented in good detail in the above articles as well. The other element of "how" is the filtering of IP addresses within a targeted range. That's a definable pattern. Not necessarily easy to leverage, but doable.

A script that looks for IP addresses belonging to my company is something I'm interested in investigating. Given the amount of traffic flowing through a network, that could be tough, especially if it's running server-side and not susceptible to my monitoring. The outcome, however, is observable. Visiting the watering hole from a wide enough range of IP addresses will expose the different avenues of the filtering.

Analytics scripts shouldn't be one thing one time and something different another. They lean toward the static to perform their purpose. Their size, contents, naming, and other observable artifacts can be plumbed for patterns that hint at when a change has occurred. Updates to plug-ins and browser add-ons should always be scrutinized, if allowed. In the scheme of network hunting, that's an observable that can bear good fruit if plucked. It's not likely an anomaly, unless it's a restricted action under your enterprise rules; then, of course, it's a great cue toward undesired activity. However interesting, it's more likely that the use of the evercookie, another element of this attack, will be discovered before the other two. This type of attack is not new; it's another one of those tried and true methods that works and continues to work.
Why in the world would we change it?
Let me leave you with a few thoughts. All the articles linked on this topic include indicators, from the addresses of fingerprinting servers to the hashes of files, malware, and other artifacts. They also talk about process and procedure: the steps taken during the attack and afterward. My thought here is that you should add to that analysis the pre-steps, the pre-stage efforts needed to make it all work. Those leave just as many hints and cues, if not more, than the debris of artifacts and effort found after the attack happens. If you were to gain the list of IPs being used for the filtering, for example, what would that tell you? Could it hint at which IPs they (the adversary) had determined were of interest? If it was fully inclusive of all your IPs, were they something that could be determined from a public information scan? Were they only observable internally? Think about where that path could lead you. It could hint at insider activity, malware on your systems, sniffing of network traffic, and plenty of other options.
Every one of those speaks “threat intel pivot” and “threat hunting” to me.
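That pivot can be sketched too. Assuming you recovered the adversary's filter list, bucketing each address against your publicly routable and internal-only ranges (all hypothetical values below) separates what they could have scraped from public information versus what points toward an inside source:

```python
import ipaddress

# Hypothetical data: your organization's publicly routable ranges and
# your internal-only (RFC 1918) ranges.
PUBLIC_RANGES = [ipaddress.ip_network("198.51.100.0/24")]
INTERNAL_RANGES = [ipaddress.ip_network("10.20.0.0/16")]

def classify(filter_ips):
    """Bucket each adversary-filtered IP: public, internal-only, or unknown."""
    buckets = {"public": [], "internal": [], "unknown": []}
    for ip in map(ipaddress.ip_address, filter_ips):
        if any(ip in net for net in PUBLIC_RANGES):
            # Plausibly gathered from a public information scan.
            buckets["public"].append(str(ip))
        elif any(ip in net for net in INTERNAL_RANGES):
            # Internal-only addresses aren't publicly scannable; their
            # presence hints at an insider, malware, or traffic sniffing.
            buckets["internal"].append(str(ip))
        else:
            buckets["unknown"].append(str(ip))
    return buckets
```

Each non-empty bucket is a different hunting lead: public hits say the adversary did reconnaissance; internal hits say they had a source you need to find.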