Phish Like You Fight, or A Surprise Bonus, For Me?
A report published last week by CERTFA, the cybersecurity team focused on Iranian threat actors, detailed a holiday spear-phishing campaign by APT 35, AKA Charming Kitten. The kittens mounted a fairly sophisticated phishing scheme during December, embedding the usual malicious links in holiday-themed emails. However, they also utilized compromised Google accounts to add credibility to SMS messages disguised as Google security alerts. The holiday well-wishes included such hits as the uplifting “May this festive season sparkle and shine, may all your wishes and dreams come true, and may you feel this happiness all year round. Merry Christmas!” as well as the curiosity-spawning “This year I decided to make my friends happy with my last book. Here’s my special Xmas gift to you. Hope you enjoy it”. Hopefully none of this reads familiar to any of your end-users because (again, hopefully) they’ve been through phishing simulations and resisted clicking on what will surely be the next War and Peace.
Let’s talk about those simulations. Earlier in the holiday season, an undoubtedly well-meaning CISO tested the mettle of his company with an all-but-irresistible phishing training simulation announcing a fictitious holiday bonus. That campaign turned out to be a learning experience for all parties involved when the test unexpectedly went viral. You can read about it on Forbes, here. Even yours truly pushed the training envelope early in the pandemic by co-opting a technique already in use by the bad guys: a simulated link allowing end-users to “See if there are Covid-19 Infections near you”. Mrs. Isaac herself fell for one of her employer’s dastardly, yet cleverly disguised, e-card phishing training simulations in early December. This got me thinking about how aggressive phishing simulations should be.
As Security Practitioners, we know that phishing and social engineering attacks are among the most significant threats to an IT environment. If you find yourself on the wrong end of a compromise, the odds are better than even that phishing was involved, if not the actual vector*. This threat is exacerbated by the fact that end-users are a huge attack surface that can’t be eliminated. The good news, of course, is that the risk from phishing and social attacks can be mitigated through training, exercise, and good feedback.
Patton (Pictured above, and possibly wondering if that telegram about winning the lottery was a spear phish) said “You fight like you train”. Long before that, the Spartans adopted the even more extreme “You fight like you live”. Tenets like these highlight the importance of training for phishing attacks for (at least!) two reasons.
First, it educates your end-users about the tactics, techniques, and procedures used by the bad guys. In short, it lets the end-user know what to expect. Who would think that bad guys would send fake Amazon tracking notices? Hopefully all of your end-users would, after some good ol’-fashioned simulating.
Second (and more importantly, in my opinion), phishing simulations acclimate your end-users to the simple act of calling someone a liar. And let’s face it: that is what your end-user is doing when they report a suspicious email. If you’re lucky, most of your employees are decent, confrontation-avoiding folk who have been taught it’s somewhat rude to accuse someone of being dishonest. When you ask employees to report email, that accusation is essentially what you’re asking them to make, and some of them will need help getting used to it. They may not feel comfortable clicking that report button. They may be thinking that the email from “firstname.lastname@example.org” asking them to reset their PayPal password might be legitimate. They might wonder, “Why would Update185621 lie to me?” Training and phishing simulations serve the important role of giving the end-user the comfort level to call old Update185621 on his or her shenanigans. The repetition of this training drives the reporting process into the end-user’s mental muscle memory. It reinforces the fact that no one is going to punish them for reporting suspicious emails.
It should be noted that a critical element of this process is feedback. There needs to be some follow-up from your security personnel or SOC that lets the end-user know they did the right thing. Even in the case of a false-positive identification, they still need to know they did the right thing. This feedback also needs to be timely. In the case of correctly identified phishing emails, a speedy response confirming the valid report is critical to positively reinforce the behavior. Quick feedback on incorrectly identified emails is equally important. As Security Practitioners, we would rather an end-user err on the side of caution. That is why, in cases of incorrectly reported emails, it is crucial for the end-user to hear back from the SOC that the report was appreciated and investigated, and that the email was ultimately determined to be legitimate. This lets the end-user know that there is no downside to their “better-safe-than-sorry” mindset. From a practical standpoint, it also ensures that legitimate emails are returned to inboxes so business operations can continue.
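To make that feedback loop concrete, here’s a minimal sketch of the two messages every report should trigger: an immediate acknowledgment, and a follow-up verdict that thanks the reporter either way. The function names and message wording are purely illustrative assumptions on my part, not any real SOC tooling or vendor API.

```python
# Hypothetical sketch: the two-message feedback loop for reported phishing.
# Names and templates are illustrative only; wire them into whatever
# ticketing or mail system your SOC actually uses.

def acknowledge_report(reporter: str, subject: str) -> str:
    """Immediate reply the moment a report lands: you did the right thing."""
    return (
        f"Hi {reporter},\n\n"
        f'Thanks for reporting "{subject}". You did the right thing.\n'
        "The SOC is reviewing it now and will follow up with a verdict.\n"
    )

def close_report(reporter: str, subject: str, verdict: str) -> str:
    """Follow-up once the SOC reaches a verdict; positive tone either way."""
    outcome = {
        "malicious": "was malicious and has been removed from all mailboxes.",
        "benign": "turned out to be legitimate and has been returned to your inbox.",
    }[verdict]
    return (
        f"Hi {reporter},\n\n"
        f'We investigated "{subject}". It {outcome}\n'
        "Either way, reporting it was the right call. Keep it up!\n"
    )
```

Note that the “benign” branch still thanks the reporter; that is the whole point of the better-safe-than-sorry reinforcement described above.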
So we know training is necessary, but we might still struggle with what form it should take. How difficult should it be? How far can we push the end-user? Should training consist of easily identified holiday e-cards with dancing hamsters? Should we test the end-user’s intellect, forcing them to use some critical reasoning and self-control? The answer is a little bit of all of the above.
When I sent the Covid-related phishing simulation in early 2020, I received a lot of feedback. Typically, after a reasonably challenging simulation I would receive an email or two from the end-users: a joking “You got me that time”, or a “Not today, Satan”. However, after the “See COVID-19 cases in your area” simulation I heard from quite a few end-users. The responses almost universally decried my use of the then-developing pandemic, including “How dare you!” and “Indefensible!”. My favorite was the eminently civil “Dirty pool”. Each of them had, of course, clicked on the test link. I wrote or called each of them back, explaining that the bad guys will always use whatever tools they have available, and that fear, curiosity, and greed are among the most powerful of these. I explained that the simulations were designed to provoke thought and awareness of the depths to which the bad guys will stoop. Those exchanges and conversations made it clear to me that, while not every phishing simulation has to burn down the house, the underlying content of at least some simulations has to be as challenging as the current state-of-the-art bad-guy stuff. That should be viewed as the low end of the spectrum, though. A good Security Practitioner should anticipate emerging phishing schemes in light of social trends and current events. We need to prepare our end-users for them because the bad guys are already doing this very thing.
Finally, even in light of the above anecdotes, and recognizing the need to push our end-users’ comfort zones a bit, I would still counsel some temperance and restraint on simulation topics. Simulations should challenge one or all of the aspects that make phishing successful: curiosity, fear, greed, even empathy. However, there are ways to do this that teach the lesson without causing unnecessary emotional distress. For instance, leveraging an ongoing corporate sore point for a simulation (perhaps the lack of bonuses or raises) is sure to result in a highly “clicked” simulation. Even though that avenue of simulated attack might be a legitimate security concern, as a simulation, it just “feels” personal. Issues like these, which carry the possibility of causing an actual disruption to end-users and thereby to business operations, might be better addressed by an email from security pointing out the likely phishing strategy. That is not to say you shouldn’t challenge your end-users with complex and timely simulations. Simulations should range from the tried-and-true e-cards to innovative use of emerging social issues. As always, though, Security folk need to remain cognizant that our simulations don’t cross that fine line between constructive and destructive.
*Barney Stinson once observed that 83% of all statistics are made up. However, Roger Grimes at KnowBe4 puts the percentage of malicious breaches caused by social engineering and phishing at between 70% and 90%.