The value of experience (or "don't fire the person that got phished")

Posted on 2020-01-15 by Matt Strahan in Social Engineering

When performing social engineering attacks, physical intrusion attacks, or red teams we have to be particularly careful. At all times we have to be aware that we're not dealing with emotionless systems here, but with real people who are often just trying to do their jobs. What's more, the people on the other end can feel misled, manipulated, and betrayed. Perhaps the hardest challenge of designing an effective user awareness programme is getting the desired outcome of increased security when you're dealing with real people, people who have emotions and potentially unpredictable behaviours.

With systems, getting the outcome of being more secure generally means fixing bugs, tightening configuration, or implementing more controls. Usually once you’re done you can look back and say “yep things are more secure”.

People are more complex than that. Actions that you take can backfire and make things worse. I’d like to talk about perhaps the most extreme of these actions: disciplinary action including firing the person who got phished.

Stages of a successful social engineering attack

From the user's side, a successful social engineering attack has a few stages:

  1. The user receives the social engineering attack.
  2. The user engages with the attack.
  3. The user performs an action as a result of the attack.

In these three stages there’s plenty in the user’s control, but there’s also plenty outside the user’s control as well. The user received the social engineering attack, but should it have been instead blocked by an email filter? If the user submitted credentials, should multi-factor authentication have mitigated the consequences? If the user transferred money, should the organisation have required multiple sign-offs for large financial transfers? If the user provided information or access, should that information or access have been provided to the user in the first place?

What's more, it can be quite hard to spot social engineering attacks. Gone are the days of emails with poor English. Social engineering attacks are now often targeted, with authentic letterheads, and potentially sent from a hijacked email account. Without training, how is the user expected to spot the attack?

I believe one of the main advantages of phishing exercises is not the assessment and metrics that we provide, like "last time 20% of users spotted the attack but this time only 13% of users spotted it", but the practice the user gets in spotting the attack. Good user awareness programmes should incorporate a "lessons learned" component that shows the email back to all users and explains what could have triggered an indication that it was a phishing attack. They should also incorporate skilful targeted attacks that may not even have any obvious signs, but simply ask the user to perform actions outside of process.
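As a side note, the exercise metrics mentioned above are simple ratios that can be tallied per campaign. A minimal sketch of what that tally might look like; the field names and data here are illustrative assumptions, not anything from an actual phishing platform:

```python
# Minimal sketch of tallying phishing exercise metrics.
# The 'clicked'/'reported' field names are illustrative assumptions.

def exercise_metrics(results):
    """Summarise per-user outcomes of a phishing exercise.

    Each result is a dict of boolean flags:
      'clicked'  - the user engaged with the attack
      'reported' - the user reported the email
    Returns the click rate and report rate across all users.
    """
    total = len(results)
    clicked = sum(r["clicked"] for r in results)
    reported = sum(r["reported"] for r in results)
    return {
        "click_rate": clicked / total,
        "report_rate": reported / total,
    }

# Example: 10 users, 2 clicked the link, 3 reported the email
results = [{"clicked": i < 2, "reported": i >= 7} for i in range(10)]
print(exercise_metrics(results))  # {'click_rate': 0.2, 'report_rate': 0.3}
```

Tracking the report rate alongside the click rate matters for the "fourth stage" discussed below: an exercise where nobody clicked but nobody reported either is still a poor result.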

The fourth stage

There's a hidden stage of a social engineering attack that I didn't mention above. A successful social engineering attack ends with the user not reporting the attack. The longer the attacker has before the attack is noticed, the more the attacker can do: leveraging access, exfiltrating information, and establishing persistence. When the attacker has more time, the impact on the organisation can grow dramatically.

Let's put ourselves in the shoes of a person who has been socially engineered. There are a fair few conflicting emotions intermixing. The victim of the attack could be feeling fear, embarrassment, or anxiety, all of which could contribute to the attack being left unreported. This mix of emotions often doesn't come from nowhere, but is grounded in past actions that have occurred in similar circumstances.

This is why disciplinary action as a result of previous attacks can be so counterproductive and even dangerous. The victim will be thinking "what happened last time? The person was yelled at, and one person was fired! I don't want that to happen to me!" It's potentially a completely rational decision to leave the attack unreported.

The disciplinary action the victim is concerned about occurred despite all of the ways that the victim was let down by the organisation. The email filtering didn't stop the attack. Multi-factor authentication didn't mitigate it. The financial processes were ineffective. The training was insufficient. The user wasn't given the practice needed to be resilient against the attack. Social engineering works not because "users are the weakest link", but because a failure in technology forces users to be the last line of defence.

What ends up happening is that what should be routine incident response turns into a crisis. Should you blame the victim for making rational decisions?

I feel firings and "voluntary resignations" in the case of security incidents can be extremely counterproductive. If someone leaves as a result of a security incident, it's not just the person that leaves, but also the learning and experience they gained from being involved in such an incident. Who would you trust more in responding to future incidents: the "new blood", or someone who has been through such an incident earlier in their career and has learned from it? The experience from that incident is valuable, and in a time of struggle it is good to at least take something away from it.

All this makes positive reinforcement a key component of social engineering programmes. It is critical that everyone in the organisation feels that they are a part of its security, and that all users feel supported during and after an incident. There needs to be an understanding that the impact of a successful social engineering attack isn't only on a user, but on the entire chain of security for the organisation. In turn, users should be provided the tools, including education and practice, required to do their part.

About the author

Matthew Strahan is Co-Founder and Managing Director at Volkis. He has over a decade of dedicated cyber security experience, including penetration testing, governance, compliance, incident response, technical security and risk management. You can catch him on Twitter and LinkedIn.

Cover photo by Markus Spiske on Unsplash.

If you need help with your security, get in touch with Volkis.
Follow us on Twitter and LinkedIn