In 2013, our internal Information Security team carried out a series of controlled anti-phishing exercises. The purpose was to raise employees' awareness of potential spear-phishing attacks delivered through email. Spear phishing has been a common first step in Advanced Persistent Threat (APT) attacks, used to gain access to a user's system before launching further attacks at internal targets. If employees are vigilant against such attack patterns, we can effectively reduce the risk of successful APT attacks that involve email phishing.
Through a series of specially designed phishing emails sent over the four quarters, at one to two emails each month, the team measured an average click rate of 26%. The lowest click rate was 5%, and the highest was 61%. Month over month, however, there was no discernible trend: some months were low, while others suddenly shot up. What was the data telling us? Did users' awareness improve as a result of the exercise, or did it remain unchanged?
Unfortunately, the click-rate data alone does not provide sufficient insight for us to answer these questions directly. Judging empirically from the volume of email generated on internal discussion mailers, there was definitely increased interest in the subject matter. But not all employees were on those mailers, and many remained silent even after they had clicked on the educational phishing links.
What we could conclude from the campaign was the following. Interestingly, Cisco has internally deployed our own IronPort Email Security Appliance (ESA) solution, which has a good reputation for blocking spam, known phishing emails, and other malicious content, including web sites where the Cisco Talos Security Intelligence and Research group has detected suspicious activity from its global feeds. Users who are aware of this feel safe with the email that reaches their inbox: if a message were bad, it would have been blocked already. This probably explains the high click-through rates as well, reflecting confidence in our own technology in action.
Another, more critical metric was captured during the anti-phishing campaign; from the responsive security perspective, it is more important than the click rate itself. This metric relates to the time it takes for the incident response team to be notified, and the time the team then needs to prevent further exploitation by the "offending" site (the URL that the phishing email directs the recipient to click). In one of the exercises, 294 users received the phishing email, and a total of 69 users (23.5%) clicked the URL link between 10 AM and 4:30 PM EST. The incident response team, however, was notified within seven minutes of the start of the exercise, at 10:07 AM. In response, the team blocked (i.e., "blackholed") the offending site, which took five minutes to complete. Twelve minutes into the exercise, at 10:12 AM, the offending site was no longer reachable. Instead of all 69 users being affected by the "attack," only the first few who had read the email and followed the URL link within that 12-minute window of exposure were "compromised." The rest were in fact "safe" from the simulated attack.
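The notify-to-mitigate arithmetic behind these numbers can be sketched in a few lines. This is a minimal illustration using Python's standard datetime module, with the timestamps taken from the exercise above (the calendar date is a placeholder, and the variable names are our own):

```python
from datetime import datetime

# Timestamps from the exercise described above (all EST; date is a placeholder).
exercise_start = datetime(2013, 1, 1, 10, 0)   # first phishing email sent
team_notified  = datetime(2013, 1, 1, 10, 7)   # first user report reaches the IR team
site_blocked   = datetime(2013, 1, 1, 10, 12)  # offending URL blackholed

notification_delay = team_notified - exercise_start  # how long until the IR team knew
mitigation_time    = site_blocked - team_notified    # how long the blackholing took
exposure_window    = site_blocked - exercise_start   # total time users were at risk

print(notification_delay)  # 0:07:00
print(mitigation_time)     # 0:05:00
print(exposure_window)     # 0:12:00
```

The point of the decomposition is that the exposure window is the sum of the two delays, so shortening either the users' notification delay or the team's mitigation time directly shrinks the period in which clicks can do harm.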
The 12-minute window, which includes a seven-minute notification period, reflects the responsiveness of the organization in two respects. First, if the initial users had notified the incident response team earlier, the notification delay would have been shorter, further narrowing the window of exposure. (In the exercise, the clock started when the first phishing email was sent. There is a lapse between that start time and the time the first user opens the email, which is unlikely to be zero, so the actual notification period was less than seven minutes.) Second, the notification metric signifies the level of awareness of the users.
The notification delay could also go to the other extreme, where no user reports the suspicious email. In that case, there would be no mitigating response from the incident team at all. A shorter notification delay therefore signifies a higher level of awareness: of what constitutes a suspicious email, how to verify it, and how and whom to notify to investigate the suspicion. Regardless, the key lesson from these notify-to-mitigate metrics is clear. An effective awareness program, incorporating not just knowledge of what constitutes a suspicious phishing email but also a notification and mitigation response process working in concert, can limit the potential damage to the organization.
These exercises show the benefits of taking a responsive security approach. Traditional preventive measures direct awareness efforts at what constitutes a phishing email and at getting users to refrain from clicking on the link. A responsive approach extends that scope to ensure that the notification process is also well established and understood by users, and that the incident response process is adequately prepared to implement relevant stopgap measures or workarounds to mitigate a newly detected risk exposure within the shortest possible time. In short, we accept that there will be weak links in our systems, spanning users, processes, and technology; the click rates demonstrate as much. By being prepared and ready to deal with their exploitation at the earliest possible time, we can effectively and reliably limit the potential damage that may be incurred.
This concludes our blog series on Responsive Security. Thank you for joining us to learn, adopt, and practice a responsive security approach for managing information security risks in your organization.