Your Employees Are the Attack Surface: Rethinking Cybersecurity Awareness

Jon Pertwee · 2023 (revised 2026)

 

For a long time, my go-to password was a variation of ‘NewZealand123!’. My favourite country, a number sequence, an exclamation mark for the required symbol. It met every complexity requirement. It was memorable. And, as a colleague pointed out during a conversation that eventually led me to pursue my MSc in IT Security Management, it was trivially easy to crack.

His point was specific: if ‘NewZealand’ appears in more than one breach database, a basic Python script can test every likely variation in milliseconds. A password with a theoretical keyspace of around 10^28 combinations had, in practice, been reduced to around 32,000 realistic candidates. I changed my approach immediately.
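To make that concrete, here is a minimal sketch of how a predictable pattern collapses the search space. The base word, case and leetspeak substitutions, digit range, and symbol set are illustrative assumptions, not the actual script my colleague described:

```python
from itertools import product

# Illustrative sketch: enumerate likely variations of a predictable base word.
# The substitution rules and suffix choices below are assumptions for demonstration.
base = "NewZealand"
cases = {base, base.lower(), base.upper()}
leet = cases | {w.replace("a", "@").replace("e", "3") for w in cases}
digits = [str(n) for n in range(1000)]  # 0..999 numeric suffixes
symbols = ["", "!", "?", "#"]           # common trailing symbols

candidates = {w + d + s for w, d, s in product(leet, digits, symbols)}
print(f"{len(candidates):,} candidates to test")  # tens of thousands, not 10^28
```

Even a naive single-threaded hashing loop works through a candidate set of that size in well under a second. The pattern, not the character count, determines the real search space.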

I tell this story because it illustrates the central problem with most organisational security awareness programmes. Employees are not careless about security because they do not care; they are careless because nobody has ever shown them concretely how the habits they use in their personal digital lives translate into organisational risk. Policy documents and compliance training do not do this. Personal examples do.

This post covers the behaviours that most frequently create organisational exposure, drawn from personal experience rather than policy frameworks, and considers what effective awareness training looks like in the context of a full information security management programme.

 

Credential Behaviour Is an Organisational Risk

The pattern I used with ‘NewZealand123!’ is not unusual. Research consistently shows that the majority of people reuse passwords across multiple accounts, follow predictable patterns when creating them, and make minor variations rather than genuinely different passwords when forced to change them. In a personal context this is a manageable risk. In an organisational context it is a significant one.

If an employee reuses their work credentials on an external site that is breached, those credentials are now in circulation. According to IBM’s Cost of a Data Breach research, breaches remain undetected for an average of over 200 days. That is a substantial window during which a threat actor with valid credentials can move through an organisation’s systems without triggering obvious alerts.

What good looks like: password managers

Password managers solve the credential reuse problem at scale. They generate long, random, unique passwords for every account, eliminating the pattern problem entirely. A 14-character random password has approximately 10^28 possible combinations, not the reduced set that a predictable pattern creates.
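The arithmetic is easy to check: 14 random characters drawn from the roughly 95 printable ASCII characters give 95^14 combinations, which is on the order of 10^28:

```python
import math

charset = 95   # printable ASCII characters
length = 14
keyspace = charset ** length

print(f"95^14 = {keyspace:.3e}")                 # ~4.9e27, on the order of 10^28
print(f"~{math.log2(keyspace):.0f} bits of entropy")  # ~92 bits
```

At roughly 92 bits of entropy, even a brute-force rate of billions of guesses per second leaves the expected cracking time in the billions of years.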

They also provide a less obvious but equally important protection: they will not autofill credentials into a site that does not match the stored URL. This means a convincing phishing site that mimics a login page will not receive the employee’s credentials automatically, even if the employee does not notice the discrepancy.
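The matching logic is conceptually simple. A hedged sketch, assuming exact-or-subdomain matching against the stored site (real password managers apply more refined rules, for example public-suffix handling):

```python
from urllib.parse import urlparse

def should_autofill(current_url: str, stored_domain: str) -> bool:
    """Fill credentials only when the page's hostname is the stored
    domain or a subdomain of it."""
    host = urlparse(current_url).hostname or ""
    return host == stored_domain or host.endswith("." + stored_domain)

print(should_autofill("https://login.example.com/signin", "example.com"))    # True
print(should_autofill("https://example.com.evil.io/signin", "example.com"))  # False
```

The second case is the one that matters: a lookalike domain that fools a human reader fails a literal string comparison, so the manager simply offers nothing to fill.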

This approach is now aligned with current NIST guidance. NIST SP 800-63B revised the standard recommendations on password management significantly, moving away from mandatory periodic password changes, complexity rules that produce predictable patterns, and short minimum lengths. NIST now recommends longer passphrases over complex short passwords, screening credentials against known breach databases rather than forcing routine changes, and the use of password managers and MFA to reduce reliance on password memorability entirely. 
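Breach screening, at its simplest, is a set-membership check against hashes of known-compromised passwords. A minimal local sketch; the denylist here is a toy assumption, and production systems typically query a service such as Have I Been Pwned through its k-anonymity range API so the full hash never leaves the client:

```python
import hashlib

# Toy denylist standing in for a real breach corpus (assumption for illustration).
breached = {"NewZealand123!", "password", "qwerty123"}
breached_hashes = {hashlib.sha1(p.encode()).hexdigest().upper() for p in breached}

def is_breached(candidate: str) -> bool:
    """Screen a candidate password against the hashed breach list."""
    digest = hashlib.sha1(candidate.encode()).hexdigest().upper()
    return digest in breached_hashes

print(is_breached("NewZealand123!"))  # True: reject at set or change time
print(is_breached("g7#Qm2vL9xR4pZ"))  # False: no match in the corpus
```

Screening at the moment a password is set or changed is the substitute NIST recommends for routine expiry: the check fires when a credential is actually known to be compromised, rather than forcing everyone to rotate on a calendar.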

What good looks like: two-factor authentication

Two-factor authentication provides a second line of protection when credentials are compromised. Even if a threat actor obtains a valid username and password, they cannot authenticate without the second factor. Given that credential compromise through breached third-party sites is one of the most common initial access vectors, 2FA is one of the highest-value controls available for its implementation cost.
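The most common second factor, a time-based one-time password (TOTP, RFC 6238), is small enough to sketch with the Python standard library alone. This is a minimal illustration of the algorithm, not a production implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP with a counter derived from the current time."""
    t = int(at_time if at_time is not None else time.time())
    return hotp(key, t // step, digits)

# RFC 6238 test vector: shared key '12345678901234567890' at time 59
print(totp(b"12345678901234567890", at_time=59, digits=8))  # prints 94287082
```

The authenticator app and the server each hold the shared key and compute the same code for the current 30-second window, which is why a stolen password alone is not enough to authenticate.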

The IBM research cited above puts the average breach-to-detection window at over 200 days. 2FA does not shorten that window, but it significantly limits what a threat actor can do with credentials obtained through it.

The practical implication for IT departments is significant. Forced password expiry policies, typically every 30, 60, or 90 days, generate a steady volume of helpdesk reset requests. Each of those requests is a potential social engineering attack vector: a caller claiming to be an employee, locked out, needing urgent access restored. The more routine and frequent the reset process, the harder it becomes for helpdesk staff to apply appropriate scrutiny to a request that follows the expected pattern. Reducing forced resets through the adoption of password managers and MFA does not just reduce administrative burden; it removes a category of social engineering opportunity that periodic reset policies quietly create.

Phishing Succeeds Because It Bypasses Deliberate Thinking

Phishing attacks are designed to trigger automatic rather than deliberate responses. A sense of urgency, a familiar name, an unexpected invoice, a compelling offer. The goal is to get the recipient to act before they think, to click the link, open the attachment, or enter credentials before the cognitive process that would flag the anomaly has had time to engage.

I received an email from what appeared to be a former project contact, subject line ‘Project Follow-up’, asking me to view essential feedback on a shared document. The email was professionally worded and the contact was real. What made me pause was the context: the project had concluded months earlier and all feedback had been exchanged. I hovered over the link. The URL resolved to a server in Thailand with no connection to the project. I contacted my former colleague directly. They had not sent the email.

That pause, the moment of stopping to think about whether the email made sense rather than just what the email was asking, was the control. No technology was involved. No specialist knowledge was required. Just the habit of asking whether an email was expected and whether the action it requested made sense in context.

What good looks like: building the pause habit

Effective phishing awareness training does not try to teach employees to identify sophisticated phishing attempts; most employees cannot, and most attacks do not need to be sophisticated to succeed. It teaches them to pause before acting on any email that requests a click, an attachment open, or a credential entry, and to ask three questions:

• Was this email expected?
• Does the action it requests make sense given the stated context?
• If in doubt, can I verify through a separate channel before acting?

Phishing simulation exercises that expose employees to realistic attempts and then debrief on what the indicators were build this habit more effectively than policy documents. The goal is not to make employees suspicious of everything — it is to make the pause automatic.

 

Social Media Oversharing Creates Organisational Exposure

The connection between personal social media behaviour and organisational security risk is underappreciated. The information employees share about their roles, their colleagues, their projects, and their workplace creates a detailed and freely accessible profile that social engineering attacks exploit directly.

A threat actor preparing a targeted attack on an organisation will typically spend time on LinkedIn, Twitter, and other platforms mapping its structure: who reports to whom, who works on which systems, which employees are likely to have elevated access, and which might be receptive to a convincing pretext. The more freely employees share professional information, the easier this mapping becomes.

This is not an argument for employees to remove themselves from professional networks. LinkedIn in particular has legitimate professional value. It is an argument for awareness that professional information shared publicly is visible to everyone, including those with malicious intent, and that some information, particularly around specific systems, projects, access levels, or security practices, warrants more caution than other kinds of professional sharing.

 

The Habit of Vigilance Transfers Between Personal and Professional Contexts

One of the most effective things security awareness training can do is build a general habit of noticing anomalies. I once caught a fraudulent charge on a personal credit card that I nearly ignored because the amount was small. That is precisely the tactic: small charges test whether the account is active and whether the owner is paying attention. Ignoring small anomalies because they seem insignificant is the behaviour fraudsters rely on.

The same habit applies in organisational contexts. Employees who have developed the practice of noticing small discrepancies in their personal financial lives are better equipped to notice anomalies in organisational systems: unexpected access requests, unfamiliar processes running on devices, small changes to financial or operational data that might indicate something is wrong.

Security awareness is not a set of rules to memorise. It is a set of habits. Habits formed in personal contexts transfer to professional ones when the underlying principle is understood, which is why effective awareness training connects to examples employees can recognise from their own experience.

 

Backup Behaviour: Personal Habits Predict Organisational Attitudes

Most people know they should back up their personal data. Most people do not do it consistently. The same pattern appears in organisational contexts, and the consequences are substantially more serious.

An employee who has never experienced the practical reality of losing data they did not back up has an abstract understanding of why backups matter. An employee who has lost a year of personal photographs to a failed hard drive has a concrete one. Security awareness programmes that connect backup discipline to personal experience tend to produce more reliable behaviour than those that present it purely as policy compliance.

In the context of ransomware, which encrypts organisational data and demands payment for its release, backup integrity is not a secondary concern. It is the primary recovery mechanism. Actual recovery time in a ransomware scenario depends almost entirely on the quality of backup practices in the period before the attack. Employees who understand backup discipline personally are more likely to follow organisational backup procedures reliably.

 

Remote Work and Network Security

The shift to remote and hybrid working has extended the organisational network into coffee shops, hotels, co-working spaces, and home environments. Employees connecting to organisational systems over public or inadequately secured Wi-Fi networks create exposure that perimeter security controls cannot address.

A useful way to explain this risk to non-technical employees is the analogy of sending a letter without an envelope: the content is visible to anyone who handles it in transit. A VPN, or Virtual Private Network, provides the envelope, encrypting traffic between the device and the destination so that interception on an insecure network does not expose the content.

During a period of travel, I connected to airport Wi-Fi and subsequently noticed unusual device behaviour consistent with a rogue hotspot, a fake access point designed to intercept traffic. My VPN was active throughout. The incident reinforced something that policy documents state but rarely make vivid: the risk on public networks is not theoretical.

For organisations, VPN deployment for remote workers is a baseline control. The complementary awareness requirement is that employees understand why the control exists and why bypassing it for convenience creates genuine exposure, not theoretical risk.

 

Security Awareness as Part of an ISMS

The behaviours described in this post, credential management, phishing awareness, appropriate professional sharing, backup discipline, and network security on remote connections, are not advanced topics. They are foundational. They are also the behaviours that most frequently appear as contributing factors in breach investigations.

Within an information security management system, security awareness training is not a compliance checkbox. It is a control. Its effectiveness depends on whether it changes actual behaviour, and behaviour change requires that employees understand the ‘why’ behind the requirement in terms that connect to their own experience, not just the ‘what’ as specified in a policy document.

Security awareness that starts from personal experience and builds toward organisational application is, in my experience, more effective than training that presents policy and expects compliance. The habits are the same. The context is different. Making that connection explicit is the job of the training.

If you are developing a security awareness programme as part of an ISMS implementation, or reviewing the effectiveness of existing training, feel free to get in touch.