Are We Getting Zero Trust Wrong?
Jon Pertwee · 2024 (updated 2026)
During the research for my MSc thesis in IT Security Management at Arden University, I reached a conclusion that I found difficult to ignore: the way Zero Trust is typically presented (in vendor documentation, practitioner guides, and even most academic literature) is misleading. Not maliciously so, but systematically. The emphasis is almost entirely on the wrong part of the problem.
My thesis, completed in 2024 and resulting in a Distinction, examined Zero Trust implementation across a significant body of literature. I found that approximately 3% of academic sources and 6% of practitioner and vendor sources devoted meaningful attention to the precursor work that makes a Zero Trust implementation viable. The remaining 94-97% focused on technical controls: architectures, tooling, access policies, and microsegmentation strategies.
The problem is that technical controls applied without the precursor work are at best incomplete and at worst actively misleading. An organisation that has implemented a sophisticated Zero Trust architecture but does not know which devices are on its network, where its sensitive data resides, or which access paths exist to it has not reduced its risk; it just believes it has.
Zero Trust is not a technical solution. It is a strategy. The technology is the last step, not the first.
What Zero Trust Actually Means
The foundational principle of Zero Trust is simple: assume breach. Do not assume that anything inside the network perimeter is trustworthy. Treat every access request as potentially hostile unless it can be verified. This is a direct response to the obsolescence of the perimeter security model, which granted trust based on network location and breaks down entirely in environments with remote workers, cloud services, BYOD devices, and third-party access.
The operational implication of assuming breach is that you must know, precisely, what you are protecting, who legitimately needs access to it, and what the threats and risks associated with that access are. Only after those questions are answered does the question of which technical controls to implement become meaningful.
This sequence has a name in Zero Trust frameworks. John Kindervag, who developed the Zero Trust model, introduced the concept of the protect surface: the minimal set of critical data, assets, applications, and services that actually need protecting. Defining the protect surface requires discovery. You can only protect what you know about.
The Discovery Problem
My research consistently identified a gap between the assumed and the actual state of asset visibility in most enterprises. The figures are sobering: research suggests that around 70% of breaches involve unmanaged devices, including IoT devices, BYOD endpoints, shadow IT, and maliciously installed hardware. More striking still, over 90% of enterprises have unknown or unmanaged devices on their networks at any given time.
This is not a fringe problem. It is the normal state of most enterprise networks. And it means that an organisation that has not conducted thorough discovery before implementing Zero Trust controls has almost certainly left significant portions of the surface it actually needs to protect outside its defined protect surface: unmapped, and therefore unprotected.
Shadow IT illustrates this particularly well. Shadow IT refers to applications, services, and devices used within the organisation that sit outside IT department oversight: personal cloud storage used for business documents, departmental SaaS tools procured without IT involvement, and consumer applications handling data that belongs in governed systems. Shadow IT typically arises when employees find faster or more convenient ways to accomplish tasks that IT-sanctioned tools make difficult. This does not make it benign. It frequently contains business intellectual property or other sensitive data, introduced outside any change process, with no governance, no security review, and no visibility to the teams responsible for protecting it.
An attacker who identifies a shadow IT application containing sensitive data, and who can access it through credentials obtained elsewhere, does not need to breach the Zero Trust architecture at all. They simply walk around it, through a door that was never included in the protect surface because nobody knew it was there.
The Correct Sequence for Zero Trust Implementation
A Zero Trust strategy implemented in the correct sequence looks like this:
Step 1: Discovery
Find everything. Devices, applications, services, data stores, access paths, and dependencies. This is not a one-time exercise; the network state changes continuously. Discovery needs to be ongoing and the asset inventory needs to be maintained. The output of this step is a complete picture of what exists, not what is supposed to exist.
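The core of ongoing discovery is a repeated diff between what is observed on the network and what the inventory says should be there. A minimal sketch, using entirely illustrative device names and a hypothetical inventory structure (a real implementation would feed in results from network scans, EDR agents, cloud APIs, and DHCP/DNS logs):

```python
# Sketch: diffing discovered network assets against the managed inventory.
# All names and data here are illustrative, not from any real environment.

managed_inventory = {
    "laptop-0142": {"owner": "finance", "managed": True},
    "db-prod-01": {"owner": "platform", "managed": True},
}

# What discovery actually observed on the network this cycle.
discovered = {"laptop-0142", "db-prod-01", "printer-3f", "raspberrypi-9a"}

unknown = discovered - managed_inventory.keys()   # on the network, not in inventory
missing = managed_inventory.keys() - discovered   # in inventory, not observed

print("Unknown devices (investigate):", sorted(unknown))
print("Inventoried but not observed:", sorted(missing))
```

The "unknown" set is the interesting one: it is exactly the population of unmanaged devices that the research figures above describe, and it only becomes visible when discovery runs continuously rather than once.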
Update, 2026: The structured approach to discovery described above maps directly to the layered dependency framework I developed for disaster recovery and business continuity planning, subsequently formalised in An IT Manager’s Guide to Disaster Recovery: A Layered Approach (2025) and in a co-authored paper currently under peer review.
The framework organises dependency mapping across ten layers, from physical infrastructure through to organisational governance. In doing so, it traces the pathways through an organisation’s infrastructure in a manner analogous to how an attacker would use a kill chain: systematically following connections between systems, processes, and data until the full scope of what is present, including shadow IT and disparate systems not previously visible to IT governance, becomes clear.
This insight is directly applicable to Zero Trust scoping. An organisation that has completed a thorough layered dependency mapping exercise will find not only that much of the foundational discovery work for defining a protect surface has already been done, but that the surface it defines will be meaningfully more complete than one produced by conventional asset inventory methods. The framework finds what organisations did not know they had, which is precisely what Zero Trust discovery is supposed to do.
Step 2: Define the protect surface
From the complete asset inventory, identify the subset that actually requires protection: the data, applications, assets, and services that house sensitive or critical information, or that would result in significant operational or regulatory consequences in the event of compromise. This is the protect surface. It is deliberately smaller than the attack surface, and defining it precisely is what makes proportionate, effective controls possible.
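Deriving the protect surface is a filtering exercise over the complete inventory. A minimal sketch, assuming hypothetical data-classification labels and criticality ratings (the field names and thresholds are illustrative, not a standard taxonomy):

```python
# Sketch: deriving a protect surface from a complete asset inventory.
# Classification labels and criticality values are illustrative assumptions.

assets = [
    {"name": "crm-db", "data_class": "pii", "criticality": "high"},
    {"name": "wiki", "data_class": "internal", "criticality": "low"},
    {"name": "payroll-app", "data_class": "pii", "criticality": "high"},
    {"name": "build-server", "data_class": "internal", "criticality": "medium"},
]

SENSITIVE_CLASSES = {"pii", "financial", "ip"}

# Keep only assets holding sensitive data or whose loss is high-impact.
protect_surface = [
    a for a in assets
    if a["data_class"] in SENSITIVE_CLASSES or a["criticality"] == "high"
]

print([a["name"] for a in protect_surface])
```

Note that the filter only works if the inventory it runs over is complete; an asset that discovery never found cannot appear in the protect surface, which is the whole argument of the previous section.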
Step 3: Least privilege
For each element of the protect surface, define who legitimately needs access to it, in what context, and at what level. Remove access that cannot be justified. Apply the principle of least privilege: every user, device, and service should have access only to what it needs to perform its specific function, and nothing more. This step is where identity and access management becomes meaningful, because it requires knowing both what is being protected and who should be able to reach it.
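In practice, applying least privilege means comparing the access that currently exists against the access that can be justified, and revoking the difference. A minimal sketch with hypothetical roles and assets (in a real environment the revocations would be executed through the organisation's IAM tooling, not printed):

```python
# Sketch: pruning an access matrix down to justified need.
# Asset names, roles, and justifications are hypothetical.

current_access = {
    "crm-db": {"sales", "marketing", "interns", "it-ops"},
    "payroll-app": {"hr", "finance", "it-ops", "all-staff"},
}

# Access that survives a "who legitimately needs this?" review.
justified = {
    "crm-db": {"sales", "it-ops"},
    "payroll-app": {"hr", "finance"},
}

revocations = []
for asset, roles in sorted(current_access.items()):
    for role in sorted(roles - justified.get(asset, set())):
        revocations.append((asset, role))
        print(f"revoke {role} on {asset}")  # in practice: call the IAM system
```

The set difference is trivial; the hard work is producing the `justified` mapping, which is why this step depends on the protect surface already being defined.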
Step 4: Threat modelling and risk assessment
Before selecting technical controls, model the threats. Who would want access to this data? Through what vectors could they obtain it? What is the likelihood and potential impact of each threat scenario? Risk assessment at this stage prioritises the implementation journey, ensuring that the highest-risk elements of the protect surface receive attention first and that technical controls are proportionate to actual risk rather than applied uniformly.
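The prioritisation this step produces can be sketched as a simple likelihood-times-impact ranking. The scenarios and 1-5 scores below are illustrative placeholders for a real assessment, not findings:

```python
# Sketch: prioritising protect-surface elements by likelihood x impact.
# Scenarios and scores (1-5 scales) are illustrative placeholders.

scenarios = [
    {"asset": "payroll-app", "threat": "credential stuffing", "likelihood": 4, "impact": 5},
    {"asset": "crm-db", "threat": "insider exfiltration", "likelihood": 2, "impact": 4},
    {"asset": "crm-db", "threat": "sql injection", "likelihood": 3, "impact": 5},
]

for s in scenarios:
    s["risk"] = s["likelihood"] * s["impact"]

# Highest-risk scenarios drive the order of control implementation.
ranked = sorted(scenarios, key=lambda s: s["risk"], reverse=True)
for s in ranked:
    print(f'{s["risk"]:>2}  {s["asset"]}: {s["threat"]}')
```

A multiplicative score is the crudest workable model; the point is not the arithmetic but that the ranking, whatever method produces it, exists before any control is selected.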
Step 5: Technical controls
Only at this stage does the selection and implementation of technical controls become the primary focus: micro-segmentation, multi-factor authentication, continuous verification, endpoint detection and response, privileged access management. These are the tools that most Zero Trust literature leads with. They are genuinely important. They are also only effective when the four preceding steps have been completed, because the controls can only be configured meaningfully when the protect surface is defined, access requirements are understood, and threats are prioritised.
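Whatever products implement them, these controls ultimately reduce to a deny-by-default access decision: grant only when every verifiable condition holds. A minimal sketch of that decision shape, with illustrative attribute names rather than any specific product's API:

```python
# Sketch: a deny-by-default access decision, the shape most Zero Trust
# enforcement reduces to. Attribute and field names are illustrative.

def evaluate(request: dict, policy: dict) -> bool:
    """Grant only when every verifiable condition holds; otherwise deny."""
    return (
        request.get("user") in policy["allowed_users"]
        and request.get("device_managed") is True
        and request.get("mfa_verified") is True
        and request.get("resource") == policy["resource"]
    )

policy = {"resource": "payroll-app", "allowed_users": {"hr-lead", "payroll-admin"}}

granted = evaluate(
    {"user": "hr-lead", "device_managed": True, "mfa_verified": True,
     "resource": "payroll-app"},
    policy,
)
denied = evaluate(
    {"user": "hr-lead", "device_managed": False, "mfa_verified": True,
     "resource": "payroll-app"},
    policy,
)
print(granted, denied)  # the unmanaged device is refused despite valid identity
```

Notice that every field the policy checks presupposes the earlier steps: `allowed_users` comes from the least-privilege review, `resource` from the protect surface, and `device_managed` from discovery. The control is only as good as those inputs.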
A Connection to Layered Dependency Thinking
Readers familiar with my work on disaster recovery will notice a structural parallel here, as I mentioned earlier. The layered dependency mapping framework developed for DR planning is grounded in the same core principle: you cannot protect or recover what you have not mapped. In DR, the failure to map dependencies between systems, processes, and organisational functions leads to recovery plans that do not hold under real conditions, because they were built on an incomplete picture of what the organisation actually depends on.
In Zero Trust, the failure to map assets, data locations, and access paths leads to a protect surface that has holes in it by definition. The technical architecture may be sound; the scope to which it is applied is not.
Both disciplines, Zero Trust strategy and disaster recovery planning, are fundamentally about knowing what you have before deciding how to protect or recover it. The discovery and mapping work is not preliminary to the real work. It is the real work. The technical controls and recovery procedures that follow are only as good as the foundations they are built on.
What This Means in Practice
For an organisation beginning a Zero Trust journey, the most important early investment is not in architecture or tooling. It is in understanding the current state: what is on the network, where sensitive data actually resides, who has access to what, and what the realistic threat profile is. That understanding is what a Zero Trust implementation is designed to protect. Without it, the implementation is protecting an imagined network rather than the real one.
The literature will not tell you this clearly, because the literature is largely written by vendors with architectures to sell and analysts with frameworks to promote. The precursor work is unglamorous and lacks a product. It is, however, the part that determines whether the rest works.
If your organisation is planning a Zero Trust implementation, or reviewing one that has not delivered the expected risk reduction, the starting point is almost always the same: go back to discovery, and be honest about what you actually know versus what you assume. Feel free to get in touch to discuss.