Tag: Zero Trust

  • ZTNA vs. Traditional VPN: Why Zero Trust Network Access Has Become the Enterprise Standard

    ZTNA vs. Traditional VPN: Why Zero Trust Network Access Has Become the Enterprise Standard

    If your organization still routes remote employees through a legacy VPN to access internal resources, you are operating with a security model that was designed for a world that no longer exists. Traditional VPNs were built when corporate networks had clear perimeters, nearly all workloads lived on-premises, and most devices were company-issued and fully managed. None of those assumptions reliably hold anymore.

    Zero Trust Network Access (ZTNA) has emerged as the architectural response to this changed reality. By 2026, ZTNA has moved well past early adopter status — it is now a core requirement for most enterprise security frameworks, and a condition of cyber insurance policies from many carriers. Understanding how it differs from traditional VPN, and where the practical implementation challenges lie, is essential for any team responsible for remote access security.

    The Core Problem with Traditional VPN

    The fundamental design of a traditional VPN is perimeter-based: verify a user or device at the edge, then grant them access to the network segment they connect to. Once inside, lateral movement between systems is often relatively unrestricted, constrained mainly by whatever network segmentation and firewall rules have been manually configured over the years.

    This model has three structural weaknesses that become more serious as organizations modernize their infrastructure.

    First, the security model assumes the network perimeter is meaningful. In hybrid environments where workloads span on-premises data centers, Azure, AWS, and SaaS applications, there is no single perimeter to defend. Traffic between a remote employee and a cloud application often never touches the corporate network at all, yet a VPN-centric model routes it through the data center anyway, adding latency without adding meaningful protection.

    Second, VPN grants network-level access rather than application-level access. A compromised VPN credential does not just expose one application — it exposes whatever network segment the VPN configuration allows, which in poorly maintained environments can be quite broad. Ransomware operators and advanced persistent threat actors have made VPN lateral movement one of their primary techniques precisely because it works so reliably.

    Third, VPN concentrator infrastructure creates a chokepoint that does not scale gracefully to fully distributed workforces. The surge in remote work since 2020 exposed this limitation in stark terms. Organizations that suddenly needed to put every employee on VPN discovered that their hardware concentrators were not sized for that load, and that adding capacity takes time and capital.

    What Zero Trust Network Access Actually Means

    Zero Trust Network Access applies the zero trust principle — never trust, always verify — specifically to the problem of remote access. Instead of granting network-level access after a single point of authentication, ZTNA grants access to specific applications, services, or resources, for specific users and devices, based on continuous verification of identity, device health, and contextual signals.

    The practical mechanics differ depending on the implementation model, but most ZTNA architectures share several defining characteristics. Authentication and authorization happen before any connection to the resource is established, not after. The resource is not exposed to the public internet at all — instead, a ZTNA broker or proxy handles the connection, so the application’s IP address and port are never directly accessible. Every access decision is logged, and policies can enforce session duration limits, data loss prevention controls, and step-up authentication for sensitive operations.

    Device posture is a first-class input to the access decision. Before a connection is allowed, the ZTNA policy engine checks whether the device has an up-to-date operating system, active endpoint protection, disk encryption enabled, and any other configured requirements. A device that fails posture checks gets denied or redirected to a remediation workflow rather than connected to the resource.
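    The deny-or-remediate posture logic described above can be sketched in a few lines. This is an illustration only, not any vendor's policy engine; the posture fields and decision values are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical device posture snapshot, as might be reported by an
# endpoint agent before a connection is brokered.
@dataclass
class DevicePosture:
    os_patch_current: bool
    endpoint_protection_active: bool
    disk_encrypted: bool

def posture_decision(posture: DevicePosture) -> str:
    """Return 'allow' when every check passes, else 'remediate'.

    Real ZTNA policy engines evaluate many more signals (MDM state,
    certificate presence, geolocation); this only models the
    deny-or-redirect behavior described in the text.
    """
    checks = [
        posture.os_patch_current,
        posture.endpoint_protection_active,
        posture.disk_encrypted,
    ]
    return "allow" if all(checks) else "remediate"

healthy = DevicePosture(True, True, True)
stale = DevicePosture(os_patch_current=False,
                      endpoint_protection_active=True,
                      disk_encrypted=True)
print(posture_decision(healthy))  # allow
print(posture_decision(stale))    # remediate
```

    The key property is that a failed check never falls through to "allow": the device is routed to remediation, never silently connected.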

    Agent-Based vs. Agentless ZTNA

    ZTNA deployments fall into two broad patterns, and the right choice depends heavily on what you are protecting and who needs access.

    Agent-based ZTNA requires installing a client on the endpoint. The agent handles device posture assessment, establishes the encrypted tunnel to the ZTNA broker, and enforces policy locally. This model offers the richest device visibility and the strongest posture enforcement — you know exactly what is running on the endpoint. It is the right choice for managed corporate devices accessing sensitive internal applications.

    Agentless ZTNA delivers access through the browser, typically via a reverse proxy. No software needs to be installed on the endpoint. This model is suitable for third-party contractors, partners, and BYOD scenarios where installing an agent is impractical. The tradeoff is reduced device visibility — without an agent, the policy engine can assess far less about the endpoint’s security posture. Most enterprise ZTNA deployments use both models: agent-based for employees on corporate devices, agentless for third parties and unmanaged devices accessing lower-sensitivity resources.

    Performance and User Experience

    One of the frequently overlooked benefits of well-implemented ZTNA is improved performance for cloud-hosted applications. Traditional split-tunnel VPN configurations often route cloud application traffic through corporate infrastructure even though a direct path exists. ZTNA architectures using a cloud-delivered broker or a software-defined perimeter typically route traffic more directly, reducing round-trip latency for SaaS and cloud applications.

    Full-tunnel VPN, which routes all traffic through corporate infrastructure, almost always performs worse for cloud applications than ZTNA. The performance gap widens as users are geographically distant from the VPN concentrator and as the proportion of traffic destined for cloud services increases — both trends that have moved in one direction over the past five years.

    User experience is also meaningfully better when ZTNA is implemented well. Instead of connecting to VPN first as a prerequisite for everything, application access becomes native: open the application, authenticate if prompted, and you are in. For frequently used applications, the session stays active and reconnects silently in the background. This reduces the friction that leads employees to look for workarounds.

    Where Traditional VPN Still Makes Sense

    ZTNA is not a universal replacement for every VPN use case. There are scenarios where traditional VPN remains the appropriate tool.

    Site-to-site VPN for connecting fixed locations — branch offices, data centers, co-location facilities — remains a solid choice. ZTNA is primarily a remote access solution for individual users and devices; it does not replace the persistent encrypted tunnels between network locations that site-to-site VPN provides.

    Network-level access requirements also persist in some environments. Operational technology systems, legacy applications that rely on IP-based access controls, development environments where engineers need broad network visibility for troubleshooting — these scenarios can be harder to serve with application-level ZTNA policies and may still require network-level access solutions.

    The practical path for most organizations is therefore not to immediately rip out VPN infrastructure, but to identify the highest-risk access scenarios — privileged access to production systems, access to sensitive data stores, third-party contractor access — and address those with ZTNA first. Legacy VPN handles the remaining cases while the ZTNA coverage expands over time.

    Implementation Considerations

    A ZTNA deployment is a significant project, and several decisions made at the outset have long-term architectural consequences.

    Identity is the foundation. ZTNA’s access decisions depend on knowing exactly who is requesting access, which means your identity and access management infrastructure needs to be in good shape before you build access policies on top of it. Single sign-on with phishing-resistant multi-factor authentication is table stakes. If your identity infrastructure has orphaned accounts, stale service accounts, or inconsistent MFA enforcement, fix those problems first.

    Application inventory is the next prerequisite. You cannot write access policies for applications you have not catalogued. Organizations frequently discover more internal applications than they expected when they begin this exercise, including shadow IT applications that were deployed without formal IT involvement. The inventory process is a useful forcing function for that cleanup.

    Policy design requires a default-deny mindset that can feel unfamiliar at first. Every access grant is explicit and specific — a user gets access to this application, from this device health state, during these hours, at this sensitivity level. The upfront policy work is more intensive than configuring a VPN subnet, but the result is an access model where blast radius from a compromised credential is bounded by design rather than by luck.
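    The default-deny grant model can be sketched concretely. Everything here — user names, application names, the grant fields — is hypothetical; the point is only that access exists if and only if an explicit grant matches.

```python
from datetime import time

# Hypothetical explicit grants: each one names a user, an application,
# a required device health state, and an allowed time window.
GRANTS = [
    {"user": "alice", "app": "payroll", "device_state": "healthy",
     "window": (time(8, 0), time(18, 0))},
]

def is_allowed(user: str, app: str, device_state: str, at: time) -> bool:
    for g in GRANTS:
        start, end = g["window"]
        if (g["user"] == user and g["app"] == app
                and g["device_state"] == device_state
                and start <= at <= end):
            return True
    # Default deny: no matching grant means no access, by construction.
    return False

print(is_allowed("alice", "payroll", "healthy", time(9, 30)))    # True
print(is_allowed("alice", "payroll", "unhealthy", time(9, 30)))  # False
print(is_allowed("bob", "payroll", "healthy", time(9, 30)))      # False
```

    Contrast this with a VPN subnet grant, where the equivalent of `GRANTS` is "anyone who authenticated can reach anything routable" — the blast radius is bounded only by network topology.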

    Integration with existing security tooling — SIEM, EDR, identity providers, PAM solutions — is important for making ZTNA’s access logs actionable. The access telemetry ZTNA generates is a rich source of signal for detecting anomalous behavior, but only if it flows into your detection and response infrastructure.

    The Regulatory and Insurance Angle

    Zero trust architecture has moved from a security best practice to a regulatory and insurance expectation. NIST SP 800-207 defines a zero trust architecture framework that federal agencies are required to adopt. The DoD Zero Trust Strategy mandates zero trust implementation across defense systems by 2027. Commercial cyber insurance applications increasingly ask specifically about VPN MFA, network segmentation, and privileged access controls — all areas where ZTNA materially improves posture.

    For organizations in regulated industries — healthcare, financial services, critical infrastructure — the combination of PCI DSS 4.0’s tighter access control requirements and HIPAA’s ongoing security rule enforcement creates a compliance environment where ZTNA’s access logging and policy granularity are practical necessities, not optional enhancements.

    Choosing a ZTNA Platform

    The ZTNA vendor landscape in 2026 is mature and competitive. The major platforms — Zscaler Private Access, Cloudflare Access, Palo Alto Prisma Access, Microsoft Entra Private Access, and Cisco Secure Access — all offer core ZTNA capabilities, but differ meaningfully in integration depth with cloud platforms, the quality of their device posture integrations, their global point-of-presence coverage, and their pricing models.

    Microsoft Entra Private Access is worth specific mention for organizations already deep in the Microsoft 365 and Azure ecosystem. Its tight integration with Entra ID, Conditional Access policies, and Microsoft Defender for Endpoint means you can build access policies that incorporate rich identity and device signals from infrastructure you already operate, without adding a separate vendor relationship.

    Cloudflare Access offers a compelling option for organizations that want a globally distributed proxy infrastructure. Its zero-configuration DNS routing and broad browser-delivered agentless access make it particularly strong for third-party access scenarios.

    The evaluation criteria that matter most are: how well the platform integrates with your existing identity provider, what device platforms and MDM solutions it supports, how granular the access policies are, and how the logging integrates with your SIEM or XDR platform.

    The Bottom Line

    The VPN-to-ZTNA migration is not a technology switch; it is a security architecture shift. The underlying change is moving from trusting the network and verifying at the edge to verifying every access request and trusting nothing implicitly. That shift requires investment in identity infrastructure, application cataloguing, and policy design, but it produces a meaningfully stronger security posture and better user experience for a distributed workforce.

    Organizations that have not started this transition are not standing still — they are falling further behind the threat landscape and the regulatory expectations that increasingly assume zero trust as baseline. Starting with the highest-risk access scenarios, building out from there, and treating the VPN-to-ZTNA migration as a multi-year program rather than a one-time cutover is the realistic path to getting there without operational disruption.

  • How to Scope Browser-Based AI Agents Before They Become Internal Proxies

    How to Scope Browser-Based AI Agents Before They Become Internal Proxies


    Browser-based AI agents are getting good at navigating dashboards, filling forms, collecting data, and stitching together multi-step work across web apps. That makes them useful for operations teams that want faster workflows without building every integration from scratch. It also creates a risk that many teams underestimate: the browser session can become a soft internal proxy for systems the model should never broadly traverse.

    The problem is not that browser agents exist. The problem is approving them as if they are simple productivity features instead of networked automation workers with broad visibility. Once an agent can authenticate into internal apps, follow links, download files, and move between tabs, it can cross trust boundaries that were originally designed for humans acting with context and restraint.

    Start With Reachability, Not Task Convenience

    Browser agent reviews often begin with an attractive use case. Someone wants the agent to collect metrics from a dashboard, check a backlog, pull a few details from a ticketing system, and summarize the result in one step. That sounds efficient, but the real review should begin one layer lower.

    What matters first is where the agent can go once the browser session is established. If it can reach admin portals, internal tools, shared document systems, and customer-facing consoles from the same authenticated environment, then the browser is effectively acting as a movement layer between systems. The task may sound narrow while the reachable surface is much wider.

    Separate Observation From Action

    A common design mistake is giving the same agent permission to inspect systems and make changes in them. Read access, workflow preparation, and final action execution should not be bundled by default. When they are combined, a prompt mistake or weak instruction can turn a harmless data-gathering flow into an unintended production change.

    A stronger pattern is to let the browser agent observe state and prepare draft output, but require a separate approval point before anything is submitted, closed, deleted, or provisioned. This keeps the time-saving part of automation while preserving a hard boundary around consequential actions.
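    A minimal sketch of that boundary, with entirely hypothetical names, might look like the following: the agent can stage an action, but execution is gated on a separate approval call the agent itself never makes.

```python
# Sketch of the draft-then-approve pattern: the agent gathers state and
# stages a change; a human approval step unlocks execution.
class PendingAction:
    def __init__(self, description: str):
        self.description = description
        self.approved = False

    def approve(self) -> None:
        # Called from a human-facing review surface, never by the agent.
        self.approved = True

def execute(action: PendingAction) -> str:
    if not action.approved:
        raise PermissionError("action requires human approval")
    return f"executed: {action.description}"

draft = PendingAction("close stale ticket")
try:
    execute(draft)        # the agent alone cannot execute its own draft
except PermissionError:
    pass
draft.approve()           # separate, explicit approval point
print(execute(draft))     # executed: close stale ticket
```

    The design choice worth noting is that the gate lives in the execution path, not in the prompt: a weak instruction can produce a bad draft, but it cannot produce an unreviewed production change.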

    Shrink the Session Scope on Purpose

    Teams usually spend time thinking about prompts, but the browser session itself deserves equally careful design. If the session has persistent cookies, broad single sign-on access, and visibility into multiple internal tools at once, the agent inherits a large amount of organizational reach even when the requested task is small.

    That is why session minimization matters. Use dedicated low-privilege accounts where possible, narrow which apps are reachable in that context, and avoid running the browser inside a network zone that sees more than the workflow actually needs. A well-scoped session reduces both accidental exposure and the blast radius of bad instructions.

    Treat Downloads and Page Content as Sensitive Output Paths

    Browser agents do not need a formal API connection to move sensitive information. A page render, exported CSV, downloaded PDF, copied table, or internal search result can all become output that gets summarized, logged, or passed into another tool. If those outputs are not controlled, the browser becomes a quiet data extraction layer.

    This is why reviewers should ask practical questions about output handling. Can the agent download files? Can it open internal documents? Are screenshots retained? Do logs capture raw page content? Can the workflow pass retrieved text into another model or external service? These details often matter more than the headline feature list.

    Keep Environment Boundaries Intact

    Many teams pilot browser agents in test or sandbox systems and then assume the same operating model is safe for production. That shortcut is risky because the production browser session usually has richer data, stronger connected workflows, and fewer safe failure modes.

    Development, test, and production browser agents should be treated as distinct trust decisions with distinct credentials, allowlists, and monitoring expectations. If a team cannot explain why an agent truly needs production browser access, that is a sign the workflow should stay outside production until the controls are tighter.

    Add Guardrails That Match Real Browser Behavior

    Governance controls often focus on API scopes, but browser agents need controls that fit browser behavior. Navigation allowlists, download restrictions, time-boxed sessions, visible audit logs, and explicit human confirmation before destructive clicks are more relevant than generic policy language.

    A short control checklist can make reviews much stronger:

    • Limit which domains and paths the agent may visit during a run.
    • Require a fresh, bounded session instead of long-lived persistent browsing state.
    • Block or tightly review file downloads and uploads.
    • Preserve action logs that show what page was opened and what control was used.
    • Put high-impact actions behind a separate approval step.
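    As a toy illustration of the checklist above — all domains, paths, and limits are invented for the example — a navigation guard wrapped around the agent's browsing loop could look like this:

```python
import time
from urllib.parse import urlparse

# Hypothetical allowlist: hostnames mapped to the path prefixes the
# agent may visit during a run.
ALLOWED = {
    "dashboard.internal.example": ("/metrics",),
    "tickets.internal.example": ("/queue",),
}
SESSION_TTL_SECONDS = 900  # fresh, time-boxed session per run

class BrowserGuard:
    def __init__(self):
        self.started = time.monotonic()
        self.log = []  # action log: (url, decision) per navigation

    def check_navigation(self, url: str) -> bool:
        if time.monotonic() - self.started > SESSION_TTL_SECONDS:
            self.log.append((url, "blocked: session expired"))
            return False
        parts = urlparse(url)
        prefixes = ALLOWED.get(parts.hostname, ())
        ok = any(parts.path.startswith(p) for p in prefixes)
        self.log.append((url, "allowed" if ok else "blocked"))
        return ok

    def check_download(self, url: str) -> bool:
        # Downloads blocked by default; exceptions go through review.
        self.log.append((url, "download blocked"))
        return False

guard = BrowserGuard()
print(guard.check_navigation("https://dashboard.internal.example/metrics/daily"))  # True
print(guard.check_navigation("https://admin.internal.example/users"))              # False
```

    Nothing here is sophisticated, and that is the point: domain allowlists, download blocks, session TTLs, and an action log are simple mechanisms, but they map directly onto how a browser agent actually moves.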

    Those guardrails are useful because they match the way browser agents actually move through systems. Good governance becomes concrete when it reflects the tool’s operating surface instead of relying on broad statements about responsible AI.

    Final Takeaway

    Browser-based AI agents can save real time, especially in environments where APIs are inconsistent or missing. But once they can authenticate across internal apps, they stop being simple assistants and start looking a lot like controlled proxy workers.

    The safest approach is to approve them with the same seriousness you would apply to any system that can traverse trust boundaries, observe internal state, and initiate actions. Scope the reachable surface, separate read from write behavior, constrain session design, and verify output paths before the agent becomes normal infrastructure.

  • How to Design Service-to-Service Authentication in Azure Without Creating Permanent Trust

    How to Design Service-to-Service Authentication in Azure Without Creating Permanent Trust


    Service-to-service authentication sounds like an implementation detail until it becomes the reason a small compromise turns into a large one. In Azure, teams often connect apps, functions, automation jobs, and data services under delivery pressure, then promise themselves they will clean up the identity model later. Later usually means a pile of permanent secrets, overpowered service principals, and trust relationships nobody wants to touch.

    The better approach is to design machine identity the same way mature teams design human access: start narrow, avoid permanent standing privilege, and make every trust decision easy to explain. Azure gives teams the building blocks for this, but the outcome still depends on architecture choices, not just feature checkboxes.

    Start With Managed Identity Before You Reach for Secrets

    If an Azure-hosted workload needs to call another Azure service, managed identity should usually be the default starting point. It removes the need to manually create, distribute, rotate, and protect a client secret in the application layer. That matters because most service-to-service failures are not theoretical cryptography problems. They are operational problems caused by credentials that live too long and spread too far.

    Managed identities are also easier to reason about during reviews. A team can inspect which workload owns the identity, which roles it has, and where those roles are assigned. That visibility is much harder to maintain when the environment is stitched together with secret values copied across pipelines, app settings, and documentation pages.

    Treat Role Scope as Part of the Authentication Design

    Authentication and authorization are tightly connected in machine-to-machine flows. A clean token exchange does not help much if the identity behind it has contributor rights across an entire subscription when it only needs to read one queue or write to one storage container. In practice, many teams solve connectivity first and least privilege later, which is how temporary shortcuts become permanent risk.

    Designing this well means scoping roles at the smallest practical boundary, using purpose-built roles when they exist, and resisting the urge to reuse one identity for multiple unrelated services. A shared service principal might look efficient in a diagram, but it makes blast radius, auditability, and future cleanup much worse.
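    Both points — managed identity as the default, and role scope as part of the design — can be sketched with the Azure CLI. This is an illustration under assumed names: the function app, resource group, storage account, and queue are placeholders, and the principal ID comes from the first command's output.

```shell
# Enable a system-assigned managed identity on an Azure Function app.
# No client secret is created, distributed, or rotated by the team.
az functionapp identity assign \
  --name my-func --resource-group my-rg

# Grant that identity a narrow, purpose-built role at the smallest
# practical scope: read access to one storage queue, not Contributor
# on the subscription.
az role assignment create \
  --assignee <principal-id-from-previous-output> \
  --role "Storage Queue Data Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorage/queueServices/default/queues/orders"
```

    The review question this setup makes easy to answer is the one that matters: which workload owns this identity, and exactly what can it touch?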

    Avoid Permanent Trust Between Tiers

    One of the easiest traps in Azure is turning every dependency into a standing trust relationship. An API trusts a function app forever. The function app trusts Key Vault forever. A deployment pipeline trusts production resources forever. None of those decisions feel dramatic when they are made one at a time, but together they create a system where compromise in one tier becomes a passport into the next one.

    A healthier pattern is to use workload identity only where the call is genuinely needed, keep permissions resource-specific, and separate runtime access from deployment access. Build pipelines should not automatically inherit the same long-term trust that production workloads use at runtime. Those are different operational contexts and should be modeled as different identities.
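    One way to model deployment access as its own identity, sketched here with GitHub Actions OIDC federation as an example (the app object ID, repository, and branch are placeholders): the pipeline exchanges short-lived tokens at run time instead of holding a standing client secret that mirrors production's runtime trust.

```shell
# Sketch only: a federated credential lets a specific pipeline context
# (this repo, this branch) obtain short-lived tokens for the deployment
# identity. No long-lived secret exists to leak or forget.
az ad app federated-credential create \
  --id <app-object-id> \
  --parameters '{
    "name": "deploy-main",
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "repo:my-org/my-repo:ref:refs/heads/main",
    "audiences": ["api://AzureADTokenExchange"]
  }'
```

    Because the subject claim pins the trust to one repository and branch, the deployment identity cannot be borrowed by other pipelines — a separation that a shared, long-lived service principal secret does not give you.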

    Use Key Vault to Reduce Secret Exposure, Not to Justify More Secrets

    Key Vault is useful, but it is not a license to keep designing around static secrets. Sometimes a secret is still necessary, especially when talking to external systems that do not support stronger identity patterns. Even then, the design goal should be to contain the secret, rotate it, monitor its usage, and avoid replicating it across multiple applications and environments.

    Teams get into trouble when “it is in Key Vault” becomes the end of the conversation. A secret in Key Vault can still be overexposed if too many identities can read it, if access is broader than the workload requires, or if the same credential quietly unlocks multiple systems.
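    Containment in practice means scoping who can read each secret, not just putting it in the vault. With an RBAC-enabled Key Vault, access can be granted at the individual secret, as sketched below (vault, secret, and principal names are placeholders):

```shell
# Grant one workload identity read access to one secret, not to the
# whole vault.
az role assignment create \
  --assignee <workload-principal-id> \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.KeyVault/vaults/my-vault/secrets/partner-api-key"

# Periodically list who holds secret-read access on the vault, to spot
# the overexposure described above.
az role assignment list \
  --scope "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.KeyVault/vaults/my-vault" \
  --query "[?roleDefinitionName=='Key Vault Secrets User'].principalName"
```

    If that second command returns a long list of identities for a secret that one workload needs, "it is in Key Vault" has stopped being a meaningful control.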

    Make Machine Identity Reviewable by Humans

    Good service-to-service authentication design should survive an audit without needing tribal knowledge. Someone new to the environment should be able to answer a few basic questions: which workload owns this identity, what resources can it reach, why does it need that access, and how would the team revoke or replace it safely? If the answers live only in one engineer’s head, the design is already weaker than it looks.

    This is where naming standards, tagging, role assignment hygiene, and architecture notes matter. They are not paperwork for its own sake. They are what make machine trust understandable enough to maintain over time instead of slowly turning into inherited risk.

    Final Takeaway

    In Azure, service-to-service authentication should be designed to expire cleanly, scale narrowly, and reveal its intent clearly. Managed identity, tight role scope, separated deployment and runtime trust, and disciplined secret handling all push in that direction. The real goal is not just getting one app to talk to another. It is preventing that connection from becoming a permanent, invisible trust path that nobody remembers how to challenge.