Cybersecurity Glossary
Security terminology explained without vendor framing. Written for leaders who need to understand what their teams are talking about and evaluate what they are being sold.
A
Attack Surface
The totality of entry points, interfaces, and data pathways that an attacker could use to access an organization’s systems, data, or operations. The attack surface includes network perimeters, cloud environments, third-party integrations, user endpoints, and human factors such as phishing susceptibility. A larger attack surface means more potential pathways for compromise.
What security leaders need to know
- The attack surface expands every time the organization adds a new application, cloud service, vendor integration, or remote access point. Growth without corresponding security review is a reliable source of unmanaged exposure.
- Reducing attack surface is as valuable as adding defensive controls. Removing unnecessary access points, retiring unused applications, and revoking dormant accounts eliminates entire categories of risk without additional investment.
- Shadow IT — applications adopted without IT approval — typically represents the fastest-growing and least-visible portion of the attack surface.
What should boards ask about attack surface?
The most useful board-level question is whether the organization knows the boundaries of what it is protecting. An honest answer requires visibility into cloud usage, third-party connections, and remote access arrangements — not just the perimeter network. Boards should expect the security function to report not just on threats detected but on whether the scope of monitoring reflects the actual scope of the organization’s digital environment.
Does reducing attack surface require significant investment?
Not always. Many attack surface reduction actions involve removing or consolidating rather than buying: retiring unused applications, revoking unnecessary access privileges, ending vendor integrations that are no longer required, and disabling services that are running but unneeded. These actions cost organizational effort, not necessarily budget.
Asset Classification
The process of identifying, inventorying, and categorizing an organization’s information assets — systems, data, applications, and infrastructure — by their business value, sensitivity, and criticality. Asset classification is the prerequisite for proportionate security: you cannot apply appropriate controls to assets you have not identified and understood.
What security leaders need to know
- Asset inventories that are built once and not maintained become inaccurate quickly. In environments with active cloud adoption, the inventory can be outdated within weeks of completion.
- Asset classification determines control prioritization. A comprehensive inventory of low-sensitivity assets is less valuable than an accurate inventory of critical assets — know where your most important data and systems are before trying to catalogue everything.
- Unmanaged assets — those outside the formal inventory — are frequently where attackers find their easiest entry points.
What does a useful asset inventory actually contain?
At minimum: what the asset is, who owns it, what data it holds or processes, what systems it connects to, and what classification category applies. Tools can automate discovery and inventory population, but the classification decisions — which assets are critical, which are sensitive, which are standard — require business input, not just technical scanning.
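To make the minimum fields concrete, here is an illustrative sketch in Python; the field names and example values are assumptions for illustration, not drawn from any standard inventory schema.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """One entry in a minimum-viable asset inventory (illustrative fields)."""
    name: str                     # what the asset is
    owner: str                    # who is accountable for it
    data_held: list[str]          # what data it holds or processes
    connected_systems: list[str]  # what systems it connects to
    classification: str           # business-decided category, e.g. "confidential"

# Example entry: discovery tooling can populate most fields,
# but the classification is a business decision.
crm = Asset(
    name="CRM platform",
    owner="Head of Sales Operations",
    data_held=["customer contact details", "contract values"],
    connected_systems=["billing system", "marketing automation"],
    classification="confidential",
)
```

The point of the structure is the ownership and connection fields: an inventory that records only what an asset is, without who owns it and what it touches, cannot support either incident response or classification decisions.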
B
Board Security Reporting
The structured communication of security posture, risk, and investment adequacy from the security function to the board or audit committee. Effective board security reporting translates technical indicators into business risk language, provides the information the board needs to exercise governance, and surfaces decisions that require board-level authority or awareness.
What security leaders need to know
- Board members are governance decision-makers, not security technicians. Reports that lead with technical metrics without translating them to business consequence do not enable governance.
- A reporting framework should be consistent across periods so the board can track progress, not just receive disconnected snapshots.
- The most important element of a board security report is not what happened last quarter — it is what decision, if any, is being asked of the board today.
What should a board security report always contain?
At minimum: a clear statement of the organization’s most significant current exposure in business terms, an assessment of whether the program is adequately resourced to address it, and a progress update against previously committed priorities. If no decision is being requested, the report should still be explicit about that — not ambiguous.
How often should security be reported to the board?
Quarterly is appropriate for formal reporting at most organizations, with ad hoc briefings when a significant incident or regulatory development requires board awareness. The cadence matters less than the consistency — boards that receive security updates on an irregular or reactive basis cannot exercise meaningful governance.
Breach Notification
The legal or regulatory obligation to notify affected individuals, regulatory authorities, or both, within a defined timeframe after discovering that personal data or systems have been compromised. Requirements vary by framework: GDPR requires notification to supervisory authorities within 72 hours of becoming aware; India’s DPDPA sets timelines via rules; PCI-DSS requires immediate notification to card networks; sector-specific frameworks add further obligations.
What security leaders need to know
- The notification clock typically starts from when the organization became aware of the breach, not when the investigation concludes. You often have to notify while you still have incomplete information about scope and impact.
- GDPR requires notification to individuals when the breach is likely to result in high risk to their rights and freedoms — the threshold is risk, not certainty.
- Organizations that have never tested their breach notification process consistently fail the required timelines when a real incident occurs.
What does “becoming aware” mean for notification purposes?
Under most frameworks, awareness does not require certainty. When an organization has reasonable grounds to believe a breach has occurred, the obligation begins. Organizations that delay investigating potential breaches to avoid triggering notification are taking regulatory risk, not reducing it.
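Because the clock runs from awareness rather than from the end of the investigation, the deadline is worth computing and tracking explicitly in incident tooling. A minimal illustrative sketch (the 72-hour window reflects GDPR; other frameworks set different timelines):

```python
from datetime import datetime, timedelta, timezone

# GDPR: notify the supervisory authority within 72 hours of awareness.
GDPR_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime,
                          window: timedelta = GDPR_WINDOW) -> datetime:
    """Deadline for regulator notification, counted from the moment of
    awareness, not from when the investigation concludes."""
    return aware_at + window

# Example: awareness at 09:30 UTC on 1 March means notification is due
# by 09:30 UTC on 4 March, even if scope is still being investigated.
aware = datetime(2024, 3, 1, 9, 30, tzinfo=timezone.utc)
deadline = notification_deadline(aware)
```

The useful discipline is recording `aware_at` the moment reasonable grounds exist, so the question "how long do we have?" has a single, agreed answer during the incident.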
C
CASB (Cloud Access Security Broker)
A security control point deployed between an organization’s users and cloud service providers, used to enforce security policies, provide visibility into cloud application usage, and protect data in transit to and from cloud services. CASBs address the visibility and control gap created when employees access cloud applications directly, outside the corporate network perimeter.
What security leaders need to know
- CASBs are most valuable where users access cloud services from unmanaged devices or locations outside the corporate perimeter — which describes most organizations post-2020.
- A CASB without an accurate inventory of cloud services in use provides incomplete protection. Shadow cloud usage is typically where the most significant data exposure lives.
- CASB capabilities have increasingly converged into broader Security Service Edge (SSE) platforms. When evaluating CASB as a standalone category, confirm whether your existing SSE investment already provides equivalent capability before adding a separate tool.
Is a CASB a compliance tool or a security tool?
Both — but the distinction matters. CASBs provide data activity monitoring and policy enforcement capabilities that regulators increasingly expect under DPDPA, GDPR, and sector-specific frameworks. Compliance reporting capability is not the same as effective data protection, however. A CASB configured merely to report policy violations while still allowing them is a compliance artifact, not a security control.
Compliance Program
The structured set of policies, controls, procedures, and governance mechanisms an organization maintains to meet its regulatory obligations. A compliance program translates external requirements — from GDPR, DPDPA, ISO 27001, PCI-DSS, or sector-specific mandates — into internal operational practice, with evidence to demonstrate adherence.
What security leaders need to know
- A compliance program is not the same as a security program. An organization can be fully compliant and still have a weak security posture — because frameworks define minimum requirements, not best practice.
- Organizations with overlapping regulatory obligations benefit from mapping all frameworks to a common control set (such as ISO 27001 or NIST CSF) to avoid building parallel programs for each framework.
- Compliance programs that exist primarily to pass audits rather than reduce risk are expensive and provide false assurance to leadership.
What happens when compliance and security appear to conflict?
They rarely genuinely conflict. When a compliance requirement appears to conflict with an effective security control, the usual explanation is that the requirement is being interpreted too narrowly, or the control is not well-matched to the underlying risk the requirement was written to address. These apparent conflicts are almost always resolvable at the implementation level.
Control Framework
A structured set of security controls organized to address identified risks, provide assurance over key processes, and demonstrate compliance with regulatory requirements. Common control frameworks include ISO 27001, NIST CSF, CIS Controls, and SOC 2. Organizations adopt control frameworks as a reference architecture for building and assessing their security programs.
What security leaders need to know
- No single control framework is appropriate for all organizations. Selection should reflect the sector, regulatory environment, customer requirements, and operational maturity.
- Adopting a framework does not mean implementing every control in it. Frameworks are reference documents; organizations select and implement the controls relevant to their risk profile.
- Frameworks describe what to do — not how to do it for a specific environment. Implementation guidance and organizational context are required to translate requirements into working controls.
Which control framework should an organization adopt?
The most useful answer starts with what the organization needs to achieve. If the primary driver is demonstrating compliance to enterprise customers, SOC 2 is the market standard. For regulatory or contractual certification, ISO 27001 is globally recognized. For risk-management prioritization, NIST CSF provides good structure. Most mature organizations implement against multiple frameworks — the key is identifying overlap rather than treating them as separate programs.
Cyber Risk
The potential for financial loss, operational disruption, reputational damage, or regulatory penalty arising from the failure of digital systems, unauthorized access to data, or deliberate exploitation of vulnerabilities. Cyber risk is a subset of operational risk and should be quantified, prioritized, and managed through the same governance structures that apply to other material business risks.
What security leaders need to know
- Cyber risk cannot be eliminated. The goal of a security program is to reduce residual risk to a level the organization has explicitly accepted — not to achieve a risk-free state.
- Quantifying cyber risk in financial terms enables leadership conversations that technical risk scores alone cannot support.
- The cyber risk profile changes continuously as the organization’s environment changes, the threat landscape evolves, and new regulatory obligations come into force.
How should the board think about cyber risk tolerance?
The same way it thinks about other operational risks: by establishing an explicit risk appetite that defines what level of exposure is acceptable given the organization’s risk capacity, regulatory obligations, and strategic objectives. A board that has not defined its risk appetite for cyber risk has implicitly delegated all cyber risk decisions to the security function — which is a governance gap, not a program design feature.
D
Data Classification
The process of organizing an organization’s data into defined categories based on sensitivity, regulatory status, and business value, with corresponding handling and protection requirements for each category. Classification is the foundation of proportionate data protection — it allows security controls to be concentrated where the data that would cause the most harm if compromised actually lives.
What security leaders need to know
- Classification schemes that are too granular become unmanageable. Most organizations function well with three to four categories: public, internal, confidential, and restricted — or equivalent labels that reflect their specific environment.
- Data that has not been classified is effectively unclassified — protection decisions are being made without a policy basis, and unclassified data is among the most common sources of compliance exposure.
- Classification is only useful if the handling requirements associated with each category are operational — meaning employees know what they are supposed to do, and controls enforce what the policy says.
What is the minimum viable data classification program?
Define the categories, write the handling requirements for each, identify where the most sensitive data sits, and ensure controls are calibrated to protect it. Organizations that try to classify every piece of data simultaneously before taking any action typically produce classification schemes that are never operationalized. Start with the data whose compromise would cause the most damage.
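The category-plus-handling-requirements structure can be made concrete in a few lines. The categories mirror the common four-level scheme above; the handling values are illustrative assumptions, not a reference policy.

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    RESTRICTED = "restricted"

# Illustrative handling requirements per category; real requirements
# come from the organization's own policy.
HANDLING = {
    Classification.PUBLIC:       {"encrypt_at_rest": False, "sharing": "unrestricted"},
    Classification.INTERNAL:     {"encrypt_at_rest": False, "sharing": "employees only"},
    Classification.CONFIDENTIAL: {"encrypt_at_rest": True,  "sharing": "named roles, logged"},
    Classification.RESTRICTED:   {"encrypt_at_rest": True,  "sharing": "explicit approval per transfer"},
}

def handling_for(label: str) -> dict:
    """Look up the handling requirements for a classification label."""
    return HANDLING[Classification(label)]
```

The test of operationalization described above is whether controls actually consult this mapping: a scheme that exists only in a policy document, with no enforcement keyed to the labels, is classification in name only.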
DLP (Data Loss Prevention)
A category of security controls designed to detect and prevent the unauthorized transfer, disclosure, or destruction of sensitive data. DLP solutions monitor data in use (on endpoints), data in motion (network traffic, email, cloud uploads), and data at rest (storage repositories) to identify policy violations and enforce controls ranging from alerting to active blocking.
What security leaders need to know
- DLP is a policy enforcement tool, not a data discovery tool. It works effectively only when the organization already knows what data it is trying to protect and has defined clear policies for how that data should and should not move.
- High false-positive rates are the most common DLP implementation failure. Policies that are too broad generate alert volumes that security teams cannot investigate, producing a program that exists on paper but does not reduce risk.
- Cloud adoption has shifted the primary DLP challenge from network egress to cloud upload and sharing. DLP programs designed for a pre-cloud environment frequently miss the data movement patterns that matter most today.
Is DLP worth the investment?
For organizations handling significant volumes of personal data, financial data, or intellectual property — yes, if implemented with clear policies and adequate tuning capacity. DLP deployed without accurate data inventories and ongoing policy maintenance creates administrative overhead without materially reducing exposure. The investment decision should be assessed against specific data risks, not against a general assumption that DLP is required for compliance.
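The false-positive point can be illustrated concretely. A broad pattern alone floods analysts with alerts; pairing it with a validity check (here, the Luhn checksum used by payment card numbers) is one common way to cut noise. This is an illustrative sketch, not a production DLP rule:

```python
import re

# Broad candidate pattern: 13-16 digits, optionally separated. On its
# own this matches order numbers, phone numbers, and other noise.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: filters out most non-card digit strings."""
    nums = [int(d) for d in digits][::-1]
    total = 0
    for i, n in enumerate(nums):
        if i % 2 == 1:
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

def flag_card_numbers(text: str) -> list[str]:
    """Return digit strings that both match the pattern and pass Luhn."""
    hits = []
    for match in CARD_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits
```

Even with the checksum, real DLP tuning goes further (context, destination, data inventories); the sketch shows only why "pattern alone" policies generate the alert volumes described above.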
DPDPA (Digital Personal Data Protection Act)
India’s primary data protection legislation, enacted in 2023, which establishes obligations for organizations — termed Data Fiduciaries — that collect, process, or use the personal data of individuals in India. The Act requires lawful basis for processing, consent management, purpose limitation, data minimization, accuracy obligations, security safeguards, breach notification, and data principal rights including access, correction, and erasure.
What security leaders need to know
- DPDPA applies to any organization processing the personal data of individuals in India, regardless of where the organization itself is incorporated or located.
- Significant Data Fiduciaries — designated by the government based on data volume and sensitivity — face additional obligations including Data Protection Impact Assessments, data audits, and appointment of a Data Protection Officer.
- Breach notification obligations apply to Data Fiduciaries, with timelines specified in rules. Organizations without a tested breach notification process are not positioned to meet these requirements under pressure.
How does DPDPA interact with GDPR for organizations subject to both?
The frameworks share common principles — lawful basis, purpose limitation, data subject rights — but differ in implementation detail and specific obligations. Organizations subject to both should map their compliance programs against both frameworks simultaneously. The overlap is substantial enough that a single, well-designed data governance program can address both, but the detail differences require specific attention.
G
GDPR (General Data Protection Regulation)
The European Union’s primary data protection legislation, applicable to any organization that processes the personal data of individuals in the EU, regardless of where the organization is based. GDPR establishes requirements for lawful processing, data subject rights, data transfer restrictions, breach notification (72 hours to regulators), and organizational accountability through Data Protection Officers and Privacy Impact Assessments.
What security leaders need to know
- GDPR’s extraterritorial reach means any organization with EU customers, employees, or users is potentially subject to GDPR regardless of where it operates.
- Fines of up to €20 million or 4% of global annual turnover (whichever is higher) are available for serious violations. Enforcement activity has increased substantially since 2021.
- GDPR requires organizations to demonstrate compliance, not merely claim it. Documentation, records of processing activities, and evidence of control implementation are required for regulatory accountability.
What is a Data Protection Officer and do we need one?
A DPO is required under GDPR for public authorities, organizations conducting large-scale systematic monitoring of individuals, and organizations processing special categories of data at large scale. Where required, the DPO must be formally appointed and cannot hold a role with a conflict of interest. Where not strictly required, many organizations appoint one voluntarily as a governance measure — it signals accountability and provides a point of contact for regulators and data subjects.
Governance Model (Security)
The framework of structures, processes, and accountabilities through which an organization makes and oversees security decisions. A security governance model defines who has authority over which security decisions, how those decisions are escalated, how security is reported to leadership and the board, and how security priorities are aligned with business objectives.
What security leaders need to know
- Governance failures, not technical control failures, are among the most common root causes of significant incidents. When the right decisions were not made at the right level, governance was inadequate regardless of the technical controls in place.
- A governance model that exists only on paper does not function in a crisis. The test is whether the right people are making the right decisions under pressure, not whether the policy documents are current.
- Governance models must evolve as organizations grow. A structure appropriate for 200 people will not work for 2,000 — and adapting in advance of growth is significantly easier than restructuring after an incident.
What is the difference between security governance and security management?
Management is the operational execution of the security program: running controls, responding to incidents, maintaining tools. Governance is the oversight of that program: setting direction, allocating resources, evaluating performance, and holding the function accountable. Both are necessary, but they operate at different organizational levels and involve different stakeholders. The board governs; the security function manages.
I
IAM (Identity and Access Management)
The set of policies, processes, and technologies that control which users, devices, and applications can access which systems and data, under what conditions, and with what level of privilege. IAM encompasses user provisioning and deprovisioning, authentication (including multi-factor authentication), authorization (role-based and attribute-based access control), and privileged access management.
What security leaders need to know
- The majority of significant breaches involve compromised identities at some point in the attack chain. Weak IAM is among the highest-value targets for attackers and the highest-leverage areas for defenders.
- Provisioning processes that add access promptly but remove it slowly or incompletely are a common and underappreciated risk. Accounts with access no longer required represent uncontrolled exposure.
- Privileged accounts require controls beyond standard user accounts. The compromise of a privileged account typically has materially different impact than a standard user account compromise.
What is least privilege and why does it matter?
Least privilege means giving users, applications, and systems only the minimum access necessary to perform their function. The benefit is containment: a compromised account can only access what it was authorized for. Organizations that grant broad access for convenience and review it infrequently create environments where a single account compromise can have organization-wide consequences.
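Least privilege is straightforward to express as a role-based access check. The roles and permissions below are illustrative assumptions, not a reference model:

```python
# Each role grants only the permissions its function requires.
# Roles and permission names are illustrative.
ROLE_PERMISSIONS = {
    "support_agent": {"tickets:read", "tickets:write"},
    "billing_clerk": {"invoices:read", "invoices:write", "tickets:read"},
    "auditor":       {"tickets:read", "invoices:read"},  # read-only by design
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles and unlisted permissions get nothing."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The containment property falls out of the structure: a compromised auditor account can read records but cannot modify invoices, and an unknown role has no access at all. The erosion risk is not in the check but in the table — permissions added for convenience and never removed.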
Incident Response
The organized approach to managing and containing the impact of a security breach or cyberattack. An incident response program includes preparation (plans, playbooks, team roles, communication protocols), detection and analysis, containment and eradication, recovery, and post-incident review. Each phase has specific activities and decision points that must be designed before an incident occurs.
What security leaders need to know
- Organizations that have never tested their incident response plans have plans that will not work. Plans written in calm conditions make assumptions about information, personnel availability, and system behavior that consistently fail under real incident conditions.
- The decisions that most commonly go wrong in real incidents are organizational, not technical: who has authority to take systems offline, when do regulators get notified, who communicates externally. These must be decided in advance.
- Incident response is not a security team function alone. Legal, communications, senior leadership, and operations all have roles in a significant incident.
How long should an incident response plan be?
Long enough to cover the scenarios the organization actually faces, short enough that the relevant section can be located in the first 60 seconds of an incident. Plans that run to 50+ pages are typically not used in real incidents. A modular structure — a short core plan with separate playbooks for specific incident types — works better than a single comprehensive document that no one can navigate under pressure.
ISO 27001
An international standard for information security management systems (ISMS), published by the International Organization for Standardization. ISO 27001 provides a framework for establishing, implementing, maintaining, and continuously improving an ISMS. Certification requires independent audit by an accredited certification body and provides third-party assurance of security governance to customers, partners, and regulators.
What security leaders need to know
- ISO 27001 certification demonstrates that an organization has a functioning security management system and has been independently assessed against it. It certifies systematic management, not a specific security outcome.
- The standard is structured around risk management: organizations identify their information security risks and implement controls appropriate to those risks. The Annex A control set is a reference — organizations are not required to implement every control.
- Certification requires ongoing maintenance through annual surveillance audits and recertification every three years.
Is ISO 27001 worth pursuing if no customer is requiring it?
ISO 27001 is most commonly pursued in response to customer or regulatory requirements. Where neither is present, the standard’s risk management framework and governance structure can provide value without full certification. Organizations in this position should evaluate whether the framework can be adopted without the certification overhead, or whether an alternative framework better matches their current maturity and objectives.
M
Security Maturity Model
A framework for evaluating and describing the capability and consistency of a security program across defined domains. Maturity models provide a structured way to assess where a program currently sits on a defined scale, identify the specific improvements needed to advance, and communicate program development to leadership and the board over time.
What security leaders need to know
- Maturity scores are a means to an end. A program that scores a 3 but has no significant incidents and is well-aligned to the organization’s actual risk profile is more valuable than one that scores a 5 on paper but is disconnected from operational reality.
- Maturity models vary significantly. Widely used options include CMMI, CIS Controls maturity tiers, NIST CSF implementation tiers, and C2M2. Selecting the right model requires understanding what the organization needs to communicate and to whom.
- Maturity assessments conducted by the same organization providing implementation services carry an inherent conflict of interest: lower scores justify more implementation work. Independent assessment removes that conflict.
How should the board interpret maturity scores?
As directional indicators, not absolute measures. The more useful board question is not what the maturity score is, but what the specific gaps are between current maturity and the maturity needed to manage actual risks — and what the plan is to close them. Maturity scores without that context are governance theater.
MFA (Multi-Factor Authentication)
An authentication mechanism that requires users to provide two or more verification factors before accessing a system or application. Factors typically combine something the user knows (password), something the user has (an authenticator app code or hardware token), and something the user is (biometrics). MFA significantly reduces the risk of account compromise from credential theft or phishing.
What security leaders need to know
- Not all MFA is equally effective. SMS-based one-time codes are substantially weaker than app-based authenticators or hardware tokens, because SMS codes can be intercepted through SIM swapping. For privileged accounts, hardware tokens or passkeys are the appropriate standard.
- MFA enforcement on privileged accounts is the highest-leverage single control for most organizations. An attacker who compromises a privileged account without MFA can move through an environment with minimal friction.
- User resistance to MFA is a usability problem, not a security argument. Modern authenticator applications are fast and low-friction; resistance is addressed through communication, not by accepting weaker security.
Should MFA be required for all users or just privileged accounts?
All users, with higher-assurance methods for privileged accounts. The population most commonly targeted in credential-based attacks is the general user population, not just administrators. A compromise that starts with a standard user account and escalates to privileged access is a common and well-documented attack pattern. The cost of organization-wide MFA deployment is low relative to the risk reduction it provides.
N
NIST CSF (NIST Cybersecurity Framework)
A voluntary framework developed by the US National Institute of Standards and Technology that organizes cybersecurity activities into six core functions: Govern, Identify, Protect, Detect, Respond, and Recover (the Govern function was added in CSF 2.0). Widely used as a reference architecture for building, assessing, and communicating security programs, particularly in sectors without a mandated framework.
What security leaders need to know
- NIST CSF is a management framework, not a technical control standard. It describes categories of activity that should exist in a mature security program, not the specific controls required to execute them.
- The framework’s most practical application for many organizations is as a communication tool: mapping the current program to NIST CSF provides a structured way to describe coverage and gaps to leadership and the board.
- NIST CSF is not a compliance standard and there is no certification. Organizations described as “NIST CSF compliant” typically mean their program is organized according to the framework’s structure.
How does NIST CSF relate to ISO 27001?
NIST CSF is a framework; ISO 27001 is a certifiable standard. They are complementary, not competing. Organizations that use NIST CSF for internal program management often pursue ISO 27001 certification to provide external assurance to customers and partners. Mapping between them is well-documented, and organizations operating against one typically find the other requires relatively modest incremental effort.
P
Penetration Testing
A structured, authorized exercise in which security professionals attempt to exploit vulnerabilities in an organization’s systems, applications, or network using the same techniques an attacker would use. Penetration tests are distinguished from vulnerability scans by their active exploitation component: a penetration test demonstrates impact, not just the existence of exposure.
What security leaders need to know
- A penetration test produces findings about the environment at the time it was conducted, against the agreed scope. It is a point-in-time assessment, not ongoing assurance. Environments change between tests, and new vulnerabilities emerge continuously.
- The quality of a penetration test is determined primarily by the skills and methodology of the testers. Running automated tools and reporting output — checkbox testing — provides limited value.
- The most important output is not the count of findings but the realistic demonstration of what an attacker could achieve, and the prioritized remediation plan that follows.
How often should penetration testing be conducted?
For most organizations, annually for core infrastructure and applications, with additional tests after significant changes — new applications, infrastructure changes, or acquisitions. Regulatory frameworks including PCI-DSS specify minimum testing frequencies for in-scope environments. Organizations undergoing rapid technology change may benefit from more frequent targeted testing of specific areas rather than annual comprehensive tests.
R
Residual Risk
The risk that remains after security controls have been applied. Residual risk represents the exposure an organization has chosen to accept — or has not yet addressed — given its current control environment. All security programs produce residual risk; the question is whether that residual risk has been explicitly evaluated and accepted by the appropriate authority.
What security leaders need to know
- Residual risk that has not been formally accepted is risk the organization is carrying without a governance decision. The security function identifies and quantifies residual risk; the decision to accept it belongs to the business or the board.
- Security investment decisions are most usefully framed as decisions about which residual risks to reduce and by how much, rather than decisions about which controls to buy.
- Residual risk changes as the threat landscape evolves. A level of residual risk that was acceptable two years ago may not be acceptable today.
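One common simplification, useful for illustration only, models residual risk as inherent risk reduced by control effectiveness. The scales, formula, and numbers below are assumptions for the sketch, not a standard method:

```python
def residual_risk(inherent_likelihood, inherent_impact, control_effectiveness):
    """Illustrative model (an assumption, not a standard):
    residual risk = inherent risk reduced by control effectiveness.

    inherent_likelihood, inherent_impact: e.g. 1-5 ordinal scales
    control_effectiveness: 0.0 (no effect) to 1.0 (fully mitigating)
    """
    inherent = inherent_likelihood * inherent_impact
    return round(inherent * (1 - control_effectiveness), 2)

# Controls judged 70% effective leave 6.0 of an inherent 20 on the table --
# exposure that still needs a formal acceptance decision.
print(residual_risk(4, 5, 0.7))  # 6.0
```

The point of even a crude model like this is governance, not precision: it makes the remaining exposure an explicit number that someone must accept.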
Risk Appetite
The amount and type of risk an organization is willing to accept in pursuit of its objectives. In cybersecurity, risk appetite defines the boundaries within which the security program operates: how much residual risk is acceptable, which risk categories require immediate remediation, and what level of investment is appropriate to reduce specific risks.
What security leaders need to know
- Risk appetite must be defined by the board or senior leadership, not by the security function. Security teams operating without a defined risk appetite are making business risk decisions without the authority to do so.
- Risk appetite statements that are too vague to guide decisions — such as “we have a low appetite for cyber risk” — are not useful. Effective definitions are specific enough to distinguish between risks requiring immediate action and risks that can be monitored within current controls.
- Risk appetite should directly inform security investment decisions. If the board defines a low appetite for regulatory exposure, that translates into compliance investment priorities.
Who should define risk appetite?
The board or executive leadership, informed by the security function’s assessment of the current risk environment. The security function can propose a risk appetite framework and populate it with data about current exposure — but the decision about what level of risk is acceptable is a business decision, not a security one. This distinction is fundamental to good governance.
Risk Register
A structured document that records the security risks identified by an organization, including description, likelihood, potential impact, current controls, residual risk level, risk owner, and planned treatment actions. A maintained risk register is both a management tool for tracking risk treatment and a governance artifact for demonstrating systematic risk management to auditors, regulators, and leadership.
What security leaders need to know
- A risk register created for an audit and then filed is a compliance artifact, not a risk management tool. Risk registers provide value only when actively maintained and used to inform security investment and prioritization decisions.
- Risk ownership is a critical and commonly neglected element. A risk with no named owner is a risk no one is accountable for addressing. Ownership should sit with the function that has the ability and authority to treat the risk.
- Risk registers should be reviewed at minimum annually and after significant events — and updated to reflect changes in the environment, the threat landscape, and control effectiveness.
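The elements a register records can be sketched as a simple data structure. The Python below is a hedged illustration — the field names, scales, and escalation rule are assumptions, not a standard schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class RiskEntry:
    # Fields mirror the elements named in the definition; names are illustrative.
    risk_id: str
    description: str
    likelihood: RiskLevel
    impact: RiskLevel
    current_controls: list
    residual_risk: RiskLevel
    owner: str                               # named function accountable for treatment
    treatment_actions: list = field(default_factory=list)

    def needs_escalation(self, appetite: RiskLevel) -> bool:
        # Residual risk above the stated appetite requires a governance decision.
        return self.residual_risk.value > appetite.value

entry = RiskEntry(
    risk_id="R-017",
    description="Unencrypted backups of customer data at third-party site",
    likelihood=RiskLevel.MEDIUM,
    impact=RiskLevel.HIGH,
    current_controls=["access logging"],
    residual_risk=RiskLevel.HIGH,
    owner="Head of Infrastructure",
)
print(entry.needs_escalation(RiskLevel.MEDIUM))  # True
```

Note that `owner` is a required field in this sketch: an entry cannot be created without one, which operationalizes the point that a risk with no named owner is a risk no one is accountable for.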
S
SASE (Secure Access Service Edge)
An architectural framework that converges wide-area networking (WAN) and network security services into a unified, cloud-delivered platform. SASE consolidates Secure Web Gateway (SWG), Cloud Access Security Broker (CASB), Zero Trust Network Access (ZTNA), Firewall-as-a-Service (FWaaS), and SD-WAN into a single architecture, enabling consistent security policy enforcement regardless of where users and data are located.
What security leaders need to know
- SASE is an architecture, not a product. Vendors market products as SASE solutions with varying degrees of capability coverage. Evaluating vendor claims requires defining which capabilities the organization's architecture actually needs before assessment begins.
- The primary driver for SASE adoption is the dissolution of the network perimeter — users accessing cloud applications from distributed locations cannot be protected by controls designed for a corporate perimeter. SASE moves the enforcement point to where users and data actually are.
- SASE consolidation promises to reduce tool sprawl and simplify operations, but consolidation onto a single vendor creates concentration risk. This tradeoff should be evaluated explicitly.
What is the difference between SASE and SSE?
Security Service Edge (SSE) is the security subset of SASE — it includes SWG, CASB, ZTNA, and FWaaS but excludes the networking components (SD-WAN). Organizations whose primary need is security enforcement rather than WAN transformation often implement SSE first and expand to full SASE as the network modernization program matures. The two terms are used interchangeably by vendors but represent different scopes of implementation.
Shadow IT
Technology — applications, cloud services, devices, or other systems — adopted and used by employees or business units without the knowledge or approval of the IT or security function. Shadow IT is driven by the gap between what employees need to work effectively and what the organization formally provides, and creates security risk because it sits outside the organization’s visibility and control framework.
What security leaders need to know
- Shadow IT is not primarily a compliance problem — it is a risk exposure problem. Data that moves into applications outside the organization’s control framework is data the organization cannot monitor, protect, or recover in the event of an incident.
- Attempting to prohibit shadow IT without addressing the underlying need that drove its adoption is not effective. The most successful approaches combine policy enforcement with fast-lane processes for evaluating and approving new tools.
- Most organizations that analyze cloud application usage for the first time are surprised by the volume of unauthorized applications in active use.
How do we get visibility into shadow IT?
Cloud access security brokers (CASBs) and Security Service Edge (SSE) platforms can provide visibility into cloud application usage at scale by analyzing DNS queries, proxy logs, or agent-based endpoint data. Even a basic analysis of DNS or web proxy logs typically surfaces a substantially larger shadow IT footprint than most security teams expect — and gives the organization a foundation for risk-based decisions about which applications to approve, which to restrict, and which to replace with sanctioned alternatives.
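As an illustration of the kind of analysis involved, here is a minimal Python sketch that tallies queried domains from simplified log lines and flags anything not on a sanctioned-application list. The log format, domain names, and sanctioned list are all hypothetical:

```python
from collections import Counter

# Hypothetical sanctioned-application domains; a real inventory comes from
# the IT asset register or the CASB/SSE policy engine.
SANCTIONED = {"mail.example.com", "crm.example.com"}

def shadow_it_report(log_lines):
    """Tally queried domains from simplified log lines
    (assumed format: 'timestamp client domain') and return
    the unsanctioned domains with their query counts."""
    counts = Counter(line.split()[2] for line in log_lines if line.strip())
    return {domain: n for domain, n in counts.items() if domain not in SANCTIONED}

logs = [
    "2024-05-01T09:00 10.0.0.5 mail.example.com",
    "2024-05-01T09:01 10.0.0.7 filesharing-app.io",
    "2024-05-01T09:02 10.0.0.7 filesharing-app.io",
]
print(shadow_it_report(logs))  # {'filesharing-app.io': 2}
```

Real-world analysis is messier (CDNs, shared SaaS domains, encrypted DNS), which is why dedicated CASB/SSE tooling exists — but even this level of triage turns an invisible problem into a ranked list.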
T
Threat Modelling
A structured process for identifying, prioritizing, and addressing security threats relevant to a specific system, application, or organization. Threat modelling answers four questions: what are we protecting, who might attack it, how might they do so, and which controls would most effectively reduce the risk of a successful attack? It produces a prioritized view of threats specific to the environment being assessed.
What security leaders need to know
- Threat models built around industry averages rather than the organization’s specific data environment, operational architecture, and adversary profile tell leadership nothing they could not learn from a published framework. Specificity is what makes threat modelling valuable.
- Threat modelling is not a one-time activity. The threat landscape changes, the organization’s environment changes, and new threat actor capabilities emerge. A model built three years ago for an environment that has since moved substantially to the cloud should not be treated as current.
- The most valuable output is prioritization: which threats, given the organization’s specific profile, represent the highest likelihood and impact — and which controls address them most efficiently.
Who should be involved in a threat modelling exercise?
At minimum: the security team, representatives from the business units that own the systems or data being modelled, and legal or compliance where regulatory obligations shape the threat profile. Threat models built solely by the security team often miss the business context that determines which threats matter most. The people closest to the business processes understand the data flows, the dependencies, and the operational constraints that shape both the threat surface and the realistic control options.
Third-Party Risk
The risk to an organization’s data, operations, or compliance posture arising from the actions, failures, or security weaknesses of vendors, partners, suppliers, or other entities with access to the organization’s systems or data. Third-party risk is a material and growing source of organizational exposure as supply chains become more complex and digital integration with external parties increases.
What security leaders need to know
- Third-party risk is not transferred when a contract is signed. Data that sits with a vendor is still the organization’s data under most regulatory frameworks. The organization remains responsible for its protection regardless of who holds it.
- Not all third parties represent equivalent risk. Applying the same assessment intensity to every vendor is inefficient. Risk-based tiering — assessing vendors with access to sensitive data or critical systems at a different standard than those with minimal access — is the appropriate approach.
- Third-party relationships assessed at onboarding but not reassessed subsequently are a common source of untracked exposure. Vendor environments change; so does their access to the organization’s data and systems.
What should a third-party risk assessment include?
At minimum: what data the vendor has access to and how they protect it, what security certifications or assessments they maintain, what their breach notification process is and how it connects to the organization’s own obligations, and what contractual rights the organization has to audit or require remediation. For high-risk vendors, questionnaire-based assessments should be supplemented with evidence review and, for critical relationships, independent assurance.
V
Vulnerability Management
The ongoing process of identifying, classifying, prioritizing, and remediating security vulnerabilities in an organization’s systems, applications, and infrastructure. Effective vulnerability management requires asset visibility (you cannot manage vulnerabilities in systems you do not know exist), regular scanning, risk-based prioritization, and a remediation process that is actually executed rather than backlogged.
What security leaders need to know
- Vulnerability management is an operational discipline, not an event. Organizations that scan quarterly and remediate at an annual review are not managing vulnerabilities — they are documenting them.
- CVSS scores measure technical severity, not business risk. A critical-CVSS vulnerability in a system with no external access and no sensitive data is typically a lower remediation priority than a medium-CVSS vulnerability in a customer-facing application processing financial data.
- The remediation backlog is the most reliable indicator of program effectiveness. A growing backlog indicates the organization is identifying vulnerabilities faster than it is closing them — a resource, process, or prioritization problem that must be addressed explicitly.
What does good vulnerability prioritization look like?
Prioritization that combines technical severity with business context: the asset’s criticality, its exposure (internet-facing vs internal), the data it holds or processes, whether the vulnerability is actively exploited in the wild, and the feasibility of remediation within the operational environment. Organizations that prioritize purely by CVSS score end up remediating low-impact vulnerabilities in unimportant systems while critical-path exposures age in the backlog.
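A composite score of that kind can be sketched in a few lines. The weights below are entirely illustrative assumptions — real programs calibrate them to their own environment and asset inventory:

```python
def remediation_priority(cvss, internet_facing, sensitive_data, exploited_in_wild):
    """Illustrative composite score: technical severity (CVSS) weighted
    by business context. Multipliers are assumptions, not a standard."""
    score = cvss
    if internet_facing:
        score *= 1.5   # exposure
    if sensitive_data:
        score *= 1.4   # data criticality
    if exploited_in_wild:
        score *= 2.0   # active exploitation
    return round(score, 1)

# A medium-CVSS flaw in an internet-facing app holding financial data,
# with exploitation observed in the wild...
print(remediation_priority(5.5, True, True, True))     # 23.1
# ...outranks a critical-CVSS flaw in an isolated internal system.
print(remediation_priority(9.8, False, False, False))  # 9.8
```

The exact numbers matter less than the shape of the logic: context multiplies severity, so the same CVSS score can land anywhere in the queue depending on where the vulnerability sits.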
Z
Zero Trust
A security model based on the principle that no user, device, or application should be trusted by default, regardless of whether they are inside or outside the traditional network perimeter. Zero Trust replaces perimeter-based security with continuous verification: every access request is authenticated, authorized, and validated against security policy, regardless of origin.
What security leaders need to know
- Zero Trust is an architectural principle, not a product. Vendors market products as Zero Trust solutions — some implement meaningful Zero Trust principles and some do not. The decision to adopt Zero Trust should begin with architectural intent, not vendor selection.
- Implementing Zero Trust in an existing environment is an incremental program, not a single project. Organizations that attempt comprehensive implementation in a single initiative typically stall. The more effective approach is to identify the highest-risk access paths and apply Zero Trust controls to them first.
- Zero Trust does not eliminate the need for other security controls. It addresses the risk of lateral movement by an attacker who has gained an initial foothold — it does not prevent the initial foothold from occurring.
What is the difference between Zero Trust and VPN?
A VPN authenticates users and grants them access to the network. Zero Trust authenticates users and grants them access only to the specific applications or resources they are authorized for — nothing more. In a VPN environment, an attacker who compromises a VPN credential can move laterally through the network. In a Zero Trust environment, a compromised credential provides access only to what that credential was authorized for, significantly limiting the attacker’s ability to escalate or move laterally.
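The difference in the two access models can be sketched in a few lines of illustrative Python — the policy table, user, and resource names are hypothetical:

```python
# Hypothetical policy table: credential -> applications it is authorized for.
POLICY = {
    "j.doe": {"payroll", "email"},
}

def vpn_access(user, resource, on_network):
    # VPN model: once authenticated onto the network,
    # every networked resource is reachable.
    return on_network

def zero_trust_access(user, resource):
    # Zero Trust model: every request is checked against explicit,
    # per-application policy -- network location grants nothing.
    return resource in POLICY.get(user, set())

# A compromised VPN credential reaches the finance database...
print(vpn_access("j.doe", "finance-db", on_network=True))  # True
# ...but under Zero Trust the same credential reaches only
# what it was explicitly authorized for.
print(zero_trust_access("j.doe", "finance-db"))            # False
print(zero_trust_access("j.doe", "payroll"))               # True
```

Real ZTNA enforcement also evaluates device posture, session context, and continuous signals, but the core shift is visible even here: authorization is scoped to the resource, not the network.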
Is Zero Trust achievable for mid-sized organizations?
Yes, with the right framing. Zero Trust is most achievable when treated as a direction rather than a destination — a set of principles that guide investment decisions over time. A mid-sized organization that implements MFA for all users, enforces application-level access controls rather than network-level trust, and monitors user and device posture before granting access has made meaningful Zero Trust progress, even if the full architecture is years away.
A term you need that is not here?
This glossary covers the terms most relevant to the governance and leadership conversations DataNudge is typically part of. If you need a definition that is not here, or want to discuss what a term means in the context of your specific environment, start a conversation.