OBJECTIVE 4.3

Explain various activities associated with vulnerability management

Vulnerability management is the continuous cycle of finding, evaluating, remediating, and verifying security weaknesses. It’s not just scanning — it’s the full lifecycle from discovery to closure.

Vulnerability Identification

Scanning Types

  • Credentialed (authenticated): Scanner logs into systems with valid credentials. Sees installed software, configurations, missing patches. More accurate, fewer false positives.
  • Non-credentialed (unauthenticated): Scanner probes from the network without credentials. Only sees what’s externally visible. Faster but less thorough.
  • Agent-based: Lightweight agent installed on endpoints reports vulnerabilities continuously. Works for devices that aren’t always on the network (laptops, remote workers).
  • Agentless: Scanner-initiated from a central point. Simpler to deploy but requires network access to targets.

Active vs. Passive

  • Active scanning: Sends probes to targets. Can be disruptive to sensitive systems (ICS/SCADA).
  • Passive scanning: Monitors network traffic to identify vulnerabilities. Non-disruptive but less thorough.

Scanning Cadence

  • External-facing systems: Weekly or continuous
  • Internal systems: Monthly minimum
  • After significant changes: Immediately (new deployments, major patches, configuration changes)
  • Compliance-driven: PCI DSS requires quarterly ASV (Approved Scanning Vendor) scans for external systems

Vulnerability Analysis

CVSS (Common Vulnerability Scoring System)

Standard severity rating from 0.0 to 10.0:

  • Critical: 9.0–10.0
  • High: 7.0–8.9
  • Medium: 4.0–6.9
  • Low: 0.1–3.9
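The severity bands above can be expressed as a small lookup (a sketch; note that the CVSS v3.x spec also defines a "None" rating for a score of exactly 0.0, which the bands above omit):

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS score (0.0-10.0) to its qualitative severity band."""
    if score == 0.0:
        return "None"      # below the Low band per the CVSS v3.x spec
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # Critical
print(cvss_severity(6.5))  # Medium
```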

CVSS includes three metric groups:

  • Base: Inherent characteristics (attack vector, complexity, impact)
  • Temporal: Factors that change over time (exploit maturity, patch availability)
  • Environmental: Organization-specific context (asset criticality, existing mitigations)

EPSS (Exploit Prediction Scoring System)

Modern supplement to CVSS that answers a different question:

  • CVSS answers: “How severe is this vulnerability?” (inherent characteristics)
  • EPSS answers: “How likely is this vulnerability to be exploited in the next 30 days?” (real-world probability)

EPSS uses machine learning on historical exploitation data to produce a probability score (0–1). A CVSS 9.8 with EPSS 0.01 means it’s critical in theory but nobody’s actually exploiting it. A CVSS 6.5 with EPSS 0.95 means attackers are actively going after it despite the moderate score.

Prioritization strategy: Combine CVSS (severity) + EPSS (likelihood) + asset criticality (impact) for risk-based prioritization that reflects the real-world threat landscape.
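One way to sketch that combination in code — the weighting formula and the 1–5 criticality scale are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float              # severity, 0.0-10.0
    epss: float              # exploitation probability, 0.0-1.0
    asset_criticality: int   # 1 (low) to 5 (crown jewels); hypothetical scale

def risk_score(f: Finding) -> float:
    # Illustrative: severity x likelihood x impact, normalized to stay in 0-10.
    return f.cvss * f.epss * (f.asset_criticality / 5)

findings = [
    Finding("CVE-A", cvss=9.8, epss=0.01, asset_criticality=3),  # severe but unexploited
    Finding("CVE-B", cvss=6.5, epss=0.95, asset_criticality=5),  # actively targeted
]
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.cve, round(risk_score(f), 2))
```

Note how the actively exploited medium-severity finding outranks the theoretical critical, matching the EPSS discussion above.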

Beyond CVSS

CVSS alone is insufficient for prioritization. Additional context:

  • Exploitability: Is there a known exploit in the wild? (CISA KEV catalog)
  • Asset criticality: A medium vulnerability on your internet-facing auth server matters more than a critical vulnerability on an air-gapped test machine
  • Exposure: Is the vulnerable system internet-facing, internal only, or segmented?
  • Business context: What’s the impact if this system is compromised?

False Positives and False Negatives

  • False positive: Scanner reports a vulnerability that doesn’t actually exist. Waste of remediation effort.
  • False negative: Scanner misses a real vulnerability. Dangerous.
  • Validation: Verify scanner findings manually or with a second tool before escalating

Application Security Testing

Covered briefly in 4.1, but CompTIA tests the distinction here in the vulnerability management context:

  • SAST (Static Application Security Testing): Runs during development, on source code. Finds injection flaws, hardcoded secrets, insecure patterns, logic errors. Limitations: high false positive rate; can’t find runtime issues.
  • DAST (Dynamic Application Security Testing): Runs against the running application. Finds runtime vulnerabilities, auth issues, config errors, injection from the outside. Limitations: can’t see source code; slower; may miss logic bugs.
  • IAST (Interactive Application Security Testing): Runs during testing with an instrumented runtime. Combines SAST and DAST insights with runtime context. Limitations: requires instrumentation; more complex setup.
  • SCA (Software Composition Analysis): Runs on dependency manifests / SBOMs. Finds known CVEs in third-party libraries and frameworks. Limitations: only finds known vulnerabilities; can’t assess custom code.

When to use which: SAST early in development (shift-left), DAST in staging/pre-production, SCA continuously (dependencies change constantly), IAST during QA testing cycles.

Threat Intelligence Feeds

External data sources that inform vulnerability prioritization and detection:

Key Sources

  • CVE databases: NIST NVD (National Vulnerability Database) — the canonical source for vulnerability details and CVSS scores
  • CISA KEV (Known Exploited Vulnerabilities): Authoritative list of vulnerabilities being actively exploited. Federal agencies must patch KEV entries within specified timelines. Essential for prioritization.
  • Vendor advisories: Microsoft Patch Tuesday, Apple security updates, Cisco security advisories
  • ISAC feeds: Industry-specific threat intelligence (FS-ISAC for finance, H-ISAC for healthcare)
  • Commercial feeds: Recorded Future, Mandiant, CrowdStrike threat intelligence
  • Open source: AlienVault OTX, Abuse.ch, VirusTotal

Integration

  • Feed IOCs (IPs, domains, hashes) into SIEM, EDR, and firewalls for automated blocking
  • Cross-reference scan results with KEV to identify actively exploited vulnerabilities in your environment
  • Use threat intel to inform scanning priorities — if a new campaign targets your industry, scan for those specific CVEs immediately
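The KEV cross-reference above can be sketched in a few lines. CISA publishes the KEV catalog as a JSON feed; to keep this self-contained, the catalog is a hard-coded sample set and the scan results are invented:

```python
# Sample KEV CVE IDs (in practice, parsed from CISA's published JSON feed)
kev_cves = {"CVE-2021-44228", "CVE-2023-4966"}

# Invented scan output for illustration
scan_results = [
    {"host": "web-01", "cve": "CVE-2021-44228", "cvss": 10.0},
    {"host": "db-02",  "cve": "CVE-2020-0001",  "cvss": 7.5},
]

# Findings that intersect with KEV are known to be actively exploited
actively_exploited = [r for r in scan_results if r["cve"] in kev_cves]
for r in actively_exploited:
    print(f'{r["host"]}: {r["cve"]} is in KEV, patch first')
```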

Bug Bounty Programs

In the vulnerability management context (not just the audit context from 5.5):

  • Supplements internal scanning and pentesting with crowd-sourced discovery
  • Researchers find vulnerability classes that automated tools miss (business logic flaws, chained exploits)
  • Continuous — unlike annual pentests, bounty programs run year-round
  • Vulnerability Disclosure Program (VDP): Similar to bug bounty but without monetary rewards. Provides a channel for researchers to report findings safely.
  • Every organization should have at minimum a VDP with a security.txt file at /.well-known/security.txt
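A minimal security.txt might look like the following (field names per RFC 9116; all values are placeholders):

```text
# Served at /.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```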

Cyber Insurance

Listed in the CompTIA blueprint but often overlooked:

  • What it covers: Breach response costs (forensics, notification, credit monitoring), legal fees, regulatory fines (where insurable), business interruption, ransomware payments (controversial)
  • What it doesn’t cover: Reputational damage, loss of competitive advantage, pre-existing known vulnerabilities
  • Underwriting requirements: Insurers increasingly require MFA, EDR, backup verification, an IR plan, and a vulnerability management program before issuing a policy
  • Connection to vulnerability management: Poor vulnerability management (unpatched systems, no scanning) can void coverage or increase premiums. Insurers may audit.

Exceptions and Exemptions Process

Formal workflow for when vulnerabilities can’t be remediated within SLA:

  1. Request: System owner submits exception request with business justification
  2. Risk assessment: Security team evaluates the risk of leaving the vulnerability unpatched
  3. Compensating controls: Identify and implement alternative protections (segmentation, enhanced monitoring, WAF rules)
  4. Approval: Security leadership (or risk committee for critical exceptions) approves with conditions
  5. Documentation: Exception recorded in risk register with owner, justification, compensating controls, and expiration date
  6. Review: Exceptions reviewed on schedule (30/60/90 days). Extended, remediated, or escalated.

Key point: An exception without compensating controls is just risk acceptance — and that requires a different approval level.
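The exception record from steps 4–6 can be modeled as a small data structure. This is a sketch; the field names and approval tiers are illustrative, not from a standard:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class VulnException:
    system: str
    justification: str
    owner: str
    compensating_controls: list[str]
    granted: date
    review_days: int = 90  # 30/60/90-day review cadence from step 6

    def due_for_review(self, today: date) -> bool:
        return today >= self.granted + timedelta(days=self.review_days)

    def required_approver(self) -> str:
        # Per the key point above: no compensating controls means plain
        # risk acceptance, which needs a different (higher) approval level.
        return "security leadership" if self.compensating_controls else "risk committee"

exc = VulnException(
    system="legacy-erp",
    justification="vendor patch breaks integration",
    owner="app-team",
    compensating_controls=["network segmentation", "enhanced monitoring"],
    granted=date(2024, 1, 1),
)
print(exc.required_approver(), exc.due_for_review(date(2024, 5, 1)))
```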

Vulnerability Response

Remediation

Fix the vulnerability directly.

  • Patching: Apply vendor-provided update. The most common and preferred response.
  • Configuration change: Disable a vulnerable feature, restrict access, tighten permissions.
  • Code fix: For custom applications, fix the vulnerable code.

Mitigation

Reduce the risk without fully eliminating the vulnerability.

  • Network segmentation to limit exposure
  • WAF rules to block known exploit patterns
  • Enhanced monitoring on the vulnerable system
  • Used when patching isn’t immediately possible (legacy systems, change freeze)

Acceptance

Document the decision to accept the risk without remediation.

  • Must include business justification, risk owner sign-off, and compensating controls
  • Not a permanent state — review regularly

Exceptions

Formal process for granting temporary or permanent exceptions to patching requirements.

  • Time-limited, documented, approved by security leadership
  • Compensating controls required

Patch Management

Process

  1. Identification: Vendor releases patch or advisory
  2. Evaluation: Assess relevance, severity, and applicability to your environment
  3. Testing: Deploy to staging/test environment. Verify no regressions.
  4. Deployment: Roll out to production during maintenance window
  5. Validation: Verify the patch resolved the vulnerability. Re-scan.
  6. Documentation: Update patch records, close vulnerability tickets
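The six steps above form an ordered pipeline where a failure at any stage halts the rollout (e.g. a regression found in testing). A minimal sketch, with `stage_fn` as a hypothetical callback that performs each stage:

```python
# Ordered stages of the patch management workflow
STAGES = ["identification", "evaluation", "testing",
          "deployment", "validation", "documentation"]

def run_patch_workflow(patch: str, stage_fn) -> str:
    """Run each stage in order; stop at the first failure."""
    for stage in STAGES:
        if not stage_fn(patch, stage):
            return f"halted at {stage}"
    return "closed"

# Example: pretend testing found a regression for this patch
result = run_patch_workflow("KB500123", lambda p, s: s != "testing")
print(result)  # halted at testing
```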

Prioritization

Not all patches are equal. Prioritize by:

  1. Actively exploited vulnerabilities (CISA KEV)
  2. Internet-facing systems
  3. Critical CVSS scores with available exploits
  4. High-value/critical business systems
  5. Everything else
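The five tiers above map naturally onto a tuple sort key (a sketch; the dict fields are invented for illustration):

```python
def patch_priority(v: dict) -> tuple:
    """Sort key implementing the tiers above; lower tuples sort first."""
    return (
        not v.get("in_kev", False),            # 1. actively exploited (KEV)
        not v.get("internet_facing", False),   # 2. internet-facing systems
        not (v.get("cvss", 0) >= 9.0 and v.get("exploit_available", False)),  # 3.
        not v.get("critical_business", False), # 4. high-value systems
        -v.get("cvss", 0),                     # tie-break by raw severity
    )

vulns = [
    {"cve": "CVE-1", "cvss": 9.9},
    {"cve": "CVE-2", "cvss": 5.4, "in_kev": True},
]
ranked = sorted(vulns, key=patch_priority)
print([v["cve"] for v in ranked])  # the KEV entry outranks the bare critical
```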

Challenges

  • Legacy systems that can’t be patched (EOL software, ICS/SCADA)
  • Application compatibility breaking after patches
  • Patch availability delays from vendors
  • Downtime requirements for patching critical systems

Vulnerability Reporting and Metrics

Key Metrics

  • Mean Time to Remediate (MTTR): Average time from discovery to fix. Lower is better.
  • Vulnerability density: Vulnerabilities per asset/system. Identifies problem areas.
  • Patch compliance rate: Percentage of systems at current patch level
  • Aging vulnerabilities: Count of vulnerabilities open beyond SLA (30/60/90 days)
  • Scan coverage: Percentage of assets scanned. Gaps = blind spots.
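Two of these metrics, MTTR and aging count, can be computed from ticket timestamps. A sketch with invented sample data and a 30-day SLA:

```python
from datetime import date
from statistics import mean

# Invented ticket data: discovery date and fix date (None = still open)
tickets = [
    {"discovered": date(2024, 1, 1),  "fixed": date(2024, 1, 11)},
    {"discovered": date(2024, 1, 5),  "fixed": date(2024, 2, 24)},
    {"discovered": date(2024, 1, 10), "fixed": None},
]
today = date(2024, 3, 1)
sla_days = 30

# MTTR: average days from discovery to fix over closed tickets
closed = [t for t in tickets if t["fixed"]]
mttr = mean((t["fixed"] - t["discovered"]).days for t in closed)

# Aging: open tickets past the SLA window
aging = sum(1 for t in tickets
            if t["fixed"] is None and (today - t["discovered"]).days > sla_days)

print(f"MTTR: {mttr:.1f} days, aging beyond SLA: {aging}")
# MTTR: 30.0 days, aging beyond SLA: 1
```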

Reporting

  • Dashboard for security operations (real-time)
  • Executive summaries for leadership (monthly/quarterly)
  • Compliance reports for auditors (as required)

Offensive Context

Vulnerability management from the attacker’s perspective is a race. When a CVE is published and a patch is released, the clock starts. Attackers reverse-engineer patches to develop exploits (n-day attacks). The window between patch release and patch deployment is when most exploitation happens — not through zero-days, but through known, patched vulnerabilities that organizations haven’t applied yet. Your patch deployment speed is directly correlated with your attack surface.