OBJECTIVE 5.5

Explain types and purposes of audits and assessments

Audits and assessments verify that security controls are implemented, effective, and aligned with requirements. They’re how you prove your security posture — to yourself, to regulators, and to business partners.

Assessment Types

Vulnerability Assessment

Identifies known vulnerabilities in systems and applications.

  • Automated scanning tools (Nessus, Qualys, OpenVAS)
  • Produces a list of vulnerabilities with CVSS (Common Vulnerability Scoring System) severity scores
  • Does not exploit vulnerabilities — identifies them only
  • Regular cadence: quarterly at minimum, continuous for critical systems
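The CVSS ratings mentioned above follow a fixed qualitative scale. A minimal sketch of triaging scanner output by that scale (the finding names and the second CVE score are illustrative, not real scan data):

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS base scores run 0.0-10.0, got {score}")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Sort findings worst-first, the usual first pass over scan output.
# CVE-2021-44228 (Log4Shell) really is a 10.0; the second entry is made up.
findings = [("CVE-2021-44228", 10.0), ("CVE-0000-0000", 5.3)]
for cve, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(cve, cvss_severity(score))
```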

Penetration Testing

Authorized attempt to exploit vulnerabilities to demonstrate real-world impact.

  • Goes beyond vulnerability scanning — actually attempts to compromise systems
  • Demonstrates what an attacker could achieve, not just what’s theoretically vulnerable

Types:

  • Black box: Tester has no prior knowledge of the target. Simulates external attacker.
  • White box: Tester has full knowledge (source code, network diagrams, credentials). Most thorough.
  • Gray box: Tester has partial knowledge. Simulates insider or attacker with some reconnaissance.

Methodology:

  • Rules of engagement: Scope, timing, authorized targets, escalation procedures, emergency contacts
  • Reconnaissance → Scanning → Exploitation → Post-exploitation → Reporting
  • Written authorization (get-out-of-jail letter) required before any testing

Reconnaissance Types

CompTIA expects you to distinguish between:

  • Passive: Gathers information without directly interacting with the target. Detection risk: none — the target doesn’t know. Examples: OSINT (open-source intelligence), DNS records, WHOIS, CT logs, Shodan, social media, job postings.
  • Active: Directly interacts with target systems. Detection risk: detectable — may trigger alerts. Examples: port scanning, vulnerability scanning, banner grabbing, directory brute-forcing.

Exam distinction: Passive recon never touches the target. Active recon sends packets to the target. If the target’s IDS (intrusion detection system) could detect it, it’s active.
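Banner grabbing, listed above as active recon, makes the distinction concrete: it opens a real TCP connection, so it sends packets an IDS can see. A minimal sketch (the hostname in the comment is a placeholder, not a real authorized target):

```python
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Active recon: open a TCP connection and read the service banner.

    This sends packets to the target, so it is detectable by an IDS.
    Only run it against hosts you are explicitly authorized to test.
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        return s.recv(1024).decode(errors="replace").strip()

# e.g. grab_banner("target.example", 22) might return an SSH version string
```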

Rules of Engagement Detail

  • Scope boundaries: Which IPs, domains, applications are authorized targets. Which are explicitly excluded.
  • Time windows: Testing hours (business hours only, or 24/7). Blackout periods (quarter-end, holiday).
  • Permitted techniques: Can the tester use social engineering? Physical intrusion? DDoS (distributed denial of service) simulation?
  • Notification requirements: Who knows testing is happening? (Limited to avoid skewing results.)
  • Emergency procedures: What if the tester causes an outage? What if they find evidence of an active breach?
  • Data handling: How the tester stores findings, how data is transmitted, destruction after engagement.
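Scope boundaries are the item testers check most often mid-engagement. A sketch of an automated scope gate, with made-up ranges standing in for what a real rules-of-engagement document would list:

```python
from ipaddress import ip_address, ip_network

# Hypothetical scope pulled from a rules-of-engagement document.
IN_SCOPE = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/25")]
EXCLUDED = [ip_address("203.0.113.10")]  # e.g. the production database server

def is_authorized(target: str) -> bool:
    """Check a target IP against the authorized scope before testing it.

    Explicit exclusions win over scope membership, mirroring how RoE
    documents carve exceptions out of broader authorized ranges.
    """
    addr = ip_address(target)
    if addr in EXCLUDED:
        return False
    return any(addr in net for net in IN_SCOPE)
```

Wiring a check like this into tooling prevents the most common scope dispute: a scanner wandering onto an excluded host inside an otherwise authorized range.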

Threat Assessment

Evaluates potential threats specific to the organization.

  • Which threat actors are most likely to target you?
  • What are their capabilities and motivations?
  • What attack vectors would they use?

Physical Security Assessment

Evaluates physical controls: locks, cameras, access control, perimeter security.

  • May include physical penetration testing (tailgating, social engineering for physical access)

Audit Types

Internal Audit

Conducted by the organization’s own audit team.

  • Regular self-assessment of control effectiveness
  • Tests compliance with internal policies and external requirements
  • Identifies gaps before external auditors find them
  • Independence matters: Internal auditors should report to leadership independent of the teams being audited

External Audit

Conducted by independent third-party auditors.

  • Required for many compliance standards (SOC 2, PCI-DSS, ISO 27001)
  • Greater credibility because of independence
  • Results may be shared with regulators, customers, or business partners

Regulatory Audit

Conducted by or on behalf of a regulatory body.

  • HIPAA audits by the HHS Office for Civil Rights
  • PCI-DSS assessments by Qualified Security Assessors (QSAs)
  • Non-voluntary — regulators can mandate these

Assessment Frameworks

NIST SP 800-53

Comprehensive catalog of security controls for federal systems.

  • Controls organized by family (Access Control, Audit, Incident Response, etc.)
  • Risk-based selection — choose controls based on system categorization (low/moderate/high impact)

CIS Benchmarks

Prescriptive configuration guidelines for specific technologies.

  • Benchmark for Windows Server, Linux, AWS, Azure, Kubernetes, etc.
  • Two levels: Level 1 (practical, minimal impact) and Level 2 (defense-in-depth, may affect functionality)
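A benchmark item is essentially "this setting must have this value." A sketch of a Level 1-style check against an sshd_config fragment (the required values here are illustrative, not quoted from an actual CIS benchmark):

```python
# Hypothetical Level 1-style requirements for an SSH daemon.
REQUIRED = {"PermitRootLogin": "no", "PasswordAuthentication": "no"}

def audit_sshd(config_text: str) -> dict:
    """Return {setting: True/False} for each required value."""
    actual = {}
    for line in config_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if line:
            key, _, value = line.partition(" ")
            actual[key] = value.strip()
    return {k: actual.get(k) == v for k, v in REQUIRED.items()}

sample = "PermitRootLogin no\nPasswordAuthentication yes  # TODO\n"
print(audit_sshd(sample))  # PermitRootLogin passes, PasswordAuthentication fails
```

SCAP tooling (next section) automates exactly this pattern at scale, against machine-readable checklists instead of a hand-coded dict.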

SCAP (Security Content Automation Protocol)

Set of standards for automated vulnerability management and compliance checking.

  • XCCDF (Extensible Configuration Checklist Description Format): language for defining security checklists
  • OVAL (Open Vulnerability and Assessment Language): language for describing system configuration states
  • CVE (Common Vulnerabilities and Exposures): standard naming for vulnerabilities
  • CVSS (Common Vulnerability Scoring System): standard severity rating (0.0–10.0)
  • CPE (Common Platform Enumeration): standard naming for products and platforms
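The point of these standards is that identifiers are machine-parseable. CVE IDs, for instance, follow a fixed `CVE-<year>-<sequence>` format, which makes correlating scan output, advisories, and tickets a matter of pattern matching:

```python
import re

# CVE format: "CVE-", a 4-digit year, then a sequence number of 4+ digits.
CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,}")

def is_valid_cve(identifier: str) -> bool:
    """True if the string is exactly one well-formed CVE identifier."""
    return bool(CVE_PATTERN.fullmatch(identifier))

def extract_cves(text: str) -> list:
    """Pull CVE IDs out of free text (scan output, advisories, etc.)."""
    return CVE_PATTERN.findall(text)
```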

Bug Bounty Programs

Crowdsourced security testing where external researchers are paid for finding and reporting vulnerabilities.

How they work:

  • Organization publishes a scope (which systems, what vulnerability types, what’s excluded)
  • Researchers test within scope and submit findings through a platform (HackerOne, Bugcrowd)
  • Triage team validates findings, assigns severity, and determines payout
  • Researcher is paid based on severity (critical: $5K–$50K+, low: $100–$500 typical)

Disclosure policies:

  • Responsible disclosure: Researcher reports to vendor, vendor has a set timeframe (typically 90 days) to fix before public disclosure
  • Full disclosure: Immediate public disclosure (controversial, used when vendor is unresponsive)
  • Coordinated disclosure: Researcher, vendor, and sometimes a CERT (computer emergency response team) work together on timing
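The 90-day responsible-disclosure window is simple date arithmetic, which is worth doing in code rather than by hand because off-by-a-month mistakes around disclosure deadlines have real consequences:

```python
from datetime import date, timedelta

# Typical responsible-disclosure window; individual programs vary.
DISCLOSURE_WINDOW = timedelta(days=90)

def disclosure_deadline(reported: date) -> date:
    """Date after which the researcher may publish if no fix has shipped."""
    return reported + DISCLOSURE_WINDOW

print(disclosure_deadline(date(2024, 1, 15)))  # 2024-04-14
```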

Legal considerations:

  • Safe harbor clause protects researchers acting in good faith within scope
  • Without safe harbor, security research could technically violate CFAA (Computer Fraud and Abuse Act)
  • Clear scope prevents disputes about what was authorized

NIST 800-53 in Depth

Control Families (Key Ones to Know)

  • Access Control (AC): Authentication, authorization, account management
  • Audit & Accountability (AU): Log management, audit review, non-repudiation
  • Configuration Management (CM): Baseline configs, change control, least functionality
  • Identification & Authentication (IA): Identity proofing, MFA, credential management
  • Incident Response (IR): IR planning, detection, reporting, recovery
  • Risk Assessment (RA): Vulnerability scanning, risk determination
  • System & Communications Protection (SC): Encryption, boundary protection, secure channels
  • System & Information Integrity (SI): Malware protection, monitoring, patch management

Impact Baselines

Controls are selected based on FIPS 199 system categorization:

  • Low impact: Minimum controls. Loss would have limited adverse effect.
  • Moderate impact: Expanded controls. Loss would have serious adverse effect. Most common for government systems.
  • High impact: Maximum controls. Loss would have severe or catastrophic adverse effect.

Higher baseline = more controls from each family, plus additional enhancements (stronger audit, more frequent assessment, stricter access).
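In practice the overall categorization is the high-water mark of the confidentiality, integrity, and availability impact levels, and that single level selects the baseline. A minimal sketch of that selection rule:

```python
# Ordering of FIPS 199 impact levels, lowest to highest.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def system_baseline(confidentiality: str, integrity: str,
                    availability: str) -> str:
    """High-water mark: the overall baseline is the highest of the
    three per-objective impact levels."""
    return max((confidentiality, integrity, availability),
               key=LEVELS.__getitem__)

# One "moderate" objective pulls the whole system to the moderate baseline.
print(system_baseline("low", "moderate", "low"))  # moderate
```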

Attestation of Findings

Who Attests

  • External auditors: CPA firms for SOC reports, QSAs for PCI-DSS, certification bodies for ISO 27001
  • Internal auditors: For internal compliance reports (less credible externally)
  • Management: Self-attestation of compliance status (least credible)
  • Regulatory bodies: Government agencies attesting to framework compliance (FedRAMP ATO)

Limitations

  • Point-in-time: An attestation reflects the state at the time of the audit, not today. Controls could degrade after the audit.
  • Scope-limited: A SOC 2 covering “cloud hosting services” doesn’t attest to the vendor’s mobile app security.
  • Reliance on evidence: Auditors test samples, not everything. A clean audit doesn’t guarantee zero issues.
  • Professional judgment: Different auditors may reach different conclusions on the same evidence.

Point-in-Time vs. Continuous

  • Point-in-time: Traditional audits. Annual assessment produces a report valid for that moment.
  • Continuous: Automated monitoring that continuously validates control effectiveness. Increasingly expected for cloud/SaaS providers.
  • SOC 2 Type I = point-in-time. SOC 2 Type II = period-of-time (6-12 months, closer to continuous but still periodic).

Security Ratings

Third-party services that continuously assess an organization’s external security posture.

  • BitSight, SecurityScorecard — score organizations based on observable external data
  • Used for vendor risk assessment, board reporting, benchmarking
  • Limitation: Only sees what’s externally visible. A high score doesn’t mean internal security is strong.

Audit Evidence

Types of Evidence

  • Documentary: Policies, procedures, configuration files, architecture diagrams
  • Technical: Scan results, log files, system configurations, firewall rules
  • Testimonial: Interviews with personnel about processes and practices
  • Observational: Auditor directly observes processes being performed

Attestation

Formal statement by an authorized party that controls are in place and operating.

  • SOC 2 attestation: auditor attests that controls meet Trust Services Criteria
  • Self-attestation: organization certifies its own compliance (less credible)

Offensive Context

Penetration testing is offense in service of defense — the only sanctioned way to prove that your defenses actually work. Vulnerability scans find what’s theoretically exploitable; pen tests prove what’s practically exploitable. The gap between the two is where real risk lives. A CVSS 10.0 vulnerability that requires local access on a fully segmented, air-gapped system is lower practical risk than a CVSS 7.0 on your internet-facing VPN. Assessments that account for exploitability and context are more valuable than ones that sort by CVSS score alone.
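That prioritization argument can be sketched as a scoring function. The exposure multipliers below are illustrative, not a standard; the point is only that context-weighted risk reorders findings that raw CVSS would mis-rank:

```python
# Illustrative exposure weights -- not from any standard.
EXPOSURE = {"internet": 1.0, "internal": 0.6, "segmented": 0.3, "air-gapped": 0.1}

def practical_risk(cvss: float, exposure: str) -> float:
    """Weight a raw CVSS score by how reachable the asset actually is."""
    return cvss * EXPOSURE[exposure]

findings = [
    ("legacy-app RCE", 10.0, "air-gapped"),          # the paragraph's CVSS 10.0
    ("VPN gateway auth bypass", 7.0, "internet"),    # the paragraph's CVSS 7.0
]
ranked = sorted(findings, key=lambda f: practical_risk(f[1], f[2]), reverse=True)
print(ranked[0][0])  # "VPN gateway auth bypass" -- 7.0 internet-facing outranks 10.0 air-gapped
```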