# Incident Response and Forensics
Security incidents are inevitable. What separates resilient organizations from vulnerable ones is not whether they get breached, but how effectively they respond. This lesson covers the complete incident response lifecycle based on the NIST framework, digital forensics fundamentals, log analysis techniques, and practical Python scripts for detecting indicators of compromise (IOCs). By the end, you will be able to build incident response playbooks and analyze security events systematically.
## Learning Objectives
- Understand the NIST Incident Response lifecycle phases
- Build and maintain an incident response plan
- Analyze logs for indicators of compromise
- Understand digital forensics fundamentals and chain of custody
- Write Python scripts for log parsing and IOC detection
- Create incident response playbooks for common scenarios
- Conduct effective post-incident reviews
## 1. Incident Response Overview
### 1.1 What is a Security Incident?
A security incident is any event that compromises the confidentiality, integrity, or availability of information or systems. Not every security event is an incident -- triage determines severity.
Security events occur by the millions per day: firewall blocks, failed login attempts, port scans, quarantined malware detections, IDS/IPS alerts. Triage and correlation reduce these to a handful of genuine security incidents per month: successful unauthorized access, data breach or exfiltration, active malware infection, denial of service, insider threat activity, ransomware deployment. Each confirmed incident is then classified by severity:

| Severity | Description |
|---|---|
| Critical (P1) | Active data breach, ransomware, system-wide compromise. Immediate response required. |
| High (P2) | Confirmed intrusion, single system compromised, active malware. Respond within hours. |
| Medium (P3) | Suspicious activity, policy violation, vulnerability actively exploited. Respond within days. |
| Low (P4) | Minor policy violation, unsuccessful attack attempt, informational. Respond within weeks. |
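The P1-P4 tiers lend themselves to a small triage helper. Below is a minimal sketch; the incident-type keys and their default mappings are illustrative, not part of any standard, and real triage would also weigh scope and business impact:

```python
from enum import Enum

class Severity(Enum):
    """P1-P4 severity tiers with target response windows."""
    CRITICAL = ("P1", "immediate")
    HIGH = ("P2", "hours")
    MEDIUM = ("P3", "days")
    LOW = ("P4", "weeks")

# Illustrative defaults only -- a real playbook would refine these
# per organization and per asset criticality.
DEFAULT_SEVERITY = {
    "ransomware": Severity.CRITICAL,
    "data_breach": Severity.CRITICAL,
    "active_malware": Severity.HIGH,
    "policy_violation": Severity.MEDIUM,
    "failed_attack_attempt": Severity.LOW,
}

def classify(incident_type: str) -> Severity:
    """Return the default severity tier for an incident type.

    Unknown types fall back to MEDIUM so they still get human review.
    """
    return DEFAULT_SEVERITY.get(incident_type, Severity.MEDIUM)
```

Defaulting unknown types to Medium rather than Low is deliberate: an event you cannot classify should err toward more attention, not less.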
### 1.2 Common Incident Types
| Incident Type | Examples | Typical Indicators |
|---|---|---|
| Malware | Ransomware, trojan, worm | Unusual processes, file encryption, C2 traffic |
| Unauthorized Access | Compromised credentials, brute force | Failed logins, off-hours access, unusual geolocations |
| Data Breach | Exfiltration, accidental exposure | Large data transfers, DB dumps, unusual queries |
| DoS/DDoS | Volumetric, application-layer | Traffic spikes, service degradation, CPU exhaustion |
| Insider Threat | Data theft, sabotage | Excessive access, large downloads, policy violations |
| Web Application | SQLi, XSS, RCE | Suspicious request patterns, WAF alerts, error spikes |
| Supply Chain | Compromised dependency, update | Unexpected package changes, suspicious build artifacts |
| Phishing | Credential harvesting, BEC | Reported emails, unusual login locations, wire transfer requests |
## 2. NIST Incident Response Lifecycle
### 2.1 The Four Phases
The NIST Computer Security Incident Handling Guide (SP 800-61) defines four main phases. In practice, these phases overlap and cycle.
1. **Preparation** -- IR plan written, team trained, tools ready, communication channels established.
2. **Detection & Analysis** -- monitor alerts, triage events, determine scope, classify severity.
3. **Containment, Eradication & Recovery** -- isolate affected systems, remove the threat, restore systems, validate recovery. Responders often cycle between these sub-phases as new findings emerge.
4. **Post-Incident Activity** -- lessons learned, report writing, process improvement. Lessons feed back into Preparation, closing the loop.
### 2.2 Phase 1: Preparation
Preparation is the most critical phase. Without it, everything else is improvisation under pressure.
**People**

- [x] IR team identified with clear roles
- [x] Contact list (team, management, legal, PR, vendors)
- [x] On-call rotation schedule
- [x] Regular training and tabletop exercises
- [x] External IR retainer (optional)

**Process**

- [x] Written IR plan approved by management
- [x] Playbooks for common incident types
- [x] Escalation procedures
- [x] Communication templates (internal, external, legal)
- [x] Evidence handling procedures
- [x] Regulatory notification requirements documented

**Technology**

- [x] Logging infrastructure (centralized, retained)
- [x] SIEM or log analysis tools
- [x] Forensic workstation / toolkit
- [x] Network monitoring / IDS
- [x] Endpoint detection and response (EDR)
- [x] Backup and recovery systems tested
- [x] Clean OS / golden images
- [x] Jump bag: portable forensic tools, cables, storage
#### IR Team Roles
| Role | Responsibilities |
|---|---|
| Incident Manager (Team Lead) | Overall coordination, decisions, communication with management |
| Security Analyst | Monitoring, triage, analysis |
| Forensic Analyst | Evidence collection and analysis |
| System Admin | Containment, recovery, patching |
| Comms / Legal | Notification, documentation, regulatory compliance |
### 2.3 Phase 2: Detection and Analysis
#### Detection Sources
**Automated**

- SIEM alerts (correlated log events)
- IDS/IPS (Snort, Suricata)
- EDR alerts (CrowdStrike, Carbon Black, etc.)
- WAF alerts (ModSecurity, AWS WAF)
- Antivirus / anti-malware
- File integrity monitoring (OSSEC, Tripwire)
- Network traffic anomalies (NetFlow, Zeek)
- Cloud security alerts (GuardDuty, Security Center)

**Human**

- User reports ("something looks wrong")
- Help desk tickets
- External notification (partner, vendor, researcher)
- Law enforcement notification
- Media reports
- Threat intelligence feeds

**Proactive**

- Threat hunting
- Penetration testing results
- Vulnerability scanning
- Log review / audit
#### Initial Analysis Questions
When an alert fires, systematically answer these questions:
1. **WHAT happened?**
   - What systems/data are affected?
   - What type of incident is this?
   - What is the initial evidence?
2. **WHEN did it happen?**
   - When was the first indicator?
   - When was it detected?
   - Is it ongoing?
3. **WHERE is the impact?**
   - Which hosts/networks?
   - Which applications/services?
   - Which data/users?
4. **WHO is involved?**
   - Source IPs/accounts?
   - Targeted users/systems?
   - Internal or external actor?
5. **HOW did it happen?**
   - Attack vector (phishing, exploit, insider, etc.)?
   - Vulnerability exploited?
   - Tools/techniques used?
6. **HOW BAD is it?**
   - Scope: how many systems affected?
   - Impact: data loss, service disruption?
   - Severity classification (P1-P4)?
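These questions map naturally onto a structured triage record, which forces responders to notice what is still unknown. A minimal sketch -- the field names are illustrative, not from any standard incident schema:

```python
from dataclasses import dataclass, field

@dataclass
class InitialAssessment:
    """Answers to the initial analysis questions (field names illustrative)."""
    what: str = ""          # incident type and affected systems/data
    when_first: str = ""    # first indicator (ISO timestamp)
    when_detected: str = "" # detection time (ISO timestamp)
    where: list[str] = field(default_factory=list)  # hosts, networks, apps
    who: list[str] = field(default_factory=list)    # source IPs/accounts
    how: str = ""           # attack vector
    severity: str = "P3"    # P1-P4 classification
    ongoing: bool = False

    def is_complete(self) -> bool:
        """True once every question has at least a preliminary answer."""
        return all([self.what, self.when_first, self.when_detected,
                    self.where, self.who, self.how])
```

An `is_complete()` check like this is a useful gate before escalating: an assessment with empty fields tells the incident manager exactly which questions still need answers.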
### 2.4 Phase 3: Containment, Eradication, and Recovery
**Containment (stop the bleeding)**

Short-term:

- Isolate affected systems (network)
- Block malicious IPs/domains
- Disable compromised accounts
- Redirect DNS if needed
- Preserve evidence before making changes

Long-term:

- Apply temporary patches/workarounds
- Increase monitoring on the affected area
- Implement additional access controls
- Set up a honeypot/canary if appropriate

**Eradication (remove the threat)**

- Remove malware / backdoors
- Close the vulnerability that was exploited
- Reset compromised credentials
- Rebuild affected systems if needed
- Update firewall/IDS rules
- Scan for persistence mechanisms

**Recovery (return to normal)**

- Restore from clean backups
- Rebuild systems from golden images
- Gradually restore services
- Verify system integrity
- Monitor closely for re-compromise
- Declare the incident resolved

**Important:** document every action with timestamps.
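That final reminder -- document every action with timestamps -- is easy to automate so it never depends on responder discipline under pressure. A minimal sketch of an append-only action log (the field names are illustrative):

```python
from datetime import datetime, timezone

class ActionLog:
    """Append-only record of responder actions during an incident.

    A deliberately minimal sketch: real tooling would also persist
    entries to tamper-evident storage for the incident report.
    """
    def __init__(self):
        self.entries: list[dict] = []

    def record(self, actor: str, action: str) -> dict:
        """Record one action with a UTC timestamp and return the entry."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
        }
        self.entries.append(entry)
        return entry
```

Using UTC throughout avoids timezone confusion when the timeline is reconstructed across teams during the post-incident review.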
### 2.5 Phase 4: Post-Incident Activity (Lessons Learned)
Schedule the review within 1-2 weeks of incident closure and invite all involved parties; keep the environment blameless.

1. **Timeline review (30 min)** -- walk through the complete timeline: when was the first indicator, when was it detected, when was it resolved?
2. **What went well (15 min)** -- effective detection mechanisms, quick containment actions, good communication.
3. **What could be improved (30 min)** -- detection gaps, response delays, communication breakdowns, tool/process gaps.
4. **Root cause analysis (30 min)** -- what was the root cause? Why did existing controls fail? Use the "5 Whys" technique.
5. **Action items (15 min)** -- specific improvements with owners and deadlines: process changes, technology changes, training needs.
## 3. Log Analysis and SIEM Concepts
### 3.1 Logging Architecture
Log sources (web servers, app servers, database servers, firewalls/IDS, endpoint agents) feed a collection tier: log shippers such as Filebeat, Fluentd, or rsyslog, optionally buffered through a message queue like Kafka or Redis. The collection tier delivers events to a central SIEM or log-management platform (Elasticsearch, Splunk, Graylog, Wazuh, QRadar), which provides search, correlation, alerting, dashboards, and retention.
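On the application side, the "log shipper" hop can be as simple as pointing Python's standard `logging` module at the collector. A minimal sketch using the stdlib `SysLogHandler`; the collector hostname below is a placeholder for your own rsyslog/Fluentd endpoint:

```python
import logging
import logging.handlers

def get_shipped_logger(name: str,
                       collector: tuple = ("logs.example.internal", 514)):
    """Return a logger that forwards events to a central syslog collector.

    The default collector address is an assumption for illustration --
    replace it with your environment's log-collection endpoint.
    """
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    # UDP syslog by default; TCP or TLS transports are preferable in
    # production so security-relevant events are not silently dropped.
    handler = logging.handlers.SysLogHandler(address=collector)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```

Centralizing logs this way also preserves them if the originating host is compromised, since an attacker who wipes local logs cannot retract copies already shipped off-box.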
### 3.2 Critical Log Sources
| Log Source | What to Capture | Security Value |
|---|---|---|
| Web server (nginx, Apache) | Access logs, error logs | Attack detection, anomaly detection |
| Application | Auth events, errors, API calls | Business logic attacks, abuse |
| Database | Queries, connections, errors | SQL injection, data exfiltration |
| OS / System | Auth logs, process exec, file changes | Privilege escalation, persistence |
| Firewall | Allow/deny, connections | Network attacks, lateral movement |
| DNS | Queries, responses | C2 communication, data exfiltration |
| Email | Send/receive, attachments | Phishing, data exfiltration |
| Cloud | API calls, config changes | Misconfiguration, unauthorized access |
### 3.3 SIEM Correlation Rules
**Rule 1: Brute Force Detection**

- IF: >10 failed logins from the same IP in 5 minutes
- THEN: alert "Possible brute force attack"
- Severity: Medium. Action: block the IP temporarily, notify the SOC.

**Rule 2: Impossible Travel**

- IF: the same user logs in from two geolocations more than 500 miles apart within 30 minutes
- THEN: alert "Impossible travel detected"
- Severity: High. Action: force re-authentication, notify the user.

**Rule 3: Data Exfiltration**

- IF: >100 MB transferred to an external IP from a server that normally sends <1 MB/hour
- THEN: alert "Possible data exfiltration"
- Severity: Critical. Action: block the transfer, isolate the host, alert IR.

**Rule 4: Privilege Escalation**

- IF: a user is added to an admin/root group AND the change did not come from the approved change system
- THEN: alert "Unauthorized privilege escalation"
- Severity: Critical. Action: revert the change, disable the account, alert IR.

**Rule 5: Web Application Attack**

- IF: >5 WAF blocks from the same IP in 1 minute AND HTTP 500 errors increase for the same app
- THEN: alert "Active web application attack"
- Severity: High. Action: block the IP, increase logging, alert IR.
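Rule 2 (impossible travel) is straightforward to express in code. A minimal sketch using the haversine great-circle distance; the login-record dict shape is an assumption for illustration, not a SIEM API:

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in miles between two coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 3958.8 * 2 * asin(sqrt(a))  # Earth radius ~3958.8 mi

def impossible_travel(login_a: dict, login_b: dict,
                      max_miles: float = 500,
                      window_min: float = 30) -> bool:
    """Rule 2 as code: two logins >500 miles apart within 30 minutes.

    Each login dict is assumed to hold 'time' (datetime), 'lat', 'lon'.
    """
    minutes = abs((login_b["time"] - login_a["time"]).total_seconds()) / 60
    miles = haversine_miles(login_a["lat"], login_a["lon"],
                            login_b["lat"], login_b["lon"])
    return minutes <= window_min and miles > max_miles
```

In practice this rule needs allowances for VPN exit nodes and coarse IP geolocation, which is why the action is to force re-authentication rather than lock the account outright.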
## 4. Python: Log Parsing and Analysis
### 4.1 Web Server Log Parser
"""
log_parser.py - Parse and analyze web server access logs.
Supports Apache/Nginx combined log format.
Example log line:
192.168.1.100 - admin [15/Jan/2025:10:30:45 +0000] "GET /admin HTTP/1.1" 200 1234 "http://example.com" "Mozilla/5.0"
"""
import re
from collections import Counter, defaultdict
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
from typing import Optional, Iterator
# Combined log format regex
LOG_PATTERN = re.compile(
r'(?P<ip>\S+)\s+' # IP address
r'\S+\s+' # ident (usually -)
r'(?P<user>\S+)\s+' # authenticated user
r'\[(?P<time>[^\]]+)\]\s+' # timestamp
r'"(?P<method>\S+)\s+' # HTTP method
r'(?P<path>\S+)\s+' # request path
r'(?P<protocol>\S+)"\s+' # HTTP version
r'(?P<status>\d+)\s+' # status code
r'(?P<size>\S+)\s+' # response size
r'"(?P<referer>[^"]*)"\s+' # referer
r'"(?P<agent>[^"]*)"' # user agent
)
@dataclass
class LogEntry:
"""Parsed log entry."""
ip: str
user: str
timestamp: datetime
method: str
path: str
protocol: str
status: int
size: int
referer: str
user_agent: str
raw: str = ""
@dataclass
class SecurityAlert:
"""A security alert generated from log analysis."""
alert_type: str
severity: str # CRITICAL, HIGH, MEDIUM, LOW
description: str
source_ip: Optional[str] = None
count: int = 0
sample_entries: list = field(default_factory=list)
timestamp: str = ""
def __str__(self):
return (f"[{self.severity}] {self.alert_type}: {self.description} "
f"(IP: {self.source_ip}, count: {self.count})")
def parse_log_line(line: str) -> Optional[LogEntry]:
"""Parse a single log line into a LogEntry."""
match = LOG_PATTERN.match(line.strip())
if not match:
return None
data = match.groupdict()
# Parse timestamp
try:
ts = datetime.strptime(data['time'], '%d/%b/%Y:%H:%M:%S %z')
except ValueError:
ts = datetime.now()
# Parse size (may be '-')
try:
size = int(data['size'])
except ValueError:
size = 0
return LogEntry(
ip=data['ip'],
user=data['user'],
timestamp=ts,
method=data['method'],
path=data['path'],
protocol=data['protocol'],
status=int(data['status']),
size=size,
referer=data['referer'],
user_agent=data['agent'],
raw=line.strip()
)
def parse_log_file(filepath: str) -> Iterator[LogEntry]:
"""Parse all entries from a log file."""
path = Path(filepath)
if not path.exists():
raise FileNotFoundError(f"Log file not found: {filepath}")
with path.open('r', errors='ignore') as f:
for line in f:
entry = parse_log_line(line)
if entry:
yield entry
class LogAnalyzer:
"""Analyze parsed log entries for security indicators."""
def __init__(self):
self.entries: list[LogEntry] = []
self.alerts: list[SecurityAlert] = []
def load(self, filepath: str) -> int:
"""Load and parse a log file. Returns entry count."""
self.entries = list(parse_log_file(filepath))
return len(self.entries)
def analyze_all(self) -> list[SecurityAlert]:
"""Run all analysis rules."""
self.alerts = []
self.detect_brute_force()
self.detect_directory_traversal()
self.detect_sql_injection()
self.detect_scanner_activity()
self.detect_error_spikes()
self.detect_suspicious_user_agents()
self.detect_admin_access()
return self.alerts
def detect_brute_force(self, threshold: int = 10,
window_minutes: int = 5) -> None:
"""Detect brute force login attempts."""
# Group 401/403 responses by IP
failed_logins = defaultdict(list)
for entry in self.entries:
if entry.status in (401, 403):
failed_logins[entry.ip].append(entry)
for ip, entries in failed_logins.items():
if len(entries) >= threshold:
# Check if they occur within the time window
entries.sort(key=lambda e: e.timestamp)
for i in range(len(entries) - threshold + 1):
window = entries[i:i + threshold]
time_diff = (window[-1].timestamp -
window[0].timestamp).total_seconds()
if time_diff <= window_minutes * 60:
self.alerts.append(SecurityAlert(
alert_type="BRUTE_FORCE",
severity="HIGH",
description=(
f"{len(entries)} failed auth attempts from {ip} "
f"({threshold}+ within {window_minutes} min)"
),
source_ip=ip,
count=len(entries),
sample_entries=[e.raw for e in entries[:3]],
))
break # One alert per IP
def detect_directory_traversal(self) -> None:
"""Detect path traversal attempts."""
traversal_patterns = [
'../', '..\\', '%2e%2e', '%252e%252e',
'/etc/passwd', '/etc/shadow', '/windows/system32',
'boot.ini', 'web.config',
]
traversal_attempts = defaultdict(list)
for entry in self.entries:
path_lower = entry.path.lower()
for pattern in traversal_patterns:
if pattern in path_lower:
traversal_attempts[entry.ip].append(entry)
break
for ip, entries in traversal_attempts.items():
self.alerts.append(SecurityAlert(
alert_type="DIRECTORY_TRAVERSAL",
severity="HIGH",
description=(
f"Path traversal attempt from {ip}: "
f"{entries[0].path}"
),
source_ip=ip,
count=len(entries),
sample_entries=[e.raw for e in entries[:3]],
))
def detect_sql_injection(self) -> None:
"""Detect SQL injection attempts in request paths."""
sqli_patterns = [
"' or ", "' and ", "union select", "order by",
"1=1", "' --", "'; drop", "sleep(",
"benchmark(", "waitfor delay", "pg_sleep",
"%27", "char(", "concat(",
]
sqli_attempts = defaultdict(list)
for entry in self.entries:
path_lower = entry.path.lower()
for pattern in sqli_patterns:
if pattern in path_lower:
sqli_attempts[entry.ip].append(entry)
break
for ip, entries in sqli_attempts.items():
self.alerts.append(SecurityAlert(
alert_type="SQL_INJECTION",
severity="CRITICAL",
description=(
f"SQL injection attempt from {ip}: "
f"{entries[0].path[:100]}"
),
source_ip=ip,
count=len(entries),
sample_entries=[e.raw for e in entries[:3]],
))
def detect_scanner_activity(self) -> None:
"""Detect automated vulnerability scanner activity."""
scanner_paths = [
'/.env', '/wp-admin', '/wp-login.php',
'/phpmyadmin', '/admin', '/administrator',
'/.git/config', '/.svn/entries',
'/robots.txt', '/sitemap.xml',
'/backup', '/database', '/db',
'/server-status', '/server-info',
'/.htaccess', '/web.config',
'/xmlrpc.php', '/api/v1',
]
ip_scanner_hits = defaultdict(set)
for entry in self.entries:
for scanner_path in scanner_paths:
if entry.path.lower().startswith(scanner_path):
ip_scanner_hits[entry.ip].add(entry.path)
for ip, paths in ip_scanner_hits.items():
if len(paths) >= 5: # Hit 5+ scanner paths
self.alerts.append(SecurityAlert(
alert_type="VULNERABILITY_SCANNER",
severity="MEDIUM",
description=(
f"Scanner activity from {ip}: "
f"probed {len(paths)} common paths"
),
source_ip=ip,
count=len(paths),
sample_entries=list(paths)[:5],
))
def detect_error_spikes(self, threshold: int = 50) -> None:
"""Detect unusual spikes in error responses."""
# Count 5xx errors per IP
error_counts = Counter()
for entry in self.entries:
if 500 <= entry.status < 600:
error_counts[entry.ip] += 1
for ip, count in error_counts.most_common():
if count >= threshold:
self.alerts.append(SecurityAlert(
alert_type="ERROR_SPIKE",
severity="MEDIUM",
description=(
f"High error rate from {ip}: "
f"{count} server errors (5xx)"
),
source_ip=ip,
count=count,
))
def detect_suspicious_user_agents(self) -> None:
"""Detect requests with suspicious user agents."""
suspicious_agents = [
'sqlmap', 'nikto', 'nmap', 'masscan',
'dirbuster', 'gobuster', 'wfuzz',
'burpsuite', 'acunetix', 'nessus',
'python-requests', # May be legitimate, but flag
'curl/', # May be legitimate, but flag
]
for entry in self.entries:
agent_lower = entry.user_agent.lower()
for sus_agent in suspicious_agents:
if sus_agent in agent_lower:
self.alerts.append(SecurityAlert(
alert_type="SUSPICIOUS_USER_AGENT",
severity="MEDIUM",
description=(
f"Suspicious user agent from {entry.ip}: "
f"{entry.user_agent[:80]}"
),
source_ip=entry.ip,
count=1,
))
break # One alert per entry
def detect_admin_access(self) -> None:
"""Detect access to administrative endpoints."""
admin_paths = ['/admin', '/dashboard', '/manage', '/api/admin']
admin_access = defaultdict(list)
for entry in self.entries:
for admin_path in admin_paths:
if entry.path.lower().startswith(admin_path):
if entry.status == 200:
admin_access[entry.ip].append(entry)
for ip, entries in admin_access.items():
self.alerts.append(SecurityAlert(
alert_type="ADMIN_ACCESS",
severity="LOW",
description=(
f"Successful admin access from {ip}: "
f"{entries[0].path}"
),
source_ip=ip,
count=len(entries),
))
def print_report(self) -> None:
"""Print a formatted analysis report."""
print("=" * 65)
print(" LOG ANALYSIS SECURITY REPORT")
print("=" * 65)
print(f" Total entries analyzed: {len(self.entries)}")
print(f" Total alerts: {len(self.alerts)}")
severity_order = ['CRITICAL', 'HIGH', 'MEDIUM', 'LOW']
for severity in severity_order:
alerts = [a for a in self.alerts if a.severity == severity]
if not alerts:
continue
print(f"\n{'β' * 65}")
print(f" {severity} ({len(alerts)})")
print(f"{'β' * 65}")
for alert in alerts:
print(f"\n [{alert.alert_type}]")
print(f" {alert.description}")
if alert.sample_entries:
print(f" Samples:")
for sample in alert.sample_entries[:2]:
print(f" {str(sample)[:100]}")
# Summary statistics
print(f"\n{'β' * 65}")
print(" TOP SOURCE IPs")
print(f"{'β' * 65}")
ip_alert_count = Counter()
for alert in self.alerts:
if alert.source_ip:
ip_alert_count[alert.source_ip] += 1
for ip, count in ip_alert_count.most_common(10):
print(f" {ip:20s} {count} alerts")
print(f"\n{'=' * 65}")
if __name__ == "__main__":
import sys
if len(sys.argv) < 2:
print("Usage: python log_parser.py <access.log>")
sys.exit(1)
analyzer = LogAnalyzer()
count = analyzer.load(sys.argv[1])
print(f"[*] Loaded {count} log entries")
alerts = analyzer.analyze_all()
analyzer.print_report()
### 4.2 IOC Detection Script
"""
ioc_detector.py - Indicator of Compromise (IOC) detection.
Checks files, network connections, and system state for known IOCs.
"""
import hashlib
import json
import os
import re
import socket
import subprocess
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
from typing import Optional
@dataclass
class IOC:
"""An Indicator of Compromise."""
ioc_type: str # IP, DOMAIN, HASH_MD5, HASH_SHA256, FILENAME, REGEX
value: str
description: str = ""
source: str = "" # Where this IOC came from
severity: str = "MEDIUM"
@dataclass
class IOCMatch:
"""A match found during scanning."""
ioc: IOC
location: str # Where the match was found
context: str = "" # Additional context
timestamp: str = ""
def __post_init__(self):
if not self.timestamp:
self.timestamp = datetime.now().isoformat()
class IOCDatabase:
"""
Simple IOC database.
In production, use STIX/TAXII or a proper threat intelligence platform.
"""
def __init__(self):
self.iocs: list[IOC] = []
def load_from_json(self, filepath: str) -> int:
"""Load IOCs from a JSON file."""
with open(filepath) as f:
data = json.load(f)
for item in data.get('iocs', []):
self.iocs.append(IOC(
ioc_type=item['type'],
value=item['value'],
description=item.get('description', ''),
source=item.get('source', ''),
severity=item.get('severity', 'MEDIUM'),
))
return len(self.iocs)
def load_sample_iocs(self) -> None:
"""Load sample IOCs for demonstration."""
# These are FAKE IOCs for educational purposes
sample_iocs = [
IOC("IP", "198.51.100.1", "Known C2 server", "sample", "HIGH"),
IOC("IP", "203.0.113.50", "Phishing infrastructure", "sample", "HIGH"),
IOC("DOMAIN", "evil-malware.example.com", "Malware C2", "sample", "CRITICAL"),
IOC("DOMAIN", "phish.example.net", "Phishing domain", "sample", "HIGH"),
IOC("HASH_SHA256",
"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"Known malware hash (empty file SHA256)", "sample", "MEDIUM"),
IOC("FILENAME", "mimikatz.exe", "Credential dumping tool", "sample", "CRITICAL"),
IOC("FILENAME", "nc.exe", "Netcat - possible backdoor", "sample", "HIGH"),
IOC("REGEX", r"(?:eval|exec)\s*\(\s*base64", "Obfuscated code execution",
"sample", "HIGH"),
]
self.iocs.extend(sample_iocs)
def get_by_type(self, ioc_type: str) -> list[IOC]:
"""Get all IOCs of a specific type."""
return [ioc for ioc in self.iocs if ioc.ioc_type == ioc_type]
class IOCScanner:
"""Scan system for Indicators of Compromise."""
def __init__(self, ioc_db: IOCDatabase):
self.ioc_db = ioc_db
self.matches: list[IOCMatch] = []
def scan_all(self, scan_dir: Optional[str] = None) -> list[IOCMatch]:
"""Run all IOC scans."""
self.matches = []
print("[*] Starting IOC scan...")
self.scan_file_hashes(scan_dir or "/tmp")
self.scan_file_names(scan_dir or "/tmp")
self.scan_network_connections()
self.scan_dns_cache()
self.scan_file_contents(scan_dir or "/tmp")
return self.matches
def scan_file_hashes(self, directory: str) -> None:
"""Check file hashes against known malware hashes."""
print(f"[*] Scanning file hashes in {directory}...")
hash_iocs = {
ioc.value.lower(): ioc
for ioc in self.ioc_db.get_by_type("HASH_SHA256")
}
md5_iocs = {
ioc.value.lower(): ioc
for ioc in self.ioc_db.get_by_type("HASH_MD5")
}
if not hash_iocs and not md5_iocs:
return
scan_path = Path(directory)
for filepath in scan_path.rglob("*"):
if not filepath.is_file():
continue
# Skip very large files (>100MB)
try:
if filepath.stat().st_size > 100 * 1024 * 1024:
continue
except OSError:
continue
try:
content = filepath.read_bytes()
sha256 = hashlib.sha256(content).hexdigest().lower()
md5 = hashlib.md5(content).hexdigest().lower()
if sha256 in hash_iocs:
self.matches.append(IOCMatch(
ioc=hash_iocs[sha256],
location=str(filepath),
context=f"SHA256: {sha256}",
))
if md5 in md5_iocs:
self.matches.append(IOCMatch(
ioc=md5_iocs[md5],
location=str(filepath),
context=f"MD5: {md5}",
))
except (PermissionError, OSError):
pass
def scan_file_names(self, directory: str) -> None:
"""Check for files with known malicious names."""
print(f"[*] Scanning file names in {directory}...")
name_iocs = {
ioc.value.lower(): ioc
for ioc in self.ioc_db.get_by_type("FILENAME")
}
if not name_iocs:
return
scan_path = Path(directory)
for filepath in scan_path.rglob("*"):
if filepath.name.lower() in name_iocs:
self.matches.append(IOCMatch(
ioc=name_iocs[filepath.name.lower()],
location=str(filepath),
context=f"Filename match: {filepath.name}",
))
def scan_network_connections(self) -> None:
"""Check active network connections against known bad IPs."""
print("[*] Scanning network connections...")
ip_iocs = {
ioc.value: ioc
for ioc in self.ioc_db.get_by_type("IP")
}
if not ip_iocs:
return
try:
# Use netstat or ss to get connections
result = subprocess.run(
["netstat", "-an"],
capture_output=True, text=True, timeout=10
)
for line in result.stdout.splitlines():
for bad_ip in ip_iocs:
if bad_ip in line:
self.matches.append(IOCMatch(
ioc=ip_iocs[bad_ip],
location="Active network connection",
context=line.strip(),
))
except (FileNotFoundError, subprocess.TimeoutExpired):
# Try ss as fallback
try:
result = subprocess.run(
["ss", "-an"],
capture_output=True, text=True, timeout=10
)
for line in result.stdout.splitlines():
for bad_ip in ip_iocs:
if bad_ip in line:
self.matches.append(IOCMatch(
ioc=ip_iocs[bad_ip],
location="Active network connection",
context=line.strip(),
))
except (FileNotFoundError, subprocess.TimeoutExpired):
print(" [!] Could not check network connections")
def scan_dns_cache(self) -> None:
"""Check DNS resolutions for known malicious domains."""
print("[*] Checking known malicious domains...")
domain_iocs = self.ioc_db.get_by_type("DOMAIN")
for ioc in domain_iocs:
try:
                # NOTE: getaddrinfo performs a live DNS lookup (it does not
                # read a local cache), so this scan itself generates DNS
                # traffic; a resolving domain only shows the IOC is live
result = socket.getaddrinfo(
ioc.value, None, socket.AF_INET,
socket.SOCK_STREAM
)
if result:
ip = result[0][4][0]
self.matches.append(IOCMatch(
ioc=ioc,
location="DNS resolution",
context=f"{ioc.value} resolves to {ip}",
))
except (socket.gaierror, socket.timeout, OSError):
pass # Domain doesn't resolve - good
def scan_file_contents(self, directory: str,
extensions: tuple = ('.py', '.js', '.sh', '.php',
'.rb', '.pl', '.ps1')) -> None:
"""Scan file contents for IOC patterns (regex)."""
print(f"[*] Scanning file contents in {directory}...")
regex_iocs = self.ioc_db.get_by_type("REGEX")
if not regex_iocs:
return
compiled_patterns = []
for ioc in regex_iocs:
try:
compiled_patterns.append((re.compile(ioc.value, re.IGNORECASE), ioc))
except re.error:
print(f" [!] Invalid regex pattern: {ioc.value}")
scan_path = Path(directory)
for filepath in scan_path.rglob("*"):
if not filepath.is_file():
continue
if filepath.suffix.lower() not in extensions:
continue
try:
if filepath.stat().st_size > 10 * 1024 * 1024: # Skip >10MB
continue
content = filepath.read_text(errors='ignore')
for pattern, ioc in compiled_patterns:
matches = pattern.findall(content)
if matches:
self.matches.append(IOCMatch(
ioc=ioc,
location=str(filepath),
context=f"Pattern '{ioc.value}' found {len(matches)} times",
))
except (PermissionError, OSError):
pass
def print_report(self) -> None:
"""Print IOC scan results."""
print("\n" + "=" * 65)
print(" IOC SCAN REPORT")
print("=" * 65)
print(f" IOCs in database: {len(self.ioc_db.iocs)}")
print(f" Matches found: {len(self.matches)}")
if not self.matches:
print("\n No IOC matches found. System appears clean.")
print("=" * 65)
return
for severity in ['CRITICAL', 'HIGH', 'MEDIUM', 'LOW']:
matches = [m for m in self.matches if m.ioc.severity == severity]
if not matches:
continue
print(f"\n{'β' * 65}")
print(f" {severity} ({len(matches)} matches)")
print(f"{'β' * 65}")
for match in matches:
print(f"\n Type: {match.ioc.ioc_type}")
print(f" IOC: {match.ioc.value}")
print(f" Desc: {match.ioc.description}")
print(f" Found: {match.location}")
print(f" Context: {match.context}")
print(f"\n{'=' * 65}")
print(" RECOMMENDED ACTIONS:")
critical = [m for m in self.matches if m.ioc.severity == 'CRITICAL']
if critical:
print(" [!] CRITICAL matches found - initiate incident response")
print(" [!] Isolate affected system immediately")
elif self.matches:
print(" [*] Review matches and determine if they are true positives")
print(" [*] Escalate confirmed matches to security team")
print("=" * 65)
# βββ Example IOC JSON format βββ
SAMPLE_IOC_JSON = """
{
"iocs": [
{
"type": "IP",
"value": "198.51.100.1",
"description": "Known C2 server for BotnetX",
"source": "ThreatIntel Feed Alpha",
"severity": "HIGH"
},
{
"type": "DOMAIN",
"value": "evil-malware.example.com",
"description": "Malware distribution domain",
"source": "OSINT",
"severity": "CRITICAL"
},
{
"type": "HASH_SHA256",
"value": "a1b2c3d4e5f6...",
"description": "Ransomware binary",
"source": "VirusTotal",
"severity": "CRITICAL"
},
{
"type": "FILENAME",
"value": "mimikatz.exe",
"description": "Credential dumping tool",
"source": "MITRE ATT&CK",
"severity": "CRITICAL"
},
{
"type": "REGEX",
"value": "(?:eval|exec)\\\\s*\\\\(\\\\s*base64",
"description": "Obfuscated code execution pattern",
"source": "Custom rule",
"severity": "HIGH"
}
]
}
"""
if __name__ == "__main__":
import sys
# Initialize IOC database
ioc_db = IOCDatabase()
if len(sys.argv) > 1 and sys.argv[1].endswith('.json'):
count = ioc_db.load_from_json(sys.argv[1])
print(f"[*] Loaded {count} IOCs from {sys.argv[1]}")
scan_dir = sys.argv[2] if len(sys.argv) > 2 else "/tmp"
else:
ioc_db.load_sample_iocs()
print(f"[*] Loaded {len(ioc_db.iocs)} sample IOCs")
scan_dir = sys.argv[1] if len(sys.argv) > 1 else "/tmp"
# Run scanner
scanner = IOCScanner(ioc_db)
scanner.scan_all(scan_dir)
scanner.print_report()
5. Digital Forensics Basics¶
5.1 Forensic Principles¶
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Digital Forensics Principles β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β β
β 1. PRESERVE the evidence β
β - Never work on original evidence β
β - Create forensic images (bit-for-bit copies) β
β - Use write-blockers for disk access β
β - Document everything with timestamps β
β β
β 2. DOCUMENT the chain of custody β
β - Who collected the evidence? β
β - When was it collected? β
β - How was it stored? β
β - Who had access? β
β β
β 3. VERIFY integrity β
β - Hash all evidence immediately (SHA-256) β
β - Verify hashes before and after analysis β
β - Any change invalidates the evidence β
β β
β 4. ANALYZE on copies β
β - Work on forensic copies, never originals β
β - Use forensic tools that don't modify evidence β
β - Keep detailed notes of all analysis steps β
β β
β 5. REPORT findings β
β - Factual, objective reporting β
β - Reproducible methodology β
β - Clear chain from evidence to conclusions β
β β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
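Principle 3 (verify integrity) is straightforward to automate. A minimal sketch that hashes an evidence file in chunks and re-verifies it later; the `evidence.dd` path is a hypothetical example:

```python
import hashlib
from pathlib import Path

def hash_evidence(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_evidence(path: str, recorded_hash: str) -> bool:
    """Re-hash the file and compare against the digest recorded at collection."""
    return hash_evidence(path) == recorded_hash.lower()

if __name__ == "__main__":
    image = Path("evidence.dd")  # hypothetical forensic image
    if image.exists():
        recorded = hash_evidence(str(image))
        print(f"SHA-256: {recorded}")
        print("Integrity verified:", verify_evidence(str(image), recorded))
```

Record the digest in the chain-of-custody form at collection time, then re-run the verification before and after every analysis session.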
5.2 Order of Volatility¶
When collecting evidence, collect the most volatile (shortest-lived) data first.
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Order of Volatility (Most β Least) β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β β
β Most Volatile (collect FIRST) β
β β β
β βββ 1. CPU registers, cache β
β β Lifetime: nanoseconds β
β β β
β βββ 2. Memory (RAM) β
β β Lifetime: power cycle β
β β Contains: running processes, network connections, β
β β decrypted data, passwords, encryption keys β
β β β
β βββ 3. Network state β
β β Lifetime: seconds-minutes β
β β Contains: active connections, routing tables, ARP cache β
β β β
β βββ 4. Running processes β
β β Lifetime: until process ends β
β β Contains: process list, open files, loaded libraries β
β β β
β βββ 5. Disk (file system) β
β β Lifetime: until overwritten β
β β Contains: files, logs, swap, temp files, slack space β
β β β
β βββ 6. Remote logging / monitoring β
β β Lifetime: retention policy β
β β β
β βββ 7. Archival media (backups, tapes) β
β Lifetime: years β
β β
β Least Volatile (collect LAST) β
β β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
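As a quick sanity check, the ordering above can be encoded so a collection script always visits artifacts most-volatile-first. A small illustrative sketch (the artifact names are ours, matching the table):

```python
# Volatility ranks mirror the table above (1 = most volatile).
VOLATILITY_ORDER = [
    (1, "CPU registers / cache"),
    (2, "RAM (memory dump)"),
    (3, "Network state (connections, ARP cache)"),
    (4, "Running processes"),
    (5, "Disk / file system"),
    (6, "Remote logs"),
    (7, "Archival backups"),
]

def collection_plan(artifacts: list[str]) -> list[str]:
    """Sort requested artifacts most-volatile-first; unknown names go last."""
    rank = {name: order for order, name in VOLATILITY_ORDER}
    return sorted(artifacts, key=lambda a: rank.get(a, 99))

print(collection_plan([
    "Disk / file system",
    "RAM (memory dump)",
    "Network state (connections, ARP cache)",
]))
```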
5.3 Chain of Custody¶
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Chain of Custody Form β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β β
β Case Number: IR-2025-0042 β
β Evidence ID: EVD-001 β
β Description: Dell Latitude 7420 laptop, S/N: ABC123DEF β
β Location Found: Office 302, Building A β
β β
β Evidence Hash (at collection): β
β SHA-256: a1b2c3d4e5f6789... β
β β
β ββββββββββββ¬ββββββββββββββββββββ¬βββββββββββββ¬βββββββββββββββββ β
β β Date β Released By β Received Byβ Purpose β β
β ββββββββββββΌββββββββββββββββββββΌβββββββββββββΌβββββββββββββββββ€ β
β β 01/15/25 β Officer Smith β Analyst Leeβ Initial β β
β β 10:30 AM β (Badge #1234) β (IR Team) β collection β β
β ββββββββββββΌββββββββββββββββββββΌβββββββββββββΌβββββββββββββββββ€ β
β β 01/15/25 β Analyst Lee β Evidence β Secure β β
β β 11:45 AM β (IR Team) β Locker β storage β β
β ββββββββββββΌββββββββββββββββββββΌβββββββββββββΌβββββββββββββββββ€ β
β β 01/16/25 β Evidence Locker β Forensic β Disk β β
β β 09:00 AM β β Analyst Kimβ imaging β β
β ββββββββββββΌββββββββββββββββββββΌβββββββββββββΌβββββββββββββββββ€ β
β β 01/16/25 β Forensic Analyst β Evidence β Return after β β
β β 05:00 PM β Kim β Locker β imaging β β
β ββββββββββββ΄ββββββββββββββββββββ΄βββββββββββββ΄βββββββββββββββββ β
β β
β Notes: β
β - Laptop was powered off when collected β
β - Battery was removed to prevent accidental boot β
β - Disk imaged using FTK Imager, hash verified β
β β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
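The same form can be kept as structured data so transfers are timestamped automatically. A minimal sketch mirroring the fields above (the class names are illustrative, not a standard schema):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CustodyTransfer:
    """One row of the custody table: who released, who received, and why."""
    released_by: str
    received_by: str
    purpose: str
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now().isoformat(timespec="minutes")

@dataclass
class EvidenceItem:
    """Header fields of the form, plus the running transfer log."""
    case_number: str
    evidence_id: str
    description: str
    sha256: str
    transfers: list = field(default_factory=list)

    def transfer(self, released_by: str, received_by: str, purpose: str) -> None:
        self.transfers.append(CustodyTransfer(released_by, received_by, purpose))

item = EvidenceItem("IR-2025-0042", "EVD-001", "Dell Latitude 7420 laptop", "a1b2c3...")
item.transfer("Officer Smith", "Analyst Lee", "Initial collection")
item.transfer("Analyst Lee", "Evidence Locker", "Secure storage")
```

In practice the signed paper form remains the authoritative record; a digital log like this is a convenience for searching and reporting.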
5.4 Forensic Disk Imaging¶
# Create a forensic image using dd
# WARNING: Be VERY careful with dd - wrong parameters can destroy data
# Step 1: Identify the target disk
lsblk
# or
fdisk -l
# Step 2: Create forensic image (bit-for-bit copy)
# /dev/sdb = source (evidence drive, via write-blocker)
# evidence.dd = destination image file
sudo dd if=/dev/sdb of=evidence.dd bs=4096 conv=noerror,sync status=progress
# Step 3: Calculate hash of original and image
sudo sha256sum /dev/sdb > original_hash.txt
sha256sum evidence.dd > image_hash.txt
# Step 4: Verify the digests match
# Note: sha256sum output includes the filename, so a plain `diff` of the
# two files would always report a difference - compare only the hash fields
diff <(cut -d' ' -f1 original_hash.txt) <(cut -d' ' -f1 image_hash.txt)
# Better alternative: dc3dd (forensic-focused dd)
sudo dc3dd if=/dev/sdb of=evidence.dd hash=sha256 log=imaging.log
# Alternative: FTK Imager (cross-platform, GUI)
# Creates E01 (Expert Witness) format with built-in hashing
6. Memory Forensics Concepts¶
6.1 Why Memory Forensics?¶
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β What Lives Only in Memory? β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β β
β βββββββββββββββββββββββββββββββββββββββββββββββ β
β β Things you can ONLY find in RAM: β β
β β β β
β β - Running processes (including hidden) β β
β β - Network connections β β
β β - Decryption keys β β
β β - Passwords (plaintext in memory) β β
β β - Injected code (fileless malware) β β
β β - Clipboard contents β β
β β - Chat messages (before saved to disk) β β
β β - Encryption keys (full disk encryption) β β
β β - Command history β β
β β - Unpacked/decrypted malware β β
β βββββββββββββββββββββββββββββββββββββββββββββββ β
β β
β Modern malware often operates entirely in memory β
β ("fileless malware") to avoid disk-based detection. β
β β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
6.2 Memory Acquisition¶
# Linux memory acquisition
# Method 1: /proc/kcore (requires root)
# Caution: /proc/kcore is an ELF image of kernel virtual memory, not a raw
# dump, so a naive dd copy is unreliable - prefer LiME or AVML below
sudo dd if=/proc/kcore of=memory.raw bs=1M
# Method 2: LiME (Linux Memory Extractor) - preferred
# Load LiME kernel module
sudo insmod lime-$(uname -r).ko "path=memory.lime format=lime"
# Method 3: AVML (Microsoft's Linux memory acquisition tool)
sudo ./avml memory.lime
# Windows memory acquisition
# - FTK Imager (free, GUI)
# - WinPmem (command line)
# - Belkasoft RAM Capturer (free)
# macOS memory acquisition
# - osxpmem
sudo ./osxpmem -o memory.aff4
6.3 Volatility Framework (Overview)¶
# Volatility 3 - Memory forensics framework
# Install: pip install volatility3
# Identify the OS profile
vol -f memory.raw windows.info
# List running processes
vol -f memory.raw windows.pslist
vol -f memory.raw windows.pstree # Tree view
# Find hidden processes
vol -f memory.raw windows.psscan
# Network connections
vol -f memory.raw windows.netscan
# Command history
vol -f memory.raw windows.cmdline
# DLL list for a specific process
vol -f memory.raw windows.dlllist --pid 1234
# Dump a specific process
vol -f memory.raw windows.memmap --pid 1234 --dump
# Registry analysis
vol -f memory.raw windows.registry.hivelist
# Linux memory analysis
vol -f memory.raw linux.pslist
vol -f memory.raw linux.bash # Bash history
vol -f memory.raw linux.sockstat  # Network sockets
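When running many Volatility plugins against the same image, it helps to build the command lines programmatically and pass them to `subprocess.run`. A small sketch assuming the `vol` entry point from `pip install volatility3` is on PATH (the helper name `vol_cmd` is ours, not part of Volatility):

```python
def vol_cmd(image: str, plugin: str, **options) -> list[str]:
    """Build an argv list for `vol`; True-valued options become bare flags."""
    cmd = ["vol", "-f", image, plugin]
    for key, value in options.items():
        cmd.append("--" + key.replace("_", "-"))
        if value is not True:
            cmd.append(str(value))
    return cmd

print(vol_cmd("memory.raw", "windows.pslist"))
print(vol_cmd("memory.raw", "windows.memmap", pid=1234, dump=True))
```

The resulting lists can be fed to `subprocess.run(cmd, capture_output=True, text=True)` to collect plugin output for a report.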
7. Network Forensics¶
7.1 Packet Capture Analysis¶
"""
pcap_analyzer.py - Basic network forensics with packet capture analysis.
Requires: pip install scapy
WARNING: Only analyze captures from networks you are authorized to monitor.
"""
try:
from scapy.all import rdpcap, IP, TCP, UDP, DNS, Raw
SCAPY_AVAILABLE = True
except ImportError:
SCAPY_AVAILABLE = False
print("Scapy not installed. Install with: pip install scapy")
from collections import Counter, defaultdict
from dataclasses import dataclass, field
@dataclass
class PcapAnalysis:
"""Results of PCAP file analysis."""
total_packets: int = 0
protocols: dict = field(default_factory=dict)
top_talkers: list = field(default_factory=list)
dns_queries: list = field(default_factory=list)
suspicious_connections: list = field(default_factory=list)
http_requests: list = field(default_factory=list)
large_transfers: list = field(default_factory=list)
def analyze_pcap(filepath: str) -> PcapAnalysis:
"""
Analyze a PCAP file for security-relevant information.
Args:
filepath: Path to .pcap or .pcapng file
Returns:
PcapAnalysis with findings
"""
if not SCAPY_AVAILABLE:
raise ImportError("Scapy is required for PCAP analysis")
packets = rdpcap(filepath)
analysis = PcapAnalysis(total_packets=len(packets))
# Track statistics
ip_src_counter = Counter()
ip_dst_counter = Counter()
protocol_counter = Counter()
connection_sizes = defaultdict(int)
dns_queries = []
http_requests = []
for pkt in packets:
# Protocol analysis
if pkt.haslayer(TCP):
protocol_counter['TCP'] += 1
elif pkt.haslayer(UDP):
protocol_counter['UDP'] += 1
# IP layer analysis
if pkt.haslayer(IP):
src = pkt[IP].src
dst = pkt[IP].dst
ip_src_counter[src] += 1
ip_dst_counter[dst] += 1
# Track connection data volume
if pkt.haslayer(Raw):
key = f"{src} -> {dst}"
connection_sizes[key] += len(pkt[Raw].load)
# DNS analysis
if pkt.haslayer(DNS) and pkt[DNS].qr == 0: # Query
try:
query_name = pkt[DNS].qd.qname.decode('utf-8', errors='ignore')
dns_queries.append({
'query': query_name,
'src': pkt[IP].src if pkt.haslayer(IP) else 'unknown',
'type': pkt[DNS].qd.qtype,
})
except (AttributeError, IndexError):
pass
# HTTP request detection (basic)
if pkt.haslayer(TCP) and pkt.haslayer(Raw):
payload = pkt[Raw].load
try:
text = payload.decode('utf-8', errors='ignore')
if text.startswith(('GET ', 'POST ', 'PUT ', 'DELETE ')):
lines = text.split('\r\n')
http_requests.append({
'method': lines[0].split(' ')[0],
'path': lines[0].split(' ')[1] if len(lines[0].split(' ')) > 1 else '',
'src': pkt[IP].src if pkt.haslayer(IP) else 'unknown',
'dst': pkt[IP].dst if pkt.haslayer(IP) else 'unknown',
})
except (UnicodeDecodeError, IndexError):
pass
# Compile results
analysis.protocols = dict(protocol_counter)
# Top talkers (by packet count)
analysis.top_talkers = [
{'ip': ip, 'packets_sent': count}
for ip, count in ip_src_counter.most_common(10)
]
# DNS queries (deduplicated)
seen_queries = set()
for q in dns_queries:
if q['query'] not in seen_queries:
analysis.dns_queries.append(q)
seen_queries.add(q['query'])
# Large data transfers (potential exfiltration)
analysis.large_transfers = [
{'connection': conn, 'bytes': size}
for conn, size in sorted(
connection_sizes.items(), key=lambda x: x[1], reverse=True
)[:10]
]
analysis.http_requests = http_requests[:50]
# Suspicious pattern detection
analysis.suspicious_connections = detect_suspicious_patterns(
packets, ip_src_counter, dns_queries
)
return analysis
def detect_suspicious_patterns(packets, ip_counter, dns_queries):
"""Detect suspicious network patterns."""
suspicious = []
# 1. Beaconing detection (regular interval connections)
# Simplified: check for IPs with very regular packet intervals
# (Real beaconing detection requires statistical analysis)
# 2. DNS tunneling indicators
long_queries = [q for q in dns_queries if len(q['query']) > 50]
if long_queries:
suspicious.append({
'type': 'DNS_TUNNELING_POSSIBLE',
'description': f"Found {len(long_queries)} unusually long DNS queries",
'samples': [q['query'][:80] for q in long_queries[:3]],
})
# 3. Port scanning detection
dst_ports = defaultdict(set)
for pkt in packets:
if pkt.haslayer(TCP) and pkt.haslayer(IP):
src = pkt[IP].src
dst_port = pkt[TCP].dport
dst_ports[src].add(dst_port)
for ip, ports in dst_ports.items():
if len(ports) > 20: # Hitting many different ports
suspicious.append({
'type': 'PORT_SCAN_POSSIBLE',
'description': f"{ip} contacted {len(ports)} different ports",
'samples': sorted(list(ports))[:10],
})
return suspicious
def print_pcap_report(analysis: PcapAnalysis) -> None:
"""Print formatted PCAP analysis report."""
print("=" * 65)
print(" NETWORK FORENSICS REPORT")
print("=" * 65)
print(f" Total packets: {analysis.total_packets}")
print(f" Protocols: {analysis.protocols}")
print(f"\n{'β' * 65}")
print(" TOP TALKERS (by packets sent)")
print(f"{'β' * 65}")
for t in analysis.top_talkers:
print(f" {t['ip']:20s} {t['packets_sent']} packets")
if analysis.dns_queries:
print(f"\n{'β' * 65}")
print(f" DNS QUERIES ({len(analysis.dns_queries)} unique)")
print(f"{'β' * 65}")
for q in analysis.dns_queries[:20]:
print(f" {q['src']:20s} -> {q['query']}")
if analysis.large_transfers:
print(f"\n{'β' * 65}")
print(" LARGEST DATA TRANSFERS")
print(f"{'β' * 65}")
for t in analysis.large_transfers:
size_kb = t['bytes'] / 1024
print(f" {t['connection']:40s} {size_kb:.1f} KB")
if analysis.suspicious_connections:
print(f"\n{'β' * 65}")
print(" SUSPICIOUS PATTERNS")
print(f"{'β' * 65}")
for s in analysis.suspicious_connections:
print(f" [{s['type']}] {s['description']}")
if s.get('samples'):
for sample in s['samples']:
print(f" - {sample}")
print(f"\n{'=' * 65}")
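The beaconing check that `detect_suspicious_patterns` leaves as a stub can be sketched with basic statistics: if the inter-arrival times of packets from one source are nearly constant, that regularity suggests automated C2 check-ins. A simplified version operating on plain timestamp lists (the `max_jitter` threshold is illustrative, not a tuned value):

```python
import statistics

def looks_like_beaconing(timestamps: list[float],
                         min_events: int = 5,
                         max_jitter: float = 0.1) -> bool:
    """Flag a timestamp series whose inter-arrival times are nearly constant."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    intervals = [b - a for a, b in zip(ts, ts[1:])]
    mean = statistics.mean(intervals)
    if mean == 0:
        return False
    # Coefficient of variation: spread of intervals relative to their mean.
    return statistics.pstdev(intervals) / mean < max_jitter

print(looks_like_beaconing([0.0, 60.0, 120.0, 180.0, 240.0, 300.0]))  # regular 60s intervals
print(looks_like_beaconing([0.0, 5.0, 90.0, 91.0, 300.0, 420.0]))     # irregular
```

Real implementations must also handle jittered beacons (attackers add randomness deliberately), so production tools use more robust statistics than this sketch.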
7.2 Useful Command-Line Tools¶
# tcpdump - capture packets
# Capture all traffic on eth0
sudo tcpdump -i eth0 -w capture.pcap
# Capture only traffic to/from specific IP
sudo tcpdump -i eth0 host 192.168.1.100 -w suspicious.pcap
# Capture only HTTP traffic
sudo tcpdump -i eth0 port 80 -w http_traffic.pcap
# Capture DNS traffic
sudo tcpdump -i eth0 port 53 -w dns_traffic.pcap
# tshark (command-line Wireshark)
# Extract HTTP requests
tshark -r capture.pcap -Y "http.request" -T fields \
-e ip.src -e http.request.method -e http.request.uri
# Extract DNS queries
tshark -r capture.pcap -Y "dns.qr == 0" -T fields \
-e ip.src -e dns.qry.name
# Extract file transfers
tshark -r capture.pcap --export-objects http,exported_files/
# Show conversation statistics
tshark -r capture.pcap -z conv,ip -q
8. Incident Response Playbooks¶
8.1 Playbook Template¶
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β IR Playbook Template β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β β
β PLAYBOOK: [Incident Type] β
β VERSION: 1.0 β
β LAST UPDATED: [Date] β
β OWNER: [Team/Person] β
β SEVERITY: [P1-P4] β
β β
β TRIGGER: β
β [What alerts/conditions activate this playbook] β
β β
β INITIAL TRIAGE (first 15 minutes): β
β [ ] Step 1: ... β
β [ ] Step 2: ... β
β [ ] Step 3: Classify severity β
β [ ] Step 4: Notify incident manager β
β β
β CONTAINMENT (first 1-4 hours): β
β [ ] Step 1: ... β
β [ ] Step 2: ... β
β [ ] Step 3: Preserve evidence β
β β
β ERADICATION: β
β [ ] Step 1: ... β
β [ ] Step 2: ... β
β β
β RECOVERY: β
β [ ] Step 1: ... β
β [ ] Step 2: Verify normal operations β
β β
β COMMUNICATION: β
β - Internal: [who to notify and when] β
β - External: [customers, regulators, law enforcement] β
β β
β ESCALATION: β
β - Condition β Action β
β β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
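A template like the one above can also be kept as code, so step completion is trackable while an incident is live. A minimal sketch (the phase names and steps are illustrative, not a standard schema):

```python
from dataclasses import dataclass, field

@dataclass
class PlaybookStep:
    description: str
    done: bool = False

@dataclass
class Playbook:
    name: str
    severity: str
    phases: dict = field(default_factory=dict)  # phase name -> list of steps

    def add_step(self, phase: str, description: str) -> None:
        self.phases.setdefault(phase, []).append(PlaybookStep(description))

    def complete(self, phase: str, index: int) -> None:
        self.phases[phase][index].done = True

    def progress(self) -> float:
        steps = [s for phase in self.phases.values() for s in phase]
        return sum(s.done for s in steps) / len(steps) if steps else 0.0

pb = Playbook("Ransomware Incident", "P1")
pb.add_step("Triage", "Disconnect affected systems from network")
pb.add_step("Triage", "Document ransom note")
pb.add_step("Containment", "Isolate affected network segments")
pb.complete("Triage", 0)
print(f"Progress: {pb.progress():.0%}")
```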
8.2 Playbook: Ransomware Incident¶
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β PLAYBOOK: Ransomware Incident β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β SEVERITY: P1 (Critical) β
β β
β TRIGGER: β
β - Ransom note displayed on endpoint β
β - Mass file encryption detected β
β - EDR alert: ransomware behavior β
β β
β INITIAL TRIAGE (first 15 minutes): β
β [ ] 1. DO NOT power off affected systems β
β [ ] 2. Disconnect affected systems from network β
β (pull ethernet, disable WiFi - do NOT shut down) β
β [ ] 3. Document ransom note (photograph/screenshot) β
β [ ] 4. Identify ransomware variant if possible β
β [ ] 5. Determine scope: how many systems affected? β
β [ ] 6. Notify incident manager β activate IR team β
β [ ] 7. Notify CISO / executive management β
β β
β CONTAINMENT (first 1-4 hours): β
β [ ] 1. Isolate affected network segments β
β [ ] 2. Block known ransomware C2 IPs/domains β
β [ ] 3. Disable network shares to prevent spread β
β [ ] 4. Reset all potentially compromised credentials β
β [ ] 5. Capture memory dumps of affected systems β
β [ ] 6. Preserve logs (SIEM, firewall, endpoint) β
β [ ] 7. Check backup integrity (are backups affected?) β
β β
β ERADICATION: β
β [ ] 1. Identify initial infection vector (email, exploit, etc.) β
β [ ] 2. Check NoMoreRansom.org for decryption tools β
β [ ] 3. Remove malware from all affected systems β
β [ ] 4. Patch vulnerability that allowed infection β
β [ ] 5. Scan all systems for persistence mechanisms β
β β
β RECOVERY: β
β [ ] 1. Restore from clean, verified backups β
β [ ] 2. Rebuild systems that cannot be cleaned β
β [ ] 3. Restore in phases, monitoring for reinfection β
β [ ] 4. Reset all passwords organization-wide β
β [ ] 5. Enhance monitoring for 30 days post-recovery β
β β
β COMMUNICATION: β
β - Internal: All-hands notification within 2 hours β
β - Legal: Engage legal counsel immediately β
β - Insurance: Notify cyber insurance carrier β
β - Law enforcement: File report with FBI IC3 β
β - Regulators: Per regulatory requirements (GDPR: 72 hours) β
β - Customers: If data breach confirmed β
β β
β DO NOT: β
β - Pay the ransom without consulting legal and law enforcement β
β - Communicate with attackers without legal guidance β
β - Destroy evidence β
β - Restore from backups before ensuring they are clean β
β β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
8.3 Playbook: Compromised Credentials¶
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β PLAYBOOK: Compromised Credentials β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β SEVERITY: P2 (High) β
β β
β TRIGGER: β
β - Credential found on dark web / paste site β
β - User reports phishing / credential theft β
β - Impossible travel alert β
β - MFA bypass detected β
β β
β INITIAL TRIAGE: β
β [ ] 1. Identify affected account(s) β
β [ ] 2. Determine credential type (password, API key, token) β
β [ ] 3. Check for unauthorized access in audit logs β
β [ ] 4. Determine if MFA was enabled β
β β
β CONTAINMENT: β
β [ ] 1. Force password reset on affected account β
β [ ] 2. Revoke all active sessions / tokens β
β [ ] 3. Rotate API keys if applicable β
β [ ] 4. Enable MFA if not already enabled β
β [ ] 5. Block suspicious source IPs β
β [ ] 6. Check for mailbox rules (forwarding, deletion) β
β β
β INVESTIGATION: β
β [ ] 1. Review all actions taken with compromised credential β
β [ ] 2. Check for lateral movement (access to other systems) β
β [ ] 3. Check for data access / exfiltration β
β [ ] 4. Identify how credential was compromised β
β [ ] 5. Check if credential was reused on other services β
β β
β RECOVERY: β
β [ ] 1. Verify account is secured (new password + MFA) β
β [ ] 2. Reverse any unauthorized changes β
β [ ] 3. Notify user of incident and require security training β
β [ ] 4. Monitor account for 30 days β
β β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
9. Post-Incident Review Template¶
"""
incident_report.py - Generate post-incident review reports.
"""
from dataclasses import dataclass, field
from datetime import datetime
@dataclass
class TimelineEvent:
"""A single event in the incident timeline."""
timestamp: str
description: str
actor: str = "" # Who performed the action
evidence: str = "" # Supporting evidence
@dataclass
class ActionItem:
"""A follow-up action from the incident review."""
description: str
owner: str
due_date: str
priority: str = "MEDIUM" # HIGH, MEDIUM, LOW
status: str = "OPEN" # OPEN, IN_PROGRESS, DONE
@dataclass
class IncidentReport:
"""Complete post-incident review report."""
# Metadata
incident_id: str
title: str
severity: str
status: str = "CLOSED"
report_date: str = ""
report_author: str = ""
# Timeline
detected_at: str = ""
contained_at: str = ""
eradicated_at: str = ""
recovered_at: str = ""
closed_at: str = ""
# Details
summary: str = ""
root_cause: str = ""
impact: str = ""
affected_systems: list[str] = field(default_factory=list)
affected_users: int = 0
data_compromised: str = ""
# Analysis
attack_vector: str = ""
attacker_info: str = ""
timeline: list[TimelineEvent] = field(default_factory=list)
# Lessons
what_went_well: list[str] = field(default_factory=list)
what_went_wrong: list[str] = field(default_factory=list)
action_items: list[ActionItem] = field(default_factory=list)
# Metrics
time_to_detect: str = "" # Time from compromise to detection
time_to_contain: str = "" # Time from detection to containment
time_to_recover: str = "" # Time from containment to recovery
total_duration: str = "" # Total incident duration
def __post_init__(self):
if not self.report_date:
self.report_date = datetime.now().strftime("%Y-%m-%d")
def generate_markdown(self) -> str:
"""Generate a Markdown report."""
lines = []
lines.append(f"# Incident Report: {self.incident_id}")
lines.append(f"\n**Title**: {self.title}")
lines.append(f"**Severity**: {self.severity}")
lines.append(f"**Status**: {self.status}")
lines.append(f"**Report Date**: {self.report_date}")
lines.append(f"**Author**: {self.report_author}")
# Executive Summary
lines.append("\n## Executive Summary\n")
lines.append(self.summary)
# Impact
lines.append("\n## Impact\n")
lines.append(self.impact)
if self.affected_systems:
lines.append(f"\n**Affected Systems**: {', '.join(self.affected_systems)}")
lines.append(f"**Affected Users**: {self.affected_users}")
if self.data_compromised:
lines.append(f"**Data Compromised**: {self.data_compromised}")
# Timeline
lines.append("\n## Timeline\n")
lines.append("| Time | Event | Actor |")
lines.append("|------|-------|-------|")
for event in self.timeline:
lines.append(
f"| {event.timestamp} | {event.description} | {event.actor} |"
)
# Key Metrics
lines.append("\n## Key Metrics\n")
lines.append(f"- **Time to Detect**: {self.time_to_detect}")
lines.append(f"- **Time to Contain**: {self.time_to_contain}")
lines.append(f"- **Time to Recover**: {self.time_to_recover}")
lines.append(f"- **Total Duration**: {self.total_duration}")
# Root Cause
lines.append("\n## Root Cause Analysis\n")
lines.append(self.root_cause)
lines.append(f"\n**Attack Vector**: {self.attack_vector}")
# Lessons Learned
lines.append("\n## Lessons Learned\n")
lines.append("### What Went Well\n")
for item in self.what_went_well:
lines.append(f"- {item}")
lines.append("\n### What Needs Improvement\n")
for item in self.what_went_wrong:
lines.append(f"- {item}")
# Action Items
lines.append("\n## Action Items\n")
lines.append("| # | Action | Owner | Due Date | Priority | Status |")
lines.append("|---|--------|-------|----------|----------|--------|")
for i, action in enumerate(self.action_items, 1):
lines.append(
f"| {i} | {action.description} | {action.owner} | "
f"{action.due_date} | {action.priority} | {action.status} |"
)
return "\n".join(lines)
# βββ Example Usage βββ
def create_sample_report() -> IncidentReport:
"""Create a sample incident report for demonstration."""
report = IncidentReport(
incident_id="IR-2025-0042",
title="Unauthorized Access via Compromised API Key",
severity="P2 - High",
report_author="Security Team",
summary=(
"On January 15, 2025, an unauthorized party accessed our "
"production API using a compromised API key. The key was "
"inadvertently committed to a public GitHub repository. "
"The attacker accessed customer order data for approximately "
"2 hours before detection."
),
root_cause=(
"A developer committed an API key to a public GitHub "
"repository on January 10. The key was scraped by an "
"automated bot and used to access the production API "
"on January 15. Pre-commit hooks for secret detection "
"were not configured on the developer's machine."
),
impact=(
"Customer order data (names, addresses, order history) "
"for approximately 1,200 customers was potentially accessed. "
"No payment card data was exposed (stored separately). "
"No evidence of data modification."
),
attack_vector="Compromised API key from public Git repository",
affected_systems=["api-prod-01", "api-prod-02", "orders-db"],
affected_users=1200,
data_compromised="Customer names, addresses, order history",
detected_at="2025-01-15 14:30 UTC",
contained_at="2025-01-15 14:45 UTC",
eradicated_at="2025-01-15 16:00 UTC",
recovered_at="2025-01-15 18:00 UTC",
closed_at="2025-01-20 09:00 UTC",
time_to_detect="5 days (from key commit to detection)",
time_to_contain="15 minutes",
time_to_recover="3.5 hours",
total_duration="5 days",
timeline=[
TimelineEvent(
"2025-01-10 09:15", "API key committed to public repo",
"Developer A"
),
TimelineEvent(
"2025-01-15 12:30", "First unauthorized API access",
"Unknown attacker"
),
TimelineEvent(
"2025-01-15 14:30", "Anomalous API usage alert triggered",
"SIEM"
),
TimelineEvent(
"2025-01-15 14:35", "SOC analyst confirms unauthorized access",
"Analyst B"
),
TimelineEvent(
"2025-01-15 14:45", "API key revoked, attacker blocked",
"Analyst B"
),
TimelineEvent(
"2025-01-15 15:00", "Incident manager notified, IR activated",
"IR Lead C"
),
TimelineEvent(
"2025-01-15 16:00", "All exposed API keys rotated",
"DevOps Team"
),
TimelineEvent(
"2025-01-15 18:00", "Monitoring confirms no further access",
"SOC Team"
),
],
what_went_well=[
"SIEM alert fired quickly once anomalous pattern detected",
"API key revocation was fast (15 min from alert to containment)",
"IR team followed playbook effectively",
"Good communication between SOC and development teams",
],
what_went_wrong=[
"API key was in public repo for 5 days before detection",
"No automated secret scanning on GitHub repositories",
"Pre-commit hooks not enforced across all developer machines",
"API key had overly broad permissions (read all orders)",
"No IP-based access restrictions on API keys",
],
action_items=[
ActionItem(
"Deploy Gitleaks on all repositories",
"DevOps Team", "2025-02-01", "HIGH"
),
ActionItem(
"Enforce pre-commit hooks with detect-secrets",
"Dev Lead", "2025-02-15", "HIGH"
),
ActionItem(
"Implement API key scope restrictions (least privilege)",
"API Team", "2025-03-01", "HIGH"
),
ActionItem(
"Add IP allowlisting for production API keys",
"Infrastructure", "2025-03-01", "MEDIUM"
),
ActionItem(
"Conduct developer security training (secrets management)",
"Security Team", "2025-02-28", "MEDIUM"
),
ActionItem(
"Implement automated key rotation (90-day max)",
"DevOps Team", "2025-04-01", "MEDIUM"
),
],
)
return report
if __name__ == "__main__":
report = create_sample_report()
markdown = report.generate_markdown()
print(markdown)
# Save to file
with open("incident_report_IR-2025-0042.md", "w") as f:
f.write(markdown)
print("\nReport saved to: incident_report_IR-2025-0042.md")
10. Exercises¶
Exercise 1: Log Analysis¶
Given the following sample log entries, identify all security incidents:
192.168.1.50 - - [15/Jan/2025:10:00:01 +0000] "GET /login HTTP/1.1" 200 1234
192.168.1.50 - - [15/Jan/2025:10:00:02 +0000] "POST /login HTTP/1.1" 401 89
192.168.1.50 - - [15/Jan/2025:10:00:03 +0000] "POST /login HTTP/1.1" 401 89
192.168.1.50 - - [15/Jan/2025:10:00:04 +0000] "POST /login HTTP/1.1" 401 89
10.0.0.5 - admin [15/Jan/2025:10:05:00 +0000] "GET /admin/users HTTP/1.1" 200 5678
10.0.0.5 - admin [15/Jan/2025:10:05:01 +0000] "GET /admin/export?table=users HTTP/1.1" 200 890123
10.0.0.5 - admin [15/Jan/2025:10:05:02 +0000] "GET /admin/export?table=payments HTTP/1.1" 200 1234567
203.0.113.10 - - [15/Jan/2025:10:10:00 +0000] "GET /../../etc/passwd HTTP/1.1" 403 0
203.0.113.10 - - [15/Jan/2025:10:10:01 +0000] "GET /search?q=' OR 1=1 -- HTTP/1.1" 500 0
203.0.113.10 - - [15/Jan/2025:10:10:02 +0000] "GET /search?q=<script>alert(1)</script> HTTP/1.1" 200 456
Tasks:

1. Classify each suspicious pattern (brute force, traversal, SQLi, XSS, data exfiltration)
2. Determine the severity of each finding
3. Write a brief incident summary for each
Exercise 2: IOC Database¶
Create a JSON file with at least 20 IOCs covering:

- 5 malicious IP addresses
- 5 malicious domains
- 5 malware file hashes
- 5 suspicious filenames
Run the IOC scanner against a test directory you create with some matching items.
Exercise 3: Incident Response Playbook¶
Write a complete incident response playbook for a SQL injection attack that:

1. Defines trigger conditions (what alerts indicate SQLi)
2. Covers all four NIST phases
3. Includes specific commands/tools to use at each step
4. Defines communication and escalation procedures
5. Includes a post-incident checklist
Exercise 4: Memory Forensics Analysis¶
Research Volatility 3 and write a step-by-step guide for analyzing a memory dump to:

1. List all running processes and identify suspicious ones
2. Find active network connections
3. Extract command line arguments for each process
4. Identify injected DLLs or code
5. Recover encryption keys or passwords from memory
Exercise 5: Post-Incident Report¶
Using the IncidentReport class from this lesson, create a complete post-incident report for the following scenario:
Your company's web application was defaced at 3 AM on a Sunday. The attacker exploited a known CVE in an unpatched WordPress plugin. They replaced the homepage with a political message. Your monitoring system detected the change at 3:15 AM. The on-call engineer restored from backup at 4:00 AM. Investigation revealed the attacker also created a backdoor admin account.
Exercise 6: PCAP Analysis¶
Download a sample PCAP file from a CTF or security training resource (e.g., malware-traffic-analysis.net). Analyze it using the tools from this lesson and write a network forensics report covering:

1. Top talkers (most active IP addresses)
2. DNS queries (especially suspicious ones)
3. HTTP requests (look for malware downloads, C2 communications)
4. Any indicators of data exfiltration
Summary¶
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Incident Response Key Takeaways β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β β
β 1. Preparation is everything: Have plans, tools, and trained β
β people BEFORE an incident occurs β
β β
β 2. Follow the NIST lifecycle: Preparation β Detection β β
β Containment β Eradication β Recovery β Lessons Learned β
β β
β 3. Preserve evidence: Document everything, maintain chain β
β of custody, hash all evidence, work on copies β
β β
β 4. Centralize logging: You cannot investigate what you β
β did not log. Invest in logging infrastructure β
β β
β 5. Automate detection: Use SIEM correlation rules and β
β IOC scanning to reduce detection time β
β β
β 6. Practice with playbooks: Written, tested playbooks reduce β
β response time and ensure consistency β
β β
β 7. Learn from incidents: Post-incident reviews are the most β
β valuable source of security improvements β
β β
β 8. Time matters: Minutes count during active incidents. β
β Mean Time to Detect (MTTD) and Mean Time to Respond β
β (MTTR) are your key metrics β
β β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
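The MTTD/MTTR metrics in takeaway 8 fall directly out of incident timestamps like those tracked by `IncidentReport`. A minimal sketch, assuming ISO-8601 timestamps and illustrative sample data:

```python
from datetime import datetime

def mean_delta_hours(pairs: list[tuple[str, str]]) -> float:
    """Average (end - start) across ISO-8601 timestamp pairs, in hours."""
    total = sum(
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds()
        for start, end in pairs
    )
    return total / len(pairs) / 3600

# (compromised_at, detected_at, contained_at) - illustrative sample incidents
incidents = [
    ("2025-01-10T09:15", "2025-01-15T14:30", "2025-01-15T14:45"),
    ("2025-02-03T02:00", "2025-02-03T03:15", "2025-02-03T04:00"),
]
mttd = mean_delta_hours([(c, d) for c, d, _ in incidents])
mttr = mean_delta_hours([(d, r) for _, d, r in incidents])
print(f"MTTD: {mttd:.2f} h, MTTR: {mttr:.2f} h")
```

Tracking these averages quarter over quarter shows whether detection and response investments are actually paying off.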
Previous: 13. Security Testing | Next: 15. Project: Building a Secure REST API