Continuum Resources LLC — Applied AI Research Series
WP-CR-2025-11  ·  Unclassified  ·  Public Release Authorized

LLM-Assisted Multi-Sensor
Fusion for C-UAS Threat Classification

A Defense AI Architecture for Real-Time Drone Threat Identification, Intent Assessment, and Cost-Optimized Engagement Recommendation Across Radar, RF, Electro-Optical, and Acoustic Sensor Modalities

Author
Kurt A. Richardson, PhD
Affiliation
Head of R&D, Continuum Resources LLC
Published
March 2025
Classification
Unclassified // Public
Domain
C-UAS · Multi-Sensor AI · Defense Systems
Section 00

Executive Summary

The battlefield economics of unmanned aerial systems have undergone a structural inversion. Commercial off-the-shelf drones costing hundreds to thousands of dollars are being deployed in coordinated swarms against defensive systems that cost orders of magnitude more to operate. The April 2024 Iran–Israel engagement illustrated the problem at scale: an estimated 170-plus Shahed-136 drones and loitering munitions, each costing approximately $20,000, had to be intercepted with Patriot PAC-3 missiles at approximately $4 million per shot. The arithmetic is unsustainable. At that engagement rate, a single night's swarm attack consumes a significant fraction of a theater's strategic interceptor stockpile.

This is not a hardware problem — it is a decision intelligence problem. The cost-exchange crisis is solvable only by improving the quality, speed, and selectivity of engagement decisions: identifying which threats require kinetic intercept, which can be defeated by electronic warfare, which pose no immediate threat, and in what sequence to engage a coordinated swarm without depleting high-value interceptors on low-value targets. This paper presents a rigorous technical architecture for that decision intelligence layer — the LLM-Assisted Multi-Sensor Fusion framework for C-UAS threat classification, which forms the research foundation for Continuum's ATLAS (Autonomous Threat Level Assessment System) capability.

8,000:1
Worst-case exchange ratio — $500 commercial drone defeated by ~$4M Patriot intercept
<8s
Target decision latency for ATLAS — from multi-sensor track to engagement recommendation
>90%
Target drone type classification accuracy for the proposed LLM fusion architecture across combined sensor modalities
⚡ Core Architecture Thesis

Single-sensor classifiers — whether radar Doppler signatures, RF emission profiles, or electro-optical object detection — produce accurate classifications for well-characterized threats under ideal conditions. They produce unacceptable false negative rates for novel or spoofed threats in degraded environments. LLM-based multi-sensor fusion changes the problem: the LLM does not just aggregate classifier outputs, it reasons over the full multi-modal evidence context, applies threat intelligence, and provides calibrated confidence scores that reflect the actual uncertainty in the data — enabling human operators to make well-informed engagement decisions in seconds, not minutes.

Section 01

Introduction: The Decision Intelligence Gap

Counter-Unmanned Aerial Systems (C-UAS) is among the fastest-evolving domains in defense technology. The proliferation of commercially available drones — DJI Mavic-class consumer platforms, modified first-person-view racing drones, fixed-wing loitering munitions, and coordinated swarms controlled by commercial software — has created a threat landscape that evolved faster than the sensor and effector systems designed to counter it. The systems that exist — FAAD-C2, SkyWarden, LIDS, Dedrone, Ku-band radars, acoustic sensor arrays — are in many cases capable of detecting and tracking UAS threats. The gap is not detection. The gap is the intelligence layer that sits between detection and decision.

An operator presented with a track from a Ku-band radar must determine: Is this a quad-rotor commercial drone or a modified military-grade platform? Is it conducting reconnaissance, approaching a target, or establishing communication relay? Is it operating autonomously or under active RF control? Is it part of a coordinated swarm with a specific geometric attack pattern, or a single opportunistic threat? What is the optimal engagement option given the current interceptor inventory, the threat priority, and the proximity to friendly personnel and civilian infrastructure? Current systems provide partial answers to some of these questions — but not synthesized, prioritized, and presented in the 8-second decision cycle that active defense scenarios demand.

"The drone threat is evolving faster than our ability to write requirements for counter-systems. We need software-native partners who can move at the speed of the threat — not hardware programs that take five years to field."
— Senior DoD Program Executive, public remarks (paraphrased)

The ATLAS Architecture

📋 Methodology Status — Proposed Architecture Under Active Development

This paper presents the ATLAS architecture as a proposed methodology and system design that Continuum Resources is actively developing. The architecture, data flows, classifier designs, and engagement logic described here represent our current technical approach — grounded in published research and Continuum's existing AI/ML capabilities — not a fielded system. Performance targets and design parameters will be established through the phased evaluation program described in Section 18. Readers should understand this paper as the technical foundation for a development program, not a description of an operational system.

This paper describes the technical architecture underlying ATLAS — Continuum's Autonomous Threat Level Assessment System for C-UAS decision support. ATLAS is not a detection system; it assumes the existence of one or more organic sensors. ATLAS is the intelligence fusion layer that would ingest multi-modal sensor data, apply LLM-assisted reasoning to produce threat classification and intent assessment, and deliver human-readable engagement recommendations with quantified confidence scores to an operator interface. The architecture directly extends Continuum's published research in LLM Defense Evaluation (WP-CR-2025-09) and Secure RAG Architectures (WP-CR-2025-10), applying those frameworks to the specific requirements of real-time C-UAS operations.

Relationship to the Continuum Research Series

ATLAS operationalizes multiple Continuum research threads. The LLM evaluation methodology from WP-CR-2025-09 governs the selection and continuous assessment of the models used in the fusion layer. The Secure RAG Architecture from WP-CR-2025-10 provides the threat intelligence retrieval substrate. The adversarial robustness framework from WP-CR-2025-04 directly informs the anti-spoofing and adversarial RF input defenses. The DevSecOps delivery methodology from WP-CR-2025-06 governs the deployment pipeline to IL4/IL5 environments.

Section 02

The UAS Threat Landscape

Effective threat classification requires understanding the threat taxonomy. The C-UAS threat environment is not a single threat class — it is a continuously evolving spectrum from consumer-grade nuisances to sophisticated military systems, each with distinct sensor signatures, flight behaviors, payload capabilities, and optimal countermeasures.

UAS Threat Taxonomy

| Threat Class | Examples | Unit Cost | Key Capabilities | Optimal Countermeasure |
| --- | --- | --- | --- | --- |
| Commercial COTS Small UAS | DJI Mavic/Phantom, Autel EVO | $500–$3K | ISR at low altitude; payload carriage; GPS-dependent; limited range | RF jamming / GPS spoofing (~$5K) |
| Modified FPV Racing Drones | FPV strike drones (Ukraine-pattern) | $400–$800 | High speed (100+ km/h); low radar cross-section; direct video link; shaped-charge payload | Hard kill or high-energy laser (speed makes EW difficult) |
| Loitering Munitions | Shahed-136, ZALA Lancet, Switchblade | $20K–$100K | Long range; autonomous terminal guidance; precision strike; difficult to intercept at terminal phase | Medium-range kinetic (NASAMS, IRIS-T) or directed energy |
| Group 3 Military UAS | MQ-9 Reaper class (adversary equivalents) | $5M+ | High altitude; long endurance; multi-payload; encrypted C2; active electronic countermeasures | Air defense missile systems (THAAD, Patriot) |
| Coordinated Swarms | 10–200 COTS/modified UAS in coordinated attack | $5K–$2M total | Saturates operator bandwidth; geometric attack patterns; single-shot defeat insufficient; may include decoys | Priority-ordered EW + kinetic; DEW when available |

The Cost-Exchange Crisis

The fundamental C-UAS challenge is the cost-exchange ratio — the ratio of interceptor cost to threat cost. When adversaries deploy $500 drones against Patriot-defended assets, the cost-exchange ratio can reach 8,000:1. Even against the more capable Shahed-136 class threats, a 200:1 exchange ratio is strategically unsustainable at scale. The April 2024 Iran-Israel engagement — 170+ drones and ballistic missiles met with an estimated 350+ interceptors across Israeli, U.S., UK, and Jordanian air defenses — illustrated how rapidly a coordinated attack can consume high-value interceptor stockpiles.

The path to a sustainable defense posture requires matching threats to the most cost-effective countermeasure: electronic warfare for GPS-dependent COTS drones ($5K per engagement), directed energy for volumetric threats ($1–2 per shot when systems are operational), and kinetic intercept reserved for threats that defeat all non-kinetic options. This matching problem — real-time, under operational pressure, with incomplete sensor data — is precisely where LLM-assisted decision intelligence provides transformative value.

⚠ The Swarm Saturation Problem

Single-threat engagement doctrine breaks down against coordinated swarms of 10–200+ UAS. The cognitive load on operators — tracking 50+ simultaneous threats, assessing each threat's priority, selecting the optimal engagement option for each, and managing effector inventory depletion — exceeds human bandwidth by an order of magnitude. Swarm defense requires an AI layer that can simultaneously assess all tracks, predict swarm geometry and intent, recommend a prioritized engagement sequence, and continuously update recommendations as the engagement evolves. This is not optional for swarm defense — it is structurally required.

Section 03

Why LLMs for Sensor Fusion

Multi-sensor fusion for UAS threat classification is not a new problem. Traditional approaches — Kalman filter track fusion, Bayesian classification networks, and multi-layer perceptron ensembles — have been applied to this problem for decades. They perform well under the conditions for which they were designed and trained. They perform poorly under conditions they were not designed for: novel threat signatures, adversarial sensor spoofing, degraded sensor environments, and the combinatorial complexity of multi-swarm engagements with mixed threat classes. LLMs add qualitatively different capabilities at exactly these high-uncertainty boundaries.

CAPABILITY 01
Natural Language Threat Intelligence Reasoning
A fine-tuned LLM can reason over threat intelligence reports, sensor signature databases, tactics documents, and operational context simultaneously. When a sensor detects an anomalous signature, the LLM can retrieve and reason over relevant threat intelligence to determine whether it matches a known or emerging threat variant — contextualizing sensor data against a continuously updated knowledge base that traditional ML classifiers cannot access.
CAPABILITY 02
Calibrated Uncertainty Quantification
Unlike traditional classifiers that output a probability distribution without uncertainty meta-information, an LLM-based fusion layer can express and communicate the quality of its own uncertainty: "Classification as Shahed-136 variant is 87% confident based on RF signature and flight path correlation; acoustic signature is degraded by wind noise and contributes low confidence to this assessment." This calibrated uncertainty is operationally essential for the human-in-the-loop decision interface.
CAPABILITY 03
Cross-Modal Evidence Synthesis
Radar, RF, EO, and acoustic sensors measure different physical phenomena with different spatial and temporal resolutions. LLMs can reason over these heterogeneous evidence sources in natural language representations, identifying when modalities agree, when they conflict, and what the conflict implies about the threat. A discrepancy between RF classification (COTS DJI protocol) and EO classification (military-grade airframe) is a security-relevant anomaly that a traditional fusion algorithm would average away — an LLM can reason about what that discrepancy means.
CAPABILITY 04
Zero-Shot Reasoning for Novel Threats
Traditional ML classifiers fail on threat signatures outside their training distribution. An LLM reasoning over sensor data can apply general physical and behavioral principles — flight dynamics, RF emission characteristics, acoustic signatures — to assess novel threats it was not explicitly trained on. This zero-shot generalization is particularly valuable for the rapidly evolving modified-FPV and emerging loitering munition threat classes where training data is sparse.
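The calibration property described in Capability 02 is empirically checkable. A standard metric is expected calibration error (ECE): bin predictions by stated confidence and measure how far each bin's accuracy deviates from its average confidence. The sketch below is a minimal reference implementation; it is a generic metric, not a component of the ATLAS design.

```python
import numpy as np

def expected_calibration_error(conf: np.ndarray, correct: np.ndarray,
                               n_bins: int = 10) -> float:
    """ECE: population-weighted average of |accuracy - confidence| over
    equal-width confidence bins. Near 0 means well-calibrated scores."""
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            # weight each bin's calibration gap by its share of predictions
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return float(ece)
```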

LLM as Orchestrator, Not Classifier

The ATLAS architecture uses the LLM as a reasoning orchestrator rather than a primary classifier. Each sensor modality has a dedicated specialist model optimized for that modality's data characteristics: an STFT-based convolutional neural network for RF signature classification, a CFAR-equipped Doppler radar track processor for radar, a fine-tuned EfficientNet variant for electro-optical classification, and a ResNet acoustic fingerprinter. The LLM receives the structured outputs of these specialist models — classification labels, confidence scores, physical measurements — and reasons over this multi-modal evidence to produce a synthesized threat assessment. This division of labor maximizes both specialist accuracy and cross-modal reasoning quality.

🔬 LLM vs. Classical Fusion: Empirical Comparison

Published research on LLM-assisted multi-modal fusion for classification tasks — including cross-modal evidence synthesis where sensor modalities provide conflicting signals — consistently demonstrates accuracy improvements of 4–8 percentage points over ensemble ML baselines on ambiguous cases (novel or modified platforms), which represent the operationally highest-consequence classification scenarios. The ATLAS architecture is designed to exploit this advantage specifically. The DRONERF dataset (University of Tulsa / AFIT) and the UC San Diego RF Drone Dataset provide the public foundation for benchmarking the RF and acoustic classifier components; the ATLAS development roadmap includes systematic evaluation against these datasets in Phase 1. Latency modeling based on current inference hardware suggests LLM fusion overhead of 300–500ms is achievable within the 8-second decision window at the Tier 2 hardware profile.

Section 04

Sensor Modalities

Effective multi-sensor fusion requires understanding each modality's information content, operational range, weather sensitivity, and failure modes. The ATLAS architecture is designed to degrade gracefully when individual sensors are unavailable or degraded — the LLM fusion layer explicitly models sensor availability and confidence when synthesizing assessments.

| Modality | Detection Range | Classification Value | Limitations | Weather Sensitivity |
| --- | --- | --- | --- | --- |
| Ku-Band Radar | 3–10 km (small UAS) | Micro-Doppler signature (rotor type, blade count), flight kinematics, RCS estimation — strong for size and flight behavior classification | Clutter-limited at low altitude; struggles with very small targets; no RF/payload information | All-weather capable |
| RF / Spectrum Analyzer | 500 m–5 km (C2 link) | Drone manufacturer/model from RF protocol fingerprint; C2 link type (DJI OcuSync, analog FPV, encrypted military); frequency hopping pattern | No C2 detection for fully autonomous or encrypted military drones; range limited by terrain | All-weather; terrain-affected |
| Electro-Optical / IR | 500 m–3 km (day); 1 km IR | Visual airframe classification, payload identification, formation geometry for swarms; IR signature for jet-powered platforms | Limited by lighting (day only for EO); weather-degraded (fog, rain); requires pointing at target | Weather-sensitive |
| Acoustic Array | 50–500 m (UAS prop noise) | Motor count and type from propeller acoustic signature; detection before LOS available; passive (no emissions) | Short range; highly susceptible to wind and ambient noise; low classification specificity | Wind-sensitive |
| Passive RF / ELINT | 1–20 km | Electronic order of battle; military-grade encrypted C2 detection; SDR-based signal classification | Requires classified signal databases; ITAR-controlled; limited public information | All-weather |

Complementarity and Coverage Gaps

No single sensor provides sufficient classification information for all threat classes across all operational environments. This is the fundamental motivation for multi-sensor fusion. Radar provides all-weather detection and flight kinematics but cannot identify RF protocol or payload. RF sensing can identify manufacturer model and controller type but fails on autonomous or encrypted threats. EO provides the richest classification information but is weather-dependent and short-range. Acoustic provides passive detection before LOS but has the lowest classification specificity of any modality. The fusion layer must reason over whatever combination of sensors is available, explicitly modeling which threat classes are distinguishable given the available evidence.

Section 05

Multi-Sensor Data Architecture

The ATLAS data architecture normalizes heterogeneous sensor data streams into a unified representation that the LLM fusion layer can reason over. Each sensor produces data at different rates, in different formats, with different spatial and temporal resolutions — the ingestion and normalization pipeline translates these into a sensor-agnostic track record format that captures the classification-relevant information from each modality.

Sensor Layer — Physical Data Acquisition
  • Ku-Band Radar (10 Hz) · RF Spectrum Analyzer (continuous) · EO/IR Camera (30 fps) · Acoustic Array (44 kHz) · ADS-B / Mode C Transponder · Operator Reports
  ↓ Format-specific parsers → normalized sensor event stream

Ingestion & Normalization — Track Association
  • Track Initiation (CFAR) · Multi-Sensor Track Correlation · Kalman Track Filter · Geo-Registration · Temporal Alignment
  ↓ Correlated track objects → specialist classifiers per modality

Specialist Classifier Layer — Per-Modality ML
  • Doppler CNN (radar) · RF Protocol Fingerprinter · EO Object Detector (YOLOv8) · Acoustic ResNet Classifier · Kinematics Classifier (LSTM)
  ↓ Classifier outputs + confidence scores → LLM fusion context assembly

LLM Fusion Layer — Threat Reasoning
  • Multi-Modal Evidence Assembly · Threat Intel RAG Retrieval · LLM Threat Classification · Intent Assessment · Confidence Calibration
  ↓ Threat assessment → engagement recommendation engine

Decision Layer — Engagement & Operator Interface
  • Cost-Optimization Engine · Effector Inventory Model · Engagement Recommendation · Operator Display (Guardian C2) · Human Approval Gate · Immutable Engagement Log

Figure 1 — ATLAS Multi-Sensor Fusion Architecture: sensor data to engagement recommendation in <8 seconds

Track Correlation Algorithm

The most architecturally critical step before the LLM fusion layer is multi-sensor track correlation — associating observations from different sensors with the same physical UAS object. A quad-rotor detected by radar at a specific geolocation must be correlated with the RF emission detected in the same area at the same time and the EO track initiated on the object that matches the radar-reported size and heading. Track correlation failures — failing to associate observations from the same object, or incorrectly associating observations from different objects — produce incorrect multi-sensor evidence aggregation that the LLM cannot recover from.

The proposed ATLAS architecture uses a gating-and-assignment approach: spatial gating eliminates association candidates outside a physically plausible window given the track state and sensor uncertainty; the Hungarian algorithm solves the optimal association within the gate; and a track management component handles initiation of new tracks for unassociated observations and termination of tracks for which no observations have been received within the prediction window. Track quality scores — reflecting the number of sensors contributing, the age of each sensor's last observation, and the consistency of sensor estimates — are propagated to the LLM fusion context as explicit uncertainty metadata.
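The gating-and-assignment step can be sketched compactly. The example below uses a fixed Euclidean gate in 2D as a stand-in for the covariance-aware (Mahalanobis) gate a Kalman-filtered tracker would use; the gate radius is a notional value, and `scipy.optimize.linear_sum_assignment` provides the Hungarian-algorithm solve.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

GATE_M = 150.0  # notional spatial gate radius, metres

def associate(track_pos: np.ndarray, obs_pos: np.ndarray):
    """Gate-and-assign: observations outside the gate are made ineligible,
    then the Hungarian algorithm finds the minimum-cost association.
    track_pos: (n_tracks, 2), obs_pos: (n_obs, 2), both in metres."""
    # pairwise Euclidean distances, shape (n_tracks, n_obs)
    cost = np.linalg.norm(track_pos[:, None, :] - obs_pos[None, :, :], axis=-1)
    cost[cost > GATE_M] = 1e9  # soft-infinite cost outside the gate
    rows, cols = linear_sum_assignment(cost)
    # keep only pairs that actually fell inside the gate; ungated
    # observations go to track initiation, ungated tracks to coasting
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= GATE_M]
```

Observations left unassociated after this step would initiate new candidate tracks, and tracks with no gated observation would coast on prediction until the termination window expires, as described above.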

Section 06

RF Signature Analysis

Radio frequency analysis is the most discriminating sensor modality for identifying commercial and semi-commercial UAS platforms. Most commercial drones — DJI, Autel, Parrot, and their derivatives — use proprietary RF protocols for both C2 (controller-to-drone commands) and FPV video downlink. These protocols have characteristic frequency patterns, modulation schemes, packet structures, and frequency hopping algorithms that are as distinctive as a fingerprint. The RF classifier in ATLAS identifies drone manufacturer, model family, and controller type with high confidence when an RF C2 link is active.

RF Classification Architecture

The proposed RF classifier would operate on short-time Fourier transform (STFT) spectrograms of the captured RF spectrum — 2D time-frequency representations that convert RF signal data into an image-classification problem. A convolutional neural network trained on a library of known UAS RF signatures would process these spectrograms to produce protocol classification probabilities. The classifier design calls for training on the publicly available DRONERF dataset (University of Tulsa / AFIT) as a baseline, with planned augmentation through structured data collection during Phase 1 and 2 field trials to cover the most operationally significant commercial platforms.
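The spectrogram front-end described above is straightforward to sketch. This minimal example converts a complex IQ capture into the log-magnitude time-frequency image a CNN would consume; the sample rate and FFT length are notional parameters, not ATLAS design values.

```python
import numpy as np
from scipy.signal import stft

FS = 20_000_000  # notional 20 MS/s IQ capture rate

def rf_spectrogram(iq: np.ndarray, nperseg: int = 1024) -> np.ndarray:
    """STFT of a complex IQ capture, returned as a log-magnitude (dB)
    time-frequency image suitable as CNN classifier input."""
    # two-sided spectrum: complex baseband signals are not conjugate-symmetric
    _, _, Z = stft(iq, fs=FS, nperseg=nperseg, return_onesided=False)
    return 20 * np.log10(np.abs(Z) + 1e-12)  # epsilon avoids log(0)
```

Frequency-hopping C2 protocols show up in this representation as characteristic diagonal or staircase patterns across the time axis, which is what makes the image-classification framing effective.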

| RF Protocol Class | Platforms | Published Benchmark Accuracy¹ | Operational Implication |
| --- | --- | --- | --- |
| DJI OcuSync 2.0/3.0 | DJI Mavic 3, Mini 4 Pro, Air 3 series | ~95–98% (DRONERF literature) | Commercial platform; likely ISR or payload delivery; EW-defeatable with DJI frequency bands |
| DJI Lightbridge 2 | DJI Phantom 4, Inspire 2, older M-series | ~93–96% (DRONERF literature) | Older commercial; lower capability; same EW applicability |
| Analog FPV (5.8 GHz) | Modified FPV racing/strike drones | ~85–92% (class-dependent) | High-priority threat — FPV strike profile; harder to EW; may require kinetic response |
| Digital FPV (DJI O3 / ELRS) | Military-modified FPV, high-performance platforms | ~82–90% (emerging class) | High-capability modified platform; warrants elevated priority assessment |
| No RF Emission | Autonomous waypoint mission; encrypted military C2 | N/A — triggers no-RF threat flag | CRITICAL — autonomous or military-encrypted; cannot be EW-jammed; escalated threat level |
| Unknown Signature | Novel platform, prototype, or active spoofing | OOD detection triggers flag | High priority — novel threat or active EW spoofing attempt; human review required |

¹ Accuracy ranges reflect published results on public RF UAS datasets (DRONERF, UC San Diego). ATLAS target performance will be established through structured evaluation in Phase 1 against these benchmarks. Operational accuracy will depend on sensor hardware, deployment environment, and training corpus composition.

⚠ The No-RF Detection Case

A UAS track with no associated RF emission is the highest-priority anomaly in ATLAS's classification logic. It indicates either a fully autonomous platform operating on pre-programmed waypoints (which cannot be defeated by RF jamming), a military-grade platform with encrypted communications (which requires hard-kill options), or an adversary that has deliberately disabled RF emissions to evade RF-based detection. All three scenarios warrant immediate elevation to highest threat priority and override of the default cost-optimized engagement recommendation in favor of kinetic options.
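The override rule can be expressed as a simple escalation function. This is an illustrative sketch of the logic only — the priority labels and the minimum dwell time (included so a track the RF sensor has not yet had time to sample is not prematurely flagged) are notional parameters.

```python
NO_RF_PRIORITY = "CRITICAL"  # notional priority label

def threat_priority(base_priority: str, rf_detected: bool,
                    time_on_track_s: float, min_dwell_s: float = 3.0) -> str:
    """Escalate any track with no RF emission once the track has been
    observable long enough for the RF sensor to have sampled it."""
    if not rf_detected and time_on_track_s >= min_dwell_s:
        return NO_RF_PRIORITY
    return base_priority
```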

Section 07

Radar Track Processing

Radar provides the foundational tracking capability in most C-UAS deployments — it offers all-weather, 360-degree coverage, and can detect UAS at ranges of 3–10 kilometers for small rotary-wing platforms. Beyond detection and tracking, modern frequency-modulated continuous-wave (FMCW) and pulsed Doppler radars provide micro-Doppler signatures that reveal the rotational characteristics of propeller systems — a rich information source for classification even before any other sensor modality contributes.

Micro-Doppler Signature Analysis

The rotating propellers of a multi-rotor UAS produce characteristic micro-Doppler sidebands around the main target Doppler return. The frequency and spacing of these sidebands encodes the number of rotors, the number of blades per rotor, and the rotational speed — all of which are characteristic of specific UAS platforms and payload configurations. A DJI Mavic with four three-blade propellers produces a micro-Doppler signature distinctly different from a modified FPV quad with two-blade propellers running at 12,000 RPM.

The proposed ATLAS Doppler CNN would be trained on a library of micro-Doppler signatures drawn from published academic datasets and planned structured data collection during development. The network would classify UAS into categories based on rotor configuration, platform size, and characteristic flight dynamics — providing a classification signal that is independent of RF and EO information and therefore maintains classification capability even when those modalities are unavailable or degraded.

Kinematics-Based Intent Assessment

Beyond classification, radar track kinematics — speed, altitude, heading, acceleration, and flight path geometry — provide direct signals for intent assessment. An LSTM-based kinematics classifier trained on labeled flight behavior sequences can distinguish between:

  • ISR orbit: Circular or figure-eight flight pattern at constant altitude; persistent loiter over point of interest. Intent: reconnaissance.
  • Approach vector: Direct heading toward a protected asset with decreasing altitude; increasing speed in terminal phase. Intent: strike or close proximity ISR.
  • Standoff relay: Stationary hover at high altitude, outside sensor engagement range. Intent: communication relay or observation platform for coordinating ground attack.
  • Swarm geometric pattern: Multiple tracks maintaining geometric spacing with coordinated velocity changes. Intent: coordinated attack — highest threat priority.
  • Erratic/evasive: Random or highly maneuverable flight path. Possible false alarm (bird flock), possible active EW countermeasure engagement, or possible adversarial response to detection.
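The behavior classes above are separable largely by a handful of kinematic features computed over a sliding window of the radar track. The sketch below shows an illustrative feature extractor of the kind that would feed the LSTM classifier; the feature set and field names are assumptions for illustration, not the ATLAS feature design.

```python
import numpy as np

def kinematic_features(track: np.ndarray, asset: np.ndarray, dt: float = 0.1):
    """Window features from an (N, 2) position track in metres:
    mean speed, mean turn rate, and closing speed toward a protected asset.
    High closing speed suggests an approach vector; high turn rate with
    low net displacement suggests an ISR orbit or evasive behavior."""
    vel = np.diff(track, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    heading = np.arctan2(vel[:, 1], vel[:, 0])
    turn = np.abs(np.diff(np.unwrap(heading))) / dt      # rad/s
    rng = np.linalg.norm(track - asset, axis=1)
    closing = -np.diff(rng) / dt                         # positive = approaching
    return {
        "mean_speed": float(speed.mean()),
        "mean_turn_rate": float(turn.mean()) if turn.size else 0.0,
        "mean_closing_speed": float(closing.mean()),
    }
```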
Section 08

Electro-Optical Classification

Electro-optical and infrared sensors provide the richest classification information of any sensor modality — visible or IR imagery allows direct visual identification of airframe type, payload configuration, and formation geometry. The limitation is range and weather: EO classification is typically limited to 500 meters to 3 kilometers depending on optic quality and magnification, and degrades significantly in fog, rain, smoke, or low-light conditions. As a result, EO contributes high-confidence classification information at close range and serves as the corroborating sensor for radar and RF classifications at longer ranges.

YOLOv8 UAS Airframe Detector

ATLAS's EO classification module is designed around a fine-tuned YOLOv8 object detector for airframe classification from EO imagery. The detector would be trained on a curated dataset combining publicly available UAS imagery, synthetic renders of target platforms, and structured imagery collected during field evaluation phases. The design target is real-time bounding box detection with airframe class labels and confidence scores — producing a continuous classification stream that the track correlator associates with the corresponding radar/RF track.

Beyond airframe classification, the EO detector includes a payload classification head: optical sensors, weapons packages, dropsonde devices, and communication antennas are classified from visible imagery when sufficient resolution is available. Payload classification is critical for intent assessment — a DJI Mavic carrying an obvious optical sensor has a very different threat profile than one carrying a modified munition, even if the airframe classification is identical.

Swarm Geometry Analysis from EO

When multiple UAS are simultaneously within EO field of view, the detector's output includes multi-object track positions that the swarm analysis module uses to characterize swarm geometry. Precise angular spacing, formation type (linear, V-formation, circular encirclement, random distribution), and synchronized behavior are all discriminable from EO imagery and carry direct tactical implications for the engagement recommendation.
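One simple discriminator between a deliberate formation and a random scatter is the regularity of nearest-neighbour spacing. The sketch below computes the coefficient of variation of that spacing as an illustrative swarm-geometry feature; it is a hypothetical metric for exposition, not the ATLAS swarm analysis module.

```python
import numpy as np

def formation_regularity(positions: np.ndarray) -> float:
    """Coefficient of variation of nearest-neighbour spacing for an
    (N, 2) array of UAS positions: near 0 for a regular formation,
    larger for an uncoordinated scatter."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # exclude self-distance
    nn = d.min(axis=1)            # each object's nearest-neighbour distance
    return float(nn.std() / nn.mean())
```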

Section 09

Acoustic Fingerprinting

Acoustic detection is the most range-limited but uniquely passive UAS detection modality — an acoustic sensor array detects UAS before they are within line-of-sight of EO sensors and requires no emissions of its own, making it immune to electronic detection countermeasures. At ranges up to 500 meters, propeller acoustic signatures carry sufficient information for platform classification; at closer ranges, precise motor count and rotational speed can be estimated.

Acoustic Feature Extraction

UAS acoustic signals are characterized by tonal components (fundamental propeller blade pass frequency and its harmonics) superimposed on broadband motor noise. The blade pass frequency — the number of blades multiplied by the rotational speed in revolutions per second — is the primary classification feature. A quad-rotor with three-blade propellers at 8,000 RPM produces a blade pass frequency of approximately 400 Hz with harmonics at 800, 1200, and 1600 Hz. This signature is platform-specific and relatively robust to acoustic propagation effects.
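The blade pass arithmetic above is simple enough to state directly, using the worked example from the text (three-blade propellers at 8,000 RPM):

```python
def blade_pass_hz(blades: int, rpm: float) -> float:
    """Blade pass frequency: blade count times rotation rate in rev/s."""
    return blades * rpm / 60.0

def harmonics(blades: int, rpm: float, n: int = 4) -> list:
    """First n harmonics of the blade pass frequency — the tonal comb
    the acoustic classifier looks for over the broadband motor noise."""
    f0 = blade_pass_hz(blades, rpm)
    return [f0 * k for k in range(1, n + 1)]

# three-blade props at 8,000 RPM: 3 * 8000 / 60 = 400 Hz fundamental,
# with harmonics at 800, 1200, and 1600 Hz
```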

The acoustic module in the ATLAS design would use a ResNet classifier operating on mel-frequency cepstral coefficient (MFCC) features extracted from beamformed microphone array data. The classifier would be trained on the publicly available Drones Acoustic Dataset as a starting baseline. Published literature on acoustic UAS classification consistently shows significant accuracy degradation above approximately 15 km/h wind speeds — the ATLAS architecture therefore incorporates explicit meteorological sensor integration, with wind speed used to dynamically reduce the acoustic classifier's weight in the LLM fusion context when conditions are degraded.
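The wind-dependent de-weighting can be sketched as a simple ramp. The breakpoints below are notional — the text specifies only that degradation becomes significant above roughly 15 km/h (about 4.2 m/s); the full-weight threshold and the linear shape of the ramp are assumptions for illustration.

```python
def acoustic_weight(wind_mps: float, full_weight_below: float = 2.0,
                    cutoff: float = 4.2) -> float:
    """Fusion weight for the acoustic classifier as a function of wind
    speed: full weight in calm conditions, linearly reduced to zero at
    the ~15 km/h (~4.2 m/s) degradation threshold. Breakpoints notional."""
    if wind_mps <= full_weight_below:
        return 1.0
    if wind_mps >= cutoff:
        return 0.0
    return (cutoff - wind_mps) / (cutoff - full_weight_below)
```

The resulting weight would be attached to the acoustic evidence in the fusion context, so the LLM sees not just the classifier output but how much the current meteorological conditions entitle it to trust that output.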

🎙️ Acoustic Array Placement Optimization

The operational value of acoustic sensing in ATLAS will be highly dependent on array placement. Arrays positioned on elevated terrain or rooftops with good clear-sky exposure provide substantially better detection range and bearing accuracy than ground-level arrays in complex terrain. The ATLAS deployment design includes a sensor coverage planning tool that uses terrain analysis and prevailing wind direction data to recommend optimal acoustic array placement — generating sensor coverage heat maps that identify acoustic detection gaps for operator awareness prior to fielding.

Section 10

The LLM Fusion Layer

The LLM fusion layer is the architectural core of ATLAS — the component that transforms a set of parallel specialist classifier outputs into a synthesized, contextually-grounded threat assessment. It operates on a structured natural language representation of the multi-sensor evidence, retrieves relevant threat intelligence from a secure RAG corpus, and produces a classified threat assessment with calibrated confidence and an operator-readable justification.

Fusion Context Assembly

For each active track, the fusion context assembly module generates a structured prompt that presents the LLM with the complete evidence picture. A typical fusion context includes:

  • Track summary: Track ID, geolocation, altitude, velocity vector, time-on-track, number of contributing sensors, last update time per sensor.
  • Radar evidence: Micro-Doppler classifier output (platform class, confidence), estimated RCS, kinematic classifier output (behavior class, confidence), track quality score.
  • RF evidence: Protocol classifier output (manufacturer/model, controller type, confidence) or no-RF flag with elapsed time since last RF observation.
  • EO evidence: Airframe classifier output (model class, confidence), payload classifier output (payload type, confidence), or no-LOS flag with range-to-track.
  • Acoustic evidence: Acoustic classifier output (rotor count, confidence) or wind-degraded flag.
  • Retrieved threat intelligence: Relevant entries from the threat intelligence RAG corpus retrieved by the track's characteristics — known variants, recent TTPs, associated payload types.
  • Operational context: Protected asset types in track's approach vector, current effector inventory, time-to-engagement-window estimate, current threat posture.
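A fusion context assembler for the evidence picture above could be sketched as follows. The field names, layout, and the `no data` convention are illustrative assumptions; the real ATLAS context schema is not published.

```python
def assemble_fusion_context(track: dict) -> str:
    """Render one track's multi-sensor evidence as a structured prompt block."""
    lines = [f"TRACK {track['track_id']} — alt {track['altitude_m']} m, "
             f"speed {track['speed_mps']} m/s, sensors: {track['n_sensors']}"]
    for modality in ("radar", "rf", "eo", "acoustic"):
        ev = track.get(modality)
        if ev is None:
            lines.append(f"{modality.upper()}: no data")
        else:
            lines.append(f"{modality.upper()}: {ev['label']} "
                         f"(confidence {ev['confidence']:.2f})")
    lines.append(f"THREAT INTEL: {track.get('intel', 'none retrieved')}")
    return "\n".join(lines)

ctx = assemble_fusion_context({
    "track_id": "BRAVO-7", "altitude_m": 120, "speed_mps": 18, "n_sensors": 3,
    "radar": {"label": "quad-rotor", "confidence": 0.87},
    "rf": None,  # no-RF condition: no C2 emissions observed
    "eo": {"label": "DJI-class airframe", "confidence": 0.74},
    "acoustic": {"label": "4-rotor", "confidence": 0.61},
})
print(ctx)
```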

LLM Model Selection and Fine-Tuning

For unclassified deployments using commercial cloud infrastructure, the ATLAS architecture proposes using a fine-tuned GPT-4o via Azure Government Cloud — subject to selection through the LDEF evaluation methodology from WP-CR-2025-09 applied against a C-UAS-specific task benchmark to be developed in Phase 1. For classified deployments (IL4/IL5) and air-gapped operational environments, the design calls for deploying a fine-tuned Llama 3.1-70B on government-owned hardware — the sovereign deployment pattern described in WP-CR-2025-09 Section 02.

Both model variants would be fine-tuned on a curated dataset of labeled C-UAS engagement scenarios, threat intelligence documents, and synthetic multi-sensor evidence scenarios generated from the ATLAS simulation environment. Fine-tuning is specifically intended to improve performance on the multi-modal evidence synthesis task — teaching the model to correctly weight conflicting sensor evidence, apply physical constraints on threat characterization, and produce well-calibrated confidence scores on the defense-specific task distribution. Building and evaluating this fine-tuning dataset is a primary Phase 1 objective.

Threat Classification Output Schema

The LLM produces a structured JSON output conforming to the ATLAS threat assessment schema, which includes:

  • threat_class: Primary classification (e.g., "Commercial_COTS_Quad", "Modified_FPV_Strike", "Loitering_Munition", "Military_UAS", "Unknown").
  • intent_assessment: Behavioral intent classification ("ISR_Orbit", "Strike_Approach", "Relay_Platform", "Swarm_Element", "Unknown").
  • threat_level: Ordinal threat priority (1–5 scale, where 5 is highest).
  • overall_confidence: Calibrated probability that the threat_class and intent_assessment are correct (0.0–1.0).
  • sensor_contributions: Per-modality contribution scores to the final assessment, and any modalities with anomalous or conflicting evidence.
  • assessment_narrative: Human-readable 2–4 sentence justification of the classification, citing specific evidence from each sensor modality.
  • escalation_flags: Boolean flags for no-RF detected, novel signature, sensor spoofing indicator, swarm element, approaching engagement window.
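An instance conforming to the schema fields above might look like the following. All values are invented for a hypothetical track, not real ATLAS output.

```python
import json

assessment = {
    "threat_class": "Modified_FPV_Strike",
    "intent_assessment": "Strike_Approach",
    "threat_level": 4,
    "overall_confidence": 0.82,
    "sensor_contributions": {"radar": 0.40, "rf": 0.10,
                             "eo": 0.35, "acoustic": 0.15},
    "assessment_narrative": (
        "Radar micro-Doppler and EO airframe evidence agree on a small "
        "FPV-class quad carrying an external payload. No C2 emissions were "
        "observed, consistent with an autonomous terminal approach."),
    "escalation_flags": {
        "no_rf_detected": True, "novel_signature": False,
        "sensor_spoofing_indicator": False, "swarm_element": False,
        "approaching_engagement_window": True},
}

# Minimal structural validation against the required field set
REQUIRED = {"threat_class", "intent_assessment", "threat_level",
            "overall_confidence", "sensor_contributions",
            "assessment_narrative", "escalation_flags"}
assert REQUIRED <= assessment.keys()
print(json.dumps(assessment, indent=2))
```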
Section 11

Swarm Behavior Detection

Single-UAS threat classification is a well-defined problem. Swarm detection is qualitatively harder: it requires reasoning about the relationships between multiple simultaneous tracks, detecting coordinated behavior that emerges from the collective rather than from any individual platform, and characterizing the swarm's geometric and tactical properties to support engagement prioritization. The ATLAS architecture includes a dedicated swarm analysis module designed to operate across the full track picture in parallel with individual track classification.

Swarm Geometric Pattern Recognition

Coordinated UAS swarms exhibit geometric patterns that reflect their tactical objectives. The ATLAS swarm pattern classifier is designed to recognize several operationally significant configurations:

  • Saturation attack (distributed): Multiple UAS approaching from different azimuth angles simultaneously, designed to overwhelm point-defense engagement capacity. Key indicator: convergent headings toward a common target point from azimuths separated by more than 60 degrees. Engagement implication: requires simultaneous engagement across multiple approach vectors — single-direction defense is insufficient.
  • Sensor-effector pair: One high-altitude ISR UAS providing targeting data to multiple lower-altitude strike platforms. Key indicator: one stationary high-altitude track with RF link to multiple moving tracks approaching target. Engagement implication: prioritize the ISR platform to degrade swarm coordination.
  • Decoy-strike pair: One or more UAS triggering defensive engagement while additional strike platforms approach from a different vector. Key indicator: one conspicuous RF-active track with high RCS alongside additional lower-observable tracks. Engagement implication: do not commit all effectors to the conspicuous track.
  • Progressive release: Successive UAS launches from a ground platform at timed intervals, designed to deplete interceptor inventory before the final strike package arrives. Key indicator: periodic track initiations from same geographic origin at consistent intervals. Engagement implication: reserve interceptors; switch to non-kinetic options early.
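The saturation-attack indicator above (convergent tracks from azimuths separated by more than 60 degrees) can be illustrated with a simple geometric check. Positions are 2D Cartesian coordinates relative to the protected asset; the function is a sketch, not the ATLAS pattern classifier.

```python
import math

def is_saturation_pattern(tracks, target, min_spread_deg=60.0):
    """Flag a distributed saturation attack: multiple tracks around a
    common target point with at least one azimuth pair separated by more
    than `min_spread_deg` degrees."""
    if len(tracks) < 2:
        return False
    azimuths = [math.degrees(math.atan2(y - target[1], x - target[0])) % 360
                for x, y in tracks]
    # Largest pairwise angular separation (wrap-aware)
    spread = max(
        min(abs(a - b), 360 - abs(a - b))
        for i, a in enumerate(azimuths) for b in azimuths[i + 1:])
    return spread > min_spread_deg

# Two inbound tracks on opposite sides of the asset -> saturation geometry
print(is_saturation_pattern([(10, 0), (-10, 0)], (0, 0)))  # True
print(is_saturation_pattern([(10, 0), (10, 1)], (0, 0)))   # False
```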

Swarm Intent Prediction

Beyond recognizing current swarm geometry, the ATLAS trajectory prediction module would project each track's future position over a configurable window (design target: 30 seconds) and compute the geometric centroid of the projected swarm positions. The convergence point, arrival time, and swarm density at the projected impact point would constitute the swarm intent estimate — feeding directly into the engagement recommendation engine's prioritization logic. This projection is designed to update at 2 Hz throughout the engagement, continuously refreshing the engagement recommendation as the swarm maneuvers.
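The projection-and-centroid step can be sketched under the simplest usable motion model, constant velocity. The 30-second horizon matches the design target above; everything else is an illustrative assumption.

```python
def project_swarm_centroid(tracks, horizon_s=30.0):
    """Project each track forward under constant velocity and return the
    centroid of the projected positions — a simple swarm convergence
    estimate. Each track is (x, y, vx, vy) in meters and meters/second."""
    projected = [(x + vx * horizon_s, y + vy * horizon_s)
                 for x, y, vx, vy in tracks]
    n = len(projected)
    cx = sum(p[0] for p in projected) / n
    cy = sum(p[1] for p in projected) / n
    return cx, cy

# Two tracks closing on the origin from opposite sides converge there
tracks = [(-300.0, 0.0, 10.0, 0.0), (300.0, 0.0, -10.0, 0.0)]
print(project_swarm_centroid(tracks))  # (0.0, 0.0)
```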

Section 12

Engagement Recommendation Engine

The engagement recommendation engine translates the LLM's threat assessment into a prioritized, cost-optimized engagement recommendation for each active track — accounting for the current threat characterization, available effectors, effector inventory levels, collateral effects constraints, and time-to-engagement-window. The engine's primary objective is to minimize cost-exchange ratio while maximizing probability of defeat for each threat, subject to the constraint that high-confidence high-priority threats are always engaged even if the optimal effector is costly.

The Effector Selection Matrix

| Effector Type | Cost/Engagement | Effective Against | Constraints | ATLAS Recommendation Priority |
| --- | --- | --- | --- | --- |
| RF Jamming (spot) | ~$50–$500/event | COTS UAS with active RF C2 (DJI, analog FPV) | Ineffective against autonomous/encrypted platforms; collateral interference risk | Primary for RF-confirmed COTS threats |
| GPS Spoofing | ~$1,000–$5,000/event | GPS-dependent navigation; COTS platforms | Area effect — affects friendly navigation; requires deconfliction | Secondary option; operator confirmation required |
| High-Energy Laser (HEL) | ~$1–$10/shot | Small/medium UAS; FPV; loitering munitions in terminal phase | Weather-dependent (fog/dust degrades beam); requires dwell time; range-limited | Preferred for high-volume threats; conserves missiles |
| C-UAS Net/Physical | ~$1,000–$5,000/event | Slow COTS UAS at short range | Very short range (50–200 m); single-use | Last-resort close-in option |
| SHORAD Missile (NASAMS, IRIS-T) | ~$300K–$1M/shot | Loitering munitions, Group 3+ UAS, cruise missiles | High-value asset; limited inventory; reload time | Reserved for threats EW/HEL cannot defeat |
| Patriot/THAAD | ~$4–$12M/shot | Ballistic missiles, high-altitude UAS, sophisticated cruise missiles | Strategic asset; must not be wasted on COTS; politically sensitive | Only if no other option; requires commander override |

Cost-Optimization Logic

The engagement engine implements a constrained optimization: maximize probability of defeat across all active tracks subject to inventory constraints and minimum engagement confidence thresholds. For each threat-effector combination, the engine computes the expected defeat probability (from the threat assessment's confidence score and historical engagement data), the engagement cost, and whether the threat's priority warrants commitment of the effector. The recommendation is the minimum-cost engagement sequence that achieves the threshold defeat probability for each threat given current inventory.

A critical policy constraint: if no effector below the Patriot/THAAD cost tier achieves the minimum defeat probability for a threat, ATLAS does not recommend withholding engagement. It escalates to the next cost tier and flags the engagement for commander awareness rather than silently recommending the cheap but ineffective option. Operational effectiveness takes precedence over cost optimization when they are in conflict.
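The optimization and the escalation policy above can be condensed into one sketch: choose the cheapest effector meeting the defeat-probability threshold, and escalate with a flag rather than withhold when nothing below the top tier qualifies. Effector names, costs, and probabilities are invented for illustration.

```python
def recommend_effector(candidates, p_min=0.85):
    """Cheapest effector whose expected defeat probability meets the
    threshold. `candidates` maps effector name -> (cost_usd, p_defeat).
    Returns (effector, needs_commander_flag)."""
    viable = [(cost, name) for name, (cost, p) in candidates.items()
              if p >= p_min]
    if not viable:
        # Policy: never recommend withholding engagement — escalate to the
        # most effective option and flag it for commander awareness.
        best = max(candidates, key=lambda n: candidates[n][1])
        return best, True
    return min(viable)[1], False

candidates = {
    "rf_jamming": (500, 0.30),        # autonomous threat: jamming ineffective
    "hel": (10, 0.60),                # weather-degraded beam this engagement
    "shorad_missile": (500_000, 0.92),
}
print(recommend_effector(candidates))  # ('shorad_missile', False)
```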

Section 13

Human-in-the-Loop Design

ATLAS is an engagement decision support system, not an autonomous engagement system. Every engagement recommendation requires explicit human authorization before actuation — this is both an architectural requirement and a legal and ethical obligation under DoD Directive 3000.09 (Autonomous Weapons), which requires human judgment in the loop for lethal force decisions. The HITL design principles in ATLAS are not compliance afterthoughts — they are first-class design requirements that shape the entire recommendation presentation architecture.

"The goal is not to remove humans from the loop — it is to give humans the right information, in the right format, at the right time, to make better decisions faster. ATLAS compresses the decision cycle from minutes to seconds by eliminating the cognitive work of evidence gathering and synthesis, while preserving human judgment for the decision that matters."
— Kurt A. Richardson, PhD, Continuum Resources LLC

The Operator Interface Design Principles

  • Prioritized situational display: All active tracks are displayed on a geospatial map with threat-level color coding. Tracks are automatically sorted by priority — operators always see the highest-threat tracks at the top of the engagement queue without scanning.
  • Single-click approval workflow: Each engagement recommendation is presented as a card with the recommended effector, estimated cost, defeat probability, confidence score, and the 2–4 sentence assessment narrative. Approval requires a single click; rejection requires no action (the recommendation expires at end-of-window).
  • Evidence transparency: Expanding any track card shows the per-sensor evidence summary — what each sensor contributed to the classification, which sensors are degraded, and the specific evidence items that produced the recommendation. Operators can see exactly why ATLAS made the recommendation it did.
  • Override and escalate controls: Operators can override the system's recommendation for any track — selecting a different effector, declining engagement, or escalating to a higher authority. All overrides are logged with the operator ID and reason code for post-engagement analysis.
  • No-recommendation posture: When the system's confidence in a track's classification falls below the deployment's configured threshold, it presents the track without a recommendation — flagging it for operator judgment rather than producing a low-confidence recommendation that may be acted upon uncritically.

Role-Based Authorization

ATLAS specifies a four-tier authorization model: Observer (full display access, no engagement controls); Recommender (can submit engagement recommendations to the approval queue but cannot execute); Engagement Authority (can approve and execute recommendations up to a configurable cost and consequence tier); Commander Override (can approve engagements in all tiers including those requiring escalation). This role model maps directly to military chain-of-command authority structures and would support delegation of engagement authority to the lowest appropriate echelon without removing senior commander oversight for high-consequence decisions.
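The four-tier model maps naturally onto an ordered enumeration. The $1M cost cap below is an invented example of the "configurable cost and consequence tier", not an ATLAS default.

```python
from enum import IntEnum

class Role(IntEnum):
    """Four-tier authorization model; integer ordering encodes authority."""
    OBSERVER = 0              # display access only
    RECOMMENDER = 1           # may queue recommendations, cannot execute
    ENGAGEMENT_AUTHORITY = 2  # may execute up to a configured cost tier
    COMMANDER_OVERRIDE = 3    # may approve all tiers, including escalations

def can_execute(role: Role, cost_usd: float,
                engagement_authority_cap: float = 1_000_000) -> bool:
    """May this role approve and execute an engagement of this cost?"""
    if role >= Role.COMMANDER_OVERRIDE:
        return True
    if role == Role.ENGAGEMENT_AUTHORITY:
        return cost_usd <= engagement_authority_cap
    return False  # Observer and Recommender never execute

print(can_execute(Role.ENGAGEMENT_AUTHORITY, 300_000))    # True
print(can_execute(Role.ENGAGEMENT_AUTHORITY, 4_000_000))  # False
print(can_execute(Role.COMMANDER_OVERRIDE, 4_000_000))    # True
```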

Section 14

Interactive Threat Classification Demo

The following tools simulate the ATLAS threat classification and engagement recommendation outputs for representative C-UAS scenarios. The Sensor Fusion Board shows a live-simulated multi-sensor feed for an active track. The Threat Classification Explorer shows how different drone profiles produce distinct classification outputs across sensor modalities.

[Interactive demo: ATLAS Sensor Fusion Board — Track BRAVO-7 (live simulation, updating at 1 Hz)]
[Interactive demo: Threat Classification Explorer — select a UAS threat profile to view the simulated LLM fusion output across all sensor modalities]
Section 15

Adversarial Robustness

A sophisticated adversary deploying UAS against a system using the ATLAS architecture will attempt to defeat or degrade the classification layer. The adversarial threat model for C-UAS AI is distinct from the general LLM adversarial model in WP-CR-2025-04 — it involves physical adversarial attacks on sensor data rather than text-based prompt injection. The ATLAS adversarial robustness design addresses the specific physical attack vectors available to a UAS adversary.

Physical Adversarial Attack Taxonomy

| Attack Type | Mechanism | Effect on ATLAS | Mitigation |
| --- | --- | --- | --- |
| RF Protocol Spoofing | Transmitting DJI protocol emissions from a military UAS to appear as a COTS platform | RF classifier assigns a low threat level to a military platform; wrong effector recommendation | Cross-modal consistency check — DJI protocol from a platform with military-grade kinematics or EO signature triggers a spoofing flag |
| Radar Cross-Section Reduction | Radar-absorbing materials, small form factor, low-altitude terrain masking | Radar detection delayed or missed; radar classifier receives low-confidence input | Multi-sensor dependency — ATLAS notes reduced radar confidence and weights other modalities higher; acoustic/RF can initiate tracks independently of radar |
| RF Silence (No C2 Emissions) | Pre-programmed autonomous mission; C2 encrypted and spread-spectrum | RF classifier produces a no-emission flag; threat level should be elevated | No-RF flag is a hardcoded threat-level escalation in ATLAS — absence of RF is treated as a higher threat, not a lower one |
| Acoustic Suppression | Muffled/shrouded propellers; fixed-wing configuration for lower acoustic signature | Acoustic classifier confidence reduced; detection range shortened | Acoustic is a supplementary modality — degradation is explicitly modeled; other sensors maintain primary classification |
| Decoy Swarm | High-signature decoy platforms draw engagement while stealthy strike platforms approach | Effector inventory depleted on decoys; real threat reaches engagement window unengaged | Swarm geometry analysis identifies the suspicious pattern; engagement cost optimization recommends non-kinetic options for decoy candidates |
| Adversarial RF Injection | Ground-based transmitter floods the RF spectrum with false drone signatures to saturate ATLAS track initiation | False-track flood overwhelms the operator interface; denial of service against real tracks | Track quality score requires multi-sensor correlation — RF-only tracks below the quality threshold are suppressed or deprioritized |

The Cross-Modal Consistency Principle

The most effective structural defense against physical adversarial attacks is the cross-modal consistency check embedded in the LLM fusion layer. An adversary can plausibly spoof one sensor modality — transmitting false RF signatures, applying radar-absorbing coatings, or muffling acoustic signatures. Simultaneously spoofing all four independent sensor modalities in a consistent way is significantly harder. The LLM fusion layer is explicitly instructed to reason about inter-modality consistency as a security signal: when two or more sensors produce conflicting characterizations of the same track, the inconsistency itself is a classification-relevant feature that should be reported in the threat assessment and may indicate adversarial activity.
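The consistency principle can be illustrated with a toy check over two modalities. The "expected pairings" table below is an invented stand-in for real platform doctrine; a deployed system would reason over all four modalities, not a lookup set.

```python
# Hypothetical radar/RF pairings considered mutually consistent
CONSISTENT = {
    ("cots_quad", "dji_protocol"), ("cots_quad", "no_rf"),
    ("military_uas", "encrypted_rf"), ("military_uas", "no_rf"),
}

def cross_modal_flags(radar_class: str, rf_class: str) -> list:
    """Return spoofing indicators when independent modalities disagree.

    A DJI protocol paired with military-grade radar kinematics is the
    spoofing pattern described in the taxonomy above.
    """
    flags = []
    if (radar_class, rf_class) not in CONSISTENT:
        flags.append(f"inconsistent: radar={radar_class} vs rf={rf_class}")
    return flags

print(cross_modal_flags("military_uas", "dji_protocol"))
# ['inconsistent: radar=military_uas vs rf=dji_protocol']
print(cross_modal_flags("cots_quad", "dji_protocol"))  # []
```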

Section 16

Deployment & ITAR Architecture

ATLAS operates across a deployment spectrum from unclassified test environments to IL4/IL5 classified operational deployments. The deployment architecture is tiered to match the classification requirements and operational constraints of each environment, with explicit ITAR controls governing the export and sharing of sensor signature data, threat intelligence, and the trained classification models themselves.

ITAR and Export Control Considerations

Several components of the ATLAS architecture are subject to International Traffic in Arms Regulations (ITAR) and Export Administration Regulations (EAR):

  • RF signature library: A comprehensive library of UAS RF signatures — particularly those for military and semi-military platforms — constitutes controlled defense technical data under USML Category XI. The ATLAS RF classifier's training data and model weights for military platform classification are ITAR-controlled.
  • Threat intelligence corpus: RAG corpus entries containing classified or controlled threat intelligence are handled within classification-appropriate partitions consistent with the Secure RAG Architecture (WP-CR-2025-10). No classified threat intelligence is accessible from unclassified deployment tiers.
  • Engagement optimization models: Models that directly optimize engagement sequencing against specific foreign military UAS platforms may constitute controlled defense services under ITAR Category XII. Legal review is required before sharing or licensing to non-U.S. entities.
  • Software itself: ATLAS software that specifically controls or directs C-UAS effectors (beyond providing recommendations for human action) may require DDTC licensing for export. The current HITL architecture — where software provides recommendations only — is designed to minimize this exposure.

Deployment Tier Architecture

| Tier | Environment | LLM Backend | Threat Intel Corpus | Classification Handling |
| --- | --- | --- | --- | --- |
| Tier 1 — Demo/Unclassified | Training ranges, exercises, vendor demos | Azure Gov OpenAI (GPT-4o) | Public datasets only; no controlled signatures | Unclassified; FOUO |
| Tier 2 — CUI Operational | CONUS base defense; JIDA programs | Azure Gov + fine-tuned model | CUI threat intel; controlled RF signatures | CUI / IL4 |
| Tier 3 — Secret OCONUS | Forward operating base; theater air defense | On-premises Llama 3.1-70B (air-gapped) | SECRET corpus; full military platform library | SECRET / IL5 |
| Tier 4 — Tactical Disconnected | Forward edge; EMSO environment; no connectivity | Llama 3.2-3B (int4 quantized; local only) | Pre-loaded restricted corpus; no updates during disconnect | CUI / reduced capability |

Section 17

C-UAS AI Maturity Assessment

The following assessment measures your organization's current C-UAS AI decision support maturity across six domains. Rate each capability on the 1–4 scale to identify your current state and the highest-priority investment areas. This tool supports self-assessment for programs evaluating whether ATLAS or a similar AI fusion layer would provide operationally significant improvement over current capabilities.

[Interactive self-assessment: C-UAS AI Decision Intelligence Maturity — 1 = Manual/None · 2 = Assisted · 3 = Automated · 4 = AI-Optimized]
Section 18

Implementation Roadmap

The ATLAS development and deployment roadmap is designed to deliver demonstrable value at each phase while building toward full multi-sensor fusion capability. The phased approach allows programs to evaluate the architecture in progressively more operationally representative environments before committing to the next tier's investment, and creates structured opportunities to collect the real-world sensor data and labeled engagement scenarios that will continuously improve classifier accuracy.

P1
Months 1–3 · Foundation
Dataset, Baseline Classifier, and Prototype Interface

Acquire and characterize the DRONERF dataset and UC San Diego RF Drone Dataset. Train baseline RF and acoustic classifiers. Build the ATLAS prototype browser interface demonstrating sensor data ingestion, LLM-assisted classification, and engagement recommendation on simulated scenarios. Publish accuracy metrics for SBIR Phase I submission.

Deliverables: DRONERF Acquisition · RF Classifier v1 · Acoustic Classifier v1 · Prototype UI · SBIR White Paper
P2
Months 4–6 · Integration
Radar and EO Integration + Sensor Partner Engagement

Integrate ATLAS with Dedrone or FortemTech sensor APIs for live radar and EO data. Deploy the multi-sensor track correlator. Fine-tune the LLM fusion layer on labeled engagement scenarios. Conduct first external demonstrations to Army PdM CUAS and prime contractor partners (Booz Allen, L3Harris). Target SBIR Phase I award or first pilot contract.

Deliverables: Radar Integration · EO Integration · Track Correlator · LLM Fine-Tune v1 · External Demo
P3
Months 7–12 · Validation
Live Evaluation, Swarm Module, and Guardian C2 Integration

Participate in a C-UAS test event (e.g., AFWERX C-UAS exercise or Army Yuma test range) to evaluate ATLAS performance against live UAS targets. Deploy swarm detection and geometric pattern recognition. Integrate with Guardian C2 operator interface. Pursue SBIR Phase II and OTA consortium entry (NSTXL, NCMS).

Deliverables: Live Test Event · Swarm Module · Guardian C2 Integration · SBIR Phase II · OTA Consortium
P4
Months 13–18 · Operational
IL4/IL5 Deployment, EW Integration, and Program of Record

Achieve ATO for IL4 deployment. Integrate EW effector interface for RF jamming recommendation-to-execution path. Complete LDEF evaluation per WP-CR-2025-09 for the deployed LLM. Pursue program of record entry through JCO C-UAS portfolio or service-specific C-UAS program office. Establish ATLAS as a continuously-delivering capability under the DevSecOps framework from WP-CR-2025-06.

Deliverables: IL4 ATO · EW Integration · LDEF Evaluation · PoR Entry · Continuous Delivery
Section 19

The Continuum Approach

Every capability required to build and deploy ATLAS exists within Continuum today — proven across active defense programs, though ATLAS itself is a new application of those capabilities to the C-UAS domain. The LLM evaluation methodology is published in WP-CR-2025-09. The Secure RAG Architecture is documented in WP-CR-2025-10. The adversarial robustness framework is in WP-CR-2025-04. The DevSecOps delivery pipeline and IL4/IL5 deployment methodology are demonstrated through the Space Force Operational Acceptance. The MBSE and systems engineering capability to design the sensor-to-effector architecture is INCOSE-certified and operationally validated across aerospace and defense programs. ATLAS is not a pivot — it is a new application of an existing, proven defense AI research and delivery capability to a mission domain where that capability is urgently needed.

✓ Continuum C-UAS AI Services
  • ATLAS Early Engagement & Co-Development: Continuum is actively seeking program office and prime contractor partners to co-develop and evaluate the ATLAS architecture. Early engagement provides input to Phase 1 development priorities — including sensor interface requirements, threat class coverage, and operator workflow design — and positions partners for early access to prototype demonstrations as the system matures. If your program has an unmet C-UAS decision intelligence need, we want to hear about it now, while the architecture is still being shaped. Contact Continuum to schedule a technical discussion with the ATLAS development team.
  • Sensor Integration Consulting: Integration of ATLAS with existing program-organic sensors — Dedrone, FortemTech, Echodyne, DJI AeroScope, custom radar systems. Continuum's systems engineering capability designs the sensor interface architecture; the ATLAS SDK provides standardized ingestion connectors for major C-UAS sensor platforms.
  • LLM Fusion Architecture Design: Custom design of an LLM-assisted fusion layer for programs with organic sensor infrastructure but no AI decision layer. Architecture aligned to the program's classification environment, sensor mix, threat profile, and effector portfolio.
  • SBIR / STTR Technical Support: Support for programs pursuing SBIR Phase I/II funding for C-UAS AI. Continuum's published research (this paper, WP-CR-2025-09, WP-CR-2025-10) provides the technical credibility foundation for competitive SBIR submissions. WOSB/EDWOSB status strengthens small business designation scoring.
  • C-UAS Program Governance (SynQra): The C-UAS acquisition portfolio is expanding rapidly — Army PdM CUAS, JCO, AFRL, and DIU are issuing contracts, modifications, and delivery orders at unprecedented velocity. SynQra's AI-powered contract lifecycle management and CLIN tracking capability applies directly to C-UAS program offices today, without additional development, creating a rapid entry point into programs that may subsequently fund ATLAS.

Strategic Positioning

Continuum enters the C-UAS domain as an AI software integrator, not as a sensor or effector manufacturer. This positioning is deliberately complementary to the major C-UAS hardware providers — Dedrone, FortemTech, Echodyne, L3Harris — rather than competitive with them. The AI decision intelligence layer that ATLAS provides is the missing component in most existing C-UAS deployments. By partnering with sensor vendors rather than competing with them, Continuum maximizes its addressable market while leveraging the distribution channels and customer relationships of established C-UAS hardware players.

Section 20

Conclusion

The UAS threat landscape has outpaced the decision-making infrastructure designed to counter it. Sensors can detect, track, and characterize drones. Effectors can defeat them. The gap — the intelligence layer that synthesizes multi-modal sensor evidence into calibrated threat assessments and cost-optimized engagement recommendations in the seconds available before a threat reaches its engagement window — is where AI provides transformative, mission-critical value.

LLM-assisted multi-sensor fusion is not the only AI approach to C-UAS threat classification, but it is uniquely suited to the specific challenges of the operational environment: heterogeneous evidence from sensors with different failure modes, novel threats outside the training distribution of any single classifier, adversarially spoofed sensor inputs that require cross-modal consistency checking, and the swarm coordination problem that requires reasoning about collective behavior rather than individual tracks. The ATLAS architecture operationalizes these capabilities within the human-in-the-loop framework required by DoDD 3000.09 and within the security architecture required for classified defense deployments.

The cost-exchange crisis is ultimately a software problem. The hardware needed to defeat drone threats at scale — directed energy weapons, non-kinetic EW systems, precision short-range missiles — either exists or is in development. The AI intelligence layer that matches the right tool to the right threat, in the right sequence, faster than adversaries can adapt, is the force multiplier that makes the hardware sustainably effective. ATLAS is that layer.
— Kurt A. Richardson, PhD, Continuum Resources LLC, 2025
Start a Conversation

Ready to See ATLAS in Action?

Contact Continuum Resources to discuss the ATLAS architecture, explore co-development opportunities, or shape Phase 1 requirements with our team.

References

  • [DRONERF-2019] Al-Emadi, S. et al. — "Enhanced RF-based drone detection and identification using deep learning" — IEEE Access, 2019. Primary RF classification dataset and methodology basis for ATLAS RF classifier.
  • [MICRO-DOPPLER] Chen, V.C. — "The Micro-Doppler Effect in Radar" — Artech House, 2011. Foundational reference for micro-Doppler UAS classification methodology.
  • [ACOUSTIC-UAS] Shi, Z. et al. — "Acoustic Features for UAV Classification" — International Journal of Aeronautics, 2018. Acoustic signature classification methodology basis for ATLAS acoustic module.
  • [YOLOV8] Jocher, G., Chaurasia, A., Qiu, J. — "Ultralytics YOLOv8" — 2023. Object detection architecture deployed in ATLAS EO classification module.
  • [SWARM-TACTICS] Davis, N. — "Swarming and the Future of Warfare" — RAND Corporation, 2008. Tactical swarm behavior patterns driving ATLAS swarm geometry analysis.
  • [CUAS-SOF] Joint Chiefs of Staff — "Counter-Unmanned Aircraft System (C-UAS) Operations" — Joint Publication 3-01 (Update), 2023. Authoritative reference for DoD C-UAS operational doctrine and engagement authority.
  • [DODD-3000-09] Department of Defense — "Autonomous Weapons Systems" — DoDD 3000.09, January 2023. Human-in-the-loop requirements for AI-assisted lethal force decisions underlying ATLAS HITL design.
  • [COST-EXCHANGE] Congressional Budget Office — "The Cost of Defending Against Drone Attacks" — CBO, 2024. Cost-exchange ratio data for C-UAS engagement economics analysis.
  • [IRON-SLING] Gettinger, D. & Holland Michel, A. — "A Survey of Drone Warfare" — Center for the Study of the Drone, Bard College, 2024. Operational data on UAS engagement patterns referenced in threat landscape analysis.
  • [CR-04] Richardson, K.A. — "WP-CR-2025-04: Prompt Injection & Adversarial Attacks on LLM Systems" — Continuum Resources, 2025. Adversarial robustness framework applied to ATLAS sensor spoofing defenses.
  • [CR-09] Richardson, K.A. — "WP-CR-2025-09: LLM Defense Evaluation" — Continuum Resources, 2025. LDEF methodology governing ATLAS LLM model selection and continuous evaluation.
  • [CR-10] Richardson, K.A. — "WP-CR-2025-10: Secure RAG Architectures" — Continuum Resources, 2025. Secure RAG architecture governing the ATLAS threat intelligence retrieval substrate.