
TECHNOLOGY BOOK

An introduction to AIRSPEED's acoustic C-UAS sensing technology for Defence and Security professionals

A New Air Defence Challenge

Drones pose a significant threat on the battlefield due to their ability to conduct surveillance, gather intelligence, and carry out precision strikes with minimal risk to the operator. Their small size, agility, and low radar signature make them difficult to detect and counter using traditional air defence systems. Adversaries can use drones for reconnaissance, targeting enemy positions, or even deploying explosives in kamikaze-style attacks. Additionally, autonomous and radio-silent drones can bypass electronic warfare measures, further complicating defence efforts. Their affordability and accessibility make them a powerful tool for asymmetric warfare, allowing even non-state actors to challenge conventional military forces.



A military UAV operator launches a drone armed with a grenade to drop into enemy fortifications and trenches. (SHUTTERSTOCK IMAGES)


In civilian settings, drones present security risks to airports, critical infrastructure, and public events. Unauthorised drones can interfere with air traffic, potentially causing catastrophic accidents. They can also be weaponised for terrorist attacks or used for smuggling contraband across borders and into prisons. Privacy concerns are another issue, as drones equipped with high-resolution cameras and other sensors can be exploited for espionage, corporate surveillance, or stalking. The affordability and accessibility of drones make them an asymmetric threat, allowing even small groups or individuals to disrupt operations and challenge law enforcement and military defences. As drone technology advances, the need for effective countermeasures becomes increasingly urgent.


The Technology Landscape

Many traditional air defence systems are ill-suited for detecting and tracking small UAVs. To address the growing drone threat, numerous new technologies and products have emerged. Today, the vast majority of Counter-UAS (C-UAS) sensing relies on RF monitoring—specifically, detecting radio signals transmitted by either the drone or its remote pilot. However, determined adversaries have found ways to bypass these systems by using autonomous drones or alternative command-and-control (C2) links, such as fibre optics, creating radio-silent “dark drones.”

A counter-drone solution requires sensors, effectors, and a command-and-control system.


Cameras are valuable tracking tools but typically have a narrow field of view, requiring integration with other sensors to effectively locate and follow targets. Dedicated counter-drone radar systems offer long-range performance but are often expensive, susceptible to jamming, and, as active sensors, can expose the operator’s position—an undesirable trait in tactical environments.


Passive acoustic sensing helps mitigate some of these vulnerabilities, particularly in low-visibility conditions or contested RF environments. We see machine listening systems as a complementary technology, enhancing a broader, layered approach to airspace surveillance.

C-UAS systems often integrate technology from different suppliers. AIRSPEED works as part of broader industrial consortia to build layered counter-drone systems.


Passive Distributed Acoustic Sensing

A network of AIRSPEED's TS-16 acoustic remote sensors at the British Army's AWE-24 exercise, Salisbury Plain, UK.

 

Our solution deploys networks of passive acoustic sensors for wide-area coverage. Each sensor is equipped with an integrated mesh radio transceiver and GPS receiver, allowing it to quickly determine its position and seamlessly connect with other sensors.

Each unit provides a hemispherical sensing region, typically detecting and tracking small quadrotors within a 200–300 m range in rural environments—equating to a 30–70 acre coverage area per sensor. The detection footprint expands with additional sensors, creating a scalable network.
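
As a rough sanity check on those figures, the short sketch below converts a 200–300 m detection radius into an approximate ground footprint, assuming the hemispherical sensing region projects onto a circular area. This is an illustration only, not AIRSPEED's coverage-planning methodology.

```python
import math

SQ_M_PER_ACRE = 4046.86  # square metres in one acre

for radius_m in (200, 300):
    # Circular ground footprint beneath the hemispherical sensing region.
    area_acres = math.pi * radius_m ** 2 / SQ_M_PER_ACRE
    print(f"{radius_m} m detection radius ≈ {area_acres:.0f} acres")

# Prints roughly 31 and 70 acres, consistent with the quoted 30–70 acre range.
```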


Our sensors function as both endpoints and repeaters within the mesh radio network, ensuring robust connectivity and eliminating concerns about range limitations. With a typical cost of £6,500 to £12,000 per unit, they offer an effective, cost-conscious solution for wide-area drone detection.

Sensor Fusion

The acoustic sensors estimate the target’s bearing and elevation angles with degree-level precision. However, direct range estimation is impractical due to variations in acoustic signal strength. In a networked setup, the system triangulates the target’s position and altitude to within a few metres by combining bearings from at least two sensors.


Real-time sensor fusion of data transmitted from a network of distributed acoustic sensors.

 

A single mesh radio gateway receives target track messages from all sensors and forwards the data to a central server. The server fuses the data, which consists of the angle to the target reported by each sensor, by triangulating the target's position. The intersection points of these bearing lines are fed into a tracking algorithm that maintains the target’s Cartesian coordinates using a Kalman filter. This track information can then be relayed to third-party systems through standard data interfaces, such as SAPIENT or TAK Cursor-on-Target messages. Additionally, the server performs network discovery by broadcasting interrogation messages to the sensor network via the mesh radio, ensuring seamless communication and connectivity across the system.
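
The sketch below illustrates the triangulation step in simplified form: each sensor contributes its known position and a unit bearing vector derived from its reported azimuth and elevation, and a least-squares intersection of the bearing rays yields a 3D position estimate. The function names, sensor layout, and bearing values are illustrative assumptions rather than AIRSPEED's server implementation, and the Kalman-filter tracking stage is omitted.

```python
import numpy as np

def bearing_to_unit(az_deg: float, el_deg: float) -> np.ndarray:
    """Convert azimuth (degrees from north, clockwise) and elevation to an east/north/up unit vector."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.sin(az) * np.cos(el),   # east
                     np.cos(az) * np.cos(el),   # north
                     np.sin(el)])               # up

def triangulate(positions, directions):
    """Least-squares point closest to all bearing rays (needs two or more non-parallel rays)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(positions, directions):
        proj = np.eye(3) - np.outer(d, d)   # projects onto the plane perpendicular to the ray
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

# Hypothetical scenario: two sensors 400 m apart report bearings to the same drone.
sensor_positions = [np.array([0.0, 0.0, 0.0]), np.array([400.0, 0.0, 0.0])]
bearings = [bearing_to_unit(59.0, 11.6), bearing_to_unit(-45.0, 15.8)]
print(triangulate(sensor_positions, bearings))   # ≈ [250, 150, 60] m east/north/up
```

With only two sensors the rays rarely intersect exactly, so the least-squares point is the natural estimate; additional sensors simply add terms to the same linear system.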


Drone Acoustic Signature

Multirotors emit various acoustic signals, but the two dominant sources of noise are aeroacoustic noise and blade pass tones. Aeroacoustic noise is broadband white noise caused by moving air, making it difficult to distinguish from natural sounds like wind. In contrast, blade pass tones are more useful for drone detection.


Blade pass tones arise from the interaction between spinning rotor blades and the aircraft's static structure. Their fundamental frequency is proportional to rotor speed, typically ranging from 100 to 200 Hz for two- or three-bladed rotors. Harmonics of this frequency create a chord-like effect with multiple discrete frequencies, giving drone noise its characteristic "rasping" quality, often perceived as irritating.
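
As a concrete illustration, the blade pass fundamental is simply the blade count multiplied by the rotation rate; the rotor speed used below is an assumed example, not a measured figure.

```python
def blade_pass_harmonics(rpm: float, n_blades: int, n_harmonics: int = 5) -> list[float]:
    """Blade pass fundamental (blades x revolutions per second) and its first few harmonics, in Hz."""
    fundamental = n_blades * rpm / 60.0
    return [k * fundamental for k in range(1, n_harmonics + 1)]

# Assumed example: a two-bladed rotor spinning at 5,400 RPM.
print(blade_pass_harmonics(rpm=5400, n_blades=2))
# -> [180.0, 360.0, 540.0, 720.0, 900.0]  (the chord-like set of discrete tones)
```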

Human perception of drone noise differs significantly from machine-based detection. Unlike machines, human hearing is subject to psychoacoustic effects and is not uniformly sensitive across all frequencies. As a result, one drone may seem louder than another simply because it emits noise in a frequency range where human ears are more responsive. Machine listening systems, however, remain neutral to such variations.


Received time-frequency spectrogram of a DJI Phantom quadrotor at various ranges to target.


Beyond detection, a drone’s acoustic signature provides valuable information about its characteristics. It can reveal the number of rotors, pitch imbalances, and rapid pitch variations. These factors help infer drone subclasses, estimate payload mass (as added weight can affect rotor pitch), and determine whether the drone is manually piloted or autonomously controlled.


Microphone Arrays

AIRSPEED's BK-16 large-aperture microphone array, Westcott, UK, 2024

 

The sensor units feature an array of microphones, enabling the use of advanced phased-array signal processing techniques that effectively separate target sounds from ambient background noise. Using these microphone arrays, various direction-finding methods can be applied to accurately determine the spatial location of the target.


At longer target distances, the sensor primarily receives low-frequency tones, as these signals tend to propagate more effectively over greater ranges. Generally, increasing the size of the microphone array enhances the detection range, creating a design trade-off between sensor size and detection capability. In our designs, we prioritise target detection performance over compactness, resulting in larger sensor configurations that maximise detection range.


Our microphone arrays are configured to provide a hemispherical field of view, covering 360° azimuth and 90° elevation. This ensures the sensor has no blind spots and maintains consistent performance regardless of its orientation.


Microphones

 

Electret condenser microphone capsules are chosen over MEMS microphones for their superior performance, despite a more complex analogue electrical interface. Though MEMS microphones are popular due to their low cost, small size and ease of integration, the performance characteristics of traditional electret condensers make them better suited for far-field target detection.


Exploded view of a phantom-powered condenser microphone assembly produced by AIRSPEED


Each microphone capsule includes a waterproof acoustic vent, which effectively prevents water and dust from entering the microphone, ensuring durability in various environments. Additionally, a reticulated foam wind shield reduces low-frequency interference caused by wind noise by creating a stable region of air around the microphone diaphragm.


Each microphone unit incorporates a discrete transistor preamplifier, powered by phantom power supplied by the sensor signal processing unit. This combination results in a highly robust audio capture unit characterised by ultra-low noise and low distortion. These attributes are essential for the long-range detection of drones, where signal clarity is critical.


Signal Processing Hardware

Each sensor features advanced electronic signal processing hardware that converts audio signals from a 16-channel microphone array into precise target tracking data. Equipped with NVIDIA GPUs capable of performing 70 trillion mathematical operations per second, the sensors operate in real-time at approximately 10 frames per second.


Signal processing hardware comprising an NVIDIA GPU and custom broadcast-quality audio capture boards.


The audio signals are captured using broadcast-grade analogue-to-digital converters, offering a signal-to-noise ratio exceeding 120 dB. This high level of processing power and audio capture performance is crucial for detecting, identifying, and tracking drone targets, even when the received signals are exceptionally weak.


Signal Processing Algorithms

Engineering dashboard generated by a remote sensor whilst tracking a small drone.

 

Each sensor resolves azimuth and elevation angles within a hemispherical envelope at 1° resolution, using array signal processing to create an acoustic camera that updates at 10 frames per second. This is achieved with Time Difference of Arrival (TDOA) processing, which measures inter-microphone coherence rather than steered response power, enhancing long-range performance.
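
A simplified illustration of coherence-based time-difference estimation for a single microphone pair is sketched below, using the standard GCC-PHAT weighting. It is a generic textbook reduction of the idea rather than AIRSPEED's production algorithm; the sample rate and signals are synthetic.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay of `sig` relative to `ref` using phase-transform (coherence) weighting."""
    n = len(sig) + len(ref)
    cross = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cross /= np.abs(cross) + 1e-12            # PHAT: keep the phase, discard the magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

fs = 48_000
t = np.arange(fs) / fs
mic_a = np.sin(2 * np.pi * 180.0 * t) + 0.1 * np.random.randn(fs)  # 180 Hz blade pass tone plus noise
mic_b = np.roll(mic_a, 12)                                          # second mic hears it ~12 samples later
print(f"{gcc_phat(mic_b, mic_a, fs) * 1e3:.3f} ms")                 # ≈ 0.250 ms
```

In practice, delay estimates from many microphone pairs across the array are combined to build the 1°-resolution sound field image described above.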


For tracking, local peaks in the sound field image are assigned to tracks based on angular proximity. Confirmed tracks are classified by a neural network using a rolling 3-second time-frequency spectrogram (1.8 kHz bandwidth, <2 Hz resolution): a beamformer follows the target’s position and feeds the spectrogram into a pre-trained convolutional neural network (CNN) for classification. The system tracks multiple targets simultaneously.
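
The sketch below shows one plausible set of spectrogram parameters consistent with those figures: a decimated sample rate giving 1.8 kHz of usable bandwidth, an FFT length giving better than 2 Hz resolution, and a hop matching the 10 frames-per-second update rate. The specific values are assumptions, not AIRSPEED's published configuration.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 3_600          # assumed decimated sample rate -> 1.8 kHz of usable bandwidth
nfft = 2_048        # 3600 / 2048 ≈ 1.76 Hz bin spacing, meeting the < 2 Hz figure
hop = 360           # 0.1 s hop, matching the 10 frames-per-second update rate

audio = np.random.randn(3 * fs)   # stand-in for the beamformed 3-second rolling buffer
freqs, times, sxx = spectrogram(audio, fs=fs, nperseg=nfft, noverlap=nfft - hop, nfft=nfft)

print(sxx.shape)    # (frequency bins, time frames) image that would be handed to the CNN classifier
```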

Once detected, target track messages—including timestamp, track ID, azimuth, elevation, and classification probability—are transmitted via mesh radio at 10 FPS. Target class masks control which classifications are reported.


For precise triangulation, each sensor uses GPS for self-location, and manual magnetic north alignment ensures accurate orientation.


Microphone Array Performance Modelling

 

Selecting an optimal microphone array geometry for a specific application is a complex and nuanced challenge. To address this, we have developed specialised software tools that evaluate the performance of a given microphone array configuration. The software works by calculating the 3D beampattern of the array across a range of frequencies, generating key performance metrics such as gain, beamwidth, and bandwidth.
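
By way of illustration, the sketch below computes a free-field delay-and-sum beampattern for an arbitrary microphone geometry at a single frequency and extracts a -3 dB beamwidth. The ring geometry and 1200 Hz evaluation frequency are assumptions; the real tool evaluates full 3D patterns across many frequencies.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def beampattern_db(mics, freq_hz, steer_az_deg, scan_az_deg, el_deg=0.0):
    """Delay-and-sum response (dB) over scan azimuths for a beam steered at steer_az_deg."""
    k = 2.0 * np.pi * freq_hz / C

    def unit(az_deg):
        az, el = np.radians(np.asarray(az_deg, dtype=float)), np.radians(el_deg)
        return np.stack([np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         np.full_like(az, np.sin(el))], axis=-1)

    weights = np.exp(-1j * k * mics @ unit(steer_az_deg))       # steering weights, shape (n_mics,)
    manifold = np.exp(-1j * k * mics @ unit(scan_az_deg).T)     # responses over the scan grid
    return 20.0 * np.log10(np.abs(weights.conj() @ manifold) / len(mics) + 1e-12)

# Assumed geometry: 16 microphones on a 0.5 m-radius horizontal ring.
theta = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
mics = 0.5 * np.column_stack([np.cos(theta), np.sin(theta), np.zeros(16)])

scan = np.linspace(-180.0, 180.0, 721)
pattern = beampattern_db(mics, freq_hz=1200.0, steer_az_deg=0.0, scan_az_deg=scan)
print(f"-3 dB beamwidth ≈ {np.ptp(scan[pattern > -3.0]):.1f} degrees")
```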


Simulated beam pattern of a spherical microphone array at 1200 Hz.


To identify the best array geometry for a given operational requirement, we employ a Monte Carlo technique that systematically sweeps the geometric parameters of the microphone positions. This optimisation process is significantly accelerated by leveraging general-purpose GPU processing for the beampattern calculations, allowing the solution space to be explored comprehensively within a matter of hours.
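
A toy version of such a sweep is sketched below: random candidate geometries are scored by their peak sidelobe level at one frequency, and the best-scoring layout is retained. The bounding volume, scoring metric, and sample count are illustrative assumptions; the production tool sweeps geometric parameters systematically and runs the beampattern calculations on the GPU.

```python
import numpy as np

C, FREQ_HZ, N_MICS = 343.0, 1200.0, 16
k = 2.0 * np.pi * FREQ_HZ / C
rng = np.random.default_rng(0)

# Horizontal scan grid of candidate look directions; the beam is steered at azimuth 0.
az = np.linspace(-np.pi, np.pi, 361)
scan = np.stack([np.cos(az), np.sin(az), np.zeros_like(az)], axis=1)   # (361, 3) unit vectors

def peak_sidelobe_db(mics):
    """Steer a delay-and-sum beam at azimuth 0 and return the highest response outside the main lobe."""
    manifold = np.exp(-1j * k * mics @ scan.T)                         # (N_MICS, 361)
    weights = manifold[:, len(az) // 2]                                # column at az = 0
    pattern = 20.0 * np.log10(np.abs(weights.conj() @ manifold) / N_MICS + 1e-12)
    main_lobe = np.abs(az) < np.radians(15.0)                          # crude main-lobe exclusion region
    return pattern[~main_lobe].max()

# Sample random 16-microphone layouts inside a 1.2 m cube and keep the best-scoring one.
candidates = (0.6 * rng.uniform(-1.0, 1.0, size=(N_MICS, 3)) for _ in range(2000))
best = min(candidates, key=peak_sidelobe_db)
print(f"best peak sidelobe found: {peak_sidelobe_db(best):.1f} dB")
```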


A parametric sweep of array geometry parameters

©2025 Airspeed Electronics Ltd
