GB2623498A - Improved classification using a combined confidence score - Google Patents

Improved classification using a combined confidence score

Info

Publication number
GB2623498A
GB2623498A (Application number GB2215056.9A / GB202215056A)
Authority
GB
United Kingdom
Prior art keywords
target
classifier
time instance
target category
confidence scores
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2215056.9A
Other versions
GB202215056D0 (en)
Inventor
Ahmad Bashar
Harman Stephen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thales SA
Original Assignee
Thales SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thales SA filed Critical Thales SA
Priority to GB2215056.9A priority Critical patent/GB2623498A/en
Publication of GB202215056D0 publication Critical patent/GB202215056D0/en
Priority to PCT/GB2023/052450 priority patent/WO2024079436A1/en
Publication of GB2623498A publication Critical patent/GB2623498A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02 Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/04 Systems determining presence of a target
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/933 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of aircraft or spacecraft
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • G06V10/811 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data, the classifiers operating on different input data, e.g. multi-modal recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/84 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using probabilistic graphical models from image or video features, e.g. Markov models or Bayesian networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method of using confidence scores output by one or more classifiers 206 to determine a combined confidence score, each of the confidence scores indicating a probability that input data comprises a target (106) belonging to a target category. The method comprises: receiving, at a time instance, one or more confidence scores from one or more of the classifiers; obtaining at least one target category probability; determining the combined confidence score using: (i) the one or more confidence scores received at said time instance; (ii) the at least one target category probability; and (iii) a likelihood of observing all confidence scores received from the classifiers up to a previous time instance given that the target belongs to the target category; or a previously determined combined confidence score determined using one or more confidence scores received from one or more of the classifiers at the previous time instance; and outputting 208 the combined score. The method could be applied to radar data in order to identify unmanned aerial vehicles or drones.

Description

IMPROVED CLASSIFICATION USING A COMBINED CONFIDENCE SCORE
FIELD
The present disclosure relates to improved classification using a combined confidence score. In particular, the present disclosure relates to a method of using confidence scores output by one or more classifiers to determine a combined confidence score.
BACKGROUND
A classifier is a type of machine learning algorithm used to assign a class label (or a confidence value for a class label) to a data input. An example is an image recognition classifier used to label an image. Classifier algorithms are trained using labelled data. In the image recognition example, for instance, the classifier receives training data comprising labelled images. After sufficient training, the classifier can then receive unlabelled images as inputs and will output classification labels (or confidence values for class labels) for each image.
One example application is using a classifier that receives radar data to discriminate between a detected object being an unmanned aerial vehicle (UAV) or not a UAV.
There are some known classifier techniques in the context of this particular application.
The paper "SNR-dependent drone classification using convolutional neural networks" by H. Dale et al. describes how a particular classification model (e.g. a neural network with fixed input and output formats) is trained L number of times, each for a specific operational condition, namely the Signal to Noise Ratio (SNR). Based on the estimated SNR, the corresponding classifier model (i.e. the one specifically configured for this scenario) is then applied to deliver the best target recognition results.
The paper "Neural network ensembles for sensor-based human activity recognition within smart environments" by N. Irvine et al. describes how a particular classification model (e.g. a neural network or a decision tree) is trained/configured N number of times, e.g. each with a specific initial condition (e.g. neural network weights are randomly initialised) and/or from a sub-set of the available training dataset. For a given input, the N resultant classifiers are simultaneously applied, where each (synchronously) outputs a result. Heuristic approaches, such as averaging the N outcomes, M-out-of-N votes, or even adaptive voting strategies or similar, are then used to combine the classifiers' results.
The paper "Deep learning on multi sensor data for counter UAV applications" describes deep learning advances on counter UAV related tasks when applied to data originating from many different sensors as well as multi-sensor information fusion. The paper illustrates how known approaches have focussed on: 1) selecting which data source to use at any given point in time (e.g. range from radar and angle from camera or RF sensor) and/or 2) fusing synchronised features extracted from each data stream.
SUMMARY
The inventors have identified that whilst existing classifier solutions may improve aspects of classification performance, e.g. reduced false recognition (false positives) or increased correct recognition (true positives), they do have drawbacks. For example, voting solutions that require N out of M classifiers to give a positive recognition can impose delays with respect to using a single classifier. Also, known fusion techniques rely on the classifications from individual classifiers being reported at the same time and rate (i.e. the classifiers are synchronised), which can introduce delays. In addition, known fusion techniques are based on the assumption that the multiple classifiers perform their respective classification based on the same set of target categories (i.e. that the classifiers are homogeneous).
Furthermore, known techniques which combine neural network results in some way are computationally expensive for the processor performing the classification.
According to a first aspect of the present disclosure there is provided a method of using confidence scores output by one or more classifiers to determine a combined confidence score, each of the confidence scores indicating a probability that input data received by the respective classifier comprises a target belonging to a target category, the method performed on a computing device comprising: receiving, at a time instance, one or more confidence scores from the one or more classifiers; obtaining at least one target category probability; determining the combined confidence score based on: (i) the one or more confidence scores received at said time instance; (ii) the at least one target category probability; and (iii) a likelihood of observing all confidence scores received from the one or more classifiers up to a previous time instance immediately prior to said time instance given that the target belongs to the target category; or a previously determined combined confidence score determined using one or more confidence scores received from one or more of the one or more classifiers at the previous time instance; and outputting the combined confidence score.
Thus in embodiments of the present disclosure optimal classification performance is achieved based on prior knowledge and learnt individual classifier behaviour, rather than considering that all classifiers are equally good. Furthermore, the false classification rate is advantageously reduced.
In some embodiments, determining the combined confidence score is based on the likelihood of observing all confidence scores received from the one or more classifiers up to a previous time instance immediately prior to said time instance given that the target belongs to the target category, and the at least one target category probability defines a likelihood of the target belonging to the target category.
In some embodiments, determining the combined confidence score may comprise multiplying a likelihood of observing all confidence scores received from the one or more classifiers up to the time instance given that the target belongs to the target category with the likelihood of the target belonging to the target category.
The method may comprise determining the likelihood of observing all confidence scores received from the one or more classifiers up to the time instance given that the target belongs to the target category by multiplying a likelihood of observing the one or more confidence scores received at said time instance given all confidence scores received from the one or more classifiers up to the previous time instance and that the target belongs to the target category, with the likelihood of observing all confidence scores received from the one or more classifiers up to the previous time instance given that the target belongs to the target category.
The method may comprise determining the likelihood of observing the one or more confidence scores received at said time instance given all confidence scores received from the one or more classifiers up to the previous time instance and that the target belongs to the target category using the one or more confidence scores received at said time instance and one or more confidence scores received from one or more of the one or more classifiers at the previous time instance.
In these embodiments the likelihood of observing all confidence scores received from the one or more classifiers up to the time instance given that the target belongs to the target category may dynamically change over time.
Advantageously in these embodiments, the fusion performed to determine the combined confidence score (using merely multiplicative and additive operations) represents a low processing burden compared to known signal processing techniques employed in solutions utilizing single or multiple classifiers.
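By way of a non-limiting illustration only, the following Python sketch shows one possible realisation of this first variant, in which a running likelihood p(y_1:i | C) is updated multiplicatively at each time instance and then combined with the prior p(C). The function names (step_likelihood, combined_confidence), the Gaussian score model and all numerical values are assumptions made for illustration and are not specified by the present disclosure; in particular, the per-step term p(y_i | y_1:i-1, C) is simplified here to depend only on the scores received at the current time instance.

```python
import numpy as np

# Illustrative sketch only; names and numbers are assumptions, not taken from the disclosure.
CATEGORIES = ["drone", "non_drone"]
PRIOR = {"drone": 0.1, "non_drone": 0.9}          # p(C), assumed values
MEAN_SCORE = {"drone": 0.8, "non_drone": 0.2}      # assumed classifier behaviour
SIGMA = 0.2

def step_likelihood(scores, category):
    """Simplified p(y_i | y_1:i-1, C): likelihood of the scores received at this
    time instance, assuming conditional independence given the category."""
    mu = MEAN_SCORE[category]
    lik = 1.0
    for y in scores:                                # one entry per reporting classifier
        lik *= np.exp(-0.5 * ((y - mu) / SIGMA) ** 2) / (SIGMA * np.sqrt(2 * np.pi))
    return lik

def combined_confidence(running_lik, scores):
    """One recursion step: update p(y_1:i | C) for every category and return
    the normalised combined confidence score p(C | y_1:i)."""
    for c in CATEGORIES:
        running_lik[c] *= step_likelihood(scores, c)    # p(y_1:i|C) = p(y_i|y_1:i-1,C) * p(y_1:i-1|C)
    unnorm = {c: running_lik[c] * PRIOR[c] for c in CATEGORIES}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}, running_lik

running = {c: 1.0 for c in CATEGORIES}                  # p(y_1:0 | C) initialised to 1
for scores_at_ti in ([0.7], [0.85, 0.6], [0.9]):        # scores received at successive time instances
    combined, running = combined_confidence(running, scores_at_ti)
    print(combined["drone"])
```

In this simplified form the recursion uses only multiplications and additions per time instance, consistent with the low processing burden noted above.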
In other embodiments, the at least one target category probability comprises multiple target category probabilities, each target category probability defining a probability of the target changing the way it represents itself between successive time instances such that at a previous time instance immediately prior to said time instance it will be identified as belonging to a respective target category from a set of target categories and at the time instance it will be identified as belonging to the target category from the set of target categories, and determining the combined confidence score is based on the previously determined combined confidence score determined using one or more confidence scores received from one or more of the one or more classifiers at the previous time instance, and the multiple target category probabilities.
In these embodiments, determining the combined confidence score may comprise: for each of the multiple target category probabilities, multiplying the target category probability with the previously determined combined confidence score; summing the results of said multiplying to obtain a summation result; and multiplying the summation result with a likelihood of observing the one or more confidence scores received at said time instance given that the target belongs to the target category.
The method may comprise determining the likelihood of observing the one or more confidence scores received at said time instance given that the target belongs to the target category using the one or more confidence scores received at said time instance.
In these embodiments the likelihood of observing the one or more confidence scores received at said time instance given that the target belongs to the target category may dynamically change over time.
In these embodiments, the fusion performed to determine the combined confidence score (using merely multiplicative and additive operations) also represents a low processing burden compared to known signal processing techniques employed in solutions utilizing single or multiple classifiers.
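Again purely as an illustrative sketch, this second variant can be realised as a prediction-update recursion in which the previously determined combined confidence score is propagated through transition probabilities p(C_i | C_{i-1}) and then weighted by the likelihood of the newly received scores. The transition matrix, the observation model and all numerical values below are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch only; the transition and observation models are assumptions.
CATEGORIES = ["drone", "non_drone"]

# p(C_i | C_{i-1}): rows = previous category, columns = current category (assumed values).
TRANSITION = np.array([[0.95, 0.05],     # a drone is assumed to usually keep presenting as a drone
                       [0.10, 0.90]])    # a non-drone occasionally presents as a drone

def observation_likelihood(scores, category):
    """Assumed p(y_i | C_i): each score y is treated as the probability the
    classifier assigns to the 'drone' category."""
    lik = 1.0
    for y in scores:
        lik *= y if category == "drone" else (1.0 - y)
    return lik

def update(prev_combined, scores):
    """prev_combined is p(C_{i-1} | y_1:i-1) as a vector over CATEGORIES."""
    predicted = TRANSITION.T @ prev_combined          # sum over C_{i-1} of p(C_i|C_{i-1}) p(C_{i-1}|y_1:i-1)
    unnorm = np.array([observation_likelihood(scores, c) for c in CATEGORIES]) * predicted
    return unnorm / unnorm.sum()                      # normalised combined confidence score

combined = np.array([0.1, 0.9])                       # initialised combined confidence score
for scores_at_ti in ([0.7], [0.85, 0.6], [0.9]):
    combined = update(combined, scores_at_ti)
    print(dict(zip(CATEGORIES, combined)))
```

As with the first variant, each step uses only multiplicative and additive operations.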
In embodiments of the present disclosure, the at least one target category probability may be fixed. Alternatively, the at least one target category probability may dynamically change over time.
An initial value of the at least one target category probability may be user defined.
The at least one target category probability may be determined based on contextual information relating to a detector which is a source of the input data.
The detector may be a radar device, and the contextual information may comprise one or more of: location information associated with the radar device; target specific attributes extracted by the radar device based on processing of the input data; target specific attributes extracted by the radar device based on processing of sensor data received by the radar device from a remote sensor; target specific attributes extracted by the radar device based on processing of sensor data output from a sensor on the radar device; meteorological information associated with a field of surveillance of the radar device; sensor data associated with the field of surveillance of the radar device; location information associated with a target in the field of surveillance; information relating to road traffic in the field of surveillance; information relating to terrain within the field of surveillance; and bird migration patterns in the field of surveillance.
The at least one target category probability may be determined based on training data used to train the one or more classifiers.
The at least one target category probability may be determined based on input data received by one or more of the one or more classifiers.
Preferably, the method may comprise using confidence scores output by a plurality of classifiers to determine the combined confidence score. In these embodiments the outputs of multiple classifiers are combined so as to improve classification performance above that of any one classifier. The technique works on the premise that no single classifier technique is perfect, and that compromises are made in all classifiers, e.g. those that perform better take longer to report a classification whilst those that deliver decisions quicker suffer from, for example, higher incorrect classification rates. Thus in this aspect of the present disclosure there is provided a method of using confidence scores output by a plurality of classifiers to determine a combined confidence score, each of the confidence scores indicating a probability that input data received by the respective classifier comprises a target belonging to a target category, the method performed on a computing device comprising: receiving, at a time instance, one or more confidence scores from one or more of the plurality of classifiers; obtaining at least one target category probability; determining the combined confidence score based on: (i) the one or more confidence scores received at said time instance; (ii) the at least one target category probability; and (iii) a likelihood of observing all confidence scores received from the plurality of classifiers up to a previous time instance immediately prior to said time instance given that the target belongs to the target category; or a previously determined combined confidence score determined using one or more confidence scores received from one or more of the plurality of classifiers at the previous time instance; and outputting the combined confidence score. The confidence scores may be output by the plurality of classifiers asynchronously. This advantageously captures and propagates the classification uncertainty of the classifiers over time and the time at which the first correct classification is declared.
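The following sketch illustrates, under stated assumptions, how reports arriving asynchronously and at different rates from several classifiers could be fused as soon as they arrive, without waiting for all N classifiers to report. ClassifierReport and fuse_step are invented names, and the toy odds-product fusion step merely stands in for the fuller recursions sketched above.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only; the report structure, fusion rule and numbers are assumptions.
@dataclass
class ClassifierReport:
    timestamp: float        # t_i, set by the reporting classifier
    classifier_id: int      # k in y_i^k
    drone_score: float      # confidence that the target is a drone

def fuse_step(prev_combined: float, reports: List[ClassifierReport]) -> float:
    """Toy fusion step: treat each score as an independent likelihood ratio and
    update the combined drone confidence (cf. the two sketches above)."""
    odds = prev_combined / (1.0 - prev_combined)
    for r in reports:
        odds *= r.drone_score / (1.0 - r.drone_score)
    return odds / (1.0 + odds)

# Reports arrive at different times and rates; each batch shares a timestamp.
stream = [
    [ClassifierReport(0.1, 1, 0.70)],                                   # only classifier 1 reports
    [ClassifierReport(0.4, 1, 0.80), ClassifierReport(0.4, 2, 0.65)],   # two classifiers report together
    [ClassifierReport(0.9, 3, 0.60)],                                   # a slower classifier reports
]
combined = 0.1   # initialised combined confidence score (assumed prior)
for batch in stream:
    combined = fuse_step(combined, batch)
    print(f"t={batch[0].timestamp}: combined drone confidence = {combined:.3f}")
```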
Each classifier of the plurality of classifiers may be associated with a classifier type, and the plurality of classifiers may comprise classifiers of multiple classifier types. In this embodiment it is possible to exploit the performance advantages of different types of classifiers.
The input data received by a classifier of the plurality of classifiers may be different to the input data received from one or more remaining classifiers of the plurality of classifiers.
A source of the input data may be a radar device. In these implementations, the input data received by each classifier comprises one or more of: unprocessed radar data output by radar receiver elements of the radar device; processed radar data generated by processing the unprocessed radar data output by the radar device; and tracking information output by a tracking module of the radar device.
The input data received by the one or more classifiers may originate from a single source (e.g. a single detector such as a radar device). Alternatively, the input data received by the one or more classifiers may originate from multiple sources (e.g. multiple detectors such as radar devices).
Outputting the combined confidence score may comprise transmitting the combined confidence score to a display of the computing device. Alternatively or additionally, outputting the combined confidence score may comprise transmitting the combined confidence score to a remote computing device.
According to another aspect of the present disclosure there is provided at least one non-transitory computer-readable storage medium comprising instructions which, when executed by at least one processor, cause the at least one processor to perform any of the methods described herein.
The instructions may be provided on one or more carriers. For example there may be one or more non-transient memories, e.g. an EEPROM (e.g. a flash memory), a disk, CD- or DVD-ROM, programmed memory such as read-only memory (e.g. for firmware), one or more transient memories (e.g. RAM), and/or a data carrier(s) such as an optical or electrical signal carrier. The memory/memories may be integrated into a corresponding processing chip and/or separate to the chip. Code (and/or data) to implement embodiments of the present disclosure may comprise source, object or executable code in a conventional programming language (interpreted or compiled) such as C, or assembly code, code for setting up or controlling an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array), or code for a hardware description language.
According to another aspect of the present disclosure there is provided a computer program comprising instructions which, when the program is executed by a device, cause the device to perform any of the methods described herein.
According to another aspect of the present disclosure there is provided a device for using confidence scores output by one or more classifiers to generate a combined confidence score, each of the confidence scores indicating the probability that input data received by the respective classifier comprises a target belonging to a target category, the device comprising a processor, wherein the processor is configured to perform any of the methods described herein.
That is, in one aspect of the present disclosure there is provided a device for using confidence scores output by one or more classifiers to generate a combined confidence score, each of the confidence scores indicating the probability that input data received by the respective classifier comprises a target belonging to a target category, the device comprising: a processor, wherein the processor is configured to: receive, at a time instance, one or more confidence scores from the one or more classifiers; obtain at least one target category probability; determine the combined confidence score based on: (i) the one or more confidence scores received at said time instance; (ii) the at least one target category probability; and (iii) a likelihood of observing all confidence scores received from the one or more classifiers up to a previous time instance immediately prior to said time instance given that the target belongs to the target category; or a previously determined combined confidence score determined using one or more confidence scores received from one or more of the one or more classifiers at the previous time instance; and output the combined confidence score.
As explained above, the device may use confidence scores output by a plurality of classifiers to determine the combined confidence score. Thus in this aspect of the present disclosure there is provided a device for using confidence scores output by a plurality of classifiers to generate a combined confidence score, each of the confidence scores indicating the probability that input data received by the respective classifier comprises a target belonging to a target category, the device comprising: a processor, wherein the processor is configured to: receive, at a time instance, one or more confidence scores from one or more of the plurality of classifiers; obtain at least one target category probability; determine the combined confidence score based on: (i) the one or more confidence scores received at said time instance; (ii) the at least one target category probability; and (iii) a likelihood of observing all confidence scores received from the plurality of classifiers up to a previous time instance immediately prior to said time instance given that the target belongs to the target category; or a previously determined combined confidence score determined using one or more confidence scores received from one or more of the plurality of classifiers at the previous time instance; and output the combined confidence score.
These and other aspects will be apparent from the embodiments described in the following. The scope of the present disclosure is not intended to be limited by this summary nor to implementations that necessarily solve any or all of the disadvantages noted.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the present disclosure and to show how embodiments may be put into effect, reference is made to the accompanying drawings in which:
Figure 1 illustrates an environment in which a radar device has been positioned;
Figure 2 is a schematic block diagram of a computing device;
Figure 3 illustrates the inputs and outputs of a fusion engine of the computing device;
Figure 4 is a flowchart illustrating a method performed by the fusion engine of using confidence scores output by one or more classifiers to determine a combined confidence score;
Figure 5 is a flowchart illustrating steps performed by the fusion engine to determine a combined confidence score according to one aspect of the present disclosure; and
Figure 6 is a flowchart illustrating steps performed by the fusion engine to determine a combined confidence score according to another aspect of the present disclosure.
DETAILED DESCRIPTION
Embodiments of the present disclosure relate to using confidence scores output by one or more classifiers to generate a combined confidence score, whereby each of the confidence scores indicates the probability that input data received by the respective classifier comprises a target belonging to a target category.
As a mere example, embodiments of the present disclosure are described with reference to the input data received by the classifier(s) originating from a radar device, such that the confidence scores output by each classifier indicate a likelihood (i.e. probability) that an unmanned aerial vehicle (also referred to herein as a drone) is present in the surveillance area of the radar device. However it will be appreciated that embodiments of the present disclosure can be applied to any problem requiring classification or discrimination between two or more different classes.
Whilst radars can be the only type of sensor that can non-cooperatively detect the presence of certain types of (potentially harmful or malicious) UAVs, particularly over a wide area and at longer ranges, their target recognition algorithms must contend with: 1. A large number of potential targets: benign birds cohabit the airspace where drones are expected to be found, often in large numbers over the entire sensor coverage area in urban or rural areas. This is especially the case as birds and small drones can exhibit similar flight characteristics and comparable radar cross sections. Maintaining a low rate of false positives from birds or even ground targets (such as cars or pedestrians) is hence a key challenge for counter UAV radar surveillance systems.
2. Varying quality of radar data and features (e.g. micro-Doppler signatures of a rotary-wing UAV): this is induced not only by varying clutter, multipath and occlusion effects, but also by the agility and ability of some UAVs to follow complex trajectories incorporating sharp manoeuvres. This significantly impacts the quality of the salient features relied on by the target classifier, most notably the pivotal micro-Doppler signatures arising from UAV on-board rotor(s), as with micro-Doppler-based classification techniques. For instance, as the RCS of the blades is usually in the order of 15-20 dB below that of the drone body, the acceptable SNR margins and the target distances or elevations at which micro-Doppler signatures can be detected can vary drastically over time and between different operating environments (e.g. sites).
3. Evolving nature of the threat: new drone platforms of various sizes and constructs (including using new material for blades) are regularly emerging, especially as the consumer market for drones continues to expand rapidly. This can substantially affect the characteristics of micro-Doppler signatures from on-board rotors depended on by several recognition techniques.
Thus, despite a radar device having in excess of 99% recognition accuracy, it can produce approximately one false declaration of a drone every minute. This is because there are typically of the order of 100 targets moving within its surveillance field of view at any one time, and classification decisions need to be made at the sub-second level to ensure users receive timely information. Therefore, an operator is required to triage a positive recognition of a drone; counter drone systems also tend to use additional sensors to perform triage or provide multiple independent corroborating decisions.
There is a need to reduce the supervisory manpower needed to operate a counter drone system, and to raise confidence in counter drone radar-based recognitions so that lower-skilled radar operators can use counter drone systems (e.g. Air Traffic Control operations staff). It is also desirable to provide an autonomous counter drone system (i.e. one providing warnings directly to cue actions to be taken in response to the presence of a drone). Other types of sensors tend not to have the long range that a radar device can provide, so corroboration of longer range targets may not be achievable in a counter drone system. Furthermore, raising the confidence in the decisions made by one sensor (e.g. the radar device) could reduce the need for as many sensors in a counter drone system, making complexity savings.
Figure 1 illustrates an environment 100 in which a radar device 104 is positioned. The radar device 104 is associated with a Field of Surveillance (FoS) in which a drone 106 may be present.
The radar device 104 comprises a transmitter for transmitting radar signals into the FoS (a volume of interest), and a receiver for receiving return signals of said radar signals returned from within the FoS. The radar device 104 may be configured to provide persistent interrogation of the FoS. Persistent interrogation is coherent illumination of and reception from substantially all targets within a Field of Surveillance (FoS) (that provide returns when illuminated with radar signals) at a pulse repetition frequency (PRF) that permits unambiguous evaluation of Doppler, and without interruption due to sequential mechanical or electronic scanning, or a hardware-determined windowing process, that limits the duration of coherent signal analysis.
The radar device 104 is configured to communicate with a computing device 102. The radar device 104 communicates with the computing device 102 by way of a wired and/or wireless connection. The radar device 104 may communicate with the computing device 102 via a network (not shown) such as a packet based network (e.g. the Internet) and/or a cellular communication network. The computing device 102 may be located in a monitoring station 101.
Figure 2 illustrates a simplified view of the computing device 102. As shown in Figure 2, the device 102 comprises a central processing unit ("CPU") 202, to which is connected a memory 210. The functionality of the CPU 202 described herein may be implemented in code (software) stored on a memory (e.g. memory 210) comprising one or more storage media, and arranged for execution on a processor comprising one or more processing units. The storage media may be integrated into and/or separate from the CPU 202. The code is configured so as, when fetched from the memory and executed on the processor, to perform operations in line with embodiments discussed herein. Alternatively, it is not excluded that some or all of the functionality of the CPU 202 is implemented in dedicated hardware circuitry (e.g. ASIC(s), simple circuits, gates, logic, and/or configurable hardware circuitry like an FPGA). In other embodiments (not shown) a processing system executes the processing steps described herein, wherein the processing system may consist of the processor as described herein or may be comprised of distributed processing devices that may be distributed across two or more devices. Each processing device of the distributed processing devices may comprise any one or more of the processing devices or units referred to herein.
Whilst Figure 2 illustrates the computing device 102 comprising the memory 210 it will be appreciated that some or all of the memory 210 used by the computing device 102 may be on a remote server (e.g. cloud storage), or other remote device.
The CPU 202 comprises a fusion engine 204. The fusion engine is configured to receive various inputs, including confidence scores output by one or more classifiers 206, and to generate and output a combined confidence score. The input data used to generate the combined confidence score is described in more detail below with reference to Figure 3.
The "fusion engine" may be implemented in software, firmware, hardware, or a combination thereof. In the case of a software implementation, the fusion engine 204 represents program code that performs specified tasks when executed on a processor (e.g. CPU or CPUs). The program code can be stored in one or more computer readable memory devices.
The fusion engine 204 is arranged to receive confidence scores from N classifiers 206 (where N ≥ 1). In embodiments where there is a single classifier, at any particular time instance, a single confidence score is received from the single classifier. In embodiments where there is a plurality of classifiers, at any particular time instance, one or more confidence scores output from the plurality of classifiers 206 are supplied as an input to the fusion engine 204. That is, the fusion engine 204 may not receive a confidence score from all of the plurality of classifiers at a particular time instance.
As shown in Figure 2, the CPU 202 may comprise one or more of the classifiers 206. As an example, the CPU 202 may comprise all of the classifiers 206. In these embodiments, data is received via a communications interface 214 and supplied as inputs to the classifiers on the computing device 102. The classifiers on the computing device 102 are arranged to output confidence scores as inputs to the fusion engine 204.
One or more of the classifiers 206 may be on at least one external device. As an example, the at least one external device may comprise all of the classifiers 206. In particular, a single device (e.g. the radar device 104 or another device) may comprise all of the classifiers 206. In these embodiments, confidence scores output by the classifiers on the external device(s) are received via the communications interface 214 and supplied as inputs to the fusion engine 204 on the computing device 102.
As noted above, the computing device 102 comprises a communications interface 214 which allows the computing device 102 to receive data from the radar device. The communications interface 214 may be a wired and/or wireless communications interface.
The communications interface 214 also enables the computing device 102 to transmit the combined confidence score and/or an alert based on the combined confidence score, to a remote device (not shown in Figure 1). This remote device may for example be a mobile computing device (e.g. a tablet or smartphone).
The computing device 102 may comprise an output device 208 to output the combined confidence score, or an alert based on the combined confidence score. For example, the CPU 202 may control a visual output device (e.g. a light or a display) on device 102 to output the combined confidence score and/or an alert based on the combined confidence score. Alternatively or additionally, the CPU 202 may control an audible output device (e.g. a speaker) on device 102 to output the combined confidence score and/or an alert based on the combined confidence score. The CPU 202 may control or interact with a displayed Graphical User Interface (GUI) such as that of a command and control module.
The alert described above may be generated based on the combined confidence score (e.g. of a drone class category) exceeding a predetermined threshold. In another example, the alert may be generated based on the combined confidence score (e.g. of a non-drone class category) being below a predetermined threshold.
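A minimal sketch of this alert logic is given below; the threshold values, the function name should_alert and the category names are illustrative assumptions only.

```python
# Illustrative sketch only; thresholds and names are assumptions.
DRONE_ALERT_THRESHOLD = 0.9
NON_DRONE_CLEAR_THRESHOLD = 0.05

def should_alert(combined_scores: dict) -> bool:
    """combined_scores maps target category -> combined confidence score."""
    if combined_scores.get("drone", 0.0) > DRONE_ALERT_THRESHOLD:
        return True                                   # drone confidence exceeds the threshold
    if combined_scores.get("non_drone", 1.0) < NON_DRONE_CLEAR_THRESHOLD:
        return True                                   # non-drone confidence falls below the threshold
    return False

print(should_alert({"drone": 0.93, "non_drone": 0.07}))   # True
print(should_alert({"drone": 0.40, "non_drone": 0.60}))   # False
```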
Figure 3 illustrates the inputs to the fusion engine 204 which may be used to generate the combined confidence score. Whilst Figure 3 illustrates multiple classifiers, as noted above, this is merely an example and only a single classifier may be present.
Each of the classifiers 206 comprises a trained classifier model. The trained classifier model of each classifier is trained with measured data (e.g. measured radar data) and/or simulated data (e.g. simulated radar data).
A trained classifier model can be trained using training data that is associated with a single target category. For example, a classifier could be fed training data in which a drone is present. In the example whereby the classifier is fed training data for the non-drone target category, the training data may comprise training data associated with a single type of non-drone object (e.g. training data associated with one of birds, cars, airplanes etc.). Alternatively, in the example whereby the classifier is fed training data for the non-drone target category, the classifier may be fed respective sets of training data associated with multiple non-drone objects such as birds, cars, airplanes etc.
A trained classifier model can be trained using training data that is associated with multiple target categories. For example, a classifier could be fed training data in which a drone is present and also training data in which a drone is not present.
In embodiments, where the fusion engine 204 is in communication with a plurality of classifiers 206, the plurality of classifiers 206 may be trained using the same training data. Alternatively, one or more of the plurality of classifiers 206 may be trained with different training data to the remaining classifiers of the plurality of classifiers 206. In one example, each of the plurality of classifiers 206 is trained with different training data.
Each classifier of the classifiers 206 is associated with a classifier type. In embodiments where the fusion engine 204 is in communication with a plurality of classifiers 206, the plurality of classifiers 206 may all be of the same classifier type. In other embodiments, the plurality of classifiers 206 may comprise classifiers of multiple classifier types. In one example each of the plurality of classifiers 206 is of a different classifier type. We refer herein to "classifier type" as being indicative of the particular classification algorithm used by the classifier to process the input data it receives. For example, a classifier of the plurality of classifiers 206 may be a Naïve Bayes classifier; employ a decision tree classification algorithm; use generalised fuzzy trees; use a support vector machine (SVM); use a K-nearest neighbour (KNN) algorithm; or use a deep learning neural network such as an Artificial Neural Network (ANN), Convolutional Neural Network (CNN) or Recurrent Neural Network (RNN). It will be appreciated that these types of classifiers are merely examples and embodiments are not limited to the use of classifiers of a particular type referred to herein.
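Purely to illustrate the notion of classifier type, the following sketch trains three classifiers of different types (naïve Bayes, KNN and SVM, using scikit-learn) on the same toy feature vectors and has each report its own confidence that a target is a drone. The synthetic features and labels are stand-ins for the radar-derived inputs discussed herein and are not taken from the present disclosure.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Illustrative sketch only; features, labels and model choices are assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "drone" label (class 1)

classifiers = {
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "svm": SVC(probability=True),
}
for clf in classifiers.values():
    clf.fit(X, y)

target_features = rng.normal(size=(1, 4))
for name, clf in classifiers.items():
    drone_confidence = clf.predict_proba(target_features)[0, 1]   # score for class 1 ("drone")
    print(f"{name}: {drone_confidence:.2f}")
```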
Each classifier of the one or more classifiers 206 performs classification based on a set of target categories. That is, each classifier outputs a confidence score for each of the target categories in the set, the confidence score indicating the likelihood of the input data it receives comprising a target belonging to the particular target category.
In embodiments where the fusion engine 204 is in communication with a plurality of classifiers 206, one or more of the plurality of classifiers may perform classification based on a different set of target categories to the remaining classifiers of the plurality of classifiers 206 (i.e. the classifiers are heterogeneous). In this example, the different sets of target categories would each comprise the target category referred to herein. For example, a first classifier may perform a classification based on a set of target categories comprising a drone present category and a drone not-present target category; and a second classifier may perform a classification based on a set of target categories comprising a drone present category, a bird present category, and a car present category. In this example, the sets of target categories are different but they share the drone present category (used as an example of "the target category" referred to herein).
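The following small fragment illustrates, with invented scores, how two heterogeneous classifiers reporting over different target category sets can each still contribute a confidence score for the shared drone present category; the report contents and the helper name drone_score are assumptions.

```python
# Illustrative sketch only; the reports and numbers below are invented.
report_1 = {"drone": 0.75, "not_drone": 0.25}                 # classifier 1: {drone, not_drone}
report_2 = {"drone": 0.55, "bird": 0.35, "car": 0.10}         # classifier 2: {drone, bird, car}

def drone_score(report: dict) -> float:
    """Extract the confidence for the shared target category of interest ("drone"),
    regardless of which other categories the classifier reports over."""
    return report["drone"]

print(drone_score(report_1), drone_score(report_2))           # 0.75 0.55
```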
During operation, each of the one or more classifiers 206 receives input data. The input data received by the classifiers 206 may originate from a single source (e.g. a single detector such as the radar device 104).
In embodiments, where the fusion engine 204 is in communication with a plurality of classifiers 206, the plurality of classifiers 206 may each receive the same input data. Alternatively, one or more of the plurality of classifiers 206 may receive input data that is different to the input data that is received by the remaining classifiers of the plurality of classifiers 206. In one example, each of the plurality of classifiers 206 receives different input data.
In the context of the environment 100, the input data received by each classifier may comprise one or any combination of: (i) unprocessed radar data output by the radar device. That is, each classifier may receive a "raw" radar signal comprising In-phase (I) and Quadrature-phase (Q) components that are output by one or more radar receiver elements of the radar device.
(ii) processed radar data generated by processing the unprocessed radar data output by the radar device. The processed radar data may take many forms. The processed radar data may be generated by transforming (e.g. by a Fast Fourier Transform (FFT)) the unprocessed radar data into a frequency domain. The processed radar data may be generated by processing the unprocessed radar data prior to any transformation to the frequency domain, or by processing the radar data after it has been transformed into the frequency domain. Such processing may comprise taking logarithms, normalisation or averaging etc. In one example, the processed radar data comprises beamformed data (i.e. data related to a particular beam in a 3D space). In another example, the processed radar data is generated to minimise interference from nearby targets (e.g. by performing signal separation). An illustrative sketch of such processing is given after item (iii) below.
(iii) tracking information output by a tracking module of the radar device. The tracking information comprises information relating to a tracked object and may comprise one or more of positional information (e.g. co-ordinates), speed information, velocity information, acceleration information, the size of the object (e.g. a Radar Cross Section, RCS), trajectory shape, and manoeuvres by the target etc. The tracking information also allows the determination of the presence of micro-Doppler (i.e. returns from rotor blades), including the number of micro-Doppler components, their spacing etc., and additional target information including RCS, SNR, range, azimuth and elevation.
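As a hedged illustration of item (ii) above, the sketch below turns a synthetic slow-time I/Q sequence into a log-magnitude, normalised Doppler spectrum of the kind that could be supplied to a classifier. The pulse count, PRF and synthetic return are assumptions, not parameters taken from the present disclosure.

```python
import numpy as np

# Illustrative sketch only; all signal parameters below are assumptions.
rng = np.random.default_rng(0)
n_pulses = 256
prf = 2000.0                                   # pulse repetition frequency, Hz (assumed)
t = np.arange(n_pulses) / prf

# Synthetic slow-time return: a body line plus a weaker micro-Doppler-like component.
iq = np.exp(2j * np.pi * 200.0 * t) + 0.1 * np.exp(2j * np.pi * 650.0 * t)
iq += 0.05 * (rng.standard_normal(n_pulses) + 1j * rng.standard_normal(n_pulses))

spectrum = np.fft.fftshift(np.fft.fft(iq * np.hanning(n_pulses)))   # transform to the frequency domain
log_mag = 20.0 * np.log10(np.abs(spectrum) + 1e-12)                 # taking logarithms
features = (log_mag - log_mag.mean()) / log_mag.std()               # normalisation

print(features.shape)   # (256,): one feature vector that a classifier could consume
```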
Each trained classifier model is used at operation time to determine a classification score, using a method known by the person skilled in the art. The score may for example provide an indication of a likelihood or level of confidence that the input data at a particular time instance comprises a target belonging to a target category (e.g. that a drone is present).
Each classifier of the one or more classifiers 206 is configured to output its own observation confidence score p(y_i^k | C_i), i.e. the relationship between the reported confidence y_i^k at the ith time instance and the target category C_i (e.g. drone or non-drone). For simplicity, these are shown as y_i^k, where k identifies the classifier. For example, the confidence score output by classifier 1 206a at the ith time instance is y_i^1, the confidence score output by classifier 2 206b at the ith time instance is y_i^2, and so on. We refer herein to the time instance t_i as a timestamp pertaining to the ith frame of input data (e.g. the ith radar frame), and time instance t_1 is the first timestamp in the examined track, such that t_1:i = {t_1, t_2, ..., t_i} is the set of i successive time instances. The interval between time instances t_{i-1} and t_i is not fixed and is dependent on the time between when one or more confidence scores are output by the one or more classifiers 206.
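One possible (assumed) data layout for this notation is sketched below: the score y_i^k is keyed by classifier k and time instance t_i, and y_i collects whatever scores happen to be available at t_i. The variable names and values are illustrative only.

```python
# Illustrative sketch only; reports[k][t] holds y_i^k for classifier k at time instance t_i.
timestamps = [0.12, 0.48, 0.95]                 # t_1, t_2, t_3 (seconds, not evenly spaced)

reports = {
    1: {0.12: 0.70, 0.48: 0.80, 0.95: 0.85},    # classifier 1 reports at every time instance
    2: {0.48: 0.65},                            # classifier 2 reports less often
}

def scores_at(t):
    """y_i: all confidence scores received at a given time instance t_i."""
    return [stream[t] for stream in reports.values() if t in stream]

print(scores_at(0.12))   # [0.7]        -> only classifier 1 reported
print(scores_at(0.48))   # [0.8, 0.65]  -> classifiers 1 and 2 reported
```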
Each classifier of the one or more classifiers 206 is associated with a classification report period which defines the rate at which the classifier outputs a confidence score. In embodiments where the fusion engine 204 is in communication with a plurality of classifiers 206, the plurality of classifiers 206 may each be associated with the same classification report period such that the confidence scores are output by the plurality of classifiers 206 in a synchronous manner. Alternatively, the confidence scores may be output by the plurality of classifiers 206 asynchronously. In particular, one or more of the plurality of classifiers 206 may be associated with a classification report period that is different to the classification report period of the remaining classifiers of the plurality of classifiers 206. That is, of the N classifiers forming the plurality of classifiers 206, two or more different classification report periods may be employed by the classifiers. In one example, each of the plurality of classifiers 206 is associated with a different classification report period.
At the ith time instance, one or more confidence scores output from the classifiers 206 are supplied as an input, denoted y_i, to the fusion engine 204. It will be appreciated from the above that the number of confidence scores in the input y_i can vary from 1 to N (where N ≥ 1).
As shown in Figure 3, the fusion engine 204 receives an initialized combined confidence score p(C | y_0). The initialized confidence score may be stored in, and retrieved from, memory 210.
As further shown in Figure 3, a previously determined combined confidence score p(C_{i-1} | y_1:i-1), determined using one or more confidence scores received from one or more of the one or more classifiers 206 at a previous time instance t_{i-1}, may be used in the determination of a combined confidence score, denoted p(C_i | y_1:i), after receipt of one or more confidence scores from one or more of the one or more classifiers at time instance t_i. It will be appreciated that the initialized confidence score is used as the previously determined combined confidence score in the determination of a combined confidence score after receipt of one or more confidence scores from one or more of the classifiers 206 at time instance t_1.
The fusion engine 204 also uses learnt classifier behaviour in the determination of the combined confidence score p(C_i | y_1:i). In particular, the one or more confidence scores received from one or more of the classifiers 206 at each successive time instance can be stored in memory 210 and accessed by the fusion engine 204 in order to compute the combined confidence score. In particular, the fusion engine 204 may use the one or more confidence scores received from one or more of the classifiers 206 at each successive time instance to determine a likelihood p(y_1:i | C_i) of observing the full stream of classifier outputs up to the time instance t_i given that the target belongs to the target category (i.e. is a drone).
In some embodiments, the fusion engine 204 uses a likelihood of the target belonging to the target category, denoted p(C), in the determination of the combined confidence score. Some target categories can be more likely than others in general, e.g. on average a target is less likely to be a drone than a non-drone; this is captured by the prior probability p(C). Embodiments using the likelihood p(C) of the target belonging to the target category are described in more detail below with reference to Figure 5.
In other embodiments, the fusion engine 204 uses multiple target category probabilities, each denoted p(C_i | C_{i-1}), i.e. a Bayesian prior probability defining the likelihood of the target changing the way it represents itself between successive time instances such that at time instance t_{i-1} it will be identified as belonging to a respective target category (e.g. drone, bird, or plane) from a set of target categories and at time instance t_i it will be identified as belonging to the target category (e.g. drone) from the set of target categories. Whilst the target's true identity (e.g. drone, bird, etc.) remains fixed and does not change over time, the characteristics exhibited by the target and observed by a particular "ideal configuration" of an available sensor (e.g. a radar) can vary. The target may truly exhibit characteristics of a bird for one time interval, and of a drone for a different time interval. For an ideal sensor that only examines the target motion or kinematics features, a drone flying in a manner that makes it indistinguishable from a bird truly has the characteristics of a bird. Similarly, if this ideal radar relies entirely on detecting rotors to differentiate birds from drones, but the target altitude or construction is such that the rotors cannot be visible, the target again displays the characteristics of a bird. It can even be argued that a human operator closely monitoring the same target characteristics would reach the same conclusion. Therefore, the "quasi-true" target category C_i at time instance t_i is defined here to be the target characteristics presented to an ideal sensor. The "quasi-true" target category can therefore be assumed to potentially change over time, which is governed by the transition probability p(C_i | C_{i-1}) between two successive time instances t_{i-1} and t_i.
Reasonable assumptions can be incorporated here, e.g. the probability of a drone target retaining its class/status is expected to be significantly higher than that of it becoming (as represented to the ideal radar) a non-drone (e.g. due to a micro-Doppler signature that is no longer visible). This is to circumvent a drone target being declared as non-drone if its drone-like features are temporarily unobservable. However, this will be overwhelmed by evidence from data (i.e. classifier confidence scores) if the non-drone features persist. For a non-drone target, it is possible that it switches to a drone (e.g. as micro-Doppler becomes visible).
The multiple target category probabilities may for example comprise: (i) the probability of a target representing itself as a drone at t_i given that it represented itself as a drone at t_{i-1}; (ii) the probability of a target representing itself as a drone at t_i given that it represented itself as a bird at t_{i-1}; and (iii) the probability of a target representing itself as a drone at t_i given that it represented itself as a car at t_{i-1}. In this example the set of target categories consists of a drone target category, a bird target category and a car target category.
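Written out as a table (with assumed values; only the drone-column entries correspond to items (i)-(iii) above, and every row must sum to one), the transition probabilities p(C_i | C_{i-1}) for this three-category set might look as follows.

```python
# Illustrative sketch only; the probabilities below are assumptions, not values from the disclosure.
CATEGORIES = ["drone", "bird", "car"]

P_TRANSITION = {
    "drone": {"drone": 0.90, "bird": 0.08, "car": 0.02},   # (i)  drone at t_{i-1} -> drone at t_i
    "bird":  {"drone": 0.05, "bird": 0.90, "car": 0.05},   # (ii) bird at t_{i-1} -> drone at t_i is possible but rare
    "car":   {"drone": 0.02, "bird": 0.03, "car": 0.95},   # (iii) car at t_{i-1} -> drone at t_i is rarer still
}

for prev, row in P_TRANSITION.items():
    assert abs(sum(row.values()) - 1.0) < 1e-9, prev        # each row must be a valid distribution
```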
An initial value of the likelihood of the target belonging to the target category p(C) and/or each of the multiple target category probabilities p(C_i | C_{i-1}) is stored in memory 210 for retrieval by the fusion engine 204.
The initial value of the likelihood of the target belonging to the target category p(C) and/or each of the multiple target category probabilities p(C_i | C_{i-1}) may be user defined.
Alternatively, the initial value of the likelihood of the target belonging to the target category p(C) and/or each of the multiple target category probabilities p(C_i | C_{i-1}) may be determined based on training data used to train the classifiers 206.
In some implementations, the likelihood of the target belonging to the target category p(C) and/or each of the multiple target category probabilities p(C_i | C_{i-1}) may be fixed (i.e. do not dynamically change over time).
In other implementations, the likelihood of the target belonging to the target category p(C) and/or each of the multiple target category probabilities p(C_i | C_{i-1}) may dynamically change over time. In these implementations, the likelihood of the target belonging to the target category p(C) and/or each of the multiple target category probabilities p(C_i | C_{i-1}) may be determined based on contextual information relating to a detector which is a source of the input data supplied to the one or more classifiers 206. The fusion engine 204 adapts the priors p(C) and p(C_i | C_{i-1}) based on the contextual information; an illustrative sketch of such adaptation is given after the list of contextual information below. This contextual information may be stored in memory 210 for retrieval by the fusion engine 204. Alternatively or additionally, the likelihood of the target belonging to the target category p(C) and/or each of the multiple target category probabilities p(C_i | C_{i-1}) may be determined based on input data received by one or more of the classifiers 206.
In implementations where the detector is a radar device, the contextual information comprises one or more of:
- location information associated with the radar device;
- target specific attributes extracted by the radar device based on processing of the input data received by the classifiers. In particular, the radar device may comprise one or more other modules which process the input data to detect features of targets in the FoS of the radar device, but do not perform classification. For example, these other modules may process radar signals to detect the presence of rotors on-board the target from a Doppler spectrum;
- target specific attributes extracted by the radar device based on processing of sensor data received by the radar device from a remote sensor or from a sensor on the radar device. The target specific attributes may be identified based on sensor data received from other sensors (e.g. image data captured by a camera on the radar device, or a remote camera whose field of view overlaps with the FoS);
- meteorological information (e.g. weather) associated with a FoS of the radar device;
- sensor data associated with the FoS of the radar device. This sensor data may be received from a sensor on the radar device, or a remote sensor that is external to the radar device which can feed data to the radar device. This sensor data may for example be RF sensor data received from an RF sensor on the radar device or in the vicinity of the radar such that it is able to detect communication with a target in the FoS of the radar device;
- location information associated with a target in the FoS of the radar device. This location information may correspond to GPS co-ordinates reported to the radar device from a GPS sensor (or other location sensor) on the target;
- information relating to road traffic in the FoS of the radar device;
- information relating to terrain within the FoS of the radar device; and
- bird migration patterns in the FoS of the radar device.
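As a purely illustrative example of how such contextual information might feed into the prior, the sketch below adjusts an assumed prior p(C = drone) using two hypothetical context flags; the function name, the flags and the adjustment amounts are assumptions and do not come from the disclosure.

```python
def adapt_drone_prior(base_prior: float,
                      heavy_bird_migration: bool = False,
                      rf_link_detected: bool = False) -> float:
    """Hypothetical adjustment of the prior p(C = drone) from contextual information.

    heavy_bird_migration -- bird migration activity reported in the FoS
    rf_link_detected     -- an RF sensor has detected communication with the target
    """
    prior = base_prior
    if heavy_bird_migration:
        prior -= 0.10   # many birds expected in the FoS -> lower the drone prior
    if rf_link_detected:
        prior += 0.20   # a detected control link suggests a drone -> raise the drone prior
    # Clip so the adjusted value remains a usable probability.
    return min(max(prior, 0.01), 0.99)
```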
Figure 4 is a flowchart illustrating a method 400, performed by the fusion engine 204, of using confidence scores output by the one or more classifiers 206 to determine a combined confidence score in accordance with embodiments of the present disclosure.
At step S402, at the i-th time instance t_i, the fusion engine 204 receives one or more confidence scores output from the classifiers 206, denoted y_i.
As noted above, the number of confidence scores in the input y_i can vary from 1 to N. In particular, in embodiments where the fusion engine 204 is in communication with a plurality of classifiers 206, the fusion engine 204 does not wait for a predetermined number of confidence scores to be received before determining the combined confidence score.
In embodiments where the fusion engine 204 is in communication with a single classifier 206, or with a plurality of classifiers 206 and receives a single confidence score at t_i, the combined confidence score computed by the fusion engine 204 is a "combined" confidence score given that it is computed based on the single confidence score and all previously received confidence scores (received at previous time instances). In embodiments where the fusion engine 204 is in communication with a plurality of classifiers 206 and receives multiple confidence scores at t_i, the combined confidence score computed by the fusion engine 204 is a "combined" confidence score given that it is computed based on the multiple confidence scores and all previously received confidence scores.
In embodiments where the fusion engine 204 receives only a single confidence score at successive time steps, it computes the combined confidence score across the time steps and this can be viewed as temporal fusion of the classification results.
At step S404, the fusion engine 204 obtains at least one target category probability. The fusion engine 204 may obtain the at least one target category probability by retrieving it from memory 210. Alternatively, the fusion engine 204 may compute the at least one target category probability (e.g. based on information stored in memory 210 or based on the input data received by one or more of the plurality of classifiers). As explained above, in some embodiments the at least one target category probability comprises a likelihood of the target belonging to the target category, denoted p(C). In other embodiments the at least one target category probability comprises multiple target category probabilities, each denoted p(C_i | C_{i-1}).
At step S406, the fusion engine 204 determines the combined confidence score, p(C_i | y_{1:i}).
In one embodiment, the fusion engine 204 determines the combined confidence score using the one or more confidence scores y_i output from the classifiers 206 received at step S402, the at least one target category probability obtained at step S404 (whereby the at least one target category probability comprises a likelihood of the target belonging to the target category, denoted p(C)), and a likelihood p(y_{1:i-1} | C) of observing all confidence scores received from the classifiers up to the time instance t_{i-1} immediately prior to t_i given that the target belongs to the target category (e.g. is a drone). This embodiment is described in more detail below with reference to Figure 5.
In another embodiment, the fusion engine 204 determines the combined confidence score using the one or more confidence scores y_i output from the classifiers 206 received at step S402, the at least one target category probability obtained at step S404 (whereby the at least one target category probability comprises multiple target category probabilities, each denoted p(C_i | C_{i-1})), and a previously determined combined confidence score determined using one or more confidence scores received from one or more of the one or more classifiers 206 at the time instance t_{i-1}. This embodiment is described in more detail below with reference to Figure 6.
At step S408, the fusion engine 204 outputs the combined confidence score.
The fusion engine 204 may output the combined confidence score via the communications interface 214 to a remote device (not shown in Figure 1). Additionally or alternatively, the fusion engine 204 may output the combined confidence score to the output device 208.
The fusion engine 204 may output the combined confidence score to an alert generation module (not shown in Figure 2) configured to generate an alert based on comparing the combined confidence score to one or more predetermined thresholds (e.g. the combined confidence score exceeding, or being less than, a predetermined threshold). The alert generation module may output the alert via the communications interface 214 to a remote device (not shown in Figure 1). Additionally or alternatively, the alert generation module may output the alert to the output device 208.
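A minimal sketch of such an alert rule is given below; the two threshold values, the message strings and the function name are assumptions made for illustration rather than details of the disclosure.

```python
from typing import Optional

def maybe_generate_alert(combined_score: float,
                         upper: float = 0.9,
                         lower: float = 0.1) -> Optional[str]:
    """Hypothetical alert rule: compare the combined confidence score to
    predetermined thresholds and return an alert message, or None."""
    if combined_score > upper:
        return "ALERT: target is very likely a drone"
    if combined_score < lower:
        return "cleared: target is very unlikely to be a drone"
    return None  # score between the thresholds: no alert raised
```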
As illustrated, once the fusion engine 204 outputs the combined confidence score, the process 400 loops back to step S402 where the fusion engine 204 receives one or more new confidence scores output from the classifiers 206 at the next time instance.
Figure 5 is a flowchart illustrating a method 500 performed by the fusion engine 204 to determine a combined confidence score according to one embodiment of the present disclosure. In particular, it illustrates in more detail the steps that may be performed at step S406 to determine the combined confidence score in one embodiment.
In the method 500, at step S404 the fusion engine 204 obtains a likelihood of the target belonging to the target category, denoted p(C).
Based on Bayes' theorem the combined confidence score p(C_i | y_{1:i}) may be expressed as:

p(C_i | y_{1:i}) ∝ p(y_{1:i} | C) p(C)

where:

p(y_{1:i} | C) = p(y_i | y_{1:i-1}, C) p(y_{1:i-1} | C)

Thus at step S502, the fusion engine 204 obtains the likelihood p(y_{1:i-1} | C) of observing all confidence scores received from the classifiers 206 up to the previous time instance t_{i-1} given that the target belongs to the target category. The likelihood p(y_{1:i-1} | C) is known from the previous time instance t_{i-1}, when the corresponding likelihood for that time instance was computed, and thus can be retrieved from memory 210. The fusion engine 204 may alternatively compute the likelihood p(y_{1:i-1} | C) using the previously determined combined confidence score p(C_i | y_{1:i-1}).

At step S504, the fusion engine 204 computes the likelihood p(y_i | y_{1:i-1}, C) of observing the one or more confidence scores received at the time instance t_i given all confidence scores received from the classifiers 206 up to the previous time instance t_{i-1} and that the target belongs to the target category. It will be appreciated that the likelihood p(y_i | y_{1:i-1}, C) may dynamically change over time.
The likelihood p(y_i | y_{1:i-1}, C) is learnt from the classifiers' behaviour on previous data using empirical kernel estimates or parametric or non-parametric models of multivariate distributions, or Copulas for multivariate data. The "previous data" referred to here may correspond to confidence scores that were received from the classifiers 206 at previous time instances. Alternatively, or additionally, the "previous data" referred to here may correspond to test confidence scores used to configure the fusion engine 204 prior to operation.
Assuming a first order Markov model for the classifier confidences conditioned on the target true class, to reduce the complexity of the conditional likelihood distributions to be learnt, it is possible to approximate the probability of seeing the next classifier confidence value (i.e. measurement) given the target class as depending only on the classifier result at the previous time step and not on those prior to that, such that:

p(y_i | y_{1:i-1}, C) ≈ p(y_i | y_{i-1}, C)

In particular, the likelihood p(y_i | y_{i-1}, C) may be determined using the one or more confidence scores y_i received at time instance t_i and one or more confidence scores y_{i-1} received from one or more of the one or more classifiers 206 at the previous time instance. This can then be computed with Copulas techniques, for example Archimedean Copulas, or kernel estimate techniques.
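One way such a conditional likelihood could be learnt is with an empirical kernel estimate over consecutive score pairs. The sketch below is a minimal illustration using scipy's gaussian_kde on synthetic data; the data-generating choices and the helper name p_y_given_prev are assumptions rather than anything specified in the disclosure.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic stand-in for "previous data": consecutive confidence-score pairs
# (y_{i-1}, y_i) recorded for targets known to belong to the category C.
rng = np.random.default_rng(0)
prev_scores = rng.beta(5, 2, size=500)                                     # y_{i-1}
next_scores = np.clip(prev_scores + rng.normal(0, 0.05, size=500), 0, 1)   # y_i

# Empirical kernel estimates of the joint and marginal densities.
joint_kde = gaussian_kde(np.vstack([prev_scores, next_scores]))  # p(y_{i-1}, y_i | C)
marginal_kde = gaussian_kde(prev_scores)                         # p(y_{i-1} | C)

def p_y_given_prev(y_i: float, y_prev: float) -> float:
    """Approximate p(y_i | y_{i-1}, C) as the ratio of the learnt joint
    density to the learnt marginal density (first order Markov assumption)."""
    return float(joint_kde([[y_prev], [y_i]])[0] / marginal_kde([y_prev])[0])
```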
At step S506, the fusion engine 204 computes the likelihood p(y_{1:i} | C) of observing all confidence scores received from the classifiers 206 up to the time instance t_i given that the target belongs to the target category, by multiplying the likelihood p(y_{1:i-1} | C) with the likelihood p(y_i | y_{1:i-1}, C).
At step S508, the fusion engine 204 computes the combined confidence score p(C_i | y_{1:i}) by multiplying the likelihood p(y_{1:i} | C) with the likelihood of the target belonging to the target category, p(C). The target class C_i and C are used interchangeably in this embodiment since the target class is fixed over time.
Whilst the steps of the method 500 have been shown in a particular order, it will be appreciated that embodiments of the present disclosure are not limited to the order shown in Figure 5.
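Putting steps S502 to S508 together, a single update of this recursion could look like the following minimal sketch; the function and argument names are assumptions, and a normalisation over the competing target categories would still be needed to report a proper probability.

```python
def method_500_step(prior_c: float,
                    lik_prev: float,
                    lik_new_given_prev: float) -> tuple:
    """One illustrative update of the Figure 5 style recursion.

    prior_c            -- p(C), the likelihood of the target belonging to the category
    lik_prev           -- p(y_{1:i-1} | C), retrieved or recomputed at step S502
    lik_new_given_prev -- p(y_i | y_{1:i-1}, C), e.g. from a learnt kernel model (S504)

    Returns (unnormalised combined confidence score, updated p(y_{1:i} | C)).
    """
    lik_all = lik_prev * lik_new_given_prev   # S506: p(y_{1:i} | C)
    combined = lik_all * prior_c              # S508: proportional to p(C_i | y_{1:i})
    return combined, lik_all
```

The second return value would be stored (e.g. in memory 210) so that it can play the role of p(y_{1:i-1} | C) at the next time instance.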
Figure 6 is a flowchart illustrating a method 600 performed by the fusion engine 204 to determine a combined confidence score according to another embodiment of the present disclosure. In particular, it illustrates in more detail the steps that may be performed at step S406 to determine the combined confidence score in another embodiment.
In the method 600, at step S404 the fusion engine 204 obtains multiple target category probabilities, each denoted p(C_i | C_{i-1}) and defining a probability of the target changing the way it represents itself between successive time instances such that at time instance t_{i-1} it will be identified as belonging to a respective target category (e.g. drone, bird, or plane) from a set of target categories and at the time instance t_i it will be identified as belonging to the target category (e.g. drone) from the set of target categories.
In an example whereby the set of target categories consists of a drone target category, a bird target category and a car target category, the multiple target category probabilities would comprise: (i) the probability of a target representing itself as a drone at t_i given that it represented itself as a drone at t_{i-1}; (ii) the probability of a target representing itself as a drone at t_i given that it represented itself as a bird at t_{i-1}; and (iii) the probability of a target representing itself as a drone at t_i given that it represented itself as a car at t_{i-1}.
Based on Bayes' theorem the combined confidence score p(C_i | y_{1:i}) may be expressed as:

p(C_i | y_{1:i}) ∝ p(y_i | C_i) Σ_{C_{i-1}} p(C_i | C_{i-1}) p(C_{i-1} | y_{1:i-1})

Thus at step S602, the fusion engine 204 obtains a previously determined combined confidence score p(C_{i-1} | y_{1:i-1}) determined using one or more confidence scores received from one or more of the classifiers 206 at the previous time instance t_{i-1}. The fusion engine 204 may retrieve the previously determined combined confidence score p(C_{i-1} | y_{1:i-1}) from memory 210.
As noted above, the initialized confidence score is used as the previously determined combined confidence score in the determination of a combined confidence score after receipt of one or more confidence scores from one or more of the classifiers 206 at time instance t_1.
At step S604, the fusion engine 204 also computes a likelihood p(y_i | C_i) of observing the one or more confidence scores received at the time instance t_i given that the target belongs to the target category.
This likelihood p(y_i | C_i) is learnt from the classifiers' behaviour on previous data, however at a given time step and conditioned on a target true class, using empirical kernel estimates or parametric or non-parametric models of univariate or multivariate distributions. This likelihood p(y_i | C_i) is the probability (learnt from the classifier(s) previous behaviour) of seeing a particular confidence score y_i for a given "true" target class C_i at the time instance t_i. The "previous data" referred to here may correspond to confidence scores that were received from the classifiers 206 at previous time instances. Alternatively, or additionally, the "previous data" referred to here may correspond to test confidence scores used to configure the fusion engine 204 prior to operation.
We note that for a fusion engine 204 based on a Markov model for a quasi-truth class (i.e. for the target latent class at t_i) the following applies:

p(y_i | y_{1:i-1}, C_i) = p(y_i | C_i)

In embodiments where the fusion engine 204 is in communication with a plurality of classifiers 206, this can be further simplified, assuming the plurality of classifiers are conditionally independent of each other, as:

p(y_i | C_i) = p(y_i^1 | C_i) p(y_i^2 | C_i) ... p(y_i^N | C_i)

where y_i^n denotes the confidence score received from the n-th classifier at the time instance t_i. At step S606, for each of the multiple target category probabilities p(C_i | C_{i-1}) the fusion engine 204 multiplies the target category probability p(C_i | C_{i-1}) with the previously determined combined confidence score p(C_{i-1} | y_{1:i-1}), and sums the results to obtain a summation result.
At step S608, the fusion engine 204 computes the combined confidence score p(C_i | y_{1:i}) by multiplying the likelihood p(y_i | C_i) with the summation result.
Whilst the steps of the method 600 have been shown in a particular order, it will be appreciated that embodiments of the present disclosure are not limited to the order shown in Figure 6.
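Steps S602 to S608 amount to one step of a standard forward (filtering) recursion. The sketch below is a minimal illustration under the assumptions above; the function and argument names are invented for the example.

```python
import numpy as np

def method_600_step(posterior_prev: np.ndarray,
                    transition: np.ndarray,
                    lik_scores: np.ndarray) -> np.ndarray:
    """One illustrative update of the Figure 6 style recursion.

    posterior_prev -- p(C_{i-1} | y_{1:i-1}) for each of the K categories
    transition     -- p(C_i | C_{i-1}), shape (K, K), rows indexed by C_{i-1}
    lik_scores     -- p(y_i | C_i) for each category; under the conditional
                      independence assumption this is the product of the
                      per-classifier likelihoods received at t_i

    Returns the normalised combined confidence scores p(C_i | y_{1:i}).
    """
    # S606: for each C_i, sum over C_{i-1} of p(C_i | C_{i-1}) p(C_{i-1} | y_{1:i-1}).
    summation = transition.T @ posterior_prev
    # S608: multiply by the likelihood of the newly received confidence scores.
    unnormalised = lik_scores * summation
    return unnormalised / unnormalised.sum()
```

For example, with the assumed drone/bird/car matrix from the earlier sketch, method_600_step(np.array([0.3, 0.6, 0.1]), TRANSITION, np.array([0.8, 0.3, 0.1])) would return an updated distribution whose first entry is the combined confidence score for the drone category.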
As noted above, whilst embodiments of the present disclosure are described with reference to the input data received by the one or more classifiers 206 as originating from a radar device, such that the confidence scores output by each of the classifiers indicate a likelihood (i.e. probability) that an unmanned aerial vehicle (also referred to herein as a drone) is present in the surveillance area of the radar device, this is merely an example. It will be appreciated that embodiments of the present disclosure can be applied to any problem requiring classification or discrimination between two or more different classes. It will be appreciated that the input data supplied to the classifiers 206 will depend on the particular application. In particular, whilst we have described the detector as being a radar device, the detector could be a Lidar transceiver, an ultrasonic transceiver, an infrared transceiver, a camera, a video camera etc. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (28)

  1. 1. A method of using confidence scores output by one or more classifier to determine a combined confidence score, each of the confidence scores indicating a probability that input data received by the respective classifier comprises a target belonging to a target category, the method performed on a computing device comprising: receiving, at a time instance, one or more confidence scores from the one or more classifier; obtaining at least one target category probability; determining the combined confidence score based on: (i) the one or more confidence scores received at said time instance; (ii) the at least one target category probability; and (iii) a likelihood of observing all confidence scores received from the one or more classifier up to a previous time instance immediately prior to said time instance given that the target belongs to the target category; or a previously determined combined confidence score determined using one or more confidence scores received from one or more of the one or more classifier at the previous time instance; and outputting the combined confidence score.
  2. 2. The computer implemented method of claim 1, wherein determining the combined confidence score is based on the likelihood of observing all confidence scores received from the one or more classifier up to a previous time instance immediately prior to said time instance given that the target belongs to the target category, and the at least one target category probability defines a likelihood of the target belonging to the target category.
  3. 3. The computer implemented method of claim 2, wherein determining the combined confidence score comprises multiplying a likelihood of observing all confidence scores received from the one or more classifier up to the time instance given that the target belongs to the target category with the likelihood of the target belonging to the target category.
  4. 4. The computer implemented method of claim 3, the method comprising determining the likelihood of observing all confidence scores received from the one or more classifiers up to the time instance given that the target belongs to the target category by multiplying a likelihood of observing the one or more confidence scores received at said time instance given all confidence scores received from the one or more classifier up to the previous time instance and that the target belongs to the target category, with the likelihood of observing all confidence scores received from the one or more classifier up to the previous time instance given that the target belongs to the target category.
  5. 5. The computer implemented method of claim 4, the method comprising determining the likelihood of observing the one or more confidence scores received at said time instance given all confidence scores received from the one or more classifier up to the previous time instance and that the target belongs to the target category using the one or more confidence scores received at said time instance and one or more confidence scores received from one or more of the one or more classifier at the previous time instance.
  6. 6. The computer implemented method of any of claims 3 to 5, wherein the likelihood of observing all confidence scores received from the one or more classifier up to the time instance given that the target belongs to the target category dynamically changes over time.
  7. 7. The computer implemented method of claim 1, wherein the at least one target category probability comprises multiple target category probabilities, each target category probability defining a probability of the target changing the way it represents itself between successive time instances such that at a previous time instance immediately prior to said time instance it will be identified as belonging to a respective target category from a set of target categories and at the time instance it will be identified as belonging to the target category from the set of target categories, and determining the combined confidence score is based on the previously determined combined confidence score determined using one or more confidence scores received from one or more of the one or more classifier at the previous time instance, and the multiple target category probabilities.
  8. 8. The computer implemented method of claim 7, wherein determining the combined confidence score comprises: for each of the multiple target category probabilities, multiplying the target category probability with the previously determined combined confidence score; summing results of said multiplying to obtain a summation result; and multiplying the summation result with a likelihood of observing the one or more confidence scores received at said time instance given that the target belongs to the target category.
  9. 9. The computer implemented method of claim 8, the method comprising determining the likelihood of observing the one or more confidence scores received at said time instance given that the target belongs to the target category using the one or more confidence scores received at said time instance.
  10. 10. The computer implemented method of any of claims 7 to 9, wherein the likelihood of observing the one or more confidence scores received at said time instance given that the target belongs to the target category dynamically changes over time.
  11. 11. The computer implemented method of any preceding claim, wherein the at least one target category probability is fixed.
  12. 12. The computer implemented method of any of claims 1 to 10, wherein the at least one target category probability dynamically changes over time.
  13. 13. The computer implemented method of any preceding claim, wherein an initial value of the at least one target category probability is user defined.
  14. 14. The computer implemented method of any preceding claim, wherein the at least one target category probability is determined based on contextual information relating to a detector which is a source of the input data.
  15. 15. The computer implemented method of claim 14, wherein the detector is a radar device, and the contextual information comprises one or more of: location information associated with the radar device; target specific attributes extracted by the radar device based on processing of the input data; target specific attributes extracted by the radar device based on processing of sensor data received by the radar device from a remote sensor; target specific attributes extracted by the radar device based on processing of sensor data output from a sensor on the radar device; meteorological information associated with a field of surveillance of the radar device; sensor data associated with the field of surveillance of the radar device; location information associated with a target in the field of surveillance; information relating to road traffic in the field of surveillance; information relating to terrain within the field of surveillance; and bird migration patterns in the field of surveillance.
  16. 16. The computer implemented method of any preceding claim, wherein the at least one target category probability is determined based on training data used to train the one or more classifier.
  17. 17. The computer implemented method of any preceding claim, wherein the at least one target category probability is determined based on input data received by one or more of the one or more classifier.
  18. 18. The computer implemented method of any preceding claim, the method comprising using confidence scores output by a plurality of classifiers to determine the combined confidence score.
  19. 19. The computer implemented method of claim 18, wherein the confidence scores are output by the plurality of classifiers asynchronously.
  20. 20. The computer implemented method of claim 18 or 19, wherein each classifier of the plurality of classifiers is associated with a classifier type, and the plurality of classifiers comprise classifiers of multiple classifier types.
  21. 21. The computer implemented method of any of claims 18 to 20, wherein the input data received by a classifier of the plurality of classifiers is different to the input data received from one or more remaining classifiers of the plurality of classifiers.
  22. 22. The computer implemented method of any preceding claim, wherein a source of the input data is a radar device.
  23. 23. The computer implemented method of claim 22, wherein input data received by each classifier comprises one or more of unprocessed radar data output by radar receiver elements of the radar device; processed radar data generated by processing the unprocessed radar data output by the radar device; and tracking information output by a tracking module of the radar device.
  24. 24. The computer implemented method of any preceding claim, wherein outputting the combined confidence score comprising transmitting the combined confidence score to a display of the computing device.
  25. 25. The computer implemented method of any preceding claim, wherein outputting the combined confidence score comprising transmitting the combined confidence score to a remote computing device.
  26. 26. At least one non-transitory computer-readable storage medium comprising instructions which, when executed by at least one processor causes the at least one processor to perform the method of any preceding claim.
  27. 27. A computer program comprising instructions which, when the program is executed by a device, cause the device to perform the method of any of claims 1 to 25.
  28. 28. A device for using confidence scores output by one or more classifier to generate a combined confidence score, each of the confidence scores indicating the probability that input data received by the respective classifier comprises a target belonging to a target category, the device comprising: a processor, wherein the processor is configured to: receive, at a time instance, one or more confidence scores from the one or more classifier; obtain at least one target category probability; determine the combined confidence score based on: (i) the one or more confidence scores received at said time instance; (ii) the at least one target category probability; and (iii) a likelihood of observing all confidence scores received from the one or more classifier up to a previous time instance immediately prior to said time instance given that the target belongs to the target category; or a previously determined combined confidence score determined using one or more confidence scores received from one or more of the one or more classifier at the previous time instance; and output the combined confidence score.
GB2215056.9A 2022-10-12 2022-10-12 Improved classification using a combined confidence score Pending GB2623498A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2215056.9A GB2623498A (en) 2022-10-12 2022-10-12 Improved classification using a combined confidence score
PCT/GB2023/052450 WO2024079436A1 (en) 2022-10-12 2023-09-21 Improved classification using a combined confidence score

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2215056.9A GB2623498A (en) 2022-10-12 2022-10-12 Improved classification using a combined confidence score

Publications (2)

Publication Number Publication Date
GB202215056D0 GB202215056D0 (en) 2022-11-23
GB2623498A true GB2623498A (en) 2024-04-24

Family

ID=84817975

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2215056.9A Pending GB2623498A (en) 2022-10-12 2022-10-12 Improved classification using a combined confidence score

Country Status (2)

Country Link
GB (1) GB2623498A (en)
WO (1) WO2024079436A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070160262A1 (en) * 2006-01-11 2007-07-12 Samsung Electronics Co., Ltd. Score fusion method and apparatus
US20180253589A1 (en) * 2017-03-02 2018-09-06 Hong Kong Applied Science and Technology Research Institute Company Limited Anomaly Detection for Medical Samples under Multiple Settings
US20190164010A1 (en) * 2017-11-30 2019-05-30 Kofax, Inc. Object detection and image cropping using a multi-detector approach

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8781992B2 (en) * 2012-06-28 2014-07-15 Raytheon Company System and method for scaled multinomial-dirichlet bayesian evidence fusion

Also Published As

Publication number Publication date
WO2024079436A1 (en) 2024-04-18
GB202215056D0 (en) 2022-11-23

Similar Documents

Publication Publication Date Title
US11927668B2 (en) Radar deep learning
US9429650B2 (en) Fusion of obstacle detection using radar and camera
US20220189326A1 (en) Detection and classification of unmanned aerial vehicles
US20230139751A1 (en) Clustering in automotive imaging
CN110907906B (en) Object classification method and related device
CN113536850B (en) Target object size testing method and device based on 77G millimeter wave radar
JP2017156219A (en) Tracking device, tracking method, and program
Bhatia et al. Object classification technique for mmWave FMCW radars using range-FFT features
CN111323756A (en) Deep learning-based marine radar target detection method and device
US20220108552A1 (en) Method and Apparatus for Determining Drivable Region Information
US20240103130A1 (en) Feature extraction for remote sensing detections
CN111323757A (en) Target detection method and device for marine radar
US20220011786A1 (en) Correlated motion and detection for aircraft
Argüello et al. Radar classification for traffic intersection surveillance based on micro-Doppler signatures
GB2623498A (en) Improved classification using a combined confidence score
Vitiello et al. Ground-to-air experimental assessment of low SWaP radar-optical fusion strategies for low altitude Sense and Avoid
DK180729B1 (en) System for processing radar data representing intensity values of received power of reflected radar wave signals
US20220138968A1 (en) Computer vision aircraft detection
Ghazlane et al. Development Of A Vision-based Anti-drone Identification Friend Or Foe Model To Recognize Birds And Drones Using Deep Learning
US11288523B2 (en) Pseudo-range estimation from a passive sensor
US20230350020A1 (en) False target filtering
Pierucci et al. Improvements of radar clutter classification in air traffic control environment
Jochumsen Radar target classification using recursive knowledge-based methods
WO2024141039A1 (en) Object recognition method and related apparatus
Prasad et al. Immediate Animal Recognition and Anticipation Classification for Harvest Fields Using Machine Learning Techniques

Legal Events

Date Code Title Description
COOA Change in applicant's name or ownership of the application

Owner name: THALES SA

Free format text: FORMER OWNER: AVEILLANT LIMITED