CN114358211A - Multi-mode deep learning-based aircraft behavior intention recognition method - Google Patents


Info

Publication number
CN114358211A
CN114358211A (application number CN202210044234.0A)
Authority
CN
China
Prior art keywords
aircraft
sequence
behavior
encoder
data
Prior art date
Legal status
Granted
Application number
CN202210044234.0A
Other languages
Chinese (zh)
Other versions
CN114358211B (en)
Inventor
朱秀翠
黄宇
Current Assignee
Zhongke Shitong Hengqi Beijing Technology Co ltd
Original Assignee
Zhongke Shitong Hengqi Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongke Shitong Hengqi Beijing Technology Co ltd
Priority to CN202210044234.0A
Publication of CN114358211A
Application granted
Publication of CN114358211B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses an aircraft behavior intention recognition method based on multi-modal deep learning, belonging to the technical field of aircraft behavior recognition. Using deep learning, and based on two modalities of data, the trajectory sequence of the aircraft's flight behavior and projection images of the trajectory at different angles, the method recognizes the flight behavior intentions of moving targets such as aircraft from a brand-new perspective. In addition, a behavior intention recognition model is trained separately, using the behavior categories obtained by clustering as labels, so that the behavior intentions of subsequently added aircraft flight trajectories can be recognized directly.

Description

Multi-mode deep learning-based aircraft behavior intention recognition method
Technical Field
The invention relates to the technical field of behavior recognition, in particular to an aircraft behavior intention recognition method based on multi-mode deep learning.
Background
With the rapid development of mobile communication and location-awareness technologies, acquiring real-time location information for various moving targets such as aircraft has become increasingly easy, and the scale of moving-target trajectory data is growing rapidly. In recent years, industries and organizations such as transportation, the military, and internet enterprises have accumulated large data assets containing geographical position information. These data contain the original track information of moving-target activities and are of great value for mining the behavior patterns of moving targets and recognizing their behavior intentions.
At present, the following two problems mainly exist in most methods for analyzing the track behavior of a moving target:
1. The input data modality is single: either only text data or only image data is input, ignoring the influence that the possible correlation between data of different modalities has on the accuracy of the moving-target trajectory behavior analysis results.
2. Most trajectory behavior analysis adopts clustering methods. Limited by the clustering algorithm, the behavior intention of subsequently added trajectories cannot be recognized directly; if the behavior intention of subsequently added trajectory data is to be recognized, the cluster analysis has to be carried out again, which is very cumbersome.
The development of big data and artificial intelligence technology provides a new perspective for moving-target activity analysis, and the new generation of artificial intelligence technology characterized by deep learning provides technical support for behavior intention recognition based on moving-target trajectory data. Multi-modal deep learning is a leading and popular technical field today; many technology companies have made large research and development investments in it, and scientific results, such as multi-modal text classification, have emerged in rapid succession, but these have not yet been applied to the behavior intention recognition of moving targets such as aircraft.
Disclosure of Invention
The invention innovatively provides an aircraft behavior intention recognition method based on multi-modal deep learning. Using deep learning, and based on two modalities of data, the trajectory sequence of the aircraft's flight behavior and projection images of the trajectory at different angles, the flight behavior intentions of moving targets such as aircraft are recognized from a brand-new perspective. In addition, a behavior intention recognition model is trained separately, using the behavior categories obtained by clustering as labels, so that the behavior intention of a subsequently added aircraft flight trajectory can be recognized directly.
In order to achieve the purpose, the invention adopts the following technical scheme:
the aircraft behavior intention recognition method based on the multi-mode deep learning comprises the following steps:
s1, acquiring flight path data of the aircraft, wherein the flight path data comprises flight data of each path point and projection image data of the aircraft projected to the ground at different angles on each path point, sequencing the flight data of each path point into a path sequence according to a flight time sequence, and sequencing each projection image projected to the ground at the same angle on each path point of the aircraft into a projection image sequence;
s2, respectively extracting the behavior characteristic vector of the track sequence and the projection image characteristic vector of each projection image sequence;
s3, performing cluster analysis on the feature vectors associated with the different aircraft extracted in step S2 to obtain the behavior intention category of each aircraft;
s4, training and forming a behavior intention multi-classification model, taking as model training samples the feature vectors associated with each aircraft extracted in step S2 and the behavior intention category of the corresponding aircraft obtained in step S3;
and S5, inputting the acquired flight trajectory data associated with the aircraft into the behavior intention multi-classification model, and outputting a behavior intention identification result of the aircraft by the model.
As a preferred aspect of the present invention, the data dimension of each track point in the track sequence includes a flight time, a flight altitude, a flight speed, and a longitude and a latitude of the track point.
In a preferred embodiment of the present invention, in step S2, the trajectory behavior feature corresponding to the ith track point in the track sequence, taken as the object of behavior feature extraction, is written as b_i, where b_i comprises the flight position variation Δd_i, the flight speed v_i, the flight angle θ_i, the flight speed variation Δv_i, and the flight angle variation Δθ_i of the aircraft at the ith track point, i.e. b_i = (Δd_i, v_i, θ_i, Δv_i, Δθ_i).
As a preferred embodiment of the present invention, Δd_i, v_i, θ_i, Δv_i and Δθ_i are calculated by the following equations (1) to (5), respectively:
Δd_i = √((lat_i − lat_{i−1})^2 + (lon_i − lon_{i−1})^2 + (h_i − h_{i−1})^2)    (1)
v_i = Δd_i / (t_i − t_{i−1})    (2)
θ_i = arctan((lat_i − lat_{i−1}) / (lon_i − lon_{i−1}))    (3)
Δv_i = v_i − v_{i−1}    (4)
Δθ_i = θ_i − θ_{i−1}    (5)
In equations (1) to (5), lat_i and lon_i denote the latitude and longitude of the ith track point; lat_{i−1} and lon_{i−1} denote the latitude and longitude of the (i−1)th track point, which precedes the ith track point in the track sequence; h_i and h_{i−1} denote the altitudes of the ith and (i−1)th track points; and t_i and t_{i−1} denote the times at which the aircraft flies to the ith and (i−1)th track points.
As a preferred embodiment of the present invention, two projection images are obtained for the aircraft at each track point, one by projecting parallel to the ground and one by projecting perpendicular to the ground.
As a preferred embodiment of the present invention, in step S2, the pre-trained auto-encoder model based on LSTM is used to extract the behavior features of the track sequence, and the method for extracting the behavior features of the track sequence based on the auto-encoder model based on LSTM includes:
the auto-encoder model based on the LSTM comprises an LSTM encoder and an LSTM decoder, and the line characteristic sequence of the track sequence is BTR=(b1,b2,…,bi,…,bT) Wherein b is1,b2,…,bi,…,bTSetting i to be 0,1, …, and T to represent the number of the track points in the track sequence; the input to the LSTM encoder is a sequence BTRSaid LSTM weaveThe decoder reads the input sequence in order and updates the hidden layer state h accordinglytThe updating method is as follows:
ht=fLSTM(ht-1,bt),fLSTMis an activation function, btIs an element in the input sequence currently read by the LSTM encoder;
at the last locus bTAfter being processed, the layer state h is impliedTAs said track sequence BTRA low-dimensional implicit representation of;
the LSTM decoder first starts with hTGenerating c as an initialized implicit State1And then further generates (c)2,c3,…,cT) The updating mode of the LSTM decoder is as follows:
c_t = f_LSTM(c_{t−1}), with the hidden state initialized to h_T;
where f_LSTM is the activation function;
the goal of the LSTM decoder is to reconstruct the input sequence B_TR; the LSTM encoder and the LSTM decoder are trained by minimizing the reconstruction error between (b_1, b_2, …, b_i, …, b_T) and (c_1, c_2, …, c_i, …, c_T); the loss function of the LSTM-based auto-encoder model is the mean square error, calculated as:
L(B_TR, C) = (1/T) Σ_{i=1}^{T} ‖b_i − c_i‖^2.
as a preferred embodiment of the present invention, in step S2, the pre-trained auto-encoder model based on CNN is used to extract the behavior features of the projection image sequence, and the method for extracting the behavior features of the projection image sequence based on the auto-encoder model based on CNN comprises:
the CNN-based auto-encoder model includes a CNN encoder and a CNN decoder, with the projection image sequence being I, the aim of the CNN encoder being to convert an input vector I into a potential representation Z2The aim of the CNN decoder is to represent the potential representation Z2The mixture is reconstructed into I',
the CNN encoder comprises 3 conv layers, 1 reshape layer and 1 FC layer which are sequentially connected, a LeakyRelu activation function is adopted behind each conv layer, the conv layers in the CNN encoder are used for extracting image features of each element in an input vector I, the reshape layers in the CNN encoder are used for changing the size of a feature map output by the conv layers, and the FC layers in the CNN encoder are used for reducing the dimension of input data;
the CNN decoder comprises 3 deconv layers, 1 reshape layer and 1 FC layer, a LeakyRelu activation function is adopted after each deconv layer, and the FC layer in the CNN decoder is used for performing dimensionality raising on output data of the CNN encoder; the reshape layer in the CNN decoder is used for changing the size of the feature map after being subjected to dimension upgrading by the FC layer; the 3 deconv layers in the CNN decoder are used for reconstructing the feature map output by the reshape layer into I';
the loss function of the CNN-based auto-encoder model is the mean square error, calculated as:
L(I, I′) = ‖I − I′‖^2, where L(I, I′) denotes the loss function.
As a preferred embodiment of the present invention, in step S3, the feature vectors associated with the different aircraft extracted in step S2 are subjected to cluster analysis by a DBSCAN density clustering algorithm.
As a preferred embodiment of the present invention, in step S4, the method for training the behavior intention multi-classification model includes:
L1, the behavior feature of the trajectory sequence extracted in step S2 is denoted Z_1(p); min-max normalization converts Z_1(p) into z_1(p)′, with the transfer function:
z_1(p)′ = (Z_1(p) − min) / (max − min),
where max represents the maximum value of the sample data and min represents the minimum value of the sample data;
let K be the side length of the target matrix, determined from the length p of z_1(p)′;
z′ is then converted to obtain I_1 ∈ R^{K×K}, where the mth row of I_1 is calculated as:
I_1(m) = A + B,
where A represents one-dimensional data consisting of m zero elements,
B = [z′_{1,m}, z′_{1,m+1}, …, z′_{1,K−1}],
and p represents the length of z_1(p);
L2, I_1 ∈ R^{K×K} and the two projection images I_2 ∈ R^{K×K} associated with the same aircraft are spliced to obtain I ∈ R^{K×K×3};
L3, performing data augmentation on the image I using any one or more of image flipping, rotation, scaling, cropping and translation to expand the model training samples, and dividing the model training samples proportionally into a training data set, a test data set and a validation data set;
L4, reading the training data set and the test data set into a Darknet-53 network, and training the network using the categories obtained by clustering in step S3 as labels and a cross-entropy loss function, finally forming the behavior intention multi-classification model.
The method uses deep learning and, based on two modalities of data, the trajectory sequence of the aircraft's flight behavior and projection images of the trajectory at different angles, recognizes the flight behavior intentions of moving targets such as aircraft from a brand-new perspective. In addition, a behavior intention recognition model is trained separately, using the behavior categories obtained by clustering as labels, so that the behavior intentions of subsequently added aircraft flight trajectories can be recognized directly.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a flowchart of an implementation of a method for recognizing an aircraft behavior intention based on multi-modal deep learning according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for auto-encoder based extraction of behavioral characteristics associated with a sequence of trajectories and a sequence of projection images of the same aircraft;
FIG. 3 is an internal structural view of an LSTM-based auto-encoder model employed in the present embodiment;
FIG. 4 is an internal structural diagram of the auto-encoder model based on CNN employed in the present embodiment;
FIG. 5 is a clustering diagram of DBSCAN algorithm;
FIG. 6 is a schematic diagram of the cluster analysis process of the DBSCAN algorithm as implemented in code;
FIG. 7 is a logic block diagram of a training behavioral intent multi-classification model;
FIG. 8 is a block diagram of a Residual Block.
Detailed Description
The technical scheme of the invention is further explained by the specific implementation mode in combination with the attached drawings.
The drawings are for illustration only, are schematic rather than true to form, and should not be construed as limiting this patent; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and their descriptions may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if the terms "upper", "lower", "left", "right", "inner", "outer", etc. are used for indicating the orientation or positional relationship based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not indicated or implied that the referred device or element must have a specific orientation, be constructed in a specific orientation and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes and are not to be construed as limitations of the present patent, and the specific meanings of the terms may be understood by those skilled in the art according to specific situations.
In the description of the present invention, unless otherwise explicitly specified or limited, the term "connected" or the like, if appearing to indicate a connection relationship between the components, is to be understood broadly, for example, as being fixed or detachable or integral; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or may be connected through one or more other components or may be in an interactive relationship with one another. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The aircraft behavior intention recognition method based on multi-modal deep learning, as shown in FIG. 1, includes 5 steps: data preparation, data preprocessing, auto-encoder-based feature representation, DBSCAN-based behavior intention clustering, and Darknet-53-based behavior intention multi-classification, as follows:
step 1, data preparation
Flight track data of aircraft are crawled from open-source websites related to aircraft tracks using web crawler technology, yielding a track sequence tra = (x_0, …, x_i, x_{i+1}, …) composed of a series of multi-dimensional data points arranged in flight time order, where x_i represents a point in the trajectory and includes dimensions such as altitude and time; the details are shown in Table a below:
Table a: each track point x_i contains the dimensions flight time, longitude, latitude, flight altitude and flight speed.
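For illustration, a minimal sketch of how such a track sequence might be held in Python is given below; the field names and values are assumptions based on Table a, not the patent's actual data format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrackPoint:
    """One multi-dimensional point x_i of the track sequence tra (fields assumed from Table a)."""
    time: float       # flight time (e.g. seconds since the start of the flight)
    lon: float        # longitude of the track point
    lat: float        # latitude of the track point
    alt: float        # flight altitude
    speed: float      # flight speed

# A track sequence is simply the time-ordered list of points.
tra: List[TrackPoint] = [
    TrackPoint(time=0.0, lon=116.30, lat=39.90, alt=1000.0, speed=120.0),
    TrackPoint(time=10.0, lon=116.32, lat=39.91, alt=1100.0, speed=125.0),
]
```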
Step 2, data preprocessing.
2.1 extracting trajectory behavior features
The behavior features of the 2nd, 3rd, … points of the track sequence tra = (x_0, …, x_i, x_{i+1}, …) obtained in step 1 are extracted in the following way:
The variations of position, speed and angle between two adjacent track points are calculated by the following formulas:
Δd_i = √((lat_i − lat_{i−1})^2 + (lon_i − lon_{i−1})^2 + (h_i − h_{i−1})^2)    (1)
v_i = Δd_i / (t_i − t_{i−1})    (2)
θ_i = arctan((lat_i − lat_{i−1}) / (lon_i − lon_{i−1}))    (3)
Δv_i = v_i − v_{i−1}    (4)
Δθ_i = θ_i − θ_{i−1}    (5)
In equations (1) to (5), lat_i and lon_i denote the latitude and longitude of the ith track point; lat_{i−1} and lon_{i−1} denote the latitude and longitude of the (i−1)th track point, which precedes the ith track point in the track sequence; h_i and h_{i−1} denote the altitudes of the ith and (i−1)th track points; and t_i and t_{i−1} denote the times at which the aircraft flies to the ith and (i−1)th track points.
The trajectory behavior feature corresponding to the ith track point in the extracted track sequence is written as b_i, where b_i comprises the flight position variation Δd_i, the flight speed v_i, the flight angle θ_i, the flight speed variation Δv_i, and the flight angle variation Δθ_i of the aircraft at the ith track point, i.e. b_i = (Δd_i, v_i, θ_i, Δv_i, Δθ_i).
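For illustration, a minimal NumPy sketch of this per-point feature extraction is given below; it assumes the reconstructed forms of equations (1) to (5) above (3-D position change, speed, heading angle, and their first differences), which may differ in detail from the original equation images.

```python
import numpy as np

def trajectory_features(track):
    """Compute b_i = (dd_i, v_i, theta_i, dv_i, dtheta_i) for i = 1..T.

    track: array of shape (T + 1, 4) with columns (time, lon, lat, alt).
    Returns an array of shape (T, 5), one behavior feature row per point.
    """
    t, lon, lat, alt = track[:, 0], track[:, 1], track[:, 2], track[:, 3]
    dlon, dlat, dalt, dt = np.diff(lon), np.diff(lat), np.diff(alt), np.diff(t)

    dd = np.sqrt(dlat ** 2 + dlon ** 2 + dalt ** 2)   # position change, eq. (1)
    v = dd / dt                                       # flight speed, eq. (2)
    theta = np.arctan2(dlat, dlon)                    # flight angle, eq. (3)
    dv = np.diff(v, prepend=v[0])                     # speed change, eq. (4)
    dtheta = np.diff(theta, prepend=theta[0])         # angle change, eq. (5)
    return np.stack([dd, v, theta, dv, dtheta], axis=1)
```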
2.2 acquiring a sequence of trajectory projection images
The track sequence tra = (x_0, …, x_i, x_{i+1}, …) is projected onto RN planes to obtain the projection image sequences of the aircraft track. In the invention, two projection image sequences associated with the same aircraft track are obtained by projecting parallel to the ground and perpendicular to the ground, respectively, i.e. RN = 2.
It is emphasized here that a trajectory is a directed polyline that connects the trajectory points of the same aircraft in time-of-flight order. The projection images obtained by projecting the trajectory at each track point at different angles are grayscale images, in which the projected track line is brightest white and the rest of the image is darkest black. (A grayscale image has only one sample value per pixel, displayed in shades from darkest black to brightest white.)
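For illustration, a rough sketch of rasterizing the ground (parallel-to-ground) projection of one trajectory into such a grayscale image is given below; the image size and the coordinate normalization are assumptions.

```python
import numpy as np

def project_to_image(lon, lat, size=256):
    """Rasterize the top-down (parallel-to-ground) projection of a trajectory.

    lon, lat: 1-D arrays of track-point coordinates.
    Returns a size x size uint8 image: the track line is white (255), the rest black (0).
    """
    img = np.zeros((size, size), dtype=np.uint8)
    # Normalize coordinates into pixel space (assumed normalization).
    x = (lon - lon.min()) / max(lon.max() - lon.min(), 1e-9) * (size - 1)
    y = (lat - lat.min()) / max(lat.max() - lat.min(), 1e-9) * (size - 1)
    for i in range(len(x) - 1):
        # Draw the directed segment between consecutive track points by dense sampling.
        steps = int(max(abs(x[i + 1] - x[i]), abs(y[i + 1] - y[i]))) + 2
        xs = np.linspace(x[i], x[i + 1], steps).astype(int)
        ys = np.linspace(y[i], y[i + 1], steps).astype(int)
        img[ys, xs] = 255
    return img

# Example: the parallel-to-ground projection of a toy trajectory.
lon = np.array([116.30, 116.32, 116.35, 116.36])
lat = np.array([39.90, 39.91, 39.93, 39.96])
ground_projection = project_to_image(lon, lat)
```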
Step 3, auto-encoder-based feature representation: automatically extracting fixed-length feature vector representations.
The method for extracting the behavior features of the trajectory sequence and of each projection image sequence associated with the same aircraft based on an auto-encoder is shown in FIG. 2. First, this step trains an LSTM-based auto-encoder model (LSTM, long short-term memory network, is an improved recurrent neural network (RNN)) to extract a fixed-length one-dimensional behavior feature vector, denoted Z_1 ∈ R^{1×p}, where p is the length of Z_1. Next, a CNN-based auto-encoder model is trained to extract a fixed-length one-dimensional projection image feature vector, denoted Z_2 ∈ R^{1×q}, where q is the length of Z_2. Finally, the two feature vectors are spliced to obtain the one-dimensional vector Z ∈ R^{1×(p+q)} used as the input of the next step, as in FIG. 2. In FIG. 2, the auto-encoder is a self-encoding network that trains a model using the input data as the output labels; the number of neurons in the middle layer is usually smaller than the number of features, and the output of the middle-layer neurons is used as the features, thereby achieving feature space transformation and dimensionality reduction.
3.1 auto-encoder model based on LSTM
The internal structure of the LSTM-based auto-encoder model is shown in FIG. 3; it includes an LSTM encoder (denoted LSTM Encoder in FIG. 3) and an LSTM decoder (denoted LSTM Decoder in FIG. 3). For a given behavior feature sequence B_TR = (b_1, b_2, …, b_i, …, b_T) (where b_1, b_2, …, b_i, …, b_T are the behavior features of the track points, i = 1, 2, …, T, and T denotes the number of track points in the track sequence), the input to the LSTM encoder is the sequence B_TR. The LSTM encoder reads the input sequence in order and updates the hidden layer state h_t accordingly; as shown in FIG. 3, the update is:
h_t = f_LSTM(h_{t−1}, b_t), where f_LSTM is the activation function and b_t is the element of the input sequence currently read by the LSTM encoder;
after the last track point b_T has been processed, the hidden layer state h_T serves as a low-dimensional implicit representation of the track sequence B_TR;
the LSTM decoder first generates c_1, using h_T as its initial hidden state, and then further generates (c_2, c_3, …, c_T); as shown in FIG. 3, the LSTM decoder is updated in the following manner:
c_t = f_LSTM(c_{t−1}), with the hidden state initialized to h_T;
where f_LSTM is the activation function;
the goal of the LSTM decoder is to reconstruct the input sequence B_TR. The LSTM decoder takes the output of the LSTM encoder as its input and the input of the LSTM encoder as its learning target; this is a self-supervised learning method that requires no additional label data. The LSTM encoder and LSTM decoder are trained by minimizing the reconstruction error between (b_1, b_2, …, b_i, …, b_T) and (c_1, c_2, …, c_i, …, c_T); the loss function of the LSTM-based auto-encoder model is the mean square error, calculated as:
L(B_TR, C) = (1/T) Σ_{i=1}^{T} ‖b_i − c_i‖^2.
Since the entire input sequence can be reconstructed by the LSTM decoder, the fixed-length vector Z_1 ∈ R^{1×p} set in this model can represent the input behavior feature sequence B_TR = (b_1, b_2, …, b_T) well.
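For illustration, a minimal PyTorch sketch of such an LSTM encoder-decoder (sequence-to-sequence auto-encoder) is given below; the layer sizes, the use of the final hidden state as Z_1, and the decoder unrolling are assumptions, not the patent's exact configuration.

```python
import torch
import torch.nn as nn

class LSTMAutoEncoder(nn.Module):
    """Reconstructs a behavior-feature sequence B_TR and exposes h_T as the fixed-length code Z_1."""

    def __init__(self, feat_dim=5, hidden_dim=64):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def forward(self, b):                      # b: (batch, T, feat_dim)
        _, (h_T, c_T) = self.encoder(b)        # h_T: (1, batch, hidden_dim)
        z1 = h_T.squeeze(0)                    # fixed-length representation Z_1
        T = b.size(1)
        # Feed the code at every step and let the decoder unroll it back into T outputs.
        dec_in = z1.unsqueeze(1).repeat(1, T, 1)
        dec_out, _ = self.decoder(dec_in, (h_T, c_T))
        return self.out(dec_out), z1

# Training minimizes the mean-squared reconstruction error between b and its reconstruction.
model = LSTMAutoEncoder()
b = torch.randn(8, 20, 5)                      # 8 tracks, 20 points, 5 behavior features per point
recon, z1 = model(b)
loss = nn.functional.mse_loss(recon, b)
```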
3.2 auto-encoder model based on CNN
CNN (Convolutional Neural Network) is a feedforward neural network with a deep structure that contains convolution computations; it is commonly used for image-related tasks such as image classification and image detection.
In FIG. 4, the CNN-based auto-encoder model takes each projection image sequence I as input; the output layer has the same size as the input layer, and the aim is to reconstruct the model's own input. The CNN encoder acts as a compression filter, converting the input I into a smaller latent representation Z_2, which the CNN decoder then tries to reconstruct into I′.
As shown in FIG. 4, the CNN-based auto-encoder model includes a CNN encoder (denoted CNN Encoder in FIG. 4) and a CNN decoder (denoted CNN Decoder in FIG. 4). The CNN encoder comprises 3 conv layers (convolutional layers), 1 reshape layer and 1 FC layer (fully connected layer, "FC" for short) connected in sequence, with a LeakyReLU activation function after each conv layer.
the conv layers in the CNN encoder are used to extract the image features of each element in the input vector I, and the parameters of each conv layer relate to the convolution kernel size, the number of convolution kernels and the step size. Taking the first conv layer in the CNN encoder as an example, conv specifically operates as follows,
Let the input data I of the first layer have size K × K, let the convolution kernel size be K_s × K_s, let the number of convolution kernels be num, and let the stride be s. The output data O of the first conv layer then has size
((K − K_s)/s + 1) × ((K − K_s)/s + 1) × num.
The last dimension of O is the channel c; with O_c denoting the data of O on the cth channel,
O_c = W_c I,
where W_c are hyper-parameters to be learned in the model.
Let O_c(i, j) denote the value of O on the cth channel at location (i, j), w_c(m, n) the value of W_c at location (m, n), and I(i, j) the value of the data I at location (i, j); then O_c(i, j) is calculated as
O_c(i, j) = Σ_{m=0}^{K_s−1} Σ_{n=0}^{K_s−1} w_c(m, n) · I(i·s + m, j·s + n).
after each convolutional layer in the CNN encoder, a LeakyReLU activation function is used, and the expression is
Figure BDA0003471495040000093
Wherein a ∈ (1, + ∞) fixed parameter
The reshape layer in the CNN encoder is used to change the size of the feature map output by the conv layer.
The FC layer in the CNN encoder is used to reduce the dimension of the input data: it reduces the data X input to the FC layer from M dimensions to M′ dimensions (M′ < M) to obtain Y, with the expression
Y = W X^T,
where X ∈ R^{1×M}, Y ∈ R^{M′×1}, W ∈ R^{M′×M}, and W is also a hyper-parameter to be learned in the model.
The CNN decoder includes 3 deconv layers (deconvolution layers), 1 reshape layer and 1 FC layer, with a LeakyReLU activation function after each deconv layer.
The FC layer in the CNN decoder is used to raise the dimension of the output data of the CNN encoder: it increases the input data X from M′ dimensions to M dimensions (M′ < M) to obtain Y, with the expression
Y = W_d X^T,
where X ∈ R^{1×M′}, Y ∈ R^{M×1}, W_d ∈ R^{M×M′}, and W_d is also a hyper-parameter to be learned by the CNN decoder.
The reshape layer in the CNN decoder is used to change the size of the feature map after it has been raised in dimension by the FC layer. This layer converts the input data into data of dimension M × N × C: with input data X ∈ R^{1×M} and output data Y ∈ R^{M×N×C}, the value Y(m, n, c) at position (m, n, c) in the data Y is calculated as
Y(m, n, c) = X(0, i), where i = mn + c.
The 3 deconv layers in the CNN decoder are used to reconstruct the feature map output by the reshape layer into I′, and the parameters of each deconv layer likewise concern the convolution kernel size, the number of convolution kernels and the stride. Taking the last deconv layer of the CNN decoder as an example, the deconv operation is as follows.
The input data I has size K × K × C, the convolution kernel size is K_s × K_s, the number of convolution kernels is 1, and the stride is s; the output data O then has size
((K − 1)·s + K_s) × ((K − 1)·s + K_s) × 1.
Then
O = W_d I,
where W_d is a hyper-parameter to be learned in the CNN decoder model.
Let O(i, j) denote the value of the output data O at position (i, j), w(m, n, c) the value of the coefficient matrix W_d at position (m, n, c), and I(i, j, c) the value of the data I at position (i, j, c); O(i, j) is then computed by the deconvolution (transposed convolution) of I with W_d over the kernel positions m = 0, 1, …, K_s − 1 and n = 0, 1, …, K_s − 1 and over the channels c.
The loss function of the CNN-based auto-encoder model is the mean square error, calculated as:
L(I, I′) = ‖I − I′‖^2, where L(I, I′) denotes the loss function.
The output Z_2 ∈ R^{1×q} of the CNN encoder is the input of the CNN decoder and is also the length-q feature vector representation of the projection images to be acquired in this step.
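For illustration, a minimal PyTorch sketch of such a conv/reshape/FC encoder with a mirrored FC/reshape/deconv decoder is given below; the image size, channel counts, kernel sizes and code length q are assumptions, not the patent's exact configuration.

```python
import torch
import torch.nn as nn

class CNNAutoEncoder(nn.Module):
    """Compresses a projection image into a latent code Z_2 and reconstructs it."""

    def __init__(self, in_ch=1, size=64, q=128):
        super().__init__()
        feat = 32 * (size // 8) * (size // 8)          # feature size after three stride-2 convs
        self.encoder = nn.Sequential(                  # 3 conv layers + reshape (Flatten) + FC
            nn.Conv2d(in_ch, 8, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Flatten(),
            nn.Linear(feat, q),
        )
        self.decoder_fc = nn.Linear(q, feat)           # FC layer: raise the dimension back
        self.decoder = nn.Sequential(                  # 3 deconv layers reconstruct I'
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(8, in_ch, 3, stride=2, padding=1, output_padding=1),
        )
        self.size = size

    def forward(self, x):                              # x: (batch, in_ch, size, size)
        z2 = self.encoder(x)                           # latent representation Z_2
        h = self.decoder_fc(z2).view(-1, 32, self.size // 8, self.size // 8)  # reshape layer
        return self.decoder(h), z2

model = CNNAutoEncoder()
imgs = torch.rand(4, 1, 64, 64)                        # 4 grayscale projection images
recon, z2 = model(imgs)
loss = nn.functional.mse_loss(recon, imgs)             # mean-squared reconstruction error
```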
3.3 splicing
The feature representations Z_1(p) and Z_2(q) obtained in (3.1) and (3.2) are spliced to obtain Z(p + q); the calculation formula is as follows, where "+" denotes splicing (concatenation):
Z(p + q) = Z_1(p) + Z_2(q) = [z_{1,0}, z_{1,1}, …, z_{1,p−1}] + [z_{2,0}, z_{2,1}, …, z_{2,q−1}] = [z_{1,0}, z_{1,1}, …, z_{1,p−1}, z_{2,0}, z_{2,1}, …, z_{2,q−1}].
Step 4, clustering behavioral intention based on DBSCAN
For DBSCAN-based behavior intention clustering, the feature representation Z(p + q) obtained in step 3 is used as the algorithm input, and the DBSCAN algorithm is used for clustering to obtain the behavior intention categories. DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a density-based spatial clustering algorithm. The algorithm characterizes how closely samples are distributed based on a pair of "neighborhood" parameters (ε, MinPts). Given a dataset D = {x_1, x_2, …, x_m}, the following concepts are defined and illustrated:
(a) ε-neighborhood: for x_j ∈ D, its ε-neighborhood contains the samples in D whose distance from x_j is not more than ε, i.e. N_ε(x_j) = {x_i ∈ D | dist(x_i, x_j) ≤ ε}, as shown by the dashed circle in FIG. 5;
(b) core object: if the ε-neighborhood of x_j contains at least MinPts samples, i.e. |N_ε(x_j)| ≥ MinPts, then x_j is a core object, such as x_1 in FIG. 5;
(c) directly density-reachable: if x_j lies in the ε-neighborhood of x_i and x_i is a core object, then x_j is said to be directly density-reachable from x_i, as x_2 is from x_1 in FIG. 5;
(d) density-reachable: for x_i and x_j, if there exists a sample sequence p_1, p_2, …, p_n with p_1 = x_i and p_n = x_j such that p_{i+1} is directly density-reachable from p_i, then x_j is said to be density-reachable from x_i, as x_3 is from x_1 in FIG. 5;
(e) density-connected: for x_i and x_j, if there exists x_k such that x_i and x_j are both density-reachable from x_k, then x_i and x_j are said to be density-connected, as x_3 and x_4 are in FIG. 5.
FIG. 6 is a schematic diagram of the cluster analysis process of the DBSCAN algorithm as implemented in code. As shown in FIG. 6, in the present invention, the two core steps of the DBSCAN algorithm for cluster analysis of the aircraft feature vectors are:
(1) finding all core objects according to the given neighborhood parameters (ε, MinPts), as shown in lines 1-7 of FIG. 6;
(2) starting from any core object, finding the cluster generated by the samples density-reachable from it, until all core objects have been visited, as shown in lines 10-24 of FIG. 6.
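For illustration, a minimal sketch of this clustering step using scikit-learn's DBSCAN is given below; the ε and MinPts values and the feature dimensions are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Z: array of shape (num_aircraft, p + q), the spliced feature vectors from step 3.
Z = np.random.rand(100, 160)                    # placeholder feature matrix

clustering = DBSCAN(eps=0.5, min_samples=5)     # (eps, min_samples) correspond to (epsilon, MinPts)
labels = clustering.fit_predict(Z)              # behavior intention category per aircraft; -1 = noise
print(np.unique(labels))
```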
Step 5, Darknet-53-based behavior intention multi-classification model
The behavior feature representations Z_1 ∈ R^{1×p} corresponding to the track sequences of the different aircraft, obtained in 3.1 of step 3, and the projection images I_2 ∈ R^{K×K}, obtained in 2.2 of step 2, are processed into I ∈ R^{K×K×3}, which is used as the input of the multi-classification model; the corresponding cluster categories obtained in step 4 are used as the output labels of the multi-classification model, and the multi-classification model for recognizing the behavior intention of the aircraft is trained based on the Darknet-53 network.
Two steps 5.1, 5.2 of training the behavioral intent multi-classification model are specifically set forth below in conjunction with fig. 7:
5.1 data processing
(1) The behavior feature representation Z_1(1×p) is converted into Z_1′(1×p) using the min-max normalization method, with the following transfer function:
Z_1′ = (Z_1 − min) / (max − min),
where max is the maximum value of the sample data and min is the minimum value of the sample data.
(2) Let K be the side length of the target matrix, determined from the length p of Z_1′.
Z_1′ is then converted to obtain I_1 ∈ R^{K×K}; the mth row of I_1 is calculated as
I_1(m) = A + B,
where "+" is the same splicing operation as in 3.3 of step 3,
A is one-dimensional data consisting of m zero elements, e.g. [0, 0, …, 0],
and B = [z′_{1,m}, z′_{1,m+1}, …, z′_{1,K−1}].
(3) The I_1 ∈ R^{K×K} obtained in (2) is spliced with the two projection images I_2 ∈ R^{K×K} to obtain I ∈ R^{K×K×3}, which can be viewed approximately as an image with three channels.
(4) The training data are expanded by applying one or more image data augmentation techniques to the image, such as flipping, rotation, scaling, cropping and translation.
(5) The data set is divided in the ratio 6:2:2 to obtain training, test and validation data sets.
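For illustration, a rough sketch of steps (1) to (3) of this data processing is given below (min-max normalization, packing Z_1′ row by row into a K×K matrix with m leading zeros in row m, and stacking with the two projection images); the choice of K and the exact packing are assumptions where the original equations are not reproduced.

```python
import numpy as np

def features_to_image(z1, proj_a, proj_b):
    """Build the 3-channel input I from the trajectory feature vector and two projection images.

    z1: 1-D behavior feature vector Z_1 of length p.
    proj_a, proj_b: the two K x K projection images I_2 of the same aircraft.
    """
    z = (z1 - z1.min()) / (z1.max() - z1.min() + 1e-9)   # min-max normalization
    K = proj_a.shape[0]
    I1 = np.zeros((K, K), dtype=np.float32)
    for m in range(K):
        row = z[m:K]                                     # assumed slice z'_{1,m} .. z'_{1,K-1}
        I1[m, m:m + len(row)] = row                      # m leading zeros, then the slice
    return np.stack([I1, proj_a, proj_b], axis=-1)       # I in R^{K x K x 3}

# Example with placeholder data.
z1 = np.random.rand(200)
proj_top = np.random.rand(64, 64)
proj_side = np.random.rand(64, 64)
I = features_to_image(z1, proj_top, proj_side)
print(I.shape)  # (64, 64, 3)
```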
5.2 Darknet-53-based Multi-Classification model
The training set data and validation set data obtained in step 5.1 are read into a Darknet-53 network, and the network is trained using the categories obtained by clustering in step 4 as labels and a cross-entropy loss function. For input data I ∈ R^{K×K×3}, the parameters and output data of each layer of the Darknet-53 network are shown in Table b below, where:
stride is the step length, kernel is the filter dimension, channel is the number of filters, and class_num is the total number of classes of the classification model;
1×, 2×, 8×, 4× in the Number column indicate that the module is repeated 1, 2, 8 and 4 times, respectively;
conv layer denotes a convolutional layer; Residual Block denotes a residual layer;
global avgpool denotes global average pooling;
FC layer denotes a fully connected layer;
the third dimension of the data denotes the number of channels, e.g. the "32" in K × K × 32 is the number of channels; Class_N is the total number of cluster categories obtained in step 4;
the activation function is LeakyReLU after all convolutional layers except the output layer, which uses softmax.
(1) conv layer
In Table b, the input of the first conv layer is the data I ∈ R^{K×K×3} obtained in 5.1(3); here the convolution kernel size is 3 × 3, the stride is 1, and the output has size
K × K × 32.
Then, for each output channel, the calculation formula of Y_c is
Y_c = W_c X_0,
where X_0 is the input data of dimension K × K × 3; W_c is the 3 × 3 × 3 coefficient matrix on the cth channel; and Y_c is the K × K data on the cth channel.
Let Y_c(i, j) denote the data of Y_c at row i and column j, x_0(m, n, l) the data of X_0 at position (m, n, l), and w_c(m, n, l) the data at position (m, n, l) of the W array on the cth channel; then Y_c(i, j) is calculated as
Y_c(i, j) = Σ_m Σ_n Σ_l w_c(m, n, l) · x_0(i + m, j + n, l).
(2) Activation function
Given a set of inputs to a neuron in a neural network, its activation function defines the outputs. The Darknet-53 network used in this patent employs a softmax activation function at the output layer; let z denote the vector of inputs to the output layer.
The expression of the softmax activation function is
softmax(z)_c = e^{z_c} / Σ_{j=1}^{Class_N} e^{z_j},
and all convolutional layers other than the output layer are activated using LeakyReLU.
Table b: parameters and output data of each layer of the Darknet-53 network.
(3) Residual Block
The input and output of a Residual Block have the same size and number of channels. The Residual Block comes from the residual network ResNet; its structure is shown in FIG. 8 and includes two branches: an identity mapping and a residual branch. The solid circle with a plus sign in the figure represents the skip connection, whose formula is defined as follows:
x_{t+1} = F_t(x_t) + x_t,
where x_t and x_{t+1} are the input and output vectors of the tth residual block, respectively, and F_t(x_t) denotes the transfer function, corresponding to the branch formed by stacked convolution and LeakyReLU layers. A deep residual network composed in this way allows information to flow easily and is easy to train.
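For illustration, a minimal PyTorch sketch of such a residual block is given below; the Darknet-style 1×1/3×3 channel arrangement and the use of batch normalization are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Darknet-style residual block: x_{t+1} = F_t(x_t) + x_t."""

    def __init__(self, channels):
        super().__init__()
        self.residual = nn.Sequential(              # residual branch F_t
            nn.Conv2d(channels, channels // 2, 1, bias=False),
            nn.BatchNorm2d(channels // 2), nn.LeakyReLU(0.1),
            nn.Conv2d(channels // 2, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return self.residual(x) + x                 # identity mapping + residual branch

block = ResidualBlock(64)
y = block(torch.randn(1, 64, 32, 32))               # output has the same size and channel count
```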
(4) FC layer
The FC layer plays the role of a classifier in the Darknet-53 network, mapping the learned distributed feature representation to the sample class space. The core fully-connected operation is the matrix-vector product:
Y = W X^T,
where W is a Class_N × 1024 coefficient matrix, X is the 1 × 1024 matrix output by the global avgpool layer, Y is a Class_N × 1 matrix, and X^T denotes the transpose of X.
(5) global avgpool layer
global avgpool is also called the global average pooling layer (GAP for short). The input of the GAP layer is the output of the previous layer, namely (K/32) × (K/32) × 1024; the kernel is (K/32) × (K/32) with stride 1, so the average value over each (K/32) × (K/32) region is taken as the output, giving an output of size 1 × 1024. GAP not only avoids the overfitting risk brought by full connection but also reduces the number of parameters.
(6) Cross entropy loss function
The expression of the cross-entropy loss function is
L = −(1/Sample_N) Σ_{i=1}^{Sample_N} Σ_{c=1}^{Class_N} y_{ic} · log(p_{ic}),
where: Class_N denotes the number of categories of aircraft behavior intentions;
Sample_N denotes the total number of samples participating in training;
y_{ic} is an indicator function (taking the value 0 or 1): it is 1 if the true category of sample i equals c, and 0 otherwise;
p_{ic} is the predicted probability that the observed sample i belongs to class c.
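For illustration, a minimal sketch of computing this loss during training is given below; PyTorch's CrossEntropyLoss applies softmax internally, so the network's raw outputs are passed directly, and the sample and class counts are placeholders.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()                 # multi-class cross-entropy as defined above

logits = torch.randn(16, 7, requires_grad=True)   # Sample_N = 16 samples, Class_N = 7 categories
labels = torch.randint(0, 7, (16,))               # cluster categories from step 4 used as labels
loss = criterion(logits, labels)
loss.backward()                                   # gradients for one training step
```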
In conclusion, the invention makes full use of the trajectory data of each modality of the aircraft: it can not only mine the behavior intention categories of aircraft but also recognize the behavior intention of a new trajectory segment. The input of the DBSCAN-based behavior category clustering adopted by the invention includes not only text data (comprising the time, longitude, latitude, speed and altitude dimensions) but also the projection images of the aircraft trajectory onto the ground. To judge the flight behavior intention of a new segment of aircraft trajectory, the invention further adds a CNN-based multi-classification deep learning model and uses the behavior cluster categories as the label data of the classification algorithm, thereby saving manual labeling costs.
It should be understood that the above-described embodiments are merely preferred embodiments of the invention and illustrate the technical principles applied. It will be understood by those skilled in the art that various modifications, equivalents and changes can be made to the present invention; such variations fall within the scope of the invention as long as they do not depart from its spirit. In addition, certain terms used in the specification and claims of the present application are not limiting but are used merely for convenience of description.

Claims (9)

1. An aircraft behavior intention recognition method based on multi-mode deep learning is characterized by comprising the following steps:
s1, acquiring flight path data of the aircraft, wherein the flight path data comprises flight data of each path point and projection image data of the aircraft projected to the ground at different angles on each path point, sequencing the flight data of each path point into a path sequence according to a flight time sequence, and sequencing each projection image projected to the ground at the same angle on each path point of the aircraft into a projection image sequence;
s2, respectively extracting the behavior characteristic vector of the track sequence and the projection image characteristic vector of each projection image sequence;
s3, performing cluster analysis on the feature vectors associated with the different aircraft extracted in step S2 to obtain the behavior intention category of each aircraft;
s4, training and forming a behavior intention multi-classification model, taking as model training samples the feature vectors associated with each aircraft extracted in step S2 and the behavior intention category of the corresponding aircraft obtained in step S3;
and S5, inputting the acquired flight trajectory data associated with the aircraft into the behavior intention multi-classification model, and outputting a behavior intention identification result of the aircraft by the model.
2. The multi-modal deep learning-based aircraft behavioral intention recognition method according to claim 1, characterized in that the data dimensions of each of the trajectory points in the trajectory sequence include time of flight, altitude of flight, speed of flight, and longitude and latitude of the trajectory point.
3. The method for recognizing aircraft behavior intention based on multi-modal deep learning according to claim 1, wherein in step S2, the trajectory behavior feature corresponding to the ith track point in the track sequence, taken as the object of behavior feature extraction, is recorded as b_i, and b_i comprises the flight position variation Δd_i, the flight speed v_i, the flight angle θ_i, the flight speed variation Δv_i, and the flight angle variation Δθ_i of the aircraft at the ith track point, i.e. b_i = (Δd_i, v_i, θ_i, Δv_i, Δθ_i).
4. The method for recognizing behavioral intention of aircraft based on multi-modal deep learning according to claim 3,
wherein Δd_i, v_i, θ_i, Δv_i and Δθ_i are calculated by the following equations (1) to (5), respectively:
Δd_i = √((lat_i − lat_{i−1})^2 + (lon_i − lon_{i−1})^2 + (h_i − h_{i−1})^2)    (1)
v_i = Δd_i / (t_i − t_{i−1})    (2)
θ_i = arctan((lat_i − lat_{i−1}) / (lon_i − lon_{i−1}))    (3)
Δv_i = v_i − v_{i−1}    (4)
Δθ_i = θ_i − θ_{i−1}    (5)
in equations (1) to (5), lat_i and lon_i denote the latitude and longitude of the ith track point; lat_{i−1} and lon_{i−1} denote the latitude and longitude of the (i−1)th track point, which precedes the ith track point in the track sequence; h_i and h_{i−1} denote the altitudes of the ith and (i−1)th track points; and t_i and t_{i−1} denote the times at which the aircraft flies to the ith and (i−1)th track points.
5. The method for recognizing the behavioral intention of the aircraft based on multi-modal deep learning according to claim 1, wherein two projection images are obtained for the aircraft at each track point, one by projecting parallel to the ground and one by projecting perpendicular to the ground.
6. The method for recognizing aircraft behavior intention based on multi-modal deep learning as claimed in any one of claims 1 to 5, wherein in step S2, the behavior features of the track sequence are extracted by using a pre-trained LSTM-based auto-encoder model, and the method for extracting the behavior features of the track sequence based on the LSTM-based auto-encoder model comprises the following steps:
the auto-encoder model based on the LSTM comprises an LSTM encoder and an LSTM decoder, and the line characteristic sequence of the track sequence is BTR=(b1,b2,…,bi,…,bT) Wherein b is1,b2,…,bi,…,bTSetting i to be 0,1, …, and T to represent the number of the track points in the track sequence; the input to the LSTM encoder is a sequence BTRThe LSTM encoder reads the input sequence in order and updates the hidden layer state h accordinglytThe updating method is as follows:
ht=fLSTM(ht-1,bt),fLSTMis an activation function, btIs an element in the input sequence currently read by the LSTM encoder;
at the last locus bTAfter being processed, the layer state h is impliedTAs said track sequence BTRA low-dimensional implicit representation of;
the LSTM decoder first starts with hTGenerating c as an initialized implicit State1And then further generates (c)2,c3,…,cT) The updating mode of the LSTM decoder is as follows:
c_t = f_LSTM(c_{t−1}), with the hidden state initialized to h_T;
where f_LSTM is the activation function;
the goal of the LSTM decoder is to reconstruct the input sequence B_TR; the LSTM encoder and the LSTM decoder are trained by minimizing the reconstruction error between (b_1, b_2, …, b_i, …, b_T) and (c_1, c_2, …, c_i, …, c_T); the loss function of the LSTM-based auto-encoder model is the mean square error, calculated as:
L(B_TR, C) = (1/T) Σ_{i=1}^{T} ‖b_i − c_i‖^2.
7. the method for recognizing aircraft behavior intention based on multi-modal deep learning as claimed in any one of claims 1 to 5, wherein in step S2, the pre-trained auto-encoder model based on CNN is used to extract the behavior characteristics of the projection image sequence, and the method for extracting the behavior characteristics of the projection image sequence based on CNN is as follows:
the CNN-based auto-encoder model includes a CNN encoder and a CNN decoder, with the projection image sequence being I, the aim of the CNN encoder being to convert an input vector I into a potential representation Z2The aim of the CNN decoder is to represent the potential representation Z2The mixture is reconstructed into I',
the CNN encoder comprises 3 conv layers, 1 reshape layer and 1 FC layer which are sequentially connected, a LeakyRelu activation function is adopted behind each conv layer, the conv layers in the CNN encoder are used for extracting image features of each element in an input vector I, the reshape layers in the CNN encoder are used for changing the size of a feature map output by the conv layers, and the FC layers in the CNN encoder are used for reducing the dimension of input data;
the CNN decoder comprises 3 deconv layers, 1 reshape layer and 1 FC layer, a LeakyRelu activation function is adopted after each deconv layer, and the FC layer in the CNN decoder is used for performing dimensionality raising on output data of the CNN encoder; the reshape layer in the CNN decoder is used for changing the size of the feature map after being subjected to dimension upgrading by the FC layer; the 3 deconv layers in the CNN decoder are used for reconstructing the feature map output by the reshape layer into I';
the loss function of the CNN-based auto-encoder model is the mean square error, calculated as:
L(I, I′) = ‖I − I′‖^2, where L(I, I′) denotes the loss function.
8. The method for recognizing the behavioral intention of the aircraft based on multi-modal deep learning according to claim 1, wherein in step S3, the feature vectors associated with the different aircraft extracted in step S2 are subjected to cluster analysis by a DBSCAN density clustering algorithm.
9. The method for recognizing the behavioral intention of the aircraft based on the multi-modal deep learning according to claim 1, wherein in the step S4, the method for training the behavioral intention multi-classification model comprises the steps of:
L1, the behavior feature of the trajectory sequence extracted in step S2 is denoted Z_1(p); min-max normalization converts Z_1(p) into z_1(p)′, with the transfer function:
z_1(p)′ = (Z_1(p) − min) / (max − min),
where max represents the maximum value of the sample data and min represents the minimum value of the sample data;
let K be the side length of the target matrix, determined from the length p of z_1(p)′;
z′ is then converted to obtain I_1 ∈ R^{K×K}, where the mth row of I_1 is calculated as:
I_1(m) = A + B,
where A represents one-dimensional data consisting of m zero elements,
B = [z′_{1,m}, z′_{1,m+1}, …, z′_{1,K−1}],
and p represents the length of z_1(p);
L2, I_1 ∈ R^{K×K} and the two projection images I_2 ∈ R^{K×K} associated with the same aircraft are spliced to obtain I ∈ R^{K×K×3};
L3, performing data augmentation on the image I using any one or more of image flipping, rotation, scaling, cropping and translation to expand the model training samples, and dividing the model training samples proportionally into a training data set, a test data set and a validation data set;
L4, reading the training data set and the test data set into a Darknet-53 network, and training the network using the categories obtained by clustering in step S3 as labels and a cross-entropy loss function, finally forming the behavior intention multi-classification model.
CN202210044234.0A 2022-01-14 2022-01-14 Multi-mode deep learning-based aircraft behavior intention recognition method Active CN114358211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210044234.0A CN114358211B (en) 2022-01-14 2022-01-14 Multi-mode deep learning-based aircraft behavior intention recognition method


Publications (2)

Publication Number Publication Date
CN114358211A true CN114358211A (en) 2022-04-15
CN114358211B CN114358211B (en) 2022-08-23

Family

ID=81090499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210044234.0A Active CN114358211B (en) 2022-01-14 2022-01-14 Multi-mode deep learning-based aircraft behavior intention recognition method

Country Status (1)

Country Link
CN (1) CN114358211B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2040137A1 (en) * 2007-09-21 2009-03-25 The Boeing Company Predicting aircraft trajectory
CN103336863A (en) * 2013-06-24 2013-10-02 北京航空航天大学 Radar flight path observation data-based flight intention recognition method
EP2685440A1 (en) * 2012-07-09 2014-01-15 The Boeing Company Using aircraft trajectory data to infer aircraft intent
US20160314692A1 (en) * 2015-04-22 2016-10-27 The Boeing Company Data driven airplane intent inferencing
US20170024899A1 (en) * 2014-06-19 2017-01-26 Bae Systems Information & Electronic Systems Integration Inc. Multi-source multi-modal activity recognition in aerial video surveillance
US20180144647A1 (en) * 2015-05-08 2018-05-24 Bombardier Inc. Systems and methods for assisting with aircraft landing
CN109508812A (en) * 2018-10-09 2019-03-22 南京航空航天大学 A kind of aircraft Trajectory Prediction method based on profound memory network
EP3459857A1 (en) * 2017-09-22 2019-03-27 Aurora Flight Sciences Corporation Systems and methods for monitoring pilot health
CN110111608A (en) * 2019-05-15 2019-08-09 南京莱斯信息技术股份有限公司 Method based on radar track building machine level ground scene moving target operation intention assessment
CN110751099A (en) * 2019-10-22 2020-02-04 东南大学 Unmanned aerial vehicle aerial video track high-precision extraction method based on deep learning
US20200082731A1 (en) * 2017-07-17 2020-03-12 Aurora Flight Sciences Corporation System and Method for Detecting Obstacles in Aerial Systems
CN112488061A (en) * 2020-12-18 2021-03-12 电子科技大学 Multi-aircraft detection and tracking method combined with ADS-B information
CN112947541A (en) * 2021-01-15 2021-06-11 南京航空航天大学 Unmanned aerial vehicle intention track prediction method based on deep reinforcement learning


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YEPES J L ET AL: "New Algorithms for Aircraft Intent Inference and Trajectory Prediction", 《JOURNAL OF GUIDANCE CONTROL & DYNAMICS》 *
罗艺 et al.: "A method for predicting the attack intention of hypersonic aircraft", Journal of Xidian University (西安电子科技大学学报) *
袁利: "Intelligent autonomous control technology for spacecraft in uncertain environments", Journal of Astronautics (宇航学报) *

Also Published As

Publication number Publication date
CN114358211B (en) 2022-08-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant