CN108920993B - Pedestrian posture recognition method and system based on radar and multi-network fusion

Publication number: CN108920993B (granted publication of CN108920993A)
Application number: CN201810247528.7A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 张道明, 高元正, 龙希
Assignee: Wuhan Radardoctor Electronic Science And Technology Co ltd
Application filed by: Wuhan Radardoctor Electronic Science And Technology Co ltd
Legal status: Active

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
            • G06F 2218/12 Classification; Matching
          • G06F 18/00 Pattern recognition
            • G06F 18/20 Analysing
              • G06F 18/25 Fusion techniques
                • G06F 18/253 Fusion techniques of extracted features
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
                • G06N 3/045 Combinations of networks
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V 40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition

Abstract

The invention relates to a pedestrian posture recognition method and system based on radar and multi-network fusion. The method comprises: preprocessing the echo signal of a radar signal to obtain an output signal; suppressing stationary targets in the output signal; searching the suppressed output signal for the range bin in which the pedestrian is located; performing time-frequency analysis on the echo signal corresponding to that range bin to obtain an echo-signal time-frequency diagram; recognizing the time-frequency diagram with each of a plurality of convolutional neural networks to obtain a recognition result from each network; and fusing the recognition results of the networks to obtain a fused posture recognition result. By analyzing radar echoes, the method gives the pedestrian posture recognition result in real time, and by fusing multiple neural networks it ensures high recognition accuracy. It is not disturbed by factors such as illumination, weather, or smoke, and can work around the clock in all weather.

Description

Pedestrian posture recognition method and system based on radar and multi-network fusion
Technical Field
The invention relates to the technical field of radar signal processing and image recognition, and in particular to a pedestrian posture recognition method and system based on radar and multi-network fusion.
Background
In the prior art, pedestrian posture recognition mostly uses an optical camera to acquire images, which are then processed by image recognition. This approach is disturbed by environmental factors such as illumination conditions, weather, and smoke, so its recognition effect degrades greatly and its accuracy is low; at night, recognition may even fail entirely, and stability is very poor. This approach therefore cannot meet the demanding requirement of all-day, 24-hour operation.
Disclosure of Invention
The invention aims to solve the above technical problem of the prior art and provides a pedestrian posture recognition method and system based on radar and multi-network fusion.
The technical scheme of the invention for solving this technical problem is as follows.
According to one aspect of the invention, a pedestrian posture recognition method based on radar and multi-network fusion is provided, comprising the following steps:
Step 1: preprocess the echo signal of the radar signal to obtain an output signal;
Step 2: suppress stationary targets in the output signal;
Step 3: search the suppressed output signal for the range bin in which the pedestrian is located;
Step 4: perform time-frequency analysis on the echo signal corresponding to the pedestrian's range bin to obtain an echo-signal time-frequency diagram;
Step 5: recognize the echo-signal time-frequency diagram with each of a plurality of convolutional neural networks to obtain a recognition result from each network;
Step 6: fuse the recognition results of the convolutional neural networks to obtain a fused posture recognition result.
The invention has the following beneficial effects. The pedestrian posture recognition method based on radar and multi-network fusion gives the posture recognition result in real time by analyzing radar echoes, and the fusion of multiple neural networks ensures high recognition accuracy. Because the method is radar-based, it is, unlike an optical camera, not disturbed by factors such as illumination, weather, or smoke, and can work around the clock in all weather. The method is readily realizable in engineering, recognizes pedestrian postures effectively and accurately, is robust to environmental factors, and gives the system good overall robustness.
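For orientation, the six steps can be sketched end to end as a processing pipeline. This is a minimal illustration in Python/NumPy under simplifying assumptions (random data in place of real radar echoes, mock classifier outputs in place of trained networks); all function and variable names are illustrative, not from the patent.

```python
import numpy as np

def preprocess(echo, ref):
    # step 1: dechirp (multiply by the conjugate reference), then range FFT
    return np.fft.fft(echo * np.conj(ref), axis=1)

def suppress_static(s_rm, w=(1.0, -2.0, 1.0)):
    # step 2: three-pulse canceller along slow time (rows = pulses)
    return w[0] * s_rm[:-2] + w[1] * s_rm[1:-1] + w[2] * s_rm[2:]

def find_range_bin(s_bs):
    # step 3: range bin with the largest accumulated echo energy
    return int(np.argmax(np.sum(np.abs(s_bs) ** 2, axis=0)))

def time_frequency(sig, win=64, hop=8):
    # step 4: sliding-window (short-time) Fourier transform magnitude
    frames = [sig[i:i + win] for i in range(0, len(sig) - win + 1, hop)]
    return np.abs(np.fft.fft(np.array(frames), axis=1))

def fuse(results):
    # steps 5-6: keep the posture category with the highest probability
    return max(results, key=lambda r: r[1])[0]

rng = np.random.default_rng(0)
echo = rng.standard_normal((128, 256)) + 1j * rng.standard_normal((128, 256))
ref = np.exp(1j * np.pi * (np.arange(256) / 256.0) ** 2)  # stand-in chirp
s_rm = preprocess(echo, ref)
s_bs = suppress_static(s_rm)        # 126 pulses remain after cancellation
n0 = find_range_bin(s_bs)
tf_map = time_frequency(s_bs[:, n0])
posture = fuse([("normal walking", 0.91), ("falling", 0.33)])  # mock CNN outputs
```

Each stage is elaborated, with the patent's formulas, in the further schemes below.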
On the basis of the technical scheme, the invention can be further improved as follows:
further: in step 1, the preprocessing the echo signal of the radar signal specifically includes:
step 11: performing frequency modulation removal processing on the echo signal, specifically as follows:
assuming a radar signal s t (τ) is represented as follows:
s t (τ)=exp{jπ(2f c τ+γτ 2 )}
wherein f is c Is the emission frequency, tau is the fast time, t is the slow time, gamma is the frequency modulation frequency;
echo signal s corresponding to the radar signal r (t, τ) is expressed as:
s r (t,τ)=A rm exp{jπ(2f c (τ-t d (t))+γ(τ-t d (t)) 2 )}τ∈(0,T p ]
Figure GDA0003648894150000021
wherein A is rm Is a constant amplitude, T p Is one period of frequency modulation, t d (t) is the time delay, c is the speed of light, a 0 、a 1 Respectively are the motion parameters of the target;
then, the echo signal is subjected to frequency modulation removal processing, and the calculation formula is as follows:
Figure GDA0003648894150000031
Figure GDA0003648894150000032
s 0 (t, τ) represents the echo signal after the dechirp process, s r (t, τ) represents an echo signal of the radar signal,
Figure GDA0003648894150000033
representing the fast time phase of the dechirped echo signal,
Figure GDA0003648894150000034
representing the reference signal s of the demodulation frequency ref (τ) conjugate signal;
step 12: the echo wave after frequency modulation is removedSignal s 0 (t, τ) performing a fourier transform;
step 13: for the echo signal s after Fourier transform rm (t, f) discrete sampling to obtain said output signal, denoted s rm (M, N), let T be M Δ T, f be N Δ f, where Δ T, Δ f are sampling intervals, M is a slow time index, and M be 0, 1, 2, … M, M is a slow time acquisition pulse number, N is a fast time index, and N be 0, 1, 2, … N, N is a number of fast time sampling points.
The beneficial effects of the further scheme are as follows: through right echo signal removes frequency modulation processing, has realized the conversion of radio frequency signal to baseband signal, has reduced signal sampling rate demand, and then makes acquisition hardware system realize more easily, through removing the echo signal after frequency modulation processing and carrying out Fourier transform, has realized pulse compression processing, makes the target echo energy accumulation of single pulse, and then can acquire the position and the echo energy information of object in the radar scene.
Further: in step 2, stationary-target suppression is applied to the output signal with a three-pulse cancellation method along the slow-time dimension $t$, and the suppressed output signal is expressed as $s_{bs}(m',n)$:

$$s_{bs}(m',n) = \sum_{i=0}^{2} w_i\, s_{rm}(m'+i,\, n)$$

where $w_i$ ($i = 0, 1, 2$) are the weights of the third-order pulse canceller, $s_{rm}(m',n)$ denotes the dechirped output signal, and $m' = 0, 1, \ldots, M-3$ denotes the slow-time index of the suppressed output signal.
The beneficial effects of this further scheme are as follows: suppressing stationary targets in the output signal removes the features of stationary objects in the scene that are not used for recognition (such as walls), which improves the signal-to-noise ratio; multi-pulse accumulation of the echo energy corresponding to the pedestrian further improves the signal-to-noise ratio, so the features of the pedestrian to be recognized stand out, which facilitates subsequent posture recognition and improves recognition precision and efficiency.
Further: step 3 is specifically implemented as follows.
Step 31: sum the suppressed output signal $s_{bs}(m',n)$ along the slow-time dimension to obtain the echo energy sequence $s_e(n)$:

$$s_e(n) = \sum_{m'=0}^{M-3} \left| s_{bs}(m',n) \right|^2$$

where $M$ is the total number of echo pulses.
Step 32: select the range bin $n_0$ with the largest energy in the echo energy sequence $s_e(n)$, and mark the $n_0$-th range bin as the range bin in which the pedestrian is located.
The beneficial effects of this further scheme are as follows: by searching for the pedestrian's range bin, the echo signal corresponding to the pedestrian can conveniently be extracted from the whole output signal.
Further: step 4 is specifically implemented as follows:
select the suppressed output signal $s_{bs}(m', n_0)$ corresponding to the range bin $n_0$ in which the pedestrian is located, and apply a short-time Fourier transform to $s_{bs}(m', n_0)$ in a sliding-window manner to obtain the echo-signal time-frequency diagram.
The beneficial effects of this further scheme are as follows: applying the short-time Fourier transform to the suppressed output signal extracts the pedestrian's micro-motion characteristic signal, so that the pedestrian's posture can subsequently be recognized from the pedestrian's micro-motion features.
Further: step 5 specifically comprises:
Step 51: initialize the parameters of the plurality of convolutional neural networks and the pedestrian posture classification information;
Step 52: obtain time-frequency diagrams of sample pedestrian postures, together with labels of each sample pedestrian's current posture, through sample-pedestrian posture experiments; train the convolutional neural networks with the pedestrian posture classification information, adjusting the parameters of each network by batch gradient descent until each network's posture classification result matches the pedestrian's posture; then store the parameters of each network;
Step 53: recognize and classify the echo-signal time-frequency diagram of the target pedestrian with the trained convolutional neural networks to obtain the posture recognition result output by each network.
The posture recognition result output by a convolutional neural network comprises a pedestrian posture category and the probability corresponding to that category.
The beneficial effects of this further scheme are as follows: training the plurality of convolutional neural networks on a pedestrian posture sample library yields the weight coefficients of each network; the trained networks then recognize the echo-signal time-frequency diagram corresponding to the target pedestrian, so the human micro-motion time-frequency characteristic signal is obtained in real time and the current motion posture category of the human body and its corresponding probability are recognized.
Further: step 6 is specifically implemented as follows:
read the posture recognition result of each convolutional neural network and take the posture category corresponding to the largest probability value as the fused recognition result.
The beneficial effects of this further scheme are as follows: fusing the posture recognition results of the convolutional neural networks ensures high recognition accuracy and greatly improves the stability and accuracy of the system's recognition results.
According to another aspect of the invention, a pedestrian posture recognition system based on radar and multi-network fusion is provided, comprising: a preprocessing module for preprocessing the echo signal of a radar signal to obtain an output signal; a suppression module for suppressing stationary targets in the output signal; a searching module for searching the suppressed output signal for the range bin in which a pedestrian is located; an analysis module for performing time-frequency analysis on the echo signal corresponding to the pedestrian's range bin to obtain an echo-signal time-frequency diagram; a recognition module for recognizing the time-frequency diagram with each of a plurality of convolutional neural networks to obtain a recognition result from each network; and a fusion module for fusing the recognition results of the networks to obtain a fused posture recognition result.
The pedestrian posture recognition system based on radar and multi-network fusion gives the posture recognition result in real time by analyzing radar echoes, and the fusion of multiple neural networks ensures high recognition accuracy. Because the system is radar-based, it is, unlike an optical camera, not disturbed by factors such as illumination, weather, or smoke, and can work around the clock in all weather. The system is readily realizable in engineering, recognizes pedestrian postures effectively and accurately, is robust to environmental factors, and has good robustness.
On the basis of the technical scheme, the invention can be further improved as follows:
further: the identification module comprises:
the initialization submodule is used for initializing parameters of the convolutional neural network and pedestrian posture classification information;
the training submodule is used for obtaining a time-frequency graph of a sample pedestrian posture and a label of the sample pedestrian current posture through a sample pedestrian posture experiment, training the convolutional neural network by using the pedestrian posture classification information, and adjusting the parameters of the convolutional neural network by adopting a batch gradient descent method so that the posture classification result of the convolutional neural network is matched with the posture of the pedestrian;
and the recognition submodule is used for recognizing and classifying the echo signal time-frequency diagram of the target pedestrian by utilizing the trained convolutional neural networks to obtain a pedestrian posture recognition result output by each convolutional neural network, wherein the pedestrian posture recognition result output by the convolutional neural networks comprises a pedestrian posture category and the probability corresponding to the pedestrian posture category.
The beneficial effects of this further scheme are as follows: training the plurality of convolutional neural networks on a pedestrian posture sample library yields the weight coefficients of each network; the trained networks then recognize the echo-signal time-frequency diagrams corresponding to target pedestrians, so the human micro-motion time-frequency characteristic signal is obtained in real time and the current pedestrian posture category and its corresponding probability are recognized.
Further: the fusion module is specifically configured to read the posture recognition result of each convolutional neural network and take the posture category corresponding to the largest probability value as the fused recognition result.
The beneficial effects of this further scheme are as follows: fusing the posture recognition results of the convolutional neural networks ensures high recognition accuracy and greatly improves the stability and accuracy of the system's recognition results.
Drawings
FIG. 1 is a schematic flow chart of the pedestrian posture recognition method based on radar and multi-network fusion according to the present invention;
FIG. 2a is a time-frequency diagram corresponding to a sample pedestrian stepping posture of the present invention;
FIG. 2b is a time-frequency diagram corresponding to a sample pedestrian normal walking posture of the present invention;
FIG. 2c is a time-frequency diagram corresponding to a sample pedestrian falling posture of the present invention;
FIG. 2d is a time-frequency diagram corresponding to a sample pedestrian squatting posture of the present invention;
FIG. 3a is a time-frequency diagram corresponding to a normal walking posture recognized by the first convolutional neural network of the present invention;
FIG. 3b is a time-frequency diagram corresponding to a squatting posture recognized by the first convolutional neural network of the present invention;
FIG. 3c is a time-frequency diagram corresponding to a forward stepping posture recognized by the first convolutional neural network of the present invention;
FIG. 3d is a time-frequency diagram corresponding to a falling posture recognized by the first convolutional neural network of the present invention;
FIG. 4 is a time-frequency diagram corresponding to the target pedestrian posture recognized by the second convolutional neural network;
FIG. 5 is a schematic structural diagram of the pedestrian posture recognition system based on radar and multi-network fusion according to the present invention.
Detailed Description
The principles and features of the invention are described below in conjunction with the accompanying drawings; the examples are given by way of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, a pedestrian posture recognition method based on radar and multi-network fusion comprises the following steps:
Step 1: preprocess the echo signal of the radar signal to obtain an output signal;
Step 2: suppress stationary targets in the output signal;
Step 3: search the suppressed output signal for the range bin in which the pedestrian is located;
Step 4: perform time-frequency analysis on the echo signal corresponding to the pedestrian's range bin to obtain an echo-signal time-frequency diagram;
Step 5: recognize the echo-signal time-frequency diagram with each of a plurality of convolutional neural networks to obtain a recognition result from each network;
Step 6: fuse the recognition results of the convolutional neural networks to obtain a fused posture recognition result.
The pedestrian posture recognition method based on radar and multi-network fusion gives the posture recognition result in real time by analyzing radar echoes, and the fusion of multiple neural networks ensures high recognition accuracy. Because the method is radar-based, it is, unlike an optical camera, not disturbed by factors such as illumination, weather, or smoke, and can work around the clock in all weather. The method is readily realizable in engineering, recognizes pedestrian postures effectively and accurately, is robust to environmental factors, and gives the system good overall robustness.
In the foregoing embodiment, in step 1, preprocessing the echo signal specifically includes:
Step 11: dechirp (de-frequency-modulation) processing of the echo signal of the radar signal, as follows.
Assume the radar signal $s_t(\tau)$ is expressed as

$$s_t(\tau) = \exp\{j\pi(2 f_c \tau + \gamma \tau^2)\}$$

where $f_c$ is the emission frequency, $\tau$ is the fast time, $t$ is the slow time, and $\gamma$ is the chirp (frequency-modulation) rate.
The echo signal $s_r(t,\tau)$ corresponding to the radar signal is expressed as

$$s_r(t,\tau) = A_{rm} \exp\{j\pi(2 f_c (\tau - t_d(t)) + \gamma (\tau - t_d(t))^2)\}, \quad \tau \in (0, T_p]$$

$$t_d(t) = \frac{2(a_0 + a_1 t)}{c}$$

where $A_{rm}$ is a constant amplitude, $T_p$ is one frequency-modulation period, $t_d(t)$ is the time delay, $c$ is the speed of light, and $a_0$, $a_1$ are the motion parameters of the target.
The echo signal is then dechirped according to

$$s_0(t,\tau) = s_r(t,\tau)\, s_{ref}^*(\tau) = A_{rm} \exp\{-j\pi(2 f_c t_d(t) + 2\gamma \tau t_d(t) - \gamma t_d^2(t))\}$$

where $s_0(t,\tau)$ is the dechirped echo signal, $s_r(t,\tau)$ is the echo signal of the radar signal, $\exp\{-j 2\pi \gamma t_d(t) \tau\}$ is the fast-time phase of the dechirped echo signal, and $s_{ref}^*(\tau)$ is the conjugate of the dechirp reference signal $s_{ref}(\tau) = s_t(\tau)$.
Step 12: apply a Fourier transform along fast time to the dechirped echo signal $s_0(t,\tau)$;
Step 13: sample the Fourier-transformed echo signal $s_{rm}(t,f)$ discretely to obtain the output signal, denoted $s_{rm}(m,n)$. Let $t = m\Delta t$ and $f = n\Delta f$, where $\Delta t$ and $\Delta f$ are the sampling intervals, $m = 0, 1, \ldots, M-1$ is the slow-time index with $M$ the number of slow-time acquisition pulses, and $n = 0, 1, \ldots, N-1$ is the fast-time index with $N$ the number of fast-time sampling points.
Dechirping the echo signal converts the radio-frequency signal to baseband, which lowers the required signal sampling rate and makes the acquisition hardware easier to realize. Fourier-transforming the dechirped echo signal performs pulse compression, accumulating the target echo energy of a single pulse, from which the position and echo energy of objects in the radar scene can be obtained.
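As a concrete check of steps 11-13, the sketch below dechirps a simulated echo from a single stationary point target and verifies that after the Fourier transform the energy lands in the range bin predicted by the beat frequency $\gamma t_d$. All parameter values ($T_p$, the sampling rate, $\gamma$, the target range) are illustrative, not from the patent, and the carrier term is dropped for simplicity.

```python
import numpy as np

Tp, fs = 1e-4, 2.56e6              # illustrative chirp period and fast-time sample rate
N = int(Tp * fs)                   # 256 fast-time samples per pulse
gamma = 2e11                       # illustrative chirp rate (Hz/s)
fc = 0.0                           # carrier omitted for simplicity
tau = np.arange(N) / fs            # fast time within one pulse

t_d = 2 * 300.0 / 3e8              # delay t_d of a target at 300 m (2 microseconds)
s_ref = np.exp(1j * np.pi * (2 * fc * tau + gamma * tau ** 2))                # reference
s_r = np.exp(1j * np.pi * (2 * fc * (tau - t_d) + gamma * (tau - t_d) ** 2))  # echo

# step 11: dechirp -> a single tone at beat frequency -gamma * t_d
s0 = s_r * np.conj(s_ref)
# steps 12-13: fast-time FFT (pulse compression) and discrete peak search
spectrum = np.abs(np.fft.fft(s0))
peak = int(np.argmax(spectrum))
beat_bin = (-int(round(gamma * t_d * N / fs))) % N   # predicted bin (negative-frequency tone)
```

The peak bin moves proportionally to the target delay, which is exactly why the range FFT localizes objects in the scene.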
In the above embodiment, in step 2, stationary-target suppression is applied to the output signal with a three-pulse cancellation method along the slow-time dimension $t$, and the suppressed output signal is expressed as $s_{bs}(m',n)$:

$$s_{bs}(m',n) = \sum_{i=0}^{2} w_i\, s_{rm}(m'+i,\, n)$$

where $w_i$ ($i = 0, 1, 2$) are the weights of the third-order pulse canceller, $s_{rm}(m',n)$ denotes the dechirped output signal, and $m' = 0, 1, \ldots, M-3$ denotes the slow-time index of the suppressed output signal.
Suppressing stationary targets in the output signal removes the features of stationary objects in the scene that are not used for recognition (such as walls), which improves the signal-to-noise ratio; multi-pulse accumulation of the echo energy corresponding to the pedestrian further improves the signal-to-noise ratio, so the features of the pedestrian to be recognized stand out, which facilitates subsequent posture recognition and improves recognition precision and efficiency.
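The canceller can be sanity-checked on synthetic slow-time data: a stationary target (constant over pulses) is nulled exactly, while a Doppler-shifted pedestrian return survives. The weights $w = (1, -2, 1)$ of a standard second-difference three-pulse canceller are an illustrative choice; the patent leaves the $w_i$ unspecified.

```python
import numpy as np

M, N = 64, 8                       # pulses x range bins (illustrative sizes)
m = np.arange(M)
s_rm = np.zeros((M, N), dtype=complex)
s_rm[:, 2] = 1.0                                   # stationary target: constant echo
s_rm[:, 5] = np.exp(1j * 2 * np.pi * 0.2 * m)      # moving target: Doppler phase ramp

w = (1.0, -2.0, 1.0)               # second-difference weights (illustrative)
s_bs = w[0] * s_rm[:-2] + w[1] * s_rm[1:-1] + w[2] * s_rm[2:]
```

The stationary bin is driven to zero because the second difference of a constant vanishes, while the Doppler ramp picks up the nonzero gain $(1 - e^{j\theta})^2$.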
In the above embodiment, step 3 is specifically implemented as follows.
Step 31: sum the suppressed output signal $s_{bs}(m',n)$ along the slow-time dimension to obtain the echo energy sequence $s_e(n)$:

$$s_e(n) = \sum_{m'=0}^{M-3} \left| s_{bs}(m',n) \right|^2$$

where $M$ is the total number of echo pulses.
Step 32: select the range bin $n_0$ with the largest energy in the echo energy sequence $s_e(n)$, and mark the $n_0$-th range bin as the range bin in which the pedestrian is located.
By searching for the pedestrian's range bin, the echo signal corresponding to the pedestrian can conveniently be extracted from the whole output signal.
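A minimal sketch of step 3 on synthetic data (the bin index, array sizes, and noise level are illustrative): the slow-time energy sum makes the pedestrian's range bin stand out clearly against noise.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 64, 16                       # pulses x range bins (illustrative)
s_bs = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
s_bs[:, 11] += np.exp(1j * 2 * np.pi * 0.1 * np.arange(M))   # pedestrian echo in bin 11

# step 31: echo energy sequence, summed along slow time
s_e = np.sum(np.abs(s_bs) ** 2, axis=0)
# step 32: the range bin with the largest energy is taken as the pedestrian's bin
n0 = int(np.argmax(s_e))
```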
In the above embodiment, step 4 is specifically implemented as follows: select the suppressed output signal $s_{bs}(m', n_0)$ corresponding to the range bin $n_0$ in which the pedestrian is located, and apply a short-time Fourier transform to $s_{bs}(m', n_0)$ in a sliding-window manner to obtain the echo-signal time-frequency diagram.
In practice, the short-time Fourier transform of the suppressed output signal $s_{bs}(m', n_0)$ yields the time-frequency output signal $s_{tf}(m'', l)$, and the echo-signal time-frequency diagram is then drawn from $s_{tf}(m'', l)$:

$$s_{tf}(m'', l) = \sum_{m'=0}^{N_D - 1} s_{bs}(m'' + m',\, n_0)\, e^{-j 2\pi m' l / N_D}$$

where $N_D$ is the sliding-window length, $m''$ is the slow-time pulse index after the short-time Fourier transform, and $l$ is the frequency index after the short-time Fourier transform.
Applying the short-time Fourier transform to the suppressed output signal extracts the pedestrian's micro-motion characteristic signal, so that the pedestrian's posture can subsequently be recognized from the pedestrian's micro-motion features.
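The sliding-window transform can be sketched as follows (the window length $N_D$, hop size, and synthetic micro-Doppler chirp are illustrative; a rectangular window is assumed). The peak of each windowed spectrum tracks the instantaneous frequency, which is what makes the micro-motion signature visible in the time-frequency diagram.

```python
import numpy as np

N_D, hop = 32, 4                       # sliding-window length and step (illustrative)
m = np.arange(256)
# synthetic micro-Doppler return: frequency rises from 0.1 to 0.5 cycles/sample
sig = np.exp(1j * 2 * np.pi * (0.1 + 0.2 * m / 256.0) * m)

# one DFT per window position, as in s_tf(m'', l)
frames = np.array([sig[i:i + N_D] for i in range(0, len(sig) - N_D + 1, hop)])
tf_map = np.abs(np.fft.fft(frames, axis=1))   # time-frequency magnitude
```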
In the above embodiment, step 5 specifically comprises:
Step 51: initialize the parameters of the plurality of convolutional neural networks and the pedestrian posture classification information;
Step 52: obtain time-frequency diagrams of sample pedestrian postures, together with labels of each sample pedestrian's current posture, through sample-pedestrian posture experiments; train the convolutional neural networks with the pedestrian posture classification information, adjusting the parameters of each network by batch gradient descent until each network's posture classification result matches the pedestrian's posture; then store the parameters of each network.
Figs. 2a, 2b, 2c, and 2d show the time-frequency diagrams corresponding to four sample pedestrian postures after network training: fig. 2a the stepping posture, fig. 2b the normal walking posture, fig. 2c the falling posture, and fig. 2d the squatting posture.
Step 53: recognize and classify the echo-signal time-frequency diagram of the target pedestrian with the trained convolutional neural networks to obtain the posture recognition result output by each network.
The posture recognition result output by a convolutional neural network comprises a pedestrian posture category and the probability corresponding to that category.
Training the plurality of convolutional neural networks on a pedestrian posture sample library yields the weight coefficients of each network; the trained networks then recognize the echo-signal time-frequency diagram corresponding to the target pedestrian, so the human micro-motion time-frequency characteristic signal is obtained in real time and the current motion posture category of the human body and its corresponding probability are recognized.
Taking two convolutional neural networks as an example, in step 51 the recognition outputs and input parameters of the two networks are first set. The first convolutional neural network takes the echo-signal time-frequency diagram as its input, and its output is the determined pedestrian posture classification categories and the probability corresponding to each category. The second convolutional neural network also takes the echo-signal time-frequency diagram as its input, but its output has two parts: one part is the detection-box information, comprising the width, height, and center of the detection box, which marks the position of the posture information on the time-frequency diagram; the other part is the classification information inside the detection box, comprising the pedestrian posture categories and the probability corresponding to each category.
In this embodiment, the parameters of the convolutional neural networks and the pedestrian posture classification information are specifically the number of layers and the number of classes. Each network is composed of convolutional layers, fully connected layers, and a loss layer, and the specific numbers of these layers can be set according to the actual situation. For example, the first convolutional neural network is 7 layers deep and consists mainly of convolutional and fully connected layers; the whole network is shallow, has few parameters, trains quickly, and processes quickly, but it extracts image features insufficiently, so its classification accuracy is lower than that of the second network. The second convolutional neural network is 22 layers deep and also consists mainly of convolutional and fully connected layers; the network is deep enough to fully extract feature information at every scale of the picture, but it has more parameters, so it trains slowly and processes more slowly than the first network while achieving higher recognition accuracy. The pedestrian postures can be classified into categories such as no action, squatting, standing, falling, forward stepping, lateral stepping, and normal walking.
As shown in fig. 3a, 3b, 3c and 3d, which represent posture categories of target pedestrians obtained by the first convolutional neural network: fig. 3a shows the time-frequency diagram identified as a normal walking posture, fig. 3b the time-frequency diagram identified as a squatting posture, fig. 3c the time-frequency diagram identified as a stepping posture, and fig. 3d the time-frequency diagram identified as a falling posture. Fig. 4 shows a time-frequency diagram of target pedestrian postures identified by the second convolutional neural network; the three boxes in the figure represent three recognition results, each comprising the pedestrian posture and the corresponding probability: right stepping, 1.00; stepping, 1.00; squatting, 0.66.
In addition, the sample pedestrian posture experiment adopts the methods of steps 1 to 4, performing the experiment with a sample pedestrian as the target to obtain the time-frequency diagram of the sample pedestrian's posture and the label of the sample pedestrian's current posture. The trained convolutional neural network is then used to identify the echo-signal time-frequency diagram of the target pedestrian and obtain the final dynamic posture category and corresponding probability; that is, the pedestrian posture belongs to one of various categories such as no action, squatting, standing up, falling, forward stepping, lateral stepping and normal walking, and the probability corresponding to each category is obtained.
In the above embodiment, the step 6 is specifically implemented as:
and reading the pedestrian attitude recognition result of each convolutional neural network, and taking the pedestrian attitude category corresponding to the maximum probability value as a fusion recognition result.
By fusing the pedestrian attitude recognition results of each convolutional neural network, higher attitude recognition accuracy can be ensured, and the stability and accuracy of the recognition results of the system are greatly improved.
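As a non-limiting sketch, the fusion rule of step 6 (taking the posture category that carries the maximum probability across all networks) can be written as follows; the network outputs below are hypothetical values, not results from the patent:

```python
def fuse_predictions(network_results):
    """Fuse per-network posture predictions by selecting the category that
    carries the single highest probability across all network outputs."""
    best_category, best_prob = None, -1.0
    for result in network_results:  # one {category: probability} dict per network
        for category, prob in result.items():
            if prob > best_prob:
                best_category, best_prob = category, prob
    return best_category, best_prob

# Hypothetical outputs of the first (classification) and second (detection) networks
net1 = {"normal walking": 0.55, "squatting": 0.30, "falling": 0.15}
net2 = {"normal walking": 0.92, "squatting": 0.05, "falling": 0.03}
print(fuse_predictions([net1, net2]))  # → ('normal walking', 0.92)
```

In this rule a confident deep-network prediction dominates an uncertain shallow-network one, which is the complementarity the embodiment relies on.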
In practice, taking a certain target pedestrian as an example, the first convolutional neural network and the second convolutional neural network respectively identify the pedestrian postures and the probability corresponding to each posture; the obtained pedestrian posture categories and per-posture probabilities are shown in the following table:
(Table: pedestrian posture categories and the probability of each posture identified by the first and second convolutional neural networks; rendered as an image in the original document.)
In the invention, multi-network fusion is adopted, so that the multiple networks (two in this embodiment) complement each other: pedestrian postures with simple, obvious features can be quickly identified from the result of the first convolutional neural network alone, while for complex pedestrian postures that are difficult to distinguish, the outputs of the first and second convolutional neural networks can be combined to give a more reliable recognition result.
As shown in fig. 5, a pedestrian posture recognition system based on radar and multi-network fusion includes a preprocessing module, configured to preprocess an echo signal of a radar signal to obtain an output signal; the suppression module is used for suppressing the static target in the output signal; the search module is used for searching a distance unit where the pedestrian is located in the output signal after the suppression processing; the analysis module is used for carrying out time-frequency analysis on the echo signal corresponding to the distance unit where the pedestrian is located to obtain an echo signal time-frequency graph; the identification module is used for respectively identifying the echo signal time-frequency graph by utilizing a plurality of convolutional neural networks to obtain an identification result of each convolutional neural network; and the fusion module is used for fusing the recognition result of each convolutional neural network to obtain a fused gesture recognition result.
The pedestrian posture recognition system based on radar and multi-network fusion can give the pedestrian posture recognition result in real time through analysis of radar echoes, and the multi-network fusion ensures high posture recognition accuracy. Meanwhile, because the method is realized with radar, it is not disturbed by factors such as illumination conditions, weather and smoke, unlike an optical camera, and can work all day and in all weather conditions. The method has strong engineering realizability, can effectively identify pedestrian postures with high accuracy, is not easily influenced by environmental factors, and has good system robustness.
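For illustration only, the analysis module's sliding-window short-time Fourier transform, which produces the echo-signal time-frequency diagram, can be sketched with NumPy; the window length, hop size and simulated micro-Doppler signal are illustrative assumptions, not values from the patent:

```python
import numpy as np

def stft_magnitude(x, win_len=32, hop=8):
    """Sliding-window short-time Fourier transform magnitude (time-frequency diagram)."""
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.fft(np.array(frames), axis=1)).T  # rows: frequency, cols: slow time

# Hypothetical micro-Doppler signal: instantaneous frequency swings sinusoidally,
# loosely mimicking a swinging limb over slow time
m = np.arange(512)
inst_freq = 0.1 + 0.05 * np.sin(2 * np.pi * m / 256)   # cycles per sample
x = np.exp(1j * 2 * np.pi * np.cumsum(inst_freq))
tf = stft_magnitude(x)
print(tf.shape)  # → (32, 61): 32 frequency bins by 61 slow-time frames
```

The resulting magnitude matrix is the kind of time-frequency image the convolutional neural networks take as input.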
In the above embodiment, the preprocessing module includes: a de-chirp submodule for performing frequency-demodulation (de-chirp) preprocessing on the echo signal s_r(t, τ); a transformation submodule for performing a Fourier transform on the de-chirped echo signal s_0(t, τ); and a sampling submodule for performing discrete sampling on the Fourier-transformed echo signal s_rm(t, f) to obtain the output signal.
By performing de-chirp processing on the echo signal, the radio-frequency signal is converted to a baseband signal, which reduces the required signal sampling rate and therefore makes the acquisition hardware easier to realize. By performing a Fourier transform on the de-chirped echo signal, pulse compression is achieved, so that the target echo energy of a single pulse is accumulated and the position and echo-energy information of objects in the radar scene can be obtained.
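The de-chirp and pulse-compression steps described above can be sketched as follows; all radar parameters (carrier frequency, chirp rate, sampling rate, target range) are illustrative assumptions, not values from the patent:

```python
import numpy as np

fc, gamma, Tp = 24e9, 1e12, 1e-4   # assumed carrier (Hz), chirp rate (Hz/s), pulse period (s)
fs = 1e6                           # assumed baseband sampling rate after de-chirp
c = 3e8                            # speed of light (m/s)
tau = np.arange(0, Tp, 1 / fs)     # fast-time axis, 100 samples

def echo(t_d):
    """Echo of one chirp pulse delayed by t_d (constant amplitude A_rm = 1)."""
    return np.exp(1j * np.pi * (2 * fc * (tau - t_d) + gamma * (tau - t_d) ** 2))

def dechirp(s_r):
    """De-chirp: multiply by the conjugate reference exp{-j*pi*(2*fc*tau + gamma*tau**2)}."""
    return s_r * np.exp(-1j * np.pi * (2 * fc * tau + gamma * tau ** 2))

s0 = dechirp(echo(2 * 30.0 / c))           # target at an assumed range of 30 m
spectrum = np.abs(np.fft.fft(s0))          # pulse compression: FFT concentrates the energy
peak_bin = int(np.argmax(spectrum))
f_b = min(peak_bin, len(tau) - peak_bin) * fs / len(tau)   # beat frequency |gamma * t_d|
range_est = f_b * c / (2 * gamma)          # range recovered from the beat frequency
print(range_est)  # → 30.0 (metres)
```

After de-chirp, a point target becomes a single beat tone whose frequency is proportional to its range, which is why one FFT per pulse yields the range profile.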
In the above embodiment, the search module comprises: a calculation submodule for summing the suppressed output signal s_bs(m', n) along the slow-time dimension to obtain an echo-energy sequence s_e(n); and a marking submodule for selecting the distance unit n_0 with the largest energy in the echo-energy sequence s_e(n) and marking the n_0-th distance unit as the distance unit where the pedestrian is located. By searching for the distance unit where the pedestrian is located, the echo signal corresponding to the pedestrian can conveniently be extracted from the whole output signal.
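The suppression and search steps (three-pulse cancellation along slow time, then energy summation and selection of the strongest distance unit) can be sketched as follows; the matrix size, target bins and Doppler value are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 128                     # slow-time pulses, range (fast-time) bins

# Hypothetical range-pulse matrix: static clutter in bin 40, a moving pedestrian in bin 70
s_rm = np.zeros((M, N), dtype=complex)
s_rm[:, 40] = 5.0                                  # static target: constant over slow time
m = np.arange(M)
s_rm[:, 70] = np.exp(1j * 2 * np.pi * 0.1 * m)     # mover: Doppler phase rotation
s_rm += 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# Three-pulse cancellation along slow time with the classic weights w = (1, -2, 1)
w = np.array([1.0, -2.0, 1.0])
s_bs = w[0] * s_rm[:-2] + w[1] * s_rm[1:-1] + w[2] * s_rm[2:]

# Echo-energy sequence and the distance unit where the pedestrian is located
s_e = np.abs(s_bs).sum(axis=0)
n0 = int(np.argmax(s_e))
print(n0)  # → 70: the static target in bin 40 cancels, the moving target survives
```

The static column is cancelled exactly (5 - 2*5 + 5 = 0 per triple), while the Doppler-shifted column passes through the canceller, so the energy maximum marks the pedestrian's distance unit.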
In the above embodiment, the identification module includes:
the initialization submodule is used for initializing parameters of the convolutional neural network and pedestrian attitude classification information;
the training submodule is used for obtaining a time-frequency graph of a sample pedestrian posture and a label of the sample pedestrian current posture through a sample pedestrian posture experiment, training the convolutional neural network by using the pedestrian posture classification information, and adjusting the parameters of the convolutional neural network by adopting a batch gradient descent method so that the posture classification result of the convolutional neural network is matched with the posture of the pedestrian;
and the recognition submodule is used for recognizing and classifying the echo signal time-frequency diagram of the target pedestrian by utilizing the trained convolutional neural networks to obtain a pedestrian posture recognition result output by each convolutional neural network, wherein the pedestrian posture recognition result output by the convolutional neural networks comprises a pedestrian posture category and the probability corresponding to the pedestrian posture category.
That is, the plurality of convolutional neural networks are trained by learning a pedestrian posture sample library, so as to obtain the weight coefficients of each convolutional neural network; the echo-signal time-frequency diagrams corresponding to the target pedestrian, i.e. the human-body micro-motion time-frequency signatures acquired in real time, are then identified to give the current pedestrian posture category and its corresponding probability.
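As a toy illustration of the batch gradient descent update used by the training submodule (the actual networks classify time-frequency images; the two-dimensional features, class construction and learning rate here are purely hypothetical), a softmax classifier trained by full-batch gradient descent looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: 2-D features, three separable "posture" classes
X = rng.standard_normal((300, 2))
s = X @ np.array([1.0, 1.0])
y = np.digitize(s, [-0.5, 0.5])            # class 0, 1 or 2 by threshold

W = np.zeros((2, 3))
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for _ in range(500):                       # batch GD: gradient over the whole sample set
    p = softmax(X @ W + b)
    g = p.copy()
    g[np.arange(len(y)), y] -= 1.0         # gradient of cross-entropy w.r.t. the logits
    W -= 0.5 * (X.T @ g) / len(y)          # full-batch parameter update
    b -= 0.5 * g.mean(axis=0)

acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
print(acc)
```

The same update rule, applied layer by layer via backpropagation, is what adjusts the convolutional-network parameters until the posture classification matches the labels.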
In this embodiment, the parameters of the convolutional neural network and the pedestrian posture classification information are specifically the number of layers and the number of classes of the convolutional neural network; specifically, the convolutional neural network is set to be composed of convolutional layers, fully-connected layers and a loss layer, and the pedestrian postures are classified into various categories such as no action, squatting, standing up, falling, forward stepping, lateral stepping and normal walking.
In addition, the sample pedestrian posture experiment adopts the methods of steps 1 to 4, performing the experiment with a sample pedestrian as the target to obtain the time-frequency diagram of the sample pedestrian's posture and the label of the sample pedestrian's current posture. The trained convolutional neural network is then used to identify the echo-signal time-frequency diagram of the target pedestrian and obtain the final dynamic posture category and corresponding probability; that is, the pedestrian posture belongs to one of various categories such as no action, squatting, standing up, falling, forward stepping, lateral stepping and normal walking, and the probability corresponding to each category is obtained.
Preferably, in the above embodiment, the fusion module is specifically configured to:
and reading the pedestrian attitude recognition result of each convolutional neural network, and taking the pedestrian attitude category corresponding to the maximum probability value as a fusion recognition result.
By fusing the pedestrian attitude recognition results of each convolutional neural network, higher attitude recognition accuracy can be ensured, and the stability and accuracy of the recognition results of the system are greatly improved.
The embodiment of the invention also provides a pedestrian posture recognition device based on radar and multi-network fusion, which comprises: a memory and a processor;
the memory for storing a computer program;
the processor is used for executing the pedestrian gesture recognition method when reading the computer program stored in the memory.
Embodiments of the present invention further provide a computer storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the pedestrian posture identifying method.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Modules described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each module may exist alone physically, or two or more units are integrated into one unit. The integrated module can be realized in a form of hardware or a form of a software functional unit.
The integrated module, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A pedestrian posture identification method based on radar and multi-network fusion is characterized by comprising the following steps:
step 1: preprocessing an echo signal of a radar signal to obtain an output signal;
step 2: carrying out suppression processing on a static target in the output signal;
and 3, step 3: searching a distance unit where the pedestrian is located in the output signal after the suppression processing;
and 4, step 4: performing time-frequency analysis on the echo signal corresponding to the distance unit where the pedestrian is located to obtain an echo signal time-frequency diagram;
and 5: respectively identifying the echo signal time-frequency graphs by utilizing a plurality of convolutional neural networks to obtain an identification result of each convolutional neural network;
step 6: fusing the recognition result of each convolutional neural network to obtain a fused posture recognition result;
the step 5 specifically includes:
step 51: initializing parameters of the plurality of convolutional neural networks and pedestrian attitude classification information;
step 52: obtaining a time-frequency graph of a sample pedestrian posture and a label of a sample pedestrian current posture through a sample pedestrian posture experiment, training the plurality of convolutional neural networks by utilizing the pedestrian posture classification information, adjusting the plurality of convolutional neural network parameters by adopting a batch gradient descent method, enabling a posture classification result of each convolutional neural network to be matched with the pedestrian posture, and storing the parameters of each convolutional neural network;
step 53: identifying and classifying the echo signal time-frequency diagram of the target pedestrian by using the trained convolutional neural networks to obtain a pedestrian posture identification result output by each convolutional neural network;
and the pedestrian attitude identification result output by the convolutional neural network comprises a pedestrian attitude category and the probability corresponding to the pedestrian attitude category.
2. The pedestrian posture identification method based on radar and multi-network fusion as claimed in claim 1, wherein in the step 1, the preprocessing of the echo signal of the radar signal specifically comprises:
step 11: and performing frequency modulation removing processing on the echo signal, which specifically comprises the following steps:
assuming the radar signal s_t(τ) is represented as follows:
s_t(τ) = exp{jπ(2 f_c τ + γ τ^2)}
wherein f_c is the emission frequency, τ is the fast time, t is the slow time, and γ is the frequency-modulation rate;
the echo signal s_r(t, τ) corresponding to the radar signal is expressed as:
s_r(t, τ) = A_rm exp{jπ(2 f_c (τ - t_d(t)) + γ (τ - t_d(t))^2)}, τ ∈ (0, T_p]
t_d(t) = 2(a_0 + a_1 t)/c
wherein A_rm is a constant amplitude, T_p is one frequency-modulation period, t_d(t) is the time delay, c is the speed of light, and a_0, a_1 are the motion parameters of the target;
then, de-chirp processing is performed on the echo signal, and the calculation formula is as follows:
s_0(t, τ) = s_r(t, τ) · s*_ref(τ) = A_rm exp{jφ(t, τ)}
wherein s_0(t, τ) represents the echo signal after de-chirp processing, s_r(t, τ) represents the echo signal of the radar signal,
φ(t, τ) = π(γ t_d^2(t) - 2 f_c t_d(t) - 2 γ τ t_d(t))
represents the fast-time phase of the de-chirped echo signal, and
s*_ref(τ) = exp{-jπ(2 f_c τ + γ τ^2)}
represents the conjugate of the reference signal s_ref(τ);
step 12: performing a Fourier transform on the de-chirped echo signal s_0(t, τ);
step 13: performing discrete sampling on the Fourier-transformed echo signal s_rm(t, f) to obtain the output signal, denoted s_rm(m, n); let t = mΔt and f = nΔf, where Δt and Δf are the sampling intervals, m is the slow-time index with m = 0, 1, 2, …, M, M being the number of slow-time acquisition pulses, and n is the fast-time index with n = 0, 1, 2, …, N, N being the number of fast-time sampling points.
3. The pedestrian posture identifying method based on radar and multi-network fusion as claimed in claim 2, wherein in the step 2, the output signal is suppressed along the slow-time dimension t by a three-pulse cancellation method, and the suppressed output signal is represented as s_bs(m', n):
s_bs(m', n) = Σ_{i=0}^{2} w_i · s_rm(m' + i, n)
wherein w_i (i = 0, 1, 2) are the weights of the third-order pulse canceller, s_rm(m', n) denotes the de-chirped output signal, and m' denotes the slow-time index of the suppressed output signal, with m' = 0, 1, 2, …, M-2.
4. The pedestrian posture recognition method based on radar and multi-network fusion as claimed in claim 3, wherein the step 3 is implemented as follows:
step 31: summing the suppressed output signal s_bs(m', n) along the slow-time dimension to obtain an echo-energy sequence s_e(n), the calculation formula being:
s_e(n) = Σ_{m'=0}^{M-2} |s_bs(m', n)|
wherein M is the total pulse number of the echo;
step 32: selecting the distance unit n_0 with the largest energy in the echo-energy sequence s_e(n), and marking the n_0-th distance unit as the distance unit where the pedestrian is located.
5. The pedestrian posture identifying method based on radar and multi-network fusion as claimed in claim 1, wherein the step 4 is implemented as follows:
selecting the suppressed output signal s_bs(m', n_0) corresponding to the distance unit n_0 where the pedestrian is located, and performing short-time Fourier transform processing on the suppressed output signal s_bs(m', n_0) in a sliding-window manner to obtain the echo-signal time-frequency diagram.
6. The pedestrian posture recognition method based on radar and multi-network fusion according to any one of claims 1 to 5, wherein the step 6 is implemented as:
and reading the pedestrian attitude recognition result of each convolutional neural network, and taking the pedestrian attitude category corresponding to the maximum probability value as a fusion recognition result.
7. A pedestrian attitude identification system based on radar and multi-network fusion is characterized by comprising:
the preprocessing module is used for preprocessing an echo signal of the radar signal to obtain an output signal;
the suppression module is used for suppressing the static target in the output signal;
the search module is used for searching a distance unit where the pedestrian is located in the output signal after the suppression processing;
the analysis module is used for carrying out time-frequency analysis on the echo signal corresponding to the distance unit where the pedestrian is located to obtain an echo signal time-frequency graph;
the identification module is used for respectively identifying the echo signal time-frequency graphs by utilizing a plurality of convolutional neural networks to obtain an identification result of each convolutional neural network;
the fusion module is used for fusing the recognition result of each convolutional neural network to obtain a fused gesture recognition result;
the identification module is specifically configured to:
initializing parameters of the plurality of convolutional neural networks and pedestrian attitude classification information;
obtaining a time-frequency graph of a sample pedestrian posture and a label of a sample pedestrian current posture through a sample pedestrian posture experiment, training the plurality of convolutional neural networks by utilizing the pedestrian posture classification information, adjusting the plurality of convolutional neural network parameters by adopting a batch gradient descent method, enabling a posture classification result of each convolutional neural network to be matched with the pedestrian posture, and storing the parameters of each convolutional neural network;
identifying and classifying the echo signal time-frequency diagram of the target pedestrian by using the trained convolutional neural networks to obtain a pedestrian posture identification result output by each convolutional neural network;
and the pedestrian attitude recognition result output by the convolutional neural network comprises a pedestrian attitude category and the probability corresponding to the pedestrian attitude category.
8. The system of claim 7, wherein the recognition module comprises:
the initialization submodule is used for initializing parameters of the convolutional neural network and pedestrian posture classification information;
the training submodule is used for obtaining a time-frequency graph of a sample pedestrian posture and a label of the sample pedestrian current posture through a sample pedestrian posture experiment, training the convolutional neural network by using the pedestrian posture classification information, and adjusting the parameters of the convolutional neural network by adopting a batch gradient descent method so that the posture classification result of the convolutional neural network is matched with the posture of the pedestrian;
and the recognition submodule is used for recognizing and classifying the echo signal time-frequency diagram of the target pedestrian by utilizing the trained convolutional neural networks to obtain a pedestrian posture recognition result output by each convolutional neural network, wherein the pedestrian posture recognition result output by the convolutional neural networks comprises a pedestrian posture category and the probability corresponding to the pedestrian posture category.
9. The system according to claim 7 or 8, wherein the fusion module is specifically configured to:
and reading the pedestrian attitude recognition result of each convolutional neural network, and taking the pedestrian attitude category corresponding to the maximum probability value as a fusion recognition result.
CN201810247528.7A 2018-03-23 2018-03-23 Pedestrian attitude identification method and system based on radar and multi-network fusion Active CN108920993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810247528.7A CN108920993B (en) 2018-03-23 2018-03-23 Pedestrian attitude identification method and system based on radar and multi-network fusion


Publications (2)

Publication Number Publication Date
CN108920993A CN108920993A (en) 2018-11-30
CN108920993B true CN108920993B (en) 2022-08-16

Family

ID=64403082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810247528.7A Active CN108920993B (en) 2018-03-23 2018-03-23 Pedestrian attitude identification method and system based on radar and multi-network fusion

Country Status (1)

Country Link
CN (1) CN108920993B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111723824A (en) * 2019-03-18 2020-09-29 北京木牛领航科技有限公司 Biological characteristic identification method based on micro-motion detection technology and neural network algorithm
CN110363219B (en) * 2019-06-10 2023-08-22 南京理工大学 Method for identifying middle-stage target micro-motion form of trajectory based on convolutional neural network
CN110146855B (en) * 2019-06-11 2020-10-23 北京无线电测量研究所 Radar intermittent interference suppression threshold calculation method and device
CN110414426B (en) * 2019-07-26 2023-05-30 西安电子科技大学 Pedestrian gait classification method based on PC-IRNN
CN112444785B (en) * 2019-08-30 2024-04-12 华为技术有限公司 Target behavior recognition method, device and radar system
CN110638460B (en) * 2019-09-16 2022-07-15 深圳数联天下智能科技有限公司 Method, device and equipment for detecting state of object relative to bed
CN111007496B (en) * 2019-11-28 2022-11-04 成都微址通信技术有限公司 Through-wall perspective method based on neural network associated radar
CN111368930B (en) * 2020-03-09 2022-11-04 成都理工大学 Radar human body posture identification method and system based on multi-class spectrogram fusion and hierarchical learning
CN111796272B (en) * 2020-06-08 2022-09-16 桂林电子科技大学 Real-time gesture recognition method and computer equipment for through-wall radar human body image sequence
CN111965620B (en) * 2020-08-31 2023-05-02 中国科学院空天信息创新研究院 Gait feature extraction and identification method based on time-frequency analysis and deep neural network
CN112183586B (en) * 2020-09-10 2024-04-02 浙江工业大学 Human body posture radio frequency identification method for online multitask learning
CN113705482B (en) * 2021-08-31 2024-03-22 江苏唯宝体育科技发展有限公司 Body health monitoring management system and method based on artificial intelligence
CN113985393B (en) * 2021-10-25 2024-04-16 南京慧尔视智能科技有限公司 Target detection method, device and system
CN114863556A (en) * 2022-04-13 2022-08-05 上海大学 Multi-neural-network fusion continuous action recognition method based on skeleton posture
CN114895363A (en) * 2022-05-07 2022-08-12 上海恒岳智能交通科技有限公司 Method for recognizing state potential of invaded object by visual imaging monitoring on two sides of roadbed

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102323575A (en) * 2011-07-16 2012-01-18 西安电子科技大学 Range migration correction method for pulse Doppler (PD) radar in feeble signal detection process
WO2016174659A1 (en) * 2015-04-27 2016-11-03 Snapaid Ltd. Estimating and using relative head pose and camera field-of-view
CN107169435A (en) * 2017-05-10 2017-09-15 天津大学 A kind of convolutional neural networks human action sorting technique based on radar simulation image
CN107290741A (en) * 2017-06-02 2017-10-24 南京理工大学 Combine the indoor human body gesture recognition method apart from time-frequency conversion based on weighting
CN107808111A (en) * 2016-09-08 2018-03-16 北京旷视科技有限公司 For pedestrian detection and the method and apparatus of Attitude estimation

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP2012118683A (en) * 2010-11-30 2012-06-21 Daihatsu Motor Co Ltd Pedestrian recognition device
CN106537180B (en) * 2014-07-25 2020-01-21 罗伯特·博世有限公司 Method for mitigating radar sensor limitations with camera input for active braking of pedestrians


Also Published As

Publication number Publication date
CN108920993A (en) 2018-11-30

Similar Documents

Publication Publication Date Title
CN108920993B (en) Pedestrian attitude identification method and system based on radar and multi-network fusion
CN107862705B (en) Unmanned aerial vehicle small target detection method based on motion characteristics and deep learning characteristics
Du et al. Object tracking in satellite videos by fusing the kernel correlation filter and the three-frame-difference algorithm
Yang et al. Robust superpixel tracking
Zhang et al. Fast visual tracking via dense spatio-temporal context learning
Wang et al. Deep learning-based UAV detection in pulse-Doppler radar
Al Hadhrami et al. Transfer learning with convolutional neural networks for moving target classification with micro-Doppler radar spectrograms
CN111505632B (en) Ultra-wideband radar action attitude identification method based on power spectrum and Doppler characteristics
CN108614993A (en) A kind of pedestrian's gesture recognition method and system based on radar and pattern-recognition
Sukanya et al. A survey on object recognition methods
Zhao et al. Multiresolution airport detection via hierarchical reinforcement learning saliency model
Cheng et al. Object detection in VHR optical remote sensing images via learning rotation-invariant HOG feature
Joshi et al. A random forest approach to segmenting and classifying gestures
CN111175718A (en) Time-frequency domain combined ground radar automatic target identification method and system
CN111080674A (en) Multi-target ISAR key point extraction method based on Gaussian mixture model
CN112949655A (en) Fine-grained image recognition method combined with attention mixed cutting
Benedek et al. Moving target analysis in ISAR image sequences with a multiframe marked point process model
CN109448024B (en) Visual tracking method and system for constructing constraint correlation filter by using depth data
CN110084834A (en) A kind of method for tracking target based on quick tensor singular value decomposition Feature Dimension Reduction
CN110516638B (en) Sign language recognition method based on track and random forest
RoyChowdhury et al. Distinguishing weather phenomena from bird migration patterns in radar imagery
Aishwarya et al. Multilayer vehicle classification integrated with single frame optimized object detection framework using CNN based deep learning architecture
CN107730532B (en) Badminton motion trajectory tracking method, system, medium and equipment
Wang et al. Low-slow-small target tracking using relocalization module
Ditzel et al. Genradar: Self-supervised probabilistic camera synthesis based on radar frequencies

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant