CN108920993A - A kind of pedestrian's gesture recognition method and system based on radar and multiple networks fusion - Google Patents
A kind of pedestrian's gesture recognition method and system based on radar and multiple networks fusion Download PDFInfo
- Publication number
- CN108920993A CN108920993A CN201810247528.7A CN201810247528A CN108920993A CN 108920993 A CN108920993 A CN 108920993A CN 201810247528 A CN201810247528 A CN 201810247528A CN 108920993 A CN108920993 A CN 108920993A
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- signal
- echo
- convolutional neural
- neural networks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The present invention relates to a kind of pedestrian's gesture recognition method and system based on radar and multiple networks fusion, method includes pre-processing to the echo-signal of radar signal, obtains output signal;Static target in output signal is subjected to inhibition processing;Search is by inhibiting distance unit locating for the pedestrian in treated output signal;Time frequency analysis is carried out to the corresponding echo-signal of distance unit where pedestrian, obtains echo-signal time-frequency figure;Echo-signal time-frequency figure is identified respectively using multiple convolutional neural networks, obtains the recognition result of each convolutional neural networks;The recognition result of each convolutional neural networks is merged, fused gesture recognition result is obtained.Pedestrian's gesture recognition method of the invention provides pedestrian's gesture recognition result by the analysis to radar return in real time, pass through Fusion of Multiple Neural Networks, it can guarantee higher gesture recognition accuracy rate, not be illuminated by the light the interference of the factors such as condition, weather, smog, it being capable of round-the-clock, all weather operations.
Description
Technical field
The present invention relates to Radar Signal Processing and image identification technical fields, more particularly to one kind to be based on radar and Multi net voting
The pedestrian's gesture recognition method and system of fusion.
Background technique
Optical camera is mostly used to obtain image greatly the gesture recognition of pedestrian in the prior art, then to the image of acquisition
Image recognition processing is carried out, due to being interfered by environmental factors such as illumination condition, weather, smog in this sample loading mode, is especially existed
At night, the big heavy discount of recognition effect, recognition accuracy is not high, or even unrecognized situation occurs, and stability is excessively poor.Cause
This, this mode is no longer satisfied the occasion of the high requests such as whole day 24 hours.
Summary of the invention
The technical problem to be solved by the present invention is to solve the above shortcomings of the prior art and to provide a kind of based on radar and more
The pedestrian's gesture recognition method and system of the network integration.
The technical solution that the present invention solves above-mentioned technical problem is as follows:
According to one aspect of the present invention, a kind of pedestrian gesture recognition side based on radar and multiple networks fusion is provided
Method includes the following steps:
Step 1:The echo-signal of radar signal is pre-processed, output signal is obtained;
Step 2:Static target in the output signal is subjected to inhibition processing;
Step 3:Search is by inhibiting distance unit locating for the pedestrian in treated the output signal;
Step 4:Time frequency analysis is carried out to the corresponding echo-signal of distance unit where the pedestrian, when obtaining echo-signal
Frequency is schemed;
Step 5:The echo-signal time-frequency figure is identified respectively using multiple convolutional neural networks, obtains each institute
State the recognition result of convolutional neural networks;
Step 6:The recognition result of each convolutional neural networks is merged, fused gesture recognition knot is obtained
Fruit.
The beneficial effects of the invention are as follows:Pedestrian's gesture recognition method based on radar and multiple networks fusion of the invention leads to
The analysis to radar return is crossed, provides pedestrian's gesture recognition in real time as a result, can guarantee higher by Fusion of Multiple Neural Networks
Gesture recognition accuracy rate.Meanwhile this method based on radar realize, compared to optical camera, the invention be not illuminated by the light condition,
The interference of the factors such as weather, smog, being capable of round-the-clock, all weather operations.The method of the present invention engineering realizability is strong, can be effectively
High-accuracy identification is carried out to pedestrian's posture, while not vulnerable to various such environmental effects, system robustness is good.
Based on the above technical solution, the present invention can also be improved as follows:
Further:In the step 1, it is described to echo-signal carry out pretreatment specifically include:
Step 11:Deramp processing is carried out to echo-signal to described, it is specific as follows:
Assuming that radar signal st(τ) is expressed as follows:
st(τ)=exp { j π (2fcτ+γτ2)}
Wherein, fcFor tranmitting frequency, τ is the fast time, and t is the slow time, and γ is modulation frequency;
The corresponding echo-signal s of the radar signalr(t, τ) is represented by:
sr(t, τ)=Armexp{jπ(2fc(τ-td(t))+γ(τ-td(t))2)}τ∈(0,Tp]
Wherein, ArmFor an amplitude constant, TpFor a frequency modulation(PFM) period, tdIt (t) is time delay, c is the light velocity, a0、a1Point
Not Wei target kinematic parameter;
Deramp processing then is carried out to the echo-signal, calculation formula is:
s0(t, τ) indicates the echo-signal after deramp processing, sr(t, τ) indicates the echo-signal of the radar signal,The fast time phase of echo-signal after indicating deramp processing,Indicate JieDuHuaYu II Decoction reference signal sref(τ's) is total to
Conjugate signal;
Step 12:By the echo-signal s after frequency modulation removal0(t, τ) carries out Fourier transformation;
Step 13:To the echo-signal s after Fourier transformationrm(t, f) carries out discrete sampling, obtains the output letter
Number, it is expressed as srm(m,n).T=m Δ T, f=n Δ f is enabled, wherein Δ T, Δ f is the sampling interval, and m is slow time index, and m=
0,1,2 ... M, M are slow time acquisition pulse number, and n is fast time index, and n=0,1,2 ... N, N are fast time sampling points.
The beneficial effect of above-mentioned further scheme is:By carrying out deramp processing to the echo-signal, realizes and penetrate
Conversion of the frequency signal to baseband signal, reduces signal sampling rate demand, so that acquisition hardware system is easier to realize,
By carrying out Fourier transformation to the echo-signal after deramp processing, process of pulse-compression is realized, the mesh of single pulse is made
Mark backward energy accumulation, and then the position of object and backward energy information in available radar scene.
Further:In the step 2, method is offseted using three pulses along slow time dimension t to the output signal and is carried out
Static target inhibits, and the output signal after inhibiting is expressed as sbs(m′,n);
Wherein, wi(i=0,1,2) is the weight of three rank pulse cancellers, srm(m ', n) indicates defeated after frequency modulation removal
Signal out, the slow time index of output signal of the m ' expression after inhibiting, and m '=0,1,2 ... M-2.
The beneficial effect of above-mentioned further scheme is:It, can by carrying out static target inhibition processing to the output signal
Inhibited with the feature to the standing object (such as wall) for being not used in identification in scene, signal-to-noise ratio is improved, so that pedestrian
The multi-pulse accumulation of corresponding backward energy further increases signal-to-noise ratio, and the feature of pedestrian to be identified highlights, after being convenient for
It is continuous to carry out gesture recognition, improve accuracy of identification and recognition efficiency.
Further:The step 3 is implemented as:
Step 31:By the output signal s after inhibitingbs(m ', n) sums along slow time dimension, obtains backward energy sequence
Arrange se(n), calculation formula is:
Wherein, M is the overall pulse number of echo,
Step 32:Choose the backward energy sequence se(n) the maximum distance unit n of energy in0, and by n-th0A distance
Unit is labeled as distance unit locating for pedestrian.
The beneficial effect of above-mentioned further scheme is:By distance unit locating for search pedestrian, can will go in order to subsequent
The corresponding echo-signal of people is extracted from entire output signal.
Further:The step 4 is implemented as:
Choose the distance unit n where pedestrian0The corresponding output signal s after inhibitingbs(m′,n0), it uses
The mode of sliding window, to the output signal s after inhibitingbs(m′,n0) Short Time Fourier Transform processing is carried out, it obtains described
Echo-signal time-frequency figure.
The beneficial effect of above-mentioned further scheme is:By being carried out in Fu in short-term to the output signal after inhibiting
Leaf transformation processing, may be implemented the extraction of pedestrian's fine motion characteristic signal, identify pedestrian according to the fine motion feature of pedestrian so as to subsequent
Posture.
Further:The step 5 specifically includes:
Step 51:Initialize the parameter and pedestrian's posture classification information of the multiple convolutional neural networks;
Step 52:The time-frequency figure and sample pedestrian current pose for obtaining sample pedestrian posture are tested by sample pedestrian's posture
Label, and the multiple convolutional neural networks are trained using pedestrian's posture classification information, using batch processing ladder
It spends descent method and adjusts the multiple convolutional neural networks parameter, so that the posture classification results and row of each convolution mind network
The posture of people matches, and the parameter of each convolutional neural networks is saved;
Step 53:Using the multiple convolutional neural networks after training to the echo-signal time-frequency figure of target pedestrian
Identification classification is carried out, pedestrian's gesture recognition result of each convolutional neural networks output is obtained;
Wherein, pedestrian's gesture recognition result of convolutional neural networks output includes pedestrian's posture classification and its corresponding
Probability.
The beneficial effect of above-mentioned further scheme is:By being trained to the multiple convolutional neural networks, realization pair
The study and training of pedestrian's posture sample database, obtain and construct the weight coefficient of the convolutional neural networks, and by target line
The corresponding echo-signal time-frequency figure of people identifies, can obtain human body fine motion time-frequency characteristics signal in real time, and identification human body is current
Athletic posture classification and corresponding probability.
Further:The step 6 is implemented as:
Pedestrian's gesture recognition of each convolutional neural networks is read as a result, and by the corresponding row of maximum probability value
People's posture classification is as fusion recognition result.
The beneficial effect of above-mentioned further scheme is:By by pedestrian's gesture recognition knot of each convolutional neural networks
Fruit is merged, and can guarantee higher gesture recognition accuracy rate, greatly improves the stability of the recognition result of system and accurate
Property.
According to another aspect of the invention, a kind of pedestrian's gesture recognition system based on radar and multiple networks fusion is provided
System, including preprocessing module, pre-process for the echo-signal to radar signal, obtain output signal;Suppression module is used
In the static target in the output signal is carried out inhibition processing;Search module, for searching for by inhibiting treated institute
State distance unit locating for the pedestrian in output signal;Analysis module, for the corresponding echo of distance unit where the pedestrian
Signal carries out time frequency analysis, obtains echo-signal time-frequency figure;Identification module, for utilizing multiple convolutional neural networks respectively to institute
It states echo-signal time-frequency figure to be identified, obtains the recognition result of each convolutional neural networks;Fusion Module, being used for will be every
The recognition result of a convolutional neural networks is merged, and fused gesture recognition result is obtained.
Pedestrian's gesture recognition system based on radar and multiple networks fusion of the invention, by the analysis to radar return,
Pedestrian's gesture recognition is provided in real time as a result, can guarantee higher gesture recognition accuracy rate by Fusion of Multiple Neural Networks.Together
When, this method is realized based on radar, and compared to optical camera, it is dry which is not illuminated by the light the factors such as condition, weather, smog
It disturbs, it being capable of round-the-clock, all weather operations.The method of the present invention engineering realizability is strong, can effectively carry out to pedestrian's posture high
Accuracy rate identification, while not vulnerable to various such environmental effects, system robustness is good.
Based on the above technical solution, the present invention can also be improved as follows:
Further:The identification module includes:
Initialization submodule, for initializing the parameter and pedestrian's posture classification information of the convolutional neural networks;
Training submodule, for obtaining time-frequency figure and the sample pedestrian of sample pedestrian posture by the experiment of sample pedestrian posture
The label of current pose, and the convolutional neural networks are trained with pedestrian's posture classification information, using batch processing
Gradient descent method adjusts the convolutional neural networks parameter, so that the appearance of the posture classification results of the convolution mind network and pedestrian
State matches;
Submodule is identified, for believing using the echo of the multiple convolutional neural networks after training to target pedestrian
Number time-frequency figure carries out identification classification, obtains pedestrian's gesture recognition of each convolutional neural networks output as a result, wherein described
Pedestrian's gesture recognition result of convolutional neural networks output includes pedestrian's posture classification and its corresponding probability.
The beneficial effect of above-mentioned further scheme is:By being trained to the multiple convolutional neural networks, realization pair
The study and training of pedestrian's posture sample database, obtain and construct the weight coefficient of the convolutional neural networks, and by target line
The corresponding echo-signal time-frequency figure of people identifies, can obtain human body fine motion time-frequency characteristics signal in real time, and identification human body is current
Pedestrian's posture classification and its corresponding probability.
Further:The Fusion Module is specifically used for:
Pedestrian's gesture recognition of each convolutional neural networks is read as a result, and by the corresponding row of maximum probability value
People's posture classification is as fusion recognition result.
The beneficial effect of above-mentioned further scheme is:By by pedestrian's gesture recognition knot of each convolutional neural networks
Fruit is merged, and can guarantee higher gesture recognition accuracy rate, greatly improves the stability of the recognition result of system and accurate
Property.
Detailed description of the invention
Fig. 1 is pedestrian's gesture recognition method flow diagram of the invention based on radar and multiple networks fusion;
Fig. 2 a is that sample pedestrian of the invention is just marking time the corresponding time-frequency figure of posture;
Fig. 2 b is the corresponding time-frequency figure of sample pedestrian's normal walking posture of the invention;
Fig. 2 c is that sample pedestrian of the invention falls down the corresponding time-frequency figure of posture;
Fig. 2 d is that sample pedestrian of the invention squats down the corresponding time-frequency figure of posture;
Fig. 3 a is the corresponding time-frequency figure of normal walking posture that first convolutional neural networks of the invention identify;
Fig. 3 b is the corresponding time-frequency figure of posture of squatting down that first convolutional neural networks of the invention identify;
Fig. 3 c is the corresponding time-frequency figure of posture of just marking time that first convolutional neural networks of the invention identify;
Fig. 3 d falls down the corresponding time-frequency figure of posture for what first convolutional neural networks of the invention identified;
Fig. 4 is the corresponding time-frequency figure of target pedestrian's posture that second convolutional neural networks identifies;
Fig. 5 is pedestrian's gesture recognition system structural schematic diagram of the invention based on radar and multiple networks fusion.
Specific embodiment
The principle and features of the present invention will be described below with reference to the accompanying drawings, and the given examples are served only to explain the present invention, and
It is non-to be used to limit the scope of the invention.
As shown in Figure 1, a kind of pedestrian's gesture recognition method based on radar and multiple networks fusion, includes the following steps:
Step 1:The echo-signal of radar signal is pre-processed, output signal is obtained;
Step 2:Static target in the output signal is subjected to inhibition processing;
Step 3:Search is by inhibiting distance unit locating for the pedestrian in treated the output signal;
Step 4:Time frequency analysis is carried out to the corresponding echo-signal of distance unit where the pedestrian, when obtaining echo-signal
Frequency is schemed;
Step 5:The echo-signal time-frequency figure is identified respectively using multiple convolutional neural networks, obtains each institute
State the recognition result of convolutional neural networks;
Step 6:The recognition result of each convolutional neural networks is merged, fused gesture recognition knot is obtained
Fruit.
Pedestrian's gesture recognition method based on radar and multiple networks fusion of the invention, by the analysis to radar return,
Pedestrian's gesture recognition is provided in real time as a result, can guarantee higher gesture recognition accuracy rate by Fusion of Multiple Neural Networks.Together
When, this method is realized based on radar, and compared to optical camera, it is dry which is not illuminated by the light the factors such as condition, weather, smog
It disturbs, it being capable of round-the-clock, all weather operations.The method of the present invention engineering realizability is strong, can effectively carry out to pedestrian's posture high
Accuracy rate identification, while not vulnerable to various such environmental effects, system robustness is good.
In the above-described embodiments, in the step 1, it is described to echo-signal carry out pretreatment specifically include:
Step 11:Deramp processing is carried out to echo-signal to described, it is specific as follows:
Assuming that radar signal st(τ) is expressed as follows:
st(τ)=exp { j π (2fcτ+γτ2)}
Wherein, fcFor tranmitting frequency, τ is the fast time, and t is the slow time, and γ is modulation frequency;
The corresponding echo-signal s of the radar signalr(t, τ) is represented by:
sr(t, τ)=Armexp{jπ(2fc(τ-td(t))+γ(τ-td(t))2)}τ∈(0,Tp]
Wherein, ArmFor an amplitude constant, TpFor a frequency modulation(PFM) period, tdIt (t) is time delay, c is the light velocity, a0、a1Point
Not Wei target kinematic parameter;
Deramp processing then is carried out to the echo-signal, calculation formula is:
s0(t, τ) indicates the echo-signal after deramp processing, sr(t, τ) indicates the echo-signal of the radar signal,The fast time phase of echo-signal after indicating deramp processing,Indicate JieDuHuaYu II Decoction reference signal sref(τ's) is total to
Conjugate signal;
Step 12:By the echo-signal s after frequency modulation removal0(t, τ) carries out Fourier transformation;
Step 13:To the echo-signal s after Fourier transformationrm(t, f) carries out discrete sampling, obtains the output letter
Number, it is expressed as srm(m,n).T=m Δ T, f=n Δ f is enabled, wherein Δ T, Δ f is the sampling interval, and m is slow time index, and m=
0,1,2 ... M, M are slow time acquisition pulse number, and n is fast time index, and n=0,1,2 ... N, N are fast time sampling points.
By carrying out deramp processing to the echo-signal, conversion of the radiofrequency signal to baseband signal is realized, is reduced
Signal sampling rate demand, so that acquisition hardware system is easier to realize, by believing the echo after deramp processing
Number carry out Fourier transformation, realize process of pulse-compression, accumulate the target echo energy of single pulse, and then available
The position of object and backward energy information in radar scene.
In the above-described embodiments, in the step 2, the output signal is offseted along slow time dimension t using three pulses
Method carries out static target inhibition, and the output signal after inhibiting is expressed as sbs(m′,n);
Wherein, wi(i=0,1,2) is the weight of three rank pulse cancellers, srm(m ', n) indicates defeated after frequency modulation removal
Signal out, the slow time index of output signal of the m ' expression after inhibiting, and m '=0,1,2 ... M-2.
It, can be to the standing object for being not used in identification in scene by carrying out static target inhibition processing to the output signal
The feature of body (such as wall) is inhibited, and signal-to-noise ratio is improved, so that the multi-pulse accumulation of the corresponding backward energy of pedestrian, into
One step improves signal-to-noise ratio, and the feature of pedestrian to be identified highlights, and is convenient for subsequent carry out gesture recognition, improve accuracy of identification and
Recognition efficiency.
In the above-described embodiments, the step 3 is implemented as:
Step 31:By the output signal s after inhibitingbs(m ', n) sums along slow time dimension, obtains backward energy sequence
Arrange se(n), calculation formula is:
Wherein, M is the overall pulse number of echo,
Step 32:Choose the backward energy sequence se(n) the maximum distance unit n of energy in0, and by n-th0A distance
Unit is labeled as distance unit locating for pedestrian.
The beneficial effect of above-mentioned further scheme is:By distance unit locating for search pedestrian, can will go in order to subsequent
The corresponding echo-signal of people is extracted from entire output signal.
In the above-described embodiments, the step 4 is implemented as:
Choose the distance unit n where pedestrian0The corresponding output signal s after inhibitingbs(m′,n0), it uses
The mode of sliding window, to the output signal s after inhibitingbs(m′,n0) Short Time Fourier Transform processing is carried out, it obtains described
Echo-signal time-frequency figure.
In practice, to the output signal s after inhibitingbs(m′,n0) Short Time Fourier Transform processing is carried out, it obtains
Time-frequency output signal stf(m ", l), then further according to time-frequency output signal stf(m ", l) draws the echo-signal time-frequency figure.Institute
State time-frequency output signal stfThe expression formula of (m ", l) is:
Wherein, NDLong for sliding window window, m " is the slow time pulse index after Short Time Fourier Transform, and l is Fourier in short-term
Transformed frequency indices.
By carrying out Short Time Fourier Transform processing to the output signal after inhibiting, pedestrian's fine motion may be implemented
The extraction of characteristic signal identifies pedestrian's posture according to the fine motion feature of pedestrian so as to subsequent.
In the above-described embodiments, the step 5 specifically includes:
Step 51:Initialize the parameter and pedestrian's posture classification information of the multiple convolutional neural networks;
Step 52:The time-frequency figure and sample pedestrian current pose for obtaining sample pedestrian posture are tested by sample pedestrian's posture
Label, and the multiple convolutional neural networks are trained using pedestrian's posture classification information, using batch processing ladder
It spends descent method and adjusts the multiple convolutional neural networks parameter, so that the posture classification results and row of each convolution mind network
The posture of people matches, and the parameter of each convolutional neural networks is saved;
As shown in Fig. 2 a, 2b, 2c and 2d, when four kinds of sample pedestrian's postures after respectively indicating progress network training are corresponding
Frequency is schemed, and wherein Fig. 2 a indicates that the corresponding time-frequency figure of posture of just marking time, Fig. 2 b indicate the corresponding time-frequency figure of normal walking posture, Fig. 2 c
The corresponding time-frequency figure of posture is fallen down in expression, and Fig. 2 d indicates the corresponding time-frequency figure of posture of squatting down.
Step 53:Using the multiple convolutional neural networks after training to the echo-signal time-frequency figure of target pedestrian
Identification classification is carried out, pedestrian's gesture recognition result of each convolutional neural networks output is obtained;
Wherein, pedestrian's gesture recognition result of convolutional neural networks output includes pedestrian's posture classification and its corresponding
Probability.
By being trained to the multiple convolutional neural networks, study and training of the realization to pedestrian's posture sample database,
The weight coefficient of the convolutional neural networks is obtained and constructs, and by knowing to the corresponding echo-signal time-frequency figure of target pedestrian
Not, human body fine motion time-frequency characteristics signal can be obtained in real time, identify the current athletic posture classification of human body and corresponding probability.
By taking two convolutional neural networks as an example, in the step 51, two kinds of convolutional neural networks are set first
Identification output and input parameter, first convolutional neural networks is using the echo-signal time-frequency figure as the input of network, network
Output is the pedestrian's posture class categories and the corresponding probability of every kind of classification of judgement;Second convolutional neural networks is by the echo
Signal time-frequency figure is two parts as network inputs, network output;A part is detection block information, width including detection block,
Height and center, are labelled with position of the posture information on the echo-signal time-frequency figure;Another part is point in detection block
Category information, including pedestrian's posture class categories and the corresponding probability of every kind of classification.
In the present embodiment, the parameter and pedestrian's posture classification information of the convolutional neural networks are specially:The volume is set
The number of plies and classification number of product neural network, setting convolutional neural networks are made of convolutional layer, full articulamentum and loss layer, the volume
The specific number of plies of lamination, full articulamentum and loss layer can be arranged according to the actual situation, for example, first convolutional neural networks depth
Degree is set as 7 layers, is mainly made of convolutional layer and full articulamentum, and whole network depth is shallower, and network parameter is few, when network training
Between short, network processes fast speed, but the feature extraction of video figure is not enough, therefore classification accuracy is not so good as second
Convolutional neural networks;Second convolutional neural networks depth is 22 layers, is mainly made of convolutional layer and full articulamentum, whole network
Depth is deep enough, can sufficiently extract the characteristic information of each scale of picture, and network parameter is more, therefore network training speed is slow,
First convolutional neural networks of network processes speed ratio are slow, but recognition accuracy is higher than first convolutional neural networks.Certainly
Can also adjust in due course, pedestrian's posture be classified as attonity, squat down, stand up, falling down, just marking time, that side is marked time, is normally walked etc. is a variety of
Classification.
As shown in Fig. 3 a, 3b, 3c and 3d, target pedestrian's posture that first convolutional neural networks identifies is respectively indicated
Classification, wherein Fig. 3 a indicates the corresponding time-frequency figure of pedestrian's posture that first convolutional neural networks identifies, corresponding normal row
Posture is walked, Fig. 3 b indicates the corresponding time-frequency figure of pedestrian's posture that first convolutional neural networks identifies, posture of squatting down is corresponded to,
Fig. 3 c indicates the corresponding time-frequency figure of pedestrian's posture that first convolutional neural networks identifies, corresponding posture of just marking time, Fig. 3 d
Indicate the corresponding time-frequency figure of pedestrian's posture that first convolutional neural networks identifies, correspondence falls down posture.As shown in figure 4,
Indicate the corresponding time-frequency figure of target pedestrian's posture that second convolutional neural networks identifies, three boxes in figure indicate the
Three recognition results that two convolutional neural networks identify, including pedestrian's posture and corresponding probability:Just mark time, 1.00;Just
Mark time 1.00;It squats down, 0.66.
In addition, here, the sample pedestrian posture experiment is using step 1 to the method for step 4, and with sample, pedestrian is
Target is tested, and is obtained the time-frequency figure of sample pedestrian posture and the label of sample pedestrian current pose respectively, is then being utilized
The trained convolutional neural networks identify the echo-signal time-frequency figure of target pedestrian, obtain final dynamic appearance
State classification and corresponding probability, i.e. pedestrian's posture be attonity, squats down, stand up, falling down, just marking time, that side is marked time, is normally walked etc. is more
Kind classification, and obtain the corresponding probability of each classification.
In the above-described embodiments, the step 6 is implemented as:
Pedestrian's gesture recognition of each convolutional neural networks is read as a result, and by the corresponding row of maximum probability value
People's posture classification is as fusion recognition result.
By merging pedestrian's gesture recognition result of each convolutional neural networks, it can guarantee higher appearance
State recognition accuracy greatly improves the stability and accuracy of the recognition result of system.
In practice, with the artificial example of some target line, pass through the first convolutional neural networks and the second convolutional neural networks respectively
It is identified, obtained pedestrian's posture classification and the corresponding probability of every kind of posture is as shown in the table:
In the present invention, using multiple networks fusion, complementation can be carried out by a variety of (being two kinds in the present embodiment) networks, for
Simply, the apparent pedestrian's posture of feature can quickly be identified by the result of first convolutional neural networks;For complicated,
It is difficult to the pedestrian's posture differentiated, in combination with the output of first convolutional neural networks and second convolutional neural networks, is provided more
Add believable recognition result.
As shown in figure 5, a kind of pedestrian's gesture recognition system based on radar and multiple networks fusion, including preprocessing module,
It is pre-processed for the echo-signal to radar signal, obtains output signal;Suppression module, being used for will be in the output signal
Static target carry out inhibition processing;Search module, for searching for by inhibiting the pedestrian in treated the output signal
Locating distance unit;Analysis module is obtained for carrying out time frequency analysis to the corresponding echo-signal of distance unit where the pedestrian
To echo-signal time-frequency figure;Identification module, for using multiple convolutional neural networks respectively to the echo-signal time-frequency figure into
Row identification, obtains the recognition result of each convolutional neural networks;Fusion Module is used for each convolutional neural networks
Recognition result merged, obtain fused gesture recognition result.
Pedestrian's gesture recognition system based on radar and multiple networks fusion of the invention, by the analysis to radar return,
Pedestrian's gesture recognition is provided in real time as a result, can guarantee higher gesture recognition accuracy rate by Fusion of Multiple Neural Networks.Together
When, this method is realized based on radar, and compared to optical camera, it is dry which is not illuminated by the light the factors such as condition, weather, smog
It disturbs, it being capable of round-the-clock, all weather operations.The method of the present invention engineering realizability is strong, can effectively carry out to pedestrian's posture high
Accuracy rate identification, while not vulnerable to various such environmental effects, system robustness is good.
In the above-described embodiments, the preprocessing module includes:Frequency modulation removal submodule is used for described to echo-signal sr
(t, τ) is pre-processed;Transformation submodule, for by the echo-signal s after frequency modulation removal0(t, τ) carries out Fourier
Transformation;Submodule is sampled, for the echo-signal s after Fourier transformationrm(t, f) carries out discrete sampling, obtains described
Output signal.
By carrying out deramp processing to the echo-signal, conversion of the radiofrequency signal to baseband signal is realized, is reduced
Signal sampling rate demand, so that acquisition hardware system is easier to realize, by believing the echo after deramp processing
Number carry out Fourier transformation, realize process of pulse-compression, accumulate the target echo energy of single pulse, and then available
The position of object and backward energy information in radar scene.
In the above-described embodiments, described search module includes computational submodule, for by through inhibition after output signal
sbs(m ', n) sums along slow time dimension, obtains backward energy sequence se(n);Submodule is marked, for choosing the echo energy
Measure sequence se(n) the maximum distance unit n of energy in0, and by n-th0A distance unit is labeled as distance unit locating for pedestrian.It is logical
Distance unit locating for search pedestrian is crossed, can be extracted the corresponding echo-signal of pedestrian from entire output signal in order to subsequent
Come.
In the above-described embodiments, the identification module includes:
Initialization submodule, for initializing the parameter and pedestrian's posture classification information of the convolutional neural networks;
Training submodule, for obtaining time-frequency figure and the sample pedestrian of sample pedestrian posture by the experiment of sample pedestrian posture
The label of current pose, and the convolutional neural networks are trained with pedestrian's posture classification information, using batch processing
Gradient descent method adjusts the convolutional neural networks parameter, so that the appearance of the posture classification results of the convolution mind network and pedestrian
State matches;
Submodule is identified, for believing using the echo of the multiple convolutional neural networks after training to target pedestrian
Number time-frequency figure carries out identification classification, obtains pedestrian's gesture recognition of each convolutional neural networks output as a result, wherein described
Pedestrian's gesture recognition result of convolutional neural networks output includes pedestrian's posture classification and its corresponding probability.
By being trained to the multiple convolutional neural networks, study and training of the realization to pedestrian's posture sample database,
The weight coefficient of the convolutional neural networks is obtained and constructs, and by knowing to the corresponding echo-signal time-frequency figure of target pedestrian
Not, human body fine motion time-frequency characteristics signal can be obtained in real time, identify the current pedestrian's posture classification of human body and its corresponding probability.
In the present embodiment, the parameter and pedestrian's posture classification information of the convolutional neural networks are specially:The volume is set
The number of plies and classification number of product neural network, specifically, setting convolutional neural networks are by convolutional layer, full articulamentum and loss layer group
It is classified as attonity at, pedestrian's posture, squats down, stand up, falling down, just marking time, the plurality of classes such as side is marked time, is normally walked.
In addition, here, the sample pedestrian posture experiment is using step 1 to the method for step 4, and with sample, pedestrian is
Target is tested, and is obtained the time-frequency figure of sample pedestrian posture and the label of sample pedestrian current pose respectively, is then being utilized
The trained convolutional neural networks identify the echo-signal time-frequency figure of target pedestrian, obtain final dynamic appearance
State classification and corresponding probability, i.e. pedestrian's posture be attonity, squats down, stand up, falling down, just marking time, that side is marked time, is normally walked etc. is more
Kind classification, and obtain the corresponding probability of each classification.
Preferably, in above-described embodiment, the Fusion Module is specifically used for:
Pedestrian's gesture recognition of each convolutional neural networks is read as a result, and by the corresponding row of maximum probability value
People's posture classification is as fusion recognition result.
By merging pedestrian's gesture recognition result of each convolutional neural networks, it can guarantee higher appearance
State recognition accuracy greatly improves the stability and accuracy of the recognition result of system.
The embodiments of the present invention also provide a kind of pedestrian's gesture recognition device based on radar and multiple networks fusion, packet
It includes:Memory and processor;
The memory, for being stored with computer program;
The processor, for executing the pedestrian when reading the computer program of the memory storage
Gesture recognition method.
The embodiments of the present invention also provide a kind of computer storage mediums, are stored thereon with computer program, the meter
When calculation machine program is executed by processor, pedestrian's gesture recognition method is realized.
It is apparent to those skilled in the art that for convenience of description and succinctly, foregoing description is
The specific work process of system and module, can refer to corresponding processes in the foregoing method embodiment, details are not described herein.
In several embodiments provided herein, it should be understood that disclosed system and method can pass through it
Its mode is realized.For example, the apparatus embodiments described above are merely exemplary, for example, the division of module, only
A kind of logical function partition, there may be another division manner in actual implementation, for example, multiple module or components can combine or
Person is desirably integrated into another system, or some features can be ignored or not executed.
Module may or may not be physically separated as illustrated by the separation member, shown as a unit
Component may or may not be physical unit, it can and it is in one place, or may be distributed over multiple networks
On unit.It can select some or all of unit therein according to the actual needs to realize the mesh of the embodiment of the present invention
's.
It, can also be in addition, each functional module in each embodiment of the present invention can integrate in one processing unit
It is that modules physically exist alone, is also possible to two or more units and is integrated in one unit.It is above-mentioned integrated
Module both can take the form of hardware realization, can also realize in the form of software functional units.
It, can if integrated module is realized in the form of SFU software functional unit and when sold or used as an independent product
To be stored in a computer readable storage medium.Based on this understanding, technical solution of the present invention substantially or
Say that all or part of the part that contributes to existing technology or the technical solution can embody in the form of software products
Out, which is stored in a storage medium, including some instructions are used so that a computer equipment
(can be personal computer, server or the network equipment etc.) executes all or part of each embodiment method of the present invention
Step.And storage medium above-mentioned includes:It is USB flash disk, mobile hard disk, read-only memory (ROM, Read-Only Memory), random
Access various Jie that can store program code such as memory (RAM, Random Access Memory), magnetic or disk
Matter.
The foregoing is merely presently preferred embodiments of the present invention, is not intended to limit the invention, it is all in spirit of the invention and
Within principle, any modification, equivalent replacement, improvement and so on be should all be included in the protection scope of the present invention.
Claims (10)
1. a kind of pedestrian's gesture recognition method based on radar and multiple networks fusion, which is characterized in that include the following steps:
Step 1:The echo-signal of radar signal is pre-processed, output signal is obtained;
Step 2:Static target in the output signal is subjected to inhibition processing;
Step 3:Search is by inhibiting distance unit locating for the pedestrian in treated the output signal;
Step 4:Time frequency analysis is carried out to the corresponding echo-signal of distance unit where the pedestrian, obtains echo-signal time-frequency
Figure;
Step 5:The echo-signal time-frequency figure is identified respectively using multiple convolutional neural networks, obtains each volume
The recognition result of product neural network;
Step 6:The recognition result of each convolutional neural networks is merged, fused gesture recognition result is obtained.
2. pedestrian's gesture recognition method according to claim 1 based on radar and multiple networks fusion, which is characterized in that institute
State in step 1, it is described to echo-signal carry out pretreatment specifically include:
Step 11:Deramp processing is carried out to echo-signal to described, it is specific as follows:
Assuming that radar signal st(τ) is expressed as follows:
st(τ)=exp { j π (2fcτ+γτ2)}
Wherein, fcFor tranmitting frequency, τ is the fast time, and t is the slow time, and γ is modulation frequency;
The corresponding echo-signal s of the radar signalr(t, τ) is represented by:
sr(t, τ)=Armexp{jπ(2fc(τ-td(t))+γ(τ-td(t))2)}τ∈(0,Tp]
Wherein, ArmFor an amplitude constant, TpFor a frequency modulation(PFM) period, tdIt (t) is time delay, c is the light velocity, a0、a1Respectively
The kinematic parameter of target;
Deramp processing then is carried out to the echo-signal, calculation formula is:
s0(t, τ) indicates the echo-signal after deramp processing, sr(t, τ) indicates the echo-signal of the radar signal,Table
The fast time phase of echo-signal after showing deramp processing,Indicate JieDuHuaYu II Decoction reference signal srefThe conjugation of (τ) is believed
Number;
Step 12:By the echo-signal s after frequency modulation removal0(t, τ) carries out Fourier transformation;
Step 13:To the echo-signal s after Fourier transformationrm(t, f) carries out discrete sampling, obtains the output signal,
It is expressed as srm(m,n).T=m Δ T, f=n Δ f being enabled, wherein Δ T, Δ f is the sampling interval, and m is slow time index, and m=0,1,
2 ... M, M are slow time acquisition pulse number, and n is fast time index, and n=0,1,2 ... N, N are fast time sampling points.
3. pedestrian's gesture recognition method according to claim 2 based on radar and multiple networks fusion, which is characterized in that institute
It states in step 2, method is offseted using three pulses along slow time dimension t to the output signal and carries out static target inhibition, and is passed through
The output signal crossed after inhibiting is expressed as sbs(m′,n);
Wherein, wi(i=0,1,2) is the weight of three rank pulse cancellers, srm(m ', n) indicates the letter of the output after frequency modulation removal
Number, the slow time index of output signal of the m ' expression after inhibiting, and m '=0,1,2 ... M-2.
4. pedestrian's gesture recognition method according to claim 3 based on radar and multiple networks fusion, which is characterized in that institute
State being implemented as step 3:
Step 31:By the output signal s after inhibitingbs(m ', n) sums along slow time dimension, obtains backward energy sequence se
(n), calculation formula is:
Wherein, M is the overall pulse number of echo,
Step 32:Choose the backward energy sequence se(n) the maximum distance unit n of energy in0, and by n-th0A distance unit
Labeled as distance unit locating for pedestrian.
5. pedestrian's gesture recognition method according to claim 1 based on radar and multiple networks fusion, which is characterized in that institute
State being implemented as step 4:
Choose the distance unit n where pedestrian0The corresponding output signal s after inhibitingbs(m′,n0), using sliding window
Mode, to the output signal s after inhibitingbs(m′,n0) Short Time Fourier Transform processing is carried out, obtain the echo letter
Number time-frequency figure.
6. pedestrian's gesture recognition method according to any one of claims 1 to 5 based on radar and multiple networks fusion, special
Sign is that the step 5 specifically includes:
Step 51:Initialize the parameter and pedestrian's posture classification information of the multiple convolutional neural networks;
Step 52:The mark of the time-frequency figure and sample pedestrian current pose that obtain sample pedestrian posture is tested by sample pedestrian's posture
Label, and the multiple convolutional neural networks are trained using pedestrian's posture classification information, using under batch processing gradient
Drop method adjusts the multiple convolutional neural networks parameter, so that the posture classification results of each convolution mind network and pedestrian
Posture matches, and the parameter of each convolutional neural networks is saved;
Step 53:It is carried out using the echo-signal time-frequency figure of the multiple convolutional neural networks after training to target pedestrian
Identification classification obtains pedestrian's gesture recognition result of each convolutional neural networks output;
Wherein, pedestrian's gesture recognition result of convolutional neural networks output includes pedestrian's posture classification and its corresponding general
Rate.
7. pedestrian's gesture recognition method according to claim 6 based on radar and multiple networks fusion, which is characterized in that institute
Step 6 is stated to be implemented as:
Pedestrian's gesture recognition of each convolutional neural networks is read as a result, and by the corresponding pedestrian's appearance of maximum probability value
State classification is as fusion recognition result.
8. a kind of pedestrian's gesture recognition system based on radar and multiple networks fusion, which is characterized in that including:
Preprocessing module pre-processes for the echo-signal to radar signal, obtains output signal;
Suppression module, for the static target in the output signal to be carried out inhibition processing;
Search module, for searching for by inhibiting distance unit locating for the pedestrian in treated the output signal;
Analysis module obtains echo letter for carrying out time frequency analysis to the corresponding echo-signal of distance unit where the pedestrian
Number time-frequency figure;
Identification module is obtained every for being identified respectively to the echo-signal time-frequency figure using multiple convolutional neural networks
The recognition result of a convolutional neural networks;
Fusion Module obtains fused posture and knows for merging the recognition result of each convolutional neural networks
Other result.
9. pedestrian's gesture recognition method according to claim 8 based on radar and multiple networks fusion, which is characterized in that institute
Stating identification module includes:
Initialization submodule, for initializing the parameter and pedestrian's posture classification information of the convolutional neural networks;
Training submodule, the time-frequency figure and sample pedestrian for obtaining sample pedestrian posture by the experiment of sample pedestrian posture are current
The label of posture, and the convolutional neural networks are trained with pedestrian's posture classification information, using batch processing gradient
Descent method adjusts the convolutional neural networks parameter, so that the posture phase of the posture classification results of the convolution mind network and pedestrian
It coincide;
Submodule is identified, when for using the multiple convolutional neural networks after training to the echo-signal of target pedestrian
Frequency figure carries out identification classification, obtains pedestrian's gesture recognition of each convolutional neural networks output as a result, the wherein convolution
Pedestrian's gesture recognition result of neural network output includes pedestrian's posture classification and its corresponding probability.
10. pedestrian's gesture recognition method according to claim 9 or 10 based on radar and multiple networks fusion, feature exist
In the Fusion Module is specifically used for:
Pedestrian's gesture recognition of each convolutional neural networks is read as a result, and by the corresponding pedestrian's appearance of maximum probability value
State classification is as fusion recognition result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810247528.7A CN108920993B (en) | 2018-03-23 | 2018-03-23 | Pedestrian attitude identification method and system based on radar and multi-network fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810247528.7A CN108920993B (en) | 2018-03-23 | 2018-03-23 | Pedestrian attitude identification method and system based on radar and multi-network fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108920993A true CN108920993A (en) | 2018-11-30 |
CN108920993B CN108920993B (en) | 2022-08-16 |
Family
ID=64403082
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810247528.7A Active CN108920993B (en) | 2018-03-23 | 2018-03-23 | Pedestrian attitude identification method and system based on radar and multi-network fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108920993B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110146855A (en) * | 2019-06-11 | 2019-08-20 | 北京无线电测量研究所 | Radar Intermittent AF panel thresholding calculation method and device |
CN110363219A (en) * | 2019-06-10 | 2019-10-22 | 南京理工大学 | Midcourse target fine motion form recognition methods based on convolutional neural networks |
CN110414426A (en) * | 2019-07-26 | 2019-11-05 | 西安电子科技大学 | A kind of pedestrian's Approach for Gait Classification based on PC-IRNN |
CN110638460A (en) * | 2019-09-16 | 2020-01-03 | 深圳和而泰家居在线网络科技有限公司 | Method, device and equipment for detecting state of object relative to bed |
CN111007496A (en) * | 2019-11-28 | 2020-04-14 | 成都微址通信技术有限公司 | Through-wall perspective method based on neural network associated radar |
CN111368930A (en) * | 2020-03-09 | 2020-07-03 | 成都理工大学 | Radar human body posture identification method and system based on multi-class spectrogram fusion and hierarchical learning |
CN111723824A (en) * | 2019-03-18 | 2020-09-29 | 北京木牛领航科技有限公司 | Biological characteristic identification method based on micro-motion detection technology and neural network algorithm |
CN111796272A (en) * | 2020-06-08 | 2020-10-20 | 桂林电子科技大学 | Real-time gesture recognition method and computer equipment for through-wall radar human body image sequence |
CN111965620A (en) * | 2020-08-31 | 2020-11-20 | 中国科学院空天信息创新研究院 | Gait feature extraction and identification method based on time-frequency analysis and deep neural network |
CN112183586A (en) * | 2020-09-10 | 2021-01-05 | 浙江工业大学 | Human body posture radio frequency identification method for on-line multi-task learning |
WO2021036286A1 (en) * | 2019-08-30 | 2021-03-04 | 华为技术有限公司 | Target behavior recognition method, apparatus and radar system |
CN113705482A (en) * | 2021-08-31 | 2021-11-26 | 江苏唯宝体育科技发展有限公司 | Body health monitoring and management system and method based on artificial intelligence |
CN113985393A (en) * | 2021-10-25 | 2022-01-28 | 南京慧尔视智能科技有限公司 | Target detection method, device and system |
CN114863556A (en) * | 2022-04-13 | 2022-08-05 | 上海大学 | Multi-neural-network fusion continuous action recognition method based on skeleton posture |
CN114895363A (en) * | 2022-05-07 | 2022-08-12 | 上海恒岳智能交通科技有限公司 | Method for recognizing state potential of invaded object by visual imaging monitoring on two sides of roadbed |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102323575A (en) * | 2011-07-16 | 2012-01-18 | 西安电子科技大学 | Range migration correction method for pulse Doppler (PD) radar in feeble signal detection process |
JP2012118683A (en) * | 2010-11-30 | 2012-06-21 | Daihatsu Motor Co Ltd | Pedestrian recognition device |
WO2016174659A1 (en) * | 2015-04-27 | 2016-11-03 | Snapaid Ltd. | Estimating and using relative head pose and camera field-of-view |
CN106537180A (en) * | 2014-07-25 | 2017-03-22 | 罗伯特·博世有限公司 | Method for mitigating radar sensor limitations with video camera input for active braking for pedestrians |
CN107169435A (en) * | 2017-05-10 | 2017-09-15 | 天津大学 | A kind of convolutional neural networks human action sorting technique based on radar simulation image |
CN107290741A (en) * | 2017-06-02 | 2017-10-24 | 南京理工大学 | Combine the indoor human body gesture recognition method apart from time-frequency conversion based on weighting |
CN107808111A (en) * | 2016-09-08 | 2018-03-16 | 北京旷视科技有限公司 | For pedestrian detection and the method and apparatus of Attitude estimation |
-
2018
- 2018-03-23 CN CN201810247528.7A patent/CN108920993B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012118683A (en) * | 2010-11-30 | 2012-06-21 | Daihatsu Motor Co Ltd | Pedestrian recognition device |
CN102323575A (en) * | 2011-07-16 | 2012-01-18 | 西安电子科技大学 | Range migration correction method for pulse Doppler (PD) radar in feeble signal detection process |
CN106537180A (en) * | 2014-07-25 | 2017-03-22 | 罗伯特·博世有限公司 | Method for mitigating radar sensor limitations with video camera input for active braking for pedestrians |
WO2016174659A1 (en) * | 2015-04-27 | 2016-11-03 | Snapaid Ltd. | Estimating and using relative head pose and camera field-of-view |
CN107808111A (en) * | 2016-09-08 | 2018-03-16 | 北京旷视科技有限公司 | For pedestrian detection and the method and apparatus of Attitude estimation |
CN107169435A (en) * | 2017-05-10 | 2017-09-15 | 天津大学 | A kind of convolutional neural networks human action sorting technique based on radar simulation image |
CN107290741A (en) * | 2017-06-02 | 2017-10-24 | 南京理工大学 | Combine the indoor human body gesture recognition method apart from time-frequency conversion based on weighting |
Non-Patent Citations (2)
Title |
---|
DOMENIC BELGIOVANE等人: "77 GHz Radar Scattering Properties of Pedestrians", 《2014 IEEE RADAR CONFERENCE》 * |
WOJKE N等: "Simple online and realtime tracking with a deep association", 《PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING》 * |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111723824A (en) * | 2019-03-18 | 2020-09-29 | 北京木牛领航科技有限公司 | Biological characteristic identification method based on micro-motion detection technology and neural network algorithm |
CN110363219A (en) * | 2019-06-10 | 2019-10-22 | 南京理工大学 | Midcourse target fine motion form recognition methods based on convolutional neural networks |
CN110146855A (en) * | 2019-06-11 | 2019-08-20 | 北京无线电测量研究所 | Radar Intermittent AF panel thresholding calculation method and device |
CN110146855B (en) * | 2019-06-11 | 2020-10-23 | 北京无线电测量研究所 | Radar intermittent interference suppression threshold calculation method and device |
CN110414426A (en) * | 2019-07-26 | 2019-11-05 | 西安电子科技大学 | A kind of pedestrian's Approach for Gait Classification based on PC-IRNN |
CN110414426B (en) * | 2019-07-26 | 2023-05-30 | 西安电子科技大学 | Pedestrian gait classification method based on PC-IRNN |
CN112444785A (en) * | 2019-08-30 | 2021-03-05 | 华为技术有限公司 | Target behavior identification method and device and radar system |
WO2021036286A1 (en) * | 2019-08-30 | 2021-03-04 | 华为技术有限公司 | Target behavior recognition method, apparatus and radar system |
CN112444785B (en) * | 2019-08-30 | 2024-04-12 | 华为技术有限公司 | Target behavior recognition method, device and radar system |
CN110638460B (en) * | 2019-09-16 | 2022-07-15 | 深圳数联天下智能科技有限公司 | Method, device and equipment for detecting state of object relative to bed |
CN110638460A (en) * | 2019-09-16 | 2020-01-03 | 深圳和而泰家居在线网络科技有限公司 | Method, device and equipment for detecting state of object relative to bed |
CN111007496A (en) * | 2019-11-28 | 2020-04-14 | 成都微址通信技术有限公司 | Through-wall perspective method based on neural network associated radar |
CN111368930A (en) * | 2020-03-09 | 2020-07-03 | 成都理工大学 | Radar human body posture identification method and system based on multi-class spectrogram fusion and hierarchical learning |
CN111368930B (en) * | 2020-03-09 | 2022-11-04 | 成都理工大学 | Radar human body posture identification method and system based on multi-class spectrogram fusion and hierarchical learning |
CN111796272A (en) * | 2020-06-08 | 2020-10-20 | 桂林电子科技大学 | Real-time gesture recognition method and computer equipment for through-wall radar human body image sequence |
CN111965620B (en) * | 2020-08-31 | 2023-05-02 | 中国科学院空天信息创新研究院 | Gait feature extraction and identification method based on time-frequency analysis and deep neural network |
CN111965620A (en) * | 2020-08-31 | 2020-11-20 | 中国科学院空天信息创新研究院 | Gait feature extraction and identification method based on time-frequency analysis and deep neural network |
CN112183586A (en) * | 2020-09-10 | 2021-01-05 | 浙江工业大学 | Human body posture radio frequency identification method for on-line multi-task learning |
CN112183586B (en) * | 2020-09-10 | 2024-04-02 | 浙江工业大学 | Human body posture radio frequency identification method for online multitask learning |
CN113705482A (en) * | 2021-08-31 | 2021-11-26 | 江苏唯宝体育科技发展有限公司 | Body health monitoring and management system and method based on artificial intelligence |
CN113705482B (en) * | 2021-08-31 | 2024-03-22 | 江苏唯宝体育科技发展有限公司 | Body health monitoring management system and method based on artificial intelligence |
CN113985393A (en) * | 2021-10-25 | 2022-01-28 | 南京慧尔视智能科技有限公司 | Target detection method, device and system |
CN113985393B (en) * | 2021-10-25 | 2024-04-16 | 南京慧尔视智能科技有限公司 | Target detection method, device and system |
CN114863556A (en) * | 2022-04-13 | 2022-08-05 | 上海大学 | Multi-neural-network fusion continuous action recognition method based on skeleton posture |
CN114895363A (en) * | 2022-05-07 | 2022-08-12 | 上海恒岳智能交通科技有限公司 | Method for recognizing state potential of invaded object by visual imaging monitoring on two sides of roadbed |
Also Published As
Publication number | Publication date |
---|---|
CN108920993B (en) | 2022-08-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108920993A (en) | A kind of pedestrian's gesture recognition method and system based on radar and multiple networks fusion | |
CN108614993A (en) | A kind of pedestrian's gesture recognition method and system based on radar and pattern-recognition | |
Guo et al. | A CenterNet++ model for ship detection in SAR images | |
Mao et al. | What can help pedestrian detection? | |
Liong et al. | Micro-expression recognition using apex frame with phase information | |
CN111008583B (en) | Pedestrian and rider posture estimation method assisted by limb characteristics | |
Ragheb et al. | Vihasi: virtual human action silhouette data for the performance evaluation of silhouette-based action recognition methods | |
CN106778837B (en) | SAR image target recognition method based on polyteny principal component analysis and tensor analysis | |
Vishal et al. | Accurate localization by fusing images and GPS signals | |
CN110598586A (en) | Target detection method and system | |
CN111814690B (en) | Target re-identification method, device and computer readable storage medium | |
CN109444839A (en) | The acquisition methods and device of objective contour | |
CN110532886A (en) | A kind of algorithm of target detection based on twin neural network | |
CN106557740A (en) | The recognition methods of oil depot target in a kind of remote sensing images | |
CN104537356A (en) | Pedestrian re-identification method and device for carrying out gait recognition through integral scheduling | |
CN111159475B (en) | Pedestrian re-identification path generation method based on multi-camera video image | |
CN108830172A (en) | Aircraft remote sensing images detection method based on depth residual error network and SV coding | |
Ernisse et al. | Complete automatic target cuer/recognition system for tactical forward-looking infrared images | |
CN106934339B (en) | Target tracking and tracking target identification feature extraction method and device | |
RoyChowdhury et al. | Distinguishing weather phenomena from bird migration patterns in radar imagery | |
Porwal et al. | Recognition of human activities in a controlled environment using CNN | |
Kar et al. | An approach towards automatic intensity detection of tropical cyclone by weight based unique feature vector | |
CN109858308B (en) | Video retrieval device, video retrieval method, and storage medium | |
CN106156775A (en) | Human body feature extraction method based on video, human body recognition method and device | |
Brosch et al. | Automatic target recognition on high resolution sar images with deep learning domain adaptation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |