CN112200146A - Gesture recognition detection method based on FMCW - Google Patents

Gesture recognition detection method based on FMCW

Info

Publication number
CN112200146A
CN112200146A (application CN202011211819.4A)
Authority
CN
China
Prior art keywords
data
fmcw
carrying
gesture
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011211819.4A
Other languages
Chinese (zh)
Inventor
朱梦婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Bailongzhi Technology Co ltd
Original Assignee
Changzhou Bailongzhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Bailongzhi Technology Co ltd filed Critical Changzhou Bailongzhi Technology Co ltd
Priority to CN202011211819.4A priority Critical patent/CN112200146A/en
Publication of CN112200146A publication Critical patent/CN112200146A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an FMCW-based gesture recognition detection method. Various gesture behavior data of different human bodies are collected with an FMCW device, and a Fourier transform is applied to each column and each row of the data to obtain a distance matrix and a Doppler matrix; a corresponding horizontal angle estimate is obtained from the phase change. The horizontal-angle point cloud information is converted into Cartesian coordinate data, and the valid points of multiple frames captured at a set scanning rate are superposed to obtain a three-dimensional matrix. Features are extracted from the three-dimensional matrix with a 3D convolutional neural network, the converted data are clustered and extracted, and the fused feature values and feature vectors are input into a multi-layer LSTM neural network for classification training, completing detection and improving the accuracy of gesture recognition.

Description

Gesture recognition detection method based on FMCW
Technical Field
The invention relates to the technical field of FMCW radar recognition, in particular to a gesture recognition detection method based on FMCW.
Background
As application scenarios for radio-frequency signals continue to be explored, wireless-sensing gesture recognition has become a real market demand and a research hotspot at home and abroad. In a smart home, a user can turn a television on or off, adjust its volume, or operate other smart appliances through gestures; in home entertainment, gesture recognition lets a user control game characters more flexibly and promptly and complete more complex operations, improving user participation and experience.
Traditional gesture recognition methods mainly use optical and depth cameras to capture gesture behavior data and analyze information such as the contour, direction and amplitude of the gesture in the video stream. Under poor illumination, however, the information captured by an optical camera loses resolution and gesture recognition becomes inaccurate.
Disclosure of Invention
The invention aims to provide a gesture recognition detection method based on FMCW (frequency modulated continuous wave), which improves the accuracy of gesture recognition.
In order to achieve the above object, the present invention provides a method for detecting gesture recognition based on FMCW, comprising the following steps:
acquiring various gesture behavior data, and performing filtering processing on the data obtained by performing Fourier transform for multiple times to obtain a corresponding horizontal angle estimation value;
performing coordinate conversion on the horizontal angle, and simultaneously performing effective point superposition of multi-frame data to obtain a three-dimensional matrix;
extracting the characteristics of the three-dimensional matrix by using a 3D convolutional neural network, and clustering and extracting the converted data;
and inputting the fused characteristic values and characteristic vectors into a multi-layer LSTM neural network for classification training to finish detection.
Wherein acquiring the various gesture behavior data and filtering the data obtained by the multiple Fourier transforms to obtain the corresponding horizontal angle estimate includes:
the method comprises the steps of collecting various gesture behavior data of different human bodies by using FMCW equipment, carrying out Fourier transform on each column of the obtained gesture behavior data to obtain a distance matrix and a distance parameter, and simultaneously carrying out Fourier transform on each row of the gesture behavior data to obtain a Doppler matrix.
Wherein acquiring the various gesture behavior data and filtering the data obtained by the multiple Fourier transforms to obtain the corresponding horizontal angle estimate further includes:
and sequentially subtracting a phase value corresponding to the set peak value from each data in the Doppler matrix, simultaneously carrying out phase calibration on the obtained phase value, and carrying out Doppler Fourier transform on the calibrated data to obtain a corresponding Doppler velocity value.
Wherein performing coordinate conversion on the horizontal angle while superposing the valid points of multiple frames to obtain the three-dimensional matrix includes:
and converting the point cloud information of the horizontal angle into Cartesian coordinate system data, acquiring 32 effective points with Doppler velocity greater than 0.01m/s at a set scanning velocity, and performing interpolation processing on the effective point data with the number less than 32 until the 32 effective point data are supplemented.
Wherein extracting features from the three-dimensional matrix with the 3D convolutional neural network and clustering and extracting the converted data includes:
and carrying out point cloud denoising and standardization processing on the effective point data, and extracting the characteristics of the processed data by using a 3D convolutional neural network.
Wherein extracting features from the three-dimensional matrix with the 3D convolutional neural network and clustering and extracting the converted data further includes:
and carrying out point location sequencing on the data converted into the Cartesian coordinate system according to the Doppler velocity values, and carrying out weighted analysis on the screened effective points to obtain corresponding palm action comprehensive characteristics and arm action comprehensive characteristics.
The FMCW-based gesture recognition detection method of the invention collects various gesture behavior data of different human bodies with an FMCW device, applies a Fourier transform to each column and each row of the obtained data to obtain a distance matrix and a Doppler matrix, and obtains a corresponding horizontal angle estimate from the phase change. The horizontal-angle point cloud information is converted into Cartesian coordinate data, and the valid points of multiple frames captured at a set scanning rate are superposed to obtain a three-dimensional matrix. Features are extracted from the three-dimensional matrix with a 3D convolutional neural network, the converted data are clustered and extracted, and the fused feature values and feature vectors are input into a multi-layer LSTM neural network for classification training, completing detection and improving the accuracy of gesture recognition.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic step diagram of a FMCW-based gesture recognition detection method provided in the present invention.
Fig. 2 is a schematic flow chart of a detection method of FMCW-based gesture recognition provided in the present invention.
Fig. 3 is a schematic structural diagram of a 3D convolutional neural network provided by the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
In the description of the present invention, it is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like, indicate orientations or positional relationships that are merely used to facilitate the description of the invention and to simplify the description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be taken as limiting the invention. Further, in the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Referring to fig. 1 and 2, the present invention provides a method for detecting gesture recognition based on FMCW, comprising the following steps:
s101, acquiring various gesture behavior data, and performing filtering processing on the data obtained by performing Fourier transform for multiple times to obtain a corresponding horizontal angle estimation value.
Specifically, the frequency of the signal transmitted by an FMCW (Frequency Modulated Continuous Wave) radar system increases linearly with time. During raw data collection, the number of sweep frames is N_f and 512 sampling points are used per sweep. A 2-transmit 4-receive FMCW device collects gesture behavior data of different human bodies performing the corresponding gestures, covering boys and girls, fat and thin builds, and fast, slow and normal gesture speeds, and the data are labeled. Based on gestures usable in the current scene, eight gesture behaviors are designed: push left, push right, push forward, pull backward, slide upward, slide downward, pull the fist backward, and slide the fist's thumb upward, as shown in Table 1.
TABLE 1 Applications corresponding to gesture behaviors

    Gesture behavior               Intelligent control behavior
    Push left                      Return
    Push right                     Slide the interface to the right
    Push forward                   Magnify the interface
    Pull backward                  Shrink the interface
    Slide upward                   Skip this interface
    Slide downward                 Next step
    Pull the fist backward         Quit
    Slide the fist's thumb upward  Confirm
When gesture behavior recognition is applied to scenes such as internet-TV screens or game-interface control, the corresponding interface control behaviors above can be used. The gesture is performed between 1 m and 4 m from the FMCW device, and the device is mounted within 1 m of the gesture height, so that the whole gesture stays within the detection range of the FMCW device.
The transmitted signal T_x and the echo signal R_x are fed into a mixer to obtain the mixed signal S_x, which is then passed through a low-pass filter to obtain the intermediate-frequency signal S_IF. From the intermediate-frequency signal matrix the distance parameter R, the horizontal angle parameter θ and the Doppler velocity parameter D_v are obtained. A first Fourier transform is performed to obtain the distance matrix, and the effective distance parameter R is read from the peak. The calculation is as follows:

The distance R of the radar signal is obtained from the round-trip flight delay t_d of the signal:

    R = c · t_d / 2

Then, according to the principle of similar triangles,

    t_d / t_s = f_b / f_dev

from which it follows that

    R = c · t_s · f_b / (2 · f_dev)

where c is the speed of light (a constant), t_s is half the period of the frequency-modulated wave generated by the frequency generator, f_dev is the sweep bandwidth of the frequency-modulated wave, and f_b is the difference between the transmitted and reflected frequencies.
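The distance derivation above can be checked numerically with a minimal sketch; the parameter values below are illustrative assumptions, not figures from the patent.

```python
def fmcw_range(f_b, f_dev, t_s, c=3.0e8):
    """Distance from the beat frequency via R = c * t_s * f_b / (2 * f_dev).

    f_b   -- beat frequency (Hz), difference of transmitted and reflected frequencies
    f_dev -- sweep bandwidth of the FM wave (Hz)
    t_s   -- half the period of the FM wave (s)
    """
    return c * t_s * f_b / (2.0 * f_dev)

# Illustrative numbers (assumed): a 4 GHz sweep over t_s = 40 us and a
# 2.667 MHz beat frequency place the target at about 4 m.
r = fmcw_range(f_b=2.667e6, f_dev=4.0e9, t_s=40e-6)
```

Rearranging the same relation gives the beat frequency expected for a given range, which is how such a sketch would normally be validated against a known target distance.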
A Fourier transform is then performed on each row of the gesture behavior data to obtain the Doppler matrix. To eliminate the influence of background noise, the phase of the first-column data (the phase corresponding to the first peak) is subtracted from every entry of the Doppler matrix to obtain relative phases, which are then calibrated; a Doppler Fourier transform finally yields the Doppler velocity value D_v.
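The phase-calibration and Doppler-transform step might be sketched as follows. The mapping from Doppler bin to velocity depends on the wavelength and frame rate, details the description leaves to the radar configuration, so this sketch stops at locating the peak Doppler bin.

```python
import cmath
import math

def doppler_spectrum(phases, ref_phase):
    """Subtract a reference phase (the phase at the set peak) from every
    sample, rebuild the calibrated complex signal, and take a plain DFT;
    the magnitude peak marks the Doppler bin of the moving target."""
    calibrated = [cmath.exp(1j * (p - ref_phase)) for p in phases]
    n = len(calibrated)
    return [abs(sum(calibrated[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A target at constant radial speed gives a linear phase ramp across the
# frame; all of its energy lands in a single Doppler bin.
n = 16
ramp = [2 * math.pi * 3 * t / n for t in range(n)]  # 3 cycles per frame
spec = doppler_spectrum(ramp, ramp[0])
peak_bin = spec.index(max(spec))  # the ramp's frequency bin
```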
According to the horizontal arrangement of the receiving antennas, the angle of arrival (AOA) is estimated from the Fourier-transform phase change between the different channels. Because the antennas are offset in the horizontal direction, the weighted mean of the validly estimated AOAs is taken as the estimate of the horizontal angle θ. The calculation is as follows:

After two FFTs of the raw intermediate-frequency data, the phase difference between the two receiving antennas is

    Δφ = 2π · Δd / λ

In the XY plane, Δd = l · sin(θ), where l is the distance between the antennas. The angle of arrival is therefore

    θ = arcsin(λ · Δφ / (2π · l))
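A small numeric check of the angle-of-arrival relation; the wavelength and the half-wavelength antenna spacing below are assumptions typical of mmWave radars, not values stated in the patent.

```python
import math

def aoa_degrees(delta_phi, wavelength, l):
    """Angle of arrival from the inter-antenna phase difference, using
    delta_phi = 2*pi*l*sin(theta)/lambda, i.e.
    theta = asin(lambda * delta_phi / (2*pi*l))."""
    return math.degrees(math.asin(wavelength * delta_phi / (2 * math.pi * l)))

lam = 3.9e-3                              # ~77 GHz wavelength (assumed)
l = lam / 2                               # half-wavelength spacing (assumed)
theta = aoa_degrees(math.pi / 2, lam, l)  # pi*sin(theta) = pi/2 -> 30 degrees
```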
and S102, carrying out coordinate conversion on the horizontal angle, and simultaneously carrying out effective point superposition on multi-frame data to obtain a three-dimensional matrix.
Specifically, on the XY plane, the point cloud information of the polar coordinates can be converted into cartesian coordinate system data as follows:
x=Rsinθ
y=Rcosθ
Each point thus carries the three-dimensional information (x, y, D_v). This patent addresses dynamic gesture behavior recognition, where the user's torso remains static while the gesture is performed. Accordingly, the FMCW system scans at 20 fps, a captured palm point is considered valid when its Doppler speed exceeds 0.01 m/s, and 32 valid points are kept per frame; if fewer than 32 valid points are captured, the data are interpolated to fill out 32 valid points, yielding the three-dimensional matrix.
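The conversion and valid-point padding can be sketched as below; the midpoint-insertion interpolation is an assumption, since the patent does not specify its interpolation scheme.

```python
import math

def to_cartesian(points):
    """Convert an (R, theta, D_v) polar point cloud to (x, y, D_v), keeping
    only points whose Doppler speed exceeds the 0.01 m/s validity threshold."""
    return [(r * math.sin(th), r * math.cos(th), dv)
            for (r, th, dv) in points if abs(dv) > 0.01]

def pad_to(points, n=32):
    """Grow a short frame to n points by inserting midpoints between
    consecutive points -- a stand-in for the unspecified interpolation."""
    pts = list(points)
    i = 0
    while len(pts) < n and len(pts) >= 2:
        a, b = pts[i], pts[i + 1]
        pts.insert(i + 1, tuple((u + v) / 2 for u, v in zip(a, b)))
        i = (i + 2) % (len(pts) - 1)
    return pts[:n]

frame = [(1.0, 0.0, 0.5), (1.2, 0.1, 0.3), (1.1, -0.1, 0.0)]  # last point static
cart = to_cartesian(frame)   # the static point is dropped
padded = pad_to(cart)        # interpolated up to 32 valid points
```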
S103, extracting the characteristics of the three-dimensional matrix by using a 3D convolutional neural network, and clustering and extracting the converted data.
Specifically, because the spatial extent of a single gesture behavior is controllable, point cloud denoising is performed on the collected valid information points during the FMCW frequency sweep and noise data are removed. The data of the three dimensions are then normalized, and features are extracted with a 3D CNN: the hardwired layer uses a (3 × 3) format, the first convolution kernel has 16 layers of (3 × 3), and the second convolution kernel has 32 layers of (3 × 3); the specific 3D convolution model structure is shown in fig. 3.
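The tensor-shape bookkeeping of the two convolution stages (16 kernels, then 32) can be sketched as follows. The input volume and the cubic 3 × 3 × 3 kernels are illustrative assumptions; the patent writes the kernels as (3 × 3) and does not give the exact input layout.

```python
def conv3d_shape(shape, k=3):
    """Spatial size after one 'valid' 3-D convolution with a cubic kernel."""
    return tuple(s - k + 1 for s in shape)

def pool3d_shape(shape, k=2):
    """Spatial size after non-overlapping k x k x k max pooling."""
    return tuple(s // k for s in shape)

s = (16, 32, 32)        # assumed input volume: frames x rows x cols
s = conv3d_shape(s)     # 1st conv, 16 kernels -> (14, 30, 30)
s = pool3d_shape(s)     # max pooling          -> (7, 15, 15)
s = conv3d_shape(s)     # 2nd conv, 32 kernels -> (5, 13, 13)
flat = 32 * s[0] * s[1] * s[2]   # flattened feature-vector length
```

This kind of shape arithmetic is how the one-dimensional feature vector fed to the later LSTM stage would be sized.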
The converted data are clustered to obtain valid data, and palm point locations are distinguished from point locations produced by arm swing according to signal strength. This patent sorts the point locations by the Doppler velocity value D_v, screens out the 8 valid points with the highest speeds, and performs a weighted analysis of D_v according to the x and y information. From the clustering result, a comprehensive feature describing the palm motion and a comprehensive feature describing the arm motion are obtained as weighted means over the two groups:

    F_palm = (1/M_1) · Σ_{i=1}^{M_1} w_i · D_{v,i}

    F_arm = (1/M_2) · Σ_{i=M_1+1}^{8} w_i · D_{v,i}

where M_1 is the number of points among the top Doppler velocities attributed to the palm, M_2 is the number of points attributed to the arm, the weights w_i are derived from the (x, y) information, and M_1 + M_2 = 8;
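One way the palm/arm split and weighting might look in code; the split M_1 = 5, M_2 = 3 and the 1/range weights are assumptions for illustration, since the patent only says the weights come from the (x, y) information.

```python
import math

def comprehensive_features(points, m1=5, m2=3):
    """Rank points by |Doppler velocity|, keep the fastest 8, treat the top
    m1 as palm points and the next m2 as arm points (m1 + m2 = 8), and
    return a weighted mean Doppler velocity per group, weighted by 1/range."""
    assert m1 + m2 == 8
    ranked = sorted(points, key=lambda p: abs(p[2]), reverse=True)[:8]

    def weighted_mean(group):
        weights = [1.0 / max(math.hypot(x, y), 1e-6) for x, y, _ in group]
        return sum(w * dv for w, (_, _, dv) in zip(weights, group)) / sum(weights)

    return weighted_mean(ranked[:m1]), weighted_mean(ranked[m1:])

# Ten points at unit range with Doppler speeds 1.0 down to 0.1:
pts = [(1.0, 0.0, round(1.0 - 0.1 * i, 1)) for i in range(10)]
palm, arm = comprehensive_features(pts)  # means of the top 5 and next 3 speeds
```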
When modeling the 3D convolutional neural network, a data stream of about 300 milliseconds, i.e. the data after 6 frames of FFT, is used as one set of test data. Each set of test data therefore yields 2 × 6 = 12 most effective Doppler feature values.
Finally, the feature vector obtained by the 3D convolutional neural network is combined with these 12 feature values for the next stage of model training.
And S104, inputting the fused characteristic values and characteristic vectors into a multi-layer LSTM neural network for classification training to finish detection.
Specifically, the obtained feature vectors are put into a 3-layer LSTM neural network structure for classification training, and specific training parameters are as follows;
[Table of specific training parameters, including the learning rate and the maximum number of training iterations.]
First, the learning rate is initialized and the fused data are input into the LSTM neural network structure for classification training until the set maximum number of training iterations is reached; the loss function is computed as the minimum mean square error, and the training result is optimized with the Adam optimization algorithm, completing detection.
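The training loop described above (initialize a learning rate, iterate to a fixed maximum, mean-square-error loss, Adam optimizer) can be sketched with a one-parameter model standing in for the multi-layer LSTM, which is far too large to inline; everything below the loss choice is therefore illustrative.

```python
import math

def adam_fit(data, lr=0.05, iters=400):
    """Fit y = w*x by minimizing mean-square error with a scalar Adam update."""
    w, m, v = 0.0, 0.0, 0.0
    b1, b2, eps = 0.9, 0.999, 1e-8
    for t in range(1, iters + 1):
        # gradient of (1/N) * sum((w*x - y)^2) with respect to w
        g = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
        m = b1 * m + (1 - b1) * g        # first-moment (momentum) estimate
        v = b2 * v + (1 - b2) * g * g    # second-moment estimate
        m_hat = m / (1 - b1 ** t)        # bias corrections
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

data = [(x, 3.0 * x) for x in (1.0, 2.0, 3.0)]  # true slope is 3
w = adam_fit(data)                               # converges near 3.0
```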
The data visualization is verified and tested on the ROS platform to check the accuracy of the valid-point information and assist model iteration. In practical tests this patent adds a prediction class for invalid gestures, so that non-gesture behaviors such as lifting a foot, standing up and sitting down are predicted as invalid gestures. The overall flow architecture of this patent is shown in fig. 2. In a practical environment, a test group of 10 boys and 6 girls achieved an average accuracy of 93% over the 8 gesture recognitions.
The beneficial effect of this patent does:
1. efficient background noise processing
In the method, after the first distance Fourier transform of the raw data, the phase data at the peak are obtained. Subtracting the phase of the first peak from each phase datum yields relative phase data, which are phase-calibrated; a final Fourier transform then produces cleaner Doppler matrix data.
2. Newly reconstructed three-dimensional data lattice
In the method, after signal reconstruction of the gesture signal data, the point locations of three frames are superposed into one integrated three-dimensional matrix with a sliding window of one frame; the new frame data are cluster-analyzed and outliers are removed, improving model accuracy. Meanwhile, Kalman filtering is applied to the newly reconstructed three-dimensional matrix series to remove further invalid points, and data points closer together than half the distance resolution are merged into effective gesture point location data. This effectively reduces both the data and model computation load.
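The three-frame superposition with a one-frame sliding window might be sketched as:

```python
def superpose_frames(frames, window=3, stride=1):
    """Merge the point sets of `window` consecutive frames into one combined
    point cloud, advancing the window by `stride` frames each time."""
    return [[p for f in frames[start:start + window] for p in f]
            for start in range(0, len(frames) - window + 1, stride)]

frames = [[(float(i), float(i), 0.1 * i)] for i in range(5)]  # 5 one-point frames
stacked = superpose_frames(frames)  # 3 overlapping windows, 3 points each
```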
3. Effectively utilizes the Doppler velocity in the point location information
The method mainly classifies and judges dynamic gesture behaviors. After the FMCW parameters are set, the Doppler velocity is used for denoising and background elimination of the point locations; after the point data are converted to Cartesian coordinates, the Doppler velocity serves as the third-dimensional feature, from which the 3D convolutional neural network can effectively extract key features. During feature extraction, after the palm and arm point location sets are distinguished, a comprehensive Doppler-velocity feature value is constructed that effectively captures the texture information generated during the gesture behavior.
4. Without using pitch angle information
Without pitch-angle information, this patent makes full use of the Doppler velocity to extract more key information and builds an effective neural network structure: features are first extracted with the 3D convolutional neural network, and the feature vectors are then fed into the multi-layer LSTM network model. Compared with traditional approaches that feed point location information directly into a neural network model, this has clear advantages, and the measured accuracy is also very high.
5. The whole model is not complex and the operation amount is small
Before the 3D convolutional neural network is used, the point location information is denoised and background object information is removed. During the extraction of dynamic gesture behavior features, a dropout of 20% is applied; the dynamic characteristics of the gesture can be obtained effectively with only two rounds of convolutional feature extraction, and max-pooling is performed before the one-dimensional feature vector is generated, effectively reducing the computation load.
The FMCW-based gesture recognition detection method of the invention collects various gesture behavior data of different human bodies with an FMCW device, applies a Fourier transform to each column and each row of the obtained data to obtain a distance matrix and a Doppler matrix, and obtains a corresponding horizontal angle estimate from the phase change. The horizontal-angle point cloud information is converted into Cartesian coordinate data, and the valid points of multiple frames captured at a set scanning rate are superposed to obtain a three-dimensional matrix. Features are extracted from the three-dimensional matrix with a 3D convolutional neural network, the converted data are clustered and extracted, and the fused feature values and feature vectors are input into a multi-layer LSTM neural network for classification training, completing detection and improving the accuracy of gesture recognition.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. A gesture recognition detection method based on FMCW is characterized by comprising the following steps:
acquiring various gesture behavior data, and performing filtering processing on the data obtained by performing Fourier transform for multiple times to obtain a corresponding horizontal angle estimation value;
performing coordinate conversion on the horizontal angle, and simultaneously performing effective point superposition of multi-frame data to obtain a three-dimensional matrix;
extracting the characteristics of the three-dimensional matrix by using a 3D convolutional neural network, and clustering and extracting the converted data;
and inputting the fused characteristic values and characteristic vectors into a multi-layer LSTM neural network for classification training to finish detection.
2. An FMCW-based gesture recognition detection method as claimed in claim 1, wherein acquiring various gesture behavior data and filtering the data obtained by the multiple Fourier transforms to obtain the corresponding horizontal angle estimate comprises:
the method comprises the steps of collecting various gesture behavior data of different human bodies by using FMCW equipment, carrying out Fourier transform on each column of the obtained gesture behavior data to obtain a distance matrix and a distance parameter, and simultaneously carrying out Fourier transform on each row of the gesture behavior data to obtain a Doppler matrix.
3. An FMCW-based gesture recognition detection method as claimed in claim 2, wherein acquiring various gesture behavior data and filtering the data obtained by the multiple Fourier transforms to obtain the corresponding horizontal angle estimate further comprises:
and sequentially subtracting a phase value corresponding to the set peak value from each data in the Doppler matrix, simultaneously carrying out phase calibration on the obtained phase value, and carrying out Doppler Fourier transform on the calibrated data to obtain a corresponding Doppler velocity value.
4. A FMCW-based gesture recognition detection method according to claim 3, wherein coordinate-converting the horizontal angle while performing effective point superposition for multiple frames of data to obtain a three-dimensional matrix comprises:
and converting the point cloud information of the horizontal angle into Cartesian coordinate system data, acquiring 32 effective points with Doppler velocity greater than 0.01m/s at a set scanning velocity, and performing interpolation processing on the effective point data with the number less than 32 until the 32 effective point data are supplemented.
5. An FMCW-based gesture recognition detection method as in claim 4, wherein the three dimensional matrix is feature extracted using a 3D convolutional neural network and the converted data is clustered and extracted, including:
and carrying out point cloud denoising and standardization processing on the effective point data, and extracting the characteristics of the processed data by using a 3D convolutional neural network.
6. An FMCW-based gesture recognition detection method as in claim 5, wherein the three dimensional matrix is feature extracted using a 3D convolutional neural network and the converted data is clustered and extracted, further comprising:
and carrying out point location sequencing on the data converted into the Cartesian coordinate system according to the Doppler velocity values, and carrying out weighted analysis on the screened effective points to obtain corresponding palm action comprehensive characteristics and arm action comprehensive characteristics.
CN202011211819.4A 2020-11-03 2020-11-03 Gesture recognition detection method based on FMCW Withdrawn CN112200146A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011211819.4A CN112200146A (en) 2020-11-03 2020-11-03 Gesture recognition detection method based on FMCW

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011211819.4A CN112200146A (en) 2020-11-03 2020-11-03 Gesture recognition detection method based on FMCW

Publications (1)

Publication Number Publication Date
CN112200146A (en) 2021-01-08

Family

ID=74033057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011211819.4A Withdrawn CN112200146A (en) 2020-11-03 2020-11-03 Gesture recognition detection method based on FMCW

Country Status (1)

Country Link
CN (1) CN112200146A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361450A (en) * 2021-06-24 2021-09-07 上海鼎算智能科技有限公司 RFID-based activity sequence identification method, system, medium and terminal
CN113591938A (en) * 2021-07-10 2021-11-02 亿太特(陕西)科技有限公司 Multi-feature fusion traffic target identification method and system, computer equipment and application


Similar Documents

Publication Publication Date Title
CN110765974B (en) Micro gesture recognition method based on millimeter wave radar and convolutional neural network
CN109271838B (en) FMCW radar-based three-parameter feature fusion gesture recognition method
CN112200146A (en) Gesture recognition detection method based on FMCW
CN110348288A (en) A kind of gesture identification method based on 77GHz MMW RADAR SIGNAL USING
CN114581958B (en) Static human body posture estimation method based on CSI signal arrival angle estimation
CN108828548A (en) A kind of three Parameter fusion data set construction methods based on fmcw radar
CN113408328B (en) Gesture segmentation and recognition algorithm based on millimeter wave radar
CN111427031A (en) Identity and gesture recognition method based on radar signals
CN108171119B (en) SAR image change detection method based on residual error network
CN113050797A (en) Method for realizing gesture recognition through millimeter wave radar
CN111157988A (en) Gesture radar signal processing method based on RDTM and ATM fusion
CN114219853A (en) Multi-person three-dimensional attitude estimation method based on wireless signals
CN114966560B (en) Ground penetrating radar backward projection imaging method and system
CN113064483A (en) Gesture recognition method and related device
CN103077398A (en) Livestock group number monitoring method based on embedded natural environment
Pan et al. Dynamic hand gesture detection and recognition with WiFi signal based on 1d-CNN
CN112180359B (en) FMCW-based human body tumbling detection method
CN117741647A (en) Millimeter wave radar intelligent home sensing method and device
JP2021032879A (en) Attitude recognizing device and method based on radar and electronic apparatus
CN113283415B (en) Sedentary and recumbent detection method based on depth camera
CN105160287B (en) A kind of constant space-time interest points characteristic detection method of camera motion
Song et al. High-accuracy gesture recognition using mm-wave radar based on convolutional block attention module
CN114168058A (en) Method and device for recognizing handwritten characters in air by FMCW single millimeter wave radar
CN107171748A (en) The collaboration frequency measurement of many arrays and the direct localization method of lack sampling
CN103810461B (en) Interference object detection method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210108