CN115563478A - Millimeter wave radar non-line-of-sight human behavior recognition system based on multi-class feature fusion - Google Patents


Info

Publication number
CN115563478A
CN115563478A (application CN202211546815.0A)
Authority
CN
China
Prior art keywords
feature
image
millimeter wave
wave radar
features
Prior art date
Legal status
Withdrawn
Application number
CN202211546815.0A
Other languages
Chinese (zh)
Inventor
贾超
丁从张
徐子涵
郭世盛
崔国龙
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202211546815.0A
Publication of CN115563478A
Legal status: Withdrawn

Classifications

    • G01S 13/10: Systems for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G01S 13/582: Velocity or trajectory determination systems based on the Doppler effect from interrupted pulse-modulated waves, adapted for simultaneous range and velocity measurements
    • G01S 13/89: Radar or analogous systems specially adapted for mapping or imaging
    • G01S 7/411: Target characterisation by analysis of the echo signal; identification of targets based on measurements of radar reflectivity
    • G06T 7/11: Image analysis; region-based segmentation
    • G06V 10/763: Image or video recognition using pattern recognition or machine learning; clustering with non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V 10/764: Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 40/10: Recognition of human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/20: Recognition of movements or behaviour, e.g. gesture recognition

Abstract

The invention discloses a millimeter wave radar non-line-of-sight human behavior recognition system based on multi-class feature fusion, belonging to the field of radio signal positioning and recognition. For the radar echo signals of different human behaviors, target behavior information is extracted by pulse compression, Doppler FFT, and related methods to generate a VR(k,d) map; Gabor, LBP, SIFT, and HOG features are extracted from the VR(k,d) map; the four features are reduced in dimension and fused by principal component analysis (PCA); and the fused feature is finally input to a support vector machine (SVM) classifier for classification, thereby realizing non-line-of-sight behavior recognition. By exploiting the complementarity of different feature types, the fusion effectively overcomes the incomplete expression of any single feature type, improves the recognition rate, and achieves accurate non-line-of-sight human behavior recognition, with the further advantages of protecting user privacy and being insensitive to the environment.

Description

Millimeter wave radar non-line-of-sight human behavior recognition system based on multi-class feature fusion
Technical Field
The invention relates to the field of radio signal positioning and identification, in particular to the field of human behavior identification application in radar signal processing.
Background
Technologies currently available for human behavior recognition fall into wearable and non-wearable devices. Wearable devices, including acceleration sensors and gyroscopes, are insensitive to the environment and highly accurate, but they must be worn by the user at all times. In military and security monitoring, where behavior recognition is covert, the monitored subject cannot be fitted with such devices in advance, which limits their application. In addition, real-time operation of a wearable device requires sufficient power, so a long-endurance battery or periodic charging is needed, leaving considerable room for improvement. Non-wearable devices mainly comprise cameras, radars, and the like. As an optical instrument, the camera achieves high accuracy in human behavior recognition, but it invades user privacy and is strongly affected by lighting conditions. Continuous monitoring violates user privacy and makes users feel unsafe, and video monitoring is easily occluded and cannot work under non-line-of-sight conditions. Radar differs from traditional monitoring equipment: it is privacy-friendly, non-invasive, and highly adaptable to the environment, and can effectively solve these problems.
Radar-based human behavior recognition methods mainly comprise threshold analysis, machine learning, and deep learning. Threshold analysis judges the target behavior type by setting thresholds on target distance, speed, acceleration, and so on; although fast, it suffers from a high false alarm rate and large errors, and the same thresholds behave differently in different scenes, so its performance is unstable. The machine learning and deep learning methods are similar and can both be summarized as three steps: feature expression, feature extraction, and classification. First, signal processing is applied to the radar echo to form an expression image containing attribute feature information; then the feature information contained in the expression image is extracted; finally a classifier distinguishes the target behavior type from the feature information, realizing behavior recognition. The difference is that machine learning requires manually extracted features, while deep learning extracts features automatically through a network. Although features extracted by deep learning are highly discriminative and yield a better recognition rate, the computational complexity is high, making such methods difficult to port to hardware with limited computing power. The machine learning method avoids these problems and achieves the expected effect by balancing recognition rate and computation speed.
The accuracy of a machine learning method depends not only on the type and parameters of the classifier but also on the features fed to it. Compared with traditional optical images, the effective target area in a radar feature image is smaller, and its color, edge, and texture information is less obvious; this scarcity of feature information causes target and information loss, making the target behavior hard to identify accurately. The information content of the acquired millimeter wave echo signal also varies with distance: at long range, the target energy is strongly attenuated, which affects the detection accuracy of the echo signal. Moreover, the radar echo usually contains clutter and noise in addition to the target signal, and these factors clutter the background of the feature image. The fused features of the feature images applied by the invention exploit the complementarity of the features and can effectively solve these problems.
Disclosure of Invention
The invention aims to provide a millimeter wave radar non-line-of-sight human behavior recognition method based on multi-class feature fusion that solves the above problems and is privacy-friendly, non-invasive, environmentally robust, and highly accurate.
In order to achieve this purpose, the invention adopts the following technical scheme. A millimeter wave radar non-line-of-sight human behavior recognition system based on multi-class feature fusion comprises a millimeter wave radar, a data processing module, a behavior recognition module, and a human-computer interaction module. The millimeter wave radar is installed at a wall corner, and the number of targets in its detection area is known. The radar transmits millimeter waves, collects the echo signals, and transmits them to the data processing module, which processes them to obtain the target's VR(k,d) map, where VR(k,d) denotes the range-velocity amplitude obtained at the d-th point of the k-th row of the time-range image. The VR(k,d) map is then transmitted to the behavior recognition module, which recognizes the target's behavior and passes the result to the human-computer interaction module for display. Meanwhile, the human-computer interaction module can issue an instruction to the data processing module, which forwards it to the millimeter wave radar;
wherein the data processing module computes the VR(k,d) map as follows:
Step A1: extract the range information of the target. Following the radar ranging principle, the radar transmits chirp signals and receives the echo signal delayed by t_d; mixing and filtering the received signal with the transmitted signal yields a single-frequency signal f = μ t_d, where μ is the frequency slope. The data received by the radar are obtained by sampling this signal, and pulse compression of the data yields a time-range image, computed as

$$TR(k,m)=\left|\sum_{u=0}^{N_s-1}W(u)\,S(n-u,m)\,e^{-j2\pi ku/N}\right|$$

where TR(k,m) is the amplitude of the m-th chirp signal at the k-th sampling point, W(u) is a preset window function, S(n−u,m) is the (n−u)-th sample of the m-th chirp signal, N_s is the total number of samples, and N is the number of FFT points;
Step A2: pulse-compress the range spectrum of each frame again to obtain the range-velocity image of the target, i.e., the VR(k,d) map, computed as

$$VR(k,d)=\left|\sum_{u=0}^{N_c-1}W(u)\,TR(k,m-u)\,e^{-j2\pi du/N_c}\right|$$

where VR(k,d) denotes the range-velocity amplitude at the d-th point of the k-th row of the time-range image, W(u) is a preset window function, TR(k,m−u) denotes the time-range image data at row k, column m−u of the frame, and N_c denotes the total number of chirps per frame;
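The two FFT stages of steps A1 and A2 amount to a windowed FFT along fast time (pulse compression) followed by a windowed FFT along slow time (Doppler). A minimal NumPy sketch under stated assumptions: one frame of raw ADC data shaped (samples, chirps), Hanning windows (the text leaves W(u) unspecified), and the complex range profile kept until the final magnitude:

```python
import numpy as np

def range_doppler_map(frame):
    """Sketch of steps A1-A2 for one frame of raw ADC data of shape
    (num_samples, num_chirps): a windowed FFT over fast time (pulse
    compression), then a windowed FFT over slow time (Doppler)."""
    n_s, n_c = frame.shape
    # Step A1: pulse compression -> complex time-range profile TR(k, m)
    tr = np.fft.fft(frame * np.hanning(n_s)[:, None], axis=0)
    # Step A2: Doppler FFT over the chirps at each range bin -> VR(k, d);
    # the magnitude is taken only at the end so the Doppler phase survives
    vr = np.abs(np.fft.fftshift(
        np.fft.fft(tr * np.hanning(n_c)[None, :], axis=1), axes=1))
    return vr

# 64 chirps of 256 samples each, as in the embodiment
rng = np.random.default_rng(0)
frame = rng.standard_normal((256, 64)) + 1j * rng.standard_normal((256, 64))
vr = range_doppler_map(frame)
print(vr.shape)  # (256, 64)
```

Zero Doppler is shifted to the center of the chirp axis for display, as is conventional for range-velocity maps.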
the processing method in the behavior identification module comprises the following steps:
Step B1: extract the Gabor features of the VR(k,d) map as feature 1;
Step B2: divide the image into several blocks; taking the central pixel of each block as a threshold, compare the gray level of the other pixels in the block with the central pixel; if a pixel value is less than the central pixel value, mark that position 0, otherwise 1, and finally set the central position to 1. After traversing all blocks, the 0-1 encoding of the whole VR(k,d) map is obtained as feature 2;
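The block-wise binary encoding of step B2 can be sketched as follows; this is one possible reading of the text, and the block size and the handling of pixels equal to the center (coded 1 here) are assumptions not fixed by the patent:

```python
import numpy as np

def block_lbp(img, block=3):
    """Sketch of step B2 (a block-wise local binary pattern): within each
    block x block tile, pixels >= the center pixel are coded 1, others 0,
    and the center position itself is set to 1."""
    h, w = img.shape
    h_t, w_t = h - h % block, w - w % block   # drop ragged borders
    img = img[:h_t, :w_t]
    codes = np.zeros((h_t, w_t), dtype=np.uint8)
    c = block // 2
    for i in range(0, h_t, block):
        for j in range(0, w_t, block):
            tile = img[i:i+block, j:j+block]
            code = (tile >= tile[c, c]).astype(np.uint8)
            code[c, c] = 1                     # center is always 1
            codes[i:i+block, j:j+block] = code
    return codes.ravel()  # 0-1 encoding of the whole map, used as feature 2

img = np.arange(36, dtype=float).reshape(6, 6)
feat = block_lbp(img)
print(feat.shape)  # (36,)
```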
Step B3: construct a scale space and detect extreme points of the VR(k,d) map; filter out abnormal extreme points and precisely localize the rest to obtain feature points; then assign orientation values to the feature points to generate feature descriptors; cluster the feature descriptors with the K-means method; take the center point of each cluster as a word of the bag, obtaining the bag-of-words model of each image as feature 3;
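The clustering and bag-of-words construction at the end of step B3 can be sketched as follows. The descriptor dimensionality and cluster count here are illustrative placeholders (real SIFT descriptors are 128-D), and the K-means is a bare-bones implementation rather than a tuned one:

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Tiny K-means used to quantize descriptors into visual words."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]  # fancy index -> copy
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):            # skip empty clusters
                centers[j] = x[labels == j].mean(axis=0)
    return centers, labels

def bag_of_words(descriptors, centers):
    """Histogram of nearest visual words -> the bag-of-words vector (feature 3)."""
    d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
desc = rng.standard_normal((200, 8))   # stand-in for SIFT descriptors
centers, _ = kmeans(desc, k=10)
bow = bag_of_words(desc, centers)
print(bow.shape, round(bow.sum(), 6))  # (10,) 1.0
```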
Step B4: first, compute the gradient magnitude and direction of each pixel of the VR(k,d) map; second, divide the image into several regions, each composed of a fixed number of grid cells, and each grid cell of a fixed number of pixels; then compute the gradient features of each region and normalize them; finally, slide each region over the whole image with a grid cell as the step size and concatenate the feature vectors of all regions to obtain feature 4;
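Step B4 is essentially the HOG pipeline. A simplified NumPy sketch under stated assumptions: unsigned orientations, per-cell L2 normalization, and the sliding-block step omitted for brevity:

```python
import numpy as np

def hog_sketch(img, cell=8, bins=9):
    """Sketch of step B4: per-pixel gradient magnitude and direction,
    then per-cell orientation histograms, L2-normalized per cell."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i+cell, j:j+cell].ravel()
            a = ang[i:i+cell, j:j+cell].ravel()
            # magnitude-weighted orientation histogram of this cell
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

img = np.random.default_rng(2).random((32, 32))
f = hog_sketch(img)
print(f.shape)  # 4x4 cells * 9 bins = (144,)
```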
Step B5: first, center feature 1, feature 2, feature 3, and feature 4; second, compute the covariance matrix of each feature, calculate its eigenvalues and eigenvectors, sort the eigenvalues in descending order, and concatenate the eigenvectors corresponding to the first k eigenvalues, with k less than 4; then project the original features onto the concatenated eigenvectors to obtain the fused feature;
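The per-feature PCA reduction and concatenation of step B5 can be sketched as follows; the sample count and per-feature dimensions are illustrative assumptions:

```python
import numpy as np

def pca_reduce(x, k):
    """Center the samples, eigendecompose the covariance matrix, and
    project onto the top-k eigenvectors (step B5 for one feature type)."""
    xc = x - x.mean(axis=0)
    cov = np.cov(xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)               # ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:k]]      # top-k eigenvectors
    return xc @ top

def fuse(features, k=3):
    """Reduce each feature matrix with PCA and concatenate -> fused feature."""
    return np.hstack([pca_reduce(f, k) for f in features])

rng = np.random.default_rng(3)
n = 50  # samples
feats = [rng.standard_normal((n, d)) for d in (40, 30, 20, 10)]  # features 1-4
fused = fuse(feats, k=3)
print(fused.shape)  # (50, 12)
```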
Step B6: train an SVM classifier with the fused features; during actual detection, recognize human behavior with the trained SVM classifier.
Further, the behavior recognition module extracts the Gabor features of the VR(k,d) map as follows. First, design a Gabor filter bank containing at least 4 Gabor filters, each of the form:
$$g(x,y;\lambda,\theta,\psi,\sigma,\gamma)=\exp\!\left(-\frac{x'^2+\gamma^2 y'^2}{2\sigma^2}\right)\exp\!\left(j\!\left(2\pi\frac{x'}{\lambda}+\psi\right)\right)$$

with x' = x cos θ + y sin θ and y' = −x sin θ + y cos θ, where λ is the wavelength, θ represents the filter orientation, ψ represents the phase offset of the tuning function, γ represents the spatial aspect ratio, and σ represents the variance of the Gaussian filter; the Gabor filter bank is generated from Gabor filters of different orientations and scales;
then, convolving each filter in the Gabor filter bank with the image, then down-sampling the obtained filtered image to reduce redundant information, converting and normalizing each down-sampled image into a feature vector with zero mean and unit variance, and finally combining the normalized feature vectors to generate the Gabor feature vector of the image.
Further, the SVM classifier is as follows.

The input features are mapped to a high-dimensional space with a Gaussian kernel, defined as

$$K(x_i,x_j)=\exp\!\left(-\frac{\lVert x_i-x_j\rVert^2}{2\sigma^2}\right)$$

where x_i and x_j are low-dimensional features and σ is the standard deviation of the Gaussian envelope.

Let x denote an original sample point and φ(x) the new vector obtained by mapping x into the new feature space; the maximum-margin hyperplane to be found is

$$f(x)=w^{T}\phi(x)+b$$

The dual problem of the nonlinear SVM is

$$\max_{\alpha}\;\sum_{i=1}^{n}\alpha_i-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j\,y_i\,y_j\,K(x_i,x_j)\quad\text{s.t.}\;\sum_{i=1}^{n}\alpha_i y_i=0,\;0\le\alpha_i\le C$$

where φ(x_i) denotes the vector mapped into the new feature space, C is a manually set penalty factor greater than 0 used to penalize samples that violate the inequality constraints, α_i are the Lagrange (KKT) multipliers, y_i are the feature labels, n is the number of optimization variables, and w and b are the hyperplane parameters to be solved. The problem is then solved with the SMO algorithm; from the optimal solution, w and b are recovered, yielding the optimal hyperplane.
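The Gaussian kernel above can be evaluated for a whole sample set at once. A minimal NumPy sketch, with the sample shapes and σ chosen arbitrarily for illustration:

```python
import numpy as np

def gaussian_kernel_matrix(x, sigma=1.0):
    """Gram matrix of the Gaussian (RBF) kernel
    K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(x**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * x @ x.T   # pairwise squared distances
    return np.exp(-np.maximum(d2, 0) / (2 * sigma**2))  # clamp rounding noise

x = np.random.default_rng(5).standard_normal((6, 4))
K = gaussian_kernel_matrix(x, sigma=2.0)
print(K.shape)                       # (6, 6)
print(np.allclose(np.diag(K), 1.0))  # True: K(x, x) = 1
```

In a full SVM this Gram matrix is what the dual objective and the SMO solver operate on, so φ(x) never needs to be formed explicitly.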
The overall idea of the invention is as follows: for the radar echo signals of different target behaviors, target behavior information is analyzed by pulse compression, Doppler FFT, and related methods to generate the VR(k,d) map; multiple features are extracted from the map; the four features are reduced in dimension and fused by the fusion method; and the fused feature is input to a support vector machine (SVM) classifier to judge the target behavior type. Compared with the prior art, the invention has the following advantages: detection with millimeter wave radar avoids both the invasion of user privacy and the influence of illumination, and supports real-time detection; in addition, by fusing features through the complementarity of different feature types, the method effectively overcomes the incomplete expression of any single feature type, improves the recognition rate, realizes accurate non-line-of-sight human behavior recognition, and protects user privacy while remaining insensitive to the environment.
Drawings
FIG. 1 is a block diagram of a non-line-of-sight human behavior recognition system provided by the present invention;
FIG. 2 is a flow chart of a millimeter wave radar non-line-of-sight human behavior recognition method based on multi-class feature fusion, provided by the invention;
fig. 3 is a schematic layout diagram of a millimeter wave radar provided by the present invention;
FIG. 4 shows the VR(k,d) maps of different behaviors in an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention is given with reference to the accompanying schematic drawings so that those skilled in the art can understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments; to those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined by the appended claims, and everything produced using the inventive concept falls under the protection of the invention.
The system of the invention, as shown in FIG. 1, comprises a millimeter wave radar, a data processing module, a behavior recognition module, and a human-computer interaction module. The millimeter wave radar is installed at a wall corner, and the number of targets in its detection area is determined. The radar transmits millimeter waves, collects the echo signals, and transmits them to the data processing module, which processes them to obtain the target's VR(k,d) map, where VR(k,d) denotes the range-velocity amplitude obtained at the d-th point of the k-th row of the time-range image. The VR(k,d) map is then transmitted to the behavior recognition module, which recognizes the target's behavior and passes the result to the human-computer interaction module for display. Meanwhile, the human-computer interaction module can issue an instruction to the data processing module, which forwards it to the millimeter wave radar;
the invention realizes non-line-of-sight human behavior recognition by using millimeter wave radar signal processing and machine learning technology, extracts the motion characteristics of the target by adopting methods such as pulse compression, doppler-FFT and the like, and generates the targetVR k,d() And extracting four characteristics respectively, performing dimension reduction and fusion on the four characteristics respectively by using a PCA method, and inputting the fusion characteristics into an SVM classifier to classify target behaviors so as to realize behavior recognition. Compared with a human behavior recognition system based on wearable equipment and optical equipment, the human behavior recognition system has the characteristics of user privacy protection, high recognition accuracy, low false alarm rate and false alarm rate, and high system stability.
Based on this principle, the invention provides a millimeter wave radar non-line-of-sight human behavior recognition method based on multi-class feature fusion, shown in FIG. 2, which comprises the following steps:
Step 1: install a millimeter wave radar at a wall corner and determine the number of targets in the radar detection area;
Step 2: collect raw target echo data in real time with the millimeter wave radar, upload them to the data processing module of the system, and apply radar signal processing to the uploaded echo data to obtain the target's VR(k,d) map;
Step 3: manually extract four features of the VR(k,d) map;
Step 4: design an SVM classifier, train it with the data set, and use it to classify the target behaviors;
Step 5: build the behavior recognition module of the system on the pre-trained SVM classifier, upload the target behavior fusion features obtained in real time by the data processing module to the recognition module, and detect target behaviors in real time;
Step 6: connect the system service module with the remote terminal platform to share the recognition results.
The step 1 is specifically: as shown in FIG. 3, in order to enable non-line-of-sight behavior recognition, the millimeter wave radar is installed at a wall corner; the radar, a TI 6843 ISK FMCW radar, is tilted downward by 25°.
The step 2 is specifically as follows.

The raw radar data received by one receiving antenna comprise 50 frames, each frame containing 64 chirps and each chirp 256 sampling points.

First, the range information of the target is extracted. Following the radar ranging principle, the radar transmits a chirp signal and receives the echo signal delayed by t_d. Mixing and filtering the received signal with the transmitted signal yields a single-frequency signal f = μ t_d, where μ is the frequency slope. The data received by the radar are obtained by sampling this signal, and pulse compression yields a time-range image, computed as

$$TR(k,m)=\left|\sum_{u=0}^{N_s-1}W(u)\,S(n-u,m)\,e^{-j2\pi ku/N}\right|$$

where TR(k,m) is the amplitude of the m-th chirp at the k-th sampling point, W(u) is a preset window function, S(n−u,m) is the (n−u)-th sample of the m-th chirp, and N_s is the number of ADC samples.

Next, the velocity information of the target is extracted. Since the velocity of the target is the range rate, the velocity information is contained in the phase shift of the radar's range information. Therefore, the range spectrum of each frame is pulse-compressed again to obtain the range-velocity image of the target, i.e., the VR(k,d) map, computed as

$$VR(k,d)=\left|\sum_{u=0}^{N_c-1}W(u)\,TR(k,m-u)\,e^{-j2\pi du/N_c}\right|$$

where VR(k,d) denotes the range-velocity amplitude at the d-th point of the k-th row of the time-range image, W(u) is a preset window function, and TR(k,m−u) denotes the time-range image data at row k, column m−u of the frame.
Firstly, a Gabor filter bank containing 24 filters is designed; the 2-D Gabor filter is defined as

$$g(x,y;\lambda,\theta,\psi,\sigma,\gamma)=\exp\!\left(-\frac{x'^2+\gamma^2 y'^2}{2\sigma^2}\right)\exp\!\left(j\!\left(2\pi\frac{x'}{\lambda}+\psi\right)\right)$$

with x' = x cos θ + y sin θ and y' = −x sin θ + y cos θ, where λ is the wavelength, θ represents the filter orientation, ψ represents the phase offset of the tuning function, γ represents the spatial aspect ratio, and σ represents the variance of the Gaussian filter. The Gabor filter bank is generated from Gabor filters with sizes 7 × 7, 9 × 9, 11 × 11, 13 × 13, and 15 × 15 and orientations 0°, 45°, 90°, and 135°.

Then each filter in the Gabor filter bank is convolved with the image; the resulting filtered images are down-sampled to reduce redundant information; each down-sampled image is converted into a feature vector normalized to zero mean and unit variance; and finally the normalized feature vectors are combined to generate the Gabor feature vector of the image.
The step 4 is specifically:
dividing the data set into a training set and a testing set, training the training set by using the fusion features in the training set as the input of an SVM classifier, and adjusting the parameters C and C of the SVM
Figure 420826DEST_PATH_IMAGE030
And testing the performance of the SVM classifier by using a test set to ensure that the SVM classifier achieves an ideal recognition rate, and selecting the SVM model with the highest accuracy as the classifier model used by the invention.
The SVM design method comprises the following steps:
firstly, the sample features are mapped to a high-dimensional space using a Gaussian kernel function:

K(x, x') = \exp\left( -\frac{\|x - x'\|^2}{2\sigma^2} \right)

Using x to denote an original sample point and \phi(x) the new vector obtained by mapping x into the new feature space, the maximum-margin hyperplane to be found is

w^T \phi(x) + b = 0

The dual problem of the nonlinear SVM is:

\min_{\alpha} \; \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j K(x_i, x_j) - \sum_{i=1}^{N} \alpha_i \quad \text{s.t.} \; \sum_{i=1}^{N} \alpha_i y_i = 0, \; 0 \le \alpha_i \le C, \; i = 1, \dots, N

where C is a manually set penalty factor greater than 0 used to penalize samples that violate the inequality constraints, and \alpha_i are the Lagrange multipliers. The SMO algorithm is then used to solve this problem for the optimal solution, from which w and b are recovered, thereby obtaining the optimal hyperplane.
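Once SMO has produced the optimal multipliers, the classifier evaluates f(x) = Σᵢ αᵢ yᵢ K(xᵢ, x) + b. A small NumPy sketch of this decision function follows; the support vectors and dual values are illustrative toy numbers, not a solved-for optimum.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """K(x, x') = exp(-||x - x'||^2 / (2 sigma^2)) for every row pair."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def decision(alpha, y_sv, X_sv, b, X, sigma=1.0):
    """f(x) = sum_i alpha_i * y_i * K(x_i, x) + b,
    the SVM decision function built from the dual solution (alpha, b)."""
    return rbf_kernel(X, X_sv, sigma) @ (alpha * y_sv) + b

# toy "dual solution": one support vector per class (values are illustrative)
X_sv = np.array([[0.0, 0.0], [1.0, 1.0]])
y_sv = np.array([1.0, -1.0])
alpha = np.array([0.8, 0.8])

f = decision(alpha, y_sv, X_sv, b=0.0, X=np.array([[0.0, 0.0], [1.0, 1.0]]))
print(np.sign(f))  # each query point lands on its own support vector's side
```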
Step 5 is specifically:
the target radar echo signals acquired in real time by the side-mounted millimeter wave radar are sent to the system's data processing module to obtain the fusion features of the target behavior; these features are fed to the SVM classifier, and the SVM output serves as the real-time monitoring result of the non-line-of-sight human behavior recognition system.
The VR(k, d) maps of different behaviors obtained in step 2 are shown in Fig. 4, and the confusion matrix of the classification results is shown in Table 1.
TABLE 1 confusion matrix of recognition results obtained by classification

Claims (3)

1. A millimeter wave radar non-line-of-sight human behavior recognition system based on multi-class feature fusion, comprising: a millimeter wave radar, a data processing module, a behavior recognition module, and a human-computer interaction module; characterized in that the millimeter wave radar is arranged at a corner of a wall and the number of targets in its detection area is known; the millimeter wave radar transmits millimeter waves, collects echo signals, and transmits them to the data processing module, which processes the echo signals to obtain the VR(k, d) map of the target, wherein VR(k, d) denotes the range-velocity amplitude map obtained at the d-th point of the k-th row of the time-distance image; the VR(k, d) map is then transmitted to the behavior recognition module, which recognizes the behavior of the target and transmits the result to the human-computer interaction module for display; meanwhile, the human-computer interaction module can issue an instruction to the data processing module, which forwards the instruction to the millimeter wave radar;
wherein the data processing module computes the VR(k, d) map as follows:
step A1: extract the range information of the target: according to the radar ranging principle, the radar transmits chirp signals and receives the echo signal delayed by t_d; mixing and filtering the received signal with the transmitted signal yields a single-frequency signal f = \mu t_d, where \mu is the chirp slope; the data received by the radar are obtained by sampling this signal, and pulse compression of the data yields the time-distance image, calculated as:

TR(k, m) = \left| \sum_{u=0}^{N_s - 1} W_u \, S(n - u, m) \, e^{-j 2\pi k u / N} \right|

wherein TR(k, m) is the amplitude of the m-th chirp signal at the k-th sampling point, W_u is a preset window function, S(n - u, m) is the (n - u)-th sampling point of the m-th chirp signal, N_s is the total number of sampling points, and N is the number of FFT points;
step A2: perform pulse compression on the range spectrum of each frame to obtain the range-velocity image of the target, i.e. the VR(k, d) map, calculated as:

VR(k, d) = \left| \sum_{u=0}^{N_c - 1} W_u \, TR(k, m - u) \, e^{-j 2\pi d u / N_c} \right|

wherein VR(k, d) is the range-velocity amplitude obtained at the d-th point of the k-th row of the time-distance image, W_u is a preset window function, TR(k, m - u) is the time-distance image data of the k-th row and (m - u)-th column, and N_c is the total number of samples;
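The two pulse-compression steps of A1 and A2 amount to a windowed FFT over fast time (range) followed by a windowed FFT across chirps (velocity). A minimal NumPy sketch under assumed parameters — Hanning windows and FFT sizes equal to the data sizes are choices made for this sketch, not taken from the patent:

```python
import numpy as np

def range_doppler(S):
    """S: (num_samples, num_chirps) matrix of mixed (beat) signals.
    A windowed FFT over fast time gives the time-range profiles TR;
    a second windowed FFT across the chirps gives the range-velocity map VR."""
    Ns, M = S.shape
    TR_c = np.fft.fft(S * np.hanning(Ns)[:, None], axis=0)  # complex range profiles
    VR = np.abs(np.fft.fftshift(
        np.fft.fft(TR_c * np.hanning(M)[None, :], axis=1), axes=1))
    return np.abs(TR_c), VR

# single scatterer: beat frequency at range bin 20, constant Doppler across chirps
Ns, M = 128, 64
u = np.arange(Ns)[:, None]
m = np.arange(M)[None, :]
S = np.cos(2 * np.pi * (20 * u / Ns + 0.1 * m))
TR, VR = range_doppler(S)
print(TR.shape, VR.shape)
```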
the processing method in the behavior identification module comprises the following steps:
step B1: extract the Gabor features of the VR(k, d) map as feature 1;
step B2: divide the image into a plurality of blocks; taking the central pixel of each block as the threshold, compare the gray value of the other pixels in the block with the central pixel: if a pixel value is less than the central pixel value, mark that position as 0, otherwise mark it as 1, and finally set the central pixel value to 1; after traversing all blocks, the 0-1 encoding of the whole VR(k, d) map is obtained as feature 2;
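Step B2 is a local-binary-pattern-style encoding. A minimal sketch for a single 3 × 3 block follows; the block size and the handling of pixels equal to the center (mapped to 1) are assumptions of this sketch:

```python
import numpy as np

def lbp_code(block):
    """Compare each pixel with the centre pixel: < centre -> 0, otherwise 1;
    the centre position itself is set to 1, as in step B2."""
    ci, cj = block.shape[0] // 2, block.shape[1] // 2
    code = (block >= block[ci, cj]).astype(np.uint8)
    code[ci, cj] = 1
    return code

block = np.array([[5, 9, 1],
                  [4, 6, 8],
                  [7, 2, 6]])
print(lbp_code(block))
```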
step B3: construct a scale space, detect extreme points of the VR(k, d) map, filter out abnormal extreme points, and localize the remaining ones precisely to obtain feature points; then assign orientation values to the feature points and generate feature descriptors; cluster the feature descriptors with the K-means method; the center point of each cluster forms a visual word, yielding the bag-of-words model of each image as feature 3;
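The clustering and bag-of-words stage of step B3 can be sketched with a plain Lloyd's K-means and a word histogram. The random descriptors below are stand-ins for the SIFT-style descriptors, and the vocabulary size of 8 is an assumption:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means over a pool of feature descriptors."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]          # initial centres
    for _ in range(iters):
        lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(0)
    return C

def bow_histogram(desc, C):
    """Assign each descriptor to its nearest visual word; the normalised
    word histogram is the image's bag-of-words vector."""
    lab = ((desc[:, None] - C[None]) ** 2).sum(-1).argmin(1)
    h = np.bincount(lab, minlength=len(C)).astype(float)
    return h / h.sum()

rng = np.random.default_rng(1)
vocab_desc = rng.normal(size=(200, 16))   # descriptors pooled from training images
C = kmeans(vocab_desc, k=8)
h = bow_histogram(rng.normal(size=(40, 16)), C)
print(h.shape)
```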
step B4: first, compute the gradient magnitude and direction of each pixel of the VR(k, d) map; second, divide the image into a plurality of regions, each region consisting of a fixed number of grid cells and each grid cell of a fixed number of pixels; then compute the gradient features of each region and normalize them; finally, slide each region over the whole image with a step of one grid cell and concatenate the feature vectors of all regions to obtain feature 4;
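Step B4 is a HOG-style feature. A minimal sketch of the per-pixel gradient magnitude/direction and a magnitude-weighted orientation histogram for one cell; the 9 unsigned orientation bins and L2 normalization are assumptions, and the sketch is simplified to a single cell rather than sliding regions:

```python
import numpy as np

def grad_mag_dir(img):
    """Per-pixel gradient magnitude and unsigned direction (0..180 degrees)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180
    return mag, ang

def cell_histogram(mag, ang, bins=9):
    """Orientation histogram of one cell, weighted by gradient magnitude,
    then L2-normalised (block normalisation simplified to one cell)."""
    idx = (ang / (180 / bins)).astype(int) % bins
    h = np.zeros(bins)
    np.add.at(h, idx.ravel(), mag.ravel())
    return h / (np.linalg.norm(h) + 1e-12)

img = np.add.outer(np.arange(8.0), np.zeros(8))  # vertical ramp: gradient along y
mag, ang = grad_mag_dir(img)
h = cell_histogram(mag, ang)
print(h.shape)
```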
step B5: first, center feature 1, feature 2, feature 3, and feature 4; second, compute the covariance matrix of each feature, calculate its eigenvalues and eigenvectors, sort the eigenvalues from large to small, and splice together the eigenvectors corresponding to the top k eigenvalues, with k less than 4; then project the original features onto the spliced eigenvectors to obtain the fusion features;
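The fusion of step B5 can be sketched with standard PCA. For brevity this sketch centers and stacks the four features and uses one joint covariance matrix rather than one per feature, which is a simplification of the claimed procedure:

```python
import numpy as np

def pca_fuse(features, k=3):
    """Centre each feature block, stack them, and project onto the
    eigenvectors of the k largest covariance eigenvalues."""
    X = np.hstack([f - f.mean(0) for f in features])     # centred, stacked
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)                     # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]                   # top-k components
    return X @ vecs[:, order]

rng = np.random.default_rng(0)
feats = [rng.normal(size=(50, d)) for d in (4, 5, 6, 3)]  # stand-ins for features 1-4
Z = pca_fuse(feats, k=3)
print(Z.shape)
```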
step B6: train an SVM classifier with the fusion features, and use the trained SVM classifier to recognize human behavior during actual detection.
2. The millimeter wave radar non-line-of-sight human behavior recognition system based on multi-class feature fusion as claimed in claim 1, characterized in that the behavior recognition module extracts the Gabor features of the VR(k, d) map as follows:
firstly, a Gabor filter bank is designed, the number of Gabor filters in the bank being greater than or equal to 4, wherein a Gabor filter is:

g(x, y; \lambda, \theta, \psi, \sigma, \gamma) = \exp\left( -\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2} \right) \cos\left( 2\pi \frac{x'}{\lambda} + \psi \right)

with x' = x \cos\theta + y \sin\theta and y' = -x \sin\theta + y \cos\theta, wherein \lambda is the wavelength, \theta represents the direction of the filter, \psi represents the phase offset of the tuning function, \gamma represents the spatial aspect ratio, and \sigma represents the variance of the Gaussian filter; Gabor filters of different directions and scales are used to generate the Gabor filter bank;
then, each filter in the Gabor filter bank is convolved with the image; the resulting filtered images are down-sampled to reduce redundant information; each down-sampled image is converted and normalized into a feature vector with zero mean and unit variance; finally, the normalized feature vectors are combined to generate the Gabor feature vector of the image.
3. The millimeter wave radar non-line-of-sight human behavior recognition system based on multi-class feature fusion as claimed in claim 1, characterized in that the SVM classifier is as follows:
the input features are mapped to a high-dimensional space using a Gaussian kernel function:

K(x, x') = \exp\left( -\frac{\|x - x'\|^2}{2\sigma^2} \right)

wherein x and x' are low-dimensional features and \sigma is the standard deviation of the Gaussian envelope;

using x to denote an original sample point and \phi(x) the new vector obtained by mapping x into the new feature space, the maximum-margin hyperplane to be found is

w^T \phi(x) + b = 0

the dual problem of the nonlinear SVM is:

\min_{\alpha} \; \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j K(x_i, x_j) - \sum_{i=1}^{N} \alpha_i \quad \text{s.t.} \; \sum_{i=1}^{N} \alpha_i y_i = 0, \; 0 \le \alpha_i \le C, \; i = 1, \dots, N

wherein C is a manually set penalty factor greater than 0 used to penalize samples that violate the inequality constraints, y_i is the label of feature x_i, \alpha_i is the Lagrange multiplier (satisfying the KKT conditions) and the variable to be optimized, and w and b are the parameters of the hyperplane to be solved; the SMO algorithm is then used to solve this problem for the optimal solution, from which w and b are obtained, thereby obtaining the optimal hyperplane.
CN202211546815.0A 2022-12-05 2022-12-05 Millimeter wave radar non-line-of-sight human behavior recognition system based on multi-class feature fusion Withdrawn CN115563478A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211546815.0A CN115563478A (en) 2022-12-05 2022-12-05 Millimeter wave radar non-line-of-sight human behavior recognition system based on multi-class feature fusion


Publications (1)

Publication Number Publication Date
CN115563478A true CN115563478A (en) 2023-01-03

Family

ID=84770010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211546815.0A Withdrawn CN115563478A (en) 2022-12-05 2022-12-05 Millimeter wave radar non-line-of-sight human behavior recognition system based on multi-class feature fusion

Country Status (1)

Country Link
CN (1) CN115563478A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108872984A (en) * 2018-03-15 2018-11-23 清华大学 Human body recognition method based on multistatic radar micro-doppler and convolutional neural networks
CN110007366A (en) * 2019-03-04 2019-07-12 中国科学院深圳先进技术研究院 A kind of life searching method and system based on Multi-sensor Fusion
CN110163161A (en) * 2019-05-24 2019-08-23 西安电子科技大学 Multiple features fusion pedestrian detection method based on Scale invariant
CN112784722A (en) * 2021-01-13 2021-05-11 南京邮电大学 Behavior identification method based on YOLOv3 and bag-of-words model
CN114529970A (en) * 2022-02-17 2022-05-24 广州大学 Pedestrian detection system based on fusion of Gabor features and HOG features
CN114708663A (en) * 2022-04-12 2022-07-05 浙江工业大学 Millimeter wave radar sensing gesture recognition method based on few-sample learning
CN115061126A (en) * 2022-06-08 2022-09-16 电子科技大学 Radar cluster target behavior identification method based on multi-dimensional parameter neural network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cui Guolong et al., "Overview and Prospect of Anti-Jamming Techniques for Cognitive Intelligent Radar", Journal of Radars *
Chen Xin et al., "Millimeter-Wave Radar Human Behavior Recognition Based on the EfficientNet Model", Computer Technology and Development *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20230103