CN115220007A - Radar point cloud data enhancement method for posture recognition - Google Patents

Radar point cloud data enhancement method for posture recognition

Info

Publication number
CN115220007A
CN115220007A
Authority
CN
China
Prior art keywords
point cloud
radar
human body
data
dimension
Prior art date
Legal status: Pending (the status is an assumption, not a legal conclusion)
Application number
CN202210884128.3A
Other languages
Chinese (zh)
Inventor
王勇
王智铭
蒋德琛
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202210884128.3A priority Critical patent/CN115220007A/en
Publication of CN115220007A publication Critical patent/CN115220007A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 - Details of systems according to group G01S13/00
    • G01S7/41 - using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417 - involving the use of neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a radar point cloud data enhancement method for posture recognition. A human posture point cloud data set is collected by radar; the point cloud is then enhanced in the distance, angle and speed dimensions in turn; voxelization divides the point cloud into voxel blocks in the form of a three-dimensional matrix; the main body parts are segmented according to posture characteristics, and a neural network designed for human posture is trained. Starting from real point cloud data, the method performs data enhancement in the distance, angle and speed dimensions to generate virtual point cloud data at different distances, angles and speeds, preprocesses the data according to posture characteristics, and then classifies human postures with a neural network. This addresses the small-sample problem in radar-based posture recognition, enriches radar data sets, improves recognition accuracy, and supports deep-learning research.

Description

Radar point cloud data enhancement method for posture recognition
Technical Field
The invention relates to the field of data enhancement, in particular to a radar point cloud data enhancement method for posture recognition.
Background
In recent years, deep learning has developed rapidly and is widely applied in many fields, and the quality of a trained model is closely tied to the size of its data set. A sufficient amount of data also avoids overfitting during training. Obtaining high-quality data sets is a bottleneck that limits the effectiveness of deep learning in some fields.
For image posture data, many public data sets already exist and data acquisition is convenient. In the radar field, by contrast, public data sets are scarce because data acquisition is difficult and poorly standardized, so small-sample data sets are the norm. When radar point cloud data are fed to deep-learning methods for posture recognition, problems such as network overfitting easily arise.
Therefore, how to combine the characteristics of radar signals with the sensor parameters to perform reliable data enhancement on radar point clouds used for posture recognition, and thereby expand the data sets, is an urgent technical problem.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a radar point cloud data enhancement method for posture recognition.
The purpose of the invention is realized by the following technical scheme. A radar point cloud data enhancement method for posture recognition comprises the following steps:
Step 1, acquire a human posture point cloud data set with a radar as the first data subset, and compute the signal-to-noise ratio (SNR) of each point;
Step 2, apply distance-dimension enhancement to the first data subset: translate the point cloud outward by a fixed distance, update its SNR, and add noise to the SNR to improve generalization; remove outlier points that violate physical laws according to the radar's detection performance, yielding an enhanced subset recorded as the second data subset;
Step 3, apply angle-dimension enhancement to the first data subset: update the SNR of the angle-adjusted point cloud and add noise to the SNR to improve generalization; remove outlier points that violate physical laws according to the radar's detection performance, yielding an enhanced subset recorded as the third data subset;
Step 4, apply speed-dimension enhancement to the first data subset: down-sample the point cloud data set and adjust the velocity values of the points, yielding the fourth data subset;
Step 5, preprocess the enhanced human posture point cloud data by voxelization, dividing it into an M × N × L three-dimensional grid matrix;
Step 6, according to the motion characteristics of human posture, segment the three-dimensional grid matrix obtained in step 5 by body structure, then feed it to a neural network model for human posture recognition for training.
Further, in step 1, the point cloud obtained by the radar may be in either a polar or a rectangular coordinate system; a point cloud in rectangular coordinates is converted to polar coordinates as follows:

R = √(x² + y² + z²)
θ = arctan(y / x)
φ = arctan(z / √(x² + y²))

where x, y, z are the coordinates on the x-, y- and z-axes of the rectangular system, R is the range of the radar point, θ is the horizontal angle, and φ is the pitch angle.
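As a concrete illustration, the conversion above can be sketched in Python with NumPy (a minimal sketch; the function name is ours, not the patent's):

```python
import numpy as np

def to_polar(points):
    """Convert point coordinates from rectangular (x, y, z) to polar
    (range R, horizontal angle theta, pitch angle phi)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    R = np.sqrt(x**2 + y**2 + z**2)             # range from the radar
    theta = np.arctan2(y, x)                    # horizontal angle
    phi = np.arctan2(z, np.sqrt(x**2 + y**2))   # pitch angle
    return np.stack([R, theta, phi], axis=1)
```

Using `arctan2` rather than `arctan` keeps the angles correct in all quadrants.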
Further, in step 1, the SNR of each point is computed as follows.
First, the power density S at the point:

S = P_t / (4πR²)

where P_t is the radar transmit power and R is the range from the radar to the point.
The radar generally adopts a directional antenna; the antenna gain G is related to the effective area A and the radar wavelength λ by:

G = 4πA / λ²

In the radiation direction where the radar transmit antenna gain is G_t, the power density S₁ at a point at range R from the radar is:

S₁ = P_t G_t / (4πR²)

Assuming the human body at the point re-radiates the received echo signal omnidirectionally without loss, the echo power density S₂ at the radar receive antenna is:

S₂ = P_t G_t α / ((4π)² R⁴)

where α is the radar cross-section.
From the effective receiving area A_r of the radar receive antenna, the received echo power P_r is:

P_r = S₂ A_r = P_t G_t G_r λ² α / ((4π)³ R⁴)

where λ is the radar wavelength and G_r is the receive antenna gain.
Taking radar internal noise and external environmental interference into account finally gives the SNR of the point:

SNR = P_t G_t G_r λ² α T_meas / ((4π)³ R⁴ k T F)

where T_meas is the total measurement time, k is the Boltzmann constant, T is the antenna temperature, and F is the noise figure.
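Under the derivation above, the point SNR can be evaluated directly (a sketch; the function name and default parameter values are illustrative assumptions, not from the patent):

```python
import numpy as np

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def point_snr(R, P_t, G_t, G_r, lam, alpha, T_meas, T=290.0, F=10.0):
    """Point SNR from the radar equation:
    SNR = P_t*G_t*G_r*lam^2*alpha*T_meas / ((4*pi)^3 * R^4 * k*T*F)."""
    return (P_t * G_t * G_r * lam**2 * alpha * T_meas) / \
           ((4 * np.pi)**3 * R**4 * K_BOLTZMANN * T * F)
```

Note the R⁻⁴ dependence: doubling the range divides the SNR by 16, which is exactly the scaling the distance-dimension enhancement below relies on.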
Further, in step 2, from the known SNR₁ of a point at range d₁, the SNR₂ of the same point at range d₂ is:

SNR₂ = SNR₁ · (d₁ / d₂)⁴

Therefore, after translating a point outward by a fixed distance d′, the translated signal-to-noise ratio SNR′_d is:

SNR′_d = SNR_d · (d_o / (d_o + d′))⁴

where d_o is the range of the point from the radar before translation and SNR_d is the SNR before translation.
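A minimal sketch of this distance-dimension step (our function name; points are assumed to be radar-centred rectangular coordinates):

```python
import numpy as np

def distance_enhance(points, snr, d_prime):
    """Translate each point outward along its radial direction by
    d_prime and rescale its SNR by (d_o / (d_o + d_prime))**4."""
    d_o = np.linalg.norm(points, axis=1)             # original ranges
    new_points = points * ((d_o + d_prime) / d_o)[:, None]
    new_snr = snr * (d_o / (d_o + d_prime))**4       # R^-4 power law
    return new_points, new_snr
```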
Further, in step 3, for a single-channel radar the SNR after angle-dimension enhancement, SNR′_a, is:

SNR′_a = SNR_a · G_v / G_o

where G_o is the gain of the radar transmit-receive antenna pair before the angle adjustment, G_v is its gain after the adjustment, and SNR_a is the SNR before the adjustment.
Further, in step 3, antenna simulation and antenna measurement are performed on the radar in turn to obtain the pattern gain characteristics of the transmit-receive antenna pairs, and the simulated pattern gain is used to correct the deviation of the measured pattern gain, giving the corrected pattern gain. If the radar has multiple transmit-receive antenna pairs, the corrected pattern gain characteristics of the different pairs are fitted according to the pairwise combinations of transmit and receive antennas, giving the pair gain functions G(θ) = {G₁(θ), G₂(θ), …, G_n(θ)}, where θ is the new angle after adjustment and G_i(θ) is the gain function of the i-th transmit-receive pair. The transmit-receive gains must be accumulated over the pairs after the angle adjustment, so the adjusted signal-to-noise ratio SNR′_b is:

SNR′_b = SNR_b · (Σ_{i=1..n} G_vi) / (Σ_{i=1..n} G_oi)

where n is the total number of transmit-receive antenna pairs, G_oi is the gain of the i-th pair before the angle adjustment, G_vi is its gain after the adjustment, and SNR_b is the SNR before the adjustment.
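The multi-pair SNR update reduces to accumulating the pair gains before and after rotation (a sketch with illustrative gain lists; the function name is ours):

```python
def angle_enhanced_snr(snr_b, gains_before, gains_after):
    """Angle-dimension SNR update for a multi-channel radar:
    scale by the ratio of accumulated pair gains after vs. before
    the angle adjustment."""
    return snr_b * sum(gains_after) / sum(gains_before)
```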
Further, in step 3, if the radar supports both pitch-angle and horizontal-angle detection, the point cloud is angle-adjusted in both the horizontal and pitch dimensions and the two are combined to obtain the angle-expanded human posture point cloud data. For postures with large vertical variation, such as jumping and squatting, the pitch dimension is primary and the horizontal dimension serves as auxiliary correction; for postures with large horizontal variation, such as walking and punching, the horizontal dimension is primary and the pitch dimension serves as auxiliary correction.
Further, in steps 2 and 3, the added noise is Gaussian. With the point at (x, y, z) as the origin, the Gaussian noise intensity at a neighbouring position (x̃, ỹ, z̃) is:

H_(x̃,ỹ,z̃) = 1 / ((2π)^(3/2) σ³) · exp(−((x̃−x)² + (ỹ−y)² + (z̃−z)²) / (2σ²))

SNR′_(x,y,z) = SNR_(x,y,z) + H_(x,y,z)

where σ is the point cloud variance parameter, (x, y, z) are the coordinates of the point, (x̃, ỹ, z̃) are the neighbouring coordinates, H_(x,y,z) is the Gaussian noise intensity at the point (x, y, z), and SNR_(x,y,z), SNR′_(x,y,z) are the SNR of the point before and after adding noise.
Further, in steps 2 and 3, outlier points that violate physical laws are removed according to the radar's detection performance, as follows: taking the minimum point-cloud SNR the radar can actually detect, SNR_min, as a reference, points whose SNR after the above operations falls below SNR_min are removed from the point cloud data set.
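Both the noise injection and the SNR_min screening can be sketched together (hedged: the neighbourhood formula follows our reading of the 3-D Gaussian above, and the function names are ours):

```python
import numpy as np

def gaussian_intensity(p, p0, sigma):
    """3-D Gaussian noise intensity at neighbour coordinate p,
    centred on point coordinate p0 with spread parameter sigma."""
    d2 = np.sum((np.asarray(p) - np.asarray(p0))**2)
    return np.exp(-d2 / (2 * sigma**2)) / ((2 * np.pi)**1.5 * sigma**3)

def screen_points(points, snr, snr_min):
    """Drop points whose SNR fell below the minimum the radar detects."""
    keep = snr >= snr_min
    return points[keep], snr[keep]
```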
Further, in steps 2 and 3, a density-based clustering algorithm is used to cluster the human posture point cloud and remove interference points that do not belong to the body. Because a human posture point cloud is tightly associated in the horizontal plane but widely dispersed in the vertical plane, a modified Euclidean distance replaces the conventional Euclidean distance as the distance parameter of the clustering algorithm, weakening the influence of the z-axis during clustering:

D(q_i, q_j) = (x_i − x_j)² + (y_i − y_j)² + 0.25·(z_i − z_j)²

where D(q_i, q_j) is the modified Euclidean distance between points q_i and q_j, and (x_i, y_i, z_i) and (x_j, y_j, z_j) are their three-dimensional coordinates.
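The modified distance is a one-liner; it can be passed as a custom `metric` callable to a density-based clusterer such as scikit-learn's DBSCAN (usage not shown here):

```python
def modified_euclidean(p, q):
    """Squared Euclidean distance with the z-axis term down-weighted
    by 0.25, so clustering is dominated by horizontal spacing."""
    return (p[0] - q[0])**2 + (p[1] - q[1])**2 + 0.25 * (p[2] - q[2])**2
```

Note that, as in the patent, the value is a squared distance, so any clustering radius must be chosen in squared units.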
Further, in step 4, since the radar collects data at a fixed frame rate, fewer frames are collected for the same posture action when the body moves faster. The collected point cloud data set, ordered in time, is written F = {f₁, f₂, …, f_h}, where f_i is the human posture point cloud of the i-th frame and h is the total number of collected frames. Frames are selected from F by random sampling with proportion p to form a new time-ordered human posture point cloud data set F′ = {f′₁, f′₂, …, f′_m}, where

m = ⌊p · h⌋

and the velocity of each point in the data set is updated at the same time:

v′ = v / p

where v is the original velocity of the point and v′ is its updated velocity.
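A sketch of this speed-dimension step under the m = ⌊p·h⌋ and v′ = v/p reading above (the function name is ours):

```python
import numpy as np

def speed_enhance(frames, velocities, p, rng=None):
    """Keep a random proportion p of frames (time order preserved)
    and rescale the retained velocities by 1/p, emulating the same
    posture performed faster."""
    rng = np.random.default_rng(0) if rng is None else rng
    h = len(frames)
    m = max(1, int(p * h))                      # m = floor(p * h)
    keep = np.sort(rng.choice(h, size=m, replace=False))
    return [frames[i] for i in keep], [velocities[i] / p for i in keep]
```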
Further, the second, third and fourth data subsets are expanded in combination, i.e. the methods of steps 2, 3 and 4 are combined at random as needed to form point cloud data sets with different distances, angles and speeds. For human posture point clouds, speed-dimension enhancement provides the neural network with the most additional information (postures at different speeds), and distance-dimension translation changes the point cloud distribution less than angle-dimension rotation; the three dimensions are therefore selected in the order speed, angle, distance for combined expansion, with the assigned weights decreasing in that order.
Further, in step 5, the voxelization operation traverses all points of the body to obtain the extrema of the three-dimensional coordinates of the posture point cloud, namely x_min, x_max, y_min, y_max, z_min, z_max. This gives the length X, width Y and height Z of the body region, which is divided uniformly into M × N × L blocks, and an intensity value is computed for each block. The intensity value of a block may be computed as: the number of points in the block, the sum of the SNRs of all points in the block, or the sum of the velocities of all points in the block. The result is the three-dimensional grid matrix Π, with block indices

I_x = ⌈(x − x_min) · M / X⌉, I_y = ⌈(y − y_min) · N / Y⌉, I_z = ⌈(z − z_min) · L / Z⌉

where I_x, I_y, I_z are the block numbers along the x-, y- and z-axes.
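Voxelization under the indexing above can be sketched as follows (SNR-sum intensity; 0-based indices instead of the patent's 1-based numbering, and the function name is ours):

```python
import numpy as np

def voxelize(points, snr, M=32, N=32, L=10):
    """Divide the body bounding box into an M x N x L grid; each
    block's intensity is the sum of the SNRs of the points inside it."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)        # guard zero-width axes
    idx = ((points - lo) / span * [M, N, L]).astype(int)
    idx = np.minimum(idx, [M - 1, N - 1, L - 1])  # points on the far face
    grid = np.zeros((M, N, L))
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), snr)
    return grid
```

`np.add.at` accumulates correctly even when several points fall into the same block.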
Further, in step 6, for common human postures the main moving parts of the body are divided into three parts: left arm, trunk and right arm. The left arm, trunk, right arm and whole body are each processed with a long short-term memory network; at the same time, the posture point clouds of all frames in the time window are superimposed, aggregated and processed with a convolutional neural network. After the individual decisions, an attention module adjusts the weights of the five branches in the fusion stage to produce the human posture recognition result.
The invention has the following beneficial effects. Based on real human posture point cloud data acquired by radar, the method performs data enhancement in the distance, angle and speed dimensions, screens the points that conform to physical characteristics using the radar's own parameters, clusters the posture point cloud with a density-based clustering algorithm, and removes interference points that do not belong to the body. Voxelization divides the point cloud into voxel blocks in the form of a three-dimensional matrix; the main body parts are segmented according to posture characteristics, and a neural network designed for human posture is trained. The method greatly expands the scale of the data set, alleviates the current shortage of radar signal data sets, and effectively improves neural network performance.
Drawings
FIG. 1 is a method flow implementation diagram provided by an exemplary embodiment;
FIG. 2 is a schematic diagram of an original point cloud provided by an exemplary embodiment;
FIG. 3 is a schematic diagram of a distance dimension enhanced point cloud provided by an exemplary embodiment;
FIG. 4 is a schematic diagram of an enhanced point cloud in an angular dimension provided by an exemplary embodiment;
FIG. 5 is a schematic diagram of a velocity dimension enhanced point cloud provided by an exemplary embodiment;
FIG. 6 is a schematic illustration of a radar installation provided by an exemplary embodiment;
FIG. 7 is a schematic diagram of point cloud voxelization provided by an exemplary embodiment;
fig. 8 is a schematic diagram of a neural network structure provided in an exemplary embodiment.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
Without loss of generality, this embodiment provides a radar point cloud data enhancement method for posture recognition; the flow is shown in fig. 1. An FMCW millimeter-wave sensor operating from 60 GHz to 64 GHz is used; the transmitted frame rate is 20 frames/second, each frame contains 288 chirp signals, and each chirp has 96 sampling points.
Step 1: mount the radar at a height of 1.5 m above the ground, as shown in fig. 6, and acquire a human posture point cloud data set with the radar as the first data subset. If the obtained point cloud is in rectangular coordinates, it is converted to polar coordinates:

R = √(x² + y² + z²)
θ = arctan(y / x)
φ = arctan(z / √(x² + y²))

where x, y, z are the coordinates on the x-, y- and z-axes of the rectangular system, R is the range of the radar point, θ is the horizontal angle, and φ is the pitch angle.
Compute the power density S at the point:

S = P_t / (4πR²)

where P_t is the radar transmit power and R is the range from the radar to the point.
The radar adopts a directional antenna; the antenna gain G is related to the effective area A and the radar wavelength λ by:

G = 4πA / λ²

In the radiation direction where the transmit antenna gain is G_t, the power density S₁ at a point at range R from the radar is:

S₁ = P_t G_t / (4πR²)

Assuming the human body at the point re-radiates the received echo signal omnidirectionally without loss, the echo power density S₂ at the radar receive antenna is:

S₂ = P_t G_t α / ((4π)² R⁴)

where α is the radar cross-section.
From the effective receiving area A_r of the radar receive antenna, the received echo power P_r is:

P_r = S₂ A_r = P_t G_t G_r λ² α / ((4π)³ R⁴)

where λ is the radar wavelength and G_r is the receive antenna gain.
Taking radar internal noise and external environmental interference into account finally gives the SNR of the point:

SNR = P_t G_t G_r λ² α T_meas / ((4π)³ R⁴ k T F)

where T_meas is the total measurement time, k is the Boltzmann constant, T is the antenna temperature, and F is the noise figure.
Step 2: apply distance-dimension enhancement to the first data subset. When the range changes, the signal strength changes strongly: from the known SNR₁ of a point at range d₁, the SNR₂ at range d₂ is

SNR₂ = SNR₁ · (d₁ / d₂)⁴

Therefore, after translating a point outward by a fixed distance d′, the translated signal-to-noise ratio SNR′_d is

SNR′_d = SNR_d · (d_o / (d_o + d′))⁴

where d_o is the range of the point from the radar before translation and SNR_d is the SNR before translation.
Gaussian noise is then added to the SNR to improve generalization. With the point at (x, y, z) as the origin, the Gaussian noise intensity at a neighbouring position (x̃, ỹ, z̃) is:

H_(x̃,ỹ,z̃) = 1 / ((2π)^(3/2) σ³) · exp(−((x̃−x)² + (ỹ−y)² + (z̃−z)²) / (2σ²))

SNR′_(x,y,z) = SNR_(x,y,z) + H_(x,y,z)

where σ is the point cloud variance parameter, (x, y, z) are the point coordinates, (x̃, ỹ, z̃) are the neighbouring coordinates, H_(x,y,z) is the Gaussian noise intensity at the point, and SNR_(x,y,z), SNR′_(x,y,z) are the SNR before and after adding noise.
Then, according to the radar's detection capability, the minimum point-cloud SNR the radar can actually detect, SNR_min, is taken as a reference, and points whose SNR after the above operations falls below SNR_min are removed from the data set. The human posture point cloud is clustered with a density-based clustering algorithm to remove interference points that do not belong to the body; because the posture point cloud is tightly associated in the horizontal plane but widely dispersed in the vertical plane, the modified Euclidean distance replaces the conventional Euclidean distance as the clustering distance parameter, weakening the influence of the z-axis. The result is the enhanced subset recorded as the second data subset:

D(q_i, q_j) = (x_i − x_j)² + (y_i − y_j)² + 0.25·(z_i − z_j)²

where D(q_i, q_j) is the modified Euclidean distance between points q_i and q_j, and (x_i, y_i, z_i) and (x_j, y_j, z_j) are their three-dimensional coordinates.
Step 3: apply angle-dimension enhancement to the first data subset. For a single-channel radar the SNR after enhancement, SNR′_a, is

SNR′_a = SNR_a · G_v / G_o

where G_o is the gain of the transmit-receive antenna pair before the angle adjustment, G_v is its gain after the adjustment, and SNR_a is the SNR before the adjustment.
In this embodiment the radar has 4 receive antennas and 3 transmit antennas, forming 12 transmit-receive pairs through MIMO. Antenna simulation and antenna measurement are performed in turn to obtain the pattern gain characteristics of the pairs, and the simulated pattern gain corrects the deviation of the measured pattern gain, giving the corrected pattern gain. Since the radar has multiple transmit-receive pairs, the corrected pattern gains of the different pairs are fitted according to the pairwise combinations of transmit and receive antennas, giving the pair gain functions G(θ) = {G₁(θ), G₂(θ), …, G_n(θ)}, where θ is the new angle after adjustment and G_i(θ) is the gain function of the i-th pair. The transmit-receive gains are accumulated over the pairs after the angle adjustment, so the adjusted signal-to-noise ratio SNR′_b is

SNR′_b = SNR_b · (Σ_{i=1..n} G_vi) / (Σ_{i=1..n} G_oi)

where n is the total number of transmit-receive pairs, G_oi is the gain of the i-th pair before the adjustment, G_vi is its gain after the adjustment, and SNR_b is the SNR before the adjustment.
As in step 2, noise is added to the SNR to improve generalization, outlier points that violate physical laws are removed according to the radar's detection capability, and the posture point cloud is clustered with the density-based algorithm to remove interference points that do not belong to the body, yielding the enhanced subset recorded as the third data subset. In this embodiment the radar supports both pitch-angle and horizontal-angle detection, so data enhancement can be performed simultaneously in the horizontal and pitch dimensions, and the two are combined to obtain the angle-expanded human posture point cloud data. For postures with large vertical variation, such as jumping and squatting, the pitch dimension is primary and the horizontal dimension serves as auxiliary correction; for postures with large horizontal variation, such as walking and punching, the horizontal dimension is primary and the pitch dimension serves as auxiliary correction. The angle-adjusted SNR is updated and the result taken as the third data subset.
Step 4: apply speed-dimension enhancement to the first data subset. Since the radar collects data at a fixed frame rate, fewer frames are collected for the same posture action when the body accelerates. The collected point cloud data set, ordered in time, is F = {f₁, f₂, …, f_h}, where f_i is the posture point cloud of the i-th frame and h is the total number of frames. Frames are selected from F by random sampling with proportion p = 0.8 to form a new time-ordered data set F′ = {f′₁, f′₂, …, f′_m}, where

m = ⌊p · h⌋

and the velocity of each point is updated at the same time:

v′ = v / p

where v is the original velocity of the point and v′ is its updated velocity.
Fig. 2 to 5 are schematic diagrams of an original point cloud, a distance-dimension-enhanced point cloud, an angle-dimension-enhanced point cloud, and a speed-dimension-enhanced point cloud provided in an embodiment, respectively.
Step 5: expand the second, third and fourth data subsets in combination, i.e. combine the methods of steps 2, 3 and 4 at random as needed to form point cloud data sets with different distances, angles and speeds. For human posture point clouds, speed-dimension enhancement provides the neural network with the most additional information (postures at different speeds), and distance-dimension translation changes the point cloud distribution less than angle-dimension rotation; the three dimensions are therefore selected in the order speed, angle, distance for combined expansion, with the assigned weights decreasing in that order.
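The combined expansion can be sketched as randomly composing the three enhancers with decreasing weights (the weight values and function names here are illustrative assumptions, not from the patent):

```python
import random

def combined_expand(dataset, enhancers, weights=(0.9, 0.6, 0.3), seed=None):
    """Apply each enhancer (ordered speed, angle, distance) with its
    own probability; the weights decrease in that order."""
    rng = random.Random(seed)
    for enhance, w in zip(enhancers, weights):
        if rng.random() < w:
            dataset = enhance(dataset)
    return dataset
```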
The point cloud is then preprocessed by voxelization; a schematic view is shown in Fig. 7. All human body points are traversed to obtain the maxima and minima of the three-dimensional coordinates of the posture point cloud, namely x_min, x_max, y_min, y_max, z_min and z_max. From these the length X, width Y and height Z of the human body region are obtained; the region is uniformly divided into 32 × 32 × 10 blocks, the sum of the signal-to-noise ratios of all points within a block is taken as the block's intensity value, and the three-dimensional grid matrix Π is finally obtained:

Π(I_x, I_y, I_z) = Σ_{q ∈ block(I_x, I_y, I_z)} SNR_q

where I_x, I_y and I_z are the block indices along the x-, y- and z-axes in that order.
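The voxelization step can be sketched as follows; the 32 × 32 × 10 grid and the SNR-sum intensity follow the description, while the function name and array layout are assumptions:

```python
import numpy as np

def voxelize(points, snr, grid=(32, 32, 10)):
    """Divide the body bounding box into grid blocks; each block's intensity
    is the sum of the SNRs of the points that fall inside it."""
    points = np.asarray(points, float)
    mins, maxs = points.min(axis=0), points.max(axis=0)
    extent = np.where(maxs > mins, maxs - mins, 1.0)  # guard against flat axes
    g = np.asarray(grid)
    idx = np.clip(((points - mins) / extent * g).astype(int), 0, g - 1)
    pi = np.zeros(grid)
    np.add.at(pi, (idx[:, 0], idx[:, 1], idx[:, 2]), snr)  # accumulate SNR per block
    return pi
```

`np.add.at` performs unbuffered accumulation, so several points mapping to the same block correctly sum their SNRs.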
Step 6: for common human postures, the main moving parts of the body are divided into a left arm part, a trunk part and a right arm part. The left arm, the trunk, the right arm and the whole body are each processed with a long short-term memory network; in parallel, the posture point clouds of all frames in the time window are superposed, aggregated and processed with a convolutional neural network. After these decisions, an attention mechanism module adjusts the weights of the five branches during fusion to obtain the human posture recognition result. Fig. 8 is a schematic diagram of the neural network structure provided in an embodiment.
The following provides a radar point cloud data enhancement application scenario, but the method is not limited thereto. A radar sensor is placed indoors, and original point cloud data of specific postures (jumping, walking, squatting and punching) are collected at a fixed distance and a fixed angle; the enhancement parameters, including translation distance, rotation angle and random sampling proportion, are set to generate the enhanced data set. After voxel preprocessing, the data are fed to the neural network model shown in Fig. 8 for training, and the classification results are compared. The accuracy of posture recognition rises from 90.42% on the original point cloud data to 93.29% after enhancement. Data enhancement is thus completed, greatly expanding the data set while improving accuracy.
In summary, the invention provides a radar point cloud data enhancement method for posture recognition that operates on point cloud data acquired by a radar, can enhance the distance dimension, the angle dimension, the speed dimension and any combination of them, and is reliable. After voxelization, the body data are segmented according to the characteristics of the human posture and trained with a dedicated neural network. The method greatly expands the data volume of the data set, improves recognition accuracy, provides sufficient data for subsequent research, and alleviates the shortage of data in current radar posture data sets, enabling better research at the deep learning level.
The foregoing is a further detailed description of the present invention in connection with specific preferred embodiments thereof, and it is not intended to limit the invention to the specific embodiments thereof. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the scope of the invention.

Claims (10)

1. A radar point cloud data enhancement method aiming at attitude recognition is characterized by comprising the following steps:
step 1, acquiring a human body posture point cloud data set through a radar, using the human body posture point cloud data set as a first data subset, and calculating a point cloud signal-to-noise ratio;
step 2, distance dimension enhancement is carried out on the first data subset, the signal-to-noise ratio of the point cloud is updated after the point cloud is translated outwards by a fixed distance, and then noise is added to the signal-to-noise ratio to increase generalization performance; according to the radar detection performance, eliminating abnormal points which do not accord with the physical rule to obtain a data-enhanced subset, and recording the data-enhanced subset as a second data subset;
step 3, angle dimension enhancement is carried out on the first data subset, the signal-to-noise ratio of the point cloud after angle adjustment is updated, and then noise is added to the signal-to-noise ratio to increase generalization performance; according to the radar detection performance, eliminating abnormal points which do not accord with the physical rule to obtain a data-enhanced subset, and recording as a third data subset;
step 4, performing speed dimension enhancement on the first data subset, performing down-sampling operation on the point cloud data set, and adjusting the speed value of the point cloud to serve as a fourth data subset;
step 5, preprocessing the enhanced human body attitude point cloud data by adopting a voxelization method, and dividing the human body attitude point cloud data into an M × N × L three-dimensional grid matrix;
and 6, aiming at the action characteristics of the human body posture, segmenting the three-dimensional grid matrix obtained in the step 5 according to the human body structure, and then sending the three-dimensional grid matrix to a neural network model for recognizing the human body posture for training.
2. The method for enhancing radar point cloud data for gesture recognition according to claim 1, wherein in the step 1, the calculating of the signal-to-noise ratio information of the point cloud comprises:

calculating the power density S at the point cloud:

S = P_t / (4π·R^2)

wherein P_t is the radar transmitting power and R is the distance from the radar to the point cloud;

the antenna gain G of the radar is related to the effective antenna area A and the radar wavelength λ as follows:

G = 4π·A / λ^2

with a radar transmitting antenna gain G_t, the power density S_1 at the point cloud at distance R from the radar in the radiation direction is:

S_1 = P_t·G_t / (4π·R^2)

assuming the human body at the point cloud re-radiates the received echo signal omnidirectionally without loss, the echo power density S_2 at the radar receiving antenna is:

S_2 = P_t·G_t·α / ((4π)^2·R^4)

wherein α is the radar cross section;

according to the effective receiving area A_r = G_r·λ^2/(4π) of the radar receiving antenna, the received echo power P_r is calculated:

P_r = S_2·A_r = P_t·G_t·G_r·λ^2·α / ((4π)^3·R^4)

where λ is the radar wavelength and G_r is the gain of the receiving antenna;

considering radar internal noise and external environment interference, the signal-to-noise ratio SNR of the point cloud is finally obtained:

SNR = P_t·G_t·G_r·λ^2·α·T_meas / ((4π)^3·R^4·k·T·F)

wherein T_meas is the total measurement time, k is the Boltzmann constant, T is the antenna temperature, and F is the noise figure.
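The chain of formulas in this claim collapses into the standard radar equation; a small numerical sketch follows (illustrative only, with hypothetical parameter values):

```python
import math

BOLTZMANN = 1.380649e-23  # J/K

def point_cloud_snr(Pt, Gt, Gr, lam, alpha, R, T_meas, T, F):
    """SNR = Pt*Gt*Gr*lam^2*alpha*T_meas / ((4*pi)^3 * R^4 * k * T * F),
    all quantities on a linear scale (not dB)."""
    return (Pt * Gt * Gr * lam**2 * alpha * T_meas) / \
           ((4 * math.pi)**3 * R**4 * BOLTZMANN * T * F)
```

Doubling the range R divides the SNR by 2^4 = 16, which is exactly the dependence exploited by the distance-dimension enhancement of claim 3.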
3. The radar point cloud data enhancement method for gesture recognition according to claim 1, wherein in the step 2, based on the known signal-to-noise ratio SNR_1 of a point cloud at distance d_1, the signal-to-noise ratio SNR_2 of a point cloud at distance d_2 is calculated as:

SNR_2 = SNR_1 · (d_1 / d_2)^4

therefore, after the point cloud is translated outwards by a fixed distance d′, the translated signal-to-noise ratio SNR′_d is obtained:

SNR′_d = SNR_d · d_o^4 / (d_o + d′)^4

wherein d_o is the distance of the point cloud from the radar before translation and SNR_d is the signal-to-noise ratio before translation.
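An illustrative one-liner for the translation update of this claim (the function name is assumed):

```python
def translated_snr(snr_d, d_o, d_prime):
    """SNR'_d = SNR_d * d_o^4 / (d_o + d')^4: SNR after moving a point
    outward by d' from its original range d_o."""
    return snr_d * (d_o / (d_o + d_prime)) ** 4
```

For example, translating a point from 1 m to 2 m divides its SNR by 16.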
4. The method for enhancing radar point cloud data for gesture recognition according to claim 1, wherein in the step 3, for a single-channel radar, the signal-to-noise ratio SNR′_a after angle-dimension enhancement is calculated as:

SNR′_a = SNR_a · G_v / G_o

wherein G_o is the gain of the radar transmit-receive antenna pair before angle adjustment, G_v is the gain of the radar transmit-receive antenna pair after angle adjustment, and SNR_a is the signal-to-noise ratio before angle adjustment;

antenna simulation and antenna measurement are carried out on the radar in turn to obtain the pattern gain characteristics of the transmit-receive antenna pair, and the measured pattern gain deviation is corrected with the pattern gain obtained by simulation, giving the corrected pattern gain;

if the radar has multiple transmitting and receiving antennas, the corrected pattern gain characteristics of the different transmit-receive antenna pairs are fitted according to the pairwise combinations of transmitting and receiving antennas, giving the gain functions G(θ) = {G_1(θ), G_2(θ), …, G_n(θ)}, where θ is the new angle after adjustment and G_i(θ) is the gain function of the i-th transmit-receive antenna pair; the transmit-receive gains are accumulated after the angle adjustment, and the adjusted signal-to-noise ratio SNR′_b is calculated as:

SNR′_b = SNR_b · (Σ_{i=1..n} G_vi) / (Σ_{i=1..n} G_oi)

wherein n is the total number of transmit-receive antenna pairs, G_oi is the gain of the i-th pair before angle adjustment, G_vi is the gain of the i-th pair after angle adjustment, and SNR_b is the signal-to-noise ratio before angle adjustment;

if the radar supports detection of both pitch and horizontal angles, angle adjustment is applied to the point cloud data in both the horizontal-angle and pitch-angle dimensions, and the two are combined to obtain the angle-dimension-expanded human posture point cloud data; for postures with large vertical-plane motion the pitch-angle dimension is primary and the horizontal-angle dimension provides auxiliary correction; for postures with large horizontal motion the horizontal-angle dimension is primary and the pitch-angle dimension provides auxiliary correction.
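A sketch of the multi-antenna case (illustrative; the gain lists stand in for the fitted pattern functions G_i(θ) evaluated before and after the angle adjustment):

```python
import numpy as np

def angle_adjusted_snr(snr_b, gains_before, gains_after):
    """SNR'_b = SNR_b * sum_i(G_vi) / sum_i(G_oi) over the n
    transmit-receive antenna pairs."""
    return snr_b * np.sum(gains_after) / np.sum(gains_before)
```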
5. The radar point cloud data enhancement method for attitude identification according to claim 1, wherein in the steps 2 and 3, the added noise is Gaussian noise; with the point cloud as origin, the Gaussian noise at a neighbouring position of the point cloud is:

H_{x,y,z} = (1 / ((2π)^{3/2}·σ^3)) · exp(−((x − x̃)^2 + (y − ỹ)^2 + (z − z̃)^2) / (2σ^2))

SNR′_{(x,y,z)} = SNR_{(x,y,z)} + H_{x,y,z}

wherein σ is the point cloud variance, (x, y, z) are the point cloud coordinates, (x̃, ỹ, z̃) are the coordinates of the point cloud at the neighbouring location, H_{x,y,z} is the Gaussian noise intensity at the point cloud with coordinates (x, y, z), and SNR_{(x,y,z)} and SNR′_{(x,y,z)} are the signal-to-noise ratios of the point cloud at (x, y, z) before and after adding noise, respectively.
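A sketch of the Gaussian perturbation. The normalised isotropic 3-D Gaussian kernel used here is an assumption, since the patent's exact normalisation is not recoverable from the extraction:

```python
import numpy as np

def gaussian_noise_3d(center, neighbor, sigma):
    """Noise intensity at `neighbor` for a point cloud at `center`,
    following an isotropic 3-D Gaussian with standard deviation sigma."""
    d2 = float(np.sum((np.asarray(neighbor, float) - np.asarray(center, float)) ** 2))
    norm = (2.0 * np.pi) ** 1.5 * sigma ** 3
    return np.exp(-d2 / (2.0 * sigma ** 2)) / norm
```

The intensity peaks at the point cloud itself and decays smoothly with distance, so the added SNR noise is strongest near the point.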
6. The method for enhancing radar point cloud data for gesture recognition according to claim 1, wherein in the steps 2 and 3, outliers that do not conform to physical laws are removed according to the radar detection performance, under the following conditions: taking the minimum signal-to-noise ratio SNR_min of point clouds actually detectable by the radar as reference, points whose signal-to-noise ratio after the above operations is smaller than SNR_min are removed from the point cloud data set;

the human posture point cloud is clustered with a density-based clustering algorithm to eliminate abnormal interference points that do not belong to the human body; since posture point clouds are closely associated in the horizontal plane but widely dispersed in the vertical plane, an improved Euclidean distance replaces the traditional Euclidean distance as the distance parameter of the density-based clustering algorithm, weakening the influence of the z-axis during clustering, according to the formula:

D(q_i, q_j) = (x_i − x_j)^2 + (y_i − y_j)^2 + 0.25·(z_i − z_j)^2

wherein D(q_i, q_j) is the improved Euclidean distance between point cloud q_i and point cloud q_j, and x_i, y_i, z_i and x_j, y_j, z_j are in turn the three-dimensional coordinates of q_i and q_j.
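The z-down-weighted distance of this claim is straightforward to express (illustrative helper; the function name is assumed):

```python
def modified_euclidean(qi, qj, z_weight=0.25):
    """Improved squared Euclidean distance with the z-axis attenuated,
    used as the distance parameter of the density-based clustering."""
    return ((qi[0] - qj[0]) ** 2
            + (qi[1] - qj[1]) ** 2
            + z_weight * (qi[2] - qj[2]) ** 2)
```

A vertical separation of 2 contributes only as much as a horizontal separation of 1, so clusters are allowed to stretch along the body's vertical extent.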
7. The method for enhancing radar point cloud data for gesture recognition according to claim 1, wherein in the step 4, the radar collects data at a fixed frame rate, so that when the human body accelerates, fewer frames capture the same gesture action; the collected point cloud frames are ordered in time and expressed as F = {f_1, f_2, …, f_h}, where f_i denotes the human body posture point cloud data set of the i-th frame and h is the total number of collected frames; frames are selected from F by random sampling with proportion p to form a new time-ordered human body posture point cloud data set F′ = {f_1, f_2, …, f_m}, where

m = ⌊p·h⌋

and the speed of the points in the point cloud data set is updated at the same time:

v′ = v / p

wherein v represents the original speed of the point cloud and v′ the updated speed.
8. The radar point cloud data enhancement method for attitude identification according to claim 1, wherein the second, third and fourth data subsets are combined and expanded, that is, the methods of step 2, step 3 and step 4 are combined arbitrarily as required to form point cloud data sets at different distances, angles and speeds; given the characteristics of human body posture point cloud data, speed-dimension enhancement supplies the neural network with more informative posture point clouds at different speeds, while distance-dimension translation changes the posture point cloud distribution less than angle-dimension rotation, so the three dimensions are selected for combined expansion in the order speed dimension, angle dimension, distance dimension, with the assigned weights decreasing in that order.
9. The radar point cloud data enhancement method for attitude identification according to claim 1, wherein in the step 5, a voxelization operation is performed: all human body point clouds are traversed to obtain the maxima and minima of the three-dimensional coordinates of the posture point cloud, namely x_min, x_max, y_min, y_max, z_min and z_max; the length X, width Y and height Z of the human body region are obtained, the region is uniformly divided into M × N × L blocks, and the intensity value of each block is calculated in one of the following ways: the number of point clouds in the block, the sum of the signal-to-noise ratios of all point clouds in the block, or the sum of the speeds of all point clouds in the block; the three-dimensional grid matrix Π is finally obtained:

Π = {Π(I_x, I_y, I_z)}

wherein I_x, I_y and I_z are the block indices along the x-, y- and z-axes in that order, and Π(I_x, I_y, I_z) is the intensity value of the corresponding block.
10. The radar point cloud data enhancement method for posture recognition according to claim 1, wherein in the step 6, for common human postures, the main moving parts of the human body are divided into a left arm part, a trunk part and a right arm part; the left arm, the trunk, the right arm and the whole body are each processed with a long short-term memory network, while the human posture point clouds of all frames in the time window are superposed, aggregated and processed with a convolutional neural network; after these decisions, an attention mechanism module adjusts the weights of the five parts during fusion to obtain the human posture recognition result.
CN202210884128.3A 2022-07-26 2022-07-26 Radar point cloud data enhancement method aiming at attitude identification Pending CN115220007A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210884128.3A CN115220007A (en) 2022-07-26 2022-07-26 Radar point cloud data enhancement method aiming at attitude identification


Publications (1)

Publication Number Publication Date
CN115220007A true CN115220007A (en) 2022-10-21

Family

ID=83612956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210884128.3A Pending CN115220007A (en) 2022-07-26 2022-07-26 Radar point cloud data enhancement method aiming at attitude identification

Country Status (1)

Country Link
CN (1) CN115220007A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115540875A (en) * 2022-11-24 2022-12-30 成都运达科技股份有限公司 Method and system for high-precision detection and positioning of train vehicles in station track
CN115540875B (en) * 2022-11-24 2023-03-07 成都运达科技股份有限公司 Method and system for high-precision detection and positioning of train vehicles in station track
CN116051925A (en) * 2023-01-04 2023-05-02 北京百度网讯科技有限公司 Training sample acquisition method, device, equipment and storage medium
CN116051925B (en) * 2023-01-04 2023-11-10 北京百度网讯科技有限公司 Training sample acquisition method, device, equipment and storage medium
CN117647788A (en) * 2024-01-29 2024-03-05 北京清雷科技有限公司 Dangerous behavior identification method and device based on human body 3D point cloud
CN117647788B (en) * 2024-01-29 2024-04-26 北京清雷科技有限公司 Dangerous behavior identification method and device based on human body 3D point cloud

Similar Documents

Publication Publication Date Title
CN115220007A (en) Radar point cloud data enhancement method aiming at attitude identification
CN106355151B (en) A kind of three-dimensional S AR images steganalysis method based on depth confidence network
CN103295242B (en) A kind of method for tracking target of multiple features combining rarefaction representation
CN106842165B (en) Radar centralized asynchronous fusion method based on different distance angular resolutions
CN111091105A (en) Remote sensing image target detection method based on new frame regression loss function
CN111899172A (en) Vehicle target detection method oriented to remote sensing application scene
CN107300698B (en) Radar target track starting method based on support vector machine
CN103824093B (en) It is a kind of based on KFDA and SVM SAR image target's feature-extraction and recognition methods
CN109242028A (en) SAR image classification method based on 2D-PCA and convolutional neural networks
CN107992818B (en) Method for detecting sea surface ship target by optical remote sensing image
CN114595732B (en) Radar radiation source sorting method based on depth clustering
CN107729926A (en) A kind of data amplification method based on higher dimensional space conversion, mechanical recognition system
CN114200477A (en) Laser three-dimensional imaging radar ground target point cloud data processing method
CN113486961A (en) Radar RD image target detection method and system based on deep learning under low signal-to-noise ratio and computer equipment
CN111368930B (en) Radar human body posture identification method and system based on multi-class spectrogram fusion and hierarchical learning
CN111610492A (en) Multi-acoustic sensor array intelligent sensing method and system
CN108152812B (en) Improved AGIMM tracking method for adjusting grid spacing
CN110703221A (en) Urban low-altitude small target classification and identification system based on polarization characteristics
CN113820682B (en) Millimeter wave radar-based target detection method and device
CN112379393A (en) Train collision early warning method and device
CN113271539B (en) Indoor target positioning method based on improved CNN model
CN117237902B (en) Robot character recognition system based on deep learning
CN111428627B (en) Mountain landform remote sensing extraction method and system
CN114814776B (en) PD radar target detection method based on graph attention network and transfer learning
CN116310780A (en) Optical remote sensing image ship target detection method in any direction based on contour modeling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination