CN115205891A - Personnel behavior recognition model training method, behavior recognition method and device - Google Patents

Personnel behavior recognition model training method, behavior recognition method and device

Info

Publication number
CN115205891A
Authority
CN
China
Prior art keywords
point cloud
target object
cloud data
spatial
extraction module
Prior art date
Legal status
Pending
Application number
CN202210612391.7A
Other languages
Chinese (zh)
Inventor
周安福
燕婕
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN202210612391.7A priority Critical patent/CN115205891A/en
Publication of CN115205891A publication Critical patent/CN115205891A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/415 Identification of targets based on measurements of movement associated with the target
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a personnel behavior recognition model training method, a behavior recognition method and a device. The method comprises: acquiring point cloud data generated by a millimeter wave radar over a sensing area, together with target object spatial motion data acquired based on the point cloud data; inputting the point cloud data frame by frame into a point cloud feature extraction module, inputting the target object spatial motion data at one time into a target feature extraction module, and extracting and concatenating the spatial features of both to form a combined feature sequence; adding a corresponding behavior category vector to the combined feature sequence, adding a set position vector, and inputting the result to the encoder module of the temporal feature extraction module; computing an output matrix with temporal characteristics, acquiring the first-dimension data of the output matrix, and feeding it to a linear classification layer to obtain the behavior classification result. The invention achieves contactless, continuous personnel behavior recognition, enriches the recognizable behavior types and improves recognition accuracy.

Description

Personnel behavior recognition model training method, behavior recognition method and device
Technical Field
The invention relates to the technical field of machine learning, in particular to a personnel behavior recognition model training method, a behavior recognition method and a device.
Background
Driven by rapidly developing deep learning and mobile computing technologies, many applications for monitoring the daily behavior of personnel have emerged. Personnel behavior recognition plays an important role in many fields. In medical health, for example, it can be used in rehabilitation centers or in the homes of elderly people living alone to detect falls; in daily monitoring, it can be deployed in banks, supermarkets, campuses and similar places to detect abnormal events so that remedial measures can be taken in time.
Traditional personnel behavior recognition is usually implemented with imaging technology or wearable devices. Image-based behavior recognition, however, is affected by weather and lighting conditions and exposes users to the risk of privacy leakage, while recognition based on wearable devices can feel restrictive to the user, makes a comfortable experience hard to provide, and imposes the burden of periodic charging and maintenance.
The wireless-signal sensing technology that has emerged in recent years enables contactless and continuous personnel behavior recognition. Among wireless signals, millimeter wave signals are increasingly applied to personnel behavior recognition tasks owing to their advantages in fine-grained perception, range measurement and velocity measurement. However, millimeter wave point clouds are irregular, sparse and easily affected by environmental noise, so their quality fluctuates, for example through added noise points or excessive sparsity, which in turn degrades the accuracy of personnel behavior recognition.
Therefore, a personnel behavior recognition method is needed that can recognize the daily behavior categories of a person and accurately distinguish similar behaviors.
Disclosure of Invention
In view of this, embodiments of the present invention provide a personnel behavior recognition model training method, a behavior recognition method and an apparatus, so as to eliminate or mitigate one or more defects of the prior art, in particular the problems that few behavior types can be recognized and that similar behaviors (such as walking and running, or sitting and falling) are difficult to distinguish.
In one aspect, the invention provides a training method for a personnel behavior recognition model, which comprises the following steps:
acquiring a training sample set, wherein the training sample set comprises a plurality of samples, each sample comprising continuous point cloud data of a set frame number generated by a millimeter wave radar over a sensing area, together with target object spatial motion data acquired based on the point cloud data, the target object spatial motion data comprising the spatial position coordinates, velocity information and acceleration information of a target object in the sensing area; and adding the behavior of the target object to each sample as its label;
the method comprises the steps of obtaining an initial neural network model, wherein the initial neural network model comprises a spatial feature extraction module and a time sequence feature extraction module, the spatial feature extraction module comprises a point cloud feature extraction module and a target feature extraction module, the time sequence feature extraction module comprises an encoder module and a linear classification layer, the encoder module is formed by stacking a plurality of sub-modules, and each sub-module comprises a multi-head self-attention layer and a multi-layer sensor;
inputting the point cloud data in a single sample into the point cloud feature extraction module frame by frame to extract the spatial features of the point cloud data, inputting the target object spatial motion data in the single sample into the target feature extraction module at one time to extract the spatial features of the target object spatial motion data, and concatenating the spatial features of the point cloud data with the spatial features of the target object spatial motion data to obtain a combined feature sequence for the single sample; adding a corresponding behavior category vector to the combined feature sequence, adding a set position vector, and inputting the result to the encoder module; computing an output matrix with temporal characteristics, acquiring the first-dimension data of the output matrix, and inputting it to the linear classification layer to output a behavior classification result;
and training the initial neural network model by adopting the training sample set to obtain a personnel behavior recognition model.
In some embodiments of the invention, the point cloud feature extraction module comprises a first spatial transformation network, a first multi-layer perceptron, a second spatial transformation network, a second multi-layer perceptron and a third multi-layer perceptron connected in sequence; the output of the first spatial transformation network is point-multiplied with its input and fed to the first multi-layer perceptron, and the output of the second spatial transformation network is point-multiplied with its input and fed to the second multi-layer perceptron.
In some embodiments of the invention, the point cloud feature extraction module further comprises a max-pooling layer, which aggregates the features of the point cloud data in each dimension by applying a max-pooling operation to the output of the third multi-layer perceptron.
In some embodiments of the present invention, the target feature extraction module comprises a plurality of multi-layer perceptrons, and ReLU and BN operations are performed after the processing of each multi-layer perceptron.
In some embodiments of the present invention, the target object spatial motion data is obtained using a tracking algorithm based on extended Kalman filtering, comprising:
acquiring an observed value of a state parameter of the target object detected by the millimeter wave radar, wherein the state parameter comprises a spatial position coordinate, speed information and acceleration information;
calculating, using a state transition equation, a predicted value of the target object state parameter at the current moment from the observed value of the target object state parameter at the previous moment, and determining a predicted trajectory;
acquiring one or more points in the point cloud data whose distance to the predicted trajectory is smaller than a set value, and calculating the average of the state parameters of these points as the measured value of the target object at the current moment;
and correcting the predicted value with the measured value of the state parameter using the Kalman gain to obtain an estimated value of the state parameter of the target object at the current moment, the estimated value being taken as the target object spatial motion data.
In some embodiments of the invention, the encoder module is represented by the following formulas:

$R'_m = \mathrm{MHSA}(\mathrm{LN}(R_{m-1})) + R_{m-1}, \quad m = 1, 2, \dots, M$

$R_m = \mathrm{MLP}(\mathrm{LN}(R'_m)) + R'_m, \quad m = 1, 2, \dots, M$

wherein $R_{m-1}$ denotes the output of the (m-1)-th sub-module, $R_m$ denotes the output of the m-th sub-module, $R'_m$ denotes the input value of the multi-layer perceptron in the m-th sub-module, M is the set total number of sub-modules, MHSA(·) denotes the processing of the multi-head self-attention layer, LN(·) denotes layer normalization, and MLP(·) denotes the processing of the multi-layer perceptron.
In some embodiments of the present invention, a Softmax layer is connected after the linear classification layer to normalize the classification result.
In another aspect, the present invention provides a personnel behavior recognition method, comprising the following steps:
acquiring continuous point cloud data of a set frame number generated by a millimeter wave radar over a sensing area and target object spatial motion data derived from the point cloud data, and inputting the point cloud data and the target object spatial motion data into a personnel behavior recognition model trained by the above personnel behavior recognition model training method, so as to obtain a recognition result of the behavior of active personnel in the sensing area.
In another aspect, the present invention provides an electronic device comprising a processor and a memory, the memory storing computer instructions and the processor being configured to execute the computer instructions stored in the memory; when the computer instructions are executed by the processor, the device implements the steps of any of the methods described above.
In another aspect, the invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of any of the methods described above.
The invention has the beneficial effects that at least:
the invention relates to a personnel behavior recognition model training method, a behavior recognition method and a device, comprising the following steps: acquiring point cloud data generated by the millimeter wave radar on the sensing area and target object space motion data acquired based on the point cloud data; the point cloud data is input into a point cloud feature extraction module frame by frame, the target object space motion data is input into a target feature extraction module at one time, and the space features of the point cloud data and the space features of the target object space motion data are extracted and spliced to form a combined feature sequence. And adding corresponding behavior category vectors to the combined characteristic sequence, adding the behavior category vectors to the set position vector, inputting the added behavior category vectors to an encoder module of the time sequence characteristic extraction module, calculating to obtain an output matrix with time sequence characteristics, acquiring first-dimension data of the output matrix, and inputting a linear classification layer to obtain a behavior classification result. Personnel behavior recognition is realized, and meanwhile, signals are transmitted and received by the millimeter wave radar, so that non-contact and continuous data acquisition is realized; and extracting target object space motion data from the point cloud data, and inputting the point cloud data and the target object space motion data into a personnel behavior recognition model by taking the point cloud data and the target object space motion data as data sources, so that the richness and stability of the data sources are ensured, the recognition accuracy is improved, and the recognizable types are enriched.
Further, a point cloud feature extraction module is designed in the personnel behavior recognition model to extract the point cloud features, and the point cloud data is processed with a spatial transformation network so that the features are invariant to spatial transformations of the point cloud data. The point cloud feature extraction module further comprises a max-pooling layer, which uses the max-pooling operation as a symmetric function so that the features are invariant to the unordered arrangement of the points.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the specific details set forth above, and that these and other objects that can be achieved with the present invention will be more clearly understood from the detailed description that follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principle of the invention. In the drawings:
fig. 1 is a schematic flow chart illustrating steps of a training method for a human behavior recognition model according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart illustrating steps of a method for acquiring spatial motion data of a target object based on an extended kalman filter tracking algorithm according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a method for acquiring spatial motion data of a target object based on a tracking algorithm of extended kalman filter according to an embodiment of the present invention.
Fig. 4 is a diagram of a structure of a human behavior recognition model according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating a structure of a point cloud feature extraction module according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the following embodiments and the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the structures and/or processing steps closely related to the scheme according to the present invention are shown in the drawings, and other details not so relevant to the present invention are omitted.
It should be emphasized that the term "comprises/comprising" when used herein, is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
It is also noted herein that the term "coupled," if not specifically stated, may refer herein to not only a direct connection, but also an indirect connection in which an intermediate is present.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals denote the same or similar components, or the same or similar steps.
It should be emphasized that the step labels mentioned in the following are not limitations to the order of steps, but should be understood that the steps may be executed in the order mentioned in the embodiments, may be executed in a different order from the embodiments, or may be executed simultaneously.
In order to solve the problems of the existing personnel behavior recognition technology, namely that few behavior types can be recognized and that similar behaviors (such as walking, running, sitting and falling) are hard to distinguish, the invention provides a personnel behavior recognition model training method. As shown in fig. 1, it comprises the following steps S110 to S130, where step S120 further comprises steps S121 to S122:
step S110: acquiring a training sample set, wherein the training sample set comprises a plurality of samples, each sample comprises continuous point cloud data with a set frame number generated by a millimeter wave radar to a sensing area, and target object space motion data acquired based on the point cloud data, and the target object space motion data comprises space position coordinates, speed information and acceleration information of a target object in the sensing area; the behavior of the target object is added as a label to each sample.
Step S120: acquire an initial neural network model, wherein the initial neural network model comprises a spatial feature extraction module and a temporal feature extraction module; the spatial feature extraction module comprises a point cloud feature extraction module and a target feature extraction module, the temporal feature extraction module comprises an encoder module and a linear classification layer, the encoder module is formed by stacking a plurality of sub-modules, and each sub-module comprises a multi-head self-attention layer and a multi-layer perceptron.
Step S121: input the point cloud data in a single sample into the point cloud feature extraction module frame by frame to extract the spatial features of the point cloud data, input the target object spatial motion data in the single sample into the target feature extraction module at one time to extract the spatial features of the target object spatial motion data, and concatenate the two sets of spatial features to obtain the combined feature sequence of the single sample.
Step S122: add a corresponding behavior category vector to the combined feature sequence, add the set position vector, and input the result to the encoder module; compute the output matrix with temporal characteristics, take the first-dimension data of the output matrix, and feed it to the linear classification layer to output the behavior classification result.
Step S130: train the initial neural network model with the training sample set to obtain the personnel behavior recognition model.
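To make the training flow concrete, below is a minimal PyTorch-style sketch of how steps S110 to S130 could fit together. It is an illustration under stated assumptions, not the patent's reference implementation: the class and parameter names are invented, the point cloud and target branches are the components sketched later in this description, the position vector is realized as a learned parameter (the sinusoidal alternative of equations (2) and (3) below would serve equally), and the loss and optimizer are ordinary choices the patent does not specify.

```python
import torch
import torch.nn as nn

class BehaviorRecognitionModel(nn.Module):
    """Spatial feature extraction (point cloud branch + target branch), the
    temporal encoder, and a linear classification layer over 7 behaviors."""
    def __init__(self, point_branch, target_branch, encoder,
                 d_model=128, n_frames=20, n_classes=7):
        super().__init__()
        self.point_branch = point_branch      # per-frame (48, 5) -> (64,)
        self.target_branch = target_branch    # all frames (20, 9) -> (20, 64)
        self.encoder = encoder                # stacked self-attention sub-modules
        self.n_frames = n_frames
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))       # behavior category vector
        self.pos = nn.Parameter(torch.zeros(1, n_frames + 1, d_model))  # set position vector
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, points, motion):        # points: (B, 20, 48, 5); motion: (B, 20, 9)
        b = points.shape[0]
        frame_feats = torch.stack([           # point cloud branch, frame by frame
            torch.stack([self.point_branch(points[i, t]) for t in range(self.n_frames)])
            for i in range(b)])                                   # (B, 20, 64)
        motion_feats = torch.stack([self.target_branch(motion[i]) for i in range(b)])
        seq = torch.cat([frame_feats, motion_feats], dim=2)       # (B, 20, 128)
        seq = torch.cat([self.cls_token.expand(b, -1, -1), seq], dim=1) + self.pos
        out = self.encoder(seq)               # (B, 21, 128): output matrix with temporal features
        return self.classifier(out[:, 0])     # first-dimension data -> class scores

# One illustrative training step (branch modules as sketched in the sections below):
# model = BehaviorRecognitionModel(point_branch, target_branch, encoder)
# loss = nn.CrossEntropyLoss()(model(points, motion), labels)
# loss.backward(); optimizer.step()
```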
In step S110, the millimeter wave radar is a radar operating in the millimeter wave band, usually 30 GHz to 300 GHz, with a wavelength of 1-10 mm lying between the centimeter wave and light wave bands, so it combines some of the advantages of microwave radar and optoelectronic radar. Because millimeter waves are short in wavelength within the electromagnetic spectrum, a millimeter wave radar is small, easy to integrate and offers high detection accuracy and high spatial resolution compared with a centimeter wave radar. Illustratively, the invention uses a commercial millimeter wave radar with a working frequency band of 60-64 GHz; such a device is small, simple to deploy and inexpensive, and can detect movements as small as a few tenths of a millimeter.
The millimeter wave radar transmits a Frequency Modulated Continuous Wave (FMCW) into the sensing area, receives the radar waves reflected by the target object, and acquires point cloud data of the sensing area. Illustratively, a commercial FMCW millimeter wave radar is mounted at a height of 2.5 m with a downward tilt of 15 degrees, its transmitting and receiving antennas facing the sensing area, and it outputs point cloud data at roughly 20 frames per second. Each point in a frame carries at least 5 dimensions of information: x-axis coordinate, y-axis coordinate, z-axis coordinate, velocity and signal-to-noise ratio. The velocity is derived from the Doppler-induced frequency shift of the radar wave returning to the receiving antenna, giving the speed of the target object relative to the radar; the signal-to-noise ratio is the ratio of the returned signal strength to the strength of internal and external noise, and since the point cloud depends on accurately detected reflections, a higher signal-to-noise ratio indicates stronger radar detection performance.
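For illustration, the sample layout described above can be held in NumPy arrays as follows; the shapes (48 points per frame, 20 frames per sample, 9 motion values) follow the description, while the variable names are invented for this sketch.

```python
import numpy as np

# One radar frame: 48 principal points x 5 values (x, y, z, velocity, SNR).
frame = np.zeros((48, 5), dtype=np.float32)
# One training sample: 20 consecutive frames of point clouds ...
sample_points = np.stack([frame] * 20)                 # shape (20, 48, 5)
# ... plus the per-frame target state from the tracker described below:
# position, velocity and acceleration along x, y and z -> 9 values per frame.
sample_motion = np.zeros((20, 9), dtype=np.float32)    # shape (20, 9)
label = "walking"  # one of: walking, running, jumping, standing, squatting, falling, stooping
```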
In some embodiments, in step S110, the target object spatial motion data is obtained using a tracking algorithm based on extended Kalman filtering, as shown in fig. 2, comprising steps S111 to S114:
step S111: and acquiring an observed value of a state parameter of the target object detected by the millimeter wave radar, wherein the state parameter comprises a space position coordinate, speed information and acceleration information.
Step S112: and calculating the predicted value of the state parameter of the target object at the current moment according to the observed value of the state parameter of the target object at the previous moment by adopting a state transition equation and determining a predicted track.
Step S113: and one or more points with the distance from the predicted track to the point cloud data smaller than a set value are obtained, and the average value of the state parameters of each point is calculated to be used as the measured value of the current target object.
Step S114: and correcting the predicted value and the measured value of the state parameter by using Kalman gain to obtain an estimated value of the state parameter of the target object at the current moment, and taking the estimated value as the spatial motion data of the target object.
In steps S111 to S114, the target object spatial motion data is a more accurate estimate obtained by correcting the predicted value with the measured value through a dynamically updated Kalman gain. The main steps can be divided into four parts: state vector prediction, point cloud association, new target assignment and parameter updating. The state vector consists of the spatial position coordinates of the target object (coordinates along the x, y and z axes), its velocity vector and its acceleration vector, all detected by the millimeter wave radar. The predicted value of the target object at the current moment is computed from the state parameters of the target object at the previous moment through the state transition equation, and the predicted trajectory of the target object is determined. Since there may be several persons, i.e., several target objects, within the sensing area, each target object generates its own predicted value and predicted trajectory. Points in the current point cloud are then associated with the predicted trajectories according to a set distance rule; ideally, the points generated by a target object are associated with the predicted trajectory of that same object. The distance may be the Euclidean distance: when the distance between a point and a predicted trajectory in the dimensional space of the point cloud data is smaller than the set distance, the point may be associated with that trajectory. The average of the state parameters of the points associated with a predicted trajectory is computed and taken as the measured value of the target object at the current moment. The predicted value and the measured value are then combined via the Kalman gain to obtain the estimated value of the target object at the current moment, which serves as the target object spatial motion data.

For example, as shown in fig. 3, for a first and a second target object in the sensing area of the millimeter wave radar, the state parameters at the previous moment are denoted $G_1(n-1)$ and $G_2(n-1)$ respectively. Each state parameter has dimension 1 × 9, comprising: coordinate, velocity and acceleration along the x axis; coordinate, velocity and acceleration along the y axis; and coordinate, velocity and acceleration along the z axis. Based on extended Kalman filtering, the state parameters of the two targets at the current moment are predicted from their state parameters at the previous moment, giving predicted values $G_{1,apr}(n-1)$ and $G_{2,apr}(n-1)$. $G_1(n-1) \rightarrow G_{1,apr}(n-1)$ is taken as the first predicted trajectory of the first target object, and $G_2(n-1) \rightarrow G_{2,apr}(n-1)$ as the second predicted trajectory of the second target object. The Euclidean distances from every point in the point cloud to the two predicted trajectories are computed; for point $u_5$, for instance, the distances to the first and second predicted trajectories are $d_{s1}$ and $d_{s2}$ respectively.

In the figure, the points within the set distance of the first predicted trajectory are $u_1$, $u_4$, $u_5$, $u_8$ and $u_9$ (here the Euclidean distance of each point to $G_{1,apr}(n-1)$ can be computed directly); averaging the state parameters of $u_1$, $u_4$, $u_5$, $u_8$ and $u_9$ yields $\bar{G}_{1,apr}(n-1)$, the measured value of the first target object. The points within the set distance of the second predicted trajectory are $u_2$, $u_3$, $u_5$ and $u_7$; averaging their state parameters yields $\bar{G}_{2,apr}(n-1)$ (not shown), the measured value of the second target object. Taking the first target object as an example, $\bar{G}_{1,apr}(n-1)$ is used to correct $G_{1,apr}(n-1)$, giving the estimated value $G_1(n)$ of the first target object, which is taken as its target spatial motion data.
Parameters involved in the extended Kalman filtering, such as the noise covariance matrix and the Kalman gain, are updated at each time step and then participate in the prediction at the next time step.
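As a rough sketch of steps S111 to S114, the snippet below implements one predict-associate-update cycle with NumPy. The patent does not disclose its state transition equation or noise parameters, so a constant-acceleration motion model (under which the extended Kalman filter reduces to an ordinary linear Kalman filter), the gating threshold, the noise covariances and the choice to gate and average point positions only (rather than the full state parameters of the associated points) are all assumptions of this sketch.

```python
import numpy as np

DT = 0.05    # frame interval in seconds, assumed from the ~20 fps radar output
GATE = 0.5   # association distance threshold in meters, an assumed value

def transition_matrix():
    """Constant-acceleration transition for one axis [p, v, a], block-diagonal
    over x, y, z so it acts on the 1 x 9 state vector."""
    f = np.array([[1.0, DT, 0.5 * DT * DT],
                  [0.0, 1.0, DT],
                  [0.0, 0.0, 1.0]])
    F = np.zeros((9, 9))
    for k in range(3):
        F[3 * k:3 * k + 3, 3 * k:3 * k + 3] = f
    return F

def predict(state, cov, F, Q):
    """State vector prediction: propagate the previous estimate one frame ahead."""
    return F @ state, F @ cov @ F.T + Q

def associate(points_xyz, predicted_xyz):
    """Point cloud association: keep points within GATE of the predicted position."""
    dist = np.linalg.norm(points_xyz - predicted_xyz, axis=1)
    return points_xyz[dist < GATE]

def update(state_pred, cov_pred, z, H, R):
    """Correct the prediction with the averaged measurement via the Kalman gain."""
    y = z - H @ state_pred                       # innovation
    S = H @ cov_pred @ H.T + R
    K = cov_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    state = state_pred + K @ y
    cov = (np.eye(9) - K @ H) @ cov_pred
    return state, cov

# One tracking step for a single target (illustrative values throughout):
F = transition_matrix()
Q, R = np.eye(9) * 1e-3, np.eye(3) * 1e-2        # assumed process/measurement noise
H = np.zeros((3, 9)); H[0, 0] = H[1, 3] = H[2, 6] = 1.0  # observe x, y, z positions
state, cov = np.zeros(9), np.eye(9)              # previous-moment estimate
points_xyz = np.random.rand(48, 3) * 4.0         # stand-in for one frame's points
state_pred, cov_pred = predict(state, cov, F, Q)
near = associate(points_xyz, state_pred[[0, 3, 6]])
if len(near):
    z = near.mean(axis=0)                        # measured value = average of gated points
    state, cov = update(state_pred, cov_pred, z, H, R)
```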
In some embodiments there is a new target assignment step. After the point cloud has been associated with the predicted trajectories, some points in the point cloud data belong to no predicted trajectory, i.e., their Euclidean distance to every predicted trajectory exceeds the set distance. These unassociated points are clustered with the DBSCAN algorithm (Density-Based Spatial Clustering of Applications with Noise): with the distance radius set to r, regions in which the density of point cloud points within radius r exceeds a set value are divided into clusters, a cluster being defined as the maximal set of points satisfying the distance and density requirements. The DBSCAN algorithm filters out the low-density regions of the point cloud data and extracts the high-density regions, from which new candidate objects are determined. The average spatial position of the points in a cluster is computed as the spatial position parameter of the candidate object; when a candidate target object appears in multiple consecutive frames more than a preset number of times, it is considered to actually exist in the sensing area, identification information is assigned to it, and it participates in the method as a new target object.
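A brief sketch of the new target assignment using scikit-learn's DBSCAN is given below; the radius r and the density threshold are assumed values, and persistence across frames (the preset appearance count) is left to the caller.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def find_candidate_targets(unassociated_xyz, r=0.3, min_points=5):
    """Cluster the points not associated with any predicted trajectory; every
    sufficiently dense cluster becomes a candidate new target whose spatial
    position parameter is the mean position of its member points."""
    labels = DBSCAN(eps=r, min_samples=min_points).fit_predict(unassociated_xyz)
    candidates = []
    for cluster_id in set(labels) - {-1}:        # label -1 marks low-density noise
        members = unassociated_xyz[labels == cluster_id]
        candidates.append(members.mean(axis=0))  # candidate spatial position
    return candidates
```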
After the sample set is obtained, the behavior of the target object is added to each sample as its label. In this embodiment the behaviors of the target object include: walking, running, jumping, standing, squatting, falling and stooping. Walking and running, and squatting and falling, are similar motions; the millimeter wave radar offers high detection accuracy and high spatial resolution, and using the point cloud data together with the target object spatial data as the data source for feature extraction ensures the richness and stability of the collected data, so similar behaviors can be recognized accurately.
In step S120, as shown in fig. 4, the internal structure of the initial neural network model is defined.
In some embodiments, the point cloud feature extraction module comprises a first spatial transformation network, a first multi-layer perceptron, a second spatial transformation network, a second multi-layer perceptron and a third multi-layer perceptron connected in sequence; the output of the first spatial transformation network is point-multiplied with its input and fed to the first multi-layer perceptron, and the output of the second spatial transformation network is point-multiplied with its input and fed to the second multi-layer perceptron.
Fig. 5 shows the structure of the point cloud feature extraction module; the method for extracting the spatial features of the point cloud data proceeds as follows:
the point cloud data is input into the point cloud extraction module, for example, each frame of point cloud data is set as a matrix with a dimension of 48 × 5, and main 48 points in each frame are extracted, each point includes the above-mentioned 5 pieces of dimension information, so in step S121, the point cloud data is input into the point cloud extraction module in a frame-by-frame manner, that is, a matrix with a dimension of 48 × 5 is input each time, and the number of frames is set as 20 frames.
First, the point cloud data passes through a first Spatial Transformation Network (STN), which learns an alignment network and generates a corresponding spatial transformation parameter matrix. This matrix is point-multiplied with the point cloud data, i.e., a matrix multiplication is applied to the x-, y- and z-coordinate dimensions of the point cloud data to achieve spatial alignment, after which a first multi-layer perceptron maps the spatial features of the point cloud data from 5 to 16 dimensions. A second spatial transformation network then learns an alignment matrix that is point-multiplied with the 16-dimensional point cloud data to align the spatial features, after which the second and third multi-layer perceptrons successively map the spatial feature dimensions of the point cloud data to 32 and 64 dimensions.
In some embodiments, the point cloud feature extraction module further comprises a max-pooling layer that aggregates features of the point cloud data in various dimensions using a max-pooling operation on the output of the third multi-layer perceptron.
After the point cloud data has been lifted to 64 dimensions, the max-pooling operation is applied as a symmetric function, outputting one frame of point cloud spatial features of dimension 1 × 64; once the spatial features of 20 consecutive frames have been extracted, point cloud spatial features of dimension 20 × 64 are obtained.
To guarantee invariance to the ordering of the point cloud data, a max-pooling symmetric network based on a symmetric function is adopted: the max-pooling layer yields the same result regardless of the order in which the points of each frame are input. Specifically, the point cloud data calibrated by the alignment matrix of the previous layer passes through two MLP layers to extract point cloud features, and the max-pooling layer then aggregates all point cloud data in the high-dimensional feature space into the final global feature. Meanwhile, the high-dimensional point cloud data obtained from the original point cloud through several MLP layers is a redundant representation of the five-dimensional point cloud in the high-dimensional space, so aggregating this information with a symmetric operation reduces information loss.
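The following PyTorch sketch shows one way the described point cloud feature extraction module could look. It is hedged: the internal widths of the spatial transformation networks are invented, the alignment here is applied to all five input dimensions rather than only the x, y and z coordinates, and per-point features are mean-pooled inside the STN, all of which are simplifying assumptions rather than the patent's exact design.

```python
import torch
import torch.nn as nn

class STN(nn.Module):
    """Spatial transformation network: learns a k x k alignment matrix that is
    point-multiplied with the incoming point features."""
    def __init__(self, k):
        super().__init__()
        self.k = k
        self.net = nn.Sequential(nn.Linear(k, 32), nn.ReLU(),
                                 nn.Linear(32, k * k))
    def forward(self, x):                       # x: (n_points, k)
        m = self.net(x).mean(dim=0).view(self.k, self.k)
        return m + torch.eye(self.k)            # bias toward the identity alignment

class PointCloudFeatureExtractor(nn.Module):
    """One frame (48 x 5) -> one 1 x 64 spatial feature, per the description:
    STN -> MLP(5->16) -> STN -> MLP(16->32) -> MLP(32->64) -> max pooling."""
    def __init__(self):
        super().__init__()
        self.stn1, self.stn2 = STN(5), STN(16)
        self.mlp1 = nn.Sequential(nn.Linear(5, 16), nn.ReLU())
        self.mlp2 = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
        self.mlp3 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
    def forward(self, pts):                     # pts: (48, 5), one frame
        x = pts @ self.stn1(pts)                # align the input features
        x = self.mlp1(x)                        # 5 -> 16 dims
        x = x @ self.stn2(x)                    # align the 16-dim features
        x = self.mlp3(self.mlp2(x))             # 16 -> 32 -> 64 dims
        return x.max(dim=0).values              # symmetric max pool -> (64,)
```

Because the final max pooling is a symmetric function, permuting the rows of pts leaves the output unchanged, which is exactly the order-invariance property discussed above.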
In some embodiments, ignoring the alignment operation of the spatial transformation network, the output of the point cloud feature extraction module can be expressed as equation (1):

$f(P) = \max_{i=1,\dots,n} \{\mathrm{mlp}(p_i)\} \quad (1)$

wherein $P = \{p_i\}, i = 1, 2, \dots, n$, is a set of points comprising n points; max(·) denotes taking the maximum of the specified values; and mlp(·) denotes the processing of the multi-layer perceptron.
A Transformer neural network is introduced in the temporal feature extraction module. The neural networks in common use include the Recurrent Neural Network (RNN) and the Convolutional Neural Network (CNN), and most personnel behavior recognition methods adopt the RNN or one of its variants (such as the LSTM, Long Short-Term Memory network). When extracting temporal features from a continuous point cloud sequence, a traditional RNN-based network can only predict the next time step from the result of the previous one, cannot be parallelized, and is therefore slow and inefficient. The Transformer neural network uses a self-attention mechanism to achieve fast parallelism; moreover, unlike the CNN, whose models here can typically only be stacked 2-3 layers deep, the Transformer network can be deepened, so global information can be captured and the model accuracy further improved.
The Transformer neural network model has an encoder-decoder structure. The encoder consists of several identical sub-modules, each containing two sub-layers: a multi-head self-attention layer and a multi-layer perceptron (MLP layer). The two sub-layers have separate roles in the model: the multi-head self-attention layer only aggregates, while the MLP layer only transforms. The MLP layer is a feed-forward artificial neural network that maps a set of input vectors to a set of output vectors; an MLP can be viewed as a directed graph composed of multiple layers of nodes, each layer fully connected to the next, where every node except the input nodes is a neuron with a nonlinear activation function.
In some embodiments, the encoder module in the temporal feature extraction module is formed by stacking two sub-modules, each comprising a multi-head self-attention layer and an MLP layer, where the multi-head self-attention layer has 8 heads and the model dimension of each self-attention head is 16.
In some embodiments, the target feature extraction module comprises a plurality of multi-layer perceptrons, and ReLU and BN operations are performed after the processing of each multi-layer perceptron.
In step S121, since the target object spatial motion data obtained from the point cloud data is one-dimensional with dimension 1 × 9, the 20 frames of target object spatial motion data are input to the target feature extraction module at one time. In some embodiments, two MLP layers map the spatial features of the target object spatial motion data from 9 dimensions to 32 and then 64 dimensions, outputting target object spatial motion features of dimension 20 × 64.
Since spatial features are extracted separately from the point cloud data and from the target object spatial motion data, after extraction the 20 × 64 spatial features of the point cloud data and the 20 × 64 spatial features of the target object spatial motion data are concatenated, giving a combined feature sequence of dimension 20 × 128.
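A short sketch of the target feature extraction branch and the concatenation step, assuming the 9 -> 32 -> 64 mapping described above; the layer names are invented, and the exact ordering of ReLU and BN inside the patent's module is not disclosed.

```python
import torch
import torch.nn as nn

class TargetFeatureExtractor(nn.Module):
    """Maps the 1 x 9 per-frame target state to 64 dims via two MLP layers
    (9 -> 32 -> 64), each followed by ReLU and batch normalization."""
    def __init__(self):
        super().__init__()
        self.fc1, self.bn1 = nn.Linear(9, 32), nn.BatchNorm1d(32)
        self.fc2, self.bn2 = nn.Linear(32, 64), nn.BatchNorm1d(64)
    def forward(self, x):                         # x: (20, 9), all frames at once
        x = self.bn1(torch.relu(self.fc1(x)))     # (20, 32)
        return self.bn2(torch.relu(self.fc2(x)))  # (20, 64)

# Concatenating the per-frame spatial features of the two branches:
point_feats = torch.randn(20, 64)                 # stand-in for the point cloud branch
motion_feats = TargetFeatureExtractor()(torch.randn(20, 9))
combined = torch.cat([point_feats, motion_feats], dim=1)   # (20, 128)
```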
In step S122, a corresponding behavior category vector is added to the combined feature sequence so that the temporal feature module can learn the corresponding personnel behavior from the extracted features. The combined feature sequence is also added to a set position vector, which serves to learn the relative position of each frame's spatial features. The temporal feature module uses a Transformer neural network, which has none of the iterative operations of an RNN; instead it captures the relations between inputs with a pure self-attention mechanism and processes the inputs in parallel, so the self-attention mechanism by itself cannot capture the order of the input combined feature sequence. Adding the set position vector to the combined feature sequence lets the position vector learn the relative position of each frame of the sequence while the temporal features are being extracted, helping the temporal feature extraction module determine the position of each frame of the combined feature sequence; the added position vector must follow the recognition rules learned by the personnel behavior recognition model.
In some embodiments, the position vector for the combined feature sequence can also be computed by positional encoding with sine and cosine functions of different frequencies, as in equations (2) and (3):

$PE_{(pos,\,2i)} = \sin\left(pos / 10000^{2i/d_{model}}\right) \quad (2)$

$PE_{(pos,\,2i+1)} = \cos\left(pos / 10000^{2i/d_{model}}\right) \quad (3)$

wherein PE (positional encoding) denotes the position code; pos denotes the absolute position of a spatial feature in the combined feature sequence, pos = 0, 1, 2, ...; $d_{model}$ denotes the dimension of the combined feature sequence, here $d_{model} = 128$ as above; 2i and 2i+1 distinguish even and odd dimensions, with i indexing the dimensions of the combined feature sequence, $i \in [1, 128]$; sin(·) denotes the sine function and cos(·) the cosine function.
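Equations (2) and (3) can be computed as below; the sequence length of 21 assumes the behavior category vector is prepended to the 20 frame features, which the description implies but does not state numerically.

```python
import torch

def positional_encoding(seq_len=21, d_model=128):
    """Sinusoidal position codes per equations (2) and (3)."""
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)  # (seq_len, 1)
    two_i = torch.arange(0, d_model, 2, dtype=torch.float32)       # even indices 2i
    div = torch.pow(10000.0, two_i / d_model)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos / div)   # equation (2): even dimensions
    pe[:, 1::2] = torch.cos(pos / div)   # equation (3): odd dimensions
    return pe
```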
The combined feature sequence carrying the behavior category vector and the position vector is input to the encoder module of the temporal feature module and passes through the two sub-modules in turn, each consisting of a self-attention layer followed by an MLP layer; the output of the encoder module is therefore expressed as equations (4) and (5):

$R'_m = \mathrm{MHSA}(\mathrm{LN}(R_{m-1})) + R_{m-1}, \quad m = 1, 2, \dots, M \quad (4)$

$R_m = \mathrm{MLP}(\mathrm{LN}(R'_m)) + R'_m, \quad m = 1, 2, \dots, M \quad (5)$

wherein $R_{m-1}$ denotes the output of the (m-1)-th sub-module, $R_m$ the output of the m-th sub-module, and $R'_m$ the input value of the multi-layer perceptron in the m-th sub-module; M is the set total number of sub-modules; MHSA(·) denotes the processing of the multi-head self-attention layer, LN(·) layer normalization, and MLP(·) the processing of the multi-layer perceptron.
In some embodiments, after the self-attention layer and the MLP layer, the combined feature sequence also passes through a residual module in which the data is connected residually: as the depth of the personnel behavior recognition model increases, gradients may vanish or explode, and residual connections allow the model to be optimized more effectively.
In some embodiments, a Softmax layer follows the linear classification layer to normalize the classification result. Specifically, the Softmax layer maps each behavior category score output by the linear classification layer into the (0, 1) interval, which amounts to normalizing the scores into a probability distribution: the probabilities of the behavior categories sum to 1, and the category with the highest probability is selected as the final behavior category. Illustratively, if the probability values for walking, running, jumping, standing, squatting, falling and stooping after the Softmax layer are [0.4, 0.2, 0.1, 0.1, 0.1, 0.05, 0.05] respectively, comparing the probability values determines that the current behavior of the person is walking.
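As a toy illustration of the Softmax step (the logit values are invented; only the shapes follow the description):

```python
import torch

BEHAVIORS = ["walking", "running", "jumping", "standing",
             "squatting", "falling", "stooping"]

logits = torch.tensor([1.5, 0.8, 0.1, 0.1, 0.1, -0.6, -0.6])  # linear layer output
probs = torch.softmax(logits, dim=0)           # maps scores into (0, 1), summing to 1
prediction = BEHAVIORS[int(probs.argmax())]    # most probable behavior category
print(prediction, [round(p, 2) for p in probs.tolist()])
```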
The invention also provides a personnel behavior identification method, which comprises the following steps:
acquiring continuous point cloud data of a set frame number generated by the millimeter wave radar over the sensing area and target object spatial motion data acquired based on the point cloud data, and inputting the point cloud data and the target object spatial motion data into a personnel behavior recognition model trained by any of the above personnel behavior recognition model training methods, to obtain a recognition result of the behavior of active personnel in the sensing area.
In accordance with the above methods, the present invention also provides an apparatus comprising a computer device, the computer device including a processor and a memory, the memory storing computer instructions and the processor being configured to execute the computer instructions stored in the memory; when the computer instructions are executed by the processor, the apparatus implements the steps of the methods described above.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the foregoing methods. The computer-readable storage medium may be a tangible storage medium such as random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, floppy disks, hard disks, removable storage disks, CD-ROMs, or any other form of storage medium known in the art.
In summary, the present invention provides a personnel behavior recognition model training method, a behavior recognition method and a device, comprising: acquiring point cloud data generated by the millimeter wave radar over the sensing area, together with target object spatial motion data acquired based on the point cloud data; inputting the point cloud data frame by frame into a point cloud feature extraction module, inputting the target object spatial motion data at one time into the target feature extraction module, and extracting and concatenating the spatial features of both to form a combined feature sequence. A corresponding behavior category vector is added to the combined feature sequence, the set position vector is added, and the result is input to the encoder module of the temporal feature extraction module; an output matrix with temporal characteristics is computed, and its first-dimension data is taken and fed to the linear classification layer to obtain the behavior classification result. Personnel behavior recognition is thereby achieved; at the same time, because the signals are transmitted and received by the millimeter wave radar, data acquisition is contactless and continuous. Extracting the target object spatial motion data from the point cloud data and inputting both into the personnel behavior recognition model as data sources ensures the richness and stability of the data sources, improves recognition accuracy and enriches the recognizable behavior types.
Further, a point cloud feature extraction module is designed in the personnel behavior recognition model to extract the point cloud features, and the point cloud data is processed with a spatial transformation network so that the features are invariant to spatial transformations of the point cloud data. The point cloud feature extraction module further comprises a max-pooling layer, which uses the max-pooling operation as a symmetric function so that the features are invariant to the unordered arrangement of the points.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein may be implemented as hardware, software, or combinations thereof. Whether this is done in hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments in the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made to the embodiment of the present invention by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A training method for a personnel behavior recognition model is characterized by comprising the following steps:
acquiring a training sample set, wherein the training sample set comprises a plurality of samples, each sample comprising continuous point cloud data of a set frame number generated by a millimeter wave radar over a sensing area, together with target object spatial motion data acquired based on the point cloud data, the target object spatial motion data comprising the spatial position coordinates, velocity information and acceleration information of a target object in the sensing area; and adding the behavior of the target object to each sample as its label;
the method comprises the steps of obtaining an initial neural network model, wherein the initial neural network model comprises a spatial feature extraction module and a time sequence feature extraction module, the spatial feature extraction module comprises a point cloud feature extraction module and a target feature extraction module, the time sequence feature extraction module comprises an encoder module and a linear classification layer, the encoder module is formed by stacking a plurality of sub-modules, and each sub-module comprises a multi-head self-attention layer and a multi-layer sensor;
inputting the point cloud data in a single sample into the point cloud feature extraction module frame by frame to extract the spatial features of the point cloud data, inputting the target object spatial motion data in the single sample into the target feature extraction module at one time to extract the spatial features of the target object spatial motion data, and concatenating the spatial features of the point cloud data with the spatial features of the target object spatial motion data to obtain a combined feature sequence for the single sample; adding a corresponding behavior category vector to the combined feature sequence, adding a set position vector used for learning the relative position of the spatial features of each frame, and inputting the result to the encoder module; computing an output matrix with temporal characteristics, acquiring the first-dimension data of the output matrix, and inputting it to the linear classification layer to output a behavior classification result;
and training the initial neural network model by adopting the training sample set to obtain a personnel behavior recognition model.
2. The personnel behavior recognition model training method according to claim 1, wherein the point cloud feature extraction module comprises a first spatial transformation network, a first multi-layer perceptron, a second spatial transformation network, a second multi-layer perceptron, and a third multi-layer perceptron which are connected in sequence; the output and the input of the first spatial transformation network are subjected to point multiplication and the result is input into the first multi-layer perceptron, and the output and the input of the second spatial transformation network are subjected to point multiplication and the result is input into the second multi-layer perceptron.
3. The personnel behavior recognition model training method according to claim 2, wherein the point cloud feature extraction module further comprises a max pooling layer, and features of the point cloud data in each dimension are aggregated by applying a max pooling operation to the output of the third multi-layer perceptron.
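Claims 2 and 3 describe a PointNet-style extractor. A compact sketch under assumed layer widths follows; reading the claimed "point multiplication" as applying the learned transform matrix to its input (as in PointNet's T-Net) is an interpretive assumption.

# Hypothetical PointNet-like extractor for claims 2-3 (PyTorch); sizes assumed.
import torch
import torch.nn as nn

class TNet(nn.Module):
    """Spatial transformation network: predicts a k x k transform per cloud."""
    def __init__(self, k):
        super().__init__()
        self.k = k
        self.net = nn.Sequential(nn.Linear(k, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU())
        self.out = nn.Linear(128, k * k)

    def forward(self, x):                      # x: (B, N, k)
        g = self.net(x).max(dim=1).values      # global descriptor (B, 128)
        t = self.out(g).view(-1, self.k, self.k)
        return t + torch.eye(self.k, device=x.device)  # bias toward identity

class PointCloudFeatures(nn.Module):
    def __init__(self, out_dim=128):
        super().__init__()
        self.stn1 = TNet(3)
        self.mlp1 = nn.Sequential(nn.Linear(3, 64), nn.ReLU())
        self.stn2 = TNet(64)
        self.mlp2 = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.mlp3 = nn.Sequential(nn.Linear(64, out_dim), nn.ReLU())

    def forward(self, pts):                    # pts: (B, N, 3)
        x = torch.bmm(pts, self.stn1(pts))     # first transform applied to its input
        x = self.mlp1(x)
        x = torch.bmm(x, self.stn2(x))         # second transform applied to its input
        x = self.mlp3(self.mlp2(x))
        return x.max(dim=1).values             # max pooling over points -> (B, out_dim)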
4. The personnel behavior recognition model training method according to claim 1, wherein the target feature extraction module comprises a plurality of multi-layer perceptrons, and ReLU and batch normalization (BN) operations are carried out after the processing of each multi-layer perceptron.
5. The personnel behavior recognition model training method according to claim 1, wherein the target object spatial motion data is obtained by a tracking algorithm based on extended Kalman filtering, comprising the following steps:
acquiring an observed value of a state parameter of the target object detected by the millimeter-wave radar, wherein the state parameter comprises spatial position coordinates, speed information, and acceleration information;
calculating, by a state transition equation, a predicted value of the target object state parameter at the current moment from the observed value at the previous moment, and determining a predicted track;
acquiring one or more points in the point cloud data whose distance from the predicted track is smaller than a set value, and taking the average value of the state parameters of those points as the measured value of the target object at the current moment;
and correcting the predicted value with the measured value by using the Kalman gain to obtain an estimated value of the state parameter of the target object at the current moment, the estimated value serving as the target object spatial motion data.
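The predict/associate/correct loop of claim 5 can be sketched as below. For brevity this uses a linear constant-acceleration model and a Euclidean distance gate; a true extended Kalman filter would linearize a nonlinear radar measurement model, and the state layout, noise levels, and threshold here are all assumptions.

# Illustrative (simplified, linear) Kalman tracking step for claim 5 (NumPy).
import numpy as np

dt = 0.1                               # frame interval (assumed)
# State x = [px, py, pz, vx, vy, vz, ax, ay, az]; constant-acceleration transition.
F = np.eye(9)
for i in range(3):
    F[i, i + 3] = dt
    F[i, i + 6] = 0.5 * dt ** 2
    F[i + 3, i + 6] = dt
H = np.eye(9)                          # radar reports position/speed/acceleration
Q = 0.01 * np.eye(9)                   # process noise (assumed)
R = 0.1 * np.eye(9)                    # measurement noise (assumed)

def track_step(x, P, frame_points, gate=0.5):
    """One predict/associate/correct step; frame_points is (N, 9)."""
    x_pred = F @ x                     # predicted value on the track
    P_pred = F @ P @ F.T + Q
    # Associate: keep points whose position lies within `gate` of the prediction.
    d = np.linalg.norm(frame_points[:, :3] - x_pred[:3], axis=1)
    near = frame_points[d < gate]
    if len(near) == 0:
        return x_pred, P_pred          # no gated points: coast on the prediction
    z = near.mean(axis=0)              # measured value: mean of gated points
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)      # corrected estimate
    P_new = (np.eye(9) - K @ H) @ P_pred
    return x_new, P_new                # x_new feeds the spatial-motion data stream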
6. The personnel behavior recognition model training method according to claim 1, wherein the encoder module is expressed by the following formulas:
$\tilde{R}_m = \mathrm{MHSA}(\mathrm{LN}(R_{m-1})) + R_{m-1}, \quad m = 1, \ldots, M$
$R_m = \mathrm{MLP}(\mathrm{LN}(\tilde{R}_m)) + \tilde{R}_m, \quad m = 1, \ldots, M$
wherein $R_{m-1}$ denotes the output of the (m-1)-th sub-module, $R_m$ denotes the output of the m-th sub-module, $\tilde{R}_m$ denotes the input value of the multi-layer perceptron in the m-th sub-module, $M$ is the set total number of sub-modules, $\mathrm{MHSA}(\cdot)$ denotes the processing of the multi-head self-attention layer, $\mathrm{LN}(\cdot)$ denotes layer normalization, and $\mathrm{MLP}(\cdot)$ denotes the processing of the multi-layer perceptron.
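One sub-module of this recursion corresponds to a pre-norm transformer block; a minimal sketch, assuming a feature width and head count, is:

# Pre-norm transformer sub-module matching the claim-6 recursion (PyTorch); sizes assumed.
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d=160, heads=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, r):                             # r = R_{m-1}
        h = self.ln1(r)
        r_tilde = self.attn(h, h, h)[0] + r           # R~_m = MHSA(LN(R_{m-1})) + R_{m-1}
        return self.mlp(self.ln2(r_tilde)) + r_tilde  # R_m = MLP(LN(R~_m)) + R~_m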
7. The personnel behavior recognition model training method according to claim 1, wherein a Softmax layer is connected after the linear classification layer to normalize the classification result.
8. A personnel behavior recognition method, comprising the following steps:
acquiring continuous point cloud data of a set number of frames generated by a millimeter-wave radar for a sensing area, obtaining target object spatial motion data from the point cloud data, and inputting the point cloud data and the target object spatial motion data into a personnel behavior recognition model trained by the personnel behavior recognition model training method according to any one of claims 1 to 7, to obtain a recognition result of the behavior of a person active in the sensing area.
9. An electronic device comprising a processor and a memory, wherein the memory stores computer instructions and the processor is configured to execute the computer instructions stored in the memory, and wherein the device implements the steps of the method according to any one of claims 1 to 8 when the computer instructions are executed by the processor.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202210612391.7A 2022-05-31 2022-05-31 Personnel behavior recognition model training method, behavior recognition method and device Pending CN115205891A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210612391.7A CN115205891A (en) 2022-05-31 2022-05-31 Personnel behavior recognition model training method, behavior recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210612391.7A CN115205891A (en) 2022-05-31 2022-05-31 Personnel behavior recognition model training method, behavior recognition method and device

Publications (1)

Publication Number Publication Date
CN115205891A true CN115205891A (en) 2022-10-18

Family

ID=83576244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210612391.7A Pending CN115205891A (en) 2022-05-31 2022-05-31 Personnel behavior recognition model training method, behavior recognition method and device

Country Status (1)

Country Link
CN (1) CN115205891A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115496170A (en) * 2022-11-17 2022-12-20 中南民族大学 Human body posture recognition method and system, electronic equipment and storage medium
CN115496170B (en) * 2022-11-17 2023-02-17 中南民族大学 Human body posture recognition method and system, electronic equipment and storage medium
CN116184352A (en) * 2023-04-26 2023-05-30 武汉能钠智能装备技术股份有限公司四川省成都市分公司 Radio frequency target detection system based on track estimation
CN116184352B (en) * 2023-04-26 2023-08-22 武汉能钠智能装备技术股份有限公司四川省成都市分公司 Radio frequency target detection system based on track estimation
CN117158967A (en) * 2023-07-25 2023-12-05 北京邮电大学 Personnel pressure non-sensing continuous monitoring method and system based on millimeter wave sensing
CN117158967B (en) * 2023-07-25 2024-06-04 北京邮电大学 Personnel pressure non-sensing continuous monitoring method and system based on millimeter wave sensing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination