CN111160258A - Identity recognition method, device, system and storage medium - Google Patents


Info

Publication number
CN111160258A
CN111160258A (application CN201911398786.6A)
Authority
CN
China
Prior art keywords
radio frequency
recognized
frequency signal
frame sequence
action frame
Prior art date
Legal status
Pending
Application number
CN201911398786.6A
Other languages
Chinese (zh)
Inventor
杨大业
宋建华
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201911398786.6A
Publication of CN111160258A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, with arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K17/0029: Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose an identity recognition method, apparatus, system, and storage medium. The identity recognition method includes: acquiring a radio frequency signal stream reflected by an object to be recognized; inputting the radio frequency signal stream into a trained action-frame generation network to obtain a first 3D action frame sequence; performing posture recognition on the object to be recognized at least according to the first 3D action frame sequence to obtain the posture of the object; and determining the identity of the object to be recognized according to its posture.

Description

Identity recognition method, device, system and storage medium
Technical Field
The present application relates to the field of electronic technology, and relates to, but is not limited to, a method, apparatus, system, and storage medium for identity recognition.
Background
Home theater equipment such as notebooks, televisions, and home entertainment centers typically interacts with users through camera-based human motion recognition and tracking systems. The general procedure is: the user's actions are recognized from images captured by the camera, so that better interaction and user experience can be provided based on those actions. Camera-based human motion recognition and tracking systems, however, usually work only under suitable lighting conditions; in poor light the camera cannot accurately recognize a person's actions. The prior art commonly addresses this with Near Infrared (NIR) techniques such as NIR illumination. Although near-infrared light is an electromagnetic wave lying between visible light and mid-infrared light, its band is close to the visible range, so a near-infrared emitting device itself generates some visible light, which interferes with the home theater device recognizing people's actions in dark conditions; the near-infrared technique is therefore difficult to apply in practice.
Disclosure of Invention
In view of this, embodiments of the present application provide an identity recognition method, apparatus, system, and storage medium.
The technical scheme of the embodiment of the application is realized as follows:
In a first aspect, an embodiment of the present application provides an identity recognition method, including: acquiring a radio frequency signal stream reflected by an object to be recognized; inputting the radio frequency signal stream into a trained action-frame generation network to obtain a first 3D action frame sequence; performing posture recognition on the object to be recognized at least according to the first 3D action frame sequence to obtain the posture of the object; and determining the identity of the object to be recognized according to its posture.
In a second aspect, an embodiment of the present application provides an identity recognition apparatus, including: a first acquisition module configured to acquire a radio frequency signal stream reflected by an object to be recognized; a generation module configured to input the radio frequency signal stream into a trained action-frame generation network to obtain a first 3D action frame sequence; a recognition module configured to perform posture recognition on the object to be recognized at least according to the first 3D action frame sequence to obtain the posture of the object; and a determination module configured to determine the identity of the object to be recognized according to its posture.
In a third aspect, an embodiment of the present application provides an identity recognition system including a radio frequency transceiver and an electronic device. The radio frequency transceiver is configured to send a radio frequency signal to the object to be recognized and receive the radio frequency signal stream it reflects. The electronic device is configured to acquire the reflected radio frequency signal stream; input it into a trained action-frame generation network to obtain a first 3D action frame sequence; perform posture recognition on the object to be recognized at least according to the first 3D action frame sequence to obtain its posture; and determine the identity of the object from that posture.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the above method.
The identity recognition method, apparatus, device, and storage medium provided by the embodiments of the present application are suitable for environments where video detection equipment cannot be deployed, such as dark or unlit environments. Wireless detection equipment is deployed under such conditions to detect action characteristics, and the required action-frame generation network can be trained before deployment. When the wireless detection equipment is in operation, action recognition accuracy is therefore improved, and more convenient and effective human-computer interaction is achieved.
Drawings
Fig. 1 is a schematic flow chart illustrating an implementation of an identity recognition method according to an embodiment of the present application;
FIG. 2A is a graph illustrating the position of an object obtained by using RF signals according to an embodiment of the present disclosure;
fig. 2B is a schematic flow chart illustrating an implementation of another identity recognition method according to an embodiment of the present application;
fig. 2C is a schematic flow chart illustrating an implementation of another identity recognition method according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating an implementation of a method for generating a network by training an action framework according to an embodiment of the present application;
FIG. 4 is a flow chart of generating network training for an action framework according to an embodiment of the present disclosure;
FIG. 5A is a schematic diagram of an embodiment of identifying motion characteristics using a wireless device and a video device;
FIG. 5B is a flowchart illustrating an action feature detection method for deploying a wireless detection device according to an embodiment of the present application;
fig. 5C is a flowchart of detecting an action characteristic when the wireless detection device and the video detection device are deployed at the same time in the embodiment of the present application;
fig. 6 is a schematic structural diagram of a component of an identity recognition apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware entity of an identification system according to an embodiment of the present application.
Detailed Description
When identifying a person's identity, there are situations such as recognizing a sniper in a dark environment or an office worker who has been stationary for a long time. In a dark environment, a camera system cannot be used to identify a person; but if a radio frequency transceiver is used to acquire radio frequency signals, accurate identification can be achieved without being affected by lighting conditions. Radio frequency signals also adapt well to the environment and offer good concealment.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be understood that some of the embodiments described herein are only for explaining the technical solutions of the present application, and are not intended to limit the technical scope of the present application.
The embodiment provides an identity recognition method. Fig. 1 is a schematic view of an implementation flow of an identity recognition method provided in an embodiment of the present application, and as shown in fig. 1, the method includes:
s101, acquiring a radio frequency signal stream reflected by an object to be identified;
A wireless recognition device is deployed. First, the device transmits a radio frequency signal toward the object to be recognized; the signal is reflected back when it encounters the object. Finally, the device acquires the radio frequency signals reflected by the object, and these signals, accumulated in the time domain, form the radio frequency signal stream.
Step S102, inputting the radio frequency signal stream into a trained action frame generation network to obtain a first 3D action frame sequence;
and inputting the radio frequency signal stream reflected by the object to be identified into an action frame generation network, wherein the action frame generation network is obtained by training a video signal stream acquired by the camera equipment in advance. The radio frequency signal flow is processed by the action frame generation network to obtain a first 3D action frame sequence. Here, the first 3D motion frame sequence is generated by the system from the acquired radio frequency signals.
Step S103, performing gesture recognition on the object to be recognized at least according to the first 3D action frame sequence to obtain the gesture of the object to be recognized;
and performing gesture recognition on the object to be recognized according to a first 3D action frame sequence obtained by the radio frequency signal to obtain the gesture of the object to be recognized. For example: hand-held mobile phone, sit-and-sit PC, hand-held pistol, lying posture, etc.
And S104, determining the identity of the object to be recognized according to the posture of the object to be recognized.
The identity of the object to be recognized is determined from its posture. For example, a person who has been sitting at a PC for a long time can be identified as an office worker, and a person lying down holding a pistol can be identified as a sniper.
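As a toy illustration of this step, the mapping from recognized posture to identity category could be sketched as a lookup table. The posture labels and identity categories below are hypothetical examples, not defined by the patent:

```python
# Hypothetical lookup from a recognized posture label to an identity
# category; the labels are illustrative, not from the patent.
POSTURE_TO_IDENTITY = {
    "sitting_at_pc_long_time": "office worker",
    "lying_holding_pistol": "sniper",
    "holding_mobile_phone": "phone user",
}

def identify(posture: str) -> str:
    """Return the identity category for a posture, or 'unknown'."""
    return POSTURE_TO_IDENTITY.get(posture, "unknown")
```

A real system would of course learn this mapping rather than hard-code it; the table only makes the posture-to-identity step concrete.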
The identity recognition method provided by this embodiment is suitable for environments where video detection equipment cannot be deployed, such as dark or unlit environments. Wireless detection equipment is deployed under such conditions to detect action characteristics, and the required action-frame generation network can be trained before deployment, improving action recognition accuracy during operation and enabling more convenient and effective human-computer interaction.
The embodiment provides an identity recognition method. Fig. 2B is a schematic view of an implementation flow of another identity recognition method provided in the embodiment of the present application, and as shown in fig. 2B, the method includes:
step S201, sending a radio frequency signal to the object to be identified through a radio frequency transceiver;
the radio frequency transceiver is used for transmitting radio frequency signals to the object to be identified. The radio frequency transceiver may use two sets of antennas for receiving and transmitting radio frequency signals, wherein each set of antennas has two orthogonal parts I and Q, one imaginary part and one real part. Here, two sets of antennas with a certain distance are the most basic configuration for positioning, and in practical application, multiple sets of antennas such as three sets, four sets, etc. may be used. If multiple sets of antennas are used to transmit and receive RF signals, the accuracy of the transmitted and received signals can be greatly improved.
Step S202, receiving a radio frequency signal flow reflected by the object to be identified;
the coded radio frequency signals can detect objects at different distances. By detecting the radio frequency signal reflected on the target object, the distance and motion of the object can be determined. The radio frequency signal is transmitted to the object to be identified and reflects the radio frequency signal to the radio frequency transceiver. And the radio frequency transceiver receives the radio frequency signal flow reflected by the object to be identified.
Step S203, determining a signal intensity heat map according to the radio frequency signal flow;
referring to fig. 2A, the horizontal axis 201 is time, the vertical axis 202 is frequency, the transmitted signal 203, the received signal 204, the frequency shift 205 of the time delay mapping between the transmitted signal and the received signal, the signal flight time 206 of the transmitted signal, the transmitted signal 203 frequency is linear with time (slope of fig. 2A). The time delay between the two signals, the transmitted signal 203 and the received signal 204, is mapped to a frequency shift (fb)205 that can be obtained by processing the received rf signal using fast fourier transform.
Signal time of flight (To) ═ fb/slope; wherein,/represents a division symbol;
here, the slope is an amount representing the degree to which a straight line is inclined with respect to the coordinate axis. It is usually expressed as the tangent of the angle between the straight line and the coordinate axis, or the ratio of the difference between the ordinate and the abscissa of two points.
The signal flight time multiplied by the light speed is equal to the signal flight path; wherein x represents a multiplication symbol;
the signal flight path is twice the distance of the object from the transmitting antenna.
This results in a distribution of object positions due to the multiple path clutter of the rf signal. The signal strength heatmap is generated by the effect of the superposition of multiple signals, with more signal being obtained and more red on the signal strength heatmap.
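The time-of-flight and distance relations above can be sketched in a few lines. This is a minimal sketch assuming an FMCW-style chirp; the slope and beat-frequency values are arbitrary illustrative numbers:

```python
# Distance from the beat frequency of a linear chirp, as described above:
# To = fb / slope, and To * c covers the round trip, so distance is half.
C = 3.0e8  # speed of light, m/s

def object_distance(beat_freq_hz: float, chirp_slope_hz_per_s: float) -> float:
    """Distance to the reflecting object in meters."""
    time_of_flight = beat_freq_hz / chirp_slope_hz_per_s  # seconds
    return time_of_flight * C / 2.0

# Example: chirp slope 1e12 Hz/s, beat frequency 20 kHz -> 3 m.
distance_m = object_distance(20e3, 1e12)
```

Each resolvable beat frequency in the FFT output contributes one such distance, and the superposition over antennas and paths populates the heatmap.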
Step S204, inputting the signal intensity heat map into the convolutional neural network CNN to obtain the space-time characteristics of the object to be identified;
convolutional Neural Networks (CNN) are a class of feed forward Neural Networks (fed forward Neural Networks) that contain convolution computations and have a deep structure, and are one of the algorithms that represent deep learning. Convolutional neural networks have a characteristic learning ability, and can perform translation invariant classification on input information according to a hierarchical structure thereof, and are also called "translation invariant artificial neural networks".
The convolutional neural network is a deep neural network with a convolutional structure, the convolutional structure can reduce the amount of memory occupied by the deep network, and three key operations are that local receptive fields are adopted, weight sharing is adopted, and a pooling layer is adopted, so that the number of parameters of the network is effectively reduced, and the overfitting problem of a model is relieved.
The input layer reads in the regularized image, each neuron of each layer takes a group of small local adjacent units of the previous layer as input, namely local receptive field and weight sharing, the neuron extracts some basic visual features such as edges, angular points and the like, and the features are used by the neurons of the higher layer later. The convolutional neural network obtains a feature map by a convolution operation, and at each position, the units from different feature maps obtain different types of features respectively. A convolutional layer usually contains a plurality of feature maps with different weight vectors, so that richer features of the image can be retained. The back of the convolutional layer is connected with the pooling layer for down-sampling operation, so that the resolution of the image can be reduced, the parameter quantity can be reduced, and the robustness of translation and deformation can be obtained. The alternating distribution of the convolution layer and the pooling layer leads the number of the characteristic maps to be gradually increased and the resolution to be gradually reduced, thus the structure is a double pyramid.
Inputting the signal intensity heatmap into the convolutional neural network yields the space-time features of the object to be recognized.
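The local feature extraction a CNN layer performs on the heatmap can be illustrated with a naive 2D convolution. This is a toy sketch, not the patent's network; the heatmap and kernel are made up:

```python
import numpy as np

def conv2d_valid(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2D cross-correlation, the core operation a CNN layer
    applies to extract local features from a signal-strength heatmap."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 "heatmap" with a vertical intensity edge, and an edge kernel.
heatmap = np.zeros((5, 5))
heatmap[:, 2:] = 1.0
edge_kernel = np.array([[1.0, -1.0]])
features = conv2d_valid(heatmap, edge_kernel)  # strongest response at the edge
```

Stacking many such kernels, interleaved with pooling, yields the feature-map pyramid described above; applying them per time step gives the temporal dimension of the space-time features.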
Step S205, inputting the space-time features of the object to be recognized into the Region Proposal Network (RPN) to obtain candidate frame envelope regions;
the RPN is essentially a non-category object detector based on a sliding window, and the RPN network realizes that a convolutional neural network is used for directly generating a candidate window, namely a space-time feature map output by the convolutional neural network is used as an input, and a window is directly extracted from the space-time feature map to obtain a candidate frame envelope area.
Step S206, inputting the candidate frame envelope area into the frame extraction network to obtain a first 3D action frame sequence;
a cyclic neural network (RNN) based on Long-Short Term Memory (LSTM) can be adopted to build a framework extraction network, so that effective characteristics can be learned, a dynamic process of a time domain can be modeled, and end-to-end behavior recognition and detection can be realized. The framework extraction Network may be composed of a main Network (MainLSTM Network), a Temporal Attention sub-Network (Temporal Attention), and a Spatial Attention sub-Network (Spatial Attention). Wherein the main network is used for extracting, time-domain correlation utilization and final classification of the features. A time domain attention subnetwork is used to assign the appropriate importance to the different frames. A spatial attention subnetwork is used to assign the appropriate importance to the different nodes.
Inputting the candidate frame envelope region into a frame extraction network to obtain a first 3D action frame sequence, wherein the first 3D action frame sequence is a 3D action frame sequence obtained based on a radio frequency signal.
Step S207, inputting the first 3D action frame sequence into action feature detection equipment to obtain the posture features of the object to be recognized, wherein the posture features refer to action features changing along with time;
the motion characteristic detection device is used for detecting the attitude characteristic of the object to be recognized, and the attitude characteristic of the object to be recognized can be obtained by inputting the first 3D motion frame sequence obtained based on the radio frequency signal into the motion characteristic detection device.
S208, determining the posture of the object to be recognized according to the posture characteristic of the object to be recognized;
according to the obtained posture characteristics of the object to be recognized, the posture of the object to be recognized can be determined, for example: holding the mobile phone by hand, using the PC in a sitting state, lying down, and the like.
And S209, determining the identity of the object to be recognized according to the posture of the object to be recognized.
From the posture of the object to be recognized, its identity may be determined, for example: a person using a PC in a sitting position for a long time is likely an office worker.
The action-feature detection approach using wireless detection equipment provided by the embodiments of the present application is suitable for environments where video detection equipment cannot be deployed, such as dark or unlit environments. The detection equipment is deployed under such conditions, and the required action-frame generation network can be trained before deployment, improving action recognition accuracy during operation and enabling more convenient and effective human-computer interaction.
Fig. 2C is a schematic view of an implementation flow of another identity recognition method provided in the embodiment of the present application, and as shown in fig. 2C, the method includes:
step S211, sending a radio frequency signal to the object to be identified through a radio frequency transceiver;
step S212, receiving a radio frequency signal stream reflected by the object to be identified;
step S213, acquiring the action video stream of the object to be identified, which is time-synchronized with the radio frequency signal stream, through a camera system;
and acquiring a motion video stream of the object to be identified by using a camera system while acquiring the radio frequency signal stream by using a radio frequency device.
Step S214, analyzing the motion video stream to obtain a second 3D motion frame sequence;
the action video stream is a multi-frame picture acquired by the camera system within a period of time, and according to the acquired video frame within the period of time, posture judgment and 3D action frame extraction are carried out to obtain a second 3D action frame sequence. Here, the second 3D motion frame sequence is a 3D motion frame sequence obtained based on the video stream.
Step S215, carrying out gesture recognition on the object to be recognized according to the first 3D action frame sequence and the second 3D action frame sequence to obtain the gesture of the object to be recognized;
and simultaneously inputting the first 3D action frame sequence and the second 3D action frame sequence to the action characteristic detection equipment, and performing gesture recognition on the object to be recognized to obtain the gesture of the object to be recognized.
Step S216, determining the identity of the object to be recognized according to the posture of the object to be recognized.
The action-feature detection approach that deploys wireless and video detection equipment simultaneously is suitable for environments where video detection equipment can be deployed. During action detection, the action-frame generation network can be trained in real time with video data, yielding a higher-precision network. Combining the 3D action frame sequences generated from the video stream and the wireless stream effectively provides a real-time, high-precision line of sight; action recognition is more accurate, action features are extracted more precisely, and the user's identity can be judged more reliably.
Fig. 3 is a schematic flow chart of an implementation process of a method for generating a network by training an action framework according to an embodiment of the present application, where as shown in fig. 3, the method includes:
s301, acquiring a radio frequency signal flow reflected by a sample object through a radio frequency transceiver;
the sample object is used for training the action framework generation network to use, and the sample can be selected according to actual use. For example: a person sitting still, a soldier keeping an aiming posture.
And acquiring the radio frequency signal flow reflected by the sample object by using a radio frequency transceiver.
Step S302, acquiring a motion video stream of the sample object time-synchronized with the radio frequency signal stream through a camera system;
the camera system acquires a motion video stream of a sample object which is time-synchronized with the radio frequency signal stream, that is, the objects acquired by the camera system and the radio frequency system are the same sample object at the same time.
Step S303, analyzing an action frame in the action video stream of the sample object to obtain a 3D action frame sequence of the sample object;
since the motion frame accuracy of the sample object obtained by the motion video stream is high, an accurate 3D motion frame sequence of the sample object is obtained first by analyzing the motion frame in the motion video stream of the sample object.
And S304, taking the radio frequency signal flow of the sample object and the 3D motion frame sequence of the sample object as a training sample set, and training a motion frame generation network.
The accurate 3D action frame sequence of the sample object and the sample object's radio frequency signal stream are used together as the training sample set to train the action-frame generation network.
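The supervision pattern here, video-derived frames acting as labels for time-synchronized RF inputs, can be sketched with a linear model standing in for the action-frame generation network. Everything below (dimensions, synthetic data, the linear map) is an illustrative assumption, not the patent's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: RF features captured at the same instants as the
# video-derived 3D frames that serve as ground-truth labels.
n, d_rf, d_pose = 200, 8, 6
true_map = rng.normal(size=(d_rf, d_pose))      # unknown RF -> pose mapping
rf_feats = rng.normal(size=(n, d_rf))           # from the RF signal stream
pose_labels = rf_feats @ true_map               # from the video branch

# Plain gradient descent on mean-squared error.
W = np.zeros((d_rf, d_pose))
for _ in range(500):
    grad = rf_feats.T @ (rf_feats @ W - pose_labels) / n
    W -= 0.1 * grad

err = float(np.abs(W - true_map).max())  # shrinks toward zero as W fits
```

The real network is a deep CNN/RPN/LSTM pipeline rather than a linear map, but the training signal flows the same way: video teaches, RF learns.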
In the embodiments of the present application, a video-stream 3D action frame sequence of the sample object is obtained from sample-object video shot by the camera, and the action-frame generation network is trained against that sequence. This improves the accuracy of the 3D action frame sequences extracted from wireless signals, so the accuracy of acquiring human motion from wireless signals can be improved even in the absence of light.
In this method, a Radio Frequency (RF) signal is used as input, the action-frame generation network generates a 3D human skeleton frame, and the network is trained using human 3D frames derived from ordinary images acquired at the same time. An action-frame generation network trained on ordinary images greatly improves action recognition accuracy and can be used in unlit or dim environments. Fig. 4 is a flowchart of the training of the action-frame generation network provided by an embodiment of the present application; as shown in fig. 4, the workflow is as follows:
s401, acquiring real-time radio frequency signals reflected by a human body by using a radio frequency transceiver;
A radio frequency transceiver is a communication device whose receiving and transmitting parts are mounted in a single housing or rack. The transmitting part emits radio frequency signals; when they meet a human body, real-time radio frequency signals are reflected back to the receiving part. The transceiver generally uses two sets of transceiving channels, from which two sets of orthogonal signals can be obtained. Orthogonal signals are two carriers of the same frequency with a 90-degree phase difference; transmitting with orthogonal signals improves spectrum utilization. In many radar, sonar, and communication systems, the receiver's intermediate-frequency output is converted into two orthogonal baseband signals, i.e., I and Q channels, for detection. Because the phase information of the signal is preserved, the two baseband signals can be used for coherent integration; an IQ receiver using quadrature detection therefore has a greater dynamic range and higher accuracy than a receiver without it.
Step S402, obtaining a signal intensity heat map (heatmap) according to the real-time radio frequency signal;
The radio frequency transceiver processes the received real-time radio frequency signal to obtain two sets of orthogonal signals. The signal intensity is computed from these two orthogonal signals, and a signal intensity heat map is then produced from the computed intensities. The heat map concisely aggregates a large amount of data and represents it with a graduated color band; the result is generally clearer than plotting discrete points directly, and it visually conveys the density or frequency of the spatial data.
The formula for obtaining the signal intensity heat map (heatmap) from the real-time radio frequency signal is as follows:

Amplitude = sqrt(I^2 + Q^2)

where I and Q are the two sets of orthogonal signals, sqrt denotes the square-root operation, and ^ denotes exponentiation.
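As an illustrative sketch only (not part of the patent's implementation), the per-cell amplitude computation above can be written as follows; the helper name `iq_amplitude_heatmap` is hypothetical:

```python
import numpy as np

def iq_amplitude_heatmap(i_samples, q_samples):
    """Combine the I and Q quadrature channels into per-cell
    signal strength: amplitude = sqrt(I^2 + Q^2)."""
    i_arr = np.asarray(i_samples, dtype=float)
    q_arr = np.asarray(q_samples, dtype=float)
    return np.sqrt(i_arr ** 2 + q_arr ** 2)

# A 2x2 grid of I/Q samples -> amplitude heat map
heat = iq_amplitude_heatmap([[3.0, 0.0], [1.0, 6.0]],
                            [[4.0, 5.0], [0.0, 8.0]])
print(heat)  # heat[0][0] == 5.0, heat[1][1] == 10.0
```

The amplitude grid would then be color-mapped to produce the heat map image fed to the network.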
Step S403, processing the signal intensity heat map by adopting an action frame generation network to obtain a 3D action frame sequence;
The action frame generation network consists of three parts. First, a Convolutional Neural Network (CNN) for feature extraction extracts spatio-temporal features of the radio frequency signal from the heatmap: the spatial component is the distribution of signal intensity over a region, the change of this distribution over time is the temporal component, and together they form the spatio-temporal features. Second, a Region Proposal Network (RPN) obtains candidate frame envelope regions from the spatio-temporal features. Third, a frame extraction network extracts the 3D action frame sequence from the candidate envelope regions.
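The three-stage pipeline described above can be sketched with simple stand-ins for each learned stage. All function names here are hypothetical, and the numeric operations merely illustrate the data flow between stages, not the trained networks themselves:

```python
import numpy as np

def cnn_spatiotemporal_features(heatmaps):
    # Stand-in for the CNN stage: pool each T x H x W heat-map stack
    # into a coarse spatial grid per time step (real system: learned filters).
    t, h, w = heatmaps.shape
    return heatmaps.reshape(t, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def rpn_candidate_regions(features, threshold):
    # Stand-in for the region-proposal stage: grid cells whose
    # activation exceeds a threshold become candidate envelope regions.
    return [tuple(idx) for idx in np.argwhere(features > threshold)]

def skeleton_from_regions(regions):
    # Stand-in for the frame-extraction stage: emit one 3D key point
    # (t, y, x, placeholder depth) per candidate region.
    return [(t, y, x, 0.0) for (t, y, x) in regions]

heatmaps = np.zeros((2, 4, 4))
heatmaps[0, 0, 0] = 1.0          # one strong reflection in frame 0
feats = cnn_spatiotemporal_features(heatmaps)
regions = rpn_candidate_regions(feats, threshold=0.1)
skeleton = skeleton_from_regions(regions)
print(skeleton)
```

Each stage consumes the previous stage's output, mirroring the heatmap → spatio-temporal features → envelope regions → 3D frame sequence flow in the text.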
Step S404, acquiring a motion video stream of the human body, time-synchronized with the RF signal, using a camera system;
While the radio frequency transceiver acquires the RF signal, the camera system synchronously acquires the motion video stream of the human body.
Step S405, completing the posture judgment of the human body according to the motion video stream;
The camera system shoots the human motion under illumination, and the captured data are analyzed to obtain the posture judgment of the human body. Posture judgment is the recognition of how an action changes over time, for example: sitting for 20 minutes, or holding an aiming posture for 5 minutes.
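A minimal sketch of this kind of posture judgment, assuming per-frame posture labels have already been recognized from the video (the helper name and label strings are invented for illustration):

```python
def summarize_postures(labels, frame_seconds):
    """Collapse a per-frame posture label sequence into
    (posture, duration-in-seconds) runs, e.g. [('sit', 1200.0)]."""
    runs = []
    for label in labels:
        if runs and runs[-1][0] == label:
            # Same posture continues: extend the current run.
            runs[-1] = (label, runs[-1][1] + frame_seconds)
        else:
            # Posture changed: start a new run.
            runs.append((label, frame_seconds))
    return runs

# Four video frames sampled 2 s apart: three seated, one aiming
print(summarize_postures(['sit', 'sit', 'sit', 'aim'], frame_seconds=2.0))
# [('sit', 6.0), ('aim', 2.0)]
```

The resulting runs capture exactly the "action changes over time" description above, such as sitting for 20 minutes.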
Step S406, acquiring a video-stream 3D action frame sequence according to the posture judgment and the 3D action frame;
The posture judgment of the human body and the 3D action frame are two different recordings of the same human action acquired at the same time, the 3D action frame being generated from the wireless signal collected simultaneously. The video-stream 3D action frame sequence is obtained from the posture judgment of the human body together with the 3D action frame.
Step S407, training the action frame generation network with the video-stream 3D action frame sequence.
Training the action frame generation network with the video-stream 3D action frame sequence can greatly improve the action recognition accuracy of the motion detection system.
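A toy sketch of such supervised training, with the video-derived sequence as the target; the linear "network" and every name below are illustrative assumptions standing in for the real action frame generation network:

```python
import numpy as np

def train_step(weights, rf_batch, video_skeletons, lr=0.1):
    # Toy linear 'network': predicted skeleton = weights * RF features.
    # The video-derived 3D frame sequence supplies the supervision target.
    pred = weights * rf_batch
    grad = 2 * np.mean((pred - video_skeletons) * rf_batch)  # d(MSE)/dw
    return weights - lr * grad

w = 0.0
rf = np.array([1.0, 2.0, 3.0])       # stand-in RF features
target = 2.0 * rf                     # video-derived reference coordinates
for _ in range(200):
    w = train_step(w, rf, target)
print(round(w, 3))  # w converges toward 2.0, matching the video supervision
```

The point of the sketch is only the supervision pattern: the RF-based prediction is repeatedly pulled toward the camera-derived sequence, which is how the video stream improves the RF network's accuracy.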
In the present application, the video-stream 3D action frame sequence is obtained from video shot by the camera, and the action frame generation network is trained on this sequence; this improves the accuracy of the 3D action frame sequence extracted from the wireless signal, so that human actions can be obtained from wireless signals with improved accuracy even without light.
Fig. 5A is a schematic diagram of recognizing action features using a wireless device and a video device according to an embodiment of the present application; as shown in fig. 5A, it includes a wireless data stream generation module 501, a video data stream generation module 502, an action feature detection module 503, and a gesture feature recognition module 504.
Fig. 5B is a flowchart of action feature detection when only the wireless detection device is deployed, according to an embodiment of the present application; as shown in fig. 5B, the workflow is as follows:
Step S501, acquiring a real-time radio frequency signal reflected by a human body using a radio frequency transceiver;
as shown in fig. 5A, an RF modulation transceiver is disposed in the wireless data stream generating module 501 for acquiring real-time radio frequency signals reflected by a human body.
Step S502, obtaining a signal intensity heat map according to the real-time radio frequency signal;
as shown in fig. 5A, in the wireless data stream generating module 501, the system obtains a signal strength heatmap according to the real-time radio frequency signal received by the RF modulation transceiver.
Step S503, processing the signal intensity heat map by adopting an action frame generation network to obtain a 3D action frame sequence;
as shown in fig. 5A, in the wireless data stream generating module 501, the system processes the signal strength heatmap by using the motion frame generating network, so as to obtain a 3D motion frame sequence.
Step S504, inputting the 3D action frame sequence as a feature into the action feature detection device for action feature extraction;
As shown in fig. 5A, the 3D action frame sequence of the human body is input as a feature to the action feature detection module 503, which performs action feature extraction; the real-time action of the detected human body can then be determined from the extracted action features, for example: holding a mobile phone, sitting still using a PC, lying down, and other postures.
Step S505, judging the user identity according to the combination of the posture and time features obtained by posture feature extraction.
As shown in fig. 5A, in the gesture feature recognition module 504, the system can determine the identity of the detected human body by combining the posture obtained by posture feature extraction with the time features, for example: an office worker uses a PC for a long time in a sitting state.
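A simplified sketch of identity judgment from combined posture and time features; the profile format, names, and thresholds are all invented for illustration:

```python
def infer_identity(posture_runs, profiles):
    """Match (posture, duration-in-seconds) runs against simple
    identity profiles: a profile matches when its posture is held
    for at least min_seconds somewhere in the sequence."""
    for name, (posture, min_seconds) in profiles.items():
        if any(p == posture and d >= min_seconds for p, d in posture_runs):
            return name
    return 'unknown'

profiles = {
    # An office worker: seated PC use for at least 30 minutes
    'office_worker': ('sit_using_pc', 1800.0),
}
runs = [('stand', 60.0), ('sit_using_pc', 2400.0)]
print(infer_identity(runs, profiles))  # office_worker
```

This mirrors the example in the text: the posture (sitting at a PC) combined with the time feature (long duration) yields the identity judgment.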
The action feature detection mode in which only the wireless detection device is deployed is suitable for environments where video detection devices cannot be deployed, for example dark or unlit environments. When the action feature detection device is deployed under such conditions, the required action frame generation network can be trained before deployment; during operation, action recognition accuracy is improved while more convenient and effective human-machine interaction is achieved.
Fig. 5C is a flowchart of detecting an action characteristic when the wireless detection device and the video detection device are deployed at the same time in the embodiment of the present application, and as shown in fig. 5C, a workflow is described as follows:
Step S511, inputting the 3D action frame sequences generated from the video data stream and the wireless data stream as features into the action feature detection part for action feature extraction;
as shown in fig. 5A, first, an RF modulation transceiver is deployed in the wireless data stream generating module 501 for acquiring real-time radio frequency signals reflected by a human body; a camera system is deployed in the video data stream generation module 502 for acquiring a video data stream. Next, the 3D motion frame sequence generated in the wireless data stream generation module 501 and the 3D motion frame sequence generated in the video data stream generation module 502 are input to the motion feature detection section at the same time to perform motion feature extraction.
Step S512, judging the user identity according to the combination of the posture and time features obtained by posture feature extraction.
The action feature detection mode that deploys both the wireless detection device and the video detection device is suitable for environments where a video detection device can be deployed. During action detection, the action frame generation network can be trained in real time with video data, yielding a higher-precision network. Combining the 3D action frame sequences generated from the video data stream and the wireless data stream effectively provides a real-time, high-precision line of sight; action recognition is more precise, action feature extraction is more accurate, and the user identity can be judged more accurately.
Based on the foregoing embodiments, an identity recognition processing apparatus is provided in an embodiment of the present application, where the apparatus includes modules and sub-modules included in the modules, and the identity recognition processing apparatus may be implemented by a processor in an electronic device; of course, the implementation can also be realized through a specific logic circuit; in implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 6 is a schematic structural diagram of an identity recognition apparatus provided in an embodiment of the present application, and as shown in fig. 6, the apparatus 600 includes a first obtaining module 601, a generating module 602, an identifying module 603, and a determining module 604, where:
a first obtaining module 601, configured to obtain a radio frequency signal stream reflected by an object to be identified;
a generating module 602, configured to input the radio frequency signal stream to a trained action frame generation network, so as to obtain a first 3D action frame sequence;
the recognition module 603 is configured to perform gesture recognition on the object to be recognized at least according to the first 3D motion frame sequence, so as to obtain a gesture of the object to be recognized;
the determining module 604 is configured to determine the identity of the object to be recognized according to the posture of the object to be recognized.
Based on the foregoing embodiments, an embodiment of the present application provides an identity recognition apparatus, where the apparatus includes a first obtaining module, a generating module, an identifying module, and a determining module, where the first obtaining module includes a sending unit and a receiving unit, where:
the transmitting unit is used for transmitting radio frequency signals to the object to be identified through a radio frequency transmitting and receiving device;
the receiving unit is used for receiving the radio frequency signal flow reflected by the object to be identified;
the generating module is used for inputting the radio frequency signal stream into a trained action frame generating network to obtain a first 3D action frame sequence;
the recognition module is used for recognizing the gesture of the object to be recognized at least according to the first 3D action frame sequence to obtain the gesture of the object to be recognized;
and the determining module is used for determining the identity of the object to be recognized according to the posture of the object to be recognized.
Based on the foregoing embodiments, an identity recognition apparatus is provided in an embodiment of the present application, where the apparatus includes a first obtaining module, a generating module, an identifying module, a second obtaining module, a first analyzing module, and a determining module, where the first obtaining module includes a sending unit and a receiving unit, where:
the transmitting unit is used for transmitting radio frequency signals to the object to be identified through a radio frequency transmitting and receiving device;
and the receiving unit is used for receiving the radio frequency signal flow reflected by the object to be identified.
The generating module is used for inputting the radio frequency signal stream into a trained action frame generating network to obtain a first 3D action frame sequence;
the second acquisition module is used for acquiring the action video stream of the object to be identified, which is time-synchronized with the radio frequency signal stream, through a camera system;
the first analysis module is used for analyzing the motion video stream to obtain a second 3D motion frame sequence; correspondingly, the recognition module is configured to perform gesture recognition on the object to be recognized according to the first 3D action frame sequence and the second 3D action frame sequence to obtain a gesture of the object to be recognized;
and the determining module is used for determining the identity of the object to be recognized according to the posture of the object to be recognized.
Based on the foregoing embodiments, an embodiment of the present application provides an identity recognition apparatus, where the apparatus includes a first obtaining module, a generating module, an identifying module, and a determining module, where the generating module includes a determining unit and a generating unit, where:
the first acquisition module is used for acquiring a radio frequency signal stream reflected by an object to be identified;
a determining unit for determining a signal strength heatmap from the radio frequency signal stream;
the generating unit is used for inputting the signal intensity heat map into the action frame generation network to obtain a first 3D action frame sequence;
the recognition module is used for recognizing the gesture of the object to be recognized at least according to the first 3D action frame sequence to obtain the gesture of the object to be recognized;
and the determining module is used for determining the identity of the object to be recognized according to the posture of the object to be recognized.
Based on the foregoing embodiments, an identity recognition apparatus is provided in an embodiment of the present application, where the apparatus includes a first obtaining module, a generating module, an identifying module, and a determining module, where the generating module includes a first input subunit, a second input subunit, and an extracting subunit, where:
the first acquisition module is used for acquiring a radio frequency signal stream reflected by an object to be identified;
the first input subunit is used for inputting the signal intensity heat map into the convolutional neural network CNN to obtain the space-time characteristics of the object to be identified;
the second input subunit is used for inputting the space-time characteristics of the object to be identified into the region candidate network RPN to obtain a candidate frame envelope region;
the extraction subunit is used for inputting the candidate frame envelope area into the frame extraction network to obtain a first 3D action frame sequence;
the recognition module is used for recognizing the gesture of the object to be recognized at least according to the first 3D action frame sequence to obtain the gesture of the object to be recognized;
and the determining module is used for determining the identity of the object to be recognized according to the posture of the object to be recognized.
Based on the foregoing embodiments, an embodiment of the present application provides an identity recognition apparatus, where the apparatus includes a first obtaining module, a generating module, an identifying module, and a determining module, where the identifying module includes an input unit and a determining unit, where:
the first acquisition module is used for acquiring a radio frequency signal stream reflected by an object to be identified;
the generating module is used for inputting the radio frequency signal stream into a trained action frame generating network to obtain a first 3D action frame sequence;
the input unit is used for inputting the first 3D action frame sequence to action characteristic detection equipment to obtain the posture characteristic of the object to be recognized, wherein the posture characteristic refers to the action characteristic changing along with time;
the determining unit is used for determining the posture of the object to be recognized according to the posture characteristic of the object to be recognized;
and the determining module is used for determining the identity of the object to be recognized according to the posture of the object to be recognized.
Based on the foregoing embodiments, an identity recognition apparatus is provided in an embodiment of the present application, where the apparatus includes a first obtaining module, a generating module, a recognizing module, a determining module, a third obtaining module, a fourth obtaining module, a second analyzing module, and a training module, where:
the first acquisition module is used for acquiring a radio frequency signal stream reflected by an object to be identified;
the generating module is used for inputting the radio frequency signal stream into a trained action frame generating network to obtain a first 3D action frame sequence;
the recognition module is used for recognizing the gesture of the object to be recognized at least according to the first 3D action frame sequence to obtain the gesture of the object to be recognized;
the determining module is used for determining the identity of the object to be recognized according to the posture of the object to be recognized;
the third acquisition module is used for acquiring the radio frequency signal flow reflected by the sample object through the radio frequency transceiver;
a fourth acquiring module, configured to acquire, by a camera system, a motion video stream of the sample object that is time-synchronized with the radio frequency signal stream;
the second analysis module is used for analyzing the action frame in the action video stream of the sample object to obtain a 3D action frame sequence of the sample object;
and the training module is used for taking the radio frequency signal flow of the sample object and the 3D action frame sequence of the sample object as a training sample set and training an action frame generation network.
The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the above-mentioned identification method is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially or partially embodied in the form of a software product stored in a storage medium, and including instructions for enabling an identification system (which may be a mobile phone, a tablet computer, a desktop computer, a personal digital assistant, a navigator, a digital phone, a video phone, a television, a sensing device, etc.) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the identity recognition method provided in the above embodiments.
Correspondingly, an embodiment of the present application provides an identity recognition system. Fig. 7 is a schematic diagram of a hardware entity of the identity recognition system according to the embodiment of the present application; as shown in fig. 7, the hardware entity of the identity recognition system 700 comprises a radio frequency transceiver 701 and an electronic device 702, wherein:
the radio frequency transceiving device is used for sending radio frequency signals to an object to be identified and receiving radio frequency signal streams reflected by the object to be identified;
the electronic equipment is used for acquiring a radio frequency signal stream reflected by an object to be identified; inputting the radio frequency signal stream into a trained action frame generation network to obtain a first 3D action frame sequence; performing gesture recognition on the object to be recognized at least according to the first 3D action frame sequence to obtain the gesture of the object to be recognized; and determining the identity of the object to be recognized according to the posture of the object to be recognized.
Based on the foregoing embodiments, an identity identification system is provided in an embodiment of the present application, where the system includes a radio frequency transceiver, a camera system, and an electronic device, where:
the radio frequency transceiving device is used for sending radio frequency signals to an object to be identified and receiving radio frequency signal streams reflected by the object to be identified;
the camera system is used for acquiring a motion video stream of the object to be identified, wherein the motion video stream is time-synchronized with the radio frequency signal stream;
the electronic equipment is also used for acquiring a motion video stream of the object to be identified, which is time-synchronized with the radio frequency signal stream, through a camera system;
analyzing the motion video stream to obtain a second 3D motion frame sequence;
and performing gesture recognition on the object to be recognized according to the first 3D action frame sequence and the second 3D action frame sequence to obtain the gesture of the object to be recognized.
In some embodiments, the system comprises a radio frequency transceiver, and an electronic device, wherein:
the radio frequency transceiving device is used for sending radio frequency signals to an object to be identified and receiving radio frequency signal streams reflected by the object to be identified;
the electronic device is further used for determining a signal strength heat map according to the radio frequency signal flow; and inputting the signal intensity heat map into the action frame generation network to obtain a first 3D action frame sequence.
In some embodiments, the system comprises a radio frequency transceiver, and an electronic device, wherein:
the radio frequency transceiving device is used for sending radio frequency signals to an object to be identified and receiving radio frequency signal streams reflected by the object to be identified;
the electronic equipment is also used for inputting the signal intensity heat map into the convolutional neural network CNN to obtain the space-time characteristics of the object to be identified; inputting the space-time characteristics of the object to be identified into the region candidate network RPN to obtain a candidate frame envelope region; and inputting the candidate frame envelope region into the frame extraction network to obtain a first 3D action frame sequence.
In some embodiments, the system comprises a radio frequency transceiver, and an electronic device, wherein:
the radio frequency transceiving device is used for sending radio frequency signals to an object to be identified and receiving radio frequency signal streams reflected by the object to be identified;
the electronic equipment is further used for inputting the first 3D action frame sequence into action feature detection equipment to obtain a posture feature of the object to be recognized, wherein the posture feature refers to an action feature changing along with time; and determining the posture of the object to be recognized according to the posture characteristic of the object to be recognized.
In some embodiments, the system comprises a radio frequency transceiver, a camera system, and an electronic device, wherein:
the radio frequency transceiver is also used for collecting the radio frequency signal flow reflected by the sample object;
a camera system further for acquiring a motion video stream of the sample object time-synchronized with the radio frequency signal stream;
the electronic equipment is also used for acquiring a radio frequency signal flow reflected by the sample object through the radio frequency transceiving device; acquiring, by a camera system, a motion video stream of the sample object time-synchronized with the radio frequency signal stream; analyzing the motion frame in the motion video stream of the sample object to obtain a 3D motion frame sequence of the sample object; and taking the radio frequency signal flow of the sample object and the 3D motion frame sequence of the sample object as a training sample set, and training a motion frame generation network.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of identity recognition, the method comprising:
acquiring a radio frequency signal stream reflected by an object to be recognized;
inputting the radio frequency signal stream into a trained action frame generation network to obtain a first 3D action frame sequence;
performing posture recognition on the object to be recognized at least according to the first 3D action frame sequence to obtain the posture of the object to be recognized;
and determining the identity of the object to be recognized according to the posture of the object to be recognized.
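The four steps of claim 1 can be sketched end to end. Every component below (the placeholder generation network, the time-averaged posture feature, the nearest-neighbour matcher, the 14-joint skeleton) is a hypothetical stand-in for illustration, not the patented implementation:

```python
import numpy as np

def generate_action_frames(rf_stream: np.ndarray) -> np.ndarray:
    """Stand-in for the trained action frame generation network:
    maps T reflected RF samples to a T x J x 3 sequence of 3D joints."""
    T = rf_stream.shape[0]
    J = 14                                   # assumed number of skeleton joints
    return np.zeros((T, J, 3))               # placeholder output

def recognize_posture(frames: np.ndarray) -> np.ndarray:
    """Reduce the 3D action frame sequence to a fixed-length posture feature."""
    return frames.mean(axis=0).ravel()       # e.g. time-averaged joint positions

def identify(feature: np.ndarray, gallery: dict) -> str:
    """Match the posture feature against enrolled identities (nearest neighbour)."""
    return min(gallery, key=lambda k: np.linalg.norm(gallery[k] - feature))

rf_stream = np.random.randn(30, 256)         # 30 reflected RF samples (step 1)
frames = generate_action_frames(rf_stream)   # step 2: first 3D action frame sequence
feature = recognize_posture(frames)          # step 3: posture recognition
who = identify(feature, {"alice": np.zeros(42), "bob": np.ones(42)})  # step 4
```

In a real system the placeholder network would be the trained network detailed in claims 4 and 5, and the matcher would work on features learned from enrolled users rather than raw joint averages.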
2. The method of claim 1, wherein acquiring the radio frequency signal stream reflected by the object to be recognized comprises:
sending a radio frequency signal to the object to be recognized through a radio frequency transceiver;
and receiving the radio frequency signal stream reflected by the object to be recognized.
3. The method of claim 2, wherein the method further comprises:
acquiring, through a camera system, a motion video stream of the object to be recognized that is time-synchronized with the radio frequency signal stream;
analyzing the motion video stream to obtain a second 3D action frame sequence;
and correspondingly, performing posture recognition on the object to be recognized according to the first 3D action frame sequence and the second 3D action frame sequence to obtain the posture of the object to be recognized.
4. The method of claim 1, wherein inputting the radio frequency signal stream into the trained action frame generation network to obtain the first 3D action frame sequence comprises:
determining a signal strength heatmap from the radio frequency signal stream;
and inputting the signal strength heatmap into the action frame generation network to obtain the first 3D action frame sequence.
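The claim does not fix how the signal strength heatmap is computed from the radio frequency signal stream. One common construction (an assumption here, not taken from the patent) applies an FFT over each frame's fast-time samples, giving signal strength per range bin, and stacks the frames into a time-by-range heatmap:

```python
import numpy as np

def signal_strength_heatmap(rf_stream: np.ndarray) -> np.ndarray:
    """Hypothetical heatmap: FFT each frame's samples into a range profile,
    then express the per-bin magnitude in dB. Rows are frames (time),
    columns are range bins."""
    spectrum = np.fft.rfft(rf_stream, axis=1)          # range profile per frame
    return 20 * np.log10(np.abs(spectrum) + 1e-12)     # signal strength in dB

rf_stream = np.random.randn(30, 256)                   # 30 frames, 256 samples each
h = signal_strength_heatmap(rf_stream)                 # 30 x 129 time-range heatmap
```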
5. The method of claim 4, wherein the action frame generation network comprises a trained convolutional neural network (CNN), a region proposal network (RPN), and a frame extraction network, and correspondingly, inputting the signal strength heatmap into the action frame generation network to obtain the first 3D action frame sequence comprises:
inputting the signal strength heatmap into the convolutional neural network CNN to obtain space-time features of the object to be recognized;
inputting the space-time features of the object to be recognized into the region proposal network RPN to obtain a candidate frame envelope region;
and inputting the candidate frame envelope region into the frame extraction network to obtain the first 3D action frame sequence.
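The CNN → RPN → frame-extraction chain of claim 5 can be illustrated with toy NumPy stand-ins. The weights are random, and the layer shapes, 14-joint output, and score-weighted pooling are all invented for illustration; a real implementation would use trained convolutional networks:

```python
import numpy as np

rng = np.random.default_rng(0)

W_cnn = rng.standard_normal((8, 1))            # toy 1x1 "convolution" to 8 channels

def spatio_temporal_cnn(heatmaps):             # (T, H, W) -> (T, H, W, 8) features
    feats = heatmaps[..., None] @ W_cnn.T      # per-location channel mixing
    return np.maximum(feats, 0)                # ReLU

W_rpn = rng.standard_normal((8,))

def region_proposal(feats):                    # objectness score per location
    return 1 / (1 + np.exp(-(feats @ W_rpn)))  # (T, H, W) scores in (0, 1)

W_head = rng.standard_normal((8, 14 * 3))

def frame_extraction(feats, scores):           # -> (T, 14, 3) joint sequence
    weighted = feats * scores[..., None]       # keep high-score (envelope) regions
    pooled = weighted.mean(axis=(1, 2))        # (T, 8) pooled features
    return (pooled @ W_head).reshape(-1, 14, 3)

heatmaps = rng.standard_normal((30, 32, 32))   # 30 signal strength heatmaps
feats = spatio_temporal_cnn(heatmaps)          # space-time features
scores = region_proposal(feats)                # candidate envelope scores
frames = frame_extraction(feats, scores)       # first 3D action frame sequence
```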
6. The method according to claim 1, wherein performing posture recognition on the object to be recognized at least according to the first 3D action frame sequence to obtain the posture of the object to be recognized comprises:
inputting the first 3D action frame sequence into an action feature detection device to obtain a posture feature of the object to be recognized, wherein the posture feature refers to an action feature that changes over time;
and determining the posture of the object to be recognized according to the posture feature of the object to be recognized.
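As one hypothetical example of an action feature that changes over time (the patent does not specify the feature), the mean joint positions can be concatenated with the mean inter-frame joint velocities of the 3D action frame sequence:

```python
import numpy as np

def posture_feature(frames: np.ndarray) -> np.ndarray:
    """Hypothetical time-varying action feature: static component (mean joint
    positions) concatenated with dynamic component (mean joint velocities)."""
    velocities = np.diff(frames, axis=0)            # joint motion between frames
    return np.concatenate([frames.mean(axis=0).ravel(),
                           velocities.mean(axis=0).ravel()])

frames = np.random.randn(30, 14, 3)                 # a 3D action frame sequence
f = posture_feature(frames)                         # 42 static + 42 dynamic values
```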
7. The method of any of claims 1 to 6, wherein the method further comprises:
acquiring, through a radio frequency transceiver, a radio frequency signal stream reflected by a sample object;
acquiring, through a camera system, a motion video stream of the sample object that is time-synchronized with the radio frequency signal stream;
analyzing the action frames in the motion video stream of the sample object to obtain a 3D action frame sequence of the sample object;
and training the action frame generation network with the radio frequency signal stream of the sample object and the 3D action frame sequence of the sample object as a training sample set.
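Claim 7 describes cross-modal supervision: the camera-derived 3D action frame sequence serves as the label for the time-synchronized RF stream. A toy gradient-descent sketch, with invented shapes and a linear stand-in for the generation network:

```python
import numpy as np

rng = np.random.default_rng(0)

T, D, J = 30, 256, 14                           # assumed frames, samples, joints
rf_stream = rng.standard_normal((T, D))         # RF samples of the sample object
video_frames = rng.standard_normal((T, J * 3))  # 3D frames parsed from the video

W = np.zeros((D, J * 3))                        # toy linear "generation network"
initial_err = np.mean(video_frames ** 2)        # loss before training (W = 0)

lr = 0.05
for _ in range(200):                            # minimise ||rf @ W - video||^2
    pred = rf_stream @ W                        # network's 3D frame prediction
    grad = rf_stream.T @ (pred - video_frames) / T
    W -= lr * grad

final_err = np.mean((rf_stream @ W - video_frames) ** 2)
```

Once trained this way, the RF branch can produce 3D action frames without the camera, which is the point of the RF-only recognition in claim 1.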
8. An identity recognition device, comprising:
a first acquisition module, configured to acquire a radio frequency signal stream reflected by an object to be recognized;
a generation module, configured to input the radio frequency signal stream into a trained action frame generation network to obtain a first 3D action frame sequence;
a recognition module, configured to perform posture recognition on the object to be recognized at least according to the first 3D action frame sequence to obtain the posture of the object to be recognized;
and a determination module, configured to determine the identity of the object to be recognized according to the posture of the object to be recognized.
9. An identity recognition system, the system comprising:
a radio frequency transceiving device, configured to send a radio frequency signal to an object to be recognized and receive a radio frequency signal stream reflected by the object to be recognized;
and an electronic device, configured to: acquire the radio frequency signal stream reflected by the object to be recognized; input the radio frequency signal stream into a trained action frame generation network to obtain a first 3D action frame sequence; perform posture recognition on the object to be recognized at least according to the first 3D action frame sequence to obtain the posture of the object to be recognized; and determine the identity of the object to be recognized according to the posture of the object to be recognized.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN201911398786.6A 2019-12-30 2019-12-30 Identity recognition method, device, system and storage medium Pending CN111160258A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911398786.6A CN111160258A (en) 2019-12-30 2019-12-30 Identity recognition method, device, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911398786.6A CN111160258A (en) 2019-12-30 2019-12-30 Identity recognition method, device, system and storage medium

Publications (1)

Publication Number Publication Date
CN111160258A true CN111160258A (en) 2020-05-15

Family

ID=70559278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911398786.6A Pending CN111160258A (en) 2019-12-30 2019-12-30 Identity recognition method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN111160258A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114246582A (en) * 2021-12-20 2022-03-29 Hangzhou Huiguang Health Technology Co., Ltd. System and method for detecting bedridden people based on a long short-term memory (LSTM) neural network
WO2023138154A1 (en) * 2022-01-24 2023-07-27 Shanghai SenseTime Intelligent Technology Co., Ltd. Object recognition method, network training method and apparatus, device, medium, and program
EP4325251A1 (en) * 2022-08-19 2024-02-21 Waymo LLC High throughput point cloud processing

Similar Documents

Publication Publication Date Title
Zheng et al. Zero-effort cross-domain gesture recognition with Wi-Fi
He et al. WiFi vision: Sensing, recognition, and detection with commodity MIMO-OFDM WiFi
Zhao et al. mid: Tracking and identifying people with millimeter wave radar
Zhang et al. Latern: Dynamic continuous hand gesture recognition using FMCW radar sensor
Gu et al. WiGRUNT: WiFi-enabled gesture recognition using dual-attention network
Li et al. Practical human sensing in the light
CN110741385B (en) Gesture recognition method and device, and positioning tracking method and device
CN111160258A (en) Identity recognition method, device, system and storage medium
Li et al. Towards domain-independent and real-time gesture recognition using mmwave signal
Sakamoto et al. Hand gesture recognition using a radar echo I–Q plot and a convolutional neural network
CN103038725B (en) Use no touch sensing and the gesture identification of continuous wave ultrasound signal
KR20230169969A (en) Manual positioning by radio frequency sensitive labels
EP3791315A1 (en) Radio frequency (rf) object detection using radar and machine learning
Li et al. A taxonomy of WiFi sensing: CSI vs passive WiFi radar
Deng et al. Gaitfi: Robust device-free human identification via wifi and vision multimodal learning
Yang et al. Environment adaptive RFID-based 3D human pose tracking with a meta-learning approach
Kabir et al. CSI-IANet: An inception attention network for human-human interaction recognition based on CSI signal
Ding et al. Multiview features fusion and Adaboost based indoor localization on Wifi platform
Showmik et al. Human activity recognition from wi-fi csi data using principal component-based wavelet cnn
Gu et al. Attention-based gesture recognition using commodity wifi devices
Li et al. Digital gesture recognition based on millimeter wave radar
Bulugu Gesture recognition system based on cross-domain CSI extracted from Wi-Fi devices combined with the 3D CNN
Sun et al. Moving target localization and activity/gesture recognition for indoor radio frequency sensing applications
Zhou et al. Efficiently user-independent ultrasonic-based gesture recognition algorithm
Waqar et al. Direction-Independent Human Activity Recognition Using a Distributed MIMO Radar System and Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination