CN114004255A - Gesture detection method, gesture detection device, electronic device, and readable storage medium - Google Patents

Gesture detection method, gesture detection device, electronic device, and readable storage medium

Info

Publication number
CN114004255A
Authority
CN
China
Prior art keywords
signals
target
neural network
information
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111259043.8A
Other languages
Chinese (zh)
Inventor
陈彦
李文轩
张东恒
张冬
孙启彬
吴曼青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202111259043.8A priority Critical patent/CN114004255A/en
Publication of CN114004255A publication Critical patent/CN114004255A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The embodiment of the disclosure provides a gesture detection method, a gesture detection device, an electronic device and a readable storage medium, wherein the method comprises the following steps: acquiring millimeter wave radar signals for detecting actions of people to be detected, wherein the millimeter wave radar signals comprise a plurality of sending signals arranged according to a time sequence and a plurality of receiving signals corresponding to each sending signal; generating a plurality of mixing signals from each transmission signal and each reception signal corresponding to the transmission signal; processing each mixing signal to obtain a plurality of target distance angle graphs; and inputting the target distance angle graph into a target neural network, and outputting a recognition result, wherein the target neural network is obtained by training the initial neural network by using a training sample data set, and the recognition result comprises posture classification information of the crowd to be detected.

Description

Gesture detection method, gesture detection device, electronic device, and readable storage medium
Technical Field
The present disclosure relates to the field of human perception technology, and more particularly, to a gesture detection method, a gesture detection apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the development of society, increasing attention is being paid to the medical care of the elderly and other vulnerable groups. Among them, accidental falls have become a major threat to the safety of the elderly. In the related art, wearable devices based on speed recognition are generally adopted to detect the postures of people such as the elderly, so that help can be provided in time when a fall is detected.
In implementing the disclosed concept, the inventors found at least the following problem in the related art: wearable devices based on speed recognition are prone to false alarms when recognizing the postures of the crowd to be detected.
Disclosure of Invention
In view of the above, the disclosed embodiments provide a gesture detection method, a gesture detection apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
One aspect of the disclosed embodiments provides a gesture detection method, including:
acquiring millimeter wave radar signals for detecting actions of people to be detected, wherein the millimeter wave radar signals comprise a plurality of sending signals arranged according to a time sequence and a plurality of receiving signals corresponding to each sending signal;
generating a plurality of mixed signals from each of the transmission signals and each of the reception signals corresponding to the transmission signals;
processing a plurality of the mixing signals to obtain a target distance angle diagram; and
inputting the target distance angle graph into a target neural network, and outputting a recognition result, wherein the target neural network is obtained by training an initial neural network by using a training sample data set, and the recognition result includes the posture classification information of the crowd to be detected.
According to an embodiment of the present disclosure, processing the plurality of mixing signals to obtain the target distance angle diagram includes:
processing a plurality of the mixing signals to obtain an initial distance angle diagram; and
performing data enhancement processing on the initial distance angle map to obtain the target distance angle map, wherein the data enhancement processing includes inversion processing, translation processing, and/or frame extraction processing of the initial distance angle map.
According to an embodiment of the present disclosure, the plurality of receiving signals are respectively obtained by a plurality of receiving antennas of a millimeter wave radar, and the initial distance angle map includes an initial two-dimensional matrix map;
wherein processing the plurality of mixing signals to obtain an initial distance angle map includes:
performing fast fourier transform on each of the mixed signals and mixed signals associated with the mixed signals to obtain distance information and velocity information corresponding to the mixed signals;
for each mixing signal, performing fast fourier transform on angles of the receiving signals acquired by different receiving antennas to obtain angle information of the crowd to be detected;
generating a two-dimensional matrix map based on the distance information, the velocity information, and the angle information corresponding to the mixed signal; and
generating the initial two-dimensional matrix map from the plurality of two-dimensional matrix maps.
According to an embodiment of the present disclosure, the generating a two-dimensional matrix map based on the distance information, the velocity information, and the angle information corresponding to the mixed signal includes:
constructing a target three-dimensional matrix according to the distance information, the speed information and the angle information;
summing the target three-dimensional matrix to obtain a target value and a two-dimensional matrix map;
under the condition that the target value meets a preset threshold value, filtering the mixing signal corresponding to the target value to obtain a filtered mixing signal; and
adjusting the two-dimensional matrix map according to the filtered mixing signal to obtain the final two-dimensional matrix map.
According to an embodiment of the present disclosure, the distance information d is calculated as shown in equation (1), the velocity information v is calculated as shown in equation (2), and the angle information θ is calculated as shown in equation (3):

d = c·f_IF / (2S)    (1)

v = λ·ω / (4π·T_c)    (2)

θ = arcsin(λ·ω / (2π·l))    (3)

wherein f_IF characterizes the frequency of the mixing signal, S characterizes the slope of the mixing signal, c represents the speed of light, λ represents the wavelength of the mixing signal, ω represents the phase difference between two adjacent mixing signals, T_c characterizes the time difference between two adjacent mixing signals, and l represents the distance between two adjacent receiving antennas.
According to an embodiment of the present disclosure, the training of the initial neural network by using the training sample data set to obtain the target neural network includes:
acquiring the training sample data set, wherein the training samples in the training sample data set comprise training images containing distance and angle information and label data of the training images;
inputting the training image into the initial neural network, and outputting predicted posture classification information;
calculating a loss function according to the predicted posture classification information and the label data to obtain a loss result; and
iteratively adjusting the network parameters of the initial neural network according to the loss result to generate the trained target neural network.
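The four training steps above follow a standard supervised loop. A minimal sketch assuming PyTorch, with a stand-in linear model (the patent's own 3D-convolutional topology is described separately); batch size, learning rate, and input shape are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Stand-in network; NOT the patent's architecture, just a placeholder classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 64, 64)   # training images containing distance/angle info
labels = torch.randint(0, 2, (8,))   # label data: e.g. 0 = non-fallen, 1 = fallen

for _ in range(5):                   # iteratively adjust network parameters
    logits = model(images)           # predicted posture classification information
    loss = loss_fn(logits, labels)   # loss result from predictions vs. labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After enough iterations the loss result stops improving and the adjusted network is taken as the trained target neural network.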
According to an embodiment of the present disclosure, the initial neural network includes at least three 3D convolutional layers and at least two fully connected layers;
wherein the inputting the training image into the initial neural network and outputting the predicted posture classification information includes:
inputting the training image into the at least three 3D convolutional layers and outputting a feature extraction map; and
inputting the feature extraction map into the at least two fully connected layers, and outputting the predicted posture classification information.
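A sketch of the described topology (three 3D convolutional layers followed by two fully connected layers), assuming PyTorch; channel counts, kernel sizes, pooling, and the input size (a stack of 40 distance-angle maps of 64×64) are illustrative assumptions, not values from the patent:

```python
import torch
import torch.nn as nn

class PostureNet(nn.Module):
    """Three 3D conv layers extract features; two FC layers classify posture."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 5 * 8 * 8, 64), nn.ReLU(),  # 40,64,64 pooled 3x -> 5,8,8
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

x = torch.randn(1, 1, 40, 64, 64)  # batch of one 40-frame distance-angle stack
logits = PostureNet()(x)
```

The 3D (rather than 2D) convolutions let the network aggregate motion across the time dimension of the stacked distance-angle maps.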
Another aspect of the disclosed embodiments provides a gesture detection apparatus including:
the acquisition module is used for acquiring millimeter wave radar signals for detecting actions of people to be detected, and the millimeter wave radar signals comprise a plurality of sending signals arranged according to a time sequence and a plurality of receiving signals corresponding to each sending signal;
a generating module configured to generate a plurality of mixing signals based on each of the transmission signals and each of the reception signals corresponding to the transmission signals;
an obtaining module, configured to process each of the mixing signals to obtain a plurality of target distance angle maps; and
the output module is used for inputting the target distance angle graph into a target neural network and outputting a recognition result, wherein the target neural network is obtained by training an initial neural network by using a training sample data set, and the recognition result includes the posture classification information of the crowd to be detected.
Another aspect of an embodiment of the present disclosure provides an electronic device including: one or more processors; memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
Another aspect of embodiments of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of an embodiment of the present disclosure provides a computer program product comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiment of the disclosure, a mixing signal is obtained by processing a millimeter wave radar signal, the mixing signal is converted into a target distance angle map, and the trained target neural network processes the target distance angle map to obtain the recognition result containing the posture classification information of the crowd to be detected. Since the target neural network recognizes the target distance angle map based on distance and angle, the technical problem that wearable devices based on speed recognition are prone to false alarms when recognizing the postures of the crowd to be detected is at least partially solved, achieving the technical effect of reducing the probability of false alarms in recognizing the posture classification information of the crowd to be detected.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture to which a gesture detection method is applied, according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow diagram of a gesture detection method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a schematic diagram of the use of millimeter wave radar in accordance with an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart for obtaining an initial distance angle map according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a schematic diagram of a target distance angle map, according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a training flow diagram for a target neural network according to an embodiment of the present disclosure;
FIG. 7 schematically shows a block diagram of a gesture detection apparatus according to an embodiment of the present disclosure; and
fig. 8 schematically shows a block diagram of an electronic device implementing a gesture detection method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
In the related art, recognition of posture classification is generally performed in one of two ways. One is wearable equipment based on WiFi and/or Doppler radar technology; the principle is that a fall involves a very high speed, which produces a large Channel State Information (CSI) phase shift or Doppler shift. As a result, this recognition method has limited generalization capability across different environments and people, cannot handle multiple complex postures, and is prone to false alarms. The other is to use a neural network to recognize captured images, which achieves a higher accuracy rate in specific environments, but the deployment of cameras raises a more serious privacy-invasion problem.
In view of the above, the inventors found that millimeter wave radar offers higher resolution, a wider operating frequency band, a larger Doppler frequency response, and a shorter wavelength, making it easier to acquire detailed features and clear contour imaging of the crowd to be detected; it is therefore suitable for classifying and recognizing their postures. Meanwhile, the millimeter wave radar is small, lightweight, and easy to carry and deploy. Therefore, distance angle maps can be generated from millimeter wave radar signals, and a target neural network can be used to recognize the posture classification of the crowd to be detected.
Embodiments of the present disclosure provide a gesture detection method, a gesture detection apparatus, an electronic device, a computer-readable storage medium, and a computer program product. The method comprises the steps of obtaining millimeter wave radar signals for detecting actions of a crowd to be detected, wherein the millimeter wave radar signals comprise a plurality of sending signals arranged according to a time sequence and a plurality of receiving signals corresponding to each sending signal; generating a plurality of mixing signals from each transmission signal and each reception signal corresponding to the transmission signal; processing each mixing signal to obtain a plurality of target distance angle graphs; and inputting the target distance angle graph into a target neural network, and outputting a recognition result, wherein the target neural network is obtained by training the initial neural network by using a training sample data set, and the recognition result comprises posture classification information of the crowd to be detected.
FIG. 1 schematically shows an exemplary system architecture 100 to which a gesture detection method may be applied, according to an embodiment of the disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, network 104, server 105, and millimeter-wave radar 106. Network 104 is the medium used to provide communication links between terminal devices 101, 102, 103, server 105, and millimeter-wave radar 106. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
A user may use terminal devices 101, 102, 103 to interact with server 105 over network 104 to receive or send messages, or to obtain millimeter-wave radar signals of millimeter-wave radar 106 over network 104, and so on. The terminal devices 101, 102, 103 may have installed thereon various client applications, such as a data transfer application, a web browser application, a search-type application, an instant messaging tool, a mailbox client, and/or social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services, such as a background management server (for example only) that provides support for users to utilize millimeter-wave radar signals transmitted by the terminal devices 101, 102, 103. The background management server may analyze and otherwise process the received data such as the millimeter wave radar signal, and feed back a processing result (for example, a webpage, information, or data obtained or generated according to a user request) to the terminal device.
It should be noted that the gesture detection method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the gesture detection apparatus provided by the embodiments of the present disclosure may be generally disposed in the server 105. The gesture detection method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the gesture detection apparatus provided in the embodiments of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Alternatively, the gesture detection method provided by the embodiment of the present disclosure may also be executed by the terminal device 101, 102, or 103, or may also be executed by another terminal device different from the terminal device 101, 102, or 103. Accordingly, the gesture detection apparatus provided in the embodiments of the present disclosure may also be disposed in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
FIG. 2 schematically shows a flow diagram of a gesture detection method according to an embodiment of the present disclosure.
As shown in fig. 2, the gesture detection method may include operations S210 to S240.
In operation S210, a millimeter wave radar signal for detecting a motion of a crowd to be detected is acquired, where the millimeter wave radar signal includes a plurality of transmission signals arranged in a time sequence and a plurality of reception signals corresponding to each of the transmission signals.
In operation S220, a plurality of mixing signals are generated according to each transmission signal and each reception signal corresponding to the transmission signal.
In operation S230, the plurality of mixed signals are processed to obtain a target range-angle map.
In operation S240, the target distance angle map is input into a target neural network, and a recognition result is output, where the target neural network is obtained by training an initial neural network using a training sample data set, and the recognition result includes posture classification information of a group to be detected.
According to an embodiment of the present disclosure, the population to be detected may include elderly people, physically handicapped people, or other special populations that need to be monitored. The posture classification information may include, but is not limited to, a fallen state and a non-fallen state, or an upright state and a recumbent state, and the like.
According to the embodiment of the present disclosure, when a millimeter wave radar is used to collect millimeter wave radar signals, 3 transmission signals and, for each transmission signal, 4 corresponding reception signals can be obtained at one time, giving 12 reception signals in total. Mixing each transmission signal with each of its corresponding reception signals yields 12 mixing signals in total. The plurality of mixing signals are processed to obtain a target distance angle map, and the target distance angle map is input into the trained target neural network to obtain the posture classification information of the crowd to be detected. In this way, a mixing signal is obtained by processing the millimeter wave radar signal, the mixing signal is converted into a target distance angle map, and the trained target neural network processes the target distance angle map to obtain the recognition result containing the posture classification information of the crowd to be detected. Since the target neural network recognizes the target distance angle map based on distance and angle, the technical problem that wearable devices based on speed recognition are prone to false alarms when recognizing the postures of the crowd to be detected is at least partially solved, achieving the technical effect of reducing the probability of false alarms in recognizing the posture classification information of the crowd to be detected.
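As a minimal illustrative sketch (not the patent's implementation), the dechirp mixing of a transmitted FMCW chirp with its delayed echo can be simulated in Python; all radar parameters below are assumptions. The beat frequency of the resulting mixing signal recovers the target distance via d = c·f_IF/(2S):

```python
import numpy as np

# Assumed (illustrative) FMCW parameters -- not taken from the patent.
c = 3e8        # speed of light (m/s)
S = 30e12      # chirp slope (Hz/s)
fs = 10e6      # ADC sampling rate (Hz)
N = 1024       # samples per chirp
t = np.arange(N) / fs

d_true = 3.0               # simulated target distance (m)
tau = 2 * d_true / c       # round-trip delay (s)

tx = np.exp(1j * np.pi * S * t ** 2)          # transmitted chirp (baseband)
rx = np.exp(1j * np.pi * S * (t - tau) ** 2)  # delayed echo at the receiver
mix = tx * np.conj(rx)                        # mixing: beat tone at f_IF = S * tau

spectrum = np.abs(np.fft.fft(mix))
f_if = np.argmax(spectrum[: N // 2]) * fs / N  # estimated beat frequency
d_est = c * f_if / (2 * S)                     # distance recovered from f_IF
```

With these values the beat tone sits near 600 kHz and the recovered distance is within one range bin of the true 3 m.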
Fig. 3 schematically illustrates a schematic diagram of the use of a millimeter wave radar in accordance with an embodiment of the present disclosure.
As shown in fig. 3, the millimeter wave radar signal may be collected using an FMCW radar having three transmitting ends and four receiving ends. Before use, the radar may be mounted at a predetermined height, for example 3 meters, directly above the environment to be detected, such as on the ceiling shown in fig. 3.
According to an embodiment of the present disclosure, processing the plurality of mixed signals to obtain the target range-angle map may include the following operations.
processing the plurality of mixing signals to obtain an initial distance angle map; and
performing data enhancement processing on the initial distance angle map to obtain a plurality of target distance angle maps, wherein the data enhancement processing includes inversion processing, translation processing, and/or frame extraction processing of the initial distance angle map.
According to the embodiment of the disclosure, in order to make the recognition result of the target distance angle map by the target neural network more accurate, the target distance angle map subjected to data enhancement processing may be used.
According to the embodiment of the disclosure, for example, the initial distance-angle map obtained by processing the mixing signal is subjected to inversion processing, translation processing and/or frame extraction processing to obtain the target distance-angle map after data enhancement, so that when the target neural network identifies the target distance-angle map, a more accurate identification result can be obtained.
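The three augmentation operations can be sketched on a stack of distance-angle frames with NumPy; the array shapes (40 frames of 64 range bins × 64 angle bins) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# 40 distance-angle maps of size 64 (range bins) x 64 (angle bins) -- illustrative
ra_maps = rng.random((40, 64, 64))

flipped = ra_maps[:, :, ::-1]          # inversion along the angle axis
shifted = np.roll(ra_maps, 3, axis=1)  # translation along the range axis (wraps around)
extracted = ra_maps[::2]               # frame extraction: keep every second frame
```

Note that `np.roll` wraps values around the array edge; a real translation augmentation might instead pad with zeros, which is a design choice left open here.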
According to an embodiment of the present disclosure, processing the plurality of mixed signals to obtain the initial distance-angle map may include the following operations.
Each mixing signal is processed to obtain a plurality of distance angle maps, and an initial distance angle map is generated from the plurality of distance angle maps.
According to the embodiment of the disclosure, a plurality of distance-angle maps within a preset time period may be superimposed to obtain an initial distance-angle map according to the acquisition frequency of the FMCW radar. For example, 40 distance angle maps within 2 seconds may be superimposed, and it should be noted that the preset time period may be specifically determined according to actual requirements.
According to the embodiment of the disclosure, the plurality of receiving signals are respectively acquired by a plurality of receiving antennas of the millimeter wave radar, and the initial distance angle map includes an initial two-dimensional matrix map.
Fig. 4 schematically illustrates a flow chart for obtaining an initial distance angle map according to an embodiment of the present disclosure.
As shown in fig. 4, processing the plurality of mixed signals to obtain an initial distance-angle map may include operations S410 to S440.
In operation S410, a fast fourier transform is performed on each mixed signal and the mixed signal associated with the mixed signal, resulting in distance information and velocity information corresponding to the mixed signal.
In operation S420, for each mixing signal, fast fourier transform is performed on angles of the receiving signals acquired by different receiving antennas to obtain angle information of the crowd to be detected.
In operation S430, a two-dimensional matrix map is generated according to the distance information, the velocity information, and the angle information corresponding to the mixed signal.
In operation S440, an initial two-dimensional matrix map is generated from the plurality of two-dimensional matrix maps.
According to the embodiment of the present disclosure, the distance information d is calculated as shown in formula (1), the velocity information v is calculated as shown in formula (2), and the angle information θ is calculated as shown in formula (3):

d = c·f_IF / (2S)    (1)

v = λ·ω / (4π·T_c)    (2)

θ = arcsin(λ·ω / (2π·l))    (3)

wherein f_IF characterizes the frequency of the mixing signal, S characterizes the slope of the mixing signal, c represents the speed of light, λ represents the wavelength of the mixing signal, ω represents the phase difference between two adjacent mixing signals, T_c characterizes the time difference between two adjacent mixing signals, and l represents the distance between two adjacent receiving antennas.
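The three formulas can be checked numerically. All parameter values below are illustrative assumptions (roughly matching a 77 GHz radar with half-wavelength antenna spacing), not values from the patent:

```python
import math

c = 3e8              # speed of light (m/s)
f_if = 6e5           # mixing-signal frequency (Hz), assumed
S = 30e12            # chirp slope (Hz/s), assumed
lam = 3.9e-3         # wavelength (m), ~77 GHz
omega = math.pi / 8  # phase difference between adjacent mixing signals (rad), assumed
T_c = 60e-6          # time difference between adjacent mixing signals (s), assumed
l = lam / 2          # spacing between adjacent receiving antennas (m)

d = c * f_if / (2 * S)                              # formula (1): distance
v = lam * omega / (4 * math.pi * T_c)               # formula (2): velocity
theta = math.asin(lam * omega / (2 * math.pi * l))  # formula (3): angle
```

With half-wavelength spacing the angle formula simplifies to θ = arcsin(ω/π), which makes the full ±90° field of view correspond to the full ±π phase range.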
According to the embodiment of the disclosure, a Fast Fourier Transform (FFT) is performed on the mixing signal along the distance dimension and the velocity dimension, so that the distance information d and the velocity information v of the mixing signal can be obtained. A fast Fourier transform is also performed across the reception signals acquired by different receiving antennas to obtain the angle information θ of the crowd to be detected, and a two-dimensional matrix map serving as the initial distance angle map is generated according to the distance information, the velocity information, and the angle information corresponding to the mixing signal.
According to the embodiment of the disclosure, before the fast Fourier transform is performed on the angles of the received signals, the mixing signals may be high-pass filtered along the velocity dimension, so that environmental factors of the scene in which the crowd to be detected is located do not distort the estimated angle of the crowd to be detected.
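One minimal form of this velocity-dimension high-pass filter is to subtract the slow-time mean of each range bin, which empties the zero-velocity Doppler bin where static environmental reflections concentrate. The simulated data below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_chirps, n_bins = 64, 128

# Static clutter: identical in every chirp (zero velocity), plus a weaker
# time-varying component standing in for moving reflectors.
clutter = rng.normal(size=(1, n_bins))
moving = 0.1 * rng.normal(size=(n_chirps, n_bins))
slow_time = clutter + moving

# High-pass along the velocity (slow-time) dimension: removing the per-bin
# mean cancels the DC term that carries the static clutter.
filtered = slow_time - slow_time.mean(axis=0, keepdims=True)

zero_velocity_energy = np.abs(np.fft.fft(filtered, axis=0))[0]
```

After filtering, the zero-velocity Doppler bin is empty (up to floating-point error), while moving reflectors survive in the nonzero-velocity bins.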
According to an embodiment of the present disclosure, generating a two-dimensional matrix map according to distance information, velocity information, and angle information corresponding to a mixed signal may include the following operations.
And constructing a target three-dimensional matrix according to the distance information, the speed information and the angle information.
And summing the target three-dimensional matrix to obtain a target value and a two-dimensional matrix map.
And under the condition that the target value meets a preset threshold value, filtering the mixing signal corresponding to the target value to obtain a filtered mixing signal.
And adjusting the two-dimensional matrix map according to the filtered mixing signal to obtain the two-dimensional matrix map.
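The four operations above can be sketched as follows. The cube dimensions, the threshold choices, and the rule of dropping only velocity bins that are both slow and weak are assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical target three-dimensional matrix of reflection intensities,
# indexed as (distance bin, angle bin, velocity bin).
cube = rng.random((32, 16, 8))

# Summation: one target value per velocity bin, plus a two-dimensional
# (distance x angle) matrix map summed over all velocity bins.
target_values = cube.sum(axis=(0, 1))
full_map = cube.sum(axis=2)

speed_threshold_bin = 2                      # assumed velocity threshold
intensity_threshold = target_values.mean()   # assumed intensity threshold

# Filter velocity bins that fall below BOTH thresholds, then rebuild the
# adjusted two-dimensional matrix map from the surviving bins.
slow = np.arange(cube.shape[2]) < speed_threshold_bin
weak = target_values < intensity_threshold
adjusted_map = cube[:, :, ~(slow & weak)].sum(axis=2)
```

The adjusted map keeps the (distance × angle) shape of the original but contains only the energy of velocity bins that passed the filtering.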
According to the embodiment of the disclosure, the preset threshold value can be specifically set according to actual requirements. The preset threshold may include a speed threshold and an intensity threshold.
According to the embodiment of the disclosure, based on the relationship between the values of the distance information and the angle information and the signal reflection intensity, the constructed target three-dimensional matrix is collapsed along the velocity dimension into a two-dimensional matrix map containing two elements: velocity information and reflection intensity. Mixing signals below the preset threshold are then filtered out; for example, mixing signals whose reflection intensity is below the intensity threshold and whose velocity is below the velocity threshold may be removed. The purpose of this filtering is to suppress the environmental and noise factors of the scene to be detected, so as to obtain a more accurate initial distance-angle map.
According to the embodiment of the disclosure, the two-dimensional matrix map is adjusted according to the filtered mixing signal, so that the two-dimensional matrix map is obtained.
Fig. 5 schematically shows a schematic diagram of a target distance angle diagram according to an embodiment of the present disclosure.
As shown in fig. 5, when the method is applied to a scenario of detecting falls, and the crowd to be detected is standing normally, the obtained target distance-angle maps are as shown in (1) to (3) of fig. 5. Because the reflection intensity corresponding to the distance, velocity, or angle information is below the intensity threshold, the target distance-angle maps in (1) to (3) of fig. 5 display only part of the crowd to be detected.
According to the embodiment of the disclosure, when the crowd to be detected is walking normally, the obtained target distance-angle maps are as shown in (5) to (9) of fig. 5. When the crowd to be detected falls, the obtained target distance-angle maps are as shown in (10) to (15) of fig. 5. For example, in (11) of fig. 5, during the fall the height of the crowd on the distance (vertical) axis is lower than in (1); at the same time, because the distance, velocity, or angle information of each body part changes markedly, more patterns are displayed in (11). By (15), the crowd to be detected is fully prone and its height on the distance axis is at the lowest point; because the reflection intensity corresponding to the distance, velocity, or angle information is again below the intensity threshold, the target distance-angle maps in (14) to (15) of fig. 5 display only part of the crowd to be detected.
Fig. 6 schematically illustrates a training flow diagram of a target neural network according to an embodiment of the present disclosure.
As shown in fig. 6, the target neural network is obtained by training the initial neural network using the training sample data set, and may include the following operations S610 to S640.
In operation S610, a training sample data set is obtained, where training samples in the training sample data set include training images including distance and angle information and label data of the training images.
In operation S620, the training image is input to the initial neural network, and the predicted pose classification information is output.
In operation S630, a loss function is calculated according to the predicted pose classification information and the tag data, resulting in a loss result.
In operation S640, network parameters of the initial neural network are iteratively adjusted according to the loss result, generating a trained target neural network.
According to embodiments of the present disclosure, the loss function may include a Focal Loss function.
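A minimal binary form of the Focal Loss, FL(p_t) = -α_t · (1 - p_t)^γ · log(p_t), can be sketched as below. The disclosure does not fix α or γ, so the commonly used defaults are assumed:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary Focal Loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p is the predicted probability of the positive (e.g. "fall") class and
    y is the 0/1 label; gamma down-weights well-classified examples so the
    imbalanced non-fall majority does not dominate training.
    """
    p = np.asarray(p, dtype=float)
    y = np.asarray(y)
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With γ = 0 the expression reduces to an α-weighted cross-entropy; with γ = 2 a confidently correct prediction contributes orders of magnitude less loss than a badly wrong one, which suits the 1600-fall versus 12000-non-fall imbalance described below.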
According to the embodiment of the present disclosure, a variety of training samples may be used during training, for example training images generated from 1600 pieces of fall data and training images generated from 12000 pieces of non-fall data. The fall data may cover multiple fall postures, for example at least four: falling forward, falling backward, falling sideways, and slowly sitting down onto the ground. The non-fall data may cover multiple everyday postures, such as standing upright, walking, sitting, and lying in bed. A target neural network trained on such a large number of training samples can reach a recognition accuracy of 98.8% in practice, so that the posture classification of the crowd to be detected can be recognized accurately and in real time.
According to an embodiment of the present disclosure, the initial neural network may include at least three 3d convolutional layers and at least two fully-connected layers.
According to an embodiment of the present disclosure, inputting a training image into an initial neural network, and outputting predicted gesture classification information may include the following operations.
Inputting the training image into the at least three 3d convolutional layers and outputting a feature extraction map.
Inputting the feature extraction map into the at least two fully-connected layers and outputting the predicted posture classification information.
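The shape bookkeeping implied by this structure can be sketched as follows. The input clip size, kernel size, stride, and padding are assumptions, since the disclosure does not give concrete layer dimensions:

```python
def conv3d_out(shape, kernel=3, stride=1, padding=0):
    """Output (depth, height, width) of one 3d convolution layer."""
    return tuple((s + 2 * padding - kernel) // stride + 1 for s in shape)

# Hypothetical input: a stack of 16 distance-angle maps of size 32 x 32
# (channel dimension omitted for clarity).
shape = (16, 32, 32)
for _ in range(3):            # at least three 3d convolutional layers
    shape = conv3d_out(shape)

# The flattened feature extraction map feeds the fully-connected part:
# flat_features -> hidden units -> posture classes (at least two FC layers).
flat_features = shape[0] * shape[1] * shape[2]
```

Three unpadded 3x3x3 convolutions shrink each dimension by 6, so the feature extraction map stays compact and the fully-connected layers remain small, which is consistent with the small parameter count emphasized below.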
According to the embodiment of the disclosure, this network structure gives the trained target neural network a small number of parameters, which makes it convenient to detect the posture classification information of the crowd to be detected in real time.
According to the embodiment of the disclosure, to facilitate real-time detection of the posture classification information of the crowd to be detected, the acquired millimeter wave radar signal may be segmented using a sliding window, and the target distance-angle map may be generated from each segmented millimeter wave radar signal, so that the target distance-angle map can be input into the target neural network to predict the posture classification information of the crowd to be detected. The specific parameters of the sliding window may include the segmentation frequency (one cut per time interval) and the segmentation length (the span of millimeter wave radar signal covered by each cut).
According to the embodiments of the present disclosure, when performing the above dividing operation, two consecutively divided millimeter wave radar signals may contain partially overlapping millimeter wave radar signals. For example, given 10 millimeter wave radar signals D1, D2, D3, D4, D5, D6, D7, D8, D9, and D10, if 5 millimeter wave radar signals are cut each time, the first cut may yield D1, D2, D3, D4, and D5, and the second cut may yield D3, D4, D5, D6, and D7; the two cuts thus share the overlapping signals D3, D4, and D5.
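The sliding-window segmentation above can be sketched as follows. The stride of 2 is an assumption chosen to reproduce the stated overlap of D3 to D5:

```python
def sliding_windows(signals, length, stride):
    """Cut a signal sequence into segments of `length` frames, advancing
    `stride` frames per cut; stride < length yields overlapping segments."""
    return [signals[i:i + length]
            for i in range(0, len(signals) - length + 1, stride)]

# The worked example from the text: 10 signals D1..D10, 5 signals per cut.
signals = [f"D{i}" for i in range(1, 11)]
segments = sliding_windows(signals, length=5, stride=2)
```

The first segment is D1 to D5 and the second is D3 to D7, so consecutive segments share D3, D4, and D5 exactly as described.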
Fig. 7 schematically shows a block diagram of a gesture detection apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, the gesture detection apparatus may include an acquisition module 710, a generation module 720, a derivation module 730, and an output module 740.
The obtaining module 710 is configured to obtain a millimeter wave radar signal for detecting actions of a crowd to be detected, where the millimeter wave radar signal includes a plurality of sending signals arranged according to a time sequence and a plurality of receiving signals corresponding to each sending signal.
The generating module 720 is configured to generate a plurality of mixing signals according to each transmitting signal and each receiving signal corresponding to the transmitting signal.
The derivation module 730 is configured to process the multiple mixing signals to obtain a target distance-angle map.
The output module 740 is configured to input the target distance angle map into a target neural network, and output a recognition result, where the target neural network is obtained by training an initial neural network using a training sample data set, and the recognition result includes posture classification information of a group to be detected.
According to the embodiment of the disclosure, a mixing signal is obtained by processing the millimeter wave radar signal, the mixing signal is converted into a target distance-angle map, and the trained target neural network processes the target distance-angle map to obtain the recognition result of the posture classification information of the crowd to be detected. Because the target neural network recognizes the target distance-angle map based on distance and angle, the technical problem that speed-based wearable devices easily generate false alarms when recognizing the posture of the crowd to be detected is at least partially solved, achieving the technical effect of reducing the probability of false alarms generated in recognizing the posture classification information of the crowd to be detected.
According to an embodiment of the present disclosure, the derivation module 730 may include a first obtaining submodule and a second obtaining submodule.
The first obtaining submodule is used for processing the multiple mixing signals to obtain an initial distance angle diagram.
And the second obtaining submodule is used for performing data enhancement processing on the initial distance angle map to obtain a target distance angle map, wherein the data enhancement processing comprises inversion processing, translation processing and/or frame extraction processing of the initial distance angle map.
According to the embodiment of the disclosure, the plurality of receiving signals are respectively acquired by a plurality of receiving antennas of the millimeter wave radar, and the initial distance angle map includes an initial two-dimensional matrix map.
According to an embodiment of the present disclosure, the first obtaining submodule may include a first obtaining unit, a second obtaining unit, a first generating unit, and a second generating unit.
The first obtaining unit is used for performing fast Fourier transform on each mixing signal and the mixing signals associated with it, to obtain the distance information and velocity information corresponding to that mixing signal.
The second obtaining unit is used for carrying out fast Fourier transform on the angles of the received signals obtained by different receiving antennas aiming at each mixing signal to obtain the angle information of the crowd to be detected.
The first generating unit is used for generating a two-dimensional matrix map according to the distance information, the speed information and the angle information corresponding to the mixing signals.
The second generating unit is used for generating an initial two-dimensional matrix map according to the plurality of two-dimensional matrix maps.
According to an embodiment of the present disclosure, the first generation unit may include a construction subunit, a summation subunit, a filtering subunit, and a derivation subunit.
And the construction subunit is used for constructing a target three-dimensional matrix according to the distance information, the speed information and the angle information.
And the summation subunit is used for carrying out summation processing on the target three-dimensional matrix to obtain a target numerical value and a two-dimensional matrix diagram.
The filtering subunit is configured to, when the target value meets a preset threshold, filter the mixed signal corresponding to the target value to obtain a filtered mixed signal.
The obtaining subunit is used for adjusting the two-dimensional matrix map according to the filtered mixing signal to obtain a two-dimensional matrix map.
According to the embodiment of the present disclosure, the distance information d is calculated as shown in equation (4), the velocity information v is calculated as shown in equation (5), and the angle information θ is calculated as shown in equation (6):
d = c · f_IF / (2S)    (4)

v = λ · ω / (4π · T_c)    (5)

θ = arcsin(λ · ω / (2π · l))    (6)

where f_IF denotes the frequency of the mixing signal, S denotes the slope of the mixing signal, c denotes the speed of light, λ denotes the wavelength of the mixing signal, ω denotes the phase difference between two adjacent mixing signals, T_c denotes the time difference between two adjacent mixing signals, and l denotes the distance between two adjacent receiving antennas.
According to an embodiment of the present disclosure, the target neural network is obtained by training the initial neural network using the training sample data set, which may include the following operations.
The method comprises the steps of obtaining a training sample data set, wherein training samples in the training sample data set comprise training images containing distance and angle information and label data of the training images.
Inputting the training image into an initial neural network, and outputting the predicted posture classification information.
And calculating a loss function according to the predicted posture classification information and the label data to obtain a loss result.
Iteratively adjusting network parameters of the initial neural network according to the loss result to generate a trained target neural network.
According to an embodiment of the present disclosure, the initial neural network includes at least three 3d convolutional layers and at least two fully-connected layers.
Wherein, inputting the training image into the initial neural network, and outputting the predicted posture classification information may include the following operations.
Inputting the training image into the at least three 3d convolutional layers and outputting a feature extraction map; and inputting the feature extraction map into the at least two fully-connected layers and outputting the predicted posture classification information.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented at least partially as a hardware Circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a Circuit, or implemented by any one of three implementations of software, hardware, and firmware, or any suitable combination of any of them. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any plurality of the obtaining module 710, the generating module 720, the obtaining module 730, and the outputting module 740 may be combined and implemented in one module/sub-module/unit/sub-unit, or any one of the modules/sub-modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/sub-modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/sub-module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the obtaining module 710, the generating module 720, the obtaining module 730, and the outputting module 740 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or may be implemented by any one of three implementations of software, hardware, and firmware, or any suitable combination of any of them. Alternatively, at least one of the obtaining module 710, the generating module 720, the obtaining module 730, and the outputting module 740 may be at least partially implemented as a computer program module, which when executed may perform a corresponding function.
It should be noted that the gesture detection device portion in the embodiments of the present disclosure corresponds to the gesture detection method portion in the embodiments of the present disclosure, and the description of the gesture detection device portion specifically refers to the gesture detection method portion and is not repeated herein.
Fig. 8 schematically shows a block diagram of an electronic device adapted to implement the above described method according to an embodiment of the present disclosure. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, an electronic device 800 according to an embodiment of the present disclosure includes a processor 801 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The processor 801 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 801 may also include onboard memory for caching purposes. The processor 801 may include a single processing unit or multiple processing units for performing different actions of the method flows according to embodiments of the present disclosure.
In the RAM 803, various programs and data necessary for the operation of the electronic apparatus 800 are stored. The processor 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. The processor 801 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 802 and/or RAM 803. Note that the programs may also be stored in one or more memories other than the ROM 802 and RAM 803. The processor 801 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
Electronic device 800 may also include input/output (I/O) interface 805, input/output (I/O) interface 805 also connected to bus 804, according to an embodiment of the present disclosure. The electronic device 800 may also include one or more of the following components connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, and the like; an output portion 807 including a Display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as necessary, so that a computer program read out therefrom is mounted on the storage section 808 as necessary.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program, when executed by the processor 801, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable Computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM) or flash Memory), a portable compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the preceding. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 802 and/or RAM 803 described above and/or one or more memories other than the ROM 802 and RAM 803.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method provided by the embodiments of the present disclosure, when the computer program product is run on an electronic device, the program code being adapted to cause the electronic device to carry out the gesture detection method provided by the embodiments of the present disclosure.
The computer program, when executed by the processor 801, performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted in the form of a signal on a network medium, distributed, downloaded and installed via communication section 809, and/or installed from removable media 811. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for executing computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high level procedural and/or object oriented programming languages, and/or assembly/machine languages. These programming languages include, but are not limited to, Java, C++, Python, and C. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Those skilled in the art will appreciate that various combinations and/or collocations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or collocations are not expressly recited in the present disclosure. In particular, various combinations and/or collocations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or collocations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (11)

1. A gesture detection method, comprising:
the method comprises the steps that millimeter wave radar signals for detecting actions of people to be detected are obtained, wherein the millimeter wave radar signals comprise a plurality of sending signals arranged according to a time sequence and a plurality of receiving signals corresponding to the sending signals;
generating a plurality of mixing signals according to each transmitting signal and each receiving signal corresponding to the transmitting signal;
processing the multiple mixing signals to obtain a target distance angle diagram; and
and inputting the target distance angle graph into a target neural network, and outputting a recognition result, wherein the target neural network is obtained by training an initial neural network by using a training sample data set, and the recognition result comprises the posture classification information of the crowd to be detected.
2. The method of claim 1, wherein the processing the mixed signals to obtain a target range-angle map comprises:
processing the multiple mixing signals to obtain an initial distance angle diagram; and
and performing data enhancement processing on the initial distance angle map to obtain the target distance angle map, wherein the data enhancement processing comprises inversion processing, translation processing and/or frame extraction processing of the initial distance angle map.
3. The method of claim 2, the plurality of receive signals being respectively acquired by a plurality of receive antennas of a millimeter wave radar, the initial range angle map comprising an initial two-dimensional matrix map;
wherein the processing the plurality of mixing signals to obtain an initial distance-angle map includes:
performing fast Fourier transform on each mixed signal and the mixed signal associated with the mixed signal to obtain distance information and speed information corresponding to the mixed signal;
aiming at each mixing signal, carrying out fast Fourier transform on the angles of the receiving signals acquired by different receiving antennas to obtain the angle information of the crowd to be detected;
generating a two-dimensional matrix map according to the distance information, the speed information and the angle information corresponding to the mixing signal; and
generating the initial two-dimensional matrix map from the plurality of two-dimensional matrix maps.
5. The method of claim 3, wherein the generating a two-dimensional matrix map according to the distance information, the velocity information, and the angle information corresponding to the mixing signal comprises:
constructing a target three-dimensional matrix according to the distance information, the velocity information, and the angle information;
summing the target three-dimensional matrix to obtain a target value and a two-dimensional matrix map;
filtering, in a case where the target value meets a preset threshold, the mixing signal corresponding to the target value to obtain a filtered mixing signal; and
adjusting the two-dimensional matrix map according to the filtered mixing signal to obtain the adjusted two-dimensional matrix map.
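The summation-and-threshold step of claim 4 can be sketched as follows. The claim does not state the direction of the comparison, so this sketch assumes that a cube whose total energy falls below the threshold carries no strong reflection and is filtered out by zeroing its map; that interpretation, and the function name, are assumptions.

```python
import numpy as np

def filtered_map(cube, threshold):
    """cube: 3-D (Doppler, range, angle) magnitude matrix for one mixing signal.
    Returns the summed target value and the (possibly filtered) 2-D matrix map."""
    target_value = cube.sum()           # target value from summing the 3-D matrix
    ra_map = cube.sum(axis=0)           # provisional 2-D matrix map
    if target_value < threshold:        # assumed meaning of "meets a preset threshold"
        ra_map = np.zeros_like(ra_map)  # filter the mixing signal's contribution out
    return target_value, ra_map

cube = np.ones((4, 8, 8))
val, m = filtered_map(cube, threshold=1000.0)
```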
5. The method according to claim 3, wherein the distance information d is calculated as shown in formula (1), the velocity information v is calculated as shown in formula (2), and the angle information θ is calculated as shown in formula (3):

d = (c · f_IF) / (2S)    (1)

v = (λ · ω) / (4π · T_c)    (2)

θ = arcsin(λ · ω / (2π · l))    (3)

wherein f_IF characterizes the frequency of the mixing signal, S characterizes the slope of the mixing signal, c characterizes the speed of light, λ characterizes the wavelength of the mixing signal, ω characterizes the phase difference between two adjacent mixing signals, T_c characterizes the time difference between two adjacent mixing signals, and l characterizes the distance between two adjacent receiving antennas.
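Formulas (1)-(3) are the standard FMCW relations and transcribe directly into code. The numeric values below (a 77 GHz carrier, a 30 MHz/µs chirp slope, a half-wavelength antenna spacing) are illustrative examples, not parameters from the patent.

```python
import math

C = 3.0e8  # speed of light, m/s

def distance(f_if, slope):
    """Formula (1): d = c * f_IF / (2 * S)."""
    return C * f_if / (2.0 * slope)

def velocity(lam, omega, t_c):
    """Formula (2): v = lambda * omega / (4 * pi * T_c)."""
    return lam * omega / (4.0 * math.pi * t_c)

def angle(lam, omega, l):
    """Formula (3): theta = arcsin(lambda * omega / (2 * pi * l))."""
    return math.asin(lam * omega / (2.0 * math.pi * l))

# Illustrative 77 GHz FMCW numbers (assumed, not from the patent):
d = distance(f_if=1.0e6, slope=30.0e12)           # -> 5.0 m
lam = C / 77.0e9                                  # ~3.9 mm wavelength
theta = angle(lam, omega=math.pi / 2, l=lam / 2)  # arcsin(0.5), i.e. 30 degrees
```

Note that with l = λ/2 the wavelength cancels in formula (3), so a phase difference of π/2 across adjacent antennas always maps to a 30° arrival angle.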
6. The method according to any one of claims 1 to 5, wherein the training of the initial neural network by using the training sample data set to obtain the target neural network comprises:
acquiring the training sample data set, wherein each training sample in the training sample data set comprises a training image containing distance and angle information and label data of the training image;
inputting the training image into the initial neural network, and outputting predicted posture classification information;
calculating a loss function according to the predicted posture classification information and the label data to obtain a loss result; and
iteratively adjusting network parameters of the initial neural network according to the loss result to generate the trained target neural network.
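The four training steps of claim 6 (predict, compute a loss result, adjust parameters, iterate) can be sketched with a toy linear softmax classifier standing in for the 3D CNN; the data, learning rate, and model are all assumptions made so the loop is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy stand-in data: 32 flattened "range-angle images", 3 posture classes.
X = rng.standard_normal((32, 64))
y = rng.integers(0, 3, size=32)
W = np.zeros((64, 3))                     # network parameters to be adjusted

losses = []
for step in range(300):
    p = softmax(X @ W)                    # predicted posture classification info
    loss = -np.log(p[np.arange(32), y] + 1e-12).mean()  # cross-entropy loss result
    losses.append(loss)
    grad = X.T @ (p - np.eye(3)[y]) / 32  # gradient of the loss w.r.t. W
    W -= 0.1 * grad                       # iteratively adjust parameters
```

With zero-initialized weights the first loss is exactly ln 3 (uniform prediction over three classes), and it decreases as the parameters are adjusted.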
7. The method of claim 6, wherein the initial neural network comprises at least three 3D convolutional layers and at least two fully connected layers;
wherein the inputting the training image into the initial neural network and outputting the predicted posture classification information comprises:
inputting the training image into the at least three 3D convolutional layers, and outputting a feature extraction map; and
inputting the feature extraction map into the at least two fully connected layers, and outputting the predicted posture classification information.
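A forward pass through three 3D convolutional layers followed by two fully connected layers can be sketched in plain numpy. Single-channel kernels, ReLU activations, and all shapes here are assumptions chosen to keep the example small; the patent does not specify them.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv3d(x, k):
    """Valid 3-D cross-correlation of volume x with kernel k."""
    win = sliding_window_view(x, k.shape)        # (D', H', W', kd, kh, kw)
    return np.einsum('dhwijk,ijk->dhw', win, k)

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 16, 16))             # (frames, range, angle) clip
k1, k2, k3 = (rng.standard_normal((3, 3, 3)) * 0.1 for _ in range(3))

# Three 3D convolutional layers -> feature extraction map.
feat = relu(conv3d(relu(conv3d(relu(conv3d(x, k1)), k2)), k3))  # (2, 10, 10)

# Two fully connected layers -> one score per posture class.
w1 = rng.standard_normal((feat.size, 16)) * 0.1
w2 = rng.standard_normal((16, 3)) * 0.1
scores = relu(feat.ravel() @ w1) @ w2
```

Each valid 3×3×3 convolution trims one voxel from every border, so the 8×16×16 clip shrinks to 2×10×10 before being flattened for the fully connected layers.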
8. A gesture detection apparatus comprising:
an acquisition module configured to acquire a millimeter wave radar signal for detecting motions of people to be detected, wherein the millimeter wave radar signal comprises a plurality of transmission signals arranged in time sequence and a plurality of receiving signals corresponding to each transmission signal;
a generating module configured to generate a plurality of mixing signals according to each transmission signal and the receiving signals corresponding to that transmission signal;
an obtaining module configured to process each of the mixing signals to obtain a plurality of target distance-angle maps; and
an output module configured to input the target distance-angle maps into a target neural network and output a recognition result, wherein the target neural network is obtained by training an initial neural network with a training sample data set, and the recognition result comprises posture classification information of the people to be detected.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 7.
11. A computer program product comprising a computer program which, when executed by a processor, is adapted to carry out the method of any one of claims 1 to 7.
CN202111259043.8A 2021-10-25 2021-10-25 Gesture detection method, gesture detection device, electronic device, and readable storage medium Pending CN114004255A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111259043.8A CN114004255A (en) 2021-10-25 2021-10-25 Gesture detection method, gesture detection device, electronic device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111259043.8A CN114004255A (en) 2021-10-25 2021-10-25 Gesture detection method, gesture detection device, electronic device, and readable storage medium

Publications (1)

Publication Number Publication Date
CN114004255A true CN114004255A (en) 2022-02-01

Family

ID=79924403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111259043.8A Pending CN114004255A (en) 2021-10-25 2021-10-25 Gesture detection method, gesture detection device, electronic device, and readable storage medium

Country Status (1)

Country Link
CN (1) CN114004255A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030534A (en) * 2023-02-22 2023-04-28 University of Science and Technology of China Training method of sleep posture model and sleep posture recognition method
CN117031434A (en) * 2023-10-08 2023-11-10 University of Science and Technology of China Real-time falling detection method based on millimeter wave radar
CN117031434B (en) * 2023-10-08 2024-02-20 University of Science and Technology of China Real-time falling detection method based on millimeter wave radar

Similar Documents

Publication Publication Date Title
US10878255B2 (en) Providing automatic responsive actions to biometrically detected events
US20210103763A1 (en) Method and apparatus for processing laser radar based sparse depth map, device and medium
CN106952303B (en) Vehicle distance detection method, device and system
CN114004255A (en) Gesture detection method, gesture detection device, electronic device, and readable storage medium
US11039044B2 (en) Target detection and mapping using an image acqusition device
WO2019080747A1 (en) Target tracking method and apparatus, neural network training method and apparatus, storage medium and electronic device
CN113743607A (en) Training method of anomaly detection model, anomaly detection method and device
Fang et al. Superrf: Enhanced 3d rf representation using stationary low-cost mmwave radar
CN111967332B (en) Visibility information generation method and device for automatic driving
AU2020278660A1 (en) Neural network and classifier selection systems and methods
CN116823884A (en) Multi-target tracking method, system, computer equipment and storage medium
Liu et al. Cooperative and comprehensive multi-task surveillance sensing and interaction system empowered by edge artificial intelligence
CN116343169A (en) Path planning method, target object motion control device and electronic equipment
CN112651351B (en) Data processing method and device
CN114581836A (en) Abnormal behavior detection method, device, equipment and medium
CN110991312A (en) Method, apparatus, electronic device, and medium for generating detection information
CN117471421B (en) Training method of object falling detection model and falling detection method
CN117636404B (en) Fall detection method and system based on non-wearable equipment
JP7484492B2 (en) Radar-based attitude recognition device, method and electronic device
CN117031434B (en) Real-time falling detection method based on millimeter wave radar
CN115019278B (en) Lane line fitting method and device, electronic equipment and medium
CN115798053B (en) Training method of human body posture estimation model, human body posture estimation method and device
US20240153274A1 (en) Artificial intelligence enabled distance event detection using image analysis
CN117017276B (en) Real-time human body tight boundary detection method based on millimeter wave radar
CN116311943B (en) Method and device for estimating average delay time of intersection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination