CN117671739B - User identity recognition method and device - Google Patents
- Publication number
- CN117671739B (application CN202410139924.3A)
- Authority
- CN
- China
- Prior art keywords
- result
- residual
- map data
- target object
- processing result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V40/103 — Static body considered as a whole, e.g. static pedestrian or occupant recognition
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06V10/454 — Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V10/75 — Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
- G06V10/764 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
- G06V10/806 — Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
- G06V10/96 — Management of image or video recognition tasks
- G06T2207/30196 — Human being; Person
- G06V2201/07 — Target detection
Abstract
The invention provides a user identity recognition method and device, comprising the following steps: acquiring pressure map data of a target object at the current time; carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result; if the sleeping gesture recognition result accords with preset sleeping gesture information, extracting feature nodes of the target object based on the pressure map data to obtain a plurality of pieces of feature node information; and determining the user identity of the target object based on the feature node information and the pre-stored feature array of each user. According to the invention, the specific sleeping gesture of the target object is first identified from the pressure map data, and the human-body feature nodes under that specific sleeping gesture are then detected from the pressure map data, which reduces the resources occupied by the model; finally, the specific user is identified from the human-body feature node information, which is not easily affected by the environment or other factors, so that the accuracy and stability of user identity recognition are effectively improved.
Description
Technical Field
The present invention relates to the field of identity recognition technologies, and in particular, to a user identity recognition method and device.
Background
With the development of intelligent mattresses, some smart mattresses already support monitoring of physiological parameter data (heart rate, respiration, body movement, etc.) and certain personalized functions (specific head-of-bed inclination, foot-of-bed inclination, massage modes, etc.). However, when multiple people use the same bed, the identity of the user must be identified quickly and effectively so that the sleep-related data can be attributed to the correct user and the personalized functions associated with that user can be applied.
At present, common identity recognition technologies mainly rely on fingerprints, voiceprints, DNA and some physiological characteristics. Although fingerprint, voiceprint, DNA and similar techniques can identify a user very accurately, they impose certain equipment requirements and usage-scenario limitations. For example, fingerprint recognition needs to acquire the user's fingerprint through an optical or capacitive sensor; the whole acquisition process requires the user's active cooperation and places high demands on sensor precision. To reduce hardware requirements, device power consumption and the resources occupied by the recognition function, human physiological characteristics such as height, weight and physiological parameters (e.g., heart rate, respiration, heart rate variability) are also widely used. Although such methods can basically meet the recognition requirement, their accuracy in identifying the user is low.
Disclosure of Invention
The invention provides a user identity recognition method and device, and aims to solve the technical problem of low accuracy of user identity recognition.
The invention provides a user identity recognition method, which comprises the following steps:
Acquiring pressure map data of a target object at the current time;
based on the pressure map data, carrying out gesture recognition on the target object to obtain a sleeping gesture recognition result;
if the sleeping gesture recognition result accords with preset sleeping gesture information, extracting feature nodes of the target object based on the pressure map data to obtain a plurality of feature node information;
determining the user identity of the target object based on the characteristic node information and the characteristic array of each prestored user;
wherein before carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result, the method further comprises:
determining pressure data difference values of all position points on the intelligent mattress in any adjacent time based on the pressure map data of the current time and the pressure map data of the historical time period;
determining an absolute value superposition result and a maximum absolute value of any adjacent time based on the absolute value corresponding to the pressure data difference value of each position point in any adjacent time;
Judging whether the pressure map data of the current time is in a stable state or not based on the absolute value superposition result and the maximum absolute value of any adjacent time;
And if the target object is in a stable state, executing the step of carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result.
According to the user identity recognition method provided by the invention, the user identity of the target object is determined based on the characteristic node information and the characteristic arrays of all pre-stored users, and the method comprises the following steps:
Determining the distance between the characteristic nodes based on the characteristic node information;
Forming a target feature array of the target object based on the distances between the feature nodes;
Respectively calculating the matching coefficients between the target feature array and the feature arrays of all the prestored users;
and determining the user identity of the target object based on each matching coefficient.
According to the user identity recognition method provided by the invention, the matching coefficients between the target feature array and the feature arrays of the prestored users are calculated respectively, and the method comprises the following steps:
for any prestored user's feature array:
calculating to obtain a first characteristic mean value based on the target characteristic array;
Calculating a second characteristic mean value based on the characteristic array of the prestored user;
Calculating a correlation coefficient between the target feature array and the feature array of the pre-stored user based on the target feature array, the feature array of the pre-stored user, the first feature average value and the second feature average value;
Determining the difference value between each distance in the target feature array and each distance in the feature array of the pre-stored user;
And calculating a matching coefficient between the target object and the pre-stored user based on the correlation coefficient and each difference value.
According to the user identity identification method provided by the invention, the user identity of the target object is determined based on each matching coefficient, and the method comprises the following steps:
selecting a target matching coefficient with the largest numerical value from the matching coefficients;
comparing the target matching coefficient with a preset coefficient threshold;
And if the target matching coefficient is larger than the preset coefficient threshold, determining that the user identity of the target object is a prestored user corresponding to the target matching coefficient.
According to the user identity recognition method provided by the invention, based on the pressure map data, gesture recognition is performed on the target object to obtain a sleeping gesture recognition result, and the method comprises the following steps:
performing Gaussian filtering processing on the pressure map data to obtain target pressure map data;
Performing edge extraction on the target pressure map data to obtain edge information of the target object;
And classifying sleeping postures of the edge information to obtain a sleeping posture recognition result.
According to the user identity recognition method provided by the invention, if the sleeping gesture recognition result accords with preset sleeping gesture information, feature node extraction is performed on the target object based on the pressure map data to obtain a plurality of feature node information, and the method comprises the following steps:
If the sleeping gesture recognition result accords with preset sleeping gesture information, inputting the pressure map data into a pre-constructed target detection model, and extracting feature nodes of the target object by using the target detection model to obtain a plurality of feature node information output by the target detection model;
The target detection model is trained based on a plurality of pieces of historical pressure map data collected in advance and position information of feature nodes corresponding to each piece of the historical pressure map data.
According to the user identity recognition method provided by the invention, the pressure map data is input into a pre-constructed target detection model, so that characteristic node extraction is carried out on the target object by using the target detection model, and a plurality of characteristic node information output by the target detection model is obtained, and the method comprises the following steps:
carrying out convolution processing on the pressure map data by using a first convolution layer to obtain a first convolution processing result;
performing convolution processing on the first convolution processing result by using a second convolution layer to obtain a second convolution processing result;
respectively carrying out convolution processing on the second convolution processing result by using a third convolution layer and a fourth convolution layer to obtain a third convolution processing result and a fourth convolution processing result;
Carrying out convolution processing on the third convolution processing result by using a fifth convolution layer to obtain a fifth convolution processing result;
downsampling the fourth convolution processing result to fuse the downsampling result with the third convolution processing result to obtain a first fusion result;
upsampling the third convolution processing result to fuse the upsampling result with the fourth convolution processing result to obtain a second fusion result;
Residual processing is carried out on the fifth convolution processing result by using a first residual block, so that a first residual processing result is obtained;
Residual processing is carried out on the first fusion result by using a second residual block, so that a second residual processing result is obtained;
residual processing is carried out on the second fusion result by using a third residual block, and a third residual processing result is obtained;
Respectively carrying out downsampling treatment on the second residual error treatment result and the third residual error treatment result so as to carry out fusion treatment on the first residual error treatment result, the downsampled second residual error treatment result and the third residual error treatment result and obtain a third fusion result;
Performing up-sampling processing on the first residual processing result and performing down-sampling processing on the third residual processing result so as to perform fusion processing on the second residual processing result, the up-sampled first residual processing result and the down-sampled third residual processing result to obtain a fourth fusion result;
Respectively carrying out up-sampling treatment on the first residual error treatment result and the second residual error treatment result so as to carry out fusion treatment on the third residual error treatment result and the up-sampled first residual error treatment result and second residual error treatment result to obtain a fifth fusion result;
Residual processing is carried out on the third fusion result by using a fourth residual block, so that a fourth residual processing result is obtained;
Residual processing is carried out on the fourth fusion result by utilizing a fifth residual block, and a fifth residual processing result is obtained;
residual processing is carried out on the fifth fusion result by utilizing a sixth residual block, and a sixth residual processing result is obtained;
Performing fusion processing on the fourth residual error processing result, the fifth residual error processing result and the sixth residual error processing result to obtain a sixth fusion result;
carrying out convolution processing on the sixth fusion result by using a sixth convolution layer to obtain a sixth convolution processing result;
And performing full connection processing on the sixth convolution processing result by using a full connection layer to obtain the plurality of characteristic node information.
According to the user identification method provided by the invention, the characteristic node information at least comprises one of head position information, right scapula position information, left scapula position information, right elbow joint position information, left elbow joint position information, right palm position information, left palm position information, right hip position information, left hip position information, right leg knee position information, left leg knee position information, right foot position information and left foot position information.
The invention also provides a user identity recognition device, which comprises:
the intelligent mattress comprises an intelligent mattress body, wherein a sleep area of the intelligent mattress body is provided with a plurality of pressure sensors, and the pressure sensors are used for acquiring pressure map data of a target object in the sleep area;
The management platform is in communication connection with the intelligent mattress and is used for receiving pressure map data sent by the intelligent mattress;
The management platform is further used for carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result; if the sleeping gesture recognition result accords with preset sleeping gesture information, extracting feature nodes of the target object based on the pressure map data to obtain a plurality of feature node information; and determining the user identity of the target object based on the characteristic node information and the characteristic array of each prestored user; wherein before carrying out gesture recognition on the target object based on the pressure map data to obtain the sleeping gesture recognition result, the management platform is further used for:
determining pressure data difference values of all position points on the intelligent mattress in any adjacent time based on the pressure map data of the current time and the pressure map data of the historical time period;
determining an absolute value superposition result and a maximum absolute value of any adjacent time based on the absolute value corresponding to the pressure data difference value of each position point in any adjacent time;
Judging whether the pressure map data of the current time is in a stable state or not based on the absolute value superposition result and the maximum absolute value of any adjacent time;
And if the target object is in a stable state, executing the step of carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements any one of the user identification methods described above when executing the program.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a user identification method as described in any of the above.
The invention also provides a computer program product comprising a computer program which when executed by a processor implements a user identification method as described in any one of the above.
The invention provides a user identity recognition method and a device, comprising the following steps: acquiring pressure map data of a target object at the current time; based on the pressure map data, carrying out gesture recognition on the target object to obtain a sleeping gesture recognition result; if the sleeping gesture recognition result accords with preset sleeping gesture information, extracting feature nodes of the target object based on the pressure map data to obtain a plurality of feature node information; and determining the user identity of the target object based on the characteristic node information and the characteristic array of each prestored user. According to the invention, the specific sleeping posture of the target object is identified based on the pressure map data, so that when the sleeping posture accords with the preset specific sleeping posture, the human body characteristic nodes under the specific sleeping posture are detected based on the pressure map data, the power consumption can be reduced, and the occupation of the model on related resources can be reduced; and finally, the specific user is identified according to the characteristic node information of the human body, the characteristic node information of the human body is stable and is not easily influenced by environment or other factors, and the accuracy and stability of the user identification are effectively improved.
Drawings
In order to more clearly illustrate the invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or of the prior art will be briefly described below. It is obvious that the drawings described below show some embodiments of the invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of a user identification method provided by the invention;
FIG. 2 is a schematic diagram of pressure for different sleeping positions according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of edge extraction of pressure map data for different sleeping positions according to an embodiment of the present invention;
FIG. 4 is a schematic view of feature node information provided in a supine position according to an embodiment of the present invention;
FIG. 5 is a second flowchart of a user identification method according to the present invention;
FIG. 6 is a schematic view of a connection structure between human body feature nodes in a supine position according to an embodiment of the present invention;
FIG. 7 is a third flow chart of the user identification method according to the present invention;
FIG. 8 is a flowchart of a user identification method according to the present invention;
FIG. 9 is a diagram of a model structure of an object detection model according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a user identification device according to the present invention;
Fig. 11 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terminology used in the one or more embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the one or more embodiments of the invention. As used in one or more embodiments of the invention, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present invention refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of the invention to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first may also be referred to as a second and, similarly, a second may also be referred to as a first, without departing from the scope of one or more embodiments of the invention. The word "if" as used herein may be interpreted as "when" or "upon", depending on the context.
Fig. 1 is a schematic flow chart of a user identification method provided by the invention. As shown in fig. 1, the user identification method includes:
Step S11, obtaining pressure map data of a target object at the current time;
It should be noted that a plurality of pressure sensors are disposed on the sleeping area of the intelligent mattress; the pressure sensors may be arranged in an array, for example along both the transverse and longitudinal directions of the sleeping area, and are used to collect pressure data from different parts of a user lying on the sleeping area. Referring to fig. 2, which is a schematic pressure diagram of different sleeping gestures according to an embodiment of the present invention, the pressure map data of the user at the current time can be determined from the pressure values collected by the pressure sensors at different positions.
Step S12, carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result;
Specifically, in an embodiment, based on pressure map data of the current time, the target object may be classified into a sleeping posture by using a pre-trained posture classification model, so as to obtain a sleeping posture recognition result, where the sleeping posture recognition result includes sleeping postures such as lying on the back, lying on the stomach, lying on the side, and the like.
In an embodiment, in order to improve the accuracy of sleeping gesture recognition, Gaussian filtering may first be performed on the pressure map data to obtain target pressure map data, thereby removing interference noise from the pressure map data; edge extraction is then performed on the target pressure map data to obtain the edge information of the target object (see fig. 3, a schematic diagram after edge extraction of the pressure map data for different sleeping gestures provided by an embodiment of the present invention); the sleeping gesture of the target object can then be classified based on the edge information using a gesture classification model obtained through pre-training, so as to obtain the sleeping gesture recognition result.
Based on the pressure map data, carrying out gesture recognition on the target object, and before obtaining a sleeping gesture recognition result, further comprising:
Step S21, determining the pressure data difference value of each position point on the intelligent mattress in any adjacent time based on the pressure map data of the current time and the pressure map data of the historical time period;
Step S22, determining an absolute value superposition result and a maximum absolute value of any adjacent time based on the absolute value corresponding to the pressure data difference value of each position point in any adjacent time;
Step S23, judging whether the pressure map data of the current time is in a stable state or not based on the absolute value superposition result and the maximum absolute value of any adjacent time;
and step S24, if the target object is in a stable state, executing the step of carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result.
Specifically, the pressure map data of the historical time period is collected, and then the pressure data difference value of each position point on the intelligent mattress in any adjacent time is calculated based on the pressure map data of the current time and the pressure map data of the historical time period, for example, the pressure data difference value of each position point is determined based on the pressure map data of the current time and the pressure map data of the previous 1 second, and the pressure data difference value of each position point is determined based on the pressure map data of the previous 1 second and the pressure map data of the previous 2 seconds.
Further, for the pressure data difference values of all position points within any pair of adjacent times: the maximum absolute value is determined from the absolute values of the difference values, and the absolute values of all difference values are added to obtain the absolute value superposition result. The absolute value superposition result is then compared with a first numerical threshold and the maximum absolute value is compared with a second numerical threshold; both thresholds can be set according to the actual situation, with the first numerical threshold larger than the second numerical threshold, for example the first numerical threshold is set to 1000 and the second numerical threshold is set to 200.
Further, if the absolute value superposition result is smaller than the first numerical threshold and the maximum absolute value is larger than the second numerical threshold, determining that the pressure map data at the current time is in a stable state, and executing the step of performing gesture recognition on the target object based on the pressure map data to obtain a sleep gesture recognition result, namely performing sleep gesture judgment on the target object under the stable state of the pressure data. If the absolute value superposition result is not smaller than the first numerical threshold or the maximum absolute value is not larger than the second numerical threshold, the pressure map data of the current time is considered to be in an unstable state, and then the pressure map data of the target object is continuously acquired.
According to the embodiment of the invention, based on the pressure map data in a period of time, whether the pressure map data of the target object in the current time is in a stable state is judged, so that gesture recognition is carried out on the target object based on the pressure map data in the stable state, the false recognition risk caused by unstable pressure data is reduced, and the accuracy of subsequent sleep gesture detection is effectively improved.
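As an illustration of this stability check, a minimal Python (NumPy) sketch follows. The threshold values and the 64×32 grid size are assumptions taken from the example above, and the stability condition is applied exactly as described in this embodiment.

```python
import numpy as np

def is_stable(frames, sum_threshold=1000.0, max_threshold=200.0):
    """Judge whether the latest pressure map is in a stable state.

    frames: sequence of 2-D pressure maps (oldest ... newest), e.g. one per second.
    For every pair of adjacent times, the per-point pressure differences are taken,
    their absolute values are summed (the superposition result) and the maximum
    absolute value is found; the thresholds are applied as in the embodiment above
    (assumed example values: 1000 and 200).
    """
    for prev, curr in zip(frames[:-1], frames[1:]):
        diff = np.abs(curr.astype(float) - prev.astype(float))
        superposition = diff.sum()   # absolute value superposition result
        peak = diff.max()            # maximum absolute value
        # stable only if the superposition result is below the first threshold
        # and the maximum absolute value exceeds the second threshold (as described above)
        if not (superposition < sum_threshold and peak > max_threshold):
            return False             # unstable: keep collecting pressure map data
    return True

# usage: frames collected over the last few seconds (64x32 grid is an assumption)
history = [np.random.rand(64, 32) * 300 for _ in range(3)]
if is_stable(history):
    print("pressure map is stable; run sleeping-gesture recognition")
```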
Step S13, if the sleeping gesture recognition result accords with preset sleeping gesture information, extracting feature nodes of the target object based on the pressure map data to obtain a plurality of feature node information;
It should be noted that the preset sleeping gesture information may be set according to the actual situation; optionally, it includes lying on the back (supine), lying on the stomach (prone) and lying on the side. Because fewer feature nodes can be extracted when the user lies on the side, in order to improve the accuracy of user identity recognition, as a preferred example the preset sleeping gesture information is set to supine and prone. It will be appreciated that the identity of the user then only needs to be identified when the user is in a supine or prone position.
The characteristic node information includes at least one of head position information, right scapula position information, left scapula position information, right elbow joint position information, left elbow joint position information, right palm position information, left palm position information, right hip position information, left hip position information, right leg knee position information, left leg knee position information, right foot position information, and left foot position information. In order to improve accuracy of user identification, in other embodiments, information of other characteristic nodes of the human body may be further added.
Specifically, it is judged whether the sleeping gesture recognition result accords with the preset sleeping gesture information; if so, the human-body feature nodes are extracted from the pressure map data using a pre-constructed target detection model, thereby obtaining a plurality of pieces of feature node information. Optionally, before the target detection model is trained, a large amount of pressure map data in supine or prone positions is collected in advance, the position information of the human-body feature nodes is marked manually, and this position information is used as the label values. In the actual training process, the pressure map data can be input directly into the model to be trained to obtain the key-point coordinates regressed by the model, and the model parameters are then iteratively updated based on the regressed key-point coordinates and the label values until the optimal model is obtained.
As can be appreciated, referring to fig. 4, fig. 4 is a schematic view of feature node information in a supine state provided by an embodiment of the present invention, taking pressure map data in the supine state as an example, the feature node information of a human body includes a (head position), B (right scapula position), C (left scapula position), D (right elbow joint position), E (left elbow joint position), F (right palm position), G (left palm position), H (right hip position), I (left hip position), J (right leg knee position), K (left leg knee position), L (right foot position), M (left foot position).
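The training procedure described above can be sketched as follows. This is only a hedged illustration: the network is represented by a placeholder regressor, and the input size (64×32 pressure grid), optimizer, learning rate and epoch count are assumptions not fixed by the patent. The essential point it shows is that the model directly regresses the 13 key-point coordinates and is optimized against the manually labelled positions.

```python
import torch
import torch.nn as nn

# Placeholder stand-in for the target detection model described later;
# it maps a 1x64x32 pressure map to 13 (x, y) key-point coordinates.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 32, 128), nn.ReLU(), nn.Linear(128, 13 * 2))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer/learning rate
criterion = nn.MSELoss()                                    # regression loss on the coordinates

def train(loader, epochs=50):
    for _ in range(epochs):
        for pressure_map, keypoints in loader:   # keypoints: manually labelled positions (label values)
            pred = model(pressure_map)            # directly regressed key-point coordinates
            loss = criterion(pred, keypoints.view(keypoints.size(0), -1))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```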
Step S14, determining the user identity of the target object based on the characteristic node information and the characteristic arrays of the prestored users.
Specifically, based on the feature node information, calculating to obtain the distance between the feature nodes, forming a target feature array of the target object based on the distance, further, calculating the matching coefficient between the target feature array and the feature array of each pre-stored user, selecting the largest matching coefficient from the matching coefficients, comparing the largest matching coefficient with a preset coefficient threshold, and if the largest matching coefficient is larger than the preset coefficient threshold, determining the user identity of the target object as the pre-stored user corresponding to the matching coefficient.
The embodiment of the invention comprises the following steps: acquiring pressure map data of a target object at the current time; based on the pressure map data, carrying out gesture recognition on the target object to obtain a sleeping gesture recognition result; if the sleeping gesture recognition result accords with preset sleeping gesture information, extracting feature nodes of the target object based on the pressure map data to obtain a plurality of feature node information; and determining the user identity of the target object based on the characteristic node information and the characteristic array of each prestored user. The specific sleeping posture of the target object is identified based on the pressure map data, and then the human body characteristic nodes under the specific sleeping posture are detected based on the pressure map data under the condition that the sleeping posture accords with the preset specific sleeping posture, so that the effects of reducing power consumption and reducing the occupation of related resources by the model can be achieved; and finally, the specific user is identified according to the characteristic node information of the human body, the characteristic node information of the human body is stable and is not easily influenced by environment or other factors, and the accuracy and stability of the user identification are effectively improved.
Referring to fig. 5, fig. 5 is a second flowchart of a user identity recognition method provided by the present invention, in one embodiment of the present invention, determining a user identity of the target object based on the feature node information and a feature array of each pre-stored user includes:
step S31, determining the distance between the characteristic nodes based on the characteristic node information;
step S32, forming a target feature array of the target object based on the distance between the feature nodes;
Step S33, respectively calculating the matching coefficients between the target feature array and the feature arrays of all the prestored users;
Step S34, determining the user identity of the target object based on each of the matching coefficients.
Specifically, the distance between the feature nodes is calculated based on each piece of feature node information; the distance may be a Euclidean distance, a Manhattan distance, or the like. For example, referring to fig. 6, which is a schematic diagram of a connection structure between human-body feature nodes in a supine state according to an embodiment of the present invention, the distances between feature nodes include: a distance D1 between A and the midpoint N of the line connecting B and C, a distance D2 between B and C, a distance D3 between H and I, a distance D4 between N and the midpoint O of the line connecting H and I, a distance D5 between B and D, a distance D6 between D and F, a distance D7 between C and E, a distance D8 between E and G, a distance D9 between H and J, a distance D10 between J and L, a distance D11 between I and K, and a distance D12 between K and M.
Further, a target feature array of the target object is formed from the distances between the feature nodes, and the matching coefficient between the target feature array and the feature array of each pre-stored user is calculated. The largest matching coefficient is selected as the target matching coefficient and compared with a preset coefficient threshold: if the target matching coefficient is larger than the preset coefficient threshold, the user identity of the target object is determined to be the pre-stored user corresponding to the target matching coefficient; if it is not larger than the preset coefficient threshold, the target object is determined not to belong to any pre-stored user.
According to the embodiment of the invention, the target feature array of the target object is formed based on the plurality of feature node information of the human body nodes, and then the user identity of the target object is determined according to the matching coefficient between the target feature array and the feature arrays of all pre-stored users, so that the accuracy and stability of user identity identification can be effectively improved.
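As an illustration of steps S31–S32, a minimal Python sketch of building the target feature array from the detected node positions is given below. The choice of Euclidean distance and the particular node pairs (including the midpoints N and O) mirror the example of FIG. 6; the midpoint reading of N and O is an assumption, and any of the distance definitions mentioned above could be substituted.

```python
import numpy as np

def feature_array(nodes):
    """nodes: dict of 2-D positions keyed 'A'..'M' as in FIG. 4 / FIG. 6."""
    p = {k: np.asarray(v, dtype=float) for k, v in nodes.items()}
    N = (p['B'] + p['C']) / 2        # midpoint of the B-C line (assumed reading of N)
    O = (p['H'] + p['I']) / 2        # midpoint of the H-I line (assumed reading of O)
    dist = lambda a, b: float(np.linalg.norm(a - b))   # Euclidean distance
    return np.array([
        dist(p['A'], N),  dist(p['B'], p['C']), dist(p['H'], p['I']), dist(N, O),
        dist(p['B'], p['D']), dist(p['D'], p['F']), dist(p['C'], p['E']), dist(p['E'], p['G']),
        dist(p['H'], p['J']), dist(p['J'], p['L']), dist(p['I'], p['K']), dist(p['K'], p['M']),
    ])   # D1 ... D12 forming the target feature array
```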
Referring to fig. 7, fig. 7 is a third flowchart of a user identification method provided by the present invention, in one embodiment of the present invention, calculating matching coefficients between the target feature array and each of the feature arrays of the pre-stored users, respectively, includes:
for any prestored user's feature array:
Step S41, calculating to obtain a first characteristic mean value based on the target characteristic array;
Step S42, calculating a second characteristic mean value based on the characteristic array of the pre-stored user;
Step S43, calculating a correlation coefficient between the target feature array and the feature array of the pre-stored user based on the target feature array, the feature array of the pre-stored user, the first feature average value and the second feature average value;
Step S44, determining the difference value between each distance in the target feature array and each distance in the feature array of the pre-stored user;
Step S45, calculating a matching coefficient between the target object and the pre-stored user based on the correlation coefficient and each difference value.
Specifically, an averaging operation is performed on the distances in the target feature array to obtain a first feature mean value, and an averaging operation is performed on the distances in the pre-stored feature array to obtain a second feature mean value. Further, based on the distances in the target feature array, the distances in the pre-stored feature array, the first feature mean value and the second feature mean value, the correlation coefficient between the target feature array and the pre-stored feature array is calculated as follows:

$$corr(d,l)=\frac{\sum_{i}\left(d_i-\bar{d}\right)\left(l_i-\bar{l}\right)}{\sqrt{\sum_{i}\left(d_i-\bar{d}\right)^{2}}\,\sqrt{\sum_{i}\left(l_i-\bar{l}\right)^{2}}}$$

wherein $\bar{d}$ represents the first feature mean value, $\bar{l}$ represents the second feature mean value, $corr(d,l)$ represents the correlation coefficient, $d_i$ represents the $i$-th distance in the target feature array, and $l_i$ represents the $i$-th distance in the pre-stored feature array.
Further, the difference value between each distance in the target feature array and the corresponding distance in the pre-stored feature array is calculated, and the matching coefficient is then calculated based on the correlation coefficient, the difference value corresponding to each distance, and preset weight coefficients. The matching coefficient combines, by weighting, the correlation coefficient with the (absolute values of the) distance difference values, where min denotes the minimum function, |·| denotes the absolute-value function, and β1, αi, β2 and β3, together with a further set of per-distance weight coefficients, are the preset weights. For example, in one embodiment, β1 is set to 0.6; the values of αi are set to 0.03, 0.18, 0.15, 0.3, 0.05, 0.02, 0.08 and 0.02; β2 is set to 0.4; β3 is set to -0.1; and the values of the further weight coefficients are set to 0.06, 0.18, 0.2, 0.25, 0.05, 0.04, 0.08 and 0.04.
According to the embodiment of the invention, the correlation coefficient between the target feature array and the feature array is calculated, and the difference value between each distance in the target feature array and each distance in the feature array is calculated, so that the matching coefficient between the target object and the pre-stored user is accurately analyzed and calculated based on the correlation coefficient and the difference value corresponding to each distance, and the accuracy of subsequent identity recognition is improved.
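A sketch of the matching computation (steps S41–S45) is shown below. The correlation coefficient follows the Pearson form given above; because the exact combination formula for the matching coefficient is not reproduced here, the weighted combination in `matching_coefficient`, the weight defaults and the decision threshold are illustrative assumptions only, consistent with the quantities (β1, αi, β2, β3 and the distance differences) described above.

```python
import numpy as np

def correlation(d, l):
    """Pearson-style correlation between target array d and stored array l."""
    d, l = np.asarray(d, float), np.asarray(l, float)
    dm, lm = d.mean(), l.mean()          # first and second feature mean values
    num = np.sum((d - dm) * (l - lm))
    den = np.sqrt(np.sum((d - dm) ** 2)) * np.sqrt(np.sum((l - lm) ** 2)) + 1e-12
    return num / den

def matching_coefficient(d, l, beta1=0.6, alphas=None, beta2=0.4, beta3=-0.1):
    """Assumed weighted combination of the correlation and per-distance differences;
    the patent's exact formula uses the same quantities but is not reproduced here."""
    d, l = np.asarray(d, float), np.asarray(l, float)
    if alphas is None:
        alphas = np.full(len(d), 1.0 / len(d))
    diffs = np.abs(d - l)                                   # difference per distance
    penalty = np.sum(alphas * diffs / np.maximum(l, 1e-6))  # relative deviation, weighted
    return beta1 * correlation(d, l) + beta2 * (1.0 - penalty) + beta3 * penalty

def identify(target, stored_users, threshold=0.8):
    """Pick the pre-stored user with the largest matching coefficient (assumed threshold)."""
    best_user, best_m = None, -np.inf
    for user, arr in stored_users.items():
        m = matching_coefficient(target, arr)
        if m > best_m:
            best_user, best_m = user, m
    return best_user if best_m > threshold else None
```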
Referring to fig. 8, fig. 8 is a flowchart of a user identification method provided by the present invention, in one embodiment of the present invention, gesture recognition is performed on the target object based on the pressure map data, so as to obtain a sleep gesture recognition result, which includes:
step S51, gaussian filtering processing is carried out on the pressure map data, and target pressure map data are obtained;
Step S52, carrying out edge extraction on the target pressure map data to obtain edge information of the target object;
And step S53, classifying the sleeping gesture of the edge information to obtain the sleeping gesture recognition result.
Specifically, Gaussian filtering is performed on the pressure map data to remove high-frequency noise, yielding the target pressure map data; the filter convolution kernel that is used may include, but is not limited to, a Gaussian kernel.
Further, edge extraction is performed on the target pressure map data to obtain the edge information of the target object (for the edge information, refer to fig. 3); for example, a Canny operator is used to extract the boundary information. In more detail, Gaussian smoothing is first performed on the target pressure map data:

$$h(x,y)=f(x,y)*G(x,y)$$

where $h(x,y)$ is the pressure data after Gaussian smoothing, the lower-left corner of the pressure matrix is taken as the origin, the horizontal-right direction as the positive x-axis and the vertical-upward direction as the positive y-axis, $x$ is the horizontal coordinate, $y$ is the vertical coordinate, $f(x,y)$ is the target pressure map data at that position, and $G(x,y)$ is the Gaussian kernel. The gradient magnitude and direction are then calculated for the Gaussian-smoothed pressure data; the kernel used for this calculation includes, but is not limited to, the Sobel operator:

$$Sobel_x=\begin{pmatrix}-1&0&1\\-2&0&2\\-1&0&1\end{pmatrix},\qquad Sobel_y=\begin{pmatrix}-1&-2&-1\\0&0&0\\1&2&1\end{pmatrix}$$

$$d_x(x,y)=(Sobel_x*h)(x,y),\qquad d_y(x,y)=(Sobel_y*h)(x,y)$$

$$M(x,y)=\sqrt{d_x(x,y)^2+d_y(x,y)^2},\qquad \theta_M=\arctan\!\left(\frac{d_y(x,y)}{d_x(x,y)}\right)$$

where $Sobel_x$ is the horizontal-axis convolution kernel of the Sobel operator, $Sobel_y$ is the vertical-axis convolution kernel, $d_x(x,y)$ is the gradient value in the horizontal direction at point $(x,y)$, $d_y(x,y)$ is the gradient value in the vertical direction at point $(x,y)$, $M(x,y)$ is the gradient magnitude at point $(x,y)$, and $\theta_M$ is the gradient direction at point $(x,y)$. Non-maximum suppression is then applied to the magnitude according to the angle. The specific steps of the non-maximum suppression are as follows. The direction is first quantized according to preset rules, for example: the angle is 0° if $\theta_M$ is greater than -22.5 and less than 22.5, or greater than 157.5 and less than or equal to 180, or greater than -180 and less than or equal to -157.5; the angle is 45° if $\theta_M$ is greater than or equal to 22.5 and less than 67.5, or greater than or equal to -157.5 and less than -112.5; the angle is 90° if $\theta_M$ is greater than or equal to 67.5 and less than or equal to 112.5, or greater than or equal to -112.5 and less than or equal to -67.5; the angle is 135° (or -45°) if $\theta_M$ is greater than 112.5 and less than or equal to 157.5, or greater than or equal to -67.5 and less than or equal to -22.5. The center point is then compared with the two pixels along that angular direction: if the center pixel is the maximum value, it is kept; otherwise it is set to 0. Further, edges are detected and connected using a double-threshold algorithm with a first threshold coefficient TH (for example 0.3) and a second threshold coefficient TL (for example 0.1): if the processed pressure value is smaller than TL, it is set to 0; if it is larger than TH, the point is determined to be an edge point and set to 1; if it is larger than TL and smaller than TH, the 8 neighboring points are examined, and the value is set to 1 if any of them is an edge point, and to 0 otherwise. This yields the finally extracted boundary information.
Further, based on the boundary information, the sleeping gesture is classified using the classification model obtained through pre-training, so as to obtain the sleeping gesture recognition result. Optionally, in an embodiment, the kernel function used by the classification model is the RBF (Radial Basis Function) kernel, the penalty parameter takes its default value of 1.0, the multi-class strategy is OVR (One-vs-Rest), and the classes identified by the classifier include supine, prone, lying on the side, other abnormal states, and the like.
According to the embodiment of the invention, the edge information of the target object is accurately obtained through filtering processing and edge extraction of the pressure map data, and then the edge information of the target object is subjected to sleeping gesture classification based on the edge information, so that a sleeping gesture recognition result is obtained, and the accuracy of sleeping gesture recognition classification is effectively improved.
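The preprocessing and classification pipeline of steps S51–S53 can be approximated with off-the-shelf operators; a hedged sketch using OpenCV and scikit-learn follows. The 0–255 normalization, the kernel size and the Canny thresholds are assumptions (the thresholds roughly mirror the 0.1/0.3 coefficients above), and `posture_clf` stands in for the pre-trained RBF-kernel, one-vs-rest classifier mentioned above.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def edge_features(pressure_map):
    """Gaussian filtering followed by edge extraction, approximating steps S51-S52."""
    img = cv2.normalize(pressure_map.astype(np.float32), None, 0, 255,
                        cv2.NORM_MINMAX).astype(np.uint8)
    smoothed = cv2.GaussianBlur(img, (3, 3), 0)   # Gaussian filtering (assumed 3x3 kernel)
    edges = cv2.Canny(smoothed, 25, 76)           # double-threshold edges (~0.1 / 0.3 of 255)
    return edges.flatten()

# Pre-trained sleeping-gesture classifier: RBF kernel, penalty parameter 1.0, one-vs-rest.
posture_clf = SVC(kernel="rbf", C=1.0, decision_function_shape="ovr")
# posture_clf.fit(training_edge_features, training_labels)  # labels: supine/prone/side/other

def recognize_posture(pressure_map):
    return posture_clf.predict(edge_features(pressure_map).reshape(1, -1))[0]
```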
In one embodiment of the present invention, if the sleep gesture recognition result meets preset sleep gesture information, extracting feature nodes of the target object based on the pressure map data to obtain a plurality of feature node information, including:
If the sleeping gesture recognition result accords with preset sleeping gesture information, inputting the pressure map data into a pre-constructed target detection model, and extracting feature nodes of the target object by using the target detection model to obtain a plurality of feature node information output by the target detection model; the target detection model is trained based on a plurality of pieces of historical pressure map data collected in advance and position information of feature nodes corresponding to each piece of the historical pressure map data.
It should be noted that, referring to fig. 9, which is a model structure diagram of the target detection model provided by an embodiment of the present invention, Conv represents a convolution layer whose specific operations include the three steps Conv2d (convolution), BN (batch normalization) and ReLU (activation); K3 represents a 3×3 convolution kernel; s2 represents a stride of 2 and s1 a stride of 1; p1 represents a padding of 1, and Maxpooling represents a max-pooling operation; c16, c32 and c64 represent 16, 32 and 64 convolution kernels respectively; and 32×16×32 represents the shape of the output of the corresponding layer. Up x2 denotes up-sampling by a factor of 2, Up x4 up-sampling by a factor of 4, Down x2 down-sampling by a factor of 2 and Down x4 down-sampling by a factor of 4; the fusion of branches comprises the two operations of summation and ReLU. Basic Block is the basic residual block of ResNet, and Conv 1*1 is a 1×1 convolution layer used to adjust the shape of its input and to add nonlinearity (it also includes a ReLU operation). Fully connected layer denotes the fully connected layer, and Output layer denotes the output layer, which outputs, for example, the position information of the final 13 feature nodes in the supine or prone sleeping gesture.
More specifically, referring to fig. 10, the detection procedure of the target detection model is as follows:
carrying out convolution processing on the pressure map data by using a first convolution layer to obtain a first convolution processing result;
performing convolution processing on the first convolution processing result by using a second convolution layer to obtain a second convolution processing result;
respectively carrying out convolution processing on the second convolution processing result by using a third convolution layer and a fourth convolution layer to obtain a third convolution processing result and a fourth convolution processing result;
carrying out convolution processing on the third convolution processing result by using a fifth convolution layer to obtain a fifth convolution processing result;
downsampling the fourth convolution processing result to fuse the downsampling result with the third convolution processing result to obtain a first fusion result;
upsampling the third convolution processing result to fuse the upsampling result with the fourth convolution processing result to obtain a second fusion result;
carrying out residual processing on the fifth convolution processing result by using a first residual block to obtain a first residual processing result;
carrying out residual processing on the first fusion result by using a second residual block to obtain a second residual processing result;
carrying out residual processing on the second fusion result by using a third residual block to obtain a third residual processing result;
respectively carrying out downsampling processing on the second residual processing result and the third residual processing result, so as to fuse the first residual processing result with the downsampled second and third residual processing results to obtain a third fusion result;
carrying out up-sampling processing on the first residual processing result and down-sampling processing on the third residual processing result, so as to fuse the second residual processing result with the up-sampled first residual processing result and the down-sampled third residual processing result to obtain a fourth fusion result;
respectively carrying out up-sampling processing on the first residual processing result and the second residual processing result, so as to fuse the third residual processing result with the up-sampled first and second residual processing results to obtain a fifth fusion result;
carrying out residual processing on the third fusion result by using a fourth residual block to obtain a fourth residual processing result;
carrying out residual processing on the fourth fusion result by using a fifth residual block to obtain a fifth residual processing result;
carrying out residual processing on the fifth fusion result by using a sixth residual block to obtain a sixth residual processing result;
fusing the fourth residual processing result, the fifth residual processing result and the sixth residual processing result to obtain a sixth fusion result;
carrying out convolution processing on the sixth fusion result by using a sixth convolution layer to obtain a sixth convolution processing result;
and performing full connection processing on the sixth convolution processing result by using a full connection layer to obtain the plurality of characteristic node information.
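Purely as an illustration of the data flow just described, the sketch below wires three resolution branches together in the stated order: two parallel branches after the second convolution, a deeper branch from the fifth convolution, pairwise and three-way fusions implemented as channel-adjusting 1×1 convolutions plus summation and ReLU, Basic Block residual stages, and a final convolution and fully connected layer producing 13 (x, y) node positions. The concrete channel counts, bilinear interpolation for resizing, global average pooling before the fully connected layer, and torchvision's BasicBlock as the residual block are all assumptions made to keep the sketch runnable; they are not the exact configuration of the patented model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models.resnet import BasicBlock   # stand-in for the "Basic Block" residual block


def conv_bn_relu(cin, cout, k=3, s=1, p=1):
    # the "Conv" unit of Fig. 9: Conv2d -> BN -> ReLU
    return nn.Sequential(nn.Conv2d(cin, cout, k, s, p),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))


def resize(x, like):
    # up-/down-sampling before fusion; bilinear interpolation is an assumption
    return F.interpolate(x, size=like.shape[-2:], mode="bilinear", align_corners=False)


class PressureKeypointNet(nn.Module):
    """Illustrative three-scale network following the fusion order described in the text."""

    def __init__(self, num_nodes=13):
        super().__init__()
        self.num_nodes = num_nodes
        self.conv1 = conv_bn_relu(1, 16, s=2)     # first and second convolution layers
        self.conv2 = conv_bn_relu(16, 16)
        self.conv3 = conv_bn_relu(16, 32, s=2)    # mid-resolution branch
        self.conv4 = conv_bn_relu(16, 16)         # high-resolution branch
        self.conv5 = conv_bn_relu(32, 64, s=2)    # low-resolution branch
        self.res1, self.res2, self.res3 = BasicBlock(64, 64), BasicBlock(32, 32), BasicBlock(16, 16)
        self.res4, self.res5, self.res6 = BasicBlock(64, 64), BasicBlock(32, 32), BasicBlock(16, 16)
        # "Conv 1x1" projections adjust channels before each summation; they are
        # shared across fusion points here purely to keep the sketch short.
        proj = lambda cin, cout: nn.Sequential(nn.Conv2d(cin, cout, 1), nn.ReLU(inplace=True))
        self.p16_32, self.p32_16 = proj(16, 32), proj(32, 16)
        self.p32_64, self.p16_64 = proj(32, 64), proj(16, 64)
        self.p64_32, self.p64_16 = proj(64, 32), proj(64, 16)
        self.conv6 = conv_bn_relu(16, 16)         # sixth convolution layer
        self.fc = nn.Linear(16, num_nodes * 2)    # fully connected layer -> (x, y) per feature node

    def forward(self, x):
        c2 = self.conv2(self.conv1(x))
        c3, c4 = self.conv3(c2), self.conv4(c2)                  # third and fourth convolution results
        c5 = self.conv5(c3)                                      # fifth convolution result
        f1 = F.relu(c3 + self.p16_32(resize(c4, c3)))            # first fusion result
        f2 = F.relu(c4 + self.p32_16(resize(c3, c4)))            # second fusion result
        r1, r2, r3 = self.res1(c5), self.res2(f1), self.res3(f2)
        f3 = F.relu(r1 + self.p32_64(resize(r2, r1)) + self.p16_64(resize(r3, r1)))  # third fusion
        f4 = F.relu(r2 + self.p64_32(resize(r1, r2)) + self.p16_32(resize(r3, r2)))  # fourth fusion
        f5 = F.relu(r3 + self.p64_16(resize(r1, r3)) + self.p32_16(resize(r2, r3)))  # fifth fusion
        r4, r5, r6 = self.res4(f3), self.res5(f4), self.res6(f5)
        f6 = F.relu(r6 + self.p64_16(resize(r4, r6)) + self.p32_16(resize(r5, r6)))  # sixth fusion
        h = self.conv6(f6).mean(dim=(2, 3))                      # global average pooling (assumption)
        return self.fc(h).view(-1, self.num_nodes, 2)


net = PressureKeypointNet()
print(net(torch.randn(2, 1, 64, 32)).shape)   # torch.Size([2, 13, 2])
```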
According to the embodiment of the invention, feature node extraction is performed on the target object by using the target detection model to obtain a plurality of pieces of feature node information of the target object, which improves the accuracy of user identity recognition.
The user identification device provided by the invention is described below; the user identification device described below and the user identification method described above can be referred to in correspondence with each other.
Fig. 10 is a schematic structural diagram of a user identity recognition device provided by the present invention, and as shown in fig. 10, a user identity recognition device according to an embodiment of the present invention includes:
an intelligent mattress 61, wherein a plurality of pressure sensors are arranged on a sleeping area of the intelligent mattress, and the pressure sensors are used for collecting pressure map data of a target object on the sleeping area;
a management platform 62, which is in communication connection with the intelligent mattress 61 and is used for receiving pressure map data sent by the intelligent mattress 61;
the management platform 62 is further configured to perform gesture recognition on the target object based on the pressure map data, so as to obtain a sleep gesture recognition result; if the sleep gesture recognition result accords with preset sleep gesture information, extract feature nodes of the target object based on the pressure map data to obtain a plurality of pieces of feature node information; and determine the user identity of the target object based on each piece of feature node information and the feature array of each prestored user; before the gesture recognition is performed on the target object based on the pressure map data to obtain the sleep gesture recognition result, the management platform 62 is further configured to:
determining pressure data difference values of all position points on the intelligent mattress in any adjacent time based on the pressure map data of the current time and the pressure map data of the historical time period;
determining an absolute value superposition result and a maximum absolute value of any adjacent time based on the absolute value corresponding to the pressure data difference value of each position point in any adjacent time;
Judging whether the pressure map data of the current time is in a stable state or not based on the absolute value superposition result and the maximum absolute value of any adjacent time;
And if the target object is in a stable state, executing the step of carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result.
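A small NumPy sketch of this stability check follows: absolute pressure differences are computed per position for each pair of adjacent times, their sum (the absolute-value superposition result) and their maximum are taken, and both are compared against thresholds. The threshold values and the rule that both quantities must stay below their thresholds are assumptions, since the text does not give the exact decision criterion.

```python
import numpy as np

def is_stable(frames, sum_threshold=50.0, max_threshold=5.0):
    """Decide whether the latest pressure map is in a stable state.

    frames: array of shape (T, H, W) holding the pressure maps of the
    historical time period plus the current time, in time order.
    The two thresholds are illustrative values, not taken from the patent.
    """
    diffs = np.abs(np.diff(frames, axis=0))   # |difference| per position, per adjacent pair
    sums = diffs.sum(axis=(1, 2))             # absolute-value superposition result per pair
    maxima = diffs.max(axis=(1, 2))           # maximum absolute value per pair
    return bool(np.all(sums < sum_threshold) and np.all(maxima < max_threshold))

# Example: five consecutive 32x16 pressure maps containing only sensor noise
frames = np.random.rand(5, 32, 16) * 0.01
print(is_stable(frames))   # True for near-constant pressure maps
```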
It should be noted that the above device provided in the embodiment of the present invention can implement all the method steps implemented in the method embodiment and achieve the same technical effects; detailed descriptions of the parts and beneficial effects that are the same as those of the method embodiment are omitted here.
Fig. 11 is a schematic structural diagram of an electronic device according to the present invention, and as shown in fig. 11, the electronic device may include: processor 310, memory 320, communication interface (Communications Interface) 330, and communication bus 340, wherein processor 310, memory 320, and communication interface 330 communicate with each other via communication bus 340. Processor 310 may invoke logic instructions in memory 320 to perform the user identification method.
Further, the logic instructions in the memory 320 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, in essence, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the user identification method provided by the above methods.
In another aspect, the present invention also provides a computer program product, the computer program product comprising a computer program, the computer program being storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, being capable of performing the user identification method provided by the methods described above.
The apparatus embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solution, in essence, or the part thereof contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (7)
1. A method for identifying a user, comprising:
Acquiring pressure map data of a target object at the current time;
based on the pressure map data, carrying out gesture recognition on the target object to obtain a sleeping gesture recognition result;
if the sleeping gesture recognition result accords with preset sleeping gesture information, extracting feature nodes of the target object based on the pressure map data to obtain a plurality of feature node information;
determining the user identity of the target object based on the characteristic node information and the characteristic array of each prestored user;
wherein before the carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result, the method further comprises:
Determining pressure data differences of all position points on the intelligent mattress in any adjacent time based on the pressure map data of the current time and the pressure map data of the historical time period;
determining an absolute value superposition result and a maximum absolute value of any adjacent time based on the absolute value corresponding to the pressure data difference value of each position point in any adjacent time;
Judging whether the pressure map data of the current time is in a stable state or not based on the absolute value superposition result and the maximum absolute value of any adjacent time;
If the target object is in a stable state, executing the step of carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result;
if the sleep gesture recognition result accords with preset sleep gesture information, extracting feature nodes of the target object based on the pressure map data to obtain a plurality of feature node information, wherein the method comprises the following steps:
If the sleeping gesture recognition result accords with preset sleeping gesture information, inputting the pressure map data into a pre-constructed target detection model, and extracting features of the target object by using the target detection model to obtain a plurality of feature node information output by the target detection model;
the target detection model is obtained by training based on a plurality of pieces of historical pressure map data collected in advance and the position information of the feature node corresponding to each piece of the historical pressure map data;
The step of inputting the pressure map data to a pre-constructed target detection model to perform feature extraction on the target object by using the target detection model to obtain a plurality of feature node information output by the target detection model, including:
carrying out convolution processing on the pressure map data by using a first convolution layer to obtain a first convolution processing result;
performing convolution processing on the first convolution processing result by using a second convolution layer to obtain a second convolution processing result;
respectively carrying out convolution processing on the second convolution processing result by using a third convolution layer and a fourth convolution layer to obtain a third convolution processing result and a fourth convolution processing result;
Carrying out convolution processing on the third convolution processing result by using a fifth convolution layer to obtain a fifth convolution processing result;
downsampling the fourth convolution processing result to fuse the downsampling result with the third convolution processing result to obtain a first fusion result;
upsampling the third convolution processing result to fuse the upsampling result with the fourth convolution processing result to obtain a second fusion result;
Residual processing is carried out on the fifth convolution processing result by using a first residual block, so that a first residual processing result is obtained;
Residual processing is carried out on the first fusion result by using a second residual block, so that a second residual processing result is obtained;
residual processing is carried out on the second fusion result by using a third residual block, and a third residual processing result is obtained;
Respectively carrying out downsampling processing on the second residual processing result and the third residual processing result, so as to carry out fusion processing on the first residual processing result and the downsampled second and third residual processing results to obtain a third fusion result;
Performing up-sampling processing on the first residual processing result and performing down-sampling processing on the third residual processing result so as to perform fusion processing on the second residual processing result, the up-sampled first residual processing result and the down-sampled third residual processing result to obtain a fourth fusion result;
Respectively carrying out up-sampling processing on the first residual processing result and the second residual processing result, so as to carry out fusion processing on the third residual processing result and the up-sampled first and second residual processing results to obtain a fifth fusion result;
Residual processing is carried out on the third fusion result by using a fourth residual block, so that a fourth residual processing result is obtained;
Residual processing is carried out on the fourth fusion result by utilizing a fifth residual block, and a fifth residual processing result is obtained;
residual processing is carried out on the fifth fusion result by utilizing a sixth residual block, and a sixth residual processing result is obtained;
Performing fusion processing on the fourth residual processing result, the fifth residual processing result and the sixth residual processing result to obtain a sixth fusion result;
carrying out convolution processing on the sixth fusion result by using a sixth convolution layer to obtain a sixth convolution processing result;
And performing full connection processing on the sixth convolution processing result by using a full connection layer to obtain the plurality of characteristic node information.
2. The method for identifying a user according to claim 1, wherein said determining the user identity of the target object based on each of the characteristic node information and the characteristic array of each of the prestored users comprises:
Determining the distance between the characteristic nodes based on the characteristic node information;
Forming a target feature array of the target object based on the distances between the feature nodes;
Respectively calculating the matching coefficients between the target feature array and the feature arrays of all the prestored users;
and determining the user identity of the target object based on each matching coefficient.
3. The method for identifying a user according to claim 2, wherein the calculating the matching coefficients between the target feature array and the feature arrays of each of the prestored users respectively includes:
for any prestored user's feature array:
calculating to obtain a first characteristic mean value based on the target characteristic array;
Calculating a second characteristic mean value based on the characteristic array of the prestored user;
Calculating a correlation coefficient between the target feature array and the feature array of the pre-stored user based on the target feature array, the feature array of the pre-stored user, the first feature average value and the second feature average value;
Determining the difference value between each distance in the target feature array and each distance in the feature array of the pre-stored user;
And calculating a matching coefficient between the target object and the pre-stored user based on the correlation coefficient and each difference value.
4. The method of claim 2, wherein said determining the user identity of the target object based on each of the matching coefficients comprises:
selecting a target matching coefficient with the largest numerical value from the matching coefficients;
comparing the target matching coefficient with a preset coefficient threshold;
And if the target matching coefficient is larger than the preset coefficient threshold, determining that the user identity of the target object is a prestored user corresponding to the target matching coefficient.
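To make claims 2 to 4 concrete, a small NumPy sketch under stated assumptions is shown below: the distances between feature nodes form the target feature array; the correlation coefficient computed from the two arrays and their means is taken to be the Pearson form; and the matching coefficient is assumed to down-weight the correlation by the normalized distance differences. The exact combination formula and the coefficient threshold are not given in the claims, so those parts are illustrative only.

```python
import numpy as np

def matching_coefficient(target, stored):
    """Claim-3-style score between a target feature array and one stored array.

    target, stored: 1-D arrays of inter-node distances (same length and order).
    The Pearson-style correlation and the difference penalty below are assumptions.
    """
    m1, m2 = target.mean(), stored.mean()                      # first / second feature mean values
    corr = np.sum((target - m1) * (stored - m2)) / (
        np.sqrt(np.sum((target - m1) ** 2) * np.sum((stored - m2) ** 2)) + 1e-9)
    diffs = np.abs(target - stored)                            # difference per distance entry
    penalty = diffs.mean() / (stored.mean() + 1e-9)            # assumed normalization of the differences
    return corr * (1.0 - min(penalty, 1.0))

def identify(target, stored_users, threshold=0.9):
    """Claim-4-style decision: pick the largest coefficient and compare it to a threshold."""
    scores = {name: matching_coefficient(target, arr) for name, arr in stored_users.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else None          # None -> unknown user

# Illustrative usage with two pre-stored users
stored = {"user_a": np.array([42.0, 55.0, 30.0, 61.0]),
          "user_b": np.array([38.0, 60.0, 27.0, 58.0])}
print(identify(np.array([41.5, 54.0, 30.5, 62.0]), stored))    # likely "user_a"
```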
5. The method for identifying a user according to claim 1, wherein the performing gesture recognition on the target object based on the pressure map data to obtain a sleep gesture recognition result includes:
performing Gaussian filtering processing on the pressure map data to obtain target pressure map data;
Performing edge extraction on the target pressure map data to obtain edge information of the target object;
And classifying sleeping postures of the edge information to obtain a sleeping posture recognition result.
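As a rough illustration of the preprocessing chain in claim 5, the sketch below applies Gaussian filtering to the pressure map and extracts edges before handing the result to a classifier. OpenCV's GaussianBlur/Canny, the kernel size, the Canny thresholds, and the placeholder classifier interface are assumptions; the patent does not specify which filter parameters or classification model are used.

```python
import numpy as np
import cv2

def recognize_sleep_posture(pressure_map, classifier):
    """Claim-5-style pipeline: Gaussian filtering -> edge extraction -> posture classification.

    pressure_map: 2-D float array of sensor readings.
    classifier:   any callable mapping an edge image to a posture label (assumed interface).
    """
    # Scale the raw pressures to 8-bit so OpenCV's filters can be applied
    img = cv2.normalize(pressure_map.astype(np.float32), None, 0, 255,
                        cv2.NORM_MINMAX).astype(np.uint8)
    smoothed = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)   # kernel size and sigma are illustrative
    edges = cv2.Canny(smoothed, 30, 90)                    # edge information of the target object
    return classifier(edges)

# Illustrative usage with a dummy classifier that only counts edge pixels
dummy = lambda e: "supine" if e.sum() > 0 else "unknown"
print(recognize_sleep_posture(np.random.rand(32, 16), dummy))
```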
6. The user identification method according to claim 1, wherein the feature node information includes head position information, right scapula position information, left scapula position information, right elbow joint position information, left elbow joint position information, right palm position information, left palm position information, right hip position information, left hip position information, right leg knee position information, left leg knee position information, right foot position information, and left foot position information.
7. A user identification device, comprising:
the intelligent mattress comprises an intelligent mattress body, wherein a sleep area of the intelligent mattress body is provided with a plurality of pressure sensors, and the pressure sensors are used for acquiring pressure map data of a target object in the sleep area;
The management platform is in communication connection with the intelligent mattress and is used for receiving pressure map data sent by the intelligent mattress;
The management platform is further used for carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result; if the sleeping gesture recognition result accords with preset sleeping gesture information, extracting feature nodes of the target object based on the pressure map data to obtain a plurality of feature node information; determining the user identity of the target object based on the characteristic node information and the characteristic array of each prestored user; wherein before the carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result, the management platform is further configured for:
Determining pressure data differences of all position points on the intelligent mattress in any adjacent time based on the pressure map data of the current time and the pressure map data of the historical time period;
determining an absolute value superposition result and a maximum absolute value of any adjacent time based on the absolute value corresponding to the pressure data difference value of each position point in any adjacent time;
Judging whether the pressure map data of the current time is in a stable state or not based on the absolute value superposition result and the maximum absolute value of any adjacent time;
If the target object is in a stable state, executing the step of carrying out gesture recognition on the target object based on the pressure map data to obtain a sleeping gesture recognition result;
if the sleep gesture recognition result accords with preset sleep gesture information, extracting feature nodes of the target object based on the pressure map data to obtain a plurality of feature node information, wherein the method comprises the following steps:
If the sleeping gesture recognition result accords with preset sleeping gesture information, inputting the pressure map data into a pre-constructed target detection model, and extracting features of the target object by using the target detection model to obtain a plurality of feature node information output by the target detection model;
the target detection model is obtained by training based on a plurality of pieces of historical pressure map data collected in advance and the position information of the feature node corresponding to each piece of the historical pressure map data;
The step of inputting the pressure map data to a pre-constructed target detection model to perform feature extraction on the target object by using the target detection model to obtain a plurality of feature node information output by the target detection model, including:
carrying out convolution processing on the pressure map data by using a first convolution layer to obtain a first convolution processing result;
performing convolution processing on the first convolution processing result by using a second convolution layer to obtain a second convolution processing result;
respectively carrying out convolution processing on the second convolution processing result by using a third convolution layer and a fourth convolution layer to obtain a third convolution processing result and a fourth convolution processing result;
Carrying out convolution processing on the third convolution processing result by using a fifth convolution layer to obtain a fifth convolution processing result;
downsampling the fourth convolution processing result to fuse the downsampling result with the third convolution processing result to obtain a first fusion result;
upsampling the third convolution processing result to fuse the upsampling result with the fourth convolution processing result to obtain a second fusion result;
Residual processing is carried out on the fifth convolution processing result by using a first residual block, so that a first residual processing result is obtained;
Residual processing is carried out on the first fusion result by using a second residual block, so that a second residual processing result is obtained;
residual processing is carried out on the second fusion result by using a third residual block, and a third residual processing result is obtained;
Respectively carrying out downsampling processing on the second residual processing result and the third residual processing result, so as to carry out fusion processing on the first residual processing result and the downsampled second and third residual processing results to obtain a third fusion result;
Performing up-sampling processing on the first residual processing result and performing down-sampling processing on the third residual processing result so as to perform fusion processing on the second residual processing result, the up-sampled first residual processing result and the down-sampled third residual processing result to obtain a fourth fusion result;
Respectively carrying out up-sampling processing on the first residual processing result and the second residual processing result, so as to carry out fusion processing on the third residual processing result and the up-sampled first and second residual processing results to obtain a fifth fusion result;
Residual processing is carried out on the third fusion result by using a fourth residual block, so that a fourth residual processing result is obtained;
Residual processing is carried out on the fourth fusion result by utilizing a fifth residual block, and a fifth residual processing result is obtained;
residual processing is carried out on the fifth fusion result by utilizing a sixth residual block, and a sixth residual processing result is obtained;
Performing fusion processing on the fourth residual processing result, the fifth residual processing result and the sixth residual processing result to obtain a sixth fusion result;
carrying out convolution processing on the sixth fusion result by using a sixth convolution layer to obtain a sixth convolution processing result;
And performing full connection processing on the sixth convolution processing result by using a full connection layer to obtain the plurality of characteristic node information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410139924.3A CN117671739B (en) | 2024-02-01 | 2024-02-01 | User identity recognition method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117671739A CN117671739A (en) | 2024-03-08 |
CN117671739B true CN117671739B (en) | 2024-05-07 |
Family
ID=90086595
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410139924.3A Active CN117671739B (en) | 2024-02-01 | 2024-02-01 | User identity recognition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117671739B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118464170B (en) * | 2024-07-15 | 2024-09-24 | 爱梦睡眠(珠海)智能科技有限公司 | Weight measurement method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114359975A (en) * | 2022-03-16 | 2022-04-15 | 慕思健康睡眠股份有限公司 | Gesture recognition method, device and system of intelligent cushion |
CN115062657A (en) * | 2022-06-09 | 2022-09-16 | 绍兴市众信安医疗器械科技有限公司 | Pressure sensor data identification method, mattress control method and related device |
CN115203658A (en) * | 2022-06-17 | 2022-10-18 | 平安银行股份有限公司 | Identity recognition method and device, storage medium and electronic equipment |
CN116563887A (en) * | 2023-04-21 | 2023-08-08 | 华北理工大学 | Sleeping posture monitoring method based on lightweight convolutional neural network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110059661B (en) * | 2019-04-26 | 2022-11-22 | 腾讯科技(深圳)有限公司 | Action recognition method, man-machine interaction method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |