CN112102946B - Sleep quality detection method and device based on deep learning - Google Patents

Sleep quality detection method and device based on deep learning

Info

Publication number
CN112102946B
CN112102946B (application number CN202011274897.9A)
Authority
CN
China
Prior art keywords
image
infrared
human body
features
fusion
Prior art date
Legal status
Active
Application number
CN202011274897.9A
Other languages
Chinese (zh)
Other versions
CN112102946A (en)
Inventor
李宇欣
Current Assignee
Health Hope (Beijing) Technology Co., Ltd.
Original Assignee
Health Hope (Beijing) Technology Co., Ltd.
Application filed by Health Hope (Beijing) Technology Co., Ltd.
Publication of CN112102946A (application)
Application granted
Publication of CN112102946B (grant)
Legal status: Active


Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/48: Other medical applications
    • A61B 5/4806: Sleep evaluation
    • A61B 5/4815: Sleep quality


Abstract

The invention relates to a sleep quality detection method and a device based on deep learning, wherein the method comprises the following steps: acquiring at least one group of visible light images and infrared images of the sleeping state of a human body to be detected; for each set of visible light image and infrared image, performing: performing feature extraction on the visible light image by using a first neural network obtained by training historical data of the visible light image to obtain image features, wherein the image features are used for representing the state of a key part of a human body to be detected, and the key part at least comprises a joint; performing feature extraction on the infrared image by using a second neural network obtained by training historical data of the infrared image to obtain infrared features; performing feature fusion on the image features and the infrared features to obtain fusion features of the current image; and determining the sleep quality of the human body to be detected according to the obtained fusion characteristics of each group of images. The scheme can improve the detection precision of the sleep quality.

Description

Sleep quality detection method and device based on deep learning
Technical Field
The invention relates to the technical field of computers, in particular to a sleep quality detection method and device based on deep learning.
Background
Sleep relieves fatigue of the body and the brain, promotes intellectual and physical development, strengthens the immune system and delays aging. The quality of sleep therefore plays an important role in human health.
At present, sleep quality detection mainly collects signals such as heart rate, respiration and body movement through various sensors and analyzes them to evaluate sleep quality. However, such methods usually require the sensors to be in contact with the human body to obtain the measurement signals, and this contact-based detection can itself disturb sleep, resulting in low accuracy when evaluating the sleep quality of the human body.
Therefore, it is necessary to provide a sleep quality detection method based on deep learning to solve the above-mentioned problem of low detection accuracy in the prior art.
Disclosure of Invention
The invention aims to solve the technical problem that contact-based sleep quality detection interferes with a person's sleep and therefore yields low detection accuracy. In view of this technical problem, the present invention provides a sleep quality detection method based on deep learning, which includes:
acquiring at least one group of visible light images and infrared images of a human body to be detected in a sleep state, wherein each group of the visible light images and the infrared images are acquired at the same time;
for each set of the visible light image and the infrared image, performing:
performing feature extraction on the visible light image by using a first neural network obtained by training historical data of the visible light image to obtain image features, wherein the image features are used for representing the state of a key part of the human body to be detected, and the key part at least comprises a joint;
performing feature extraction on the infrared image by using a second neural network obtained by training historical data of the infrared image to obtain infrared features, wherein the infrared features are used for representing the temperature distribution of key parts of the human body to be detected;
performing feature fusion on the image features and the infrared features to obtain fusion features of the current image;
and determining the sleep quality of the human body to be detected according to the obtained fusion characteristics of each group of images.
Optionally, the performing feature fusion on the image feature and the infrared feature includes:
determining a fusion dimension according to the image features and the infrared features; wherein the fusion dimension is greater than or equal to the largest dimension among the matrix corresponding to the image features and the matrix corresponding to the infrared features;
aligning the matrix corresponding to the image features and the matrix corresponding to the infrared features according to the fusion dimension;
and performing fusion calculation on the matrix corresponding to the aligned image features and the matrix corresponding to the infrared features.
Optionally, the acquiring at least one group of visible light images and infrared images of the sleep state of the human body to be detected includes:
acquiring historical data of a sleep state and a first label, wherein the first label is used for defining the sleep state and a non-sleep state;
performing model training by using the historical data of the sleep state and the first label to obtain a sleep state judgment model;
collecting at least one group of visible light images and infrared images in real time;
and inputting each group of visible light images and infrared images into the sleep state judgment model to obtain at least one group of visible light images and infrared images of the human body to be detected in the sleep state.
Optionally, before the acquiring of the at least one group of visible light images and infrared images of the sleep state of the human body to be detected, the method further comprises:
acquiring sleeping posture recognition model training data and a second label, wherein the second label is used for defining a mapping relation between pixel points corresponding to key parts of a human body in an image and the sleeping posture of the human body;
training by using the sleeping posture recognition model training data and a second label to obtain a sleeping posture recognition model;
acquiring quality evaluation model training data and a third label, wherein the third label is used for defining the mapping relation between the sleeping posture of the human body and the sleeping quality;
training by using the quality evaluation model training data and the third label to obtain a quality evaluation model;
the identifying each group of the fusion features and determining the sleep quality of the human body to be detected comprises the following steps:
recognizing the sleeping postures of each group of the fusion characteristics by using the sleeping posture recognition model;
and evaluating each sleeping posture obtained by the sleeping posture identification model by using the quality evaluation model, and determining the sleeping quality of the human body to be detected.
Optionally, the evaluating each sleep posture obtained by the sleep posture identifying model by using the quality evaluating model includes:
scoring the sleep quality of the human body to be measured according to a sleep quality scoring formula, wherein the sleep quality scoring formula is as follows:
[sleep quality scoring formula, reproduced only as an image in the original publication]
wherein P represents the sleep quality score of the human body to be detected in the time period corresponding to the visible light images and the infrared images; k_i represents the weight of sleep quality corresponding to the i-th sleeping posture; A_i represents the parameter value of the i-th sleeping posture identified by the sleeping posture recognition model; B_j represents the parameter value of the j-th sleeping posture in the quality evaluation model; m represents the number of sleeping postures contained in the quality evaluation model; and n represents the number of sleeping postures identified by the sleeping posture recognition model.
The embodiment of the invention also provides a sleep quality detection device based on deep learning, which comprises: the device comprises an acquisition module, an execution module and a determination module;
the acquisition module is used for acquiring at least one group of visible light images and infrared images of the sleeping state of the human body to be detected, wherein each group of the visible light images and the infrared images are acquired at the same time;
the execution module is configured to execute the following operations for each set of the visible light image and the infrared image acquired by the acquisition module:
performing feature extraction on the visible light image by using a first neural network obtained by training historical data of the visible light image to obtain image features, wherein the image features are used for representing the state of a key part of the human body to be detected, and the key part at least comprises a joint;
performing feature extraction on the infrared image by using a second neural network obtained by training historical data of the infrared image to obtain infrared features, wherein the infrared features are used for representing the temperature distribution of key parts of the human body to be detected;
performing feature fusion on the image features and the infrared features to obtain fusion features of the current image;
the determining module is used for determining the sleep quality of the human body to be detected according to the fusion characteristics of each group of images obtained by the executing module.
Optionally, the execution module is further configured to perform the following operations:
determining a fusion dimension according to the image features and the infrared features; wherein the fusion dimension is greater than or equal to the largest dimension among the matrix corresponding to the image features and the matrix corresponding to the infrared features;
aligning the matrix corresponding to the image features and the matrix corresponding to the infrared features according to the fusion dimension;
and performing fusion calculation on the matrix corresponding to the aligned image features and the matrix corresponding to the infrared features.
The embodiment of the invention also provides a sleep quality evaluation system based on artificial intelligence, which comprises: the system comprises an image acquisition device and a computer running an artificial intelligence program;
the image acquisition device comprises a camera, a far infrared heat sensor array, a controller and a memory; the controller controls the camera, the far infrared heat sensor array and the memory; the memory is used for storing image data shot by the camera and far infrared data obtained by the far infrared thermal sensor array;
the computer running the artificial intelligence program comprises a USB interface.
Optionally, the working process of the sleep quality evaluation system based on artificial intelligence is as follows:
s1: before sleeping, a user presses a power switch of the image acquisition device and starts the image acquisition device;
s2: the image acquisition device synchronously records and stores the image and the far infrared heat sensor array data in real time;
s3: after waking up, the user connects a USB flash drive or other storage medium to the USB interface of the image acquisition device, reads the data recorded and stored in the image acquisition device, and imports the data into the computer running the artificial intelligence program through that computer's USB interface;
s4: the computer running the artificial intelligence program fuses the data of the far infrared thermal sensor array into the image, and the posture of the user when the user is asleep is fitted by analyzing the distribution condition of the temperature in the view field of the image acquisition device, so that the complete process of the sleep posture of the user is acquired;
s5: and operating an artificial intelligence program according to the sleeping posture of the user, measuring the sleeping quality of the user according to the information of turning over, posture and activity frequency of the user during sleeping, and giving a score of the sleeping quality.
Optionally, the artificial intelligence algorithm running in the computer is based on the OpenPose framework; by learning the turning-over, posture and activity-frequency information of a person during sleep, it recognizes the person's sleeping posture and thereby evaluates the person's sleep quality.
The sleep quality detection method and the device based on deep learning at least have the following beneficial effects:
when the sleep quality of the human body to be detected is detected and evaluated, the scheme adopts the image recognition to the sleep image of the human body to be detected. Specifically, firstly, a visible light image and an infrared image of a human body to be detected in a sleep state are acquired, then the light image and the infrared image are respectively subjected to feature extraction, and further the extracted image features and the infrared features are fused to obtain fusion features, so that the sleep quality of the human body to be detected is evaluated by identifying the fusion features. Because the scheme adopts the mode of acquiring the image of the human body to be detected in the sleeping state, the whole detection process is prevented from contacting the human body to be detected, and the problem of low detection precision caused by the contact between the detection process and the human body to be detected is solved. In addition, the image characteristics and the infrared characteristics are subjected to characteristic fusion, the sleep quality of the human body to be detected is determined by identifying the fusion characteristics, and the image characteristics and the infrared characteristics are identified by utilizing the multi-source information, so that the accuracy of image identification is improved, and the accuracy of sleep quality detection is improved.
Drawings
Fig. 1 is a flowchart of a sleep quality detection method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another sleep quality detection method provided by an embodiment of the present invention;
fig. 3 is a schematic diagram of an apparatus in which a sleep quality detection device according to an embodiment of the present invention is provided;
fig. 4 is a schematic diagram of a sleep quality detection apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a sleep quality detection method based on deep learning, which may include the following steps:
step 101: acquiring at least one group of visible light images and infrared images of the sleeping state of a human body to be detected, wherein each group of visible light images and infrared images are images acquired at the same moment;
step 102: for each set of visible light image and infrared image, performing:
performing feature extraction on the visible light image by using a first neural network obtained by training historical data of the visible light image to obtain image features, wherein the image features are used for representing the state of a key part of a human body to be detected, and the key part at least comprises a joint;
performing feature extraction on the infrared image by using a second neural network obtained by training historical data of the infrared image to obtain infrared features, wherein the infrared features are used for representing the temperature distribution of key parts of a human body to be detected;
performing feature fusion on the image features and the infrared features to obtain fusion features of the current image;
step 103: and determining the sleep quality of the human body to be detected according to the obtained fusion characteristics of each group of images.
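As a concrete illustration of this flow, the following Python sketch strings steps 101 to 103 together. The function and parameter names (first_net, second_net, fuse, classify) are placeholders introduced here for illustration only; the patent does not name or specify the networks, it only defines the data flow.

# Minimal sketch of the per-frame processing loop in steps 101-103.
# All callables are hypothetical stand-ins for the trained models.
from typing import Callable, List, Tuple
import numpy as np

def assess_sleep_quality(
    frame_pairs: List[Tuple[np.ndarray, np.ndarray]],   # (visible, infrared) captured at the same time
    first_net: Callable[[np.ndarray], np.ndarray],       # visible-light feature extractor (key parts, joints)
    second_net: Callable[[np.ndarray], np.ndarray],      # infrared feature extractor (temperature distribution)
    fuse: Callable[[np.ndarray, np.ndarray], np.ndarray],
    classify: Callable[[List[np.ndarray]], float],       # fused features of all frames -> sleep quality
) -> float:
    fused_features = []
    for visible, infrared in frame_pairs:
        img_feat = first_net(visible)                    # step 102: image features
        ir_feat = second_net(infrared)                   # step 102: infrared features
        fused_features.append(fuse(img_feat, ir_feat))   # step 102: feature fusion per image pair
    return classify(fused_features)                      # step 103: sleep quality from all fused features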
In the embodiment of the invention, when detecting and evaluating the sleep quality of the human body to be detected, the scheme applies image recognition to sleep images of the human body. Specifically, visible light images and infrared images of the human body in the sleep state are first acquired; features are then extracted from the visible light images and the infrared images separately; and the extracted image features and infrared features are fused to obtain fusion features, so that the sleep quality is evaluated by recognizing the fusion features. Because the scheme works only on images of the human body acquired during sleep, the whole detection process avoids contact with the human body, which eliminates the loss of detection accuracy caused by contact. In addition, the image features and the infrared features are fused and the sleep quality is determined by recognizing the fusion features; this use of multi-source information improves the accuracy of image recognition and therefore the accuracy of sleep quality detection.
Optionally, because the matrix corresponding to the image features and the matrix corresponding to the infrared features may have different dimensions, fusion calculation cannot be performed on them directly, so the dimensions of the two matrices need to be aligned before the fusion calculation. Specifically, the method may include the following steps:
determining a fusion dimension according to the image features and the infrared features; the fusion dimension is greater than or equal to the largest dimension among the matrix corresponding to the image features and the matrix corresponding to the infrared features;
aligning the matrix corresponding to the image characteristic and the matrix corresponding to the infrared characteristic according to the fusion dimension;
and performing fusion calculation on the matrix corresponding to the aligned image features and the matrix corresponding to the infrared features.
In the embodiment of the invention, because the matrices corresponding to the image features and the infrared features have different dimensions, they cannot be fused directly. Aligning both matrices to the same fusion dimension ensures that the matrix operations can be carried out, which in turn guarantees that the fusion features can be accurately recognized to obtain the sleeping posture of the human body to be detected.
Optionally, the images acquired by the image acquisition device may contain image data in which the human body to be detected is not in a sleep state; if such data were used for sleep quality analysis, a relatively large error would inevitably appear in the sleep quality result. Therefore, the collected image data needs to be screened to filter out the image data captured when the human body to be detected is not in a sleep state. Specifically, this can be realized by the following steps:
acquiring historical data of a sleep state and a first label, wherein the first label is used for defining the sleep state and a non-sleep state;
performing model training by using the historical data of the sleep state and the first label to obtain a sleep state judgment model;
collecting at least one group of visible light images and infrared images in real time;
and inputting each group of visible light images and infrared images into the sleep state judgment model to obtain at least one group of visible light images and infrared images of the human body to be detected in the sleep state.
In the embodiment of the invention, the sleep state and the non-sleep state are first defined according to whether the person is asleep, and a sleep state judgment model is then trained from the defined labels and the historical data. The sleep state judgment model is used to filter out, from the collected image data, the images in which the human body to be detected is not asleep. This ensures that only images of the human body in the sleep state are used when judging its sleep quality, thereby improving the detection accuracy of the sleep quality.
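A minimal sketch of this screening step is given below, assuming the trained sleep state judgment model is available as a callable that returns a sleep probability for each image pair; the function name and the threshold value are assumptions introduced here, since the patent does not state them.

from typing import Callable, List, Tuple
import numpy as np

FramePair = Tuple[np.ndarray, np.ndarray]   # (visible light image, infrared image)

def keep_sleep_frames(
    frame_pairs: List[FramePair],
    sleep_state_model: Callable[[np.ndarray, np.ndarray], float],  # returns P(sleep) for one pair
    threshold: float = 0.5,                                        # assumed decision threshold
) -> List[FramePair]:
    # Pairs judged to be in the non-sleep state (reading, using a phone, ...) are discarded,
    # so only sleep-state images reach the posture and quality models.
    return [(vis, ir) for vis, ir in frame_pairs if sleep_state_model(vis, ir) >= threshold]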
Optionally, when the sleep quality of the human body to be detected is determined through the fusion features, the fusion features need to be recognized through a sleep posture recognition model to determine the sleep posture, and then the sleep posture is evaluated through a quality evaluation model to determine the sleep quality of the human body to be detected. Specifically, the following steps may be included:
acquiring sleeping posture recognition model training data and a second label, wherein the second label is used for defining a mapping relation between pixel points corresponding to key parts of a human body in an image and the sleeping posture of the human body;
training by using the sleeping posture recognition model training data and the second label to obtain a sleeping posture recognition model;
acquiring quality evaluation model training data and a third label, wherein the third label is used for defining the mapping relation between the sleeping posture of the human body and the sleeping quality;
training by using the quality evaluation model training data and the third label to obtain a quality evaluation model;
the recognizing of each group of fusion features to determine the sleep quality of the human body to be detected comprises the following steps:
recognizing the sleeping postures of each group of fusion characteristics by using a sleeping posture recognition model;
and evaluating each sleeping posture obtained by the sleeping posture identification model by using the quality evaluation model, and determining the sleeping quality of the human body to be detected.
In the embodiment of the invention, a sleeping posture recognition model is first obtained by training on the historical sleeping posture data and the second label, which defines the mapping relation between the pixel points corresponding to the key parts of the human body in an image and the human body's sleeping posture. A quality evaluation model is then obtained by training on the historical quality evaluation data and the third label, which characterizes the mapping relation between the human body's sleeping posture and the sleep quality. The sleeping posture recognition model can therefore be used to recognize the fusion features and obtain the sleeping posture states of the human body to be detected, and the quality evaluation model is then used on these posture states to determine the sleep quality. Because a large amount of historical data and labels are used for model training, recognizing and detecting the sleeping posture and sleep quality with these models improves the efficiency of sleeping posture recognition and sleep quality detection, and ensures the accuracy of the sleep quality detection for the human body to be detected.
Optionally, when each sleeping posture obtained by the sleeping posture identification model is evaluated by using the quality evaluation model, the sleeping quality score can be obtained according to the following sleeping quality scoring formula, so as to judge the sleeping quality of the human body to be detected. The sleep quality score formula is as follows:
[sleep quality scoring formula, reproduced only as an image in the original publication]
wherein P represents the sleep quality score of the human body to be detected in the time period corresponding to the visible light images and the infrared images; k_i represents the weight of sleep quality corresponding to the i-th sleeping posture; A_i represents the parameter value of the i-th sleeping posture identified by the sleeping posture recognition model; B_j represents the parameter value of the j-th sleeping posture in the quality evaluation model; m represents the number of sleeping postures contained in the quality evaluation model; and n represents the number of sleeping postures identified by the sleeping posture recognition model.
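The scoring formula itself appears only as an image in the published text. Purely as an illustration of how the variables defined above could combine, one plausible reading (an assumption, not a reproduction of the patented formula) is a weighted sum over the n recognized postures normalized by the parameters of the m postures in the quality evaluation model:

P = \frac{\sum_{i=1}^{n} k_i A_i}{\sum_{j=1}^{m} B_j}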
In the embodiment of the invention, the sleep quality score of a human body to be detected in a certain time period can be accurately obtained by using a sleep quality scoring formula for image data acquired by the human body to be detected in the certain time period, and then the sleep quality of the human body to be detected in the certain time period is determined according to a rule defined between the preset sleep quality score and the sleep quality, so that the aim of accurately detecting the sleep quality of the human body to be detected is fulfilled.
As shown in fig. 2, another embodiment of the present invention further provides a deep learning-based sleep quality detection method, which may include:
step 201: and carrying out image preprocessing on the collected visible light image and infrared image to obtain the visible light image and infrared image of the human body to be detected in the sleeping state.
In the embodiment of the invention, the visible light images and infrared images collected by the visible light camera and the infrared camera include images in which the human body to be detected is not in a sleep state, for example when the human body to be detected is reading a book in bed or using a mobile phone. Obviously, these states cannot be used as image data for evaluating the sleep quality of the human body to be detected. Therefore, a model for judging the sleep state of the human body to be detected is built by predefining the sleep state and the non-sleep state and then training on historical data and the defined labels. When visible light images and infrared images are collected, this model can be used to screen the images and filter out those in which the human body to be detected is not in the sleep state, which ensures the accuracy of detecting the sleep quality of the human body through image recognition.
Of course, it is noted that when the visible light image and the infrared image are obtained, they should be obtained in a grouped manner, that is, each group of the visible light image and the infrared image should be an image captured at the same time, so that the accuracy of image recognition can be effectively ensured when image feature fusion and recognition are performed.
Step 202: and performing feature extraction on the visible light image and the infrared image to respectively obtain image features and infrared features.
In the embodiment of the invention, the characteristics of the visible light image and the infrared image need to be extracted respectively. Specifically, the feature extraction may be performed on the visible light image by using a first neural network trained from historical data of the visible light image, and the feature extraction may be performed on the infrared image by using a second neural network trained from historical data of the infrared image. During model training, labels need to be defined, namely, a mapping relation between the state of the key parts of the human body and the image characteristics and a mapping relation between the temperature distribution of the key parts of the human body and the infrared characteristics need to be defined. The key parts of the human body mentioned above mainly may include: nose, neck, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle, right ankle, and the like.
Step 203: and aligning the matrix corresponding to the image characteristic and the matrix corresponding to the infrared characteristic.
Since the image features and the infrared features are information from two different sources and are processed differently, the dimensions of their corresponding matrices are generally inconsistent, so fusion calculation cannot be performed on them directly and the matrices need to be aligned. Specifically, a fusion dimension is determined from the matrices; the fusion dimension is greater than or equal to the largest dimension among the matrices to be processed; all matrices are then aligned to the fusion dimension so that their dimensions are unified. For example, if the image features contain three sub-features whose matrices are 50 × 50, 40 × 40 and 30 × 30, and the infrared features contain two sub-features whose matrices are 40 × 40 and 20 × 20, then the fusion dimension should be no less than 50 × 50; take 60 × 60 as an example. Each of the matrices corresponding to the three image sub-features and the two infrared sub-features is then expanded to 60 × 60, usually by zero-padding, which guarantees that the fusion calculation of all matrices is carried out under the same dimension.
It should be noted that, of course, when determining the fusion dimension, the fusion dimension may be determined to be smaller than the maximum value of the dimension in each matrix. However, when the fusion dimension is smaller than the maximum dimension value in each matrix, the matrix with the matrix dimension larger than the fusion dimension is bound to be reduced, and in the process of reducing the dimension, information may be lost, so that the accuracy of the posture identification of the human body to be detected is reduced, and the detection effect on the sleep quality of the human body to be detected is affected.
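The alignment step above can be sketched as follows, assuming that "alignment" means zero-padding every feature matrix up to a common square fusion dimension, as in the 50 × 50 / 40 × 40 / 30 × 30 / 20 × 20 to 60 × 60 example; the function name is introduced here for illustration only.

from typing import List, Optional
import numpy as np

def align_to_fusion_dim(matrices: List[np.ndarray], fusion_dim: Optional[int] = None) -> List[np.ndarray]:
    # The fusion dimension must be at least the largest dimension among all matrices,
    # otherwise information would be lost by shrinking the larger matrices.
    max_dim = max(max(m.shape) for m in matrices)
    if fusion_dim is None:
        fusion_dim = max_dim
    if fusion_dim < max_dim:
        raise ValueError("fusion dimension smaller than the largest matrix would lose information")
    aligned = []
    for m in matrices:
        padded = np.zeros((fusion_dim, fusion_dim), dtype=m.dtype)   # zero-padding ("complementing 0")
        padded[: m.shape[0], : m.shape[1]] = m
        aligned.append(padded)
    return aligned

# Example from the text: image sub-features 50x50, 40x40, 30x30 and infrared
# sub-features 40x40, 20x20 are all expanded to 60x60.
feats = [np.ones((50, 50)), np.ones((40, 40)), np.ones((30, 30)), np.ones((40, 40)), np.ones((20, 20))]
aligned = align_to_fusion_dim(feats, fusion_dim=60)
assert all(a.shape == (60, 60) for a in aligned)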
Step 204: and performing fusion calculation on the matrix corresponding to the aligned image features and the matrix corresponding to the infrared features.
In the embodiment of the present invention, the aligned image features and infrared features need to be fused. In the fusion process, the aligned matrix corresponding to the infrared features can in principle be fused with any of the aligned matrices corresponding to the image features. For example, any one or more of the infrared features and any one or more of the image features can be fed together into an atrous spatial pyramid pooling (ASPP) module for fusion to obtain the fusion features. Of course, before the fusion, the image features and the infrared features can first be processed by deformable convolution (DCN) layers to obtain image features and infrared features that are more effective for sleeping posture recognition.
Of course, in practical applications it is not advisable to fuse the image features and the infrared features arbitrarily, because information may be lost; for example, if the image feature that carries global information is not fused with the infrared features, some important parts of that global information may be missing from the resulting fusion features. In addition, during fusion the preliminary fusion features need to be reduced in dimension so that they are consistent with the dimensions of the matrices before fusion; if arbitrary matrices are fused, the amount of data to be processed during dimension reduction may become very large, which hurts the execution efficiency of the processor. Generally, the infrared features are therefore fused into the image feature that has the lowest feature resolution and the most abstract feature expression among the multi-scale image features. Because this lowest-resolution image feature carries global information, no information is lost; and since only the fused matrix needs dimension reduction after fusion, the amount of data processed during dimension reduction is greatly decreased. Further, after dimension reduction, the fused feature has the same dimension as each of the remaining aligned image features, so the merging calculation can be performed directly to obtain the final fusion features. Therefore, the preferred feature fusion scheme is to fuse the infrared features into the image feature with the lowest feature resolution and the most abstract feature expression.
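The preferred fusion described above can be sketched in PyTorch as follows. This is an assumption about one reasonable realization rather than the patent's exact network: the infrared map is fused only into the lowest-resolution image feature, and a 1 × 1 convolution performs the subsequent dimension reduction; the ASPP and deformable convolution stages mentioned in the text are omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LowResFusion(nn.Module):
    def __init__(self, img_channels: int, ir_channels: int):
        super().__init__()
        # 1x1 convolution performs the post-fusion dimension reduction.
        self.reduce = nn.Conv2d(img_channels + ir_channels, img_channels, kernel_size=1)

    def forward(self, img_feats, ir_feat):
        # img_feats: list of multi-scale image feature maps, highest to lowest resolution
        # ir_feat:   infrared feature map
        lowest = img_feats[-1]                                    # lowest-resolution, most abstract level
        ir_resized = F.interpolate(ir_feat, size=lowest.shape[-2:], mode="bilinear",
                                   align_corners=False)           # spatially align the infrared map
        fused = torch.cat([lowest, ir_resized], dim=1)            # fuse infrared into this level only
        return img_feats[:-1] + [self.reduce(fused)]              # dimension reduction after fusion

# Usage sketch: three image scales plus one infrared map.
fusion = LowResFusion(img_channels=256, ir_channels=64)
img_feats = [torch.randn(1, 256, 64, 64), torch.randn(1, 256, 32, 32), torch.randn(1, 256, 16, 16)]
ir_feat = torch.randn(1, 64, 24, 24)
out = fusion(img_feats, ir_feat)
assert out[-1].shape == (1, 256, 16, 16)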
Step 205: and determining the sleep quality of the human body to be detected according to the fusion characteristics.
In the embodiment of the invention, when the sleep quality of the human body to be detected is determined, firstly, the sleep posture identification needs to be carried out on the fusion characteristic to obtain the sleep posture corresponding to the fusion characteristic, and then the sleep quality of the human body to be detected is determined according to the relationship between the sleep posture state and the sleep quality. Therefore, before the sleep quality detection is carried out, firstly, model training is carried out by considering the sleep posture recognition model training data and the label for defining the mapping relation between the pixel points corresponding to the key parts of the human body in the image and the human body sleep posture, and the sleep posture recognition model is obtained. For example, the turning times, the body curling state and the like of the human body to be detected are identified according to the state of the key part of the human body represented by the fusion features. Further, the quality evaluation model training is carried out by using the quality evaluation model training data and the label for defining the mapping relation between the human body sleeping posture and the sleeping quality. Therefore, the sleeping posture identification model is used for identifying the sleeping posture of the fusion characteristic, and then the quality evaluation model is used for analyzing the sleeping posture of the human body to be detected in a period of time, so that the sleeping quality of the human body to be detected is determined.
Specifically, when analyzing the sleep quality of the human body to be tested, the following quality scoring formula can be used to score various sleep postures of the human body to be tested, and then the sleep quality of the human body to be tested is analyzed, wherein the formula is as follows:
[sleep quality scoring formula, reproduced only as an image in the original publication]
wherein P represents the sleep quality score of the human body to be detected in the time period corresponding to the visible light images and the infrared images; k_i represents the weight of sleep quality corresponding to the i-th sleeping posture; A_i represents the parameter value of the i-th sleeping posture identified by the sleeping posture recognition model; B_j represents the parameter value of the j-th sleeping posture in the quality evaluation model; m represents the number of sleeping postures contained in the quality evaluation model; and n represents the number of sleeping postures identified by the sleeping posture recognition model.
For example, after recognition by the sleeping posture recognition model, a sequence of sleeping posture states within a period of time is obtained; substituting the parameter values corresponding to these postures into the above formula yields an average score over all postures in that period, and the sleep quality of the human body to be detected is then determined according to a predetermined rule between the score and the sleep quality. For example, a score between 85 and 100 may be defined as good sleep quality, a score between 65 and 84 as average sleep quality, and a score below 65 as poor sleep quality, so that whether corresponding measures for improving sleep quality should be taken can be decided according to the result. It should be noted that, because the quality scoring formula includes the weight of each sleeping posture with respect to sleep quality, the different effects of different sleeping postures on sleep quality are taken into account, so the sleep quality detection result is obtained more accurately.
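The grading step can be sketched as follows. The weighted-sum form of the score is an assumption consistent with the variable definitions (the published formula is only an image), and the 85/65 grade thresholds are the ones given in the example above.

from typing import Sequence

def sleep_quality_score(k: Sequence[float], A: Sequence[float], B: Sequence[float]) -> float:
    # k[i], A[i]: weight and parameter value of the i-th recognized posture (n postures);
    # B[j]: parameter value of the j-th posture in the quality evaluation model (m postures).
    # The combination below is an assumed reading of the scoring formula, not the patented one.
    return sum(ki * ai for ki, ai in zip(k, A)) / sum(B)

def grade(score: float) -> str:
    if score >= 85:
        return "good"          # 85-100: good sleep quality
    if score >= 65:
        return "average"       # 65-84: average sleep quality
    return "poor"              # below 65: poor sleep quality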
As shown in fig. 3 and 4, embodiments of the present invention provide an apparatus in which a sleep quality detection apparatus is located and a sleep quality detection apparatus based on deep learning. The device embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. From a hardware level, as shown in fig. 3, a hardware structure diagram of a device in which the sleep quality detection apparatus is located is provided for an embodiment of the present invention, and in addition to the processor, the memory, the network interface, and the nonvolatile memory shown in fig. 3, the device in which the apparatus is located in the embodiment may also include other hardware, such as a forwarding chip responsible for processing a packet, and the like. Taking a software implementation as an example, as shown in fig. 4, as a logical apparatus, the apparatus is formed by reading a corresponding computer program instruction in a non-volatile memory into a memory by a CPU of a device in which the apparatus is located and running the computer program instruction. As shown in fig. 4, an embodiment of the present invention provides a sleep quality detection apparatus based on deep learning, including: an acquisition module 401, an execution module 402 and a determination module 403;
the acquisition module 401 is configured to acquire at least one group of visible light images and infrared images of a sleeping state of a human body to be detected, where each group of visible light images and infrared images are images acquired at the same time;
an executing module 402, configured to execute the following operations for each set of visible light image and infrared image acquired by the acquiring module 401:
performing feature extraction on the visible light image by using a first neural network obtained by training historical data of the visible light image to obtain image features, wherein the image features are used for representing the state of a key part of a human body to be detected, and the key part at least comprises a joint;
performing feature extraction on the infrared image by using a second neural network obtained by training historical data of the infrared image to obtain infrared features, wherein the infrared features are used for representing the temperature distribution of key parts of a human body to be detected;
performing feature fusion on the image features and the infrared features to obtain fusion features of the current image;
a determining module 403, configured to determine sleep quality of the human body to be detected according to the fusion features of each group of images obtained by the executing module 402.
In the deep learning based sleep quality detection apparatus as shown in fig. 4, the execution module 402 is further configured to perform the following operations:
determining a fusion dimension according to the image features and the infrared features; the fusion dimension is greater than or equal to the largest dimension among the matrix corresponding to the image features and the matrix corresponding to the infrared features;
aligning the matrix corresponding to the image characteristic and the matrix corresponding to the infrared characteristic according to the fusion dimension;
and performing fusion calculation on the matrix corresponding to the aligned image features and the matrix corresponding to the infrared features.
In the deep learning based sleep quality detection apparatus as shown in fig. 4, the obtaining module 401 is further configured to perform the following operations:
acquiring historical data of a sleep state and a first label, wherein the first label is used for defining the sleep state and a non-sleep state;
performing model training by using the historical data of the sleep state and the first label to obtain a sleep state judgment model;
collecting at least one group of visible light images and infrared images in real time;
and inputting each group of visible light images and infrared images into the sleep state judgment model to obtain at least one group of visible light images and infrared images of the human body to be detected in the sleep state.
In the deep learning-based sleep quality detection apparatus as shown in fig. 4, a model training module may be further included;
the model training module is used for executing the following operations:
acquiring sleeping posture recognition model training data and a second label, wherein the second label is used for defining a mapping relation between pixel points corresponding to key parts of a human body in an image and the sleeping posture of the human body;
training by using the sleeping posture recognition model training data and the second label to obtain a sleeping posture recognition model;
acquiring quality evaluation model training data and a third label, wherein the third label is used for defining the mapping relation between the sleeping posture of the human body and the sleeping quality;
training by using the quality evaluation model training data and the third label to obtain a quality evaluation model;
the determining module 403 is further configured to perform the following operations:
recognizing the sleeping postures of each group of fusion characteristics by using a sleeping posture recognition model;
and evaluating each sleeping posture obtained by the sleeping posture identification model by using the quality evaluation model, and determining the sleeping quality of the human body to be detected.
In a deep learning based sleep quality detection apparatus as shown in fig. 4, the determining module 403 is further configured to perform the following operations:
scoring the sleep quality of the human body to be measured according to a sleep quality scoring formula, wherein the sleep quality scoring formula is as follows:
[sleep quality scoring formula, reproduced only as an image in the original publication]
wherein P represents the sleep quality score of the human body to be detected in the time period corresponding to the visible light images and the infrared images; k_i represents the weight of sleep quality corresponding to the i-th sleeping posture; A_i represents the parameter value of the i-th sleeping posture identified by the sleeping posture recognition model; B_j represents the parameter value of the j-th sleeping posture in the quality evaluation model; m represents the number of sleeping postures contained in the quality evaluation model; and n represents the number of sleeping postures identified by the sleeping posture recognition model.
In the embodiment of the present invention, when the sleep quality detection method is executed, the following artificial intelligence based sleep quality evaluation system may be adopted, where the evaluation system includes an image acquisition device and a computer running an artificial intelligence program;
the image acquisition device comprises a camera, a far infrared heat sensor array, a controller and a memory; the controller controls the camera, the far infrared heat sensor array and the memory; the memory is used for storing image data shot by the camera and far infrared data obtained by the far infrared thermal sensor array;
the computer running the artificial intelligence program comprises a USB interface.
In the embodiment of the invention, the sleep quality evaluation system based on artificial intelligence is characterized in that: the image acquisition device is provided with a USB interface, and data information stored in the memory can be read through the USB interface.
In the embodiment of the invention, the sleep quality evaluation system based on artificial intelligence is characterized in that: the working process of the evaluation system is as follows:
s1: before sleeping, a user presses a power switch of the image acquisition device and starts the image acquisition device;
s2: the image acquisition device synchronously records and stores the image and the far infrared heat sensor array data in real time;
s3: after waking up, the user connects a USB flash drive or other storage medium to the USB interface of the image acquisition device, reads the data recorded and stored in the image acquisition device, and imports the data into the computer running the artificial intelligence program through that computer's USB interface;
s4: the computer running the artificial intelligence program fuses the data of the far infrared thermal sensor array into the image, and the posture of the user when the user is asleep is fitted by analyzing the distribution condition of the temperature in the view field of the image acquisition device, so that the complete process of the sleep posture of the user is acquired;
s5: and operating an artificial intelligence program according to the sleeping posture of the user, measuring the sleeping quality of the user according to the information of turning over, posture and activity frequency of the user during sleeping, and giving a score of the sleeping quality.
In the embodiment of the invention, the sleep quality evaluation system based on artificial intelligence is characterized in that the artificial intelligence algorithm running in the computer is based on the OpenPose framework; by learning the turning-over, posture and activity-frequency information of a person during sleep, it recognizes the person's sleeping posture and thereby evaluates the person's sleep quality.
In the embodiment of the invention, the sleep quality evaluation system based on artificial intelligence is characterized in that the far infrared heat sensor array adopts a Melexis MLX90641.
Example 1
An artificial intelligence based sleep quality evaluation system comprises an image acquisition device and a computer running an artificial intelligence program;
the image acquisition device comprises a camera, a far infrared heat sensor array, a controller and a memory; the controller controls the camera, the far infrared heat sensor array and the memory; the memory is used for storing image data shot by the camera and far infrared data obtained by the far infrared thermal sensor array;
the computer running the artificial intelligence program comprises a USB interface.
In specific implementation, the image acquisition device is provided with a USB interface, and data information stored in the memory can be read through the USB interface.
In specific implementation, the camera and the far infrared heat sensor array share the same optical axis, so that the information acquired by the far infrared heat sensor array can be fused into the image information acquired by the camera. The far infrared heat sensor array uses an STM32 as its core control chip, which reads the data from the far infrared sensor array over the I2C protocol and transfers it to the memory of the image acquisition device.
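A host-side sketch of this synchronized recording is given below. The capture and storage calls are hypothetical placeholders for the STM32/I2C driver and the device memory interface, which the patent does not specify at this level of detail; the sampling period and recording length are likewise assumptions.

import time
from typing import Callable

def record_session(
    read_image: Callable[[], bytes],                 # placeholder for the camera capture call
    read_ir_frame: Callable[[], bytes],              # placeholder for reading the MLX90641 array over I2C
    store: Callable[[float, bytes, bytes], None],    # placeholder for writing one record to device memory
    period_s: float = 1.0,                           # assumed sampling period
    duration_s: float = 8 * 3600.0,                  # assumed recording length (one night)
) -> None:
    t_end = time.time() + duration_s
    while time.time() < t_end:
        t = time.time()
        visible = read_image()        # visible-light frame
        thermal = read_ir_frame()     # far-infrared frame captured at the same instant (shared optical axis)
        store(t, visible, thermal)    # one timestamped record, later exported over USB
        time.sleep(period_s)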
The working process of the sleep quality evaluation system based on artificial intelligence is as follows:
s1: before sleeping, a user presses a power switch of the image acquisition device and starts the image acquisition device;
s2: the image acquisition device synchronously records and stores the image and the far infrared heat sensor array data in real time;
s3: after waking up, the user connects a USB flash drive or other storage medium to the USB interface of the image acquisition device, reads the data recorded and stored in the image acquisition device, and imports the data into the computer running the artificial intelligence program through that computer's USB interface;
s4: the computer running the artificial intelligence program fuses the data of the far infrared thermal sensor array into the image, and the posture of the user when the user is asleep is fitted by analyzing the distribution condition of the temperature in the view field of the image acquisition device, so that the complete process of the sleep posture of the user is acquired;
s5: and operating an artificial intelligence program according to the sleeping posture of the user, measuring the sleeping quality of the user according to the information of turning over, posture and activity frequency of the user during sleeping, and giving a score of the sleeping quality.
In specific implementation, the artificial intelligence algorithm running in the computer is based on the OpenPose framework; by learning the turning-over, body posture and activity-frequency information of the person during sleep, it recognizes the person's sleeping posture and thereby evaluates the person's sleep quality.
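As an illustration of the statistics named above (turn-overs, posture, activity frequency), the sketch below derives them from a per-frame posture sequence. The posture labels are assumed to come from an OpenPose-based recognizer wrapped elsewhere, and how these statistics feed into the final score is this sketch's assumption, not the patent's algorithm.

from collections import Counter
from typing import Dict, Sequence

def sleep_statistics(postures: Sequence[str], frame_period_s: float) -> Dict[str, float]:
    # postures: one recognized sleeping-posture label per frame, in time order
    changes = sum(1 for a, b in zip(postures, postures[1:]) if a != b)   # posture transitions (turn-overs)
    duration_h = len(postures) * frame_period_s / 3600.0
    return {
        "turn_overs": float(changes),
        "activity_per_hour": changes / duration_h if duration_h else 0.0,             # activity frequency
        "dominant_posture_share": Counter(postures).most_common(1)[0][1] / len(postures),
    }

# Example: 100 frames supine, 50 on the left side, 30 supine again, one frame per minute.
stats = sleep_statistics(["supine"] * 100 + ["left_side"] * 50 + ["supine"] * 30, frame_period_s=60.0)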
In specific implementation, the far infrared thermal sensor array adopts a Melexis MLX90641.
By adopting the above sleep quality evaluation system based on artificial intelligence, the far infrared heat sensor array eliminates the influence of bedding covering the body on the video monitoring device, so the posture of the person during sleep is obtained more clearly and reliably; the OpenPose framework is used to recognize the human posture, automatic evaluation of the sleep quality is realized, and a score is given to help the user understand his or her sleep quality. Meanwhile, transferring the data through a storage medium avoids the privacy-leakage problems of a networked camera and offers better reliability and practicability than equipment that simply records video.
The embodiment of the invention also provides a sleep quality detection device based on deep learning, which is characterized by comprising the following components: at least one memory and at least one processor;
the at least one memory to store a machine readable program;
the at least one processor is configured to invoke the machine readable program to perform the method for detecting sleep quality in any embodiment of the present invention.
An embodiment of the present invention further provides a computer-readable medium, where the computer-readable medium stores computer instructions, and when the computer instructions are executed by a processor, the processor is caused to execute the method for detecting sleep quality in any embodiment of the present invention. Specifically, a method or an apparatus provided with a computer-readable medium on which software program codes that realize the functions of any of the embodiments described above are stored may be provided, and a computer (or a CPU or MPU) of the method or the apparatus is caused to read out and execute the program codes stored in the storage medium.
In this case, the program code itself read from the storage medium can realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code constitute a part of the present invention.
Examples of the storage medium for supplying the program code include a floppy disk, a hard disk, a magneto-optical disk, an optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD + RW), a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded from a server computer via a communications network.
Further, it should be clear that the functions of any of the above-described embodiments can be realized not only by the computer executing the read-out program code, but also by having an operating system or the like running on the computer perform part or all of the actual operations based on instructions of the program code.
It should be noted that, because the contents of information interaction, execution process, and the like between the units in the apparatus are based on the same concept as the method embodiment of the present invention, specific contents may refer to the description in the method embodiment of the present invention, and are not described herein again.

Claims (4)

1. A sleep quality detection method based on deep learning is characterized by comprising the following steps:
acquiring at least one group of a visible light image and an infrared image of the sleep state of a human body to be detected, wherein each group of the visible light image and the infrared image are images acquired at the same time;
for each group of the visible light image and the infrared image, performing:
performing feature extraction on the visible light image by using a first neural network trained from historical data of visible light images to obtain image features, wherein the image features are used for characterizing key parts of the human body to be detected, and the key parts include at least joints;
performing feature extraction on the infrared image by using a second neural network trained from historical data of infrared images to obtain infrared features, wherein the infrared features are used for characterizing the temperature distribution of the key parts of the human body to be detected;
performing feature fusion on the image features and the infrared features to obtain fusion features of the current image;
determining the sleep quality of the human body to be detected according to the obtained fusion features of each group of images;
wherein performing feature fusion comprises: fusing the infrared features into the image feature that has the lowest feature resolution and the most abstract feature expression among the multi-scale image features;
the acquiring at least one group of a visible light image and an infrared image of the sleep state of the human body to be detected comprises:
acquiring historical data of sleep states and a first label, wherein the first label is used for defining a sleep state and a non-sleep state;
performing model training by using the historical data of sleep states and the first label to obtain a sleep state judgment model;
collecting at least one group of visible light images and infrared images in real time;
inputting each group of the visible light image and the infrared image into the sleep state judgment model to obtain at least one group of the visible light image and the infrared image in which the human body to be detected is in the sleep state;
before the acquiring at least one group of a visible light image and an infrared image of the sleep state of the human body to be detected, the method further comprises:
obtaining sleeping posture recognition model training data and a second label, wherein the second label is used for defining a mapping relation between pixel points corresponding to the key parts of the human body in the image and the sleeping posture of the human body;
training by using the sleeping posture recognition model training data and the second label to obtain a sleeping posture recognition model;
obtaining quality evaluation model training data and a third label, wherein the third label is used for defining a mapping relation between the sleeping posture of the human body and sleep quality;
training by using the quality evaluation model training data and the third label to obtain a quality evaluation model;
the determining the sleep quality of the human body to be detected according to the obtained fusion features of each group of images comprises:
recognizing the sleeping posture corresponding to each group of the fusion features by using the sleeping posture recognition model;
evaluating each sleeping posture obtained by the sleeping posture recognition model by using the quality evaluation model, and determining the sleep quality of the human body to be detected;
wherein evaluating each sleeping posture obtained by the sleeping posture recognition model by using the quality evaluation model comprises:
scoring the sleep quality of the human body to be detected according to a sleep quality scoring formula, the sleep quality scoring formula being as follows:
[Formula: Figure DEST_PATH_IMAGE001]
wherein P is used for representing the sleep quality score of the human body to be detected in the time period corresponding to the visible light image and the infrared image, k_i is used for characterizing the weight of sleep quality corresponding to the i-th sleeping posture, A_i is used for characterizing the parameter value of the i-th sleeping posture recognized by the sleeping posture recognition model, B_j is used for characterizing the parameter value of the j-th sleeping posture in the quality evaluation model, m is used for characterizing the number of sleeping postures contained in the quality evaluation model, and n is used for characterizing the number of sleeping postures recognized by the sleeping posture recognition model.
2. The method of claim 1, wherein the performing feature fusion on the image features and the infrared features comprises:
determining a fusion dimension according to the image features and the infrared features, wherein the fusion dimension is greater than or equal to the maximum value of the dimensions of the matrix corresponding to the image features and the matrix corresponding to the infrared features;
aligning the matrix corresponding to the image features and the matrix corresponding to the infrared features according to the fusion dimension;
and performing fusion calculation on the matrix corresponding to the aligned image features and the matrix corresponding to the infrared features.
3. A sleep quality detection device based on deep learning, comprising: an acquisition module, an execution module and a determination module;
the acquisition module is used for acquiring at least one group of a visible light image and an infrared image of the sleep state of a human body to be detected, wherein each group of the visible light image and the infrared image are images acquired at the same time;
the execution module is used for performing the following operations for each group of the visible light image and the infrared image acquired by the acquisition module:
performing feature extraction on the visible light image by using a first neural network trained from historical data of visible light images to obtain image features, wherein the image features are used for characterizing key parts of the human body to be detected, and the key parts include at least joints;
performing feature extraction on the infrared image by using a second neural network trained from historical data of infrared images to obtain infrared features, wherein the infrared features are used for characterizing the temperature distribution of the key parts of the human body to be detected;
performing feature fusion on the image features and the infrared features to obtain fusion features of the current image;
the determination module is used for determining the sleep quality of the human body to be detected according to the fusion features of each group of images obtained by the execution module;
the determination module is further configured to:
scoring the sleep quality of the human body to be detected according to a sleep quality scoring formula, the sleep quality scoring formula being as follows:
[Formula: Figure 499981DEST_PATH_IMAGE001]
wherein P is used for representing the sleep quality score of the human body to be detected in the time period corresponding to the visible light image and the infrared image, k_i is used for characterizing the weight of sleep quality corresponding to the i-th sleeping posture, A_i is used for characterizing the parameter value of the i-th sleeping posture recognized by the sleeping posture recognition model, B_j is used for characterizing the parameter value of the j-th sleeping posture in the quality evaluation model, m is used for characterizing the number of sleeping postures contained in the quality evaluation model, and n is used for characterizing the number of sleeping postures recognized by the sleeping posture recognition model.
4. The apparatus of claim 3, wherein the execution module is further configured to perform the following operations:
determining a fusion dimension according to the image features and the infrared features, wherein the fusion dimension is greater than or equal to the maximum value of the dimensions of the matrix corresponding to the image features and the matrix corresponding to the infrared features;
aligning the matrix corresponding to the image features and the matrix corresponding to the infrared features according to the fusion dimension;
and performing fusion calculation on the matrix corresponding to the aligned image features and the matrix corresponding to the infrared features.
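For illustration of the fusion step recited in claims 2 and 4, the following sketch determines a fusion dimension no smaller than the larger of the two feature matrices, aligns both matrices to it, and performs a fusion calculation; zero-padding and element-wise addition are assumptions made for this example only, as the claims do not fix the alignment or the fusion calculation.

```python
import numpy as np

def fuse_features(image_feat, infrared_feat):
    """Align two feature matrices to a common fusion dimension and fuse them.

    The padding value (zero) and the fusion calculation (element-wise addition)
    are illustrative; the claims only require the fusion dimension to be at least
    the largest dimension of the two matrices.
    """
    fusion_dim = max(max(image_feat.shape), max(infrared_feat.shape))

    def pad_to(m, dim):
        out = np.zeros((dim, dim), dtype=np.float32)
        out[:m.shape[0], :m.shape[1]] = m
        return out

    return pad_to(image_feat, fusion_dim) + pad_to(infrared_feat, fusion_dim)

# Example: a 4x6 image-feature matrix fused with a 3x3 infrared-feature matrix.
print(fuse_features(np.ones((4, 6)), np.ones((3, 3))).shape)  # (6, 6)
```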
CN202011274897.9A 2019-11-21 2020-11-16 Sleep quality detection method and device based on deep learning Active CN112102946B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019111440525 2019-11-21
CN201911144052.5A CN110942824A (en) 2019-11-21 2019-11-21 Sleep quality evaluation system based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN112102946A CN112102946A (en) 2020-12-18
CN112102946B true CN112102946B (en) 2021-08-03

Family

ID=69907952

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201911144052.5A Pending CN110942824A (en) 2019-11-21 2019-11-21 Sleep quality evaluation system based on artificial intelligence
CN202011274897.9A Active CN112102946B (en) 2019-11-21 2020-11-16 Sleep quality detection method and device based on deep learning

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201911144052.5A Pending CN110942824A (en) 2019-11-21 2019-11-21 Sleep quality evaluation system based on artificial intelligence

Country Status (1)

Country Link
CN (2) CN110942824A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113365382B (en) * 2021-08-10 2021-11-09 深圳市信润富联数字科技有限公司 Light control method and device, electronic equipment and storage medium
CN115581435A (en) * 2022-08-30 2023-01-10 湖南万脉医疗科技有限公司 Sleep monitoring method and device based on multiple sensors

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001108278A (en) * 1999-10-05 2001-04-20 Daikin Ind Ltd Air conditioner
CN104083160A (en) * 2014-06-30 2014-10-08 天津大学 Sleep state monitoring method and device based on machine vision
CN105894424A (en) * 2016-06-30 2016-08-24 宁德师范学院 Sleep quality monitoring method
CN109102898A (en) * 2018-07-20 2018-12-28 渝新智能科技(上海)有限公司 A kind of method for building up, device and the equipment of large database concept of sleeping
CN109934182A (en) * 2019-03-18 2019-06-25 北京旷视科技有限公司 Object behavior analysis method, device, electronic equipment and computer storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101972140A (en) * 2010-09-07 2011-02-16 航天海鹰安全技术工程有限公司 Thermal imaging temperature monitoring device, system and method
CN104834946A (en) * 2015-04-09 2015-08-12 清华大学 Method and system for non-contact sleep monitoring
CN105930778A (en) * 2016-04-14 2016-09-07 厦门理工学院 Nighttime human sleeping posture monitoring method and system based on infrared image
CN109886137A (en) * 2019-01-27 2019-06-14 武汉星巡智能科技有限公司 Infant sleeping posture detection method, device and computer readable storage medium
CN110348500A (en) * 2019-06-30 2019-10-18 浙江大学 Sleep disturbance aided diagnosis method based on deep learning and infrared thermal imagery
CN110472481A (en) * 2019-07-01 2019-11-19 华南师范大学 A kind of sleeping position detection method, device and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Deep Learning: Recording Some Small Questions Considered During the Learning Process"; u011862114; https://blog.csdn.net/u011862114/article/details/80645477; 2018-06-10; pages 1-2 *

Also Published As

Publication number Publication date
CN112102946A (en) 2020-12-18
CN110942824A (en) 2020-03-31

Similar Documents

Publication Publication Date Title
US10991094B2 (en) Method of analyzing dental image for correction diagnosis and apparatus using the same
CN107784282B (en) Object attribute identification method, device and system
CN104732601B (en) Automatic high-recognition-rate attendance checking device and method based on face recognition technology
CN100361131C (en) Information processing apparatus and information processing method
CN104657705B (en) Pattern recognition device and data entry method towards pattern recognition device
CN110674785A (en) Multi-person posture analysis method based on human body key point tracking
CN112102946B (en) Sleep quality detection method and device based on deep learning
CN107358157A (en) A kind of human face in-vivo detection method, device and electronic equipment
CN108499107B (en) Control method and device for virtual role in virtual reality and storage medium
CN105243386A (en) Face living judgment method and system
CN109934182A (en) Object behavior analysis method, device, electronic equipment and computer storage medium
CN117379016B (en) Remote monitoring system and method for beef cattle cultivation
CN110321871B (en) Palm vein identification system and method based on LSTM
CN117152152B (en) Production management system and method for detection kit
CN112991343B (en) Method, device and equipment for identifying and detecting macular region of fundus image
CN109949272A (en) Identify the collecting method and system of skin disease type acquisition human skin picture
US20210059596A1 (en) Cognitive function evaluation method, cognitive function evaluation device, and non-transitory computer-readable recording medium in which cognitive function evaluation program is recorded
CN112525355A (en) Image processing method, device and equipment
US20220208383A1 (en) Method and system for mental index prediction
CN112801013B (en) Face recognition method, system and device based on key point recognition verification
CN107403192A (en) A kind of fast target detection method and system based on multi-categorizer
CN113627255A (en) Mouse behavior quantitative analysis method, device, equipment and readable storage medium
CN112507952A (en) Self-adaptive human body temperature measurement area screening method and forehead non-occlusion area extraction method
CN111428577A (en) Face living body judgment method based on deep learning and video amplification technology
Li et al. Non-Invasive Screen Exposure Time Assessment Using Wearable Sensor and Object Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant