CN116563887B - Sleeping posture monitoring method based on lightweight convolutional neural network - Google Patents


Info

Publication number
CN116563887B
CN116563887B · CN202310437342A
Authority
CN
China
Prior art keywords
sleeping
gesture
data
posture
pressure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310437342.9A
Other languages
Chinese (zh)
Other versions
CN116563887A (en)
Inventor
杨芳
王明君
崔慧英
刘广天
陈连庆
贾成芳
刘洽
李学博
唐惠艳
赵博渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China University of Science and Technology
Original Assignee
North China University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China University of Science and Technology
Priority to CN202310437342.9A
Publication of CN116563887A
Application granted
Publication of CN116563887B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1116Determining posture transitions
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • A61B5/1128Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physiology (AREA)
  • Dentistry (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to the technical field of data identification and record-carrier processing, in particular to a sleeping posture monitoring method based on a lightweight convolutional neural network, which comprises the following steps: acquiring a plurality of samples for each sleeping posture category and, from the data of each sample, mapping the sleeping posture onto the acquisition surface to form sleeping posture correction data; processing the sleeping posture correction data to form a plurality of image data; extracting features for each sleeping posture category to obtain the corresponding category features; collecting real-time sleeping postures and recognizing the sleeping posture category of each stored real-time sleeping posture; and storing the real-time recognition results as sleeping posture monitoring data. The invention addresses the problem that existing sleeping posture recognition methods either require sensors to be strapped to the human body or raise privacy concerns when a camera is used: by collecting the human sleeping posture with a large-area pressure sensor, it improves the practicality of sleeping posture monitoring in the home environment.

Description

Sleeping posture monitoring method based on lightweight convolutional neural network
Technical Field
The invention relates to the technical field of data identification and record carrier processing, in particular to a sleeping posture monitoring method based on a lightweight convolutional neural network.
Background
Sleep is an important part of our lives, and sleep state is directly related to a person's psychological and physiological health. In sleep-state monitoring, sleeping posture is one of the keys to objectively evaluating sleep quality. Effective monitoring of sleeping posture in the home environment enables early diagnosis and prevention of conditions such as respiratory diseases and pressure sores.
Existing sleeping posture monitoring methods fall into four main categories. Early work monitored sleeping posture from visual signals, an approach easily affected by the environment and carrying privacy risks. Monitoring with wearable sensors improves the recognition rate, but strapping sensors to the body imposes a strong sense of constraint and increases the patient's psychological stress. Multi-sensor fusion methods also exist, typically combining a camera with a pressure pad; the equipment is complex to install and the information is time-consuming to process, making them unsuitable for daily use. In recent years, unconstrained and non-intrusive sleeping posture monitoring has gradually become the main research direction, but existing methods have low recognition accuracy and are difficult to put into practical use.
Chinese patent application publication No. CN111353425A discloses a sleeping posture monitoring method based on feature fusion and an artificial neural network. Targeting the characteristics of six sleeping posture types, the method performs histogram analysis on sleeping posture images and applies a combination of image processing techniques as targeted preprocessing, removing noise and improving image quality while retaining as much useful information as possible, thereby preparing for subsequent feature extraction and improving overall monitoring accuracy. Combining multi-feature fusion with an artificial neural network yields a recognition accuracy of 99.17%, and experiments show that 180 pictures can be recognized in only 0.13 s. That invention generates the sleeping posture image directly from the pressure data between the human body and the mattress, keeps data processing time short, improves the real-time performance of sleeping posture recognition, and facilitates later construction of a model relating posture transitions to dynamic pressure.
However, the above method has the following problem: sleeping postures that differ only in pressure distribution cannot be accurately identified from the image alone.
Disclosure of Invention
Therefore, the invention provides a sleeping posture monitoring method based on a lightweight convolutional neural network, to solve the problem in the prior art that sleeping postures with different pressures cannot be accurately recognized from images, which prevents sleeping posture recognition from being applied in the home environment.
In order to achieve the above object, the present invention provides a sleeping posture monitoring method based on a lightweight convolutional neural network, comprising:
setting pre-classified sleeping posture categories and collecting a plurality of sleeping posture samples under each category; when acquiring the samples of a single sleeping posture category, an acquisition module collects the pressure distribution of each sample using a first sensor array arranged on an acquisition surface, so as to form first sleeping posture data corresponding to that sample, and stores each item of first sleeping posture data;
the acquisition module acquires second sleeping posture data corresponding to the first sleeping posture data using a second sensor array distributed below the acquisition surface, and stores the second sleeping posture data;
a sleeping posture pre-analysis module reads the second sleeping posture data of each sample and maps the pressure distribution on the acquisition surface according to the first sleeping posture data, so as to form sleeping posture correction data;
the sleeping posture pre-analysis module processes the sleeping posture correction data in a first preset processing mode to form first sleeping posture image data, and processes the first sleeping posture data together with the first sleeping posture image data in a second preset processing mode to form second sleeping posture image data;
a sleeping posture analysis module constructs a preset network framework from the first and second sleeping posture image data of each sample, and uses the framework to extract features for each sleeping posture category, so as to obtain the category features under each category;
the sleeping posture analysis module controls the acquisition module to acquire real-time sleeping postures and recognizes their sleeping posture categories according to the stored category features;
the sleeping posture category results of real-time recognition are stored together with the acquisition times to form sleeping posture monitoring data;
wherein the first sleeping posture data are the pressure data produced by the sample on the acquisition surface, the second sleeping posture data are the deformation factors produced by the sample on the acquisition surface, and the two are matched at corresponding coordinates of the acquisition surface;
the first preset processing mode is imaging of the pressure data; the second preset processing mode processes the pressure distribution data by inversion, local equalization, sleeping posture segmentation and morphological denoising; the first sleeping posture image data is the sleeping posture pressure image, and the second sleeping posture image data is the sleeping posture feature image;
the sleeping posture mapping is a deformation correction, via the second sleeping posture data, of the acquisition surface that has deformed;
the sleeping posture pre-analysis module is provided with a minimum mapping pressure threshold and a maximum breaking pressure threshold; if a single acquisition point in the acquisition surface has deformed, the module judges it to be a deformed acquisition point;
if the first sleeping posture data at the deformed acquisition point is not greater than the minimum mapping pressure threshold, the module judges that the point is not compensated;
if it is greater than the minimum mapping pressure threshold but not greater than the maximum breaking pressure threshold, the module compensates the point in a first preset mapping mode;
if it is greater than the maximum breaking pressure threshold, the module compensates the point in a second preset mapping mode, and compensates the adjacent points around it in the first preset mapping mode;
the first preset mapping mode is a deformation compensation mode set by the sleeping posture pre-analysis module, and the second preset mapping mode is a deformation compensation value set by the module;
the preset network framework is a lightweight convolutional neural network sleeping posture recognition framework.
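The description does not fix the internal structure of the lightweight network. A common way to make a convolutional network lightweight is to replace standard convolutions with depthwise-separable ones (the technique behind MobileNet-style networks); whether this invention does so is an assumption. The sketch below, with an arbitrary example layer size, shows the parameter saving:

```python
def conv2d_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1x1 pointwise convolution mixing the channels."""
    return c_in * k * k + c_in * c_out

# Hypothetical example layer: 64 -> 128 channels, 3x3 kernel
standard = conv2d_params(64, 128, 3)
separable = depthwise_separable_params(64, 128, 3)
print(standard, separable)  # 73728 8768 — roughly an 8x reduction
```

The same trade-off applies at every layer, which is why such factorized convolutions dominate lightweight recognition networks.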
Further, under a preliminary acquisition condition, the acquisition module acquires each sleeping posture sample; a single sample acquisition comprises pressure data at a plurality of positions together with the position coordinates of each pressure datum; the acquisition module also collects reference data when each pressure sensor identifies that the sleeping-posture bearing side of the acquisition surface is empty;
wherein the acquisition module is provided with a plurality of pressure sensors on the sleeping-posture bearing side, for collecting the pressure produced by each sleeping posture together with its position coordinates;
the pressure acquisition period means that the acquisition module reads the pressure data of each pressure sensor periodically, with a preset duration as the period;
and the reference data are the pressure data of the bearing side when it is not under load, related to the dead weight of the acquisition surface.
Further, the pressure sensors are arranged on the lower surface of the bed sheet or the upper surface of the bed body, and when a sleeping posture sample applies pressure to the bed, the sensors do not move relative to the bed sheet;
the number of pressure sensors is at least 9, and the largest area enclosed by the sensors covers the projection area of each sleeping posture sample on the bed surface.
Further, under an acquisition condition, the acquisition module records the sleeping posture data corresponding to each sleeping posture sample and transmits them to a storage module, recording with a preset duration as the period; the recording time of each item of sleeping posture data is recorded alongside it to form time-stamped sleeping posture data;
wherein the acquisition condition is that the acquisition module reads the pressure readings of the pressure sensors bearing sleeping posture pressure in the current sleeping posture state.
Further, under a sleeping posture analysis condition, the pre-analysis module reads the time-stamped sleeping posture data in the storage module, computes for each item the difference between its pressure data and the reference data to form paired corrected pressure data, and forms the sleeping posture pressure image for a single time stamp from the position of each pressure sensor at that time stamp and its corrected pressure data;
wherein the difference is the pressure data minus the reference data;
and the sleeping posture analysis condition is that the acquisition module has collected the reference data.
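The difference calculation above (each sensor reading minus its empty-bed reference) can be sketched as follows; the clamping of negative results to zero is an added assumption, not stated in the description:

```python
def correct_pressures(pressure, reference):
    """Subtract the no-load reference reading of each sensor from its
    current reading, cell by cell; negative results are clamped to
    zero (assumption: a valid frame never reads below baseline)."""
    return [[max(p - r, 0.0) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pressure, reference)]

# Illustrative 3 x 3 sensor grid: raw readings and empty-bed reference
raw = [[1.2, 5.0, 1.1],
       [1.3, 9.5, 1.2],
       [1.1, 4.8, 1.0]]
ref = [[1.0, 1.0, 1.0],
       [1.0, 1.0, 1.0],
       [1.0, 1.0, 1.0]]
corrected = correct_pressures(raw, ref)
```

The corrected grid, rendered as an intensity image, is the "sleeping posture pressure image" for one time stamp.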
Further, under a sleeping posture simplification condition, the analysis module converts the sleeping posture pressure image into the sleeping posture feature image using the preset network framework;
wherein the simplification condition is that the analysis module has formed the sleeping posture pressure image.
Further, when the preset network framework is constructed, the projection feature of a single sleeping posture feature image on the projection plane is an image with three dimensions: horizontal abscissa, horizontal ordinate, and pressure perpendicular to the plane of the bed's upper surface; the analysis module vectorizes the feature image according to each of these dimensions to form a sleeping posture feature function;
wherein the projection plane is the plane of the bed's upper surface, and the feature function has a real value at every point of the projection plane.
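The description only requires that the sleeping posture feature function have a real value at every point of the projection plane; how it is built is not specified. One hypothetical realisation from a discrete sensor grid is bilinear interpolation, which is defined (after clamping) everywhere on the plane:

```python
def feature_function(grid):
    """Return f(x, y) defined over the whole projection plane by
    bilinear interpolation of a sensor grid (rows x cols of pressure
    values); coordinates outside the grid are clamped to its border.
    A hypothetical stand-in for the patent's feature function."""
    rows, cols = len(grid), len(grid[0])
    def f(x, y):
        x = min(max(x, 0.0), cols - 1.0)
        y = min(max(y, 0.0), rows - 1.0)
        x0, y0 = int(x), int(y)
        x1, y1 = min(x0 + 1, cols - 1), min(y0 + 1, rows - 1)
        dx, dy = x - x0, y - y0
        top = grid[y0][x0] * (1 - dx) + grid[y0][x1] * dx
        bot = grid[y1][x0] * (1 - dx) + grid[y1][x1] * dx
        return top * (1 - dy) + bot * dy
    return f

f = feature_function([[0.0, 2.0], [4.0, 6.0]])
print(f(0.5, 0.5))  # centre of the 2 x 2 grid -> 3.0
```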
Further, when the preset network framework is constructed, the analysis module sorts the sleeping posture feature images, and for the feature images of a single sleeping posture category it is provided with a corresponding threshold interval;
when the real value of the feature function on the projection plane lies within the threshold interval of a single sleeping posture category, the module marks the feature function as a category feature of that category;
wherein the threshold interval is the closed interval determined by a first sleeping posture threshold and a second sleeping posture threshold, the first threshold being smaller than the second.
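A minimal sketch of the closed-interval test described above; the category labels and threshold values are invented placeholders, and the first matching interval wins where intervals share an endpoint:

```python
def classify(value, intervals):
    """Return the posture category whose closed threshold interval
    [lo, hi] contains the feature value; None if no interval does.
    Intervals are checked in insertion order."""
    for label, (lo, hi) in intervals.items():
        if lo <= value <= hi:
            return label
    return None

# Placeholder categories and intervals — not values from the patent
thresholds = {"supine": (0.0, 0.3), "side": (0.3, 0.7), "prone": (0.7, 1.0)}
print(classify(0.5, thresholds))  # "side"
```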
Further, when the analysis module has obtained the category features of every sleeping posture category, it records the classified sleeping posture feature functions and forms a sleeping posture classification model;
wherein the feature function of a single classification model serves as a classification operator to identify the sleeping posture category of real-time sleeping posture data;
and category recognition means that the analysis module classifies a sleeping posture feature image into one of the pre-classified sleeping posture categories.
Further, when the analysis module has finished classifying a real-time sleeping posture, it transmits the corresponding sleeping posture data to the storage module and adjusts the first and second sleeping posture thresholds; when classifying the next real-time sleeping posture, the adjusted thresholds are applied.
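The description says the thresholds are adjusted after each classification but does not give the rule. One plausible sketch, under that stated assumption, shifts the interval toward the newly observed feature value while preserving its width:

```python
def adjust_thresholds(lo, hi, observed, rate=0.1):
    """Nudge the closed interval [lo, hi] toward a newly classified
    feature value; `rate` is a hypothetical learning rate. The
    interval's width is preserved, only its centre moves."""
    centre = (lo + hi) / 2.0
    shift = rate * (observed - centre)
    return lo + shift, hi + shift

lo, hi = adjust_thresholds(0.3, 0.7, 0.6)  # interval drifts upward
```

Updating online like this lets the model track gradual changes in a user's pressure signature.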
Compared with the prior art, the invention has the beneficial effect of solving the problem that existing sleeping posture recognition methods either require sensors strapped to the human body or raise privacy concerns when using a camera: the complete human sleeping posture is acquired by a large-area pressure sensor, without constraint on or interference with the body, thereby improving the practicality of sleeping posture monitoring in the home environment.
Further, by performing histogram analysis on the sleeping posture image and applying a combination of image processing techniques as targeted preprocessing, noise is removed and image quality improved while retaining as much useful information as possible, further improving the practicality of home sleeping posture monitoring.
Further, by constructing a network framework from the sleeping posture images and training the recognition on it, training efficiency is effectively improved, and the practicality of home sleeping posture monitoring is improved with it.
Further, the sleeping posture image is generated directly from the pressure data between the human body and the mattress; data processing time is short, the real-time performance of recognition is improved, and building a model relating posture transitions to dynamic pressure is facilitated, further improving practicality.
Further, recognition accuracy is continuously tuned by continuously refining the sleeping posture data, so that accuracy and the practicality of home sleeping posture monitoring are both effectively improved.
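The preprocessing steps named in the method (inversion, sleeping posture segmentation, morphological denoising) can be sketched on a plain integer grid; local equalization is omitted for brevity, and a 3x3 erosion stands in for morphological denoising generally. This is an illustrative pipeline, not the patent's exact implementation:

```python
def invert(img, max_val=255):
    """Intensity inversion of a grayscale pressure image."""
    return [[max_val - v for v in row] for row in img]

def segment(img, threshold):
    """Binary sleeping-posture segmentation by a global threshold."""
    return [[1 if v >= threshold else 0 for v in row] for row in img]

def erode(mask):
    """3x3 erosion: a pixel survives only if its whole neighbourhood
    is set — the simplest morphological denoising step, which removes
    isolated noise pixels."""
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            if all(mask[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out
```

A real system would chain these per frame: invert the raw image, equalize, segment the body region, then denoise the binary mask before feature extraction.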
Drawings
FIG. 1 is a flow chart of the sleeping posture monitoring method based on a lightweight convolutional neural network of the present invention;
FIG. 2 is a schematic diagram of the operation of a sleeping posture monitoring and recognition system according to an embodiment of the present invention;
FIG. 3 is a sleeping posture histogram of an embodiment of the present invention;
FIG. 4 is a schematic diagram of a residual module according to an embodiment of the present invention;
wherein: 1, large-area pressure sensor; 11, first sensor array; 12, second sensor array; 2, bed body; 3, first data acquisition device; 4, upper computer terminal; 5, second data acquisition device; 6, mattress; 7, user.
Detailed Description
In order that the objects and advantages of the invention will become more apparent, the invention will be further described with reference to the following examples; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
It should be noted that, in the description of the present invention, terms such as "upper," "lower," "left," "right," "inner," "outer," and the like indicate directions or positional relationships based on the directions or positional relationships shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the apparatus or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless explicitly specified and limited otherwise, the terms "mounted" and "connected" are to be construed broadly: a connection may, for example, be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or internal communication between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Referring to FIG. 1, the sleeping posture monitoring method based on a lightweight convolutional neural network of the present invention comprises:
setting pre-classified sleeping posture categories and collecting a plurality of sleeping posture samples under each category; when acquiring the samples of a single sleeping posture category, an acquisition module collects the pressure distribution of each sample using a first sensor array arranged on an acquisition surface, so as to form first sleeping posture data corresponding to that sample, and stores each item of first sleeping posture data;
the acquisition module acquires second sleeping posture data corresponding to the first sleeping posture data using a second sensor array distributed below the acquisition surface, and stores the second sleeping posture data;
the sleeping posture pre-analysis module reads the second sleeping posture data of each sample and maps the pressure distribution on the acquisition surface according to the first sleeping posture data, so as to form sleeping posture correction data;
the sleeping posture pre-analysis module processes the sleeping posture correction data in a first preset processing mode to form first sleeping posture image data, and processes the first sleeping posture data together with the first sleeping posture image data in a second preset processing mode to form second sleeping posture image data;
the sleeping posture analysis module constructs a preset network framework from the first and second sleeping posture image data of each sample, and uses the framework to extract features for each sleeping posture category, so as to obtain the category features under each category;
the sleeping posture analysis module controls the acquisition module to acquire real-time sleeping postures and recognizes their sleeping posture categories according to the stored category features;
the sleeping posture category results of real-time recognition are stored together with the acquisition times to form sleeping posture monitoring data;
wherein the first sleeping posture data are the pressure data produced by the sample on the acquisition surface, the second sleeping posture data are the deformation factors produced by the sample on the acquisition surface, and the two are matched at corresponding coordinates of the acquisition surface;
the first preset processing mode is imaging of the pressure data; the second preset processing mode processes the pressure distribution data by inversion, local equalization, sleeping posture segmentation and morphological denoising; the first sleeping posture image data is the sleeping posture pressure image, and the second sleeping posture image data is the sleeping posture feature image;
the sleeping gesture mapping is deformation correction of the deformed acquisition surface through the second sleeping gesture data;
the sleeping gesture pre-analysis module is provided with a minimum mapping pressure threshold value and a maximum breaking pressure threshold value, and if a single acquisition point in the acquisition surface is deformed, the sleeping gesture pre-analysis module judges that the acquisition point is a deformed acquisition point;
if the first sleeping posture data corresponding to the deformation acquisition point is not greater than the minimum mapping pressure threshold value, the sleeping posture pre-analysis module judges that the deformation acquisition point is not compensated;
if the first sleeping posture data corresponding to the deformation acquisition point is larger than the minimum mapping pressure threshold and is not larger than the maximum damage pressure threshold, the sleeping posture pre-analysis module judges that the deformation acquisition point is compensated in a first preset mapping mode;
if the first sleeping posture data corresponding to the deformation acquisition point is larger than the maximum breaking pressure threshold, the sleeping posture pre-analysis module judges that the deformation acquisition point is compensated in a second preset mapping mode, and compensates the adjacent points around the deformation acquisition point in the first preset mapping mode;
the first preset mapping mode is a deformation compensation mode set by the sleeping gesture pre-analysis module, and the second preset mapping mode is a deformation compensation value set by the sleeping gesture pre-analysis module;
the preset network framework is a light convolutional neural network sleeping posture identification network framework.
Taking a single deformation acquisition point A as an example:
constructing a coordinate system by taking a long side of the bed body as the transverse axis, a short side as the longitudinal axis and an intersection point of the long side and the short side as the origin, the original coordinates of the point A are (100, 100), and after being pressed, the coordinates of the point A change to (95, 100).
At this time, the pressure applied to the point A is determined:
if the pressure applied to the point A is within 100 N, the sleeping gesture pre-analysis module judges that the point A is not compensated, and marks the coordinates of the point A as (100, 100);
if the pressure at the point A is in the range of 100 N to 400 N, the sleeping gesture pre-analysis module judges that the point A is compensated, and marks the coordinates of the point A as (98, 100);
if the pressure at the point A exceeds 400 N, the sleeping gesture pre-analysis module marks the coordinates of the point A as (95, 95) and compensates all adjacent points nearby.
Through the compensation, the sleeping gesture image generated aiming at the point A and the vicinity of the point A can more accurately embody the sleeping gesture characteristics after analysis.
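The three-band rule of the example above can be sketched as follows; the thresholds match the 100 N / 400 N figures in the example, while the function name and the returned mode labels are hypothetical:

```python
def classify_compensation(pressure_n, min_map=100.0, max_break=400.0):
    """Select the compensation mode for one deformed acquisition point.

    min_map:   minimum mapping pressure threshold (N)
    max_break: maximum breaking pressure threshold (N)
    """
    if pressure_n <= min_map:
        return "none"          # deformation too small to map; keep original coordinate
    if pressure_n <= max_break:
        return "first_preset"  # compensate the deformed point itself
    return "second_preset"     # compensate the point and its adjacent points
```

The same banding applies to every deformed acquisition point independently, so it can be vectorized over the whole acquisition surface if needed.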
The invention solves the problems that the existing sleeping gesture recognition method needs to bind the sensor on the human body or has privacy hidden trouble by using a camera, and utilizes the large-area pressure sensor to collect the sleeping gesture of the human body, and the complete sleeping gesture of the human body is collected under the conditions of no binding and no interference to the human body, thereby improving the practicability of monitoring the sleeping gesture in the household environment.
Specifically, under the preliminary acquisition condition, the acquisition module acquires sleeping posture samples; a single sleeping posture sample acquisition comprises pressure data of a plurality of positions and position coordinate data corresponding to each pressure data; the acquisition module collects reference data when each pressure sensor identifies that the sleeping posture bearing side of the acquisition surface is empty;
the acquisition module is provided with a plurality of pressure sensors on the sleeping gesture bearing side and is used for collecting pressure generated by each sleeping gesture and corresponding position coordinates of the pressure;
the pressure acquisition period is that the acquisition module reads the pressure data acquired by each pressure sensor with the preset time length as the period;
the reference data is pressure data of the sleeping posture bearing side which is not in a pressed state and is related to the dead weight of the acquisition surface.
Specifically, the pressure sensors are arranged on the lower surface of the bed sheet or the upper surface of the bed body, and when a single sleeping posture sample applies pressure to the bed, the relative positions of the pressure sensors to the bed sheet are not moved;
the number of the pressure sensors is at least 9, and the largest area surrounded by the pressure sensors covers the projection area of each sleeping posture sample on the surface of the bed.
Taking a sleeping posture monitoring and identifying system based on a bed unit type large-area pressure sensor array as an example, the sleeping posture monitoring and identifying system comprises:
the large-area pressure sensor 1 is divided into a first sensor array 11 and a second sensor array 12, wherein the first sensor array 11 is arranged on a mattress 6 positioned on the upper surface of a bed body 2 and is used for collecting pressure data of a user 7 on the mattress 6, and the second sensor array 12 is arranged in the bed body 2 and is used for collecting deformation data of the mattress 6 generated by the pressure of the user 7;
a first data acquisition device 3 connected to the large area pressure sensor 1 for measuring pressure data applied to the bed 2 by the user 7;
a second data acquisition device 5 connected to the mattress 6 for measuring the deformation of the large area pressure sensor 1;
the data storage device is connected with the data acquisition devices and is used for recording the pressure data applied by the human body to the bed sheet and the vectorized data, and attaching a time stamp;
the upper computer terminal 4 is connected with the large-area pressure sensor 1, the first data acquisition device 3, the second data acquisition device 5 and the data storage device, and is used for presetting sensitivity to identify sleeping gesture and adjusting preset sensitivity according to actual conditions.
Referring to fig. 2, which is a schematic working diagram of a sleeping posture monitoring and identifying system according to an embodiment of the present invention, a bed sheet type large-area pressure sensor array is used to collect pressure data of six kinds of sleeping postures (supine, prone, right trunk type, right fetal type, left trunk type, left fetal type) pre-classified, and the pressure data are converted into two-dimensional sleeping posture images after being recombined and sequenced.
During sleeping posture image acquisition, the sensor collects the pressure values of each sleeping posture image in one-to-one correspondence with the coordinates of the sensor unit positions, faithfully restoring the direction and position of the human body lying on the sensor.
Specifically, under the condition of collection and collection, the collection module records the sleep gesture data corresponding to each sleep gesture sample, and transmits the sleep gesture data to the storage module to record each sleep gesture data with the preset time period as a period, and records the recording time of the corresponding sleep gesture data to form time stamp sleep gesture data;
the collecting and collecting conditions are that the collecting module reads the pressure reading of each pressure sensor bearing the sleeping posture pressure in the current sleeping posture state.
Specifically, under the sleeping gesture analysis condition, the sleeping gesture analysis module reads the time stamp sleeping gesture data in the storage module, performs difference calculation on pressure data corresponding to each time stamp sleeping gesture data and reference data to form pair correction pressure data, and forms a sleeping gesture pressure image under a single time stamp according to the position of each pressure sensor corresponding to the same time stamp and the corresponding correction pressure data;
wherein the difference is calculated as the difference between the pressure data and the reference data;
the sleeping gesture analysis condition is that the acquisition module collects reference data.
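The difference calculation above can be sketched as follows, assuming each time-stamped frame and the no-load reference frame are stored as 2-D arrays aligned with the sensor grid (array and function names are illustrative):

```python
import numpy as np

def corrected_pressure_image(raw, reference):
    """Subtract the empty-bed reference frame from a time-stamped pressure frame."""
    corrected = np.asarray(raw, dtype=float) - np.asarray(reference, dtype=float)
    # Negative residues are sensor noise below the reference level; clamp to zero.
    return np.clip(corrected, 0.0, None)
```

The clamped result for each time stamp, laid out by sensor position, is the sleeping gesture pressure image for that time stamp.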
Taking three sleeping positions corresponding to the time stamp A, B, C as an example:
wherein A is the previous sleeping posture of B, B is the previous sleeping posture of C, the A sleeping posture is the same as the B sleeping posture, the B sleeping posture is different from the C sleeping posture,
it can be understood that, when the elastic limit of the acquisition surface is not reached, the deformation acquisition points generated by the A sleeping gesture deepen when the B sleeping gesture is assumed, and the deformation acquisition points generated by the B sleeping gesture influence the pressure acquisition for the C sleeping gesture; at this time, the A sleeping gesture is recorded as the influence sleeping gesture of the B sleeping gesture, and the B sleeping gesture is the influence sleeping gesture of the C sleeping gesture;
taking different sleeping positions of adjacent periods as an example:
setting the coordinate set of the deformation acquisition points corresponding to the sleeping gesture in the i-th period as S_i, the coordinate set of the deformation acquisition points corresponding to the sleeping gesture in the (i+1)-th period as S_{i+1}, the difference set on the acquisition surface as ΔS_i, where ΔS_i = S_i − S_{i+1}, and the intersection as S_i ∩ S_{i+1}; the sleep gesture pre-analysis module is provided with a pressure compensation value and a difference element quantity threshold Ω, and compares the number of elements ΔΩ_i in ΔS_i with Ω to determine the sleep gesture pressure compensation for the (i+1)-th period:
if ΔΩ_i ≤ Ω, the sleep gesture pre-analysis module judges that the sleep gesture of the i-th period does not influence the sleep gesture pressure acquisition of the (i+1)-th period and does not carry out pressure compensation on the sleep gesture of the (i+1)-th period;
if ΔΩ_i > Ω, the sleep gesture pre-analysis module judges that the sleep gesture of the i-th period influences the sleep gesture pressure acquisition of the (i+1)-th period, and performs pressure compensation on the sleep gesture of the (i+1)-th period, wherein the compensation positions are the acquisition points nearest to the geometric center of each element of S_i ∩ S_{i+1} on the acquisition surface, and the compensation quantity is the pressure compensation value;
the difference quantity threshold value omega is related to the elastic recovery quantity of the bed body, the pressure compensation value is related to the elastic coefficient of the acquisition surface, and the pressure compensation value is a negative value.
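A minimal sketch of the adjacent-period compensation decision; since the source does not state the comparison operator explicitly, the direction assumed here (compensate when the element count of the difference set exceeds Ω) and the compensation value are illustrative:

```python
def compensate_next_period(S_i, S_next, omega, comp_value=-1.0):
    """Decide whether the (i+1)-th period needs pressure compensation.

    S_i, S_next: sets of (x, y) deformed-point coordinates of periods i and i+1.
    omega:       difference element quantity threshold.
    comp_value:  negative pressure compensation value (illustrative magnitude).
    Returns a mapping from compensation position to compensation quantity.
    """
    delta = S_i - S_next               # ΔS_i = S_i − S_{i+1}
    if len(delta) <= omega:
        return {}                      # previous posture has no lasting effect
    # Compensate at the points pressed in both periods (S_i ∩ S_{i+1}).
    return {p: comp_value for p in (S_i & S_next)}
```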
Through carrying out histogram analysis on the sleeping gesture image, adopting a mode of combining multiple image processing technologies, carrying out targeted pretreatment on the image, removing noise, improving the image quality, and effectively retaining as much useful information as possible, thereby further improving the practicability of monitoring the sleeping gesture under the household environment.
Specifically, under the condition of simplifying the sleeping gesture, the sleeping gesture analysis module converts the sleeping gesture pressure image into a sleeping gesture characteristic image by using a preset network framework;
the sleeping gesture simplifying condition is that the sleeping gesture analyzing module forms a sleeping gesture pressure image.
Taking as an example the data collected by a large area array of pressure sensors of the sheet type:
and (3) making a difference between the sleeping posture image and pressure data output by the sensor under the no-load condition, eliminating noise brought by the sensor, obtaining an actual sleeping posture pressure image, carrying out histogram analysis on the actual sleeping posture pressure image, and obtaining a sleeping posture image with obvious characteristics through pretreatment methods of inversion, local equalization, sleeping posture segmentation and morphological denoising.
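The preprocessing chain above can be sketched in a few lines of NumPy; the local equalization is approximated here by a simple global contrast stretch, and the threshold and the neighbour rule for morphological denoising are illustrative assumptions, not the patented values:

```python
import numpy as np

def preprocess(raw, thresh=0.3):
    """Inversion, approximate equalization, segmentation and denoising."""
    raw = np.asarray(raw, dtype=float)
    inv = raw.max() - raw                        # inversion (for display)
    rng = raw.max() - raw.min()
    # global contrast stretch as a stand-in for local equalization
    eq = (raw - raw.min()) / rng if rng else np.zeros_like(raw)
    mask = eq > thresh                           # sleeping-posture segmentation
    # morphological denoising: drop pixels with no 4-neighbour in the mask
    p = np.pad(mask, 1)
    nb = p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]
    return inv, mask & nb
```

Isolated high-pressure pixels (sensor noise) are removed while the contiguous body region survives.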
Specifically, when a preset network framework is constructed, for a single sleeping gesture feature image, the projection features of the single sleeping gesture feature image on a projection plane comprise a horizontal abscissa, a horizontal ordinate and an image with three dimensions of pressure perpendicular to a plane where the upper surface of a bed body is located, and a sleeping gesture analysis module carries out vectorization processing on the sleeping gesture feature image according to each dimension data of the sleeping gesture feature image so as to form a sleeping gesture feature function;
the projection surface is the plane where the upper surface of the bed body is located, and the sleeping gesture characteristic function has a real solution at any point of the projection surface.
Fig. 3 is a schematic view of a sleeping posture image according to an embodiment of the present invention,
referring to figure 3 (a) and figure 3 (b),
wherein, the diagram (a) in fig. 3 is a left trunk-type original sleep posture histogram according to the embodiment of the invention,
fig. 3 (b) is a diagram showing an actual sleep position pressure histogram obtained by subtracting static pressure from the trunk shape on the left side in the embodiment of the present invention;
specifically, when a preset network framework is constructed, the sleep gesture analysis module sorts the sleep gesture feature images, and for the sleep gesture feature images of a single sleep gesture class, the sleep gesture analysis module is provided with a corresponding threshold interval,
when the real solution of the sleeping gesture feature function on the projection surface is in a threshold interval corresponding to a single sleeping gesture category, the sleeping gesture module marks the sleeping gesture feature function as category features under the corresponding sleeping gesture category;
the threshold interval is a closed interval determined by a first sleeping gesture threshold and a second sleeping gesture threshold, and the first sleeping gesture threshold is smaller than the second sleeping gesture threshold.
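A minimal sketch of matching a feature-function value against closed threshold intervals; the interval bounds and category names below are hypothetical:

```python
def classify_by_interval(value, intervals):
    """Return the sleeping-posture category whose closed interval contains value.

    intervals: {category: (t1, t2)} with t1 < t2 (first and second thresholds).
    """
    for category, (t1, t2) in intervals.items():
        if t1 <= value <= t2:      # closed interval [t1, t2]
            return category
    return None                    # value falls outside every category
```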
Please refer to the (c) diagram in fig. 3 and the (d) diagram in fig. 3 in combination with the (a) diagram in fig. 3 and the (b) diagram in fig. 3,
wherein, the diagram (c) in fig. 3 is a left trunk-type reversed sleeping posture histogram according to the embodiment of the invention,
fig. 3 (d) shows a left-side trunk-type partially equalized sleep posture histogram according to an embodiment of the present invention,
specifically, when the sleeping gesture analysis module finishes acquiring category characteristics under each sleeping gesture category, the sleeping gesture analysis module records each sleeping gesture characteristic function which is classified, and forms a sleeping gesture classification model;
the sleeping gesture feature function of the single sleeping gesture classification model is used as a classification operator to carry out sleeping gesture classification recognition on real-time sleeping gesture data;
the sleeping gesture category recognition is used for classifying the sleeping gesture feature images by the sleeping gesture analysis module according to the pre-classified sleeping gesture categories.
The sleeping gesture image is generated by directly collecting the pressure data between the human body and the mattress, the data processing time is short, the real-time performance of sleeping gesture recognition is improved, and the establishment of a relation model of sleeping gesture conversion and dynamic pressure is facilitated, so that the practicability of monitoring the sleeping gesture in the home environment of the sleeping gesture recognition is further improved.
Specifically, when the sleep gesture analysis module finishes classifying the real-time sleep gesture, the sleep gesture analysis module transmits all the sleep gesture data of the real-time sleep gesture to the storage module, adjusts the first sleep gesture threshold and the second sleep gesture threshold, and applies the adjusted first sleep gesture threshold and second sleep gesture threshold when the next real-time sleep gesture is classified.
The accuracy of the sleeping gesture recognition is continuously adjusted in a mode of continuously perfecting the sleeping gesture data, so that the practicality of monitoring the sleeping gesture in the household environment of the sleeping gesture recognition is further improved while the accuracy of the sleeping gesture recognition is effectively improved.
Taking sleeping posture monitoring based on classical CNN and suitable for graph classification as an example, the lightweight convolutional neural network architecture:
the Convolutional Neural Network (CNN) has the characteristics of strong robustness, high fault tolerance, high recognition precision and the like in terms of image classification, but the traditional neural network can generate a network degradation phenomenon along with the increase of network depth, so that the integration of linear characteristics and nonlinear characteristics can be realized through a ResNet residual error module, and the gradient elimination and gradient explosion phenomenon caused by the increase of the network in the convolutional neural network are solved.
Referring to fig. 4, a schematic structural diagram of a typical residual module according to an embodiment of the invention is shown:
the residual network formula is represented by formula (1):
H(x)=F(x)+x (1)
where x is the input, H (x) is the post-summation network map, and F (x) is the pre-summation network map.
In a typical residual network, the tactile image features are extracted using two 3×3 convolution kernels, the convolution being represented by formula (2):

x_j^l = f( Σ_{i∈M_j} x_i^{l−1} * k_{ij}^l + b_j^l )  (2)

wherein x_j^l is the j-th neuron of the l-th layer; M_j is the set of input feature maps; k_{ij}^l is the weight value; b_j^l is the bias value; f(·) is the activation function.
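Formula (1) and the two 3×3 convolutions can be illustrated for a single-channel map in NumPy (a hedged sketch, not the patented implementation; biases and the BN layer are omitted):

```python
import numpy as np

def conv3x3(x, k):
    """'Same'-padded 2-D cross-correlation of a single-channel map with a 3×3 kernel."""
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def residual_block(x, k1, k2):
    """H(x) = F(x) + x with F built from two 3×3 convolutions and a ReLU."""
    relu = lambda t: np.maximum(t, 0.0)
    f = conv3x3(relu(conv3x3(x, k1)), k2)   # F(x)
    return f + x                            # identity shortcut
```

With all-zero kernels F(x) vanishes and H(x) reduces to the identity mapping, which is exactly the property that lets deep residual networks avoid degradation.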
ReLU is the activation function in the network; it can fit nonlinear features and learn complex relationships in the data, and is represented by formula (3):

f(x) = max(0, x)  (3)
the BN layer is added between the convolution layer and the ReLU activation function, so that data distribution can be mapped to a determined space, and the problem of internal variable offset is solved;
it can be appreciated that, to prevent overfitting, accelerate model training, improve training accuracy and make the trained deep model more stable, the BN layer performs a batch normalization operation, which proceeds sequentially according to formulas (4) to (7):

μ_β = (1/m) Σ_{i=1}^{m} x_i  (4)

σ_β² = (1/m) Σ_{i=1}^{m} (x_i − μ_β)²  (5)

x̂_i = (x_i − μ_β) / √(σ_β² + ε)  (6)

y_i = γ·x̂_i + β  (7)

wherein x_i is the training data of the batch, μ_β is the mean of the batch data, σ_β² is the variance of the batch, x̂_i is the normalized data, ε is a tiny positive number, γ is a size factor, β is a translation factor, and γ and β are obtained through model training.
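Formulas (4) to (7) can be checked with a small NumPy implementation (training-time statistics only; the running averages used at inference are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization over the batch axis, per formulas (4)-(7)."""
    mu = x.mean(axis=0)                       # (4) batch mean μ_β
    var = x.var(axis=0)                       # (5) batch variance σ_β²
    x_hat = (x - mu) / np.sqrt(var + eps)     # (6) normalization
    return gamma * x_hat + beta               # (7) scale γ and shift β
```

After normalization each feature has zero mean over the batch, which keeps the input distribution of the ReLU stable across training steps.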
The lightweight CNN network is constructed from ResNet-18; the improved recognition network framework is shown in FIG. 4, and the lightweight network is named ResNet-mini, which improves on ResNet-18 mainly in the following aspects:
to accommodate the low-resolution tactile image and better extract detailed information, the initial 7×7 convolution kernel is changed to a 3×3 size, and the step size is changed from 2 to 1;
simplifying the multi-layer residual structure of ResNet-18 into two residuals;
dropout operation is added between the two residual blocks, neurons are randomly discarded, and network overfitting is prevented;
in order to meet the classification requirement and the embedded application requirement of the single-frame tactile image, the network input layer is replaced by the single-frame tactile image;
in order to further simplify the model and improve the running speed of the model, the number of convolution kernels is correspondingly reduced, specifically: the number of convolution kernels of the first residual block is changed to 32, the number of convolution kernels of the second residual block is changed to 64, and the number of convolution kernels of the 1×1 block is changed to 128.
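Under the modifications listed above, a rough per-stage parameter count for ResNet-mini can be sketched as follows (hypothetical: biases are included, BN layers and any shortcut projections are ignored):

```python
def conv_params(c_in, c_out, k):
    """Weights plus one bias per output channel for a k×k convolution layer."""
    return c_out * (c_in * k * k + 1)

def resnet_mini_param_sketch():
    """Illustrative stage-by-stage parameter counts for the modified network."""
    return {
        "stem_3x3": conv_params(1, 32, 3),     # initial 3×3 conv, stride 1, single-frame input
        "res1": 2 * conv_params(32, 32, 3),    # first residual block, 32 kernels
        "res2": conv_params(32, 64, 3)
                + conv_params(64, 64, 3),      # second residual block, 64 kernels
        "conv_1x1": conv_params(64, 128, 1),   # 1×1 block, 128 kernels
    }
```

Even with this back-of-the-envelope count the model stays well under a hundred thousand convolution parameters, which is what makes embedded deployment plausible.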
Training six sleeping postures by using an algorithm based on a lightweight convolutional neural network to obtain a classification model: firstly, a network structure is constructed based on the convolutional neural network; then the collected experimental data are used as a sleeping posture data set, the labels of the six sleeping postures are respectively set as y_i, and the feature vectors are combined with the corresponding labels to obtain a sleeping posture sample training set; the training set is taken as the input of the neural network, classification models of different sleeping postures are obtained through training, and the classification models are saved as classification operators to be directly used for sleeping posture classification and recognition.
And displaying the pressure data acquired in real time on a front-end interface of the system in real time, repeatedly collecting sleeping gesture information, continuously identifying sleeping gesture images acquired in real time by using a classification operator obtained through training, generating a log record of the identification result, and realizing long-time monitoring of the sleeping gesture of the human body.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.
The foregoing description is only of the preferred embodiments of the invention and is not intended to limit the invention; various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A sleeping posture monitoring method based on a lightweight convolutional neural network, characterized by comprising the following steps:
setting pre-classified sleeping gesture categories, collecting a plurality of sleeping gesture samples under each sleeping gesture category, and when a plurality of samples of a single sleeping gesture category are obtained, collecting pressure distribution by using a first sensor array arranged on a collecting surface of each sleeping gesture sample of the single sleeping gesture category by using a collecting module so as to form first sleeping gesture data corresponding to the single sleeping gesture sample, and storing each first sleeping gesture data;
the acquisition module acquires second sleeping posture data corresponding to the first sleeping posture data by using a second sensor array distributed below the acquisition surface, and stores the second sleeping posture data;
the sleeping posture pre-analysis module reads second sleeping posture data of each sleeping posture sample, and performs sleeping posture mapping on the pressure distribution on the acquisition surface according to the first sleeping posture data so as to form sleeping posture correction data;
the sleeping gesture pre-analysis module processes the sleeping gesture correction data in a first preset processing mode to form first sleeping gesture image data, and processes the first sleeping gesture data and the first sleeping gesture image data in a second preset processing mode to form second sleeping gesture image data;
the sleeping gesture analysis module utilizes the first sleeping gesture image data and the second sleeping gesture image data of each sleeping gesture sample to construct a preset network framework, and utilizes the preset network framework to respectively extract characteristics of each sleeping gesture category so as to obtain category characteristics under each sleeping gesture category;
the sleeping gesture analysis module controls the acquisition module to acquire real-time sleeping gestures, and performs sleeping gesture category recognition on the real-time sleeping gestures which are stored according to the category characteristics;
storing a sleeping gesture category result for real-time sleeping gesture recognition and real-time sleeping gesture acquisition time to form sleeping gesture monitoring data;
the first sleeping posture data are pressure data generated by the sleeping posture sample on the acquisition surface, the second sleeping posture data are deformation factors generated by the sleeping posture sample on the acquisition surface, and the second sleeping posture data are matched with the first sleeping posture data on corresponding coordinates of the acquisition surface;
the first preset processing mode is to carry out imaging processing on the pressure data, the second preset processing mode is to process the pressure distribution data through inversion, local equalization, sleeping gesture segmentation and morphological denoising, the first sleeping gesture image data is a sleeping gesture pressure image of a sleeping gesture, the second sleeping gesture image data is a sleeping gesture characteristic image of the sleeping gesture,
the sleeping gesture mapping is deformation correction of the acquisition surface which is deformed through the second sleeping gesture data;
the sleeping gesture pre-analysis module is provided with a minimum mapping pressure threshold value and a maximum breaking pressure threshold value, and if a single acquisition point in the acquisition surface is deformed, the sleeping gesture pre-analysis module judges that the acquisition point is a deformed acquisition point;
if the first sleeping posture data corresponding to the deformation acquisition point is not greater than a minimum mapping pressure threshold value, the sleeping posture pre-analysis module judges that the deformation acquisition point is not compensated;
if the first sleeping posture data corresponding to the deformation acquisition point is larger than a minimum mapping pressure threshold and is not larger than a maximum damage pressure threshold, the sleeping posture pre-analysis module judges that the deformation acquisition point is compensated in a first preset mapping mode;
if the first sleeping posture data corresponding to the deformation acquisition point is larger than the maximum breaking pressure threshold, the sleeping posture pre-analysis module judges that the deformation acquisition point is compensated in a second preset mapping mode, and compensates the adjacent points around the deformation acquisition point in the first preset mapping mode;
the first preset mapping mode is a deformation compensation mode set by the sleeping gesture pre-analysis module, and the second preset mapping mode is a deformation compensation value set by the sleeping gesture pre-analysis module;
the preset network framework is a light convolutional neural network sleeping gesture recognition network framework.
2. The sleeping posture monitoring method based on the lightweight convolutional neural network according to claim 1, wherein the acquisition module acquires sleeping posture samples under a preliminary acquisition condition, and for single sleeping posture sample acquisition, the method comprises pressure data of a plurality of positions and position coordinate data corresponding to each pressure data; the acquisition module collects reference data when each pressure sensor identifies that the sleeping posture bearing side of the acquisition surface is empty;
the acquisition module is provided with a plurality of pressure sensors on the sleeping gesture bearing side and is used for collecting pressure generated by each sleeping gesture and corresponding position coordinates of the pressure;
the pressure acquisition period is that the acquisition module reads the pressure data acquired by each pressure sensor with the preset time period as a period;
the reference data is pressure data of the sleeping posture bearing side which is not in a pressed state and is related to the dead weight of the acquisition surface.
3. The sleeping posture monitoring method based on the lightweight convolutional neural network according to claim 2, wherein the pressure sensors are arranged on the lower surface of a bed sheet or the upper surface of a bed body, and when the single sleeping posture sample applies pressure to the bed, the relative positions of the pressure sensors to the bed sheet do not move;
the number of the pressure sensors is at least 9, and the largest area surrounded by each pressure sensor covers the projection area of each sleeping posture sample on the surface of the bed.
4. The sleeping posture monitoring method based on the lightweight convolutional neural network according to claim 3, wherein under the condition of collection, the collection module records the sleeping posture data corresponding to each sleeping posture sample, and transmits the data to a storage module to record each sleeping posture data with the preset duration as a period, and records the recording time of the corresponding sleeping posture data to form time stamp sleeping posture data;
and the acquisition and collection conditions are that the acquisition module reads pressure readings of the pressure sensors bearing sleeping posture pressure in the current sleeping posture state.
5. The sleeping posture monitoring method based on the lightweight convolutional neural network according to claim 4, wherein under a sleeping posture analysis condition, the sleeping posture pre-analysis module reads the time stamp sleeping posture data in the storage module, performs difference calculation on the pressure data corresponding to each time stamp sleeping posture data and the reference data to form paired correction pressure data, and forms a sleeping posture pressure image under a single time stamp according to the position of each pressure sensor corresponding to the same time stamp and the corresponding correction pressure data;
wherein the difference is calculated as the difference between the pressure data and the reference data;
and the sleeping gesture analysis condition is that the acquisition module collects the reference data.
6. The sleeping posture monitoring method based on the lightweight convolutional neural network according to claim 5, wherein, under the sleeping posture simplification condition, the sleeping posture analysis module converts the sleeping posture pressure image into the sleeping posture feature image using the preset network framework;
and the sleeping posture simplification condition is that the sleeping posture analysis module has formed the sleeping posture pressure image.
7. The sleeping posture monitoring method based on the lightweight convolutional neural network according to claim 6, wherein, when the preset network framework is constructed, the projection features of a single sleeping posture feature image on a projection plane comprise three dimensions: the horizontal abscissa, the horizontal ordinate, and the pressure perpendicular to the plane of the upper surface of the bed body; the sleeping posture analysis module vectorizes the sleeping posture feature image along each of these dimensions to form a sleeping posture feature function;
wherein the projection plane is the plane of the upper surface of the bed body, and the sleeping posture feature function has a real solution at every point of the projection plane.
8. The sleeping posture monitoring method based on the lightweight convolutional neural network according to claim 7, wherein the sleeping posture analysis module sorts the sleeping posture feature images when the preset network framework is constructed, and a corresponding threshold interval is set for the sleeping posture feature images of each single sleeping posture category;
when the real solution of the sleeping posture feature function on the projection plane lies within the threshold interval corresponding to a single sleeping posture category, the sleeping posture analysis module marks the sleeping posture feature function as a category feature of the corresponding sleeping posture category;
wherein the threshold interval is the closed interval determined by a first sleeping posture threshold and a second sleeping posture threshold, the first sleeping posture threshold being smaller than the second sleeping posture threshold.
9. The sleeping posture monitoring method based on the lightweight convolutional neural network according to claim 8, wherein, when the sleeping posture analysis module has finished acquiring the category features of every sleeping posture category, it records the classified sleeping posture feature functions and forms a sleeping posture classification model;
the sleeping posture feature function of a single sleeping posture classification model serves as a classification operator to identify the sleeping posture category of real-time sleeping posture data;
identifying the sleeping posture category means that the sleeping posture analysis module classifies the sleeping posture feature image according to the pre-classified sleeping posture categories;
and when the sleeping posture analysis module has finished classifying a real-time sleeping posture, it transmits all sleeping posture data of that real-time sleeping posture to the storage module, adjusts the first sleeping posture threshold and the second sleeping posture threshold, and applies the adjusted first and second sleeping posture thresholds when the next real-time sleeping posture is classified.
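Claim 9 states that both thresholds are adjusted after each real-time classification but does not give the update rule. One plausible sketch is a small step toward observations that fall outside the current interval; the rule, the rate, and the function name are all hypothetical:

```python
def adjust_thresholds(first, second, observed, rate=0.1):
    """Nudge the closed interval [first, second] toward a newly classified
    observation. Hypothetical update: the patent only states that both
    thresholds are adjusted and applied to the next classification."""
    if observed < first:
        first += rate * (observed - first)    # lower the first threshold
    elif observed > second:
        second += rate * (observed - second)  # raise the second threshold
    return first, second
```

Because each endpoint only moves toward an out-of-interval observation, the invariant of claim 8 (first threshold smaller than second threshold) is preserved across updates.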
CN202310437342.9A 2023-04-21 2023-04-21 Sleeping posture monitoring method based on lightweight convolutional neural network Active CN116563887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310437342.9A CN116563887B (en) 2023-04-21 2023-04-21 Sleeping posture monitoring method based on lightweight convolutional neural network

Publications (2)

Publication Number Publication Date
CN116563887A CN116563887A (en) 2023-08-08
CN116563887B true CN116563887B (en) 2024-03-12

Family

ID=87493832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310437342.9A Active CN116563887B (en) 2023-04-21 2023-04-21 Sleeping posture monitoring method based on lightweight convolutional neural network

Country Status (1)

Country Link
CN (1) CN116563887B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116746914B (en) * 2023-08-14 2023-11-10 北京领创医谷科技发展有限责任公司 User gesture determining method and device, electronic equipment and storage medium
CN117671739B (en) * 2024-02-01 2024-05-07 爱梦睡眠(珠海)智能科技有限公司 User identity recognition method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330352A (en) * 2016-08-18 2017-11-07 河北工业大学 Sleeping position pressure image-recognizing method based on HOG features and machine learning
CN108244874A (en) * 2018-02-14 2018-07-06 深圳市三分之睡眠科技有限公司 Automatic adjusting bed and its adjusting method
CN111067537A (en) * 2019-11-11 2020-04-28 珠海格力电器股份有限公司 Sleeping posture monitoring method, monitoring terminal and storage medium
KR20200069776A (en) * 2018-12-07 2020-06-17 가천대학교 산학협력단 Analysis apparatus for sleep posture and method thereof
CN111353425A (en) * 2020-02-28 2020-06-30 河北工业大学 Sleeping posture monitoring method based on feature fusion and artificial neural network
CN112869710A (en) * 2021-01-19 2021-06-01 惠州市金力智能科技有限公司 Bed with sleeping and physical therapy functions
CN113273998A (en) * 2021-07-08 2021-08-20 南京大学 Human body sleep information acquisition method and device based on RFID label matrix
CN113456061A (en) * 2021-06-16 2021-10-01 南京润楠医疗电子研究院有限公司 Sleep posture monitoring method and system based on wireless signals
CN113688720A (en) * 2021-08-23 2021-11-23 安徽农业大学 Neural network recognition-based sleeping posture prediction method
CN114998229A (en) * 2022-05-23 2022-09-02 电子科技大学 Non-contact sleep monitoring method based on deep learning and multi-parameter fusion

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111493584A (en) * 2019-01-31 2020-08-07 绿样实业股份有限公司 Bed device and method for automatically adjusting bed surface based on sleeping posture
KR20210040626A (en) * 2019-10-04 2021-04-14 엘지전자 주식회사 Apparatus and method for detecting posture using artificial intelligence

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
i-Sleep: Intelligent Sleep Detection System for Analyzing Sleep Behavior; Dhamchatsoontree, S. et al.; 2019 4th International Conference on Information Technology (InCIT), Proceedings; pp. 144-148 *
Sleep posture recognition based on ballistocardiogram signals; Zhang Yichao, Yuan Zhenming, Sun Xiaoyan; Computer Engineering and Applications (17); pp. 135-140 *
A discussion of sleep posture recognition methods based on bioelectrical impedance technology; Xu Huan, Zhang Ping; China Medical Devices (06); pp. 39-44 *
A high-risk sleep posture monitoring system based on a Bayesian classifier; Huang Yiqin, Hu Jiaxin, Jiang Jiabin, Zhang Zhen; Techniques of Automation and Applications (09); pp. 108-110 *

Similar Documents

Publication Publication Date Title
CN116563887B (en) Sleeping posture monitoring method based on lightweight convolutional neural network
Sun et al. PerAE: an effective personalized AutoEncoder for ECG-based biometric in augmented reality system
CN113052113B (en) Depression identification method and system based on compact convolutional neural network
CN107133612A (en) Based on image procossing and the intelligent ward of speech recognition technology and its operation method
CN109993068B (en) Non-contact human emotion recognition method based on heart rate and facial features
CN111466878A (en) Real-time monitoring method and device for pain symptoms of bedridden patients based on expression recognition
CN110251079A (en) A kind of sufferer pain detection method and system for mobile device
CN111695520A (en) High-precision child sitting posture detection and correction method and device
CN110693510A (en) Attention deficit hyperactivity disorder auxiliary diagnosis device and using method thereof
CN113116363A (en) Method for judging hand fatigue degree based on surface electromyographic signals
CN111063438B (en) Sleep quality evaluation system and method based on infrared image sequence
CN112257559A (en) Identity recognition method based on gait information of biological individual
CN108962379A (en) A kind of mobile phone assisted detection system of cerebral nervous system disease
CN115909438A (en) Pain expression recognition system based on depth time-space domain convolutional neural network
CN113887374B (en) Brain control water drinking system based on dynamic convergence differential neural network
CN113070875A (en) Manipulator control method and device based on brain wave recognition
CN109034079B (en) Facial expression recognition method for non-standard posture of human face
Visell et al. Learning constituent parts of touch stimuli from whole hand vibrations
CN108319368A (en) A kind of wearable AI action learning systems
CN112545535B (en) Sleep-wake cycle analysis method based on amplitude integrated electroencephalogram
CN114916928B (en) Human body posture multichannel convolutional neural network detection method
CN117671774B (en) Face emotion intelligent recognition analysis equipment
CN115062704A (en) Sleeping posture identification method based on deep migration learning
CN117475505A (en) Sleeping gesture recognition method based on dark quilt environment
CN112530553A (en) Method and device for estimating interaction force between soft tissue and tool

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant