CN117958804B - IMU signal-based sleeping gesture recognition method, system, medium and computer - Google Patents


Info

Publication number
CN117958804B
CN117958804B (application CN202410037450.1A)
Authority
CN
China
Prior art keywords
axis
signal
angular velocity
triaxial
omega
Prior art date
Legal status (assumption, not a legal conclusion)
Active
Application number
CN202410037450.1A
Other languages
Chinese (zh)
Other versions
CN117958804A
Inventor
何春华
刘水彬
方泽文
林靖
张建文
Current Assignee (listed assignee may be inaccurate)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202410037450.1A priority Critical patent/CN117958804B/en
Publication of CN117958804A publication Critical patent/CN117958804A/en
Application granted granted Critical
Publication of CN117958804B publication Critical patent/CN117958804B/en


Abstract

The application relates to a sleep-posture recognition method, system, medium and computer based on IMU signals, and provides a sleep-posture detection method realized with non-wearable, non-intrusive equipment. The method uses motion signals from the body, including respiratory signals, heartbeat signals and noise: because different sleep postures place the heart at different distances from the sensor, the energy of the generated signals differs, so the sleep posture can be characterized by the instantaneous energy of acceleration and angular velocity, and in combination with a deep learning model the recognition accuracy reaches 99.72%. Compared with non-contact equipment such as radar, the method is simple to install and debug, and its recognition accuracy is essentially equivalent to that of the aforementioned means.

Description

IMU signal-based sleeping gesture recognition method, system, medium and computer
Technical Field
The invention relates to the technical field of sleeping gesture detection, in particular to a sleeping gesture recognition method, system, medium and computer based on IMU signals.
Background
High-quality sleep is important for cognitive function, emotional and psychological well-being, and cardiovascular, cerebrovascular and metabolic health. The American Academy of Sleep Medicine (AASM), the Sleep Research Society (SRS) and the National Sleep Foundation (NSF) recommend that the average adult sleep 7 hours or more per night to promote optimal health. However, baseline survey data from the U.S. Centers for Disease Control and Prevention (CDC) and the Maternal and Child Health Bureau (MCHB) indicate that more than one-third of people do not get sufficient sleep, and about one-fourth suffer from sleep disorders. Poor sleep quality is often caused by sleep disorders, including hypopnea, insomnia and obstructive sleep apnea (OSA), which can have profound deleterious effects on physical health, mental well-being, mood and public safety. Studies have shown that OSA is closely related to sleep posture. Detecting sleep posture in real time is therefore significant.
Different sleep postures have a significant impact on sleep health, especially on breathing and ventilation during sleep. For example, a supine position may lead to OSA because the upper airway often partially or fully collapses during sleep, and a prone posture may increase cardiopulmonary pressure, reducing sleep quality. These problems can be improved or avoided by a side-lying posture, so sleep-posture recognition is very important. Detection methods for sleep posture fall mainly into two kinds: contact and non-contact.
For non-contact detection, optical, infrared and depth cameras may be used. Although the detection accuracy of optical or infrared cameras is high, they raise serious privacy concerns. A depth camera can recognize sleep posture at night without visible light; although it protects privacy, it is costly. Doppler radar, FMCW, IR-UWB and millimeter-wave radar sensors can also detect sleep posture, but share the drawbacks above. For contact detection, conductive strain sensors, fiber-optic strain sensors and force sensors may be employed, with similar drawbacks. A pressure-sensor array embedded in the mattress can also monitor sleep posture with high accuracy, but such products are complex and expensive. Because of their low cost and high performance, IMUs are widely used for detecting sleep posture and are typically integrated into watches, wrist bands and chest bands. However, worn products may cause discomfort, so non-wearable products are becoming a trend. Meanwhile, flexible sensors that incorporate an IMU for sleep-posture detection are becoming increasingly popular.
In the prior art, a wearable flexible sleep-posture monitoring device has been developed that combines a flexible sensor with a 6-axis inertial measurement unit and can monitor the subject's posture changes, turnover angle and acceleration during sleep in real time. Flexible angle sensors are attached to the left and right waist and the upper and lower spine of the subject to monitor posture changes: when the posture changes, the flexible angle sensor bends and deforms, and the conductivity of its sensitive-grid material changes with the deformation, causing fluctuations in the output measurement signal. Meanwhile, the 6-axis inertial measurement unit is tied over the navel with a waist bag and oriented along the three body axes of the subject, the sensor's X axis serving as the rotation axis when the subject turns over. When the analog output waveforms of the flexible angle sensor and the inertial measurement unit fluctuate strongly at the same time, whether the subject's sleep posture has changed is judged with reference to each subject's posture-change data. However, this device still has drawbacks: combining the flexible sensor with the 6-axis inertial measurement unit makes it bulky, and because the flexible angle sensors are attached to the waist and spine and the inertial measurement unit is bound over the navel with a waist bag, the device is constraining and offers a poor user experience, very easily affecting the user's sleep quality. Furthermore, the user may experience a first-night effect due to the discomfort of wearing the flexible sensor.
Thus, to achieve high performance with a non-wearable, comfortable experience, a sleep monitoring belt based on a MEMS tri-axial accelerometer and gyroscope was developed that can detect heart and respiratory rate, body movements, out-of-bed and snoring events, and can further be applied to estimate sleep stages and sleep quality. On this basis, a sleep-posture recognition method is provided in which the instantaneous signal energy of an inertial measurement unit (IMU) is used as the feature, a CNN sleep-posture detection model is established, and four postures are recognized: supine, left side-lying, right side-lying and prone.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a sleep-posture recognition method, system, medium and computer based on IMU signals, to overcome the drawbacks that wearable sleep-posture detection equipment affects sleeping comfort while non-wearable equipment is difficult to install and debug.
The technical aim of the invention is realized by the following technical scheme: the sleeping gesture recognition method based on the IMU signal comprises the following steps:
s1, acquiring a triaxial acceleration signal and a triaxial angular velocity signal of a subject in a sleep state in a preset time by using a flexible sleep monitoring belt;
S2, preprocessing the triaxial acceleration signals and triaxial angular velocity signals to correspondingly obtain triaxial enhanced acceleration signals and triaxial enhanced angular velocity signals;
S3, respectively carrying out feature extraction on the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal to obtain a corresponding triaxial acceleration feature matrix and a triaxial angular velocity feature matrix; connecting the triaxial acceleration characteristic matrix with the triaxial angular velocity characteristic matrix to obtain a corresponding instantaneous energy characteristic matrix;
s4, inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification, and obtaining a corresponding sleep posture classification result.
Optionally, the acquiring, by using the flexible sleep monitoring belt, the triaxial acceleration signal and the triaxial angular velocity signal of the subject in the sleep state in a predetermined time includes:
collecting a triaxial acceleration signal and a triaxial angular velocity signal of the subject in a sleep state within 6 s by using the flexible sleep monitoring belt;
The triaxial acceleration signal includes: an X-axis acceleration signal, a Y-axis acceleration signal, and a Z-axis acceleration signal;
the triaxial angular velocity signal includes: an ωx-axis angular velocity signal, an ωy-axis angular velocity signal and an ωz-axis angular velocity signal.
Optionally, preprocessing the triaxial acceleration signal and the triaxial angular velocity signal to correspondingly obtain the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal includes:
using an HPF (first-order high-pass filter) to filter the X-axis acceleration signal, the Y-axis acceleration signal, the Z-axis acceleration signal, the ωx-axis angular velocity signal, the ωy-axis angular velocity signal and the ωz-axis angular velocity signal respectively, enhancing the high-frequency part and attenuating the low-frequency part to reduce the influence of motion artifacts, thereby correspondingly obtaining an X-axis enhanced acceleration signal, a Y-axis enhanced acceleration signal, a Z-axis enhanced acceleration signal, an ωx-axis enhanced angular velocity signal, an ωy-axis enhanced angular velocity signal and an ωz-axis enhanced angular velocity signal;
the time-domain formula of the HPF first-order high-pass filter is:

y[i] = (1 − kp)·(y[i−1] + x[i] − x[i−1])

where i represents the i-th moment; x[i] represents the input signal; y[i] represents the output signal; and kp denotes the filter factor determining the cut-off frequency, calculated as:

kp = 2πf / fs

where f is the cut-off frequency, 0.5 Hz; fs is the sampling frequency, 128 Hz; and kp is accordingly 0.0245.
Optionally, performing feature extraction on the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal respectively to obtain the corresponding triaxial acceleration feature matrix and triaxial angular velocity feature matrix, and connecting the triaxial acceleration feature matrix with the triaxial angular velocity feature matrix to obtain the corresponding instantaneous energy feature matrix, includes:
taking 1 second as a segment, respectively framing the X-axis enhanced acceleration signal, the Y-axis enhanced acceleration signal, the Z-axis enhanced acceleration signal, the ωx-axis enhanced angular velocity signal, the ωy-axis enhanced angular velocity signal and the ωz-axis enhanced angular velocity signal to correspondingly obtain six X-axis sub-signals, six Y-axis sub-signals, six Z-axis sub-signals, six ωx-axis sub-signals, six ωy-axis sub-signals and six ωz-axis sub-signals;
adopting the FFT (fast Fourier transform) to convert each X-axis sub-signal into a corresponding X-axis frequency-domain signal, each Y-axis sub-signal into a corresponding Y-axis frequency-domain signal, each Z-axis sub-signal into a corresponding Z-axis frequency-domain signal, each ωx-axis sub-signal into a corresponding ωx-axis frequency-domain signal, each ωy-axis sub-signal into a corresponding ωy-axis frequency-domain signal, and each ωz-axis sub-signal into a corresponding ωz-axis frequency-domain signal; the FFT formula is:

Y[k] = Σ_{i=0}^{N−1} y[i]·e^(−j2πik/N), k = 0, 1, …, N−1

where N is the length of the input vector, equal to the number of samples in 1 second (N = 128); y[i] represents the i-th sampling point of the time-domain signal; and Y[k] represents the k-th discrete frequency component;
calculating, by summing the squares of the elements of the frequency-domain amplitude vector, the X-axis instantaneous energy corresponding to each X-axis frequency-domain signal; respectively calculating the Y-axis instantaneous energy corresponding to each Y-axis frequency-domain signal; respectively calculating the Z-axis instantaneous energy corresponding to each Z-axis frequency-domain signal; and respectively calculating the ωx-axis, ωy-axis and ωz-axis instantaneous energies corresponding to each ωx-axis, ωy-axis and ωz-axis frequency-domain signal; the sum-of-squares formula is:

q = Σ_k abs(Y[k])²

where abs takes the modulus and q represents the instantaneous energy over the 0–64 Hz frequency band;
taking the six X-axis instantaneous energies, six Y-axis instantaneous energies, six Z-axis instantaneous energies, six ωx-axis instantaneous energies, six ωy-axis instantaneous energies and six ωz-axis instantaneous energies as the elements of a matrix, an instantaneous energy feature matrix of size 6×6 is established.
Optionally, inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification, to obtain a corresponding sleep pose classification result, including:
outputting four classification results, namely supine, left side-lying, right side-lying and prone.
Optionally, inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification, to obtain a corresponding sleep pose classification result, and further including:
performing a first convolution on the instantaneous energy feature matrix to obtain a corresponding first tensor C1, wherein the size of the first convolution kernel is 3×3, the number of first convolution kernels is 32, and the activation function of the first convolution is ReLU; the size of the first tensor C1 is 4×4×32;
performing a first pooling on the first tensor C1 to obtain a corresponding second tensor A1, wherein the size of the first pooling kernel is 2×2, the stride of the first pooling is 2, and the activation function of the first pooling is ReLU; the size of the second tensor A1 is 2×2×32;
performing a second convolution on the second tensor A1 to obtain a corresponding third tensor C2, wherein the size of the second convolution kernel is 1×1, the number of second convolution kernels is 64, and the activation function of the second convolution is ReLU; the size of the third tensor C2 is 2×2×64;
performing a second pooling on the third tensor C2 to obtain a fourth tensor A2, wherein the size of the second pooling kernel is 2×2, the stride of the second pooling is 2, and the activation function of the second pooling is ReLU; the size of the fourth tensor A2 is 1×1×64;
performing a first full connection on the fourth tensor A2 to obtain a corresponding fifth tensor D1, wherein the number of nodes of the first fully connected layer is 32 and the activation function of the first full connection is ReLU; the size of the fifth tensor D1 is 1×32;
regularizing the fifth tensor D1 to correspondingly obtain a sixth tensor D2;
performing a second full connection on the sixth tensor D2 to obtain a corresponding seventh tensor D3, wherein the activation function of the second full connection is Softmax; the size of the seventh tensor D3 is 1×4; the four elements of the seventh tensor D3 represent the probabilities corresponding to supine, left side-lying, right side-lying and prone, respectively.
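The tensor sizes listed above can be checked with a minimal NumPy forward pass. This is a sketch with random weights, not the trained model; `conv2d_valid` and `maxpool2x2` are illustrative helper names, and NumPy availability is assumed:

```python
import numpy as np

def conv2d_valid(x, kernels):
    """'Valid' 2-D convolution followed by ReLU; x: (H, W, C_in), kernels: (kh, kw, C_in, C_out)."""
    kh, kw, _, cout = kernels.shape
    H, W, _ = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dot product of the patch with every kernel, then ReLU
            out[i, j] = np.maximum(np.tensordot(x[i:i + kh, j:j + kw, :], kernels, axes=3), 0)
    return out

def maxpool2x2(x):
    """2x2 max pooling with stride 2; x: (H, W, C) with H, W even."""
    H, W, C = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, C).max(axis=(1, 3))

rng = np.random.default_rng(1)
x  = rng.standard_normal((6, 6, 1))                          # 6x6 instantaneous energy matrix
c1 = conv2d_valid(x, rng.standard_normal((3, 3, 1, 32)))     # C1: 4x4x32
a1 = maxpool2x2(c1)                                          # A1: 2x2x32
c2 = conv2d_valid(a1, rng.standard_normal((1, 1, 32, 64)))   # C2: 2x2x64
a2 = maxpool2x2(c2)                                          # A2: 1x1x64
d1 = np.maximum(a2.reshape(-1) @ rng.standard_normal((64, 32)), 0)  # D1: 32 nodes, ReLU
z  = d1 @ rng.standard_normal((32, 4))                       # D3 logits: 4 classes
probs = np.exp(z - z.max()); probs /= probs.sum()            # Softmax over the 4 postures
print(c1.shape, a1.shape, c2.shape, a2.shape, probs.shape)
```

Each stage reproduces the sizes stated above (4×4×32 → 2×2×32 → 2×2×64 → 1×1×64 → 32 → 4), confirming that the 6×6 input is consistent with valid-padding convolutions and stride-2 pooling.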
Optionally, the sleeping gesture with the highest probability is selected as the output result of the model.
A sleep pose recognition system based on IMU signals, comprising:
a signal detection module, comprising a flexible sleep monitoring belt, for acquiring the triaxial acceleration signal and triaxial angular velocity signal of a subject in a sleep state within a predetermined time;
a signal preprocessing module, for preprocessing the triaxial acceleration signal and triaxial angular velocity signal to correspondingly obtain the triaxial enhanced acceleration signal and triaxial enhanced angular velocity signal;
a feature extraction module, for performing feature extraction on the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal respectively to obtain the corresponding triaxial acceleration feature matrix and triaxial angular velocity feature matrix, and connecting the triaxial acceleration feature matrix with the triaxial angular velocity feature matrix to obtain the corresponding instantaneous energy feature matrix;
a sleep posture prediction module, for inputting the instantaneous energy feature matrix into the pre-trained convolutional neural network model for classification to obtain the corresponding sleep posture classification result.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method described above.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when the processor executes the computer program.
In summary, the application has the following beneficial effects: it provides a sleep-posture detection method realized with non-wearable, non-intrusive equipment, which uses motion signals from the body, including respiratory signals, heartbeat signals and noise. Because different sleep postures place the heart at different distances from the sensor, the energy of the generated signals differs, so the sleep posture can be characterized by the instantaneous energy of acceleration and angular velocity, and in combination with a deep learning model the recognition accuracy reaches 99.72%. Compared with non-contact equipment such as radar, the method is simple to install and debug, and its recognition accuracy is essentially equivalent to that of the aforementioned means.
Drawings
FIG. 1 is a flow chart of a sleep position identification method based on IMU signals of the present invention;
FIG. 2 is a block diagram of a sleep position recognition system based on IMU signals of the present invention;
FIG. 3 is an internal block diagram of a computer device in accordance with an embodiment of the present invention;
FIG. 4 is a schematic perspective view of a flexible sleep monitoring belt according to the present invention;
FIG. 5 is a block diagram of the internal circuitry of the flexible sleep monitoring strip of the present invention;
FIG. 6 is a schematic diagram of the internal circuitry of the flexible sleep monitoring strip of the present invention;
FIG. 7 is a schematic circuit diagram of an inertial sensor system of the present invention;
FIG. 8 is a schematic diagram of a transient energy feature matrix of the present invention;
fig. 9 is a schematic structural diagram of a convolutional neural network model of the present invention.
In the figures: 1. signal detection module; 2. signal preprocessing module; 3. feature extraction module; 4. sleep posture prediction module.
Detailed Description
In order that the objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Several embodiments of the invention are presented in the figures. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances. The terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature.
In the present invention, unless expressly stated or limited otherwise, a first feature "above" or "below" a second feature may include both the first and second features being in direct contact, as well as the first and second features not being in direct contact but being in contact with each other through additional features therebetween. Moreover, a first feature being "above," "over" and "on" a second feature includes the first feature being directly above and obliquely above the second feature, or simply indicating that the first feature is higher in level than the second feature. The first feature being "under", "below" and "beneath" the second feature includes the first feature being directly under and obliquely below the second feature, or simply means that the first feature is less level than the second feature. The terms "vertical," "horizontal," "left," "right," "up," "down," and the like are used for descriptive purposes only and are not to indicate or imply that the devices or elements being referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore should not be construed as limiting the invention.
The present invention will be described in detail below with reference to the accompanying drawings and examples.
The invention provides a sleeping gesture recognition method based on IMU signals, which is shown in figure 1 and comprises the following steps:
s1, acquiring a triaxial acceleration signal and a triaxial angular velocity signal of a subject in a sleep state in a preset time by using a flexible sleep monitoring belt;
S2, preprocessing the triaxial acceleration signals and triaxial angular velocity signals to correspondingly obtain triaxial enhanced acceleration signals and triaxial enhanced angular velocity signals;
S3, respectively carrying out feature extraction on the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal to obtain a corresponding triaxial acceleration feature matrix and a triaxial angular velocity feature matrix; connecting the triaxial acceleration characteristic matrix with the triaxial angular velocity characteristic matrix to obtain a corresponding instantaneous energy characteristic matrix;
s4, inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification, and obtaining a corresponding sleep posture classification result.
In practical application, the structure of the sleep monitoring belt is shown in fig. 4, its circuit block diagram in fig. 5, and its circuit schematics in figs. 6 and 7. An inertial sensor is built into the sleep monitoring belt; it can detect the slight vibration of the human body caused by the heartbeat, and the acceleration and angular velocity signals of this vibration are analyzed and input into a pre-trained deep learning model for classification, from which the person's sleep posture is judged. The raw IMU signal mainly comprises motion signals from the body, including respiratory signals, heartbeat signals and noise. Because different sleep postures place the heart at different distances from the sensor, the energy of the generated signals differs, so the posture can be characterized by the instantaneous energy of acceleration and angular velocity; combined with the deep learning model, the sleep-posture recognition accuracy reaches 99.72%.
Further, the method for acquiring the triaxial acceleration signal and the triaxial angular velocity signal of the subject in the sleep state by using the flexible sleep monitoring band in a preset time comprises the following steps:
collecting the triaxial acceleration signal and triaxial angular velocity signal of the subject in a sleep state within 6 s by using the flexible sleep monitoring belt;
The triaxial acceleration signal includes: an X-axis acceleration signal, a Y-axis acceleration signal, and a Z-axis acceleration signal;
the triaxial angular velocity signal includes: an ωx-axis angular velocity signal, an ωy-axis angular velocity signal and an ωz-axis angular velocity signal.
Further, preprocessing the triaxial acceleration signal and the triaxial angular velocity signal to correspondingly obtain the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal includes:
using an HPF (first-order high-pass filter) to filter the X-axis acceleration signal, the Y-axis acceleration signal, the Z-axis acceleration signal, the ωx-axis angular velocity signal, the ωy-axis angular velocity signal and the ωz-axis angular velocity signal respectively, enhancing the high-frequency part and attenuating the low-frequency part to reduce the influence of motion artifacts, thereby correspondingly obtaining an X-axis enhanced acceleration signal, a Y-axis enhanced acceleration signal, a Z-axis enhanced acceleration signal, an ωx-axis enhanced angular velocity signal, an ωy-axis enhanced angular velocity signal and an ωz-axis enhanced angular velocity signal;
the time-domain formula of the HPF first-order high-pass filter is:

y[i] = (1 − kp)·(y[i−1] + x[i] − x[i−1])

where i represents the i-th moment; x[i] represents the input signal; y[i] represents the output signal; and kp denotes the filter factor determining the cut-off frequency, calculated as:

kp = 2πf / fs

where f is the cut-off frequency, 0.5 Hz; fs is the sampling frequency, 128 Hz; and kp is accordingly 0.0245.
The main purpose of filtering with a high pass filter is to enhance the high frequency part and attenuate the low frequency part, reducing the effects of motion artifacts.
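As an illustration, the filtering step can be sketched in a few lines of Python. The recurrence below is one standard discrete form of a first-order high-pass filter and is an assumption on our part, since only the filter factor kp = 2πf/fs ≈ 0.0245 is fixed by the text; the function name `hpf_first_order` is likewise illustrative:

```python
import math

def hpf_first_order(x, f_cut=0.5, f_s=128.0):
    """First-order high-pass filter: enhances the high-frequency part and
    attenuates the low-frequency part (e.g. slow motion artifacts).
    Assumed recurrence: y[i] = (1 - kp) * (y[i-1] + x[i] - x[i-1]),
    with kp = 2*pi*f_cut/f_s (~0.0245 for 0.5 Hz cutoff at 128 Hz)."""
    k_p = 2 * math.pi * f_cut / f_s
    y = [0.0] * len(x)
    for i in range(1, len(x)):
        y[i] = (1 - k_p) * (y[i - 1] + x[i] - x[i - 1])
    return y

# A constant (0 Hz) input produces zero output, while fast changes pass through.
print(round(2 * math.pi * 0.5 / 128, 4))  # the stated filter factor: 0.0245
```

Note that a DC input yields an identically zero output, which is exactly the motion-artifact suppression behavior described above.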
Further, performing feature extraction on the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal respectively to obtain the corresponding triaxial acceleration feature matrix and triaxial angular velocity feature matrix, and connecting the triaxial acceleration feature matrix with the triaxial angular velocity feature matrix to obtain the corresponding instantaneous energy feature matrix, includes:
taking 1 second as a segment, respectively framing the X-axis enhanced acceleration signal, the Y-axis enhanced acceleration signal, the Z-axis enhanced acceleration signal, the ωx-axis enhanced angular velocity signal, the ωy-axis enhanced angular velocity signal and the ωz-axis enhanced angular velocity signal to correspondingly obtain six X-axis sub-signals, six Y-axis sub-signals, six Z-axis sub-signals, six ωx-axis sub-signals, six ωy-axis sub-signals and six ωz-axis sub-signals;
Adopting FFT fast Fourier transform to respectively convert each X-axis sub-signal into a corresponding X-axis frequency domain signal, respectively convert each Y-axis sub-signal into a corresponding Y-axis frequency domain signal, respectively convert each Z-axis sub-signal into a corresponding Z-axis frequency domain signal, respectively convert each omega x -axis sub-signal into a corresponding omega x -axis frequency domain signal, respectively convert each omega y -axis sub-signal into a corresponding omega y -axis frequency domain signal, respectively convert each omega z -axis sub-signal into a corresponding omega z -axis frequency domain signal; the FFT fast Fourier transform formula is:
Wherein, N is the length of the input vector, which represents the sampling rate of 1 second, and the value of N is 128; y [ i ] represents the ith sampling point of the time domain signal; y [ k ] represents the kth discrete frequency in the frequency domain;
Calculating the X-axis instantaneous energy corresponding to each X-axis frequency domain signal by summing the squares of the elements of the frequency domain amplitude vector; respectively calculating the Y-axis instantaneous energy corresponding to each Y-axis frequency domain signal; respectively calculating the Z-axis instantaneous energy corresponding to each Z-axis frequency domain signal; respectively calculating the ωx-axis instantaneous energy corresponding to each ωx-axis frequency domain signal; respectively calculating the ωy-axis instantaneous energy corresponding to each ωy-axis frequency domain signal; respectively calculating the ωz-axis instantaneous energy corresponding to each ωz-axis frequency domain signal; the formula for the sum of squares of the elements of the frequency domain amplitude vector is:
q = Σ_{k=0}^{64} (abs(Y[k]))²;
wherein abs takes the modulus (magnitude) of the complex spectrum, and q represents the instantaneous energy over the 0-64 Hz band;
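As an illustrative sketch (not code from the patent), the per-frame FFT and instantaneous-energy computation described above can be written with NumPy; treating the 0-64 Hz band as FFT bins 0 to 64 (with N = 128 and a 128 Hz sampling rate) is an assumption drawn from the description:

```python
import numpy as np

FS = 128   # sampling frequency (Hz), per the description
N = 128    # samples per 1-second frame

def frame_energy(frame):
    """Instantaneous energy of one 1-s frame: sum of squared FFT
    magnitudes over the assumed 0-64 Hz band (bins 0..N/2)."""
    Y = np.fft.fft(frame, n=N)              # Y[k] = sum_i y[i]*e^{-j*2*pi*k*i/N}
    return float(np.sum(np.abs(Y[:N // 2 + 1]) ** 2))

t = np.arange(N) / FS
q = frame_energy(np.cos(2 * np.pi * 8 * t))  # pure 8 Hz tone for illustration
```

For a pure 8 Hz cosine sampled over one full second, the one-sided spectrum concentrates in bin 8 with magnitude N/2, so q evaluates to (N/2)² = 4096.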
Taking the six X-axis instantaneous energies, six Y-axis instantaneous energies, six Z-axis instantaneous energies, six ωx-axis instantaneous energies, six ωy-axis instantaneous energies and six ωz-axis instantaneous energies as the elements of a matrix, an instantaneous energy feature matrix of size 6×6 is established.
In practical application, the acquisition time is 6 s. Each second of the acquired signal is taken as one frame, so framing all six axis signals yields 6×6 = 36 sub-signals in total. For each sub-signal, the acceleration or angular velocity time domain signal is converted to the frequency domain by the fast Fourier transform, the instantaneous energy of each of the 36 frequency domain signals is then calculated, and the energy values are taken as the elements of a matrix. This matrix is input into a pre-trained deep learning model for classification. As shown in fig. 8, different instantaneous energy feature matrices correspond to different sleep postures.
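A minimal sketch of the whole feature-matrix construction described above, assuming six axis signals of 6 s each at 128 Hz (random data stands in for real IMU measurements, and the function name is illustrative):

```python
import numpy as np

FS, SECONDS = 128, 6   # 128 Hz sampling rate, 6 s acquisition

def energy_matrix(signals):
    """signals: array of shape (6, FS*SECONDS); rows are the X, Y, Z
    acceleration axes and the wx, wy, wz angular-velocity axes.
    Returns the 6x6 instantaneous-energy feature matrix
    (one row per axis, one column per 1-second frame)."""
    frames = signals.reshape(6, SECONDS, FS)     # frame each axis into 6 one-second pieces
    Y = np.fft.fft(frames, axis=-1)              # per-frame FFT
    return np.sum(np.abs(Y[..., :FS // 2 + 1]) ** 2, axis=-1)

rng = np.random.default_rng(0)
E = energy_matrix(rng.standard_normal((6, FS * SECONDS)))  # stand-in IMU data
```

The resulting 6×6 array is what the description feeds to the pre-trained classifier.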
Further, the step of inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification to obtain a corresponding sleep pose classification result comprises the following steps:
And outputting four classification results, namely lying, left lying, right lying and prone lying.
In practical applications, a convolutional neural network (CNN) based model is designed for sleep posture recognition, as shown in fig. 9. The model has two convolution layers, two pooling layers, two fully connected (Dense) layers and one Dropout layer, and outputs four classification results: lying, left lying, right lying and prone lying.
Further, the step of inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification to obtain a corresponding sleep posture classification result further comprises:
In order to extract local features from the global feature matrix, performing a first convolution on the instantaneous energy feature matrix to obtain a corresponding first tensor C1, wherein the size of the first convolution kernel is 3×3, the number of first convolution kernels is 32, and the activation function of the first convolution is ReLU; the size of the first tensor C1 is 4×4×32; the expression formula of the convolution is:
Wout=[(Win-F+2P)/S]+1;
Hout=[(Hin-F+2P)/S]+1;
Dout=K;
C1=Wout×Hout×Dout
Wherein K is the number of convolution kernels, set to 32; F is the convolution kernel size, set to 3×3; S is the stride, set to 1; P is the padding (zero-padding) amount, and since no padding is required it is set to 0; Win and Hin are the input dimensions of the instantaneous energy feature matrix; [(Win-F+2P)/S] and [(Hin-F+2P)/S] denote rounding down.
The activation function selects a ReLU, and the function formula is:
f(x)=max(0,x);
Where x represents the input value, i.e. each element of the first tensor C1. Compared with the sigmoid and tanh functions, the ReLU function has the advantages of avoiding the vanishing-gradient problem and accelerating training.
Carrying out first pooling on the first tensor C1 to obtain a corresponding second tensor A1, wherein the size of the first pooling kernel is 2×2, the stride of the first pooling is 2, and the activation function of the first pooling is ReLU; the size of the second tensor A1 is 2×2×32; the formula for the first pooling is:
Wpout=[(Wpin-Fp)/Sp]+1;
Hpout=[(Hpin-Fp)/Sp]+1;
Dpout=Kp
A1=Wpout×Hpout×Dpout
Wherein Kp is the input depth; since the input is 4×4×32, Kp is 32; Fp×Fp is the pooling kernel size, 2×2; Sp is the stride, set to 2; Wpin and Hpin are the input dimensions of the first tensor C1; [(Wpin-Fp)/Sp] and [(Hpin-Fp)/Sp] denote rounding down.
Performing a second convolution on the second tensor A1 to obtain a corresponding third tensor C2, wherein the size of the second convolution kernel is 1×1, the number of second convolution kernels is 64, and the activation function of the second convolution is ReLU; the size of the third tensor C2 is 2×2×64;
Performing second pooling on the third tensor C2 to obtain a fourth tensor A2, wherein the size of the second pooling kernel is 2×2, the stride of the second pooling is 2, and the activation function of the second pooling is ReLU; the size of the fourth tensor A2 is 1×1×64;
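The layer sizes quoted above can be checked directly from the convolution and pooling output-size formulas; a small sketch (the function names are illustrative, not from the patent):

```python
def conv_out(w, f, p, s):
    """Spatial output size of a convolution: floor((W - F + 2P)/S) + 1."""
    return (w - f + 2 * p) // s + 1

def pool_out(w, f, s):
    """Spatial output size of a pooling layer: floor((W - Fp)/Sp) + 1."""
    return (w - f) // s + 1

sizes = [6]                                  # 6x6 input feature matrix
sizes.append(conv_out(sizes[-1], 3, 0, 1))   # conv1, 3x3, no padding -> C1: 4x4x32
sizes.append(pool_out(sizes[-1], 2, 2))      # pool1, 2x2, stride 2   -> A1: 2x2x32
sizes.append(conv_out(sizes[-1], 1, 0, 1))   # conv2, 1x1             -> C2: 2x2x64
sizes.append(pool_out(sizes[-1], 2, 2))      # pool2, 2x2, stride 2   -> A2: 1x1x64
```

The spatial sizes trace 6 → 4 → 2 → 2 → 1, matching the tensor dimensions stated in the description.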
Performing a first full connection on the fourth tensor A2 to obtain a corresponding fifth tensor D1, wherein the number of nodes of the first fully connected layer is 32 and the activation function of the first full connection is ReLU; the size of the fifth tensor D1 is 1×32;
D1=ReLU(W1×A2+b1);
Where W 1 is the weight matrix and b 1 is the bias. The weight matrix W 1 is a matrix, each column representing an output feature of a previous layer, each row representing an input feature of a next layer, and each element representing a relationship between two features.
Regularization is carried out on the fifth tensor D1 to obtain a sixth tensor D2 correspondingly; dropout is a powerful regularization method in deep learning, and during training, some nodes are randomly selected to participate in prediction and back propagation, so that model overfitting can be reduced, and model generalization is improved. The drop rate of the Dropout layer is set to 0.2, D1 is taken as input, and a tensor of 1×32 is output, which is named D2.
R is a binary mask vector (with values 0 and 1), and each element r_i within it obeys a Bernoulli distribution; since the drop rate is set to 0.2, each element r_i takes the value 1 with probability 80% and the value 0 with probability 20%.
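A sketch of the training-time Dropout mask described above (a NumPy stand-in, not the patent's implementation; production frameworks usually also rescale the kept activations by 1/(1-p), which is omitted here to mirror the description):

```python
import numpy as np

rng = np.random.default_rng(42)
drop_rate = 0.2                     # each node is kept with probability 0.8

d1 = rng.standard_normal(32)        # stand-in for the 1x32 tensor D1
r = rng.binomial(1, 1.0 - drop_rate, size=d1.shape)  # Bernoulli mask: 1 keeps, 0 drops
d2 = d1 * r                         # training-time Dropout output D2
```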
The output layer is a fully connected layer; the sixth tensor D2 is subjected to a second full connection to obtain a corresponding seventh tensor D3; the activation function of the second full connection is Softmax; the size of the seventh tensor D3 is 1×4; the four elements in the seventh tensor D3 represent the probabilities corresponding to lying, left-side lying, right-side lying, and prone lying, respectively.
D3=Softmax(W2×D2+b2);
Softmax converts the output of the neural network into a probability distribution, which is suitable for multi-classification problems. For any real vector of length k, the Softmax function maps it to a real vector of length k whose values lie in the range [0, 1] and whose elements sum to 1, with the formula:
Softmax(x)_i = e^(x_i) / Σ_{j=1}^{k} e^(x_j);
where x is the input vector and x_i and x_j are its elements.
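A minimal NumPy version of the Softmax described above (subtracting the maximum is a standard numerical-stability step, not stated in the patent):

```python
import numpy as np

def softmax(x):
    """Numerically stable Softmax: subtracting max(x) does not change
    the result but prevents overflow in exp for large inputs."""
    e = np.exp(x - np.max(x))
    return e / np.sum(e)

p = softmax(np.array([0.0, 0.0, 0.0, 0.0]))  # uniform logits -> 0.25 per class
```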
The model training loss function adopts cross entropy; the multi-class cross entropy loss formula is:
Loss = -Σ_{i=1}^{n} Σ_{j=1}^{k} y_ij×log(p_ij);
where k is the number of categories, here set to 4; y_ij is the label, taken as 1 if the true category of sample i is j and 0 otherwise; p_ij is the probability, computed by the Softmax activation function, that sample i belongs to category j. By minimizing this loss function, the training parameters of the model are iteratively updated so that the predicted values approach the true values, improving model accuracy.
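A sketch of the multi-class cross-entropy loss, averaged over samples (the averaging over n samples and the small epsilon guard against log(0) are conventional additions, not stated explicitly above):

```python
import numpy as np

def cross_entropy(y_true, p_pred, eps=1e-12):
    """Multi-class cross entropy averaged over n samples:
    L = -(1/n) * sum_i sum_j y_ij * log(p_ij)."""
    return float(-np.mean(np.sum(y_true * np.log(p_pred + eps), axis=1)))

# One sample whose true class is "left lying" (index 1), uniform prediction.
y = np.array([[0.0, 1.0, 0.0, 0.0]])
p = np.array([[0.25, 0.25, 0.25, 0.25]])
loss = cross_entropy(y, p)
```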
Further, since the output result of the application is the probabilities corresponding to the four kinds of sleeping postures, in order to be able to predict the sleeping postures, the sleeping posture with the highest probability needs to be selected as the output result of the model.
As shown in fig. 2, the present invention further provides a sleep posture recognition system based on IMU signals, including:
And the signal detection module: used for acquiring, by means of a flexible sleep monitoring belt, the triaxial acceleration signal and the triaxial angular velocity signal of a subject in a sleep state within a predetermined time;
the signal preprocessing module is used for: the method comprises the steps of preprocessing the triaxial acceleration signals and triaxial angular velocity signals to correspondingly obtain triaxial enhanced acceleration signals and triaxial enhanced angular velocity signals;
and the feature extraction module is used for: the method comprises the steps of respectively carrying out feature extraction on the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal to obtain a corresponding triaxial acceleration feature matrix and a triaxial angular velocity feature matrix; connecting the triaxial acceleration characteristic matrix with the triaxial angular velocity characteristic matrix to obtain a corresponding instantaneous energy characteristic matrix;
Sleeping posture prediction module: used for inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification to obtain a corresponding sleep posture classification result.
For specific limitations regarding the IMU signal-based sleep posture recognition system, reference may be made to the above limitations regarding the IMU signal-based sleep posture recognition method, and will not be described in detail herein. The modules in the sleep gesture recognition system based on the IMU signals can be all or partially realized by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The computer program, when executed by a processor, implements a sleep position recognition method based on IMU signals.
It will be appreciated by those skilled in the art that the structure shown in FIG. 3 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program; the processor, when executing the computer program, implements the following steps:
s1, acquiring a triaxial acceleration signal and a triaxial angular velocity signal of a subject in a sleep state in a preset time by using a flexible sleep monitoring belt;
S2, preprocessing the triaxial acceleration signals and triaxial angular velocity signals to correspondingly obtain triaxial enhanced acceleration signals and triaxial enhanced angular velocity signals;
S3, respectively carrying out feature extraction on the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal to obtain a corresponding triaxial acceleration feature matrix and a triaxial angular velocity feature matrix; connecting the triaxial acceleration characteristic matrix with the triaxial angular velocity characteristic matrix to obtain a corresponding instantaneous energy characteristic matrix;
s4, inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification, and obtaining a corresponding sleep posture classification result.
In one embodiment, the acquiring the triaxial acceleration signal and the triaxial angular velocity signal of the subject in the sleep state for a predetermined time by using the flexible sleep monitoring band includes:
collecting a triaxial acceleration signal and a triaxial angular velocity signal of a subject in a sleep state within 6 s by using the flexible sleep monitoring belt;
The triaxial acceleration signal includes: an X-axis acceleration signal, a Y-axis acceleration signal, and a Z-axis acceleration signal;
the triaxial angular velocity signal includes: an ωx-axis angular velocity signal, an ωy-axis angular velocity signal, and an ωz-axis angular velocity signal.
In one embodiment, the preprocessing the triaxial acceleration signal and the triaxial angular velocity signal correspondingly obtains a triaxial enhanced acceleration signal and a triaxial enhanced angular velocity signal, including:
Using an HPF (first-order high-pass filter) to respectively filter the X-axis acceleration signal, the Y-axis acceleration signal, the Z-axis acceleration signal, the ωx-axis angular velocity signal, the ωy-axis angular velocity signal and the ωz-axis angular velocity signal, enhancing the high-frequency part and attenuating the low-frequency part to reduce the influence of motion artifacts, thereby correspondingly obtaining an X-axis enhanced acceleration signal, a Y-axis enhanced acceleration signal, a Z-axis enhanced acceleration signal, an ωx-axis enhanced angular velocity signal, an ωy-axis enhanced angular velocity signal and an ωz-axis enhanced angular velocity signal;
The time domain formula of the HPF first-order high-pass filter is as follows:
Wherein i represents the ith time instant; x[i] represents the input signal; y[i] represents the output signal; kp denotes the filter factor determining the cut-off frequency, and kp is calculated as follows:
kp = 2×π×f/fs;
wherein f is the cut-off frequency, 0.5 Hz; fs is the sampling frequency, 128 Hz; kp is thus 0.0245.
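The patent's exact time-domain recursion is not reproduced above, so the sketch below uses a standard discrete first-order high-pass form with the same filter factor kp; treat the recursion itself as an assumption:

```python
import math

FS, F_C = 128.0, 0.5
KP = 2 * math.pi * F_C / FS   # filter factor; approx. 0.0245 as in the text

def high_pass(x):
    """Assumed standard first-order high-pass recursion: differences of the
    input pass through while a constant (DC) baseline is rejected."""
    y = [0.0] * len(x)
    for i in range(1, len(x)):
        y[i] = (1.0 - KP) * (y[i - 1] + x[i] - x[i - 1])
    return y

dc = high_pass([1.0] * 256)   # a constant input is fully suppressed
```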
In one embodiment, performing feature extraction on the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal respectively to obtain a corresponding triaxial acceleration feature matrix and triaxial angular velocity feature matrix, and connecting the triaxial acceleration feature matrix with the triaxial angular velocity feature matrix to obtain a corresponding instantaneous energy feature matrix, includes:
Taking 1 second as a section, respectively framing the X-axis enhanced acceleration signal, the Y-axis enhanced acceleration signal, the Z-axis enhanced acceleration signal, the ωx-axis enhanced angular velocity signal, the ωy-axis enhanced angular velocity signal and the ωz-axis enhanced angular velocity signal to correspondingly obtain six X-axis sub-signals, six Y-axis sub-signals, six Z-axis sub-signals, six ωx-axis sub-signals, six ωy-axis sub-signals and six ωz-axis sub-signals;
Adopting the FFT (fast Fourier transform) to respectively convert each X-axis sub-signal into a corresponding X-axis frequency domain signal, each Y-axis sub-signal into a corresponding Y-axis frequency domain signal, each Z-axis sub-signal into a corresponding Z-axis frequency domain signal, each ωx-axis sub-signal into a corresponding ωx-axis frequency domain signal, each ωy-axis sub-signal into a corresponding ωy-axis frequency domain signal, and each ωz-axis sub-signal into a corresponding ωz-axis frequency domain signal; the FFT formula is:
Y[k] = Σ_{i=0}^{N-1} y[i]×e^(-j×2π×k×i/N), k = 0, 1, ..., N-1;
wherein N is the length of the input vector, i.e. the number of samples acquired in 1 second at the sampling rate, so N = 128; y[i] represents the ith sampling point of the time domain signal; Y[k] represents the kth discrete frequency component in the frequency domain;
Calculating the X-axis instantaneous energy corresponding to each X-axis frequency domain signal by summing the squares of the elements of the frequency domain amplitude vector; respectively calculating the Y-axis instantaneous energy corresponding to each Y-axis frequency domain signal; respectively calculating the Z-axis instantaneous energy corresponding to each Z-axis frequency domain signal; respectively calculating the ωx-axis instantaneous energy corresponding to each ωx-axis frequency domain signal; respectively calculating the ωy-axis instantaneous energy corresponding to each ωy-axis frequency domain signal; respectively calculating the ωz-axis instantaneous energy corresponding to each ωz-axis frequency domain signal; the formula for the sum of squares of the elements of the frequency domain amplitude vector is:
q = Σ_{k=0}^{64} (abs(Y[k]))²;
wherein abs takes the modulus (magnitude) of the complex spectrum, and q represents the instantaneous energy over the 0-64 Hz band;
Taking the six X-axis instantaneous energies, six Y-axis instantaneous energies, six Z-axis instantaneous energies, six ωx-axis instantaneous energies, six ωy-axis instantaneous energies and six ωz-axis instantaneous energies as the elements of a matrix, an instantaneous energy feature matrix of size 6×6 is established.
In one embodiment, the inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification to obtain a corresponding sleep pose classification result includes:
And outputting four classification results, namely lying, left lying, right lying and prone lying.
In one embodiment, the inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification to obtain a corresponding sleep pose classification result, and further includes:
Performing a first convolution on the instantaneous energy feature matrix to obtain a corresponding first tensor C1, wherein the size of the first convolution kernel is 3×3, the number of first convolution kernels is 32, and the activation function of the first convolution is ReLU; the size of the first tensor C1 is 4×4×32;
Carrying out first pooling on the first tensor C1 to obtain a corresponding second tensor A1, wherein the size of the first pooling kernel is 2×2, the stride of the first pooling is 2, and the activation function of the first pooling is ReLU; the size of the second tensor A1 is 2×2×32;
Performing a second convolution on the second tensor A1 to obtain a corresponding third tensor C2, wherein the size of the second convolution kernel is 1×1, the number of second convolution kernels is 64, and the activation function of the second convolution is ReLU; the size of the third tensor C2 is 2×2×64;
Performing second pooling on the third tensor C2 to obtain a fourth tensor A2, wherein the size of the second pooling kernel is 2×2, the stride of the second pooling is 2, and the activation function of the second pooling is ReLU; the size of the fourth tensor A2 is 1×1×64;
Performing a first full connection on the fourth tensor A2 to obtain a corresponding fifth tensor D1, wherein the number of nodes of the first fully connected layer is 32 and the activation function of the first full connection is ReLU; the size of the fifth tensor D1 is 1×32;
Regularization is carried out on the fifth tensor D1 to obtain a sixth tensor D2 correspondingly;
Performing a second full connection on the sixth tensor D2 to obtain a corresponding seventh tensor D3; the activation function of the second full connection is Softmax; the size of the seventh tensor D3 is 1×4; the four elements in the seventh tensor D3 represent the probabilities corresponding to lying, left-side lying, right-side lying, and prone lying, respectively.
In one embodiment, the sleeping pose with the highest probability is selected as the output result of the model.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous link (SYNCHLINK) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above examples, and all technical solutions belonging to the concept of the present invention belong to the protection scope of the present invention. It should be noted that modifications and adaptations to the present invention may occur to one skilled in the art without departing from the principles of the present invention and are intended to be within the scope of the present invention.

Claims (6)

1. The sleeping gesture recognition method based on the IMU signal is characterized by comprising the following steps of:
s1, acquiring a triaxial acceleration signal and a triaxial angular velocity signal of a subject in a sleep state in a preset time by using a flexible sleep monitoring belt;
S2, preprocessing the triaxial acceleration signals and triaxial angular velocity signals to correspondingly obtain triaxial enhanced acceleration signals and triaxial enhanced angular velocity signals;
S3, respectively carrying out feature extraction on the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal to obtain a corresponding triaxial acceleration feature matrix and a triaxial angular velocity feature matrix; connecting the triaxial acceleration characteristic matrix with the triaxial angular velocity characteristic matrix to obtain a corresponding instantaneous energy characteristic matrix;
s4, inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification, and obtaining a corresponding sleep pose classification result;
the method for acquiring the triaxial acceleration signal and the triaxial angular velocity signal of the subject in the sleep state by using the flexible sleep monitoring band in a preset time comprises the following steps:
collecting a triaxial acceleration signal and a triaxial angular velocity signal of a subject in a sleep state within 6 s by using the flexible sleep monitoring belt;
The triaxial acceleration signal includes: an X-axis acceleration signal, a Y-axis acceleration signal, and a Z-axis acceleration signal;
The triaxial angular velocity signal includes: an ωx-axis angular velocity signal, an ωy-axis angular velocity signal, and an ωz-axis angular velocity signal;
the preprocessing of the triaxial acceleration signal and the triaxial angular velocity signal correspondingly obtains a triaxial enhanced acceleration signal and a triaxial enhanced angular velocity signal, including:
Using an HPF (first-order high-pass filter) to respectively filter the X-axis acceleration signal, the Y-axis acceleration signal, the Z-axis acceleration signal, the ωx-axis angular velocity signal, the ωy-axis angular velocity signal and the ωz-axis angular velocity signal, enhancing the high-frequency part and attenuating the low-frequency part to reduce the influence of motion artifacts, thereby correspondingly obtaining an X-axis enhanced acceleration signal, a Y-axis enhanced acceleration signal, a Z-axis enhanced acceleration signal, an ωx-axis enhanced angular velocity signal, an ωy-axis enhanced angular velocity signal and an ωz-axis enhanced angular velocity signal;
The time domain formula of the HPF first-order high-pass filter is as follows:
Wherein i represents the ith time instant; x[i] represents the input signal; y[i] represents the output signal; kp denotes the filter factor determining the cut-off frequency, and kp is calculated as follows:
kp = 2×π×f/fs;
where f is the cut-off frequency and fs is the sampling frequency;
The step of performing feature extraction on the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal respectively to obtain a corresponding triaxial acceleration feature matrix and triaxial angular velocity feature matrix, and connecting the triaxial acceleration feature matrix with the triaxial angular velocity feature matrix to obtain a corresponding instantaneous energy feature matrix, comprises the following steps:
Taking 1 second as a section, respectively framing the X-axis enhanced acceleration signal, the Y-axis enhanced acceleration signal, the Z-axis enhanced acceleration signal, the ωx-axis enhanced angular velocity signal, the ωy-axis enhanced angular velocity signal and the ωz-axis enhanced angular velocity signal to correspondingly obtain six X-axis sub-signals, six Y-axis sub-signals, six Z-axis sub-signals, six ωx-axis sub-signals, six ωy-axis sub-signals and six ωz-axis sub-signals;
Adopting the FFT (fast Fourier transform) to respectively convert each X-axis sub-signal into a corresponding X-axis frequency domain signal, each Y-axis sub-signal into a corresponding Y-axis frequency domain signal, each Z-axis sub-signal into a corresponding Z-axis frequency domain signal, each ωx-axis sub-signal into a corresponding ωx-axis frequency domain signal, each ωy-axis sub-signal into a corresponding ωy-axis frequency domain signal, and each ωz-axis sub-signal into a corresponding ωz-axis frequency domain signal; the FFT formula is:
Y[k] = Σ_{i=0}^{N-1} y[i]×e^(-j×2π×k×i/N), k = 0, 1, ..., N-1;
wherein N is the length of the input vector, i.e. the number of samples acquired in 1 second at the sampling rate, so N = 128; y[i] represents the ith sampling point of the time domain signal; Y[k] represents the kth discrete frequency component in the frequency domain;
Calculating the X-axis instantaneous energy corresponding to each X-axis frequency domain signal by summing the squares of the elements of the frequency domain amplitude vector; respectively calculating the Y-axis instantaneous energy corresponding to each Y-axis frequency domain signal; respectively calculating the Z-axis instantaneous energy corresponding to each Z-axis frequency domain signal; respectively calculating the ωx-axis instantaneous energy corresponding to each ωx-axis frequency domain signal; respectively calculating the ωy-axis instantaneous energy corresponding to each ωy-axis frequency domain signal; respectively calculating the ωz-axis instantaneous energy corresponding to each ωz-axis frequency domain signal; the formula for the sum of squares of the elements of the frequency domain amplitude vector is:
q = Σ_{k=0}^{64} (abs(Y[k]))²;
wherein abs takes the modulus (magnitude) of the complex spectrum, and q represents the instantaneous energy over the 0-64 Hz band;
Taking the six X-axis instantaneous energies, six Y-axis instantaneous energies, six Z-axis instantaneous energies, six ωx-axis instantaneous energies, six ωy-axis instantaneous energies and six ωz-axis instantaneous energies as the elements of a matrix, an instantaneous energy feature matrix of size 6×6 is established;
The step of inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification to obtain a corresponding sleep pose classification result, which comprises the following steps:
And outputting four classification results, namely lying, left lying, right lying and prone lying.
2. The IMU signal-based sleep posture identification method of claim 1, wherein the inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification, to obtain a corresponding sleep posture classification result, further comprises:
Performing first convolution on the instantaneous energy feature matrix to obtain a corresponding first tensor C1, wherein an activation function of the first convolution is ReLU;
carrying out first pooling on the first tensor C1 to obtain a corresponding second tensor A1, wherein the activation function of the first pooling is ReLU;
performing a second convolution on the second tensor A1 to obtain a corresponding third tensor C2, wherein an activation function of the second convolution is ReLU;
performing second pooling on the third tensor C2 to obtain a fourth tensor A2, wherein an activation function of the second pooling is ReLU;
performing first full connection on the fourth tensor A2 to obtain a corresponding fifth tensor D1, wherein an activation function of the first full connection is ReLU;
regularization is carried out on the fifth tensor D1 to obtain a sixth tensor D2 correspondingly;
Performing a second full connection on the sixth tensor D2 to obtain a corresponding seventh tensor D3; the activation function of the second full connection is Softmax; the size of the seventh tensor D3 is 1×4; the four elements in the seventh tensor D3 represent the probabilities corresponding to lying, left-side lying, right-side lying, and prone lying, respectively.
3. The IMU signal-based sleep posture recognition method of claim 2, wherein a sleep posture having a highest probability is selected as an output result of the model.
4. A sleep posture recognition system based on IMU signals, configured to perform the steps of the method of any of claims 1-3, comprising:
And the signal detection module: used for acquiring, by means of a flexible sleep monitoring belt, the triaxial acceleration signal and the triaxial angular velocity signal of a subject in a sleep state within a predetermined time;
the signal preprocessing module is used for: the method comprises the steps of preprocessing the triaxial acceleration signals and triaxial angular velocity signals to correspondingly obtain triaxial enhanced acceleration signals and triaxial enhanced angular velocity signals;
a feature extraction module, for performing feature extraction on the triaxial enhanced acceleration signal and the triaxial enhanced angular velocity signal respectively to obtain a corresponding triaxial acceleration feature matrix and triaxial angular velocity feature matrix, and for connecting the triaxial acceleration feature matrix with the triaxial angular velocity feature matrix to obtain a corresponding instantaneous energy feature matrix; and
a sleep posture prediction module, for inputting the instantaneous energy feature matrix into a pre-trained convolutional neural network model for classification to obtain a corresponding sleep posture classification result.
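As a hedged illustration of the feature extraction module (not the patented implementation), the per-axis instantaneous energies of the two triaxial signals can be computed and stacked into one feature matrix. Using the squared sample amplitude as the instantaneous-energy measure is an assumption; the claim only specifies that the two per-axis feature matrices are connected into a single instantaneous energy feature matrix:

```python
import numpy as np

def instantaneous_energy(sig: np.ndarray) -> np.ndarray:
    # Squared amplitude per sample as a simple instantaneous-energy measure (assumption).
    return sig ** 2

def build_feature_matrix(acc: np.ndarray, gyro: np.ndarray) -> np.ndarray:
    """acc, gyro: enhanced triaxial signals of shape (3, N).
    Returns a (6, N) instantaneous energy feature matrix by connecting
    the acceleration features with the angular velocity features."""
    acc_feat = instantaneous_energy(acc)    # triaxial acceleration feature matrix
    gyro_feat = instantaneous_energy(gyro)  # triaxial angular velocity feature matrix
    return np.vstack([acc_feat, gyro_feat])

rng = np.random.default_rng(0)
feat = build_feature_matrix(rng.standard_normal((3, 100)),
                            rng.standard_normal((3, 100)))
```

The resulting 6×N matrix is what the sleep posture prediction module would feed to the convolutional neural network.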
5. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 3.
6. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 3.
Publications (2)

Publication Number Publication Date
CN117958804A CN117958804A (en) 2024-05-03
CN117958804B true CN117958804B (en) 2024-07-09

