CN116077798A - Hypnotizing method based on combination of voice induction and computer vision - Google Patents
- Publication number: CN116077798A
- Application number: CN202310371275.5A
- Authority: CN (China)
- Prior art keywords: hypnotic, key point, user, eye, state
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A61M21/02 — Devices or methods for inducing sleep or relaxation, e.g. by hypnosis
- A61B5/0077 — Devices for viewing the surface of the body, e.g. camera, magnifying lens
- A61B5/1103 — Detecting eye twinkling
- A61B5/1116 — Determining posture transitions
- A61B5/1118 — Determining activity level
- A61B5/1128 — Measuring movement of the entire body or parts thereof using image analysis
- A61B5/4809 — Sleep detection, i.e. determining whether a subject is asleep or not
- A61B5/4812 — Detecting sleep stages or cycles
- A61B5/4836 — Diagnosis combined with treatment in closed-loop systems or methods
- A61B5/72 — Signal processing specially adapted for physiological signals or for diagnostic purposes
- G06V40/171 — Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
- G06V40/193 — Eye characteristics: preprocessing; feature extraction
- G06V40/20 — Movements or behaviour, e.g. gesture recognition
- A61M2021/0027 — Hypnosis by the use of the hearing sense
- A61M2205/50 — General characteristics of the apparatus: with microprocessors or computers
- A61M2230/62 — Measuring parameters of the user: posture
- A61M2230/63 — Measuring parameters of the user: motion, e.g. physical activity
- Y02B20/40 — Control techniques providing energy savings, e.g. smart controller or presence detection
Abstract
The invention discloses a hypnotizing method based on the combination of voice induction and computer vision. The method detects the facial information of a user during hypnosis; detects the body posture information of the user during hypnosis; determines a sleep state from the facial information and body posture information; and plays the hypnotic guide corresponding to that state. The sleep states comprise an early overactive state, a middle stable state and a late relatively still state. Advantages: the method uses computer vision to detect the facial information of the user and the body posture information of the user reclining in a chair during hypnosis, determines the hypnotic stage from this information, and plays the hypnotic guide words corresponding to that stage, simulating the way a hypnotherapist hypnotizes a patient. It can thus hypnotize people suffering from insomnia so that they enter sleep quickly, and it effectively avoids the constrained and uncomfortable feeling caused by other hypnotic detection systems that require the user to wear various sensors.
Description
Technical Field
The invention relates to a hypnotizing method based on combination of voice induction and computer vision, and belongs to the technical field of computer vision and medical hypnosis.
Background
Good sleep is essential to human health: sleep allows the body to relax and its functions to recover, and is a basic condition for staying healthy. However, with the faster pace of modern life, high work pressure and irregular schedules cause poor sleep quality, with symptoms such as insomnia and waking during the night.
Hypnosis is a method that helps a person quickly enter light or even deep sleep. Traditionally, hypnosis is administered by professional hypnotherapists, but such specialists are few, difficult to engage, and expensive. Some existing technologies simulate a hypnotherapist: the user wears sensors that detect bioelectric signals, and hypnotic guidance is played accordingly. However, requiring the user to wear various sensors causes a feeling of constraint and discomfort, which greatly reduces the hypnotic effect.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a hypnotizing method based on the combination of voice induction and computer vision.
To solve the above technical problems, the invention provides a hypnotizing method based on the combination of voice induction and computer vision, comprising the following steps:
detecting facial information of the user during hypnosis, the facial information comprising whether the eyes are open or closed, the number of blinks, and the number of yawns;
detecting body posture information of the user during hypnosis, the body posture information comprising whether a lateral head movement occurs and whether the body is still;
determining a sleep state from the facial information and body posture information of the user, and playing the hypnotic guide corresponding to that state. The sleep states comprise an early overactive state, a middle stable state and a late relatively still state: in the early overactive state the eyes are open or the blink frequency is above a first set value, and the body moves substantially; in the middle stable state the eyes blink or close, or the blink frequency falls below a second set value, possibly accompanied by yawning and lateral head movements; in the late relatively still state the eyes are closed and the body remains stationary.
Further, detecting the facial information of the user during hypnosis comprises:
six eye key points on the innermost layer of the eye contour are detected with MediaPipe: the left key point P1, the upper-left key point P2, the upper-right key point P3, the right key point P4, the lower-right key point P5 and the lower-left key point P6 of the eye contour; an EAR value is calculated from the coordinates of the six eye key points, and whether the eye is open or closed is judged from the EAR value, where the EAR value represents the eye contour aspect ratio;
a preset open-eye EAR range and a preset closed-eye EAR range are acquired; if the calculated EAR value falls from the open-eye range into the closed-eye range and then recovers to the open-eye range within a preset time, a blink is judged and the blink count is incremented; if the calculated EAR value falls from the open-eye range into the closed-eye range and stays in the closed-eye range, the eye is judged closed;
six mouth key points around the innermost layer of the mouth contour are detected with MediaPipe: the left key point F1, the upper-left key point F2, the upper-right key point F3, the right key point F4, the lower-right key point F5 and the lower-left key point F6 of the mouth contour; a MAR value is calculated from the coordinates of the six mouth key points, and whether the mouth is open or closed is judged from the MAR value, where the MAR value represents the mouth contour aspect ratio;
a preset open-mouth MAR range and a preset closed-mouth MAR range are acquired; if the calculated MAR value rises from the closed-mouth range into the open-mouth range and then returns to the closed-mouth range within a preset time, a yawn is judged and the yawn count is incremented.
Further, the calculation formula of the eye contour aspect ratio value is:

EAR = (‖P2 − P6‖ + ‖P3 − P5‖) / (2‖P1 − P4‖)

where EAR represents the eye contour aspect ratio value and ‖·‖ denotes the distance between two key points;

the calculation formula of the mouth contour aspect ratio value is:

MAR = (‖F2 − F6‖ + ‖F3 − F5‖) / (2‖F1 − F4‖)

where MAR represents the mouth contour aspect ratio value.
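As a minimal sketch, the aspect-ratio computation can be written as follows, assuming the standard formulation (the average of the two vertical key-point distances divided by twice the horizontal distance); the sample coordinates below are purely illustrative:

```python
import math

def aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Contour aspect ratio from six (x, y) key points laid out as
    left, upper-left, upper-right, right, lower-right, lower-left."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Average of the two vertical spans, normalised by the horizontal span.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Open eye: tall contour, EAR well above the closed-eye range.
open_eye = aspect_ratio((0, 5), (3, 9), (7, 9), (10, 5), (7, 1), (3, 1))
# Closed eye: the contour flattens and EAR drops toward zero.
closed_eye = aspect_ratio((0, 5), (3, 5.5), (7, 5.5), (10, 5), (7, 4.5), (3, 4.5))
```

The same function serves for the mouth (MAR) with F1 to F6 substituted for P1 to P6, since both ratios share the six-point layout.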
Further, detecting the body posture information of the user during hypnosis comprises:
17 body key points of the user are detected in each video frame with a PoseNet model: the nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle key points;
the nose key point is denoted q1, the left shoulder key point q2 and the right shoulder key point q3; the midpoint q4 of the line connecting q2 and q3 is determined, and the angles ∠q1q4q3 and ∠q1q4q2 are calculated; if either ∠q1q4q3 or ∠q1q4q2 is smaller than a preset angle value, a lateral head movement is judged;
the body motion amount is calculated from the coordinate information of the 17 body key points, and whether the body moves is judged from that amount: when the motion amount exceeds a set threshold the body is judged to be moving, otherwise it is judged to be still.
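The lateral-head check above can be sketched with 2-D pixel coordinates as follows; the 60° threshold is an illustrative assumption, not a value given in the text:

```python
import math

def head_tilt_angles(nose, l_shoulder, r_shoulder):
    """Angles ∠q1q4q2 and ∠q1q4q3 at the shoulder midpoint q4,
    between the nose (q1) and each shoulder, in degrees."""
    q4 = ((l_shoulder[0] + r_shoulder[0]) / 2.0,
          (l_shoulder[1] + r_shoulder[1]) / 2.0)
    def angle(a, b):
        # Angle at q4 between the rays q4->a and q4->b.
        v1 = (a[0] - q4[0], a[1] - q4[1])
        v2 = (b[0] - q4[0], b[1] - q4[1])
        cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    return angle(nose, l_shoulder), angle(nose, r_shoulder)

def is_lateral_head(nose, l_shoulder, r_shoulder, threshold_deg=60.0):
    """Lateral head movement if either nose-to-shoulder angle shrinks
    below the preset angle (threshold_deg is an assumed value)."""
    a2, a3 = head_tilt_angles(nose, l_shoulder, r_shoulder)
    return a2 < threshold_deg or a3 < threshold_deg
```

With the head upright the nose sits above q4 and both angles are near 90°; as the head tips toward one shoulder, the angle on that side shrinks and trips the threshold.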
Further, calculating the body motion amount from the coordinate information of the 17 body key points and judging from it whether the body moves comprises:
determining the body motion amount of the user at time k from the motion amounts of the 17 body key points within a fixed time T before time k;
acquiring the distances moved by the 17 key points within each unit time i, denoted S1, S2, …, S17, where the fixed time T consists of several unit times;
the body motion function F of the user over the fixed time T before time k is expressed as:

F = f(M_{k−(T−i)}, M_{k−(T−2i)}, …, M_{k−i}, M_k)

where M_{k−(T−i)}, M_{k−(T−2i)}, …, M_{k−i} respectively represent the motion amounts of the user at times k−(T−i), k−(T−2i), …, k−i, and f denotes a functional form;
a weight vector W_k is set, expressed as:

W_k = [w_{k−(T−i)}, w_{k−(T−2i)}, …, w_{k−i}, w_k]

where w_{k−(T−i)}, w_{k−(T−2i)}, …, w_{k−i}, w_k respectively represent the weights of the motion amounts at times k−(T−i), k−(T−2i), …, k−i, k;
the body motion function F is determined from the weight vector W_k and the motion amounts at each time within the fixed time T before time k as the weighted sum

F = w_{k−(T−i)}·M_{k−(T−i)} + w_{k−(T−2i)}·M_{k−(T−2i)} + … + w_{k−i}·M_{k−i} + w_k·M_k;

the body motion amount of the user at the current time is determined from the body motion function F;
threshold judgment is performed on the body motion amount of the user at the current time to determine whether the user is in the early overactive state, the middle stable state or the late relatively still state.
Further, performing threshold judgment on the body motion amount of the user at the current time to determine whether the user is in the early overactive state, the middle stable state or the late relatively still state comprises:
if the body motion amount of the user at the current time is greater than a threshold T1, the early overactive state is judged;
if the body motion amount of the user at the current time is not greater than the threshold T1 but greater than a threshold T2, the middle stable state is judged;
if the body motion amount of the user at the current time is not greater than the threshold T2, the late relatively still state is judged;
where T1 > T2.
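The three-way threshold test maps directly onto a small function; the concrete values of T1 and T2 below are illustrative assumptions, not values from the text:

```python
def classify_sleep_stage(motion_amount, t1=50.0, t2=10.0):
    """Map the weighted body-motion amount to one of the three hypnotic
    stages. t1 > t2 are assumed example thresholds."""
    assert t1 > t2
    if motion_amount > t1:
        return "early_overactive"   # motion amount greater than T1
    if motion_amount > t2:
        return "mid_stable"         # not greater than T1, greater than T2
    return "late_still"             # not greater than T2
```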
Further, determining the sleep state from the facial information and body posture information of the user comprises:
when the eyes are open or the blink frequency is above the first set value, and the body motion amount is greater than the threshold T1, the early overactive state is determined;
when the eyes close or the blink frequency is below the second set value, or a lateral head movement occurs, and the body motion amount is not greater than T1 but greater than T2, the middle stable state is determined;
when the eyes are closed and the body motion amount is not greater than the threshold T2, the late relatively still state is determined.
Further, playing the corresponding hypnotic guide according to the sleep state comprises:
when the user is detected to have reached a sleep state, the corresponding hypnotic guide is played; if, after the guide has been played, the user is detected not to have reached the expected hypnotic stage, the guide is played again; the same hypnotic guide is played no more than 3 times, after which hypnosis is ended early.
Further, the method further comprises: playing corresponding background music according to the different sleep states.
Further, the method further comprises:
before detecting the facial information and body posture information of the user during hypnosis, obtaining a hypnotic scene constructed in advance with Unity3D, and guiding the user to a designated area through a virtual person in the scene to complete the preparation before hypnosis.
The invention has the following beneficial effects:
the invention uses computer vision to detect the facial information of the user during hypnosis and the body posture information of the user reclining in an armchair, determines the hypnotic stage from this information, and plays the hypnotic guide words corresponding to that stage. It can thus simulate the way a hypnotherapist hypnotizes a patient and help people with insomnia enter sleep quickly, while effectively avoiding the constrained and uncomfortable feeling caused by other hypnotic detection systems that require the user to wear various sensors.
Drawings
FIG. 1 is a schematic illustration of the six eye key points on the innermost layer of the eye contour;
FIG. 2 is a plot of the eye contour aspect ratio value during a blink;
FIG. 3 is a plot of the eye contour aspect ratio value during eye closure;
FIG. 4 is a schematic view of the 6 mouth key points around the innermost layer of the mouth contour;
FIG. 5 is a plot of the mouth contour aspect ratio value during a yawn;
FIG. 6 is a schematic view of the angle used to detect a lateral head movement;
FIG. 7 is a Gaussian distribution plot;
fig. 8 is a flow chart of the method of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
Example 1
As shown in fig. 8, a hypnotizing method based on the combination of voice induction and computer vision comprises:
preparing hypnotic guide audio recorded by a professional hypnotherapist and background music with a hypnotic effect;
detecting facial expression information of the user during hypnosis, the facial information comprising whether the eyes are open or closed, the blink frequency and the yawn frequency;
detecting body posture information of the user during hypnosis, the body posture information comprising the amount and frequency of body motion and whether lateral head movements occur;
judging the sleep state of the user from the facial expression information and body posture information, and playing the corresponding hypnotic guide according to that state. The sleep states comprise an early overactive state, a middle stable state and a late relatively still state. In the early overactive state the eyes are open or the blink frequency is high, and the body moves substantially. In the middle stable state the eyes blink or close, or the blink frequency decreases, possibly accompanied by yawning and lateral head movements. In the late relatively still state the eyes are closed and the body remains almost stationary.
Detecting the facial information of the user during hypnosis comprises the following steps:
six eye key points on the innermost layer of the eye contour are detected with MediaPipe: the left key point P1, the upper-left key point P2, the upper-right key point P3, the right key point P4, the lower-right key point P5 and the lower-left key point P6 of the eye contour; from their coordinates the eye aspect ratio (EAR) value is calculated with the EAR algorithm, and whether the eye is open or closed is judged from the EAR value;
a preset open-eye EAR range and a preset closed-eye EAR range are acquired; if the calculated EAR value falls from the open-eye range into the closed-eye range and then recovers to the open-eye range within 2 to 6 seconds, a blink is judged and the blink count is incremented; if the calculated EAR value falls from the open-eye range into the closed-eye range in a short time and stays in the closed-eye range, the eye is judged closed;
6 mouth key points around the innermost layer of the mouth contour are detected with MediaPipe: the left key point F1, the upper-left key point F2, the upper-right key point F3, the right key point F4, the lower-right key point F5 and the lower-left key point F6 of the mouth contour; from their coordinates the mouth aspect ratio (MAR) value is calculated with the MAR algorithm, and whether the mouth is open or closed is judged from the MAR value;
a preset open-mouth MAR range and a preset closed-mouth MAR range are acquired; if the calculated MAR value rises from the closed-mouth range into the open-mouth range and then returns to the closed-mouth range within 6 seconds, a yawn is judged and the yawn count is incremented.
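Both the blink and yawn counts reduce to counting full excursions of a ratio signal between two value ranges. A minimal sketch of such a transition counter (the 2–6 s timing constraint from the text is omitted for brevity, and the range bounds are illustrative):

```python
class TransitionCounter:
    """Counts open -> closed -> open excursions of a ratio signal:
    blinks from the EAR stream, yawns from the MAR stream (with the
    roles of the two ranges swapped)."""
    def __init__(self, open_min, closed_max):
        self.open_min = open_min      # ratios above this count as "open"
        self.closed_max = closed_max  # ratios below this count as "closed"
        self.state = "open"
        self.count = 0

    def update(self, ratio):
        if self.state == "open" and ratio < self.closed_max:
            self.state = "closed"
        elif self.state == "closed" and ratio > self.open_min:
            self.state = "open"
            self.count += 1           # one full close-and-reopen cycle
        return self.count

# Feed per-frame EAR values; two complete blink cycles occur below.
blinks = TransitionCounter(open_min=0.25, closed_max=0.15)
for ear in [0.3, 0.3, 0.1, 0.1, 0.3, 0.3, 0.1, 0.3]:
    blinks.update(ear)
```

A signal that drops into the closed range and never recovers leaves the counter in the "closed" state, which corresponds to the sustained eye-closure case in the text.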
The calculation formula of the eye contour aspect ratio value is:

EAR = (‖P2 − P6‖ + ‖P3 − P5‖) / (2‖P1 − P4‖)

where EAR represents the eye contour aspect ratio value and ‖·‖ denotes the distance between two key points.

The calculation formula of the mouth contour aspect ratio value is:

MAR = (‖F2 − F6‖ + ‖F3 − F5‖) / (2‖F1 − F4‖)

where MAR represents the mouth contour aspect ratio value.
Detecting the body posture information of the user during hypnosis comprises the following steps:
17 key points of the user's body are detected in each video frame with a PoseNet model: the nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle key points;
the nose key point is denoted q1, the left shoulder key point q2 and the right shoulder key point q3; the midpoint q4 of the line connecting q2 and q3 is determined, and the angles ∠q1q4q3 and ∠q1q4q2 are calculated; if either ∠q1q4q3 or ∠q1q4q2 is smaller than a preset angle value, a lateral head movement is judged;
Sleep is a continuous process, and the whole hypnotic process is divided into 3 states: an early hyperactive state, a middle stable state and a later relatively static state. The later a motion occurs in the hypnotic process, the greater its influence on hypnosis detection, so different weights are given to different moments to reflect the influence of the motion amount at each moment on the detection result: the later the moment in the hypnotic process, the greater the weight.
The total hypnotic time is 20 minutes: the early hyperactive period lasts 6 minutes, the middle stable period 10 minutes and the later relatively static period 4 minutes. The method sets the state of the user at moment k to be jointly determined by the motion amounts of the previous 30 s. The motion amount is calculated as follows:
the movement distances of the 17 key points within 1 second are recorded as S1, S2, S3, S4, …, S16, S17 respectively, and the amount of motion at each moment is recorded as M_k.
Then the motion function at moment k is F = f(M_{k-29}, M_{k-28}, …, M_{k-i}, …, M_{k-1}, M_k), i ∈ [0, 29];
where M_{k-i} represents the amount of motion of the user at moment k-i. Considering that the closer a moment is to k, the greater the influence of the user's motion amount at that moment on judging the hypnotic state at moment k, and the farther a moment is from k, the smaller its influence, the motion amounts obtained at different moments should be given different weights: the closer to moment k, the greater the weight; the farther from moment k, the smaller the weight.
The weight vector W_k is:
W_k = [w_{k-29}, w_{k-28}, …, w_{k-i}, …, w_{k-1}, w_k], where w_{k-i} represents the weight given to the motion at moment k-i.
The Gaussian distribution function is a smooth distribution function with good noise immunity, and its shape is highly consistent with the desired distribution of the motion-amount weights in the moment-k motion function, so a Gaussian distribution is selected to weight the motion amounts at different moments reasonably. In a Gaussian distribution, the smaller the variance, the more concentrated the distribution; the larger the variance, the flatter the distribution. Considering that even the earliest motion amounts in the window contribute and cannot be ignored, a flat distribution curve is more suitable for determining the weights, so a Gaussian distribution with μ = 0 and σ = 8.0 is used to process the data.
The weight at moment k is the largest; letting w_k = 1 and solving gives the weight vector w_{k-i} = exp(-i² / (2σ²)), i ∈ [0, 29].
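With μ = 0, σ = 8.0 and the maximum weight w_k fixed at 1, the remaining weights follow directly from the Gaussian density; a sketch, where the 30-sample window length matches the 30 s described above:

```python
import math

def weight_vector(n=30, sigma=8.0):
    """w_{k-i} = exp(-i^2 / (2*sigma^2)), so w_k = 1 at i = 0.
    Returned oldest-first: [w_{k-29}, ..., w_{k-1}, w_k]."""
    return [math.exp(-(i ** 2) / (2.0 * sigma ** 2)) for i in range(n - 1, -1, -1)]
```

Because σ = 8 is large relative to the 30-sample window, the curve is flat enough that even the oldest sample keeps a nonzero weight, as the text requires.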
During detection, when the body movement amount is greater than the threshold T1, the early hyperactive state is determined; when it is not greater than T1 but greater than T2, the middle stable state is determined; when it is not greater than T2, the later relatively static state is determined. Through experiments and calculation analysis, T1 = 25 and T2 = 8.
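The description leaves the exact combination f of the weighted motion amounts unspecified; assuming a weighted sum (one plausible choice, not stated in the text), the three-state decision against T1 = 25 and T2 = 8 can be sketched as:

```python
def classify_state(motions, weights, t1=25.0, t2=8.0):
    """motions: per-second motion amounts M_{k-29}..M_k, oldest first;
    weights: the matching weight vector. The weighted sum (an assumed
    form of f) is compared against the thresholds T1 and T2."""
    m = sum(w * x for w, x in zip(weights, motions))
    if m > t1:
        return "early hyperactive"
    if m > t2:
        return "middle stable"
    return "later relatively static"
```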
Playing the hypnotic guide words and background music corresponding to the sleep stage comprises:
the hypnotic guiding stage is mainly divided into early virtual-person guidance and hypnotic guide-word guidance. The virtual person guides the user to sit in the designated area and make preparations before hypnosis;
the hypnotic guiding part uses Unity3D to construct a comfortable and warm scene, which is more conducive to hypnosis. A virtual person is also provided in the system to help the insomnia patient prepare before hypnosis;
After the system is opened, the virtual person introduces itself: "Hello! I am your virtual hypnotic assistant, Fritters. Please follow me into hypnosis now." The virtual person then lies on the sofa, waiting for the hypnotic guide words to play. Background music is played in a loop throughout the hypnotic process; it differs from ordinary music and is dedicated to hypnosis. Studies have shown that music intervention can serve as a complementary non-pharmacological approach to controlling pain, anxiety and insomnia and to improving health.
The guide words are designed around the three states: the pre-sleep hyperactive phase focuses mainly on preliminary relaxation of the whole body; the middle stable phase focuses on relaxing each part of the body; in the later relatively static phase the relaxation guidance has been completed and hypnosis is assisted mainly by the background music.
The guidance in the guide words is mainly divided into the following steps:
Pre-sleep hyperactive phase: the guide words first prompt the insomnia patient to make preparations for hypnosis, beginning by removing constraints on the body, for example removing or loosening tight items such as hairpins, collar buttons, shoelaces and glasses. The patient is then asked to lie on the chair in the most comfortable position, to facilitate the later relaxation of the muscles of each part. Finally, the patient is prompted to gently close the eyes and take several deep breaths.
Middle stable phase: the parts of the body are then relaxed step by step, from top to bottom in the order scalp, eyes, face, nose, mouth, ears, chin, neck, shoulders, chest, hands, back, abdomen, buttocks, thighs, calves and soles. When the muscles of a part are being relaxed, the guide words first concentrate the patient's attention on that part through verbal suggestion, then relax it through repeated playing of the guide words, adding sensory suggestions along the way, such as "you feel your fingers expanding, with a sense of extension" when relaxing the hand muscles, or "your arms become heavy and feel a slight tingling" when relaxing the arms.
Later relatively static phase: at this point the user has entered a sleep state, which is maintained mainly by continuously playing the background music.
And judging whether the user reaches a specified sleep state according to the facial expression detection and the body posture detection results.
The pre-sleep hyperactive state is judged by the following conditions: the eyes are open or blink frequently, and the body movement amount is greater than 25.
The middle stable state is judged by: the eyes are closed, or the blink frequency is slower than in the pre-sleep hyperactive period, or lateral head movement occurs, and the body movement amount is less than or equal to 25 and greater than 8.
The later relatively static state: the eyes are closed and the body movement amount is less than or equal to 8.
When the specified hypnotic state is detected, the system automatically plays the hypnotic guide words of the next stage; when it is not detected, the guide words of the current stage are replayed. The guide words of the same stage are played at most 3 times; beyond 3 times, the hypnotic program exits automatically, since the current state of the user is not suitable for hypnosis, and the user should relax before using the hypnotic system again.
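The replay rule above (at most 3 plays of one stage's guide words, then exit) can be sketched as a small control loop; play and reached_state are hypothetical callbacks standing in for the audio playback and the vision-based state check:

```python
def run_stage(play, reached_state, max_plays=3):
    """Play one stage's guide words up to max_plays times.
    Returns True once the target hypnotic state is detected,
    False if it never is (the program should then exit)."""
    for _ in range(max_plays):
        play()
        if reached_state():
            return True
    return False
```

A session would chain run_stage over the three stages and exit early on the first False.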
Example 2
The invention also provides an intelligent hypnotic detection system based on computer vision, which comprises: the device comprises a detection module, a hypnotic module, a guiding module and an intelligent analysis module.
The detection module comprises: the face detection unit and the gesture detection unit are used for collecting images through the camera and transmitting the images to the host for analysis.
The face detection unit detects the facial expression of the user during hypnosis, mainly the opening and closing of the eyes, the number of blinks and the number of mouth yawns. Face detection uses MediaPipe face recognition, a lightweight and high-performance face detector based on BlazeFace.
Eye detection serves as the criterion for eye opening and closing, based on the aspect ratio of the eye contour. The specific method is as follows:
the 468 facial key points are first detected by MediaPipe, and six key points P1, P2, P3, P4, P5, P6 near the innermost layer of the eye contour are then selected from them, as shown in Figure 1.
The eye contour aspect ratio value is denoted EAR and represents the open-eye and closed-eye conditions. Normally the open-eye EAR value is about 0.6; when the EAR value suddenly drops to about 0.05, a blink is determined, as shown in FIG. 2, and when the EAR value drops to about 0.05 and stays there, the closed-eye condition is determined, as shown in FIG. 3. The calculation formula is shown in formula (1): the numerator computes the distances between the vertical key points and the denominator the distance between the horizontal key points, and 2 is a weight; since there is only one horizontal pair (P1, P4) but two vertical pairs (P2, P6) and (P3, P5), the weight 2 balances the vertical and horizontal data.
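The blink/closed-eye distinction is a matter of how long the EAR stays near 0.05. A per-frame sketch, where the 0.2 cut-off between the ~0.6 open and ~0.05 closed readings and the 15-frame (~0.5 s at 30 fps) closed-eye duration are assumptions:

```python
def eye_events(ears, closed_thresh=0.2, closed_frames=15):
    """Label each frame of an EAR series: a brief dip below the threshold
    that reopens is a 'blink'; staying below for closed_frames frames is
    'closed'; otherwise 'open' / 'closing' while a dip is in progress."""
    events, below = [], 0
    for ear in ears:
        if ear < closed_thresh:
            below += 1
            events.append("closed" if below >= closed_frames else "closing")
        else:
            if 0 < below < closed_frames:
                events.append("blink")   # short dip that reopened
            else:
                events.append("open")
            below = 0
    return events
```

Counting the "blink" labels gives the blink count used by the state judgments.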
Yawn detection detects changes of the user's mouth during hypnosis; 6 key points F1, F2, F3, F4, F5 and F6 around the innermost layer of the mouth contour are selected from the 468 key points detected by MediaPipe, as shown in FIG. 4:
the open/closed condition of the mouth is represented by the calculated MAR value. Through multiple experiments and data analysis, the MAR value is about 0 when the mouth is closed, and a yawn is determined when it suddenly rises to about 0.8, as shown in FIG. 5, which records two yawns. The calculation formula is formula (2): the numerator computes the distances between the vertical key points and the denominator the distance between the horizontal key points, and 2 is a weight, since there is only one horizontal pair (F1, F4) but two vertical pairs (F2, F6) and (F3, F5).
Gesture detection estimates 17 key point positions on the user's body in a video frame using a pre-trained PoseNet model: nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle. The 17 key points are used to detect changes in the user's body posture during hypnosis, including whether lateral head motion occurs and whether the body is stationary. Because hypnosis has the user half-reclined on an armchair, lateral head motion appears as the user becomes hypnotized, and the body becomes stationary when the user is in deep sleep.
Lateral head motion detection finds four posture key points of the user: q1 is the nose key point, q2 and q3 are the left and right shoulder key points, and q4 is the midpoint of the line connecting q2 and q3. Whether the user performs a lateral head action is judged by calculating the angle θ between the vector from q4 to q1 and the vector from q4 to q2 (or q3). Through a large number of experiments, θ normally lies between 70 and 90 degrees; when θ is less than 70 degrees for 5 seconds or more, a lateral head action is determined.
And body movement detection, which is used for detecting whether the positions of 17 key points of the user are moved or not through the PoseNet model to judge whether the user moves or not.
Sleep is a continuous process, and the whole hypnotic process is divided into 3 states: an early hyperactive state, a middle stable state and a later relatively static state. The later a motion occurs in the hypnotic process, the greater its influence on hypnosis detection, so different weights are given to different moments to reflect the influence of the motion amount at each moment on the detection result: the later the moment in the hypnotic process, the greater the weight.
The total hypnotic time is 20 minutes: the early hyperactive period lasts 6 minutes, the middle stable period 10 minutes and the later relatively static period 4 minutes. The method sets the state of the user at moment k to be jointly determined by the motion amounts of the previous 30 s. The motion amount is calculated as follows:
the movement distances of the 17 key points within 1 second are recorded as S1, S2, S3, S4, …, S16, S17 respectively.
Then the motion function at moment k is F = f(M_{k-29}, M_{k-28}, …, M_{k-i}, …, M_{k-1}, M_k), i ∈ [0, 29];
where M_{k-i} represents the amount of motion of the user at moment k-i. Considering that the closer a moment is to k, the greater the influence of the user's motion amount at that moment on judging the hypnotic state at moment k, and the farther a moment is from k, the smaller its influence, the motion amounts obtained at different moments should be given different weights: the closer to moment k, the greater the weight; the farther from moment k, the smaller the weight.
The weight vector W_k is:
W_k = [w_{k-29}, w_{k-28}, …, w_{k-i}, …, w_{k-1}, w_k], where w_{k-i} represents the weight given to the motion at moment k-i.
As shown in FIG. 7, the Gaussian distribution function is a smooth distribution function with good noise immunity, and its shape is highly consistent with the desired distribution of the motion-amount weights in the moment-k motion function, so a Gaussian distribution is used to weight the motion amounts at different moments reasonably. In a Gaussian distribution, the smaller the variance, the more concentrated the distribution; the larger the variance, the flatter the distribution. Considering that even the earliest motion amounts in the window contribute and cannot be ignored, a flat distribution curve is more suitable for determining the weights, so a Gaussian distribution with μ = 0 and σ = 8.0 is used to process the data.
The weight at moment k is the largest; letting w_k = 1 and solving gives the weight vector w_{k-i} = exp(-i² / (2σ²)), i ∈ [0, 29].
During detection, when the body movement amount is greater than the threshold T1, the early hyperactive state is determined; when it is not greater than T1 but greater than T2, the middle stable state is determined; when it is not greater than T2, the later relatively static state is determined. Through experiments and calculation analysis, T1 = 25 and T2 = 8.
The hypnotic module is used for playing a preset hypnotic guide word to hypnotize the user when the user is in a waking state. The detection module detects the user at the start; when the user's eyes are detected to be open or the body is detected to move, the user is judged to be awake, and the hypnotic module then starts to play the preset hypnotic guide words.
The guiding module is used for playing progressive hypnotic instructions to guide the user to sleep when the current hypnotic depth of the user reaches a preset hypnotic stage. There are 3 hypnotic stages: a mild hypnotic stage, a moderate hypnotic stage and a deep hypnotic stage, each with a corresponding decision criterion. The mild hypnotic stage is mainly characterized by the user's eyes being closed or blinking, with yawning actions occurring. The moderate hypnotic stage is mainly characterized by the user's eyes being tightly closed, with the body stationary or moving slightly. The deep hypnotic stage is mainly characterized by the user's eyes being tightly closed and the body static. When the user is detected to reach a hypnotic stage, the guiding module plays the corresponding hypnotic guide words. When the user is detected not to reach the preset hypnotic stage after the corresponding guide words are played, the guide words are replayed; the same hypnotic guide words are played no more than 3 times, otherwise the program exits early.
In the invention, a hypnotic system based on Unity3D and computer vision is designed to hypnotize insomnia patients by simulating the hypnotic process of a hypnotherapist. While background music and hypnotic guide words are played, the facial expression and body posture features of the hypnotized person are detected through a camera and fed back to the system; the system recognizes and judges the user's sleep condition through the algorithm and then intelligently adjusts the playing of the guide words.
The method combines background music, hypnotic guide words and computer vision to assist sleep; compared with simply playing white noise or background music, it has a better effect, and the guide words are played intelligently.
Compared with detecting the sleep condition through sensors, this non-contact method reduces the discomfort and sense of constraint of the hypnotized person during hypnosis, thereby achieving a better hypnotic effect.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.
Claims (10)
1. A hypnotic method based on a combination of voice induction and computer vision, comprising:
detecting face information of a user in a hypnotizing process, wherein the face information comprises opening and closing of eyes, blink times and mouth yawning times;
detecting body posture information of a user in a hypnotizing process, wherein the body posture information comprises whether a lateral head acts or not and whether a body is static or not;
determining a sleep state according to the facial information and the body posture information of the user, and playing the corresponding hypnotic guide words according to the sleep state; the sleep state comprises an early hyperactive state, a middle stable state and a later relatively static state; the early hyperactive state includes the eyes being open or a blink frequency higher than a first set value, with large body movement; the middle stable state includes eye blinking or closure, or a blink frequency lower than a second set value, or accompanying yawning and lateral head actions; the later relatively static state includes eye closure with the body remaining stationary.
2. The hypnotic method based on combination of voice induction and computer vision according to claim 1, wherein the detecting facial information of the user during hypnosis comprises:
six eye key points close to the innermost layer of the eye contour are detected through the MediaPipe, namely an eye contour left key point P1, an eye contour left upper key point P2, an eye contour right upper key point P3, an eye contour right key point P4, an eye contour right lower key point P5 and an eye contour left lower key point P6; according to the coordinates of the six eye key points, calculating an EAR value, and judging whether the eyes are open or closed according to the EAR value, wherein the EAR value represents an eye contour aspect ratio value;
acquiring a preset open eye EAR value range and a closed eye EAR value range, if the calculated EAR value falls from the open eye EAR value range to the closed eye EAR value range in a preset time, then recovering to the open eye EAR value range, judging that the eye blinking is performed, and recording the number of times of blinking; if the calculated EAR value is reduced from the EAR value range of the open eyes to the EAR value range of the closed eyes, and is maintained in the EAR value range of the closed eyes, judging that the eyes are closed;
six mouth key points positioned around the innermost layer of the mouth outline are detected through the MediaPipe, and are a mouth outline left key point F1, a mouth outline left upper key point F2, a mouth outline right upper key point F3, a mouth outline right key point F4, a mouth outline right lower key point F5 and a mouth outline left lower key point F6; calculating MAR values according to coordinates of six mouth key points, and judging whether the mouth is opened or closed according to the MAR values, wherein the MAR values represent the aspect ratio values of the mouth outline;
acquiring a preset open-mouth MAR value range and a preset closed-mouth MAR value range; if the calculated MAR value rises from the closed-mouth MAR value range into the open-mouth MAR value range and returns to the closed-mouth MAR value range within a preset time, determining one yawn and recording the yawn count.
3. The hypnotic method based on the combination of voice induction and computer vision according to claim 2, wherein the calculation formula of the eye contour aspect ratio value is:
EAR = (‖P2 - P6‖ + ‖P3 - P5‖) / (2 × ‖P1 - P4‖)
where EAR represents the eye contour aspect ratio value and ‖·‖ represents the Euclidean distance;
the calculation formula of the mouth contour aspect ratio value is:
MAR = (‖F2 - F6‖ + ‖F3 - F5‖) / (2 × ‖F1 - F4‖)
where MAR represents the mouth contour aspect ratio value.
4. The hypnotic method based on combination of voice induction and computer vision according to claim 1, wherein the detecting the physical posture information of the user in the hypnotic process comprises:
detecting 17 body key point positions on the user's body in a video frame through the PoseNet model, respectively a nose key point, a left eye key point, a right eye key point, a left ear key point, a right ear key point, a left shoulder key point, a right shoulder key point, a left elbow key point, a right elbow key point, a left wrist key point, a right wrist key point, a left hip key point, a right hip key point, a left knee key point, a right knee key point, a left ankle key point and a right ankle key point;
the nose key point is denoted q1, the left shoulder key point q2 and the right shoulder key point q3; the midpoint q4 of the line connecting the left shoulder key point q2 and the right shoulder key point q3 is determined; the values of ∠q1q4q3 and ∠q1q4q2 are calculated, and if ∠q1q4q3 or ∠q1q4q2 is smaller than a preset angle value, a lateral head action is determined;
and calculating the body movement amount according to the coordinate information of the 17 body key points and judging whether the body moves according to the body movement amount: when the movement amount exceeds a set threshold value, the body is judged to move; otherwise, the body is judged to be stationary.
5. The hypnotic method based on the combination of voice induction and computer vision according to claim 4, wherein the calculating the body movement amount according to the coordinate information of the 17 body key points and judging whether the body moves according to the body movement amount comprises:
determining the body movement amount of the user at moment k from the movement amounts of the 17 body key points within a fixed time T before moment k;
acquiring the movement distances of the 17 key points within a unit time i, recorded as S1, S2, …, S17, wherein the fixed time T consists of a plurality of unit times;
the body motion function F of the user within the fixed time T before moment k is expressed as:
F = f(M_{k-(T-i)}, M_{k-(T-2i)}, …, M_{k-i}, M_k)
wherein M_{k-(T-i)}, M_{k-(T-2i)} and M_{k-i} respectively represent the amount of motion of the user at moments k-(T-i), k-(T-2i) and k-i, and f represents a functional form;
setting a weight vector W_k expressed as:
W_k = [w_{k-(T-i)}, w_{k-(T-2i)}, …, w_{k-i}, w_k]
wherein w_{k-(T-i)}, w_{k-(T-2i)}, …, w_{k-i}, w_k respectively represent the weights of the motion at moments k-(T-i), k-(T-2i), k-i and k;
determining the body motion function F from the weight vector W_k and the movement amounts at the moments within the fixed time T before moment k;
determining the body movement amount of the user at the current moment according to the body motion function F;
and performing threshold judgment on the body movement amount of the user at the current moment to determine whether the user is in a hyperactive state, a stable state or a relatively static state.
6. The hypnotic method based on the combination of voice induction and computer vision according to claim 5, wherein the performing threshold judgment on the body movement amount of the user at the current moment to determine whether the user is in a hyperactive state, a stable state or a relatively static state comprises:
if the body movement amount of the user at the current moment is greater than a threshold T1, judging the early hyperactive state;
if the body movement amount of the user at the current moment is not greater than the threshold T1 but greater than a threshold T2, judging the middle stable state;
if the body movement amount of the user at the current moment is not greater than the threshold T2, judging the later relatively static state;
wherein T1 > T2.
7. The hypnotic method based on the combination of voice induction and computer vision according to claim 6, wherein the determining a sleep state according to the facial information and the body posture information of the user comprises:
if the eyes are open or the blink frequency is higher than the first set value and the body movement amount is greater than the threshold T1, determining the early hyperactive state;
if the eyes are closed or the blink frequency is lower than the second set value or lateral head movement occurs, and the body movement amount is not greater than T1 but greater than T2, determining the middle stable state;
if the eyes are closed and the body movement amount is not greater than the threshold T2, determining the later relatively static state.
8. The hypnotic method based on combination of voice induction and computer vision according to claim 7, wherein playing the corresponding hypnotic guide according to the sleep state comprises:
when the user is detected to reach a sleep state, the corresponding hypnotic guide words are played; when the user is detected not to reach the preset hypnotic stage after the corresponding guide words are played, the guide words are replayed; the same hypnotic guide words are played no more than 3 times, otherwise the hypnotic program exits early.
9. The method for hypnotizing based on a combination of voice induction and computer vision according to claim 8, further comprising: and playing corresponding background music according to different sleep states.
10. The hypnotic method based on the combination of voice induction and computer vision according to claim 1, further comprising:
before face information and body posture information of a user in the hypnotizing process are detected, a hypnotizing scene which is constructed on the basis of Unity3D in advance is obtained, the user is guided to reach a designated area through a virtual person in the hypnotizing scene, and preparation before hypnosis is made.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310371275.5A CN116077798B (en) | 2023-04-10 | 2023-04-10 | Hypnotizing method based on combination of voice induction and computer vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116077798A true CN116077798A (en) | 2023-05-09 |
CN116077798B CN116077798B (en) | 2023-07-28 |
Family
ID=86204869
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310371275.5A Active CN116077798B (en) | 2023-04-10 | 2023-04-10 | Hypnotizing method based on combination of voice induction and computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116077798B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6070098A (en) * | 1997-01-11 | 2000-05-30 | Circadian Technologies, Inc. | Method of and apparatus for evaluation and mitigation of microsleep events |
CN106178222A (en) * | 2016-09-21 | 2016-12-07 | 广州视源电子科技股份有限公司 | Based on magnetic intelligence assisting sleep method and system |
JP2016539446A (en) * | 2013-10-29 | 2016-12-15 | キム,ジェ−チョル | A device for preventing doze driving in two stages through recognition of movement, face, eyes and mouth shape |
CN108187210A (en) * | 2017-12-28 | 2018-06-22 | 刘勇 | Intelligence renders the methods, devices and systems of virtual reality adjustment sleep mood |
CN209900391U (en) * | 2018-09-14 | 2020-01-07 | 段新 | Hypnosis system based on virtual reality |
CN112016429A (en) * | 2020-08-21 | 2020-12-01 | 高新兴科技集团股份有限公司 | Fatigue driving detection method based on train cab scene |
WO2022113275A1 (en) * | 2020-11-27 | 2022-06-02 | 三菱電機株式会社 | Sleep detection device and sleep detection system |
CN114708641A (en) * | 2022-04-26 | 2022-07-05 | 深圳市优必选科技股份有限公司 | Sleep detection method and device, computer readable storage medium and terminal equipment |
Also Published As
Publication number | Publication date |
---|---|
CN116077798B (en) | 2023-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10524667B2 (en) | Respiration-based estimation of an aerobic activity parameter | |
CN111868788B (en) | System and method for generating pressure point diagrams based on remotely controlled haptic interactions | |
Belkacem et al. | Real-time control of a video game using eye movements and two temporal EEG sensors | |
CN105190707B (en) | System and method for selecting a patient interface device based on three-dimensional modeling | |
CN108714020B (en) | Adaptive psychometric adjustment instrument, real-time eye movement delay obtaining method and storage medium | |
JP2013111449A (en) | Video generating apparatus and method, and program | |
CN105210118B (en) | 3-D model visualization of a patient interface device fitted to a patient's face | |
CN109288651A (en) | Personalized upper-limb rehabilitation training robot system and its rehabilitation training method | |
CN107491648A (en) | Hand rehabilitation training method based on the Leap Motion motion-sensing controller | |
JP6213936B2 (en) | Sleep environment control system and sleep environment control program used therefor | |
KR102425481B1 (en) | Virtual reality communication system for rehabilitation treatment | |
CN116077798B (en) | Hypnotizing method based on combination of voice induction and computer vision | |
CN113749644A (en) | Intelligent garment capable of monitoring lumbar movement of human body and automatically correcting posture | |
WO2021232629A1 (en) | Sitting posture detection method, neck massage instrument, and computer-readable storage medium | |
Zheng et al. | Multi-modal physiological signals based fear of heights analysis in virtual reality scenes | |
Islam et al. | Computer vision based eye gaze controlled virtual keyboard for people with quadriplegia | |
KR102425479B1 (en) | System And Method For Generating An Avatar With User Information, Providing It To An External Metaverse Platform, And Recommending A User-Customized DTx(Digital Therapeutics) | |
Groenegress et al. | The physiological mirror—a system for unconscious control of a virtual environment through physiological activity | |
González et al. | Vision based interface: an alternative tool for children with cerebral palsy | |
CN110473602B (en) | Body state data collection processing method for wearable body sensing game device | |
Kau et al. | Pressure-sensor-based sleep status and quality evaluation system | |
CN113893429A (en) | Virtual/augmented reality auxiliary stabilization device and method | |
KR102432250B1 (en) | The System that Provides Care Chatbot | |
Chen | Sitting behaviour-based pattern recognition for predicting driver fatigue | |
CN112294329B (en) | Psychological monitoring system and method based on music emotion recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||