WO2022239415A1 - Sleep video analysis method and sleep video analysis device - Google Patents
Sleep video analysis method and sleep video analysis device
- Publication number
- WO2022239415A1 (PCT/JP2022/009521)
- Authority
- WO
- WIPO (PCT)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1126—Measuring movement of the entire body or parts thereof using a particular sensing technique
- A61B5/1128—Measuring movement of the entire body or parts thereof using a particular sensing technique using image analysis
- A61B5/1116—Determining posture transitions
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Definitions
- the present invention relates to a sleep video analysis method and a sleep video analysis device.
- the above Japanese Patent Laying-Open No. 2019-219989 discloses a posture estimation method for estimating the posture of a person from a target image in which the person is photographed.
- a post-rotation image is generated by rotating a target image in a direction that facilitates estimation of a person's posture.
- the posture of the person is estimated from the rotated image obtained by rotating the orientation of the target image, and a posture estimation result representing the positions of the joint points of the person is generated.
- a video of a sleeping person (subject) is recorded, and sleeping postures such as sleeping on the back, lying on the stomach, and sleeping on the side are identified from the recorded sleep video.
- a user such as a doctor may check the sleeping posture (posture during sleep) of the subject during sleep using such a video.
- the present invention has been made to solve the above-described problems, and one object of the present invention is to provide a sleep video analysis method and a sleep video analysis device that enable a user to easily grasp changes in the sleeping posture of a subject in a sleep video.
- a sleep video analysis method according to one aspect includes: a sleeping posture estimation step of estimating the sleeping posture of a subject during sleep based on a sleep video, which is a moving image of the subject during sleep; a sleeping posture classification step of classifying the sleeping posture of the subject during sleep based on the sleeping posture estimation result of the sleeping posture estimation step; a body motion measurement step of measuring the body movement of the subject during sleep based on the sleeping posture estimation result; and a display step of displaying at least one of the sleeping posture classification result of the sleeping posture classification step and the body motion measurement result of the body motion measurement step.
- a sleep video analysis apparatus according to another aspect includes a control unit that analyzes a sleep video, which is a moving image of a subject during sleep, and a display unit that displays the analysis results of the control unit. The control unit is configured to perform: sleeping posture estimation control for estimating the sleeping posture of the subject during sleep based on the sleep video; sleeping posture classification control for classifying the sleeping posture of the subject during sleep based on the sleeping posture estimation result of the sleeping posture estimation control; body motion measurement control for measuring the body movement of the subject during sleep based on the sleeping posture estimation result; and display control for displaying at least one of the sleeping posture classification result of the sleeping posture classification control and the body motion measurement result of the body motion measurement control on the display unit.
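The claimed flow (estimation, classification, body motion measurement, display) can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation: `estimate_posture`, `classify_posture`, and `measure_motion` are hypothetical placeholders for the controls described below.

```python
def analyze_sleep_video(frames, estimate_posture, classify_posture, measure_motion):
    """Sketch of the claimed flow: estimate -> classify -> measure -> display.

    The callables are placeholders for the sleeping posture estimation model,
    the posture classifier, and the body motion measurement in the text.
    """
    estimates = [estimate_posture(f) for f in frames]            # estimation step
    classifications = [classify_posture(e) for e in estimates]   # classification step
    motions = [measure_motion(prev, curr)                        # body-motion step
               for prev, curr in zip(estimates, estimates[1:])]
    # the display step would show at least one of these two results
    return classifications, motions
```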
- by displaying the sleeping posture classification result classified based on the sleeping posture estimation result, the user can visually check the classification result and confirm the classification of the subject's sleeping posture during sleep.
- the user can easily grasp the classification of the subject's sleeping posture and the change in the classification in the sleep video.
- the user can visually confirm the body motion measurement results and check the amount of change in the body movement of the subject during sleep.
- as a result, the user can easily grasp changes in the body movement of the subject over the sleep video.
- thus, a sleep video analysis method and a sleep video analysis device that enable the user to easily grasp changes in the sleeping posture of the subject in a sleep video can be provided.
- FIG. 1 is a block diagram showing the overall configuration of a sleep video analysis device according to an embodiment of the present invention;
- FIG. 2 is a first diagram for explaining the selection of frames from the sleep video;
- FIG. 3 is a second diagram for explaining the selection of frames from the sleep video;
- FIG. 4 is a third diagram for explaining the selection of frames from the sleep video;
- FIG. 5 is a fourth diagram for explaining the selection of frames from the sleep video;
- FIG. 6 is a fifth diagram for explaining the selection of frames from the sleep video;
- FIG. 7 is a diagram showing the generation of the shortened video;
- FIG. 9 is a diagram showing the flow from sleeping posture estimation to acquisition of the sleeping posture classification result and the body motion measurement result;
- FIG. 10 is a diagram showing an example of the display by the display unit;
- FIG. 11 is a diagram showing another example of the display by the display unit;
- FIG. 12 is a first flowchart showing control by the PC during sleep video analysis;
- FIG. 13 is a second flowchart showing control by the PC during sleep video analysis;
- FIG. 14 is a diagram showing an example of the processing for determining the rotation direction according to a second modified example;
- the analysis device 100 includes a PC 1 and a display 2.
- the analysis apparatus 100 is an apparatus that analyzes a moving image of the subject 200 during sleep captured by the camera 101 (moving image during sleep).
- Camera 101 is, for example, a night vision camera for nighttime photography. Note that the camera 101 does not have to be connected to the PC 1 at all times.
- a captured moving image of the subject 200 asleep (sleep video) may be stored in a recording medium built into or external to the camera 101, and the stored sleep video may be analyzed by the PC 1 at a later date (after capture). Note that the analysis device 100 is an example of the "sleep video analysis device" in the claims.
- the subject 200 is, for example, an infant (child).
- in the analysis apparatus 100 (sleep video analysis method) according to the present embodiment, the body movement of the subject 200 during sleep is analyzed based on the video captured while the subject 200 is asleep, so body movement during sleep can be analyzed without attaching a sensor such as an acceleration sensor or a position sensor to the body (non-contact analysis). As a result, even if the subject (e.g., a child) dislikes wearing a sensor or removes the sensor from the body, the subject's body movement during sleep can be easily analyzed. Note that the subject 200 is not limited to a child.
- the PC 1 is a personal computer including a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and the like.
- the PC 1 also includes storage devices (internal storage) such as HDDs (Hard Disk Drives) and SSDs (Solid State Drives).
- the PC 1 is configured to analyze the sleep video, which is a moving image of the subject 200 during sleep.
- the PC 1 is configured to import the moving image data of the sleep video captured by the camera 101.
- the PC 1 is an example of the "control unit" in the claims.
- the PC 1 records (stores) a program for executing the sleeping video analysis method (each control for sleep video analysis), which will be described later.
- a program that causes analysis device 100 (PC1) to execute the sleep video analysis method may be recorded in a server on a network.
- the display 2 is, for example, a liquid crystal display or an organic EL display.
- the display 2 is connected to the PC 1 via a video interface such as HDMI (registered trademark). Note that the display 2 is an example of the "display section" in the claims.
- the PC 1 sequentially reads the image of each frame of the sleep video and performs selection control for selecting frames in which the image difference between frames of the sleep video is equal to or greater than a selection threshold as selected frames in which the sleeping posture (position) of the subject 200 has changed. Control of frame selection (frame pick-up) by the PC 1 will be described with reference to FIGS. 2 to 6.
- in addition to the selected frames, the PC 1 also selects the frames before and after each selected frame as frames of images used for estimating the sleeping posture (estimating the joint points) in the sleeping posture estimation described later.
- the PC 1 sequentially reads the image of each frame of the sleep video, and selects a frame whose image difference from the adjacent frame of the sleep video is equal to or greater than a first selection threshold as a first selected frame 10 (see FIG. 2).
- a method of taking pixel differences and correlation between images is used.
- the difference between the images is given as a scalar value, and the PC 1 selects the frame having the value within the set range as the frame of the image used for estimating the sleeping posture.
- ZNCC: Zero-Mean Normalized Cross Correlation
- the calculated ZNCC value falls within the range of -1.0 to 1.0, and the lower the value, the lower the degree of matching (the greater the image difference).
- a threshold is set for the calculated matching degree between adjacent frames of the sleep video, and when the matching degree becomes equal to or less than the threshold (that is, when the image difference becomes equal to or greater than the first selection threshold), the frame currently being read is selected as the selected frame (first selected frame 10).
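The inter-frame difference computation described above can be sketched as follows, assuming grayscale frames flattened to sequences of pixel values; the threshold value `MATCH_THRESHOLD` is hypothetical, since the patent does not disclose a concrete number.

```python
from math import sqrt

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two grayscale frames given
    as flat sequences of pixel values. Returns a value in [-1.0, 1.0]; lower
    values mean a lower matching degree (a larger image difference)."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    denom = sqrt(sum(x * x for x in da) * sum(x * x for x in db))
    if denom == 0.0:
        return 1.0  # both frames are flat: treat as a perfect match
    return sum(x * y for x, y in zip(da, db)) / denom

# A frame is picked up when the matching degree drops to the threshold or
# below (equivalently, when the image difference reaches the selection
# threshold). This threshold value is a hypothetical example.
MATCH_THRESHOLD = 0.9

def frame_changed(prev_frame, curr_frame, threshold=MATCH_THRESHOLD):
    return zncc(prev_frame, curr_frame) <= threshold
```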
- as shown in the figures, the PC 1 also selects the frames before and after the first selected frame 10 as frames of images used for sleeping posture estimation.
- the PC 1 selects, as a second selected frame 20, a frame whose image difference from the image of the immediately preceding first selected frame 10 is equal to or greater than a second selection threshold among the frames of the sleep video.
- the second selection threshold may be set to the same value as the first selection threshold, or may be set to a different value.
- in addition to the second selected frame 20, the PC 1 also selects the frames before and after the second selected frame 20 (frames 21 and 22) as frames of images used for sleeping posture estimation.
- the PC 1 selects the frames at predetermined intervals from the first selected frame 10 immediately preceding the second selected frame 20 up to the second selected frame 20 as frames of images used for sleeping posture estimation.
- that is, the PC 1 performs sleeping posture estimation for each predetermined interval between the first selected frame 10 immediately preceding the second selected frame 20 and the second selected frame 20.
- the PC 1 also selects the frames before and after frame 30 (frames 31 and 32), the frames before and after frame 40 (frames 41 and 42), and the frames before and after frame 50 (frames 51 and 52) as frames of images used for sleeping posture estimation.
- an example is shown in which the frames immediately before and after each of the first selected frame 10, the second selected frame 20, frame 30, frame 40, and frame 50 are selected.
- one or more frames are selected.
- the frames to be selected need not be limited to the frames immediately before and after.
- the PC 1 may select a predetermined number of frames, or the frames within a predetermined period, before and after each of the first selected frame 10, the second selected frame 20, frame 30, frame 40, and frame 50 as frames of images used for sleeping posture estimation.
- for example, the frames from several seconds before to several seconds after each of the first selected frame 10, the second selected frame 20, frame 30, frame 40, and frame 50 are selected as frames of images used for sleeping posture estimation.
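Collecting the frame indices fed to sleeping posture estimation (the selected frames, the in-between frames at a predetermined interval, and the immediate neighbors of each) might look like the sketch below; the function name and the index-based interface are illustrative assumptions.

```python
def frames_for_pose_estimation(first_idx, second_idx, n_frames, interval):
    """Collect the frame indices used for sleeping posture estimation.

    first_idx  : index of the immediately preceding first selected frame
    second_idx : index of the second selected frame
    interval   : predetermined spacing of the in-between frames
    n_frames   : total frame count (neighbors are clipped to the video)
    """
    picked = set()
    # in-between frames at the predetermined interval (frames 30, 40, 50, ...)
    between = list(range(first_idx + interval, second_idx, interval))
    for idx in [first_idx, second_idx] + between:
        for j in (idx - 1, idx, idx + 1):  # each frame plus its neighbors
            if 0 <= j < n_frames:
                picked.add(j)
    return sorted(picked)
```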
- the PC 1 performs control (video shortening control) to generate and output a shortened video by synthesizing the images of the plurality of selected frames selected from the sleep video in the selection control.
- in generating the shortened video, the PC 1 uses the images of the plurality of selected frames (the plurality of first selected frames 10 and the plurality of second selected frames 20) selected from the sleep video, as well as the images of the frames selected at predetermined intervals (frames 30, 40, and 50) between a second selected frame 20 and the first selected frame 10 immediately preceding it. Furthermore, in generating the shortened video, the PC 1 uses not only the first selected frames 10, the second selected frames 20, and frames 30, 40, and 50, but also the images of the frames before and after each of these frames. Note that the FPS (Frames Per Second) of the video is the same before and after shortening.
- the PC 1 may temporarily or persistently hold the location (frame number or time) of each selected frame in the sleep video (original video), and may record (store) and output the correspondence between the frames of the sleep video (original video) and the frames of the shortened video.
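A minimal sketch of keeping the shortened video together with the frame correspondence back to the original video; `build_shortened_video` is a hypothetical helper, and frames are treated as opaque objects (the FPS is unchanged, so playback is simply shorter).

```python
def build_shortened_video(frames, selected_indices):
    """Synthesize a shortened video from the selected frames and record the
    correspondence from shortened-frame index to original frame number."""
    selected = sorted(set(selected_indices))      # original frame numbers, in order
    shortened = [frames[i] for i in selected]     # the shortened video itself
    mapping = {k: i for k, i in enumerate(selected)}
    return shortened, mapping
```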
- the PC 1 performs control (rotation direction determination control) to rotate the image of each frame of the shortened video to a plurality of angles and determine the rotation angle of the image used for sleeping posture estimation, based on at least one of: every fixed time period of the shortened video; and every time the difference between the images of the preceding and following frames of the shortened video becomes equal to or greater than a rotation threshold.
- specifically, the PC 1 sequentially reads the image of each frame of the shortened video, and every time the read frame satisfies a predetermined condition (every fixed time period of the shortened video, and every time the difference between the images of the preceding and following frames becomes equal to or greater than the rotation threshold), the PC 1 determines, prior to sleeping posture estimation, the rotation angle of the image used for sleeping posture estimation.
- while the predetermined condition is not satisfied, the PC 1 does not perform the rotation direction determination control and rotates the image prior to sleeping posture estimation at the same rotation angle as the previous frame. That is, the rotation angle of the image used for sleeping posture estimation is changed (determined) only when the predetermined condition is satisfied.
- each time the condition is satisfied (every fixed time period of the shortened video, and each time the difference between the images of the preceding and following frames is equal to or greater than the rotation threshold), the PC 1 rotates the image of the frame to a plurality of angles.
- the sleeping posture of the subject 200 during sleep is then estimated for each of the images rotated in the plurality of directions.
- the determination of the rotation angle of the image used for estimating the sleeping posture in the sleeping posture estimation is performed using a method (sleeping posture estimation model) similar to sleeping posture estimation control, which will be described later.
- the PC 1 inputs the image rotated to each rotation angle into the model, which outputs joint points and certainty factors (described later); based on these, the rotation angle that maximizes the sleeping posture estimation accuracy is determined.
- images obtained by rotating the image of each frame of the shortened video to multiple angles (0°, 90°, 180°, and -90°) are input to the sleeping posture estimation model.
- the rotation angle at which the average certainty over the joint points is maximized is taken as the rotation angle that maximizes the sleeping posture estimation accuracy, and is determined as the new rotation angle.
- in the illustrated example, the rotation angle is determined to be -90°, which has the highest average certainty.
- the angle for rotating the image may be arbitrary, for example at 45° intervals.
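The rotation-angle selection described above (rotate to 0°, 90°, 180°, and -90°, keep the angle with the highest average certainty) can be sketched as follows; `estimate_pose` stands in for the sleeping posture estimation model and is an assumption of this sketch.

```python
from statistics import mean

CANDIDATE_ANGLES = (0, 90, 180, -90)  # degrees, as in the example above

def rotate90(image, angle):
    """Rotate a 2-D list image counter-clockwise by a multiple of 90 degrees."""
    for _ in range((angle // 90) % 4):
        image = [list(row) for row in zip(*image)][::-1]
    return image

def best_rotation_angle(image, estimate_pose):
    """Pick the rotation angle that maximizes the mean joint-point certainty.

    `estimate_pose` is a placeholder for the sleeping posture estimation
    model: it takes an image and returns (joint_points, certainties).
    """
    best_angle, best_score = 0, float("-inf")
    for angle in CANDIDATE_ANGLES:
        _, certainties = estimate_pose(rotate90(image, angle))
        score = mean(certainties)
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle
```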
- the PC 1 performs control (sleeping posture estimation control) for estimating the sleeping posture of the subject 200 during sleep based on the sleep video.
- the PC 1 estimates the sleeping posture of the subject 200 during sleep based on the machine-learned model.
- the PC 1 estimates the sleeping posture of the subject 200 during sleep using the image of the selected frame selected in the selection control.
- the PC 1 receives an image as an input and uses a machine-learned model (sleeping posture estimation model) to estimate the sleeping posture of the subject 200 during sleep.
- the PC 1 uses a sleeping posture estimation model that outputs joint points and certainty factors.
- the PC 1 uses a deep learning model such as HigherHRNet or OpenPose as the sleeping posture estimation model.
- the sleeping posture estimation model takes as input the image of each frame of the shortened moving image after rotation processing, and outputs the joint point positions and certainty factors of the subject 200 .
- the definition of the joint point differs depending on the data used to learn the model.
- the sleeping posture estimation model outputs the positions of the eyes, nose, ears, shoulders, elbows, wrists, hips, knees, ankles, etc. as the positions of the joint points.
- since the sleeping posture estimation result includes noise (the result fluctuates due to noise even if the subject 200 does not move in the video), noise removal processing may be applied to the sleeping posture estimation result.
- the PC 1 determines whether the sleeping posture estimation result of the sleeping posture estimation model (trained model) is appropriate, based on the certainty factors obtained when estimating the sleeping posture of the subject 200 and a preset validity threshold.
- a threshold (validity threshold) is set for the certainty of the position of each joint point output by the sleeping posture estimation model, and whether it is appropriate to perform sleeping posture classification and body motion measurement based on the output sleeping posture estimation result (position of each joint point) is determined by comparison with the validity threshold.
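One plausible reading of this validity check is a per-joint comparison against the threshold; the patent does not say whether each joint or an aggregate is tested, and the threshold value below is hypothetical.

```python
VALIDITY_THRESHOLD = 0.5  # hypothetical; the patent presets a value but does not disclose it

def estimation_is_valid(certainties, threshold=VALIDITY_THRESHOLD):
    """Per-joint validity check: the estimation result feeds posture
    classification and body motion measurement only if every joint-point
    certainty clears the validity threshold (one plausible reading)."""
    return all(c >= threshold for c in certainties)
```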
- the PC 1 is also configured to perform control (sleeping posture classification control) for classifying the sleeping posture of the subject 200 during sleep based on the sleeping posture estimation result obtained by the sleeping posture estimation control.
- the PC 1 receives the sleeping posture estimation result as input and outputs a classification of the sleeping posture of the subject 200 during sleep (sleeping posture classification result), such as the supine position (sleeping on the back), the prone position (sleeping on the stomach), or the lateral position (sleeping on the side) (see FIG. 9).
- the PC 1 uses a sleeping posture classification model that outputs classifications of sleeping postures during sleep such as supine position (sleeping on back), prone position (sleep on stomach), and lateral position (sleeping on side).
- the PC 1 uses a deep learning model such as DNN (Deep Neural Network) or GCN (Graph Convolutional Network) as a sleeping posture classification model.
- the sleeping posture classification model receives the sleeping posture estimation result (position of each joint point) as input and classifies the posture of the subject 200 into the supine position (sleeping on the back), the prone position (sleeping on the stomach), the lateral position (sleeping on the side), and the like.
- an SVM (Support Vector Machine) or logistic regression may be used as the sleeping posture classification method without using a deep learning model.
- alternatively, a deep learning model such as a CNN (Convolutional Neural Network) may perform the classification by directly taking the images of the shortened video or the sleep video as input.
- classification of the sleeping posture of the subject 200 by the PC 1 is performed when it is determined in the estimation result determination that the sleeping posture estimation result in the sleeping posture estimation is appropriate.
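For illustration only, here is a toy rule-based stand-in for the classifier (the patent itself uses a DNN/GCN, an SVM, or logistic regression on the joint points); the joint names, the shoulder-span heuristic, and the pixel threshold are all assumptions of this sketch.

```python
def classify_sleeping_posture(joints, certainty):
    """Toy stand-in for the sleeping posture classification model.

    joints:    dict of joint name -> (x, y) position
    certainty: dict of joint name -> certainty factor
    """
    left = joints["left_shoulder"]
    right = joints["right_shoulder"]
    # if the shoulders project close together horizontally, the body is
    # assumed to be turned sideways (hypothetical pixel threshold)
    if abs(left[0] - right[0]) < 30:
        return "lateral"  # sleeping on the side
    # facial joints clearly visible -> facing the camera
    face_certainty = (certainty["left_eye"] + certainty["right_eye"]) / 2
    return "supine" if face_certainty >= 0.5 else "prone"
```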
- the PC 1 is also configured to perform control (body motion measurement control) for measuring the body motion of the subject 200 during sleep based on the sleeping posture estimation result obtained by the sleeping posture estimation control.
- as shown in the figure, the PC 1 calculates the difference between the sleeping posture estimation result of the current frame of the shortened video and that of the previous frame (or of a past frame specified by an arbitrary condition, such as a fixed number of frames earlier), and outputs it as the body movement (body motion measurement result).
- the difference in the sleeping posture estimation result (position of each joint point) between frames of the shortened video is calculated for each joint point (eyes, nose, ears, shoulders, elbows, wrists, hips, knees, ankles, etc.) and output as the body movement (body motion measurement result).
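The per-joint measurement reduces to a displacement between the joint positions of two frames; summing over the joints for the whole-body value is one plausible aggregation, not something the patent states.

```python
import math

def body_motion(prev_joints, curr_joints):
    """Per-joint displacement between two frames of the shortened video.

    Returns (per-joint Euclidean movement, whole-body total). The whole-body
    value as a sum over joints is an illustrative assumption.
    """
    per_joint = {
        name: math.dist(prev_joints[name], curr_joints[name])
        for name in prev_joints
    }
    return per_joint, sum(per_joint.values())
```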
- the PC 1 is configured to perform display control for displaying on the display 2 at least one of the sleeping posture classification result by the sleeping posture classification control and the body motion measurement result by the body motion measurement control.
- the display 2 displays both the sleeping posture classification result and the body motion measurement result under the control of the PC 1.
- the PC 1 controls the display 2 to display the sleep posture classification results and the body movement measurement results in chronological order.
- Sleeping posture classification results are displayed in bar graphs with different colors for each sleeping posture classification, thereby showing changes in sleeping postures.
- the body motion measurement results are displayed separately for the body motion of the whole body and the body motion of each part (head, limbs, etc.).
- the part to be displayed may be switched between only the hands, only the feet, and the like.
- the body motion measurement result for each part is displayed in a different color for each part, thereby indicating the change in each part.
- the display 2 may display a graph of changes in chronological order for each part.
- the body motion output result may be displayed as a continuous value, or may be displayed as a discrete value based on the magnitude of the body motion.
- the sleeping posture classification result is displayed on the display 2 .
- when the sleeping posture classification is not performed, the display 2 indicates that the classification was not performed (see FIG. 10). For example, as shown in FIG. 10, control is performed to display an indication such as "impossible" (determination impossible) on the bar graph.
- when the sleeping posture classification is not performed based on the determination result of the estimation result determination, the analysis apparatus 100 can switch the display method so that the display 2 shows the certainty of the sleeping posture estimation by the sleeping posture estimation model (see FIG. 11). For example, by switching the display method, control is performed to display values equal to or below the preset validity threshold, such as "0.2", on the band graph, as shown in FIG. 11.
- in step 301, a video frame of the sleep video is read (video frame reading). The process then proceeds to step 302.
- in step 302, the difference between frames is calculated. Specifically, the difference between the sequentially read frames of the sleep video is calculated using the technique described above (ZNCC: Zero-Mean Normalized Cross-Correlation). The process then proceeds to step 303.
- in step 303, it is determined whether the difference between frames is equal to or greater than a threshold. If the difference is equal to or greater than the threshold (the first selection threshold or the second selection threshold), the process proceeds to step 304. Otherwise, the process returns to step 301.
- step 304 frame selection is performed. Specifically, in step 304 , a frame in which the image difference between adjacent frames in the sleeping video is greater than or equal to the first selection threshold is selected as the first selected frame 10 . Further, in step 304 , in each frame of the sleeping video, a frame of an image whose difference from the image of the first selected frame 10 immediately before is equal to or greater than the second selection threshold is selected as the second selected frame 20 .
- In step 304, as described above, when a second selected frame 20 is selected, the frames at predetermined intervals between the immediately preceding first selected frame 10 and the second selected frame 20 are also selected as frames of the images used for sleeping posture estimation. Furthermore, when the first selected frame 10, the second selected frame 20, and the frames at predetermined intervals between them are selected, the frames immediately before and after each of them are also selected as frames of the images used for sleeping posture estimation. After frame selection (after step 304 is completed), the process proceeds to step 305.
- In step 305, it is determined whether or not the video has ended (whether reading of all frames of the sleep video has been completed). If it is determined that the video has ended, the process proceeds to step 306. If it is determined that the video has not ended, the process returns to step 301.
- Note that steps 301 to 305 are an example of the "selection step" in the claims. That is, in steps 301 to 305, frames in which the image difference between frames of the sleep video is equal to or greater than a selection threshold (the first selection threshold or the second selection threshold) are selected as selected frames in which the sleeping posture of the subject has changed.
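The two-threshold selection loop of steps 301 to 305 can be sketched as follows. This is an illustrative reading of the description, not the patent's implementation: `frames` is any sequence of images and `diff` is a pluggable difference function (e.g. `1 - ZNCC`); both names are assumptions.

```python
def select_frames(frames, diff, first_th, second_th):
    """Sketch of the selection step (steps 301-305): a frame whose
    difference from the adjacent frame reaches `first_th` becomes a
    first selected frame; a frame whose difference from the most recent
    first selected frame reaches `second_th` becomes a second selected
    frame. Returns the two index lists."""
    first_selected, second_selected = [], []
    last_first = None  # index of the most recent first selected frame
    for i in range(1, len(frames)):
        if diff(frames[i - 1], frames[i]) >= first_th:
            first_selected.append(i)
            last_first = i
        elif last_first is not None and diff(frames[last_first], frames[i]) >= second_th:
            second_selected.append(i)
    return first_selected, second_selected
```

The second threshold is what captures slow posture transitions: each individual adjacent-frame difference may stay below the first threshold while the cumulative drift from the last abrupt change still crosses the second one.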
- In step 306, a shortened video is output. Specifically, a shortened video is generated and output by synthesizing the images of the plurality of selected frames (the first selected frames 10 and the second selected frames 20) selected from the sleep video in step 304 (selection step). In addition to the first selected frames 10 and the second selected frames 20, the frames at predetermined intervals between each first selected frame 10 and the following second selected frame 20, as well as the frames immediately before and after each of these frames, are used to generate the shortened video.
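The full set of frame indices that make up the shortened video of step 306 can be sketched as below. The function name and the representation of the video as a frame count plus index lists are assumptions for illustration; the actual synthesis into an output video file is omitted.

```python
def frames_for_shortened_video(n_frames, first_sel, second_sel, interval):
    """Sketch of which frames enter the shortened video (step 306):
    the first/second selected frames, frames every `interval` steps
    between each second selected frame and its preceding first selected
    frame, and the frames immediately before/after each chosen frame."""
    chosen = set(first_sel) | set(second_sel)
    for s in second_sel:
        preceding = max((f for f in first_sel if f < s), default=None)
        if preceding is not None:
            chosen.update(range(preceding, s, interval))
    # add the frames immediately before and after each chosen frame
    with_neighbors = set()
    for i in chosen:
        with_neighbors.update({i - 1, i, i + 1})
    return sorted(i for i in with_neighbors if 0 <= i < n_frames)
```

The returned indices, concatenated in order, form the shortened video on which the later estimation steps operate.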
- Note that step 306 is an example of the "video shortening step" in the claims.
- step 307 the images of each frame of the shortened video are sequentially input to the sleeping posture estimation model (see FIG. 9). Processing then proceeds to step 308 .
- In step 308, it is determined whether or not the conditions for changing the rotation angle are satisfied. In the present embodiment, the conditions are considered satisfied at regular time intervals of the shortened video or when the image difference between the preceding and following frames is equal to or greater than the rotation threshold. If the conditions are satisfied, the process proceeds to step 309; otherwise, the process proceeds to step 310.
- In step 309, the image rotation angle is determined. Specifically, before estimating the sleeping posture, the image of the frame of the shortened video is rotated to a plurality of angles, and the rotation angle of the image to be used for sleeping posture estimation is determined. As described above, the sleeping posture of the subject 200 during sleep is estimated for each image rotated in the plurality of directions, and based on the per-joint-point certainty output by the sleeping posture estimation model, the rotation angle that maximizes the sleeping posture estimation accuracy is determined as the rotation angle of the image.
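The angle search of step 309 can be sketched as follows. `estimate_pose` is a hypothetical stand-in for running the sleeping posture estimation model on the rotated image, and the candidate angle set is an assumption; the patent only specifies that the angle maximizing the model's certainty is chosen.

```python
def best_rotation_angle(image, estimate_pose, angles=(0, 90, 180, 270)):
    """Sketch of the rotation angle determination (step 309).
    `estimate_pose(image, angle)` stands in for running the sleeping
    posture estimation model on the image rotated by `angle`; it is
    assumed to return a list of per-joint-point confidences. The angle
    with the highest mean confidence is kept for subsequent frames."""
    def mean_confidence(angle):
        confidences = estimate_pose(image, angle)
        return sum(confidences) / len(confidences)
    return max(angles, key=mean_confidence)
```

Running this search only when step 308's conditions fire (rather than every frame) is what keeps the per-frame cost low while still tracking orientation changes.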
- step 309 is an example of a "rotation angle determination step" in the claims. Then, after step 309 is completed, the processing steps transition to step 310 .
- In step 310, image rotation is performed. If the image rotation angle was determined (changed) in step 309, the image is rotated to the determined angle. If step 309 was not performed (the image of the read frame did not satisfy the predetermined conditions), the same rotation angle as for the previous frame is used. The image is rotated prior to sleeping posture estimation; after rotation, the process proceeds to step 311.
- step 311 sleeping posture estimation is performed.
- step 311 the sleeping posture of the subject 200 during sleep is estimated based on the sleeping motion picture of the subject 200 . Note that step 311 is an example of the "sleeping posture estimation step" in the claims.
- In step 311, the sleeping posture of the subject 200 during sleep is estimated by the sleeping posture estimation model as described above. Further, as described above, the images of the selected frames chosen in step 304 prior to step 311 (that is, the images of each frame of the shortened video) are used to estimate the sleeping posture of the subject 200 during sleep. After step 311 (sleeping posture estimation) is completed, the process proceeds to step 312.
- In step 312, the sleeping posture estimation result is determined. Specifically, based on the certainty of sleeping posture estimation by the sleeping posture estimation model (learned model) obtained when estimating the sleeping posture of the subject 200 and a preset validity threshold, it is determined whether or not the sleeping posture estimation result by the model is appropriate.
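A minimal sketch of the validity check in step 312 is shown below. The exact criterion (here: the mean per-joint confidence must reach the threshold) is an assumption for illustration; the patent only specifies that the model's certainty is compared against a preset validity threshold.

```python
def estimation_is_valid(joint_confidences, validity_threshold=0.2):
    """Sketch of the estimation result determination (step 312): the pose
    estimate is treated as usable only if the per-joint-point confidences
    clear a preset validity threshold. Aggregating by the mean is an
    assumption; per-joint or minimum-based gating would also fit the text."""
    mean_conf = sum(joint_confidences) / len(joint_confidences)
    return mean_conf >= validity_threshold
```

Gating on this result is what prevents a low-accuracy pose estimate from propagating into the posture classification and body motion measurement downstream.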
- step 312 is an example of the "estimation result determination step" in the claims. After step 312 is completed, processing steps transition to steps 313 and 314 .
- step 313 sleeping posture classification is performed.
- step 313 the sleeping posture of the subject 200 during sleep is classified based on the sleeping posture estimation result in step 311 .
- the classification of the sleeping posture of the subject 200 is performed when it is determined in step 312 (estimation result determination step) that the sleeping posture estimation result in step 311 (sleeping posture estimation step) is appropriate.
- Step 313 is an example of the "sleeping posture classification step" in the claims.
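For orientation, a purely hypothetical heuristic stand-in for step 313 is sketched below. The patent uses a learned classification model for this step, not a rule; the shoulder-based rule, the class names, and the pixel tolerance are all invented here solely to make the input/output shape of the step concrete.

```python
def classify_sleeping_posture(left_shoulder, right_shoulder):
    """Illustrative stand-in for sleeping posture classification (step 313),
    mapping joint-point coordinates to a posture label. NOT the patent's
    method (which is a learned classifier). With the camera above the bed
    and image x increasing rightward, the left/right shoulder order hints
    at supine vs. prone, and near-overlap hints at side-lying."""
    dx = right_shoulder[0] - left_shoulder[0]
    if abs(dx) < 5:          # shoulders stacked in x -> lying on the side
        return "side"
    return "supine" if dx > 0 else "prone"
```

In the actual system, the pose estimate from step 311 (all joint points, not just shoulders) would be fed to the trained classifier.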
- step 314 body motion measurement is performed.
- step 314 the body motion of the subject 200 during sleep is measured based on the sleeping posture estimation result in step 311 (sleeping posture estimation step).
- Note that step 314 is an example of the "body motion measurement step" in the claims.
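The description states that body motion is output as the difference between the current pose estimate and that of a previous frame. The sketch below computes this as the mean Euclidean displacement of the joint points; that particular distance measure, and representing joints as `(x, y)` tuples, are assumptions for illustration.

```python
import math

def body_motion(prev_joints, curr_joints):
    """Sketch of body motion measurement (step 314): body motion is the
    difference between the current frame's sleeping posture estimation
    result and that of the previous frame (or a past frame chosen at a
    fixed interval), here as mean joint-point displacement in pixels."""
    dists = [math.hypot(cx - px, cy - py)
             for (px, py), (cx, cy) in zip(prev_joints, curr_joints)]
    return sum(dists) / len(dists)
```

Plotting this value per frame yields the chronological body-motion display described for step 315.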
- step 315 at least one of the sleeping posture classification result and the body motion measurement result is displayed.
- the display 2 displays both the sleeping posture classification result and the body motion measurement result under the control of the PC 1 .
- step 315 is an example of the "display step" in the scope of claims.
- In step 315, when the sleeping posture is classified based on the determination result of the estimation result determination, the sleeping posture classification result is displayed. Also, in step 315, the sleeping posture classification results and the body motion measurement results in the body motion measurement step are displayed in chronological order (see FIGS. 10 and 11).
- In step 315, if sleeping posture classification was not performed based on the determination result of the estimation result determination, as described above, either a display indicating that classification was not performed (see FIG. 10) or a display indicating the certainty of sleeping posture estimation (see FIG. 11) is shown.
- By displaying on the display 2 the sleeping posture classification result obtained from the sleeping posture estimation result, the user can visually confirm the classification of the sleeping posture of the subject 200 during sleep. As a result, the user can easily grasp the sleeping posture classification of the subject 200 and how that classification changes over the course of the sleep video.
- By visually confirming the body motion measurement results, the user can check the amount of change in the body motion of the subject 200 during sleep. Compared to the case where the body motion measurement result is not displayed, the user can more easily grasp changes in the body motion of the subject 200 over the course of the sleep video.
- This enables the user to easily grasp changes in the sleeping posture (posture during sleep) of the subject 200 over the course of the sleep video. A sleep video analysis method and a sleep video analysis apparatus capable of doing so can thus be provided.
- The sleep video analysis method sequentially reads the image of each frame of the sleep video and includes step 304 (selection step) of selecting, as selected frames in which the sleeping posture of the subject 200 has changed, the frames in which the image difference between frames of the sleep video is equal to or greater than a selection threshold.
- As a result, frames in which the sleeping posture of the subject 200 has changed are selected from among the frames of the sleep video. Consequently, even if the sleep video is long, the images in the time periods during which the sleeping posture of the subject 200 changed can easily be confirmed.
- In step 311 (sleeping posture estimation step), the images of the selected frames chosen in step 304 (selection step) prior to step 311 are used to estimate the sleeping posture of the subject 200 during sleep. This reduces the amount of sleeping posture estimation processing compared to the case where the sleeping posture is estimated using the images of every frame (all frames) of the sleep video.
- In step 304, in addition to the selected frames (the first selected frames 10 and the second selected frames 20), the frames before and after each selected frame are also selected as image frames used for sleeping posture estimation in step 311 (sleeping posture estimation step).
- As a result, even when the image difference between adjacent frames is small immediately before and after the point at which the image difference becomes large, the frames before and after the posture transition can also be selected as frames of the images used for estimating the sleeping posture.
- In steps 301 to 305, the image of each frame of the sleep video is sequentially read, and a frame in which the image difference between adjacent frames of the sleep video is equal to or greater than the first selection threshold is selected as a first selected frame 10. Further, in step 304, a frame whose image difference from the image of the immediately preceding first selected frame 10 is equal to or greater than the second selection threshold is selected as a second selected frame 20.
- As a result, it is possible to select (acquire) frames in which the sleeping posture of the subject 200 has changed.
- In step 304, in addition to the first selected frame 10 and the second selected frame 20, the frames between the first selected frame 10 immediately preceding the second selected frame 20 and the second selected frame 20 are selected at predetermined intervals as image frames used for sleeping posture estimation in step 311 (sleeping posture estimation step).
- As a result, the frames between the first selected frame 10 and the second selected frame 20 can also be used for sleeping posture estimation, so the sleeping posture during the transition can be reasonably estimated.
- the user can easily grasp changes in the sleeping posture (posture during sleep) of the subject 200 between the first selection frame 10 and the second selection frame 20 .
- In step 315 (display step), the sleeping posture classification results and the body motion measurement results are each displayed in chronological order, so that the user can easily check changes in the sleeping posture and body motion of the subject 200 over time.
- The sleep video analysis method includes step 306 (video shortening step) of generating a shortened video by synthesizing the images of the plurality of selected frames selected from the sleep video in step 304 (selection step).
- Prior to step 311 (sleeping posture estimation step), the sleep video analysis method includes step 309 (rotation angle determination step) of rotating the image of each frame of the shortened video to a plurality of angles, estimating the sleeping posture of the subject 200 during sleep for each rotated image, and determining the rotation angle of the image used for sleeping posture estimation in step 311.
- In step 309, the rotation angle is determined at every predetermined time interval of the shortened video and every time the image difference between preceding and following frames of the shortened video becomes equal to or greater than the rotation threshold.
- As a result, compared with the case where the rotation angle is determined for every frame of the shortened video, the number of processes of rotating the image of each frame to a plurality of angles and determining the rotation angle of the image used for sleeping posture estimation in step 311 (sleeping posture estimation step) can be reduced.
- Further, compared with the case where the rotation angle of the image used for sleeping posture estimation is determined based on only one of the two conditions (every fixed time interval of the shortened video, or every time the image difference between preceding and following frames becomes equal to or greater than the rotation threshold), the sleeping posture can be estimated with higher accuracy.
- The sleep video analysis method includes step 312 (estimation result determination step) of determining whether the sleeping posture estimation result by the sleeping posture estimation model (learned model) is appropriate, based on the certainty of sleeping posture estimation obtained when estimating the sleeping posture of the subject 200 during sleep and a preset validity threshold.
- In step 313 (sleeping posture classification step), the sleeping posture of the subject 200 is classified when it is determined in step 312 (estimation result determination step) that the sleeping posture estimation result in step 311 (sleeping posture estimation step) is appropriate. Then, in step 315 (display step), if the sleeping posture was classified in step 313 based on the determination result of step 312, the sleeping posture classification result of step 313 is displayed. The display of the sleeping posture classification result enables the user to easily confirm the sleeping posture classification of the subject 200.
- In step 315 (display step), when the sleeping posture is not classified in step 313 (sleeping posture classification step) based on the determination result of step 312 (estimation result determination step), the display is switched between a display indicating that the sleeping posture classification was not performed and a display indicating the certainty of sleeping posture estimation by the sleeping posture estimation model (learned model).
- As a result, from either the display indicating that classification was not performed or the display of the certainty of sleeping posture estimation by the learned model, the user can easily confirm that classification of the sleeping posture in step 313 (sleeping posture classification step) was not performed.
- Although an example using ZNCC (zero-mean normalized cross-correlation) has been shown, the method of calculating the difference between frames of the sleep video may be any method of calculating the correlation between the images of the frames, or a method of calculating the pixel difference between the frames, and the frame (first selected frame or second selected frame) may be selected based on it. Multiple techniques may also be used in combination in frame selection.
- In the above embodiment, an example has been shown in which both the sleeping posture classification result and the body motion measurement result are displayed, but the present invention is not limited to this.
- either one of the sleeping posture classification result and the body movement measurement result may be switched and displayed.
- only one of the sleeping posture classification result and the body movement measurement result may be displayed.
- In the display step, only the sleeping posture classification result may be displayed. In this case, the user can easily comprehend the sleeping posture of the subject in the sleep video from the displayed sleeping posture classification result of the subject.
- only the body motion measurement result may be displayed. In this case, the user can easily comprehend the change in the body motion of the subject in the sleeping video from the displayed body motion measurement result of the subject during sleep.
- the sleep posture of the subject during sleep may be estimated using the images of each frame (all frames) of the sleep video.
- In the above embodiment, an example has been shown in which the frames before and after the selected frame are also used for sleeping posture estimation in the sleeping posture estimation step, but the present invention is not limited to this. In the present invention, only the images of the selected frames (the frames in which the image difference between frames of the sleep video is equal to or greater than the selection threshold) may be used for sleeping posture estimation in the sleeping posture estimation step.
- the present invention is not limited to this. In the present invention, only the first selected frame may be selected as the image frame used for estimating the sleeping posture.
- In the above embodiment, an example has been shown in which the frames between the first selected frame 10 immediately preceding the second selected frame 20 and the second selected frame 20 are selected at predetermined intervals as image frames used for sleeping posture estimation in step 311 (sleeping posture estimation step), but the present invention is not limited to this.
- Although the PC 1 sequentially reads the image of each frame of the sleep video and selects, as the first selected frame 10, a frame in which the image difference between adjacent frames is equal to or greater than the first selection threshold, the frames for which the image difference is calculated do not necessarily have to be adjacent; frames separated by a certain interval whose image difference is equal to or greater than the selection threshold may be selected instead.
- the present invention is not limited to this.
- A display may be performed in which the position (frame number or time) of each selected frame in the sleep video and the result (sleeping posture estimation result, sleeping posture classification result, or body motion measurement result) for each selected frame are shown.
- The sleeping posture classification result or the body motion measurement result for each frame of the shortened video may be displayed after being mapped back to the corresponding time in the sleep video (long-duration video) by associating the sleep video with the shortened video as described above (paragraph [0032]).
- For frames that have no result, the result of the temporally closer frame may be used, selected from among the preceding and following frames that have results.
- the present invention is not limited to this.
- Prior to sleeping posture estimation, the sleeping posture may be estimated without rotating the image used for estimating the sleeping posture.
- In the above embodiment, an example has been shown in which the rotation angle determination step rotates the image of each frame of the shortened video to a plurality of angles at every predetermined time interval of the shortened video and every time the image difference between preceding and following frames in the shortened video exceeds the rotation threshold, and determines the rotation angle of the image used for sleeping posture estimation in the sleeping posture estimation step, but the present invention is not limited to this.
- The rotation angle determination step may rotate the images of every frame (all frames) of the shortened video to a plurality of angles and perform, for each frame, the process of determining the rotation angle of the image used for sleeping posture estimation in the sleeping posture estimation step.
- In the above embodiment, an example has been shown in which the image of each frame of the shortened video is rotated to a plurality of angles at every fixed time interval of the shortened video and every time the image difference between preceding and following frames becomes equal to or greater than the rotation threshold, in order to determine the rotation angle of the image used for sleeping posture estimation in step 311.
- the present invention is not limited to this.
- The rotation angle of the image used for sleeping posture estimation may be determined based on at least one of every fixed time interval of the shortened video and every time the image difference between preceding and following frames becomes equal to or greater than the rotation threshold.
- the rotation angle of the image used for estimating the sleeping posture may be determined for each fixed time period of the shortened video.
- In this case, compared with determining the rotation angle for every frame, the number of processes of rotating the image of each frame of the shortened video to a plurality of angles and determining the rotation angle of the image used for sleeping posture estimation can be reduced.
- the rotation angle of the image used for estimating the sleeping posture may be determined each time the difference between the images of the frames before and after the shortened video becomes equal to or greater than the rotation threshold.
- In this case as well, compared with determining the rotation angle of the image used for sleeping posture estimation for every frame, the number of processes of rotating the image of each frame of the shortened video to a plurality of angles and determining the rotation angle can be reduced.
- In the above embodiment, an example has been shown in which, in step 312 (estimation result determination step), it is determined whether the sleeping posture estimation result is appropriate based on the certainty of sleeping posture estimation by the posture estimation model obtained when estimating the sleeping posture of the subject 200 during sleep and a preset validity threshold, but the present invention is not limited to this. The sleeping posture classification in the sleeping posture classification step or the body motion measurement in the body motion measurement step may be performed without determining whether the sleeping posture estimation result by the learned model is appropriate.
- the present invention is not limited to this.
- In the above embodiment, an example has been shown in which a display indicating that the sleeping posture was not classified is shown; alternatively, if the sleeping posture is not classified by the sleeping posture classification step based on the determination result of the estimation result determination step, only the display indicating the certainty of sleeping posture estimation by the learned model may be shown.
- the present invention is not limited to this.
- the presence or absence of a futon may be displayed in addition to the sleeping posture classification result and the body motion measurement result.
- In the above embodiment, an example has been shown in which the rotation angle that maximizes the average of the per-joint-point certainties output by the sleeping posture estimation model is determined as the rotation angle of the image used for sleeping posture estimation, but the present invention is not limited to this.
- The frames of the sleep video or the shortened video may be sequentially analyzed and the body orientation of the subject in the image calculated from the sleeping posture estimation result or the like; from that frame or the next frame, the image may then be rotated to an angle at which the subject's body orientation becomes appropriate (an angle at which the head is up).
- Specifically, the center position 201 of the head joint points (eyes, ears, nose, etc.) and the center position 202 of the lower-body joint points (waist, knees, ankles, etc.) are calculated, and the image is rotated to an angle at which the head joint point center position 201 lies above the foot joint point center position 202.
- The rotation angle obtained in this case may be quantized to steps of 90°, or may be any angle that makes the head-to-foot axis vertical.
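The head-up rotation in this modification can be sketched as below. The function name, the image-coordinate convention (y increasing downward), and returning the angle in degrees are assumptions for illustration.

```python
import math

def head_up_angle(head_points, foot_points, quantize_90=True):
    """Sketch of the head-up rotation modification: average the head
    joint points (eyes, ears, nose, ...) and lower-body joint points
    (waist, knees, ankles, ...), then return the rotation (degrees)
    that places the head center directly above the foot center,
    assuming image coordinates with y pointing down."""
    hx = sum(p[0] for p in head_points) / len(head_points)
    hy = sum(p[1] for p in head_points) / len(head_points)
    fx = sum(p[0] for p in foot_points) / len(foot_points)
    fy = sum(p[1] for p in foot_points) / len(foot_points)
    # current orientation of the foot->head vector; "head up" means this
    # vector should point toward negative y (angle -90 deg in image coords)
    current = math.degrees(math.atan2(hy - fy, hx - fx))
    angle = (current - (-90.0)) % 360.0
    if quantize_90:
        angle = round(angle / 90.0) % 4 * 90.0
    return angle
```

With `quantize_90=True` this matches the 90° variant described above; with `quantize_90=False` it yields an arbitrary angle that makes the head-to-foot axis vertical.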
- the processing in the sleeping video analysis method may be performed by event-driven processing that executes processing on an event-by-event basis.
- the processing in the sleep video analysis method may be completely event-driven, or may be performed in combination of event-driven and flow-driven.
- (Item 3) The sleep video analysis method according to item 2, wherein the sleeping posture estimation step includes a step of estimating the sleeping posture of the subject during sleep using the images of the selected frames selected in the selection step prior to the sleeping posture estimation step.
- (Item 4) The sleep video analysis method according to item 3, wherein the selection step includes a step of selecting, in addition to the selected frames, the frames before and after each selected frame as image frames used for sleeping posture estimation in the sleeping posture estimation step.
- (Item 5) The sleep video analysis method according to any one of items 2 to 4, wherein the selection step includes: a step of sequentially reading the image of each frame of the sleep video and selecting, as a first selected frame, a frame in which the image difference between adjacent frames of the sleep video is equal to or greater than a first selection threshold; and a step of selecting, as a second selected frame, a frame whose image difference from the image of the immediately preceding first selected frame is equal to or greater than a second selection threshold in each frame of the sleep video.
- (Item 7) The sleep video analysis method according to any one of items 1 to 6, wherein the display step includes a step of displaying each of the sleeping posture classification result in the sleeping posture classification step and the body motion measurement result in the body motion measurement step in chronological order.
- (Item 8) The sleep video analysis method according to any one of items 2 to 6, further comprising a video shortening step of generating a shortened video by synthesizing the images of the plurality of selected frames selected from the sleep video in the selection step.
- (Item 10) The sleep video analysis method according to item 9, wherein the rotation angle determination step includes a step of rotating the image of each frame of the shortened video to a plurality of angles based on at least one of every predetermined time interval of the shortened video and every time an image difference between preceding and following frames in the shortened video becomes equal to or greater than a rotation threshold, and determining the rotation angle of the image used for sleeping posture estimation in the sleeping posture estimation step.
- (Item 11) The sleep video analysis method according to any one of items 1 to 10, wherein the sleeping posture estimation step includes a step of estimating the sleeping posture of the subject during sleep based on a learned model trained by machine learning, the method further comprising an estimation result determination step of determining whether the sleeping posture estimation result by the learned model is appropriate, based on the certainty of sleeping posture estimation by the learned model obtained when estimating the sleeping posture of the subject during sleep and a preset validity threshold.
- (Item 12) The sleep video analysis method according to item 11, wherein the sleeping posture classification step is a step of classifying the sleeping posture of the subject when it is determined in the estimation result determination step that the sleeping posture estimation result in the sleeping posture estimation step is appropriate, and the display step includes: a step of displaying the sleeping posture classification result of the sleeping posture classification step when the sleeping posture classification step is performed based on the determination result of the estimation result determination step; and a step of switching between a display indicating that classification was not performed and a display indicating the certainty of sleeping posture estimation by the learned model when the sleeping posture classification step is not performed based on the determination result of the estimation result determination step.
- (Item 13) A sleep video analysis apparatus comprising: a control unit that analyzes a sleep video, which is a video of a subject sleeping; and a display unit that displays analysis results by the control unit, wherein the control unit performs: sleeping posture estimation control for estimating the sleeping posture of the subject during sleep based on the sleep video; sleeping posture classification control for classifying the sleeping posture of the subject during sleep based on the sleeping posture estimation result obtained by the sleeping posture estimation control; body motion measurement control for measuring body motion of the subject during sleep based on the sleeping posture estimation result obtained by the sleeping posture estimation control; and display control for displaying at least one of the sleeping posture classification result by the sleeping posture classification control and the body motion measurement result by the body motion measurement control on the display unit.
- The sleep video analysis apparatus according to item 13, wherein the control unit sequentially reads the image of each frame of the sleep video and performs control to select, as selected frames in which the sleeping posture of the subject has changed, frames in which the image difference between frames of the sleep video is equal to or greater than a selection threshold.
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biomedical Technology (AREA)
- Medical Informatics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Physiology (AREA)
- Theoretical Computer Science (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- General Physics & Mathematics (AREA)
- Heart & Thoracic Surgery (AREA)
- Dentistry (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Image Analysis (AREA)
Abstract
Description
The PC 1 sequentially reads the image of each frame of the sleep video and performs control (selection control) to select frames in which the image difference between frames of the sleep video is equal to or greater than a selection threshold as selected frames in which the sleeping posture of the subject (the positions of the joint points of the subject 200) has changed in the sleep video. The control of frame selection (frame pickup) by the PC 1 will be described with reference to FIGS. 2 to 6.
As shown in FIG. 7, the PC 1 also performs control (video shortening control) to generate and output a shortened video obtained by synthesizing the images of the plurality of selected frames selected from the sleep video by the selection control.
The PC 1 performs control (rotation direction determination control) to rotate the image of each frame of the shortened video to a plurality of angles and determine the rotation angle of the image used for sleeping posture estimation, based on at least one of every fixed time interval of the shortened video and every time the image difference between preceding and following frames of the shortened video becomes equal to or greater than the rotation threshold.
The PC 1 performs control (sleeping posture estimation control) to estimate the sleeping posture of the subject 200 during sleep based on the sleep video. The PC 1 estimates the sleeping posture of the subject 200 during sleep based on a machine-learned trained model.
Here, if the accuracy of the sleeping posture estimation is low, it affects the sleeping posture classification and body motion measurement described later. Therefore, in order to prevent low-accuracy sleeping posture classification and body motion measurement results from being output as a consequence of inputting a low-accuracy sleeping posture estimation result, it is determined for the sleeping posture estimation result (see FIG. 9) whether the result is usable (appropriate or not).
PC1 is also configured to perform control (sleeping posture classification control) of classifying the sleeping posture of subject 200 during sleep based on the sleeping posture estimation result obtained by the sleeping posture estimation control.
PC1 is also configured to perform control (body movement measurement control) of measuring the body movement of subject 200 during sleep based on the sleeping posture estimation result obtained by the sleeping posture estimation control. In the body movement measurement control, as shown in FIG. 9, PC1 outputs, as the body movement (body movement measurement result), the difference from the sleeping posture estimation result of the previous frame of the shortened video (or a past frame specified at a fixed frame interval or under an arbitrary condition).
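The body movement output can be sketched as the displacement of the estimated joint points between the current frame and the reference frame; summed Euclidean distance is an assumed metric standing in for the "difference" the text describes:

```python
import math

def body_movement(prev_keypoints, curr_keypoints):
    """Measure body movement as joint-point displacement between two frames.

    Keypoints are (x, y) pairs from the pose estimator, in matching order;
    the summed per-joint Euclidean distance is one plausible aggregate.
    """
    return sum(math.dist(p, c) for p, c in zip(prev_keypoints, curr_keypoints))
```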
PC1 is configured to perform display control for displaying, on display 2, at least one of the sleeping posture classification result of the sleeping posture classification control and the body movement measurement result of the body movement measurement control. In this embodiment, as shown in FIG. 10, both the sleeping posture classification result and the body movement measurement result are displayed on display 2 under the control of PC1.
Next, the processing flow of the sleep video analysis by the analysis device 100 (PC1) of this embodiment will be described with reference to FIGS. 12 and 13.
This embodiment provides the following effects.
The embodiment disclosed herein should be considered illustrative in all respects and not restrictive. The scope of the present invention is defined not by the above description of the embodiment but by the claims, and includes all modifications (variations) within the meaning and scope equivalent to the claims.
It will be understood by those skilled in the art that the exemplary embodiment described above is a specific example of the following aspects.
(Item 1)
A sleep video analysis method comprising:
a sleeping posture estimation step of estimating the sleeping posture of a subject during sleep based on a sleep video, which is a video of the subject during sleep;
a sleeping posture classification step of classifying the sleeping posture of the subject during sleep based on the sleeping posture estimation result of the sleeping posture estimation step;
a body movement measurement step of measuring body movement of the subject during sleep based on the sleeping posture estimation result of the sleeping posture estimation step; and
a display step of displaying at least one of the sleeping posture classification result of the sleeping posture classification step and the body movement measurement result of the body movement measurement step.
(Item 2)
The sleep video analysis method according to item 1, further comprising a selection step of sequentially reading the image of each frame of the sleep video and selecting frames in which the inter-frame image difference of the sleep video is equal to or greater than a selection threshold as selected frames in which the sleeping posture of the subject has changed.
(Item 3)
The sleep video analysis method according to item 2, wherein the sleeping posture estimation step includes a step of estimating the sleeping posture of the subject during sleep using the images of the selected frames selected in the selection step prior to the sleeping posture estimation step.
(Item 4)
The sleep video analysis method according to item 3, wherein the selection step includes a step of selecting, in addition to each selected frame, the frames immediately before and after the selected frame as frames whose images are used for sleeping posture estimation in the sleeping posture estimation step.
(Item 5)
The sleep video analysis method according to any one of items 2 to 4, wherein the selection step includes:
a step of sequentially reading the image of each frame of the sleep video and selecting frames in which the image difference between adjacent frames of the sleep video is equal to or greater than a first selection threshold as first selection frames; and
a step of selecting, among the frames of the sleep video, frames whose image difference from the immediately preceding first selection frame is equal to or greater than a second selection threshold as second selection frames.
(Item 6)
The sleep video analysis method according to item 5, wherein the selection step further includes a step of selecting, in addition to the first selection frames and the second selection frames, frames between the first selection frame immediately preceding a second selection frame and that second selection frame, at predetermined intervals, as frames whose images are used for sleeping posture estimation in the sleeping posture estimation step.
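The two-stage selection of items 5 and 6 can be sketched as follows; mean absolute pixel difference is again an assumed metric, `t1`/`t2` stand for the first and second selection thresholds, and using the first frame as the initial reference is an assumption:

```python
import numpy as np

def two_tier_select(frames, t1, t2):
    """Two-stage frame selection: adjacent-difference and anchor-difference tiers.

    First selection frames: difference from the immediately preceding frame >= t1.
    Second selection frames: difference from the most recent first selection
    frame >= t2 (checked only for frames not already first-selected).
    """
    def diff(a, b):
        return np.abs(a.astype(np.int16) - b.astype(np.int16)).mean()

    first, second = [], []
    anchor = frames[0]  # image of the most recent first selection frame
    for i in range(1, len(frames)):
        if diff(frames[i], frames[i - 1]) >= t1:
            first.append(i)
            anchor = frames[i]
        elif diff(frames[i], anchor) >= t2:
            second.append(i)
    return first, second
```

The second tier catches slow, cumulative posture drift that never produces a large adjacent-frame difference.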
(Item 7)
The sleep video analysis method according to any one of items 1 to 6, wherein the display step includes a step of displaying each of the sleeping posture classification result of the sleeping posture classification step and the body movement measurement result of the body movement measurement step in time series.
(Item 8)
The sleep video analysis method according to any one of items 2 to 6, further comprising a video shortening step of generating a shortened video by synthesizing the images of the plurality of selected frames selected from the sleep video in the selection step.
(Item 9)
The sleep video analysis method according to item 8, further comprising a rotation angle determination step of, prior to the sleeping posture estimation step, rotating the image of each frame of the shortened video to a plurality of angles, estimating the sleeping posture of the subject during sleep for each image rotated in the plurality of directions, and determining the rotation angle of the image used for sleeping posture estimation in the sleeping posture estimation step.
(Item 10)
The sleep video analysis method according to item 9, wherein the rotation angle determination step includes a step of rotating the image of each frame of the shortened video to a plurality of angles and determining the rotation angle of the image used for sleeping posture estimation in the sleeping posture estimation step, based on at least one of fixed time intervals in the shortened video and each time the image difference between consecutive frames of the shortened video becomes equal to or greater than a rotation threshold.
(Item 11)
The sleep video analysis method according to any one of items 1 to 10, wherein the sleeping posture estimation step includes a step of estimating the sleeping posture of the subject during sleep based on a trained model obtained by machine learning,
the method further comprising an estimation result judgment step of judging whether the sleeping posture estimation result of the trained model is valid, based on the confidence of the sleeping posture estimation by the trained model obtained when estimating the sleeping posture of the subject during sleep and a preset validity threshold.
(Item 12)
The sleep video analysis method according to item 11, wherein the sleeping posture classification step is a step of classifying the sleeping posture of the subject when the sleeping posture estimation result of the sleeping posture estimation step is judged to be valid in the estimation result judgment step, and
the display step includes:
a step of displaying the sleeping posture classification result of the sleeping posture classification step when the sleeping posture classification has been performed, based on the judgment result of the estimation result judgment step; and
a step of performing at least one of a display indicating that classification was not performed and a display indicating the confidence of the sleeping posture estimation by the trained model, when the sleeping posture classification was not performed based on the judgment result of the estimation result judgment step.
(Item 13)
A program causing a computer to execute the sleep video analysis method according to any one of items 1 to 12.
(Item 14)
A computer-readable recording medium on which the program according to item 13 is recorded.
(Item 15)
A sleep video analysis device comprising:
a control unit that analyzes a sleep video, which is a video of a subject during sleep; and
a display unit that displays the analysis result of the control unit,
wherein the control unit is configured to perform:
sleeping posture estimation control for estimating the sleeping posture of the subject during sleep based on the sleep video;
sleeping posture classification control for classifying the sleeping posture of the subject during sleep based on the sleeping posture estimation result of the sleeping posture estimation control;
body movement measurement control for measuring body movement of the subject during sleep based on the sleeping posture estimation result of the sleeping posture estimation control; and
display control for displaying, on the display unit, at least one of the sleeping posture classification result of the sleeping posture classification control and the body movement measurement result of the body movement measurement control.
(Item 16)
The sleep video analysis device according to item 15, wherein the control unit is configured to perform selection control of sequentially reading the image of each frame of the sleep video and selecting frames in which the inter-frame image difference of the sleep video is equal to or greater than a selection threshold as selected frames in which the sleeping posture of the subject has changed.
2 Display (display unit)
10 First selection frame
20 Second selection frame
100 Analysis device (sleep video analysis device)
200 Subject
Claims (16)
- A sleep video analysis method comprising:
a sleeping posture estimation step of estimating the sleeping posture of a subject during sleep based on a sleep video, which is a video of the subject during sleep;
a sleeping posture classification step of classifying the sleeping posture of the subject during sleep based on the sleeping posture estimation result of the sleeping posture estimation step;
a body movement measurement step of measuring body movement of the subject during sleep based on the sleeping posture estimation result of the sleeping posture estimation step; and
a display step of displaying at least one of the sleeping posture classification result of the sleeping posture classification step and the body movement measurement result of the body movement measurement step.
- The sleep video analysis method according to claim 1, further comprising a selection step of sequentially reading the image of each frame of the sleep video and selecting frames in which the inter-frame image difference of the sleep video is equal to or greater than a selection threshold as selected frames in which the sleeping posture of the subject has changed.
- The sleep video analysis method according to claim 2, wherein the sleeping posture estimation step includes a step of estimating the sleeping posture of the subject during sleep using the images of the selected frames selected in the selection step prior to the sleeping posture estimation step.
- The sleep video analysis method according to claim 3, wherein the selection step includes a step of selecting, in addition to each selected frame, the frames immediately before and after the selected frame as frames whose images are used for sleeping posture estimation in the sleeping posture estimation step.
- The sleep video analysis method according to claim 2, wherein the selection step includes:
a step of sequentially reading the image of each frame of the sleep video and selecting frames in which the image difference between adjacent frames of the sleep video is equal to or greater than a first selection threshold as first selection frames; and
a step of selecting, among the frames of the sleep video, frames whose image difference from the immediately preceding first selection frame is equal to or greater than a second selection threshold as second selection frames.
- The sleep video analysis method according to claim 5, wherein the selection step further includes a step of selecting, in addition to the first selection frames and the second selection frames, frames between the first selection frame immediately preceding a second selection frame and that second selection frame, at predetermined intervals, as frames whose images are used for sleeping posture estimation in the sleeping posture estimation step.
- The sleep video analysis method according to claim 1, wherein the display step includes a step of displaying each of the sleeping posture classification result of the sleeping posture classification step and the body movement measurement result of the body movement measurement step in time series.
- The sleep video analysis method according to claim 2, further comprising a video shortening step of generating a shortened video by synthesizing the images of the plurality of selected frames selected from the sleep video in the selection step.
- The sleep video analysis method according to claim 8, further comprising a rotation angle determination step of, prior to the sleeping posture estimation step, rotating the image of each frame of the shortened video to a plurality of angles, estimating the sleeping posture of the subject during sleep for each image rotated in the plurality of directions, and determining the rotation angle of the image used for sleeping posture estimation in the sleeping posture estimation step.
- The sleep video analysis method according to claim 9, wherein the rotation angle determination step includes a step of rotating the image of each frame of the shortened video to a plurality of angles and determining the rotation angle of the image used for sleeping posture estimation in the sleeping posture estimation step, based on at least one of fixed time intervals in the shortened video and each time the image difference between consecutive frames of the shortened video becomes equal to or greater than a rotation threshold.
- The sleep video analysis method according to claim 1, wherein the sleeping posture estimation step includes a step of estimating the sleeping posture of the subject during sleep based on a trained model obtained by machine learning,
the method further comprising an estimation result judgment step of judging whether the sleeping posture estimation result of the trained model is valid, based on the confidence of the sleeping posture estimation by the trained model obtained when estimating the sleeping posture of the subject during sleep and a preset validity threshold.
- The sleep video analysis method according to claim 11, wherein the sleeping posture classification step is a step of classifying the sleeping posture of the subject when the sleeping posture estimation result of the sleeping posture estimation step is judged to be valid in the estimation result judgment step, and
the display step includes:
a step of displaying the sleeping posture classification result of the sleeping posture classification step when the sleeping posture classification has been performed, based on the judgment result of the estimation result judgment step; and
a step of performing at least one of a display indicating that classification was not performed and a display indicating the confidence of the sleeping posture estimation by the trained model, when the sleeping posture classification was not performed based on the judgment result of the estimation result judgment step.
- A program causing a computer to execute the sleep video analysis method according to claim 1.
- A computer-readable recording medium on which the program according to claim 13 is recorded.
- A sleep video analysis device comprising:
a control unit that analyzes a sleep video, which is a video of a subject during sleep; and
a display unit that displays the analysis result of the control unit,
wherein the control unit is configured to perform:
sleeping posture estimation control for estimating the sleeping posture of the subject during sleep based on the sleep video;
sleeping posture classification control for classifying the sleeping posture of the subject during sleep based on the sleeping posture estimation result of the sleeping posture estimation control;
body movement measurement control for measuring body movement of the subject during sleep based on the sleeping posture estimation result of the sleeping posture estimation control; and
display control for displaying, on the display unit, at least one of the sleeping posture classification result of the sleeping posture classification control and the body movement measurement result of the body movement measurement control.
- The sleep video analysis device according to claim 15, wherein the control unit is configured to perform selection control of sequentially reading the image of each frame of the sleep video and selecting frames in which the inter-frame image difference of the sleep video is equal to or greater than a selection threshold as selected frames in which the sleeping posture of the subject has changed.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023520839A JP7480914B2 (ja) | 2021-05-13 | 2022-03-04 | Sleep-period moving image analysis method and sleep-period moving image analysis device |
EP22807122.1A EP4339885A1 (en) | 2021-05-13 | 2022-03-04 | Sleep-period moving image analysis method and sleep-period moving image analysis device |
CN202280032651.1A CN117337448A (zh) | 2021-05-13 | 2022-03-04 | Sleep-period moving image analysis method and sleep-period moving image analysis device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-081445 | 2021-05-13 | ||
JP2021081445 | 2021-05-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022239415A1 true WO2022239415A1 (ja) | 2022-11-17 |
Family
ID=84029525
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/009521 WO2022239415A1 (ja) | 2022-03-04 | Sleep-period moving image analysis method and sleep-period moving image analysis device |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP4339885A1 (ja) |
JP (1) | JP7480914B2 (ja) |
CN (1) | CN117337448A (ja) |
WO (1) | WO2022239415A1 (ja) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017183603A1 (ja) * | 2016-04-19 | 2017-10-26 | Konica Minolta, Inc. | Monitored person monitoring system and monitored person monitoring method |
JP2018164615A (ja) * | 2017-03-28 | 2018-10-25 | Osaka University | Sleep depth determination system, sleep depth determination device, and sleep depth determination method |
JP2019219989A (ja) | 2018-06-21 | 2019-12-26 | Nippon Telegraph and Telephone Corporation | Posture estimation device, posture estimation method, and program |
JP2020116304A (ja) * | 2019-01-28 | 2020-08-06 | Panasonic Intellectual Property Management Co., Ltd. | Information processing method, information processing program, and information processing system |
JP2021005207A (ja) * | 2019-06-26 | 2021-01-14 | EMC Healthcare Co., Ltd. | Information processing device, information processing method, and program |
2022
- 2022-03-04 JP JP2023520839A patent/JP7480914B2/ja active Active
- 2022-03-04 WO PCT/JP2022/009521 patent/WO2022239415A1/ja active Application Filing
- 2022-03-04 EP EP22807122.1A patent/EP4339885A1/en not_active Withdrawn
- 2022-03-04 CN CN202280032651.1A patent/CN117337448A/zh active Pending
Also Published As
Publication number | Publication date |
---|---|
CN117337448A (zh) | 2024-01-02 |
EP4339885A1 (en) | 2024-03-20 |
JP7480914B2 (ja) | 2024-05-10 |
JPWO2022239415A1 (ja) | 2022-11-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yi et al. | Transpose: Real-time 3d human translation and pose estimation with six inertial sensors | |
CN101243471B (zh) | System and method for analyzing a user's movement | |
US11948401B2 (en) | AI-based physical function assessment system | |
US11281896B2 (en) | Physical activity quantification and monitoring | |
US11403882B2 (en) | Scoring metric for physical activity performance and tracking | |
US20120194513A1 (en) | Image processing apparatus and method with three-dimensional model creation capability, and recording medium | |
US11449718B2 (en) | Methods and systems of real time movement classification using a motion capture suit | |
Fieraru et al. | Learning complex 3D human self-contact | |
Xiao et al. | Machine learning for placement-insensitive inertial motion capture | |
CN112434679 (zh) | Rehabilitation exercise evaluation method and apparatus, device, and storage medium | |
JP2018164615A (ja) | Sleep depth determination system, sleep depth determination device, and sleep depth determination method | |
CN111401340B (zh) | Motion detection method and apparatus for a target object | |
Li et al. | Real-time gaze estimation using a kinect and a HD webcam | |
Li et al. | Buccal: Low-cost cheek sensing for inferring continuous jaw motion in mobile virtual reality | |
CN111448589B (zh) | Device, system, and method for detecting body movement of a patient | |
Davoodnia et al. | Estimating pose from pressure data for smart beds with deep image-based pose estimators | |
Sun et al. | Camera-based discomfort detection using multi-channel attention 3D-CNN for hospitalized infants | |
WO2022239415A1 (ja) | Sleep-period moving image analysis method and sleep-period moving image analysis device | |
Jolly et al. | Posture Correction and Detection using 3-D Image Classification | |
Bačić et al. | Towards Real-Time Drowsiness Detection for Elderly Care | |
JP7501543B2 (ja) | Information processing device, information processing method, and information processing program | |
US20230137904A1 (en) | System and method for generating and visualizing virtual figures from pressure data captured using weight support devices for visualization of user movement | |
JP2024087325A (ja) | Behavior analysis result output method, behavior analysis result output program, and behavior analysis result output system | |
Fernandes et al. | Sticks and STONES may build my bones: Deep learning reconstruction of limb rotations in stick figures | |
GB2559809A (en) | Motion tracking apparatus and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22807122 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2023520839 Country of ref document: JP Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 202280032651.1 Country of ref document: CN |
WWE | Wipo information: entry into national phase |
Ref document number: 18559084 Country of ref document: US |
WWE | Wipo information: entry into national phase |
Ref document number: 2022807122 Country of ref document: EP |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 2022807122 Country of ref document: EP Effective date: 20231213 |