WO2019003859A1 - Monitoring system, control method therefor, and program - Google Patents

Monitoring system, control method therefor, and program

Info

Publication number
WO2019003859A1
Authority
WO
WIPO (PCT)
Prior art keywords
state
image
bed
score
unit
Application number
PCT/JP2018/021984
Other languages
French (fr)
Japanese (ja)
Inventor
純平 松永
田中 清明
信二 高橋
達哉 村上
Original Assignee
OMRON Corporation (オムロン株式会社)
Application filed by OMRON Corporation (オムロン株式会社)
Publication of WO2019003859A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/02: Alarms for ensuring the safety of persons
    • G08B 25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B 25/01: Alarm systems characterised by the transmission medium
    • G08B 25/04: Alarm systems characterised by the transmission medium using a single signalling line, e.g. in a closed loop

Definitions

  • the present invention relates to a technique for supporting watching of a subject on a bed.
  • Patent Document 1 discloses displaying a graph of body motion, respiration and the like.
  • Patent Document 3 discloses that an alert icon is displayed when a fall of a patient occurs.
  • By displaying a graph of the body movement measured by a sensor, as in Patent Documents 1 and 2, the watching side (nurse, doctor, caregiver, etc.) can monitor whether the patient is moving and how large that movement is.
  • However, the output of a body movement sensor contains a great deal of noise (signals caused by movements other than the action to be detected), so the sensor output alone cannot accurately detect or predict the patient's state or behavior.
  • the present invention has been made in view of the above-described circumstances, and an object thereof is to provide a technique for detecting the state or behavior of a subject on a bed with high accuracy and high reliability.
  • the present invention adopts a method of generating a score that quantifies the condition of a subject based on an image and graphically displaying a time change of the score.
  • A first aspect of the present invention is a watching support system that supports watching of a subject on a bed, comprising: an image acquisition unit that acquires an image from an imaging device installed so as to capture a monitoring area including the bed of the subject; a state quantification unit that outputs a score quantifying the state of the subject based on the image of the monitoring area acquired by the image acquisition unit; and a state display unit that displays, on a display device, a graph indicating the temporal change of the score output from the state quantification unit.
  • According to this configuration, a graph indicating the temporal change of the score quantifying the subject's state is output, so the watching side (nurse, caregiver, etc.) can easily confirm changes and trends in the subject's state. Once such changes and trends are known, the subject's behavior can be predicted so that danger is prevented before it occurs, or the subject's daily behavior pattern can be grasped and used for watching.
  • The state quantification unit preferably has a regressor that has been machine-learned to take an image showing a bed and a person as input and output a score indicating the state of the person relative to the bed, and obtains the score quantifying the subject's state by inputting the image of the monitoring area into this regressor. Since the state of the subject in the input image is estimated using the regressor, highly accurate state estimation can be performed even for unknown input images. Further, since the regressor outputs the score as a continuous value, a reasonable estimation result can be obtained even when the subject's state cannot be clearly classified. Furthermore, a reasonable estimation result can be expected even for images in which head detection is difficult, for example when the subject is covered with a futon, when confusing people or objects are present around the subject, or when the lighting environment differs from normal.
  • the regressor may be a neural network. By using a neural network, highly accurate and highly robust state estimation can be performed.
  • The state of the person with respect to the bed may be classified in advance into a plurality of types, with a different score assigned to each type, and the regressor may be configured to output a value between the scores of two types when the state of the person lies between those two types.
  • Such a design makes it possible to express the various states a person can take as a one-dimensional score, so the "state of the person" becomes easy to handle mathematically or in a program; for example, downstream processing (such as the graph output by the state display unit) can be constructed extremely simply.
  • The plurality of types may include a state 0 in which the person is sleeping in the bed, a state 1 in which the person is sitting up on the bed, and a state 2 in which the person is away from the bed. If at least these three states, state 0 to state 2, can be distinguished, it becomes possible to detect "waking up" and "getting out of bed", for which the need for watching is high.
  • the score quantifying the condition of the subject may be a score representing the degree of danger of the condition of the subject.
  • The system may have a determination criteria storage unit in which a determination criterion for judging a dangerous state is set in advance for each of a plurality of determination areas defined based on the bed area in the image of the monitoring area; the state quantification unit may have a detection unit that detects the subject's head from the image of the monitoring area, and may calculate the score representing the degree of danger of the subject's state using the determination criterion of the determination area corresponding to the position at which the head was detected. With this configuration, whether the subject on the bed is in a safe state or a dangerous state can be determined simply and accurately from the image.
  • The state quantification unit may calculate the score representing the degree of danger of the subject's state based on at least one of the head orientation, the head movement speed, the head movement direction, and the head movement vector.
  • the present invention can be understood as a watching support system having at least a part of the above configuration or function.
  • The present invention can also be regarded as a watching support method or a control method of a watching support system including at least a part of the above processing, a program for causing a computer to execute these methods, or a computer-readable recording medium on which such a program is non-transitorily recorded.
  • the state or behavior of the subject on the bed can be detected with high accuracy and high reliability.
  • FIG. 1 is a block diagram schematically showing a hardware configuration and a functional configuration of the watching support system of the first embodiment.
  • FIG. 2 is a view showing an installation example of the imaging device.
  • FIG. 3A and FIG. 3B are examples of the monitoring area set for the image.
  • FIG. 4 shows types of human states and examples of images.
  • FIG. 5 is a diagram schematically showing machine learning of the regressor.
  • FIG. 6 is a diagram schematically showing the ability of the regressor.
  • FIG. 7 is a flowchart of the state monitoring process.
  • FIG. 8 is an example of the status display.
  • FIG. 9 is a block diagram schematically showing the hardware configuration and the functional configuration of the watching support system of the second embodiment.
  • 10A to 10C are examples of determination regions set for an image.
  • FIG. 11A is an example of a data structure of the determination reference of head orientation for each determination area, and FIG. 11B is a diagram for explaining codes representing eight directions.
  • FIG. 12 is a flowchart of the state monitoring process of the first example of the second embodiment.
  • FIG. 13 is an example of the danger degree determination of Example 1 of the second embodiment.
  • FIG. 14 is a state display example of the second embodiment.
  • FIG. 15 is a flowchart of the state monitoring process of Example 2 of the second embodiment.
  • FIG. 16 is an example of the danger degree determination of Example 2 of the second embodiment.
  • FIG. 17 is a flowchart of the state monitoring process of the third example of the second embodiment.
  • FIG. 18 is a flowchart of the state monitoring process of the fourth example of the second embodiment.
  • the present invention relates to a technique for supporting watching of a subject on a bed.
  • This technology can be applied to systems that automatically detect the wake-up and bed-leaving behaviors of patients and care recipients in hospitals, nursing facilities, and the like, and issue the necessary notifications when a dangerous state occurs.
  • This system can be preferably used, for example, for watching and supporting elderly people, patients with dementia, children and the like.
  • FIG. 1 is a block diagram schematically showing a hardware configuration and a functional configuration of the watching support system 1
  • FIG. 2 is a diagram showing an installation example of an imaging device.
  • the watching support system 1 includes an imaging device 10, an information processing device 11, and a display device 13 as main hardware configurations.
  • the imaging device 10 and the information processing device 11 are connected by wire or wirelessly. Further, the information processing device 11 and the display device 13 are also connected by wire or wirelessly.
  • The slave unit, comprising the imaging device 10 and the information processing device 11, is installed in the room of the target person 21, while the master unit, comprising the display device 13, is installed in the nurse station or a central monitoring room.
  • Communication between the slave unit and the master unit can be performed via a wired or wireless LAN.
  • All or part of the information processing apparatus 11 may be provided on the master unit side.
  • A plurality of slave units may be connected to a single master unit.
  • the imaging device 10 is a device for capturing a subject on a bed and capturing image data.
  • a monochrome or color visible light camera, an infrared camera, a three-dimensional camera or the like can be used.
  • the imaging device 10 configured by the infrared LED illumination 100 and the near infrared camera 101 is adopted in order to enable watching of the target person even at night (even when the room is dark).
  • the imaging device 10 is installed to look over the entire bed 20 from the top of the bed 20 to the foot.
  • the imaging device 10 captures an image at a predetermined time interval (for example, 30 fps), and the image data is sequentially captured by the information processing device 11.
  • The information processing apparatus 11 is an apparatus that analyzes the image data captured from the imaging apparatus 10 in real time, detects the state and behavior of the subject 21 on the bed 20, and outputs the detection results to the display apparatus 13.
  • The information processing apparatus 11 includes, as specific functional modules, an image acquisition unit 110, an area setting unit 111, a preprocessing unit 112, a regressor 113, a score stabilization unit 114, a determination unit 115, a state display unit 116, and a storage unit 117.
  • The preprocessing unit 112, the regressor 113, the score stabilization unit 114, and the determination unit 115 constitute a state quantification unit that outputs a score quantifying the state of the target person 21.
  • The information processing apparatus 11 can be configured as a general-purpose computer including a CPU (processor), memory, storage (HDD, SSD, etc.), an input device (keyboard, mouse, touch panel, etc.), a communication interface, and the like.
  • Each module of the information processing apparatus 11 is realized by the CPU executing a program stored in the storage or the memory.
  • the configuration of the information processing apparatus 11 is not limited to this example.
  • Distributed processing may be performed by a plurality of computers, some modules may be executed on a cloud server, and some modules may be implemented as circuits such as an ASIC or FPGA.
  • the image acquisition unit 110 is a module for acquiring an image captured by the imaging device 10.
  • the image data input from the image acquisition unit 110 is temporarily stored in a memory or storage, and is used for area setting processing and status monitoring processing described later.
  • the area setting unit 111 is a module for setting a monitoring area for an image captured by the imaging device 10.
  • the monitoring area is a range (in other words, an image range used as an input of the regressor 113) to be subjected to the state monitoring process in the field of view of the imaging device 10. Details of the area setting process will be described later.
  • the preprocessing unit 112 is a module for performing necessary preprocessing on an image (hereinafter referred to as an “original image”) input from the image acquisition unit 110 in the state monitoring process. For example, the preprocessing unit 112 performs processing of clipping an image within the monitoring area from the original image (hereinafter, the clipped image is referred to as a “monitoring area image”). In addition, the preprocessing unit 112 may perform processing such as resizing (reduction), affine transformation, and luminance correction on the monitoring area image. Resizing (reduction) has the effect of shortening the calculation time of the regressor 113.
  • The affine transformation performs necessary distortion correction, for example deforming a bed that appears trapezoidal in the image into a rectangular shape, and can therefore be expected to normalize the input to the regressor 113 and improve estimation accuracy.
  • The luminance correction can be expected to improve estimation accuracy by, for example, reducing the influence of the illumination environment.
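  • For illustration, the following is a minimal preprocessing sketch assuming OpenCV, a BGR camera frame, and hypothetical monitoring-area and bed-corner coordinates; a perspective warp stands in here for the distortion correction described above, and histogram equalization for the luminance correction.

```python
import cv2
import numpy as np

def preprocess(frame, roi, bed_quad, out_size=(128, 128)):
    """Clip the monitoring area, warp the trapezoidal bed to a rectangle,
    and normalize brightness. roi = (x, y, w, h); bed_quad = the four bed
    corners in the original image, ordered TL, TR, BR, BL (assumed inputs)."""
    x, y, w, h = roi
    clipped = frame[y:y + h, x:x + w]                 # monitoring area image

    # Distortion correction: map the four bed corners onto a rectangle.
    src = np.float32([(px - x, py - y) for px, py in bed_quad])
    dst = np.float32([[0, 0], [out_size[0], 0],
                      [out_size[0], out_size[1]], [0, out_size[1]]])
    warped = cv2.warpPerspective(
        clipped, cv2.getPerspectiveTransform(src, dst), out_size)

    # Luminance correction: histogram equalization reduces lighting influence.
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)
```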
  • The regressor 113 is a module that, given a monitoring area image, outputs a score indicating the state of the subject 21 shown in that image (for example, the sleeping state, the wake-up state, or the bed-leaving state).
  • The regressor 113 takes an image showing the bed and the person as input and outputs a score that quantitatively indicates the state of the person with respect to the bed, based on a relationship model between features of the input image and the human state that has been constructed by machine learning. The training of the regressor 113 is assumed to be performed in advance (before shipping or operation of the system) by the learning device 12 using a large number of training images.
  • any model such as a neural network, a random forest, a support vector machine, etc. may be used.
  • a convolutional neural network (CNN) suitable for image recognition is used.
  • the score of this embodiment is also called a "state score.”
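  • As an illustration, the following is a minimal PyTorch sketch of a CNN regressor of the kind described; the layer structure, input size, and output scaling are assumptions, since the patent leaves these design details to the implementation.

```python
import torch
import torch.nn as nn

class StateRegressor(nn.Module):
    """CNN that maps a monitoring-area image to a scalar state score in [0, 2]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):                      # x: (N, 1, 128, 128) grayscale images
        z = self.features(x).flatten(1)
        return 2.0 * torch.sigmoid(self.head(z)).squeeze(1)   # continuous score in [0, 2]
```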
  • the score stabilization unit 114 is a module for suppressing rapid change and fluttering of the score output from the regressor 113.
  • The score stabilization unit 114 calculates the average of the current score obtained from the image of the current frame and the past scores obtained from the images of the immediately preceding two frames, and outputs this average as the stabilized score. This process is equivalent to applying a temporal low-pass filter to the time-series data of the score.
  • The score stabilization unit 114 may be omitted.
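  • A minimal sketch of this smoothing, as a sliding three-frame average:

```python
from collections import deque

class ScoreStabilizer:
    """Temporal low-pass filter: average the current score with the scores of
    the two preceding frames (a window of three frames), as described above."""
    def __init__(self, window=3):
        self.history = deque(maxlen=window)

    def update(self, score: float) -> float:
        self.history.append(score)
        return sum(self.history) / len(self.history)
```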
  • The determination unit 115 is a module that determines the behavior of the subject based on the score obtained by the regressor 113. Specifically, the determination unit 115 estimates what kind of behavior the subject has taken (for example, a wake-up action or a bed-leaving action) from the temporal change of the score (that is, from the transition of the "subject's state" indicated by the score). Details of the processing of the determination unit 115 will be described later.
  • the state display unit 116 is a module that displays on the display device 13 a graph (hereinafter referred to as a “state change graph”) indicating temporal change of the score output from the score stabilization unit 114 on a real time basis.
  • the storage unit 117 is a module for storing various data used by the watching support system 1 for processing.
  • the storage unit 117 stores, for example, setting information of a monitoring area, parameters used in preprocessing, parameters used in score stabilization processing, time series data of scores, parameters used in determination processing, and the like.
  • the setting of the monitoring area may be performed manually or automatically.
  • the area setting unit 111 may provide a user interface for allowing the user to input the area of the bed 20 in the image or the monitoring area itself.
  • the area setting unit 111 may detect the bed 20 from the image by object recognition processing, and set the monitoring area so as to include the detected area of the bed 20.
  • The area setting process is performed, for example, when the monitoring area has not yet been set (for example, at system installation) or when the monitoring area needs to be updated because the bed 20 or the imaging device 10 has been moved.
  • FIG. 3A is an example of the monitoring area set for the original image.
  • a monitoring area 30 is set by adding a margin of a predetermined width to the left side, the right side, and the upper side (foot side) of the area of the bed 20.
  • the width of the margin is set so that the whole body of the person (see FIG. 3B) rising on the bed 20 falls within the monitoring area 30.
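  • A minimal sketch of this kind of area computation, with hypothetical margin values, might look as follows:

```python
def monitoring_area(bed_box, margin_side, margin_foot, image_size):
    """Expand the bed bounding box (x, y, w, h) by margins on the left, right,
    and foot side, clamped to the image. Here the foot side is taken to be the
    upper side of the image, as in the installation of FIG. 2."""
    x, y, w, h = bed_box
    img_w, img_h = image_size
    x0 = max(0, x - margin_side)
    x1 = min(img_w, x + w + margin_side)
    y0 = max(0, y - margin_foot)          # extend toward the foot side (top of image)
    y1 = min(img_h, y + h)
    return x0, y0, x1 - x0, y1 - y0
```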
  • the human condition with respect to the bed is classified into three types from 0 to 2 in advance.
  • the "type 0 type” is a state in which a person is sleeping in the bed (referred to as “sleeping state” or “state 0")
  • the "type 1 type” is a state in which a person is rising on the bed (“wake up state Or “state 1”)
  • “type 2” is a state in which a person is separated from the bed (dismounted) (referred to as “bed leaving state” or “state 2”).
  • FIG. 4 is an example showing correspondence between time-series images representing a series of actions of a person who was sleeping rising and leaving the bed, and three types.
  • FIG. 5 schematically shows the machine learning of the regressor 113.
  • images obtained by photographing an actual patient room and the like are collected, and each image is classified into type 0 to type 2.
  • a portion corresponding to the monitoring area of each image is clipped, and the type number (0, 1, 2) is assigned as a label to generate a set of training images.
  • it is preferable to prepare a sufficient number of images and it is preferable to prepare images of various variations for each type.
  • The specific layer structure of the neural network, the filters, the activation functions, the specification of the input image, and the like may be designed appropriately according to the implementation and the required accuracy.
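  • For illustration, a sketch of how such a regressor might be trained on the labeled monitoring-area images described above (assuming the hypothetical StateRegressor sketched earlier and a dataset yielding image/label pairs):

```python
import torch
from torch.utils.data import DataLoader

def train(regressor, dataset, epochs=10, lr=1e-3):
    """Fit the regressor by mean squared error against the type labels
    (0, 1, or 2), so that ambiguous, in-between images are naturally pulled
    toward intermediate scores."""
    opt = torch.optim.Adam(regressor.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for images, labels in loader:          # labels: tensor of 0.0 / 1.0 / 2.0
            opt.zero_grad()
            loss = loss_fn(regressor(images), labels.float())
            loss.backward()
            opt.step()
```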
  • FIG. 6 schematically shows the ability of the regressor 113.
  • the regressor 113 models the correspondence between the “feature amount” of the image and the “score” indicating the human state.
  • the regressor 113 extracts the feature amount from the input image according to the relationship model, and calculates and outputs a score corresponding to the feature amount.
  • Although FIG. 6 shows the relationship model as a two-dimensional linear model for convenience of description, the actual feature space is multidimensional and the relationship model is non-linear.
  • The score output from the regressor 113 is a real value (continuous value) in the range of 0 to 2.
  • When a typical image of a given type is input, the output score becomes that type's value or a value very close to it; for example, for a typical wake-up image the score becomes 1 or a value very close to 1.
  • Among the input images there are also images that are ambiguous as to which type they belong to, such as a state in which the upper body is just beginning to rise from the sleeping position or a state in which the person is about to stand up from the bed.
  • For such an image, the extracted feature lies between the two types, so an intermediate score between the two types is output.
  • For example, a state intermediate between type 0 and type 1 yields a score larger than 0 and smaller than 1.
  • In the present embodiment, the regressor 113 is used to estimate the human state in the input image. Therefore, highly accurate state estimation can be performed on unknown input images, and a reasonable estimation result can be obtained even when an image of an intermediate state is input. Furthermore, a reasonable estimation result can be expected even for images in which head detection is difficult, for example when the subject is covered with a futon, when confusing people or objects are present around the subject, or when the lighting environment differs from normal.
  • In step S70, the image acquisition unit 110 captures an image of one frame from the imaging device 10.
  • the acquired original image is temporarily stored in the storage unit 117.
  • the preprocessing unit 112 clips the monitoring area image from the original image, and executes resizing, affine transformation, luminance correction and the like as necessary (step S71).
  • the regressor 113 inputs the monitoring area image and outputs the corresponding score (step S72).
  • the score stabilization unit 114 performs stabilization processing of the score obtained in step S72 (step S73), and delivers the obtained score to the determination unit 115.
  • The determination unit 115 classifies the current state of the subject 21 into one of the sleeping state, the wake-up state, and the bed-leaving state based on the score (a continuous value).
  • If the score is smaller than a threshold th1, the state is classified as the sleeping state (steps S74 and S75).
  • If the score is at least th1 and smaller than a threshold th2, the state is classified as the wake-up state (steps S76 and S77).
  • If th2 ≤ score, the state is classified as the bed-leaving state (step S78).
  • the detection sensitivity can be adjusted by changing the thresholds th1 and th2.
  • the state display unit 116 outputs the score as the detection result and the state of the target person 21 to the display device 13 (step S79).
  • the above steps S70 to S79 are executed for each frame until the system is completed (step S80).
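  • For illustration, a minimal sketch of this threshold-based classification; the threshold values below are hypothetical, and the patent only notes that detection sensitivity can be tuned by changing th1 and th2:

```python
def classify_state(score, th1=0.5, th2=1.5):
    """Map the continuous state score (0-2) to a discrete state label."""
    if score < th1:
        return "sleeping"      # state 0
    elif score < th2:
        return "wake-up"       # state 1
    return "bed-leaving"       # state 2
```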
  • FIG. 8 shows an example of the status display screen output to the display device 13.
  • A state image 80 graphically showing the current state of each of the watched subjects A to D is displayed.
  • the state change graph 81 of each of the subjects A to D is displayed on the right side of the screen.
  • the horizontal axis of the state change graph 81 is time, and the vertical axis is a state score.
  • A marker 82 indicating the current time is displayed so that the state score of each of the subjects A to D at the current time can be seen.
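  • A minimal, illustrative sketch of such a state change graph using matplotlib (the layout details are assumptions):

```python
import matplotlib.pyplot as plt

def plot_state_change(times, scores, subject="A"):
    """Plot the state score over time for one subject and mark the current
    time, in the spirit of the state change graph 81."""
    fig, ax = plt.subplots(figsize=(6, 2))
    ax.plot(times, scores)
    ax.axvline(times[-1], linestyle="--")   # marker for the current time
    ax.set_xlabel("time")
    ax.set_ylabel("state score")
    ax.set_yticks([0, 1, 2])
    ax.set_title(f"Subject {subject}")
    return fig
```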
  • As described above, according to the present embodiment, the state of the subject 21 is estimated by the regressor 113, so the state or behavior of the subject 21 can be determined accurately. Further, since the state change graph 81 indicating the temporal change of the state score quantifying the state of the subject 21 is output, the watching side (nurse, caregiver, etc.) can easily confirm changes and trends in the state of the subject 21. Once such changes and trends are known, the behavior of the subject 21 can be predicted to prevent danger before it occurs, or the daily behavior pattern of the subject 21 can be grasped and used for watching.
  • In the first embodiment, the state of the subject 21 is estimated by the regressor, whereas in the present embodiment the head of the subject 21 is detected and the state (degree of danger) of the subject 21 is estimated from the position and state of the head.
  • FIG. 9 is a block diagram schematically showing a hardware configuration and a functional configuration of the watching support system 1 of the second embodiment.
  • The difference in configuration from the first embodiment (FIG. 1) is that the state quantification unit is composed of a detection unit 90, a state recognition unit 91, and a determination unit 92, and that the functions of the area setting unit 93 and the state display unit 94 differ.
  • the configuration specific to the present embodiment will be mainly described, and the description overlapping with the first embodiment will be omitted.
  • The detection unit 90 is a module that analyzes the image acquired by the image acquisition unit 110 and detects the human body of the watching target person 21, or a part thereof (head, face, upper body, etc.), from the image. Any method may be used to detect a human body or a part thereof from an image; for example, an object detection algorithm using a classifier based on classical Haar-like features or HoG features, or a more recent technique such as Faster R-CNN, can preferably be used.
  • The detection unit 90 of the present embodiment detects the head (the portion above the neck) 22 of the target person 21 with a classifier using Haar-like features, and outputs, as the detection result, the position (x, y) and size (number of vertical and horizontal pixels) of the head 22.
  • the position (x, y) of the head 22 is represented by, for example, image coordinates of a central point of a rectangular frame surrounding the head 22.
  • Although the detection unit 90 of this embodiment outputs the detection result as a position and size in the image coordinate system, it may instead convert the image coordinate system into a spatial coordinate system and output the three-dimensional position or three-dimensional size of the subject 21 in that spatial coordinate system.
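  • For illustration, a minimal detection sketch; OpenCV's bundled frontal-face Haar cascade is used here as a stand-in for a purpose-trained head detector, so the cascade file and parameters are assumptions rather than the patent's own detector:

```python
import cv2

# Stand-in for the detection unit 90.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_head(gray_image):
    """Return the center (x, y) and size (w, h) of the first detection,
    or None if nothing is found."""
    boxes = cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = boxes[0]
    return (x + w // 2, y + h // 2), (w, h)   # center point and pixel size
```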
  • the state recognition unit 91 is a module that recognizes the state of the subject 21 detected by the detection unit 90.
  • The state recognition unit 91 of the present embodiment recognizes the state of the head 22 of the target person 21, specifically (1) the orientation of the head 22, (2) the moving speed of the head 22, and (3) the moving direction of the head 22.
  • The orientation of the head 22 may be recognized, for example, based on the positional relationship of the facial organs (eyes, nose, mouth, etc.) within the image of the head 22, or by using a plurality of classifiers each trained for a different head orientation, or another algorithm may be used.
  • As the orientation, a continuous value (angle) may be calculated, or it may simply be judged which of N predetermined directions (for example, rightward, frontal, and leftward) the orientation corresponds to.
  • The orientations about the three axes of yaw, pitch, and roll may be calculated, or only the orientation in the image coordinate system (within the xy plane) may be calculated.
  • the moving speed of the head 22 is the moving amount of the head 22 per predetermined time.
  • the moving speed can be obtained by calculating the distance between the position of the head 22 in the image of a plurality of frames earlier and the position of the head 22 in the latest image.
  • the moving direction of the head 22 can be calculated, for example, from the direction (angle) of a line connecting the position of the head 22 in the image of a plurality of frames earlier and the position of the head 22 in the latest image.
  • the movement velocity and the movement direction may be combined to obtain the movement vector of the head 22.
  • the movement speed, movement direction, and movement vector in the real space may be calculated by converting the image coordinate system into a space coordinate system.
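  • A minimal sketch of these motion quantities in the image plane (positions in pixels, dt in seconds; conversion to real-space units would require the coordinate transformation mentioned above):

```python
import math

def head_motion(prev_pos, cur_pos, dt):
    """Moving speed, moving direction, and movement vector of the head,
    computed from its position a fixed time earlier and its current position."""
    dx, dy = cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1]
    speed = math.hypot(dx, dy) / dt               # pixels per second
    direction = math.degrees(math.atan2(dy, dx))  # angle in the xy plane
    return speed, direction, (dx, dy)
```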
  • The determination unit 92 is a module that determines whether the state of the subject 21 is a safe state or a dangerous state based on the results of the detection unit 90 and the state recognition unit 91. Specifically, the determination unit 92 performs a process of determining the "degree of danger" of at least one of the head states recognized by the state recognition unit 91 (orientation, moving speed, moving direction, and movement vector), using "determination criteria" for evaluating and judging a dangerous state.
  • In the present embodiment, a plurality of determination criteria are set in advance in the determination criteria storage unit within the storage unit 117, and the determination unit 92 switches the determination criterion it uses according to the position at which the subject 21 (head 22) is detected. This feature will be described in detail later.
  • the area setting unit 93 is a module for setting a monitoring area and a determination area on an image.
  • the state display unit 94 is a module that displays in real time a graph indicating temporal change of the risk score on the display device 13.
  • FIGS. 10A to 10C are examples of the monitoring area and the determination area set for the image.
  • the image acquisition unit 110 acquires an image from the imaging device 10 (FIG. 10A).
  • The area setting unit 93 lets the user input the monitoring area 30 and the points 40 to 43 at the four corners of the bed, and sets the quadrangle enclosed by the four points 40 to 43 as the bed area 44 (FIG. 10B).
  • The area setting unit 93 then calculates the ranges of the four determination areas A1 to A4 based on the bed area 44 (FIG. 10C).
  • the area setting unit 93 stores the information of the monitoring area 30 and the bed area 44 (the coordinates of the four corners of the bed area 44) and the information of the four determination areas A1 to A4 (the coordinates of the four corners of each determination area) in the storage unit 117. Store and complete the setting process.
  • the determination area A1 is an area set on the head side of the bed 20, and corresponds to the range in which the head 22 may exist when the subject 21 sleeps in the bed 20.
  • The determination area A2 is an area set at the center of the foot side of the bed 20, and corresponds to the range in which the head 22 may exist when the subject 21 raises the upper body from the sleeping state or when the subject 21 gets down or falls from the foot side of the bed 20.
  • The determination area A3 is an area set on the left of the foot side of the bed 20, and corresponds to the range in which the head 22 may exist when the subject 21 is seated on the left edge of the bed 20 or gets down or falls from the left side of the bed 20.
  • The determination area A4 is an area set on the right of the foot side of the bed 20, and corresponds to the range in which the head 22 may exist when the subject 21 is seated on the right edge of the bed 20 or gets down or falls from the right side of the bed 20. As shown in FIG. 10C, the determination areas A2 to A4 extend beyond the bed area 44.
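  • One plausible way to derive such areas from the bed area is sketched below; the split between head and foot halves and the outward margins are assumptions, since the patent does not give exact proportions:

```python
def determination_areas(bed_area, side_margin, foot_margin):
    """Derive rectangles (x, y, w, h) for A1-A4 from the bed area, with A2-A4
    extending beyond the bed by the given margins. Image y grows downward and
    the foot side of the bed is toward the top of the image here."""
    x, y, w, h = bed_area
    head_h = h // 2                       # assumed head-side portion of the bed
    foot_h = h - head_h
    a1 = (x, y + foot_h, w, head_h)                                       # head side
    a2 = (x + w // 4, y - foot_margin, w // 2, foot_h + foot_margin)      # center, foot side
    a3 = (x - side_margin, y - foot_margin,
          w // 4 + side_margin, foot_h + foot_margin)                     # left of foot side
    a4 = (x + 3 * w // 4, y - foot_margin,
          w // 4 + side_margin, foot_h + foot_margin)                     # right of foot side
    return {"A1": a1, "A2": a2, "A3": a3, "A4": a4}
```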
  • the reason for setting a plurality of determination areas in this way is that the evaluation of the safe state or the dangerous state may change depending on where the subject 21 is present on the bed 20.
  • For example, when the head 22 of the subject 21 is present in the determination area A1, the risk of the subject 21 falling from the bed 20 can be said to be low.
  • On the other hand, when the head 22 of the subject 21 is present in the determination area A3 and the head 22 faces left, it is considered that the subject 21 is about to get off the bed 20 of his or her own volition.
  • For the determination area A4, the determination is made with left and right reversed relative to the determination area A3.
  • FIG. 11A illustrates an example of the data structure of the determination reference set in the storage unit 117.
  • FIG. 11A is an example of a determination criterion for head orientation. Symbols such as "(-1, -1)", "(-1, 0)", ... indicate the head orientation (eight directions) as shown in FIG. 11B, and the value of the determination criterion indicates the degree of danger: the larger the value, the higher the degree of danger, with 1 being the lowest and 5 the highest. As described above, since the evaluation of a safe or dangerous state differs among the determination areas A1 to A4, a different determination criterion is associated with each determination area in the determination criteria storage unit within the storage unit 117. Although the example of FIG. 11A is a determination criterion for head orientation, determination criteria may also be set for the other items used for evaluation by the determination unit 92, such as the moving speed and moving direction of the head.
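  • A minimal sketch of such a criteria table; the danger values below are illustrative assumptions, not the patent's figures, with the eight orientations encoded as (dx, dy) steps as in FIG. 11B:

```python
# Danger degree (1 = safest ... 5 = most dangerous) per head orientation,
# defined separately for each determination area (values are hypothetical).
CRITERIA = {
    "A1": {(-1, -1): 1, (-1, 0): 1, (-1, 1): 1, (0, -1): 2,
           (0, 1): 2, (1, -1): 3, (1, 0): 3, (1, 1): 3},
    "A3": {(-1, -1): 4, (-1, 0): 1, (-1, 1): 4, (0, -1): 5,
           (0, 1): 5, (1, -1): 4, (1, 0): 3, (1, 1): 4},
    # Criteria for A2 and A4 would be defined in the same way.
}

def danger_of_orientation(area, orientation):
    """Look up the danger degree for a head orientation in a given area."""
    return CRITERIA[area][orientation]
```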
  • (Example 1) An example of the state monitoring process of Example 1 of the second embodiment will be described with reference to FIGS. 12, 13, and 14.
  • FIG. 12 is a flowchart of the state monitoring process of Example 1 executed by the information processing apparatus 11.
  • FIG. 13 is an example of the danger degree determination.
  • FIG. 14 is an example of the state display screen displayed on the display device 13.
  • In step S60, the image acquisition unit 110 acquires an image from the imaging device 10.
  • The acquired image is temporarily stored in the storage unit 117.
  • In step S61, the detection unit 90 detects the head 22 of the target person 21 from the monitoring area in the image acquired in step S60.
  • In step S62, the state recognition unit 91 estimates the orientation of the head detected in step S61.
  • FIG. 13 illustrates an example in which the head 22 is detected in the determination area A3 and the orientation of the head 22 is estimated to be the direction of the arrow 70.
  • In step S63, the determination unit 92 reads the determination criterion corresponding to the determination area A3 from the storage unit 117.
  • Reference numeral 71 in FIG. 13 schematically shows the determination criterion corresponding to the determination area A3, with a danger degree (1 to 5) set for each of the eight directions (arrows).
  • In step S64, the determination unit 92 determines whether the orientation of the head 22 (arrow 70) is a safe direction or a dangerous direction, using the determination criterion corresponding to the determination area A3.
  • The determination result (danger degree score) of the determination unit 92 is stored in the storage unit 117.
  • In step S65, the state display unit 94 outputs the danger degree score and the state of the target person 21 to the display device 13.
  • The above steps S60 to S65 are executed for each frame until the system is shut down (step S66).
  • FIG. 14 shows an example of the status display screen output to the display device 13.
  • A state image 83 graphically showing the current state of each of the watched subjects A to D is displayed.
  • In the example of FIG. 14, the danger degree score of subject A is 1, those of subjects B and D are 2, and that of subject C is 5.
  • An alert, such as a blinking display, may be issued when the danger degree score is 3 or more.
  • The state change graph 84 of each of the subjects A to D is displayed on the right side of the screen.
  • The horizontal axis of the state change graph 84 is time, and the vertical axis is the danger degree score.
  • A marker 85 indicating the current time is displayed so that the score of each of the subjects A to D at the current time can be seen.
  • According to the present embodiment, the state change graph 84 indicating the temporal change of the danger degree score quantifying the state (degree of danger) of the target person 21 is output, so the watching side (nurse, caregiver, etc.) can easily confirm changes and trends in the degree of danger of the target person 21. Therefore, as in the first embodiment, the danger to the target person 21 can easily be predicted and accidents and the like can be prevented before they occur.
  • (Example 2) An example of the state monitoring process of Example 2 of the second embodiment will be described with reference to FIGS. 15 and 16.
  • FIG. 15 is a flowchart of the state monitoring process of Example 2 executed by the information processing apparatus 11, and FIG. 16 is an example of the danger degree determination.
  • In step S150, the image acquisition unit 110 acquires an image from the imaging device 10.
  • The acquired image is temporarily stored in the storage unit 117.
  • In step S151, the detection unit 90 detects the head 22 of the target person 21 from the image acquired in step S150.
  • Information on the position of the detected head 22 is stored in the storage unit 117 in association with time information or the frame number of the image.
  • Next, the state recognition unit 91 reads from the storage unit 117 the position of the head 22 in the image captured a predetermined time earlier (for example, one second) (step S152), and, based on that position and the position of the head 22 detected in step S151, calculates the moving speed (movement amount per predetermined time) and moving direction of the head 22 and obtains the movement vector of the head 22 (step S153).
  • FIG. 16 shows an example in which the head 22 is detected in the determination area A3 and the movement vector of the head 22 is calculated as indicated by the arrow 150.
  • In step S154, the determination unit 92 reads the determination criteria corresponding to the determination area A3 from the storage unit 117.
  • Reference numerals 151 and 152 in FIG. 16 schematically show the determination criteria corresponding to the determination area A3.
  • Reference numeral 151 is an example of the determination criterion regarding the moving direction.
  • Reference numeral 152 is an example of the determination criterion regarding the moving speed (movement amount), with a longer arrow indicating a higher degree of danger; in this example, four danger levels of 1 to 4 are assigned according to the moving speed.
  • In step S155, the determination unit 92 determines the degree of danger of the movement vector 150 of the head 22 using the determination criteria 151 and 152.
  • For example, the product or the sum of the danger degree for the moving direction and the danger degree for the moving speed can be used as the danger degree of the movement vector 150.
  • In the example of FIG. 16, the movement is in the direction of getting down from the bed, but the moving speed (movement amount) is very high, so it is judged to be not a normal getting down from the bed but a fall or tumble from the bed, and the degree of danger is considered to be high.
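  • A minimal sketch of this combination (using the product of the two danger degrees; the example values in the comment are illustrative):

```python
def danger_of_movement(direction_danger, speed_danger, combine="product"):
    """Danger degree of the head movement vector, combining the danger for the
    moving direction with the danger for the moving speed (product or sum)."""
    if combine == "product":
        return direction_danger * speed_danger
    return direction_danger + speed_danger

# e.g. moving toward the edge of the bed (direction danger 3) at a very high
# speed (speed danger 4) -> combined danger 12, flagged as a likely fall.
```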
  • the determination result (risk score) of the determination unit 92 is stored in the storage unit 117.
  • the state display unit 94 outputs the degree of danger score and the state of the subject 21 to the display device 13 (step S156).
  • the above steps S150 to S156 are executed for each frame until the system is completed (step S157).
  • The state display screen is the same as that of Example 1 (FIG. 14), and the same effects as in Example 1 can be obtained by the method of this example.
  • (Example 3) An example of the state monitoring process of Example 3 of the second embodiment will be described with reference to FIG. 17.
  • FIG. 17 is a flowchart of the state monitoring process of Example 3 executed by the information processing apparatus 11.
  • In step S100, the image acquisition unit 110 acquires an image from the imaging device 10.
  • the acquired image is temporarily stored in the storage unit 117.
  • In step S101, the detection unit 90 detects the head 22 of the target person 21 from the image acquired in step S100.
  • Information on the position of the detected head 22 is stored in the storage unit 117 in association with time information or a frame number of the image.
  • Next, the state recognition unit 91 reads from the storage unit 117 the position of the head 22 in the image captured a predetermined time earlier (for example, one second) (step S102), and, based on that position and the position of the head 22 detected in step S101, calculates the moving speed of the head 22 (movement amount per predetermined time) (step S103).
  • In step S104, the determination unit 92 reads from the storage unit 117 the determination criterion corresponding to the determination area in which the head 22 was detected.
  • In this example, a determination criterion associating the moving speed with a degree of danger is set for each determination area. For example, when the head 22 is detected in the determination area A1, the subject 21 should be in the sleeping state, and the rising (wake-up) action is assumed as the action the subject 21 may take next; accordingly, in the determination criterion for the determination area A1, a relatively low degree of danger is assigned to moving speeds of, for example, 20 cm/sec or less.
  • In step S105, the determination unit 92 determines the degree of danger of the moving speed of the head 22 using the above determination criterion. Subsequently, the state display unit 94 outputs the danger degree score and the state of the subject 21 to the display device 13 (step S106). The above steps S100 to S106 are executed for each frame until the system is shut down (step S107).
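  • A minimal sketch of a speed-based criterion for the determination area A1; the 20 cm/sec figure is the example given above, while the other breakpoints and danger values are assumptions:

```python
SPEED_CRITERIA = {
    # (upper speed limit in cm/sec, danger degree); checked in order.
    "A1": [(20.0, 1), (40.0, 2), (80.0, 3), (float("inf"), 4)],
}

def danger_of_speed(area, speed_cm_per_sec):
    """Return the danger degree for a head moving speed in the given area."""
    for max_speed, danger in SPEED_CRITERIA[area]:
        if speed_cm_per_sec <= max_speed:
            return danger
```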
  • The state display screen is the same as that of Example 1 (FIG. 14), and the same effects as in Example 1 can be obtained by the method of this example.
  • (Example 4) An example of the state monitoring process of Example 4 of the second embodiment will be described with reference to FIG. 18.
  • FIG. 18 is a flowchart of the state monitoring process of Example 4 executed by the information processing apparatus 11.
  • In step S110, the image acquisition unit 110 acquires an image from the imaging device 10.
  • the acquired image is temporarily stored in the storage unit 117.
  • The detection unit 90 detects the head 22 of the target person 21 from the image acquired in step S110.
  • Information on the position of the detected head 22 is stored in the storage unit 117 in association with time information or a frame number of the image.
  • Next, the state recognition unit 91 calculates the orientation, moving speed, and moving direction of the head 22.
  • The specific calculation methods may be the same as those described in Examples 1 to 3.
  • In step S113, the determination unit 92 reads from the storage unit 117 the determination criteria corresponding to the determination area in which the head 22 was detected. The determination unit 92 then calculates the degree of danger for the orientation of the head 22 (step S114), the degree of danger for the movement vector of the head 22 (step S115), and the degree of danger for the moving speed of the head 22 (step S116). The specific calculation methods may be the same as those described in Examples 1 to 3.
  • In step S117, the determination unit 92 integrates the three danger degree values obtained in steps S114 to S116 to calculate an integrated danger degree score. For example, the maximum of the three values may be selected as the integrated score, or the average, product, or sum of two or all three values may be used as the integrated score.
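  • A minimal sketch of this integration step, covering the combinations mentioned above:

```python
def integrated_danger(d_orientation, d_vector, d_speed, mode="max"):
    """Combine the three danger degrees into one integrated score
    (maximum, average, product, or sum, as allowed above)."""
    values = [d_orientation, d_vector, d_speed]
    if mode == "max":
        return max(values)
    if mode == "mean":
        return sum(values) / len(values)
    if mode == "product":
        product = 1
        for v in values:
            product *= v
        return product
    return sum(values)     # "sum"
```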
  • the state display unit 94 outputs the integrated score and the state of the target person 21 to the display device 13 (step S118).
  • the above steps S110 to S118 are executed for each frame until the system is completed (step S119).
  • The state display screen is the same as that of Example 1 (FIG. 14), and the same effects as in Example 1 can be obtained by the method of this example.
  • In the above embodiments, the sleeping / wake-up / bed-leaving state is estimated from the image, and the wake-up and bed-leaving behaviors of the subject are detected.
  • However, the states to be estimated and the behaviors to be detected are not limited to these. Various "human states" and "behaviors" can be handled as long as they produce different features in the image; for example, behaviors such as eating and reading can also be detected.
  • 1: watching support system, 10: imaging device, 11: information processing device, 12: learning device, 13: display device, 100: illumination, 101: near-infrared camera, 110: image acquisition unit, 111: area setting unit, 112: preprocessing unit, 113: regressor, 114: score stabilization unit, 115: determination unit, 116: state display unit, 117: storage unit, 20: bed, 21: target person, 22: head, 30: monitoring area, 40-43: points at the four corners of the bed, 44: bed area, A1-A4: determination areas, 70: arrow indicating head orientation, 71: determination criterion, 80: state image, 81: state change graph, 82: marker, 83: state image, 84: state change graph, 85: marker, 90: detection unit, 91: state recognition unit, 92: determination unit, 93: area setting unit, 94: state display unit, 150: head movement vector, 151: determination criterion for moving direction, 152: determination criterion for moving speed

Abstract

This monitoring system for assisting monitoring of a subject on a bed has: an image acquisition unit which acquires an image from an imaging device installed so as to photograph a monitoring area including the bed of the subject; a status quantification unit which, on the basis of the image of the monitoring area obtained by the image acquisition unit, outputs a score obtained by quantifying the status of the subject; and a status display unit which displays, on a display device, a graph indicating the temporal change of the score outputted from the status quantification unit.

Description

Monitoring system, control method therefor, and program

TECHNICAL FIELD: The present invention relates to a technique for supporting the watching of a subject on a bed.

In order to prevent accidents such as falls from a bed, systems that support the watching of patients in hospitals, nursing facilities, and the like are known. Patent Document 1 discloses a configuration in which a human-presence sensor, a respiration sensor, a body movement sensor, a door sensor, and the like are installed in each room of a facility so that the condition of each resident can be grasped without contact; sensor data from each sensor are collected by a server installed in the facility, and the current-status display screen of a management PC graphically displays each resident's respiration rate, body movement amplitude, and human-presence detection results so that the situations of the facility's residents can be grasped instantly. Patent Document 2 discloses displaying graphs of body movement, respiration, and the like. Patent Document 3 discloses that an alert icon is displayed when a patient fall occurs.
特開2017-016611号公報JP, 2017-016611, A 特開2009-082511号公報JP, 2009-082511, A 特開2008-289676号公報JP, 2008-289676, A
 特許文献1、2のように、センサにより計測された体動をグラフ表示することで、見守り側(看護師、医師、介護士など)は患者の動きの有無や大きさを監視することができる。しかしながら、動きの有無やその大きさだけでは、その動きが危険行動に因るものかどうかは判別できない。例えば、患者がベッドから離れたり転落しそうになっているのか、あるいは単にベッド上で伸びをしただけなのかは、体動センサの出力だけでは区別することはできない。言い換えると、体動センサの出力には、ノイズ(検知したい行動以外の動きに因る信号)が多く含まれるため、それだけでは患者の状態や行動を精度良く検知ないし予測することはできない。また、特許文献3のように転倒などが発生した場合にアラートを出力するだけでは、ベッドからの転落や転倒などの事故を未然に防ぐことはできない。 As in Patent Documents 1 and 2, the watching side (nurse, doctor, carer, etc.) can monitor the presence or absence of the movement of the patient and the size by displaying a graph of the body movement measured by the sensor. . However, it can not be determined whether or not the movement is due to the dangerous behavior only by the presence or absence and the size of the movement. For example, it can not be distinguished from the output of the motion sensor alone whether the patient is about to leave or fall from the bed or merely stretches on the bed. In other words, the output of the body movement sensor contains many noises (signals due to movements other than the action desired to be detected), so that it is not possible to accurately detect or predict the state or action of the patient. In addition, it is not possible to prevent an accident such as falling from a bed or falling by simply outputting an alert when a fall or the like occurs as in Patent Document 3.
 本発明は、上記実情に鑑みなされたものであって、ベッド上の対象者の状態ないし行動を高精度かつ高信頼に検知するための技術を提供することを目的とする。 The present invention has been made in view of the above-described circumstances, and an object thereof is to provide a technique for detecting the state or behavior of a subject on a bed with high accuracy and high reliability.
 上記目的を達成するために、本発明では、画像を基に対象者の状態を定量化したスコアを生成し、そのスコアの時間変化をグラフ表示する、という方法を採用する。 In order to achieve the above object, the present invention adopts a method of generating a score that quantifies the condition of a subject based on an image and graphically displaying a time change of the score.
 具体的には、本発明の第一態様は、ベッド上の対象者の見守りを支援する見守り支援システムであって、前記対象者のベッドを含む監視領域を撮影するように設置された撮像装置から画像を取得する画像取得部と、前記画像取得部により得られた前記監視領域の画像に基づいて、前記対象者の状態を定量化したスコアを出力する状態定量化部と、前記状態定量化部から出力されるスコアの時間的な変化を示すグラフを表示装置に表示する状態表示部と、を有することを特徴とする見守り支援システムを提供する。 Specifically, a first aspect of the present invention is a watching support system that supports watching of a subject on a bed, and an imaging apparatus installed to shoot a monitoring area including the bed of the subject An image acquisition unit for acquiring an image, a state quantification unit for outputting a score obtained by quantifying the condition of the subject based on the image of the monitoring area acquired by the image acquisition unit, and the state quantification unit And a status display unit for displaying on the display device a graph indicating temporal change of the score output from the monitoring device.
 この構成によれば、対象者の状態を定量化したスコアの時間的な変化を示すグラフを出力するので、見守り側(看護師、介護者など)は対象者の状態の変化や傾向を簡単に確認することができる。そして、状態の変化や傾向がわかると、例えば、対象者の行動を予測し危険の発生を未然に防止できたり、対象者の毎日の行動パターンを把握し見守りに役立てたりすることができる。 According to this configuration, a graph indicating temporal changes in the score that quantified the condition of the subject is output, so the watching side (nurse, carer, etc.) can easily change the condition or trend of the subject. It can be confirmed. Then, if the change or tendency of the state is known, for example, the action of the subject can be predicted to prevent the occurrence of danger, or the daily action pattern of the subject can be grasped and used for watching.
 前記状態定量化部は、ベッドと人が写る画像を入力とし前記ベッドに対する前記人の状態を示すスコアを出力するように機械学習された回帰器を有しており、前記監視領域の画像を前記回帰器に入力することにより、前記対象者の状態を定量化したスコアを取得するとよい。回帰器を用いて入力画像における対象者の状態を推定するので、未知の入力画像に対して高精度な状態推定を行うことができる。また、回帰器のスコアは連続値で出力されるので、対象者の状態が明確に分類できないものであったとしても、妥当な推定結果を得ることができる。さらに、対象者が布団を被っていたり、対象者の周囲にまぎらわしい人や物体が存在していたり、照明環境が通常と異なるなど、頭部検出が困難な画像であっても、妥当な推定結果を得ることが期待できる。 The state quantifying unit has a regressor machine-learned to receive a bed and an image of a person and output a score indicating the state of the person relative to the bed, and the image of the monitoring area is It is good to acquire the score which quantified the said subject's state by inputting into a regressor. Since the state of the subject in the input image is estimated using the regressor, highly accurate state estimation can be performed on the unknown input image. Further, since the score of the regressor is output as a continuous value, it is possible to obtain a reasonable estimation result even if the condition of the subject can not be clearly classified. Furthermore, even if the target person is covered with a futon, there are strange people or objects around the target person, or the lighting environment is different from normal, etc., the image is difficult to detect the head. You can expect to get
 前記回帰器は、ニューラルネットワークであるとよい。ニューラルネットワークを用いることにより、高精度かつ頑健性の高い状態推定を行うことができる。 The regressor may be a neural network. By using a neural network, highly accurate and highly robust state estimation can be performed.
 前記ベッドに対する前記人の状態があらかじめ複数の類型に分類され、かつ、前記複数の類型のそれぞれに異なるスコアが割り当てられており、前記回帰器は、前記人の状態が2つの類型のあいだの状態である場合に、前記2つの類型のスコアのあいだの値を出力するように構成されているとよい。このような設計とすることにより、人がとり得るさまざまな状態を一次元のスコアで表現できるようになるので、「人の状態」を数学的にあるいはプログラムにおいて取り扱うのが容易になり、例えば後段の処理(状態表示部によるグラフ出力など)を極めて簡易に構築できる。 The state of the person with respect to the bed is classified in advance into a plurality of types, and different scores are assigned to each of the plurality of types, and the regressor is configured to determine that the state of the person is between two types. If so, it may be configured to output a value between the two types of scores. Such a design makes it possible to express various human-readable states as a one-dimensional score, making it easy to handle "human states" mathematically or in programs, for example, The processing of (the graph output by the status display unit etc.) can be constructed extremely easily.
 例えば、前記複数の類型は、前記人が前記ベッドに寝ている状態0、前記人が前記ベッド上で起き上がっている状態1、および、前記人が前記ベッドから離れている状態2を含むとよい。少なくとも状態0~状態2の3種類の状態が判別できれば、見守りのニーズが高い「起床」と「離床」の検知が可能になるからである。 For example, the plurality of types may include state 0 in which the person is sleeping in the bed, state 1 in which the person is rising on the bed, and state 2 in which the person is away from the bed . This is because if it is possible to distinguish at least three types of states of state 0 to state 2, it is possible to detect “wake up” and “getting out of bed” which have a high need for watching.
 前記対象者の状態を定量化したスコアは、前記対象者の状態の危険度合いを表すスコアであってもよい。対象者の状態の危険度合いをグラフ表示することにより、見守り側は、対象者の危険を予測することが容易にでき、事故等の発生を未然に防ぐことが可能となる。 The score quantifying the condition of the subject may be a score representing the degree of danger of the condition of the subject. By displaying the degree of danger of the condition of the subject graphically, it is possible for the watching side to easily predict the danger of the subject and to prevent the occurrence of an accident or the like.
 前記監視領域の画像内のベッドの領域に基づき設定される複数の判定領域ごとに、危険な状態を判定するための判定基準があらかじめ設定されている判定基準記憶部を有し、前記状態定量化部は、前記監視領域の画像から前記対象者の頭部を検出する検出部を有しており、前記頭部が検出された位置に対応する判定領域の判定基準を用いて、前記対象者の状態の危険度合いを表すスコアを算出するとよい。この構成によれば、ベッド上の対象者が安全な状態にあるのか危険な状態にあるのかを、画像から簡単にかつ精度良く判定することが可能となる。 A determination criteria storage unit in which determination criteria for determining a dangerous state are set in advance for each of a plurality of determination areas set based on the bed area in the image of the monitoring area, the state quantification The unit has a detection unit that detects the head of the subject from the image of the monitoring area, and the judgment unit of the judgment area corresponding to the position at which the head is detected It is good to calculate the score showing the degree of danger of a state. According to this configuration, it is possible to easily and accurately determine from the image whether the subject on the bed is in a safe state or in a dangerous state.
 The state quantification unit preferably calculates the score representing the degree of danger of the state of the subject on the basis of at least one of the orientation of the head, the moving speed of the head, the moving direction of the head, and the movement vector of the head.
 The present invention can be understood as a watching support system having at least a part of the above configurations or functions. The present invention can also be understood as a watching support method or a control method of a watching support system that includes at least a part of the above processing, as a program for causing a computer to execute these methods, or as a computer-readable recording medium on which such a program is non-transitorily recorded. The above configurations and processes can be combined with each other to constitute the present invention as long as no technical contradiction arises.
 According to the present invention, the state or behavior of a subject on a bed can be detected with high accuracy and high reliability.
FIG. 1 is a block diagram schematically showing the hardware configuration and functional configuration of the watching support system of the first embodiment.
FIG. 2 is a view showing an installation example of the imaging device.
FIG. 3A and FIG. 3B are examples of the monitoring area set for an image.
FIG. 4 shows types of human states and example images.
FIG. 5 is a diagram schematically showing the machine learning of the regressor.
FIG. 6 is a diagram schematically showing the capability of the regressor.
FIG. 7 is a flowchart of the state monitoring process.
FIG. 8 is an example of the state display.
FIG. 9 is a block diagram schematically showing the hardware configuration and functional configuration of the watching support system of the second embodiment.
FIGS. 10A to 10C are examples of judgment areas set for an image.
FIG. 11A is an example of the data structure of the head-orientation judgment criteria for each judgment area, and FIG. 11B is a diagram explaining the codes representing eight directions.
FIG. 12 is a flowchart of the state monitoring process of Example 1 of the second embodiment.
FIG. 13 is an example of danger-degree determination in Example 1 of the second embodiment.
FIG. 14 is a state display example of the second embodiment.
FIG. 15 is a flowchart of the state monitoring process of Example 2 of the second embodiment.
FIG. 16 is an example of danger-degree determination in Example 2 of the second embodiment.
FIG. 17 is a flowchart of the state monitoring process of Example 3 of the second embodiment.
FIG. 18 is a flowchart of the state monitoring process of Example 4 of the second embodiment.
 The present invention relates to a technique for supporting the watching of a subject on a bed. This technique can be applied to a system that automatically detects the bed-leaving and getting-up behavior of patients, care receivers, and the like in hospitals, nursing facilities, and so on, and issues the necessary notification when a dangerous state occurs. The system can be preferably used, for example, for watching over elderly people, dementia patients, children, and the like.
 Hereinafter, an example of a preferred embodiment for carrying out the present invention will be described with reference to the drawings. However, the configurations and operations of the devices described in the following embodiments are merely examples, and are not intended to limit the scope of the present invention to them.
 <First Embodiment>
 (System configuration)
 The configuration of a watching support system according to an embodiment of the present invention will be described with reference to FIGS. 1 and 2. FIG. 1 is a block diagram schematically showing the hardware configuration and functional configuration of the watching support system 1, and FIG. 2 is a diagram showing an installation example of the imaging device.
 The watching support system 1 includes, as its main hardware configuration, an imaging device 10, an information processing device 11, and a display device 13. The imaging device 10 and the information processing device 11 are connected by wire or wirelessly, and the information processing device 11 and the display device 13 are likewise connected by wire or wirelessly. For example, a configuration can be adopted in which a slave unit including the imaging device 10 and the information processing device 11 is installed in the room of the watching target person 21, a master unit including the display device 13 is installed in a nurse center or a central monitoring room, and the slave unit and the master unit communicate via a wired or wireless LAN. Alternatively, all or part of the information processing device 11 may be provided on the master unit side, and a plurality of slave units may be connected to one master unit.
 The imaging device 10 is a device for photographing the subject on the bed and capturing image data. As the imaging device 10, a monochrome or color visible-light camera, an infrared camera, a three-dimensional camera, or the like can be used. In the present embodiment, an imaging device 10 composed of an infrared LED illumination 100 and a near-infrared camera 101 is adopted so that the subject can be watched even at night (even when the room is dark). As shown in FIG. 2, the imaging device 10 is installed so as to overlook the entire bed 20 from above the head side of the bed 20 toward the foot side. The imaging device 10 captures images at a predetermined time interval (for example, 30 fps), and the image data are sequentially taken into the information processing device 11.
 The information processing device 11 is a device having a function of analyzing the image data captured from the imaging device 10 in real time, detecting the state and behavior of the subject 21 on the bed 20, and outputting the detection result to the display device 13. The information processing device 11 has, as specific functional modules, an image acquisition unit 110, an area setting unit 111, a preprocessing unit 112, a regressor 113, a score stabilization unit 114, a determination unit 115, a state display unit 116, and a storage unit 117. In the present embodiment, the preprocessing unit 112, the regressor 113, the score stabilization unit 114, and the determination unit 115 constitute a state quantification unit that outputs a score quantifying the state of the subject 21.
 The information processing device 11 of the present embodiment is constituted by a general-purpose computer including a CPU (processor), memory, storage (HDD, SSD, etc.), input devices (keyboard, mouse, touch panel, etc.), a communication interface, and the like, and each module of the information processing device 11 described above is realized by the CPU executing a program stored in the storage or the memory. However, the configuration of the information processing device 11 is not limited to this example. For example, distributed computing may be performed by a plurality of computers, a part of the modules may be executed on a cloud server, or a part of the modules may be constituted by a circuit such as an ASIC or an FPGA.
 The image acquisition unit 110 is a module for acquiring images captured by the imaging device 10. The image data input from the image acquisition unit 110 are temporarily stored in the memory or storage and supplied to the area setting process and the state monitoring process described later.
 The area setting unit 111 is a module for setting a monitoring area for the images captured by the imaging device 10. The monitoring area is the range within the field of view of the imaging device 10 that is subjected to the state monitoring process (in other words, the image range used as the input of the regressor 113). Details of the area setting process will be described later.
 The preprocessing unit 112 is a module for performing, in the state monitoring process, necessary preprocessing on the image input from the image acquisition unit 110 (hereinafter referred to as the "original image"). For example, the preprocessing unit 112 clips the image within the monitoring area from the original image (the clipped image is hereinafter referred to as the "monitoring area image"). The preprocessing unit 112 may also apply processing such as resizing (reduction), affine transformation, and brightness correction to the monitoring area image. Resizing (reduction) has the effect of shortening the computation time of the regressor 113; any existing method may be used, but a bilinear method, which offers a good balance between computational cost and quality, is preferable. The affine transformation can be expected to improve the estimation accuracy by normalizing the input image to the regressor 113 through the necessary distortion correction, for example by deforming a bed that appears trapezoidal in the image into a rectangle. Brightness correction can be expected to improve the estimation accuracy by, for example, reducing the influence of the lighting environment. When the original image is input to the regressor 113 as it is, the preprocessing unit 112 may be omitted.
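 As an illustration only, the following is a minimal preprocessing sketch assuming OpenCV and NumPy are available; the 128x96 output size, the rectangle-style monitoring region, and the use of a perspective warp to rectify the trapezoidal bed are assumptions made for the sketch, not specifics of this disclosure.

```python
import cv2
import numpy as np

def preprocess(original, region, bed_corners=None, out_size=(128, 96)):
    """Produce the monitoring-area image fed to the regressor (illustrative)."""
    if bed_corners is not None:
        # Rectify the trapezoidal bed: map its four corners (original-image
        # coordinates, ordered TL, TR, BR, BL) onto the output rectangle.
        dst = np.float32([[0, 0], [out_size[0] - 1, 0],
                          [out_size[0] - 1, out_size[1] - 1], [0, out_size[1] - 1]])
        M = cv2.getPerspectiveTransform(np.float32(bed_corners), dst)
        return cv2.warpPerspective(original, M, out_size)

    # Otherwise simply clip the monitoring region and shrink it (bilinear).
    x, y, w, h = region
    clipped = original[y:y + h, x:x + w]
    resized = cv2.resize(clipped, out_size, interpolation=cv2.INTER_LINEAR)

    # Histogram equalization as a simple brightness normalization (8-bit grayscale only).
    return cv2.equalizeHist(resized) if resized.ndim == 2 else resized
```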
 The regressor 113 is a module that, when given a monitoring area image, outputs a score indicating the state of the subject 21 shown in that image (for example, lying in bed, sitting up, or out of bed). The regressor 113 is obtained by constructing, through machine learning, a relationship model between the features of an input image and the state of the person so that it receives an image showing a bed and a person as input and outputs a score quantitatively indicating the state of the person with respect to the bed. The training of the regressor 113 is assumed to be performed in advance (before shipment or operation of the system) by the learning device 12 using a large number of training images. Any model may be used as the learning model of the regressor 113, such as a neural network, a random forest, or a support vector machine; in the present embodiment, a convolutional neural network (CNN), which is suitable for image recognition, is used. The score of the present embodiment is also called the "state score".
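 As an illustration of such a CNN regressor, the following is a minimal sketch in PyTorch; the layer sizes, single-channel input, and class name are assumptions for the sketch and are not the network actually used as the regressor 113.

```python
import torch
import torch.nn as nn

class StateRegressor(nn.Module):
    """Tiny CNN that maps a grayscale monitoring-area image to one state score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)   # single continuous state score

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(x).squeeze(1)  # roughly in the 0-2 range after training
```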
 The score stabilization unit 114 is a module for suppressing abrupt changes and fluttering of the score output from the regressor 113. The score stabilization unit 114, for example, calculates the average of the current score obtained from the image of the current frame and the past scores obtained from the images of the two immediately preceding frames, and outputs it as a stabilized score. This processing is equivalent to applying a temporal low-pass filter to the time-series data of the score. If score stabilization is unnecessary, the score stabilization unit 114 may be omitted.
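 A minimal sketch of this stabilization, assuming the three-frame moving average described above; the window length is simply a parameter of the sketch.

```python
from collections import deque

class ScoreStabilizer:
    """Moving average over the current score and the previous scores."""
    def __init__(self, window=3):
        self.history = deque(maxlen=window)

    def update(self, score):
        self.history.append(score)
        # Equivalent to a temporal low-pass filter on the score time series.
        return sum(self.history) / len(self.history)
```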
 The determination unit 115 is a module for determining the behavior of the subject on the basis of the score obtained by the regressor 113. Specifically, the determination unit 115 estimates what kind of behavior the subject has taken (for example, getting-up behavior or bed-leaving behavior) on the basis of the temporal change of the score (that is, the transition of the "state of the subject" indicated by the score). Details of the processing of the determination unit 115 will be described later.
 The state display unit 116 is a module that displays, in real time on the display device 13, a graph showing the temporal change of the score output from the score stabilization unit 114 (hereinafter referred to as the "state change graph").
 The storage unit 117 is a module that stores various data used by the watching support system 1 for processing. The storage unit 117 stores, for example, the setting information of the monitoring area, parameters used in preprocessing, parameters used in the score stabilization process, time-series data of the score, and parameters used in the determination process.
 (Setting of monitoring area)
 Various objects other than the bed 20 and the subject 21 appear within the angle of view of the imaging device 10. When detecting the state or behavior of the subject 21, objects other than the bed 20 and the subject 21 may act as noise, so it is preferable to exclude them as much as possible. In addition, for the image input to the regressor 113, the estimation accuracy is easier to improve if the image size (width, height) and the position, range, and size of the bed within the image are standardized. Therefore, in the present embodiment, a predetermined range based on the bed 20 is set as the monitoring area, and in the state monitoring process described later the image within the monitoring area is clipped and used as the input image of the regressor 113.
 The monitoring area may be set manually or automatically. In the case of manual setting, the area setting unit 111 may provide a user interface that allows the user to input the area of the bed 20 in the image or the monitoring area itself. In the case of automatic setting, the area setting unit 111 may detect the bed 20 from the image by object recognition processing and set the monitoring area so as to include the detected area of the bed 20. The area setting process is executed when the monitoring area has not yet been set (for example, when the system is installed) or when the monitoring area needs to be updated due to movement of the bed 20 or the imaging device 10.
 FIG. 3A is an example of the monitoring area set for the original image. In the present embodiment, a range obtained by adding margins of predetermined widths to the left side, the right side, and the upper side (foot side) of the area of the bed 20 is set as the monitoring area 30. The widths of the margins are set so that the whole body of a person sitting up on the bed 20 (see FIG. 3B) falls within the monitoring area 30.
 (Types of states and machine learning)
 In this system, in order to handle the state of a person by regression, the "state of the person with respect to the bed" is classified in advance into three types, the 0th to 2nd types. The "0th type" is a state in which the person is lying in the bed (referred to as the "in-bed state" or "state 0"), the "1st type" is a state in which the person is sitting up on the bed (referred to as the "sitting-up state" or "state 1"), and the "2nd type" is a state in which the person is away from (has gotten out of) the bed (referred to as the "out-of-bed state" or "state 2"). FIG. 4 is an example showing the correspondence between the three types and a time series of images representing a sequence of actions in which a sleeping person sits up and then leaves the bed.
 FIG. 5 schematically shows the machine learning of the regressor 113. First, images taken of actual hospital rooms and the like are collected, and each image is classified into the 0th to 2nd types. Then, the portion corresponding to the monitoring area of each image is clipped, the type number (0, 1, 2) is assigned as a label, and a set of training images is generated. To improve the accuracy of the regression, it is preferable to prepare a sufficient number of images and to prepare images with various variations for each type. However, images for which it is ambiguous to which type the person's state belongs are not suitable as training images and are preferably excluded.
 The learning device 12 uses the set of training images and trains the convolutional neural network so that, for each input image, it outputs the same score as the label of that image (that is, score = 1 for an input image belonging to the 1st type). The learning device 12 then incorporates the resulting parameter group of the neural network into the regressor 113 of this system. The specific layer structure, filters, activation functions, input image specification, and so on of the neural network may be designed as appropriate according to the implementation and the required accuracy.
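 A minimal training sketch under the assumptions that the StateRegressor sketch above is used, that PyTorch is available, and that a DataLoader yields (image, label) pairs with labels 0, 1, or 2; the mean-squared-error loss and the hyperparameters are illustrative choices, not those of the learning device 12.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()          # regress toward the type label (0, 1, or 2)
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            scores = model(images)                  # continuous predictions
            loss = loss_fn(scores, labels.float())  # e.g. target 1.0 for type 1
            loss.backward()
            optimizer.step()
    return model
```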
 FIG. 6 schematically shows the capability of the regressor 113. The regressor 113 models the correspondence between the "feature amount" of an image and the "score" indicating the state of the person. Following that relationship model, the regressor 113 extracts the feature amount from the input image and calculates and outputs the score corresponding to the feature amount. In FIG. 6, the relationship model is shown as a two-dimensional linear model for convenience of explanation, but the actual feature space is multidimensional and the relationship model is nonlinear.
 The score output from the regressor 113 is a real value (continuous value) in the range of 0 to 2. For example, when an input image of the 1st type (sitting-up state) is given, the output score is 1 or a value extremely close to 1; the same applies to the other types. On the other hand, some input images are ambiguous as to which type they belong to, such as a state in which the person is trying to raise the upper body from a lying position or a state in which the person is about to stand up from the bed. For an image of such an intermediate state, the extracted feature amount lies between the feature amounts of two types, so a score intermediate between the two types is output. For example, for an image of a state in which the person is trying to raise the upper body from a lying position, which is an intermediate state between the 0th type and the 1st type, a score larger than 0 and smaller than 1 is obtained.
 In this way, the present system uses the regressor 113 to estimate the state of the person in the input image. Therefore, highly accurate state estimation can be performed on unknown input images, and a reasonable estimation result can be obtained even when an image of an intermediate state is input. Furthermore, a reasonable estimation result can be obtained even for images in which head detection is difficult, for example when the subject is covered with a futon, when confusing people or objects are present around the subject, or when the lighting environment differs from usual.
 (State monitoring process)
 An example of the state monitoring process of the present system will be described with reference to FIG. 7. The processing flow of FIG. 7 is executed every time an image of one frame is captured from the imaging device 10.
 In step S70, the image acquisition unit 110 captures an image of one frame from the imaging device 10. The acquired original image is temporarily stored in the storage unit 117. Next, the preprocessing unit 112 clips the monitoring area image from the original image and, as necessary, performs resizing, affine transformation, brightness correction, and so on (step S71). Next, the regressor 113 receives the monitoring area image as input and outputs the corresponding score (step S72). Next, the score stabilization unit 114 performs the stabilization process on the score obtained in step S72 (step S73) and passes the resulting score to the determination unit 115.
 The determination unit 115 classifies the current state of the subject 21 into one of the in-bed state, the sitting-up state, and the out-of-bed state on the basis of the score (continuous value). Any classification method may be used; in the present embodiment, the state is classified as the in-bed state when score ≤ threshold th1 (steps S74, S75), as the sitting-up state when threshold th1 < score ≤ threshold th2 (steps S76, S77), and as the out-of-bed state when threshold th2 < score (step S78). The thresholds th1 and th2 are set, for example, as th1 = 0.5 and th2 = 1.5. The detection sensitivity can be adjusted by changing the thresholds th1 and th2.
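 A minimal sketch of this threshold classification; the state names are hypothetical labels, and th1 = 0.5 and th2 = 1.5 follow the example values above.

```python
def classify_state(score, th1=0.5, th2=1.5):
    """Map a continuous state score to one of the three state types."""
    if score <= th1:
        return "in_bed"        # state 0: lying in the bed
    elif score <= th2:
        return "sitting_up"    # state 1: sitting up on the bed
    else:
        return "out_of_bed"    # state 2: away from the bed
```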
 Subsequently, the state display unit 116 outputs the score, which is the detection result, and the state of the subject 21 to the display device 13 (step S79). The above steps S70 to S79 are executed for every frame until the system is terminated (step S80).
 (Display example)
 FIG. 8 shows an example of the state display screen output to the display device 13. On the left side of the screen, state images 80 graphically showing the current states of the watching targets A to D are displayed. In the example of FIG. 8, subjects A and C are in the in-bed state, subject B is in the sitting-up state, and subject D is in the out-of-bed state. On the right side of the screen, state change graphs 81 of the respective subjects A to D are displayed. The horizontal axis of each state change graph 81 is time and the vertical axis is the state score. A marker 82 indicating the current time is also displayed so that the state score of each of the subjects A to D at the current time can be seen.
 According to the present embodiment described above, since the state of the subject 21 is estimated by the regressor 113, the state or behavior of the subject 21 can be determined with high accuracy. Furthermore, since the state change graph 81 showing the temporal change of the state score quantifying the state of the subject 21 is output, the watching side (nurses, caregivers, and the like) can easily check changes and trends in the state of the subject 21. Once such changes and trends are known, it becomes possible, for example, to predict the behavior of the subject 21 and prevent danger before it occurs, or to grasp the daily behavior pattern of the subject 21 and make use of it for watching.
 <Second Embodiment>
 Next, a second embodiment of the present invention will be described. Whereas the first embodiment estimates the state of the subject 21 with a regressor, the present embodiment detects the head of the subject 21 and estimates the state (degree of danger) of the subject 21 from the position and state of the head.
 FIG. 9 is a block diagram schematically showing the hardware configuration and functional configuration of the watching support system 1 of the second embodiment. The differences in configuration from the first embodiment (FIG. 1) are that the state quantification unit is composed of a detection unit 90, a state recognition unit 91, and a determination unit 92, and that the functions of the area setting unit 93 and the state display unit 94 differ. In the following, the configuration specific to the present embodiment is mainly described, and descriptions overlapping with the first embodiment are omitted.
 The detection unit 90 is a module that analyzes the image acquired by the image acquisition unit 110 and detects, from that image, the human body of the watching target person 21 or a part thereof (head, face, upper body, etc.). Any method may be used for detecting a human body or a part thereof from an image; for example, an object detection algorithm using a classifier based on classical Haar-like features or HoG features, or a recent technique such as Faster R-CNN, can be preferably used. The detection unit 90 of the present embodiment detects the head (the part above the neck) 22 of the subject 21 with a classifier using Haar-like features, and outputs, as the detection result, the position (x, y) and size (number of vertical and horizontal pixels) of the head 22. The position (x, y) of the head 22 is represented, for example, by the image coordinates of the center point of a rectangular frame surrounding the head 22. Although the detection unit 90 of the present embodiment outputs the detection result as a position and size in the image coordinate system, the detection unit 90 may convert the image coordinate system into a spatial coordinate system and output the three-dimensional position or three-dimensional size of the subject 21 in the spatial coordinate system.
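 A minimal detection sketch using OpenCV's cascade classifier API; the cascade file name "head_cascade.xml" is a placeholder, since a head detector trained on Haar-like features is assumed here and is not one of the cascades bundled with OpenCV.

```python
import cv2

def detect_head(gray_image, cascade_path="head_cascade.xml"):
    """Return the head center (x, y) and size (w, h), or None if nothing is found."""
    detector = cv2.CascadeClassifier(cascade_path)
    boxes = detector.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=3)
    if len(boxes) == 0:
        return None
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])  # keep the largest detection
    return (x + w // 2, y + h // 2), (w, h)
```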
 The state recognition unit 91 is a module that recognizes the state of the subject 21 detected by the detection unit 90. In the present embodiment, it calculates the state of the head 22 of the subject 21, specifically at least one of (1) the orientation of the head 22, (2) the moving speed of the head 22, and (3) the moving direction of the head 22.
 The orientation of the head 22 may be recognized, for example, on the basis of the positional relationship of the facial organs (eyes, nose, mouth, etc.) in the image of the head 22, may be recognized by using a plurality of classifiers each trained for a different head orientation, or may be obtained by other algorithms. As for the orientation of the head 22, a continuous value (angle) may be calculated, or it may be determined which of N predetermined orientations (directions), such as rightward, frontal, and leftward, the head corresponds to. The orientations about the three axes of yaw, pitch, and roll may be calculated, or the orientation in the image coordinate system (within the xy plane) may simply be calculated.
 The moving speed of the head 22 is the amount of movement of the head 22 per predetermined time. For example, the moving speed can be obtained by calculating the distance between the position of the head 22 in an image several frames earlier and the position of the head 22 in the latest image. The moving direction of the head 22 can be calculated, for example, from the orientation (angle) of a line segment connecting the position of the head 22 in an image several frames earlier and the position of the head 22 in the latest image. The moving speed and the moving direction may be combined to obtain the movement vector of the head 22. In this case as well, the moving speed, moving direction, and movement vector in real space (three-dimensional space) may be calculated by converting the image coordinate system into a spatial coordinate system.
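 A minimal sketch of these motion quantities, assuming head positions given in image coordinates and an elapsed time dt between the two frames; conversion to real-space units is omitted here.

```python
import math

def head_motion(prev_pos, cur_pos, dt):
    """Return moving speed, moving direction, and the displacement of the head center."""
    dx = cur_pos[0] - prev_pos[0]
    dy = cur_pos[1] - prev_pos[1]
    distance = math.hypot(dx, dy)                  # movement amount in pixels
    speed = distance / dt                          # moving speed (pixels per second)
    direction = math.degrees(math.atan2(dy, dx))   # moving direction (angle in the image plane)
    return speed, direction, (dx, dy)              # speed and direction together form the movement vector
```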
 The determination unit 92 is a module that determines, on the basis of the results of the detection unit 90 and the state recognition unit 91, whether the state of the subject 21 is a safe state or a dangerous state. Specifically, the determination unit 92 uses "judgment criteria" for evaluating and determining dangerous states to determine the "degree of danger" of the state of the head 22 recognized by the state recognition unit 91 (at least one of its orientation, moving speed, moving direction, and movement vector). The determination of the degree of danger may be a two-level determination of safe/dangerous, or a multi-level determination such as degree of danger = 0, 1, 2, and so on. This degree of danger is also called the danger score. In the present embodiment, a plurality of judgment criteria are set in advance in the judgment criteria storage unit within the storage unit 117, and the determination unit 92 switches the judgment criterion to use according to the position at which (the head 22 of) the subject 21 is detected. This feature will be described in detail later.
 The area setting unit 93 is a module for setting the monitoring area and the judgment areas for the image. The state display unit 94 is a module that displays, in real time on the display device 13, a graph showing the temporal change of the danger score.
 (Setting of judgment areas)
 An example of the setting process for the monitoring area and the judgment areas will be described with reference to FIGS. 10A to 10C, which show examples of the monitoring area and the judgment areas set for an image.
 First, the image acquisition unit 110 acquires an image from the imaging device 10 (FIG. 10A). Next, the area setting unit 93 has the user input the monitoring area 30 and the points 40 to 43 at the four corners of the bed, and sets the quadrangle surrounded by the four points 40 to 43 as the bed area 44 (FIG. 10B). Next, the area setting unit 93 calculates the ranges of the four judgment areas A1 to A4 on the basis of the bed area 44 (FIG. 10C). The area setting unit 93 then stores the information on the monitoring area 30 and the bed area 44 (the coordinates of the four corners of the bed area 44) and the information on the four judgment areas A1 to A4 (the coordinates of the four corners of each judgment area) in the storage unit 117, and the setting process ends.
 The judgment area A1 is an area set on the head side of the bed 20 and corresponds to the range in which the head 22 can exist when the subject 21 is lying in the bed 20. The judgment area A2 is an area set at the center of the foot side of the bed 20 and corresponds to the range in which the head 22 can exist when the subject 21 raises the upper body from a lying state or when the subject 21 gets off or falls from the foot side of the bed 20. The judgment area A3 is an area set on the left of the foot side of the bed 20 and corresponds to the range in which the head 22 can exist when the subject 21 is sitting on the left edge of the bed 20 or when the subject 21 gets off or falls from the left side of the bed 20. The judgment area A4 is an area set on the right of the foot side of the bed 20 and corresponds to the range in which the head 22 can exist when the subject 21 is sitting on the right edge of the bed 20 or when the subject 21 gets off or falls from the right side of the bed 20. As shown in FIG. 10C, the judgment areas A2 to A4 extend beyond the bed area 44.
 The reason for setting a plurality of judgment areas in this way is that the evaluation of whether a state is safe or dangerous can change depending on where on the bed 20 the subject 21 is. For example, when the head 22 of the subject 21 is in the judgment area A1, the subject 21 is considered to be lying in the bed 20 in a normal posture, and even if the head 22 moves greatly or changes orientation, the risk of the subject 21 falling from the bed 20 can be regarded as low. When the head 22 of the subject 21 is in the judgment area A3 and the head 22 faces left, the subject 21 is considered to be getting off the bed 20 of their own volition, and the degree of danger can be evaluated as low; but if the head 22 faces up, down, or to the right, it should be judged that some abnormality has occurred or that there is a risk of falling. For the judgment area A4, the judgment is the left-right mirror of that for the judgment area A3.
 FIG. 11A shows an example of the data structure of the judgment criteria set in the storage unit 117, here the judgment criteria for head orientation. Codes such as "-1,-1" and "-1,0" represent head orientations (eight directions) as shown in FIG. 11B, and the value of the judgment criterion represents the degree of danger. A larger value indicates a higher degree of danger: 1 is the head orientation with the lowest degree of danger and 5 is the head orientation with the highest degree of danger. As described above, since the evaluation of whether a state is safe or dangerous differs for each of the judgment areas A1 to A4, a different judgment criterion is associated with each judgment area in the judgment criteria storage unit within the storage unit 117. Although the example of FIG. 11A is a judgment criterion for head orientation, judgment criteria corresponding to the plurality of items used for evaluation by the determination unit 92, such as the moving speed and moving direction of the head, may also be set.
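 A minimal sketch of such a per-area criterion table; the direction codes follow the eight-direction scheme of FIG. 11B, but the concrete danger values and the areas shown here are illustrative assumptions, not the values of the figure.

```python
# area -> {(dx, dy) direction code: danger level 1 (safest) .. 5 (most dangerous)}
HEAD_DIRECTION_CRITERIA = {
    "A1": {(-1, -1): 1, (0, -1): 1, (1, -1): 1, (-1, 0): 1,
           (1, 0): 1, (-1, 1): 2, (0, 1): 2, (1, 1): 2},
    "A3": {(-1, -1): 2, (0, -1): 4, (1, -1): 5, (-1, 0): 1,
           (1, 0): 5, (-1, 1): 2, (0, 1): 4, (1, 1): 5},
    # A2 and A4 omitted for brevity; A4 would mirror A3 left to right.
}

def head_direction_danger(area, direction_code):
    """Look up the danger level for a head orientation in a given judgment area."""
    return HEAD_DIRECTION_CRITERIA[area][direction_code]
```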
 Next, specific examples of the state monitoring process performed by the watching support system 1 of the second embodiment will be described.
 (Example 1)
 An example of the state monitoring process of Example 1 will be described with reference to FIGS. 12, 13, and 14. FIG. 12 is a flowchart of the state monitoring process of Example 1 executed by the information processing device 11, FIG. 13 is an example of danger-degree determination, and FIG. 14 is an example of the state display screen displayed on the display device 13.
 In step S60, the image acquisition unit 110 acquires an image from the imaging device 10; the acquired image is temporarily stored in the storage unit 117. In step S61, the detection unit 90 detects the head 22 of the subject 21 from the monitoring area in the image acquired in step S60. In step S62, the state recognition unit 91 estimates the orientation of the head detected in step S61. FIG. 13 shows an example in which the head 22 is detected in the judgment area A3 and the orientation of the head 22 is estimated to be the direction of the arrow 70.
 In step S63, the determination unit 92 reads the judgment criterion corresponding to the judgment area A3 from the storage unit 117. Reference numeral 71 in FIG. 13 schematically illustrates the judgment criterion corresponding to the judgment area A3, in which a degree of danger (1 to 5) is set for each of the eight directions (arrows). In step S64, the determination unit 92 uses the judgment criterion corresponding to the judgment area A3 to determine whether the orientation of the head 22 (arrow 70) is a safe orientation or a dangerous orientation. In the example of FIG. 13, a determination result of a safe orientation (degree of danger = 1) is obtained: since the head 22 faces the outside of the bed, the subject is regarded as getting off the bed of their own volition, and the degree of danger is determined to be low. The determination result (danger score) of the determination unit 92 is stored in the storage unit 117.
 Subsequently, the state display unit 94 outputs the danger score and the state of the subject 21 to the display device 13 (step S65). The above steps S60 to S65 are executed for every frame until the system is terminated (step S66).
 FIG. 14 shows an example of the state display screen output to the display device 13. On the left side of the screen, state images 83 graphically showing the current states of the watching targets A to D are displayed. In the example of FIG. 14, the danger score of subject A is 1, the danger scores of subjects B and D are 2, and the danger score of subject C is 5. An alert may also be issued, for example by blinking the display when the danger score is 3 or more. On the right side of the screen, state change graphs 84 of the respective subjects A to D are displayed. The horizontal axis of each state change graph 84 is time and the vertical axis is the danger score. A marker 85 indicating the current time is also displayed so that the score of each of the subjects A to D at the current time can be seen.
 According to the present embodiment described above, since the state change graph 84 showing the temporal change of the danger score quantifying the state (degree of danger) of the subject 21 is output, the watching side (nurses, caregivers, and the like) can easily check changes and trends in the degree of danger of the subject 21. Therefore, as in the first embodiment, danger to the subject 21 can easily be predicted and the occurrence of accidents and the like can be prevented.
 (Example 2)
 An example of the state monitoring process of Example 2 of the second embodiment will be described with reference to FIGS. 15 and 16. FIG. 15 is a flowchart of the state monitoring process of Example 2 executed by the information processing device 11, and FIG. 16 is an example of danger-degree determination.
 In step S150, the image acquisition unit 110 acquires an image from the imaging device 10; the acquired image is temporarily stored in the storage unit 117. In step S151, the detection unit 90 detects the head 22 of the subject 21 from the image acquired in step S150. Information on the position of the detected head 22 is stored in the storage unit 117 in association with the time information or frame number of that image. Next, the state recognition unit 91 reads, from the storage unit 117, the information on the position of the head 22 in an image from a predetermined time earlier (for example, one second earlier) (step S152), and, on the basis of the position of the head 22 at that earlier time and the position of the head 22 detected in step S151, calculates the moving speed (amount of movement per predetermined time) and the moving direction of the head 22 and obtains the movement vector of the head 22 (step S153). FIG. 16 shows an example in which the head 22 is detected in the judgment area A3 and the movement vector of the head 22 is calculated as indicated by the arrow 150.
 In step S154, the determination unit 92 reads the judgment criteria corresponding to the judgment area A3 from the storage unit 117. Reference numerals 151 and 152 in FIG. 16 schematically illustrate the judgment criteria corresponding to the judgment area A3: 151 is an example of a judgment criterion regarding the moving direction, and 152 is an example of a judgment criterion regarding the moving speed (amount of movement), where a longer arrow indicates a higher degree of danger. In this example, four levels of danger, 1 to 4, are assigned according to the moving speed.
 In step S155, the determination unit 92 determines the degree of danger of the movement vector 150 of the head 22 using the judgment criteria 151 and 152. For example, the product (multiplied value) or the sum (added value) of the degree of danger for the moving direction and the degree of danger for the moving speed can be used as the degree of danger of the movement vector 150. In the example of FIG. 16, the moving direction is safe (degree of danger = 1) but the moving speed is high (degree of danger = 4), so a determination result of 4 (in the case of the product) is obtained for the degree of danger of the movement vector 150. That is, although the moving direction alone indicates a motion of getting off the bed, the moving speed (amount of movement) is so large that the motion is regarded not as a normal getting-off motion but as falling or tumbling from the bed, and the degree of danger is therefore determined to be high. The determination result (danger score) of the determination unit 92 is stored in the storage unit 117.
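 A minimal sketch of this combination step; whether the product or the sum is used is a design choice left open by the text.

```python
def movement_vector_danger(direction_danger, speed_danger, combine="product"):
    """Combine the direction danger and the speed danger into one vector danger."""
    if combine == "product":
        return direction_danger * speed_danger   # e.g. 1 x 4 = 4 in the FIG. 16 example
    return direction_danger + speed_danger       # alternative: additive combination
```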
 Subsequently, the state display unit 94 outputs the danger score and the state of the subject 21 to the display device 13 (step S156). The above steps S150 to S156 are executed for every frame until the system is terminated (step S157). The state display screen is the same as in Example 1 (FIG. 14). The same effect as in Example 1 can also be obtained by the method of this example.
 (Example 3)
 An example of the state monitoring process of Example 3 of the second embodiment will be described with reference to FIG. 17, which is a flowchart of the state monitoring process of Example 3 executed by the information processing device 11.
 In step S100, the image acquisition unit 110 acquires an image from the imaging device 10; the acquired image is temporarily stored in the storage unit 117. In step S101, the detection unit 90 detects the head 22 of the subject 21 from the image acquired in step S100. Information on the position of the detected head 22 is stored in the storage unit 117 in association with the time information or frame number of that image. Next, the state recognition unit 91 reads, from the storage unit 117, the information on the position of the head 22 in an image from a predetermined time earlier (for example, one second earlier) (step S102), and calculates the moving speed of the head 22 (amount of movement per predetermined time) on the basis of the position of the head 22 at that earlier time and the position of the head 22 detected in step S101 (step S103).
 In step S104, the determination unit 92 reads, from the storage unit 117, the judgment criterion corresponding to the judgment area in which the head 22 was detected. In this example, a judgment criterion associating moving speed with degree of danger is set for each judgment area. For example, when the head 22 is detected in the judgment area A1, the subject 21 should be in a lying state; therefore, the judgment criterion for the judgment area A1 may be set on the basis of the typical speed of the head 22 in a sitting-up motion (raising the upper body), for example 20 cm/second (for example, degree of danger = 1 when the moving speed is 20 cm/second or less, degree of danger = 2 for 20 to 40 cm/second, and degree of danger = 3 when it exceeds 40 cm/second). When the head 22 is detected in the judgment area A3 or A4, a standing-up motion is assumed as the action the subject 21 may take next; therefore, the judgment criterion for the judgment areas A3 and A4 may be set on the basis of the typical speed of the head 22 in a standing-up motion, for example 50 cm/second (for example, degree of danger = 1 when the moving speed is 50 cm/second or less, degree of danger = 2 for 50 to 80 cm/second, and degree of danger = 3 when it exceeds 80 cm/second).
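 A minimal sketch of such per-area speed criteria, using the example thresholds above; in practice these values would be tuned per judgment area, and the speed is assumed to be given in real-space cm per second.

```python
# area -> list of (speed threshold in cm/s, danger level); anything above the
# last threshold falls through to the default level.
SPEED_CRITERIA = {
    "A1": [(20, 1), (40, 2)],   # sitting-up motion: <=20 -> 1, <=40 -> 2, else 3
    "A3": [(50, 1), (80, 2)],   # standing-up motion: <=50 -> 1, <=80 -> 2, else 3
    "A4": [(50, 1), (80, 2)],
}

def speed_danger(area, speed_cm_per_s, default=3):
    for threshold, danger in SPEED_CRITERIA[area]:
        if speed_cm_per_s <= threshold:
            return danger
    return default
```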
 In step S105, the determination unit 92 determines the degree of danger of the moving speed of the head 22 using the above-described judgment criterion. Subsequently, the state display unit 94 outputs the danger score and the state of the subject 21 to the display device 13 (step S106). The above steps S100 to S106 are executed for every frame until the system is terminated (step S107). The state display screen is the same as in Example 1 (FIG. 14). The same effect as in Example 1 can also be obtained by the method of this example.
 (Example 4)
 An example of the state monitoring process of Example 4 of the second embodiment will be described with reference to FIG. 18, which is a flowchart of the state monitoring process of Example 4 executed by the information processing device 11.
In step S110, the image acquisition unit 110 acquires an image from the imaging device 10. The acquired image is temporarily stored in the storage unit 117. In step S111, the detection unit 90 detects the head 22 of the subject 21 from the image acquired in step S110. Information on the position of the detected head 22 is stored in the storage unit 117 in association with the time information or frame number of that image. In step S112, the state recognition unit 91 calculates the orientation, moving speed, and moving direction of the head 22. The specific calculation methods may be the same as those described in Examples 1 to 3.
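As an illustrative aid only, the movement-related quantities of step S112 can be derived from two head positions as in the following sketch; the function head_motion is a hypothetical helper, and the orientation of the head (which way it faces) is assumed to be supplied separately by the detector.

```python
import math

def head_motion(prev_xy, curr_xy, dt):
    """Movement vector, moving direction (degrees), and speed between two head positions."""
    dx, dy = curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1]
    vector = (dx, dy)                                 # movement vector (cf. reference sign 150)
    direction = math.degrees(math.atan2(dy, dx))      # moving direction in image coordinates
    speed = math.hypot(dx, dy) / dt if dt > 0 else 0.0
    return vector, direction, speed
```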
In step S113, the determination unit 92 reads from the storage unit 117 the determination criteria corresponding to the determination area in which the head 22 was detected. The determination unit 92 then calculates the degree of danger for the orientation of the head 22 (step S114), the degree of danger for the movement vector of the head 22 (step S115), and the degree of danger for the moving speed of the head 22 (step S116). The specific calculation methods may be the same as those described in Examples 1 to 3. Next, in step S117, the determination unit 92 integrates the three degree-of-danger values obtained in steps S114 to S116 to calculate an integrated degree-of-danger score. For example, the maximum of the three values may be selected as the integrated score, or the average, product, or sum of two or three of the values may be used as the integrated score.
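The integration of step S117 can be summarized by a small helper such as the following; the rule parameter simply makes the alternatives mentioned in the text (maximum, mean, product, sum) explicit and is an assumption of this sketch, not a requirement of the disclosure.

```python
def integrated_score(orientation_danger, vector_danger, speed_danger, rule="max"):
    """Combine the three degree-of-danger values of steps S114-S116 (step S117)."""
    values = [orientation_danger, vector_danger, speed_danger]
    if rule == "max":
        return max(values)
    if rule == "mean":
        return sum(values) / len(values)
    if rule == "product":
        result = 1
        for v in values:
            result *= v
        return result
    if rule == "sum":
        return sum(values)
    raise ValueError(f"unknown rule: {rule}")

# For example, integrated_score(1, 2, 3) returns 3, and integrated_score(1, 2, 3, rule="mean") returns 2.0.
```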
Subsequently, the state display unit 94 outputs the integrated score and the state of the subject 21 to the display device 13 (step S118). The above steps S110 to S118 are executed for each frame until the system is terminated (step S119). The state display screen is the same as in Example 1 (FIG. 14). The method of this example can also provide the same effects as Example 1.
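As one possible rendering of the score's temporal change (cf. the state change graphs 81 and 84 and the markers 82 and 85), the following sketch uses matplotlib, which is merely one possible plotting backend and is not named in the disclosure.

```python
import matplotlib.pyplot as plt

def plot_score_history(timestamps, scores, marker_index=None):
    """Draw the temporal change of the score; an optional vertical marker highlights one time point."""
    fig, ax = plt.subplots(figsize=(6, 2))
    ax.plot(timestamps, scores, drawstyle="steps-post")
    if marker_index is not None:
        ax.axvline(timestamps[marker_index], linestyle="--")
    ax.set_xlabel("time")
    ax.set_ylabel("danger score")
    fig.tight_layout()
    return fig
```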
<Others>
The description of each of the above embodiments is merely illustrative of the present invention. The present invention is not limited to the specific forms described above, and various modifications are possible within the scope of its technical idea.
In the above embodiments, an example has been described in which the lying state, the rising state, and the out-of-bed state are estimated from the image, and the subject's rising action and bed-leaving action are detected. However, the states to be estimated and the actions to be detected are not limited to these. As long as different features appear in the image, various "human states" and "actions" can be handled. For example, actions such as eating and reading can also be detected.
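As a hedged illustration of how such an estimated state score could be mapped back to state labels and to detected actions, consider the following sketch; the 0.5 and 1.5 thresholds and the function names are assumptions for illustration, not values given in the disclosure.

```python
def state_label(score):
    """Map a continuous state score (0 = lying, 1 = risen, 2 = out of bed) to a label."""
    if score < 0.5:
        return "lying"
    if score < 1.5:
        return "risen"
    return "out of bed"

def detect_action(prev_score, curr_score):
    """Report a rising or bed-leaving action when the labelled state changes accordingly."""
    prev, curr = state_label(prev_score), state_label(curr_score)
    if prev == "lying" and curr == "risen":
        return "rising action"
    if prev != "out of bed" and curr == "out of bed":
        return "bed-leaving action"
    return None
```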
1: watching support system
10: imaging device, 11: information processing device, 12: learning device, 13: display device
100: illumination, 101: near-infrared camera, 110: image acquisition unit, 111: area setting unit, 112: preprocessing unit, 113: regressor, 114: score stabilization unit, 115: determination unit, 116: state display unit, 117: storage unit
20: bed, 21: subject, 22: head
30: monitoring area, 40 to 43: points at the four corners of the bed, 44: bed area, A1 to A4: determination areas
70: arrow indicating the orientation of the head, 71: determination criterion
80: state image, 81: state change graph, 82: marker, 83: state image, 84: state change graph, 85: marker
90: detection unit, 91: state recognition unit, 92: determination unit, 93: area setting unit, 94: state display unit
150: head movement vector, 151: determination criterion for the moving direction, 152: determination criterion for the moving speed

Claims (10)

1.  A watching support system for supporting watching of a subject on a bed, comprising:
    an image acquisition unit that acquires an image from an imaging device installed so as to capture a monitoring area including the bed of the subject;
    a state quantification unit that outputs a score quantifying the state of the subject based on the image of the monitoring area obtained by the image acquisition unit; and
    a state display unit that displays, on a display device, a graph showing the temporal change of the score output from the state quantification unit.
2.  The watching support system according to claim 1, wherein the state quantification unit has a regressor machine-learned so as to receive an image showing a bed and a person and to output a score indicating the state of the person relative to the bed, and acquires the score quantifying the state of the subject by inputting the image of the monitoring area into the regressor.
3.  The watching support system according to claim 2, wherein the regressor is a neural network.
4.  The watching support system according to claim 2 or 3, wherein the state of the person relative to the bed is classified in advance into a plurality of types and a different score is assigned to each of the plurality of types, and the regressor is configured to output a value between the scores of two of the types when the state of the person is a state between those two types.
5.  The watching support system according to claim 4, wherein the plurality of types include a state 0 in which the person is lying on the bed, a state 1 in which the person has risen up on the bed, and a state 2 in which the person is away from the bed.
6.  The watching support system according to claim 1, wherein the score quantifying the state of the subject is a score representing the degree of danger of the state of the subject.
7.  The watching support system according to claim 6, further comprising a determination criterion storage unit in which a determination criterion for determining a dangerous state is set in advance for each of a plurality of determination areas set based on the bed area in the image of the monitoring area, wherein the state quantification unit has a detection unit that detects the head of the subject from the image of the monitoring area, and calculates the score representing the degree of danger of the state of the subject using the determination criterion of the determination area corresponding to the position at which the head is detected.
8.  The watching support system according to claim 7, wherein the state quantification unit calculates the score representing the degree of danger of the state of the subject based on at least one of the orientation of the head, the moving speed of the head, the moving direction of the head, and the movement vector of the head.
9.  A control method of a watching support system for supporting watching of a subject on a bed, comprising:
    a step of acquiring an image from an imaging device installed so as to capture a monitoring area including the bed of the subject;
    a step of outputting a score quantifying the state of the subject based on the image of the monitoring area; and
    a step of displaying, on a display device, a graph showing the temporal change of the score.
10.  A program for causing a computer to execute each step of the control method of the watching support system according to claim 9.
PCT/JP2018/021984 2017-06-27 2018-06-08 Monitoring system, control method therefor, and program WO2019003859A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017125232A JP6822328B2 (en) 2017-06-27 2017-06-27 Watching support system and its control method
JP2017-125232 2017-06-27

Publications (1)

Publication Number Publication Date
WO2019003859A1 true WO2019003859A1 (en) 2019-01-03

Family

ID=64742966

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/021984 WO2019003859A1 (en) 2017-06-27 2018-06-08 Monitoring system, control method therefor, and program

Country Status (2)

Country Link
JP (1) JP6822328B2 (en)
WO (1) WO2019003859A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020218409A1 (en) * 2019-04-25 2020-10-29 株式会社クロスエッジラボ Thermosensitive imaging device, watching and monitoring system using thermosensitive imaging device, and watching and monitoring method using thermosensitive imaging device
JP6884819B2 (en) * 2019-06-26 2021-06-09 株式会社 日立産業制御ソリューションズ Safety management equipment, safety management methods and safety management programs
JP7138158B2 (en) * 2020-12-25 2022-09-15 エヌ・ティ・ティ・コムウェア株式会社 OBJECT CLASSIFIER, OBJECT CLASSIFICATION METHOD, AND PROGRAM
JP7237382B1 (en) 2021-12-24 2023-03-13 知能技術株式会社 Image processing device, image processing method, and image processing program

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007213528A (en) * 2006-02-13 2007-08-23 Sanyo Electric Co Ltd Action recognition system
JP2007241477A (en) * 2006-03-06 2007-09-20 Fuji Xerox Co Ltd Image processor
JP2009082511A (en) * 2007-09-29 2009-04-23 Kohshin Rubber Co Ltd Sleep monitoring device
JP2015138460A (en) * 2014-01-23 2015-07-30 富士通株式会社 state recognition method and state recognition device
JP2016157219A (en) * 2015-02-24 2016-09-01 株式会社日立製作所 Image processing method, and image processor
JP2017098180A (en) * 2015-11-27 2017-06-01 株式会社レイトロン Lighting device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1040482A (en) * 1996-07-23 1998-02-13 Hiroshi Akashi Unmanned annunciation system based on sentence information
JP3460680B2 (en) * 2000-07-07 2003-10-27 日本エルエスアイカード株式会社 Field situation notification system and imaging unit used for the same


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020109582A (en) * 2019-01-07 2020-07-16 エイアイビューライフ株式会社 Information processing device
WO2020145145A1 (en) * 2019-01-07 2020-07-16 エイアイビューライフ株式会社 Information processing device

Also Published As

Publication number Publication date
JP2019008638A (en) 2019-01-17
JP6822328B2 (en) 2021-01-27

Similar Documents

Publication Publication Date Title
JP6822328B2 (en) Watching support system and its control method
US10786183B2 (en) Monitoring assistance system, control method thereof, and program
JP6137425B2 (en) Image processing system, image processing apparatus, image processing method, and image processing program
JP6588978B2 (en) Apparatus, system and method for automatic detection of human orientation and / or position
JP6915421B2 (en) Watching support system and its control method
US11116423B2 (en) Patient monitoring system and method
JPWO2016143641A1 (en) Attitude detection device and attitude detection method
US20200245904A1 (en) Posture estimation device, behavior estimation device, storage medium storing posture estimation program, and posture estimation method
JP6952299B2 (en) Sleep depth judgment system, sleep depth judgment device and sleep depth judgment method
US20210219873A1 (en) Machine vision to predict clinical patient parameters
WO2020145380A1 (en) Care recording device, care recording system, care recording program, and care recording method
Adolf et al. Deep neural network based body posture recognitions and fall detection from low resolution infrared array sensor
JP6729510B2 (en) Monitoring support system and control method thereof
JP3767898B2 (en) Human behavior understanding system
JP6822326B2 (en) Watching support system and its control method
Inoue et al. Bed exit action detection based on patient posture with long short-term memory
JP2022010581A (en) Detection device, detection method, image processing method and program
JP6635074B2 (en) Watching support system and control method thereof
JP6729512B2 (en) Monitoring support system and control method thereof
JP7314939B2 (en) Image recognition program, image recognition device, learning program, and learning device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18825423

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18825423

Country of ref document: EP

Kind code of ref document: A1