WO2019013105A1 - Monitoring assistance system and control method thereof - Google Patents

Monitoring assistance system and control method thereof

Info

Publication number
WO2019013105A1
Authority
WO
WIPO (PCT)
Prior art keywords
person
watching
area
watched
image
Prior art date
Application number
PCT/JP2018/025596
Other languages
French (fr)
Japanese (ja)
Inventor
信二 高橋
田中 清明
純平 松永
達哉 村上
Original Assignee
オムロン株式会社
Priority date
Filing date
Publication date
Application filed by オムロン株式会社 (OMRON Corporation)
Publication of WO2019013105A1 publication Critical patent/WO2019013105A1/en

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00: Image analysis
            • G06T 7/60: Analysis of geometric attributes
            • G06T 7/70: Determining position or orientation of objects or cameras
      • G08: SIGNALLING
        • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
          • G08B 21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
            • G08B 21/02: Alarms for ensuring the safety of persons
            • G08B 21/18: Status alarms
              • G08B 21/22: Status alarms responsive to presence or absence of persons
          • G08B 25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
            • G08B 25/01: Alarm systems characterised by the transmission medium
              • G08B 25/04: Alarm systems using a single signalling line, e.g. in a closed loop

Definitions

  • The present invention relates to a technique for supporting the watching over of a subject on a bed.
  • Patent Document 1 proposes a system that detects a patient in an image captured by a camera, determines the patient's movement from the detection result, and issues a notification according to the determination result.
  • The present invention has been made in view of the above circumstances, and its object is to provide a technology that suppresses unnecessary notifications and thereby reduces the burden on system users and protects privacy.
  • To this end, the invention adopts a method of omitting the notification when it is determined that the subject is being watched over by another person.
  • A first aspect of the present invention is a watching support system for watching over a subject on a bed, comprising: an image acquisition unit that acquires an image captured by an imaging device; a watching determination unit that determines whether the subject is being watched over by another person, based on the image of a determination region that is a subregion of the captured image and includes the bed; a state determination unit that determines the state of the subject based on the captured image; and an output unit that issues a notification according to the determination result of the state determination unit, wherein the output unit omits the notification when the watching determination unit determines that the subject is being watched over by another person.
  • With this configuration, whether the subject is being watched over by another person is determined from an image of the bed and its surroundings.
  • Consider the case in which another person, such as a nurse or caregiver, watches over a subject such as a patient or care recipient.
  • In such a case, the subject is likely to be in the bed and the other person is likely to be near it. The above configuration can therefore determine with high accuracy whether the subject is being watched over by another person.
  • A notification is needed to watch over a subject indirectly when no one is watching directly, and is unnecessary when another person is watching the subject directly.
  • With the above configuration, the notification is omitted when it is determined that the subject is being watched over by another person.
  • Unnecessary notifications can thus be suppressed, reducing the burden on system users and protecting privacy. For example, system users are spared the effort of checking unnecessary notifications.
  • In addition, unnecessary transmission of images of people not involved in using the system, such as visitors, can be suppressed, protecting their privacy.
  • The watching determination unit may determine that the subject is being watched over by another person when two or more persons are present in the determination region, because two or more persons are likely to include the subject and another person watching over the subject.
  • The watching determination unit may determine that the subject is being watched over by another person when a first person present in the bed and a second person facing the first person both exist in the determination region. The first person present in the bed is likely to be the subject, and the second person facing the subject (the first person) is likely to be a person watching over the subject.
  • The watching determination unit may determine a person whose face-direction vector forms an angle at or below a threshold with the vector toward the first person to be the second person facing the first person, because such a person is likely to be facing the first person.
  • The watching determination unit may determine that the subject is being watched over by another person when a person performing a watched action and a person performing a watching action both exist in the determination region. A person performing a watched action is likely to be a subject being watched over, and a person performing a watching action is likely to be another person watching over the subject.
  • The present invention can be understood as a watching support system having at least part of the above configurations or functions.
  • The present invention can also be regarded as a watching support method or a control method of a watching support system including at least part of the above processing, as a program for causing a computer to execute these methods, or as a non-transitory computer-readable recording medium on which such a program is recorded.
  • FIG. 1 is a block diagram schematically showing a hardware configuration and a functional configuration of a watching support system.
  • FIG. 2 is a view showing an installation example of the imaging device.
  • FIG. 3A is an example of a captured image; FIGS. 3B and 3D are examples of the watching determination region; FIG. 3C is an example of the action determination regions.
  • FIG. 4 is a flowchart of the state monitoring process.
  • FIG. 5 is a flowchart of the watching determination process.
  • FIGS. 6A and 6B are examples of captured images.
  • FIG. 7 is a flowchart of watching judgment processing (first modification).
  • FIGS. 8A and 8B are examples of captured images.
  • FIG. 9 is a flowchart of the watching determination process (second modified example).
  • The present invention relates to a technique for supporting the watching over of a subject on a bed.
  • This technology can be applied to a system that automatically detects the getting-up and bed-leaving behavior of patients or care recipients in hospitals, nursing facilities, and the like, and issues the necessary notification when a dangerous state occurs.
  • The system can preferably be used, for example, to watch over elderly people, dementia patients, and children.
  • FIG. 1 is a block diagram schematically showing the hardware configuration and functional configuration of the watching support system 1.
  • FIG. 2 is a diagram showing an installation example of the imaging device.
  • The watching support system 1 includes an imaging device 10 and an information processing device 11 as its main hardware components.
  • The imaging device 10 and the information processing device 11 are connected by wire or wirelessly. Although only one imaging device 10 is shown in FIG. 1, a plurality of imaging devices 10 may be connected to the information processing device 11.
  • The imaging device 10 is a device for photographing the subject on the bed and capturing image data.
  • As the imaging device 10, a monochrome or color visible-light camera, an infrared camera, a three-dimensional camera, or the like can be used.
  • In this embodiment, an imaging device 10 composed of infrared LED illumination 100 and a near-infrared camera 101 is adopted so that the subject can be watched over even at night (even when the room is dark).
  • As shown in FIG. 2, the imaging device 10 is installed above the head of the bed 20, facing the foot side, so as to overlook the entire bed 20.
  • The imaging device 10 captures images at a predetermined interval (for example, 30 fps), and the image data is sequentially taken into the information processing device 11.
  • The orientation of the imaging device 10 is not particularly limited.
  • The imaging device 10 may instead be installed facing the bed 20 from the foot side or from the side of the bed 20.
  • The information processing device 11 is a device with the function of analyzing the image data taken in from the imaging device 10 in real time, automatically detecting the getting-up or bed-leaving state of the subject 21 on the bed 20, and issuing a notification when necessary.
  • The information processing device 11 has, as its specific functions, an image acquisition unit 110, a detection unit 111, a watching determination unit 112, an action determination unit 113, an output unit 114, an area setting unit 115, and a storage unit 116.
  • The information processing device 11 of this embodiment is a general-purpose computer comprising a CPU (processor), memory, storage (HDD, SSD, etc.), input devices (keyboard, mouse, touch panel, etc.), output devices (display, speaker, etc.), a communication interface, and so on.
  • Each function of the information processing device 11 described above is realized by the CPU executing a program stored in the storage or memory.
  • However, the configuration of the information processing device 11 is not limited to this example.
  • For example, distributed computing may be performed by a plurality of computers, part of the above functions may be executed by a cloud server, or part of the above functions may be executed by a circuit such as an ASIC or FPGA.
  • The image acquisition unit 110 is a function that acquires the images captured by the imaging device 10.
  • The image data input from the image acquisition unit 110 is temporarily recorded in memory or storage and supplied to the processing of the detection unit 111, the watching determination unit 112, the action determination unit 113, and so on.
  • The detection unit 111 analyzes the captured image acquired by the image acquisition unit 110 and detects the watched subject 21 and other people (nurses, caregivers, etc.) in it. For example, the detection unit 111 detects a person's body or a part of it (head, face, upper body, etc.). Any method may be used to detect a human body or its parts in a captured image; for example, an object detection algorithm using a classifier based on classical Haar-like or HoG features, or a more recent method such as Faster R-CNN, can preferably be used.
  • The detection unit 111 of this embodiment detects heads (the part above the neck; for example, the head 22 of the subject 21) with a classifier using Haar-like features, and outputs the position (x, y) and size (in vertical and horizontal pixels) of each head as the detection result (see the sketch below).
  • The position (x, y) of a head is represented by, for example, the image coordinates of the center point of a rectangular frame surrounding the head.
  • Although the detection unit 111 of this embodiment outputs detection results as positions and sizes in the image coordinate system, it may instead convert the image coordinate system into a spatial coordinate system and output each person's three-dimensional position and size in that spatial coordinate system.
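  • As a concrete illustration, the following minimal Python sketch shows a detection step of this kind. It is only a sketch of the idea: OpenCV does not ship a dedicated head detector, so the cascade file name "head_cascade.xml" is a hypothetical stand-in for a trained head or upper-body model.

      import cv2

      # Hypothetical Haar-cascade head detector; any trained cascade file
      # (e.g. an upper-body model) could stand in here.
      detector = cv2.CascadeClassifier("head_cascade.xml")

      def detect_heads(frame):
          """Return a list of (x, y, (w, h)) head detections in image coordinates."""
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
          heads = []
          for (bx, by, bw, bh) in boxes:
              # Position is the centre of the rectangle surrounding the head,
              # as in the text; size is the box dimensions in pixels.
              heads.append((bx + bw // 2, by + bh // 2, (bw, bh)))
          return heads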
  • The watching determination unit 112 is a function that determines whether the subject 21 is being watched over by another person, based on the image of the watching determination region, which is a subregion of the captured image acquired by the image acquisition unit 110 and includes the bed 20. In this embodiment, the watching determination unit 112 makes this determination based on the detection results of the detection unit 111 within the watching determination region. The watching determination process of the watching determination unit 112 is described later.
  • The action determination unit 113 is a function that determines the state or action of the subject 21 based on the captured images acquired by the image acquisition unit 110.
  • In this embodiment, the action determination unit 113 performs getting-up determination, bed-leaving determination, and the like based on the detection results of the detection unit 111 over the entire captured image.
  • The output unit 114 is a function that issues a notification according to the determination result of the action determination unit 113.
  • In this embodiment, the output unit 114 issues the necessary notification when the action determination unit 113 detects a getting-up or bed-leaving action of the subject 21.
  • Depending on the degree of danger of the subject 21's action, the output unit 114 can switch whether notification is necessary (for example, notifying only in dangerous states), the content of the notification (for example, the message), the notification means (for example, voice, e-mail, buzzer, or warning light), the notification destination (for example, a nurse or doctor), the frequency of notification, and so on.
  • In this embodiment, the output unit 114 omits the notification when the watching determination unit 112 determines that the subject 21 is being watched over by another person, as in the sketch below.
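  • The patent only names the switchable notification attributes, so the following Python sketch is a hypothetical illustration of an output unit of this kind; the event names, policy table, and send() helper are all assumptions made for illustration.

      # Hypothetical policy table: which events are notified, by what means,
      # and to whom. A real deployment would tune this per danger level.
      NOTIFY_POLICY = {
          "getting_up":  {"needed": True, "means": "buzzer", "to": "nurse"},
          "leaving_bed": {"needed": True, "means": "e-mail", "to": "nurse"},
      }

      def notify(event, being_watched):
          """Issue a notification for `event`, unless the subject is watched over."""
          if being_watched:          # watching determination found another person
              return                 # omit the notification entirely
          policy = NOTIFY_POLICY.get(event)
          if policy and policy["needed"]:
              send(policy["means"], policy["to"], f"Subject event: {event}")

      def send(means, destination, message):
          # Stand-in for real delivery (voice, e-mail, buzzer, warning light).
          print(f"[{means} -> {destination}] {message}")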
  • The area setting unit 115 is a function that sets the determination regions (the watching determination region and the action determination regions) on the images captured by the imaging device 10.
  • Since the watching support system 1 monitors the state of the subject 21 on the bed 20, the determination regions are set based on the region of the bed 20 in the captured image.
  • The determination regions may be set manually or automatically.
  • For manual setting, the area setting unit 115 may provide a user interface that lets the user input the bed region or the determination regions themselves on the captured image.
  • For automatic setting, the area setting unit 115 may detect the bed region in the captured image by object recognition processing.
  • FIG. 3A is an example of a captured image.
  • FIG. 3B is an example of the watching determination region set for the captured image of FIG. 3A.
  • FIG. 3C is an example of the action determination regions set for the captured image of FIG. 3A.
  • In this embodiment, the area setting unit 115 sets the watching determination region A0 and the action determination regions A1 to A3 with the bed region 30 as the reference.
  • The watching determination region A0 is a region consisting of the bed region 30 and its surroundings, and corresponds to the range in which the subject 21 and another person may be present while the subject 21 is being watched over.
  • The action determination region A1 is a region set on the head side of the bed 20 and corresponds to the range in which the head 22 of the subject 21 may be present while in bed (when the subject 21 is lying on the bed 20); it is hereinafter called the in-bed region A1.
  • The action determination region A2 is a region set on the foot side of the bed 20 and corresponds to the range in which the head 22 of the subject 21 may be present when getting up (when the subject 21 has raised the upper body); it is hereinafter called the getting-up region A2.
  • The positions and sizes of the determination regions A0 to A2 relative to the bed region 30 are determined in advance, so once the bed region 30 is specified, the range of each of the regions A0 to A2 is determined by calculation (see the sketch below).
  • The action determination region A3 is the region other than the action determination regions A1 and A2.
  • When the subject 21 has left the bed 20, the head 22 of the subject 21 is in the action determination region A3, hereinafter called the bed-leaving region A3.
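  • The following Python sketch illustrates one way the regions could be derived from the bed rectangle. It assumes axis-aligned regions in image coordinates with the head of the bed at the top of the frame; the margin and the half-and-half split of A1/A2 are illustrative assumptions, not values from the patent.

      def set_regions(bed, margin=40):
          """bed = (x, y, w, h) bed region 30; returns regions A0, A1, A2 as rects."""
          x, y, w, h = bed
          a0 = (x - margin, y - margin, w + 2 * margin, h + 2 * margin)  # watching region A0
          a1 = (x, y, w, h // 2)           # head-side half: in-bed region A1
          a2 = (x, y + h // 2, w, h // 2)  # foot-side half: getting-up region A2
          return a0, a1, a2                # A3 is everything outside A1 and A2

      def contains(rect, point):
          """True if `point` (px, py) lies inside `rect` (x, y, w, h)."""
          rx, ry, rw, rh = rect
          px, py = point
          return rx <= px < rx + rw and ry <= py < ry + rh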
  • In this embodiment, the watching determination unit 112 determines whether the detected positions of two or more heads fall within the watching determination region A0. If so, it determines that the subject 21 is being watched over by another person; otherwise, it determines that the subject 21 is not being watched over.
  • The action determination unit 113 also determines which of the action determination regions A1 to A3 the detected head position belongs to, and classifies the state of the subject 21 accordingly.
  • The case where the head is detected in the in-bed region A1 is called the "in-bed state", the case where it is detected in the getting-up region A2 the "getting-up state", and the case where it is detected in the bed-leaving region A3 the "bed-leaving state".
  • The action determination unit 113 then detects a state change from the in-bed state to the getting-up state as a getting-up action, and a state change from the getting-up state to the bed-leaving state as a bed-leaving action, as in the sketch below.
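  • A minimal Python sketch of this classification and transition detection, reusing set_regions() and contains() from the sketch above; the state names follow the text.

      def classify(head_pos, a1, a2):
          """Map a head position to one of the three states named in the text."""
          if contains(a1, head_pos):
              return "in_bed"
          if contains(a2, head_pos):
              return "getting_up"
          return "leaving_bed"   # region A3 is everything outside A1 and A2

      def detect_action(prev_state, state):
          """Return the action implied by a state transition, if any."""
          if prev_state == "in_bed" and state == "getting_up":
              return "getting_up"
          if prev_state == "getting_up" and state == "leaving_bed":
              return "leaving_bed"
          return None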
  • The storage unit 116 is a function that stores the various data used by the watching support system 1 in its processing.
  • It provides storage areas for at least the parameters (such as threshold values) used in the getting-up determination, the bed-leaving determination, and so on, the determination region settings, and the image data or detection results of several past frames (used to calculate movement speed and direction).
  • In step S40, the image acquisition unit 110 acquires one frame of the captured image from the imaging device 10.
  • The acquired image is temporarily recorded in the storage unit 116.
  • In step S41, the detection unit 111 detects heads in the captured image acquired in step S40.
  • The information on each detected head position (xy coordinates) is recorded in the storage unit 116 in association with the capture time or frame number of the image acquired in step S40.
  • In step S42, the watching determination unit 112 performs the watching determination process using the detection result of step S41.
  • In step S43, the flow branches according to the determination result of step S42. If it is determined that the subject 21 is not being watched over by another person, the process proceeds to step S44; if the subject 21 is being watched over, steps S44 and S45 are skipped and this flow ends.
  • In step S44, the action determination unit 113 determines the state or action of the subject 21 using the detection result of step S41.
  • In step S45, the output unit 114 issues a notification according to the determination result of step S44. The sketch below ties these steps together.
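  • A sketch of the per-frame flow of FIG. 4 (steps S40 to S45), wiring together the functions sketched above. The camera object and the choice of the subject's head among the detections are assumptions made for illustration.

      def monitor_frame(camera, regions, prev_state):
          a0, a1, a2 = regions
          ok, frame = camera.read()                  # S40: acquire one frame
          if not ok:
              return prev_state
          heads = detect_heads(frame)                # S41: head detection
          in_a0 = [(hx, hy) for (hx, hy, _) in heads if contains(a0, (hx, hy))]
          being_watched = len(in_a0) >= 2            # S42: watching determination
          if being_watched:                          # S43: watched, skip S44/S45
              return prev_state
          if not in_a0:
              return prev_state
          state = classify(in_a0[0], a1, a2)         # S44: state determination
          action = detect_action(prev_state, state)
          if action:
              notify(action, being_watched=False)    # S45: notification
          return state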
  • FIG. 5 is a flowchart of the watching determination process executed by the watching determination unit 112.
  • FIGS. 6A and 6B are examples of captured images acquired by the image acquisition unit 110.
  • In step S50, the watching determination unit 112 determines whether two or more heads were detected in the watching determination region A0. If two or more heads were detected in the region, the process proceeds to step S51; if at most one head was detected, the process proceeds to step S52.
  • In step S51, the watching determination unit 112 determines that the subject 21 is being watched over by another person ("watched" determination).
  • In step S52, the watching determination unit 112 determines that the subject 21 is not being watched over by another person ("not watched" determination).
  • First, consider the case where the captured image 60 of FIG. 6A is acquired in step S40.
  • In this case, only the head 61 of the subject 21 is detected in the watching determination region A0 in step S41. It is therefore determined that the subject 21 is not being watched over (step S52), and a notification is issued according to the state or action of the subject 21 (steps S44 and S45).
  • Next, consider the case where the captured image 62 of FIG. 6B is acquired. In step S41, the head 61 of the subject 21 and the head 63 of another person watching over the subject 21 are detected in the watching determination region A0. It is therefore determined that the subject 21 is being watched over (step S51), and the notification of step S45 is omitted.
  • As described above, in this embodiment, whether the subject is being watched over by another person is determined from an image of the bed and its surrounding area.
  • Consider the case in which another person, such as a nurse or caregiver, watches over a subject such as a patient or care recipient.
  • In such a case, the subject is likely to be in the bed and the other person near it, so this configuration can determine with high accuracy whether the subject is being watched over by another person.
  • In this embodiment, the subject is determined to be watched over when two or more persons are present in the watching determination region, since those persons are likely to include the subject and another person watching over the subject. Furthermore, when two or more persons appear in the captured image, the state of another person could be erroneously detected as the state of the subject; this embodiment also suppresses such erroneous detection.
  • A notification is needed to watch over a subject indirectly when no one is watching directly, and is unnecessary when another person is watching the subject directly.
  • In this embodiment, the notification is omitted when it is determined that the subject is being watched over by another person.
  • Unnecessary notifications can thus be suppressed, reducing the burden on system users and protecting privacy. For example, system users are spared the effort of checking unnecessary notifications.
  • Unnecessary transmission of images of other people can also be suppressed, protecting their privacy.
  • The above description of the embodiment merely illustrates the present invention.
  • The present invention is not limited to the specific embodiment above, and various modifications are possible within the scope of its technical idea.
  • For example, the shape and size of the watching determination region A0 are not particularly limited.
  • A hexagonal watching determination region A0 may be set, for example.
  • The watching determination process is also not limited to the flowchart of FIG. 5.
  • Modified examples of the watching determination process are described below.
  • FIG. 7 is a flowchart of the watching determination process (first modification) executed by the watching determination unit 112.
  • FIGS. 8A and 8B are examples of captured images acquired by the image acquisition unit 110.
  • In step S70, the watching determination unit 112 determines whether two or more heads were detected in the watching determination region A0. If two or more heads were detected in the region, the process proceeds to step S71; if at most one head was detected, the process proceeds to step S75.
  • In step S71, the watching determination unit 112 detects faces within the watching determination region A0 of the captured image acquired by the image acquisition unit 110.
  • A face is detected where a head has been detected; however, no face is detected for a head seen from behind, so a head may be detected without a corresponding face.
  • In step S72, the watching determination unit 112 determines whether one or more faces were detected in step S71. If one or more faces were detected, the process proceeds to step S73; if no face was detected, the process proceeds to step S75.
  • In step S73, based on the detection results of steps S41 and S72, the watching determination unit 112 determines whether a first person present in the bed 20 and a second person facing the first person both exist in the watching determination region A0.
  • Since the first person is considered to be watched over by the second person, the first person is hereinafter called the "watched person" and the second person the "watcher". If both a watched person and a watcher exist, the process proceeds to step S74; otherwise, it proceeds to step S75.
  • In step S74, the watching determination unit 112 determines that the subject 21 is being watched over by another person ("watched" determination).
  • In step S75, the watching determination unit 112 determines that the subject 21 is not being watched over by another person ("not watched" determination).
  • Consider the case where the captured image 80 of FIG. 8A is acquired. In step S41, the head 81 of the subject 21 and the head 82 of another person who is not watching over the subject 21 are detected in the watching determination region A0. Then, in step S71, the face 83 of the other person is detected.
  • In step S73, the following processing is performed.
  • First, the watching determination unit 112 takes the head 81 detected within the bed 20 to be the head of the watched person.
  • Next, the watching determination unit 112 calculates the vector 84 of the direction of the face 83 and the vector 85 from the face 83 (head 82) toward the head 81.
  • The watching determination unit 112 then determines whether the angle θ1 (< 180 degrees) formed by the vector 84 and the vector 85 is at or below a threshold.
  • The threshold is an angle for determining whether the person is facing the watched person, and is, for example, 45 degrees.
  • In FIG. 8A, the angle θ1 exceeds the threshold, so the watching determination unit 112 determines that the person with the face 83 is not a watcher but a person not facing the watched person. That is, it determines that a watched person exists in the watching determination region A0 but no watcher does. It is therefore determined that the subject 21 is not being watched over (step S75), and a notification is issued according to the state or action of the subject 21 (steps S44 and S45).
  • Next, consider the case where the captured image 86 of FIG. 8B is acquired. In step S41, the head 81 of the subject 21 and the head 87 of another person watching over the subject 21 are detected in the watching determination region A0. Then, in step S71, the face 88 of the other person is detected.
  • In step S73, the following processing is performed.
  • First, the watching determination unit 112 takes the head 81 detected within the bed 20 to be the head of the watched person.
  • Next, the watching determination unit 112 calculates the vector 89 of the direction of the face 88 and the vector 90 from the face 88 (head 87) toward the head 81.
  • The watching determination unit 112 then determines whether the angle θ2 (< 180 degrees) formed by the vector 89 and the vector 90 is at or below the threshold.
  • In FIG. 8B, the angle θ2 is at or below the threshold, so the watching determination unit 112 determines that the person with the face 88 is a watcher. That is, it determines that both a watched person and a watcher exist in the watching determination region A0. It is therefore determined that the subject 21 is being watched over (step S74), and the notification of step S45 is omitted.
  • In this modification, the subject is determined to be watched over by another person only when a watched person (first person) present in the bed and a watcher (second person) facing the watched person both exist in the watching determination region.
  • The watched person present in the bed is likely to be the subject, and the watcher facing the subject (the watched person) is likely to be a person watching over the subject. Conversely, even if another person is beside the subject, that person can hardly be said to be watching over the subject while not facing the subject. This configuration can therefore determine with high accuracy whether the subject is being watched over by another person.
  • In this modification, a person whose face-direction vector forms an angle at or below the threshold with the vector toward the watched person is determined to be a watcher.
  • Such a person is likely to be facing the watched person, so this configuration can detect watchers with high accuracy. The sketch below illustrates the facing test.
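  • A minimal Python sketch of the facing test of this modification. The face position and face-direction vector would come from a face or pose estimator, which is outside this sketch; the 45-degree default follows the example threshold in the text.

      import math

      def is_facing(face_pos, face_dir, target_pos, threshold_deg=45.0):
          """True if the angle between the face-direction vector and the vector
          from the face to the target (the watched person's head) is at or
          below the threshold."""
          to_target = (target_pos[0] - face_pos[0], target_pos[1] - face_pos[1])
          dot = face_dir[0] * to_target[0] + face_dir[1] * to_target[1]
          norm = math.hypot(*face_dir) * math.hypot(*to_target)
          if norm == 0:
              return False
          cos_angle = max(-1.0, min(1.0, dot / norm))
          return math.degrees(math.acos(cos_angle)) <= threshold_deg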
  • FIG. 9 is a flowchart of the watching determination process (second modification) executed by the watching determination unit 112.
  • In step S90, the watching determination unit 112 determines whether two or more heads were detected in the watching determination region A0. If two or more heads were detected in the region, the process proceeds to step S91; if at most one head was detected, the process proceeds to step S94.
  • In step S91, using the captured image acquired by the image acquisition unit 110, the watching determination unit 112 detects, within the watching determination region A0, a person performing a watched action and a person performing a watching action.
  • Watched actions are, for example, actions of receiving assistance in getting up or going to bed, with meals, or with changing clothes; watching actions are, for example, actions of giving such assistance.
  • The person performing a watched action and the person performing a watching action are detected, for example, by an object detection algorithm using a classifier based on HoG features or a method such as Faster R-CNN.
  • In step S92, the watching determination unit 112 determines whether both a person performing a watched action and a person performing a watching action were detected in step S91. If both were detected, the process proceeds to step S93; otherwise, it proceeds to step S94.
  • In step S93, the watching determination unit 112 determines that the subject 21 is being watched over by another person ("watched" determination).
  • In step S94, the watching determination unit 112 determines that the subject 21 is not being watched over by another person ("not watched" determination).
  • In this modification, the subject is determined to be watched over by another person only when a person performing a watched action and a person performing a watching action are both present in the watching determination region.
  • A person performing a watched action is likely to be a subject being watched over by another person, and a person performing a watching action is likely to be another person watching over the subject.
  • Conversely, even if another person is near the subject, that person can hardly be said to be watching over the subject while not performing a watching action. This configuration can therefore determine with high accuracy whether the subject is being watched over by another person. A sketch of this determination follows.
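  • A hypothetical Python sketch of steps S90 to S94. The patent does not fix an action recognizer, so recognize_actions is a stand-in for a detector that labels each person in region A0, and the label names are assumptions.

      # Assumed label sets for the two action roles.
      WATCHED = {"assisted_getting_up", "assisted_meal", "assisted_changing"}
      WATCHING = {"assisting_getting_up", "assisting_meal", "assisting_changing"}

      def watched_by_actions(frame, a0, recognize_actions):
          """Steps S90-S94: "watched" only if both action roles appear in A0."""
          labels = recognize_actions(frame, a0)  # e.g. ["assisted_meal", ...]
          has_watched = any(label in WATCHED for label in labels)
          has_watching = any(label in WATCHING for label in labels)
          return has_watched and has_watching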
  • Reference numerals: 1: Watching support system, 10: Imaging device, 11: Information processing device, 110: Image acquisition unit, 111: Detection unit, 112: Watching determination unit, 113: Action determination unit, 114: Output unit, 115: Area setting unit, 116: Storage unit, 100: Infrared LED illumination, 101: Near-infrared camera, 20: Bed, 21: Subject, 22: Head, 30: Bed region, A0: Watching determination region, A1: In-bed region (action determination region), A2: Getting-up region (action determination region), A3: Bed-leaving region (action determination region), 60: Captured image, 61: Head, 62: Captured image, 63: Head, 80: Captured image, 81: Head, 82: Head, 83: Face, 84: Vector, 85: Vector, 86: Captured image, 87: Head, 88: Face, 89: Vector, 90: Vector, θ1: Angle, θ2: Angle

Abstract

This monitoring assistance system, which assists monitoring of a target who is in bed, is provided with an image acquisition unit which acquires an image captured by an imaging device, a monitoring determination unit which determines whether or not the target is being monitored by another person on the basis of an image of a determination region, which is a subregion of the captured image and is a region that includes the bed, a condition determination unit which determines the condition of the target on the basis of the captured image, and an output unit which makes a notification depending on the determination result of the condition determination unit. The output unit omits the aforementioned notification if the monitoring determination unit has determined that the target is being monitored by another person.

Description

Monitoring assistance system and control method thereof
TECHNICAL FIELD
The present invention relates to a technique for supporting the watching over of a subject on a bed.
To prevent accidents such as falls from a bed, systems that support the watching over of patients in hospitals and nursing facilities are known. Patent Document 1 proposes a system that detects a patient in an image captured by a camera, determines the patient's movement from the detection result, and issues a notification according to the determination result.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2012-071003
As described above, attempts have been made to detect a subject in a captured image and use the detection result to support watching over the subject. However, although no notification is needed when the subject is being directly watched over by another person, a method that issues notifications according to predetermined actions, as in Patent Document 1, can produce such unnecessary notifications. Unnecessary notifications in turn increase the burden on the system users who must check them. Furthermore, when the captured image itself is sent as the notification, an unnecessary notification can violate the privacy of other people appearing in the image.
The present invention has been made in view of the above circumstances, and its object is to provide a technology that suppresses unnecessary notifications and thereby reduces the burden on system users and protects privacy.
To achieve the above object, the present invention adopts a method of determining, based on a determination region of the image captured by an imaging device, whether the subject is being watched over by another person, and omitting the notification when it is determined that the subject is being watched over.
Specifically, a first aspect of the present invention is a watching support system for watching over a subject on a bed, comprising: an image acquisition unit that acquires an image captured by an imaging device; a watching determination unit that determines whether the subject is being watched over by another person, based on the image of a determination region that is a subregion of the captured image and includes the bed; a state determination unit that determines the state of the subject based on the captured image; and an output unit that issues a notification according to the determination result of the state determination unit, wherein the output unit omits the notification when the watching determination unit determines that the subject is being watched over by another person.
According to this configuration, whether the subject is being watched over by another person is determined from an image of the bed and its surrounding area. Consider the case in which another person, such as a nurse or caregiver, watches over a subject such as a patient or care recipient. In this case, the subject is likely to be in the bed and the other person is likely to be near it, so the configuration can determine with high accuracy whether the subject is being watched over. A notification is needed to watch over a subject indirectly when no one is watching directly, and is unnecessary when another person is watching the subject directly; according to this configuration, the notification is omitted when the subject is determined to be watched over. Unnecessary notifications can thus be suppressed, reducing the burden on system users and protecting privacy. For example, system users are spared the effort of checking unnecessary notifications, and unnecessary transmission of images of people not involved in using the system, such as visitors, is suppressed, protecting their privacy.
The watching determination unit may determine that the subject is being watched over by another person when two or more persons are present in the determination region, because two or more persons are likely to include the subject and another person watching over the subject.
The watching determination unit may determine that the subject is being watched over by another person when a first person present in the bed and a second person facing the first person both exist in the determination region. The first person present in the bed is likely to be the subject, and the second person facing the subject (the first person) is likely to be a person watching over the subject.
The watching determination unit may determine a person whose face-direction vector forms an angle at or below a threshold with the vector toward the first person to be the second person facing the first person, because such a person is likely to be facing the first person.
The watching determination unit may determine that the subject is being watched over by another person when a person performing a watched action and a person performing a watching action both exist in the determination region. A person performing a watched action is likely to be a subject being watched over, and a person performing a watching action is likely to be another person watching over the subject.
The present invention can be understood as a watching support system having at least part of the above configurations or functions. It can also be regarded as a watching support method or a control method of a watching support system including at least part of the above processing, as a program for causing a computer to execute these methods, or as a non-transitory computer-readable recording medium on which such a program is recorded. The above configurations and processes can be combined with one another as long as no technical contradiction arises.
According to the present invention, unnecessary notifications can be suppressed, and effects such as reducing the burden on system users and protecting privacy can be obtained.
FIG. 1 is a block diagram schematically showing the hardware configuration and functional configuration of a watching support system.
FIG. 2 is a diagram showing an installation example of the imaging device.
FIG. 3A is an example of a captured image; FIGS. 3B and 3D are examples of the watching determination region; FIG. 3C is an example of the action determination regions.
FIG. 4 is a flowchart of the state monitoring process.
FIG. 5 is a flowchart of the watching determination process.
FIGS. 6A and 6B are examples of captured images.
FIG. 7 is a flowchart of the watching determination process (first modification).
FIGS. 8A and 8B are examples of captured images.
FIG. 9 is a flowchart of the watching determination process (second modification).
The present invention relates to a technique for supporting the watching over of a subject on a bed. This technology can be applied to a system that automatically detects the getting-up and bed-leaving behavior of patients or care recipients in hospitals, nursing facilities, and the like, and issues the necessary notification when a dangerous state occurs. The system can preferably be used, for example, to watch over elderly people, dementia patients, and children.
Hereinafter, an example of a preferred embodiment for carrying out the present invention is described with reference to the drawings. However, the configurations and operations of the devices described in the following embodiment are merely examples, and the scope of the present invention is not limited to them.
(System configuration)
The configuration of a watching support system according to an embodiment of the present invention is described with reference to FIGS. 1 and 2. FIG. 1 is a block diagram schematically showing the hardware configuration and functional configuration of the watching support system 1, and FIG. 2 is a diagram showing an installation example of the imaging device.
The watching support system 1 includes an imaging device 10 and an information processing device 11 as its main hardware components. The imaging device 10 and the information processing device 11 are connected by wire or wirelessly. Although only one imaging device 10 is shown in FIG. 1, a plurality of imaging devices 10 may be connected to the information processing device 11.
The imaging device 10 is a device for photographing the subject on the bed and capturing image data. As the imaging device 10, a monochrome or color visible-light camera, an infrared camera, a three-dimensional camera, or the like can be used. In this embodiment, an imaging device 10 composed of infrared LED illumination 100 and a near-infrared camera 101 is adopted so that the subject can be watched over even at night (even when the room is dark). As shown in FIG. 2, the imaging device 10 is installed above the head of the bed 20, facing the foot side, so as to overlook the entire bed 20. The imaging device 10 captures images at a predetermined interval (for example, 30 fps), and the image data is sequentially taken into the information processing device 11. The orientation of the imaging device 10 is not particularly limited; it may instead be installed facing the bed 20 from the foot side or from the side of the bed 20.
The information processing device 11 is a device with the function of analyzing the image data taken in from the imaging device 10 in real time, automatically detecting the getting-up or bed-leaving state of the subject 21 on the bed 20, and issuing a notification when necessary. The information processing device 11 has, as its specific functions, an image acquisition unit 110, a detection unit 111, a watching determination unit 112, an action determination unit 113, an output unit 114, an area setting unit 115, and a storage unit 116. The information processing device 11 of this embodiment is a general-purpose computer comprising a CPU (processor), memory, storage (HDD, SSD, etc.), input devices (keyboard, mouse, touch panel, etc.), output devices (display, speaker, etc.), a communication interface, and so on, and each of the above functions is realized by the CPU executing a program stored in the storage or memory. However, the configuration of the information processing device 11 is not limited to this example; for example, distributed computing may be performed by a plurality of computers, part of the above functions may be executed by a cloud server, or part of the above functions may be executed by a circuit such as an ASIC or FPGA.
The image acquisition unit 110 is a function that acquires the images captured by the imaging device 10. The image data input from the image acquisition unit 110 is temporarily recorded in memory or storage and supplied to the processing of the detection unit 111, the watching determination unit 112, the action determination unit 113, and so on.
The detection unit 111 analyzes the captured image acquired by the image acquisition unit 110 and detects the watched subject 21 and other people (nurses, caregivers, etc.) in it. For example, the detection unit 111 detects a person's body or a part of it (head, face, upper body, etc.). Any method may be used to detect a human body or its parts in a captured image; for example, an object detection algorithm using a classifier based on classical Haar-like or HoG features, or a more recent method such as Faster R-CNN, can preferably be used. The detection unit 111 of this embodiment detects heads (the part above the neck; for example, the head 22 of the subject 21) with a classifier using Haar-like features, and outputs the position (x, y) and size (in vertical and horizontal pixels) of each head as the detection result. The position (x, y) of a head is represented by, for example, the image coordinates of the center point of a rectangular frame surrounding the head. Although the detection unit 111 of this embodiment outputs detection results as positions and sizes in the image coordinate system, it may instead convert the image coordinate system into a spatial coordinate system and output each person's three-dimensional position and size.
The watching determination unit 112 is a function that determines whether the subject 21 is being watched over by another person, based on the image of the watching determination region, which is a subregion of the captured image acquired by the image acquisition unit 110 and includes the bed 20. In this embodiment, the watching determination unit 112 makes this determination based on the detection results of the detection unit 111 within the watching determination region. The watching determination process of the watching determination unit 112 is described later.
The action determination unit 113 is a function that determines the state or action of the subject 21 based on the captured images acquired by the image acquisition unit 110. In this embodiment, the action determination unit 113 performs getting-up determination, bed-leaving determination, and the like based on the detection results of the detection unit 111 over the entire captured image.
The output unit 114 is a function that issues a notification according to the determination result of the action determination unit 113. In this embodiment, the output unit 114 issues the necessary notification when the action determination unit 113 detects a getting-up or bed-leaving action of the subject 21. Depending on the degree of danger of the subject 21's action, the output unit 114 can switch whether notification is necessary (for example, notifying only in dangerous states), the content of the notification (for example, the message), the notification means (for example, voice, e-mail, buzzer, or warning light), the notification destination (for example, a nurse or doctor), the frequency of notification, and so on. In this embodiment, the output unit 114 omits the notification when the watching determination unit 112 determines that the subject 21 is being watched over by another person.
The area setting unit 115 is a function that sets the determination regions (the watching determination region and the action determination regions) on the images captured by the imaging device 10. Since the watching support system 1 monitors the state of the subject 21 on the bed 20, the determination regions are set based on the region of the bed 20 in the captured image. The determination regions may be set manually or automatically. For manual setting, the area setting unit 115 may provide a user interface that lets the user input the bed region or the determination regions themselves on the captured image. For automatic setting, the area setting unit 115 may detect the bed region in the captured image by object recognition processing.
FIG. 3A is an example of a captured image, FIG. 3B is an example of the watching determination area set for the captured image of FIG. 3A, and FIG. 3C is an example of the action determination areas set for the captured image of FIG. 3A. In the present embodiment, the area setting unit 115 sets the watching determination area A0 and the action determination areas A1 to A3 with the bed area 30 as a reference. The watching determination area A0 consists of the bed area 30 and its surroundings, and corresponds to the range in which the target person 21 and another person may be present while the target person 21 is being watched by that person. The action determination area A1 is set on the head side of the bed 20 and corresponds to the range in which the head 22 of the target person 21 may be present while in bed (when the target person 21 is lying on the bed 20); it is hereinafter referred to as the in-bed area A1. The action determination area A2 is set on the foot side of the bed 20 and corresponds to the range in which the head 22 of the target person 21 may be present on waking (when the target person 21 has raised the upper body); it is hereinafter referred to as the wake-up area A2. In the present embodiment, the positions and sizes of the determination areas A0 to A2 relative to the bed area 30 are determined in advance, so that once the bed area 30 is specified, the range of each of the areas A0 to A2 is obtained by calculation. The action determination area A3 is the area other than the action determination areas A1 and A2; when the target person 21 has left the bed 20, the head 22 of the target person 21 is located in the action determination area A3, which is hereinafter referred to as the bed-leaving area A3.
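As an illustration of how the determination areas could be derived from the bed area, the following is a minimal sketch in Python. The surrounding margin and the head/foot split ratio are assumptions made for illustration; the embodiment only states that each area's position and size relative to the bed area are fixed in advance.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle in image coordinates (pixels)."""
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h


def set_areas(bed: Rect) -> dict:
    """Derive the determination areas A0-A2 from the bed area 30.

    The margin and the 40% head-side split below are illustrative
    assumptions, not values specified by the embodiment.
    """
    margin = bed.w // 2  # assumed width of the surrounding strip of A0
    a0 = Rect(bed.x - margin, bed.y - margin,
              bed.w + 2 * margin, bed.h + 2 * margin)
    head_h = int(bed.h * 0.4)  # assume the head side is the upper 40% of the bed
    a1 = Rect(bed.x, bed.y, bed.w, head_h)
    a2 = Rect(bed.x, bed.y + head_h, bed.w, bed.h - head_h)
    return {"A0": a0, "A1": a1, "A2": a2}  # A3 = everything outside A1 and A2
```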
In the present embodiment, the watching determination unit 112 determines whether the detected positions of two or more heads fall within the watching determination area A0. If they do, the watching determination unit 112 determines that the target person 21 is being watched by another person; otherwise, it determines that the target person 21 is not being watched by another person.
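This two-head rule is simple enough to express directly. A minimal sketch, assuming head detections given as (x, y) image coordinates and the `Rect` helper from the previous sketch:

```python
def is_watched(head_positions, a0: Rect) -> bool:
    """'Watched' determination of the embodiment:
    two or more detected heads inside the watching determination area A0."""
    inside = [(x, y) for (x, y) in head_positions if a0.contains(x, y)]
    return len(inside) >= 2
```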
Also in the present embodiment, the action determination unit 113 determines which of the action determination areas A1 to A3 the detected head position belongs to, and classifies the state of the target person 21 accordingly. The case where the head is detected in the in-bed area A1 is called the "in-bed state", the case where it is detected in the wake-up area A2 the "wake-up state", and the case where it is detected in the bed-leaving area A3 the "bed-leaving state". The action determination unit 113 then detects a state change from the in-bed state to the wake-up state as a wake-up action, and a state change from the wake-up state to the bed-leaving state as a bed-leaving action.
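A sketch of this state classification and transition detection might look as follows; it assumes the `Rect` helper and area dictionary from the earlier sketch and a single detected head position.

```python
def classify_state(head_xy, areas: dict) -> str:
    """Map a detected head position to the in-bed / wake-up / bed-leaving state."""
    x, y = head_xy
    if areas["A1"].contains(x, y):
        return "in_bed"       # head in the in-bed area A1
    if areas["A2"].contains(x, y):
        return "wake_up"      # head in the wake-up area A2
    return "out_of_bed"       # head in the bed-leaving area A3


def detect_action(prev_state: str, state: str) -> str | None:
    """Detect the wake-up and bed-leaving actions as state transitions."""
    if prev_state == "in_bed" and state == "wake_up":
        return "wake_up_action"
    if prev_state == "wake_up" and state == "out_of_bed":
        return "bed_leaving_action"
    return None
```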
The storage unit 116 has a function of storing various data used by the watching support system 1 for its processing. The storage unit 116 provides storage areas for at least the various parameters (such as thresholds) used for the wake-up determination, the bed-leaving determination, and so on, the setting information of the determination areas, and the image data or detection results of several past frames (for calculating moving speed and moving direction).
(State Monitoring Processing)
An example of the state monitoring processing of the present system will be described with reference to FIG. 4. The processing flow of FIG. 4 is executed each time one frame of image is captured from the imaging device 10.
First, in step S40, the image acquisition unit 110 acquires one frame of the captured image from the imaging device 10. The acquired captured image is temporarily recorded in the storage unit 116.
Next, in step S41, the detection unit 111 detects a head from the captured image acquired in step S40. The information on the detected head position (xy coordinates) is recorded in the storage unit 116 in association with the shooting time of the captured image acquired in step S40 or with the frame number of that image.
Then, in step S42, the watching determination unit 112 performs the watching determination processing using the detection result of step S41.
Next, in step S43, the processing branches according to the determination result of step S42. If it is determined that the target person 21 is being watched by another person, the processing of steps S44 and S45 is omitted and this flow ends. If it is not determined that the target person 21 is being watched by another person, the processing proceeds to step S44. (Note that the original text swapped these two branches, contradicting the behavior described for FIGS. 6A and 6B below; the branch order here follows that consistent behavior.)
In step S44, the action determination unit 113 determines the state or action of the target person 21 using the detection result of step S41. In step S45, the output unit 114 issues a notification according to the determination result of step S44.
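Putting steps S40 to S45 together, one per-frame pass could be sketched as below. The `detector` and `notifier` objects are hypothetical stand-ins for the detection unit 111 and the output unit 114, and the helpers `is_watched`, `classify_state`, and `detect_action` come from the earlier sketches.

```python
def process_frame(frame, detector, areas: dict, prev_state: str, notifier) -> str:
    """One pass through the FIG. 4 flow (steps S40-S45), as a sketch."""
    heads = detector.detect_heads(frame)        # S41: head detection
    if is_watched(heads, areas["A0"]):          # S42/S43: watched by another person
        return prev_state                       # -> skip S44 and S45
    if not heads:
        return prev_state                       # no head to classify this frame
    state = classify_state(heads[0], areas)     # S44: state/action determination
    action = detect_action(prev_state, state)
    if action is not None:
        notifier.notify(action)                 # S45: notification
    return state
```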
(Watching Determination Processing)
An example of the watching determination processing of step S42 will be described with reference to FIG. 5, FIG. 6A, and FIG. 6B. FIG. 5 is a flowchart of the watching determination processing executed by the watching determination unit 112, and FIGS. 6A and 6B are examples of captured images acquired by the image acquisition unit 110.
First, in step S50, the watching determination unit 112 determines whether two or more heads have been detected in the watching determination area A0. If two or more heads have been detected in the watching determination area A0, the processing proceeds to step S51; if only one head has been detected, the processing proceeds to step S52.
In step S51, the watching determination unit 112 determines that the target person 21 is being watched by another person ("watched" determination).
In step S52, the watching determination unit 112 determines that the target person 21 is not being watched by another person ("not watched" determination).
Here, consider the case where the captured image 60 of FIG. 6A is acquired in step S40. In this case, only the head 61 of the target person 21 is detected in step S41 as a head within the watching determination area A0. Therefore, it is determined that the target person 21 is not being watched by another person (step S52), and a notification is issued according to the state or action of the target person 21 (steps S44 and S45).
Next, consider the case where the captured image 62 of FIG. 6B is acquired in step S40. In this case, the head 61 of the target person 21 and the head 63 of another person watching the target person 21 are detected in step S41 as heads within the watching determination area A0. Therefore, it is determined that the target person 21 is being watched by another person (step S51), and the notification of step S45 is omitted.
As described above, according to the present embodiment, whether the target person is being watched by another person is determined from an image of the bed area and its surroundings. Consider the case where another person, such as a nurse or caregiver, watches over a target person such as a patient or a person requiring care. In this case, the target person is likely to be in the bed, and the other person is likely to be near the bed. With the above configuration, therefore, whether the target person is being watched by another person can be determined with high accuracy.
Specifically, it is determined that the target person is being watched by another person only when two or more persons are present in the watching determination area. Since those two or more persons are highly likely to include the target person and another person watching over the target person, this configuration makes it possible to detect with high accuracy that the target person is being watched. Furthermore, when two or more persons appear in the captured image, the state of the other person may be erroneously detected as the state of the target person; the present embodiment can also suppress such erroneous detection.
A notification is needed in order to indirectly watch over a target person who is not being watched directly, and is unnecessary when the target person is being watched directly by another person. According to the present embodiment, the notification is omitted when it is determined that the target person is being watched by another person. Unnecessary notifications can thus be suppressed, yielding effects such as reducing the burden on system users and protecting privacy. For example, the effort of a system user checking unnecessary notifications can be saved. In addition, unnecessary transmission of images of other persons can be suppressed, protecting their privacy.
<Others>
The above description of the embodiment merely illustrates the present invention by way of example. The present invention is not limited to the specific embodiment above, and various modifications are possible within the scope of its technical idea. For example, although the above embodiment describes an example in which a trapezoidal watching determination area A0 (FIG. 3B) is set, the shape and size of the watching determination area A0 are not particularly limited. For example, as shown in FIG. 3D, a hexagonal watching determination area A0 may be set.
Further, the flowchart of the watching determination processing is not limited to that of FIG. 5. Modifications of the watching determination processing are described below.
(First Modification)
A first modification of the watching determination processing will be described with reference to FIG. 7, FIG. 8A, and FIG. 8B. FIG. 7 is a flowchart of the watching determination processing executed by the watching determination unit 112, and FIGS. 8A and 8B are examples of captured images acquired by the image acquisition unit 110.
First, in step S70, the watching determination unit 112 determines whether two or more heads have been detected in the watching determination area A0. If two or more heads have been detected in the watching determination area A0, the processing proceeds to step S71; if only one head has been detected, the processing proceeds to step S75.
In step S71, the watching determination unit 112 uses the captured image acquired by the image acquisition unit 110 to detect faces in the watching determination area A0 of that image. Basically, a face is detected in an area where a head has been detected. However, in an area showing, for example, the back of the head, no face appears, so the head is detected but no face is detected.
Next, in step S72, the watching determination unit 112 determines whether one or more faces were detected in step S71. If one or more faces were detected, the processing proceeds to step S73; if no face was detected, the processing proceeds to step S75.
In step S73, the watching determination unit 112 determines, based on the detection results of steps S41 and S72, whether a first person present in the bed 20 and a second person facing the first person are present in the watching determination area A0. Since the first person can be considered to be watched by the second person, the first person is hereinafter referred to as the "watched person" and the second person as the "watcher". If both a watched person and a watcher are present, the processing proceeds to step S74; otherwise, the processing proceeds to step S75.
In step S74, the watching determination unit 112 determines that the target person 21 is being watched by another person ("watched" determination).
In step S75, the watching determination unit 112 determines that the target person 21 is not being watched by another person ("not watched" determination).
Here, consider the case where the captured image 80 of FIG. 8A is acquired in step S40. In this case, the head 81 of the target person 21 and the head 82 of another person who is not watching the target person 21 are detected in step S41 as heads within the watching determination area A0. Then, in step S71, the face 83 of the other person is detected.
Thereafter, the following processing is performed in step S73. First, the watching determination unit 112 determines the head 81 detected within the bed 20 to be the head of the watched person. Next, the watching determination unit 112 calculates the vector 84 representing the direction of the face 83 and the vector 85 pointing from the face 83 (head 82) toward the head 81. The watching determination unit 112 then determines whether the angle θ1 (≤ 180 degrees) between the vectors 84 and 85 is equal to or less than a threshold. The threshold is an angle for determining whether the person is facing the watched person, and is, for example, 45 degrees.
Since the angle θ1 is larger than the threshold (45 degrees), the watching determination unit 112 determines that the person with the face 83 is not a watcher but a person not facing the watched person. That is, the watching determination unit 112 determines that a watched person is present in the watching determination area A0 but that no watcher is present there. Therefore, it is determined that the target person 21 is not being watched by another person (step S75), and a notification is issued according to the state or action of the target person 21 (steps S44 and S45).
Next, consider the case where the captured image 86 of FIG. 8B is acquired in step S40. In this case, the head 81 of the target person 21 and the head 87 of another person watching the target person 21 are detected in step S41 as heads within the watching determination area A0. Then, in step S71, the face 88 of the other person is detected.
Thereafter, the following processing is performed in step S73. First, the watching determination unit 112 determines the head 81 detected within the bed 20 to be the head of the watched person. Next, the watching determination unit 112 calculates the vector 89 representing the direction of the face 88 and the vector 90 pointing from the face 88 (head 87) toward the head 81. The watching determination unit 112 then determines whether the angle θ2 (≤ 180 degrees) between the vectors 89 and 90 is equal to or less than the threshold.
Since the angle θ2 is equal to or less than the threshold (45 degrees), the watching determination unit 112 determines that the person with the face 88 is a watcher. That is, the watching determination unit 112 determines that both a watched person and a watcher are present in the watching determination area A0. Therefore, it is determined that the target person 21 is being watched by another person (step S74), and the notification of step S45 is omitted.
As described above, according to the first modification, it is determined that the target person is being watched by another person only when a watched person (first person) present in the bed and a watcher (second person) facing the watched person are both present in the watching determination area. A watched person present in the bed is highly likely to be the target person, and a watcher facing the target person (the watched person) is highly likely to be a person watching over the target person. Even if another person is near the target person, it is hard to say that that person is watching over the target person if he or she is not facing the target person. With this configuration, therefore, whether the target person is being watched by another person can be determined with still higher accuracy.
Also, according to the first modification, a person for whom the angle between the face-direction vector and the vector pointing toward the watched person is equal to or less than the threshold is determined to be the watcher. Such a person is highly likely to be facing the watched person, so this configuration makes it possible to detect the watcher with high accuracy.
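The angle test of step S73 can be implemented with a dot product. A sketch under the assumption that the face direction, face position, and watched person's head position are given as 2D image-plane coordinates:

```python
import math

def is_facing(face_dir, face_pos, watched_pos, threshold_deg: float = 45.0) -> bool:
    """True if the angle between the face-direction vector and the vector
    from the face toward the watched person's head is at most the threshold."""
    to_watched = (watched_pos[0] - face_pos[0], watched_pos[1] - face_pos[1])
    dot = face_dir[0] * to_watched[0] + face_dir[1] * to_watched[1]
    norm = math.hypot(face_dir[0], face_dir[1]) * math.hypot(to_watched[0], to_watched[1])
    if norm == 0.0:
        return False  # degenerate input: no usable direction
    cos_theta = max(-1.0, min(1.0, dot / norm))  # clamp against rounding error
    return math.degrees(math.acos(cos_theta)) <= threshold_deg
```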
(Second Modification)
A second modification of the watching determination processing will be described with reference to FIG. 9. FIG. 9 is a flowchart of the watching determination processing executed by the watching determination unit 112.
First, in step S90, the watching determination unit 112 determines whether two or more heads have been detected in the watching determination area A0. If two or more heads have been detected in the watching determination area A0, the processing proceeds to step S91; if only one head has been detected, the processing proceeds to step S94.
In step S91, the watching determination unit 112 uses the captured image acquired by the image acquisition unit 110 to detect, in the watching determination area A0 of that image, a person performing a watched action and a person performing a watching action. Watched actions are actions such as being assisted in getting up or going to bed, being assisted with a meal, or being assisted with changing clothes; watching actions are the corresponding actions of assisting with getting up or going to bed, with a meal, or with changing clothes. The person performing a watched action and the person performing a watching action are detected by, for example, an object detection algorithm using a classifier based on HoG features or a method based on Faster R-CNN.
Next, in step S92, the watching determination unit 112 determines whether both a person performing a watched action and a person performing a watching action were detected in step S91. If both were detected, the processing proceeds to step S93; otherwise, the processing proceeds to step S94.
In step S93, the watching determination unit 112 determines that the target person 21 is being watched by another person ("watched" determination).
In step S94, the watching determination unit 112 determines that the target person 21 is not being watched by another person ("not watched" determination).
As described above, according to the second modification, it is determined that the target person is being watched by another person only when a person performing a watched action and a person performing a watching action are both present in the watching determination area. A person performing a watched action is highly likely to be a target person being watched by another person, and a person performing a watching action is highly likely to be another person watching over the target person. Even if another person is near the target person, it is hard to say that that person is watching over the target person if he or she is not performing a watching action. With this configuration, therefore, whether the target person is being watched by another person can be determined with still higher accuracy.
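As a sketch of the second modification, assume a hypothetical action detector that returns labeled detections (for example, from a HoG-based classifier or a Faster R-CNN model); the label vocabulary below is an illustrative assumption, not part of the embodiment.

```python
# Illustrative label sets; the actual class vocabulary of the detector is
# an assumption, not specified by the embodiment.
WATCHED_ACTIONS = {"assisted_rising", "assisted_meal", "assisted_changing"}
WATCHING_ACTIONS = {"assisting_rising", "assisting_meal", "assisting_changing"}

def is_watched_by_actions(detections, a0: Rect) -> bool:
    """'Watched' determination of the second modification (steps S90-S94).

    `detections` is assumed to be a list of (label, (x, y)) pairs produced
    by an action detector; both a watched action and a watching action
    must be found inside the watching determination area A0.
    """
    labels_in_a0 = {label for label, (x, y) in detections if a0.contains(x, y)}
    return bool(labels_in_a0 & WATCHED_ACTIONS) and bool(labels_in_a0 & WATCHING_ACTIONS)
```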
1: watching support system  10: imaging device  11: information processing device
110: image acquisition unit  111: detection unit  112: watching determination unit  113: action determination unit  114: output unit  115: area setting unit  116: storage unit
100: infrared LED illumination  101: near-infrared camera  20: bed  21: target person  22: head
30: bed area  A0: watching determination area  A1: in-bed area (action determination area)  A2: wake-up area (action determination area)  A3: bed-leaving area (action determination area)
60: captured image  61: head  62: captured image  63: head
80: captured image  81: head  82: head  83: face  84: vector  85: vector  86: captured image  87: head  88: face  89: vector  90: vector  θ1: angle  θ2: angle

Claims (7)

  1.  A watching support system for supporting the watching over of a target person on a bed, the system comprising:
     an image acquisition unit that acquires an image captured by an imaging device;
     a watching determination unit that determines whether the target person is being watched by another person, based on an image of a determination area that is a partial area of the captured image and is an area including the bed;
     a state determination unit that determines a state of the target person based on the captured image; and
     an output unit that issues a notification according to a determination result of the state determination unit,
     wherein the output unit omits the notification when the watching determination unit determines that the target person is being watched by the other person.
  2.  The watching support system according to claim 1, wherein the watching determination unit determines that the target person is being watched by the other person when two or more persons are present in the determination area.
  3.  The watching support system according to claim 1, wherein the watching determination unit determines that the target person is being watched by the other person when a first person present in the bed and a second person facing the first person are present in the determination area.
  4.  The watching support system according to claim 3, wherein the watching determination unit determines a person for whom an angle between a face-direction vector and a vector directed toward the first person is equal to or less than a threshold to be the second person facing the first person.
  5.  The watching support system according to claim 1, wherein the watching determination unit determines that the target person is being watched by the other person when a person performing a watched action and a person performing a watching action are present in the determination area.
  6.  A control method of a watching support system for supporting the watching over of a target person on a bed, the method comprising the steps of:
     acquiring an image captured by an imaging device;
     determining whether the target person is being watched by another person, based on an image of a determination area that is a partial area of the captured image and is an area including the bed;
     determining a state of the target person based on the captured image; and
     issuing a notification according to a determination result of the state of the target person,
     wherein the notification is omitted when it is determined that the target person is being watched by the other person.
  7.  A program for causing a computer to execute each step of the control method of the watching support system according to claim 6.
PCT/JP2018/025596 2017-07-14 2018-07-05 Monitoring assistance system and control method thereof WO2019013105A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-138539 2017-07-14
JP2017138539A JP6870514B2 (en) 2017-07-14 2017-07-14 Watching support system and its control method

Publications (1)

Publication Number Publication Date
WO2019013105A1 true WO2019013105A1 (en) 2019-01-17

Family

ID=65002445

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/025596 WO2019013105A1 (en) 2017-07-14 2018-07-05 Monitoring assistance system and control method thereof

Country Status (2)

Country Link
JP (1) JP6870514B2 (en)
WO (1) WO2019013105A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022215158A1 (en) * 2021-04-06 2022-10-13 三菱電機株式会社 Notification control device and notification control method


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015220596A (en) * 2014-05-16 2015-12-07 株式会社ニコン Electronic apparatus
JP2016177680A (en) * 2015-03-20 2016-10-06 株式会社リコー Information processing system, information processing terminal, information processing method, and program
CN107735813A (en) * 2015-06-10 2018-02-23 柯尼卡美能达株式会社 Image processing system, image processing apparatus, image processing method and image processing program
JP6503262B2 (en) * 2015-08-19 2019-04-17 アイホン株式会社 Motion recognition device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013149156A (en) * 2012-01-20 2013-08-01 Fujitsu Ltd State detection device and state detection method
JP2015139550A (en) * 2014-01-29 2015-08-03 シャープ株式会社 Bed-leaving determination device and bed-leaving determination method
WO2015133195A1 (en) * 2014-03-06 2015-09-11 Nkワークス株式会社 Information processing device, information processing method, and program
WO2016071314A1 (en) * 2014-11-03 2016-05-12 Koninklijke Philips N.V. Device, system and method for automated detection of orientation and/or location of a person

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766185A (en) * 2021-01-22 2021-05-07 燕山大学 Head posture monitoring method, device and system based on deep learning
CN112766185B (en) * 2021-01-22 2022-06-14 燕山大学 Head posture monitoring method, device and system based on deep learning

Also Published As

Publication number Publication date
JP2019021002A (en) 2019-02-07
JP6870514B2 (en) 2021-05-12

Similar Documents

Publication Publication Date Title
JP6717235B2 (en) Monitoring support system and control method thereof
US9600993B2 (en) Method and system for behavior detection
JP6137425B2 (en) Image processing system, image processing apparatus, image processing method, and image processing program
JP6167563B2 (en) Information processing apparatus, information processing method, and program
WO2015133195A1 (en) Information processing device, information processing method, and program
US9295390B2 (en) Facial recognition based monitoring systems and methods
WO2015118953A1 (en) Information processing device, information processing method, and program
JP6822328B2 (en) Watching support system and its control method
TW201943260A (en) Method and computing device for monitoring object
JP6119938B2 (en) Image processing system, image processing apparatus, image processing method, and image processing program
WO2019013105A1 (en) Monitoring assistance system and control method thereof
JPWO2018047795A1 (en) Watch system, watch device, watch method, and watch program
WO2019009377A1 (en) Watching support system and control method thereof
JP6822326B2 (en) Watching support system and its control method
US10762761B2 (en) Monitoring assistance system, control method thereof, and program
JP2017041079A (en) Operation recognition device
JP2023548886A (en) Apparatus and method for controlling a camera
TWI697869B (en) Posture determination method, electronic system and non-transitory computer-readable recording medium
JP6729512B2 (en) Monitoring support system and control method thereof
US20220054046A1 (en) Assessing patient out-of-bed and out-of-chair activities using embedded infrared thermal cameras
JP6635074B2 (en) Watching support system and control method thereof
JP2023051147A (en) Nurse call system and state determination system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18831027

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18831027

Country of ref document: EP

Kind code of ref document: A1