WO2016199495A1 - Behavior detection device, behavior detection method and program, and monitored person monitoring device - Google Patents


Info

Publication number
WO2016199495A1
Authority
WO
WIPO (PCT)
Prior art keywords
behavior detection
monitored person
image
behavior
camera
Prior art date
Application number
PCT/JP2016/062046
Other languages
English (en)
Japanese (ja)
Inventor
敏行 山下
山下 雅宣
Original Assignee
Konica Minolta, Inc. (コニカミノルタ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konica Minolta, Inc.
Priority to JP2016575258A (patent JP6115692B1)
Publication of WO2016199495A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00: Burglar, theft or intruder alarms
    • G08B 13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189: Actuation using passive radiation detection systems
    • G08B 13/194: Actuation using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196: Actuation using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a behavior detection device, a behavior detection method, and a behavior detection program for detecting a predetermined behavior of a monitored person to be monitored, and to a monitored person monitoring apparatus using the behavior detection device.
  • Japan is an aging society; more specifically, owing to the rise in living standards accompanying the post-war period of high economic growth and to improvements in the sanitary environment and in medical standards, it has become a super-aging society whose aging rate, the ratio of the population aged 65 or over to the total population, exceeds 21%.
  • At one point, the total population was about 126.5 million, of which the population aged 65 or over was about 25.56 million; the total population is projected to fall to about 124.11 million, while the elderly population is projected to rise to about 34.56 million.
  • In such an aging society, the number of people who need nursing or care because of illness, injury, old age, and the like is expected to be greater than in an ordinary society that is not aging.
  • The human body detection device disclosed in Patent Document 1 includes an imaging camera that images the floor surface of a monitored person's room, a detection area determination unit that divides the imaging range of the imaging camera into a plurality of areas and detects the presence and movement of the monitored person in each area, and a determination processing unit that, based on the detection results of the detection area determination unit regarding the presence and movement of the monitored person, reports to the monitoring side that an abnormality has occurred when it detects that the monitored person is in a predetermined state in a predetermined area or across a predetermined area and its adjacent area.
  • The detection area determination unit sets, as the plurality of areas, a bed area where the bed on the floor surface of the monitored person's room is located and a floor area that covers at least the floor surface of the room excluding the bed area, and the determination processing unit reports an abnormality when it detects that the monitored person has moved from the bed area to the floor area.
  • The image processing apparatus disclosed in Patent Document 2 includes an imaging camera that captures images of the room from obliquely above toward the indoor floor, and an image processing unit that determines, based on the image information captured by the imaging camera, whether a person is getting out of or into bed. The image processing unit can distinguish, within the image information, the bed area occupied by the bed installed on the indoor floor from the floor area other than the area where the bed is placed. When it detects a movement of the human body from the floor area into the bed area across the boundary side representing the outer lower edge of the bed, it determines that the person has entered the bed; when it detects a movement of the human body from the bed area to the floor area across that boundary side, it determines that the person has gotten out of bed.
  • That is, the human body detection device disclosed in Patent Document 1 determines getting out of bed on the premise that the imaging camera is installed at a substantially central position of the ceiling so that the entire floor surface of the room can be imaged.
  • Likewise, the image processing apparatus disclosed in Patent Document 2 determines entering and leaving the bed on the premise that the imaging camera is installed so that it can photograph the room from obliquely above toward the floor.
  • In practice, however, rooms vary in size and shape, and the camera cannot always be arranged at the approximate center of the ceiling or at a position from which it can image the room from obliquely above toward the floor. When this premise is not met, the accuracy of the bed-leaving determination of the human body detection device disclosed in Patent Document 1 and of the bed-entering and bed-leaving determinations of the image processing apparatus disclosed in Patent Document 2 declines.
  • The present invention has been made in view of the above circumstances, and its object is to provide a behavior detection device, a behavior detection method, and a behavior detection program that can detect a predetermined behavior of a monitored person with higher accuracy regardless of the position where the camera is disposed, and a monitored person monitoring apparatus using the behavior detection device.
  • In the behavior detection device, behavior detection method, behavior detection program, and monitored person monitoring apparatus, an image of the monitored person, the monitoring target, captured from above is acquired, and a behavior detection algorithm is selected from a plurality of mutually different behavior detection algorithms for detecting the predetermined behavior of the monitored person, based on the positional relationship between the camera that captured the image and a setting area that is a predetermined region in the image; the predetermined behavior of the monitored person is then detected based on the image by the selected behavior detection algorithm. Therefore, the behavior detection device, behavior detection method, behavior detection program, and monitored person monitoring apparatus according to the present invention can detect the predetermined behavior of the monitored person with higher accuracy regardless of the installation position of the camera.
  • The monitored person monitoring apparatus includes a behavior detection unit that detects a predetermined behavior, set in advance, of the monitored person to be monitored, and a notification unit that notifies the outside of the predetermined behavior detected by the behavior detection unit.
  • The behavior detection unit detects the predetermined behavior of the monitored person to be monitored.
  • Such a monitored person monitoring apparatus may be realized as a single integrally configured device, or may be realized as a system of a plurality of devices. When the monitored person monitoring apparatus is realized by a plurality of devices, the behavior detection unit may be mounted on any of the plurality of devices.
  • In the following, the monitored person monitoring apparatus is described for the case where it is realized as a system of a plurality of devices. Note that even when the monitored person monitoring apparatus is configured integrally as a single device, it can be configured in the same manner as in the following description.
  • The case where the behavior detection unit is mounted, together with the notification unit, on a sensor device SU described later is described here; however, even when it is mounted on another device in the system, for example the management server device SV, the fixed terminal device SP, or the portable terminal device TA described later, the monitored person monitoring apparatus can be configured in the same manner as in the following description.
  • FIG. 1 is a diagram illustrating a configuration of a monitored person monitoring system according to the embodiment.
  • FIG. 2 is a diagram illustrating a configuration of a sensor device in the monitored person monitoring system according to the embodiment.
  • FIG. 3 is a diagram for explaining setting areas and behavior detection lines used in the sensor device.
  • FIG. 3 is also an example of an input screen for inputting these setting areas and action detection lines.
  • FIG. 4 is a diagram for explaining the positional relationship of the camera with respect to the setting area.
  • FIG. 5 is a diagram for explaining a method for obtaining the positional relationship of the camera used in the sensor device.
  • FIG. 5A shows a case where the image center is located in front of the behavior detection line of the setting area (bedding area), and FIG. 5B shows a case where the image center is located within the setting area (bedding area).
  • FIG. 6 is a flowchart of a main routine showing an action detection algorithm for Xn used in the sensor device.
  • Xn denotes the positional relationship of the camera; in the present embodiment, it is one of directly above, front, back, side, diagonally front, and diagonally back.
  • FIG. 7 is a flowchart of a subroutine showing the wake-up determination algorithm for Xn in the behavior detection algorithm for Xn.
  • FIG. 8 is a flowchart of a subroutine showing a bed leaving determination algorithm for Xn in the behavior detection algorithm for Xn.
  • FIG. 9 is a diagram for explaining the wake-up determination method and the bed-leaving determination method used in the directly-above behavior detection algorithm, which is used when the positional relationship of the camera is directly above.
  • The left side of FIG. 9 shows an example of an image acquired by the camera, and the right side of FIG. 9 shows the human body region extracted from the image shown on the left side of FIG. 9.
  • FIG. 10 is a diagram for explaining the wake-up determination method and the bed-leaving determination method used in the front behavior detection algorithm, which is used when the positional relationship of the camera is the front.
  • FIG. 11 is a diagram for explaining the wake-up determination method and the bed-leaving determination method used in the side behavior detection algorithm, which is used when the positional relationship of the camera is the side.
  • FIG. 12 is a diagram for explaining the wake-up determination method and the bed-leaving determination method used in the back behavior detection algorithm, which is used when the positional relationship of the camera is the back.
  • FIG. 13 is a diagram for explaining the wake-up determination method and the bed-leaving determination method used in the diagonally-front behavior detection algorithm, which is used when the positional relationship of the camera is diagonally front.
  • FIG. 14 is a diagram for explaining the wake-up determination method and the bed-leaving determination method used in the diagonally-back behavior detection algorithm, which is used when the positional relationship of the camera is diagonally back. In FIGS. 10 to 14, the left side of each figure shows the state before wake-up, the center shows the state after wake-up, and the right side shows the state after bed-leaving. In FIGS. 11 to 14, the upper part shows an example of an image acquired by the camera, and the lower part shows the human body region extracted from the image shown in the upper part.
  • The monitored person monitoring system MS, an example of realizing the monitored person monitoring apparatus as a system, monitors the monitored persons (watched persons) Ob (Ob-1 to Ob-4) by detecting a predetermined behavior set in advance in each monitored person Ob.
  • the network NW may be provided with relays such as repeaters, bridges, routers, and cross-connects that relay communication signals.
  • The plurality of sensor devices SU-1 to SU-4, the management server device SV, the fixed terminal device SP, and the plurality of portable terminal devices TA-1 and TA-2 are communicably connected to one another by a wireless LAN including an access point AP (for example, a LAN according to the IEEE 802.11 standard) as the network NW.
  • the monitored person monitoring system MS is arranged at an appropriate place according to the monitored person Ob.
  • the monitored person (person to be watched) Ob is, for example, a person who needs nursing due to illness or injury, a person who needs care due to a decrease in physical ability, a single person living alone, or the like.
  • The monitored person Ob is preferably a person for whom detection is needed when a predetermined inconvenient event, such as an abnormal state, occurs in that person.
  • the monitored person monitoring system MS is suitably arranged in a building such as a hospital, a welfare facility for the elderly, and a dwelling unit according to the type of the monitored person Ob.
  • the monitored person monitoring system MS is disposed in a building of a care facility that includes a plurality of rooms RM in which a plurality of monitored persons Ob live and a plurality of rooms such as a nurse station.
  • the sensor device SU is a device that has a communication function of communicating with other devices SV, SP, and TA via the network NW, detects the monitored person Ob, and transmits the detection result to the management server device SV. This sensor device SU will be described in detail later.
  • the management server device SV has a communication function for communicating with other devices SU, SP, TA via the network NW, and receives a detection result regarding the monitored person Ob and an image of the monitored person Ob from the sensor device SU.
  • This is a device that manages information (monitoring information) related to monitoring of the monitored person Ob.
  • In the present embodiment, the management server device SV stores (records) the monitoring information related to the monitoring of the monitored person Ob, and transmits a communication signal (monitoring information communication signal) containing that monitoring information to the fixed terminal device SP and the portable terminal device TA.
  • the management server device SV provides the client with data corresponding to the request of the client (in this embodiment, the fixed terminal device SP and the portable terminal device TA).
  • a management server device SV can be configured by, for example, a computer with a communication function.
  • The fixed terminal device SP has a communication function for communicating with the other devices SU, SV, TA via the network NW, a display function for displaying predetermined information, and an input function for inputting predetermined instructions and data. It is a device that functions as the user interface (UI) of the monitored person monitoring system MS by inputting predetermined instructions and data to be given to the management server device SV and the portable terminal device TA and by displaying the detection results and images obtained by the sensor devices SU.
  • Such a fixed terminal device SP can be configured by, for example, a computer with a communication function.
  • The portable terminal device TA has a communication function for communicating with the other devices SV, SP, SU via the network NW, a display function for displaying predetermined information, an input function for inputting predetermined instructions and data, and a call function for making voice calls. It is a device that inputs predetermined instructions and data to be given to the management server device SV and the sensor devices SU, displays the detection results and images obtained by the sensor devices SU upon notification from the management server device SV, and receives and displays the monitoring information related to the monitoring of the monitored person Ob.
  • a portable terminal device TA can be configured by a portable communication terminal device such as a so-called tablet computer, a smartphone, or a mobile phone.
  • the sensor device SU includes a camera 1, a sound input / output unit 2, a control processing unit 3, a communication interface unit (communication IF unit) 4, and a storage unit 5.
  • the camera 1 is an apparatus that is connected to the control processing unit 3 and generates an image (image data) under the control of the control processing unit 3.
  • The camera 1 is arranged above the space where the monitored person Ob, the monitoring target, is located (the location space; in the example shown in FIG. 1, the room RM of the installation location), images the location space from above as the imaging target, generates an image (image data) overlooking the imaging target, and outputs the image to the control processing unit 3.
  • the camera 1 generates a still image and a moving image.
  • The camera 1 is preferably arranged so that it can image the imaging target from directly above a preset head position of the monitored person Ob lying in the bedding BT such as a bed (usually the position where the pillow is placed); in reality, however, it is installed at an appropriate position, for example on the ceiling or an upper part of a wall, depending on the size and shape of the room.
  • the camera 1 is an example of an image acquisition unit that acquires an image of the monitored person Ob that is a monitoring target, taken from above the monitored person Ob.
  • Such a camera 1 may be a device that generates an image from visible light, but in the present embodiment it is a device that generates an image from infrared light so that the monitored person Ob can be monitored even in a relatively dark place.
  • In the present embodiment, such a camera 1 is, for example, a digital infrared camera including an imaging optical system that forms an infrared optical image of the imaging target on a predetermined imaging surface, an image sensor whose light receiving surface coincides with the imaging surface and which converts the infrared optical image of the imaging target into an electrical signal, and an image processing unit that generates image data representing an infrared image of the imaging target by performing image processing on the output of the image sensor.
  • the imaging optical system of the camera 1 is preferably a wide-angle optical system (so-called wide-angle lens (including a fisheye lens)) having an angle of view that can capture the entire room RM in which the camera 1 is disposed.
  • the camera 1 may be an infrared thermography device.
  • the sensor device SU may further include a Doppler sensor in order to detect abnormal micromotion as one of the predetermined actions in the monitored person.
  • This Doppler sensor is a body motion sensor that transmits a transmission wave, receives a reflection wave of the transmission wave reflected by an object, and outputs a Doppler signal having a Doppler frequency component based on the transmission wave and the reflection wave. .
  • Due to the so-called Doppler effect, the frequency of the reflected wave shifts in proportion to the moving speed of the object, so a difference (Doppler frequency component) arises between the frequency of the transmitted wave and the frequency of the reflected wave.
  • the Doppler sensor 11 generates a signal of the Doppler frequency component as a Doppler signal and outputs it to the control processing unit 3.
  • the transmission wave may be an ultrasonic wave, a microwave, or the like, but is a microwave in the present embodiment. Since the microwave can be transmitted through the clothing and reflected from the body surface of the monitored person Ob, the movement of the body surface can be detected even when the monitored person Ob is wearing clothes.
  • The control processing unit 3 obtains, in the behavior detection processing unit 36 described later, the chest movement (the up-and-down movement of the chest) accompanying the breathing motion of the monitored person Ob based on the Doppler signal of the Doppler sensor, and determines an abnormal micro body motion when the detected amplitude of the chest body motion falls below a preset threshold value.
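  • As a concrete illustration of the chest-movement check just described, the following is a minimal sketch; it assumes the Doppler signal arrives as a sampled 1-D array, that breathing lies roughly in the 0.1 to 0.5 Hz band, and that the amplitude is read off that band's spectrum, none of which is specified in the publication.

```python
import numpy as np

def chest_motion_abnormal(doppler, fs, amplitude_threshold, band=(0.1, 0.5)):
    """Return True when the breathing-band amplitude of the Doppler
    signal falls below the preset threshold (abnormal micro body motion).

    doppler: 1-D array of Doppler-signal samples; fs: sample rate in Hz.
    The breathing band and the spectral amplitude measure are assumptions.
    """
    windowed = doppler * np.hanning(len(doppler))        # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed)) / len(doppler)
    freqs = np.fft.rfftfreq(len(doppler), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    amplitude = spectrum[in_band].max() if in_band.any() else 0.0
    return amplitude < amplitude_threshold
```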
  • The sound input/output unit 2 is connected to the control processing unit 3; it is a circuit for acquiring external sound and inputting it into the sensor device SU, and for generating and outputting sound corresponding to an electrical signal representing the sound under the control of the control processing unit 3.
  • the sound input / output unit 2 includes, for example, a microphone that converts sound acoustic vibrations into electrical signals, and a speaker that converts sound electrical signals into sound acoustic vibrations.
  • the sound input / output unit 2 outputs an electric signal representing an external sound to the control processing unit 3, and converts the electric signal input from the control processing unit 3 into an acoustic vibration of the sound and outputs the sound.
  • the communication IF unit 4 is a communication circuit that is connected to the control processing unit 3 and performs communication according to the control of the control processing unit 3.
  • the communication IF unit 4 generates a communication signal containing data to be transferred input from the control processing unit 3 in accordance with a communication protocol used in the network NW of the monitored person monitoring system MS, and generates the generated communication signal. It transmits to other devices SV, SP, TA via the network NW.
  • the communication IF unit 4 receives a communication signal from another device SV, SP, TA via the network NW, extracts data from the received communication signal, and a format in which the control processing unit 3 can process the extracted data. And output to the control processing unit 3.
  • The communication IF unit 4 may further include an interface circuit that inputs and outputs data to and from external devices using, for example, standards such as the Bluetooth (registered trademark) standard, the IrDA (Infrared Data Association) standard, and the USB (Universal Serial Bus) standard.
  • the storage unit 5 is a circuit that is connected to the control processing unit 3 and stores various predetermined programs and various predetermined data under the control of the control processing unit 3.
  • the various predetermined programs include, for example, a control processing program such as a monitoring processing program for executing information processing related to monitoring of the monitored person Ob.
  • The monitoring processing program includes: an area setting processing program for setting a predetermined region in the image as the setting area; a behavior detection line setting processing program for setting a predetermined line in the image as the behavior detection line, which is used by at least one of a plurality of behavior detection algorithms stored in advance for detecting the predetermined behavior of the monitored person Ob; a positional relationship calculation program for obtaining the positional relationship of the camera 1 with respect to the setting area; an algorithm selection program for selecting, from among the plurality of behavior detection algorithms, a behavior detection algorithm based on the positional relationship obtained by the positional relationship calculation program; a behavior detection processing program for detecting the predetermined behavior of the monitored person Ob based on the image acquired by the camera 1 using the behavior detection algorithm selected by the algorithm selection program; a notification processing program for notifying the outside of the predetermined behavior detected by the behavior detection processing program; a streaming processing program for streaming video to the fixed terminal device SP or the portable terminal device TA that requests it; and a nurse call processing program for making voice calls with the fixed terminal device SP and the portable terminal device TA using the sound input/output unit 2, and the like.
  • Examples of the various predetermined data include data necessary for executing the above programs, such as the setting area AR and the behavior detection line AL.
  • To store these, the storage unit 5 functionally includes a setting area storage unit 51, a behavior detection line storage unit 52, and a behavior detection algorithm storage unit 53.
  • the setting area storage unit 51 stores a predetermined area in the image as a setting area.
  • the setting area is an area used for at least one of the plurality of action detection algorithms to detect a predetermined action in the monitored person Ob.
  • In the present embodiment, the predetermined behavior includes the monitored person Ob getting up and getting out of bed, and the setting area AR is, for example, the area surrounded by the four points PT1 to PT4 shown in FIG. 3, that is, the area of the bedding BT such as a bed in the image obtained by imaging the location space with the camera 1.
  • the action detection line storage unit 52 stores a predetermined line in the image as an action detection line.
  • the action detection line is a line used for at least one of the plurality of action detection algorithms in order to detect a predetermined action in the monitored person Ob.
  • In the present embodiment, the behavior detection line AL is, for example, the line segment connecting the two points PT2 and PT3 shown in FIG. 3, that is, one or more boundary lines of the setting area AR set as the area of the bedding BT through which the monitored person enters and leaves the bedding BT.
  • the behavior detection algorithm storage unit 53 stores a plurality of different behavior detection algorithms in association with the positional relationship of the camera 1 that captured the image with respect to the setting area AR.
  • In the present embodiment, the positional relationship of the camera 1 with respect to the setting area AR is classified, with reference to the setting area AR, into: directly above PO1, where the camera 1 is located directly above the setting area AR; front PO2, where the camera 1 is located in front of and above the setting area AR; back PO3, where the camera 1 is located behind and above the setting area AR; side PO4, where the camera 1 is located to the side of and above the setting area AR; diagonally front PO5, where the camera 1 is located diagonally in front of and above the setting area AR; and diagonally back PO6, where the camera 1 is located diagonally behind and above the setting area AR. How these six positional relationships, directly above PO1, front PO2, back PO3, side PO4, diagonally front PO5, and diagonally back PO6, are basically obtained is described later.
  • A plurality of mutually different behavior detection algorithms are prepared in advance, one for each of these six positional relationships. That is, the plurality of behavior detection algorithms include: the directly-above behavior detection algorithm used when the positional relationship of the camera 1 is directly above PO1; the front behavior detection algorithm used when it is the front PO2; the back behavior detection algorithm used when it is the back PO3; the side behavior detection algorithm used when it is the side PO4; the diagonally-front behavior detection algorithm used when it is diagonally front PO5; and the diagonally-back behavior detection algorithm used when it is diagonally back PO6.
  • The directly-above behavior detection algorithm detects the predetermined behavior of the monitored person Ob based on the first area S1, the area of the monitored person Ob outside the setting area AR on the image, and the second area S2, the area where the setting area AR and the monitored person Ob overlap on the image.
  • The front behavior detection algorithm detects the predetermined behavior of the monitored person Ob based on the distance Wa between the behavior detection line AL on the image and the toe position FP of the monitored person Ob. This distance Wa corresponds to the length by which the monitored person Ob protrudes beyond the behavior detection line AL.
  • The side behavior detection algorithm detects the predetermined behavior of the monitored person Ob based on the third area S3, the area where the setting area AR and the monitored person Ob overlap between the first boundary line BL1, located closest to the camera 1 along the lateral direction of the setting area AR on the image, and the second boundary line BL2, located farthest from the camera 1 along that lateral direction.
  • The back behavior detection algorithm detects the predetermined behavior of the monitored person Ob based on the second area S2, the area where the setting area AR and the monitored person Ob overlap on the image.
  • The diagonally-front behavior detection algorithm detects the predetermined behavior of the monitored person Ob based on the distance Wa between the behavior detection line AL on the image and the toe position FP of the monitored person Ob, and on the third area S3, the area where the setting area AR and the monitored person Ob overlap between the first boundary line BL1, located closest to the camera 1 along the lateral direction of the setting area AR on the image, and the second boundary line BL2, located farthest from the camera 1 along that lateral direction.
  • The diagonally-back behavior detection algorithm detects the predetermined behavior of the monitored person Ob based on the third area S3, the area where the setting area AR and the monitored person Ob overlap between the first boundary line BL1, located closest to the camera 1 along the lateral direction of the setting area AR on the image, and the second boundary line BL2, located farthest from the camera 1 along that lateral direction.
  • In the behavior detection algorithm for Xn, a human body region is first extracted from the image (S1n), then the wake-up determination process for Xn is executed (S2n), and then the bed-leaving determination process for Xn is executed (S3n).
  • In the human body region extraction process S1n, for example, a background image is obtained and stored in advance in the storage unit 5 as one of the various predetermined data, the presence or absence of a moving object is determined from the difference image between the image generated by the camera 1 and the background image, and if a moving object is present, its region is taken as the moving object region (human body region). Alternatively, the presence or absence of a moving object may be determined from the difference image between the image of the current frame generated by the camera 1 and the image of a past frame (for example, the immediately preceding frame), and if a moving object is present, its region is taken as the moving object region.
  • The bed-leaving determination process S3n for Xn may instead be executed before the wake-up determination process S2n for Xn.
  • In the wake-up determination process S2n for Xn, it is first determined whether the current state of the monitored person Ob regarding wake-up is before wake-up (S21n).
  • The current state of the monitored person Ob regarding wake-up is stored in the storage unit 5 by a flag (wake-up state flag) indicating the state related to wake-up: when the monitored person Ob is determined to be before wake-up, the wake-up state flag is "0", and when the monitored person Ob is determined to be after wake-up, the wake-up state flag is "1".
  • When the state is before wake-up, it is determined in the process S22n whether the wake-up determination condition for Xn is satisfied; this process S22n is described later for each Xn. When the wake-up determination condition for Xn is satisfied (Yes), the wake-up is notified to the notification processing unit 37 (S23n), and the current state of the monitored person Ob regarding wake-up is stored in the storage unit 5 as after wake-up.
  • In the bed-leaving determination process S3n for Xn, it is first determined whether the current state of the monitored person Ob regarding bed-leaving is before bed-leaving (S31n).
  • The current state of the monitored person Ob regarding bed-leaving is stored in the storage unit 5 by a flag (bed-leaving state flag) indicating the state related to bed-leaving: when the monitored person Ob is determined to be before bed-leaving, the bed-leaving state flag is "0", and when the monitored person Ob is determined to be after bed-leaving, the bed-leaving state flag is "1".
  • As a result of the determination in S31n, when the state is after bed-leaving (No), this bed-leaving determination process S3n for Xn ends; when it is before bed-leaving (Yes), it is determined in the process S32n whether the bed-leaving determination condition for Xn is satisfied; this process S32n is described later for each Xn. When the bed-leaving determination condition for Xn is satisfied (Yes), the bed-leaving is notified to the notification processing unit 37 (S33n), and the current state of the monitored person Ob regarding bed-leaving is stored in the storage unit 5 as after bed-leaving.
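  • The per-frame flow of the two subroutines can be summarized by the sketch below, a hedged rendering of S2n/S3n in which the wake-up and bed-leaving state flags gate the per-Xn determination conditions; passing the conditions in as callables is an illustrative simplification.

```python
class BehaviorDetector:
    """Sketch of the S2n/S3n flow: each flag flips once, and a
    notification is issued the first time its condition holds."""

    def __init__(self, wake_condition, leave_condition, notify):
        self.awake = False        # wake-up state flag ("0" = before wake-up)
        self.out_of_bed = False   # bed-leaving state flag ("0" = before)
        self.wake_condition = wake_condition
        self.leave_condition = leave_condition
        self.notify = notify

    def process_frame(self, body_mask):
        if not self.awake and self.wake_condition(body_mask):        # S21n, S22n
            self.notify("wake-up")                                   # S23n
            self.awake = True               # stored as after wake-up
        if not self.out_of_bed and self.leave_condition(body_mask):  # S31n, S32n
            self.notify("bed-leaving")                               # S33n
            self.out_of_bed = True          # stored as after bed-leaving
```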
  • The wake-up determination condition and the bed-leaving determination condition for each Xn are conditions based on one or more of: the first area S1 of the monitored person Ob outside the setting area AR on the image; the second area S2 where the setting area AR and the monitored person Ob overlap on the image; the third area S3 where the setting area AR and the monitored person Ob overlap between the first boundary line BL1, located closest to the camera 1 along the lateral direction of the setting area AR, and the second boundary line BL2, located farthest from the camera 1 along that lateral direction; and the distance Wa between the behavior detection line AL on the image and the toe position FP of the monitored person Ob.
  • In these conditions, the occurrence of getting up and then of getting out of bed is detected as the first area S1 becomes larger; as the second area S2 becomes smaller; as the third area S3 becomes smaller; and as the distance Wa becomes larger.
  • As shown in FIG. 9, the directly-above wake-up determination condition used in the directly-above behavior detection algorithm is whether the first area S1 of the monitored person Ob outside the setting area AR on the image exceeds a preset first threshold value (directly-above wake-up determination threshold value) Th1. When the first area S1 exceeds the first threshold Th1 (S1 > Th1), it is determined that the monitored person Ob has woken up, and the current state regarding wake-up of the monitored person Ob is set to after wake-up. When the first area S1 is equal to or less than the first threshold Th1 (S1 ≤ Th1), it is determined that the monitored person Ob has not woken up, and the current state regarding wake-up of the monitored person Ob remains before wake-up. The first area S1 is obtained, for example, by counting the total number of pixels of the human body region extracted in the process S1n that lie outside the setting area AR.
  • As shown in FIG. 9, the directly-above bed-leaving determination condition used in the directly-above behavior detection algorithm is whether the second area S2, where the setting area AR and the monitored person Ob overlap on the image, has fallen below a preset second threshold value (directly-above bed-leaving determination threshold value) Th2. When the second area S2 is below the second threshold Th2 (S2 < Th2), it is determined that the monitored person Ob has left the bed, and the current state regarding bed-leaving of the monitored person Ob is set to after bed-leaving. When the second area S2 is equal to or greater than the second threshold Th2 (S2 ≥ Th2), it is determined that the monitored person Ob has not left the bed, and the current state regarding bed-leaving of the monitored person Ob remains before bed-leaving. The second area S2 is obtained, for example, by counting the total number of pixels in the region where the setting area AR and the human body region extracted in the process S1n overlap.
  • the first and second threshold values Th1 and Th2 are set in advance experimentally by statistically processing a plurality of samples, for example.
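  • As an illustration, the directly-above conditions reduce to two pixel counts over the extracted masks; the sketch below assumes boolean NumPy masks for the human body region and the setting area AR.

```python
import numpy as np

def directly_above_conditions(body_mask, area_mask, th1, th2):
    """S1 = body pixels outside AR; S2 = body pixels overlapping AR."""
    s1 = np.count_nonzero(body_mask & ~area_mask)
    s2 = np.count_nonzero(body_mask & area_mask)
    woke_up = s1 > th1     # S1 > Th1: wake-up detected
    left_bed = s2 < th2    # S2 < Th2: bed-leaving detected
    return woke_up, left_bed
```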
  • As shown in FIG. 10, the front wake-up determination condition used in the front behavior detection algorithm is whether the distance Wa between the behavior detection line AL and the toe position FP of the monitored person Ob exceeds a preset third threshold value (front wake-up determination threshold value) Th3. When the distance Wa is equal to or greater than the third threshold Th3 (Wa ≥ Th3), it is determined that the monitored person Ob has woken up, and the current state regarding wake-up of the monitored person Ob is set to after wake-up (see the center of FIG. 10). When the distance Wa does not exceed the third threshold Th3, that is, when Wa < Th3, it is determined that the monitored person Ob has not woken up, and the current state regarding wake-up remains before wake-up (see the left side of FIG. 10).
  • The distance Wa is obtained, for example, by taking the position of the human body region extracted in the process S1n that is closest to the floor as the toe position FP of the monitored person Ob and counting the total number of pixels on one line along the vertical direction between the behavior detection line AL and the toe position FP.
  • As shown in FIG. 10, the front bed-leaving determination condition used in the front behavior detection algorithm is whether the distance Wa exceeds a preset fourth threshold value (front bed-leaving determination threshold value) Th4. When the distance Wa is equal to or greater than the fourth threshold Th4 (Wa ≥ Th4), it is determined that the monitored person Ob has left the bed, and the current state regarding bed-leaving of the monitored person Ob is set to after bed-leaving (see the right side of FIG. 10). When the distance Wa does not exceed the fourth threshold Th4, that is, when Wa < Th4, it is determined that the monitored person Ob has not left the bed, and the current state regarding bed-leaving remains before bed-leaving (see the center of FIG. 10).
  • the third and fourth threshold values Th3 and Th4 are set in advance experimentally by statistically processing a plurality of samples, for example.
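  • A minimal sketch of the distance Wa follows; it assumes the behavior detection line AL is a horizontal row of the image and takes the lowest body pixel as the toe position FP, simplifications not mandated by the publication.

```python
import numpy as np

def toe_distance_wa(body_mask, line_row):
    """Wa: vertical pixel count from the behavior detection line AL
    (modeled as image row `line_row`) down to the toe position FP."""
    rows = np.nonzero(body_mask.any(axis=1))[0]
    if rows.size == 0:
        return 0                      # no human body region extracted
    fp_row = rows.max()               # floor-nearest body row = FP
    return max(0, fp_row - line_row)  # pixels protruding past AL
```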
  • As shown in FIG. 11, the side wake-up determination condition used in the side behavior detection algorithm is whether the third area S3, where the setting area AR and the monitored person Ob overlap between the first boundary line BL1, located closest to the camera 1 along the lateral direction of the setting area AR on the image, and the second boundary line BL2, located farthest from the camera 1 along that lateral direction, has fallen to or below a preset fifth threshold value (side wake-up determination threshold value) Th5. When the third area S3 is equal to or less than the fifth threshold Th5 (S3 ≤ Th5), it is determined that the monitored person Ob has woken up, and the current state regarding wake-up is set to after wake-up (see the center of FIG. 11); when the third area S3 exceeds the fifth threshold Th5 (S3 > Th5), it is determined that the monitored person Ob has not woken up, and the current state regarding wake-up remains before wake-up (see the left side of FIG. 11). The third area S3 is obtained, for example, by counting the total number of pixels in the region where the human body region extracted in the process S1n overlaps the part of the setting area AR sandwiched between the first boundary line BL1 and the second boundary line BL2.
  • As shown in FIG. 11, the side bed-leaving determination condition used in the side behavior detection algorithm is whether the third area S3 has fallen to or below a preset sixth threshold value (side bed-leaving determination threshold value) Th6. When the third area S3 is equal to or less than the sixth threshold Th6 (S3 ≤ Th6), it is determined that the monitored person Ob has left the bed, and the current state regarding bed-leaving of the monitored person Ob is set to after bed-leaving (see the right side of FIG. 11). When the third area S3 exceeds the sixth threshold Th6 (S3 > Th6), it is determined that the monitored person Ob has not left the bed, and the current state regarding bed-leaving remains before bed-leaving (see the center of FIG. 11).
  • the fifth and sixth threshold values Th5 and Th6 are set in advance experimentally by statistically processing a plurality of samples, for example.
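  • The third area S3 can be sketched as the overlap count restricted to the band between the two boundary lines; the sketch below models BL1 and BL2 as image columns, which assumes a roughly axis-aligned setting area.

```python
import numpy as np

def third_area_s3(body_mask, area_mask, bl1_col, bl2_col):
    """S3: pixels where the body overlaps the setting area AR within
    the strip between boundary lines BL1 and BL2."""
    lo, hi = sorted((bl1_col, bl2_col))
    band = np.zeros_like(area_mask, dtype=bool)
    band[:, lo:hi + 1] = True          # strip between BL1 and BL2
    return np.count_nonzero(body_mask & area_mask & band)
```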
  • As shown in FIG. 12, the back wake-up determination condition used in the back behavior detection algorithm is whether the second area S2, where the setting area AR and the monitored person Ob overlap on the image, has fallen to or below a preset seventh threshold value (back wake-up determination threshold value) Th7. When the second area S2 is equal to or less than the seventh threshold Th7 (S2 ≤ Th7), it is determined that the monitored person Ob has woken up, and the current state regarding wake-up of the monitored person Ob is set to after wake-up (see the center of FIG. 12). When the second area S2 exceeds the seventh threshold Th7 (S2 > Th7), it is determined that the monitored person Ob has not woken up, and the current state regarding wake-up remains before wake-up (see the left side of FIG. 12).
  • As shown in FIG. 12, the back bed-leaving determination condition used in the back behavior detection algorithm is whether the second area S2 has fallen to or below a preset eighth threshold value (back bed-leaving determination threshold value) Th8. When the second area S2 is equal to or less than the eighth threshold Th8 (S2 ≤ Th8), it is determined that the monitored person Ob has left the bed, and the current state regarding bed-leaving is set to after bed-leaving (see the right side of FIG. 12); when it exceeds the eighth threshold Th8 (S2 > Th8), it is determined that the monitored person Ob has not left the bed, and the current state regarding bed-leaving remains before bed-leaving (see the center of FIG. 12).
  • the seventh and eighth threshold values Th7 and Th8 are set in advance experimentally by statistically processing a plurality of samples, for example.
  • As shown in FIG. 13, the diagonally-front wake-up determination condition used in the diagonally-front behavior detection algorithm is whether the distance Wa between the behavior detection line AL and the toe position FP of the monitored person Ob exceeds a preset ninth threshold value (first diagonally-front wake-up determination threshold value) Th9 and the third area S3 has fallen to or below a preset tenth threshold value (second diagonally-front wake-up determination threshold value) Th10. When the distance Wa is equal to or greater than the ninth threshold Th9 and the third area S3 is equal to or less than the tenth threshold Th10 (Wa ≥ Th9 and S3 ≤ Th10), it is determined that the monitored person Ob has woken up, and the current state regarding wake-up is set to after wake-up (see the center of FIG. 13). When the distance Wa is less than the ninth threshold Th9 and the third area S3 exceeds the tenth threshold Th10 (Wa < Th9 and S3 > Th10), it is determined that the monitored person Ob has not woken up, and the current state regarding wake-up remains before wake-up (see the left side of FIG. 13).
  • As shown in FIG. 13, the diagonally-front bed-leaving determination condition used in the diagonally-front behavior detection algorithm is whether the distance Wa exceeds a preset eleventh threshold value (first diagonally-front bed-leaving determination threshold value) Th11 and the third area S3 has fallen to or below a preset twelfth threshold value (second diagonally-front bed-leaving determination threshold value) Th12. When the distance Wa is equal to or greater than the eleventh threshold Th11 and the third area S3 is equal to or less than the twelfth threshold Th12 (Wa ≥ Th11 and S3 ≤ Th12), it is determined that the monitored person Ob has left the bed, and the current state regarding bed-leaving is set to after bed-leaving (see the right side of FIG. 13); when the distance Wa is less than the eleventh threshold Th11 and the third area S3 exceeds the twelfth threshold Th12, it is determined that the monitored person Ob has not left the bed, and the current state regarding bed-leaving remains before bed-leaving (see the center of FIG. 13).
  • These ninth to twelfth threshold values Th9 to Th12 are set in advance experimentally by statistically processing a plurality of samples, for example.
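  • Since the diagonally-front conditions combine the protrusion distance Wa with the banded overlap area S3, they can be sketched as a pair of conjunctions over the quantities computed in the earlier sketches (threshold names mirror Th9 to Th12).

```python
def diagonally_front_conditions(wa, s3, th9, th10, th11, th12):
    """Both clauses must hold for each determination, as described."""
    woke_up = wa >= th9 and s3 <= th10     # Wa >= Th9 and S3 <= Th10
    left_bed = wa >= th11 and s3 <= th12   # Wa >= Th11 and S3 <= Th12
    return woke_up, left_bed
```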
  • As shown in FIG. 14, the diagonally-back wake-up determination condition used in the diagonally-back behavior detection algorithm is whether the third area S3, where the setting area AR and the monitored person Ob overlap between the first boundary line BL1, located closest to the camera 1 along the lateral direction of the setting area AR on the image, and the second boundary line BL2, located farthest from the camera 1 along that lateral direction, has fallen to or below a preset thirteenth threshold value (diagonally-back wake-up determination threshold value) Th13. When the third area S3 is equal to or less than the thirteenth threshold Th13 (S3 ≤ Th13), it is determined that the monitored person Ob has woken up, and the current state regarding wake-up is set to after wake-up (see the center of FIG. 14). When the third area S3 exceeds the thirteenth threshold Th13 (S3 > Th13), it is determined that the monitored person Ob has not woken up, and the current state regarding wake-up remains before wake-up (see the left side of FIG. 14).
  • As shown in FIG. 14, the diagonally-back bed-leaving determination condition used in the diagonally-back behavior detection algorithm is whether the third area S3 has fallen to or below a preset fourteenth threshold value (diagonally-back bed-leaving determination threshold value) Th14. When the third area S3 is equal to or less than the fourteenth threshold Th14 (S3 ≤ Th14), it is determined that the monitored person Ob has left the bed, and the current state regarding bed-leaving is set to after bed-leaving (see the right side of FIG. 14); when it exceeds the fourteenth threshold Th14 (S3 > Th14), it is determined that the monitored person Ob has not left the bed, and the current state regarding bed-leaving remains before bed-leaving (see the center of FIG. 14).
  • the thirteenth and fourteenth threshold values Th13 and Th14 are set in advance experimentally by statistically processing a plurality of samples, for example.
  • The first to fourteenth threshold values Th1 to Th14 described above may be set in consideration of the size and the movement speed of the monitored person Ob on the image.
  • The directly-above, front, back, side, diagonally-front, and diagonally-back behavior detection algorithms described above are stored in advance in the behavior detection algorithm storage unit 53.
  • Such a storage unit 5 includes, for example, a ROM (Read Only Memory) that is a nonvolatile storage element, an EEPROM (Electrically Erasable Programmable Read Only Memory) that is a rewritable nonvolatile storage element, and the like.
  • The storage unit 5 also includes a RAM (Random Access Memory) or the like serving as the so-called working memory of the control processing unit 3, which stores data generated during execution of the predetermined programs.
  • The control processing unit 3 is a circuit that controls each unit of the sensor device SU according to the function of each unit, acquires an image of the monitored person Ob captured from above by the camera 1, and detects and notifies the predetermined behavior of the monitored person Ob based on the acquired image.
  • the control processing unit 3 includes, for example, a CPU (Central Processing Unit) and its peripheral circuits.
  • the control processing unit 3 includes a control unit 31, a region setting processing unit 32, a behavior detection line setting processing unit 33, a positional relationship calculation unit 34, an algorithm selection unit 35, and a behavior detection processing unit. 36, a notification processing unit 37, a streaming processing unit 38, and a nurse call processing unit 39 are functionally provided.
  • the control unit 31 controls each part of the sensor device SU according to the function of each part, and controls the entire sensor device SU.
  • the area setting processing unit 32 performs processing for setting a predetermined area in the image as the setting area AR.
  • the action detection line setting processing unit 33 performs processing for setting a predetermined line in the image as the action detection line AL.
  • When the control processing unit 3 receives, by the communication IF unit 4, a communication signal (setting request communication signal) requesting the setting of the setting area AR and the behavior detection line AL from the fixed terminal device SP or the portable terminal device TA, it returns, by the communication IF unit 4, a communication signal (setting image communication signal) containing the image acquired by the camera 1. The fixed terminal device SP or the portable terminal device TA that receives this displays the setting screen, shown in FIG. 3, for inputting the setting area AR and the behavior detection line AL, with the image contained in the setting image communication signal displayed on the setting screen.
  • On this setting screen, the user inputs each vertex PT1 to PT4 of the setting area AR and both end points PT2 and PT3 of the behavior detection line AL using, for example, a pointing device such as a mouse or a tap operation on a touch panel. The input method may instead be, for example, input of each side of the setting area AR or input of the behavior detection line AL itself.
  • When these inputs are accepted, the fixed terminal device SP or the portable terminal device TA transmits a communication signal (setting information communication signal) containing the accepted setting area AR and behavior detection line AL to the sensor device SU.
  • Upon reception, the area setting processing unit 32 extracts the setting area AR (the four vertices PT1 to PT4) contained in the received setting information communication signal and stores it in the setting area storage unit 51 of the storage unit 5, thereby setting the setting area AR in the sensor device SU, and the behavior detection line setting processing unit 33 extracts the behavior detection line AL (both end points PT2 and PT3) contained in the received setting information communication signal and stores it in the behavior detection line storage unit 52 of the storage unit 5, thereby setting the behavior detection line AL in the sensor device SU.
  • the communication IF unit 4 and the behavior detection line setting processing unit 33 correspond to an example of a behavior detection line receiving unit that receives a behavior detection line from the outside and stores it in the storage unit.
  • the sensor device SU may further include an input unit 6 and a display unit 7 as indicated by a broken line in FIG.
  • The input unit 6 is connected to the control processing unit 3 and is a device for inputting into the sensor device SU various data necessary for monitoring the monitored person Ob; it is, for example, a plurality of input switches to which predetermined functions are assigned, a mouse, or the like.
  • The display unit 7 is a device that is connected to the control processing unit 3 and displays, under the control of the control processing unit 3, the data and the like input from the input unit 6; it is, for example, a display device such as a CRT display, an LCD, or an organic EL display.
  • A touch panel may be configured from the input unit 6 and the display unit 7. In this case, the input unit 6 is a position input device, of a resistive film type, a capacitance type, or the like, that detects and inputs an operated position. In this touch panel, the position input device is provided on the display surface of the display unit 7, one or more candidates of input content that can be input are displayed on the display unit 7, and when the user touches the display position of the input content to be input, the position is detected by the position input device and the display content displayed at the detected position is input to the sensor device SU as the user's operation input.
  • In this case, the control processing unit 3 may display on the display unit 7 the setting screen, shown in FIG. 3, for inputting the setting area and the behavior detection line, and the setting area AR and the behavior detection line AL may be received by the input unit 6.
  • an image acquired by the camera 1 is displayed on the setting screen.
  • The user inputs each vertex PT1 to PT4 of the setting area AR and both end points PT2 and PT3 of the behavior detection line AL on the setting screen using, for example, a pointing device such as a mouse as the input unit 6, or a tap operation on the touch panel constituted by the input unit 6 and the display unit 7.
  • Upon receiving the setting area AR and the behavior detection line AL, the area setting processing unit 32 stores the setting area AR (the four vertices PT1 to PT4) received by the input unit 6 in the setting area storage unit 51 of the storage unit 5, and the behavior detection line setting processing unit 33 stores the behavior detection line AL (both end points PT2 and PT3) received by the input unit 6 in the behavior detection line storage unit 52 of the storage unit 5.
  • the input unit 6 and the behavior detection line setting processing unit 33 correspond to another example of a behavior detection line receiving unit that receives a behavior detection line from the outside and stores it in the storage unit.
  • the positional relationship calculation unit 34 obtains the positional relationship of the camera 1 with respect to the setting area AR.
  • In the present embodiment, the positional relationship calculation unit 34 obtains the positional relationship of the camera 1 with respect to the setting area AR taking the behavior detection line AL into consideration. More specifically, for example as shown in FIG. 5, the positional relationship calculation unit 34 obtains the positional relationship of the camera 1 according to where the image center CP is located with respect to the setting area AR and the behavior detection line AL, and stores the obtained positional relationship in the storage unit 5.
  • In the following, the left-right direction along the behavior detection line AL is taken as the X direction, and the front-rear direction perpendicular to it as the Y direction. For example, as shown in FIG. 5A, when the image center CP is in front of the behavior detection line AL (on its -Y side), there is no boundary of the setting area AR between the behavior detection line AL and the image center CP, and the image center CP faces the behavior detection line AL (that is, the X coordinate of the image center CP is included in the range from the X coordinate of one end PT2 to the X coordinate of the other end PT3 of the behavior detection line AL), the positional relationship calculation unit 34 calculates the front PO2 as the positional relationship of the camera 1. For example, as shown in FIG. 5B, when the image center CP is inside the setting area AR, the positional relationship calculation unit 34 calculates directly above PO1 as the positional relationship of the camera 1.
  • When there is a boundary of the setting area AR between the behavior detection line AL and the image center CP and the image center CP faces the behavior detection line AL, the positional relationship calculation unit 34 calculates the back PO3 as the positional relationship of the camera 1. When the image center CP is outside the setting area AR to its side in the X direction, the positional relationship calculation unit 34 calculates the side PO4 as the positional relationship of the camera 1.
  • When the image center CP is in front of the behavior detection line AL, there is no boundary of the setting area AR between the behavior detection line AL and the image center CP, and the image center CP does not face the behavior detection line AL (that is, the X coordinate of the image center CP is not included in the range from the X coordinate of one end PT2 to the X coordinate of the other end PT3 of the behavior detection line AL), the positional relationship calculation unit 34 calculates diagonally front PO5 as the positional relationship of the camera 1. When there is a boundary of the setting area AR between the behavior detection line AL and the image center CP and the image center CP does not face the behavior detection line AL, the positional relationship calculation unit 34 calculates diagonally back PO6 as the positional relationship of the camera 1. The remaining examples shown in FIG. 5 are classified in the same manner, for example as the front PO2 or the side PO4.
  • the camera is determined based on a tilt angle obtained by user input or automatic calculation based on an image by a known conventional means. 1 is obtained by projecting the position of 1 on the floor (on the image), and the position directly under the camera 1 is set as the image center CP. Similarly to the above, the positional relationship of the camera 1 is determined. Is calculated.
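Where a concrete formulation helps, the classification just described can be sketched as follows. This is a minimal illustration only, assuming an axis-aligned setting area and a horizontal action detection line on one Y-side of it; all function and label names are invented for the example and are not taken from the patent.

```python
# Sketch: classify the camera's positional relationship from the image center CP,
# the setting area AR, and the action detection line AL (all assumptions noted above).

def classify_camera_position(cp, ar, al):
    """cp: (x, y) image center CP; ar: (x_min, y_min, x_max, y_max) setting area AR;
    al: ((x1, y), (x2, y)) action detection line AL on one Y-side of AR."""
    x, y = cp
    x_min, y_min, x_max, y_max = ar
    (ax1, al_y), (ax2, _) = al
    in_x_range = min(ax1, ax2) <= x <= max(ax1, ax2)   # CP within AL's X extent?

    if x_min <= x <= x_max and y_min <= y <= y_max:
        return "PO1_directly_above"                    # CP inside AR
    if y_min <= y <= y_max:
        return "PO4_lateral"                           # beside AR along the X direction

    # AL marks the front side of AR; a boundary of AR lies between AL and CP
    # exactly when CP is on the opposite (rear) side of the area.
    al_on_lower_side = abs(al_y - y_max) < abs(al_y - y_min)
    cp_on_front_side = (y > y_max) if al_on_lower_side else (y < y_min)

    if cp_on_front_side:
        return "PO2_front" if in_x_range else "PO5_diagonally_forward"
    return "PO3_rear" if in_x_range else "PO6_diagonally_back"
```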
• The algorithm selection unit 35 selects, from among the plurality of behavior detection algorithms stored in the behavior detection algorithm storage unit 53 of the storage unit 5, a behavior detection algorithm based on the positional relationship obtained by the positional relationship calculation unit 34, and notifies the behavior detection processing unit 36 of the selection result. More specifically, in this embodiment, the algorithm selection unit 35 determines the positional relationship of the camera 1 obtained by the positional relationship calculation unit 34 and, as a result of the determination, selects as follows: when the positional relationship is the directly-above PO1, the above-described behavior detection algorithm for directly above; when it is the front PO2, the above-described behavior detection algorithm for the front; when it is the rear PO3, the above-described behavior detection algorithm for the rear; when it is the lateral PO4, the above-described behavior detection algorithm for the lateral direction; when it is the diagonally-forward PO5, the above-described behavior detection algorithm for diagonally forward; and when it is the diagonally-back PO6, the above-described behavior detection algorithm for diagonally back. The selection result is notified to the behavior detection processing unit 36. A sketch of this dispatch follows.
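A minimal sketch of the selection step as a dispatch table, with placeholder routine names standing in for the six per-position algorithms described above:

```python
# Sketch: map the computed positional relationship to a detection routine,
# as the algorithm selection unit 35 does. Routine names are placeholders.

def detect_directly_above(image): ...        # uses areas S1 and S2
def detect_front(image): ...                 # uses the distance Wa
def detect_rear(image): ...                  # uses the area S2
def detect_lateral(image): ...               # uses the area S3
def detect_diagonally_forward(image): ...    # uses Wa and S3
def detect_diagonally_back(image): ...       # uses S3

DETECTORS = {
    "PO1_directly_above": detect_directly_above,
    "PO2_front": detect_front,
    "PO3_rear": detect_rear,
    "PO4_lateral": detect_lateral,
    "PO5_diagonally_forward": detect_diagonally_forward,
    "PO6_diagonally_back": detect_diagonally_back,
}

def select_algorithm(positional_relationship):
    # Pick the behavior detection algorithm matching the camera's position.
    return DETECTORS[positional_relationship]
```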
• The behavior detection processing unit 36 detects a predetermined behavior (in this embodiment, whether the monitored person has woken up and whether he or she has left the bed) based on an image acquired by the camera 1, using the behavior detection algorithm selected by the algorithm selection unit 35, and notifies the notification processing unit 37 of the detection result.
• The notification processing unit 37 notifies the outside of the predetermined behavior detected by the behavior detection processing unit 36 (in this embodiment, getting up and leaving the bed). More specifically, the notification processing unit 37 generates a communication signal (monitoring information communication signal) containing information representing the detected predetermined behavior (state, situation) (detected behavior information, in this embodiment representing one or more of getting up and leaving the bed), identifier information for specifying and identifying the monitored person Ob in whom the predetermined behavior was detected (or identifier information for specifying and identifying the sensor device SU that detected the monitored person Ob), an image used for detecting the predetermined behavior, and the like, and transmits it to the management server device SV by the communication IF unit 4.
• When there is a moving image distribution request from the fixed terminal device SP or the portable terminal device TA via the network NW and the communication IF unit 4, the streaming processing unit 38 distributes a moving image generated by the camera 1 (for example, a live moving image) by streaming reproduction to the requesting fixed terminal device SP or portable terminal device TA via the communication IF unit 4 and the network NW.
• When the nurse call processing unit 39 receives an input operation on a nurse call push-button switch (not shown) that accepts a nurse call from the monitored person Ob, it enables a voice call with the fixed terminal device SP or the portable terminal device TA via the communication IF unit 4 and the network NW.
• FIG. 1 shows, as an example, four sensor devices, the first to fourth sensor devices SU-1 to SU-4. The first sensor device SU-1 is arranged in a room RM-1 (not shown) of Mr. A Ob-1, one of the monitored persons Ob; the second sensor device SU-2 is arranged in a room RM-2 (not shown) of Mr. B Ob-2, one of the monitored persons Ob; the third sensor device SU-3 is arranged in a room RM-3 (not shown) of Mr. C Ob-3, one of the monitored persons Ob; and the fourth sensor device SU-4 is arranged in a room RM-4 (not shown) of Mr. D Ob-4, one of the monitored persons Ob.
  • FIG. 15 is a flowchart of a main routine showing the operation of the sensor device.
• FIG. 16 is a flowchart of a subroutine showing the behavior detection operation used in the main routine shown in FIG. 15.
• When power is turned on, each device SU, SV, SP, TA initializes each necessary part and starts its operation. In the sensor device SU, by execution of its control processing program, the control unit 31, the area setting processing unit 32, the action detection line setting processing unit 33, the positional relationship calculation unit 34, the algorithm selection unit 35, the behavior detection processing unit 36, the notification processing unit 37, the streaming processing unit 38, and the nurse call processing unit 39 are functionally configured in the control processing unit 3.
• The control processing unit 3 sets the setting area AR by the area setting processing unit 32 (S41) and sets the action detection line AL by the action detection line setting processing unit 33 (S42). More specifically, as described above, in the present embodiment, when the communication IF unit 4 receives a setting request communication signal from the fixed terminal device SP or the portable terminal device TA, the control processing unit 3 returns a setting screen communication signal by the communication IF unit 4.
• The fixed terminal device SP or the portable terminal device TA then displays the setting screen shown in FIG. 3, for example, and the user (a monitoring person, the monitored person, or the like) inputs the setting area AR and the action detection line AL on the setting screen.
• The fixed terminal device SP or the portable terminal device TA transmits a setting information communication signal containing these inputs to the sensor device SU.
• The area setting processing unit 32 extracts the setting area AR contained in the received setting information communication signal and stores it in the setting area storage unit 51 of the storage unit 5, whereby the setting area AR is set in the sensor device SU. Likewise, the behavior detection line setting processing unit 33 extracts the action detection line AL contained in the received setting information communication signal and stores it in the behavior detection line storage unit 52 of the storage unit 5, whereby the action detection line AL is set in the sensor device SU.
• The control processing unit 3 obtains the positional relationship of the camera 1 with respect to the setting area AR by the positional relationship calculation unit 34 and stores the obtained positional relationship of the camera 1 in the storage unit 5 (S43). More specifically, as described above, in the present embodiment, the positional relationship calculation unit 34 obtains the positional relationship of the camera 1 depending on where the image center CP is located with respect to the setting area AR and the action detection line AL.
• The control processing unit 3 acquires, by the camera 1, an image of the monitored person Ob captured from above the monitored person Ob; the image for one frame is thereby acquired and input to the sensor device SU (S44).
• The control processing unit 3 then performs, by the behavior detection processing unit 36, a behavior detection process described below for detecting a predetermined behavior in the monitored person Ob based on the image acquired in process S44 (S45), and returns to process S44 in order to execute this behavior detection process on the image of the next frame. That is, the control processing unit 3 executes the behavior detection process for each frame by repeatedly executing processes S44 and S45, as sketched below.
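A minimal sketch of this acquire-and-detect loop (processes S44 and S45); the camera, detector, and notification interfaces are assumptions for illustration:

```python
# Sketch: per-frame loop. S44 acquires a frame from above the monitored person,
# S45 runs the selected behavior detection algorithm on it.

def monitoring_loop(camera, detector, notify):
    while True:
        frame = camera.capture()          # S44: acquire one frame
        events = detector(frame)          # S45: run the selected algorithm
        for event in events:              # e.g. "wake_up", "leave_bed"
            notify(event, frame)          # hand off to the notification step
```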
• In the behavior detection process, the control processing unit 3 first selects, by the algorithm selection unit 35, a behavior detection algorithm from among the plurality of behavior detection algorithms stored in the behavior detection algorithm storage unit 53 of the storage unit 5, based on the positional relationship obtained by the positional relationship calculation unit 34 (S51). More specifically, the algorithm selection unit 35 determines the positional relationship of the camera 1 obtained by the positional relationship calculation unit 34 by referring to the stored contents of the storage unit 5.
• When the positional relationship of the camera 1 obtained by the positional relationship calculation unit 34 in process S43 is the directly-above PO1, in process S51 the algorithm selection unit 35 selects the above-described behavior detection algorithm for directly above and notifies the behavior detection processing unit 36 of the selection result. The behavior detection processing unit 36 that has received this notification executes process S52-1, in which it detects the predetermined behavior in the monitored person Ob (in this embodiment, getting up and leaving the bed) based on the image acquired by the camera 1, using the behavior detection algorithm for directly above selected by the algorithm selection unit 35 (that is, the processes according to the above-described flowcharts).
• When the positional relationship of the camera 1 obtained by the positional relationship calculation unit 34 in process S43 is the front PO2, in process S51 the algorithm selection unit 35 selects the above-described behavior detection algorithm for the front and notifies the behavior detection processing unit 36 of the selection result. The behavior detection processing unit 36 that has received this notification executes process S52-2, in which it detects the predetermined behavior in the monitored person Ob based on the image acquired by the camera 1, using the behavior detection algorithm for the front selected by the algorithm selection unit 35 (that is, the processes according to the above-described flowcharts).
• When the positional relationship of the camera 1 obtained by the positional relationship calculation unit 34 in process S43 is the lateral PO4, in process S51 the algorithm selection unit 35 selects the above-described behavior detection algorithm for the lateral direction and notifies the behavior detection processing unit 36 of the selection result. The behavior detection processing unit 36 that has received this notification executes process S52-3, in which it detects the predetermined behavior in the monitored person Ob based on the image acquired by the camera 1, using the behavior detection algorithm for the lateral direction selected by the algorithm selection unit 35 (that is, the processes according to the above-described flowcharts).
• When the positional relationship of the camera 1 obtained by the positional relationship calculation unit 34 in process S43 is the rear PO3, in process S51 the algorithm selection unit 35 selects the above-described behavior detection algorithm for the rear and notifies the behavior detection processing unit 36 of the selection result. The behavior detection processing unit 36 that has received this notification executes process S52-4, in which it detects the predetermined behavior in the monitored person Ob based on the image acquired by the camera 1, using the behavior detection algorithm for the rear selected by the algorithm selection unit 35 (that is, the processes according to the above-described flowcharts).
• When the positional relationship of the camera 1 obtained by the positional relationship calculation unit 34 in process S43 is the diagonally-forward PO5, in process S51 the algorithm selection unit 35 selects the above-described behavior detection algorithm for diagonally forward and notifies the behavior detection processing unit 36 of the selection result. The behavior detection processing unit 36 that has received this notification executes process S52-5, in which it detects the predetermined behavior in the monitored person Ob based on the image acquired by the camera 1, using the behavior detection algorithm for diagonally forward selected by the algorithm selection unit 35 (that is, the processes according to the above-described flowcharts).
• When the positional relationship of the camera 1 obtained by the positional relationship calculation unit 34 in process S43 is the diagonally-back PO6, in process S51 the algorithm selection unit 35 selects the above-described behavior detection algorithm for diagonally back and notifies the behavior detection processing unit 36 of the selection result. The behavior detection processing unit 36 that has received this notification executes process S52-6, in which it detects the predetermined behavior in the monitored person Ob based on the image acquired by the camera 1, using the behavior detection algorithm for diagonally back selected by the algorithm selection unit 35 (that is, the processes according to the above-described flowcharts).
• When the predetermined behavior is detected, the notification processing unit 37 transmits to the management server device SV, via the network NW, a communication signal (monitoring information communication signal) containing monitoring information such as determination result information representing the state of the monitored person Ob determined in this way (in this embodiment, getting up and leaving the bed) and image data of a still image of the monitored person Ob.
• When the management server device SV receives the monitoring information communication signal from the sensor device SU via the network NW, it stores (records) the monitoring information, such as the determination result information and the still image data contained in the monitoring information communication signal, in its storage unit. The management server device SV then transmits a monitoring information communication signal containing the monitoring information, such as the determination result information and the still image data, to the terminal devices (in this embodiment, the fixed terminal device SP and the portable terminal device TA). As a result, the state (situation) of the monitored person Ob is notified to a monitoring person such as a nurse or a caregiver via the terminal devices SP and TA.
• The fixed terminal device SP and the portable terminal device TA display the monitoring information contained in the monitoring information communication signal.
• In this way, the monitored person monitoring system MS detects the predetermined behavior of each monitored person Ob by means of each sensor device SU, the management server device SV, the fixed terminal device SP, and the portable terminal device TA, and monitors each monitored person Ob.
• As described above, the monitored person monitoring system MS, which is an example of the monitored person monitoring apparatus, and its sensor device SU, which is an example using the behavior detection device, the behavior detection method, and the behavior detection program, select from among a plurality of behavior detection algorithms the behavior detection algorithm corresponding to the positional relationship of the camera 1 with respect to the setting area AR, and detect the predetermined behavior (getting up and leaving the bed in this embodiment) by the selected behavior detection algorithm. Therefore, regardless of the position at which the camera 1 is disposed, the predetermined behavior of the monitored person Ob can be detected with higher accuracy.
• If the position of the bedding BT is changed after the camera 1 is installed and behavior detection has started, the positional relationship of the camera 1 with respect to the bedding BT changes. However, the monitored person monitoring system MS and the sensor device SU can cope with such a change in the positional relationship of the camera 1 simply by changing the setting area AR stored in the storage unit 5 to the area of the bedding BT after the change. Therefore, the monitored person monitoring system MS and the sensor device SU can detect the predetermined behavior of the monitored person Ob with higher accuracy regardless of the position of the bedding BT.
• Since the monitored person monitoring system MS and the sensor device SU include the positional relationship calculation unit 34 for determining whether the positional relationship of the camera 1 is the directly-above PO1, the front PO2, the rear PO3, the lateral PO4, the diagonally-forward PO5, or the diagonally-back PO6, they can cope with the camera 1 being disposed directly above, in front, behind, to the side, diagonally in front, or diagonally behind, and in each of these cases the predetermined behavior of the monitored person Ob can be detected with higher accuracy.
• Since the monitored person monitoring system MS and the sensor device SU include the behavior detection algorithm for directly above described with reference to FIG. 9, when the positional relationship of the camera 1 is the directly-above PO1, the presence or absence of getting up and of leaving the bed in the monitored person Ob can be determined by the relatively simple image processing of obtaining the first area S1 of the monitored person Ob outside the setting area AR on the image and the second area S2 where the setting area AR and the monitored person Ob overlap on the image.
• Since they include the behavior detection algorithm for the front, when the positional relationship of the camera 1 is the front PO2, the presence or absence of getting up and of leaving the bed in the monitored person Ob can be determined by the relatively simple image processing of obtaining the distance Wa between the action detection line AL and the toe position FP of the monitored person Ob.
• Since the monitored person monitoring system MS and the sensor device SU include the behavior detection algorithm for the lateral direction described with reference to FIG. 11, when the positional relationship of the camera 1 is the lateral PO4, the presence or absence of getting up and of leaving the bed in the monitored person Ob can be determined by the relatively simple image processing of obtaining the third area S3 where the setting area AR and the monitored person Ob overlap between the first boundary line BL1, positioned closest to the camera 1 along the horizontal direction of the setting area AR on the image, and the second boundary line BL2, positioned farthest from the camera 1 along the horizontal direction of the setting area AR.
• Since the monitored person monitoring system MS and the sensor device SU include the behavior detection algorithm for the rear described with reference to FIG. 12, when the positional relationship of the camera 1 is the rear PO3, the presence or absence of getting up and of leaving the bed in the monitored person Ob can be determined by the relatively simple image processing of obtaining the second area S2.
• Since the monitored person monitoring system MS and the sensor device SU include the behavior detection algorithm for diagonally forward described with reference to FIG. 13, when the positional relationship of the camera 1 is the diagonally-forward PO5, the presence or absence of getting up and of leaving the bed in the monitored person Ob can be determined by the relatively simple image processing of obtaining the distance Wa and the third area S3.
• Similarly, when the positional relationship of the camera 1 is the diagonally-back PO6, the presence or absence of getting up and of leaving the bed in the monitored person Ob can be determined by the relatively simple image processing of obtaining the third area S3.
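The evaluation values recurring in these algorithms — the areas S1, S2, S3 and the distance Wa — can be illustrated with a sketch like the following. The binary person mask (assumed to come from background subtraction elsewhere), the band mask marking the strip between BL1 and BL2, and the use of the lowest body pixel as the toe position are all assumptions made for the example, not details taken from the patent.

```python
# Sketch: simple evaluation values from a person mask and the setting area.
import numpy as np

def evaluation_values(person_mask, area_mask, band_mask, al_y):
    """person_mask, area_mask, band_mask: boolean HxW arrays; al_y: image row of AL."""
    s1 = np.count_nonzero(person_mask & ~area_mask)   # S1: person outside AR
    s2 = np.count_nonzero(person_mask & area_mask)    # S2: person inside AR
    s3 = np.count_nonzero(person_mask & band_mask)    # S3: person between BL1 and BL2
    rows = np.flatnonzero(person_mask.any(axis=1))
    toe_y = rows.max() if rows.size else al_y         # lowest body pixel ~ toe position FP
    wa = abs(toe_y - al_y)                            # Wa: toe distance from AL
    return s1, s2, s3, wa
```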
• Since the monitored person monitoring system MS and the sensor device SU include the communication IF unit 4 and the behavior detection line setting processing unit 33 as an example of the behavior detection line receiving unit, which receives the action detection line AL from the outside and stores it in the storage unit 5, the action detection line AL can be easily input and set.
• In the above description, the positional relationship calculation unit 34 determines the positional relationship of the camera 1 depending on where the image center CP is located with respect to the setting area AR and the action detection line AL. Alternatively, the image center CP may also be displayed on the display unit 7 together with the image acquired by the camera 1 shown in FIG. 3, the input of the setting area and the action detection line may be accepted through the user's operation of the input unit 6, and the positional relationship between the image center CP and the setting area AR and action detection line AL thus input may be used.
• In the above description, the algorithm selection unit 35 selects the behavior detection algorithm based on the positional relationship of the camera 1 obtained by the positional relationship calculation unit 34 in process S51 of the behavior detection process S45, but the behavior detection algorithm may be selected at another timing after the positional relationship calculation unit 34 obtains the positional relationship of the camera 1.
• For example, the algorithm selection unit 35 may select the behavior detection algorithm immediately after the positional relationship calculation unit 34 obtains the positional relationship of the camera 1 in the above-described process S43.
• For example, the algorithm selection unit 35 may select, in an interrupt process executed every predetermined period, the behavior detection algorithm based on the positional relationship of the camera 1 obtained by the positional relationship calculation unit 34 from among the plurality of behavior detection algorithms.
• For example, the algorithm selection unit 35 may select the behavior detection algorithm in an interrupt process triggered by the user after the user sets the setting area AR or the action detection line AL. According to this, it is possible to appropriately cope with rearrangement of the sensor device SU and rearrangement of the bedding BT, and the predetermined behavior of the monitored person Ob can be detected with higher accuracy.
• In the above description, the predetermined behaviors of the monitored person Ob are, for example, getting up and leaving the bed, but they are not limited thereto. The predetermined behavior of the monitored person Ob may be another behavior, as long as it can be detected based on an image captured from above the monitored person. For example, the predetermined behavior of the monitored person Ob may be an overhang over the bed edge caused by rolling over or the like.
• The overhang can be detected based on the head or arm extracted from the human body region by pattern matching using, for example, a head pattern or an arm pattern. For example, when the positional relationship of the camera 1 is directly above, the position of the lying person's head or arm can be detected by a behavior detection algorithm using the area of the body part exceeding a boundary provided near the bed boundary; when the positional relationship is the front, it can be detected by a behavior detection algorithm using the length of the body part exceeding a boundary corresponding to the bed height.
• A fall can be detected based on the head and trunk extracted from the human body region by pattern matching using, for example, a head pattern and a trunk pattern. For example, when the positional relationship of the camera 1 is the directly-above PO1, it can be detected by a behavior detection algorithm for directly above using the difference in the size of the head or the trunk between getting up and leaving the bed; when the positional relationship of the camera 1 is the front PO2, it can be detected by a behavior detection algorithm for the front using the difference between the vertical movement amount of the head and the vertical movement amount of the trunk when getting up and leaving the bed.
• In the above description, the wake-up determination condition and the bed-leaving determination condition are based on one or more of the first area S1, the second area S2, the third area S3, and the distance Wa, but they may be other conditions as long as they can be determined based on an image captured from above the monitored person.
• For example, the wake-up determination condition and the bed-leaving determination condition may be a comparison of the ratio of the first area S1 to the second area S2 with a predetermined threshold Th. The threshold Th is set in advance, for example experimentally, by statistically processing a plurality of samples, as described above.
• As another example, the wake-up determination condition and the bed-leaving determination condition may be the presence or absence of a motion that crosses the action detection line AL.
• For example, when the positional relationship of the camera 1 is the directly-above PO1 or the front PO2, wake-up is determined by detecting a motion in which the human body region first crosses the action detection line AL, and bed-leaving is determined by detecting a motion in which the entire human body region crosses the action detection line AL.
• For example, when the positional relationship of the camera 1 is the rear PO3, wake-up is determined by detecting a motion in which the human body region first crosses the action detection line AL, and bed-leaving is determined by detecting a motion in which half of the human body region crosses the action detection line AL. A sketch of this crossing test follows.
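A minimal sketch of the crossing-based determination, assuming a horizontal action detection line at image row al_y and using the crossed fraction of the body region (1.0 for PO1/PO2, 0.5 for PO3, per the text); the formulation and names are assumptions for illustration:

```python
# Sketch: wake-up when the body region starts to cross AL, bed-leaving when a
# given fraction of it has crossed.
import numpy as np

def crossing_state(person_mask, al_y, leave_fraction):
    """person_mask: boolean HxW array; rows at or below al_y count as beyond AL."""
    total = np.count_nonzero(person_mask)
    if total == 0:
        return "absent"
    beyond = np.count_nonzero(person_mask[al_y:, :])
    if beyond == 0:
        return "in_bed"
    if beyond >= leave_fraction * total:
        return "left_bed"      # enough of the body has crossed AL
    return "woke_up"           # the body has started to cross AL
```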
• In the above description, the setting area AR is set by the user, but it may be set automatically. For example, the area of the bedding BT is detected by extracting edges from an image acquired by the camera 1 using an edge filter, and the detected area of the bedding BT is automatically set as the setting area AR. As another example, the area of the bedding BT is detected from the image acquired by the camera 1 by pattern matching using a pattern of the bedding BT, and the detected area of the bedding BT is automatically set as the setting area AR. A sketch of the edge-based variant follows.
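A minimal sketch of the edge-based variant using OpenCV (4.x); the Canny thresholds and the assumption that the bed is the largest external contour are choices made for the example, not details from the patent:

```python
# Sketch: detect the bedding region with an edge filter and take the largest
# contour's bounding box as the setting area AR.
import cv2

def auto_setting_area(gray_image):
    edges = cv2.Canny(gray_image, 50, 150)                  # edge filter
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    bed = max(contours, key=cv2.contourArea)                # assume bed is largest
    x, y, w, h = cv2.boundingRect(bed)
    return (x, y, x + w, y + h)                             # AR as a bounding box
```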
• In the above description, the sensor device SU includes the communication IF unit 4 and the behavior detection line setting processing unit 33 as an example of the behavior detection line receiving unit; instead, the sensor device SU may functionally include, in the control processing unit 3, a behavior detection line computing unit that obtains the action detection line AL based on the image acquired by the camera 1 and stores the obtained action detection line AL in the behavior detection line storage unit 52 of the storage unit 5.
• According to this, the action detection line AL can be set automatically. For example, the area of the bedding BT is detected from the image acquired by the camera 1 by edge extraction or pattern matching, and the long side of the detected bedding BT is automatically set as the action detection line AL.
• As another example, the area of the bedding BT is detected, the camera 1, which is an example of the image acquisition unit, acquires a plurality of images captured at different times, a moving body region is detected as the human body region from the plurality of images, the frequency with which the detected human body region leaves the area of the bedding BT is obtained for each side, and the side with the highest frequency in a predetermined period (a set period) is automatically set as the action detection line AL, as sketched below.
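A minimal sketch of the frequency-based variant; the list of side labels is assumed to be collected elsewhere by tracking the body region over the set period:

```python
# Sketch: choose as AL the bed side most frequently crossed by the body region.
from collections import Counter

def auto_action_detection_line(exits):
    """exits: e.g. ["bottom", "bottom", "left", ...] — one label per observed
    event in which the human body region left the bedding area."""
    if not exits:
        return None
    side, _count = Counter(exits).most_common(1)[0]
    return side                     # this side of AR becomes AL
```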
• In the above description, the number of action detection lines AL is one, but a plurality of action detection lines AL may be provided. In this case, the positional relationship of the camera 1 is calculated for each action detection line AL, and the predetermined behavior of the monitored person Ob is detected for each action detection line AL by the behavior detection algorithm based on the calculated positional relationship of the camera 1.
• In the above description, the positional relationship calculation unit 34 calculates, as the positional relationship of the camera 1, one of the six basic positional relationships of the directly-above PO1, the front PO2, the rear PO3, the lateral PO4, the diagonally-forward PO5, and the diagonally-back PO6, and the algorithm selection unit 35 selects the behavior detection algorithm corresponding to the positional relationship of the camera 1 obtained by the positional relationship calculation unit 34. However, the behavior detection algorithm may instead be switched gradually according to the position of the camera 1.
• In this case, the positional relationship calculation unit 34 classifies the positional relationship, with respect to the setting area AR, of the camera 1 that captured the image into a plurality of different basic positional relationships and obtains the weight of each of the plurality of basic positional relationships.
• The plurality of behavior detection algorithms then include a weighted behavior detection algorithm: an algorithm that obtains a plurality of predetermined evaluation values for detecting the predetermined behavior of the monitored person Ob, weights the obtained plurality of evaluation values with the weights of the respective basic positional relationships, and detects the predetermined behavior of the monitored person Ob based on a comparison result between the weighted result and a predetermined threshold.
• The algorithm selection unit 35 selects the weighted behavior detection algorithm based on the positional relationship obtained by the positional relationship calculation unit 34 from among the plurality of behavior detection algorithms, and the behavior detection processing unit 36 detects the predetermined behavior of the monitored person Ob based on the image by the weighted behavior detection algorithm selected by the algorithm selection unit 35, using the weights of the plurality of basic positional relationships obtained by the positional relationship calculation unit 34.
• According to this, the positional relationship of the camera 1 with respect to the setting area AR is expressed not by one basic positional relationship but by the weights of the plurality of basic positional relationships, the plurality of evaluation values corresponding to the plurality of basic positional relationships are weighted by those weights, and the predetermined behavior of the monitored person Ob is detected based on the weighted result. Therefore, compared with the case where the predetermined behavior of the monitored person Ob is detected by a behavior detection algorithm corresponding to one basic positional relationship, the predetermined behavior of the monitored person Ob can be detected with higher accuracy.
• FIG. 17 is a diagram for explaining the method of calculating the weights when the positional relationship of the camera is represented by the weights of a plurality of basic positional relationships. FIG. 17A shows the setting area AR and the image center CP, and FIG. 17B shows an enlarged setting area ARE, obtained by enlarging the setting area AR shown in FIG. 17A by a factor of two, together with the image center CP.
• The basic positional relationships are, for example, the above-described directly-above PO1, front PO2, rear PO3, lateral PO4, diagonally-forward PO5, and diagonally-back PO6, and the predetermined evaluation values are, for example, the above-described first area S1, second area S2, third area S3, and distance Wa.
• As shown in FIG. 17B, the positional relationship calculation unit 34 obtains the enlarged setting area ARE, with horizontal width W and vertical width H, by enlarging the setting area by a factor of two around the intersection of its diagonals (shown by broken lines in FIG. 17B). The positional relationship calculation unit 34 then obtains the shortest distance p from the image center CP to the enlarged setting area ARE.
• In the example shown in FIG. 17, the image center CP is at the lower right, in plan view, within the setting area AR, and the setting area AR is horizontally long. Therefore, the action detection line AL is determined to be the lower side of the setting area AR, and the shortest distance p between the image center CP and the enlarged setting area ARE is the distance to the lower side of the enlarged setting area ARE.
• The positional relationship calculation unit 34 determines the basic positional relationships depending on where the image center CP is located with respect to the setting area AR, the enlarged setting area ARE, and the action detection line AL, and obtains the weight of each determined basic positional relationship. In the example shown in FIG. 17, the image center CP is within the setting area AR, and both the action detection line AL and the side of the enlarged setting area ARE at the shortest distance p from the image center lie below the image center CP. Therefore, the two basic positional relationships of the directly-above PO1 and the front PO2 are calculated, the weight of the directly-above PO1 being set to p and the weight of the front PO2 being set to H/2 − p.
• For example, the above-described wake-up determination condition for directly above is a comparison between the first area S1 and the first threshold Th1, and the above-described wake-up determination condition for the front is a comparison between the distance Wa and the third threshold Th3. The weighted wake-up determination condition is then a comparison between the weighted result V1, obtained by weighting these evaluations with the weights p and H/2 − p, and a fifteenth threshold Th15. The fifteenth threshold Th15 is set to 0; when the weighted result V1 is equal to or greater than 0 (V1 ≥ 0), the behavior detection processing unit 36 determines that the monitored person Ob has woken up, and when the weighted result V1 is less than 0 (V1 < 0), it determines that the monitored person Ob has not yet woken up.
• Likewise, the above-described bed-leaving determination condition for directly above is a comparison between the second area S2 and the second threshold Th2, and the above-described bed-leaving determination condition for the front is a comparison between the distance Wa and the fourth threshold Th4. The weighted bed-leaving determination condition is a comparison between the weighted result V2, obtained by weighting these evaluations with the weights p and H/2 − p, and a sixteenth threshold Th16. The sixteenth threshold Th16 is set to 0; when the weighted result V2 is equal to or greater than 0 (V2 ≥ 0), the behavior detection processing unit 36 determines that the monitored person Ob has left the bed, and when the weighted result V2 is less than 0 (V2 < 0), it determines that the monitored person Ob has not yet left the bed.
• When the image center CP is outside the enlarged setting area ARE, the image is captured by the camera 1 from a sufficiently oblique direction, and the single-positional-relationship algorithms described above can be used as they are.
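A sketch of the weighted determination follows. The concrete combination V = p × (evaluation − threshold for directly above) + (H/2 − p) × (evaluation − threshold for the front) is an assumed formulation consistent with the weights described above; the text does not spell out the exact expression.

```python
# Sketch: weighted wake-up / bed-leaving determination (assumed formulation).

def weighted_wakeup(s1, wa, p, half_h, th1, th3):
    # Weight the directly-above evaluation (S1 vs Th1) by p and the front
    # evaluation (Wa vs Th3) by H/2 - p; wake-up if V1 >= 0 (Th15 = 0).
    v1 = p * (s1 - th1) + (half_h - p) * (wa - th3)
    return v1 >= 0

def weighted_bed_leaving(s2, wa, p, half_h, th2, th4):
    # Same weighting with the bed-leaving evaluations (S2 vs Th2, Wa vs Th4);
    # bed-leaving if V2 >= 0 (Th16 = 0).
    v2 = p * (s2 - th2) + (half_h - p) * (wa - th4)
    return v2 >= 0
```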
• In the above description, the sensor device SU includes the behavior detection unit and the notification unit, but the behavior detection unit and the notification unit may instead be included in the management server device SV, in the fixed terminal device SP, or in the portable terminal device TA. In this case, a communication interface that receives and acquires, from the sensor device SU via the network NW, a communication signal containing the image captured from above the monitored person Ob and the Doppler signal corresponds to an example of the image acquisition unit.
• A behavior detection device according to one aspect includes: an image acquisition unit that acquires an image of a monitored person, who is a monitoring target, captured from above the monitored person; a storage unit that stores a predetermined area in the image as a setting area and stores a plurality of different behavior detection algorithms for detecting a predetermined behavior of the monitored person based on the image; an algorithm selection unit that selects, from among the plurality of behavior detection algorithms, a behavior detection algorithm based on a positional relationship between the camera that captured the image and the setting area; and a behavior detection processing unit that detects the predetermined behavior of the monitored person based on the image by the behavior detection algorithm selected by the algorithm selection unit.
• Preferably, the setting area is an area of bedding such as a bed (a bedding area). Preferably, the behavior detection device further includes a positional relationship calculation unit that obtains the positional relationship between the camera that captured the image and the setting area according to the position at which the center of the image lies with respect to the setting area.
• Such a behavior detection device selects a behavior detection algorithm according to the positional relationship between the camera and the setting area from among a plurality of behavior detection algorithms and detects the predetermined behavior of the monitored person by the selected behavior detection algorithm, so the predetermined behavior of the monitored person can be detected with higher accuracy regardless of the installation position of the camera. Further, when the position of the bedding is changed after the camera is installed and behavior detection has started, the positional relationship of the camera with respect to the bedding changes; the behavior detection device can cope with such a change simply by changing the setting area stored in the storage unit to the bedding area after the change. Therefore, the behavior detection device can detect the predetermined behavior of the monitored person with higher accuracy regardless of the position of the bedding.
• In another aspect, the positional relationship is one of: directly above, in which the camera is located directly above the setting area; front, in which the camera is located above the front of the setting area; rear, in which the camera is located above the rear of the setting area; lateral, in which the camera is located above the side of the setting area; diagonally forward, in which the camera is located diagonally above the front of the setting area; and diagonally back, in which the camera is located diagonally above the rear of the setting area.
• Such a behavior detection device can cope with the camera being disposed directly above, in front, behind, to the side, diagonally in front, or diagonally behind, and in each of these cases the predetermined behavior of the monitored person can be detected with higher accuracy.
• In another aspect, the storage unit further stores, as an action detection line, a predetermined line in the image used by at least one of the plurality of behavior detection algorithms to detect the predetermined behavior of the monitored person, and the positional relationship calculation unit obtains the positional relationship between the camera that captured the image and the setting area in consideration of the action detection line.
• In another aspect, the plurality of behavior detection algorithms include a behavior detection algorithm for directly above used when the positional relationship is directly above, and the behavior detection algorithm for directly above is an algorithm that detects the predetermined behavior of the monitored person based on a first area of the monitored person outside the setting area on the image and a second area where the setting area and the monitored person overlap on the image.
• Such a behavior detection device can detect the predetermined behavior of the monitored person by the relatively simple image processing of obtaining the first area of the monitored person outside the setting area on the image and the second area where the setting area and the monitored person overlap on the image.
• In another aspect, the storage unit further stores, as an action detection line, a predetermined line in the image used by at least one of the plurality of behavior detection algorithms to detect the predetermined behavior of the monitored person; the plurality of behavior detection algorithms include a behavior detection algorithm for the front used when the positional relationship is the front; and the behavior detection algorithm for the front is an algorithm that detects the predetermined behavior of the monitored person based on a distance between the action detection line on the image and a toe position of the monitored person.
• Such a behavior detection device can detect the predetermined behavior of the monitored person by the relatively simple image processing of obtaining the distance between the action detection line on the image and the toe position of the monitored person.
• In another aspect, the plurality of behavior detection algorithms include a behavior detection algorithm for the lateral direction used when the positional relationship is lateral, and the behavior detection algorithm for the lateral direction is an algorithm that detects the predetermined behavior of the monitored person based on a third area where the setting area and the monitored person overlap between a first boundary line positioned, on the image, closest to the camera along the horizontal direction of the setting area, and a second boundary line positioned farthest from the camera along the horizontal direction of the setting area.
• Such a behavior detection device can detect the predetermined behavior of the monitored person by the relatively simple image processing of obtaining the third area where the setting area and the monitored person overlap between the first boundary line positioned closest to the camera and the second boundary line positioned farthest from the camera along the horizontal direction of the setting area on the image.
• In another aspect, the plurality of behavior detection algorithms include a behavior detection algorithm for the rear used when the positional relationship is the rear, and the behavior detection algorithm for the rear is an algorithm that detects the predetermined behavior of the monitored person based on a second area where the setting area and the monitored person overlap on the image.
• Such a behavior detection device can detect the predetermined behavior of the monitored person by the relatively simple image processing of obtaining the second area where the setting area and the monitored person overlap on the image.
• In another aspect, the storage unit further stores, as an action detection line, a predetermined line in the image used by at least one of the plurality of behavior detection algorithms to detect the predetermined behavior of the monitored person; the plurality of behavior detection algorithms include a behavior detection algorithm for diagonally forward used when the positional relationship is diagonally forward; and the behavior detection algorithm for diagonally forward is an algorithm that detects the predetermined behavior of the monitored person based on the distance between the action detection line on the image and the toe position of the monitored person, and on a third area where the setting area and the monitored person overlap, defined with respect to a boundary line positioned closest to the camera along the horizontal direction of the setting area on the image.
• Such a behavior detection device can detect the predetermined behavior of the monitored person by the relatively simple image processing of obtaining the distance between the action detection line on the image and the toe position of the monitored person and the third area defined with respect to the boundary line positioned closest to the camera along the horizontal direction of the setting area on the image.
• In another aspect, the storage unit further stores, as an action detection line, a predetermined line in the image used by at least one of the plurality of behavior detection algorithms to detect the predetermined behavior of the monitored person; the plurality of behavior detection algorithms include a behavior detection algorithm for diagonally back used when the positional relationship is diagonally back; and the behavior detection algorithm for diagonally back is an algorithm that detects the predetermined behavior of the monitored person based on a third area where the setting area and the monitored person overlap between a first boundary line positioned closest to the camera along the horizontal direction of the setting area on the image and a second boundary line positioned farthest from the camera along the horizontal direction of the setting area.
• Such a behavior detection device can detect the predetermined behavior of the monitored person by the relatively simple image processing of obtaining the third area where the setting area and the monitored person overlap between the first boundary line positioned closest to the camera and the second boundary line positioned farthest from the camera along the horizontal direction of the setting area on the image.
• In another aspect, the positional relationship calculation unit classifies the positional relationship between the camera that captured the image and the setting area into a plurality of different basic positional relationships, and the plurality of behavior detection algorithms include, as a weighted behavior detection algorithm, an algorithm that obtains a plurality of predetermined evaluation values for detecting the predetermined behavior of the monitored person based on the image according to the plurality of basic positional relationships, weights the obtained plurality of evaluation values with the weights of the respective basic positional relationships, and detects the predetermined behavior of the monitored person based on a comparison result between the weighted result and a predetermined threshold. The algorithm selection unit selects the weighted behavior detection algorithm based on the positional relationship obtained by the positional relationship calculation unit from among the plurality of behavior detection algorithms, and the behavior detection processing unit detects the predetermined behavior of the monitored person based on the image by the weighted behavior detection algorithm selected by the algorithm selection unit, using the weights of the plurality of basic positional relationships obtained by the positional relationship calculation unit.
• Such a behavior detection device represents the positional relationship of the camera with respect to the setting area not as a single basic positional relationship but by the weights of a plurality of basic positional relationships, weights the plurality of evaluation values corresponding to the plurality of basic positional relationships with those weights, and detects the predetermined behavior of the monitored person based on the weighted result. Therefore, compared with the case where the predetermined behavior of the monitored person is detected by a behavior detection algorithm corresponding to one basic positional relationship, the predetermined behavior of the monitored person can be detected with higher accuracy.
• In another aspect, the above-described behavior detection device further includes a behavior detection line receiving unit that receives the action detection line from the outside and stores it in the storage unit. Since such a behavior detection device includes a behavior detection line receiving unit, the action detection line can be easily input and set.
• In another aspect, the above-described behavior detection device further includes a behavior detection line computing unit that obtains the action detection line based on the image acquired by the image acquisition unit and stores the obtained action detection line in the storage unit. Since such a behavior detection device includes a behavior detection line computing unit, the action detection line can be set automatically.
• A behavior detection method according to another aspect includes: an image acquisition step of acquiring an image of a monitored person, who is a monitoring target, captured from above the monitored person; an algorithm selection step of selecting a behavior detection algorithm, from among a plurality of different behavior detection algorithms for detecting a predetermined behavior of the monitored person, based on a positional relationship between the camera that captured the image and a setting area that is a predetermined area in the image; and a behavior detection processing step of detecting the predetermined behavior of the monitored person based on the image by the behavior detection algorithm selected in the algorithm selection step. A behavior detection program according to another aspect causes a computer to execute these steps.
• Such a behavior detection method and behavior detection program select a behavior detection algorithm according to the positional relationship between the camera and the setting area from among a plurality of behavior detection algorithms and detect the predetermined behavior of the monitored person by the selected behavior detection algorithm, so the predetermined behavior of the monitored person can be detected with higher accuracy regardless of the installation position of the camera.
• When the position of the bedding such as a bed is changed after the camera is arranged and behavior detection has started, the positional relationship of the camera with respect to the bedding changes; the behavior detection method and the behavior detection program can cope with such a change simply by changing the setting area stored in the storage unit to the bedding area after the change. Therefore, the behavior detection method and the behavior detection program can detect the predetermined behavior of the monitored person with higher accuracy regardless of the position of the bedding.
• A monitored person monitoring apparatus according to another aspect includes a behavior detection unit that detects a predetermined behavior of a monitored person, who is a monitoring target, and a notification unit that notifies the outside of the predetermined behavior detected by the behavior detection unit, wherein the behavior detection unit includes any one of the above-described behavior detection devices. Since such a monitored person monitoring apparatus includes one of the above-described behavior detection devices, it can detect the predetermined behavior of the monitored person with higher accuracy regardless of the position at which the camera is disposed.
• This application can thus provide a behavior detection device, a behavior detection method, a behavior detection program, and a monitored person monitoring apparatus.

Abstract

With this behavior detection device, behavior detection method, behavior detection program, and subject monitoring device, an image captured from above a subject to be monitored is acquired, a behavior detection algorithm is selected from among a plurality of different behavior detection algorithms for detecting prescribed behaviors of the monitored subject, on the basis of a positional relationship between the camera that captured the image and a set region that is a prescribed region in the image, and a prescribed behavior of the monitored subject is detected from the image by means of the selected behavior detection algorithm.
PCT/JP2016/062046 2015-06-11 2016-04-14 Dispositif de détection de comportement, procédé et programme de détection de comportement et dispositif de surveillance de sujet WO2016199495A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016575258A JP6115692B1 (ja) 2015-06-11 2016-04-14 行動検知装置、該方法および該プログラム、ならびに、被監視者監視装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015118388 2015-06-11
JP2015-118388 2015-06-11

Publications (1)

Publication Number Publication Date
WO2016199495A1 (fr)

Family

ID=57503317

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/062046 WO2016199495A1 (fr) 2015-06-11 2016-04-14 Dispositif de détection de comportement, procédé et programme de détection de comportement et dispositif de surveillance de sujet

Country Status (2)

Country Link
JP (2) JP6115692B1 (fr)
WO (1) WO2016199495A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019009377A1 (fr) * 2017-07-06 2019-01-10 オムロン株式会社 Système de support de visualisation et son procédé de commande
JP2019030628A (ja) * 2017-08-07 2019-02-28 株式会社リコー 情報提供装置、情報提供システム、情報提供方法、及びプログラム
CN109964248A (zh) * 2017-03-02 2019-07-02 欧姆龙株式会社 看护辅助系统及其控制方法、以及程序
JP2019121962A (ja) * 2018-01-09 2019-07-22 アイホン株式会社 監視カメラシステム
JP2022069026A (ja) * 2020-10-23 2022-05-11 日本精密測器株式会社 見守り支援装置および見守り支援システム
WO2022098305A1 (fr) * 2020-11-04 2022-05-12 Astoria Solutions Pte Ltd. Système autonome de détection de violation de sécurité à travers une clôture virtuelle
CN117351684A (zh) * 2023-12-04 2024-01-05 成都蜀诚通信技术有限公司 智能安全帽的预警方法及智能安全帽

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200155040A1 (en) * 2018-11-16 2020-05-21 Hill-Rom Services, Inc. Systems and methods for determining subject positioning and vital signs


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014149584A (ja) * 2013-01-31 2014-08-21 Ramrock Co Ltd 通知システム
JP2014182409A (ja) * 2013-03-15 2014-09-29 Nk Works Co Ltd 監視装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TSUYOSHI TASAKI ET AL.: "Robot tono Kyori ni Ojita Parameter Settei ni yoru Koreisha no Camera-shiki Risho Kenshutsu", THE HUMAN INTERFACE SYMPOSIUM 2011 RONBUNSHU, 13 September 2011 (2011-09-13), pages 1021 - 1026 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109964248A (zh) * 2017-03-02 2019-07-02 欧姆龙株式会社 看护辅助系统及其控制方法、以及程序
WO2019009377A1 (fr) * 2017-07-06 2019-01-10 オムロン株式会社 Système de support de visualisation et son procédé de commande
JP2019016120A (ja) * 2017-07-06 2019-01-31 オムロン株式会社 見守り支援システム及びその制御方法
JP2019030628A (ja) * 2017-08-07 2019-02-28 株式会社リコー 情報提供装置、情報提供システム、情報提供方法、及びプログラム
JP2019121962A (ja) * 2018-01-09 2019-07-22 アイホン株式会社 監視カメラシステム
JP2022069026A (ja) * 2020-10-23 2022-05-11 日本精密測器株式会社 見守り支援装置および見守り支援システム
WO2022098305A1 (fr) * 2020-11-04 2022-05-12 Astoria Solutions Pte Ltd. Système autonome de détection de violation de sécurité à travers une clôture virtuelle
CN117351684A (zh) * 2023-12-04 2024-01-05 成都蜀诚通信技术有限公司 智能安全帽的预警方法及智能安全帽
CN117351684B (zh) * 2023-12-04 2024-02-13 成都蜀诚通信技术有限公司 智能安全帽的预警方法及智能安全帽

Also Published As

Publication number Publication date
JP6115692B1 (ja) 2017-04-19
JP2017168105A (ja) 2017-09-21
JPWO2016199495A1 (ja) 2017-06-22
JP6720909B2 (ja) 2020-07-08

Similar Documents

Publication Publication Date Title
JP6115692B1 (ja) 行動検知装置、該方法および該プログラム、ならびに、被監視者監視装置
JP6137425B2 (ja) 画像処理システム、画像処理装置、画像処理方法、および画像処理プログラム
JP6852733B2 (ja) 生体監視装置及び生体監視方法
JP6292283B2 (ja) 行動検知装置および行動検知方法ならびに被監視者監視装置
JP6142975B1 (ja) 被監視者監視装置および該方法ならびに被監視者監視システム
EP3486868A1 (fr) Dispositif de détermination de comportement et procédé de détermination de comportement
JP6696606B2 (ja) 介護支援システム、介護支援方法及びプログラム
WO2017026309A1 (fr) Dispositif capteur et système de prise en charge des soins
JP6115689B1 (ja) 転倒検知装置および転倒検知方法ならびに被監視者監視装置
JP2019197263A (ja) システム、およびシステムの制御方法
JP7264065B2 (ja) 被監視者監視支援システムおよび被監視者監視支援方法
JP7137155B2 (ja) 被監視者監視支援システム、被監視者監視支援方法およびプログラム
JP6481537B2 (ja) 被監視者監視装置および被監視者監視方法
JP6804510B2 (ja) 検知システムおよび検知システムの表示方法
JP6292363B2 (ja) 端末装置および端末装置の表示方法ならびに被監視者監視システム
JPWO2019031012A1 (ja) 行動検知装置および該方法ならびに被監視者監視支援システム
JP6737355B2 (ja) 頭部検出装置および頭部検出方法ならびに被監視者監視装置
JP7259540B2 (ja) 判定装置、判定装置の制御プログラム、および判定方法
JP6172416B1 (ja) ナースコールシステム
JP2022113309A (ja) 情報処理装置、見守りシステム、制御プログラム、および制御方法
JPWO2020031531A1 (ja) 移動行動異常判定装置および該方法ならびに被監視者監視支援システム
JPWO2019235068A1 (ja) 被監視者監視支援装置、被監視者監視支援方法、被監視者監視支援システムおよび被監視者監視支援サーバ装置

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2016575258

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16807202

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16807202

Country of ref document: EP

Kind code of ref document: A1