WO2016152182A1 - Abnormal state detection device, abnormal state detection method, and abnormal state detection program - Google Patents


Info

Publication number
WO2016152182A1
Authority
WO
WIPO (PCT)
Prior art keywords
pedestrian
abnormal state
real space
captured image
depth
Prior art date
Application number
PCT/JP2016/050281
Other languages
French (fr)
Japanese (ja)
Inventor
Yasukazu Tanaka
Toru Yasukawa
Original Assignee
Noritsu Precision Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Noritsu Precision Co., Ltd.
Priority to JP2017507517A priority Critical patent/JP6737262B2/en
Publication of WO2016152182A1 publication Critical patent/WO2016152182A1/en


Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb

Definitions

  • the present invention relates to an abnormal state detection device, an abnormal state detection method, and an abnormal state detection program.
  • in Patent Document 1, a system has been proposed in which a pedestrian is photographed with a stereo camera and the obtained image data is analyzed three-dimensionally to detect the pedestrian's posture and motion, and the pedestrian's life functions are measured based on the detected posture and motion. According to such a system, the state of the pedestrian can be observed without requiring the pedestrian to wear any equipment.
  • the present invention has been made in consideration of such points, and an object thereof is to provide a system capable of appropriately watching a pedestrian.
  • the present invention adopts the following configuration in order to solve the above-described problems.
  • the abnormal state detection device includes: an image acquisition unit that acquires a captured image obtained by photographing a pedestrian performing a walking motion, the captured image including depth data indicating the depth of each pixel in the captured image; an extraction unit that extracts a person region in which the pedestrian appears in the acquired captured image; a behavior measurement unit that measures the behavior in real space of a local part to be observed of the pedestrian's body shown in the captured image by referring to the depth of each pixel included in the extracted person region and continuously specifying the position of the local part in real space; a state determination unit that determines whether or not the pedestrian is in an abnormal state based on the measured behavior of the local part; and a notification unit that, when the result of the determination indicates that the pedestrian is in an abnormal state, performs an abnormality detection notification for notifying that the pedestrian is in the abnormal state.
  • the captured image acquired in order to detect the abnormal state of the pedestrian includes depth data indicating the depth of each pixel.
  • the depth of each pixel indicates the depth from the photographing apparatus to the subject. More specifically, the depth of the subject is acquired with respect to the surface of the subject. That is, if the depth data is used, the position of the subject surface in the real space can be specified. Therefore, if this depth data is used, the state of the pedestrian in the real space (three-dimensional space) can be analyzed.
  • the behavior in real space of a local part to be observed of the pedestrian's body, rather than of the pedestrian's entire body, is measured, and whether or not the pedestrian is in an abnormal state is determined based on the behavior of that local part in real space.
  • the behavior measuring unit may measure a behavior in the real space above the pedestrian as the local part.
  • the state determination unit detects, based on the measured behavior of the upper part of the pedestrian, whether or not the upper part of the pedestrian has descended by a predetermined distance or more within a predetermined time, and when such a descent is detected, may determine that the pedestrian has fallen and is in an abnormal state.
  • when a pedestrian falls, the upper part of the pedestrian's body is assumed to move rapidly downward. Therefore, in this configuration, the pedestrian's fall is monitored by detecting whether or not the upper part of the pedestrian has descended by a predetermined distance or more within a predetermined time. Thereby, according to this configuration, when a pedestrian falls, it can be detected that the pedestrian has entered an abnormal state.
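As a concrete illustration of this fall check, the following sketch flags a descent of the upper part by more than a threshold within a sliding time window. The threshold and window values are illustrative assumptions, not values taken from the specification, and the (timestamp, height) samples stand in for the continuously measured real-space position of the upper part.

```python
from collections import deque

def detect_fall(height_samples, drop_threshold_m=0.5, window_s=1.0):
    """Return True if the upper-part height drops by drop_threshold_m or
    more within any window_s-second span. height_samples is a list of
    (timestamp_s, height_m) tuples ordered by time; both parameter
    defaults are illustrative assumptions, not values from the patent."""
    window = deque()
    for t, h in height_samples:
        window.append((t, h))
        # Discard samples that fall outside the sliding time window.
        while window and t - window[0][0] > window_s:
            window.popleft()
        # A fall: the maximum height seen inside the window minus the
        # current height reaches the drop threshold.
        if max(h0 for _, h0 in window) - h >= drop_threshold_m:
            return True
    return False
```

A gradual descent spread over many seconds never satisfies the window condition, which is what distinguishes a fall from, say, deliberately sitting down slowly.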
  • the upper part of the pedestrian indicates the upper end of the pedestrian in real space, and may be a single point at the upper end of the pedestrian or a region of arbitrary area provided at the upper end of the pedestrian.
  • the upper part of the pedestrian can be set as appropriate.
  • the upper end of a pedestrian is the highest part, in real space, of the pedestrian's body shown in the photographed image.
  • the behavior measuring unit may measure a behavior in the real space above the pedestrian as the local part.
  • the state determination unit detects, based on the measured behavior of the upper part of the pedestrian, whether or not the upper part of the pedestrian has moved to a position lower than a predetermined first height in real space, and when this is detected, may determine that the pedestrian is crouching and is in an abnormal state.
  • the value of the predetermined first height for detecting the crouching state may be set as appropriate according to the embodiment.
  • the state determination unit may detect, based on the measured behavior of the upper part of the pedestrian, whether or not the upper part of the pedestrian has moved in real space to a position lower than a predetermined second height that is lower than the first height, and when this is detected, may determine that the pedestrian is lying down and is in an abnormal state.
  • when the pedestrian is lying down, the entire body of the pedestrian is assumed to be at a lower height than in the crouching state described above. Therefore, in this configuration, whether or not the pedestrian is lying down is monitored by detecting whether the upper part of the pedestrian has moved in real space to a position lower than a predetermined second height that is lower than the first height. Thereby, according to this configuration, when a pedestrian lies down, it can be detected that the pedestrian has entered an abnormal state.
  • the value of the predetermined second height for detecting the lying state may be set as appropriate according to the embodiment.
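The two height thresholds can be combined into a single posture classifier, sketched below. The numeric heights are illustrative assumptions; the specification only requires that the second height be lower than the first.

```python
def classify_posture(upper_height_m, first_height_m=1.0, second_height_m=0.4):
    """Classify the pedestrian's state from the real-space height of the
    upper part of the body. Threshold values are illustrative; the second
    (lying) height must be lower than the first (crouching) height."""
    if upper_height_m < second_height_m:
        return "lying"      # upper part below the second height
    if upper_height_m < first_height_m:
        return "crouching"  # upper part below the first height
    return "normal"         # standing or walking
```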
  • the notification unit may perform the abnormality detection notification when the abnormal state of the pedestrian continues for a predetermined time or more.
  • since the abnormality detection notification is issued only when the abnormal state of the pedestrian continues for a certain time or longer, it is possible to prevent an erroneous notification when the condition for the abnormal state is satisfied only momentarily. Therefore, according to this configuration, false alarms can be avoided and the detection of the pedestrian's abnormal state can be reported appropriately.
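This debouncing of the notification can be sketched as a small stateful helper; the hold time is an illustrative assumption.

```python
class AbnormalStateNotifier:
    """Fire the abnormality detection notification only after the
    abnormal state has persisted for hold_s seconds, suppressing
    momentary false positives. hold_s is an illustrative value."""

    def __init__(self, hold_s=3.0):
        self.hold_s = hold_s
        self.abnormal_since = None  # start time of the current abnormal run

    def update(self, t, is_abnormal):
        """Feed one observation at time t (seconds); return True when the
        notification should be issued."""
        if not is_abnormal:
            self.abnormal_since = None  # any normal observation resets the run
            return False
        if self.abnormal_since is None:
            self.abnormal_since = t
        return t - self.abnormal_since >= self.hold_s
```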
  • as another aspect of the present invention, each of the above configurations may be realized as an information processing system, an information processing method, or a program.
  • it may be a storage medium that can be read by a computer, a device, a machine or the like in which such a program is recorded.
  • the computer-readable recording medium is a medium that stores information such as programs by electrical, magnetic, optical, mechanical, or chemical action.
  • the information processing system may be realized by one or a plurality of information processing devices.
  • the abnormal state detection method is a method in which a computer acquires a captured image obtained by photographing a pedestrian performing a walking motion, the captured image including depth data indicating the depth of each pixel in the captured image.
  • the abnormal state detection program causes a computer to execute: a step of acquiring a captured image obtained by photographing a pedestrian performing a walking motion, the captured image including depth data indicating the depth of each pixel in the captured image; a step of extracting a person region in which the pedestrian appears in the acquired captured image; and a step of referring to the depth of each pixel included in the extracted person region.
  • FIG. 1 schematically illustrates a scene where the present invention is applied.
  • FIG. 2 illustrates a hardware configuration of the abnormal state detection device according to the embodiment.
  • FIG. 3 illustrates the relationship between the depth acquired by the camera according to the embodiment and the subject.
  • FIG. 4 illustrates the functional configuration of the abnormal state detection device according to the embodiment.
  • FIG. 5 illustrates a processing procedure relating to pedestrian watching by the abnormal state detection device according to the embodiment.
  • FIG. 6 illustrates a captured image acquired by the camera according to the embodiment.
  • FIG. 7 illustrates the coordinate relationship in the captured image according to the embodiment.
  • FIG. 8 illustrates the positional relationship between an arbitrary point (pixel) of the captured image and the camera in the real space according to the embodiment.
  • FIG. 9 schematically illustrates a state where a pedestrian has fallen.
  • FIG. 10 schematically illustrates a state in which a pedestrian is cramped.
  • FIG. 11 schematically illustrates a state where a pedestrian is lying.
  • this embodiment will be described with reference to the drawings.
  • this embodiment described below is only an illustration of the present invention in all respects. It goes without saying that various improvements and modifications can be made without departing from the scope of the present invention. That is, in implementing the present invention, a specific configuration according to the embodiment may be adopted as appropriate.
  • although data appearing in the present embodiment is described in natural language, more specifically, it is specified by a pseudo-language, commands, parameters, machine language, or the like that can be recognized by a computer.
  • FIG. 1 shows an example of a scene in which the abnormal state detection device 1 according to the present embodiment is used.
  • the abnormal state detection device 1 according to the present embodiment is an information processing apparatus that photographs a pedestrian with the camera 2 and analyzes the captured image 3 obtained thereby, thereby monitoring the state of the pedestrian in the captured image 3 and watching over the pedestrian. Therefore, the abnormal state detection apparatus 1 according to the present embodiment can be widely used in scenes where a target person is watched over.
  • the abnormal state detection device 1 acquires a captured image 3 obtained by capturing a pedestrian performing a walking motion from the camera 2.
  • the target person (pedestrian) is walking in the shooting range of the camera 2, and the camera 2 is installed for shooting such a target person.
  • the target person does not always have to perform a walking motion, and may remain in a specific place.
  • the camera 2 is configured to be able to acquire the depth corresponding to each pixel in the captured image 3.
  • the camera 2 includes a depth sensor (a depth sensor 21 described later) that measures the depth of the subject so that the depth of each pixel can be acquired.
  • the abnormal state detection apparatus 1 according to the present embodiment is connected to such a camera 2 and acquires a photographed image 3 obtained by photographing a pedestrian whose state is to be monitored.
  • the acquired captured image 3 includes depth data indicating the depth obtained for each pixel, as illustrated in FIG.
  • the captured image 3 only needs to include data indicating the depth of the subject within the imaging range, and the data format can be appropriately selected according to the embodiment.
  • the captured image 3 may be data (for example, a depth map) in which the depth of the subject within the imaging range is two-dimensionally distributed.
  • the captured image 3 may include an RGB image together with the depth data.
  • the captured image 3 may be configured with a moving image or one or a plurality of still images as long as the state of the pedestrian can be analyzed.
  • the abnormal state detection device 1 extracts a person area in which the pedestrian appears in the acquired captured image 3.
  • as described above, the captured image 3 includes depth data indicating the depth of each pixel. Therefore, the abnormal state detection apparatus 1 can specify the position of the subject in the captured image 3 in the real space by using this depth data. More specifically, the depth of the subject is acquired with respect to the surface of the subject. That is, the abnormal state detection device 1 can specify the position of the subject surface in the real space by referring to the depth of each pixel indicated by the depth data.
  • the abnormal state detection device 1 refers to the depth of each pixel included in the extracted person region and measures the behavior in real space of the local part to be observed of the pedestrian's body shown in the captured image 3 by continuously specifying the position of that local part in real space.
  • the local region to be observed can be set as appropriate according to the embodiment.
  • the local site may be a specific site on the body such as the head, shoulder, chest, or leg.
  • the local part may be a part where the position on the body can be changed depending on the state of the pedestrian in the captured image, such as the upper part of the pedestrian, instead of such a specific part on the body.
  • it is desirable that the local part is set to a part where the state of the pedestrian is easily reflected.
  • the local part to be observed is a part indicating the position of the upper end of the pedestrian, such as the upper part of the pedestrian or the head.
  • the upper part 31 of the pedestrian is set as a local site to be observed.
  • the abnormal state detection device 1 determines whether or not the pedestrian is in an abnormal state based on the measured behavior of the local part. Furthermore, when it is determined as a result that the pedestrian is in an abnormal state, the abnormal state detection device 1 performs an abnormality detection notification for notifying that the pedestrian is in the abnormal state. That is, when the pedestrian falls into an abnormal state, the abnormal state detection device 1 issues an alarm notifying of the abnormal state. Thereby, the user of the abnormal state detection device 1 according to the present embodiment can know the abnormal state of a pedestrian present in the shooting range of the camera 2 and can watch over the pedestrian.
  • the state of the pedestrian is analyzed based on the captured image 3 including the depth data indicating the depth of each pixel.
  • the position of the subject surface in real space can be specified by using the depth data. Therefore, if this depth data is used, the state of the pedestrian in the real space (three-dimensional space) can be analyzed regardless of the viewing direction (viewpoint) of the camera 2 with respect to the pedestrian.
  • the abnormal state detection device 1 uses this depth data to measure the behavior in real space of a local part of the pedestrian's body (for example, the upper part 31 of the pedestrian), rather than of the pedestrian's entire body.
  • the abnormal state detection apparatus 1 then determines whether the pedestrian is in an abnormal state based on the behavior of the local part in real space.
  • the location of the abnormal state detection device 1 can be determined as appropriate according to the embodiment as long as the captured image 3 can be acquired from the camera 2.
  • the abnormal state detection device 1 may be disposed so as to be close to the camera 2 as illustrated in FIG.
  • the abnormal state detection apparatus 1 may be connected to the camera 2 via a network, or may be arranged at a place completely different from the camera 2.
  • FIG. 2 illustrates a hardware configuration of the abnormal state detection device 1 according to the present embodiment.
  • the abnormal state detection apparatus 1 is a computer in which a control unit 11 including a CPU, a RAM (Random Access Memory), a ROM (Read Only Memory), and the like; a storage unit 12 that stores the program 5 executed by the control unit 11 and other data; a touch panel display 13 for displaying and inputting images; a speaker 14 for outputting sound; an external interface 15 for connecting to external devices; a communication interface 16 for communicating via a network; and a drive 17 for reading a program stored in a storage medium 6 are electrically connected.
  • in FIG. 2, the communication interface and the external interface are described as "communication I/F" and "external I/F", respectively.
  • the components can be omitted, replaced, and added as appropriate according to the embodiment.
  • the control unit 11 may include a plurality of processors.
  • the touch panel display 13 may be replaced with an input device and a display device that are separately connected independently.
  • the speaker 14 may be omitted.
  • the speaker 14 may be connected to the abnormal state detection device 1 as an external device instead of as an internal device of the abnormal state detection device 1.
  • the abnormal state detection device 1 may incorporate the camera 2.
  • the abnormal state detection device 1 may include a plurality of external interfaces 15 and may be connected to a plurality of external devices.
  • the camera 2 is connected to the abnormal state detection device 1 via the external interface 15 and photographs a target pedestrian whose state is to be monitored.
  • the installation location of the camera 2 may be appropriately selected according to the embodiment.
  • the camera 2 may be arranged at any position from which the pedestrian to be watched can be photographed.
  • the camera 2 includes a depth sensor 21 for measuring the depth of the subject in order to capture the captured image 3 including depth data.
  • the type and measurement method of the depth sensor 21 may be appropriately selected according to the embodiment.
  • the depth sensor 21 may be a sensor of the TOF (Time of Flight) type or the like.
  • the configuration of the camera 2 is not limited to such an example as long as the depth can be acquired, and can be appropriately selected according to the embodiment.
  • the camera 2 may be a stereo camera so that the depth of the subject within the shooting range can be specified. Since the stereo camera shoots the subject within the shooting range from a plurality of different directions, the depth of the subject can be recorded. Further, the camera 2 may be replaced with the depth sensor 21 as long as the depth of the subject within the shooting range can be specified.
  • the depth sensor 21 may be an infrared depth sensor that measures the depth based on infrared irradiation so that the depth can be acquired without being affected by the brightness of the shooting location.
  • relatively inexpensive imaging apparatuses including such an infrared depth sensor include Kinect from Microsoft, Xtion from ASUS, and CARMINE from PrimeSense.
  • FIG. 3 shows an example of a distance that can be handled as the depth according to the present embodiment.
  • the depth represents the depth of the subject.
  • the depth of the subject may be expressed, for example, as the straight-line distance A between the camera 2 and the subject, or as the perpendicular distance B from the horizontal axis of the camera 2 to the subject. That is, the depth according to the present embodiment may be either the distance A or the distance B.
  • the distance B is treated as the depth.
  • the distance A and the distance B can be converted into each other based on, for example, the Pythagorean theorem. Therefore, the following description using the distance B can be applied as it is to the distance A.
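The conversion between the two distances follows directly from the Pythagorean relation A² = B² + r², where r is the subject's lateral offset from the camera axis. A minimal sketch (the function names are ours, not the patent's):

```python
import math

def straight_to_perpendicular(distance_a, lateral_offset):
    """Distance A (straight line, camera to subject) to distance B
    (perpendicular distance), given the subject's offset r from the
    camera axis: B = sqrt(A^2 - r^2)."""
    return math.sqrt(distance_a ** 2 - lateral_offset ** 2)

def perpendicular_to_straight(distance_b, lateral_offset):
    """Distance B back to distance A: A = sqrt(B^2 + r^2)."""
    return math.sqrt(distance_b ** 2 + lateral_offset ** 2)
```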
  • the abnormal state detection apparatus 1 according to the present embodiment can analyze the state of the pedestrian by using such a depth.
  • the storage unit 12 stores the program 5.
  • This program 5 is a program for causing the abnormal state detection device 1 to execute each process related to detection of an abnormal state of a pedestrian described later, and corresponds to the “abnormal state detection program” of the present invention.
  • the program 5 may be recorded on the storage medium 6.
  • the storage medium 6 is a medium that stores information such as programs by electrical, magnetic, optical, mechanical, or chemical action so that computers and other devices and machines can read the recorded information such as programs.
  • the storage medium 6 corresponds to the “storage medium” of the present invention.
  • FIG. 2 illustrates a disk-type storage medium such as a CD (Compact Disc) or a DVD (Digital Versatile Disc) as an example of the storage medium 6.
  • the type of the storage medium 6 is not limited to the disk type and may be other than the disk type. Examples of the storage medium other than the disk type include a semiconductor memory such as a flash memory.
  • an abnormal state detection device 1 may be, for example, a device designed exclusively for the provided service, or a general-purpose device such as a PC (Personal Computer) or a tablet terminal. Furthermore, the abnormal state detection device 1 may be implemented by one or a plurality of computers.
  • FIG. 4 illustrates a functional configuration of the abnormal state detection device 1 according to the present embodiment.
  • the control unit 11 of the abnormal state detection device 1 expands the program 5 stored in the storage unit 12 into the RAM. Then, the control unit 11 interprets and executes the program 5 expanded in the RAM.
  • the abnormal state detection device 1 functions as a computer including the image acquisition unit 51, the extraction unit 52, the behavior measurement unit 53, the state determination unit 54, and the notification unit 55.
  • the image acquisition unit 51 acquires the captured image 3 captured by the camera 2.
  • the acquired captured image 3 includes depth data indicating the depth of each pixel.
  • by using the depth data, the position of the subject in the captured image 3 in the real space, more specifically, the position of the subject surface in the real space, can be specified.
  • the extraction unit 52 extracts a person area in which the pedestrian appears in the acquired photographed image 3.
  • the behavior measurement unit 53 refers to the depth of each pixel included in the extracted person region and continuously determines the position in the real space of the local part to be observed among the pedestrian's body shown in the captured image 3. By specifying, the behavior of the local part in the real space is measured.
  • the state determination unit 54 determines whether or not the pedestrian in the captured image 3 is in an abnormal state based on the measured behavior of the local part. Then, as a result of the determination, when it is determined that the pedestrian is in an abnormal state, the notification unit 55 performs an abnormality detection notification for notifying that the pedestrian is in an abnormal state.
  • FIG. 5 illustrates a processing procedure related to watching of a pedestrian by the abnormal state detection device 1.
  • the processing procedure relating to watching of pedestrians described below corresponds to the “abnormal state detection method” of the present invention.
  • the processing procedure regarding watching of a pedestrian described below is only an example, and each processing may be changed as much as possible. Further, in the processing procedure described below, steps can be omitted, replaced, and added as appropriate according to the embodiment.
  • Step S101: In step S101, the control unit 11 functions as the image acquisition unit 51 and acquires the captured image 3 captured by the camera 2. After acquiring the captured image 3, the control unit 11 advances the processing to the next step S102.
  • the camera 2 includes a depth sensor 21. Therefore, the captured image 3 acquired in step S101 includes depth data indicating the depth of each pixel measured by the depth sensor 21.
  • the control unit 11 acquires the captured image 3 illustrated in FIG. 6 as the captured image 3 including the depth data.
  • FIG. 6 shows an example of the captured image 3 including depth data.
  • the captured image 3 illustrated in FIG. 6 is an image in which the gray value of each pixel is determined according to the depth of each pixel.
  • a black pixel is closer to the camera 2.
  • a white pixel is farther from the camera 2.
  • the control unit 11 can specify the position of each pixel in the real space. That is, the control unit 11 can specify the position in the three-dimensional space (real space) of the subject captured in each pixel from the coordinates (two-dimensional information) and the depth of each pixel in the captured image 3.
  • a calculation example in which the control unit 11 specifies the position of each pixel in the real space will be described with reference to FIGS. 7 and 8.
  • FIG. 7 schematically illustrates the coordinate relationship in the captured image 3.
  • FIG. 8 schematically illustrates the positional relationship between an arbitrary pixel (point s) of the captured image 3 and the camera 2 in the real space. The horizontal direction of FIG. 7 corresponds to the direction perpendicular to the paper surface of FIG. 8. That is, the length of the captured image 3 shown in FIG. 8 corresponds to the length in the vertical direction (H pixels) illustrated in FIG. 7. Further, the length in the horizontal direction (W pixels) illustrated in FIG. 7 corresponds to the length of the captured image 3 in the direction perpendicular to the paper surface, which does not appear in FIG. 8.
  • the coordinates of an arbitrary pixel (point s) of the captured image 3 are (xs, ys), the horizontal angle of view of the camera 2 is Vx, and the vertical angle of view is Vy.
  • the number of pixels in the horizontal direction of the captured image 3 is W
  • the number of pixels in the vertical direction is H
  • the coordinates of the center point (pixel) of the captured image 3 are (0, 0).
  • the control unit 11 can acquire information indicating the angle of view (Vx, Vy) of the camera 2 from the camera 2, based on user input, or as a preset setting value. Further, the control unit 11 can acquire the coordinates (xs, ys) of the point s and the number of pixels (W × H) of the captured image 3 from the captured image 3. Furthermore, the control unit 11 can acquire the depth Ds of the point s by referring to the depth data included in the captured image 3.
  • the control unit 11 can specify the position of each pixel (point s) in the real space by using these pieces of information. For example, the control unit 11 can calculate the vector S = (Sx, Sy, Sz, 1) from the camera 2 to the point s in the camera coordinate system illustrated in FIG. 8. Thereby, the position of the point s in the two-dimensional coordinate system of the captured image 3 and the position of the point s in the camera coordinate system can be converted into each other.
  • the vector S is a vector of a three-dimensional coordinate system centered on the camera 2.
  • the camera 2 may be inclined with respect to the horizontal plane (ground). That is, the camera coordinate system may be tilted with respect to the world coordinate system of the three-dimensional space based on the horizontal plane (ground). Therefore, the control unit 11 may apply a projective transformation using the roll angle, pitch angle (α in FIG. 8), and yaw angle of the camera 2 to the vector S, thereby converting the vector S in the camera coordinate system into a vector in the world coordinate system and calculating the position of the point s in the world coordinate system.
  • Each of the camera coordinates and the world coordinates is a coordinate system representing a real space. In this way, the control unit 11 can specify the position of the subject in the captured image 3 in the real space by using the depth data.
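A common back-projection consistent with the quantities above (image-centered pixel coordinates, angles of view Vx and Vy, depth Ds) is sketched below. The patent does not spell out the exact expressions, so this is an assumed reconstruction, with roll and yaw taken as zero and only the pitch rotation applied for the world-coordinate conversion.

```python
import math

def pixel_to_camera(xs, ys, depth, W, H, Vx, Vy):
    """Back-project pixel (xs, ys), given relative to the image center as
    in FIG. 7, to camera coordinates (Sx, Sy, Sz) using the horizontal and
    vertical angles of view. An assumed pinhole-style reconstruction, not
    the patent's exact formula."""
    sx = depth * xs * math.tan(Vx / 2) / (W / 2)
    sy = depth * ys * math.tan(Vy / 2) / (H / 2)
    return (sx, sy, depth)

def camera_to_world(s, pitch):
    """Rotate a camera-coordinate vector about the horizontal axis by the
    camera pitch angle (alpha in FIG. 8); roll and yaw are assumed zero."""
    sx, sy, sz = s
    wy = sy * math.cos(pitch) - sz * math.sin(pitch)
    wz = sy * math.sin(pitch) + sz * math.cos(pitch)
    return (sx, wy, wz)
```

With the real-space height of each person-region pixel in hand, the highest point gives the upper end of the pedestrian used in the determinations above.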
  • the control unit 11 may acquire a moving image or a still image as the captured image 3.
  • the control unit 11 may acquire, as the captured image 3, a moving image or a single still image for one time point.
  • the control unit 11 may acquire a moving image or a plurality of still images for a predetermined time as the captured image 3.
  • the control unit 11 obtains, as the photographed image 3, a moving image for one time point or for a predetermined time, or one or a plurality of still images, and performs the processing of steps S102 to S105 described later on the obtained photographed image 3, thereby analyzing the state of the pedestrian in the captured image 3.
  • the control unit 11 may acquire the captured image 3 in synchronization with the video signal of the camera 2 in order to monitor the pedestrian, and may immediately execute the processing of steps S102 to S105 described later on the acquired captured image 3.
  • by executing such an operation continuously, the abnormal state detection apparatus 1 can perform real-time image processing and can watch over a pedestrian present in the shooting range of the camera 2 in real time.
  • Step S102: Returning to FIG. 5, in the next step S102, the control unit 11 functions as the extraction unit 52 and extracts, from the captured image 3 acquired in step S101, a person region in which a pedestrian is captured, as illustrated in FIG. 6. After extracting the person region from the captured image 3, the control unit 11 advances the processing to the next step S103.
  • the control unit 11 may extract a person region in the captured image 3 by performing image analysis such as pattern detection and graphic element detection based on the shape of the pedestrian.
  • the control unit 11 may extract the person region by detecting the three-dimensional shape of the pedestrian using the depth data.
  • alternatively, the control unit 11 may extract a person region by another method.
  • the control unit 11 may extract the moving area as a person area based on the background difference method.
  • the control unit 11 acquires a background image used for the background subtraction method.
  • This background image may be acquired by an arbitrary method, and is set as appropriate according to the embodiment.
  • the control unit 11 may acquire a photographed image before a pedestrian enters the photographing range of the camera 2, in other words, a photographed image without a pedestrian as a background image.
  • the control unit 11 calculates the difference between the captured image 3 acquired in step S101 and the background image, and extracts the foreground region of the captured image 3.
  • This foreground region is a region where a change has occurred from the background image, and is a region where a moving object (moving object) is captured.
  • the control unit 11 may recognize the foreground area as a person area.
  • the control unit 11 may extract a person area from the foreground area by pattern detection or the like.
  • the process for extracting the foreground region is merely a process for calculating the difference between the captured image 3 and the background image. Therefore, the control unit 11 (abnormal state detection device 1) can narrow the range in which the person region is detected without using advanced image processing, and the processing load in step S102 can be reduced.
  • the background subtraction method applicable to the present embodiment is not limited to the above example.
  • Other types of background subtraction methods include, for example, a method of separating the background and the foreground using three different images, and a method of separating the background and the foreground by applying a statistical model. The control unit 11 may extract a person region with any of these methods.
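The simple background-difference step described above — subtracting a pedestrian-free background image from the current frame to obtain the foreground — can be sketched as follows. This is a minimal illustrative version operating on depth images; the function name and the 5 cm threshold are assumptions, not values from the patent.

```python
import numpy as np

def extract_foreground(frame, background, threshold=0.05):
    """Background-difference method on depth images: pixels whose depth
    differs from the pedestrian-free background by more than `threshold`
    (meters, illustrative) are treated as foreground (moving-object)
    pixels. Returns a boolean mask the same shape as `frame`."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    return diff > threshold
```

The resulting mask would then be treated as the person region directly, or narrowed further by pattern detection as the passage suggests.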
  • Step S103 In the next step S103, the control unit 11 functions as the behavior measurement unit 53, refers to the depth of each pixel included in the person region extracted in step S102, and measures the behavior in real space of the local part of the pedestrian's body to be observed by continuously specifying the position of that part in real space in the captured image 3. Then, after measuring the behavior of the local part in real space, the control unit 11 advances the processing to the next step S104.
  • the local region to be observed can be set as appropriate according to the embodiment.
  • the local site may be a specific site on the body such as the head, shoulder, chest, or leg.
  • the local part may be a part where the position on the body can be changed depending on the state of the pedestrian in the captured image, such as the upper part of the pedestrian, instead of such a specific part on the body.
  • the local region to be observed may be selected according to the type of abnormal state of the pedestrian detected in step S104 described later.
  • the local part is set to a part where the state of the pedestrian is easily reflected. For example, as will be described later, when the pedestrian is in a state of falling, crouching, lying down, etc., the entire body of the pedestrian is present at a position near the ground. Therefore, in order to detect these states, it is preferable that the local part to be observed is a part indicating the position of the upper end of the pedestrian, such as the upper part of the pedestrian or the head.
  • the upper part 31 of the pedestrian is employed as the local part to be observed.
  • the upper part 31 of the pedestrian indicates the upper end of the pedestrian in real space, and may be a single point at the pedestrian's upper end or a region of arbitrary area located at the pedestrian's upper end.
  • the upper part 31 of the pedestrian can be set as appropriate.
  • the upper end of a pedestrian is the highest part in real space among the pedestrian's bodies shown in the photographed image.
  • the control unit 11 can specify the position of the upper part 31 of the pedestrian in the real space by using the depth data.
  • the control unit 11 uses the depth of each pixel included in the person area, and specifies the position of each pixel included in the person area in the real space by the above method.
  • the control unit 11 takes the pixel that exists at the highest position in real space among the pixels included in the person region as the upper end of the pedestrian, and sets this pixel, or a predetermined region around this pixel, as the upper part 31 of the pedestrian.
  • the control unit 11 continuously specifies the position of the upper part 31 of the pedestrian in real space. For example, by plotting the position of the upper part 31 of the pedestrian on real space coordinates, it is possible to measure the behavior of the upper part 31 of the pedestrian in real space. For example, when a moving image for one time point or one still image is acquired as the captured image 3 in step S101, the control unit 11 plots, on real space coordinates, the position of the upper part 31 of the pedestrian appearing in that moving image or still image. Thereby, the behavior of the upper part 31 of the pedestrian at one time point is measured.
  • when a moving image or a plurality of still images for a predetermined time is acquired as the captured image 3, the control unit 11 continuously plots, on real space coordinates, the position of the upper part 31 of the pedestrian appearing in that moving image or those still images. Thereby, the behavior of the upper part 31 of the pedestrian within the predetermined time is measured.
  • control unit 11 can specify the position of the part in the real space by using the depth of each pixel included in the person region. And the control part 11 can measure the behavior in the real space of the said part by plotting the position of the said part on real space coordinate.
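Picking out the upper part 31 from the person region can be sketched as follows: once each person-region pixel has been converted to a real-space position, the pedestrian's upper end is simply the highest point. The function name and the (x, height, z) coordinate convention are illustrative assumptions.

```python
import numpy as np

def upper_part_position(points):
    """Given an (N, 3) array of real-space positions (x, height, z) of
    the pixels in the person region, return the position of the highest
    point, used here as the pedestrian's upper part 31."""
    points = np.asarray(points, dtype=float)
    return points[np.argmax(points[:, 1])]
```

Calling this on every frame and collecting the results over time yields the plotted trajectory of the upper part 31 that the passage describes.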
  • the control unit 11 specifies the region of the local part to be observed by performing pattern detection or the like within the person region.
  • for example, the control unit 11 may specify the region in which the local part appears within the person region by performing pattern detection on the three-dimensional shape of the local part, using the depth of each pixel included in the person region.
  • alternatively, the control unit 11 may specify the region in which the local part appears by some other method.
  • the control unit 11 can specify the position in real space of the local part by using the depth of each pixel included in the specified region.
  • Step S104 In the next step S104, the control unit 11 functions as the state determination unit 54, and determines whether or not the pedestrian is in an abnormal state based on the behavior of the local part measured in step S103. If, as a result of the determination, it is determined that the pedestrian is in an abnormal state, the control unit 11 advances the processing to the next step S105. On the other hand, when it determines that the pedestrian is not in an abnormal state, the control unit 11 ends the processing according to this operation example.
  • the image analysis method for determining whether or not the pedestrian is in an abnormal state may be appropriately selected according to the embodiment.
  • the upper part 31 of the pedestrian is adopted as the local part to be observed. Therefore, the control unit 11 may detect an abnormal state of the pedestrian when the behavior of the upper part 31 of the pedestrian measured in step S103 can be evaluated as a movement satisfying a predetermined condition.
  • the control unit 11 detects the pedestrian's falling state, crouching state, and lying state.
  • an example of a method for detecting various abnormal states will be described.
  • FIG. 9 schematically illustrates a scene where a pedestrian is in a fall state. As illustrated in FIG. 9, when the pedestrian falls, the position of the pedestrian's body changes suddenly. Specifically, it is assumed that the pedestrian suddenly descends vertically downward toward the ground.
  • the control unit 11 detects, based on the behavior of the pedestrian's upper part 31 measured in step S103, whether or not the upper part 31 has fallen by a predetermined distance or more within a certain time in real space. When the control unit 11 detects that the upper part 31 of the pedestrian has descended by the predetermined distance or more within the certain time, it determines that the pedestrian has fallen and is in an abnormal state.
  • each threshold value of time and distance for detecting a fall state may be set as appropriate according to the embodiment.
  • the method of detecting a fall state may not be restricted to such an example, and the control part 11 may detect the fall state of a pedestrian by another method.
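The fall check described above — did the upper part 31 descend by a predetermined distance or more within a certain time — can be sketched over the plotted (time, height) samples. The function name and the 0.5 s / 0.5 m thresholds are illustrative assumptions; the patent leaves both thresholds to the embodiment.

```python
def detect_fall(track, max_interval=0.5, min_drop=0.5):
    """track: list of (timestamp_sec, height_m) samples of the upper
    part 31. Returns True if the height dropped by `min_drop` meters or
    more within `max_interval` seconds (thresholds are illustrative)."""
    for i, (t0, h0) in enumerate(track):
        for t1, h1 in track[i + 1:]:
            if t1 - t0 > max_interval:
                break  # later samples are outside the time window
            if h0 - h1 >= min_drop:
                return True
    return False
```

A slow descent (e.g. deliberately sitting down) covers the same distance over a longer interval and is not flagged, which matches the passage's assumption that a fall is a sudden vertical drop.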
  • FIG. 10 schematically illustrates a scene in which the pedestrian is in a crouching state. As illustrated in FIG. 10, when the pedestrian is crouching, it is assumed that the entire body of the pedestrian exists below a predetermined height in real space.
  • the control unit 11 detects whether or not the upper part 31 of the pedestrian has moved to a position lower than the predetermined first height H1 in real space. When the control unit 11 detects that the upper part 31 of the pedestrian has moved to a position lower than the predetermined first height H1, it determines that the pedestrian is crouching and is in an abnormal state.
  • the control unit 11 compares the height h in real space of the upper part 31 of the pedestrian with the predetermined first height H1. When, as a result of the comparison, it is determined that the height h of the upper part 31 is lower than the predetermined first height H1, the control unit 11 detects that the upper part 31 of the pedestrian has moved to a position lower than the predetermined first height H1.
  • the value of the predetermined first height H1 may be appropriately set according to the embodiment.
  • the method of detecting the crouching state is not limited to such an example, and the control unit 11 may detect the crouching state of the pedestrian by other methods.
  • the height h of the upper part 31 of the pedestrian and the predetermined first height H1 are expressed with the ground as a reference.
  • the position (height) of the ground in the real space can be given by an arbitrary method.
  • the control unit 11 can calculate the position (height) of the ground in real space by the above method, using the depth of each pixel included in the region of the captured image 3 in which the ground appears. Therefore, the control unit 11 can express the height h of the upper part 31 of the pedestrian as the distance from the ground.
  • the expression form of the height h of the upper part 31 of the pedestrian and the predetermined first height H1 is not limited to such an example, and may be appropriately selected according to the embodiment.
  • the height h of the upper part 31 of the pedestrian and the predetermined first height H1 may be expressed with the camera 2 as a reference.
  • FIG. 11 schematically illustrates a scene where a pedestrian is lying down. As illustrated in FIG. 11, when the pedestrian is lying down, it is assumed that the entire body of the pedestrian is present at a lower height in real space than in the case of the crouched state.
  • the control unit 11 detects, based on the behavior of the upper part 31 of the pedestrian measured in step S103, whether or not the upper part 31 has moved in real space to a position lower than a predetermined second height H2, which is lower than the predetermined first height H1. When the control unit 11 detects that the upper part 31 of the pedestrian has moved to a position lower than the predetermined second height H2, it determines that the pedestrian is lying down and is in an abnormal state.
  • the control unit 11 compares the height h in real space of the upper part 31 of the pedestrian with the predetermined second height H2. When, as a result of the comparison, it is determined that the height h of the upper part 31 is lower than the predetermined second height H2, the control unit 11 detects that the upper part 31 has moved to a position lower than the predetermined second height H2, and thereby detects that the pedestrian is lying down.
  • the value of the predetermined second height H2 may be appropriately set according to the embodiment so as to be lower than the predetermined first height H1.
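The two-threshold scheme above — crouching below the first height H1, lying below the lower second height H2 — amounts to a simple classification of the upper part's height. The following sketch is illustrative: the function name and the 0.9 m / 0.3 m threshold values are assumptions, since the patent only requires H2 < H1 and leaves both values to the embodiment.

```python
def classify_height_state(h, H1=0.9, H2=0.3):
    """Classify the pedestrian's state from the height h (meters, above
    the ground) of the upper part 31, using illustrative thresholds
    H1 > H2: below H2 -> lying, below H1 -> crouching, else normal."""
    if h < H2:
        return "lying"
    if h < H1:
        return "crouching"
    return "normal"
```

Checking the lower threshold first means a pedestrian lying down is not also reported as crouching, mirroring the passage's use of H2 as the stricter condition.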
  • the method for detecting the lying state is not limited to such an example, and the control unit 11 may detect the lying state of the pedestrian by other methods. Further, in the example of FIG. 11, the predetermined second height H2 is expressed with the ground as a reference, as in FIG. 10.
  • the expression format of the predetermined second height H2 is not limited to such an example, and may be appropriately selected according to the embodiment.
  • the predetermined second height H2 may be expressed with the camera 2 as a reference.
  • when the control unit 11 determines in step S104 that the pedestrian's state corresponds to any of the falling, crouching, and lying states, the process proceeds to the next step S105.
  • when the control unit 11 determines in step S104 that the pedestrian's state does not correspond to any of the falling, crouching, and lying states, the processing according to this operation example ends.
  • the state to be detected among the various states of the pedestrian may be appropriately selected according to the embodiment. That is, at least one of the falling state, the crouching state, and the lying state may be excluded from the detection target.
  • the control unit 11 may also detect pedestrian states other than the above, based on conditions other than those described above.
  • the type of the state of the pedestrian to be detected in step S104 may be appropriately selected according to the embodiment, may be selected by the user, or may be set in advance.
  • At least one of the falling state, the crouching state, and the lying state may be set as not being an abnormal state.
  • for example, when the crouching state is set as not being an abnormal state, the control unit 11 does not determine that the pedestrian is in an abnormal state upon detecting that the pedestrian is crouching, but determines that the pedestrian is in a normal state.
  • in this case, the control unit 11 omits the process of step S105 and ends the processing according to this operation example.
  • when the control unit 11 determines that the pedestrian's state does not correspond to any of the falling, crouching, and lying states, it may recognize that the pedestrian is in a normal state. The control unit 11 may then notify that the pedestrian is in a normal state.
  • Step S105 In the next step S105, the control unit 11 functions as the notification unit 55 and performs an abnormality detection notification to notify that the pedestrian is in an abnormal state. Thereby, the processing according to this operation example is completed. Note that the means by which the control unit 11 performs the abnormality detection notification can be appropriately selected according to the embodiment.
  • the abnormal state detection device 1 when used in a facility such as a hospital, the abnormal state detection device 1 can be connected to equipment such as a nurse call system via the external interface 15.
  • the control unit 11 may perform the abnormality detection notification in cooperation with equipment such as the nurse call system. That is, the control unit 11 may control the nurse call system via the external interface 15, and may perform a call by the nurse call system as the abnormality detection notification. Accordingly, it is possible to appropriately notify a nurse or the like who watches over the pedestrian that the pedestrian is in an abnormal state.
  • control unit 11 may perform abnormality detection notification by outputting a predetermined sound from the speaker 14 connected to the abnormal state detection device 1. Further, for example, the control unit 11 may display a screen on the touch panel display 13 for notifying that an abnormal state of the pedestrian has been detected as an abnormality detection notification.
  • control unit 11 may perform such an abnormality detection notification using an e-mail, a short message service, a push notification, or the like.
  • the e-mail address, telephone number, and the like of the user terminal that is the notification destination may be registered in the storage unit 12 in advance.
  • the control part 11 may perform abnormality detection notification using this e-mail address, telephone number, etc. which are registered beforehand.
  • the abnormal state detection device 1 analyzes the state of a pedestrian based on the captured image 3 including depth data indicating the depth of each pixel. As described above, since the depth of each pixel is acquired with respect to the subject surface, the position of the subject surface in real space can be specified by using the depth data. Therefore, the abnormal state detection device 1 according to the present embodiment uses this depth data to measure the behavior in real space of the upper part 31 of the pedestrian, the part of the pedestrian's body to be observed, and detects whether or not the pedestrian is in an abnormal state based on the measured behavior of the upper part 31.
  • the abnormal state of the pedestrian is detected based on the behavior of the local part of the pedestrian rather than the entire body of the pedestrian. Therefore, since the body region to be observed is limited to the upper part 31 of the pedestrian, the processing load for analyzing the state of the pedestrian can be reduced, and the state of the pedestrian can be analyzed at high speed. Moreover, since the observation target is narrowed down, the analysis content becomes simple.
  • the abnormal state detection device 1 detects the pedestrian's falling state, crouching state, and lying state based on the fluctuation and height of the pedestrian's upper portion 31. The measurement of the fluctuation and the height of the upper part 31 of the pedestrian hardly causes an error. Therefore, the state of the pedestrian can be analyzed with high accuracy. Therefore, according to this embodiment, a pedestrian can be watched appropriately.
  • in the above embodiment, the upper part 31 of the pedestrian is employed as the local part to be observed; however, the local part is not limited to this example.
  • Timing of notification processing As an example, in the above-described embodiment, when the control unit 11 determines in step S104 that the pedestrian is in an abnormal state, it immediately performs an abnormality detection notification. However, the timing at which the abnormality detection notification is performed may not be limited to such an example.
  • the control unit 11 may function as the notification unit 55 and perform abnormality detection notification when the abnormal state of the pedestrian continues for a certain time or more.
  • in step S104, the control unit 11 determines whether or not the abnormal state of the pedestrian has continued for a certain time or more. When it determines that the abnormal state of the pedestrian has continued for the certain time or more, the control unit 11 performs the abnormality detection notification in step S105. On the other hand, when it determines that the abnormal state has not continued for the certain time, the control unit 11 omits the process of step S105 and ends the processing according to the above operation example.
  • the threshold value for determining whether the abnormal state of the pedestrian has continued for a certain time or more may be set as appropriate according to the embodiment.
  • according to this modified example, it is possible to prevent a false report of the abnormality detection notification in cases where the state of the pedestrian satisfies the abnormal-state condition only for a moment. For example, when a pedestrian tries to pick up an object that has fallen on the ground, the pedestrian can be in a crouching state for a moment. In such a case, if an abnormality detection notification is made by the speaker 14 or the like, a state different from the actual state of the pedestrian is reported to the people around the speaker 14, and erroneous information is transmitted to them. By preventing such false alarms, the modified example makes it possible to notify the pedestrian's abnormality detection appropriately.
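The "notify only if the abnormal state persists" behavior of this modified example can be sketched as a small debouncing helper. The class name and the 3-second hold time are illustrative assumptions; the patent only says "a certain time or more", with the threshold set per embodiment.

```python
class AbnormalStateDebouncer:
    """Suppress momentary detections: report an abnormal state only
    after it has persisted for `hold_sec` seconds (illustrative)."""

    def __init__(self, hold_sec=3.0):
        self.hold_sec = hold_sec
        self.since = None  # time the current abnormal run started

    def update(self, is_abnormal, now):
        """Feed one per-frame determination; returns True only when the
        abnormal state has lasted `hold_sec` seconds or more."""
        if not is_abnormal:
            self.since = None  # run broken; reset
            return False
        if self.since is None:
            self.since = now
        return now - self.since >= self.hold_sec
```

A momentary crouch (one or two frames) never reaches the hold time, so no notification fires, while a genuine fall that persists does.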
  • the abnormal state detection device 1 detects a pedestrian's falling state, crouching state, and lying state as the pedestrian's abnormal states.
  • the types of abnormal states of pedestrians to be detected are not limited to these, and may be appropriately selected according to the embodiment.
  • the abnormal state detection device 1 may detect walking with a high risk of falling as an abnormal state. Specifically, the movement of the joints of the pedestrian's lower limbs is reduced when the range of motion of the joints decreases due to aging, a decline in physical strength, and the like. For example, the angle of the toes with respect to the walking surface (ground) decreases, and the distance from the bottom of the walking foot to the walking surface (ground) decreases.
  • in this case, the control unit 11 measures the behavior of the leg in step S103. For example, the control unit 11 specifies the range in which the leg appears by performing pattern matching or the like in the person region extracted in step S102. Next, the control unit 11 calculates the angle of the toe relative to the walking surface (ground) by analyzing the shape of the toe, using the depth of each pixel in the portion of that range in which the toe appears. A known image analysis method may be used to calculate the angle of the toe with respect to the walking surface (ground). Moreover, the control unit 11 calculates the distance between the lowest point (bottom) of the leg and the ground, using the depth of each pixel in the range in which the leg appears.
  • the position (height) of the ground in real space may be given by an arbitrary method, as described above. Then, the control unit 11 continuously plots the angle of the toe with respect to the walking surface (ground) and the distance between the lowest point (bottom) of the leg and the ground. Thereby, the control unit 11 can measure the behavior of the leg in real space.
  • in step S104, the control unit 11 refers to the continuously plotted data, and determines whether or not the maximum value of the angle of the toe with respect to the walking surface (ground) is equal to or less than a predetermined value. The control unit 11 also determines whether or not the maximum value of the distance between the lowest point (bottom) of the leg and the ground is equal to or less than a predetermined value. When the control unit 11 determines that both maximum values are equal to or less than their respective predetermined values, it determines that the pedestrian is in an abnormal state and advances the process to the next step S105. Otherwise, the control unit 11 ends the processing according to this operation example.
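The step S104 check for risky gait — both the maximum toe angle and the maximum foot clearance staying at or below predetermined values — can be sketched as follows. The function name and the 20-degree / 2 cm thresholds are illustrative assumptions; the patent leaves both predetermined values to the embodiment.

```python
def risky_gait(toe_angles_deg, foot_clearances_m,
               max_angle=20.0, max_clearance=0.02):
    """Flag a walk with a high risk of falling when both the maximum
    toe angle relative to the ground (degrees) and the maximum foot
    clearance (meters) over the plotted samples stay at or below the
    predetermined values (thresholds are illustrative)."""
    return (max(toe_angles_deg) <= max_angle
            and max(foot_clearances_m) <= max_clearance)
```

Because both conditions must hold, a pedestrian who clears the ground normally on even one step is not flagged, matching the conjunctive condition in the passage.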
  • the abnormal state detection device 1 may detect a walk with a high risk of falling as an abnormal state.
  • the predetermined values for the angle and the distance which are threshold values for determining whether or not the state is abnormal, may be appropriately set according to the embodiment.
  • the target leg may be a right leg, a left leg, or both legs.
  • the abnormal state detection device 1 may hold the angle and the distance in a normal state measured in advance. The abnormal state detection device 1 may then detect an abnormal state based on the amount of decrease of the measured angle and distance relative to those in the normal state.

Abstract

Provided is a system that can appropriately keep watch over pedestrians. The abnormal state detection device according to one aspect of the present invention is provided with: an image acquisition unit that acquires photographed images that capture a pedestrian walking and that include depth data that indicates the depth of each of the pixels in the photographed images; an extraction unit that extracts a person region that is the region in which the pedestrian appears in the photographed images; a movement measurement unit that references the depth of the pixels included in the extracted person region and, by continuously specifying the position in real space of a local site that is an observation target on the body of the pedestrian that appears in the photographed images, measures the movement in real space of the local site; a status determination unit that determines whether the pedestrian is in an abnormal state on the basis of the measured movement of the local site; and a notification unit that, when the determination results have determined that the pedestrian is in an abnormal state, performs abnormality detection notification that is for making it known that the pedestrian is in an abnormal state.

Description

Abnormal state detection device, abnormal state detection method, and abnormal state detection program
The present invention relates to an abnormal state detection device, an abnormal state detection method, and an abnormal state detection program.
Recently, systems have been developed that photograph a pedestrian performing a walking motion and monitor the pedestrian's state by analyzing the obtained image data, thereby watching over the pedestrian. For example, Patent Document 1 proposes a system in which a pedestrian is photographed with a stereo camera, the obtained image data is analyzed three-dimensionally to detect the pedestrian's posture and motion, and the pedestrian's life functions are measured based on the detected posture and motion. According to such a system, the state of the pedestrian can be watched without having the pedestrian wear any equipment.
JP 2009-285077 A
However, in the conventional system exemplified in Patent Document 1, the pedestrian's posture or motion is detected by analyzing the transition of the body parts of the pedestrian's whole body, in other words, by analyzing the structure of the pedestrian's whole body. Therefore, the processing load for analyzing the pedestrian's state is large, and the state cannot be analyzed at high speed. In addition, since the postures and motions of the observation target are diverse, the analysis becomes complicated, and the pedestrian's state cannot be analyzed accurately. Therefore, the conventional system cannot watch over a pedestrian appropriately.
In one aspect, the present invention has been made in consideration of such points, and an object thereof is to provide a system capable of appropriately watching over a pedestrian.
The present invention adopts the following configurations in order to solve the above-described problems.
That is, an abnormal state detection device according to one aspect of the present invention includes: an image acquisition unit that acquires a captured image of a pedestrian performing a walking motion, the captured image including depth data indicating the depth of each pixel in the captured image; an extraction unit that extracts, within the acquired captured image, a person region in which the pedestrian appears; a behavior measurement unit that refers to the depth of each pixel included in the extracted person region and measures the behavior in real space of a local part of the pedestrian's body to be observed, by continuously specifying the position of that part in real space; a state determination unit that determines whether or not the pedestrian is in an abnormal state based on the measured behavior of the local part; and a notification unit that, when the determination results in a determination that the pedestrian is in an abnormal state, performs an abnormality detection notification for notifying that the pedestrian is in an abnormal state.
According to the above configuration, the captured image acquired to detect the abnormal state of the pedestrian includes depth data indicating the depth of each pixel. The depth of each pixel indicates the depth from the imaging device to the subject. More specifically, the depth of the subject is acquired with respect to the surface of the subject. That is, by using the depth data, the position of the subject surface in real space can be specified. Therefore, by using this depth data, the state of the pedestrian in real space (three-dimensional space) can be analyzed.
Here, in the above configuration, the behavior in real space of a local part of the pedestrian's body to be observed, rather than the pedestrian's entire body, is measured, and whether or not the pedestrian is in an abnormal state is determined based on the behavior of that local part in real space. Since the body region to be observed is limited, the processing load for analyzing the pedestrian's state is small, and the state can be analyzed at high speed. Moreover, since the observation target is narrowed down, the analysis is simple and the pedestrian's state can be analyzed with high accuracy. Therefore, according to the above configuration, a system capable of appropriately watching over a pedestrian can be provided.
As another form of the abnormal state detection device according to the above aspect, the behavior measurement unit may measure the behavior in real space of the upper part of the pedestrian as the local part. The state determination unit may then detect, based on the measured behavior of the upper part of the pedestrian, whether or not the upper part of the pedestrian has descended by a predetermined distance or more within a certain time in real space, and when it detects that the upper part has descended by the predetermined distance or more within the certain time, determine that the pedestrian has fallen and is in an abnormal state.
When a pedestrian falls, the position of the pedestrian's body is assumed to move rapidly downward. Therefore, in this configuration, the pedestrian's fall is monitored by detecting whether or not the upper part of the pedestrian has descended by a predetermined distance or more within a certain time. Thereby, according to this configuration, it can be detected that the pedestrian has fallen into an abnormal state when the pedestrian falls.
 Note that the upper part of the pedestrian indicates the pedestrian's upper end in real space; it may be a single point at the upper end or a region of arbitrary area located at the upper end, and can be set as appropriate. The upper end of the pedestrian is the highest portion, in real space, of the pedestrian's body shown in the captured image.
 As another form of the abnormal state detection device according to the above aspect, the behavior measuring unit may measure, as the local part, the real-space behavior of the upper part of the pedestrian. Based on the measured behavior of the pedestrian's upper part, the state determination unit may then detect whether the upper part has moved, in real space, to a position lower than a predetermined first height and, when such a movement is detected, determine that the pedestrian is crouching and is in an abnormal state.
 When a pedestrian is crouching, the pedestrian's entire body is assumed to lie at or below a certain height. This configuration therefore monitors whether the pedestrian is crouching by detecting whether the upper part of the pedestrian has moved, in real space, to a position lower than the predetermined first height, and can thus detect that the pedestrian has entered an abnormal state when the pedestrian crouches down. The value of the predetermined first height used to detect the crouching state may be set as appropriate according to the embodiment.
 As another form of the abnormal state detection device according to the above aspect, based on the measured behavior of the pedestrian's upper part, the state determination unit may detect whether the upper part has moved, in real space, to a position lower than a predetermined second height that is lower than the first height and, when such a movement is detected, determine that the pedestrian is lying down and is in an abnormal state.
 When a pedestrian is lying down, the pedestrian's entire body is assumed to lie at an even lower height than in the crouching state described above. This configuration therefore monitors whether the pedestrian is lying down by detecting whether the upper part of the pedestrian has moved, in real space, to a position lower than the predetermined second height, which is lower than the first height, and can thus detect that the pedestrian has entered an abnormal state when the pedestrian lies down. The value of the predetermined second height used to detect the lying state may be set as appropriate according to the embodiment.
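Taken together, the first and second heights define a simple three-way posture classification from the current real-space height of the pedestrian's upper part. A minimal sketch, assuming illustrative threshold values (with the second height lower than the first):

```python
def classify_posture(upper_height_m, h1_m=0.9, h2_m=0.45):
    """Classify posture from the real-space height of the pedestrian's
    upper part. h1_m (first height) and h2_m (second height, h2 < h1)
    are illustrative values, not specified in the embodiment."""
    if upper_height_m < h2_m:
        return "lying"       # below the second height: lying down
    if upper_height_m < h1_m:
        return "crouching"   # below the first height: crouching
    return "normal"          # upright walking or standing
```

An upper-part height of 1.6 m would classify as "normal", 0.7 m as "crouching", and 0.2 m as "lying" under these assumed thresholds.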
 As another form of the abnormal state detection device according to the above aspect, the notification unit may issue the abnormality detection notification only when the pedestrian's abnormal state has continued for a predetermined time or longer. By issuing the notification only when the abnormal state persists for a certain time, this configuration prevents false notifications in situations where the pedestrian's state satisfies the abnormal-state condition only momentarily, and can thus appropriately report that an abnormal state of the pedestrian has been detected.
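The persistence condition for the notification can be sketched as a small stateful filter that suppresses momentary abnormal readings. The class name, interface, and hold duration below are illustrative assumptions:

```python
class AbnormalStateNotifier:
    """Issue a notification only after the abnormal state has persisted
    for `hold_s` seconds; a momentary abnormal reading is ignored."""

    def __init__(self, hold_s=3.0):
        self.hold_s = hold_s
        self.abnormal_since = None  # timestamp when abnormal state began

    def update(self, is_abnormal, now_s):
        """Feed one per-frame judgement; return True when the
        abnormality detection notification should be issued."""
        if not is_abnormal:
            self.abnormal_since = None  # reset on any normal reading
            return False
        if self.abnormal_since is None:
            self.abnormal_since = now_s
        return now_s - self.abnormal_since >= self.hold_s
```

A single abnormal frame thus never triggers a notification; only an unbroken run of abnormal judgements lasting `hold_s` or longer does.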
 Note that, as other forms of the abnormal state detection devices described above, each of the above configurations may be realized as an information processing system, an information processing method, a program, or a storage medium storing such a program that is readable by a computer or another device or machine. Here, a computer-readable recording medium is a medium that stores information such as a program by an electrical, magnetic, optical, mechanical, or chemical action. The information processing system may be realized by one or more information processing devices.
 For example, an abnormal state detection method according to one aspect of the present invention is an information processing method in which a computer executes the steps of: acquiring a captured image of a pedestrian performing a walking motion, the captured image including depth data indicating the depth of each pixel in the captured image; extracting, from the acquired captured image, a person region in which the pedestrian appears; measuring the real-space behavior of a local part of the pedestrian's body selected as the observation target by referring to the depth of each pixel included in the extracted person region and continuously specifying the real-space position of that local part; determining, based on the measured behavior of the local part, whether the pedestrian is in an abnormal state; and, when the determination finds that the pedestrian is in an abnormal state, issuing an abnormality detection notification for reporting that the pedestrian is in an abnormal state.
 Also, for example, an abnormal state detection program according to one aspect of the present invention is a program for causing a computer to execute the steps of: acquiring a captured image of a pedestrian performing a walking motion, the captured image including depth data indicating the depth of each pixel in the captured image; extracting, from the acquired captured image, a person region in which the pedestrian appears; measuring the real-space behavior of a local part of the pedestrian's body selected as the observation target by referring to the depth of each pixel included in the extracted person region and continuously specifying the real-space position of that local part; determining, based on the measured behavior of the local part, whether the pedestrian is in an abnormal state; and, when the determination finds that the pedestrian is in an abnormal state, issuing an abnormality detection notification for reporting that the pedestrian is in an abnormal state.
 According to the present invention, it is possible to provide a system capable of appropriately watching over a pedestrian.
FIG. 1 schematically illustrates a scene to which the present invention is applied.
FIG. 2 illustrates the hardware configuration of the abnormal state detection device according to the embodiment.
FIG. 3 illustrates the relationship between the depth acquired by the camera according to the embodiment and the subject.
FIG. 4 illustrates the functional configuration of the abnormal state detection device according to the embodiment.
FIG. 5 illustrates the processing procedure for watching over a pedestrian performed by the abnormal state detection device according to the embodiment.
FIG. 6 illustrates a captured image acquired by the camera according to the embodiment.
FIG. 7 illustrates the coordinate relationships within a captured image according to the embodiment.
FIG. 8 illustrates the positional relationship in real space between an arbitrary point (pixel) of a captured image and the camera according to the embodiment.
FIG. 9 schematically illustrates a state in which a pedestrian has fallen.
FIG. 10 schematically illustrates a state in which a pedestrian is crouching.
FIG. 11 schematically illustrates a state in which a pedestrian is lying down.
 Hereinafter, an embodiment according to one aspect of the present invention (hereinafter also referred to as "the present embodiment") will be described with reference to the drawings. However, the embodiment described below is merely an illustration of the present invention in every respect, and it goes without saying that various improvements and modifications can be made without departing from the scope of the present invention. That is, in implementing the present invention, a specific configuration suited to the embodiment may be adopted as appropriate. Although the data appearing in the present embodiment are described in natural language, more specifically they are specified in a pseudo-language, commands, parameters, machine language, or the like that a computer can recognize.
 §1 Application scene
 First, a scene to which the present invention is applied will be described with reference to FIG. 1. FIG. 1 shows an example of a scene in which the abnormal state detection device 1 according to the present embodiment is used. The abnormal state detection device 1 according to the present embodiment is an information processing apparatus that photographs a pedestrian with a camera 2 and analyzes the resulting captured image 3 to monitor the state of the pedestrian shown in the captured image 3, thereby watching over the pedestrian. The abnormal state detection device 1 according to the present embodiment can therefore be widely used in scenes where a target person is to be watched over.
 Specifically, the abnormal state detection device 1 according to the present embodiment first acquires, from the camera 2, a captured image 3 of a pedestrian performing a walking motion. In the scene illustrated in FIG. 1, the target person (pedestrian) is walking within the shooting range of the camera 2, and the camera 2 is installed to photograph such a target person. However, the target person does not always have to be walking and may remain in a particular place.
 The camera 2 is configured to be able to acquire the depth corresponding to each pixel in the captured image 3. In the present embodiment, the camera 2 includes a depth sensor (the depth sensor 21 described later) that measures the depth of the subject so that the depth of each pixel can be acquired. The abnormal state detection device 1 according to the present embodiment is connected to such a camera 2 and acquires a captured image 3 of the pedestrian whose state is to be monitored.
 As illustrated in FIG. 6 described later, the acquired captured image 3 includes depth data indicating the depth obtained for each pixel. The captured image 3 only needs to include data indicating the depth of the subjects within the shooting range, and its data format can be selected as appropriate according to the embodiment. For example, the captured image 3 may be data in which the depths of the subjects within the shooting range are distributed two-dimensionally (for example, a depth map). The captured image 3 may also include an RGB image together with the depth data. Furthermore, the captured image 3 may consist of a moving image or of one or more still images, as long as the pedestrian's state can be analyzed from it.
 Next, the abnormal state detection device 1 extracts, from the acquired captured image 3, a person region in which the pedestrian appears. As described above, the captured image 3 includes depth data indicating the depth of each pixel. By using this depth data, the abnormal state detection device 1 can specify the real-space position of a subject shown in the captured image 3. More precisely, the depth of a subject is acquired with respect to the subject's surface; by referring to the depth of each pixel indicated by the depth data, the abnormal state detection device 1 can specify the position of the subject's surface in real space.
 The abnormal state detection device 1 then refers to the depth of each pixel included in the extracted person region and continuously specifies the real-space position of the local part of the pedestrian's body selected as the observation target, thereby measuring the real-space behavior of that local part. The local part to be observed can be set as appropriate according to the embodiment. It may be a specific part of the body, such as the head, shoulders, chest, or legs, or it may be a part, such as the upper part of the pedestrian, whose position on the body can change depending on the pedestrian's state in the captured image. The local part is desirably set to a part in which the pedestrian's state is readily reflected. For example, when the pedestrian has fallen, is crouching, or is lying down, the pedestrian's entire body is near the ground; to detect these states, the local part to be observed is preferably one that indicates the position of the pedestrian's upper end, such as the upper part of the pedestrian or the head. In the scene illustrated in FIG. 1, the upper part 31 of the pedestrian is set as the local part to be observed.
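Measuring the real-space height of the pedestrian's upper part from the person region can be sketched as follows. The pixel-to-world mapping is passed in as a function because it depends on the actual camera calibration; all names here are illustrative assumptions.

```python
def upper_part_height(person_pixels, pixel_to_world):
    """Height of the pedestrian's upper end from a depth image.

    person_pixels: iterable of (u, v, depth_m) triples for the pixels
    of the extracted person region. pixel_to_world: a function mapping
    (u, v, depth_m) -> (x, y, z) real-space coordinates, with z being
    height above the floor. Returns the height of the highest body
    surface point, i.e. the pedestrian's upper end.
    """
    return max(pixel_to_world(u, v, d)[2] for u, v, d in person_pixels)
```

Calling this once per frame yields the time series of upper-part heights on which the fall, crouching, and lying checks operate.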
 The abnormal state detection device 1 then determines, based on the measured behavior of the local part, whether the pedestrian is in an abnormal state. Furthermore, when the determination finds that the pedestrian is in an abnormal state, the abnormal state detection device 1 issues an abnormality detection notification for reporting that the pedestrian is in an abnormal state. That is, when the pedestrian falls into an abnormal state, the abnormal state detection device 1 raises an alarm reporting that state. This allows the user of the abnormal state detection device 1 according to the present embodiment to learn of the abnormal state of a pedestrian within the shooting range of the camera 2 and to watch over that pedestrian.
 Thus, according to the present embodiment, the pedestrian's state is analyzed based on the captured image 3, which includes depth data indicating the depth of each pixel. As described above, the depth of each pixel is acquired with respect to the subject's surface, so the depth data allow the position of the subject's surface in real space to be specified. By using the depth data, therefore, the pedestrian's state in real (three-dimensional) space can be analyzed regardless of the viewing direction (viewpoint) of the camera 2 relative to the pedestrian.
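How a pixel and its depth yield a real-space surface position can be illustrated with a standard pinhole-camera back-projection. This is a generic sketch, not the embodiment's own coordinate computation; the intrinsic parameters below are placeholder values, and a real system would use the calibration of the actual depth camera.

```python
def backproject(u, v, depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Convert a pixel (u, v) with measured depth (the distance along
    the optical axis) into camera-centered real-space coordinates
    (x, y, z). fx, fy are focal lengths in pixels and (cx, cy) is the
    principal point; all four are placeholder values."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    z = depth_m
    return (x, y, z)
```

Applied to every pixel of the person region, this recovers the body surface in three dimensions, which is why the analysis does not depend on the camera's viewpoint.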
 Here, the abnormal state detection device 1 according to the present embodiment uses the depth data to measure the real-space behavior not of the pedestrian's entire body but of the local part of the pedestrian's body selected as the observation target (for example, the upper part 31 of the pedestrian), and determines whether the pedestrian is in an abnormal state based on the real-space behavior of that local part. Because the body region to be observed is limited, the processing load for analyzing the pedestrian's state is small and the state can be analyzed at high speed. Moreover, since the observation target is narrowed down, the analysis is simpler and the pedestrian's state can be analyzed with high accuracy. According to the present embodiment, therefore, a pedestrian can be watched over appropriately.
 The abnormal state detection device 1 may be placed anywhere appropriate to the embodiment, as long as it can acquire the captured image 3 from the camera 2. For example, the abnormal state detection device 1 may be placed close to the camera 2, as illustrated in FIG. 1, or it may be connected to the camera 2 via a network and placed in an entirely different location from the camera 2.
 §2 Configuration example
 <Hardware configuration>
 Next, the hardware configuration of the abnormal state detection device 1 will be described with reference to FIG. 2. FIG. 2 illustrates the hardware configuration of the abnormal state detection device 1 according to the present embodiment. As illustrated in FIG. 2, the abnormal state detection device 1 is a computer in which the following are electrically connected: a control unit 11 including a CPU, a RAM (Random Access Memory), a ROM (Read Only Memory), and the like; a storage unit 12 that stores the program 5 executed by the control unit 11 and other data; a touch panel display 13 for displaying and inputting images; a speaker 14 for outputting sound; an external interface 15 for connecting to external devices; a communication interface 16 for communicating via a network; and a drive 17 for reading a program stored in a storage medium 6. In FIG. 2, the communication interface and the external interface are denoted "communication I/F" and "external I/F", respectively.
 Regarding the specific hardware configuration of the abnormal state detection device 1, components may be omitted, replaced, or added as appropriate according to the embodiment. For example, the control unit 11 may include a plurality of processors. The touch panel display 13 may be replaced with an input device and a display device that are connected separately and independently. The speaker 14 may be omitted, or may be connected to the abnormal state detection device 1 as an external device rather than built into it. The abnormal state detection device 1 may incorporate the camera 2. Furthermore, the abnormal state detection device 1 may include a plurality of external interfaces 15 and be connected to a plurality of external devices.
 The camera 2 according to the present embodiment is connected to the abnormal state detection device 1 via the external interface 15 and photographs the pedestrian whose state is to be monitored. The installation location of the camera 2 may be selected as appropriate according to the embodiment; for example, the camera 2 may be positioned so that it can photograph the place where the person being watched over walks.
 To capture the captured image 3 including depth data, the camera 2 includes a depth sensor 21 for measuring the depth of the subject. The type and measurement method of the depth sensor 21 may be selected as appropriate according to the embodiment; for example, the depth sensor 21 may be a TOF (Time of Flight) sensor or the like.
 However, the configuration of the camera 2 is not limited to such an example as long as the depth can be acquired, and can be selected as appropriate according to the embodiment. For example, the camera 2 may be a stereo camera so that the depth of a subject within the shooting range can be specified; because a stereo camera photographs the subject from a plurality of different directions, it can record the subject's depth. The camera 2 may also be replaced by the depth sensor 21 alone, as long as the depth of subjects within the shooting range can be specified.
 Note that the place where the target person is photographed may be dark. Therefore, so that the depth can be acquired without being affected by the brightness of the shooting location, the depth sensor 21 may be an infrared depth sensor that measures depth based on infrared irradiation. Relatively inexpensive imaging devices including such an infrared depth sensor include Microsoft's Kinect, ASUS's Xtion, and PrimeSense's CARMINE.
 Here, the depth measured by the depth sensor 21 according to the present embodiment will be described in detail with reference to FIG. 3. FIG. 3 shows examples of distances that can be treated as the depth according to the present embodiment. The depth expresses how deep the subject is. As illustrated in FIG. 3, the depth of the subject may be expressed, for example, as the straight-line distance A between the camera 2 and the object, or as the distance B along a perpendicular dropped from the camera 2's horizontal axis to the subject.
 That is, the depth according to the present embodiment may be either the distance A or the distance B. In the present embodiment, the distance B is treated as the depth. However, the distance A and the distance B can be converted into each other based, for example, on the Pythagorean theorem, so the following description using the distance B can be applied to the distance A as it is. By using such a depth, the abnormal state detection device 1 according to the present embodiment can analyze the pedestrian's state.
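The mutual conversion between the distance A and the distance B mentioned above follows from the Pythagorean theorem, given the subject's perpendicular offset from the camera's horizontal axis. A minimal sketch, assuming that offset is known:

```python
import math

def b_from_a(distance_a, offset):
    """Distance B (along the camera's horizontal axis) from the
    straight-line distance A and the subject's perpendicular offset
    from that axis."""
    return math.sqrt(distance_a ** 2 - offset ** 2)

def a_from_b(distance_b, offset):
    """Straight-line distance A from distance B and the same offset."""
    return math.sqrt(distance_b ** 2 + offset ** 2)
```

For a subject 4.0 m along the axis and offset 3.0 m from it, the straight-line distance is 5.0 m, and the conversion back recovers 4.0 m.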
 In the present embodiment, the storage unit 12 stores the program 5. The program 5 is a program for causing the abnormal state detection device 1 to execute the processes, described later, relating to the detection of a pedestrian's abnormal state, and corresponds to the "abnormal state detection program" of the present invention. The program 5 may be recorded on the storage medium 6.
 The storage medium 6 is a medium that stores information such as a program by an electrical, magnetic, optical, mechanical, or chemical action so that a computer or another device or machine can read the recorded information. The storage medium 6 corresponds to the "storage medium" of the present invention. FIG. 2 illustrates, as an example of the storage medium 6, a disk-type storage medium such as a CD (Compact Disk) or a DVD (Digital Versatile Disk). However, the type of the storage medium 6 is not limited to the disk type; non-disk storage media include, for example, semiconductor memories such as flash memory.
 Such an abnormal state detection device 1 may be, for example, a device designed exclusively for the service provided, or a general-purpose device such as a PC (Personal Computer) or a tablet terminal. Furthermore, the abnormal state detection device 1 may be implemented by one or more computers.
 <Functional configuration example>
 Next, the functional configuration of the abnormal state detection device 1 will be described with reference to FIG. 4. FIG. 4 illustrates the functional configuration of the abnormal state detection device 1 according to the present embodiment. In the present embodiment, the control unit 11 of the abnormal state detection device 1 loads the program 5 stored in the storage unit 12 into the RAM, and the CPU interprets and executes the loaded program 5 to control each component. The abnormal state detection device 1 thereby functions as a computer including an image acquisition unit 51, an extraction unit 52, a behavior measurement unit 53, a state determination unit 54, and a notification unit 55.
 The image acquisition unit 51 acquires the captured image 3 captured by the camera 2. The acquired captured image 3 includes depth data indicating the depth of each pixel. As described above, this depth data makes it possible to specify the real-space position of a subject shown in the captured image 3 and, more precisely, the position of the subject's surface in real space.
 The extraction unit 52 extracts, from the acquired captured image 3, a person region in which the pedestrian appears. The behavior measurement unit 53 refers to the depth of each pixel included in the extracted person region and continuously specifies the real-space position of the local part of the pedestrian's body selected as the observation target, thereby measuring the real-space behavior of that local part.
Furthermore, the state determination unit 54 determines, based on the measured behavior of the local part, whether or not the pedestrian in the captured image 3 is in an abnormal state. When the pedestrian is determined to be in an abnormal state, the notification unit 55 issues an abnormality detection notification to report that the pedestrian is in an abnormal state.
In the present embodiment, an example is described in which all of these functions are realized by a general-purpose CPU. However, some or all of these functions may be realized by one or more dedicated processors. Regarding the functional configuration of the abnormal state detection device 1, functions may be omitted, replaced, or added as appropriate according to the embodiment. Each function is described in detail in the operation example below.
§3 Operation example
Next, an operation example of the abnormal state detection device 1 will be described with reference to FIG. 5. FIG. 5 illustrates a processing procedure by which the abnormal state detection device 1 watches over a pedestrian. The processing procedure described below corresponds to the "abnormal state detection method" of the present invention. However, this processing procedure is merely an example, and each process may be changed to the extent possible. In the processing procedure described below, steps may be omitted, replaced, or added as appropriate according to the embodiment.
(Step S101)
In step S101, the control unit 11 functions as the image acquisition unit 51 and acquires the captured image 3 captured by the camera 2. After acquiring the captured image 3, the control unit 11 advances the processing to the next step S102.
In the present embodiment, the camera 2 includes the depth sensor 21. Therefore, the captured image 3 acquired in step S101 includes depth data indicating the depth of each pixel measured by the depth sensor 21. The control unit 11 acquires, as the captured image 3 including this depth data, the captured image 3 illustrated in FIG. 6, for example.
FIG. 6 shows an example of the captured image 3 including depth data. The captured image 3 illustrated in FIG. 6 is an image in which the gray value of each pixel is determined according to the depth of that pixel: the darker a pixel, the closer it is to the camera 2, and the lighter a pixel, the farther it is from the camera 2. Based on this depth data, the control unit 11 can specify the position in the real space of the subject captured in each pixel. That is, from the coordinates (two-dimensional information) and the depth of each pixel in the captured image 3, the control unit 11 can specify the position in the three-dimensional space (real space) of the subject captured in that pixel. A calculation example in which the control unit 11 specifies the position of each pixel in the real space is shown below with reference to FIGS. 7 and 8.
FIG. 7 schematically illustrates the coordinate relationship in the captured image 3. FIG. 8 schematically illustrates the positional relationship in the real space between an arbitrary pixel (point s) of the captured image 3 and the camera 2. The left-right direction of FIG. 7 corresponds to the direction perpendicular to the paper surface of FIG. 8. That is, the length of the captured image 3 appearing in FIG. 8 corresponds to the length in the vertical direction (H pixels) illustrated in FIG. 7, and the length in the horizontal direction (W pixels) illustrated in FIG. 7 corresponds to the length of the captured image 3 in the direction perpendicular to the paper surface, which does not appear in FIG. 8.
Here, as illustrated in FIG. 7, let the coordinates of an arbitrary pixel (point s) of the captured image 3 be (xs, ys), the horizontal angle of view of the camera 2 be Vx, and the vertical angle of view be Vy. Further, let the number of pixels of the captured image 3 be W in the horizontal direction and H in the vertical direction, and let the coordinates of the center point (pixel) of the captured image 3 be (0, 0).
The control unit 11 can acquire information indicating the angle of view (Vx, Vy) of the camera 2 from the camera 2. The control unit 11 may also acquire this information based on a user input, or as a preset setting value. In addition, the control unit 11 can acquire the coordinates (xs, ys) of the point s and the number of pixels (W × H) of the captured image 3 from the captured image 3. Furthermore, the control unit 11 can acquire the depth Ds of the point s by referring to the depth data included in the captured image 3.
By using these pieces of information, the control unit 11 can specify the position of each pixel (point s) in the real space. For example, based on the relational expressions shown in Equations 1 to 3 below, the control unit 11 can calculate each value of the vector S (Sx, Sy, Sz, 1) from the camera 2 to the point s in the camera coordinate system illustrated in FIG. 8. The position of the point s in the two-dimensional coordinate system of the captured image 3 and its position in the camera coordinate system thereby become mutually convertible.
[Equation 1]
  Sx = (xs / (W/2)) × Ds × tan(Vx/2)
[Equation 2]
  Sy = (ys / (H/2)) × Ds × tan(Vy/2)
[Equation 3]
  Sz = Ds
However, the vector S is a vector in a three-dimensional coordinate system centered on the camera 2. As illustrated in FIG. 8, the camera 2 may be inclined with respect to the horizontal plane (the ground); that is, the camera coordinate system may be tilted from the world coordinate system of the three-dimensional space whose reference is the horizontal plane (the ground). Therefore, the control unit 11 may convert the vector S in the camera coordinate system into a vector in the world coordinate system by applying to the vector S a projective transformation using the roll angle, the pitch angle (α in FIG. 8), and the yaw angle of the camera 2, and may thereby calculate the position of the point s in the world coordinate system. The camera coordinate system and the world coordinate system are each a coordinate system representing the real space. In this way, by using the depth data, the control unit 11 can specify the position in the real space of the subject in the captured image 3.
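The coordinate conversion described above can be sketched in Python as follows. This is a minimal illustration, not the embodiment's implementation: the function names are ours, and the world-coordinate step applies only the pitch angle (α in FIG. 8), taking roll and yaw as zero for simplicity.

```python
import math

def pixel_to_camera(xs, ys, Ds, W, H, Vx, Vy):
    """Convert image coordinates (xs, ys), measured from the image
    center, and depth Ds into a camera-coordinate point (Sx, Sy, Sz),
    following Equations 1-3. W, H are the pixel dimensions of the
    captured image; Vx, Vy its horizontal/vertical angles of view
    in radians."""
    Sx = (xs / (W / 2)) * Ds * math.tan(Vx / 2)
    Sy = (ys / (H / 2)) * Ds * math.tan(Vy / 2)
    Sz = Ds
    return (Sx, Sy, Sz)

def camera_to_world(S, pitch):
    """Rotate a camera-coordinate point about the x-axis by the
    camera's pitch angle to approximate world coordinates (roll and
    yaw assumed zero)."""
    Sx, Sy, Sz = S
    c, s = math.cos(pitch), math.sin(pitch)
    return (Sx, Sy * c - Sz * s, Sy * s + Sz * c)
```

For instance, the image center (0, 0) maps to a point straight ahead of the camera at distance Ds, and a pixel at the right edge (xs = W/2) maps to Sx = Ds · tan(Vx/2), as the geometry of FIG. 7 requires.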
In the present embodiment, the control unit 11 may acquire a moving image or still images as the captured image 3. When detecting, in step S104 described later, an abnormal state that can be determined from the pedestrian's behavior at a single point in time, the control unit 11 may acquire a moving image for one point in time or a single still image as the captured image 3. When detecting an abnormal state that can be determined from the pedestrian's continuous behavior, the control unit 11 may acquire a moving image for a predetermined time or a plurality of still images as the captured image 3. Once the control unit 11 has acquired, as the captured image 3, a moving image for one point in time or for a predetermined time, or one or more still images, it executes the processing of steps S102 to S105 described later on the acquired captured image 3, thereby analyzing the state of the pedestrian in the captured image 3.
Further, in order to monitor the pedestrian, the control unit 11 may acquire the captured image 3 in synchronization with the video signal of the camera 2, and may immediately execute the processing of steps S102 to S105 described later on the acquired captured image 3. By performing this operation continuously without interruption, the abnormal state detection device 1 realizes real-time image processing and can watch over a pedestrian present in the shooting range of the camera 2 in real time.
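The per-frame flow of steps S101 to S105 can be sketched as a single function; the callable parameters below are illustrative stand-ins for the functional units of FIG. 4 (extraction, behavior measurement, state determination, notification), not names taken from the embodiment.

```python
def watch_step(frame, extract, measure, judge, notify):
    """One pass of the watching loop over a newly acquired frame:
    extract the person region (S102), measure the local part's
    behavior (S103), judge whether it indicates an abnormal state
    (S104), and notify if so (S105). Returns True when an
    abnormality was detected and reported."""
    region = extract(frame)      # step S102
    behavior = measure(region)   # step S103
    if judge(behavior):          # step S104
        notify(behavior)         # step S105
        return True
    return False
```

Running this function once per acquired frame, synchronized with the camera's video signal, corresponds to the real-time monitoring loop described above.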
(Step S102)
Returning to FIG. 5, in the next step S102, the control unit 11 functions as the extraction unit 52 and extracts, from the captured image 3 acquired in step S101, a person region in which a pedestrian appears, as illustrated in FIG. 6. After extracting the person region from the captured image 3, the control unit 11 advances the processing to the next step S103.
There are various known methods for extracting a person region, and the method may be selected as appropriate according to the embodiment. For example, the control unit 11 may extract the person region in the captured image 3 by performing image analysis such as pattern detection or graphic element detection based on the shape of a pedestrian. In this case, the control unit 11 may extract the person region by using the depth data to detect the three-dimensional shape of a pedestrian by pattern detection or the like, or may extract the person region by detecting the two-dimensional shape of a pedestrian without using the depth data.
Also, since a pedestrian moves in the real space, the person region moves within the captured image 3. Such a moving region can be extracted by the background subtraction method. The control unit 11 may therefore extract this moving region as the person region based on the background subtraction method.
More specifically, the control unit 11 first acquires a background image used for the background subtraction method. This background image may be acquired by an arbitrary method and is set as appropriate according to the embodiment. For example, the control unit 11 may acquire, as the background image, a captured image taken before a pedestrian enters the shooting range of the camera 2, in other words, a captured image in which no pedestrian appears. The control unit 11 then calculates the difference between the captured image 3 acquired in step S101 and the background image, and extracts the foreground region of the captured image 3. This foreground region is a region that has changed from the background image, that is, a region in which a moving object appears.
Therefore, when the extracted foreground region has an area equal to or larger than a threshold, the control unit 11 may recognize that foreground region as the person region. Alternatively, the control unit 11 may extract the person region from the foreground region by pattern detection or the like. The processing for extracting the foreground region is merely the calculation of the difference between the captured image 3 and the background image. Accordingly, with this processing, the control unit 11 (the abnormal state detection device 1) can narrow the range in which the person region is searched for without using advanced image processing, which reduces the processing load of step S102.
Note that there are various types of background subtraction methods, and the background subtraction method applicable to the present embodiment is not limited to the above example. Other types include, for example, a method of separating the background and the foreground using three different images, and a method of separating the background and the foreground by applying a statistical model. The control unit 11 may also extract the person region by these methods.
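The simple difference-and-threshold variant of background subtraction described above can be sketched as follows. This is an illustrative sketch under our own assumptions: images are represented as 2-D lists of depth values, and the function name and parameters are ours; a real implementation would additionally run pattern detection on the extracted region.

```python
def extract_person_region(frame, background, diff_threshold, min_area):
    """Background subtraction sketch for step S102: pixels whose
    value differs from the background image by more than
    diff_threshold form the foreground; the foreground is accepted
    as a person region only when it covers at least min_area pixels
    (the area threshold mentioned in the text)."""
    region = [
        (y, x)
        for y, row in enumerate(frame)
        for x, value in enumerate(row)
        if abs(value - background[y][x]) > diff_threshold
    ]
    return region if len(region) >= min_area else None
```

As the text notes, this is only a per-pixel difference, so it narrows the search range cheaply before any heavier pattern detection is applied.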
(Step S103)
In the next step S103, the control unit 11 functions as the behavior measurement unit 53, refers to the depth of each pixel included in the person region extracted in step S102, and continuously specifies the position in the real space of a local part to be observed of the pedestrian's body in the captured image 3, thereby measuring the behavior of that local part in the real space. After measuring the behavior in the real space of the local part to be observed, the control unit 11 advances the processing to the next step S104.
The local part to be observed can be set as appropriate according to the embodiment. The local part may be a specific part of the body, such as the head, shoulders, chest, or legs. Alternatively, instead of such a specific part of the body, the local part may be a part whose position on the body can change depending on the state of the pedestrian in the captured image, such as the upper part of the pedestrian. The local part to be observed may be selected according to the type of abnormal state of the pedestrian to be detected in step S104 described later.
The local part is desirably set to a part in which the state of the pedestrian is readily reflected. For example, as described later, when the pedestrian is in a fallen, crouching, or lying state, the pedestrian's entire body is located near the ground. Therefore, in order to detect these states, the local part to be observed is preferably a part indicating the position of the upper end of the pedestrian, such as the upper part of the pedestrian or the head.
In the present embodiment, as illustrated in FIGS. 1 and 6, the upper part 31 of the pedestrian is adopted as the local part to be observed. The upper part 31 indicates the upper end of the pedestrian in the real space, and may be a single point at the pedestrian's upper end or a region of arbitrary area provided at the pedestrian's upper end; it can be set as appropriate. The upper end of the pedestrian is the highest part, in the real space, of the pedestrian's body in the captured image.
By using the depth data, the control unit 11 can specify the position of the pedestrian's upper part 31 in the real space. That is, using the depth of each pixel included in the person region, the control unit 11 specifies the position in the real space of each pixel included in the person region by the method described above. The control unit 11 then takes, as the pedestrian's upper end, the pixel located at the highest position in the real space among the pixels included in the person region, and recognizes this pixel, or a predetermined region including this pixel, as the pedestrian's upper part 31. The control unit 11 can thereby specify the position of the pedestrian's upper part 31 in the real space.
The control unit 11 continuously performs such position specification of the pedestrian's upper part 31 in the real space, and can measure the behavior of the upper part 31 in the real space by, for example, plotting the position of the upper part 31 on real-space coordinates. For example, when a moving image for one point in time or a single still image is acquired as the captured image 3 in step S101, the control unit 11 plots, on real-space coordinates, the position of the pedestrian's upper part 31 appearing in that moving image or still image; the behavior of the upper part 31 at that single point in time is thereby measured. On the other hand, when a moving image for a predetermined time or a plurality of still images is acquired as the captured image 3 in step S101, the control unit 11 continuously plots, on real-space coordinates, the position of the pedestrian's upper part 31 appearing in the captured image 3; the behavior of the upper part 31 within the predetermined time is thereby measured.
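Picking the upper part 31 as the highest pixel of the person region, and tracking its height over successive frames, can be sketched as follows. The representation is our own assumption for illustration: each pixel is paired with a `height_of` callback that yields its real-space height (e.g., via the depth data and the coordinate conversion described earlier).

```python
def pedestrian_top(person_pixels, height_of):
    """Among the pixels of the person region, pick the one whose
    real-space height is greatest; this pixel stands for the
    pedestrian's upper part 31."""
    return max(person_pixels, key=height_of)

def track_top_height(person_regions, height_of):
    """Record the upper part's height for each successive person
    region; the resulting series is the behavior evaluated by the
    state determination in step S104."""
    return [height_of(pedestrian_top(region, height_of))
            for region in person_regions]
```

A sequence of these heights plays the role of the plot on real-space coordinates described above.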
When a part related to a position on the pedestrian's body other than the upper part 31, such as the lower part of the pedestrian, is adopted as the local part to be observed, the same processing can be applied. That is, the control unit 11 can specify the position of that part in the real space by using the depth of each pixel included in the person region, and can measure the behavior of that part in the real space by plotting its position on real-space coordinates.
On the other hand, when a specific part of the body, such as the head, shoulders, chest, or legs, is adopted as the local part to be observed, the control unit 11 specifies the region in which that local part appears by performing pattern detection or the like within the person region of the captured image 3.
Here, since the captured image 3 includes depth data, the control unit 11 may specify the region in which the local part appears within the person region by using the depth of each pixel included in the person region to detect the three-dimensional shape of the local part by pattern detection or the like. Alternatively, the control unit 11 may specify the region of the local part within the person region by detecting the two-dimensional shape of the local part without using the depth data.
The control unit 11 can then specify the position of the local part in the real space based on the depth of each pixel included in the region thus specified. Furthermore, the control unit 11 continuously performs such position specification of the local part in the real space, and can measure the behavior of the local part in the real space by, for example, plotting the position of the local part on real-space coordinates.
(Step S104)
In the next step S104, the control unit 11 functions as the state determination unit 54 and determines, based on the behavior of the local part measured in step S103, whether or not the pedestrian is in an abnormal state. When it is determined that the pedestrian is in an abnormal state, the control unit 11 advances the processing to the next step S105. On the other hand, when it is determined that the pedestrian is not in an abnormal state, the control unit 11 ends the processing according to this operation example.
The image analysis method for determining whether or not the pedestrian is in an abnormal state may be selected as appropriate according to the embodiment. For example, in the present embodiment, the pedestrian's upper part 31 is adopted as the local part to be observed. The control unit 11 may therefore detect an abnormal state of the pedestrian when the behavior of the upper part 31 measured in step S103 can be evaluated as a movement satisfying a predetermined condition. In the present embodiment, as examples of the pedestrian's abnormal state, the control unit 11 detects the pedestrian's fallen state, crouching state, and lying state. An example of a method for detecting each of these abnormal states is described below.
(A) Fallen state
First, an example of a method for detecting a state in which the pedestrian has fallen (fallen state), one of the pedestrian's abnormal states, will be described with reference to FIG. 9. FIG. 9 schematically illustrates a scene in which the pedestrian is in a fallen state. As illustrated in FIG. 9, when the pedestrian falls, it is assumed that the position of the pedestrian's body changes suddenly; specifically, that it drops rapidly vertically downward toward the ground.
Therefore, based on the behavior of the pedestrian's upper part 31 measured in step S103, the control unit 11 detects whether or not the upper part 31 has descended by a predetermined distance or more within a fixed time in the real space. When the control unit 11 detects that the pedestrian's upper part 31 has descended by the predetermined distance or more within the fixed time in the real space, it may determine that the pedestrian has fallen and is in an abnormal state (fallen state). The time and distance thresholds for detecting the fallen state may be set as appropriate according to the embodiment. The method for detecting the fallen state is not limited to this example, and the control unit 11 may detect the pedestrian's fallen state by other methods.
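The fall criterion described here — the upper part 31 descending by a predetermined distance or more within a fixed time — can be sketched as follows. The sampled-height representation and the function name are our assumptions; the embodiment leaves the thresholds themselves open as configuration values.

```python
def detect_fall(top_heights, dt, drop_distance, time_window):
    """Fall-detection sketch: top_heights is the upper part 31's
    real-space height sampled every dt seconds. Report a fall when
    the height drops by at least drop_distance within any span of
    time_window seconds."""
    window = max(1, int(time_window / dt))  # samples per time window
    for i in range(len(top_heights)):
        for j in range(i + 1, min(i + window + 1, len(top_heights))):
            if top_heights[i] - top_heights[j] >= drop_distance:
                return True
    return False
```

A sudden drop from standing height to near the ground within the window triggers detection, while an equally large but gradual descent spread over many windows does not.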
(B) Crouching state
Second, an example of a method for detecting a state in which the pedestrian is crouching (crouching state), one of the pedestrian's abnormal states, will be described with reference to FIG. 10. FIG. 10 schematically illustrates a scene in which the pedestrian is in a crouching state. As illustrated in FIG. 10, when the pedestrian is crouching, it is assumed that the pedestrian's entire body is located at or below a predetermined height in the real space.
Therefore, based on the behavior of the pedestrian's upper part 31 measured in step S103, the control unit 11 detects whether or not the upper part 31 has moved to a position lower than a predetermined first height H1 in the real space. When the control unit 11 detects that the pedestrian's upper part 31 has moved to a position lower than the predetermined first height H1 in the real space, it determines that the pedestrian is crouching and is in an abnormal state (crouching state).
Specifically, the control unit 11 compares the height h of the pedestrian's upper part 31 in the real space with the predetermined first height H1. When the comparison determines that the height h of the upper part 31 is lower than the predetermined first height H1, the control unit 11 detects that the upper part 31 has moved to a position lower than the predetermined first height H1 and determines that the pedestrian is in a crouching state. The value of the predetermined first height H1 may be set as appropriate according to the embodiment. The method for detecting the crouching state is not limited to this example, and the control unit 11 may detect the pedestrian's crouching state by other methods.
In the example of FIG. 10, the height h of the pedestrian's upper part 31 and the predetermined first height H1 are expressed with the ground as the reference. The position (height) of the ground in the real space can be given by an arbitrary method. For example, by the method described above, the control unit 11 can calculate the position (height) of the ground in the real space by using the depth of each pixel included in the region of the captured image 3 in which the ground appears. The control unit 11 can therefore express the height h of the pedestrian's upper part 31 as a distance from the ground. However, the form of expression of the height h and the predetermined first height H1 is not limited to this example and may be selected as appropriate according to the embodiment; for example, they may be expressed with the camera 2 as the reference.
(C) Lying state
Third, an example of a method for detecting a state in which the pedestrian is lying down (lying state), one of the pedestrian's abnormal states, will be described with reference to FIG. 11. FIG. 11 schematically illustrates a scene in which the pedestrian is in a lying state. As illustrated in FIG. 11, when the pedestrian is lying down, it is assumed that the pedestrian's entire body is located at an even lower height in the real space than in the crouching state described above.
Therefore, as illustrated in FIG. 11, based on the behavior of the pedestrian's upper part 31 measured in step S103, the control unit 11 detects whether or not the upper part 31 has moved in the real space to a position lower than a predetermined second height H2, which is even lower than the predetermined first height H1. When the control unit 11 detects that the pedestrian's upper part 31 has moved to a position lower than the predetermined second height H2 in the real space, it determines that the pedestrian is lying down and is in an abnormal state (lying state).
 Specifically, the control unit 11 compares the height h of the pedestrian's upper part 31 in real space with the predetermined second height H2. When the comparison shows that the height h of the pedestrian's upper part 31 is lower than the predetermined second height H2, the control unit 11 detects that the pedestrian's upper part 31 has moved to a position lower than the predetermined second height H2 and determines that the pedestrian is in the lying state. The value of the predetermined second height H2 may be set as appropriate according to the embodiment, provided that it is lower than the predetermined first height H1. The method of detecting the lying state is not limited to this example, and the control unit 11 may detect the pedestrian's lying state by another method. Further, in the example of FIG. 11, as in FIG. 10, the predetermined second height H2 is expressed with the ground as the reference. However, as above, the form in which the predetermined second height H2 is expressed is not limited to this example and may be selected as appropriate according to the embodiment. For example, the predetermined second height H2 may be expressed with the camera 2 as the reference.
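The two thresholds H1 and H2 together partition the height of the upper part 31 into three bands, so the crouching and lying checks can be expressed as a single classification step. The following Python sketch is illustrative only; the function name and the example threshold values are assumptions and not part of the disclosure.

```python
def classify_by_height(h, H1, H2):
    """Classify a pedestrian's state from the real-space height h of the
    upper part 31, using the two height thresholds of the embodiment.

    h  -- measured height of the pedestrian's upper part (e.g. metres above ground)
    H1 -- predetermined first height (crouching threshold)
    H2 -- predetermined second height, which must satisfy H2 < H1 (lying threshold)
    """
    if H2 >= H1:
        raise ValueError("H2 must be set lower than H1")
    if h < H2:
        return "lying"       # upper part below the second height
    if h < H1:
        return "crouching"   # below the first height but not the second
    return "normal"          # upper part at walking height

# Example with illustrative thresholds of 0.9 m and 0.4 m above the ground:
print(classify_by_height(1.5, H1=0.9, H2=0.4))  # -> normal
print(classify_by_height(0.6, H1=0.9, H2=0.4))  # -> crouching
print(classify_by_height(0.2, H1=0.9, H2=0.4))  # -> lying
```

Both heights must be expressed in the same reference frame as h (ground-based or camera-based).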
 (D) Summary
 As described above, the control unit 11 can detect the various states of the pedestrian. When the control unit 11 determines in step S104 that the pedestrian's state corresponds to any of the fall state, the crouching state, and the lying state, it advances the processing to the next step S105. On the other hand, when the control unit 11 determines in step S104 that the pedestrian's state corresponds to none of the fall state, the crouching state, and the lying state, it ends the processing of this operation example.
 Which of the pedestrian's various states are made detection targets may be selected as appropriate according to the embodiment. That is, at least one of the fall state, the crouching state, and the lying state may be excluded from the detection targets. The control unit 11 may also detect pedestrian states other than the above based on conditions other than those described. Furthermore, the types of pedestrian state to be detected in step S104 may be selected as appropriate according to the embodiment; they may be selected by the user or set in advance.
 In addition, at least one of the fall state, the crouching state, and the lying state may be set not to be an abnormal state. For example, when the crouching state is set not to be an abnormal state, the control unit 11, upon detecting that the pedestrian is in the crouching state, determines that the pedestrian is in a normal state rather than an abnormal state. The control unit 11 then omits the processing of the next step S105 and ends the processing of this operation example.
 When no state other than the fall state, the crouching state, and the lying state is detected in step S104, the control unit 11 may recognize the pedestrian as being in a normal state upon determining that the pedestrian's state corresponds to none of these three states. The control unit 11 may then notify the user of the abnormal state detection device 1 that the pedestrian is in a normal state. For example, the control unit 11 may perform this notification by displaying on the touch panel display 13 that the pedestrian is in a normal state.
 (Step S105)
 In the next step S105, the control unit 11 functions as the notification unit 55 and, when the determination in step S104 finds that the pedestrian is in an abnormal state, performs an abnormality detection notification to report that the pedestrian is in an abnormal state. The processing of this operation example then ends. The means by which the control unit 11 performs the abnormality detection notification can be selected as appropriate according to the embodiment.
 For example, when the abnormal state detection device 1 is used in a facility such as a hospital, it can be connected to equipment such as a nurse call system via the external interface 15. In this case, the control unit 11 may perform the abnormality detection notification in cooperation with such equipment. That is, the control unit 11 may control the nurse call system via the external interface 15, and may place a call through the nurse call system as the abnormality detection notification. This makes it possible to appropriately inform a nurse or other person watching over the pedestrian that the pedestrian is in an abnormal state.
 Further, for example, the control unit 11 may perform the abnormality detection notification by outputting a predetermined sound from the speaker 14 connected to the abnormal state detection device 1. The control unit 11 may also display on the touch panel display 13, as the abnormality detection notification, a screen reporting that an abnormal state of the pedestrian has been detected.
 The control unit 11 may also perform the abnormality detection notification using e-mail, a short message service, a push notification, or the like. When such a notification is performed, the e-mail address, telephone number, and the like of the user terminal serving as the notification destination may be registered in the storage unit 12 in advance, and the control unit 11 may perform the abnormality detection notification using this pre-registered e-mail address, telephone number, and the like.
 (Operation and Effect)
 As described above, the abnormal state detection device 1 according to this embodiment analyzes the pedestrian's state based on the captured image 3, which includes depth data indicating the depth of each pixel. As described above, the depth of each pixel is acquired with respect to the subject surface, so the position of the subject surface in real space can be specified by using the depth data. The abnormal state detection device 1 according to this embodiment therefore uses this depth data to measure the real-space behavior of the pedestrian's upper part 31, the part of the pedestrian's body chosen as the observation target, and detects whether the pedestrian is in an abnormal state based on the measured behavior of the upper part 31.
 That is, according to this embodiment, the pedestrian's abnormal state is detected based on the behavior of a local part of the pedestrian rather than the pedestrian's entire body. Because the body region to be observed is limited to the pedestrian's upper part 31, the processing load for analyzing the pedestrian's state is small, and the state can be analyzed at high speed. In addition, because the observation target is narrowed down, the analysis itself is simple. For example, in the above embodiment, the abnormal state detection device 1 detects the pedestrian's fall state, crouching state, and lying state based on the movement and height of the pedestrian's upper part 31, and measuring the movement and height of the upper part 31 is unlikely to produce errors. The pedestrian's state can therefore be analyzed accurately. Accordingly, this embodiment makes it possible to watch over a pedestrian appropriately.
 Further, in this embodiment, the pedestrian's upper part 31 is adopted as the local part to be observed, and the pedestrian's fall state, crouching state, and lying state are detected based on the behavior of the upper part 31. Therefore, when a pedestrian within the shooting range of the camera 2 falls into any of these states, the abnormal state detection device 1 can detect the pedestrian's abnormal state and report that the pedestrian is in that state.
 §4 Modifications
 Although an embodiment of the present invention has been described in detail above, the foregoing description is in every respect merely an illustration of the present invention. It goes without saying that various improvements and modifications can be made without departing from the scope of the present invention.
 (1) Timing of the notification processing
 As one example, in the above embodiment the control unit 11 performs the abnormality detection notification immediately upon determining in step S104 that the pedestrian is in an abnormal state. However, the timing of the abnormality detection notification is not limited to this example. For example, the control unit 11 may function as the notification unit 55 and perform the abnormality detection notification when the pedestrian's abnormal state has continued for a certain time or longer.
 In this case, the control unit 11 determines in step S104 whether the pedestrian's abnormal state has continued for the certain time or longer. When the control unit 11 determines that the pedestrian's abnormal state has continued for the certain time or longer, it performs the abnormality detection notification in step S105. On the other hand, when the pedestrian is not in an abnormal state, or when the pedestrian's abnormal state has not continued for the certain time or longer, the control unit 11 omits the processing of step S105 and ends the processing of the above operation example. The threshold for determining whether the pedestrian's abnormal state has continued for the certain time or longer may be set as appropriate according to the embodiment.
 By performing the abnormality detection notification only when the pedestrian's abnormal state has continued for a certain time or longer in this way, false notifications can be prevented in scenes where the pedestrian's state satisfies the abnormal-state condition only for an instant. For example, when a pedestrian bends down to pick up an object that has fallen on the ground, the pedestrian's state can momentarily become the crouching state. If an abnormality detection notification were issued through the speaker 14 or the like in such a case, people near the speaker 14 would be informed of a state that differs from the pedestrian's actual state; that is, they would receive incorrect information. In contrast, according to this modification, preventing such false notifications makes it possible to report the detection of a pedestrian's abnormality appropriately.
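The duration check in this modification amounts to a debounce over successive per-frame judgments. The following Python sketch is illustrative only; the class name, parameter names, and the 2.0-second threshold are assumptions and not values given in the disclosure.

```python
class AbnormalStateDebouncer:
    """Suppress the abnormality detection notification until the abnormal
    state has persisted for a certain time, as in modification (1)."""

    def __init__(self, hold_seconds):
        self.hold_seconds = hold_seconds  # threshold; set per the embodiment
        self.abnormal_since = None        # time the current abnormal episode began

    def update(self, is_abnormal, now):
        """Feed one per-frame judgment; return True when the notification
        should be issued (the state has persisted long enough)."""
        if not is_abnormal:
            self.abnormal_since = None    # state cleared: reset the timer
            return False
        if self.abnormal_since is None:
            self.abnormal_since = now     # a new abnormal episode begins
        return (now - self.abnormal_since) >= self.hold_seconds

# A momentary crouch does not fire; a persistent one does.
d = AbnormalStateDebouncer(hold_seconds=2.0)
print(d.update(True, 0.0))   # False: episode just began
print(d.update(True, 1.0))   # False: only 1 s so far
print(d.update(False, 1.5))  # False: state cleared, timer reset
print(d.update(True, 2.0))   # False: a new episode begins
print(d.update(True, 4.5))   # True: persisted 2.5 s >= 2.0 s
```

The same structure works whether `now` comes from frame timestamps or a monotonic clock.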
 (2) Types of abnormal state to be detected
 In the above embodiment, the abnormal state detection device 1 detects the pedestrian's fall state, crouching state, and lying state as the pedestrian's abnormal states. However, the types of abnormal state to be detected are not limited to these and may be selected as appropriate according to the embodiment.
 For example, the risk of falling increases with aging, declining physical strength, and the like. The abnormal state detection device 1 may detect such walking with a high risk of falling as an abnormal state. Specifically, as the range of motion of the joints decreases with aging, declining physical strength, and the like, the movement of the joints in the pedestrian's lower limbs decreases. For example, the angle of the toes with respect to the walking surface (ground) becomes smaller, and the distance from the lowest point of the walking foot to the walking surface (ground) becomes smaller.
 Therefore, the control unit 11 measures the behavior of the legs in step S103. For example, the control unit 11 specifies the range in which the legs appear by performing pattern matching or the like in the person region extracted in step S102. Next, the control unit 11 calculates the angle of the toes with respect to the walking surface (ground) by analyzing the shape of the toes using the depth of each pixel in the portion of the leg range in which the toes appear. A known image analysis method may be used to calculate this angle. The control unit 11 also calculates the distance between the lowest point of the legs and the ground using the depth of each pixel in the range in which the legs appear. As described above, the position (height) of the ground in real space may be given by any method. The control unit 11 then continuously plots the angle of the toes with respect to the walking surface (ground) and the distance between the lowest point of the legs and the ground. In this way, the control unit 11 can measure the behavior of the legs in real space.
 In step S104, the control unit 11 refers to the continuously plotted data and determines whether the maximum value of the angle of the toes with respect to the walking surface (ground) is equal to or less than a predetermined value. The control unit 11 also determines whether the maximum value of the distance between the lowest point of the legs and the ground is equal to or less than a predetermined value. When the control unit 11 determines that the maximum toe angle with respect to the walking surface (ground) is equal to or less than its predetermined value and that the maximum distance between the lowest point of the legs and the ground is equal to or less than its predetermined value, it concludes that the pedestrian is in an abnormal state and advances the processing to the next step S105. Otherwise, the control unit 11 concludes that the pedestrian is not in an abnormal state and ends the processing of the above operation example.
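The two maximum-value checks above can be sketched as a single predicate over the plotted data. This Python fragment is illustrative only; the function name, the units, and the threshold values are assumptions, not values given in the disclosure.

```python
def is_high_fall_risk_gait(toe_angles, foot_clearances,
                           max_angle_threshold, max_clearance_threshold):
    """Judge a high-fall-risk gait from continuously plotted leg data.

    toe_angles      -- per-frame toe angle with respect to the walking surface
    foot_clearances -- per-frame distance from the lowest point of the leg to the ground
    The gait is judged abnormal when BOTH maxima are at or below their thresholds.
    """
    return (max(toe_angles) <= max_angle_threshold and
            max(foot_clearances) <= max_clearance_threshold)

# Illustrative thresholds: 15 degrees and 0.03 m.
healthy = is_high_fall_risk_gait([5, 22, 18], [0.01, 0.05, 0.04], 15, 0.03)
shuffling = is_high_fall_risk_gait([4, 9, 11], [0.005, 0.02, 0.015], 15, 0.03)
print(healthy)    # False: the toe angle reaches 22 degrees
print(shuffling)  # True: both maxima stay at or below the thresholds
```

Requiring both conditions keeps a single low reading (e.g. a mid-stance frame) from triggering the judgment on its own.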
 In this way, the abnormal state detection device 1 may detect walking with a high risk of falling as an abnormal state. The predetermined values of the angle and the distance serving as the thresholds for determining whether the state is abnormal may be set as appropriate according to the embodiment. The target leg may be the right leg, the left leg, or both legs.
 The abnormal state detection device 1 may also hold the angle and the distance measured in advance while the pedestrian was healthy. The abnormal state detection device 1 may then calculate how much the angle and the distance have declined from their healthy values, as the differences between the previously measured healthy values and the values measured in step S103. These amounts of decline can be used as indicators of the risk of falling. Therefore, in step S104, the control unit 11 may determine that the pedestrian is in an abnormal state when the declines in the angle and the distance each exceed a predetermined value.
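The comparison with a healthy baseline described here can be sketched as follows. Again, the function names, units, and numbers are illustrative assumptions; the disclosure leaves the predetermined values to the embodiment.

```python
def declines_from_baseline(healthy_angle, healthy_clearance,
                           measured_angle, measured_clearance):
    """Return the declines of the toe angle and foot clearance from the
    previously measured healthy baseline."""
    return (healthy_angle - measured_angle,
            healthy_clearance - measured_clearance)

def is_abnormal_by_decline(angle_drop, clearance_drop,
                           angle_drop_threshold, clearance_drop_threshold):
    """Abnormal when both declines exceed their predetermined values."""
    return (angle_drop > angle_drop_threshold and
            clearance_drop > clearance_drop_threshold)

# Illustrative baseline of 25 degrees / 0.06 m, current values 10 degrees / 0.02 m:
a_drop, c_drop = declines_from_baseline(25, 0.06, 10, 0.02)
print(a_drop)                                        # -> 15
print(is_abnormal_by_decline(a_drop, c_drop, 10, 0.03))  # -> True
```

Because the baseline is per-person, this variant can flag a decline even for a pedestrian whose absolute values would still pass the fixed thresholds of the previous check.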
 1 ... Abnormal state detection device
 2 ... Camera, 3 ... Captured image, 31 ... Upper part
 5 ... Program, 6 ... Storage medium
11 ... Control unit, 12 ... Storage unit, 13 ... Touch panel display
14 ... Speaker, 15 ... External interface, 16 ... Communication interface
17 ... Drive
51 ... Image acquisition unit, 52 ... Extraction unit, 53 ... Behavior measurement unit
54 ... State determination unit, 55 ... Notification unit

Claims (7)

  1.  An abnormal state detection device comprising:
      an image acquisition unit that acquires a captured image of a pedestrian performing a walking motion, the captured image including depth data indicating the depth of each pixel in the captured image;
      an extraction unit that extracts, from the acquired captured image, a person region in which the pedestrian appears;
      a behavior measurement unit that measures the real-space behavior of a local part chosen as the observation target from the pedestrian's body shown in the captured image, by referring to the depth of each pixel included in the extracted person region and continuously specifying the real-space position of the local part;
      a state determination unit that determines whether the pedestrian is in an abnormal state based on the measured behavior of the local part; and
      a notification unit that, when the determination finds that the pedestrian is in an abnormal state, performs an abnormality detection notification to report that the pedestrian is in an abnormal state.
  2.  The abnormal state detection device according to claim 1, wherein
      the behavior measurement unit measures the real-space behavior of the pedestrian's upper part as the local part, and
      the state determination unit detects, based on the measured behavior of the pedestrian's upper part, whether the pedestrian's upper part has descended a predetermined distance or more in real space within a certain time, and, when it detects that the pedestrian's upper part has descended the predetermined distance or more in real space within the certain time, concludes that the pedestrian has fallen and determines that the pedestrian is in an abnormal state.
  3.  The abnormal state detection device according to claim 1 or 2, wherein
      the behavior measurement unit measures the real-space behavior of the pedestrian's upper part as the local part, and
      the state determination unit detects, based on the measured behavior of the pedestrian's upper part, whether the pedestrian's upper part has moved in real space to a position lower than a predetermined first height, and, when it detects that the pedestrian's upper part has moved in real space to a position lower than the predetermined first height, concludes that the pedestrian is crouching and determines that the pedestrian is in an abnormal state.
  4.  The abnormal state detection device according to claim 3, wherein
      the state determination unit detects, based on the measured behavior of the pedestrian's upper part, whether the pedestrian's upper part has moved in real space to a position lower than a predetermined second height that is lower still than the first height, and, when it detects that the pedestrian's upper part has moved in real space to a position lower than the predetermined second height, concludes that the pedestrian is lying down and determines that the pedestrian is in an abnormal state.
  5.  The abnormal state detection device according to any one of claims 1 to 4, wherein
      the notification unit performs the abnormality detection notification when the pedestrian's abnormal state has continued for a predetermined time or longer.
  6.  An abnormal state detection method in which a computer executes:
      a step of acquiring a captured image of a pedestrian performing a walking motion, the captured image including depth data indicating the depth of each pixel in the captured image;
      a step of extracting, from the acquired captured image, a person region in which the pedestrian appears;
      a step of measuring the real-space behavior of a local part chosen as the observation target from the pedestrian's body shown in the captured image, by referring to the depth of each pixel included in the extracted person region and continuously specifying the real-space position of the local part;
      a step of determining whether the pedestrian is in an abnormal state based on the measured behavior of the local part; and
      a step of, when the determination finds that the pedestrian is in an abnormal state, performing an abnormality detection notification to report that the pedestrian is in an abnormal state.
  7.  An abnormal state detection program for causing a computer to execute:
      a step of acquiring a captured image of a pedestrian performing a walking motion, the captured image including depth data indicating the depth of each pixel in the captured image;
      a step of extracting, from the acquired captured image, a person region in which the pedestrian appears;
      a step of measuring the real-space behavior of a local part chosen as the observation target from the pedestrian's body shown in the captured image, by referring to the depth of each pixel included in the extracted person region and continuously specifying the real-space position of the local part;
      a step of determining whether the pedestrian is in an abnormal state based on the measured behavior of the local part; and
      a step of, when the determination finds that the pedestrian is in an abnormal state, performing an abnormality detection notification to report that the pedestrian is in an abnormal state.
PCT/JP2016/050281 2015-03-23 2016-01-06 Abnormal state detection device, abnormal state detection method, and abnormal state detection program WO2016152182A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017507517A JP6737262B2 (en) 2015-03-23 2016-01-06 Abnormal state detection device, abnormal state detection method, and abnormal state detection program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015059277 2015-03-23
JP2015-059277 2015-03-23

Publications (1)

Publication Number Publication Date
WO2016152182A1 true WO2016152182A1 (en) 2016-09-29

Family

ID=56978927

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/050281 WO2016152182A1 (en) 2015-03-23 2016-01-06 Abnormal state detection device, abnormal state detection method, and abnormal state detection program

Country Status (2)

Country Link
JP (1) JP6737262B2 (en)
WO (1) WO2016152182A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014155693A (en) * 2012-12-28 2014-08-28 Toshiba Corp Movement information processor and program
JP2015042241A (en) * 2013-01-18 2015-03-05 株式会社東芝 Movement information processing device and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9161708B2 (en) * 2013-02-14 2015-10-20 P3 Analytics, Inc. Generation of personalized training regimens from motion capture data


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018099267A (en) * 2016-12-20 2018-06-28 株式会社竹中工務店 Movement quantity estimation device, movement quantity estimation program and movement quantity estimation system
CN112260402A (en) * 2020-10-22 2021-01-22 海南电网有限责任公司电力科学研究院 Method for monitoring state of intelligent substation inspection robot based on video monitoring
CN112260402B (en) * 2020-10-22 2022-05-24 海南电网有限责任公司电力科学研究院 Monitoring method for state of intelligent substation inspection robot based on video monitoring

Also Published As

Publication number Publication date
JPWO2016152182A1 (en) 2018-01-18
JP6737262B2 (en) 2020-08-05


Legal Events

- 121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16768085; Country of ref document: EP; Kind code of ref document: A1)
- ENP: Entry into the national phase (Ref document number: 2017507517; Country of ref document: JP; Kind code of ref document: A)
- NENP: Non-entry into the national phase (Ref country code: DE)
- 122: Ep: pct application non-entry in european phase (Ref document number: 16768085; Country of ref document: EP; Kind code of ref document: A1)