WO2020188748A1 - Surveillance system, information processing device, fall detection method, and non-transitory computer readable medium - Google Patents

Surveillance system, information processing device, fall detection method, and non-transitory computer readable medium

Info

Publication number
WO2020188748A1
Authority
WO
WIPO (PCT)
Prior art keywords
detected
information processing
reach
lidar sensor
fallen
Prior art date
Application number
PCT/JP2019/011469
Other languages
French (fr)
Japanese (ja)
Inventor
Shinichi Miyamoto (伸一 宮本)
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation
Priority to US17/437,659 (published as US20220163671A1)
Priority to PCT/JP2019/011469
Priority to JP2021506893A (published as JP7231011B2)
Publication of WO2020188748A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 - Lidar systems specially adapted for specific applications
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B61 - RAILWAYS
    • B61L - GUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L 23/00 - Control, warning, or like safety means along the route or between vehicles or vehicle trains
    • B61L 23/007 - Safety arrangements on railway crossings
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B61 - RAILWAYS
    • B61L - GUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L 23/00 - Control, warning, or like safety means along the route or between vehicles or vehicle trains
    • B61L 23/04 - Control, warning, or like safety means along the route or between vehicles or vehicle trains for monitoring the mechanical state of the route
    • B61L 23/041 - Obstacle detection
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B61 - RAILWAYS
    • B61L - GUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L 29/00 - Safety means for rail/road crossing traffic
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 7/00 - Tracing profiles
    • G01C 7/02 - Tracing profiles of land surfaces
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 - Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 - Systems determining position data of a target
    • G01S 17/42 - Simultaneous measurement of distance and other co-ordinates
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/10 - Image acquisition
    • G06V 10/12 - Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 - Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/141 - Control of illumination
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 25/00 - Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors

Definitions

  • This disclosure relates to monitoring systems, information processing devices, fall detection methods, and non-transitory computer-readable media.
  • In order to reduce accidents at railroad crossings, monitoring devices that monitor railroad crossings are used.
  • The monitoring device detects a person present in a railroad crossing using information obtained from a camera, radar, or the like that observes the inside of the crossing.
  • Patent Document 1 discloses a configuration of a railroad-crossing obstacle detection device that can reliably detect, as an obstacle, an object in the crossing whose height, area, size, or other conditions have changed.
  • The railroad-crossing obstacle detection device of Patent Document 1 emits a transmitted wave from a millimeter-wave radar sensor or the like toward an obstacle detection area, and uses the reflected wave to detect the distance to an obstacle, the size of the obstacle, and so on. Further, the obstacle detection device defines the emission range of the transmitted wave of the millimeter-wave radar, centered on the antenna portion of the millimeter-wave radar sensor, so as to cover the obstacle detection area in the crossing.
  • The transmitted wave emitted from the railroad-crossing obstacle detection device disclosed in Patent Document 1 needs to be emitted substantially horizontally to the ground in order to cover the obstacle detection area. By emitting the transmitted wave substantially horizontally, the wave reaches even the far edge of the obstacle detection area. However, when the transmitted wave is emitted substantially horizontally to the ground, it does not hit a person who has fallen in the crossing, so the fallen person cannot be detected.
  • An object of the present disclosure is to provide a monitoring system, an information processing device, a fall detection method, and a non-transitory computer-readable medium capable of improving the accuracy of detecting a person who has fallen in a railroad crossing.
  • The monitoring system according to a first aspect of the present disclosure includes a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reach distances and outputs a detection signal indicating the detection status of an object for each laser beam, and an information processing device that determines that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams that have detected the object has decreased.
  • The information processing device according to a second aspect of the present disclosure includes a communication unit that receives, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reach distances, a detection signal indicating the detection status of an object for each laser beam, and a determination unit that determines that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams that have detected the object has decreased.
  • In the fall detection method according to a third aspect of the present disclosure, a detection signal indicating the detection status of an object for each of a plurality of laser beams having different reach distances, irradiated from a LiDAR sensor onto a monitoring area, is received from the LiDAR sensor, and the object is determined to have fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams that have detected the object has decreased.
  • The program according to a fourth aspect of the present disclosure causes a computer to receive, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reach distances, a detection signal indicating the detection status of an object for each laser beam, and to determine that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams that have detected the object has decreased.
  • The present disclosure can provide a monitoring system, an information processing device, a fall detection method, and a non-transitory computer-readable medium that can improve the accuracy of detecting a person who has fallen in a railroad crossing.
  • FIG. 1 is a configuration diagram of the monitoring system according to Embodiment 1.
  • FIG. 2 is a configuration diagram of the monitoring system according to Embodiment 2.
  • FIG. 3 is a configuration diagram of the LiDAR sensor according to Embodiment 2.
  • FIG. 4 is a configuration diagram of the information processing apparatus according to Embodiment 2.
  • FIG. 5 is a diagram explaining the process, according to Embodiment 2, of determining whether there is a person who has fallen.
  • FIG. 6 is a diagram showing the flow of the fallen-person detection process according to Embodiment 2.
  • FIG. 7 is a diagram showing the flow of the fallen-person detection process according to Embodiment 3.
  • FIG. 8 is a configuration diagram of the information processing apparatus according to each embodiment.
  • The monitoring system of FIG. 1 is mainly used to monitor the state inside a railroad crossing. Specifically, the monitoring system of FIG. 1 may be used to detect a person who has fallen at a railroad crossing within the monitoring area 60.
  • The monitoring system has a LiDAR (Light Detection and Ranging) sensor 10 and an information processing device 20.
  • The LiDAR sensor 10 irradiates the monitoring area 60 with a plurality of laser beams having different reach distances, and outputs a detection signal indicating the detection status of the object 40 for each laser beam.
  • FIG. 1 shows both a state in which the object 40 is walking and a state in which the object 40 has fallen. The fallen state of the object 40 can be said to be lower than the walking state.
  • The walking state of the object 40 is shown by a solid line, and the fallen state by a dotted line.
  • The monitoring area 60 is a space above the ground surface 50 and may be, for example, a space including a railroad crossing. Although FIG. 1 shows the monitoring area 60 with a defined upper end, the upper end need not be defined.
  • A plurality of laser beams having different reach distances may be paraphrased as laser beams each having an irradiation angle different from those of the other laser beams.
  • The irradiation angle may be defined by a plane parallel to the ground surface 50 and the irradiation direction of the laser beam. Alternatively, the irradiation angle may be defined by a plane orthogonal to the ground surface 50 and the irradiation direction of the laser beam.
  • The reach distance may be the distance between the LiDAR sensor 10 and the point where the laser beam reaches the ground surface 50. Alternatively, the reach distance may be the distance between the point where the LiDAR sensor 10 meets the ground surface 50 and the point where the laser beam reaches the ground surface 50; the relationship between irradiation angle and reach is sketched below.
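  • For intuition, on flat ground the reach distance follows directly from the sensor's mounting height and each beam's depression angle. The following is a minimal sketch under that flat-ground assumption; the mounting height and angles are illustrative values, not taken from the disclosure.

```python
import math

def reach_distance(mount_height_m: float, depression_deg: float) -> float:
    """Horizontal distance from the sensor's ground contact point to the
    point where a beam emitted depression_deg below horizontal meets flat
    ground (flat-ground assumption)."""
    return mount_height_m / math.tan(math.radians(depression_deg))

# Hypothetical 4-layer setup: steeper beams have shorter reach distances.
for layer, angle_deg in enumerate([5.0, 10.0, 20.0, 40.0], start=1):
    print(f"layer {layer}: reach = {reach_distance(3.0, angle_deg):.1f} m")
```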
  • The laser light emitted from the LiDAR sensor 10 scatters when it hits the object 40 and returns as reflected light.
  • The reflected light is used to determine the presence or absence of the object 40 and to measure the distance to the object 40.
  • When the LiDAR sensor 10 receives the reflected light, it outputs a detection signal indicating the detection status of the object 40 to the information processing device 20.
  • The detection status is, for example, a status indicating whether or not the object 40 has been detected.
  • However, even when the LiDAR sensor 10 receives the reflected light, it cannot determine whether the recognized object 40 is walking or has fallen; it only recognizes that some object 40 has been detected.
  • The LiDAR sensor 10 may set the detection statuses of the object 40 for all the irradiated laser beams together in one signal, or may set the detection status in a separate signal for each laser beam.
  • In the detection signal, information indicating whether or not the object 40 was detected may be set, or information indicating whether or not reflected light was received may be set for each laser beam; one possible encoding is sketched below.
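  • As a concrete illustration, one plausible way to encode such a detection signal is a per-beam flag. The field names and values below are assumptions for illustration, not the format defined by the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class DetectionSignal:
    """Hypothetical detection-signal payload: one boolean per laser beam
    (layer), indicating whether reflected light was received, i.e. whether
    the object was detected by that beam."""
    sensor_id: str
    timestamp_s: float
    detected_by_layer: dict = field(default_factory=dict)  # layer -> bool

# A walking person intercepts the long-reach beams (layers 1 to 3) but not
# the shortest-reach beam (layer 4).
signal = DetectionSignal("lidar-01", 1552953600.0,
                         {1: True, 2: True, 3: True, 4: False})
```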
  • The LiDAR sensor 10 transmits the detection signal to the information processing device 20 via the network 30.
  • The network 30 may be, for example, an IP network.
  • Specifically, the network 30 may be the so-called Internet, or an intranet, that is, a network closed within a company or the like.
  • Alternatively, the LiDAR sensor 10 and the information processing device 20 may communicate directly using a wired cable, short-range wireless communication, or the like.
  • The short-range wireless communication may be, for example, a wireless LAN (Local Area Network) or Bluetooth (registered trademark).
  • The information processing device 20 may be a computer device that operates by having a processor execute a program stored in memory.
  • The information processing device 20 may be, for example, a server device.
  • The information processing device 20 receives the detection signal from the LiDAR sensor 10.
  • The information processing device 20 determines that the object 40 has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object 40 has been detected and the number of laser beams that have detected the object 40 has decreased.
  • For example, as shown in FIG. 1, suppose the object 40 is detected by three of the four laser beams emitted from the LiDAR sensor 10. After that, when the object 40 falls, it lies at a position close to the ground surface 50. In this case, among the plurality of laser beams emitted from the LiDAR sensor 10, the three beams with long reach distances can no longer detect the object 40, and only the beam with the shortest reach can detect it. In this way, when the number of laser beams detecting the object 40 decreases and a beam with a short reach detects the object 40, the information processing device 20 determines that the object 40 has fallen; a sketch of this decision rule follows.
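  • A minimal sketch of this decision rule, comparing two consecutive detection signals, is shown below. Treating layer 4 as the short-reach beam mirrors the four-beam example above and is otherwise an assumption.

```python
def object_has_fallen(prev_layers: set, curr_layers: set,
                      short_reach_layers: frozenset = frozenset({4})) -> bool:
    """Declare a fall when the number of detecting beams has decreased and
    the remaining detections come only from short-reach beams. Layers are
    numbered 1..4 from longest to shortest reach."""
    decreased = len(curr_layers) < len(prev_layers)
    only_short = bool(curr_layers) and curr_layers <= short_reach_layers
    return decreased and only_short

# Walking: layers 1 to 3 detect the object; after the fall only layer 4 does.
assert object_has_fallen({1, 2, 3}, {4}) is True
assert object_has_fallen({1, 2, 3}, {1, 2, 3}) is False
```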
  • FIG. 1 shows the case where, of the four laser beams, only the beam with the shortest reach detects the fallen object 40.
  • However, for example, the two beams with the shortest reach distances among the four may detect the fallen object.
  • Which laser beams must detect the object 40, once the number of detecting beams has decreased, for the object 40 to be determined to have fallen may be predetermined.
  • As described above, the monitoring system of FIG. 1 can use the detection results of the object from a plurality of laser beams with different reach distances when determining whether or not the object 40 has fallen.
  • Multiple laser beams with different reach distances form different angles with the ground surface. Therefore, compared with the case where the laser beams are emitted parallel to the ground surface, the likelihood that at least one beam hits a fallen object can be improved.
  • As a result, the information processing device 20 can determine whether or not the object 40 has fallen by using the detection results of the object 40 from the plurality of laser beams.
  • The monitoring system of FIG. 2 includes a LiDAR sensor 10, an information processing device 20, and a camera 70.
  • The LiDAR sensor 10 and the information processing device 20 are the same as the LiDAR sensor 10 and the information processing device 20 described with reference to FIG. 1.
  • Detailed description of the functions and operations of the LiDAR sensor 10 and the information processing device 20 that are the same as in FIG. 1 is omitted.
  • The camera 70 is used to photograph the monitoring area 60.
  • The camera 70 may be, for example, a far-infrared camera, a general digital camera, a video camera, or the like.
  • The camera 70 transmits a captured image, generated by photographing the monitoring area 60, to the information processing device 20.
  • The camera 70 may transmit the captured image to the information processing device 20 via the network 30.
  • Alternatively, the camera 70 may transmit the captured image to the information processing device 20 using a wired cable, short-range wireless communication, or the like.
  • The LiDAR sensor 10 has an irradiation unit 11 and a communication unit 12.
  • The irradiation unit 11 may both irradiate the laser light and receive the reflected light.
  • The irradiation unit 11 may be composed of, for example, an amplifier, an ADC (Analog-to-Digital Converter), a photodiode, and the like.
  • The irradiation unit 11 may be paraphrased as being composed of, for example, a transmitter that irradiates the laser light and a receiver that receives the reflected light.
  • The irradiation unit 11 irradiates a plurality of laser beams having different irradiation angles.
  • The irradiation unit 11 may irradiate the plurality of laser beams by changing the irradiation direction of a single transmitter, or may irradiate the plurality of laser beams using a plurality of transmitters whose irradiation directions are fixed.
  • The irradiation unit 11 outputs, to the communication unit 12, information indicating for which of the irradiated laser beams reflected light was received.
  • The irradiation unit 11 may determine, for example, that the object 40 exists in the irradiation direction of a laser beam for which reflected light could be received.
  • For example, of the laser beams emitted from the LiDAR sensor 10 in FIG. 1, the beams are referred to as the first-layer, second-layer, third-layer, and fourth-layer laser beams in order from the beam with the longest reach.
  • In FIG. 1, the irradiation unit 11 first outputs, to the communication unit 12, information indicating that the object 40 is detected by the first-layer to third-layer laser beams. After the object 40 has fallen, the irradiation unit 11 outputs, to the communication unit 12, information indicating that the object 40 is detected by the fourth-layer laser beam.
  • The communication unit 12 communicates with the information processing device 20 via the network 30.
  • The communication unit 12 may be, for example, a network interface that transmits and receives data to and from the network 30.
  • The communication unit 12 transmits a detection signal including information indicating the detection result of the object 40 to the information processing device 20.
  • The information indicating the detection result of the object 40 may be, for example, information indicating by which layer's laser beam the object 40 was detected.
  • The communication unit 12 may transmit a detection signal to the information processing device 20 each time it receives information from the irradiation unit 11. Alternatively, the communication unit 12 may transmit the detection signal to the information processing device 20 at predetermined timings.
  • The information processing device 20 has a communication unit 21 and a determination unit 22.
  • The communication unit 21 and the determination unit 22 may be software or modules whose processing is executed by a processor running a program stored in memory.
  • Alternatively, the communication unit 21 and the determination unit 22 may be hardware such as circuits or chips.
  • The communication unit 21 receives the captured image from the camera 70 and receives the detection signal from the LiDAR sensor 10 via the network 30.
  • The communication unit 21 outputs the received captured image and detection signal to the determination unit 22.
  • The determination unit 22 determines whether or not there is a person who has fallen in the monitoring area 60.
  • The determination unit 22 performs detection processing of the object 40 on the captured image. For example, the determination unit 22 detects a pedestrian included in a captured image using a learning model in which the features of pedestrians have been learned using deep learning or the like. Each time it receives a captured image from the camera 70, the determination unit 22 performs pedestrian detection processing using that captured image as input data to the learning model.
  • The case where the determination unit 22 can detect a pedestrian is the case where the features of a pedestrian appear in the captured image.
  • The case where the determination unit 22 cannot detect the pedestrian may be, for example, the case where the pedestrian has moved out of the monitoring area 60.
  • Alternatively, the case where the determination unit 22 cannot detect the pedestrian may be the case where the pedestrian's features no longer appear in the captured image because the pedestrian has fallen.
  • Alternatively, the case where the determination unit 22 cannot detect the pedestrian may be the case where, in an image captured by a far-infrared camera, the pedestrian cannot be distinguished from the ground surface 50. Specifically, when a pedestrian falls and comes into contact with the ground surface 50, the temperature difference between the pedestrian and the ground surface 50 may become small. In such a case, the pedestrian may become indistinguishable from the ground surface 50 in the far-infrared image.
  • The determination unit 22 determines, from the detection signal received from the LiDAR sensor 10, by which layer's laser beam the object 40 is detected. Further, the determination unit 22 determines whether the object 40 is detected by a plurality of laser beams.
  • Suppose the determination unit 22 determines that a pedestrian was detected in the captured images before time t but could no longer be detected in the captured images after time t. Further, suppose the determination unit 22 determines that the object 40 was detected in a plurality of layers before time t, but only the fourth layer detects the object 40 after time t.
  • In other words, at time t the determination unit 22 becomes unable to detect the pedestrian from the captured image, and at time t the object 40 that had been detected in a plurality of layers comes to be detected only in the fourth layer. In this way, when the object 40 that had been detected in a plurality of layers comes to be detected only in the fourth layer at the timing at which the pedestrian can no longer be detected from the captured image, the determination unit 22 determines that there is a person who has fallen.
  • More precisely, the determination unit 22 may determine that someone has fallen when the timing at which the pedestrian can no longer be detected from the captured image and the timing at which the object 40 detected in a plurality of layers comes to be detected only in the fourth layer are substantially the same.
  • "Substantially the same" includes the case where the difference between the timing at which the pedestrian can no longer be detected from the captured image and the timing at which the object 40 detected in a plurality of layers comes to be detected only in the fourth layer falls within a predetermined range, as in the sketch below.
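  • A minimal sketch of such a timing comparison follows; the one-second window is an illustrative assumption standing in for the predetermined range.

```python
def timings_coincide(t_image_lost_s: float, t_lidar_drop_s: float,
                     window_s: float = 1.0) -> bool:
    """True when the moment the pedestrian disappears from the captured
    images and the moment the LiDAR detections collapse to the fourth
    layer fall within the predetermined window of each other."""
    return abs(t_image_lost_s - t_lidar_drop_s) <= window_s
```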
  • The determination unit 22 may determine that the pedestrian and the object 40 are the same person when the position of the pedestrian detected in the captured image matches the position of the object 40 detected by the LiDAR sensor 10. Alternatively, the determination unit 22 may determine that the pedestrian and the object 40 are the same person when the distance between the position of the pedestrian detected in the captured image and the position of the object 40 detected by the LiDAR sensor 10 is shorter than a predetermined distance.
  • The position of the pedestrian detected in the captured image may be specified, for example, by converting image coordinates or camera coordinates into world coordinates. Image coordinates or camera coordinates may be expressed using, for example, pixel positions. World coordinates may be expressed using, for example, latitude and longitude.
  • The determination unit 22 may, for example, hold in advance table information indicating the correspondence between image coordinates or camera coordinates and world coordinates; a sketch of one such conversion follows.
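  • The sketch below shows one way such a conversion could work when the correspondence is modeled as a ground-plane homography fitted to four surveyed reference points. The point values are hypothetical, and a direct lookup table would serve the same role as described above.

```python
import cv2
import numpy as np

# Four surveyed correspondences: pixel positions in the image and the
# matching ground-plane world coordinates (here, metres in a local frame;
# latitude/longitude could be used instead). Values are hypothetical.
img_pts = np.float32([[100, 700], [1180, 690], [900, 300], [350, 310]])
world_pts = np.float32([[0, 0], [10, 0], [10, 20], [0, 20]])
H = cv2.getPerspectiveTransform(img_pts, world_pts)

def pixel_to_world(u: float, v: float):
    """Map an image pixel to ground-plane coordinates under a flat-ground
    assumption, playing the role of the correspondence table."""
    p = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)
    return float(p[0, 0, 0]), float(p[0, 0, 1])
```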
  • The detection position when each layer's laser beam detects the object 40 may be predetermined.
  • Alternatively, the distance from the LiDAR sensor 10 to the object 40 may be estimated, as the position of the object 40 detected by the LiDAR sensor 10, from the timing at which the reflected light is received.
  • The determination unit 22 may then estimate the position of the object 40 based on the estimated distance; the distance estimate is the usual time-of-flight calculation sketched below.
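  • Estimating range from the reception timing is the standard time-of-flight relation, shown here as a minimal sketch.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_from_round_trip(delay_s: float) -> float:
    """Range to the reflecting object from the round-trip delay between
    emitting a pulse and receiving its reflection: d = c * t / 2."""
    return SPEED_OF_LIGHT_M_PER_S * delay_s / 2.0

# A reflection received 200 ns after emission corresponds to about 30 m.
print(distance_from_round_trip(200e-9))  # ~29.98
```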
  • First, the determination unit 22 detects a pedestrian from the captured image (S11).
  • The determination unit 22 detects the pedestrian from the captured image, for example, by executing machine learning such as deep learning.
  • Next, the determination unit 22 determines that the object 40 is detected by the first-layer to third-layer laser beams of the LiDAR sensor 10 (S12).
  • The determination unit 22 identifies the laser beams by which the object 40 is detected using the detection signal received from the LiDAR sensor 10.
  • Next, suppose the determination unit 22 enters a state in which it cannot detect the pedestrian from the captured image (S13).
  • Next, the determination unit 22 determines whether or not the object 40 is detected only by the fourth-layer laser beam of the LiDAR sensor 10 (S14). Alternatively, the determination unit 22 may determine whether or not the LiDAR sensor 10 has detected the object 40 only with the laser beam of a predetermined layer.
  • When the determination unit 22 recognizes that the object 40 is detected only by the fourth-layer laser beam, it determines that the pedestrian has fallen (S15). When the object 40 is detected by a laser beam other than the fourth layer, or when the object 40 is not detected by any laser beam, the determination unit 22 ends the process. The flow is sketched below.
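  • A compact sketch of the S11 to S15 flow is shown below. Here, detect_pedestrian stands in for the learned detector, and the two input streams are assumed to be time-aligned, which the disclosure does not specify.

```python
def fall_detection_flow(images, layer_sets, detect_pedestrian) -> str:
    """S11/S12: a pedestrian is seen in the image while layers 1 to 3
    detect the object; S13: the pedestrian is lost from the image;
    S14/S15: if only layer 4 still detects the object, declare a fall."""
    tracking = False
    for image, layers in zip(images, layer_sets):
        pedestrian_seen = detect_pedestrian(image)
        if pedestrian_seen and layers >= {1, 2, 3}:
            tracking = True                      # S11 and S12 satisfied
        elif tracking and not pedestrian_seen:   # S13: pedestrian lost
            if layers == {4}:                    # S14: only the 4th layer
                return "fallen"                  # S15
            tracking = False                     # detected elsewhere or lost
    return "no fall detected"
```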
  • As described above, the information processing device 20 can determine the presence or absence of a fallen person by analyzing both the detection status of a pedestrian in the images captured by the camera 70 and the detection status of the object 40 by the LiDAR sensor 10.
  • The plurality of laser beams emitted by the LiDAR sensor 10 are applied to the monitoring area 60 at different angles. Therefore, the LiDAR sensor 10 can detect the object 40 in the monitoring area 60 regardless of the height of the object 40.
  • Moreover, the information processing device 20 can use both the change in which laser beams of the LiDAR sensor 10 detect the pedestrian and the change in the detection status of the pedestrian in the captured images. As a result, the information processing device 20 can detect that the pedestrian has fallen.
  • The plurality of laser beams irradiated by the LiDAR sensor 10, including the first to fourth layers, may have their reach points on the ground surface 50 within the monitoring area 60; the sketch below expresses this coverage condition as a check.
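  • A minimal sketch of the coverage condition, under the assumption that reach points are measured along the ground from the sensor; the distances are illustrative.

```python
def beams_cover_area(reach_distances_m, area_near_m: float,
                     area_far_m: float) -> bool:
    """True when every layer's reach point on the ground lies inside the
    monitoring area, measured along the ground from the sensor."""
    return all(area_near_m <= r <= area_far_m for r in reach_distances_m)

# Hypothetical: four layers landing between 3 m and 40 m from the sensor.
print(beams_cover_area([34.3, 17.0, 8.2, 3.6], 3.0, 40.0))  # True
```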
  • FIG. 7 shows the processing after the determination unit 22 determines YES in step S14 of FIG. 6.
  • The state in which the determination unit 22 determines YES in step S14 of FIG. 6 is a state in which the pedestrian cannot be detected from the captured image and the object 40 is detected only by the fourth-layer laser beam.
  • In this state, the determination unit 22 rotates the captured image in which the pedestrian could not be detected and attempts to detect the pedestrian again.
  • The learning model used by the determination unit 22 has mainly learned the standing posture of pedestrians. Therefore, the determination unit 22 cannot detect a pedestrian who has fallen.
  • By rotating the captured image, the determination unit 22 brings the fallen pedestrian closer to a standing orientation. The determination unit 22 then attempts to detect the pedestrian by applying the learning model trained on standing postures, using the rotated captured image as input. For example, the determination unit 22 may rotate the captured image by 90 degrees, or by 90 × α degrees (where α is an arbitrary positive value).
  • If the pedestrian is not detected, the determination unit 22 may rotate the captured image again, repeating the rotation until the pedestrian is detected. If the pedestrian can be detected within a predetermined number of rotations, the determination unit 22 determines that the pedestrian has fallen (S22). If the determination unit 22 cannot detect a pedestrian within the predetermined number of rotations, it ends the process. A sketch of this loop follows.
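  • A minimal sketch of this rotate-and-retry loop is shown below; detect_pedestrian stands in for the standing-posture learning model, and the fixed 90-degree step and retry count are illustrative assumptions.

```python
import cv2

def detect_fall_by_rotation(image, detect_pedestrian,
                            max_rotations: int = 3) -> bool:
    """Re-run the standing-pose detector on successively rotated copies of
    the image in which the pedestrian could not be detected."""
    rotated = image
    for _ in range(max_rotations):
        rotated = cv2.rotate(rotated, cv2.ROTATE_90_CLOCKWISE)
        if detect_pedestrian(rotated):
            return True   # found in a rotated image: the pedestrian has fallen (S22)
    return False          # give up after the predetermined number of rotations
```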
  • As described above, the information processing device 20 of the third embodiment can detect a fallen pedestrian by using rotated captured images. As a result, the accuracy of determining whether or not the object 40 detected by the LiDAR sensor 10 is a fallen pedestrian can be improved.
  • The cases where the determination unit 22 cannot detect a pedestrian from the captured image include the case where the pedestrian has simply walked out of the monitoring area 60. If the determination unit 22 rotated the captured image and attempted pedestrian detection even in such cases, the processing load of the information processing device 20 would increase.
  • In the third embodiment, only when the object 40 is detected by the fourth-layer laser beam does the determination unit 22 rotate the captured image and attempt to detect the pedestrian. Therefore, compared with the case where the laser-beam information is not used, the number of occasions on which the determination unit 22 rotates the captured image to attempt detection can be reduced. As a result, the third embodiment can suppress the increase in processing load compared with rotating the captured image and attempting detection in every case where the pedestrian cannot be detected from the captured image.
  • FIG. 8 is a block diagram showing a configuration example of the information processing device 20.
  • The information processing device 20 includes a network interface 1201, a processor 1202, and a memory 1203.
  • The network interface 1201 is used to communicate with the other network node devices that make up the communication system.
  • The network interface 1201 may be used to perform wireless communication.
  • For example, the network interface 1201 may be used to perform wireless LAN communication specified in the IEEE 802.11 series or mobile communication specified by the 3GPP (3rd Generation Partnership Project).
  • The network interface 1201 may include, for example, a network interface card (NIC) compliant with the IEEE 802.3 series.
  • The processor 1202 reads software (computer programs) from the memory 1203 and executes it, thereby performing the processing of the information processing device 20 described using the flowcharts and sequences in the embodiments above.
  • The processor 1202 may be, for example, a microprocessor, an MPU (Micro Processing Unit), or a CPU (Central Processing Unit).
  • The processor 1202 may include a plurality of processors.
  • The memory 1203 is composed of a combination of volatile memory and non-volatile memory. The memory 1203 may include storage located away from the processor 1202. In this case, the processor 1202 may access the memory 1203 via an I/O interface (not shown).
  • The memory 1203 is used to store a group of software modules.
  • The processor 1202 can perform the processing of the information processing device 20 described in the embodiments above by reading these software modules from the memory 1203 and executing them.
  • Each of the processors included in the information processing device 20 and the like executes one or more programs including a group of instructions for causing a computer to perform the algorithms described with reference to the drawings.
  • Non-transitory computer-readable media include various types of tangible storage media.
  • Examples of non-transitory computer-readable media include magnetic recording media, magneto-optical recording media (e.g., magneto-optical disks), CD-ROMs (Compact Disc Read-Only Memory), CD-Rs, CD-R/Ws, and semiconductor memories.
  • The magnetic recording media may be, for example, flexible disks, magnetic tapes, or hard disk drives.
  • The semiconductor memories may be, for example, mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, or RAM (Random Access Memory).
  • The program may also be supplied to the computer by various types of transitory computer-readable media.
  • Examples of transitory computer-readable media include electric signals, optical signals, and electromagnetic waves.
  • A transitory computer-readable medium can supply the program to the computer via a wired communication path such as an electric wire or optical fiber, or via a wireless communication path.
  • (Appendix 1) A monitoring system comprising: a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reach distances and outputs a detection signal indicating the detection status of an object for each laser beam; and an information processing device that determines that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams that have detected the object has decreased.
  • (Appendix 2) The monitoring system according to Appendix 1, wherein the information processing device determines that the object has fallen when the detection signal of the laser beam having the shortest reach among the plurality of laser beams indicates that the object has been detected.
  • (Appendix 3) The monitoring system according to Appendix 1 or 2, wherein the LiDAR sensor controls the irradiation directions of the plurality of laser beams so that the plurality of laser beams reach the ground surface in the monitoring area.
  • (Appendix 4) The monitoring system according to any one of Appendices 1 to 3, further comprising a camera that photographs the monitoring area, wherein the information processing device determines that the object has fallen when the object was recognized in a captured image taken by the camera and then can no longer be recognized in the captured image.
  • (Appendix 7) An information processing device comprising: a communication unit that receives, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reach distances, a detection signal indicating the detection status of an object for each laser beam; and a determination unit that determines that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams that have detected the object has decreased.
  • (Appendix 8) The information processing device according to Appendix 7, wherein the determination unit determines that the object has fallen when the detection signal of the laser beam having the shortest reach among the plurality of laser beams indicates that the object has been detected.
  • (Appendix 9) A fall detection method comprising: receiving, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reach distances, a detection signal indicating the detection status of an object for each laser beam; and determining that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams that have detected the object has decreased.
  • (Appendix 10) A non-transitory computer-readable medium storing a program that causes a computer to: receive, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reach distances, a detection signal indicating the detection status of an object for each laser beam; and determine that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams that have detected the object has decreased.
  • 10 LiDAR sensor
  • 11 Irradiation unit
  • 12 Communication unit
  • 20 Information processing device
  • 21 Communication unit
  • 22 Determination unit
  • 30 Network
  • 40 Object
  • 50 Ground surface
  • 60 Monitoring area
  • 70 Camera

Abstract

An objective of the present invention is to provide a surveillance system, an information processing device, a fall detection method, and a non-transitory computer-readable medium capable of improving the accuracy of detecting a person who has fallen inside a railroad crossing. The surveillance system according to the present disclosure comprises: a LiDAR sensor (10) which irradiates a surveillance region (60) with a plurality of laser beams having different reach distances, and outputs a detection signal indicating the detection status of an object (40) for each of the laser beams; and an information processing device (20) which determines that the object (40) has fallen when the detection status of a laser beam having a reach shorter than a prescribed reach indicates that the object (40) has been detected and the number of laser beams detecting the object (40) has decreased.

Description

Surveillance system, information processing device, fall detection method, and non-transitory computer-readable medium
 This disclosure relates to monitoring systems, information processing devices, fall detection methods, and non-transitory computer-readable media.
 In order to reduce accidents at railroad crossings, monitoring devices that monitor railroad crossings are used. The monitoring device detects a person present in a railroad crossing using information obtained from a camera, radar, or the like that observes the inside of the crossing.
 Patent Document 1 discloses a configuration of a railroad-crossing obstacle detection device that can reliably detect, as an obstacle, an object in the crossing whose height, area, size, or other conditions have changed. The railroad-crossing obstacle detection device of Patent Document 1 emits a transmitted wave from a millimeter-wave radar sensor or the like toward an obstacle detection area, and uses the reflected wave to detect the distance to an obstacle, the size of the obstacle, and so on. Further, the obstacle detection device defines the emission range of the transmitted wave of the millimeter-wave radar, centered on the antenna portion of the millimeter-wave radar sensor, so as to cover the obstacle detection area in the crossing.
[Patent Document 1] Japanese Unexamined Patent Application Publication No. 2016-155482
 The transmitted wave emitted from the railroad-crossing obstacle detection device disclosed in Patent Document 1 needs to be emitted substantially horizontally to the ground in order to cover the obstacle detection area. By emitting the transmitted wave substantially horizontally, the wave reaches even the far edge of the obstacle detection area. However, when the transmitted wave is emitted substantially horizontally to the ground, it does not hit a person who has fallen in the crossing, so the fallen person cannot be detected.
 An object of the present disclosure is to provide a monitoring system, an information processing device, a fall detection method, and a non-transitory computer-readable medium capable of improving the accuracy of detecting a person who has fallen in a railroad crossing.
 The monitoring system according to a first aspect of the present disclosure includes a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reach distances and outputs a detection signal indicating the detection status of an object for each laser beam, and an information processing device that determines that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams that have detected the object has decreased.
 The information processing device according to a second aspect of the present disclosure includes a communication unit that receives, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reach distances, a detection signal indicating the detection status of an object for each laser beam, and a determination unit that determines that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams that have detected the object has decreased.
 In the fall detection method according to a third aspect of the present disclosure, a detection signal indicating the detection status of an object for each of a plurality of laser beams having different reach distances, irradiated from a LiDAR sensor onto a monitoring area, is received from the LiDAR sensor, and the object is determined to have fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams that have detected the object has decreased.
 The program according to a fourth aspect of the present disclosure causes a computer to receive, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reach distances, a detection signal indicating the detection status of an object for each laser beam, and to determine that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams that have detected the object has decreased.
 According to the present disclosure, it is possible to provide a monitoring system, an information processing device, a fall detection method, and a non-transitory computer-readable medium that can improve the accuracy of detecting a person who has fallen in a railroad crossing.
FIG. 1 is a configuration diagram of the monitoring system according to Embodiment 1. FIG. 2 is a configuration diagram of the monitoring system according to Embodiment 2. FIG. 3 is a configuration diagram of the LiDAR sensor according to Embodiment 2. FIG. 4 is a configuration diagram of the information processing apparatus according to Embodiment 2. FIG. 5 is a diagram explaining the process, according to Embodiment 2, of determining whether there is a person who has fallen. FIG. 6 is a diagram showing the flow of the fallen-person detection process according to Embodiment 2. FIG. 7 is a diagram showing the flow of the fallen-person detection process according to Embodiment 3. FIG. 8 is a configuration diagram of the information processing apparatus according to each embodiment.
 (Embodiment 1)
 Hereinafter, embodiments of the present disclosure will be described with reference to the drawings. A configuration example of the monitoring system according to Embodiment 1 will be described using FIG. 1. The monitoring system of FIG. 1 is mainly used to monitor the state inside a railroad crossing. Specifically, the monitoring system of FIG. 1 may be used to detect a person who has fallen at a railroad crossing within the monitoring area 60.
 The monitoring system has a LiDAR (Light Detection and Ranging) sensor 10 and an information processing device 20. The LiDAR sensor irradiates the monitoring area 60 with a plurality of laser beams having different reach distances, and outputs a detection signal indicating the detection status of the object 40 for each laser beam. As for the object 40, FIG. 1 shows both a state in which the object 40 is walking and a state in which it has fallen. The fallen state of the object 40 can be said to be lower than the walking state. The walking state of the object 40 is shown by a solid line, and the fallen state by a dotted line.
 The monitoring area 60 is a space above the ground surface 50 and may be, for example, a space including a railroad crossing. Although FIG. 1 shows the monitoring area 60 with a defined upper end, the upper end need not be defined.
 A plurality of laser beams having different reach distances may be paraphrased as laser beams each having an irradiation angle different from those of the other laser beams. The irradiation angle may be defined by a plane parallel to the ground surface 50 and the irradiation direction of the laser beam. Alternatively, the irradiation angle may be defined by a plane orthogonal to the ground surface 50 and the irradiation direction of the laser beam. The reach distance may be the distance between the LiDAR sensor 10 and the point where the laser beam reaches the ground surface 50. Alternatively, the reach distance may be the distance between the point where the LiDAR sensor 10 meets the ground surface 50 and the point where the laser beam reaches the ground surface 50.
 The laser light emitted from the LiDAR sensor 10 scatters when it hits the object 40 and returns as reflected light. The reflected light is used to determine the presence or absence of the object 40 and to measure the distance to the object 40. When the LiDAR sensor 10 receives the reflected light, it outputs a detection signal indicating the detection status of the object 40 to the information processing device 20. The detection status is, for example, a status indicating whether or not the object 40 has been detected. However, even when the LiDAR sensor 10 receives the reflected light, it cannot determine whether the recognized object 40 is walking or has fallen; it only recognizes that some object 40 has been detected.
 The LiDAR sensor 10 may set the detection statuses of the object 40 for all the irradiated laser beams together in one signal, or may set the detection status in a separate signal for each laser beam. In the detection signal, information indicating whether or not the object 40 was detected may be set, or information indicating whether or not reflected light was received may be set for each laser beam.
 The LiDAR sensor 10 transmits the detection signal to the information processing device 20 via the network 30. The network 30 may be, for example, an IP network. Specifically, the network 30 may be the so-called Internet, or an intranet, that is, a network closed within a company or the like. Alternatively, the LiDAR sensor 10 and the information processing device 20 may communicate directly using a wired cable, short-range wireless communication, or the like. The short-range wireless communication may be, for example, a wireless LAN (Local Area Network) or Bluetooth (registered trademark).
 The information processing device 20 may be a computer device that operates by having a processor execute a program stored in memory. The information processing device 20 may be, for example, a server device.
 The information processing device 20 receives the detection signal from the LiDAR sensor 10. The information processing device 20 determines that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object 40 has been detected and the number of laser beams that have detected the object 40 has decreased.
 For example, as shown in FIG. 1, suppose the object 40 is detected by three of the four laser beams emitted from the LiDAR sensor 10. After that, when the object 40 falls, it lies at a position close to the ground surface 50. In this case, among the plurality of laser beams emitted from the LiDAR sensor 10, the three beams with long reach distances can no longer detect the object 40, and only the beam with the shortest reach can detect it. In this way, when the number of laser beams detecting the object 40 decreases and a beam with a short reach detects the object 40, the information processing device 20 determines that the object 40 has fallen.
 FIG. 1 shows the case where, of the four laser beams, only the beam with the shortest reach detects the fallen object 40; however, for example, the two beams with the shortest reach distances among the four may detect the fallen object. Which laser beams must detect the object 40, once the number of detecting beams has decreased, for the object 40 to be determined to have fallen may be predetermined.
 As described above, the monitoring system of FIG. 1 can use the detection results of the object from a plurality of laser beams with different reach distances when determining whether or not the object 40 has fallen. Multiple laser beams with different reach distances form different angles with the ground surface. Therefore, compared with the case where the laser beams are emitted parallel to the ground surface, the likelihood that at least one beam hits a fallen object can be improved. As a result, the information processing device 20 can determine whether or not the object 40 has fallen by using the detection results of the object 40 from the plurality of laser beams.
 (Embodiment 2)
 Next, a configuration example of the monitoring system according to Embodiment 2 will be described using FIG. 2. The monitoring system of FIG. 2 includes a LiDAR sensor 10, an information processing device 20, and a camera 70. The LiDAR sensor 10 and the information processing device 20 are the same as the LiDAR sensor 10 and the information processing device 20 described with reference to FIG. 1. Detailed description of the functions and operations of the LiDAR sensor 10 and the information processing device 20 that are the same as in FIG. 1 is omitted.
 The camera 70 is used to photograph the monitoring area 60. The camera 70 may be, for example, a far-infrared camera, a general digital camera, a video camera, or the like.
 The camera 70 transmits captured images, generated by photographing the monitoring area 60, to the information processing device 20. The camera 70 may transmit the captured images via the network 30, or over a wired cable, short-range wireless communication, or the like.
 Next, a configuration example of the LiDAR sensor 10 according to the second embodiment will be described with reference to FIG. 3. The LiDAR sensor 10 has an irradiation unit 11 and a communication unit 12. The irradiation unit 11 emits laser light and may also receive the reflected light. The irradiation unit 11 may be composed of, for example, an amplifier, an ADC (Analog Digital Converter), a photodiode, and the like; in other words, it may consist of a transmitter that emits laser light and a receiver that receives the reflected light. The irradiation unit 11 emits a plurality of laser beams at different irradiation angles, either by changing the irradiation direction of a single transmitter or by using a plurality of transmitters with fixed irradiation directions.
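 For intuition about how beams emitted at different angles acquire different reaches, the following sketch computes where each beam would meet a flat ground surface; the mounting height and angle values are hypothetical, not taken from the publication.

```python
import math

SENSOR_HEIGHT_M = 2.5  # assumed mounting height of the LiDAR sensor 10

# Hypothetical depression angles (degrees below horizontal) for layers 1-4;
# a shallower angle travels farther before reaching the ground surface 50.
LAYER_ANGLES_DEG = [5.0, 8.0, 12.0, 20.0]

def ground_reach(height_m: float, depression_deg: float) -> float:
    """Horizontal distance at which a beam emitted at the given
    depression angle intersects flat ground."""
    return height_m / math.tan(math.radians(depression_deg))

for layer, angle in enumerate(LAYER_ANGLES_DEG, start=1):
    print(f"layer {layer}: reach = {ground_reach(SENSOR_HEIGHT_M, angle):.1f} m")
```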
 The irradiation unit 11 outputs to the communication unit 12 information indicating for which of the emitted laser beams reflected light was received. The irradiation unit 11 may determine, for example, that the object 40 exists in the irradiation direction of a laser beam whose reflected light could be received. For example, among the laser beams emitted from the LiDAR sensor 10 in FIG. 1, let the beams be called the first-layer, second-layer, third-layer, and fourth-layer laser beams in descending order of reach. In FIG. 1, the irradiation unit 11 first outputs to the communication unit 12 information indicating that the object 40 has been detected by the first- to third-layer laser beams. After the object 40 falls, the irradiation unit 11 outputs information indicating that the object 40 has been detected by the fourth-layer laser beam.
 The communication unit 12 communicates with the information processing device 20 via the network 30. The communication unit 12 may be, for example, a network interface that transmits and receives data to and from the network 30. The communication unit 12 transmits to the information processing device 20 a detection signal containing information indicating the detection result for the object 40, for example, information indicating in which layers' laser beams the object 40 was detected. The communication unit 12 may transmit a detection signal each time it receives information from the irradiation unit 11, or it may transmit detection signals at predetermined timings.
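 The publication does not fix a wire format for the detection signal; a minimal sketch, assuming a JSON payload that lists the detecting layers, might look like this.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DetectionSignal:
    """Hypothetical payload for the detection signal sent by the
    communication unit 12; the field names are assumptions."""
    sensor_id: str
    timestamp: float
    detected_layers: list[int]  # e.g. [1, 2, 3] before the fall, [4] after

def encode_signal(detected_layers: list[int], sensor_id: str = "lidar-10") -> bytes:
    signal = DetectionSignal(sensor_id, time.time(), sorted(detected_layers))
    return json.dumps(asdict(signal)).encode("utf-8")

# Before the fall, the first- to third-layer beams detect the object 40:
payload_before = encode_signal([1, 2, 3])
# ... and after the fall, only the fourth layer does:
payload_after = encode_signal([4])
```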
 Next, a configuration example of the information processing device 20 according to the second embodiment will be described with reference to FIG. 4. The information processing device 20 has a communication unit 21 and a determination unit 22. The communication unit 21 and the determination unit 22 may be software or modules whose processing is executed by a processor running a program stored in memory, or they may be hardware such as circuits or chips.
 The communication unit 21 receives captured images from the camera 70 and detection signals from the LiDAR sensor 10 via the network 30, and outputs the received captured images and detection signals to the determination unit 22.
 The determination unit 22 determines whether there is a person who has fallen in the monitoring area 60. The determination process in the determination unit 22 will now be described with reference to FIG. 5. The determination unit 22 performs detection processing for the object 40 on the captured images. For example, the determination unit 22 detects pedestrians contained in a captured image by using a learning model in which pedestrian features have been learned through deep learning or the like. Each time it receives a captured image from the camera 70, the determination unit 22 performs pedestrian detection with that image as input data to the learning model. The determination unit 22 can detect a pedestrian when pedestrian features appear in the captured image. Cases in which the determination unit 22 cannot detect a pedestrian include, for example, the pedestrian moving out of the monitoring area 60, or the pedestrian falling so that pedestrian features no longer appear in the captured image. Another such case is when the pedestrian cannot be distinguished from the ground surface 50 in an image captured with a far-infrared camera. Specifically, when a pedestrian falls and comes into contact with the ground surface 50, the temperature difference between the pedestrian and the ground surface 50 may become small; in such a case the pedestrian may be indistinguishable from the ground surface 50 in the far-infrared image.
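 A rough sketch of this per-frame check follows, with PedestrianModel standing in for the trained deep-learning model; its interface is an assumption here, not specified by the publication.

```python
from typing import Optional, Tuple

class PedestrianModel:
    """Placeholder for a model trained on standing-pedestrian features."""
    def predict(self, image) -> Optional[Tuple[int, int, int, int]]:
        """Return a bounding box (x, y, w, h), or None if no pedestrian."""
        raise NotImplementedError  # supplied by the trained model

def pedestrian_visible(model: PedestrianModel, image) -> bool:
    # True while standing-pedestrian features appear in the frame; becomes
    # False when the person leaves the area, falls, or blends into the
    # ground surface in a far-infrared image.
    return model.predict(image) is not None
```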
 Further, the determination unit 22 determines, from the detection signal received from the LiDAR sensor 10, in which layers' laser beams the object 40 is detected, and also whether the object 40 is detected by a plurality of laser beams.
 For example, the determination unit 22 determines that it detected a pedestrian in the captured images at timings before time t, but can no longer detect the pedestrian in the captured images at timings after time t. Further, the determination unit 22 determines that the object 40 was detected in a plurality of layers at timings before time t, but that only the fourth layer detects the object 40 at timings after time t.
 In other words, at time t the determination unit 22 becomes unable to detect the pedestrian in the captured images, and at time t the object 40, which had been detected in a plurality of layers, comes to be detected only in the fourth layer. When the object 40 comes to be detected only in the fourth layer at the timing when the pedestrian can no longer be detected in the captured images, the determination unit 22 determines that there is a person who has fallen. The determination unit 22 may determine that there is a fallen person when the timing at which the pedestrian can no longer be detected in the captured images and the timing at which the object 40 comes to be detected only in the fourth layer are substantially the same. "Substantially the same" includes the case where the two timings fall within a predetermined range of each other.
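 A minimal sketch of the "substantially the same timing" test, assuming a window of 0.5 s; the publication leaves the predetermined range open.

```python
TIMING_WINDOW_S = 0.5  # assumed width of the predetermined range

def timings_coincide(t_camera_lost: float, t_only_fourth_layer: float,
                     window_s: float = TIMING_WINDOW_S) -> bool:
    """True when the pedestrian's disappearance from the image and the
    switch to fourth-layer-only LiDAR detection happen close together."""
    return abs(t_camera_lost - t_only_fourth_layer) <= window_s
```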
 The determination unit 22 may also determine that the pedestrian and the object 40 are the same person when the position of the pedestrian detected in the captured image matches the position of the object 40 detected by the LiDAR sensor 10, or when the distance between the two positions is shorter than a predetermined distance. The position of the pedestrian detected in the captured image may be specified, for example, by converting image coordinates or camera coordinates into world coordinates. Image coordinates or camera coordinates may be expressed using pixel positions, and world coordinates using latitude and longitude, for example. The determination unit 22 may hold in advance, for example, table information indicating the correspondence between image or camera coordinates and world coordinates.
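 A hedged sketch of this position-matching step follows; the pixel-to-world table entries and the 1 m threshold are made-up illustrative values, and a real table would cover the whole monitoring area 60.

```python
import math

# Illustrative correspondence table from pixel coordinates to world
# coordinates (latitude, longitude); the entries are hypothetical.
PIXEL_TO_WORLD = {
    (320, 240): (35.6812, 139.7671),
    (400, 260): (35.6813, 139.7673),
}

def same_person(pixel_pos, lidar_world_pos, max_dist_m: float = 1.0) -> bool:
    """Treat the camera detection and the LiDAR detection as the same
    person when their world positions lie within max_dist_m metres."""
    world = PIXEL_TO_WORLD.get(pixel_pos)
    if world is None:
        return False
    # Small-area approximation: one degree of latitude is about 111 km.
    d_lat = (world[0] - lidar_world_pos[0]) * 111_000
    d_lon = (world[1] - lidar_world_pos[1]) * 111_000 * math.cos(math.radians(world[0]))
    return math.hypot(d_lat, d_lon) <= max_dist_m
```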
 As for the position of the object 40 detected by the LiDAR sensor 10, for example, the detection position for each layer's laser beam may be determined in advance. Alternatively, the distance from the LiDAR sensor 10 to the object 40 may be estimated from the timing at which the reflected light is received, and the determination unit 22 may estimate the position of the object 40 based on the estimated distance.
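 The reception-timing estimate is the standard time-of-flight relation, shown here as a small sketch: the beam travels to the object and back, hence the factor of two.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_round_trip(round_trip_s: float) -> float:
    """Distance from the LiDAR sensor 10 to the object 40, given the
    elapsed time between emission and reception of the reflection."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A reflection received 200 ns after emission puts the object ~30 m away.
print(distance_from_round_trip(200e-9))  # ≈ 29.98
```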
 Next, the flow of fallen-person detection processing according to the second embodiment will be described with reference to FIG. 6. First, the determination unit 22 detects a pedestrian in a captured image (S11), for example by using machine learning such as deep learning.
 Next, the determination unit 22 determines that the object 40 is detected by the first- to third-layer laser beams of the LiDAR sensor 10 (S12), using the detection signal received from the LiDAR sensor 10 to identify which laser beams have detected the object 40. The determination unit 22 then reaches a state in which it can no longer detect the pedestrian in the captured images (S13).
 Next, the determination unit 22 determines whether the LiDAR sensor 10 has detected the object 40 only with the fourth-layer laser beam (S14). Alternatively, the determination unit 22 may determine whether the LiDAR sensor 10 has detected the object 40 only with the laser beam of a predetermined layer.
 When the determination unit 22 recognizes that the object 40 has been detected only by the fourth-layer laser beam, it determines that the pedestrian has fallen (S15). When the object 40 is detected by a laser beam other than the fourth layer, or is not detected by any laser beam, the determination unit 22 ends the process.
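 Putting steps S11 to S15 together, here is a minimal sketch of the flow of FIG. 6, reusing the hypothetical layer-set convention from the earlier sketches; it is an illustration under those assumptions, not the publication's implementation.

```python
FALL_LAYERS = {4}  # the predetermined short-reach layer(s), assumed to be {4}

def detect_fall(pedestrian_was_visible: bool,
                pedestrian_visible_now: bool,
                layers_before: set[int],
                layers_now: set[int]) -> bool:
    # S11/S12: a pedestrian was seen and several layers detected the object.
    if not (pedestrian_was_visible and len(layers_before) > 1):
        return False
    # S13: the pedestrian disappears from the captured image.
    if pedestrian_visible_now:
        return False
    # S14/S15: only the predetermined short-reach layer(s) still detect it.
    return bool(layers_now) and layers_now <= FALL_LAYERS

assert detect_fall(True, False, {1, 2, 3}, {4}) is True
assert detect_fall(True, False, {1, 2, 3}, {2}) is False
```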
 As described above, the information processing device 20 according to the second embodiment can determine whether there is a fallen person by analyzing the pedestrian detection status in the images captured by the camera 70 and the detection status of the object 40 at the LiDAR sensor 10.
 The plurality of laser beams emitted by the LiDAR sensor 10 are directed at the monitoring area 60 at mutually different angles. Therefore, the LiDAR sensor 10 can detect the object 40 in the monitoring area 60 regardless of the height of the object 40.
 Furthermore, to detect the motion of a pedestrian falling within the monitoring area 60, the information processing device 20 can use both the change in which laser beams of the LiDAR sensor 10 detect the pedestrian and the change in the pedestrian detection status in the captured images. The information processing device 20 can thereby detect that a pedestrian has fallen.
 The plurality of laser beams emitted by the LiDAR sensor 10, including the first to fourth layers, may also have their reaching points on the ground surface 50 within the monitoring area 60. By limiting the reaching points of the laser beams emitted from the LiDAR sensor 10 to the ground surface 50 within the monitoring area 60, detection of objects outside the monitoring area 60 can be avoided.
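 As a design-time check, one can invert the earlier reach calculation and choose depression angles that keep every reaching point inside the monitoring area; the numbers below are illustrative assumptions.

```python
import math

SENSOR_HEIGHT_M = 2.5  # assumed mounting height, as in the earlier sketch

def depression_angle_deg(height_m: float, reach_m: float) -> float:
    """Depression angle whose beam lands reach_m ahead on flat ground."""
    return math.degrees(math.atan2(height_m, reach_m))

# If the far edge of the monitoring area 60 is 25 m away, the longest-reach
# (first-layer) beam must dip at least this far below horizontal:
print(depression_angle_deg(SENSOR_HEIGHT_M, 25.0))  # ≈ 5.7 degrees
```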
 (Embodiment 3)
 Next, pedestrian detection processing according to the third embodiment will be described with reference to FIG. 7. FIG. 7 shows the processing performed after the determination unit 22 answers YES in step S14 of FIG. 6, that is, the state in which the pedestrian cannot be detected in the captured image and the object 40 is detected only by the fourth-layer laser beam.
 In such a state, the determination unit 22 rotates the captured image in which it could not detect the pedestrian and attempts to detect the pedestrian. When learning pedestrians using deep learning, the determination unit 22 mainly learns standing postures; it therefore cannot detect a pedestrian who has fallen.
 The determination unit 22 therefore rotates the captured image to bring the fallen pedestrian closer to a standing orientation, and attempts pedestrian detection by applying the learning model, trained on standing postures, to the rotated image. For example, the determination unit 22 may rotate the captured image by 90 degrees, or by 90 ± α degrees (α being an arbitrary positive value).
 If the determination unit 22 cannot detect a pedestrian in the rotated captured image, it may rotate the image again, repeating the rotation multiple times until a pedestrian is detected. If a pedestrian can be detected within a predetermined number of rotations, the determination unit 22 determines that the pedestrian has fallen (S22). If no pedestrian can be detected within the predetermined number of rotations, the determination unit 22 ends the process.
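 A sketch of this rotate-and-retry loop follows, under the PedestrianModel assumption introduced earlier; the angle schedule (90, then 90 ± α) and the retry bound are illustrative, and the frame is assumed to be a PIL image.

```python
from typing import Sequence
from PIL import Image  # pillow; the input frame is assumed to be a PIL image

def detect_fallen_by_rotation(model, frame: "Image.Image",
                              angles_deg: Sequence[float] = (90, 80, 100)) -> bool:
    """Return True when the standing-posture model finds a pedestrian in
    some rotated copy of the frame, i.e. the person has likely fallen."""
    for angle in angles_deg:  # predetermined, bounded number of attempts
        rotated = frame.rotate(angle, expand=True)  # expand keeps the whole body in view
        if model.predict(rotated) is not None:
            return True
    return False
```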
 As described above, the information processing device 20 of the third embodiment can detect a fallen pedestrian by using a rotated captured image. This improves the accuracy of determining whether the object 40 detected by the LiDAR sensor 10 is a fallen pedestrian.
 The cases in which the determination unit 22 can no longer detect a pedestrian in the captured images also include the case where the pedestrian simply walks away from the monitoring area 60. If the determination unit 22 were to rotate the captured image and attempt pedestrian detection even in such cases, the processing load of the information processing device 20 would increase.
 In the third embodiment, by contrast, among the cases in which a pedestrian can no longer be detected in the captured images, the determination unit 22 rotates the captured image and attempts pedestrian detection only when the object 40 is detected by the fourth-layer laser beam. The number of cases in which the determination unit 22 attempts rotation-based detection can therefore be reduced compared with not using the laser beam information. As a result, the third embodiment can suppress the increase in processing load compared with attempting rotation-based detection in every case in which a pedestrian can no longer be detected in the captured images.
 FIG. 8 is a block diagram showing a configuration example of the information processing device 20. Referring to FIG. 8, the information processing device 20 includes a network interface 1201, a processor 1202, and a memory 1203. The network interface 1201 is used to communicate with the other network node devices that make up the communication system, and may be used to perform wireless communication. For example, the network interface 1201 may be used for wireless LAN communication specified in the IEEE 802.11 series or mobile communication specified by the 3GPP (3rd Generation Partnership Project). Alternatively, the network interface 1201 may include, for example, a network interface card (NIC) compliant with the IEEE 802.3 series.
 The processor 1202 reads software (computer programs) from the memory 1203 and executes it, thereby performing the processing of the information processing device 20 described with the flowcharts and sequences in the above embodiments. The processor 1202 may be, for example, a microprocessor, an MPU (Micro Processing Unit), or a CPU (Central Processing Unit), and may include a plurality of processors.
 The memory 1203 is composed of a combination of volatile and non-volatile memory, and may include storage located away from the processor 1202. In that case, the processor 1202 may access the memory 1203 via an I/O interface (not shown).
 In the example of FIG. 8, the memory 1203 is used to store a group of software modules. The processor 1202 can perform the processing of the information processing device 20 described in the above embodiments by reading these software modules from the memory 1203 and executing them.
 As described with reference to FIG. 8, each of the processors included in the information processing device 20 and the like executes one or more programs containing a group of instructions for causing a computer to perform the algorithms described with reference to the drawings.
 In the above examples, the program can be stored and supplied to a computer using various types of non-transitory computer readable media. Non-transitory computer readable media include various types of tangible storage media: magnetic recording media (for example, flexible disks, magnetic tapes, and hard disk drives), magneto-optical recording media (for example, magneto-optical disks), CD-ROM (Read Only Memory), CD-R, CD-R/W, and semiconductor memories (for example, mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, and RAM (Random Access Memory)). The program may also be supplied to a computer by various types of transitory computer readable media, examples of which include electric signals, optical signals, and electromagnetic waves. A transitory computer readable medium can supply the program to a computer via a wired communication path such as an electric wire or an optical fiber, or via a wireless communication path.
 Note that the present disclosure is not limited to the above embodiments and can be modified as appropriate without departing from the gist thereof. The present disclosure may also be implemented by combining the respective embodiments as appropriate.
 Some or all of the above embodiments may also be described as in the following appendices, but are not limited thereto.
 (Appendix 1)
 A monitoring system comprising: a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reaches and outputs a detection signal indicating the detection status of an object for each laser beam; and an information processing device that determines that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams detecting the object has decreased.
 (Appendix 2)
 The monitoring system according to Appendix 1, wherein the information processing device determines that the object has fallen when the detection signal of the laser beam with the shortest reach among the plurality of laser beams detects the object.
 (Appendix 3)
 The monitoring system according to Appendix 1 or 2, wherein the LiDAR sensor controls the irradiation directions of the plurality of laser beams so that the plurality of laser beams reach the ground surface within the monitoring area.
 (Appendix 4)
 The monitoring system according to any one of Appendices 1 to 3, further comprising a camera that photographs the monitoring area, wherein the information processing device recognizes the object in images captured by the camera and determines that the object has fallen when it can no longer recognize the object in the captured images in which it had been able to recognize the object.
 (Appendix 5)
 The monitoring system according to Appendix 4, wherein the information processing device determines that the object has fallen when the timing at which the number of laser beams detecting the object decreases and the timing at which the object can no longer be recognized in the captured images that had recognized the object are within a predetermined range.
 (Appendix 6)
 The monitoring system according to Appendix 4 or 5, wherein the information processing device rotates a captured image in which the object can no longer be recognized, and determines that the object has fallen when the object can be recognized in the rotated captured image using a learning model that has learned the state of a standing person.
 (Appendix 7)
 An information processing device comprising: a communication unit that receives, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reaches, a detection signal indicating the detection status of an object for each laser beam; and a determination unit that determines that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams detecting the object has decreased.
 (Appendix 8)
 The information processing device according to Appendix 7, wherein the determination unit determines that the object has fallen when the detection signal of the laser beam with the shortest reach among the plurality of laser beams detects the object.
 (Appendix 9)
 A fall detection method comprising: receiving, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reaches, a detection signal indicating the detection status of an object for each laser beam; and determining that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams detecting the object has decreased.
 (Appendix 10)
 A non-transitory computer readable medium storing a program that causes a computer to: receive, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reaches, a detection signal indicating the detection status of an object for each laser beam; and determine that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams detecting the object has decreased.
 10 LiDAR sensor
 11 Irradiation unit
 12 Communication unit
 20 Information processing device
 21 Communication unit
 22 Determination unit
 30 Network
 40 Object
 50 Ground surface
 60 Monitoring area
 70 Camera

Claims (10)

  1.  A monitoring system comprising:
     a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reaches and outputs a detection signal indicating the detection status of an object for each laser beam; and
     an information processing device that determines that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams detecting the object has decreased.

  2.  The monitoring system according to claim 1, wherein the information processing device determines that the object has fallen when the detection signal of the laser beam with the shortest reach among the plurality of laser beams detects the object.

  3.  The monitoring system according to claim 1 or 2, wherein the LiDAR sensor controls the irradiation directions of the plurality of laser beams so that the plurality of laser beams reach the ground surface within the monitoring area.

  4.  The monitoring system according to any one of claims 1 to 3, further comprising a camera that photographs the monitoring area, wherein the information processing device recognizes the object in images captured by the camera and determines that the object has fallen when it can no longer recognize the object in the captured images in which it had been able to recognize the object.

  5.  The monitoring system according to claim 4, wherein the information processing device determines that the object has fallen when the timing at which the number of laser beams detecting the object decreases and the timing at which the object can no longer be recognized in the captured images that had recognized the object are within a predetermined range.

  6.  The monitoring system according to claim 4 or 5, wherein the information processing device rotates a captured image in which the object can no longer be recognized, and determines that the object has fallen when the object can be recognized in the rotated captured image using a learning model that has learned the state of a standing person.

  7.  An information processing device comprising:
     a communication unit that receives, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reaches, a detection signal indicating the detection status of an object for each laser beam; and
     a determination unit that determines that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams detecting the object has decreased.

  8.  The information processing device according to claim 7, wherein the determination unit determines that the object has fallen when the detection signal of the laser beam with the shortest reach among the plurality of laser beams detects the object.

  9.  A fall detection method comprising:
     receiving, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reaches, a detection signal indicating the detection status of an object for each laser beam; and
     determining that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams detecting the object has decreased.

  10.  A non-transitory computer readable medium storing a program that causes a computer to:
     receive, from a LiDAR sensor that irradiates a monitoring area with a plurality of laser beams having different reaches, a detection signal indicating the detection status of an object for each laser beam; and
     determine that the object has fallen when the detection status of a laser beam whose reach is shorter than a predetermined reach indicates that the object has been detected and the number of laser beams detecting the object has decreased.
PCT/JP2019/011469 2019-03-19 2019-03-19 Surveillance system, information processing device, fall detection method, and non-temporary computer readable medium WO2020188748A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/437,659 US20220163671A1 (en) 2019-03-19 2019-03-19 Surveillance system, information processing device, fall detection method, and non-transitory computer readable medium
PCT/JP2019/011469 WO2020188748A1 (en) 2019-03-19 2019-03-19 Surveillance system, information processing device, fall detection method, and non-temporary computer readable medium
JP2021506893A JP7231011B2 (en) 2019-03-19 2019-03-19 MONITORING SYSTEM, INFORMATION PROCESSING DEVICE, FALL DETECTION METHOD AND PROGRAM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/011469 WO2020188748A1 (en) 2019-03-19 2019-03-19 Surveillance system, information processing device, fall detection method, and non-temporary computer readable medium

Publications (1)

Publication Number Publication Date
WO2020188748A1 true WO2020188748A1 (en) 2020-09-24

Family

ID=72519739

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/011469 WO2020188748A1 (en) 2019-03-19 2019-03-19 Surveillance system, information processing device, fall detection method, and non-temporary computer readable medium

Country Status (3)

Country Link
US (1) US20220163671A1 (en)
JP (1) JP7231011B2 (en)
WO (1) WO2020188748A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024010898A1 (en) * 2022-07-08 2024-01-11 Herzog Technologies, Inc. System and method for railway right-of-way occupancy detection


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8115641B1 (en) * 2008-04-18 2012-02-14 Dempsey Michael K Automatic fall detection system
US20100063885A1 (en) 2008-09-08 2010-03-11 International Business Machines Corporation Apparatus, system, and method for advertisement complexity scaling via traffic analysis
JP6519181B2 (en) 2015-01-09 2019-05-29 富士通株式会社 Method for monitoring trend of change in disorder degree, program for monitoring trend in change in disorder degree and device for monitoring trend in change in disorder degree
FI126922B (en) 2016-03-29 2017-08-15 Maricare Oy Method and system of control

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006522959A (en) * 2002-11-21 2006-10-05 セキュマネジメント ビー.ヴイ. Method and apparatus for fall prevention and detection
JP2004210045A (en) * 2002-12-27 2004-07-29 Ishikawajima Harima Heavy Ind Co Ltd Monitoring device
JP2012027899A (en) * 2010-07-22 2012-02-09 China Medical Univ Falling alarm system
JP2014016742A (en) * 2012-07-06 2014-01-30 Nippon Signal Co Ltd:The Fall detection system
JP2015182556A (en) * 2014-03-24 2015-10-22 公益財団法人鉄道総合技術研究所 Monitoring system for railroad-crossing passer and monitoring program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113311428A (en) * 2021-05-25 2021-08-27 山西大学 Intelligent human body falling monitoring system based on millimeter wave radar and identification method
CN113311428B (en) * 2021-05-25 2023-05-30 山西大学 Human body falling intelligent monitoring system and falling identification method based on millimeter wave radar

Also Published As

Publication number Publication date
JPWO2020188748A1 (en) 2020-09-24
US20220163671A1 (en) 2022-05-26
JP7231011B2 (en) 2023-03-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19920194

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021506893

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19920194

Country of ref document: EP

Kind code of ref document: A1