WO2020105527A1 - Image analysis device, image analysis system, and control program - Google Patents

Image analysis device, image analysis system, and control program

Info

Publication number
WO2020105527A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
image analysis
information
camera
Prior art date
Application number
PCT/JP2019/044580
Other languages
English (en)
Japanese (ja)
Inventor
将則 吉澤
Original Assignee
コニカミノルタ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by コニカミノルタ株式会社 filed Critical コニカミノルタ株式会社
Publication of WO2020105527A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to an image analysis device, an image analysis system, and a control program.
  • It is common practice to analyze the behavior of a person being imaged based on images from a surveillance camera, detect a person whose behavior is abnormal (including fraudulent behavior), and notify a security guard or the like of that person.
  • Patent Document 1 discloses a monitoring device that determines, by image analysis of a plurality of time-series frame images obtained from an imaging camera, whether or not a person's behavior corresponds to a specific behavior pattern, and generates an alarm if it does.
  • However, the quality of the image obtained from the surveillance camera is affected by the installation environment (surveillance environment); in environments such as rainy weather or nighttime the image quality deteriorates significantly, so the determination cannot be made accurately.
  • The surveillance system disclosed in Patent Document 2 aims to analyze the behavior of a person in the surveillance area without being affected by the shooting environment (surveillance environment); instead of a surveillance camera, it determines the behavior of the person using position information of the person obtained from a laser positioning device (also called a laser radar or laser lidar).
  • However, the laser radar may not be able to correctly detect the target object in every environment.
  • For example, the laser radar may detect water droplets in the atmosphere in front of the target object and fail to correctly measure the distance to the target object.
  • The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image analysis device capable of analyzing an object in an imaging region with high accuracy regardless of the shooting conditions.
  • An image analysis device including: a data acquisition unit that acquires measurement data from a lidar that measures distance values to objects in a first imaging region, and image data from a camera that photographs a second imaging region and generates the image data; a distance information analysis unit that detects, from the measurement data acquired from the lidar, object information including the position and size of an object; an image analysis unit that detects object information from the image data acquired from the camera; a shooting condition acquisition unit that acquires shooting conditions in the first and second imaging regions; and an information integration unit that, based on the acquired shooting conditions, performs integration processing that integrates the detection result of the distance information analysis unit and the detection result of the image analysis unit in the common area where the first and second imaging regions overlap, and generates object information.
  • The image analysis device according to (1) above, wherein the image analysis unit performs the analysis by inference processing using a neural network that uses inference processing parameters generated by learning, from the captured image data, image data acquired from the camera under predetermined shooting conditions among a plurality of shooting conditions.
  • The image analysis device according to any one of (1) to (6) above, further including a display unit that displays a display image based on the image data obtained from the camera, wherein the object information generated by the information integration unit is superimposed and displayed on the display image.
  • The image analysis device according to any one of (1) to (6) above, wherein the information integration unit determines, according to the shooting conditions, whether to adopt one or both of the detection result of the distance information analysis unit and the detection result of the image analysis unit, and performs the integration processing according to the determination result.
  • The image analysis device described above, wherein the information integration unit adopts, for each detected object, either or both of the detection result of the distance information analysis unit and the detection result of the image analysis unit according to the shooting conditions.
  • The image analysis device according to (9) or (10) above, further including a display unit that displays a display image based on the image data obtained from the camera, wherein the object information generated by the information integration unit is superimposed and displayed on the display image, and the displayed object information is displayed in a different mode according to the determination result of the information integration unit.
  • The image analysis device according to any one of (1) to (11) above, wherein the lidar and the camera continuously measure and photograph at predetermined frame rates, the number of frames per unit time processed by the image analysis unit is less than the number of frames per unit time processed by the distance information analysis unit, and the information integration unit executes the integration processing at the timing when the processing of the image analysis unit is completed.
  • The image analysis device according to (1) above, further including a warning determination unit that determines the moving direction of an object detected as a vehicle in the common area and outputs an alert signal when the moving direction is not a normal direction.
  • The image analysis device according to any one of (1) to (12) above, further including a report determination unit that determines the distance between an object detected as a person and an object detected as a vehicle, or between an object detected as a person and an object detected as a machine, and outputs an alert signal when the distance is equal to or less than a predetermined value.
  • An image analysis system including: a lidar that measures distance values to objects to be measured in the first imaging region and generates measurement data; a camera that photographs a second imaging region including at least a part of the first imaging region and generates image data; and the image analysis device described above.
  • A control program executed by a computer that controls the image analysis device, for causing the computer to perform processing including: a step (a) of acquiring measurement data from a lidar that measures distance values to objects to be measured in a first imaging region; a step (b) of acquiring image data from a camera that photographs a second imaging region including at least a part of the first imaging region and generates the image data; a step (c) of detecting, from the measurement data acquired in step (a), object information including the position and size of an object; a step (d) of detecting object information from the image data acquired in step (b); a step (e) of acquiring shooting conditions in the first and second imaging regions; and a step (f) of performing, based on the shooting conditions acquired in step (e), integration processing that integrates the detection result of step (c) and the detection result of step (d) in the common area where the first and second imaging regions overlap, to generate object information.
  • The image analysis device includes a distance information analysis unit that detects object information including the position and size of an object from the measurement data acquired from the lidar, an image analysis unit that detects object information from the image data acquired from the camera, and an information integration unit that performs integration processing that integrates the detection result of the distance information analysis unit and the detection result of the image analysis unit in the common area to generate object information.
  • FIG. 1 is a block diagram showing the configuration of the image analysis system according to the first embodiment. FIG. 2 is a sectional view showing the schematic structure of the lidar.
  • FIG. 3 is a schematic diagram showing a state in which the lidar is arranged so that a road on which vehicles travel is the imaging region, together with the state in which the imaging region is scanned by the lidar. FIG. 4 is a flowchart showing the image analysis processing executed by the image analysis device.
  • FIG. 5 is a table showing the correspondence between detection results, shooting conditions, and determination results. FIG. 6 is a block diagram showing the configuration of the image analysis system according to the second embodiment. FIG. 7 is a block diagram showing the configuration of the image analysis system according to the third embodiment. FIG. 8 is a table showing the correspondence between detection results, shooting conditions, and determination results. FIG. 9 is a block diagram showing the configuration of the image analysis system according to the fourth embodiment.
  • FIG. 1 is a block diagram showing the main configuration of the image analysis system 10 according to the first embodiment.
  • The image analysis system 10 includes a lidar 100, a camera 200, an image analysis device 300, and a display unit 400.
  • The lidar 100 (LiDAR: Light Detection and Ranging) scans the imaging region 710 (see FIG. 3) by the ToF (time of flight) method using infrared (about 800 to 1000 nm) laser light to measure distances to objects. Details of the lidar 100 will be described later.
  • The camera 200 is a well-known image capturing device; it includes an image sensor having sensitivity in the visible light region, such as a CCD or CMOS sensor, and an optical system such as a lens, and generates image data (captured images).
  • The lidar 100 and the camera 200 have their optical axes oriented in substantially the same direction, and at least a part of the imaging region 710 measured by the lidar 100 and the imaging region 720 photographed by the camera 200 overlap.
  • the overlapping area is referred to as a common area.
  • the display unit 400 is, for example, a liquid crystal display, and displays various information.
  • the display unit 400 is an output destination of the image analysis device 300 and displays the generated display image using the integration result.
  • FIG. 2 is a cross-sectional view showing the schematic configuration of the lidar 100.
  • FIG. 3 is a schematic diagram showing, as an example, a state in which the lidar 100 and the camera 200 are arranged so that the imaging regions 710 and 720 are located on the road 61.
  • The lidar 100 (and the camera 200) is arranged at the top of the pillar 62, facing the road 61.
  • the first shooting area 710 includes the second shooting area 720.
  • the common area 730 in which the first and second shooting areas 710 and 720 overlap is equivalent to the second shooting area 720.
  • objects (moving objects) 81 and 82 exist on the road 61.
  • the object 81 is a vehicle (normal passenger car) and the object 82 is a pedestrian.
  • The lidar 100 has a light projecting/receiving unit 111 and a distance measurement point cloud data generation unit 112.
  • the light projecting / receiving unit 111 has a semiconductor laser 51, a collimator lens 52, a mirror unit 53, a lens 54, a photodiode 55, a motor 56, and a housing 57 that houses each of these components.
  • The distance measurement point cloud data generation unit 112 is also arranged in the housing 57. Based on the received light signal, it generates distance measurement point cloud data (also simply called "measurement data"), which is composed of a plurality of pixels indicating the distribution of distance values to objects in the measurement space.
  • This distance measuring point group data is also called a distance image or a distance map.
  • the semiconductor laser 51 emits a pulsed laser beam.
  • the collimator lens 52 converts the divergent light from the semiconductor laser 51 into parallel light.
  • the mirror unit 53 scans and projects the laser light made parallel by the collimator lens 52 toward the measurement area by the rotating mirror surface, and reflects the reflected light from the object.
  • the lens 54 collects the reflected light from the object reflected by the mirror unit 53.
  • the photodiode 55 receives the light condensed by the lens 54 and has a plurality of pixels arranged in the Y direction.
  • the motor 56 rotationally drives the mirror unit 53.
  • The distance measurement point cloud data generation unit 112 controls the operation of the light projecting/receiving unit 111 and generates continuous frames (of distance measurement point cloud data) at a predetermined cycle (for example, several Hz to 20 Hz).
  • The distance measurement point cloud data generation unit 112 obtains the distance information (distance value) from the time interval (time difference) between the emission timing of the semiconductor laser 51 of the lidar 100 and the light reception timing of the photodiode 55.
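  • A minimal sketch (not code from the patent) of the time-of-flight relation described above, giving one distance value per pixel from the emission/reception time difference:

```python
C = 299_792_458.0  # speed of light [m/s]

def tof_distance(emit_time_s: float, receive_time_s: float) -> float:
    """Distance value [m] for one pixel from the round-trip time difference."""
    dt = receive_time_s - emit_time_s
    return C * dt / 2.0  # halve it: the pulse travels to the object and back

# Example: a 200 ns round trip corresponds to roughly 30 m.
print(tof_distance(0.0, 200e-9))
```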
  • The distance measurement point cloud data generation unit 112 is composed of a CPU (Central Processing Unit) and a memory, and obtains the distance measurement point cloud data by executing programs stored in the memory.
  • a dedicated hardware circuit for generating distance measuring point cloud data may be provided.
  • Alternatively, the distance measurement point cloud data generation unit 112 may be omitted and the distance information analysis unit 320, described later, may perform this function. In this case, the lidar 100 simply outputs the light reception signal corresponding to each pixel to the image analysis device 300.
  • the semiconductor laser 51 and the collimator lens 52 constitute the emitting section 501
  • the lens 54 and the photodiode 55 constitute the light receiving section 502.
  • the optical axes of the emitting section 501 and the light receiving section 502 are preferably orthogonal to the rotation axis 530 of the mirror unit 53.
  • The box-shaped housing 57, fixedly installed on a rigid body such as the pillar 62, includes an upper wall 57a, a lower wall 57b facing the upper wall 57a, and a side wall 57c connecting the upper wall 57a and the lower wall 57b.
  • An opening 57d is formed in a part of the side wall 57c, and a transparent plate 58 is attached to the opening 57d.
  • The mirror unit 53 has a shape in which two quadrangular pyramids are joined facing each other and integrated; that is, it has four pairs of mirror surfaces 531a and 531b tilted so as to face each other (the number of pairs is not limited to four).
  • the mirror surfaces 531a and 531b are preferably formed by depositing a reflective film on the surface of a resin material (for example, PC (polycarbonate)) in the shape of a mirror unit.
  • the mirror unit 53 is connected to a shaft 56a of a motor 56 fixed to a housing 57 and is rotationally driven.
  • In the state of being installed on the pillar 62, the axis (rotation axis) of the shaft 56a extends in the Y direction, which is the vertical direction, and the plane formed by the X and Z directions orthogonal to the Y direction (the XZ plane) is a horizontal plane; however, the axis of the shaft 56a may be inclined with respect to the vertical direction.
  • The divergent light emitted intermittently in pulses from the semiconductor laser 51 is converted into a parallel light flux by the collimator lens 52 and is incident on the first mirror surface 531a of the rotating mirror unit 53. After being reflected by the first mirror surface 531a and further reflected by the second mirror surface 531b, the light is projected through the transparent plate 58 toward the external measurement space as a laser spot light having a vertically long rectangular cross section.
  • The direction in which the laser spot light is emitted and the direction in which the emitted laser spot light, reflected by an object, returns as reflected light overlap each other; these two overlapping directions are called the light projecting/receiving direction (in FIG. 2, the emitted light and the reflected light are drawn shifted from each other for clarity).
  • Laser spot light traveling in the same light emitting / receiving direction is detected by the same pixel.
  • The four pairs of mirror surfaces have different crossing angles.
  • the laser light is sequentially reflected by the rotating first mirror surface 531a and second mirror surface 531b.
  • The laser light reflected by the first pair of first mirror surface 531a and second mirror surface 531b scans the uppermost area of the measurement space horizontally (in the "main scanning direction") from left to right according to the rotation of the mirror unit 53.
  • The laser light reflected by the second pair of first mirror surface 531a and second mirror surface 531b scans the second area from the top of the measurement space horizontally from left to right according to the rotation of the mirror unit 53.
  • The laser light reflected by the third pair of first mirror surface 531a and second mirror surface 531b scans the third area from the top of the measurement space horizontally from left to right according to the rotation of the mirror unit 53.
  • The laser light reflected by the fourth pair of first mirror surface 531a and second mirror surface 531b scans the lowermost area of the measurement space horizontally from left to right according to the rotation of the mirror unit 53.
  • Part of the laser light reflected by an object within the scanned and projected light beam passes again through the transparent plate 58 and enters the second mirror surface 531b of the mirror unit 53 in the housing 57; it is reflected there, is further reflected by the first mirror surface 531a, is condensed by the lens 54, and is detected for each pixel on the light receiving surface of the photodiode 55.
  • the distance measurement point cloud data generation unit 112 obtains distance information according to the time difference between the emission timing of the semiconductor laser 51 and the light reception timing of the photodiode 55.
  • In this way, objects can be detected over the entire measurement space, and a frame of distance measurement point cloud data having distance information for each pixel can be obtained.
  • This frame is generated at a predetermined cycle, for example 10 fps. Further, according to a user's instruction, the obtained distance measurement point cloud data may be stored as background image data in the memory of the distance measurement point cloud data generation unit 112 or in the memory of the image analysis device 300.
  • For calibration, markers (objects having a characteristic shape) located in the common area are used: the distance measurement point cloud data and the image data obtained from the lidar 100 and the camera 200 are analyzed to recognize the markers. Then, by associating the recognized coordinate positions of the common markers with each other, correction data for associating the coordinate positions (the angles of view (directions)) of the lidar 100 and the camera 200 is generated. This correction data is stored in the memory of the image analysis device 300.
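  • One simple way to illustrate this association (a sketch only, not the patent's exact procedure; the data layout is assumed) is to fit a mapping from lidar marker coordinates to camera pixel coordinates by least squares and store the fitted matrix as the correction data:

```python
import numpy as np

def fit_correction(lidar_xy: np.ndarray, camera_uv: np.ndarray) -> np.ndarray:
    """lidar_xy, camera_uv: (N, 2) arrays of matched marker positions.
    Returns a 2x3 matrix A such that camera_uv ~= [lidar_xy, 1] @ A.T"""
    n = lidar_xy.shape[0]
    X = np.hstack([lidar_xy, np.ones((n, 1))])         # homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, camera_uv, rcond=None)  # least-squares fit, (3, 2)
    return A.T                                          # correction data, (2, 3)

def lidar_to_camera(A: np.ndarray, xy: np.ndarray) -> np.ndarray:
    """Map lidar coordinates (N, 2) into camera pixel coordinates (N, 2)."""
    X = np.hstack([xy, np.ones((xy.shape[0], 1))])
    return X @ A.T
```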
  • the image analysis apparatus 300 is, for example, a computer, and includes a CPU, a memory (semiconductor memory, magnetic recording medium (hard disk, etc.)), an input / output unit (display, keyboard, etc.), a communication I / F (Interface), and the like.
  • the communication I / F is an interface for communicating with an external device.
  • a network interface according to a standard such as Ethernet (registered trademark), SATA, PCI Express, USB, or IEEE 1394 may be used.
  • a wireless communication interface such as Bluetooth (registered trademark), IEEE 802.11, or 4G may be used for communication.
  • the image analysis device 300 includes a data acquisition unit 310, a distance information analysis unit 320, an image analysis unit 330, a shooting condition acquisition unit 340, and an information integration unit 350.
  • The CPU of the image analysis device 300 functions as the distance information analysis unit 320, the image analysis unit 330, the shooting condition acquisition unit 340, and the information integration unit 350, and the communication I/F functions as the data acquisition unit 310 and the shooting condition acquisition unit 340.
  • The data acquisition unit 310 acquires the distance measurement point cloud data (also referred to as frames) generated by the lidar 100 and arranged in time series, and sends it to the distance information analysis unit 320. The data acquisition unit 310 also acquires the image data (also referred to as frames) arranged in time series from the camera 200 and sends it to the image analysis unit 330.
  • The frame rate of the lidar 100 and the frame rate of the camera 200 differ, the frame rate of the camera 200 being higher; for example, the lidar 100 operates at 10 fps and the camera 200 at 60 fps.
  • the distance information analysis unit 320 recognizes an object in the imaging region 710 using the distance measurement point cloud data acquired via the data acquisition unit 310.
  • the object information of the object obtained by the recognition is sent to the information integration unit 350 in the subsequent stage.
  • the object information includes at least the position and size of the object.
  • As the position information, the three-dimensional center position of the recognized object can be used.
  • The background subtraction method is adopted for object recognition. In this method, background image data (also referred to as reference background data) generated in advance and stored in the memory is used.
  • The distance information analysis unit 320 compares the background image data held in the memory with the current distance measurement point cloud data; if there is a difference, it can recognize that some object (foreground object), such as a vehicle, has appeared in the imaging region 710.
  • That is, the foreground data is extracted by comparing the background image data with the current distance measurement point cloud data (distance image data) using the background subtraction method.
  • The pixels (pixel group) of the extracted foreground data are then divided into clusters, for example according to the distance values of the pixels.
  • The size of each cluster is calculated: for example, the vertical dimension, the horizontal dimension, and the total area.
  • The "size" here is the actual size; unlike the apparent size (the angle of view, that is, the spread in pixels), it is determined from the pixel cluster in accordance with the distance to the object.
  • The distance information analysis unit 320 then compares the calculated size with a predetermined size threshold for identifying the moving objects to be analyzed, which are the extraction targets.
  • The size threshold can be set arbitrarily depending on the measurement location, the behavior analysis target, and so on. If the behavior of vehicles and people is to be tracked and analyzed, the minimum size of a vehicle and of a person may be used as the size threshold for clustering. This makes it possible to exclude fallen leaves, debris such as plastic bags, and small animals from the detection targets.
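  • A minimal sketch of this chain (background subtraction on the range image, crude clustering by distance value, filtering by actual size); the data layout, bin width, pixel angle, and thresholds are assumptions for illustration, not values from the patent:

```python
import numpy as np

def extract_foreground(frame: np.ndarray, background: np.ndarray,
                       diff_threshold: float = 0.3) -> np.ndarray:
    """frame, background: (H, W) range images [m]. Returns a boolean foreground mask."""
    return np.abs(frame - background) > diff_threshold

def cluster_by_distance(frame, mask, bin_width=1.0):
    """Group foreground pixels whose distance values fall in the same bin
    (a simplification of the clustering by distance value described above)."""
    clusters = {}
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        key = int(frame[y, x] // bin_width)
        clusters.setdefault(key, []).append((y, x, float(frame[y, x])))
    return list(clusters.values())

def keep_large_clusters(clusters, min_actual_area_m2=0.5, pixel_angle_rad=0.003):
    """Keep clusters whose actual area (pixel count scaled by distance squared)
    reaches the size threshold, dropping small debris such as leaves or bags."""
    kept = []
    for c in clusters:
        mean_d = np.mean([d for _, _, d in c])
        actual_area = len(c) * (pixel_angle_rad * mean_d) ** 2
        if actual_area >= min_actual_area_m2:
            kept.append(c)
    return kept
```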
  • The distance information analysis unit 320 also classifies (type discrimination) each recognized object into a specific target object.
  • The specific target objects are, for example, a person, a vehicle, and a moving machine other than a vehicle.
  • Vehicles include ordinary vehicles, large vehicles (trucks, etc.), motorcycles, and forklifts and other transportation vehicles.
  • the moving machines include heavy equipment, construction equipment, and belt conveyors.
  • The target objects may be selected appropriately according to the environment in which the lidar 100 and the camera 200 of the image analysis system 10 are installed. For example, on an ordinary road or a highway, people and vehicles are targets; at a construction site, people, vehicles, and heavy equipment are targets.
  • Classification into target objects is performed, for example, by storing in advance the characteristics (size, shape) of each type of target object in the memory of the image analysis device 300 and matching detected objects against these characteristics. This type determination may also use an algorithm machine-learned in advance by a known method; the machine learning is carried out beforehand on another high-performance computer using a huge amount of data, and the inference processing parameters are determined.
  • the distance information analysis unit 320 classifies the types by using the determined inference processing parameter.
  • the classification result may be included in the object information sent to the information integration unit 350, and the classification result may be displayed on the display unit 400. Further, statistical processing (integrated count, etc.) according to the classification result may be performed and output to an external terminal device or the like.
  • the image analysis unit 330 recognizes an object in the imaging region 710 using the image data acquired via the data acquisition unit 310.
  • the object information of the object obtained by the recognition is sent to the information integration unit 350 in the subsequent stage.
  • the object information includes at least the position and size of the object. As the information of this position, the center position of the recognized object at the angle of view can be used.
  • For object recognition, a frame (time) difference method or a background subtraction method can be adopted.
  • In the former, an object can be detected by extracting the range of pixels having a relatively large difference between two temporally adjacent frames of image data.
  • In the latter, the background image data (also referred to as reference background data) generated in advance is compared with the current frame, and the object can be detected by extracting the foreground pixels.
  • the image analysis unit 330 may classify the object into a specific object based on the detected shape and size of the object.
  • The specific target objects are a person, a vehicle, a moving machine other than a vehicle, and the like.
  • The vehicles may be further subclassified by color (for example, a black vehicle, that is, a vehicle with a black body) and by size (for example, a large vehicle).
  • the classification result may be included in the object information sent to the information integration unit 350.
  • The shooting condition acquisition unit 340 acquires the shooting conditions (imaging conditions, measurement conditions) of the imaging regions 710 and 720 (common area 730). For example, it acquires temperature and humidity as shooting conditions from the temperature/humidity meter 91 provided in or near the common area 730, and acquires illuminance (brightness) as a shooting condition from the illuminance meter 92 provided in or near the common area 730.
  • the shooting condition acquisition unit 340 also acquires weather information from the weather information distribution unit 93.
  • The weather information distribution unit 93 is, for example, a device managed by an external weather information provider (the Meteorological Agency or a private weather company) and connected via a network.
  • The weather information includes, for example, rain, snow, and fog, and is acquired through an API (Application Programming Interface), for example.
  • Alternatively, weather conditions such as rainfall, snowfall, and fog may be estimated from the images of the camera 200 or the measurement data of the lidar 100, and the estimation result may be used as a shooting condition.
  • the image capturing condition acquisition unit 340 may be connected to a visibility meter or an environment sensor using a plurality of types of sensors, and acquire image capturing conditions from these measuring devices.
  • The shooting condition acquisition unit 340 may also be provided with a clock function and determine daytime or nighttime from the date and time.
  • The information integration unit 350 integrates, using the shooting conditions acquired from the shooting condition acquisition unit 340, the detection result of the distance information analysis unit 320 (hereinafter also referred to as "detection result 1") and the detection result of the image analysis unit 330 (hereinafter also referred to as "detection result 2"). Specifically, in the common area 730, the object information (position, size) of the objects detected in detection result 1 and detection result 2 is compared for each object, and whether each detection result is correct is determined with reference to the shooting conditions. The integration processing is then performed according to the determination result. The specific processing will be described later.
  • (Display control unit 360) The display control unit 360 generates a display image from the image data acquired from the camera 200, outputs it to the display unit 400, and displays it. Further, an additional image based on the integrated object information is superimposed on the display image. For example, when a vehicle is detected as an object, a rectangular frame surrounding the vehicle is created as an additional image and superimposed on the display image. At this time, the display mode of the additional image may be changed according to the determination result of the information integration unit 350.
  • FIG. 4 is a flowchart showing an image analysis process executed by the image analysis device 300 of the image analysis system 10 according to the first embodiment.
  • FIG. 5 is a table showing the correspondence between detection results, shooting conditions, and determination results.
  • Step S11: The image analysis device 300 outputs a control signal to the lidar 100 to start measurement, and acquires the distance measurement point cloud data obtained by measuring the imaging region 710 from the lidar 100 via the data acquisition unit 310.
  • Similarly, the image analysis device 300 outputs a control signal to the camera 200 to start shooting, and acquires the image data obtained by photographing the imaging region 720 from the camera 200 via the data acquisition unit 310.
  • Step S12: The distance information analysis unit 320 analyzes the distance measurement point cloud data sent from the data acquisition unit 310, recognizes objects, and detects object information including the position and size of each recognized object. When a plurality of objects are recognized in one frame, object information is detected for each object. The distance information analysis unit 320 sends this detection result (detection result 1) to the information integration unit 350.
  • Step S13: The image analysis unit 330 analyzes the image data sent from the data acquisition unit 310, recognizes objects, and detects object information including the position and size of each recognized object. When a plurality of objects are recognized in one frame, object information is detected for each object. The image analysis unit 330 sends this detection result (detection result 2) to the information integration unit 350.
  • The image analysis unit 330 does not need to process all the frames and may thin them out, processing only a fraction. For example, suppose the lidar 100 measures at 10 fps, the processing rate of the distance information analysis unit 320 is 10 fps, and the camera 200 captures image data at 60 fps. In this case, the image analysis unit 330 may process 1/6 or 1/12 of the frames acquired at 60 fps, that is, 10 fps or 5 fps.
  • In the latter case, the processing rate of the distance information analysis unit 320 is also set to 5 fps.
  • The information integration unit 350 executes the integration processing on frames obtained by measurement and photographing at substantially the same timing, in accordance with the timing at which both the distance information analysis unit 320 and the image analysis unit 330 have completed their processing, that is, the completion timing of the image analysis unit 330, whose processing is slower.
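  • A minimal sketch of this thinning and of triggering the integration when the slower image analysis finishes, under the 10 fps / 60 fps example above; the loop and the callables standing in for units 320, 330, and 350 are assumptions:

```python
LIDAR_FPS = 10
CAMERA_FPS = 60
THIN = CAMERA_FPS // LIDAR_FPS          # process 1 of every 6 camera frames

def run_loop(lidar_frames, camera_frames, analyze_range, analyze_image, integrate):
    """lidar_frames: list at 10 fps; camera_frames: list at 60 fps."""
    for i, cam_frame in enumerate(camera_frames):
        if i % THIN != 0:
            continue                              # thin out the camera frames
        k = i // THIN
        if k >= len(lidar_frames):
            break
        result1 = analyze_range(lidar_frames[k])  # distance information analysis
        result2 = analyze_image(cam_frame)        # image analysis (slower path)
        integrate(result1, result2)               # runs once image analysis is done
```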
  • Step S14: The shooting condition acquisition unit 340 acquires the shooting conditions and sends them to the information integration unit 350.
  • The shooting conditions include one or more of temperature and humidity, illuminance, and weather information.
  • Step S15: The information integration unit 350 performs integration processing of the object information based on detection results 1 and 2 and the shooting conditions obtained in steps S12 to S14. This integration processing is performed as follows, using the table of FIG. 5.
  • For example, when the shooting condition indicates fog, detection result 1 (lidar 100) is determined to be false; in this case, the object concerned is treated as not present.
  • This is because the lidar 100 irradiates a pulsed laser beam of a predetermined size toward the imaging region 710, and part of it is reflected by fog (water droplets) and returns as reflected light.
  • The reflected light returned from a plurality of water droplets in front of the object may therefore cause the lidar to erroneously detect an object at the position of the fog.
  • That is, the reflection from the fog is measured as if it were reflection from an object, and an object is erroneously recognized as being present.
  • The presence or absence of fog may be determined comprehensively from the temperature and humidity, the weather information, and the image data of the camera 200.
  • In other cases, for example when the shooting condition is rain, snow, or nighttime, detection result 1 (lidar 100) is determined to be true.
  • This is because the lidar 100 can perform detection without much influence even in rain, in snow, or at night.
  • In this case, the image analysis unit 330 may analyze again the image data of the camera 200 corresponding to the area where the object information of detection result 1 exists.
  • At that time, the processing parameters may be changed according to the shooting conditions.
  • There are also cases in which detection result 2 is determined to be true, because the camera 200 may obtain image data sufficient for recognizing the object.
  • The information integration unit 350 then performs integration processing that either integrates, for each object detected in detection results 1 and 2, the corresponding object information to generate new object information, or adopts the object information of one of the detection results as it is.
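  • A minimal sketch of this kind of condition-driven decision and merge. The actual rule table is FIG. 5, which is not reproduced here; the condition keys, the rules, and the merge step below are illustrative assumptions only:

```python
def decide(conditions):
    """Return 'camera', 'lidar', or 'both' for the given shooting conditions."""
    if conditions.get("fog"):
        return "camera"   # lidar may have ranged on the fog itself (result 1 false)
    if conditions.get("rain") or conditions.get("snow") or conditions.get("night"):
        return "lidar"    # camera image quality may be degraded
    return "both"

def integrate(det1, det2, conditions):
    """det1: object info from the distance information analysis (lidar);
    det2: object info from the image analysis (camera); either may be None."""
    choice = decide(conditions)
    if choice == "lidar":
        return det1
    if choice == "camera":
        return det2
    if det1 is None or det2 is None:
        return det1 or det2
    return {  # adopt both: combine the position and size estimates
        "position": tuple((a + b) / 2 for a, b in zip(det1["position"], det2["position"])),
        "size": max(det1["size"], det2["size"]),
        "adopted": "both",
    }
```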
  • Step S16: The display control unit 360 generates a display image from the image data acquired from the camera 200 and the object information generated in step S15. Specifically, an image for display is generated from the image data, an additional image is generated from the object information, and the additional image is superimposed on the image for display.
  • The additional image is, for example, a rectangular frame surrounding the detected object. This additional image may be given a different display mode depending on the determination result of step S15; for example, it is color-coded according to which object information was adopted for results A, B, and C, or text data indicating the adopted object information is included.
  • the display control unit 360 sends the generated display image to the display unit 400.
  • Step S17: The display unit 400 displays the display image generated by the display control unit 360, and the process ends (END). Thereafter, the processing from step S11 onward is repeated, and display images are generated in real time from the image data obtained by the camera 200 and the integrated object information, and are continuously displayed on the display unit 400.
  • As described above, the image analysis system 10 and the image analysis device 300 include the distance information analysis unit 320 that detects object information including the position and size of an object from the measurement data acquired from the lidar 100, the image analysis unit 330 that detects object information from the image data acquired from the camera 200, the shooting condition acquisition unit 340 that acquires the shooting conditions in the first and second imaging regions of the lidar 100 and the camera 200, respectively, and the information integration unit 350 that, based on the acquired shooting conditions, performs integration processing that integrates the detection result of the distance information analysis unit 320 and the detection result of the image analysis unit 330 in the common area where the first and second imaging regions overlap, and generates object information. As a result, objects within the imaging regions can be analyzed with high accuracy regardless of the shooting conditions.
  • FIG. 6 is a block diagram showing the configuration of the image analysis system 10 according to the second embodiment.
  • In the second embodiment, the image analysis unit 330 performs its analysis by inference processing using a neural network with the inference processing parameters. Note that, in the figure, components other than the image analysis system 10 are omitted, but the system may be connected to other devices such as the temperature/humidity meter 91 as in the first embodiment. The integrated object information generated by the information integration unit 350 can also be output to various output destinations.
  • For the inference processing, known neural-network object detection methods such as YOLO, a single shot detector (SSD), R-CNN (a region-based convolutional neural network), or Fast R-CNN can be used.
  • The inference processing parameters 331 are stored in the memory of the image analysis system 10.
  • They are obtained by performing machine learning in advance, by a known algorithm, using a huge amount of image data acquired from the camera 200 under the plurality of types of shooting conditions that can be assumed.
  • The assumed shooting conditions include at least one of weather, illuminance, and temperature/humidity; for example, weather conditions such as rain, snow, and fog, and nighttime (low illuminance) conditions are included.
  • That is, the inference processing parameters 331 used for the analysis are those obtained by learning with image data captured under these various assumed shooting conditions.
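  • A minimal sketch of the idea: the parameters 331 are produced offline from training images gathered under the assumed shooting conditions, and the image analysis unit then loads the single resulting parameter file for inference. The directory layout, file names, and detector interface below are assumptions, not the patent's implementation:

```python
from pathlib import Path

def collect_training_images(root="training_data"):
    """Gather (image path, shooting-condition label) pairs, assuming one
    sub-directory per condition (e.g. clear/, rain/, snow/, fog/, night/)."""
    root_path = Path(root)
    if not root_path.exists():
        return []
    pairs = []
    for condition_dir in root_path.iterdir():
        if condition_dir.is_dir():
            pairs += [(p, condition_dir.name) for p in condition_dir.glob("*.jpg")]
    return pairs

class InferenceUnit:
    """Stand-in for the image analysis unit 330 of the second embodiment."""
    def __init__(self, param_file="inference_params_331.bin"):
        self.param_file = param_file  # parameters trained offline on another computer

    def detect(self, image):
        # A real system would run YOLO/SSD/R-CNN style inference here and return
        # object information (position, size, class) per detection.
        return []
```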
  • According to the second embodiment, the same effects as those of the first embodiment can be obtained; in addition, even for image data obtained under poor shooting conditions in which analysis is difficult, objects can be detected with high accuracy, and robustness can be improved.
  • FIG. 7 is a block diagram showing the configuration of the image analysis system 10 according to the third embodiment.
  • FIG. 8 is a table showing the correspondence between detection results, shooting conditions, and determination results.
  • In the third embodiment, the shooting condition acquisition unit 340 acquires the shooting conditions from the external temperature/humidity meter 91, the illuminance meter 92, and the weather information distribution unit 93, as in the first embodiment, and also uses the analysis result of the image analysis unit 330 as a shooting condition.
  • the image analysis system 10 has an operation unit 500 that receives an instruction from a user, and the image analysis device 300 has an operation control unit 370 that controls the operation unit 500.
  • the operation control unit 370 receives the display setting by the user via the operation unit 500.
  • The display settings include (1) a setting that enables or disables display of the integration result of the information integration unit 350, (2) a selection of the display mode used when the integration result display is enabled, and (3) a setting that enables or disables display of the shooting conditions.
  • Display setting (1) is a flag for displaying an additional image indicating whether the integrated object information adopted one or both of detection results 1 and 2.
  • Display setting (2) selects the display mode used when display setting (1) is enabled.
  • The display mode includes color coding, line thickness, text data, and the like.
  • For example, the frame surrounding an object, superimposed as the additional image, is displayed as a red frame when both detection results are adopted, as a blue frame when only detection result 1 (lidar 100) is adopted, and as a green frame when only detection result 2 (camera 200) is adopted.
  • The display settings selectable by the user may further include a setting that enables or disables display of the movement trajectory. When this setting is enabled, the trajectory of the object over the past several seconds is superimposed on the display screen of the display unit 400.
  • When display setting (3) is enabled, the shooting conditions (weather information, brightness, and so on) acquired by the shooting condition acquisition unit 340 are superimposed on the display image of the display unit 400.
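  • A minimal sketch of the superimposed display described above, assuming OpenCV BGR images, the example red/blue/green color assignment, and a simple dictionary layout for each object (these are assumptions, not the patent's implementation):

```python
import cv2  # OpenCV, assumed to be installed

COLORS_BGR = {
    "both":   (0, 0, 255),   # red frame: both detection results adopted
    "lidar":  (255, 0, 0),   # blue frame: only detection result 1 (lidar 100)
    "camera": (0, 255, 0),   # green frame: only detection result 2 (camera 200)
}

def draw_overlay(display_image, objects, conditions_text):
    """objects: list of dicts with 'bbox' = (x, y, w, h) and 'adopted'."""
    for obj in objects:
        x, y, w, h = obj["bbox"]
        color = COLORS_BGR.get(obj["adopted"], (255, 255, 255))
        cv2.rectangle(display_image, (x, y), (x + w, y + h), color, 2)
    # superimpose the shooting conditions as text (display setting (3))
    cv2.putText(display_image, conditions_text, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    return display_image
```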
  • The table in FIG. 8 corresponds to the table in FIG. 5; in FIG. 8, the type of object (black vehicle, large vehicle) is additionally included as a shooting condition.
  • As described above, the lidar 100 irradiates objects in the imaging region 710 with laser light and measures the distance to an object from the timing at which the reflected light is received. If the surface of the object is a totally reflecting material, or if it absorbs strongly because of a black surface, the laser light does not return sufficiently to the light receiving unit and the distance value to the object cannot be obtained. Therefore, when the object is a black vehicle, the object may not be correctly recognized from the measurement data of the lidar 100.
  • Also, when the traveling direction of the object and the irradiation direction of the laser light are nearly parallel, it is difficult to capture the exact length in the depth direction. That is, in the distance measurement point cloud data obtained from the measurement data of the lidar 100, the distance values become discrete between adjacent pixels. In particular, for a vehicle with a long overall length, the spread of the distance values becomes large, and in such a case one object may be erroneously recognized as a plurality of objects.
  • The image analysis unit 330 therefore analyzes the image data from the camera 200 to determine whether an object detected in the imaging region 720 is one of the preset target objects, that is, a black vehicle or a large vehicle, and sends the determination result to the shooting condition acquisition unit 340.
  • The information integration unit 350 uses the preset target object sent from the shooting condition acquisition unit 340 as a shooting condition and makes a determination as shown in the table of FIG. 8. Specifically, in the case of result C, the information integration unit 350 determines that detection result 2 is true when the shooting condition is a black vehicle or a large vehicle, and performs the integration processing.
  • By switching the display setting according to the detection result and displaying on the display unit 400 in the mode corresponding to the display setting, the user can easily confirm the status of the detection result.
  • For example, the user can recognize the basis of the detection result and its effectiveness (certainty).
  • In this way, in the third embodiment, objects can be detected with higher accuracy than in the first and second embodiments, and robustness can be further improved.
  • FIG. 9 is a block diagram showing the configuration of the image analysis system 10 according to the fourth embodiment.
  • the image analysis device 300 has a report determination unit 380.
  • the image analysis system 10 also has an alarm device 600. It should be noted that although illustration of the configuration other than the image analysis system 10 is omitted in the figure, it may be connected to other devices as in the first and third embodiments.
  • The report determination unit 380 sets a predetermined area of the imaging region 710 (common area 730) as a restricted area; this setting is made in advance according to a user's instruction. The report determination unit 380 then determines whether or not a person has entered this predetermined area, using the object information integrated by the information integration unit 350. When a person enters the predetermined area, an alert signal is sent to the alarm device 600. For example, when the image analysis system 10 is used with a highway entrance from a general road as the imaging region, an alert signal is output when a person enters the entrance to the highway set as the predetermined area.
  • The report determination unit 380 also determines the moving direction of an object detected as a vehicle and outputs an alert signal to the alarm device 600 when the moving direction is not a normal direction. Specifically, on the road 61 shown in FIG. 3, the traveling direction along the lane of the road is set as the correct moving direction, and an alert signal is output when movement of a vehicle in the opposite direction (wrong-way driving) is detected. The report determination unit 380 determines that the moving direction of the vehicle is not normal using the object information integrated by the information integration unit 350.
  • Further, the report determination unit 380 calculates, using the object information integrated by the information integration unit 350, the distance between a person and a vehicle, or between a person and a machine (a moving machine other than a vehicle).
  • For the distance calculation, the distance measurement point cloud data obtained from the lidar 100 is used.
  • When the distance is equal to or less than a predetermined value, an alert signal is output.
  • For example, when the image analysis system 10 is used with a construction site or a manufacturing factory as the imaging region, an alert signal is output when it is detected that a worker and heavy equipment, or a worker and a factory machine, have come closer to each other than a predetermined threshold.
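  • A minimal sketch of the two report determinations described above (wrong-way movement and person-to-vehicle/machine proximity); the thresholds and the object data layout are assumptions for illustration:

```python
import math

def wrong_way(vehicle_track, normal_direction, min_displacement=0.5):
    """vehicle_track: list of (x, y) positions over recent frames.
    normal_direction: unit vector of the lane's correct travel direction."""
    (x0, y0), (x1, y1) = vehicle_track[0], vehicle_track[-1]
    vx, vy = x1 - x0, y1 - y0
    if math.hypot(vx, vy) < min_displacement:
        return False                      # barely moving: no judgement
    dot = vx * normal_direction[0] + vy * normal_direction[1]
    return dot < 0                        # moving against the normal direction

def too_close(person_pos, machine_pos, threshold_m=2.0):
    """person_pos / machine_pos: (x, y, z) positions from the point cloud data."""
    return math.dist(person_pos, machine_pos) <= threshold_m

def check_alerts(objects, normal_direction):
    """objects: integrated object info dicts with 'class', 'position', 'track'."""
    alerts = []
    people   = [o for o in objects if o["class"] == "person"]
    vehicles = [o for o in objects if o["class"] == "vehicle"]
    for v in vehicles:
        if wrong_way(v["track"], normal_direction):
            alerts.append(("wrong_way", v))
    for p in people:
        for v in vehicles:
            if too_close(p["position"], v["position"]):
                alerts.append(("proximity", p, v))
    return alerts
```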
  • The alarm device 600 is, for example, a speaker arranged in or around the imaging region 710 (common area 730), or digital signage including a liquid crystal display or the like. Depending on the alert signal, a warning sound is emitted or a warning is displayed on the digital signage. Furthermore, the report determination unit 380 may register in advance the contact address of a specific administrator or a monitoring terminal (personal computer) used by the administrator, and, according to the alert determination, may send an email to the contact address or display the warning on the terminal.
  • In the fourth embodiment, by using the object information integrated by the information integration unit 350 in this way, alarms can be issued accurately.
  • The above description of the features of the embodiments focuses on the main configuration; the present invention is not limited to the configurations described above and can be modified within the scope of the claims. Configurations included in a general image analysis device 300 or image analysis system 10 are not excluded.
  • each processing described in the flowchart of FIG. 4 does not necessarily have to be performed as illustrated.
  • For example, the processes of steps S11 and S12 may be performed in parallel.
  • Each process may also be executed on a separately assigned processor core.
  • the means and method for performing various processes in the image analysis system 10 can be realized by either a dedicated hardware circuit or a programmed computer.
  • the program may be provided by a computer-readable recording medium such as a USB memory or a DVD (Digital Versatile Disc) -ROM, or may be provided online via a network such as the Internet.
  • the program recorded on the computer-readable recording medium is usually transferred and stored in a storage unit such as a hard disk.
  • the program may be provided as independent application software, or may be incorporated into the software of the device as a function of the device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Emergency Alarm Devices (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The problem addressed by the present invention is to provide an image analysis device capable of analyzing an object in an imaging region with high accuracy regardless of the imaging conditions. The solution is an image analysis device (300) comprising: a distance information analysis unit (320) for detecting object information, including the position and size of an object, from measurement data acquired from a lidar (100); an image analysis unit (330) for detecting the object information from image data acquired from a camera (200); an imaging condition acquisition unit (340) for acquiring imaging conditions in first and second imaging regions (710, 720) of the lidar (100) and the camera (200); and an information integration unit (350) for performing, on the basis of the acquired imaging conditions, integration processing so as to integrate the detection results of the distance information analysis unit (320) and the detection results of the image analysis unit (330) for a common area (730) in which the first and second imaging regions (710, 720) overlap, and so as to generate object information.
PCT/JP2019/044580 2018-11-20 2019-11-13 Dispositif et système d'analyse d'image, et programme de commande WO2020105527A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-217537 2018-11-20
JP2018217537A JP2022017619A (ja) 2018-11-20 2018-11-20 画像解析装置、画像解析システム、および制御プログラム

Publications (1)

Publication Number Publication Date
WO2020105527A1 true WO2020105527A1 (fr) 2020-05-28

Family

ID=70774519

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/044580 WO2020105527A1 (fr) 2018-11-20 2019-11-13 Dispositif et système d'analyse d'image, et programme de commande

Country Status (2)

Country Link
JP (1) JP2022017619A (fr)
WO (1) WO2020105527A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022101310A (ja) * 2020-12-24 2022-07-06 株式会社日立エルジーデータストレージ 測距システム及びその座標キャリブレーション方法
CN114873193A (zh) * 2022-06-09 2022-08-09 杭州科创标识技术有限公司 一种激光打码机漏打报警装置
CN114999217A (zh) * 2022-05-27 2022-09-02 北京筑梦园科技有限公司 一种车辆检测方法、装置及停车管理系统
WO2024134900A1 (fr) * 2022-12-23 2024-06-27 オプテックス株式会社 Capteur de balayage bidimensionnel
WO2024150322A1 (fr) * 2023-01-11 2024-07-18 日本電信電話株式会社 Dispositif de traitement de groupe de points, procédé de traitement de groupe de points et programme de traitement de groupe de points

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000329852A (ja) * 1999-05-17 2000-11-30 Nissan Motor Co Ltd 障害物認識装置
JP2007310741A (ja) * 2006-05-19 2007-11-29 Fuji Heavy Ind Ltd 立体物認識装置
JP2013186872A (ja) * 2012-03-12 2013-09-19 Mitsubishi Electric Corp 運転支援装置
JP2018097792A (ja) * 2016-12-16 2018-06-21 株式会社デンソー 移動体検出装置及び移動体検出システム

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000329852A (ja) * 1999-05-17 2000-11-30 Nissan Motor Co Ltd 障害物認識装置
JP2007310741A (ja) * 2006-05-19 2007-11-29 Fuji Heavy Ind Ltd 立体物認識装置
JP2013186872A (ja) * 2012-03-12 2013-09-19 Mitsubishi Electric Corp 運転支援装置
JP2018097792A (ja) * 2016-12-16 2018-06-21 株式会社デンソー 移動体検出装置及び移動体検出システム

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022101310A (ja) * 2020-12-24 2022-07-06 株式会社日立エルジーデータストレージ 測距システム及びその座標キャリブレーション方法
JP7411539B2 (ja) 2020-12-24 2024-01-11 株式会社日立エルジーデータストレージ 測距システム及びその座標キャリブレーション方法
CN114999217A (zh) * 2022-05-27 2022-09-02 北京筑梦园科技有限公司 一种车辆检测方法、装置及停车管理系统
CN114873193A (zh) * 2022-06-09 2022-08-09 杭州科创标识技术有限公司 一种激光打码机漏打报警装置
CN114873193B (zh) * 2022-06-09 2023-09-05 杭州科创标识技术有限公司 一种激光打码机漏打报警装置
WO2024134900A1 (fr) * 2022-12-23 2024-06-27 オプテックス株式会社 Capteur de balayage bidimensionnel
WO2024150322A1 (fr) * 2023-01-11 2024-07-18 日本電信電話株式会社 Dispositif de traitement de groupe de points, procédé de traitement de groupe de points et programme de traitement de groupe de points

Also Published As

Publication number Publication date
JP2022017619A (ja) 2022-01-26

Similar Documents

Publication Publication Date Title
WO2020105527A1 (fr) Dispositif et système d'analyse d'image, et programme de commande
US20240046689A1 (en) Road side vehicle occupancy detection system
KR101030763B1 (ko) 이미지 획득 유닛, 방법 및 연관된 제어 유닛
CN111753609B (zh) 一种目标识别的方法、装置及摄像机
US8908038B2 (en) Vehicle detection device and vehicle detection method
US20110043806A1 (en) Intrusion warning system
US10183843B2 (en) Monitoring of step rollers and maintenance mechanics of passenger conveyors
KR102151815B1 (ko) 카메라 및 라이다 센서 융합을 이용한 객체 검출 방법 및 그를 위한 장치
CN104902246A (zh) 视频监视方法和装置
KR101852057B1 (ko) 영상 및 열화상을 이용한 돌발 상황 감지시스템
CN110431562B (zh) 图像识别装置
JP2002208073A (ja) 侵入監視装置
EP4145404A1 (fr) Système de détection d'occupation de véhicule en bord de route
JP2014059834A (ja) レーザースキャンセンサ
JP5761942B2 (ja) 物体検出センサ
JP7428136B2 (ja) 情報処理装置、情報処理システム、および情報処理方法
JP7073949B2 (ja) 避難誘導装置、避難誘導システム、および制御プログラム
JP4765113B2 (ja) 車両周辺監視装置、車両、車両周辺監視用プログラム、車両周辺監視方法
JP2018181114A (ja) 侵入監視方法、侵入監視プログラム、および侵入監視装置
WO2020008685A1 (fr) Dispositif de notification d'informations, programme destiné à un dispositif de notification d'informations, et système de notification d'informations
JP7020096B2 (ja) 物体検出装置、物体検出装置の制御方法、および物体検出装置の制御プログラム
JP6988797B2 (ja) 監視システム
KR101870151B1 (ko) 불법주정차 단속시스템
JP4567072B2 (ja) 車両周辺監視装置
KR100844640B1 (ko) 물체 인식 및 거리 계측 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19887427

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19887427

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP