JP2010286926A - Surroundings monitoring device - Google Patents

Surroundings monitoring device

Info

Publication number
JP2010286926A
JP2010286926A (application JP2009138692A)
Authority
JP
Japan
Prior art keywords
target
dimensional
optical flow
unit
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2009138692A
Other languages
Japanese (ja)
Other versions
JP5353455B2 (en)
Inventor
Hiroshi Yamato
宏 大和
Original Assignee
Konica Minolta Holdings Inc
コニカミノルタホールディングス株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konica Minolta Holdings Inc (コニカミノルタホールディングス株式会社)
Priority to JP2009138692A (granted as JP5353455B2)
Publication of JP2010286926A
Application granted granted Critical
Publication of JP5353455B2
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

An object is to increase the processing speed (improve the frame rate) of a periphery monitoring device that is mounted on a vehicle and determines the possibility of a collision with surrounding target objects.
When the time-series information calculation unit 21 calculates a two-dimensional optical flow by corresponding point search processing from the time-series images obtained by stereo cameras 11 and 12, the target object candidate area extraction unit 22 first extracts candidate areas in which a target object is likely to exist. The three-dimensional information acquisition unit 23 then acquires three-dimensional information for the candidate areas from the images of the stereo cameras 11 and 12, and the collision determination unit 24 determines the possibility of a collision from the obtained three-dimensional information and its positional relationship with the periphery monitoring device 1. Complicated object recognition processing and road models are therefore unnecessary; by limiting the target object candidate areas using only the two-dimensional optical flow calculated from the time-series images before performing the three-dimensional collision determination, the processing speed can be increased.
[Selection] Figure 3

Description

  The present invention relates to a periphery monitoring device that is mounted on a self-propelled mobile body such as a vehicle or a robot and monitors target objects around the mobile body, and more particularly to a method of determining a collision between the mobile body and a target object.

  Typical conventional techniques that can serve as a periphery monitoring device performing the collision determination described above include the following Patent Documents 1 to 3. Patent Document 1 proposes a vehicle collision warning method and apparatus in which the optical flow of a vertical or horizontal edge of interest is calculated from time-series two-dimensional images of a single camera, and a collision time is calculated from the optical flow so that a collision risk can be detected at high speed.

  Similarly, Patent Document 2 proposes a collision determination device and method that calculate the optical flow of vertical and horizontal edges from time-series two-dimensional images of a single camera, extract a target region using the horizontal velocity component of the optical flow, and determine the risk of collision by calculating the time to reach the host vehicle from the vertical velocity component within the extracted target region.

  Patent Document 3 proposes a collision time calculation device and an obstacle detection device in which any two points belonging to the same target object in time-series two-dimensional images of a single camera are extracted as evaluation points with reference to an arbitrary coordinate axis set on the image, and the time required for the target object to collide with the imaging surface of the camera is calculated from information derived from the two points.

  With these methods, the time at which a collision may occur can be calculated with one (monocular) camera, but when the relative speed is low or the target object is far away, that is, when the apparent moving speed of the target object on the screen is low, the collision may not be detected accurately. In addition, determining the possibility of collision from monocular time-series images requires determining and extracting the type of target object (vehicle / person), which complicates the processing.

  Patent Document 4 is cited as a conventional technique that addresses this problem. In that technique, a three-dimensional optical flow is calculated from images captured by two (stereo) cameras, and stationary objects and moving target objects are distinguished on the basis of the three-dimensional information. Specifically, distance information is calculated from the stereo image and compared, on the basis of a road model, with the height of the road surface at the corresponding position in the parallax direction; data lying above the road surface are extracted as three-dimensional objects, and a three-dimensional optical flow is calculated for each extracted three-dimensional object. After the optical flow components due to the vehicle's own speed and steering direction are removed, an object that still has a motion vector is determined to be a moving object, and an object that does not is determined to be a stationary object.

Patent Document 1: JP 11-353565 A
Patent Document 2: JP 2006-99155 A
Patent Document 3: JP 2006-107422 A
Patent Document 4: JP 2006-134035 A

  In the conventional technology described above, both stationary objects and moving target objects are first extracted from the result of associating the standard image with the reference image using the road surface model, and it must then be determined which of the two each object is. Since the number of associations is therefore not significantly reduced, high-speed processing (an improved frame rate) cannot be realized.

  An object of the present invention is to provide a periphery monitoring device and a periphery monitoring method capable of realizing high-speed processing (improving the frame rate).

  The periphery monitoring device of the present invention is mounted on a moving body and monitors target objects around the moving body. It comprises an image acquisition unit that acquires surrounding images in time series; a time-series information calculation unit that calculates a movement component from the time-series images obtained by the image acquisition unit; a target object candidate area extraction unit that extracts, based on the movement component obtained by the time-series information calculation unit, candidate areas in which a target object is likely to exist; a three-dimensional information acquisition unit that acquires three-dimensional information of the candidate areas calculated by the target object candidate area extraction unit; and a collision determination unit that determines the possibility of collision from the three-dimensional information of the candidate areas obtained by the three-dimensional information acquisition unit and their positional relationship with the periphery monitoring device.

  According to the above configuration, in a periphery monitoring device that is mounted on a moving body and monitors surrounding target objects, surrounding images are acquired in time series by an image acquisition unit such as a monocular or stereo camera. When the time-series information calculation unit calculates a movement component such as a two-dimensional optical flow from the time-series images, the target object candidate area extraction unit first extracts, based on that movement component, candidate areas in which a target object is likely to exist. The three-dimensional information acquisition unit then acquires the three-dimensional (real-space position) information of those candidate areas: with a monocular camera it measures the distance to each candidate area using a separately provided distance measurement unit or the like, while with a stereo camera, taking the captured image used for calculating the movement component as the standard image, it acquires the three-dimensional information of the candidate areas using the image captured by the other camera as the reference image. The collision determination unit determines the possibility of collision from the three-dimensional information of the target object candidate areas obtained in this way and their positional relationship with the periphery monitoring device.

  Therefore, the target object candidate area extraction unit first limits the candidate areas in which a target object whose collision is difficult to predict, such as one darting out from the side, is highly likely to exist, using the movement components calculated from the time-series images; the three-dimensional information acquisition unit then calculates three-dimensional information for only those candidate areas, and the collision determination unit determines the possibility of collision from the relationship between the extracted target object candidate areas and the monitoring device. In other words, the collision risk is determined by obtaining detailed three-dimensional information only for the target object candidate regions that are difficult to predict from the two-dimensional time-series images captured while the vehicle is running. Complicated object recognition processing and road models are thus unnecessary, and the processing speed can be increased (the frame rate improved) by limiting the target object candidate regions using only information calculated from the time-series images.

  In the periphery monitoring device of the present invention, the time-series information calculation unit calculates a two-dimensional optical flow as the movement component by corresponding point search processing of the time-series images obtained by the image acquisition unit.

  According to this configuration, a highly accurate movement component can be obtained on a per-pixel basis by using, as the movement component, the two-dimensional optical flow obtained by the corresponding point search processing.

  Furthermore, in the periphery monitoring device of the present invention, the target object candidate region extraction unit extracts, as the candidate areas, regions in which the two-dimensional optical flow calculated as the movement component by the time-series information calculation unit has a magnitude equal to or larger than a predetermined value, and also extracts a region as a candidate area, even when its two-dimensional optical flow is less than the predetermined value, when the moving speed is equal to or less than a predetermined value.

  According to this configuration, in order to suppress the amount of calculation when the three-dimensional information acquisition unit acquires the three-dimensional (real-space position) information corresponding to the positions determined to be target object candidate areas, the target object candidate area extraction unit extracts regions in which the two-dimensional optical flow has a magnitude equal to or larger than the predetermined value. However, when the target object is located directly ahead, the two-dimensional optical flow is small even though a collision is possible if the relative speed is high; such a region is therefore also extracted as a candidate area, even though its optical flow is less than the predetermined value, when its moving speed is equal to or less than the predetermined value.

  Therefore, target objects with a possibility of collision can be extracted without omission while the amount of calculation processing is suppressed.
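
As a minimal illustration of this two-way extraction rule, the sketch below (Python with NumPy; the threshold value and array layout are illustrative assumptions, not values from the patent) splits a dense two-dimensional optical flow field into clearly moving candidates and small-flow regions, the latter being kept for the later three-dimensional check rather than discarded, since a target directly ahead can show a small apparent flow despite a high relative speed.

```python
import numpy as np

def split_candidates(flow, of1=2.0):
    """flow: HxWx2 array of 2D optical flow vectors (pixels per frame).
    of1: flow-magnitude level separating 'clearly moving' from 'ambiguous' (assumed value).
    Returns two boolean masks:
      large - pixels whose flow clearly exceeds OF1 (handled by the FOE-based extraction
              described later in the embodiment)
      small - pixels whose flow is below OF1; these may still belong to a fast-approaching
              object straight ahead, so they are passed to the 3D optical flow stage."""
    mag = np.linalg.norm(flow, axis=2)
    large = mag >= of1
    small = ~large
    return large, small

# usage: large_mask, small_mask = split_candidates(flow_field)
```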

  In the periphery monitoring device of the present invention, the target object candidate region extraction unit extracts, as the candidate area, the area excluding the region around the vanishing point at which the greatest number of straight lines obtained by extending the two-dimensional optical flows calculated as the movement component by the time-series information calculation unit converge.

  According to this configuration, among the two-dimensional optical flows generated between the time-series images of a camera mounted on a moving body, the vanishing point at which the most extension lines of the two-dimensional optical flows converge is determined by the moving direction of the moving body, and the two-dimensional optical flows of stationary objects such as the road converge there.

  Therefore, by extracting the area excluding this region as the target object candidate area, only moving target objects can be extracted more efficiently.

  In the periphery monitoring device of the present invention, the three-dimensional information acquisition unit calculates the three-dimensional (real-space position) information corresponding to the positions determined to be target object candidate regions by the target object candidate region extraction unit, from a stereo image obtained by using a stereo camera as the image acquisition unit, by corresponding point search processing between the stereo images.

  According to this configuration, the distance information can be calculated from the images alone without a separate means for obtaining distance information, so the collision determination system can be built with a simple configuration.

  Furthermore, in the periphery monitoring device of the present invention, the three-dimensional information acquisition unit is a distance sensor that acquires the three-dimensional (real-space position) information corresponding to the positions determined to be target object candidate regions by the target object candidate region extraction unit.

  According to this configuration, by using a separate means for acquiring distance information, such as a millimeter wave radar, the distance information can be obtained without performing any special calculation.

  In the periphery monitoring device of the present invention, the collision determination unit includes a three-dimensional optical flow calculation unit that calculates a three-dimensional optical flow of the candidate area from the movement component obtained by the time-series information calculation unit and the three-dimensional information of the candidate area obtained by the three-dimensional information acquisition unit, and determines the possibility of collision from the three-dimensional optical flow calculated by the three-dimensional optical flow calculation unit and its positional relationship with the periphery monitoring device.

  According to this configuration, since the possibility of collision is determined from the three-dimensional optical flow and its positional relationship with the periphery monitoring device, the determination processing is simple.

  Furthermore, in the periphery monitoring device of the present invention, the collision determination unit performs the collision determination by an intersection determination between the periphery monitoring device and a line defined by the direction and length of the three-dimensional optical flow of the candidate area calculated by the three-dimensional optical flow calculation unit.

  According to this configuration, once the two parameters of the direction and length of the three-dimensional optical flow are calculated, whether there is a possibility of collision can be determined by the intersection determination, that is, by whether the target object is heading toward the periphery monitoring device.

  In the periphery monitoring device of the present invention, the collision determination unit changes a target distance for performing the collision determination in accordance with the speed of the moving body.

  According to this configuration, the target distance for performing the collision determination is not fixed; instead, the range considered safe at the current speed of the moving body, for example the range of the stopping distance corresponding to that speed, is excluded, so the determination processing can be reduced.

  Furthermore, in the periphery monitoring device of the present invention, the collision determination unit determines a candidate area to be a possible-collision region when the length of its three-dimensional optical flow calculated by the three-dimensional optical flow calculation unit is greater than a predetermined threshold.

  According to this configuration, when the collision determination is performed from the relationship between the three-dimensional optical flow and the periphery monitoring device as described above, even a region judged in that determination to have a low possibility of collision is determined to be a possible-collision region when the length of its three-dimensional optical flow is larger than the predetermined threshold.

  Since the length of the three-dimensional optical flow represents the relative movement of the target object per unit time, a length larger than the threshold means the target object may be approaching rapidly; by treating such a region as a possible-collision region, a delayed determination can be avoided.

  In the periphery monitoring device of the present invention, the time series information calculation unit performs a corresponding point search process for each measurement point.

  According to this configuration, when the two-dimensional optical flow is obtained as the movement component, the corresponding point search processing is performed for each pixel, so the target object candidate area extraction unit can extract in detail candidate areas that do not include the background or the road surface.

  Furthermore, in the periphery monitoring device of the present invention, the target object candidate area extraction unit divides the screen into a plurality of areas in advance and determines whether to extract each area as a target object candidate area by performing the measurement on a representative point of that area.

  According to this configuration, when cutting out the target object candidate areas, the two-dimensional optical flow is not calculated at every measurement point; instead, the screen is divided into a plurality of areas in advance and the measurement is performed on a representative point of each area.

  Therefore, it is possible to reduce the number of measurement points and to perform higher speed processing.

  As described above, the periphery monitoring device of the present invention, mounted on a moving body and monitoring surrounding target objects, calculates a movement component such as a two-dimensional optical flow from time-series images, extracts candidate areas in which a collision target object is highly likely to exist on the basis of that component, and determines the possibility of collision from the three-dimensional information of the candidate areas and their positional relationship with the periphery monitoring device.

  Therefore, the candidate areas in which a target object whose collision is difficult to predict, such as one darting out from the side, is highly likely to exist are limited using the movement components calculated from the time-series images, and three-dimensional information is calculated only for those candidate areas; the processing speed can thus be increased (the frame rate improved) without complicated object recognition processing or a road model.

FIG. 1 is a schematic configuration diagram of a periphery monitoring device according to one embodiment of the present invention.
FIG. 2 is a diagram for explaining a general method of obtaining the distance and movement amount of a target object in a collision possibility analysis.
FIG. 3 is a block diagram showing one configuration example of the controller in the periphery monitoring device.
FIG. 4 is a flowchart for explaining the operation of the controller.
FIG. 5 is a diagram for explaining how a two-dimensional optical flow is obtained from time-series images.
FIG. 6 is a diagram for explaining candidate areas in which a target object with a possibility of collision is likely to exist.
FIG. 7 is a diagram showing the candidate areas in three-dimensional space.
FIG. 8 is a diagram showing the result of determining whether a target object is a stationary object or a moving object by a conventional method.
FIG. 9 is a diagram showing the result of determining whether a target object is a stationary object or a moving object in the present embodiment.
FIG. 10 is a diagram showing an example of the two-dimensional optical flow of a stationary object.
FIG. 11 is a diagram showing an example of the two-dimensional optical flow when a moving object and a stationary object coexist.
FIG. 12 is a diagram showing an example of the two-dimensional optical flow of a road surface.
FIG. 13 is a diagram for explaining an example of dividing the screen into regions for calculating the two-dimensional optical flow.
FIG. 14 is a diagram for explaining another example of dividing the screen into regions for calculating the two-dimensional optical flow.
FIG. 15 is a diagram for explaining another example of measuring the distance to a target object.
FIG. 16 is a diagram for explaining another example of the collision determination.
FIG. 17 is a diagram for explaining the multiresolution method used in the corresponding point search processing.
FIG. 18 is a diagram for explaining an example of the collision determination.

  FIG. 1 is a schematic configuration diagram of a periphery monitoring device 1 according to an embodiment of the present invention. The periphery monitoring device 1 is mounted on a vehicle 2 and comprises stereo cameras 11 and 12, which each acquire a two-dimensional input image of the scene, and a controller 13, which obtains three-dimensional information from the acquired stereo images, judges the danger of collision with target objects 4 and 5 present on the road surface 3, and, when the danger level is high, issues a warning by sounding a buzzer 6; the device is used in a collision judgment system. As the warning processing, operations such as displaying information on a display to prompt the driver to take avoidance measures, or displaying the distance and time to the collision target or the size of the target, can be performed alone or in combination. If a recording unit is provided instead of the buzzer 6, a drive recorder that automatically records the scene when the danger level is high can be realized, and if a brake device is provided, a collision prevention system that automatically brakes in such a case can be realized.

  The stereo cameras 11 and 12 output a pair of left and right images (a standard image and a reference image) obtained by photographing the subject at the same timing. In this embodiment, for simplicity of explanation, it is assumed that the aberrations of the stereo cameras 11 and 12 are well corrected and that the cameras are installed parallel to each other; even if the actual hardware does not satisfy these conditions, the images can be converted to equivalent ones by image processing. The stereo images are transmitted from the stereo cameras 11 and 12 to the controller 13 via a communication line; the communication method for the image data between the cameras and the controller is not limited to a wired method and may be wireless.

  As in Patent Document 4, the advantage of a collision possibility analysis using three-dimensional movement components is that the possibility of collision between the target objects 4 and 5 and the host vehicle 2 can be determined with high accuracy. However, as shown in FIG. 2, when the three-dimensional movement components are calculated from the images captured by the stereo cameras 11 and 12, the number of associations between images increases and the calculation cost rises significantly.

FIG. 2 is a diagram for explaining a general method of obtaining the distance and movement amount of the target objects 4 and 5 in a collision possibility analysis. Let the image obtained by one of the stereo cameras 11 and 12 be the standard image I1 and the other the reference image I2, giving a total of four images at time t and after Δt (one frame period). First, for the pixel P1(i, j, t) on the standard image I1_t at time t, the corresponding point P2(i, j, t) on the reference image I2_t is calculated by the corresponding point search processing, the three-dimensional information F(i, j, t) is obtained, and the distance to the target objects 4 and 5 becomes known. Similarly, for the pixel P1(i, j, t+Δt) on the standard image I1_(t+Δt) at time t+Δt, the corresponding point P2(i, j, t+Δt) on the reference image I2_(t+Δt) is calculated by the corresponding point search processing, and the three-dimensional information F(i, j, t+Δt) is obtained. Subsequently, corresponding point search processing is performed between the standard images I1_t and I1_(t+Δt) to associate the pixels P1(i, j, t) and P1(i, j, t+Δt). As a result, the three-dimensional information F(i, j, t) and F(i, j, t+Δt) are also associated, and the movement amount of the target objects 4 and 5 can be calculated.

However, it is this association that takes the most time in three-dimensional measurement with the stereo cameras 11 and 12, and the more it can be reduced, the faster the processing (the higher the frame rate). Therefore, in the present embodiment, candidate areas in which the target objects 4 and 5 have a high possibility of collision are first determined using only the movement of the target objects between the standard time-series images I1_t and I1_(t+Δt), before the three-dimensional movement components are computed.

FIG. 3 is a block diagram showing one configuration example of the controller 13, and FIG. 4 is a flowchart for explaining its operation. The controller 13 includes a time-series information calculation unit 21, a target object candidate region extraction unit 22, a three-dimensional information acquisition unit 23, and a collision possibility determination unit 24. In step S1, the stereo cameras 11 and 12 serving as the image acquisition unit acquire images around the vehicle 2 in time series. In step S2, from the obtained time-series images, the time-series information calculation unit 21 first obtains, as shown in FIG. 5, the movement (vector) component OF_XY(i, j, t+Δt), such as a high-precision per-pixel two-dimensional optical flow, by the corresponding point search processing between the standard images I1_t and I1_(t+Δt). That is,
OF_XY(i, j, t+Δt) = P1(i, j, t+Δt) − P1(i, j, t).

As a method for calculating the two-dimensional optical flow, a gradient method, a correlation calculation method, or the like can be used. The gradient method is a general technique that estimates the two-dimensional optical flow from a constraint equation based on the spatiotemporal derivatives of the image. The correlation calculation method performs the corresponding point search by correlation calculation within a window and is likewise a well-established general technique. Among the correlation calculation methods, POC (phase-only correlation), which can calculate the corresponding position in sub-pixel units from the similarity of signals obtained by frequency-decomposing the patterns in windows set on the two input images and suppressing the amplitude component, is less affected by luminance differences and noise between the standard images I1_t and I1_(t+Δt), so the association can be performed stably and with high accuracy. For frequency-decomposing the pattern in a window, an established method such as FFT, DFT, DCT, DST, wavelet transform, or Hadamard transform can be used. When the two-dimensional optical flow is obtained in this way by performing the corresponding point search for each pixel, the target object candidate region extraction unit 22 can extract in detail target candidate regions that do not include the background or the road surface 3.
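
As an illustration of the phase-only correlation mentioned above, the following NumPy sketch estimates the integer-pixel shift between two image windows from the normalized cross-power spectrum; window selection, the sub-pixel peak fitting that POC normally provides, and the alternative transforms listed above are omitted, and the sign convention is only illustrative.

```python
import numpy as np

def poc_shift(win_a, win_b):
    """Estimate the translational shift between two equally sized grayscale windows
    by phase-only correlation (amplitude suppressed, phase kept)."""
    fa = np.fft.fft2(win_a)
    fb = np.fft.fft2(win_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12          # keep only the phase of the cross spectrum
    corr = np.fft.ifft2(cross).real         # correlation surface with a peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # convert the peak position to a signed shift (dy, dx); the sign depends on
    # which window is treated as the reference
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

The displacement of the peak is the two-dimensional flow vector for the pixel at the window center; repeating this for every pixel, or only for representative points as described later, yields OF_XY(i, j, t+Δt).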

Subsequently, in step S3, the target object candidate area extraction unit 22 extracts, based on the movement component OF_XY(i, j, t+Δt), candidate regions OB(t, n) in which the target objects 4 and 5 are likely to exist, as shown in FIG. 6. Here n = 1, 2, ... is the number of the grouped candidate area (target object); when no grouping is performed, it may instead be used as a flag indicating whether the area belongs to a stationary object or a target object. In the example of FIG. 6, the first candidate region OB(t, 1) at time t consists of the pixels P1(2,2,t), P1(3,2,t), P1(2,3,t), P1(3,3,t), P1(2,4,t), and P1(3,4,t).
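
One simple way to group candidate pixels into numbered regions OB(t, n) is a flood-fill labelling pass such as the sketch below; the 4-connectivity rule and the boolean-mask layout are illustrative assumptions, since the patent only requires that candidate pixels be grouped, not how.

```python
import numpy as np
from collections import deque

def label_candidate_regions(mask):
    """mask: HxW boolean array of candidate pixels.
    Returns an HxW int array where 0 is background and 1..n number the grouped
    candidate regions OB(t, 1) .. OB(t, n)."""
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        n += 1
        queue = deque([(sy, sx)])
        labels[sy, sx] = n
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connectivity
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = n
                    queue.append((ny, nx))
    return labels
```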

When the candidate region OB(t, 1) has been extracted in this way, the three-dimensional information acquisition unit 23 searches, in step S4, for the corresponding points P2(i, j, t) on the reference image I2_t for the pixels P1(2,2,t), P1(3,2,t), P1(2,3,t), P1(3,3,t), P1(2,4,t), and P1(3,4,t), and obtains the three-dimensional (real-space position) information XYZ(i, j, t) shown in FIG. 7, namely XYZ(2,2,t,1), XYZ(3,2,t,1), XYZ(2,3,t,1), XYZ(3,3,t,1), XYZ(2,4,t,1), and XYZ(3,4,t,1). Similarly, for the standard image I1_(t+Δt) after Δt, the three-dimensional information acquisition unit 23 obtains the three-dimensional information XYZ(i, j, t+Δt) of the candidate region OB(t+Δt, 1) corresponding to the candidate region OB(t, 1).
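
For the parallel, aberration-corrected stereo arrangement assumed earlier, the three-dimensional position of a matched candidate pixel follows from its disparity by the usual triangulation relations. The sketch below converts one matched pixel pair into XYZ(i, j, t); the focal length, baseline, and principal point are illustrative parameters, not values from the patent.

```python
import numpy as np

def pixel_to_xyz(u_std, v_std, u_ref, f=800.0, baseline=0.12, cx=320.0, cy=240.0):
    """u_std, v_std: pixel coordinates of P1 in the standard image.
    u_ref: horizontal coordinate of the corresponding point P2 in the reference image.
    f: focal length in pixels, baseline: camera separation in metres (assumed values).
    Returns (X, Y, Z) in metres for a rectified, parallel stereo pair."""
    d = u_std - u_ref                 # disparity in pixels
    if d <= 0:
        return None                   # no valid depth for this match
    Z = f * baseline / d              # depth from triangulation
    X = (u_std - cx) * Z / f          # lateral position
    Y = (v_std - cy) * Z / f          # vertical position
    return np.array([X, Y, Z])
```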

  From the three-dimensional movement component based on the time-series three-dimensional information XYZ(i, j, t) and XYZ(i, j, t+Δt) obtained in this way for the same target object, the three-dimensional optical flow calculation unit 24a of the collision determination unit 24 obtains the three-dimensional optical flow OF3 in step S5, and the possibility of collision is determined from the positional relationship between the three-dimensional optical flow OF3 and the host vehicle 2. If there is a possibility of collision in step S6, the buzzer 6 is sounded in step S7 and the process returns to step S1 after the time Δt; if there is no possibility of collision, the process returns directly to step S1 after the time Δt.

  As shown in FIG. 18, the possibility of collision can be determined easily and with high accuracy by using the three-dimensional optical flow OF3. Specifically, by determining whether the extension line of the three-dimensional optical flow OF3 intersects the host vehicle 2, it is determined whether the target objects 4 and 5 located ahead of the host vehicle 2 are objects with which a collision is possible. Therefore, once the two parameters of the direction and length of the three-dimensional optical flow OF3 are calculated, the extension line can be constructed, and whether there is a possibility of collision can be determined from whether the target objects 4 and 5 are heading toward the periphery monitoring device 1, based on the intersection determination between the extension line and the host vehicle 2. In FIG. 18, in addition to the target object 5, a road obstacle (stationary body) on the path of the host vehicle 2, and the target object 4, a preceding vehicle, a target object 7 consisting of a person crossing the lane, whose three-dimensional optical flow OF3 intersects the host vehicle even though the person is not on its path, is also determined to be a collision object. Since the three-dimensional optical flow OF3 is the combined vector of the speed of the host vehicle 2 and the speeds of the target objects 4, 5, and 7, and the movements of the target objects are analyzed three-dimensionally, the collision determination processing can be performed with high accuracy, which is preferable. Details of this collision determination processing are described in Japanese Patent Application No. 2008-24478 by the applicant of the present application.
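
A minimal sketch of this intersection determination is given below: the three-dimensional optical flow of a candidate point is extended from its current position, and a collision is flagged if the extension passes through a box approximating the host vehicle, or if the flow length itself exceeds a threshold (the rapid-approach case discussed in the next paragraph). The vehicle dimensions, monitored distance, sampling of the extension line, and threshold value are all illustrative assumptions, not values from the patent.

```python
import numpy as np

def collision_possible(xyz_t, xyz_t1, vehicle_half_width=1.0, vehicle_length=4.5,
                       monitor_dist=50.0, rapid_threshold=1.5):
    """xyz_t, xyz_t1: 3D position of a candidate point at t and t+dt, in a frame with
    the host vehicle's front at the origin, X lateral and Z forward (metres).
    Height (Y) is ignored in this ground-plane sketch."""
    xyz_t, xyz_t1 = np.asarray(xyz_t, float), np.asarray(xyz_t1, float)
    of3 = xyz_t1 - xyz_t                       # 3D optical flow over one frame
    n = np.linalg.norm(of3)
    if n > rapid_threshold:                    # large relative motion: treat as risky
        return True
    if n < 1e-6:
        return False                           # no relative motion
    direction = of3 / n
    for s in np.linspace(0.0, monitor_dist, 200):   # step along the extension line
        p = xyz_t1 + s * direction
        if abs(p[0]) <= vehicle_half_width and -vehicle_length <= p[2] <= 0.0:
            return True                        # extension line meets the vehicle box
    return False
```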

  In addition, preferably, when the collision determination unit 24 performs the collision determination from the relationship between the three-dimensional optical flow OF3 and the host vehicle 2 as described above, even a region judged in that determination to have a low possibility of collision is determined to be a possible-collision region when the length of the three-dimensional optical flow OF3 of the candidate area calculated by the three-dimensional optical flow calculation unit 24a is larger than a predetermined threshold. Since the length of the three-dimensional optical flow OF3 represents the relative movement of the target objects 4, 5, and 7 per unit time, a length larger than the threshold means the object may be approaching rapidly, so treating such a region as a possible-collision region avoids a delayed determination.

  Here, when it is determined from the two-dimensional optical flow calculated from the images whether a target object is a stationary object or a moving object, the result is as shown in FIG. 8. That is, when the magnitude of the two-dimensional optical flow exceeds a certain level OF1, corresponding to the speed of the target object exceeding a certain level V1, it can clearly be determined that the object is a moving-object candidate if the flow is large and a stationary-object candidate if it is small. When the magnitude of the two-dimensional optical flow is at or below the level OF1, however, it becomes difficult to determine whether the target object is a moving-object candidate or a stationary-object candidate. For example, when the target object is almost directly ahead of the host vehicle 2, the two-dimensional optical flow is small even if the relative speed is high. Even a small optical flow can therefore indicate a high risk of collision, so the regions containing such target objects must be extracted without omission. Therefore, in the present embodiment, by using the following two candidate region extraction methods in the target object candidate region extraction unit 22, a region is extracted as a candidate area, as shown in FIG. 9, even when the magnitude of its two-dimensional optical flow is below the level OF1 (its moving speed is at or below the predetermined value V1), so that such omissions are eliminated while the amount of calculation processing is suppressed.

  First, in the first object candidate region extraction method, which is applied when the magnitude of the two-dimensional optical flow is at or above the level OF1 (the moving speed of the target object is above the predetermined value V1), the following cutting-out methods are used to determine whether an object is a moving object subject to the collision determination or a stationary object that does not require it, and the target object region is extracted as a target object candidate region. The first cutting-out method uses the fact that, when the stereo cameras 11 and 12 face forward and the host vehicle 2 moves straight ahead, the straight lines OFSa obtained by extending the two-dimensional optical flows OFS of a stationary object 8, as shown in FIG. 10, intersect at a single point. This point is called the vanishing point (FOE: Focus of Expansion) and is a fixed point determined by the moving direction of the host vehicle 2. For example, in a scene containing only the stationary object 8 as in FIG. 10, the extension lines OFSa of the two-dimensional optical flows OFS calculated from the time-series images at times t and t+Δt converge near the center of the image.

  On the other hand, in a scene where the moving object 7 and the stationary object 8 coexist as shown in FIG. 11, the FOE of the stationary object 8 is still determined near the center of the image as described above, whereas the FOE of the extension lines OFMa of the two-dimensional optical flow OFM of the moving object 7 is not restricted to the center of the image and can lie anywhere. In other words, each target object other than the stationary object 8 has its own independent FOE. The first cutting-out method extracts object candidate regions using this property of the FOE.

  As the extraction method, the candidates can be obtained simply from the number of measurement points converging on each FOE. For example, among the N measurement points of the two-dimensional optical flows OFM and OFS, the FOE on which the largest number of points converge is calculated, and the candidate points of the moving object 7 are extracted by excluding the measurement points converging on that FOE. That is, among the two-dimensional optical flows OFS and OFM generated between the time-series images of the cameras 11 and 12 mounted on the moving body, the vanishing point (FOE) on which the extension lines OFSa and OFMa converge most is determined by the moving direction of the moving body, and the two-dimensional optical flows of stationary objects 8 such as the road converge there; by extracting the area excluding this region as the candidate area of the target objects 4 and 5 (moving object 7), only moving target objects can be extracted more efficiently.
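
The dominant FOE and the exclusion of the points converging on it can be sketched as follows: the FOE is estimated as the point closest, in a least-squares sense, to all flow extension lines, which approximates the FOE on which the most lines converge when stationary-scene points dominate, and measurement points whose lines pass near it are then discarded. The least-squares formulation and the pixel tolerance are illustrative assumptions.

```python
import numpy as np

def estimate_foe(points, flows):
    """points: Nx2 pixel positions, flows: Nx2 2D optical flow vectors.
    Returns the point minimizing the squared distance to all flow extension lines."""
    d = flows / (np.linalg.norm(flows, axis=1, keepdims=True) + 1e-12)
    A, b = np.zeros((2, 2)), np.zeros(2)
    for p, di in zip(points, d):
        proj = np.eye(2) - np.outer(di, di)   # projector onto the line normal
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

def moving_object_candidates(points, flows, tol=5.0):
    """Keep the measurement points whose extension lines do NOT pass near the
    dominant FOE; these are the moving-object candidates (tol in pixels)."""
    foe = estimate_foe(points, flows)
    d = flows / (np.linalg.norm(flows, axis=1, keepdims=True) + 1e-12)
    keep = []
    for i, (p, di) in enumerate(zip(points, d)):
        v = foe - p
        dist = abs(v[0] * di[1] - v[1] * di[0])   # distance from the FOE to the line
        if dist > tol:
            keep.append(i)
    return foe, keep
```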

  Even when a plurality of moving objects 7 exist, the FOEs of points belonging to the same moving object are substantially identical, so the objects can be grouped by FOE. Further, as shown in FIG. 12, the extension lines OFRa of the two-dimensional optical flow OFR of the road surface 3 also converge on the same FOE near the image center as the stationary object 8, so the road surface can be distinguished from other moving objects.

  On the other hand, in the second cutting-out method, the two-dimensional optical flow is not calculated at every measurement point as described above; instead, the screen is divided into a plurality of areas in advance and the measurement is performed on a representative point of each area. Specifically, as shown in FIG. 13, the image region 10 is divided into equally spaced local areas A, and target object candidates are extracted from the results at the representative measurement points OB(i, j, t+Δt, n) of each local area. The advantage of this method is that the number of measurement points can be reduced, enabling faster processing.

  The local areas A may be set at equal intervals as described above, or at unequal intervals as shown in FIG. 14. In the example of FIG. 14, the local area A1 on the near side of the host vehicle 2 (the lower part of the screen) is larger, and the areas become smaller with distance (toward the upper part of the screen), as indicated by A2, A3, and so on. This is because, with a vehicle-mounted camera, the distance increases toward the upper part of the image, that is, a target object appears smaller on the image. As a result, the number of measurement points can be reduced further and even faster processing is possible.
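
The unequal division of FIG. 14 can be sketched as below: rows of local areas shrink toward the top of the image (the far range), and the representative measurement point of each area is taken at its center. The geometric progression of row heights, the equal column widths, and the center-point choice are illustrative assumptions, not the patent's specification.

```python
def unequal_rows(image_height, n_rows=6, ratio=0.7):
    """Divide the image height into n_rows bands whose heights shrink by `ratio`
    toward the top of the image. Returns a list of (y_top, y_bottom) pairs,
    top band first (a sketch; rounding gaps at the very top are ignored)."""
    weights = [ratio ** i for i in range(n_rows)]        # largest weight = bottom band
    total = sum(weights)
    bounds, y = [], image_height
    for w in weights:                                     # build bands from bottom up
        h = int(round(image_height * w / total))
        bounds.append((max(y - h, 0), y))
        y -= h
    return bounds[::-1]

def representative_points(image_width, image_height, n_cols=8, **kw):
    """Representative measurement point (center) of every local area, as (x, y)."""
    points = []
    for y0, y1 in unequal_rows(image_height, **kw):
        cw = image_width // n_cols
        for c in range(n_cols):
            points.append((c * cw + cw // 2, (y0 + y1) // 2))
    return points
```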

  Furthermore, the third cutting-out method uses an existing technique, for example that of "Examination of a moving object recognition method using principal component analysis", IPSJ SIG Notes, Computer Vision Research Group Report, 96 (31), pp. 51-58, 1996-03-21, in which the estimation error obtained when estimating the FOE for each locally divided area is subjected to threshold processing to detect the target object (vanishing point estimation residual method). In this method, the estimated residual of an area containing a pedestrian is larger than that of the background, so an area with a large estimation error can be detected as the object candidate area OB(i, j, t+Δt, n). Using these first to third cutting-out methods, the first object candidate region extraction method can be realized.

In contrast, in the second object candidate region extraction method, which is applied when the magnitude of the two-dimensional optical flow is below the level OF1, only regions in which the vector magnitude of the two-dimensional optical flow (movement component) OF_XY(i, j, t+Δt) calculated as described above is smaller than a certain range are extracted as regions in which an object candidate OB(i, j, t+Δt, n) exists. That is, when the optical flow OF_XY(i, j, t+Δt) calculated from the two-dimensional images is reasonably large (above OF1 in FIGS. 8 and 9), it can easily be separated from the road surface 3, and in some cases the possibility of collision can be determined from the two-dimensional optical flow alone; when it is small (at or below OF1 in FIGS. 8 and 9), however, it is difficult to determine whether the movement component is produced solely by the travel of the host vehicle 2 or also contains a component due to the movement of the target object itself. Therefore, by calculating the three-dimensional optical flow only for the small flows and determining the presence or absence of a collision from it, the amount of calculation can be reduced as described above and the determination can be performed with high accuracy.

  As described above, when the extraction of the target object candidate regions OB(i, j, t+Δt, n) by the target object candidate extraction unit 22 in step S3 is completed, the three-dimensional information acquisition unit 23 calculates, in step S4, the three-dimensional information XYZ(i, j, t, n) positionally corresponding to the captured image I1(i, j, t) at time t for each target object candidate region OB(i, j, t+Δt, n), and the three-dimensional information XYZ(i, j, t+Δt, n) positionally corresponding to the captured image I1(i, j, t+Δt) at time t+Δt.

  In the description above, the image acquisition unit consists of the stereo cameras 11 and 12, and the three-dimensional information acquisition unit 23 calculates the three-dimensional (real-space position) information corresponding to the positions determined to be target object candidate regions OB(i, j, t+Δt, n) by the target object candidate region extraction unit 22, from the stereo image, by corresponding point search processing between the stereo images. In this case, the distance information can be calculated from the images alone without a separate means for obtaining it, so a collision determination system can be built with a simple configuration.

  However, since the three-dimensional information in this embodiment only requires the distance to the target object to be known, a monocular camera may be used instead, in combination with an external distance measuring means.

  The distance measuring means can be realized, for example, by the so-called TOF (time of flight) method shown in FIG. 15, in which near-infrared light is emitted from LEDs 52 mounted around a CMOS sensor 51, the light reflected back from the subject 53 is received by the CMOS sensor 51, and a timer 54 measures the elapsed time. The light reception result at each pixel of the CMOS sensor 51 gives the distance to the target object present in the corresponding area A of the image 10, so the three-dimensional information can be acquired. A typical example of such a configuration is a laser range finder manufactured by Canesta, which outputs a three-dimensional measurement result. By using this three-dimensional information, the three-dimensional information positionally corresponding to each target object candidate region OB(i, j, t+Δt, n) extracted in step S3 can be acquired.
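
The TOF relation itself is simple: the round-trip time measured at each pixel converts to distance via the speed of light, as in the small sketch below (the example timing value is an illustrative assumption).

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Distance to the reflecting surface for one TOF pixel:
    half the round-trip path travelled at the speed of light."""
    return C * round_trip_seconds / 2.0

# example: a round trip of about 66.7 ns corresponds to roughly 10 m
print(round(tof_distance(66.7e-9), 2))   # ~10.0
```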

  Another method emits radar radio waves, receives the waves reflected from the target object, and measures the distance to obstacles ahead from the propagation time and the frequency difference caused by the Doppler effect; a typical example is a millimeter wave radar. The correspondence between the three-dimensional information acquired by such separate devices and the images captured by the cameras 11 and 12 can be established by storing a relationship such as that disclosed in JP 2000-348199 A (texture mapping method and apparatus), so that the camera's angle of view and the object detection angle of the millimeter wave radar can be associated. When a distance measuring means is provided separately in this way, the distance information can be obtained without performing any special calculation.

When the three-dimensional information corresponding to each target object candidate area OB(i, j, t+Δt, n) has been obtained in this way, the collision possibility determination unit 24 determines in step S6, as described above, whether the target objects 4 and 5 will collide with the host vehicle 2. For this purpose, the collision determination is performed from the direction and length of the two-dimensional optical flow (movement component) OF_XY(i, j, t+Δt) calculated in step S5, the speed of the host vehicle 2, and the distances XYZ(i, j, t, n) and XYZ(i, j, t+Δt, n) to each target object candidate area OB(i, j, t+Δt, n) obtained as described above. The speed information may be obtained from the vehicle 2 itself, but it can also be obtained from a predetermined area, for example from the length of the optical flow of the road surface 3 in the lower front part of the image.

  Preferably, instead of fixing the target distance for performing the collision determination, the collision determination unit 24 may change it according to the current speed of the host vehicle 2. With this configuration, the range considered safe at the speed of the host vehicle 2, for example the stopping distance corresponding to that speed (= free-running distance + braking distance) or the distance within which the danger can be safely avoided by a steering operation even allowing for the driver's and the steering mechanism's response delay, is excluded, so the determination processing can be reduced.
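
As a small illustration of such a speed-dependent range, the stopping distance mentioned above is the free-running distance covered during the reaction time plus the braking distance; the reaction time and friction coefficient below are illustrative assumptions, not values from the patent.

```python
G = 9.81  # gravitational acceleration in m/s^2

def stopping_distance(speed_kmh, reaction_time=1.0, friction=0.7):
    """Free-running distance + braking distance for the given vehicle speed."""
    v = speed_kmh / 3.6                   # convert to m/s
    free_running = v * reaction_time      # distance covered before braking starts
    braking = v * v / (2.0 * friction * G)
    return free_running + braking

# example: at 60 km/h this distance delimits the speed-dependent range used
# in the collision determination
print(round(stopping_distance(60.0), 1))   # ~36.9 m
```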

In addition, when the collision possibility determination unit 24 performs the collision determination, it is preferable, as shown in FIG. 16, to take the direction of movement due to steering into account when the host vehicle 2 is being steered. Specifically, when the moving object 7 is travelling straight as indicated by the arrow F1 and the host vehicle 2 turns by an angle θ as indicated by the arrow F2, the moving object 7 appears to turn by the angle θ and approach, as indicated by the arrow F3, and the collision determination is performed with the host vehicle 2 treated as travelling straight along the arrow F4. That is, a correction for the angle θ is applied to the two-dimensional optical flow OF_XY(i, j, t+Δt) of the moving object 7 actually obtained from the time-series images I1_t and I1_(t+Δt). The angle θ can be taken in as a steering sensor output from a body control ECU or the like.
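
A minimal sketch of this correction rotates the relative motion of a candidate point by the steering angle θ, so the subsequent intersection test can keep treating the host vehicle as travelling straight; the planar rotation about the vehicle's vertical axis is an illustrative simplification of the correction described above.

```python
import numpy as np

def correct_for_steering(relative_motion_xz, theta_rad):
    """relative_motion_xz: (dX, dZ) motion of a candidate point relative to the host
    vehicle on the ground plane. Rotating it by the steering angle theta lets the
    collision test assume a straight-travelling host vehicle."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    rot = np.array([[c, -s],
                    [s,  c]])
    return rot @ np.asarray(relative_motion_xz, dtype=float)

# usage: corrected = correct_for_steering((0.1, -1.2), np.deg2rad(5.0))
```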

Furthermore, it is preferable that the time-series information calculation unit 21 use multiresolution processing, as shown in FIG. 17, for the corresponding point search of the time-series images obtained by the stereo cameras 11 and 12. Specifically, since the corresponding point in the standard image I1_(t+Δt) at time t+Δt is searched for with respect to the standard image I1_t at time t as described above with FIG. 5, one or more lower-resolution images (I1'_(t+Δt) and I1''_(t+Δt) in the example of FIG. 17) are created from the original high-resolution image I1_(t+Δt), and the corresponding point Q is searched for successively in I1''_(t+Δt), I1'_(t+Δt), and I1_(t+Δt) while the resolution is increased. By adopting such a method, the corresponding point can be searched for efficiently and with high accuracy even when the corresponding points are far apart between the two standard images I1_t and I1_(t+Δt).
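
The coarse-to-fine idea can be sketched as follows: a template around the pixel of interest is matched by brute force at the coarsest pyramid level, and the estimate is then refined within a small neighbourhood at each finer level. The 2x2 averaging pyramid, window size, and SSD cost are illustrative choices made for brevity (the embodiment itself uses the correlation methods such as POC described earlier), and border handling is omitted.

```python
import numpy as np

def downsample(img):
    """Halve the resolution by 2x2 averaging (even-sized crop assumed)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def coarse_to_fine_match(img_t, img_t1, y, x, levels=2, win=4, radius=3):
    """Find the point in img_t1 corresponding to (y, x) in img_t by matching a
    (2*win+1)^2 window, starting at the coarsest level and refining downward."""
    pyr_t, pyr_t1 = [np.asarray(img_t, float)], [np.asarray(img_t1, float)]
    for _ in range(levels):
        pyr_t.append(downsample(pyr_t[-1]))
        pyr_t1.append(downsample(pyr_t1[-1]))
    est = np.array([y, x]) // (2 ** levels)            # zero-motion start at coarsest level
    for lvl in range(levels, -1, -1):
        a, b = pyr_t[lvl], pyr_t1[lvl]
        py, px = np.array([y, x]) // (2 ** lvl)        # known position at this level
        tmpl = a[py - win:py + win + 1, px - win:px + win + 1]
        best, best_cost = est, np.inf
        for dy in range(-radius, radius + 1):          # small search around the estimate
            for dx in range(-radius, radius + 1):
                cy, cx = est[0] + dy, est[1] + dx
                cand = b[cy - win:cy + win + 1, cx - win:cx + win + 1]
                if cand.shape != tmpl.shape:
                    continue
                cost = float(np.sum((tmpl - cand) ** 2))
                if cost < best_cost:
                    best, best_cost = np.array([cy, cx]), cost
        est = best * 2 if lvl > 0 else best            # propagate to the next finer level
    return tuple(est)
```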

  As described above, in the periphery monitoring device 1 according to the present embodiment, which is mounted on the vehicle 2 and monitors the surrounding target objects 4 and 5, the surrounding images are acquired in time series by an image acquisition unit such as the monocular or stereo cameras 11 and 12; when the time-series information calculation unit 21 calculates a movement component such as a two-dimensional optical flow from the obtained time-series images, the target object candidate region extraction unit 22 first extracts, on the basis of that component, candidate areas in which the target objects 4 and 5 are likely to exist; the three-dimensional information acquisition unit 23 then acquires the three-dimensional (real-space position) information of those candidate areas; and the collision determination unit 24 determines the possibility of collision from the obtained three-dimensional information of the candidate areas of the target objects 4 and 5 and their positional relationship with the periphery monitoring device 1.

  Therefore, the target object candidate area extraction unit 22 first limits the candidate areas in which target objects 4 and 5 whose collision is difficult to predict, such as objects darting out from the side, are highly likely to exist, using the movement components calculated from the time-series images; the three-dimensional information acquisition unit 23 then calculates three-dimensional information only for those candidate areas, and the collision determination unit 24 determines the possibility of collision from the relationship between the extracted candidate areas and the monitoring device 1. Since the collision risk is determined by obtaining detailed three-dimensional information only for the target object candidate regions that are difficult to predict from the two-dimensional time-series images captured while the vehicle is running, the candidate areas of the target objects 4 and 5 are limited using only information calculated from the time-series images, without complicated object recognition processing or a road model, and the processing speed can be increased (the frame rate improved).

  In addition, since the time-series information calculation unit 21 calculates a two-dimensional optical flow as the movement component by the corresponding point search processing of the time-series images obtained by the image acquisition unit, a highly accurate movement component can be obtained on a per-pixel basis.

DESCRIPTION OF SYMBOLS
1 Periphery monitoring device
2 Vehicle
3 Road surface
4, 5 Target object
6 Buzzer
7 Moving object
8 Stationary object
11, 12 Stereo camera
13 Controller
21 Time-series information calculation unit
22 Target object candidate area extraction unit
23 Three-dimensional information acquisition unit
24 Collision possibility determination unit
24a Three-dimensional optical flow calculation unit
51 CMOS sensor
52 LED
53 Subject
54 Timer

Claims (12)

  1. In a periphery monitoring device that is mounted on a moving body and monitors a target object around the moving body,
    An image acquisition unit for acquiring the surrounding images in time series;
    A time series information calculation unit that calculates a moving component from the time series image obtained by the image acquisition unit;
    A target object candidate area extraction unit that extracts a candidate area where the target object is highly likely to exist based on the moving component obtained by the time-series information calculation unit;
    A three-dimensional information acquisition unit that acquires the three-dimensional information of the candidate region calculated by the target object candidate region extraction unit;
    A periphery monitoring device comprising: a collision determination unit that determines the possibility of collision from the three-dimensional information of the candidate area obtained by the three-dimensional information acquisition unit and the positional relationship with the periphery monitoring device.
  2.   The periphery monitoring device according to claim 1, wherein the time-series information calculation unit calculates a two-dimensional optical flow as the moving component by a corresponding point search process of the time-series image obtained by the image acquisition unit. .
  3.   The periphery monitoring device according to claim 2, wherein the target object candidate region extraction unit extracts, as the candidate area, a region in which the two-dimensional optical flow calculated as the moving component by the time-series information calculation unit has a magnitude equal to or larger than a predetermined value, and also extracts a region as the candidate area, even when its optical flow is less than the predetermined value, when its moving speed is equal to or less than a predetermined value.
  4.   The periphery monitoring device according to claim 2, wherein the target object candidate region extraction unit extracts, as the candidate area, the area excluding the region around the vanishing point at which the greatest number of straight lines obtained by extending the two-dimensional optical flows calculated as the moving component by the time-series information calculation unit converge.
  5.   The periphery monitoring device according to claim 3, wherein the three-dimensional information acquisition unit calculates the three-dimensional information corresponding to the positions determined to be candidate regions of the target object by the target object candidate region extraction unit, from a stereo image obtained by using a stereo camera as the image acquisition unit, by corresponding point search processing between the stereo images.
  6.   The periphery monitoring device according to claim 3 or 4, wherein the three-dimensional information acquisition unit is a distance sensor that acquires the three-dimensional information corresponding to the positions determined to be candidate regions of the target object by the target object candidate region extraction unit.
  7.   The periphery monitoring device according to any one of claims 1 to 6, wherein the collision determination unit includes a three-dimensional optical flow calculation unit that calculates a three-dimensional optical flow of the candidate region from the moving component obtained by the time-series information calculation unit and the three-dimensional information of the candidate region obtained by the three-dimensional information acquisition unit, and determines the possibility of collision from the three-dimensional optical flow calculated by the three-dimensional optical flow calculation unit and its positional relationship with the periphery monitoring device.
  8.   The periphery monitoring device according to claim 7, wherein the collision determination unit performs the collision determination based on an intersection determination between the periphery monitoring device and the direction and length of the three-dimensional optical flow of the candidate region calculated by the three-dimensional optical flow calculation unit.
  9.   The periphery monitoring device according to claim 1, wherein the collision determination unit changes the target distance for the collision determination in accordance with the speed of the moving body.
  10.   The periphery monitoring device according to claim 8 or 9, wherein the collision determination unit determines that the candidate region is a region with a possibility of collision when the length of the three-dimensional optical flow of the candidate region calculated by the three-dimensional optical flow calculation unit is larger than a predetermined threshold.
  11.   The periphery monitoring device according to claim 1, wherein the time-series information calculation unit performs the corresponding point search process for each measurement point.
  12.   The periphery monitoring device according to claim 1, wherein the target object candidate area extraction unit divides the screen into a plurality of areas in advance and determines whether or not to extract each area as a target object candidate area by performing measurement on a representative point of each area.
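
The sketch below is not part of the claims; it is a minimal, illustrative approximation of the processing pipeline described in claims 2, 3, 5, 7, 8 and 10, written in Python with OpenCV. Dense Farneback optical flow and block-matching stereo (StereoBM) are used here as stand-ins for the corresponding point search processes named in claims 2 and 5, all constants (focal length, baseline, flow threshold, collision radius, time horizon) are hypothetical placeholders rather than values taken from the patent, and the helper names (two_d_flow, candidate_mask, monitor_step, etc.) are invented for this example. The vanishing-point exclusion of claim 4 is omitted for brevity.

```python
# Illustrative sketch only: approximates the flow of claims 2, 3, 5, 7, 8 and 10
# with off-the-shelf OpenCV primitives. All constants are hypothetical.
import numpy as np
import cv2

FOCAL_PX = 800.0          # assumed focal length in pixels
BASELINE_M = 0.12         # assumed stereo baseline in metres
CX, CY = 320.0, 240.0     # assumed principal point
FLOW_MIN_PX = 2.0         # claim 3: minimum 2-D flow magnitude for a candidate pixel
FRAME_DT_S = 1.0 / 30.0   # assumed frame interval
SPEED_MIN_MPS = 0.5       # claim 10: minimum 3-D flow length (as a speed) to flag a point
COLLISION_RADIUS_M = 1.5  # assumed clearance around the device for the intersection test
TTC_LIMIT_S = 3.0         # assumed look-ahead horizon for the intersection test


def two_d_flow(prev_gray, curr_gray):
    """Claim 2 (stand-in): dense 2-D optical flow between consecutive frames."""
    return cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)


def candidate_mask(flow, ego_speed_mps, slow_speed_mps=0.5):
    """Claim 3: pixels whose flow magnitude reaches the threshold are candidates;
    when the moving body itself is (nearly) stopped, low-flow pixels are kept too."""
    mag = np.linalg.norm(flow, axis=2)
    if ego_speed_mps <= slow_speed_mps:
        return np.ones(mag.shape, dtype=bool)
    return mag >= FLOW_MIN_PX


def disparity(left_gray, right_gray):
    """Claim 5 (stand-in): block-matching stereo disparity (StereoBM returns x16 fixed point)."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    return stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0


def backproject(xs, ys, disp):
    """Pinhole back-projection of pixels with disparity d: Z = f * B / d."""
    z = FOCAL_PX * BASELINE_M / disp
    x = (xs - CX) * z / FOCAL_PX
    y = (ys - CY) * z / FOCAL_PX
    return np.stack([x, y, z], axis=1)


def monitor_step(prev_left, prev_right, curr_left, curr_right, ego_speed_mps):
    """One frame: 2-D flow -> candidate region -> 3-D info -> 3-D flow -> collision flags.
    Inputs are rectified grayscale (uint8) stereo frames at t-1 and t."""
    flow = two_d_flow(prev_left, curr_left)
    mask = candidate_mask(flow, ego_speed_mps)

    # 3-D positions of each candidate pixel at t-1 and at its flow-shifted
    # position at t (nearest-pixel correspondence, for simplicity).
    h, w = prev_left.shape
    disp_prev = disparity(prev_left, prev_right)
    disp_curr = disparity(curr_left, curr_right)
    ys, xs = np.nonzero(mask)
    xs2 = np.clip(np.round(xs + flow[ys, xs, 0]).astype(int), 0, w - 1)
    ys2 = np.clip(np.round(ys + flow[ys, xs, 1]).astype(int), 0, h - 1)
    d0, d1 = disp_prev[ys, xs], disp_curr[ys2, xs2]
    valid = (d0 > 0) & (d1 > 0)
    p0 = backproject(xs[valid], ys[valid], d0[valid])
    p1 = backproject(xs2[valid], ys2[valid], d1[valid])

    # Claim 7: 3-D optical flow as the per-point displacement per unit time.
    flow3 = (p1 - p0) / FRAME_DT_S
    speed = np.linalg.norm(flow3, axis=1)

    # Claims 8 and 10 (simplified): flag a point when its 3-D flow is long enough
    # and its extrapolated path comes within COLLISION_RADIUS_M of the device
    # origin within TTC_LIMIT_S.
    t = np.linspace(0.0, TTC_LIMIT_S, 30)
    future = p1[:, None, :] + flow3[:, None, :] * t[None, :, None]
    min_dist = np.linalg.norm(future, axis=2).min(axis=1)
    danger = (speed > SPEED_MIN_MPS) & (min_dist < COLLISION_RADIUS_M)
    # Pixel coordinates (previous frame) of the flagged candidate points.
    return xs[valid][danger], ys[valid][danger]
```

In this toy formulation the candidate mask is evaluated per pixel; the block-wise sampling of claim 12 (dividing the screen into areas in advance and measuring only a representative point per area) would reduce the number of flow and disparity look-ups accordingly.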
JP2009138692A 2009-06-09 2009-06-09 Perimeter monitoring device Expired - Fee Related JP5353455B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009138692A JP5353455B2 (en) 2009-06-09 2009-06-09 Perimeter monitoring device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2009138692A JP5353455B2 (en) 2009-06-09 2009-06-09 Perimeter monitoring device

Publications (2)

Publication Number Publication Date
JP2010286926A true JP2010286926A (en) 2010-12-24
JP5353455B2 JP5353455B2 (en) 2013-11-27

Family

ID=43542598

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2009138692A Expired - Fee Related JP5353455B2 (en) 2009-06-09 2009-06-09 Perimeter monitoring device

Country Status (1)

Country Link
JP (1) JP5353455B2 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07262375A (en) * 1994-03-25 1995-10-13 Toshiba Corp Mobile object detector
JP2001006096A (en) * 1999-06-23 2001-01-12 Honda Motor Co Ltd Peripheral part monitoring device for vehicle
JP2004355082A (en) * 2003-05-27 2004-12-16 Nec Corp Optical flow detection system, detection method, and detection program
JP2005209019A (en) * 2004-01-23 2005-08-04 Toshiba Corp Apparatus, method and program for obstacle detection
JP2009026250A (en) * 2007-07-23 2009-02-05 Panasonic Corp Plane component detector, ground plane detector, and obstacle detector
JP2010204805A (en) * 2009-03-02 2010-09-16 Konica Minolta Holdings Inc Periphery-monitoring device and method

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012174117A (en) * 2011-02-23 2012-09-10 Denso Corp Mobile object detection device
JP2013020507A (en) * 2011-07-12 2013-01-31 Aisin Seiki Co Ltd Vehicle detecting device, vehicle detecting method, and program
JP2013040932A (en) * 2011-08-09 2013-02-28 Boeing Co:The Image-based position measurement
JP2018116065A (en) * 2011-08-09 2018-07-26 ザ・ボーイング・カンパニーThe Boeing Company Determination of position of image base
JPWO2013027803A1 (en) * 2011-08-25 2015-03-19 日産自動車株式会社 Autonomous driving control system for vehicles
US9182761B2 (en) 2011-08-25 2015-11-10 Nissan Motor Co., Ltd. Autonomous driving control system for vehicle
CN104956400A (en) * 2012-11-19 2015-09-30 株式会社理光 Moving object recognizer
CN104956400B (en) * 2012-11-19 2018-02-06 株式会社理光 Mobile object identifier, vehicle and the method for identifying mobile object
JP2014115978A (en) * 2012-11-19 2014-06-26 Ricoh Co Ltd Mobile object recognition device, notification apparatus using the device, mobile object recognition program for use in the mobile object recognition device, and mobile object with the mobile object recognition device
WO2014077170A1 (en) * 2012-11-19 2014-05-22 Ricoh Company, Ltd. Moving object recognizer
US9607400B2 (en) 2012-11-19 2017-03-28 Ricoh Company, Ltd. Moving object recognizer
JP2014191474A (en) * 2013-03-26 2014-10-06 Fujitsu Ltd Concentration ratio determination program, concentration ratio determination apparatus and concentration ratio determination method
JP2016125843A (en) * 2014-12-26 2016-07-11 株式会社リコー Moving body, measurement system, measurement method, and program
US10151840B2 (en) 2014-12-26 2018-12-11 Ricoh Company, Ltd. Measuring system, measuring process, and non-transitory recording medium
WO2016113904A1 (en) * 2015-01-16 2016-07-21 株式会社日立製作所 Three-dimensional-information-calculating device, method for calculating three-dimensional information, and autonomous mobile device
JPWO2016113904A1 (en) * 2015-01-16 2017-08-03 株式会社日立製作所 Three-dimensional information calculation device, three-dimensional information calculation method, and autonomous mobile device
US10229331B2 (en) 2015-01-16 2019-03-12 Hitachi, Ltd. Three-dimensional information calculation device, three-dimensional information calculation method, and autonomous mobile device
CN107431747A (en) * 2015-04-03 2017-12-01 日立汽车系统株式会社 Camera device
JP2016197795A (en) * 2015-04-03 2016-11-24 日立オートモティブシステムズ株式会社 Imaging device
WO2016159142A1 (en) * 2015-04-03 2016-10-06 日立オートモティブシステムズ株式会社 Image pickup device
CN107431747B (en) * 2015-04-03 2020-03-27 日立汽车系统株式会社 Image pickup apparatus

Also Published As

Publication number Publication date
JP5353455B2 (en) 2013-11-27

Similar Documents

Publication Publication Date Title
US10832063B2 (en) Systems and methods for detecting an object
US10445928B2 (en) Method and system for generating multidimensional maps of a scene using a plurality of sensors of various types
US20190251375A1 (en) Systems and methods for curb detection and pedestrian hazard assessment
US9330320B2 (en) Object detection apparatus, object detection method, object detection program and device control system for moveable apparatus
US10127669B2 (en) Estimating distance to an object using a sequence of images recorded by a monocular camera
US10753758B2 (en) Top-down refinement in lane marking navigation
KR101829556B1 (en) Lidar-based classification of object movement
US10690770B2 (en) Navigation based on radar-cued visual imaging
US9886649B2 (en) Object detection device and vehicle using same
US9429650B2 (en) Fusion of obstacle detection using radar and camera
CN103852067B (en) The method for adjusting the operating parameter of flight time (TOF) measuring system
EP2728546B1 (en) Method and system for detecting object on a road
US20150336547A1 (en) Systems and methods for braking a vehicle based on a detected object
Fayad et al. Tracking objects using a laser scanner in driving situation based on modeling target shape
US7612800B2 (en) Image processing apparatus and method
US9047518B2 (en) Method for the detection and tracking of lane markings
US7327855B1 (en) Vision-based highway overhead structure detection system
KR101411668B1 (en) A calibration apparatus, a distance measurement system, a calibration method, and a computer readable medium recording a calibration program
JP5867273B2 (en) Approaching object detection device, approaching object detection method, and computer program for approaching object detection
US9313462B2 (en) Vehicle with improved traffic-object position detection using symmetric search
CN108271408B (en) Generating three-dimensional maps of scenes using passive and active measurements
US9836657B2 (en) System and method for periodic lane marker identification and tracking
US10061993B2 (en) Warning method of obstacles and device of obstacles
CN103852754B (en) The method of AF panel in flight time (TOF) measuring system
JP5297078B2 (en) Method for detecting moving object in blind spot of vehicle, and blind spot detection device

Legal Events

Date Code Title Description
20111201 A621 Written request for application examination (Free format text: JAPANESE INTERMEDIATE CODE: A621)
20130228 A977 Report on retrieval (Free format text: JAPANESE INTERMEDIATE CODE: A971007)
20130305 A131 Notification of reasons for refusal (Free format text: JAPANESE INTERMEDIATE CODE: A131)
20130425 A521 Written amendment (Free format text: JAPANESE INTERMEDIATE CODE: A523)
TRDD Decision of grant or rejection written
20130730 A01 Written decision to grant a patent or to grant a registration (utility model) (Free format text: JAPANESE INTERMEDIATE CODE: A01)
20130812 A61 First payment of annual fees (during grant procedure) (Free format text: JAPANESE INTERMEDIATE CODE: A61)
R150 Certificate of patent or registration of utility model (Free format text: JAPANESE INTERMEDIATE CODE: R150)
LAPS Cancellation because of no payment of annual fees