JP2008282106A - Method for detecting obstacle and obstacle detector - Google Patents

Method for detecting obstacle and obstacle detector

Info

Publication number
JP2008282106A
JP2008282106A (application JP2007123908A)
Authority
JP
Japan
Prior art keywords
motion vector
virtual
obstacle
vehicle
obstacle detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2007123908A
Other languages
Japanese (ja)
Other versions
JP5125214B2 (en)
Inventor
Takafumi Enami
Sotaro Kaneko
隆文 枝並
宗太郎 金子
Original Assignee
Fujitsu Ltd
富士通株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd
Priority to JP2007123908A
Publication of JP2008282106A
Application granted
Publication of JP5125214B2
Legal status: Expired - Fee Related
Anticipated expiration


Abstract

An object of the present invention is to appropriately detect obstacles that affect a running vehicle regardless of the state of the road.
An obstacle detection device acquires images taken at different points in time from a camera, calculates a motion vector for each region of the image based on the acquired images, generates a model of the space in which the vehicle travels from the traveling state of the vehicle, and calculates virtual motion vectors based on the generated model. The obstacle detection apparatus 100 then divides the model into a plurality of virtual areas and detects an obstacle based on the motion vectors and virtual motion vectors corresponding to the divided virtual areas and a determination criterion set for each virtual area.
[Selection] Figure 2

Description

  The present invention relates to an obstacle detection method and an obstacle detection device for detecting, from an image photographed by a camera, an obstacle that affects the running of a vehicle, and more particularly to an obstacle detection method and an obstacle detection device capable of appropriately detecting an obstacle that affects a running vehicle regardless of the state of the road.

  In recent years, technologies have been devised to prevent traffic accidents by detecting obstacles such as pedestrians and oncoming vehicles around a vehicle from images taken with an in-vehicle camera and notifying the driver of the detected obstacles.

  Technologies related to such obstacle detection are disclosed in, for example, Patent Document 1 and Patent Document 2. Specifically, the technique disclosed in Patent Document 1 divides an image taken by a single camera into a plurality of regions, calculates pixel motion vectors, detects image similarity, and detects an obstacle based on the motion vectors and the image similarity.

  Further, Patent Document 2 discloses a technique in which a travel area is determined on the image by detecting the width of the host vehicle and white lines, and when an obstacle, extracted by tracking feature amounts with template matching or the like, enters the travel area from the outside, the entering object is detected as an obstacle.

Japanese Patent Laid-Open No. 2005-100204; Japanese Patent Laid-Open No. 2004-280194

  In obstacle detection, it is desirable to detect, as an obstacle, an object that exists on the road or an object that moves in a direction of entering the road (that is, an object that affects the running vehicle). However, the technique disclosed in Patent Document 1 performs obstacle detection regardless of whether an object has entered the road (for example, an object moving outside the road is also detected as an obstacle), so there was a problem that unnecessary information was notified to the driver.

  Further, the technique disclosed in Patent Document 2 can perform obstacle detection only when a white line exists on the road. Therefore, depending on the state of the road (for example, when there is no white line on it), there was a problem that obstacles could not be detected.

  That is, appropriately detecting obstacles that affect a running vehicle regardless of the state of the road is an extremely important issue.

  The present invention has been made to solve the above-described problems of the prior art, and an object thereof is to provide an obstacle detection method and an obstacle detection device capable of appropriately detecting an obstacle that affects a running vehicle regardless of the state of the road.

  In order to solve the above-described problems and achieve the object, the present invention provides an obstacle detection method of an obstacle detection device for detecting, from an image photographed by a camera, an obstacle that affects the running of a vehicle. The method includes: a motion vector calculating step of acquiring images at different points in time from the camera, storing them in a storage device, and calculating a motion vector in each region of the image based on the images at different points in time stored in the storage device; a virtual motion vector calculating step of generating a model of the space in which the vehicle travels from the traveling state of the vehicle and calculating a virtual motion vector for each region of the image based on the generated model; and an obstacle detecting step of dividing the model into a plurality of virtual areas and detecting an obstacle based on the motion vector and the virtual motion vector corresponding to each divided virtual area and a determination criterion set for each virtual area.

  Further, in the above invention, the virtual areas include a first virtual area and a second virtual area, and the obstacle detection step compares the motion vector and the virtual motion vector included in the first virtual area and detects, as an obstacle, a portion where the difference between the motion vector and the virtual motion vector is equal to or greater than a threshold and the sign of the vector consisting of the difference between the motion vector and the virtual motion vector differs from the sign of the virtual motion vector.

  Further, in the above invention, the obstacle detection step compares the motion vector and the virtual motion vector included in the second virtual area and detects, as an obstacle, a portion where the difference between the motion vector and the virtual motion vector is equal to or greater than a threshold.

  Further, in the above invention, the obstacle detection step adjusts the threshold according to the position, within the virtual area, of the motion vector and the virtual motion vector being compared.

  Further, the present invention provides an obstacle detection device for detecting, from an image photographed by a camera, an obstacle that affects the running of a vehicle. The device includes: a storage device that stores images acquired at different points in time from the camera; motion vector calculation means for calculating a motion vector in each region of the image based on the images at different points in time stored in the storage device; virtual motion vector calculation means for generating a model of the space in which the vehicle travels from the traveling state of the vehicle and calculating a virtual motion vector for each region of the image based on the generated model; and obstacle detection means for dividing the model into a plurality of virtual areas and detecting an obstacle based on the motion vector and the virtual motion vector corresponding to each divided virtual area and a determination criterion set for each virtual area.

  According to the present invention, images at different points in time are acquired from a camera and stored in a storage device, a motion vector in each region of the image is calculated based on the stored images, a model of the space in which the vehicle travels is generated from the traveling state of the vehicle, a virtual motion vector for each region of the image is calculated based on the generated model, the model is divided into a plurality of virtual areas, and obstacles are detected based on the motion vectors and virtual motion vectors corresponding to the divided virtual areas and the determination criterion set for each virtual area. Therefore, obstacles that affect the running vehicle can be appropriately detected regardless of the state of the road.

  Further, according to the present invention, the virtual areas include the first virtual area and the second virtual area, the motion vector included in the first virtual area is compared with the virtual motion vector, and a portion where the difference between the motion vector and the virtual motion vector is equal to or greater than the threshold and the sign of the vector consisting of that difference differs from the sign of the virtual motion vector is detected as an obstacle. Therefore, obstacles entering the road can be accurately detected.

  Further, according to the present invention, the motion vector included in the second virtual area is compared with the virtual motion vector, and a portion where the difference between the motion vector and the virtual motion vector is equal to or greater than the threshold is detected as an obstacle. Therefore, a vehicle traveling ahead can be accurately detected.

  Further, according to the present invention, the threshold is adjusted according to the position, within the virtual area, of the motion vector and the virtual motion vector being compared, so erroneous detection of an obstacle due to the influence of noise or the like can be suppressed.

  Exemplary embodiments of an obstacle detection method and an obstacle detection device according to the present invention will be described below in detail with reference to the accompanying drawings.

  First, the outline and features of the obstacle detection apparatus according to the first embodiment will be described. FIG. 1 is a diagram for explaining the outline and features of the obstacle detection apparatus according to the first embodiment. As shown in the figure, the obstacle detection apparatus according to the present embodiment acquires images at different points in time from the camera, calculates a motion vector for each region of the image based on the acquired images (see the upper right side of FIG. 1), generates a model of the space in which the vehicle travels from the traveling state of the vehicle, and calculates a virtual motion vector based on the generated model (see the upper left side of FIG. 1).

  Then, the obstacle detection device divides the model into a plurality of virtual regions and detects an obstacle based on the motion vectors and virtual motion vectors corresponding to the divided virtual regions and the determination criteria set for each virtual region.

  In the example shown in the lower part of FIG. 1, the model is divided into a first virtual area and a second virtual area. In the first virtual area, the obstacle detection device detects as an obstacle a region where the difference between the motion vector and the virtual motion vector is equal to or greater than a threshold and the sign of the vector consisting of that difference differs from the sign of the virtual motion vector. In the second virtual area, the obstacle detection device detects as an obstacle a region where the difference between the motion vector and the virtual motion vector is equal to or greater than a threshold.
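
  The two per-region criteria above can be sketched as a single decision function. The following Python sketch is an illustration only, not part of the patent: the function name, the scalar treatment of the horizontal vector components, and the region labels are assumptions.

```python
def is_obstacle(mv, vv, region, threshold):
    """Decide whether one horizontal motion-vector sample indicates an obstacle.

    mv: measured motion vector (x component)
    vv: virtual motion vector (x component) from the traveling space model
    region: "side" (first virtual area) or "lower" (second virtual area)
    """
    diff = mv - vv  # the "difference vector" (x component)
    if abs(diff) < threshold:
        return False          # consistent with a stationary world: not an obstacle
    if region == "lower":
        return True           # second virtual area: magnitude alone decides
    # first virtual area: also require the difference vector and the virtual
    # motion vector to have opposite signs (an object entering the road)
    return diff * vv < 0
```

For example, with a motion vector of -10 and a virtual motion vector of -20 in the side region, the difference is +10; its sign differs from the virtual motion vector's, so the sample is detected.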

  As described above, the obstacle detection apparatus according to the present embodiment divides the model into a plurality of virtual regions and detects obstacles based on the motion vectors and virtual motion vectors corresponding to the divided virtual regions and the determination criterion set for each virtual region, so obstacles that affect the running vehicle can be appropriately detected regardless of the state of the road. For example, in the first virtual area, an object moving out of the road is not detected as an obstacle, so the problem of unnecessary information being notified to the driver can be solved.

  Next, the configuration of the obstacle detection device according to the first embodiment will be described (the obstacle detection device according to the first embodiment is assumed to be mounted on a vehicle, not shown). FIG. 2 is a functional block diagram of the configuration of the obstacle detection apparatus according to the first embodiment. As shown in the figure, the obstacle detection apparatus 100 includes an image input unit 110, an image storage unit 120, a motion vector calculation unit 130, a camera data input unit 140, a vehicle data input unit 150, a traveling space model generation unit 160, a traveling space region determination unit 170, a model difference vector calculation unit 180, an obstacle detection unit 190, and an obstacle notification unit 200.

  The image input unit 110 is a processing unit that acquires image data at a constant frame rate from an in-vehicle camera (not shown) and sequentially stores the acquired image data in the image storage unit 120. The image storage unit 120 is a storage unit that stores image data, and includes a buffer for storing as much image data as is necessary to create a motion vector based on the time difference between images.

  The motion vector calculation unit 130 acquires image data at different points in time from the image storage unit 120 (for example, the image data stored most recently in the image storage unit 120 and an image taken a predetermined time before it), and calculates a motion vector in each area of the image data based on the acquired image data.

  Any conventional method may be used to calculate the motion vector; for example, the motion vector can be calculated using the method disclosed in Japanese Patent Laid-Open No. 2005-100204. The motion vector calculation unit 130 outputs the calculated motion vector data to the model difference vector calculation unit 180.
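
  As one concrete example of such a conventional method, exhaustive block matching with a sum-of-absolute-differences (SAD) cost can estimate a per-block motion vector. The sketch below is deliberately simplified and is not the method of the cited patent; all names and parameters are assumptions.

```python
def block_motion_vector(prev, curr, bx, by, bs=4, search=3):
    """Estimate the motion vector of one block by exhaustive SAD block matching.

    prev/curr: 2-D lists of gray levels from two frames
    (bx, by): top-left corner of the block in prev; bs: block size
    search: search radius (in pixels) around the block position in curr
    """
    def sad(dx, dy):
        # sum of absolute differences between the prev block and the
        # candidate window in curr displaced by (dx, dy)
        total = 0
        for y in range(bs):
            for x in range(bs):
                total += abs(prev[by + y][bx + x] - curr[by + dy + y][bx + dx + x])
        return total

    best, best_cost = (0, 0), sad(0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # stay inside the frame
            if 0 <= by + dy and by + dy + bs <= len(curr) and \
               0 <= bx + dx and bx + dx + bs <= len(curr[0]):
                cost = sad(dx, dy)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best
```

Repeating this over a grid of blocks yields the per-region motion vectors referred to in the text.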

  The camera data input unit 140 is a processing unit that acquires various data related to the camera installed in the vehicle. The various data related to the camera include the camera vertical installation position, which represents the height from the camera to the road surface; the travel area width, which indicates the lateral width of the space in which the vehicle travels; the camera focal length, which is the distance from the camera to its focal position; and the camera pixel pitch. These parameters may be changed dynamically, or the user may set them in advance. The camera data input unit 140 outputs the various data related to the camera to the traveling space model generation unit 160.

  The vehicle data input unit 150 is a processing unit that acquires various data related to the running state of the vehicle. The various data relating to the running state of the vehicle include vehicle speed data indicating the speed of the vehicle, and vehicle rotation angle data indicating the yaw, roll, and pitch of the vehicle caused by steering and slopes. The vehicle speed data can be obtained from a vehicle speed pulse, and the vehicle rotation angle from a steering sensor or a gyroscope. The vehicle data input unit 150 outputs the various data related to the traveling state of the vehicle to the traveling space model generation unit 160.

  The traveling space model generation unit 160 is a processing unit that calculates a virtual motion vector based on various data related to the camera and various data related to the traveling state of the vehicle. Hereinafter, processing performed by the traveling space model generation unit 160 will be described in order.

  First, the traveling space model generation unit 160 generates a traveling space model based on the various data regarding the camera and the various data regarding the traveling state of the vehicle. The traveling space model is a virtual traveling environment of the vehicle: a model of the space in which the vehicle is expected to travel in the near future, constructed so as to include the imaging range of the camera. The space in which the vehicle will travel is predicted from the steering angle (vehicle rotation angle data) and vehicle speed (vehicle speed data) of the vehicle, and is essentially independent of white lines on the road and the lane width. Therefore, it is possible to generate a traveling space model that is robust against environmental changes.

  FIG. 3 is a diagram illustrating an example of a traveling space model. The example shown in FIG. 3 is a traveling space model for straight-ahead travel when the in-vehicle camera is arranged at the front center of the vehicle and the photographing range is rectangular. In FIG. 3, h represents the camera vertical installation position, L represents the travel area width, and H represents the travel area height. In the first embodiment, the camera vertical installation position h is a negative number, because the position where the camera is installed is the origin and the upward direction from the origin is positive. Further, the travel area width L and the travel area height H are not defined by the vehicle width and vehicle height themselves, but are defined with some margin.

  The right side of FIG. 3 shows the traveling space model as viewed from the vehicle-mounted camera according to the first embodiment. As shown in the figure, the space model viewed from the in-vehicle camera can be classified into an upper surface portion, a lower surface portion, and side surface portions.

  Subsequently, the traveling space model generation unit 160 calculates, using the traveling space model, virtual motion vectors corresponding to all the motion vectors calculated by the motion vector calculation unit 130. Here, for convenience of explanation, the method for calculating a virtual motion vector will be described with the coordinates on the image corresponding to a motion vector denoted (x, y). The calculation for other coordinates differs only in the coordinate values and is therefore omitted.

  The traveling space model generation unit 160 converts the coordinates (x, y) in the image coordinate system into coordinates (u, v) in the model space coordinate system with the camera installation position as the origin. FIG. 4 is an explanatory diagram for explaining a process of converting coordinates (x, y) in the image coordinate system into coordinates (u, v) in the model space coordinate system. Note that FIG. 4 shows coordinate conversion in the case where the vanishing point is at the center of the image.

Width in the image coordinate system indicates the number of pixels in the horizontal direction of the image (hereinafter, the number of horizontal pixels of the image), and Height indicates the number of pixels in the vertical direction of the image (hereinafter, the number of vertical pixels of the image). The coordinates (x, y) of the image coordinate system are converted into the coordinates (u, v) of the model space coordinate system by Formulas (1) and (2), where c denotes the pixel pitch of the camera.
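
  Formulas (1) and (2) are not reproduced here, but for a vanishing point at the image center the conversion is plausibly the standard centering-and-scaling below. This sketch is a reconstruction, not the patent's formulas: the assumption that image y grows downward while model v grows upward is illustrative.

```python
def image_to_model(x, y, width, height, c):
    """Convert image coordinates (x, y) to model-space coordinates (u, v).

    Plausible reconstruction of Formulas (1) and (2): the vanishing point is
    assumed at the image centre, c is the camera pixel pitch, and image y is
    assumed to grow downward while model v grows upward.
    """
    u = c * (x - width / 2.0)
    v = c * (height / 2.0 - y)
    return u, v
```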

  Subsequently, the traveling space model generation unit 160 determines in which region of the traveling space model (the upper surface portion, lower surface portion, or side surface portion; see FIG. 3) the coordinates (u, v) of the model space coordinate system are included (region determination). FIG. 5 is an explanatory diagram for explaining the region determination performed by the traveling space model generation unit 160.

The traveling space model generation unit 160 determines the region of the coordinates (u, v) based on the relationship between the coordinates (u, v) in the model space coordinate system, the camera vertical installation position h, the traveling region width L, and the traveling region height H. Specifically, when the coordinates (u, v), the camera vertical installation position h, the traveling region width L, and the traveling region height H satisfy the lower-surface condition, the traveling space model generation unit 160 determines that the coordinates (u, v) are located in the lower surface portion.

On the other hand, when the coordinates (u, v), the camera vertical installation position h, the traveling region width L, and the traveling region height H satisfy the upper-surface condition, the traveling space model generation unit 160 determines that the coordinates (u, v) are located in the upper surface portion. When the coordinates (u, v), the camera vertical installation position h, the traveling region width L, and the traveling region height H satisfy neither of the two conditions, it is determined that the coordinates (u, v) are located in the side surface portion.
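
  The conditions themselves are not reproduced here, but the region determination follows from the geometry of FIG. 3: the ray from the camera through the image point meets either the floor at height h (h < 0), the ceiling at height H, or the side walls at lateral offset ±L/2, and the nearest intersection determines the region. The following sketch reconstructs this numerically; the function name and the nearest-intersection formulation are assumptions.

```python
def classify_region(u, v, f, h, L, H):
    """Determine which face of the traveling space model the ray through the
    model-space point (u, v) meets first: "lower", "upper", or "side".

    f: focal length; h: camera vertical installation position (negative);
    L: travel area width; H: travel area height.
    """
    candidates = []
    if v < 0:
        candidates.append((f * h / v, "lower"))          # floor plane
    if v > 0:
        candidates.append((f * H / v, "upper"))          # ceiling plane
    if u != 0:
        candidates.append((f * (L / 2) / abs(u), "side"))  # side walls
    # the face with the smallest depth along the viewing direction wins
    return min(candidates)[1]
```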

  Based on the result of the region determination, the traveling space model generation unit 160 calculates the virtual motion vector of the coordinates (u, v) located in each region. Hereinafter, the methods of calculating the virtual motion vector when the coordinates (u, v) are located on the lower surface, on the upper surface, and on the side surface will be described in order.

(When coordinates (u, v) are located on the bottom surface)
A method of calculating the virtual motion vector (du, dv) when the coordinates (u, v) are located on the lower surface will be described. FIG. 6 is a diagram supplementing the calculation method of the virtual motion vector when the coordinates (u, v) are located on the lower surface. In FIG. 6, F indicates the focal position of the camera, f indicates the focal distance from the camera to the focal point, and z indicates the distance in the traveling direction from the focal position F to the position A of the object (a candidate obstacle), that is, the distance from the focal position F to the position B.

  Further, V represents the vehicle speed, and dz represents the distance the object moves in a minute time dt (the time corresponding to the sample rate at which the camera captures images); in FIG. 6, the vehicle moves from left to right and the object moves from right to left. Further, when the position to which the object at A moves after time dt is defined as A′, and the position where the straight line containing the position A′ and the focal position F intersects the vertical axis (v axis) is defined as C, the difference between v and the position C is dv.

As shown in FIG. 6, relational expression (5) can be derived from the similarity of triangles, and expression (6) can be obtained from expression (5).

By differentiating equation (6) with respect to z, equations (7) and (8) can be derived. From equation (8), the virtual motion vector dv of the v-axis component of the coordinates (u, v) can be expressed by equation (9).

On the other hand, the relational expression between u and v in the coordinates (u, v) is given by equation (10), and from equations (9) and (10) the virtual motion vector du of the u-axis component of the coordinates (u, v) can be expressed by equation (11).
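
  Equations (5) through (11) are not reproduced in this text. The following is a plausible reconstruction from the pinhole geometry of FIG. 6, under the assumed sign conventions that h < 0 and that the point's depth z shrinks by V dt per frame as the vehicle advances; the equation-number assignments are also assumptions.

```latex
% similar triangles between the road point and the image plane:
\frac{v}{f} = \frac{h}{z}
\quad\Rightarrow\quad
z = \frac{f\,h}{v} \qquad \text{(5), (6)}

% differentiating v(z) = f h / z with respect to z:
\frac{dv}{dz} = -\frac{f\,h}{z^{2}} = -\frac{v^{2}}{f\,h} \qquad \text{(7), (8)}

% the depth shrinks by V\,dt per frame, giving
dv = \frac{v^{2}}{f\,h}\,V\,dt \qquad \text{(9)}

% u/v is constant along the ray through the point (10), hence
du = \frac{u}{v}\,dv = \frac{u\,v}{f\,h}\,V\,dt \qquad \text{(11)}
```

With h negative, dv is negative for points below the horizon, so road-surface points flow downward and outward in the image, consistent with the vector field of FIG. 8.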

(When coordinates (u, v) are located on the top surface)
Next, a method for calculating the virtual motion vector (du, dv) when the coordinates (u, v) are located on the upper surface will be described. When the coordinates (u, v) are located on the upper surface, the virtual motion vector can be obtained by replacing the camera vertical installation position h in the above formulas (5) to (11) with the travel area height H, which yields the corresponding expressions for (du, dv).

(When coordinates (u, v) are located on the side surface)
Next, a method for calculating the virtual motion vector (du, dv) when the coordinates (u, v) are located on the side surface will be described. FIG. 7 is a diagram supplementing the calculation method of the virtual motion vector when the coordinates (u, v) are located on the side surface. In FIG. 7, F indicates the focal position of the camera, f indicates the focal distance from the camera to the focal point, and z indicates the distance from the focal position F to E, the horizontal component of the position of the object (a candidate obstacle) D.

Further, V indicates the vehicle speed, and dz indicates the distance the object moves in a minute time dt (the time corresponding to the sample rate at which the camera captures images); in FIG. 7, the vehicle moves from left to right and the object moves from right to left. When the position to which the object at D moves after time dt is defined as D′, and the position where the straight line containing the position D′ and the focal position F intersects the u axis is defined as G, the difference between u and the position G is du, and the virtual motion vector (du, dv) for coordinates (u, v) located on the side surface can be expressed by the corresponding equations derived from FIG. 7.
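
  The three cases (lower, upper, and side surfaces) can be computed together by finding the depth of the model face along the ray, advancing the vehicle by V dt, and re-projecting the stationary point. This numerical sketch reconstructs the effect of the unreproduced equations; the function name and the finite-step formulation are assumptions, not the patent's closed-form expressions.

```python
def virtual_motion_vector(u, v, f, h, L, H, V, dt):
    """Virtual motion vector (du, dv) of a stationary point on the traveling
    space model, computed by re-projection after the vehicle advances V*dt.

    f: focal length; h: camera vertical installation position (negative);
    L: travel area width; H: travel area height; V: vehicle speed.
    """
    # depth of the nearest model face along the ray through (u, v, f)
    zs = []
    if v < 0:
        zs.append(f * h / v)             # floor
    if v > 0:
        zs.append(f * H / v)             # ceiling
    if u != 0:
        zs.append(f * (L / 2) / abs(u))  # side wall
    z = min(zs)
    # 3-D position of the stationary point, then re-projection after the
    # vehicle has moved forward by V*dt (its depth shrinks to z - V*dt)
    X, Y = u * z / f, v * z / f
    z2 = z - V * dt
    return f * X / z2 - u, f * Y / z2 - v
```

As FIG. 8 suggests, the returned magnitudes grow with vehicle speed and as the point approaches the vehicle.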

  Returning to the description of FIG. 2: after calculating the virtual motion vector corresponding to each coordinate, the traveling space model generation unit 160 outputs data associating each coordinate with its virtual motion vector to the model difference vector calculation unit 180. FIG. 8 is a diagram illustrating an example of the virtual motion vectors calculated by the traveling space model generation unit 160. The virtual motion vector calculated by the traveling space model generation unit 160 represents the motion that would appear in the image if a stationary object existed on the boundary of the defined traveling space. Therefore, for example, when the vehicle speed increases, the amount of motion between images increases and the value of the virtual motion vector also increases. Also, as shown in FIG. 8, the value of the virtual motion vector increases as the position approaches the vehicle.

  The traveling space region determination unit 170 divides the traveling space model into a plurality of virtual regions (a side surface region and a lower surface region) and determines to which virtual region the coordinates (x, y) on the traveling space model belong. The administrator presets the coordinates of each virtual region of the traveling space model.

  For example, the administrator sets the x coordinates (A to B) and y coordinates (C to D) of the side surface region, and the x coordinates (E to F) and y coordinates (G to H) of the lower surface region (hereinafter, the set information is referred to as virtual area coordinate data). The traveling space region determination unit 170 determines to which virtual region each coordinate belongs by comparing the virtual area coordinate data with the coordinates, and outputs the determination result to the model difference vector calculation unit 180.

  The model difference vector calculation unit 180 is a processing unit that calculates a difference between a motion vector and a virtual motion vector for each virtual region (side surface region, lower surface region) and outputs the calculated result to the obstacle detection unit 190. FIG. 9 is a diagram for explaining the processing of the model difference vector calculation unit 180.

The diagram shown in the lower part of FIG. 9 represents the relationship between the abscissa x at a certain ordinate y in the image and the motion vector Vx in the x direction. The motion vectors in the x direction comprise the motion vectors calculated by the motion vector calculation unit 130 and the virtual motion vectors generated by the traveling space model generation unit 160. The model difference vector calculation unit 180 detects the target portions by comparing these two types of vectors using a different determination criterion for each region determined by the traveling space region determination unit 170.

  For the side surface region, the model difference vector calculation unit 180 detects targets based on the difference value between the motion vector and the virtual motion vector (hereinafter abbreviated as the difference value) and on whether the sign of the vector consisting of the difference between the motion vector and the virtual motion vector (hereinafter, the difference vector) differs from the sign of the virtual motion vector. In the side surface region, the model difference vector calculation unit 180 detects portions where the difference value is equal to or greater than the threshold and the sign of the difference vector differs from the sign of the virtual motion vector (for example, the object A shown in the upper part of FIG. 9 is detected). The object A is an object that has entered the road from the side.

  Even if the difference value is equal to or greater than the threshold in the side surface region, if the difference vector and the virtual motion vector have the same sign, the object is either one passing by the side of the host vehicle or a vehicle moving away from the host vehicle, so the object is excluded from the detection targets (for example, the object C shown in the upper part of FIG. 9 is excluded).

  Even when the difference value is equal to or greater than the threshold in the side surface region and the motion vector and the virtual motion vector have the same sign (for example, when the object A moves in the lower left direction in FIG. 9), the portion is included in the detection targets if the difference vector and the virtual motion vector have different signs. For example, this situation occurs when there is a lane on the left in FIG. 9 and the host vehicle attempts to overtake a vehicle running alongside in the left lane.

For example, if the motion vector Vx = -10 and the virtual motion vector Vx = -20, the difference vector (motion vector Vx - virtual motion vector Vx) is +10; since the sign of the difference vector differs from the sign of the virtual motion vector, the portion becomes a detection target.

  On the other hand, for the lower surface region, the model difference vector calculation unit 180 detects targets using the difference value alone. The model difference vector calculation unit 180 treats portions of the lower surface region where the difference value is equal to or greater than the threshold as detection targets (the object B shown in the upper part of FIG. 9 is a detection target). The object B corresponds to a forward obstacle, such as a preceding vehicle, that is not the road surface.

  Note that the model difference vector calculation unit 180 excludes as noise any portion, such as the one indicated by 4 in the lower part of FIG. 9, where a large difference appears only over a sufficiently short section. In addition, the model difference vector calculation unit 180 may adjust the threshold compared against the difference value in proportion to the distance from the image center to the position where the motion vector and the virtual motion vector are compared.
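The distance-proportional threshold adjustment can be sketched as below; the linear form and the gain value are assumptions, since the text only states that the threshold is adjusted in proportion to distance from the image center.

```python
def adjusted_threshold(base_threshold, u, center_u, gain=0.01):
    """Scale the threshold compared against the difference value in
    proportion to the horizontal distance between the comparison
    position u and the image center, since legitimate flow vectors
    grow toward the image periphery."""
    return base_threshold * (1.0 + gain * abs(u - center_u))

print(adjusted_threshold(10.0, 320, 320))  # 10.0 at the image center
print(adjusted_threshold(10.0, 420, 320))  # 20.0 at 100 px off-center
```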

  The model difference vector calculation unit 180 detects target portions for each horizontal line and identifies the sections that become detection candidates. In the first embodiment, only the horizontal components of the motion vector and the virtual motion vector need be considered. This is because, even for an obstacle ahead such as the object B in the upper part of FIG. 9, the image is enlarged as the object approaches, and this enlargement produces horizontal motion. The model difference vector calculation unit 180 outputs obstacle detection candidate distribution data (hereinafter, detection candidate distribution data) to the obstacle detection unit 190.
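Grouping the per-horizontal-line detections into candidate sections, with sufficiently short runs discarded as noise, might look like the following; the run-length representation and the minimum length are assumptions.

```python
def candidate_sections(flags, min_length=3):
    """Group consecutive flagged positions along one horizontal line into
    half-open [start, end) sections, dropping runs shorter than
    min_length as noise (the short-section exclusion described above)."""
    sections, start = [], None
    for i, flagged in enumerate(flags):
        if flagged and start is None:
            start = i                      # a candidate run begins
        elif not flagged and start is not None:
            if i - start >= min_length:    # keep only sufficiently long runs
                sections.append((start, i))
            start = None
    if start is not None and len(flags) - start >= min_length:
        sections.append((start, len(flags)))  # run reaching the line's end
    return sections

# A 1-pixel spike is dropped as noise; the 4-pixel run survives.
print(candidate_sections([0, 1, 0, 1, 1, 1, 1, 0]))  # [(3, 7)]
```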

  The obstacle detection unit 190 is a processing unit that acquires the detection candidate distribution data from the model difference vector calculation unit 180 and identifies obstacle regions. The obstacle detection unit 190 outputs the data of the identified obstacle regions to the obstacle notification unit 200. To increase the accuracy of obstacle-region identification, the obstacle detection unit 190 may additionally use known techniques, such as tracking an object from the detection candidate regions in the preceding and following frames, or identifying an object from feature amounts such as horizontal and vertical edges in the original image.

  The obstacle notification unit 200 is a processing unit that acquires the obstacle region data from the obstacle detection unit 190 and notifies the user of the obstacle by drawing a rectangle in the corresponding region on a monitor (an in-vehicle monitor, not shown) that the user can check. FIG. 10 is a diagram illustrating an example of the obstacle detection image output to the monitor.

  Next, the processing procedure of the obstacle detection apparatus 100 according to the first embodiment will be described. FIG. 11 is a flowchart of the processing procedure performed by the obstacle detection apparatus 100 according to the first embodiment. As shown in the figure, the image input unit 110 acquires image data (step S101) and stores it in the image storage unit 120 (step S102), and the motion vector calculation unit 130 calculates the motion vector of each region from the images at different points in time stored in the image storage unit 120 (step S103).

  The camera data input unit 140 acquires various data related to the camera (step S104), the vehicle data input unit 150 acquires various data related to the running state of the vehicle (step S105), and the travel space model generation unit 160 generates a travel space model based on the camera data and the vehicle running-state data (step S106) and calculates the virtual motion vector of the coordinates corresponding to each motion vector region (step S107).

  Subsequently, the travel space region determination unit 170 divides the travel space model into a plurality of regions (step S108), the model difference vector calculation unit 180 and the obstacle detection unit 190 detect an obstacle based on the determination criterion for each region (step S109), and the obstacle notification unit 200 outputs the obstacle data to the monitor (step S110).

  As described above, because the model difference vector calculation unit 180 and the obstacle detection unit 190 detect obstacles based on a determination criterion set for each region, unnecessary information is excluded and only information on obstacles that affect the running of the vehicle is reported to the driver.

  As described above, the obstacle detection apparatus 100 according to the first embodiment acquires images at different times from the camera, calculates a motion vector for each region of the image based on the acquired images, generates a model of the space in which the vehicle travels from the running state of the vehicle, and calculates virtual motion vectors based on the generated model. The obstacle detection apparatus 100 then divides the model into a plurality of virtual regions and detects obstacles based on the motion vectors and virtual motion vectors corresponding to the divided virtual regions and on the criterion set for each virtual region. It is therefore possible to appropriately detect obstacles that affect the running vehicle regardless of the state of the road.

  In addition, the obstacle detection apparatus 100 according to the first embodiment can detect only obstacles on the runway, or obstacles that appear about to enter it, while removing from the detection targets the distant view and oncoming vehicles that will not collide. In front obstacle detection, it is especially important to detect as an obstacle an object that moves so as to enter the runway from outside it.

  Usually, since the driver drives while paying attention to the road ahead, he or she pays some attention to front obstacles such as preceding vehicles already on the road. For a pedestrian crossing from the sidewalk, however, the degree of attention is low because the pedestrian enters from outside the runway. For such movement, the obstacle detection apparatus 100 according to the first embodiment exploits the property that a significant difference appears between the horizontal motion vector and the virtual motion vector, and can therefore detect such intruding obstacles more reliably.

  Although an embodiment of the present invention has been described, the present invention may be implemented in various forms other than the first embodiment described above. Another embodiment included in the present invention will therefore be described below as a second embodiment.

(1) Cooperation with the vehicle. In the first embodiment described above, the obstacle detection device 100 displays information on an obstacle on the monitor when the obstacle is detected. However, the present invention is not limited to this; various processes can be executed in cooperation with the vehicle. For example, when the obstacle detection device 100 detects an obstacle in front of the vehicle, it can output a deceleration command to the control unit that controls the vehicle speed, thereby ensuring the safety of the vehicle. Further, when the driver brakes suddenly after the obstacle detection device 100 has detected an obstacle ahead, steering control for avoiding the obstacle may be performed in cooperation with the control unit that controls the vehicle.

(2) Calculation of a virtual motion vector reflecting the vehicle rotation angle. In the first embodiment described above, the method of calculating the virtual motion vector while the vehicle travels straight was explained; depending on the vehicle rotation angle, however, the virtual motion vector value must be corrected. Here, as an example, a method of calculating a virtual motion vector that reflects the yaw rotation angle of the vehicle will be described. FIG. 12 is an explanatory diagram for supplementarily explaining the method of correcting the virtual motion vector.

  The coordinate (u, v) is one coordinate in the model space coordinate system. The yaw rotation does not depend on v, so the v correction is always 0; that is, the v component dv of the virtual motion vector keeps the value obtained in the first embodiment. The u coordinate, on the other hand, subtends an angle θ at the focal point F, located at the focal length f from the coordinate axis.

At this time, the yaw rotation over a minute time dt can be expressed with the yaw rate θ_y (rotation per unit time) as dθ = θ_y · dt, from which the correction amount du can be calculated. An equation for deriving du is obtained as follows. From the geometry of FIG. 12,

  u = f · tan θ

Here, after the rotation dθ, the same point is projected to

  u + du = f · tan(θ + dθ)

Therefore, the correction amount du is

  du = f · tan(θ + dθ) − f · tan θ ≈ ((f² + u²) / f) · θ_y · dt   (to first order in dθ)

The corrected u component of the virtual motion vector is obtained by adding the correction amount du to the u component of the virtual motion vector obtained in the first embodiment. The other vehicle rotation angles (roll and pitch) can be handled in the same manner.
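Numerically, the correction can be computed either exactly or to first order; this sketch assumes the pinhole relation u = f·tan θ, so that du = f·tan(θ + θ_y·dt) − u ≈ ((f² + u²)/f)·θ_y·dt for small rotations. The parameter values are illustrative.

```python
import math

def yaw_correction_du(u, f, yaw_rate, dt):
    """Exact horizontal correction for a yaw rotation dθ = yaw_rate * dt,
    assuming the pinhole relation u = f * tan(θ)."""
    theta = math.atan2(u, f)
    return f * math.tan(theta + yaw_rate * dt) - u

def yaw_correction_du_approx(u, f, yaw_rate, dt):
    """First-order (small-angle) form: du ≈ ((f^2 + u^2) / f) * yaw_rate * dt."""
    return (f * f + u * u) / f * yaw_rate * dt

# For a small per-frame rotation the two forms agree closely; the corrected
# u component is the virtual motion vector's u component plus this du.
print(yaw_correction_du(100.0, 500.0, 0.01, 0.033))
print(yaw_correction_du_approx(100.0, 500.0, 0.01, 0.033))
```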

  FIG. 13 is a diagram illustrating how the virtual motion vectors differ with the yaw rotation angle of the vehicle. The upper row shows the virtual motion vectors when the vehicle is turning right, and the lower row shows them when the vehicle is turning left.

(4) System configuration, etc. Among the processes described in the present embodiment, all or part of the processes described as being performed automatically can be performed manually, and all or part of the processes described as being performed manually can be performed automatically by a known method. In addition, the processing procedures, control procedures, specific names, and information including various data and parameters shown in the above document and drawings can be changed arbitrarily unless otherwise specified.

  Further, each component of the obstacle detection apparatus 100 shown in FIG. 2 is functionally conceptual and does not necessarily need to be physically configured as illustrated. In other words, the specific form of distribution and integration of each device is not limited to that shown in the figure; all or part of it can be functionally or physically distributed or integrated in arbitrary units according to various loads or usage conditions. Furthermore, each processing function performed by each device may be realized by a CPU and a program analyzed and executed by the CPU, or realized as hardware by wired logic.

  FIG. 14 is a diagram illustrating the hardware configuration of a computer constituting the obstacle detection apparatus 100 shown in FIG. 2. In this computer, an input device 30 that receives data input from the user, a monitor 31, a RAM (Random Access Memory) 32, a ROM (Read Only Memory) 33, a medium reading device 34 that reads a program from a recording medium on which various programs are recorded, a camera 35 that captures images, a running state detection device 36 (corresponding to a steering sensor, vehicle speed pulse, gyroscope, and the like) that detects the running state of the vehicle, a CPU (Central Processing Unit) 37, and an HDD (Hard Disk Drive) 38 are connected by a bus 39.

  The HDD 38 stores an obstacle detection program 38b that provides the same functions as the obstacle detection device 100 described above. The CPU 37 reads the obstacle detection program 38b from the HDD 38 and executes it, thereby starting an obstacle detection process 37a that realizes the functions of the functional units of the obstacle detection device 100 described above. The obstacle detection process 37a corresponds to the image input unit 110, the motion vector calculation unit 130, the camera data input unit 140, the vehicle data input unit 150, the travel space model generation unit 160, the travel space region determination unit 170, the model difference vector calculation unit 180, the obstacle detection unit 190, and the obstacle notification unit 200.

  The HDD 38 also stores various data 38a used by each functional unit of the obstacle detection apparatus 100 described above. The CPU 37 stores the various data 38a in the HDD 38, reads them from the HDD 38 into the RAM 32, and executes the obstacle detection processing using the various data 32a stored in the RAM 32.

  Incidentally, the obstacle detection program 38b does not necessarily have to be stored in the HDD 38 from the beginning. For example, the obstacle detection program 38b may be stored in a "portable physical medium" inserted into the computer, such as a flexible disk (FD), CD-ROM, DVD disc, magneto-optical disk, or IC card; in a "fixed physical medium" such as a hard disk drive (HDD) provided inside or outside the computer; or in "another computer (or server)" connected to the computer via a public line, the Internet, a LAN, a WAN, or the like. The computer may then read out and execute the obstacle detection program 38b from these.

(Supplementary Note 1) An obstacle detection method for an obstacle detection apparatus that detects an obstacle affecting the running of a vehicle from an image photographed by a camera, the method including:
a motion vector calculation step of acquiring images at different times from the camera, storing them in a storage device, and calculating a motion vector in each region of the image based on the images at different times stored in the storage device;
a virtual motion vector calculation step of generating a model of the space in which the vehicle travels from the running state of the vehicle and calculating a virtual motion vector for each region of the image based on the generated model; and
an obstacle detection step of dividing the model into a plurality of virtual regions and detecting an obstacle based on the motion vector and the virtual motion vector corresponding to each divided virtual region and on a criterion set for each virtual region.

(Supplementary Note 2) The obstacle detection method according to supplementary note 1, wherein the virtual region includes a first virtual region and a second virtual region, and the obstacle detection step compares the motion vector included in the first virtual region with the virtual motion vector and detects as an obstacle a portion where the difference between the motion vector and the virtual motion vector is equal to or greater than a threshold and the sign of the vector formed by the difference between the motion vector and the virtual motion vector differs from the sign of the virtual motion vector.

(Supplementary Note 3) The obstacle detection method according to supplementary note 2, wherein the obstacle detection step compares the motion vector included in the second virtual region with the virtual motion vector and detects as an obstacle a portion where the difference between the motion vector and the virtual motion vector is equal to or greater than a threshold.

(Supplementary Note 4) The obstacle detection method according to supplementary note 1, 2, or 3, wherein the obstacle detection step adjusts the threshold according to the positions, in the virtual region, of the motion vector and the virtual motion vector to be compared.

(Supplementary Note 5) An obstacle detection apparatus that detects an obstacle affecting the running of a vehicle from an image photographed by a camera, the apparatus comprising:
motion vector calculation means for acquiring images at different times from the camera, storing them in a storage device, and calculating a motion vector in each region of the image based on the images at different times stored in the storage device;
virtual motion vector calculation means for generating a model of the space in which the vehicle travels from the running state of the vehicle and calculating a virtual motion vector for each region of the image based on the generated model; and
obstacle detection means for dividing the model into a plurality of virtual regions and detecting an obstacle based on the motion vector and the virtual motion vector corresponding to each divided virtual region and on a criterion set for each virtual region.

(Supplementary Note 6) The obstacle detection apparatus according to supplementary note 5, wherein the virtual region includes a first virtual region and a second virtual region, and the obstacle detection means compares the motion vector included in the first virtual region with the virtual motion vector and detects as an obstacle a portion where the difference between the motion vector and the virtual motion vector is equal to or greater than a threshold and the sign of the vector formed by the difference between the motion vector and the virtual motion vector differs from the sign of the virtual motion vector.

(Supplementary Note 7) The obstacle detection apparatus according to supplementary note 6, wherein the obstacle detection means compares the motion vector included in the second virtual region with the virtual motion vector and detects as an obstacle a portion where the difference between the motion vector and the virtual motion vector is equal to or greater than a threshold.

(Supplementary Note 8) The obstacle detection apparatus according to supplementary note 5, 6, or 7, wherein the obstacle detection means adjusts the threshold according to the positions, in the virtual region, of the motion vector and the virtual motion vector to be compared.

(Supplementary Note 9) An obstacle detection program for causing a computer to execute:
a motion vector calculation procedure of acquiring images at different times from a camera, storing them in a storage device, and calculating a motion vector in each region of the image based on the images at different times stored in the storage device;
a virtual motion vector calculation procedure of generating a model of the space in which the vehicle travels from the running state of the vehicle and calculating a virtual motion vector for each region of the image based on the generated model; and
an obstacle detection procedure of dividing the model into a plurality of virtual regions and detecting an obstacle based on the motion vector and the virtual motion vector corresponding to each divided virtual region and on a criterion set for each virtual region.

(Supplementary Note 10) The obstacle detection program according to supplementary note 9, wherein the virtual region includes a first virtual region and a second virtual region, and the obstacle detection procedure compares the motion vector included in the first virtual region with the virtual motion vector and detects as an obstacle a portion where the difference between the motion vector and the virtual motion vector is equal to or greater than a threshold and the sign of the vector formed by the difference between the motion vector and the virtual motion vector differs from the sign of the virtual motion vector.

(Supplementary Note 11) The obstacle detection program according to supplementary note 10, wherein the obstacle detection procedure compares the motion vector included in the second virtual region with the virtual motion vector and detects as an obstacle a portion where the difference between the motion vector and the virtual motion vector is equal to or greater than a threshold.

(Supplementary Note 12) The obstacle detection program according to supplementary note 9, 10, or 11, wherein the obstacle detection procedure adjusts the threshold according to the positions, in the virtual region, of the motion vector and the virtual motion vector to be compared.

  As described above, the obstacle detection method and the obstacle detection device according to the present invention are useful for obstacle detection devices that detect obstacles on a runway, and are particularly suitable when obstacle notification must be performed without presenting unnecessary information to the driver.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram for explaining the outline and features of the obstacle detection apparatus according to the first embodiment.
FIG. 2 is a functional block diagram illustrating the configuration of the obstacle detection apparatus according to the first embodiment.
FIG. 3 is a diagram showing an example of a travel space model.
FIG. 4 is an explanatory diagram for explaining the process of converting the coordinates (x, y) of the image coordinate system into the coordinates (u, v) of the model space coordinate system.
FIG. 5 is an explanatory diagram for explaining the region determination performed by the travel space model generation unit.
FIG. 6 is a diagram supplementing the method of calculating a virtual motion vector when the coordinates (u, v) are located in the lower region.
FIG. 7 is a diagram supplementing the method of calculating a virtual motion vector when the coordinates (u, v) are located in a side region.
FIG. 8 is a diagram showing an example of the virtual motion vectors calculated by the travel space model generation unit.
FIG. 9 is a diagram for explaining the processing of the model difference vector calculation unit.
FIG. 10 is a diagram showing an example of the obstacle detection image output to the monitor.
FIG. 11 is a flowchart illustrating the processing procedure of the obstacle detection apparatus according to the first embodiment.
FIG. 12 is an explanatory diagram for supplementarily explaining the method of correcting a virtual motion vector.
FIG. 13 is a diagram showing the differences in virtual motion vectors depending on the yaw rotation angle of the vehicle.
FIG. 14 is a diagram showing the hardware configuration of the computer constituting the obstacle detection apparatus shown in FIG. 2.

Explanation of symbols

30 Input device
31 Monitor
32 RAM
32a, 38a Various data
33 ROM
34 Medium reading device
35 Camera
36 Running state detection device
37 CPU
37a Obstacle detection process
38 HDD
38b Obstacle detection program
39 Bus
100 Obstacle detection device
110 Image input unit
120 Image storage unit
130 Motion vector calculation unit
140 Camera data input unit
150 Vehicle data input unit
160 Travel space model generation unit
170 Travel space region determination unit
180 Model difference vector calculation unit
190 Obstacle detection unit
200 Obstacle notification unit

Claims (5)

  1. An obstacle detection method for an obstacle detection apparatus that detects an obstacle affecting the running of a vehicle from an image photographed by a camera, the method including:
    a motion vector calculation step of acquiring images at different times from the camera, storing them in a storage device, and calculating a motion vector in each region of the image based on the images at different times stored in the storage device;
    a virtual motion vector calculation step of generating a model of the space in which the vehicle travels from the running state of the vehicle and calculating a virtual motion vector for each region of the image based on the generated model; and
    an obstacle detection step of dividing the model into a plurality of virtual regions and detecting an obstacle based on the motion vector and the virtual motion vector corresponding to each divided virtual region and on a criterion set for each virtual region.
  2.   The obstacle detection method according to claim 1, wherein the virtual region includes a first virtual region and a second virtual region, and the obstacle detection step compares the motion vector included in the first virtual region with the virtual motion vector and detects as an obstacle a portion where the difference between the motion vector and the virtual motion vector is equal to or greater than a threshold and the sign of the vector formed by the difference between the motion vector and the virtual motion vector differs from the sign of the virtual motion vector.
  3.   The obstacle detection method according to claim 2, wherein the obstacle detection step compares the motion vector included in the second virtual region with the virtual motion vector and detects as an obstacle a portion where the difference between the motion vector and the virtual motion vector is equal to or greater than a threshold.
  4.   The obstacle detection method according to claim 1, 2, or 3, wherein the obstacle detection step adjusts the threshold according to the positions, in the virtual region, of the motion vector and the virtual motion vector to be compared.
  5. An obstacle detection apparatus that detects an obstacle affecting the running of a vehicle from an image photographed by a camera, the apparatus comprising:
    motion vector calculation means for acquiring images at different times from the camera, storing them in a storage device, and calculating a motion vector in each region of the image based on the images at different times stored in the storage device;
    virtual motion vector calculation means for generating a model of the space in which the vehicle travels from the running state of the vehicle and calculating a virtual motion vector for each region of the image based on the generated model; and
    obstacle detection means for dividing the model into a plurality of virtual regions and detecting an obstacle based on the motion vector and the virtual motion vector corresponding to each divided virtual region and on a criterion set for each virtual region.
JP2007123908A 2007-05-08 2007-05-08 Obstacle detection method and obstacle detection device Expired - Fee Related JP5125214B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007123908A JP5125214B2 (en) 2007-05-08 2007-05-08 Obstacle detection method and obstacle detection device


Publications (2)

Publication Number Publication Date
JP2008282106A true JP2008282106A (en) 2008-11-20
JP5125214B2 JP5125214B2 (en) 2013-01-23

Family

ID=40142889


Country Status (1)

Country Link
JP (1) JP5125214B2 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07239998A (en) * 1994-02-28 1995-09-12 Mitsubishi Electric Corp Periphery monitoring device for vehicle
JP2004056763A (en) * 2002-05-09 2004-02-19 Matsushita Electric Ind Co Ltd Monitoring apparatus, monitoring method, and program for monitor
JP2005215985A (en) * 2004-01-29 2005-08-11 Fujitsu Ltd Traffic lane decision program and recording medium therefor, traffic lane decision unit and traffic lane decision method
JP2005293479A (en) * 2004-04-05 2005-10-20 Denso Corp Object danger decision device
WO2005098366A1 (en) * 2004-03-31 2005-10-20 Pioneer Corporation Route guidance system and method


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010204805A (en) * 2009-03-02 2010-09-16 Konica Minolta Holdings Inc Periphery-monitoring device and method
US9053554B2 (en) 2009-04-23 2015-06-09 Toyota Jidosha Kabushiki Kaisha Object detection device using an image captured with an imaging unit carried on a movable body
JP2013200778A (en) * 2012-03-26 2013-10-03 Fujitsu Ltd Image processing device and image processing method
WO2016151977A1 (en) * 2015-03-26 2016-09-29 パナソニックIpマネジメント株式会社 Moving body detection device, image processing device, moving body detection method, and integrated circuit
JPWO2016151977A1 (en) * 2015-03-26 2017-09-07 パナソニックIpマネジメント株式会社 Mobile body detecting device, image processing device, mobile body detecting method, and integrated circuit

Also Published As

Publication number Publication date
JP5125214B2 (en) 2013-01-23


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20100119

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110712

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20110713

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110909

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20111213

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120213

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20121002


A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20121015

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20151109

Year of fee payment: 3

LAPS Cancellation because of no payment of annual fees