JP5300309B2 - Obstacle recognition device - Google Patents

Obstacle recognition device

Info

Publication number
JP5300309B2
Authority
JP
Japan
Prior art keywords
image
vehicle
front
rear
means
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2008113923A
Other languages
Japanese (ja)
Other versions
JP2009262736A (en)
Inventor
敏夫 伊東
和義 山下
Original Assignee
ダイハツ工業株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ダイハツ工業株式会社 filed Critical ダイハツ工業株式会社
Priority to JP2008113923A priority Critical patent/JP5300309B2/en
Publication of JP2009262736A publication Critical patent/JP2009262736A/en
Application granted granted Critical
Publication of JP5300309B2 publication Critical patent/JP5300309B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00624Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G06K9/00791Recognising scenes perceived from the perspective of a land vehicle, e.g. recognising lanes, obstacles or traffic signs on road scenes
    • G06K9/00805Detecting potential obstacles
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Abstract

An obstacle recognition device correctly recognizes an obstacle behind the driver's own vehicle by simple processing of captured images, even when another vehicle exists in front of or behind the own vehicle or when a front image cannot be obtained. Based on images captured by a photographing means (3) of the moving own vehicle (1), a generation means of the obstacle recognition device (2) superimposes the vehicle-free portions of a plurality of frames of front images or rear images to generate an image containing no vehicle, and a recognition means recognizes the obstacle in the rear image from the image portions in which that vehicle-free image and the rear image identified by an identification means differ from each other.

Description

  The present invention relates to an obstacle recognition device that recognizes an obstacle behind a host vehicle by processing captured images, and more particularly to improving recognition when vehicles exist as obstacles both in front of and behind the host vehicle.

  Conventionally, when realizing pre-crash safety for a vehicle (damage reduction, collision avoidance, and so on), it is important to recognize obstacles in front of and behind the traveling vehicle; in particular, accurate recognition of obstacles such as a vehicle approaching from behind is desired.

As a conventional apparatus for recognizing such obstacles, a device has been proposed in which the front or rear of the own vehicle is photographed at regular intervals by an on-board camera, projective transformation is applied to two captured images taken at successive times, the difference between them is taken, feature points that have shifted over time are detected from that difference, and the optical flow of motion vectors is computed; based on this detection, obstacles (objects standing vertically on the road surface) such as vehicles in front of or behind the own vehicle are recognized (see, for example, Patent Document 1).
Japanese Patent No. 3463858 (for example, claim 1, paragraphs [0014], [0041]-[0045], [0118], FIGS. 1 to 3)

  In the conventional device of Patent Document 1, an obstacle such as a vehicle approaching from behind is detected by computing the optical flow of motion vectors from the difference image obtained after projective transformation of two captured images taken at successive times. In this case, however, because the obstacle is recognized from slight changes over time in the distance and shape of the obstacle relative to the own vehicle in the captured images, complex image processing is needed to detect the optical flow; moreover, obstacles are not easily recognized, may fail to be recognized depending on the situation, and thus cannot be recognized stably and reliably.

  Therefore, the applicant of the present application has already filed an obstacle recognition apparatus (Japanese Patent Application No. 2007-186549) comprising: a photographing unit capable of photographing the road surface as viewed from a plurality of directions of the traveling vehicle; an image acquisition unit that acquires captured images of substantially the same road surface area from a plurality of different directions of the photographing unit; an identification unit that extracts and identifies a predetermined monitoring range included in the captured images from the plurality of directions acquired by the image acquisition unit; and a recognition unit that, based on the identification result of the identification unit, recognizes image portions that differ between the captured images within the monitoring range as obstacles standing vertically on the road surface.

  In the previously filed obstacle recognition device, for example, for a captured image (front image) of a certain road surface area ahead of the own vehicle taken by the photographing means at time tm, and a captured image (rear image) of the same road surface area behind the own vehicle taken by the photographing means at time tn, after the own vehicle has passed through that road surface area, the identification means extracts substantially the same monitoring range, and obstacles on the road are identified from the difference between the images (difference in shading).

  That is, if there is an obstacle ahead of the host vehicle, the obstacle appears in the monitoring range of the front image but not in that of the rear image; conversely, if there is an obstacle behind the host vehicle, it appears in the monitoring range of the rear image but not in that of the front image. Thus, when an obstacle exists ahead of or behind the host vehicle, it is included in the monitoring range of only one of the two images, and the contents of the two monitoring ranges differ greatly. The previously filed obstacle recognition device can therefore recognize obstacles in front of or behind the host vehicle easily, stably, and reliably from a simple difference in image shading, without complicated image processing such as optical-flow detection.

  However, with the previously filed obstacle recognition device, if vehicles exist both in front of and behind the own vehicle and the vehicles in the front image and the rear image overlap, it cannot be determined whether a vehicle exists as an obstacle behind the own vehicle. The same applies to the front of the vehicle.

  Therefore, when vehicles as obstacles exist both in front of and behind the own vehicle, it is difficult to accurately recognize obstacles behind the own vehicle, such as a vehicle approaching from behind.

Further, even when the front image cannot be obtained for some reason, accurate recognition of obstacles behind the vehicle becomes difficult.

  An object of the present invention is to make it possible to accurately recognize an obstacle behind the host vehicle, by simple processing of captured images, even when vehicles are present in front of and behind the host vehicle or when a front image cannot be obtained.

In order to achieve the above object, the obstacle recognition device according to the present invention comprises: a photographing unit capable of photographing the road surface in the front-rear direction of the traveling vehicle; an image acquisition unit that acquires a front image and a rear image captured by the photographing unit over substantially the same road surface area; a generation unit that superimposes, over a plurality of frames, the regions of the front images acquired by the image acquisition unit in which no vehicle is present, thereby generating a front image containing no vehicle; an identification unit that identifies a rear image of the road surface area of the front image generated by the generation unit; and a recognition unit that recognizes a vehicle in the rear image from the image portions that differ between the front image generated by the generation unit and the rear image identified by the identification unit (claim 1).

  The obstacle recognition apparatus of the present application further comprises a projective transformation unit that performs projective transformation on the captured images acquired by the image acquisition unit (claim 2).

According to the first aspect of the present invention, even when vehicles are present in front of and behind the host vehicle, when the photographing unit captures the front image and the rear image, the generation unit superimposes the vehicle-free portions of the plural frames of front images acquired by the image acquisition unit, generating a front image without a vehicle.

Further, the identification unit identifies the rear image, acquired by the image acquisition unit, of the road surface area of the vehicle-free front image generated by the generation unit.

Then, the recognition unit easily recognizes the vehicle in the rear image from the image portions that differ between the vehicle-free front image generated by the generation unit and the rear image identified by the identification unit.

Therefore, when there are vehicles in front of and behind the host vehicle, a vehicle approaching from behind can be accurately recognized by simple image processing using the front image and the rear image.

  On the other hand, when the photographing unit cannot capture the front image for some reason, the generation unit generates a vehicle-free image by superimposing the vehicle-free portions of the plural frames of rear images acquired by the image acquisition unit.

  The identification unit then identifies a rear image in which a vehicle exists within the road surface area of that vehicle-free image.

  Then, the recognizing unit easily recognizes the obstacle (vehicle) in the rear image from the image portions that differ between the vehicle-free image and the rear image identified by the identifying unit.

  Therefore, when the front image cannot be obtained for some reason, the obstacle behind the host vehicle can be accurately recognized by simple image processing using the rear image.

  According to the second aspect of the present invention, projective transformation by the projective transformation means yields, for the front image and the rear image, projectively transformed images that give a bird's-eye view of each photographing range. There is therefore the advantage that, using these projectively transformed images, identification by the identification means and recognition by the recognition means can be performed more easily.

  Next, in order to describe the present invention in more detail, an embodiment will be described in detail with reference to FIGS.

  FIG. 1 shows a block configuration of an obstacle recognition device 2 mounted on the host vehicle 1, and FIG. 2 shows a configuration example of an imaging means of the obstacle recognition device 2. FIG. 3 is a flowchart for explaining the operation of the obstacle recognition apparatus 2. 4, 5, and 7 to 12 show examples of images for explaining the processing of the obstacle recognition apparatus 2, and FIG. 6 is an explanatory diagram of projective transformation.

  As shown in FIG. 1, the obstacle recognition device 2 mounted on the host vehicle 1 comprises a photographing means 3, an image processing means 4 with a microcomputer configuration that processes and identifies the captured images, and a data storage means 5 that rewritably stores the captured images and other data.

  The photographing means 3 consists of monochrome or color monocular cameras capable of photographing the road surface as viewed in the front and rear directions of the traveling vehicle 1, and outputs frame images captured moment by moment. Specifically, as shown in FIG. 2, the photographing means 3 comprises, for example, a monocular camera 3a in front of the inner mirror for monitoring the front of the vehicle 1 and a monocular camera 3b above the back door for rear monitoring. The monocular cameras 3a and 3b are identical and, for example, divide a moving image into still images at 1/30-second intervals for output. The installation angle of the monocular cameras 3a and 3b is set so that the image includes a point at infinity at its top, so that long stretches of the white lines beside the vehicle can be used when projective transformation is performed.

  Next, the image processing means 4 constitutes the image acquisition means, projective transformation means, generation means, and recognition means of the present invention, and also constitutes the differential binarization means, expansion means, calculation means, and contraction means described later. While the vehicle 1 is traveling, it repeatedly executes the recognition processing program of steps S1 to S10 in FIG. 3.

  In step S1 of FIG. 3, the image acquisition means captures a captured image (hereinafter, front image) F1 of the road surface area ahead of the host vehicle 1 in FIG. 4, taken by the monocular camera 3a, and a captured image (hereinafter, rear image) B1 of substantially the same road surface area behind the host vehicle 1 in FIG. 5, taken by the monocular camera 3b at a time tb delayed by a minute time Δt determined from the own vehicle speed given by a vehicle speed sensor (not shown) or the like, and holds them in the data storage means 5. Vehicles (a front vehicle α and a rear vehicle β) exist as obstacles standing vertically on the road surface in front of and behind the own vehicle 1; the front image F1 contains the vehicle α and the rear image B1 contains the vehicle β.

  In the case of this embodiment, the projective transformation means performs projective transformation on the front image F1 and the rear image B1 in step S2 of FIG.

  As shown in FIG. 6, this projective transformation is a well-known coordinate transformation in which a point P (x, y) on a plane L is projected to a point P0 (u, v) on another plane L0 with respect to a projection center O. In a road environment where the road surface photographed by the monocular cameras 3a and 3b can be assumed to be a plane, the captured image can thus be projectively transformed into an image viewed from an arbitrary viewpoint.
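The point mapping described above can be illustrated with a minimal NumPy sketch. Note this is an assumption-laden illustration, not the patent's implementation: the 3x3 homography matrix H would in practice be derived from the camera angle and height, whereas here it is given trivial values only to show the mechanics.

```python
import numpy as np

def project_point(H, x, y):
    """Map a road-surface point (x, y) through a 3x3 homography H.

    Returns the projected point (u, v) on the target image plane,
    i.e. the homogeneous product divided by its third component.
    """
    p = H @ np.array([x, y, 1.0])
    return float(p[0] / p[2]), float(p[1] / p[2])

# Sanity checks with trivial homographies (identity and uniform scaling).
print(project_point(np.eye(3), 2.0, 3.0))                 # -> (2.0, 3.0)
print(project_point(np.diag([2.0, 2.0, 1.0]), 2.0, 3.0))  # -> (4.0, 6.0)
```

A real bird's-eye transform would use a homography estimated from the camera extrinsics, applied to every pixel of F1 and B1 rather than a single point.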

  Then, for the front image F1 obtained by photographing the forward scene with the monocular camera 3a and the rear image B1 obtained by photographing the rearward scene with the monocular camera 3b, projective transformation is performed so that the road surface becomes a plane and the white lines on the road surface become parallel, yielding the front image F2 of FIG. 4 and the rear image B2 of FIG. 5 as seen from directly above.

  The lower side of the front image F2 is the own-vehicle 1 side, and the upper side of the rear image B2 is the own-vehicle 1 side. In order to recognize an obstacle behind the host vehicle 1 by the image superposition described later, it is desirable in practice to normalize the front image F2 and the rear image B2 so that they can be superimposed. Therefore: (i) the same area is imaged by the monocular cameras 3a and 3b; (ii) an image viewed from directly above is generated by projective transformation, on the assumption that the road surface is a perfect plane, using the angles and heights of the monocular cameras 3a and 3b, and at this time the white lines on both sides of the lane are extracted using the Hough transform and the angle is adjusted so that they become parallel; (iii) the width and direction of the lanes in the front image F2 and the rear image B2 are matched using the straight-line components of the white lines obtained by the Hough transform; (iv) since the distance traveled by the own vehicle 1 can be determined from the vehicle speed or the like, the front image F2 and the rear image B2 are accurately aligned, in an area slightly larger than the traveling lane, by template matching or the like.

  When the front image F2 and the rear image B2 obtained in this way are superimposed and the difference in shading is taken, the difference image is dark in the areas close to the monocular cameras 3a and 3b, and the shading varies greatly depending on the orientation of the cameras relative to illumination such as sunlight. The degree of change in shading also varies with the road surface material, and a wide range of shading errors arises from slight misalignment of the superposition. An obstacle such as a vehicle might therefore not be recognizable from the difference image. Accordingly, in this embodiment, after the images are differentiated and binarized by the differential binarization means described below, the well-known expansion processing, exclusive-OR operation, and contraction processing of image processing are performed by the expansion means, calculation means, and contraction means to eliminate these drawbacks.

  The differential binarization means differentiates and binarizes the front image F1 and rear image B1 acquired by the image acquisition means, or the front image F2 and rear image B2 obtained by their projective transformation. In this embodiment, which provides the projective transformation means, the front image F2 and the rear image B2 are differentiated in step S3 of FIG. 3 to emphasize shading, and the differential edges are then binarized with a predetermined threshold. In the differentiation processing, it is preferable to obtain the differential values of the front image F2 and the rear image B2 after smoothing, so that unnecessary differential values based on road surface unevenness are not generated.
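The smoothing, differentiation, and thresholding steps just described can be sketched as follows. This is an illustrative NumPy version (box-filter smoothing and finite differences are assumptions; the patent does not specify the exact operators):

```python
import numpy as np

def differential_binarize(img, thresh):
    """Smooth, differentiate, and threshold a grayscale image.

    Smoothing (3x3 box filter) suppresses spurious differential values
    from road-surface texture; the gradient magnitude is then binarized
    against a predetermined threshold.
    """
    pad = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    # 3x3 box smoothing: average of the nine shifted views
    smooth = sum(pad[dy:dy + h, dx:dx + w]
                 for dy in range(3) for dx in range(3)) / 9.0
    # finite-difference gradients in x and y
    gx = np.abs(np.diff(smooth, axis=1, prepend=smooth[:, :1]))
    gy = np.abs(np.diff(smooth, axis=0, prepend=smooth[:1, :]))
    return ((gx + gy) > thresh).astype(np.uint8)

# A uniform road patch produces no edges; a step edge does.
flat = np.full((5, 5), 100)
print(int(differential_binarize(flat, 10).sum()))  # -> 0
```

With a real implementation one would typically use Gaussian smoothing and a Sobel operator, but the structure, smooth then differentiate then threshold, is the same.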

  The expansion processing is processing that increases the line width formed by logic-"1" pixels in a binarized image obtained by differentiating and binarizing images such as the front image F2 and the rear image B2. In a simple implementation, the binarized image to be processed is divided into appropriate small areas of, for example, 4 pixels or 8 pixels, and if even one pixel in a small area is "1", all pixels in that small area are set to "1".

  Then, when the images F2 and B2 are each differentiated and binarized by the differential binarization means and the expansion processing is applied to the binarized images, the front image F3 and the rear image B3 of FIG. 7 are obtained.

  When the expansion processing is performed, the dots and lines of the image are thickened, which reduces the effect of image misalignment. When the exclusive OR of the expanded images F3 and B3 is taken, the edges (differing image portions) of objects such as the vehicles α and β that exist in only one of the images remain, while the road markings common to both images F3 and B3 disappear. Furthermore, when the image formed by the exclusive OR is contracted so that its points and lines return to their original thickness, unnecessary image portions, such as road-marking residue left unerased by the exclusive OR, also disappear, and only the edge portions of the objects to be recognized are restored. If the vehicles α and β do not overlap, they can be recognized from these edge portions.

  Therefore, when the exclusive OR of the images F3 and B3 is computed by the calculation means and the results are superimposed, the composite image FB3 of FIG. 7 is obtained. Furthermore, when the contraction processing is applied to the composite image FB3 by the contraction means, the image FB4 of FIG. 7 is obtained.
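The exclusive-OR followed by contraction can be sketched as below. The block-wise erosion rule here (a cell survives only if all its pixels are 1) is an illustrative counterpart to the block-wise expansion, not the patent's specified operator:

```python
import numpy as np

def xor_then_erode(a, b, block=2):
    """Exclusive-OR cancels features common to both expanded images;
    block-wise erosion then shrinks lines back, removing thin residue
    (a cell is kept only if every pixel in it is 1)."""
    out = np.bitwise_xor(a, b)
    h, w = out.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            if not out[y:y + block, x:x + block].all():
                out[y:y + block, x:x + block] = 0
    return out

# A road marking present in both images cancels; an object present in
# only one image survives (its cell is solid after expansion).
a = np.zeros((4, 4), dtype=np.uint8); a[0:2, 0:2] = 1; a[2:4, 2:4] = 1
b = np.zeros((4, 4), dtype=np.uint8); b[0:2, 0:2] = 1
print(int(xor_then_erode(a, b).sum()))  # -> 4: only the feature unique to `a` remains
```

The top-left block, common to both images, vanishes under the XOR, while the bottom-right block, present only in `a`, survives both steps.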

  Here, in the composite image FB3, the part enclosed by the solid-line frame a in FIG. 7 comes from the front image F3, and the part enclosed by the solid-line frame b comes from the rear image B3. When vehicles α and β exist in front of and behind the host vehicle 1, the vehicles α and β in the composite image FB3 overlap so as to form a single object, as shown by the intersection of the solid-line frames a and b in FIG. 7. The vehicle β, the obstacle behind the host vehicle 1, therefore cannot be recognized separately from the image FB4 after the contraction processing. The same applies when the images F1 and B1 before binarization are superimposed to obtain a shading-difference image and the vehicle β is to be recognized from it. If the vehicle β cannot be recognized separately, the distance from the vehicle 1 to the vehicle β cannot be determined from the recognition result, and the possibility of a collision cannot be judged.

  As the photographing position of the photographing means 3 changes with the travel of the own vehicle 1, the road surface photographed by the monocular camera 3a, for example, moves forward over time as shown in FIG. 8, and the vehicle α ahead also advances. The same applies to the road surface photographed by the monocular camera 3b and the vehicle β.

  In the front images F2(t1), F2(t2), F2(t3), ... or F3(t1), F3(t2), F3(t3), ... at each time t1, t2, t3, ..., the range from the camera-side end of the image to the vehicle α contains no vehicle, only road surface. This is also apparent from FIG. 10, which shows an enlarged front image F3* that has been binarized but not yet subjected to expansion processing: the solid-line frame c in FIG. 10 encloses part of the vehicle α, and the solid-line frame d in front of it encloses a road surface portion.

  When the road surface portions where the vehicle α does not exist in the front images F2(t1), F2(t2), F2(t3), ... or F3(t1), F3(t2), F3(t3), ... are superimposed over a plurality of frames, a front image obtained by joining these road surface portions, that is, a front image Fr without the vehicle α, can be generated, as shown schematically in FIG. 9.
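Joining the vehicle-free road surface portions across frames can be sketched as a simple mosaic. The per-frame road-surface masks and the prior alignment of all frames into a common bird's-eye view are assumptions of this illustration:

```python
import numpy as np

def stitch_vehicle_free(frames, masks):
    """Build a vehicle-free road image from several aligned frames.

    `masks` marks, per frame, the pixels known to show only road surface
    (no vehicle); those pixels are copied into the mosaic, so several
    frames together cover the whole area.
    """
    mosaic = np.zeros_like(frames[0])
    filled = np.zeros(frames[0].shape, dtype=bool)
    for img, road in zip(frames, masks):
        take = road & ~filled          # only fill pixels not yet covered
        mosaic[take] = img[take]
        filled |= take
    return mosaic, filled

# Two frames in which the vehicle hides a different half of the view.
f1 = np.full((2, 4), 10); m1 = np.zeros((2, 4), bool); m1[:, :2] = True
f2 = np.full((2, 4), 20); m2 = np.zeros((2, 4), bool); m2[:, 2:] = True
mosaic, filled = stitch_vehicle_free([f1, f2], [m1, m2])
print(bool(filled.all()))  # -> True: the joined road surface has no gaps
```

In the device, the number of frames needed to cover the area (several to a dozen or so) is governed by the vehicle speed, since faster travel exposes fresh road surface sooner.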

  Furthermore, once the front image Fr has been generated, the vehicle β behind the host vehicle 1 can be recognized separately, even when the vehicle α is present ahead of the host vehicle 1, on the basis of the front image Fr and the rear image B2 or B3 at an appropriate time within the road surface area.

  Further, when the front image F1 cannot be obtained for some reason, the road surface portions where the vehicle β does not exist in the rear image B2 or B3 are superimposed over a plurality of frames to generate a rear image Br corresponding to the front image Fr, and the vehicle β behind the host vehicle 1 can be recognized separately on the basis of the rear image Br and the rear image B2 or B3 at an appropriate time within the road surface area.

  Specifically, the generation means holds the front image F2 and the rear image B2 moment by moment in the data storage means 5. When the front image F2 is obtained in step S4 of FIG. 3, the regions of the front image F2 where the vehicle α does not exist are superimposed over a plurality of frames (specifically, several to a dozen or so frames, determined by the vehicle speed of the host vehicle 1 or the like), and the front image Fr without the vehicle α is generated. When the front image F2 is not obtained, the regions of the rear image B2 where the vehicle β does not exist are superimposed over a plurality of frames, and the rear image Br without the vehicle β is generated.

  In step S5 of FIG. 3, the identification means reads from the data storage means 5, based on the vehicle speed and current position of the host vehicle 1 and so on, the rear image B2 within the road surface area of the front image Fr or the rear image Br, and identifies that rear image B2 as the recognition target.

  The expansion means performs an expansion process on the front image Fr or the rear image Br and the recognition target rear image B2 in step S6 of FIG.

  In step S7 of FIG. 3, the calculation means computes the exclusive OR of the front image Fr or rear image Br and the recognition-target rear image B2, canceling most of the road markings common to both images. At this point, the differing portion, that is, the portion of the vehicle β, remains as a thick contour line produced by the expansion processing. Note that an operation that takes a difference may be performed instead of the exclusive-OR operation; in this case, for example, an equivalent result can be obtained by setting pixels whose operation result is "-1" to "0".
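The difference-with-clipping alternative mentioned above can be shown on hypothetical 1-D slices of binarized images (the array values here are invented for illustration):

```python
import numpy as np

# `rear` contains the obstacle edge; `ref` is the vehicle-free reference,
# here given one residue pixel to exercise the "-1" case.
rear = np.array([0, 1, 1, 0])
ref  = np.array([0, 1, 0, 1])

diff = rear - ref
diff[diff == -1] = 0   # map "-1" results to "0", as described above

print(diff.tolist())   # -> [0, 0, 1, 0]: only the pixel unique to `rear` remains
```

Unlike the exclusive OR, which would also keep the pixel set only in the reference, this one-sided difference discards it; since such pixels are residue in the vehicle-free reference rather than obstacle evidence, the two operations agree on the pixels that matter for recognition.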

  In step S8 of FIG. 3, the contraction means applies to the composite image formed by the exclusive-OR operation a contraction processing opposite to the expansion processing, narrowing the line widths of the composite image back to their original thickness, and thereby forms a composite (contracted) image containing substantially only the vehicle β.

  Based on the composite image formed by the contraction means, in step S9 of FIG. 3 the recognition means recognizes the vehicle β, the obstacle standing vertically on the road surface in the rear image B2, from the image portions that differ between the front image Fr or rear image Br and the recognition-target rear image B2 identified by the identification means, and sends the recognition result to a collision prediction processing means (not shown) for damage reduction or collision avoidance of the own vehicle 1. At this time, the distance to the vehicle β is easily obtained, for example, from the height of the vehicle β in the captured images F1 and F2.

  An actual example of the front image Fr generated from the front image F2 is shown as image Fr1 in FIG. 11; when it is differentiated and expanded, image Fr2 in the same figure is obtained. FIG. 12 shows the composite image Fexor formed by the exclusive-OR operation and the recognition result image Fdet; the solid-line frame e in the image Fdet is the vehicle β.

  As described above, even when vehicles exist in front of and behind the host vehicle 1, when the photographing means 3 captures the front image F1 and the rear image B1, the obstacle recognition device 2 of this embodiment uses the generation means to superimpose the portions of the plural frames of the front image F2 (or F3), acquired by the image acquisition means, in which the vehicle α does not exist, thereby generating the vehicle-free front image Fr.

  Further, the rear image B3 within the road surface area of the front image Fr is identified by the identification means, and the recognition means can easily recognize the obstacle (vehicle β) in the rear image B3 from the image portions that differ between the front image Fr and the rear image B3 identified by the identification means.

  Therefore, when vehicles α and β exist in front of and behind the host vehicle 1, obstacles behind the host vehicle 1, such as the vehicle β approaching from the rear, can be recognized accurately by simple image processing using the front image F1 and the rear image B1.

  Further, when the front image F1 cannot be obtained for some reason, the obstacle behind the host vehicle 1 can be accurately recognized in the same manner as described above by simple image processing using the rear image Br.

  Furthermore, there is the advantage that each process can be performed more easily using the front image F2 and the rear image B2 obtained by the projective transformation of the projective transformation means, and the recognition accuracy is further improved because the expansion processing and the contraction processing are performed.

  Note that, to simplify processing, the projective transformation may be omitted, and the expansion processing and contraction processing may also be omitted.

  The present invention is not limited to the above-described embodiment, and various modifications other than those described above can be made without departing from the spirit of the present invention.

  For example, in the embodiment, the projective transformation is performed before the differential binarization. However, the projective transformation may be performed at any timing such as after the differential binarization.

  The present invention can be applied to obstacle recognition of various vehicles.

FIG. 1 is a block diagram of one embodiment of the present invention. FIG. 2 is an explanatory diagram of a configuration example of the photographing means of FIG. 1. FIG. 3 is a flowchart for explaining the operation of FIG. 1. FIG. 4 is an explanatory diagram of an example of a captured image in which a vehicle exists ahead of the own vehicle. FIG. 5 is an explanatory diagram of an example of a captured image in which a vehicle exists behind the own vehicle. FIG. 6 is an explanatory diagram of the projective transformation of FIG. 1. FIG. 7 is an explanatory diagram of image examples explaining the processing of FIG. 1. FIG. 8 is an explanatory diagram of the process of generating an image by superimposing portions where no vehicle exists. FIG. 9 is a schematic diagram of an image formed by superimposing portions where no vehicle exists. FIG. 10 is an explanatory diagram of a portion of a captured image where no vehicle exists. FIG. 11 is an explanatory diagram of an image example formed by superimposing portions where no vehicle exists. FIG. 12 is an explanatory diagram of recognition based on the images of FIG. 11.

Explanation of symbols

DESCRIPTION OF SYMBOLS
1 Own vehicle
2 Obstacle recognition apparatus
3 Imaging means
4 Image processing means

Claims (2)

  1. An obstacle recognition apparatus comprising:
    photographing means capable of photographing the road surface in the front-rear direction of a traveling vehicle;
    image acquisition means for acquiring a front image and a rear image, captured by the photographing means, of substantially the same road surface area;
    generating means for superposing, over a plurality of frames, the regions of the front images acquired by the image acquisition means in which no vehicle is present, thereby generating a front image in which no vehicle is present;
    identifying means for identifying the rear image of the road surface area of the front image generated by the generating means; and
    recognizing means for recognizing a vehicle in the rear image from the differing image portions of the front image generated by the generating means and the rear image identified by the identifying means.
  2. The obstacle recognition apparatus according to claim 1,
    further comprising projective conversion means for performing projective conversion on the captured images obtained by the image acquisition means.
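The pipeline of claim 1 can be sketched in a few lines of numpy. Everything here is a toy illustration under assumed data: the 2x2 images, the per-frame vehicle masks (the claims do not specify how the vehicle-present region of each frame is detected), and the difference threshold are all assumptions.

```python
import numpy as np

def accumulate_background(frames, vehicle_masks):
    """Superpose the vehicle-free regions of successive front images:
    wherever a frame's mask says 'no vehicle', copy that frame's
    pixels into the background image, filling each pixel the first
    time some frame shows it vehicle-free."""
    background = np.zeros_like(frames[0])
    filled = np.zeros(frames[0].shape, dtype=bool)
    for frame, mask in zip(frames, vehicle_masks):
        free = (mask == 0) & ~filled
        background[free] = frame[free]
        filled |= free
    return background

def recognize(background, rear_image, thresh=30):
    """The differing portions of the vehicle-free front image and
    the rear image are taken as the vehicle region in the rear image."""
    diff = np.abs(background.astype(np.int16) - rear_image.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

# Two front frames of the same road patch; a "vehicle" (value 255)
# occludes a different pixel in each, so the union of their vehicle-free
# regions recovers the full road surface (value 80).
f1 = np.array([[80, 255], [80, 80]], dtype=np.uint8)
m1 = np.array([[0, 1], [0, 0]], dtype=np.uint8)
f2 = np.array([[80, 80], [255, 80]], dtype=np.uint8)
m2 = np.array([[0, 0], [1, 0]], dtype=np.uint8)
bg = accumulate_background([f1, f2], [m1, m2])

rear = np.array([[80, 80], [80, 200]], dtype=np.uint8)  # vehicle at (1, 1)
print(recognize(bg, rear))  # -> [[0 0] [0 1]]
```

The point of the superposition step is that the generated front image contains only road surface, so any later difference against the rear image of the same road area can be attributed to a vehicle behind the own vehicle.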
JP2008113923A 2008-04-24 2008-04-24 Obstacle recognition device Expired - Fee Related JP5300309B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008113923A JP5300309B2 (en) 2008-04-24 2008-04-24 Obstacle recognition device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008113923A JP5300309B2 (en) 2008-04-24 2008-04-24 Obstacle recognition device
PCT/JP2008/073328 WO2009130823A1 (en) 2008-04-24 2008-12-22 Obstacle recognition device

Publications (2)

Publication Number Publication Date
JP2009262736A JP2009262736A (en) 2009-11-12
JP5300309B2 true JP5300309B2 (en) 2013-09-25

Family

ID=41216568

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008113923A Expired - Fee Related JP5300309B2 (en) 2008-04-24 2008-04-24 Obstacle recognition device

Country Status (2)

Country Link
JP (1) JP5300309B2 (en)
WO (1) WO2009130823A1 (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2946727B2 (en) * 1990-10-26 1999-09-06 日産自動車株式会社 Vehicle obstacle detecting device
JP3540005B2 (en) * 1994-04-22 2004-07-07 株式会社デンソー Obstacle detection device
JP3494434B2 (en) * 1999-10-21 2004-02-09 松下電器産業株式会社 Parking assistance device
JP3847547B2 (en) * 2000-10-17 2006-11-22 三菱電機株式会社 Vehicle periphery monitoring support device
JP3997945B2 (en) * 2003-04-23 2007-10-24 株式会社デンソー Peripheral image display device
JP4192680B2 (en) * 2003-05-28 2008-12-10 アイシン精機株式会社 Moving object periphery monitoring device
JP4907883B2 (en) * 2005-03-09 2012-04-04 株式会社東芝 Vehicle periphery image display device and vehicle periphery image display method
WO2007015446A1 (en) * 2005-08-02 2007-02-08 Nissan Motor Co., Ltd. Device for monitoring around vehicle and method for monitoring around vehicle
JP4687411B2 (en) * 2005-11-15 2011-05-25 株式会社デンソー Vehicle peripheral image processing apparatus and program
JP4797877B2 (en) * 2006-08-14 2011-10-19 日産自動車株式会社 Vehicle video display device and vehicle around video display method

Also Published As

Publication number Publication date
WO2009130823A1 (en) 2009-10-29
JP2009262736A (en) 2009-11-12

Similar Documents

Publication Publication Date Title
US9959595B2 (en) Dense structure from motion
US9386302B2 (en) Automatic calibration of extrinsic and intrinsic camera parameters for surround-view camera system
JP6398347B2 (en) Image processing apparatus, recognition object detection method, recognition object detection program, and moving object control system
KR101362324B1 (en) System and Method for Lane Departure Warning
US9177196B2 (en) Vehicle periphery monitoring system
US8164432B2 (en) Apparatus, method for detecting critical areas and pedestrian detection apparatus using the same
US7372977B2 (en) Visual tracking using depth data
US8244027B2 (en) Vehicle environment recognition system
US9076046B2 (en) Lane recognition device
US10255509B2 (en) Adaptive lane marker detection for a vehicular vision system
JP4203512B2 (en) Vehicle periphery monitoring device
JP2013109760A (en) Target detection method and target detection system
JP4970516B2 (en) Surrounding confirmation support device
JP4654163B2 (en) Vehicle surrounding environment recognition device and system
JP6013884B2 (en) Object detection apparatus and object detection method
JP5325765B2 (en) Road shoulder detection device and vehicle using road shoulder detection device
US10129521B2 (en) Depth sensing method and system for autonomous vehicles
DE102013110615B3 (en) 3D camera according to the stereoscopic principle and method for acquiring depth maps
US8848980B2 (en) Front vehicle detecting method and front vehicle detecting apparatus
US7672514B2 (en) Method and apparatus for differentiating pedestrians, vehicles, and other objects
US7899211B2 (en) Object detecting system and object detecting method
KR20170014168A (en) Camera device for vehicle
JP5399027B2 (en) A device having a system capable of capturing a stereoscopic image to assist driving of an automobile
EP1403615A2 (en) Apparatus and method for processing stereoscopic images
JP2008219063A (en) Apparatus and method for monitoring vehicle's surrounding

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20101213

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20121218

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20130121

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20130618

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20130618

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

LAPS Cancellation because of no payment of annual fees