CN107423675B - Advanced warning system for forward collision warning of traps and pedestrians - Google Patents


Info

Publication number
CN107423675B
CN107423675B (application CN201710344179.6A)
Authority
CN
China
Prior art keywords
image
collision
pedestrian
camera
host vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710344179.6A
Other languages
Chinese (zh)
Other versions
CN107423675A (en)
Inventor
Dan Rosenbaum
Amiad Gurman
Gideon Stein
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mobileye Vision Technologies Ltd
Original Assignee
Mobileye Vision Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mobileye Vision Technologies Ltd filed Critical Mobileye Vision Technologies Ltd
Publication of CN107423675A (application)
Application granted
Publication of CN107423675B (grant)
Legal status: Active

Classifications

    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/10016 - Video; Image sequence
    • G06T 2207/30256 - Lane; Road marking
    • G08G 1/165 - Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • G08G 1/166 - Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an advanced warning system and method for forward collision warning of traps and pedestrians, using a camera that can be mounted in a motor vehicle. The method acquires image frames at known time intervals. A patch may be selected in at least one image frame. The optical flow of multiple image points of the patch may be tracked between image frames. The image points may be fitted to at least one model. Based on the fit of the image points to the at least one model, it may be determined whether a collision is expected and, if so, the time to collision (TTC). The image points may be fitted to a road surface model, in which a portion of the image points is modeled as imaged from the road surface; it may then be determined that no collision is expected based on this fit. The at least one model may also include a hybrid model, in which a first portion of the image points is modeled as imaged from the road surface and a second portion is modeled as imaged from a substantially vertical object. The image points may be fitted to a vertical surface model, in which a portion of the image points is modeled as imaged from a vertical object, and the TTC may be determined based on the fit of the image points to the vertical surface model.

Description

Advanced warning system for forward collision warning of traps and pedestrians
The present application is a divisional of the application entitled "Advanced warning system for forward collision warning of traps and pedestrians", filed on December 7, 2011 as application No. 201110404574.1.
Background
1. Background of the invention
The invention relates to a driver assistance system providing a frontal collision warning.
2. Description of the related Art
In recent years, camera-based Driver Assistance Systems (DAS) have entered the market; these include Lane Departure Warning (LDW), Automatic High-beam Control (AHC), pedestrian recognition, and Forward Collision Warning (FCW).
Lane Departure Warning (LDW) systems are designed to warn of unintentional lane departure. A warning is issued when the vehicle crosses or is about to cross a lane marking. Driver intent is determined based on the use of the turn signal, changes in steering-wheel angle, vehicle speed, and brake activation.
In image processing, the Moravec corner detection algorithm is one of the earliest corner detection algorithms and defines a corner as a point with low self-similarity. The Moravec algorithm tests each pixel in the image for the presence of a corner by considering how similar a patch centered on the pixel is to nearby, largely overlapping patches. Similarity is measured by taking the Sum of Squared Differences (SSD) between the two patches; a smaller value indicates greater similarity. An alternative approach to detecting corners in images is the method proposed by Harris and Stephens, which is an improvement on the method proposed by Moravec. Harris and Stephens improved the Moravec corner detector by considering the differential of the corner score directly with respect to direction, rather than using shifted nearby patches as Moravec did.
In computer vision, a widely used differential method for optical flow estimation was developed by Bruce D. Lucas and Takeo Kanade. The Lucas-Kanade method assumes that the optical flow is essentially constant in a local neighborhood of the pixel under consideration and solves the basic optical flow equations for all pixels in that neighborhood by the least squares criterion. By combining information from several nearby pixels, the Lucas-Kanade method can often resolve the inherent ambiguity of the optical flow equations. It is also less sensitive to image noise than point-wise methods. On the other hand, since it is a purely local method, it cannot provide flow information in the interior of uniform regions of the image.
SUMMARY
According to features of the invention, various methods are provided for signaling a forward collision warning using a camera mountable in a motor vehicle. A plurality of image frames are acquired at known time intervals. An image patch may be selected in at least one image frame. The optical flow of multiple image points of the patch may be tracked between image frames. The image points may be fitted to at least one model. Based on the fit of the image points, it may be determined whether a collision is expected and, if so, the time to collision (TTC). The image points may be fitted to a road surface model, in which a portion of the image points is modeled as imaged from the road surface; it may be determined that no collision is expected based on the fit of the image points to the road surface model. The image points may be fitted to a vertical surface model, in which a portion of the image points is modeled as imaged from a vertical object; the time to collision TTC may be determined based on the fit of the image points to the vertical surface model. The image points may be fitted to a hybrid model, in which a first portion of the image points is modeled as imaged from the road surface and a second portion is modeled as imaged from a substantially vertical or upright object rather than from the horizontal road surface.
In an image frame, a candidate image of a pedestrian may be detected, and the patch selected to include the candidate image of the pedestrian. When the best-fit model is the vertical surface model, it can be verified that the candidate image is an image of an upright pedestrian and not an image of an object in the road surface. In an image frame, a vertical line may be detected, and the patch selected to include the vertical line. When the best-fit model is the vertical surface model, it can be verified that the vertical line belongs to a vertical object and not to an object in the road surface.
In a different approach, a warning may be issued based on the time to collision being less than a threshold. In another approach, the relative scale of the patch may be determined from the optical flow between image frames, and the time to collision (TTC) determined from the relative scale and the time interval. This approach may avoid object recognition in the patch prior to determining the relative scale.
In accordance with a feature of the present invention, a system is provided that includes a camera and a processor. The system may be used to provide a forward collision warning using a camera mountable in a motor vehicle. The system may be configured to acquire a plurality of image frames at known time intervals, to select a patch in at least one of the image frames, to track the optical flow of a plurality of image points of the patch between image frames, to fit the image points to at least one model and, based on the fit of the image points to the at least one model, to determine whether a collision is expected and, if so, the time to collision (TTC). The system may also be used to fit the image points to a road surface model, and it may be determined that no collision is expected based on the fit of the image points to the road surface model.
According to other embodiments of the invention, a patch may be selected in an image frame corresponding to the location where the vehicle will be after a predetermined time interval. The patch may be monitored, and a forward collision warning issued if an object is imaged in the patch. Whether the object is substantially vertical, upright, or not in the road surface may be determined by tracking the optical flow of a plurality of image points of the object in the patch between image frames. The image points may be fitted to at least one model, with a portion of the image points modeled as imaged from the object. Based on the fit of the image points to the at least one model, it is determined whether a collision is expected and, if so, the time to collision (TTC). A forward collision warning may be issued when the best-fit model includes a vertical surface model. The image points may be fitted to a road surface model, and it may be determined that no collision is expected based on the fit of the image points to the road surface model.
In accordance with a feature of the present invention, a system for providing a forward collision warning in a motor vehicle is provided. The system includes a camera and a processor mountable in the motor vehicle. The camera may be used to acquire a plurality of image frames at known time intervals. The processor may be operable to select a patch in an image frame corresponding to the location where the vehicle will be after a predetermined time interval. If an object is imaged in the patch, a forward collision warning may be issued if the object is found to be upright and/or not in the road. The processor may also be configured to track a plurality of image points of the object in the patch between the image frames and to fit the image points to one or more models. The models may include a vertical object model, a road surface model and/or a hybrid model that includes one or more image points assumed to be from the road surface and one or more image points assumed to be from an upright object not in the road surface. Based on the fit of the image points to the models, it is determined whether a collision is expected and, if so, the time to collision (TTC) is determined. The processor may be operable to issue a forward collision warning based on the TTC being less than a threshold.
The present application also relates to the following:
1) a method for providing a frontal collision warning using a camera mountable in a motor vehicle, the method comprising:
obtaining a plurality of image frames at known time intervals;
selecting a blob in at least one of the image frames;
tracking optical flow between image frames of a plurality of image points of the blob;
fitting the image points to at least one model; and
based on the fitting of the image points to the at least one model, a Time To Collision (TTC) is determined if a collision is expected.
2) The method of 1), further comprising:
fitting the image points to a road surface model, wherein at least a portion of the image points are modeled as imaged from a road surface;
determining that no collision is expected based on a fit of the image points to the model.
3) The method of 1), further comprising:
fitting the image points to a vertical surface model, wherein at least a portion of the image points are modeled as imaged from a vertical object; and
determining the TTC based on a fit of the image points to the vertical surface model.
4) The method of claim 3), further comprising:
detecting a candidate image of a pedestrian in the image frame, wherein the blob is selected to include the candidate image of the pedestrian; and
when the best-fit model is the vertical surface model, it is verified that the candidate image is an image of a standing pedestrian and not an image of an object in the road surface.
5) The method of claim 3), further comprising:
detecting vertical lines in the image frames, wherein the blobs are selected to include the vertical lines;
when the best fit model is the vertical surface model, it is verified that the vertical line is an image of a vertical object and not an image of an object in the road surface.
6) The method of claim 1), wherein the at least one model further comprises a hybrid model, wherein a first portion of the image points are modeled as imaged from a road surface and a second portion of the image points are modeled as imaged from a substantially vertical object.
7) The method of 1), further comprising:
issuing a warning based on the time to collision being less than a threshold.
8) A system comprising a camera and a processor mountable in a motor vehicle, the system operable to provide a frontal collision warning, the system operable to:
obtaining a plurality of image frames at known time intervals;
selecting a blob in at least one of the image frames;
tracking optical flow between image frames of a plurality of image points of the blob;
fitting the image points to at least one model; and
based on the fitting of the image points to the at least one model, a Time To Collision (TTC) is determined if a collision is expected.
9) The system of 8), further operable to:
fitting the image points to a road surface model;
determining that no collision is expected based on a fit of the image points to the road surface model.
10) A method of providing a frontal collision warning using a camera and a processor that are mountable in a motor vehicle, the method comprising:
obtaining a plurality of image frames at known time intervals;
selecting a blob in an image frame, the blob corresponding to a location where the vehicle will be located after a predetermined time interval; and
monitoring the blob and issuing a frontal collision warning if an object is imaged in the blob.
11) The method of claim 10), further comprising:
determining whether the object includes a substantially vertical portion.
12) The method of claim 11), wherein the determining is performed by:
tracking optical flow between image frames of a plurality of image points in the blob; and
fitting the image points to at least one model.
13) The method of claim 11), wherein at least a portion of the image points are modeled as imaged from a vertical object; and
based on the fitting of the image points to the at least one model, a Time To Collision (TTC) is determined if a collision is expected.
14) The method of claim 11), wherein the at least one model comprises a road surface model, the method further comprising:
fitting the image points to a road surface model;
determining that no collision is expected based on a fit of the image points to the road surface model.
15) The method of claim 11), further comprising:
the warning is issued when the best fit model is a vertical surface model.
16) A system for providing a frontal collision warning in a motor vehicle, the system comprising:
a camera mountable in the motor vehicle, the camera operable to obtain a plurality of image frames at known time intervals;
a processor operable to:
selecting a blob in an image frame, the blob corresponding to a location where the vehicle will be located after a predetermined time interval;
monitoring the blob; and
issuing a frontal collision warning if an object is imaged in the blob.
17) The system of 16), wherein the processor is further operable to determine whether the object includes a substantially vertical portion, the determining performed by:
tracking a plurality of image points of the object in the blob between the image frames;
fitting the image points to at least one model; and
based on the fitting of the image points to the at least one model, a Time To Collision (TTC) is determined if a collision is expected.
18) The system of 16), wherein the processor is operable to issue a frontal collision warning based on the TTC being less than a threshold.
Brief description of the drawings
The invention is herein described, by way of example only, with reference to the accompanying drawings, wherein:
fig. 1a and 1b schematically show two images captured from a front-view camera installed in a vehicle when the vehicle approaches a metal guardrail, according to a feature of the present invention.
Fig. 2a illustrates a method for providing a forward collision warning using a camera mounted in a host vehicle, according to a feature of the present invention.
Fig. 2b shows further details of the step of determining the time-to-collision shown in fig. 2a, according to a feature of the present invention.
Figure 3a shows an image frame of an upright surface (the back of a van) according to a feature of the invention.
Figure 3c shows a rectangular area of primarily road surface according to a feature of the present invention.
Fig. 3b shows the vertical motion δy of a point as a function of the vertical image position (y), with respect to fig. 3a, according to a feature of the present invention.
Fig. 3d shows the vertical motion δy of a point as a function of the vertical image position (y), with respect to fig. 3c, according to a feature of the invention.
Figure 4a shows an image frame including an image of a metal guardrail having horizontal lines, and a rectangular patch, according to a feature of the present invention.
Fig. 4b and 4c show more details of the rectangular patches shown in fig. 4a, according to a feature of the present invention.
Fig. 4d shows a graph of the vertical movement of a point (δ y) with respect to the vertical point position (y) according to a feature of the invention.
Fig. 5 illustrates another example of looming in an image frame, in accordance with features of the present invention.
FIG. 6 illustrates a method for providing a frontal collision warning trap, in accordance with features of the present invention.
Fig. 7a and 7b show examples of wall triggered frontal collision trap warnings according to exemplary features of the present invention.
FIG. 7c illustrates an example of a forward collision trap warning triggered for boxes, according to an exemplary feature of the present invention.
Fig. 7d shows an example of a frontal collision trap warning triggered for the side of an automobile according to an exemplary feature of the present invention.
FIG. 8a illustrates an example of an object having a distinct vertical line on the box, in accordance with an aspect of the present invention.
Fig. 8b illustrates an example of an object having a distinct vertical line on a lamppost, in accordance with an aspect of the present invention.
Fig. 9 and 10 illustrate a system including a camera or an image sensor installed in a vehicle according to an aspect of the present invention.
Detailed Description
Reference will now be made in detail to the features of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The features are described below in order to explain the present invention by referring to the figures.
Before the features of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of design and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other features or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.
By way of introduction, embodiments of the present invention relate to a Forward Collision Warning (FCW) system. According to US patent 7,113,867, an image of a lead vehicle is identified. The width of the vehicle may be used to detect the change in scale, or relative scale S, between image frames, and the relative scale is used to determine the time to collision. Specifically, for example, the width of the lead vehicle has a length (measured, e.g., in pixels or millimeters) w(t1) and w(t2) in the first and second images, respectively. The relative scale is then S(t) = w(t2)/w(t1).
According to the teachings of US patent 7,113,867, a Forward Collision Warning (FCW) system relies on recognition of an image of an obstacle or object, for example a lead vehicle identified in an image frame. In a forward collision warning system as disclosed in US patent 7,113,867, the change in scale of the size (e.g., the width) of a detected object (e.g., a vehicle) is used to compute the time to collision (TTC). The object, however, must first be detected and segmented from the surrounding scene. This disclosure describes a system that uses the change in relative scale, based on optical flow, to determine the time to collision TTC and the likelihood of collision, and to issue an FCW warning if necessary. Optical flow causes looming: the perceived image appears larger as the imaged object becomes closer. According to different features of the invention, object detection and/or recognition may be performed, or may be avoided.
Looming has been widely studied in biological systems, where it appears to be a very low-level visual attention mechanism that can trigger instinctive reactions. There have been many attempts in computer vision to detect looming, even with dedicated silicon sensors designed to detect looming under pure translation.
Looming detection may be performed in real environments, where the host motion includes both translation and rotation, lighting conditions change constantly, and scenes are complex and contain multiple objects.
The term "relative scale" as used herein refers to the increase (or decrease) in the relative size of an image patch in one image frame and a corresponding image patch in a subsequent image frame.
Referring now to figs. 9 and 10, a system 16 is illustrated that includes a camera or image sensor 12 mounted in a vehicle 18, in accordance with an aspect of the present invention. The image sensor 12, imaging the forward field of view, delivers images in real time, which are captured as a time sequence of image frames 15. The image processor 14 may be used to process the image frames 15 simultaneously and/or in parallel to serve a number of driver assistance systems. The driver assistance systems may be implemented using specific hardware circuitry with on-board software and/or software control algorithms in the memory 13. The image sensor 12 may be monochrome or black and white, i.e. without color separation, or it may be color sensitive. By way of example in fig. 10, the image frames 15 are used to serve a Pedestrian Warning (PW) 20, a Lane Departure Warning (LDW) 21, a Forward Collision Warning (FCW) 22 based on object detection and tracking according to the teachings of US patent 7,113,867, a forward collision warning based on image looming (FCWL) 209, and/or a forward collision warning based on an FCW trap (FCWT) 601. The image processor 14 processes the image frames 15 to detect looming of an image in the forward view of the camera 12 for the image-looming-based forward collision warning 209 and for the FCWT 601. The image-looming-based forward collision warning 209 and the trap-based forward collision warning (FCWT) 601 may be performed in parallel with the conventional FCW 22 and with the other driver assistance functions: pedestrian detection (PW) 20, lane departure warning (LDW) 21, traffic sign detection, and ego-motion detection. The FCWT 601 may be used to verify the regular signal from the FCW 22. The term "FCW signal" as used herein refers to a forward collision warning signal. The terms "FCW signal", "forward collision warning", and "warning" are used interchangeably herein.
Features of the invention are illustrated in figs. 1a and 1b, which show an example of optical flow, or looming. Two images captured from the forward-looking camera 12 mounted inside vehicle 18 are shown as the vehicle 18 approaches a metal guardrail 30. The image in fig. 1a shows the field of view with the guardrail 30. The image in fig. 1b shows the same scene when the vehicle 18 is closer to the metal guardrail 30; if a small rectangle p 32 (indicated with a dashed line) on the guardrail is observed, the horizontal lines 34 in fig. 1b appear to spread apart as the vehicle 18 approaches the guardrail 30.
Referring now to FIG. 2a, a method 201 for providing a forward collision warning 209 (FCWL 209) using the camera 12 mounted in a host vehicle 18 is illustrated, in accordance with features of the present invention. The method 201 does not rely on recognition of objects in the field of view in front of the vehicle 18. In step 203, a plurality of image frames 15 are obtained by the camera 12. The time interval between capture of the image frames is Δt. A patch 32 in the image frame 15 is selected in step 205, and the relative scale (S) of the patch 32 is determined in step 207. In step 209, the time to collision (TTC) is determined based on the relative scale (S) and the time interval (Δt) between frames 15.
Reference is now made to fig. 2b, which shows further details of step 209 of determining the time to collision shown in fig. 2a, according to a feature of the present invention. In step 211, a plurality of image points in the patch 32 may be tracked between image frames 15. In step 213, the image points may be fitted to one or more models. A first model may be a vertical surface model, which may include objects such as pedestrians, vehicles, walls, shrubs, trees, or light poles. A second model may be a road surface model, which considers the behavior of image points on the road surface. A hybrid model may include one or more image points from the road and one or more image points from an upright object. For each model that assumes at least a portion of the image points belongs to an upright object, a time-to-collision (TTC) value may be calculated. In step 215, the best fit of the image points to the road surface model, the vertical surface model, or the hybrid model enables selection of the time-to-collision (TTC) value. A warning may be issued when the time to collision (TTC) is less than a threshold and the best-fit model is the vertical surface model or the hybrid model.
Optionally, step 213 may also include the detection of candidate images in the image frame 15. A candidate image may be of a pedestrian, or of a vertical line of a vertical object such as a lamppost. In the case of a pedestrian or a vertical line, the patch 32 may be selected so as to include the candidate image. Once the patch 32 is selected, verification may be performed that the candidate image is an image of an upright pedestrian and/or of a vertical line. This verification confirms that the candidate image is not of an object in the road surface when the best-fit model is the vertical surface model.
Referring back to figs. 1a and 1b, sub-pixel alignment of the patch 32 from the first image shown in fig. 1a to the second image shown in fig. 1b shows an increase in size of 8%, i.e. an increase in the relative scale S of 8% (S = 1.08) (step 207). Assuming that the time difference Δt between the images is 0.5 seconds, the time to collision (TTC) can be calculated from equation 1 (step 209):
TTC = Δt/(S - 1) = 0.5/(1.08 - 1) = 6.25 s (1)
If the speed v of the vehicle 18 is known (v = 4.8 m/s), the distance Z to the target can also be calculated using equation 2 below:
Z = v × TTC = 4.8 m/s × 6.25 s = 30 m (2)
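As a rough illustration of equations (1) and (2), the following Python sketch (not part of the patent; function names are illustrative) computes the time to collision and range from the relative scale of a patch, repeating the example values used above.

def time_to_collision(relative_scale: float, dt: float) -> float:
    """TTC = dt / (S - 1) for an approaching object (S > 1)."""
    return dt / (relative_scale - 1.0)

def range_from_ttc(ttc: float, speed: float) -> float:
    """Z = v * TTC, assuming the target is stationary."""
    return speed * ttc

if __name__ == "__main__":
    S, dt, v = 1.08, 0.5, 4.8            # example values from the text
    ttc = time_to_collision(S, dt)       # 0.5 / 0.08 = 6.25 s
    print(f"TTC = {ttc:.2f} s, Z = {range_from_ttc(ttc, v):.1f} m")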
In accordance with a feature of the invention, figs. 3b and 3d show the vertical motion δy of a point as a function of the vertical image position (y). The vertical motion δy is zero at the horizon and negative below the horizon. The vertical motion δy of a point is given by equation 3 below.
δy = (y - y0)·ΔZ/Z (3)
where y0 is the image row of the horizon, Z is the distance to the imaged point, and ΔZ is the change in distance between the two frames.
Equation (3) is a linear model for y and δ y and actually has two variables. Two points may be used to solve for these two variables.
For a vertical surface, because all the points are at the same distance, as in the image shown in fig. 3b, the motion is zero at the horizon (y0) and varies linearly with image position. For a road surface, the lower a point is in the image, the closer it is (the smaller Z is), as shown in equation 4 below:
Z = f·H/(y - y0) (4)
where f is the focal length of the camera and H is the height of the camera above the road.
Therefore the image motion δy does not increase at a merely linear rate, but as shown in equation 5 below and in the graph of fig. 3d:
δy = (y - y0)²·ΔZ/(f·H) (5)
Equation (5) is a constrained quadratic equation with two variables in practice.
Likewise, two points may be used to solve for both variables.
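The following sketch (an illustration only; the function names are assumptions, not the patent's code) shows how the two-point fit of the linear model of equation (3), and the two-plus-one-point hybrid fit used later in the robust fitting procedure, can be solved in closed form under the pinhole relations of equations (3) through (5).

def fit_vertical(p1, p2, dt):
    """Each point is (y, dy). dy = (y - y0) * dt / TTC, linear in y."""
    (y1, d1), (y2, d2) = p1, p2
    slope = (d2 - d1) / (y2 - y1)          # equals dt / TTC
    y0 = y1 - d1 / slope                   # horizon row: dy vanishes there
    return y0, dt / slope                  # (y0, TTC)

def fit_hybrid(p_upper1, p_upper2, p_road, dt):
    """Upper two points on an upright object, lowest point on the road:
    dy_road = c * (y - y0)**2, with y0 taken from the vertical fit."""
    y0, ttc = fit_vertical(p_upper1, p_upper2, dt)
    y3, d3 = p_road
    c = d3 / (y3 - y0) ** 2                # c plays the role of dZ / (f * H)
    return y0, ttc, c

def predicted_motion(y, y0, ttc, c, dt, upright):
    """Predicted vertical image motion at row y under either model."""
    return (y - y0) * dt / ttc if upright else c * (y - y0) ** 2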
Reference is now made to figs. 3a and 3c, which show different image frames 15. In figs. 3a and 3c, two rectangular regions are shown in dashed lines. Fig. 3a shows an upright surface (the back of a van). The square points are the tracked points (step 211), and their motion matches the motion model of an upright surface shown in the plot of image motion (δy) versus point height y in fig. 3b (step 213). The motion of the triangular points in fig. 3a does not match the motion model of the upright surface. Reference is now made to fig. 3c, which shows a rectangular region that is mostly road surface. The square points are points that match the road surface model shown in the plot of image motion (δy) versus point height y in fig. 3d. The motion of the triangular point does not match the motion model of the road surface and is an outlier. In general, the task is therefore to determine which points belong to which model and which points are outliers, which can be done by a robust fitting method as explained below.
Reference is now made to figs. 4a, 4b, 4c and 4d, which show a typical situation in which a mixture of two motion models is present in an image, according to a feature of the present invention. Fig. 4a shows an image frame 15 including an image of a metal guardrail 30 with horizontal lines 34, and a rectangular patch 32a. Further details of the patch 32a are shown in figs. 4b and 4c: fig. 4b shows detail of the patch 32a in an earlier image frame 15, and fig. 4c shows detail of the patch 32a in a later image frame 15, when the vehicle 18 is closer to the guardrail 30. In figs. 4b and 4c some image points, shown as squares, triangles and circles, are on the upright obstacle 30, and some are on the road surface in front of the obstacle 30. The tracked points within the rectangular region 32a show that some points, in the lower part of the region 32a, correspond to the road model, and some points, in the upper part of the region 32a, correspond to the upright surface model. Fig. 4d shows a graph of the vertical motion of the points (δy) versus the vertical point position (y). In fig. 4d the recovered model has two parts: a curved (parabolic) portion 38a and a linear portion 38b. The transition point between portions 38a and 38b corresponds to the bottom of the upright surface 30; this transition point is also marked by the dashed horizontal line 36 in fig. 4c. In figs. 4b and 4c, some points, shown as triangles, are tracked but do not match the model; tracked points that match a model are shown as squares; and points that were not tracked well are shown as circles.
Referring now to fig. 5, another example of looming in an image frame 15 is shown. In the image frame 15 of fig. 5 there is no upright surface in the patch 32b, only clear road ahead, and the transition point between the two models is marked with a dashed line 50 at the horizon.
Motion model and Time To Collision (TTC) estimation
The motion model and estimation of Time To Collision (TTC) (step 215) assume that a region 32, such as a rectangular region in the image frame 15, is provided. Examples of rectangular regions are rectangles 32a and 32b, such as shown in fig. 3 and 5. These rectangles may be selected based on detected objects such as pedestrians or based on movement of the host vehicle 18.
1. Tracking points (step 211):
(a) The rectangular region 32 may be subdivided into a 5×20 grid of sub-rectangles.
(b) An algorithm may be run on each sub-rectangle to find image corner points, for example using the Harris and Stephens method, and the points may be tracked. Preferably, using 5×5 patches around candidate points, the eigenvalues of the matrix

A = Σ | Ix·Ix  Ix·Iy |
      | Ix·Iy  Iy·Iy |    (6)

may be considered, where Ix and Iy are the horizontal and vertical image derivatives and the sum is taken over the 5×5 patch, and points with two strong eigenvalues are selected.
(c) Tracking may be performed by exhaustively searching for the best sum-of-squared-differences (SSD) match in a rectangular search region of width W and height H. This exhaustive search is important at the start because it does not assume a prior motion, which makes the measurements from the different sub-rectangles statistically more independent. The search is followed by fine-tuning using optical flow estimation, for example with the Lucas-Kanade method, which provides sub-pixel motion.
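A minimal sketch of the tracking in step 1, using OpenCV as a stand-in (the patent describes its own Harris corner selection and exhaustive SSD search; goodFeaturesToTrack and the pyramidal Lucas-Kanade call below only approximate that procedure):

import cv2
import numpy as np

def track_patch_points(prev_gray, cur_gray, rect, grid=(5, 20), search=(11, 11)):
    x, y, w, h = rect
    points = []
    # (a) subdivide the rectangle into a 5x20 grid and
    # (b) pick one Harris-like corner per cell
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            cx0 = x + gx * w // grid[1]
            cy0 = y + gy * h // grid[0]
            cell = prev_gray[cy0:cy0 + h // grid[0], cx0:cx0 + w // grid[1]]
            if cell.size == 0:
                continue
            p = cv2.goodFeaturesToTrack(cell, maxCorners=1, qualityLevel=0.01,
                                        minDistance=3, blockSize=5,
                                        useHarrisDetector=True)
            if p is not None:
                points.append(p[0, 0] + (cx0, cy0))
    if not points:
        return np.empty((0, 2)), np.empty((0, 2))
    p0 = np.float32(points).reshape(-1, 1, 2)
    # (c) the coarse SSD search is approximated here by a pyramidal
    # Lucas-Kanade pass; a literal exhaustive SSD search could replace it
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None,
                                             winSize=search, maxLevel=3)
    ok = status.ravel() == 1
    return p0.reshape(-1, 2)[ok], p1.reshape(-1, 2)[ok]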
2. Robust model fitting (step 213):
(a) two or three points are randomly chosen from the 100 tracked points.
(b) The number of pairs (Npairs) selected depends on the vehicle speed v and is given, for example, by:
Npairs = min(40, max(5, 50 - v)) (7)
where v is in meters per second. The number of triplets (Ntriplets) is given by:
Ntriplets = 50 - Npairs (8)
(c) For two points, they can be fitted to two models (step 213). One model assumes that the two points are on an upright object. The second model assumes that both points are on the road.
(d) For three points, they can also be fitted to two models. One model assumes that the upper two points are on an upright object and the third (lowest) point is on the road. The second model assumes that only the top point is on the upright object and the lower two points are on the road.
In both cases the three-point model can be solved by solving the first model (equation 3) with two of the points and then using the resulting y0, together with the third point, to solve the second model (equation 5).
(e) Each of the models in (c) and (d) gives a time-to-collision (TTC) value (step 215). Each model also receives a score based on how well the other 98 tracked points fit the model. The score is derived from the sum of clipped square distances (SCSD) between the measured y-motion of each point and the motion predicted by the model, and the SCSD value is converted to a probability-like score (equation 9), where N is the number of points (N = 98).
(f) From the TTC value and the speed of the vehicle 18, and assuming that the points are on a stationary object, the distance to the points may be calculated as Z = v × TTC. From the x image coordinate of each image point and its distance Z, the lateral position in world coordinates can be calculated as
X = x·Z/f (10)
and extrapolated to the time of collision (equation 11).
(g) The lateral position at time TTC is thus obtained. A binary lateral score requires that at least one of the points from the pair or triplet be in the path of the vehicle 18. A sketch of this sampling-and-scoring procedure is given below.
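A minimal sketch of the sampling and scoring in step 2, under stated assumptions: only pair sampling and the upright-surface model are shown, and because the exact score formula of equation (9) is not legible in the source, the exp(-SCSD/N) mapping used below is an assumption rather than the patented formula.

import numpy as np

def sample_counts(speed_mps):
    """Equations (7) and (8): how many point pairs and triplets to sample."""
    n_pairs = min(40, max(5, 50 - int(speed_mps)))
    return n_pairs, 50 - n_pairs

def fit_vertical_pair(p1, p2, dt):
    """Two-point fit of the upright-surface model: dy = (y - y0) * dt / TTC."""
    (y1, d1), (y2, d2) = p1, p2
    slope = (d2 - d1) / (y2 - y1)        # dt / TTC
    return y1 - d1 / slope, dt / slope   # (y0, TTC)

def model_score(points, y0, ttc, dt, clip=2.0):
    """Clipped squared distances between measured and predicted dy, mapped to
    a probability-like value (the exp(-SCSD/N) form is an assumption)."""
    y, dy = points[:, 0], points[:, 1]
    d2 = np.minimum((dy - (y - y0) * dt / ttc) ** 2, clip ** 2)
    return float(np.exp(-d2.sum() / len(points)))

def robust_fit(points, dt, speed, rng=np.random.default_rng(0)):
    """points: N x 2 array of (y, dy). Sample random pairs, fit the upright
    model, score each against all points, and keep the best-scoring model.
    Triplet sampling and the road / hybrid models would be added analogously."""
    n_pairs, _ = sample_counts(speed)
    best = None
    for _ in range(n_pairs):
        i, j = rng.choice(len(points), size=2, replace=False)
        if points[i, 0] == points[j, 0] or points[i, 1] == points[j, 1]:
            continue                     # degenerate pair, skip
        y0, ttc = fit_vertical_pair(points[i], points[j], dt)
        score = model_score(points, y0, ttc, dt)
        if best is None or score > best[0]:
            best = (score, ttc, y0)
    return best                          # (score, TTC, horizon row) or None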
3. Score over multiple frames: new models may be generated at each frame 15, each with its associated TTC and score. The 200 best (highest-scoring) models from the previous 4 frames 15 may be retained, with the scores aged as follows:
score(n) = α^n · score (12)
where n = 0, ..., 3 is the age of the score and α = 0.95.
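A minimal sketch of the multi-frame bookkeeping in step 3 (the dictionary keys and helper name are illustrative, not from the patent):

ALPHA = 0.95              # per-frame score decay, per equation (12)
MAX_MODELS, MAX_AGE = 200, 3

def update_model_pool(pool, new_models):
    """pool / new_models: lists of dicts with keys 'ttc', 'score', 'age'.
    Models older than 3 frames are dropped, scores decay by ALPHA per frame,
    and only the 200 best-scoring models are kept."""
    aged = [dict(m, age=m["age"] + 1, score=m["score"] * ALPHA)
            for m in pool if m["age"] + 1 <= MAX_AGE]
    merged = aged + [dict(m, age=0) for m in new_models]
    merged.sort(key=lambda m: m["score"], reverse=True)
    return merged[:MAX_MODELS]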
FCW judgment: a true FCW warning is issued if any of the following three conditions occur:
(a) the TTC of the model with the highest score is below the TTC threshold, the score is greater than 0.75, and the additional condition of equation (13) holds;
(b) the TTC of the model with the highest score is below the TTC threshold and the condition of equation (14) holds;
(c) the condition of equation (15) holds.
Figs. 3 and 4 have shown how to robustly provide FCW warnings for points within a given rectangle 32. How the rectangle is defined depends on the application, as illustrated by the further exemplary features of figs. 7a-7d and 8a-8b.
FCW traps for generic stationary objects
Referring now to FIG. 6, a method 601 for providing a forward collision warning trap (FCWT) 601 in accordance with features of the present invention is shown. In step 203, a plurality of image frames 15 are obtained by the camera 12. In step 605, a patch 32 in the image frame 15 is selected that corresponds to the location where the vehicle 18 will be after a predetermined time interval. The patch 32 is then monitored in step 607. In decision step 609, if a generic object is imaged and detected in the patch 32, a forward collision warning is issued in step 611; otherwise the capture of image frames continues in step 203.
Figures 7a and 7b illustrate examples of FCWT 601 warnings triggered for a wall 70, according to exemplary features of the invention; fig. 7d shows an example of a warning triggered for the side of a car 72; and fig. 7c shows an example of a warning triggered for boxes 74a and 74b. Figs. 7a-7d are examples of generic stationary objects that require no prior class-based detection. The dashed rectangular region is defined as a target W = 1 m wide at the distance the host vehicle will reach after t = 4 s:
Z = v·t (16)
w = f·W/Z (17)
y = f·H/Z (18)
where v is the speed of the vehicle 18, H is the height of the camera 12, f is the focal length of the camera, and w and y are the width of the rectangle and its vertical position in the image, respectively. This rectangular region is an example of an FCW trap: if an object "falls" into the rectangular region, the FCW trap may generate a warning when the TTC is less than a threshold.
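A minimal sketch of the trap-rectangle geometry of equations (16) through (18), under the pinhole assumptions stated above; the focal length, the horizon-row parameter, and the example numbers are illustrative only, and the image row is measured here relative to an assumed horizon row.

def fcw_trap_rect(speed, t, cam_height, f, horizon_y, width_m=1.0):
    z = speed * t                           # (16): range reached after t seconds
    w_px = f * width_m / z                  # (17): image width of a W-wide target at range z
    y_px = horizon_y + f * cam_height / z   # (18): image row of the road at range z
    return z, w_px, y_px

if __name__ == "__main__":
    # e.g. 13.9 m/s (50 km/h), t = 4 s, camera 1.25 m high, f = 950 px (assumed values)
    print(fcw_trap_rect(13.9, 4.0, 1.25, 950.0, horizon_y=240.0))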
Performance is improved using multiple traps:
to improve the detection rate, FCW traps can be replicated into 5 regions with 50% overlap to create a total trap region of 3m width.
The dynamic position of the FCW trap can be chosen according to the yaw rate (yaw rate): the trap area 32 may be laterally translated based on the path of the vehicle 18 determined from the yaw-rate sensor, the speed of the vehicle 18, and the dynamic model of the host vehicle 18.
FCW trap for verifying frontal collision warning signals
Objects of specific classes, such as vehicles and pedestrians, may be detected in the image 15 using pattern recognition techniques. These objects are then tracked over time, and the FCW 22 signal can be generated from the change in scale according to the teachings of US patent 7,113,867. However, it is important to validate the FCW 22 signal with an independent technique before issuing the warning. Using an independent technique, such as method 209 (fig. 2b), to verify the FCW 22 signal may be particularly important if the system 16 is to activate the brakes. In a fused radar/vision system, the independent verification may come from the radar; in the vision-only system 16, it comes from an independent vision algorithm.
Detection of objects (e.g., pedestrians, leading vehicles) is not the problem: a very high detection rate can be achieved with a very low false rate. One feature of the present invention is to generate a reliable FCW signal without too many false alarms, which would irritate the driver or, worse, cause the driver to brake unnecessarily. A particular problem for a pedestrian FCW system is avoiding false forward collision warnings, because the number of pedestrians in the scene is large while the number of true forward collision situations is very small. Even a 5% false-alarm rate would mean that the driver receives frequent false alarms and might never experience a true warning.
Pedestrian targets are particularly challenging for an FCW system because the target is non-rigid, which makes tracking (according to the teachings of US patent 7,113,867) difficult, and the scale change in particular is very noisy. The robust model (method 209) can therefore be used to verify a forward collision warning for a pedestrian. The rectangular region 32 may be determined by the pedestrian detection system 20. The FCW signal may be generated only if target tracking is performed by the FCW 22 according to US patent 7,113,867 and the robust FCW (method 209) gives a TTC that is less than one or more thresholds, which may or may not be predetermined. The forward collision warning FCW 22 may use a threshold different from the threshold used in the robust model (method 209).
One factor that may increase the number of false warnings is that pedestrians are often present on less structured roads, where the driver's driving pattern may be quite erratic, including sharp turns and lane changes. It may therefore be necessary to add some further constraints on issuing the warning:
when a curb or lane marker is detected, the FCW signal is blocked if the pedestrian is on the far side of the curb or/and lane and none of the following conditions occur:
1. the pedestrian is crossing a lane marker or curb (or approaching very quickly). For this, it may be important to detect the feet of the pedestrian.
2. The host vehicle 18 is not crossing a lane marker or curb (e.g., as detected by the LDW 21 system).
The driver's intention is more difficult to predict. If the driver is driving straight, no turn signal is activated and no turn toward a lane marking is expected, it is reasonable to assume that the driver will continue straight ahead. Thus, if a pedestrian is in the path and the TTC is below the threshold, the FCW signal may be issued. If the driver is turning, however, it is equally possible that he or she will continue the turn or stop turning and continue straight. Thus, when a yaw rate is detected, the FCW signal is issued only if the pedestrian is in the path that the vehicle 18 would follow if it continued to turn at the same yaw rate, and if the pedestrian is in the path that the vehicle would follow if it were traveling straight.
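A minimal sketch of the two path tests described above, under a circular-arc approximation for constant yaw rate; the corridor half-width is an assumed value, and how the two flags are combined into a warning decision is left to the caller, since the description and the claims combine them slightly differently.

import math

def lateral_offset_const_yaw(range_m, speed, yaw_rate):
    """Circular-arc approximation of the host path: after driving range_m along
    an arc of radius R = v / yaw_rate, the lateral offset is R * (1 - cos(s/R))."""
    if abs(yaw_rate) < 1e-6:
        return 0.0
    radius = speed / yaw_rate
    return radius * (1.0 - math.cos(range_m / radius))

def pedestrian_path_checks(ped_lateral_m, ped_range_m, speed, yaw_rate,
                           half_width=1.0):
    """Returns (in_straight_path, in_constant_yaw_path): whether the pedestrian
    lies within a corridor of +/- half_width meters around the predicted host
    path under each assumption."""
    in_straight = abs(ped_lateral_m) <= half_width
    offset = lateral_offset_const_yaw(ped_range_m, speed, yaw_rate)
    in_turning = abs(ped_lateral_m - offset) <= half_width
    return in_straight, in_turning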
The concept of the FCW trap 601 can be extended to objects that consist mostly of vertical (or horizontal) lines. A possible problem with using point-based techniques on such objects is that good Harris (corner) points are typically generated where vertical lines on the edges of the object intersect horizontal lines of the distant background; the vertical motion of such points will resemble that of the distant road surface.
Figs. 8a and 8b show examples of objects with distinct vertical lines 82, on the lamppost 80 in fig. 8b and on the box 84 in fig. 8a. Vertical lines 82 are detected in the trap region 32. The detected lines 82 may be tracked between images. Robust estimation can be performed by pairing the lines 82 from frame to frame, computing a TTC model for each line pair under the assumption of an upright object, and then scoring the model using the SCSD of the other lines 82. Since the number of lines may be small, it is often feasible to test all possible combinations of line pairs; only line pairs with significant overlap are used. With horizontal lines, triplets of lines also give two models, as when points are used.
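A minimal sketch of vertical-line detection and pairing for the extended trap, using a Canny/Hough stand-in rather than the patented line detector; the thresholds and the overlap criterion are illustrative assumptions.

import cv2
import numpy as np
from itertools import combinations

def vertical_segments(gray, rect, min_len=20, max_tilt_deg=10):
    """Detect near-vertical line segments inside the trap rectangle."""
    x, y, w, h = rect
    roi = gray[y:y + h, x:x + w]
    edges = cv2.Canny(roi, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                           minLineLength=min_len, maxLineGap=5)
    out = []
    if segs is not None:
        for x1, y1, x2, y2 in segs[:, 0]:
            angle = abs(np.degrees(np.arctan2(x2 - x1, y2 - y1)))  # 0 = vertical
            if angle < max_tilt_deg or angle > 180 - max_tilt_deg:
                out.append((x1 + x, y1 + y, x2 + x, y2 + y))
    return out

def overlapping_pairs(segs, min_overlap=0.5):
    """All pairs of segments whose vertical extents overlap significantly;
    each pair would then be fitted to the upright-object TTC model."""
    pairs = []
    for a, b in combinations(segs, 2):
        top = max(min(a[1], a[3]), min(b[1], b[3]))
        bot = min(max(a[1], a[3]), max(b[1], b[3]))
        shorter = min(abs(a[3] - a[1]), abs(b[3] - b[1]))
        if shorter > 0 and (bot - top) / shorter >= min_overlap:
            pairs.append((a, b))
    return pairs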
The indefinite articles "a" and "an", as used herein, mean "one or more"; for example, "an image" or "a rectangular region" means "one or more images" or "one or more rectangular regions".
While selected features of the invention have been illustrated and described, it is to be understood that the invention is not limited to the features described. Rather, it is to be appreciated that changes may be made in these features without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.

Claims (34)

1. A method for determining the risk of collision between a host vehicle and a pedestrian, the method using a camera mountable in the host vehicle and a processor, wherein the processor is connectable to the camera and configured to capture a plurality of image frames of the environment of the host vehicle within a field of view of the camera, the method comprising:
detecting an image blob in at least one of the plurality of image frames, wherein the image blob comprises a candidate image of a pedestrian within a field of view of the camera;
detecting an image of a curb or an image of a lane marker in the plurality of image frames;
providing a collision warning or brake activation signal when the pedestrian is on the far side of a curb or lane marker and is crossing the curb or lane marker.
2. The method of claim 1, further comprising:
tracking optical flow of a plurality of image points of the image blob between image frames to produce a tracked optical flow;
fitting the tracked optical flow of at least a portion of the plurality of image points against a plurality of models to produce a plurality of fits to the plurality of models, wherein the plurality of models are selected from the group consisting of: (i) a road surface model, wherein a portion of the plurality of image points are modeled as imaged from the road surface, (ii) a vertical surface model, wherein a portion of the plurality of image points are modeled as imaged from a vertical object, and (iii) a hybrid model, wherein a first portion of the plurality of image points are modeled as imaged from the road surface and a second portion of the plurality of image points are modeled as imaged from a vertical object;
scoring a fit of the tracked optical flow to respective models using at least a portion of the plurality of image points to produce respective scores;
determining whether a collision is expected and determining a collision time by selecting a model having a score corresponding to a best fit of the plurality of image points to the tracked optical flow;
upon detecting that the image blob comprises the candidate image of the pedestrian within the field of view of the camera, verifying the candidate image by confirming that the best-fit model is the vertical surface model.
3. The method of claim 1, further comprising:
preventing a collision warning or brake activation signal when the host vehicle is not crossing a lane marker or curb.
4. The method of claim 1, further comprising:
detecting an image of a foot of the pedestrian in the plurality of image frames.
5. The method of claim 1, further comprising:
wherein the pedestrian is in the straight-ahead path of the host vehicle, no turn signal of the host vehicle is activated and no turn toward a lane marking is expected, and a collision warning or brake activation signal is provided when the time to collision is calculated to be less than a threshold.
6. The method of claim 1, further comprising:
detecting a yaw rate of the host vehicle;
providing a collision warning signal or a brake activation signal when the pedestrian is in the path that the host vehicle would follow if it continued at the same yaw rate.
7. The method of claim 1, further comprising:
detecting a yaw rate of the host vehicle;
providing a collision warning signal or a brake activation signal when the pedestrian is in the path that the host vehicle would follow if it continued straight ahead.
8. The method of claim 1, further comprising:
preventing a collision warning or brake activation signal when the pedestrian is on the far side of a curb or lane marker and the pedestrian is not crossing the curb or lane marker.
9. A system operable to determine a risk of collision between a host vehicle and a pedestrian, the system comprising: a camera mountable in the host vehicle; and a processor mountable in the host vehicle, wherein the processor is connectable to the camera, the processor configured to:
capturing from the camera image frames of the environment of the host vehicle within a field of view of the camera;
detecting an image blob in at least one of the image frames, wherein the image blob comprises a candidate image of the pedestrian within a field of view of the camera;
detecting an image of a curb or an image of a lane marker in the image frame;
providing a collision warning signal or a brake activation signal when the pedestrian is on the far side of a curb or lane marker and is crossing the curb or lane marker.
10. The system of claim 9, wherein the processor is further configured to:
tracking optical flow of a plurality of image points of the image blob between image frames to produce a tracked optical flow;
fitting the tracked optical flow of at least a portion of the plurality of image points against a plurality of models to produce a plurality of fits to the plurality of models, wherein the plurality of models are selected from the group consisting of: (i) a road surface model, wherein a portion of the plurality of image points are modeled as imaged from the road surface, (ii) a vertical surface model, wherein a portion of the plurality of image points are modeled as imaged from a vertical object, and (iii) a hybrid model, wherein a first portion of the plurality of image points are modeled as imaged from the road surface and a second portion of the plurality of image points are modeled as imaged from a vertical object;
scoring a fit of the tracked optical flow to respective models using at least a portion of the plurality of image points to produce respective scores;
determining whether a collision is expected and determining a collision time by selecting a model having a score corresponding to a best fit of the plurality of image points to the tracked optical flow; and
upon detecting that the image blob comprises the candidate image of the pedestrian within the field of view of the camera, verifying the candidate image by confirming that the best-fit model is the vertical surface model.
11. The system of claim 9, wherein the processor is further configured to:
preventing a collision warning or brake activation signal when the host vehicle is not crossing a lane marker or curb.
12. The system of claim 9, wherein the processor is further configured to:
detecting an image of the pedestrian's foot in the image frame.
13. The system of claim 9, wherein the processor is further configured to:
wherein the pedestrian is in the straight-ahead path of the host vehicle, no turn signal of the host vehicle is activated and no turn toward a lane marking is expected, and a collision warning or brake activation signal is provided when the time to collision is calculated to be less than a threshold.
14. The system of claim 9, wherein the processor is further configured to:
detecting a yaw rate of the host vehicle;
providing a collision warning signal or a brake activation signal when the pedestrian is in the path that the host vehicle would follow if it continued at the same yaw rate.
15. The system of claim 9, wherein the processor is further configured to:
detecting a yaw rate of the host vehicle;
providing a collision warning signal or a brake activation signal when the pedestrian is in the path that the host vehicle would follow if it continued straight ahead.
16. The system of claim 9, wherein the processor is further configured to:
preventing a collision warning or brake activation signal when the pedestrian is on the far side of a curb or lane marker and the pedestrian is not crossing the curb or lane marker.
17. A method for determining the risk of a collision between a host vehicle and a pedestrian, the method using a camera mountable in the host vehicle and a processor, wherein the processor is connectable to the camera and configured to capture from the camera a plurality of image frames of the environment of the host vehicle within a field of view of the camera, the method comprising:
detecting an image blob in at least one of the plurality of image frames, wherein the image blob comprises a candidate image of a pedestrian within a field of view of the camera;
determining that a Time To Collision (TTC) to the pedestrian is less than a threshold;
detecting a yaw rate of the host vehicle;
providing a collision warning signal or a brake activation signal when the pedestrian is in the path that the host vehicle would follow if it continued at the same yaw rate;
providing a collision warning signal or a brake activation signal when the pedestrian is in the path that the host vehicle would follow if it continued straight ahead.
18. The method of claim 17, further comprising:
tracking optical flow of a plurality of image points of the image blob between image frames to produce a tracked optical flow;
fitting the tracked optical flow of at least a portion of the plurality of image points against a plurality of models to produce a plurality of fits to the plurality of models, wherein the plurality of models are selected from the group consisting of: (i) a road surface model, wherein a portion of the plurality of image points are modeled as imaged from the road surface, (ii) a vertical surface model, wherein a portion of the plurality of image points are modeled as imaged from a vertical object, and (iii) a hybrid model, wherein a first portion of the plurality of image points are modeled as imaged from the road surface and a second portion of the plurality of image points are modeled as imaged from a vertical object;
scoring a fit of the tracked optical flow to respective models using at least a portion of the plurality of image points to produce respective scores;
determining whether a collision is expected and determining a collision time by selecting a model having a score corresponding to a best fit of the plurality of image points to the tracked optical flow; and
verifying that the image patch comprises the candidate image of the pedestrian within the field of view of the camera by confirming that the best-fit model is the vertical surface model.
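Claim 18 (like claim 10) selects among a road-surface model, a vertical-surface model, and a mixed model by scoring how well each explains the tracked optical flow, and then verifies the pedestrian detection when the vertical-surface model wins. The sketch below assumes, for illustration only, that vertical-surface flow grows linearly with distance below the horizon while road-surface flow grows quadratically, and that the score is an inlier fraction; the patent text does not fix these particular forms.

```python
import numpy as np

def fit_linear(y, flow):
    """Least-squares fit of flow = a * y (vertical-surface model: zero at the horizon,
    linear below it). Returns the residuals."""
    a = np.dot(y, flow) / np.dot(y, y)
    return flow - a * y

def fit_quadratic(y, flow):
    """Least-squares fit of flow = b * y**2 (road-surface model, under the simplifying
    assumption that planar-road flow grows quadratically below the horizon)."""
    b = np.dot(y**2, flow) / np.dot(y**2, y**2)
    return flow - b * y**2

def inlier_fraction(residuals, tol_px=0.5):
    """Robust score: fraction of tracked points whose residual is under tol_px pixels."""
    return float(np.mean(np.abs(residuals) < tol_px))

def select_model(y, flow):
    """y: distances (px) of tracked points below the horizon, sorted top to bottom;
    flow: their vertical image motion between two frames. Returns the best-scoring model."""
    y = np.asarray(y, float)
    flow = np.asarray(flow, float)
    split = len(y) // 2          # assumes enough points fall in each half
    scores = {
        "vertical": inlier_fraction(fit_linear(y, flow)),
        "road": inlier_fraction(fit_quadratic(y, flow)),
        # mixed model: upper part treated as an upright object, lower part as road
        "mixed": inlier_fraction(np.concatenate(
            [fit_linear(y[:split], flow[:split]),
             fit_quadratic(y[split:], flow[split:])])),
    }
    return max(scores, key=scores.get)

def verify_pedestrian(y, flow):
    """Verification step of claim 18: accept the candidate image only when the
    best-fit model is the vertical-surface model."""
    return select_model(y, flow) == "vertical"
```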
19. The method of claim 17, further comprising:
providing a collision warning or brake activation signal when the pedestrian is beyond a curb or lane marker and is crossing the curb or lane marker.
20. The method of claim 17, further comprising:
preventing a collision warning or brake activation signal when the pedestrian is beyond a curb or lane marker and the pedestrian is not crossing the curb or lane marker.
21. The method of claim 17, further comprising:
preventing a collision warning or brake activation signal when the pedestrian is neither in the path along which the host vehicle is assumed to continue at the same yaw rate nor in the path assumed when the host vehicle travels straight ahead.
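Claims 19 through 21 refine the gating around curbs and lane markers: a pedestrian beyond the curb triggers a warning only while crossing it, and no warning is issued when the pedestrian is outside both predicted paths. A small illustrative sketch of that logic follows; the flag names are assumptions.

```python
def gate_warning(in_yaw_rate_path, in_straight_path,
                 beyond_curb_or_marker, crossing_curb_or_marker):
    """Illustrative reading of claims 19-21: suppress the warning when the pedestrian
    is outside both predicted paths, or is beyond a curb / lane marker without
    crossing it; otherwise allow it."""
    if not (in_yaw_rate_path or in_straight_path):
        return False          # claim 21: in neither predicted path
    if beyond_curb_or_marker and not crossing_curb_or_marker:
        return False          # claim 20: beyond the curb and not crossing it
    return True               # claim 19: in path, or beyond the curb but crossing it
```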
22. A system operable to determine a risk of collision between a host vehicle and a pedestrian, the system comprising: a camera mountable in the host vehicle; and a processor mountable in the host vehicle, wherein the processor is connectable to the camera and the processor is configured to capture from the camera a plurality of image frames of the environment of the host vehicle within a field of view of the camera, and the processor is further configured to:
detecting an image patch in at least one of the plurality of image frames, wherein the image patch comprises a candidate image of a pedestrian within the field of view of the camera;
determining that a Time To Collision (TTC) to the pedestrian is less than a threshold;
detecting a yaw rate of the host vehicle;
providing a collision warning signal or a brake activation signal when the pedestrian is in the path along which the host vehicle is assumed to continue at the same yaw rate; and
providing a collision warning signal or a brake activation signal when the pedestrian is in the path assumed when the host vehicle travels straight ahead.
23. The system of claim 22, wherein the processor is further configured to:
tracking the optical flow of a plurality of image points of the image patch between image frames to produce a tracked optical flow;
fitting the tracked optical flow of at least a portion of the plurality of image points against a plurality of models to produce a plurality of fits to the plurality of models, wherein the plurality of models are selected from the group consisting of: (i) a road surface model, wherein a portion of the plurality of image points are modeled as imaged from the road surface, (ii) a vertical surface model, wherein a portion of the plurality of image points are modeled as imaged from a vertical object, and (iii) a hybrid model, wherein a first portion of the plurality of image points are modeled as imaged from the road surface and a second portion of the plurality of image points are modeled as imaged from a vertical object;
scoring a fit of the tracked optical flow to respective models using at least a portion of the plurality of image points to produce respective scores;
determining whether a collision is expected and determining a collision time by selecting a model having a score corresponding to a best fit of the plurality of image points to the tracked optical flow;
verifying that the image patch comprises the candidate image of the pedestrian within the field of view of the camera by confirming that the best-fit model is the vertical surface model.
24. The system of claim 22, wherein the processor is further configured to:
providing a collision warning or brake activation signal when the pedestrian is beyond a curb or lane marker and is crossing the curb or lane marker.
25. The system of claim 22, wherein the processor is further configured to:
preventing a collision warning or brake activation signal when the pedestrian is beyond a curb or lane marker and the pedestrian is not crossing the curb or lane marker.
26. The system of claim 22, wherein the processor is further configured to:
preventing a collision warning when the pedestrian is neither in the path along which the host vehicle is assumed to continue at the same yaw rate nor in the path assumed when the host vehicle travels straight ahead.
27. A method for determining a risk of collision between a host vehicle and an upright object, the method using a camera mountable in the host vehicle and a processor, wherein the processor is connectable to the camera and configured to capture from the camera a plurality of image frames of an environment of the host vehicle within a field of view of the camera, the method comprising:
selecting a patch in an image frame, the patch corresponding to the location where the host vehicle will be after a predetermined time interval;
monitoring the patch and, when a plurality of vertical lines are detected in the patch, tracking pairs of corresponding vertical lines between the plurality of image frames;
calculating a time to collision in response to the optical flow of the corresponding vertical lines of at least one of the plurality of pairs, assuming a vertical surface model of the optical flow of upright objects;
scoring the corresponding vertical lines of the other pairs detected in the patch with respect to their fit to the vertical surface model; and
determining that a collision is expected based on the fit of the optical flows of the other pairs of corresponding vertical lines to the vertical surface model of the optical flow of upright objects.
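Claim 27 derives the time to collision from the relative scale change of at least one pair of corresponding vertical lines under the vertical-surface assumption, then checks whether the other tracked pairs are consistent with that model. A minimal sketch follows; the focus-of-expansion formulation and the pixel tolerance are illustrative assumptions, not claimed values.

```python
def ttc_from_line_pair(x_left_prev, x_right_prev, x_left_cur, x_right_cur, dt):
    """TTC (s) from the change in spacing of two tracked vertical lines dt seconds apart.

    Under the vertical-surface assumption the image of an upright object scales
    uniformly, so with spacing scale s = d_cur / d_prev, TTC = dt / (s - 1).
    """
    d_prev = abs(x_right_prev - x_left_prev)
    d_cur = abs(x_right_cur - x_left_cur)
    if d_cur <= d_prev:                 # not closing on the object
        return float("inf")
    return dt / (d_cur / d_prev - 1.0)

def line_fits_model(x_prev, x_cur, x_foe, ttc, dt, tol_px=1.0):
    """Score helper for the other pairs: under the same model a vertical line at
    x_prev should move away from the focus of expansion x_foe by
    (x_prev - x_foe) * dt / ttc pixels between the two frames."""
    predicted = x_prev + (x_prev - x_foe) * dt / ttc
    return abs(predicted - x_cur) < tol_px
```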
28. The method of claim 27, further comprising:
determining a vehicle path from a yaw rate sensor;
selecting the patch so that it translates laterally based on the vehicle path, the speed of the host vehicle, and a dynamic model of the host vehicle.
29. The method of claim 27, wherein, in the vertical surface model, the optical flow is zero at the image of the horizon line and varies linearly with vertical image position.
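Claim 29 pins down the vertical-surface flow model: zero image motion at the horizon line and linear growth with vertical image position. Under the common looming relation the slope of that line is dt / TTC, so fitting the slope recovers the TTC; the helper below is a sketch under that assumption, with illustrative names.

```python
import numpy as np

def vertical_surface_flow(y_below_horizon, ttc, dt):
    """Predicted vertical image motion of an upright object: zero at the horizon and
    linear in the distance below it, with slope dt / ttc."""
    return np.asarray(y_below_horizon, float) * dt / ttc

def ttc_from_flow_slope(y_below_horizon, measured_flow, dt):
    """Invert the model: a least-squares slope a of flow = a * y gives TTC = dt / a."""
    y = np.asarray(y_below_horizon, float)
    f = np.asarray(measured_flow, float)
    a = np.dot(y, f) / np.dot(y, y)
    return dt / a if a > 0 else float("inf")
```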
30. A system operable to determine a risk of collision between a host vehicle and an upright object, the system comprising:
a camera mountable in the host vehicle; and a processor mountable in the host vehicle, wherein the processor is connectable to the camera and the processor is configured to capture from the camera a plurality of image frames of an environment of the host vehicle within a field of view of the camera, and the processor is further configured to:
selecting a patch in an image frame, the patch corresponding to the location where the host vehicle will be after a predetermined time interval;
monitoring the patch and, when a plurality of vertical lines are detected in the patch, tracking pairs of corresponding vertical lines between the plurality of image frames;
calculating a time to collision in response to the optical flow of the corresponding vertical lines of at least one of the plurality of pairs, assuming a vertical surface model of the optical flow of upright objects;
scoring the corresponding vertical lines of the other pairs detected in the patch with respect to their fit to the vertical surface model; and
determining that a collision is expected based on the fit of the optical flows of the other pairs of corresponding vertical lines to the vertical surface model of the optical flow of upright objects.
31. The system of claim 30, wherein the processor is further configured to:
selecting the patch so that it translates laterally based on a vehicle path determined from a yaw rate sensor, the speed of the host vehicle, and a dynamic model of the host vehicle.
32. The system of claim 30, wherein, in the vertical surface model, the optical flow is zero at the image of the horizon line and varies linearly with vertical image position.
33. A method for determining a risk of collision between a host vehicle and a vertical object, the method using a camera mountable in the host vehicle and a processor, wherein the processor is connectable to the camera and configured to capture from the camera a plurality of image frames of an environment of the host vehicle within a field of view of the camera, the method comprising:
upon identifying an image of an object in the plurality of image frames, calculating a time to collision (TTC) in response to a change in scale of the size of the image of the object;
selecting a patch based on the identified image of the object;
tracking a plurality of points between the plurality of image frames;
fitting the image motion of the tracked points to a model in which the image motion is zero at the image of the horizon line and varies linearly with vertical position in the image; and
providing a forward collision warning or brake activation signal when the TTC is below a threshold and further based on the fit of the image points to the model.
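Claim 33 combines the two measurements: a coarse TTC from the scale change of the detected object's image, confirmed by fitting the tracked points in the patch to the same zero-at-horizon, linear-in-height flow model. The sketch below is illustrative; the threshold values and the inlier criterion are assumptions, not claimed values.

```python
import numpy as np

def ttc_from_scale(size_prev, size_cur, dt):
    """TTC from the relative growth of the object's image size between frames dt apart:
    with scale s = size_cur / size_prev, TTC = dt / (s - 1)."""
    s = size_cur / size_prev
    return dt / (s - 1.0) if s > 1.0 else float("inf")

def forward_collision_warning(size_prev, size_cur, dt, y_below_horizon, flows,
                              ttc_threshold=2.0, min_inlier_fraction=0.6, tol_px=0.5):
    """Warn only when the scale-change TTC is below the threshold AND the tracked
    points in the patch are consistent with the upright-object flow model."""
    ttc = ttc_from_scale(size_prev, size_cur, dt)
    if ttc >= ttc_threshold:
        return False
    y = np.asarray(y_below_horizon, float)
    f = np.asarray(flows, float)
    a = np.dot(y, f) / np.dot(y, y)            # slope of the linear flow model
    inliers = float(np.mean(np.abs(f - a * y) < tol_px))
    return inliers >= min_inlier_fraction
```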
34. A system for determining the risk of collision between a host vehicle and a vertical object, the system using a camera mountable in the host vehicle and a processor, wherein the processor is connectable to the camera and configured to capture from the camera a plurality of image frames of the environment of the host vehicle within a field of view of the camera, the processor configured to:
identifying an image of an object in the plurality of image frames;
calculating a time to collision (TTC) in response to a change in scale of the size of the image of the object;
selecting a patch based on the identified image of the object;
tracking a plurality of points between the plurality of image frames;
fitting the image motion of the tracked points to a model in which the image motion is zero at the image of the horizon line and varies linearly with vertical position in the image; and
providing a forward collision warning or brake activation signal when the TTC is below a threshold and further based on the fit of the image points to the model.
CN201710344179.6A 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians Active CN107423675B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US42040510P 2010-12-07 2010-12-07
US61/420,405 2010-12-07
CN201110404574.1A CN102542256B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201110404574.1A Division CN102542256B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians

Publications (2)

Publication Number Publication Date
CN107423675A CN107423675A (en) 2017-12-01
CN107423675B true CN107423675B (en) 2021-07-16

Family

ID=46349111

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710344179.6A Active CN107423675B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians
CN201110404574.1A Active CN102542256B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201110404574.1A Active CN102542256B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians

Country Status (1)

Country Link
CN (2) CN107423675B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5877897A (en) 1993-02-26 1999-03-02 Donnelly Corporation Automatic rearview mirror, vehicle lighting control and vehicle interior monitoring system using a photosensor array
US6822563B2 (en) 1997-09-22 2004-11-23 Donnelly Corporation Vehicle imaging system with accessory control
US7655894B2 (en) 1996-03-25 2010-02-02 Donnelly Corporation Vehicular image sensing system
WO2003093857A2 (en) 2002-05-03 2003-11-13 Donnelly Corporation Object detection system for vehicle
US7526103B2 (en) 2004-04-15 2009-04-28 Donnelly Corporation Imaging system for vehicle
WO2008024639A2 (en) 2006-08-11 2008-02-28 Donnelly Corporation Automatic headlamp control system
DE102013213812A1 (en) * 2013-07-15 2015-01-15 Volkswagen Aktiengesellschaft Device and method for displaying a traffic situation in a vehicle
CN105981042B (en) * 2014-01-17 2019-12-06 Kpit技术有限责任公司 Vehicle detection system and method
WO2018049643A1 (en) * 2016-09-18 2018-03-22 SZ DJI Technology Co., Ltd. Method and system for operating a movable object to avoid obstacles

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7136506B2 (en) * 2003-03-03 2006-11-14 Lockheed Martin Corporation Correlation based in frame video tracker
JP2005226670A (en) * 2004-02-10 2005-08-25 Toyota Motor Corp Deceleration control device for vehicle
EP1741079B1 (en) * 2004-04-08 2008-05-21 Mobileye Technologies Limited Collision warning system
CN101261681B (en) * 2008-03-31 2011-07-20 北京中星微电子有限公司 Road image extraction method and device in intelligent video monitoring

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6327536B1 (en) * 1999-06-23 2001-12-04 Honda Giken Kogyo Kabushiki Kaisha Vehicle environment monitoring system
US7113867B1 (en) * 2000-11-26 2006-09-26 Mobileye Technologies Limited System and method for detecting obstacles to vehicle motion and determining time to contact therewith using sequences of images
CN1576123A (en) * 2003-07-03 2005-02-09 黄保家 Anticollision system for motor vehicle
CN101305295A (en) * 2005-11-09 2008-11-12 丰田自动车株式会社 Object detection device
EP1837803A2 (en) * 2006-03-24 2007-09-26 MobilEye Technologies, Ltd. Headlight, taillight and streetlight detection
CN101633356A (en) * 2008-07-25 2010-01-27 通用汽车环球科技运作公司 System and method for detecting pedestrians
CN101837782A (en) * 2009-01-26 2010-09-22 通用汽车环球科技运作公司 Be used to collide the multiple goal Fusion Module of preparation system

Also Published As

Publication number Publication date
CN107423675A (en) 2017-12-01
CN102542256B (en) 2017-05-31
CN102542256A (en) 2012-07-04

Similar Documents

Publication Publication Date Title
US10940818B2 (en) Pedestrian collision warning system
CN107423675B (en) Advanced warning system for forward collision warning of traps and pedestrians
US9251708B2 (en) Forward collision warning trap and pedestrian advanced warning system
US11087148B2 (en) Barrier and guardrail detection using a single camera
US11062155B2 (en) Monocular cued detection of three-dimensional structures from depth images
EP2958054B1 (en) Hazard detection in a scene with moving shadows
JP3822515B2 (en) Obstacle detection device and method
US9257045B2 (en) Method for detecting a traffic lane by means of a camera
JP6416293B2 (en) Method of tracking a target vehicle approaching a car by a car camera system, a camera system, and a car
US20180038689A1 (en) Object detection device
Miyasaka et al. Ego-motion estimation and moving object tracking using multi-layer lidar
RU2635280C2 (en) Device for detecting three-dimensional objects
JP3916930B2 (en) Approach warning device
JPH06162398A (en) Traffic lane detecting device, traffic lane deviation warning device and collision warning device
Hernández et al. Laser based collision warning system for high conflict vehicle-pedestrian zones
Wu et al. A vision-based collision warning system by surrounding vehicles detection
JP3961269B2 (en) Obstacle alarm device
JP4381394B2 (en) Obstacle detection device and method
Inoue et al. Following vehicle detection using multiple cameras
Lin et al. Understanding Vehicle Interaction in Driving Video with Spatial-temporal Deep Learning Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant