US11170232B2 - Method for capturing an object in an environmental region of a motor vehicle with prediction of the movement of the object, camera system as well as motor vehicle - Google Patents

Info

Publication number: US11170232B2
Application number: US16/322,333 (US201716322333A)
Other versions: US20190197321A1
Authority: US (United States)
Prior art keywords: feature, image, prediction, determined, environmental region
Inventors: Ciaran Hughes, Duong-Van Nguyen, Jonathan Horgan, Jan Thomanek
Current assignee: Connaught Electronics Ltd
Original assignee: Connaught Electronics Ltd
Application filed by Connaught Electronics Ltd; assignors: Jonathan Horgan, Ciaran Hughes, Duong-Van Nguyen, Jan Thomanek
Published as US20190197321A1, granted as US11170232B2
Legal status: Active

Classifications

    • G06K 9/00805
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 11/00 Arrangements for holding or mounting articles, not otherwise provided for
    • B60R 11/04 Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W 30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W 30/095 Predicting travel path or likelihood of collision
    • B60W 30/0956 Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
    • G06K 9/00362
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2554/00 Input parameters relating to objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30261 Obstacle

Definitions

  • the present invention relates to a method for capturing an object in an environmental region of a motor vehicle based on a sequence of images of the environmental region, which are provided by means of a camera of the motor vehicle. Moreover, the present invention relates to a camera system for performing such a method as well as to a motor vehicle with such a camera system.
  • Camera systems for motor vehicles are already known from the prior art in various configurations.
  • a camera system includes a camera, which is arranged at the motor vehicle and which captures an environmental region of the motor vehicle.
  • the camera system can also have multiple such cameras, which can capture the entire environment around the motor vehicle.
  • the camera arranged at the motor vehicle provides a sequence of images of the environmental region and thus captures a plurality of images per second. This sequence of images can then be processed with the aid of an electronic image processing device.
  • objects in the environmental region of the motor vehicle can for example be recognized.
  • the objects in the images provided by the camera are reliably recognized.
  • the objects, thus for example further traffic participants or pedestrians, can be differentiated from further objects in the environmental region.
  • the capture of moved objects in the environmental region in particular presents a challenge. Since these moved objects are located in different positions in the temporally consecutive images, it is required to recognize the objects in the individual images.
  • WO 2014/096240 A1 describes a method for detecting a target object in an environmental region of a camera based on an image of the environmental region provided by means of the camera.
  • a plurality of characteristic features is determined in the image, wherein it is differentiated between ground features describing a ground of the environmental region and target features associated with a target object.
  • at least a partial region of the image can be divided into a plurality of image cells.
  • In the respective image cells, the associated optical flow vector can then be determined for each characteristic feature.
  • Subsequently, the image cells can be combined into a region of interest.
  • WO 2014/096242 A1 describes a method for tracking a target object in an environmental region of a motor vehicle based on a sequence of images of the environmental region, which are provided by means of a camera of the motor vehicle.
  • the target object is detected in the environmental region and the target object is tracked by determining optical flow vectors to the target object based on the sequence of images during a relative movement between the motor vehicle and the target object. If a standstill state, in which both the motor vehicle and the target object come to a standstill, is detected, the current relative position of the target object with respect to the motor vehicle is stored and it is examined if a predetermined criterion with respect to the relative movement is satisfied. After the predetermined criterion is satisfied, the tracking of the target object is continued starting from the stored relative position.
  • this object is solved by a method, by a camera system as well as by a motor vehicle having the features according to the respective independent claims.
  • Advantageous developments of the present invention are the subject matter of the dependent claims.
  • a first object feature is in particular recognized in a first image of the sequence, wherein the first object feature preferably describes at least a part of the object in the environmental region.
  • a position of the object in the environmental region is in particular estimated based on a predetermined movement model, which describes a movement of the object in the environmental region.
  • a prediction feature is in particular determined in a second image following the first image in the sequence based on the first object feature and based on the estimated position. Further, a second object feature is preferably determined in the second image.
  • association of the second object feature with the prediction feature is preferably effected in the second image if a predetermined association criterion is satisfied.
  • the second object feature is preferably confirmed as originating from the object if the second object feature is associated with the prediction feature.
  • a method according to the invention serves for capturing an object in an environmental region of a motor vehicle based on a sequence of images of the environmental region, which are provided by means of a camera of the motor vehicle.
  • the method involves recognizing a first object feature in a first image of the sequence, wherein the first object feature describes at least a part of the object in the environmental region. Furthermore, the method includes estimating a position of the object in the environmental region based on a predetermined movement model, which describes a movement of the object in the environmental region. Moreover, it is provided that a prediction feature is determined in a second image following the first image in the sequence based on the first object feature and based on the estimated position.
  • the method includes determining a second object feature in the second image and associating the second object feature with the prediction feature in the second image if a predetermined association criterion is satisfied. Finally, the method involves confirming the second object feature as originating from the object if the second object feature is associated with the prediction feature.
  • the method can be performed by a camera system of the motor vehicle, which has at least one camera.
  • a sequence of images of the environmental region is provided by this camera.
  • the sequence describes a temporal succession of images captured for example with a predetermined repetition rate.
  • the camera system can have an image processing unit, by means of which the images can be evaluated.
  • a first object feature is recognized in a first image of the sequence of images.
  • This first object feature can completely describe the object in the environmental region. It can also be provided that the first object feature only describes a part of the object in the environmental region.
  • first object features are determined, which all describe the object in the environmental region.
  • the first object feature can in particular describe the position of the part of the object and/or the dimensions of the part of the object in the first image.
  • the movement of the object in the environmental region is estimated.
  • a predetermined movement model is used, which describes the movement of the object in the environmental region of the motor vehicle, thus in the real world.
  • the movement model can for example describe a linear movement of the object into a predetermined direction. Based on the movement model, the position of the object in the environmental region can be continuously estimated.
  • the position is estimated for that point of time, at which a second image of the sequence is or has been captured.
  • a prediction feature is then determined.
  • the prediction feature can be determined based on the first object feature. For example, for determining the prediction feature, the first object feature from the first image can be used and be shifted such that the position of the prediction feature in the second image describes the estimated position in the environmental region. Thus, the estimated variation of the position of the object is transferred into the second image.
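  • A minimal sketch of this prediction step, assuming a linear movement model and a helper project_to_image that maps a ground-plane position into image coordinates (both names are illustrative, not taken from the patent):
      def predict_feature(first_feature_pts, world_pos, velocity, dt, project_to_image):
          """Shift the first object feature to the position estimated by the movement model.

          first_feature_pts: (u, v) polygon points of the first object feature in the first image.
          world_pos:         (x, y) ground-plane position of the object at the first image.
          velocity:          (vx, vy) assumed constant velocity of the object (linear model).
          dt:                time between the first and the second image.
          project_to_image:  camera model mapping a ground-plane point to image coordinates.
          """
          # Estimate the new position of the object in the environmental region.
          predicted_pos = (world_pos[0] + velocity[0] * dt,
                           world_pos[1] + velocity[1] * dt)
          # Transfer the estimated variation of the position into the second image:
          # shift all polygon points by the displacement of the projected position.
          u_old, v_old = project_to_image(world_pos)
          u_new, v_new = project_to_image(predicted_pos)
          du, dv = u_new - u_old, v_new - v_old
          return [(u + du, v + dv) for (u, v) in first_feature_pts]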
  • a second object feature is then recognized.
  • This second object feature also describes at least a part of the object in the environmental region of the motor vehicle in analogous manner to the first object feature.
  • the second object feature is determined analogously to the first object feature.
  • it is examined if the second object feature can be associated with the prediction feature.
  • In other words, it is examined to what extent the second object feature corresponds to the prediction feature, for whose determination the movement model in the real world was used. If a predetermined association criterion is satisfied, the second object feature is associated with the prediction feature in the second image.
  • the association criterion can for example describe how similar the second object feature and the prediction feature are to each other.
  • If the second object feature has been associated with the prediction feature in the second image, it can be assumed that the second object feature also describes the object in the environmental region. Since the association of the second object feature with the prediction feature has been effected, it can be assumed that the object has moved according to the predetermined movement model. Thus, it can also be assumed with high likelihood that the first object feature in the first image and the second object feature in the second image describe the same object in the environmental region of the motor vehicle. Thus, the object can be reliably recognized and tracked over the sequence of images.
  • The object is recognized as moving relative to the motor vehicle if the second object feature is confirmed as originating from the object. If the second object feature is confirmed, it can be assumed with relatively high likelihood that the second object feature describes the object or at least a part thereof. This means that the position of the first object feature in the first image has changed to the position of the second object feature in the second image as predicted according to the movement model, which describes the movement of the object in the real world. Thereby, the object can be identified or classified as a moved object. This applies both to the case that the motor vehicle itself moves and to the case that the motor vehicle stands still.
  • an association probability between the second object feature and the prediction feature is determined and the predetermined association criterion is deemed as satisfied if the association probability exceeds a predetermined value.
  • a prediction feature is then determined for each of the recognized object features in the image following in the sequence.
  • the association probability is then determined.
  • the association probability in particular describes the similarity between the second object feature and the prediction feature. If the second object feature and the prediction feature are identical, the association probability can be 100% or have the value of 1. If the second object feature and the prediction feature completely differ, the association probability can be 0% or have the value of 0.
  • the second object feature can be associated with the prediction feature.
  • the predetermined value or the threshold value can for example be 75% or 0.75. Thus, it can be examined in simple manner if the second object feature can be associated with the prediction feature.
  • the association probability is determined based on an overlap between the second object feature and the prediction feature in the second image and/or based on dimensions of the second object feature and the prediction feature in the second image and/or based on a distance between the centers of gravity of the second object feature and the prediction feature in the second image and/or based on a distance between the object and a prediction object associated with the prediction feature in the environmental region.
  • Both the second object feature and the prediction feature can cover a certain shape or a certain area in the second image.
  • both the second object feature and the prediction feature are determined as a polygon.
  • the second object feature and the prediction feature can also be determined as another geometric shape.
  • On the one hand, it can be determined to what extent the second object feature and the prediction feature overlap in the second image. If the second object feature and the prediction feature overlap in a relatively large area, a high association probability can be assumed. If the second object feature and the prediction feature do not overlap, a low association probability can be assumed. The examination to what extent the second object feature and the prediction feature overlap is therein effected in the second image.
  • the respective dimensions of the second object feature and the prediction feature can be considered in the second image.
  • the lengths, the heights and/or the areas of the second object feature and the prediction feature can be compared to each other.
  • the similarity between the second object feature and the prediction feature in the second image or the association probability can be determined.
  • the center of gravity of the second object feature and the center of gravity of the prediction feature are determined and a distance between the center of gravity of the second object feature and the center of gravity of the prediction feature is determined. The lower the distance between the centers of gravity is, the greater the association probability is.
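  • A simplified sketch of how such an association probability could be computed, using axis-aligned bounding boxes instead of the polygons of the patent; the equal weighting of the three cues and the 0.75 threshold are example choices:
      def association_probability(feat_a, feat_b):
          """feat_a, feat_b: boxes (x_min, y_min, x_max, y_max) in the second image."""
          ax0, ay0, ax1, ay1 = feat_a
          bx0, by0, bx1, by1 = feat_b
          # Overlap (intersection over union) between the two features.
          ix = max(0.0, min(ax1, bx1) - max(ax0, bx0))
          iy = max(0.0, min(ay1, by1) - max(ay0, by0))
          inter = ix * iy
          area_a = (ax1 - ax0) * (ay1 - ay0)
          area_b = (bx1 - bx0) * (by1 - by0)
          iou = inter / (area_a + area_b - inter) if inter > 0 else 0.0
          # Similarity of the dimensions (areas) of the two features.
          size_sim = min(area_a, area_b) / max(area_a, area_b)
          # Distance between the centers of gravity, normalised by the feature size.
          ca = ((ax0 + ax1) / 2.0, (ay0 + ay1) / 2.0)
          cb = ((bx0 + bx1) / 2.0, (by0 + by1) / 2.0)
          dist = ((ca[0] - cb[0]) ** 2 + (ca[1] - cb[1]) ** 2) ** 0.5
          center_sim = max(0.0, 1.0 - dist / max(area_a, area_b) ** 0.5)
          # Combine the cues into one probability (equal weights as an example).
          return (iou + size_sim + center_sim) / 3.0

      def is_associated(feat_a, feat_b, threshold=0.75):
          return association_probability(feat_a, feat_b) > threshold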
  • a prediction object is determined, which describes the mapping of the prediction feature into the real world.
  • An association probability between a last confirmed object feature and the second object feature is determined, wherein the last confirmed object feature describes that object feature which was last confirmed as originating from the object. If the association probability between the second object feature and the prediction feature falls below the predetermined value, the second object feature is not associated with the prediction feature. This can be substantiated in that the object in the environmental region does not or no longer move according to the movement model. This is for example the case if the object has stopped or has halted or has changed a direction of movement.
  • The second object feature cannot be determined, or cannot be determined sufficiently, if the object stands still during the capture of the second image.
  • the second object feature for example cannot be determined at all.
  • In this case, an association of the second object feature with the prediction feature is not possible. If the association of the second object feature with the prediction feature has not been effected, it is examined if the second object feature can be associated with that object feature which was last confirmed as originating from the object in one of the preceding images. Thus, it can be determined where the second object feature is relative to the position in which the object has certainly been at an earlier point of time. Based on this information, it can be determined if the object in the environmental region has changed its direction, has changed its speed of movement or currently stands still.
  • the prediction feature is determined starting from a position in the environmental region, which is associated with the last confirmed object feature if the association probability between the last confirmed object feature and the second object feature is greater than the association probability between the second object feature and the prediction feature.
  • Thus, the association probability between the second object feature in the second or current image and the prediction feature as well as the association probability between the second object feature and that object feature from one of the preceding images which was actually confirmed as originating from the object can be determined. If the object in the environmental region no longer follows the predetermined movement model, it is required to change the movement model. Therein, it is in particular provided that the movement model is determined starting from that object feature which was last confirmed. Thus, the case can be reliably taken into account that the object in the real world has changed its direction of movement or actually stands still.
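  • A minimal decision sketch for this fallback, assuming the association probability function sketched above; all names and the threshold are illustrative:
      def select_reference(second_feat, prediction_feat, last_confirmed_feat,
                           association_probability, threshold=0.75):
          p_pred = association_probability(second_feat, prediction_feat)
          p_last = association_probability(second_feat, last_confirmed_feat)
          if p_pred > threshold:
              # The object keeps following the movement model.
              return "prediction", p_pred
          if p_last > p_pred:
              # The object stopped or changed direction: restart the prediction
              # from the position of the last confirmed object feature.
              return "last_confirmed", p_last
          return "unmatched", max(p_pred, p_last)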
  • an object position in the environmental region is determined based on the second object feature
  • a prediction position in the environmental region is determined based on the prediction feature
  • a spatial similarity between the object position and the prediction position is determined and a current position of the object in the environmental region is determined based on the association probability and the spatial similarity.
  • a prediction position is determined. This prediction position describes the current position of the object considering the movement model. Therein, the prediction position can be output with a predetermined spatial uncertainty.
  • the current position in the environmental region can also be determined.
  • a spatial similarity or a spatial likelihood can then be determined.
  • a weighting factor can then be determined, based on which the current position of the object can be determined for the tracking. This allows reliably tracking the object.
  • the current position of the object is determined based on the second object feature, the object position of which has the greater spatial similarity to the prediction position of the prediction feature.
  • the object in the environmental region is for example a pedestrian
  • one object feature can describe the head of the pedestrian
  • another object feature can describe the body
  • one object feature can respectively describe the legs of the pedestrian.
  • that second object feature is used, which has the greatest spatial similarity to the prediction feature.
  • the second object feature associated with the head of the pedestrian can have no or a very low spatial similarity to the prediction feature.
  • that second object feature, which is associated with a leg of the pedestrian can have a high spatial similarity to the prediction feature. This in particular applies to the case in which the base point of the prediction feature is compared to the respective base points of the second object features for determining the spatial similarity. This allows reliable determination of the current position of the object in the environmental region and further reliable tracking of the object.
  • If a further object feature is recognized in one of the images, it is examined if the further object feature originates from an object that has entered the environmental region, wherein the examination is based on an entry probability depending on a position of the further object feature in the image.
  • It can be the case that multiple object features are recognized in the images or in the second image.
  • it is to be examined if these object features originate from an already recognized object or if the object feature describes a new object, which was not yet previously captured. Therefore, it is examined if the further object feature describes an object, which has entered the environmental region or which has moved into the environmental region.
  • an entry probability is taken into account, which can also be referred to as a birth probability.
  • This entry probability depends on the position of the object feature in the image.
  • For areas in which an entry of a new object is unlikely, such as a central area of the image, a low entry probability is assumed. If the further object feature has been recognized as originating from a new object, this new object can also be correspondingly tracked or its position can be determined.
  • Moreover, it can be examined if an object has exited the environmental region.
  • This can for example be the case if the object is tracked in the images and can no longer be recognized in one of the images.
  • this can be the case if the first object feature in the first image is in an edge area of the image and the object feature can no longer be captured in the second image.
  • an exit probability can be defined analogously to the previously described entry probability and it can be examined based on this exit probability if the object has exited the environmental region.
  • the exit probability is higher if the object feature is in an edge area of the image than for the case that the object feature is in a central area of the image.
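  • An illustrative mapping from the image position of a feature to an entry probability; the zone boundaries and probability values are assumptions for the sketch, not values from the patent:
      def entry_probability(u, v, width, height, border=0.15):
          """u, v: position of the further object feature; width, height: image size."""
          # Relative distance of the feature to the nearest image border
          # (0 at the border, 0.5 in the exact centre).
          d = min(u / width, (width - u) / width, v / height, (height - v) / height)
          if d < border:
              return 0.9   # edge area: new objects typically enter here
          if d < 2 * border:
              return 0.4   # transition area
          return 0.05      # central area: an entry is very unlikely

      # The exit probability can be defined analogously to the entry probability.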
  • the second object feature is determined as a polygon, wherein the polygon has a left base point, a central base point, a right base point and/or a tip point, and wherein the polygon describes a width and/or a height of the object.
  • the second object feature can be described as an object in the second image.
  • the polygon in particular has the left, the central, the right base point as well as a tip point.
  • the central base point can be determined as the point at which a connecting line between a vanishing point and the center of gravity of the polygon intersects the lower boundary of the polygon.
  • the width of the object is reproduced by the right and the left base point.
  • the height of the object can be described by the tip point.
  • This polygon can be determined in simple manner and within a short computing time.
  • the polygon is suitable for describing the spatial dimensions of the object.
  • a plurality of regions of interest is determined in the second image, the regions of interest are grouped and the respective polygon is determined based on the grouped regions of interest.
  • Regions of interest can respectively be determined in the second image or in the respective images of the image sequence. These regions of interest in particular describe those pixels or areas of the image, which depict a moved object.
  • the image or the second image can first be divided into a plurality of partial areas or image cells and it can be examined for each of the image cells if it depicts a moved object or a part thereof.
  • a weighting matrix can further be taken into account, in which a first value or a second value is associated with each of the image cells, according to whether or not the respective image cell describes a moved object. Those image cells describing moved objects can then be correspondingly grouped and the regions of interest can be determined herefrom. After the respective regions of interest have been determined, it can be examined if these regions of interest originate from the same object. Thus, it can be determined if the respective regions of interest can be grouped. As soon as the regions of interest have then been grouped, the polygon can be determined based on the area, which the grouped regions of interest occupy in the second image, which then describes the second object feature.
  • the second image is divided into a plurality of image cells
  • object cells describing a moved object are selected from the image cells based on an optical flow and the object cells are associated with one of the regions of interest.
  • the second image can be divided into a plurality of image cells.
  • each of the image cells can include at least one pixel.
  • each of the image cells includes a plurality of pixels.
  • the optical flow or the optical flow vector can then be determined for each of the image cells.
  • it can be reliably examined whether or not the pixels in the image cell describe a moved object.
  • Those image cells describing a moved object are considered as object cells and can be combined into a region of interest. This allows simple and reliable determination of the respective regions of interest.
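  • A sketch of this cell-wise selection, assuming a dense per-pixel optical flow field is already available; the cell size, the flow threshold and the median criterion are example choices, not the patent's exact confidence handling:
      import numpy as np

      def weighting_matrix(flow, cell_size=10, flow_threshold=1.0):
          """flow: array of shape (H, W, 2) with per-pixel flow vectors.
          Returns a matrix with 1 for object cells (moved object) and 0 otherwise."""
          h, w = flow.shape[:2]
          rows, cols = h // cell_size, w // cell_size
          weights = np.zeros((rows, cols), dtype=np.uint8)
          magnitude = np.linalg.norm(flow, axis=2)
          for r in range(rows):
              for c in range(cols):
                  cell = magnitude[r * cell_size:(r + 1) * cell_size,
                                   c * cell_size:(c + 1) * cell_size]
                  # A cell counts as an object cell if its median flow magnitude
                  # exceeds the threshold (a simple confidence criterion).
                  if np.median(cell) > flow_threshold:
                      weights[r, c] = 1
          return weights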
  • a roadway is recognized in the second image by means of segmentation and the regions of interest are determined based on the recognized roadway.
  • a roadway on which the motor vehicle is located, can be recognized in the second image.
  • Depending on the orientation of the camera, the roadway can be in front of the motor vehicle or behind the motor vehicle in the direction of travel.
  • the roadway or the ground can now be recognized with the aid of the segmentation method. If the moved object is also located on the roadway, the boundaries between the roadway and the object moving on the roadway can be reliably determined by recognizing the roadway. This allows precise determination of the regions of interest.
  • the determination of the polygons based on the regions of interest was described based on the second object features in the image.
  • the first object feature in the first image and all of the further object features in the images can also be determined as polygons in analogous manner.
  • the prediction feature is also determined as a polygon.
  • the prediction feature can be determined based on the first object feature and the position of the polygon and/or the size of the polygon describing the prediction feature can be adapted based on the movement model in the second image.
  • a camera system according to the invention for a motor vehicle includes at least one camera and an electronic image processing device.
  • the camera system is adapted to perform a method according to the invention or an advantageous configuration thereof.
  • a motor vehicle according to the invention includes a camera system according to the invention.
  • the motor vehicle is in particular formed as a passenger car.
  • FIG. 1 a motor vehicle according to an embodiment of the present invention, which has a camera system with a plurality of cameras;
  • FIG. 2 a schematic flow diagram of a method for determining regions of interest in the images, which are provided by the cameras;
  • FIG. 3 an image, which is provided with the aid of the cameras, which is divided into a plurality of image cells;
  • FIG. 4 areas in the image, which are used for determining the regions of interest
  • FIG. 5 object cells in the image, which are associated with the moved object, before and after dilation
  • FIG. 6 the individual image cells, over which a sliding window is shifted for determining the regions of interest
  • FIG. 7 a region of interest in the image, which is upwards corrected
  • FIG. 8 two regions of interest in the image, wherein the one region of interest is downwards corrected and the other region of interest is scaled down;
  • FIG. 9 the regions of interest, which are associated with a pedestrian in the image, who is located on a roadway;
  • FIG. 10 regions of interest, which are each combined in groups
  • FIG. 11 a schematic flow diagram of a method for tracking the object
  • FIG. 12 a schematic representation of the determination of a polygon based on grouped regions of interest
  • FIG. 13 the polygon, which has a left, a central and a right base point as well as a tip point;
  • FIG. 14 a schematic representation of the determination of an object feature based on a movement model in the real world
  • FIG. 15 a -15 d object features, which are compared to prediction features
  • FIG. 16 object features and prediction features at different points of time
  • FIG. 17 a diagram, which describes the spatial similarity between the object and a prediction object in the real world
  • FIG. 18 a pedestrian as a moved object, with which a plurality of object features are associated.
  • FIG. 19 a diagram, which describes an entry probability of an object depending on a position in the image.
  • FIG. 1 shows a motor vehicle 1 according to an embodiment of the present invention in a plan view.
  • the motor vehicle 1 is formed as a passenger car.
  • the motor vehicle 1 includes a camera system 2 , which includes at least one camera 4 .
  • the camera system 2 includes four cameras 4 , wherein one of the cameras 4 is arranged in a rear area 5 , one of the cameras 4 is arranged in a front area 7 and two of the cameras 4 are arranged in respective lateral areas 6 of the motor vehicle 1 .
  • objects 9 in an environmental region 8 of the motor vehicle 1 can be captured.
  • a sequence of images 10 , 11 is provided by each of the cameras 4 .
  • This sequence of images 10 , 11 is then transmitted to an electronic image processing device 3 of the camera system 2 .
  • the objects 9 in the environmental region 8 can then be recognized in the images 10 , 11 with the aid of the electronic image processing device 3 .
  • moved objects 9 in the environmental region 8 are to be recognized with the aid of the camera system 2 .
  • a method of three-dimensional image processing is used.
  • regions of interest 16 are determined in the images 10 , 11 , which describe a moved object 9 .
  • object features 24 , 25 are determined in the images 10 , 11 based on the regions of interest 16 , which describe the object 9 in more detail.
  • the movement of the object 9 is tracked.
  • FIG. 2 shows a schematic flow diagram of a method for determining regions of interest 16 in the images 10 , 11 .
  • In a first step S1, an image 10, 11 provided by one of the cameras 4 is divided into a plurality of image cells 12.
  • each of the image cells 12 can include at least one pixel.
  • each of the image cells 12 has a plurality of pixels.
  • each image cell 12 can have 10×10 pixels.
  • object cells 12 ′ are determined.
  • the object cells 12 ′ describe those image cells 12 , which describe a moved object 9 .
  • a weighting matrix is then determined based on the image cells 12 and the object cells 12 ′.
  • In a step S3, regions of interest 16 are then determined in the image 10, 11 based on the weighting matrix. These regions of interest 16 are subsequently corrected in a step S4. Finally, the regions of interest 16 are combined in a step S5.
  • FIG. 3 shows an image 10 , 11 , which has been provided by one of the cameras 4 .
  • the image 10 , 11 is divided into a plurality of image cells 12 .
  • the number of the pixels in the respective image cells 12 can be determined.
  • the weighting matrix can be determined.
  • a height 13 of the weighting matrix results based on the number of lines of image cells 12 and a width 14 of the weighting matrix results based on the number of columns of the image cells 12 .
  • the image 10 , 11 shows the object 9 , which is located in the environmental region 8 .
  • the object 9 is a moving object in the form of a pedestrian.
  • This object 9 is now to be recognized in the image 10 , 11 .
  • an optical flow or a flow vector is determined in each of the image cells 12 , which describes the movement of an object 9 . If a flow vector has been determined with a sufficient confidence value, that image cell 12 is recognized as the object cell 12 ′ and identified in the weighting matrix or a value associated with the object cell 12 ′ in the weighting matrix is varied.
  • the threshold value for a sufficient confidence value depends on the respective region 15 in the image 10 , 11 .
  • FIG. 4 shows different regions 15 in the image 10 , 11 . Presently, the regions 15 differ depending on a distance to the motor vehicle 1 . Further, the threshold values for determining the confidence value can be adjustable and be dependent on the current speed of the motor vehicle 1 .
  • FIG. 5 shows the image cells 12 and the object cells 12 ′, which have been recognized as originating from the moved object 9 .
  • the object cells 12 ′ do not form a contiguous area. Since a completely contiguous area has not been recognized in the image 10 , 11 , a sparsely populated weighting matrix is also present in this area.
  • To close these gaps, a morphological operation, in particular dilation, is applied. Objects in a binary image can be enlarged or thickened by the dilation. The dilation of the weighting matrix W with the structuring element H can be written as W ⊕ H = ∪_{q∈H} W_q; equivalently, a point p belongs to the dilation if the shifted structuring element H_p overlaps W.
  • H_p describes the structuring element H, which has been shifted by p.
  • W_q describes the weighting matrix W, which has been shifted by q.
  • q and p describe the respective shift directions.
  • Presently, the structuring element H is a 3×3 matrix.
  • the result of the dilation is represented on the right side of FIG. 5 .
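  • The gap-closing dilation can be reproduced, for example, with SciPy and a 3×3 structuring element of ones; the small weighting matrix below is only an illustrative stand-in:
      import numpy as np
      from scipy.ndimage import binary_dilation

      W = np.array([[0, 0, 0, 0, 0],
                    [0, 1, 0, 1, 0],
                    [0, 0, 0, 0, 0],
                    [0, 1, 0, 0, 0],
                    [0, 0, 0, 0, 0]], dtype=bool)   # sparsely populated weighting matrix

      H = np.ones((3, 3), dtype=bool)               # structuring element

      W_dilated = binary_dilation(W, structure=H)   # isolated object cells grow together
      print(W_dilated.astype(int))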
  • regions of interest 16 are now to be determined. This is explained in connection with FIG. 6 .
  • a generator can be used, which determines an object hypothesis based on the image cells 12 , the object cells 12 ′ and the weights thereof.
  • For this, a sliding window 17 is used, which is shifted over the individual image cells 12 and object cells 12′. Based on an integral image of the weighting matrix, the weighted sum of the contained cells is then determined for each candidate region of interest 16.
  • x and y describe the position of the lower left edge of the region of interest
  • w and h describe the width and the height of the region of interest 16 . If the weighted sum w ROI is greater than a threshold value, the region of interest 16 is marked as a hypothesis. If the region of interest 16 is marked as a hypothesis, the search for further regions of interest 16 in the current column is aborted and continued in the next column. As indicated in FIG. 6 , this is performed for all of the columns.
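  • A sketch of this hypothesis search with an integral image of the weighting matrix; the window size and the threshold are example values, while the column-wise abort mirrors the behaviour described above:
      import numpy as np

      def integral_image(weights):
          return weights.cumsum(axis=0).cumsum(axis=1)

      def window_sum(ii, row, col, h, w):
          """Sum of the weights inside the window with top-left cell (row, col)."""
          total = ii[row + h - 1, col + w - 1]
          if row > 0:
              total -= ii[row - 1, col + w - 1]
          if col > 0:
              total -= ii[row + h - 1, col - 1]
          if row > 0 and col > 0:
              total += ii[row - 1, col - 1]
          return total

      def find_hypotheses(weights, win_h=4, win_w=3, threshold=6):
          ii = integral_image(weights)
          rows, cols = weights.shape
          hypotheses = []
          for col in range(cols - win_w + 1):          # column by column, as in FIG. 6
              for row in range(rows - win_h + 1):
                  if window_sum(ii, row, col, win_h, win_w) > threshold:
                      hypotheses.append((row, col, win_h, win_w))
                      break                            # continue the search in the next column
          return hypotheses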
  • FIG. 7 shows an example, in which the region of interest 16 or the sliding window 17 is upwards corrected.
  • the upper boundary of the sliding window 17 is upwards shifted such that the object cells 12 ′ associated with the moved object 9 are included in the sliding window 17 .
  • the rectangle 18 is then determined from the corrected sliding window 17 .
  • FIG. 8 shows an example, in which the lower boundary of the sliding window 17 is downwards shifted such that all of the object cells 12 ′ are included in the sliding window 17 .
  • a further sliding window 17 is shown, which is scaled down. In this case, object cells 12 ′ are not present in the lowermost line of the sliding window 17 . For this reason, the lower boundary of the sliding window 17 is upwards shifted.
  • a roadway 19 is recognized in the image 10 , 11 .
  • a segmentation method is used.
  • the roadway 19 can be recognized in the image 10 , 11 with the aid of the segmentation method.
  • a boundary line 20 between the roadway 19 and the object 9 can be determined. Based on this boundary line 20 , the rectangles 18 describing the regions of interest 16 can then be adapted. In this example, the rectangles 18 are downwards corrected. Presently, this is illustrated by the arrows 21 .
  • the respective regions of interest 16 are grouped. This is explained in connection with FIG. 10 .
  • the rectangles 18 are apparent, which describe the regions of interest 16 .
  • a horizontal contact area 23 is determined in the overlap area. If this horizontal contact area 23 in the overlap area exceeds a predetermined threshold value, the regions of interest 16 are grouped such that groups 22 , 22 ′ of regions of interest 16 arise.
  • The rectangles 18 or regions of interest 16 on the left side are presently combined into a first group 22 and the rectangles 18 or regions of interest 16 on the right side are combined into a second group 22′.
  • the rectangle 18 a of the second group 22 ′ is not added since the horizontal contact area 23 a is below the threshold value.
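  • A minimal sketch of this grouping step, with rectangles given as (x_min, y_min, x_max, y_max) and an example contact threshold; the chaining rule is a simplification of the behaviour described above:
      def horizontal_contact(a, b):
          return max(0.0, min(a[2], b[2]) - max(a[0], b[0]))

      def group_rois(rects, min_contact=5.0):
          groups = []
          for rect in rects:
              for group in groups:
                  # Add the rectangle to an existing group if it touches any member
                  # of that group over a sufficiently large horizontal contact area.
                  if any(horizontal_contact(rect, member) > min_contact for member in group):
                      group.append(rect)
                      break
              else:
                  groups.append([rect])
          return groups

      # Example: the two left rectangles share a large horizontal contact area and
      # end up in one group, the right rectangle forms its own group.
      print(group_rois([(0, 0, 10, 20), (4, 5, 14, 25), (40, 0, 50, 20)]))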
  • FIG. 11 shows a schematic flow diagram of a method for tracking the object 9 in the environmental region 8 .
  • In a step S6, object features 24, 25 are determined based on the regions of interest 16 in the images 10, 11.
  • In a step S7, a prediction feature 26 is determined and in a step S8 the object feature 24, 25 is associated with the prediction feature 26.
  • the position of the object 9 is updated. The updated position is then supplied to an object database 27 describing a state vector.
  • the movement of the object 9 in the environmental region 8 is predicted based on a linear movement model. Based on this movement model, the prediction feature 26 is then determined in the step S 7 . Furthermore, it is provided that new object features are recognized in a step S 11 and object features 24 , 25 are no longer taken into account in a step S 12 .
  • the association of already existing and tracked objects 9 and newly captured objects is performed both within the images 10 , 11 and in the real world.
  • the steps S 6 to S 8 are performed within the sequence of images 10 , 11 . This is illustrated in FIG. 11 by the block 35 .
  • The steps S9 to S12 are performed in the real world or in the environmental region 8. This is illustrated in FIG. 11 by the block 36.
  • the determination of the object feature 24 , 25 according to the step S 6 is illustrated in FIG. 12 .
  • the individual rectangles 18 are shown, which are associated with the respective regions of interest 16 , and which are combined to the group 22 .
  • a polygon 28 is then determined.
  • the polygon 28 is determined as the envelope of the rectangles 18 , which describe the regions of interest 16 .
  • a center of gravity 29 of the polygon 28 is determined.
  • The position of the center of gravity 29 of the polygon 28 with the coordinates x_s and y_s can be determined according to the following formulas: x_s = 1/(6A) · Σ_{i=1}^{N} (x_i + x_{i+1}) · (x_i·y_{i+1} − x_{i+1}·y_i) and y_s = 1/(6A) · Σ_{i=1}^{N} (y_i + y_{i+1}) · (x_i·y_{i+1} − x_{i+1}·y_i).
  • An area A of the polygon 28 can be determined according to the following formula: A = 1/2 · Σ_{i=1}^{N} (x_i·y_{i+1} − x_{i+1}·y_i).
  • (x_i, y_i) and (x_{i+1}, y_{i+1}) are coordinates of two adjacent points of the polygon 28, with the indices taken cyclically so that the point following the last one is the first point again.
  • N is the number of points of the polygon 28.
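  • The centroid and area formulas above correspond to the standard shoelace computation; a direct implementation could look as follows:
      def polygon_area_and_centroid(points):
          n = len(points)
          signed_area = 0.0
          cx = cy = 0.0
          for i in range(n):
              x_i, y_i = points[i]
              x_j, y_j = points[(i + 1) % n]          # adjacent point, wrapping around
              cross = x_i * y_j - x_j * y_i
              signed_area += cross
              cx += (x_i + x_j) * cross
              cy += (y_i + y_j) * cross
          signed_area *= 0.5
          cx /= (6.0 * signed_area)
          cy /= (6.0 * signed_area)
          return abs(signed_area), (cx, cy)

      area, (x_s, y_s) = polygon_area_and_centroid([(0, 0), (4, 0), (4, 2), (0, 2)])
      print(area, x_s, y_s)    # 8.0 2.0 1.0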
  • FIG. 13 shows a further representation of the polygon 28 .
  • the polygon 28 has a left base point 30 , a central base point 31 , a right base point 32 as well as a tip point 33 .
  • the center of gravity 29 of the polygon 28 is illustrated.
  • The central base point 31 results as the point at which a connecting line 34 between a vanishing point 35 and the center of gravity 29 of the polygon 28 intersects the lower boundary of the polygon 28.
  • By the left base point 30 and the right base point 32, the width of the object 9 is described.
  • the height of the object 9 is described by the tip point 33 .
  • FIG. 14 shows the object 9 in the form of a pedestrian on the right side, which moves with a speed v relative to the motor vehicle 1 .
  • the images 10 , 11 are provided by at least one camera 4 of the motor vehicle 1 , which are presented on the left side of FIG. 14 .
  • a first object feature 24 is determined as the polygon 28 in a first image 10 (not illustrated here).
  • This first object feature 24 describes the object 9 , which is in a first position P 1 at a point of time t 1 in the real world or in the environmental region 8 .
  • the prediction feature 26 is determined based on the first object feature 24 .
  • a picture 9 ′ of the object 9 or of the pedestrian is shown in the second image 11 .
  • the first object feature 24 determined in the first image 10 is presently shown dashed in the second image 11 .
  • a linear movement model is used, which describes the speed v of the object 9 .
  • For describing the movement of the object 9, a Kalman filter is used. Herein, it is assumed that the object 9 moves with a constant speed v.
  • k ⁇ 1 can be defined: ⁇ circumflex over (x) ⁇ k ⁇ 1
  • k ⁇ 1 A ⁇ circumflex over (x) ⁇ k ⁇ 1
  • k ⁇ 1 A ⁇ P k ⁇ 1
  • A describes the system matrix.
  • k ⁇ 1 describes the state vector for the preceding point of time or for the first image 10 .
  • k ⁇ 1 describes the state matrix for the preceding point of time or for the first image 10 .
  • Q is a noise matrix, which describes the error of the movement model and the differences between the movement model and the movement of the object 9 in the real world.
  • a second object feature 25 can be determined based on the regions of interest 16 . Now, it is to be examined if the second object feature 25 can be associated with the prediction feature 26 .
  • FIG. 15 a to FIG. 15 d show different variants, how the association between the second object feature 25 and the prediction feature 26 can be examined. For example—as shown in FIG. 15 a —an overlap area 36 between the second object feature 25 and the prediction feature 26 can be determined. Further, a distance 37 between the center of gravity 29 of the second object feature 25 or the polygon 28 and a center of gravity 38 of the prediction feature 26 can be determined. This is illustrated in FIG. 15 b .
  • a size of the second object feature 25 can be compared to a size of the prediction feature 26 . This is illustrated in FIG. 15 c . Further, a distance 39 between the object 9 and a prediction object 40 can be determined, which has been determined based on the prediction feature 26 or which maps the prediction feature 26 into the real world.
  • the second object feature 25 can be associated with the prediction feature 26 . That is, it is confirmed that the second object feature 25 describes the object 9 in the environmental region 8 .
  • It can be the case that a moved object 9, in particular a pedestrian, changes its direction of movement or its speed. Since the object features 24, 25 have been determined based on the optical flow, it can be the case that an object feature 24, 25 cannot be determined if the object 9 or the pedestrian currently stands still. Further, it can be the case that the moved object 9 changes its direction of movement.
  • the picture 9 ′ of the object 9 or of the pedestrian moves to the left.
  • The prediction feature 26, which has been determined based on the movement model, and the second object feature 25, which has been determined based on the regions of interest 16, show a good correspondence.
  • the second object feature 25 is confirmed as originating from the object 9 .
  • the object 9 or the pedestrian stops. In this case, a second object feature 25 cannot be determined.
  • the last confirmed object feature 41 is shown, which has been confirmed as originating from the object 9 . This corresponds to the second object feature 25 , which has been confirmed at the point of time t 1 .
  • the object 9 again moves to the right.
  • a second object feature 25 can be determined.
  • an association probability p between the prediction feature 26 and the second object feature 25 results.
  • an association probability p L between the last confirmed object feature 41 and the second object feature 25 is determined. Since the association probability p L is greater than the association probability p, the movement of the object 9 at a point of time t 4 is determined based on the last confirmed object feature 41 .
  • Moreover, a spatial similarity between a prediction position P2, which describes the position of the object 9 based on the movement model, and an object position 43 is determined. This is illustrated in FIG. 17.
  • the prediction position P 2 is described by multiple ellipses 42 , which describe the spatial uncertainty of the position determination.
  • the object position 43 is determined, wherein the object position 43 is determined based on the second object feature 25 . Based on the prediction position P 2 and the object position 43 , a spatial similarity or a spatial likelihood can then be determined.
  • In the update step, the state vector and the associated covariance matrix can be determined as: x̂_k|k = x̂_k|k−1 + K·(z_k − ẑ_k) and P_k|k = (I − K·H)·P_k|k−1.
  • z_k describes the data vector of the measurement or of the second object feature 25.
  • ẑ_k describes the expected data vector.
  • K describes the Kalman gain, which can be determined according to the following formula:
  • K = P_k|k−1·H^T·(H·P_k|k−1·H^T + R)^(−1)
  • H describes a measurement matrix for generating the object features 24, 25 based on the movement model.
  • R describes a noise matrix, which describes the variation of the polygon 28 in the image 10, 11.
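  • A compact constant-velocity Kalman filter matching the prediction and update equations above; the state layout, the frame interval and the noise matrices Q and R are example choices for the sketch:
      import numpy as np

      dt = 1.0 / 30.0                                   # frame interval (example)
      A = np.array([[1, 0, dt, 0],                      # system matrix (constant speed v)
                    [0, 1, 0, dt],
                    [0, 0, 1,  0],
                    [0, 0, 0,  1]])
      H = np.array([[1, 0, 0, 0],                       # measurement matrix
                    [0, 1, 0, 0]])
      Q = np.eye(4) * 0.01                              # movement-model noise
      R = np.eye(2) * 0.1                               # measurement (polygon) noise

      def predict(x_est, P_est):
          x_pred = A @ x_est                            # x̂_k|k-1 = A · x̂_k-1|k-1
          P_pred = A @ P_est @ A.T + Q                  # P_k|k-1 = A · P_k-1|k-1 · A^T + Q
          return x_pred, P_pred

      def update(x_pred, P_pred, z_k):
          z_hat = H @ x_pred                            # expected data vector ẑ_k
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
          x_est = x_pred + K @ (z_k - z_hat)            # x̂_k|k
          P_est = (np.eye(4) - K @ H) @ P_pred          # P_k|k
          return x_est, P_est

      # One cycle: predict the position from the movement model, then correct it
      # with the position measured from the second object feature.
      x0, P0 = np.array([0.0, 0.0, 1.0, 0.0]), np.eye(4)
      x_pred, P_pred = predict(x0, P0)
      x_new, P_new = update(x_pred, P_pred, np.array([0.04, 0.0]))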
  • the system model can then be determined according to the following formula, wherein w describes a weighting factor:
  • FIG. 18 shows an image 11 , in which multiple second object features 25 are associated with an object 9 or the picture 9 ′.
  • a second object feature 25 is associated with a head of the object 9
  • two second object features 25 are associated with the arms of the object 9
  • a second object feature 25 is associated with the legs of the object 9 or the pedestrian.
  • A weighting factor w can be associated with each of the second object features 25.
  • the second object feature 25 associated with the legs of the object 9 or the pedestrian has the greatest spatial similarity to the prediction feature 26 or the base point thereof.
  • the weighting factor w of 1 is associated with this second object feature 25 .
  • A weighting factor w of 0 is associated with the other second object features 25. Based on this weighting factor w, the current position or the movement of the object 9 can then be updated.
  • FIG. 19 shows a distribution of the entry probability depending on the position of the object feature 24 , 25 in the image 10 , 11 , which describes the environmental region 8 .
  • The areas 44a to 44d describe different entry probabilities in the image 10, 11.
  • In an edge area 44a, a high likelihood for an entry of a new object arises.
  • In contrast, a very low entry probability arises in a central area of the image 10, 11, which is directly in front of the motor vehicle 1.
  • an exit probability can be defined analogously to the entry probability.
  • moved objects 9 in an environmental region 8 of the motor vehicle 1 can be reliably recognized and tracked.

Abstract

The invention relates to a method for capturing an object (9) in an environmental region (8) of a motor vehicle (1) based on a sequence of images (10, 11) of the environmental region (8), which are provided by means of a camera (4) of the motor vehicle (1), including the steps of: recognizing a first object feature (24) in a first image (10) of the sequence, wherein the first object feature (24) describes at least a part of the object (9) in the environmental region (8), estimating a position of the object (9) in the environmental region (8) based on a predetermined movement model, which describes a movement of the object (9) in the environmental region (8), determining a prediction feature (26) in a second image (11) following the first image (10) in the sequence based on the first object feature (24) and based on the estimated position, determining a second object feature (25) in the second image (11), associating the second object feature (25) with the prediction feature (26) in the second image (11) if a predetermined association criterion is satisfied, and confirming the second object feature (25) as originating from the object (9) if the second object feature (25) is associated with the prediction feature (26).

Description

The present invention relates to a method for capturing an object in an environmental region of a motor vehicle based on a sequence of images of the environmental region, which are provided by means of a camera of the motor vehicle. Moreover, the present invention relates to a camera system for performing such a method as well as to a motor vehicle with such a camera system.
Camera systems for motor vehicles are already known from the prior art in various configurations. As is known, such a camera system includes a camera, which is arranged at the motor vehicle and which captures an environmental region of the motor vehicle. The camera system can also have multiple such cameras, which can capture the entire environment around the motor vehicle. The camera arranged at the motor vehicle provides a sequence of images of the environmental region and thus captures a plurality of images per second. This sequence of images can then be processed with the aid of an electronic image processing device. Thus, objects in the environmental region of the motor vehicle can for example be recognized.
Herein, it is required that the objects in the images provided by the camera are reliably recognized. Herein, it is required that the objects, thus for example further traffic participants or pedestrians, can be differentiated from further objects in the environmental region. Herein, the capture of moved objects in the environmental region in particular presents a challenge. Since these moved objects are located in different positions in the temporally consecutive images, it is required to recognize the objects in the individual images.
Hereto, WO 2014/096240 A1 describes a method for detecting a target object in an environmental region of a camera based on an image of the environmental region provided by means of the camera. Therein, a plurality of characteristic features is determined in the image, wherein it is differentiated between ground features describing a ground of the environmental region and target features associated with a target object. Therein, at least a partial region of the image can be divided into a plurality of image cells. In the respective image cells, the associated optical flow vector can then be determined for each characteristic feature. Subsequently thereto, the image cells can be combined into a region of interest.
Moreover, WO 2014/096242 A1 describes a method for tracking a target object in an environmental region of a motor vehicle based on a sequence of images of the environmental region, which are provided by means of a camera of the motor vehicle. Therein, the target object is detected in the environmental region and the target object is tracked by determining optical flow vectors to the target object based on the sequence of images during a relative movement between the motor vehicle and the target object. If a standstill state, in which both the motor vehicle and the target object come to a standstill, is detected, the current relative position of the target object with respect to the motor vehicle is stored and it is examined if a predetermined criterion with respect to the relative movement is satisfied. After the predetermined criterion is satisfied, the tracking of the target object is continued starting from the stored relative position.
It is the object of the present invention to demonstrate a solution, how in particular moved objects in an environmental region of a motor vehicle can be more reliably captured with the aid of a camera.
According to the invention, this object is solved by a method, by a camera system as well as by a motor vehicle having the features according to the respective independent claims. Advantageous developments of the present invention are the subject matter of the dependent claims.
In an embodiment of a method for capturing an object in an environmental region of a motor vehicle based on a sequence of images of the environmental region, which are provided by means of a camera of the motor vehicle, a first object feature is in particular recognized in a first image of the sequence, wherein the first object feature preferably describes at least a part of the object in the environmental region. Furthermore, a position of the object in the environmental region is in particular estimated based on a predetermined movement model, which describes a movement of the object in the environmental region. Moreover, a prediction feature is in particular determined in a second image following the first image in the sequence based on the first object feature and based on the estimated position. Further, a second object feature is preferably determined in the second image. Moreover, association of the second object feature with the prediction feature is preferably effected in the second image if a predetermined association criterion is satisfied. Finally, the second object feature is preferably confirmed as originating from the object if the second object feature is associated with the prediction feature.
A method according to the invention serves for capturing an object in an environmental region of a motor vehicle based on a sequence of images of the environmental region, which are provided by means of a camera of the motor vehicle. The method involves recognizing a first object feature in a first image of the sequence, wherein the first object feature describes at least a part of the object in the environmental region. Furthermore, the method includes estimating a position of the object in the environmental region based on a predetermined movement model, which describes a movement of the object in the environmental region. Moreover, it is provided that a prediction feature is determined in a second image following the first image in the sequence based on the first object feature and based on the estimated position. Furthermore, the method includes determining a second object feature in the second image and associating the second object feature with the prediction feature in the second image if a predetermined association criterion is satisfied. Finally, the method involves confirming the second object feature as originating from the object if the second object feature is associated with the prediction feature.
With the aid of the method, objects and in particular moved objects are to be recognized in the environmental region of the motor vehicle. The method can be performed by a camera system of the motor vehicle, which has at least one camera. A sequence of images of the environmental region is provided by this camera. Thus, the sequence describes a temporal succession of images captured for example with a predetermined repetition rate. Further, the camera system can have an image processing unit, by means of which the images can be evaluated. Therein, it is first provided that a first object feature is recognized in a first image of the sequence of images. This first object feature can completely describe the object in the environmental region. It can also be provided that the first object feature only describes a part of the object in the environmental region. Basically, it can be provided that multiple first object features are determined, which all describe the object in the environmental region. The first object feature can in particular describe the position of the part of the object and/or the dimensions of the part of the object in the first image. Moreover, it is provided that the movement of the object in the environmental region is estimated. Hereto, a predetermined movement model is used, which describes the movement of the object in the environmental region of the motor vehicle, thus in the real world. The movement model can for example describe a linear movement of the object into a predetermined direction. Based on the movement model, the position of the object in the environmental region can be continuously estimated.
Therein, it is provided that the position is estimated for that point of time, at which a second image of the sequence is or has been captured. In this second image, which follows the first image in the sequence of images, a prediction feature is then determined. The prediction feature can be determined based on the first object feature. For example, for determining the prediction feature, the first object feature from the first image can be used and be shifted such that the position of the prediction feature in the second image describes the estimated position in the environmental region. Thus, the estimated variation of the position of the object is transferred into the second image.
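A minimal Python sketch of how such a prediction feature could be derived, assuming a hypothetical helper world_to_image that maps an estimated world position into image coordinates via the camera calibration; using the lowest polygon vertex as a simple stand-in for the base point is likewise an assumption:

```python
import numpy as np

def predict_feature(first_feature_px, est_world_pos, world_to_image):
    """Shift the first object feature so that its base point lies at the image
    position corresponding to the position estimated by the movement model.

    first_feature_px : (N, 2) array of polygon vertices in the first image
    est_world_pos    : (x, y) position of the object estimated in the real world
    world_to_image   : hypothetical calibration mapping from world to image coordinates
    """
    pts = np.asarray(first_feature_px, dtype=float)
    # Use the lowest vertex (largest image y) as a simple stand-in for the base point.
    base_point = pts[np.argmax(pts[:, 1])]
    predicted_base = np.asarray(world_to_image(est_world_pos), dtype=float)
    # Translate the whole polygon by the shift of the base point.
    return pts + (predicted_base - base_point)
```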
In the second image, a second object feature is then recognized. This second object feature also describes at least a part of the object in the environmental region of the motor vehicle in analogous manner to the first object feature. Thus, the second object feature is determined analogously to the first object feature. Moreover, it is examined if the second object feature can be associated with the prediction feature. Thus, it is examined to what extent the second object feature corresponds with the prediction feature, to the determination of which the movement model in the real world was used. If a predetermined association criterion is satisfied, the second object feature is associated with the prediction feature in the second image. The association criterion can for example describe, how similar the second object feature and the prediction feature are to each other. If the second object feature has been associated with the prediction feature in the second image, it can be assumed that the second object feature also describes the object in the environmental region. Since the association of the second object feature with the prediction feature has been effected, it can be assumed that the object has moved according to the predetermined movement model. Thus, it can also be assumed with high likelihood that the first object feature in the first image and the second object feature in the second image describe the same object in the environmental region of the motor vehicle. Thus, the object can be reliably recognized and tracked over the sequence of images.
Preferably, the object is recognized as moving relative to the motor vehicle if the second object feature is confirmed as originating from the object. If the second object feature is confirmed, it can be assumed with relatively high likelihood that the second object feature describes the object or at least a part thereof. This means that the position of the first object feature in the first image has changed to the position of the second object feature in the second image as predicted by the movement model, which describes the movement of the object in the real world. Thereby, the object can be identified or classified as a moved object. This applies both to the case that the motor vehicle itself moves and to the case that the motor vehicle stands still.
According to a further embodiment, an association probability between the second object feature and the prediction feature is determined and the predetermined association criterion is deemed as satisfied if the association probability exceeds a predetermined value. Basically, it can be provided that multiple object features are recognized in the images and a prediction feature is then determined for each of the recognized object features in the image following in the sequence. For the second object feature and the associated prediction feature, the association probability is then determined. The association probability in particular describes the similarity between the second object feature and the prediction feature. If the second object feature and the prediction feature are identical, the association probability can be 100% or have the value of 1. If the second object feature and the prediction feature completely differ, the association probability can be 0% or have the value of 0. If the association probability now exceeds a predetermined value or a threshold value, the second object feature can be associated with the prediction feature. The predetermined value or the threshold value can for example be 75% or 0.75. Thus, it can be examined in simple manner if the second object feature can be associated with the prediction feature.
Furthermore, it is advantageous if the association probability is determined based on an overlap between the second object feature and the prediction feature in the second image and/or based on dimensions of the second object feature and the prediction feature in the second image and/or based on a distance between the centers of gravity of the second object feature and the prediction feature in the second image and/or based on a distance between the object and a prediction object associated with the prediction feature in the environmental region. Both the second object feature and the prediction feature can cover a certain shape or a certain area in the second image. In particular, it is provided that both the second object feature and the prediction feature are determined as a polygon. Basically, the second object feature and the prediction feature can also be determined as another geometric shape. Herein, it can be determined on the one hand to what extent the second object feature and the prediction feature overlap in the second image. If the second object feature and the prediction feature overlap in a relatively large area, a high association probability can be assumed. If the second object feature and the prediction feature do not overlap, a low association probability can be assumed. The examination to what extent the second object feature and the prediction feature overlap is therein effected in the second image.
Alternatively or additionally, the respective dimensions of the second object feature and the prediction feature can be considered in the second image. Therein, the lengths, the heights and/or the areas of the second object feature and the prediction feature can be compared to each other. In this manner, too, the similarity between the second object feature and the prediction feature in the second image, and thus the association probability, can be determined. Furthermore, it can be provided that the center of gravity of the second object feature and the center of gravity of the prediction feature are determined and a distance between these two centers of gravity is determined. The lower the distance between the centers of gravity, the greater the association probability. Furthermore, it can be provided that a prediction object is determined, which describes the mapping of the prediction feature into the real world. In the real world or in the environmental region of the motor vehicle, the distance between the object and the prediction object can then be determined. The lower this distance, the higher the association probability. Thus, it can be examined, both in the images and in the real world, how closely the object features correspond to the object that follows the movement model.
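A sketch of how the individual similarity measures named above could be computed, assuming axis-aligned bounding boxes (x0, y0, x1, y1) as simplified stand-ins for the polygon-shaped features; the scale parameter of the centroid measure is an arbitrary assumption:

```python
import numpy as np

def overlap_ratio(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x0, y0, x1, y1)."""
    x0, y0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x1, y1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def size_similarity(box_a, box_b):
    """Ratio of the smaller to the larger box area (1.0 means identical size)."""
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return min(area_a, area_b) / max(area_a, area_b)

def centroid_similarity(box_a, box_b, scale=100.0):
    """Map the distance between the box centers (in pixels) to a value in (0, 1]."""
    ca = np.array([(box_a[0] + box_a[2]) / 2.0, (box_a[1] + box_a[3]) / 2.0])
    cb = np.array([(box_b[0] + box_b[2]) / 2.0, (box_b[1] + box_b[3]) / 2.0])
    return float(np.exp(-np.linalg.norm(ca - cb) / scale))
```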
In a further configuration, if the association of the second object feature with the prediction feature is omitted, an association probability between a last confirmed object feature and the second object feature is determined, wherein the last confirmed object feature describes that object feature which was last confirmed as originating from the object. If the association probability between the second object feature and the prediction feature falls below the predetermined value, the second object feature is not associated with the prediction feature. This can occur because the object in the environmental region no longer moves according to the movement model, for example if the object has stopped or has changed its direction of movement. If moved elements in the second image are used for determining the second object feature, it can also be the case that the second object feature cannot be determined, or cannot be determined sufficiently, if the object stands still during the capture of the second image. In this case, too, association of the second object feature with the prediction feature is not possible. If the association of the second object feature with the prediction feature has not been effected, it is examined if the second object feature can be associated with that object feature which was last confirmed as originating from the object in one of the preceding images. Thus, it can be determined where the second object feature is relative to the position in which the object has certainly been at an earlier point of time. Based on this information, it can be determined if the object in the environmental region has changed its direction, has changed its speed of movement or currently stands still.
Furthermore, it is advantageous if the prediction feature is determined starting from a position in the environmental region, which is associated with the last confirmed object feature, if the association probability between the last confirmed object feature and the second object feature is greater than the association probability between the second object feature and the prediction feature. Basically, both the association probability to the second object feature in the second or current image and the association probability to that object feature from one of the preceding images, which was actually confirmed as originating from the object, can be determined. If the object in the environmental region no longer follows the predetermined movement model, it is required to change the movement model. Therein, it is in particular provided that the movement model is determined starting from that object feature which was last confirmed. Thus, the case that the object in the real world has changed its direction of movement or actually stands still can be reliably taken into account.
According to a further embodiment, an object position in the environmental region is determined based on the second object feature, a prediction position in the environmental region is determined based on the prediction feature, a spatial similarity between the object position and the prediction position is determined and a current position of the object in the environmental region is determined based on the association probability and the spatial similarity. Based on the prediction feature, which is determined based on the movement model, a prediction position is determined. This prediction position describes the current position of the object considering the movement model. Therein, the prediction position can be output with a predetermined spatial uncertainty. Based on the second object feature, the current position in the environmental region can also be determined. Based on this object position and the prediction position, a spatial similarity or a spatial likelihood can then be determined. Based on this spatial similarity, a weighting factor can then be determined, based on which the current position of the object can be determined for the tracking. This allows reliably tracking the object.
Furthermore, it is in particular provided that, if at least two second object features are associated with the prediction feature, the current position of the object is determined based on that second object feature whose object position has the greater spatial similarity to the prediction position of the prediction feature. Basically, it can be the case that multiple object features are respectively determined in the images but originate from a single object in the real world. If the object in the environmental region is for example a pedestrian, one object feature can describe the head of the pedestrian, another object feature can describe the body and further object features can respectively describe the legs of the pedestrian. These second object features can then be confirmed as originating from the object. When it now comes to determining the current position of the object in the environmental region or to updating the tracking of the object, that second object feature is used which has the greatest spatial similarity to the prediction feature. For example, the second object feature associated with the head of the pedestrian can have no or a very low spatial similarity to the prediction feature. However, the second object feature associated with a leg of the pedestrian can have a high spatial similarity to the prediction feature. This in particular applies to the case in which the base point of the prediction feature is compared to the respective base points of the second object features for determining the spatial similarity. This allows reliable determination of the current position of the object in the environmental region and further reliable tracking of the object.
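As a small illustration of this selection step, a sketch that picks, among several confirmed second object features, the one whose base point lies closest to the base point of the prediction feature; the function name and inputs are hypothetical:

```python
import numpy as np

def select_best_feature(prediction_base, feature_bases):
    """Index of the confirmed second object feature whose base point is closest
    to the base point of the prediction feature (greatest spatial similarity)."""
    dists = [np.linalg.norm(np.asarray(b, dtype=float) - np.asarray(prediction_base, dtype=float))
             for b in feature_bases]
    return int(np.argmin(dists))
```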
According to a further embodiment, it is provided that if a further object feature is recognized in one of the images, it is examined if the further object feature originates from an object that has entered the environmental region, wherein the examination is based on an entry probability depending on a position of the further object feature in the image. Basically, it can be the case that multiple object features are recognized in the images or in the second image. Now, it is to be examined if these object features originate from an already recognized object or if the object feature describes a new object, which was not previously captured. Therefore, it is examined if the further object feature describes an object which has entered or moved into the environmental region. In the examination, an entry probability is taken into account, which can also be referred to as a birth probability. This entry probability depends on the position of the object feature in the image. Herein, it is taken into account that in the edge areas of the image, the likelihood is high that the object has newly entered the environmental region depicted in the images. For an area which is directly in front of or behind the motor vehicle and is for example arranged in a central area of the image, a low entry probability is assumed, since it is unlikely that a further object has entered this area. If the further object feature has been recognized as originating from a new object, this new object can also be correspondingly tracked or its position can be determined.
Furthermore, it can be provided that it is examined if an object has exited the environmental region. This can for example be the case if the object is tracked in the images and can no longer be recognized in one of the images. For example, this can be the case if the first object feature in the first image is in an edge area of the image and the object feature can no longer be captured in the second image. In this case, an exit probability can be defined analogously to the previously described entry probability and it can be examined based on this exit probability if the object has exited the environmental region. Herein too, it can be taken into account that the exit probability is higher if the object feature is in an edge area of the image than for the case that the object feature is in a central area of the image.
Furthermore, it is in particular provided that the second object feature is determined as a polygon, wherein the polygon has a left base point, a central base point, a right base point and/or a tip point, and wherein the polygon describes a width and/or a height of the object. The second object feature can be described as an object in the second image. Therein, the polygon in particular has the left, the central, the right base point as well as a tip point. Therein, the central base point can be determined as the point of intersection between a connecting line between a vanishing point and the center of gravity of the polygon. The width of the object is reproduced by the right and the left base point. The height of the object can be described by the tip point. This polygon can be determined in simple manner and within a short computing time. In addition, the polygon is suitable for describing the spatial dimensions of the object. Therein, it is in particular provided that a plurality of regions of interest is determined in the second image, the regions of interest are grouped and the respective polygon is determined based on the grouped regions of interest. Regions of interest can respectively be determined in the second image or in the respective images of the image sequence. These regions of interest in particular describe those pixels or areas of the image, which depict a moved object. For example, the image or the second image can first be divided into a plurality of partial areas or image cells and it can be examined for each of the image cells if it depicts a moved object or a part thereof. Herein, a weighting matrix can further be taken into account, in which a first value or a second value is associated with each of the image cells, according to whether or not the respective image cell describes a moved object. Those image cells describing moved objects can then be correspondingly grouped and the regions of interest can be determined herefrom. After the respective regions of interest have been determined, it can be examined if these regions of interest originate from the same object. Thus, it can be determined if the respective regions of interest can be grouped. As soon as the regions of interest have then been grouped, the polygon can be determined based on the area, which the grouped regions of interest occupy in the second image, which then describes the second object feature.
Further, it is advantageous if the second image is divided into a plurality of image cells, object cells describing a moved object are selected from the image cells based on an optical flow and the object cells are associated with one of the regions of interest. As already explained, the second image can be divided into a plurality of image cells. Therein, each of the image cells can include at least one pixel. In particular, it is provided that each of the image cells includes a plurality of pixels. The optical flow or the optical flow vector can then be determined for each of the image cells. Thus, it can be reliably examined whether or not the pixels in the image cell describe a moved object. Those image cells describing a moved object are considered as object cells and can be combined to a region of interest. This allows simple and reliable determination of the respective regions of interest.
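A rough Python sketch of this step under the assumption that a dense optical flow field is already available (e.g. from cv2.calcOpticalFlowFarneback); the cell size, the flow-magnitude threshold and the use of the median as the per-cell statistic are assumptions, not the patented implementation:

```python
import numpy as np
from scipy import ndimage

def object_cells_from_flow(flow, cell=10, min_mag=1.0):
    """Mark image cells whose dominant optical flow indicates a moved object.

    flow : (H, W, 2) dense optical flow field, e.g. from cv2.calcOpticalFlowFarneback
    Returns a binary weighting matrix with one entry per image cell.
    """
    mag = np.linalg.norm(flow, axis=2)
    rows, cols = flow.shape[0] // cell, flow.shape[1] // cell
    weight = np.zeros((rows, cols), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            patch = mag[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            if np.median(patch) > min_mag:       # cell is treated as an object cell
                weight[r, c] = 1
    return weight

def regions_of_interest(weight):
    """Group adjacent object cells into candidate regions of interest (cell slices)."""
    labels, _ = ndimage.label(weight)
    return ndimage.find_objects(labels)
```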
Furthermore, it is advantageous if a roadway is recognized in the second image by means of segmentation and the regions of interest are determined based on the recognized roadway. With the aid of a corresponding segmentation method, a roadway, on which the motor vehicle is located, can be recognized in the second image. Therein, the roadway can be in front of the motor vehicle in direction of travel or behind the motor vehicle in direction of travel according to orientation of the camera. In the images describing the environmental region of the motor vehicle, the roadway or the ground can now be recognized with the aid of the segmentation method. If the moved object is also located on the roadway, the boundaries between the roadway and the object moving on the roadway can be reliably determined by recognizing the roadway. This allows precise determination of the regions of interest.
Presently, the determination of the polygons based on the regions of interest was described based on the second object features in the image. The first object feature in the first image and all of the further object features in the images can also be determined as polygons in analogous manner. It is in particular provided that the prediction feature is also determined as a polygon. Therein, the prediction feature can be determined based on the first object feature and the position of the polygon and/or the size of the polygon describing the prediction feature can be adapted based on the movement model in the second image.
A camera system according to the invention for a motor vehicle includes at least one camera and an electronic image processing device. The camera system is adapted to perform a method according to the invention or an advantageous configuration thereof.
A motor vehicle according to the invention includes a camera system according to the invention. The motor vehicle is in particular formed as a passenger car.
The preferred embodiments presented with respect to the method according to the invention and the advantages thereof correspondingly apply to the camera system according to the invention as well as to the motor vehicle according to the invention.
Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or alone, without departing from the scope of the invention. Thus, implementations which are not explicitly shown in the figures and explained, but which arise from and can be generated by separate feature combinations from the explained implementations, are also to be considered as encompassed and disclosed by the invention. Implementations and feature combinations which do not have all of the features of an originally formulated independent claim are also to be considered as disclosed. Moreover, implementations and feature combinations are to be considered as disclosed, in particular by the implementations set out above, which extend beyond or deviate from the feature combinations set out in the back-references of the claims.
Now, the invention is explained in more detail based on preferred embodiments as well as with reference to the attached drawings.
The figures show:
FIG. 1 a motor vehicle according to an embodiment of the present invention, which has a camera system with a plurality of cameras;
FIG. 2 a schematic flow diagram of a method for determining regions of interest in the images, which are provided by the cameras;
FIG. 3 an image, which is provided with the aid of the cameras, which is divided into a plurality of image cells;
FIG. 4 areas in the image, which are used for determining the regions of interest;
FIG. 5 object cells in the image, which are associated with the moved object, before and after dilation;
FIG. 6 the individual image cells, over which a sliding window is shifted for determining the regions of interest;
FIG. 7 a region of interest in the image, which is upwards corrected;
FIG. 8 two regions of interest in the image, wherein the one region of interest is downwards corrected and the other region of interest is scaled down;
FIG. 9 the regions of interest, which are associated with a pedestrian in the image, who is located on a roadway;
FIG. 10 regions of interest, which are each combined in groups;
FIG. 11 a schematic flow diagram of a method for tracking the object;
FIG. 12 a schematic representation of the determination of a polygon based on grouped regions of interest;
FIG. 13 the polygon, which has a left, a central and a right base point as well as a tip point;
FIG. 14 a schematic representation of the determination of an object feature based on a movement model in the real world;
FIG. 15a-15d object features, which are compared to prediction features;
FIG. 16 object features and prediction features at different points of time;
FIG. 17 a diagram, which describes the spatial similarity between the object and a prediction object in the real world;
FIG. 18 a pedestrian as a moved object, with which a plurality of object features are associated; and
FIG. 19 a diagram, which describes an entry probability of an object depending on a position in the image.
In the figures, identical and functionally identical elements are provided with the same reference characters.
FIG. 1 shows a motor vehicle 1 according to an embodiment of the present invention in a plan view. Presently, the motor vehicle 1 is formed as a passenger car. The motor vehicle 1 includes a camera system 2, which includes at least one camera 4. In the present embodiment, the camera system 2 includes four cameras 4, wherein one of the cameras 4 is arranged in a rear area 5, one of the cameras 4 is arranged in a front area 7 and two of the cameras 4 are arranged in respective lateral areas 6 of the motor vehicle 1.
With the aid of the camera system 2, objects 9 in an environmental region 8 of the motor vehicle 1 can be captured. Hereto, a sequence of images 10, 11 is provided by each of the cameras 4. This sequence of images 10, 11 is then transmitted to an electronic image processing device 3 of the camera system 2. The objects 9 in the environmental region 8 can then be recognized in the images 10, 11 with the aid of the electronic image processing device 3.
In particular, moved objects 9 in the environmental region 8 are to be recognized with the aid of the camera system 2. Hereto, a method of three-dimensional image processing is used. As explained in more detail below, first, regions of interest 16 are determined in the images 10, 11, which describe a moved object 9. Subsequently, object features 24, 25 are determined in the images 10, 11 based on the regions of interest 16, which describe the object 9 in more detail. Therein, it is further provided that the movement of the object 9 is tracked.
FIG. 2 shows a schematic flow diagram of a method for determining regions of interest 16 in the images 10, 11. In a first step S1, an image 10, 11 provided by one of the cameras 4 is divided into a plurality of image cells 12. Therein, each of the image cells 12 can include at least one pixel. In particular, it is provided that each of the image cells 12 has a plurality of pixels. For example, each image cell 12 can have 10×10 pixels. Furthermore, object cells 12′ are determined. The object cells 12′ describe those image cells 12, which describe a moved object 9. In a step S2, a weighting matrix is then determined based on the image cells 12 and the object cells 12′. In a step S3, regions of interest 16 are then determined in the image 10, 11 based on the weighting matrix. These regions of interest 16 are subsequently corrected in a step S4. Finally, the regions of interest 16 are combined in a step S5.
FIG. 3 shows an image 10, 11, which has been provided by one of the cameras 4. Here, it is apparent that the image 10, 11 is divided into a plurality of image cells 12. Therein, the number of pixels in the respective image cells 12 can be determined. Based on the image cells 12, the weighting matrix can be determined. Therein, a height 13 of the weighting matrix results from the number of rows of image cells 12 and a width 14 of the weighting matrix results from the number of columns of image cells 12.
The image 10, 11 shows the object 9, which is located in the environmental region 8. The object 9 is a moving object in the form of a pedestrian. This object 9 is now to be recognized in the image 10, 11. Hereto, an optical flow or a flow vector is determined in each of the image cells 12, which describes the movement of an object 9. If a flow vector has been determined with a sufficient confidence value, that image cell 12 is recognized as the object cell 12′ and identified in the weighting matrix or a value associated with the object cell 12′ in the weighting matrix is varied. Therein, the threshold value for a sufficient confidence value depends on the respective region 15 in the image 10, 11. Hereto, FIG. 4 shows different regions 15 in the image 10, 11. Presently, the regions 15 differ depending on a distance to the motor vehicle 1. Further, the threshold values for determining the confidence value can be adjustable and be dependent on the current speed of the motor vehicle 1.
FIG. 5 shows the image cells 12 and the object cells 12′, which have been recognized as originating from the moved object 9. On the left side of FIG. 5, it is apparent that the object cells 12′ do not form a contiguous area. Since a completely contiguous area has not been recognized in the image 10, 11, a sparsely populated weighting matrix is also present in this area. In order to counter this problem, a morphological operation, in particular the dilation, is applied. Objects in a binary image can be enlarged or thickened by the dilation. Therein, a dilation of the binary weighting matrix W is effected with a structuring element H. This can be described by the following formula:
$W \oplus H = \bigcup_{p \in W} H_p = \bigcup_{q \in H} W_q$
Therein, $H_p$ describes the structuring element H, which has been shifted by p. $W_q$ describes the weighting matrix W, which has been shifted by q. Herein, q and p describe the directions. Therein, the structuring element H is a 3×3 matrix. The result of the dilation is represented on the right side of FIG. 5. By filling the weighting matrix and the morphological filtering, an integral image of the weighting matrix is determined. The integral image II can be described by the following formula:
$II(X, Y) = \sum_{x \le X} \sum_{y \le Y} W(x, y)$
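A small numpy/scipy sketch of the dilation of a sparsely populated binary weighting matrix and of the subsequent integral image; the example values are arbitrary:

```python
import numpy as np
from scipy import ndimage

# Binary weighting matrix W: 1 where a cell was recognized as an object cell.
W = np.zeros((8, 10), dtype=np.uint8)
W[3, 4] = W[5, 5] = W[4, 6] = 1              # sparsely populated example

# Dilation with a 3x3 structuring element H thickens the populated area.
H = np.ones((3, 3), dtype=np.uint8)
W_dilated = ndimage.binary_dilation(W, structure=H).astype(np.uint8)

# Integral image II(X, Y): sum of all weights with x <= X and y <= Y.
II = W_dilated.cumsum(axis=0).cumsum(axis=1)
```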
Based on the object cells 12′, which have been recognized as originating from a moved object 9, regions of interest 16 are now to be determined. This is explained in connection with FIG. 6. Hereto, a generator can be used, which determines an object hypothesis based on the image cells 12, the object cells 12′ and the weights thereof. Hereto, a sliding window 17 is used, which is shifted over the individual image cells 12 and object cells 12′. Based on the integral image, thus, each of the weights is determined for each of the regions of interest 16. The weighted sum wROI can be determined according to the following formula:
$w_{ROI} = II(x+w, y+h) - II(x+w, y) - II(x, y+h) + II(x, y)$
Herein, x and y describe the position of the lower left corner of the region of interest 16, and w and h describe the width and the height of the region of interest 16. If the weighted sum $w_{ROI}$ is greater than a threshold value, the region of interest 16 is marked as a hypothesis. If the region of interest 16 is marked as a hypothesis, the search for further regions of interest 16 in the current column is aborted and continued in the next column. As indicated in FIG. 6, this is performed for all of the columns.
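The column-wise hypothesis search can be sketched as follows, with the integral image zero-padded so that the four-corner formula above applies directly; window size, threshold and the row/column indexing convention are assumptions:

```python
import numpy as np

def window_weight(II, x, y, w, h):
    """Weighted sum of a sliding window from the zero-padded integral image II;
    x and y are the column and row index of the window's first cell, w and h its
    width and height in cells."""
    return II[y + h, x + w] - II[y, x + w] - II[y + h, x] + II[y, x]

def find_hypotheses(weight, w, h, threshold):
    """Slide a w x h window over the weighting matrix; in each column keep the first
    window whose weighted sum exceeds the threshold, then continue with the next column."""
    II = np.pad(weight.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    rows, cols = weight.shape
    hypotheses = []
    for x in range(cols - w + 1):
        for y in range(rows - h + 1):
            if window_weight(II, x, y, w, h) > threshold:
                hypotheses.append((x, y, w, h))
                break                            # abort this column, continue with the next
    return hypotheses
```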
For each of the columns, it is examined if a rectangle 18 can be formed from the sliding window 17, which includes the object cells 12′. Therein, it is further provided that the regions of interest 16 are corrected. Hereto, FIG. 7 shows an example, in which the region of interest 16 or the sliding window 17 is upwards corrected. Herein, the upper boundary of the sliding window 17 is upwards shifted such that the object cells 12′ associated with the moved object 9 are included in the sliding window 17. The rectangle 18 is then determined from the corrected sliding window 17.
Further, FIG. 8 shows an example, in which the lower boundary of the sliding window 17 is downwards shifted such that all of the object cells 12′ are included in the sliding window 17. In addition, a further sliding window 17 is shown, which is scaled down. In this case, object cells 12′ are not present in the lowermost line of the sliding window 17. For this reason, the lower boundary of the sliding window 17 is upwards shifted.
Furthermore, it is provided that a roadway 19 is recognized in the image 10, 11. Hereto, a segmentation method is used. The roadway 19 can be recognized in the image 10, 11 with the aid of the segmentation method. Moreover, a boundary line 20 between the roadway 19 and the object 9 can be determined. Based on this boundary line 20, the rectangles 18 describing the regions of interest 16 can then be adapted. In this example, the rectangles 18 are downwards corrected. Presently, this is illustrated by the arrows 21.
Furthermore, it is provided that the respective regions of interest 16 are grouped. This is explained in connection with FIG. 10. Here, the rectangles 18 are apparent, which describe the regions of interest 16. For overlapping rectangles 18, a horizontal contact area 23 is determined in the overlap area. If this horizontal contact area 23 in the overlap area exceeds a predetermined threshold value, the regions of interest 16 are grouped such that groups 22, 22′ of regions of interest 16 arise. Hereto, the rectangles 18 or regions of interest 16 on the left side are presently combined into a first group 22 and the rectangles 18 or regions of interest 16 on the right side are combined into a second group 22′. The rectangle 18a is not added to the second group 22′ since the horizontal contact area 23a is below the threshold value.
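A greedy grouping sketch along these lines, again using axis-aligned boxes as stand-ins for the rectangles 18; the contact threshold is an assumed parameter:

```python
def horizontal_contact(rect_a, rect_b):
    """Width of the horizontal overlap between two boxes given as (x0, y0, x1, y1)."""
    return max(0.0, min(rect_a[2], rect_b[2]) - max(rect_a[0], rect_b[0]))

def group_rectangles(rects, min_contact):
    """Greedily merge rectangles whose horizontal contact with a group member
    exceeds the threshold; rectangles below the threshold start their own group."""
    groups = []
    for rect in rects:
        for group in groups:
            if any(horizontal_contact(rect, member) > min_contact for member in group):
                group.append(rect)
                break
        else:
            groups.append([rect])
    return groups
```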
FIG. 11 shows a schematic flow diagram of a method for tracking the object 9 in the environmental region 8. In a step S6, object features 24, 25 are determined based on the regions of interest 16 in the images 10, 11. In a step S7, a prediction feature 26 is determined and in a step S8 the object feature 24, 25 is associated with the prediction feature 26. In a step S9, the position of the object 9 is updated. The updated position is then supplied to an object database 27 describing a state vector. In a step S10, the movement of the object 9 in the environmental region 8 is predicted based on a linear movement model. Based on this movement model, the prediction feature 26 is then determined in the step S7. Furthermore, it is provided that new object features are recognized in a step S11 and object features 24, 25 are no longer taken into account in a step S12.
The association of already existing and tracked objects 9 and newly captured objects is performed both within the images 10, 11 and in the real world. Therein, the steps S6 to S8 are performed within the sequence of images 10, 11. This is illustrated in FIG. 11 by the block 35. The steps S9 to S12 are performed in the real world or in the environmental region 8. This is illustrated in FIG. 11 by the block 36.
The determination of the object feature 24, 25 according to the step S6 is illustrated in FIG. 12. Herein, the individual rectangles 18 are shown, which are associated with the respective regions of interest 16, and which are combined to the group 22. Based on the group 22 of regions of interest 16, a polygon 28 is then determined. Presently, the polygon 28 is determined as the envelope of the rectangles 18, which describe the regions of interest 16. Moreover, a center of gravity 29 of the polygon 28 is determined. The position of the center of gravity 29 of the polygon 28 with the coordinates xs and ys can be determined according to the following formulas:
$x_S = \frac{1}{6A} \sum_{i=0}^{N-1} (x_i + x_{i+1})(x_i y_{i+1} - x_{i+1} y_i), \qquad y_S = \frac{1}{6A} \sum_{i=0}^{N-1} (y_i + y_{i+1})(x_i y_{i+1} - x_{i+1} y_i)$
Further, an area A of the polygon 28 can be determined according to the following formula:
$A = \frac{1}{2} \sum_{i=0}^{N-1} (x_i y_{i+1} - x_{i+1} y_i)$
Therein, $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$ are the coordinates of two adjacent points of the polygon 28. N is the number of points of the polygon 28.
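The shoelace formulas above translate directly into a short vectorized routine; a sketch:

```python
import numpy as np

def polygon_area_and_centroid(pts):
    """Signed area A and center of gravity (x_S, y_S) of a polygon whose vertices
    are given in order as an (N, 2) array (shoelace formula)."""
    x, y = pts[:, 0].astype(float), pts[:, 1].astype(float)
    x1, y1 = np.roll(x, -1), np.roll(y, -1)        # (x_{i+1}, y_{i+1}) with wrap-around
    cross = x * y1 - x1 * y
    A = 0.5 * cross.sum()
    xs = ((x + x1) * cross).sum() / (6.0 * A)
    ys = ((y + y1) * cross).sum() / (6.0 * A)
    return A, (xs, ys)
```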
FIG. 13 shows a further representation of the polygon 28. Herein, it is apparent that the polygon 28 has a left base point 30, a central base point 31, a right base point 32 as well as a tip point 33. Moreover, the center of gravity 29 of the polygon 28 is illustrated. The central base point 31 results from the point of intersection of a connecting line 34 connecting a vanishing point 35 to the center of gravity 29 of the polygon 28. The width of the object 9 is described by the left base point 30 and the right base point 32. The height of the object 9 is described by the tip point 33.
FIG. 14 shows the object 9 in the form of a pedestrian on the right side, which moves with a speed v relative to the motor vehicle 1. The images 10, 11 are provided by at least one camera 4 of the motor vehicle 1, which are presented on the left side of FIG. 14. Therein, a first object feature 24 is determined as the polygon 28 in a first image 10 (not illustrated here). This first object feature 24 describes the object 9, which is in a first position P1 at a point of time t1 in the real world or in the environmental region 8.
In a second image 11, which follows the first image 10 in time, the prediction feature 26 is determined based on the first object feature 24. Presently, a picture 9′ of the object 9 or of the pedestrian is shown in the second image 11. The first object feature 24 determined in the first image 10 is presently shown dashed in the second image 11. For determining the prediction feature 26, a linear movement model is used, which describes the speed v of the object 9. Thus, it can be determined, in which position P2 the object 9 is at a point of time t1+Δt.
For describing the movement of the object 9, a Kalman filter is used. Herein, it is assumed that the object 9 moves with a constant speed v. Hereto, a state vector $\hat{x}_{k-1|k-1}$ and a corresponding state matrix $P_{k-1|k-1}$ can be defined:
$\hat{x}_{k|k-1} = A \cdot \hat{x}_{k-1|k-1}$
$P_{k|k-1} = A \cdot P_{k-1|k-1} \cdot A^T + Q$
Herein, A describes the system matrix. $\hat{x}_{k-1|k-1}$ describes the state vector for the preceding point of time or for the first image 10. $P_{k-1|k-1}$ describes the state matrix for the preceding point of time or for the first image 10. Q is a noise matrix, which describes the error of the movement model and the differences between the movement model and the movement of the object 9 in the real world.
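A minimal sketch of this prediction step for a constant-velocity ground-plane state [x, y, vx, vy]; the frame interval and the entries of the noise matrix Q are assumptions:

```python
import numpy as np

dt = 1.0 / 30.0                      # frame interval of the image sequence (assumed)
# State: [x, y, vx, vy] -- ground-plane position and constant velocity.
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
Q = np.eye(4) * 0.01                 # noise matrix of the movement model (assumed)

def predict(x_prev, P_prev):
    """Prediction step: propagate state vector and state matrix with the linear model."""
    return A @ x_prev, A @ P_prev @ A.T + Q
```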
In the second image 11, which follows the first image 10 in time, a second object feature 25 can be determined based on the regions of interest 16. Now, it is to be examined if the second object feature 25 can be associated with the prediction feature 26. Hereto, FIG. 15a to FIG. 15d show different variants of how the association between the second object feature 25 and the prediction feature 26 can be examined. For example, as shown in FIG. 15a, an overlap area 36 between the second object feature 25 and the prediction feature 26 can be determined. Further, a distance 37 between the center of gravity 29 of the second object feature 25 or the polygon 28 and a center of gravity 38 of the prediction feature 26 can be determined. This is illustrated in FIG. 15b. Moreover, a size of the second object feature 25 can be compared to a size of the prediction feature 26. This is illustrated in FIG. 15c. Further, a distance 39 between the object 9 and a prediction object 40 can be determined, wherein the prediction object 40 has been determined based on the prediction feature 26 or maps the prediction feature 26 into the real world. This is illustrated in FIG. 15d.
All of these criteria shown in FIGS. 15a to 15d can be examined to determine a quality level $q_m$, which can have a value between 0 and 1. Overall, an association probability $p_j$ can then be determined according to the following formula:
$p_j = \sum_m w_m q_m$
If the association probability $p_j$ exceeds a predetermined threshold value, the second object feature 25 can be associated with the prediction feature 26. That is, it is confirmed that the second object feature 25 describes the object 9 in the environmental region 8.
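A sketch of the combination of quality levels (such as those sketched earlier) into the association probability; the normalization by the sum of the weights and the example values are assumptions:

```python
import numpy as np

def association_probability(qualities, weights):
    """Combine quality levels q_m in [0, 1] into p_j; here normalized by the sum
    of the weights so that p_j also stays in [0, 1] (an assumption)."""
    q = np.asarray(qualities, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w * q).sum() / w.sum())

# Example with four quality levels (overlap, centroid distance, size, world distance).
p_j = association_probability([0.8, 0.9, 0.7, 0.85], [1.0, 1.0, 1.0, 1.0])
second_feature_associated = p_j > 0.75       # predetermined threshold value
```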
In real scenes or traffic situations, it is usually the case that a moved object 9, in particular a pedestrian, changes its direction of movement or its speed. Since the object features 24, 25 have been determined based on the optical flow, it can be the case that an object feature 24, 25 cannot be determined if the object 9 or the pedestrian currently stands still. Further, it can be the case that the moved object 9 changes its direction of movement.
This is illustrated in connection with FIG. 16. At a point of time t1, the picture 9′ of the object 9 or of the pedestrian moves to the left. Here, the prediction feature 26, which has been determined based on the movement model, and the second object feature 25, which has been determined based on the regions of interest 16, show a good correspondence. Thus, the second object feature 25 is confirmed as originating from the object 9. At a point of time t2, the object 9 or the pedestrian stops. In this case, a second object feature 25 cannot be determined. Here, the last confirmed object feature 41 is shown, which has been confirmed as originating from the object 9. This corresponds to the second object feature 25, which has been confirmed at the point of time t1. At a point of time t3, the object 9 again moves to the right. Here, a second object feature 25 can be determined. Here, an association probability $p$ between the prediction feature 26 and the second object feature 25 results. In addition, an association probability $p_L$ between the last confirmed object feature 41 and the second object feature 25 is determined. Since the association probability $p_L$ is greater than the association probability $p$, the movement of the object 9 at a point of time t4 is determined based on the last confirmed object feature 41.
Moreover, it is provided that a spatial similarity between a prediction position P2, which describes the position of the object 9 based on the movement model, and an object position 43 is determined. This is illustrated in FIG. 17. Herein, the prediction position P2 is described by multiple ellipses 42, which describe the spatial uncertainty of the position determination. Moreover, the object position 43 is determined based on the second object feature 25. Based on the prediction position P2 and the object position 43, a spatial similarity or a spatial likelihood can then be determined. For each associated measurement or for each object feature 24, 25, a state vector and the associated covariance matrix can be determined:
$\hat{x}_{k|k}^j = \hat{x}_{k|k-1} + K (z_k^j - \hat{z}_k)$
$P_{k|k} = P_{k|k-1} - K H P_{k|k-1}$
Therein, $z_k^j$ describes the data vector of the measurement or of the second object feature 25. $\hat{z}_k$ describes the expected data vector. K describes the Kalman gain, which can be determined according to the following formula:
$K = P_{k|k-1} H^T \left( H P_{k|k-1} H^T + R \right)^{-1}$
Herein, H describes a measurement matrix for generating the object features 24, 25 based on the movement model, and R describes a noise matrix, which describes the variation of the polygon 28 in the image 10, 11. The system model can then be determined according to the following formula, wherein w describes a weighting factor:
$\hat{x}_{k|k} = \frac{w_j \, \hat{x}_{k|k}^j + k \, \hat{x}_{k|k-1}}{w_j + k}$
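A sketch of the correction step together with the weighted blend of the corrected and the predicted state; the blending weight k for the prediction is an assumption taken from the reconstructed formula above:

```python
import numpy as np

def update(x_pred, P_pred, z, H, R, w, k=1.0):
    """Correction step of the Kalman filter followed by a weighted blend of the
    corrected state and the prediction; the prediction weight k is an assumption."""
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_corr = x_pred + K @ (z - H @ x_pred)        # state corrected by the measurement
    P_corr = P_pred - K @ H @ P_pred              # corrected state matrix
    x_new = (w * x_corr + k * x_pred) / (w + k)   # spatial-similarity weighted blend
    return x_new, P_corr
```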
FIG. 18 shows an image 11, in which multiple second object features 25 are associated with an object 9 or the picture 9′. Presently, a second object feature 25 is associated with a head of the object 9, two second object features 25 are associated with the arms of the object 9 and a second object feature 25 is associated with the legs of the object 9 or the pedestrian. Based on the spatial similarity, a weighting factor w can be associated with each of the second object features 25. Presently, the second object feature 25 associated with the legs of the object 9 or the pedestrian has the greatest spatial similarity to the prediction feature 26 or the base point thereof. A weighting factor w of 1 is associated with this second object feature 25. A weighting factor w of 0 is associated with the other second object features 25. Based on this weighting factor w, the current position or the movement of the object can then be updated.
In the images 10, 11, further object features can be recognized. Therein, it is examined if such a feature originates from an already tracked object 9 or from a new object which has entered the environmental region 8. Hereto, an entry probability is taken into account. Hereto, FIG. 19 shows a distribution of the entry probability depending on the position of the object feature 24, 25 in the image 10, 11, which describes the environmental region 8. Therein, the areas 44a to 44d describe different entry probabilities in the image 10, 11. In an edge area 44a, a high likelihood for the entry of a new object arises. In contrast, a very low entry probability arises in a central area of the image 10, 11, which is directly in front of the motor vehicle 1.
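As a toy illustration of such an entry ("birth") probability over the image width, high towards the edges and low in the central area; the probability values and the linear profile are assumptions:

```python
def entry_probability(u, width, p_edge=0.9, p_center=0.05):
    """Toy entry ('birth') probability over the image width: high in the edge
    areas, low in the central area directly in front of the vehicle."""
    d = abs(u - width / 2.0) / (width / 2.0)   # 0 at the image centre, 1 at the edges
    return p_center + (p_edge - p_center) * d
```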
If a new object or a new object feature has been recognized in the images 10, 11, this can be correspondingly tracked. In the same manner, it can be determined if an object 9 has exited the environmental region 8 and thus can no longer be captured in the images 10, 11. Here too, an exit probability can be defined analogously to the entry probability.
Overall, thus, moved objects 9 in an environmental region 8 of the motor vehicle 1 can be reliably recognized and tracked.

Claims (11)

The invention claimed is:
1. A method for capturing an object in an environmental region of a motor vehicle based on a sequence of images of the environmental region, which are provided by means of a camera of the motor vehicle, comprising:
recognizing a first object feature in a first image of the sequence, wherein the first object feature describes at least a part of the object in the environmental region;
estimating a position of the object in the environmental region based on a predetermined movement model, which describes a movement of the object in the environmental region; determining a prediction feature in a second image following the first image in the sequence based on the first object feature and based on the estimated position;
determining a second object feature in the second image; associating the second object feature with the prediction feature in the second image if a predetermined association criterion is satisfied;
confirming the second object feature as originating from the object if the second object feature is associated with the prediction feature, wherein an association probability between the second object feature and the prediction feature is determined and the predetermined association criterion is deemed as satisfied if the association probability exceeds a predetermined value; and
determining an object position in the environmental region based on the second object feature, determining a prediction position in the environmental region based on the prediction feature, determining a spatial similarity between the object position and the prediction position, and determining a current position of the object in the environmental region based on the association probability and the spatial similarity,
wherein when the association of the second object feature with the prediction feature is omitted, an association probability between a last confirmed object feature and the second object feature is determined,
wherein the last confirmed object feature describes the object feature which was last confirmed as originating from the object, and
wherein the prediction feature is determined starting from a position in the environmental region, which is associated with the last confirmed object feature, when the association probability between the last confirmed object feature and the second object feature is greater than the association probability between the second object feature and the prediction feature.
2. The method according to claim 1, wherein the object is recognized as moving relative to the motor vehicle if the second object feature is confirmed as originating from the object.
3. The method according to claim 1, wherein the association probability is determined based on an overlap between the second object feature and the prediction feature in the second image and/or based on dimensions of the second object feature and the prediction feature in the second image and/or based on a distance between the centers of gravity of the second object feature and the prediction feature in the second image and/or based on a distance between the object and a prediction object associated with the prediction feature in the environmental region.
4. The method according to claim 1, wherein when at least two second object features are associated with the prediction feature the current position of the object is determined based on the second object feature, the object position of which has the greater spatial similarity to the prediction position of the prediction feature.
5. The method according to claim 1, wherein when a further object feature is recognized in one of the images, it is examined if the further object feature originates from an object that has entered the environmental region, wherein the examination is performed based on an entry probability, which depends on a position of the further object feature in the image.
6. The method according to claim 1, wherein the second object feature is determined as a polygon, wherein the polygon has a left base point, a central base point, a right base point and/or a tip point and wherein the polygon describes a width and/or a height of the object.
7. The method according to claim 6, wherein a plurality of regions of interest is determined in the second image, the regions of interest are grouped and the respective polygon is determined based on the grouped regions of interest.
8. The method according to claim 7, wherein the second image is divided into a plurality of image cells, object cells describing a moved object are selected from the image cells based on optical flow, and the object cells are associated with one of the regions of interest.
9. The method according to claim 7, wherein a roadway is recognized in the second image by segmentation and the regions of interest are determined based on the recognized roadway.
10. A camera system for a motor vehicle including at least one camera and an electronic image processing device, wherein the camera system is adapted to perform a method according to claim 1.
11. A motor vehicle with a camera system according to claim 10.
US16/322,333 2016-08-01 2017-07-27 Method for capturing an object in an environmental region of a motor vehicle with prediction of the movement of the object, camera system as well as motor vehicle Active 2038-03-20 US11170232B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE102016114168.2 2016-08-01
DE102016114168.2A DE102016114168A1 (en) 2016-08-01 2016-08-01 Method for detecting an object in a surrounding area of a motor vehicle with prediction of the movement of the object, camera system and motor vehicle
PCT/EP2017/069014 WO2018024600A1 (en) 2016-08-01 2017-07-27 Method for capturing an object in an environmental region of a motor vehicle with prediction of the movement of the object, camera system as well as motor vehicle

Publications (2)

Publication Number Publication Date
US20190197321A1 US20190197321A1 (en) 2019-06-27
US11170232B2 true US11170232B2 (en) 2021-11-09

Family

ID=59656029

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/322,333 Active 2038-03-20 US11170232B2 (en) 2016-08-01 2017-07-27 Method for capturing an object in an environmental region of a motor vehicle with prediction of the movement of the object, camera system as well as motor vehicle

Country Status (6)

Country Link
US (1) US11170232B2 (en)
EP (1) EP3491581A1 (en)
KR (1) KR102202343B1 (en)
CN (1) CN109791603B (en)
DE (1) DE102016114168A1 (en)
WO (1) WO2018024600A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016114168A1 (en) * 2016-08-01 2018-02-01 Connaught Electronics Ltd. Method for detecting an object in a surrounding area of a motor vehicle with prediction of the movement of the object, camera system and motor vehicle
JP6841097B2 (en) * 2017-03-09 2021-03-10 富士通株式会社 Movement amount calculation program, movement amount calculation method, movement amount calculation device and business support system
US10685244B2 (en) * 2018-02-27 2020-06-16 Tusimple, Inc. System and method for online real-time multi-object tracking
US20220215650A1 (en) * 2019-04-25 2022-07-07 Nec Corporation Information processing device, information processing method, and program recording medium
WO2020237675A1 (en) * 2019-05-31 2020-12-03 深圳市大疆创新科技有限公司 Target detection method, target detection apparatus and unmanned aerial vehicle
CN110288835B (en) * 2019-06-28 2021-08-17 江苏大学 Surrounding vehicle behavior real-time identification method based on kinematic prediction compensation mechanism
JP7201550B2 (en) * 2019-07-29 2023-01-10 本田技研工業株式会社 VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND PROGRAM
US11605170B2 (en) * 2019-10-14 2023-03-14 INTERNATIONAL INSTITUTE OF EARTHQUAKE ENGINEERING AND SEISMOLOGYx Estimating a displacement sequence of an object
US11315326B2 (en) * 2019-10-15 2022-04-26 At&T Intellectual Property I, L.P. Extended reality anchor caching based on viewport prediction
CN110834645B (en) * 2019-10-30 2021-06-29 中国第一汽车股份有限公司 Free space determination method and device for vehicle, storage medium and vehicle
CN113255411A (en) * 2020-02-13 2021-08-13 北京百度网讯科技有限公司 Target detection method and device, electronic equipment and storage medium
DE102020106301A1 (en) * 2020-03-09 2021-09-09 Zf Cv Systems Global Gmbh Method for determining object information about an object in a vehicle environment, control unit and vehicle
DE102020211960A1 (en) 2020-09-24 2022-03-24 Ford Global Technologies, Llc Mapping of a trafficable area
US11640668B2 (en) * 2021-06-10 2023-05-02 Qualcomm Incorporated Volumetric sampling with correlative characterization for dense estimation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014114221A1 (en) * 2014-09-30 2016-03-31 Valeo Schalter Und Sensoren Gmbh Method for detecting an object in a surrounding area of a motor vehicle, driver assistance system and motor vehicle
CN104680557A (en) * 2015-03-10 2015-06-03 重庆邮电大学 Intelligent detection method for abnormal behavior in video sequence image

Patent Citations (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030007074A1 (en) * 2001-06-28 2003-01-09 Honda Giken Kogyo Kabushiki Kaisha Vehicle zone monitoring apparatus
US20030091228A1 (en) * 2001-11-09 2003-05-15 Honda Giken Kogyo Kabushiki Kaisha Image recognition apparatus
JP2007334511A (en) * 2006-06-13 2007-12-27 Honda Motor Co Ltd Object detection device, vehicle, object detection method and program for object detection
US20100002908A1 (en) * 2006-07-10 2010-01-07 Kyoto University Pedestrian Tracking Method and Pedestrian Tracking Device
JP2008176555A (en) * 2007-01-18 2008-07-31 Fujitsu Ten Ltd Obstacle detector and obstacle detection method
US20100103262A1 (en) * 2007-04-27 2010-04-29 Basel Fardi Vehicle periphery monitoring device, vehicle periphery monitoring program and vehicle periphery monitoring method
WO2008139530A1 (en) 2007-04-27 2008-11-20 Honda Motor Co., Ltd. Vehicle periphery monitoring device, vehicle periphery monitoring program, and vehicle periphery monitoring method
US20090080701A1 (en) * 2007-09-20 2009-03-26 Mirko Meuter Method for object tracking
US20090087088A1 (en) * 2007-09-28 2009-04-02 Samsung Electronics Co., Ltd. Image forming system, apparatus and method of discriminative color features extraction thereof
US20100104137A1 (en) * 2008-04-24 2010-04-29 Gm Global Technology Operations, Inc. Clear path detection using patch approach
US20100124358A1 (en) * 2008-11-17 2010-05-20 Industrial Technology Research Institute Method for tracking moving object
DE102010005290A1 (en) 2009-01-26 2010-08-19 GM Global Technology Operations, Inc., Detroit Vehicle controlling method for vehicle operator i.e. driver, involves associating tracked objects based on dissimilarity measure, and utilizing associated objects in collision preparation system to control operation of vehicle
US20110074957A1 (en) * 2009-09-30 2011-03-31 Hitachi, Ltd. Apparatus for Vehicle Surroundings Monitorings
US20130223686A1 (en) * 2010-09-08 2013-08-29 Toyota Jidosha Kabushiki Kaisha Moving object prediction device, hypothetical movable object prediction device, program, moving object prediction method and hypothetical movable object prediction method
US9120484B1 (en) * 2010-10-05 2015-09-01 Google Inc. Modeling behavior based on observations of objects observed in a driving environment
US20150146921A1 (en) * 2012-01-17 2015-05-28 Sony Corporation Information processing apparatus, information processing method, and program
US20150324972A1 (en) * 2012-07-27 2015-11-12 Nissan Motor Co., Ltd. Three-dimensional object detection device, and three-dimensional object detection method
WO2014096240A1 (en) 2012-12-19 2014-06-26 Connaught Electronics Ltd. Method for detecting a target object based on a camera image by clustering from multiple adjacent image cells, camera device and motor vehicle
WO2014096242A1 (en) 2012-12-19 2014-06-26 Connaught Electronics Ltd. Method for tracking a target object based on a stationary state, camera system and motor vehicle
US20140350834A1 (en) * 2013-05-21 2014-11-27 Magna Electronics Inc. Vehicle vision system using kinematic model of vehicle motion
US10093233B2 (en) * 2013-10-02 2018-10-09 Conti Temic Microelectronic Gmbh Method and apparatus for displaying the surroundings of a vehicle, and driver assistance system
US20160364619A1 (en) * 2014-01-09 2016-12-15 Clarion Co., Ltd. Vehicle-Surroundings Recognition Device
US20160335489A1 (en) * 2014-01-14 2016-11-17 Denso Corporation Moving object detection apparatus and moving object detection method
US20150258991A1 (en) * 2014-03-11 2015-09-17 Continental Automotive Systems, Inc. Method and system for displaying probability of a collision
US10130873B2 (en) * 2014-03-21 2018-11-20 Samsung Electronics Co., Ltd. Method and apparatus for preventing a collision between subjects
US20170140230A1 (en) * 2014-08-21 2017-05-18 Mitsubishi Electric Corporation Driving assist apparatus, driving assist method, and non-transitory computer readable recording medium storing program
US20160379074A1 (en) * 2015-06-25 2016-12-29 Appropolis Inc. System and a method for tracking mobile objects using cameras and tag devices
US20170061644A1 (en) * 2015-08-27 2017-03-02 Kabushiki Kaisha Toshiba Image analyzer, image analysis method, computer program product, and image analysis system
US20170280026A1 (en) * 2015-09-25 2017-09-28 Hitachi Information & Telecommunication Engineering, Ltd. Image Processing Apparatus and Image Processing Method
US20170259753A1 (en) * 2016-03-14 2017-09-14 Uber Technologies, Inc. Sidepod stereo camera system for an autonomous vehicle
US20170329332A1 (en) * 2016-05-10 2017-11-16 Uber Technologies, Inc. Control system to adjust operation of an autonomous vehicle based on a probability of interference by a dynamic object
US20190197321A1 (en) * 2016-08-01 2019-06-27 Connaught Electronics Ltd. Method for capturing an object in an environmental region of a motor vehicle with prediction of the movement of the object, camera system as well as motor vehicle

Non-Patent Citations (14)

* Cited by examiner, † Cited by third party
Title
Andreas Ess, Konrad Schindler, Bastian Leibe and Luc Van Gool; "Object Detection and Tracking for Autonomous Navigation in Dynamic Environments"; IJRR, Sage; 2010. *
Anonymous, "Blob Tracking", RoboRealm Vision for Machines, Apr. 29, 2016, XP055409455, web.archive.org, Retrieved from the Internet: URL: http://web.archive.org/web/20160429233157/Http://www.roborealm.com/help/Blob_Tracking.php (retrieved Sep. 25, 2017) p. 1, Paragraph 1—p. 2222, Paragraph 4.
Bertozzi M. et al., "Pedestrian Localization and Tracking System with Kalman Filtering", Intelligent Vehicles Symposium, 2004 IEEE, Parma, Italy, Jun. 14-17, 2004, Piscataway, NJ, USA, IEEE, Jun. 14, 2004, pp. 584-589, XP010727712, DOI: 10.1109/IVS.2004.1336449, ISBN: 978-0-7803-8310-4, abstract, Sections I, II.C, II.E, III.
BERTOZZI M., BROGGI A., FASCIOLI A., TIBALDI A., CHAPUIS R., CHAUSSE F.: "Pedestrian localization and tracking system with kalman filtering", INTELLIGENT VEHICLES SYMPOSIUM, 2004 IEEE PARMA, ITALY JUNE 14-17, 2004, PISCATAWAY, NJ, USA, IEEE, 14 June 2004 (2004-06-14)-17 June 2004 (2004-06-17), pages 584-589, XP010727712, ISBN: 978-0-7803-8310-4, DOI: 10.1109/IVS.2004.1336449
Czyz, J. et al.: A particle filter for joint detection and tracking of color objects. In: Image and Vision Computing, 2006, pp. 1-11.
Dollár, P. et al.: Pedestrian Detection: An Evaluation of the State of the Art. In: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, No. 4, Apr. 2012, pp. 743-761.
Esther B. Meier et al., "Object Detection and Tracking in Range Image Sequences by Separation of Image Features", IEEE International Conference on Intelligent Vehicles IV'98, Oct. 30, 1998, pp. 176-181, XP055409483, abstract, Sections I, IV, V.
ESTHER B. MEIER, ADE, FRANK: "Object Detection and Tracking in Range Image Sequences by Separation of Image Features", IEEE INTERNATIONAL CONFERENCE ON INTELLIGENT VEHICLES IV'98, 30 October 1998 (1998-10-30), pages 176 - 181, XP055409483
Esther B. Meier, Frank Ade; "Object Detection and Tracking in Range Image Sequences by Separation of Image Features"; 1998; IEEE. *
Esther B. Meier, Frank Ade; "Object detection and tracking in range image sequences by separation of image features"; IEEE, 1998. *
German Search Report Issued in Corresponding German Application No. 102016114168.2, dated Jun. 7, 2017 (7 Pages).
International Search Report and Written Opinion Issued in Corresponding PCT Application No. PCT/EP2017/069014, dated Oct. 9, 2017 (11 Pages).
The Notice of Preliminary Rejection issued in corresponding Korean Application No. 10-2019-7003274, dated Mar. 30, 2020 (12 pages).
Zuther, S.: Multisensorsystem mit schmalbandigen Radarsensoren zum Fahrzeugseitenschutz [Multi-sensor system with narrow-band radar sensors for vehicle side protection]. Dissertation, Fakultät für Ingenieurwissenschaften und Informatik, Universität Ulm, 2014.

Also Published As

Publication number Publication date
CN109791603B (en) 2023-10-03
KR102202343B1 (en) 2021-01-13
KR20190025698A (en) 2019-03-11
DE102016114168A1 (en) 2018-02-01
WO2018024600A1 (en) 2018-02-08
EP3491581A1 (en) 2019-06-05
US20190197321A1 (en) 2019-06-27
CN109791603A (en) 2019-05-21

Similar Documents

Publication Publication Date Title
US11170232B2 (en) Method for capturing an object in an environmental region of a motor vehicle with prediction of the movement of the object, camera system as well as motor vehicle
Malik Fast vehicle detection with probabilistic feature grouping and its application to vehicle tracking
CN112307921B (en) Vehicle-mounted end multi-target identification tracking prediction method
US9230165B2 (en) Object detection apparatus, vehicle-mounted device control system and storage medium of program of object detection
Kanhere et al. Real-time incremental segmentation and tracking of vehicles at low camera angles using stable features
US7672514B2 (en) Method and apparatus for differentiating pedestrians, vehicles, and other objects
US8867790B2 (en) Object detection device, object detection method, and program
CN104008371B (en) Regional suspicious target tracking and recognizing method based on multiple cameras
Broggi et al. Self-calibration of a stereo vision system for automotive applications
CN111932580A (en) Road 3D vehicle tracking method and system based on Kalman filtering and Hungary algorithm
CN104239867A (en) License plate locating method and system
Nassu et al. A vision-based approach for rail extraction and its application in a camera pan–tilt control system
US20190362163A1 (en) Method for validation of obstacle candidate
WO2011065399A1 (en) Path recognition device, vehicle, path recognition method, and path recognition program
CN114419098A (en) Moving target trajectory prediction method and device based on visual transformation
US20200302192A1 (en) Outside recognition apparatus for vehicle
JP2004038624A (en) Vehicle recognition method, vehicle recognition device and vehicle recognition program
JP3629935B2 (en) Speed measurement method for moving body and speed measurement device using the method
Malik High-quality vehicle trajectory generation from video data based on vehicle detection and description
JP4055785B2 (en) Moving object height detection method and apparatus, and object shape determination method and apparatus
Rosebrock et al. Real-time vehicle detection with a single camera using shadow segmentation and temporal verification
JP4333683B2 (en) Windshield range detection device, method and program
Lim et al. Detection and tracking multiple pedestrians from a moving camera
KR102448944B1 (en) Method and Device for Measuring the Velocity of Vehicle by Using Perspective Transformation
WO2022208666A1 (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: CONNAUGHT ELECTRONICS LTD., IRELAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HUGHES, CIARAN;NGUYEN, DUONG-VAN;HORGAN, JONATHAN;AND OTHERS;SIGNING DATES FROM 20190207 TO 20190211;REEL/FRAME:048384/0436

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE