US11170232B2 - Method for capturing an object in an environmental region of a motor vehicle with prediction of the movement of the object, camera system as well as motor vehicle
- Publication number: US11170232B2
- Authority: US (United States)
- Prior art keywords: feature, image, prediction, determined, environmental region
- Legal status: Active, expires
Classifications
- G06K9/00805
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- B60R11/00—Arrangements for holding or mounting articles, not otherwise provided for
- B60R11/04—Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
- B60W30/0956—Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
- G06K9/00362
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- B60W2554/00—Input parameters relating to objects
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
Definitions
- the present invention relates to a method for capturing an object in an environmental region of a motor vehicle based on a sequence of images of the environmental region, which are provided by means of a camera of the motor vehicle. Moreover, the present invention relates to a camera system for performing such a method as well as to a motor vehicle with such a camera system.
- Camera systems for motor vehicles are already known from the prior art in various configurations.
- a camera system includes a camera, which is arranged at the motor vehicle and which captures an environmental region of the motor vehicle.
- the camera system can also have multiple such cameras, which can capture the entire environment around the motor vehicle.
- the camera arranged at the motor vehicle provides a sequence of images of the environmental region and thus captures a plurality of images per second. This sequence of images can then be processed with the aid of an electronic image processing device.
- objects in the environmental region of the motor vehicle can for example be recognized.
- the objects in the images provided by the camera are reliably recognized.
- the objects, thus for example other traffic participants or pedestrians, can be differentiated from other objects in the environmental region.
- the capture of moved objects in the environmental region in particular presents a challenge. Since these moved objects are located in different positions in the temporally consecutive images, it is required to recognize the objects in the individual images.
- WO 2014/096240 A1 describes a method for detecting a target object in an environmental region of a camera based on an image of the environmental region provided by means of the camera.
- a plurality of characteristic features is determined in the image, wherein a differentiation is made between ground features, which describe a ground of the environmental region, and target features associated with a target object.
- at least a partial region of the image can be divided into a plurality of image cells.
- the associated optical flow vector can then be determined for each characteristic feature.
- the image cells can be combined into a region of interest.
- WO 2014/096242 A1 describes a method for tracking a target object in an environmental region of a motor vehicle based on a sequence of images of the environmental region, which are provided by means of a camera of the motor vehicle.
- the target object is detected in the environmental region and the target object is tracked by determining optical flow vectors to the target object based on the sequence of images during a relative movement between the motor vehicle and the target object. If a standstill state, in which both the motor vehicle and the target object come to a standstill, is detected, the current relative position of the target object with respect to the motor vehicle is stored and it is examined if a predetermined criterion with respect to the relative movement is satisfied. After the predetermined criterion is satisfied, the tracking of the target object is continued starting from the stored relative position.
- this object is solved by a method, by a camera system as well as by a motor vehicle having the features according to the respective independent claims.
- Advantageous developments of the present invention are the subject matter of the dependent claims.
- a first object feature is in particular recognized in a first image of the sequence, wherein the first object feature preferably describes at least a part of the object in the environmental region.
- a position of the object in the environmental region is in particular estimated based on a predetermined movement model, which describes a movement of the object in the environmental region.
- a prediction feature is in particular determined in a second image following the first image in the sequence based on the first object feature and based on the estimated position. Further, a second object feature is preferably determined in the second image.
- association of the second object feature with the prediction feature is preferably effected in the second image if a predetermined association criterion is satisfied.
- the second object feature is preferably confirmed as originating from the object if the second object feature is associated with the prediction feature.
- a method according to the invention serves for capturing an object in an environmental region of a motor vehicle based on a sequence of images of the environmental region, which are provided by means of a camera of the motor vehicle.
- the method involves recognizing a first object feature in a first image of the sequence, wherein the first object feature describes at least a part of the object in the environmental region. Furthermore, the method includes estimating a position of the object in the environmental region based on a predetermined movement model, which describes a movement of the object in the environmental region. Moreover, it is provided that a prediction feature is determined in a second image following the first image in the sequence based on the first object feature and based on the estimated position.
- the method includes determining a second object feature in the second image and associating the second object feature with the prediction feature in the second image if a predetermined association criterion is satisfied. Finally, the method involves confirming the second object feature as originating from the object if the second object feature is associated with the prediction feature.
- the method can be performed by a camera system of the motor vehicle, which has at least one camera.
- a sequence of images of the environmental region is provided by this camera.
- the sequence describes a temporal succession of images captured for example with a predetermined repetition rate.
- the camera system can have an image processing unit, by means of which the images can be evaluated.
- a first object feature is recognized in a first image of the sequence of images.
- This first object feature can completely describe the object in the environmental region. It can also be provided that the first object feature only describes a part of the object in the environmental region.
- first object features are determined, which all describe the object in the environmental region.
- the first object feature can in particular describe the position of the part of the object and/or the dimensions of the part of the object in the first image.
- the movement of the object in the environmental region is estimated.
- a predetermined movement model is used, which describes the movement of the object in the environmental region of the motor vehicle, thus in the real world.
- the movement model can for example describe a linear movement of the object into a predetermined direction. Based on the movement model, the position of the object in the environmental region can be continuously estimated.
- the position is estimated for that point of time, at which a second image of the sequence is or has been captured.
- a prediction feature is then determined.
- the prediction feature can be determined based on the first object feature. For example, for determining the prediction feature, the first object feature from the first image can be used and be shifted such that the position of the prediction feature in the second image describes the estimated position in the environmental region. Thus, the estimated variation of the position of the object is transferred into the second image.
- a second object feature is then recognized.
- This second object feature also describes at least a part of the object in the environmental region of the motor vehicle in analogous manner to the first object feature.
- the second object feature is determined analogously to the first object feature.
- it is examined if the second object feature can be associated with the prediction feature.
- it is examined to what extent the second object feature corresponds to the prediction feature, for whose determination the movement model in the real world was used. If a predetermined association criterion is satisfied, the second object feature is associated with the prediction feature in the second image.
- the association criterion can for example describe how similar the second object feature and the prediction feature are to each other.
- If the second object feature has been associated with the prediction feature in the second image, it can be assumed that the second object feature also describes the object in the environmental region. Since the association of the second object feature with the prediction feature has been effected, it can be assumed that the object has moved according to the predetermined movement model. Thus, it can also be assumed with high likelihood that the first object feature in the first image and the second object feature in the second image describe the same object in the environmental region of the motor vehicle. Thus, the object can be reliably recognized and tracked over the sequence of images.
- the object is recognized as moving relative to the motor vehicle if the second object feature is confirmed as originating from the object. If the second object feature is confirmed, it can be assumed with relatively high likelihood that the second object feature describes the object or at least a part thereof. This means that the position of the first object feature in the first image has varied to the second object feature in the second image as predicted according to the movement model, which describes the movement of the object in the real world. Thereby, the object can be identified or classified as a moved object. This applies both to the case that the motor vehicle itself moves and to the case that the motor vehicle stands still.
- an association probability between the second object feature and the prediction feature is determined and the predetermined association criterion is deemed as satisfied if the association probability exceeds a predetermined value.
- a prediction feature is then determined for each of the recognized object features in the image following in the sequence.
- the association probability is then determined.
- the association probability in particular describes the similarity between the second object feature and the prediction feature. If the second object feature and the prediction feature are identical, the association probability can be 100% or have the value of 1. If the second object feature and the prediction feature completely differ, the association probability can be 0% or have the value of 0.
- the second object feature can be associated with the prediction feature.
- the predetermined value or the threshold value can for example be 75% or 0.75. Thus, it can be examined in simple manner if the second object feature can be associated with the prediction feature.
- the association probability is determined based on an overlap between the second object feature and the prediction feature in the second image and/or based on dimensions of the second object feature and the prediction feature in the second image and/or based on a distance between the centers of gravity of the second object feature and the prediction feature in the second image and/or based on a distance between the object and a prediction object associated with the prediction feature in the environmental region.
- Both the second object feature and the prediction feature can cover a certain shape or a certain area in the second image.
- both the second object feature and the prediction feature are determined as a polygon.
- the second object feature and the prediction feature can also be determined as another geometric shape.
- If the second object feature and the prediction feature overlap in the second image, it can be determined on the one hand to what extent they overlap. If the second object feature and the prediction feature overlap in a relatively large area, a high association probability can be assumed. If the second object feature and the prediction feature do not overlap, a low association probability can be assumed. The examination to what extent the second object feature and the prediction feature overlap is therein effected in the second image.
- the respective dimensions of the second object feature and the prediction feature can be considered in the second image.
- the lengths, the heights and/or the areas of the second object feature and the prediction feature can be compared to each other.
- the similarity between the second object feature and the prediction feature in the second image or the association probability can be determined.
- the center of gravity of the second object feature and the center of gravity of the prediction feature are determined and a distance between the center of gravity of the second object feature and the center of gravity of the prediction feature is determined. The lower the distance between the centers of gravity is, the greater the association probability is.
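As an illustration of how these cues could be combined, the following Python sketch scores a pair of polygon features by overlap, size similarity and centroid distance, using the shapely package; the equal weighting, the distance scale and the function name are assumptions for illustration, not taken from the patent:

```python
from shapely.geometry import Polygon

def association_probability(obj_pts, pred_pts, dist_scale=50.0):
    # Score in [0, 1] combining overlap, size similarity and centroid
    # distance of the two polygon features.
    obj, pred = Polygon(obj_pts), Polygon(pred_pts)
    overlap = obj.intersection(pred).area / obj.union(pred).area
    size = min(obj.area, pred.area) / max(obj.area, pred.area)
    dist = obj.centroid.distance(pred.centroid)
    proximity = max(0.0, 1.0 - dist / dist_scale)  # 1 at zero distance
    return (overlap + size + proximity) / 3.0      # assumed equal weighting

# association is effected if the result exceeds the predetermined value, e.g. 0.75
```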
- a prediction object is determined, which describes the mapping of the prediction feature into the real world.
- an association probability between a last confirmed object feature and the second object feature is determined, wherein the last confirmed object feature describes that object feature, which was last confirmed as originating from the object. If the association probability between the second object feature and the prediction feature falls below the predetermined value, the second object feature is not associated with the prediction feature. This can be substantiated in that the object in the environmental region does not or no longer move according to the movement model. This is for example the case if the object has stopped or has halted or has changed a direction of movement.
- the second object feature cannot be determined, or cannot be sufficiently determined, if the object stands still during the capture of the second image.
- the second object feature for example cannot be determined at all.
- In this case, association of the second object feature with the prediction feature is not possible. If the association of the second object feature with the prediction feature has not been effected, it is examined if the second object feature can be associated with that object feature which was last confirmed as originating from the object in one of the preceding images. Thus, it can be determined where the second object feature is relative to the position in which the object certainly was at an earlier point of time. Based on this information, it can be determined if the object in the environmental region has changed its direction, has changed its speed of movement or currently stands still.
- the prediction feature is determined starting from a position in the environmental region, which is associated with the last confirmed object feature if the association probability between the last confirmed object feature and the second object feature is greater than the association probability between the second object feature and the prediction feature.
- the association probability to the second object feature in the second or in the current image and the association probability to that object feature from one of the preceding images, which was actually confirmed as originating from the object, can be determined. If the object in the environmental region no longer follows the predetermined movement model, it is required to change the movement model. Therein, it is in particular provided that the movement model is determined starting from that object feature which was last confirmed. Thus, the case can be reliably taken into account that the object in the real world has changed its direction of movement or actually stands still.
- an object position in the environmental region is determined based on the second object feature
- a prediction position in the environmental region is determined based on the prediction feature
- a spatial similarity between the object position and the prediction position is determined and a current position of the object in the environmental region is determined based on the association probability and the spatial similarity.
- a prediction position is determined. This prediction position describes the current position of the object considering the movement model. Therein, the prediction position can be output with a predetermined spatial uncertainty.
- the current position in the environmental region can also be determined.
- a spatial similarity or a spatial likelihood can then be determined.
- a weighting factor can then be determined, based on which the current position of the object can be determined for the tracking. This allows reliably tracking the object.
- the current position of the object is determined based on the second object feature, the object position of which has the greater spatial similarity to the prediction position of the prediction feature.
- the object in the environmental region is for example a pedestrian
- one object feature can describe the head of the pedestrian
- another object feature can describe the body
- a respective object feature can describe each of the legs of the pedestrian.
- that second object feature is used, which has the greatest spatial similarity to the prediction feature.
- the second object feature associated with the head of the pedestrian can have no or a very low spatial similarity to the prediction feature.
- that second object feature, which is associated with a leg of the pedestrian can have a high spatial similarity to the prediction feature. This in particular applies to the case in which the base point of the prediction feature is compared to the respective base points of the second object features for determining the spatial similarity. This allows reliable determination of the current position of the object in the environmental region and further reliable tracking of the object.
- If a further object feature is recognized in one of the images, it is examined if the further object feature originates from an object that has entered the environmental region, wherein the examination is based on an entry probability depending on a position of the further object feature in the image.
- In the examination, it can be the case that multiple object features are recognized in the images or in the second image.
- it is to be examined if these object features originate from an already recognized object or if the object feature describes a new object, which was not yet previously captured. Therefore, it is examined if the further object feature describes an object, which has entered the environmental region or which has moved into the environmental region.
- an entry probability is taken into account, which can also be referred to as a birth probability.
- This entry probability depends on the position of the object feature in the image.
- In certain areas of the image, a low entry probability is assumed. For these areas, it is unlikely that the further object has entered. If the further object feature has been recognized as originating from a new object, this new object can also be correspondingly tracked or its position can be determined.
- Furthermore, it can be examined if an object has exited the environmental region.
- This can for example be the case if the object is tracked in the images and can no longer be recognized in one of the images.
- this can be the case if the first object feature in the first image is in an edge area of the image and the object feature can no longer be captured in the second image.
- an exit probability can be defined analogously to the previously described entry probability and it can be examined based on this exit probability if the object has exited the environmental region.
- the exit probability is higher if the object feature is in an edge area of the image than for the case that the object feature is in a central area of the image.
- the second object feature is determined as a polygon, wherein the polygon has a left base point, a central base point, a right base point and/or a tip point, and wherein the polygon describes a width and/or a height of the object.
- the second object feature can be described as an object in the second image.
- the polygon in particular has the left, the central, the right base point as well as a tip point.
- the central base point can be determined as the point of intersection of the polygon with a connecting line between a vanishing point and the center of gravity of the polygon.
- the width of the object is reproduced by the right and the left base point.
- the height of the object can be described by the tip point.
- This polygon can be determined in simple manner and within a short computing time.
- the polygon is suitable for describing the spatial dimensions of the object.
- a plurality of regions of interest is determined in the second image, the regions of interest are grouped and the respective polygon is determined based on the grouped regions of interest.
- Regions of interest can respectively be determined in the second image or in the respective images of the image sequence. These regions of interest in particular describe those pixels or areas of the image, which depict a moved object.
- the image or the second image can first be divided into a plurality of partial areas or image cells and it can be examined for each of the image cells if it depicts a moved object or a part thereof.
- a weighting matrix can further be taken into account, in which a first value or a second value is associated with each of the image cells, according to whether or not the respective image cell describes a moved object. Those image cells describing moved objects can then be correspondingly grouped and the regions of interest can be determined therefrom. After the respective regions of interest have been determined, it can be examined if these regions of interest originate from the same object. Thus, it can be determined if the respective regions of interest can be grouped. As soon as the regions of interest have been grouped, the polygon, which then describes the second object feature, can be determined based on the area which the grouped regions of interest occupy in the second image.
- the second image is divided into a plurality of image cells
- object cells describing a moved object are selected from the image cells based on an optical flow and the object cells are associated with one of the regions of interest.
- the second image can be divided into a plurality of image cells.
- each of the image cells can include at least one pixel.
- each of the image cells includes a plurality of pixels.
- the optical flow or the optical flow vector can then be determined for each of the image cells.
- it can be reliably examined whether or not the pixels in the image cell describe a moved object.
- Those image cells describing a moved object are considered as object cells and can be combined into a region of interest. This allows simple and reliable determination of the respective regions of interest.
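A minimal sketch of this cell-wise selection, assuming 10×10-pixel cells and using OpenCV's Farnebäck dense flow as a stand-in (the patent fixes neither the flow algorithm nor the threshold):

```python
import cv2
import numpy as np

def object_cell_matrix(prev_gray, curr_gray, cell=10, thresh=1.0):
    # Dense optical flow between two consecutive grayscale frames; the
    # Farnebaeck method is a stand-in, the patent does not fix the algorithm.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)          # flow magnitude per pixel
    rows, cols = mag.shape[0] // cell, mag.shape[1] // cell
    W = np.zeros((rows, cols), dtype=np.uint8)  # weighting matrix
    for r in range(rows):
        for c in range(cols):
            block = mag[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            if block.mean() > thresh:           # sufficient confidence
                W[r, c] = 1                     # mark as object cell
    return W
```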
- a roadway is recognized in the second image by means of segmentation and the regions of interest are determined based on the recognized roadway.
- a roadway on which the motor vehicle is located, can be recognized in the second image.
- the roadway can be in front of the motor vehicle in the direction of travel or behind the motor vehicle in the direction of travel, according to the orientation of the camera.
- the roadway or the ground can now be recognized with the aid of the segmentation method. If the moved object is also located on the roadway, the boundaries between the roadway and the object moving on the roadway can be reliably determined by recognizing the roadway. This allows precise determination of the regions of interest.
- the determination of the polygons based on the regions of interest was described based on the second object features in the image.
- the first object feature in the first image and all of the further object features in the images can also be determined as polygons in analogous manner.
- the prediction feature is also determined as a polygon.
- the prediction feature can be determined based on the first object feature and the position of the polygon and/or the size of the polygon describing the prediction feature can be adapted based on the movement model in the second image.
- a camera system according to the invention for a motor vehicle includes at least one camera and an electronic image processing device.
- the camera system is adapted to perform a method according to the invention or an advantageous configuration thereof.
- a motor vehicle according to the invention includes a camera system according to the invention.
- the motor vehicle is in particular formed as a passenger car.
- FIG. 1 a motor vehicle according to an embodiment of the present invention, which has a camera system with a plurality of cameras;
- FIG. 2 a schematic flow diagram of a method for determining regions of interest in the images, which are provided by the cameras;
- FIG. 3 an image, which is provided with the aid of the cameras, which is divided into a plurality of image cells;
- FIG. 4 areas in the image, which are used for determining the regions of interest
- FIG. 5 object cells in the image, which are associated with the moved object, before and after dilation
- FIG. 6 the individual image cells, over which a sliding window is shifted for determining the regions of interest
- FIG. 7 a region of interest in the image, which is upwards corrected
- FIG. 8 two regions of interest in the image, wherein the one region of interest is downwards corrected and the other region of interest is scaled down;
- FIG. 9 the regions of interest, which are associated with a pedestrian in the image, who is located on a roadway;
- FIG. 10 regions of interest, which are each combined in groups
- FIG. 11 a schematic flow diagram of a method for tracking the object
- FIG. 12 a schematic representation of the determination of a polygon based on grouped regions of interest
- FIG. 13 the polygon, which has a left, a central and a right base point as well as a tip point;
- FIG. 14 a schematic representation of the determination of an object feature based on a movement model in the real world
- FIG. 15 a -15 d object features, which are compared to prediction features
- FIG. 16 object features and prediction features at different points of time
- FIG. 17 a diagram, which describes the spatial similarity between the object and a prediction object in the real world
- FIG. 18 a pedestrian as a moved object, with which a plurality of object features are associated.
- FIG. 19 a diagram, which describes an entry probability of an object depending on a position in the image.
- FIG. 1 shows a motor vehicle 1 according to an embodiment of the present invention in a plan view.
- the motor vehicle 1 is formed as a passenger car.
- the motor vehicle 1 includes a camera system 2 , which includes at least one camera 4 .
- the camera system 2 includes four cameras 4 , wherein one of the cameras 4 is arranged in a rear area 5 , one of the cameras 4 is arranged in a front area 7 and two of the cameras 4 are arranged in respective lateral areas 6 of the motor vehicle 1 .
- objects 9 in an environmental region 8 of the motor vehicle 1 can be captured.
- a sequence of images 10 , 11 is provided by each of the cameras 4 .
- This sequence of images 10 , 11 is then transmitted to an electronic image processing device 3 of the camera system 2 .
- the objects 9 in the environmental region 8 can then be recognized in the images 10 , 11 with the aid of the electronic image processing device 3 .
- moved objects 9 in the environmental region 8 are to be recognized with the aid of the camera system 2 .
- a method of three-dimensional image processing is used.
- regions of interest 16 are determined in the images 10 , 11 , which describe a moved object 9 .
- object features 24 , 25 are determined in the images 10 , 11 based on the regions of interest 16 , which describe the object 9 in more detail.
- the movement of the object 9 is tracked.
- FIG. 2 shows a schematic flow diagram of a method for determining regions of interest 16 in the images 10 , 11 .
- In a first step S 1 , an image 10 , 11 provided by one of the cameras 4 is divided into a plurality of image cells 12 .
- each of the image cells 12 can include at least one pixel.
- each of the image cells 12 has a plurality of pixels.
- each image cell 12 can have 10×10 pixels.
- In a second step S 2 , object cells 12 ′ are determined.
- the object cells 12 ′ describe those image cells 12 , which describe a moved object 9 .
- a weighting matrix is then determined based on the image cells 12 and the object cells 12 ′.
- In a step S 3 , regions of interest 16 are then determined in the image 10 , 11 based on the weighting matrix. These regions of interest 16 are subsequently corrected in a step S 4 . Finally, the regions of interest 16 are combined in a step S 5 .
- FIG. 3 shows an image 10 , 11 , which has been provided by one of the cameras 4 .
- the image 10 , 11 is divided into a plurality of image cells 12 .
- the number of the pixels in the respective image cells 12 can be determined.
- the weighting matrix can be determined.
- a height 13 of the weighting matrix results based on the number of lines of image cells 12 and a width 14 of the weighting matrix results based on the number of columns of the image cells 12 .
- the image 10 , 11 shows the object 9 , which is located in the environmental region 8 .
- the object 9 is a moving object in the form of a pedestrian.
- This object 9 is now to be recognized in the image 10 , 11 .
- an optical flow or a flow vector is determined in each of the image cells 12 , which describes the movement of an object 9 . If a flow vector has been determined with a sufficient confidence value, that image cell 12 is recognized as the object cell 12 ′ and identified in the weighting matrix or a value associated with the object cell 12 ′ in the weighting matrix is varied.
- the threshold value for a sufficient confidence value depends on the respective region 15 in the image 10 , 11 .
- FIG. 4 shows different regions 15 in the image 10 , 11 . Presently, the regions 15 differ depending on a distance to the motor vehicle 1 . Further, the threshold values for determining the confidence value can be adjustable and be dependent on the current speed of the motor vehicle 1 .
- FIG. 5 shows the image cells 12 and the object cells 12 ′, which have been recognized as originating from the moved object 9 .
- the object cells 12 ′ do not form a contiguous area. Since a completely contiguous area has not been recognized in the image 10 , 11 , a sparsely populated weighting matrix is also present in this area.
- a morphological operation, in particular dilation, is applied. Objects in a binary image can be enlarged or thickened by the dilation.
- H p describes the structuring element H, which has been shifted by p.
- W q describes the weighting matrix W, which has been shifted by q.
- q and p describe the directions.
- the structuring element H is a 3×3 matrix.
- the result of the dilation is represented on the right side of FIG. 5 .
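Expressed in code, the dilation of the sparse weighting matrix W with the 3×3 structuring element H could look as follows (a SciPy sketch; the matrix values are made up for illustration):

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Example weighting matrix with two isolated object cells (values made up).
W = np.array([[0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 0, 0]], dtype=bool)
H = np.ones((3, 3), dtype=bool)                 # 3x3 structuring element
W_dilated = binary_dilation(W, structure=H)     # W dilated by H
```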
- regions of interest 16 are now to be determined. This is explained in connection with FIG. 6 .
- a generator can be used, which determines an object hypothesis based on the image cells 12 , the object cells 12 ′ and the weights thereof.
- a sliding window 17 is used, which is shifted over the individual image cells 12 and object cells 12 ′. Based on an integral image, the weighted sum is thus determined for each of the regions of interest 16 .
- x and y describe the position of the lower left edge of the region of interest
- w and h describe the width and the height of the region of interest 16 . If the weighted sum w ROI is greater than a threshold value, the region of interest 16 is marked as a hypothesis. If the region of interest 16 is marked as a hypothesis, the search for further regions of interest 16 in the current column is aborted and continued in the next column. As indicated in FIG. 6 , this is performed for all of the columns.
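A sketch of the integral-image lookup behind the weighted sum w_ROI; the coordinate convention (x = column, y = row) is an assumption:

```python
import numpy as np

def integral_image(W):
    # II with a zero row/column prepended, so that II[y, x] is the sum of
    # all weights above row y and left of column x.
    return np.pad(W.astype(np.int64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def roi_weight(II, x, y, w, h):
    # w_ROI = II(x+w, y+h) - II(x+w, y) - II(x, y+h) + II(x, y)
    return II[y + h, x + w] - II[y, x + w] - II[y + h, x] + II[y, x]
```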
- FIG. 7 shows an example, in which the region of interest 16 or the sliding window 17 is upwards corrected.
- the upper boundary of the sliding window 17 is upwards shifted such that the object cells 12 ′ associated with the moved object 9 are included in the sliding window 17 .
- the rectangle 18 is then determined from the corrected sliding window 17 .
- FIG. 8 shows an example, in which the lower boundary of the sliding window 17 is downwards shifted such that all of the object cells 12 ′ are included in the sliding window 17 .
- a further sliding window 17 is shown, which is scaled down. In this case, object cells 12 ′ are not present in the lowermost line of the sliding window 17 . For this reason, the lower boundary of the sliding window 17 is upwards shifted.
- a roadway 19 is recognized in the image 10 , 11 .
- a segmentation method is used.
- the roadway 19 can be recognized in the image 10 , 11 with the aid of the segmentation method.
- a boundary line 20 between the roadway 19 and the object 9 can be determined. Based on this boundary line 20 , the rectangles 18 describing the regions of interest 16 can then be adapted. In this example, the rectangles 18 are downwards corrected. Presently, this is illustrated by the arrows 21 .
- the respective regions of interest 16 are grouped. This is explained in connection with FIG. 10 .
- the rectangles 18 are apparent, which describe the regions of interest 16 .
- a horizontal contact area 23 is determined in the overlap area. If this horizontal contact area 23 in the overlap area exceeds a predetermined threshold value, the regions of interest 16 are grouped such that groups 22 , 22 ′ of regions of interest 16 arise.
- the rectangles 18 or regions of interest 16 on the left side are presently combined into a first group 22 and the rectangles 18 or regions of interest 16 on the right side are combined into a second group 22 ′.
- the rectangle 18 a of the second group 22 ′ is not added since the horizontal contact area 23 a is below the threshold value.
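A minimal sketch of such grouping by horizontal contact; the rectangle format (x, y, w, h), the greedy left-to-right strategy and the threshold handling are assumptions for illustration:

```python
def horizontal_contact(a, b):
    # Width of the overlap of the x-intervals of two rectangles (x, y, w, h).
    ax, _, aw, _ = a
    bx, _, bw, _ = b
    return max(0, min(ax + aw, bx + bw) - max(ax, bx))

def group_rois(rects, thresh):
    groups = []
    for rect in sorted(rects):                  # process left to right
        for g in groups:
            if any(horizontal_contact(rect, r) > thresh for r in g):
                g.append(rect)                  # contact large enough: join
                break
        else:
            groups.append([rect])               # otherwise open a new group
    return groups
```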
- FIG. 11 shows a schematic flow diagram of a method for tracking the object 9 in the environmental region 8 .
- In a step S 6 , object features 24 , 25 are determined based on the regions of interest 16 in the images 10 , 11 .
- In a step S 7 , a prediction feature 26 is determined and in a step S 8 the object feature 24 , 25 is associated with the prediction feature 26 .
- the position of the object 9 is updated. The updated position is then supplied to an object database 27 describing a state vector.
- the movement of the object 9 in the environmental region 8 is predicted based on a linear movement model. Based on this movement model, the prediction feature 26 is then determined in the step S 7 . Furthermore, it is provided that new object features are recognized in a step S 11 and object features 24 , 25 are no longer taken into account in a step S 12 .
- the association of already existing and tracked objects 9 and newly captured objects is performed both within the images 10 , 11 and in the real world.
- the steps S 6 to S 8 are performed within the sequence of images 10 , 11 . This is illustrated in FIG. 11 by the block 35 .
- the steps S 9 to S 12 are performed in the real world or in the environmental region 8 . This is illustrated in FIG. 11 by the block 36 .
- the determination of the object feature 24 , 25 according to the step S 6 is illustrated in FIG. 12 .
- the individual rectangles 18 are shown, which are associated with the respective regions of interest 16 , and which are combined to the group 22 .
- a polygon 28 is then determined.
- the polygon 28 is determined as the envelope of the rectangles 18 , which describe the regions of interest 16 .
- a center of gravity 29 of the polygon 28 is determined.
- the position of the center of gravity 29 of the polygon 28 with the coordinates x s and y s can be determined according to the following formulas: $x_s = \frac{1}{6A}\sum_{i=0}^{N-1}(x_i + x_{i+1})(x_i y_{i+1} - x_{i+1} y_i)$ and $y_s = \frac{1}{6A}\sum_{i=0}^{N-1}(y_i + y_{i+1})(x_i y_{i+1} - x_{i+1} y_i)$.
- an area A of the polygon 28 can be determined according to the following formula: $A = \frac{1}{2}\sum_{i=0}^{N-1}(x_i y_{i+1} - x_{i+1} y_i)$.
- (x i , y i ), (x i+1 , y i+1 ) are coordinates of two adjacent points of the polygon 28 .
- N is the number of points of the polygon 28 .
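The two formulas above are the standard shoelace expressions; as a compact sketch:

```python
def polygon_area_centroid(points):
    # Shoelace formulas for the area A and the center of gravity (x_s, y_s)
    # of a polygon given as an ordered list of (x, y) points.
    n = len(points)
    a = sx = sy = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        sx += (x0 + x1) * cross
        sy += (y0 + y1) * cross
    a *= 0.5
    return a, (sx / (6.0 * a), sy / (6.0 * a))
```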
- FIG. 13 shows a further representation of the polygon 28 .
- the polygon 28 has a left base point 30 , a central base point 31 , a right base point 32 as well as a tip point 33 .
- the center of gravity 29 of the polygon 28 is illustrated.
- the central base point 31 results from the point of intersection with a connecting line 34 , which connects a vanishing point 35 to the center of gravity 29 of the polygon 28 .
- by the left base point 30 and the right base point 32 , the width of the object 9 is described.
- the height of the object 9 is described by the tip point 33 .
- FIG. 14 shows the object 9 in the form of a pedestrian on the right side, which moves with a speed v relative to the motor vehicle 1 .
- the images 10 , 11 , which are presented on the left side of FIG. 14 , are provided by at least one camera 4 of the motor vehicle 1 .
- a first object feature 24 is determined as the polygon 28 in a first image 10 (not illustrated here).
- This first object feature 24 describes the object 9 , which is in a first position P 1 at a point of time t 1 in the real world or in the environmental region 8 .
- the prediction feature 26 is determined based on the first object feature 24 .
- a picture 9 ′ of the object 9 or of the pedestrian is shown in the second image 11 .
- the first object feature 24 determined in the first image 10 is presently shown dashed in the second image 11 .
- a linear movement model is used, which describes the speed v of the object 9 .
- For describing the movement of the object 9 , a Kalman filter is used. Herein, it is assumed that the object 9 moves with a constant speed v. The prediction from the point of time k−1 to the point of time k can be defined as: $\hat{x}_{k|k-1} = A \cdot \hat{x}_{k-1|k-1}$ and $P_{k|k-1} = A \cdot P_{k-1|k-1} \cdot A^{T} + Q$.
- A describes the system matrix.
- $\hat{x}_{k-1|k-1}$ describes the state vector for the preceding point of time or for the first image 10 .
- $P_{k-1|k-1}$ describes the state matrix for the preceding point of time or for the first image 10 .
- Q is a noise matrix, which describes the error of the movement model and the differences between the movement model and the movement of the object 9 in the real world.
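A sketch of this prediction step for a constant-velocity state [x, y, vx, vy]; the state layout, frame period and noise values are illustrative assumptions:

```python
import numpy as np

dt = 1.0 / 30.0                       # frame period (assumed)
A = np.array([[1, 0, dt, 0],          # system matrix of a constant-velocity
              [0, 1, 0, dt],          # model for the state [x, y, vx, vy]
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
Q = np.eye(4) * 1e-2                  # movement-model noise (assumed)

def predict(x_est, P_est):
    x_pred = A @ x_est                # x_k|k-1 = A * x_k-1|k-1
    P_pred = A @ P_est @ A.T + Q      # P_k|k-1 = A * P_k-1|k-1 * A^T + Q
    return x_pred, P_pred
```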
- a second object feature 25 can be determined based on the regions of interest 16 . Now, it is to be examined if the second object feature 25 can be associated with the prediction feature 26 .
- FIG. 15 a to FIG. 15 d show different variants of how the association between the second object feature 25 and the prediction feature 26 can be examined. For example, as shown in FIG. 15 a , an overlap area 36 between the second object feature 25 and the prediction feature 26 can be determined. Further, a distance 37 between the center of gravity 29 of the second object feature 25 or the polygon 28 and a center of gravity 38 of the prediction feature 26 can be determined. This is illustrated in FIG. 15 b .
- a size of the second object feature 25 can be compared to a size of the prediction feature 26 . This is illustrated in FIG. 15 c . Further, a distance 39 between the object 9 and a prediction object 40 can be determined, which has been determined based on the prediction feature 26 or which maps the prediction feature 26 into the real world.
- the second object feature 25 can be associated with the prediction feature 26 . That is, it is confirmed that the second object feature 25 describes the object 9 in the environmental region 8 .
- A moved object 9 , in particular a pedestrian, can change its direction of movement or its speed. Since the object features 24 , 25 have been determined based on the optical flow, it can be the case that an object feature 24 , 25 cannot be determined if the object 9 or the pedestrian currently stands still. Further, it can be the case that the moved object 9 changes its direction of movement.
- the picture 9 ′ of the object 9 or of the pedestrian moves to the left.
- the prediction feature 26 , which has been determined based on the movement model, and the second object feature 25 , which has been determined based on the regions of interest 16 , show a good correspondence.
- the second object feature 25 is confirmed as originating from the object 9 .
- the object 9 or the pedestrian stops. In this case, a second object feature 25 cannot be determined.
- the last confirmed object feature 41 is shown, which has been confirmed as originating from the object 9 . This corresponds to the second object feature 25 , which has been confirmed at the point of time t 1 .
- the object 9 again moves to the right.
- a second object feature 25 can be determined.
- an association probability p between the prediction feature 26 and the second object feature 25 results.
- an association probability p L between the last confirmed object feature 41 and the second object feature 25 is determined. Since the association probability p L is greater than the association probability p, the movement of the object 9 at a point of time t 4 is determined based on the last confirmed object feature 41 .
- a spatial similarity between a prediction position P 2 , which describes the position of the object 9 based on the movement model, and an object position 43 is determined. This is illustrated in FIG. 17 .
- the prediction position P 2 is described by multiple ellipses 42 , which describe the spatial uncertainty of the position determination.
- the object position 43 is determined based on the second object feature 25 . Based on the prediction position P 2 and the object position 43 , a spatial similarity or a spatial likelihood can then be determined.
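One plausible way to turn the uncertainty ellipses into a spatial likelihood is a Gaussian score of the object position under the prediction covariance; this reading is an assumption, the patent does not give the formula:

```python
import numpy as np

def spatial_likelihood(obj_pos, pred_pos, P_pos):
    # Gaussian score of the object position 43 under the prediction
    # position P2 with 2x2 positional covariance P_pos (the ellipses).
    d = np.asarray(obj_pos, float) - np.asarray(pred_pos, float)
    m2 = d @ np.linalg.inv(P_pos) @ d            # squared Mahalanobis distance
    norm = 2.0 * np.pi * np.sqrt(np.linalg.det(P_pos))
    return float(np.exp(-0.5 * m2) / norm)
```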
- a state vector and the associated covariance matrix can be determined: $\hat{x}_{k|k}^{j} = \hat{x}_{k|k-1} + K\,(z_k^{j} - \hat{z}_k)$ and $P_{k|k} = P_{k|k-1} - K H P_{k|k-1}$.
- $z_k^{j}$ describes the data vector of the measurement or of the second object feature 25 .
- $\hat{z}_k$ describes the expected data vector.
- K describes the Kalman gain, which can be determined according to the following formula: $K = P_{k|k-1} H^{T} (H P_{k|k-1} H^{T} + R)^{-1}$.
- H describes a measurement matrix for generating the object features 24 , 25 based on the movement model.
- R describes a noise matrix, which describes the variation of the polygon 28 in the image 10 , 11 .
- the system model can then be determined according to the following formula, wherein w describes a weighting factor: $\hat{x}_{k|k} = \sum_j w_j \, \hat{x}_{k|k}^{j}$.
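A sketch of this update step, continuing the constant-velocity example from above; the position-only measurement matrix H and the noise value R are assumptions:

```python
import numpy as np

H = np.array([[1, 0, 0, 0],           # measurement matrix: the polygon
              [0, 1, 0, 0]])          # measures position only (assumed)
R = np.eye(2) * 1.0                   # measurement noise of the polygon

def update(x_pred, P_pred, z):
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_est = x_pred + K @ (z - H @ x_pred)     # x_k|k = x_k|k-1 + K(z_k - z_hat_k)
    P_est = P_pred - K @ H @ P_pred           # P_k|k = P_k|k-1 - K H P_k|k-1
    return x_est, P_est
```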
- FIG. 18 shows an image 11 , in which multiple second object features 25 are associated with an object 9 or the picture 9 ′.
- a second object feature 25 is associated with a head of the object 9
- two second object features 25 are associated with the arms of the object 9
- a second object feature 25 is associated with the legs of the object 9 or the pedestrian.
- a weighting factor w can be associated with one of the second object features 25 .
- the second object feature 25 associated with the legs of the object 9 or the pedestrian has the greatest spatial similarity to the prediction feature 26 or the base point thereof.
- the weighting factor w of 1 is associated with this second object feature 25 .
- the weighting factor w of 0 is associated with the other second object features 25 . Based on this weighting factor w, the current position or the movement of the object can then be updated.
- FIG. 19 shows a distribution of the entry probability depending on the position of the object feature 24 , 25 in the image 10 , 11 , which describes the environmental region 8 .
- the areas 44 a to 44 d describe different entry probabilities in the image 10 , 11 .
- In an edge area 44 a , a high likelihood for the entry of a new object arises.
- a very low entry probability arises in a central area of the image 10 , 11 , which is directly in front of the motor vehicle 1 .
- an exit probability can be defined analogously to the entry probability.
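As a toy illustration of such a position-dependent entry probability (all values are assumptions):

```python
def entry_probability(u, v, width, height, border=0.15):
    # Toy model: high entry probability near the image borders, very low in
    # the central area in front of the vehicle; all values are assumptions.
    du = min(u, width - u) / width     # normalized distance to left/right edge
    dv = min(v, height - v) / height   # normalized distance to top/bottom edge
    return 0.9 if min(du, dv) < border else 0.1
```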
- moved objects 9 in an environmental region 8 of the motor vehicle 1 can be reliably recognized and tracked.
Abstract
Description
- $W \oplus H = \bigcup_{p \in W} H_p = \bigcup_{q \in H} W_q$
- $w_{\mathrm{ROI}} = II(x+w,\,y+h) - II(x+w,\,y) - II(x,\,y+h) + II(x,\,y)$
- $\hat{x}_{k|k-1} = A \cdot \hat{x}_{k-1|k-1}$
- $P_{k|k-1} = A \cdot P_{k-1|k-1} \cdot A^{T} + Q$
- $p_j = \sum_m w_m q_m$
- $\hat{x}_{k|k}^{j} = \hat{x}_{k|k-1} + K\,(z_k^{j} - \hat{z}_k)$
- $P_{k|k} = P_{k|k-1} - K H P_{k|k-1}$
Claims (11)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
DE102016114168.2 | 2016-08-01 | |
DE102016114168.2A DE102016114168A1 (en) | 2016-08-01 | 2016-08-01 | Method for detecting an object in a surrounding area of a motor vehicle with prediction of the movement of the object, camera system and motor vehicle
PCT/EP2017/069014 WO2018024600A1 (en) | 2016-08-01 | 2017-07-27 | Method for capturing an object in an environmental region of a motor vehicle with prediction of the movement of the object, camera system as well as motor vehicle
Publications (2)
Publication Number | Publication Date
---|---
US20190197321A1 (en) | 2019-06-27
US11170232B2 (en) | 2021-11-09
Family
ID=59656029
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
US16/322,333 Active 2038-03-20 US11170232B2 (en) | 2016-08-01 | 2017-07-27 | Method for capturing an object in an environmental region of a motor vehicle with prediction of the movement of the object, camera system as well as motor vehicle |
Country Status (6)
Country | Link |
---|---|
US (1) | US11170232B2 (en) |
EP (1) | EP3491581A1 (en) |
KR (1) | KR102202343B1 (en) |
CN (1) | CN109791603B (en) |
DE (1) | DE102016114168A1 (en) |
WO (1) | WO2018024600A1 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102016114168A1 (en) * | 2016-08-01 | 2018-02-01 | Connaught Electronics Ltd. | Method for detecting an object in a surrounding area of a motor vehicle with prediction of the movement of the object, camera system and motor vehicle |
JP6841097B2 (en) * | 2017-03-09 | 2021-03-10 | 富士通株式会社 | Movement amount calculation program, movement amount calculation method, movement amount calculation device and business support system |
US10685244B2 (en) * | 2018-02-27 | 2020-06-16 | Tusimple, Inc. | System and method for online real-time multi-object tracking |
US20220215650A1 (en) * | 2019-04-25 | 2022-07-07 | Nec Corporation | Information processing device, information processing method, and program recording medium |
WO2020237675A1 (en) * | 2019-05-31 | 2020-12-03 | 深圳市大疆创新科技有限公司 | Target detection method, target detection apparatus and unmanned aerial vehicle |
CN110288835B (en) * | 2019-06-28 | 2021-08-17 | 江苏大学 | Surrounding vehicle behavior real-time identification method based on kinematic prediction compensation mechanism |
JP7201550B2 (en) * | 2019-07-29 | 2023-01-10 | 本田技研工業株式会社 | VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND PROGRAM |
- US11605170B2 (en) * | 2019-10-14 | 2023-03-14 | INTERNATIONAL INSTITUTE OF EARTHQUAKE ENGINEERING AND SEISMOLOGY | Estimating a displacement sequence of an object |
US11315326B2 (en) * | 2019-10-15 | 2022-04-26 | At&T Intellectual Property I, L.P. | Extended reality anchor caching based on viewport prediction |
CN110834645B (en) * | 2019-10-30 | 2021-06-29 | 中国第一汽车股份有限公司 | Free space determination method and device for vehicle, storage medium and vehicle |
CN113255411A (en) * | 2020-02-13 | 2021-08-13 | 北京百度网讯科技有限公司 | Target detection method and device, electronic equipment and storage medium |
DE102020106301A1 (en) * | 2020-03-09 | 2021-09-09 | Zf Cv Systems Global Gmbh | Method for determining object information about an object in a vehicle environment, control unit and vehicle |
DE102020211960A1 (en) | 2020-09-24 | 2022-03-24 | Ford Global Technologies, Llc | Mapping of a trafficable area |
US11640668B2 (en) * | 2021-06-10 | 2023-05-02 | Qualcomm Incorporated | Volumetric sampling with correlative characterization for dense estimation |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102014114221A1 (en) * | 2014-09-30 | 2016-03-31 | Valeo Schalter Und Sensoren Gmbh | Method for detecting an object in a surrounding area of a motor vehicle, driver assistance system and motor vehicle |
CN104680557A (en) * | 2015-03-10 | 2015-06-03 | 重庆邮电大学 | Intelligent detection method for abnormal behavior in video sequence image |
2016
- 2016-08-01 DE DE102016114168.2A patent/DE102016114168A1/en active Pending
2017
- 2017-07-27 EP EP17754285.9A patent/EP3491581A1/en not_active Withdrawn
- 2017-07-27 US US16/322,333 patent/US11170232B2/en active Active
- 2017-07-27 KR KR1020197003274A patent/KR102202343B1/en active IP Right Grant
- 2017-07-27 WO PCT/EP2017/069014 patent/WO2018024600A1/en unknown
- 2017-07-27 CN CN201780056297.5A patent/CN109791603B/en active Active
Patent Citations (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030007074A1 (en) * | 2001-06-28 | 2003-01-09 | Honda Giken Kogyo Kabushiki Kaisha | Vehicle zone monitoring apparatus |
US20030091228A1 (en) * | 2001-11-09 | 2003-05-15 | Honda Giken Kogyo Kabushiki Kaisha | Image recognition apparatus |
JP2007334511A (en) * | 2006-06-13 | 2007-12-27 | Honda Motor Co Ltd | Object detection device, vehicle, object detection method and program for object detection |
US20100002908A1 (en) * | 2006-07-10 | 2010-01-07 | Kyoto University | Pedestrian Tracking Method and Pedestrian Tracking Device |
JP2008176555A (en) * | 2007-01-18 | 2008-07-31 | Fujitsu Ten Ltd | Obstacle detector and obstacle detection method |
US20100103262A1 (en) * | 2007-04-27 | 2010-04-29 | Basel Fardi | Vehicle periphery monitoring device, vehicle periphery monitoring program and vehicle periphery monitoring method |
WO2008139530A1 (en) | 2007-04-27 | 2008-11-20 | Honda Motor Co., Ltd. | Vehicle periphery monitoring device, vehicle periphery monitoring program, and vehicle periphery monitoring method |
US20090080701A1 (en) * | 2007-09-20 | 2009-03-26 | Mirko Meuter | Method for object tracking |
US20090087088A1 (en) * | 2007-09-28 | 2009-04-02 | Samsung Electronics Co., Ltd. | Image forming system, apparatus and method of discriminative color features extraction thereof |
US20100104137A1 (en) * | 2008-04-24 | 2010-04-29 | Gm Global Technology Operations, Inc. | Clear path detection using patch approach |
US20100124358A1 (en) * | 2008-11-17 | 2010-05-20 | Industrial Technology Research Institute | Method for tracking moving object |
DE102010005290A1 (en) | 2009-01-26 | 2010-08-19 | GM Global Technology Operations, Inc., Detroit | Vehicle controlling method for vehicle operator i.e. driver, involves associating tracked objects based on dissimilarity measure, and utilizing associated objects in collision preparation system to control operation of vehicle |
US20110074957A1 (en) * | 2009-09-30 | 2011-03-31 | Hitachi, Ltd. | Apparatus for Vehicle Surroundings Monitorings |
US20130223686A1 (en) * | 2010-09-08 | 2013-08-29 | Toyota Jidosha Kabushiki Kaisha | Moving object prediction device, hypothetical movable object prediction device, program, moving object prediction method and hypothetical movable object prediction method |
US9120484B1 (en) * | 2010-10-05 | 2015-09-01 | Google Inc. | Modeling behavior based on observations of objects observed in a driving environment |
US20150146921A1 (en) * | 2012-01-17 | 2015-05-28 | Sony Corporation | Information processing apparatus, information processing method, and program |
US20150324972A1 (en) * | 2012-07-27 | 2015-11-12 | Nissan Motor Co., Ltd. | Three-dimensional object detection device, and three-dimensional object detection method |
WO2014096240A1 (en) | 2012-12-19 | 2014-06-26 | Connaught Electronics Ltd. | Method for detecting a target object based on a camera image by clustering from multiple adjacent image cells, camera device and motor vehicle |
WO2014096242A1 (en) | 2012-12-19 | 2014-06-26 | Connaught Electronics Ltd. | Method for tracking a target object based on a stationary state, camera system and motor vehicle |
US20140350834A1 (en) * | 2013-05-21 | 2014-11-27 | Magna Electronics Inc. | Vehicle vision system using kinematic model of vehicle motion |
US10093233B2 (en) * | 2013-10-02 | 2018-10-09 | Conti Temic Microelectronic Gmbh | Method and apparatus for displaying the surroundings of a vehicle, and driver assistance system |
US20160364619A1 (en) * | 2014-01-09 | 2016-12-15 | Clarion Co., Ltd. | Vehicle-Surroundings Recognition Device |
US20160335489A1 (en) * | 2014-01-14 | 2016-11-17 | Denso Corporation | Moving object detection apparatus and moving object detection method |
US20150258991A1 (en) * | 2014-03-11 | 2015-09-17 | Continental Automotive Systems, Inc. | Method and system for displaying probability of a collision |
US10130873B2 (en) * | 2014-03-21 | 2018-11-20 | Samsung Electronics Co., Ltd. | Method and apparatus for preventing a collision between subjects |
US20170140230A1 (en) * | 2014-08-21 | 2017-05-18 | Mitsubishi Electric Corporation | Driving assist apparatus, driving assist method, and non-transitory computer readable recording medium storing program |
US20160379074A1 (en) * | 2015-06-25 | 2016-12-29 | Appropolis Inc. | System and a method for tracking mobile objects using cameras and tag devices |
US20170061644A1 (en) * | 2015-08-27 | 2017-03-02 | Kabushiki Kaisha Toshiba | Image analyzer, image analysis method, computer program product, and image analysis system |
US20170280026A1 (en) * | 2015-09-25 | 2017-09-28 | Hitachi Information & Telecommunication Engineering, Ltd. | Image Processing Apparatus and Image Processing Method |
US20170259753A1 (en) * | 2016-03-14 | 2017-09-14 | Uber Technologies, Inc. | Sidepod stereo camera system for an autonomous vehicle |
US20170329332A1 (en) * | 2016-05-10 | 2017-11-16 | Uber Technologies, Inc. | Control system to adjust operation of an autonomous vehicle based on a probability of interference by a dynamic object |
US20190197321A1 (en) * | 2016-08-01 | 2019-06-27 | Connaught Electronics Ltd. | Method for capturing an object in an environmental region of a motor vehicle with prediction of the movement of the object, camera system as well as motor vehicle |
Non-Patent Citations (10)
Title |
---|
Andreas Ess, Konrad Schindler, Bastian Leibe and Luc Van Gool, "Object Detection and Tracking for Autonomous Navigation in Dynamic Environments", IJRR, Sage, 2010. * |
Anonymous, "Blob Tracking", RoboRealm Vision for Machines, Apr. 29, 2016, XP055409455, web.archive.org, Retrieved from the Internet: URL: http://web.archive.org/web/20160429233157/Http://www.roborealm.com/help/Blob_Tracking.php (retrieved Sep. 25, 2017) p. 1, Paragraph 1—p. 2222, Paragraph 4. |
Bertozzi, M., Broggi, A., Fascioli, A., Tibaldi, A., Chapuis, R., Chausse, F., "Pedestrian Localization and Tracking System with Kalman Filtering", Intelligent Vehicles Symposium, 2004 IEEE, Parma, Italy, Jun. 14-17, 2004, Piscataway, NJ, USA, IEEE, pp. 584-589, XP010727712, DOI: 10.1109/IVS.2004.1336449, ISBN: 978-0-7803-8310-4, abstract, Sections I, II.C, II.E, III. |
Czyz, J. et al.: A particle filter for joint detection and tracking of color objects. In: Image and Vision Computing 2006, pp. 1-11. |
Dollár, P. et al.: Pedestrian Detection: An Evaluation of the State of the Art. In: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, No. 4, Apr. 2012, pp. 743-761. |
Esther B. Meier and Frank Ade, "Object Detection and Tracking in Range Image Sequences by Separation of Image Features", IEEE International Conference on Intelligent Vehicles IV'98, Oct. 30, 1998, pp. 176-181, XP055409483, abstract, Sections I, IV, V. * |
German Search Report Issued in Corresponding German Application No. 102016114168.2, dated Jun. 7, 2017 (7 Pages). |
International Search Report and Written Opinion Issued in Corresponding PCT Application No. PCT/EP2017/069014, dated Oct. 9, 2017 (11 Pages). |
The Notice of Preliminary Rejection issued in corresponding Korean Application No. 10-2019-7003274, dated Mar. 30, 2020 (12 pages). |
Zuther, S.: Multisensorsystem mit schmalbandigen Radarsensoren zum Fahrzeugseitenschutz (Multi-sensor system with narrowband radar sensors for vehicle side protection). Dissertation, Faculty of Engineering and Computer Science, Ulm University, 2014. |
Also Published As
Publication number | Publication date |
---|---|
CN109791603B (en) | 2023-10-03 |
KR102202343B1 (en) | 2021-01-13 |
KR20190025698A (en) | 2019-03-11 |
DE102016114168A1 (en) | 2018-02-01 |
WO2018024600A1 (en) | 2018-02-08 |
EP3491581A1 (en) | 2019-06-05 |
US20190197321A1 (en) | 2019-06-27 |
CN109791603A (en) | 2019-05-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11170232B2 (en) | Method for capturing an object in an environmental region of a motor vehicle with prediction of the movement of the object, camera system as well as motor vehicle |
Malik | Fast vehicle detection with probabilistic feature grouping and its application to vehicle tracking | |
CN112307921B (en) | Vehicle-mounted end multi-target identification tracking prediction method | |
US9230165B2 (en) | Object detection apparatus, vehicle-mounted device control system and storage medium of program of object detection | |
Kanhere et al. | Real-time incremental segmentation and tracking of vehicles at low camera angles using stable features | |
US7672514B2 (en) | Method and apparatus for differentiating pedestrians, vehicles, and other objects | |
US8867790B2 (en) | Object detection device, object detection method, and program | |
CN104008371B (en) | Regional suspicious target tracking and recognizing method based on multiple cameras | |
Broggi et al. | Self-calibration of a stereo vision system for automotive applications | |
CN111932580A (en) | Road 3D vehicle tracking method and system based on Kalman filtering and Hungary algorithm | |
CN104239867A (en) | License plate locating method and system | |
Nassu et al. | A vision-based approach for rail extraction and its application in a camera pan–tilt control system | |
US20190362163A1 (en) | Method for validation of obstacle candidate | |
WO2011065399A1 (en) | Path recognition device, vehicle, path recognition method, and path recognition program | |
CN114419098A (en) | Moving target trajectory prediction method and device based on visual transformation | |
US20200302192A1 (en) | Outside recognition apparatus for vehicle | |
JP2004038624A (en) | Vehicle recognition method, vehicle recognition device and vehicle recognition program | |
JP3629935B2 (en) | Speed measurement method for moving body and speed measurement device using the method | |
Malik | High-quality vehicle trajectory generation from video data based on vehicle detection and description | |
JP4055785B2 (en) | Moving object height detection method and apparatus, and object shape determination method and apparatus | |
Rosebrock et al. | Real-time vehicle detection with a single camera using shadow segmentation and temporal verification | |
JP4333683B2 (en) | Windshield range detection device, method and program | |
Lim et al. | Detection and tracking multiple pedestrians from a moving camera | |
KR102448944B1 (en) | Method and Device for Measuring the Velocity of Vehicle by Using Perspective Transformation | |
WO2022208666A1 (en) | Image processing device, image processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
AS | Assignment |
Owner name: CONNAUGHT ELECTRONICS LTD., IRELAND |
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HUGHES, CIARAN;NGUYEN, DUONG-VAN;HORGAN, JONATHAN;AND OTHERS;SIGNING DATES FROM 20190207 TO 20190211;REEL/FRAME:048384/0436 |
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |