EP3146462A1 - Method for determining a respective boundary of at least one object, sensor device, driver assistance device and motor vehicle - Google Patents

Method for determining a respective boundary of at least one object, sensor device, driver assistance device and motor vehicle

Info

Publication number
EP3146462A1
Authority
EP
European Patent Office
Prior art keywords
selection
point
points
motor vehicle
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP15722677.0A
Other languages
German (de)
English (en)
Inventor
Duong-Van NGUYEN
Ciarán HUGHES
Jonathan Horgan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Connaught Electronics Ltd
Original Assignee
Connaught Electronics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Connaught Electronics Ltd filed Critical Connaught Electronics Ltd
Publication of EP3146462A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/255: Detecting or recognising potential candidate objects based on visual cues, e.g. shapes

Definitions

  • the invention relates to a method for determining a respective boundary of at least one object in an environmental region of a motor vehicle based on sensor data of an optical sensor of a sensor device of the motor vehicle, wherein the following steps are performed by means of an image processing device of the sensor device.
  • a point cloud with a plurality of points is determined based on the sensor data and the point cloud is transformed into a plan view image, which represents the environmental region of the motor vehicle.
  • the invention relates to a sensor device for a motor vehicle, to a driver assistance device with a sensor device as well as to a motor vehicle with a driver assistance device.
  • boundaries of objects can be determined. For example, objects can be recognized in images captured with a camera, and the boundaries thereof can be marked.
  • the representation of boundaries can for example be used in a rearview camera of a motor vehicle in order to make the driver aware of obstacles.
  • the boundary of a parking lot defined by the points of the point cloud can for example be identified as a closed quadrangle and be presented in a plan view image as such.
  • This can entail that the parking lot is recognized as occupied by a driver assistance device of the motor vehicle.
  • care has to be taken that the parking lot is not represented as blocked or closed.
  • the two points, which are the frontmost points of a side of the parking lot facing the motor vehicle, are connected to each other.
  • It is the object of the invention to provide a method, a sensor device, a driver assistance device as well as a motor vehicle, in which measures are taken, which ensure that boundaries of objects located in an environmental region of a motor vehicle can be particularly reliably determined.
  • this object is solved by a method, by a sensor device, by a driver assistance device as well as by a motor vehicle having the features according to the respective independent claims.
  • Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.
  • a method serves for determining a respective boundary of at least one object in an environmental region of a motor vehicle based on sensor data of an optical sensor of a sensor device of the motor vehicle, wherein the following steps are performed by means of an image processing device of the sensor device: A point cloud with a plurality of points is determined based on the sensor data, and the point cloud is transformed into a plan view image, which represents the environmental region of the motor vehicle. According to the invention, it is provided that the following steps are performed with the image processing device in the plan view image: A reference point is determined in the plan view image, which describes a position of the optical sensor, and the plan view image is divided into a plurality of segments starting from the reference point.
  • a respective selection point is determined from the points of the point cloud in the respective segment, wherein the respective selection point in the segment has the lowest distance to the reference point compared to the other points of the point cloud in the segment.
  • the respective boundary of the at least one object is determined based on the respectively determined selection points.
  • sensor data is acquired with an optical sensor of a sensor device.
  • the sensor data can in particular represent a picture of the environmental region of the motor vehicle.
  • sensor data can be continuously acquired with the optical sensor, which describe the environmental region of the motor vehicle.
  • a point cloud is determined from the sensor data and displayed in the plan view image representing the environmental region of the motor vehicle.
  • the plan view image is divided into multiple segments, whereby the points of the point cloud are also divided.
  • a selection point is determined in each segment. This selection point is that one of the points, which has the lowest distance to the reference point, which can for example describe the position of the optical sensor, compared to the other points in the segment. The boundary of the object can then be determined only based on the selection points.
  • By the method according to the invention, it becomes possible to particularly accurately determine the boundary of the object.
  • a parking lot is surrounded by the objects, which delimit it.
  • the boundary of the parking lot can also be determined.
  • the objects are present in the form of the point cloud, which is processed by the method according to the invention such that only the points of the point cloud relevant to the motor vehicle are determined or used as the boundary.
  • The advantage is that the boundary can be very precisely and reliably represented while at the same time only the required points are used. This approach allows working with an optimally reduced amount of sensor data.
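The segmentation and per-segment nearest-point selection described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the angular layout, the default 10° segment size and the 180° field of view are assumptions for the example.

```python
import math

def nearest_point_per_segment(points, reference, seg_angle_deg=10.0, fov_deg=180.0):
    """Divide the plan view into angular segments around the reference
    point and keep, per segment, only the point of the cloud with the
    lowest distance to the reference (hypothetical helper)."""
    n_segments = int(fov_deg / seg_angle_deg)
    best = [None] * n_segments          # selection point per segment
    best_d = [math.inf] * n_segments    # shortest distance seen so far
    rx, ry = reference
    for (px, py) in points:
        angle = math.degrees(math.atan2(py - ry, px - rx))  # -180..180
        if not (0.0 <= angle <= fov_deg):
            continue                    # outside the assumed field of view
        idx = min(int(angle / seg_angle_deg), n_segments - 1)
        d = abs(px - rx) + abs(py - ry)  # Manhattan distance, as in step S8
        if d <= best_d[idx]:
            best_d[idx] = d
            best[idx] = (px, py)
    return [p for p in best if p is not None]
```

With a 10° segment size and a 180° field of view this yields the 180 / θ = 18 segments mentioned later in the text.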
  • selection points are selected and connected by a polyline for determining the respective boundary of the at least one object.
  • the boundary is present not only in the form of individual points, but continuously. Now, for each position in the plan view image or in the environmental region, it can be determined if it is located in front of or behind the respective boundary of the object. Without the polyline, the boundary of the object would only be partially defined, namely in the position of the points.
  • a vector is provided, which is situated from a first selection point in a first segment to a second selection point in a second segment adjacent to the first one, which contains one of the selection points. Furthermore, the first and the second selection point are connected by the polyline if at least one further of the selection points is disposed on a side of the vector opposing the reference point, which is disposed after the second segment in the direction of the vector.
  • a course for the polyline can be preset by the vector. Selection points, which would for example be located behind the polyline starting from the reference point, presently cannot be taken into account. Now, it can be ensured that only those selection points are used, which are required for a reliable representation of the boundary.
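The test whether a selection point is disposed on the side of the vector opposing the reference point can be expressed with a 2D cross product. This is a sketch under assumptions: the sign convention and the helper names are illustrative, not taken from the patent.

```python
def side_of_vector(v_start, v_end, point):
    """Sign of the 2D cross product: > 0 if `point` lies left of the
    vector from v_start to v_end, < 0 if right, 0 if collinear."""
    (x1, y1), (x2, y2), (px, py) = v_start, v_end, point
    return (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)

def opposes_reference(v_start, v_end, point, reference):
    """True if `point` and the reference point lie on opposite sides of
    the vector, i.e. the point is on the side opposing the reference."""
    s_point = side_of_vector(v_start, v_end, point)
    s_ref = side_of_vector(v_start, v_end, reference)
    return s_point * s_ref < 0
```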
  • the first selection point is connected to a third selection point by the polyline, wherein the third selection point from a predetermined set of the selection points each disposed in segments, which follow the second segment in the direction of the vector, has the lowest distance to the first selection point.
  • the first selection point is connected to a fourth selection point by the polyline if a third selection point from a predetermined set of the selection points each disposed in segments, which follow the second segment in the direction of the vector, has the lowest distance to the first selection point and a ratio of an angle between a line from the reference point to the first selection point and a line from the reference point to the third selection point as well as a distance between the first and the third selection point exceeds a predetermined limit value.
  • the ratio can also be referred to as blocking state.
  • the selection points, more precisely the first selection point and the third selection point are not connected by the polyline.
  • the limit value is for example set such that the distance from the first selection point to the second selection point is less than the dimensions of the motor vehicle.
  • the respective selection points are adapted and connected by the polyline depending on a ratio of an angle between three of the selection points and a distance between two of the selection points. It is advantageous that thus an additional criterion for the selection of the selection point can be established.
  • selection points unnecessary for the definition of the boundary cannot be taken into account.
  • those of the selection points cannot be taken into account, which for example span an area with the adjacent selection points, which is considerably smaller than the dimensions of the motor vehicle.
  • a position of at least one of the selection points is adapted depending on a ratio of an angle between three of the selection points and a distance between two of the selection points.
  • This is advantageous because the boundary can thereby be particularly optimally determined.
  • Some of the selection points can be positioned such that they are not used for determining the boundary although they have been determined according to one of the preceding steps. Thus, there may be certain positions of the selection points, which result in them being shifted or substituted.
  • the points of the point cloud are divided into at least two clusters.
  • one of the respective clusters can be provided for each of the objects in the environmental region.
  • the cluster has the advantage that now the respective boundary can be separately determined for example for each of the objects.
  • it is possible to associate an importance of the respective boundary for the motor vehicle, which differs from cluster to cluster.
  • the determination of the respective boundary of the at least one object is performed separately for each of the clusters based on the respectively determined selection points.
  • the boundary of the respective cluster can now be particularly precisely determined because for example only the respective points are contained in the cluster, which belong to a single one of the objects.
  • the clusters are combined if one of the segments contains the selection points of each of these clusters.
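Combining clusters whose selection points fall into the same segment can be sketched with a small union-find structure. The patent does not prescribe a data structure; the class, the input format of `merge_by_segment` and its names are assumptions for the example.

```python
class ClusterMerger:
    """Union-find over cluster ids, used to combine clusters."""
    def __init__(self, n_clusters):
        self.parent = list(range(n_clusters))

    def find(self, c):
        while self.parent[c] != c:
            self.parent[c] = self.parent[self.parent[c]]  # path halving
            c = self.parent[c]
        return c

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def merge_by_segment(segment_to_clusters, n_clusters):
    """segment_to_clusters maps a segment index to the cluster ids of the
    selection points in that segment (hypothetical input format). Clusters
    sharing a segment are combined; returns the final id per cluster."""
    m = ClusterMerger(n_clusters)
    for clusters in segment_to_clusters.values():
        first = clusters[0]
        for c in clusters[1:]:
            m.union(first, c)   # same segment -> combine clusters
    return [m.find(c) for c in range(n_clusters)]
```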
  • the optical sensor is a camera.
  • the camera has the advantage that a large area of the environmental region can be captured.
  • the method according to the invention can be integrated in an existing system including the camera.
  • the camera is preferably a video camera, which is able to provide a plurality of images (frames) per second.
  • the camera can be a CCD camera or a CMOS camera.
  • the point cloud is determined based on sensor data of a laser scanner.
  • the optical sensor is therefore formed as a laser scanner.
  • a sensor device for a motor vehicle includes at least one optical sensor for providing sensor data of an environmental region of the motor vehicle, and is adapted to perform a method according to the invention.
  • a driver assistance device includes a sensor device according to the invention.
  • a motor vehicle according to the invention includes a driver assistance device according to the invention.
  • an obstacle for the motor vehicle can be acquired with the optical sensor as the object.
  • the object can be an item or a living entity, with which a collision is to be prevented with the aid of the driver assistance device.
  • The obstacles also appear if, for example, the parking lot is to be delimited. All of the obstacles located on a travel path or a trajectory from a current site to a destination are to be acquired with the driver assistance device. Among these obstacles, only the boundary facing the motor vehicle can be of relevance to the motor vehicle.
  • Fig. 1 in schematic illustration a plan view of a motor vehicle and a parking lot with a boundary
  • FIG. 2 in schematic illustration points of a point cloud and segments starting from a reference point
  • Fig. 3 a flow diagram of a method according to an embodiment of the invention.
  • FIG. 4 a further flow diagram of a method according to an embodiment of the invention
  • Fig. 5 a further flow diagram of a method according to an embodiment of the invention
  • FIG. 6 in schematic illustration three selection points from the point cloud
  • Fig. 7 in schematic illustration three selection points, a shifted selection point and the reference point from the point cloud;
  • Fig. 8 in schematic illustration the three selection points, the shifted selection point and the reference point from the point cloud;
  • Fig. 9 in schematic illustration the three selection points, the shifted selection point and the reference point from the point cloud;
  • Fig. 10 in schematic illustration an environmental region of the motor vehicle with the selection points in the segments;
  • Fig. 11 in schematic illustration the environmental region with the selection points in the segments and the respective boundary;
  • Fig. 12 in schematic illustration the environmental region with selection points in the segments, wherein the selection points are divided into clusters and one of the boundaries is determined based on two combined clusters;
  • Fig. 13 in schematic illustration a side view of the environmental region with a point cloud of objects.
  • Fig. 14 in schematic illustration the environmental region in plan view analogous to Fig. 13.
  • In Fig. 1, a plan view of a motor vehicle 1 is illustrated schematically.
  • the motor vehicle 1 is a passenger car.
  • the motor vehicle 1 has a sensor device 2, which includes an optical sensor 3 and an image processing device 4.
  • the optical sensor 3 can be a camera, in particular a video camera, which continuously captures a sequence of frames.
  • the image processing device 4 then processes the sequence of frames in real time and can determine a boundary 8 based on it - as described in more detail below.
  • the optical sensor 3 can also be formed as a laser scanner, which determines the point cloud 11 of the environmental region 5 by emitting and receiving optical pulses.
  • the laser scanner can also be an imaging laser scanner, which additionally acquires an intensity of a reflected signal of the laser scanner besides a geometry of the object 7.
  • Sensor data is provided with the optical sensor 3, which represents a picture of the environmental region 5.
  • a point cloud 11 is determined from the sensor data by means of the image processing device 4, which includes a plurality of points 10.
  • the image processing device 4 can determine characteristic points as the points 10 of the point cloud 11 from the sensor data or image data. For example, this can be effected by means of an interest operator.
  • Selection points 9 are determined from the points 10 of the point cloud 11. They are used to determine the boundary 8 of the object 7. For this purpose, the selection points 9 are connected by a polyline 12. Thus, the boundary 8 is a connection of the selection points 9 of the points 10 of the point cloud 11 of the object 7 by the polyline 12.
  • an empty parking lot 6 is located in an environmental region 5 of the motor vehicle 1. This parking lot 6 represents an object 7, which has the boundary 8.
  • this boundary 8 can be presented on a screen of the motor vehicle 1 together with an image of the environmental region 5.
  • the boundary 8 or data describing the boundary 8 can be used as an input signal for a driver assistance device, which assists the driver in parking.
  • the motor vehicle 1 can be precisely parked into the parking lot 6 if the boundary 8 has been reliably determined.
  • the optical sensor 3 is disposed in an interior of the motor vehicle 1, in particular behind the windshield, and captures the environmental region 5 in front of the motor vehicle 1.
  • the invention is not restricted to such an arrangement of the optical sensor 3.
  • the arrangement of the optical sensor 3 can be different according to embodiment.
  • the optical sensor 3 can also be disposed in a rear region of the motor vehicle 1 and capture the environmental region 5 behind the motor vehicle 1.
  • Several such optical sensors 3 can also be employed, which each are formed for capturing the environmental region 5.
  • Fig. 2 shows a schematic plan view image 13, which contains the points 10 of the point cloud 11.
  • a reference point 14 is determined, which is presently identical to the position of the optical sensor 3 of the motor vehicle 1 .
  • the reference point 14 does not necessarily have to be identical to the position of the optical sensor 3.
  • the optical sensor 3 can also be disposed at a known distance to the reference point 14. The position of the optical sensor 3 relative to the reference point 14 is therefore arbitrary provided that the relative position is known.
  • the plan view image 13 is divided into segments 15.
  • the segments 15 are formed as circular segments each including the reference point 14.
  • the points 10 of the point cloud 11 are also divided.
  • the respective selection point 9 for each of the segments 15 can now be determined by selecting that point 10 of the respective segment 15, which has the lowest distance to the reference point 14 among the points 10 within this segment 15.
  • a coordinate axis extending in horizontal image plane through the reference point 14 is denoted by y, while a coordinate axis extending vertically in image plane through the reference point 14 is denoted by x.
  • the size of the respective segments 15 depends on an angle θ, which specifies the size of the respective segment 15 and describes which area is swept by the segment 15 starting from the reference point 14.
  • If the environmental region 5 extends over a horizontal field of view of the optical sensor 3 of 180°, the number of segments 15 is calculated as 180 / θ.
  • By a star S, which has one of the points 10 at each of its apices, it is apparent which of the points 10 are selected as the selection points 9.
  • the selection points 9 are then connected by the polyline 12.
  • Fig. 3 shows a simplified flow diagram of a method according to an embodiment of the invention.
  • the points 10 of the point cloud 11 are divided into at least one cluster 16.
  • the at least one cluster 16 can be selected such that it includes the points 10 of the object 7. If multiple objects 7 are present, a separate cluster 16 can be provided for each of the objects 7.
  • a list of the selection points 9 is created.
  • the selection points 9 are calculated according to a flow diagram of Fig. 4.
  • the polyline 12 is determined according to a flow diagram of Fig. 5.
  • the flow diagram in Fig. 4 shows how the respective selection points 9 are determined.
  • In a step S4, a loop for each of the points 10 of the respective segment 15 is started.
  • In a step S5, it is queried if all of the points 10 of the respective segment 15 have been processed. If this is affirmed, the point 10 closest to the reference point 14 of each of the segments 15 is respectively determined as the selection point 9.
  • the method is preliminarily terminated in a step S7. If not all of the points were processed after step S5 and the query in this step is therefore negated, a step S8 follows, and a distance, in particular a Manhattan distance, from one of the points 10 to the reference point 14 is calculated.
  • the Manhattan distance is also referred to as Cityblock distance and is based on a neighborhood of four. Thus, only horizontal or vertical pieces of path are possible to determine the distance. Another possibility to this is a neighborhood of eight, in which diagonal pieces of path are also possible.
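The two distance notions mentioned above can be written down directly; these are the standard definitions, not patent-specific code.

```python
def manhattan(p, q):
    """Cityblock distance (neighbourhood of four): only horizontal and
    vertical pieces of path count."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chessboard(p, q):
    """Distance under a neighbourhood of eight, where diagonal pieces of
    path are also possible (Chebyshev distance)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```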
  • Step S8 is initialized with a step S9.
  • the shortest distance from the respective segment 15 to the reference point 14 is taken as a basis.
  • a step S10 follows, in which it is checked if the distance calculated in step S8 is less than or equal to the shortest distance, which has been calculated in step S9. If the result in step S10 is negative, it is continued with step S4, and if the result of step S10 is positive, it is continued with a step S11.
  • In step S11, the point 10 currently present as the selection point 9 is stored in a vector. Thereafter, the method is continued with step S4.
  • In step S12, a loop for each of the selection points 9 is performed.
  • In step S13, it is checked if the selection points 9 were all passed. If this is the case, the method ends in a step S14. If this is not the case, a step S15 follows, which continues with the next selection point 9. If a new selection point 9 is not found, the method is continued in step S12. In contrast, if a new selection point 9 is found, step S16 follows. In step S16, a predetermined set or number of selection points 9 adjacent to the current selection point 9 is determined.
  • In step S17, it is checked if one of these adjacent selection points 9 from the predetermined set belongs to a different cluster 16 than the current selection point 9. If this is the case, the two respective clusters 16, namely the cluster 16 of the current selection point 9 and the cluster 16 of the adjacent selection points 9, are combined in a step S18, and the method is continued at step S16. If this is not the case and the current selection point 9 and the adjacent selection points 9 belong to the same cluster 16, the method is continued with a step S19.
  • In step S19, it is checked if all of those adjacent selection points 9 from the predetermined set are located on a side of a vector opposing the reference point 14. The vector extends from the current selection point 9 to the nearest selection point 9. If the next selection point 9 is on the opposing side of this vector with respect to the reference point 14, the current selection point 9 is connected to the next selection point 9 by the polyline 12 in a step S20. If this is not the case, the next selection point 9 is connected to the further selection point 9 by the polyline 12 in a step S21. Finally, the obtained polyline 12 is optionally improved or adapted in a step S22.
  • After step S22, the procedure of the flow diagram again starts at step S12 with the next selection point 9.
  • the method for improving of step S22 is explained below by way of example in Fig. 6 to 9 based on three selection points a, b and c.
  • a variable or a state bs is introduced, which can also be referred to as blocking state.
  • This state bs is explained by the example of the first selection point a, which is connected to the second selection point b by the polyline 12.
  • the state can be determined with the following formula: bs = a_diff / d_diff.
  • a_diff is the angle between the first selection point a, the reference point 14 and a selection point 9 different from the second selection point b.
  • d_diff is the distance between the first selection point a and the selection point 9 different from the second selection point b.
  • the quotient of a_diff and d_diff thus results in the state bs.
  • It is checked if the state bs is less than a limit value. If the state is less than or equal to the limit value, the polyline 12 is removed, and an alternative connection to another one of the respective selection points 9 is searched with the aid of one of the previous steps.
  • the state bs also finds use in step S22, which is to improve the boundary 8 with respect to the shape thereof.
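The blocking state bs = a_diff / d_diff can be sketched directly from the definitions above. Assumptions in this sketch: the angle is taken as the absolute difference of the two polar angles without wrap-around handling (i.e. the points are taken to lie in the sensor's forward half-plane), and the function names are illustrative.

```python
import math

def blocking_state(reference, a, c):
    """bs = a_diff / d_diff for two selection points a and c, seen from
    the reference point: a_diff is the angle between the lines
    reference->a and reference->c, d_diff the distance between a and c."""
    ax, ay = a[0] - reference[0], a[1] - reference[1]
    cx, cy = c[0] - reference[0], c[1] - reference[1]
    a_diff = abs(math.atan2(ay, ax) - math.atan2(cy, cx))  # no wrap-around
    d_diff = math.hypot(a[0] - c[0], a[1] - c[1])
    return a_diff / d_diff

def may_connect(reference, a, c, limit):
    """a and c are only connected by the polyline if bs exceeds the limit;
    otherwise an alternative connection has to be searched."""
    return blocking_state(reference, a, c) > limit
```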
  • Fig. 6 shows a first variant, how the behavior of the polyline 12 can be adapted.
  • the first selection point a is connected to the second selection point b and the second selection point b in turn is connected to the third selection point c.
  • the selection points a, b, c span an angle α.
  • this angle α is now smaller than a predetermined value, for example 90°.
  • the state bs is less than a second limit value Z with respect to the first selection point a and the third selection point c.
  • the polylines 12 are removed from the second selection point b, and the first selection point a and the third selection point c are directly connected by the polyline 12.
  • the next neighboring point after the first selection point a is therefore the third selection point c.
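The first improvement variant of Fig. 6 can be sketched as follows: if the angle at b is below the predetermined value and the blocking state of a and c is below the second limit value Z, the middle point b is dropped and a is connected directly to c. The limit values and function names are illustrative assumptions.

```python
import math

def angle_at(b, a, c):
    """Angle at vertex b spanned by the points a and c, in degrees."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))

def simplify(a, b, c, bs_ac, angle_limit=90.0, bs_limit=1.0):
    """If the angle at b is below angle_limit and the blocking state of
    a and c is below the second limit value Z (bs_limit), remove b and
    connect a directly to c; otherwise keep the original polyline."""
    if angle_at(b, a, c) < angle_limit and bs_ac < bs_limit:
        return [a, c]          # b removed, direct polyline a -> c
    return [a, b, c]           # original polyline a -> b -> c kept
```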
  • Fig. 7 shows a further possible arrangement of selection points a, b, c.
  • the first selection point a forms an angle α with the second selection point b and the third selection point c.
  • the selection points a, b, c are in a particular relation to the reference point 14.
  • the first selection point a is farther away from the reference point 14 than the second selection point b.
  • the third selection point c is closest to the reference point 14 along the axis y.
  • the third selection point c is shifted such that the shifted selection point V forms a right-angled triangle with the second selection point b and the third selection point c, which faces the reference point 14.
  • the shift or replacement of the third selection point c with the shifted selection point V is only performed if the state bs of the second selection point b and the third selection point c is less than the second limit value Z.
  • Fig. 8 shows a further arrangement of the selection points a, b, c, wherein the first selection point a, the second selection point b and the third selection point c form an angle α.
  • the first selection point a has a greater distance to the reference point 14 than the second selection point b and the third selection point c.
  • the third selection point c is disposed on the left side of the second selection point b with respect to the line y. If this constellation is true and the state bs is satisfied by the second selection point b and the third selection point c, the third selection point c is replaced with the selection point V.
  • the selection point V is situated on a line segment from the reference point 14 to the third selection point c and is defined there via a perpendicular from the second selection point b in vertical direction of the image plane.
  • the state bs again has to be less than the second limit value Z.
  • the angle α is also spanned by the first selection point a, the second selection point b and the third selection point c.
  • the first selection point a is farther away from the reference point 14 than the second selection point b.
  • the third selection point c is disposed on the left side of the second selection point b with respect to the line x.
  • the third selection point c is now replaced with the selection point V if the angle α is below a predetermined value, for example 90°, and the state bs of the second selection point b and the third selection point c is below the second limit value Z.
  • the position of the selection point V is situated on a line segment from the reference point 14 to the third selection point c, wherein the final definition of the position of the point V is effected by a line horizontal in the image plane parallel to y from the second selection point b to the line segment.
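The construction of the shifted selection point V in Figs. 8 and 9 amounts to intersecting the line segment from the reference point to c with the vertical or horizontal line through b. This is a geometric sketch of that reading, with illustrative names; the patent text itself only describes the construction in words.

```python
def shifted_point(reference, c, b, direction="vertical"):
    """Intersect the segment reference->c with the vertical (Fig. 8) or
    horizontal (Fig. 9) line through b. Returns None if the segment does
    not cross that line."""
    rx, ry = reference
    cx, cy = c
    if direction == "vertical":      # line x = b.x
        if cx == rx:
            return None
        t = (b[0] - rx) / (cx - rx)
    else:                            # line y = b.y
        if cy == ry:
            return None
        t = (b[1] - ry) / (cy - ry)
    if not 0.0 <= t <= 1.0:
        return None                  # intersection outside the segment
    return (rx + t * (cx - rx), ry + t * (cy - ry))
```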
  • Fig. 10 shows in schematic illustration a plan view image 13 of the environmental region 5 with a superimposed model of the motor vehicle 1 , which is divided into the segments 15.
  • The selection points 9 are connected by the polyline 12 according to the method.
  • the selection points 9 are only connected to each other, if they belong to the same cluster 16.
  • two different clusters 16 are depicted.
  • the effect of the state bs is also described by a dashed line.
  • the state bs is only applied in the cluster 16 represented on the left side in Fig. 10.
  • Fig. 11 also shows a plan view image 13 with a superimposed model of the motor vehicle 1 in a further embodiment.
  • the polylines 12 or the boundary 8 are shown after step S22, which is applied for improvement.
  • the result after step S22 is referred to as improved polyline 17. It is apparent that the improved polyline 17 oscillates less compared to the polyline 12 without improvement.
  • In Fig. 12, a further plan view image 13 analogous to Fig. 10 and Fig. 11 is illustrated.
  • the boundary 8 or the polyline 12 is shown for the case that two of the clusters 16 have been combined.
  • the boundary 8 does not separately extend in each of the clusters 16, but extends over the respectively next selection points 9 from the two clusters 16 together.
  • Fig. 13 shows an image of the environmental region 5, in which the points 10 of the point cloud 11 are illustrated. Several objects 7 are present in the environmental region 5. The points 10 of the point cloud 11 are divided into the respective clusters 16, wherein the clusters 16 are each associated with an object 7. The environmental region 5 is presently captured with the optical sensor 3 or a camera having a fisheye lens. This explains the distortions and the wide image angle in Fig. 13.
  • Fig. 14 shows a transformation of the environmental region 5 from Fig. 13 into a plan view image 13, wherein only two of the boundaries 8 of the two respective objects 7 are represented in Fig. 14.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to a method for determining a respective boundary (8) of at least one object (7) in an environmental region (5) of a motor vehicle (1) based on sensor data of an optical sensor (3) of a sensor device (2) of the motor vehicle (1), wherein the following steps are performed by means of an image processing device (4) of the sensor device (2). A point cloud (11) with a plurality of points (10) is determined based on the sensor data, and the point cloud (11) is transformed into a plan view image (13), which represents the environmental region (5) of the motor vehicle (1). With the image processing device (4), the following steps are performed in the plan view image (13). A reference point (14) is determined in the plan view image (13), which describes a position of the optical sensor (3), and the plan view image (13) is divided into a plurality of segments (15) starting from the reference point (14). Furthermore, a respective selection point (9) among the points (10) of the point cloud (11) is determined in the respective segment (15), wherein the respective selection point (9) in the segment (15) has the lowest distance to the reference point (14) compared to the other points (10) of the point cloud (11) in the segment (15). The respective boundary (8) of the at least one object (7) is determined based on the respectively determined selection points (9).
EP15722677.0A 2014-05-22 2015-04-29 Method for determining a respective boundary of at least one object, sensor device, driver assistance device and motor vehicle Withdrawn EP3146462A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102014007565.6A DE102014007565A1 (de) 2014-05-22 2014-05-22 Method for determining a respective boundary of at least one object, sensor device, driver assistance device and motor vehicle
PCT/EP2015/059400 WO2015176933A1 (fr) 2014-05-22 2015-04-29 Method for determining a respective boundary of at least one object, sensor device, driver assistance device and motor vehicle

Publications (1)

Publication Number Publication Date
EP3146462A1 true EP3146462A1 (fr) 2017-03-29

Family

ID=53180718

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15722677.0A EP3146462A1 (fr) Method for determining a respective boundary of at least one object, sensor device, driver assistance device and motor vehicle

Country Status (3)

Country Link
EP (1) EP3146462A1 (fr)
DE (1) DE102014007565A1 (fr)
WO (1) WO2015176933A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016220075A1 (de) 2016-10-14 2018-04-19 Audi Ag Motor vehicle and method for 360° environment detection
US11048254B2 (en) * 2019-04-10 2021-06-29 Waymo Llc Generating simplified object models to reduce computational resource requirements for autonomous vehicles
US11136025B2 (en) 2019-08-21 2021-10-05 Waymo Llc Polyline contour representations for autonomous vehicles
US11668799B2 (en) 2020-03-20 2023-06-06 Aptiv Technologies Limited Histogram based L-shape detection of target objects

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US8705792B2 (en) * 2008-08-06 2014-04-22 Toyota Motor Engineering & Manufacturing North America, Inc. Object tracking using linear features
US8605998B2 (en) * 2011-05-06 2013-12-10 Toyota Motor Engineering & Manufacturing North America, Inc. Real-time 3D point cloud obstacle discriminator apparatus and associated methodology for training a classifier via bootstrapping
GB2507560A (en) * 2012-11-05 2014-05-07 Univ Oxford Extrinsic calibration of mobile camera and lidar

Non-Patent Citations (2)

Title
None *
See also references of WO2015176933A1 *

Also Published As

Publication number Publication date
WO2015176933A1 (fr) 2015-11-26
DE102014007565A1 (de) 2015-11-26

Similar Documents

Publication Publication Date Title
KR101811157B1 (ko) Bowl-shaped imaging system
EP3054400B1 (fr) Dispositif et procédé de détection de la surface de déplacement d'une route
JP6139465B2 (ja) Object detection device, driving assistance device, object detection method, and object detection program
EP3594902B1 (fr) Procédé permettant d'estimer une position relative d'un objet dans les environs d'un véhicule, unité de commande électronique pour véhicule et véhicule
US10896542B2 (en) Moving body image generation recording display device and program product
JP6699761B2 (ja) Information processing program, information processing method, and information processing device
JP7077910B2 (ja) Lane marking detection device and lane marking detection method
US10748014B2 (en) Processing device, object recognition apparatus, device control system, processing method, and computer-readable recording medium
EP3716145A1 (fr) Dispositif et procédé de détection d'objets
CN108680157B (zh) Obstacle detection region planning method, device, and terminal
WO2015176933A1 (fr) Method for determining a respective boundary of at least one object, sensor device, driver assistance device and motor vehicle
CN111213153A (zh) Target object movement state detection method, device, and storage medium
CN107004250B (zh) Image generation device and image generation method
EP3633619B1 (fr) Appareil de détection de position et procédé de détection de position
EP2642364A1 (fr) Method for warning the driver of a motor vehicle of the presence of an object in the surroundings of the motor vehicle, camera system and motor vehicle
JP2008262333A (ja) Road surface discrimination device and road surface discrimination method
JP7095559B2 (ja) Lane marking detection device and lane marking detection method
JP5299101B2 (ja) Periphery display device
JP5073700B2 (ja) Object detection device
JP6591188B2 (ja) Vehicle exterior environment recognition device
JP6327115B2 (ja) Vehicle periphery image display device and vehicle periphery image display method
JP6683245B2 (ja) Image processing device, image processing method, image processing program, object recognition device, and device control system
EP3951744A1 (fr) Dispositif de traitement d'image, dispositif de commande de véhicule, procédé, et programme
JP2019504382A (ja) Driver assistance system with adaptive surround image data processing
CN111460852A (zh) Vehicle-mounted 3D object detection method, system, and device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20161117

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RIN1 Information on inventor provided before grant (corrected)

Inventor name: HORGAN, JONATHAN

Inventor name: HUGHES, CIARAN

Inventor name: NGUYEN, DUONG-VAN

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20180907

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20190318