CN114730468A - Method for determining a model of a traffic obstacle - Google Patents


Publication number
CN114730468A
CN114730468A (application CN202080080822.9A)
Authority
CN
China
Prior art keywords
vehicles
traffic obstacle
model
traffic
camera
Prior art date
Legal status
Pending
Application number
CN202080080822.9A
Other languages
Chinese (zh)
Inventor
C·蒂洛
杨长鸿
余翊森
全冬兵
Current Assignee
Continental Automotive GmbH
Original Assignee
Continental Automotive GmbH
Priority date
Filing date
Publication date
Application filed by Continental Automotive GmbH filed Critical Continental Automotive GmbH
Publication of CN114730468A publication Critical patent/CN114730468A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30236 Traffic on road, railway or crossing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Abstract

According to a method for determining a model of a traffic obstacle, a plurality of vehicles are provided, each vehicle having at least one camera and a processor for computer vision processing. Respective images of a scene are captured by the respective at least one camera of each of the plurality of vehicles. The respective image is evaluated by the respective processor of each of the vehicles, and a respective preliminary model of the traffic obstacle is generated. The respective preliminary model of the traffic obstacle is transmitted from each of the vehicles to a server. The respective preliminary models received from each of the vehicles are evaluated by an image processor of the server, and a model of the traffic obstacle is determined.

Description

Method for determining a model of a traffic obstacle
Technical Field
The present disclosure relates to a method for determining a model of a traffic obstacle, such as a New Jersey wall (Jersey wall) that may be arranged on a road to separate its various roadways from each other.
Background
Detection and modeling of assets is a fundamental requirement for generating accurate road databases that can be used for automated or robot-assisted driving. The assets include, for example, traffic signs, posts, guardrails, and traffic barriers such as New Jersey walls. Traffic barriers like New Jersey walls can be used at road construction sites to separate the roadways of a road. The assets may be captured and registered by special-purpose vehicles using complex sensor systems, such as stereo cameras that capture images of the roadway and assets while driving. Modeling an asset means recovering and representing its 3D spatial information. For detecting and modeling traffic obstacles, most known methods employ laser radar (LIDAR) or stereo cameras, which can acquire 3D information directly.
US20200184233A1 discloses a method of detecting road obstacles like New Jersey walls. A vertical function and a horizontal function are used which, when combined, map to a plurality of image features. These functions are generated by complex multi-camera systems.
In theory, simple optical sensor systems, such as those based on a monocular camera, cannot reconstruct the 3D information of a traffic obstacle, for several reasons. First, a monocular camera cannot directly acquire 3D information of a scene; the only known way to reconstruct 3D information with a monocular camera requires a matching relationship of pixels between frames. Second, most traffic obstacles have little texture, so it is almost impossible to establish such a pixel matching relationship on a traffic obstacle. Third, the disparity is too small to recover spatial information: a monocular camera can recover spatial lines through multi-view geometry only if the disparity is sufficient, but the edges of a traffic obstacle barely move between consecutive frames.
Disclosure of Invention
The problem to be solved by the invention is to provide a method for determining a model of a traffic obstacle by using a cost-effective and simple sensor system, such as a monocular camera. Furthermore, the method should provide improved recognition accuracy.
A solution to this problem is described in the independent claims. The dependent claims relate to further developments of the invention.
In one embodiment, a method is provided for determining a model of a traffic obstacle by a plurality of vehicles, each vehicle having at least one camera and a processor for computer vision processing. A respective image of a scene is captured by the respective at least one camera of each of the plurality of vehicles. The respective image is evaluated by the respective processor of each of the vehicles, and a respective preliminary model of the traffic obstacle is generated. The respective preliminary model of the traffic obstacle is transmitted from each of the vehicles to a server. The respective preliminary models received from each of the vehicles are evaluated by an image processor of the server, and a model of the traffic obstacle is determined. In a very simple embodiment, a single camera may be sufficient; alternatively, a plurality of cameras may be provided.
To model an object, a series of successive images is usually taken by an optical sensor system, such as a stereo camera system. Pixels belonging to the same location on the object are then compared across successive images to create a model of the object; a stereo camera system provides information about the object from different spatial positions. A problem with modeling traffic barriers such as New Jersey walls, however, is that their surfaces typically have little or no texture. This makes it difficult to identify, in successive images, the pixels that correspond to the same position on the surface of the traffic obstacle to be modeled.
The method for determining a model of a traffic obstacle allows capturing images of the traffic obstacle using simple cameras located at different locations. For example, the individual cameras may be designed as simple monocular cameras, with a respective monocular camera being located in each of the plurality of vehicles. The multiple vehicles are located at different locations in the scene such that each camera takes a picture of the traffic obstacle from a separate location. A respective processor in each of the vehicles may evaluate the captured image information of the traffic obstacle such that a separate preliminary/virtual model of the traffic obstacle is generated in each vehicle.
Separate model information of the traffic obstacle, i.e., a separate preliminary/virtual model of the traffic obstacle, is sent by each of the vehicles to the server. The preliminary/virtual models of the traffic obstacle received from the different vehicles are evaluated by an image processor of the server. In effect, stereo vision thus occurs on the server, which can generate an accurate model of the traffic obstacle by evaluating and comparing the separate preliminary/virtual models generated by and received from each of the individual vehicles in the scene.
In general, these embodiments are applicable to a wide variety of road boundaries, curbs, fences, and the like. The traffic barrier may comprise at least one of a New Jersey wall, a Jersey barrier, a K-rail, or a median barrier.
Drawings
Hereinafter, the general inventive concept of the embodiments will be described by way of example, but not by way of limitation, with reference to the accompanying drawings.
FIG. 1 illustrates a system that performs a method for determining a model of a traffic obstacle;
FIG. 2 shows a flow chart illustrating method steps of a method for determining a model of a traffic obstacle; and
fig. 3 illustrates a simplified example of a method for determining a model of a traffic obstacle on a server by evaluating respective preliminary/virtual models of traffic obstacles generated by respective vehicles.
In the following, the different steps of an embodiment of the method for determining a model of a traffic obstacle are explained in general terms with the aid of fig. 1 and 2.
Fig. 1 shows a system comprising a plurality of vehicles 100, 101 and 102, wherein each of the vehicles comprises at least one camera 10 for capturing images/frames of an environmental scene of the respective vehicle, a processor 20 for computer vision processing, and a storage device 30 for storing a road database. Each vehicle 100, 101, and 102 may communicate with a server 200 that contains an image processor 201. In a very simple embodiment, a single camera may be sufficient.
An embodiment of modeling traffic obstacles in a road database system comprises method steps V1, V2 and V3, performed in that order by each of a plurality of vehicles, followed by method step S, performed by a server. These method steps are illustrated in the flow chart of fig. 2.
According to one embodiment, in a method for determining a model of a traffic obstacle, a plurality of vehicles 100, 101, and 102 are provided. Each of the vehicles contains a respective camera 10 and a respective processor 20 for computer vision processing. In step V1, performed by each of the plurality of vehicles, a respective image of the environmental scene of the vehicle is captured by a respective camera 10 of each of the plurality of vehicles 100, 101 and 102. The camera 10 may be embodied as a monocular camera installed in each of the vehicles. The cameras may be configured to capture video of the environmental scene in which the respective vehicle is located in real time.
In step V2, performed by each of the plurality of vehicles, the respective images captured in step V1 are evaluated by the respective processor 20 of each of the vehicles 100, 101, and 102, and a respective preliminary/virtual model of the traffic obstacle is generated. For this purpose, a road model may be provided in the respective storage device 30 of each of the vehicles 100, 101 and 102. According to one embodiment, a planar road surface with normal vector [0, 0, 1] may be assumed for the road model. Given the road model, image pixels of the captured image may be projected to 3D points on the road model.
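The projection step can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes a pinhole camera with known intrinsics and pose and the planar road model z = 0 with normal [0, 0, 1]; the function and variable names are hypothetical.

```python
import numpy as np

def pixel_to_road_point(u, v, K, R, t):
    """Back-project pixel (u, v) onto the planar road surface z = 0.

    K: 3x3 camera intrinsics; R: 3x3 camera-to-world rotation;
    t: camera position in world coordinates.
    """
    # Viewing ray of the pixel, expressed in world coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R @ ray_cam
    # Intersect the ray t + s * ray_world with the plane z = 0.
    s = -t[2] / ray_world[2]
    return t + s * ray_world
```

For example, with a camera two meters above the road whose optical axis points straight down, the principal point back-projects to the ground point directly below the camera.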
According to one embodiment of the method, the respective captured image is evaluated by the respective processor 20 of each of the vehicles 100, 101, and 102 through computer vision processing to extract pixels in the respective captured image that represent edges of the traffic obstacle. For example, the extracted pixels may represent the upper edge of the traffic obstacle. Conventional methods for modeling the surface of a traffic obstacle use computer vision processing to detect the entire area of the obstacle; however, the boundary between the wall and the ground is often unclear. According to one embodiment of the method, only the location of the edges of the traffic obstacle, in particular the upper wall edge, is therefore considered, by extracting pixels representing the edges of the traffic obstacle from each of the captured images.
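As an illustration of the edge-extraction step, the sketch below finds, per image column, the topmost pixel with a strong vertical gradient. This heuristic is a crude stand-in for the unspecified computer vision processing in the text; the function name and threshold are hypothetical.

```python
import numpy as np

def extract_upper_edge(gray, grad_thresh=30.0):
    """Return (row, col) of the topmost strong vertical-gradient pixel
    in each image column -- a crude proxy for the barrier's upper edge.

    gray: (H, W) grayscale image. A real system would use a trained
    detector or a classical edge detector instead of this heuristic.
    """
    grad = np.abs(np.diff(gray.astype(float), axis=0))  # vertical gradient
    edge_pixels = []
    for col in range(grad.shape[1]):
        rows = np.nonzero(grad[:, col] > grad_thresh)[0]
        if rows.size:  # keep the topmost strong edge in this column
            edge_pixels.append((int(rows[0]), col))
    return edge_pixels
```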
The respective processor 20 of each of the vehicles 100, 101 and 102 generates a respective preliminary/virtual model of the traffic obstacle by projecting the respective extracted pixels representing edges of the traffic obstacle onto the road model, generating respective 3D points of the edges of the traffic obstacle. According to one embodiment of the method, the respective processor 20 of each of the vehicles 100, 101, and 102 generates a respective spline curve through these points; the respective spline curve represents the respective preliminary/virtual model of the traffic obstacle generated in each of the vehicles. These preliminary/virtual models do not yet contain the real traffic obstacle position.
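The curve-fitting step might look as follows. For simplicity, this sketch fits a low-degree polynomial through the projected ground-plane edge points as a stand-in for a true spline; all names are hypothetical.

```python
import numpy as np

def fit_edge_curve(points, degree=3):
    """Fit a smooth curve y = f(x) through projected edge points.

    points: (N, 2) array of (x, y) edge coordinates on the road plane.
    A polynomial is used here as a simple stand-in for the spline
    representation described in the text.
    """
    pts = np.asarray(points, dtype=float)
    return np.poly1d(np.polyfit(pts[:, 0], pts[:, 1], degree))
```

A production implementation would more likely use a parametric spline (e.g. piecewise cubic) so that the curve can also represent barriers that bend back on themselves.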
In step V3 of the method for determining a model of a traffic obstacle, the respective preliminary/virtual model of the traffic obstacle is transmitted from each of the vehicles 100, 101 and 102 to the server 200. In step V1, the respective image is captured in the individual pose of the respective camera 10 of each of the vehicles 100, 101 and 102. Consequently, the different preliminary/virtual models generated by the respective processors 20 of the different vehicles stem from different perspectives (viewpoints). Since the preliminary/virtual models of the traffic obstacle are generated from different perspectives, the camera position/pose is also important for determining the model of the traffic obstacle on the server. According to one embodiment of the method, the pose of the respective camera 10 of each of the vehicles 100, 101 and 102 is therefore transmitted from each of the vehicles to the server 200 together with the respective preliminary/virtual model of the traffic obstacle.
In step S, performed by the server, the respective preliminary/virtual models received from each of the vehicles 100, 101 and 102 are evaluated by the image processor 201 of the server, and a model of the traffic obstacle is determined. After having collected information from each of the vehicles, i.e. the reported respective traffic obstacle preliminary/virtual models and respective camera poses, the image processor 201 of the server 200 will recover the real spatial information of the traffic obstacles. The model of the traffic obstacle determined by the image processor 201 of the server may contain at least information about the position and height of the traffic obstacle.
According to one embodiment of a method for determining a model of a traffic obstacle, the image processor 201 of the server 200 evaluates at least two of the respective preliminary/virtual models of the traffic obstacle received from at least two of the plurality of vehicles 101 and 102 to determine the model of the traffic obstacle by the server. Fig. 3 illustrates how the model of the traffic obstacle, i.e. the real position of the traffic obstacle, is recovered, assuming that the server 200 has received two reports of the respective separate preliminary/virtual models and the respective separate camera poses of the vehicle 101 and the vehicle 102 passing on different lanes.
Reference numeral 104 represents the position of the traffic obstacle preliminary/virtual model generated from the camera of the vehicle 101. The line 105 passes through the camera position of the vehicle 101 and the preliminary/virtual position 104 of the traffic obstacle; the true position of the traffic barrier edge lies on line 105. Reference numeral 103 corresponds to the preliminary/virtual model generated from the camera of the vehicle 102; the true location of the traffic obstacle lies on a line 106 connecting the position of the vehicle 102 and the preliminary/virtual model 103. The image processor 201 of the server 200 determines the intersection point 107 as the true location of the traffic obstacle. All intersections are then fitted with spline curves to model the traffic obstacle.
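The recovery of the true position as the intersection of the two rays can be sketched in the ground plane as follows. Names are hypothetical, and a real implementation would solve this for many edge points along the barrier:

```python
import numpy as np

def intersect_rays_2d(cam1, model1, cam2, model2):
    """Intersect the ray from camera position cam1 through its virtual
    barrier point model1 with the ray from cam2 through model2.

    All arguments are 2D ground-plane coordinates; the intersection is
    the recovered true barrier position (point 107 in fig. 3).
    """
    c1, p1 = np.asarray(cam1, float), np.asarray(model1, float)
    c2, p2 = np.asarray(cam2, float), np.asarray(model2, float)
    d1, d2 = p1 - c1, p2 - c2
    # Solve c1 + s*d1 == c2 + u*d2 for the scalar parameters (s, u).
    A = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]])
    s, _ = np.linalg.solve(A, c2 - c1)
    return c1 + s * d1
```

Note that the linear system becomes ill-conditioned when the two rays are nearly parallel, which is exactly the same-lane case discussed below: the intersection then has high variance.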
It has proven advantageous if the vehicles that generate the individual preliminary/virtual models of the traffic obstacle pass on different lanes. If, by contrast, two reports of separate preliminary/virtual models are received from vehicles passing on the same lane, the models are generated from almost the same viewpoint. The intersection between the two lines then has a high variance, making the recovered model of the traffic obstacle unreliable in most cases.
The method for determining a model of a traffic obstacle has several advantages. First, the reported information is highly compact: only a separate preliminary/virtual wall model and the corresponding camera pose are required from each of the vehicles, which saves communication capacity. Second, the method copes well with missed detections, i.e., frames in which the traffic obstacle is not detected. If the obstacle is missed in certain frames, the missing parts can easily be interpolated or fitted when generating the preliminary/virtual model.
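The interpolation of missed detections can be as simple as the following sketch, where the lateral barrier-edge position is filled in for two missed frames (all values are hypothetical illustration data):

```python
import numpy as np

# Detected lateral barrier-edge position per frame; the barrier was
# missed in frames 4 and 5 (hypothetical illustration values).
frames = np.array([0, 1, 2, 3, 6, 7, 8, 9])
positions = np.array([1.0, 1.2, 1.4, 1.6, 2.2, 2.4, 2.6, 2.8])
missing_frames = np.array([4, 5])

# Fill the gap by linear interpolation from neighbouring frames.
filled = np.interp(missing_frames, frames, positions)
```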
The method of modeling traffic obstacles may be used to generate a road database for an autonomous automobile, but may also be used in a number of other areas of machine vision and machine orientation, such as outdoor or underwater robot orientation.

Claims (12)

1. A method for determining a model of a traffic obstacle, comprising:
-providing a plurality of vehicles (100, 101, 102), each vehicle having at least one camera (10) and a processor (20) for computer vision processing,
-capturing respective images of a scene by respective at least one camera (10) of each of the plurality of vehicles (100, 101, 102),
-evaluating, by a respective processor (20) of each of the vehicles (100, 101, 102), the respective image and generating a respective preliminary model of the traffic obstacle,
-transmitting a respective preliminary model of the traffic obstacle from each of the vehicles (100, 101, 102) to a server (200),
-evaluating, by an image processor (201) of the server (200), the respective preliminary model received from each of the vehicles (100, 101, 102) and determining a model of the traffic obstacle.
2. The method of claim 1,
wherein the at least one camera (10) is a monocular camera.
3. The method of claim 1 or claim 2,
wherein the at least one camera (10) is a single camera.
4. The method of any of claims 1 to 3, comprising:
providing a road model in a respective storage device (30) of each of the vehicles (100, 101, 102).
5. The method of any one of claims 1 to 4,
wherein the respective captured image is evaluated by a respective processor (20) of each of the vehicles (100, 101, 102) through computer vision processing to extract pixels in the respective captured image that represent edges of the traffic obstacle.
6. The method of claim 5,
wherein the extracted pixels in the respective captured images represent an upper edge of the traffic barrier.
7. The method of claim 6,
wherein the respective processor (20) of each of the vehicles (100, 101, 102) generates a respective preliminary model of the traffic obstacle by projecting respective extracted pixels representing edges of the traffic obstacle in the road model to generate respective 3D points of the edges of the traffic obstacle.
8. The method of claim 7,
wherein the respective processor (20) of each of the vehicles (100, 101, 102) generates a respective spline curve of the edge of the traffic obstacle, the respective spline curve representing a respective preliminary model of the traffic obstacle.
9. The method of any one of claims 1 to 8,
-wherein the respective image is captured in a pose of the respective at least one camera (10) of each of the vehicles (100, 101, 102),
-transmitting the pose of the respective at least one camera (10) of each of the vehicles (100, 101, 102) together with the respective preliminary model of the traffic obstacle from each of the vehicles (100, 101, 102) to the server (200).
10. The method of any one of claims 1 to 9,
wherein the image processor (201) of the server (200) evaluates at least two of the respective preliminary models of the traffic obstacle received from at least two of the plurality of vehicles (100, 101, 102) to determine a model of the traffic obstacle by the server through the image processor (201).
11. The method of any one of claims 1 to 10,
wherein the model of the traffic obstacle determined by the image processor (201) of the server comprises at least information about the position and altitude of the traffic obstacle.
12. The method of any one of claims 1 to 11,
wherein the traffic barrier comprises at least one of a New Jersey wall, a Jersey barrier, a K-rail, or a median barrier.
CN202080080822.9A 2019-09-20 2020-09-08 Method for determining a model of a traffic obstacle Pending CN114730468A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE102019214397.0 2019-09-20
DE102019214397 2019-09-20
PCT/EP2020/075025 WO2021052810A1 (en) 2019-09-20 2020-09-08 Method for determining a model of a traffic barrier

Publications (1)

Publication Number Publication Date
CN114730468A true CN114730468A (en) 2022-07-08

Family

Family ID: 72432908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080080822.9A Pending CN114730468A (en) 2019-09-20 2020-09-08 Method for determining a model of a traffic obstacle

Country Status (3)

Country Link
EP (1) EP4052222A1 (en)
CN (1) CN114730468A (en)
WO (1) WO2021052810A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9280711B2 (en) * 2010-09-21 2016-03-08 Mobileye Vision Technologies Ltd. Barrier and guardrail detection using a single camera
US10962982B2 (en) * 2016-07-21 2021-03-30 Mobileye Vision Technologies Ltd. Crowdsourcing the collection of road surface information
EP3736537A1 (en) * 2016-10-11 2020-11-11 Mobileye Vision Technologies Ltd. Navigating a vehicle based on a detected vehicle
EP3619643A1 (en) 2017-05-03 2020-03-11 Mobileye Vision Technologies Ltd. Detection and classification systems and methods for autonomous vehicle navigation

Also Published As

Publication number Publication date
EP4052222A1 (en) 2022-09-07
WO2021052810A1 (en) 2021-03-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination