CN116416591A - Method and equipment for generating reference value for driving boundary - Google Patents

Method and equipment for generating reference value for driving boundary Download PDF

Info

Publication number
CN116416591A
Authority
CN
China
Prior art keywords
image information
type
vehicle
reference value
driving boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111662570.3A
Other languages
Chinese (zh)
Inventor
A·维莫尔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Priority to CN202111662570.3A priority Critical patent/CN116416591A/en
Priority to PCT/EP2022/081493 priority patent/WO2023126097A1/en
Publication of CN116416591A publication Critical patent/CN116416591A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40Photo or light sensitive means, e.g. infrared sensors
    • B60W2420/403Image sensing, e.g. optical camera

Abstract

The invention relates to a reference value generation method for a driving boundary, comprising the following steps: receiving first image information about the surroundings of a vehicle from a first type camera mounted on the vehicle; extracting second image information about a driving boundary from the first image information; and generating a reference value for the driving boundary based at least on the second image information, wherein the reference value is used to verify sensed values of the driving boundary acquired by a second type camera mounted on the vehicle. The invention also relates to a reference value generating device for a driving boundary, a computer storage medium, a computer program product and a vehicle.

Description

Method and equipment for generating reference value for driving boundary
Technical Field
The present invention relates to the field of vehicles, and in particular, to a reference value generation method, a generation device, a computer storage medium, a computer program product, and a vehicle for driving boundaries.
Background
In vehicle automation related functions (e.g., L0-L5 automated driving functions), on-board sensors are critical to how accurately the vehicle perceives its surroundings, in particular driving boundary information. During the development and verification of these functions, the collected information is often compared with a reference value (ground truth) of the vehicle's surroundings. Such reference values are also referred to as ground truth or true values.
If the reference value is generated by manual annotation, high labor costs and long processing times are incurred. If it is generated from information collected by a lidar sensor, its accuracy is difficult to guarantee under some conditions (such as rain, snow or heavy fog). If the reference value is generated with the same sensor (e.g., a front-view camera) that produces the sensed value to be verified, the accuracy of the verification result is also difficult to guarantee: errors or omissions caused by the inherent defects of that sensor tend to appear in both the reference value and the sensed value, and therefore cannot be detected by comparing the two.
Disclosure of Invention
According to an aspect of the present invention, there is provided a reference value generation method for a driving boundary. The method comprises the following steps: receiving first image information about a vehicle surroundings from a first type camera mounted on the vehicle; extracting second image information about a driving boundary from the first image information; and generating a reference value for the driving boundary based at least on the second image information. Wherein the reference value is used to verify sensed values about the driving boundary acquired by a second type of camera mounted on the vehicle.
Additionally or alternatively to the above, in the above method, the reference value is further used to verify a sensed value about the driving boundary acquired by a first other type of sensor mounted on the vehicle. Wherein the first other type of sensor is different from the first type of camera and the second type of camera.
Additionally or alternatively to the above, in the above method, the method further includes performing coordinate system conversion on the second image information; the reference value is generated based at least on the converted second image information.
Additionally or alternatively to the above, in the above method, in the coordinate system conversion, the second image information is converted from a two-dimensional image coordinate system to a three-dimensional vehicle coordinate system.
Additionally or alternatively to the above, in the above method, the first type of camera mounted on the vehicle includes at least two first type of cameras located at different locations of the vehicle. The method further comprises extracting second images of the at least two first type cameras from first image information from the at least two first type cameras, respectively; fusing second images of the at least two first type cameras; and generating the reference value based at least on the fused second image information.
Additionally or alternatively to the above, in the above method, the method further includes fusing second image information about the driving boundary extracted at different times based on positioning information of the vehicle at the different times; and generating the reference value based at least on the fused second image information.
Additionally or alternatively to the above, in the above method, the reference value is generated based on the second image information and third image information about the driving boundary from a second other type of sensor. Wherein the second other type of sensor is different from the first type of camera.
According to another aspect of the present invention, there is provided a reference value generating apparatus for a driving boundary. The apparatus comprises: a receiving device configured to receive first image information about a vehicle surroundings from a first type camera mounted on a vehicle; an extracting device configured to extract second image information about a driving boundary from the first image information; and generating means configured to generate a reference value of the driving boundary based at least on the second image information. Wherein the reference value is used to verify sensed values about the driving boundary acquired by a second type of camera mounted on the vehicle.
Additionally or alternatively to the above, in the above apparatus, the reference value is further used to verify a sensed value about the driving boundary acquired by a first other type of sensor mounted on the vehicle. Wherein the first other type of sensor is different from the first type of camera and the second type of camera.
In addition or alternatively to the above, in the above apparatus, a conversion device is further included. The conversion means is configured to coordinate-system-convert the second image information, the generation means being further configured to generate the reference value based at least on the converted second image information.
Additionally or alternatively to the above, in the above apparatus, the converting means is further configured to convert the second image information from a two-dimensional image coordinate system to a three-dimensional vehicle coordinate system.
Additionally or alternatively to the above, in the above apparatus, a first fusion device is further included. Wherein the first type of camera mounted on the vehicle comprises at least two first type of cameras located at different positions of the vehicle. The extraction means is further configured to extract second images of the at least two first type cameras from first image information from the at least two first type cameras, respectively. The first fusing device is configured to fuse second images of the at least two first type cameras. The generating means is further configured to generate the reference value based at least on the fused second image information.
In addition or alternatively to the above, in the above apparatus, a second fusing device is further included. The second fusing device is configured to fuse second image information about the driving boundary extracted at different times based on positioning information of the vehicle at the different times. The generating means is further configured to generate the reference value based at least on the fused second image information.
Additionally or alternatively to the above, in the above apparatus, the generating means is further configured to generate the reference value based on the second image information and third image information about the driving boundary from a second other type of sensor. Wherein the second other type of sensor is different from the first type of camera.
According to yet another aspect of the present invention, there is provided a computer storage medium comprising instructions which, when executed, perform the above-described method.
According to a further aspect of the invention, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the above method.
According to a further aspect of the invention there is provided a vehicle comprising the apparatus described above.
The driving boundary generation scheme of the embodiment of the invention utilizes the first type camera to collect the image information of the driving boundary and processes the collected image information to generate the reference value. The driving boundary generation scheme has high accuracy, low time and labor cost and can flexibly fuse the image information acquired by other sensors.
Drawings
The above and other objects and advantages of the present invention will become more fully apparent from the following detailed description taken in conjunction with the accompanying drawings, in which like or similar elements are designated by like reference numerals. The drawings are not necessarily drawn to scale.
Fig. 1 shows a flow diagram of a reference value generation method 1000 for driving boundaries according to an embodiment of the invention.
Fig. 2 (a) - (d) show first image information received from four on-board first type cameras, respectively.
Fig. 3 shows a schematic structural diagram of a reference value generating apparatus 3000 for driving boundaries according to an embodiment of the present invention.
Detailed Description
Hereinafter, a generation scheme of a reference value for a driving boundary according to various exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.
It is noted that in the context of the present invention, the terms "first," "second," and the like are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. Furthermore, the terms "comprising," "having," and the like, in the context of the present invention, are intended to mean a non-exclusive inclusion, unless otherwise specifically indicated.
Fig. 1 shows a flow diagram of a reference value generation method 1000 for driving boundaries according to an embodiment of the invention. As shown in fig. 1, the reference value generation method 1000 for a driving boundary includes the following steps.
In step S110, first image information about a vehicle surroundings is received from an in-vehicle first type camera.
In the context of the present invention, the term "driving boundary" refers to a road boundary on which a vehicle can travel, e.g. lane lines, lane dotted lines, curbs, guardrails, concrete or steel walls, boundaries between asphalt and grass, reflectors, obstacles, beacons, towers, etc. for delimiting lanes or other structures delimiting different types of road surfaces.
In the context of the present invention, the term "first type camera" refers to a camera that is different from the second type camera to be authenticated. For example, the second type of camera to be verified may be a front view camera for an assisted driving system (ADAS), and accordingly, the first type of camera may be a fish eye camera mounted on a vehicle, or a wing camera mounted on a vehicle. The fish-eye camera may be a camera mounted on a vehicle that is originally used for a reversing function. In general, the fisheye camera can have a higher resolution for a sensed object in a close range, thereby generating a reference value of higher accuracy in the subsequent step S130. Wherein the wing camera may be a camera mounted on both sides of the vehicle (e.g., on the rear view mirrors of both sides) for sensing images of both sides of the vehicle.
The first image information may be received from one or more first type cameras mounted on the vehicle. Fig. 2 (a) - (d) show first image information received from four on-board first type cameras, in this embodiment fisheye cameras, mounted on the front, rear, left and right sides of the vehicle, respectively. As shown in fig. 2 (a) - (d), the first image information collected by each of the four on-board first type cameras includes a lane line, and the first type camera mounted on the front side (corresponding to fig. 2 (a)) and the first type camera mounted on the right side (corresponding to fig. 2 (d)) both collect image information of the lane line 210 at the same position.
As shown in fig. 2, the fisheye camera, being a kind of wide-angle camera, can acquire image information over a wide field of view. However, as can also be seen from fig. 2, such a wide field of view introduces a degree of distortion into the image, and the image information can therefore be corrected and compensated accordingly.
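As an illustration of such correction, the sketch below undistorts a fisheye frame using OpenCV's fisheye camera model. The intrinsic matrix K and distortion coefficients D are assumed to come from an offline calibration of the first type camera, and the function name is illustrative; none of this is prescribed by the patent.

```python
# Minimal sketch of fisheye distortion correction (assumption: K and D
# were obtained from an offline calibration of the first type camera).
import cv2
import numpy as np

def undistort_fisheye(image, K, D):
    """Remap a raw fisheye frame onto an undistorted, pinhole-like image."""
    h, w = image.shape[:2]
    # Identity rotation; reuse K as the new projection matrix for simplicity.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(image, map1, map2, interpolation=cv2.INTER_LINEAR)
```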
In step S120, second image information about the driving boundary is extracted from the first image information received in step S110.
The second image information may be extracted using conventional image processing methods, for example an edge filtering algorithm (Edge Filter), the Canny edge detection algorithm (Canny Filter) or the Sobel operator edge detection algorithm (Sobel Operator); machine learning or artificial intelligence algorithms, e.g. neural networks or deep learning, may also be employed to extract the second image information. The invention is not limited in this regard.
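As a concrete example of the classical route, the sketch below obtains candidate boundary pixels from a corrected frame with a Canny edge detector. The blur kernel and the thresholds are illustrative choices, not values taken from the description.

```python
# Minimal sketch of extracting boundary pixels (the "second image
# information") with a classical edge detector; thresholds are illustrative.
import cv2
import numpy as np

def extract_boundary_pixels(frame_bgr, low_thresh=50, high_thresh=150):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)       # suppress sensor noise
    edges = cv2.Canny(blurred, low_thresh, high_thresh)
    # Return the (u, v) pixel coordinates of candidate boundary points.
    v, u = np.nonzero(edges)
    return np.stack([u, v], axis=1)
```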
In step S130, a reference value of the driving boundary is generated based at least on the second image information extracted in step S120. The generated reference value can be compared with the sensed value of the same driving boundary acquired by the on-board second type camera, in order to test and evaluate the accuracy of the second type camera's driving boundary sensing and its sensing performance in false triggering events.
In the context of the present invention, a "false triggering event" is intended to mean a typical event, such as a rain, snow, fog scenario, in which other sensors mounted on the vehicle, in addition to the first type of camera, are prone to perception errors.
In one embodiment, the reference value generated in step S130 may also be used to verify sensed values about the driving boundary acquired by other types of sensors mounted on the vehicle. Other types of sensors may be, for example, lidar sensors, millimeter wave radar sensors, etc. any suitable type of sensor mounted on the vehicle in addition to the first and second type of cameras.
Thus, a reference value of the driving boundary is generated based on the image information of the driving boundary acquired by the first type camera, thereby providing a benchmark for verifying the accuracy of the vehicle-mounted camera or other type sensor.
Verifying the sensed values generated by the second type camera or other sensors against the reference value generated from the first type camera avoids the situation in which identical errors go undetected during verification. Here, a "common-cause error" means that multiple sensing results from the same sensor, or from sensors of the same type, contain the same error due to the same factor, for example the mounting position of the sensor or an inherent defect of that sensor type. In embodiments according to the invention, different sensor types are used: they tend to have different focal lengths, the image information they collect tends to be processed by different algorithms, and they tend to be mounted in different locations and thus under different illumination conditions. Using a first type camera different from the sensor to be verified therefore prevents common-cause errors from being overlooked during verification.
Further, according to the embodiment of the present invention, a camera capable of acquiring high-quality image information of a driving boundary near a vehicle, such as a fisheye camera, can be utilized as the first type of camera, thereby obtaining a driving boundary reference value with high accuracy. This is particularly evident in rain, snow, fog, etc.
Although not shown in fig. 1, the reference value generation method 1000 for driving boundaries may further include a coordinate system conversion of the image information. For example, the second image information extracted in step S120 is converted from a two-dimensional image coordinate system to a three-dimensional vehicle coordinate system to facilitate further processing of the second image information (e.g., comparison or fusion with three-dimensional image information acquired by other sensors). Accordingly, in step S130, the reference value of the driving boundary is generated based at least on the converted second image information.
It is to be noted that in the context of the present invention, the coordinate system conversion of the image information is not limited to the conversion of the second image information, but may be the conversion of the image information (for example, the first image information) generated in other steps. For example, the first image information may be subjected to the coordinate system conversion and then to the extraction step S120. This is advantageous in some cases for outlier detection and plausibility checking of image information.
In the context of the present invention, the coordinate system conversion of the image information is not limited to the conversion from the image coordinate system to the vehicle coordinate system; it may also be a conversion between any of the camera coordinate system, the image coordinate system, the world coordinate system, the vehicle coordinate system and the like, depending on the specific image processing requirements.
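A minimal sketch of one such conversion is given below. It assumes a flat-ground model and known camera extrinsics (rotation R_vc and translation t_vc of the camera in the vehicle frame); both assumptions, and the function name, are illustrative rather than prescribed by the patent. Each boundary pixel is back-projected as a ray and intersected with the ground plane of the vehicle coordinate system.

```python
# Minimal sketch: 2D image coordinates -> 3D vehicle coordinates under a
# flat-ground assumption (ground plane z = 0 in the vehicle frame).
import numpy as np

def image_to_vehicle(points_uv, K, R_vc, t_vc):
    """Intersect pixel rays with the ground plane of the vehicle frame."""
    K_inv = np.linalg.inv(K)
    uv1 = np.hstack([points_uv, np.ones((len(points_uv), 1))])
    rays_cam = (K_inv @ uv1.T).T              # ray directions in the camera frame
    rays_veh = (R_vc @ rays_cam.T).T          # rotated into the vehicle frame
    down = rays_veh[:, 2] < 0                 # keep only rays that hit the ground
    scale = -t_vc[2] / rays_veh[down, 2]      # camera height / downward slope
    return t_vc + rays_veh[down] * scale[:, None]   # Nx3 ground points, z == 0
```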
As described above in connection with fig. 2, the first image information regarding the surroundings of the vehicle may be received from a plurality of first type cameras installed at different positions of the vehicle. Accordingly, in step S120, second image information may be extracted from the first image information of each of the first type cameras, respectively, for example, from fig. 2 (a) - (d), respectively. The method 1000 may also include fusing (not shown in fig. 1) the second image information of each of the first type of cameras. Accordingly, in step S130, a reference value of the driving boundary may be generated based at least on the fused second image information.
Alternatively, the first image information from the plurality of first type cameras may be fused before the second image information is extracted. Then, in step S120, second image information is extracted from the fused first image information, and in step S130, a reference value of the driving boundary is generated based on the second image information.
Therefore, the accuracy of the generated driving boundary reference value can be further improved by fusing the image information acquired by the plurality of first type cameras. This is particularly true for the portions where the fields of view of multiple first type cameras overlap.
It is easy to understand that the image information collected by every first type camera on the vehicle may be fused, or only the image information collected by a subset of the first type cameras may be fused. For example, in the embodiment shown in fig. 2 (a) - (d), for the lane line 210, only the image information in fig. 2 (a) and (d), i.e. the image information acquired by the front-side and right-side first type cameras, may be fused to generate the reference value of the lane line 210.
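One possible way to fuse the per-camera results is sketched below, under the assumption that all boundary points have already been converted into the common vehicle coordinate system: the overlapping observations of one boundary (e.g. lane line 210) are pooled and summarised by a single curve fit. The quadratic model and the function name are illustrative assumptions only.

```python
# Minimal sketch of fusing boundary points from several first type cameras
# (assumption: all points are already expressed in the vehicle frame).
import numpy as np

def fuse_camera_boundaries(points_per_camera):
    """points_per_camera: list of Nx3 arrays of one boundary, vehicle frame."""
    fused = np.vstack(points_per_camera)
    # Fit y = f(x) along the driving direction as a compact boundary model.
    coeffs = np.polyfit(fused[:, 0], fused[:, 1], deg=2)
    return fused, np.poly1d(coeffs)
```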
Further, although not shown in fig. 1, the reference value generation method 1000 for a driving boundary may further include fusing second image information about the same driving boundary extracted at different times based on positioning information of the vehicle at different times. Accordingly, in step S130, a reference value of the driving boundary is generated based at least on the fused second image information.
The processing in the foregoing steps is generally performed frame by frame. During driving, the image information of the driving boundary at a given position around the vehicle generally does not appear in a single time frame only, but in several preceding and succeeding time frames. Therefore, fusing the image information of the same driving boundary acquired at multiple times, based on the positioning information of the vehicle at those times, can compensate for errors and omissions in single-frame image information and effectively improve the accuracy of the finally generated driving boundary reference value.
It should be noted that the positioning information of the vehicle at different times may be determined by global positioning, for example a Global Navigation Satellite System (GNSS) or the Global Positioning System (GPS); it may also be determined by the vehicle's own relative positioning, for example using a ranging sensor (determining the change in the vehicle's position from the change in distance between the vehicle and a reference object); or it may be determined by a combination of the above. The invention is not limited in this regard.
Similarly to the above, the fusion of the image information of the same driving boundary acquired at different times is also not limited to after the extraction of the second image information, but may also occur before the extraction of the second image information. That is, the first image information of the same driving boundary acquired at different times may be fused first. Then, in step S120, second image information is extracted from the fused first image information, and in step S130, a reference value of the driving boundary is generated based on the second image information.
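The sketch below illustrates one way such temporal fusion could be carried out. It assumes a simplified planar ego pose (x, y, yaw) per time frame, obtained from GNSS or odometry; the pose representation, the accumulation into a common frame, and the function names are assumptions made for brevity.

```python
# Minimal sketch of fusing boundary points extracted at different times,
# using the ego pose at each time (assumption: planar pose x, y, yaw).
import numpy as np

def to_world(points_xy, pose):
    """Transform Nx2 points from the vehicle frame at time t into a fixed frame."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return points_xy @ R.T + np.array([x, y])

def fuse_over_time(frames):
    """frames: list of (points_xy_in_vehicle_frame, ego_pose_at_that_time)."""
    world_points = [to_world(pts, pose) for pts, pose in frames]
    return np.vstack(world_points)   # accumulated boundary points, fixed frame
```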
Although not shown in fig. 1, the reference value generation method 1000 for a driving boundary may further include generating the reference value for the driving boundary based on the second image information from the first type camera and third image information about the same driving boundary from another type of sensor. Here, the other type of sensor may be any type of sensor other than the first type camera that is capable of acquiring image information of the same driving boundary, such as a lidar sensor or a millimeter-wave radar sensor, or a combination of such sensors. Generating the reference value from image information acquired by different types of sensors prevents the inherent defects of a single sensor type from affecting the generated reference value, and thus further improves the accuracy of the finally generated driving boundary reference value.
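As a rough illustration, the sketch below blends a camera-derived boundary curve (from the second image information) with a boundary estimate from another sensor type such as lidar (third image information). It assumes both inputs are already expressed as curves y = f(x) in the vehicle frame and time-aligned; the equal weighting and the function name are illustrative assumptions, not the patent's method.

```python
# Minimal sketch of combining camera- and lidar-derived boundary curves
# (assumption: both are callables y = f(x) in the vehicle frame).
import numpy as np

def fuse_with_other_sensor(camera_curve, lidar_curve, x_samples, w_cam=0.5):
    y_cam = camera_curve(x_samples)
    y_lidar = lidar_curve(x_samples)
    y_fused = w_cam * y_cam + (1.0 - w_cam) * y_lidar
    return np.stack([x_samples, y_fused], axis=1)   # fused reference polyline
```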
It should be noted that different types of sensors often collect image information at different times, so the image information from the different types of sensors needs to be fused based on the positioning information of the vehicle at those times. The positioning information of the vehicle at different times can be determined in the manner described above and is not described again here.
It will be readily appreciated that while fig. 1 and the above description present the various operations (or steps) as a sequential process, many of the operations can be performed in parallel or concurrently, and the order of the operations may be rearranged. Moreover, embodiments according to the present invention may have additional steps not included in fig. 1 and the above description.
Those skilled in the art will readily appreciate that the reference value generation method for driving boundaries provided by one or more of the above embodiments may be implemented by a computer program. For example, the computer program may be contained in a computer program product which, when executed by a processor, implements the reference value generation method for driving boundaries of one or more embodiments of the invention. As another example, when a computer storage medium (e.g., a USB drive) storing the computer program is connected to a computer, the reference value generation method for driving boundaries according to one or more embodiments of the present invention may be executed by running the computer program.
Fig. 3 shows a schematic structural diagram of a reference value generating apparatus 3000 for driving boundaries according to an embodiment of the present invention. As shown in fig. 3, the reference value generating apparatus 3000 for a driving boundary includes: receiving means 310, extracting means 320 and generating means 330. The receiving means 310 is configured to receive first image information about the surroundings of the vehicle from an on-board first type camera. Alternatively, the receiving means 310 may be configured to receive the first image information from each of the four on-board first type cameras described above in connection with fig. 2. The extracting means 320 is configured to extract second image information about the driving boundary from the first image information. The generating means 330 is configured to generate a reference value of the driving boundary based at least on the second image information.
The reference value generated by the generating means 330 can be compared with the sensed value of the same driving boundary acquired by the on-board second type camera, in order to test and evaluate the accuracy of the second type camera's driving boundary sensing and its sensing performance in false triggering events. Alternatively, the reference value generated by the generating means 330 may also be used to verify sensed values about the driving boundary acquired by other types of sensors mounted on the vehicle. Such other types of sensors may be any suitable type of sensor mounted on the vehicle apart from the first and second type cameras, for example a lidar sensor or a millimeter-wave radar sensor.
In the context of the present invention, the term "driving boundary" refers to a road boundary on which a vehicle can travel, e.g. lane lines, lane dotted lines, curbs, guardrails, concrete or steel walls, boundaries between asphalt and grass, reflectors, obstacles, beacons, towers, etc. for delimiting lanes or other structures delimiting different types of road surfaces.
Furthermore, in the context of the present invention, the term "first type camera" refers to a camera that is different from the second type camera to be verified. For example, the second type camera to be verified may be a front-view camera used by an advanced driver assistance system (ADAS); accordingly, the first type camera may be a fisheye camera mounted on the vehicle, or a wing camera mounted on the vehicle. The fisheye camera may be a camera already mounted on the vehicle for the reversing function. In general, a fisheye camera offers higher resolution for sensed objects at close range, thereby enabling a more accurate reference value to be generated subsequently. The wing cameras may be cameras mounted on both sides of the vehicle (e.g., on the side mirrors) for sensing images of the areas to either side of the vehicle.
Although not shown in fig. 3, the reference value generating apparatus 3000 for a driving boundary may further include a correction device. The correction means may be configured to correct and compensate the first image information received by the receiving means 310. Taking fig. 2 as an example, the distortion in the first image information acquired by the fisheye camera may be corrected by a correction device.
The extracting means 320 may extract the second image information using conventional image processing methods, for example an edge filtering algorithm (Edge Filter), the Canny edge detection algorithm (Canny Filter) or the Sobel operator edge detection algorithm (Sobel Operator); machine learning or artificial intelligence algorithms, e.g. neural networks or deep learning, may also be employed to extract the second image information. The invention is not limited in this regard.
Although not shown in fig. 3, the reference value generating apparatus 3000 for a driving boundary may further include a conversion device. The conversion means is configured to coordinate-system convert the second image information and accordingly the generation means 330 is configured to generate a reference value of the driving boundary based at least on the converted second image information. For example, the second image information extracted at the extraction device 320 is converted from a two-dimensional image coordinate system to a three-dimensional vehicle coordinate system to facilitate further processing of the second image information (e.g., fusion with three-dimensional image information acquired by other sensors). Accordingly, in the generating means 330, a reference value of the driving boundary is generated at least based on the converted second image information.
It is to be noted that in the context of the present invention, the coordinate system conversion of the image information is not limited to the conversion of the second image information, but may be the conversion of the image information (for example, the first image information) generated in other steps. In the context of the present invention, the coordinate system conversion of the image information is not limited to the conversion from the image coordinate system to the vehicle coordinate system, but may be the mutual conversion in each coordinate system such as the camera coordinate system, the image coordinate system, the world coordinate system, the vehicle coordinate system, and the like. Depending on the specific image processing requirements.
Although not shown in fig. 3, the reference value generating apparatus 3000 for a driving boundary may further include a first fusing device. As previously described in connection with fig. 2, the receiving device 310 may receive first image information about the surroundings of the vehicle from a plurality of first type cameras installed at different positions of the vehicle. Accordingly, in the extraction means 320, the second image information may be extracted from the first image information of each of the first type cameras, respectively, for example, the second image information may be extracted from fig. 2 (a) - (d), respectively. The first fusing means may be configured to fuse the second image information of each of the first type of cameras. The generating means 330 may be configured to generate a reference value of the driving boundary based at least on the fused second image information.
Alternatively, the first fusing means may also be configured to fuse the first image information from the plurality of first type cameras. Accordingly, the extracting means 320 extracts second image information from the first image information fused by the first fusing means, and the generating means 330 may be configured to generate the reference value of the driving boundary based on that second image information. Fusing the image information acquired by the plurality of first type cameras can thus improve the accuracy of the generated driving boundary reference value. This is particularly significant for the portions where the fields of view of multiple first type cameras overlap.
It is easy to understand that the first fusing device may fuse the image information collected by every first type camera on the vehicle, or only the image information collected by a subset of the first type cameras. For example, in the embodiment shown in fig. 2 (a) - (d), for the lane line 210, the first fusing means may fuse only the image information in fig. 2 (a) and (d), i.e. the image information collected by the front-side and right-side first type cameras, to generate the reference value of the lane line 210.
Furthermore, although not shown in fig. 3, the reference value generating apparatus 3000 for driving boundaries may further include a second fusing device. The second fusing means is configured to fuse the second image information about the same driving boundary extracted at different times based on the positioning information of the vehicle at different times, and accordingly the generating means 330 is configured to generate the reference value of the driving boundary based at least on the fused second image information.
The operation of the various devices in the apparatus 3000 is generally performed frame by frame. During driving, the image information of the driving boundary at a given position around the vehicle generally does not appear in a single time frame only, but in several preceding and succeeding time frames. Therefore, using the second fusing device to fuse the image information of the same driving boundary acquired at multiple times can compensate for errors and omissions in single-frame image information and effectively improve the accuracy of the finally generated driving boundary reference value.
Similarly to the first fusing device, the fusion of the image information of the same driving boundary acquired at different times by the second fusing device is also not limited to after the extraction device 320 completes the extraction of the second image information, but may also occur before the extraction of the second image information. That is, the second fusing device may fuse the first image information of the same driving boundary acquired at different times. Then, the extraction means 320 extracts second image information from the first image information fused by the second fusion means, and the generation means 330 generates a reference value of the driving boundary based on the second image information.
The positioning information of the vehicle at different times used by the second fusing device may be determined by global positioning, for example a Global Navigation Satellite System (GNSS) or the Global Positioning System (GPS); it may also be determined by the vehicle's own relative positioning, for example using a ranging sensor (determining the change in the vehicle's position from the change in distance between the vehicle and a reference object); or it may be determined by a combination of the above. The invention is not limited in this regard.
The generating means 330 may be further configured to generate the reference value for a driving boundary based on the second image information about the driving boundary from the first type camera and third image information about the same driving boundary from another type of sensor. Here, the other type of sensor may be any type of sensor other than the first type camera that is capable of acquiring image information of the same driving boundary, such as a lidar sensor or a millimeter-wave radar sensor, or a combination of such sensors. Generating the reference value from image information acquired by different types of sensors prevents the inherent defects of a single sensor type from affecting the generated reference value, and thus further improves the accuracy of the finally generated driving boundary reference value.
It should be noted that different types of sensors often collect image information at different times, and therefore it is necessary to fuse the image information from the different types of sensors based on the positioning information of the vehicle at different times. The positioning information of the vehicle at different times can be determined in the manner described above, and will not be described in detail herein.
In one or more embodiments, the reference value generating device 3000 for driving boundaries may be incorporated into a vehicle. The reference value generating device 3000 may be a stand-alone device in the vehicle for generating driving boundary reference values, or it may be incorporated into another processing device of the vehicle such as an Electronic Control Unit (ECU) or a Domain Control Unit (DCU). It should be understood that the term "vehicle" or other similar terms as used herein includes motor vehicles in general, such as passenger vehicles (including sport utility vehicles, buses, trucks, etc.) and various commercial vehicles, and includes hybrid vehicles, electric vehicles and the like. A hybrid vehicle is a vehicle having two or more power sources, for example a vehicle powered by both gasoline and electricity.
In one or more embodiments, the reference value generating device 3000 for driving boundaries may be incorporated into an advanced driver assistance system (ADAS) of a vehicle, or into other L0-L5 automated driving functions.
The reference value of the driving boundary generated according to the above embodiments of the present invention may be used as a benchmark against which the sensed values of the driving boundary produced by other sensors on the vehicle are compared. Such a comparison can be used to verify and quantify the accuracy, average availability (e.g., true positive rate, true negative rate) and average unavailability (e.g., false positive rate, false negative rate) of the driving boundary sensing results of other on-board sensors. Such a comparison can also be applied to false triggering events, to verify the performance of the other on-board sensors when false triggering occurs.
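A minimal sketch of such a per-frame comparison is given below. Reference and detected boundaries are represented here by their lateral offsets in the vehicle frame and matched within an association gate; the 0.3 m gate, the representation, and the function name are illustrative assumptions rather than values from the patent. True positive rate and false negative rate then follow from the accumulated counts, e.g. tp / (tp + fn) and fn / (tp + fn) over all frames.

```python
# Minimal sketch of scoring one frame of boundary detections against the
# generated reference values (assumption: boundaries given as lateral offsets).
def score_frame(reference_offsets, detected_offsets, gate=0.3):
    tp = fp = 0
    unmatched = list(reference_offsets)
    for d in detected_offsets:
        if unmatched and min(abs(d - r) for r in unmatched) <= gate:
            best = min(unmatched, key=lambda r: abs(d - r))
            unmatched.remove(best)              # detection matches a reference
            tp += 1
        else:
            fp += 1                             # detection with no reference nearby
    fn = len(unmatched)                         # reference boundaries never detected
    return {"tp": tp, "fp": fp, "fn": fn}
```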
In summary, according to the driving boundary generation scheme of the embodiments of the invention, a first type camera is used to acquire image information of the driving boundary and the acquired image information is processed, so that a highly accurate driving boundary reference value can be generated. Using the first type camera, rather than the sensors to be verified, to generate the reference value increases the redundancy of the system and avoids common-cause errors. In addition, compared with manually annotating the reference value, generating it from the image information acquired by the first type camera greatly reduces time and labor costs. Furthermore, the driving boundary generation scheme according to embodiments of the invention can also combine the on-board first type camera with other types of sensors to generate the reference value, which can further improve its accuracy.
Further, according to the embodiment of the present invention, a camera capable of acquiring high-quality image information of a driving boundary near a vehicle, such as a fisheye camera, can be utilized as the first type of camera, thereby obtaining a driving boundary reference value with high accuracy. This is particularly evident in rain, snow, fog, etc.
It will be readily appreciated that, although the above descriptions have described only some of the embodiments of the invention, the invention may be embodied in many other forms without departing from the spirit or scope thereof. The illustrated embodiments are, therefore, to be considered in all respects as illustrative and not restrictive. The invention is capable of various modifications and substitutions without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (17)

1. A reference value generation method for a driving boundary, comprising:
receiving first image information about a vehicle surroundings from a first type camera mounted on the vehicle;
extracting second image information about a driving boundary from the first image information; and
generating a reference value for the driving boundary based at least on the second image information,
wherein the reference value is used to verify sensed values about the driving boundary acquired by a second type of camera mounted on the vehicle.
2. The method of claim 1, wherein the reference value is further used to verify sensed values about the driving boundary acquired by a first other type of sensor mounted on the vehicle,
wherein the first other type of sensor is different from the first type of camera and the second type of camera.
3. The method of claim 1, further comprising coordinate system converting the second image information, wherein the reference value is generated based at least on the converted second image information.
4. A method according to claim 3, wherein in the coordinate system conversion, the second image information is converted from a two-dimensional image coordinate system to a three-dimensional vehicle coordinate system.
5. The method of claim 1, wherein the first type of camera mounted on the vehicle comprises at least two first type of cameras located at different locations of the vehicle,
the method further comprises the steps of:
extracting second images of the at least two first-type cameras from first image information from the at least two first-type cameras, respectively;
fusing second images of the at least two first type cameras; and
the reference value is generated based at least on the fused second image information.
6. The method of claim 1, further comprising:
fusing second image information about the driving boundary extracted at different times based on positioning information of the vehicle at the different times; and
the reference value is generated based at least on the fused second image information.
7. The method of claim 1, wherein the reference value is generated based on the second image information and third image information about the driving boundary from a second other type of sensor,
wherein the second other type of sensor is different from the first type of camera.
8. A reference value generating apparatus for a driving boundary, characterized by comprising:
a receiving device configured to receive first image information about a vehicle surroundings from a first type camera mounted on a vehicle;
an extracting device configured to extract second image information about a driving boundary from the first image information; and
generating means configured to generate a reference value of the driving boundary based at least on the second image information,
wherein the reference value is used to verify sensed values about the driving boundary acquired by a second type of camera mounted on the vehicle.
9. The apparatus of claim 8, wherein the reference value is further used to verify sensed values about the driving boundary acquired by a first other type of sensor mounted on the vehicle,
wherein the first other type of sensor is different from the first type of camera and the second type of camera.
10. The apparatus of claim 8, further comprising a conversion device, wherein,
the conversion means is configured to perform coordinate system conversion on the second image information, and
the generating means is further configured to generate the reference value based at least on the converted second image information.
11. The apparatus of claim 10, wherein,
the conversion device is further configured to convert the second image information from a two-dimensional image coordinate system to a three-dimensional vehicle coordinate system.
12. The apparatus of claim 8, further comprising a first fusion device, wherein the first type of camera mounted on the vehicle comprises at least two first type of cameras located at different locations of the vehicle,
the extraction means is further configured to extract second images of the at least two first type cameras from first image information from the at least two first type cameras respectively,
the first fusing device is configured to fuse the second images of the at least two first type cameras, and
the generating means is further configured to generate the reference value based at least on the fused second image information.
13. The apparatus of claim 8, further comprising a second fusing device, wherein,
the second fusing device is configured to fuse second image information about the driving boundary extracted at different times based on positioning information of the vehicle at the different times, and
the generating means is further configured to generate the reference value based at least on the fused second image information.
14. The apparatus of claim 8, wherein the generating means is further configured to generate the reference value based on the second image information and third image information about the driving boundary from a second other type of sensor,
wherein the second other type of sensor is different from the first type of camera.
15. A computer storage medium comprising instructions which, when executed, perform the method of any one of claims 1 to 7.
16. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any of claims 1 to 7.
17. A vehicle, characterized in that it comprises an apparatus according to any one of claims 8 to 14.
CN202111662570.3A 2021-12-31 2021-12-31 Method and equipment for generating reference value for driving boundary Pending CN116416591A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111662570.3A CN116416591A (en) 2021-12-31 2021-12-31 Method and equipment for generating reference value for driving boundary
PCT/EP2022/081493 WO2023126097A1 (en) 2021-12-31 2022-11-10 Method and apparatus for generating ground truth for driving boundaries

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111662570.3A CN116416591A (en) 2021-12-31 2021-12-31 Method and equipment for generating reference value for driving boundary

Publications (1)

Publication Number Publication Date
CN116416591A true CN116416591A (en) 2023-07-11

Family

ID=84365686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111662570.3A Pending CN116416591A (en) 2021-12-31 2021-12-31 Method and equipment for generating reference value for driving boundary

Country Status (2)

Country Link
CN (1) CN116416591A (en)
WO (1) WO2023126097A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113906271A (en) * 2019-04-12 2022-01-07 辉达公司 Neural network training using ground truth data augmented with map information for autonomous machine applications

Also Published As

Publication number Publication date
WO2023126097A1 (en) 2023-07-06

Similar Documents

Publication Publication Date Title
CN110689761B (en) Automatic parking method
US10628690B2 (en) Systems and methods for automated detection of trailer properties
US20240046654A1 (en) Image fusion for autonomous vehicle operation
CN111081064B (en) Automatic parking system and automatic passenger-replacing parking method of vehicle-mounted Ethernet
CN111442776B (en) Method and equipment for sequential ground scene image projection synthesis and complex scene reconstruction
US10599931B2 (en) Automated driving system that merges heterogenous sensor data
EP3007099B1 (en) Image recognition system for a vehicle and corresponding method
US11100806B2 (en) Multi-spectral system for providing precollision alerts
US20180165833A1 (en) Calculation device, camera device, vehicle, and calibration method
CN111507130B (en) Lane-level positioning method and system, computer equipment, vehicle and storage medium
CN111028534B (en) Parking space detection method and device
CN110858405A (en) Attitude estimation method, device and system of vehicle-mounted camera and electronic equipment
CN113160594B (en) Change point detection device and map information distribution system
US10832428B2 (en) Method and apparatus for estimating a range of a moving object
CN110378836B (en) Method, system and equipment for acquiring 3D information of object
US10108866B2 (en) Method and system for robust curb and bump detection from front or rear monocular cameras
CN110780287A (en) Distance measurement method and distance measurement system based on monocular camera
CN111160070A (en) Vehicle panoramic image blind area eliminating method and device, storage medium and terminal equipment
CN111605481A (en) Congestion car following system and terminal based on look around
CN110539748A (en) congestion car following system and terminal based on look around
CN116416591A (en) Method and equipment for generating reference value for driving boundary
CN116416584A (en) Reference value generation method and device for other traffic participants
CN111256707A (en) Congestion car following system and terminal based on look around
CN114723928A (en) Image processing method and device
JP2022067324A (en) Object detection apparatus, object detection method, and object detection program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication