CN113205447A - Road picture marking method and device for lane line identification - Google Patents

Road picture marking method and device for lane line identification

Info

Publication number
CN113205447A
Authority
CN
China
Prior art keywords
lane line
road
data
road image
dimensional coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110513333.4A
Other languages
Chinese (zh)
Inventor
Li Xiang (李翔)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing CHJ Automotive Information Technology Co Ltd
Original Assignee
Beijing CHJ Automotive Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing CHJ Automotive Information Technology Co Ltd filed Critical Beijing CHJ Automotive Information Technology Co Ltd
Priority to CN202110513333.4A
Publication of CN113205447A
Priority to PCT/CN2022/077531 (published as WO2022237272A1)
Legal status: Pending


Classifications

    • G06T3/06
    • G06T5/70
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20028 Bilateral filtering
    • G06T2207/20032 Median filtering
    • G06T2207/30256 Lane; Road marking

Abstract

The application provides a road image labeling method and device for lane line identification. The method comprises the following steps: acquiring three-dimensional coordinate data of a lane line in a road and acquiring the pose of the camera that photographs the road; performing projection transformation on the three-dimensional coordinate data according to the pose to obtain two-dimensional projection data projected into the road image; and using the two-dimensional projection data as the lane line labeling data of the road image. The method directly uses the three-dimensional coordinate data of the lane line and the pose of the camera, and takes the two-dimensional projection data obtained by projection transformation of the three-dimensional coordinate data as the lane line labeling data. Labeling road images in this way automates lane line annotation and solves the problem of the high cost of existing manual labeling.

Description

Road picture marking method and device for lane line identification
Technical Field
The application relates to the technical field of image processing, and in particular to a road image labeling method and device for lane line identification.
Background
Image feature extraction methods based on deep learning models are widely applied. For example, in the fields of autonomous driving and driver assistance, models obtained by deep learning training are used to process road images captured by vehicle cameras and to identify the lane lines in those images.
To ensure that a model obtained by deep learning training meets the requirements of practical applications and can identify lane lines as accurately as possible, a large number of training samples must be obtained and labeled; that is, the lane lines in the training samples must be marked.
At present, the labeling of training samples is done manually: the lane lines in a road image are identified by hand, represented by a characteristic equation or a curve, and the marked lane lines are used as the label of the road image. This manual labeling method requires a great deal of manpower and is expensive. Moreover, manual labeling cannot identify which training sample images are bad samples, which would be useful for rapidly improving the accuracy of model training.
Disclosure of Invention
In order to solve the technical problems described above or at least partially solve the technical problems, the present application provides a road image labeling method and apparatus for lane line identification.
In one aspect, the present application provides a road image labeling method for lane line identification, including:
acquiring three-dimensional coordinate data of a lane line in a road and acquiring the pose of a camera for shooting the road;
performing projection transformation on the three-dimensional coordinate data according to the pose to obtain two-dimensional projection data projected into the road image;
and using the two-dimensional projection data as the lane line labeling data of the road image.
Optionally, the road image labeling method for lane line identification further includes:
processing the road image by adopting a historical lane line identification model to obtain lane line identification data;
and judging whether the road image is a bad sample image or not according to the lane line marking data and the lane line identification data.
Optionally, the acquiring three-dimensional coordinate data of a lane line in a road includes:
acquiring at least two road images; the shooting poses of the cameras corresponding to the road images are different;
and determining the three-dimensional coordinate data of the lane line according to the at least two road images by adopting a three-dimensional reconstruction method.
Optionally, determining three-dimensional coordinate data of the lane line according to the at least two road images by using a three-dimensional reconstruction method, including:
acquiring matching feature points of the at least two road images;
determining the pose of a camera forming the road image according to the matched feature points;
constructing a space dense point cloud representing the road according to the at least two road images and the corresponding poses of the cameras;
obtaining a lane line point cloud according to the space dense point cloud;
and determining the three-dimensional coordinate data of the lane line according to the lane line point cloud.
Optionally, the acquiring the pose of the camera shooting the road comprises:
and acquiring matching feature points of the at least two road images, and determining the pose of the camera according to the matching feature points.
Optionally, the acquiring the pose of the camera shooting the road comprises:
and acquiring positioning characteristic data of the vehicle, and determining the pose of the camera according to the positioning characteristic data.
Optionally, acquiring three-dimensional coordinate data of a lane line in a road includes:
and searching the high-precision map of the road, and determining the three-dimensional coordinate data of the lane line.
In another aspect, the present application provides a road image labeling apparatus for lane line recognition, including:
the data acquisition unit is used for acquiring three-dimensional coordinate data of a lane line in a road and acquiring the pose of a camera for shooting the road;
the projection calculation unit is used for performing projection transformation on the three-dimensional coordinate data according to the pose to obtain two-dimensional projection data projected into the road image;
and the calibration unit is used for taking the two-dimensional projection data as the lane line labeling data of the road image.
Optionally, the road image labeling device for lane line identification further includes:
the model calculation unit is used for processing the road image by adopting a historical lane line identification model to obtain lane line identification data;
and the bad sample identification unit is used for judging whether the road image is a bad sample image according to the lane line marking data and the lane line identification data.
Optionally, the data acquiring unit includes:
the image acquisition subunit is used for acquiring at least two road images; the shooting poses of the cameras corresponding to the road images are different;
and the three-dimensional reconstruction subunit is used for determining the three-dimensional coordinate data of the lane line according to the at least two road images by adopting a three-dimensional reconstruction method.
Optionally, the three-dimensional reconstruction subunit includes:
a matching feature point obtaining subunit, configured to obtain matching feature points of the at least two road images;
a pose acquisition subunit, configured to determine, according to the matching feature points, a pose of a camera that forms the road image;
the dense point cloud obtaining subunit is used for constructing a space dense point cloud representing the road according to the at least two road images and the corresponding poses of the cameras;
the lane line point cloud obtaining subunit is used for obtaining a lane line point cloud according to the space dense point cloud;
and the three-dimensional coordinate data calculation subunit is used for determining the three-dimensional coordinate data of the lane line according to the lane line point cloud.
Optionally, the data acquiring unit acquires positioning feature data of a vehicle, and determines the pose of the camera according to the positioning feature data.
Optionally, the data obtaining unit determines the three-dimensional coordinate data of the lane line by searching for a high-precision map of the road.
The road image labeling method and device for lane line identification directly use the three-dimensional coordinate data of the lane line and the pose of the camera, and take the two-dimensional projection data obtained by projection transformation of the three-dimensional coordinate data as the lane line labeling data. Labeling road images in this way automates lane line annotation and solves the problem of the high cost of existing manual labeling.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; other drawings can be obtained from these by those skilled in the art without inventive effort.
FIG. 1 is a flowchart of a road image labeling method for lane line identification according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of a road image labeling method for lane line identification according to another embodiment of the present application;
FIG. 3 is a schematic structural diagram of a road image labeling device for lane line identification according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
wherein: 11-a data acquisition unit, 12-a projection calculation unit, 13-a calibration unit, 14-a model calculation unit and 15-a bad sample identification unit; 21-processor, 22-memory, 23-communication interface, 24-bus system.
Detailed Description
In order that the above-mentioned objects, features and advantages of the present application may be more clearly understood, the solution of the present application will be further described below. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be practiced in other ways than those described herein; it is to be understood that the embodiments described in this specification are only some embodiments of the present application and not all embodiments.
The embodiment of the application provides a road image labeling method for lane line identification, which is used for realizing automatic labeling of lane lines in road images. Fig. 1 is a flowchart of a road image labeling method for lane line identification according to an embodiment of the present disclosure. As shown in fig. 1, the road image annotation method provided by the embodiment of the application includes steps S101 to S103.
S101: the method comprises the steps of obtaining three-dimensional coordinate data of a lane line in a road and obtaining the pose of a camera for shooting the road.
The three-dimensional coordinate data of the lane line is data representing the position characteristics of the lane line in the road in a three-dimensional coordinate system. The three-dimensional coordinate system may be a vehicle coordinate system or a world coordinate system, which is not particularly limited in the embodiments of the present application.
The three-dimensional coordinate data of the lane line may be represented as the coordinates of points on the lane line, or by a spatial coordinate expression along the extending direction of the lane line; the embodiments of the present application place no particular limitation on this. In practical applications, if the three-dimensional coordinate data of the lane line is obtained by an image processing method, it is preferably represented as coordinate points; if it is obtained from data such as a high-precision map, it is preferably represented by a spatial coordinate expression.
In specific applications of the embodiments of the present application, the three-dimensional coordinate data of the lane line in the road can be acquired in the following ways.
1. Three-dimensional reconstruction of the lane line based on road images. This method takes at least two road images of the same area as its basis, restores the three-dimensional point cloud of the road from them, and extracts the three-dimensional coordinate data of the lane line from the road's three-dimensional point cloud data. Specifically, it comprises steps S1011-S1012.
S1011: and acquiring at least two road images.
The three-dimensional reconstruction method of the lane line based on the road image needs to utilize a plurality of road images shot in the same area to restore the three-dimensional characteristics of the road, so at least two road images need to be acquired. The aforementioned road image should be an image formed by photographing the same road area. It should be noted that the camera poses corresponding to the respective road images should be different.
In one application of the embodiments of the present application, the road image labeling method for lane line identification is executed at a remote server, which is communicatively connected to vehicle clients and receives the collected data generated and reported by the various sensors of each vehicle client. The collected data include road images captured by the vehicle camera, position data of the vehicle, driving direction data of the vehicle, vehicle attitude data, and the like. It is therefore possible to identify vehicles traveling within a specific geographic range from their position and driving direction data, and to use the images captured by those vehicles' cameras within that range as the road images for three-dimensional reconstruction.
S1012: and three-dimensional reconstruction is carried out on the road according to the at least two road images by adopting a three-dimensional reconstruction method, and the three-dimensional coordinate data of the lane line is determined.
In specific implementation, the method for determining the three-dimensional coordinate data of the lane line according to the three-dimensional reconstruction of the road by at least two road images comprises the following steps of: (1) acquiring matching feature points in the at least two road images; (2) determining the pose of a camera forming the road image according to the matched feature points; (3) constructing a space dense point cloud representing the road according to the at least two road images and the corresponding poses of the cameras; (4) obtaining a lane line point cloud according to the space dense point cloud; (5) and determining the three-dimensional coordinate data of the lane line according to the lane line point cloud.
(1) And acquiring the matched feature points in the at least two road images.
Obtaining the matching feature points in each road image specifically includes feature point extraction and feature point matching. Feature points are image pixels or pixel regions with distinctive characteristics; in most applications they are points where the gray level changes sharply relative to the neighboring pixels.
Current methods for acquiring image feature points include: A. the weighted-average Harris-Laplace feature point extraction algorithm; B. feature extraction based on the scale-invariant feature transform (SIFT), which detects local image features to search for extreme points in scale space and takes those extreme points as feature points; C. feature extraction based on speeded-up robust features (SURF), which borrows and simplifies the scale-invariant feature idea and extracts feature points rapidly with integral images and Haar wavelets.
After the feature points of each road image are obtained, they need to be matched to determine the matching feature points.
In the embodiments of the present application, methods for determining matching feature points from the feature points of each road image include: A. feature matching using normalized cross-correlation, which is robust to global brightness and contrast changes and fast to compute; B. feature point matching based on scale-invariant features, which computes a feature vector for each feature point from its neighborhood and then determines matching feature points from the Euclidean distances between feature vectors; C. matching based on the speeded-up robust features described above.
It should be noted that in practical applications each road image needs to be preprocessed before the matching feature points are extracted. The goal of preprocessing is to improve the visual effect and definition of the image, selectively highlighting useful information and suppressing useless information. Image preprocessing includes smoothing, using methods such as morphological filtering, bilateral filtering, adaptive mean filtering, adaptive median filtering and adaptive weighted filtering.
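To make this pipeline concrete, here is a minimal Python sketch using OpenCV, assuming grayscale road images; the bilateral filter parameters and the 0.75 ratio-test threshold are conventional illustrative choices, not values specified by this application:

```python
import cv2

def match_road_images(img_a, img_b):
    """Smooth two road images, then extract and match SIFT feature points."""
    # Bilateral filtering smooths noise while preserving lane line edges.
    a = cv2.bilateralFilter(img_a, d=9, sigmaColor=75, sigmaSpace=75)
    b = cv2.bilateralFilter(img_b, d=9, sigmaColor=75, sigmaSpace=75)

    # Scale-invariant feature extraction (SIFT).
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(a, None)
    kp_b, des_b = sift.detectAndCompute(b, None)

    # Euclidean-distance matching with Lowe's ratio test to keep
    # only distinctive matches.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < 0.75 * n.distance]
    return kp_a, kp_b, good
```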
(2) Determining the pose of the camera forming each road image according to the matching feature points.
Determining the camera poses of the road images from the matching feature points is a sparse point cloud reconstruction process: from the matching feature points it obtains both the pose of the camera that captured each road image and the three-dimensional coordinates of the sparse point cloud (the spatial position points corresponding to the matching feature points).
According to the pinhole imaging model, the relationship between a pixel position in the image and the corresponding three-dimensional space coordinate point is

s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = K [R \mid t] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

where x and y are the abscissa and ordinate of the pixel point in the image, s is a scale factor, and K is the intrinsic parameter matrix of the camera,

K = \begin{bmatrix} f & 0 & u \\ 0 & f & v \\ 0 & 0 & 1 \end{bmatrix}

in which f is the focal length of the camera and u and v are the pixel coordinates of the camera principal point. [R \mid t] is the pose matrix of the camera,

[R \mid t] = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}

where R is the rotation matrix of the camera coordinate system relative to the world coordinate system, t is the translation of the camera coordinate system origin relative to the world coordinate system, and X, Y and Z are the spatial coordinates of the object point. If K, R and t are known, the possible values of X, Y and Z corresponding to each road image feature point can be constrained, and bundle adjustment over the matching feature points of the road images determines the values of X, Y and Z. In practical applications K is generally known while R and t are not; with enough road images and feature points, the parameters of R and t can also be determined. That is, this method can determine both the three-dimensional coordinates of the sparse point cloud and the camera pose [R \mid t].
In the specific application of the embodiments of the present application, the intrinsic parameter matrix K of each camera needs to be acquired. Methods for acquiring the intrinsic parameter matrix K include the Tsai calibration method, the dot-template calibration method, Zhang Zhengyou's planar calibration method and camera self-calibration.
In the specific application of the embodiments of the present application, the pose of the camera and the three-dimensional coordinates of the sparse point cloud can be computed by bundle adjustment based on the principle of minimizing the reprojection error. In practice, hierarchical reconstruction can be used to provide an effective initial value for the bundle adjustment, and incremental, global or hybrid bundle adjustment can be used to compute the pose of each camera and the three-dimensional coordinates of the sparse point cloud.
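As an illustration of this bundle adjustment step, the following is a minimal Python sketch that jointly refines camera poses and sparse points by minimizing the reprojection error; the parameter layout and the use of scipy.optimize.least_squares are assumptions made for this sketch, not details taken from this application:

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Convert an axis-angle rotation vector to a 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(points, rvec, tvec, K_mat):
    """Project Nx3 world points to pixels with intrinsics K and pose [R|t]."""
    cam = points @ rodrigues(rvec).T + tvec   # world -> camera frame
    uv = cam @ K_mat.T                        # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]             # perspective division

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed, K_mat):
    """Reprojection residuals for all observations.

    params stacks one 6-vector (rvec, tvec) per camera followed by the
    flattened Nx3 sparse point coordinates; cam_idx/pt_idx say which
    camera and which point produced each observed pixel.
    """
    poses = params[:n_cams * 6].reshape(n_cams, 6)
    points = params[n_cams * 6:].reshape(n_pts, 3)
    res = []
    for c in range(n_cams):
        mask = cam_idx == c
        proj = project(points[pt_idx[mask]], poses[c, :3], poses[c, 3:], K_mat)
        res.append((proj - observed[mask]).ravel())
    return np.concatenate(res)

# Given an initial guess x0 (e.g. from hierarchical reconstruction):
# sol = least_squares(residuals, x0,
#                     args=(n_cams, n_pts, cam_idx, pt_idx, observed, K_mat))
```

In a real incremental or global pipeline a sparse Jacobian structure would also be supplied, but the residual definition stays the same.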
(3) Constructing a spatially dense point cloud representing the road according to the at least two road images and the corresponding camera poses. Reconstructing the spatially dense point cloud means computing, pixel by pixel, the three-dimensional coordinates in the spatial coordinate system corresponding to each pixel of the road images, given the camera poses, and thereby obtaining a dense point cloud in that coordinate system.
The basic principle of the dense point cloud computation is to find points in space with image consistency. Image consistency here means that, for images of the same scene, if a selected three-dimensional point lies on the surface of an object, then after it is projected onto each image according to the intrinsic and extrinsic camera parameters, the small regions centered on the projection points in the different images should look very similar.
The image consistency between two images can be measured by comparing corresponding image patches with the sum of squared differences (SSD), the sum of absolute differences (SAD) or normalized cross-correlation (NCC).
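A small sketch of these three similarity measures, assuming two equally sized grayscale patches given as NumPy arrays:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences; lower means more similar."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float(np.sum(d * d))

def sad(a, b):
    """Sum of absolute differences; lower means more similar."""
    return float(np.sum(np.abs(a.astype(np.float64) - b.astype(np.float64))))

def ncc(a, b):
    """Normalized cross-correlation; 1.0 means identical up to
    brightness and contrast changes."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```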
In practical applications, a voxel-based method, a point cloud diffusion-based method or a depth map fusion-based method can be used to determine the spatially dense point cloud; depth map fusion is the most commonly used.
Through the foregoing steps (1) to (3), a spatially dense point cloud representing road features may be determined, and then (4) and (5) may be performed.
(4) Obtaining the lane line point cloud from the spatially dense point cloud.
After the spatially dense point cloud is determined, the image with the clearest lane line can be selected from all the road images, the pixels corresponding to the lane line determined, and the spatially dense point cloud corresponding to those pixels taken as the lane line point cloud.
In some applications, where the lane line point cloud has a significant height difference relative to the road plane in the spatially dense point cloud, the lane line point cloud may also be determined by analysis of the spatially dense point cloud coordinate data.
(5) Determining the three-dimensional coordinate data of the lane line from the lane line point cloud.
In specific implementations, to determine the three-dimensional coordinate data of the lane line from the lane line point cloud, the point cloud data can be filtered, segmented and fused to obtain a small number of three-dimensional coordinate points representing the lane line, and data fitting is then performed on these coordinate points to obtain the three-dimensional coordinate data representing the lane line.
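As a sketch of this final fitting step, assuming the lane line point cloud has already been filtered and segmented into a single line; the chord-length parameterization and the polynomial degree are illustrative choices, not requirements of this application:

```python
import numpy as np

def fit_lane_line(points, degree=3):
    """Fit a 3D lane line as polynomials x(t), y(t), z(t) over a
    normalized chord-length parameter t in [0, 1].

    points: (N, 3) array of lane line points, roughly ordered along
    the line. Returns one coefficient array per coordinate.
    """
    deltas = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(deltas)])
    t /= t[-1]  # normalize arc length to [0, 1]
    return [np.polyfit(t, points[:, i], degree) for i in range(3)]

# Evaluate the fitted line at parameter values ts:
# xyz = np.stack([np.polyval(c, ts) for c in coeffs], axis=1)
```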
2. Data query-based method
The data query method determines the three-dimensional coordinate data of the lane line by querying high-precision map data.
A high-precision map contains a large amount of driving-related auxiliary information, including road data such as the position, type, width, gradient and curvature of the road's lane lines. The data-query-based method obtains the three-dimensional coordinate data of the lane line by looking these data up in the high-precision map.
In the embodiments of the present application, after the position of the vehicle is determined from the vehicle's position information, the high-precision map is queried according to the vehicle's position and driving direction, and the three-dimensional coordinate data of the corresponding lane line can then be determined.
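A sketch of such a lookup against a hypothetical HD-map interface; the record fields, the query_radius method and the heading filter are all illustrative assumptions, since real HD-map SDKs differ:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class LaneLineRecord:
    """Hypothetical HD-map lane line record."""
    lane_id: str
    points_3d: np.ndarray  # (N, 3) lane line points in the world frame
    line_type: str         # e.g. "solid", "dashed"

def is_ahead(rec, position, heading):
    """Keep records whose first point lies in front of the vehicle."""
    d = rec.points_3d[0, :2] - position[:2]
    return float(d @ heading[:2]) > 0.0

def query_lane_lines(hd_map, position, heading, radius_m=100.0):
    """Return lane lines near the vehicle and ahead of its heading.

    position and heading come from the vehicle's positioning sensors;
    hd_map.query_radius is an assumed spatial-query API.
    """
    candidates = hd_map.query_radius(position, radius_m)
    return [rec for rec in candidates if is_ahead(rec, position, heading)]
```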
In step S101, there are several methods of acquiring the pose of the camera.
(1) Method for acquiring camera pose based on matching feature points
Determining the camera poses of the road images from the matching feature points is part of the sparse point cloud reconstruction process, which obtains the pose of the camera that captured each road image from the matching feature points. This is the same procedure described above for determining the camera pose while acquiring the three-dimensional coordinate data of the lane line; see the foregoing description for details.
(2) Determining the pose of the camera from vehicle positioning feature data.
In a specific embodiment, the vehicle is equipped with sensors capable of acquiring vehicle positioning and attitude information; in specific applications these may include one or more of an inertial sensor, a wheel speed sensor and a navigation sensor. From the sensor data they generate, the position data and pose data of the vehicle can be computed; then, from the vehicle's position and pose data together with the transformation between the camera coordinate system and the vehicle coordinate system, the pose of the camera can be determined.
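A minimal sketch of that final composition, assuming the camera-to-vehicle extrinsics are known from calibration; the matrix names are illustrative:

```python
import numpy as np

def camera_pose_from_vehicle(R_wv, t_wv, R_vc, t_vc):
    """Compose the camera pose in the world frame from the vehicle pose.

    R_wv, t_wv: vehicle-to-world rotation/translation from positioning data.
    R_vc, t_vc: camera-to-vehicle extrinsics from calibration.
    Returns the world-to-camera pose (R, t) used by the projection formula.
    """
    R_wc = R_wv @ R_vc           # camera-to-world rotation
    t_wc = R_wv @ t_vc + t_wv    # camera origin expressed in the world frame
    # Invert to get the world-to-camera pose [R|t] used for projection.
    R = R_wc.T
    t = -R_wc.T @ t_wc
    return R, t
```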
S102: and performing projection transformation on the three-dimensional coordinate data according to the pose to obtain two-dimensional projection data projected into the road image.
As expressed by the formula in step S101,

s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = K [R \mid t] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

once the pose of the camera has been determined, and with the intrinsic parameters of the camera and the three-dimensional coordinate data of the lane line known, the two-dimensional lane line projection data (x, y) projected into the road image can be computed from this formula.
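A minimal sketch of this projection step, assuming the lane line is given as an (N, 3) array of world-frame points and (R, t) is the world-to-camera pose from the formula above:

```python
import numpy as np

def project_lane_points(points_w, K_mat, R, t):
    """Project 3D lane line points into the road image.

    Returns (M, 2) pixel coordinates for the points in front of
    the camera; these become the lane line labeling data.
    """
    cam = points_w @ R.T + t       # world -> camera coordinates
    cam = cam[cam[:, 2] > 0]       # keep points in front of the camera
    uv = cam @ K_mat.T             # apply the intrinsic matrix K
    return uv[:, :2] / uv[:, 2:3]  # perspective division -> (x, y)

# Example intrinsic matrix with focal length f and principal point (u, v):
# K_mat = np.array([[f, 0, u], [0, f, v], [0, 0, 1]], dtype=np.float64)
```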
S103: and adopting the two-dimensional projection data as lane marking data of the road image.
In step S103, the two-dimensional projection data is used as the labeling data of the corresponding road image; that is, the determined two-dimensional projection data is associated with the corresponding road image as its label.
The road image labeling method for lane line identification provided by the embodiments of the present application directly uses the three-dimensional coordinate data of the lane line and the pose of the camera, and takes the two-dimensional projection data obtained by projection transformation of the three-dimensional coordinate data as the lane line labeling data. Labeling road images in this way automates lane line annotation and solves the problem of the high cost of existing manual labeling.
FIG. 2 is a flowchart of a road image labeling method for lane line identification according to another embodiment of the present application. As shown in the figure, in some applications of the embodiments of the present application, the road image labeling method for lane line identification includes steps S104 and S105 in addition to the aforementioned steps S101 to S103.
S104: and processing the road image by adopting a historical lane line identification model to obtain lane line identification data.
The historical lane line recognition model is obtained by training a deep learning model on historical sample data; the lane line recognition model is used to process road images captured by the vehicle camera and determine the lane lines in the road.
The embodiments of the present application do not limit which deep learning model is used to construct the lane line recognition model; any of the various available deep learning algorithm models can be used.
S105: and judging whether the road image is a bad sample image or not according to the lane line marking data and the lane line identification data.
In the embodiment of the present application, the specific steps performed in step S105 may be as in S1051-S1054.
S1051: and calculating the difference value of the lane line marking data and the lane line identification data.
In practical applications there may be multiple pieces of lane line labeling data and lane line recognition data, so computing the difference between them means determining, for each piece of lane line labeling data, the corresponding (closest) lane line recognition data, computing the difference between each corresponding pair, and taking the average of those differences as the difference between the lane line labeling data and the lane line recognition data.
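A sketch of this difference computation, assuming each lane line (labeled or recognized) is an (N_i, 2) array of 2D image points; the nearest-line matching rule is a simplifying assumption:

```python
import numpy as np

def lane_difference(labeled, recognized):
    """Mean nearest-point distance between labeled and recognized lines.

    labeled, recognized: lists of (N_i, 2) point arrays, one per lane line.
    Each labeled line is matched to its closest recognized line, and the
    per-line distances are averaged into a single difference value.
    """
    diffs = []
    for lab in labeled:
        per_line = []
        for rec in recognized:
            # Distance from every labeled point to its nearest recognized point.
            d = np.linalg.norm(lab[:, None, :] - rec[None, :, :], axis=2)
            per_line.append(float(d.min(axis=1).mean()))
        diffs.append(min(per_line))
    return float(np.mean(diffs))

# The road image would then be flagged as a bad sample when
# lane_difference(labeled, recognized) exceeds the set threshold.
```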
S1052: judging whether the difference is larger than a set threshold value or not; if yes, executing S1053; if not, S1054 is performed.
S1053: and determining the road image as a bad sample image.
S1054: and determining the road image as a non-bad sample image.
In the embodiments of the present application, a bad sample image is an image whose lane line recognition data, obtained after processing with the historical lane line recognition model, does not meet a set requirement. Here, the requirement is expressed as a threshold on the difference between the lane line labeling data and the lane line recognition data. Because a bad sample image cannot be processed by the historical lane line recognition model into reasonable lane line recognition data, bad sample images can be used as a basis for retraining the lane line recognition model.
In practical applications, if the difference is greater than the set threshold, the lane line recognition data deviates too far from the lane line labeling data; since the labeling data is the more accurate of the two, the recognition data determined by the historical lane line recognition model is judged inaccurate, which in turn shows that the historical model cannot handle this road image well. The road image can therefore be treated as a bad sample image, and such bad samples can later serve as training samples for retraining the lane line recognition model, improving its recognition capability.
In addition to providing the road image annotation method for lane line identification, the embodiment of the application also provides a road image annotation device for lane line identification, which has the same inventive concept as the method.
Fig. 3 is a schematic structural diagram of a road image labeling device for lane line identification according to an embodiment of the present application. As shown in fig. 3, the road image labeling device for lane line identification according to the embodiment of the present application includes a data acquiring unit 11, a projection calculating unit 12, and a calibration unit 13.
The data acquisition unit 11 is used for acquiring three-dimensional coordinate data of a lane line in a road and acquiring the pose of a camera shooting the road.
The three-dimensional coordinate data of the lane line is data representing the position characteristics of the lane line in the road in a three-dimensional coordinate system. The three-dimensional coordinate system may be a vehicle coordinate system or a world coordinate system; the embodiments of the present application place no particular limitation on this.
The three-dimensional coordinate data of the lane line may be represented as the coordinates of points on the lane line, or by a spatial coordinate expression along the extending direction of the lane line; the embodiments of the present application place no particular limitation on this. In practical applications, if the three-dimensional coordinate data of the lane line is obtained by an image processing method, it is preferably represented as coordinate points; if it is obtained from data such as a high-precision map, it is preferably represented by a spatial coordinate expression.
In some applications of the embodiments of the present application, the data acquisition unit 11 acquires three-dimensional coordinate data of a lane line in a road in the following ways.
1. Three-dimensional reconstruction of the lane line based on road images: the three-dimensional point cloud of the road is restored from at least two road images containing image information of the same lane line, and the three-dimensional coordinate data of the lane line is extracted from the road's three-dimensional point cloud data.
Specifically, the data acquiring unit 11 includes a picture acquiring subunit and a three-dimensional reconstruction subunit.
The image acquisition subunit is used for acquiring at least two road images; the three-dimensional reconstruction method of the lane line based on the road image needs to utilize a plurality of road images shot in the same area to restore the three-dimensional characteristics of the road, so at least two road images need to be acquired. The aforementioned road image should be an image formed by photographing the same road area.
It should be noted that the camera shooting poses corresponding to the respective road images should be different.
In one application of the embodiments of the present application, the road image labeling device for lane line identification is deployed at a remote server, which is communicatively connected to vehicle clients and receives the collected data generated and reported by the various sensors of each vehicle client. The collected data include road images captured by the vehicle camera, position data of the vehicle, driving direction data of the vehicle, vehicle attitude data, and the like. It is therefore possible to identify vehicles traveling within a specific geographic range from their position and driving direction data, and to use the images captured by those vehicles' cameras within that range as the road images for three-dimensional reconstruction.
The three-dimensional reconstruction subunit is used for three-dimensionally reconstructing the road from the at least two road images by a three-dimensional reconstruction method and determining the three-dimensional coordinate data of the lane line.
In specific implementation, the three-dimensional reconstruction subunit comprises a matching feature point acquisition subunit, a pose acquisition subunit, a dense point cloud acquisition subunit, a lane line point cloud acquisition subunit and a three-dimensional coordinate data calculation subunit.
The matching feature point acquisition subunit is used for acquiring the matching feature points in the at least two road images. Obtaining the matching feature points in each road image specifically includes feature point extraction and feature point matching. Feature points are image pixels or pixel regions with distinctive characteristics; in most applications they are points where the gray level changes sharply relative to the neighboring pixels.
Current methods for acquiring image feature points include: A. the weighted-average Harris-Laplace feature point extraction algorithm; B. feature extraction based on the scale-invariant feature transform (SIFT), which detects local image features to search for extreme points in scale space and takes those extreme points as feature points; C. feature extraction based on speeded-up robust features (SURF), which borrows and simplifies the scale-invariant feature idea and extracts feature points rapidly with integral images and Haar wavelets.
After the feature points of each road image are obtained, they need to be matched to determine the matching feature points.
In the embodiments of the present application, methods for determining matching feature points from the feature points of each road image include: A. feature matching using normalized cross-correlation, which is robust to global brightness and contrast changes and fast to compute; B. feature point matching based on scale-invariant features, which computes a feature vector for each feature point from its neighborhood and then determines matching feature points from the Euclidean distances between feature vectors; C. matching based on the speeded-up robust features described above.
It should be noted that in practical applications each road image needs to be preprocessed before the matching feature points are extracted. The goal of preprocessing is to improve the visual effect and definition of the image, selectively highlighting useful information and suppressing useless information. Image preprocessing includes smoothing, using methods such as morphological filtering, bilateral filtering, adaptive mean filtering, adaptive median filtering and adaptive weighted filtering.
The pose acquisition subunit is used for determining the pose of the camera forming each road image according to the matching feature points.
Determining the camera poses of the road images from the matching feature points is a sparse point cloud reconstruction process: from the matching feature points it obtains both the pose of the camera that captured each road image and the three-dimensional coordinates of the sparse point cloud (the spatial position points corresponding to the matching feature points).
According to the pinhole imaging model, the relationship between a pixel position in the image and the corresponding three-dimensional space coordinate point is

s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = K [R \mid t] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

where x and y are the abscissa and ordinate of the pixel point in the image, s is a scale factor, and K is the intrinsic parameter matrix of the camera,

K = \begin{bmatrix} f & 0 & u \\ 0 & f & v \\ 0 & 0 & 1 \end{bmatrix}

in which f is the focal length of the camera and u and v are the pixel coordinates of the camera principal point. [R \mid t] is the pose matrix of the camera,

[R \mid t] = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}

where R is the rotation matrix of the camera coordinate system relative to the world coordinate system, t is the translation of the camera coordinate system origin relative to the world coordinate system, and X, Y and Z are the spatial coordinates of the object point. If K, R and t are known, the possible values of X, Y and Z corresponding to each road image feature point can be constrained, and bundle adjustment over the matching feature points of the road images determines the values of X, Y and Z. In practical applications K is generally known while R and t are not; with enough road images and feature points, the parameters of R and t can also be determined. That is, this method can determine both the three-dimensional coordinates of the sparse point cloud and the camera pose [R \mid t].
In the specific application of the embodiments of the present application, the intrinsic parameter matrix K of each camera needs to be acquired. Methods for acquiring the intrinsic parameter matrix K include the Tsai calibration method, the dot-template calibration method, Zhang Zhengyou's planar calibration method and camera self-calibration.
In the specific application of the embodiments of the present application, the pose of the camera and the three-dimensional coordinates of the sparse point cloud can be computed by bundle adjustment based on the principle of minimizing the reprojection error. In practice, hierarchical reconstruction can be used to provide an effective initial value for the bundle adjustment, and incremental, global or hybrid bundle adjustment can be used to compute the pose of each camera and the three-dimensional coordinates of the sparse point cloud.
Determining the three-dimensional coordinate data of the lane line by three-dimensional reconstruction of the road from at least two road images executes the following steps in order: (1) acquiring the matching feature points in the at least two road images; (2) determining the poses of the cameras forming the road images according to the matching feature points; (3) constructing a spatially dense point cloud representing the road according to the at least two road images and the corresponding camera poses; (4) obtaining a lane line point cloud from the spatially dense point cloud; and (5) determining the three-dimensional coordinate data of the lane line from the lane line point cloud.
The dense point cloud acquisition subunit is used for constructing a spatially dense point cloud representing the road according to the at least two road images and the corresponding camera poses. Reconstructing the spatially dense point cloud means computing, pixel by pixel, the three-dimensional coordinates in the spatial coordinate system corresponding to each pixel of the road images, given the camera poses, and thereby obtaining a dense point cloud in that coordinate system. The basic principle of the dense point cloud computation is to find points in space with image consistency; image consistency here means that, for images of the same scene, if a selected three-dimensional point lies on the surface of an object, then after it is projected onto each image according to the intrinsic and extrinsic camera parameters, the small regions centered on the projection points in the different images should look very similar.
The image consistency between two images can be measured by comparing corresponding image patches with the sum of squared differences (SSD), the sum of absolute differences (SAD) or normalized cross-correlation (NCC).
In practical applications, a voxel-based method, a point cloud diffusion-based method or a depth map fusion-based method can be used to determine the spatially dense point cloud; depth map fusion is the most commonly used.
The lane line point cloud acquisition subunit is used for obtaining the lane line point cloud from the spatially dense point cloud.
After the spatial dense point cloud is determined, an image with the clearest lane line can be determined from all the road images, pixels corresponding to the lane line are determined, and the spatial dense point cloud corresponding to the pixels is used as the lane line point cloud.
In some applications, where the lane line point cloud has a significant height difference relative to the road plane in the spatially dense point cloud, the lane line point cloud may also be determined by analysis of the spatially dense point cloud coordinate data.
The three-dimensional coordinate data calculation subunit is used for determining the three-dimensional coordinate data of the lane line from the lane line point cloud.
In specific implementations, to determine the three-dimensional coordinate data of the lane line from the lane line point cloud, the point cloud data can be filtered, segmented and fused to obtain a small number of three-dimensional coordinate points representing the lane line, and data fitting is then performed on these coordinate points to obtain the three-dimensional coordinate data representing the lane line.
In other embodiments, the data acquisition unit 11 determines the three-dimensional coordinate data of the lane line by querying high-precision map data.
A high-precision map contains a large amount of driving-related auxiliary information, including road data such as the position, type, width, gradient and curvature of the road's lane lines. The data-query-based method obtains the three-dimensional coordinate data of the lane line by looking these data up in the high-precision map.
In the embodiments of the present application, after the position of the vehicle is determined from the vehicle's position information, the high-precision map is queried according to the vehicle's position and driving direction, and the three-dimensional coordinate data of the corresponding lane line can then be determined.
The data acquisition unit 11 acquires the pose of the camera in the following ways.
(1) Method for acquiring camera pose based on matching feature points
Determining the camera poses of the road images from the matching feature points is part of the sparse point cloud reconstruction process, which obtains the pose of the camera that captured each road image from the matching feature points. This is the same procedure described above for determining the camera pose while acquiring the three-dimensional coordinate data of the lane line; see the foregoing description for details.
(2) Determining the pose of the camera from vehicle positioning feature data.
In a specific embodiment, the vehicle is equipped with sensors capable of acquiring vehicle positioning and attitude information; in specific applications these may include one or more of an inertial sensor, a wheel speed sensor and a navigation sensor. From the sensor data they generate, the position data and pose data of the vehicle can be computed; then, from the vehicle's position and pose data together with the transformation between the camera coordinate system and the vehicle coordinate system, the pose of the camera can be determined.
The projection calculation unit 12 is configured to perform projection transformation on the three-dimensional coordinate data according to the pose, so as to obtain the two-dimensional projection data projected into the road image.
According to the formula given above,

s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = K [R \mid t] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

once the pose of the camera has been determined, and with the intrinsic parameters of the camera and the three-dimensional coordinate data of the lane line known, the two-dimensional lane line projection data (x, y) projected into the road image can be computed.
The calibration unit 13 is configured to use the two-dimensional projection data as the lane line labeling data of the road image: it takes the two-dimensional projection data as the labeling data of the corresponding road image and associates the determined two-dimensional projection data with that road image as its label.
In one application of the embodiment of the present application, the road image labeling device for lane line identification includes a model calculation unit 14 and a bad sample identification unit 15 in addition to the aforementioned data acquisition unit 11, projection calculation unit 12 and calibration unit 13.
The model calculation unit 14 is configured to process the road image by using a historical lane line identification model to obtain lane line identification data.
The historical lane line recognition model is obtained by training a deep learning model on historical sample data; the lane line recognition model is used to process road images captured by the vehicle camera and determine the lane lines in the road.
The embodiments of the present application do not limit which deep learning model is used to construct the lane line recognition model; any of the various available deep learning algorithm models can be used.
The bad sample identification unit 15 is configured to determine whether the road image is a bad sample image according to the lane line marking data and the lane line identification data.
In specific applications, the bad sample identification unit 15 determines whether the road image is a bad sample image by computing the difference between the lane line labeling data and the lane line recognition data and judging whether that difference is greater than a set threshold.
If the difference value is larger than the set threshold value, determining that the road image is a bad sample image; and if the difference value is smaller than the set threshold value, determining that the road image is not a bad sample image.
In practical applications there may be multiple pieces of lane line labeling data and lane line recognition data, so computing the difference between them means determining, for each piece of lane line labeling data, the corresponding (closest) lane line recognition data, computing the difference between each corresponding pair, and taking the average of those differences as the difference between the lane line labeling data and the lane line recognition data.
In the embodiments of the present application, a bad sample image is an image whose lane line recognition data, obtained after processing with the historical lane line recognition model, does not meet a set requirement. Here, the requirement is expressed as a threshold on the difference between the lane line labeling data and the lane line recognition data. Because a bad sample image cannot be processed by the historical lane line recognition model into reasonable lane line recognition data, bad sample images can be used as a basis for retraining the lane line recognition model.
In practical applications, if the difference is greater than the set threshold, the lane line recognition data deviates too far from the lane line labeling data; since the labeling data is the more accurate of the two, the recognition data determined by the historical lane line recognition model is judged inaccurate, which in turn shows that the historical model cannot handle this road image well. The road image can therefore be treated as a bad sample image, and such bad samples can later serve as training samples for retraining the lane line recognition model, improving its recognition capability.
The embodiment of the application also provides electronic equipment for realizing the road image labeling method for lane line identification. Fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 4, the electronic device comprises at least one processor 21, at least one memory 22 and at least one communication interface 23.
The memory 22 in this embodiment may be either volatile memory or nonvolatile memory, or a combination of the two. In some embodiments, memory 22 stores the following elements: executable units or data structures, or a subset thereof, or an expanded set thereof: an operating system and an application program. The operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic tasks and processing hardware-based tasks. The application programs include application programs for various application tasks.
In the embodiment of the present application, the processor 21 executes the steps of the road image labeling method for lane line identification by calling a program or instructions stored in the memory 22, specifically a program or instructions stored in an application program.
In the embodiment of the present Application, the Processor 21 may be a general-purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The steps of the road image labeling method for lane line identification provided by the embodiment of the application may be performed directly by a hardware decoding processor, or by a combination of hardware and software units within the decoding processor. The software units may reside in storage media well established in the art, such as RAM, flash memory, ROM, PROM or EPROM, and registers. The storage medium is located in the memory 22, and the processor 21 reads the information in the memory 22 and completes the steps of the method in combination with its hardware.
The communication interface 23 is used for information transmission between the intelligent driving control system and external devices, for example, to acquire various vehicle sensor data and to generate and issue corresponding control instructions to the vehicle actuators.
The memory and processor components in the electronic device are coupled together by a bus system 24, and the bus system 24 is used to enable communication among these components. In the embodiment of the present application, the bus system may be a CAN bus or another type of bus. In addition to a data bus, the bus system 24 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, the various buses are labeled as bus system 24 in fig. 4.
The embodiments of the present application further provide a non-transitory computer-readable storage medium storing a program or instructions that cause a computer to execute the steps of the embodiment of the road image labeling method for lane line identification; these steps are not described again here to avoid repetition.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Likewise, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
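Before turning to the claims, a concrete illustration of the projection step may help. The sketch below assumes a standard pinhole camera model with a known intrinsic matrix K and a world-to-camera pose (R, t); the function name and the pinhole assumption are illustrative rather than prescribed by the application:

```python
import numpy as np

def project_lane_points(points_3d, R, t, K):
    """Project 3D lane line points into the road image.

    points_3d: (N, 3) lane line coordinates in the world frame.
    R (3, 3), t (3,): world-to-camera rotation and translation (the pose).
    K (3, 3): camera intrinsic matrix.
    Returns (M, 2) pixel coordinates for the points in front of the
    camera; these 2D projections serve as lane line marking data.
    """
    cam = R @ points_3d.T + t.reshape(3, 1)  # world frame -> camera frame
    cam = cam[:, cam[2] > 0]                 # drop points behind the camera
    uv = K @ cam                             # pinhole projection
    return (uv[:2] / uv[2]).T                # perspective divide -> pixels
```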

Claims (15)

1. A road image labeling method for lane line identification is characterized by comprising the following steps:
acquiring three-dimensional coordinate data of a lane line in a road and acquiring the pose of a camera for shooting the road;
performing projection transformation on the three-dimensional coordinate data according to the pose to obtain two-dimensional projection data projected into the road image;
and adopting the two-dimensional projection data as lane line marking data of the road image.
2. The road image labeling method for lane line identification according to claim 1, further comprising:
processing the road image by adopting a historical lane line identification model to obtain lane line identification data;
and judging whether the road image is a bad sample image or not according to the lane line marking data and the lane line identification data.
3. The road image labeling method for lane line identification according to claim 1 or 2, wherein the acquiring three-dimensional coordinate data of a lane line in a road comprises:
acquiring at least two road images; the shooting poses of the cameras corresponding to the road images are different;
and determining the three-dimensional coordinate data of the lane line according to the at least two road images by adopting a three-dimensional reconstruction method.
4. The method for labeling road images for lane line identification according to claim 3, wherein determining the three-dimensional coordinate data of the lane line from the at least two road images by using a three-dimensional reconstruction method comprises:
acquiring matching feature points of the at least two road images;
determining the pose of a camera forming the road image according to the matched feature points;
constructing a space dense point cloud representing the road according to the at least two road images and the corresponding poses of the cameras;
obtaining a lane line point cloud according to the space dense point cloud;
and determining the three-dimensional coordinate data of the lane line according to the lane line point cloud.
5. The road image labeling method for lane line identification according to claim 1 or 2, wherein acquiring the pose of a camera that photographs the road comprises:
and acquiring matching feature points of at least two road images, and determining the pose of the camera according to the matching feature points.
6. The road image labeling method for lane line identification according to claim 1 or 2, wherein acquiring the pose of a camera that photographs the road comprises:
and acquiring positioning characteristic data of the vehicle, and determining the pose of the camera according to the positioning characteristic data.
7. The road image labeling method for lane line identification according to claim 1 or 2, wherein acquiring three-dimensional coordinate data of a lane line in a road comprises:
and searching the high-precision map of the road, and determining the three-dimensional coordinate data of the lane line.
8. A road image labeling apparatus for lane line recognition, comprising:
the data acquisition unit is used for acquiring three-dimensional coordinate data of a lane line in a road and acquiring the pose of a camera for shooting the road;
the projection calculation unit is used for performing projection transformation on the three-dimensional coordinate data according to the pose to obtain two-dimensional projection data projected into the road image;
and the calibration unit is used for adopting the two-dimensional projection data as lane line marking data of the road image.
9. The road image labeling apparatus for lane line identification according to claim 8, further comprising:
the model calculation unit is used for processing the road image by adopting a historical lane line identification model to obtain lane line identification data;
and the bad sample identification unit is used for judging whether the road image is a bad sample image according to the lane line marking data and the lane line identification data.
10. The road image labeling device for lane line identification according to claim 8 or 9, wherein the data acquisition unit comprises:
the image acquisition subunit is used for acquiring at least two road images; the shooting poses of the cameras corresponding to the road images are different;
and the three-dimensional reconstruction subunit is used for determining the three-dimensional coordinate data of the lane line according to the at least two road images by adopting a three-dimensional reconstruction method.
11. The road image labeling apparatus for lane line identification according to claim 8 or 9, wherein the three-dimensional reconstruction subunit comprises:
the matching feature point acquisition subunit is used for acquiring matching feature points of at least two road images;
a pose acquisition subunit, configured to determine, according to the matching feature points, a pose of a camera that forms the road image;
the dense point cloud obtaining subunit is used for constructing a space dense point cloud representing the road according to the at least two road images and the corresponding poses of the cameras;
the lane line point cloud obtaining subunit is used for obtaining a lane line point cloud according to the space dense point cloud;
and the three-dimensional coordinate data calculation subunit is used for determining the three-dimensional coordinate data of the lane line according to the lane line point cloud.
12. The road image labeling device for lane line identification according to claim 8 or 9, wherein
the data acquisition unit acquires positioning feature data of a vehicle, and determines the pose of the camera according to the positioning feature data.
13. The road image labeling device for lane line identification according to claim 8 or 9, wherein
the data acquisition unit determines the three-dimensional coordinate data of the lane line by searching the high-precision map of the road.
14. An electronic device comprising a processor and a memory;
the processor is adapted to execute the steps of the road image labeling method for lane line identification according to any one of claims 1 to 7 by calling a program or instructions stored in the memory.
15. A computer-readable storage medium characterized in that the computer-readable storage medium stores a program or instructions for causing a computer to execute the steps of the road image labeling method for lane line identification according to any one of claims 1 to 7.
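As a concrete illustration of the feature-point matching and pose recovery recited in claims 4 and 5 above, here is a minimal two-view sketch using OpenCV. The helper name `camera_pose_from_two_views` is hypothetical, the matched points are assumed to be given, and the recovered translation is only determined up to scale:

```python
import cv2

def camera_pose_from_two_views(pts1, pts2, K):
    """Recover the relative camera pose from matched feature points.

    pts1, pts2: (N, 2) float arrays of matching points in two road images.
    K: (3, 3) camera intrinsic matrix.
    """
    # Robustly estimate the essential matrix from the matches.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    # Decompose it into a rotation and a (unit-scale) translation.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```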
CN202110513333.4A 2021-05-11 2021-05-11 Road picture marking method and device for lane line identification Pending CN113205447A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110513333.4A CN113205447A (en) 2021-05-11 2021-05-11 Road picture marking method and device for lane line identification
PCT/CN2022/077531 WO2022237272A1 (en) 2021-05-11 2022-02-23 Road image marking method and device for lane line recognition

Publications (1)

Publication Number Publication Date
CN113205447A true CN113205447A (en) 2021-08-03

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117593717B (en) * 2024-01-18 2024-04-05 武汉大学 Lane tracking method and system based on deep learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10657390B2 (en) * 2017-11-27 2020-05-19 Tusimple, Inc. System and method for large-scale lane marking detection using multimodal sensor data
CN111753605A (en) * 2019-06-11 2020-10-09 北京京东尚科信息技术有限公司 Lane line positioning method and device, electronic equipment and readable medium
CN112633035B (en) * 2019-09-23 2022-06-24 魔门塔(苏州)科技有限公司 Driverless vehicle-based lane line coordinate true value acquisition method and device
CN113205447A (en) * 2021-05-11 2021-08-03 北京车和家信息技术有限公司 Road picture marking method and device for lane line identification

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111801711A (en) * 2018-03-14 2020-10-20 法弗人工智能有限公司 Image annotation
CN111179152A (en) * 2018-11-12 2020-05-19 阿里巴巴集团控股有限公司 Road sign identification method and device, medium and terminal
KR20200065875A (en) * 2018-11-30 2020-06-09 한국교통대학교산학협력단 Method and system for recognizing lane using landmark
CN111368605A (en) * 2018-12-26 2020-07-03 易图通科技(北京)有限公司 Lane line extraction method and device
CN110163930A (en) * 2019-05-27 2019-08-23 北京百度网讯科技有限公司 Lane line generation method, device, equipment, system and readable storage medium storing program for executing
CN112184799A (en) * 2019-07-05 2021-01-05 北京地平线机器人技术研发有限公司 Lane line space coordinate determination method and device, storage medium and electronic equipment
CN112154446A (en) * 2019-09-19 2020-12-29 深圳市大疆创新科技有限公司 Three-dimensional lane line determining method and device and electronic equipment
CN112667837A (en) * 2019-10-16 2021-04-16 上海商汤临港智能科技有限公司 Automatic image data labeling method and device
CN111080662A (en) * 2019-12-11 2020-04-28 北京建筑大学 Lane line extraction method and device and computer equipment
CN111127422A (en) * 2019-12-19 2020-05-08 北京旷视科技有限公司 Image annotation method, device, system and host

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Ding Quanxin et al., "Mission Payload Technology for UAV Systems", Aviation Industry Press, 31 December 2020, p. 111 *
Ye Yangyang, "Research on Key Technologies of Environment Perception for Road Traffic", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, no. 03, pp. 035-10 *
Zha Hongbin et al., "Frontiers of Visual Information Processing Research", Shanghai Jiao Tong University Press, 31 December 2019, pp. 375-376 *
Qian Jide et al., "Fast lane line detection algorithm based on a region-of-interest model", Journal of University of Electronic Science and Technology of China, vol. 47, no. 03, pp. 356-361 *
Chen Zonghai, "System Simulation Technology and Its Applications, Vol. 17", University of Science and Technology of China Press, 31 August 2016, p. 38 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022237272A1 (en) * 2021-05-11 2022-11-17 北京车和家信息技术有限公司 Road image marking method and device for lane line recognition
CN114140538A (en) * 2021-12-03 2022-03-04 禾多科技(北京)有限公司 Vehicle-mounted camera pose adjusting method, device, equipment and computer readable medium
CN114581287A (en) * 2022-02-18 2022-06-03 高德软件有限公司 Data processing method and device
CN117372632A (en) * 2023-12-08 2024-01-09 魔视智能科技(武汉)有限公司 Labeling method and device for two-dimensional image, computer equipment and storage medium
CN117372632B (en) * 2023-12-08 2024-04-19 魔视智能科技(武汉)有限公司 Labeling method and device for two-dimensional image, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2022237272A1 (en) 2022-11-17

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination