CN110147382A - Lane line updating method, device, equipment, system, and computer-readable storage medium - Google Patents
- Publication number
- CN110147382A CN110147382A CN201910451302.3A CN201910451302A CN110147382A CN 110147382 A CN110147382 A CN 110147382A CN 201910451302 A CN201910451302 A CN 201910451302A CN 110147382 A CN110147382 A CN 110147382A
- Authority
- CN
- China
- Prior art keywords
- point
- lane line
- offset
- original image
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
- G06F16/29—Geographical information databases
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Traffic Control Systems (AREA)
Abstract
An embodiment of the present invention provides a lane line updating method, device, equipment, system, and computer-readable storage medium. The method includes: acquiring a plurality of original images obtained by shooting a lane line, together with the positioning information at which each original image was shot; identifying lane line pixel points in each original image; calculating, according to the positioning information of a plurality of discrete points of the lane line to be updated in the map data and the positioning information of each original image, the offset vector of each discrete point relative to the lane line pixel points in each original image, thereby obtaining the offset vector set associated with each discrete point; performing vector aggregation on the offset vectors in each offset vector set to obtain the aggregation vector associated with each discrete point; and updating the lane line to be updated according to the target offset point pointed to by the aggregation vector associated with each discrete point. The method cancels the error introduced by any single offset vector, and is low in cost, high in stability, and timely in its updates.
Description
Technical Field
The embodiment of the invention relates to high-precision map technology, and in particular to a lane line updating method, device, equipment, system, and computer-readable storage medium.
Background
A high-precision map, also called a High Definition Map (HD Map), is a map built specifically for autonomous (unmanned) driving. Unlike a conventional navigation map, a high-precision map provides navigation information at the Lane level in addition to navigation information at the Road level. In real scenes, lane lines change due to road widening or road re-planning, so the lane lines in the high-precision map must be updated.
At present, lane lines in a high-precision map are updated as follows: lane line images are acquired by a vehicle equipped with a high-precision Global Positioning System (GPS), an Inertial Measurement Unit (IMU), and an industrial camera. The lane line data extracted from these images is then compared against the lane line data in the high-precision map to obtain a lateral offset. If the lateral offset of the lane line exceeds a certain threshold, the lane line data in the high-precision map is updated.
The disadvantage of this method is that, because the GPS always carries some error, the difference between the lane line data acquired at any one time and the lane line data in the high-precision map varies from capture to capture. If the lateral offset obtained from a single capture happens to exceed the threshold, the lane line may be updated erroneously.
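The prior-art rule described above can be sketched as follows. This is a hypothetical illustration, not code from the patent; the function name and the 0.5 m threshold are invented for the example.

```python
def should_update(map_lateral_m: float, observed_lateral_m: float,
                  threshold_m: float = 0.5) -> bool:
    """Prior-art rule: trigger an update when one lateral offset exceeds a threshold.

    Because GPS error varies per capture, a single noisy observation can
    cross the threshold and trigger an erroneous update.
    """
    return abs(observed_lateral_m - map_lateral_m) > threshold_m
```

The weakness is visible in the signature: the decision depends on one observation, so one capture with, say, 0.6 m of GPS error forces an update even when the lane line has not moved.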
Disclosure of Invention
The embodiment of the invention provides a lane line updating method, a lane line updating device, lane line updating equipment, a lane line updating system and a readable storage medium, so as to improve the accuracy of updating a lane line.
In a first aspect, an embodiment of the present invention provides a lane line updating method, including:
acquiring a plurality of original images obtained by shooting a lane line and positioning information of each original image;
identifying lane line pixel points from each original image;
calculating the offset vector of each discrete point relative to the lane line pixel point in each original image according to the positioning information of a plurality of discrete points of the lane line to be updated in the map data and the positioning information of each original image, and obtaining the offset vector set associated with each discrete point;
respectively carrying out vector aggregation on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point;
and updating the lane line to be updated according to the target offset point pointed by the aggregation vector associated with each discrete point.
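The five claimed steps can be sketched end-to-end as follows. This is a hedged, minimal illustration: image acquisition, pixel identification, and offset computation are assumed to have already produced per-point offset vectors (2-D tuples), and the aggregation shown is a mean, which is one plausible reading of "vector aggregation"; all names are illustrative.

```python
def aggregate(offsets):
    # Step 4: combine the offset vectors of one discrete point (mean here).
    n = len(offsets)
    return (sum(v[0] for v in offsets) / n, sum(v[1] for v in offsets) / n)

def update_lane_line(discrete_points, offset_sets):
    # Step 5: move each discrete point to the target offset point that
    # its aggregation vector points at.
    targets = []
    for p, offsets in zip(discrete_points, offset_sets):
        ax, ay = aggregate(offsets)
        targets.append((p[0] + ax, p[1] + ay))
    return targets
```

The returned target points would then be fitted into the updated lane line curve, as described in the detailed embodiments.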
In a second aspect, an embodiment of the present invention further provides a lane line updating device, including:
the acquisition module is used for acquiring a plurality of original images obtained by shooting the lane line and positioning information of each original image;
the identification module is used for identifying the lane line pixel points from each original image;
the calculation module is used for calculating the offset vector of each discrete point relative to the lane line pixel point in each original image according to the positioning information of a plurality of discrete points of the lane line to be updated in the map data and the positioning information of each original image, so as to obtain the offset vector set associated with each discrete point;
the aggregation module is used for respectively carrying out vector aggregation on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point;
and the updating module is used for updating the lane line to be updated according to the target offset point pointed by the aggregation vector associated with each discrete point.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the lane line updating method according to any of the embodiments.
In a fourth aspect, an embodiment of the present invention further provides a lane line updating system, including: the collection vehicle and the electronic device of any embodiment;
the electronic equipment is integrated in the acquisition vehicle or is independent of the acquisition vehicle and is in communication connection with the acquisition vehicle;
the collecting vehicle comprises a vehicle body, shooting equipment and positioning equipment, wherein the shooting equipment and the positioning equipment are carried on the vehicle body;
the photographing apparatus is configured to: shooting the lane line to obtain a plurality of original images;
the positioning device is configured to: positioning the vehicle body when each original image is shot to obtain positioning information of each original image;
the collection vehicle is configured to: send the plurality of captured original images and the positioning information of each original image to the electronic device, so that the electronic device can update the lane line to be updated.
In a fifth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the lane line updating method according to any one of the embodiments.
In the embodiment of the invention, a plurality of original images obtained by shooting a lane line are acquired together with the positioning information at which each image was shot, and lane line pixel points are identified in each original image. The offset vector of each discrete point relative to the lane line pixel points in each original image is calculated from the positioning information of a plurality of discrete points of the lane line to be updated in the map data and the positioning information of each original image, yielding an offset vector set associated with each discrete point; each discrete point is thus associated with a plurality of offset vectors rather than a single one. Vector aggregation is performed on the offset vectors in each set to obtain an aggregation vector associated with each discrete point, and the lane line to be updated is updated according to the target offset point pointed to by each aggregation vector. Aggregating a large number of offset vectors cancels the error introduced by any single vector, improving the accuracy, timeliness, and stability of lane line updating. Because it relies on aggregating many offset vectors, the embodiment requires neither high-precision shooting and positioning equipment nor continuous image capture, which reduces data-transmission traffic and the cost of updating the lane line. At the same time, the method neither fits a high-precision lane line directly on the original images nor needs a high-precision updating algorithm; an algorithm of ordinary precision and recall suffices, which effectively reduces the data-processing load and computation time.
Drawings
Fig. 1a is a schematic structural diagram of a lane line updating system according to an embodiment of the present invention;
fig. 1b is a flowchart of a lane line updating method according to an embodiment of the present invention;
fig. 2a is a flowchart of a lane line updating method according to a second embodiment of the present invention;
FIG. 2b is a diagram of offset vectors in an original image according to a second embodiment of the present invention;
fig. 3a is a flowchart of a lane line updating method according to a third embodiment of the present invention;
fig. 3b is a schematic diagram of a distance between a back projection point and a shooting point of an original image according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a lane line updating apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of a lane line updating system according to a sixth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
To clearly describe the technical solution of the embodiment of the present invention, a schematic structural diagram of a lane line updating system to which the embodiment of the present invention is applied is described first based on fig. 1 a. In fig. 1a, the lane line updating system mainly includes an electronic device and a collection vehicle.
The collection vehicle may be any road vehicle, including but not limited to cars, trucks, and passenger buses, and may be unmanned or manned. The collection vehicle comprises a vehicle body together with a shooting device and a positioning device mounted on the vehicle body.
The shooting device and the positioning device may be mounted on a rear-view mirror of the collection vehicle. The shooting device, e.g. a wide-angle camera or a fisheye camera, photographs the road surface in front of the collection vehicle. The positioning device locates the collection vehicle in real time and integrates a positioning system, for example the Global Positioning System (GPS) or the BeiDou positioning system.
The electronic device may be integrated into the collection vehicle or separate from and communicatively coupled to the collection vehicle. The electronic equipment is used for executing the updating operation of the lane line.
Based on the lane line updating system, an embodiment of the present invention provides a lane line updating method, a flowchart of which is shown in fig. 1b, and the method is applicable to a situation where a lane line on a road surface is collected and the lane line in a high-precision map is updated according to a collection result. The method may be performed by a lane line updating apparatus and is generally integrated in an electronic device in the above-described lane line updating system.
With reference to fig. 1b, the method provided in this embodiment specifically includes:
s110, acquiring a plurality of original images obtained by shooting the lane line and positioning information of each original image.
In the process that the collection vehicle runs on the road surface, the shooting equipment shoots the lane lines on the road surface to obtain a plurality of original images.
In this embodiment, the shooting device does not need to capture frames continuously; it is sufficient that the captured images, taken together, cover the lane line completely.
In the process that the collection vehicle runs on the road surface, the positioning equipment positions the vehicle body of the collection vehicle. At the time of capturing each original image, the positioning information of the vehicle body, that is, the positioning information of capturing each original image is acquired from the positioning device.
S120, identifying the lane line pixel points from each original image.
The lane line pixel points are pixel points belonging to the category of the lane line. Optionally, semantic segmentation is performed on each original image by using a deep neural network model, so as to obtain a semantic category label corresponding to each pixel point in each original image, such as a lane line category, a tree category, or a road sign category. From each pixel point, a pixel point belonging to the lane line category is identified.
The deep neural network model includes, but is not limited to, a convolutional neural network.
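Given a per-pixel semantic label mask as produced by such a segmentation network, extracting the lane line pixel points reduces to filtering by class label. A minimal hypothetical sketch (the class id and all names are illustrative, not from the patent):

```python
LANE = 1  # hypothetical class id for the "lane line" category

def lane_pixels(mask):
    """mask: list of rows of semantic class ids -> list of (row, col) lane pixels."""
    return [(r, c)
            for r, row in enumerate(mask)
            for c, label in enumerate(row)
            if label == LANE]

mask = [
    [0, 1, 0],   # tiny toy mask; 0 could be road, 1 lane line
    [0, 1, 1],
]
```

In practice the mask would come from the deep neural network's output; the filtering step itself is this simple.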
S130, calculating offset vectors of each discrete point relative to the lane line pixel points in each original image according to the positioning information of the discrete points of the lane line to be updated in the map data and the positioning information of each original image, and obtaining an offset vector set associated with each discrete point.
The lane line already present in the high-precision map data is the lane line to be updated. It is cut into segments of a preset length along its direction of extension, and the center point of each segment is taken, yielding a plurality of discrete points. Together, these discrete points form a discretized representation of the lane line.
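The discretization just described can be sketched for a single straight lane line segment as follows; this is a hedged illustration under the simplifying assumption of straight-segment interpolation, with invented function and parameter names.

```python
import math

def point_at(p, q, dist):
    """Point `dist` metres from p towards q (assumes 0 <= dist <= |pq|)."""
    t = dist / math.dist(p, q)
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def discretize(p, q, step):
    """Centre points of consecutive `step`-length pieces of segment pq."""
    total = math.dist(p, q)
    centers = []
    d = step / 2.0          # centre of the first piece
    while d < total:
        centers.append(point_at(p, q, d))
        d += step
    return centers
```

A real lane line is a polyline, so an implementation would walk the cumulative arc length across segments, but the piece-centre logic is the same.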
Corresponding positioning information is acquired from the map data of the high-precision map according to the coordinates of the discrete points, and the positioning information of the lane line pixel points is obtained from the positioning information at which each original image was shot. The lane line pixel points, identified from the original images, represent the real position of the lane line. The offset vector of each discrete point relative to the lane line pixel points in each original image can then be obtained from the deviation between the positioning information of the discrete point and that of the lane line pixel points, yielding the offset vector set associated with each discrete point. It should be noted that the lane line pixel points involved in the calculation should correspond to the discrete point in terms of positioning; for example, the vertical-axis coordinate of a lane line pixel point involved in the calculation is the same as that of the discrete point.
In an example with a total of 10 discrete points and 20 original images, this operation yields the offset vector of each discrete point relative to the lane line pixel points in each original image, that is, 10 offset vector sets, each containing at most 20 offset vectors. Because of the shooting angle, some original images contain no lane line pixel points corresponding in position to some discrete points and thus contribute no offset vector; this is why each set contains at most 20 vectors. In this embodiment, however, the offset vector set of each discrete point contains at least two offset vectors; that is, for each discrete point there are at least two original images containing lane line pixel points corresponding to it in position. The start point of an offset vector is the discrete point, and its end point is a lane line pixel point.
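Building the per-point sets, including the "missing observation" case just mentioned, can be sketched as follows. This is a hypothetical illustration: the point-to-pixel matching is assumed to have been done already, and all names are invented.

```python
def build_offset_sets(points, per_image_matches):
    """points: discrete points as (x, y).
    per_image_matches: one dict per image, mapping a point index to the
    matched lane line pixel, absent when the image has no matching pixel.
    Returns point index -> list of offset vectors (point -> pixel)."""
    sets = {i: [] for i in range(len(points))}
    for matches in per_image_matches:
        for i, pixel in matches.items():
            px, py = points[i]
            sets[i].append((pixel[0] - px, pixel[1] - py))
    return sets
```

With two images where only the first observes point 1, point 1's set ends up with a single vector while point 0's set has two, mirroring the "at most N" behaviour in the example above.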
S140, respectively performing vector aggregation on the offset vectors in each offset vector set to obtain the aggregation vector associated with each discrete point.
Because of errors introduced by the positioning equipment and/or the shooting equipment, a single offset vector cannot accurately represent the deviation of the lane line; for this reason, the embodiment aggregates at least two offset vectors to cancel the error carried by any single one.
Specifically, for each offset vector set, vector aggregation is performed on the offset vectors in the offset vector set to obtain an aggregation vector associated with each discrete point. Optionally, vector superposition is performed on the offset vectors to obtain an aggregation vector.
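Why aggregation cancels single-vector error can be illustrated numerically: roughly symmetric positioning noise on individual offsets averages out. The mean is used as the aggregate here, which is one plausible reading of "vector superposition" (a plain sum would grow with the number of observations); the data is invented for the example.

```python
def aggregate(vectors):
    n = len(vectors)
    return (sum(v[0] for v in vectors) / n, sum(v[1] for v in vectors) / n)

true_offset = (2.0, 0.0)                                  # the real lane deviation
noise = [(0.4, -0.1), (-0.4, 0.1), (0.2, 0.3), (-0.2, -0.3)]  # zero-mean noise
observed = [(true_offset[0] + dx, true_offset[1] + dy) for dx, dy in noise]
# Any single observed vector is off by up to ~0.5, yet the aggregate
# recovers the true offset because the noise sums to zero.
```

Real noise is not exactly zero-mean, but the residual error of the aggregate shrinks as more offset vectors are collected.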
S150, updating the lane line to be updated according to the target offset point pointed by the aggregation vector associated with each discrete point.
Each discrete point corresponds one-to-one with an offset vector set, and therefore also one-to-one with an aggregation vector. The start point of the aggregation vector is the discrete point and its end point is the target offset point; target offset points are thus likewise in one-to-one correspondence with discrete points.
The target offset point is a position to which the lane line should be updated, and based on this, a curve passing through the target offset point is generated, and map data of the updated lane line is obtained. The map data of the lane line is a coordinate sequence of the lane line in a world coordinate system.
Optionally, methods for solving the curve include, but are not limited to, the method of undetermined coefficients, the definition method, the parametric method, and curve fitting. Taking curve fitting as an example, a curve is fitted through the target offset points pointed to by the aggregation vectors associated with the discrete points, giving the updated map data of the lane line. Note that if the fitted curve is not expressed in the world coordinate system, it must be projected into the world coordinate system to obtain the updated map data. The map data of the lane line is then converted into the high-precision map format and stored in the high-precision map database.
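The curve-fitting option can be sketched with an ordinary polynomial fit; this is a hedged illustration, not the patent's algorithm: the polynomial degree, sample count, and function name are arbitrary choices for the example.

```python
import numpy as np

def fit_lane(targets, degree=2, samples=5):
    """Fit a polynomial through the target offset points and resample it
    as the updated coordinate sequence of the lane line."""
    xs = np.array([p[0] for p in targets])
    ys = np.array([p[1] for p in targets])
    coeffs = np.polyfit(xs, ys, degree)          # least-squares fit
    sx = np.linspace(xs.min(), xs.max(), samples)
    return list(zip(sx, np.polyval(coeffs, sx)))  # resampled (x, y) pairs
```

A parametric spline would handle near-vertical lane lines better than y = f(x); the choice depends on the road geometry being modelled.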
In the embodiment of the invention, a plurality of original images obtained by shooting a lane line are acquired together with the positioning information at which each image was shot, and lane line pixel points are identified in each original image. The offset vector of each discrete point relative to the lane line pixel points in each original image is calculated from the positioning information of a plurality of discrete points of the lane line to be updated in the map data and the positioning information of each original image, yielding an offset vector set associated with each discrete point; each discrete point is thus associated with a plurality of offset vectors rather than a single one. Vector aggregation is performed on the offset vectors in each set to obtain an aggregation vector associated with each discrete point, and the lane line to be updated is updated according to the target offset point pointed to by each aggregation vector. Aggregating a large number of offset vectors cancels the error introduced by any single vector, improving the accuracy, timeliness, and stability of lane line updating. Because it relies on aggregating many offset vectors, the embodiment requires neither high-precision shooting and positioning equipment nor continuous image capture, which reduces data-transmission traffic and the cost of updating the lane line. At the same time, the method neither fits a high-precision lane line directly on the original images nor needs a high-precision updating algorithm; an algorithm of ordinary precision and recall suffices, which effectively reduces the data-processing load and computation time.
Example two
The present embodiment further optimizes the optional implementations of the embodiment above. The step of calculating the offset vector of each discrete point relative to the lane line pixel points in each original image, to obtain the offset vector set associated with each discrete point, is optionally refined in one of two ways. In the first, the plurality of discrete points are projected into the corresponding original images according to the positioning information of the discrete points of the lane line to be updated in the map data and the positioning information of each original image, giving corresponding projection points; the offset vector of each projection point relative to the lane line pixel points is then calculated from the position information of each projection point and of the lane line pixel points in each original image, yielding the offset vector set associated with each discrete point. In the second, the lane line pixel points in each original image are back-projected into the road-surface space of the world coordinate system according to the positioning information of each original image, giving a plurality of back-projection points; each discrete point is likewise projected into the road-surface space of the world coordinate system according to the positioning information of the discrete points of the lane line to be updated in the map data, giving the position information of each discrete point; the offset vector of each discrete point relative to the back-projection points is then calculated from the position information of each discrete point and of each back-projection point in the road-surface space, yielding the offset vector set associated with each discrete point. In either case, the calculation takes place in a two-dimensional space:
the offset vector set associated with each discrete point is calculated either in the original image or in the road-surface space of the world coordinate system.
Fig. 2a is a flowchart of a lane line updating method according to a second embodiment of the present invention. The method provided by the embodiment comprises the following operations:
s210, acquiring a plurality of original images obtained by shooting the lane line and positioning information of each original image.
S220, identifying the lane line pixel points from each original image. Execution continues with either S230 or S240.
A set of offset vectors is calculated in the original image in operations S230-S231 described below, and a set of offset vectors is calculated in the road surface space of the world coordinate system in operations S240-S241. The two-dimensional space for calculating the set of offset vectors can be freely selected by those skilled in the art according to the data processing amount and accuracy.
S230, projecting the discrete points into the corresponding original images according to the positioning information of the discrete points of the lane line to be updated in the map data and the positioning information of each original image, to obtain the corresponding projection points. Execution continues with S231.
The high-precision map where the discrete points are located is constructed according to a world coordinate system, and a plurality of discrete points need to be projected into an image coordinate system from the world coordinate system. Considering that the shooting visual angle of the original image is limited, in order to avoid redundant calculation, discrete points within the shooting visual angle range of each original image are projected into the corresponding original image. Wherein, the shooting visual angle range can be within a preset distance in front of the shooting equipment.
First, external parameters of the photographing apparatus are calculated according to the positioning information, and the external parameters of the photographing apparatus include position information and attitude information of the photographing apparatus in a world coordinate system.
Specifically, the position information and attitude information (t1, R1) of the collection vehicle carrying the shooting device in the world coordinate system are first obtained from the positioning information; these are also referred to as the translation vector and the rotation matrix. The yaw angle in the attitude information can be estimated from the motion azimuth of the collection vehicle in the positioning information; to simplify the calculation, the roll angle and the pitch angle are set to 0. The position information can be taken directly from the positioning information.
Then, from the position information and attitude information (t1, R1) of the collection vehicle in the world coordinate system, and the position information and attitude information (t2, R2) of the shooting device in the coordinate system of the collection vehicle, the position information and attitude information (t, R) of the shooting device in the world coordinate system are obtained.
The position information and attitude information of the shooting device in the coordinate system of the collection vehicle can be calibrated accurately when the device is installed, and remain unchanged while images are captured. The matrix constructed from (t2, R2) is multiplied by the matrix constructed from (t1, R1) to obtain the position information and attitude information (t, R) of the shooting device in the world coordinate system.
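The pose composition just described can be sketched as follows. This is a hedged illustration under the text's stated simplifications (roll = pitch = 0, so the vehicle rotation is a pure yaw); the frame convention x_w = R1(R2 x_c + t2) + t1 and all names are assumptions for the example.

```python
import math
import numpy as np

def yaw_rotation(yaw_rad):
    """Rotation about the vertical axis only (roll = pitch = 0)."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def camera_in_world(t1, R1, t2, R2):
    """Compose camera->vehicle (t2, R2) with vehicle->world (t1, R1):
    x_w = R1 (R2 x_c + t2) + t1, hence R = R1 R2 and t = R1 t2 + t1."""
    R = R1 @ R2
    t = R1 @ t2 + t1
    return t, R
```

A vehicle at (1, 0, 0) heading 90 degrees with a camera 1 m ahead of the vehicle origin places the camera at (1, 1, 0) in the world, which matches the formula above.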
Then, the discrete points are projected into the corresponding original images through the internal and external parameters of the shooting device, giving the projection point corresponding to each discrete point. The internal parameters of the shooting device include the physical pixel size on the sensor (dx, dy), the principal point of the image plane (u0, v0), and the focal length f. The internal parameters may be taken from a preset calibration or from the camera's factory specification.
The calculation method of the projection point corresponding to each discrete point is shown as the formula (1):
m=K[R,t]X; (1)
where m denotes the homogeneous pixel coordinates (u, v) of the projection point, K is the internal-parameter (intrinsic) matrix, [R, t] is the external-parameter (extrinsic) matrix, and X denotes the world coordinates of the discrete point.
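Equation (1) can be sketched directly in code: transform the world point into camera coordinates with the extrinsics, apply the intrinsic matrix, and divide by the homogeneous scale. The intrinsic values below (fx = fy = 800, principal point (320, 240)) are illustrative, not from the patent.

```python
import numpy as np

def project(K, R, t, X):
    """World point X (3,) -> pixel (u, v) via m = K [R | t] X."""
    Xc = R @ X + t                      # world -> camera coordinates
    m = K @ Xc                          # homogeneous pixel coordinates
    return m[0] / m[2], m[1] / m[2]     # perspective division

K = np.array([[800.0,   0.0, 320.0],   # fx,  0, u0  (illustrative)
              [  0.0, 800.0, 240.0],   #  0, fy, v0
              [  0.0,   0.0,   1.0]])
```

A point on the optical axis projects to the principal point, and lateral world offsets shift the pixel by fx·x/z, which is the behaviour the embodiment relies on when comparing projection points against lane line pixels.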
S231, calculating the offset vector of each projection point relative to the lane line pixel point according to the position information of each projection point and the position information of the lane line pixel point in each original image, and obtaining the offset vector set associated with each discrete point. Execution continues with S250.
Fig. 2b is a schematic diagram of the offset vector in the original image according to the second embodiment of the present invention, and in conjunction with fig. 2b, S231 includes the following four steps:
the first step is as follows: in each original image, a parallel reference line is drawn which passes through each projection point, respectively.
Different reference lines pass through different projection points, and the different reference lines are parallel. Alternatively, the reference line may be a line parallel to the transverse axis of the original image, or a line at an angle to the transverse axis, e.g. 2 degrees, 5 degrees.
The second step is that: and selecting left edge points and right edge points on the same datum line with each projection point from the lane line pixel points of each original image.
The third step: and calculating a central point corresponding to each projection point according to the left edge point and the right edge point corresponding to each projection point in each original image.
A plurality of lane line pixel points exist in the original image, and the lane line pixel points form a lane line area. The reference line passes through the projection point and the lane line region and intersects with the edge of the lane line region at the left edge point and the right edge point. And calculating the middle point of the connecting line of the left edge point and the right edge point, namely the central point corresponding to the projection point according to the position information of the left edge point and the right edge point.
It is worth noting that when there are multiple left and right edge points, at least two lane line regions exist in the original image, as shown in fig. 2b. In this case, the left edge point and the right edge point closest to the projection point are selected, and the center point is calculated from them.
The fourth step: and calculating the offset vector of each projection point relative to the central point according to the position information of each projection point and the position information of the corresponding central point in each original image to obtain an offset vector set associated with each discrete point.
The starting point of the offset vector is the projection point and its end point is the center point. The direction of the offset vector represents the offset direction, and its magnitude represents the degree of offset.
Fig. 2b shows two projection points N1 and N2. The reference line L1 passing through N1 intersects the nearest lane line region at P1 and P2, and the reference line L2 passing through N2 intersects the nearest lane line region at P3 and P4. The midpoint of the line connecting P1 and P2 is P5, and the midpoint of the line connecting P3 and P4 is P6. The offset vector of N1 relative to P5 is then calculated from the position information of N1 and P5, and the offset vector of N2 relative to P6 is calculated from the position information of N2 and P6.
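The four steps above can be sketched on a binary lane-line mask, using a horizontal image row as the reference line. The helper below is hypothetical and returns only the horizontal offset component, which is sufficient when the reference line is a row:

```python
import numpy as np

def offset_vector(lane_mask, proj_point):
    """Offset of a projected map point toward the nearest lane-line region
    along a horizontal reference line (the image row through the point)."""
    row, col = proj_point
    cols = np.flatnonzero(lane_mask[row])    # lane pixels on the reference line
    if cols.size == 0:
        return None                          # no lane region on this line
    # split the hit columns into contiguous runs (one run per lane region)
    runs = np.split(cols, np.flatnonzero(np.diff(cols) > 1) + 1)
    # pick the edge pair (run) closest to the projection point
    best = min(runs, key=lambda r: min(abs(r[0] - col), abs(r[-1] - col)))
    left, right = best[0], best[-1]          # left and right edge points
    center = (left + right) / 2.0            # midpoint of the edge pair
    return center - col                      # signed horizontal offset

mask = np.zeros((4, 20), dtype=bool)
mask[2, 10:15] = True                        # one lane region on row 2
# projection point at (2, 5): edge pair (10, 14), center 12, offset +7
```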
S240, according to the positioning information of each original image, the lane line pixel points in each original image are projected to the road space of the world coordinate system, and a plurality of back projection points are obtained. Execution continues with S241.
First, a road surface space in a world coordinate system is divided into a plurality of meshes, and a road surface image including the plurality of meshes is obtained.
In order to be consistent with the shape of the pixel points, the pavement space is divided into a plurality of square grids. The size of the grid is determined according to lane line accuracy, and if the lane line accuracy is in the centimeter level, the size of the grid needs to be set to the centimeter level. In one example, the pavement space is discretized into 20 cm by 20 cm grids, and the entire pavement space can be represented as a pavement image, where each pixel (or each grid) corresponds to a real 20 cm by 20 cm pavement.
And then, obtaining grid coordinates corresponding to the lane line pixel points in each original image according to the positioning information of each original image. The operation includes the following four steps.
The first step is as follows: and calculating external parameters of the shooting equipment according to the positioning information, wherein the external parameters of the shooting equipment comprise position information and posture information of the shooting equipment in a world coordinate system.
The second step is that: and projecting the lane line pixel points into a world coordinate system according to the external parameters and the internal parameters of the shooting equipment.
The internal reference and the external reference of the shooting device are described in detail in the above embodiments, and are not described again here.
Geometrically, the projection of a lane line pixel point into the world coordinate system is a ray that starts from the center of the shooting equipment and passes through the lane line pixel point. Writing the projection matrix as P = K[R, t], let H be the matrix formed by the first 3 columns of P and p4 the matrix formed by the 4th column of P. The parametric equation of this ray is:
X(w) = H⁻¹(w·j - p4); (2)
where w is the parameter of the ray X and j is the homogeneous pixel coordinate of the lane line pixel point.
The third step: and projecting the lane line pixel points in the world coordinate system into a road space according to the height of the road to obtain projection coordinates.
The height of the road surface can be calibrated in advance or calculated from the positioning information. Specifically, assuming that the center point of the shooting equipment is at the same height as the positioning device, an elevation value is obtained from the positioning information, and the height of the positioning device above the ground is subtracted from it to obtain the road surface height h:
z=h; (3)
Combining formula (2) and formula (3) and solving for w gives the coordinates of the projection point of the lane line pixel point (i.e. the ray X) in the road surface space of the world coordinate system (projection coordinates for short).
The fourth step: and determining the grid coordinate corresponding to the projection coordinate according to the position of the projection coordinate in the road surface image.
The projection coordinate of each lane line pixel point corresponds to one region. The projected coordinates are mapped into grid coordinates (a, b) according to their positions in the road surface image. Each lane line pixel point in the original image can be projected into a road space from the image coordinate system, and a corresponding grid coordinate is calculated.
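Combining formulas (2) and (3), the back projection and grid mapping can be sketched as follows. This is a sketch under the stated assumptions (cell size 0.2 m as in the example above; function name hypothetical):

```python
import numpy as np

def backproject_to_road(K, R, t, pixel, h, cell=0.2):
    """Intersect the viewing ray of a lane line pixel with the road plane
    z = h, then map the hit point to grid coordinates (cell size in metres)."""
    P = K @ np.hstack([R, t.reshape(3, 1)])  # 3x4 projection matrix
    H, p4 = P[:, :3], P[:, 3]                # first 3 columns / 4th column
    j = np.array([pixel[0], pixel[1], 1.0])  # homogeneous pixel coordinate
    Hinv = np.linalg.inv(H)
    d = Hinv @ j                             # ray term H^-1 j
    c = Hinv @ p4                            # offset term H^-1 p4
    w = (h + c[2]) / d[2]                    # solve z(w) = h for the scale w
    X = w * d - c                            # world point on the road plane
    a, b = int(X[0] // cell), int(X[1] // cell)
    return X, (a, b)                         # projection and grid coordinates
```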
After obtaining grid coordinates corresponding to the lane line pixel points in each original image, projecting the lane line pixel points into the corresponding grid coordinates to obtain a plurality of back projection points; meanwhile, a road surface image corresponding to each original image is obtained.
S241, according to the positioning information of the discrete points of the lane line to be updated in the map data, projecting each discrete point to a road surface space of a world coordinate system to obtain the position information of each discrete point. Execution continues with S242.
And S242, calculating an offset vector of each discrete point relative to each back projection point according to the position information of each discrete point and the position information of each back projection point in the road surface space of the world coordinate system to obtain an offset vector set associated with each discrete point. Execution continues with S251.
The coordinate system in which the discrete points are located is the world coordinate system. First, according to the height of the road surface and the positioning information of the discrete points in the map data, the position information of the discrete points in the road surface space is calculated. The discrete points are then projected into the corresponding road surface image.
Alternatively, first, in each of the above-described road surface images, parallel reference lines respectively passing through each of the discrete points are drawn; then, selecting left edge points and right edge points on the same datum line with each discrete point from the back projection points of each road surface image; then, according to the left edge point and the right edge point corresponding to each discrete point in each road surface image, calculating a central point corresponding to each discrete point; and calculating the offset vector of each discrete point relative to the central point according to the position information of each discrete point and the position information of the corresponding central point in each road surface image to obtain an offset vector set associated with each discrete point. The description here is substantially the same as the description at S231, except that the image type is different from the type of the point passed through, but the methods are substantially the same and will not be described again here.
And S250, projecting the offset vector in each offset vector set to a road surface space of a world coordinate system. Execution continues with S251.
Specifically, the road surface space in the world coordinate system is divided into a plurality of grids to obtain a road surface image comprising the plurality of grids; grid coordinates corresponding to the starting point and ending point of each offset vector in each original image are calculated according to the positioning information of each original image; and the starting point and ending point of each offset vector are projected into the corresponding grid coordinates.
The description here is substantially the same as the description at S240, except that the type of the point to be projected is different, but the methods are substantially the same, and are not described again here.
And S251, in a road surface space of a world coordinate system, vector aggregation is respectively carried out on the offset vectors in each offset vector set, and an aggregation vector associated with each discrete point is obtained.
And S260, updating the lane line to be updated according to the target offset point pointed by the aggregation vector associated with each discrete point.
The present embodiment provides two two-dimensional spaces: the original image and the road surface space of the world coordinate system. The offset vector set associated with each discrete point can be computed in either space. In addition, the road space is discretized by grid division, which makes it convenient to project the lane line pixel points into the road space. By projecting the lane line pixel points into the road space and calculating the offset vectors there, or by projecting the offset vectors from the original image into the road space, the influence of the shooting angle of view and of the position and posture of the collection vehicle on the offset vectors is removed, improving the accuracy of the lane line update.
EXAMPLE III
The embodiment of the invention optimizes the operation on the basis of the technical scheme of each embodiment. Optionally, optimizing the operation "performing vector aggregation on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point" to "performing vector addition on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point", or optimizing to "calculating a projection point of a lane line pixel point corresponding to each offset vector in each offset vector set in a road space of a world coordinate system; calculating the distance between each projection point and a shooting point corresponding to the original image according to the positioning information of each projection point and the positioning information of the original image where each projection point is located; configuring the weight of each offset vector according to the distance between each projection point and a shooting point corresponding to the original image; and carrying out vector weighted addition on the offset vectors in each offset vector set according to the weight of each offset vector to obtain an aggregation vector ", thereby improving the accuracy of the aggregation vector. A lane line updating method as shown in fig. 3a includes:
s310, acquiring a plurality of original images obtained by shooting the lane line and positioning information of each original image.
And S320, identifying the lane line pixel points from each original image.
S330, calculating the offset vector of each discrete point relative to the pixel point of the lane line in each original image according to the positioning information of the discrete points of the lane line to be updated in the map data and the positioning information of each original image, and obtaining the offset vector set associated with each discrete point. Execution continues with either S340 or S350.
S340 and S350 differ in that S340 directly adds the offset vectors in the offset vector set using the vector addition rule to obtain an aggregation vector, whereas S350 first configures a weight for each offset vector and then performs weighted addition using the vector addition rule.
And S340, carrying out vector addition on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point. Execution continues with S360.
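The unweighted aggregation of S340 is a component-wise vector sum; a minimal sketch (function name hypothetical):

```python
import numpy as np

def aggregate_simple(offset_vectors):
    """Unweighted vector addition of all offsets associated with one discrete point."""
    return np.asarray(offset_vectors, dtype=float).sum(axis=0)

# three offset vectors for one discrete point; their errors partly cancel
total = aggregate_simple([[0.3, 0.1], [0.2, -0.1], [0.1, 0.0]])
```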
And S350, obtaining a lane line pixel point corresponding to each offset vector in each offset vector set and a back projection point in a road space of a world coordinate system. Execution continues with S351.
If the offset vector itself is located in the road space of the world coordinate system, the lane line pixel point corresponding to the offset vector, such as the center point in the above embodiment, is directly obtained from the road space.
If the offset vector itself is not located in the road space of the world coordinate system, for example, in the original image in the above embodiment, the corresponding lane line pixel (the center point in the above embodiment) needs to be projected into the road space of the world coordinate system to obtain the back projection point. The detailed description of the projection method is given in the description of S240, and is not repeated here.
S351, calculating the distance between each back projection point and the shooting point corresponding to the original image according to the position information of each back projection point and the positioning information of the original image where the corresponding lane line pixel point is shot. Execution continues with S352.
The positioning information of the shot original image is substantially positioning information in a world coordinate system, the positioning information can be converted into a road surface space of the world coordinate system according to the road surface height, and the distance between each back projection point and the shot point corresponding to the original image is calculated in the road surface space.
Fig. 3b is a schematic diagram of a distance between a back projection point and a capture point of an original image according to a third embodiment of the present invention. Assuming that 3 offset vectors are associated with a certain discrete point, 3 backprojection points and 3 original images correspond to each other, the 3 backprojection points are Y1, Y2 and Y3, the shooting points of the 3 original images are O1, O2 and O3, the distance between the backprojection point Y1 and the shooting point O1 is S1, the distance between the backprojection point Y2 and the shooting point O2 is S2, and the distance between the backprojection point Y3 and the shooting point O3 is S3. It can be seen that S1> S2> S3.
It should be noted that if the accuracy of the positioning device and the capturing device is high enough, the backprojection points corresponding to a discrete point are substantially the same point. That is, in an ideal case, Y1, Y2, and Y3 are substantially the same point. However, due to equipment errors, the calculated backprojection points Y1, Y2, and Y3 have slight differences.
And S352, configuring the weight of each offset vector according to the distance between each back projection point and the corresponding shooting point of the original image.
The farther a back projection point is from its shooting point, the more it is influenced by the shooting angle of view and by the position and posture of the collection vehicle, so the offset vector corresponding to a distant back projection point is configured with a smaller weight; conversely, the offset vector corresponding to a nearby back projection point is configured with a larger weight. The distance is thus inversely related to the weight.
And S353, carrying out vector weighted addition on the offset vectors in each offset vector set according to the weight of each offset vector to obtain an aggregation vector. Execution continues with S360.
In connection with fig. 3b, assume that the weight of the offset vector corresponding to back projection point Y1 is configured as f1, the weight of the offset vector corresponding to back projection point Y2 is configured as f2, and the weight of the offset vector corresponding to back projection point Y3 is configured as f3, with f3 > f2 > f1 (since S1 > S2 > S3). The aggregation vector is obtained by the weighted addition of the three offset vectors using these weights.
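A sketch of the weighted aggregation follows. The inverse-distance weights normalised to sum to 1 are an assumed choice; the patent only requires that the weight decrease as the distance increases:

```python
import numpy as np

def aggregate(offset_vectors, distances):
    """Weighted vector addition: offsets whose back projection points lie
    close to their shooting points get larger weights."""
    w = 1.0 / np.asarray(distances, dtype=float)  # inverse-distance weights
    w /= w.sum()                                  # normalise to sum to 1
    return (w[:, None] * np.asarray(offset_vectors, dtype=float)).sum(axis=0)

# three offset vectors with distances S1 > S2 > S3, hence weights f1 < f2 < f3
vecs = [[0.30, 0.0], [0.32, 0.0], [0.34, 0.0]]
agg = aggregate(vecs, distances=[30.0, 20.0, 10.0])
```

The aggregate leans toward the most trusted (nearest) observation while still averaging out per-image noise.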
And S360, updating the lane line to be updated according to the target offset point pointed by the aggregation vector associated with each discrete point.
Optionally, obtaining aggregation vectors associated with a plurality of adjacent discrete points on the same lane line; carrying out similarity verification on the aggregation vectors associated with the plurality of adjacent discrete points; and if the aggregation vectors associated with the plurality of adjacent discrete points pass the similarity verification, updating the lane line to be updated according to the target offset points pointed by the aggregation vectors.
Wherein the adjacent discrete points are continuously arranged discrete points. Optionally, a preset number of multiple adjacent discrete points are selected from any section of the same lane line, and then an aggregation vector associated with the multiple adjacent discrete points is obtained.
Performing similarity verification on the plurality of aggregation vectors comprises: verifying whether the differences between the lengths of the aggregation vectors are smaller than a length threshold and/or whether the differences between the angles of the aggregation vectors are smaller than an angle threshold. If the verification passes, the offset of the lane line to be updated is valid, and the lane line is updated according to the target offset points pointed to by the aggregation vectors; if the verification fails, the offset is invalid and the lane line is not updated, or aggregation vectors associated with other adjacent discrete points on the same lane line are obtained and similarity verification is performed again.
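The length and angle checks can be sketched as follows. The thresholds are illustrative, and the simple angle span ignores wrap-around at ±180 degrees:

```python
import math

def similar(agg_vectors, len_thresh=0.1, ang_thresh_deg=10.0):
    """Verify that aggregation vectors of adjacent discrete points agree
    in length and direction before accepting the offset as valid."""
    lengths = [math.hypot(x, y) for x, y in agg_vectors]
    angles = [math.degrees(math.atan2(y, x)) for x, y in agg_vectors]
    ok_len = max(lengths) - min(lengths) < len_thresh
    ok_ang = max(angles) - min(angles) < ang_thresh_deg  # ignores wrap-around
    return ok_len and ok_ang

# three adjacent points offset by roughly the same vector: verification passes
consistent = similar([(0.30, 0.01), (0.31, 0.0), (0.32, 0.02)])
```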
In this embodiment, vector addition or vector weighted addition is performed directly on the offset vectors to realize vector aggregation, so that errors caused by individual offset vectors cancel out, improving the accuracy, timeliness and stability of lane line updating; the validity of the offset is judged by performing similarity verification on the aggregation vectors, further improving the accuracy of the lane line update.
Example four
Fig. 4 is a schematic structural diagram of a lane line updating device according to a fourth embodiment of the present invention, where the fourth embodiment of the present invention is applicable to a situation where a lane line on a road surface is collected and the lane line in a high-precision map is updated according to a collection result, and with reference to fig. 4, the lane line updating device includes: an acquisition module 410, a recognition module 420, a calculation module 430, an aggregation module 440, and an update module 450.
An obtaining module 410, configured to obtain multiple original images obtained by shooting a lane line and positioning information of each original image;
the identification module 420 is used for identifying the lane line pixel points from each original image;
the calculating module 430 is configured to calculate, according to the positioning information of the multiple discrete points of the lane line to be updated in the map data and the positioning information of each original image, an offset vector of each discrete point relative to a pixel point of the lane line in each original image, and obtain an offset vector set associated with each discrete point;
an aggregation module 440, configured to perform vector aggregation on the offset vectors in each offset vector set, to obtain an aggregation vector associated with each discrete point;
the updating module 450 is configured to update the lane line to be updated according to the target offset point pointed by the aggregation vector associated with each discrete point.
In the embodiment of the invention, a plurality of original images obtained by shooting a lane line, together with the positioning information of each original image, are acquired, and the lane line pixel points are identified in each original image. According to the positioning information of a plurality of discrete points of the lane line to be updated in the map data and the positioning information of each original image, the offset vector of each discrete point relative to the lane line pixel points in each original image is calculated, yielding an offset vector set associated with each discrete point, i.e. a plurality of offset vectors per discrete point rather than a single one. The offset vectors in each set are vector-aggregated to obtain an aggregation vector associated with each discrete point, and the lane line to be updated is updated according to the target offset point pointed to by each aggregation vector. By aggregating a large number of offset vectors, errors caused by individual offset vectors cancel out, improving the accuracy, timeliness and stability of lane line updating. Because a large number of offset vectors are aggregated, no high-precision shooting or positioning equipment is required and images need not be captured continuously, which reduces data transmission traffic and the cost of updating the lane line. Meanwhile, the method does not require high-precision lane line fitting directly on the original images or a high-precision updating algorithm; an algorithm with ordinary accuracy and recall suffices, which effectively reduces the data processing load and computation time.
Optionally, the calculating module 430 is specifically configured to, when an offset vector set associated with each discrete point is obtained by calculating an offset vector of each discrete point relative to a lane line pixel point in each original image according to the positioning information of the multiple discrete points of the lane line to be updated in the map data and the positioning information obtained by capturing each original image: projecting the discrete points into the corresponding original image according to the positioning information of the discrete points of the lane line to be updated in the map data and the positioning information of each original image to obtain corresponding projected points; and calculating the offset vector of each projection point relative to the lane line pixel point according to the position information of each projection point and the position information of the lane line pixel point in each original image to obtain an offset vector set associated with each discrete point.
Optionally, the calculating module 430 is specifically configured to, when an offset vector set associated with each discrete point is obtained by calculating an offset vector of each discrete point relative to a lane line pixel point in each original image according to the positioning information of the multiple discrete points of the lane line to be updated in the map data and the positioning information obtained by capturing each original image: according to the positioning information of each original image, the lane line pixel points in each original image are projected into the road space of a world coordinate system to obtain a plurality of back projection points; projecting each discrete point into a road space of the world coordinate system according to the positioning information of a plurality of discrete points of the lane line to be updated in the map data to obtain the position information of each discrete point; and calculating the offset vector of each discrete point relative to each back projection point according to the position information of each discrete point and the position information of each back projection point in the road surface space of the world coordinate system to obtain an offset vector set associated with each discrete point.
Optionally, the calculating module 430 is specifically configured to, when calculating the offset vector of each projection point relative to the lane line pixel point according to the position information of each projection point and the position information of the lane line pixel point in each original image, and obtaining an offset vector set associated with each discrete point: drawing parallel reference lines which respectively penetrate through each projection point in each original image; selecting left edge points and right edge points on the same datum line with each projection point from the lane line pixel points of each original image; calculating a central point corresponding to each projection point according to the left edge point and the right edge point corresponding to each projection point in each original image; and calculating the offset vector of each projection point relative to the central point according to the position information of each projection point and the position information of the corresponding central point in each original image to obtain an offset vector set associated with each discrete point.
Optionally, the aggregation module 440, when performing vector aggregation on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point, is specifically configured to: projecting the offset vector in each offset vector set into a road space of a world coordinate system; and respectively carrying out vector aggregation on the offset vectors in each offset vector set in the road space of the world coordinate system to obtain an aggregation vector associated with each discrete point.
Optionally, the aggregation module 440, when projecting the offset vector in each offset vector set into the road space of the world coordinate system, is specifically configured to: divide the road surface space in the world coordinate system into a plurality of grids to obtain a road surface image comprising the plurality of grids; calculate grid coordinates corresponding to the starting point and ending point of each offset vector in each original image according to the positioning information of each original image; and project the starting point and ending point of each offset vector into the corresponding grid coordinates.
Optionally, the aggregation module 440, when performing vector aggregation on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point, is specifically configured to: perform vector addition on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point; or acquire a lane line pixel point corresponding to each offset vector in each offset vector set and a back projection point in the road space of the world coordinate system; calculate the distance between each back projection point and the shooting point corresponding to the original image according to the position information of each back projection point and the positioning information of the original image in which the corresponding lane line pixel point was shot; configure the weight of each offset vector according to the distance between each back projection point and the corresponding shooting point of the original image; and perform vector weighted addition on the offset vectors in each offset vector set according to the weight of each offset vector to obtain an aggregation vector associated with each discrete point.
Optionally, when the lane line to be updated is updated according to the target offset point pointed by the aggregation vector associated with each discrete point, the updating module 450 is specifically configured to: acquiring aggregation vectors associated with a plurality of adjacent discrete points on the same lane line; carrying out similarity verification on the aggregation vectors associated with the plurality of adjacent discrete points; and if the aggregation vectors associated with the plurality of adjacent discrete points pass the similarity verification, updating the lane line to be updated according to the target offset points pointed by the aggregation vectors.
Optionally, when the lane line to be updated is updated according to the target offset point pointed by the aggregation vector associated with each discrete point, the updating module 450 is specifically configured to: and performing curve fitting according to the target offset point pointed by the aggregation vector associated with each discrete point to obtain updated map data of the lane line.
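The curve fitting step can be sketched as a simple polynomial fit. The degree and the resampling count are assumed choices, not specified by the patent:

```python
import numpy as np

def refit_lane(target_points, degree=3):
    """Fit a curve through the target offset points to produce the updated
    lane line geometry (polynomial of assumed degree 3)."""
    pts = np.asarray(target_points, dtype=float)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], degree)
    xs = np.linspace(pts[0, 0], pts[-1, 0], 50)   # resampled updated lane line
    return np.column_stack([xs, np.polyval(coeffs, xs)])
```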
The lane line updating device provided by the embodiment of the invention can execute the lane line updating method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE five
Fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention. FIG. 5 illustrates a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 5 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in FIG. 5, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by the electronic device and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The electronic device 12 may also communicate with one or more external devices 14 (e.g., a collection vehicle) to store raw pictures and positioning information obtained from the collection vehicle in the system memory 28. The electronic device may also communicate with one or more devices that enable a user to interact with the electronic device 12, and/or with any devices (e.g., network cards, modems, etc.) that enable the electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the system memory 28, for example, to implement the lane line updating method provided by the embodiment of the present invention. After obtaining the updated map data of the lane lines, the processing unit 16 converts the map data of the lane lines into a high-precision map format and stores the high-precision map data in the system memory 28.
EXAMPLE six
The present embodiment provides a lane line updating system, which mainly includes an electronic device and a collection vehicle, with reference to fig. 1a and fig. 6.
On the basis of the above-described embodiment, the photographing apparatus is configured to shoot the lane line to obtain a plurality of original images; the positioning device is configured to position the vehicle body when each original image is shot, so as to obtain positioning information of each original image; and the collection vehicle is configured to send the plurality of original images and the positioning information of each captured original image to the electronic device, so that the electronic device updates the lane line to be updated. The process by which the electronic device updates the map data of the lane line is described in detail in the above embodiments and is not repeated here.
The present embodiment places low requirements on the precision of the positioning device and the shooting device, and can be implemented with devices of ordinary precision, for example, a 300,000-pixel or 500,000-pixel CMOS camera.
On the basis of the above embodiment, as shown in fig. 6, the collection vehicle further includes a shooting device control module connected to the shooting device and configured to control the shooting device to start or stop shooting. The collection vehicle further includes a memory connected to the shooting device and the positioning device and used for storing the original images and the positioning information. The collection vehicle further includes a communication module for communicating with the electronic device, and an uploading module connected to the communication module; the communication module may be a network card, a modem, a 4G network module, or the like. The collection vehicle sends the original images and the positioning information stored in the memory to the electronic device through the communication module and the uploading module.
EXAMPLE seven
The seventh embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the lane line updating method of any of the embodiments.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (13)
1. A lane line updating method, comprising:
acquiring a plurality of original images obtained by shooting a lane line and positioning information of each original image;
identifying lane line pixel points from each original image;
calculating the offset vector of each discrete point relative to the lane line pixel point in each original image according to the positioning information of a plurality of discrete points of the lane line to be updated in the map data and the positioning information of each shot original image, and obtaining the offset vector set associated with each discrete point;
respectively carrying out vector aggregation on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point;
and updating the lane line to be updated according to the target offset point pointed by the aggregation vector associated with each discrete point.
2. The method according to claim 1, wherein the calculating offset vectors of each discrete point relative to the lane line pixel points in each original image according to the positioning information of the plurality of discrete points of the lane line to be updated in the map data and the positioning information of the captured original image to obtain an offset vector set associated with each discrete point comprises:
according to the positioning information of each shot original image, projecting the lane line pixel points in each original image into a road space of a world coordinate system to obtain a plurality of back projection points;
projecting each discrete point into a road space of the world coordinate system according to the positioning information of a plurality of discrete points of the lane line to be updated in the map data to obtain the position information of each discrete point;
and calculating the offset vector of each discrete point relative to each back projection point according to the position information of each discrete point and the position information of each back projection point in the road surface space of the world coordinate system to obtain an offset vector set associated with each discrete point.
3. The method according to claim 1, wherein the calculating offset vectors of each discrete point relative to the lane line pixel points in each original image according to the positioning information of the plurality of discrete points of the lane line to be updated in the map data and the positioning information of the captured original image to obtain an offset vector set associated with each discrete point comprises:
projecting the discrete points to corresponding original images according to the positioning information of the discrete points of the lane line to be updated in the map data and the positioning information of each original image to obtain corresponding projected points;
and calculating the offset vector of each projection point relative to the lane line pixel point according to the position information of each projection point and the position information of the lane line pixel point in each original image to obtain an offset vector set associated with each discrete point.
4. The method according to claim 3, wherein the calculating an offset vector of each projection point relative to the lane-line pixel point according to the position information of each projection point and the position information of the lane-line pixel point in each original image to obtain an offset vector set associated with each discrete point comprises:
drawing parallel reference lines which respectively penetrate through each projection point in each original image;
selecting, from the lane line pixel points of each original image, a left edge point and a right edge point lying on the same reference line as each projection point;
calculating a central point corresponding to each projection point according to the left edge point and the right edge point corresponding to each projection point in each original image;
and calculating the offset vector of each projection point relative to the central point according to the position information of each projection point and the position information of the corresponding central point in each original image to obtain an offset vector set associated with each discrete point.
5. The method according to claim 3, wherein the separately performing vector aggregation on the offset vectors in each offset vector set to obtain an aggregated vector associated with each discrete point comprises:
projecting the offset vector in each offset vector set into a road space of a world coordinate system;
and in the road space of the world coordinate system, vector aggregation is respectively carried out on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point.
6. The method of claim 5, wherein projecting the offset vectors of each set of offset vectors into the road space of the world coordinate system comprises:
dividing a road surface space in a world coordinate system into a plurality of grids to obtain a road surface image comprising the plurality of grids;
calculating grid coordinates corresponding to the starting point and the ending point of each offset vector in each original image according to the positioning information of each original image;
and projecting the starting point and the ending point of each offset vector into corresponding grid coordinates.
7. The method according to claim 1, wherein the separately performing vector aggregation on the offset vectors in each offset vector set to obtain an aggregated vector associated with each discrete point comprises:
vector addition is carried out on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point;
or,
the vector aggregation is performed on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point, and the method includes:
acquiring lane line pixel points corresponding to each offset vector in each offset vector set and back projection points in a road space of a world coordinate system;
calculating the distance between each back projection point and the shooting point corresponding to the original image according to the position information of each back projection point and the positioning information of the original image where the corresponding lane line pixel point is shot;
configuring the weight of each offset vector according to the distance between each back projection point and the corresponding shooting point of the original image;
and carrying out vector weighted addition on the offset vectors in each offset vector set according to the weight of each offset vector to obtain an aggregation vector associated with each discrete point.
8. The method according to claim 1, wherein the updating the lane line to be updated according to the target offset point pointed to by the aggregation vector associated with each discrete point comprises:
acquiring aggregation vectors associated with a plurality of adjacent discrete points on the same lane line;
performing similarity verification on the aggregation vectors associated with the plurality of adjacent discrete points;
and if the aggregation vectors associated with the plurality of adjacent discrete points pass similarity verification, updating the lane line to be updated according to the target offset points pointed by the plurality of aggregation vectors.
9. The method according to any one of claims 1 to 8, wherein the updating the lane line to be updated according to the target offset point pointed to by the aggregation vector associated with each discrete point comprises:
and performing curve fitting according to the target offset point pointed by the aggregation vector associated with each discrete point to obtain updated map data of the lane line.
10. A lane line updating device, comprising:
the acquisition module is used for acquiring a plurality of original images obtained by shooting the lane line and positioning information of each original image;
the identification module is used for identifying the lane line pixel points from each original image;
the calculation module is used for calculating offset vectors of each discrete point relative to a lane line pixel point in each original image according to the positioning information of a plurality of discrete points of a lane line to be updated in the map data and the positioning information of each shot original image, so as to obtain an offset vector set associated with each discrete point;
the aggregation module is used for respectively carrying out vector aggregation on the offset vectors in each offset vector set to obtain an aggregation vector associated with each discrete point;
and the updating module is used for updating the lane line to be updated according to the target offset point pointed by the aggregation vector associated with each discrete point.
11. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the lane line updating method of any of claims 1-9.
12. A lane line update system, comprising: an acquisition vehicle and the electronic device of claim 11;
the electronic equipment is integrated in the acquisition vehicle or is independent of the acquisition vehicle and is in communication connection with the acquisition vehicle;
the collecting vehicle comprises a vehicle body, shooting equipment and positioning equipment, wherein the shooting equipment and the positioning equipment are carried on the vehicle body;
the photographing apparatus is configured to: shooting the lane line to obtain a plurality of original images;
the positioning device is configured to: positioning the vehicle body when each original image is shot to obtain positioning information of each original image;
the collection vehicle is used for: sending the plurality of original images and the positioning information of each captured original image to the electronic device, so that the electronic device updates the lane line to be updated.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the lane line updating method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910451302.3A CN110147382B (en) | 2019-05-28 | 2019-05-28 | Lane line updating method, device, equipment, system and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110147382A true CN110147382A (en) | 2019-08-20 |
CN110147382B CN110147382B (en) | 2022-04-12 |
Family
ID=67593428
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910451302.3A Active CN110147382B (en) | 2019-05-28 | 2019-05-28 | Lane line updating method, device, equipment, system and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110147382B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110728721A (en) * | 2019-10-21 | 2020-01-24 | 北京百度网讯科技有限公司 | Method, device and equipment for acquiring external parameters |
CN110987463A (en) * | 2019-11-08 | 2020-04-10 | 东南大学 | Multi-scene-oriented intelligent driving autonomous lane change performance test method |
CN111291681A (en) * | 2020-02-07 | 2020-06-16 | 北京百度网讯科技有限公司 | Method, device and equipment for detecting lane line change information |
CN111597987A (en) * | 2020-05-15 | 2020-08-28 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for generating information |
CN111597986A (en) * | 2020-05-15 | 2020-08-28 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for generating information |
CN111611958A (en) * | 2020-05-28 | 2020-09-01 | 武汉四维图新科技有限公司 | Method, device and equipment for determining lane line shape in crowdsourcing data |
CN111652952A (en) * | 2020-06-05 | 2020-09-11 | 腾讯科技(深圳)有限公司 | Lane line generation method, lane line generation device, computer device, and storage medium |
CN112697159A (en) * | 2021-01-06 | 2021-04-23 | 智道网联科技(北京)有限公司 | Map editing method and system |
CN113792061A (en) * | 2021-09-16 | 2021-12-14 | 北京百度网讯科技有限公司 | Map data updating method and device and electronic equipment |
CN114166238A (en) * | 2021-12-06 | 2022-03-11 | 北京百度网讯科技有限公司 | Lane line identification method and device and electronic equipment |
CN114387583A (en) * | 2022-01-14 | 2022-04-22 | 广州小鹏自动驾驶科技有限公司 | Method and device for processing lane line |
CN114413927A (en) * | 2022-01-20 | 2022-04-29 | 智道网联科技(北京)有限公司 | Lane line fitting method, electronic device, and storage medium |
US20220319196A1 (en) * | 2021-04-01 | 2022-10-06 | Beijing Tusen Zhitu Technology Co., Ltd. | Method and apparatus for detecting lane lines, electronic device and storage medium |
CN115188202A (en) * | 2022-07-18 | 2022-10-14 | 浙江大华技术股份有限公司 | Method and equipment for determining real motion state of vehicle |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105260988A (en) * | 2015-09-09 | 2016-01-20 | 百度在线网络技术(北京)有限公司 | High-precision map data processing method and high-precision map data processing device |
CN107229908A (en) * | 2017-05-16 | 2017-10-03 | 浙江理工大学 | A kind of method for detecting lane lines |
US20170343362A1 (en) * | 2016-05-30 | 2017-11-30 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method And Apparatus For Generating High Precision Map |
CN109059954A (en) * | 2018-06-29 | 2018-12-21 | 广东星舆科技有限公司 | The method and system for supporting high-precision map lane line real time fusion to update |
CN109297500A (en) * | 2018-09-03 | 2019-02-01 | 武汉中海庭数据技术有限公司 | High-precision positioner and method based on lane line characteristic matching |
CN109631934A (en) * | 2018-12-21 | 2019-04-16 | 斑马网络技术有限公司 | Information processing method, device, server and the storage medium of map application |
- 2019-05-28: CN CN201910451302.3A patent/CN110147382B/en active Active
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110728721B (en) * | 2019-10-21 | 2022-11-01 | 北京百度网讯科技有限公司 | Method, device and equipment for acquiring external parameters |
CN110728721A (en) * | 2019-10-21 | 2020-01-24 | 北京百度网讯科技有限公司 | Method, device and equipment for acquiring external parameters |
CN110987463A (en) * | 2019-11-08 | 2020-04-10 | 东南大学 | Multi-scene-oriented intelligent driving autonomous lane change performance test method |
CN111291681A (en) * | 2020-02-07 | 2020-06-16 | 北京百度网讯科技有限公司 | Method, device and equipment for detecting lane line change information |
CN111291681B (en) * | 2020-02-07 | 2023-10-20 | 北京百度网讯科技有限公司 | Method, device and equipment for detecting lane change information |
CN111597987A (en) * | 2020-05-15 | 2020-08-28 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for generating information |
CN111597986A (en) * | 2020-05-15 | 2020-08-28 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for generating information |
CN111597986B (en) * | 2020-05-15 | 2023-09-29 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for generating information |
CN111597987B (en) * | 2020-05-15 | 2023-09-01 | 阿波罗智能技术(北京)有限公司 | Method, apparatus, device and storage medium for generating information |
CN111611958A (en) * | 2020-05-28 | 2020-09-01 | 武汉四维图新科技有限公司 | Method, device and equipment for determining lane line shape in crowdsourcing data |
CN111611958B (en) * | 2020-05-28 | 2023-08-22 | 武汉四维图新科技有限公司 | Method, device and equipment for determining shape of lane line in crowdsourcing data |
CN111652952A (en) * | 2020-06-05 | 2020-09-11 | 腾讯科技(深圳)有限公司 | Lane line generation method, lane line generation device, computer device, and storage medium |
CN111652952B (en) * | 2020-06-05 | 2022-03-18 | 腾讯科技(深圳)有限公司 | Lane line generation method, lane line generation device, computer device, and storage medium |
CN112697159A (en) * | 2021-01-06 | 2021-04-23 | 智道网联科技(北京)有限公司 | Map editing method and system |
CN112697159B (en) * | 2021-01-06 | 2024-01-23 | 智道网联科技(北京)有限公司 | Map editing method and system |
US20220319196A1 (en) * | 2021-04-01 | 2022-10-06 | Beijing Tusen Zhitu Technology Co., Ltd. | Method and apparatus for detecting lane lines, electronic device and storage medium |
CN113792061A (en) * | 2021-09-16 | 2021-12-14 | 北京百度网讯科技有限公司 | Map data updating method and device and electronic equipment |
CN113792061B (en) * | 2021-09-16 | 2023-11-28 | 北京百度网讯科技有限公司 | Map data updating method and device and electronic equipment |
CN114166238A (en) * | 2021-12-06 | 2022-03-11 | 北京百度网讯科技有限公司 | Lane line identification method and device and electronic equipment |
CN114166238B (en) * | 2021-12-06 | 2024-02-13 | 北京百度网讯科技有限公司 | Lane line identification method and device and electronic equipment |
CN114387583A (en) * | 2022-01-14 | 2022-04-22 | 广州小鹏自动驾驶科技有限公司 | Method and device for processing lane line |
CN114413927A (en) * | 2022-01-20 | 2022-04-29 | 智道网联科技(北京)有限公司 | Lane line fitting method, electronic device, and storage medium |
CN114413927B (en) * | 2022-01-20 | 2024-02-13 | 智道网联科技(北京)有限公司 | Lane line fitting method, electronic device and storage medium |
CN115188202A (en) * | 2022-07-18 | 2022-10-14 | 浙江大华技术股份有限公司 | Method and equipment for determining real motion state of vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN110147382B (en) | 2022-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110147382B (en) | Lane line updating method, device, equipment, system and readable storage medium | |
CN110163930B (en) | Lane line generation method, device, equipment, system and readable storage medium | |
CN108805934B (en) | External parameter calibration method and device for vehicle-mounted camera | |
US9270891B2 (en) | Estimation of panoramic camera orientation relative to a vehicle coordinate frame | |
CN112444242B (en) | Pose optimization method and device | |
US8437501B1 (en) | Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases | |
CN107636679B (en) | Obstacle detection method and device | |
WO2020000137A1 (en) | Integrated sensor calibration in natural scenes | |
JP7422105B2 (en) | Obtaining method, device, electronic device, computer-readable storage medium, and computer program for obtaining three-dimensional position of an obstacle for use in roadside computing device | |
CN111263960B (en) | Apparatus and method for updating high definition map | |
CN112700486B (en) | Method and device for estimating depth of road surface lane line in image | |
CN111652072A (en) | Track acquisition method, track acquisition device, storage medium and electronic equipment | |
US11842440B2 (en) | Landmark location reconstruction in autonomous machine applications | |
US11373328B2 (en) | Method, device and storage medium for positioning object | |
CN113223064B (en) | Visual inertial odometer scale estimation method and device | |
CN115761164A (en) | Method and device for generating inverse perspective IPM image | |
CN115456898A (en) | Method and device for building image of parking lot, vehicle and storage medium | |
CN109708655A (en) | Air navigation aid, device, vehicle and computer readable storage medium | |
CN117235299A (en) | Quick indexing method, system, equipment and medium for oblique photographic pictures | |
WO2022133986A1 (en) | Accuracy estimation method and system | |
CN113034538B (en) | Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment | |
CN116917936A (en) | External parameter calibration method and device for binocular camera | |
EP4435723A1 (en) | Method and apparatus for calibrating roll angle of on-board camera, and device and storage medium | |
CN112116661B (en) | High-precision map construction method and device | |
CN117274835A (en) | Monocular 3D object detection method, system, medium and terminal of unmanned aerial vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||