CN111488762A - Lane-level positioning method and device and positioning equipment - Google Patents
- Publication number
- CN111488762A (application CN201910075113.0A)
- Authority
- CN
- China
- Prior art keywords
- road
- lane
- image
- specified
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0968—Systems involving transmission of navigation instructions to the vehicle
Abstract
The invention discloses a lane-level positioning method. The method comprises the following steps: performing road element segmentation on an input image to obtain a segmented image containing road elements of specified categories; vectorizing the pixels belonging to road elements of the specified categories in the segmented image to obtain road data including description information for those road elements; and determining the lane in which the vehicle is located and the positional relationship between the vehicle and a specified road element according to the road element description information included in the road data and the parameters of the imaging device that captured the input image. The lane-level positioning method provided by the embodiments of the invention achieves lane-level positioning of a vehicle from road images captured in real time, and can quickly and accurately locate the lane the vehicle occupies and the vehicle's specific position in the road without relying on a lane-level map.
Description
Technical Field
The invention relates to the technical field of map navigation, in particular to a lane-level positioning method, a lane-level positioning device and positioning equipment.
Background
With the growing number of automobiles, map navigation is increasingly widely applied. In the field of map navigation, lane-level positioning of a vehicle is very important: it determines the lateral position of the vehicle and underpins the navigation strategy. Furthermore, based on lane-level positioning results, path planning and guidance at the lane level can also be performed.
The traditional lane-level positioning method generally performs positioning initialization through lane line extraction, line-type classification, map matching and similar steps, and then infers the lane the vehicle occupies by combining lane-change detection and real-time lane line matching against a lane-level map. This approach requires a high-precision lane-level map: real-time lane line matching is only possible if the map includes the position and type of every lane line, which places demands on map data that an ordinary map cannot meet. In addition, the approach only identifies the lane line information on the two sides of the vehicle and ignores other road information (such as road diversion strips and edge lines), which limits the accuracy of lane-level positioning, especially on wide roads with many lanes. For example, on a four-lane road, both sides of the second and third lanes are dotted lines; when the vehicle is in either of those lanes its position cannot be reliably matched, and positioning frequently fails outright. Finally, because the approach relies on lane-change detection for its estimate, errors accumulate easily, so lane-level positioning accuracy is low and robustness is poor.
In the field of map navigation, how to implement lane-level positioning quickly and accurately, obtain higher-accuracy lane-level positioning results, and reduce the dependence on lane-level maps has become an urgent problem.
Disclosure of Invention
In view of the above, the present invention provides a lane-level positioning method, apparatus and positioning device that overcome or at least partially solve the above problems.
In a first aspect, an embodiment of the present invention provides a lane-level positioning method, including:
performing road element segmentation on an input image to obtain a segmented image containing road elements of a specified category;
vectorizing the pixels belonging to road elements of the specified category in the segmented image to obtain road data including description information for road elements of the specified category;
and determining the lane in which the vehicle is located and the positional relationship between the vehicle and a specified road element according to the road element description information included in the road data and the parameters of the imaging device that captured the input image.
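The three claimed steps can be sketched as a minimal pipeline. This is an illustrative sketch only: every function name, field name and return value below is an assumption made for exposition, not terminology from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class RoadData:
    # curve coefficients per element category, e.g. {"lane_line": [...]}
    curves: dict = field(default_factory=dict)

def segment_road_elements(image):
    """Step 1: per-pixel masks for each specified road element category."""
    return {"lane_line": [], "edge_line": [], "diversion_strip": []}

def vectorize(segmented):
    """Step 2: turn the pixel masks into curve equations (vector road data)."""
    return RoadData(curves={k: [] for k in segmented})

def locate(road_data, camera_params):
    """Step 3: lane index and distances from the curves plus camera params."""
    return {"lane_index": None, "dist_left_edge_m": None}

def lane_level_positioning(image, camera_params):
    segmented = segment_road_elements(image)
    road_data = vectorize(segmented)
    return locate(road_data, camera_params)
```

Each placeholder is filled in by the embodiments described below.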
In an alternative embodiment, the road element segmentation on the input image includes: recognizing road elements of a specified category in the input image by using a machine learning model, and segmenting the road elements of the specified category included in the input image according to the recognition result.
In an optional embodiment, the vectorizing of the pixels belonging to road elements of the specified category in the segmented image includes: determining, as a high-confidence region, a connected component longer than a preset first threshold in the image regions of road elements of the specified category in the segmented image; and extracting the road elements of the specified category in the high-confidence region, and fitting a curve equation of those elements in the image coordinate system.
In an optional embodiment, after determining the high-confidence region, the method further includes: supplementing the road elements according to a set translation rule when it is determined, from the characteristic parameter reference values of the road elements, that road elements in the high-confidence region are missing.
In an optional embodiment, supplementing the road elements according to the set translation rule when road elements in the high-confidence region are determined to be missing includes: if the distance between two adjacent road elements of the same kind in the high-confidence region is larger than a set second threshold, adding the specified road element between them according to the set translation rule, taking one of the two adjacent elements and the element's characteristic parameter reference value as the reference, until the distribution of the road elements is determined, according to the characteristic parameter reference value, to meet the requirement.
In an optional embodiment, determining, as a high-confidence region, a connected component longer than a preset first threshold in the image regions of road elements of a specified category in the segmented image includes: performing a dilation operation on the image regions of road elements of each specified category in the segmented image; judging, per road element category and based on the dilated segmented image, whether the length of a connected component of that category's image region is larger than the first threshold preset for that category; and if so, setting the corresponding image region as a high-confidence region.
In an alternative embodiment, fitting a curve equation of the road elements of the specified category in the high confidence region in the image coordinate system includes: sampling road elements of a specified category in the high confidence region according to a preset sampling interval; and performing curve fitting on the acquired sampling points to obtain a curve equation of the road elements of the specified category in the high confidence region under the image coordinate system.
In an alternative embodiment, determining the lane in which the vehicle is located and the positional relationship of the vehicle and the specified road element according to the road element description information included in the road data and the parameters of the imaging device that captured the input image includes: determining the pixel position of the vehicle in the input image according to the lane line description information and road edge description information included in the road data and the imaging device parameters; determining the lane in which the vehicle is located according to the positional relationship between that pixel position and the fitted lane line curve equations; and determining the minimum pixel distance from the pixel position to a road element of the specified category from the fitted curve equation of that element, then determining the distance from the vehicle to the element according to that minimum pixel distance and the spatial distance each pixel represents in the image.
In an optional embodiment, the determining the lane in which the vehicle is located and the position relationship between the vehicle and the specified road element includes determining and outputting at least one of the following information: the number of lanes on the left side of the vehicle, the number of lanes on the right side of the vehicle, the distance from the edge line of the left side road, the distance from the edge line of the right side road, the distance from the left lane line and the distance from the right lane line.
In an alternative embodiment, the specified category of road elements includes at least one of: road edge line, road lane line, road diversion strip.
In a second aspect, an embodiment of the present invention provides a lane-level positioning apparatus, including:
the element segmentation module is used for performing road element segmentation on the input image to obtain a segmented image containing road elements of a specified category;
the vectorization processing module is used for vectorizing the pixels belonging to road elements of the specified category in the segmented image to obtain road data including description information for road elements of the specified category;
and the positioning module is used for determining the lane where the vehicle is located and the position relation between the vehicle and the specified road element according to the road element description information included in the road data and the imaging equipment parameter for shooting the input image.
In a third aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, implement the lane-level positioning method described above.
In a fourth aspect, an embodiment of the present invention provides a positioning apparatus, including: a memory and a processor; wherein the memory stores a computer program which, when executed by the processor, is capable of implementing the lane-level positioning method described above.
The technical solutions provided by the embodiments of the invention have at least the following beneficial effects:
the lane-level positioning method provided by the embodiment of the invention can determine the position relationship of the lane where the vehicle is located and/or the vehicle and the specified road element by dividing the road element of the input image according to the description information of the divided road element, can realize the lane-level positioning of the vehicle by the road image shot in real time, can quickly and accurately position the lane where the vehicle is located and the specific position of the vehicle in the road without depending on a lane-level map, can obtain a lane-level positioning result by using a single-frame image shot by an imaging device under the condition of no lane-level map, identifies the road element, positions the vehicle by the position relationship of the vehicle and the road element, does not have the condition of inaccurate matching, does not depend on lane change detection, and greatly reduces the accumulated error of positioning, the accuracy and the precision of lane level positioning are improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic diagram illustrating a lane-level positioning method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating a lane-level positioning method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an input image captured by a vehicle-mounted camera according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of an implementation of the lane-level positioning method according to the second embodiment of the present invention;
fig. 5 is a schematic diagram of a segmented image according to the second embodiment of the present invention;
fig. 6 is a schematic structural diagram of a lane-level positioning device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiments of the invention relate to a lane-level positioning method, in particular a positioning method that can determine which lane of a road a vehicle occupies. The method is significant for formulating navigation strategies, and when applied in navigation it enables lane-level path planning and guidance.
Various embodiments of the lane-level positioning method, the lane-level positioning device, and the positioning apparatus according to the embodiments of the present invention will be described in detail below.
Example one
The lane-level positioning method provided in the first embodiment of the present invention is implemented by inputting a single frame image, segmenting the input single frame image according to different road elements, performing vectorization processing on pixels included in a road element of a specified category in the obtained segmented image, and finally outputting a lane-level positioning result according to a result of the vectorization processing and parameters of an imaging device that captures the input image, as shown in fig. 1. The implementation flow of the lane-level positioning method is shown in fig. 2, and comprises the following steps:
S21, road element segmentation is performed on the input image to obtain a segmented image including road elements of the specified categories.
Referring to fig. 3, the input image is mainly an image captured by the vehicle-mounted camera; it may be, for example, a forward-view image or a transformation of it (e.g., a bird's-eye view), and should contain the specified road elements. The embodiments of the invention do not restrict the viewing direction.
In addition, the embodiments of the invention do not limit the manner of acquiring the input image; it may be acquired by any device with shooting capability. As can be seen, the input image captured by the vehicle-mounted camera contains information such as road edge lines and road lane lines, and may also contain road diversion strips, speed bumps, waiting-to-turn areas and similar information. The embodiments of the invention refer to such road-related information in the input image as road elements, and the road elements of the specified categories in step S21 may specifically include road edge lines, road lane lines and road diversion strips.
When the input image is segmented, a machine learning model is used to recognize road elements of the specified categories in the input image, and those road elements are then segmented according to the recognition result. The basic idea is to build an encoder-decoder fully convolutional network model through deep learning and train it on a large amount of image data according to preset rules; for example, the rules can be set according to road element category. When this model processes an input image, it can divide the image into segmented images carrying the features of the different road elements. Referring to the road element segmentation stage in fig. 1, at least one of lane line pixels, edge line pixels and diversion strip pixels may be segmented out.
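As a hedged illustration of what the segmentation stage produces, the sketch below splits a per-pixel label map (the typical output of an encoder-decoder fully convolutional network) into one binary mask per road element category. The class ids are invented for the example and are not values from the patent.

```python
import numpy as np

# Illustrative class ids (assumptions, not values from the patent).
CLASS_IDS = {"lane_line": 1, "edge_line": 2, "diversion_strip": 3}

def split_label_map(label_map: np.ndarray) -> dict:
    """Return a {category: boolean mask} dict, each mask the input's size."""
    return {name: label_map == cid for name, cid in CLASS_IDS.items()}

# tiny 2x3 label map standing in for a real model output
label_map = np.array([[0, 1, 1],
                      [2, 0, 3]])
masks = split_label_map(label_map)
```

The per-category masks are then the inputs to the vectorization stage of step S22.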
The segmented image may be a front view image and its transformed image, a rear view image and its transformed image, or a combination of the foregoing images, and the embodiments of the present invention are not limited thereto.
S22, the pixels belonging to road elements of the specified categories in the segmented image are vectorized to obtain road data including description information for those road elements.
After the segmented images of the different road element categories are obtained, the pixels of road elements of the specified categories in them must be vectorized. The purpose of vectorization is to convert the input image, which exists as raster data, into a segmented image that exists as vector data, so that the road elements in it can be processed conveniently. In addition, the road data obtained by vectorization can also be applied to the creation of a lane-level map.
Vectorization of the unordered pixels belonging to road elements of a specified category in the segmented image can be achieved through strategies such as high-confidence region extraction, curve fitting and prior-knowledge translation, yielding the region corresponding to each road element. In the vectorization stage in fig. 1, for example, the individual lane lines, edge lines and diversion strip regions are vectorized to form curve equations for the left edge, the right edge and all lane lines in the road image. These curve equations can be called the structural description information of the road, and the resulting road data including the description information of the specified road element categories is that structural description. The vectorization procedure is described in detail in the following embodiments.
S23, the lane in which the vehicle is located and the positional relationship between the vehicle and road elements of the specified categories are determined according to the road element description information included in the road data and the parameters of the imaging device that captured the input image.
The parameters of the imaging device fall into extrinsic and intrinsic parameters. The extrinsic parameters mainly comprise the mounting height, pitch angle, yaw angle and roll angle of the imaging device on the vehicle; the intrinsic parameters mainly comprise the camera focal length, optical-center position, distortion parameters and so on.
Determining the lane in which the vehicle is located and the positional relationship of the vehicle to the road elements of the specified category includes two aspects:
and determining the lanes of the vehicles. Determining the pixel position of the vehicle in the input image according to the lane line description information and the road edge description information which are included in the road data and the imaging equipment parameter for shooting the input image; and determining the lane where the vehicle is located according to the position relation between the determined pixel position and the fitted lane line curve equation.
Second, the positional relationship between the vehicle and road elements of the specified categories is determined: the minimum pixel distance from the vehicle's pixel position to a road element of a specified category is determined from that pixel position and the fitted curve equation of the element, and the distance from the vehicle to the element is then obtained from that minimum pixel distance and the spatial distance each pixel represents in the image (i.e., the distance between adjacent pixel points).
After the information of the two aspects is determined, the output positioning result comprises at least one of the following information: the number of lanes on the left side of the vehicle, the number of lanes on the right side of the vehicle, the distance from the edge line of the left side road, the distance from the edge line of the right side road, the distance from the left lane line and the distance from the right lane line.
Referring to the lateral positioning stage in fig. 1, the number of lanes to the left of the vehicle's position, the number to the right, the distance to the edge line, the lane line type and so on can be obtained.
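The lateral positioning described above can be sketched as follows, under simplifying assumptions: a bird's-eye view with a constant metres-per-pixel scale, and lane lines already fitted as polynomials x = f(y) in image coordinates. The function, its field names and the lane-counting convention are illustrative, not the patent's implementation.

```python
import numpy as np

def locate_in_lane(lane_curves, vehicle_xy, m_per_px):
    """Count lane lines left/right of the vehicle and convert the
    nearest pixel gaps to metres (assumed constant bird's-eye scale)."""
    vx, vy = vehicle_xy
    # evaluate each fitted polynomial x = f(y) at the vehicle's image row
    xs = sorted(np.polyval(c, vy) for c in lane_curves)
    left = [x for x in xs if x < vx]
    right = [x for x in xs if x >= vx]
    return {
        # n lines on one side bound n-1 full lanes on that side
        "lanes_left": max(len(left) - 1, 0),
        "lanes_right": max(len(right) - 1, 0),
        "dist_left_line_m": (vx - left[-1]) * m_per_px if left else None,
        "dist_right_line_m": (right[0] - vx) * m_per_px if right else None,
    }

# three straight vertical lines at x = 100, 200, 300 (a two-lane road)
curves = [np.array([0.0, 100.0]), np.array([0.0, 200.0]), np.array([0.0, 300.0])]
res = locate_in_lane(curves, vehicle_xy=(150, 400), m_per_px=0.03)
```

With the vehicle at x = 150 the sketch reports it in the leftmost lane, 1.5 m from each bounding line.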
It should be noted that the embodiments of the invention perform lane-level positioning based on the recognition result of a single frame: each frame is processed separately and yields an independent positioning result. Lane-level positioning results can be output from the left and right sides of the vehicle at the same time; the two results complement each other, improving the accuracy of the result when lane lines are occluded or worn.
In the lane-level positioning method, a segmented image containing road elements of the specified categories is obtained by segmenting the road elements of the input image; the pixels of those road elements are then vectorized to obtain road data comprising their description information; finally, the lane in which the vehicle is located and the positional relationship between the vehicle and the road elements of the specified categories are determined from the road element description information and the parameters of the imaging device that captured the input image. The lane-level positioning method provided by the embodiments of the invention does not depend on a lane-level map: it directly recognizes a single frame acquired by the imaging device and positions the vehicle from the recognition result, so a lane-level positioning result can be obtained even when no lane-level map exists. Moreover, because positioning is based on the recognition result of a single frame and every frame outputs an independent result, accumulated positioning error is reduced to a great extent and the accuracy of lane-level positioning improves. For each frame, the method can output lane-level positioning results from both the left and right sides of the vehicle; the two results complement each other, improving accuracy when lane lines are occluded or worn.
Meanwhile, the embodiments of the invention can also be applied to the making of lane-level maps.
Example two
The second embodiment of the present invention provides a concrete implementation of the lane-level positioning method; the flow is shown in fig. 4 and comprises:
S41, using a machine learning model, road elements of the specified categories in the input image are identified, the road elements included in the input image are segmented according to the identification result, and a segmented image containing those road elements is obtained.
The machine learning model in the embodiments of the invention may be an encoder-decoder fully convolutional network model; any other model capable of the same task may also be used, and the embodiments are not limited in this regard. As shown in fig. 5, when the input image is segmented, road elements of the specified categories, including road edge lines, road lane lines and road diversion strips, are first recognized by the encoder-decoder fully convolutional network model; the input image is then segmented according to the model's recognition result. Note that the objects of segmentation are the pixels of the specified road element categories in the input image, and the result is a segmented image, the same size as the original input image, containing those road elements.
The road element segmentation map shown in fig. 5 is obtained after the original road image has undergone a perspective transformation. Before the road element segmentation, the method may further include performing distortion correction on the original road image and then applying a perspective transform, such as inverse perspective mapping (IPM), to the corrected image. The process of obtaining the segmentation map may then include: segmenting the road-surface information elements in the road image with a semantic segmentation network model to obtain a road-surface element segmentation map, and applying the IPM transform to that map to obtain an IPM segmentation image comprising the road-surface information elements.
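The IPM step amounts to applying a 3x3 homography to (distortion-corrected) pixel coordinates. The sketch below shows the point-mapping arithmetic only; the matrix H is an arbitrary illustrative value, whereas a real H would come from the camera's calibrated intrinsic and extrinsic parameters.

```python
import numpy as np

def ipm_points(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Map (N, 2) forward-view pixel coords to (N, 2) bird's-eye coords."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = homog @ H.T                               # apply the homography
    return mapped[:, :2] / mapped[:, 2:3]              # dehomogenize

# illustrative homography: shift x by 10, scale y by 2
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
out = ipm_points(H, np.array([[5.0, 3.0]]))
```

In practice one would warp the whole segmentation map with this H (e.g. with an image-warping routine) rather than individual points; the arithmetic per pixel is the same.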
S42, the connected components longer than a preset first threshold in the image regions of road elements of the specified categories in the segmented image are determined as high-confidence regions.
When determining the high-confidence regions, the image regions of the specified road element categories in the segmented image contain discontinuities, so a dilation operation is performed on them first. The purpose of dilation is to expand the edges of these image regions and fill the discontinuous parts with pixels.
Then, per road element category and based on the dilated segmented image, it is judged whether the length of a connected component of that category's image region exceeds the first threshold preset for the category; if so, the corresponding image region is set as a high-confidence region.
For road lane lines and road edge lines, the connected component of the image region may be required, for example, to be longer than 1/2 of the segmented image height; for the road diversion strip, the connected component refers specifically to a single-side edge line of the strip, and may be required to be longer than, for example, 1/3 of the segmented image height. The embodiments of the invention do not limit the specific value of the preset first threshold.
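A hedged sketch of step S42's dilation and connected-component length test follows, using a plain BFS flood fill. The 3x3 structuring element, 4-connectivity and the vertical-extent length measure are simplifying assumptions; a production implementation would typically use a morphology library.

```python
import numpy as np
from collections import deque

def dilate(mask: np.ndarray) -> np.ndarray:
    """Cross-shaped binary dilation via shifted copies."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]; out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]
    return out

def high_confidence(mask: np.ndarray, min_len: int) -> np.ndarray:
    """Keep only element pixels whose (dilated) connected component
    spans at least min_len rows; other pixels are discarded."""
    mask = mask.astype(bool)
    grown, keep = dilate(mask), np.zeros_like(mask)
    seen = np.zeros_like(grown)
    for r, c in zip(*np.nonzero(grown)):
        if seen[r, c]:
            continue
        comp, q = [], deque([(r, c)])
        seen[r, c] = True
        while q:                        # BFS flood fill (4-connectivity)
            y, x = q.popleft()
            comp.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < grown.shape[0] and 0 <= nx < grown.shape[1] \
                        and grown[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        rows = [y for y, _ in comp]
        if max(rows) - min(rows) + 1 >= min_len:    # vertical length test
            for y, x in comp:
                keep[y, x] = True
    return keep & mask                  # report on the original pixels only

# a long vertical line broken by a one-pixel gap, plus a spurious blob
mask = np.zeros((10, 12), bool)
mask[0:4, 2] = True
mask[5:10, 2] = True
mask[0, 8] = True
hc = high_confidence(mask, min_len=8)
```

Dilation bridges the one-pixel gap, so the broken line survives as one high-confidence component while the short blob is rejected.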
S43, when road elements in the high-confidence region are determined to be missing according to the characteristic parameter reference values of the road elements, the missing elements are supplemented according to a set translation rule.
Owing to factors such as the imaging device and the road itself, the specified road elements in the high-confidence region obtained by vectorization may be indistinct; for example, a worn lane line may cause inaccurate positioning. For a lane line that is too worn, a new lane line must be generated to resolve the inaccuracy caused by the wear. The translation process is as follows:
and if the distance between two adjacent same road elements in the high confidence region is larger than a set second threshold, adding the specified road element as a reference according to a set translation rule between the two adjacent same road elements based on one of the two adjacent road elements and the characteristic parameter reference value of the road element until the distribution condition of the road element is determined to meet the requirement according to the characteristic parameter reference value of the road element. The translation process will be described in detail below with a specific example:
From prior information, the width of each lane is approximately 3 m. In general, during vectorization of the input image, the planar image captured by the imaging device needs to be converted into a bird's-eye view; after this transformation the lane width changes correspondingly. Let the width of the specified-category road element after the bird's-eye-view transformation be w. If the spacing between specified-category road elements in the high-confidence region is greater than 1.5w, any line segment of a specified-category road element is chosen as a reference line and translated by w, the translation direction being the direction that reduces the spacing between the specified road elements. It is then judged whether the number of pixels of the specified-category road element in the region formed by the translation is greater than 1/5 of the segmented-image height in pixels; if so, a line segment through the pixel center points of the specified-category road element in that region is added. This is repeated until the spacing between specified-category road elements in the translated region is less than 1.5w.
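The translation rule can be sketched as follows, under simplifying assumptions: each lane line is reduced to a single x-position in the bird's-eye view, and the pixel-count check (more than 1/5 of the segmented-image height) is omitted. `w` is the post-transformation lane width from the text; all names are illustrative.

```python
def supplement_lines(xs, w):
    """Fill gaps between detected lane lines by repeated translation.

    xs -- x-positions of detected lane-line segments in the bird's-eye view
    w  -- lane width after the bird's-eye-view transformation

    Whenever two neighbouring lines are more than 1.5*w apart, a new line
    is added at reference + w (translating toward the gap), repeating
    until every gap is below 1.5*w, mirroring the loop in the text.
    """
    xs = sorted(xs)
    out = [xs[0]]
    for x in xs[1:]:
        while x - out[-1] > 1.5 * w:
            out.append(out[-1] + w)   # translate the reference line by w
        out.append(x)
    return out
```

With a 3-pixel lane width, two lines 10 pixels apart would gain two supplemented lines between them, after which every gap is at most 1.5w.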
And S44, extracting the road elements of the specified type in the high confidence region, and fitting a curve equation of the road elements of the specified type in the high confidence region under the image coordinate system.
Curve fitting is performed by sampling: road elements of the specified category in the high-confidence region are sampled at a preset sampling interval, and curve fitting is performed on the collected sample points to obtain a curve equation, in the image coordinate system, of the specified-category road elements in the high-confidence region.
When the specified-category road elements in the high-confidence region are sampled, the sampling objects are the road lane line regions and road edge line regions in the high-confidence region; the road diversion regions are not sampled. The sampling interval may be 1/20 or 1/30 of the height of the segmented image; the embodiment of the present invention does not limit its specific value, as long as the number of sample points suffices to fit the cubic curve accurately.
Wherein, the general formula of the cubic curve is y = ax³ + bx² + cx + d. In theory, only 4 sets of sampled data are needed to solve for the coefficients of the cubic formula. In practice, however, more sets of sampled data are usually selected for fitting, for example 8 or 10 sets, to reduce errors generated in the calculation as much as possible.
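A least-squares cubic fit over more than four sample points, as the paragraph describes, might look like the following pure-Python sketch; in practice a library routine such as `numpy.polyfit` would typically be used instead.

```python
def fit_cubic(points):
    """Least-squares fit of y = a*x**3 + b*x**2 + c*x + d.

    points -- list of (x, y) sample points; using more than 4 points
              (e.g. 8 or 10) averages out noise, as the text notes.
    Returns [a, b, c, d], solved via the 4x4 normal equations.
    """
    rows = [[x ** 3, x ** 2, x, 1.0] for x, _ in points]
    ys = [y for _, y in points]
    n, m = 4, len(rows)
    # Normal equations: (A^T A) coef = A^T y
    M = [[sum(rows[k][i] * rows[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    v = [sum(rows[k][i] * ys[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (v[r] - sum(M[r][c] * coef[c]
                              for c in range(r + 1, n))) / M[r][r]
    return coef
```

Fitting 8 noise-free samples of a known cubic recovers its coefficients exactly, which is a quick sanity check for the solver.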
The above steps S42-S44 implement the vectorization of the pixels included in the specified-category road elements in the segmented image, obtaining road data that includes description information of the specified-category road elements; step S43 is an optional step.
And S45, determining the lane where the vehicle is located and the position relation between the vehicle and the specified road element according to the road element description information included in the road data and the imaging device parameter for shooting the input image.
For determining the lane where the vehicle is located and the position relationship between the vehicle and the specified road element, refer to the first embodiment, and details are not repeated here.
For example, for an image with a size of 14 inches and a resolution of 1024 × 768, the image area is 285.7 mm × 214.3 mm, so the spatial distance per pixel is 285.7/1024 ≈ 214.3/768 ≈ 0.279 mm. The pixel position (px, py) of the vehicle in the input image can be determined from the imaging device parameters; the pixel distance from this position to the curve equation of the specified lane line is then determined, and the actual distance from the vehicle to the specified lane line is obtained by multiplying the determined pixel distance by the spatial distance of each pixel.
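The pixel-to-metric conversion in the example can be sketched as below. The parameterisation of the fitted lane-line curve as x = f(y) (one horizontal position per image row) and the purely horizontal pixel distance are illustrative assumptions; the constant 285.7/1024 ≈ 0.279 mm per pixel comes from the 14-inch, 1024 × 768 example above.

```python
MM_PER_PIXEL = 285.7 / 1024  # spatial distance per pixel from the example

def vehicle_to_line_distance_mm(px, py, curve, mm_per_pixel=MM_PER_PIXEL):
    """Distance in mm from the vehicle's pixel position (px, py) to a
    fitted lane-line curve x = a*y**3 + b*y**2 + c*y + d (hypothetical
    parameterisation), approximated along the image row through py."""
    a, b, c, d = curve
    line_x = a * py ** 3 + b * py ** 2 + c * py + d
    pixel_dist = abs(px - line_x)
    return pixel_dist * mm_per_pixel   # pixel distance times mm per pixel
```

For a vertical line at x = 100 and a vehicle at px = 200, the 100-pixel gap maps to roughly 27.9 mm at this resolution.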
After the actual distances to the specified-category road elements are calculated, the distances from the vehicle to the road edge lines and road lane lines are known, and lane-level positioning is finally achieved.
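Counting fitted boundaries on each side of the vehicle's pixel position then yields lane-level outputs such as the number of lanes to the left and right of the vehicle. A hypothetical sketch, assuming the x-positions of all fitted lane boundaries at the vehicle's image row are already known:

```python
def lane_position(px, line_xs):
    """Return (lanes_left, lanes_right) for a vehicle at pixel x = px.

    line_xs -- x-positions of all fitted lane boundaries (edge lines and
               lane lines) evaluated at the vehicle's image row.
    """
    left = sum(1 for x in line_xs if x < px)    # boundaries left of vehicle
    right = sum(1 for x in line_xs if x > px)   # boundaries right of vehicle
    # n boundaries on one side enclose n - 1 whole lanes on that side
    return max(left - 1, 0), max(right - 1, 0)
```

With boundaries at x = 0, 100, 200, 300, 400 and the vehicle at px = 250, there are two full lanes to its left and one to its right.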
Based on the same inventive concept, embodiments of the present invention further provide a lane-level positioning device and a positioning apparatus. Because the principle by which these solve the problem is similar to that of the lane-level positioning method, their implementation may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 6, an embodiment of the present invention provides a lane-level positioning apparatus, including:
a multi-element segmentation module 61, configured to perform road element segmentation on an input image to obtain a segmented image including a road element of a specified category; the specified category of road elements includes at least one of: road edge lines, road lane lines and road diversion strips;
a vectorization processing module 62, configured to perform vectorization processing on pixels included in a road element of a specified category in a split image, so as to obtain road data including road element description information of the specified category;
and the positioning module 63 is configured to determine a lane where the vehicle is located and a position relationship between the vehicle and the specified road element according to the road element description information included in the road data and the imaging device parameter for capturing the input image.
In an optional embodiment, the multi-element segmentation module 61 is specifically configured to: the method includes recognizing road elements of a specified category in an input image by using a machine learning model, and segmenting the road elements of the specified category included in the input image according to a recognition result.
In an optional embodiment, the vectorization processing module 62 is specifically configured to: determining a connected part larger than a preset first threshold value in an image area of a road element of a specified category in the segmented image as a high-confidence area; and extracting the road elements of the specified type in the high confidence region, and fitting a curve equation of the road elements of the specified type in the high confidence region in an image coordinate system.
In an optional embodiment, the vectorization processing module 62 is further configured to: and supplementing the road elements according to a set translation rule when determining that the road elements in the high confidence region are missing according to the characteristic parameter reference values of the road elements.
In an optional embodiment, the vectorization processing module 62 is specifically configured to: and if the distance between two adjacent same road elements in the high confidence region is larger than a set second threshold, adding the specified road element as a reference according to a set translation rule between the two adjacent same road elements based on one of the two adjacent road elements and the characteristic parameter reference value of the road element until the distribution condition of the road element is determined to meet the requirement according to the characteristic parameter reference value of the road element.
In an optional embodiment, the vectorization processing module 62 is specifically configured to: performing dilation operation on image areas of road elements of a specified category in the divided images respectively; judging whether the length of a connected part of the image area of the road element of each category is larger than a first threshold preset by the road element of the category according to the category of the road element on the basis of the segmentation image after the expansion operation; if yes, setting the corresponding image area as a high confidence area.
In an optional embodiment, the vectorization processing module 62 is specifically configured to: sampling road elements of a specified category in the high confidence region according to a preset sampling interval; and performing curve fitting on the acquired sampling points to obtain a curve equation of the road elements of the specified category in the high confidence region under the image coordinate system.
In an alternative embodiment, the positioning module 63 is specifically configured to: determining the pixel position of the vehicle in the input image according to the lane line description information and the road edge description information which are included in the road data and the imaging equipment parameter for shooting the input image; determining a lane where the vehicle is located according to the position relation between the determined pixel position and the fitted lane line curve equation; and determining the minimum pixel distance from the pixel position to the road element of the specified type according to the determined pixel position and the fitted curve equation of the road element of the specified type, and determining the distance from the vehicle to the road element of the specified type according to the determined minimum pixel distance and the space distance corresponding to each pixel point in the image.
In an optional embodiment, the positioning module 63 is specifically configured to determine and output at least one of the following information: the number of lanes on the left side of the vehicle, the number of lanes on the right side of the vehicle, the distance from the edge line of the left side road, the distance from the edge line of the right side road, the distance from the left lane line and the distance from the right lane line.
Embodiments of the present invention also provide a computer storage medium having computer instructions stored thereon, where the instructions, when executed by a processor, implement the lane-level positioning method described above.
An embodiment of the present invention further provides a positioning apparatus, including: a memory and a processor; the memory stores a computer program, and the program can realize the lane-level positioning method when being executed by the processor.
According to the method and the device provided by the embodiments of the present invention, road element segmentation is first performed on an input image to obtain a segmented image containing road elements of a specified category; the pixels of the specified-category road elements in the segmented image are then vectorized to obtain road data including description information of the specified-category road elements; finally, the lane in which the vehicle is located and the positional relationship between the vehicle and the specified-category road elements are determined from this description information and the parameters of the imaging device that captured the input image.

The lane-level positioning method provided by the embodiments does not depend on a lane-level map; instead, it directly recognizes a single frame acquired by the imaging device and positions the vehicle from the recognition result. By this method, a lane-level positioning result can be obtained from a single frame even when no lane-level map exists.

In addition, because the method performs lane-level positioning on the recognition result of a single frame and each frame outputs an independent positioning result, accumulated positioning error is reduced to a great extent and lane-level positioning accuracy is improved. For each frame, the method can simultaneously output lane-level positioning results from both the left and right sides of the vehicle; the two results complement each other, improving accuracy in cases such as occluded or worn lane lines.
Meanwhile, the embodiments of the present invention can also be applied to the production of lane-level maps.
The lane-level positioning method provided by the embodiments of the present invention can also be used in autonomous-driving scenarios to accurately position an autonomous vehicle at lane level, enabling precise control of the vehicle's driving path with radar assistance, as well as accurate path guidance and lane-change guidance. In addition, the method can assist lane-change guidance during navigation.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (13)
1. A lane-level positioning method, comprising:
performing road element segmentation on an input image to obtain a segmented image containing road elements of a specified category;
vectorizing pixels included in road elements of a specified type in a segmentation image to obtain road data including road element description information of the specified type;
and determining the lane where the vehicle is located and the position relation between the vehicle and the specified road element according to the road element description information included in the road data and the imaging device parameter for shooting the input image.
2. The method of claim 1, wherein the road element segmentation of the input image comprises:
the method includes recognizing road elements of a specified category in an input image by using a machine learning model, and segmenting the road elements of the specified category included in the input image according to a recognition result.
3. The method according to claim 1, wherein the vectorizing processing of the pixels included in the road elements of the specified category in the divided image includes:
determining a connected part larger than a preset first threshold value in an image area of a road element of a specified category in the segmented image as a high-confidence area;
and extracting the road elements of the specified type in the high confidence region, and fitting a curve equation of the road elements of the specified type in the high confidence region in an image coordinate system.
4. The method of claim 3, wherein upon determining a high confidence region, further comprising:
and supplementing the road elements according to a set translation rule when determining that the road elements in the high confidence region are missing according to the characteristic parameter reference values of the road elements.
5. The method according to claim 4, wherein the supplementing the road element according to the set translation rule when determining that the road element in the high confidence region is missing according to the characteristic parameter reference value of the road element comprises:
and if the distance between two adjacent same road elements in the high confidence region is larger than a set second threshold, adding the specified road element as a reference according to a set translation rule between the two adjacent same road elements based on one of the two adjacent road elements and the characteristic parameter reference value of the road element until the distribution condition of the road element is determined to meet the requirement according to the characteristic parameter reference value of the road element.
6. The method according to claim 3, wherein the determining, as a high-confidence region, a connected portion larger than a preset first threshold value in an image region of a road element of a specified category in the segmented image comprises:
performing dilation operation on image areas of road elements of a specified category in the divided images respectively;
judging whether the length of a connected part of the image area of the road element of each category is larger than a first threshold preset by the road element of the category according to the category of the road element on the basis of the segmentation image after the expansion operation; if yes, setting the corresponding image area as a high confidence area.
7. The method of claim 3, wherein fitting a curve equation for a specified class of road elements in the high confidence region in an image coordinate system comprises:
sampling road elements of a specified category in the high confidence region according to a preset sampling interval;
and performing curve fitting on the acquired sampling points to obtain a curve equation of the road elements of the specified category in the high confidence region under the image coordinate system.
8. The method of claim 1, wherein determining the lane in which the vehicle is located and the positional relationship of the vehicle to the specified road element based on the road element description information included in the road data and the imaging device parameter that captured the input image comprises:
determining the pixel position of the vehicle in the input image according to the lane line description information and the road edge description information which are included in the road data and the imaging equipment parameter for shooting the input image;
determining a lane where the vehicle is located according to the position relation between the determined pixel position and the fitted lane line curve equation; and
and determining the minimum pixel distance from the pixel position to the road element of the specified type according to the determined pixel position and the fitted curve equation of the road element of the specified type, and determining the distance from the vehicle to the road element of the specified type according to the determined minimum pixel distance and the space distance corresponding to each pixel point in the image.
9. The method of claim 1, wherein determining the lane in which the vehicle is located and the positional relationship of the vehicle to the specified road element comprises determining and outputting at least one of:
the number of lanes on the left side of the vehicle, the number of lanes on the right side of the vehicle, the distance from the edge line of the left side road, the distance from the edge line of the right side road, the distance from the left lane line and the distance from the right lane line.
10. The method of any of claims 1-9, wherein the specified category of road elements includes at least one of: road edge line, road lane line, road diversion strip.
11. A lane-level locating device, comprising:
the element segmentation module is used for performing road element segmentation on the input image to obtain a segmented image containing road elements of a specified category;
the vectorization processing module is used for carrying out vectorization processing on pixels included in the road elements of the specified type in the divided images to obtain road data including the road element description information of the specified type;
and the positioning module is used for determining the lane where the vehicle is located and the position relation between the vehicle and the specified road element according to the road element description information included in the road data and the imaging equipment parameter for shooting the input image.
12. A computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, implement the lane-level localization method according to any one of claims 1-10.
13. A positioning apparatus, comprising: a memory and a processor; wherein the memory stores a computer program which, when executed by the processor, is capable of implementing a lane-level localization method as claimed in any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910075113.0A CN111488762A (en) | 2019-01-25 | 2019-01-25 | Lane-level positioning method and device and positioning equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910075113.0A CN111488762A (en) | 2019-01-25 | 2019-01-25 | Lane-level positioning method and device and positioning equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111488762A true CN111488762A (en) | 2020-08-04 |
Family
ID=71791352
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910075113.0A Pending CN111488762A (en) | 2019-01-25 | 2019-01-25 | Lane-level positioning method and device and positioning equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111488762A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112309233A (en) * | 2020-10-26 | 2021-02-02 | 北京三快在线科技有限公司 | Road boundary determining and road segmenting method and device |
CN116189145A (en) * | 2023-02-15 | 2023-05-30 | 清华大学 | Extraction method, system and readable medium of linear map elements |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106156723A (en) * | 2016-05-23 | 2016-11-23 | 北京联合大学 | A kind of crossing fine positioning method of view-based access control model |
CN108216229A (en) * | 2017-09-08 | 2018-06-29 | 北京市商汤科技开发有限公司 | The vehicles, road detection and driving control method and device |
CN108596165A (en) * | 2018-08-21 | 2018-09-28 | 湖南鲲鹏智汇无人机技术有限公司 | Road traffic marking detection method based on unmanned plane low latitude Aerial Images and system |
-
2019
- 2019-01-25 CN CN201910075113.0A patent/CN111488762A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106156723A (en) * | 2016-05-23 | 2016-11-23 | 北京联合大学 | A kind of crossing fine positioning method of view-based access control model |
CN108216229A (en) * | 2017-09-08 | 2018-06-29 | 北京市商汤科技开发有限公司 | The vehicles, road detection and driving control method and device |
CN108596165A (en) * | 2018-08-21 | 2018-09-28 | 湖南鲲鹏智汇无人机技术有限公司 | Road traffic marking detection method based on unmanned plane low latitude Aerial Images and system |
Non-Patent Citations (1)
Title |
---|
Wang Jia'en et al.: "Three-lane detection algorithm for vehicle driver-assistance systems" * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112309233A (en) * | 2020-10-26 | 2021-02-02 | 北京三快在线科技有限公司 | Road boundary determining and road segmenting method and device |
CN116189145A (en) * | 2023-02-15 | 2023-05-30 | 清华大学 | Extraction method, system and readable medium of linear map elements |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109902637B (en) | Lane line detection method, lane line detection device, computer device, and storage medium | |
EP2570993B1 (en) | Egomotion estimation system and method | |
CN102646343B (en) | Vehicle detection apparatus | |
WO2020097840A1 (en) | Systems and methods for correcting a high-definition map based on detection of obstructing objects | |
US20150279021A1 (en) | Video object tracking in traffic monitoring | |
CN111191611B (en) | Traffic sign label identification method based on deep learning | |
US20150178922A1 (en) | Calibration device, method for implementing calibration, and camera for movable body and storage medium with calibration function | |
CN109840463B (en) | Lane line identification method and device | |
US11164012B2 (en) | Advanced driver assistance system and method | |
CN106485182A (en) | A kind of fuzzy Q R code restored method based on affine transformation | |
US20230005278A1 (en) | Lane extraction method using projection transformation of three-dimensional point cloud map | |
CN111738033B (en) | Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal | |
JP2014009975A (en) | Stereo camera | |
CN111738032A (en) | Vehicle driving information determination method and device and vehicle-mounted terminal | |
KR20180098945A (en) | Method and apparatus for measuring speed of vehicle by using fixed single camera | |
CN110751040B (en) | Three-dimensional object detection method and device, electronic equipment and storage medium | |
CN111488762A (en) | Lane-level positioning method and device and positioning equipment | |
CN110909620A (en) | Vehicle detection method and device, electronic equipment and storage medium | |
JP7191671B2 (en) | CALIBRATION DEVICE, CALIBRATION METHOD | |
CN111428538B (en) | Lane line extraction method, device and equipment | |
CN114037977B (en) | Road vanishing point detection method, device, equipment and storage medium | |
CN108389177B (en) | Vehicle bumper damage detection method and traffic safety early warning method | |
CN115018926A (en) | Method, device and equipment for determining pitch angle of vehicle-mounted camera and storage medium | |
CN111428537B (en) | Method, device and equipment for extracting edges of road diversion belt | |
KR20180071552A (en) | Lane Detection Method and System for Camera-based Road Curvature Estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 40034901 Country of ref document: HK |