CN115035138A - Road surface gradient extraction method based on crowdsourcing data - Google Patents
Publication number: CN115035138A (application CN202210955980.5A, China; legal status: Granted — the status is an assumption derived from Google Patents and is not a legal conclusion)
Classifications:
- G06T7/11 — Image analysis; region-based segmentation
- G01C11/04 — Photogrammetry or videogrammetry; interpretation of pictures
- G06T17/00 — Three-dimensional [3D] modelling
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06V10/44 — Local feature extraction, e.g. by detecting edges, contours, corners or intersections
- Y02A30/60 — Planning or developing urban green infrastructure
Abstract
The invention discloses a road surface gradient extraction method based on crowdsourcing data, comprising the following steps. Step S1: acquire crowdsourced sequence images, extract road-marking information based on computer vision, determine the vanishing-point positions of the flat plane and the inclined plane, and calculate road gradient information from the vanishing-point coordinates. Step S2: acquire crowdsourced trajectory data and calculate road gradient information from the ratio of the GPS horizontal speed to the vertical speed. Step S3: fuse the gradient obtained in step S1 with the gradient obtained in step S2. Step S4: acquire the fused road gradient and output the accurate gradient. The method is based on crowdsourced trajectory data and sequence-image information and addresses gradient extraction for high-precision maps. On one hand, crowdsourced data provide low-cost, wide-area coverage, so large-scale road gradient information can be extracted efficiently and the construction cost of a high-precision map is reduced; on the other hand, the method fuses the multi-vanishing-point information of the sequence images with the GPS speed information, enabling accurate calculation of the gradient that meets the accuracy requirement of a high-precision map.
Description
Technical Field
The invention relates to the technical field of photogrammetry, in particular to a road surface gradient extraction method based on crowdsourcing data.
Background
A high-precision map provides lane-level navigation services and beyond-line-of-sight safety-assistance information for autonomous vehicles and is an indispensable part of automated driving. Gradient information is one kind of driving-assistance information in a high-precision map: an autonomous vehicle must apply different accelerations on roads of different gradients to keep the vehicle stable and safe, achieve optimal full-speed-range control, save fuel, and protect the environment.
In the prior art, gradient information for high-precision maps is generally extracted by one of three methods. The first uses a high-precision laser radar to extract point clouds of the flat and sloped surfaces, fits plane and slope equations, and then calculates the gradient; this method is costly and inefficient and cannot update the gradient information of a high-precision map over a large area in real time. The second calculates the gradient from vehicle-mounted GPS information and barometric-pressure-sensor readings, but it is easily affected by changes in the external environment, and the accuracy of the extracted gradient cannot meet the requirement of a high-precision map. The third calculates the gradient with an acceleration sensor and a vehicle dynamics model; it places high demands on sensor accuracy, the extracted gradient is strongly affected by sensor error, and its accuracy likewise cannot meet the requirement of a high-precision map.
Disclosure of Invention
The method is based on low-cost crowdsourcing track data and crowdsourcing sequence image information, and solves the problem of high-precision map slope extraction. On one hand, crowdsourcing data can realize low-cost and large-range data coverage, efficiently extract large-range road gradient information and reduce the construction cost of a high-precision map; on the other hand, the method combines the multi-vanishing-point information of the crowdsourcing sequence image and the GPS speed information, can realize accurate calculation of the gradient information, and meets the accuracy requirement of a high-precision map.
In order to achieve the above object, the present invention provides a road surface gradient extraction method based on crowdsourcing data, which is characterized by comprising the following steps:
step S1, acquiring crowdsourcing sequence images, extracting road marking information based on computer vision, determining vanishing point positions of a plane and an inclined plane, and primarily calculating road gradient information based on coordinate positions of vanishing points;
step S2, obtaining crowdsourcing trajectory data, and calculating road gradient information based on the ratio of the GPS horizontal speed to the vertical speed;
step S3, fusing the gradient calculated from the sequence images with the gradient calculated from the trajectory data;
and step S4, acquiring the fused road gradient and outputting the accurate gradient.
Further, the step S1 specifically includes the following sub-steps:
s11, inputting sequence image data;
s12, acquiring a road area at the bottom of the image, and dividing the road area on the image into a near area and a far area;
s13, respectively carrying out edge point extraction on two segmentation areas of the image by using a width limitation and gradient symmetry algorithm;
s14, constructing a voting space detection lane line from the edge points of the local area to extract the lane line;
s15, calculating the image coordinates of the vanishing points on the Gaussian sphere using the lane-line extraction result of the previous step;
s16, constructing a three-dimensional model between the vanishing point and the road gradient based on analytic-photogrammetry perspective mapping analysis, and calculating the road gradient from this model.
Further, the step S13 is specifically:
calculating the gradient of each pixel according to formula (1) using an adaptive sliding window whose width equals the lane-line width, and selecting pixel points that form a peak-valley gradient pair as candidate lane-line edge points;
where E_j is the gradient value, S is the sliding-window width, j is the pixel position, I is the pixel gray level, and k is the position of a pixel within the sliding window.
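Formula (1) itself is not reproduced in the source text, so the sketch below assumes one plausible width-limited, symmetric form: a box-difference gradient over a window of lane-line width, with candidate edges taken where a positive peak is followed within one lane width by a negative valley. The function name and the exact gradient expression are illustrative assumptions, not the patent's own formula.

```python
import numpy as np

def edge_candidates(row, S, min_gap=2, max_gap=40):
    """Find candidate lane-line edge pixels in one image row.

    E_j is computed as a box-difference gradient over a sliding window of
    width S (an assumed form of formula (1)); a lane line produces a
    positive peak (dark -> bright) followed by a negative valley
    (bright -> dark), and such peak-valley pairs are kept as candidates.
    """
    row = np.asarray(row, dtype=float)
    E = np.zeros_like(row)
    for j in range(S, len(row) - S):
        # mean intensity ahead of pixel j minus mean intensity behind it
        E[j] = row[j:j + S].mean() - row[j - S:j].mean()
    peaks = [j for j in range(1, len(E) - 1)
             if E[j] > 0 and E[j] >= E[j - 1] and E[j] >= E[j + 1]]
    valleys = [j for j in range(1, len(E) - 1)
               if E[j] < 0 and E[j] <= E[j - 1] and E[j] <= E[j + 1]]
    # pair each peak with a valley roughly one lane-line width away
    return [(p, v) for p in peaks for v in valleys if min_gap <= v - p <= max_gap]
```

On a synthetic row with a bright 11-pixel stripe on a dark background, the pair of stripe edges is recovered as a single peak-valley candidate.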
Further, the step S14 is specifically:
performing a projection transformation on all candidate edge points in the image space using formula (2) to recover the parallelism of the lane lines on both sides, so that the intersections of a lane line with the projection-space boundary lie at the bottom and the top; the two points P_0 and P_1 at the bottom and top of the image uniquely define a straight line in the image space; accordingly, at a fixed height the line parameters can be represented by the distance L from the image edge and the lateral deviation D between the upper and lower end points;
in formula (2) the quantities are, in order: the horizontal coordinate of a candidate edge point in row i; the first horizontal coordinate of row i; the pixel width of the detection area in row i; the width of the detection grid; and the coordinates obtained by projecting the candidate edge point;
all straight lines through any candidate edge point in the detection area are projected into a voting space and voted on; the distance L from the image edge and the lateral deviation D between the upper and lower end points together form the voting space of lane-line features; extreme points are searched for, and candidate lane lines are extracted;
the parameters and residuals of the fitted straight line of each candidate lane line are calculated by least squares; when the residual is below a given threshold, the candidate lane line is taken as a robust segment, and the remaining feature points belonging to the same line segment are determined from the parameters.
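The (L, D) voting described above can be sketched as follows; every candidate edge point votes for each line, parameterized by its bottom intersection L and its top-to-bottom lateral deviation D, that passes through it, and the strongest accumulator cell yields a candidate lane line. All names and the discretization are illustrative assumptions.

```python
import numpy as np

def vote_lines(points, height, d_range=range(-10, 11), l_bins=200):
    """Accumulate votes in an (L, D) space in the spirit of formula (2).

    A line is fixed by its bottom intersection L and the lateral deviation
    D between its top and bottom end points; each edge point (x, y) votes
    for every (L, D) pair consistent with it.
    """
    ds = list(d_range)
    acc = np.zeros((l_bins, len(ds)), dtype=int)
    for (x, y) in points:
        t = (height - 1 - y) / (height - 1)  # 0 at the bottom row, 1 at the top
        for di, d in enumerate(ds):
            L = int(round(x - d * t))
            if 0 <= L < l_bins:
                acc[L, di] += 1
    # the strongest (L, D) cell is the best lane-line candidate
    L_best, di_best = np.unravel_index(np.argmax(acc), acc.shape)
    return L_best, ds[di_best], acc
```

Points placed exactly on a line with L = 50 and D = 5 all vote into the same cell, which then dominates the accumulator.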
Further, the step S15 is specifically:
calculating the image coordinates of the vanishing points on the Gaussian sphere from the lane-line extraction result of the previous step; each lane line extracted from the image corresponds, via formula (3), to a great circle on the Gaussian sphere; the two great circles of two parallel lines in the image space intersect at a point on the Gaussian sphere; the ray from the sphere center to this intersection gives the vanishing-point direction, which is computed by singular value decomposition using formula (4), and the image coordinates of the vanishing point are then obtained with formula (5);
where n is the normal vector of the great circle corresponding to a lane line, C_p is the camera intrinsic matrix, P_0 and P_1 are the lane-line end points, D_v is the vanishing-point direction, A is the set of normal vectors of the Gaussian-sphere great circles of the lane lines extracted from the image, n_N is the normal vector of the N-th lane line, and v is the image coordinates of the vanishing point.
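The Gaussian-sphere construction above can be sketched with numpy; the intrinsic matrix K stands in for the camera parameters C_p, and formulas (3)-(5), which are not reproduced in the source text, are paraphrased as: great-circle normal from the back-projected end points, common direction from the smallest singular vector, image point by reprojection.

```python
import numpy as np

def vanishing_point(lines, K):
    """Vanishing point of image line segments via the Gaussian sphere.

    lines: iterable of ((x0, y0), (x1, y1)) pixel end points P_0, P_1.
    K: 3x3 camera intrinsic matrix (the patent's C_p).
    Each line maps to a great circle with normal n = (K^-1 p0) x (K^-1 p1);
    the common intersection direction D_v is the right-singular vector of
    the stacked normals with the smallest singular value, and K @ D_v,
    dehomogenized, gives the vanishing point in the image.
    """
    Kinv = np.linalg.inv(K)
    A = []
    for (x0, y0), (x1, y1) in lines:
        r0 = Kinv @ np.array([x0, y0, 1.0])
        r1 = Kinv @ np.array([x1, y1, 1.0])
        n = np.cross(r0, r1)            # great-circle normal on the sphere
        A.append(n / np.linalg.norm(n))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    D_v = Vt[-1]                        # direction nearest all great circles
    p = K @ D_v
    return p[:2] / p[2]                 # image coordinates of the vanishing point
```

Two segments drawn on lines that converge at pixel (640, 360) recover that point.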
Further, the step S16 is specifically:
constructing a three-dimensional model between the vanishing point and the road gradient based on analytic-photogrammetry perspective mapping analysis; according to the perspective transformation model of the camera, straight lines parallel to a given plane P_l converge to a point in the image space, called the vanishing point, and by the propagation of light the line joining the camera optical center and the vanishing point is parallel to the corresponding plane P_l, so the pitch angle between the camera's principal optical axis and the plane P_l can be expressed by formula (6);
when the road plane has a slope, the road portion of the image is formed by two different planes, the near plane S_near and the far plane S_far, whose lane lines intersect in the image space at the near vanishing point V_near and the far vanishing point V_far respectively; the road surface gradient is then calculated according to formula (7);
in formula (7) the quantities are the road surface gradient calculated from the sequence-image data and the vertical image coordinates of the vanishing points of the near and far regions respectively.
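Formulas (6) and (7) are not reproduced in the source text; the sketch below assumes the common reading of this geometry: the pitch of a plane follows from its vanishing-point ordinate and the focal length, and the road slope follows from the ordinate offset between the near and far vanishing points. Both function names and the exact expressions are assumptions, not the patent's formulas.

```python
import math

def plane_pitch(v, f):
    """Assumed formula (6): pitch between the optical axis and a plane whose
    vanishing point lies v pixels (ordinate offset) from the axis, for a
    focal length of f pixels."""
    return math.atan(v / f)

def road_slope(v_near, v_far, f):
    """Assumed formula (7): road slope as the angle between the near and far
    road planes, from the ordinate offset of their vanishing points."""
    return math.atan((v_near - v_far) / f)
```

With values of the order used in the embodiment (f near 1.01e+03, ordinates 512 and 477) this reading gives roughly 0.035 rad, the same order as the value reported there, so the patent's exact formula may differ only in detail.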
Further, the step S2 specifically includes the following sub-steps:
s21, inputting crowdsourcing track data;
s22, searching corresponding GPS positioning information and three-dimensional speed information according to the timestamp corresponding to the image;
s23, calculating the gradient from the arctangent of the ratio of the GPS vertical speed to the horizontal speed.
Further, the step S23 specifically includes:
calculating the road gradient according to formula (8);
where the result is the road gradient calculated from the GPS speed information, V_Z is the GPS speed in the vertical direction, and V_X and V_Y are the lateral and longitudinal GPS speeds in the horizontal plane.
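Formula (8), described above as the arctangent of the vertical GPS speed over the horizontal speed, can be written directly (the function name is illustrative):

```python
import math

def gps_grade(v_x, v_y, v_z):
    """Formula (8): road grade as the arctangent of the GPS vertical speed
    V_Z over the horizontal speed magnitude formed from V_X and V_Y."""
    return math.atan2(v_z, math.hypot(v_x, v_y))
```

For example, a vertical speed equal to the horizontal speed magnitude yields a 45-degree grade.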
Further, the step S3 specifically includes the following sub-steps:
S32, solving a gradient change control point;
step S33, constructing a road gradient model;
and S34, calculating a gradient model.
Further, the step S32 is specifically:
when the gradient detected in the image exceeds a threshold, calculating the position of the abrupt gradient-change point from the image and its corresponding GPS positioning information using formula (9), and setting this position as a gradient control point;
where x_lon, y_lat and z_height are the longitude, latitude and altitude of the abrupt gradient-change point in the world coordinate system; the remaining quantities in formula (9) are the camera pitch angle and the horizontal and vertical coordinates of the abrupt change point in the image coordinate system; and x_gps, y_gps and h are the GPS longitude, latitude and camera height time-aligned with the picture.
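Formula (9) is not reproduced in the source text. The sketch below is a hypothetical ground-plane geolocation of the slope-change pixel: the ray angle below the horizon from the camera pitch and pixel offset, the forward distance from the camera height, and a metric offset applied to the time-aligned GPS fix. The flat-ground assumption, the local metric frame for x_gps and y_gps, and all names are assumptions, not the patent's model.

```python
import math

def ground_point(u, v, f, cx, cy, pitch, h, x_gps, y_gps, heading):
    """Hypothetical sketch of formula (9): locate a slope-change point seen
    at pixel (u, v) on the ground plane and offset the time-aligned GPS
    position (assumed here to be in a local metric frame) by the result."""
    # ray angle below the horizon: camera pitch plus the angular offset of
    # the pixel from the principal point (cx, cy)
    beta = pitch + math.atan((v - cy) / f)
    d = h / math.tan(beta)             # forward ground distance in meters
    lateral = d * (u - cx) / f         # lateral ground offset in meters
    x = x_gps + d * math.cos(heading) - lateral * math.sin(heading)
    y = y_gps + d * math.sin(heading) + lateral * math.cos(heading)
    return x, y, 0.0                   # height relative to the local road plane
```

With the embodiment's camera height of 1.65 m and a 45-degree ray, the point lands 1.65 m ahead of the camera.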
Further, the step S33 is specifically:
constructing the gradient model of the road using formula (10),
assuming that the gradient change rate of the road is constant; in formula (10) the quantities are the road gradient, the road-length variable x, the gradient change rate, and the change constant T.
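Under the constant-change-rate assumption stated above, a gradient model in the spirit of formula (10) is linear in the distance along the road; the formula itself is not reproduced in the source, so the names and form below are illustrative.

```python
def road_gradient(x, s0, T):
    """Gradient at distance x along the road, starting from gradient s0,
    with the change constant T as the constant gradient change rate."""
    return s0 + T * x
```

For example, a starting gradient of 0.01 and a change constant of 0.0005 per meter give a gradient of 0.06 after 100 m.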
Further, the step S34 is specifically:
1) determining the fitted curve of formula (11), whose quantities are the fitted-curve value, the three curve parameters, and the curve independent variable x;
2) calculating the sum of squared distances from each point to the curve using formula (12), where D_s is the distance from a point to the curve, i_p is the index of a point, n_p is the total number of points, and the remaining quantities are the value calculated by formula (11) and the corresponding value of the i_p-th point;
3) minimizing the sum of squares to obtain the parameter values of the fitted curve, and differentiating twice to obtain the gradient change constant;
and S342, determining the road gradient model between the starting point, the gradient-change control point and the end point from the gradient value at the control point and the gradient change constant.
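Steps 1)-3) above amount to a least-squares quadratic fit followed by a second derivative. A sketch with numpy, under the assumption that vertical residuals stand in for the true point-to-curve distances of formula (12):

```python
import numpy as np

def gradient_change_constant(xs, zs):
    """Fit the quadratic g(x) = a2*x^2 + a1*x + a0 of formula (11) to
    (distance, elevation) samples by least squares, then take the second
    derivative 2*a2 as the gradient change constant of the road model."""
    a2, a1, a0 = np.polyfit(xs, zs, 2)
    return 2.0 * a2, (a2, a1, a0)
```

On noiseless samples of a known quadratic the recovered change constant matches exactly (up to floating-point error).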
Compared with the prior art, the invention has the following beneficial effects:
the method is based on low-cost crowdsourcing track data and crowdsourcing sequence image information, and solves the problem of high-precision map slope extraction. On one hand, crowdsourcing data can realize low-cost and large-range data coverage, efficiently extract large-range road gradient information and reduce the construction cost of a high-precision map; on the other hand, the method disclosed by the invention integrates the multi-vanishing point information of the crowdsourcing sequence image and the GPS speed information, realizes the accurate calculation of the gradient information, and meets the accuracy requirement of a high-precision map.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flow chart of road grade extraction based on multiple vanishing points in the invention.
FIG. 3 is a schematic diagram of the linear feature expression in the linear extraction process of the present invention.
FIG. 4 is a schematic view of a three-dimensional model between a vanishing point and a road slope in accordance with the present invention.
FIG. 5 is a diagram of an image inputted in an embodiment of the present invention.
FIG. 6 is a diagram of a lane line extracted image according to an embodiment of the present invention.
FIG. 7 is a vanishing-point extraction image in the embodiment of the present invention.
Fig. 8 is a diagram of an image extracted based on vanishing point gradient according to the present invention.
FIG. 9 is a GPS grade based extraction map of the present invention.
FIG. 10 is a fused image of the present invention.
FIG. 11 is a diagram of input crowdsourced trajectory data in accordance with the present invention.
FIG. 12 is a diagram of the road gradient obtained from GPS speed information in the embodiment of the present invention.
Fig. 14 is a schematic diagram of obtaining the fused road gradient and outputting the accurate gradient according to the embodiment of the present invention.
Detailed Description
The technical solutions provided by the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are illustrative only and are not limiting upon the scope of the invention.
As shown in fig. 1, the present embodiment provides a road surface gradient extraction method based on crowdsourcing data, including the following steps:
step S1, acquiring crowdsourcing sequence images, extracting road marking information based on computer vision, determining vanishing point positions of a plane and an inclined plane, and primarily calculating road gradient information based on coordinate positions of vanishing points;
As shown in fig. 2, step S1 specifically includes the following sub-steps:
s11, inputting sequence image data; as shown in fig. 5;
step S12, acquiring the road area at the bottom of the image and dividing it into a near region and a far region; in this embodiment, the image is split according to the distance from the lane line to the camera, with 30 meters as the boundary condition;
s13, respectively extracting edge points of the two segmentation areas of the image by using a width limitation and gradient symmetry algorithm; step S13 specifically includes:
using an adaptive sliding window whose width equals the lane-line width, calculating the gradient of each pixel according to formula (1), and selecting pixel points that form a peak-valley gradient pair as candidate lane-line edge points; fig. 6 shows the positive-negative gradient result of one image row calculated with formula (1);
where E_j is the gradient value, S is the sliding-window width (40 pixels in this embodiment), j is the pixel position, I is the pixel gray level, and k is the position of a pixel within the sliding window.
Step S14, constructing a voting space detection lane line from the edge points of the local area to extract a lane line, as shown in fig. 7;
step S14 specifically includes:
as shown in fig. 3, a projection transformation is applied with formula (2) to all candidate edge points in the image space to recover the parallelism of the lane lines on both sides, so that the intersections of a lane line with the projection-space boundary lie at the bottom and the top; the two points P_0 and P_1 at the bottom and top of the image uniquely define a straight line in the image space; accordingly, at a fixed height the line parameters can be represented by the distance L from the image edge and the lateral deviation D between the upper and lower end points;
in formula (2) the quantities are, in order: the horizontal coordinate of a candidate edge point in row i; the first horizontal coordinate of row i; the pixel width of the detection area in row i; the width of the detection grid, which in this embodiment is [-10, 10]; and the coordinates obtained by projecting the candidate edge point;
all straight lines through any candidate edge point in the detection area are projected into a voting space and voted on; the distance L from the image edge and the lateral deviation D between the upper and lower end points together form the voting space of lane-line features; extreme points are searched for, and candidate lane lines are extracted;
the parameters and residuals of the fitted straight line of each candidate lane line are calculated by least squares; when the residual is below a given threshold, the candidate lane line is taken as a robust segment, and the remaining feature points belonging to the same line segment are determined from the parameters.
Step S15, calculating the image coordinates of the vanishing points on the Gaussian sphere from the lane-line extraction results of the near and far regions obtained in the previous step, as shown in fig. 8. After the lane marking lines of the near and far regions are identified, the two vanishing-point coordinates are calculated: if the road is uphill, the vanishing point of the uphill region lies above the vanishing point of the flat region; if the road is downhill, the vanishing point of the far downhill region lies below the vanishing point of the near flat region; the road gradient can therefore be calculated from the two vanishing-point positions. Each lane line extracted from the image corresponds, via formula (3), to a great circle on the Gaussian sphere; the two great circles of two parallel lines in the image space intersect at a point on the Gaussian sphere, and the ray from the sphere center to this intersection is computed, as shown in fig. 9; the vanishing-point direction is obtained by singular value decomposition using formula (4), and with formula (5) the image coordinates of the vanishing points of the near and far regions are (1078, 512) and (1078, 471);
where n is the normal vector of the great circle corresponding to a lane line, C_p is the camera intrinsic matrix of this embodiment, P_0 and P_1 are the lane-line end points, D_v is the vanishing-point direction, A is the set of normal vectors of the Gaussian-sphere great circles of the lane lines extracted from the image, n_N is the normal vector of the N-th lane line, and v is the image coordinates of the vanishing point.
Step S16, constructing the three-dimensional model between the vanishing point and the road gradient based on analytic-photogrammetry perspective mapping analysis, and calculating the road gradient from this model, as shown in fig. 4 and fig. 10;
step S16 specifically includes:
establishing the three-dimensional model between the vanishing point and the road gradient based on analytic-photogrammetry perspective mapping analysis; according to the perspective transformation model of the camera, straight lines parallel to a given plane P_l converge to a point in the image space, called the vanishing point, and by the propagation of light the line joining the camera optical center and the vanishing point is parallel to the corresponding plane P_l, so the pitch angle between the camera's principal optical axis and the plane P_l can be expressed by formula (6);
where the quantities in formula (6) are the ordinate of the vanishing point in the image and the camera focal length f (1.01e+03 in this embodiment);
when the road plane has a slope, the road portion of the image is formed by two different planes, the near plane S_near and the far plane S_far, whose lane lines intersect in the image space at the near vanishing point V_near and the far vanishing point V_far respectively; the road surface gradient calculated according to formula (7) is 0.0328 radians, which converts to an angle of 1.88 degrees;
where the quantities in formula (7) are the road surface gradient calculated from the sequence-image data, the camera focal length f = 1.01e+03, and the image vanishing-point ordinates of the near and far regions, 512 and 477 respectively.
Step S2, obtaining crowdsourcing trajectory data, and calculating road gradient information based on the ratio of the GPS horizontal speed to the vertical speed;
Step S2 specifically includes the following substeps:
step S21, inputting crowdsourced trajectory data, as shown in fig. 11;
s22, searching corresponding GPS positioning information and three-dimensional speed information according to the timestamp corresponding to the image;
s23, calculating the gradient from the arctangent of the ratio of the GPS vertical speed to the horizontal speed, as shown in fig. 12, which plots the road gradient obtained from the GPS speed information. The accuracy of the GPS positioning device is easily affected by the environment, so the calculated gradient is noisy, but it still reflects the trend of the road gradient; the gradient calculated from the image multi-vanishing points, by contrast, gives a more accurate gradient value at positions where the gradient changes. Combining the two determines both the road-surface gradient at key nodes and the trend of the road gradient.
Step S23 specifically includes:
calculating the road gradient according to formula (8);
where the result is the road gradient calculated from the GPS speed information, V_Z is the GPS speed in the vertical direction, and V_X and V_Y are the lateral and longitudinal GPS speeds in the horizontal plane.
step S3 specifically includes the following sub-steps:
s32, solving the gradient-change control point: when the gradient detected in the image exceeds the threshold 0.8, the position of the abrupt gradient-change point is calculated from the image and its corresponding GPS positioning information using formula (9) and set as a gradient control point;
where x_lon, y_lat and z_height are the longitude, latitude and altitude of the abrupt gradient-change point in the world coordinate system; the remaining quantities in formula (9) are the camera pitch angle and the horizontal and vertical coordinates of the abrupt change point in the image coordinate system; and x_gps, y_gps and h are the GPS longitude, latitude and camera height time-aligned with the picture, the camera height being 1.65 m.
Step S33, constructing a road gradient model; the method specifically comprises the following steps:
using equation (10), a gradient model of the road is constructed,
assuming that the gradient change rate of the road is constant; in formula (10) the quantities are the road gradient, the road-length variable x, the gradient change rate, and the change constant T.
And S34, calculating a gradient model.
1) determining the fitted curve of formula (11), whose quantities are the fitted-curve value, the three curve parameters, and the curve independent variable x;
2) calculating the sum of squared distances from each point to the curve using formula (12), where D_s is the distance from a point to the curve, i_p is the index of a point, n_p is the total number of points, and the remaining quantities are the value calculated by formula (11) and the corresponding value of the i_p-th point;
3) minimizing the sum of squares to obtain the parameter values of the fitted curve, and differentiating twice to obtain the gradient change constant; in this embodiment, the gradient change constant calculated between the starting point and the gradient-change control point is 0.005, and that between the gradient-change control point and the end point is 0.0036;
and S342, determining the road gradient model between the starting point, the gradient-change control point and the end point from the gradient value at the control point and the gradient change constant.
Step S4, obtaining the fused road gradient and outputting the accurate gradient, as shown in fig. 14: after the control point of the road gradient change is obtained through the image multi-vanishing-point gradient extraction, the gradient-change model between the road starting point and the control point and the gradient-change model between the control point and the road end point are obtained from the fitted gradient change constants.
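The fused output described above can be read as a piecewise model anchored at the image-derived control point, with the two fitted change constants (0.005 and 0.0036 in this embodiment) giving the trend on each side. The function below is an illustrative sketch of that reading, not the patent's exact formulation.

```python
def fused_gradient(x, x_cp, s_cp, T1, T2):
    """Gradient at road distance x: s_cp is the image-derived gradient at
    the control point x_cp; T1 and T2 are the gradient change constants
    fitted between start -> control point and control point -> end."""
    if x <= x_cp:
        return s_cp - T1 * (x_cp - x)   # trend on the approach side
    return s_cp + T2 * (x - x_cp)       # trend beyond the control point
```

The model is continuous at the control point and applies the appropriate change constant on each side.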
The technical means disclosed in the scheme of the invention are not limited to the technical means disclosed in the above embodiments, but also include the technical means formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.
Claims (10)
1. A road surface gradient extraction method based on crowdsourcing data is characterized by comprising the following steps:
step S1, acquiring crowdsourcing sequence images, extracting road marking information based on computer vision, determining vanishing point positions of a plane and an inclined plane, and primarily calculating road gradient information based on coordinate positions of vanishing points;
step S2, obtaining crowdsourcing trajectory data, and calculating road gradient information based on the ratio of the GPS horizontal speed to the vertical speed;
step S3, fusing the gradient calculated from the sequence images with the gradient calculated from the trajectory data;
and step S4, acquiring the fused road gradient and outputting the accurate gradient.
2. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 1, wherein the step S1 specifically comprises the following sub-steps:
s11, inputting sequence image data;
s12, acquiring a road area at the bottom of the image, and dividing the road area on the image into a near area and a far area;
s13, respectively extracting edge points of the two segmentation areas of the image by using a width limitation and gradient symmetry algorithm;
s14, constructing a voting space detection lane line from edge points of a local area to extract the lane line;
s15, calculating the image coordinates of the vanishing points on the Gaussian sphere using the lane-line extraction result of the previous step;
and s16, constructing a three-dimensional model between the vanishing point and the road gradient based on analytic-photogrammetry perspective mapping analysis, and calculating the road gradient from this model.
3. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 2, wherein the step S13 specifically comprises:
calculating the gradient of each pixel according to formula (1) using an adaptive sliding window whose width equals the lane line width, and selecting pixel points with peak-valley gradient pairs as candidate edge points of the lane line;
where E_j is the gradient value, S is the width of the sliding window, j is the pixel position, I is the pixel gray value, and k is the position of a pixel within the sliding window;
the step S14 specifically includes:
performing projection transformation on all candidate edge points in image space using formula (2), recovering the parallel characteristic of the lane lines on both sides so that the intersection points of the lane lines with the projection space boundary lie at the bottom and the top; two points P_0 and P_1 at the bottom and top of the image uniquely define a straight line in image space; accordingly, when the height is unchanged, the parameters of the straight line can be represented by the distance L from the image edge and the lateral offset D between the upper and lower end points,
where the quantities in formula (2) are, respectively, the horizontal coordinate of the candidate edge point in row i, the first horizontal coordinate of row i, the pixel width of the detection area in row i, the width of the detection grid, and the coordinates of the candidate edge point after projection transformation;
projecting the straight line through each candidate edge point in the detection area into the voting space and voting, where the distance L from the image edge and the lateral offset D between the upper and lower end points jointly form the voting space of lane line characteristics; searching for extreme points and extracting candidate lane lines;
and calculating the parameters and residual of a straight line fitted to each candidate lane line by the least square method; when the residual is smaller than a given threshold, taking the candidate lane line as a segment with strong robustness, and determining other feature points belonging to the same line segment according to the parameters.
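The sliding-window edge detector of steps S13-S14 can be sketched as follows. Formula (1) itself is not reproduced in the text, so the windowed gradient used here (right-window sum minus left-window sum) and all function and variable names are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def edge_candidates(row, S, lane_width):
    """Candidate lane-line edge points in one image row.

    A plausible reading of formula (1): the windowed gradient
    E_j = sum(I[j:j+S]) - sum(I[j-S:j]) compares the intensity on
    either side of pixel j, with S the sliding-window width.
    A lane line appears as a peak (dark-to-bright) followed by a
    valley (bright-to-dark) within roughly one lane-line width.
    """
    row = np.asarray(row, dtype=float)
    idx = np.arange(S, len(row) - S)
    E = np.array([row[j:j + S].sum() - row[j - S:j].sum() for j in idx])
    peaks = idx[E > E.std()]       # rising edges (dark -> bright)
    valleys = idx[E < -E.std()]    # falling edges (bright -> dark)
    # keep peak-valley pairs separated by at most one lane-line width
    return [(p, v) for p in peaks for v in valleys if 0 < v - p <= lane_width]
```

On a synthetic row containing one bright stripe, the returned peak-valley pairs bracket the stripe; a real implementation would additionally adapt S per image row, as the claim describes.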
4. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 2, wherein the step S15 specifically comprises:
calculating the image coordinates of the vanishing point based on the Gaussian sphere using the lane line extraction result of the previous step; each lane line extracted from the image corresponds to a great circle on the Gaussian sphere, which can be calculated by formula (3); the great circles of two parallel lines in image space intersect at a point on the Gaussian sphere; the ray from the sphere center to this intersection point gives the vanishing point direction, which can be calculated by singular value decomposition using formula (4), and the image coordinates of the vanishing point are then obtained using formula (5);
where n is the normal vector of the great circle corresponding to a lane line, C_p is the camera intrinsic parameter, P_0 and P_1 are the lane line end points; D_v is the vanishing point direction, A is the set of normal vectors of the Gaussian sphere great circles corresponding to the lane lines extracted from the image; n_N is the normal vector corresponding to the N-th lane line, and v is the image coordinates of the vanishing point.
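The Gaussian-sphere construction of formulas (3)-(5) can be sketched as below. Since the formulas are not reproduced in the text, the exact expressions are assumptions: an intrinsic matrix K stands in for the camera parameter C_p, each line's great-circle normal is the cross product of its back-projected end points, and the common vanishing direction D_v is taken as the singular vector for the smallest singular value of the stacked normals:

```python
import numpy as np

def vanishing_point(lines, K):
    """Vanishing point of (roughly) parallel image lines via the Gaussian sphere.

    `lines` is a list of (P0, P1) end-point pairs in pixel coordinates.
    Each line maps to a great circle with normal n = (K^-1 P0) x (K^-1 P1);
    every such normal is orthogonal to the vanishing direction D_v, so D_v
    spans the (numerical) null space of the stacked normal matrix A.
    """
    Kinv = np.linalg.inv(K)
    A = []
    for P0, P1 in lines:
        r0 = Kinv @ np.array([P0[0], P0[1], 1.0])
        r1 = Kinv @ np.array([P1[0], P1[1], 1.0])
        A.append(np.cross(r0, r1))          # great-circle normal, formula (3)
    _, _, Vt = np.linalg.svd(np.array(A))
    D_v = Vt[-1]                            # null direction, formula (4)
    v = K @ D_v                             # back to pixels, formula (5)
    return v[:2] / v[2]
```

With two exactly concurrent lines the SVD null space is one-dimensional and the recovered point is exact; with many noisy lane lines the same call gives the least-squares vanishing direction.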
5. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 2, wherein the step S16 specifically comprises:
establishing a three-dimensional model between the vanishing point and the road gradient based on the perspective mapping analysis method of analytical photogrammetry; according to the perspective transformation model of the camera, straight lines parallel to a designated plane P_l converge to a point in image space, called the vanishing point; according to the optical path propagation process, the line connecting the camera optical center and the vanishing point is parallel to the corresponding plane P_l, so the pitch angle between the main optical axis of the camera and the plane P_l can be expressed by formula (6) as:
when the road plane has a slope, the road portion in the image is divided into two different planes, a near plane S_near and a far plane S_far, whose corresponding lane lines intersect in image space at a near vanishing point V_near and a far vanishing point V_far respectively; the road surface gradient is then calculated according to formula (7);
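Formulas (6) and (7) are not reproduced in the text; a common form of this vanishing-point geometry, assuming zero camera roll and a focal length f in pixels, reads as follows (a sketch of the standard relation, not necessarily the exact patented expressions):

```python
import math

def plane_pitch(v_y, c_y, f):
    """Angle between the camera's optical axis and a plane whose vanishing
    point has image row v_y (one common form of formula (6), assuming zero
    roll): theta = arctan((c_y - v_y) / f), with c_y the principal-point row
    and f the focal length in pixels."""
    return math.atan2(c_y - v_y, f)

def road_slope(v_near_y, v_far_y, c_y, f):
    """Slope of the far plane S_far relative to the near plane S_near,
    reading formula (7) as the difference between the pitch angles implied
    by the far vanishing point V_far and the near vanishing point V_near."""
    return plane_pitch(v_far_y, c_y, f) - plane_pitch(v_near_y, c_y, f)
```

An uphill far plane lifts V_far above V_near in the image (smaller row), giving a positive slope in radians; when both vanishing points coincide, the road is flat and the slope is zero.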
6. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 1, wherein the step S2 specifically comprises the following sub-steps:
s21, inputting crowdsourcing track data;
s22, searching corresponding GPS positioning information and three-dimensional speed information according to the timestamp corresponding to the image;
7. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 6, wherein the step S23 is specifically:
calculating the road grade according to formula (8).
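Formula (8) itself is omitted from the text; a plausible form of the speed-ratio grade computation of claims 6-7, with a hypothetical stationary-vehicle guard, is:

```python
import math

def gps_grade(v_east, v_north, v_up):
    """Road grade from one GPS velocity sample (a plausible reading of
    formula (8)): the ratio of vertical-direction speed to horizontal speed.

    Returns the grade in percent; returns 0 when the vehicle is (nearly)
    stationary, to avoid dividing by a vanishing horizontal speed.  The
    0.5 m/s cutoff is an illustrative assumption, not from the patent.
    """
    v_h = math.hypot(v_east, v_north)   # horizontal speed
    if v_h < 0.5:
        return 0.0
    return 100.0 * v_up / v_h
```

For example, 5 m/s horizontal with 0.25 m/s vertical speed gives a 5% grade.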
8. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 1, wherein the step S3 specifically comprises the following sub-steps:
s32, solving a gradient change control point;
step S33, constructing a road gradient model;
and S34, calculating a gradient model.
9. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 8, wherein the step S32 specifically comprises:
when the gradient detected in the image exceeds a threshold, calculating the position of the gradient abrupt change point from the image and its corresponding GPS positioning information using formula (9), and setting it as a gradient control point;
where x_lon, y_lat and z_height respectively represent the longitude, latitude and altitude of the gradient abrupt change point in the world coordinate system; the remaining quantities in formula (9) are the camera pitch angle and the abscissa and ordinate of the gradient abrupt change point in the image coordinate system; and x_gps, y_gps and h are the GPS longitude, the GPS latitude and the camera height time-aligned with the image.
10. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 8, wherein the step S33 is specifically:
constructing a gradient model of the road using formula (10),
assuming that the gradient change rate of the road is constant; in the formula, the two remaining quantities denote the road gradient and the slope change rate respectively, x is the road length variable, and T is the change constant;
the step S34 specifically includes:
1) determining the fitted curve:
where the quantities in formula (11) denote, respectively, the fitted curve and its three curve parameters, and x is the road length variable;
2) calculating the sum of the squared distances from the points to the curve using formula (12),
where D_s is the distance from a point to the curve, i_p is the index of a point, n_p is the total number of points, and the remaining quantities are the value calculated by formula (11) and the corresponding value of the i_p-th point;
3) minimizing the sum of squares to obtain the parameter values of the fitted curve, and taking the second derivative of the fitted curve with respect to x to obtain the gradient change constant;
and step S342, determining the road gradient model from the gradient values at the starting point, the gradient change control points and the end point, together with the gradient change constant.
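Steps S33-S34 amount to fitting a curve whose second derivative is the constant slope-change rate. A minimal sketch, assuming the three-parameter curve of formula (11) is a quadratic in the road length x and that formula (12) reduces to the ordinary residual sum of squares (both assumptions, since the formulas are not reproduced in the text):

```python
import numpy as np

def fit_gradient_model(x, y):
    """Least-squares fit of f(x) = a*x**2 + b*x + c to (road length, value)
    points, minimising the sum of squared residuals; the gradient change
    constant is read here as the second derivative f''(x) = 2a."""
    a, b, c = np.polyfit(x, y, 2)   # coefficients, highest degree first
    return (a, b, c), 2.0 * a
```

With the change constant in hand, the piecewise gradient model between consecutive gradient change control points follows from the control-point gradient values, as step S342 describes.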
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210955980.5A CN115035138B (en) | 2022-08-10 | 2022-08-10 | Road surface gradient extraction method based on crowdsourcing data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115035138A true CN115035138A (en) | 2022-09-09 |
CN115035138B CN115035138B (en) | 2022-11-22 |
Family
ID=83130141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210955980.5A Active CN115035138B (en) | 2022-08-10 | 2022-08-10 | Road surface gradient extraction method based on crowdsourcing data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115035138B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012225806A (en) * | 2011-04-20 | 2012-11-15 | Toyota Central R&D Labs Inc | Road gradient estimation device and program |
CN110161513A (en) * | 2018-09-28 | 2019-08-23 | 腾讯科技(北京)有限公司 | Estimate method, apparatus, storage medium and the computer equipment of road grade |
US20210024074A1 (en) * | 2018-09-28 | 2021-01-28 | Tencent Technology (Shenzhen) Company Limited | Road gradient determining method and apparatus, storage medium, and computer device |
CN109900254A (en) * | 2019-03-28 | 2019-06-18 | 合肥工业大学 | A kind of the road gradient calculation method and its computing device of monocular vision |
CN112862890A (en) * | 2021-02-07 | 2021-05-28 | 黑芝麻智能科技(重庆)有限公司 | Road gradient prediction method, road gradient prediction device and storage medium |
CN114136312A (en) * | 2021-11-25 | 2022-03-04 | 中汽研汽车检验中心(天津)有限公司 | Gradient speed combined working condition development device and development method |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115598635A (en) * | 2022-12-15 | 2023-01-13 | 江苏索利得物联网有限公司(Cn) | Millimeter wave radar ranging fusion method and system based on Beidou positioning |
CN117928575A (en) * | 2024-03-22 | 2024-04-26 | 四川省公路规划勘察设计研究院有限公司 | Lane information extraction method, system, electronic device and storage medium |
CN117928575B (en) * | 2024-03-22 | 2024-06-18 | 四川省公路规划勘察设计研究院有限公司 | Lane information extraction method, system, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN115035138B (en) | 2022-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115035138B (en) | Road surface gradient extraction method based on crowdsourcing data | |
CN110146909B (en) | Positioning data processing method | |
CN104848867B (en) | The pilotless automobile Combinated navigation method of view-based access control model screening | |
JP6504316B2 (en) | Traffic lane estimation system | |
JP5057183B2 (en) | Reference data generation system and position positioning system for landscape matching | |
CN102208036B (en) | Vehicle position detection system | |
US8428362B2 (en) | Scene matching reference data generation system and position measurement system | |
JP5057184B2 (en) | Image processing system and vehicle control system | |
CN108731670A (en) | Inertia/visual odometry combined navigation locating method based on measurement model optimization | |
CN102207389A (en) | Vehicle position recognition system | |
US20090154793A1 (en) | Digital photogrammetric method and apparatus using intergrated modeling of different types of sensors | |
CN106017463A (en) | Aircraft positioning method based on positioning and sensing device | |
WO2018133727A1 (en) | Method and apparatus for generating orthophoto map | |
CN114526745B (en) | Drawing construction method and system for tightly coupled laser radar and inertial odometer | |
CN114216454B (en) | Unmanned aerial vehicle autonomous navigation positioning method based on heterogeneous image matching in GPS refusing environment | |
CN104655135B (en) | A kind of aircraft visual navigation method based on terrestrial reference identification | |
CN111426320A (en) | Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter | |
CN112800938B (en) | Method and device for detecting occurrence of side rockfall of unmanned vehicle | |
CN112346463A (en) | Unmanned vehicle path planning method based on speed sampling | |
CN115265493B (en) | Lane-level positioning method and device based on non-calibrated camera | |
CN114325634A (en) | Method for extracting passable area in high-robustness field environment based on laser radar | |
CN113340312A (en) | AR indoor live-action navigation method and system | |
CN115639823A (en) | Terrain sensing and movement control method and system for robot under rugged and undulating terrain | |
Bikmaev et al. | Visual Localization of a Ground Vehicle Using a Monocamera and Geodesic-Bound Road Signs | |
US20220404170A1 (en) | Apparatus, method, and computer program for updating map |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||