CN115035138A - Road surface gradient extraction method based on crowdsourcing data - Google Patents

Road surface gradient extraction method based on crowdsourcing data


Publication number
CN115035138A
CN115035138A (application CN202210955980.5A)
Authority
CN
China
Prior art keywords
gradient
road
image
point
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210955980.5A
Other languages
Chinese (zh)
Other versions
CN115035138B (en)
Inventor
蔡斌斌
史晓飞
赵望宇
尹武腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Yujia Technology Co ltd
Original Assignee
Wuhan Yujia Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Yujia Technology Co ltd filed Critical Wuhan Yujia Technology Co ltd
Priority to CN202210955980.5A priority Critical patent/CN115035138B/en
Publication of CN115035138A publication Critical patent/CN115035138A/en
Application granted granted Critical
Publication of CN115035138B publication Critical patent/CN115035138B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00Adapting or protecting infrastructure or their operation
    • Y02A30/60Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a road surface gradient extraction method based on crowdsourcing data, which comprises the following steps: step S1, acquiring crowdsourced sequence images, extracting road marking information based on computer vision, determining the vanishing point positions of the flat plane and the inclined plane, and calculating road gradient information α₁ from the vanishing point coordinates; step S2, acquiring crowdsourced trajectory data and calculating road gradient information α₂ based on the ratio of the GPS vertical speed to the horizontal speed; step S3, performing gradient data fusion on α₁ and α₂; and step S4, obtaining the fused road gradient and outputting the accurate gradient. The method is based on crowdsourced trajectory data and sequence image information, and solves the problem of gradient extraction for high-precision maps. On the one hand, crowdsourced data provide low-cost, wide-area data coverage, so large-scale road gradient information can be extracted efficiently and the construction cost of a high-precision map is reduced; on the other hand, the method fuses the multiple-vanishing-point information of the sequence images with the GPS speed information, enables accurate calculation of the gradient information, and meets the accuracy requirement of a high-precision map.

Description

Road surface gradient extraction method based on crowdsourcing data
Technical Field
The invention relates to the technical field of photogrammetry, in particular to a road surface gradient extraction method based on crowdsourcing data.
Background
High-precision maps provide lane-level navigation and beyond-line-of-sight safety information for autonomous vehicles and are an indispensable part of automated driving. Gradient information is one kind of driving-assistance information in a high-precision map: an autonomous vehicle must apply different accelerations on roads with different gradients to keep the vehicle stable and safe, achieve optimal control over the full speed range, save fuel and protect the environment.
In the prior art, gradient information for high-precision maps is generally extracted by three kinds of methods. The first uses a high-precision lidar to extract point clouds of the flat and sloped surfaces, fits plane and slope equations, and then computes the gradient; this method is expensive and inefficient, and cannot update the gradient information of a high-precision map over a large area in real time. The second computes the gradient from vehicle-mounted GPS and barometric-pressure sensor readings, but it is easily disturbed by changes in the external environment, and the accuracy of the extracted gradient cannot meet the requirement of a high-precision map. The third computes the gradient with an acceleration sensor and a vehicle dynamics model; it places high demands on sensor accuracy, the extracted gradient is strongly affected by sensor error, and its accuracy likewise cannot meet the requirement of a high-precision map.
Disclosure of Invention
The method of the invention is based on low-cost crowdsourced trajectory data and crowdsourced sequence image information, and solves the problem of gradient extraction for high-precision maps. On the one hand, crowdsourced data provide low-cost, wide-area data coverage, so large-scale road gradient information can be extracted efficiently and the construction cost of a high-precision map is reduced; on the other hand, the method combines the multiple-vanishing-point information of the crowdsourced sequence images with the GPS speed information, enables accurate calculation of the gradient information, and meets the accuracy requirement of a high-precision map.
In order to achieve the above object, the present invention provides a road surface gradient extraction method based on crowdsourcing data, which is characterized by comprising the following steps:
step S1, acquiring crowdsourced sequence images, extracting road marking information based on computer vision, determining the vanishing point positions of the flat plane and the inclined plane, and preliminarily calculating road gradient information α₁ from the vanishing point coordinates;
step S2, acquiring crowdsourced trajectory data, and calculating road gradient information α₂ based on the ratio of the GPS vertical speed to the horizontal speed;
step S3, performing gradient data fusion on α₁ and α₂;
and step S4, obtaining the fused road gradient and outputting the accurate gradient.
Further, the step S1 specifically includes the following sub-steps:
s11, inputting sequence image data;
s12, acquiring a road area at the bottom of the image, and dividing the road area on the image into a near area and a far area;
s13, respectively carrying out edge point extraction on two segmentation areas of the image by using a width limitation and gradient symmetry algorithm;
s14, constructing a voting space detection lane line from the edge points of the local area to extract the lane line;
s15, calculating image coordinates of vanishing points based on Gaussian balls by using the lane line extraction result in the previous step;
s16, constructing a three-dimensional model between the vanishing point and the road gradient based on an analytic photogrammetry perspective mapping analysis method, and calculating the road gradient based on the three-dimensional model between the vanishing point and the road gradient
Figure 12107DEST_PATH_IMAGE005
Further, the step S13 is specifically:
calculating the gradient of each pixel according to formula (1), using an adaptive sliding window whose width equals the lane-line width, and selecting pixel points with paired peak-valley gradients as candidate lane-line edge points;
E_j = Σ_{k=j+1}^{j+S} I_k − Σ_{k=j−S}^{j−1} I_k   (1)
where E_j is the gradient value, S is the sliding-window width, j is the pixel position, and I_k is the gray value of the pixel at position k within the sliding window.
Further, the step S14 is specifically:
performing a projection transformation of all candidate edge points in image space using formula (2), recovering the parallelism of the lane lines on both sides so that the intersections of the lane lines with the boundary of the projection space lie at the bottom and the top; two points P_0 and P_1 at the bottom and top of the image uniquely define a straight line in image space; accordingly, with the height unchanged, the parameters of the line can be represented by the distance L from the image edge and the lateral offset D between the upper and lower end points,
x'_i = w_g · (x_i − x_{i,0}) / w_i   (2)
where x_i is the horizontal coordinate of a candidate edge point in row i, x_{i,0} is the first horizontal coordinate of row i, w_i is the pixel width of the detection area of row i, w_g is the width of the detection grid, and x'_i is the coordinate of the candidate edge point after the projection transformation;
projecting the straight lines through each candidate edge point in the detection area into the voting space and voting, where the distance L from the image edge and the lateral offset D between the upper and lower end points together form the voting space of lane-line features; searching for extreme points and extracting candidate lane lines;
calculating the parameters and residual of a least-squares line fitted to each candidate lane line; when the residual is smaller than a given threshold, treating the candidate lane line as a robust segment, and assigning the remaining feature points belonging to the same segment according to its parameters.
Further, the step S15 is specifically:
calculating the image coordinates of the vanishing points on a Gaussian sphere, using the lane-line extraction result of the previous step; by formula (3), each lane line extracted from the image corresponds to a great circle on the Gaussian sphere; the two great circles of two parallel lines in image space intersect at a point on the sphere; the ray from the sphere center through that intersection is the vanishing-point direction, which is computed by singular value decomposition using formula (4); the image coordinates of the vanishing point are then obtained with formula (5);
n = (C_p⁻¹ P_0) × (C_p⁻¹ P_1)   (3)
A · D_v = 0, A = [n_1, …, n_N]ᵀ, solved by singular value decomposition   (4)
v = C_p · D_v   (5)
where n is the normal vector of the great circle corresponding to a lane line, C_p is the camera intrinsic matrix, and P_0, P_1 are the lane-line end points; D_v is the vanishing-point direction, and A is the set of normal vectors of the Gaussian-sphere great circles of the lane lines extracted from the image; n_N is the normal vector of the N-th lane line, and v is the image coordinate of the vanishing point.
Further, the step S16 is specifically:
constructing a three-dimensional model between the vanishing point and the road gradient based on an analytical-photogrammetry perspective-mapping analysis; by the perspective transformation model of the camera, the parallel straight lines of a given plane P_l converge to a single point in image space, called the vanishing point, and by the light-path propagation the line joining the camera optical center and the vanishing point is parallel to the corresponding plane P_l, so the pitch angle between the camera principal axis and the plane P_l can be expressed by formula (6):
θ = arctan( y_v / f )   (6)
where y_v is the ordinate of the vanishing point in the image and f is the camera focal length;
when the road plane has a slope, the road portion of the image is divided into two different planes, the near plane S_near and the far plane S_far, and the corresponding lane lines intersect in image space at the near vanishing point V_near and the far vanishing point V_far respectively; the road surface gradient is calculated according to formula (7);
α₁ = arctan( (y_near − y_far) / f )   (7)
where α₁ is the road surface gradient calculated from the sequence image data, and y_near, y_far are the image ordinates of the vanishing points of the near and far regions respectively.
Further, the step S2 specifically includes the following sub-steps:
s21, inputting crowdsourcing track data;
s22, searching corresponding GPS positioning information and three-dimensional speed information according to the timestamp corresponding to the image;
s23, calculating the gradient based on the arctangent value of the vertical speed and the horizontal speed of the GPS
Figure 439689DEST_PATH_IMAGE022
Further, the step S23 specifically includes:
calculating the road gradient according to formula (8):
α₂ = arctan( V_Z / sqrt( V_X² + V_Y² ) )   (8)
where α₂ is the road gradient calculated from the GPS speed information, V_Z is the vertical speed of the GPS, and V_X and V_Y are the lateral and longitudinal speeds of the GPS in the horizontal plane.
Further, the step S3 specifically includes the following sub-steps:
step S31, inputting α₂ and α₁;
S32, solving a gradient change control point;
step S33, constructing a road gradient model;
and S34, calculating a gradient model.
Further, the step S32 is specifically:
when the gradient α₁ detected in the image exceeds a threshold T_α, calculating the position of the gradient change point from the image and its corresponding GPS positioning information using formula (9), and setting that position as a gradient control point;
(formula (9) is reproduced only as an image in the original publication)
where x_lon, y_lat and z_height are the longitude, latitude and altitude of the gradient change point in the world coordinate system, θ_c is the camera pitch angle, u and v are the horizontal and vertical coordinates of the gradient change point in the image coordinate system, and x_gps, y_gps and h are the GPS longitude, latitude and camera height time-aligned with the picture.
Further, the step S33 is specifically:
constructing a gradient model of the road using formula (10), assuming that the gradient change rate of the road is constant,
θ(x) = k · x + T   (10)
where θ denotes the road gradient, x is the road-length variable, k denotes the gradient change rate, and T is a constant.
Further, the step S34 is specifically:
S341, fitting α₂ with a least-squares algorithm to obtain the gradient change constant:
1) determine the fitted curve:
f(x) = a · x² + b · x + c   (11)
where f(x) denotes the curve fitted to α₂, a, b and c are the curve parameters, and x is the curve independent variable;
2) calculate the sum of squared distances from each point to the curve using formula (12):
D_s = Σ_{i_p=1}^{n_p} ( f(x_{i_p}) − y_{i_p} )²   (12)
where D_s is the distance from the points to the curve, i_p is the index of a point, n_p is the total number of points, f(x_{i_p}) is the value calculated by formula (11), and y_{i_p} is the α₂ value of the i_p-th point;
3) minimize the sum of squares to obtain the fitted curve parameters a, b and c, and differentiate twice to obtain the gradient change constant k;
S342, determining the gradient model of the road between the start point, the gradient change control points and the end point from the gradient values at the control points and the gradient change constant.
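The fitting of step S341 can be sketched in a few lines, assuming the quadratic form f(x) = a·x² + b·x + c for formula (11) (the original formulas are reproduced only as images, so this form, the function name and the synthetic data below are assumptions made here). Because the second derivative of a quadratic is the constant 2a, differentiating the fitted curve twice yields the gradient change constant directly:

```python
import numpy as np

def gradient_change_constant(x, alpha2):
    """Fit f(x) = a*x**2 + b*x + c to the noisy GPS-derived gradients alpha2
    by least squares (np.polyfit minimizes the sum of squared residuals of
    formula (12)), then differentiate twice: f''(x) = 2a is constant and is
    taken as the gradient change constant."""
    a, b, c = np.polyfit(x, alpha2, 2)
    return 2.0 * a

# synthetic gradient profile along 100 m of road, plus GPS-like noise
x = np.linspace(0.0, 100.0, 50)
true = 0.0005 * x**2 + 0.01 * x + 0.002
rng = np.random.default_rng(0)
k = gradient_change_constant(x, true + rng.normal(0.0, 0.001, x.size))
```

With the noise level above, the recovered k stays very close to the true value 2 × 0.0005 = 0.001, which illustrates why a least-squares fit over many crowdsourced samples can tolerate noisy per-epoch GPS gradients.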
Compared with the prior art, the invention has the following beneficial effects:
the method is based on low-cost crowdsourcing track data and crowdsourcing sequence image information, and solves the problem of high-precision map slope extraction. On one hand, crowdsourcing data can realize low-cost and large-range data coverage, efficiently extract large-range road gradient information and reduce the construction cost of a high-precision map; on the other hand, the method disclosed by the invention integrates the multi-vanishing point information of the crowdsourcing sequence image and the GPS speed information, realizes the accurate calculation of the gradient information, and meets the accuracy requirement of a high-precision map.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flow chart of road grade extraction based on multiple vanishing points in the invention.
FIG. 3 is a schematic diagram of the linear feature expression in the linear extraction process of the present invention.
FIG. 4 is a schematic view of a three-dimensional model between a vanishing point and a road slope in accordance with the present invention.
FIG. 5 is a diagram of an image inputted in an embodiment of the present invention.
FIG. 6 is a diagram of a lane line extracted image according to an embodiment of the present invention.
FIG. 7 is a vanishing-point extraction image in the embodiment of the present invention.
Fig. 8 is a diagram of an image extracted based on vanishing point gradient according to the present invention.
FIG. 9 is a GPS grade based extraction map of the present invention.
FIG. 10 is a fused image of the present invention.
FIG. 11 is a diagram of input crowd-sourced trajectory data in accordance with the present invention.
FIG. 12 is a schematic diagram of the slope α₂ determined in the embodiment of the present invention.
FIG. 13 is a schematic diagram of the input parameters α₂ and α₁ in the embodiment of the present invention.
Fig. 14 is a schematic diagram of obtaining a fusion road gradient and outputting an accurate gradient according to the embodiment of the present invention.
Detailed Description
The technical solutions provided by the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are illustrative only and are not limiting upon the scope of the invention.
As shown in fig. 1, the present embodiment provides a road surface gradient extraction method based on crowdsourcing data, including the following steps:
step S1, acquiring crowdsourced sequence images, extracting road marking information based on computer vision, determining the vanishing point positions of the flat plane and the inclined plane, and preliminarily calculating road gradient information α₁.
As shown in fig. 2, step S1 specifically includes the following sub-steps:
s11, inputting sequence image data; as shown in fig. 5;
step S12, acquiring the road area at the bottom of the image and dividing it into a near region and a far region; in this embodiment the image is divided into the near region and the far region using a distance of 30 meters from the camera along the lane line as the boundary condition;
s13, respectively extracting edge points of the two segmentation areas of the image by using a width limitation and gradient symmetry algorithm; step S13 specifically includes:
calculating the gradient of each pixel according to formula (1), using an adaptive sliding window whose width equals the lane-line width, and selecting pixel points with paired peak-valley gradients as candidate lane-line edge points, as shown in fig. 6, which shows the positive-negative gradient result of one row of pixels calculated with formula (1);
E_j = Σ_{k=j+1}^{j+S} I_k − Σ_{k=j−S}^{j−1} I_k   (1)
where E_j is the gradient value, S is the sliding-window width (40 pixels in this embodiment), j is the pixel position, and I_k is the gray value of the pixel at position k within the sliding window.
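Since formula (1) survives only as an image, the sketch below assumes one plausible form of the width-limited, gradient-symmetric edge operator described in step S13: the response at pixel j is the summed gray value of the S pixels to its right minus that of the S pixels to its left, so a bright lane line of width about S yields a positive peak at its left edge paired with a negative valley at its right edge. All function names and the test row are invented here:

```python
import numpy as np

def row_gradient(row, s):
    """Assumed form of formula (1): difference of the summed gray values of
    the s pixels right of j and the s pixels left of j."""
    row = np.asarray(row, dtype=float)
    e = np.zeros_like(row)
    for j in range(s, len(row) - s):
        e[j] = row[j + 1:j + 1 + s].sum() - row[j - s:j].sum()
    return e

def peak_valley_pairs(e, s, thresh):
    """Keep only pixels whose positive gradient peak is matched by a negative
    valley within about one lane-line width (the 'peak-valley gradient pair'
    criterion of step S13)."""
    peaks = [j for j in range(1, len(e) - 1)
             if e[j] > thresh and e[j] >= e[j - 1] and e[j] >= e[j + 1]]
    valleys = [j for j in range(1, len(e) - 1)
               if e[j] < -thresh and e[j] <= e[j - 1] and e[j] <= e[j + 1]]
    pairs = []
    for p in peaks:
        for v in valleys:
            if 0 < v - p <= 2 * s:
                pairs.append((p, v))
                break
    return pairs

# a synthetic image row: dark road with one bright 5-pixel lane marking
row = [0] * 20 + [200] * 5 + [0] * 20
pairs = peak_valley_pairs(row_gradient(row, 5), 5, thresh=500)
```

Here `pairs` comes back as `[(19, 25)]`, bracketing the two edges of the marking; isolated bright or dark specks produce an unmatched peak or valley and are discarded.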
Step S14, constructing a voting space detection lane line from the edge points of the local area to extract a lane line, as shown in fig. 7;
step S14 specifically includes:
as shown in fig. 3, performing a projection transformation of all candidate edge points in image space using formula (2), recovering the parallelism of the lane lines on both sides so that the intersections of the lane lines with the boundary of the projection space lie at the bottom and the top; two points P_0 and P_1 at the bottom and top of the image uniquely define a straight line in image space; accordingly, with the height unchanged, the parameters of the line can be represented by the distance L from the image edge and the lateral offset D between the upper and lower end points,
x'_i = w_g · (x_i − x_{i,0}) / w_i   (2)
where x_i is the horizontal coordinate of a candidate edge point in row i, x_{i,0} is the first horizontal coordinate of row i, w_i is the pixel width of the detection area of row i, w_g is the width of the detection grid ([−10, 10] in this embodiment), and x'_i is the coordinate of the candidate edge point after the projection transformation;
projecting the straight lines through each candidate edge point in the detection area into the voting space and voting, where the distance L from the image edge and the lateral offset D between the upper and lower end points together form the voting space of lane-line features; searching for extreme points and extracting candidate lane lines;
calculating the parameters and residual of a least-squares line fitted to each candidate lane line; when the residual is smaller than a given threshold, treating the candidate lane line as a robust segment, and assigning the remaining feature points belonging to the same segment according to its parameters.
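The (L, D) voting of step S14 can be illustrated with a toy accumulator. Everything below is a hypothetical reconstruction, since formula (2) survives only as an image: a line is parameterized by its bottom intercept L and the lateral offset D of its top end point, every candidate edge point votes for each (L, D) line passing through it, and an accumulator peak is taken as a candidate lane line:

```python
import numpy as np

def vote_lines(edge_points, height, width, d_range=10):
    """Hypothetical (L, D) voting space: a line runs from bottom point
    (L, height-1) to top point (L+D, 0), so a point (x, y) lies on it when
    x = L + D * (height-1-y) / (height-1)."""
    acc = np.zeros((width, 2 * d_range + 1), dtype=int)
    for x, y in edge_points:
        t = (height - 1 - y) / (height - 1)   # 0 at the bottom row, 1 at the top
        for d in range(-d_range, d_range + 1):
            l = round(x - d * t)              # the L this point implies for offset d
            if 0 <= l < width:
                acc[l, d + d_range] += 1
    return acc

# ten edge points on the vertical line x = 30 in a 100 x 100 image
pts = [(30, y) for y in range(0, 100, 10)]
acc = vote_lines(pts, height=100, width=100, d_range=10)
l_best, d_best = np.unravel_index(np.argmax(acc), acc.shape)
```

The accumulator peak recovers (L, D) = (30, 0), the vertical line the points were drawn from; collinear points concentrate their votes in one cell while clutter spreads thinly, which is the usual robustness argument for Hough-style voting.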
step S15, calculating the image coordinates of the vanishing points on a Gaussian sphere, using the lane-line extraction results for the near and far regions obtained in the previous step, as shown in fig. 8; after the lane marking lines of the near and far regions are identified, the different vanishing-point coordinates are calculated: if the road goes uphill, the vanishing point of the uphill region lies above the vanishing point corresponding to the flat region; if the road goes downhill, the vanishing point of the far downhill region lies below the vanishing point corresponding to the near flat region; the gradient information of the road is then calculated from the different vanishing-point positions; by formula (3), each lane line extracted from the image corresponds to a great circle on the Gaussian sphere, the two great circles of two parallel lines in image space intersect at a point on the sphere, and the ray from the sphere center through that intersection is computed, as shown in fig. 9; the vanishing-point direction is calculated by singular value decomposition using formula (4), and with formula (5) the image coordinates of the vanishing points of the near and far regions are (1078, 512) and (1078, 471);
n = (C_p⁻¹ P_0) × (C_p⁻¹ P_1)   (3)
A · D_v = 0, A = [n_1, …, n_N]ᵀ, solved by singular value decomposition   (4)
v = C_p · D_v   (5)
where n is the normal vector of the great circle corresponding to a lane line, and C_p is the camera intrinsic matrix (the specific matrix of this embodiment is reproduced only as an image in the original publication); P_0, P_1 are the lane-line end points; D_v is the vanishing-point direction, and A is the set of normal vectors of the Gaussian-sphere great circles of the lane lines extracted from the image; n_N is the normal vector of the N-th lane line, and v is the image coordinate of the vanishing point.
step S16, constructing a three-dimensional model between the vanishing point and the road gradient based on the analytical-photogrammetry perspective-mapping analysis, and calculating the road gradient from this model, as shown in fig. 4 and fig. 10;
step S16 specifically includes:
by the perspective transformation model of the camera, the parallel straight lines of a given plane P_l converge to a single point in image space, called the vanishing point, and by the light-path propagation the line joining the camera optical center and the vanishing point is parallel to the corresponding plane P_l, so the pitch angle between the camera principal axis and the plane P_l can be expressed by formula (6):
θ = arctan( y_v / f )   (6)
where y_v is the ordinate of the vanishing point in the image and f is the camera focal length (1.01e+03 in this embodiment);
when the road plane has a slope, the road portion of the image is divided into two different planes, the near plane S_near and the far plane S_far, and the corresponding lane lines intersect in image space at the near vanishing point V_near and the far vanishing point V_far respectively; the road surface gradient calculated according to formula (7) is 0.0328 radians, i.e. an angle of 1.88 degrees;
α₁ = arctan( (y_near − y_far) / f )   (7)
where α₁ is the road surface gradient calculated from the sequence image data, f is the camera focal length 1.01e+03, and y_near and y_far are the image ordinates of the vanishing points of the near and far regions, 512 and 477 respectively.
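Assuming formula (7) has the form α₁ = arctan((y_near − y_far)/f), which matches the surrounding symbol definitions (the original formula is reproduced only as an image), the numerical step is a single arctangent; the function name is made up here:

```python
import math

def slope_from_vanishing_points(y_near, y_far, f):
    """Road gradient as the angle subtended at the camera by the near- and
    far-plane vanishing point ordinates, per the assumed form of formula (7)."""
    return math.atan((y_near - y_far) / f)

# embodiment values: ordinates 512 and 477, focal length 1.01e+03 pixels
alpha1 = slope_from_vanishing_points(512.0, 477.0, 1.01e3)
alpha1_deg = math.degrees(alpha1)
```

With these numbers the sketch gives about 0.0346 rad (≈1.98°), in the same range as, though not identical to, the 0.0328 rad / 1.88° quoted in the embodiment, whose intermediate values are not fully recoverable from the text.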
step S2, obtaining crowdsourced trajectory data, and calculating road gradient information α₂ based on the ratio of the GPS vertical speed to the horizontal speed.
Step S2 specifically includes the following substeps:
step S21, inputting crowdsourcing track data, as shown in fig. 11;
s22, searching corresponding GPS positioning information and three-dimensional speed information according to the timestamp corresponding to the image;
s23, calculating the gradient based on the arctangent value of the vertical speed and the horizontal speed of the GPS
Figure 562627DEST_PATH_IMAGE069
As shown in fig. 12, fig. 12 shows a road slope obtained by using GPS speed information, and the accuracy of the GPS positioning device is easily affected by the environment, so that the calculated noise is more, but the change trend of the road slope can still be reflected, and the slope value of the position where the road slope changes more accurately can be observed by calculating the road slope through the image multi-vanishing point, and the slope value of the road surface at the key node and the change trend of the road slope can be determined simultaneously by combining the two.
step S23 specifically includes:
calculating the road gradient according to formula (8):
α₂ = arctan( V_Z / sqrt( V_X² + V_Y² ) )   (8)
where α₂ is the road gradient calculated from the GPS speed information, V_Z is the vertical speed of the GPS, and V_X and V_Y are the lateral and longitudinal speeds of the GPS in the horizontal plane.
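Formula (8) is simple enough to check directly; the velocity values below are invented for illustration:

```python
import math

def gps_grade(v_x, v_y, v_z):
    """Formula (8): road gradient as the arctangent of the GPS vertical speed
    over the horizontal speed sqrt(v_x**2 + v_y**2)."""
    return math.atan2(v_z, math.hypot(v_x, v_y))

# 20 m/s horizontal speed while climbing at 0.66 m/s
alpha2 = gps_grade(16.0, 12.0, 0.66)
alpha2_deg = math.degrees(alpha2)
```

This gives roughly 0.033 rad (about 1.9°), a gentle climb of the same order as the image-derived slope in the embodiment; per-epoch values like this are what fig. 12 shows scattering around the true profile.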
step S3, performing gradient data fusion on α₁ and α₂;
step S3 specifically includes the following sub-steps:
step S31, inputting α₂ and α₁, as shown in fig. 13;
s32, solving a gradient change control point; gradient detected in image
Figure 487967DEST_PATH_IMAGE050
Exceeds a threshold value
Figure 363519DEST_PATH_IMAGE071
When =0.8, calculating the position of the slope abrupt change point by using the image and the corresponding GPS positioning information thereof and using a formula (9), and setting the position as a slope control point;
(9)  [equation given in the source only as an image; it maps the image point (u, v) to world coordinates (x_lon, y_lat, z_height) using the camera pitch angle, the camera height h, and the time-aligned GPS position]
where x_lon, y_lat, z_height are the longitude, latitude, and altitude of the abrupt slope change point in the world coordinate system, φ is the camera pitch angle, u and v are the horizontal and vertical coordinates of the abrupt slope change point in the image coordinate system, and x_gps, y_gps, h are the time-aligned GPS longitude, GPS latitude, and camera height; here the camera height is 1.65 m.
Step S33, constructing a road gradient model; the method specifically comprises the following steps:
using equation (10), a gradient model of the road is constructed, assuming the rate of change of the road gradient is a constant:

dθ(x)/dx = T    (10)

where θ(x) is the road gradient, x is the road length variable, dθ(x)/dx is the rate of change of the gradient, and T is the change constant.
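Integrating equation (10) gives a gradient that varies linearly with road length; a one-line sketch (names are ours):

```python
def gradient_model(theta0: float, T: float, x: float) -> float:
    """Road gradient at distance x along the road: the integral of
    equation (10), d(theta)/dx = T, with initial gradient theta0."""
    return theta0 + T * x

# The gradient grows linearly with x at rate T.
g = gradient_model(0.01, 0.005, 10.0)  # 0.01 + 0.05 = 0.06
```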
Step S34, calculating the gradient model.
S341, fitting θ_gps with a least-squares algorithm to obtain the gradient change constant.
1) Determining the fitted curve:

f(x) = a·x² + b·x + c    (11)

where f(x) denotes the curve fitted to θ_gps, a, b, c are the curve parameters, and x is the curve independent variable.
2) Calculating the sum of the squared distances from each point to the curve using equation (12):

D_s = Σ_{i_p = 1}^{n_p} ( f(x_{i_p}) − θ_{i_p} )²    (12)

where D_s is the sum of squared point-to-curve distances, i_p is the serial number of a point, n_p is the total number of points, f(x_{i_p}) is the value calculated by equation (11), and θ_{i_p} is the θ_gps value of the i_p-th point;
3) Minimizing the sum of squares yields the parameter values a, b, c of the fitted curve; taking the second derivative then gives the gradient change constant T. Here, the gradient change constant calculated between the start point and the slope change control point is 0.005, and the one calculated between the control point and the end point is 0.0036.
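Steps 1)–3) of S341 can be sketched with NumPy (our reading of the "secondary derivation" step: a quadratic least-squares fit whose second derivative, 2a, is taken as the change constant; all names are ours):

```python
import numpy as np

def fit_gradient_change_constant(x, theta_gps):
    """Fit theta_gps samples with the quadratic of equation (11),
    f(x) = a*x^2 + b*x + c, by least squares (minimizing the sum of
    squared residuals of equation (12)), then return the second
    derivative f''(x) = 2a as the gradient change constant."""
    a, b, c = np.polyfit(x, theta_gps, deg=2)
    return 2.0 * a, (a, b, c)

# Synthetic check: an exactly quadratic gradient profile is recovered.
x = np.linspace(0.0, 100.0, 50)
theta = 0.001 * x**2 + 0.01 * x + 0.02
T, params = fit_gradient_change_constant(x, theta)  # T ≈ 0.002
```

In practice the fit would be run twice, once on the θ_gps samples between the start point and the control point and once between the control point and the end point, matching the two constants (0.005 and 0.0036) reported above.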
S342, determining the road gradient model from the gradient value at the control point together with the gradient change constants between the start point and the control point and between the control point and the end point.
Step S4, obtaining the fused road gradient and outputting the precise gradient, as shown in fig. 14. After the control point of the road gradient change has been obtained through image multi-vanishing-point gradient extraction, the gradient model of equation (10) yields the gradient change model between the road start point and the control point and between the control point and the road end point.
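The fused model of step S4 can then be evaluated piecewise (a sketch under the linear-gradient reading of equation (10); the function name and the linear form are ours):

```python
def fused_grade(x: float, x_ctrl: float, theta_ctrl: float,
                T_before: float = 0.005, T_after: float = 0.0036) -> float:
    """Gradient at road length x: one linear segment from the start to
    the control point (change constant T_before) and another from the
    control point to the end (change constant T_after).  The default
    constants are the values reported in step S341; theta_ctrl is the
    image-derived gradient at the control point."""
    if x <= x_ctrl:
        return theta_ctrl - T_before * (x_ctrl - x)
    return theta_ctrl + T_after * (x - x_ctrl)

g_at_ctrl = fused_grade(50.0, 50.0, 0.1)   # exactly theta_ctrl
g_after = fused_grade(60.0, 50.0, 0.1)     # theta_ctrl + 0.0036 * 10
```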
The technical means disclosed by the present invention are not limited to those disclosed in the above embodiments, but also include technical solutions formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.

Claims (10)

1. A road surface gradient extraction method based on crowdsourcing data is characterized by comprising the following steps:
step S1, acquiring crowdsourcing sequence images, extracting road marking information based on computer vision, determining the vanishing point positions of the plane and the inclined plane, and preliminarily calculating the road gradient information θ_img from the coordinate positions of the vanishing points;
step S2, obtaining crowdsourcing trajectory data, and calculating the road grade information θ_gps from the ratio of the GPS vertical speed to the horizontal speed;
step S3, fusing the gradient data θ_img and θ_gps;
and step S4, acquiring the gradient of the fusion road and outputting the accurate gradient.
2. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 1, wherein the step S1 specifically comprises the following sub-steps:
s11, inputting sequence image data;
s12, acquiring a road area at the bottom of the image, and dividing the road area on the image into a near area and a far area;
s13, respectively extracting edge points of the two segmentation areas of the image by using a width limitation and gradient symmetry algorithm;
s14, constructing a voting space detection lane line from edge points of a local area to extract the lane line;
s15, calculating image coordinates of vanishing points based on Gaussian balls by using the lane line extraction result in the previous step;
step S16, constructing a three-dimensional model between the vanishing points and the road gradient based on an analytic photogrammetry perspective mapping analysis method, and calculating the road gradient θ_img from this model.
3. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 2, wherein the step S13 specifically comprises:
calculating the gradient of each pixel according to formula (1) using an adaptive sliding window whose width equals the lane line width, and selecting pixel points with peak-valley gradient pairs as candidate lane line edge points;

E_j = Σ_{k=1}^{S} I_{j+k} − Σ_{k=1}^{S} I_{j−k}    (1)

where E_j is the gradient value, S is the sliding window width, j is the pixel position, I is the pixel gray value, and k is the position of a pixel within the sliding window;
the step S14 specifically includes:
performing projection transformation on all candidate edge points in image space using equation (2), recovering the parallel character of the lane lines on both sides so that the intersection points of the lane lines with the projection space boundary lie at the bottom and the top; the two points P 0 and P 1 at the bottom and top of the image uniquely define a straight line in image space; accordingly, when the height is unchanged, the parameters of the straight line can be represented by the distance L from the image edge and the lateral deviation D between the upper and lower end points,
x'_i = (x_i − x_{i,0}) · W_g / W_i    (2)

where x_i is the horizontal coordinate of a candidate edge point in row i, x_{i,0} is the first horizontal coordinate of row i, W_i is the pixel width of the detection area of row i, W_g is the detection grid width, and x'_i is the coordinate of the candidate edge point after projection transformation;
projecting each candidate edge point in the detection area into the voting space along every straight line through it and voting, the distance L from the image edge and the lateral deviation D between the upper and lower end points together forming the voting space of lane line features; searching for extreme points and extracting candidate lane lines;
and calculating the parameters and residual of the straight line fitted to each candidate lane line by least squares; when the residual is smaller than a given threshold, taking the candidate lane line as a robust segment, and determining the other feature points belonging to the same line segment from its parameters.
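The window response of formula (1) can be sketched as follows, under our reconstruction of the window sum (names and the exact windowing are ours):

```python
import numpy as np

def edge_gradient(row: np.ndarray, j: int, S: int) -> float:
    """One reading of formula (1): at pixel j, the contrast between the
    S pixels to the right and the S pixels to the left of j.  A bright
    lane line on dark asphalt produces a peak at its left edge and a
    valley at its right edge -- the 'peak-valley gradient pair'."""
    left = row[j - S:j].astype(float).sum()
    right = row[j + 1:j + 1 + S].astype(float).sum()
    return right - left

# A dark-to-bright step produces a strong positive response:
row = np.array([0, 0, 0, 0, 255, 255, 255, 255])
e = edge_gradient(row, 3, 3)  # 765.0
```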
4. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 2, wherein the step S15 specifically comprises:
calculating the image coordinates of the vanishing points based on the Gaussian sphere using the lane line extraction result of the previous step; each lane line extracted from the image corresponds, via formula (3), to a great circle on the Gaussian sphere; the two great circles of two parallel lines in image space intersect at a point on the Gaussian sphere; the ray from the sphere center to this intersection gives the vanishing direction, which is computed by singular value decomposition using formula (4), and the image coordinates of the vanishing point are obtained using formula (5);
n = (C_p⁻¹ P_0) × (C_p⁻¹ P_1)    (3)

A · D_v = 0,  A = [n_1, …, n_N]ᵀ,  D_v solved by singular value decomposition    (4)

v = C_p · D_v, normalized by its last component    (5)
where n is the normal vector of the great circle corresponding to a lane line, C_p is the camera intrinsic matrix, and P_0, P_1 are the lane line end points; D_v is the vanishing direction and A is the set of normal vectors of the Gaussian sphere great circles corresponding to the lane lines extracted from the image; n_N is the normal vector corresponding to the N-th lane line, and v is the image coordinates of the vanishing point.
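Formulas (3)–(5) can be sketched with NumPy's SVD (a minimal illustration; the interface and variable names are ours):

```python
import numpy as np

def vanishing_point(lines, K):
    """Sketch of formulas (3)-(5):
    (3) great-circle normal of each line: n = (K^-1 p0) x (K^-1 p1);
    (4) vanishing direction D_v: the direction minimizing ||A D_v|| for
        A = [n_1 .. n_N]^T, i.e. the last right singular vector of A;
    (5) image coordinates: v = K D_v, dehomogenized.
    lines: (p0, p1) homogeneous pixel endpoints; K: 3x3 intrinsics."""
    Kinv = np.linalg.inv(K)
    A = np.array([np.cross(Kinv @ np.asarray(p0, float),
                           Kinv @ np.asarray(p1, float)) for p0, p1 in lines])
    _, _, Vt = np.linalg.svd(A)
    d = Vt[-1]          # unit null direction of A
    v = K @ d
    return v[:2] / v[2]

# Two image lines whose extensions both pass through the pixel (400, 300):
K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
lines = [((400.0, 300.0, 1.0), (0.0, 0.0, 1.0)),
         ((400.0, 300.0, 1.0), (800.0, 0.0, 1.0))]
vp = vanishing_point(lines, K)   # ≈ (400, 300)
```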
5. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 2, wherein the step S16 specifically comprises:
establishing a three-dimensional model between the vanishing point and the road gradient based on an analytic photogrammetry perspective mapping analysis method; according to the perspective transformation model of the camera, straight lines parallel to a given plane P l converge to a point in image space, called the vanishing point; by the light path propagation, the line connecting the camera optical center and the vanishing point is parallel to the corresponding plane P l, so the pitch angle of the camera main optical axis relative to plane P l can be expressed by equation (6) as:

θ_pitch = arctan( v_0 / f )    (6)

where v_0 is the ordinate of the vanishing point in the image (measured from the principal point) and f is the camera focal length;
when the road plane has a slope, the road portion in the image is divided into two different planes, a near plane S near and a far plane S far, whose corresponding lane lines intersect in image space at a near vanishing point V near and a far vanishing point V far respectively; the road surface gradient is then calculated according to formula (7):

θ_img = arctan( v_near / f ) − arctan( v_far / f )    (7)

where θ_img is the road surface gradient calculated from the sequence image data, f is the camera focal length, and v_near and v_far are the ordinates of the image vanishing points of the near and far regions respectively.
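Formula (7) as reconstructed above is a two-line computation (a sketch; the sign convention for the vanishing-point ordinates is our assumption):

```python
import math

def road_slope_from_vanishing_points(v_near: float, v_far: float,
                                     f: float) -> float:
    """Road slope angle (radians) from the near- and far-region
    vanishing-point ordinates v_near, v_far (relative to the principal
    point) and the focal length f: each plane's pitch is arctan(v/f)
    per equation (6), and the slope is the angle between the planes."""
    return math.atan(v_near / f) - math.atan(v_far / f)

# A flat road has coincident vanishing points, hence zero slope:
flat = road_slope_from_vanishing_points(100.0, 100.0, 1000.0)
sloped = road_slope_from_vanishing_points(100.0, 50.0, 1000.0)
```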
6. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 1, wherein the step S2 specifically comprises the following sub-steps:
s21, inputting crowdsourcing track data;
s22, searching corresponding GPS positioning information and three-dimensional speed information according to the timestamp corresponding to the image;
s23, calculating the gradient based on the arctangent value of the vertical speed and the horizontal speed of the GPS
Figure 559463DEST_PATH_IMAGE022
7. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 6, wherein the step S23 is specifically:
calculating road grade according to equation (8)
θ_gps = arctan( V_Z / √(V_X² + V_Y²) )    (8)
where θ_gps is the road grade calculated from the GPS speed information, V_Z is the GPS velocity in the vertical direction, and V_X and V_Y are the lateral and longitudinal GPS velocities in the horizontal plane.
8. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 1, wherein the step S3 specifically comprises the following sub-steps:
step S31, inputting θ_gps and θ_img;
s32, solving a gradient change control point;
step S33, constructing a road gradient model;
and S34, calculating a gradient model.
9. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 8, wherein the step S32 specifically comprises:
when the gradient θ_img detected in the image exceeds a threshold τ, calculating the position of the abrupt slope change point from the image and its corresponding GPS positioning information using formula (9), and setting this position as a gradient control point;
(9)  [equation given in the source only as an image; it maps the image point (u, v) to world coordinates (x_lon, y_lat, z_height) using the camera pitch angle, the camera height h, and the time-aligned GPS position]
where x_lon, y_lat, z_height are the longitude, latitude, and altitude of the abrupt slope change point in the world coordinate system, φ is the camera pitch angle, u and v are the horizontal and vertical coordinates of the abrupt slope change point in the image coordinate system, and x_gps, y_gps, h are the time-aligned GPS longitude, GPS latitude, and camera height.
10. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 8, wherein the step S33 is specifically:
using equation (10), a gradient model of the road is constructed, assuming the rate of change of the road gradient is a constant:

dθ(x)/dx = T    (10)

where θ(x) is the road gradient, x is the road length variable, dθ(x)/dx is the rate of change of the gradient, and T is the change constant;
the step S34 specifically includes:
s341, using least square algorithm pair
Figure 278513DEST_PATH_IMAGE004
Fitting to obtain gradient change constant
1) Determining the fitted curve:

f(x) = a·x² + b·x + c    (11)

where f(x) denotes the curve fitted to θ_gps, a, b, c are the curve parameters, and x is the road length variable;
2) Calculating the sum of the squared distances from the points to the curve using equation (12):

D_s = Σ_{i_p = 1}^{n_p} ( f(x_{i_p}) − θ_{i_p} )²    (12)

where D_s is the sum of squared point-to-curve distances, i_p is the serial number of a point, n_p is the total number of points, f(x_{i_p}) is the value calculated by equation (11), and θ_{i_p} is the θ_gps value of the i_p-th point;
3) Minimizing the sum of squares yields the parameter values a, b, c of the fitted curve; taking the second derivative then gives the gradient change constant T.
S342, determining the road gradient model from the gradient value at the control point together with the gradient change constants between the start point and the control point and between the control point and the end point.
CN202210955980.5A 2022-08-10 2022-08-10 Road surface gradient extraction method based on crowdsourcing data Active CN115035138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210955980.5A CN115035138B (en) 2022-08-10 2022-08-10 Road surface gradient extraction method based on crowdsourcing data


Publications (2)

Publication Number Publication Date
CN115035138A true CN115035138A (en) 2022-09-09
CN115035138B CN115035138B (en) 2022-11-22

Family

ID=83130141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210955980.5A Active CN115035138B (en) 2022-08-10 2022-08-10 Road surface gradient extraction method based on crowdsourcing data

Country Status (1)

Country Link
CN (1) CN115035138B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012225806A (en) * 2011-04-20 2012-11-15 Toyota Central R&D Labs Inc Road gradient estimation device and program
CN110161513A (en) * 2018-09-28 2019-08-23 腾讯科技(北京)有限公司 Estimate method, apparatus, storage medium and the computer equipment of road grade
US20210024074A1 (en) * 2018-09-28 2021-01-28 Tencent Technology (Shenzhen) Company Limited Road gradient determining method and apparatus, storage medium, and computer device
CN109900254A (en) * 2019-03-28 2019-06-18 合肥工业大学 A kind of the road gradient calculation method and its computing device of monocular vision
CN112862890A (en) * 2021-02-07 2021-05-28 黑芝麻智能科技(重庆)有限公司 Road gradient prediction method, road gradient prediction device and storage medium
CN114136312A (en) * 2021-11-25 2022-03-04 中汽研汽车检验中心(天津)有限公司 Gradient speed combined working condition development device and development method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115598635A (en) * 2022-12-15 2023-01-13 江苏索利得物联网有限公司(Cn) Millimeter wave radar ranging fusion method and system based on Beidou positioning
CN117928575A (en) * 2024-03-22 2024-04-26 四川省公路规划勘察设计研究院有限公司 Lane information extraction method, system, electronic device and storage medium
CN117928575B (en) * 2024-03-22 2024-06-18 四川省公路规划勘察设计研究院有限公司 Lane information extraction method, system, electronic device and storage medium

Also Published As

Publication number Publication date
CN115035138B (en) 2022-11-22

Similar Documents

Publication Publication Date Title
CN115035138B (en) Road surface gradient extraction method based on crowdsourcing data
CN110146909B (en) Positioning data processing method
CN104848867B (en) The pilotless automobile Combinated navigation method of view-based access control model screening
JP6504316B2 (en) Traffic lane estimation system
JP5057183B2 (en) Reference data generation system and position positioning system for landscape matching
CN102208036B (en) Vehicle position detection system
US8428362B2 (en) Scene matching reference data generation system and position measurement system
JP5057184B2 (en) Image processing system and vehicle control system
CN108731670A (en) Inertia/visual odometry combined navigation locating method based on measurement model optimization
CN102207389A (en) Vehicle position recognition system
US20090154793A1 (en) Digital photogrammetric method and apparatus using intergrated modeling of different types of sensors
CN106017463A (en) Aircraft positioning method based on positioning and sensing device
WO2018133727A1 (en) Method and apparatus for generating orthophoto map
CN114526745B (en) Drawing construction method and system for tightly coupled laser radar and inertial odometer
CN114216454B (en) Unmanned aerial vehicle autonomous navigation positioning method based on heterogeneous image matching in GPS refusing environment
CN104655135B (en) A kind of aircraft visual navigation method based on terrestrial reference identification
CN111426320A (en) Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
CN112800938B (en) Method and device for detecting occurrence of side rockfall of unmanned vehicle
CN112346463A (en) Unmanned vehicle path planning method based on speed sampling
CN115265493B (en) Lane-level positioning method and device based on non-calibrated camera
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN113340312A (en) AR indoor live-action navigation method and system
CN115639823A (en) Terrain sensing and movement control method and system for robot under rugged and undulating terrain
Bikmaev et al. Visual Localization of a Ground Vehicle Using a Monocamera and Geodesic-Bound Road Signs
US20220404170A1 (en) Apparatus, method, and computer program for updating map

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant