CN115035138B - Road surface gradient extraction method based on crowdsourcing data - Google Patents

Road surface gradient extraction method based on crowdsourcing data

Info

Publication number
CN115035138B
CN115035138B (application CN202210955980.5A)
Authority
CN
China
Prior art keywords
gradient
road
image
point
gps
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210955980.5A
Other languages
Chinese (zh)
Other versions
CN115035138A
Inventor
蔡斌斌
史晓飞
赵望宇
尹武腾
Current Assignee
Wuhan Yujia Technology Co ltd
Original Assignee
Wuhan Yujia Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Yujia Technology Co ltd
Priority to CN202210955980.5A
Publication of CN115035138A
Application granted
Publication of CN115035138B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 - Interpretation of pictures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00 - Adapting or protecting infrastructure or their operation
    • Y02A30/60 - Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a road surface slope extraction method based on crowdsourced data, which comprises the following steps. S1, acquire crowdsourced sequence images, extract road marking information with computer vision, determine the vanishing-point positions of the flat plane and the inclined plane, and calculate road gradient information (denoted θ_img) from the vanishing-point coordinates. S2, acquire crowdsourced trajectory data and calculate road gradient information (denoted θ_gps) from the ratio of the GPS vertical and horizontal speeds. S3, fuse the gradient data θ_img and θ_gps. S4, obtain the fused road gradient and output the accurate gradient. The method is based on crowdsourced trajectory data and sequence-image information and solves slope extraction for high-precision maps. On the one hand, crowdsourced data achieve low-cost, wide-area data coverage, so large-scale road gradient information can be extracted efficiently and the construction cost of a high-precision map reduced; on the other hand, by fusing the multi-vanishing-point information of the sequence images with GPS speed information, the method calculates gradient information accurately enough to meet the accuracy requirement of a high-precision map.

Description

Road surface gradient extraction method based on crowdsourcing data
Technical Field
The invention relates to the technical field of photogrammetry, in particular to a road surface gradient extraction method based on crowdsourcing data.
Background
High-precision maps provide lane-level navigation and beyond-line-of-sight safety-assistance information for autonomous vehicles and are an indispensable part of automated driving. Slope is one of the driving-assistance attributes of a high-precision map: an autonomous vehicle must apply different accelerations on roads of different slopes to remain stable and safe, to achieve optimal control across the full speed range, and to save fuel and protect the environment.
In the prior art, slope information for high-precision maps is generally extracted in one of three ways. The first uses a high-precision lidar to extract point clouds of the flat and inclined surfaces, fits plane equations to each, and computes the slope from them; this is expensive and inefficient and cannot update the slope information of a high-precision map over a wide area in real time. The second computes the slope from vehicle-mounted GPS and barometric-pressure-sensor readings, but it is easily disturbed by changes in the external environment, and the accuracy of the extracted slope does not meet high-precision-map requirements. The third computes the slope from an acceleration sensor and a vehicle dynamics model; it places high demands on sensor accuracy, sensor errors strongly affect the extracted slope, and its accuracy likewise falls short of high-precision-map requirements.
Disclosure of Invention
The method is based on low-cost crowdsourced trajectory data and crowdsourced sequence-image information and solves slope extraction for high-precision maps. On the one hand, crowdsourced data achieve low-cost, wide-area data coverage, so large-scale road gradient information can be extracted efficiently and the construction cost of a high-precision map reduced; on the other hand, by combining the multi-vanishing-point information of the crowdsourced sequence images with GPS speed information, the method calculates gradient information accurately enough to meet the accuracy requirement of a high-precision map.
In order to achieve the above object, the present invention provides a road surface gradient extraction method based on crowdsourced data, characterized by comprising the following steps:
S1, acquire crowdsourced sequence images, extract road marking information with computer vision, determine the vanishing-point positions of the flat plane and the inclined plane, and preliminarily calculate road gradient information (denoted θ_img) from the vanishing-point coordinates;
S2, acquire crowdsourced trajectory data and calculate road gradient information (denoted θ_gps) from the ratio of the GPS vertical and horizontal speeds;
S3, fuse the gradient data θ_img and θ_gps;
S4, obtain the fused road gradient and output the accurate gradient.
Further, the step S1 specifically includes the following sub-steps:
S11, input the sequence image data;
S12, obtain the road region at the bottom of the image and divide it into a near region and a far region;
S13, extract edge points in the two regions using a width-constraint and gradient-symmetry algorithm;
S14, detect lane lines from the local-region edge points by constructing a voting space, and extract the lane lines;
S15, calculate the image coordinates of the vanishing points on the Gaussian sphere from the lane-line extraction result of the previous step;
S16, construct a three-dimensional model between the vanishing points and the road slope based on perspective-mapping analysis from analytical photogrammetry, and calculate the road slope θ_img from that model.
Further, step S13 specifically includes:
using an adaptive sliding window whose width equals the lane-line width, calculating the gradient of each pixel according to equation (1), and selecting pixels exhibiting a peak-valley gradient pair as candidate lane-line edge points;

[Equation (1): rendered as an image in the source; not reproduced]

where E_j is the gradient value, S is the sliding-window width, j is the pixel position, and I_k is the gray value of the pixel at position k within the sliding window.
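Equation (1) is not reproduced in the source, but the operation it describes, a sliding-window gradient whose peak-valley pairs mark the two edges of a bright lane marking, can be sketched as follows (a minimal illustration; the step-filter form and the helper names are assumptions, not the patent's exact filter):

```python
import numpy as np

def step_gradient(row, S):
    """Width-S step filter over one pixel row: E_j is the sum of the S
    pixels to the right of j minus the sum of the S pixels to its left.
    A bright lane marking on dark asphalt produces a positive peak at its
    left edge and a negative valley at its right edge."""
    row = np.asarray(row, dtype=float)
    E = np.zeros_like(row)
    for j in range(S, len(row) - S):
        E[j] = row[j:j + S].sum() - row[j - S:j].sum()
    return E

def peak_valley_candidates(E, thresh):
    """Pair each strong positive peak with a later strong negative valley;
    such peak-valley pairs mark candidate lane-line edge points."""
    peaks = np.where(E > thresh)[0]
    valleys = np.where(E < -thresh)[0]
    return [(int(p), int(v)) for p in peaks for v in valleys if v > p]

# Synthetic row: a 5-pixel bright stripe starting at column 20.
row = [0] * 20 + [255] * 5 + [0] * 20
E = step_gradient(row, 5)
```

With a window width matching the stripe, the peak sits on the left edge of the marking and the valley on its right edge.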
Further, the step S14 specifically includes:
performing the projective transformation of equation (2) on all candidate edge points in image space to recover the parallelism of the lane lines on the two sides, so that the intersections of the lane lines with the boundary of the projection space lie at the bottom and the top; the two points P_0 and P_1 at the bottom and top of the image uniquely define a straight line in image space, so at fixed height a line can be parameterized by its distance L from the image edge and the lateral deviation D between its upper and lower endpoints;

[Equation (2): rendered as an image in the source; not reproduced]

where the symbols denote the horizontal coordinate of a candidate edge point in row i, the first horizontal coordinate of row i, the pixel width of the detection region in row i, the width of the detection grid, and the coordinate of the candidate edge point after the projective transformation;
projecting every candidate edge point in the detection region into the voting space along each straight line through it and voting: the distance L from the image edge and the lateral deviation D between the upper and lower endpoints together form the voting space of lane-line features; extreme points are searched and candidate lane lines extracted;
calculating the parameters and residual of a least-squares line fitted to each candidate lane line; when the residual is below a given threshold, the candidate lane line is kept as a robust segment, and the other feature points belonging to the same segment are determined from the line parameters.
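The voting scheme above, lines parameterized by a bottom intercept L and a top-bottom lateral deviation D, accumulating votes from candidate edge points, can be sketched as a small Hough-style accumulator (the parameterization here is a plausible reading of the text, not the patent's exact formula (2)):

```python
import numpy as np

def vote_lane_lines(points, H, W, D_range=range(-50, 51)):
    """Hough-style vote in (L, D) space: a candidate line is fixed by its
    bottom intercept L and the lateral deviation D of its top endpoint.
    Each edge point (x, y) (y measured down from the image top) votes for
    every (L, D) pair consistent with x = L + D * (1 - y / H)."""
    acc = np.zeros((W, len(D_range)), dtype=int)
    for x, y in points:
        for di, D in enumerate(D_range):
            L = round(x - D * (1.0 - y / H))
            if 0 <= L < W:
                acc[L, di] += 1
    Li, di = np.unravel_index(np.argmax(acc), acc.shape)
    return int(Li), list(D_range)[int(di)], acc

# Synthetic lane line: bottom intercept 100, top endpoint shifted by +20.
H, W = 200, 300
pts = [(100 + 20 * (1 - y / H), y) for y in range(0, 200, 10)]
L_best, D_best, acc = vote_lane_lines(pts, H, W)
```

All points of a true line fall into a single (L, D) cell, so the accumulator maximum recovers the line; the least-squares refinement described in the text would then run on the points assigned to that cell.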
Further, the step S15 specifically includes:
calculating the image coordinates of the vanishing points on the Gaussian sphere from the lane lines extracted in the previous step: each lane line extracted from the image defines, through equation (3), a great circle on the Gaussian sphere; the two great circles of two parallel lines in image space intersect at one point on the sphere, and the ray from the sphere center to that intersection gives the vanishing direction, which is computed by singular value decomposition using equation (4); equation (5) then yields the image coordinates of the vanishing point;

[Equations (3)-(5): rendered as images in the source; not reproduced]

where n is the normal vector of the great circle corresponding to a lane line, C_p is the camera intrinsic matrix, and P_0, P_1 are the lane-line endpoints; D_v is the vanishing direction and A is the set of great-circle normal vectors of the lane lines extracted from the image; n_N is the normal vector of the N-th lane line, and v is the image coordinate of the vanishing point.
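The Gaussian-sphere construction behind equations (3)-(5) is a standard one and can be sketched with NumPy (an illustration of that standard construction; the patent's exact formulas are not reproduced in the source):

```python
import numpy as np

def vanishing_point(lines, Cp):
    """Vanishing point of parallel image lines via the Gaussian sphere.
    Each line is given by two pixel endpoints (P0, P1). Its great-circle
    normal is n = (Cp^-1 P0) x (Cp^-1 P1); stacking the normals into A,
    the vanishing direction D_v is the right-singular vector of A with
    the smallest singular value (A @ D_v ~ 0), and the vanishing point is
    v = Cp @ D_v, dehomogenized."""
    Cinv = np.linalg.inv(Cp)
    A = []
    for P0, P1 in lines:
        r0 = Cinv @ np.array([P0[0], P0[1], 1.0])
        r1 = Cinv @ np.array([P1[0], P1[1], 1.0])
        A.append(np.cross(r0, r1))
    _, _, Vt = np.linalg.svd(np.array(A))
    Dv = Vt[-1]
    v = Cp @ Dv
    return v[:2] / v[2]

# Two image lines that both pass through the pixel (700, 300).
Cp = np.array([[1000.0, 0.0, 640.0],
               [0.0, 1000.0, 360.0],
               [0.0, 0.0, 1.0]])
lines = [((650, 550), (600, 800)), ((775, 600), (850, 900))]
v = vanishing_point(lines, Cp)
```

With more than two lane lines the SVD gives the least-squares intersection, which is why the text stacks all great-circle normals into a single matrix A.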
Further, the step S16 specifically includes:
constructing the three-dimensional model between the vanishing point and the road slope based on perspective-mapping analysis from analytical photogrammetry: under the camera's perspective transformation model, straight lines parallel within a given plane P_l converge to one point in image space, the vanishing point, and by the light-path geometry the line joining the camera optical center to the vanishing point is parallel to the corresponding plane P_l; the pitch angle between the camera's principal optical axis and the plane P_l can therefore be expressed by equation (6):

θ = arctan( y_v / f )   (6)

where y_v is the ordinate of the vanishing point in the image, measured relative to the principal point, and f is the camera focal length;
when the road plane has a slope, the road portion of the image is divided into two distinct planes, a near plane S_near and a far plane S_far, whose corresponding lane lines intersect in image space at a near vanishing point V_near and a far vanishing point V_far respectively; the road surface slope is calculated according to equation (7):

θ_img = arctan( y_near / f ) - arctan( y_far / f )   (7)

where θ_img is the road surface slope calculated from the sequence-image data, and y_near and y_far are the image vanishing-point ordinates of the near and far regions.
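Equations (6) and (7) reduce to simple arctangents; a minimal sketch, assuming the vanishing-point ordinates are measured from the principal point with image y increasing downward:

```python
import math

def pitch_from_vanishing_point(y_v, f):
    """Equation (6): angle between the optical axis and the plane whose
    vanishing point has ordinate y_v (relative to the principal point),
    with f the camera focal length in pixels."""
    return math.atan2(y_v, f)

def road_slope_from_vanishing_points(y_near, y_far, f):
    """Slope of the far plane relative to the near plane as the difference
    of the two pitch angles. With image y increasing downward, an uphill
    far plane has the smaller ordinate, giving a positive slope."""
    return pitch_from_vanishing_point(y_near, f) - pitch_from_vanishing_point(y_far, f)
```

On a flat road the two vanishing points coincide and the slope is zero; the sign of the result distinguishes uphill from downhill.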
Further, the step S2 specifically includes the following sub-steps:
S21, input the crowdsourced trajectory data;
S22, look up the GPS position and three-dimensional velocity corresponding to each image timestamp;
S23, obtain the slope θ_gps from the arctangent of the GPS vertical speed over its horizontal speed.
Further, the step S23 specifically includes:
calculating the road grade according to equation (8):

θ_gps = arctan( V_Z / sqrt(V_X^2 + V_Y^2) )   (8)

where θ_gps is the road grade calculated from GPS speed information, V_Z is the GPS vertical velocity, and V_X and V_Y are the GPS lateral and longitudinal velocities in the horizontal plane.
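Equation (8) as described, the arctangent of the vertical GPS speed over the horizontal speed, is directly computable:

```python
import math

def grade_from_gps(vx, vy, vz):
    """Equation (8): road grade as the arctangent of the GPS vertical
    speed over the horizontal speed sqrt(vx^2 + vy^2)."""
    return math.atan2(vz, math.hypot(vx, vy))

# Example: 10 m/s horizontal, 0.5 m/s vertical, a grade of about 2.9 degrees.
grade = grade_from_gps(10.0, 0.0, 0.5)
```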
Further, the step S3 specifically includes the following sub-steps:
Step S31, input θ_gps and θ_img;
S32, solve for the slope-change control points;
Step S33, construct the road slope model;
S34, solve the slope model.
Further, the step S32 specifically includes:
when the slope θ_img detected in an image exceeds a threshold θ_T, calculating the position of the slope break point from the image and its corresponding GPS fix using equation (9), and setting that position as a slope control point;

[Equation (9): rendered as an image in the source; not reproduced]

where x_lon, y_lat, z_height are the longitude, latitude and altitude of the slope break point in the world coordinate system; the remaining symbols are the camera pitch angle and the horizontal and vertical coordinates of the break point in the image coordinate system; x_gps, y_gps, h are the time-aligned GPS longitude, latitude and camera height for the image.
Further, the step S33 is specifically:
constructing the slope model of the road using equation (10), under the assumption that the rate of change of the road grade is constant:

[Equation (10): rendered as an image in the source; not reproduced]

where the symbols denote the road grade, the road-length variable x, the rate of grade change, and a change constant T.
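Under the stated assumption of a constant rate of grade change, the slope model is linear in the distance along the road; a minimal sketch (the linear form is inferred from that assumption, since equation (10) itself is not reproduced in the source):

```python
def road_slope_model(x, k, T):
    """Grade at distance x along the road under a constant rate of grade
    change k, with T the grade at x = 0: slope(x) = k * x + T.
    (Linear form inferred from the constant-rate assumption.)"""
    return k * x + T
```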
Further, the step S34 specifically includes:
S341, fit θ_gps with a least-squares algorithm to obtain the grade-change constant:
1) determine the fitting curve:

[Equation (11): rendered as an image in the source; not reproduced]

where the curve fitted to θ_gps has three parameters and independent variable x;
2) calculate the sum of squared distances from every point to the curve using equation (12):

D_s = Σ_{i_p=1}^{n_p} ( f(x_{i_p}) - θ_{i_p} )^2   (12)

where D_s is the distance from the points to the curve, i_p is the point index, n_p is the total number of points, f(x_{i_p}) is the value calculated by equation (11), and θ_{i_p} is the θ_gps value of the i_p-th point;
3) minimize the sum of squares to obtain the parameter values of the fitting curve, and differentiate twice to obtain the grade-change constant;
S342, determine the slope model of the road between the start point, the slope-change control points and the end point from the grade values at the control points and the grade-change constant.
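Step S34 can be sketched as a least-squares polynomial fit followed by double differentiation (the quadratic form of the fitting curve is an assumption; the source names three curve parameters but does not reproduce equation (11)):

```python
import numpy as np

def fit_grade_change_constant(x, theta_gps):
    """Least-squares fit of the GPS grade samples with a three-parameter
    quadratic f(x) = a*x^2 + b*x + c, then differentiate the fitted curve
    twice; for a quadratic the second derivative is the constant 2a."""
    a, b, c = np.polyfit(x, theta_gps, 2)
    return 2.0 * a, (a, b, c)

# Noise-free synthetic grade samples following a known quadratic.
x = np.linspace(0.0, 100.0, 50)
theta = 2e-5 * x**2 + 1e-3 * x + 0.05
k, params = fit_grade_change_constant(x, theta)
```

On the synthetic data the recovered change constant equals twice the leading coefficient of the generating quadratic.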
Compared with the prior art, the invention has the following beneficial effects:
the method is based on low-cost crowdsourcing track data and crowdsourcing sequence image information, and solves the problem of high-precision map slope extraction. On one hand, crowdsourcing data can realize low-cost and large-range data coverage, efficiently extract large-range road gradient information and reduce the construction cost of a high-precision map; on the other hand, the method disclosed by the invention integrates the multi-vanishing-point information of the crowdsourcing sequence image and the GPS speed information, realizes the accurate calculation of the gradient information and meets the accuracy requirement of a high-precision map.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flow chart of the road grade extraction based on multiple vanishing points according to the present invention.
FIG. 3 is a schematic diagram of linear feature expression in the linear extraction process according to the present invention.
FIG. 4 is a schematic view of a three-dimensional model between a vanishing point and a road slope in accordance with the present invention.
FIG. 5 is an image diagram of an input in an embodiment of the present invention.
FIG. 6 is a diagram of a lane line extracted image according to an embodiment of the present invention.
FIG. 7 is a block diagram of a vanishing point extracted image in an embodiment of the invention.
FIG. 8 is a diagram of vanishing-point-based slope extraction according to the present invention.
FIG. 9 is a diagram of GPS-based grade extraction according to the present invention.
FIG. 10 is a diagram of the fused result according to the present invention.
FIG. 11 is a diagram of the input crowdsourced trajectory data according to the present invention.
FIG. 12 is a schematic diagram of determining the slope θ_gps in an embodiment of the present invention.
FIG. 13 is a schematic diagram of the input parameters θ_gps and θ_img in an embodiment of the present invention.
Fig. 14 is a schematic diagram of acquiring a gradient of a fusion road and outputting an accurate gradient according to the embodiment of the present invention.
Detailed Description
The technical solutions provided by the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the scope of the invention.
As shown in FIG. 1, the present embodiment provides a road surface gradient extraction method based on crowdsourced data, comprising the following steps:
S1, acquire crowdsourced sequence images, extract road marking information with computer vision, determine the vanishing-point positions of the flat plane and the inclined plane, and preliminarily calculate the road gradient information θ_img from the vanishing-point coordinates.
As shown in FIG. 2, step S1 specifically includes the following sub-steps:
S11, input the sequence image data, as shown in FIG. 5;
S12, obtain the road region at the bottom of the image and divide it into a near region and a far region; in this embodiment the division uses the distance from the lane line to the camera, with 30 meters as the boundary;
S13, extract edge points in the two regions using the width-constraint and gradient-symmetry algorithm. Step S13 specifically includes:
using an adaptive sliding window whose width equals the lane-line width, calculating the gradient of each pixel according to equation (1), and selecting pixels with a peak-valley gradient pair as candidate lane-line edge points, as shown in FIG. 6, which shows the positive and negative gradient result of one pixel row computed with equation (1);

[Equation (1): rendered as an image in the source; not reproduced]

where E_j is the gradient value, S is the sliding-window width (40 pixels in this embodiment), j is the pixel position, and I_k is the gray value of the pixel at position k within the sliding window.
Step S14, detect lane lines from the local-region edge points by constructing a voting space, and extract the lane lines, as shown in FIG. 7.
Step S14 specifically includes:
as shown in FIG. 3, performing the projective transformation of equation (2) on all candidate edge points in image space to recover the parallelism of the lane lines on the two sides, so that the intersections of the lane lines with the boundary of the projection space lie at the bottom and the top; the two points P_0 and P_1 at the bottom and top of the image uniquely define a straight line in image space, so at fixed height a line can be parameterized by its distance L from the image edge and the lateral deviation D between its upper and lower endpoints;

[Equation (2): rendered as an image in the source; not reproduced]

where the symbols denote the horizontal coordinate of a candidate edge point in row i, the first horizontal coordinate of row i, the pixel width of the detection region in row i, the width of the detection grid ([-10, 10] in this embodiment), and the coordinate of the candidate edge point after the projective transformation;
projecting every candidate edge point in the detection region into the voting space along each straight line through it and voting: the distance L from the image edge and the lateral deviation D between the upper and lower endpoints together form the voting space of lane-line features; extreme points are searched and candidate lane lines extracted;
calculating the parameters and residual of a least-squares line fitted to each candidate lane line; when the residual is below a given threshold, the candidate lane line is kept as a robust segment, and the other feature points belonging to the same segment are determined from the line parameters.
Step S15, using the lane lines extracted from the near and far regions in the previous step, calculate the image coordinates of the vanishing points on the Gaussian sphere. As shown in FIG. 8, once the lane markings of the near and far regions are identified, the two vanishing-point coordinates are calculated separately: if the road goes uphill, the vanishing point of the uphill region lies above the vanishing point corresponding to the flat region, and if it goes downhill, the vanishing point of the far downhill region lies below that of the near flat region, so the slope information of the road can be calculated from the difference of the vanishing-point positions. Each lane line extracted from the image defines, through equation (3), a great circle on the Gaussian sphere; the two great circles of two parallel lines in image space intersect at one point on the sphere, and the ray from the sphere center to that intersection (as shown in FIG. 9) gives the vanishing direction, computed by singular value decomposition using equation (4); using equation (5), the image coordinates of the vanishing points of the near and far regions are (1078, 512) and (1078, 471) respectively;

[Equations (3)-(5): rendered as images in the source; not reproduced]

where n is the normal vector of the great circle corresponding to a lane line; C_p is the camera intrinsic matrix (its numeric value in this embodiment is rendered as an image in the source and is not reproduced); P_0, P_1 are the lane-line endpoints; D_v is the vanishing direction and A is the set of great-circle normal vectors of the extracted lane lines; n_N is the normal vector of the N-th lane line, and v is the image coordinate of the vanishing point.
Step S16, construct the three-dimensional model between the vanishing points and the road slope based on perspective-mapping analysis from analytical photogrammetry, and calculate the road slope from that model, as shown in FIGS. 4 and 10.
Step S16 specifically includes:
under the camera's perspective transformation model, straight lines parallel within a given plane P_l converge to one point in image space, the vanishing point, and by the light-path geometry the line joining the camera optical center to the vanishing point is parallel to the corresponding plane P_l; the pitch angle between the camera's principal optical axis and the plane P_l can therefore be expressed by equation (6):

θ = arctan( y_v / f )   (6)

where y_v is the ordinate of the vanishing point in the image, measured relative to the principal point, and f is the camera focal length (1.01e+03 in this embodiment).
When the road plane has a slope, the road portion of the image is divided into two distinct planes, a near plane S_near and a far plane S_far, whose corresponding lane lines intersect in image space at a near vanishing point V_near and a far vanishing point V_far respectively; the road surface slope calculated according to equation (7) is 0.0328 rad, which converts to an angle of 1.88 degrees;

θ_img = arctan( y_near / f ) - arctan( y_far / f )   (7)

where θ_img is the road surface slope calculated from the sequence-image data, f is the camera focal length 1.01e+03, and y_near and y_far, the image vanishing-point ordinates of the near and far regions, are 512 and 477 respectively.
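A quick arithmetic check of the embodiment's numbers, assuming the computed grade is 0.0328 rad (the value consistent with the stated 1.88 degrees):

```python
import math

# Convert the embodiment's grade of 0.0328 rad to degrees.
deg = math.degrees(0.0328)
```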
S2, acquire the crowdsourced trajectory data and calculate the road grade information θ_gps from the ratio of the GPS vertical and horizontal speeds.
Step S2 specifically includes the following sub-steps:
Step S21, input the crowdsourced trajectory data, as in FIG. 11;
S22, look up the GPS position and three-dimensional velocity corresponding to each image timestamp;
S23, calculate the slope θ_gps from the arctangent of the GPS vertical speed over its horizontal speed.
FIG. 12 shows a schematic of determining road grade from GPS speed information. The road grade calculated from the image multi-vanishing points observes the grade value more accurately where the grade of the road surface changes sharply; by combining the two, the road-surface grade at the key nodes and the trend of the grade change can be determined simultaneously.
Step S23 specifically includes:
calculating the road grade according to equation (8):

θ_gps = arctan( V_Z / sqrt(V_X^2 + V_Y^2) )   (8)

where θ_gps is the road grade calculated from GPS speed information, V_Z is the GPS vertical velocity, and V_X and V_Y are the GPS lateral and longitudinal velocities in the horizontal plane.
In step S3, the gradient data θ_img and θ_gps are fused.
Step S3 specifically includes the following sub-steps:
Step S31, input θ_gps and θ_img, as in FIG. 13;
S32, solving the gradient change control point; when the gradient θ_camera detected in the image exceeds the threshold ε = 0.8, the position of the gradient abrupt change point is calculated by formula (9) using the image and its corresponding GPS positioning information, and this position is set as a gradient control point;
[Formula (9), given only as an image in the source, maps the gradient abrupt change point from image coordinates to world coordinates.]    (9)

In the formula, x_lon, y_lat and z_height respectively represent the longitude, latitude and altitude of the gradient abrupt change point in the world coordinate system, θ_s is the camera pitch angle, u and a are the horizontal and vertical coordinates of the gradient abrupt change point in the image coordinate system, and x_gps, y_gps and h are the GPS longitude, the GPS latitude and the camera height (1.65 m in this embodiment) time-aligned with this image.
Step S33, constructing a road gradient model; the method specifically comprises the following steps:
a gradient model of the road is constructed using equation (10), assuming that the gradient change rate of the road is a constant:

dθ(x)/dx = T    (10)

where θ denotes the road gradient, x is the length variable of the road, dθ/dx denotes the gradient change rate, and T is the change constant. (The exact form of equation (10) is given only as an image in the source; the differential form above follows from the stated symbol definitions.)
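Under the constant-change-rate assumption of equation (10), the gradient profile integrates to a linear function of road length. A minimal sketch under that assumption (all names are illustrative):

```python
def road_gradient_model(x, theta0, T):
    # Integrating d(theta)/dx = T (equation (10), constant gradient
    # change rate) gives a gradient varying linearly with road length.
    return theta0 + T * x

# Gradient 0.02 rad at the start, change constant 0.005 per metre:
print(road_gradient_model(10.0, 0.02, 0.005))  # 0.02 + 0.05
```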
And S34, calculating a gradient model.
S341, fitting θ_GPS using the least squares algorithm to obtain the gradient change constant:
1) Determining the fitted curve:

f(x) = a_0 + a_1·x + a_2·x²    (11)

where f(x) represents the fitted curve of θ_GPS, a_0, a_1 and a_2 are the curve parameters, and x is the curve independent variable.
2) Calculating the sum of the squared distances from each point to the curve using equation (12):

D_s = Σ_{i_p = 1}^{n_p} ( f(x_{i_p}) − θ_GPS(i_p) )²    (12)

where D_s is the distance from the points to the curve, i_p is the serial number of a point, n_p is the total number of points, f(x_{i_p}) is the value calculated by formula (11), and θ_GPS(i_p) is the θ_GPS value of the i_p-th point.
3) Minimizing the sum of squares to obtain the fitted-curve parameter values a_0, a_1 and a_2, and differentiating to obtain the gradient change constant T = 2a_2 + a_1. In this embodiment, the gradient change constant calculated between the start point and the gradient change control point is 0.005, and the gradient change constant calculated between the gradient change control point and the end point is 0.0036.
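Steps 1) to 3) of S341 amount to an ordinary quadratic least-squares fit followed by the T = 2a_2 + a_1 rule stated in the claims. A sketch using `numpy.polyfit`, which minimizes the same sum of squared residuals D_s (the function name is an assumption):

```python
import numpy as np

def fit_gradient_change_constant(x, theta_gps):
    # Fit f(x) = a0 + a1*x + a2*x^2 (equation (11)) by least squares;
    # np.polyfit minimises the same sum of squared residuals D_s as
    # equation (12), and returns coefficients highest power first.
    a2, a1, a0 = np.polyfit(x, theta_gps, 2)
    # Gradient change constant as stated in the claims: T = 2*a2 + a1.
    return 2.0 * a2 + a1, (a0, a1, a2)

# Synthetic check: an exactly quadratic theta_GPS profile.
x = np.arange(20, dtype=float)
theta = 0.01 + 0.002 * x + 0.001 * x ** 2
T, coeffs = fit_gradient_change_constant(x, theta)
print(T)  # close to 2*0.001 + 0.002 = 0.004
```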
S342, determining the road gradient model between the start point, the gradient change control point and the end point through the gradient value of the control point and the gradient change constants.
S4, acquiring the fused road gradient and outputting the accurate gradient. As shown in FIG. 14, the control points of the road gradient change are first extracted through the image multi-vanishing-point gradient θ_camera, and then the gradient change model between the road start point and the control point and the gradient change model between the control point and the road end point are obtained through θ_GPS.
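Putting S4 together: with the control point located via θ_camera and the two change constants fitted from θ_GPS (0.005 before and 0.0036 after the control point in the embodiment), the fused gradient profile can be evaluated piecewise. The patent gives no explicit function for this step, so the sketch below, including all parameter names, is illustrative:

```python
def piecewise_gradient(x, x_control, theta_start, T1, theta_control, T2):
    # Two linear gradient segments joined at the control point found
    # in S32: start -> control with change constant T1, and control
    # -> end with change constant T2 (0.005 and 0.0036 in the
    # embodiment). theta_control is the gradient at the control point.
    if x <= x_control:
        return theta_start + T1 * x
    return theta_control + T2 * (x - x_control)

# Continuity holds when theta_control = theta_start + T1 * x_control:
theta_c = 0.0 + 0.005 * 10.0
print(piecewise_gradient(15.0, 10.0, 0.0, 0.005, theta_c, 0.0036))
```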
The technical means disclosed in the present invention are not limited to those disclosed in the above embodiments, and also include technical solutions formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.

Claims (8)

1. A road surface gradient extraction method based on crowdsourcing data is characterized by comprising the following steps:
S1, acquiring a crowdsourcing sequence image, extracting road marking information based on computer vision, determining vanishing point positions of the plane and the inclined plane, and preliminarily calculating road gradient information θ_camera based on the coordinate positions of the vanishing points;
The step S1 specifically includes the following substeps:
s11, inputting sequence image data;
s12, acquiring a road area at the bottom of the image, and dividing the road area on the image into a near area and a far area;
S13, respectively extracting edge points of the two segmented areas of the image using a width-limitation and gradient-symmetry algorithm; calculating the gradient of each pixel using an adaptive sliding window whose width is the same as the width of the lane line, and selecting pixel points with peak-valley gradient pairs as candidate edge points of the lane line;
[Formula (1), given only as an image in the source: the gradient E_j computed over the sliding window]    (1)

where E_j is the gradient value, S is the width of the sliding window, j is the pixel position, I is the pixel gray value, and k is the position of the pixel within the sliding window;
s14, constructing a voting space detection lane line from edge points of a local area to extract the lane line;
s15, calculating image coordinates of vanishing points based on Gaussian balls by using the lane line extraction result in the previous step;
S16, constructing a three-dimensional model between the vanishing point and the road gradient based on the analytic photogrammetry perspective mapping analysis method, and calculating the road gradient θ_camera based on this three-dimensional model;
S2, acquiring crowdsourcing trajectory data, and calculating road gradient information θ_GPS based on the ratio of the GPS vertical speed to the horizontal speed;
Step S3, carrying out gradient data fusion on θ_camera and θ_GPS;
and S4, acquiring the gradient of the fusion road and outputting the accurate gradient.
2. The method for extracting a road surface gradient based on crowdsourcing data according to claim 1, wherein the step S14 specifically comprises:
performing projection transformation on all candidate edge points in the image space using formula (2), and recovering the parallel characteristic of the lane lines on the two sides so that the intersection points of the lane lines with the boundary of the projection space are located at the bottom and the top; the two points P_0 and P_1 at the bottom and the top of the image uniquely define a straight line in image space; accordingly, when the height is unchanged, the parameters of the straight line can be represented by the distance L from the edge of the image and the lateral deviation D between the upper and lower end points;

[Formula (2), given only as an image in the source: projection transformation of the candidate edge points]    (2)

where J_i is the horizontal coordinate of a candidate edge point on the i-th row, the two symbols shown only as images denote the first horizontal coordinate of the i-th row and the pixel width of the detection area of the i-th row, W_road is the width of the detection grid, and J_road is the coordinate of the candidate edge point after projection transformation;
projecting all candidate edge points in the detection area to a voting space through a straight line of any point in the image, voting, wherein the distance L from the edge of the image and the transverse deviation D between an upper end point and a lower end point are combined to form the voting space of the characteristics of the lane line, searching extreme points, and extracting the candidate lane line;
and calculating parameters and residual errors of the fitted straight line of the candidate lane line by using a least square method, taking the candidate lane line as a segment with stronger robustness when the residual errors are smaller than a given threshold value, and determining other characteristic points belonging to the same line segment according to the parameters.
3. The method for extracting a road surface gradient based on crowdsourcing data according to claim 1, wherein the step S15 specifically comprises:
calculating the image coordinates of the vanishing point based on the Gaussian sphere using the lane line extraction result of the previous step; each lane line extracted from the image corresponds, via formula (3), to a great circle on the Gaussian sphere, and the two great circles of two parallel lines in image space intersect at one point on the Gaussian sphere; computing the ray from the sphere center to the intersection point, the direction of the vanishing point can be calculated by singular value decomposition using formula (4), and the image coordinate of the vanishing point can be obtained using formula (5);
n = (C_p⁻¹ P_0) × (C_p⁻¹ P_1)    (3)

A·D_v = 0, A = [n_1, …, n_N]ᵀ    (4)

v = C_p·D_v    (5)

where n is the normal vector of the great circle corresponding to a lane line, C_p is the camera intrinsic matrix, P_0 and P_1 are the end points of the lane line, D_v is the direction of the vanishing point, A is the set of normal vectors of the Gaussian-sphere great circles corresponding to the lane lines extracted from the image, n_N is the normal vector corresponding to the N-th lane line, and v is the image coordinate of the vanishing point.
4. The method for extracting a road surface gradient based on crowdsourcing data according to claim 1, wherein the step S16 specifically comprises:
establishing a three-dimensional model between the vanishing point and the road gradient based on the analytic photogrammetry perspective mapping analysis method; according to the perspective transformation model of the camera, straight lines parallel to a given plane P_l converge to a point in the image space, called the vanishing point; according to the light path propagation process, the line connecting the camera optical center and the vanishing point is parallel to the corresponding plane P_l, so that the pitch angle of the camera principal optical axis relative to the plane P_l can be expressed by equation (6):

[Formula (6), given only as an image in the source: the pitch angle as a function of the vanishing point ordinate and the focal length]    (6)

where the symbol shown only as an image is the ordinate of the vanishing point in the image, and f is the focal length of the camera;

when the road plane has a slope, the road portion in the image is divided into two different planes, a near plane S_near and a far plane S_far, and the corresponding lane lines respectively intersect in the image space at a near vanishing point V_near and a far vanishing point V_far; the road surface gradient is calculated according to formula (7);

[Formula (7), given only as an image in the source: the road surface gradient from the near and far vanishing point ordinates]    (7)

where θ_camera is the road gradient calculated using the sequence image data, f is the camera focal length, and the two symbols shown only as images are the image vanishing point ordinates of the near and far regions, respectively.
5. The road surface gradient extraction method based on crowdsourcing data as claimed in claim 1, wherein the step S2 comprises the following sub-steps:
s21, inputting crowdsourcing track data;
s22, searching corresponding GPS positioning information and three-dimensional speed information according to the timestamp corresponding to the image;
S23, solving the gradient θ_GPS based on the arctangent of the vertical speed and the horizontal speed of the GPS.
6. The method for extracting a road surface gradient based on crowdsourcing data according to claim 5, wherein the step S23 specifically comprises:
calculating the road gradient according to equation (8):

θ_GPS = arctan( V_Z / √(V_X² + V_Y²) )    (8)

where θ_GPS is the road gradient calculated using the GPS speed information, V_Z is the GPS velocity in the vertical direction, and V_X and V_Y are the lateral and longitudinal GPS velocities in the plane.
7. The method for extracting the road surface gradient based on the crowdsourcing data as claimed in claim 1, wherein the step S3 specifically comprises the following substeps:
Step S31, inputting θ_GPS and θ_camera;
S32, solving a gradient change control point;
S33, constructing a road gradient model; a gradient model of the road is constructed using equation (10), assuming that the gradient change rate of the road is a constant:

dθ(x)/dx = T    (10)

where θ represents the gradient of the road, x is the length variable of the road, dθ/dx represents the gradient change rate, and T is the change constant;
step S34, calculating the gradient model, wherein step S34 specifically comprises:
S341, fitting θ_GPS using the least squares algorithm to obtain the gradient change constant:
1) Determining a fitted curve:
f(x) = a_0 + a_1·x + a_2·x²    (11)

where f(x) represents the fitted curve of θ_GPS, a_0, a_1 and a_2 are curve parameters, and x is the length variable of the road;
2) calculating the sum of the squared distances from the points to the curve using equation (12):

D_s = Σ_{i_p = 1}^{n_p} ( f(x_{i_p}) − θ_GPS(i_p) )²    (12)

where D_s is the distance from point to curve, i_p is the serial number of a point, n_p is the total number of points, f(x_{i_p}) is the value calculated by formula (11), and θ_GPS(i_p) is the θ_GPS value of the i_p-th point;
3) minimizing the sum of squares to obtain the fitted-curve parameter values a_0, a_1, a_2, and differentiating to obtain the gradient change constant T = 2a_2 + a_1.
And S342, determining a slope model of the road between the starting point, the slope change control point and the end point according to the slope value and the slope change constant of the control point.
8. The method for extracting a road surface gradient based on crowdsourcing data according to claim 7, wherein the step S32 is specifically:
when the gradient θ_camera detected in the image exceeds the threshold ε, calculating the position of the gradient abrupt change point by formula (9) using the image and its corresponding GPS positioning information, and setting the position as a gradient control point;

[Formula (9), given only as an image in the source, maps the gradient abrupt change point from image coordinates to world coordinates.]    (9)

where x_lon, y_lat and z_height respectively represent the longitude, latitude and altitude of the gradient abrupt change point in the world coordinate system, θ_s is the camera pitch angle, u and a are the horizontal and vertical coordinates of the gradient abrupt change point in the image coordinate system, and x_gps, y_gps and h are the GPS longitude, the GPS latitude and the camera height time-aligned with this image.
CN202210955980.5A 2022-08-10 2022-08-10 Road surface gradient extraction method based on crowdsourcing data Active CN115035138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210955980.5A CN115035138B (en) 2022-08-10 2022-08-10 Road surface gradient extraction method based on crowdsourcing data

Publications (2)

Publication Number Publication Date
CN115035138A CN115035138A (en) 2022-09-09
CN115035138B true CN115035138B (en) 2022-11-22

Family

ID=83130141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210955980.5A Active CN115035138B (en) 2022-08-10 2022-08-10 Road surface gradient extraction method based on crowdsourcing data

Country Status (1)

Country Link
CN (1) CN115035138B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115511938A (en) * 2022-11-02 2022-12-23 清智汽车科技(苏州)有限公司 Height determining method and device based on monocular camera
CN115598635B (en) * 2022-12-15 2023-04-07 江苏索利得物联网有限公司 Millimeter wave radar ranging fusion method and system based on Beidou positioning
CN117928575B (en) * 2024-03-22 2024-06-18 四川省公路规划勘察设计研究院有限公司 Lane information extraction method, system, electronic device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012225806A (en) * 2011-04-20 2012-11-15 Toyota Central R&D Labs Inc Road gradient estimation device and program
CN109900254A (en) * 2019-03-28 2019-06-18 合肥工业大学 A kind of the road gradient calculation method and its computing device of monocular vision
CN110161513A (en) * 2018-09-28 2019-08-23 腾讯科技(北京)有限公司 Estimate method, apparatus, storage medium and the computer equipment of road grade
CN112862890A (en) * 2021-02-07 2021-05-28 黑芝麻智能科技(重庆)有限公司 Road gradient prediction method, road gradient prediction device and storage medium
CN114136312A (en) * 2021-11-25 2022-03-04 中汽研汽车检验中心(天津)有限公司 Gradient speed combined working condition development device and development method



Similar Documents

Publication Publication Date Title
CN115035138B (en) Road surface gradient extraction method based on crowdsourcing data
CN111272165B (en) Intelligent vehicle positioning method based on characteristic point calibration
CN102207389B (en) Vehicle position recognition system
CN109341706B (en) Method for manufacturing multi-feature fusion map for unmanned vehicle
CN111458720B (en) Oblique photography modeling method based on airborne laser radar data in complex mountain area
CN102208013B (en) Landscape coupling reference data generation system and position measuring system
JP6504316B2 (en) Traffic lane estimation system
CN102208036B (en) Vehicle position detection system
CN104848867B (en) The pilotless automobile Combinated navigation method of view-based access control model screening
US20090154793A1 (en) Digital photogrammetric method and apparatus using intergrated modeling of different types of sensors
CN109931939A (en) Localization method, device, equipment and the computer readable storage medium of vehicle
CN106980657A (en) A kind of track level electronic map construction method based on information fusion
WO2018133727A1 (en) Method and apparatus for generating orthophoto map
JP2011215057A (en) Scene matching reference data generation system and position measurement system
CN112346463B (en) Unmanned vehicle path planning method based on speed sampling
CN114526745A (en) Drawing establishing method and system for tightly-coupled laser radar and inertial odometer
KR102239562B1 (en) Fusion system between airborne and terrestrial observation data
CN112800938B (en) Method and device for detecting occurrence of side rockfall of unmanned vehicle
CN115265493B (en) Lane-level positioning method and device based on non-calibrated camera
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN114897409A (en) Method and system for evaluating road risk based on vehicle driving
CN114993298A (en) EKF-based template matching VO and wheel type odometer fusion positioning method
US20220404170A1 (en) Apparatus, method, and computer program for updating map
CN115855045A (en) Multi-mode fusion map building and positioning method applied to mine roadway
WO2022021209A1 (en) Electronic map generation method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant