CN112183378A - Road slope estimation method and device based on color and depth image

Info

Publication number
CN112183378A
Authority
CN
China
Prior art keywords
depth
pixel
rgb
image
candidate area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011053625.6A
Other languages
Chinese (zh)
Inventor
潘成伟
俞益洲
李一鸣
乔昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Original Assignee
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shenrui Bolian Technology Co Ltd and Shenzhen Deepwise Bolian Technology Co Ltd
Priority to CN202011053625.6A
Publication of CN112183378A
Legal status: Pending

Classifications

    • G06V 20/588 — Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06F 18/23 — Clustering techniques
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V 10/267 — Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06T 2207/10024 — Color image
    • G06T 2207/10028 — Range image; depth image; 3D point clouds
    • G06T 2207/30244 — Camera pose
    • G06T 2207/30256 — Lane; road marking

Abstract

The invention provides a road gradient estimation method and device based on color and depth images. The method comprises: obtaining the intrinsic and extrinsic parameters of a color camera and of a depth camera through camera calibration, and deriving a mapping relation for a point in space; capturing an RGB image of the road, and segmenting a candidate slope region with a road segmentation algorithm; capturing a depth image of the road, and mapping it onto the RGB image according to the mapping relation and the candidate region, to obtain the depth value of each pixel in the candidate region; back-projecting each pixel in the candidate region, together with its depth value, into three-dimensional space to obtain a three-dimensional point cloud of the candidate region; solving for the normal vector of each point of the point cloud; clustering the points by their normal vectors to divide the point cloud into two potential planes, and solving the equations of the two planes by the least squares method; and calculating the gradient from the solved plane equation coefficients.

Description

Road slope estimation method and device based on color and depth image
Technical Field
The invention relates to the field of computer technology, and in particular to a road slope estimation method and device based on color and depth images.
Background
Visual-impairment training is carried out in outdoor road environments, and obtaining the gradient of the road surface ahead in real time has more and more application scenarios; it plays an important role in perceiving the surroundings and making the corresponding decisions. Existing road-gradient detection schemes are mostly designed for vehicles: sensors such as accelerometers, gyroscopes and inertial navigation units measure the gradient at the vehicle's current position, but they cannot predict the gradient of the road ahead. Methods based on lane-line detection have also been used for gradient estimation, but they suit highly structured highway environments and lack generality and applicability in unknown environments. For example, in the patent CN 202010189986.7, "Straight road relative gradient real-time prediction method, system and device based on a monocular camera", the monocular camera is first calibrated; lane lines are then detected in the image captured by the camera and projected into a three-dimensional point curve according to the camera imaging model; the curve is divided into two straight line segments whose intersection is regarded as the gradient turning point of the lane line, and the gradient value is calculated from the three-dimensional point curve. That method places high demands on lane-line detection and is only suitable for straight lanes; in complex urban traffic environments, for example when the lane lines are occluded, it may fail.
Disclosure of Invention
The present invention aims to provide a method and apparatus for color and depth image based road gradient estimation that overcomes or at least partially addresses the above-mentioned problems.
To achieve the above aim, the technical solution of the invention is realized as follows:
One aspect of the present invention provides a road gradient estimation method based on color and depth images, including: obtaining the intrinsic parameter K_rgb and extrinsic parameters (R_rgb, T_rgb) of a color camera and the intrinsic parameter K_d and extrinsic parameters (R_d, T_d) of a depth camera through camera calibration; for a point P_w = (X_w, Y_w, Z_w)^T in space, the following mapping relationship is obtained:

Z_rgb · p_rgb = K_rgb (R_rgb P_w + T_rgb)    (1)
Z_d · p_d = K_d (R_d P_w + T_d)              (2)

where p_rgb = (u_rgb, v_rgb, 1)^T is the corresponding homogeneous pixel position on the RGB image and Z_rgb is the corresponding pixel depth on the RGB image, and p_d = (u_d, v_d, 1)^T is any pixel in a given depth image and Z_d is its depth; capturing an RGB image of the road, and segmenting a candidate slope region with a road segmentation algorithm; capturing a depth image of the road, and mapping it onto the RGB image according to the mapping relationship and the candidate region, to obtain the depth value of each pixel in the candidate region; back-projecting each pixel in the candidate region, together with its depth value, into three-dimensional space according to the extent of the candidate region and the per-pixel depth values, to obtain a three-dimensional point cloud of the candidate region; solving for the normal vector of each point of the point cloud; clustering the points by their normal vectors, dividing the point cloud into two potential planes, and solving the equations of the two planes, a_1·x + b_1·y + c_1·z + d_1 = 0 and a_2·x + b_2·y + c_2·z + d_2 = 0, by the least squares method; and calculating the gradient from the solved plane equation coefficients.
Capturing the depth image of the road and mapping it to the RGB image according to the mapping relation and the candidate region to obtain the depth value of each pixel in the candidate region includes: projecting each pixel in the depth image, together with its depth, into the RGB image to obtain the corresponding pixel position (px, py) and depth value d; rounding px and py to the nearest integers; and assigning d to the pixel at the rounded coordinates.
Solving for the normal vector of each point of the three-dimensional point cloud includes the following steps: taking k (k ≥ 2) points around each point and using them to compute the normal vector; when k = 2, computing the normal vector of the triangle as the normal vector of the point; and when k > 2, constructing a plurality of triangles and taking the average of their normal vectors as the normal vector of the point.
Calculating the gradient from the solved plane equation coefficients includes: obtaining from the plane equation coefficients the normal vectors N_1 = (a_1, b_1, c_1) and N_2 = (a_2, b_2, c_2) of the two planes; the cosine of the slope angle θ is then calculated as:

cos θ = |N_1 · N_2| / (‖N_1‖ ‖N_2‖)
the road segmentation algorithm adopts a CNN image segmentation algorithm.
Another aspect of the present invention provides a road gradient estimation apparatus based on color and depth images, including: a calibration module for obtaining the intrinsic parameter K_rgb and extrinsic parameters (R_rgb, T_rgb) of a color camera and the intrinsic parameter K_d and extrinsic parameters (R_d, T_d) of a depth camera through camera calibration, and for obtaining, for a point P_w = (X_w, Y_w, Z_w)^T in space, the following mapping relationship:

Z_rgb · p_rgb = K_rgb (R_rgb P_w + T_rgb)    (1)
Z_d · p_d = K_d (R_d P_w + T_d)              (2)

where p_rgb = (u_rgb, v_rgb, 1)^T is the corresponding homogeneous pixel position on the RGB image and Z_rgb is the corresponding pixel depth on the RGB image, and p_d = (u_d, v_d, 1)^T is any pixel in a given depth image and Z_d is its depth; a segmentation module for capturing an RGB image of the road and segmenting a candidate slope region with a road segmentation algorithm; a mapping module for capturing a depth image of the road and mapping it onto the RGB image according to the mapping relationship and the candidate region, to obtain the depth value of each pixel in the candidate region; a reconstruction module for back-projecting each pixel in the candidate region, together with its depth value, into three-dimensional space to obtain a three-dimensional point cloud of the candidate region; and a calculation module for solving for the normal vector of each point of the point cloud, clustering the points by their normal vectors, dividing the point cloud into two potential planes, solving the equations of the two planes, a_1·x + b_1·y + c_1·z + d_1 = 0 and a_2·x + b_2·y + c_2·z + d_2 = 0, by the least squares method, and calculating the gradient from the solved plane equation coefficients.
The mapping module captures the depth image of the road and maps it to the RGB image according to the mapping relation and the candidate region to obtain the depth value of each pixel in the candidate region as follows: the mapping module is specifically configured to project each pixel in the depth image, together with its depth, into the RGB image to obtain the corresponding pixel position (px, py) and depth value d, to round px and py to the nearest integers, and to assign d to the pixel at the rounded coordinates.
The calculation module solves for the normal vector of each point of the three-dimensional point cloud as follows: the calculation module is specifically configured to take k (k ≥ 2) points around each point and use them to compute the normal vector; when k = 2, to compute the normal vector of the triangle as the normal vector of the point; and when k > 2, to construct a plurality of triangles and take the average of their normal vectors as the normal vector of the point.
The calculation module calculates the gradient from the solved plane equation coefficients as follows: the calculation module is specifically configured to obtain from the plane equation coefficients the normal vectors N_1 = (a_1, b_1, c_1) and N_2 = (a_2, b_2, c_2) of the two planes, and to calculate the cosine of the slope angle θ as:

cos θ = |N_1 · N_2| / (‖N_1‖ ‖N_2‖)
the road segmentation algorithm adopts a CNN image segmentation algorithm.
Therefore, the road slope estimation method and device based on color and depth images provided by the invention focus on road slope estimation in the all-weather, multi-scene setting of visual-impairment training, and are not limited to slope estimation on structured roads. A road-surface segmentation algorithm is used to segment the candidate region for slope estimation, and the slope is calculated from a surface rather than a line (the lane line), so the range of application is wider and the estimation accuracy higher (a surface carries more information than a line). A slope turning point is difficult to capture from the color and texture information in the image alone, but the normal vectors differ sharply near the turning point; by recovering the orientation of the points in the candidate slope region from the depth information, the turning point can be estimated more accurately and the slope can be estimated better.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
FIG. 1 is a flow chart of a method for road grade estimation based on color and depth images provided by an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a road gradient estimation device based on color and depth images according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a flowchart illustrating a road gradient estimation method based on color and depth images according to an embodiment of the present invention, and referring to fig. 1, the road gradient estimation method based on color and depth images according to an embodiment of the present invention includes:
s1, obtaining the intrinsic parameters K of the color camera through camera calibrationrgbAnd extrinsic parameter (R)rgb,Trgb) Intrinsic parameter K of depth cameradAnd extrinsic parameter (R)d,Td) For a point in space (X)w,Yw,Zw) The following mapping relationship is obtained:
Figure BDA0002710286370000041
Figure BDA0002710286370000042
wherein p isrgbFor corresponding pixel positions on the RGB image, ZrgbThe corresponding pixel depth on the RGB image; p is a radical ofdFor any pixel in a given depth image, ZdFor the depth of any pixel in a given depth image,
Figure BDA0002710286370000043
is the pixel coordinates of the RGB image,
Figure BDA0002710286370000044
is the depth image pixel coordinate.
Specifically, obtaining an intrinsic parameter K of a color camera by camera calibrationrgbAnd extrinsic parameter (R)rgb,Trgb) Intrinsic parameter K of depth cameradAnd extrinsic parameter (R)d,Td). For a point in space (X)w,Yw,Zw) The following relationship exists:
Figure BDA0002710286370000045
order to
Figure BDA0002710286370000047
Pixel coordinate positions of the RGB image and the depth image, respectively. For any pixel p in a given depth imagedAnd depth Z of the samedWe can use equation (2) to get its position (X) in three-dimensional spacew,Yw,Zw) Then, the corresponding pixel position p on the RGB image is obtained by using the formula (1)rgbAnd depth Z of the samergb
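As an illustration of this two-step mapping, the sketch below inverts equation (2) and then applies equation (1). The calibration values are hypothetical placeholders, since the patent gives no numeric parameters:

```python
import numpy as np

# Hypothetical calibration values for illustration only (not from the patent).
K_d = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
K_rgb = np.array([[520.0, 0.0, 319.5], [0.0, 520.0, 239.5], [0.0, 0.0, 1.0]])
R_d, T_d = np.eye(3), np.zeros(3)               # depth-camera extrinsics
R_rgb = np.eye(3)                               # color-camera rotation
T_rgb = np.array([0.025, 0.0, 0.0])             # e.g. a 2.5 cm baseline

def depth_pixel_to_world(p_d, Z_d):
    """Invert equation (2): back-project depth pixel (u, v) with depth Z_d."""
    ph = np.array([p_d[0], p_d[1], 1.0])        # homogeneous pixel coordinates
    X_cam = Z_d * np.linalg.inv(K_d) @ ph       # point in the depth-camera frame
    return np.linalg.inv(R_d) @ (X_cam - T_d)   # world point (X_w, Y_w, Z_w)

def world_to_rgb_pixel(P_w):
    """Apply equation (1): project a world point into the RGB image."""
    v = K_rgb @ (R_rgb @ P_w + T_rgb)
    return v[:2] / v[2], v[2]                   # p_rgb and its depth Z_rgb

P_w = depth_pixel_to_world((400.0, 300.0), 2.0)
p_rgb, Z_rgb = world_to_rgb_pixel(P_w)
```

With these placeholder extrinsics, which differ only by a translation along the camera x-axis, the depth of the point is preserved (Z_rgb = Z_d) while its pixel position shifts.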
S2: capturing an RGB image of the road, and segmenting the candidate slope region through a road segmentation algorithm.
Specifically, an RGB image of a road may be captured by a color camera, and candidate regions of a slope may be segmented by a road segmentation algorithm.
As an optional implementation manner of the embodiment of the present invention, a CNN image segmentation algorithm is adopted as a road segmentation algorithm.
S3: capturing the depth image of the road, and mapping it to the RGB image according to the mapping relation and the candidate region, to obtain the depth value of each pixel in the candidate region.

Specifically, a depth camera may be used to capture the depth image of the road, and the depth image may be mapped to the RGB image according to the mapping relation of step S1 and the candidate region obtained in step S2, to obtain the depth value of each pixel in the candidate region. The depth can also be estimated from an RGB image captured by a monocular (color) camera.
As an optional implementation of the embodiment of the invention, capturing the depth image of the road and mapping it to the RGB image according to the mapping relation and the candidate region to obtain the depth value of each pixel in the candidate region includes: projecting each pixel in the depth image, together with its depth, into the RGB image to obtain the corresponding pixel position (px, py) and depth value d; rounding px and py to the nearest integers; and assigning d to the pixel at the rounded coordinates.

The specific depth calculation proceeds as follows. Each pixel in the depth map, together with its depth, is projected into the RGB image to obtain the corresponding pixel position (px, py) and depth value d; for reasons of computation time, px and py are rounded in nearest-neighbour fashion and d is assigned to the pixel at the rounded coordinates. The nearest-neighbour method is simply rounding: the computed px and py may be floating-point numbers while pixel coordinates are integers, so the pixel correspondence is obtained by rounding px and py.

In addition, instead of obtaining the depth information in the RGB image by the nearest-neighbour method, an interpolation method may be used in which, for each pixel, the depths of the depth-image projection points within a circle of radius r are selected and weighted.
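The nearest-neighbour assignment described above can be sketched as follows; here `project` stands in for the calibrated depth-to-RGB mapping and is a hypothetical callable, not something named in the patent:

```python
import numpy as np

def map_depth_to_rgb(depth, project, rgb_shape):
    """Warp a depth image into the RGB frame: forward-project each valid
    depth pixel, round the target coordinates to the nearest integers,
    and assign the depth value to the rounded pixel."""
    out = np.zeros(rgb_shape, dtype=depth.dtype)
    h, w = depth.shape
    for v in range(h):
        for u in range(w):
            d = depth[v, u]
            if d <= 0:                                # skip invalid readings
                continue
            px, py = project(u, v, d)                 # floating-point target
            ix, iy = int(round(px)), int(round(py))   # nearest-neighbour rounding
            if 0 <= ix < rgb_shape[1] and 0 <= iy < rgb_shape[0]:
                out[iy, ix] = d
    return out

# toy check: a mapping that shifts pixels by 0.4 px rounds back onto the grid
toy_depth = np.array([[0.0, 2.0],
                      [3.0, 0.0]])
mapped = map_depth_to_rgb(toy_depth, lambda u, v, d: (u + 0.4, v), (2, 2))
```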
S4: according to the extent of the candidate region and the depth value of each pixel in it, back-projecting each pixel in the candidate region, together with its depth value, into three-dimensional space to obtain the three-dimensional point cloud of the candidate region.

Specifically, according to the candidate region obtained in step S2 and the depth information obtained in step S3, each pixel in the candidate region and its depth value are back-projected into three-dimensional space to obtain the point cloud of the candidate region.

A point cloud obtained according to the photogrammetric principle includes three-dimensional coordinates (XYZ) and color information (RGB); the coordinates of the real three-dimensional point corresponding to each pixel can be calculated from its depth value, and here only the three-dimensional coordinates are considered.
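A minimal sketch of this back-projection step, assuming the depth values have already been warped into the RGB frame and using hypothetical intrinsics:

```python
import numpy as np

def backproject_region(depth_in_rgb, mask, K):
    """Back-project the masked pixels with their depth values into 3-D
    points in the camera frame by inverting the pinhole projection."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    vs, us = np.nonzero(mask & (depth_in_rgb > 0))   # candidate-area pixels
    z = depth_in_rgb[vs, us]
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=1)               # (N, 3) point cloud

# hypothetical intrinsics and a single valid pixel at the principal point
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
depth = np.zeros((480, 640))
mask = np.zeros((480, 640), dtype=bool)
depth[240, 320] = 2.0
mask[240, 320] = True
cloud = backproject_region(depth, mask, K)
```

A pixel at the principal point back-projects onto the optical axis, so its 3-D point is (0, 0, depth).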
S5: solving for the normal vector of each point of the three-dimensional point cloud.

As an optional implementation of the embodiment of the invention, solving for the normal vector of each point of the point cloud includes: taking k (k ≥ 2) points around each point and using them to compute the normal vector; when k = 2, computing the normal vector of the triangle as the normal vector of the point; and when k > 2, constructing a plurality of triangles and taking the average of their normal vectors as the normal vector of the point.

The specific solving process is as follows: for each point, its k (k ≥ 2) surrounding points are taken and used to compute the normal vector. When k = 2, the normal vector of the triangle formed by the point and its two neighbours is taken as the normal vector of the point. When k > 2, several triangles are constructed and the average of their normal vectors is taken as the normal vector of the point.

In addition, the normal vector of a point may also be estimated by fitting a disk instead of using triangles.
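The triangle-based normal estimate can be sketched as below; the neighbour-selection strategy (how the k points are chosen and ordered) is left abstract, as the patent does not specify it:

```python
import numpy as np

def point_normal(p, neighbors):
    """Normal at point p from its k (k >= 2) neighbours: average the unit
    normals of the triangles (p, n_i, n_{i+1}) over consecutive pairs."""
    p = np.asarray(p, dtype=float)
    nb = np.asarray(neighbors, dtype=float)
    assert len(nb) >= 2, "at least two neighbours are required"
    normals = []
    for i in range(len(nb) - 1):
        n = np.cross(nb[i] - p, nb[i + 1] - p)   # triangle normal
        length = np.linalg.norm(n)
        if length > 1e-12:                       # skip degenerate triangles
            normals.append(n / length)
    avg = np.mean(normals, axis=0)
    return avg / np.linalg.norm(avg)             # averaged unit normal

# toy check: neighbours in the z = 0 plane give a vertical normal
n = point_normal([0.0, 0.0, 0.0],
                 [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [-1.0, 1.0, 0.0]])
```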
S6: clustering the points by their normal vectors, dividing the three-dimensional point cloud into two potential planes, and solving the equations of the two planes, a_1·x + b_1·y + c_1·z + d_1 = 0 and a_2·x + b_2·y + c_2·z + d_2 = 0, by the least squares method.
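The patent does not name a particular clustering algorithm; as one plausible stand-in, a tiny two-centre k-means on the unit normal vectors separates the two slope faces:

```python
import numpy as np

def split_by_normals(normals, iters=20):
    """Split points into two groups by clustering their unit normals
    (a two-centre k-means using cosine similarity as the affinity)."""
    normals = np.asarray(normals, dtype=float)
    # seed with the first normal and the normal least similar to it
    c = np.stack([normals[0], normals[np.argmin(normals @ normals[0])]])
    labels = np.zeros(len(normals), dtype=int)
    for _ in range(iters):
        labels = np.argmax(normals @ c.T, axis=1)      # assign by similarity
        for j in (0, 1):
            if np.any(labels == j):
                m = normals[labels == j].mean(axis=0)
                c[j] = m / np.linalg.norm(m)           # re-normalise centre
    return labels

# toy check: three upward-facing and three forward-facing normals
toy = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1],
                [0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float)
labels = split_by_normals(toy)
```

Because normals change sharply at the slope turning line, the two resulting clusters correspond to the two candidate planes that are then fitted by least squares.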
S7: calculating the gradient from the solved plane equation coefficients.

As an optional implementation of the embodiment of the invention, calculating the gradient from the solved plane equation coefficients includes: obtaining from the coefficients the normal vectors N_1 = (a_1, b_1, c_1) and N_2 = (a_2, b_2, c_2) of the two planes, and calculating the cosine of the slope angle θ as:

cos θ = |N_1 · N_2| / (‖N_1‖ ‖N_2‖)

Specifically, the gradient is calculated from the plane equation coefficients solved in step S6: the normal vectors of the two planes are N_1 = (a_1, b_1, c_1) and N_2 = (a_2, b_2, c_2), and the cosine of the slope angle is computed with the formula above.
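Steps S6–S7 can be sketched with an SVD-based least-squares plane fit (one common way to solve a·x + b·y + c·z + d = 0 in the least-squares sense) followed by the cosine formula for the slope angle:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane a*x + b*y + c*z + d = 0: the normal is the
    singular vector of the centred points with the smallest singular value."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                              # (a, b, c), unit length
    return normal, -normal @ centroid            # coefficients and d

def slope_cos(N1, N2):
    """Cosine of the slope angle between the two plane normals."""
    return abs(N1 @ N2) / (np.linalg.norm(N1) * np.linalg.norm(N2))

# toy check: flat ground (z = 0) against a 45-degree ramp (z = x)
ground = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
ramp = np.array([[0, 0, 0], [1, 0, 1], [0, 1, 0], [1, 1, 1]], dtype=float)
N1, _ = fit_plane(ground)
N2, _ = fit_plane(ramp)
cos_theta = slope_cos(N1, N2)
```

For the toy data the two normals meet at 45°, so the computed cosine is √2/2 ≈ 0.707.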
Thus, with the road gradient estimation method based on color and depth images provided by the embodiment of the invention, a color camera and a depth camera capture the road information, and gradient estimation combines road candidate-region segmentation with three-dimensional reconstruction of the road-surface point cloud. The road-surface candidate region segmented by the image segmentation technique describes the slope with the higher-dimensional information of a surface rather than a line. The depth information is used to reconstruct the three-dimensional point cloud of the slope surface; combined with the normal information, the turning line of the slope is found more reliably, the cloud is divided into two surfaces at the turning line, and the gradient is estimated from the geometric parameters of the planes with higher accuracy.
Fig. 2 is a schematic structural diagram of the road gradient estimation device based on color and depth images provided by an embodiment of the present invention, which applies the above method. Only the structure of the device is briefly explained below; for everything else, please refer to the related description of the road gradient estimation method based on color and depth images. Referring to fig. 2, the road gradient estimation device based on color and depth images provided by an embodiment of the present invention includes:
a calibration module for obtaining the intrinsic parameter K_rgb and extrinsic parameters (R_rgb, T_rgb) of the color camera and the intrinsic parameter K_d and extrinsic parameters (R_d, T_d) of the depth camera through camera calibration, and for obtaining, for a point P_w = (X_w, Y_w, Z_w)^T in space, the following mapping relationship:

Z_rgb · p_rgb = K_rgb (R_rgb P_w + T_rgb)    (1)
Z_d · p_d = K_d (R_d P_w + T_d)              (2)

where p_rgb = (u_rgb, v_rgb, 1)^T is the corresponding homogeneous pixel position on the RGB image and Z_rgb is the corresponding pixel depth on the RGB image, and p_d = (u_d, v_d, 1)^T is any pixel in a given depth image with depth Z_d;
the segmentation module is used for capturing RGB images of roads and segmenting candidate regions of the slope through a road segmentation algorithm;
the mapping module is used for capturing the depth image of the road, and mapping the depth image to the RGB image according to the mapping relation and the candidate area to obtain the depth value of each pixel in the candidate area;
the reconstruction module is used for back-projecting each pixel in the candidate area to a three-dimensional space by using each pixel in the candidate area and the depth value of each pixel in the candidate area to obtain a three-dimensional point cloud of the candidate area;
the calculation module is used for solving the normal vector of each point on the three-dimensional point cloud; clustering by using normal information of three-dimensional points, dividing the three-dimensional point cloud into two potential planes, and solving equations of the two planes by using a least square method, a1x+b1y+c1z+d10 and a2x+b2y+c2z+d20; and calculating the gradient according to the solved plane equation coefficient.
As an optional implementation of the embodiment of the invention, the mapping module captures the depth image of the road and maps it to the RGB image according to the mapping relation and the candidate region to obtain the depth value of each pixel in the candidate region as follows: the mapping module is specifically configured to project each pixel in the depth image, together with its depth, into the RGB image to obtain the corresponding pixel position (px, py) and depth value d, to round px and py to the nearest integers, and to assign d to the pixel at the rounded coordinates.
As an optional implementation of the embodiment of the invention, the calculation module solves for the normal vector of each point of the three-dimensional point cloud as follows: the calculation module is specifically configured to take k (k ≥ 2) points around each point and use them to compute the normal vector; when k = 2, to compute the normal vector of the triangle as the normal vector of the point; and when k > 2, to construct a plurality of triangles and take the average of their normal vectors as the normal vector of the point.
As an optional implementation of the embodiment of the invention, the calculation module calculates the gradient from the solved plane equation coefficients as follows: the calculation module is specifically configured to obtain from the plane equation coefficients the normal vectors N_1 = (a_1, b_1, c_1) and N_2 = (a_2, b_2, c_2) of the two planes, and to calculate the cosine of the slope angle θ as:

cos θ = |N_1 · N_2| / (‖N_1‖ ‖N_2‖)
as an optional implementation manner of the embodiment of the present invention, a CNN image segmentation algorithm is adopted as a road segmentation algorithm.
Therefore, with the road gradient estimation device based on color and depth images provided by the embodiment of the invention, a color camera and a depth camera capture the road information, and gradient estimation combines road-surface candidate-region segmentation with three-dimensional reconstruction of the road-surface point cloud. The road-surface candidate region segmented by the image segmentation technique describes the slope with the higher-dimensional information of a surface rather than a line. The depth information is used to reconstruct the three-dimensional point cloud of the slope surface; combined with the normal information, the turning line of the slope is found more reliably, the cloud is divided into two surfaces at the turning line, and the gradient is estimated from the geometric parameters of the planes with higher accuracy.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A road gradient estimation method based on color and depth images, characterized by comprising:
obtaining, through camera calibration, an intrinsic parameter K_rgb and extrinsic parameters (R_rgb, T_rgb) of a color camera, and an intrinsic parameter K_d and extrinsic parameters (R_d, T_d) of a depth camera; for a point (X_w, Y_w, Z_w) in space, the following mapping relationship is obtained:

Z_rgb · p_rgb = K_rgb · (R_rgb · [X_w, Y_w, Z_w]^T + T_rgb)

Z_d · p_d = K_d · (R_d · [X_w, Y_w, Z_w]^T + T_d)

wherein p_rgb = (u_rgb, v_rgb, 1)^T is the corresponding pixel position on the RGB image, expressed in RGB image pixel coordinates, and Z_rgb is the corresponding pixel depth on the RGB image; p_d = (u_d, v_d, 1)^T is any pixel in the given depth image, expressed in depth image pixel coordinates, and Z_d is the depth of that pixel;
capturing RGB images of roads, and segmenting candidate regions of a slope through a road segmentation algorithm;
capturing a depth image of a road, and mapping the depth image to an RGB image according to the mapping relation and the candidate area to obtain a depth value of each pixel in the candidate area;
according to the candidate area range and the depth value of each pixel in the candidate area, back-projecting each pixel in the candidate area and the depth value thereof back to a three-dimensional space to obtain a three-dimensional point cloud of the candidate area;
solving a normal vector of each point on the three-dimensional point cloud;
clustering by using the normal information of the three-dimensional points, dividing the three-dimensional point cloud into two potential planes, and solving the equations of the two planes, a1·x + b1·y + c1·z + d1 = 0 and a2·x + b2·y + c2·z + d2 = 0, by the least square method;
And calculating the gradient according to the solved plane equation coefficient.
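Illustratively (this sketch is not part of the claims, and the function name `fit_plane` is hypothetical), the least-squares solution of a plane equation a·x + b·y + c·z + d = 0 from one cluster of three-dimensional points in claim 1 could look like:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of a*x + b*y + c*z + d = 0 to an (N, 3) point array.

    Centering the points and taking the right singular vector with the
    smallest singular value yields the unit normal (a, b, c); d then follows
    from the centroid lying on the plane.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                 # unit normal (a, b, c)
    d = -normal.dot(centroid)
    return np.append(normal, d)     # coefficients (a, b, c, d)
```

Running such a fit once per normal-vector cluster gives the two coefficient vectors (a1, b1, c1, d1) and (a2, b2, c2, d2) from which the gradient is calculated.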
2. The method of claim 1, wherein the capturing the depth image of the road and mapping the depth map to the RGB image according to the mapping relationship and according to the candidate area, and obtaining the depth value of each pixel in the candidate area comprises:
and projecting each pixel in the depth image and the depth thereof into the RGB image to obtain corresponding pixel positions px and py and a depth value d, rounding px and py in a nearest neighbor mode, and assigning d to the pixel with the rounded coordinate value.
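A minimal sketch of the depth-to-RGB mapping described in claim 2, assuming R and T already combine the two cameras' extrinsics into a single depth-to-RGB transform (all names here are illustrative, not taken from the patent):

```python
import numpy as np

def map_depth_to_rgb(depth, K_d, K_rgb, R, T, rgb_shape):
    """Project every depth pixel into the RGB image with nearest-neighbour
    rounding, returning a depth map aligned with the RGB image.

    Pixels that receive no depth sample stay 0.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])[:, valid]
    # back-project valid depth pixels to 3-D in the depth-camera frame
    pts_d = np.linalg.inv(K_d) @ (pix * z[valid])
    # transform into the RGB-camera frame and project
    proj = K_rgb @ (R @ pts_d + T.reshape(3, 1))
    px = np.rint(proj[0] / proj[2]).astype(int)
    py = np.rint(proj[1] / proj[2]).astype(int)
    out = np.zeros(rgb_shape)
    inside = (px >= 0) & (px < rgb_shape[1]) & (py >= 0) & (py < rgb_shape[0])
    out[py[inside], px[inside]] = proj[2][inside]
    return out
```

The rounding of px and py mirrors the claim's nearest-neighbor assignment; collisions are resolved here by last write, whereas a practical implementation might keep the smallest depth (z-buffering).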
3. The method of claim 1, wherein solving for the normal vector for each point on the three-dimensional point cloud comprises:
taking k (k ≥ 2) points around each point and computing the normal vector from them: when k = 2, the normal vector of the triangle formed by the point and its two neighboring points is taken as the normal vector of the point; when k > 2, a plurality of triangles are constructed, and the average of their normal vectors is taken as the normal vector of the point.
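The triangle-based normal estimation of claim 3 might be sketched as follows (illustrative only; the function name is hypothetical, and for k > 2 the neighbors are assumed to be given in a consistent angular order so that a triangle fan is well defined):

```python
import numpy as np

def point_normal(p, neighbors):
    """Normal at point p from k >= 2 neighboring 3-D points.

    k == 2: the normal of the single triangle (p, n0, n1).
    k > 2 : the average of the normals of the fan triangles (p, n_i, n_{i+1}).
    """
    p = np.asarray(p, dtype=float)
    neighbors = np.asarray(neighbors, dtype=float)
    normals = []
    for a, b in zip(neighbors[:-1], neighbors[1:]):
        n = np.cross(a - p, b - p)       # triangle normal (unnormalized)
        length = np.linalg.norm(n)
        if length > 1e-12:               # skip degenerate triangles
            normals.append(n / length)
    mean = np.mean(normals, axis=0)
    return mean / np.linalg.norm(mean)
```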
4. The method of claim 1, wherein calculating a slope from solved plane equation coefficients comprises:
obtaining the normal vectors of the two planes from the plane equation coefficients, N1 = (a1, b1, c1) and N2 = (a2, b2, c2), and calculating the cosine of the slope angle as follows:

cos(θ) = (N1 · N2) / (|N1| · |N2|)
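Claim 4's cosine formula translates directly into code (the function name is illustrative):

```python
import math
import numpy as np

def slope_angle_deg(n1, n2):
    """Angle between the two fitted road planes, in degrees, using
    cos(theta) = (N1 . N2) / (|N1| * |N2|)."""
    n1 = np.asarray(n1, dtype=float)
    n2 = np.asarray(n2, dtype=float)
    c = n1.dot(n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    # clip guards against tiny floating-point overshoot outside [-1, 1]
    return math.degrees(math.acos(np.clip(c, -1.0, 1.0)))
```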
5. the method of claim 1, wherein the road segmentation algorithm employs a CNN image segmentation algorithm.
6. A road gradient estimation device based on a color and depth image, characterized by comprising:
a calibration module for obtaining, through camera calibration, an intrinsic parameter K_rgb and extrinsic parameters (R_rgb, T_rgb) of a color camera, and an intrinsic parameter K_d and extrinsic parameters (R_d, T_d) of a depth camera, and for obtaining, for a point (X_w, Y_w, Z_w) in space, the following mapping relationship:

Z_rgb · p_rgb = K_rgb · (R_rgb · [X_w, Y_w, Z_w]^T + T_rgb)

Z_d · p_d = K_d · (R_d · [X_w, Y_w, Z_w]^T + T_d)

wherein p_rgb = (u_rgb, v_rgb, 1)^T is the corresponding pixel position on the RGB image, expressed in RGB image pixel coordinates, and Z_rgb is the corresponding pixel depth on the RGB image; p_d = (u_d, v_d, 1)^T is any pixel in the given depth image, expressed in depth image pixel coordinates, and Z_d is the depth of that pixel;
the segmentation module is used for capturing RGB images of roads and segmenting candidate regions of the slope through a road segmentation algorithm;
the mapping module is used for capturing a depth image of a road, and mapping the depth image to an RGB image according to the mapping relation and the candidate area to obtain the depth value of each pixel in the candidate area;
the reconstruction module is used for back-projecting, according to the candidate area range and the depth value of each pixel in the candidate area, each pixel in the candidate area together with its depth value into three-dimensional space, to obtain the three-dimensional point cloud of the candidate area;
the calculation module is used for solving the normal vector of each point on the three-dimensional point cloud; clustering by using the normal information of the three-dimensional points, dividing the three-dimensional point cloud into two potential planes, and solving the equations of the two planes, a1·x + b1·y + c1·z + d1 = 0 and a2·x + b2·y + c2·z + d2 = 0, by the least square method; and calculating the gradient from the solved plane equation coefficients.
7. The apparatus of claim 6, wherein the mapping module captures the depth image of the road and maps the depth image to the RGB image according to the mapping relationship and the candidate area to obtain the depth value of each pixel in the candidate area by:
the mapping module being specifically configured to project each pixel in the depth image and its depth into the RGB image to obtain the corresponding pixel positions px, py and depth value d, round px and py in a nearest-neighbor manner, and assign d to the pixel at the rounded coordinates.
8. The apparatus of claim 6, wherein the computing module solves for the normal vector for each point on the three-dimensional point cloud by:
the calculation module is specifically configured to take, for each point, k (k ≥ 2) points around the point and compute the normal vector from them: when k = 2, the normal vector of the triangle formed by the point and its two neighboring points is taken as the normal vector of the point; when k > 2, a plurality of triangles are constructed, and the average of their normal vectors is taken as the normal vector of the point.
9. The apparatus of claim 6, wherein the calculation module calculates the slope from the solved plane equation coefficients by:
the calculation module is specifically configured to obtain the normal vectors of the two planes from the plane equation coefficients, N1 = (a1, b1, c1) and N2 = (a2, b2, c2), and calculate the cosine of the slope angle as follows:

cos(θ) = (N1 · N2) / (|N1| · |N2|)
10. the apparatus of claim 6, wherein the road segmentation algorithm employs a CNN image segmentation algorithm.
CN202011053625.6A 2020-09-29 2020-09-29 Road slope estimation method and device based on color and depth image Pending CN112183378A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011053625.6A CN112183378A (en) 2020-09-29 2020-09-29 Road slope estimation method and device based on color and depth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011053625.6A CN112183378A (en) 2020-09-29 2020-09-29 Road slope estimation method and device based on color and depth image

Publications (1)

Publication Number Publication Date
CN112183378A true CN112183378A (en) 2021-01-05

Family

ID=73947291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011053625.6A Pending CN112183378A (en) 2020-09-29 2020-09-29 Road slope estimation method and device based on color and depth image

Country Status (1)

Country Link
CN (1) CN112183378A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034586A (en) * 2021-04-27 2021-06-25 北京邮电大学 Road inclination angle detection method and detection system

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103607584A (en) * 2013-11-27 2014-02-26 浙江大学 Real-time registration method for depth maps shot by kinect and video shot by color camera
CN104950313A (en) * 2015-06-11 2015-09-30 同济大学 Road-surface abstraction and road gradient recognition method
CN108399610A (en) * 2018-03-20 2018-08-14 上海应用技术大学 A kind of depth image enhancement method of fusion RGB image information
CN110610505A (en) * 2019-09-25 2019-12-24 中科新松有限公司 Image segmentation method fusing depth and color information
US10635979B2 (en) * 2018-07-20 2020-04-28 Google Llc Category learning neural networks
CN111476106A (en) * 2020-03-17 2020-07-31 重庆邮电大学 Monocular camera-based straight road relative gradient real-time prediction method, system and device


Non-Patent Citations (1)

Title
FENG Penghang: "Research on obstacle detection technology in front of intelligent vehicles based on fusion of LiDAR and machine vision", China Master's Theses Full-text Database *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN113034586A (en) * 2021-04-27 2021-06-25 北京邮电大学 Road inclination angle detection method and detection system
CN113034586B (en) * 2021-04-27 2022-09-23 北京邮电大学 Road inclination angle detection method and detection system

Similar Documents

Publication Publication Date Title
CN108805934B (en) External parameter calibration method and device for vehicle-mounted camera
CN101419667B (en) Method and apparatus for identifying obstacle in image
CN112613378B (en) 3D target detection method, system, medium and terminal
CN111091023B (en) Vehicle detection method and device and electronic equipment
CN111986472B (en) Vehicle speed determining method and vehicle
CN111932627B (en) Marker drawing method and system
US20230049383A1 (en) Systems and methods for determining road traversability using real time data and a trained model
CN111930877B (en) Map guideboard generation method and electronic equipment
CN114217665A (en) Camera and laser radar time synchronization method, device and storage medium
CN112836698A (en) Positioning method, positioning device, storage medium and electronic equipment
Hayakawa et al. Ego-motion and surrounding vehicle state estimation using a monocular camera
CN113706633B (en) Three-dimensional information determination method and device for target object
US10134182B1 (en) Large scale dense mapping
WO2022099620A1 (en) Three-dimensional point cloud segmentation method and apparatus, and mobile platform
CN112183378A (en) Road slope estimation method and device based on color and depth image
EP4250245A1 (en) System and method for determining a viewpoint of a traffic camera
CN116740680A (en) Vehicle positioning method and device and electronic equipment
KR102003387B1 (en) Method for detecting and locating traffic participants using bird's-eye view image, computer-readerble recording medium storing traffic participants detecting and locating program
Kiran et al. Automatic hump detection and 3D view generation from a single road image
CN115497061A (en) Method and device for identifying road travelable area based on binocular vision
WO2022133986A1 (en) Accuracy estimation method and system
Stănescu et al. Mapping the environment at range: implications for camera calibration
JP2022091474A (en) Information processor, information processing method, program and vehicle control system
CN112767477A (en) Positioning method, positioning device, storage medium and electronic equipment
Tao 3D LiDAR based drivable road region detection for autonomous vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210105