CN115995074A - Unmanned vehicle ranging method based on improved semi-global stereo matching algorithm


Info

- Publication number: CN115995074A
- Application number: CN202211672141.9A
- Authority: CN (China)
- Prior art keywords: value, algorithm, pixel, cost, parallax
- Inventors: 王文聘, 赵紫旭, 张大霖, 袁尧, 曹连建, 梁浩
- Applicant/Assignee: Huaian Zhongke Jingshang Intelligent Network Research Institute Co ltd
- Legal status: Pending

Classifications

    • Y: General tagging of new technological developments and of cross-sectional technologies spanning several sections of the IPC
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02T: Climate change mitigation technologies related to transportation
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems


Abstract

The invention relates to an unmanned vehicle ranging method based on an improved semi-global stereo matching algorithm, which comprises the following steps: (1) an overall ranging method for the unmanned vehicle; (2) an improved Census algorithm based on an adaptive window and multi-cost fusion; (3) fusion of the SAD algorithm with the improved Census algorithm; (4) cost aggregation; (5) parallax calculation; (6) parallax optimization. The invention solves the problem of low accuracy of the semi-global stereo matching algorithm in the unmanned vehicle ranging process by making a series of improvements on the basis of the original algorithm: it mitigates mismatches in repeated texture regions of the image during the Census transform and improves matching accuracy. In addition, fusing the SAD algorithm effectively speeds up matching cost calculation and reduces its sensitivity to light, improving the accuracy of the semi-global stereo matching algorithm in the unmanned vehicle ranging field and deepening the intelligence of the unmanned vehicle.

Description

Unmanned vehicle ranging method based on improved semi-global stereo matching algorithm
Technical Field
The invention relates to the technical field of unmanned target ranging, in particular to an unmanned vehicle ranging method based on an improved semi-global stereo matching algorithm.
Background
The unmanned vehicle ranging system plays an auxiliary role in unmanned driving technology, helps reduce traffic accidents, and accelerates the intelligent progress of unmanned vehicles. There are many current vehicle ranging methods; the mainstream approach is to solve the ranging problem with various sensors, chiefly laser radar, millimeter-wave radar, binocular cameras, monocular cameras and ultrasonic sensors. Existing stereo matching techniques mainly comprise global stereo matching, local stereo matching and semi-global stereo matching. These algorithms are generally divided into four steps: cost calculation, cost aggregation, parallax calculation and parallax optimization, among which the cost calculation part plays the core role.
Most current unmanned vehicle ranging methods use a binocular camera together with a semi-global stereo matching algorithm, and most existing semi-global algorithms adopt the traditional Census algorithm and the SAD algorithm in the cost calculation stage. The basic principle of the traditional Census algorithm is to select a pixel point in the matched image and, taking it as the center, build an n x n matrix window from the surrounding neighborhood pixels; each of the n x n pixels in the window other than the center is compared with the center pixel, and a point is marked 0 if its gray value is larger than the center point's, otherwise 1. However, during cost calculation the window size of the transform is fixed and cannot adapt to the gray-level distribution of the image, so matching accuracy is not high. The SAD algorithm, which serves the same purpose, realizes cost calculation by computing the sum of absolute differences between the pixel values of the left and right pixel blocks within a matrix window. Although it greatly improves matching speed and shortens matching time, it is relatively sensitive to illumination and performs poorly in practical applications.
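The fixed-window Census transform described above can be sketched as follows. This is an illustrative reading of the description, not code from the patent; the image is taken as a plain 2-D list of gray values and the window size n is an assumption:

```python
def census_bitstring(img, x, y, n=3):
    """Traditional fixed-window Census transform at pixel (x, y).

    Per the rule above: a neighbor whose gray value is larger than the
    center pixel's is coded 0, otherwise 1. n is the (odd) window size.
    """
    r = n // 2
    center = img[y][x]
    bits = []
    for j in range(y - r, y + r + 1):
        for i in range(x - r, x + r + 1):
            if i == x and j == y:
                continue  # the center pixel is excluded from the string
            bits.append('0' if img[j][i] > center else '1')
    return ''.join(bits)
```

For a 3 x 3 window this yields an 8-bit string per pixel; matching then compares such strings by Hamming distance.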
When researching unmanned vehicle ranging methods, the inventors found that the matching algorithms used by traditional ranging methods are limited to a certain extent, so measurement accuracy is limited. To improve the accuracy of unmanned vehicle ranging, the following problems need to be solved: (1) the traditional Census transform adopts a fixed window, which severely limits the amount of pixel gray-value information used; meanwhile, Census adopts a single matching cost and depends excessively on the gray information of the central pixel. The technical difficulty is how to adjust the fixed window size of the Census transform according to the gray-level distribution of the image and how to blend in a color-intensity matching cost; (2) the SAD algorithm and the Census algorithm each have certain advantages and certain defects in the cost matching stage, and combining the two can compensate for each other's deficiencies. However, the two algorithms process pixel gray values differently, so their results differ in scale: SAD values lie in [0, 255], while the Census algorithm describes each position with a bit string whose result range depends on the bit count of the string (usually 32 or 64 bits). The main technical difficulty is therefore how to normalize the results of the two algorithms so that they fall into the same result interval.
Disclosure of Invention
The invention aims to overcome the technical defects and provide an unmanned vehicle ranging method based on an improved semi-global stereo matching algorithm.
In order to solve the technical problems, the technical scheme provided by the invention is as follows: an unmanned vehicle ranging method based on an improved semi-global stereo matching algorithm comprises the following steps:
(1) The overall ranging method for the unmanned vehicle comprises the following steps: first, perform image preprocessing on the images acquired by the binocular camera; identify obstacles such as nearby vehicles with a target detection algorithm; segment the dynamic video information acquired by the binocular camera and retain the target obstacle to be measured; perform stereo matching on the images acquired by the left and right cameras with the improved semi-global stereo matching algorithm to generate a parallax map; according to the geometric relation of parallel binocular vision, convert the parallax map into a depth map, acquire the coordinates x and y of the target obstacle from the depth map, and calculate the distance between the target and the unmanned vehicle according to the principle of similar triangles;
(2) Improved Census algorithm based on adaptive window and multi-cost fusion: before the Census transform, first set an initial window $V_0\times V_0$ and a maximum window $V_{max}\times V_{max}$. Sort the gray values in the initial window, remove the maximum and minimum pixel gray values to eliminate the influence of abnormal gray values, and average the remaining pixel gray values. Compare the gray average obtained in the initial $V_0\times V_0$ window with a set threshold $A_0$: if it is smaller than $A_0$, output the pixel gray average obtained in the current window; otherwise let $V_0=V_0+2$ and repeat the comparison. Then set a threshold $A_1$ and take the absolute difference between the gray value of the central pixel in the initial window and the calculated pixel gray average; if this absolute difference is smaller than or equal to $A_1$, select the gray value of the central pixel of the window as the final gray value, and if it is larger than $A_1$, select the pixel gray average as the final gray value. The formula is as follows:

$$I_{center}(p)=\begin{cases}I(p), & \left|I(p)-I_{mean}(p)\right|\le A_1\\ I_{mean}(p), & \left|I(p)-I_{mean}(p)\right|> A_1\end{cases}$$

wherein $I_{center}(p)$ represents the final pixel gray value within the window.

Compare each pixel gray value in the window with the final pixel gray value: if the pixel gray value is larger than or equal to the final pixel gray value, set 0, otherwise set 1. The formula is as follows:

$$\xi(p,q)=\begin{cases}0, & I(q)\ge I_{center}(p)\\ 1, & I(q)< I_{center}(p)\end{cases}$$

$$C_s(p)=\bigotimes_{q\in W}\xi(p,q)$$

wherein $\bigotimes$ is the bit-string concatenation operator, $I_{center}(p)$ is the final pixel gray value obtained above, $I(q)$ represents the gray value of a pixel in the initial window other than the central pixel, $W$ is the window, and $C_s(p)$ is the bit string generated after the window transform;
(3) The SAD algorithm is fused with the modified Census algorithm: the SAD algorithm is a local stereo matching algorithm whose matching cost is obtained by calculating the sum of absolute differences of the left and right pixel blocks at corresponding positions within the initial window. The SAD cost formula is as follows:
$$C_{sad}(x,y,d)=\sum_{(i,j)\in W}\left|I_L(x+i,\,y+j)-I_R(x+i-d,\,y+j)\right|$$

wherein $C_{sad}(x,y,d)$ is the SAD cost value at the point $(x,y)$ for parallax $d$;

since the SAD and Census results are on different scales, the results of the two algorithms are normalized into the same result interval, with the normalization formula as follows:

$$\rho(\omega,\lambda)=1-\exp\left(-\frac{\omega}{\lambda}\right)$$

where $\omega$ represents the cost value and $\lambda$ is the control parameter; any cost value can be normalized into the range [0,1] by this function.

Finally, the matching cost calculation formula after the SAD algorithm and the modified Census algorithm are fused is as follows:

$$C_{sadCensus}(p,d)=\rho(C_F(p,d),\lambda_F)+\rho(C_{sad}(p,d),\lambda_{SAD})$$
(4) Cost aggregation: performing cost aggregation by adopting a method of a crisscross domain;
(5) Parallax calculation: perform parallax calculation with the winner-takes-all (WTA) algorithm, compare the similarity of the target point with all candidate corresponding points, and select the parallax corresponding to the minimum cost value as the optimal parallax;
(6) Parallax optimization: use left-right consistency detection to handle mismatched points, occluded points, low-texture regions and parallax-discontinuous points in the generated parallax map, and filter the optimized parallax map with a mean filtering algorithm to reduce the influence of noise on the parallax map.
As an improvement, in step 2, the remaining pixel gray values are averaged as follows:
$$H_1=\{I(p_0),I(p_1),I(p_2),\dots,I(p_i)\mid i>0\}$$

$$I_{mean}(p)=\frac{1}{\left|H_1\right|-2}\left(\sum_{I(p_k)\in H_1} I(p_k)-\max(H_1)-\min(H_1)\right)$$

wherein $I(p)$ represents a pixel gray value, $H_1$ represents the set of pixel gray values, $\max(H_1)$ and $\min(H_1)$ represent the maximum and minimum pixel gray values in the set, and $I_{mean}(p)$ represents the average of the pixel gray values remaining after the maximum and minimum are removed.
As an improvement, in step 2, in order to improve the matching effect of the Census algorithm on repeated texture regions of the image, the gray-value cost is fused with the RGB channel color absolute-difference cost in the Census transform stage. After the Census transform, a parallax d is introduced: a pixel point in the left image is denoted p and the corresponding pixel point in the right image is p-d, with corresponding bit strings $C_s(p)$ and $C_s(p-d)$. The bit strings of the left and right images are XORed to obtain the Hamming distance, with the formulas:

$$C_{census}(p,d)=\mathrm{Ham}(C_s(p),C_s(p-d))$$

$$C_{ys}(p,d)=\frac{1}{3}\sum_{z\in\{R,G,B\}}\left|I_z^{L}(p)-I_z^{R}(p-d)\right|$$

wherein $C_{ys}(p,d)$ represents the color cost value of pixel p at parallax d, z indexes the color channels of the image, $I_z^{L}(p)$ is the color intensity of pixel p in the left image, and $I_z^{R}(p-d)$ is the color intensity of the point p-d in the right image. Finally, the two matching costs are weighted and fused with fusion coefficient $\alpha$; the cost formula is:

$$C_F(p,d)=\alpha C_{census}(p,d)+(1-\alpha)C_{ys}(p,d).$$
As an improvement, the parallax optimization in step 6 takes a point p from the parallax map generated by the left camera, with parallax value $D_L(p)$; the parallax value of the corresponding point in the right camera is $D_R(p-D_L(p))$. Whether the point satisfies the left-right consistency detection is determined by comparison with a set threshold, with the formula:

$$\left|D_L(p)-D_R(p-D_L(p))\right|<A_2$$

wherein $A_2$ is the set threshold, typically 1; if the formula is satisfied the point meets the requirement, otherwise parallax correction is required for that point.
After adopting the above method, the invention has the following advantages: (1) on the basis of the Census algorithm, an improved Census algorithm based on an adaptive window and multi-cost fusion is adopted: by setting thresholds, the algorithm can flexibly select a suitable matching window according to the gray distribution of the image, improving matching accuracy; in addition, removing abnormal pixel gray values and fusing the RGB color absolute-difference cost effectively reduces mismatches in repeated texture regions of the image; (2) the SAD algorithm is simple to implement and fast in matching, and fusing it with the improved Census algorithm speeds up cost matching, reduces sensitivity to illumination conditions, and improves stereo matching accuracy.
In summary, the algorithm of the invention fuses an adaptive window and a color-intensity matching cost on the basis of the original Census algorithm, and fuses the SAD algorithm with the modified Census algorithm, so that the matching result is better and more accurate and the unmanned vehicle ranging effect is improved. It solves the problem of low accuracy of the semi-global stereo matching algorithm in the unmanned vehicle ranging process through a series of improvements to the original semi-global stereo matching algorithm: it addresses repeated texture regions of the image in the Census transform process, replaces the fixed window of the traditional Census transform, and improves matching accuracy. In addition, fusing the SAD algorithm effectively speeds up matching cost calculation and reduces its sensitivity to light, improving the accuracy of the semi-global stereo matching algorithm in the unmanned vehicle ranging field and deepening the intelligence of the unmanned vehicle.
Drawings
Fig. 1 is a flow chart of the unmanned vehicle ranging of the present invention.
Fig. 2 is a flow chart of an improved semi-global stereo matching algorithm of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to fig. 1 and 2, the present invention specifically includes the following steps:
1. overall ranging method for unmanned vehicle
First, the overall ranging procedure of the unmanned vehicle is shown in Fig. 1. Before the unmanned vehicle runs, the vehicle-mounted binocular camera is turned on and the images it acquires undergo image preprocessing; obstacles such as nearby vehicles are identified with a target detection algorithm; the dynamic video information acquired by the binocular camera is segmented and the target obstacle to be measured is retained; stereo matching is performed on the images acquired by the left and right cameras with the improved semi-global stereo matching algorithm to generate a parallax map; according to the geometric relationship of parallel binocular vision, the parallax map is converted into a depth map, the coordinates x and y of the target obstacle are obtained from the depth map, and the distance between the target and the unmanned vehicle is calculated according to the principle of similar triangles.
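The similar-triangle step can be sketched as follows. The focal length (in pixels) and the baseline are binocular calibration quantities the text does not list, so the numbers used here are illustrative assumptions:

```python
def disparity_to_depth(d_pixels, focal_px, baseline_m):
    """Parallel binocular geometry: depth Z = f * B / d (similar triangles).

    d_pixels: parallax of the target point in pixels,
    focal_px: focal length in pixels, baseline_m: camera baseline in
    meters. Both calibration values are illustrative, not from the patent.
    """
    if d_pixels <= 0:
        raise ValueError("parallax must be positive")
    return focal_px * baseline_m / d_pixels
```

With an assumed 800 px focal length and 0.5 m baseline, a parallax of 100 px corresponds to a depth of 4 m.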
2. Improved Census algorithm based on adaptive window and multi-cost fusion
Before the Census transform, first set an initial window $V_0\times V_0$ and a maximum window $V_{max}\times V_{max}$, and sort the gray values in the initial window as follows:

$$H_0=\{p_0,p_1,p_2,\dots,p_i\mid i>0\}$$

wherein p represents a pixel gray point within the initial window $V_0\times V_0$.

Remove the maximum and minimum pixel gray values to eliminate the influence of abnormal gray values, and average the remaining pixel gray values, with the formula:

$$H_1=\{I(p_0),I(p_1),I(p_2),\dots,I(p_i)\mid i>0\}$$

$$I_{mean}(p)=\frac{1}{\left|H_1\right|-2}\left(\sum_{I(p_k)\in H_1} I(p_k)-\max(H_1)-\min(H_1)\right)$$

wherein $I(p)$ represents a pixel gray value, $H_1$ represents the set of pixel gray values, $\max(H_1)$ and $\min(H_1)$ represent the maximum and minimum pixel gray values in the set, and $I_{mean}(p)$ represents the average of the pixel gray values remaining after the maximum and minimum are removed.

Compare the gray average obtained in the initial $V_0\times V_0$ window with the set threshold $A_0$: if the average is smaller than $A_0$, output the pixel gray average obtained in the current window; otherwise let $V_0=V_0+2$ and repeat the comparison.

Set a threshold $A_1$ and take the absolute difference between the gray value of the central pixel in the initial window and the calculated pixel gray average; if this absolute difference is smaller than or equal to $A_1$, select the gray value of the central pixel of the window as the final gray value, and if it is larger than $A_1$, select the pixel gray average as the final gray value, with the formula:

$$I_{center}(p)=\begin{cases}I(p), & \left|I(p)-I_{mean}(p)\right|\le A_1\\ I_{mean}(p), & \left|I(p)-I_{mean}(p)\right|> A_1\end{cases}$$

wherein $I_{center}(p)$ represents the final pixel gray value within the window.
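The trimmed mean and the adaptive window growth above can be sketched as follows. The threshold value $A_0=128$ and the window sizes are assumptions, since the text does not fix them:

```python
def trimmed_mean(values):
    """Average of the gray values after removing one maximum and one minimum."""
    v = sorted(values)
    return sum(v[1:-1]) / (len(v) - 2)

def adaptive_window(img, x, y, v0=3, v_max=9, a0=128):
    """Grow the V0 x V0 window by 2 until the trimmed gray mean falls
    below the threshold A0 or the maximum window size is reached.
    a0=128 is an assumed value; the patent leaves A0 unspecified."""
    v = v0
    while True:
        r = v // 2
        vals = [img[j][i]
                for j in range(y - r, y + r + 1)
                for i in range(x - r, x + r + 1)]
        mean = trimmed_mean(vals)
        if mean < a0 or v >= v_max:
            return v, mean  # chosen window size and its trimmed mean
        v += 2
```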
Comparing each pixel gray value with the final pixel gray value in the window, wherein if the pixel gray value is larger than or equal to the final pixel gray value, setting 0, otherwise setting 1, and the formula is as follows:
$$\xi(p,q)=\begin{cases}0, & I(q)\ge I_{center}(p)\\ 1, & I(q)< I_{center}(p)\end{cases}$$

$$C_s(p)=\bigotimes_{q\in W}\xi(p,q)$$

wherein $\bigotimes$ is the bit-string concatenation operator, $I_{center}(p)$ is the final pixel gray value obtained above, $I(q)$ represents the gray value of a pixel in the initial window other than the central pixel, $W$ is the window, and $C_s(p)$ is the bit string generated after the window transform.
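A minimal sketch of the final-gray-value rule (threshold $A_1$) and of the bit-string generation above; the value $A_1=10$ is an assumption:

```python
def final_center_value(center_gray, trimmed_mean_gray, a1=10):
    """A1 rule: keep the center gray value if it is close to the trimmed
    mean, otherwise replace it by the mean (a1=10 is an assumed value)."""
    if abs(center_gray - trimmed_mean_gray) <= a1:
        return center_gray
    return trimmed_mean_gray

def census_bits(window, center_value):
    """Bit string C_s(p): 0 where a pixel >= the final center value,
    1 otherwise, concatenated row by row."""
    return ''.join('0' if q >= center_value else '1'
                   for row in window for q in row)
```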
In order to improve the matching effect of the Census algorithm on repeated texture regions of the image, the gray-value cost is fused with the RGB channel color absolute-difference cost in the Census transform stage. After the Census transform, a parallax d is introduced: a pixel point in the left image is denoted p and the corresponding pixel point in the right image is p-d, with corresponding bit strings $C_s(p)$ and $C_s(p-d)$. The bit strings of the left and right images are XORed to obtain the Hamming distance, with the formulas:

$$C_{census}(p,d)=\mathrm{Ham}(C_s(p),C_s(p-d))$$

$$C_{ys}(p,d)=\frac{1}{3}\sum_{z\in\{R,G,B\}}\left|I_z^{L}(p)-I_z^{R}(p-d)\right|$$

wherein $C_{ys}(p,d)$ represents the color cost value of pixel p at parallax d, z indexes the color channels of the image, $I_z^{L}(p)$ is the color intensity of pixel p in the left image, and $I_z^{R}(p-d)$ is the color intensity of the point p-d in the right image. Finally, the two matching costs are weighted and fused with fusion coefficient $\alpha$; the cost formula is:

$$C_F(p,d)=\alpha C_{census}(p,d)+(1-\alpha)C_{ys}(p,d)$$
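The Hamming distance, the RGB color cost and their weighted fusion can be sketched as follows; the fusion coefficient alpha = 0.6 is an assumed value, not one given in the patent:

```python
def hamming(bits_a, bits_b):
    """Hamming distance between two Census bit strings (XOR, then count)."""
    return sum(a != b for a, b in zip(bits_a, bits_b))

def color_cost(rgb_left, rgb_right):
    """C_ys: mean absolute difference over the R, G, B channels."""
    return sum(abs(a - b) for a, b in zip(rgb_left, rgb_right)) / 3.0

def fused_census_cost(c_census, c_ys, alpha=0.6):
    """C_F = alpha * C_census + (1 - alpha) * C_ys (alpha assumed)."""
    return alpha * c_census + (1 - alpha) * c_ys
```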
3. SAD algorithm and modified Census algorithm are fused
The SAD algorithm is a local stereo matching algorithm whose matching cost is obtained by calculating the sum of absolute differences of the left and right pixel blocks at corresponding positions within the initial window. The SAD cost formula is as follows:
$$C_{sad}(x,y,d)=\sum_{(i,j)\in W}\left|I_L(x+i,\,y+j)-I_R(x+i-d,\,y+j)\right|$$

wherein $C_{sad}(x,y,d)$ is the SAD cost value at the point $(x,y)$ for parallax $d$.

Since the SAD and Census results are on different scales, the results of the two algorithms are normalized into the same result interval, with the normalization formula as follows:

$$\rho(\omega,\lambda)=1-\exp\left(-\frac{\omega}{\lambda}\right)$$

where $\omega$ represents the cost value and $\lambda$ is the control parameter; any cost value can be normalized into the range [0,1] by this function.

Finally, the matching cost calculation formula after the SAD algorithm and the modified Census algorithm are fused is as follows:

$$C_{sadCensus}(p,d)=\rho(C_F(p,d),\lambda_F)+\rho(C_{sad}(p,d),\lambda_{SAD})$$
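The source shows the normalization function only as an unreadable image; a common choice consistent with the surrounding description (a control parameter lambda, output bounded in [0, 1)) is rho(omega, lambda) = 1 - exp(-omega / lambda), used here as an assumption together with illustrative lambda values:

```python
import math

def rho(omega, lam):
    """Assumed normalization: maps a non-negative cost into [0, 1)."""
    return 1.0 - math.exp(-omega / lam)

def sad_census_cost(c_f, c_sad, lam_f=30.0, lam_sad=10.0):
    """C_sadCensus(p,d) = rho(C_F, lambda_F) + rho(C_sad, lambda_SAD).
    The lambda control parameters are illustrative assumptions."""
    return rho(c_f, lam_f) + rho(c_sad, lam_sad)
```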
4. cost aggregation
Cost aggregation is performed with the cross-based region method. First, arm lengths are constructed in the horizontal and vertical directions such that the gray values and color intensity values of all pixels on an arm are similar. The construction of the cross arm is based on the difference in color and gray value at each pixel: from the center point the arms extend upward, downward, leftward and rightward until a point with a large pixel difference is encountered. A maximum arm length is also set, and extension stops when this limit is reached; the cross-arm region is thus constructed.
For a point p, the union of the horizontal arms of all pixels on the vertical arm of p forms the support region of p. The cost aggregation proceeds in two passes: first the cost along each pixel's horizontal arm is summed and stored as a temporary result; then, for each pixel, the temporary results of the pixels on its vertical arm are added, giving the final aggregated cost value of the pixel.
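The two-pass aggregation just described (horizontal arm sums stored as temporary results, then summed along the vertical arm) can be sketched as follows; the arm-extent arrays are assumed to have been built beforehand by the cross-construction step:

```python
def aggregate_cross(cost, h_arms, v_arms):
    """Two-pass cross-based aggregation.

    cost[y][x] is the per-pixel matching cost at one parallax;
    h_arms[y][x] = (left, right) and v_arms[y][x] = (up, down) are the
    arm extents of the cross at pixel (x, y)."""
    H, W = len(cost), len(cost[0])
    tmp = [[0] * W for _ in range(H)]
    for y in range(H):                      # pass 1: horizontal arm sums
        for x in range(W):
            l, r = h_arms[y][x]
            tmp[y][x] = sum(cost[y][x - l:x + r + 1])
    out = [[0] * W for _ in range(H)]
    for y in range(H):                      # pass 2: sum along vertical arm
        for x in range(W):
            u, d = v_arms[y][x]
            out[y][x] = sum(tmp[j][x] for j in range(y - u, y + d + 1))
    return out
```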
5. Parallax computation
Parallax calculation is performed with the winner-takes-all (WTA) algorithm: the similarity of the target point with all candidate corresponding points is compared, and the parallax corresponding to the minimum cost value is selected as the optimal parallax.
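The winner-takes-all selection reduces to an argmin over the aggregated costs of all candidate parallaxes at a pixel:

```python
def wta_disparity(costs):
    """Winner-takes-all: return the parallax index whose aggregated
    cost is minimal for this pixel (costs is indexed by parallax)."""
    return min(range(len(costs)), key=costs.__getitem__)
```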
6. Parallax optimization
Left-right consistency detection is used to handle mismatched points, occluded points, low-texture regions and parallax-discontinuous points in the generated parallax map. A point p is taken from the parallax map generated by the left camera, with parallax value $D_L(p)$; the parallax value of the corresponding point in the right camera is $D_R(p-D_L(p))$. Whether the point satisfies the left-right consistency detection is determined by comparison with a set threshold, with the formula:

$$\left|D_L(p)-D_R(p-D_L(p))\right|<A_2$$

wherein $A_2$ is the set threshold, typically 1; if the formula is satisfied the point meets the requirement, otherwise parallax correction is required for the point.

Finally, the parallax map after parallax optimization is filtered with a mean filtering algorithm to reduce the influence of noise on the parallax map.
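The left-right consistency check with $A_2=1$ and a simple 3 x 3 mean filter can be sketched as follows; the parallax maps are taken as plain 2-D lists:

```python
def lr_consistent(d_left, d_right, x, y, a2=1):
    """Left-right check: |D_L(p) - D_R(p - D_L(p))| < A2, with A2 = 1."""
    dl = d_left[y][x]
    xr = x - dl
    if xr < 0:
        return False  # projects outside the right image: treat as invalid
    return abs(dl - d_right[y][xr]) < a2

def mean_filter3(d):
    """3 x 3 mean filter on the parallax map (borders left unchanged)."""
    H, W = len(d), len(d[0])
    out = [row[:] for row in d]
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            out[y][x] = sum(d[j][i] for j in (y - 1, y, y + 1)
                                    for i in (x - 1, x, x + 1)) / 9.0
    return out
```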
The invention and its embodiments have been described above without limitation, and the actual construction is not limited to what is shown. In general, if a person of ordinary skill in the art, informed by this disclosure, arrives at a structural manner or embodiment similar to this technical scheme without creative design and without departing from the gist of the invention, it shall fall within the protection scope of the invention.

Claims (4)

1. An unmanned vehicle ranging method based on an improved semi-global stereo matching algorithm, characterized by comprising the following steps:
(1) The overall ranging method for the unmanned vehicle comprises the following steps: first, perform image preprocessing on the images acquired by the binocular camera; identify obstacles such as nearby vehicles with a target detection algorithm; segment the dynamic video information acquired by the binocular camera and retain the target obstacle to be measured; perform stereo matching on the images acquired by the left and right cameras with the improved semi-global stereo matching algorithm to generate a parallax map; according to the geometric relation of parallel binocular vision, convert the parallax map into a depth map, acquire the coordinates x and y of the target obstacle from the depth map, and calculate the distance between the target and the unmanned vehicle according to the principle of similar triangles;
(2) Improved Census algorithm based on adaptive window and multi-cost fusion: before the Census transform, first set an initial window $V_0\times V_0$ and a maximum window $V_{max}\times V_{max}$. Sort the gray values in the initial window, remove the maximum and minimum pixel gray values to eliminate the influence of abnormal gray values, and average the remaining pixel gray values. Compare the gray average obtained in the initial $V_0\times V_0$ window with a set threshold $A_0$: if it is smaller than $A_0$, output the pixel gray average obtained in the current window; otherwise let $V_0=V_0+2$ and repeat the comparison. Then set a threshold $A_1$ and take the absolute difference between the gray value of the central pixel in the initial window and the calculated pixel gray average; if this absolute difference is smaller than or equal to $A_1$, select the gray value of the central pixel of the window as the final gray value, and if it is larger than $A_1$, select the pixel gray average as the final gray value. The formula is as follows:

$$I_{center}(p)=\begin{cases}I(p), & \left|I(p)-I_{mean}(p)\right|\le A_1\\ I_{mean}(p), & \left|I(p)-I_{mean}(p)\right|> A_1\end{cases}$$

wherein $I_{center}(p)$ represents the final pixel gray value within the window.

Compare each pixel gray value in the window with the final pixel gray value: if the pixel gray value is larger than or equal to the final pixel gray value, set 0, otherwise set 1. The formula is as follows:

$$\xi(p,q)=\begin{cases}0, & I(q)\ge I_{center}(p)\\ 1, & I(q)< I_{center}(p)\end{cases}$$

$$C_s(p)=\bigotimes_{q\in W}\xi(p,q)$$

wherein $\bigotimes$ is the bit-string concatenation operator, $I_{center}(p)$ is the final pixel gray value obtained above, $I(q)$ represents the gray value of a pixel in the initial window other than the central pixel, $W$ is the window, and $C_s(p)$ is the bit string generated after the window transform;
(3) The SAD algorithm is fused with the modified Census algorithm: the SAD algorithm is a local stereo matching algorithm whose matching cost is obtained by calculating the sum of absolute differences of the left and right pixel blocks at corresponding positions within the initial window. The SAD cost formula is as follows:
$$C_{sad}(x,y,d)=\sum_{(i,j)\in W}\left|I_L(x+i,\,y+j)-I_R(x+i-d,\,y+j)\right|$$

wherein $C_{sad}(x,y,d)$ is the SAD cost value at the point $(x,y)$ for parallax $d$;

since the SAD and Census results are on different scales, the results of the two algorithms are normalized into the same result interval, with the normalization formula as follows:

$$\rho(\omega,\lambda)=1-\exp\left(-\frac{\omega}{\lambda}\right)$$

where $\omega$ represents the cost value and $\lambda$ is the control parameter; any cost value can be normalized into the range [0,1] by this function.

Finally, the matching cost calculation formula after the SAD algorithm and the modified Census algorithm are fused is as follows:

$$C_{sadCensus}(p,d)=\rho(C_F(p,d),\lambda_F)+\rho(C_{sad}(p,d),\lambda_{SAD})$$
(4) Cost aggregation: performing cost aggregation by adopting a method of a crisscross domain;
(5) Parallax calculation: perform parallax calculation with the winner-takes-all (WTA) algorithm, compare the similarity of the target point with all candidate corresponding points, and select the parallax corresponding to the minimum cost value as the optimal parallax;
(6) Parallax optimization: use left-right consistency detection to handle mismatched points, occluded points, low-texture regions and parallax-discontinuous points in the generated parallax map, and filter the optimized parallax map with a mean filtering algorithm to reduce the influence of noise on the parallax map.
2. The unmanned vehicle ranging method based on the improved semi-global stereo matching algorithm according to claim 1, characterized in that in step 2 the remaining pixel gray values are averaged with the formula:

$$H_1=\{I(p_0),I(p_1),I(p_2),\dots,I(p_i)\mid i>0\}$$

$$I_{mean}(p)=\frac{1}{\left|H_1\right|-2}\left(\sum_{I(p_k)\in H_1} I(p_k)-\max(H_1)-\min(H_1)\right)$$

wherein $I(p)$ represents a pixel gray value, $H_1$ represents the set of pixel gray values, $\max(H_1)$ and $\min(H_1)$ represent the maximum and minimum pixel gray values in the set, and $I_{mean}(p)$ represents the average of the pixel gray values remaining after the maximum and minimum are removed.
3. The unmanned vehicle ranging method based on the improved semi-global stereo matching algorithm according to claim 1, characterized in that: in step 2, to improve the matching effect of the Census algorithm in repeated-texture regions of the image, the gray-value cost and the RGB-channel absolute color-difference cost are fused in the Census transform stage. After the Census transform, a disparity d is introduced: a pixel point in the left image is denoted p, its corresponding pixel point in the right image is p − d, and the corresponding bit strings are C_s(p) and C_s(p − d). The two bit strings are XORed bit by bit to obtain the Hamming distance:

C_census(p, d) = Ham(C_s(p), C_s(p − d))

The color cost is the mean absolute difference over the RGB channels:

C_ys(p, d) = (1/3) Σ_{z∈{R,G,B}} |I_z^L(p) − I_z^R(p − d)|

wherein C_ys(p, d) represents the color cost value of pixel p when the disparity is d, z is a color channel of the image, I_z^L(p) is the color intensity of pixel p in the left image, and I_z^R(p − d) is the color intensity of point p − d in the right image. Finally, the two matching costs are fused by weighting with fusion coefficient α; the cost formula is:

C_F(p, d) = α·C_census(p, d) + (1 − α)·C_ys(p, d).
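The pieces of this fused cost can be sketched as follows (a simplified illustration, not the patent's implementation; the window size and RGB-tuple representation are assumptions of this sketch):

```python
def census_bitstring(img, x, y, win=1):
    """Census transform: compare each neighbor in the window to the center pixel."""
    bits = []
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            if dx == 0 and dy == 0:
                continue
            bits.append(1 if img[y + dy][x + dx] < img[y][x] else 0)
    return bits

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(u != v for u, v in zip(a, b))

def color_cost(left_rgb, right_rgb):
    """Mean absolute RGB difference between corresponding pixels."""
    return sum(abs(l - r) for l, r in zip(left_rgb, right_rgb)) / 3.0

def fused_census_cost(c_census, c_ys, alpha=0.5):
    """C_F(p, d) = alpha * C_census + (1 - alpha) * C_ys."""
    return alpha * c_census + (1 - alpha) * c_ys
```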
4. The unmanned vehicle ranging method based on the improved semi-global stereo matching algorithm according to claim 1, characterized in that: the parallax optimization in step 6 takes a point p from the disparity map generated by the left camera, with disparity value D_L(p); the disparity value of the corresponding point in the right disparity map is D_R(p − D_L(p)). Whether the point passes the left-right consistency detection is determined by comparison with a set threshold, as follows:

|D_L(p) − D_R(p − D_L(p))| < A_2

wherein A_2 is the set threshold, typically 1; if the inequality holds, the point satisfies the requirement, otherwise parallax correction is required for the point.
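This left-right consistency check can be sketched as follows (illustrative only; the out-of-range handling is an assumption of this sketch):

```python
def passes_lr_check(disp_left, disp_right, x, y, threshold=1):
    """Left-right consistency: D_L(p) and D_R(p - D_L(p)) must (nearly) agree."""
    d_l = disp_left[y][x]
    xr = x - d_l              # corresponding column in the right disparity map
    if xr < 0:                # correspondence falls outside the right image
        return False
    return abs(d_l - disp_right[y][xr]) < threshold
```

Points that fail the check are treated as mismatches (occlusion, low texture, or a disparity discontinuity) and handed to the parallax correction step.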
CN202211672141.9A 2022-12-26 2022-12-26 Unmanned vehicle ranging method based on improved semi-global stereo matching algorithm Pending CN115995074A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211672141.9A CN115995074A (en) 2022-12-26 2022-12-26 Unmanned vehicle ranging method based on improved semi-global stereo matching algorithm


Publications (1)

Publication Number Publication Date
CN115995074A true CN115995074A (en) 2023-04-21

Family

ID=85991623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211672141.9A Pending CN115995074A (en) 2022-12-26 2022-12-26 Unmanned vehicle ranging method based on improved semi-global stereo matching algorithm

Country Status (1)

Country Link
CN (1) CN115995074A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188558A (en) * 2023-04-27 2023-05-30 华北理工大学 Stereo photogrammetry method based on binocular vision


Similar Documents

Publication Publication Date Title
CN110569704A (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN108596975B (en) Stereo matching algorithm for weak texture region
CN113935428A (en) Three-dimensional point cloud clustering identification method and system based on image identification
CN107845073B (en) Local self-adaptive three-dimensional point cloud denoising method based on depth map
CN106651897B (en) Parallax correction method based on super-pixel segmentation
CN115761550A (en) Water surface target detection method based on laser radar point cloud and camera image fusion
CN109410264A (en) A kind of front vehicles distance measurement method based on laser point cloud and image co-registration
CN111105452B (en) Binocular vision-based high-low resolution fusion stereo matching method
CN115995074A (en) Unmanned vehicle ranging method based on improved semi-global stereo matching algorithm
CN113327296B (en) Laser radar and camera online combined calibration method based on depth weighting
CN114862926A (en) Stereo matching method and system fusing AD cost and multi-mode local feature cost
CN111723778B (en) Vehicle distance measuring system and method based on MobileNet-SSD
CN111681275A (en) Double-feature-fused semi-global stereo matching method
CN112906616A (en) Lane line extraction and generation method
CN112435267A (en) Disparity map calculation method for high-resolution urban satellite stereo image
CN114120012A (en) Stereo matching method based on multi-feature fusion and tree structure cost aggregation
CN107301371A (en) A kind of unstructured road detection method and system based on image information fusion
CN107610148A (en) A kind of foreground segmentation method based on Binocular Stereo Vision System
WO2023131203A1 (en) Semantic map updating method, path planning method, and related apparatuses
CN114898321A (en) Method, device, equipment, medium and system for detecting road travelable area
CN116309034A (en) Optimal spelling line acquisition method for ultra-large file remote sensing image
CN114972470A (en) Road surface environment obtaining method and system based on binocular vision
CN114511600A (en) Pose calculation method and system based on point cloud registration
CN113284181A (en) Scene map point and image frame matching method in environment modeling
CN109961413B (en) Image defogging iterative algorithm for optimized estimation of atmospheric light direction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination