CN102567964B - Filtering method for stereoscopic vision parallax image - Google Patents

Info

Publication number
CN102567964B
CN102567964B (application CN201110412384.4A)
Authority
CN
China
Prior art keywords
point, noise, image, filtering, parallax
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110412384.4A
Other languages
Chinese (zh)
Other versions
CN102567964A (en)
Inventor
毛晓艳
滕宝毅
刘祥
邢琰
贾永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Control Engineering
Original Assignee
Beijing Institute of Control Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Control Engineering filed Critical Beijing Institute of Control Engineering
Priority to CN201110412384.4A
Publication of CN102567964A
Application granted
Publication of CN102567964B
Legal status: Active

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a filtering method for a stereoscopic vision disparity image. The method filters the disparity data recovered from image matching in order to remove noise points, which benefits subsequent applications such as three-dimensional reconstruction, scene analysis and path planning. The method comprises the following steps: process the disparity data and convert it into continuous integer image data; design a processing method suited to the distribution characteristics of the disparity image and compute the gradient of the disparity image; segment the gradient image automatically and identify the noise seed points in it; taking the noise seed points as starting points, filter out the noise regions connected to them; apply continuity filtering to the noise-filtered disparity image to refine it; and finally recover the disparity data from the filtered disparity image. The filtering method of the invention reduces the noise points in the disparity data and increases the usability of the disparity data.

Description

A filtering method for a stereoscopic vision disparity map
Technical field
The invention belongs to the field of machine vision, and in particular relates to a filtering method for a stereoscopic vision disparity map.
Background art
In existing vision theory and technology there are many methods for improving the quality of stereo matching, and the filtering of disparity maps has also been studied extensively. Most of these methods, however, remain at a general level and do not take the specific characteristics of disparity data and of its noise into account.
The paper "Research on mismatch filtering methods in stereo vision" (《立体视觉中误匹配滤波方法的研究》) proposes two filtering methods for dense-matching disparity maps: one based on the disparity mean and one based on true control points. The first filters out the points in a small window whose disparity exceeds the window mean; the second filters sparse matched point pairs by relaxation iteration followed by the least median of squares method. The paper "Research on preprocessing of binocular stereo matching image pairs" (《双目立体匹配图像对的预处理研究》) applies general Gaussian, smoothing and median templates to filter the disparity map. The paper "Reliability-aware Cross Multilateral Filtering for Robust Disparity Map Refinement" introduces a reliable cross multilateral filtering method that filters disparity values using forward and backward matching between the left and right images.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art and to provide a filtering method for the disparity data output by stereoscopic vision, which effectively reduces the noise in the disparity data, improves its usability, and meets the needs of three-dimensional reconstruction and path planning.
The technical solution of the present invention is a filtering method for a stereoscopic vision disparity map, implemented by the following steps:
First step: convert the disparity data output by stereoscopic vision into integer image data. The disparity data contains invalid points and valid points; the gray level of invalid points is set to 0, and the valid points are converted to integers scaled by the maximum and minimum disparity, so that after conversion the maximum-disparity point has gray level 255 and the minimum-disparity point has gray level 1.
Second step: for the image data obtained in the first step, compute the gray-level differences within each pixel's neighbourhood while excluding the influence of invalid points, and thereby obtain the gradient map of the disparity map.
Third step: segment the gradient map automatically with an adaptive threshold, and extract the positions with sharp gradient changes as noise seed points.
Fourth step: taking each noise seed point as a starting point, perform a traversal search in the four directions (up, down, left, right); the connected neighbouring points whose gray-level difference from the seed point is below a set threshold form a noise connected region, which is removed, giving the image after noise filtering.
Fifth step: apply connected-domain filtering to the noise-filtered image and fill the holes left by noise removal, obtaining a new, smoother and more continuous disparity map.
Sixth step: recover the disparity data from the filled disparity map (a simplified end-to-end sketch of these steps follows; the individual steps are then detailed).
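Before each step is detailed, the following minimal Python sketch shows only the overall data flow of the six steps. It is an illustration, not the patent's procedure: np.gradient, Otsu thresholding and plain mask removal are generic stand-ins for the patent's own gradient computation, Fisher-criterion segmentation and seed-based region growing (which are sketched more faithfully after their respective step descriptions), and the linear 1-255 mapping and the 5x5 median window are assumptions.

import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu

def filter_disparity_sketch(Df, win=5):
    # Step 1: map floating-point disparity (<= 0 means invalid) to integers 1..255.
    valid = Df > 0
    Dmin, Dmax = Df[valid].min(), Df[valid].max()
    Di = np.zeros(Df.shape, dtype=np.float64)
    Di[valid] = 1.0 + 254.0 * (Df[valid] - Dmin) / (Dmax - Dmin)
    # Step 2 (stand-in): gradient magnitude of the integer disparity image.
    gy, gx = np.gradient(Di)
    grad = np.hypot(gx, gy)
    # Step 3 (stand-in): automatic threshold marks points of sharp gradient change.
    seeds = grad > threshold_otsu(grad[valid])
    # Step 4 (simplified): remove the marked points instead of growing noise regions.
    Di[seeds] = 0.0
    # Step 5: fill the holes and smooth the map with a median filter.
    Di = median_filter(Di, size=win)
    # Step 6: map the surviving integer levels back to floating-point disparity.
    return np.where(Di > 0, (Di - 1.0) * (Dmax - Dmin) / 254.0 + Dmin, 0.0)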
The gradient map in the second step is computed as follows:
(21) Take the current point A0 of the image data as the centre and take its 8 surrounding neighbours A1...A8 on a ring of step m, with m initially set to 1: A1 = Id(i-m, j-m), A2 = Id(i-m, j), A3 = Id(i-m, j+m), A4 = Id(i, j-m), A5 = Id(i, j+m), A6 = Id(i+m, j-m), A7 = Id(i+m, j), A8 = Id(i+m, j+m), where Id(i, j) is the disparity gray level of point (i, j), i is the horizontal pixel coordinate of the point and j is its vertical pixel coordinate. Count the number num of the 8 neighbours whose gray level is zero;
(22) If num is greater than the set number threshold V_ZN, go to (23); otherwise compute the gradient of the current point Td = |A0·(8-num) - A1 - A2 - A3 - A4 - A5 - A6 - A7 - A8|;
(23) Expand the neighbourhood of A0 outward by one ring, i.e. increase m by 1, in order to exclude the influence of invalid points, and update A1...A8 and num; if the number of expansion rings is less than the set ring-count threshold V_QN, go to (22), otherwise go to (24);
(24) Set the gradient value Td of the current point to 255;
(25) Traverse the whole image in this way to obtain the gradient map of the disparity map (a code sketch follows).
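The following Python sketch implements steps (21)-(25) under stated assumptions: image borders are handled by clamping coordinates (border handling is not specified in the patent), the default thresholds V_ZN = 4 and V_QN = 10 are taken from the embodiment below, the comparison follows the claim's wording (compute when num <= V_ZN), and the result is clipped to the 0-255 gray range; rows and columns are indexed in NumPy order rather than the patent's (i, j) convention.

import numpy as np

def disparity_gradient(Di, V_ZN=4, V_QN=10):
    # Gradient map of an integer disparity image Di, where gray level 0 marks an
    # invalid point. For each pixel the 8 neighbours on a ring of radius m are
    # used; if more than V_ZN of them are invalid the ring is expanded, and when
    # no usable ring is found within V_QN rings the gradient is set to 255.
    H, W = Di.shape
    Td = np.zeros((H, W), dtype=np.int32)
    for r in range(H):
        for c in range(W):
            A0 = int(Di[r, c])
            for m in range(1, V_QN + 1):
                offsets = [(-m, -m), (-m, 0), (-m, m), (0, -m),
                           (0, m), (m, -m), (m, 0), (m, m)]
                vals = [int(Di[min(max(r + dr, 0), H - 1),
                               min(max(c + dc, 0), W - 1)])
                        for dr, dc in offsets]
                num = sum(v == 0 for v in vals)   # invalid neighbours on this ring
                if num <= V_ZN:
                    Td[r, c] = abs(A0 * (8 - num) - sum(vals))
                    break
            else:
                Td[r, c] = 255                    # no ring with enough valid points
    return np.clip(Td, 0, 255).astype(np.uint8)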
The third step is implemented as follows:
(31) Divide the gradient map of the disparity map into four equal regions (upper-left, upper-right, lower-left and lower-right);
(32) In each region, count the number N(i) of pixels with gray level i for i = 1~255, and compute the proportion P(i) = N(i)/(Width × Height) of the total number of pixels in the image, where Width is the pixel width of the image and Height is its pixel height;
(33) Let t be the threshold separating object foreground from background. Compute the intermediate variables μ1(t) = Σ_{i=1..t} P(i)·i / θ(t) and μ2(t) = Σ_{i=t+1..G} P(i)·i / [1-θ(t)], together with the variances σ1²(t) = Σ_{i=1..t} [i-μ1(t)]²·P(i) / θ(t) and σ2²(t) = Σ_{i=t+1..G} [i-μ2(t)]²·P(i) / [1-θ(t)], where θ(t) = Σ_{i=1..t} P(i) and G = 255. Using the Fisher separation criterion, the t value at which J(t) is maximal is the optimal segmentation threshold;
(34) Segment the gradient image with this threshold: points greater than the threshold are regarded as points of sharp gradient change and are set to 255, and points below it are set to zero. The points with gray level 255 are the noise seed points (a code sketch follows).
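A Python sketch of steps (31)-(34) is given below, under stated assumptions: J(t) is evaluated with the Fisher criterion formula given in the embodiment, the histogram fractions are normalised by the number of non-zero pixels in the region being thresholded rather than by the whole image, and gray level 0 (invalid points) is excluded from both classes; function and variable names are illustrative.

import numpy as np

def fisher_threshold(region, G=255):
    # Optimal threshold for one quadrant of the gradient map (step (33)).
    # P(i) is the fraction of non-zero pixels with gray level i, theta(t) the
    # cumulative fraction up to t, and the returned t maximises J(t).
    counts = np.bincount(region.ravel(), minlength=G + 1).astype(np.float64)
    nonzero = counts[1:].sum()
    if nonzero == 0:
        return G
    P = counts / nonzero
    levels = np.arange(G + 1, dtype=np.float64)
    best_t, best_J = G, -1.0
    for t in range(1, G):
        theta = P[1:t + 1].sum()
        if theta <= 0.0 or theta >= 1.0:
            continue
        mu1 = (P[1:t + 1] * levels[1:t + 1]).sum() / theta
        mu2 = (P[t + 1:] * levels[t + 1:]).sum() / (1.0 - theta)
        var1 = ((levels[1:t + 1] - mu1) ** 2 * P[1:t + 1]).sum() / theta
        var2 = ((levels[t + 1:] - mu2) ** 2 * P[t + 1:]).sum() / (1.0 - theta)
        denom = theta * var1 + (1.0 - theta) * var2
        if denom <= 0.0:
            continue
        J = abs(theta * mu1 - (1.0 - theta) * mu2) ** 2 / denom
        if J > best_J:
            best_J, best_t = J, t
    return best_t

def extract_seed_points(grad):
    # Step (34): threshold each of the four equal quadrants separately and mark
    # the points of sharp gradient change with gray level 255.
    H, W = grad.shape
    seeds = np.zeros_like(grad, dtype=np.uint8)
    for rs, re, cs, ce in [(0, H // 2, 0, W // 2), (0, H // 2, W // 2, W),
                           (H // 2, H, 0, W // 2), (H // 2, H, W // 2, W)]:
        block = grad[rs:re, cs:ce]
        t = fisher_threshold(block)
        seeds[rs:re, cs:ce] = np.where(block > t, 255, 0)
    return seeds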
The fourth step is implemented as follows:
(41) Mark the noise seed points as found and all other points as not found;
(42) Starting from a noise seed point, search in the four directions to its left, right, above and below. If the next point in a direction is marked as not found and the difference between its gray level and that of the seed point is less than the set noise threshold V_TN, the point belongs to the noise connected region: mark it as found and update the seed position to the coordinates of the found point. The gray level of the noise seed point and of every point found from it is set to 0 on the disparity map, i.e. these points are removed from the disparity map;
(43) Repeat step (42) until the whole image has been traversed, which gives the image after noise filtering (a code sketch follows).
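The following Python sketch implements steps (41)-(43) as a breadth-first region growing from the seed points; the default V_TN = 8 mirrors the threshold used in the embodiment, and 4-connectivity with comparison against the current point (the updated seed position) follows the description above. Names are illustrative.

from collections import deque
import numpy as np

def remove_noise_regions(Di, seeds, V_TN=8):
    # Steps (41)-(43): grow each noise seed over its 4-connected neighbours whose
    # gray level differs from the current point by less than V_TN, and set every
    # point reached in this way to 0 (invalid) in the disparity image.
    H, W = Di.shape
    out = Di.copy()
    found = seeds == 255                      # seed points start out marked as found
    queue = deque(zip(*np.nonzero(found)))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((0, -1), (0, 1), (-1, 0), (1, 0)):   # left, right, up, down
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W and not found[nr, nc]:
                if abs(int(Di[nr, nc]) - int(Di[r, c])) < V_TN:
                    found[nr, nc] = True      # joins the noise connected region
                    queue.append((nr, nc))    # and becomes the new seed position
        out[r, c] = 0                         # remove the noise point from the map
    return out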
In the fifth step, the noise-filtered disparity map is smoothed with a median filter: the data inside a filter window are extracted from the image, the gray levels inside the window are sorted, and the median of the sorted values replaces the gray level at the centre of the window (a sketch follows).
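A minimal sketch of this median filtering using scipy; the 5x5 window size is an assumption, as the patent does not specify the window dimensions.

from scipy.ndimage import median_filter

def smooth_disparity(Di, window=5):
    # Fifth step: replace each gray level by the median of its window x window
    # neighbourhood, filling small holes left by the noise removal.
    return median_filter(Di, size=window)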
The advantages of the present invention are:
(1) The present invention converts the disparity values computed by stereoscopic vision into a disparity image and, taking the particular characteristics of the disparity gray-level distribution into account, extracts noise regions by gradient segmentation. The disparity data are thereby filtered effectively, the noise in the disparity data is reduced, its usability is improved, and the difficulty of using stereoscopic vision data for environment perception is alleviated to a certain extent.
(2) The present invention fully considers the discrete block artifacts that occur in disparity maps: invalid points are excluded from the calculation and the search range is expanded to guarantee the accuracy of the gradient computation, so that isolated noise blocks inside invalid regions can also be removed effectively.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is an original image captured by a camera of the present invention;
Fig. 3 is the disparity map formed from the disparity data obtained by matching the original images;
Fig. 4 is the disparity map after noise filtering;
Fig. 5 is the disparity map after continuity filtering.
Embodiment
As shown in Figure 1, the present invention is implemented as follows:
The first step: perform intrinsic-parameter and epipolar rectification on the stereo images captured by the left and right cameras (the left camera image, shown in Fig. 2, is an 8-bit gray image), then match the rectified images and interpolate to obtain the floating-point disparity Df(i, j) of each point. The maximum disparity is Dmax and the minimum non-zero disparity is Dmin. The gray level of invalid disparity points is set to zero and all other points are converted to integers by scaling between Dmin and Dmax, for i = 0~Width-1 and j = 0~Height-1, where Width is the pixel width of the image (256 for this image) and Height is the pixel height of the image (256 for this image). This gives the integer disparity Di(i, j) of every point and forms the disparity map shown in Fig. 3. Each pixel in the figure corresponds to a pixel position of the original image; its gray level represents the rounded difference between the horizontal coordinates at which the target point is imaged in the left and right cameras; the black border corresponds to the unmatched part of the original image.
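A Python sketch of this first-step conversion is given below. The linear mapping of valid disparities to gray levels 1-255 is an assumption consistent with the first step of the summary (minimum disparity to 1, maximum to 255); the patent's exact conversion formula is given only as a figure and is not reproduced here. Rectification and matching themselves are outside the sketch.

import numpy as np

def disparity_to_gray(Df):
    # Map floating-point disparity Df (values <= 0 are invalid) to an 8-bit
    # image in which Dmin maps to gray level 1 and Dmax to gray level 255,
    # while invalid points keep gray level 0.
    valid = Df > 0
    Dmin, Dmax = float(Df[valid].min()), float(Df[valid].max())
    Di = np.zeros(Df.shape, dtype=np.uint8)
    scale = 254.0 / (Dmax - Dmin) if Dmax > Dmin else 0.0
    Di[valid] = np.round((Df[valid] - Dmin) * scale + 1.0).astype(np.uint8)
    return Di, Dmin, Dmax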
The second step: take the current point A0 of the disparity map as the centre and take its 8 surrounding neighbours A1...A8 with the step m set to 1: A1 = Id(i-m, j-m), A2 = Id(i-m, j), A3 = Id(i-m, j+m), A4 = Id(i, j-m), A5 = Id(i, j+m), A6 = Id(i+m, j-m), A7 = Id(i+m, j), A8 = Id(i+m, j+m). Count the number num of the 8 neighbours whose gray level is zero. If num is less than the set value 4, compute Td = |A0·(8-num) - A1 - A2 - A3 - A4 - A5 - A6 - A7 - A8|; otherwise expand the neighbourhood of A0 outward by one ring, set m = m+1 and update A1...A8 and num. If the number of expansion rings is less than the set value 10 and num satisfies the condition, compute Td = |A0·(8-num) - A1 - A2 - A3 - A4 - A5 - A6 - A7 - A8| (with A1...A8 taken on the current ring); otherwise set the Td value of the current point to 255.
The third step: divide the gradient map into four equal regions; for the 256 × 256 gradient map these are the upper-left corner (i = 0~127, j = 0~127), the upper-right corner (i = 128~255, j = 0~127), the lower-left corner (i = 0~127, j = 128~255) and the lower-right corner (i = 128~255, j = 128~255). In each region count the number N(k) of pixels with gray level k for k = 1~255 and compute the proportion P(k) = N(k)/(Width × Height) of the total pixels of the image. Compute the intermediate quantities μ1(t) = Σ_{i=1..t} P(i)·i / θ(t), μ2(t) = Σ_{i=t+1..G} P(i)·i / [1-θ(t)], σ1²(t) = Σ_{i=1..t} [i-μ1(t)]²·P(i) / θ(t) and σ2²(t) = Σ_{i=t+1..G} [i-μ2(t)]²·P(i) / [1-θ(t)], and evaluate J(t) = |θ(t)·μ1(t) - [1-θ(t)]·μ2(t)|² / (θ(t)·σ1²(t) + [1-θ(t)]·σ2²(t)). The t value at which J(t) is maximal is recorded as Dth. Segment the gradient image with Dth: points greater than Dth are set to 255, and points below the threshold are set to zero.
The fourth step: take the points whose gray level equals 255 on the segmented image, and mark the corresponding points of the disparity map as seed points. Mark the seed points as found and all other points as not found. Starting from a seed point, search in the four directions to its left, right, above and below; if the next point in a direction is marked as not found and the difference between its gray level and that of the seed point is less than the set threshold 8, mark the point as found and update the seed position to its coordinates. Repeat this process until the whole image has been traversed. The resulting disparity map after noise filtering is shown in Fig. 4; each pixel in the figure corresponds to a pixel position of the original image, its gray level represents the rounded difference between the horizontal imaging coordinates in the left and right cameras, and the black border corresponds to the unmatched part of the original image. Compared with Fig. 3, the points with large gradient change, including the white noise blocks and the boundaries between high and low terrain, have been filtered out.
The fifth step: smooth the noise-filtered disparity map with a median filter. The median filter extracts the data inside a filter window of the image, sorts the gray levels inside the window, and replaces the gray level at the centre of the window with the median of the sorted values. The median-filtered disparity map is shown in Fig. 5; each pixel in the figure corresponds to a pixel position of the original image, its gray level represents the rounded difference between the horizontal imaging coordinates in the left and right cameras, and the black border corresponds to the unmatched part of the original image. Compared with Fig. 4, the holes left by noise filtering have been filled, and the new disparity map is smoother and more continuous.
The sixth step: convert the integer disparity of each pixel back to floating point: Di(i, j) is the integer disparity value of the current point and Df(i, j) its floating-point disparity value, for i = 0~Width-1 and j = 0~Height-1. Using Df(i, j) and the parameters of the stereo camera pair, recover the three-dimensional terrain values, where Baseline is the baseline between the stereo cameras, f is the camera focal length, dx is the horizontal pixel size, dy is the vertical pixel size, and Width and Height are the pixel width and height of the camera image, taken here as Width = 256 and Height = 256.
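The patent's recovery formulas are given only as figures; the Python sketch below uses the standard pinhole-stereo relations as an assumption in their place, with the principal point assumed at the image centre. The gray-to-disparity conversion is simply the inverse of the linear mapping assumed in the first-step sketch.

import numpy as np

def gray_to_disparity(Di, Dmin, Dmax):
    # Inverse of the assumed first-step mapping: gray levels 1..255 back to
    # floating-point disparities Dmin..Dmax; gray level 0 stays invalid.
    Df = np.zeros(Di.shape, dtype=np.float64)
    valid = Di > 0
    Df[valid] = (Di[valid].astype(np.float64) - 1.0) * (Dmax - Dmin) / 254.0 + Dmin
    return Df

def recover_terrain(Df, Baseline, f, dx, dy, Width=256, Height=256):
    # Standard pinhole-stereo triangulation (an assumption, not the patent's own
    # formulas): depth from the baseline and focal length, lateral and vertical
    # position from the pixel offsets to the assumed image-centre principal point.
    rows, cols = np.mgrid[0:Height, 0:Width]
    X = np.full(Df.shape, np.nan)
    Y = np.full(Df.shape, np.nan)
    Z = np.full(Df.shape, np.nan)
    valid = Df > 0
    Z[valid] = Baseline * f / (Df[valid] * dx)
    X[valid] = Z[valid] * (cols[valid] - Width / 2.0) * dx / f
    Y[valid] = Z[valid] * (rows[valid] - Height / 2.0) * dy / f
    return X, Y, Z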
With the present invention, large noise blocks are removed from the three-dimensional terrain recovered from the disparity map, the recovered terrain agrees better with the real terrain, and the data can be used directly for path planning, which solves the planning problems caused by noise blocks in the three-dimensional terrain.
Details not elaborated in the present invention belong to techniques well known to those skilled in the art.

Claims (4)

1. A filtering method for a stereoscopic vision disparity map, characterized in that it is implemented by the following steps:
first step: convert the disparity data output by stereoscopic vision into integer image data, the disparity data output by stereoscopic vision comprising invalid points and valid points; the gray level of invalid points is 0, and the valid points are converted to integers scaled by the maximum and minimum disparity, so that after conversion the maximum-disparity point has gray level 255 and the minimum-disparity point has gray level 1;
second step: for the image data obtained in the first step, compute the gray-level differences within each pixel's neighbourhood while excluding the influence of invalid points, and thereby calculate the gradient map of the disparity map;
third step: segment said gradient map automatically with an adaptive threshold, and extract the positions of sharp gradient change in the gradient map as noise seed points;
fourth step: taking said noise seed points as starting points, perform a traversal search in the four directions (up, down, left, right) from each noise seed point; the connected neighbouring points whose gray-level difference from the noise seed point is less than a set threshold form a noise connected region, which is removed, giving the image after noise filtering;
fifth step: apply connected-domain filtering to the noise-filtered image and fill the holes left by noise removal, obtaining a new, smoother and more continuous disparity map;
sixth step: recover the disparity data from the filled disparity map;
wherein the gradient map of the disparity map in the second step is calculated as follows:
(21) take the current point A0 of the image data as the centre and take the 8 neighbours A1...A8 around A0 on a ring of step m, with m initially set to 1: A1 = Id(i-m, j-m), A2 = Id(i-m, j), A3 = Id(i-m, j+m), A4 = Id(i, j-m), A5 = Id(i, j+m), A6 = Id(i+m, j-m), A7 = Id(i+m, j), A8 = Id(i+m, j+m), where Id(i, j) is the disparity gray level of point (i, j), i is the horizontal pixel coordinate of the point and j is its vertical pixel coordinate; count the number num of the 8 neighbours whose gray level is zero;
(22) if num is greater than the set number threshold V_ZN, go to (23); otherwise calculate the gradient of the current point Td = |A0·(8-num) - A1 - A2 - A3 - A4 - A5 - A6 - A7 - A8|;
(23) expand the neighbourhood of A0 outward by one ring, i.e. increase m by 1, in order to exclude the influence of invalid points, and update A1...A8 and num; if the number of expansion rings is less than the set ring-count threshold V_QN, go to (22), otherwise go to (24);
(24) set the gradient value Td of the current point to 255;
(25) traverse the whole image in this way to calculate the gradient map of the disparity map.
2. The filtering method for a stereoscopic vision disparity map according to claim 1, characterized in that the third step is implemented as follows:
(31) divide the gradient map of the disparity map into four equal regions (upper-left, upper-right, lower-left and lower-right);
(32) in each region, count the number N(i) of pixels with gray level i for i = 1~255, and compute the proportion P(i) = N(i)/(Width × Height) of the total number of pixels in the image, where Width is the pixel width of the image and Height is its pixel height;
(33) let t be the threshold separating object foreground from background, and compute the intermediate variables μ1(t) = Σ_{i=1..t} P(i)·i / θ(t) and μ2(t) = Σ_{i=t+1..G} P(i)·i / [1-θ(t)], together with σ1²(t) = Σ_{i=1..t} [i-μ1(t)]²·P(i) / θ(t) and σ2²(t) = Σ_{i=t+1..G} [i-μ2(t)]²·P(i) / [1-θ(t)], where θ(t) = Σ_{i=1..t} P(i) and G = 255; using the Fisher separation criterion, the t value at which J(t) is maximal is the optimal segmentation threshold;
(34) segment the gradient image with the segmentation threshold: points greater than the threshold are regarded as points of sharp gradient change and are set to gray level 255, and points below it are set to zero; the points with gray level 255 are the noise seed points.
3. The filtering method for a stereoscopic vision disparity map according to claim 1, characterized in that the fourth step is implemented as follows:
(41) mark the noise seed points as found and all other points as not found;
(42) starting from a noise seed point, search in the four directions to its left, right, above and below; if the next point in a direction is marked as not found and the difference between its gray level and that of the noise seed point is less than the set noise threshold V_TN, the point belongs to the noise connected region: mark it as found and update the seed position to the coordinates of the found point; the gray level of the noise seed point and of every point found from it is set to 0 on the disparity map, i.e. these points are removed from the disparity map;
(43) repeat step (42) until the whole image has been traversed, which gives the image after noise filtering.
4. The filtering method for a stereoscopic vision disparity map according to claim 1, characterized in that in the fifth step the noise-filtered disparity map is smoothed with a median filter; the median filter extracts the data inside a filter window of the image, sorts the gray levels inside the window, and replaces the gray level at the centre of the window with the median of the sorted values.
CN201110412384.4A 2011-12-08 2011-12-08 Filtering method for stereoscopic vision parallax image Active CN102567964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110412384.4A CN102567964B (en) 2011-12-08 2011-12-08 Filtering method for stereoscopic vision parallax image

Publications (2)

Publication Number Publication Date
CN102567964A CN102567964A (en) 2012-07-11
CN102567964B true CN102567964B (en) 2014-08-27

Family

ID=46413316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110412384.4A Active CN102567964B (en) 2011-12-08 2011-12-08 Filtering method for stereoscopic vision parallax image

Country Status (1)

Country Link
CN (1) CN102567964B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123715B (en) * 2013-04-27 2017-12-05 株式会社理光 Configure the method and system of parallax value
CN104574342B (en) * 2013-10-14 2017-06-23 株式会社理光 The noise recognizing method and Noise Identification device of parallax depth image
CN104915943B (en) * 2014-03-12 2018-03-06 株式会社理光 Method and apparatus for determining main parallax value in disparity map
CN109427043B (en) * 2017-08-25 2023-08-01 自然资源部国土卫星遥感应用中心 Method and equipment for calculating smooth item parameters of global optimization matching of stereoscopic images
CN108182666B (en) * 2017-12-27 2021-11-30 海信集团有限公司 Parallax correction method, device and terminal
CN110110645B (en) * 2019-04-30 2021-07-13 北京控制工程研究所 Obstacle rapid identification method and system suitable for low signal-to-noise ratio image
CN110455502B (en) * 2019-08-15 2021-01-05 广东海洋大学 Method for judging positions of focus and image point of lens and lens group based on object image parallax

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Francisco Rovira-Más et al. Noise Reduction in Stereo Disparity Images based on Spectral Analysis. 2009 ASABE Annual International Meeting, 2009. *
Jorn Jachalsky et al. Reliability-aware cross multilateral filtering for robust disparity map refinement. 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video, 2010. *
Li Li et al. Stereo Matching Algorithm Based on a Generalized Bilateral Filter Model. Journal of Software, Vol. 6, No. 10, 2011. *
刘庆华 et al. Research on preprocessing of binocular stereo matching image pairs (双目立体匹配图像对的预处理研究). 计算机工程与设计 (Computer Engineering and Design), Vol. 26, No. 3, 2005. *
苏永芝. Research on mismatch filtering of large-disparity images in stereo vision (立体视觉中大视差图像误匹配滤波研究). 物联网 (Internet of Things), 2011. *
高宏伟 et al. Research on mismatch filtering methods in stereo vision (立体视觉中误匹配滤波方法的研究). 计算机工程 (Computer Engineering), Vol. 34, No. 20, 2008. *

Also Published As

Publication number Publication date
CN102567964A (en) 2012-07-11

Similar Documents

Publication Publication Date Title
CN102567964B (en) Filtering method for stereoscopic vision parallax image
Luo et al. P-mvsnet: Learning patch-wise matching confidence aggregation for multi-view stereo
CN111832655B (en) Multi-scale three-dimensional target detection method based on characteristic pyramid network
CN103985108B (en) Method for multi-focus image fusion through boundary detection and multi-scale morphology definition measurement
CN101901343B (en) Remote sensing image road extracting method based on stereo constraint
CN103455991B (en) A kind of multi-focus image fusing method
CN104036479B (en) Multi-focus image fusion method based on non-negative matrix factorization
CN112801022A (en) Method for rapidly detecting and updating road boundary of unmanned mine card operation area
CN105528785A (en) Binocular visual image stereo matching method
CN106022259A (en) Laser-point cloud based method for extracting mountainous road by use of three-dimensional characteristic description model
CN101765019B (en) Stereo matching algorithm for motion blur and illumination change image
CN103955945A (en) Self-adaption color image segmentation method based on binocular parallax and movable outline
CN107240073A (en) A kind of 3 d video images restorative procedure merged based on gradient with clustering
CN115984494A (en) Deep learning-based three-dimensional terrain reconstruction method for lunar navigation image
CN111640128A (en) Cell image segmentation method based on U-Net network
Maltezos et al. Automatic detection of building points from LiDAR and dense image matching point clouds
CN112184725B (en) Method for extracting center of structured light bar of asphalt pavement image
CN103679740B (en) ROI (Region of Interest) extraction method of ground target of unmanned aerial vehicle
CN111209779A (en) Method, device and system for detecting drivable area and controlling intelligent driving
CN105023263B (en) A kind of method of occlusion detection and parallax correction based on region growing
CN105335968A (en) Depth map extraction method based on confidence coefficient propagation algorithm and device
CN104036481A (en) Multi-focus image fusion method based on depth information extraction
CN106447718A (en) 2D-to-3D depth estimation method
Guo et al. 2D to 3D convertion based on edge defocus and segmentation
CN103400380B (en) The single camera submarine target three-dimensional track analogy method of fusion image matrix offset

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant