CN105513064A - Image segmentation and adaptive weighting-based stereo matching method - Google Patents
Abstract
The invention discloses a stereo matching method based on image segmentation and adaptive weighting. The method comprises a parallax initialization step and a parallax optimization step. Parallax initialization comprises: taking the corrected left and right images as the reference image and the target image respectively, and segmenting each image; based on the combined segmentation information and an adaptive-weighting cost function, computing the matching cost E(p, p_d, d) between the current pixel point p to be matched in the left image and every candidate matching point p_d in the right image, and selecting the candidate with the minimum matching cost as the optimal matching point for p; and repeating these steps in raster scan order until all pixel points in the left image have been traversed, yielding the initial disparity image. Parallax optimization comprises: fitting disparity planes to the initial disparity image, suppressing its anomalies, and repairing its edges. The method is advantageous in that unreliable points in the initial disparity image computed from the optimal matching points are re-corrected, abnormal small regions are merged into adjacent normal regions, and edge pixels are repaired, thereby eliminating disparity errors and improving matching accuracy.
Description
Technical field
The invention belongs to the field of image processing and relates to stereo matching technology, specifically to a stereo matching method based on image segmentation and adaptive weighting.
Background art
In recent years, stereo vision has been one of the most actively studied problems in computer vision and is widely applied to visual navigation, object recognition, industrial control and other fields. A stereo vision system mainly comprises image acquisition, camera calibration, feature extraction, stereo matching and three-dimensional reconstruction, among which stereo matching is the core: its goal is to find the one-to-one correspondence between the projected pixels of the same spatial scene viewed from different viewpoints. Matching the images accurately and obtaining correct three-dimensional scene coordinates is the key to the success or failure of a stereo vision system.
According to the type of constraint used, stereo matching falls into two broad classes: global matching and local matching. Global stereo matching usually skips the cost aggregation step and obtains the disparities of all pixels of the whole image at once by finding the optimal solution of an energy function. Classical global matching algorithms mainly include graph cuts, dynamic programming and belief propagation. These methods can produce disparity images of high accuracy, but their parameters are difficult to tune and their efficiency is low, so they can hardly meet real-time requirements. Unlike global methods, local matching generally uses window matching, taking only the pixel information inside the window as the constraint, and each computation yields the disparity of only part of the pixels. These methods are usually fast, but the amount of information involved in matching is small and the matching accuracy is often low.
In recent years, as research on local matching has deepened at home and abroad, several local algorithms of higher accuracy have been proposed. A confidence-based support-window method first uses the SAD (Sum of Absolute Differences) algorithm to obtain an initial disparity for each pixel, then performs plane fitting in each matching window using the high-confidence pixels to obtain the final disparity image; it yields good matching results, but only for image regions of smooth texture, which limits its applicability. A joint-histogram-based algorithm reduces redundant repeated filtering by efficiently sampling the pixels in the matching window; although a fixed spatial sampling value can be derived, the matching result is too sensitive to the input image and the sampling rate, so the method lacks generality. An adaptive-window method first models the disparity distribution inside the matching window with a Gaussian mixture model, then sets the window size according to the uncertainty of that distribution; it improves matching quality but greatly increases algorithmic complexity. An adaptive-support-weight method keeps a rectangular window of fixed size and assigns each pixel in the window a support weight according to its colour and distance difference from the centre point, then aggregates the costs; it avoids the window-selection problem and achieves good matching results, but a deficiency remains: for low-texture regions, regions of repeated structure and disparity discontinuities inside the matching window, matching based only on pixel colour and distance can hardly obtain the correct result.
Summary of the invention
The object of the invention is to solve the above difficulties of the prior art by providing a stereo matching method of high matching accuracy based on image segmentation and adaptive weighting.
The invention provides a stereo matching method based on image segmentation and adaptive weighting, comprising parallax initialization and parallax optimization. The parallax initialization step comprises:
S1: take the corrected left image I_L and right image I_R as the reference image and the target image respectively;
S2: segment the left image I_L and the right image I_R with the mean-shift algorithm, and record the colour segmentation region to which each pixel belongs;
S3: let p be the current pixel to be matched in the left image I_L, with spatial coordinates (x_p, y_p); let q denote any pixel other than the central pixel p inside the matching window of size W; the range of the disparity d is D = [d_min, d_max]; use the cost function to compute the matching cost between pixel p and every candidate matching point in the right image I_R, the cost of candidate point p_d being E(p, p_d, d);
S4: adopt the WTA strategy to select the candidate matching point with the minimum matching cost as the optimal matching point of pixel p;
S5: repeat steps S3 and S4, traversing every pixel in raster scan order, to obtain the initial disparity image composed of optimal matching points;
The parallax optimization comprises, in order: disparity plane fitting, anomaly suppression and edge repair;
the disparity plane fitting comprises:
A1: for every colour segmentation region of the initial disparity image that contains unreliable points, collect the stable points as a data set, and preliminarily correct the initial disparity of the unreliable points in the region with the disparity value corresponding to the histogram bin containing the most stable points;
A2: pick 3 stable points from the colour segmentation region at random, build the plane equation system and solve for the corresponding three plane parameters;
A3: count the number N_i of other points in the colour segmentation region whose distance to the plane is less than the threshold Δd, where i is the iteration index;
A4: repeat steps A2 and A3 several times, perform least-squares plane fitting with the 3 stable points corresponding to the maximum N_i and their plane parameters, and update the plane parameters;
A5: re-correct the disparity of every unreliable point q in the colour segmentation region with the formula d(x_q, y_q) = |A·x_q + B·y_q + C|, where A, B, C are the three plane parameters corresponding to the 3 stable points of the maximum N_i in step A4;
the anomaly suppression is specifically: for the disparity image after plane fitting, set thresholds δ_n and δ_c, and merge each small region that differs greatly in disparity from its surrounding pixels and contains fewer than δ_n pixels into the adjacent region that has the minimum disparity value and more than δ_c pixels;
the edge repair comprises:
B1: detect the edges of the mean-shift-segmented image with the Canny algorithm; for each pixel p at an image edge in the disparity image after anomaly suppression, let its left and right neighbouring pixels be q_L and q_R respectively;
B2: compute the matching costs using formula (12);
B3: take the disparity corresponding to the pixel with the minimum matching cost in step B2 as the disparity value of pixel p.
Further, the matching cost E(p, p_d, d) in step S3 is specifically: E(p, p_d, d) = E_data(p, p_d) + λ·E_smooth(p, d);
where E_data(·) is the data term, used to measure the similarity of the two matching units, and is defined by formula (2), in which: N_p and N_pd denote the sets of pixels, other than the centres, inside the matching windows of size W centred at p and p_d in the left and right images respectively; q and q_d are the pixels at the same spatial position in N_p and N_pd; w(p, q) computes the weight coefficient between p and q; Δc(p, q) computes the colour difference of the two pixels in Lab space from the Lab colour component values of pixel (x_p, y_p) in the left image, γ_c is its normalization coefficient, and k is an empirical constant; ε(q, q_d) computes the colour difference of pixels q and q_d in RGB space from the RGB colour component values of pixel (x_q, y_q) in the left image and of the corresponding pixel in the right image, and T is the truncation value;
E_smooth(·) is the smoothness term, used to impose a smoothness constraint on the disparities of neighbouring pixels on the same object surface, with λ its weight coefficient; its definition involves the following symbols: t_h(·) and t_v(·) denote the horizontal and vertical gradients of a pixel, γ_t is a normalization coefficient, d(·) is the disparity value of a pixel, and f(·) denotes the confidence of a pixel, represented by the ratio of the minimum matching cost to the second-smallest matching cost obtained during disparity estimation.
Further, the selection in step S4, by the WTA strategy, of the candidate point with the minimum matching cost as the optimal matching point is specifically: the disparity of pixel p is the value of d in D that minimizes E(p, p_d, d).
The beneficial effect of the invention is that the unreliable points in the initial disparity image computed from the optimal matching points are re-corrected, abnormal small regions are merged into adjacent normal regions, and edge pixels are repaired, thereby eliminating disparity errors and improving matching accuracy.
Brief description of the drawings
Fig. 1 is the overall flow chart of the invention;
Fig. 2 shows disparity results, where Fig. 2(a) is the original image, Fig. 2(b) the ground-truth disparity image, Fig. 2(c) the result of the OptimizedDP algorithm, Fig. 2(d) the result of the RTAdaptWgt algorithm, and Fig. 2(e) the stereo matching result of the invention.
Detailed description of the embodiments
The specific embodiment of the invention is described in detail below with reference to the accompanying drawings. It should be noted that the technical features, or combinations of technical features, described in the following embodiments should not be regarded as isolated; they can be combined with one another to achieve better technical effects.
As shown in Fig. 1, the stereo matching method based on image segmentation and adaptive weighting provided by the invention mainly comprises two parts: parallax initialization and parallax optimization. The parallax initialization steps are as follows:
S1: take the corrected left image I_L and right image I_R as the reference image and the target image respectively.
S2: segment the left image I_L and the right image I_R with the mean-shift algorithm, and record the colour segmentation region S(·) to which each pixel belongs, where S(q) denotes the label of the colour segmentation region of pixel q after segmentation.
S3: let p be the current pixel to be matched in the left image I_L, with spatial coordinates (x_p, y_p). Let q denote any pixel other than the central pixel p inside the matching window of size W; the range of the disparity d is D = [d_min, d_max]. Use the cost function to compute the matching cost between pixel p and every candidate matching point in the right image I_R; the cost of candidate point p_d is E(p, p_d, d), computed by formula (1):

E(p, p_d, d) = E_data(p, p_d) + λ·E_smooth(p, d)    (1)
where E_data(·) is the data term, used to measure the similarity of the two matching units, and E_smooth(·) is the smoothness term, used to impose a smoothness constraint on the disparities of neighbouring pixels on the same object surface, with λ its weight coefficient.
The data term E_data(·) is defined by formula (2), in which: N_p and N_pd denote the sets of pixels, other than the centres, inside the matching windows of size W centred at p and p_d in the left and right images respectively. q and q_d are the pixels at the same spatial position in N_p and N_pd. w(p, q) computes the weight coefficient between p and q; Δc(p, q) computes the colour difference of the two pixels in Lab space from the Lab colour component values of pixel (x_p, y_p) in the left image, γ_c is its normalization coefficient, and k is an empirical constant. ε(q, q_d) computes the colour difference of pixels q and q_d in RGB space from the RGB colour component values of pixel (x_q, y_q) in the left image and of the corresponding pixel in the right image, and T is the truncation value.
The smoothness term E_smooth(·) is defined with the following symbols: t_h(·) and t_v(·) denote the horizontal and vertical gradients of a pixel, γ_t is a normalization coefficient, d(·) is the disparity value of a pixel, and f(·) denotes the confidence of a pixel, which can be represented by the ratio of the minimum matching cost to the second-smallest matching cost obtained during disparity estimation.
S4: adopt the WTA (winner takes all) strategy to select the candidate point with the minimum matching cost as the optimal matching point. The disparity d(x_p, y_p) of pixel p is then the value of d in D that minimizes E(p, p_d, d).
S5: repeat steps S3 and S4, traversing every pixel in raster scan order, to obtain the initial disparity image composed of optimal matching points.
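As a concrete illustration of steps S3-S5, the following Python sketch computes an initial disparity map by winner-takes-all over an adaptive-weight aggregated cost. It is a simplified, hypothetical rendering, not the exact formula (2): the smoothness term E_smooth is omitted, all function and parameter names are our own, the support weight decays with the Lab colour difference Δc normalized by γ_c and is boosted by the empirical constant k inside the centre pixel's segment, and the per-pixel cost is the truncated RGB absolute difference ε with truncation value T.

```python
import numpy as np

def initial_disparity(left_lab, left_rgb, right_rgb, seg_left,
                      d_range, W=3, gamma_c=10.0, k=2.0, T=40.0):
    # Hypothetical sketch of steps S3-S5: adaptive support weights + WTA.
    H, Wi = left_rgb.shape[:2]
    r = W // 2
    disp = np.zeros((H, Wi), dtype=int)
    for y in range(r, H - r):
        for x in range(r, Wi - r):
            # w(p, q): exp(-Δc/γc) in Lab space, boosted by k inside p's segment
            patch_lab = left_lab[y-r:y+r+1, x-r:x+r+1]
            dc = np.linalg.norm(patch_lab - left_lab[y, x], axis=-1)
            w = np.exp(-dc / gamma_c)
            w[seg_left[y-r:y+r+1, x-r:x+r+1] == seg_left[y, x]] *= k
            best_d, best_cost = d_range[0], np.inf
            for d in d_range:
                if x - d - r < 0:
                    continue  # candidate window would fall outside the right image
                # ε(q, q_d): truncated RGB absolute difference, truncation value T
                diff = np.abs(
                    left_rgb[y-r:y+r+1, x-r:x+r+1].astype(float)
                    - right_rgb[y-r:y+r+1, x-d-r:x-d+r+1].astype(float)
                ).sum(axis=-1)
                cost = (w * np.minimum(diff, T)).sum() / w.sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d  # S4: winner takes all
    return disp
```

On a pair where the right image is the left image shifted by a constant disparity, the aggregated cost is exactly zero at the true shift, so WTA recovers it in the interior.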
Except at object edges, the disparity of nearly every region of the image changes slowly. The probability that the disparity value changes abruptly is closely related to the colour difference between pixels: it is generally held that the larger the colour change of a region, the more likely an abrupt disparity change. By introducing a disparity continuity constraint, the invention uses the gradient of a pixel to measure the colour difference between pixels, and regulates the strength of the disparity constraint through its negative exponential form, thereby establishing an effective disparity smoothness control mechanism.
For each pixel to be matched, the choice of reference pixels that constrain it is closely related to the scanning order of the image. To improve matching accuracy, the algorithm of the invention traverses the whole image from bottom to top, scanning even rows from left to right and odd rows from right to left. For an even-row pixel, if left and lower neighbouring pixels exist, their disparity values are used to constrain it; odd-row pixels are matched correspondingly. In addition, owing to illumination effects, matching errors, occlusion and other causes, the disparity obtained at a reference pixel is not necessarily the true disparity, and taking such disparities as reference would propagate the errors further. The algorithm of the invention therefore assigns each reference pixel a weight based on the proposed notion of confidence: reference pixels of higher confidence are given larger weights and hence relied upon more.
Parallax optimization:
Although the above method obtains a fairly accurate matching result, many disparity errors remain. To improve matching accuracy further, the initial disparity image is optimized. In the algorithm of the invention, parallax optimization proceeds in three steps: disparity plane fitting, anomaly suppression and edge repair.
Disparity plane fitting:
To eliminate disparity errors further, disparity plane fitting is performed on each colour segmentation region based on the RANSAC (Random Sample Consensus) algorithm. First, from the left and right initial disparity images obtained by the parallax initialization above, left-right consistency checking is used to filter out the unreliable points. Then the leftmost columns of the left initial disparity image are re-assigned disparities from nearby stable points. Finally, for each colour segmentation region, the stable points are used to estimate the disparity plane parameters and the disparities of the unreliable points are corrected.
The left-right consistency check is derived from the uniqueness constraint, which guarantees a one-to-one correspondence between the pixels of the left and right images. Any pixel p satisfying formula (9) is marked as a credible point; otherwise it is marked as unreliable:

|d_L(x_p, y_p) − d_R(x_p − d_L(x_p, y_p), y_p)| < 1    (9)

where d_L(·) and d_R(·) are the disparity values obtained by matching with the left image and the right image as the reference image, respectively.
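The check of formula (9) can be sketched as follows; this is a minimal illustration with assumed array layout (row-major disparity maps) and hypothetical names.

```python
import numpy as np

def credible_mask(dL, dR):
    # Formula (9): |d_L(x,y) - d_R(x - d_L(x,y), y)| < 1 marks credible points.
    H, W = dL.shape
    mask = np.zeros((H, W), dtype=bool)
    for y in range(H):
        for x in range(W):
            xr = x - int(dL[y, x])          # corresponding column in the right image
            if 0 <= xr < W:                  # out-of-image points stay unreliable
                mask[y, x] = abs(dL[y, x] - dR[y, xr]) < 1
    return mask
```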
In the left initial disparity image, because of occlusion between the left and right images, the leftmost d_max columns contain a large number of unreliable points whose disparities are usually much smaller than the true values. Too many unreliable points in these regions greatly reduce the accuracy of plane fitting. The algorithm of the invention therefore assigns new initial disparities to the unreliable points in the leftmost d_max columns before plane fitting. If pixel p is an unreliable point in this region, find the nearest stable point q in its horizontal direction; if the disparity of q is much smaller than the disparity of the pixel below p, then d(x_p, y_p) = d(x_p, y_p − 1), otherwise d(x_p, y_p) = d(x_q, y_q). A pixel q is defined as a stable point if it satisfies both of the following conditions:
a. pixel q is a credible point;
b. f(x_q, y_q) ≥ ξ_1 and f(x_q, y_q) ≤ ξ_2;
where ξ_1 and ξ_2 are preset parameter variables.
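Conditions a and b reduce to a one-line mask; a sketch, assuming the credibility array comes from the formula (9) check and the confidence array from f(·):

```python
import numpy as np

def stable_mask(credible, conf, xi1, xi2):
    # Stable points: credible (condition a) with confidence f in [ξ1, ξ2] (condition b).
    return credible & (conf >= xi1) & (conf <= xi2)
```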
The unreliable points found by the consistency check are mostly occluded or mismatched pixels of the image. To obtain their true disparity values, the invention improves on the traditional plane fitting algorithm; the concrete steps are as follows:
A1: for every colour segmentation region of the initial disparity image that contains unreliable points, collect the stable points as a data set, and preliminarily correct the initial disparity of the unreliable points in the region with the disparity value corresponding to the histogram bin containing the most stable points.
A2: pick 3 stable points from the colour segmentation region at random; let their coordinate matrices be x = (x_1, x_2, x_3)^T and y = (y_1, y_2, y_3)^T, with corresponding disparity matrix d = (d_1, d_2, d_3)^T; build the plane equation system according to formula (10) and solve for the plane parameters A, B, C.
A3: count the number N_i of other points in the colour segmentation region whose distance to the plane is less than the threshold Δd, where i is the iteration index.
A4: repeat steps A2 and A3 several times, perform least-squares plane fitting with the 3 stable points corresponding to the maximum N_i and their plane parameters, and update the plane parameters.
A5: re-correct the disparity of every unreliable point q in the region with formula (11):

d(x_q, y_q) = |A·x_q + B·y_q + C|    (11)

where A, B, C in formula (11) are the three plane parameters corresponding to the 3 stable points of the maximum N_i in step A4.
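Steps A2-A4 amount to RANSAC over the plane model d = A·x + B·y + C followed by a least-squares refit of the best consensus set. A sketch under assumed names and a fixed iteration budget (the patent leaves the repetition count open):

```python
import numpy as np

def fit_disparity_plane(xs, ys, ds, iters=200, delta_d=1.0, seed=0):
    # A2-A4 sketch: sample 3 stable points, solve d = A*x + B*y + C,
    # keep the plane with the most inliers (distance < Δd), then refit
    # that consensus set by least squares.
    rng = np.random.default_rng(seed)
    xs, ys, ds = (np.asarray(a, float) for a in (xs, ys, ds))
    pts = np.column_stack([xs, ys, np.ones_like(xs)])
    best = np.zeros(len(xs), dtype=bool)
    for _ in range(iters):                       # A2-A3, repeated (A4)
        i = rng.choice(len(xs), 3, replace=False)
        M = pts[i]
        if abs(np.linalg.det(M)) < 1e-9:
            continue                             # collinear sample, skip
        sol = np.linalg.solve(M, ds[i])
        inliers = np.abs(pts @ sol - ds) < delta_d
        if inliers.sum() > best.sum():
            best = inliers
    # A4: least-squares refit on the largest consensus set N_i
    A, B, C = np.linalg.lstsq(pts[best], ds[best], rcond=None)[0]
    return A, B, C
```

The returned parameters would then correct each unreliable point q via formula (11), d(x_q, y_q) = |A·x_q + B·y_q + C|.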
In the plane fitting process, the algorithm of the invention first assigns reasonable values to the unreliable points in a region via histogram statistics, mainly for the following reasons. First, the RANSAC algorithm is sensitive to outliers and can hardly obtain a correct fitting result when unreliable points are too numerous. Second, the number of iterations needed for plane fitting would otherwise be enormous: for a region with several hundred stable points, obtaining the optimal plane parameters can require tens of millions or even hundreds of millions of iterations, at huge time cost, and if an upper limit on iterations is set, the result obtained is often not the optimal solution and may even be wrong. By assigning reasonable values to the unreliable points, the algorithm limits the harmful effect of outliers on plane fitting, so that the fitting result tends to the optimum within a limited number of iterations.
Anomaly suppression:
To eliminate noise and isolated points in the disparity image, median filtering is applied to the disparity map after plane fitting. In addition, owing to mismatches, small regions whose disparities differ greatly from the surrounding pixels often appear around foreground areas. By setting thresholds δ_n and δ_c, each such small region with fewer than δ_n pixels is merged into the adjacent region that has the minimum disparity value and more than δ_c pixels.
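The merge rule just described can be sketched as follows. This is a simplified 4-connected version with hypothetical names; in the actual method the map would first be median-filtered, and "region" here means a connected component of equal disparity.

```python
import numpy as np

def suppress_anomalies(disp, delta_n, delta_c):
    # Label 4-connected equal-disparity regions, then merge each region
    # smaller than δn pixels into the neighbouring region that has the
    # minimum disparity among neighbours larger than δc pixels.
    H, W = disp.shape
    out = disp.copy()
    labels = -np.ones((H, W), dtype=int)
    regions = []                                  # (pixel list, disparity)
    for y in range(H):
        for x in range(W):
            if labels[y, x] >= 0:
                continue
            lab = len(regions)
            labels[y, x] = lab
            stack, pix = [(y, x)], []
            while stack:                          # flood fill one region
                cy, cx = stack.pop()
                pix.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] < 0 \
                            and disp[ny, nx] == disp[y, x]:
                        labels[ny, nx] = lab
                        stack.append((ny, nx))
            regions.append((pix, disp[y, x]))
    for pix, _ in regions:
        if len(pix) >= delta_n:
            continue                              # region is large enough
        neigh = set()
        for cy, cx in pix:
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != labels[cy, cx]:
                    neigh.add(labels[ny, nx])
        neigh = [n for n in neigh if len(regions[n][0]) > delta_c]
        if neigh:
            dmin = min(regions[n][1] for n in neigh)
            for cy, cx in pix:
                out[cy, cx] = dmin                # merge into min-disparity neighbour
    return out
```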
Edge repair:
Unlike the above optimization steps, this step aims to further eliminate the disparity errors of depth discontinuity regions caused by the edge-fattening effect. First the Canny algorithm is used to detect the edges of the mean-shift-segmented image. For each pixel p at an image edge, let its left and right neighbours be q_L and q_R respectively. Then formula (12) is used to compute the matching costs, and the disparity corresponding to the minimum matching cost is taken as the disparity value of p. In formula (12), Δg(·) denotes the Euclidean distance of two pixels and γ_g is its normalization coefficient; the remaining formulas and variables are defined as in formula (2).
The stereo matching results obtained with the method of the invention are shown in Fig. 2, where Fig. 2(a)-(e) are respectively the original image, the ground-truth disparity image, and the disparity results of OptimizedDP, RTAdaptWgt and the invention; the original images are the four standard image pairs "Tsukuba", "Venus", "Teddy" and "Cones" provided by the Middlebury stereo vision website. The comparison shows that the disparity images obtained by the algorithm of the invention have clear outlines and little noise and that, except at object boundaries, the disparity values of all other regions transition continuously and smoothly.
To assess the algorithm's performance quantitatively, the pixel disparity error rates over the non-occluded regions (nonocc), all regions (all) and depth discontinuity regions (disc) of each image are used as the evaluation criteria. The disparity error rate η over a specified region is defined by formula (15):

η = (1/N) · Σ_{(x,y)∈R} ( |d(x, y) − d_t(x, y)| > δ_d )    (15)

where R denotes the pixel set of the specified region, N is the total number of pixels in the region, d(x, y) and d_t(x, y) denote the computed and true disparity values of pixel (x, y), and δ_d is the error threshold, set to 1 in the experiments.
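The error rate of formula (15) is straightforward to compute; a sketch (names are assumptions, result expressed in percent as in Table 1):

```python
import numpy as np

def disparity_error_rate(d, d_true, region_mask, delta_d=1):
    # Formula (15): fraction of pixels in region R whose disparity deviates
    # from ground truth by more than δd, in percent.
    bad = np.abs(d - d_true) > delta_d
    return 100.0 * (bad & region_mask).sum() / region_mask.sum()
```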
The algorithm of the invention is compared with several advanced disparity estimation methods of recent years; the comparison is given in Table 1. It is easy to see that the algorithm of the invention has higher matching accuracy and outperforms the other common matching algorithms RTAdaptWgt, SMPF, OptimizedDP, RTCensus, CSBP and HBDS+MDC.
Table 1: comparison of mismatch rates between the algorithm of the invention and other algorithms (unit: %)
Claims (3)
1., based on a solid matching method for Iamge Segmentation and adaptive weighting, comprise parallax initialization and parallax optimization, it is characterized in that, described parallax initialization comprises the following steps:
S1: by the left image I through correcting
l, right image I
rrespectively as reference picture and target image;
S2: utilize mean-shift algorithm respectively to left image I
l, right image I
rsplit, and record the Color Segmentation region belonging to each pixel;
S3: make p be left image I
lin current pixel to be matched, (x
p, y
p) be its volume coordinate; Size is that other pixels q in the matching window of W except central pixel point p represents, the span of parallax d is D=[d
min, d
max]; Cost function is utilized to try to achieve pixel p and right image I
rin the Matching power flow of all candidate matches points, candidate matches point p
dmatching power flow be E (p, p
d, d);
S4: adopt WTA policy selection to have the Optimum Matching point of candidate matches point as pixel p of smallest match cost;
S5: repeating said steps S3, S4 adopt raster scan order to travel through each pixel successively, obtains the initial parallax image be made up of Optimum Matching point;
Described parallax optimization comprises carries out disparity plane matching, abnormal suppression and edge reparation successively to initial parallax image;
Described disparity plane matching comprises:
A1: the stable point in each for initial parallax image Color Segmentation region that there is insincere point is added up as data acquisition, and the initial parallax of the parallax value comprised corresponding to the maximum component of stable point insincere point in Color Segmentation region is tentatively revised;
A2: choose arbitrarily 3 stable point from Color Segmentation region, builds plane equation group and draws corresponding three plane parameters;
A3: calculate other points and plan range in Color Segmentation region and be less than the pixel number N of threshold value △ d
i, wherein i is loop iteration number of times;
A4: repeat steps A 2, A3 several times, utilize maximum N
i3 corresponding stable point and plane parameter carry out least square plane matching, and upgrade plane parameter;
A5: utilize formula d (x
q, y
q)=| Ax
q+ By
qthe parallax of+C| to insincere some q in Color Segmentation region is revised again; Wherein, A, B, C in formula have maximum N in steps A 4
ithree plane parameters corresponding to 3 stable point;
Described abnormal the suppression is specially: for the anaglyph after disparity plane matching, setting threshold value δ
nand δ
c, will with surrounding pixel parallax difference be comparatively large and number of pixels is less than δ
nzonule be merged into be adjacent there is minimum parallax value and number of pixels is greater than δ
cregion in;
The reparation of described edge comprises:
B1: utilize canny algorithm to detect the edge of image after mean-shift segmentation, for the pixel p being in image border place in the anaglyph after abnormal the suppression, makes its left and right neighbor pixel be respectively q
land q
r;
B2: utilize formula
Calculate Matching power flow
with
B3: using the parallax value of the parallax corresponding to the pixel with smallest match cost in step B2 as pixel p.
2. The stereo matching method based on image segmentation and adaptive weighting as claimed in claim 1, characterized in that the matching cost E(p, p_d, d) in step S3 is specifically: E(p, p_d, d) = E_data(p, p_d) + λ·E_smooth(p, d);
where E_data(·) is the data term, used to measure the similarity of the two matching units, and is defined by formula (2), in which: N_p and N_pd denote the sets of pixels, other than the centres, inside the matching windows of size W centred at p and p_d in the left and right images respectively; q and q_d are the pixels at the same spatial position in N_p and N_pd; w(p, q) computes the weight coefficient between p and q; Δc(p, q) computes the colour difference of the two pixels in Lab space from the Lab colour component values of pixel (x_p, y_p) in the left image, γ_c is its normalization coefficient, and k is an empirical constant; ε(q, q_d) computes the colour difference of pixels q and q_d in RGB space from the RGB colour component values of pixel (x_q, y_q) in the left image and of the corresponding pixel in the right image, and T is the truncation value;
E_smooth(·) is the smoothness term, used to impose a smoothness constraint on the disparities of neighbouring pixels on the same object surface, with λ its weight coefficient; its definition involves the following symbols: t_h(·) and t_v(·) denote the horizontal and vertical gradients of a pixel, γ_t is a normalization coefficient, d(·) is the disparity value of a pixel, and f(·) denotes the confidence of a pixel, represented by the ratio of the minimum matching cost to the second-smallest matching cost obtained during disparity estimation.
3. The stereo matching method based on image segmentation and adaptive weighting as claimed in claim 1, characterized in that the selection in step S4, by the WTA strategy, of the candidate point with the minimum matching cost as the optimal matching point is specifically: the disparity d(x_p, y_p) of pixel p is the value of d in D that minimizes E(p, p_d, d).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510880821.3A CN105513064B (en) | 2015-12-03 | 2015-12-03 | A kind of solid matching method based on image segmentation and adaptive weighting |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105513064A true CN105513064A (en) | 2016-04-20 |
CN105513064B CN105513064B (en) | 2018-03-20 |
Family
ID=55721021
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510880821.3A Active CN105513064B (en) | 2015-12-03 | 2015-12-03 | A kind of solid matching method based on image segmentation and adaptive weighting |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105513064B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013072813A (en) * | 2011-09-28 | 2013-04-22 | Honda Motor Co Ltd | Level difference part recognition device |
CN104376567A (en) * | 2014-12-01 | 2015-02-25 | 四川大学 | Linear segmentation guided filtering (LSGF)-based stereo-matching method |
Non-Patent Citations (2)
Title |
---|
WANG WEI et al.: "Local algorithms on dense two-frame stereo matching", Computer Aided Drafting, Design and Manufacturing * |
张丞 et al.: "立体图像视差自适应调整算法" (Adaptive disparity adjustment algorithm for stereoscopic images), 《光电子激光》 (Journal of Optoelectronics·Laser) * |
Cited By (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105957078A (en) * | 2016-04-27 | 2016-09-21 | 浙江万里学院 | Multi-view video segmentation method based on graph cut |
CN106408596A (en) * | 2016-09-06 | 2017-02-15 | 电子科技大学 | Edge-based local stereo matching method |
CN106408596B (en) * | 2016-09-06 | 2019-06-21 | 电子科技大学 | Edge-based local stereo matching method |
CN106709948A (en) * | 2016-12-21 | 2017-05-24 | 浙江大学 | Quick binocular stereo matching method based on superpixel segmentation |
CN107133977A (en) * | 2017-05-18 | 2017-09-05 | 华中科技大学 | A fast stereo matching method based on a probabilistic generative model |
CN107204013B (en) * | 2017-05-22 | 2020-04-24 | 海信集团有限公司 | Method and device for calculating pixel point parallax value applied to binocular stereo vision |
CN107204013A (en) * | 2017-05-22 | 2017-09-26 | 海信集团有限公司 | Method and device for calculating pixel parallax values in binocular stereo vision |
WO2018214505A1 (en) * | 2017-05-22 | 2018-11-29 | 成都通甲优博科技有限责任公司 | Method and system for stereo matching |
CN107220994A (en) * | 2017-06-01 | 2017-09-29 | 成都通甲优博科技有限责任公司 | A stereo matching method and system |
CN107155100A (en) * | 2017-06-20 | 2017-09-12 | 国家电网公司信息通信分公司 | An image-based stereo matching method and device |
CN107155100B (en) * | 2017-06-20 | 2019-07-12 | 国家电网公司信息通信分公司 | An image-based stereo matching method and device |
CN107578389A (en) * | 2017-09-13 | 2018-01-12 | 中山大学 | Method for plane-supervised collaborative restoration of image color and depth information |
CN107578419B (en) * | 2017-09-13 | 2020-07-21 | 温州大学 | Stereo image segmentation method based on consistency contour extraction |
CN107578419A (en) * | 2017-09-13 | 2018-01-12 | 温州大学 | A stereo image segmentation method based on consistency contour extraction |
CN108109148A (en) * | 2017-12-12 | 2018-06-01 | 上海兴芯微电子科技有限公司 | Image stereo matching method and mobile terminal |
CN111465818A (en) * | 2017-12-12 | 2020-07-28 | 索尼公司 | Image processing apparatus, image processing method, program, and information processing system |
CN108062741B (en) * | 2017-12-15 | 2021-08-06 | 上海兴芯微电子科技有限公司 | Binocular image processing method, imaging device and electronic equipment |
CN108062741A (en) * | 2017-12-15 | 2018-05-22 | 上海兴芯微电子科技有限公司 | Binocular image processing method, imaging device and electronic equipment |
CN108062765A (en) * | 2017-12-19 | 2018-05-22 | 上海兴芯微电子科技有限公司 | Binocular image processing method, imaging device and electronic equipment |
CN108230338B (en) * | 2018-01-11 | 2021-09-28 | 温州大学 | Stereo image segmentation method based on convolutional neural network |
CN108230338A (en) * | 2018-01-11 | 2018-06-29 | 温州大学 | A stereo image segmentation method based on convolutional neural networks |
CN108596012B (en) * | 2018-01-19 | 2022-07-15 | 海信集团有限公司 | Obstacle bounding-box merging method, device and terminal |
CN108596012A (en) * | 2018-01-19 | 2018-09-28 | 海信集团有限公司 | An obstacle bounding-box merging method, device and terminal |
CN108322724B (en) * | 2018-02-06 | 2019-08-16 | 上海兴芯微电子科技有限公司 | Image stereo matching method and binocular vision device |
CN108322724A (en) * | 2018-02-06 | 2018-07-24 | 上海兴芯微电子科技有限公司 | Image stereo matching method and binocular vision device |
CN108364345B (en) * | 2018-02-11 | 2021-06-15 | 陕西师范大学 | Occluded-target three-dimensional reconstruction method based on pixel labeling and synthetic aperture imaging |
CN108364345A (en) * | 2018-02-11 | 2018-08-03 | 陕西师范大学 | Occluded-target three-dimensional reconstruction method based on pixel labeling and synthetic aperture imaging |
CN108898575B (en) * | 2018-05-15 | 2022-04-22 | 华南理工大学 | Novel adaptive weight stereo matching method |
CN108898575A (en) * | 2018-05-15 | 2018-11-27 | 华南理工大学 | A novel adaptive-weight stereo matching method |
WO2020125637A1 (en) * | 2018-12-17 | 2020-06-25 | 深圳市道通智能航空技术有限公司 | Stereo matching method and apparatus, and electronic device |
CN109784261A (en) * | 2019-01-09 | 2019-05-21 | 深圳市烨嘉为技术有限公司 | Pedestrian segmentation and recognition method based on machine vision |
CN109903379A (en) * | 2019-03-05 | 2019-06-18 | 电子科技大学 | A three-dimensional reconstruction method based on optimized point cloud sampling |
CN111709917A (en) * | 2020-06-01 | 2020-09-25 | 深圳市深视创新科技有限公司 | Label-based shape matching algorithm |
CN111709917B (en) * | 2020-06-01 | 2023-08-22 | 深圳市深视创新科技有限公司 | Shape matching algorithm based on annotation |
CN111914913A (en) * | 2020-07-17 | 2020-11-10 | 三峡大学 | Novel stereo matching optimization method |
CN111914913B (en) * | 2020-07-17 | 2023-10-31 | 三峡大学 | Novel stereo matching optimization method |
CN112200852A (en) * | 2020-10-09 | 2021-01-08 | 西安交通大学 | Space-time hybrid modulation stereo matching method and system |
CN112348871A (en) * | 2020-11-16 | 2021-02-09 | 长安大学 | Local stereo matching method |
CN112348871B (en) * | 2020-11-16 | 2023-02-10 | 长安大学 | Local stereo matching method |
CN113014902A (en) * | 2021-02-08 | 2021-06-22 | 中国科学院信息工程研究所 | 3D-2D synchronous display method and system |
CN113014902B (en) * | 2021-02-08 | 2022-04-01 | 中国科学院信息工程研究所 | 3D-2D synchronous display method and system |
CN113516699A (en) * | 2021-05-18 | 2021-10-19 | 哈尔滨理工大学 | Stereo matching system based on super-pixel segmentation |
CN113379847A (en) * | 2021-05-31 | 2021-09-10 | 上海集成电路制造创新中心有限公司 | Abnormal pixel correction method and device |
CN113379847B (en) * | 2021-05-31 | 2024-02-13 | 上海集成电路制造创新中心有限公司 | Abnormal pixel correction method and device |
CN116258759A (en) * | 2023-05-15 | 2023-06-13 | 北京爱芯科技有限公司 | Stereo matching method, device and equipment |
CN116258759B (en) * | 2023-05-15 | 2023-09-22 | 北京爱芯科技有限公司 | Stereo matching method, device and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN105513064B (en) | 2018-03-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105513064A (en) | Image segmentation and adaptive weighting-based stereo matching method | |
Revaud et al. | Epicflow: Edge-preserving interpolation of correspondences for optical flow | |
Zhu et al. | A three-pathway psychobiological framework of salient object detection using stereoscopic technology | |
Hosni et al. | Local stereo matching using geodesic support weights | |
Kim et al. | Adaptive smoothness constraints for efficient stereo matching using texture and edge information | |
CN103236082B (en) | Accurate three-dimensional reconstruction method from two-dimensional video capturing a static scene | |
CN106709472A (en) | Video target detecting and tracking method based on optical flow features | |
Tang et al. | Depth from defocus in the wild | |
WO2015010451A1 (en) | Method for road detection from one image | |
Taniai et al. | Fast multi-frame stereo scene flow with motion segmentation | |
CN105261017A (en) | Method for extracting regions of interest of pedestrian by using image segmentation method on the basis of road restriction | |
CN106228544A (en) | A kind of significance detection method propagated based on rarefaction representation and label | |
CN104680510A (en) | RADAR parallax image optimization method and stereo matching parallax image optimization method and system | |
Smith et al. | Stereo matching with nonparametric smoothness priors in feature space | |
CN107507206B (en) | Depth map extraction method based on significance detection | |
CN103996202A (en) | Stereo matching method based on hybrid matching cost and adaptive window | |
CN110910421A (en) | Weak and small moving object detection method based on block characterization and variable neighborhood clustering | |
CN104599288A (en) | Skin color template based feature tracking method and device | |
CN110414385A (en) | A lane line detection method and system based on homography transformation and feature windows | |
CN106408596A (en) | Edge-based local stereo matching method | |
CN108629809B (en) | Accurate and efficient stereo matching method | |
CN113570658A (en) | Monocular video depth estimation method based on depth convolutional network | |
CN107392211B (en) | Salient target detection method based on visual sparse cognition | |
CN105590327A (en) | Motion estimation method and apparatus | |
Kumar et al. | Automatic image segmentation using wavelets |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |