CN101720047B - Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation - Google Patents

Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation

Info

Publication number
CN101720047B
CN101720047B
Authority
CN
China
Prior art keywords
image
parallax
reference picture
target image
disparity map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2009101982317A
Other languages
Chinese (zh)
Other versions
CN101720047A (en)
Inventor
安平
鞠芹
张兆杨
张倩
吴妍菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN2009101982317A
Publication of CN101720047A
Application granted
Publication of CN101720047B

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a method for acquiring a range image by stereo matching of multi-aperture photographs based on color segmentation. The method comprises the following steps: (1) correcting all input images; (2) color-segmenting the reference image and extracting its color-consistent regions; (3) locally window-matching the reference image against each of the other input images to obtain several disparity maps; (4) applying a bidirectional matching strategy to remove the mismatched points generated during matching; (5) fusing the several disparity maps into one disparity map and filling in the disparity of the mismatched points; (6) post-processing the fused disparity map to obtain a dense disparity map; and (7) converting the disparity map into a range image according to the disparity-depth relation. By acquiring range information from multiple viewpoint images and exploiting the image information they provide, the invention not only resolves the mismatches caused by periodically repeating texture, occlusion and the like, but also improves matching precision, yielding an accurate range image.

Description

Method for acquiring a range image by stereo matching of multi-aperture photographs based on color segmentation
Technical field
The present invention relates to a depth image acquisition method, and in particular to a method for acquiring a range image by stereo matching of multi-aperture photographs based on color segmentation.
Background technology
Depth Image Based Rendering (DIBR) is a key technology at the decoding end of 3DTV systems. DIBR assigns depth information to the visible pixels of a source image and uses a color image together with its corresponding depth image to generate new viewpoint images. Because the amount of data needed to compress and transmit multi-channel video is enormous, the required bandwidth increases dramatically; in particular, supporting user-selected viewpoints would require capture with a dense camera array, causing the multi-view video data volume to grow sharply and seriously hindering the application of 3DTV. The "single-view video + depth" representation has therefore been adopted as an alternative to stereoscopic video: the DIBR technique generates one or more virtual views in real time at the decoding end, producing a 3D visual effect and supporting a limited degree of viewpoint interactivity. "Multi-view video + depth" is the 3D video representation currently proposed for adoption by MPEG/JVT; it allows a sparser camera array to capture the 3D scene while still supporting viewpoint interactivity. These representations, which add corresponding depth information to single-channel or multi-channel 2D color video, can significantly reduce the 3D video data volume and thus save transmission bandwidth. However, obtaining the depth maps required for depth-image-based rendering is both a key technology and a difficult problem.
At present, depth acquisition falls into two classes of methods. The first uses special hardware to actively acquire the depth of every point in the scene, for example the ZCam depth camera developed by 3DV Systems: a camera with a ranging function that emits infrared pulses toward the scene and detects the infrared light reflected by scene objects with an infrared sensor, thereby determining the distance from every point of an object to the camera. Such equipment is very expensive and unsuitable for wide deployment. The second class is based on traditional computational stereo vision: two images of the same scene taken from different viewpoints, or several viewpoint images, are stereo-matched to recover the depth information of the scene objects. These methods generally comprise two steps: (1) stereo-match the images to obtain a disparity image of corresponding points; (2) compute depth from the disparity of corresponding points according to the disparity-depth relation to obtain a depth image.
Stereo matching algorithms divide mainly into two broad classes: region-based and feature-based. Region-based (window-based) algorithms readily recover disparity in highly textured regions, but produce many mismatches in low-texture regions, blurring boundaries, and they also handle occluded regions poorly. Feature-based methods use extracted feature points that are relatively insensitive to noise and can therefore match more accurately, but because the feature points in an image are sparse, such methods can only produce a sparse disparity map. Segmentation-based stereo matching has recently attracted considerable attention because it can produce a dense disparity map. Methods of this class assume that the scene is composed of a set of non-overlapping planes, that each plane corresponds to one color segment of the reference image, and that disparity varies smoothly within a single color region.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art by providing a method for acquiring a range image by stereo matching of multi-aperture photographs based on color segmentation. The method not only effectively resolves the mismatches caused by periodically repeating texture, occlusion and the like, but also improves matching precision, yielding an accurate and dense depth image.
To achieve the above object, the concept of the invention is as follows:
The reference image among the inputs is color-segmented according to the color information of the image; the reference image is then locally window-matched against each of the remaining input images to obtain several disparity maps, which are fused, filling in the disparity of the mismatched points; the fused disparity map is post-processed to obtain a dense disparity map; finally, depth is computed according to the disparity-depth relation and the dense disparity map is converted into a depth map.
In accordance with the above concept, the technical solution of the invention is:
The above method for acquiring a range image by stereo matching of multi-aperture photographs based on color segmentation comprises the following steps:
(1) correct all input images, eliminating the color differences between corresponding points on different viewpoint images caused by image noise, illumination conditions, occlusion and similar factors;
(2) color-segment the reference image and extract its color-consistent regions;
(3) locally window-match the reference image against each of the remaining target images to obtain several disparity maps;
(4) apply a bidirectional matching strategy to eliminate the mismatched points produced during local window matching, improving disparity precision;
(5) fuse the several disparity maps into one disparity map according to a fusion criterion, filling in the disparity of the mismatched points;
(6) post-process the fused disparity map to obtain a dense disparity map;
(7) compute depth according to the disparity-depth relation and convert the dense disparity map into a depth map.
Compared with the prior art, the method of the invention for acquiring a range image by stereo matching of multi-aperture photographs based on color segmentation has the following evident substantive features and notable advantages: it obtains depth information from multiple viewpoint images with a segmentation-based stereo matching algorithm, making reasonable use of the image information the multiple viewpoints provide; it eliminates the mismatches caused by periodically repeating texture, occlusion and the like; and it improves matching precision and yields a disparity for every pixel, finally producing an accurate and dense depth map whose quality satisfies the requirements of depth-image-based rendering.
Description of drawings
Fig. 1 is a flow chart of the method of the invention for acquiring a range image by stereo matching of multi-aperture photographs based on color segmentation;
Fig. 2 is a schematic diagram of the parallel camera configuration;
Fig. 3 is a flow chart of the color correction performed on the images in Fig. 1;
Fig. 4 is a schematic diagram of the occlusion relations among the different viewpoint images: target image C_l, reference image C_c and target image C_r.
Fig. 5 shows stereo matching results obtained with the method of the invention.
Embodiment
The embodiments of the invention are described in further detail below with reference to the accompanying drawings. The present embodiment is implemented on the premise of the technical solution of the invention and gives a detailed mode of implementation, but the protection scope of the invention is not limited to the following embodiment.
The embodiment takes 3 input images as an example, captured with the parallel camera configuration shown in Fig. 2: the image captured at the middle viewpoint C_c is the reference image C_c, and the images captured at the left and right viewpoints C_l and C_r are the two target images C_l and C_r. f denotes the focal length of the three cameras, and the projections of a scene point P at depth Z onto the three image planes are (x_l, y), (x_c, y) and (x_r, y) respectively.
Referring to Fig. 1, the method of the invention for acquiring a range image by stereo matching of multi-aperture photographs based on color segmentation comprises the following steps:
(1) correct the 3 input images, eliminating the color differences between corresponding points on different viewpoint images caused by image noise, illumination conditions, occlusion and similar factors;
(2) color-segment the reference image and extract its color-consistent regions;
(3) locally window-match the reference image against each of the two target images to obtain 2 disparity maps; that is, match reference image C_c against the left target image C_l, and reference image C_c against the right target image C_r;
(4) apply a bidirectional matching strategy to eliminate the mismatched points produced during local window matching;
(5) fuse the 2 disparity maps into 1 disparity map according to a fusion criterion, filling in the disparity of the mismatched points;
(6) post-process the fused disparity map to obtain a dense disparity map;
(7) compute depth according to the disparity-depth relation and convert the dense disparity map into a depth map.
Referring to Fig. 3, step (1) above corrects all input images, eliminating the color differences between corresponding points on different viewpoint images caused by image noise, illumination conditions, occlusion and similar factors. Its concrete steps are:
(1-1) compute the cumulative histogram of each input image from the luminance values of its pixels;
(1-2) take the image captured at the middle viewpoint as the reference image C_c and the images captured at the left and right viewpoints as target images C_l and C_r; divide the cumulative histogram into 10 sections of equal pixel count, find the upper and lower luminance bounds of each section, and thereby determine the piecewise linear mapping between corresponding sections of reference image C_c and target image C_l or C_r;
(1-3) for each pixel of a target image, find the section of the cumulative histogram it falls in and correct the pixel according to that section's mapping.
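The histogram sectioning and piecewise-linear mapping of steps (1-1) to (1-3) can be sketched for a luminance image as follows. This is an illustrative reimplementation, not the patent's code; the function names, the equal-pixel-count sectioning and the integer mapping details are assumptions:

```python
import numpy as np

def section_bounds(img, n_sections=10):
    """Split the luminance range into sections of equal pixel count,
    using the cumulative histogram (steps 1-1 and 1-2)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cum = np.cumsum(hist)
    targets = cum[-1] * np.arange(1, n_sections + 1) / n_sections
    uppers = np.searchsorted(cum, targets)           # upper luminance bound per section
    lowers = np.concatenate(([0], uppers[:-1] + 1))  # lower luminance bound per section
    return lowers, np.minimum(uppers, 255)

def normalize_to_reference(target, reference, n_sections=10):
    """Step 1-3: map the k-th section of the target's cumulative histogram
    linearly onto the k-th section of the reference's."""
    tl, tu = section_bounds(target, n_sections)
    rl, ru = section_bounds(reference, n_sections)
    out = np.empty_like(target)
    sec = np.searchsorted(tu, target, side="left")   # section index of each pixel
    for k in range(n_sections):
        m = sec == k
        span_t = max(int(tu[k]) - int(tl[k]), 1)
        span_r = int(ru[k]) - int(rl[k])
        out[m] = rl[k] + (target[m].astype(np.int32) - tl[k]) * span_r // span_t
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applied to a darkened copy of the reference, the mapping pulls the target's luminance distribution back toward the reference's.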
Step (2) above color-segments the reference image and extracts its color-consistent regions, as follows:
The Mean Shift algorithm is applied to reference image C_c according to the color information of the image; the gradient of the probability density is used to find the density peaks, and each pixel of the image is assigned to its corresponding density mode, realizing clustering; the pixels within each resulting segment share the same color value.
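For intuition, a toy mean shift restricted to color space can be sketched as below. The patent uses the full Mean Shift segmenter, which typically also includes a spatial kernel; this flat-kernel O(N²) version is only practical for tiny images, and all names are assumptions:

```python
import numpy as np

def mean_shift_colors(img, bandwidth=30.0, iters=8):
    """Shift each pixel's colour toward the local density peak; pixels that
    converge to the same mode form one colour-consistent region."""
    h, w, ch = img.shape
    pts = img.reshape(-1, ch).astype(np.float64)
    modes = pts.copy()
    for _ in range(iters):
        # flat-kernel mean shift step: average of all colours within the bandwidth
        d2 = ((modes[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        mask = d2 <= bandwidth ** 2
        modes = (mask[:, :, None] * pts[None, :, :]).sum(1) / mask.sum(1, keepdims=True)
    # merge modes that ended up close together and label the pixels
    labels = np.zeros(len(modes), dtype=int)
    centers = []
    for i, m in enumerate(modes):
        for k, c0 in enumerate(centers):
            if np.linalg.norm(m - c0) < bandwidth / 2:
                labels[i] = k
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels.reshape(h, w), np.array(centers)
```

On an image with two well-separated colors, the pixels cluster into exactly two segments.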
In step (3) above the reference image is locally window-matched against each of the remaining target images to obtain several disparity maps, as follows:
(3-1) determine the positional relation of reference image C_c to target images C_l and C_r.
When no match can be found in one target image, the corresponding match can be found in the other. As shown in Fig. 4, the blackened rectangles represent two scene objects; with the middle viewpoint image as reference image C_c and the left and right viewpoint images as target images C_l and C_r, the line segment P_1P_3 of the scene is occluded at the left viewpoint: when stereo matching computes the disparity of reference image C_c, pixels of that region find no match in target image C_l but do find corresponding matches in target image C_r.
(3-2) locally window-match reference image C_c against target image C_l, and reference image C_c against target image C_r.
Taking reference image C_c as the base image, a 5×5 window is built around each point to be matched; an equally sized 5×5 pixel neighborhood is searched in the target image and compared in turn with the window of the point to be matched, using the self-adapting dissimilarity measure below as the similarity function; the point of greatest similarity is the optimal match.
C(x, y, d) = (1 − ω)·C_SAD(x, y, d) + ω·C_GRAD(x, y, d)
C_SAD(x, y, d) = Σ_{(i,j)∈N(x,y)} |I_1(i, j) − I_2(i + d, j)|
C_GRAD(x, y, d) = Σ_{(i,j)∈N_x(x,y)} |∇_x I_1(i, j) − ∇_x I_2(i + d, j)| + Σ_{(i,j)∈N_y(x,y)} |∇_y I_1(i, j) − ∇_y I_2(i + d, j)|
where N(x, y) is the 5×5 window centered on the match point (x, y), ∇_x denotes the horizontal component of the image gradient, ∇_y the vertical component, and ω is a weight;
Matching target image C_l with reference image C_c yields disparity map I_LI(x, y); matching target image C_r with reference image C_c yields disparity map I_RI(x, y). Both disparity maps contain many mismatched points.
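The combined SAD-plus-gradient cost and the winner-takes-all window search can be sketched as follows. This is an illustrative reimplementation for grayscale images; using simple finite differences for the gradient terms is an assumption:

```python
import numpy as np

def dissimilarity(ref, tgt, x, y, d, w=0.5, win=2):
    """C = (1-w)*C_SAD + w*C_GRAD over a (2*win+1)^2 window
    (win=2 gives the 5x5 window used in the patent)."""
    def patch(img, xc):
        return img[y - win:y + win + 1, xc - win:xc + win + 1].astype(np.float64)
    p1, p2 = patch(ref, x), patch(tgt, x + d)
    c_sad = np.abs(p1 - p2).sum()
    gx1, gx2 = np.diff(p1, axis=1), np.diff(p2, axis=1)  # horizontal gradient
    gy1, gy2 = np.diff(p1, axis=0), np.diff(p2, axis=0)  # vertical gradient
    c_grad = np.abs(gx1 - gx2).sum() + np.abs(gy1 - gy2).sum()
    return (1 - w) * c_sad + w * c_grad

def best_disparity(ref, tgt, x, y, d_range, w=0.5):
    """Winner-takes-all: the disparity with the lowest cost is the match."""
    costs = [dissimilarity(ref, tgt, x, y, d, w) for d in d_range]
    return d_range[int(np.argmin(costs))]
```

On a target image that is simply the reference shifted horizontally, the search recovers the shift.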
Step (4) above uses bidirectional matching to eliminate the mismatched points produced by local window matching. It is applied in the same way to both matching processes (target image C_l with reference image C_c, and reference image C_c with target image C_r), as follows:
(4-1) take the left image as reference and the right image as target, and perform left-to-right local window matching to obtain the left-to-right disparity map d_LR;
(4-2) take the right image as reference and the left image as target, and perform right-to-left local window matching to obtain the right-to-left disparity map d_RL;
(4-3) find the corresponding points whose disparities in d_LR and d_RL are inconsistent according to the following formula and mark them as mismatched points.
d_LR(x_L, y) = d_RL(x_R, y) = { [d_LR(x_L, y) + d_RL(x_R, y)] / 2, if |d_LR(x_L, y) − d_RL(x_R, y)| ≤ λ; 0, otherwise }
where λ is an error threshold, and pixel (x_L, y) in the left image and pixel (x_R, y) in the right image are a pair of matched points, i.e. x_R = x_L + d_LR(x_L, y).
When the disparity error of corresponding points in d_LR and d_RL satisfies |d_LR(x_L, y) − d_RL(x_R, y)| ≤ λ, the disparity match is correct; when it does not, the point is a mismatched point and its disparity is set to 0.
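The bidirectional (left-right) consistency check can be sketched as follows; this is an illustrative reimplementation of the formula above, with function and variable names assumed:

```python
import numpy as np

def cross_check(d_lr, d_rl, lam=1):
    """Keep a pixel's disparity (averaged with its counterpart) only when
    left-to-right and right-to-left matching agree within lam; otherwise
    flag it as a mismatch by setting it to 0."""
    h, w = d_lr.shape
    out = np.zeros(d_lr.shape)
    for y in range(h):
        for xl in range(w):
            xr = xl + d_lr[y, xl]  # x_R = x_L + d_LR(x_L, y)
            if 0 <= xr < w and abs(d_lr[y, xl] - d_rl[y, xr]) <= lam:
                out[y, xl] = (d_lr[y, xl] + d_rl[y, xr]) / 2
    return out
```

Consistent disparities survive; a counterpart falling outside the image, or disagreeing by more than λ, zeroes the pixel.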
Step (5) above fuses the several disparity maps into one disparity map, filling in the disparity of the mismatched points, as follows:
(5-1) compute the proportionality coefficient α from the translation vectors t of the cameras' extrinsic matrices:
α = |t_C − t_L| / (|t_C − t_L| + |t_C − t_R|)
where t_L, t_C and t_R are the translation vectors of the extrinsic matrices of the left, middle and right cameras respectively;
(5-2) fuse the 2 disparity maps I_LI(x, y) and I_RI(x, y) into the final disparity map I(x, y) according to the following fusion criterion, filling in the disparity of the mismatched points; the fusion is formulated as follows:
[Fusion criterion: presented as an equation image in the original patent.]
where I_LI(x, y) and I_RI(x, y) are the disparity maps obtained by matching reference image C_c with target image C_l and with target image C_r respectively, and δ is an error threshold.
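The patent's exact fusion formula is rendered as an image in the original and is not reproduced here. The sketch below implements one plausible criterion consistent with the surrounding text: the two maps are brought to a common scale using the baseline ratio encoded in α, consistent pixels are averaged, and a pixel that is a mismatch (disparity 0) in one map is filled from the other. The scaling rule d_L·(1 − α) = d_R·α and the hole-filling behaviour are assumptions:

```python
import numpy as np

def fuse_disparity(d_li, d_ri, alpha=0.5, delta=1.0):
    """Hypothetical fusion of the left-pair and right-pair disparity maps.
    Disparity is proportional to baseline, so d_L*(1-alpha) = d_R*alpha;
    the right-pair map is rescaled to the left-pair scale before comparing."""
    d_r_scaled = np.where(d_ri > 0, d_ri * alpha / (1.0 - alpha), 0.0)
    fused = np.zeros(d_li.shape)
    both = (d_li > 0) & (d_r_scaled > 0)
    agree = both & (np.abs(d_li - d_r_scaled) <= delta)
    fused[agree] = (d_li[agree] + d_r_scaled[agree]) / 2  # consistent: average
    only_l = (d_li > 0) & ~both                           # hole in one map:
    only_r = (d_r_scaled > 0) & ~both                     # fill from the other
    fused[only_l] = d_li[only_l]
    fused[only_r] = d_r_scaled[only_r]
    return fused
```

Pixels where both maps are valid but disagree by more than δ stay at 0 and are left for the segment-based post-processing of step (6).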
Step (6) above post-processes the fused disparity map to obtain a dense disparity map, as follows:
(6-1) assume that disparity varies smoothly within each color segment of the reference image;
(6-2) take the median disparity of all pixels within each segment as the disparity of the whole segment, obtaining a disparity value for every pixel as expressed below. This finally yields the high-quality dense disparity map I′(x, y).
I′(x, y) = median_{(x, y)∈I_SEG(x, y)} (I(x, y))
where I_SEG(x, y) denotes a segment.
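The segment-median post-processing of step (6) can be sketched as follows. One detail is an assumption: pixels flagged as mismatches (disparity 0) are excluded from the median, which is how the sketch both removes outliers and fills the holes:

```python
import numpy as np

def segment_median_fill(disp, labels):
    """Assume disparity is smooth inside each colour segment and assign
    every pixel the median disparity of its segment."""
    out = np.zeros(disp.shape)
    for seg in np.unique(labels):
        m = labels == seg
        vals = disp[m]
        vals = vals[vals > 0]  # assumption: ignore mismatch pixels (0)
        out[m] = np.median(vals) if vals.size else 0.0
    return out
```

A mismatched pixel inherits its segment's median, so the map becomes dense inside every segment.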
Step (7) above computes depth according to the disparity-depth relation and converts the dense disparity map into a depth map; specifically:
In the parallel camera configuration, the depth of a scene point and its disparity are related by:
Z = B·f / D
where Z is the depth value, B the baseline, f the camera focal length and D the disparity.
From this relation, with the disparity known, the depth of each pixel is computed, converting the dense disparity map I′(x, y) into a depth map.
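This conversion can be sketched as follows; the function name and the handling of zero (invalid) disparity are assumptions, while the formula Z = B·f / D is the patent's:

```python
import numpy as np

def disparity_to_depth(disp, baseline, focal):
    """Z = B*f / D for the parallel camera configuration.  Zero disparity
    (unmatched, or at infinity) is mapped to depth 0 here rather than
    dividing by zero."""
    disp = np.asarray(disp, dtype=np.float64)
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = baseline * focal / disp[valid]
    return depth
```

With baseline B = 1.0 and focal length f = 80.0, a disparity of 4 gives a depth of 20.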
Figs. 5(a) and 5(b) show matching results obtained with the method of the invention: (a) on a static scene sequence and (b) on a dynamic scene sequence. As Fig. 5 shows, the final results preserve accurate disparity boundaries, the disparity of occluded regions lacking matches is well recovered, and the depth layering of the scene objects is clearly visible. The method thus achieves a close-to-ideal matching effect, verifying the validity of the invention.

Claims (8)

1. A method for acquiring a range image by stereo matching of multi-aperture photographs based on color segmentation, characterized in that the method color-segments the input reference image according to the color information of the image; then locally window-matches the reference image against each of the remaining input images to obtain several disparity maps, fusing the several disparity maps and filling in the disparity of the mismatched points; post-processes the fused disparity map to obtain a dense disparity map; and computes depth according to the disparity-depth relation, converting the dense disparity map into a depth map; the concrete steps being as follows:
(1) correct all input images, eliminating the color differences between corresponding points on different viewpoint images caused by image noise, illumination conditions, occlusion and similar factors;
(2) color-segment the reference image and extract its color-consistent regions;
(3) locally window-match the reference image against each of the remaining target images to obtain several disparity maps;
(4) apply a bidirectional matching strategy to eliminate the mismatched points produced during local window matching, improving disparity precision;
(5) fuse the several disparity maps into one disparity map according to a fusion criterion, filling in the disparity of the mismatched points;
(6) post-process the fused disparity map to obtain a dense disparity map;
(7) compute depth according to the disparity-depth relation and convert the dense disparity map into a depth map.
2. the method for the acquiring range image by stereo matching of multi-aperture photographing of cutting apart based on color according to claim 1 is characterized in that, above-mentioned steps (1) is described is just carrying out image rule to all input pictures, and its concrete steps are:
(1-1), according to the brightness value of pixel, calculate the accumulation histogram of every width of cloth image;
(1-2), with the captured image of intermediate-view as the reference image C c, get that two captured images of viewpoint are respectively target image C about it l, target image C r, with accumulation histogram according to pixels number be divided into 10 sections, find out the brightness up-and-down boundary value of each section respectively, determine reference picture C thus cWith target image C l, target image C rCorresponding intersegmental Linear Mapping relation;
(1-3), to each pixel in the target image, ask its fragment number in accumulation histogram, according to its fragment number in accumulation histogram and corresponding mapping relations, pixel is just advised then.
3. the method for the acquiring range image by stereo matching of multi-aperture photographing of cutting apart based on color according to claim 2, it is characterized in that, above-mentioned steps (2) is described carries out color to reference picture and cuts apart, extract the colour consistency zone in the reference picture, its concrete steps are as follows: adopt Mean Shift algorithm according to the colouring information of image to reference picture C cCarry out color and cut apart, utilize the gradient of probability distribution to seek the distribution peak value, each pixel in the image is referred under the corresponding density mode, thereby realizes cluster, the pixel in each cut zone that obtains has identical color value.
4. the method for the acquiring range image by stereo matching of multi-aperture photographing of cutting apart based on color according to claim 3, it is characterized in that, above-mentioned steps (3) is described carries out the local window coupling with all the other target images respectively with reference picture, obtains several disparity maps, and its concrete steps are as follows:
(3-1), determine reference picture C cWith target image C l, target image C rPosition relation; When in a width of cloth target image, can not find match point, can find its corresponding match point at an other width of cloth target image, with the captured image of intermediate-view as the reference image C c, get that two captured images of viewpoint are respectively target image C about it l, target image C r, P in the scene 1P 3The line segment zone is blocked at left viewpoint place, when the solid coupling is asked reference picture C cParallax the time, this regional pixel is at target image C lIn can not find match point, but at target image C rIn can find corresponding match point;
(3-2), respectively to reference picture C cWith target image C lAnd reference picture C cWith target image C rCarry out the local window coupling;
With above-mentioned reference picture C cAs benchmark image, with benchmark image point to be matched is that size of center pixel establishment is the 5*5 window, searching for onesize with vertex neighborhood to be matched in target image is the neighborhood of pixels of 5*5, window with point to be matched compares successively, wherein adopt the adaptive pixel opposite sex to measure (self-adapting dissimilarity measure) as the similarity measurement function, as shown in the formula, the point of its maximum comparability correspondence is exactly an optimal match point
C(x, y, d) = (1 − ω)·C_SAD(x, y, d) + ω·C_GRAD(x, y, d)
C_SAD(x, y, d) = Σ_{(i,j)∈N(x,y)} |I_1(i, j) − I_2(i + d, j)|
C_GRAD(x, y, d) = Σ_{(i,j)∈N_x(x,y)} |∇_x I_1(i, j) − ∇_x I_2(i + d, j)| + Σ_{(i,j)∈N_y(x,y)} |∇_y I_1(i, j) − ∇_y I_2(i + d, j)|
where N(x, y) is the 5×5 window centered on the match point (x, y), ∇_x denotes the horizontal component of the image gradient, ∇_y the vertical component, and ω is a weight;
matching target image C_l with reference image C_c yields disparity map I_LI(x, y), and matching target image C_r with reference image C_c yields disparity map I_RI(x, y); both disparity maps contain many mismatched points.
5. the method for the acquiring range image by stereo matching of multi-aperture photographing of cutting apart based on color according to claim 4, it is characterized in that, the two-way coupling of the described employing of above-mentioned steps (4) is eliminated the mistake match point that produces in the local window matching process, it is used in target image C lWith reference picture C cCoupling and reference picture C cWith target image C rMate in these two identical matching processs, its concrete steps are as follows:
(4-1), with left image as the reference image, right image is as target image, from left to right carries out the local window coupling and obtains from left to right disparity map d LR
(4-2), with right image as the reference image, left image is as target image, carries out the local window coupling from right to left and obtains from right to left disparity map d RL
(4-3), find out at d according to following formula LRAnd d RLThe inconsistent corresponding points of parallax in two width of cloth disparity maps are defined as the mistake match point,
d_LR(x_L, y) = d_RL(x_R, y) = { [d_LR(x_L, y) + d_RL(x_R, y)] / 2, if |d_LR(x_L, y) − d_RL(x_R, y)| ≤ λ; 0, otherwise }
where λ is an error threshold, and pixel (x_L, y) in the left image and pixel (x_R, y) in the right image are a pair of matched points, i.e. x_R = x_L + d_LR(x_L, y);
when the disparity error of corresponding points in d_LR and d_RL satisfies |d_LR(x_L, y) − d_RL(x_R, y)| ≤ λ, the disparity match is correct; when it does not, the point is a mismatched point and its disparity is set to 0.
6. the method for the acquiring range image by stereo matching of multi-aperture photographing of cutting apart based on color according to claim 5, it is characterized in that, above-mentioned steps (5) is described to synthesize a width of cloth disparity map with several disparity maps, fills the parallax information of mistake match point, and its concrete steps are as follows:
(5-1), calculate proportionality coefficient α according to the translation vector t in the outer ginseng matrix of camera;
α = |t_C − t_L| / (|t_C − t_L| + |t_C − t_R|)
where t_L, t_C and t_R are respectively the translation vectors in the extrinsic matrices of the left, middle and right cameras;
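Step (5-1) is a direct ratio of the two baseline lengths. A minimal sketch (the function name `baseline_ratio` is hypothetical; translation vectors are taken as 3-vectors and |·| as the Euclidean norm):

```python
import numpy as np

def baseline_ratio(t_l, t_c, t_r):
    """alpha = |tC - tL| / (|tC - tL| + |tC - tR|), step (5-1) sketch."""
    b_left = np.linalg.norm(np.asarray(t_c) - np.asarray(t_l))
    b_right = np.linalg.norm(np.asarray(t_c) - np.asarray(t_r))
    return b_left / (b_left + b_right)
```

For a symmetric rig (middle camera equidistant from left and right) this yields α = 0.5.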
(5-2) According to the following fusion criterion, synthesize the two disparity maps I_LI(x, y) and I_RI(x, y) into the final disparity map I(x, y), filling in the disparity information of the mismatched points; the fusion is formulated as follows:
[The fusion formula is given as an image (FSB00000595224800032) in the original: I(x, y) is formed from I_LI(x, y) and I_RI(x, y) using the coefficient α and the error threshold δ.]
where I_LI(x, y) and I_RI(x, y) respectively denote the disparity maps obtained by matching reference image C_c with target image C_l and reference image C_c with target image C_r, and δ denotes an error threshold.
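The exact fusion formula survives only as an image in the original, so the following is a speculative sketch of step (5-2), not the patent's criterion: it assumes disparity is proportional to baseline, so I_RI is rescaled by α/(1−α) to the left-pair baseline, values agreeing within δ are averaged, and a mismatched (zero-valued) point in one map is filled from the other. The function name `fuse_disparity` and this fill rule are assumptions.

```python
import numpy as np

def fuse_disparity(d_li, d_ri, alpha, delta=1.0):
    """Speculative sketch of the step (5-2) fusion criterion."""
    # rescale the right-pair disparities to the left-pair baseline,
    # using alpha = |tC - tL| / (|tC - tL| + |tC - tR|)
    scale = alpha / (1.0 - alpha) if alpha < 1.0 else 1.0
    d_ri_s = d_ri * scale
    fused = np.zeros_like(d_li, dtype=float)
    agree = np.abs(d_li - d_ri_s) <= delta   # consistent within threshold
    fused[agree] = (d_li[agree] + d_ri_s[agree]) / 2.0
    # mismatched points (disparity 0 from the cross check): fill from
    # whichever map still holds a valid, non-zero value
    hole = ~agree
    fused[hole] = np.where(d_li[hole] != 0, d_li[hole], d_ri_s[hole])
    return fused
```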
7. The method for acquiring a range image by stereo matching of multi-aperture photographing based on color segmentation according to claim 6, characterized in that step (6) optimizes the fused disparity map to obtain a dense disparity map; the concrete steps are as follows:
(6-1) Assume that the disparity varies smoothly within each color-segmented region of the reference image;
(6-2) Take the median of the disparity values of all pixels in each segmented region as the disparity of the whole region, thereby obtaining the disparity value of each pixel; its mathematical form is given by the following formula, finally yielding the high-quality dense disparity map I′(x, y):
I′(x, y) = median_{(x, y) ∈ I_SEG(x, y)} ( I(x, y) )
where I_SEG(x, y) denotes a segmented region.
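The per-region median of step (6) can be sketched as follows, assuming the color segmentation is available as an integer label map of the same shape as the disparity map (the function name `median_fill` is hypothetical):

```python
import numpy as np

def median_fill(disp, labels):
    """Step (6) sketch: within each color-segmented region (integer label
    map), replace every pixel's disparity by the region's median disparity."""
    out = np.empty_like(disp, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = np.median(disp[mask])
    return out
```

Because the median ignores outliers, isolated mismatched disparities inside a region are overwritten by the dominant value, which is what densifies the map.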
8. The method for acquiring a range image by stereo matching of multi-aperture photographing based on color segmentation according to claim 7, characterized in that in step (7) the depth is computed according to the relation between disparity and depth, converting the dense disparity map into a depth map, specifically:
In a parallel camera configuration, the depth value of a scene point and its disparity have the following relation:
Z = B·f / D
where Z denotes the depth value, B denotes the baseline distance, f is the camera focal length, and D is the disparity; according to this relation between depth and disparity, with the disparity known, the depth value of each pixel is calculated, thereby converting the dense disparity map I′(x, y) into a depth map.
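Claim 8 is a one-line conversion. A sketch (the function name `disparity_to_depth` is hypothetical; B and f must be in consistent units, e.g. baseline in metres and focal length in pixels, giving depth in metres):

```python
def disparity_to_depth(D, B, f):
    """Z = B*f / D for a parallel camera rig (claim 8).

    D: disparity in pixels (must be non-zero),
    B: baseline distance, f: focal length in pixels.
    """
    return (B * f) / D
```

For example, a 0.1 m baseline, an 800-pixel focal length and a 4-pixel disparity give a depth of 20 m; zero-disparity pixels (points at infinity) must be handled separately before dividing.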
CN2009101982317A 2009-11-03 2009-11-03 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation Expired - Fee Related CN101720047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009101982317A CN101720047B (en) 2009-11-03 2009-11-03 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation


Publications (2)

Publication Number Publication Date
CN101720047A CN101720047A (en) 2010-06-02
CN101720047B true CN101720047B (en) 2011-12-21

Family

ID=42434549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009101982317A Expired - Fee Related CN101720047B (en) 2009-11-03 2009-11-03 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation

Country Status (1)

Country Link
CN (1) CN101720047B (en)

Families Citing this family (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120005270A (en) * 2010-07-08 2012-01-16 주식회사 팬택 Image output device and method for outputting image using the same
CN102074014B (en) * 2011-02-23 2012-12-12 山东大学 Stereo matching method by utilizing graph theory-based image segmentation algorithm
CN102129567A (en) * 2011-03-17 2011-07-20 南京航空航天大学 Fast stereo matching method based on color partitioning and self-adaptive window
CN102111637A (en) * 2011-03-29 2011-06-29 清华大学 Stereoscopic video depth map generation method and device
JP5781353B2 (en) 2011-03-31 2015-09-24 株式会社ソニー・コンピュータエンタテインメント Information processing apparatus, information processing method, and data structure of position information
GB2489929A (en) * 2011-04-08 2012-10-17 Sony Corp Generation of a Colour Difference Amount Between Pairs of Images
CN102156987A (en) * 2011-04-25 2011-08-17 深圳超多维光电子有限公司 Method and device for acquiring depth information of scene
TWI469086B (en) * 2011-04-26 2015-01-11 Univ Nat Cheng Kung Method for image/video segmentation using texture feature
CN102184540B (en) * 2011-05-03 2013-03-20 哈尔滨工程大学 Sub-pixel level stereo matching method based on scale space
ITTO20110439A1 (en) * 2011-05-17 2012-11-18 Sisvel Technology Srl METHOD FOR GENERATING, TRANSMITTING AND RECEIVING STEREOSCOPIC IMAGES, AND RELATED DEVICES
JP2012253666A (en) * 2011-06-06 2012-12-20 Sony Corp Image processing apparatus and method, and program
CN102298708B (en) * 2011-08-19 2013-08-07 四川长虹电器股份有限公司 3D mode identification method based on color and shape matching
CN102509348B (en) * 2011-09-26 2014-06-25 北京航空航天大学 Method for showing actual object in shared enhanced actual scene in multi-azimuth way
CN103843333B (en) * 2011-09-29 2015-09-16 富士胶片株式会社 Control the method for the display of stereo-picture, control device and the imaging device of stereo-picture display
CN102609974B (en) * 2012-03-14 2014-04-09 浙江理工大学 Virtual viewpoint image generation process on basis of depth map segmentation and rendering
CN102622769B (en) * 2012-03-19 2015-03-04 厦门大学 Multi-target tracking method by taking depth as leading clue under dynamic scene
CN103379350B (en) * 2012-04-28 2015-06-03 中国科学院深圳先进技术研究院 Virtual viewpoint image post-processing method
CN102692236A (en) * 2012-05-16 2012-09-26 浙江大学 Visual milemeter method based on RGB-D camera
CN102802020B (en) * 2012-08-31 2015-08-12 清华大学 The method and apparatus of monitoring parallax information of binocular stereoscopic video
EP2741502A1 (en) * 2012-12-07 2014-06-11 Thomson Licensing Method and apparatus for color transfer between images
CN103986923B (en) * 2013-02-07 2016-05-04 财团法人成大研究发展基金会 Image stereo matching system
CN104036226B (en) 2013-03-04 2017-06-27 联想(北京)有限公司 A kind of object information acquisition method and electronic equipment
US20140267701A1 (en) * 2013-03-12 2014-09-18 Ziv Aviv Apparatus and techniques for determining object depth in images
CN104123715B (en) * 2013-04-27 2017-12-05 株式会社理光 Configure the method and system of parallax value
CN103337064A (en) * 2013-04-28 2013-10-02 四川大学 Method for removing mismatching point in image stereo matching
CN103458246B (en) * 2013-09-03 2016-08-17 清华大学 Occlusion handling method in video motion segmentation and system
CN103458261B (en) * 2013-09-08 2015-04-08 华东电网有限公司 Video scene variation detection method based on stereoscopic vision
CN104517095B (en) * 2013-10-08 2018-01-02 南京理工大学 A kind of number of people dividing method based on depth image
CN103607558A (en) * 2013-11-04 2014-02-26 深圳市中瀛鑫科技股份有限公司 Video monitoring system, target matching method and apparatus thereof
CN103634586A (en) * 2013-11-26 2014-03-12 深圳市唯特视科技有限公司 Stereo-image acquiring method and device
CN103780897B (en) * 2014-02-25 2016-05-11 重庆卓美华视光电有限公司 Depth map acquisition methods and the device of two visual point images
CN104112270B (en) * 2014-05-14 2017-06-20 苏州科技学院 A kind of any point matching algorithm based on the multiple dimensioned window of adaptive weighting
CN104050682B (en) * 2014-07-09 2017-01-18 武汉科技大学 Image segmentation method fusing color and depth information
JP6645492B2 (en) * 2015-02-20 2020-02-14 ソニー株式会社 Imaging device and imaging method
CN108140243B (en) * 2015-03-18 2022-01-11 北京市商汤科技开发有限公司 Method, device and system for constructing 3D hand model
CN104796684A (en) * 2015-03-24 2015-07-22 深圳市广之爱文化传播有限公司 Naked eye 3D (three-dimensional) video processing method
CN107027019B (en) * 2016-01-29 2019-11-08 北京三星通信技术研究有限公司 Image parallactic acquisition methods and device
WO2017143550A1 (en) * 2016-02-25 2017-08-31 SZ DJI Technology Co., Ltd. Imaging system and method
CN106023189B (en) * 2016-05-17 2018-11-09 北京信息科技大学 A kind of light field data depth reconstruction method based on matching optimization
WO2018021064A1 (en) * 2016-07-29 2018-02-01 ソニー株式会社 Image processing device and image processing method
CN106331672B (en) * 2016-08-19 2018-12-25 深圳奥比中光科技有限公司 Preparation method, the apparatus and system of visual point image
CN108307179A (en) * 2016-08-30 2018-07-20 姜汉龙 A kind of method of 3D three-dimensional imagings
CN106447661A (en) * 2016-09-28 2017-02-22 深圳市优象计算技术有限公司 Rapid depth image generating method
CN106530409B (en) * 2016-11-03 2019-08-27 浙江大学 Regional area consistency corresponding method in Stereo matching
CN108377327B (en) * 2016-11-03 2020-01-10 深圳市掌网科技股份有限公司 Panoramic camera and depth information acquisition method
CN107071383A (en) * 2017-02-28 2017-08-18 北京大学深圳研究生院 The virtual visual point synthesizing method split based on image local
US10834374B2 (en) 2017-02-28 2020-11-10 Peking University Shenzhen Graduate School Method, apparatus, and device for synthesizing virtual viewpoint images
CN108537871B (en) * 2017-03-03 2024-02-20 索尼公司 Information processing apparatus and information processing method
CN107122782B (en) * 2017-03-16 2020-09-11 成都通甲优博科技有限责任公司 Balanced semi-dense stereo matching method
CN108765480B (en) * 2017-04-10 2022-03-15 钰立微电子股份有限公司 Advanced treatment equipment
CN107256547A (en) * 2017-05-26 2017-10-17 浙江工业大学 A kind of face crack recognition methods detected based on conspicuousness
CN107085825A (en) * 2017-05-27 2017-08-22 成都通甲优博科技有限责任公司 Image weakening method, device and electronic equipment
CN107220994A (en) * 2017-06-01 2017-09-29 成都通甲优博科技有限责任公司 A kind of method and system of Stereo matching
CN107329490B (en) * 2017-07-21 2020-10-09 歌尔科技有限公司 Unmanned aerial vehicle obstacle avoidance method and unmanned aerial vehicle
CN107730543B (en) * 2017-09-08 2021-05-14 成都通甲优博科技有限责任公司 Rapid iterative computation method for semi-dense stereo matching
CN107610070B (en) * 2017-09-29 2020-12-01 深圳市佳创视讯技术股份有限公司 Free stereo matching method based on three-camera collection
CN107958461A (en) * 2017-11-14 2018-04-24 中国航空工业集团公司西安飞机设计研究所 A kind of carrier aircraft method for tracking target based on binocular vision
CN109977981B (en) * 2017-12-27 2020-11-24 深圳市优必选科技有限公司 Scene analysis method based on binocular vision, robot and storage device
TWI719440B (en) * 2018-04-02 2021-02-21 聯發科技股份有限公司 Stereo match method and apparatus thereof
CN108647579B (en) * 2018-04-12 2022-02-25 海信集团有限公司 Obstacle detection method and device and terminal
CN110602474B (en) * 2018-05-24 2022-07-05 杭州海康威视数字技术股份有限公司 Method, device and equipment for determining image parallax
CN108921842A (en) * 2018-07-02 2018-11-30 上海交通大学 A kind of cereal flow detection method and device
CN109299662B (en) * 2018-08-24 2022-04-12 上海图漾信息科技有限公司 Depth data calculation device and method, and face recognition device
CN111147868A (en) * 2018-11-02 2020-05-12 广州灵派科技有限公司 Free viewpoint video guide system
CN109640066B (en) * 2018-12-12 2020-05-22 深圳先进技术研究院 Method and device for generating high-precision dense depth image
CN110264403A (en) * 2019-06-13 2019-09-20 中国科学技术大学 It is a kind of that artifacts joining method is gone based on picture depth layering
CN110428462B (en) * 2019-07-17 2022-04-08 清华大学 Multi-camera stereo matching method and device
CN111353982B (en) * 2020-02-28 2023-06-20 贝壳技术有限公司 Depth camera image sequence screening method and device
CN112601094A (en) * 2021-03-01 2021-04-02 浙江智慧视频安防创新中心有限公司 Video coding and decoding method and device
CN113052886A (en) * 2021-04-09 2021-06-29 同济大学 Method for acquiring depth information of double TOF cameras by adopting binocular principle
CN113129350B (en) * 2021-04-12 2022-12-30 长春理工大学 Depth extraction method based on camera array
CN113066173B (en) * 2021-04-21 2023-03-14 国家基础地理信息中心 Three-dimensional model construction method and device and electronic equipment
TWI772040B (en) * 2021-05-27 2022-07-21 大陸商珠海凌煙閣芯片科技有限公司 Object depth information acquistition method, device, computer device and storage media
CN113643212B (en) * 2021-08-27 2024-04-05 复旦大学 Depth map noise reduction method based on map neural network

Also Published As

Publication number Publication date
CN101720047A (en) 2010-06-02

Similar Documents

Publication Publication Date Title
CN101720047B (en) Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation
CN101902657B (en) Method for generating virtual multi-viewpoint images based on depth image layering
Tam et al. 3D-TV content generation: 2D-to-3D conversion
CN101771893B (en) Video frequency sequence background modeling based virtual viewpoint rendering method
CN101742349B (en) Method for expressing three-dimensional scenes and television system thereof
CN103581648B (en) Draw the hole-filling method in new viewpoint
CN104065947B (en) The depth map acquisition methods of a kind of integration imaging system
CN101969564B (en) Upsampling method for depth video compression of three-dimensional television
Sun et al. An overview of free view-point depth-image-based rendering (DIBR)
CN104756489A (en) Virtual viewpoint synthesis method and system
CN107809630B (en) Based on the multi-view point video super-resolution rebuilding algorithm for improving virtual view synthesis
CN101808251A (en) Method for extracting blocking information in stereo image pair
CN101662695B (en) Method and device for acquiring virtual viewport
CN103873867B (en) Free viewpoint video depth map distortion prediction method and free viewpoint video depth map coding method
CN105141940A (en) 3D video coding method based on regional division
CN102790895B (en) Multi-viewpoint video encoding and viewpoint synthesis predication method based on least square
CN102710949B (en) Visual sensation-based stereo video coding method
CN106028020A (en) Multi-direction prediction based virtual visual-angle image cavity filling method
CN108924434B (en) Three-dimensional high dynamic range image synthesis method based on exposure transformation
CN102761765A (en) Deep and repaid frame inserting method for three-dimensional video
Kim et al. Multiview stereoscopic video hole filling considering spatiotemporal consistency and binocular symmetry for synthesized 3d video
CN104661014B (en) The gap filling method that space-time combines
CN109345444A (en) The super-resolution stereo-picture construction method of depth perception enhancing
Abd Manap et al. Novel view synthesis based on depth map layers representation
TWM529333U (en) Embedded three-dimensional image system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111221

Termination date: 20211103

CF01 Termination of patent right due to non-payment of annual fee