CN101908230B - Regional depth edge detection and binocular stereo matching-based three-dimensional reconstruction method - Google Patents


Info

Publication number
CN101908230B
CN101908230B (application CN2010102354553A / CN201010235455A)
Authority
CN
China
Prior art keywords
pixel
image
window
parallax
confidence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2010102354553A
Other languages
Chinese (zh)
Other versions
CN101908230A (en)
Inventor
达飞鹏 (Da Feipeng)
何栿 (He Fu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Province Huaqiang Textile Co ltd
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN2010102354553A priority Critical patent/CN101908230B/en
Publication of CN101908230A publication Critical patent/CN101908230A/en
Application granted granted Critical
Publication of CN101908230B publication Critical patent/CN101908230B/en

Abstract

The invention discloses a three-dimensional reconstruction method based on region depth edge detection and binocular stereo matching, implemented by the following steps: (1) shooting images of a calibration plate bearing mark points from two suitable angles with two black-and-white cameras; (2) keeping the shooting angles unchanged and shooting two images of the target object simultaneously with the same cameras; (3) performing epipolar rectification of the two target-object images according to the calibration data of the cameras; (4) searching the neighborhood of each pixel of the two rectified images for a closed region depth edge and building a support window; (5) computing, within the built window, the normalized cross-correlation coefficient of the supported pixels to obtain the matching cost of the central pixel; (6) obtaining the disparity by a belief-propagation optimization method with an accelerated update scheme; (7) refining the disparity by sub-pixel estimation; and (8) computing the three-dimensional coordinates of the actual object points from the camera calibration data and the pixel matching relationship, thereby reconstructing the three-dimensional point cloud of the object and recovering the three-dimensional information of the target.

Description

A three-dimensional reconstruction method based on region depth edge detection and binocular stereo matching
Technical field
The invention belongs to the field of binocular stereo vision techniques and relates to the problem of processing real-shot images by stereo matching. It particularly concerns introducing edge detection techniques into stereo matching, constructing the support window of a pixel based on depth edges, obtaining the matching relationship of pixels through a belief-propagation optimization algorithm, and reconstructing the three-dimensional point cloud model of the object.
Background technology
Binocular vision based on stereo matching is a current research focus in the field of three-dimensional reconstruction. It recovers the three-dimensional information of a target by matching two images of the same object taken from different angles. Its main advantages are low hardware requirements and quick, convenient installation, and it is a non-contact reconstruction technique that has no effect on the object. Through the 2.5D stereo disparity map, the object point cloud model obtained by binocular reconstruction is an important foundation of accurate multi-view modeling and surface meshing algorithms. A binocular stereo reconstruction system generally consists of three major parts: camera calibration, stereo matching, and point cloud reconstruction. First, two images of the same object are acquired by the left and right cameras; then the disparity of each target point between the two images is calculated by a stereo matching algorithm; finally the three-dimensional point cloud of the object is reconstructed from the camera calibration information and the disparity values. Stereo matching is the most crucial step in a binocular system and a classical problem in image processing.
Generally speaking, binocular stereo reconstruction algorithms aimed at specific targets such as human faces exploit features of the specific target itself to perform stereo matching. These matching algorithms include special matching strategies or manual steps, which impair the robustness and the degree of automation of stereo reconstruction; if the object changes, the performance of the algorithm may suffer. Regarding reconstruction accuracy, because binocular systems rely mainly on stereo matching algorithms, the system depends heavily on reliable pixel disparities. The three main problems of stereo matching are: I) unbalanced illumination, II) low texture, and III) regional occlusion. For the real images taken by a binocular system, these problems can be even more challenging. Stereo matching in binocular reconstruction is a passive method based simply on two images of the target, without auxiliary information such as structured light, so the reconstruction system is strongly affected by the shooting environment and by the target itself. Real environments rarely provide static, balanced illumination and color, so the same scene easily forms different color information in the left and right images of the binocular cameras, along with unstable shadows and retroreflective regions. Moreover, the irregularly varying shape of a real target, combined with the camera angles, easily produces occlusion areas of complex distribution and a large range of possible disparities, further increasing the difficulty of matching. These stereo matching problems ultimately lower the quality of the object point clouds reconstructed by binocular systems.
Summary of the invention
Technical problem: the purpose of this invention is to provide a three-dimensional reconstruction method based on region depth edge detection and binocular stereo matching, which soundly combines a stereo matching algorithm built on support windows constructed by region depth edge detection, the normalized cross-correlation coefficient, and a belief-propagation optimization method with an accelerated update scheme, and applies it to a binocular vision system to reconstruct the three-dimensional model of the target object rapidly and accurately.
Technical scheme: first, camera calibration is carried out; then two images of the target object are shot from different angles, and epipolar rectification and image transformation are performed according to the calibration data. In the fixed neighborhood of each pixel, a second-order difference edge detection operator searches for and builds a virtual closed depth edge, forming the corresponding support window. The supported pixels in the support window are then used to compute the matching cost between center pixels via the normalized cross-correlation coefficient, which serves as the data term of the global energy optimization equation, and the disparity is obtained by the belief-propagation optimization algorithm. A disparity refinement algorithm based on sub-pixel estimation is then introduced to repair some discontinuous quantized disparity values. Finally, the three-dimensional point cloud of the target object is reconstructed from the calibration information of the binocular cameras and the pixel disparity results, yielding the three-dimensional information of the target.
This three-dimensional reconstruction method comprises the following steps in order:
Step 1) camera calibration: use two horizontally placed binocular cameras to shoot a calibration plate bearing mark points. First adjust the positions of the binocular cameras so that their optical axes are approximately parallel, then shoot the calibration plate simultaneously from the left and right angles. According to the calibration plate images, calibrate the two cameras separately, obtaining their respective intrinsic parameter matrices A_L, A_R and extrinsic parameter matrices [R_L t_L], [R_R t_R];
Step 2) target object image acquisition: keep the shooting angles of the cameras unchanged and shoot two images of the target object, the left lens shooting the left image and the right lens the right image;
Step 3) epipolar rectification and image transformation: based on the intrinsic and extrinsic camera parameter matrices obtained in the calibration of step 1), rotate the two object images respectively using an epipolar rectification method to form parallel epipolar lines, then convert the original images of the object into images to be matched according to the rectified epipolar lines, so that corresponding matched pixels in the two images to be matched have identical ordinates;
Step 4) construct support windows based on region depth edge detection: the closed boundary of the support window of each pixel p in an image to be matched is regarded as a set of depth-transition pixels. Using a second-order difference edge detection operator, search outward from pixel p for the boundary points of depth-transition pixels as vertices of the support window; connecting these boundary points forms the boundary of a virtual closed edge, and the enclosed region forms the support window W_p of pixel p. Likewise, the support window W_q of each pixel q in the other image to be matched is constructed in the same way;
Step 5) compute the matching cost, comprising the following two sub-steps:
Step 5.1: of the two epipolar-rectified images to be matched, take either one as the reference image and the other as the registration image. Step 4) yields the support window W_p of any pixel p with coordinates (x, y) in the reference image, and the support window W_q of a possible match point q on the epipolar line corresponding to p in the registration image, where the coordinates of q are (x + d, y) with d ∈ [d_min, d_max], d_min being the minimum disparity value and d_max the maximum disparity value;
Step 5.2: in the reference image, choose a window W̄_p equal in size and shape to the intersection of the support windows W_p and W_q; in the registration image, choose a window W̄_q equal in size and shape to the same intersection. According to the formula

Φ(p, d) = [ Σ_{i_l ∈ W̄_p, i_r ∈ W̄_q} |I(i_l) − Ī_l| × |I(i_r) − Ī_r| ] / √[ Σ_{i_l ∈ W̄_p} (I(i_l) − Ī_l)² × Σ_{i_r ∈ W̄_q} (I(i_r) − Ī_r)² ]

compute the normalized cross-correlation coefficient of the supported pixels i_l and i_r as the matching cost of reference pixel p, at disparity d, against the corresponding registration pixel q, denoted Φ(p, d); I(i_l) and I(i_r) are the gray values of pixels i_l and i_r, and Ī_l and Ī_r are the mean gray values of the reference image and the registration image over the intersection windows W̄_p and W̄_q, respectively;
Step 6) belief propagation optimizes the energy function: belief messages are propagated iteratively in the reference image to obtain the energy optimization of the entire image. An accelerated update scheme passes belief messages from one pixel to the next: at each iteration t, messages are spread in the left, right, up and down directions in turn, and a pixel is updated immediately along the direction from which it has just received messages. After message propagation finishes, compute the belief of each pixel p at that point, and choose the disparity value with the maximum belief as the disparity d_p of that pixel;
Step 6.1: compute the smoothness cost of pixels
The smoothness cost Ψ(p, s) determines the support of neighboring pixels p and s for belief-message propagation. First compute the smoothness cost Ψ(p, s) between each pixel p and its right neighbor and lower neighbor as a piecewise function of the disparity difference and the gray difference (the explicit formulas survive in the source only as images), where ΔI_ps is the gray difference between pixels p and s, computed as ΔI_ps = |I(s) − I(p)|, with I(s) and I(p) the gray values of pixels s and p; L is a static threshold, and u and P are smoothness compensation terms, all static constants; d_s is the disparity value of pixel s at this time, and d is the candidate disparity value of pixel p;
Step 6.2: compute the belief messages of pixels
When the t-th iteration propagates belief messages, the message that pixel s passes to its neighbor p, denoted m^t_{s→p}(d), combines the matching cost Φ(s, d_s) of pixel s, the smoothness cost Ψ(s, p), and the three messages received by pixel s from its 4-neighborhood (excluding p) at iteration t−1:

m^t_{s→p}(d) = max_{d_s} [ Φ(s, d_s) − Ψ(s, p) + Σ_{k ∈ N(s)\p} m^{t−1}_{k→s}(d_s) ]

where Φ(s, d_s) is the matching cost of pixel s against its candidate pixel in the registration image at disparity value d_s, obtained by step 5); Ψ(s, p) = Ψ(p, s) is the smoothness cost; and N(s)\p denotes the neighbors of s excluding p;
Step 6.3: compute the belief of pixels and obtain the disparity
According to the accelerated message-update scheme, starting from the rightmost pixel of the reference image, propagate each pixel's message in turn to its adjacent left pixel; then, starting from the leftmost pixel, propagate each pixel's message in turn to its adjacent right pixel; then propagate in the downward and upward directions by the same rule, which completes one iteration of the belief-message update for the entire image. After T iterations, the belief b_p(d) of pixel p is computed by

b_p(d) = Φ(p, d) + Σ_{s ∈ N(p)} m^T_{s→p}(d)

where Φ(p, d) is the matching cost of pixel p, obtained by step 5, and m^T_{s→p}(d) is the belief message passed from neighbor s to pixel p at this point. Then compute the beliefs corresponding to all disparities d ∈ [d_min, d_max] in the disparity range, d_min being the minimum disparity value and d_max the maximum disparity value, and choose the disparity value with the maximum belief as the disparity d_p of pixel p, computed by

d_p = argmax_{d ∈ [d_min, d_max]} b_p(d);
Step 7) disparity refinement: use the sub-pixel estimate

Disparity(p) = d_p + [Φ(p, d_p − 1) − Φ(p, d_p + 1)] / {2 [Φ(p, d_p + 1) + Φ(p, d_p − 1) − 2 Φ(p, d_p)]}

to convert the integer disparity values obtained in step 6) into decimal form and obtain the more accurate sub-pixel disparity Disparity(p) as the final disparity value of pixel p, where the three discrete disparity values d_p, d_p + 1 and d_p − 1 are obtained by step 6, and the matching costs Φ(p, d_p), Φ(p, d_p + 1) and Φ(p, d_p − 1) are obtained by step 5;
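The refinement above fits a parabola through the three matching costs around the integer disparity and takes its vertex. Since the formula itself survives only as an image in this text, the sketch below implements the standard three-point parabola-vertex form as an assumption consistent with the quantities named in the step:

```python
def subpixel_disparity(d_p, cost_minus, cost_at, cost_plus):
    """Parabolic (three-point) sub-pixel refinement around the integer
    disparity d_p, given the matching costs at d_p - 1, d_p and d_p + 1.
    Assumed standard form; the original equation is an image placeholder."""
    denom = 2.0 * (cost_plus + cost_minus - 2.0 * cost_at)
    if denom == 0.0:
        return float(d_p)      # flat neighborhood: keep the integer value
    # vertex of the parabola through the three samples
    return d_p + (cost_minus - cost_plus) / denom
```

For example, sampling the parabola y = −(x − 5.25)² at 4, 5 and 6 and refining around d_p = 5 recovers the true peak 5.25 exactly.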
Step 8) reconstruct the three-dimensional point cloud of the target object: P_L is the pixel corresponding to target point P in the reference image, and P_R is the matched pixel corresponding to P in the registration image; the coordinates of P_L are (x_L, y_L) and those of P_R are (x_R, y_R), obtained through x_R = x_L + Disparity(P_L) and y_R = y_L, where Disparity(P_L) is the final disparity value of pixel P_L obtained in step 7). Then, from the intrinsic and extrinsic camera parameter matrices obtained in step 1) and the matched pixel pair (P_L, P_R), calculate the three-dimensional coordinates of target point P. Calculating the three-dimensional coordinates of all target points yields the three-dimensional point cloud model and the three-dimensional information of the whole object.
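The patent triangulates with the full calibrated matrices A_L, A_R, [R_L t_L], [R_R t_R]; for intuition, the sketch below shows the closed-form back-projection for an ideal rectified pair (identical pinhole cameras with parallel optical axes) — a simplifying assumption, with all parameter names illustrative:

```python
def reconstruct_point(x_l, y_l, disparity, f, baseline, cx, cy):
    """Back-project a matched pixel pair into 3D for an ideal rectified
    pair. disparity is the magnitude of the column offset between the
    matched pixels; f is the focal length in pixels, baseline the camera
    separation, (cx, cy) the principal point. Assumed simplified model,
    not the patent's general matrix-based triangulation."""
    z = f * baseline / disparity      # depth grows as disparity shrinks
    x = (x_l - cx) * z / f            # lateral offset from the optical axis
    y = (y_l - cy) * z / f
    return x, y, z
```

With f = 500 px and a 0.1 m baseline, a 10-pixel disparity corresponds to a depth of 5 m, illustrating why sub-pixel disparity accuracy (step 7) matters for smooth point clouds.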
Step 4), constructing support windows based on region depth edge detection, comprises the following three sub-steps:
Step 4.1: apply median filtering to the two epipolar-rectified images to be matched, improving image quality and reducing the effect of noise on the performance of the edge detection operator;
Step 4.2: search for the vertices of the support window in eight directions
In the two images to be matched, starting from the horizontal direction of each pixel p and rotating counterclockwise by 45° in turn gives the 8 vertex directions of the support window. On each vertex direction θ_k (k = 0, 1, 2, ..., 7), use the second-order difference operator

∇²I(p + h_i) = I(p + h_{i+1}) + I(p + h_{i−1}) − 2 I(p + h_i)

to search from the center point p for the region depth edge point h(θ_k) on that direction, where ∇²I(p + h_i) is the second-order gray-level difference of pixel (p + h_i) along direction k; I(p + h_{i+1}), I(p + h_{i−1}) and I(p + h_i) are the gray values of pixels (p + h_{i+1}), (p + h_{i−1}) and (p + h_i); h_i ∈ {1, 2, 3, ...} is the step length along θ_k; and θ_k (k = 0, 1, 2, ..., 7) are the eight vertex directions of the support window. When the zero-crossing condition

∇²I(p + h_i) · ∇²I(p + h_{i+1}) < 0

is satisfied, h(θ_k) = h_i, and that pixel is taken as the vertex of the support window on direction k;
Step 4.3: connect the vertices at step length h(θ_k) on the eight directions of pixel p to form the region depth edge of point p, and choose the area it encloses as the corresponding support window W_p.
Beneficial effects: compared with the prior art, the present invention has the following advantages. The support window constructed by region depth edge detection is based on the assumption that pixels of similar depth within a neighborhood have similar disparities; it uses the variation of depth information within the region to constrain the size and shape of the window so that it contains supporting pixels whose depth is similar to that of the center pixel. Combined with the normalized cross-correlation computation, it effectively overcomes the influence of unbalanced illumination on the reconstruction system in real environments, yields stable matching costs, and suits the subsequent energy optimization algorithm. The belief-propagation optimization algorithm with the accelerated update scheme makes full use of the texture information of all pixels of the entire image to seek the optimal solution, minimizes the energy function, repairs the matching information of occluded pixels, and reduces matching uncertainty and error. The disparity refinement method based on sub-pixel estimation converts integer disparity values into more accurate decimals, producing smoother object point clouds. The present invention fully combines the advantages of edge detection techniques, depth-based disparity computation, and belief-propagation optimization, overcomes the main problems encountered when existing stereo matching techniques are applied in real three-dimensional reconstruction systems, and improves the speed and degree of automation of the reconstruction process while guaranteeing reconstruction accuracy.
Description of drawings
Fig. 1 is the overall flowchart of the present invention.
Fig. 2 is a schematic diagram of the system model and principle.
Fig. 3 is a schematic diagram of epipolar rectification.
Fig. 4 is a schematic diagram of region depth edge detection.
Fig. 5 is a schematic diagram of the support windows of pixels.
Fig. 6 is a schematic diagram of the accelerated update scheme of the belief-propagation optimization algorithm.
Fig. 7 is a schematic diagram of computing the three-dimensional space coordinates of object points on the images from the matching relationship and the calibration data.
Embodiment
With reference to the accompanying drawings, specific embodiments of the present invention are described in more detail. Visual C++ 2003.net and the OpenCV image processing library are selected as the programming tools, and a low-texture human face and a watermelon under markedly uneven illumination in an indoor environment are selected as the three-dimensional reconstruction targets.
Fig. 1 is the overall flowchart of the present invention.
Fig. 2 shows the system model and principle of the present invention. Two CCD cameras each shoot one target image simultaneously from two different angles. O_L and O_R are the optical centers of the two cameras, I_L and I_R are their imaging planes, P is a space object point on the object to be reconstructed, and P_L and P_R are the imaging points of object point P on the two camera imaging planes. The imaging points of the same space object point on the different camera imaging planes form a pair of match points. Take either image as the reference image and the other as the registration image; the process of searching the registration image for the corresponding match point of each pixel in the reference image is called stereo matching. After the matching relationship of the pixels is obtained, the three-dimensional space coordinates of the corresponding object points can be computed by inverse calculation according to the system model, combined with the intrinsic and extrinsic camera parameters obtained by calibration, thereby realizing the three-dimensional reconstruction of the image.
Fig. 4 is a schematic diagram of the region depth edge detection technique; the coordinate center is the center pixel of each support window, the ordinate is pixel depth, and the abscissa is the spatial distance between neighborhood pixels and the center. An image edge is the set of pixels where the color information transitions; such transitions appear not only between targets but also between pixels. The closed boundary of the support window of each pixel can be regarded as a set of such transition pixels: the color information of these window-edge pixels transitions between pixels whose depth is similar to the center pixel and pixels whose depth is dissimilar. The region depth edge detection technique uses an edge detection operator to search outward from each pixel for the boundary points of depth-transition pixels as the support window of that pixel, as marked in Fig. 4. Connecting these boundary points forms a virtual closed edge that divides the fixed neighborhood of the pixel into a peripheral region and a central region; the central region is then selected as the support window, and the pixels within it are used to compute the matching cost. Therefore, as shown in Fig. 5(a) and (b), the region depth edge constructed in the fixed neighborhood U of each pixel divides it into two parts: the central region W serving as the support window, and the peripheral region U − W. Edge detection operators mainly comprise first-order and second-order difference operators. When detecting depth-transition pixels, a window built on a first-order difference operator must search for the pixels with maximum differential value as the boundary points of the support window, whereas the second-order difference operator only needs to find the zero crossings of the second-order difference as the boundary points of the support window.
Fig. 5 is a schematic diagram of the support windows of pixels. As shown in Fig. 5(a), p is a pixel in the reference image and W_p is the support window region constructed by depth-edge detection with p as the center pixel. Fig. 5(b) shows the support window in the registration image: the region inside the dashed box is the search range, i.e. the disparity range of the match point corresponding to reference pixel p, and the match point of p falls within this range; q is a pixel in the search range and W_q is the support window region with q as the center pixel. The dashed region W̄_p in Fig. 5(c) is the window in the reference image equal in size and shape to the intersection of the support windows W_p and W_q; the dashed region W̄_q in Fig. 5(d) is the window in the registration image equal in size and shape to the same intersection. The matching cost Φ(p, d) between pixel p in the reference image and the pixel q at disparity distance d in the registration image is defined as

Φ(p, d) = [ Σ_{i_l ∈ W̄_p, i_r ∈ W̄_q} |I(i_l) − Ī_l| × |I(i_r) − Ī_r| ] / √[ Σ_{i_l ∈ W̄_p} (I(i_l) − Ī_l)² × Σ_{i_r ∈ W̄_q} (I(i_r) − Ī_r)² ]    (1)

where I(i_l) and I(i_r) are the gray values of pixels i_l and i_r, and Ī_l and Ī_r are the mean gray values of the reference image and the registration image over the intersection windows W̄_p and W̄_q, respectively.
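A minimal sketch of the cost of formula (1) follows, using square windows in place of the depth-edge support windows for brevity (the window shape is the only simplification; function and variable names are illustrative):

```python
def ncc_cost(left, right, x, y, d, r=1):
    """Matching cost of reference pixel (x, y) at disparity d: normalized
    cross-correlation of gray values over a (2r+1)x(2r+1) window, with the
    candidate window in the right image shifted by d columns. Square
    windows stand in for the intersection of the depth-edge support
    windows used in the patent."""
    lv, rv = [], []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            lv.append(left[y + dy][x + dx])
            rv.append(right[y + dy][x + d + dx])
    ml = sum(lv) / len(lv)                      # mean gray, reference window
    mr = sum(rv) / len(rv)                      # mean gray, candidate window
    num = sum(abs(a - ml) * abs(b - mr) for a, b in zip(lv, rv))
    den = (sum((a - ml) ** 2 for a in lv) *
           sum((b - mr) ** 2 for b in rv)) ** 0.5
    return num / den if den else 0.0            # 1.0 = perfect correlation
```

Because the cost is normalized by each window's own mean and variance, a gain or offset difference between the left and right images (the unbalanced-illumination problem of the background section) leaves the score largely unchanged.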
The method of the present invention specifically comprises the following steps:
Step 1: camera calibration
Use two horizontally placed binocular cameras to shoot a calibration plate bearing mark points. First adjust the positions of the binocular cameras so that their optical axes are approximately parallel and the left and right lenses are at suitable shooting angles, then shoot the calibration plate simultaneously from the two angles. The present invention adopts the calibration algorithm known as the planar template method proposed in the reference "A Flexible New Technique for Camera Calibration" (Zhang Z Y, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334) to calibrate the two cameras separately according to the calibration plate images, obtaining their respective intrinsic parameter matrices A_L, A_R and extrinsic parameter matrices [R_L t_L], [R_R t_R].
Step 2: target object image acquisition
Keep the shooting angles of the cameras unchanged and shoot two images of the target object, the left lens shooting the left image and the right lens the right image.
Step 3: epipolar rectification and image transformation
Fig. 3 is a schematic diagram of epipolar rectification. According to the epipolar constraint principle, the matched pixel in the right image of a pixel in the left image lies on the epipolar line corresponding to that pixel. In a real binocular reconstruction system, the two cameras cannot guarantee absolutely parallel shooting, so the epipolar rectification algorithm proposed in the reference "A compact algorithm for rectification of stereo pairs" (Fusiello A, Trucco E, Verri A. Machine Vision and Applications, 2000, 12(1): 16-22) is introduced. According to the intrinsic and extrinsic camera parameters obtained by the calibration of step 1, suitable rotations form parallel epipolar lines in the two images; the original images are then transformed according to the rectified epipolar lines, finally forming the images of the object to be matched. Corresponding matched pixels in the two images to be matched have identical ordinates, which reduces the search of stereo matching from two dimensions to one and improves the efficiency and precision of the subsequent steps.
Step 4: construct support windows according to the region depth edge detection technique, comprising the following three sub-steps:
Step 4.1: apply median filtering to the two images to be matched, improving image quality and reducing the influence of noise on the edge detection operator.
Step 4.2: search for the vertices of the support window in eight directions
In the images to be matched obtained after the epipolar rectification of step 3, starting from the horizontal direction of each pixel p, as shown in Fig. 5, rotating counterclockwise by 45° in turn gives the 8 vertex directions of the support window. On each vertex direction θ_k (k = 0, 1, 2, ..., 7), use the second-order difference operator

∇²I(p + h_i) = I(p + h_{i+1}) + I(p + h_{i−1}) − 2 I(p + h_i)

to search from the center point p for the region depth edge point h(θ_k) on that direction, where ∇²I(p + h_i) is the second-order gray-level difference of pixel (p + h_i) along direction k; I(p + h_{i+1}), I(p + h_{i−1}) and I(p + h_i) are the gray values of pixels (p + h_{i+1}), (p + h_{i−1}) and (p + h_i); h_i ∈ {1, 2, 3, ...}; and θ_k (k = 0, 1, 2, ..., 7) are the eight vertex directions of the support window. When the zero-crossing condition

∇²I(p + h_i) · ∇²I(p + h_{i+1}) < 0

is satisfied, h(θ_k) = h_i, and that pixel is taken as the vertex of the support window on direction k.
Step 4.3: connect the vertices at step length h(θ_k) on the eight directions of pixel p to form the region depth edge of p, dividing the fixed neighborhood into a central region and a peripheral region. Choose the central region as the corresponding support window W_p; then obtain the support window W_q of pixel q in the other image to be matched in the same way.
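The zero-crossing search of step 4.2, restricted to a single direction, can be sketched as follows (a minimal illustration; the patent walks all eight 45°-spaced directions of the 2-D neighborhood, and the function name and default radius are illustrative):

```python
def edge_step(gray, max_step=7):
    """Along one vertex direction, gray[i] is the gray value i pixels from
    the center pixel p. Walk outward and return the first step h at which
    the second-order gray difference changes sign (a zero crossing), i.e.
    the region depth edge point h(theta_k) on this direction."""
    def d2(i):  # second-order difference centered at step i
        return gray[i + 1] + gray[i - 1] - 2 * gray[i]
    for h in range(1, max_step):
        if d2(h) * d2(h + 1) < 0:      # sign change: depth transition
            return h
    return max_step                     # no transition: keep the full radius
```

Running this on all eight directions and connecting the returned steps yields the closed support-window boundary of step 4.3; a flat gray profile simply returns the maximum radius, so textureless regions keep a full-size window.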
Step 5: compute the matching cost, comprising the following two sub-steps:
Step 5.1: of the two epipolar-rectified images, take either one as the reference image and the other as the registration image. Through the computation of step 4.3, the support window W_p of any pixel p with coordinates (x, y) in the reference image is obtained, along with the support window W_q of each possible match point q within the disparity range D in the registration image, where the coordinates of q are (x + d, y), d ∈ D, D = [d_min, d_max], d_min being the minimum disparity value and d_max the maximum disparity value.
Step 5.2: choose windows equal in size and shape to the intersection of W_p and W_q, and compute by formula (1) the normalized cross-correlation coefficient of the supporting pixels within them as the matching cost of pixel p at disparity d, denoted Φ(p, d).
Step 6: belief propagation optimizes the energy function
The belief-propagation optimization algorithm iteratively propagates belief messages in the 4-neighborhood system of a Markov random field. The core of the algorithm is the message propagated from a pixel s to the next pixel p at the t-th iteration, denoted m^t_{s→p}(d), which is updated iteratively in an appropriate manner. The accelerated update scheme decides when a message is passed on to the next pixel. As shown in Fig. 6, at each iteration t the scheme spreads messages in the left, right, up and down directions in turn, and updates a pixel immediately along the direction from which it has just received messages. The advantage of this update scheme is that it speeds up the iterative process of the belief-propagation algorithm; each iteration updates the messages of all pixels in the image in the order of Fig. 6. The message that pixel s passes to pixel p at iteration t comprises the matching cost Φ(s, d_s) of pixel s, the smoothness cost Ψ(s, p), and the three messages received by pixel s from its 4-neighborhood (excluding p) at iteration t−1.
Step 6.1: the level and smooth cost of calculating pixel
(s p) has determined neighbor p and the s support for message propagation to level and smooth cost Ψ, has limited the message transmission between the discontinuous pixel of colouring information.Because Ψ (s, p)=(p, s), and in message process, level and smooth cost keeps constant to Ψ, so at first calculate the level and smooth cost between each pixel s and its right neighbor and the following neighbor.Use
    Ψ(s, p) = u · min(|d − d_s|, P)   if ΔI_sp < L
    Ψ(s, p) = min(|d − d_s|, P)       if ΔI_sp ≥ L
to calculate Ψ(s, p). Here ΔI_sp is the gray-level difference between pixels p and s, computed as ΔI_sp = |I(s) − I(p)|, where I(s) and I(p) are the gray values of pixels s and p; L is a static threshold, and u and P are smoothness penalty terms, all static constants; d_s is the current disparity value of pixel s, and d is the candidate disparity value of pixel p.
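A minimal sketch of such a gradient-modulated smoothness cost. The exact piecewise form behind the patent's formula image is assumed here: a truncated-linear disparity penalty min(|d − d_s|, P), scaled by u in smooth regions (ΔI_sp < L) and left unscaled across likely depth edges; the constants u, P and L below are illustrative.

```python
# Hypothetical gradient-modulated, truncated-linear smoothness cost
# (the patent gives the exact formula only as an image; this is an assumed form).
def smoothness_cost(d, d_s, delta_I, u=2.0, P=4.0, L=8.0):
    base = min(abs(d - d_s), P)   # truncated-linear disparity penalty
    if delta_I < L:               # smooth region: full smoothing strength
        return u * base
    return base                   # likely depth edge: weaker coupling

cost_flat = smoothness_cost(3, 3, delta_I=0.0)     # equal disparities
cost_smooth = smoothness_cost(0, 10, delta_I=0.0)  # big jump, smooth region
cost_edge = smoothness_cost(0, 10, delta_I=20.0)   # same jump across an edge
```

The truncation at P keeps the penalty bounded at genuine depth discontinuities, while the reduced weight across strong intensity edges is what "limits message passing between pixels whose intensity information is discontinuous".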
Step 6.2: Computing the belief message of a pixel
At the t-th iteration, the belief message that pixel s passes to pixel p is:

    m^t_{s→p}(d) = min_{d_s ∈ D} [ Φ(s, d_s) + Ψ(s, p) + Σ_{k ∈ N(s)\{p}} m^{t−1}_{k→s}(d_s) ]

where Φ(s, d_s) is the matching cost of pixel s against its candidate match pixel in the other image at disparity d_s, obtained in Step 5; Ψ(s, p) is the smoothness cost; and N(s)\{p} denotes the neighbours of s excluding p.
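The message update can be sketched in min-sum form. The disparity range, costs and incoming messages below are toy values; data_cost_s stands for Φ(s, ·), smooth for Ψ, and incoming for the (t−1)-iteration messages from N(s)\{p}.

```python
# Min-sum belief-propagation message m_{s->p}(d) over a toy disparity range.
def send_message(data_cost_s, smooth, incoming, disparities):
    """data_cost_s[d_s]: matching cost of pixel s at disparity d_s (indexing
    assumes disparities are 0..n-1); smooth(d_s, d): pairwise smoothness cost;
    incoming: previous-iteration messages from the neighbours of s except p."""
    msg = []
    for d in disparities:
        msg.append(min(
            data_cost_s[d_s] + smooth(d_s, d) + sum(m[d_s] for m in incoming)
            for d_s in disparities
        ))
    return msg

# pixel s strongly prefers disparity 0; a linear smoothness term
message = send_message(
    data_cost_s=[0.0, 5.0, 5.0],
    smooth=lambda d_s, d: abs(d_s - d),
    incoming=[[0.0, 0.0, 0.0]],
    disparities=[0, 1, 2],
)
```

The neighbour's preference for disparity 0 is relaxed linearly with disparity distance, which is exactly how the smoothness cost lets messages pull adjacent pixels toward consistent disparities.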
Step 6.3: Computing the belief of a pixel and obtaining its disparity
Following the accelerated update scheme of Fig. 6, the message of each pixel is passed to the next pixel in turn. After T iterations over the entire image, the belief b_p(d) of pixel p is computed by

    b_p(d) = Φ(p, d) + Σ_{s ∈ N(p)} m^T_{s→p}(d)

where Φ(p, d) is the matching cost of pixel p, obtained in Step 5, and m^T_{s→p}(d) is the belief message passed to p by its neighbour s at this point. The beliefs corresponding to all disparities d ∈ [d_min, d_max] are then computed, d_min being the minimum and d_max the maximum disparity value. The disparity value d_p of pixel p is the maximum a posteriori estimate, which maximizes the joint probability of the Markov random field; since the beliefs are accumulated costs, it is computed by

    d_p = arg min_{d ∈ [d_min, d_max]} b_p(d).
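Combining the data cost with the final messages, the belief and the selected disparity can be sketched as below. The cost and message values are toy data; with cost-valued messages, the MAP disparity is the belief minimizer.

```python
# Belief of pixel p and its MAP disparity from cost-valued messages.
def belief_and_disparity(data_cost_p, messages, disparities):
    """data_cost_p[d]: matching cost of p at disparity d; messages: final
    messages m^T_{s->p} from the neighbours of p, each indexed by d."""
    beliefs = [data_cost_p[d] + sum(m[d] for m in messages)
               for d in disparities]
    d_p = min(disparities, key=lambda d: beliefs[d])  # MAP = minimum cost
    return beliefs, d_p

beliefs, d_p = belief_and_disparity(
    data_cost_p=[4.0, 1.0, 6.0],
    messages=[[1.0, 0.5, 2.0], [0.0, 0.5, 1.0]],
    disparities=[0, 1, 2],
)
```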
Step 7: Disparity refinement
Sub-pixel disparity estimation converts the integer disparity values obtained in Step 6 into more accurate fractional ones, reducing the disparity quantization error of stereo matching and preserving the three-dimensional information of the object. After Step 6, the three discrete disparity values d_p, d_p + 1 and d_p − 1 of each pixel p in the reference image are immediately available. Sub-pixel estimation by parabolic interpolation,

    Disparity(p) = d_p − [Φ(p, d_p + 1) − Φ(p, d_p − 1)] / [2 · (Φ(p, d_p + 1) − 2Φ(p, d_p) + Φ(p, d_p − 1))]

yields the sub-pixel disparity Disparity(p) in fractional form, taken as the final disparity value of the reference pixel p. Here the three discrete disparities d_p, d_p + 1 and d_p − 1 are obtained from Step 6, and Φ(p, d_p), Φ(p, d_p + 1) and Φ(p, d_p − 1) are the matching costs of pixel p obtained in Step 5.
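The parabolic refinement can be sketched as follows; the cost values are illustrative, and the closed-form vertex formula is the standard three-point parabola fit assumed to stand behind the patent's formula image.

```python
# Sub-pixel disparity from a parabola through the costs at d_p-1, d_p, d_p+1.
def subpixel_disparity(d_p, c_minus, c_center, c_plus):
    denom = 2.0 * (c_plus - 2.0 * c_center + c_minus)
    if denom == 0:
        return float(d_p)  # degenerate (flat) cost curve: keep the integer
    return d_p - (c_plus - c_minus) / denom

# symmetric costs leave the minimum at the integer disparity
d_sym = subpixel_disparity(5, 1.0, 0.0, 1.0)
# lower cost on the d_p + 1 side pulls the estimate slightly above 5
d_shift = subpixel_disparity(5, 2.0, 0.0, 1.0)
```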
Step 8: Recovering the three-dimensional information
From the pixel disparity Disparity(p) obtained in Step 7, a matched pair (P_L, P_R) in the reference and registered images is obtained through x_R = x_L + Disparity(P_L), y_R = y_L, where P_L has coordinates (x_L, y_L) and P_R has coordinates (x_R, y_R). Then, from the camera intrinsic parameter matrices A_L, A_R and extrinsic parameter matrices [R_L t_L], [R_R t_R] obtained in Step 1, together with the matched pixel pair (P_L, P_R), the three-dimensional coordinates of the target point are computed by the space intersection method. Fig. 7 is a schematic diagram of space intersection: O_L and O_R are the optical centres of the two cameras, S_L and S_R their imaging planes, and P_L, P_R a pair of matched points in the two camera images. The three-dimensional coordinates of a space object point and the pixel coordinates on the imaging planes are related by

    Z_i · [u_i, v, 1]^T = A_i · [R_i  t_i] · [X_w, Y_w, Z_w, 1]^T,   i = 1, 2    (2)

where (u_i, v) are the coordinates of the object point's image on the corresponding imaging plane, (X_w, Y_w, Z_w) are the space coordinates of the target point P, and Z_1, Z_2 are its depths along the optical axes of the two cameras. The intersections of the lines O_L P and O_R P, formed by the target point P and the camera optical centres O_L, O_R, with the imaging planes are exactly its projections P_L(u_1, v) and P_R(u_2, v) in the image planes. From formula (2) the equations of the lines O_L P_L and O_R P_R are obtained, and their intersection is the world coordinate of point P. In the actual computation the lines O_L P_L and O_R P_R may not intersect exactly; in that case the midpoint of the common perpendicular of the two skew lines in Fig. 7 is chosen as the target point.
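The final intersection step can be sketched with plain 3-D vector math. The ray origins and directions below are illustrative (in practice they come from the calibrated optical centres and the back-projected matched pixels); the function returns the midpoint of the common perpendicular of two, possibly skew, rays.

```python
# Sketch of Step 8's space intersection: two back-projected rays rarely meet
# exactly, so take the midpoint of their common perpendicular as the 3-D point.
def midpoint_of_rays(o1, d1, o2, d2):
    """Closest point between rays o1 + t*d1 and o2 + s*d2 (non-parallel)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, k): return [x * k for x in a]
    r = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b            # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, t))       # foot of the perpendicular on ray 1
    p2 = add(o2, scale(d2, s))       # foot of the perpendicular on ray 2
    return scale(add(p1, p2), 0.5)

# two skew rays whose common-perpendicular midpoint is easy to verify by hand
P = midpoint_of_rays([0, 0, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1])
```

Here the first ray runs along the x-axis and the second along the line x = 0, y = 1; the closest points are (0, 0, 0) and (0, 1, 0), so the recovered point is (0, 0.5, 0).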

Claims (2)

1. A three-dimensional reconstruction method based on regional depth edge detection and binocular stereo matching, characterized in that the reconstruction method comprises the following steps in order:
Step 1) Camera calibration: use two horizontally placed binocular cameras to photograph a calibration board bearing mark points; first adjust the positions of the cameras so that their optical axes are approximately parallel, then photograph the calibration board simultaneously from left and right angles, and calibrate the two cameras respectively from the calibration-board images, obtaining their intrinsic parameter matrices A_L, A_R and extrinsic parameter matrices [R_L t_L], [R_R t_R];
Step 2) Object image acquisition: keeping the shooting angles of the cameras unchanged, capture two images of the target object, the left lens capturing the left image and the right lens the right image;
Step 3) Epipolar rectification and image transformation: based on the camera intrinsic and extrinsic parameter matrices obtained by the calibration of Step 1), rotate the two object images respectively using an epipolar rectification method to form parallel epipolar lines, then convert the original object images into images to be matched according to the rectified epipolar lines, so that corresponding matched pixels in the two images to be matched share the same ordinate;
Step 4) Constructing support windows based on regional depth edge detection: the closed boundary of the support window of each pixel p in an image to be matched is regarded as a set of depth-transition pixels; starting from pixel p, a second-order differential edge detection operator searches outward for depth-transition boundary points as the vertices of the support window; connecting these boundary points forms the boundary of a virtual closed edge, which serves as the support window W_p of pixel p; likewise, the support window W_q of each pixel q in the other image to be matched is constructed in the same way;
Step 5) Computing the matching cost, comprising the following two sub-steps:
Step 5.1: Take either of the two epipolar-rectified images to be matched as the reference image and the other as the registered image; Step 4) yields the support window W_p of any pixel p with coordinates (x, y) in the reference image and the support window W_q of each candidate match point q on the epipolar line corresponding to p in the registered image, where q has coordinates (x + d, y), d ∈ [d_min, d_max], with d_min the minimum and d_max the maximum disparity value;
Step 5.2: In the reference image, choose a window W_pq^l whose size and shape coincide with the intersection of the support windows W_p and W_q; in the registered image, choose a window W_pq^r whose size and shape likewise coincide with that intersection; according to the formula

    Φ(p, d) = Σ [I(i_l) − Ī_l] · [I(i_r) − Ī_r] / sqrt( Σ [I(i_l) − Ī_l]² · Σ [I(i_r) − Ī_r]² ),   i_l ∈ W_pq^l, i_r ∈ W_pq^r

compute the normalized cross-correlation coefficient of the supported pixels i_l and i_r as the matching cost, for the reference pixel p at disparity d, against the corresponding registered pixel q, denoted Φ(p, d); I(i_l) and I(i_r) are the gray values of pixels i_l and i_r, and Ī_l and Ī_r are the mean gray values of the reference and registered images within the intersection windows W_pq^l and W_pq^r respectively;
Step 6) Optimizing the energy function by belief propagation: iteratively propagating belief messages in the reference image optimizes the energy over the whole image; an accelerated update scheme passes a belief message from one pixel to the next: at each iteration t, messages are propagated in the left, right, up and down directions respectively, and along the direction in which a pixel has just received a message, the belief message of that pixel is updated immediately; after message propagation ends, the belief of each pixel p at that point is computed, and the disparity value with the optimal belief, i.e. the maximum a posteriori estimate, is chosen as the disparity d_p of that pixel;
Step 6.1: Computing the smoothness cost of a pixel
The smoothness cost Ψ(s, p) determines how strongly the neighbouring pixels p and s support the propagation of belief messages; first, the formula
    Ψ(p, s) = u · min(|d − d_s|, P)   if ΔI_ps < L
    Ψ(p, s) = min(|d − d_s|, P)       if ΔI_ps ≥ L
is used to calculate the smoothness cost Ψ(p, s) between each pixel p and its right and lower neighbours, where ΔI_ps is the gray-level difference between pixels p and s, computed as ΔI_ps = |I(s) − I(p)|, I(s) and I(p) being the gray values of pixels s and p; L is a static threshold, and u and P are smoothness penalty terms, all static constants; d_s is the current disparity value of pixel s, and d is the candidate disparity value of pixel p;
Step 6.2: Computing the belief message of a pixel
When belief messages are propagated at the t-th iteration, the message that pixel s passes to its neighbour p is denoted m^t_{s→p}(d); it comprises the matching cost Φ(s, d_s) of pixel s, the smoothness cost Ψ(s, p), and the messages received at the (t−1)-th iteration by the three pixels in the 4-neighbourhood of s other than p:

    m^t_{s→p}(d) = min_{d_s} [ Φ(s, d_s) + Ψ(s, p) + Σ_{k ∈ N(s)\{p}} m^{t−1}_{k→s}(d_s) ]

where Φ(s, d_s) is the matching cost of pixel s against its candidate match pixel in the registered image at disparity d_s, obtained in Step 5); Ψ(s, p) = Ψ(p, s) is the smoothness cost; and N(s)\{p} denotes the neighbours of s excluding p;
Step 6.3: Computing the belief of a pixel and obtaining its disparity
Following the accelerated message update scheme, starting from the rightmost pixel of the reference image, the message of each pixel is passed in turn to its left neighbour; then, starting from the leftmost pixel, the message of each pixel is passed in turn to its right neighbour; propagation in the downward and upward directions then proceeds by the same rule, which completes one iteration of updating the belief messages of the entire image; after T update iterations, the belief b_p(d) of pixel p is computed by

    b_p(d) = Φ(p, d) + Σ_{s ∈ N(p)} m^T_{s→p}(d)

where Φ(p, d) is the matching cost of pixel p, obtained in Step 5, and m^T_{s→p}(d) is the belief message passed to p by its neighbour s at this point; the beliefs corresponding to all disparities d ∈ [d_min, d_max] are then computed, d_min being the minimum and d_max the maximum disparity value, and the disparity d_p of pixel p is chosen as the value with the optimal belief, computed by

    d_p = arg min_{d ∈ [d_min, d_max]} b_p(d);
Step 7) Disparity refinement: sub-pixel estimation by parabolic interpolation,

    Disparity(p) = d_p − [Φ(p, d_p + 1) − Φ(p, d_p − 1)] / [2 · (Φ(p, d_p + 1) − 2Φ(p, d_p) + Φ(p, d_p − 1))]

converts the integer disparity value obtained in Step 6) into fractional form and yields the more accurate sub-pixel disparity Disparity(p) as the final disparity value of pixel p, where the three discrete disparity values d_p, d_p + 1 and d_p − 1 are obtained from Step 6), and Φ(p, d_p), Φ(p, d_p + 1) and Φ(p, d_p − 1) are the matching costs obtained in Step 5);
Step 8) Reconstructing the three-dimensional point cloud of the object: P_L is the pixel corresponding to target point P in the reference image and P_R is its matched pixel in the registered image; P_L has coordinates (x_L, y_L) and P_R has coordinates (x_R, y_R), obtained through x_R = x_L + Disparity(P_L) and y_R = y_L, where Disparity(P_L) is the final disparity value of pixel P_L obtained in Step 7); then the three-dimensional coordinates of target point P are computed from the camera intrinsic and extrinsic parameter matrices obtained in Step 1) and the matched pixel pair (P_L, P_R); computing the three-dimensional coordinates of all target points then yields the three-dimensional point-cloud model and the three-dimensional information of the whole object.
2. The three-dimensional reconstruction method based on regional depth edge detection and binocular stereo matching according to claim 1, characterized in that constructing support windows based on regional depth edge detection in Step 4) comprises the following three sub-steps:
Step 4.1: Apply median filtering to the two epipolar-rectified images to be matched, to improve image quality and reduce the effect of noise on the performance of the edge detection operator;
Step 4.2: Search for the vertices of the support window in eight directions
In the two images to be matched, rotating counter-clockwise in 45° steps from the horizontal direction of each pixel p gives the eight vertex directions θ_k (k = 0, 1, 2, ..., 7) of the support window; along each vertex direction k, starting from the centre point p, search for the regional depth edge point h(θ_k) using the second-order differential operator

    ∇²I(p + h_{i+1}) = I(p + h_{i+1}) + I(p + h_{i−1}) − 2 · I(p + h_i)

where ∇²I(p + h_{i+1}) is the second-order gray-level differential value of pixel (p + h_{i+1}) in direction k, I(p + h_{i+1}), I(p + h_{i−1}) and I(p + h_i) are the gray values of pixels (p + h_{i+1}), (p + h_{i−1}) and (p + h_i) respectively, and h_i ∈ {1, 2, 3, ...}_k; when the second-order differential value satisfies the depth-transition detection condition, set h(θ_k) = h_i and take this point as the vertex of the support window in direction k;
Step 4.3: Connect the vertices at step length h(θ_k) in the eight directions of pixel p to form the regional depth edge of the point p, and choose the area it encloses as the corresponding support window W_p.
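The directional vertex search of Step 4.2 can be sketched on a 1-D gray profile. The profile values and the detection threshold T are illustrative assumptions, since the claim gives the exact triggering condition only as a formula image; the second difference itself follows the operator spelled out in the claim.

```python
# Sketch of Step 4.2: search along one of the 8 window directions for the
# first depth-transition point via a second-order gray-level difference.
# The trigger |second_diff| > T is an assumed stand-in for the claim's
# (unreproduced) detection condition.
def edge_step(profile, T=10.0):
    """profile[i]: gray value at step i along one direction from the centre.
    Returns the first step h whose second difference exceeds T (a depth
    transition), or the last step searched if none is found."""
    last = len(profile) - 2
    for h in range(1, last + 1):
        second_diff = profile[h + 1] + profile[h - 1] - 2 * profile[h]
        if abs(second_diff) > T:
            return h
    return last

# flat region, then a sharp gray-level jump at step 4 (a depth edge)
vertex = edge_step([100, 100, 100, 100, 100, 20, 20, 20])
```

Running this in all eight directions gives the eight vertices h(θ_k); connecting them yields the closed region that Step 4.3 uses as the support window.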
CN2010102354553A 2010-07-23 2010-07-23 Regional depth edge detection and binocular stereo matching-based three-dimensional reconstruction method Expired - Fee Related CN101908230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010102354553A CN101908230B (en) 2010-07-23 2010-07-23 Regional depth edge detection and binocular stereo matching-based three-dimensional reconstruction method


Publications (2)

Publication Number Publication Date
CN101908230A CN101908230A (en) 2010-12-08
CN101908230B true CN101908230B (en) 2011-11-23

Family

ID=43263681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010102354553A Expired - Fee Related CN101908230B (en) 2010-07-23 2010-07-23 Regional depth edge detection and binocular stereo matching-based three-dimensional reconstruction method

Country Status (1)

Country Link
CN (1) CN101908230B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI511079B (en) * 2014-04-30 2015-12-01 Au Optronics Corp Three-dimension image calibration device and method for calibrating three-dimension image

Families Citing this family (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184540B (en) * 2011-05-03 2013-03-20 哈尔滨工程大学 Sub-pixel level stereo matching method based on scale space
CN102931438B (en) * 2011-08-11 2016-03-02 深圳市吉阳自动化科技有限公司 A kind of pole piece correction method and laminating machine
CN102542268B (en) * 2011-12-29 2014-04-23 中国科学院自动化研究所 Method for detecting and positioning text area in video
CN102572485B (en) * 2012-02-02 2015-04-22 北京大学 Self-adaptive weighted stereo matching algorithm, stereo display and collecting device and system
CN103310482B (en) * 2012-03-12 2016-08-10 山东智慧生活数据系统有限公司 A kind of three-dimensional rebuilding method and system
CN102810205B (en) * 2012-07-09 2015-08-05 深圳泰山在线科技有限公司 The scaling method of a kind of shooting or photographic means
CN103236082B (en) * 2013-04-27 2015-12-02 南京邮电大学 Towards the accurate three-dimensional rebuilding method of two-dimensional video of catching static scene
CN103544732B (en) * 2013-09-29 2016-05-04 北京空间飞行器总体设计部 A kind of 3 D stereo method for reconstructing for lunar rover
CN104637043B (en) * 2013-11-08 2017-12-05 株式会社理光 Pixel selecting method, device, parallax value is supported to determine method
CN104977795A (en) * 2014-04-08 2015-10-14 立普思股份有限公司 Stereo photography system and stereo photography method thereof
CN104406594B (en) * 2014-12-09 2017-06-06 上海新跃仪表厂 The Measurement Algorithm of spacecrafts rendezvous spacecraft relative pose
CN104657553B (en) * 2015-02-10 2018-03-09 北京航天自动控制研究所 A kind of hardware-accelerated method of similarity measure based on quick normalized crosscorrelation method
CN104778685B (en) * 2015-03-17 2018-02-23 北京数码视讯科技股份有限公司 A kind of image matching method and device
CN105023275B (en) * 2015-07-14 2018-08-28 清华大学 Super-resolution optical field acquisition device and its three-dimensional rebuilding method
CN105444696B (en) * 2015-12-30 2018-04-24 天津大学 A kind of binocular ranging method and its application based on perspective projection line measurement model
CN105844692B (en) * 2016-04-27 2019-03-01 北京博瑞空间科技发展有限公司 Three-dimensional reconstruction apparatus, method, system and unmanned plane based on binocular stereo vision
CN106228605A (en) * 2016-07-29 2016-12-14 东南大学 A kind of Stereo matching three-dimensional rebuilding method based on dynamic programming
CN106295713A (en) * 2016-08-19 2017-01-04 华南理工大学 The article condition label recognition system of a kind of computer-oriented visual cognition and method
CN106303501B (en) * 2016-08-23 2018-12-04 深圳市捷视飞通科技股份有限公司 Stereo-picture reconstructing method and device based on image sparse characteristic matching
CN106403845B (en) * 2016-09-14 2017-10-03 杭州思看科技有限公司 Three-dimension sensor system and three-dimensional data acquisition methods
CN106447661A (en) * 2016-09-28 2017-02-22 深圳市优象计算技术有限公司 Rapid depth image generating method
CN106931906A (en) * 2017-03-03 2017-07-07 浙江理工大学 A kind of object dimensional size simple measurement method based on binocular stereo vision
CN107155100B (en) * 2017-06-20 2019-07-12 国家电网公司信息通信分公司 A kind of solid matching method and device based on image
CN107506782B (en) * 2017-07-06 2020-04-17 武汉市工程科学技术研究院 Dense matching method based on confidence weight bilateral filtering
CN109272516A (en) * 2017-07-18 2019-01-25 深圳市凌云视迅科技有限责任公司 A kind of cost method for congregating based on variable window
CN109284653A (en) * 2017-07-20 2019-01-29 微软技术许可有限责任公司 Slender body detection based on computer vision
CN107729893B (en) * 2017-10-12 2020-06-30 清华大学 Visual positioning method and system of die spotting machine and storage medium
US10360662B2 (en) * 2017-10-16 2019-07-23 Omnivision Technologies, Inc. Alignment of multiple camera images by matching projected one dimensional image profiles
US10510148B2 (en) 2017-12-18 2019-12-17 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Systems and methods for block based edgel detection with false edge elimination
CN108364308B (en) * 2018-01-17 2020-04-07 西南交通大学 Artificial compound eye camera depth estimation method based on cross-image belief propagation
CN108230382B (en) * 2018-01-30 2021-04-30 上海理工大学 Stereo matching algorithm based on Spearman correlation coefficient and dynamic programming fusion
CN108335350A (en) * 2018-02-06 2018-07-27 聊城大学 The three-dimensional rebuilding method of binocular stereo vision
CN109084959B (en) * 2018-06-05 2020-10-02 南京理工大学 Optical axis parallelism correction method based on binocular distance measurement algorithm
CN108961257A (en) * 2018-07-17 2018-12-07 东北林业大学 The three-dimensional rebuilding method of panoramic image in a kind of mixing vision system
CN109272453B (en) * 2018-08-31 2023-02-10 上海盎维信息技术有限公司 Modeling device and positioning method based on 3D camera
US11107268B2 (en) * 2018-09-07 2021-08-31 Cognex Corporation Methods and apparatus for efficient data processing of initial correspondence assignments for three-dimensional reconstruction of an object
CN109974623B (en) * 2018-09-13 2021-01-05 北京伟景智能科技有限公司 Three-dimensional information acquisition method and device based on line laser and binocular vision
CN111080689B (en) * 2018-10-22 2023-04-14 杭州海康威视数字技术股份有限公司 Method and device for determining face depth map
CN109887019B (en) * 2019-02-19 2022-05-24 北京市商汤科技开发有限公司 Binocular matching method and device, equipment and storage medium
CN110213558A (en) * 2019-04-28 2019-09-06 航天智造(上海)科技有限责任公司 Sub-pix optical parallax acquiring method
CN110487789B (en) * 2019-08-12 2020-03-17 中国矿业大学(北京) Rock microscopic structure three-dimensional reconstruction system and method based on grinding sheet device
CN110264490B (en) * 2019-08-15 2019-12-10 成都新西旺自动化科技有限公司 sub-pixel precision edge extraction method applied to machine vision system
CN110702015B (en) * 2019-09-26 2021-09-03 中国南方电网有限责任公司超高压输电公司曲靖局 Method and device for measuring icing thickness of power transmission line
CN110853086A (en) * 2019-10-21 2020-02-28 北京清微智能科技有限公司 Depth image generation method and system based on speckle projection
CN110930442B (en) * 2019-11-26 2020-07-31 广东技术师范大学 Method and device for determining positions of key points in robot hand-eye calibration based on calibration block
CN110940295B (en) * 2019-11-29 2021-03-30 北京理工大学 High-reflection object measurement method and system based on laser speckle limit constraint projection
CN110992431B (en) * 2019-12-16 2023-04-18 电子科技大学 Combined three-dimensional reconstruction method for binocular endoscope soft tissue image
CN111127623B (en) * 2019-12-25 2023-08-29 上海米哈游天命科技有限公司 Model rendering method and device, storage medium and terminal
CN111212128B (en) * 2019-12-30 2021-05-04 北京邮电大学 Edge-assisted-based collaborative Web AR multi-user service system and method
CN111508009B (en) * 2020-07-02 2021-02-23 上海海栎创科技股份有限公司 Binocular stereo matching preprocessing method and device
CN112419389B (en) * 2020-11-25 2024-01-23 中科融合感知智能研究院(苏州工业园区)有限公司 Method and device for realizing binocular incremental parallax matching algorithm
CN114913223A (en) * 2021-02-09 2022-08-16 北京盈迪曼德科技有限公司 Positive direction identification method and system of visual sweeper
CN113409242A (en) * 2021-02-23 2021-09-17 杭州哲域智能科技有限公司 Intelligent monitoring method for point cloud of rail intersection bow net
CN113256699B (en) * 2021-06-24 2021-10-19 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN113822925B (en) * 2021-08-01 2023-12-19 国网江苏省电力有限公司徐州供电分公司 Depth estimation method and system for asynchronous binocular camera
CN113673493B (en) * 2021-10-22 2022-02-01 浙江建木智能系统有限公司 Pedestrian perception and positioning method and system based on industrial vehicle vision
CN114998532B (en) * 2022-08-05 2022-11-01 中通服建设有限公司 Three-dimensional image visual transmission optimization method based on digital image reconstruction
CN115345995A (en) * 2022-08-10 2022-11-15 先临三维科技股份有限公司 Three-dimensional reconstruction method, device and system
CN116129037B (en) * 2022-12-13 2023-10-31 珠海视熙科技有限公司 Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN116630388A (en) * 2023-07-25 2023-08-22 苏州立创致恒电子科技有限公司 Thermal imaging image binocular parallax estimation method and system based on deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6639596B1 (en) * 1999-09-20 2003-10-28 Microsoft Corporation Stereo reconstruction from multiperspective panoramas
CN1747559A (en) * 2005-07-29 2006-03-15 北京大学 Three-dimensional geometric mode building system and method
CN101383054A (en) * 2008-10-17 2009-03-11 北京大学 Hybrid three-dimensional reconstructing method based on image and scanning data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100374784B1 (en) * 2000-07-19 2003-03-04 학교법인 포항공과대학교 A system for maching stereo image in real time


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ke Zhang等.Scalable stereo matching with Locally Adaptive Polygon Approximation.《15th IEEE International Conference on Image Processing 2008》.2008, *
Li-Li Cai等.Local Stereo Matching with Edge-Based Cost Aggregation and Occlusion Handling.《2nd International Congress on Image and Signal Processing,2009》.2009, *



Similar Documents

Publication Publication Date Title
CN101908230B (en) Regional depth edge detection and binocular stereo matching-based three-dimensional reconstruction method
CN101887589B (en) Stereoscopic vision-based real low-texture image reconstruction method
Andrés et al. Generation of virtual models of cultural heritage
CN111028155B (en) Parallax image splicing method based on multiple pairs of binocular cameras
Pascoe et al. Farlap: Fast robust localisation using appearance priors
CN103868460A (en) Parallax optimization algorithm-based binocular stereo vision automatic measurement method
CN106485751B (en) Unmanned aerial vehicle photographic imaging and data processing method and system applied to foundation pile detection
CN104299261A (en) Three-dimensional imaging method and system for human body
CN103503025A (en) Determining model parameters based on transforming a model of an object
CN104036475A (en) High-robustness geometric correction method adapted to random projector group and projection screen
CN104778694A (en) Parameterized and automatic geometric correction method for multi-projector tiled display
CN108305277B (en) Heterogeneous image matching method based on straight line segments
CN105184857A (en) Scale factor determination method in monocular vision reconstruction based on dot structured optical ranging
CN113256698B (en) Monocular 3D reconstruction method with depth prediction
CN112270698B (en) Non-rigid geometric registration method based on nearest curved surface
CN104463899A (en) Target object detecting and monitoring method and device
CN113658337B (en) Multi-mode odometer method based on rut lines
CN105469389A (en) Grid ball target for visual sensor calibration and corresponding calibration method
CN105005964A (en) Video sequence image based method for rapidly generating panorama of geographic scene
CN106340045A (en) Calibration optimization method based on binocular stereoscopic vision in three-dimensional face reconstruction
CN103945207A (en) Stereo image vertical parallax eliminating method based on viewpoint synthesis
CN104182968A (en) Method for segmenting fuzzy moving targets by wide-baseline multi-array optical detection system
CN103702103A (en) Optical grating three-dimensional printing image synthetic method based on binocular camera
CN107909543A (en) A kind of flake binocular vision Stereo matching space-location method
CN116625258A (en) Chain spacing measuring system and chain spacing measuring method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: JIANGSU HUAQIANG TEXTILE CO., LTD.

Free format text: FORMER OWNER: SOWTHEAST UNIV.

Effective date: 20131025

Owner name: SOWTHEAST UNIV.

Effective date: 20131025

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 210009 NANJING, JIANGSU PROVINCE TO: 226600 NANTONG, JIANGSU PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20131025

Address after: 226600, Jiangsu, Nantong County, Haian province Haian Development Zone, Xin Li Road, No. 79

Patentee after: Jiangsu Province Huaqiang Textile Co.,Ltd.

Patentee after: SOUTHEAST University

Address before: 210009 No. 2, Southeast University Road, Jiangning Development Zone, Nanjing, Jiangsu

Patentee before: Southeast University

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111123