CN101667303B - Three-dimensional reconstruction method based on coding structured light - Google Patents


Info

Publication number
CN101667303B
CN101667303B (application CN200910153603A)
Authority
CN
China
Prior art keywords: point, color, image, striation, coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 200910153603
Other languages
Chinese (zh)
Other versions
CN101667303A (en)
Inventor
陈胜勇
胡正周
管秋
原长春
潘贝
李帅
王万良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN 200910153603
Publication of CN101667303A
Application granted
Publication of CN101667303B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional reconstruction method based on coded structured light, comprising the following steps: 1) projecting structured light onto the object to be measured, and capturing the image modulated by the object with a camera; 2) matching the light template, comprising: (2.1) locating the light-stripe boundaries: scanning along each column of the image, taking pixels with strong gray-level variation as candidate edge points, and searching their local neighborhoods; and (2.2) matching the light stripes: building color-matching feature vectors by color clustering, comparing image colors with the projected colors, and using the Euclidean distance between a color feature vector and the cluster centers to assign red, green, blue or white to each candidate stripe; and 3) performing three-dimensional reconstruction of the object with the calibrated system parameters: the calibrated transformation-matrix parameters determine the relation between a spatial point and its image coordinates, and the three-dimensional spatial coordinates are recovered from the image coordinates of the feature points. The invention simplifies the computation and achieves high matching precision and high reconstruction precision.

Description

Three-dimensional reconstruction method based on coded structured light
Technical field
The present invention relates to computer vision, data processing and image processing, and in particular to a three-dimensional reconstruction method based on coded structured light.
Background technology
The goal of computer vision is to represent and understand various kinds of input images. Over the last two decades, three-dimensional contour reconstruction of objects has remained one of the important aims of computer vision. There are many implementations of three-dimensional contour reconstruction and measurement, falling mainly into two large classes: contact and non-contact. Non-contact measurement is further divided into two classes: optical methods and non-optical methods.
Optical measurement methods can again be divided into two classes: active and passive. In a passive method the imaging device emits no signal and forms images from signals radiated by the object or reflected from its surface; in an active method the imaging device emits a beam and forms images from the signal reflected from, or transmitted through, the object surface. Passive measurement mainly imitates the principle of human stereoscopic perception; its representative, traditional binocular stereo vision, obtains the three-dimensional shape of a target object chiefly from surface texture, gray-level information and color information, much as human vision does. A passive vision method photographs the same scene from at least two different angles, or from at least two viewpoints. Choosing and matching feature points between the pictures has always been a problem that passive vision cannot solve reliably: feature matching is regional, matching only similar areas of the target object rather than precise points on it, so matching accuracy is low and reconstruction precision is poor. Moreover, objects without distinctive feature points easily produce mismatches, yielding many erroneous reconstructed points and complex computation; this is a problem stereo vision itself cannot overcome. Stereo vision therefore cannot meet the needs of practical applications.
Active measurement methods include the time-of-flight method, phase measurement, the structured-light method and digital holography. Among them, the structured-light method, with its inherent advantages of being non-contact, easy to implement and relatively precise, has received increasing attention in recent years. Since the 1990s in particular, with the rapid development of industrial inspection, reverse engineering and rapid prototyping, the demand for surface-contour reconstruction and measurement of three-dimensional objects has kept growing, with ever higher requirements on measurement speed and accuracy, so that the structured-light method based on the triangulation principle has become the most widely used approach.
Summary of the invention
To overcome the complex computation, low matching precision and low reconstruction precision of the three-dimensional reconstruction methods of existing vision imaging systems, the invention provides a three-dimensional reconstruction method based on coded structured light that simplifies the computation and achieves high matching precision and high reconstruction precision.
The technical solution adopted by the present invention to solve the technical problem is:
A three-dimensional reconstruction method based on coded structured light, comprising the following steps:
1) projecting structured light onto the object to be measured; a camera captures the image modulated by the object;
2) matching the light template, comprising the following steps:
(2.1) locating the light-stripe boundaries:
The light template to be detected is a one-dimensional stripe code. Scan along each column of the image and pre-process each channel of each pixel with a one-dimensional column operator (shown in Fig. 8 of the drawings); pixels with strong gray-level variation are determined to be candidate edge points. On each image column a region of a certain size is chosen centered on a candidate edge point, its maximum extent not exceeding 1/2 of the stripe width. The local-neighborhood search proceeds as follows:
(2.1.1) initialize a candidate edge point as a region center z_{i,j};
(2.1.2) searching along the column direction, convert the pixel colors z_{i-k,j}, z_{i+k,j} from RGB into the hue, intensity and saturation (HIS) space;
(2.1.3) add a new point to the region as long as its hue difference from the neighboring pixel does not exceed a set threshold H:

h = |z_{i±k,j,h} - z_{i±k±1,j,h}|    (1)

where h is the hue difference of the two pixels and z_{i±k,j,h} denotes the hue value of the pixel in row i±k, column j. After the operator processing, the three-channel gray value of a pixel is G_{ij}; the pixel with the maximum sum of three-channel gray values in the local region is defined as the boundary point:

max(E) = Σ_{j=1}^{C} G_{ij},  C = 3    (2)

where E is the maximum of the summed pixel gray values after operator processing;
(2.2) light-stripe matching: color-matching feature vectors are built by the color-clustering method; image colors are compared with the projected colors, and the Euclidean distance between a color feature vector and the cluster centers is used to assign red, green, blue or white to each candidate stripe.
Different color groups gather in specific regions of RGB space; the cluster center of each color group is trained from sample points, and the stripe color is matched by the point-to-point distance:

D_{ik} = d²(P_i, C_k),  k = 1, 2, 3, 4    (3)

where D_{ik} is the squared distance between the two points, P_i is the RGB feature vector of the stripe-center pixel, C_k is a color cluster center, and d(x, y) is the distance between points x and y. If D = min(D_{ik}), the stripe color is assigned according to formula (4):

s_c = k  if D == D_{ik},  k = 1, 2, 3, 4    (4)

where s_c denotes the stripe color.
After the colors have been assigned, every three adjacent stripe colors form a codeword s_i; decoding amounts to finding the position of each codeword in the whole sequence. The stripe coordinate x_i is defined as:

x_i = DdBS(s_i) + (1/4)(j - 1)    (5)

where DdBS(s_i) denotes the coordinate position obtained by directly decoding the subsequence s_i of the j-th frame of the light template;
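The direct decoding in eq. (5) can be sketched as a lookup table over stripe windows. This is an illustrative sketch, not the patented implementation: the 54-stripe sequence is the one quoted later in the description (with 4 standing for the white separator stripe), and treating a codeword as a window of three adjacent coloured stripes, skipping the whites, is our assumption.

```python
# Sketch of the codeword lookup behind eq. (5). SEQ is the coded stripe
# order quoted in the description; 4 denotes the white separator stripe.
SEQ = [3,4,3,4,3,4,2,4,3,4,3,4,1,4,3,4,2,4,2,4,3,4,2,4,1,4,3,
       4,1,4,2,4,3,4,1,4,1,4,2,4,2,4,2,4,1,4,2,4,1,4,1,4,1]

COLOURED = [c for c in SEQ if c != 4]          # drop white separators
WINDOWS = {tuple(COLOURED[i:i + 3]): i         # index each 3-colour window
           for i in range(len(COLOURED) - 2)}

def ddbs(codeword):
    """Directly decode a 3-colour codeword to its position in the sequence."""
    return WINDOWS[tuple(codeword)]

def stripe_coordinate(codeword, j):
    """Eq. (5): x_i = DdBS(s_i) + (j - 1)/4 for the j-th shifted template."""
    return ddbs(codeword) + 0.25 * (j - 1)
```

Because every 3-colour window in the quoted sequence is unique, the table decodes a stripe's coordinate from its local neighbourhood alone.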
3) performing three-dimensional reconstruction of the object to be measured with the calibrated system parameters; the detailed process is:
The calibrated transformation-matrix parameters determine the relation between a spatial point and its image coordinates, and the three-dimensional spatial coordinates are recovered from the image coordinates of the feature points. Capture the modulated image, process it to obtain the image coordinates, and compute the ideal normalized coordinates (x, y). Under the camera coordinate system a spatial point satisfies:

[X_c, Y_c, Z_c]^T = R [X_w, Y_w, Z_w]^T + t = [r_1; r_2; r_3] [X_w, Y_w, Z_w]^T + [t_1; t_2; t_3]    (6)

Defining X_c = x Z_c, Y_c = y Z_c and substituting into (6):

[x Z_c, y Z_c, Z_c]^T = [r_1; r_2; r_3] [X_w, Y_w, Z_w]^T + [t_1; t_2; t_3]    (7)

Eliminating Z_c and simplifying:

[x t_3 - t_1; y t_3 - t_2] = [r_1 - x r_3; r_2 - y r_3] [X_w, Y_w, Z_w]^T    (8)

The coordinate value x_d obtained from the stripe decoding gives:

x_d m_24 - m_14 = [m_1 - x_d m_2] [X_w, Y_w, Z_w]^T    (9)

Combining (8) and (9):

[x t_3 - t_1; y t_3 - t_2; x_d m_24 - m_14] = [r_1 - x r_3; r_2 - y r_3; m_1 - x_d m_2] [X_w, Y_w, Z_w]^T    (10)

Writing (10) in the matrix form A X = B with X = (X_w, Y_w, Z_w), the linear least-squares solution is:

X = (A^T A)^{-1} A^T B    (11)

where X is the world coordinate computed in three dimensions from the two-dimensional image.
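The triangulation of eqs. (8)-(11) can be sketched in a few NumPy lines: stack the two camera rows and the one projector row into A X = B and solve by linear least squares. This is a minimal sketch under assumed inputs: R, t are the calibrated camera extrinsics, m1, m2, m14, m24 the projector rows, (x, y) the normalised image point and x_d the decoded stripe coordinate.

```python
# Minimal least-squares triangulation per eqs. (8)-(11).
import numpy as np

def reconstruct_point(x, y, x_d, R, t, m1, m2, m14, m24):
    """Return the world point X_w solving eq. (10) in the least-squares sense."""
    r1, r2, r3 = R                     # rows of the rotation matrix
    t1, t2, t3 = t
    A = np.vstack([r1 - x * r3,        # camera row from eq. (8)
                   r2 - y * r3,        # camera row from eq. (8)
                   m1 - x_d * m2])     # projector row from eq. (9)
    B = np.array([x * t3 - t1,
                  y * t3 - t2,
                  x_d * m24 - m14])
    return np.linalg.solve(A.T @ A, A.T @ B)   # eq. (11)
```

With exact synthetic data the three equations intersect in a single world point; with noisy measurements the normal-equation solve returns the least-squares estimate.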
Further, in step (2.1) above, the cluster centers are found as follows:
(2.1.1) initialize K = 4 color classes (red, green, blue, white) and the K group centers;
(2.1.2) assign each sample point to the group whose center is nearest to it;
(2.1.3) the centroid of each of the K groups becomes the new cluster center;
(2.1.4) repeat steps 2 and 3 until all points converge stably.
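The four-class search above is ordinary K-means with K = 4. A minimal sketch, assuming the centres are seeded at the ideal RGB corners and the sample data and stopping tolerance are ours:

```python
# Plain K-means with K = 4 (red, green, blue, white), following steps
# (2.1.1)-(2.1.4): assign to nearest centre, recompute centroids, repeat.
import numpy as np

def kmeans4(samples, iters=50):
    """Cluster RGB samples into 4 groups; return (centres, labels)."""
    centres = np.array([[255.0, 0.0, 0.0],        # red
                        [0.0, 255.0, 0.0],        # green
                        [0.0, 0.0, 255.0],        # blue
                        [255.0, 255.0, 255.0]])   # white
    labels = np.zeros(len(samples), dtype=int)
    for _ in range(iters):
        # step 2: assign each sample to the nearest centre (cf. eq. (3))
        d = np.linalg.norm(samples[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step 3: centroids become the new cluster centres
        new = np.array([samples[labels == k].mean(axis=0)
                        if np.any(labels == k) else centres[k]
                        for k in range(4)])
        if np.allclose(new, centres):             # step 4: stable -> stop
            break
        centres = new
    return centres, labels
```

Seeding at the saturated corners reflects the text's choice of maximally separated coding colours, so convergence is typically immediate on clean stripe pixels.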
Further again, in step 3), the system parameters comprise the camera parameters and the projector parameters, and the calibration method comprises the following steps:
3.1) camera calibration:
A two-dimensional calibration template is adopted: the target is a standard black-and-white checkerboard whose feature points are formed by the corners of the squares. The camera takes several images of the target at different positions; the grid corners in these images are selected as feature points, and the camera is calibrated with the planar calibration method, yielding the calibrated camera intrinsic matrix K and the average focal length.
Characterized in that the calibration method also comprises:
3.2) projector calibration:
The light template produced by the coding is projected onto an ordinary target, with the black-and-white grid corners serving as anchor points as in camera calibration; from the captured image, the image coordinates of the grid corners are extracted and the world coordinates are generated.
When a single white-stripe light template is projected, the image coordinates (u_A, v_A), (u_B, v_B) of the two points A_1, B_1 are obtained, with world coordinates (X_A, Y_A, Z_A), (X_B, Y_B, Z_B). The intersection P_i of line L_i with L is computed; for example, the image coordinates of point P_1 are (u_P, v_P) and its world coordinates are defined as (X_P, Y_P, Z_P). Since the three points A_1, B_1, P_1 are coplanar, Z_P = Z_A = Z_B. By the invariance of the cross ratio:

(X_A - X_P)/(X_A - X_B) = (v_A - v_P)/(v_A - v_B),  (Y_A - Y_P)/(Y_A - Y_B) = (u_A - u_P)/(u_A - u_B)    (12)

Let

λ_x = (v_A - v_P)/(v_A - v_B),  λ_y = (u_A - u_P)/(u_A - u_B)    (13)

From (12) and (13):

X_P = X_A + λ_x (X_B - X_A),  Y_P = Y_A + λ_y (Y_B - Y_A),  Z_P = Z_A = Z_B    (14)

where λ_x, λ_y are the cross-ratio coefficients in the X and Y directions respectively, and (X_P, Y_P, Z_P) are the world coordinates of the stripe-edge feature point.
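The interpolation of eqs. (12)-(14) is a one-liner per coordinate. A minimal sketch, with variable names following the text and sample coordinates that are ours:

```python
# Recover world coordinates of a stripe-edge point P on segment A1-B1 from
# image coordinates alone, via the cross-ratio coefficients of eq. (13).
import numpy as np

def edge_point_world(uvA, uvB, uvP, XA, XB):
    """World coords of P given image coords of A, B, P and world coords of A, B."""
    (uA, vA), (uB, vB), (uP, vP) = uvA, uvB, uvP
    lam_x = (vA - vP) / (vA - vB)            # eq. (13)
    lam_y = (uA - uP) / (uA - uB)
    Xp = XA[0] + lam_x * (XB[0] - XA[0])     # eq. (14)
    Yp = XA[1] + lam_y * (XB[1] - XA[1])
    Zp = XA[2]                               # A, B, P coplanar: Zp = ZA = ZB
    return np.array([Xp, Yp, Zp])
```

Since the ratio is invariant under perspective projection, no depth information is needed to place P on the known target plane.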
Further, in step 3.2), the projector model is established as follows:
A model relating the projector coordinate x_d to the world coordinate X_w is set up:

γ [x_d; 1] = [m_1, m_14; m_2, m_24] [X_w; Y_w; Z_w; 1] = [m_11, m_12, m_13, m_14; m_21, m_22, m_23, m_24] [X_w; Y_w; Z_w; 1]    (15)

where γ is an arbitrary scale factor. Eliminating γ and transforming (15) gives:

(m_1 [X_w, Y_w, Z_w]^T + m_14) / (m_2 [X_w, Y_w, Z_w]^T + m_24) = x_d    (16)

Using (16), a system of homogeneous linear equations in the unknowns m_ik is constructed, and the projector parameters are then solved by singular value decomposition.
A simplified perspective cross-ratio-invariance geometric model is established. Suppose the spatial points A, B, C lie on the same line L; with B as the reference point, the position ratio is defined as:

PR(A, B, C) = AB/AC    (17)

Likewise, A′, B′, C′ are the image points of A, B, C under the center of perspective O and lie on the same line L1; the position ratio in image coordinates is:

PR(A′, B′, C′) = A′B′/A′C′    (18)

By the cross-ratio invariance of the perspective geometry principle:

PR(A, B, C) = PR(A′, B′, C′)    (19)

that is, the image points A′, B′, C′ also lie on a common line L′.
The image coordinates of the feature points in the light template are extracted, the world coordinates of the reference point B are obtained according to (19), and the mapping between three-dimensional space and the one-dimensional coordinate is established to calibrate the projector.
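Solving eq. (16) for the projector parameters reduces to a homogeneous linear system: each correspondence (X_w, x_d) gives one equation in the eight unknowns m_11..m_24, and the right-singular vector of the smallest singular value is the least-squares solution up to scale. A sketch on synthetic data of our own making:

```python
# Homogeneous SVD solve for the projector rows of eq. (15)/(16):
#   m1 . Xw + m14 - x_d * (m2 . Xw + m24) = 0  per correspondence.
import numpy as np

def calibrate_projector(world_pts, x_ds):
    """Return (m1, m14, m2, m24), up to scale, from >= 8 correspondences."""
    rows = []
    for Xw, xd in zip(world_pts, x_ds):
        X, Y, Z = Xw
        rows.append([X, Y, Z, 1.0, -xd * X, -xd * Y, -xd * Z, -xd])
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)
    m = Vt[-1]                       # null-space direction of A
    return m[:3], m[3], m[4:7], m[7]
```

The overall scale is unobservable, but eq. (16) is a ratio, so the recovered parameters reproduce the stripe coordinates exactly on noise-free data.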
The technical concept of the present invention is as follows. Structured-light three-dimensional vision is based on the optical triangulation principle. A light projection device (a laser or a projector) projects structured light of a certain pattern (a light template) onto the object surface, forming on the surface a three-dimensional stripe image modulated by the surface shape of the measured object. This image is captured by a camera at another position, yielding a two-dimensional distorted stripe image; demodulating the degree of stripe distortion reproduces the three-dimensional shape of the object surface. According to the pattern of the projected light template, structured-light methods can be divided into point-structured light, line-structured light, multi-line structured light, grid light, and so on. We concentrate on the multi-line color-coded structured-light method, using color information to encode the stripes, for example a De Bruijn-sequence light template, whose matching problem is solved by an RGB multi-channel dynamic programming algorithm with high decoding and matching accuracy.
The beneficial effects of the present invention are mainly: simplified computation, high matching precision and high reconstruction precision.
Description of drawings
Fig. 1 is the flowchart of the three-dimensional reconstruction process.
Fig. 2 is a schematic diagram of RGB space.
Fig. 3 is a schematic diagram of the sequence light template with equal-width white intervals.
Fig. 4 is a schematic diagram of the four frames of time-shifted light templates, rotated 90° counterclockwise.
Fig. 5 is a schematic diagram of the edge-detection operators.
Fig. 6 is a schematic diagram of the operator edge-detection results.
Fig. 7 is a schematic diagram of the per-channel gray values of a pixel scan along an arbitrary column.
Fig. 8 is a schematic diagram of the one-dimensional column operator used for edge detection.
Fig. 9 is a schematic diagram of light-stripe boundary detection.
Fig. 10 is a schematic diagram of pixel colors in RGB space.
Fig. 11 is the perspective projection: a point M projected onto the image plane (u, v).
Fig. 12 is a schematic diagram of the cross-ratio invariance principle.
Fig. 13 is a schematic diagram of the high-precision target.
Fig. 14 is a schematic diagram of reading in the original calibration images.
Fig. 15 is a schematic diagram of the camera calibration projection error.
Fig. 16 shows the stripes projected onto the target plane.
Fig. 17 is a schematic diagram of edge extraction and line fitting.
Fig. 18 is a schematic diagram of feature-point acquisition.
Fig. 19 is a schematic diagram of the projector calibration error.
Fig. 20 is a schematic diagram of the three-dimensional reconstruction of the calibration points.
Embodiment
The invention is further described below with reference to the accompanying drawings.
With reference to Fig. 1 to Fig. 20, a three-dimensional reconstruction method based on coded structured light comprises the following steps:
1) projecting structured light onto the object to be measured; a camera captures the image modulated by the object;
2) matching the light template, comprising the following steps:
(2.1) locating the light-stripe boundaries:
The light template to be detected is a one-dimensional stripe code. Scan along each column of the image and pre-process each channel of each pixel with a one-dimensional column operator (shown in Fig. 8 of the drawings); pixels with strong gray-level variation are determined to be candidate edge points. On each image column a region of a certain size is chosen centered on a candidate edge point, its maximum extent not exceeding 1/2 of the stripe width. The local-neighborhood search proceeds as follows:
(2.1.1) initialize a candidate edge point as a region center z_{i,j};
(2.1.2) searching along the column direction, convert the pixel colors z_{i-k,j}, z_{i+k,j} from RGB into the hue, intensity and saturation (HIS) space;
(2.1.3) add a new point to the region as long as its hue difference from the neighboring pixel does not exceed a set threshold H:

h = |z_{i±k,j,h} - z_{i±k±1,j,h}|    (1)

where h is the hue difference of the two pixels and z_{i±k,j,h} denotes the hue value of the pixel in row i±k, column j. After the operator processing, the three-channel gray value of a pixel is G_{ij}; the pixel with the maximum sum of three-channel gray values in the local region is defined as the boundary point:

max(E) = Σ_{j=1}^{C} G_{ij},  C = 3    (2)

where E is the maximum of the summed pixel gray values after operator processing;
(2.2) light-stripe matching: color-matching feature vectors are built by the color-clustering method; image colors are compared with the projected colors, and the Euclidean distance between a color feature vector and the cluster centers is used to assign red, green, blue or white to each candidate stripe.
Different color groups gather in specific regions of RGB space; the cluster center of each color group is trained from sample points, and the stripe color is matched by the point-to-point distance:

D_{ik} = d²(P_i, C_k),  k = 1, 2, 3, 4    (3)

where D_{ik} is the squared distance between the two points, P_i is the RGB feature vector of the stripe-center pixel, C_k is a color cluster center, and d(x, y) is the distance between points x and y. If D = min(D_{ik}), the stripe color is assigned according to formula (4):

s_c = k  if D == D_{ik},  k = 1, 2, 3, 4    (4)

where s_c denotes the stripe color.
After the colors have been assigned, every three adjacent stripe colors form a codeword s_i; decoding amounts to finding the position of each codeword in the whole sequence. The stripe coordinate x_i is defined as:

x_i = DdBS(s_i) + (1/4)(j - 1)    (5)

where DdBS(s_i) denotes the coordinate position obtained by directly decoding the subsequence s_i of the j-th frame of the light template;
3) performing three-dimensional reconstruction of the object to be measured with the calibrated system parameters; the detailed process is:
The calibrated transformation-matrix parameters determine the relation between a spatial point and its image coordinates, and the three-dimensional spatial coordinates are recovered from the image coordinates of the feature points. Capture the modulated image, process it to obtain the image coordinates, and compute the ideal normalized coordinates (x, y). Under the camera coordinate system a spatial point satisfies:

[X_c, Y_c, Z_c]^T = R [X_w, Y_w, Z_w]^T + t = [r_1; r_2; r_3] [X_w, Y_w, Z_w]^T + [t_1; t_2; t_3]    (6)

Defining X_c = x Z_c, Y_c = y Z_c and substituting into (6):

[x Z_c, y Z_c, Z_c]^T = [r_1; r_2; r_3] [X_w, Y_w, Z_w]^T + [t_1; t_2; t_3]    (7)

Eliminating Z_c and simplifying:

[x t_3 - t_1; y t_3 - t_2] = [r_1 - x r_3; r_2 - y r_3] [X_w, Y_w, Z_w]^T    (8)

The coordinate value x_d obtained from the stripe decoding gives:

x_d m_24 - m_14 = [m_1 - x_d m_2] [X_w, Y_w, Z_w]^T    (9)

Combining (8) and (9):

[x t_3 - t_1; y t_3 - t_2; x_d m_24 - m_14] = [r_1 - x r_3; r_2 - y r_3; m_1 - x_d m_2] [X_w, Y_w, Z_w]^T    (10)

Writing (10) in the matrix form A X = B with X = (X_w, Y_w, Z_w), the linear least-squares solution is:

X = (A^T A)^{-1} A^T B    (11)

where X is the world coordinate computed in three dimensions from the two-dimensional image.
The imaging system of this embodiment comprises an experiment frame, a camera and a projector. In the structured-light imaging system the projector is fixed on the base and the camera is fixed on a horizontally movable crossbar, so the relative position of projector and camera stays fixed, improving system stability. To capture fields of view at different angles, the camera's built-in horizontal shaft can be adjusted through 360°, and the vertical position of the camera can be adjusted by a self-locking screw; the flexibility and degrees of freedom of the structured-light frame greatly improve the freedom of scene acquisition and system stability. The camera is a Daheng DH-SV1410FC/FM industrial CCD camera with high image quality. Its main parameters: CCD sensor size 2/3", maximum frame rate 15 fps, lens focal length f = 25 mm, pixel size 6.45 × 6.45 µm, resolution 1392 × 1040 (1,447,680 pixels). A Plus V-1100C digital projector is adopted as the structured-light projector: maximum display resolution 1280 × 1024, focal length f = 23 mm, manual focusing with two-way digital keystone correction, full color (16.77 million colors), screen size 6 to 200 inches, contrast 2000:1; it projects the light templates produced by programming.
The three-dimensional reconstruction process is shown in Fig. 1. First, structured light of a certain pattern is projected onto the object to be measured, and the camera captures the image modulated by the object. The edge-detection and stripe-segmentation unit mainly extracts the stripe boundary points and segments and locates each stripe; each stripe is assigned a color, replaced by a color value from the alphabet, and three consecutive stripes in space form one codeword, realizing the spatial decoding match between camera and projector. Finally, the calibrated system parameters are used to realize the three-dimensional reconstruction of the object.
The amount of the red (green/blue) component is divided into 256 levels, 0 to 255: 0 means no red (green/blue) component and 255 means a 100% red (green/blue) component. Different combinations of red, green and blue can thus represent 256 × 256 × 256 colors; for example, a pixel appears cyan when its red, green and blue components are 0, 255 and 255 respectively. For a pixel of a gray-level image the red, green and blue components are equal, and as the three components increase the pixel color goes from black to white. Most existing color imaging and display devices represent and store colors with the three RGB (red/green/blue) primaries. Based on this, the common RGB model can be adopted as the color model, and a color digital image can be represented in RGB color space, shown in Fig. 2; for simplicity all color values are normalized, i.e. the illustrated cube is a unit cube.
In a color projection system the stripe coordinates must be detected from a single image. This paper proposes a new coding strategy that encodes each column of the projected light template. To reduce the complexity of color identification and improve reliability, the color-space separation must be large enough, requiring at least two color channels to differ in RGB space. Therefore the red, green and blue primaries plus white are selected as coding colors: apart from black and white, the spatial distances between red, green, blue and white are maximal, which guarantees sufficient distance even for scenes of high saturation. The coding color set can be chosen as the four-element set P = {(255,0,0), (0,255,0), (0,0,255), (255,255,255)}, with white as the interval between colored stripes, so that the white stripe appears most often and higher image brightness is obtained. The corresponding color alphabet is mapped as:
P = {P_i | i = 1, 2, 3, 4}
where the P_i denote the four colors red, green, blue and white respectively.
Based on the traditional De Bruijn spatial coding method, wide white stripes are introduced to separate the colored stripes, allowing adjacent stripes to use the same color value, so that the effective coding rate reaches 100%. The light template consists of 2n^m stripes, where the number of colors is n and the window size is m, whereas the existing De Bruijn coding technique with the same parameters can generate only n(n-1)^m stripes. The generated coding sequence is:
3,4,3,4,3,4,2,4,3,4,3,4,1,4,3,4,2,4,2,4,3,4,2,4,1,4,3,
4,1,4,2,4,3,4,1,4,1,4,2,4,2,4,2,4,1,4,2,4,1,4,1,4,1
In a real scene, discontinuities of the object surface make some stripes invisible, increasing the 3D data error; such stripes are called shadow stripes. The advantage of introducing white stripes is that the proportion of decodable stripes increases, because a shadow stripe is substituted by a white stripe that does not take part in decoding. Define Q = {1, 2, ..., 2n^m} as the coordinate index values of the stripes. The light template generated in this way is shown in Fig. 3; each stripe boundary delimits a plane of space.
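A template of this kind can be sketched by generating a De Bruijn sequence over the n colours and inserting a white separator after every coloured stripe, giving 2n^m stripes in total. This is an illustrative sketch using the standard recursive De Bruijn construction; the exact sequence printed above may come from a different construction, so only the structure (length and window uniqueness), not the precise order, is claimed here.

```python
# De Bruijn-style stripe template: n colours, window m, white separator = 4.

def de_bruijn(n, m):
    """Cyclic De Bruijn sequence B(n, m) over colours 1..n (standard recursion)."""
    a = [0] * n * m
    seq = []
    def db(t, p):
        if t > m:
            if m % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, n):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return [s + 1 for s in seq]      # shift alphabet 0..n-1 to colours 1..n

def template(n=3, m=3, white=4):
    """Interleave a white separator stripe after each coloured stripe."""
    out = []
    for c in de_bruijn(n, m):
        out += [c, white]
    return out
```

With n = 3, m = 3 this yields 27 coloured stripes plus 27 white separators, i.e. the 2n^m = 54 stripes stated in the text, and every cyclic window of m coloured stripes is unique.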
Meanwhile, to increase the resolution of the existing coding template, an effective space-time code is proposed: each projected frame consists of the same light template, with each subsequent frame formed by shifting the stripes of the previous frame down by 1/4 of the stripe width; the four frames of light templates are shown in Fig. 4. Given the extensive existing literature on De Bruijn sequences in combinatorial mathematics, the method of that literature is adopted to produce the sequence. The system uses the unique spatial coding of the stripe boundaries in each frame and the temporal properties of the multiple frames to increase the precision and resolution of the reconstructed model.
Establishing the correct correspondence between the image plane and the projected light planes is a complex problem. Its steps comprise stripe-boundary location, stripe color assignment, and decoding the position of each stripe in every frame of the template. Repeating these steps matches all the feature points, after which the three-dimensional shape can be reconstructed.
Accurate location of the stripe boundaries: edge detection in an image is essentially the measurement, detection and location of gray-level changes. There are many edge-detection methods, and different methods use different filters. In practical experiments an image is treated as a continuous function f(x, y) whose directional derivative has an extremum in the edge direction, so edge detection works by seeking the extrema of the gradient of f(x, y). Common methods include the first-derivative edge operators, such as the Roberts, Sobel and Prewitt operators, the second-derivative Laplacian operator, and the Canny operator based on optimization; the operator templates are shown in Fig. 5.
Applying the above operators to an image, first converting the color image to gray levels, yields the detection results of each operator shown in Fig. 6.
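As a small illustration of the first-derivative operators mentioned above, the horizontal Sobel kernel can be applied to a grey image and candidate edges taken where the gradient magnitude peaks. A minimal sketch on a synthetic step image of our own; a naive correlation loop stands in for an optimized filtering routine.

```python
# Horizontal Sobel response on a synthetic vertical step edge.
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(img, k):
    """Naive 'valid' 2-D correlation for small kernels."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# A vertical step edge between columns 4 and 5 of a grey image:
img = np.zeros((8, 10))
img[:, 5:] = 255.0
gx = convolve2d(img, SOBEL_X)
# The response is zero in flat regions and peaks on the columns
# straddling the step, which is where an edge point would be declared.
```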
Because image data are two-dimensional, the depth information of the space is lost in the perspective process; uneven illumination and noise in imaging further reduce edge-detection precision, producing many false edges and discontinuous stripe edges. This causes the loss of spatial-point codewords and makes decoding impossible. For these problems this paper proposes another edge-detection scheme: pre-process the image first, then locate precisely. The light template is composed of horizontal color stripes and is a one-dimensional coding; a line scan of an arbitrary column of the modulated image is shown in Fig. 7.
As can be seen from Fig. 6, when scanning vertically, the strong gray-level changes between colored stripes and white stripes are evident, and the stripe edges can be determined from the extrema of the three-channel gray levels. The transition between a colored stripe and the white border forms a thin seam with a certain gray value, so the edge captured in the image deviates from the boundary defined in the template, and accurate boundary location directly affects the precision of system reconstruction. For the above problems, before detecting edges, since the light template is a one-dimensional stripe code, each column of the image is scanned, each channel of each pixel is pre-processed with the one-dimensional column operator of Fig. 8, and pixels with strong gray-level variation are determined to be candidate edge points.
Consider that overall marginal point search is consuming time, this paper proposes local search approach, and each lists centered by candidate marginal at image, chooses a certain size zone, and its maximal value is no more than 1/2 of striation width, and the step of searching for local field is as follows:
1. Initialize the candidate edge point as the region center z_{i,j};
2. Search along the column direction, converting the pixel colors at z_{i-k,j} and z_{i+k,j} into the hue, intensity, and saturation (HIS) space;
3. Add a new point to the region as long as its hue difference from the neighboring pixel does not exceed a set threshold H:
h = | z_{i±k,j,h} - z_{i±k±1,j,h} |    (1)
where h is the hue difference of the two pixels and z_{i±k,j,h} denotes the hue value of the pixel at row i±k, column j. Statistics of the hue differences between neighboring pixels of the white, red, green, and blue stripes show that the threshold H = 0.06 yields a satisfactory boundary region. After the operator is applied, the three-channel gray value of a pixel is G_{ij}; the pixel that maximizes the sum of the three-channel gray values within the local region is defined as the boundary point:
E = max( Σ_{c=1}^{C} G_{ij} ),  C = 3    (2)
where E is the maximum of the summed pixel gray values after operator processing; the detection accuracy reaches pixel level. Figure 9 shows the result of applying this method to the original image of Figure 6a; compared with the operators above, the edges are more continuous and the spurious edge points are reduced.
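The column pre-processing and local-region search above can be sketched as follows. This is a minimal illustrative sketch, not the embodiment's code: the [-1, 0, 1] column operator, the HSV-style hue used in place of the HIS hue, and the helper names `column_edge_candidates` and `local_boundary` are all assumptions.

```python
import numpy as np

def column_edge_candidates(img, grad_thresh=30.0):
    """Per-column 1-D derivative (illustrative [-1, 0, 1] operator), summed
    over the three channels; pixels above the threshold are candidates."""
    img = img.astype(float)
    g = np.zeros(img.shape[:2])
    g[1:-1, :] = np.abs(img[2:] - img[:-2]).sum(axis=2)
    return g > grad_thresh

def rgb_to_hue(px):
    """Hue in [0, 1) from an RGB triple (HSV-style hue, standing in for the
    HIS hue of the text)."""
    r, g, b = (float(c) / 255.0 for c in px)
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return 0.0
    if mx == r:
        h = ((g - b) / (mx - mn)) % 6
    elif mx == g:
        h = (b - r) / (mx - mn) + 2
    else:
        h = (r - g) / (mx - mn) + 4
    return h / 6.0

def local_boundary(img, i, j, half_width, H=0.06):
    """Grow a column region around candidate (i, j) while neighboring hues
    differ by less than H (eq. 1), then return the row maximizing the
    three-channel gray-value sum in the region (eq. 2)."""
    rows = [i]
    for step in (-1, 1):
        k = i
        while abs(k + step - i) <= half_width and 0 <= k + step < img.shape[0]:
            if abs(rgb_to_hue(img[k + step, j]) - rgb_to_hue(img[k, j])) > H:
                break
            k += step
            rows.append(k)
    sums = [img[r, j].astype(float).sum() for r in rows]
    return rows[int(np.argmax(sums))]
```

On a synthetic column that switches from red to white, the candidate mask fires at the transition rows and the refined boundary lands on the brighter side of the seam.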
Stripe matching: each stripe color detected in the image has a certain similarity to a template color, so correctly matching each stripe to its color is the key to reducing 3D matching errors. Zhang et al. propose a dynamic-programming method that matches every pixel rather than every stripe, which is very time-consuming; see Document 1: L. Zhang, B. Curless, S. M. Seitz. Rapid shape acquisition using color structured light and multi-pass dynamic programming. In: Proceedings of the First International Symposium on 3D Data Processing, Visualization and Transmission, 2002, 24-36. Fechteler et al. adopt a complicated line-clustering algorithm to find the parameters of each cluster line, but the separation between the lines is not distinct; see Document 2: P. Fechteler and P. Eisert. Adaptive color classification for structured light systems. In: Proceedings of the 15th International Conference on Computer Vision and Pattern Recognition, USA, 2008, 1-7.
This paper adopts color clustering to build the color-matching feature vectors: the image colors are compared with the projected colors, and the Euclidean distance between a color feature vector and each cluster center is used to assign the colors red, green, blue, or white to the candidate stripes. The distances between the coded primary colors are large enough, and the classical K-means algorithm is used to search for the color class centers. First, several of the captured images are selected; considering the sample space, three images are used in the experiment, and the RGB components of the corresponding pixels are converted into three-dimensional color feature vectors as the input sample data. The cluster-center search is as follows:
1. Initialize K = 4 color classes (red, green, blue, white) and the K cluster centers;
2. Assign each sample point to the cluster whose center is nearest;
3. The centroid of each of the K clusters becomes the new cluster center;
4. Repeat steps 2 and 3 until all points converge stably.
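The four steps above are the classical K-means iteration; a minimal NumPy sketch follows. The function name `kmeans_colors` and the convergence test are illustrative assumptions.

```python
import numpy as np

def kmeans_colors(samples, centers, iters=50):
    """Classical K-means on RGB feature vectors (here K = 4: red, green,
    blue, white). `centers` holds the K initial centers; returns the
    refined centers and each sample's class label."""
    centers = centers.astype(float)
    for _ in range(iters):
        # step 2: assign each sample to the cluster with the nearest center
        d = ((samples[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # step 3: each cluster's centroid becomes the new cluster center
        new = np.array([samples[labels == k].mean(axis=0)
                        if np.any(labels == k) else centers[k]
                        for k in range(len(centers))])
        if np.allclose(new, centers):   # step 4: stop once stable
            break
        centers = new
    return centers, labels
```

Initializing the centers at the four ideal coded colors makes the iteration converge in very few steps on well-separated samples.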
Experiments show that the image colors captured by the camera suffer various distortions relative to the template colors projected by the projector; in addition, each sensor introduces noise and crosstalk between the colors. Figure 10 depicts the RGB space of the three primary colors: (a) is the stripe-color RGB space of the projector under ideal conditions, and (b) is the stripe-color RGB space captured by the camera under working conditions.
As shown in Figure 10(b), the different color groups gather in specific regions of RGB space; the cluster center of each color group is therefore trained from the sample points, and the stripe colors are matched by the point-to-point distance:
D_{ik} = d²(P_i, C_k),  k = 1, 2, 3, 4    (3)
where D_{ik} is the squared distance between the two points, P_i is the RGB feature vector of the stripe center pixel, C_k is a color cluster center, and d(x, y) is the distance between points x and y. If D = min(D_{ik}), the stripe color is assigned according to equation (4):
s_c = { k  if  D == D_{ik},  k = 1, 2, 3, 4 }    (4)
where s_c denotes the stripe color; for example, if k = 1 the stripe is red ({k = 1, 2, 3, 4} denote red, green, blue, and white respectively). After the colors have been assigned, every three adjacent stripe colors form a codeword s_i; decoding consists of finding the position of each codeword in the whole sequence. The direct De Bruijn decoding algorithm proposed by Hsieh is adopted: compared with exhaustive decoding it needs no extra stored alphabet, and its time complexity is 1/n that of the exhaustive method (n being the coding unit). The stripe coordinate value x_i is defined as:
x_i = DdBS(s_i) + (1/4)(j - 1)    (5)
where DdBS(s_i) is the coordinate position obtained by directly decoding the subsequence s_i of the j-th light-template frame.
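Equations (3) and (4) and the codeword formation can be sketched as follows; the cluster centers are assumed, for illustration, to coincide with the four ideal coded colors, and the direct De Bruijn position lookup (DdBS) is omitted. The function names are illustrative.

```python
import numpy as np

# Cluster centers as might be obtained from the K-means training (assumed
# here to equal the ideal coded colors red, green, blue, white).
CENTERS = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 255]], float)

def assign_stripe_colors(stripe_rgbs, centers=CENTERS):
    """Eqs. (3)/(4): each stripe receives the color class k minimizing the
    squared distance D_ik between its center-pixel RGB vector and cluster
    center C_k. Labels: 1 = red, 2 = green, 3 = blue, 4 = white."""
    rgbs = np.asarray(stripe_rgbs, float)
    d = ((rgbs[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1) + 1

def codewords(labels):
    """Every three adjacent stripe colors form one codeword s_i."""
    return [tuple(labels[i:i + 3]) for i in range(len(labels) - 2)]
```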
In the present embodiment, the common RGB model is first selected as the color model, and red, green, blue, and white are selected as the coding color set so that the distances between colors are maximal and exact color matching is achievable. In edge detection, a one-dimensional operator is used for pre-processing, the extremum of the summed three-channel gray-level changes defines the boundary points, and the boundary pixels are used to segment the stripes in combination with classical K-means clustering; the stripe-color matching accuracy reaches 100% in the experiments. The assigned stripe colors form codewords, and the direct decoding algorithm locates the spatial position of each stripe directly, greatly improving decoding efficiency compared with the exhaustive method.
The calibration process of the present embodiment is as follows:
Camera model: broadly, camera calibration methods fall into three classes: traditional calibration, self-calibration, and calibration based on active vision. This paper adopts a traditional method, Zhang Zhengyou's planar calibration, which calibrates all intrinsic and extrinsic camera parameters from several images (at least three) of a planar target shot from different angles. The flexibility of the planar method lies in allowing free motion between the camera and the target, with no need to know the motion parameters. Since each feature point on the target corresponds one-to-one to an image point, the correspondence for each image can be represented by a homography matrix, which supplies two constraint conditions for solving the intrinsic parameters. The algorithm first assumes Z_w = 0 for the world coordinate system on the target plane and computes the initial camera parameters from the linear model; it then considers radial distortion (first or second order) and refines the initial parameters by nonlinear optimization based on maximum likelihood; finally the extrinsic camera parameters are obtained from the optimized intrinsic parameters and the homography of the target plane. Planar calibration sits between the traditional and self-calibration methods: it avoids the high equipment-precision requirements and operational complexity of the classical methods, yet is more accurate than self-calibration, satisfying the calibration requirements of an ordinary desktop vision system.
Figure 11 shows the camera perspective projection. Projection from 3D space to the 2D plane is a many-to-one mapping: all 3D points on the space ray mM correspond to the same 2D image point m, so depth information is lost in the imaging process. A feature point on the target plane has world coordinates M = (X_w, Y_w, Z_w), its pixel coordinates on the image plane are m = (u, v), and the corresponding homogeneous coordinates are M_w = (X_w, Y_w, Z_w, 1)^T and m_c = (u, v, 1)^T. With the camera modeled as a linear pinhole, the perspective projection from a space point to its image point, in homogeneous coordinates and matrix form, is:
λ m_c = K [R t] M_w    (20)
where λ is an arbitrary nonzero projective scale factor, the rotation matrix R and translation vector t are the camera extrinsic parameters, and K is the camera intrinsic parameter matrix, defined as:

        | α_x   β   u_0 |
  K  =  |  0   α_y  v_0 |    (21)
        |  0    0    1  |

where α_x = f_x/d_x and α_y = f_y/d_y; f_x, f_y and d_x, d_y are respectively the focal lengths and the physical pixel sizes along the x and y axes; β is the non-perpendicularity (skew) coefficient of the x and y axes; and (u_0, v_0) is the image center coordinate. With Z_w = 0 on the target plane, equation (20) gives:

  λ (u, v, 1)^T = K [r_1 r_2 r_3 t] (X_w, Y_w, 0, 1)^T = K [r_1 r_2 t] (X_w, Y_w, 1)^T    (22)
Writing the space point on the plane as M~ = (X_w, Y_w, 1)^T, there is a transformation matrix (homography) H relating M~ to the image point m_c:

  λ m_c = H M~    (23)
Writing H = [h_1 h_2 h_3], one obtains:
  [h_1 h_2 h_3] = γ K [r_1 r_2 t]    (24)
where γ is a constant factor. From the orthonormality of the rotation matrix R (r_1^T r_2 = 0, r_1^T r_1 = r_2^T r_2), two constraint conditions on the intrinsic matrix K follow:
  h_1^T K^{-T} K^{-1} h_2 = 0    (25)
  h_1^T K^{-T} K^{-1} h_1 = h_2^T K^{-T} K^{-1} h_2    (26)
Using the two constraint conditions (25) and (26), the intrinsic matrix K is solved linearly. Let
  B = K^{-T} K^{-1}    (27)
B in fact describes the projection of the absolute conic onto the image plane; B is obtained from this principle, inverted, and the intrinsic matrix K is recovered by SVD decomposition. The extrinsic rotation matrix R and translation vector t of each image with respect to the camera coordinate system are then computed from K and the homography H, and maximum likelihood estimation is applied to nonlinearly optimize and further refine the above parameters. To improve calibration accuracy, this paper introduces a five-dimensional radial and tangential distortion coefficient vector k_c = (k_1, k_2, p_1, p_2, k_3)^T; in the actual calibration k_3 = 0 (its radial term is of higher order). Let (x, y) denote image coordinates in millimeters and C = (X_c, Y_c, Z_c) the space-point coordinates in the camera coordinate system; the normalized image coordinates are defined as:
  x_n = (X_c/Z_c, Y_c/Z_c)^T = (x, y)^T    (28)
Let r² = x² + y², and let (u, v) and (u′, v′) denote respectively the ideal and the actually observed pixel coordinates. After introducing the distortion terms, the distorted normalized coordinates x̂_n = (x̂, ŷ)^T are defined as:

  x̂_n = (1 + k_1 r² + k_2 r⁴ + k_3 r⁶) x_n + ( 2 p_1 x y + p_2 (r² + 2x²),  p_1 (r² + 2y²) + 2 p_2 x y )^T    (29)

and the actual pixel coordinates of the projected point on the image plane are finally:

  u′ = α_x x̂ + β ŷ + u_0,  v′ = α_y ŷ + v_0    (30)
The maximum likelihood estimation minimizes the objective function:

  Σ_i Σ_j || m_ij - m̂(K, k_c, R_i, t_i, M_j) ||²

where m̂(K, k_c, R_i, t_i, M_j) is the projection of the space point M_j onto the image plane through the transformations (20) and (29); m_ij is the j-th calibration point in the i-th image, R_i is the rotation matrix of the i-th image, and t_i is the translation vector of the i-th image. The initial value of k_c is set to 0^T or solved by least squares from equation (29); taking these together with the intrinsic and extrinsic camera parameters already obtained as initial values, the Levenberg-Marquardt nonlinear optimization algorithm minimizes the objective function, finally yielding the result for each parameter.
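The forward projection inside the maximum-likelihood objective, i.e. the chain of rigid transform, normalization (28), distortion (29), and pixel mapping (30), can be sketched as follows. The function `project_point` and the test values are illustrative assumptions, not the embodiment's calibration code.

```python
import numpy as np

def project_point(M, R, t, K, kc):
    """Project a world point M: rigid transform to the camera frame,
    normalize (eq. 28), apply the radial/tangential distortion
    kc = (k1, k2, p1, p2, k3) (eq. 29), then map to pixel coordinates
    with the intrinsic matrix K (eq. 30, skew term included)."""
    Xc = R @ M + t                        # camera-frame coordinates
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]   # normalized coordinates
    k1, k2, p1, p2, k3 = kc
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = radial * x + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = radial * y + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    u = K[0, 0] * xd + K[0, 1] * yd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    return np.array([u, v])
```

The reprojection objective is then the sum of squared differences between observed corner pixels and `project_point` outputs, which a Levenberg-Marquardt solver can minimize over (K, kc, R_i, t_i).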
The projector model is conceptually the inverse of the camera model; because the spatial code is one-dimensional, it can be simplified here to a one-dimensional model. The model relating the projector coordinate x_d to the world coordinates X_w is established as:

  γ (x_d, 1)^T = [ m_1  m_14 ; m_2  m_24 ] (X_w, Y_w, Z_w, 1)^T,  with m_1 = (m_11, m_12, m_13), m_2 = (m_21, m_22, m_23)    (15)

where γ is an arbitrary scale factor. Eliminating γ and transforming (15) gives:

  ( m_1 (X_w, Y_w, Z_w)^T + m_14 ) / ( m_2 (X_w, Y_w, Z_w)^T + m_24 ) = x_d    (16)

Using (16), a system of homogeneous linear equations in the unknowns m_ik is constructed and solved for the projector parameters by singular value decomposition (SVD).
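The DLT construction of equation (16) and its SVD solution can be sketched as follows; the function names and the synthetic test model are assumptions for illustration.

```python
import numpy as np

def calibrate_projector(world_pts, xd_vals):
    """Solve the 1-D projector model (15)/(16) by DLT: each correspondence
    (X_w, x_d) gives one homogeneous equation
        m1 . X + m14 - x_d (m2 . X + m24) = 0
    in the 8 unknowns (m11..m14, m21..m24); the SVD null vector is the
    least-squares solution, returned as a 2x4 matrix."""
    A = []
    for X, xd in zip(world_pts, xd_vals):
        X = np.asarray(X, float)
        A.append(np.concatenate([X, [1.0], -xd * X, [-xd]]))
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(2, 4)

def project_1d(M2x4, X):
    """Apply model (16): projector coordinate of world point X."""
    num = M2x4[0, :3] @ X + M2x4[0, 3]
    den = M2x4[1, :3] @ X + M2x4[1, 3]
    return num / den
```

At least seven correspondences are needed (eight unknowns up to scale); with noisy data the SVD null vector minimizes the algebraic residual.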
In the three-dimensional reconstruction the stripe edge points serve as the feature points. Unlike checkerboard corners, their world coordinates cannot be obtained directly and must be derived indirectly from the world coordinates of known points. A simplified perspective cross-ratio-invariance geometric model is established, as shown in Figure 12: suppose space points A, B, C lie on the same straight line L; taking B as the reference point, the position ratio is defined as:
PR(A,B,C)=AB/AC (17)
Similarly, A′, B′, C′ are the image points of A, B, C under the center of perspectivity O and lie on the same straight line L1; the position ratio in image coordinates is:
PR(A′,B′,C′)=A′B′/A′C′ (18)
According to the cross-ratio invariance of perspective geometry:
PR(A,B,C)=PR(A′,B′,C′) (19)
In the experiment, since the three points A, B, C lie on a space line, the corresponding image points A′, B′, C′ also lie on one straight line L′.
The image coordinates of the feature points in the light template are extracted, and the world coordinates of the reference point B are obtained from equation (19); the mapping between three-dimensional space and the one-dimensional coordinate is thus established to calibrate the projector.
In the structured-light vision system, calibration is performed in two independent steps, camera calibration and projector calibration; each step is self-contained and does not depend on the other, so no iterated error accumulates and the system accuracy is high.
Camera calibration: the three-dimensional coordinates of the feature points demand high accuracy, so a two-dimensional calibration template is adopted. To unify the system operation, the projector projects plain white light during camera calibration; the coplanar target, shown in Figure 13, is simple to make.
The target consists of a standard 21 × 21 mm black/white checkerboard, laser-printed at high precision and then affixed to an aluminum plate; the feature points are the checkerboard corner points. The camera captures 10 images at different positions, as shown in Figure 14.
For the 10 images the grid corners are selected as feature points and the planar calibration method described above is applied to the camera; the calibrated camera intrinsic matrix is:
        | 3872.6     0     692.2 |
  K  =  |    0     3858.1  443.3 |
        |    0        0      1   |
with distortion coefficients k_c = (0.16762, -0.74282, 0.00092, -0.01760, 0)^T. The physical pixel size of the CCD used in the experiment is d_x = d_y = 6.45 μm. Ignoring the non-perpendicularity factor and averaging α_x and α_y, the mean focal length is

  f = ((α_x + α_y)/2) · d_x = ((3872.6 + 3858.1)/2) × 6.45 μm ≈ 24.93 mm

and the relative error with respect to the camera's nominal focal length f_0 = 25 mm is

  |f - f_0| / f_0 ≈ 0.27%.
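A quick numerical check of the mean-focal-length arithmetic above (the values are taken from the calibration result; the variable names are illustrative):

```python
# Mean focal length from the calibrated scale factors and the pixel size.
alpha_x, alpha_y = 3872.6, 3858.1     # calibrated scale factors (pixels)
d = 6.45e-3                           # physical pixel size in mm
f_mean = (alpha_x + alpha_y) / 2 * d  # mean focal length in mm
rel_err = abs(f_mean - 25.0) / 25.0   # relative error vs. nominal 25 mm
```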
The target feature points are re-projected; the resulting error range is 0.49708 to 0.65712 pixels. The re-projection error of each image is shown in Figure 15, where x and y denote the image coordinate axes in pixels.
Projector calibration: the projector is the light-source device of the structured-light system, and its calibration bears on the accuracy of the system; here it is calibrated with the simplified one-dimensional model set out above. Projector calibration requires the world coordinates and the stripe coordinate values of the calibration feature points. The light template generated by the coding program is projected onto the coplanar target, with the black/white grid corners as anchor points, as shown in Figure 16; four images are captured at the depth values Z = 990 mm, 982 mm, 974 mm, and 966 mm respectively.
A straight line is fitted to the edge points, and the corner points on the underlying line are likewise fitted.
Meanwhile, using the images obtained during camera calibration, the image coordinates of the black/white grid corners are extracted; owing to the regularity of the grid, their world coordinates can be generated automatically.
As shown in Figure 18, when the plain-white light template is projected, the image coordinates (u_A, v_A), (u_B, v_B) of the two corner points A_1 and B_1 can be obtained, with world coordinates (X_A, Y_A, Z_A) and (X_B, Y_B, Z_B). Intersecting line L_i with L gives the image coordinates of point P_i; for example, the image coordinates of P_1 are (u_P, v_P) and its world coordinates are written (X_P, Y_P, Z_P). Since the three points A_1, B_1, P_1 are coplanar, Z_P = Z_A = Z_B, and equation (19) yields:
  (X_A - X_P)/(X_A - X_B) = (v_A - v_P)/(v_A - v_B),   (Y_A - Y_P)/(Y_A - Y_B) = (u_A - u_P)/(u_A - u_B)    (12)

Let

  λ_x = (v_A - v_P)/(v_A - v_B),   λ_y = (u_A - u_P)/(u_A - u_B)    (13)

From (12) and (13):

  X_P = X_A + λ_x (X_B - X_A),   Y_P = Y_A + λ_y (Y_B - Y_A),   Z_P = Z_A = Z_B    (14)
In this way the world coordinates of the stripe edge feature points are solved using the principle of cross-ratio invariance; the stripe index coordinates are then decoded, and the projector model is calibrated. The calibration error range is 0 to 0.2889; the error diagram is shown in Figure 19.
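Equations (12) to (14) translate directly into code; a minimal sketch follows (the function name and the sample coordinates are illustrative):

```python
def edge_point_world(uv_A, uv_B, uv_P, XYZ_A, XYZ_B):
    """Eqs. (12)-(14): world coordinates of an edge point P lying between
    corners A and B, from the image coordinates of A, B, P and the world
    coordinates of A, B (all three points share the same depth Z)."""
    (uA, vA), (uB, vB), (uP, vP) = uv_A, uv_B, uv_P
    XA, YA, ZA = XYZ_A
    XB, YB, _ = XYZ_B
    lam_x = (vA - vP) / (vA - vB)     # eq. (13)
    lam_y = (uA - uP) / (uA - uB)
    return (XA + lam_x * (XB - XA),   # eq. (14)
            YA + lam_y * (YB - YA),
            ZA)
```

For a P imaged midway between A and B, the recovered world point is the midpoint of the segment AB, as expected.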
The baseline length of the structured-light vision system of the present embodiment is 230 mm. The four target planes at the depth values Z = 990 mm, 982 mm, 974 mm, 966 mm are reconstructed, with the reconstruction errors listed in Table 1. As can be seen from Table 1, calibrating with the four equidistantly translated targets gives a system mean absolute error of 1.38 mm and a mean relative error of 0.14%.
  Z/mm    Standard deviation    Absolute error/mm    Relative error/%
  990.0        1.0074                1.3209                0.13
  982.2        1.0374                1.4086                0.14
  974.4        1.0293                1.3943                0.14
  966.7        1.0881                1.3988                0.14
Table 1
Using the calibrated parameters, the calibration points of the four planes are back-projected respectively; the reconstruction result is shown in Figure 9.
Measurement accuracy is closely related to the precision of the hardware. In the system-calibration experiment the error has many causes, mainly summarized in the following aspects:
1. The printing of the high-precision checkerboard pattern carries a certain error.
2. In the corner-extraction process, camera calibration assumes that the points on the checkerboard are coplanar, which requires the template plane to be perfectly flat; in the actual experiment the template plane cannot be absolutely flat, so the captured images deviate slightly from the assumption, and this small deviation introduces error into the computation.
3. In the projector-calibration process, cross-ratio invariance is used to find the world coordinates, and error is introduced when a straight line is fitted to the feature points.
4. Error caused by the calibration system. First, the resolution and acquisition precision of the vision system affect the error of the whole system; second, the choice of calibration method and the combination of the parts of the calibration system can also cause system error. The camera calibration of this paper is based mainly on Zhang Zhengyou's planar method, whose results come from nonlinear iteration and therefore minimize the error; the decomposition of the projector model is likewise based on error minimization.
The four aspects above are the main causes of calibration error. Because the experimental conditions are limited, the resolution and acquisition precision of the camera and projector are restricted, and in the grid-corner extraction the template surface is not absolutely flat, the corner extraction carries error. In addition, the system model above may be imperfect both in theory and in computation, which also causes error.
The calibrated transformation-matrix parameters determine the relation between the coordinates of a space point and its image coordinate point; three-dimensional computation then requires recovering the 3D space coordinates from the image coordinates of the feature points, which is the process of three-dimensional reconstruction. The modulated image is captured and processed to obtain the observed image coordinates, from which the ideal normalized coordinates (x, y) are computed via equations (29) and (30). The space point in the camera coordinate system and in space satisfies the relation:
  (X_c, Y_c, Z_c)^T = R (X_w, Y_w, Z_w)^T + t = [r_1; r_2; r_3] (X_w, Y_w, Z_w)^T + (t_1, t_2, t_3)^T    (6)

From equation (28), X_c = x Z_c and Y_c = y Z_c; substituting into (6) gives:

  (x Z_c, y Z_c, Z_c)^T = [r_1; r_2; r_3] (X_w, Y_w, Z_w)^T + (t_1, t_2, t_3)^T    (7)

Eliminating Z_c and simplifying:

  (x t_3 - t_1, y t_3 - t_2)^T = [r_1 - x r_3; r_2 - y r_3] (X_w, Y_w, Z_w)^T    (8)

Stripe decoding yields the coordinate value x_d, and:

  x_d m_24 - m_14 = [m_1 - x_d m_2] (X_w, Y_w, Z_w)^T    (9)

Combining equations (8) and (9):

  (x t_3 - t_1, y t_3 - t_2, x_d m_24 - m_14)^T = [r_1 - x r_3; r_2 - y r_3; m_1 - x_d m_2] (X_w, Y_w, Z_w)^T    (10)

Writing (10) in the matrix form A X = B with X = (X_w, Y_w, Z_w), the linear least-squares solution of the equation is:

  X = (A^T A)^{-1} A^T B    (11)

X is the world coordinate obtained from the 2D image by three-dimensional computation.
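The linear system (10) and its least-squares solution (11) can be sketched as follows; the camera measurement is assumed already reduced to normalized coordinates (x, y), and the function name and test model are illustrative.

```python
import numpy as np

def reconstruct_point(x, y, xd, R, t, M2x4):
    """Solve eq. (10) as A X = B (eq. 11): intersect the camera ray through
    the normalized image point (x, y) with the projector plane of stripe
    coordinate xd. R, t are the camera extrinsics; M2x4 is the 1-D
    projector matrix [m1 m14; m2 m24]."""
    r1, r2, r3 = R
    m1, m14 = M2x4[0, :3], M2x4[0, 3]
    m2, m24 = M2x4[1, :3], M2x4[1, 3]
    A = np.array([r1 - x * r3,
                  r2 - y * r3,
                  m1 - xd * m2])
    B = np.array([x * t[2] - t[0],
                  y * t[2] - t[1],
                  xd * m24 - m14])
    return np.linalg.lstsq(A, B, rcond=None)[0]
```

With exact camera and projector models the 3x3 system is square and the least-squares solution is exact; with noisy measurements it is the minimum-residual estimate of equation (11).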

Claims (2)

1. A three-dimensional reconstruction method based on coded structured light, characterized in that the method comprises the following steps:
1) projecting structured light onto the object to be measured, a camera capturing the image modulated by the object;
2) matching the light template, comprising the following steps:
(2.1) locating the stripe boundaries:
the light template to be detected is a one-dimensional stripe code; each column of the image is scanned, every channel of every pixel is pre-processed with a one-dimensional column operator, and pixels with strong gray-level change are taken as candidate edge points; in each image column a region of a certain size is chosen centered on the candidate edge point, its maximum extent not exceeding 1/2 of the stripe width; the local-neighborhood search proceeds as follows:
(2.1.1) initialize the candidate edge point as the region center z_{i,j};
(2.1.2) search along the column direction, converting the pixel colors at z_{i-k,j} and z_{i+k,j} into the hue, intensity, and saturation (HIS) space;
(2.1.3) add a new point to the region as long as its hue difference from the neighboring pixel does not exceed a set threshold H:
h = | z_{i±k,j,h} - z_{i±k±1,j,h} |    (1)
h is the hue difference of the two pixels, and z_{i±k,j,h} denotes the hue value of the pixel at row i±k, column j; after the operator is applied, the three-channel gray value of a pixel is G_{ij}, and the pixel that maximizes the sum of the three-channel gray values within the local region is defined as the boundary point:
E = max( Σ_{c=1}^{C} G_{ij} ),  C = 3    (2)
where E is the maximum of the summed pixel gray values after operator processing;
(2.2) stripe matching: the method of color clustering is used to build the color-matching feature vectors; the image colors are compared with the projected colors, and the Euclidean distance between a color feature vector and each cluster center is used to assign the colors red, green, blue, or white to the candidate stripes;
the red, green, and blue primaries and white are selected as the coding colors; apart from black and white, the space distances among red, green, blue, and white are maximal; the coding color set is chosen as the 4-element set φ = {(255,0,0), (0,255,0), (0,0,255), (255,255,255)}, with white separating the colored stripes; the corresponding color-set mapping is:
Φ = {R_i | i = 1, 2, 3, 4}
where R_i denote the RGB feature vectors of the four colors red, green, blue, and white;
a wide white stripe is introduced to separate the colored stripes, so that adjacent colored stripes may use the same color value; the light template is composed of 2n^m stripes, where the number of colors is n and the window size is m;
define Q = {1, 2, ..., 2n^m} as the coordinate index values of the stripes, each stripe boundary delimiting one space plane; each projected frame consists of the same light template, each following frame being formed by shifting the stripes of the preceding frame down by 1/4 of the stripe width;
the different color groups gather in specific regions of RGB space; the cluster center of each color group is trained from sample points, and the stripe colors are matched by the point-to-point distance:
D_{ik} = d²(Φ_i, C_k),  k = 1, 2, 3, 4    (3)
where D_{ik} is the squared distance between the two points, Φ_i is the RGB feature vector of the stripe center pixel, C_k is a color cluster center, and d(x, y) is the distance between points x and y; if D = min(D_{ik}), the stripe color is assigned according to equation (4):
s_c = { k  if  D == D_{ik},  k = 1, 2, 3, 4 }    (4)
where s_c denotes the stripe color;
after the colors have been assigned, every three adjacent stripe colors form a codeword s_i, and decoding consists of finding the position of each codeword in the whole sequence; the stripe coordinate value x_i is defined as:
x_i = DdBS(s_i) + (1/4)(j - 1)    (5)
where DdBS(s_i) denotes the coordinate position obtained by directly decoding the subsequence s_i of the j-th light-template frame;
3) using the calibrated system parameters to carry out the three-dimensional reconstruction of the object to be measured, the detailed process being:
the calibrated transformation-matrix parameters determine the relation between the coordinates of a space point and its image coordinate point, and the 3D space coordinates are recovered from the image coordinates of the feature points; the modulated image is captured and processed to obtain the image coordinates, from which the ideal normalized coordinates (x, y) are computed; the space point in the camera coordinate system and in space satisfies the relation:
(X_c, Y_c, Z_c)^T = R (X_w, Y_w, Z_w)^T + t = [r_1; r_2; r_3] (X_w, Y_w, Z_w)^T + (t_1, t_2, t_3)^T    (6)
defining X_c = x Z_c and Y_c = y Z_c and substituting into (6) gives:
(x Z_c, y Z_c, Z_c)^T = [r_1; r_2; r_3] (X_w, Y_w, Z_w)^T + (t_1, t_2, t_3)^T    (7)
eliminating Z_c and simplifying:
(x t_3 - t_1, y t_3 - t_2)^T = [r_1 - x r_3; r_2 - y r_3] (X_w, Y_w, Z_w)^T    (8)
the projector model relating the projector coordinate x_d to the world coordinates X = (X_w, Y_w, Z_w) is established as:
γ (x_d, 1)^T = [ m_1  m_14 ; m_2  m_24 ] (X_w, Y_w, Z_w, 1)^T,  with m_1 = (m_11, m_12, m_13), m_2 = (m_21, m_22, m_23)    (15)
where γ is an arbitrary scale factor;
stripe decoding yields the coordinate value x_d:
x_d m_24 - m_14 = [m_1 - x_d m_2] (X_w, Y_w, Z_w)^T    (9)
combining equations (8) and (9):
(x t_3 - t_1, y t_3 - t_2, x_d m_24 - m_14)^T = [r_1 - x r_3; r_2 - y r_3; m_1 - x_d m_2] (X_w, Y_w, Z_w)^T    (10)
writing (10) in the matrix form A X = B with X = (X_w, Y_w, Z_w), the linear least-squares solution of the equation is:
X = (A^T A)^{-1} A^T B    (11)
X being the world coordinate obtained from the two-dimensional image by three-dimensional computation.
2. The three-dimensional reconstruction method based on coded structured light as claimed in claim 1, characterized in that in step 3) the system parameters comprise the camera parameters and the projector parameters, and the calibration method comprises the following steps:
3.1) the camera calibration process:
a two-dimensional calibration template is adopted; the target consists of a standard black-and-white checkerboard, the feature points being the checkerboard corner points; the camera captures several images at different positions, the grid corners are selected as feature points in these images, and the planar calibration method is used to calibrate the camera, yielding the camera intrinsic calibration matrix K and the mean focal length; characterized in that the calibration method further comprises:
3.2) the projector calibration process:
the light template produced by coding is projected onto the common target, the black/white grid corners serving as anchor points; the images obtained in camera calibration are used, the image coordinates of the black/white grid corners are extracted, and the world coordinates are generated;
when the plain-white light template is projected, let the four corner positions of each black/white grid on the template be A_i, B_i, A_{i+1}, B_{i+1}, i = 1, 2, 3, ...; an arbitrary straight line L intersects the two sides L_i(A_i, B_i) and L_{i+1}(A_{i+1}, B_{i+1}) of the grid at points P_i and P_{i+1} respectively; taking the first grid as an example, the image coordinates (u_A, v_A), (u_B, v_B) of the two corner points A_1, B_1 are obtained directly, with world coordinates (X_A, Y_A, Z_A), (X_B, Y_B, Z_B); intersecting line L_i with L gives the image coordinates of point P_i, the image coordinates of P_1 being (u_P, v_P) and its world coordinates written (X_P, Y_P, Z_P); since the three points A_1, B_1, P_1 are coplanar, Z_P = Z_A = Z_B, and by cross-ratio invariance one obtains:
(X_A - X_P)/(X_A - X_B) = (v_A - v_P)/(v_A - v_B),   (Y_A - Y_P)/(Y_A - Y_B) = (u_A - u_P)/(u_A - u_B)    (12)
let
λ_x = (v_A - v_P)/(v_A - v_B),   λ_y = (u_A - u_P)/(u_A - u_B)    (13)
from (12) and (13):
X_P = X_A + λ_x (X_B - X_A),   Y_P = Y_A + λ_y (Y_B - Y_A),   Z_P = Z_A = Z_B    (14);
λ_x and λ_y denote the cross-ratio coefficients in the X and Y directions respectively, and (X_P, Y_P, Z_P) denotes the world coordinates of the stripe edge feature point;
in step 3.2), the projector model is established as follows:
the model relating the projector coordinate x_d to the world coordinates X is:
γ (x_d, 1)^T = [ m_1  m_14 ; m_2  m_24 ] (X_w, Y_w, Z_w, 1)^T,  with m_1 = (m_11, m_12, m_13), m_2 = (m_21, m_22, m_23)    (15)
where γ is an arbitrary scale factor; eliminating γ and transforming (15) gives:
( m_1 (X_w, Y_w, Z_w)^T + m_14 ) / ( m_2 (X_w, Y_w, Z_w)^T + m_24 ) = x_d    (16)
using (16), a system of homogeneous linear equations in the unknowns m_ik is constructed and the projector parameters are solved by singular value decomposition;
a simplified perspective cross-ratio-invariance geometric model is established: suppose space points A, B, C lie on the same straight line Γ; taking B as the reference point, the position ratio is defined as:
PR(A, B, C) = AB/AC    (17)
similarly, A′, B′, C′ are the image points of A, B, C under the center of perspectivity O and lie on the same straight line Γ′; the position ratio in image coordinates is:
PR(A′, B′, C′) = A′B′/A′C′    (18)
according to the cross-ratio invariance of perspective geometry:
PR(A, B, C) = PR(A′, B′, C′)    (19)
the image points A′, B′, C′ likewise lying on the same straight line Γ′;
the image coordinates of the feature points in the light template are extracted, the world coordinates of the reference point B are obtained from equation (19), and the mapping between three-dimensional space and the one-dimensional coordinate is established to calibrate the projector.
CN 200910153603 2009-09-29 2009-09-29 Three-dimensional reconstruction method based on coding structured light Active CN101667303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910153603 CN101667303B (en) 2009-09-29 2009-09-29 Three-dimensional reconstruction method based on coding structured light

Publications (2)

Publication Number Publication Date
CN101667303A CN101667303A (en) 2010-03-10
CN101667303B true CN101667303B (en) 2013-01-16

Family

ID=41803914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910153603 Active CN101667303B (en) 2009-09-29 2009-09-29 Three-dimensional reconstruction method based on coding structured light

Country Status (1)

Country Link
CN (1) CN101667303B (en)

Families Citing this family (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102176755B (en) * 2010-12-24 2013-07-31 海信集团有限公司 Control method and device based on eye movement three-dimensional display angle
CN102289779B (en) * 2011-07-29 2013-08-21 中国科学院长春光学精密机械与物理研究所 Device for obtaining width of Moire fringe by using image processing technology
JP5864950B2 (en) * 2011-08-15 2016-02-17 キヤノン株式会社 Three-dimensional measuring apparatus, three-dimensional measuring method and program
CN102438111A (en) * 2011-09-20 2012-05-02 天津大学 Three-dimensional measurement chip and system based on double-array image sensor
CN105263437B (en) * 2013-02-13 2017-10-03 3形状股份有限公司 Record the focusing scanning means of color
CN103235416A (en) * 2013-04-15 2013-08-07 中国科学院西安光学精密机械研究所 Method for changing color separating 3D (three dimensional) images to polarizing 3D images and compatible dual-lens projection system thereof
DE102013209770B4 (en) * 2013-05-27 2015-02-05 Carl Zeiss Industrielle Messtechnik Gmbh Method for determining adjustable parameters of a plurality of coordinate measuring machines, and method and apparatus for generating at least one virtual image of a measuring object
US9658061B2 (en) * 2013-12-31 2017-05-23 Faro Technologies, Inc. Line scanner that uses a color image sensor to improve dynamic range
CN104075659B (en) * 2014-06-24 2016-08-17 华南理工大学 A kind of three-dimensional imaging recognition methods based on RGB structure light source
NL2013355B1 (en) * 2014-08-22 2016-09-23 Handicare Stairlifts B V Method and system for designing a stair lift rail assembly.
US9948920B2 (en) * 2015-02-27 2018-04-17 Qualcomm Incorporated Systems and methods for error correction in structured light
CN106155299B (en) * 2015-04-23 2019-06-11 青岛海信电器股份有限公司 A kind of pair of smart machine carries out the method and device of gesture control
CN104952074B (en) * 2015-06-16 2017-09-12 宁波盈芯信息科技有限公司 Storage controlling method and device that a kind of depth perception is calculated
CN104897174B (en) * 2015-06-19 2018-07-10 大连理工大学 Image striation noise suppressing method based on confidence evaluation
CN105096314A (en) * 2015-06-19 2015-11-25 西安电子科技大学 Binary grid template-based method for obtaining structured light dynamic scene depth
CN105184857B (en) * 2015-09-13 2018-05-25 北京工业大学 Monocular vision based on structure light ranging rebuilds mesoscale factor determination method
CN105303609A (en) * 2015-11-18 2016-02-03 湖南拓视觉信息技术有限公司 Device for three-dimensional imaging and real-time modeling and method
CN106840251B (en) * 2015-12-07 2020-04-14 中国电力科学研究院 Three-dimensional scanning system for appearance detection of low-voltage current transformer
CN106091984B (en) * 2016-06-06 2019-01-25 中国人民解放军信息工程大学 A kind of three dimensional point cloud acquisition methods based on line laser
CN106251376B (en) * 2016-08-12 2019-08-06 南京航空航天大学 One kind is towards colored structures pumped FIR laser and edge extracting method
CN106595502A (en) * 2016-12-01 2017-04-26 广州亚思信息科技有限责任公司 Structured light-based motion compensation 3D measurement method
CN108242064B (en) * 2016-12-27 2020-06-02 合肥美亚光电技术股份有限公司 Three-dimensional reconstruction method and system based on area array structured light system
US20180225799A1 (en) * 2017-02-03 2018-08-09 Cognex Corporation System and method for scoring color candidate poses against a color image in a vision system
CN106952249B (en) * 2017-02-20 2020-06-09 广东电网有限责任公司惠州供电局 Insulator string axis extraction method based on cross ratio invariance
CN106931906A (en) * 2017-03-03 2017-07-07 浙江理工大学 A kind of object dimensional size simple measurement method based on binocular stereo vision
US20180347967A1 (en) * 2017-06-01 2018-12-06 RGBDsense Information Technology Ltd. Method and apparatus for generating a random coding pattern for coding structured light
CN107516324B (en) * 2017-07-20 2019-12-17 大连理工大学 Target boundary extraction method based on geometric characteristic mutation of light bars
KR102457891B1 (en) * 2017-10-30 2022-10-25 삼성전자주식회사 Method and apparatus for image processing
CN108038898B (en) * 2017-11-03 2020-06-30 华中科技大学 Single-frame binary structure optical coding and decoding method
CN108122254B (en) * 2017-12-15 2021-06-22 中国科学院深圳先进技术研究院 Three-dimensional image reconstruction method and device based on structured light and storage medium
CN107945268B (en) * 2017-12-15 2019-11-29 深圳大学 A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light
WO2019113912A1 (en) * 2017-12-15 2019-06-20 中国科学院深圳先进技术研究院 Structured light-based three-dimensional image reconstruction method and device, and storage medium
CN108088386B (en) * 2017-12-15 2019-11-29 深圳大学 A kind of the binary area-structure light detection method and system of micro-nano magnitude
DE102018108874A1 (en) * 2018-04-13 2019-10-17 Isra Vision Ag Method and system for measuring an object by means of stereoscopy
CN108931209B (en) * 2018-05-04 2019-12-31 长春理工大学 High-adaptability three-dimensional reconstruction method for colored object
CN108985310B (en) * 2018-05-04 2021-12-07 长春理工大学 Stripe code word matching method based on sequence characteristic repetition degree
CN108895979B (en) * 2018-05-10 2020-04-07 西安电子科技大学 Line segment coded structured light depth acquisition method
CN109166156B (en) * 2018-10-15 2021-02-12 Oppo广东移动通信有限公司 Camera calibration image generation method, mobile terminal and storage medium
CN109951695B (en) * 2018-11-12 2020-11-24 北京航空航天大学 Mobile phone-based free-moving light field modulation three-dimensional imaging method and imaging system
CN109584356A (en) * 2018-11-23 2019-04-05 东南大学 A kind of decoded more view reconstructing methods of M-array image adaptive local window
CN109406519A (en) * 2018-11-28 2019-03-01 广州番禺职业技术学院 A kind of detection device and method of the special-shaped irregular solder joint of inductance element pin
CN109373901B (en) * 2018-12-03 2020-08-07 易思维(天津)科技有限公司 Method for calculating center position of hole on plane
CN109360249A (en) * 2018-12-06 2019-02-19 北京工业大学 Calibration system is adjusted in camera
CN109506569B (en) * 2019-01-08 2020-04-07 大连理工大学 Method for monitoring three-dimensional sizes of cubic and columnar crystals in crystallization process based on binocular vision
CN110033506B (en) * 2019-03-18 2023-05-02 西安科技大学 Three-dimensional reconstruction system and reconstruction method for fully mechanized mining face based on structured light
CN110492934B (en) * 2019-07-12 2022-05-13 华南师范大学 Noise suppression method for visible light communication system
TWI720602B (en) 2019-08-27 2021-03-01 國立中央大學 Method and optical system for reconstructing surface of object
CN112650207A (en) * 2019-10-11 2021-04-13 杭州萤石软件有限公司 Robot positioning correction method, apparatus, and storage medium
CN111174731B (en) * 2020-02-24 2021-06-08 五邑大学 Color segmentation based double-stripe projection phase unwrapping method and device
CN111383234B (en) * 2020-03-04 2022-05-17 中国空气动力研究与发展中心超高速空气动力研究所 Machine learning-based structured light online intensive three-dimensional reconstruction method
CN111325831B (en) * 2020-03-04 2022-07-01 中国空气动力研究与发展中心超高速空气动力研究所 Color structured light bar detection method based on hierarchical clustering and belief propagation
CN111415407B (en) * 2020-03-27 2023-04-07 西北民族大学 Method for improving performance of three-dimensional reconstruction image by adopting multi-template system
CN111427107B (en) * 2020-04-07 2022-02-15 上海冠众光学科技有限公司 Diffraction optical element value model, diffraction optical element and manufacturing method thereof
CN112097685B (en) * 2020-07-28 2021-07-27 安徽农业大学 Moving object three-dimensional measurement method based on color fringe projection
CN111964606B (en) * 2020-08-18 2021-12-07 广州小鹏汽车科技有限公司 Three-dimensional information processing method and device
CN112489193A (en) * 2020-11-24 2021-03-12 江苏科技大学 Three-dimensional reconstruction method based on structured light
CN112767536A (en) * 2021-01-05 2021-05-07 中国科学院上海微系统与信息技术研究所 Three-dimensional reconstruction method, device and equipment of object and storage medium
CN112802193B (en) * 2021-03-11 2023-02-28 重庆邮电大学 CT image three-dimensional reconstruction method based on MC-T algorithm
CN113392597B (en) * 2021-06-16 2022-10-28 桂林电子科技大学 Wave reconstruction method based on Helmholtz-Hodge decomposition
CN113696939B (en) * 2021-08-25 2023-08-01 北京博研盛科科技有限公司 Method, system and equipment for positioning railcar based on marker
CN114087982B (en) * 2021-10-29 2023-10-27 西安理工大学 Large-breadth relative position measurement system and method based on light field
CN115379182B (en) * 2022-08-19 2023-11-24 四川大学 Bidirectional structure optical coding and decoding method and device, electronic equipment and storage medium
CN115984314B (en) * 2022-11-25 2023-06-23 哈尔滨理工大学 Image edge detection method and system based on calculation holographic second-order differential
CN116664796B (en) * 2023-04-25 2024-04-02 北京天翔睿翼科技有限公司 Lightweight head modeling system and method
CN117173383B (en) * 2023-11-02 2024-02-27 摩尔线程智能科技(北京)有限责任公司 Color generation method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1796933A (en) * 2004-12-28 2006-07-05 陈胜勇 Method and equipment for realizes structured light in high performance based on uniqueness in field
CN101089547A (en) * 2007-07-11 2007-12-19 华中科技大学 Two-diensional three-frequency dephase measuring method base on color structural light


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Jianqing, Ke Tao, Cheng Ying. "Three-dimensional reconstruction of texture-deficient targets based on a single camera and structured light." Geomatics Information and Engineering (《测绘信息与工程》), 2006, No. 4, pp. 49-51. *
Li Qingquan, Wang Zhi, Li Yuguang. "3D target measurement and multi-resolution modeling based on line structured light." Acta Geodaetica et Cartographica Sinica (《测绘学报》), 2006, Vol. 35, No. 4, pp. 371-378. *

Also Published As

Publication number Publication date
CN101667303A (en) 2010-03-10

Similar Documents

Publication Publication Date Title
CN101667303B (en) Three-dimensional reconstruction method based on coding structured light
CN107945268B (en) A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light
Koch et al. Evaluation of cnn-based single-image depth estimation methods
US6781618B2 (en) Hand-held 3D vision system
CN106091984B (en) A kind of three dimensional point cloud acquisition methods based on line laser
Pages et al. Optimised De Bruijn patterns for one-shot shape acquisition
Tarini et al. 3D acquisition of mirroring objects using striped patterns
CN101673412B (en) Light template matching method of structured light three-dimensional vision system
US11521311B1 (en) Collaborative disparity decomposition
US7430312B2 (en) Creating 3D images of objects by illuminating with infrared patterns
CN104835158B (en) Based on the three-dimensional point cloud acquisition methods of Gray code structured light and epipolar-line constraint
Krotosky et al. Mutual information based registration of multimodal stereo videos for person tracking
CN106228507A (en) A kind of depth image processing method based on light field
CN105844633B (en) Single frames structure optical depth acquisition methods based on De sequence and phase code
CN105046746A (en) Digital-speckle three-dimensional quick scanning method of human body
CN105869160A (en) Method and system for implementing 3D modeling and holographic display by using Kinect
CN105184857A (en) Scale factor determination method in monocular vision reconstruction based on dot structured optical ranging
CN109920007A (en) Three-dimensional image forming apparatus and method based on multispectral photometric stereo and laser scanning
CN110246124A (en) Target size measurement method and system based on deep learning
Wang et al. Single view metrology from scene constraints
CN101482398B (en) Fast three-dimensional appearance measuring method and device
Lanman et al. Surround structured lighting: 3-D scanning with orthographic illumination
CN105739106A (en) Somatosensory multi-view point large-size light field real three-dimensional display device and method
CN108020172B (en) A kind of aircraft surface manufacturing quality detection method based on 3D data
Lanman et al. Surround structured lighting for full object scanning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant