CN1975323A - Method for making three-dimensional measurement of objects utilizing single digital camera to freely shoot - Google Patents

Method for making three-dimensional measurement of objects utilizing single digital camera to freely shoot

Info

Publication number: CN1975323A
Authority: CN (China)
Legal status: Granted
Application number: CN 200610161274
Other languages: Chinese (zh)
Other versions: CN100430690C
Inventors: 张丽艳, 郑建冬, 张辉, 卫炜
Current Assignee: Nanjing University of Aeronautics and Astronautics
Original Assignee: Nanjing University of Aeronautics and Astronautics
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CNB2006101612744A
Publication of CN1975323A
Application granted; publication of CN100430690C
Legal status: Expired - Fee Related

Landscapes

  • Image Processing (AREA)

Abstract

This invention discloses a three-dimensional measurement method consisting of measurement preparation, image capture, coding-point identification, camera pose determination, target-curve extraction, homonymous-curve matching, and target-curve reconstruction. Its characteristics are the following: the target curves are marked before measurement so that their identification in the images is much easier; a scale and a set of coding points are placed around the target object; a set of images is obtained by freely shooting with the camera; the position and attitude of the camera at each shot are computed automatically from the images; and the marked curves are extracted and the homonymous curves in different images are matched, so that the three-dimensional point-sequence information of the curves is computed automatically.

Description

Method for carrying out three-dimensional measurement on object by using single digital camera to freely shoot
One, Technical Field
Three-dimensional object measurement belongs to the field of measurement and testing technology. The corresponding code in the International Patent Classification is G01B.
Two, Background Art
The rapidly widening application of three-dimensional measurement in reverse engineering, industrial inspection, quality control and other fields has greatly promoted its development, and various measurement methods based on optical, acoustic, electromagnetic and mechanical-contact principles have appeared, such as the coordinate measuring machine, the laser scanner and the structured-light measuring instrument. The coordinate measuring machine uses mechanical contact sensing and achieves high measuring accuracy, but it generally requires a dedicated measuring room and measuring table, places high demands on the measuring environment, has a limited measuring range and low measuring efficiency, and is unsuitable for measuring soft objects.

Laser line scanning and structured-light projection are currently the mainstream methods for measuring three-dimensional geometric shape: by projecting laser or structured light onto the object surface, dense point-cloud data of the surface can be acquired quickly. These methods, however, are limited by the scanning range, by highly reflective surfaces and similar factors, and laser scanners and structured-light instruments are expensive. For mechanical products in particular, whose structural features are pronounced, what is mainly needed are the data of corner points, edges and certain control lines on the model surface, which play the key role in reconstructing a digital model of the measured object. Structured-light projection, laser line scanning and similar methods instead acquire whole-surface point clouds or grids, so on the one hand the output data volume is huge, and on the other hand the required edge features and control lines of key sections cannot be obtained directly and explicitly. Moreover, the data output by these methods are generally more reliable in smooth, flat regions of the model, and less reliable exactly at the critical corners and edges.
To measure three-dimensional geometric information with simpler hardware and in a more flexible and convenient manner, accurately reconstructing the position and shape of an object from multiple images taken by a digital camera has become a research focus in recent years. Among such systems, the TriTop® system from GOM (Germany) performs high-precision three-dimensional coordinate positioning from free shots of a single digital camera. A group of coding points and length scales are placed in the scene, easily identifiable marker points are pasted on the parts of interest, and the user then takes a number of images with a hand-held digital camera, a certain overlap between images being required. After all images are loaded into the accompanying software, the system automatically computes in one pass the position and attitude of the camera at every shot and the space coordinates of all marker points. The system is commercially available in China, but at present it can only locate the spatial coordinates of a specific marker-point target (a white dot surrounded by a black ring); it is generally used together with other measuring methods to stitch multi-view measurement data, cannot perform three-dimensional measurement of curve targets, and therefore cannot be used for three-dimensional digital model reconstruction of products with complex geometric shapes.
Three, Summary of the Invention
The aim of the invention is a practical measuring method, oriented to measurement and modeling of industrial products, that is convenient to apply, of relatively high accuracy and low cost, and uses only simple hardware. To this end, the characteristic lines of the measured object, and the key section control lines on its surface needed for digital model reconstruction, are marked so that they differ clearly in color and brightness from the measured object, which facilitates image identification; a scale and a group of specially designed coding points are arranged around the measured object; a group of images of the measured object is then acquired by free shooting with a hand-held digital camera. From this group of images, the position and attitude of the camera at every shot are computed automatically and accurately; at the same time, a convenient interaction means is provided to the user, realizing semi-automatic extraction of the marked curves and optimized matching of homonymous curves in different images, from which the three-dimensional point-sequence information of the marked curve structures is computed automatically.
Based on this scheme, a practical system has been developed that is flexible and convenient to use, suitable for measuring three-dimensional curve structures on objects of different sizes, and well suited for building real-object-based three-dimensional digital models of mechanical products. The object three-dimensional measurement method is characterized in that measurement requires only one digital camera and an ordinary personal computer, with a group of coding points and a scale as auxiliary material; no complex measurement hardware system is needed, nor any complicated calibration of the measuring system. The curve data required for digital model reconstruction of the object are generated directly and explicitly, avoiding data redundancy and favoring efficient model reconstruction. All measured data are located automatically in one world coordinate system, which avoids the difficult stitching of data from multiple measurements required by other methods, as well as the accumulated error caused by stitching multiple data sets. The main steps of the method are measurement preparation, image capture, camera pose determination, target curve extraction, automatic matching optimization of homonymous curves, and three-dimensional reconstruction of the target curves.
Measurement preparation and image capture
Measurement preparation mainly involves three tasks. 1) The target curves (generally the boundary lines of natural surface patches, key section control lines, and the like) are marked according to the needs of digital model reconstruction, so that they differ clearly from the measured object in color and brightness, which facilitates image identification. 2) Several coding points are arranged in the measurement area. Each coding point has a unique identity code, enabling fast and reliable identification in different images. Both marker points and coding points can be produced simply by generating the corresponding patterns on a computer and printing them; the coding points are attached to hard board, wood or similar surfaces and can be reused. 3) A scale carrying two coding points, with a known distance between their centers, is placed in the measurement scene. The scale provides the actual size of the measured object; without it, only a three-dimensional structure differing by a scale factor can be obtained. After these preparations, the measured object can be photographed from multiple angles with a hand-held digital camera. A certain overlap between images is required, i.e. each image must share at least 5 coding points and some target curves with another image.
Identification and localization of code points
The pattern of a coding point consists of a central white dot, a middle black ring, and an outer ring divided into 15 equal sectors; each sector is black or white, black representing the binary code "0" and white the binary code "1", and the outer ring is called the "code band". The codes of all points within a set of coding points are distinct. Based on this coding, identities can be recognized reliably in different images, and the correspondences of homonymous coding points across images are established automatically. It is from the image positions of these homonymous coding points and their correspondences across multiple images that the positions and attitudes of the camera at the free shots can be computed automatically.
Because the central circle of a coding point images as an ellipse on the CCD, the invention first segments the image with the Canny operator and extracts contour information representing the different regions; it then filters candidate coding-point targets step by step according to five constraints on each contour, namely size, shape, ellipse-fitting residual, regional gray mean and regional gray variance, thereby extracting the coding-point targets.
After a coding-point target is locked, it is decoded, i.e. it is determined which coding point it is. Decoding is based on the gray levels of the sectors of the code band. The invention fits the middle ellipse of the code band and median-filters each pixel on it, thereby taking the gray values of most pixels in the code band into account and eliminating the influence of isolated noise. Results on a large number of field images show that this is very effective for improving the robustness of coding-point identification.
Finally, the center coordinates of each coding point are determined with sub-pixel accuracy from the gray values of all pixels in the identified central circular region of the coding point.
Camera position and pose determination
The fundamental matrix between two images is first computed from the pixel coordinates of at least 5 homonymous coding-point centers in the two images. From the camera intrinsic parameters and the fundamental matrix, the camera poses of the two images and the three-dimensional coordinates of the coding-point centers commonly visible in both can be recovered. Then, from the homonymous correspondences between the obtained three-dimensional space points and the coding points on a third image, the camera attitude of the third image is solved, yielding further coding-point center coordinates; the camera pose of the next image is then solved, and so on incrementally until all camera poses and coding-point centers are obtained. Finally, bundle adjustment is applied to globally optimize all camera parameters and coding-point center coordinates, further improving the accuracy. This strategy of combining the incremental method with global optimization is efficient and ensures high camera-positioning accuracy.
A group of captured images is read in at once, and the measuring system automatically computes and records the camera position and attitude at every shot. Determining these means determining the position and attitude of every image in a unified world coordinate system, so that every target curve later reconstructed by the subsequent algorithms from different image pairs is located directly in the same coordinate system, without data stitching.
Semi-automatic extraction of target curves
For the currently selected image pair displayed in the two image windows, the user picks points with the mouse near the same commonly visible marked curve in each image; the polyline through these points roughly follows the corresponding image curve contour, and the measuring system then automatically fits this approximate contour to the image curve according to an energy-optimization principle. In this semi-automatic extraction the user only needs to pick points in order near the image curve, which is simple and easy; the good initial search position greatly improves the stability of target-curve extraction and of the subsequent automatic matching optimization, while the optimal fitting ensures the extraction accuracy of the target curve.
Automatic matching optimization of homonymous curves
After a pair of homonymous target curves has been extracted from an image pair, the key problem is to establish the correspondence, between the two images, of the individual pixel points on the homonymous target curves. From these point correspondences, the spatial coordinates of the points can be reconstructed by stereo triangulation.
According to the basic theory of stereo vision, homonymous points between a pair of images taken by the camera from different positions and angles must satisfy the epipolar constraint. For two candidate homonymous matching points v1 and v2 in an image pair, the invention measures their degree of matching by the sum of the distance, on the second image, from v2 to the epipolar line of v1, and the distance, on the first image, from v1 to the epipolar line of v2. At the same time, the matching of the point sequences on two homonymous target curves satisfies a spatial ordering constraint: points ordered along one curve must correspond to points ordered along the other. Based on this analysis, the invention first uses dynamic programming to obtain an initial match of the discrete pixel points on the homonymous curves, and then performs matching optimization of the curves. Let the two homonymous image curves be represented by the parametric equations c1(l) and c2(l); the invention optimizes the following objective function to achieve exact matching of the points on c1 and c2:
min ∫_0^{L1} [ |c2(σ(l))^T F c1(l)| / ||e F c1(l)|| + |c2(σ(l))^T F c1(l)| / ||c2(σ(l))^T F e^T|| ] dl    (28)
where e = [0 −1 0; 1 0 0; 0 0 0]; F is the fundamental matrix between the image pair, which is known once the camera position and attitude at each shot have been determined; and σ(l) is the mapping function to be solved, giving the parameter value on curve c2 of the point with parameter l on curve c1.
Three-dimensional reconstruction of target curves
After all pixel points on the homonymous curves have been matched, and since the camera positions and attitudes of the image pair at their respective shots have been computed automatically, the spatial coordinates of the points can be reconstructed with the mature triangulation method of binocular stereo vision, completing the reconstruction of the whole curve.
The outstanding advantages of the invention are simple measurement hardware (a digital camera, an ordinary personal computer, a scale and a group of printed coding points), a flexible measurement mode (free shooting), an unrestricted measurement range, automatic merging of the measurement data of all views, non-redundant output data, convenient use and low cost. It can locate spatial points, and it can measure edges, characteristic lines, key section control lines and other three-dimensional information on the measured object as required for measurement modeling of mechanical products, so it has wide application prospects in reverse engineering, product quality inspection and related fields.
Drawings
FIG. 1: basic flow chart of the measuring method of the invention.
FIG. 2: schematic diagram of a coding point. Fig. 2(a) depicts the structure of a coding point, composed of a central white dot, a middle black ring, and an outer ring divided into 15 equal sectors; the value of each sector is determined by its color, black representing the binary code "0" and white the binary code "1". Fig. 2(b) shows three example coding points.
FIG. 3: layout of the software graphical interface of the measurement system in the embodiment. 1. Menu area; 2. icon toolbar; 3. image file list display; 4. display of one image of the currently active image pair; 5. display of the other image of the currently active image pair; 6. three-dimensional display area of the reconstructed target curves. The images currently shown in the two windows 4 and 5 are selected by clicking the image file list in area 3; after the user interactively outlines the approximate shape of a pair of homonymous curves in the two image windows 4 and 5 along the marked curves on the images, the system automatically computes the three-dimensional point sequence on the curve and displays it in the three-dimensional graphics area.
Detailed Description
An embodiment of the method for measuring the three-dimensional curve structure of an object provided by the invention is as follows: the digital camera is a Nikon manual-focus digital camera with built-in flash and a resolution of 4256 × 2848; the computer is a Pentium IV microcomputer with a 2.8 GHz clock frequency and 512 MB of memory; and the measurement software system is implemented on the Visual C++ 6.0 platform.
The specific embodiment and principle of the invention are described with reference to fig. 1. Before measurement, preparations are made: the target curves to be measured (generally the characteristic lines of the object and key section control lines on its surface needed for digital model reconstruction) are marked on the measured object so that they differ clearly from it in color and brightness, facilitating image identification; and a scale and a group of specially designed coding points, all with distinct codes (i.e. each coding point has a unique identity), are arranged around the measured object. After these preparations, a group of images of the measured object is acquired by free shooting with a hand-held digital camera, a certain overlap between images being required, i.e. each image must share at least 5 coding points and some target curves with another image. From this group of images the measurement system automatically identifies and precisely locates the coding points in each image, and then automatically and precisely computes the camera position and attitude at every shot. Through simple interaction the user completes the semi-automatic extraction of one marked curve in the currently active image pair, and the measurement system automatically performs the optimized matching of the homonymous image curves, from which the three-dimensional point-sequence information of the marked curve structure is computed automatically. If unprocessed target curves remain, the semi-automatic extraction, automatic matching and three-dimensional curve reconstruction are repeated for the next target curve (which may appear in a different image pair) until all target curves have been reconstructed. Detailed embodiments of the main steps of fig. 1 are given below.
Coding point identity recognition
The three-dimensional measurement method of the invention is based on the analysis of a set of captured images, and first of all on the identification of the coding points shown in fig. 2. The center of each coding point is a circular target point surrounded by an annular code band. The band is divided into 15 equal angular sectors, one per 24°, each sector corresponding to one binary bit: white is the foreground color and codes "1", black is the background color and codes "0". Depending on the starting sector, each coding point has 15 possible binary readings; the decimal number corresponding to the smallest of these 15 binary numbers is taken as the ID of the coding point.
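As an illustration of this rotation-invariant rule, the following Python sketch computes the ID from the 15 sector bits; the function name and the bit-list input format are assumptions made for the example, not details from the patent.

```python
def code_id(bits):
    """Rotation-invariant ID of a coding point: among the 15 cyclic
    readings of the sector bit string, the smallest binary number,
    interpreted as a decimal integer, is the ID."""
    n = len(bits)                      # 15 sectors, one per 24 degrees
    readings = []
    for s in range(n):
        rotated = bits[s:] + bits[:s]  # reading started at sector s
        readings.append(int("".join(str(b) for b in rotated), 2))
    return min(readings)

# A single white sector yields ID 1 no matter where the reading starts:
# code_id([0]*14 + [1]) == 1
```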
The automatic detection algorithm for the coding points comprises three main stages: first, coding-point target extraction, i.e. finding the target points in the image; second, determining the unique identity of each coding point from the information on its code band, i.e. decoding; and third, sub-pixel localization of the coding-point center.
(1) Encoded point target extraction
The central circle of a coding point images as an ellipse on the CCD. Therefore the Canny operator is first applied for image segmentation, contour information representing the different regions is extracted from the image, and the coding-point targets are then extracted by step-by-step filtering. Possible target points are first filtered preliminarily according to the size and shape of a marker-point target; a closed contour enters the further identification process if it satisfies the following conditions:
Pmin≤P≤Pmax (1)
1≤P2/4πA≤1.5 (2)
where P and A are the perimeter and area, respectively, of the closed contour, and P_min, P_max are minimum and maximum thresholds on the contour perimeter. Equation (1) bounds the size of the closed contour, and equation (2) measures its proximity to a circle.
For closed contours satisfying (1) and (2), a least-squares ellipse fit is performed; a contour qualifies as a candidate coding-point central circle if its fitting residual ε_eli satisfies a given tolerance ε_τ, i.e.

ε_eli ≤ ε_τ    (3)
After the least-squares template matching, all elliptical contours in the image have been found. However, in a real scene, some contours that are not marker-point targets but are elliptical or nearly elliptical are often mistaken for marker-point targets. Because the foreground of the marker-point targets used by the method is white and the background is black, the contrast between the two is strong; this clearly distinguishes marker-point targets from other objects. Non-coding-point targets are therefore further excluded using this gray-level characteristic of the marker points. Since the ellipse-fitting criterion (3) has been passed, the regions of the inner central ellipse and of the black ring are known. Denoting the mean gray level of the interior of the central white dot by M_I and the mean gray level of the black ring region by M_O, then M_I and M_O should satisfy:
M_I ≥ M_t
M_O ≤ M_t
M_I − M_O ≥ ΔM_t    (4)
where M_t is a threshold separating foreground gray from background gray, and ΔM_t is the minimum required difference between the foreground and background gray levels.
In addition, the gray variance V_I of the interior of the central white dot and the gray variance V_O of the black ring region are constrained to satisfy:
V_I ≤ δ_I
V_O ≤ δ_O    (5)
where δ_I and δ_O are the maximum allowed gray variances. Condition (5) requires a certain gray-level uniformity of the coding-point center. Contours satisfying expressions (1) through (5) proceed to the coding-point decoding process.
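The following Python sketch, using OpenCV, illustrates this step-by-step filtering under stated assumptions: all threshold values are placeholders rather than values from the patent, conditions (4)-(5) are checked for the inner white disc only, and the ellipse-fit residual test (3) is only approximated, since cv2.fitEllipse does not report a residual.

```python
import cv2
import numpy as np

def candidate_code_targets(gray, p_min=30.0, p_max=600.0,
                           m_t=128.0, var_max=400.0):
    """Step-by-step filtering of closed contours by conditions (1)-(5).
    Thresholds are illustrative placeholders, not patent values."""
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    candidates = []
    for c in contours:
        if len(c) < 5:                      # fitEllipse needs >= 5 points
            continue
        P = cv2.arcLength(c, True)
        A = cv2.contourArea(c)
        if A <= 0 or not (p_min <= P <= p_max):
            continue                        # condition (1): size
        if not (1.0 <= P * P / (4.0 * np.pi * A) <= 1.5):
            continue                        # condition (2): circularity
        ellipse = cv2.fitEllipse(c)         # condition (3), approximated
        mask = np.zeros_like(gray)
        cv2.ellipse(mask, ellipse, 255, -1)
        inner = gray[mask > 0]
        if inner.size == 0:
            continue
        if inner.mean() >= m_t and inner.var() <= var_max:
            candidates.append(ellipse)      # conditions (4)-(5), inner disc
    return candidates
```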
(2) Encoded point decoding
The specific implementation steps of the coding point decoding algorithm provided by the invention are as follows:
step 1: fitting the outer contour ellipse (marked as ellipse A) of the central dot of the coding point, the outer contour ellipse (marked as ellipse B) of the middle black ring and the outer contour ellipse (marked as ellipse C) of the ring in which each white sector is positioned. An ellipse is then fitted in the middle of the ellipse B, C, with the center and rotation angle being the same as the center and rotation angle of ellipse B, C, and the major and minor axes are averaged over the major and minor axes of B, C, respectively. And obtaining the position coordinates of each pixel point on the ellipse D by adopting an ellipse drawing algorithm.
Step 2: and calculating the median of all pixel grays in the area enclosed by the A as the foreground grayscale, and calculating the median of all pixel grays in the area between the A and the B as the background grayscale. And the average value of the foreground gray and the background gray is used as a threshold value for subsequently determining the code value of each binary bit of the coding point.
Step 3: for any pixel point TD on the ellipse D, a ray is made through the center of the ellipse, and the intersection points of the ray and the ellipse B, C are recorded as TB and TC. And sequencing the gray values of all pixels on the line segment TBTC, and taking the gray value of the middle pixel as the new gray value of the TD.
Step 4: and (3) carrying out inverse affine transformation on each point on the ellipse D according to a formula (6), so that the ellipse D corresponds to a unit circle, and the gray level of each point on the unit circle corresponds to the new gray level value of the ellipse B.
X′ = diag(a⁻¹, b⁻¹) · [cos α  sin α; −sin α  cos α] · (X − X_o)    (6)
where X′ is the coordinate of the point on the unit circle corresponding to TD, X is the coordinate of point TD, X_o is the coordinate of the center O of ellipse D, a and b are the lengths of the major and minor axes of ellipse D, and α is the rotation angle of ellipse D.
Step 5: and (4) carrying out binarization on the pixels on the unit circle, and taking one edge point as a starting point.
Step 6: starting from the starting point, every 24 ° on the unit circle is a binary bit, and the average gray value of all pixel points in each bit is calculated. If the average value of the gray scale of a certain bit is larger than the threshold value, the bit takes a binary code as '1'; otherwise, take "0". A binary coding of the encoded points is thus obtained. And finding the minimum decimal number corresponding to the binary number, wherein the decimal number is the ID of the code point.
Step 3 amounts to median filtering of point TD within a linear window whose pixels are those on the segment TB–TC. Determining each binary bit of the coding point from the median-filtered gray values on ellipse D takes the gray values of most pixels in the code band into account and eliminates the influence of isolated noise. Results on a large number of field images show that this is very effective for improving the robustness of coding-point identification.
(3) Marker center calculation
The coding-point center is located with sub-pixel accuracy using equation (7):
x_c = Σ_j Σ_i i · I_{i,j} / (Σ_j Σ_i I_{i,j})
y_c = Σ_j Σ_i j · I_{i,j} / (Σ_j Σ_i I_{i,j})    (7)
where (x_c, y_c) are the center coordinates of the coding point and I_{i,j} is the gray value of pixel point (i, j) in the central circular region.
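A minimal sketch of this gray-weighted centroid, assuming the central circular region has been cropped into a small gray-level patch (pixels outside the region set to zero):

```python
import numpy as np

def subpixel_center(patch):
    """Gray-weighted centroid of eq. (7); patch is a 2-D array of gray
    values over the central circular region. Returns (x_c, y_c)."""
    I = patch.astype(float)
    j, i = np.mgrid[0:I.shape[0], 0:I.shape[1]]  # j: row (y), i: column (x)
    s = I.sum()
    return (i * I).sum() / s, (j * I).sum() / s
```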
Camera pose automatic determination
In homogeneous coordinates, the projection x of a three-dimensional space point X on the camera imaging plane can be expressed as:
x=K[R|t]X=PX (8)
wherein K is a camera intrinsic parameter matrix; r and t are respectively a rotation transformation matrix and a translation transformation vector from a world coordinate system to a camera coordinate system; p is a 3 × 4 projective transformation matrix.
Suppose the camera shoots two images of the same scene at two different positions and orientations, with a rotation matrix R12 and a non-zero translation vector t12 between the two camera poses. From epipolar geometry, the following constraint exists between the two images:
x2^T F x1 = 0    (9)

where x1 and x2 are the projections of a three-dimensional space point X on the first and second images, respectively, and F is the 3 × 3 fundamental matrix, which maps a point x1 of the first image to its corresponding epipolar line F x1 in the second image.
From the correspondences of the coding points between the two images (i = 1, …, N, N ≥ 5), the fundamental matrix F between the two images is computed with the MLESAC (Maximum Likelihood Estimation SAmple Consensus) method.
From the camera's nominal intrinsic parameters such as the focal length, an initial value of the intrinsic parameter matrix K is constructed (the nominal parameters serve only as initial values; the measurement system optimizes them further in the subsequent process). From the fundamental matrix F, the essential matrix E between the two images can then be computed:

E = K^T F K    (10)
From the definition of the essential matrix, E = [t]× R (where [·]× denotes the antisymmetric matrix of a vector), and using the orthogonality of the rotation matrix one easily derives

Ê^T Ê = [ 1 − t̂x²   −t̂x t̂y   −t̂x t̂z
          −t̂y t̂x   1 − t̂y²   −t̂y t̂z
          −t̂z t̂x   −t̂z t̂y   1 − t̂z² ]    (11)

where Ê = E / √(Tr(E^T E)/2), Tr(·) denotes the trace of a matrix, and t̂12 = t12 / ||t12|| is the normalized translation vector. The normalized translation vector t̂12 = (t̂x, t̂y, t̂z)^T is thus easily obtained from the E matrix via expressions (10) and (11). Since (−Ê)^T(−Ê) = Ê^T Ê, the normalized matrix Ê so obtained may differ from the true one by a sign. In addition, since the matrix Ê^T Ê is quadratic in the vector t̂12, the vector computed from (11) is also ambiguous, i.e. both ±t̂12 satisfy (11). The method for resolving the signs of Ê and t̂12 is given later.
To compute the rotation matrix R12 between the first and second images, define

w_i = Ê_i × t̂12  (i = 1, 2, 3)    (12)

where Ê_i denotes the i-th row vector of the matrix Ê. Let r_i be the row vectors of the rotation matrix R12; then

r_i = w_i + w_j × w_k    (13)

where (i, j, k) is a cyclic combination of (1, 2, 3). The camera poses of the first two views are thus determined.
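Equations (11)-(13) translate directly into the following sketch; the sign choices made here are arbitrary, so, as described below, the four resulting (R, t) candidates must still be disambiguated by the positive-depth test. Names and input conventions are assumptions for the example.

```python
import numpy as np

def rot_trans_from_E(E):
    """Recover a normalized translation and the rotation R12 from an
    essential matrix E following eqs. (11)-(13)."""
    E_hat = E / np.sqrt(np.trace(E.T @ E) / 2.0)   # normalization of eq. (11)
    M = E_hat.T @ E_hat                            # equals I - t t^T
    t = np.zeros(3)
    k = int(np.argmin(np.diag(M)))                 # component with largest |t_k|
    t[k] = np.sqrt(max(0.0, 1.0 - M[k, k]))
    for i in range(3):
        if i != k:
            t[i] = -M[k, i] / t[k]                 # since M[k, i] = -t_k t_i
    w = np.cross(E_hat, t)                         # w_i = E_hat_i x t, eq. (12)
    R = np.vstack([w[0] + np.cross(w[1], w[2]),    # r_i = w_i + w_j x w_k,
                   w[1] + np.cross(w[2], w[0]),    # cyclic (i, j, k), eq. (13)
                   w[2] + np.cross(w[0], w[1])])
    return R, t
```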
Establish the world coordinate system on the first camera. From the imaging geometry, the Z coordinate of a space point X in the first camera coordinate system is

Z1 = f (f r1 − x2 r3)^T t̂12 / ((f r1 − x2 r3)^T x1)    (14)

and the other two coordinate components follow as

X1 = x1 Z1 / f,  Y1 = y1 Z1 / f    (15)

The coordinates of X in the second camera coordinate system are

X2 = R12 (X1 − t12)    (16)
Because of the sign ambiguities of Ê and t̂12, four different pairs (R12, t̂12) may be produced. According to the actual shooting situation, the reconstruction is correct only for the pair for which all reconstructed points lie in front of both cameras simultaneously, i.e. Z1 and Z2 are positive for all points (here, the coding-point centers commonly visible in the two views); that pair (R12, t̂12) is the correct solution.

Because the baseline length between the two cameras is unknown, the reconstruction algorithm can only recover the normalized translation vector t̂12; as is easily seen from equation (14), the reconstructed scene then differs from the actual scene by a fixed scale factor. Therefore a scale is placed in the scene; since the distance between its two marker points is known, the scale factor can be determined and the actual size of the measured object obtained.
On the basis of the two-view camera pose determination and the reconstruction of the coding-point centers, the camera poses of the remaining shots are determined in sequence. When the j-th image is processed, at least 6 coding points whose three-dimensional coordinates were reconstructed in the previous steps must appear in the image, i.e. correspondences X_i ↔ x_i, i = 1, …, L, L ≥ 6, between space points and image points are known. Introducing these constraints into the projection equation (8) gives

x_i = P_j X_i,  i = 1, …, L, L ≥ 6    (17)

Since each correspondence X_i ↔ x_i generates two linear equations, the 11 unknown elements of the projection matrix P_j of the j-th image can be solved from (17) by the least-squares method.
The 3 × 4 projection matrix P_j is written as

P_j = K [R_j | t_j] = [K R_j | K t_j] = [M | p4]    (18)

where M is the first 3 × 3 submatrix of P_j and p4 is its fourth column. The translation vector t_j = K⁻¹ p4 follows directly from (18).
Because the intrinsic parameter matrix is upper triangular and the rotation matrix is orthogonal, an RQ decomposition (a variant of the QR decomposition) of M yields the rotation matrix R_j. With the exterior attitude parameters R_j and t_j of the camera at the j-th shot estimated, the three-dimensional coordinates of the coding points that newly appear in the j-th image and can be matched by identity in the previous j − 1 images are reconstructed by optical triangulation. This completes the camera pose determination and coding-point coordinate computation for the j-th image; the next image is then processed, until all images have been processed.
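A sketch of this resectioning step under the standard homogeneous (DLT) formulation: the projection matrix is estimated from the 2D-3D correspondences of eq. (17) and then factored as in eq. (18). The helper below is illustrative only; input conventions are assumptions.

```python
import numpy as np

def resection_camera(X_world, x_img):
    """Estimate P_j from >= 6 reconstructed coding points and their image
    projections (eq. (17)), then factor P = [K R | K t] as in eq. (18)."""
    A = []
    for (X, Y, Z), (u, v) in zip(X_world, x_img):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)            # least-squares solution of A p = 0
    M, p4 = P[:, :3], P[:, 3]
    # RQ decomposition of M = K R (K upper triangular, R orthogonal),
    # computed with numpy's QR on the row/column-reversed matrix
    rev = np.flipud(np.eye(3))
    Q, U = np.linalg.qr((rev @ M).T)
    K = rev @ U.T @ rev
    R = rev @ Q.T
    S = np.diag(np.sign(np.diag(K)))    # force a positive diagonal of K
    K, R = K @ S, S @ R
    t = np.linalg.solve(K, p4)          # t = K^-1 p4, from eq. (18)
    return K / K[2, 2], R, t
```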
(3) Camera pose optimization
Owing to image noise and other factors, the image point obtained by projecting a three-dimensional space point X_i through the matrix P_j does not coincide with the actually detected coordinates x_ij of the image point of X_i in the j-th image. To further improve the system accuracy, the invention establishes, based on the bundle adjustment algorithm, an objective function minimizing the reprojection error

Σ_ij d(P_j X_i, x_ij)² → min    (19)
and globally optimizes the camera parameters and three-dimensional space-point coordinates obtained in the previous steps. The minimization is solved with the LM (Levenberg-Marquardt) algorithm. Since the previously computed X_i and P_j, which are already close to the true values, serve as initial values, the global optimization converges quickly.
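A compact sketch of such a bundle adjustment residual, refined with the Levenberg-Marquardt option of SciPy. The parameterization (a Rodrigues vector plus translation per camera) and all variable names are assumptions for the example, and a shared, fixed K is used for brevity.

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, n_cams, n_pts, K, cam_idx, pt_idx, obs):
    """Residual vector of eq. (19): for every observation (camera c,
    point p, measured pixel uv), the difference between the projected
    and the detected image coordinates."""
    cams = params[:6 * n_cams].reshape(n_cams, 6)   # Rodrigues vec + t
    pts = params[6 * n_cams:].reshape(n_pts, 3)
    res = []
    for c, p, uv in zip(cam_idx, pt_idx, obs):
        rvec, t = cams[c, :3], cams[c, 3:]
        theta = np.linalg.norm(rvec)
        X = pts[p]
        if theta < 1e-12:
            Xc = X + t
        else:
            k = rvec / theta                        # Rodrigues rotation
            Xc = (X * np.cos(theta) + np.cross(k, X) * np.sin(theta)
                  + k * k.dot(X) * (1.0 - np.cos(theta))) + t
        x = K @ Xc
        res.extend([x[0] / x[2] - uv[0], x[1] / x[2] - uv[1]])
    return np.asarray(res)

# result = least_squares(reprojection_residuals, x0, method='lm',
#                        args=(n_cams, n_pts, K, cam_idx, pt_idx, obs))
```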
Semi-automatic extraction of target curves
The graphical interface of the three-dimensional measurement software system for object curve structures developed by the invention is shown schematically in fig. 3: the left side displays the list of all captured image files; the lower right contains the two image display windows of the currently active image pair, whose displayed images are selected by clicking the image file list on the left; and the upper right is the three-dimensional display area of the reconstructed target curves.
The invention adopts the basic idea of energy optimization: a background algorithm automatically "fits" the approximate contour of a target curve, interactively outlined on the two active images, onto the corresponding target curve.
In the concrete implementation, the points input by the user near the target curve are first connected into a polyline (a polygon in the closed case); the DDA raster-scan conversion algorithm for line segments from computer graphics is then used to quickly obtain all pixel points the polyline passes through, and these are sampled at a fixed interval (in this embodiment, one of every two pixels), the samples being denoted v_i, i = 0, 1, …, n. Since image edge information must additionally be detected automatically with the Canny edge detection operator, the detected edge point set is denoted P. Further, v_ij, j = 1, …, 8, denote the eight-neighborhood pixel points of v_i; for convenience of description, v_i0 = v_i.
The invention establishes each point viAnd the following energy functions of the eight neighborhood points:
E(vij)=[αEtension(vij)+βEbend(vij)+γEimg(vij)+δEattr(vij)] (20)
where E_tension(v_ij), E_bend(v_ij), E_img(v_ij), E_attr(v_ij) are, respectively, the stretching energy, the bending energy, the image energy, and the energy produced by the attraction of the edge points at v_ij, and α, β, γ, δ are the weights of the energy terms, used to adjust their relative contributions. To balance the effects of the terms, all energy terms are normalized to the interval [0, 1]:
E_tension(v_ij) = |d̄ − |v_ij − v_{i−1}|| / max_{0≤j≤8} {|d̄ − |v_ij − v_{i−1}||},  with d̄ = (1/n) Σ_{i=1}^{n} |v_i − v_{i−1}|

E_bend(v_ij) = |v_{i−1} − 2 v_ij + v_{i+1}|² / max_{0≤j≤8} {|v_{i−1} − 2 v_ij + v_{i+1}|²}

E_img(v_ij) = (min_{0≤j≤8}(E_img(v_ij)) − E_img(v_ij)) / (max_{0≤j≤8}(E_img(v_ij)) − min_{0≤j≤8}(E_img(v_ij)))

E_attr(v_ij) = |v_ij − p_ij| / max_{0≤j≤8} {|v_ij − p_ij|}
where E_img = −|∇I(x, y)|², and p_ij ∈ P in E_attr(v_ij) is the edge point closest to v_ij. The edge-attraction energy E_attr, added to the stretching, bending and image energies, further promotes convergence of the point sequence toward the target curve. The search for p_ij is limited to a window centered at v_i; if no edge point lies in the window, E_attr(v_ij) = 0, j = 0, …, 8.
An iterative procedure moves each point of the sequence v_i, i = 0, 1, …, n, to the position where the energy (20) is minimal, so that the sequence finally locks near the image feature and forms a smooth target-curve point sequence. The points are then fitted with a uniform B-spline curve for the subsequent curve matching. For an open curve, its two end points are constrained to remain fixed throughout, preventing the point sequence from degenerating by contracting to a single point.
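The following sketch shows one greedy iteration of this scheme: each point may move to the 8-neighborhood position of lowest energy. It is simplified relative to the text: the image and edge-attraction terms are abstracted into a single external-energy callback, and the per-neighborhood normalization of the energy terms is omitted.

```python
import numpy as np

def snake_step(pts, ext_energy, alpha=1.0, beta=1.0):
    """One greedy pass of the contour optimization of eq. (20). pts is an
    (n+1) x 2 float array; ext_energy(q) plays the role of the image and
    edge-attraction energies; end points stay fixed (open-curve case)."""
    d_mean = np.mean(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    offsets = [np.array([dx, dy]) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    new_pts = pts.copy()
    for i in range(1, len(pts) - 1):
        best_q, best_e = pts[i], np.inf
        for off in offsets:
            q = pts[i] + off
            e_tension = abs(d_mean - np.linalg.norm(q - new_pts[i - 1]))
            e_bend = np.linalg.norm(new_pts[i - 1] - 2 * q + pts[i + 1]) ** 2
            e = alpha * e_tension + beta * e_bend + ext_energy(q)
            if e < best_e:
                best_q, best_e = q, e
        new_pts[i] = best_q
    return new_pts
```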
Automatic matching optimization of homonymous curves
(1) Measure of degree of match
A basic constraint that homonymous points between a pair of images taken by the camera from different positions and angles must satisfy is the epipolar constraint of equation (9). With the camera intrinsic parameters and the relative positions and poses known, the fundamental matrix F between the image pair is known. For two candidate matching points v1 and v2 in an image pair, the distance on the second image from v2 to the epipolar line of v1,

D2(v1, v2) = |v2^T F v1| / ||e F v1||    (21)

and the distance on the first image from v1 to the epipolar line of v2,

D1(v1, v2) = |v2^T F v1| / ||v2^T F e^T||    (22)

measure the degree of matching of v1 and v2, where e = [0 −1 0; 1 0 0; 0 0 0]. The invention establishes the optimization objective for point-pair matching between homonymous curves on the basis of the epipolar constraints (21) and (22).
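These two distances translate into the short helper below, assuming homogeneous pixel coordinates with the third component equal to 1:

```python
import numpy as np

e = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])    # the fixed matrix e of eqs. (21)-(22)

def epipolar_distances(v1, v2, F):
    """D1 and D2 of eqs. (21)-(22) for homogeneous pixel coordinates
    v1, v2 (3-vectors with last component 1) and fundamental matrix F."""
    num = abs(v2 @ F @ v1)
    D2 = num / np.linalg.norm(e @ F @ v1)     # v2 to the epipolar line of v1
    D1 = num / np.linalg.norm(v2 @ F @ e.T)   # v1 to the epipolar line of v2
    return D1, D2
```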
(2) Image curve resampling
For the fitted uniform B-spline curve p(u), u ∈ [0, 1], taking the discrete parameter interval Δu = 1/L, with L the accumulated chord length of the data points, yields the discrete pixel points v_i, i = 0, 1, …, N, on the uniform B-spline curve. A piecewise linear interpolation curve is constructed satisfying:

c(l_i) = v_i,  i = 0, …, N
c(l) = ((l_{i+1} − l) / (l_{i+1} − l_i)) v_i + ((l − l_i) / (l_{i+1} − l_i)) v_{i+1},  l_i ≤ l < l_{i+1}    (23)

where l_0 = 0.0 and l_i = Σ_{j=1}^{i} |v_j − v_{j−1}|; let L = Σ_{j=1}^{N} |v_j − v_{j−1}|. Without loss of generality, of the two homonymous image curves, the one with more pixel points is denoted c1(l) and the other c2(l). The discrete pixel points of c1(l) are v_k^(1), 0 ≤ k ≤ N1; those of c2(l) are v_j^(2), 0 ≤ j ≤ N2. Finding, for every pixel point on c1(l), l ∈ [0, L1], its correspondent on c2(l), l ∈ [0, L2], achieves sub-pixel matching on c2(l) and thereby high-precision three-dimensional curve reconstruction.
(3) Matching optimization
Dynamic programming is first employed to initially match discrete pixel points on the corresponding curve. The cumulative cost function of a corresponding point pair of the homonymous curve is defined as
C(v_k^(1), v_j^(2)) = D(v_k^(1), v_j^(2)) + min_{m ∈ G_kj} C(v_{k−1}^(1), v_m^(2))    (24)

where D(v_k^(1), v_j^(2)) = D1(v_k^(1), v_j^(2)) + D2(v_k^(1), v_j^(2)), and G_kj denotes all admissible values of m given that v_k^(1) and v_j^(2) are matched. Because the numbers of pixel points on the two homonymous image curves are unequal, the dynamic-programming match may be many-to-one, i.e. several pixel points on the longer curve c1 may correspond to the same pixel point on the shorter curve c2.
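A compact sketch of this initial matching by dynamic programming; C_local[k, j] stands for the epipolar cost D(v_k^(1), v_j^(2)), the ordering constraint is enforced by allowing only non-decreasing predecessor indices, and many-to-one assignments are permitted, as noted above. Names and conventions are assumptions for the example.

```python
import numpy as np

def dp_initial_match(C_local):
    """Initial point matching along two homonymous curves by dynamic
    programming (eq. (24)). C_local[k, j] is the cost of matching
    v_k on curve 1 with v_j on curve 2; returns the matched index j
    on curve 2 for every k on curve 1."""
    N1, N2 = C_local.shape
    C = np.full((N1, N2), np.inf)
    back = np.zeros((N1, N2), dtype=int)
    C[0] = C_local[0]
    for k in range(1, N1):
        for j in range(N2):
            m = int(np.argmin(C[k - 1, :j + 1]))   # best predecessor m <= j
            C[k, j] = C_local[k, j] + C[k - 1, m]
            back[k, j] = m
    match = np.zeros(N1, dtype=int)                # backtrack the best path
    match[-1] = int(np.argmin(C[-1]))
    for k in range(N1 - 1, 0, -1):
        match[k - 1] = back[k, match[k]]
    return match
```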
And after initial matching of the point pairs on the image curve is obtained, matching optimization of the curve is carried out. Based on equations (21) and (22), the present invention optimizes the following objective function to achieve c1、c2Exact matching of points on a curve
<math> <mrow> <mi>min</mi> <munderover> <mo>&Integral;</mo> <mn>0</mn> <msub> <mi>L</mi> <mn>1</mn> </msub> </munderover> <mfrac> <mrow> <mo>|</mo> <msub> <mi>c</mi> <mn>2</mn> </msub> <msup> <mrow> <mo>(</mo> <mi>&sigma;</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mi>T</mi> </msup> <mi>F</mi> <msub> <mi>c</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>l</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> <mrow> <mo>|</mo> <mo>|</mo> <mi>eF</mi> <msub> <mi>c</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>l</mi> <mo>)</mo> </mrow> <mo>|</mo> <mo>|</mo> </mrow> </mfrac> <mo>+</mo> <mfrac> <mrow> <mo>|</mo> <msub> <mi>c</mi> <mn>2</mn> </msub> <msup> <mrow> <mo>(</mo> <mi>&sigma;</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mi>T</mi> </msup> <mi>F</mi> <msub> <mi>c</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>l</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> <mrow> <mo>|</mo> <mo>|</mo> <msub> <mi>c</mi> <mn>2</mn> </msub> <msup> <mrow> <mo>(</mo> <mi>&sigma;</mi> <mrow> <mo>(</mo> <mi>l</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mi>T</mi> </msup> <mi>F</mi> <msup> <mi>e</mi> <mi>T</mi> </msup> </mrow> </mfrac> <mi>dl</mi> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>25</mn> <mo>)</mo> </mrow> </mrow> </math>
where $\sigma(l)$ is the mapping function to be solved, giving, for the point on curve $c_1$ with parameter $l$, its parameter value on curve $c_2$. Rewriting the integral of equation (25) in summation form gives

$$\min \sum_{k=0}^{N_1} \left( \frac{|c_2(\sigma(l_k))^T F c_1(l_k)|}{\| e F c_1(l_k) \|} + \frac{|c_2(\sigma(l_k))^T F c_1(l_k)|}{\| c_2(\sigma(l_k))^T F e^T \|} \right) \qquad (26)$$

where $l_0, l_1, \ldots, l_{N_1}$ are the parameter values on $c_1(l)$ corresponding to $v_k^{(1)}$, $0 \le k \le N_1$, respectively. Taking the coarse matching produced by the dynamic programming method as the initial iterate, the minimization problem (26) can be solved by the conjugate gradient method.
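The following sketch illustrates one way this refinement could be carried out, using scipy's conjugate gradient optimizer; it simplifies the two terms of (26) to the symmetric point-to-epipolar-line distance (which is what the $e$ normalizations compute) and assumes the fundamental matrix F, the chord-length parameters of $c_2$ (e.g. from chord_length_params above), and the initial $\sigma(l_k)$ values are given:

    import numpy as np
    from scipy.optimize import minimize

    def refine_match(c1_pts, c2_pts, l2, F, s0):
        """Refine the coarse matching by minimizing eq. (26) with CG.

        c1_pts: (N1+1, 2) pixel points of the longer curve c1.
        c2_pts: (N2+1, 2) pixel points of c2; l2 their chord-length parameters.
        F: 3x3 fundamental matrix of the image pair.
        s0: initial sigma(l_k) values from the dynamic-programming matching.
        """
        x1 = np.hstack([c1_pts, np.ones((len(c1_pts), 1))])  # homogeneous

        def c2_at(s):
            # linear (sub-pixel) interpolation of c2 at parameters s
            u = np.interp(s, l2, c2_pts[:, 0])
            v = np.interp(s, l2, c2_pts[:, 1])
            return np.stack([u, v, np.ones_like(u)], axis=1)

        def cost(s):
            x2 = c2_at(np.clip(s, l2[0], l2[-1]))
            Fx1 = x1 @ F.T       # epipolar lines of c1 points in image 2
            Ftx2 = x2 @ F        # epipolar lines of c2 points in image 1
            num = np.abs(np.sum(x2 * Fx1, axis=1))
            d1 = num / np.hypot(Fx1[:, 0], Fx1[:, 1])
            d2 = num / np.hypot(Ftx2[:, 0], Ftx2[:, 1])
            return np.sum(d1 + d2)

        return minimize(cost, s0, method="CG").x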
Three-dimensional reconstruction of curves
After the matching of all pixel points on the homonymous curves is completed, the space coordinates of the points can be reconstructed by the mature triangulation method of binocular stereo vision, because the camera intrinsic parameters are known and the position and posture of the camera at each shot of the image pair, i.e. the rotation matrix and translation vector relative to the world coordinate system, have already been calculated automatically. The specific calculation formula can be expressed as
$$x^{(j)} = f\,\frac{R_{11}^{(j)} X + R_{12}^{(j)} Y + R_{13}^{(j)} Z + T_x^{(j)}}{R_{31}^{(j)} X + R_{32}^{(j)} Y + R_{33}^{(j)} Z + T_z^{(j)}}, \qquad y^{(j)} = f\,\frac{R_{21}^{(j)} X + R_{22}^{(j)} Y + R_{23}^{(j)} Z + T_y^{(j)}}{R_{31}^{(j)} X + R_{32}^{(j)} Y + R_{33}^{(j)} Z + T_z^{(j)}}, \qquad j = 1, 2 \qquad (27)$$
where $f$ is the focal length of the camera, $x^{(j)}$ and $y^{(j)}$ are the two components of the pixel coordinates of the homonymous matching point in the $j$-th image, $R_{\cdot\cdot}^{(j)}$ are the components of the rotation matrix of the camera at the $j$-th shot relative to the world coordinate system, and $T_{\cdot}^{(j)}$ are the corresponding translation components. The 3 unknown components $(X, Y, Z)$ of the space coordinates of the point can then be solved by the least-squares method from the 4 equations in formula (27). Executing this solving process for each matching point pair on the homonymous curves completes the three-dimensional measurement of the point column of the whole target curve. After the three-dimensional measurement of all target curves is completed, parametric curve equations and surface equations of the model can be further constructed from these point columns.
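A minimal least-squares triangulation sketch of formula (27) follows; the argument conventions are illustrative only. Each projection equation is cleared of its denominator, giving the linear system $(f R_{\text{row}}^{(j)} - x^{(j)} R_3^{(j)}) \cdot P = x^{(j)} T_z^{(j)} - f T_{x}^{(j)}$ (and likewise for $y^{(j)}$):

    import numpy as np

    def triangulate(pix, R, T, f):
        """Solve the 4 linear equations of formula (27) for (X, Y, Z).

        pix: two (x, y) pixel coordinates of one homonymous point pair.
        R:   two 3x3 rotation matrices of the camera at each shot.
        T:   two translation vectors (Tx, Ty, Tz); f: focal length.
        """
        A, b = [], []
        for j in range(2):
            x, y = pix[j]
            A.append(f * R[j][0] - x * R[j][2])
            b.append(x * T[j][2] - f * T[j][0])
            A.append(f * R[j][1] - y * R[j][2])
            b.append(y * T[j][2] - f * T[j][1])
        P, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return P  # (X, Y, Z) in the world coordinate system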

Claims (5)

1. A method for three-dimensional measurement of an object using free shooting with a single digital camera, characterized in that it comprises seven steps: measurement preparation, image shooting, identification and positioning of coding points, camera pose determination, target curve extraction, automatic matching optimization of homonymous curves, and three-dimensional reconstruction of target curves. The specific method is as follows: first, the characteristic lines of the measured object, together with the key section control lines on the object surface required for digital model reconstruction, are marked so that they differ markedly in colour and brightness from the measured object, which facilitates their identification in the images; a scale ruler and a group of coding points are arranged around the measured object; then a digital camera is held by hand and a group of images of the measured object is obtained in a free shooting mode; from this group of images, the position and posture of the camera at each shot are automatically and accurately calculated, while a convenient interaction means is provided to the user, realizing semi-automatic extraction of the marked curves and optimized matching of homonymous curves in different images, whereby the three-dimensional point-column information of the marked curve structures is automatically calculated.
2. The method of claim 1, characterized in that, for identification of the coding points, each point in the designed set of coding points carries a different code; candidate coding-point targets are screened step by step using 5 constraint conditions: size, shape, ellipse-fitting residual, regional grey mean, and regional grey variance; the decoding process comprehensively considers the grey values of most pixels in a coding band, thereby suppressing noise and improving the robustness of coding-point identification; and the sub-pixel centre of each coding point is located by grey-value weighting within the target region (a minimal centroid sketch is given after the claims).
3. The method of claim 1, characterized in that camera pose determination exploits the identity uniqueness of the coding points to automatically establish correspondence between homonymous coding points in the images; from the pixel coordinates of the centres of at least 5 homonymous coding points in two images, the camera poses corresponding to the two images and the three-dimensional coordinates of the coding-point centres visible in both are recovered, and a strategy combining an incremental method with global optimization then automatically solves the camera positions and postures corresponding to all images (a pose-recovery sketch follows the claims).
4. The method of claim 1, characterized in that, for target curve extraction, the user picks points with the mouse near the same marked curve in the two images such that the connecting polyline roughly follows the corresponding image curve contour; the measurement software system then iteratively minimizes the energy of the image curve contour, so that the interactively sketched approximate contour is automatically and optimally fitted to the image curve (a simplified contour sketch follows the claims).
5. The method of claim 1, characterized in that automatic matching optimization of homonymous curves first obtains an initial matching of the pixel points on the corresponding curves by dynamic programming, then minimizes, by a nonlinear optimization method, the sum of the distances from all matching points on the corresponding curves to their respective epipolar lines, thereby achieving optimal matching of the homonymous target curve point columns in the images, from which the three-dimensional coordinates of the point columns are calculated.
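The following sketches are editorial illustrations, not part of the claims. First, for claim 2, a minimal grey-weighted centroid locator; the target detection, thresholding, and the 5 screening constraints are assumed to have run already, so only the sub-pixel centre computation is shown:

    import numpy as np

    def weighted_centroid(patch, origin=(0, 0)):
        """Sub-pixel centre of a coding point by grey-value weighting.

        patch:  2-D array of grey values covering the detected target region
                (mark assumed brighter than background; invert otherwise).
        origin: pixel coordinates of patch[0, 0] in the full image.
        """
        rows, cols = np.indices(patch.shape)
        w = patch.astype(float)
        w_sum = w.sum()
        cy = (rows * w).sum() / w_sum + origin[0]
        cx = (cols * w).sum() / w_sum + origin[1]
        return cx, cy  # sub-pixel centre (x, y)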
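For claim 3, a pose-recovery sketch that delegates to OpenCV's essential-matrix routines as a stand-in for the patent's own solver; pts1 and pts2 are hypothetical (N, 2) arrays of homonymous code-point centres (N ≥ 5) and K is the 3x3 intrinsic matrix, assumed known:

    import cv2
    import numpy as np

    def relative_pose(pts1, pts2, K):
        """Relative pose of an image pair from homonymous coding points."""
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t  # rotation and (unit-scale) translation of the second shot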
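For claim 4, a much simplified greedy stand-in for the contour energy minimization; the patent's exact energy terms are not spelled out here, so this sketch uses a generic smoothness-plus-edge-attraction energy with fixed endpoints:

    import numpy as np

    def point_energy(img_grad, pts, i, p):
        """Smoothness term minus edge attraction at candidate position p."""
        internal = np.sum((pts[i - 1] - p) ** 2) + np.sum((pts[i + 1] - p) ** 2)
        r = int(np.clip(round(p[0]), 0, img_grad.shape[0] - 1))
        c = int(np.clip(round(p[1]), 0, img_grad.shape[1] - 1))
        return internal - img_grad[r, c]

    def greedy_snake(img_grad, pts, iters=100):
        """Snap a user-sketched polyline onto a nearby image curve.

        img_grad: 2-D array of gradient magnitudes (high on edges).
        pts: (N, 2) array of user-picked (row, col) points near the curve.
        """
        pts = pts.astype(float)
        offs = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
        for _ in range(iters):
            moved = False
            for i in range(1, len(pts) - 1):      # endpoints stay fixed
                best, best_e = pts[i], point_energy(img_grad, pts, i, pts[i])
                for dr, dc in offs:
                    cand = pts[i] + (dr, dc)
                    e = point_energy(img_grad, pts, i, cand)
                    if e < best_e:
                        best, best_e, moved = cand, e, True
                pts[i] = best
            if not moved:                         # converged
                break
        return pts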
CNB2006101612744A 2006-12-19 2006-12-19 Method for making three-dimensional measurement of objects utilizing single digital camera to freely shoot Expired - Fee Related CN100430690C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101612744A CN100430690C (en) 2006-12-19 2006-12-19 Method for making three-dimensional measurement of objects utilizing single digital camera to freely shoot

Publications (2)

Publication Number Publication Date
CN1975323A true CN1975323A (en) 2007-06-06
CN100430690C CN100430690C (en) 2008-11-05

Family

ID=38125561

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101612744A Expired - Fee Related CN100430690C (en) 2006-12-19 2006-12-19 Method for making three-dimensional measurement of objects utilizing single digital camera to freely shoot

Country Status (1)

Country Link
CN (1) CN100430690C (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3799019B2 (en) * 2002-01-16 2006-07-19 オリンパス株式会社 Stereo shooting device and shooting method of stereo shooting device
CN1233984C (en) * 2004-11-11 2005-12-28 天津大学 Large-scale three dimensional shape and appearance measuring and splicing method without being based on adhesive mark
CN1308652C (en) * 2004-12-09 2007-04-04 武汉大学 Method for three-dimensional measurement of sheet metal part using single non-measuring digital camera

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739547B (en) * 2008-11-21 2012-04-11 中国科学院沈阳自动化研究所 Precise identification and position method of robust coding point in image under complex background
CN101630418B (en) * 2009-08-06 2012-10-03 白晓亮 Integrated method for measurement and reconstruction of three-dimensional model and system thereof
US11022433B2 (en) 2010-02-12 2021-06-01 Koninklijke Philips N.V. Laser enhanced reconstruction of 3D surface
CN102762142B (en) * 2010-02-12 2016-01-27 皇家飞利浦电子股份有限公司 The laser enhancing on 3D surface is rebuild
CN102762142A (en) * 2010-02-12 2012-10-31 皇家飞利浦电子股份有限公司 Laser enhanced reconstruction of 3d surface
CN101839692B (en) * 2010-05-27 2012-09-05 西安交通大学 Method for measuring three-dimensional position and stance of object with single camera
CN101839692A (en) * 2010-05-27 2010-09-22 西安交通大学 Method for measuring three-dimensional position and stance of object with single camera
CN101975552A (en) * 2010-08-30 2011-02-16 天津工业大学 Method for measuring key point of car frame based on coding points and computer vision
CN102679937A (en) * 2011-03-17 2012-09-19 镇江亿海软件有限公司 Ship steel plate dynamic three-dimension measurement method based on multi-camera vision
CN110942370B (en) * 2011-04-07 2023-05-12 电子湾有限公司 Descriptor and image based project model
CN110942370A (en) * 2011-04-07 2020-03-31 电子湾有限公司 Descriptor and image based item model
US9396587B2 (en) 2012-10-12 2016-07-19 Koninklijke Philips N.V System for accessing data of a face of a subject
CN103049731A (en) * 2013-01-04 2013-04-17 中国人民解放军信息工程大学 Decoding method for point-distributed color coding marks
CN103033171A (en) * 2013-01-04 2013-04-10 中国人民解放军信息工程大学 Encoding mark based on colors and structural features
CN103049731B (en) * 2013-01-04 2015-06-03 中国人民解放军信息工程大学 Decoding method for point-distributed color coding marks
CN103267516A (en) * 2013-02-27 2013-08-28 北京林业大学 Sample plot measuring technology by using digital camera as tool
CN103218851B (en) * 2013-04-03 2015-12-09 西安交通大学 A kind of segment reconstruction method of three-dimensional line segment
CN103218851A (en) * 2013-04-03 2013-07-24 西安交通大学 Segmental reconstruction method for three-dimensional line segment
CN103411532B (en) * 2013-08-02 2016-08-24 上海锅炉厂有限公司 The method measured is installed in the adapter of a kind of space
CN103411532A (en) * 2013-08-02 2013-11-27 上海锅炉厂有限公司 Method for mounting and measuring space connecting pipes
CN103714571B (en) * 2013-09-23 2016-08-10 西安新拓三维光测科技有限公司 A kind of based on photogrammetric single camera three-dimensional rebuilding method
CN103714571A (en) * 2013-09-23 2014-04-09 西安新拓三维光测科技有限公司 Single camera three-dimensional reconstruction method based on photogrammetry
CN105157609A (en) * 2015-09-01 2015-12-16 大连理工大学 Two-sets-of-camera-based global morphology measurement method of large parts
CN105157609B (en) * 2015-09-01 2017-08-01 大连理工大学 The global topography measurement method of heavy parts based on two groups of cameras
CN105180904A (en) * 2015-09-21 2015-12-23 大连理工大学 High-speed moving target position and posture measurement method based on coding structured light
CN105574886A (en) * 2016-01-28 2016-05-11 多拉维(深圳)技术有限公司 High-precision calibration method of handheld multi-lens camera
CN106960442A (en) * 2017-03-01 2017-07-18 东华大学 Based on the infrared night robot vision wide view-field three-D construction method of monocular
CN107080148A (en) * 2017-04-05 2017-08-22 浙江省海洋开发研究院 Processing of aquatic products system and its control method
CN107020545A (en) * 2017-04-30 2017-08-08 天津大学 The apparatus and method for recognizing mechanical workpieces pose
CN107835551B (en) * 2017-11-01 2019-07-23 中国科学院长春光学精密机械与物理研究所 The control method and device of lighting source power in 3 D scanning system
CN107835551A (en) * 2017-11-01 2018-03-23 中国科学院长春光学精密机械与物理研究所 The control method and device of lighting source power in 3 D scanning system
CN108871185A (en) * 2018-05-10 2018-11-23 苏州大学 Method, apparatus, equipment and the computer readable storage medium of piece test
CN108759665B (en) * 2018-05-25 2021-04-27 哈尔滨工业大学 Spatial target three-dimensional reconstruction precision analysis method based on coordinate transformation
CN108759665A (en) * 2018-05-25 2018-11-06 哈尔滨工业大学 A kind of extraterrestrial target reconstruction accuracy analysis method based on coordinate conversion
CN110567728B (en) * 2018-09-03 2021-08-20 创新先进技术有限公司 Method, device and equipment for identifying shooting intention of user
CN110567728A (en) * 2018-09-03 2019-12-13 阿里巴巴集团控股有限公司 Method, device and equipment for identifying shooting intention of user
CN109544649B (en) * 2018-11-21 2022-07-19 武汉珈鹰智能科技有限公司 Large capacity color coding point coding and its identification method
CN109544649A (en) * 2018-11-21 2019-03-29 武汉珈鹰智能科技有限公司 A kind of the coloud coding point design and its recognition methods of large capacity
CN111521127A (en) * 2019-02-01 2020-08-11 奥林巴斯株式会社 Measuring method, measuring apparatus, and recording medium
CN111521127B (en) * 2019-02-01 2023-04-07 仪景通株式会社 Measuring method, measuring apparatus, and recording medium
CN112241995A (en) * 2019-07-18 2021-01-19 重庆双楠文化传播有限公司 3D portrait modeling method based on multiple images of single digital camera
CN110250624A (en) * 2019-08-01 2019-09-20 西安科技大学 A kind of production method of Custom Prosthesis mask bracket
CN111127561A (en) * 2019-12-05 2020-05-08 农芯(南京)智慧农业研究院有限公司 Multi-view image calibration device and method
CN111127561B (en) * 2019-12-05 2023-03-24 农芯(南京)智慧农业研究院有限公司 Multi-view image calibration device and method
CN111735409A (en) * 2020-06-04 2020-10-02 深圳职业技术学院 Soft robot arm shape measuring method, system and storage medium
CN112781521A (en) * 2020-12-11 2021-05-11 北京信息科技大学 Software operator shape recognition method based on visual markers
CN114440834A (en) * 2022-01-27 2022-05-06 中国人民解放军战略支援部队信息工程大学 Object space and image space matching method of non-coding mark
CN114440834B (en) * 2022-01-27 2023-05-02 中国人民解放军战略支援部队信息工程大学 Object space and image space matching method of non-coding mark

Also Published As

Publication number Publication date
CN100430690C (en) 2008-11-05

Similar Documents

Publication Publication Date Title
CN1975323A (en) Method for making three-dimensional measurement of objects utilizing single digital camera to freely shoot
CN107133989B (en) Three-dimensional scanning system parameter calibration method
CN106780619B (en) Human body size measuring method based on Kinect depth camera
CN109598762B (en) High-precision binocular camera calibration method
CN107767442B (en) Foot type three-dimensional reconstruction and measurement method based on Kinect and binocular vision
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
CN1162681C (en) Three-D object recognition method and pin picking system using the method
JP4245963B2 (en) Method and system for calibrating multiple cameras using a calibration object
JP5132832B1 (en) Measuring apparatus and information processing apparatus
CN105046746B (en) A kind of digital speckle human body three-dimensional fast scanning method
Furukawa et al. Accurate camera calibration from multi-view stereo and bundle adjustment
CN107218928B (en) A kind of complexity multi- piping branch system detection method
CN115345822A (en) Automatic three-dimensional detection method for surface structure light of aviation complex part
CN107767456A (en) A kind of object dimensional method for reconstructing based on RGB D cameras
CN111429494B (en) Biological vision-based point cloud high-precision automatic registration method
CN107633532B (en) Point cloud fusion method and system based on white light scanner
CN111028280B (en) # -shaped structured light camera system and method for performing scaled three-dimensional reconstruction of target
CN111402411A (en) Scattered object identification and grabbing method based on line structured light
JP2013178656A (en) Image processing device, image processing method, and image processing program
CN111524195B (en) Camera calibration method in positioning of cutting head of heading machine
US9245375B2 (en) Active lighting for stereo reconstruction of edges
CN112232319A (en) Scanning splicing method based on monocular vision positioning
CN108154536A (en) The camera calibration method of two dimensional surface iteration
CN108447096B (en) Information fusion method for kinect depth camera and thermal infrared camera
CN111127613A (en) Scanning electron microscope-based image sequence three-dimensional reconstruction method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081105

Termination date: 20121219