CN1975323A - Method for three-dimensional measurement of objects by free shooting with a single digital camera

Info

Publication number: CN1975323A (granted as CN100430690C)
Application number: CN 200610161274 (priority CNB2006101612744A)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 张丽艳, 郑建冬, 张辉, 卫炜
Applicant and assignee: Nanjing University of Aeronautics and Astronautics
Legal status: Granted; Expired - Fee Related


Abstract

A method for three-dimensional measurement of objects by free shooting with a single digital camera, belonging to the field of testing technology. The method comprises seven steps: measurement preparation, image capture, identification and localization of coded points, camera pose determination, target curve extraction, automatic matching optimization of corresponding curves, and three-dimensional reconstruction of the target curves. It is characterized in that the three-dimensional measurement uses one digital camera, one computer, a set of coded points and a scale bar. The target curves to be measured on the object are first marked to facilitate image recognition, and the scale bar and a set of coded points are placed around the measured object. A hand-held digital camera is then used to capture a set of images of the object in a free-shooting manner. From this set of images, the camera position and orientation of each shot are computed automatically and accurately. Convenient interactive tools allow the user to semi-automatically extract the marked curves and to optimally match corresponding curves in different images, from which the three-dimensional point sequences on the marked curve structures are computed automatically.

Description

Method for Three-Dimensional Measurement of Objects by Free Shooting with a Single Digital Camera

1. Technical Field

Three-dimensional object measurement belongs to the technical field of measurement and testing. The corresponding code in the International Patent Classification is G01B.

2. Background Art

The rapidly growing demand for 3D measurement in reverse engineering, industrial inspection, quality control and other fields has greatly promoted the development of 3D measurement technology, and various measurement methods based on optical, acoustic, electromagnetic and mechanical-contact principles have emerged, such as coordinate measuring machines (CMMs), laser scanners and structured-light measuring systems. Among them, the CMM uses mechanical contact sensing and achieves high measurement accuracy, but it generally requires a dedicated measurement room and measuring table, imposes strict requirements on the measurement environment, has a limited measurement range and low efficiency, and is unsuitable for soft objects. Laser line scanning and structured-light projection are currently the mainstream methods for measuring 3D geometric shape: by projecting laser or structured light onto the object surface, dense point-cloud data of the model surface can be acquired quickly, but both are limited by the scanning range and by specular highlights on the object surface, and laser scanners and structured-light systems are expensive. For mechanical products in particular, which usually contain distinct structural features, it is often necessary to capture the corner points, edges and certain control lines on the model surface that play a key role in reconstructing a digital model of the measured object. Structured-light and laser line scanning methods, however, output whole-surface point clouds or meshes: the amount of output data is very large, yet the required edge features and key section control lines cannot be obtained directly and explicitly. Moreover, the data produced by these methods are usually good in smooth, flat regions of the model but poor precisely at the critical corners and edges.

In order to measure 3D geometric information with simpler hardware and in a more flexible and convenient way, accurately reconstructing the position and shape of objects from multiple images taken with a single digital camera has become a research hotspot in recent years. Among existing systems, the TriTop system of the German company GOM can already perform relatively high-precision 3D coordinate localization from freely shot images of a single digital camera. A set of coded points and length scale bars is placed in the scene, easily identifiable marker points are pasted on the regions of interest, and the user then freely takes multiple images with a hand-held digital camera, the images overlapping one another to a certain extent. After all images are loaded into the accompanying software, the system automatically computes, in one pass, the camera position and orientation of each shot and the spatial coordinates of all marker points. The system has been commercialized and is sold in China; however, it can currently only localize the spatial coordinates of a specific type of marker target (a white dot surrounded by a black ring) and is generally used to register multi-view measurement data acquired with other measurement methods. It cannot perform 3D measurement of curve targets and cannot be used to reconstruct 3D digital models of products with complex geometric shapes.

3. Summary of the Invention

The invention aims to provide, with simple hardware, a practical measurement method for industrial product measurement and modeling that is easy to implement, reasonably accurate and low-cost. To this end, the invention marks the feature lines of the measured object and certain key section control lines on the object surface needed for digital model reconstruction, so that they differ clearly in color and brightness from the object itself and are easy to recognize in images. A scale bar and a set of specially designed coded points are placed around the measured object, and a hand-held digital camera is used to capture a set of images of the object in a free-shooting manner. From this set of images the camera position and orientation of each shot are computed automatically and accurately. At the same time, convenient interactive tools allow the user to semi-automatically extract the marked curves and to optimally match corresponding curves in different images, from which the three-dimensional point sequences of the marked curve structures are computed automatically.

Based on the above scheme, a practical system has been developed that is flexible and convenient to use and suitable for measuring 3D curve structures on objects of different sizes; it is well suited to building 3D digital models of mechanical products from physical parts. The 3D measurement method proposed by the invention is characterized in that the measurement uses only one digital camera and one ordinary personal computer, supplemented by a set of coded points and a scale bar; no complex measurement hardware is required, nor any tedious calibration of the measurement system. The method directly and explicitly produces the curve data needed for digital model reconstruction of the object, avoiding data redundancy and facilitating efficient model reconstruction. All measurement data are automatically expressed in a single world coordinate system, which avoids the difficulty, present in other measurement methods, of registering data from multiple measurements, as well as the accumulated error introduced by merging multiple data sets. The invention comprises the main steps of measurement preparation, image capture, camera pose determination, target curve extraction, automatic matching optimization of corresponding curves, and 3D reconstruction of the target curves.

Measurement Preparation and Image Capture

Measurement preparation involves three main tasks. 1) According to the needs of digital model reconstruction, the target curves (generally the boundary lines of natural surface patches, key section control lines, etc.) are marked so that they differ clearly in color and brightness from the measured object, which facilitates image recognition. 2) A number of coded points are arranged in the measurement area. Each coded point carries a unique identity code and can be identified quickly and reliably in different images. Both marker points and coded points can simply be generated as patterns on a computer and then printed; glued onto cardboard, wood or similar backing, the coded points can be reused. 3) A scale bar carrying two coded points, whose center-to-center distance is known, is placed in the measurement scene. The scale bar provides the actual size of the measured object; without it, the 3D structure can only be recovered up to an unknown scale factor. After these preparations, the hand-held digital camera is used to photograph the measured object from multiple angles. The images must overlap to a certain extent: each image must share at least five commonly visible coded points and some target curves with at least one other image.

Identification and Localization of Coded Points

The coded point pattern consists of a central white dot, a middle black ring and an outer ring. The outer ring is divided into 15 equal sectors, each of which is black or white; black represents the binary digit "0" and white the binary digit "1". This outer ring is called the "coding band". Every point in a set of coded points carries a different code. Using this code, the identity of each point can be recognized reliably in different images, and the correspondence of the same coded point across images can be established automatically. It is precisely from the positions of these corresponding coded points in the images and their correspondence across multiple images that the position and orientation of the camera for each free shot can be computed automatically.

Because the central circle of a coded point appears as an ellipse after CCD imaging, the invention first applies the Canny operator to segment the image and extract contours representing different regions, and then filters the candidate coded-point targets step by step according to five constraints -- contour size, shape, ellipse-fitting residual, mean gray level of the region and gray-level variance of the region -- in order to extract the coded-point targets.

Once a coded-point target has been located, it must be decoded, i.e. its specific identity must be determined. Decoding is based on the gray levels of the sectors of the coding band. The invention fits an ellipse through the middle of the coding band and applies a median filter over a linear window at each pixel on this ellipse; this takes into account the gray values of most pixels in the coding band and eliminates the influence of isolated noise. Processing results on a large number of images taken in real scenes show that this method is very effective in improving the robustness of coded-point identification.

Finally, the center coordinates of the coded point are determined with sub-pixel accuracy from the gray values of the pixels inside the identified central circular region.

Camera Position and Pose Determination

From the pixel coordinates of at least five corresponding coded-point centers in two images, the fundamental matrix between the two images is first computed. From the camera intrinsic parameters and the fundamental matrix, the camera poses corresponding to the two images and the 3D coordinates of the coded-point centers visible in both images are then recovered. Next, using the correspondences between the recovered 3D points and the coded points in a third image, the camera pose of the third image is solved, yielding additional 3D coded-point centers, and the next image is processed in the same way. This proceeds incrementally until all camera poses and all 3D coded-point centers have been obtained. Finally, bundle adjustment is applied to optimize all camera parameters and all 3D coded-point centers simultaneously, further improving accuracy. Combining this incremental procedure with global optimization makes the algorithm both efficient and capable of high-accuracy camera localization.

Once a set of captured images is read in, the measurement system automatically computes and records the camera position and orientation of each shot. Determining the camera position and orientation of each shot amounts to determining the position and orientation of each image in a single unified world coordinate system, so that the target curves later reconstructed from different image pairs lie directly in the same coordinate system, with no data registration required.

Semi-Automatic Extraction of Target Curves

For the image pair currently selected and displayed in the two image windows, the user clicks points near the same marked curve visible in both images, so that the polylines connecting these points roughly follow the corresponding image curves. Based on the principle of energy optimization, the measurement system then automatically snaps each rough outline onto the image curve. In this semi-automatic extraction the user only needs to pick points one after another near the image curve, which is simple and easy; the good initial search position greatly increases the stability of target curve extraction and of the subsequent automatic matching optimization, while the optimal fitting guarantees the extraction accuracy of the target curve.

Automatic Matching Optimization of Corresponding Curves

After a pair of corresponding target curves has been extracted from an image pair, a key problem is to establish the correspondence between the pixels of the two curves. Once the image points correspond, their spatial coordinates can be reconstructed by stereo triangulation.

According to the basic theory of stereo vision, corresponding points in an image pair taken from different positions and angles must satisfy the epipolar constraint. For two candidate corresponding points v_1 and v_2 in an image pair, the invention measures how well v_1 and v_2 match by the sum of the distance from v_2 to the epipolar line of v_1 in the second image and the distance from v_1 to the epipolar line of v_2 in the first image. In addition, the point sequences of two corresponding target curves must satisfy an ordering constraint: an ordered sequence of points on one curve must correspond to an ordered sequence of points on the other curve. Based on this analysis, the invention first uses dynamic programming to obtain an initial matching of the discrete pixels on the two corresponding curves, and then further optimizes the curve matching. Let the two corresponding image curves be represented by the parametric equations c_1(l) and c_2(l); the invention optimizes the following objective function to achieve accurate matching of the points on c_1 and c_2:

\min \int_0^{L_1} \left[ \frac{|c_2(\sigma(l))^T F\, c_1(l)|}{\| e F c_1(l) \|} + \frac{|c_2(\sigma(l))^T F\, c_1(l)|}{\| c_2(\sigma(l))^T F e^T \|} \right] dl                    (28)

where e = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, F is the fundamental matrix between the image pair, which is known once the camera position and orientation of each shot have been determined, and σ(l) is the mapping function to be determined, giving the parameter value on curve c_2 of the point of c_1 with parameter l.

Three-Dimensional Reconstruction of the Target Curves

After all pixels on a pair of corresponding curves have been matched, and since the camera positions and orientations of the image pair have already been computed automatically, the spatial coordinates of these points can be reconstructed with the well-established triangulation method of binocular stereo vision, completing the reconstruction of the whole curve.

The invention has the outstanding advantages of simple measurement hardware (one digital camera, one ordinary personal computer, one scale bar and a set of printed coded points), a very flexible measurement mode (free shooting), an unrestricted measurement range, automatic registration of the measurement data from all viewing angles, non-redundant output data, ease of use and low cost. It can be used not only for localizing spatial points but also, according to the needs of mechanical product measurement and modeling, for measuring 3D information such as edges, feature lines and key section control lines on the measured object, and therefore has broad application prospects in reverse engineering, product quality inspection and other fields.

Description of the Drawings

Figure 1: Basic flowchart of the measurement method proposed by the invention.

Figure 2: Schematic diagram of coded points. Figure 2(a) shows the structure of a coded point, consisting of a central white dot, a middle black ring and an outer ring; the outer ring is divided into 15 equal sectors, and the identity of the point is determined by the color of each sector, black representing the binary digit "0" and white the binary digit "1". Figure 2(b) shows three example coded points.

Figure 3: Schematic layout of the graphical interface of the measurement system software of the embodiment. 1. Menu area; 2. Icon toolbar; 3. List of image files; 4. Display of one image of the currently active image pair; 5. Display of the other image of the currently active image pair; 6. 3D graphics area displaying the reconstructed target curves. The images currently shown in windows 4 and 5 are selected by clicking the image file list in area 3. After the user interactively outlines the approximate shape of a pair of corresponding curves in image windows 4 and 5, following the marked curves in the images, the system automatically computes the 3D point sequence of the curve and displays it in the 3D graphics area.

Detailed Description of the Embodiments

An embodiment of the 3D curve structure measurement method proposed by the invention is described as follows. The digital camera is a Nikon manual-focus digital camera with a built-in flash and a resolution of 4256 × 2848; the computer is a Pentium IV PC with a 2.8 GHz CPU and 512 MB of memory; the measurement software system is implemented on the Visual C++ 6.0 platform.

The specific implementation and principle of the invention are described with reference to Figure 1. Before measurement, certain preparations are made: the target curves to be measured on the object (generally the feature lines of the object and certain key section control lines on the object surface needed for digital model reconstruction) are marked so that they differ clearly in color and brightness from the object itself, to facilitate image recognition; and a scale bar and a set of specially designed coded points are placed around the measured object, every point in the set carrying a different code, i.e. each coded point has a unique identity. After these preparations, a hand-held digital camera is used to capture a set of images of the object in a free-shooting manner; the images must overlap to a certain extent, i.e. each image must share at least five commonly visible coded points and some target curves with at least one other image. From this set of images the measurement system automatically identifies and precisely localizes the coded points in each image, and then automatically and accurately computes the camera position and orientation of each shot. With simple interaction the user semi-automatically extracts a marked curve in the currently active image pair; the measurement system then automatically performs the optimal matching of the corresponding image curves and computes the 3D point sequence of the marked curve structure. If unprocessed target curves remain, the semi-automatic extraction, automatic matching and 3D curve reconstruction are repeated for the next target curve (which may appear in a different image pair) until the 3D reconstruction of all target curves is complete. The specific implementation of the main steps in Figure 1 is described in detail below.

Coded Point Identification

The 3D measurement method of the invention is based on analyzing a set of captured images, and the first task is to identify the coded points shown in Figure 2. The center of each coded point is a circular "target point" surrounded by an annular "coding band". The coding band is divided by angle into 15 equal sectors of 24° each, each sector corresponding to one binary bit; white is the foreground color and corresponds to the binary digit "1", black is the background color and corresponds to the binary digit "0". For each coded point there are 15 possible binary readings (one per starting sector), and the decimal value of the smallest of these 15 binary numbers is taken as the ID of the coded point.
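
As an illustration of this rotation-invariant ID, the following sketch (an assumed implementation for illustration, not code from the patent) reads the 15 sector bits in order and returns the smallest value over all cyclic rotations:

    def code_point_id(bits):
        """bits: list of 15 binary digits (0/1) read around the coding band.
        Returns the smallest decimal value over all 15 cyclic rotations,
        which is the rotation-invariant ID described in the text."""
        assert len(bits) == 15
        values = []
        for s in range(15):
            rotated = bits[s:] + bits[:s]
            values.append(int("".join(str(b) for b in rotated), 2))
        return min(values)

    # The same physical point read from two different start sectors
    # yields the same ID.
    print(code_point_id([1, 0, 1] + [0] * 12) == code_point_id([0] * 12 + [1, 0, 1]))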

The automatic coded-point detection algorithm of the invention comprises three main stages: (1) extraction of coded-point targets, i.e. finding the "target points" in the image; (2) determination of the unique identity of each coded point from the information on its coding band, i.e. decoding; (3) sub-pixel localization of the coded-point center.

(1) Coded-point target extraction

The central circle of a coded point appears as an ellipse after CCD imaging. Therefore the Canny operator is first applied to segment the image and extract contours representing different regions, and the coded-point targets are then extracted by step-by-step filtering. Possible target points are first filtered according to the size and shape of the marker target; closed contours satisfying the following conditions enter the further recognition process:

P_min ≤ P ≤ P_max                                           (1)

1 ≤ P²/(4πA) ≤ 1.5                                          (2)

where P and A are the perimeter and area of the closed contour, and P_min and P_max are the minimum and maximum thresholds on the contour perimeter. Condition (1) limits the size of the closed contour, and condition (2) measures how close it is to a circle.

For a closed contour satisfying (1) and (2), an ellipse is fitted by least squares; the contour is accepted as the central circle of a candidate coded point only if the fitting residual ε_eli satisfies the given tolerance ε_τ, i.e.

ε_eli ≤ ε_τ                                              (3)

After the least-squares ellipse fitting, all elliptical contours in the image have been found. In real scenes, however, there are often contours that are not marker targets but have an elliptical or nearly elliptical shape and are therefore mistaken for marker targets. Because the marker targets used in this method have a white foreground and a black background, the contrast between the two is strong; this is a distinctive feature that distinguishes marker targets from other, non-marker targets. Non-coded-point targets are therefore further excluded using this gray-level property. Since the ellipse-fitting criterion (3) has already been satisfied, the regions of the inner central ellipse and of the black ring can be determined. Let M_I denote the mean gray level of the interior of the central white dot and M_O the mean gray level of the black ring region; then M_I and M_O should satisfy:

M_I ≥ M_t,    M_O ≤ M_t,    M_I − M_O ≥ ΔM_t                    (4)

where M_t is the threshold separating foreground from background gray levels, and ΔM_t is the minimum required difference between the foreground and background gray levels.

In addition, the gray-level variance V_I of the interior of the central white dot and the gray-level variance V_O of the black ring region are constrained to satisfy:

V_I ≤ δ_I,    V_O ≤ δ_O                    (5)

where δ_I and δ_O are the maximum allowed gray-level variances. Condition (5) requires the center of a coded point to have a certain gray-level uniformity. A contour that satisfies conditions (1)–(5) enters the decoding stage.
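
The following sketch illustrates how conditions (1)–(5) might be chained on OpenCV contours; the threshold values and the helper functions are illustrative assumptions, not values or code from the patent.

    import cv2
    import numpy as np

    def candidate_code_points(gray, p_min=30, p_max=600, eps_tau=1.0,
                              m_t=128, dm_t=60, delta_i=400, delta_o=400):
        """Filter Canny contours with conditions (1)-(5); thresholds are
        illustrative defaults only. Returns fitted ellipses of candidates.
        (OpenCV 4.x calling convention for findContours.)"""
        edges = cv2.Canny(gray, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
        candidates = []
        for cnt in contours:
            P = cv2.arcLength(cnt, True)
            A = cv2.contourArea(cnt)
            if not (p_min <= P <= p_max and A > 0):          # condition (1)
                continue
            if not (1.0 <= P * P / (4 * np.pi * A) <= 1.5):  # condition (2)
                continue
            if len(cnt) < 5:
                continue
            ellipse = cv2.fitEllipse(cnt)                    # least-squares fit
            if ellipse_residual(cnt, ellipse) > eps_tau:     # condition (3)
                continue
            m_i, v_i = region_stats(gray, ellipse, inner=True)    # white dot
            m_o, v_o = region_stats(gray, ellipse, inner=False)   # black ring
            if m_i >= m_t and m_o <= m_t and m_i - m_o >= dm_t \
               and v_i <= delta_i and v_o <= delta_o:        # conditions (4)-(5)
                candidates.append(ellipse)
        return candidates

    # ellipse_residual() and region_stats() are hypothetical helpers that would
    # compute the ellipse-fitting residual and the mean/variance of the gray
    # levels inside the dot and over the black ring, respectively.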

(2) Coded-point decoding

The coded-point decoding algorithm proposed by the invention is implemented in the following steps:

Step 1: Fit the outer contour ellipse of the central dot of the coded point (denoted ellipse A), the outer contour ellipse of the middle black ring (denoted ellipse B), and the outer contour ellipse of the ring containing the white sectors (denoted ellipse C). Then fit an ellipse D lying midway between ellipses B and C: its center and rotation angle are the same as those of B and C, and its major and minor axes are the means of the major and minor axes of B and C, respectively. An ellipse rasterization algorithm is then used to obtain the pixel coordinates of the points on ellipse D.

Step 2: Compute the median gray value of all pixels inside A as the foreground gray level, and the median gray value of all pixels in the region between A and B as the background gray level. The mean of the foreground and background gray levels is used as the threshold for determining the binary value of each bit of the coded point.

Step 3: For any pixel T_D on ellipse D, draw a ray from the ellipse center through T_D, and denote its intersections with ellipses B and C by T_B and T_C. Sort the gray values of all pixels on the segment T_B T_C and take the middle one as the new gray value of T_D.

Step 4: Apply the inverse affine transformation of equation (6) to the points of ellipse D, so that ellipse D corresponds to a unit circle; the gray value of each point on the unit circle is the new gray value of the corresponding point of ellipse D.

X' = \begin{pmatrix} a^{-1} & 0 \\ 0 & b^{-1} \end{pmatrix} \begin{pmatrix} \cos α & \sin α \\ -\sin α & \cos α \end{pmatrix} (X - X_o)                    (6)

where X' is the coordinate of the point on the unit circle corresponding to T_D, X is the coordinate of T_D, X_o is the coordinate of the center O of ellipse D, a and b are the lengths of the major and minor axes of ellipse D, and α is the rotation angle of ellipse D.
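
A minimal sketch of transformation (6), mapping sampled points of ellipse D towards the unit circle (the function itself is illustrative; a and b are the axis lengths as defined above, and the angular decoding is unaffected by a constant radial scale):

    import numpy as np

    def ellipse_to_unit_circle(X, Xo, a, b, alpha):
        """Inverse affine transform of Eq. (6). X: (N, 2) array of pixel
        coordinates on ellipse D; Xo: its center; a, b: axis lengths;
        alpha: rotation angle in radians. Returns the transformed points."""
        R = np.array([[np.cos(alpha),  np.sin(alpha)],
                      [-np.sin(alpha), np.cos(alpha)]])
        S = np.diag([1.0 / a, 1.0 / b])
        return (S @ R @ (np.asarray(X) - np.asarray(Xo)).T).T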

Step 5: Binarize the pixels on the unit circle and take one of the black/white transition points as the starting point.

Step 6: Starting from the starting point, treat each 24° arc of the unit circle as one binary bit and compute the average gray value of all pixels within each bit. If the average gray value of a bit exceeds the threshold, the bit takes the binary value "1"; otherwise it takes "0". This yields one binary code of the coded point. The smallest decimal number corresponding to this binary code (over its cyclic readings, as defined above) is the ID of the coded point.

Step 3 above is equivalent to applying a median filter to point T_D over a linear window, the pixels in the window being those on the segment T_B T_C. Using the median-filtered gray values on ellipse D to determine the binary value of each bit takes into account the gray values of all pixels across the coding band and eliminates the influence of isolated noise. Processing results on a large number of images taken in real scenes show that this method is very effective in improving the robustness of coded-point identification.

(3) Coded-point center calculation

Equation (7) is used for sub-pixel localization of the coded-point center:

x_c = \frac{\sum_j \sum_i i \cdot I_{i,j}}{\sum_j \sum_i I_{i,j}}, \qquad y_c = \frac{\sum_j \sum_i j \cdot I_{i,j}}{\sum_j \sum_i I_{i,j}}                    (7)

where (x_c, y_c) are the coordinates of the coded-point center and I_{i,j} is the gray value of pixel (i, j) inside the central circular region.
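
A small sketch of the gray-weighted centroid of Eq. (7) on a cropped patch (the mask/patch handling is an illustrative assumption):

    import numpy as np

    def subpixel_center(patch, mask):
        """Gray-weighted centroid (Eq. 7). patch: 2D gray-level array covering
        the central circular region; mask: boolean array of the same shape
        selecting the pixels inside the fitted central ellipse."""
        I = np.where(mask, patch.astype(float), 0.0)
        jj, ii = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]  # row j, column i
        total = I.sum()
        x_c = (ii * I).sum() / total
        y_c = (jj * I).sum() / total
        return x_c, y_c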

Automatic Camera Pose Determination

In homogeneous coordinates, the projection x of a 3D point X onto the camera image plane can be expressed as:

x=K[R|t]X=PX                                         (8)x=K[R|t]X=PX (8)

where K is the camera intrinsic parameter matrix, R and t are the rotation matrix and translation vector from the world coordinate system to the camera coordinate system, and P is the 3 × 4 projection matrix.

Suppose the camera takes two images of the same scene from two different positions and orientations, with rotation matrix R_12 and non-zero translation vector t_12 between the two camera poses. Epipolar geometry then gives the following constraint between the two images:

x_2^T F x_1 = 0                    (9)

where x_1 and x_2 are the projections of the 3D point X in the first and second images, respectively, and F is the 3 × 3 fundamental matrix; F x_1 is the epipolar line in the second image on which the corresponding point x_2 must lie.

Given the correspondences of the coded-point centers between the two images (at least N ≥ 5 corresponding pairs), the MLESAC (Maximum Likelihood Estimation SAmple Consensus) method is first used to compute the fundamental matrix F between the two images.

From the intrinsic parameters marked on the camera, such as the focal length, an initial intrinsic parameter matrix K can be constructed (the manufacturer-specified values are used only as initial values; the measurement system refines them in the subsequent optimization). Then, from the fundamental matrix F, the essential matrix E between the two images can be computed as

E = K^T F K                    (10)
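
A sketch of this step using OpenCV (OpenCV's RANSAC-based estimator is used here in place of the MLESAC method named in the text; this is an illustrative substitution):

    import cv2
    import numpy as np

    def fundamental_and_essential(pts1, pts2, K):
        """pts1, pts2: (N, 2) arrays of corresponding coded-point centers
        (N >= 8 for the RANSAC estimator); K: 3x3 intrinsic matrix.
        Returns the fundamental matrix F and the essential matrix of Eq. (10)."""
        F, inlier_mask = cv2.findFundamentalMat(np.asarray(pts1, dtype=np.float64),
                                                np.asarray(pts2, dtype=np.float64),
                                                cv2.FM_RANSAC)
        E = K.T @ F @ K
        return F, E, inlier_mask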

From the definition of the essential matrix, E = [t]_× R (where [·]_× denotes the antisymmetric matrix of a vector), and using the orthogonality of the rotation matrix, it is easy to derive

\hat{E}^T \hat{E} = \begin{pmatrix} 1 - \hat{t}_x^2 & -\hat{t}_x \hat{t}_y & -\hat{t}_x \hat{t}_z \\ -\hat{t}_y \hat{t}_x & 1 - \hat{t}_y^2 & -\hat{t}_y \hat{t}_z \\ -\hat{t}_z \hat{t}_x & -\hat{t}_z \hat{t}_y & 1 - \hat{t}_z^2 \end{pmatrix}                    (11)

where \hat{E} = E / \sqrt{Tr(E^T E)/2}, Tr(·) denotes the trace of a matrix, and \hat{t}_{12} = t_{12} / \|t_{12}\| is the normalized translation vector. Thus, from the matrix E obtained by (10) and from (11), the normalized translation vector \hat{t}_{12} = (\hat{t}_x, \hat{t}_y, \hat{t}_z)^T is easily obtained. Since (−\hat{E})^T(−\hat{E}) = \hat{E}^T \hat{E}, the normalized matrix \hat{E} may differ from the actual one by a sign. Moreover, since every element of \hat{E}^T \hat{E} is quadratic in the components of \hat{t}_{12}, the vector computed from (11) is also ambiguous, i.e. both \hat{t}_{12} and −\hat{t}_{12} satisfy (11). The method for resolving the ambiguity of \hat{E} and \hat{t}_{12} is given below.

To compute the rotation matrix R_12 between the first and second images, define

w_i = \hat{E}_i × \hat{t}_{12},    i = 1, 2, 3                    (12)

where \hat{E}_i denotes the i-th row vector of \hat{E}. Let r_i be the row vectors of the rotation matrix R_12; then

r_i = w_i + w_j × w_k                    (13)

where (i, j, k) is a cyclic permutation of (1, 2, 3). This determines the camera poses of the first two views.
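
The following sketch transcribes Eqs. (11)–(13) directly (a minimal illustration; the residual sign ambiguity of \hat{t}_{12} is deliberately left to the positive-depth test described below):

    import numpy as np

    def recover_pose_from_essential(E):
        """Recover R_12 and the normalized translation from E via Eqs. (11)-(13)."""
        E_hat = E / np.sqrt(np.trace(E.T @ E) / 2.0)
        M = E_hat.T @ E_hat            # equals I - t_hat t_hat^T by Eq. (11)
        # Magnitudes from the diagonal; fix t_x >= 0 and take the remaining
        # signs from the off-diagonal terms. The overall +-t_hat ambiguity is
        # resolved later by the positive-depth test.
        tx = np.sqrt(max(1.0 - M[0, 0], 0.0))
        if tx > 1e-8:
            ty = -M[0, 1] / tx
            tz = -M[0, 2] / tx
        else:
            ty = np.sqrt(max(1.0 - M[1, 1], 0.0))
            tz = -M[1, 2] / ty if ty > 1e-8 else 1.0
        t_hat = np.array([tx, ty, tz])
        # Eq. (12)-(13): w_i = E_hat_i x t_hat,  r_i = w_i + w_j x w_k
        w = np.cross(E_hat, t_hat)     # row-wise cross products
        R = np.vstack([w[i] + np.cross(w[(i + 1) % 3], w[(i + 2) % 3])
                       for i in range(3)])
        return R, t_hat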

With the world coordinate system attached to the first camera, it is easy to derive from the imaging geometry that the Z coordinate of a space point X in the first camera coordinate system is

Z_1 = f \, \frac{(f r_1 - x_2 r_3)^T \hat{t}_{12}}{(f r_1 - x_2 r_3)^T x_1}                    (14)

and the other two coordinate components follow as

X_1 = x_1 Z_1 / f,    Y_1 = y_1 Z_1 / f                    (15)

The coordinates of X in the second camera coordinate system are

X_2 = R_{12}(X_1 - t_{12})                    (16)

Because of the sign ambiguities of \hat{E} and \hat{t}_{12}, four different candidate pairs (R_{12}, \hat{t}_{12}) may be produced. According to the actual shooting configuration, only one of them places all reconstructed points (here, the coded-point centers visible in both views) in front of both cameras at the same time, i.e. makes Z_1 and Z_2 positive for all points; only then is the reconstruction correct, and the corresponding pair (R_{12}, \hat{t}_{12}) is the correct solution.
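
A sketch of this positive-depth test over the candidate poses (the triangulate() helper is hypothetical; any two-view triangulation, such as the least-squares form of Eq. (27) below, could be used):

    import numpy as np

    def select_pose(candidates, pts1, pts2, K):
        """candidates: list of (R12, t12_hat) pairs produced by the sign
        ambiguities; pts1, pts2: matched coded-point centers in the two images.
        Keeps the pair for which every triangulated point has positive depth
        in both camera frames."""
        for R12, t12 in candidates:
            ok = True
            for x1, x2 in zip(pts1, pts2):
                X1 = triangulate(K, R12, t12, x1, x2)   # point in camera-1 frame
                X2 = R12 @ (X1 - t12)                   # Eq. (16)
                if X1[2] <= 0 or X2[2] <= 0:
                    ok = False
                    break
            if ok:
                return R12, t12
        return None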

Because the baseline length between the two cameras is unknown in the above reconstruction algorithm, only the normalized translation vector \hat{t}_{12} can be obtained; it is easy to see from (14) that the reconstructed scene then differs from the real scene by a fixed scale factor. For this reason a scale bar is placed in the scene; the distance between its two marker points is known, so the scale factor can be determined and the actual size of the measured object obtained.

On the basis of the two-view camera poses and the reconstructed 3D coded-point centers, the camera poses of the remaining shots are determined one by one. When the j-th image is processed, it must contain at least six coded points whose 3D coordinates have already been reconstructed in the previous steps, i.e. known correspondences X_i ↔ x_i, i = 1, …, L, L ≥ 6, between space points and image points. Substituting these constraints into the projection equation (8) gives

x_i = P_j X_i,    i = 1, …, L,    L ≥ 6                    (17)

由于每组Xixi的对应产生两个线性方程,因此,根据(17)式采用最小二乘法即可求解出第j幅图像的投影矩阵Pj中的11个未知元素。Since the correspondence of each group of X i  x i produces two linear equations, the 11 unknown elements in the projection matrix P j of the jth image can be solved by using the least square method according to formula (17).

Write the 3 × 4 projection matrix P_j as

P_j = K[R_j | t_j] = [K R_j | K t_j] = [M | p_4]                    (18)

where M is the left 3 × 3 submatrix of P_j and p_4 is its fourth column. The translation vector t_j = K^{-1} p_4 follows directly from (18).

Since the intrinsic parameter matrix is upper triangular and the rotation matrix is orthogonal, a QR-type decomposition of the matrix M yields the rotation matrix R_j. Having estimated the exterior pose parameters R_j and t_j of the camera relative to the world coordinate system for the j-th image, the optical triangulation method is further used to reconstruct the 3D coordinates of coded points that newly appear in the j-th image and have corresponding points in the previous j−1 images. This completes the camera pose determination and coded-point 3D coordinate computation for the j-th image. The next image is then processed in the same way until all images have been processed.
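
A sketch of the resection step of Eqs. (17)–(18) in a homogeneous least-squares (DLT) form; the SVD projection onto a rotation at the end is an illustrative substitute for the QR-type factorization mentioned in the text:

    import numpy as np

    def resect_camera(K, X_world, x_img):
        """Estimate the 3x4 projection matrix P_j from >= 6 known 3D
        coded-point centers X_world (N, 3) and their pixel coordinates
        x_img (N, 2), then split it into R_j, t_j."""
        A = []
        for (X, Y, Z), (u, v) in zip(X_world, x_img):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
            A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
        A = np.asarray(A)
        # Homogeneous least squares: right singular vector of the smallest
        # singular value (equivalent up to scale to solving Eq. (17)).
        P = np.linalg.svd(A)[2][-1].reshape(3, 4)
        M, p4 = P[:, :3], P[:, 3]
        # Fix the overall scale so that K^-1 M is (approximately) a rotation.
        Rj_raw = np.linalg.inv(K) @ M
        scale = np.cbrt(np.linalg.det(Rj_raw))
        Rj_raw /= scale
        tj = np.linalg.inv(K) @ p4 / scale
        U, _, Vt = np.linalg.svd(Rj_raw)   # project onto the nearest rotation
        return U @ Vt, tj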

(3) Camera pose optimization

Because of image noise and other factors, the image point obtained by projecting the 3D point X_i with the projection matrix P_j does not coincide with the actually detected image coordinates x_ij of X_i in the j-th image. To further improve the system accuracy, the invention uses bundle adjustment and builds an objective function that minimizes the reprojection error,

\sum_{ij} d(P_j X_i, x_{ij})^2 \rightarrow \min                    (19)

to globally optimize the previously computed camera parameters and 3D point coordinates. The minimization is solved with the Levenberg-Marquardt (LM) algorithm. Because the initial values X_i and P_j computed above are already close to the true values, the global optimization converges quickly.
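
A compact sketch of this bundle adjustment using SciPy's Levenberg-Marquardt solver; for brevity the 12 entries of each P_j are optimized directly, which is an illustrative simplification of a full bundle adjuster (rotations would normally be parameterized separately):

    import numpy as np
    from scipy.optimize import least_squares

    def bundle_adjust(P_list, X_pts, observations):
        """Minimize the reprojection error of Eq. (19).
        P_list: list of 3x4 projection matrices (initial values);
        X_pts: (n, 3) array of coded-point centers (initial values);
        observations: list of (j, i, u, v), meaning point i was seen in
        image j at pixel (u, v)."""
        n_cam, n_pts = len(P_list), len(X_pts)
        x0 = np.hstack([np.ravel(P) for P in P_list] + [np.ravel(X_pts)])

        def residuals(params):
            Ps = params[:12 * n_cam].reshape(n_cam, 3, 4)
            Xs = params[12 * n_cam:].reshape(n_pts, 3)
            res = []
            for j, i, u, v in observations:
                proj = Ps[j] @ np.append(Xs[i], 1.0)
                res.extend([proj[0] / proj[2] - u, proj[1] / proj[2] - v])
            return np.array(res)

        sol = least_squares(residuals, x0, method="lm")
        Ps = sol.x[:12 * n_cam].reshape(n_cam, 3, 4)
        Xs = sol.x[12 * n_cam:].reshape(n_pts, 3)
        return list(Ps), Xs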

Semi-Automatic Extraction of Target Curves

The graphical interface of the 3D curve structure measurement software developed for the invention is shown schematically in Figure 3. The left side of the figure lists all captured image files; the lower right contains the two image display windows of the currently active image pair, the displayed images being selected by clicking the image file list on the left; the upper right is the 3D graphics area displaying the reconstructed target curves.

The invention uses the basic idea of energy optimization to automatically "snap" the rough outlines of the target curves, interactively sketched on the two active images, onto the corresponding image curves through a background algorithm.

In the concrete implementation, the points entered by the user along a target curve are first connected into a polyline (a polygon in the closed case). The DDA algorithm for raster scan conversion of line segments in computer graphics is then used to quickly obtain all pixels traversed by the polyline, and these pixels are sampled at a fixed interval (every two pixels in the embodiment); the samples are denoted v_i, i = 0, 1, …, n. The image edge information automatically detected by the Canny edge detector is also needed here, and the set of detected edge points is denoted P. Furthermore, v_ij, j = 1, …, 8, denote the eight-neighborhood pixels of v_i, and for convenience of notation v_i0 = v_i.

The invention defines the following energy function for each point v_i and its eight neighbors:

E(v_{ij}) = α E_{tension}(v_{ij}) + β E_{bend}(v_{ij}) + γ E_{img}(v_{ij}) + δ E_{attr}(v_{ij})                    (20)

where E_{tension}(v_{ij}), E_{bend}(v_{ij}), E_{img}(v_{ij}) and E_{attr}(v_{ij}) are the tension (stretching) energy, bending energy, image energy and edge-point attraction energy at v_{ij}, and α, β, γ, δ are the weights of the energy terms, used to adjust their relative contributions. To balance the terms, each energy term is normalized to the interval [0, 1]:

E_{tension}(v_{ij}) = \frac{\big|\,\bar{d} - |v_{ij} - v_{i-1}|\,\big|}{\max_{0 \le j \le 8}\big\{\big|\,\bar{d} - |v_{ij} - v_{i-1}|\,\big|\big\}}, \qquad \bar{d} = \frac{1}{n}\sum_{i=1}^{n}|v_i - v_{i-1}|

E_{bend}(v_{ij}) = \frac{|v_{i-1} - 2v_{ij} + v_{i+1}|^2}{\max_{0 \le j \le 8}\big\{|v_{i-1} - 2v_{ij} + v_{i+1}|^2\big\}}

E_{img}(v_{ij}) = \frac{\min_{0 \le j \le 8}\big(E_{img}(v_{ij})\big) - E_{img}(v_{ij})}{\max_{0 \le j \le 8}\big(E_{img}(v_{ij})\big) - \min_{0 \le j \le 8}\big(E_{img}(v_{ij})\big)}

E_{attr}(v_{ij}) = \frac{|v_{ij} - p_{ij}|}{\max_{0 \le j \le 8}\big\{|v_{ij} - p_{ij}|\big\}}

其中,Eimg=-|I(x,y)|2,Eattr(vij)中的pij∈P是与vi距离最近的边缘点。本发明在拉伸能、弯曲、图像能之外增加边缘点引力能Eattr(vi)的目的是进一步促使点列向目标曲线收敛。pij的搜索限定在以vi为中心的一个窗口内进行。如果该窗口内没有任何边缘点,Eattr(vij)=0,j=0,L,8。Wherein, E img =-|I(x, y)| 2 , p ij ∈P in E attr (v ij ) is the edge point closest to v i . The purpose of the present invention to increase the gravitational energy E attr (v i ) of the edge points in addition to the stretching energy, bending, and image energy is to further promote the convergence of the point sequence to the target curve. The search of p ij is limited to a window centered on v i . If there is no edge point in the window, E attr (v ij )=0, j=0,L,8.

An iterative procedure moves the points v_i, i = 0, 1, …, n, towards positions that minimize the energy (20), so that they finally lock onto the image feature and form a smooth sequence of target-curve points. A uniform B-spline curve is then fitted to these points for the subsequent curve matching. For an open curve, its two end points are constrained to remain fixed, which prevents the point sequence of an open curve from degenerating by shrinking to a single point.
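
A condensed sketch of one greedy iteration of this energy minimization, in the spirit of Eq. (20); it is a simplified illustration (bounds checking omitted, and the normalization of each term is a plain min-max scaling), not the patent's implementation:

    import numpy as np

    def greedy_snake_step(points, edge_points, image_grad2,
                          alpha=1.0, beta=1.0, gamma=1.2, delta=1.0):
        """One pass of a greedy minimization of Eq. (20).
        points: (n, 2) int array of current snake points v_i (row, col);
        edge_points: (m, 2) array of Canny edge pixels (the set P);
        image_grad2: 2D array of squared gradient magnitude |grad I|^2."""
        offsets = np.array([(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
        new_points = points.copy()
        d_bar = np.mean(np.linalg.norm(np.diff(points, axis=0), axis=1))
        for i in range(1, len(points) - 1):
            cand = points[i] + offsets                  # v_i and its 8 neighbors
            e_tension = np.abs(d_bar - np.linalg.norm(cand - points[i - 1], axis=1))
            e_bend = np.linalg.norm(points[i - 1] - 2 * cand + points[i + 1], axis=1) ** 2
            e_img = -image_grad2[cand[:, 0], cand[:, 1]]
            d_attr = np.linalg.norm(cand - nearest_edge_point(points[i], edge_points), axis=1)

            def norm01(e):                              # scale candidates to [0, 1]
                rng = e.max() - e.min()
                return (e - e.min()) / rng if rng > 0 else np.zeros_like(e)

            total = (alpha * norm01(e_tension) + beta * norm01(e_bend)
                     + gamma * norm01(e_img) + delta * norm01(d_attr))
            new_points[i] = cand[np.argmin(total)]
        return new_points

    # nearest_edge_point() is a hypothetical helper returning the Canny edge
    # pixel closest to v_i within a search window (E_attr is zero if none).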

Automatic Matching Optimization of Corresponding Curves

(1) Matching degree measure

A basic constraint that corresponding points of an image pair taken from different positions and angles must satisfy is the epipolar constraint (9). With the camera intrinsic parameters and the relative position and orientation known, the fundamental matrix F of the image pair is known. For two candidate matching points v_1 and v_2 in an image pair, the distance from v_2 to the epipolar line of v_1 in the second image,

D_2(v_1, v_2) = \frac{|v_2^T F v_1|}{\| e F v_1 \|}                    (21)

and the distance from v_1 to the epipolar line of v_2 in the first image,

D_1(v_1, v_2) = \frac{|v_2^T F v_1|}{\| v_2^T F e^T \|}                    (22)

can be used to measure how well v_1 and v_2 match, where e = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. Based on the epipolar constraints (21) and (22), the invention builds the optimization objective for matching point pairs between corresponding curves.
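
A minimal sketch of the two distances (21)–(22), assuming v_1 and v_2 are given as homogeneous pixel coordinates:

    import numpy as np

    e = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 0.]])

    def epipolar_distances(F, v1, v2):
        """Point-to-epipolar-line distances of Eqs. (21)-(22).
        v1, v2: homogeneous pixel coordinates (3-vectors)."""
        num = abs(v2 @ F @ v1)
        D2 = num / np.linalg.norm(e @ F @ v1)        # distance in image 2
        D1 = num / np.linalg.norm(v2 @ F @ e.T)      # distance in image 1
        return D1, D2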

(2) Image curve resampling

To obtain the pixels on the fitted uniform B-spline curve p(u), u ∈ [0, 1], the curve is sampled at the discrete parameter spacing Δu = 1/L, where L is the accumulated chord length of the data points, giving discrete pixels v_i, i = 0, 1, …, N, on the uniform B-spline curve. A piecewise linear interpolation curve is constructed satisfying:

c(l_i) = v_i,    i = 0, …, N
c(l) = \frac{l_{i+1} - l}{l_{i+1} - l_i} v_i + \frac{l - l_i}{l_{i+1} - l_i} v_{i+1},    l_i ≤ l < l_{i+1}                    (23)

where l_0 = 0.0 and l_i = \sum_{j=1}^{i} |v_j - v_{j-1}|. Let L = \sum_{j=1}^{N} |v_j - v_{j-1}|. Without loss of generality, of the two corresponding image curves, the one with more pixels is denoted c_1(l) and the one with fewer pixels c_2(l). The discrete pixels on c_1(l) are denoted v_k^{(1)}, 0 ≤ k ≤ N_1, and those on c_2(l) are denoted v_j^{(2)}, 0 ≤ j ≤ N_2. By seeking, for every pixel of c_1(l), l ∈ [0, L_1], its matching point on c_2(l), l ∈ [0, L_2], sub-pixel matching on c_2(l) is achieved, enabling high-precision 3D curve reconstruction.
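
A small sketch of the chord-length parameterization and the piecewise linear evaluation of Eq. (23):

    import numpy as np

    def chord_length_params(points):
        """points: (N+1, 2) array of curve pixels v_i. Returns the cumulative
        chord-length parameters l_i of Eq. (23), with l_0 = 0."""
        seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
        return np.concatenate([[0.0], np.cumsum(seg)])

    def evaluate_curve(points, params, l):
        """Piecewise linear evaluation c(l) of Eq. (23)."""
        i = np.searchsorted(params, l, side="right") - 1
        i = min(max(i, 0), len(points) - 2)
        w = (l - params[i]) / (params[i + 1] - params[i])
        return (1.0 - w) * points[i] + w * points[i + 1]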

(3) Matching optimization

Dynamic programming is first used to obtain an initial matching of the discrete pixels on the corresponding curves. The cumulative cost of a corresponding point pair of the two curves is defined as

C(v_k^{(1)}, v_j^{(2)}) = D(v_k^{(1)}, v_j^{(2)}) + \min_{m \in G_{kj}} C(v_{k-1}^{(1)}, v_m^{(2)})    (24)

where D(v_k^{(1)}, v_j^{(2)}) = D_1(v_k^{(1)}, v_j^{(2)}) + D_2(v_k^{(1)}, v_j^{(2)}), and G_{kj} denotes all possible values of m given that v_k^{(1)} has already been matched to v_j^{(2)}. Because the two image curves with the same name contain different numbers of pixel points, the dynamic-programming matching yields many-to-one correspondences, i.e. several pixel points on the longer curve c_1 can correspond to the same pixel point on the shorter curve c_2.
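The recurrence of equation (24) can be sketched as follows. This is an illustrative assumption rather than the patent's exact implementation: G_{kj} is taken here as the monotone-ordering set m ≤ j, and the per-pair cost reuses the epipolar_distances sketch given earlier; points are in homogeneous pixel coordinates.

```python
import numpy as np

# Minimal sketch (assumed): ordered dynamic-programming matching of the pixel
# points of c1 (longer curve) to those of c2 (shorter curve) with the
# cumulative cost of equation (24), D = D1 + D2 from equations (21)-(22).
def dp_initial_match(pts1, pts2, F):
    n1, n2 = len(pts1), len(pts2)
    D = np.empty((n1, n2))
    for k, p in enumerate(pts1):
        for j, q in enumerate(pts2):
            d1, d2 = epipolar_distances(p, q, F)
            D[k, j] = d1 + d2
    C = np.full((n1, n2), np.inf)
    back = np.zeros((n1, n2), dtype=int)
    C[0, :] = D[0, :]
    for k in range(1, n1):
        for j in range(n2):
            m = int(np.argmin(C[k - 1, :j + 1]))   # best predecessor with m <= j
            C[k, j] = D[k, j] + C[k - 1, m]
            back[k, j] = m
    # Backtrack the minimum-cost assignment (may be many-to-one).
    match = np.empty(n1, dtype=int)
    match[-1] = int(np.argmin(C[-1, :]))
    for k in range(n1 - 1, 0, -1):
        match[k - 1] = back[k, match[k]]
    return match   # match[k] = index on c2 assigned to pixel k of c1
```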

Once the initial matching of the point pairs on the image curves has been obtained, the curve matching is further optimized. According to equations (21) and (22), the present invention optimizes the following objective function to achieve precise matching of the points on curves c_1 and c_2:

\min_{\sigma} \int_0^{L_1} \left[ \frac{|c_2(\sigma(l))^{T} F c_1(l)|}{\| e F c_1(l) \|} + \frac{|c_2(\sigma(l))^{T} F c_1(l)|}{\| c_2(\sigma(l))^{T} F e^{T} \|} \right] dl    (25)

where σ(l) is the mapping function to be determined, giving the parameter value on curve c_2 of the point whose parameter on curve c_1 is l. Rewriting the integral of equation (25) as a sum gives

\min_{\sigma} \sum_{k=0}^{N_1} \left[ \frac{|c_2(\sigma(l_k))^{T} F c_1(l_k)|}{\| e F c_1(l_k) \|} + \frac{|c_2(\sigma(l_k))^{T} F c_1(l_k)|}{\| c_2(\sigma(l_k))^{T} F e^{T} \|} \right]    (26)

In equation (26), l_0, l_1, …, l_{N_1} are the parameter values on c_1(l) corresponding to v_k^{(1)}, 0 ≤ k ≤ N_1. With the coarse matching produced by the dynamic programming above as the initial iterate, the minimization problem of equation (26) can be solved by the conjugate gradient method.
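A hedged sketch of this refinement using SciPy's conjugate-gradient solver follows; eval_c2 (evaluation of the fitted curve c_2 at a parameter value, returning a homogeneous point), the function name, and the way the per-point terms are assembled are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch (assumed): refine the matching by minimizing the summed
# epipolar objective of equation (26) over the parameter values
# sigma_k = sigma(l_k) on curve c2, starting from the DP result.
def refine_matching(c1_pts, eval_c2, sigma0, F):
    c1_pts = np.asarray(c1_pts, dtype=float)     # homogeneous points on c1

    def objective(sigma):
        total = 0.0
        for p, s in zip(c1_pts, sigma):
            d1, d2 = epipolar_distances(p, eval_c2(s), F)
            total += d1 + d2
        return total

    res = minimize(objective, np.asarray(sigma0, dtype=float), method='CG')
    return res.x   # refined parameter values sigma(l_k) on c2
```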

Three-dimensional reconstruction of target curves

After all pixel points on the curves with the same name have been matched, the spatial coordinates of these points can be reconstructed with the mature triangulation method of binocular stereo vision, because the camera intrinsic parameters are known and the camera position and orientation of each shot of the image pair have already been computed automatically, i.e. the rotation matrices and translation vectors relative to the world coordinate system are also known. The computation can be expressed as

x^{(j)} = f \, \frac{R_{11}^{(j)} X + R_{12}^{(j)} Y + R_{13}^{(j)} Z + T_x^{(j)}}{R_{31}^{(j)} X + R_{32}^{(j)} Y + R_{33}^{(j)} Z + T_z^{(j)}}, \qquad
y^{(j)} = f \, \frac{R_{21}^{(j)} X + R_{22}^{(j)} Y + R_{23}^{(j)} Z + T_y^{(j)}}{R_{31}^{(j)} X + R_{32}^{(j)} Y + R_{33}^{(j)} Z + T_z^{(j)}}, \qquad j = 1, 2    (27)

where f is the focal length of the camera, x^{(j)} and y^{(j)} are the two components of the pixel coordinates of the matched point in the j-th image, R_{..}^{(j)} are the components of the rotation matrix of the camera relative to the world coordinate system at the j-th shot, and T_{.}^{(j)} are the translation components of the camera relative to the world coordinate system at the j-th shot. From the four equations of (27), the three unknown components (X, Y, Z) of the point's spatial coordinates are solved by least squares. Carrying out this solution for every matched point pair on the curves with the same name completes the three-dimensional measurement of the point sequence of the whole target curve. After the three-dimensional measurement of all target curves is finished, the parametric equations of the curves and the surface equations of the model can be further constructed from these point sequences.
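A minimal sketch of the least-squares triangulation of equation (27) for one matched point pair is given below; the function name and argument conventions are assumptions.

```python
import numpy as np

# Minimal sketch (assumed helper): solve the four linear equations implied by
# (27) for (X, Y, Z) in least squares. R1, t1 and R2, t2 are the rotation
# matrices and translation vectors of the two shots relative to the world
# frame, f is the focal length, and (x1, y1), (x2, y2) are the image
# coordinates of the matched point in the two images.
def triangulate(f, R1, t1, x1, y1, R2, t2, x2, y2):
    A, b = [], []
    for R, t, x, y in ((R1, t1, x1, y1), (R2, t2, x2, y2)):
        R = np.asarray(R, dtype=float)
        t = np.asarray(t, dtype=float)
        # x*(R_3.P + t_z) = f*(R_1.P + t_x)  ->  (f*R_1 - x*R_3).P = x*t_z - f*t_x
        A.append(f * R[0] - x * R[2]);  b.append(x * t[2] - f * t[0])
        A.append(f * R[1] - y * R[2]);  b.append(y * t[2] - f * t[1])
    X, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return X   # (X, Y, Z) in world coordinates
```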

Claims (5)

1. A method for three-dimensional measurement of an object using free shooting with a single digital camera, characterized by comprising seven steps: measurement preparation, image capture, identification and positioning of coded points, camera pose determination, target curve extraction, automatic matching optimization of homonymous curves, and three-dimensional reconstruction of the target curves; the specific method is as follows: first, the characteristic lines of the measured object and the key section control lines on the object surface required for digital model reconstruction are marked so that they differ clearly from the measured object in color and brightness, which facilitates image recognition; a scale bar and a group of coded points are placed around the measured object; a digital camera is then held by hand to acquire a group of images of the measured object in a free shooting manner; from this group of images, the position and orientation of the camera at each shot are computed automatically and accurately; meanwhile, convenient interaction means are provided to the user to realize semi-automatic extraction of the marked curves and optimized matching of homonymous curves in different images, whereby the three-dimensional point sequence information of the marked curve structures is computed automatically.
2. The method of claim 1, characterized in that, in the identification of the coded points, each point in the designed group of coded points carries a distinct code; candidate coded-point targets are progressively filtered using five constraints: size, shape, ellipse-fitting residual, regional gray-level mean, and regional gray-level variance; the decoding of a coded point comprehensively considers the gray values of most pixels in the coding band, thereby suppressing noise and improving the robustness of coded-point identification; and the center of each coded point is located with sub-pixel accuracy by gray-level weighting within the target region.
3. The method of claim 1, characterized in that the camera pose determination automatically establishes the correspondence between homonymous coded points in the images based on the uniqueness of the coded-point identities; the camera poses corresponding to two images and the three-dimensional coordinates of the coded-point centers visible in both images are recovered from the pixel coordinates of at least 5 homonymous coded-point centers in the two images; and a strategy combining an incremental method with global optimization is then adopted to automatically solve the camera positions and orientations corresponding to all images.
4. The method of claim 1, characterized in that, in the target curve extraction, the user picks points with the mouse near the same marked curve in two images so that the polyline through the picked points approximately reflects the corresponding image curve contour; the measurement software system then iteratively minimizes the energy of the image curve contour, so that the interactively sketched approximate contour is automatically and optimally fitted onto the image curve.
5. The method of claim 1, characterized in that the automatic matching optimization of homonymous curves first uses dynamic programming to obtain an initial matching of the pixel points on the corresponding curves, and then uses nonlinear optimization to minimize the sum of the distances from all matched points on the corresponding curves to their respective epipolar lines, thereby achieving optimal matching of the homonymous target curve point sequences in the images, from which the three-dimensional coordinates of the point sequences are further computed.
CNB2006101612744A 2006-12-19 2006-12-19 Method for Three-Dimensional Measurement of Objects by Using Single Digital Camera to Shoot Freely Expired - Fee Related CN100430690C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101612744A CN100430690C (en) 2006-12-19 2006-12-19 Method for Three-Dimensional Measurement of Objects by Using Single Digital Camera to Shoot Freely

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006101612744A CN100430690C (en) 2006-12-19 2006-12-19 Method for Three-Dimensional Measurement of Objects by Using Single Digital Camera to Shoot Freely

Publications (2)

Publication Number Publication Date
CN1975323A true CN1975323A (en) 2007-06-06
CN100430690C CN100430690C (en) 2008-11-05

Family ID=38125561

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101612744A Expired - Fee Related CN100430690C (en) 2006-12-19 2006-12-19 Method for Three-Dimensional Measurement of Objects by Using Single Digital Camera to Shoot Freely

Country Status (1)

Country Link
CN (1) CN100430690C (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101839692A (en) * 2010-05-27 2010-09-22 西安交通大学 Method for measuring three-dimensional position and stance of object with single camera
CN101975552A (en) * 2010-08-30 2011-02-16 天津工业大学 Method for measuring key point of car frame based on coding points and computer vision
CN101739547B (en) * 2008-11-21 2012-04-11 中国科学院沈阳自动化研究所 Method for accurately identifying and positioning robust coding points in image under complex background
CN102679937A (en) * 2011-03-17 2012-09-19 镇江亿海软件有限公司 Ship steel plate dynamic three-dimension measurement method based on multi-camera vision
CN101630418B (en) * 2009-08-06 2012-10-03 白晓亮 Integrated method for measurement and reconstruction of three-dimensional model and system thereof
CN102762142A (en) * 2010-02-12 2012-10-31 皇家飞利浦电子股份有限公司 Laser enhanced reconstruction of 3d surface
CN103033171A (en) * 2013-01-04 2013-04-10 中国人民解放军信息工程大学 Encoding mark based on colors and structural features
CN103049731A (en) * 2013-01-04 2013-04-17 中国人民解放军信息工程大学 Decoding method for point-distributed color coding marks
CN103218851A (en) * 2013-04-03 2013-07-24 西安交通大学 Segmental reconstruction method for three-dimensional line segment
CN103267516A (en) * 2013-02-27 2013-08-28 北京林业大学 Sample plot measuring technology by using digital camera as tool
CN103411532A (en) * 2013-08-02 2013-11-27 上海锅炉厂有限公司 Method for mounting and measuring space connecting pipes
CN103714571A (en) * 2013-09-23 2014-04-09 西安新拓三维光测科技有限公司 Single camera three-dimensional reconstruction method based on photogrammetry
CN105157609A (en) * 2015-09-01 2015-12-16 大连理工大学 Two-sets-of-camera-based global morphology measurement method of large parts
CN105180904A (en) * 2015-09-21 2015-12-23 大连理工大学 High-speed moving target position and posture measurement method based on coding structured light
CN105574886A (en) * 2016-01-28 2016-05-11 多拉维(深圳)技术有限公司 High-precision calibration method of handheld multi-lens camera
US9396587B2 (en) 2012-10-12 2016-07-19 Koninklijke Philips N.V System for accessing data of a face of a subject
CN106960442A (en) * 2017-03-01 2017-07-18 东华大学 Based on the infrared night robot vision wide view-field three-D construction method of monocular
CN107020545A (en) * 2017-04-30 2017-08-08 天津大学 The apparatus and method for recognizing mechanical workpieces pose
CN107080148A (en) * 2017-04-05 2017-08-22 浙江省海洋开发研究院 Processing of aquatic products system and its control method
CN107835551A (en) * 2017-11-01 2018-03-23 中国科学院长春光学精密机械与物理研究所 The control method and device of lighting source power in 3 D scanning system
CN108759665A (en) * 2018-05-25 2018-11-06 哈尔滨工业大学 A kind of extraterrestrial target reconstruction accuracy analysis method based on coordinate conversion
CN108871185A (en) * 2018-05-10 2018-11-23 苏州大学 Method, apparatus, equipment and the computer readable storage medium of piece test
CN109544649A (en) * 2018-11-21 2019-03-29 武汉珈鹰智能科技有限公司 A kind of the coloud coding point design and its recognition methods of large capacity
CN110250624A (en) * 2019-08-01 2019-09-20 西安科技大学 A kind of manufacturing method of customized mask bracket
CN110567728A (en) * 2018-09-03 2019-12-13 阿里巴巴集团控股有限公司 Method, device and equipment for identifying shooting intention of user
CN110942370A (en) * 2011-04-07 2020-03-31 电子湾有限公司 Descriptor and image based item model
CN111127561A (en) * 2019-12-05 2020-05-08 农芯(南京)智慧农业研究院有限公司 Multi-view image calibration device and method
CN111521127A (en) * 2019-02-01 2020-08-11 奥林巴斯株式会社 Measuring method, measuring apparatus, and recording medium
CN111735409A (en) * 2020-06-04 2020-10-02 深圳职业技术学院 Soft robot arm shape measurement method, system and storage medium
CN112241995A (en) * 2019-07-18 2021-01-19 重庆双楠文化传播有限公司 3D portrait modeling method based on multiple images of single digital camera
CN112781521A (en) * 2020-12-11 2021-05-11 北京信息科技大学 Software operator shape recognition method based on visual markers
CN114440834A (en) * 2022-01-27 2022-05-06 中国人民解放军战略支援部队信息工程大学 Object space and image space matching method of non-coding mark
CN115393447A (en) * 2022-08-01 2022-11-25 北京强度环境研究所 Strain gauge three-dimensional coordinate acquisition method and system based on single camera

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3799019B2 (en) * 2002-01-16 2006-07-19 オリンパス株式会社 Stereo shooting device and shooting method of stereo shooting device
CN1233984C (en) * 2004-11-11 2005-12-28 天津大学 Large-scale three dimensional shape and appearance measuring and splicing method without being based on adhesive mark
CN1308652C (en) * 2004-12-09 2007-04-04 武汉大学 Method for three-dimensional measurement of sheet metal part using single non-measuring digital camera

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739547B (en) * 2008-11-21 2012-04-11 中国科学院沈阳自动化研究所 Method for accurately identifying and positioning robust coding points in image under complex background
CN101630418B (en) * 2009-08-06 2012-10-03 白晓亮 Integrated method for measurement and reconstruction of three-dimensional model and system thereof
US11022433B2 (en) 2010-02-12 2021-06-01 Koninklijke Philips N.V. Laser enhanced reconstruction of 3D surface
CN102762142B (en) * 2010-02-12 2016-01-27 皇家飞利浦电子股份有限公司 The laser enhancing on 3D surface is rebuild
CN102762142A (en) * 2010-02-12 2012-10-31 皇家飞利浦电子股份有限公司 Laser enhanced reconstruction of 3d surface
CN101839692B (en) * 2010-05-27 2012-09-05 西安交通大学 Method for measuring three-dimensional position and stance of object with single camera
CN101839692A (en) * 2010-05-27 2010-09-22 西安交通大学 Method for measuring three-dimensional position and stance of object with single camera
CN101975552A (en) * 2010-08-30 2011-02-16 天津工业大学 Method for measuring key point of car frame based on coding points and computer vision
CN102679937A (en) * 2011-03-17 2012-09-19 镇江亿海软件有限公司 Ship steel plate dynamic three-dimension measurement method based on multi-camera vision
CN110942370B (en) * 2011-04-07 2023-05-12 电子湾有限公司 Descriptor and image based project model
CN110942370A (en) * 2011-04-07 2020-03-31 电子湾有限公司 Descriptor and image based item model
US9396587B2 (en) 2012-10-12 2016-07-19 Koninklijke Philips N.V System for accessing data of a face of a subject
CN103049731A (en) * 2013-01-04 2013-04-17 中国人民解放军信息工程大学 Decoding method for point-distributed color coding marks
CN103033171A (en) * 2013-01-04 2013-04-10 中国人民解放军信息工程大学 Encoding mark based on colors and structural features
CN103049731B (en) * 2013-01-04 2015-06-03 中国人民解放军信息工程大学 Decoding method for point-distributed color coding marks
CN103267516A (en) * 2013-02-27 2013-08-28 北京林业大学 Sample plot measuring technology by using digital camera as tool
CN103218851B (en) * 2013-04-03 2015-12-09 西安交通大学 A kind of segment reconstruction method of three-dimensional line segment
CN103218851A (en) * 2013-04-03 2013-07-24 西安交通大学 Segmental reconstruction method for three-dimensional line segment
CN103411532B (en) * 2013-08-02 2016-08-24 上海锅炉厂有限公司 The method measured is installed in the adapter of a kind of space
CN103411532A (en) * 2013-08-02 2013-11-27 上海锅炉厂有限公司 Method for mounting and measuring space connecting pipes
CN103714571B (en) * 2013-09-23 2016-08-10 西安新拓三维光测科技有限公司 A kind of based on photogrammetric single camera three-dimensional rebuilding method
CN103714571A (en) * 2013-09-23 2014-04-09 西安新拓三维光测科技有限公司 Single camera three-dimensional reconstruction method based on photogrammetry
CN105157609A (en) * 2015-09-01 2015-12-16 大连理工大学 Two-sets-of-camera-based global morphology measurement method of large parts
CN105157609B (en) * 2015-09-01 2017-08-01 大连理工大学 Global shape measurement method of large parts based on two sets of cameras
CN105180904A (en) * 2015-09-21 2015-12-23 大连理工大学 High-speed moving target position and posture measurement method based on coding structured light
CN105574886A (en) * 2016-01-28 2016-05-11 多拉维(深圳)技术有限公司 High-precision calibration method of handheld multi-lens camera
CN106960442A (en) * 2017-03-01 2017-07-18 东华大学 Based on the infrared night robot vision wide view-field three-D construction method of monocular
CN107080148A (en) * 2017-04-05 2017-08-22 浙江省海洋开发研究院 Processing of aquatic products system and its control method
CN107020545A (en) * 2017-04-30 2017-08-08 天津大学 The apparatus and method for recognizing mechanical workpieces pose
CN107835551B (en) * 2017-11-01 2019-07-23 中国科学院长春光学精密机械与物理研究所 The control method and device of lighting source power in 3 D scanning system
CN107835551A (en) * 2017-11-01 2018-03-23 中国科学院长春光学精密机械与物理研究所 The control method and device of lighting source power in 3 D scanning system
CN108871185A (en) * 2018-05-10 2018-11-23 苏州大学 Method, apparatus, equipment and the computer readable storage medium of piece test
CN108759665B (en) * 2018-05-25 2021-04-27 哈尔滨工业大学 Spatial target three-dimensional reconstruction precision analysis method based on coordinate transformation
CN108759665A (en) * 2018-05-25 2018-11-06 哈尔滨工业大学 A kind of extraterrestrial target reconstruction accuracy analysis method based on coordinate conversion
CN110567728B (en) * 2018-09-03 2021-08-20 创新先进技术有限公司 Method, device and equipment for identifying shooting intention of user
CN110567728A (en) * 2018-09-03 2019-12-13 阿里巴巴集团控股有限公司 Method, device and equipment for identifying shooting intention of user
CN109544649A (en) * 2018-11-21 2019-03-29 武汉珈鹰智能科技有限公司 A kind of the coloud coding point design and its recognition methods of large capacity
CN109544649B (en) * 2018-11-21 2022-07-19 武汉珈鹰智能科技有限公司 Large capacity color coding point coding and its identification method
CN111521127A (en) * 2019-02-01 2020-08-11 奥林巴斯株式会社 Measuring method, measuring apparatus, and recording medium
CN111521127B (en) * 2019-02-01 2023-04-07 仪景通株式会社 Measuring method, measuring apparatus, and recording medium
CN112241995A (en) * 2019-07-18 2021-01-19 重庆双楠文化传播有限公司 3D portrait modeling method based on multiple images of single digital camera
CN110250624A (en) * 2019-08-01 2019-09-20 西安科技大学 A kind of manufacturing method of customized mask bracket
CN111127561A (en) * 2019-12-05 2020-05-08 农芯(南京)智慧农业研究院有限公司 Multi-view image calibration device and method
CN111127561B (en) * 2019-12-05 2023-03-24 农芯(南京)智慧农业研究院有限公司 Multi-view image calibration device and method
CN111735409A (en) * 2020-06-04 2020-10-02 深圳职业技术学院 Soft robot arm shape measurement method, system and storage medium
CN112781521A (en) * 2020-12-11 2021-05-11 北京信息科技大学 Software operator shape recognition method based on visual markers
CN114440834B (en) * 2022-01-27 2023-05-02 中国人民解放军战略支援部队信息工程大学 A Matching Method of Object-Space and Image-Space for Non-coded Signs
CN114440834A (en) * 2022-01-27 2022-05-06 中国人民解放军战略支援部队信息工程大学 Object space and image space matching method of non-coding mark
CN115393447A (en) * 2022-08-01 2022-11-25 北京强度环境研究所 Strain gauge three-dimensional coordinate acquisition method and system based on single camera

Also Published As

Publication number Publication date
CN100430690C (en) 2008-11-05

Similar Documents

Publication Publication Date Title
CN1975323A (en) Method for making three-dimensional measurement of objects utilizing single digital camera to freely shoot
CN107767442B (en) Foot type three-dimensional reconstruction and measurement method based on Kinect and binocular vision
CN107133989B (en) Three-dimensional scanning system parameter calibration method
CN112161619B (en) Pose detection method, three-dimensional scanning path planning method and detection system
CN113177977A (en) Non-contact three-dimensional human body size measuring method
CN112067233B (en) Six-degree-of-freedom motion capture method for wind tunnel model
CN101763643A (en) Automatic calibration method for structured light three-dimensional scanner system
CN112132907A (en) A camera calibration method, device, electronic device and storage medium
CN102376089A (en) Target correction method and system
CN1801896A (en) Video camera rating data collecting method and its rating plate
CN112347882A (en) Intelligent sorting control method and intelligent sorting control system
CN113324478A (en) Center extraction method of line structured light and three-dimensional measurement method of forge piece
CN101030300A (en) Method for matching depth image
CN111981982A (en) Multi-directional cooperative target optical measurement method based on weighted SFM algorithm
US9245375B2 (en) Active lighting for stereo reconstruction of edges
CN112991517B (en) A 3D reconstruction method for automatic matching of texture image encoding and decoding
CN111028280A (en) # -shaped structured light camera system and method for performing scaled three-dimensional reconstruction of target
CN113970560A (en) A three-dimensional defect detection method based on multi-sensor fusion
CN112132876A (en) Initial pose estimation method in 2D-3D image registration
Huang et al. Crack detection of masonry structure based on thermal and visible image fusion and semantic segmentation
CN108447096B (en) Information fusion method of kinect depth camera and thermal infrared camera
CN101661623B (en) Three-dimensional tracking method of deformable body based on linear programming
CN116524041A (en) Camera calibration method, device, equipment and medium
CN118857153B (en) Calibration method of line laser 3D profile measuring instrument
Zhou et al. Three-dimensional colour reconstruction of aviation spiral bevel gear tooth surface through fusion of image and point cloud information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081105

Termination date: 20121219