CN112991405B - Stereoscopic vision matching method based on three-color vertical color stripes - Google Patents

Stereoscopic vision matching method based on three-color vertical color stripes

Info

Publication number
CN112991405B
Authority
CN
China
Prior art keywords
color
line
matching
stripes
lines
Prior art date
Legal status: Expired - Fee Related
Application number
CN201911295547.8A
Other languages
Chinese (zh)
Other versions
CN112991405A (en)
Inventor
王振洲
栗义康
Current Assignee: Shandong University of Technology
Original Assignee: Shandong University of Technology
Priority date
Filing date
Publication date
Application filed by Shandong University of Technology filed Critical Shandong University of Technology
Priority to CN201911295547.8A priority Critical patent/CN112991405B/en
Publication of CN112991405A publication Critical patent/CN112991405A/en
Application granted granted Critical
Publication of CN112991405B publication Critical patent/CN112991405B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/344 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20228 - Disparity calculation for image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a pattern composed of vertical color stripes of three colors for use in three-dimensional sensing measurement, together with a stereo vision matching method for it. A structured light projection device projects a vertical color stripe pattern consisting of three colors onto the surface of the object to be measured, and two cameras simultaneously capture the pattern projected onto the object surface. The stripes of the first color are stereoscopically matched first and the matching relationship is modeled; according to the established matching model, the stripes of the second color are then stereoscopically matched. The model is updated using the matching relationship between the first and second colors, and according to the updated matching model the stripes of the third color are stereoscopically matched, so as to realize real-time measurement of the three-dimensional shape of the object surface. The invention addresses the internationally recognized problem of low stereo matching accuracy.

Description

Stereo Vision Matching Method Based on Three-Color Vertical Color Stripes

Technical Field

The present invention relates to optical three-dimensional sensing technology, and in particular to projecting a single pattern composed of vertical color stripes of three colors, capturing the pattern projected onto the object surface simultaneously with two cameras, and matching the corresponding pixels, so as to realize real-time measurement of the three-dimensional shape of the object surface.

Background

The invention relates to a stereo vision matching method based on a single projection of vertical color stripes in three colors, intended primarily for three-dimensional measurement by active stereo vision. Active stereo vision can measure the three-dimensional data of both static objects and moving or deforming objects, and is not limited by the motion state of the object, so it is very widely used. Compared with time-of-flight (TOF) technology, active stereo vision offers much higher accuracy and resolution. Compared with passive stereo vision, active stereo vision can measure textureless objects. However, for both active and passive stereo vision, matching the corresponding pixels of the two cameras has long been a research hotspot and difficulty at home and abroad. The matching method in active stereo vision is usually determined by a carefully designed projection pattern; the projection patterns widely used by researchers include dot patterns, speckle patterns, color stripe patterns and phase patterns. Among these, dot patterns and stripe patterns yield the highest accuracy; see Z.Z. Wang, Q. Zhou and Y.C. Shuang, "Three-dimensional reconstruction with single-shot structured light dot pattern and analytic solutions," Measurement, 151, 107114 (2020); Z.Z. Wang, "A one-shot-projection method for measurement of specular surfaces," Opt. Express, 23, 1912 (2015); Z.Z. Wang, "Single-shot three-dimensional reconstruction based on structured light line pattern," Opt. Lasers Eng., 106, 10-16 (2018). Dot patterns, however, have lower resolution than stripe patterns, so color stripe patterns are more popular. The color stripe patterns designed by different researchers differ considerably, and so do their stereo matching methods; there is as yet no commonly accepted, general-purpose color stripe pattern, so active stereo vision matching remains an open research problem. The present invention projects the designed vertical color stripe pattern composed of three colors onto objects of different shapes and different motion modes and performs robust stereo vision matching.

Summary of the Invention

The purpose of the present invention is to address defects of existing stereo vision matching methods, such as low matching accuracy and the inability to correctly match complex or discontinuous object surfaces, by providing a stereo vision matching method based on vertical stripes of three colors. Exploiting the different spacings of the three color stripes, the method first matches the most widely spaced color stripes row by row in each horizontal line of the image, and then models the matching relationship row by row. Using the established model, the more closely spaced color stripes are matched row by row; the model equation is then updated with these matches, and finally the updated model is used to match the most closely spaced color stripes row by row.

In order to achieve the above purpose, the present invention adopts the following technical solution:

A structured light projection device projects a single vertical stripe pattern of three colors onto the surface of the measured object. The vertical color stripe pattern composed of three colors consists of a number of periodically repeating vertical stripes generated by binary coding, by cosine function coding, or by professional drawing software; within each period the number of stripes of each color is different, while the spacing between adjacent vertical stripes (regardless of color) is the same. Two cameras record the deformed color stripe pattern. The captured color stripe images are converted from the RGB domain to the HSV domain, the stripes of each color are segmented in turn by threshold selection, and the single-pixel line of each segmented stripe is extracted. The single-pixel lines extracted from the stripes of different colors in the two cameras are then matched in turn. The single-pixel lines of the most widely spaced color stripes are matched first: the centers of all such stripes in the left and right cameras are aligned, their single-pixel lines are matched row by row by searching for the nearest pixel in one direction (to the left or to the right), and this matching relationship is modeled. Using the matching models established in the individual horizontal rows, the single-pixel lines of the more closely spaced color stripes in the same horizontal row are matched; the model equations are then updated with the matches of the more closely spaced color stripes in the different horizontal rows, and finally the updated models are used to match, row by row, the single-pixel lines of the most closely spaced color stripes. After matching is complete, the three-dimensional coordinates of the object are computed as the intersection of the two straight lines that pass through the optical centers and the matched points.
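
As a concrete illustration of the pattern generation described above, the following sketch builds a periodically repeating vertical stripe image in which adjacent stripes are equally spaced and each period contains a different number of stripes of each color. The specific colors, the per-period stripe sequence, and the stripe and gap widths are assumptions for illustration only; the patent text fixes none of these values.

```python
import numpy as np

def make_stripe_pattern(width=1024, height=768, stripe_px=4, gap_px=4,
                        period=("c1", "c2", "c3", "c2", "c3", "c3")):
    """Generate a three-color vertical stripe pattern (assumed layout).

    In the assumed period, color 1 appears once, color 2 twice and color 3
    three times, so color 1 has the largest average spacing and color 3 the
    smallest, while adjacent stripes (regardless of color) are equally spaced.
    """
    colors = {"c1": (255, 0, 0), "c2": (0, 255, 0), "c3": (0, 0, 255)}  # assumed RGB values
    img = np.zeros((height, width, 3), dtype=np.uint8)  # black background
    x, i = 0, 0
    while x + stripe_px <= width:
        img[:, x:x + stripe_px] = colors[period[i % len(period)]]
        x += stripe_px + gap_px   # constant spacing between adjacent stripes
        i += 1
    return img

if __name__ == "__main__":
    pattern = make_stripe_pattern()
    print(pattern.shape)  # (768, 1024, 3)
```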

Compared with the prior art, the present invention has the following advantages:

The vertical color stripe pattern of three colors projected by the present invention has stripes of distinct contrast that can be accurately segmented in the HSV domain. By modeling the matching relationship of the most widely spaced stripes, the matching positions of the more closely spaced stripes can be predicted accurately, allowing precise matching. The invention thus addresses the problem of the low matching accuracy of existing stereo matching methods.

Description of the Drawings

Fig. 1 is a first example of the three-color vertical color stripe pattern designed in the present invention.

Fig. 2 is a second example of the three-color vertical color stripe pattern designed in the present invention.

Detailed Description

The present invention is described in detail below with reference to the accompanying drawings and its working principle.

Figures 1 and 2 show two designs of the color stripes; both satisfy the condition that within each period the number of stripes of each color is different while the spacing between adjacent vertical stripes (regardless of color) is the same. The designed color stripes are projected onto the surface of the measured object and captured synchronously by the left and right cameras. After the captured images are segmented in the HSV domain, the stripes of the most widely spaced color, the intermediately spaced color and the most closely spaced color are obtained. For convenience, the color stripes with the largest average spacing are called color 1 stripes, those with the intermediate average spacing color 2 stripes, and those with the smallest average spacing color 3 stripes (as shown in Figs. 1-2). After the single-pixel centerlines are extracted, the segmented stripes of the different colors in the left and right cameras are matched as follows.
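
A minimal sketch of the segmentation and centerline extraction just described, under the following assumptions: OpenCV is used, the captured image is converted from BGR to HSV, each stripe color is isolated with a placeholder hue range that would have to be tuned to the actual projector colors, and the single-pixel centerline of a stripe is taken as the centroid column of each contiguous run of stripe pixels in every image row.

```python
import cv2
import numpy as np

# Placeholder hue ranges (OpenCV hue runs 0-179); tune to the projected colors.
HSV_RANGES = {
    "color1": ((0, 80, 80), (10, 255, 255)),     # red-ish
    "color2": ((50, 80, 80), (70, 255, 255)),    # green-ish
    "color3": ((110, 80, 80), (130, 255, 255)),  # blue-ish
}

def extract_centerlines(bgr_image, hsv_range):
    """Segment one stripe color and return per-row centerline positions.

    Returns a list indexed by image row; each entry is an array of stripe-center
    x coordinates (sub-pixel) for that row.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_range[0]), np.array(hsv_range[1]))
    centers = []
    for row in range(mask.shape[0]):
        cols = np.flatnonzero(mask[row])
        if cols.size == 0:
            centers.append(np.array([]))
            continue
        # One contiguous run of mask pixels corresponds to one stripe crossing this row.
        runs = np.split(cols, np.where(np.diff(cols) > 1)[0] + 1)
        centers.append(np.array([run.mean() for run in runs]))  # run centroid = centerline sample
    return centers

# Usage (hypothetical file name):
# left = cv2.imread("left.png")
# color1_lines_left = extract_centerlines(left, HSV_RANGES["color1"])
```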

The matching method for the single-pixel centerlines of the color 1 stripes (referred to as the color 1 lines) is implemented in the following steps:

Step 1: Compute the center of all the color 1 lines in the left camera and the center of all the color 1 lines in the right camera, and compute the average spacing of the color 1 lines in the left camera.

Step 2: Translate all the color 1 lines in the left camera horizontally to the left by the difference between the two centers, so that the color 1 lines of the left and right cameras are aligned.

Step 3: In the aligned left and right images, match the color 1 lines of the left camera to the color 1 lines of the right camera within each horizontal line, from top to bottom. Denote the pixel positions of the color 1 lines of the right camera in row j of the image by x_j^R(i), i = 1, ..., N_j^R, where N_j^R is the total number of such pixels in row j, and denote the pixel positions of the color 1 lines of the left camera in row j by x_j^L(i), i = 1, ..., N_j^L, where N_j^L is the total number of such pixels in row j. For any pixel position x_j^R(i) on a color 1 line of the right camera, its matching pixel position x_j^L(i) on a color 1 line of the left camera is obtained by searching for the nearest pixel to the left, within the search range defined by Equation (1).

Letting j run from 1 to H, where H is the vertical size of the captured image, the color 1 line pixels of the left and right cameras are matched row by row. Because the color 1 lines are widely spaced, every pixel can be matched accurately.
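
The sketch below follows Steps 1-3 under stated assumptions: the per-row centerline positions are in the form produced by the extract_centerlines helper sketched earlier, the alignment offset of Step 2 is the difference between the mean color 1 line positions of the two views, and each right-camera position is matched to the nearest aligned left-camera position lying to its left. Equation (1) is not reproduced in this record, so the half-spacing search window used here is only an assumed stand-in for it.

```python
import numpy as np

def match_color1(lines_left, lines_right):
    """Match color 1 centerlines row by row.

    lines_left / lines_right: per-row arrays of centerline x positions
    (as returned by extract_centerlines). Returns, for every row, a list of
    (x_right, x_left) matched pairs in the original (unshifted) coordinates.
    """
    all_left = np.concatenate([r for r in lines_left if r.size])
    all_right = np.concatenate([r for r in lines_right if r.size])
    shift = all_left.mean() - all_right.mean()          # Steps 1-2: align the line centers
    gaps = [np.diff(np.sort(r)) for r in lines_left if r.size > 1]
    spacing = np.mean(np.concatenate(gaps))             # average spacing of left color 1 lines

    matches = []
    for row in range(len(lines_right)):
        xl = lines_left[row] - shift                    # left positions after alignment
        row_matches = []
        for x in lines_right[row]:
            # Step 3: nearest aligned left-camera line to the left of x, within an
            # assumed half-spacing window (a stand-in for the search range of Eq. (1)).
            cand = xl[(xl <= x) & (xl > x - spacing / 2)]
            if cand.size:
                row_matches.append((x, cand.max() + shift))  # undo the alignment shift
        matches.append(row_matches)
    return matches
```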

After the color 1 lines have been matched, the matching relationship between the left and right cameras needs to be modeled. The modeling method is as follows:

The disparity between the pixel position x_j^R(i) of a color 1 line of the right camera in row j of the image and the matched pixel position x_j^L(i) of a color 1 line of the left camera in row j is given by Equation (2).

The pixel position of a color 1 line of the right camera in row j and the corresponding disparity are related by the model of Equation (3).

The coefficients of the model are computed by Equation (4), where matrix B and matrix Y are given by Equations (5) and (6), respectively.
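
The model of Equation (3) and the matrices of Equations (4)-(6) appear only as images in this record, so the sketch below assumes a low-order polynomial relating the right-camera position to the disparity and fits its coefficients by ordinary least squares over the matched color 1 pixels of one row, which is the standard way coefficients are obtained from a design matrix B and an observation vector Y. The disparity sign convention (left position minus right position) is likewise an assumption.

```python
import numpy as np

def fit_disparity_model(x_right, x_left, degree=2):
    """Fit an assumed polynomial disparity model d(x) = a0 + a1*x + ... + a_deg*x^deg."""
    x_right = np.asarray(x_right, dtype=float)
    disparity = np.asarray(x_left, dtype=float) - x_right   # assumed convention for Eq. (2)
    B = np.vander(x_right, degree + 1, increasing=True)     # design matrix (role of Eq. (5))
    Y = disparity                                            # observation vector (role of Eq. (6))
    coeffs, *_ = np.linalg.lstsq(B, Y, rcond=None)           # least-squares solve (role of Eq. (4))
    return coeffs

def predict_disparity(coeffs, x_right):
    """Evaluate the fitted model at new right-camera positions."""
    B = np.vander(np.asarray(x_right, dtype=float), len(coeffs), increasing=True)
    return B @ coeffs

# Usage per image row, with the matched pairs from match_color1:
# xr, xl = zip(*row_matches)
# coeffs = fit_disparity_model(xr, xl)
```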

The matching method for the single-pixel centerlines of the color 2 stripes (the color 2 lines) is implemented as follows:

Using the established model (Equation (3)), the disparity of the color 2 lines in row j of the image is computed by Equation (7).

From this disparity, the matching position in the left camera of each pixel position of a color 2 line of the right camera in row j is predicted by Equation (8), where the total number of color 2 line pixels of the right camera in row j enters as the index bound. Because the predicted matching position of a right-camera color 2 line pixel is closest to its actual matching position on the left-camera color 2 lines, the color 2 lines of the left and right cameras are matched by searching for the pixel position at the minimum distance from the prediction.

After the color 2 lines have been matched, the established model must be updated with the new matching relationship. All matched pixels on the color 1 and color 2 lines of the right camera are arranged from left to right by position to form a new position vector (Equation (9)), and all matched pixels on the color 1 and color 2 lines of the left camera are arranged from left to right by position to form a new position vector (Equation (10)). The new disparity vector is computed from these two vectors (Equation (11)), and the model is updated as in Equation (12). The coefficients of the updated model are computed by Equation (4), with matrix B and matrix Y given by Equations (13) and (14), respectively.
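
A sketch of the color 2 stage under the same assumptions, reusing fit_disparity_model and predict_disparity from the previous sketch: the row's fitted model predicts where each right-camera color 2 position should land in the left image (in the role of Equations (7)-(8)), the nearest actual left-camera color 2 position is taken as the match, and the model is then refitted on the pooled, left-to-right sorted color 1 and color 2 matches (in the role of Equations (9)-(14)).

```python
import numpy as np

def match_by_prediction(coeffs, xr_row, xl_row):
    """Match one image row of right-camera positions to left-camera positions.

    coeffs: fitted disparity model for this row; xr_row / xl_row: arrays of
    centerline positions of the current color in the right / left image row.
    """
    pairs = []
    for x in xr_row:
        x_pred = x + predict_disparity(coeffs, [x])[0]   # predicted left-camera position
        if xl_row.size:
            j = np.argmin(np.abs(xl_row - x_pred))       # minimum-distance search
            pairs.append((x, xl_row[j]))
    return pairs

def update_model(pairs_color1, pairs_color2, degree=2):
    """Refit the disparity model on the pooled color 1 and color 2 matches of one row."""
    pooled = sorted(pairs_color1 + pairs_color2, key=lambda p: p[0])  # left-to-right by position
    x_right = [p[0] for p in pooled]
    x_left = [p[1] for p in pooled]
    return fit_disparity_model(x_right, x_left, degree)   # same least-squares fit as before
```

The color 3 stage described next is the same predict-and-match step, driven by the updated coefficients.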

The matching method for the single-pixel centerlines of the color 3 stripes (the color 3 lines) is implemented as follows:

Using the updated model (Equation (12)), the disparity of the color 3 lines in row j of the image is computed by Equation (15).

From this disparity, the matching position in the left camera of each pixel position of a color 3 line of the right camera in row j is predicted by Equation (16), where the total number of color 3 line pixels of the right camera in row j enters as the index bound. Because the predicted matching position of a right-camera color 3 line pixel is closest to its actual matching position on the left-camera color 3 lines, the color 3 lines of the left and right cameras are matched by searching for the pixel position at the minimum distance from the prediction.
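
Once all three colors are matched, the summary states that the three-dimensional coordinates are obtained from the intersection of the two straight lines passing through the optical centers and the matched image points. With real, slightly noisy matches the two rays generally do not intersect exactly, so the sketch below returns the midpoint of their closest approach; the camera centers and world-frame ray directions are assumed to come from a prior stereo calibration and are not derived in the patent text.

```python
import numpy as np

def triangulate_midpoint(c_left, d_left, c_right, d_right):
    """Midpoint of closest approach of two camera rays (calibrated geometry assumed).

    c_left, c_right: 3-vector optical centers; d_left, d_right: 3-vector ray
    directions through the matched pixels in the world frame (need not be unit length).
    """
    c1, c2 = np.asarray(c_left, float), np.asarray(c_right, float)
    d1, d2 = np.asarray(d_left, float), np.asarray(d_right, float)
    r = c2 - c1
    d11, d12, d22 = d1 @ d1, d1 @ d2, d2 @ d2
    r1, r2 = d1 @ r, d2 @ r
    denom = d11 * d22 - d12 * d12
    if abs(denom) < 1e-12:                     # near-parallel rays: no stable intersection
        return None
    t1 = (r1 * d22 - d12 * r2) / denom         # parameter along the left ray
    t2 = (d12 * r1 - d11 * r2) / denom         # parameter along the right ray
    p1, p2 = c1 + t1 * d1, c2 + t2 * d2        # closest points on the two rays
    return (p1 + p2) / 2.0                     # reconstructed 3D point
```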

Claims (7)

1. A stereo vision matching method for a three-color vertical stripe pattern used in three-dimensional sensing measurement, characterized in that a structured light projection device projects a single structured light vertical stripe pattern consisting of three colors onto the surface of a measured object; two cameras record the deformed color stripe pattern; single-pixel lines of the stripes of the different colors in the two cameras are extracted and matched in turn; the single-pixel lines of the most widely spaced color stripes are matched first, the matching method being to align the centers of all such stripes in the left and right cameras and then to match their single-pixel lines line by line by searching for the nearest pixel point in one direction; the matching relationship is modeled, the modeling method being to model, in each horizontal line of the image, the relationship between the disparities of the matched pixels of the most widely spaced color stripe single-pixel lines of the left and right cameras and the corresponding pixel positions of those lines in the right camera; the single-pixel lines of the more closely spaced color stripes in the same horizontal line are matched through the established matching relationship model; the model is then updated through the matching relationship of the more closely spaced color stripes in the different horizontal lines; and finally the single-pixel lines of the most closely spaced color stripes are matched line by line using the updated model, thereby realizing stereo vision matching for the three-dimensional measurement of the object surface.
2. The method of claim 1, wherein the structured light vertical stripe pattern of three colors consists of a plurality of periodically repeating vertical stripes generated by binary coding, by cosine function coding, or by professional drawing software, and wherein in each period the number of stripes of each color is different and the spacing between adjacent vertical stripes is the same.
3. The method of claim 1, wherein the three color lines are extracted by converting the acquired image from the RGB domain to the HSV domain, segmenting by a threshold selection method, and extracting the single-pixel centerlines of the segmented stripes of the different colors as the color 1 lines, color 2 lines and color 3 lines respectively, wherein the average spacing between adjacent color 1 lines is the largest, the average spacing between adjacent color 2 lines is intermediate, and the average spacing between adjacent color 3 lines is the smallest.
4. The method of claim 1, wherein the color 1 line matching is performed by shifting the color 1 lines in the left camera to align with the color 1 lines in the right camera, and then matching the pixels of the color 1 lines in the left and right cameras line by line using a nearest-pixel search.
5. The method of claim 1, wherein the matching of the left and right camera color 2 lines means that the matching position in the left camera of each color 2 line pixel of the right camera is predicted from the established model relationship between the disparity and the right-camera pixel position in each horizontal line of the image, and the pixels on the color 2 lines of the left and right cameras are then matched line by line by nearest-pixel search.
6. The method of claim 1, wherein the model update means updating the model coefficients using the disparities of the matched pixels of the color 1 and color 2 lines of the left and right cameras together with the pixel positions of the matched color 1 and color 2 line pixels of the right camera.
7. The method of claim 1, wherein the matching of the left and right camera color 3 lines means that the matching position in the left camera of each color 3 line pixel of the right camera is predicted from the updated model relationship between the disparity and the right-camera pixel position in each horizontal line of the image, and the pixels on the color 3 lines of the left and right cameras are then matched line by line by nearest-pixel search.
CN201911295547.8A 2019-12-16 2019-12-16 Stereoscopic vision matching method based on three-color vertical color stripes Expired - Fee Related CN112991405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911295547.8A CN112991405B (en) 2019-12-16 2019-12-16 Stereoscopic vision matching method based on three-color vertical color stripes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911295547.8A CN112991405B (en) 2019-12-16 2019-12-16 Stereoscopic vision matching method based on three-color vertical color stripes

Publications (2)

Publication Number Publication Date
CN112991405A CN112991405A (en) 2021-06-18
CN112991405B true CN112991405B (en) 2022-10-28

Family

ID=76343445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911295547.8A Expired - Fee Related CN112991405B (en) 2019-12-16 2019-12-16 Stereoscopic vision matching method based on three-color vertical color stripes

Country Status (1)

Country Link
CN (1) CN112991405B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102445165A (en) * 2011-08-05 2012-05-09 南京航空航天大学 Stereo vision measurement method based on single-frame color coding grating
CN108038905A (en) * 2017-12-25 2018-05-15 北京航空航天大学 A kind of Object reconstruction method based on super-pixel

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140075163A (en) * 2012-12-11 2014-06-19 한국전자통신연구원 Method and apparatus for projecting pattern using structured-light

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102445165A (en) * 2011-08-05 2012-05-09 南京航空航天大学 Stereo vision measurement method based on single-frame color coding grating
CN108038905A (en) * 2017-12-25 2018-05-15 北京航空航天大学 A kind of Object reconstruction method based on super-pixel

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Renju Li et al., "Real-time structured light 3D scanning using colour," Int. J. Mechatronics and Automation, vol. 2, no. 4, pp. 262-269, 2012. *
Min-Gyu Park et al., "Stereo vision with image-guided structured-light pattern matching," Electronics Letters, vol. 51, no. 3, pp. 238-239, 2015. *
Qu Xuejun et al., "Active stereo vision matching of fringe structured light based on composite coding" (in Chinese), Computer Measurement & Control, vol. 22, no. 11, pp. 3712-3718, 2014. *
Gao Gui et al., "Measurement of laser remanufactured workpieces based on structured light stereo vision" (in Chinese), Journal of Nankai University (Natural Science Edition), vol. 44, no. 1, pp. 36-42, 2011. *

Also Published As

Publication number Publication date
CN112991405A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CA3040002C (en) A device and method for obtaining distance information from views
CN106709947B (en) Three-dimensional human body rapid modeling system based on RGBD camera
CN109920007B (en) Three-dimensional imaging device and method based on multispectral photometric stereo and laser scanning
CN104541127B (en) Image processing system and image processing method
CN101813461B (en) Absolute phase measurement method based on composite color fringe projection
CN104390607B (en) Phase encoding-based colorful structured light rapid three-dimensional measurement method
CN101833786B (en) Method and system for capturing and rebuilding three-dimensional model
CN101658347B (en) Method for obtaining dynamic shape of foot model
CN102445165B (en) Stereo vision measurement method based on single-frame color coding grating
CN105844633B (en) Single frames structure optical depth acquisition methods based on De sequence and phase code
CN108038905A (en) A kind of Object reconstruction method based on super-pixel
CN105069789B (en) Structure light dynamic scene depth acquisition methods based on coding grid template
CN109708578A (en) Device, method and system for measuring plant phenotype parameters
CN101853528A (en) Hand-held three-dimensional surface information extraction method and extractor thereof
CN108648277B (en) Rapid reconstruction method of laser radar point cloud data
CN108592823A (en) A kind of coding/decoding method based on binocular vision color fringe coding
CN109556535B (en) A one-step reconstruction method of 3D surface shape based on color fringe projection
CN105303616A (en) Embossment modeling method based on single photograph
CN108592822A (en) A kind of measuring system and method based on binocular camera and structure light encoding and decoding
CN102222361A (en) Method and system for capturing and reconstructing 3D model
CN108613637A (en) A kind of structured-light system solution phase method and system based on reference picture
CN101788274A (en) Method for 3D shape measurement of colourful composite grating
CN102364524A (en) A 3D reconstruction method and device based on variable illumination multi-viewpoint difference sampling
CN107860337A (en) Structural light three-dimensional method for reconstructing and device based on array camera
CN112991405B (en) Stereoscopic vision matching method based on three-color vertical color stripes

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 20221028)