WO2018133027A1 - 基于灰度约束的三维数字散斑的整像素搜索方法及装置 - Google Patents


Info

Publication number
WO2018133027A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
point
speckle
speckle image
tested
Prior art date
Application number
PCT/CN2017/071900
Other languages
English (en)
French (fr)
Inventor
彭翔
何进英
刘晓利
蔡泽伟
汤其剑
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学 (Shenzhen University)
Priority to PCT/CN2017/071900
Publication of WO2018133027A1

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 — Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes, on the object

Definitions

  • The invention belongs to the field of image processing, and in particular relates to an integer-pixel search method and device for three-dimensional digital speckle based on a grayscale constraint.
  • The Digital Speckle Correlation Method (DSCM) was proposed independently by Yamaguchi of Japan and by Peters et al. of the United States.
  • Its basic principle is to search for corresponding points using regional grayscale similarity, thereby measuring the displacement and deformation of objects.
  • Classical digital speckle correlation search methods include the two-parameter method, the coarse-fine search method, and the cross search method.
  • The traditional digital speckle correlation method can only measure in-plane displacement, so it is suitable only for measuring two-dimensional deformation fields.
  • With the development of stereo vision technology, it can be combined with digital speckle correlation methods for contour and deformation measurement of three-dimensional objects; this combination is called the three-dimensional digital speckle correlation method.
  • The basic procedure of the three-dimensional digital speckle correlation method is to first search for integer-pixel-level corresponding points using the digital speckle correlation method, then obtain more accurate sub-pixel corresponding-point positions using a sub-pixel optimization method, and finally reconstruct the three-dimensional coordinates of the object using binocular stereo vision. The search for integer-pixel-level corresponding points therefore directly affects the subsequently reconstructed three-dimensional coordinates, which makes this search step particularly important.
  • Existing search methods for integer-pixel corresponding points usually use the epipolar constraint of binocular stereo vision to reduce the correlation search from two dimensions to one, i.e., the search for corresponding points is restricted to the epipolar line rather than the entire image, which improves search efficiency to a degree. However, because the original epipolar lines are inclined, the correlation search is inconvenient; moreover, even with this added restriction, a correlation function must still be evaluated for every candidate point within the restricted range, so the computational load remains large, much time is consumed, and search efficiency is still low, which in turn limits the efficiency of establishing the three-dimensional coordinates of a three-dimensional object.
  • The invention provides an integer-pixel search method and device for three-dimensional digital speckle based on a grayscale constraint, and aims to solve the problem that existing search methods for integer-pixel corresponding points still require a large amount of computation to find corresponding points, resulting in long search times and low search efficiency.
  • The invention provides an integer-pixel search method for three-dimensional digital speckle based on a grayscale constraint, which comprises: projecting a random digital speckle pattern onto the surface of the object to be measured with a projection device, and acquiring left and right speckle images of the object with imaging devices placed on the two sides of the projection device;
  • calculating, via a neighborhood sub-window set for each pixel in the left and right speckle images, the average difference corresponding to each pixel, taking the region formed by the pixels whose average difference is greater than a preset value as the speckle region, treating the speckle region as the object region, and dividing each of the left and right speckle images into the object region and the background region; and further extracting and rectifying the epipolar lines, computing the disparity constraint range from the preset depth range of the object, and selecting matching points through the grayscale constraint operation so that the correlation function is evaluated only for those points, yielding the integer-pixel corresponding points.
  • The invention also provides an integer-pixel search device for three-dimensional digital speckle based on a grayscale constraint, comprising:
  • an acquisition module, configured to project a random digital speckle pattern onto the surface of the object to be measured through a projection device, and to acquire left and right speckle images of the object with imaging devices placed on the two sides of the projection device; and
  • an image processing module, configured to perform the following steps:
  • With the integer-pixel search method and device for three-dimensional digital speckle based on a grayscale constraint provided herein, a random digital speckle pattern is projected onto the surface of the object to be measured by a projection device, imaging devices placed on the two sides of the projection device acquire left and right speckle images of the object, the average difference corresponding to each pixel is calculated via a neighborhood sub-window set for each pixel in the left and right speckle images, and the region formed by the pixels whose average difference is greater than a preset value is taken as the speckle region.
  • The speckle region is treated as the object region, and each of the left and right speckle images is divided into the object region and the background region. The first and second epipolar lines are extracted from the divided left and right speckle images; the first epipolar line is corrected to be parallel to the horizontal axis of the coordinate system of the divided left speckle image, the second epipolar line is corrected to be parallel to the horizontal axis of the coordinate system of the divided right speckle image, and the two are corrected to be straight lines on the same horizontal line, yielding the projection-corrected left and right speckle images, the first and second epipolar lines being conjugate epipolar lines. According to the preset depth range of the object to be measured, the disparity constraint range of the projection-corrected right speckle image is calculated; a pixel in the speckle region of the projection-corrected left speckle image is selected as the pixel under test, candidate pixels lying in the same row and within the disparity constraint range are selected on the projection-corrected right speckle image, a grayscale constraint operation is performed on the gray value of the pixel under test and the gray values of the candidates, and matching points are selected from the candidates, so that the correlation function is evaluated for the matching points and the pixel under test to obtain the integer-pixel corresponding point.
  • The computed disparity constraint range thus removes part of the computation, and the grayscale constraint operation further eliminates candidates within the disparity constraint range for which no correlation function evaluation is needed. Compared with the prior art, the number of correlation function evaluations can be greatly reduced, shortening the correlation computation time; the integer-pixel corresponding points can be found quickly, the efficiency of searching for corresponding points is improved, and the efficiency of establishing the three-dimensional coordinates of the three-dimensional object can thereby be improved.
  • FIG. 1 is a schematic flowchart of the implementation of an integer-pixel search method for three-dimensional digital speckle based on a grayscale constraint according to a first embodiment of the present invention;
  • FIG. 2 is a schematic diagram of the positions of the projection device and the imaging devices according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of a left speckle image according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of the left and right speckle images before projection correction according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the left and right speckle images after projection correction according to an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of searching for integer-pixel corresponding points along the horizontal epipolar line (the second epipolar line) in the projection-corrected right speckle image according to an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of searching for integer-pixel corresponding points along the horizontal epipolar line (the second epipolar line) within the disparity constraint range in the projection-corrected right speckle image according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of an integer-pixel search device for three-dimensional digital speckle based on a grayscale constraint according to a second embodiment of the present invention.
  • Referring to FIG. 1, FIG. 1 is a schematic flowchart of the implementation of an integer-pixel search method for three-dimensional digital speckle based on a grayscale constraint according to a first embodiment of the present invention. The method can be applied to an electronic device with image processing capability, such as a computer.
  • The integer-pixel search method based on grayscale-constrained three-dimensional digital speckle shown in FIG. 1 mainly includes the following steps:
  • As shown in FIG. 2, FIG. 2 is a schematic diagram of the positions of the projection device and the imaging devices.
  • Two imaging devices, such as cameras, are located on the two sides of the projection device.
  • For ease of description, in all embodiments of the invention the imaging device located on the left side of the projection device is called the left imaging device, and the imaging device located on the right side is called the right imaging device.
  • The image acquired by the left imaging device is the left speckle image, and the image acquired by the right imaging device is the right speckle image.
  • The projection device and the two imaging devices constitute a conventional binocular stereo vision setup.
  • FIG. 3 shows a left speckle image. As shown in FIG. 3, the area bearing the speckle pattern is the object to be measured.
  • The speckle region is treated as the object region, and the object region and the background region are divided in each of the left and right speckle images.
  • Calculating the average difference corresponding to each pixel via the neighborhood sub-window set for each pixel in the left and right speckle images is specifically as follows:
  • the average difference AD is computed from g(x, y), the gray values of all pixels in the neighborhood sub-window, with AD denoting the average difference.
  • In the left speckle image, each pixel serves in turn as the target pixel for which the average difference is calculated, so that each pixel corresponds to one AD value; likewise, in the right speckle image each pixel serves in turn as the target pixel, so that each pixel corresponds to one AD value.
  • In this embodiment, the preset value is 3.
  • An average difference is calculated for each pixel, and the region formed by the pixels whose average difference is greater than 3 is taken as the speckle region: in the left speckle image this is the left speckle region, i.e., the area of the object to be measured in the left speckle image,
  • and in the right speckle image it is the right speckle region, i.e., the area of the object to be measured in the right speckle image.
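The average-difference segmentation described above can be sketched as follows. The patent's exact AD formula is not reproduced in this text, so the sketch assumes AD is the mean absolute deviation of the gray values in each neighborhood sub-window from the window mean; the function names and the 5×5 window size are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def average_difference_map(img, win=5):
    """Per-pixel average difference (AD) over a (win x win) neighborhood
    sub-window. Assumed form: mean absolute deviation of the window's
    gray values from the window mean (hypothetical reconstruction)."""
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    ad = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + win, x:x + win]
            ad[y, x] = np.mean(np.abs(window - window.mean()))
    return ad

def segment_speckle_region(img, win=5, threshold=3.0):
    """Pixels whose AD exceeds the preset value (3 in the embodiment)
    form the speckle (object) region; the rest is background."""
    return average_difference_map(img, win) > threshold
```

A flat background produces AD near zero, while a projected speckle pattern produces large local gray fluctuations, so thresholding AD at the preset value separates the object region from the background as described.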
  • The first epipolar line and the second epipolar line are conjugate epipolar lines.
  • The coordinate system is a pixel-level coordinate system.
  • In the pixel-level coordinate system, u denotes the horizontal axis and v the vertical axis, and the origin is the pixel in the upper-left corner of the image, i.e., the first pixel of the image.
  • The first and second epipolar lines as initially extracted are inclined, so the images are rectified by correcting the epipolar lines.
  • The specific correction is as follows:
  • The epipoles of the left and right speckle images are each mapped to infinity along the u-axis direction by a matrix transformation, so that the first and second epipolar lines are converted from inclined lines into lines parallel to the horizontal axis (u-axis) of the pixel-level coordinate system. The adjustment coefficients for the vertical positions of the epipolar lines are then computed from a system of linear equations, in which:
  • (v_l1, v_l2, ..., v_ln) is the set of intersections of the first epipolar lines with the v-axis in the coordinate system of the left speckle image,
  • (v_r1, v_r2, ..., v_rn) is the set of intersections of the second epipolar lines with the v-axis in the coordinate system of the right speckle image,
  • and k and b are the adjustment coefficients.
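Since the patent's system of linear equations for k and b is not reproduced in this text, the following is a hypothetical least-squares reconstruction: once the epipoles are sent to infinity the epipolar lines are horizontal, and a linear map k·v_r + b of the right image's v-intercepts onto the left image's v-intercepts aligns each conjugate pair onto the same row. The function name is an illustrative assumption.

```python
import numpy as np

def vertical_adjustment_coeffs(v_left, v_right):
    """Fit k, b such that k * v_r + b ≈ v_l in the least-squares sense,
    where v_left / v_right are the v-axis intercepts of conjugate
    epipolar lines (assumed formulation, not the patent's exact system)."""
    v_left = np.asarray(v_left, dtype=np.float64)
    v_right = np.asarray(v_right, dtype=np.float64)
    # Design matrix [v_r, 1] for the affine model k * v_r + b.
    A = np.column_stack([v_right, np.ones_like(v_right)])
    (k, b), *_ = np.linalg.lstsq(A, v_left, rcond=None)
    return k, b
```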
  • the projection correction expression for the left speckle image is:
  • the projection correction expression for the right speckle image is:
  • FIG. 4 shows the left and right speckle images before projection correction;
  • FIG. 5 shows the left and right speckle images after projection correction.
  • Calculating the disparity constraint range of the projection-corrected right speckle image is specifically as follows:
  • FIG. 6 is a schematic diagram of searching for integer-pixel corresponding points along the horizontal epipolar line (the second epipolar line) in the projection-corrected right speckle image;
  • FIG. 7 is a schematic diagram of searching along the horizontal epipolar line within the disparity constraint range in the projection-corrected right speckle image.
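The patent obtains the disparity constraint range by projecting the nearest and farthest points of the preset depth range onto the second epipolar line. For a rectified stereo pair this reduces to the standard relation d = f·B / Z, which is an assumption here (the patent does not state this formula explicitly); the function and parameter names are illustrative.

```python
def disparity_range(f_pixels, baseline, z_near, z_far):
    """Disparity constraint range for a rectified stereo pair.

    Assumed relation d = f * B / Z: the nearest point of the preset depth
    range gives the largest disparity, the farthest point the smallest.
    """
    d_max = f_pixels * baseline / z_near  # nearest point -> largest disparity
    d_min = f_pixels * baseline / z_far   # farthest point -> smallest disparity
    return d_min, d_max
```

Only candidates whose column offset along the second epipolar line falls inside [d_min, d_max] then need to be considered, which is the first of the two pruning steps in the method.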
  • S105: Select a pixel in the speckle region of the projection-corrected left speckle image as the pixel under test, and on the projection-corrected right speckle image select candidate pixels that lie in the same row as the pixel under test and within the disparity constraint range; perform a grayscale constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels, and select matching points from the candidates, so that the correlation function is evaluated only for the matching points and the pixel under test, yielding the integer-pixel corresponding point.
  • Selecting a matching point from the candidate pixels is specifically as follows:
  • the candidate pixel whose absolute gray-value difference satisfies the constraint is selected as a matching point.
  • The formula of the grayscale constraint operation is:
  • The grayscale constraint threshold is chosen as follows: if the speckle images are acquired without electrical-signal synchronization, the grayscale constraint threshold is 20; if they are acquired with electrical-signal synchronization, the grayscale constraint threshold is 12.
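The grayscale constraint can be sketched as a simple pre-filter over the candidates on the epipolar row. The exact constraint formula is elided in this text, so the absolute-difference form |f − g| ≤ T is an assumed reconstruction consistent with the surrounding description; the thresholds 20 and 12 are the ones stated in the embodiment.

```python
def gray_constraint_candidates(left_gray, right_row, u_range, threshold=20):
    """Keep only candidates on the right epipolar row whose gray value is
    within `threshold` of the left pixel's gray value.

    threshold = 20 for non-electrically synchronized acquisition, 12 for
    electrically synchronized acquisition (per the embodiment); the
    |f - g| <= T form itself is an assumption.
    """
    return [u for u in u_range
            if abs(int(right_row[u]) - int(left_gray)) <= threshold]
```

Only the surviving candidates are passed on to the (much more expensive) correlation function, which is where the reported speedup comes from.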
  • The disparity constraint range reduces the number of pixels that must be traversed in the right speckle image: only the pixels within the disparity constraint range need to be traversed.
  • The grayscale constraint operation further narrows the traversal range beyond the disparity constraint range; that is, the traversal range after the grayscale constraint operation is a segment of the horizontal epipolar line shorter than the epipolar-line segment within the disparity constraint range in FIG. 7.
  • The grayscale constraint operation thus further reduces the pixels in the right speckle image for which the correlation function must be evaluated, which reduces the number of correlation function evaluations and increases the efficiency of searching for integer-pixel corresponding points.
  • After step S105 the method further includes:
  • extracting the pixel under test in the projection-corrected left speckle image, performing a correlation function operation on the extracted pixel under test and the matching points selected by the grayscale constraint operation, and calculating the correlation coefficient, where
  • C is the correlation coefficient,
  • m is the side length of the preset sub-window,
  • f(x_i, y_j) is the gray value of a pixel in the preset sub-window centered on the pixel under test in the projection-corrected left speckle image,
  • g(x'_i, y'_j) is the gray value of a pixel in the preset sub-window centered on the matching point in the projection-corrected right speckle image, and the remaining terms are the average gray values of all pixels in the preset sub-windows of the projection-corrected left and right speckle images, respectively.
  • Every pixel in the speckle region serves in turn as the pixel under test; in other words, the corresponding integer-pixel point is computed for each pixel in the speckle region.
  • Each time a pixel under test is extracted, a correlation function operation is performed with the matching points selected by the grayscale constraint operation, where those matching points are the pixels, among the candidates within the disparity constraint range, that satisfy the grayscale constraint formula.
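The correlation formula itself is elided in this text; zero-mean normalized cross-correlation (ZNCC) is the standard form consistent with the terms described above (window gray values minus window means), so the following is an assumed reconstruction, not the patent's verbatim formula.

```python
import numpy as np

def zncc(left, right, cx_l, cy_l, cx_r, cy_r, m):
    """Zero-mean normalized cross-correlation between the (m x m) preset
    sub-windows centered on the pixel under test (left image) and on a
    matching candidate (right image). Assumed reconstruction of the
    patent's elided correlation coefficient C."""
    r = m // 2
    f = left[cy_l - r:cy_l + r + 1, cx_l - r:cx_l + r + 1].astype(np.float64)
    g = right[cy_r - r:cy_r + r + 1, cx_r - r:cx_r + r + 1].astype(np.float64)
    f -= f.mean()  # subtract the left window's average gray value
    g -= g.mean()  # subtract the right window's average gray value
    denom = np.sqrt((f ** 2).sum() * (g ** 2).sum())
    if denom == 0:
        return 0.0
    return float((f * g).sum() / denom)
```

The candidate maximizing C over the matching points is then taken as the integer-pixel corresponding point.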
  • In one experiment, the search time using only the disparity constraint was 7.24 s;
  • with a grayscale constraint threshold of 20,
  • the search time after applying the grayscale constraint was 2.15 s, a reduction of 5.09 s, improving efficiency by about a factor of two. As the preset sub-window grows, the time saving becomes more pronounced, while the final three-dimensional reconstruction results remain identical.
  • In some configurations the efficiency can be increased by about a factor of four compared with using only the disparity constraint.
  • In this embodiment, a random digital speckle pattern is projected onto the surface of the object to be measured by the projection device, and the left and right speckle images of the object are acquired by the imaging devices placed on the two sides of the projection device.
  • The neighborhood sub-window set for each pixel in the left and right speckle images is used to calculate the average difference corresponding to each pixel, and the region formed by the pixels whose average difference is greater than the preset value is taken as the speckle region.
  • The speckle region is treated as the object region, and the object region and the background region are divided in each of the left and right speckle images; the first and second epipolar lines are extracted from the divided left and right speckle images,
  • and the images are rectified so that the first epipolar line is parallel to the horizontal axis of the coordinate system of the divided left speckle image and the second epipolar line is parallel to the horizontal axis of the coordinate system of the divided right speckle image.
  • According to the preset depth range, the disparity constraint range of the projection-corrected right speckle image is calculated; a pixel in the speckle region of the projection-corrected left speckle image is selected as the pixel under test, and candidate pixels are selected on the projection-corrected right speckle image.
  • The grayscale constraint operation then eliminates the candidates within the disparity constraint range for which no correlation function evaluation is needed. Compared with the prior art, the number of correlation function evaluations can be greatly reduced, shortening the correlation computation time; the integer-pixel corresponding points can be searched quickly, the efficiency of searching for corresponding points is improved, and the efficiency of establishing the three-dimensional coordinates of the three-dimensional object is thereby improved.
  • FIG. 8 is a schematic structural diagram of an integer-pixel search device for three-dimensional digital speckle based on a grayscale constraint according to a second embodiment of the present invention. For convenience of description, only the parts related to this embodiment are shown.
  • The integer-pixel search device based on grayscale-constrained three-dimensional digital speckle illustrated in FIG. 8 may be the execution body of the integer-pixel search method provided in the foregoing embodiment shown in FIG. 1.
  • The device illustrated in FIG. 8 mainly includes an acquisition module 801, an image processing module 802, and a calculation module 803. These functional modules are described in detail as follows:
  • The image processing module 802 is configured to calculate, via a neighborhood sub-window set for each pixel in the left and right speckle images, the average difference corresponding to each pixel; the region formed by the pixels whose average difference is greater than a preset value
  • is taken as the speckle region;
  • the speckle region is treated as the object region;
  • and the object region and the background region are divided in each of the left and right speckle images.
  • The image processing module 802 is further configured to select a target pixel in the left speckle image and in the right speckle image, set a neighborhood sub-window with the target pixel as its center, and calculate the average difference over that neighborhood.
  • The average difference AD is computed from g(x, y), the gray values of all pixels in the neighborhood sub-window, with AD denoting the average difference.
  • In the left speckle image, each pixel serves in turn as the target pixel for which the average difference is calculated, so that each pixel corresponds to one AD value; likewise in the right speckle image, so that each pixel corresponds to one AD value.
  • In this embodiment, the preset value is 3.
  • An average difference is calculated for each pixel, and the region formed by the pixels whose average difference is greater than 3 is taken as the speckle region: in the left speckle image this is the left speckle region,
  • and in the right speckle image it is the right speckle region, i.e., the area of the object to be measured in the right speckle image.
  • The image processing module 802 is further configured to extract the first and second epipolar lines from the divided left and right speckle images, and to correct the first epipolar line to be parallel to the horizontal axis of the coordinate system of the divided left speckle image.
  • The first epipolar line and the second epipolar line are conjugate epipolar lines.
  • The coordinate system is a pixel-level coordinate system.
  • In the pixel-level coordinate system, u denotes the horizontal axis and v the vertical axis, and the origin is the pixel in the upper-left corner of the image, i.e., the first pixel of the image.
  • The first and second epipolar lines as initially extracted are inclined, so the images are rectified by correcting the epipolar lines.
  • The specific correction is as follows:
  • The epipoles of the left and right speckle images are each mapped to infinity along the u-axis direction by a matrix transformation, so that the first and second epipolar lines are converted from inclined lines into lines parallel to the horizontal axis (u-axis) of the pixel-level coordinate system. The adjustment coefficients for the vertical positions of the epipolar lines are then computed from a system of linear equations.
  • The projection correction expression for the left speckle image is:
  • The projection correction expression for the right speckle image is:
  • (u'_r, v'_r) are the coordinates of each pixel in the corrected right speckle image,
  • (u_r, v_r) are the coordinates of each pixel in the right speckle image,
  • (u_r0, v_r0) are the coordinates of the epipole in the right speckle image,
  • and k and b are the vertical adjustment coefficients of the epipolar lines.
  • The image processing module 802 is further configured to calculate the disparity constraint range of the projection-corrected right speckle image according to the preset depth range of the object to be measured.
  • Specifically, the image processing module 802 selects, according to the preset depth range of the object to be measured, the nearest point and the farthest point corresponding to each pixel in the projection-corrected left speckle image, projects the nearest and farthest points onto the second epipolar line of the projection-corrected right speckle image, and takes the range between the projected points on the second epipolar line as the disparity constraint range.
  • The image processing module 802 is further configured to select a pixel in the speckle region of the projection-corrected left speckle image as the pixel under test, select on the projection-corrected right speckle image candidate pixels that lie in the same row as the pixel under test and within the disparity constraint range, perform a grayscale constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels, and select matching points from the candidates, so that the correlation function is evaluated for the matching points and the pixel under test, yielding the integer-pixel corresponding point.
  • The image processing module 802 is further configured to perform the following steps:
  • the candidate pixel whose absolute gray-value difference satisfies the constraint is selected as a matching point;
  • the grayscale constraint threshold is chosen as follows: if the speckle images are acquired without electrical-signal synchronization, the grayscale constraint threshold is 20; if they are acquired with electrical-signal synchronization, the grayscale constraint threshold is 12.
  • The device further includes a calculation module 803.
  • The calculation module 803 is configured to extract the pixel under test in the projection-corrected left speckle image, perform a correlation function operation on the extracted pixel under test and the matching points selected by the grayscale constraint operation, and calculate the correlation coefficient.
  • The calculation module 803 is further configured to select the point corresponding to the maximum value of the correlation coefficient as the integer-pixel corresponding point, where
  • C is the correlation coefficient,
  • m is the side length of the preset sub-window,
  • f(x_i, y_j) is the gray value of a pixel in the preset sub-window centered on the pixel under test in the projection-corrected left speckle image,
  • g(x'_i, y'_j) is the gray value of a pixel in the preset sub-window centered on the matching point in the projection-corrected right speckle image, and the remaining terms are the average gray values of all pixels in the preset sub-windows of the projection-corrected left and right speckle images, respectively.
  • The acquisition module 801 projects a random digital speckle pattern onto the surface of the object to be measured through the projection device, and acquires the left and right speckle images of the object via the imaging devices placed on the two sides of the projection device.
  • The image processing module 802 calculates the average difference corresponding to each pixel via the neighborhood sub-window set for each pixel in the left and right speckle images; the region formed by the pixels whose average difference is greater than the preset value
  • is taken as the speckle region, the speckle region is treated as the object region, the object region and the background region are divided in each of the left and right speckle images, and the first and second epipolar lines are extracted from the divided left and right speckle images.
  • The integrated modules, if implemented in the form of software functional modules and sold or used as independent products, may be stored in a computer-readable storage medium.
  • The technical solution of the present invention, in essence or in the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • The foregoing storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an integer-pixel search method and device for three-dimensional digital speckle based on a grayscale constraint. The method includes: calculating, according to a preset depth range of the object to be measured, the disparity constraint range of the projection-corrected right speckle image; selecting a pixel in the speckle region of the projection-corrected left speckle image as the pixel under test; selecting, on the projection-corrected right speckle image, candidate pixels that lie in the same row as the pixel under test and within the disparity constraint range; performing a grayscale constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels to select matching points from the candidates; and evaluating the correlation function for the matching points and the pixel under test to obtain the integer-pixel corresponding point. This greatly reduces the number of correlation function evaluations, thereby shortening the computation time; the integer-pixel corresponding points can be found quickly, and the efficiency of searching for corresponding points is improved.

Description

Integer-Pixel Search Method and Device for Three-Dimensional Digital Speckle Based on a Grayscale Constraint

Technical Field
The invention belongs to the field of image processing, and in particular relates to an integer-pixel search method and device for three-dimensional digital speckle based on a grayscale constraint.
Background Art
The Digital Speckle Correlation Method (DSCM) was proposed independently by Yamaguchi of Japan and by Peters et al. of the United States. Its basic principle is to search for corresponding points using regional grayscale similarity, thereby measuring object displacement and deformation. Classical digital speckle correlation search methods include the two-parameter method, the coarse-fine search method, and the cross search method. The traditional digital speckle correlation method can only measure in-plane displacement, so it is suitable only for measuring two-dimensional deformation fields. With the development of stereo vision technology, combining it with digital speckle correlation methods allows contour and deformation measurement of three-dimensional objects; this combination is called the three-dimensional digital speckle correlation method. The basic procedure of the three-dimensional digital speckle correlation method is to first search for integer-pixel-level corresponding points using the digital speckle correlation method, then obtain more accurate sub-pixel corresponding-point positions using a sub-pixel optimization method, and finally reconstruct the three-dimensional coordinates of the object using binocular stereo vision. The search for integer-pixel-level corresponding points therefore directly affects the subsequently reconstructed three-dimensional coordinates, which makes this search step particularly important.
Existing search methods for integer-pixel corresponding points usually use the epipolar constraint of binocular stereo vision to reduce the correlation search from two dimensions to one, i.e., the search for corresponding points is restricted to the epipolar line rather than the entire image, which improves search efficiency to a degree. Because the original epipolar lines are inclined, the correlation search is inconvenient; moreover, even with the added search restriction, a correlation function must still be evaluated for every candidate point within the restricted range, so the computational load remains large, much time is consumed, and search efficiency is still low, which in turn limits the efficiency of establishing the three-dimensional coordinates of a three-dimensional object.
Summary of the Invention
The invention provides an integer-pixel search method and device for three-dimensional digital speckle based on a grayscale constraint, aiming to solve the problem that existing search methods for integer-pixel corresponding points still require a large amount of computation to find corresponding points, resulting in long search times and low search efficiency.
The integer-pixel search method for three-dimensional digital speckle based on a grayscale constraint provided by the invention comprises:
projecting a random digital speckle pattern onto the surface of the object to be measured with a projection device, and acquiring left and right speckle images of the object with imaging devices placed on the two sides of the projection device;
calculating, via a neighborhood sub-window set for each pixel in the left and right speckle images, the average difference corresponding to each pixel, taking the region formed by the pixels whose average difference is greater than a preset value as the speckle region, treating the speckle region as the object region, and dividing each of the left and right speckle images into the object region and the background region;
extracting the first and second epipolar lines from the divided left and right speckle images, correcting the first epipolar line to be parallel to the horizontal axis of the coordinate system of the divided left speckle image, correcting the second epipolar line to be parallel to the horizontal axis of the coordinate system of the divided right speckle image, and correcting the first and second epipolar lines to be straight lines on the same horizontal line, thereby obtaining the projection-corrected left and right speckle images;
calculating, according to the preset depth range of the object to be measured, the disparity constraint range of the projection-corrected right speckle image; and
selecting a pixel in the speckle region of the projection-corrected left speckle image as the pixel under test, selecting on the projection-corrected right speckle image candidate pixels that lie in the same row as the pixel under test and within the disparity constraint range, performing a grayscale constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels, and selecting matching points from the candidates, so that the correlation function is evaluated for the matching points and the pixel under test to obtain the integer-pixel corresponding point.
The integer-pixel search device for three-dimensional digital speckle based on a grayscale constraint provided by the invention comprises:
an acquisition module, configured to project a random digital speckle pattern onto the surface of the object to be measured through a projection device, and to acquire left and right speckle images of the object with imaging devices placed on the two sides of the projection device; and
an image processing module, configured to perform the following steps:
calculating, via a neighborhood sub-window set for each pixel in the left and right speckle images, the average difference corresponding to each pixel, taking the region formed by the pixels whose average difference is greater than a preset value as the speckle region, treating the speckle region as the object region, and dividing each of the left and right speckle images into the object region and the background region;
extracting the first and second epipolar lines from the divided left and right speckle images, correcting the first epipolar line to be parallel to the horizontal axis of the coordinate system of the divided left speckle image, correcting the second epipolar line to be parallel to the horizontal axis of the coordinate system of the divided right speckle image, and correcting the first and second epipolar lines to be straight lines on the same horizontal line, thereby obtaining the projection-corrected left and right speckle images;
calculating, according to the preset depth range of the object to be measured, the disparity constraint range of the projection-corrected right speckle image; and
selecting a pixel in the speckle region of the projection-corrected left speckle image as the pixel under test, selecting on the projection-corrected right speckle image candidate pixels that lie in the same row as the pixel under test and within the disparity constraint range, performing a grayscale constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels, and selecting matching points from the candidates, so that the correlation function is evaluated for the matching points and the pixel under test to obtain the integer-pixel corresponding point.
With the gray-constraint-based integer-pixel search method and device for three-dimensional digital speckle provided by the present invention, a random digital speckle pattern is projected onto the surface of the object under test by a projection device, and left and right speckle images containing the object under test are acquired by imaging devices placed on either side of the projection device; for each pixel in the left and right speckle images an average difference value is calculated by means of a neighborhood sub-window set for that pixel, the region formed by pixels whose average difference exceeds a preset value is taken as the speckle region, the speckle region is taken as the object region, and each of the left and right speckle images is divided into the object region and a background region; a first epipolar line and a second epipolar line are extracted from the divided left and right speckle images respectively, the first epipolar line is rectified to be parallel to the horizontal axis of the coordinate system of the divided left speckle image, the second epipolar line is rectified to be parallel to the horizontal axis of the coordinate system of the divided right speckle image, and the first and second epipolar lines are rectified into straight lines lying on the same horizontal line, giving the rectified left speckle image and the rectified right speckle image, the first and second epipolar lines being conjugate epipolar lines; a disparity constraint range of the rectified right speckle image is calculated according to a preset depth range of the object under test; a pixel in the speckle region of the rectified left speckle image is selected as the pixel under test, candidate pixels lying in the same row as the pixel under test and within the disparity constraint range are selected on the rectified right speckle image, a gray-level constraint operation is performed on the gray value of the pixel under test and the gray values of the candidate pixels to select match points from among the candidate pixels, and a correlation-function operation is performed on the match points and the pixel under test to obtain the integer-pixel corresponding point. In this way the calculated disparity constraint range removes part of the computation, and the gray-level constraint operation further excludes the candidate pixels within the disparity constraint range that need not undergo the correlation-function operation; compared with the prior art, this greatly reduces the number of correlation-function operations, shortens their duration, allows the integer-pixel corresponding point to be found quickly, and improves the efficiency of searching for the corresponding point, thereby improving the efficiency of establishing the three-dimensional coordinates of a three-dimensional object.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, the drawings described below show only some embodiments of the present invention.

Fig. 1 is a schematic flowchart of the implementation of the gray-constraint-based integer-pixel search method for three-dimensional digital speckle provided by the first embodiment of the present invention;

Fig. 2 is a schematic diagram of the positions of the projection device and the imaging devices provided by an embodiment of the present invention;

Fig. 3 is a schematic diagram of the left speckle image provided by an embodiment of the present invention;

Fig. 4 is a schematic diagram of the left and right speckle images before rectification provided by an embodiment of the present invention;

Fig. 5 is a schematic diagram of the left and right speckle images after rectification provided by an embodiment of the present invention;

Fig. 6 is a schematic diagram of searching for the integer-pixel corresponding point along the horizontal epipolar line (the second epipolar line) in the rectified right speckle image, provided by an embodiment of the present invention;

Fig. 7 is a schematic diagram of searching for the integer-pixel corresponding point along the portion of the horizontal epipolar line (the second epipolar line) within the disparity constraint range in the rectified right speckle image, provided by an embodiment of the present invention;

Fig. 8 is a schematic structural diagram of the gray-constraint-based integer-pixel search device for three-dimensional digital speckle provided by the second embodiment of the present invention.
Detailed Description of the Embodiments

To make the objects, features and advantages of the present invention more apparent and understandable, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.

Referring to Fig. 1, Fig. 1 is a schematic flowchart of the implementation of the gray-constraint-based integer-pixel search method for three-dimensional digital speckle provided by the first embodiment of the present invention. The method can be applied to an electronic device with image-processing capability, such as a computer, and mainly comprises the following steps:

S101: projecting a random digital speckle pattern onto the surface of the object under test by a projection device, and acquiring left and right speckle images containing the object under test by imaging devices placed on either side of the projection device.

As shown in Fig. 2, Fig. 2 is a schematic diagram of the positions of the projection device and the imaging devices. It can be seen from Fig. 2 that two imaging devices, such as cameras, are located on either side of the projection device. It should be noted that, for ease of description, in all embodiments of the present invention the imaging device on the left side of the projection device is called the left imaging device and the one on the right side the right imaging device; the image acquired by the left imaging device is the left speckle image and the image acquired by the right imaging device is the right speckle image. The projection device and the two imaging devices form a conventional binocular stereo vision setup. Fig. 3 shows a left speckle image; as shown in Fig. 3, the region bearing the speckle pattern is the object under test.
S102: calculating, for each pixel in the left and right speckle images, an average difference value by means of a neighborhood sub-window set for that pixel, taking the region formed by pixels whose average difference exceeds a preset value as the speckle region, taking the speckle region as the object region, and dividing each of the left and right speckle images into the object region and a background region.

Further, calculating for each pixel in the left and right speckle images the average difference value by means of the neighborhood sub-window set for that pixel specifically comprises:

selecting target pixels in the left speckle image and the right speckle image respectively, setting a neighborhood sub-window centered on each target pixel, and calculating the average difference of the gray values of all pixels within the neighborhood sub-window;

the formula for the average difference being:

Figure PCTCN2017071900-appb-000001

where g(x, y) denotes the gray values of all pixels within the neighborhood sub-window and AD is the average difference.

It should be noted that in the left speckle image every pixel serves in turn as the target pixel for which an average difference is calculated, so that each pixel has a corresponding AD value; likewise, in the right speckle image every pixel serves as the target pixel and each pixel there has a corresponding AD value.

Preferably, the preset value is 3. In the left speckle image an average difference is first calculated for every pixel, and the region formed by the pixels whose average difference exceeds 3 is taken as the speckle region, which is the region of the object under test in the left speckle image; likewise, in the right speckle image an average difference is first calculated for every pixel, and the region formed by the pixels whose average difference exceeds 3 is taken as the speckle region, which is the region of the object under test in the right speckle image.
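The speckle-region segmentation of S102 can be sketched as follows. This is a minimal sketch, not the patent's implementation: the AD formula appears only as an image in this extraction, so the common mean-absolute-deviation form is assumed, and the window size `win` is an assumption, while the preset value 3 follows the text.

```python
import numpy as np

def speckle_region_mask(img, win=5, preset=3):
    """Mark pixels whose neighborhood 'average difference' (AD) exceeds a preset value.

    ASSUMPTION: AD is taken as the mean absolute deviation of the window's gray
    values from the window mean (the exact formula is an image in the source).
    `preset=3` follows the text; `win` is a hypothetical window size.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    r = win // 2
    mask = np.zeros((h, w), dtype=bool)
    for y in range(r, h - r):
        for x in range(r, w - r):
            sub = img[y - r:y + r + 1, x - r:x + r + 1]
            ad = np.mean(np.abs(sub - sub.mean()))  # mean absolute deviation (assumed AD)
            mask[y, x] = ad > preset
    return mask
```

A flat background gives AD = 0 and is classified as background, while a high-contrast speckle patch gives a large AD and is classified as the object (speckle) region.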
S103: extracting a first epipolar line and a second epipolar line from the divided left and right speckle images respectively, rectifying the first epipolar line to be parallel to the horizontal axis of the coordinate system of the divided left speckle image, rectifying the second epipolar line to be parallel to the horizontal axis of the coordinate system of the divided right speckle image, and rectifying the first epipolar line and the second epipolar line into straight lines lying on the same horizontal line, to obtain the rectified left speckle image and the rectified right speckle image.

The first epipolar line and the second epipolar line are conjugate epipolar lines. It should be noted that all coordinate systems in the embodiments of the present invention are pixel-level coordinate systems; in the field of image processing, a pixel-level coordinate system uses u for the horizontal axis and v for the vertical axis, with the origin at the top-left pixel of the image, i.e., the image's first pixel.

The first and second epipolar lines as initially extracted are inclined, so the images are rectified by rectifying the epipolar lines, specifically as follows:

First, matrix transformations move the epipoles of the left and right speckle images to infinity along the u-axis, so that the first and second epipolar lines are turned from inclined lines into lines parallel to the horizontal (u) axis of the pixel-level coordinate system. The adjustment coefficients for the vertical positions of the epipolar lines are then calculated; the system of linear equations for these coefficients is:

Figure PCTCN2017071900-appb-000002

where (v_l1, v_l2, …, v_ln) is the set of intersections of the first epipolar lines with the v-axis of the left speckle image's coordinate system, (v_r1, v_r2, …, v_rn) is the set of intersections of the second epipolar lines with the v-axis of the right speckle image's coordinate system, and k and b are the adjustment coefficients.

For the rectification of the left and right speckle images, the rectification expression for the left speckle image is:

Figure PCTCN2017071900-appb-000003

Figure PCTCN2017071900-appb-000004

where (u′_l, v′_l) are the coordinates of each pixel in the rectified left speckle image, (u_l, v_l) are the coordinates of each pixel in the left speckle image, and (u_l0, v_l0) are the epipole coordinates in the left speckle image.

The rectification expression for the right speckle image is:

Figure PCTCN2017071900-appb-000005

where (u′_r, v′_r) are the coordinates of each pixel in the rectified right speckle image, (u_r, v_r) are the coordinates of each pixel in the right speckle image, (u_r0, v_r0) are the epipole coordinates in the right speckle image, and k and b are the vertical adjustment coefficients of the epipolar lines. As shown in Figs. 4 and 5, Fig. 4 shows the left and right speckle images before rectification and Fig. 5 shows them after rectification.
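The fit of the vertical adjustment coefficients k and b can be sketched as follows. The exact linear system appears only as an image in this extraction, so the relation v_r ≈ k·v_l + b over the conjugate intercept pairs, solved in the least-squares sense, is an assumption consistent with the surrounding description.

```python
import numpy as np

def fit_vertical_adjustment(vl, vr):
    """Least-squares fit of the vertical adjustment coefficients k, b.

    ASSUMPTION: the source's linear system (an image here) relates the v-axis
    intercepts of conjugate epipolar lines as vr = k*vl + b; we solve for the
    (k, b) minimizing the squared residual over all intercept pairs.
    """
    vl = np.asarray(vl, dtype=np.float64)
    vr = np.asarray(vr, dtype=np.float64)
    A = np.column_stack([vl, np.ones(len(vl))])  # design matrix [vl | 1]
    (k, b), *_ = np.linalg.lstsq(A, vr, rcond=None)
    return k, b
```

With k and b in hand, the right image's rows can be scaled and shifted so that each second epipolar line lands on the same row as its conjugate first epipolar line.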
S104: calculating the disparity constraint range of the rectified right speckle image according to the preset depth range of the object under test.

Further, calculating the disparity constraint range of the rectified right speckle image according to the preset depth range of the object under test specifically comprises:

according to the preset depth range of the object under test, selecting the nearest point and the farthest point corresponding to each pixel in the rectified left speckle image, projecting the nearest point and the farthest point onto the second epipolar line of the rectified right speckle image, and taking the range between the projected points on the second epipolar line as the disparity constraint range.

As shown in Figs. 6 and 7, Fig. 6 is a schematic diagram of searching for the integer-pixel corresponding point along the horizontal epipolar line (the second epipolar line) in the rectified right speckle image, and Fig. 7 is a schematic diagram of searching along the portion of that line within the disparity constraint range. It is evident from Figs. 6 and 7 that the horizontal epipolar segment within the disparity constraint range in Fig. 7 is shorter than that in Fig. 6, so the disparity constraint range shortens the range searched for the integer-pixel corresponding point.
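The depth-to-disparity conversion behind S104 can be sketched as follows. This is a hedged sketch, not the patent's construction: the source projects the nearest and farthest points onto the second epipolar line, whereas the sketch assumes the standard rectified-stereo relation d = f·B/Z (focal length f in pixels, baseline B), under which the nearest depth gives the largest disparity and the farthest depth the smallest.

```python
def disparity_range(z_min, z_max, focal_px, baseline):
    """Disparity search range from a preset depth range [z_min, z_max].

    ASSUMPTION: standard rectified stereo with disparity d = f*B/Z, so the
    nearest point (z_min) bounds the range from above and the farthest point
    (z_max) from below. All parameter names here are illustrative.
    """
    d_max = focal_px * baseline / z_min  # nearest point -> largest disparity
    d_min = focal_px * baseline / z_max  # farthest point -> smallest disparity
    return d_min, d_max
```

For example, with a 1000 px focal length, a 0.1 m baseline and a preset depth range of 0.5 m to 1.0 m, candidates need only be examined between 100 px and 200 px of disparity.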
S105: selecting a pixel in the speckle region of the rectified left speckle image as the pixel under test, selecting, on the rectified right speckle image, candidate pixels lying in the same row as the pixel under test and within the disparity constraint range, and performing a gray-level constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels to select match points from among the candidate pixels, so that a correlation-function operation is performed on the match points and the pixel under test to obtain the integer-pixel corresponding point.

Further, performing the gray-level constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels to select match points from among the candidate pixels specifically comprises:

calculating the absolute value of the difference between the gray value of the pixel under test and the gray value of each candidate pixel, and comparing this absolute value with a gray-constraint threshold;

if the absolute value is smaller than the gray-constraint threshold, selecting the candidate pixel corresponding to that absolute value as a match point;

the gray-level constraint being |g(x′, y′) − f(x, y)| < threshold, where f(x, y) is the gray value of the pixel under test, g(x′, y′) is the gray value of a candidate pixel lying in the same row as the pixel under test and within the disparity constraint range, and threshold is the gray-constraint threshold.

The gray-constraint threshold takes the value 20 for speckle images not acquired with electrically synchronized capture, and 12 for speckle images acquired with electrically synchronized capture.

In the prior art, the subsequent correlation-function operation has to traverse every pixel of the right speckle image. The disparity constraint range reduces the number of pixels traversed in the right speckle image, since only the pixels within that range need be traversed, and the gray-level constraint operation narrows the traversal further: the traversal range after the gray-level constraint is a horizontal epipolar segment even shorter than the one within the disparity constraint range in Fig. 7. Compared with the prior art and with the disparity constraint alone, the gray-level constraint therefore further reduces the pixels subjected to the correlation-function operation, reducing the number of correlation-function operations and improving the efficiency of the integer-pixel corresponding-point search.
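The gray-level constraint filter of S105 can be sketched as follows. The threshold values 20 and 12 follow the text; the left-to-right disparity direction x′ = x − d and the row-major array layout are assumptions of this sketch.

```python
import numpy as np  # used by the accompanying example data

def candidate_points(left_img, right_img, y, x, d_range, threshold=20):
    """Select match candidates on row y of the rectified right image.

    Implements |g(x', y) - f(x, y)| < threshold from the text, restricted to
    the disparity constraint range d_range = (d_min, d_max).
    ASSUMPTION: disparity shifts leftward in the right image, x' = x - d.
    """
    f = float(left_img[y, x])
    d_min, d_max = d_range
    matches = []
    for d in range(int(d_min), int(d_max) + 1):
        xp = x - d
        if 0 <= xp < right_img.shape[1] and abs(float(right_img[y, xp]) - f) < threshold:
            matches.append(xp)  # survives the gray-level constraint
    return matches
```

Only the surviving candidates go on to the (much more expensive) correlation-function operation, which is where the reported speed-up comes from.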
Further, after step S105 the method also comprises:

extracting the pixel under test in the rectified left speckle image, and performing a correlation-function operation on the extracted pixel under test and the match points selected by the gray-level constraint operation to calculate correlation coefficients;

selecting the match point corresponding to the maximum correlation coefficient as the integer-pixel corresponding point, the correlation function being:

Figure PCTCN2017071900-appb-000006

where C is the correlation coefficient, m is the side length of a preset sub-window, f(x_i, y_j) is the gray value of a pixel within the preset sub-window centered on the pixel under test in the rectified left speckle image, g(x′_i, y′_j) is the gray value of a pixel within the preset sub-window centered on the match point in the rectified right speckle image, and

Figure PCTCN2017071900-appb-000007

Figure PCTCN2017071900-appb-000008

are respectively the average gray values of all pixels within the preset sub-window in the rectified left speckle image and in the rectified right speckle image.

It should be noted that in the rectified left speckle image every pixel in the speckle region serves in turn as the pixel under test; in other words, an integer-pixel corresponding point is calculated for every pixel in the speckle region, so each extracted pixel under test must undergo the correlation-function operation with the match points selected by the gray-level constraint operation, these match points being the pixels, among the candidate pixels within the disparity constraint range, that satisfy the gray-level constraint formula.

The effect of the method described in this embodiment is illustrated below with an actual simulation:

With a 9×9 preset sub-window in the correlation-function operation, the search using only the disparity constraint takes 7.24 s, whereas with the gray constraint added (threshold set to 20) it takes 2.15 s, a saving of 5.09 s and an efficiency gain of roughly a factor of three. The saving becomes more pronounced as the preset sub-window grows, while the final three-dimensional reconstruction results of the two approaches are identical.

Further, with the gray-constraint threshold set to 12, efficiency improves about fourfold compared with using the disparity constraint alone.
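The correlation step can be sketched as follows. The correlation formula itself appears only as an image in this extraction; the standard zero-mean normalized cross-correlation (ZNCC) form is assumed here, consistent with the surrounding description (window means subtracted, maximum coefficient selected). Function names and the window handling are illustrative.

```python
import numpy as np

def zncc(f_sub, g_sub):
    """Zero-mean normalized cross-correlation of two equal-size subwindows.

    ASSUMPTION: the source's correlation function (an image here) has the
    standard ZNCC form with the window means subtracted.
    """
    f = f_sub - f_sub.mean()
    g = g_sub - g_sub.mean()
    denom = np.sqrt((f * f).sum() * (g * g).sum())
    return float((f * g).sum() / denom) if denom else 0.0

def best_match(left, right, y, x, candidates, m=9):
    """Among gray-constrained candidates, pick the right-image column whose
    m-by-m subwindow maximizes the correlation with the left subwindow
    centered on the pixel under test (y, x)."""
    r = m // 2
    f_sub = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    best, best_c = None, -2.0
    for xp in candidates:
        g_sub = right[y - r:y + r + 1, xp - r:xp + r + 1].astype(np.float64)
        if g_sub.shape != f_sub.shape:
            continue  # candidate window falls off the image edge
        c = zncc(f_sub, g_sub)
        if c > best_c:
            best, best_c = xp, c
    return best, best_c
```

On a right image that is simply the left image shifted by a known disparity, the candidate at the true disparity yields a coefficient of 1 and is selected as the integer-pixel corresponding point.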
In the embodiment of the present invention, a random digital speckle pattern is projected onto the surface of the object under test by a projection device, and left and right speckle images containing the object under test are acquired by imaging devices placed on either side of the projection device; for each pixel in the left and right speckle images an average difference value is calculated by means of a neighborhood sub-window set for that pixel, the region formed by pixels whose average difference exceeds a preset value is taken as the speckle region, the speckle region is taken as the object region, and each of the left and right speckle images is divided into the object region and a background region; a first epipolar line and a second epipolar line are extracted from the divided left and right speckle images respectively, the first epipolar line is rectified to be parallel to the horizontal axis of the coordinate system of the divided left speckle image, the second epipolar line is rectified to be parallel to the horizontal axis of the coordinate system of the divided right speckle image, and the first and second epipolar lines are rectified into straight lines lying on the same horizontal line, giving the rectified left speckle image and the rectified right speckle image; a disparity constraint range of the rectified right speckle image is calculated according to a preset depth range of the object under test; a pixel in the speckle region of the rectified left speckle image is selected as the pixel under test, candidate pixels lying in the same row as the pixel under test and within the disparity constraint range are selected on the rectified right speckle image, a gray-level constraint operation is performed on the gray value of the pixel under test and the gray values of the candidate pixels to select match points from among the candidate pixels, and a correlation-function operation is performed on the match points and the pixel under test to obtain the integer-pixel corresponding point. In this way the calculated disparity constraint range removes part of the computation, and the gray-level constraint operation further excludes the candidate pixels within the disparity constraint range that need not undergo the correlation-function operation; compared with the prior art, this greatly reduces the number of correlation-function operations, shortens their duration, allows the integer-pixel corresponding point to be found quickly, and improves the efficiency of searching for the corresponding point, thereby improving the efficiency of establishing the three-dimensional coordinates of a three-dimensional object.
Referring to Fig. 8, Fig. 8 is a schematic structural diagram of the gray-constraint-based integer-pixel search device for three-dimensional digital speckle provided by the second embodiment of the present invention; for ease of description, only the parts relevant to the embodiment of the present invention are shown. The device illustrated in Fig. 8 may be the execution subject of the gray-constraint-based integer-pixel search method for three-dimensional digital speckle provided by the embodiment shown in Fig. 1. The device mainly comprises an acquisition module 801, an image processing module 802 and a calculation module 803, the functional modules being detailed as follows:

The acquisition module 801 is configured to project a random digital speckle pattern onto the surface of the object under test by a projection device, and to acquire left and right speckle images containing the object under test by imaging devices placed on either side of the projection device.

The image processing module 802 is configured to calculate, for each pixel in the left and right speckle images, an average difference value by means of a neighborhood sub-window set for that pixel, to take the region formed by pixels whose average difference exceeds a preset value as the speckle region, to take the speckle region as the object region, and to divide each of the left and right speckle images into the object region and a background region.

Further, the image processing module 802 is also configured to select target pixels in the left speckle image and the right speckle image respectively, to set a neighborhood sub-window centered on each target pixel, and to calculate the average difference of the gray values of all pixels within the neighborhood sub-window;

the formula for the average difference being:

Figure PCTCN2017071900-appb-000009

where g(x, y) denotes the gray values of all pixels within the neighborhood sub-window and AD is the average difference.

It should be noted that in the left speckle image every pixel serves in turn as the target pixel for which an average difference is calculated, so that each pixel has a corresponding AD value; likewise, in the right speckle image every pixel serves as the target pixel and each pixel there has a corresponding AD value.

Preferably, the preset value is 3. In the left speckle image an average difference is first calculated for every pixel, and the region formed by the pixels whose average difference exceeds 3 is taken as the speckle region, which is the region of the object under test in the left speckle image; likewise, in the right speckle image an average difference is first calculated for every pixel, and the region formed by the pixels whose average difference exceeds 3 is taken as the speckle region, which is the region of the object under test in the right speckle image.
The image processing module 802 is also configured to extract a first epipolar line and a second epipolar line from the divided left and right speckle images respectively, to rectify the first epipolar line to be parallel to the horizontal axis of the coordinate system of the divided left speckle image, to rectify the second epipolar line to be parallel to the horizontal axis of the coordinate system of the divided right speckle image, and to rectify the first epipolar line and the second epipolar line into straight lines lying on the same horizontal line, to obtain the rectified left speckle image and the rectified right speckle image.

The first epipolar line and the second epipolar line are conjugate epipolar lines. It should be noted that all coordinate systems in the embodiments of the present invention are pixel-level coordinate systems; in the field of image processing, a pixel-level coordinate system uses u for the horizontal axis and v for the vertical axis, with the origin at the top-left pixel of the image, i.e., the image's first pixel.

The first and second epipolar lines as initially extracted are inclined, so the images are rectified by rectifying the epipolar lines, specifically as follows:

First, matrix transformations move the epipoles of the left and right speckle images to infinity along the u-axis, so that the first and second epipolar lines are turned from inclined lines into lines parallel to the horizontal (u) axis of the pixel-level coordinate system. The adjustment coefficients for the vertical positions of the epipolar lines are then calculated; the system of linear equations for these coefficients is:

Figure PCTCN2017071900-appb-000010

where (v_l1, v_l2, …, v_ln) is the set of intersections of the first epipolar lines with the v-axis of the left speckle image's coordinate system, (v_r1, v_r2, …, v_rn) is the set of intersections of the second epipolar lines with the v-axis of the right speckle image's coordinate system, and k and b are the adjustment coefficients.

For the rectification of the left and right speckle images, the rectification expression for the left speckle image is:

Figure PCTCN2017071900-appb-000011

Figure PCTCN2017071900-appb-000012

where (u′_l, v′_l) are the coordinates of each pixel in the rectified left speckle image, (u_l, v_l) are the coordinates of each pixel in the left speckle image, and (u_l0, v_l0) are the epipole coordinates in the left speckle image.

The rectification expression for the right speckle image is:

Figure PCTCN2017071900-appb-000013

where (u′_r, v′_r) are the coordinates of each pixel in the rectified right speckle image, (u_r, v_r) are the coordinates of each pixel in the right speckle image, (u_r0, v_r0) are the epipole coordinates in the right speckle image, and k and b are the vertical adjustment coefficients of the epipolar lines.
The image processing module 802 is also configured to calculate the disparity constraint range of the rectified right speckle image according to the preset depth range of the object under test.

Further, the image processing module 802 is also configured to select, according to the preset depth range of the object under test, the nearest point and the farthest point corresponding to each pixel in the rectified left speckle image, to project the nearest point and the farthest point onto the second epipolar line of the rectified right speckle image, and to take the range between the projected points on the second epipolar line as the disparity constraint range.

The image processing module 802 is also configured to select a pixel in the speckle region of the rectified left speckle image as the pixel under test, to select, on the rectified right speckle image, candidate pixels lying in the same row as the pixel under test and within the disparity constraint range, and to perform a gray-level constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels to select match points from among the candidate pixels, so that a correlation-function operation is performed on the match points and the pixel under test to obtain the integer-pixel corresponding point.

Further, the image processing module 802 is also configured to perform the following steps:

calculating the absolute value of the difference between the gray value of the pixel under test and the gray value of each candidate pixel, and comparing this absolute value with a gray-constraint threshold;

if the absolute value is smaller than the gray-constraint threshold, selecting the candidate pixel corresponding to that absolute value as a match point;

the gray-level constraint being |g(x′, y′) − f(x, y)| < threshold, where f(x, y) is the gray value of the pixel under test, g(x′, y′) is the gray value of a candidate pixel lying in the same row as the pixel under test and within the disparity constraint range, and threshold is the gray-constraint threshold.

The gray-constraint threshold takes the value 20 for speckle images not acquired with electrically synchronized capture, and 12 for speckle images acquired with electrically synchronized capture.
Further, the device also comprises a calculation module 803;

the calculation module 803 is configured to extract the pixel under test in the rectified left speckle image, and to perform a correlation-function operation on the extracted pixel under test and the match points selected by the gray-level constraint operation to calculate correlation coefficients;

the calculation module 803 is also configured to select the match point corresponding to the maximum correlation coefficient as the integer-pixel corresponding point;

the correlation function being:

Figure PCTCN2017071900-appb-000014

where C is the correlation coefficient, m is the side length of a preset sub-window, f(x_i, y_j) is the gray value of a pixel within the preset sub-window centered on the pixel under test in the rectified left speckle image, g(x′_i, y′_j) is the gray value of a pixel within the preset sub-window centered on the match point in the rectified right speckle image, and

Figure PCTCN2017071900-appb-000015

Figure PCTCN2017071900-appb-000016

are respectively the average gray values of all pixels within the preset sub-window in the rectified left speckle image and in the rectified right speckle image.

It should be noted that in the rectified left speckle image every pixel in the speckle region serves in turn as the pixel under test; in other words, an integer-pixel corresponding point is calculated for every pixel in the speckle region, so each extracted pixel under test must undergo the correlation-function operation with the match points selected by the gray-level constraint operation, these match points being the pixels, among the candidate pixels within the disparity constraint range, that satisfy the gray-level constraint formula.
For details not covered in this embodiment, refer to the description of the embodiment shown in Fig. 1 above; they are not repeated here.

In the embodiment of the present invention, the acquisition module 801 projects a random digital speckle pattern onto the surface of the object under test by a projection device and acquires left and right speckle images containing the object under test by imaging devices placed on either side of the projection device; the image processing module 802 calculates, for each pixel in the left and right speckle images, an average difference value by means of a neighborhood sub-window set for that pixel, takes the region formed by pixels whose average difference exceeds a preset value as the speckle region, takes the speckle region as the object region, and divides each of the left and right speckle images into the object region and a background region; it extracts a first epipolar line and a second epipolar line from the divided left and right speckle images respectively, rectifies the first epipolar line to be parallel to the horizontal axis of the coordinate system of the divided left speckle image, rectifies the second epipolar line to be parallel to the horizontal axis of the coordinate system of the divided right speckle image, and rectifies the first and second epipolar lines into straight lines lying on the same horizontal line, giving the rectified left speckle image and the rectified right speckle image; it calculates a disparity constraint range of the rectified right speckle image according to a preset depth range of the object under test; it selects a pixel in the speckle region of the rectified left speckle image as the pixel under test, selects on the rectified right speckle image candidate pixels lying in the same row as the pixel under test and within the disparity constraint range, and performs a gray-level constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels to select match points from among the candidate pixels, so that a correlation-function operation is performed on the match points and the pixel under test to obtain the integer-pixel corresponding point. In this way the calculated disparity constraint range removes part of the computation, and the gray-level constraint operation further excludes the candidate pixels within the disparity constraint range that need not undergo the correlation-function operation; compared with the prior art, this greatly reduces the number of correlation-function operations, shortens their duration, allows the integer-pixel corresponding point to be found quickly, and improves the efficiency of searching for the corresponding point, thereby improving the efficiency of establishing the three-dimensional coordinates of a three-dimensional object.
In the several embodiments provided in this application, it should be understood that the disclosed system, device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division into modules is only a division by logical function, and other divisions are possible in actual implementation: multiple modules or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings or communication links shown or discussed may be indirect couplings or communication links through some interfaces, devices or modules, and may be electrical, mechanical or of other forms.

The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, or each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated modules may be implemented in the form of hardware or in the form of software functional modules.

If the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

It should be noted that, for brevity of description, the foregoing method embodiments are all expressed as series of action combinations; however, those skilled in the art should understand that the present invention is not limited by the described order of actions, since according to the present invention some steps may be performed in other orders or simultaneously. Those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present invention.

In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, refer to the relevant descriptions of other embodiments.

The above is a description of the gray-constraint-based integer-pixel search method and device for three-dimensional digital speckle provided by the present invention. Those skilled in the art may, following the ideas of the embodiments of the present invention, make changes to the specific implementations and scope of application; in summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

  1. A gray-constraint-based integer-pixel search method for three-dimensional digital speckle, characterized by comprising:
    projecting a random digital speckle pattern onto the surface of an object under test by a projection device, and acquiring left and right speckle images containing the object under test by imaging devices placed on either side of the projection device;
    calculating, for each pixel in the left and right speckle images, an average difference value by means of a neighborhood sub-window set for that pixel, taking the region formed by pixels whose average difference exceeds a preset value as a speckle region, taking the speckle region as an object region, and dividing each of the left and right speckle images into the object region and a background region;
    extracting a first epipolar line and a second epipolar line from the divided left and right speckle images respectively, rectifying the first epipolar line to be parallel to the horizontal axis of the coordinate system of the divided left speckle image, rectifying the second epipolar line to be parallel to the horizontal axis of the coordinate system of the divided right speckle image, and rectifying the first epipolar line and the second epipolar line into straight lines lying on the same horizontal line, to obtain a rectified left speckle image and a rectified right speckle image;
    calculating a disparity constraint range of the rectified right speckle image according to a preset depth range of the object under test;
    selecting a pixel in the speckle region of the rectified left speckle image as a pixel under test, selecting, on the rectified right speckle image, candidate pixels lying in the same row as the pixel under test and within the disparity constraint range, and performing a gray-level constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels to select match points from among the candidate pixels, so that a correlation-function operation is performed on the match points and the pixel under test to obtain an integer-pixel corresponding point.
  2. The method according to claim 1, characterized in that performing the gray-level constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels to select match points from among the candidate pixels comprises:
    calculating the absolute value of the difference between the gray value of the pixel under test and the gray value of each candidate pixel, and comparing the absolute value with a gray-constraint threshold;
    if the absolute value is smaller than the gray-constraint threshold, selecting the candidate pixel corresponding to that absolute value as a match point;
    wherein the gray-level constraint is |g(x′, y′) − f(x, y)| < threshold, f(x, y) being the gray value of the pixel under test, g(x′, y′) being the gray value of a candidate pixel lying in the same row as the pixel under test and within the disparity constraint range, and threshold being the gray-constraint threshold.
  3. The method according to claim 2, characterized in that calculating, for each pixel in the left and right speckle images, the average difference value by means of the neighborhood sub-window set for that pixel comprises:
    selecting target pixels in the left speckle image and the right speckle image respectively, setting a neighborhood sub-window centered on each target pixel, and calculating the average difference of the gray values of all pixels within the neighborhood sub-window;
    the formula for the average difference being:
    Figure PCTCN2017071900-appb-100001
    where g(x, y) denotes the gray values of all pixels within the neighborhood sub-window and AD is the average difference.
  4. The method according to claim 2, characterized in that, after performing the gray-level constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels to select match points from among the candidate pixels, so that a correlation-function operation is performed on the match points and the pixel under test to obtain the integer-pixel corresponding point, the method further comprises:
    extracting the pixel under test in the rectified left speckle image, and performing a correlation-function operation on the extracted pixel under test and the match points selected by the gray-level constraint operation to calculate correlation coefficients;
    selecting the match point corresponding to the maximum correlation coefficient as the integer-pixel corresponding point;
    wherein the correlation function is:
    Figure PCTCN2017071900-appb-100002
    where C is the correlation coefficient, m is the side length of a preset sub-window, f(x_i, y_j) is the gray value of a pixel within the preset sub-window centered on the pixel under test in the rectified left speckle image, g(x′_i, y′_j) is the gray value of a pixel within the preset sub-window centered on the match point in the rectified right speckle image, and
    Figure PCTCN2017071900-appb-100003
    Figure PCTCN2017071900-appb-100004
    are respectively the average gray values of all pixels within the preset sub-window in the rectified left speckle image and in the rectified right speckle image.
  5. The method according to claim 4, characterized in that calculating the disparity constraint range of the rectified right speckle image according to the preset depth range of the object under test comprises:
    according to the preset depth range of the object under test, selecting the nearest point and the farthest point corresponding to each pixel in the rectified left speckle image, projecting the nearest point and the farthest point onto the second epipolar line of the rectified right speckle image, and taking the range between the projected points on the second epipolar line as the disparity constraint range.
  6. A gray-constraint-based integer-pixel search device for three-dimensional digital speckle, characterized in that the device comprises:
    an acquisition module, configured to project a random digital speckle pattern onto the surface of an object under test by a projection device, and to acquire left and right speckle images containing the object under test by imaging devices placed on either side of the projection device;
    an image processing module, configured to perform the following steps:
    calculating, for each pixel in the left and right speckle images, an average difference value by means of a neighborhood sub-window set for that pixel, taking the region formed by pixels whose average difference exceeds a preset value as a speckle region, taking the speckle region as an object region, and dividing each of the left and right speckle images into the object region and a background region;
    extracting a first epipolar line and a second epipolar line from the divided left and right speckle images respectively, rectifying the first epipolar line to be parallel to the horizontal axis of the coordinate system of the divided left speckle image, rectifying the second epipolar line to be parallel to the horizontal axis of the coordinate system of the divided right speckle image, and rectifying the first epipolar line and the second epipolar line into straight lines lying on the same horizontal line, to obtain a rectified left speckle image and a rectified right speckle image;
    calculating a disparity constraint range of the rectified right speckle image according to a preset depth range of the object under test;
    selecting a pixel in the speckle region of the rectified left speckle image as a pixel under test, selecting, on the rectified right speckle image, candidate pixels lying in the same row as the pixel under test and within the disparity constraint range, and performing a gray-level constraint operation on the gray value of the pixel under test and the gray values of the candidate pixels to select match points from among the candidate pixels, so that a correlation-function operation is performed on the match points and the pixel under test to obtain an integer-pixel corresponding point.
  7. The device according to claim 6, characterized in that the image processing module is also configured to perform the following steps:
    calculating the absolute value of the difference between the gray value of the pixel under test and the gray value of each candidate pixel, and comparing the absolute value with a gray-constraint threshold;
    if the absolute value is smaller than the gray-constraint threshold, selecting the candidate pixel corresponding to that absolute value as a match point;
    wherein the gray-level constraint is |g(x′, y′) − f(x, y)| < threshold, f(x, y) being the gray value of the pixel under test, g(x′, y′) being the gray value of a candidate pixel lying in the same row as the pixel under test and within the disparity constraint range, and threshold being the gray-constraint threshold.
  8. The device according to claim 7, characterized in that:
    the image processing module is also configured to select target pixels in the left speckle image and the right speckle image respectively, to set a neighborhood sub-window centered on each target pixel, and to calculate the average difference of the gray values of all pixels within the neighborhood sub-window;
    the formula for the average difference being:
    Figure PCTCN2017071900-appb-100005
    where g(x, y) denotes the gray values of all pixels within the neighborhood sub-window and AD is the average difference.
  9. The device according to claim 7, characterized in that the device further comprises:
    a calculation module, configured to extract the pixel under test in the rectified left speckle image, and to perform a correlation-function operation on the extracted pixel under test and the match points selected by the gray-level constraint operation to calculate correlation coefficients;
    the calculation module being also configured to select the match point corresponding to the maximum correlation coefficient as the integer-pixel corresponding point;
    wherein the correlation function is:
    Figure PCTCN2017071900-appb-100006
    where C is the correlation coefficient, m is the side length of a preset sub-window, f(x_i, y_j) is the gray value of a pixel within the preset sub-window centered on the pixel under test in the rectified left speckle image, g(x′_i, y′_j) is the gray value of a pixel within the preset sub-window centered on the match point in the rectified right speckle image, and
    Figure PCTCN2017071900-appb-100007
    Figure PCTCN2017071900-appb-100008
    are respectively the average gray values of all pixels within the preset sub-window in the rectified left speckle image and in the rectified right speckle image.
  10. The device according to claim 9, characterized in that:
    the image processing module is also configured to select, according to the preset depth range of the object under test, the nearest point and the farthest point corresponding to each pixel in the rectified left speckle image, to project the nearest point and the farthest point onto the second epipolar line of the rectified right speckle image, and to take the range between the projected points on the second epipolar line as the disparity constraint range.
PCT/CN2017/071900 2017-01-20 2017-01-20 基于灰度约束的三维数字散斑的整像素搜索方法及装置 WO2018133027A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/071900 WO2018133027A1 (zh) 2017-01-20 2017-01-20 Gray-constraint-based integer-pixel search method and device for three-dimensional digital speckle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/071900 WO2018133027A1 (zh) 2017-01-20 2017-01-20 Gray-constraint-based integer-pixel search method and device for three-dimensional digital speckle

Publications (1)

Publication Number Publication Date
WO2018133027A1 true WO2018133027A1 (zh) 2018-07-26

Family

ID=62907546

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/071900 WO2018133027A1 (zh) 2017-01-20 2017-01-20 基于灰度约束的三维数字散斑的整像素搜索方法及装置

Country Status (1)

Country Link
WO (1) WO2018133027A1 (zh)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101608908A (zh) * 2009-07-20 2009-12-23 杭州先临三维科技股份有限公司 Three-dimensional digital imaging method combining digital speckle projection with phase-measuring profilometry
US20120019809A1 (en) * 2010-07-24 2012-01-26 Focused Innovation, Inc. Method and apparatus for imaging
CN104596439A (zh) * 2015-01-07 2015-05-06 东南大学 Speckle-matching three-dimensional measurement method assisted by phase information
CN105203044A (zh) * 2015-05-27 2015-12-30 珠海真幻科技有限公司 Stereo-vision three-dimensional measurement method and system using computed laser speckle as texture


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009691A (zh) * 2019-03-28 2019-07-12 北京清微智能科技有限公司 Disparity image generation method and system based on binocular stereo vision matching
CN110009691B (zh) * 2019-03-28 2021-04-09 北京清微智能科技有限公司 Disparity image generation method and system based on binocular stereo vision matching
CN110462693A (zh) * 2019-06-28 2019-11-15 深圳市汇顶科技股份有限公司 Door lock and recognition method
CN112527621A (zh) * 2019-09-17 2021-03-19 中移动信息技术有限公司 Test path construction method, apparatus, device and storage medium
CN113936050A (zh) * 2021-10-21 2022-01-14 北京的卢深视科技有限公司 Speckle image generation method, electronic device and storage medium


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17892317

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/10/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17892317

Country of ref document: EP

Kind code of ref document: A1