WO2021238214A1 - Three-dimensional measurement system and method, and computer device - Google Patents

Three-dimensional measurement system and method, and computer device

Info

Publication number
WO2021238214A1
WO2021238214A1 · PCT/CN2020/141869 · CN2020141869W
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
phase
speckle
dimensional measurement
Prior art date
Application number
PCT/CN2020/141869
Other languages
English (en)
French (fr)
Inventor
徐玉华
徐彬
余宇山
Original Assignee
奥比中光科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 奥比中光科技集团股份有限公司 filed Critical 奥比中光科技集团股份有限公司
Publication of WO2021238214A1 publication Critical patent/WO2021238214A1/zh
Priority to US17/828,923 priority Critical patent/US20220290977A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254Projection of a pattern, viewing through a pattern, e.g. moiré
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2518Projection by scanning of the object
    • G01B11/2527Projection by scanning of the object with phase change by in-plane movement of the patern
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Definitions

  • This application relates to the field of three-dimensional measurement technology, and in particular to a three-dimensional measurement system, method, and computer equipment.
  • Three-dimensional reconstruction technology has a wide range of applications in 3D printing, machine vision, digital archaeology, medical development and other fields.
  • Currently popular methods include laser scanning method, stereo vision method, time-of-flight method and structured light method.
  • The structured-light method based on speckle matching usually obtains a disparity map by matching a speckle image of the target scene against a pre-stored reference image, and calculates the depth or three-dimensional structure of the scene from the disparity map and the calibration parameters of the measurement system.
  • The advantage of this method is that a single captured frame suffices for three-dimensional measurement; the disadvantage is that its measurement accuracy is limited.
  • The phase-shift method has an advantage in measurement accuracy.
  • A phase-shift-based system usually requires one projector and either one or two cameras.
  • The phase-shift method usually needs to project at least three frames of phase-shifted fringe patterns onto the target scene. Since single-frequency phase-shift patterns yield only the relative phase, obtaining the absolute phase additionally requires projecting multiple frames of phase-shift patterns at different frequencies, which leads to low measurement efficiency.
  • The purpose of this application is to provide a three-dimensional measurement system and method, and a computer device, to solve at least one of the problems in the background art described above.
  • An embodiment of the present application provides a three-dimensional measurement system, including: a projection module for projecting images onto a target object, the images including at least three frames of phase-shifted fringe images and one frame of speckle image; an acquisition module for acquiring the phase-shifted fringe images and the speckle image; and a control and processor for calculating the relative phase of each pixel from the at least three frames of phase-shifted fringe images, matching the speckle image against a pre-stored reference image to obtain a first depth value of the pixel, dephasing the relative phase of the pixel according to the first depth value to determine the absolute phase of the pixel, and calculating a second depth value of the pixel based on the absolute phase.
  • The control and processor calculates the projected-image coordinate of the pixel from the pixel's first depth value, and calculates the absolute phase of the pixel from the projected-image coordinate using the following formula:
  • φ = 2π·N·X_p / w
  • where X_p is the projected-image coordinate of the pixel, N is the number of fringes in the fringe image, w is the horizontal resolution of the projected image, and φ denotes the absolute phase.
  • The projection module projects three frames of phase-shifted fringe images and one frame of speckle image onto the target object, where the three phase-shifted fringe images are expressed as:
  • I₁(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) − 2π/3)
  • I₂(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y))
  • I₃(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) + 2π/3)
  • where I′ denotes the average brightness, I″ is the amplitude of the modulation signal, and φ denotes the absolute phase.
  • the projection module projects the speckle pattern and the fringe pattern to the target object through a module.
  • the projection modules respectively use two modules to project patterns on the target object, wherein one module projects the speckle pattern, and the other module projects the fringe pattern.
  • the embodiment of the present application also provides a three-dimensional measurement method, which includes the following steps:
  • the relative phase of the pixel is dephased according to the first depth value of the pixel to determine the absolute phase of the pixel, and the second depth value of the pixel is calculated based on the absolute phase.
  • A control and processor calculates the projected-image coordinate of the pixel from the pixel's first depth value, and calculates the absolute phase of the pixel from the projected-image coordinate using the following formula:
  • φ = 2π·N·X_p / w
  • where X_p is the projected-image coordinate of the pixel, N is the number of fringes, w is the horizontal resolution of the projected image, and φ denotes the absolute phase.
  • The projection module projects three frames of phase-shifted fringe images and one frame of speckle image onto the target object, where the three phase-shifted fringe images are expressed as:
  • I₁(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) − 2π/3)
  • I₂(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y))
  • I₃(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) + 2π/3)
  • where I′ denotes the average brightness, I″ is the amplitude of the modulation signal, and φ denotes the absolute phase.
  • The projection module uses one module to project the speckle pattern and the fringe pattern onto the target object; or, the projection module uses two modules to project patterns onto the target object separately, with one module projecting the speckle pattern and the other projecting the fringe pattern.
  • An embodiment of the present application further provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements at least a three-dimensional measurement method.
  • The three-dimensional measurement method includes the steps of: controlling the projection module to project images onto the target object, the images including at least three frames of phase-shifted fringe images and one frame of speckle image; controlling the acquisition module to acquire the phase-shifted fringe images and the speckle image; calculating the relative phase of each pixel using the phase-shifted fringe images, and matching the speckle image against a pre-stored reference image to obtain a first depth value of the pixel; and dephasing the relative phase of the pixel according to the first depth value to determine the absolute phase of the pixel, and calculating a second depth value of the pixel based on the absolute phase.
  • An embodiment of the present application provides a three-dimensional measurement system, including: a projection module for projecting images onto a target object, the images including at least three frames of phase-shifted fringe images and one frame of speckle image; an acquisition module for acquiring the phase-shifted fringe images and the speckle image; and a control and processor for calculating the relative phase of each pixel from the at least three frames of phase-shifted fringe images, matching the speckle image against a pre-stored reference image to obtain a first depth value of the pixel, dephasing the relative phase of the pixel according to the first depth value to determine the absolute phase of the pixel, and calculating a second depth value of the pixel based on the absolute phase.
  • By projecting at least three frames of phase-shifted fringe images and one frame of speckle image, matching the speckle image against a pre-stored reference image to obtain the first depth value, and using the first depth value to dephase the relative phase of the phase-shifted fringe images, a more accurate absolute phase is obtained; an accurate depth value is then calculated from the absolute phase, thereby improving measurement accuracy.
  • Fig. 1 is a schematic diagram of a three-dimensional measurement system according to an embodiment of the present application.
  • Fig. 2 is a schematic diagram of the principle of calculating the depth value according to the absolute phase of the pixel in the three-dimensional measurement system of the embodiment of Fig. 1.
  • Fig. 3 is a flowchart of a three-dimensional measurement method according to another embodiment of the present application.
  • In addition, a connection may serve either a fixing function or an electrical-connection function.
  • The terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, features defined with "first" and "second" may explicitly or implicitly include one or more of those features. In the description of the embodiments of the present application, "a plurality of" means two or more, unless otherwise specifically defined.
  • FIG. 1 is a schematic structural diagram of a three-dimensional measurement system 10 according to an embodiment of the application.
  • the three-dimensional measurement system 10 includes a projection module 11, an acquisition module 12, and a control and processor 13 connected to the projection module 11 and the acquisition module 12, respectively.
  • the projection module 11 is used to project an image to the target object 20, the image includes at least three frames of phase shift fringe images and one frame of speckle image;
  • the acquisition module 12 is used to collect the phase shift fringe images and the speckle image;
  • the control and processor is used to calculate the relative phase of each pixel according to the aforementioned at least three frames of phase shift fringe images, and use the collected speckle image to match the pre-stored reference image to obtain the first depth value of the pixel,
  • the relative phase of the pixel is dephased according to the first depth value to determine the absolute phase of the pixel, and the second depth value of the pixel is calculated based on the absolute phase.
  • the projection module 11 projects an image to the target object 20, and the image includes three frames of fringe images and one frame of speckle images.
  • In the embodiments of the present application, the phase-shift method is used to obtain the relative phase: phase-shifted fringe patterns are projected onto the target surface and the relative phase is calculated at each pixel. Specifically, taking the three-step phase-shift method as an example, the minimum number of phase-shifted fringe images for the three-step method is three; therefore, the images projected by the projection module include at least three frames of fringe images (i.e., three phase-shifted fringe images). It is understandable that more phase-shifted fringe images can improve the accuracy of the reconstructed phase.
  • The three phase-shifted fringe images can be expressed by the following formulas:
  • I₁(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) − 2π/3)
  • I₂(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y))    (1)
  • I₃(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) + 2π/3)
  • where I′ denotes the average brightness, I″ is the amplitude of the modulation signal, and φ denotes the absolute phase.
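As an illustrative aid (not part of the original application), the fringe images of formula (1) can be generated numerically; the sketch below assumes vertical fringes, NumPy, and hypothetical function and parameter names:

```python
import numpy as np

def make_fringe_patterns(w, h, n_fringes, i_avg=0.5, i_mod=0.5):
    """Generate the three phase-shifted fringe images of formula (1):
    I_k = I' + I'' * cos(phi + s_k), shifts s_k = -2*pi/3, 0, +2*pi/3,
    with absolute phase phi = 2*pi*N*x/w growing along each row."""
    x = np.arange(w)
    phi = 2.0 * np.pi * n_fringes * x / w        # absolute phase along one row
    rows = [i_avg + i_mod * np.cos(phi + s)
            for s in (-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0)]
    # vertical fringes: every image row is identical
    return tuple(np.tile(r, (h, 1)) for r in rows)
```

At every pixel the three intensities sum to 3·I′, since the three equally spaced cosine terms cancel.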
  • The control and processor 13 calculates the relative phase of each pixel from the above formulas, obtaining the expressions for the relative and absolute phase:
  • φ′(x, y) = arctan[√3·(I₁ − I₃) / (2·I₂ − I₁ − I₃)]    (2)
  • φ(x, y) = φ′(x, y) + 2kπ    (3)
  • where the relative phase takes values in [−π, π], k denotes the fringe period number, φ′ denotes the relative phase, and φ denotes the absolute phase.
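The wrapped-phase recovery of formula (2) can be sketched as follows (an illustrative sketch, not part of the original application; `arctan2` is used so that the result falls in [−π, π]):

```python
import numpy as np

def relative_phase(i1, i2, i3):
    """Wrapped (relative) phase of formula (2), in [-pi, pi].
    Works elementwise on scalars or whole image arrays."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

Feeding it three intensities generated from a known phase returns that phase (modulo 2π), which is a convenient self-check.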
  • Given the projected-image coordinate X_p of a pixel, the absolute phase of the pixel can be calculated according to the following formula:
  • φ = 2π·N·X_p / w    (4)
  • Since k in formula (3) is the fringe period number and cannot be determined from the three fringe images alone, the k value must be determined in order to fix the absolute phase. In this embodiment, one additional frame of speckle image is projected and matched against a pre-stored reference image to obtain the first depth value of a pixel, and the relative phase is dephased according to that first depth value to determine k. Specifically, the first depth value is used to calculate the projected-image coordinate X_p of the pixel, the absolute phase φ of the pixel is calculated according to formula (4), the value of k is then determined from formula (3), and, further, a more accurate second depth value Z_2 of the pixel is calculated based on the absolute phase of the k-th order fringe.
  • In some embodiments, the control and processor 13 matches the speckle image against the pre-stored reference image and obtains, from the disparity map of the current view, the disparity value of a given pixel (denoted as point p),
  • from which the first depth value Z_1 of the pixel is calculated.
  • The projected-image coordinate X_p of the pixel can then be calculated from the first depth value Z_1,
  • and the absolute phase of point p can be calculated according to formula (4). Because the accuracy of speckle-image matching is limited, the first depth value Z_1 is not accurate enough.
  • Therefore, to obtain higher accuracy, the relative phase φ′ is dephased using the first depth value Z_1 of point p to obtain a more accurate absolute phase; specifically, the value of k is obtained from formula (3), and a more accurate second depth value Z_2 of point p is then calculated from the absolute phase of the k-th order fringe.
  • the relative phase ⁇ ′ is dephased by the first depth value Z 1 of the p point to obtain a more accurate absolute phase ⁇ .
  • With the parameters of the projection module 11 and the acquisition module 12 known, the speckle image acquired by the acquisition module 12 is matched against the pre-stored reference image to obtain the three-dimensional coordinates (X, Y, Z) of point p. Projecting (X, Y, Z) onto the image planes, with the camera-image coordinates denoted x_c = (x_c, y_c)ᵀ and the projected-image coordinates denoted x_p = (x_p, y_p)ᵀ, gives:
  • S_C·x̃_c = K_C·[R_C|T_C]·X̃ = P_C·X̃,  S_P·x̃_p = K_P·[R_P|T_P]·X̃ = P_P·X̃    (5)
  • where X̃ denotes the homogeneous coordinates of P(X, Y, Z), S_C and S_P denote scale factors, K_C and K_P denote the intrinsic matrices, [R_C|T_C] and [R_P|T_P] denote the extrinsic matrices, and P_C, P_P denote the projection matrices of the camera and the projector, respectively. Writing the rows of the projection matrices as
  • P_C = [c₁ᵀ; c₂ᵀ; c₃ᵀ],  P_P = [p₁ᵀ; p₂ᵀ; p₃ᵀ]    (6)
  • the three-dimensional coordinates (X, Y, Z) of point p satisfy the linear system
  • (c₁ − x_c·c₃)ᵀ·X̃ = 0,  (c₂ − y_c·c₃)ᵀ·X̃ = 0,  (p₁ − x_p·p₃)ᵀ·X̃ = 0    (7)
  • where x_c, y_c denote the coordinates of point p in the camera image, and x_p, y_p denote the coordinates of point p in the projected image.
  • X_p can thus be calculated from the three-dimensional coordinates (X, Y, Z) of point p according to formula (7), the absolute phase φ of point p can be calculated from X_p using formula (4), and the k value can be calculated using formula (3).
  • Since the fringe-based calculation of the depth of point p rests on an accurate phase value, and the phase-shift method is more accurate than block matching, the depth value of point p calculated with the fringe pattern has relatively high accuracy; therefore, a more accurate second depth value Z_2 of point p can be calculated from the absolute phase of point p on the k-th order fringe.
  • In some embodiments, the control and processor 13 uses formula (4) to calculate the projected-image coordinate X_p of point p from the absolute phase of point p on the k-th order fringe, and then uses formula (7) to calculate the second depth value Z_2 of point p.
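The dephasing step itself reduces to choosing the fringe order k of formula (3) so that the unwrapped phase agrees with the coarse projector coordinate derived from Z_1; a minimal sketch (illustrative, with hypothetical names) is:

```python
import numpy as np

def unwrap_with_coarse_xp(phi_rel, xp_coarse, n_fringes, w):
    """Resolve the fringe order k of formula (3) from a coarse projector
    coordinate X_p (derived from the speckle depth Z1), then return k,
    the absolute phase, and the refined X_p via formula (4)."""
    phi_coarse = 2.0 * np.pi * n_fringes * xp_coarse / w   # formula (4)
    k = np.round((phi_coarse - phi_rel) / (2.0 * np.pi))   # nearest order
    phi_abs = phi_rel + 2.0 * np.pi * k                    # formula (3)
    xp_refined = phi_abs * w / (2.0 * np.pi * n_fringes)   # invert formula (4)
    return k, phi_abs, xp_refined
```

The coarse X_p only needs to be within half a fringe period of the truth for k to be chosen correctly; the refined X_p then inherits the accuracy of the phase measurement rather than of the speckle match.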
  • In some embodiments, the control and processor 13 calculates the second depth value of point p from the absolute phase φ of point p on the k-th order fringe using triangulation.
  • Referring to Fig. 2, the projection module 11 projects a fringe image onto the target object 20, the acquisition module 12 acquires the fringe image reflected from the target object 20, and the depth value of point p is calculated by triangulation:
  • PQ = L·(φ_A − φ_B)·T / [(φ_A − φ_B)·T + 2π·b]    (8)
  • where L is the distance between the projection module and the reference plane, b is the distance between the projection module and the acquisition module, T is the fringe period on the reference plane, and φ_A and φ_B denote the absolute phases of reference-plane points A and B.
  • The length of PQ is obtained from formula (8), and the depth value of point p is obtained according to:
  • Z = L − PQ    (9)
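Formulas (8) and (9) can be exercised numerically; the sketch below assumes the similar-triangles form with a known fringe period T on the reference plane (symbol names follow the text, but the exact published form of formula (8) may differ, so treat this as an assumption):

```python
import math

def depth_from_reference_plane(phi_a, phi_b, period_mm, L, b):
    """Height PQ of object point P above the reference plane from the
    absolute phases of reference points A and B (formula (8), assumed
    similar-triangles form), then depth Z = L - PQ (formula (9))."""
    ab = (phi_a - phi_b) * period_mm / (2.0 * math.pi)  # phase -> distance on plane
    pq = L * ab / (ab + b)                              # similar triangles
    return L - pq
```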
  • the projection module 11 projects the speckle pattern and the fringe pattern to the target object through a module.
  • In some embodiments, the projection module 11 projects the speckle pattern and the fringe pattern onto the target object through a single module, for example a DMD (digital micromirror device).
  • A DMD is composed of millions of flippable micro-mirrors; each micro-mirror unit of the DMD is one projection pixel, and each projection pixel is individually coded, so the DMD can project arbitrary coded patterns, including speckle and fringe patterns.
  • It is understandable that a combination of a vertical-cavity surface-emitting laser (VCSEL), a lens, and a micro-electro-mechanical system (MEMS), or other combinations, can also serve as a single module that projects the speckle pattern and the fringe pattern onto the target object.
  • In some embodiments, the projection module 11 uses two modules to project patterns onto the target object, one projecting the speckle pattern and the other projecting the fringe pattern.
  • For example, a combination of a VCSEL and a DOE (diffractive optical element) projects the speckle pattern onto the target object, while a combination of a VCSEL and a MEMS, or a DMD, projects the fringe pattern. It is understandable that there are many ways to project speckle patterns and fringe patterns, and their combinations are not limited here.
  • a three-dimensional measurement method is also provided, and the measurement method is implemented based on the three-dimensional measurement system in the foregoing embodiments.
  • Fig. 3 is a flowchart of a three-dimensional measurement method according to an embodiment of the present application. The measurement method includes the following steps:
  • the projection module is a single module, such as a DMD (digital micromirror), which includes a plurality of micromirrors, each micromirror unit is a projection pixel, and each projection pixel is individually coded, Therefore, arbitrary coding patterns can be projected, such as speckle patterns and fringe patterns.
  • the single module can also be a combination of VCSEL, lens and MEMS.
  • In some embodiments, the projection module consists of two modules, which respectively project the fringe pattern and the speckle pattern onto the target object; for example, a VCSEL-and-DOE combination projects the speckle pattern, while a DMD projects the fringe pattern.
  • S302 Control the acquisition module to acquire the phase shift fringe image and the speckle image
  • a three-frame phase-shift fringe image is taken as an example for description.
  • the three phase-shifted fringe images can be expressed as:
  • I₁(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) − 2π/3)
  • I₂(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y))    (1)
  • I₃(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) + 2π/3)
  • where I′ denotes the average brightness, I″ is the amplitude of the modulation signal, and φ denotes the absolute phase.
  • S303 Calculate the relative phase of each pixel based on the phase shift fringe image, and obtain a first depth value of the pixel based on matching the speckle image with a pre-stored reference image;
  • the relative phase can be expressed as:
  • φ′(x, y) = arctan[√3·(I₁ − I₃) / (2·I₂ − I₁ − I₃)]    (2)
  • where the relative phase takes values in [−π, π].
  • Obtaining the first depth value of the pixel by matching the speckle image against the pre-stored reference image can be performed with existing techniques, so it is not repeated here.
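Although the application defers speckle matching to existing techniques, a common choice is zero-mean normalized cross-correlation (ZNCC) block matching along the epipolar line; the sketch below is illustrative only (integer disparities, hypothetical names, no subpixel refinement):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_disparity(speckle, reference, x, y, half=4, max_disp=32):
    """Best integer disparity of the patch centred at (x, y), found by
    exhaustive ZNCC search along the horizontal epipolar line."""
    patch = speckle[y - half:y + half + 1, x - half:x + half + 1]
    best_d, best_s = 0, -2.0
    for d in range(max_disp + 1):
        xr = x - d
        if xr - half < 0:
            break
        cand = reference[y - half:y + half + 1, xr - half:xr + half + 1]
        s = zncc(patch, cand)
        if s > best_s:
            best_d, best_s = d, s
    return best_d
```

Because speckle patterns are dense and aperiodic, the correlation peak is sharp, which is what makes the coarse first depth value reliable enough to seed the dephasing step.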
  • Dephasing is performed according to the following system to obtain the projected-image coordinate X_p of the pixel:
  • (c₁ − x_c·c₃)ᵀ·X̃ = 0,  (c₂ − y_c·c₃)ᵀ·X̃ = 0,  (p₁ − x_p·p₃)ᵀ·X̃ = 0    (7)
  • where X, Y, Z are the three-dimensional coordinates of pixel p obtained by matching the speckle image against the pre-stored reference image, and x_c, y_c are the pixel coordinates of point p in the camera image.
  • From the projected-image coordinate X_p, the absolute phase of the pixel is calculated using the following formula:
  • φ = 2π·N·X_p / w    (4)
  • where N is the number of fringes, w is the horizontal resolution of the projected image, and φ is the absolute phase.
  • the method of triangulation is used to obtain the second depth value of the point p, and the second depth value is the accurate depth value.
  • the implementation of this application also provides a storage medium for storing a computer program, which at least executes the three-dimensional measurement method described in the foregoing embodiment when the computer program is executed.
  • the storage medium may be implemented by any type of volatile or non-volatile storage device, or a combination thereof.
  • The non-volatile memory can be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), ferromagnetic random access memory (FRAM), flash memory, magnetic surface memory, optical disc, or compact disc read-only memory (CD-ROM); the magnetic surface memory can be disk storage or tape storage.
  • the volatile memory may be a random access memory (RAM, Random Access Memory), which is used as an external cache.
  • By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), synchronous static random access memory (SSRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), SyncLink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DRRAM).
  • the storage media described in the embodiments of the present application are intended to include, but are not limited to, these and any other suitable types of storage.
  • An embodiment of the present application also provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements at least the three-dimensional measurement method described in the foregoing embodiments.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A three-dimensional measurement system (10), including: a projection module (11) for projecting images onto a target object (20), the images including at least three frames of phase-shifted fringe images and one frame of speckle image; an acquisition module (12) for acquiring the phase-shifted fringe images and the speckle image; and a control and processor (13) for calculating the relative phase of each pixel from the phase-shifted fringe images, matching the speckle image against a pre-stored reference image to obtain a first depth value of the pixel, dephasing the relative phase of the pixel according to the first depth value to determine the absolute phase of the pixel, and determining a second depth value of the pixel based on the absolute phase. By projecting at least three frames of phase-shifted fringe images and one frame of speckle image, obtaining the first depth value from the speckle image, and using the first depth value to dephase the relative phase of the fringe images, a more accurate absolute phase is obtained and an accurate depth value is calculated from it, thereby improving measurement accuracy. A three-dimensional measurement method and a computer device are also disclosed.

Description

Three-dimensional measurement system and method, and computer device
This application claims priority to Chinese patent application No. 202010445250.1, entitled "Three-dimensional measurement system and method, and computer device", filed with the China Patent Office on May 24, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of three-dimensional measurement technology, and in particular to a three-dimensional measurement system and method, and a computer device.
Background
Three-dimensional reconstruction technology is widely used in 3D printing, machine vision, digital archaeology, medical applications, and other fields. Currently popular methods include laser scanning, stereo vision, time-of-flight, and structured light.
The structured-light method based on speckle matching usually matches a captured speckle image of the target scene against a pre-stored reference image to obtain a disparity map, and calculates the depth or three-dimensional structure of the scene from the disparity map and the calibration parameters of the measurement system. The advantage of this method is that a single captured frame suffices for three-dimensional measurement; the disadvantage is that its measurement accuracy is limited.
Among existing three-dimensional measurement methods, the phase-shift method has an advantage in measurement accuracy; a phase-shift-based system usually requires one projector and either one or two cameras. The phase-shift method usually needs to project at least three frames of phase-shifted fringe patterns onto the target scene. Since single-frequency phase-shift patterns yield only the relative phase, obtaining the absolute phase additionally requires projecting multiple frames of phase-shift patterns at different frequencies, which leads to low measurement efficiency.
A dual-camera scheme can achieve three-dimensional measurement with only three speckle-embedded patterns. However, such a system requires an extra camera, which increases hardware cost; moreover, because all three devices must see a region before it can be measured, a dual-camera system also causes more shadow-related problems.
In view of the above problems in the prior art, it is necessary to develop a solution that performs fast and accurate three-dimensional measurement of a target scene with a compact three-dimensional measurement system.
Summary
The purpose of this application is to provide a three-dimensional measurement system and method, and a computer device, to solve at least one of the problems in the background art described above.
An embodiment of the present application provides a three-dimensional measurement system, including: a projection module for projecting images onto a target object, the images including at least three frames of phase-shifted fringe images and one frame of speckle image; an acquisition module for acquiring the phase-shifted fringe images and the speckle image; and a control and processor for calculating the relative phase of each pixel from the at least three frames of phase-shifted fringe images, matching the speckle image against a pre-stored reference image to obtain a first depth value of the pixel, dephasing the relative phase of the pixel according to the first depth value to determine the absolute phase of the pixel, and calculating a second depth value of the pixel based on the absolute phase.
In some embodiments, the control and processor calculates the projected-image coordinate of the pixel from the pixel's first depth value, and calculates the absolute phase of the pixel from the projected-image coordinate using the following formula:
φ = 2π·N·X_p / w
where X_p is the projected-image coordinate of the pixel, N is the number of fringes in the fringe image, w is the horizontal resolution of the projected image, and φ denotes the absolute phase.
In some embodiments, the projection module projects three frames of phase-shifted fringe images and one frame of speckle image onto the target object, the three phase-shifted fringe images being expressed as:
I₁(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) − 2π/3)
I₂(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y))
I₃(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) + 2π/3)
where I′ denotes the average brightness, I″ is the amplitude of the modulation signal, and φ denotes the absolute phase.
In some embodiments, the projection module projects the speckle pattern and the fringe pattern onto the target object through a single module.
In some embodiments, the projection module projects patterns onto the target object with two separate modules, one projecting the speckle pattern and the other projecting the fringe pattern.
An embodiment of the present application further provides a three-dimensional measurement method, including the following steps:
controlling a projection module to project images onto a target object, the images including at least three frames of phase-shifted fringe images and one frame of speckle image;
controlling an acquisition module to acquire the phase-shifted fringe images and the speckle image;
calculating the relative phase of each pixel using the phase-shifted fringe images, and matching the speckle image against a pre-stored reference image to obtain a first depth value of the pixel;
dephasing the relative phase of the pixel according to its first depth value to determine the absolute phase of the pixel, and calculating a second depth value of the pixel based on the absolute phase.
In some embodiments, a control and processor calculates the projected-image coordinate of the pixel from the pixel's first depth value, and calculates the absolute phase of the pixel from the projected-image coordinate using the following formula:
φ = 2π·N·X_p / w
where X_p is the projected-image coordinate of the pixel, N is the number of fringes, w is the horizontal resolution of the projected image, and φ denotes the absolute phase.
In some embodiments, the projection module projects three frames of phase-shifted fringe images and one frame of speckle image onto the target object, the three phase-shifted fringe images being expressed as:
I₁(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) − 2π/3)
I₂(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y))
I₃(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) + 2π/3)
where I′ denotes the average brightness, I″ is the amplitude of the modulation signal, and φ denotes the absolute phase.
In some embodiments, the projection module uses one module to project the speckle pattern and the fringe pattern onto the target object; or, the projection module uses two modules to project patterns onto the target object separately, with one module projecting the speckle pattern and the other projecting the fringe pattern.
An embodiment of the present application further provides a computer device, including: a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements at least a three-dimensional measurement method including the steps of: controlling a projection module to project images onto a target object, the images including at least three frames of phase-shifted fringe images and one frame of speckle image; controlling an acquisition module to acquire the phase-shifted fringe images and the speckle image; calculating the relative phase of each pixel using the phase-shifted fringe images, and matching the speckle image against a pre-stored reference image to obtain a first depth value of the pixel; and dephasing the relative phase of the pixel according to the first depth value to determine the absolute phase of the pixel, and calculating a second depth value of the pixel based on the absolute phase.
An embodiment of the present application provides a three-dimensional measurement system, including: a projection module for projecting images onto a target object, the images including at least three frames of phase-shifted fringe images and one frame of speckle image; an acquisition module for acquiring the phase-shifted fringe images and the speckle image; and a control and processor for calculating the relative phase of each pixel from the at least three frames of phase-shifted fringe images, matching the speckle image against a pre-stored reference image to obtain a first depth value of the pixel, dephasing the relative phase of the pixel according to the first depth value to determine the absolute phase of the pixel, and calculating a second depth value of the pixel based on the absolute phase. In the embodiments of this application, by projecting at least three frames of phase-shifted fringe images and one frame of speckle image, matching the speckle image against a pre-stored reference image to obtain a first depth value, and using the first depth value to dephase the relative phase of the three frames of phase-shifted fringe images to obtain a more accurate absolute phase, an accurate depth value is calculated from the absolute phase, thereby improving measurement accuracy.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a three-dimensional measurement system according to an embodiment of this application.
Fig. 2 is a schematic diagram of the principle of calculating a depth value from the absolute phase of a pixel in the three-dimensional measurement system of the embodiment of Fig. 1.
Fig. 3 is a flowchart of a three-dimensional measurement method according to another embodiment of this application.
Detailed Description
To make the technical problems to be solved, the technical solutions, and the beneficial effects of the embodiments of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain this application and are not intended to limit it.
It should be noted that when an element is described as being "fixed to" or "disposed on" another element, it may be directly on the other element or indirectly on it; when an element is described as being "connected to" another element, it may be directly or indirectly connected to the other element. In addition, a connection may serve either a fixing function or an electrical-connection function.
It should be understood that orientation or position terms such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" indicate orientations or positional relationships based on the drawings; they are only for convenience of describing the embodiments of this application and simplifying the description, and do not indicate or imply that the referred devices or elements must have a particular orientation or be constructed and operated in a particular orientation, and therefore cannot be understood as limiting this application.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, features defined with "first" and "second" may explicitly or implicitly include one or more of those features. In the description of the embodiments of this application, "a plurality of" means two or more, unless otherwise specifically defined.
Referring to Fig. 1, Fig. 1 is a schematic structural diagram of a three-dimensional measurement system 10 provided by an embodiment of this application. The three-dimensional measurement system 10 includes a projection module 11, an acquisition module 12, and a control and processor 13 connected to the projection module 11 and the acquisition module 12, respectively. The projection module 11 is used to project images onto a target object 20, the images including at least three frames of phase-shifted fringe images and one frame of speckle image; the acquisition module 12 is used to acquire the phase-shifted fringe images and the speckle image; the control and processor is used to calculate the relative phase of each pixel from the at least three frames of phase-shifted fringe images, match the acquired speckle image against a pre-stored reference image to obtain a first depth value of the pixel, dephase the relative phase of the pixel according to the first depth value to determine the absolute phase of the pixel, and calculate a second depth value of the pixel based on the absolute phase.
In some embodiments, the projection module 11 projects images onto the target object 20, the images including three frames of fringe images and one frame of speckle image. In the embodiments of this application, the phase-shift method is used to obtain the relative phase: phase-shifted fringe patterns are projected onto the target surface and the relative phase is calculated at each pixel. Specifically, taking the three-step phase-shift method as an example, the minimum number of phase-shifted fringe images for the three-step method is three; therefore, the images projected by the projection module include at least three frames of fringe images (i.e., three phase-shifted fringe images). It is understandable that more phase-shifted fringe images can improve the accuracy of the reconstructed phase.
Taking three frames as an example, the three phase-shifted fringe images can be expressed by the following formulas:
I₁(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) − 2π/3)
I₂(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y))    (1)
I₃(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) + 2π/3)
where I′ denotes the average brightness, I″ is the amplitude of the modulation signal, and φ denotes the absolute phase.
The control and processor 13 calculates the relative phase of each pixel from the above formulas, obtaining the expressions for the relative and absolute phase:
φ′(x, y) = arctan[√3·(I₁ − I₃) / (2·I₂ − I₁ − I₃)]    (2)
φ(x, y) = φ′(x, y) + 2kπ    (3)
where the relative phase takes values in [−π, π], k denotes the fringe period number, φ′ denotes the relative phase, and φ denotes the absolute phase.
Given the projected-image coordinate X_p of a pixel, the absolute phase of the pixel can be calculated according to:
φ = 2π·N·X_p / w    (4)
Since k in formula (3) is the fringe period number and cannot be determined from the three fringe images, the k value must be determined in order to fix the absolute phase. In this embodiment, one additional frame of speckle image is projected and matched against a pre-stored reference image to obtain the first depth value of a pixel, and the relative phase is dephased according to the pixel's first depth value to determine k. Specifically, the first depth value is first used to calculate the projected-image coordinate X_p of the pixel, the absolute phase φ of the pixel is calculated according to formula (4), the k value is then determined from formula (3), and, further, a more accurate second depth value Z_2 of the pixel is calculated based on the absolute phase of the k-th order fringe.
In some embodiments, the control and processor 13 matches the speckle image against the pre-stored reference image, obtains from the disparity map of the current view the disparity value of a given pixel (denoted as point p), and thereby calculates the first depth value Z_1 of the pixel; the projected-image coordinate X_p of the pixel can be calculated from the first depth value Z_1, and the absolute phase of point p can then be calculated according to formula (4). Because the accuracy of speckle-image matching is limited, the first depth value Z_1 is not accurate enough; therefore, to obtain higher accuracy, the relative phase φ′ is dephased using the first depth value Z_1 of point p to obtain a more accurate absolute phase. Specifically, the k value is obtained from formula (3), and a more accurate second depth value Z_2 of point p is then calculated from the absolute phase of the k-th order fringe.
Taking point p as an example, dephasing the relative phase φ′ with the first depth value Z_1 of point p to obtain a more accurate absolute phase φ is explained in detail below. With the parameters of the projection module 11 and the acquisition module 12 known, the speckle image acquired by the acquisition module 12 is matched against the pre-stored reference image to obtain the three-dimensional coordinates (X, Y, Z) of point p. Projecting (X, Y, Z) onto the image planes, with the camera-image coordinates denoted x_c = (x_c, y_c)ᵀ and the projected-image coordinates denoted x_p = (x_p, y_p)ᵀ, the following equations can be written:
S_C·x̃_c = K_C·[R_C|T_C]·X̃ = P_C·X̃,  S_P·x̃_p = K_P·[R_P|T_P]·X̃ = P_P·X̃    (5)
where X̃ denotes the homogeneous coordinates of P(X, Y, Z), S_C and S_P denote scale factors, K_C and K_P denote the intrinsic matrices, [R_P|T_P] and [R_C|T_C] denote the extrinsic matrices, and P_C, P_P denote the projection matrices of the camera and the projector, respectively. Let:
P_C = [c₁ᵀ; c₂ᵀ; c₃ᵀ],  P_P = [p₁ᵀ; p₂ᵀ; p₃ᵀ]    (6)
According to formula (6), the three-dimensional coordinates (X, Y, Z) of point p satisfy:
(c₁ − x_c·c₃)ᵀ·X̃ = 0,  (c₂ − y_c·c₃)ᵀ·X̃ = 0,  (p₁ − x_p·p₃)ᵀ·X̃ = 0    (7)
where x_c, y_c denote the coordinates of point p in the camera image, and x_p, y_p denote the coordinates of point p in the projected image.
From the three-dimensional coordinates (X, Y, Z) of point p, X_p can be calculated according to formula (7); based on X_p, the absolute phase φ of point p can be calculated using formula (4), and the k value can be calculated using formula (3). Since the fringe-based calculation of the depth of point p rests on an accurate phase value, and the phase-shift method is more accurate than block matching, the depth value of point p calculated with the fringe pattern has relatively high accuracy; therefore, a more accurate second depth value Z_2 of point p can be calculated from the absolute phase of point p on the k-th order fringe.
In some embodiments, the control and processor 13 calculates the projected-image coordinate X_p of pixel p from the absolute phase of point p on the k-th order fringe using formula (4), and then calculates the second depth value Z_2 of point p using formula (7).
In some embodiments, the control and processor 13 calculates the second depth value of pixel p from the absolute phase φ of point p on the k-th order fringe using triangulation. Referring to Fig. 2, the projection module 11 projects a fringe image onto the target object 20, the acquisition module 12 acquires the fringe image reflected from the target object 20, and the depth value of point p is calculated by triangulation:
PQ = L·(φ_A − φ_B)·T / [(φ_A − φ_B)·T + 2π·b]    (8)
where L is the distance from the projection module to the reference plane, b is the distance between the projection module and the acquisition module, T is the fringe period on the reference plane, φ_B denotes the absolute phase of point B, and φ_A denotes the absolute phase of point A.
The length of PQ is obtained from formula (8), and the depth value of point p is calculated according to:
Z = L − PQ    (9)
In some embodiments, the projection module 11 projects the speckle pattern and the fringe pattern onto the target object through a single module, for example a DMD (digital micromirror device). A DMD is composed of millions of flippable micro-mirrors; each micro-mirror unit of the DMD is one projection pixel, and each projection pixel is individually coded, so the DMD can project arbitrary coded patterns, including speckle and fringe patterns. It is understandable that a combination of a vertical-cavity surface-emitting laser (VCSEL), a lens, and a micro-electro-mechanical system (MEMS), or other combinations, can also project the speckle pattern and the fringe pattern onto the target object as a single module.
In some embodiments, the projection module 11 uses two modules to project patterns onto the target object, one projecting the speckle pattern and the other projecting the fringe pattern. For example, a combination of a VCSEL and a DOE (diffractive optical element) projects the speckle pattern onto the target object, while a combination of a VCSEL and a MEMS projects the fringe pattern; or, a DMD projects the fringe pattern. It is understandable that there are many ways to project speckle patterns and fringe patterns, and their combinations are not limited here.
Referring to Fig. 3, as another embodiment of this application, a three-dimensional measurement method is further provided; the measurement method is implemented based on the three-dimensional measurement system of the foregoing embodiments. Fig. 3 is a flowchart of a three-dimensional measurement method according to an embodiment of this application; the measurement method includes the following steps:
S301. Control the projection module to project images onto the target object, the projected images including at least three frames of phase-shifted fringe images and one frame of speckle image.
In some embodiments, the projection module is a single module, for example a DMD (digital micromirror device), which includes a plurality of micro-mirrors; each micro-mirror unit is one projection pixel, and each projection pixel is individually coded, so arbitrary coded patterns can be projected, such as speckle patterns and fringe patterns. Of course, the single module can also be a combination of a VCSEL, a lens, and a MEMS.
In some embodiments, the projection module consists of two modules, which respectively project the fringe pattern and the speckle pattern onto the target object; for example, a VCSEL-and-DOE combination forms one module that projects the speckle pattern, while a DMD projects the fringe pattern.
S302. Control the acquisition module to acquire the phase-shifted fringe images and the speckle image.
Specifically, taking three frames of phase-shifted fringe images as an example, the three phase-shifted fringe images can be expressed as:
I₁(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) − 2π/3)
I₂(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y))    (1)
I₃(x, y) = I′(x, y) + I″(x, y)·cos(φ(x, y) + 2π/3)
where I′ denotes the average brightness, I″ is the amplitude of the modulation signal, and φ denotes the absolute phase.
S303. Calculate the relative phase of each pixel based on the phase-shifted fringe images, and obtain a first depth value of the pixel based on matching the speckle image against a pre-stored reference image.
Specifically, from the expressions of the phase-shifted fringe images in step S302, the relative phase can be expressed as:
φ′(x, y) = arctan[√3·(I₁ − I₃) / (2·I₂ − I₁ − I₃)]    (2)
where the relative phase takes values in [−π, π].
It is understandable that obtaining the first depth value of the pixel based on matching the speckle image against the pre-stored reference image can be performed with existing techniques, so it is not repeated here.
S304. Dephase the relative phase of the pixel according to its first depth value to determine the absolute phase of the pixel, and calculate a second depth value of the pixel based on the absolute phase; this second depth value is the accurate depth value.
Specifically, dephasing is performed according to the following system to obtain the projected-image coordinate X_p of the pixel:
(c₁ − x_c·c₃)ᵀ·X̃ = 0,  (c₂ − y_c·c₃)ᵀ·X̃ = 0,  (p₁ − x_p·p₃)ᵀ·X̃ = 0    (7)
where X, Y, Z are the three-dimensional coordinates of pixel p obtained by matching the speckle image against the pre-stored reference image, and x_c, y_c are the pixel coordinates of point p in the camera image.
From the projected-image coordinate X_p, the absolute phase of the pixel is calculated using:
φ = 2π·N·X_p / w    (4)
where N is the number of fringes, w is the horizontal resolution of the projected image, and φ denotes the absolute phase.
From the absolute phase φ of point p, the second depth value of point p is obtained by triangulation; the second depth value is the accurate depth value.
本申请实施还提供一种存储介质,用于存储计算机程序,该计算机程序被执行时至少执行前述实施例所述的三维测量方法。
所述存储介质可以由任何类型的易失性或非易失性存储设备、或者它们的组合来实现。其中,非易失性存储器可以是只读存储器(ROM,Read Only Memory)、可编程只读存储器(PROM,Programmable Read-Only Memory)、可擦除可编程只读存储器(EPROM,ErasableProgrammable Read-Only Memory)、电可擦除可编程只读存储器(EEPROM,ElectricallyErasable Programmable Read-Only Memory)、磁性随机存取存储器(FRAM,FerromagneticRandom Access Memory)、 快闪存储器(Flash Memory)、磁表面存储器、光盘、或只读光盘(CD-ROM,Compact Disc Read-Only Memory);磁表面存储器可以是磁盘存储器或磁带存储器。易失性存储器可以是随机存取存储器(RAM,Random Access Memory),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(SRAM,Static Random Access Memory)、同步静态随机存取存储器(SSRAM,SynchronousStatic Random Access Memory)、动态随机存取存储器(DRAM,Dynamic Random AccessMemory)、同步动态随机存取存储器(SDRAM,Synchronous Dynamic Random AccessMemory)、双倍数据速率同步动态随机存取存储器(DDRSDRAM,Double Data RateSynchronous Dynamic Random Access Memory)、增强型同步动态随机存取存储器(ESDRAM,Enhanced Synchronous Dynamic Random Access Memory)、同步连接动态随机存取存储器(SLDRAM,SyncLink Dynamic Random Access Memory)、直接内存总线随机存取存储器(DRRAM,Direct Rambus Random Access Memory)。本申请实施例描述的存储介质旨在包括但不限于这些和任意其它适合类型的存储器。
An embodiment of the present application further provides computer equipment comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements at least the three-dimensional measurement method described in the foregoing embodiments.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the division into the functional units and modules described above is merely illustrative. In practical applications, the above functions may be assigned to different functional units or modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to accomplish all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, may exist separately as physical units, or two or more of them may be integrated into one unit; the integrated units may be implemented in hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only intended to distinguish them from one another and are not intended to limit the scope of protection of the present application.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or described in a given embodiment, reference may be made to the relevant descriptions of other embodiments.
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the scope of protection of the present application.

Claims (10)

  1. A three-dimensional measurement system, comprising:
    a projection module configured to project images onto a target object, the images comprising at least three phase-shifted fringe images and one speckle image;
    an acquisition module configured to capture the phase-shifted fringe images and the speckle image; and
    a control and processing unit configured to compute the relative phase of each pixel from the at least three phase-shifted fringe images, match the speckle image against a pre-stored reference image to obtain a first depth value of the pixel, unwrap the relative phase of the pixel according to the first depth value to determine an absolute phase of the pixel, and compute a second depth value of the pixel from the absolute phase.
  2. The three-dimensional measurement system according to claim 1, wherein the control and processing unit computes the projected-image coordinate of the pixel from the first depth value of the pixel, and computes the absolute phase of the pixel from the projected-image coordinate using the following formula:
    φ = 2πN·X_p / w
    where X_p is the projected-image coordinate of the pixel, N is the number of fringes in the fringe images, w is the horizontal resolution of the projected image, and φ denotes the absolute phase.
  3. The three-dimensional measurement system according to claim 1, wherein the projection module projects three phase-shifted fringe images and one speckle image onto the target object, the three phase-shifted fringe frames being expressed as:
    I₁(x, y) = I' + I''cos(φ − 2π/3)
    I₂(x, y) = I' + I''cos(φ)
    I₃(x, y) = I' + I''cos(φ + 2π/3)
    where I' denotes the average intensity, I'' the amplitude of the modulation signal, and φ the absolute phase.
  4. The three-dimensional measurement system according to claim 1, wherein the projection module projects the speckle pattern and the fringe pattern onto the target object through a single module.
  5. The three-dimensional measurement system according to claim 1, wherein the projection module projects patterns onto the target object using two separate modules, one projecting the speckle pattern and the other projecting the fringe pattern.
  6. A three-dimensional measurement method, comprising the steps of:
    controlling a projection module to project images onto a target object, the images comprising at least three phase-shifted fringe images and one speckle image;
    controlling an acquisition module to capture the phase-shifted fringe images and the speckle image;
    computing the relative phase of each pixel from the phase-shifted fringe images, and matching the speckle image against a pre-stored reference image to obtain a first depth value of the pixel; and
    unwrapping the relative phase of the pixel according to the first depth value to determine an absolute phase of the pixel, and computing a second depth value of the pixel from the absolute phase.
  7. The three-dimensional measurement method according to claim 6, wherein a control and processing unit computes the projected-image coordinate of the pixel from the first depth value of the pixel, and computes the absolute phase of the pixel from the projected-image coordinate using the following formula:
    φ = 2πN·X_p / w
    where X_p is the projected-image coordinate of the pixel, N is the number of fringes, w is the horizontal resolution of the projected image, and φ denotes the absolute phase.
  8. The three-dimensional measurement method according to claim 6, wherein the projection module projects three phase-shifted fringe images and one speckle image onto the target object, the three phase-shifted fringe frames being expressed as:
    I₁(x, y) = I' + I''cos(φ − 2π/3)
    I₂(x, y) = I' + I''cos(φ)
    I₃(x, y) = I' + I''cos(φ + 2π/3)
    where I' denotes the average intensity, I'' the amplitude of the modulation signal, and φ the absolute phase.
  9. The three-dimensional measurement method according to claim 6, wherein the projection module projects the speckle pattern and the fringe pattern onto the target object using a single module; or the projection module projects patterns onto the target object using two separate modules, one projecting the speckle pattern and the other projecting the fringe pattern.
  10. Computer equipment, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements at least a three-dimensional measurement method comprising the steps of:
    controlling a projection module to project images onto a target object, the images comprising at least three phase-shifted fringe images and one speckle image;
    controlling an acquisition module to capture the phase-shifted fringe images and the speckle image;
    computing the relative phase of each pixel from the phase-shifted fringe images, and matching the speckle image against a pre-stored reference image to obtain a first depth value of the pixel; and
    unwrapping the relative phase of the pixel according to the first depth value to determine an absolute phase of the pixel, and computing a second depth value of the pixel from the absolute phase.
PCT/CN2020/141869 2020-05-24 2020-12-30 Three-dimensional measurement system and method, and computer equipment WO2021238214A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/828,923 US20220290977A1 (en) 2020-05-24 2022-05-31 Three-dimensional measurement system, method, and computer equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010445250.1A CN111721236B (zh) 2020-05-24 2020-05-24 Three-dimensional measurement system and method, and computer equipment
CN202010445250.1 2020-05-24

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/828,923 Continuation US20220290977A1 (en) 2020-05-24 2022-05-31 Three-dimensional measurement system, method, and computer equipment

Publications (1)

Publication Number Publication Date
WO2021238214A1 true WO2021238214A1 (zh) 2021-12-02

Family

ID=72565016

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/141869 WO2021238214A1 (zh) 2020-05-24 2020-12-30 Three-dimensional measurement system and method, and computer equipment

Country Status (3)

Country Link
US (1) US20220290977A1 (zh)
CN (1) CN111721236B (zh)
WO (1) WO2021238214A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114322845A (zh) * 2021-12-30 2022-04-12 易思维(天津)科技有限公司 System for projecting laser array images and method for three-dimensional reconstruction using the same
CN115950359A (zh) * 2023-03-15 2023-04-11 梅卡曼德(北京)机器人科技有限公司 Three-dimensional reconstruction method and apparatus, and electronic device
CN114322845B (zh) * 2021-12-30 2024-05-24 易思维(天津)科技有限公司 System for projecting laser array images and method for three-dimensional reconstruction using the same

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210262787A1 (en) * 2020-02-21 2021-08-26 Hamamatsu Photonics K.K. Three-dimensional measurement device
CN111721236B (zh) 2020-05-24 2022-10-25 奥比中光科技集团股份有限公司 Three-dimensional measurement system and method, and computer equipment
CN112669362B (zh) * 2021-01-12 2024-03-29 四川深瑞视科技有限公司 Speckle-based depth information acquisition method, apparatus and system
CN112764546B (zh) * 2021-01-29 2022-08-09 重庆子元科技有限公司 Virtual character displacement control method, apparatus and terminal device
CN112927340B (zh) * 2021-04-06 2023-12-01 中国科学院自动化研究所 Three-dimensional reconstruction acceleration method, system and device independent of mechanical placement
CN114708316B (zh) * 2022-04-07 2023-05-05 四川大学 Structured-light three-dimensional reconstruction method and apparatus based on circular fringes, and electronic device
CN115523866B (zh) * 2022-10-20 2023-10-03 中国矿业大学 Fringe-projection three-dimensional measurement method suitable for detecting highly reflective foreign objects conveyed on coal mine belt conveyors

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5621529A (en) * 1995-04-05 1997-04-15 Intelligent Automation Systems, Inc. Apparatus and method for projecting laser pattern with reduced speckle noise
CN101556143A (zh) * 2008-04-09 2009-10-14 通用电气公司 Three-dimensional measurement probing apparatus and method
US7787131B1 (en) * 2007-02-06 2010-08-31 Alpha Technology, LLC Sensitivity modulated imaging systems with shearing interferometry and related methods
US20170004623A1 (en) * 2005-03-30 2017-01-05 Apple Inc. Method and system for object reconstruction
CN106548489A (zh) * 2016-09-20 2017-03-29 深圳奥比中光科技有限公司 Registration method for a depth image and a color image, and three-dimensional image acquisition device
CN107990846A (zh) * 2017-11-03 2018-05-04 西安电子科技大学 Active-passive combined depth information acquisition method based on single-frame structured light
CN108613637A (zh) * 2018-04-13 2018-10-02 深度创新科技(深圳)有限公司 Reference-image-based phase unwrapping method and system for a structured-light system
CN110411374A (zh) * 2019-08-26 2019-11-05 湖北工业大学 Dynamic three-dimensional surface-shape measurement method and system
CN110595388A (zh) * 2019-08-28 2019-12-20 南京理工大学 High-dynamic real-time three-dimensional measurement method based on binocular vision
CN111721236A (zh) * 2020-05-24 2020-09-29 深圳奥比中光科技有限公司 Three-dimensional measurement system and method, and computer equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102353332A (zh) * 2011-06-28 2012-02-15 山东大学 Electronic speckle pattern interferometry digital compensation method and system
EP2796938B1 (de) * 2013-04-25 2015-06-10 VOCO GmbH Device for capturing a 3D structure of an object
CN104596439A (zh) * 2015-01-07 2015-05-06 东南大学 Phase-information-assisted speckle-matching three-dimensional measurement method
JPWO2017183181A1 (ja) * 2016-04-22 2019-02-28 オリンパス株式会社 Three-dimensional shape measuring apparatus
CN107346425B (zh) * 2017-07-04 2020-09-29 四川大学 Three-dimensional texture photographing system, calibration method and imaging method
CN108088391B (zh) * 2018-01-05 2020-02-07 深度创新科技(深圳)有限公司 Method and system for three-dimensional topography measurement



Also Published As

Publication number Publication date
US20220290977A1 (en) 2022-09-15
CN111721236A (zh) 2020-09-29
CN111721236B (zh) 2022-10-25

Similar Documents

Publication Publication Date Title
WO2021238214A1 (zh) Three-dimensional measurement system and method, and computer equipment
CN110390719B (zh) Time-of-flight point cloud based reconstruction device
CN106548489B (zh) Registration method for a depth image and a color image, and three-dimensional image acquisition device
CN106127745B (zh) Joint calibration method and device for a structured-light 3D vision system and a line-scan camera
KR101854461B1 (ko) Camera system and object recognition method thereof
KR101259835B1 (ko) Apparatus and method for generating depth information
JP2017112602A (ja) Image calibration, stitching and depth reconstruction method for a panoramic fisheye camera, and system therefor
Okatani et al. Autocalibration of a projector-camera system
KR20060031685A (ko) Image projector, tilt angle detection method, and projected image correction method
JP5633058B1 (ja) Three-dimensional measuring device and three-dimensional measuring method
KR20150101749A (ko) Apparatus and method for calculating the three-dimensional shape of an object
WO2021218196A1 (zh) Depth imaging method and apparatus, and computer-readable storage medium
CN107517346B (zh) Structured-light-based photographing method and apparatus, and mobile device
JP2016100698A (ja) Calibration apparatus, calibration method, and program
WO2019169941A1 (zh) Distance measurement method and apparatus
JP6308637B1 (ja) Three-dimensional measurement method using feature quantities, and apparatus therefor
Wilm et al. Accurate and simple calibration of DLP projector systems
CN112816967A (zh) Image distance measurement method and apparatus, ranging device, and readable storage medium
CN110691228A (zh) Three-dimensional-transformation-based depth image noise labeling method and apparatus, and storage medium
JP2001016621A (ja) Multi-view data input device
JP2018009927A (ja) Image processing apparatus, image processing method, and program
CN112598719A (zh) Depth imaging system, calibration method thereof, depth imaging method, and storage medium
JP6065670B2 (ja) Three-dimensional measurement system, program, and method
CN109741384B (zh) Multi-distance detection apparatus and method for a depth camera
TWM594322U (zh) Omnidirectional stereo vision camera configuration system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20938265

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20938265

Country of ref document: EP

Kind code of ref document: A1