WO2020113449A1 - Image acquisition method, device and system - Google Patents

Image acquisition method, device and system Download PDF

Info

Publication number
WO2020113449A1
WO2020113449A1 (application PCT/CN2018/119248)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
structured light
images
laser
time points
Prior art date
Application number
PCT/CN2018/119248
Other languages
English (en)
French (fr)
Inventor
阳光
Original Assignee
深圳配天智能技术研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳配天智能技术研究院有限公司
Priority to PCT/CN2018/119248 priority Critical patent/WO2020113449A1/zh
Priority to CN201880087119.3A priority patent/CN111630343A/zh
Publication of WO2020113449A1 publication Critical patent/WO2020113449A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/02Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object

Definitions

  • the invention relates to the technical field of image processing, and in particular to an image acquisition method, device and system.
  • Three-dimensional measurement technology is a technology to obtain three-dimensional feature information. It plays an important role in many fields, such as industrial part inspection, industrial design, and reverse engineering.
  • at present, three-dimensional measurement can be carried out in many different ways. Among them, optical three-dimensional measurement is finding ever wider application and has become an important branch of the field; in particular, methods that use vision processing to obtain images and then extract three-dimensional features of an object from those images are widely used in industry.
  • combining a vision system with structured light projection is a common optical three-dimensional measurement technique.
  • in the prior art, structured light is projected onto the object to be measured, the structured light is photographed by the vision system, and point cloud techniques are applied to obtain the corresponding image information for three-dimensional reconstruction.
  • when point cloud techniques are applied to an image, the more structured light stripes it contains, the more precise the calculation and the higher the robustness.
  • however, the corresponding amount of computation also increases: in the prior art, when the number of structured light stripes increases, the amount of computation grows with the cube of the stripe count, placing a heavy computational load on the computing system.
  • the object of the present invention is to provide an image acquisition method, device and system, which can reduce the amount of calculation of point cloud computing and reduce the calculation load.
  • to achieve the above object, the present invention provides an image acquisition method, which includes: acquiring multiple structured light images obtained by photographing, at multiple time points, the same target onto which a structured light source projects; performing point cloud calculation on the multiple structured light images respectively to obtain corresponding multiple point cloud images; and integrating the multiple point cloud images into a total point cloud image.
  • the present invention proposes an image acquisition device including: a memory and a processor coupled to each other;
  • the memory is used to store program instructions executed by the processor
  • the processor is configured to perform the following actions according to the program instructions:
  • the present invention proposes an image acquisition system including the above image acquisition device together with a structured light source and an image capture device; the structured light source and the image capture device are each connected to the processor of the image acquisition device;
  • the processor is used to perform the following operations:
  • the present invention projects the structured light onto the same target in a time-shared manner, which is equivalent to projecting the light stripes of the structured light in batches; each time, the corresponding structured light image is acquired and point cloud calculation is performed on it, yielding multiple point cloud images that are then integrated into a total point cloud image.
  • the obtained total point cloud image contains the point cloud information of all the structured light stripes.
  • the amount of computation is only the product of the cube of the number of structured light stripes projected each time and the number of structured light images, which greatly reduces the amount of point cloud computation and the computational load while still obtaining sufficient point cloud information.
  • FIG. 1 is a schematic flowchart of a first embodiment of an image acquisition method of the present invention
  • FIG. 2a-2e are schematic flow diagrams of processing structured light in FIG. 1;
  • FIG. 3a is a schematic structural view of the structured light source when the electronic control component is an electro-optic deflector;
  • FIG. 3b is a schematic structural view of the structured light source when the electronic control component is a variable focus lens;
  • FIG. 4 is a schematic flowchart of a second embodiment of the image acquisition method of the present invention.
  • FIG. 5 is a schematic structural view of the structured light source in FIG. 4;
  • FIG. 6 is a schematic structural view of a binocular vision system
  • FIG. 7 is a schematic structural diagram of an embodiment of an image acquisition device of the present invention.
  • FIG. 8 is a schematic structural diagram of an embodiment of an image acquisition system of the present invention.
  • FIG. 1 is a schematic flowchart of a first embodiment of an image acquisition method of the present invention. As shown in FIG. 1, the image acquisition method of this embodiment may include the following steps:
  • in step S101, multiple structured light images are acquired, obtained by photographing at multiple time points the same target onto which the structured light source projects.
  • when the structured light source projects structured light onto different positions of the same target at different time points, multiple structured light images of the structured light at those different positions and time points can be acquired.
  • at each time point, the structured light source may project one or more structured light stripes onto the same target, and the structured light projected at different time points does not overlap on the target.
  • the structured light source is adjusted at the multiple time points to change the projection direction of the structured light, so that the structured light projected onto the same target at the multiple time points falls on different positions of the target; the same target is photographed by an image collector at the corresponding time points to obtain the corresponding structured light images.
  • if the number of structured light stripes projected at each time point is N and the number of time points is M, then M structured light images can be acquired, each containing N stripes.
  • the total number of structured light stripes contained in the M structured light images is M×N.
  • step S102 point cloud calculations are respectively performed on multiple structured light images, and multiple point cloud images are correspondingly obtained.
  • performing point cloud calculation on each of the M structured light images obtained above yields the corresponding M point cloud images. It can be understood that the computational cost of the point cloud calculation for a single structured light image is the cube of the number of structured light stripes it contains, i.e. N³; correspondingly, the total computation required to perform point cloud calculation on the M structured light images is M×N³.
  • step S103 multiple point cloud images are integrated to obtain a total point cloud image.
  • integrating the multiple point cloud images calculated in step S102 yields a total point cloud image formed from the M point cloud images; this integration means aggregating the point cloud calculation results of each structured light image, that is, merging the multiple point cloud images.
  • in the prior art, to obtain M×N structured light stripes, the structured light source projects all M×N stripes at once, and performing point cloud calculation on them requires (M×N)³ computation; in this embodiment, obtaining the same M×N stripes requires only M×N³. Compared with the prior art, the point cloud computation is thus greatly reduced for the same number of structured light stripes.
  • the image acquisition method of this embodiment can, when structured light is used to inspect a target, guarantee the amount of point cloud information contained in the finally obtained structured light images; that is, it reduces the point cloud computation while maintaining detection accuracy.
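  The time-shared pipeline of steps S101-S103 can be illustrated with a minimal sketch. The functions `acquire` and `point_cloud`, and the stripe records they produce, are hypothetical stand-ins for the real camera capture and (cubic-cost) point cloud computation; they are not part of the patent.

```python
# Minimal sketch of the time-shared acquisition pipeline (steps S101-S103).
# Each "image" is simulated as a list of stripe records; point_cloud() stands
# in for the real point cloud computation whose cost scales with N**3.

def point_cloud(image):
    """Hypothetical per-image point cloud step for one structured light image."""
    return [("point", stripe) for stripe in image]

def acquire(M, N):
    """Simulate M time points, N non-overlapping stripes per shot."""
    return [[f"stripe_{t}_{i}" for i in range(N)] for t in range(M)]

images = acquire(M=4, N=8)                            # S101: M structured light images
clouds = [point_cloud(img) for img in images]         # S102: one point cloud per image
total_cloud = [p for cloud in clouds for p in cloud]  # S103: integrate into one cloud

assert len(total_cloud) == 4 * 8  # the total cloud represents all M*N stripes
```

  The key point the sketch mirrors is that each point cloud step only ever sees N stripes at a time, so no single call pays the cost of the full M×N stripe set.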
  • to implement the above method, the structured light source of this embodiment may include a plurality of lasers and an electronic control component; the lasers each emit laser stripes, the electronic control component is disposed on the lasers' light output path, and the lasers emit their stripes onto the electronic control component at different incident angles.
  • by modulation, the electronic control component can change the exit direction of the laser stripes emitted by the lasers after they pass through it.
  • as shown in FIG. 2a, at a first time point n lasers project n laser stripes with a certain spacing onto the target; the electronic control component is then adjusted to change the exit direction of the stripes, so that at a second time point the n lasers again project n spaced laser stripes onto the target (as shown in FIG. 2b).
  • the n laser stripes at the first time point and the n laser stripes at the second time point do not overlap each other.
  • point cloud processing is performed on the images of the n laser stripes at the first and second time points respectively, yielding the two point cloud images shown in FIG. 2c and FIG. 2d.
  • the image acquisition method of this embodiment can reduce the amount of point cloud computing while satisfying the demand for point cloud information in the point cloud image.
  • the electronic control component may be an electro-optic deflector or a variable focus lens.
  • as shown in FIG. 3a, if the electronic control component is an electro-optic deflector 12, the deflector 12 is placed on the exit path of the laser 11 and reflects the laser stripes emitted by the laser 11; adjusting the steering angle of the deflector 12 changes the incidence and reflection angles of the stripes relative to the deflector 12, so that the stripes emitted by the laser 11 are projected onto different positions of the same target.
  • as shown in FIG. 3b, the electronic control component may also be a variable focus lens 13.
  • when the focal length of the variable focus lens 13 changes, the refraction angle of an incident laser stripe changes; adjusting the focal length of the lens 13 changes the refraction angle of the stripes emitted by the laser 11 relative to the lens 13, so that the stripes emitted by the laser 11 are projected onto different positions of the same target.
  • the variable focus lens 13 may be a liquid lens, a micro lens, or a micro lens array.
  • FIG. 4 is a schematic flowchart of a second embodiment of the image acquisition method of the present invention. This embodiment is improved on the basis of the first embodiment of the image acquisition method in FIG. 1. As shown in FIG. 4, this embodiment may further include the following steps before step S101 shown in FIG. 1:
  • in step S104, the grating parameters of the structured light source and the area of the target are acquired, and the positional relationship between the at least one laser is adjusted according to the area and the grating parameters.
  • when the structured light source contains multiple lasers, the positional relationship between the lasers determines the spacing between the laser stripes they emit. Therefore, before the projection positions of the structured light on the target are changed, the grating parameters of the lasers are acquired so that the spacing between the projected stripes can be adjusted with reference to them; the area of the target is acquired so that the positional relationship between the lasers, and thereby the stripe spacing on the target, can be adjusted to suit the area to be inspected. As shown in FIG. 5,
  • if the target area is small, the spacing between the multiple lasers 11 can be set small so that the spacing between their emitted beams is small; if the target area is large, the spacing between the lasers 11 can be increased appropriately so that the spacing between their beams becomes larger.
  • in FIG. 5 the electronic control component is a variable focus lens by way of example; in other embodiments it may also be an electronically controlled deflector.
  • if the target area is small, the distance between the laser stripes projected onto the target can be reduced appropriately so that all the projected stripes fall entirely on the target.
  • a smaller spacing between the projected stripes makes their distribution on the target denser and improves the relative fineness of the inspection; if the target area is large, the stripe spacing can be increased appropriately. Of course, for a large target with a high accuracy requirement, the spacing between the structured light stripes projected onto the target can also be reduced appropriately.
  • the first embodiment or the second embodiment of the image acquisition method shown in FIGS. 1 to 5 may be applied to a binocular vision system.
  • in a binocular vision system, binocular matching is required, that is, the one-to-one correspondence between the pixels of the left and right structured light images is found according to a certain algorithm.
  • in this process, point cloud calculation is performed on the structured light images captured by the left and right cameras of the binocular vision system, showing the structured light projected onto the target by the structured light source.
  • the laser stripes emitted by the multiple lasers of the structured light source appear in the structured light images, and pixels are matched using the stripes and the computed light points.
  • the more laser stripes are projected onto the target, the higher the accuracy of binocular matching.
  • with the image acquisition method of this embodiment, a large number of laser stripes are projected onto the same target in batches; the structured light image of each batch is acquired, point cloud calculation is performed on each image to obtain a point cloud image, and the multiple point cloud images are integrated into a total point cloud image. In this way, sufficient point cloud information is available for matching while the amount of point cloud computation is reduced.
  • the binocular vision system may include a first camera 20 and a second camera 30 disposed on the left and right sides of the structured light source 10.
  • the structured light source 10 projects structured light onto the target 40 (the solid black arrows leaving the structured light source in FIG. 6), and the first camera 20 and the second camera 30 photograph the target 40 to obtain the corresponding left and right structured light images; the black dotted arrows in FIG. 6 indicate the shooting angle ranges of the first camera 20 and the second camera 30. Thus, when the structured light source 10 projects structured light toward the target 40 at multiple time points, multiple left and right structured light images can be obtained by photographing the target 40 with the two cameras respectively.
  • correspondingly, the total point cloud image is the point cloud image obtained by integrating the multiple left point cloud images and the multiple right point cloud images.
  • according to the epipolar constraint relied on in binocular matching, the matching point of an image point in the left structured light image must lie on the corresponding epipolar line of the right structured light image; stereo rectification makes the epipolar lines horizontal, and once the rows of the left and right structured light images are aligned, the matching point only needs to be searched for in the right structured light image. Therefore, during binocular matching it is possible to perform point cloud calculation only on the multiple right structured light images to obtain the corresponding right point cloud images, integrate them into a total right point cloud image, and perform binocular matching with it to find the one-to-one pixel correspondence between the left and right structured light images.
  • binocular matching in the binocular vision system is also subject to an ordering consistency constraint, a uniqueness constraint and a smoothness constraint.
  • the ordering consistency constraint means that the order of the matching points, on the right structured light image, of a series of points on an epipolar line of the left structured light image must be consistent with their order in the left image;
  • the uniqueness constraint means that the matching point in the right structured light image of a given point in the left structured light image is unique.
  • the smoothness constraint means that the disparity between the left and right structured light images varies smoothly.
  • the binocular vision system is mainly used to photograph the target and perform three-dimensional reconstruction of the target from the left and right views; the images required for inspecting the target therefore include the total left and total right structured light images, which after point cloud calculation become the total left and total right point cloud images, and target matching is then performed with a certain matching algorithm using their point cloud information.
  • FIG. 7 is a schematic structural diagram of an embodiment of an image acquisition device according to the present invention.
  • the image acquisition device 700 of this embodiment includes a memory 702 and a processor 701 coupled to each other.
  • the memory 702 is used to store program instructions executed by the processor 701, and the processor 701 is used to execute the image acquisition method shown in FIG. 1 according to the program instructions.
  • the image acquisition device 700 of this embodiment can be applied not only in the binocular vision system shown in FIG. 6 but also in a single-camera vision system.
  • the image acquisition device of this embodiment captures multiple structured light images by photographing the structured light projected onto the target multiple times, performs point cloud calculation on each structured light image to obtain multiple point cloud images, and then integrates the multiple point cloud images into a total point cloud image; the point cloud information contained in the total point cloud image is that of all the structured light in the multiple structured light images, while the amount of computation is greatly reduced compared with performing point cloud calculation on the same amount of structured light projected at one time.
  • FIG. 8 is a schematic structural diagram of an embodiment of an image acquisition system according to the present invention.
  • the image acquisition system 800 of this embodiment includes the image acquisition device 700 shown in FIG. 7 and a structured light source 80 and an image acquisition device 81, wherein the structured light source 80 and the image acquisition device 81 are respectively connected to the processor 701 .
  • the structured light source 80 may be the same as the structured light source shown in FIG. 3a or FIG. 3b, and includes at least one laser and an electronic control component.
  • the electronic control component is disposed on the light exit path of the at least one laser, and the laser stripes emitted by the lasers form multiple structured light stripes after exiting through the electronic control component.
  • for details, refer to FIG. 3a and FIG. 3b, which are not repeated here.
  • the image capture device 81 includes a first camera and a second camera disposed on the left and right sides of the structured light source 80, each used to photograph the structured light projected onto the target by the structured light source.
  • in a single-camera vision system, the image capture device may have only a single camera disposed on one side of the structured light source at a certain angle to it.
  • the processor is also used to execute the image acquisition method shown in FIG. 4, that is, before multiple structured light stripes are projected onto the same target at multiple time points, the grating parameters of the structured light source and the area of the target are acquired, and the positional relationship between the at least one laser is adjusted according to the area and the grating parameters of the structured light source.
  • the structured light is projected onto the same target in a time-shared manner, which is equivalent to projecting the light stripes of the structured light in batches; each time, the corresponding structured light image is acquired and point cloud calculation is performed, yielding the corresponding multiple point cloud images, which are then integrated into a total point cloud image.
  • the obtained total point cloud image contains the point cloud information of all the structured light stripes, and the amount of computation is the product of the cube of the number of stripes projected each time and the number of structured light images, greatly reducing the point cloud computation and the computational load while obtaining sufficient point cloud information.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention discloses an image acquisition method, device and system. The image acquisition method includes: acquiring multiple structured light images obtained by photographing, at multiple time points, the same target onto which a structured light source projects, wherein the structured light source projects multiple structured light stripes onto the target at each of the time points, and the structured light projected at different time points does not overlap; performing point cloud calculation on the multiple structured light images respectively to obtain corresponding multiple point cloud images; and integrating the multiple point cloud images into a total point cloud image, wherein the total point cloud image includes the point cloud information of each of the point cloud images. With the above method, the amount of point cloud computation and the computational load can be reduced while point cloud images containing the same point cloud information are acquired.

Description

Image acquisition method, device and system 【Technical Field】
The present invention relates to the technical field of image processing, and in particular to an image acquisition method, device and system.
【Background】
Three-dimensional measurement technology is a technology for acquiring three-dimensional feature information. It plays an important role in many fields, such as industrial part inspection, industrial design and reverse engineering.
At present, three-dimensional measurement can be carried out in many different ways. Among them, optical three-dimensional measurement is finding ever wider application and has become an important branch of the field; in particular, methods that use vision processing to obtain images and then extract three-dimensional features of an object from those images are widely used in industry. Combining a vision system with structured light projection is a common optical three-dimensional measurement technique at present. In the prior art, structured light is projected onto the object to be measured, the structured light is photographed by the vision system, and point cloud techniques are applied to obtain the corresponding image information for three-dimensional reconstruction. When point cloud techniques are applied to an image, the more structured light stripes it contains, the more precise the calculation and the higher the robustness; however, the corresponding amount of computation also increases. In the prior art, when the number of structured light stripes increases, the amount of computation grows with the cube of the stripe count, placing a heavy computational load on the computing system.
【Summary of the Invention】
The object of the present invention is to provide an image acquisition method, device and system that can reduce the amount of point cloud computation and the computational load.
To achieve the above object, the present invention provides an image acquisition method, which includes:
acquiring multiple structured light images obtained by photographing, at multiple time points, the same target onto which a structured light source projects, wherein the structured light source projects multiple structured light stripes onto the target at each of the time points, and the structured light projected at different time points does not overlap;
performing point cloud calculation on the multiple structured light images respectively to obtain corresponding multiple point cloud images;
integrating the multiple point cloud images into a total point cloud image, wherein the total point cloud image includes the point cloud information of each of the point cloud images.
In another aspect, the present invention provides an image acquisition device, including a memory and a processor coupled to each other;
the memory is used to store program instructions executed by the processor;
the processor is configured to perform the following actions according to the program instructions:
acquiring multiple structured light images obtained by photographing, at multiple time points, the same target onto which a structured light source projects, wherein the structured light source projects multiple structured light stripes onto the target at each of the time points, and the structured light projected at different time points does not overlap;
performing point cloud calculation on the multiple structured light images respectively to obtain corresponding multiple point cloud images;
integrating the multiple point cloud images into a total point cloud image, wherein the total point cloud image includes the point cloud information of each of the point cloud images.
In another aspect, the present invention provides an image acquisition system, including the above image acquisition device, a structured light source and an image capture device, the structured light source and the image capture device being respectively connected to the processor of the image acquisition device;
the processor is used to perform the following operations:
acquiring, through the image capture device, multiple structured light images obtained by photographing, at multiple time points, the same target onto which the structured light source projects, wherein the structured light source projects multiple structured light stripes onto the target at each of the time points, and the structured light projected at different time points does not overlap;
performing point cloud calculation on the multiple structured light images respectively to obtain corresponding multiple point cloud images;
integrating the multiple point cloud images into a total point cloud image, wherein the total point cloud image includes the point cloud information of each of the point cloud images.
Beneficial effects: different from the prior art, the present invention projects the structured light onto the same target in a time-shared manner, which is equivalent to projecting the light stripes of the structured light in batches; each time, the corresponding structured light image is acquired and point cloud calculation is performed, yielding multiple point cloud images that are then integrated into a total point cloud image. The total point cloud image contains the point cloud information of all the structured light stripes, while the amount of computation is the product of the cube of the number of stripes projected each time and the number of structured light images, greatly reducing the point cloud computation and the computational load while obtaining sufficient point cloud information.
【Brief Description of the Drawings】
FIG. 1 is a schematic flowchart of a first embodiment of the image acquisition method of the present invention;
FIGS. 2a-2e are schematic flow diagrams of the processing of the structured light in FIG. 1;
FIG. 3a is a schematic structural view of the structured light source when the electronic control component is an electro-optic deflector;
FIG. 3b is a schematic structural view of the structured light source when the electronic control component is a variable focus lens;
FIG. 4 is a schematic flowchart of a second embodiment of the image acquisition method of the present invention;
FIG. 5 is a schematic structural view of the structured light source in FIG. 4;
FIG. 6 is a schematic structural view of a binocular vision system;
FIG. 7 is a schematic structural diagram of an embodiment of the image acquisition device of the present invention;
FIG. 8 is a schematic structural diagram of an embodiment of the image acquisition system of the present invention.
【Detailed Description】
To enable those skilled in the art to better understand the technical solutions of the present invention, the present invention is described in further detail below with reference to the drawings and specific embodiments. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of the image acquisition method of the present invention. As shown in FIG. 1, the image acquisition method of this embodiment may include the following steps:
In step S101, multiple structured light images are acquired, obtained by photographing at multiple time points the same target onto which the structured light source projects.
In this embodiment, the structured light source projects structured light onto different positions of the same target at different time points, so multiple structured light images of the structured light at different positions of the same target at different time points can be acquired. At each time point, the structured light source may project one or more structured light stripes onto the same target, and the structured light projected at different time points does not overlap on the target.
In this embodiment, the structured light source is adjusted at the multiple time points to change the projection direction of the structured light it projects, so that the multiple structured light stripes projected onto the same target at the multiple time points fall on different positions of the target; the same target is then photographed by an image collector at the corresponding time points to obtain the corresponding multiple structured light images.
In this embodiment, if the number of structured light stripes projected onto the target at each time point is N and the number of time points is M, then M structured light images can be acquired, each containing N structured light stripes, and the M images together contain M×N structured light stripes.
In step S102, point cloud calculation is performed on the multiple structured light images respectively, and multiple point cloud images are correspondingly obtained.
Performing point cloud calculation on each of the M structured light images obtained above yields the corresponding M point cloud images. It can be understood that the computational cost of the point cloud calculation for a single structured light image is the cube of the number of structured light stripes it contains, i.e. N³; correspondingly, the total computation required to perform point cloud calculation on the M structured light images is M×N³.
In step S103, the multiple point cloud images are integrated into a total point cloud image.
The multiple point cloud images calculated in step S102 are integrated to obtain a total point cloud image formed from the M point cloud images. It can be understood that the total point cloud image contains M×N structured light stripes and the corresponding computation required is M×N³. Here, integrating the multiple point cloud images to obtain the total point cloud image includes aggregating the point cloud calculation results of each structured light image, that is, merging the multiple point cloud images.
In the prior art, to obtain M×N structured light stripes, the structured light source projects all M×N stripes at once, and performing point cloud calculation on them requires (M×N)³ computation, whereas in this embodiment obtaining the same M×N stripes requires only M×N³. Compared with the prior art, the point cloud computation is therefore greatly reduced while the same number of structured light stripes is obtained.
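The computational saving claimed above can be checked numerically. This is a sketch only; the values of M and N are illustrative, not taken from the patent, and the cost model is simply the cube of the stripe count per point cloud calculation.

```python
# Compare point cloud computation cost: one-shot projection of M*N stripes
# versus M time-shared shots of N stripes each (cost model: cube of stripe count).

M, N = 4, 8                    # illustrative values, not from the patent
one_shot = (M * N) ** 3        # prior art: all stripes at once, (M*N)^3
batched = M * N ** 3           # this embodiment: M shots of N stripes, M*N^3

assert batched < one_shot
assert one_shot // batched == M ** 2  # the saving factor is exactly M^2
print(one_shot, batched)              # 32768 vs 2048 for M=4, N=8
```

Note that under this cost model the saving factor, (M×N)³ / (M×N³) = M², grows quadratically with the number of time points, which is why splitting the projection into batches pays off.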
With the image acquisition method of this embodiment, when structured light is used to inspect a target, the amount of point cloud information contained in the finally obtained structured light images is guaranteed; that is, the point cloud computation is reduced while the detection accuracy is maintained.
To implement the above method, the structured light source is improved. The structured light source of this embodiment may include multiple lasers and an electronic control component; the lasers each emit laser stripes, the electronic control component is disposed on the lasers' light output path, and the lasers emit their stripes onto the electronic control component at different incident angles. By modulation, the electronic control component can change the exit direction of the laser stripes after they pass through it. The implementation of this embodiment is further explained with reference to the structure of the structured light source. Assume the number of lasers is n, so the number of laser stripes projected onto the target is n. As shown in FIG. 2a, at a first time point the n lasers project n laser stripes with a certain spacing onto the target; the electronic control component is then adjusted to change the exit direction of the stripes emitted by the n lasers, so that at a second time point the n lasers again project n spaced laser stripes onto the target (as shown in FIG. 2b); the n stripes at the first time point and the n stripes at the second time point do not overlap. In this embodiment, point cloud processing is performed on the images of the n laser stripes at the first and second time points respectively, yielding the two point cloud images shown in FIG. 2c and FIG. 2d, each requiring n³ computation; the two point cloud images are then integrated into the total point cloud image shown in FIG. 2e. As can be seen from the figure, the final total point cloud image contains 2n laser stripes at a computational cost of 2n³. In the prior art, 2n laser stripes would be projected onto the target to obtain a structured light image with 2n stripes, at a point cloud computation cost of 2³×n³. From the above analysis it can be seen that the image acquisition method of this embodiment reduces the amount of point cloud computation while satisfying the demand for point cloud information in the point cloud image.
Further, in this embodiment, the electronic control component may be an electro-optic deflector or a variable focus lens.
As shown in FIG. 3a, if the electronic control component is an electro-optic deflector 12, the deflector 12 is placed on the exit path of the laser 11 and can reflect the laser stripes emitted by the laser 11; adjusting the steering angle of the deflector 12 changes the incidence and reflection angles of the stripes relative to the deflector 12, so that the stripes emitted by the laser 11 are projected onto different positions of the same target.
In addition, as shown in FIG. 3b, the electronic control component may also be a variable focus lens 13. When the focal length of the lens 13 changes, the refraction angle of an incident laser stripe changes; adjusting the focal length of the lens 13 changes the refraction angle of the stripes emitted by the laser 11 relative to the lens 13, so that the stripes emitted by the laser 11 are projected onto different positions of the same target. The variable focus lens 13 may be a liquid lens, a micro lens or a micro lens array.
Referring further to FIG. 4, FIG. 4 is a schematic flowchart of a second embodiment of the image acquisition method of the present invention. This embodiment is an improvement on the first embodiment of FIG. 1; as shown in FIG. 4, this embodiment may further include the following step before step S101 of FIG. 1:
In step S104, the grating parameters of the structured light source and the area of the target are acquired, and the positional relationship between the at least one laser is adjusted according to the area and the grating parameters of the structured light source.
When the structured light source contains multiple lasers, the positional relationship between the lasers determines the spacing between the laser stripes they emit. Therefore, in this embodiment, before the projection positions of the structured light on the target are changed, the grating parameters of the lasers are acquired so that the spacing between the projected stripes can be adjusted with reference to them. The area of the target is acquired so that the positional relationship between the multiple lasers of the structured light source can be adjusted appropriately for the area to be inspected, and thereby the spacing of the stripes emitted onto the target. As shown in FIG. 5, if the target area is small, the spacing between the multiple lasers 11 can be set small so that the spacing between their emitted beams is small; if the target area is large, the spacing between the lasers 11 can be increased appropriately so that the spacing between their beams becomes larger. In FIG. 5 the electronic control component is a variable focus lens by way of example; in other embodiments it may also be an electronically controlled deflector.
It can be understood that if the target area is small, the distance between the laser stripes projected onto the target can be reduced appropriately so that all the projected stripes fall entirely on the target; moreover, a smaller stripe spacing makes the distribution of stripes on the target denser and improves the relative fineness of the inspection. If the target area is large, the distance between the stripes can be increased appropriately; of course, for a large target with a high accuracy requirement, the distance between the structured light stripes projected onto the target can also be reduced appropriately.
In another implementation, the first or second embodiment of the image acquisition method shown in FIGS. 1 to 5 may be applied to a binocular vision system. A binocular vision system requires binocular matching, that is, finding, according to a certain algorithm, the one-to-one correspondence between the pixels of the left and right structured light images. In this process, point cloud calculation is performed on the structured light images, captured by the left and right cameras of the binocular vision system, of the structured light projected onto the target by the structured light source. The laser stripes emitted by the multiple lasers of the structured light source appear in the structured light images, and pixels are matched using the stripes and the computed light points. The more laser stripes are projected onto the target, the higher the accuracy of binocular matching. With the image acquisition method of this embodiment, a large number of laser stripes are projected onto the same target in batches; the structured light image of each batch of stripes on the target is acquired, point cloud calculation is performed on each image to obtain a point cloud image, and the multiple point cloud images are integrated into a total point cloud image. In this way, sufficient point cloud information is available for matching while the amount of point cloud computation is reduced.
Further, as shown in FIG. 6, the binocular vision system may include a first camera 20 and a second camera 30 disposed on the left and right sides of the structured light source 10. As shown in FIG. 6, the structured light source 10 projects structured light onto the target 40 (the solid black arrows leaving the structured light source in FIG. 6); the first camera 20 and the second camera 30 photograph the target 40 to obtain the corresponding left and right structured light images, the black dotted arrows in FIG. 6 indicating the shooting angle ranges of the two cameras. Thus, when the structured light source 10 projects structured light toward the target 40 at multiple time points, multiple left and right structured light images can be obtained by photographing the target 40 with the first camera 20 and the second camera 30 respectively. Correspondingly, performing point cloud calculation on the multiple left and right structured light images yields the corresponding multiple left and right point cloud images, and the total point cloud image is the point cloud image obtained by integrating the multiple left point cloud images and the multiple right point cloud images.
Here, according to the epipolar constraint relied on in binocular matching, the matching point of an image point in the left structured light image must lie on the corresponding epipolar line of the right structured light image. Stereo rectification makes the epipolar lines of the left and right structured light images horizontal; once the rows of the two images are aligned, the matching point for binocular matching only needs to be searched for in the right structured light image. Therefore, during binocular matching it is possible to perform point cloud calculation only on the multiple right structured light images to obtain the corresponding right point cloud images, integrate them into a total right point cloud image, and perform binocular matching with the total right point cloud image to find the one-to-one correspondence between the pixels of the left and right structured light images.
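After rectification, the epipolar search described above reduces to scanning a single row. The following sketch illustrates this one-dimensional search; the stripe-center data and the nearest-candidate matching criterion are hypothetical simplifications, not the patent's matching algorithm.

```python
# After stereo rectification, a point in the left image can only match a point
# on the same row of the right image, so the search is one-dimensional.

def match_on_row(x_left, right_row_points, max_disparity=64):
    """Find the right-image candidate on the same row whose column is closest
    to x_left within the allowed disparity range (toy criterion)."""
    candidates = [x for x in right_row_points
                  if 0 <= x_left - x <= max_disparity]
    # Smallest disparity wins; a real matcher would also score similarity.
    return min(candidates, key=lambda x: x_left - x) if candidates else None

# Toy data: stripe-center columns detected on one rectified row of the right image.
right_row = [10, 35, 60, 85]
assert match_on_row(40, right_row) == 35   # nearest candidate at valid disparity
assert match_on_row(5, right_row) is None  # no candidate within the range
```

Restricting the search to one row in this way is what makes it reasonable to run the full point cloud calculation on the right images only, as the paragraph above describes.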
In addition, binocular matching in the binocular vision system is also subject to an ordering consistency constraint, a uniqueness constraint and a smoothness constraint. The ordering consistency constraint means that the order of the matching points, on the right structured light image, of a series of points on an epipolar line of the left structured light image must be consistent with their order in the left image; the uniqueness constraint means that the matching point in the right structured light image of a given point in the left structured light image is unique. The smoothness constraint means that the disparity between the left and right structured light images varies smoothly.
Further, the binocular vision system is mainly used to photograph the target and perform three-dimensional reconstruction of the target from the left and right views. The images required for inspecting the target therefore include the total left and total right structured light images, which after point cloud calculation become the total left and total right point cloud images; target matching is then performed with a certain matching algorithm using the point cloud information of the total left and right point cloud images.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of an embodiment of the image acquisition device of the present invention. As shown in FIG. 7, the image acquisition device 700 of this embodiment includes a memory 702 and a processor 701 coupled to each other. The memory 702 is used to store program instructions executed by the processor 701, and the processor 701 is used to execute the image acquisition method shown in FIG. 1 according to the program instructions. It can be understood that the image acquisition device 700 of this embodiment can be applied both in the binocular vision system shown in FIG. 6 and in a single-camera vision system. The image acquisition device of this embodiment captures multiple structured light images by photographing the structured light projected onto the target multiple times, performs point cloud calculation on each structured light image to obtain multiple point cloud images, and integrates the multiple point cloud images into a total point cloud image; the point cloud information contained in the total point cloud image is that of all the structured light in the multiple structured light images, while the amount of computation is greatly reduced compared with performing point cloud calculation on the same amount of structured light projected at one time.
Referring to FIG. 8, FIG. 8 is a schematic structural diagram of an embodiment of the image acquisition system of the present invention. As shown in FIG. 8, the image acquisition system 800 of this embodiment includes the image acquisition device 700 shown in FIG. 7 together with a structured light source 80 and an image capture device 81, the structured light source 80 and the image capture device 81 being respectively connected to the processor 701.
In this embodiment, the structured light source 80 may be the same as the structured light source shown in FIG. 3a or FIG. 3b, and includes at least one laser and an electronic control component; the electronic control component is disposed on the light exit path of the at least one laser, and the laser stripes emitted by the lasers form multiple structured light stripes after exiting through the electronic control component. For details, refer to FIG. 3a and FIG. 3b, which are not repeated here.
In this embodiment, the image capture device 81 includes a first camera and a second camera disposed on the left and right sides of the structured light source 80, each used to photograph the structured light projected onto the target by the structured light source.
It can be understood that if the image acquisition system of an embodiment is a single-camera vision system, the image capture device may have only a single camera disposed on one side of the structured light source at a certain angle to it.
Further, the processor is also used to execute the image acquisition method shown in FIG. 4, that is, before multiple structured light stripes are projected onto the same target at multiple time points, the grating parameters of the structured light source and the area of the target are acquired, and the positional relationship between the at least one laser is adjusted according to the area and the grating parameters of the structured light source. For details, refer to the image acquisition method shown in FIG. 4, which is not repeated here.
By projecting the structured light onto the same target in a time-shared manner, which is equivalent to projecting the light stripes of the structured light in batches, the present invention acquires the corresponding structured light image each time and performs point cloud calculation, obtains the corresponding multiple point cloud images, and integrates them into a total point cloud image. The total point cloud image contains the point cloud information of all the structured light stripes, while the amount of computation is the product of the cube of the number of stripes projected each time and the number of structured light images, greatly reducing the point cloud computation and the computational load while obtaining sufficient point cloud information.
The above are only embodiments of the present invention and do not limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (15)

  1. An image acquisition method, comprising:
    acquiring multiple structured light images obtained by photographing, at multiple time points, the same target onto which a structured light source projects, wherein the structured light source projects multiple structured light stripes onto the target at each of the time points, and the structured light projected at different time points does not overlap;
    performing point cloud calculation on the multiple structured light images respectively to obtain corresponding multiple point cloud images;
    integrating the multiple point cloud images into a total point cloud image, wherein the total point cloud image includes the point cloud information of each of the point cloud images.
  2. The method according to claim 1, wherein
    the acquiring of the multiple structured light images obtained by photographing, at multiple time points, the same target onto which the structured light source projects comprises:
    controlling the structured light source to project multiple structured light stripes onto the same target at multiple time points respectively, and photographing the target at the multiple time points with an image collector to obtain the multiple structured light images.
  3. The method according to claim 2, wherein
    the structured light source comprises at least one laser and an electronic control component, the electronic control component being disposed on the light exit path of the at least one laser, and the multiple laser lines emitted by the laser forming multiple structured light stripes after exiting through the electronic control component;
    the controlling of the structured light source to project multiple structured light stripes onto the same target at multiple time points respectively comprises:
    controlling the electronic control component at multiple time points to adjust the exit direction of the multiple laser lines emitted by the laser, so that the adjusted multiple laser lines are projected onto the same target and at different positions on the target at different time points.
  4. 根据权利要求3所述的方法,其特征在于,
    所述电控部件为电控光转向器;
    所述在多个时间点上控制所述电控部件调整所述激光器发射的多条激光线的出射方向,包括:
    在多个时间点上控制所述电控光转向器的转向角度,以改变所述激光器发射的多条激光线的出射方向。
  5. The method according to claim 3, wherein
    the electrically controlled component is a variable-focus lens; and
    the controlling, at the multiple time points, the electrically controlled component to adjust the exit direction of the multiple laser lines emitted by the laser comprises:
    controlling, at the multiple time points, the focal length of the variable-focus lens to change the exit direction of the multiple laser lines emitted by the laser.
  6. The method according to claim 3, wherein
    before the controlling the structured light source to project multiple structured light beams onto the same target at each of multiple time points, the method further comprises:
    obtaining grating parameters of the structured light source and the area of the target, and adjusting the positional relationship among the at least one laser according to the area and the grating parameters of the structured light source.
  7. The method according to claim 1, wherein
    the multiple structured-light images captured comprise multiple left structured-light images captured from a left viewpoint at the multiple time points and multiple right structured-light images captured from a right viewpoint at the multiple time points;
    the multiple point cloud images comprise multiple left point cloud images corresponding to the multiple left structured-light images and multiple right point cloud images corresponding to the multiple right structured-light images;
    the total point cloud image comprises a total left point cloud image merged from the multiple left point cloud images and a total right point cloud image merged from the multiple right point cloud images; and
    after the merging the multiple point cloud images into a total point cloud image, the method further comprises:
    matching the point cloud information of the total left point cloud image and the total right point cloud image.
  8. An image acquisition apparatus, comprising a memory and a processor coupled to each other, the memory storing program instructions to be executed by the processor;
    the processor being configured to perform, according to the program instructions, the following actions:
    obtaining multiple structured-light images captured at multiple time points of the same target onto which a structured light source projects; wherein the structured light source projects multiple structured light beams onto the target at each of the time points, and the structured light projected at different time points does not overlap;
    performing point cloud computation on each of the multiple structured-light images to obtain multiple corresponding point cloud images; and
    merging the multiple point cloud images into a total point cloud image, wherein the total point cloud image includes the point cloud information of each of the point cloud images.
  9. An image acquisition system, comprising the image acquisition apparatus according to claim 8, a structured light source, and an image capture device, the structured light source and the image capture device each being connected to the processor of the image acquisition apparatus;
    the processor being configured to perform the following operations:
    obtaining, through the image capture device, multiple structured-light images captured at multiple time points of the same target onto which the structured light source projects; wherein the structured light source projects multiple structured light beams onto the target at each of the time points, and the structured light projected at different time points does not overlap;
    performing point cloud computation on each of the multiple structured-light images to obtain multiple corresponding point cloud images; and
    merging the multiple point cloud images into a total point cloud image, wherein the total point cloud image includes the point cloud information of each of the point cloud images.
  10. The system according to claim 9, wherein the obtaining, performed by the processor, of multiple structured-light images captured at multiple time points of the same target onto which the structured light source projects comprises:
    controlling the structured light source to project multiple structured light beams onto the same target at each of multiple time points, and photographing the target at the multiple time points with the image capture device to obtain the multiple structured-light images.
  11. The system according to claim 10, wherein
    the structured light source comprises at least one laser and an electrically controlled component, the electrically controlled component is disposed in the light path of the at least one laser, and multiple laser stripes emitted by the laser exit through the electrically controlled component to form multiple structured light beams.
  12. The system according to claim 11, wherein
    the electrically controlled component is an electrically controlled light deflector; and
    the controlling, performed by the processor at the multiple time points, of the electrically controlled component to adjust the exit direction of the multiple laser lines emitted by the laser comprises:
    controlling, at the multiple time points, the deflection angle of the electrically controlled light deflector to change the exit direction of the multiple laser lines emitted by the laser.
  13. The system according to claim 11, wherein
    the electrically controlled component is a variable-focus lens; and
    the controlling, performed by the processor at the multiple time points, of the electrically controlled component to adjust the exit direction of the multiple laser lines emitted by the laser comprises:
    controlling, at the multiple time points, the focal length of the variable-focus lens to change the exit direction of the multiple laser lines emitted by the laser.
  14. The system according to claim 11, wherein
    before controlling the structured light source to project multiple structured light beams onto the same target at each of multiple time points, the processor is further configured to perform the following action:
    obtaining grating parameters of the structured light source and the area of the target, and adjusting the positional relationship among the at least one laser according to the area and the grating parameters of the structured light source.
  15. The system according to claim 13, wherein
    the variable-focus lens comprises a liquid lens, a microlens, or a microlens array.
PCT/CN2018/119248 2018-12-04 2018-12-04 Image acquisition method, apparatus and system WO2020113449A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/119248 WO2020113449A1 (zh) 2018-12-04 2018-12-04 Image acquisition method, apparatus and system
CN201880087119.3A CN111630343A (zh) 2018-12-04 2018-12-04 Image acquisition method, apparatus and system


Publications (1)

Publication Number Publication Date
WO2020113449A1 true WO2020113449A1 (zh) 2020-06-11

Family

ID=70975205


Country Status (2)

Country Link
CN (1) CN111630343A (zh)
WO (1) WO2020113449A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112082513A (zh) * 2020-09-09 2020-12-15 易思维(杭州)科技有限公司 Multi-laser-array three-dimensional scanning system and method
CN114967284A (zh) * 2022-05-09 2022-08-30 中国科学院半导体研究所 Dot-matrix projection imaging system and method for increasing dot-matrix density

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN112365526B (zh) * 2020-11-30 2023-08-25 湖南傲英创视信息科技有限公司 Binocular detection method and system for weak small targets
CN116320357A (zh) * 2023-05-17 2023-06-23 浙江视觉智能创新中心有限公司 3D structured-light camera system and method, electronic device, and readable storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
WO2016040473A1 (en) * 2014-09-10 2016-03-17 Vangogh Imaging, Inc. Real-time dynamic three-dimensional adaptive object recognition and model reconstruction
CN106991716A (zh) * 2016-08-08 2017-07-28 深圳市圆周率软件科技有限责任公司 Panoramic three-dimensional modeling apparatus, method and system
CN108225218A (zh) * 2018-02-07 2018-06-29 苏州镭图光电科技有限公司 Three-dimensional scanning imaging method and imaging apparatus based on an optical micro-electro-mechanical system
CN108895969A (zh) * 2018-05-23 2018-11-27 深圳大学 Three-dimensional detection method and apparatus for a mobile phone housing

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP6161775B2 (ja) * 2016-09-06 2017-07-12 株式会社キーエンス Shape measuring device, shape measuring method, and shape measuring program
CN206488748U (zh) * 2016-10-25 2017-09-12 成都频泰医疗设备有限公司 Time-shared three-dimensional scanning system



Also Published As

Publication number Publication date
CN111630343A (zh) 2020-09-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18942538

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15/10/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18942538

Country of ref document: EP

Kind code of ref document: A1