CN110619617A - Three-dimensional imaging method, device, equipment and computer readable storage medium - Google Patents

Three-dimensional imaging method, device, equipment and computer readable storage medium

Info

Publication number
CN110619617A
CN110619617A
Authority
CN
China
Prior art keywords
dimensional
image data
area array
data
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910927761.4A
Other languages
Chinese (zh)
Other versions
CN110619617B (en)
Inventor
孙海江
王宇庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS filed Critical Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN201910927761.4A priority Critical patent/CN110619617B/en
Publication of CN110619617A publication Critical patent/CN110619617A/en
Application granted granted Critical
Publication of CN110619617B publication Critical patent/CN110619617B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10141 - Special mode during image acquisition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present invention disclose a three-dimensional imaging method, device and system. The three-dimensional imaging system includes an image acquisition device, a synchronous drive circuit and an information processor. The image acquisition device includes a visible light sensor and an area array three-dimensional sensor: the visible light sensor collects two-dimensional image data of the measured point, and the area array three-dimensional sensor collects depth image data of the measured point and outputs it in area array form. The synchronous drive circuit controls the synchronization of the two-dimensional image data acquisition and the depth image data acquisition, so as to ensure consistency of the acquired time-domain information; the information processor fuses the two-dimensional image data and the depth image data in real time and generates point cloud data output in area array form. The technical solution provided by this application realizes the output of high-resolution, high-frame-rate point cloud data in area array form and meets the need for high-resolution, high-precision three-dimensional imaging in the field of depth vision.

Description

Three-dimensional imaging method, device, equipment and computer-readable storage medium

Technical Field

Embodiments of the present invention relate to the technical field of optical imaging, and in particular to a three-dimensional imaging method, device and system.

Background Art

Depth vision technology is an important development direction for the future machine vision and optoelectronics industries. Compared with traditional two-dimensional machine vision information processing, a depth image contains both the three-dimensional depth information and the two-dimensional grayscale information of a scene, which is consistent with the visual imaging mechanism of the human eye. The machine vision industry will therefore evolve from ordinary two-dimensional image vision to depth vision, and this technology is expected to bring disruptive change to intelligent robotics, autonomous driving, AR/VR, security monitoring and almost every other field related to the vision and optoelectronics industries.

The core of depth vision is high-performance three-dimensional depth imaging. In the related art, three-dimensional imaging can be realized by laser scanning, structured light, binocular vision and area array TOF (Time of Flight) technology. Scanning three-dimensional imaging, structured light and binocular vision, however, cannot meet the requirements of high-performance depth imaging because of their complex mechanisms, high cost and poor stability. Although TOF technology can acquire scene depth information in a non-scanning manner, it is limited by sensor technology and optical design theory and cannot acquire more point cloud data in real time: TOF imaging resolution is too low, imaging quality is poor, and the operating range cannot meet the needs of high-end applications. As a result, the three-dimensional information can only be used for distance measurement and cannot be applied as a visual imaging technology.

Summary of the Invention

Embodiments of the present disclosure provide a three-dimensional imaging method, device and system, which realize the output of high-resolution, high-frame-rate point cloud data in area array form and meet the need for high-resolution, high-precision three-dimensional imaging in the field of depth vision.

To solve the above technical problems, embodiments of the present invention provide the following technical solutions:

In one aspect, an embodiment of the present invention provides a three-dimensional imaging system, including an image acquisition device, a synchronous drive circuit and an information processor;

wherein the image acquisition device includes a visible light sensor for collecting two-dimensional image data of a measured point, and an area array three-dimensional sensor for collecting depth image data of the measured point and outputting it in area array form;

the synchronous drive circuit is used to control the synchronization of the two-dimensional image data acquisition and the depth image data acquisition, so as to ensure the consistency of the acquired time-domain information;

the information processor is used to fuse the two-dimensional image data and the depth image data in real time, and to generate point cloud data output in area array form.

Optionally, the image acquisition device further includes a single-point laser sensor, and the information processor further includes a data self-calibration module and a light source control module;

the light source control module is used to modulate the operating frequency of the laser illumination source of the single-point laser sensor according to the operating frequency of the area array three-dimensional sensor, so as to synchronously trigger the area array three-dimensional sensor and the single-point laser sensor to collect data;

the data self-calibration module is used to perform data calibration of the three-dimensional imaging system and self-calibration of the point cloud data according to the distance information of the measured point collected by the single-point laser sensor.

Optionally, the system further includes an energy-converging optical device;

the energy-converging optical device is used to perform long-distance energy convergence of the laser illumination source within the working area of the single-point laser sensor, so as to increase the illumination distance of the laser illumination source.

Optionally, the energy-converging optical device is an aspheric discontinuous arc reflective light cup.

Optionally, the aspheric discontinuous arc reflective light cup is further provided with a reflective film covering the surface of the light cup.

Optionally, the information processor further includes an operating frequency calculation module;

the operating frequency calculation module is used to calculate the operating frequency of the area array three-dimensional sensor according to a first formula, the first formula being:

where N0 is the true measured distance, n is the total number of frequencies of the area array three-dimensional sensor, ki is the multiple of the wavelength, di is the distance obtained at each modulation frequency, c is the speed of light, and fi is the i-th operating frequency.

Optionally, the information processor further includes an operating frequency calculation module;

the operating frequency calculation module is used to calculate the operating frequency of the area array three-dimensional sensor according to a second formula, the second formula being:

where N0' is the weighted true measured distance, n is the total number of frequencies of the area array three-dimensional sensor, m is the number of frequencies used, u is the first frequency index, v is the second frequency index, kv and ku are multiples of the wavelength, c is the speed of light, fu is the u-th operating frequency, fv is the v-th operating frequency, σu² is the variance corresponding to fu, μu is the mean corresponding to fu, μv is the mean corresponding to fv, and σv² is the variance corresponding to fv.

In another aspect, an embodiment of the present invention provides a three-dimensional imaging method, including:

simultaneously acquiring two-dimensional visible light image data and depth image data of a measured point at a first moment;

fusing the two-dimensional visible light image data and the depth image data in real time to generate point cloud data output in area array form.

Optionally, after the two-dimensional visible light image data and the depth image data are fused in real time, the method further includes:

self-calibrating the fused data according to the laser distance data information of the measured point, and outputting the result as the point cloud data of the measured point;

the laser distance data information being the laser distance data of the measured point collected at the first moment.

An embodiment of the present invention further provides a three-dimensional imaging device, including:

a data acquisition module, configured to simultaneously acquire two-dimensional visible light image data of a measured point at a first moment and depth image data output in area array form;

a data fusion module, configured to fuse the two-dimensional visible light image data and the depth image data in real time to generate point cloud data output in area array form.

The advantage of the technical solution provided by this application is that multiple image sensors are jointly used to sample dense, high-density image data, and a synchronous drive circuit is provided to ensure that the high-density visible light data and the dense depth image data are sampled synchronously, thereby guaranteeing the time-domain consistency of the collected image data. Finally, the time-domain-consistent image data are fused in real time, which effectively enriches the detail information of the depth image, improves the quality and resolution of three-dimensional imaging, and realizes the output of high-resolution, high-frame-rate point cloud data in area array form, meeting the need for high-resolution, high-precision three-dimensional imaging in the field of depth vision.

In addition, embodiments of the present invention also provide a corresponding implementation method and a virtual device for the three-dimensional imaging system, further making the method more feasible; the device, equipment and computer-readable storage medium have corresponding advantages.

It should be understood that the above general description and the following detailed description are exemplary only and do not limit the present disclosure.

Brief Description of the Drawings

In order to explain the technical solutions of the embodiments of the present invention or the related art more clearly, the drawings needed in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a structural diagram of one specific implementation of a three-dimensional imaging system provided by an embodiment of the present invention;

FIG. 2 is a structural diagram of another specific implementation of the three-dimensional imaging system provided by an embodiment of the present invention;

FIG. 3 is a schematic flowchart of a three-dimensional imaging method provided by an embodiment of the present invention;

FIG. 4 is a schematic flowchart of another three-dimensional imaging method provided by an embodiment of the present invention;

FIG. 5 is a structural diagram of one specific implementation of a three-dimensional imaging device provided by an embodiment of the present invention;

FIG. 6 is a structural diagram of another specific implementation of a three-dimensional imaging device provided by an embodiment of the present invention.

Detailed Description of the Embodiments

In order to enable those skilled in the art to better understand the solution of the present invention, the present invention is further described in detail below with reference to the drawings and specific embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.

The terms "first", "second", "third", "fourth", etc. in the specification, claims and drawings of this application are used to distinguish different objects, not to describe a specific order. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units, but may include steps or units that are not listed.

Having introduced the technical solutions of the embodiments of the present invention, various non-limiting implementations of the present application are described in detail below.

Referring first to FIG. 1, FIG. 1 is a schematic structural diagram of one implementation of a three-dimensional imaging system provided by an embodiment of the present invention. The embodiment may include the following.

The three-dimensional imaging system may include an image acquisition device 1, a synchronous drive circuit 2 and an information processor 3, with the image acquisition device 1 connected to the synchronous drive circuit 2 and the information processor 3 respectively.

The image acquisition device 1 includes a visible light sensor 11 and an area array three-dimensional sensor 12. The visible light sensor 11 is used to collect two-dimensional grayscale image data, and the area array three-dimensional sensor 12 is used to collect depth image data, the depth image data being output in area array form. The number, type and hardware parameters of the visible light sensors 11 and area array three-dimensional sensors 12 included in the image acquisition device 1 can be determined according to the actual application scenario, and this application imposes no limitation on them.

In this application, the synchronous drive circuit 2 is used to control the synchronization of two-dimensional image data acquisition and depth image data acquisition. That is, the synchronous drive circuit 2 triggers the image sensors in the image acquisition device 1, such as the visible light sensor 11 and the area array three-dimensional sensor 12, to collect data from the measured point at the same time, thereby ensuring that the time-domain information of the data collected by each image sensor is consistent. Those skilled in the art can determine the structure of the synchronous drive circuit 2 and the circuit components it contains according to the actual application scenario; this application imposes no limitation, as long as the image sensors in the image acquisition device 1 can be controlled to sample data simultaneously. In addition, noise-reduction measures such as filtering can keep the noise of the synchronous drive circuit 2 within a preset noise value, its level of integration can be raised above a preset integration reference threshold, and its voltage adaptation range can be set wide. A synchronous drive circuit 2 with low noise, high integration and a wide voltage adaptation range can realize various complex control functions for the front-end image acquisition device 1, so that each image sensor outputs data stably according to its rated specifications.
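The time-domain consistency described above can also be checked in software on the receiving side. The following Python sketch is purely illustrative and not part of the patent: it assumes each sensor stamps its frames with the shared trigger time, and it pairs visible-light and depth frames whose timestamps agree within an assumed tolerance.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frame:
    timestamp: float  # trigger time in seconds, stamped by the drive circuit
    data: object      # image payload (2D grayscale or area-array depth)

def pair_synchronized_frames(visible: List[Frame],
                             depth: List[Frame],
                             tolerance: float = 1e-3) -> List[Tuple[Frame, Frame]]:
    """Pair visible-light and depth frames captured by a common trigger.

    Both lists are assumed to be sorted by timestamp. Two frames are treated
    as time-domain consistent when their trigger timestamps differ by less
    than `tolerance` seconds.
    """
    pairs = []
    j = 0
    for v in visible:
        # Advance to the depth frame whose timestamp is closest to this one.
        while (j + 1 < len(depth) and
               abs(depth[j + 1].timestamp - v.timestamp) <=
               abs(depth[j].timestamp - v.timestamp)):
            j += 1
        if depth and abs(depth[j].timestamp - v.timestamp) <= tolerance:
            pairs.append((v, depth[j]))
    return pairs
```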

In this embodiment, the information processor 3 is used to fuse the two-dimensional image data and the depth image data in real time and to generate point cloud data output in area array form. Any image processing algorithm in the related art that can fuse two-dimensional data with three-dimensional data may be used, and this application imposes no limitation on it. Fusing the depth image output by TOF with the visible light image further enriches the detail information of the depth image; the point cloud data generated after fusion can theoretically reach 8×8 times the resolution of the point cloud data of the area array three-dimensional image sensor 12. It can be seen that the present application can effectively improve three-dimensional imaging quality and resolution.
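As a rough illustration of this fusion step, the Python sketch below, which is an assumption rather than the patent's algorithm, upsamples a low-resolution TOF depth map to the resolution of the visible-light image, attaches a grayscale intensity to every depth sample, and back-projects the result into an area-array point cloud using assumed pinhole intrinsics (fx, fy, cx, cy). Extrinsic registration between the two sensors is omitted for brevity.

```python
import numpy as np

def fuse_to_point_cloud(gray: np.ndarray, depth: np.ndarray,
                        fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Fuse a 2D grayscale image with a lower-resolution depth map.

    Returns an (H, W, 4) area-array point cloud holding (X, Y, Z, intensity)
    for every pixel of the visible-light image. The two sensors are assumed
    to be registered to a common optical axis, which is a simplification.
    """
    h, w = gray.shape
    dh, dw = depth.shape

    # Nearest-neighbour upsampling of the depth map to the visible resolution;
    # this is where the theoretical resolution gain (e.g. 8x8) comes from.
    rows = np.arange(h) * dh // h
    cols = np.arange(w) * dw // w
    depth_up = depth[np.ix_(rows, cols)].astype(np.float64)

    # Back-project every pixel through the assumed pinhole model.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth_up
    y = (v - cy) / fy * depth_up
    return np.stack([x, y, depth_up, gray.astype(np.float64)], axis=-1)
```

In practice the nearest-neighbour step would typically be replaced by an edge-aware upsampling guided by the grayscale image, so that interpolated depth values follow object boundaries.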

In the technical solution provided by the embodiment of the present invention, multiple image sensors are jointly used to sample dense, high-density image data, and a synchronous drive circuit is provided to ensure that the high-density visible light data and the dense depth image data are sampled synchronously, thereby guaranteeing the time-domain consistency of the collected image data. Finally, the time-domain-consistent image data are fused in real time, which effectively enriches the detail information of the depth image, improves the quality and resolution of three-dimensional imaging, and realizes the output of high-resolution, high-frame-rate point cloud data in area array form, meeting the need for high-resolution, high-precision three-dimensional imaging in the field of depth vision.

In another implementation, in order to further improve the imaging accuracy and resolution of the three-dimensional imaging system, the image acquisition device 1 may further include a single-point laser sensor 13 for collecting distance data of the measured point. The single-point laser sensor 13 may be any single-point scanning laser displacement sensor, which does not affect the implementation of this application. The single-point laser sensor 13, the visible light sensor 11 and the area array three-dimensional sensor 12 collect data synchronously under the control of the synchronous drive circuit 2. Correspondingly, the information processor 3 may further include a light source control module, which can modulate the operating frequency of the laser illumination source of the single-point laser sensor according to the operating frequency of the area array three-dimensional sensor, so as to synchronously trigger the area array three-dimensional sensor and the single-point laser sensor to collect data. Using the relationship between the laser distance data of the measured point collected by the single-point laser sensor 13 and the depth information in the point cloud data generated by the information processor 3 after real-time fusion, the point cloud data can be self-calibrated, and the calibrated point cloud data is output as the three-dimensional imaging data of the measured point, improving the accuracy and resolution of three-dimensional imaging.

In this embodiment, in order to avoid cumbersome manual calibration and to improve the adaptability and degree of automation of the three-dimensional imaging system, this application can also calibrate or correct the system data by comparing the distance data of the tested point collected by the single-point laser sensor with the actual distance data, thereby implementing simple automatic calibration, avoiding the complicated operation of manual calibration and improving the degree of automation of the whole system. That is to say, the information processor 3 may further include a data self-calibration module, which performs data calibration of the three-dimensional imaging system and self-calibration of the point cloud data according to the distance information of the measured point collected by the single-point laser sensor.
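One simple way to realize the described self-calibration is a least-squares scale-and-offset fit between the fused depth values and the reference distances reported by the single-point laser sensor at the same synchronized instants. The Python sketch below is a hypothetical illustration under that assumption; the patent does not specify the correction model.

```python
import numpy as np

def fit_depth_correction(fused_depth: np.ndarray,
                         laser_reference: np.ndarray) -> tuple[float, float]:
    """Fit corrected_depth = a * depth + b against laser reference distances.

    fused_depth: depth values sampled from the fused point cloud at the
                 pixel hit by the single-point laser, one value per frame.
    laser_reference: distances reported by the single-point laser sensor
                     at the same synchronized moments.
    """
    A = np.stack([fused_depth, np.ones_like(fused_depth)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, laser_reference, rcond=None)
    return float(a), float(b)

def apply_depth_correction(point_cloud: np.ndarray, a: float, b: float) -> np.ndarray:
    """Apply the fitted correction to the Z channel of an (H, W, 4) point cloud."""
    corrected = point_cloud.copy()
    corrected[..., 2] = a * corrected[..., 2] + b
    return corrected
```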

It can be seen from the above that this embodiment of the present invention not only realizes self-calibrating, fully automatic point cloud data output and further improves the accuracy and resolution of the three-dimensional imaging data, but also improves the degree of automation of the three-dimensional imaging system.

In another embodiment, in order to realize long-distance energy convergence of the laser illumination and improve the detection performance of the single-point laser sensor 13, an energy-converging optical device may also be provided. The energy-converging optical device is used to perform long-distance energy convergence of the laser illumination source within the working area of the single-point laser sensor 13, so as to increase the illumination distance of the laser illumination source. In other words, the position and structural parameters of the energy-converging optical device in the overall system must ensure that the entire working area of the single-point laser sensor 13 is covered. Optionally, for a good energy-converging effect, the energy-converging optical device may be an aspheric discontinuous arc reflective light cup; of course, those skilled in the art may use other devices capable of energy convergence according to the specific application scenario, and this does not affect the implementation of this application. In addition, in order to further improve the energy-converging effect, the aspheric discontinuous arc reflective light cup may also be provided with a reflective film covering the surface of the light cup.

In some other implementations, as shown in FIG. 2, in order to improve the measurement accuracy of the system, the operating frequency of the area array three-dimensional sensor can also be modulated. Correspondingly, the information processor 3 may further include an operating frequency calculation module. The area array three-dimensional sensor is operated at multiple frequencies f1, f2, ..., fn, and the distance information obtained at each operating frequency is denoted d1, d2, ..., dn. The distance value at operating frequency fi can be calculated by the following formula:

where the result is the true distance for frequency fi, di is the measured distance, ki is the wavelength multiple, c is the speed of light, and fi is the i-th operating frequency.
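The formula itself appears only as an image in the source text. A plausible reconstruction, assuming the standard continuous-wave TOF phase-unwrapping relation in which c/(2·fi) is the unambiguous range of frequency fi and Ni denotes the unwrapped (true) distance, is:

```latex
N_i = d_i + k_i \cdot \frac{c}{2 f_i}
```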

The distance information closest to the actual value can then be calculated by the following formula; that is, the operating frequency calculation module computes it for the area array three-dimensional sensor according to:

where N0 is the true measured distance, n is the total number of frequencies of the area array three-dimensional sensor, ki is the multiple of the wavelength, di is the distance obtained at each modulation frequency, c is the speed of light, and fi is the i-th operating frequency.
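As with the per-frequency relation, only the variable list survives extraction here. A reconstruction consistent with those definitions, averaging the unwrapped per-frequency distances over all n modulation frequencies, would be (again an assumption rather than a verbatim copy of the patent's formula):

```latex
N_0 = \frac{1}{n} \sum_{i=1}^{n} \left( d_i + k_i \cdot \frac{c}{2 f_i} \right)
```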

In order to further improve the calculation accuracy, the frequencies can also be weighted. Taking errors and other influencing factors into account, a weighted average according to a Gaussian distribution can be used, with the area array three-dimensional sensor actually operating at frequencies f1, f2, ..., fm, m ≤ n. The operating frequency calculation module can then compute the weighted result for the area array three-dimensional sensor according to the following formula:

where N0' is the weighted true measured distance, n is the total number of frequencies of the area array three-dimensional sensor, m is the number of frequencies used, u is the first frequency index, v is the second frequency index, kv and ku are multiples of the wavelength, c is the speed of light, fu is the u-th operating frequency, fv is the v-th operating frequency, σu² is the variance corresponding to fu, μu is the mean corresponding to fu, μv is the mean corresponding to fv, and σv² is the variance corresponding to fv.
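The second formula is likewise missing from the extracted text. One standard way to realize a Gaussian-distribution weighting over the m frequencies actually used is an inverse-variance weighted average of the per-frequency unwrapped distances, using the mean μu and variance σu² of the measurements at each frequency. The form below is an assumed reconstruction under that interpretation, not a verbatim copy of the patent's formula:

```latex
N_0' = \frac{\sum_{u=1}^{m} \dfrac{1}{\sigma_u^{2}} \left( \mu_u + k_u \cdot \dfrac{c}{2 f_u} \right)}
            {\sum_{v=1}^{m} \dfrac{1}{\sigma_v^{2}}}
```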

In summary, this application makes comprehensive use of the depth information detected by multiple sensors and, through data fusion processing, obtains high-resolution, high-precision three-dimensional depth information. The three-dimensional imaging system combines information fusion across multiple sensors and is at the same time a highly integrated three-dimensional imaging apparatus. Research in visual physiology and visual psychology shows that the human eye perceives the stereoscopic depth information of a scene, whereas traditional three-dimensional measurement devices can neither perceive this information nor acquire it in area array form. Addressing the short detection range and low degree of intelligence of traditional time-of-flight measurement devices, this application combines high-precision synchronous drive circuit control with the light source design described above to realize high-precision, intelligent, high-resolution time-of-flight measurement, thereby further promoting the wide application of this technology in security monitoring, industrial robotics, industrial inspection and other fields. It breaks through current technical barriers in the field of time-of-flight measurement and develops a time-of-flight point cloud imaging and intelligent signal analysis and processing system that integrates three-dimensional imaging, data processing and data analysis, surpassing currently mature domestic and foreign products in detection range, detection accuracy and resolution and realizing high-resolution area array three-dimensional imaging.

In addition, this application also provides a three-dimensional imaging method for the three-dimensional imaging system. Referring to FIG. 3, FIG. 3 is a schematic flowchart of a three-dimensional imaging method provided by an embodiment of the present invention, which may include the following steps.

S301: simultaneously acquire two-dimensional visible light image data and depth image data of the measured point at a first moment.

The two-dimensional visible light image data and the depth image data are collected at the same time by sensors of different types, and the depth image data is three-dimensional point cloud data output in area array form.

S302: fuse the two-dimensional visible light image data and the depth image data in real time to generate point cloud data output in area array form.

In another implementation, referring to FIG. 4, on the basis of the above embodiment the method may further include:

S303: self-calibrate the fused data according to the laser distance data information of the measured point, and output the result as the point cloud data of the measured point.

For the specific implementation of each step of the three-dimensional imaging method described in this embodiment, reference may be made to the relevant description of the above system embodiment, which is not repeated here.

It can be seen from the above that the embodiment of the present invention realizes the output of high-resolution, high-frame-rate point cloud data in area array form, meeting the need for high-resolution, high-precision three-dimensional imaging in the field of depth vision.

The embodiment of the present invention also provides a corresponding implementation apparatus for the three-dimensional imaging method, which further makes the method more practical. The three-dimensional imaging device provided by the embodiments of the present invention is introduced below; the three-dimensional imaging device described below and the three-dimensional imaging method described above may be cross-referenced.

Referring to FIG. 5, FIG. 5 is a structural diagram of one specific implementation of a three-dimensional imaging device provided by an embodiment of the present invention. The device may include:

a data acquisition module 501, configured to simultaneously acquire two-dimensional visible light image data of the measured point at a first moment and depth image data output in area array form;

a data fusion module 502, configured to fuse the two-dimensional visible light image data and the depth image data in real time to generate point cloud data output in area array form.

Optionally, in some implementations of this embodiment, referring to FIG. 6, the device may further include a point cloud data self-calibration module 503, configured to self-calibrate the fused data according to the laser distance data information of the measured point and to output the result as the point cloud data of the measured point, the laser distance data information being the laser distance data of the measured point collected at the first moment.

The functions of the functional modules of the three-dimensional imaging device described in this embodiment can be implemented according to the method in the above method embodiment, and the specific implementation process can refer to the relevant description of the above method embodiment, which is not repeated here.

It can be seen from the above that the embodiment of the present invention realizes the output of high-resolution, high-frame-rate point cloud data in area array form, meeting the need for high-resolution, high-precision three-dimensional imaging in the field of depth vision.

An embodiment of the present invention also provides three-dimensional imaging equipment, which may specifically include:

a memory for storing a computer program;

a processor for executing the computer program to implement the steps of the three-dimensional imaging method described in any of the above embodiments.

The functions of the functional modules of the three-dimensional imaging equipment described in this embodiment can be implemented according to the method in the above method embodiment, and the specific implementation process can refer to the relevant description of the above method embodiment, which is not repeated here.

It can be seen from the above that the embodiment of the present invention realizes the output of high-resolution, high-frame-rate point cloud data in area array form, meeting the need for high-resolution, high-precision three-dimensional imaging in the field of depth vision.

An embodiment of the present invention also provides a computer-readable storage medium storing a three-dimensional imaging program which, when executed by a processor, performs the steps of the three-dimensional imaging method described in any of the above embodiments.

The functions of the functional modules of the computer-readable storage medium described in this embodiment can be implemented according to the method in the above method embodiment, and the specific implementation process can refer to the relevant description of the above method embodiment, which is not repeated here.

It can be seen from the above that the embodiment of the present invention realizes the output of high-resolution, high-frame-rate point cloud data in area array form, meeting the need for high-resolution, high-precision three-dimensional imaging in the field of depth vision.

The embodiments in this specification are described in a progressive manner, with each embodiment focusing on its differences from the other embodiments; for the same or similar parts, the embodiments may be cross-referenced. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant parts can be found in the description of the method.

Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above in general terms of function. Whether these functions are executed in hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be regarded as exceeding the scope of the present invention.

The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

The three-dimensional imaging method, device and system provided by the present invention have been described in detail above. Specific examples have been used herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. It should be pointed out that those of ordinary skill in the art may make several improvements and modifications to the present invention without departing from its principles, and these improvements and modifications also fall within the protection scope of the claims of the present invention.

Claims (10)

1. A three-dimensional imaging system is characterized by comprising an image acquisition device, a synchronous drive circuit and an information processor;
the image acquisition device comprises a visible light sensor for acquiring two-dimensional image data of a measured point and an area array three-dimensional sensor for acquiring depth image data of the measured point and outputting the depth image data in an area array form;
the synchronous driving circuit is used for controlling the synchronism of the two-dimensional image data and the depth image data acquisition so as to ensure the consistency of time domain information acquisition;
the information processor is used for fusing the two-dimensional image data and the depth image data in real time and generating point cloud data output in an area array form.
2. The three-dimensional imaging system of claim 1, wherein the image acquisition device further comprises a single-point laser sensor, the information processor further comprises a data self-calibration module and a light source control module;
the light source control module is used for modulating the working frequency of a laser lighting source of the single-point laser sensor according to the working frequency of the area array three-dimensional sensor so as to synchronously trigger the area array three-dimensional sensor and the single-point laser sensor to acquire data;
the data self-calibration module is used for calibrating data of the three-dimensional imaging system and self-calibrating point cloud data according to the distance information of the measured point acquired by the single-point laser sensor.
3. The three-dimensional imaging system of claim 2, further comprising an energy-concentrating optical device;
the energy converging optical device is used for carrying out remote energy converging of the laser illumination light source in the working area of the single-point laser sensor so as to increase the illumination distance of the laser illumination light source.
4. The three-dimensional imaging system of claim 3, wherein the energy concentrating optics are aspheric discontinuous arc reflective light cups.
5. The three-dimensional imaging system according to claim 4, wherein the aspheric discontinuous arc reflective light cup is further provided with a reflective film covering the surface of the light cup.
6. The three-dimensional imaging system of claim 1, wherein the information processor further comprises an operating frequency calculation module;
the working frequency calculation module is used for calculating the working frequency of the area array three-dimensional sensor according to a first formula, wherein the first formula is as follows:
in the formula, N0 is the true measured distance, n is the total number of frequencies of the area array three-dimensional sensor, ki is a multiple of the wavelength, di is the distance obtained at each modulation frequency, c is the speed of light, and fi is the i-th operating frequency.
7. The three-dimensional imaging system of claim 1, wherein the information processor further comprises an operating frequency calculation module;
the working frequency calculation module is used for calculating the working frequency of the area array three-dimensional sensor according to a second formula, wherein the second formula is as follows:
in the formula, N0' is the weighted true measured distance, n is the total number of frequencies of the area array three-dimensional sensor, m is the total number of frequencies used, u is the first frequency index, v is the second frequency index, kv and ku are multiples of the wavelength, c is the speed of light, fu is the u-th operating frequency, fv is the v-th operating frequency, σu² is the variance corresponding to fu, μu is the mean corresponding to fu, μv is the mean corresponding to fv, and σv² is the variance corresponding to fv.
8. A method of three-dimensional imaging, comprising:
simultaneously acquiring two-dimensional visible light image data of a measured point at a first moment and depth image data output in an area array form;
and fusing the two-dimensional visible light image data and the depth image data in real time to generate point cloud data output in an area array form.
9. The three-dimensional imaging method according to claim 8, wherein after the fusing the two-dimensional visible light image data and the depth image data in real time, further comprising:
self-calibrating the fused data according to the laser distance data information of the measured point to serve as point cloud data of the measured point to be output;
the laser distance data information is the laser distance data of the measured point acquired at the first moment.
10. A three-dimensional imaging apparatus, comprising:
the data acquisition module is used for simultaneously acquiring two-dimensional visible light image data of a measured point at a first moment and depth image data output in an area array form;
and the data fusion module is used for fusing the two-dimensional visible light image data and the depth image data in real time to generate point cloud data output in an area array form.
CN201910927761.4A 2019-09-27 2019-09-27 Three-dimensional imaging method, device, equipment and computer readable storage medium Active CN110619617B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910927761.4A CN110619617B (en) 2019-09-27 2019-09-27 Three-dimensional imaging method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910927761.4A CN110619617B (en) 2019-09-27 2019-09-27 Three-dimensional imaging method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110619617A true CN110619617A (en) 2019-12-27
CN110619617B CN110619617B (en) 2022-05-27

Family

ID=68924767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910927761.4A Active CN110619617B (en) 2019-09-27 2019-09-27 Three-dimensional imaging method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110619617B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667537A (en) * 2020-04-16 2020-09-15 深圳奥比中光科技有限公司 Optical fiber calibration device and method
CN112509023A (en) * 2020-12-11 2021-03-16 国网浙江省电力有限公司衢州供电公司 Multi-source camera system and RGBD registration method
CN112598719A (en) * 2020-12-09 2021-04-02 北京芯翌智能信息技术有限公司 Depth imaging system, calibration method thereof, depth imaging method and storage medium
CN113367638A (en) * 2021-05-14 2021-09-10 广东欧谱曼迪科技有限公司 Method and device for acquiring high-precision three-dimensional fluorescence image, storage medium and terminal

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106291512A (en) * 2016-07-29 2017-01-04 中国科学院光电研究院 A kind of method of array push-broom type laser radar range Nonuniformity Correction
CN106796728A (en) * 2016-11-16 2017-05-31 深圳市大疆创新科技有限公司 Generate method, device, computer system and the mobile device of three-dimensional point cloud
CN106898022A (en) * 2017-01-17 2017-06-27 徐渊 A kind of hand-held quick three-dimensional scanning system and method
CN108694731A (en) * 2018-05-11 2018-10-23 武汉环宇智行科技有限公司 Fusion and positioning method and equipment based on low line beam laser radar and binocular camera
CN109061648A (en) * 2018-07-27 2018-12-21 廖双珍 Speed based on frequency diversity/range ambiguity resolving radar waveform design method
CN109613558A (en) * 2018-12-12 2019-04-12 北京华科博创科技有限公司 A kind of the data fusion method for parallel processing and system of all-solid state laser radar system
CN209375823U (en) * 2018-12-20 2019-09-10 武汉万集信息技术有限公司 3D camera

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106291512A (en) * 2016-07-29 2017-01-04 中国科学院光电研究院 A kind of method of array push-broom type laser radar range Nonuniformity Correction
CN106796728A (en) * 2016-11-16 2017-05-31 深圳市大疆创新科技有限公司 Generate method, device, computer system and the mobile device of three-dimensional point cloud
CN106898022A (en) * 2017-01-17 2017-06-27 徐渊 A kind of hand-held quick three-dimensional scanning system and method
CN108694731A (en) * 2018-05-11 2018-10-23 武汉环宇智行科技有限公司 Fusion and positioning method and equipment based on low line beam laser radar and binocular camera
CN109061648A (en) * 2018-07-27 2018-12-21 廖双珍 Speed based on frequency diversity/range ambiguity resolving radar waveform design method
CN109613558A (en) * 2018-12-12 2019-04-12 北京华科博创科技有限公司 A kind of the data fusion method for parallel processing and system of all-solid state laser radar system
CN209375823U (en) * 2018-12-20 2019-09-10 武汉万集信息技术有限公司 3D camera

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667537A (en) * 2020-04-16 2020-09-15 深圳奥比中光科技有限公司 Optical fiber calibration device and method
CN111667537B (en) * 2020-04-16 2023-04-07 奥比中光科技集团股份有限公司 Optical fiber calibration device and method
CN112598719A (en) * 2020-12-09 2021-04-02 北京芯翌智能信息技术有限公司 Depth imaging system, calibration method thereof, depth imaging method and storage medium
CN112598719B (en) * 2020-12-09 2024-04-09 上海芯翌智能科技有限公司 Depth imaging system, calibration method thereof, depth imaging method and storage medium
CN112509023A (en) * 2020-12-11 2021-03-16 国网浙江省电力有限公司衢州供电公司 Multi-source camera system and RGBD registration method
CN112509023B (en) * 2020-12-11 2022-11-22 国网浙江省电力有限公司衢州供电公司 A multi-source camera system and RGBD registration method
CN113367638A (en) * 2021-05-14 2021-09-10 广东欧谱曼迪科技有限公司 Method and device for acquiring high-precision three-dimensional fluorescence image, storage medium and terminal
CN113367638B (en) * 2021-05-14 2023-01-03 广东欧谱曼迪科技有限公司 Method and device for acquiring high-precision three-dimensional fluorescence image, storage medium and terminal

Also Published As

Publication number Publication date
CN110619617B (en) 2022-05-27

Similar Documents

Publication Publication Date Title
CN110619617B (en) Three-dimensional imaging method, device, equipment and computer readable storage medium
JP6564537B1 (en) 3D reconstruction method and apparatus using monocular 3D scanning system
CN106772431B (en) A kind of Depth Information Acquistion devices and methods therefor of combination TOF technology and binocular vision
CN104005325B (en) Based on pavement crack checkout gear and the method for the degree of depth and gray level image
US20180343381A1 (en) Distance image acquisition apparatus and application thereof
US20130194390A1 (en) Distance measuring device
US9958547B2 (en) Three-dimensional imaging radar system and method based on a plurality of times of integral
CN105115445A (en) Three-dimensional imaging system and imaging method based on combination of depth camera and binocular vision
CN110609299A (en) Three-dimensional imaging system based on TOF
EP3008485A1 (en) Detector for optically detecting at least one object
CN209375823U (en) 3D camera
CN111352121B (en) Flight time ranging system and ranging method thereof
CN107860337A (en) Structural light three-dimensional method for reconstructing and device based on array camera
CN111538024A (en) Filtering ToF depth measurement method and device
CN109444916A (en) The unmanned travelable area determining device of one kind and method
CN103528562A (en) Method for detecting distance of human eyes and display terminal based on single camera
JP2020020612A (en) Distance measuring device, method for measuring distance, program, and mobile body
CN108007359B (en) An absolute grating ruler and displacement measuring method
CN108802746A (en) A kind of jamproof distance measuring method and device
CN111798507A (en) Power transmission line safety distance measuring method, computer equipment and storage medium
CN111352120A (en) Flight time ranging system and ranging method thereof
CN102944879A (en) Four-dimensional imaging device based on MEMS two-dimensional scan mirror and imaging method of imaging device
CN110992463B (en) A three-dimensional reconstruction method and system for transmission conductor sag based on trinocular vision
KR101706627B1 (en) Distance measurement device by using stereo image and method thereof
CN110068308A (en) A kind of distance measuring method and range-measurement system based on more mesh cameras

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant