WO2019127508A1 - Smart terminal, 3D imaging method thereof, and 3D imaging system - Google Patents

Smart terminal, 3D imaging method thereof, and 3D imaging system

Info

Publication number
WO2019127508A1
Authority
WO
WIPO (PCT)
Prior art keywords
smart terminal
feature
original image
smart
feature information
Prior art date
Application number
PCT/CN2017/120237
Other languages
English (en)
French (fr)
Inventor
阳光
Original Assignee
深圳配天智能技术研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳配天智能技术研究院有限公司 filed Critical 深圳配天智能技术研究院有限公司
Priority to PCT/CN2017/120237 priority Critical patent/WO2019127508A1/zh
Priority to CN201780035378.7A priority patent/CN109328459B/zh
Publication of WO2019127508A1 publication Critical patent/WO2019127508A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the present invention relates to the field of 3D stereo vision technology, and in particular to an intelligent terminal, a 3D imaging method thereof, and a 3D imaging system.
  • a smart camera is a highly integrated miniature machine-vision system, generally composed of an image acquisition unit, an image processing unit, image processing software, and a network communication device; it integrates image acquisition, processing, and communication into a single camera.
  • This provides a machine vision solution that is versatile, modular, highly reliable, and easy to implement.
  • the image acquisition unit converts the optical image into an analog/digital image and outputs it to the image processing unit;
  • the image processing unit stores the image data from the acquisition unit in real time and processes it with the support of the image processing software; the image processing software performs the image processing functions with the support of the image processing unit's hardware;
  • the network communication device handles the communication of control information and image data.
  • the basic 3D vision principle of a smart camera is to first photograph a target object with an image sensor to obtain image information of the object, and then compute on that image information to obtain a 3D image of the target object.
  • however, smart cameras have limited computing capacity and can generally handle only simple image tasks, and their image sensors and communication modules cannot be extended or upgraded; this limits their range of application and leaves the smart camera unable to perform complex, high-precision 3D imaging.
  • the technical problem to be solved by the present invention is to provide an intelligent terminal, a 3D imaging method thereof, and a 3D imaging system that solve the problem that an intelligent terminal cannot handle complex, high-precision 3D imaging.
  • the first technical solution adopted by the present invention is a 3D imaging method of a smart terminal, which includes: the smart terminal acquires an original image of the target; the smart terminal extracts the feature information in the original image and transmits the original image and the feature information to a computing unit; the computing unit performs feature matching and calculation according to the original image and the feature information; and the smart terminal forms a 3D image of the target according to the feature matching and calculation results.
  • the second technical solution adopted by the present invention is an intelligent terminal that includes a communication circuit, a memory, and a processor;
  • the communication circuit is used to acquire and transmit instructions;
  • the memory stores the program executed by the processor and the intermediate data generated when the program is executed;
  • when the processor executes the smart terminal's program, any step of the above 3D imaging method of the smart terminal is implemented.
  • the third technical solution adopted by the present invention is a 3D imaging system that includes an intelligent terminal and a computing unit, where the computing unit is signal-connected to the intelligent terminal, and the 3D imaging system can implement any step of the above 3D imaging method of the smart terminal.
  • the beneficial effect of the invention is that, by externally connecting a computing unit to the smart terminal, the intelligent terminal automatically tracks the target object and extracts feature information from it, while the computing unit receives the feature information and original image output by the smart terminal, performs the feature matching calculation, and feeds the matching result back to the intelligent terminal, which then performs imaging or further feature extraction according to the feedback.
  • the intelligent terminal thus only needs to run simple computing tasks, while complex computing tasks are handled by an independent computing unit; this optimizes the performance of the intelligent terminal and expands its application capability, enabling it to handle complex, high-precision 3D imaging tasks and improving the user experience.
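The division of labour described above can be sketched as a control loop: the terminal extracts and transmits, the computing unit matches and either returns results or feeds failures back. All function boundaries here are illustrative assumptions about how the roles might be split; the patent does not specify an API.

```python
# Illustrative control loop for the terminal / computing-unit split.
# extract(), match() and render() are stand-ins for the lightweight
# on-terminal tasks and the heavyweight off-terminal computation.

def imaging_loop(extract, match, render, max_rounds=5):
    """Run extract -> match rounds until matching succeeds, then render."""
    feedback = None
    for _ in range(max_rounds):
        image, features = extract(feedback)      # cheap, on the smart terminal
        result, failed = match(image, features)  # heavy, on the computing unit
        if not failed:
            return render(result)                # 3D image formed on the terminal
        feedback = failed                        # re-extract using failure info
    raise RuntimeError("matching did not converge")

# Toy run: the first round fails on one feature, the second succeeds
def extract(fb):
    return "img", (["a", "b"] if fb else ["a", "bad"])

def match(img, feats):
    return [f for f in feats if f != "bad"], [f for f in feats if f == "bad"]

print(imaging_loop(extract, match, lambda r: f"3D({len(r)} features)"))
```

The loop mirrors the feedback path in the text: matching failures travel back to the terminal, which re-extracts before the unit tries again.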
  • FIG. 1 is a schematic flow chart of a 3D imaging method of a smart terminal provided by the present invention
  • FIG. 2 is a schematic structural diagram of a 3D imaging system of an intelligent terminal provided by the present invention.
  • FIG. 3 is a schematic structural diagram of an intelligent terminal in a 3D imaging system of an intelligent terminal according to the present invention.
  • to enable the intelligent terminal to handle complex, high-precision 3D imaging tasks, the present invention externally connects a computing unit to the intelligent terminal, and the computing unit is signal-connected to the intelligent terminal.
  • the intelligent terminal is mainly used to track the target object, extract feature information from the target object's image, and perform simple computing tasks.
  • the 3D imaging method of the smart terminal in the present invention is described below taking a smart camera as a specific embodiment of the smart terminal.
  • FIG. 1 is a schematic flowchart diagram of a 3D imaging method of a smart terminal according to the present invention.
  • the 3D imaging method of the smart terminal mainly includes four steps.
  • Step 101 The smart terminal acquires an original image of the target.
  • the smart terminal includes smart devices such as a smart camera, a smartphone, and a tablet computer; a smart camera is taken as the specific embodiment to explain the present invention.
  • the smart camera is first calibrated to obtain the calibration parameter information, and then tracks the target and acquires the original image of the target.
  • the smart camera is a monocular camera, a binocular camera or a multi-camera camera.
  • in acquiring the image information of the target, a geometric model of camera imaging must be established in order to determine the relationship between the three-dimensional geometric position of a point on the target's surface and its corresponding point in the image; the parameters of this geometric model are the camera's parameters, and the process of obtaining them is called camera calibration.
  • the calibration of the smart camera includes calibration of its internal and external parameters: internal parameters such as the principal point coordinates, focal length, radial distortion coefficients, and tangential distortion coefficients, and external parameters such as the rotation matrix and translation matrix, are obtained through calibration.
  • after calibration, the target is tracked, that is, the target is kept in focus, and original images of the target are continuously acquired.
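The calibration parameters described above define the standard pinhole projection model: a world point is rotated and translated into the camera frame by the external parameters, then mapped to pixels by the internal parameters. The sketch below (with invented numeric values, not taken from the patent, and distortion omitted) shows that mapping.

```python
# Minimal pinhole-projection sketch: maps a 3D world point to pixel
# coordinates using intrinsic (focal length, principal point) and
# extrinsic (rotation, translation) parameters. All numbers are
# illustrative; lens distortion is ignored for simplicity.

def project(point_w, fx, fy, cx, cy, R, t):
    """Project a world point [X, Y, Z] to pixel coordinates (u, v)."""
    # Extrinsics: world frame -> camera frame, Xc = R @ Xw + t
    xc = [sum(R[i][j] * point_w[j] for j in range(3)) + t[i] for i in range(3)]
    # Intrinsics: perspective divide, then scale and shift into pixels
    u = fx * xc[0] / xc[2] + cx
    v = fy * xc[1] / xc[2] + cy
    return u, v

# Identity rotation; the point sits 5 units in front of the camera
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 5.0]
u, v = project([1.0, 2.0, 0.0], fx=800, fy=800, cx=320, cy=240, R=R, t=t)
print(u, v)  # 480.0 560.0
```

Calibration is exactly the inverse problem: recovering fx, fy, cx, cy, R, and t (plus distortion coefficients) from images of known reference points.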
  • Step 102 The smart terminal extracts feature information in the original image, and transmits the original image and the feature information to the computing unit.
  • after acquiring the original image of the target, the smart camera performs feature detection on the target image and feature estimation across the preceding and following frames, and extracts the feature information in the original image, where the feature information includes feature points and feature lines.
  • the smart camera then transmits the extracted feature information and the original image to the computing unit, where the computing unit may be a cloud server or an arithmetic unit with logic gate circuits, which is not specifically limited herein.
  • in a specific embodiment, the 3D imaging system of the smart terminal includes two smart cameras and a computing unit, and the target is a triangular object; the two smart cameras each acquire an original image of the triangular object, perform feature detection and inter-frame feature estimation on the object's image, and extract three distinct feature points from it, for example the three vertices of the triangle, and then each transmits the three extracted vertices and the captured original image information to the computing unit.
  • in other embodiments, the smart camera may instead extract feature lines from the target, for example the three edges of the triangle, with the two smart cameras each transmitting the three extracted edges and the captured original image information to the computing unit.
  • the smart camera can also extract both feature points and feature lines, for example one edge and two vertices of the triangle, with the two smart cameras each transmitting the extracted edge and two vertices together with the captured original image information to the computing unit.
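One way to picture what each smart camera transmits in step 102 is a small payload carrying a reference to the original image plus the extracted feature points and lines. This message structure is hypothetical, invented for illustration; the patent does not define a transmission format.

```python
# Hypothetical payload a smart camera might send to the computing unit.
# Field names are invented for illustration only.

def make_payload(camera_id, image_id, feature_points, feature_lines):
    """Bundle the original-image reference with the extracted features."""
    return {
        "camera_id": camera_id,
        "image_id": image_id,            # reference to the raw image
        "points": list(feature_points),  # e.g. triangle vertices as (u, v)
        "lines": list(feature_lines),    # e.g. triangle edges as point pairs
    }

# Two cameras, each extracting one edge and two vertices of the triangle
p1 = make_payload("cam_A", 42, [(10, 12), (50, 12)], [((10, 12), (30, 48))])
p2 = make_payload("cam_B", 42, [(14, 10), (54, 11)], [((14, 10), (33, 47))])
print(len(p1["points"]), len(p1["lines"]))  # 2 1
```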
  • Step 103 The calculation unit performs feature matching and calculation according to the original image and the feature information.
  • the computing unit first performs feature matching on the whole image according to the acquired original image and feature information, and then performs feature matching on sub-regions. If the matching succeeds, a distributed computation is performed according to the matching result; if the matching fails, the computing unit feeds the failed feature information back to the smart camera, which reacquires the original image of the target, extracts its feature information, and transmits it to the computing unit for a new matching calculation.
  • in a specific embodiment, the target is a triangular object; the computing unit performs feature matching according to the three vertices acquired from each of the two smart cameras and the captured original image information, and performs a distributed computation after the matching succeeds. If one of the three extracted vertices fails to match, the computing unit feeds the information of the failed vertex back to the smart cameras; the smart cameras re-extract feature points, for example taking the midpoint of one side of the triangle as the feature point, and each transmits the extracted midpoint and image information to the computing unit, which performs the matching calculation again.
  • in other embodiments, when extracted feature points fail to match, feature lines may be re-extracted for matching instead.
  • for example, when one of the three extracted vertices fails to match, the computing unit feeds the information of the failed vertex back to the smart cameras, which re-extract a feature line such as one edge of the triangle; the two smart cameras each transmit the extracted edge and image information to the computing unit, which performs the matching calculation again.
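The match-then-feedback behaviour of step 103 can be sketched as below. The matching criterion here (nearest neighbour within a pixel-distance threshold) is a stand-in chosen for illustration; the patent does not name a specific matching algorithm.

```python
# Sketch of the computing unit's matching step: pair each feature from
# camera A with one from camera B, and report failures so the cameras
# can re-extract. The distance-threshold criterion is an illustrative
# stand-in, not the patent's algorithm.

def match_features(feats_a, feats_b, max_dist=5.0):
    """Greedily match 2D features; return (matched pairs, unmatched from A)."""
    matched, failed = [], []
    remaining = list(feats_b)
    for fa in feats_a:
        best = None
        for fb in remaining:
            d = ((fa[0] - fb[0]) ** 2 + (fa[1] - fb[1]) ** 2) ** 0.5
            if d <= max_dist and (best is None or d < best[0]):
                best = (d, fb)
        if best is None:
            failed.append(fa)            # fed back to the smart camera
        else:
            matched.append((fa, best[1]))
            remaining.remove(best[1])
    return matched, failed

pairs, failures = match_features([(10, 10), (50, 50), (90, 90)],
                                 [(11, 9), (52, 51)])
print(len(pairs), len(failures))  # 2 1 -> (90, 90) is fed back
```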
  • in extracting feature information, the smart camera increases the proportion of feature types with a high matching success rate and reduces the proportion of feature types with a low matching success rate.
  • for example, the two smart cameras each extract 100 feature points from the target image and transmit them to the computing unit for matching; only 30 feature points match successfully, i.e. the matching success rate is 30%, and the computing unit feeds the information of the 70 failed feature points back to the smart cameras.
  • the two smart cameras then re-extract 70 feature lines and transmit them to the computing unit for matching; all 70 feature lines match successfully, i.e. the matching success rate is 100%.
  • in the continued shooting of the triangular object, the smart camera therefore increases the proportion of feature lines and reduces the proportion of feature points; for example, when again extracting 100 features, it extracts 90 feature lines and only 10 feature points. Such more targeted extraction of feature information increases the matching success rate.
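The re-weighting in the 100-feature example above can be expressed as a simple rule: allocate the next extraction budget according to each feature type's observed match success rate. The proportional formula below is my illustrative assumption (it yields a similar, but not identical, split to the patent's 90/10 example); the patent states only the principle that higher-success types get a larger share.

```python
# Illustrative budget re-allocation between feature points and feature
# lines based on observed match success rates. The proportional formula
# is an assumption; the patent states the principle, not a formula.

def reallocate(budget, success_rates):
    """Split `budget` features across types proportionally to success rate."""
    total = sum(success_rates.values())
    shares = {k: int(round(budget * r / total)) for k, r in success_rates.items()}
    # Fix rounding drift so the shares sum exactly to the budget
    drift = budget - sum(shares.values())
    if drift:
        shares[max(success_rates, key=success_rates.get)] += drift
    return shares

# 30/100 points matched (30%), 70/70 lines matched (100%)
plan = reallocate(100, {"points": 0.30, "lines": 1.00})
print(plan)  # lines receive the larger share of the next 100 features
```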
  • Step 104 The smart terminal forms a 3D image of the target according to the feature matching and the calculation result.
  • when the computing unit's matching calculation according to the feature information extracted by the smart camera and the original image succeeds, the smart terminal forms a 3D image of the target according to the feature matching and calculation results; that is, the 3D image of the target can be observed through the smart camera.
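In the simplest rectified two-camera case, step 104's reconstruction of a matched feature reduces to depth from disparity, Z = f * B / d. The parallel-camera setup and the numbers below are illustrative assumptions, not taken from the patent.

```python
# Depth-from-disparity sketch for two parallel (rectified) cameras:
# once a feature is matched in both views, its depth follows from
# Z = f * B / d. Setup and numbers are illustrative assumptions.

def depth_from_disparity(f_px, baseline_m, u_left, u_right):
    """Depth (m) of a matched feature, given focal length (px) and baseline (m)."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("feature must appear further left in the left image")
    return f_px * baseline_m / disparity

# f = 800 px, cameras 10 cm apart, matched vertex at u = 420 / u = 400
z = depth_from_disparity(800, 0.10, 420, 400)
print(z)  # 4.0 (metres)
```

Repeating this for every matched feature point or line yields the 3D structure that the terminal renders as the target's 3D image.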
  • as can be seen from the above, the present invention externally connects a computing unit to the intelligent terminal, lets the intelligent terminal automatically track the target and extract feature information, and lets the computing unit receive the feature information and original image output by the smart terminal, perform the feature matching calculation, and feed the matching result back to the intelligent terminal, which performs imaging or further feature extraction according to the feedback.
  • the intelligent terminal thus only needs to run simple computing tasks, while complex computing tasks are handled by an independent computing unit; this optimizes the performance of the intelligent terminal and expands its application capability, enabling it to handle complex, high-precision 3D imaging tasks and improving the user experience.
  • FIG. 2 is a schematic structural diagram of a 3D imaging system of a smart terminal according to the present invention.
  • the 3D imaging system includes a first smart camera 201, a second smart camera 202, and a computing unit 203, where the computing unit 203 is signal-connected to the first smart camera 201 and the second smart camera 202.
  • the computing unit 203 is a cloud server or an arithmetic unit with logic gate circuits, without specific limitation.
  • the first smart camera 201 and the second smart camera 202 are two cameras with identical model parameters or with somewhat different model parameters, depending on the actual situation.
  • the two smart cameras are monocular, binocular, or multi-view cameras, without specific limitation.
  • the first smart camera 201 and the second smart camera 202 are both used to track the target, acquire original images of the target, and extract feature information from the images, where the feature information includes feature points and feature lines. Before tracking, calibration must be performed first, through which the internal and external parameters of the smart cameras are obtained.
  • tracking means that the smart camera continuously focuses according to the light field information from the target and continuously acquires the target's image information.
  • the first smart camera 201 and the second smart camera 202 each transmit the extracted feature information and original image information to the computing unit 203, which performs feature matching and distributed computation according to the acquired feature information and original images; when matching features, the computing unit first matches on the whole image and then on sub-regions.
  • after the matching computation of the computing unit 203 succeeds, the smart cameras form a 3D image of the target according to the computation results of the computing unit 203.
  • FIG. 3 is a schematic structural diagram of an intelligent terminal in a 3D imaging system of an intelligent terminal according to the present invention.
  • the intelligent terminal 301 includes a communication circuit 302 used to acquire and transmit instructions, a memory 303 used to store the program executed by the processor 304 and the intermediate data generated while the program runs, and the processor 304, which implements the above 3D imaging method when it executes the program of the smart terminal 301.
  • the smart terminal 301 is a smart camera, and the smart camera includes a monocular camera, a binocular camera, or a multi-view camera, which is not limited in detail.
  • when the communication circuit 302 acquires an instruction to track the target, the smart terminal 301 tracks the target, acquires the original image of the target, extracts the feature information in the image, and stores the original image and feature information in the memory 303, where the feature information includes feature points and feature lines.
  • before the smart terminal 301 tracks the target, it must first be calibrated; its internal and external parameters are obtained by calibration and stored in the memory 303. Tracking means that the smart terminal 301 continuously focuses according to the light field information from the target and continuously acquires the target's image information.
  • after the smart terminal 301 transmits the extracted feature information and original image information to the computing unit, the computing unit performs feature matching and distributed computation according to the acquired feature information and original images, matching first on the whole image and then on sub-regions.
  • after the matching computation succeeds, the communication circuit 302 receives an instruction that the matching computation succeeded, and the processor 304 forms a 3D image of the target according to the result of the matching computation.
  • differing from the prior art, the present invention externally connects a computing unit to the smart terminal, lets the intelligent terminal automatically track the target and extract feature information, and lets the computing unit receive the feature information and original image output by the smart terminal, perform the feature matching calculation, and feed the matching result back to the intelligent terminal, which performs imaging or further feature extraction according to the feedback.
  • the intelligent terminal thus only needs to run simple computing tasks, while complex computing tasks are handled by an independent computing unit; this optimizes the performance of the intelligent terminal and expands its application capability, enabling it to handle complex, high-precision 3D imaging tasks and improving the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The present invention discloses a smart terminal, a 3D imaging method thereof, and a 3D imaging system. The 3D imaging method of the smart terminal includes: the smart terminal acquires an original image of a target; the smart terminal extracts feature information from the original image and transmits the original image and the feature information to a computing unit; the computing unit performs feature matching and calculation according to the original image and the feature information; and the smart terminal forms a 3D image of the target according to the feature matching and calculation results. The invention lets the smart terminal run only simple computing tasks while handing complex computing tasks to an independent computing unit, thereby optimizing the performance of the smart terminal and extending its application capability so that it can handle complex, high-precision 3D imaging tasks, improving the user experience.

Description

Smart Terminal, 3D Imaging Method Thereof, and 3D Imaging System
[Technical Field]
The present invention relates to the field of 3D stereo vision technology, and in particular to a smart terminal, a 3D imaging method thereof, and a 3D imaging system.
[Background Art]
A smart camera is a highly integrated miniature machine-vision system, generally composed of an image acquisition unit, an image processing unit, image processing software, a network communication device, and so on. It integrates image acquisition, processing, and communication into a single camera, thereby providing a versatile, modular, highly reliable, and easily implemented machine-vision solution. The image acquisition unit converts the optical image into an analog/digital image and outputs it to the image processing unit; the image processing unit stores the image data from the acquisition unit in real time and processes it with the support of the image processing software; the image processing software performs the image processing functions with the support of the image processing unit's hardware; and the network communication device handles the communication of control information and image data. At present, the basic 3D vision principle of a smart camera is to first photograph a target object with an image sensor to obtain image information of the object, and then compute on that image information to obtain a 3D image of the target.
However, a smart camera's computing capacity is limited: it can generally handle only simple image tasks, and its image sensor and communication module cannot be extended or upgraded. This limits its range of application and leaves the smart camera unable to perform complex, high-precision 3D imaging.
[Summary of the Invention]
The main technical problem solved by the present invention is to provide a smart terminal, a 3D imaging method thereof, and a 3D imaging system that solve the problem that a smart terminal cannot handle complex, high-precision 3D imaging.
To solve the above technical problem, the first technical solution adopted by the present invention is a 3D imaging method of a smart terminal, which includes: the smart terminal acquires an original image of a target; the smart terminal extracts feature information from the original image and transmits the original image and the feature information to a computing unit; the computing unit performs feature matching and calculation according to the original image and the feature information; and the smart terminal forms a 3D image of the target according to the feature matching and calculation results.
To solve the above technical problem, the second technical solution adopted by the present invention is a smart terminal that includes a communication circuit, a memory, and a processor;
the communication circuit is used to acquire and transmit instructions;
the memory stores the program executed by the processor and the intermediate data generated when the program is executed;
when the processor executes the smart terminal's program, any step of the above 3D imaging method of the smart terminal is implemented.
To solve the above technical problem, the third technical solution adopted by the present invention is a 3D imaging system that includes a smart terminal and a computing unit, where the computing unit is signal-connected to the smart terminal, and the 3D imaging system can implement any step of the above 3D imaging method of the smart terminal.
The beneficial effect of the present invention is that, differing from the prior art, the invention externally connects a computing unit to the smart terminal, lets the smart terminal automatically track the target and extract feature information, and lets the computing unit receive the feature information and original image output by the smart terminal, perform the feature matching calculation, and feed the matching result back to the smart terminal, which then performs imaging or further feature extraction according to the feedback. The smart terminal thus only needs to run simple computing tasks while complex computing tasks are handled by an independent computing unit, which optimizes the performance of the smart terminal and expands its application capability, enabling it to handle complex, high-precision 3D imaging tasks and improving the user experience.
[Brief Description of the Drawings]
FIG. 1 is a schematic flowchart of a 3D imaging method of a smart terminal provided by the present invention;
FIG. 2 is a schematic structural diagram of a 3D imaging system of a smart terminal provided by the present invention;
FIG. 3 is a schematic structural diagram of the smart terminal in a 3D imaging system of a smart terminal provided by the present invention.
[Detailed Description of the Embodiments]
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
To enable the smart terminal to handle complex, high-precision 3D imaging tasks, the present invention externally connects a computing unit to the smart terminal, and the computing unit is signal-connected to the smart terminal. The smart terminal is mainly used to track the target, extract feature information from the target's image, and perform simple computing tasks. Specifically, the 3D imaging method of the smart terminal of the present invention is described below taking a smart camera as a specific embodiment of the smart terminal, with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a 3D imaging method of a smart terminal provided by the present invention. The 3D imaging method of the smart terminal mainly includes four steps.
Step 101: the smart terminal acquires an original image of the target.
The smart terminal includes smart devices such as a smart camera, a smartphone, and a tablet computer; a smart camera is taken as the specific embodiment to explain the present invention. The smart camera is first calibrated to obtain the calibration parameter information, and then tracks the target and acquires the original image of the target. The smart camera is a monocular, binocular, or multi-view camera.
It should be noted that, in acquiring the image information of the target, a geometric model of camera imaging must be established in order to determine the relationship between the three-dimensional geometric position of a point on the target's surface and its corresponding point in the image; the parameters of this geometric model are the camera's parameters, and the process of obtaining them is called camera calibration.
In a specific embodiment, the calibration of the smart camera includes calibration of its internal and external parameters: internal parameters such as the principal point coordinates, focal length, radial distortion coefficients, and tangential distortion coefficients, and external parameters such as the rotation matrix and translation matrix, are obtained through calibration. After calibration, the target is tracked, that is, kept in focus, and original images of the target are continuously acquired.
Step 102: the smart terminal extracts the feature information in the original image and transmits the original image and the feature information to the computing unit.
After acquiring the original image of the target, the smart camera performs feature detection on the target image and feature estimation across the preceding and following frames, and extracts the feature information in the original image, where the feature information includes feature points and feature lines. The smart camera then transmits the extracted feature information and the original image to the computing unit, where the computing unit may be a cloud server or an arithmetic unit with logic gate circuits, which is not specifically limited herein.
In a specific embodiment, the 3D imaging system of the smart terminal includes two smart cameras and a computing unit, and the target is a triangular object. After the two smart cameras each acquire an original image of the triangular object, they perform feature detection and inter-frame feature estimation on the object's image and extract three distinct feature points from it, for example the three vertices of the triangle; the two smart cameras then transmit the three extracted vertices and the captured original image information to the computing unit.
In other embodiments, the smart camera may instead extract feature lines from the target, for example the three edges of the triangle, with the two smart cameras each transmitting the three extracted edges and the captured original image information to the computing unit. In other cases, the smart camera may extract both feature points and feature lines, for example one edge and two vertices of the triangle, with the two smart cameras each transmitting the extracted edge and two vertices together with the captured original image information to the computing unit.
Step 103: the computing unit performs feature matching and calculation according to the original image and the feature information.
According to the acquired original image and feature information, the computing unit first performs feature matching on the whole image and then on sub-regions. If the matching succeeds, a distributed computation is performed according to the matching result; if the matching fails, the computing unit feeds the failed feature information back to the smart camera, which reacquires the original image of the target, extracts its feature information, and transmits it to the computing unit for a new matching calculation.
In a specific embodiment, the target is a triangular object. The computing unit performs feature matching according to the three vertices acquired from each of the two smart cameras and the captured original image information, and performs a distributed computation after the matching succeeds. If one of the three extracted vertices fails to match, the computing unit feeds the information of the failed vertex back to the smart cameras; the smart cameras re-extract feature points, for example taking the midpoint of one side of the triangle as the feature point, and each transmits the extracted midpoint and image information to the computing unit, which performs the matching calculation again.
In other embodiments, when extracted feature points fail to match, feature lines may be re-extracted for matching instead. For example, when one of the three extracted vertices of the triangle fails to match, the computing unit feeds the information of the failed vertex back to the smart cameras, which re-extract a feature line such as one edge of the triangle; the two smart cameras each transmit the extracted edge and image information to the computing unit, which performs the matching calculation again.
In extracting feature information, the smart camera increases the proportion of feature types with a high matching success rate and reduces the proportion of feature types with a low matching success rate. For example, the two smart cameras each extract 100 feature points from the target image and transmit them to the computing unit for matching; only 30 feature points match successfully, i.e. the matching success rate is 30%, and the computing unit feeds the information of the 70 failed feature points back to the smart cameras. The two smart cameras re-extract 70 feature lines and transmit them to the computing unit for matching; all 70 feature lines match successfully, i.e. the matching success rate is 100%. In the continued shooting of the triangular object, the smart camera will therefore increase the proportion of feature lines and reduce the proportion of feature points; for example, when again extracting 100 features, it extracts 90 feature lines and only 10 feature points. Such more targeted extraction of feature information increases the matching success rate.
Step 104: the smart terminal forms a 3D image of the target according to the feature matching and calculation results.
When the computing unit's matching calculation according to the feature information extracted by the smart camera and the original image succeeds, the smart terminal forms a 3D image of the target according to the feature matching and calculation results; that is, the 3D image of the target can be observed through the smart camera.
As can be seen from the above, the present invention externally connects a computing unit to the smart terminal, lets the smart terminal automatically track the target and extract feature information, and lets the computing unit receive the feature information and original image output by the smart terminal, perform the feature matching calculation, and feed the matching result back to the smart terminal, which performs imaging or further feature extraction according to the feedback. The smart terminal thus only needs to run simple computing tasks while complex computing tasks are handled by an independent computing unit, which optimizes the performance of the smart terminal and expands its application capability, enabling it to handle complex, high-precision 3D imaging tasks and improving the user experience.
Referring to FIG. 2, FIG. 2 is a schematic structural diagram of a 3D imaging system of a smart terminal provided by the present invention. The 3D imaging system includes a first smart camera 201, a second smart camera 202, and a computing unit 203, where the computing unit 203 is signal-connected to the first smart camera 201 and the second smart camera 202, and the computing unit 203 is a cloud server or an arithmetic unit with logic gate circuits, without specific limitation. The first smart camera 201 and the second smart camera 202 are two cameras with identical model parameters or with somewhat different model parameters, depending on the actual situation. For example, the first smart camera 201 and the second smart camera 202 may be set to receive somewhat different ranges of light wavelengths, so that the images obtained by the two smart cameras can be fused to obtain more accurate image information of the target, though this also increases the difficulty of production. The two smart cameras are monocular, binocular, or multi-view cameras, without specific limitation. The first smart camera 201 and the second smart camera 202 are both used to track the target, acquire original images of the target, and extract feature information from the images, where the feature information includes feature points and feature lines. Calibration must be performed before tracking, through which the internal and external parameters of the smart cameras are obtained; tracking means that the smart camera continuously focuses according to the light field information from the target and continuously acquires the target's image information. The first smart camera 201 and the second smart camera 202 each transmit the extracted feature information and original image information to the computing unit 203, which performs feature matching and distributed computation according to the acquired feature information and original images, matching first on the whole image and then on sub-regions. After the matching computation of the computing unit 203 succeeds, the smart cameras form a 3D image of the target according to the computation results of the computing unit 203.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of the smart terminal in a 3D imaging system of a smart terminal provided by the present invention. The smart terminal 301 includes a communication circuit 302, a memory 303, and a processor 304; the communication circuit 302 is used to acquire and transmit instructions, the memory 303 is used to store the program executed by the processor 304 and the intermediate data generated while the program runs, and the processor 304 implements the above 3D imaging method when it executes the program of the smart terminal 301.
In a specific implementation scenario, the smart terminal 301 is a smart camera, and the smart camera includes a monocular, binocular, or multi-view camera, without specific limitation. When the communication circuit 302 acquires an instruction to track the target, the smart terminal 301 tracks the target, acquires the original image of the target, extracts the feature information in the image, and stores the original image and feature information in the memory 303, where the feature information includes feature points and feature lines. Before the smart terminal 301 tracks the target, it must first be calibrated; its internal and external parameters are obtained by calibration and stored in the memory 303. Tracking means that the smart terminal 301 continuously focuses according to the light field information from the target and continuously acquires the target's image information. After the smart terminal 301 transmits the extracted feature information and original image information to the computing unit, the computing unit performs feature matching and distributed computation according to the acquired feature information and original images, matching first on the whole image and then on sub-regions. After the matching computation succeeds, the communication circuit 302 receives an instruction that the matching computation succeeded, and the processor 304 forms a 3D image of the target according to the result of the matching computation.
Differing from the prior art, the present invention externally connects a computing unit to the smart terminal, lets the smart terminal automatically track the target and extract feature information, and lets the computing unit receive the feature information and original image output by the smart terminal, perform the feature matching calculation, and feed the matching result back to the smart terminal, which performs imaging or further feature extraction according to the feedback. The smart terminal thus only needs to run simple computing tasks while complex computing tasks are handled by an independent computing unit, which optimizes the performance of the smart terminal and expands its application capability, enabling it to handle complex, high-precision 3D imaging tasks and improving the user experience.
Obviously, those skilled in the art may make various modifications and variations to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.

Claims (20)

  1. A 3D imaging method of a smart terminal, characterized in that the method includes the following steps:
    the smart terminal acquires an original image of a target;
    the smart terminal extracts feature information from the original image, and transmits the original image and the feature information to a computing unit;
    the computing unit performs feature matching and calculation according to the original image and the feature information;
    the smart terminal forms a 3D image of the target according to the feature matching and calculation results.
  2. The 3D imaging method of a smart terminal according to claim 1, characterized in that the smart terminal acquiring an original image of the target specifically includes: the smart terminal tracks the target and acquires the original image of the target.
  3. The 3D imaging method of a smart terminal according to claim 1, characterized in that, before the smart terminal acquires the original image of the target, the method further includes: the smart terminal acquires calibration parameter information.
  4. The 3D imaging method of a smart terminal according to claim 1, characterized in that the step of the smart terminal extracting the feature information from the original image specifically includes: the smart terminal performs feature detection on the target image and feature estimation across the preceding and following frames, and extracts the feature information from the original image.
  5. The 3D imaging method of a smart terminal according to claim 1, characterized in that the step of the smart terminal extracting the feature information from the original image specifically includes: the smart terminal extracts feature points or feature lines from the original image.
  6. The 3D imaging method of a smart terminal according to claim 1, characterized in that the step of the computing unit performing feature matching according to the original image and the feature information specifically includes: according to the original image and the feature information, the computing unit first performs feature matching on the whole image and then performs feature matching on sub-regions.
  7. The 3D imaging method of a smart terminal according to claim 1, characterized in that the step of the computing unit performing feature matching and calculation according to the original image and the feature information specifically includes: the computing unit performs feature matching and calculation according to the original image and the feature information; if the feature information fails to match, the computing unit feeds the failed feature information back to the smart terminal, and the smart terminal reacquires the original image of the target, extracts the feature information from the image, and transmits the feature information to the computing unit.
  8. The 3D imaging method of a smart terminal according to claim 1, characterized in that the steps of the computing unit performing feature matching and calculation according to the original image and the feature information, and the smart terminal forming a 3D image of the target according to the feature matching and calculation results, specifically include: the computing unit performs feature matching and calculation according to the original image and the feature information; if the feature information matches successfully, the computing unit performs a distributed computation according to the successfully matched feature information, and the smart terminal forms a 3D image of the target according to the computation results.
  9. The 3D imaging method of a smart terminal according to claim 1, characterized in that the smart terminal is a smart camera.
  10. The 3D imaging method of a smart terminal according to claim 9, characterized in that the smart camera includes a monocular camera, a binocular camera, or a multi-view camera.
  11. The 3D imaging method of a smart terminal according to claim 1, characterized in that the computing unit is an arithmetic unit with logic gate circuits.
  12. The 3D imaging method of a smart terminal according to claim 1, characterized in that the computing unit is a cloud server.
  13. A smart terminal, characterized in that the smart terminal includes: a communication circuit, a memory, and a processor;
    the communication circuit is used to acquire and transmit instructions;
    the memory stores the program executed by the processor and the intermediate data generated when the program is executed;
    when the processor executes the smart terminal's program, the 3D imaging method according to any one of claims 1-12 is implemented.
  14. The smart terminal according to claim 13, characterized in that the smart terminal is a smart camera.
  15. The smart terminal according to claim 14, characterized in that the smart camera includes a monocular camera, a binocular camera, or a multi-view camera.
  16. A 3D imaging system, characterized in that the system includes a smart terminal and a computing unit, where the computing unit is signal-connected to the smart terminal, and the 3D imaging system can implement the method according to any one of claims 1-12.
  17. The 3D imaging system according to claim 16, characterized in that the smart terminal is a smart camera.
  18. The 3D imaging system according to claim 17, characterized in that the smart camera includes a monocular camera, a binocular camera, or a multi-view camera.
  19. The 3D imaging system according to claim 16, characterized in that the computing unit is an arithmetic unit with logic gate circuits.
  20. The 3D imaging system according to claim 16, characterized in that the computing unit is a cloud server.
PCT/CN2017/120237 2017-12-29 2017-12-29 Smart terminal, 3D imaging method thereof, and 3D imaging system WO2019127508A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/120237 WO2019127508A1 (zh) 2017-12-29 2017-12-29 Smart terminal, 3D imaging method thereof, and 3D imaging system
CN201780035378.7A CN109328459B (zh) 2017-12-29 2017-12-29 Smart terminal, 3D imaging method thereof, and 3D imaging system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/120237 WO2019127508A1 (zh) 2017-12-29 2017-12-29 Smart terminal, 3D imaging method thereof, and 3D imaging system

Publications (1)

Publication Number Publication Date
WO2019127508A1 true WO2019127508A1 (zh) 2019-07-04

Family

ID=65244687

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/120237 WO2019127508A1 (zh) 2017-12-29 2017-12-29 Smart terminal, 3D imaging method thereof, and 3D imaging system

Country Status (2)

Country Link
CN (1) CN109328459B (zh)
WO (1) WO2019127508A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113327291A (zh) * 2020-03-16 2021-08-31 天目爱视(北京)科技有限公司 Calibration method for 3D modeling of a distant target based on continuous shooting

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008111746A1 (en) * 2007-03-09 2008-09-18 Ebroadcast Technologies Co., Ltd. System and method for realizing virtual studio through network
CN105427369A (zh) * 2015-11-25 2016-03-23 努比亚技术有限公司 Mobile terminal and method for generating a three-dimensional image thereof
CN105913474A (zh) * 2016-04-05 2016-08-31 清华大学深圳研究生院 Binocular 3D reconstruction device, 3D reconstruction method thereof, and an Android application
CN106033621A (zh) * 2015-03-17 2016-10-19 阿里巴巴集团控股有限公司 Three-dimensional modeling method and device
KR20170013539A (ko) * 2015-07-28 2017-02-07 주식회사 에이알미디어웍스 Augmented-reality-based game system and method
CN106910241A (zh) * 2017-01-20 2017-06-30 徐迪 System and method for reconstructing a three-dimensional human head based on mobile-phone photography and a cloud server
CN107167077A (zh) * 2017-07-07 2017-09-15 京东方科技集团股份有限公司 Stereo vision measurement system and stereo vision measurement method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102857791B (zh) * 2012-09-14 2015-07-08 武汉善观科技有限公司 Method for processing and displaying image data in a PACS system with a mobile terminal
CN106331680B (zh) * 2016-08-10 2018-05-29 清华大学深圳研究生院 Adaptive cloud-offloading method and system for 2D-to-3D conversion on a mobile phone


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113327291A (zh) * 2020-03-16 2021-08-31 天目爱视(北京)科技有限公司 Calibration method for 3D modeling of a distant target based on continuous shooting
CN113327291B (zh) * 2020-03-16 2024-03-22 天目爱视(北京)科技有限公司 Calibration method for 3D modeling of a distant target based on continuous shooting

Also Published As

Publication number Publication date
CN109328459A (zh) 2019-02-12
CN109328459B (zh) 2021-02-26

Similar Documents

Publication Publication Date Title
US10455141B2 (en) Auto-focus method and apparatus and electronic device
EP2509070B1 (en) Apparatus and method for determining relevance of input speech
CN103475886A (zh) Stereoscopic depth image creation system and method
CN112153306B (zh) Image acquisition system, method, apparatus, electronic device, and wearable device
WO2019137081A1 (zh) Image processing method, image processing apparatus, and photographing device
CN104834901A (zh) Face detection method, apparatus, and system based on binocular stereo vision
CN113129241B (zh) Image processing method and apparatus, computer-readable medium, and electronic device
CN112509125A (zh) Three-dimensional reconstruction method based on artificial markers and stereo vision
WO2023273093A1 (zh) Method and apparatus for acquiring a three-dimensional human body model, smart terminal, and storage medium
US9454226B2 (en) Apparatus and method for tracking gaze of glasses wearer
CN115830675B (zh) Gaze-point tracking method and apparatus, smart glasses, and storage medium
CN108446018A (zh) Augmented-reality eye-movement interaction system based on binocular vision technology
WO2024094227A1 (zh) Gesture pose estimation method based on Kalman filtering and deep learning
WO2023142352A1 (zh) Depth image acquisition method and apparatus, terminal, imaging system, and medium
CN108694713A (zh) Stereo-vision-based method for recognizing and measuring local ring segments of a satellite-rocket docking ring
CN110909571B (zh) High-precision facial recognition spatial positioning method
WO2019127508A1 (zh) Smart terminal, 3D imaging method thereof, and 3D imaging system
CN111246116B (zh) Method for intelligent framing display on a screen, and mobile terminal
JP2022133133A (ja) Generation device, generation method, system, and program
US10922825B2 (en) Image data processing method and electronic device
WO2020134229A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111385481A (zh) Image processing method and apparatus, electronic device, and storage medium
WO2022183372A1 (zh) Control method, control apparatus, and terminal device
CN107534736A (zh) Image registration method and apparatus for a terminal, and terminal
CN110189267A (zh) Real-time positioning method and apparatus based on machine vision

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936277

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17936277

Country of ref document: EP

Kind code of ref document: A1