WO2019113912A1 - Structured light-based three-dimensional image reconstruction method and device, and storage medium - Google Patents

Structured light-based three-dimensional image reconstruction method and device, and storage medium Download PDF

Info

Publication number
WO2019113912A1
Authority
WO
WIPO (PCT)
Prior art keywords
coding
coded
image
feature points
encoded
Prior art date
Application number
PCT/CN2017/116321
Other languages
French (fr)
Chinese (zh)
Inventor
宋展
曾海
Original Assignee
中国科学院深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Priority to PCT/CN2017/116321 priority Critical patent/WO2019113912A1/en
Publication of WO2019113912A1 publication Critical patent/WO2019113912A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • the invention belongs to the technical field of image processing, and in particular relates to a method, a device and a storage medium for reconstructing a three-dimensional image based on structured light.
  • a structured-light three-dimensional reconstruction system solves the matching problem of stereo vision by projecting an optical pattern containing specific coded information onto the surface of an object, obtaining correspondences by decoding, and then recovering the three-dimensional spatial coordinates at the projected points by the optical triangulation principle.
  • depending on the projected pattern, structured light can be divided into point structured light, line structured light, multi-line structured light, and surface structured light.
  • the point, line, and multi-line structured-light methods are relatively mature and simple, but each reconstruction requires multiple images to be captured, so their efficiency is low and their measurement range is small.
  • the surface structured-light method uses a projector to project one or more coding patterns onto the surface of a three-dimensional object, photographs the coded pattern on the object surface with a camera, performs image matching using the characteristics of the projected coded structured light, and finally calculates the point-cloud coordinates of the object surface by the triangulation principle.
  • the coding methods adopted by existing structured-light three-dimensional reconstruction techniques can be roughly divided into temporal coding and spatial coding; temporal coding encodes according to the time sequence of the projected images and then projects the coded images onto the object surface in chronological order, offering high measurement accuracy and resolution but a slow measurement speed, so it suits the acquisition of three-dimensional information of static targets and scenes, whereas spatial coding needs to project only a single coding pattern and is fast, so it suits dynamic targets and scenes.
  • spatial coding aims to reconstruct the surface of an object in three dimensions by projecting a single encoded image; its coding information is generated from spatial coding features or from their different arrangements and combinations.
  • both encoding and decoding are completed within a single image, which gives spatial coding a real-time advantage.
  • existing spatial-coding structured light is often encoded with color or grayscale information, but the decoding of such methods is easily affected by the surface color of the object and by color-channel crosstalk, so it is not robust. Judging from current research in this field, spatial coding based on black-and-white geometric features has become the trend, but there is a contradiction between the coding density of such techniques and the size of the coding window and the number of coding element types used: to obtain a high-density structured-light coding pattern, one can only increase the number of coding element types or enlarge the coding window, and both measures significantly increase the difficulty of decoding, reducing the decoding success rate.
  • the present invention provides a three-dimensional image reconstruction method based on structured light, the method comprising the steps of:
  • when a three-dimensional image reconstruction request input by a user is received, extracting the main coding feature points of the input object-coded image, where the coded element pattern of the object-coded image has rotational symmetry and includes a preset number of auxiliary coding feature points;
  • performing three-dimensional image reconstruction on the object according to pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by matching, to obtain a three-dimensional image of the object.
  • the present invention provides a three-dimensional image reconstruction apparatus based on structured light, the apparatus comprising:
  • a feature point extracting unit, configured to extract, upon receiving a three-dimensional image reconstruction request of an object input by a user, the main coding feature points of the input object-coded image, where the coding element graphic of the object-coded image has rotational symmetry and contains a preset number of auxiliary coding feature points;
  • an element image extracting unit, configured to construct a topological network of the main coding feature points according to the main coding feature points of the object-coded image, and to extract all coding element images included in the object-coded image according to the topological network;
  • a feature point calculation unit, configured to locate the initial positions of the graphic feature points in each coding element image using a preset corner detection algorithm, and to calculate the auxiliary coding feature points of the coding element image according to the initial positions and the grayscale values of the coding element image;
  • an image recognition unit, configured to identify all coding element images using a pre-trained deep learning network, according to the main coding feature points and the auxiliary coding feature points of all coding element images;
  • a decoding unit, configured to match, according to a preset epipolar coding strategy, the coding information corresponding to each identified coding element image with the coding information corresponding to a pre-stored coding pattern, so as to decode the corresponding main coding feature points and auxiliary coding feature points; and
  • an image reconstruction unit, configured to perform three-dimensional image reconstruction on the object according to pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by matching, to obtain a three-dimensional image of the object.
  • the present invention also provides a computer readable storage medium storing a computer program that, when executed by a processor, implements the steps of the method as described above.
  • when the present invention reconstructs a three-dimensional image of an object, the main coding feature points and auxiliary coding feature points of the object-coded image are extracted, all coding element images are identified using a pre-trained deep learning network, and, according to a preset epipolar coding strategy, the coding information corresponding to each identified coding element image is matched with the coding information corresponding to a pre-stored coding pattern so as to decode the corresponding main and auxiliary coding feature points; finally, three-dimensional image reconstruction is performed on the object according to the pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by matching, which improves the success rate of image decoding and thereby the quality of three-dimensional image reconstruction.
  • FIG. 1 is a flowchart showing an implementation of a method for reconstructing a three-dimensional image based on structured light according to Embodiment 1 of the present invention
  • FIG. 2 is a schematic diagram of the eight coding element images provided by Embodiment 1 of the present invention;
  • FIG. 3 is a schematic diagram of the main coding feature points and auxiliary coding feature points of a checkerboard coding element image according to Embodiment 1 of the present invention;
  • FIG. 4 is a coded image for projection provided by Embodiment 1 of the present invention;
  • FIG. 5 is a schematic structural diagram of a three-dimensional image reconstruction apparatus based on structured light according to Embodiment 2 of the present invention.
  • FIG. 6 is a schematic structural diagram of a three-dimensional image reconstruction apparatus based on structured light according to Embodiment 3 of the present invention.
  • Embodiment 1:
  • FIG. 1 is a flowchart showing an implementation process of a structured light-based three-dimensional image reconstruction method according to Embodiment 1 of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown, which are described in detail as follows:
  • in step S101, when the three-dimensional image reconstruction request input by the user is received, the main coding feature points of the input object-coded image are extracted.
  • the object-coded image is obtained as follows:
  • the coding element graphic is a graphic having rotational symmetry and containing a preset number of feature points, for example an "L"-shaped coding element graphic or a "Δ"-shaped coding element graphic.
  • as an example, the embodiment of the present invention is described here using the "L"-shaped coding element graphic: rotating it by 90°, 180°, and 270° yields four codeword graphics.
  • combining these four codeword graphics with black and white background blocks yields eight coding element images, as shown in FIG. 2.
  • the eight coding element images shown in FIG. 2 represent eight codewords (1, -1, 2, -2, 3, -3, 4, -4), where a positive codeword denotes a coding element image on a white background and a negative codeword denotes a coding element image on a black background.
  • according to the epipolar coding strategy, the entire encoding process is performed only along the epipolar direction; because one coding dimension is eliminated, a smaller coding window can be obtained with the same number of coding elements, or a coding window of the same size can be obtained with fewer coding elements, and by defining the number of epipolar lines the overall coding capacity can be adjusted to meet the projection needs of different resolutions.
  • a black-and-white checkerboard is used as the basic coding frame; by filling in different coding element images, the checkerboard corner points are uniquely coded along the epipolar direction, and these checkerboard corner points are defined as the main coding feature points.
  • because the coding element graphic itself has obvious geometric features, such as the "L" or "Δ" shape, at least three auxiliary coding feature points can additionally be defined in each coding element graphic, as shown in FIG. 3, in which the four corner points of the central square of the checkerboard are the main coding feature points and the three corner points of the "L" shape inside the checkerboard cell are the auxiliary coding feature points.
  • through this definition of mixed coding feature points, the number of coding feature points can be increased by a factor of three, which greatly increases the density of coding feature points.
  • as an example, a coded image for projection as shown in FIG. 4 can be obtained with the above epipolar coding strategy.
  • when projecting the coded image onto an object, the projection is preferably realized with the diffractive optical element of the three-dimensional image reconstruction system, which comprises a projection device (consisting of a diffractive optical element and a laser) and a camera.
  • specifically, the coded image for projection is generated, according to the principle of laser diffraction, by the laser and the diffractive optical element of the three-dimensional image reconstruction system, projected onto the object surface, and then captured by the camera of the system to obtain the object-coded image.
  • when extracting the main coding feature points of the input object-coded image, template convolution is preferably applied to the pixels of the object-coded image, candidate main coding feature points are obtained from the convolution result, the degree of symmetry of each candidate is computed, and candidates whose degree of symmetry is below a preset threshold are culled to obtain the main coding feature points of the object-coded image.
  • further, when the black-and-white checkerboard is used as the basic coding frame, extracting the main coding feature points of the input object-coded image amounts to detecting the checkerboard corner points: for each pixel, a "+"-shaped template is used to compute a convolution value over a neighborhood whose size is generally 2/3 of the size of one coding element image in the object-coded image, where f(x, y) denotes the pixel value of the object-coded image at point (x, y) and N denotes the template size; a pixel whose convolution value is a local maximum within a small surrounding region is taken as a candidate main coding feature point.
  • erroneous candidates are then culled using rotational symmetry, and the true main coding feature points are selected from the candidate main coding feature points.
  • in step S102, a topological network of the main coding feature points is constructed from the main coding feature points of the object-coded image, and all coding element images included in the object-coded image are extracted according to this topological network.
  • in step S103, the initial positions of the graphic feature points in each coding element image are located using a preset corner detection algorithm, and the auxiliary coding feature points of the coding element image are calculated from these initial positions and the grayscale values of the coding element image.
  • in step S104, all coding element images are identified using a pre-trained deep learning network, based on the main coding feature points and the auxiliary coding feature points of all coding element images.
  • the initially established deep learning network is trained in advance: a preset number of object-coded image samples with different colors, textures, illumination, and scenes are first acquired, and each sample is used to train the pre-established deep learning network to obtain a trained network.
  • specifically, after the object-coded image samples are acquired, the main coding feature points of each sample are extracted, a topological network of the main coding feature points is constructed, and all coding element images contained in the sample are extracted according to this topological network, producing a large number of coding element image samples; Gaussian blurring, occlusion, and similar operations are applied to these samples to further expand the sample set, and the expanded coding element image samples are then used to train the deep learning network.
  • in step S105, according to the preset epipolar coding strategy, the coding information corresponding to each identified coding element image is matched with the coding information corresponding to the pre-stored coding pattern, so as to decode the corresponding main coding feature points and auxiliary coding feature points.
  • the epipolar constraint of the epipolar coding strategy concerns the projections of a spatial point onto two image planes: if the mapping of a spatial point on the left image plane lies on a left epipolar line of the left image plane, then its mapping on the right image plane lies on the corresponding right epipolar line of the right image plane, and vice versa.
  • using this constraint, the coding information corresponding to each identified coding element image is matched with the coding information corresponding to the pre-stored coding pattern to decode the corresponding main and auxiliary coding feature points; further, the decoding can be error-corrected using constraints such as continuity and smoothness to improve decoding accuracy.
  • in step S106, three-dimensional image reconstruction is performed on the object according to the pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by matching, to obtain a three-dimensional image of the object.
  • the camera and the projection device of the three-dimensional image reconstruction system are calibrated using Zhang Zhengyou's camera calibration method to obtain the calibration parameters of the camera and of the projection device; the positional relationship parameters between the camera and the projection device are then computed from these calibration parameters, yielding the calibration parameters of the three-dimensional image reconstruction system.
  • afterwards, the object is reconstructed in three dimensions from the pre-acquired system calibration parameters and the decoding information obtained by matching, to obtain a three-dimensional image of the object.
  • in this embodiment, a coding element graphic having rotational symmetry and containing a preset number of feature points is combined with black and white backgrounds to generate the corresponding coding element images, and, according to the epipolar coding strategy, the coding element images are used to encode along the epipolar direction within a preset coding window to obtain a coded image for projection at a preset resolution; a smaller coding window is thus achieved with fewer coding element types, which greatly reduces the subsequent decoding difficulty and improves the decoding success rate.
  • correspondingly, during decoding, the main and auxiliary coding feature points of the object-coded image are extracted, all coding element images are identified using the pre-trained deep learning network, and the coding information corresponding to each identified coding element image is matched with the coding information corresponding to the pre-stored coding pattern according to the preset epipolar coding strategy so as to decode the corresponding main and auxiliary coding feature points; finally, three-dimensional reconstruction is performed on the object from the pre-acquired system calibration parameters and the matched decoding information, which improves the success rate of image decoding and thereby the quality of the three-dimensional reconstruction.
  • Embodiment 2:
  • FIG. 5 shows a structure of a structured light-based three-dimensional image reconstruction apparatus according to Embodiment 2 of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown, including:
  • the feature point extracting unit 51 is configured to extract, upon receiving the object three-dimensional image reconstruction request input by the user, the main coding feature points of the input object-coded image, where the coding element graphic of the object-coded image has rotational symmetry and contains a preset number of auxiliary coding feature points;
  • the element image extracting unit 52 is configured to construct a topological network of the main coding feature points according to the main coding feature points of the object-coded image, and to extract all coding element images included in the object-coded image according to the topological network;
  • the feature point calculation unit 53 is configured to locate the initial positions of the graphic feature points in each coding element image using a preset corner detection algorithm, and to calculate the auxiliary coding feature points of the coding element image from these initial positions and the grayscale values of the coding element image;
  • the image recognition unit 54 is configured to identify all coding element images using a pre-trained deep learning network, according to the main coding feature points and the auxiliary coding feature points of all coding element images;
  • the decoding unit 55 is configured to match, according to the preset epipolar coding strategy, the coding information corresponding to each identified coding element image with the coding information corresponding to the pre-stored coding pattern, so as to decode the corresponding main coding feature points and auxiliary coding feature points; and
  • the image reconstruction unit 56 is configured to perform three-dimensional image reconstruction on the object according to the pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by matching, to obtain a three-dimensional image of the object.
  • each unit of the three-dimensional image reconstruction apparatus may be implemented by corresponding hardware or software; each unit may be an independent software or hardware unit, or the units may be integrated into a single software or hardware unit, which does not limit the present invention. For the specific implementation of each unit, reference may be made to the description of the corresponding steps in Embodiment 1, which is not repeated here.
  • Embodiment 3:
  • FIG. 6 shows a structure of a structured light-based three-dimensional image reconstruction apparatus according to Embodiment 3 of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown, including:
  • the first parameter calibration unit 601 is configured to calibrate the camera and the projection device of the three-dimensional image reconstruction system using Zhang Zhengyou's camera calibration method, to obtain the calibration parameters of the camera and the projection device;
  • the second parameter calibration unit 602 is configured to calculate the positional relationship parameters between the camera and the projection device according to the acquired calibration parameters of the camera and the projection device;
  • the element rotation unit 603 is configured to rotate a coding element graphic having rotational symmetry and containing a preset number of feature points, to obtain a plurality of codeword graphics of the coding element;
  • the element image generating unit 604 is configured to combine the plurality of codeword graphics with black and white backgrounds to generate the corresponding coding element images;
  • the image encoding unit 605 is configured to encode, according to the epipolar coding strategy, the coding element images along the epipolar direction within a preset coding window, to obtain a coded image for projection at a preset resolution;
  • the coded image projecting unit 606 is configured to project the coded image for projection onto an object;
  • the feature point extracting unit 607 is configured to extract, upon receiving the object three-dimensional image reconstruction request input by the user, the main coding feature points of the input object-coded image, where the coding element graphic of the object-coded image has rotational symmetry and contains a preset number of auxiliary coding feature points;
  • the element image extracting unit 608 is configured to construct a topological network of the main coding feature points according to the main coding feature points of the object-coded image, and to extract all coding element images included in the object-coded image according to the topological network;
  • the feature point calculation unit 609 is configured to locate the initial positions of the graphic feature points in each coding element image using a preset corner detection algorithm, and to calculate the auxiliary coding feature points of the coding element image from these initial positions and the grayscale values of the coding element image;
  • the image recognition unit 610 is configured to identify all coding element images using a pre-trained deep learning network, according to the main coding feature points and the auxiliary coding feature points of all coding element images;
  • the decoding unit 611 is configured to match, according to the preset epipolar coding strategy, the coding information corresponding to each identified coding element image with the coding information corresponding to the pre-stored coding pattern, so as to decode the corresponding main coding feature points and auxiliary coding feature points; and
  • the image reconstruction unit 612 is configured to perform three-dimensional image reconstruction on the object according to the pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by matching, to obtain a three-dimensional image of the object.
  • the feature point extracting unit 607 includes:
  • a candidate feature point acquiring unit, configured to perform template convolution on the pixels of the object-coded image and to obtain candidate main coding feature points of the object-coded image according to the template convolution result; and
  • the feature point culling unit 6072, configured to calculate the degree of symmetry of each candidate main coding feature point and to cull the candidates whose degree of symmetry is below a preset threshold, to obtain the main coding feature points of the object-coded image.
  • each unit of the three-dimensional image reconstruction apparatus may be implemented by corresponding hardware or software; each unit may be an independent software or hardware unit, or the units may be integrated into a single software or hardware unit, which does not limit the present invention. For the specific implementation of each unit, reference may be made to the description of the corresponding steps in Embodiment 1, which is not repeated here.
  • Embodiment 4:
  • in the embodiment of the present invention, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the foregoing method embodiment, for example steps S101 to S106 shown in FIG. 1.
  • alternatively, the computer program, when executed by the processor, implements the functions of the units in the apparatus embodiments described above, for example the functions of units 51 to 56 shown in FIG. 5.
  • when the computer program is executed, the steps of the foregoing method embodiment are implemented: the main and auxiliary coding feature points of the object-coded image are extracted, all coding element images are identified using the pre-trained deep learning network, and the coding information corresponding to each identified coding element image is matched with the coding information corresponding to the pre-stored coding pattern according to the preset epipolar coding strategy, so as to decode the corresponding main and auxiliary coding feature points; the object is then reconstructed in three dimensions from the pre-acquired system calibration parameters and the matched decoding information, which improves the success rate of image decoding and thereby the quality of three-dimensional image reconstruction.
  • the computer-readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, or a recording medium such as a ROM/RAM, a magnetic disk, an optical disc, or a flash memory.

Abstract

A structured light-based three-dimensional image reconstruction method and device, and a storage medium, applicable to the field of image processing. The method comprises: extracting main encoding feature points of an input object-encoded image; constructing a topological network of the main encoding feature points, and extracting all encoding element images comprised in the object-encoded image according to the topological network; locating the initial positions of the graphical feature points in each encoding element image, and calculating the auxiliary encoding feature points of the encoding element image; identifying all encoding element images by means of a deep learning network according to the main encoding feature points and the auxiliary encoding feature points; matching the encoding information corresponding to each identified encoding element image with the encoding information corresponding to a pre-stored encoding pattern according to a preset epipolar encoding strategy to implement decoding; and performing three-dimensional image reconstruction on the object according to pre-acquired calibration parameters of a three-dimensional image reconstruction system and the decoding information obtained by matching, to obtain a three-dimensional image of the object.

Description

Structured light-based three-dimensional image reconstruction method, device, and storage medium
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to a structured light-based three-dimensional image reconstruction method, device, and storage medium.
Background Art
A structured-light three-dimensional reconstruction system solves the matching problem of stereo vision by projecting an optical pattern containing specific coded information onto the surface of an object, obtaining correspondences by decoding, and then recovering the three-dimensional spatial coordinates at the projected points by the optical triangulation principle. Depending on the projected pattern, structured light can be divided into point, line, multi-line, and surface structured light. The point, line, and multi-line structured-light methods are relatively mature and simple, but each reconstruction requires multiple images to be captured, so their efficiency is low and their measurement range is small. The surface structured-light method uses a projector to project one or more coding patterns onto the surface of a three-dimensional object, photographs the coded pattern on the object surface with a camera, performs image matching using the characteristics of the projected coded structured light, and finally calculates the point-cloud coordinates of the object surface by the triangulation principle.
The coding methods adopted by existing structured-light three-dimensional reconstruction techniques can be roughly divided into temporal coding and spatial coding. Temporal coding encodes according to the time sequence of the projected images and then projects the coded images onto the object surface in chronological order; it offers high measurement accuracy and resolution but a slow measurement speed, so it suits the acquisition of three-dimensional information of static targets and scenes. Spatial coding needs to project only a single coding pattern and is fast, so it suits dynamic targets and scenes. Spatial coding aims to reconstruct the surface of an object in three dimensions by projecting a single encoded image; its coding information is generated from spatial coding features or from their different arrangements and combinations, and both encoding and decoding are completed within a single image, which gives it a real-time advantage. Existing spatial-coding structured light is often encoded with color or grayscale information, but the decoding of such methods is easily affected by the surface color of the object and by color-channel crosstalk, so it is not robust. Judging from current research in this field, spatial coding based on black-and-white geometric features has become the trend, but there is a contradiction between the coding density of such techniques and the size of the coding window and the number of coding element types used: to obtain a high-density structured-light coding pattern, one can only increase the number of coding element types or enlarge the coding window, and both measures significantly increase the difficulty of decoding, reducing the decoding success rate.
Summary of the Invention
It is an object of the present invention to provide a structured light-based three-dimensional image reconstruction method, device, and storage medium, aiming to solve the problem that existing structured light-based three-dimensional image reconstruction methods have a low decoding success rate.
In one aspect, the present invention provides a structured light-based three-dimensional image reconstruction method, the method comprising the following steps:
when a three-dimensional image reconstruction request of an object input by a user is received, extracting the main coding feature points of the input object-coded image, where the coding element graphic of the object-coded image has rotational symmetry and contains a preset number of auxiliary coding feature points;
constructing a topological network of the main coding feature points according to the main coding feature points of the object-coded image, and extracting all coding element images included in the object-coded image according to the topological network;
locating the initial positions of the graphic feature points in each coding element image using a preset corner detection algorithm, and calculating the auxiliary coding feature points of the coding element image according to the initial positions and the grayscale values of the coding element image;
identifying all coding element images using a pre-trained deep learning network, according to the main coding feature points and the auxiliary coding feature points of all coding element images;
matching, according to a preset epipolar coding strategy, the coding information corresponding to each identified coding element image with the coding information corresponding to a pre-stored coding pattern, so as to decode the corresponding main coding feature points and auxiliary coding feature points; and
performing three-dimensional image reconstruction on the object according to pre-acquired calibration parameters of a three-dimensional image reconstruction system and the decoding information obtained by matching, to obtain a three-dimensional image of the object.
In another aspect, the present invention provides a structured light-based three-dimensional image reconstruction apparatus, the apparatus comprising:
a feature point extracting unit, configured to extract, upon receiving a three-dimensional image reconstruction request of an object input by a user, the main coding feature points of the input object-coded image, where the coding element graphic of the object-coded image has rotational symmetry and contains a preset number of auxiliary coding feature points;
an element image extracting unit, configured to construct a topological network of the main coding feature points according to the main coding feature points of the object-coded image, and to extract all coding element images included in the object-coded image according to the topological network;
a feature point calculation unit, configured to locate the initial positions of the graphic feature points in each coding element image using a preset corner detection algorithm, and to calculate the auxiliary coding feature points of the coding element image according to the initial positions and the grayscale values of the coding element image;
an image recognition unit, configured to identify all coding element images using a pre-trained deep learning network, according to the main coding feature points and the auxiliary coding feature points of all coding element images;
a decoding unit, configured to match, according to a preset epipolar coding strategy, the coding information corresponding to each identified coding element image with the coding information corresponding to a pre-stored coding pattern, so as to decode the corresponding main coding feature points and auxiliary coding feature points; and
an image reconstruction unit, configured to perform three-dimensional image reconstruction on the object according to pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by matching, to obtain a three-dimensional image of the object.
In another aspect, the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method described above.
When the present invention reconstructs a three-dimensional image of an object, the main coding feature points and auxiliary coding feature points of the object-coded image are extracted, all coding element images are identified using a pre-trained deep learning network, and, according to a preset epipolar coding strategy, the coding information corresponding to each identified coding element image is matched with the coding information corresponding to a pre-stored coding pattern so as to decode the corresponding main and auxiliary coding feature points; finally, three-dimensional image reconstruction is performed on the object according to the pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by matching. This improves the success rate of image decoding and thereby improves the quality of three-dimensional image reconstruction.
Brief Description of the Drawings
FIG. 1 is a flowchart of the implementation of the structured light-based three-dimensional image reconstruction method provided by Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of the eight coding element images provided by Embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of the main coding feature points and auxiliary coding feature points of a checkerboard coding element image provided by Embodiment 1 of the present invention;
FIG. 4 is a coded image for projection provided by Embodiment 1 of the present invention;
FIG. 5 is a schematic structural diagram of the structured light-based three-dimensional image reconstruction apparatus provided by Embodiment 2 of the present invention; and
FIG. 6 is a schematic structural diagram of the structured light-based three-dimensional image reconstruction apparatus provided by Embodiment 3 of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
The specific implementation of the present invention is described in detail below in conjunction with specific embodiments.
Embodiment 1:
FIG. 1 shows the implementation flow of the structured light-based three-dimensional image reconstruction method provided by Embodiment 1 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown, described in detail as follows.
In step S101, when a three-dimensional image reconstruction request of an object input by a user is received, the main coding feature points of the input object-coded image are extracted.
In the embodiment of the present invention, the object-coded image is preferably obtained as follows:
(1) a coding element graphic having rotational symmetry and containing a preset number of feature points is rotated to obtain a plurality of codeword graphics of the coding element;
(2) the plurality of codeword graphics are combined with black and white backgrounds to generate the corresponding coding element images.
In the embodiment of the present invention, the coding element graphic is a graphic having rotational symmetry and containing a preset number of feature points, for example an "L"-shaped coding element graphic or a "Δ"-shaped coding element graphic. As an example, the embodiment of the present invention is described here using the "L"-shaped coding element graphic: rotating it by 90°, 180°, and 270° yields four codeword graphics, and combining these four codeword graphics with black and white background blocks yields eight coding element images, as shown in FIG. 2. The eight coding element images shown in FIG. 2 represent eight codewords (1, -1, 2, -2, 3, -3, 4, -4), where a positive codeword denotes a coding element image on a white background and a negative codeword denotes a coding element image on a black background. In this coding scheme only one coding element graphic is needed, yet eight different coding element images can be generated through its rotation and background-color changes, which greatly reduces the difficulty of subsequent feature detection and coding feature recognition and significantly improves coding efficiency.
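As an illustrative sketch (not part of the original application; the cell size and the exact footprint of the "L" mark are assumptions), the eight coding element images could be generated programmatically from a single "L"-shaped graphic by rotation and background inversion:

```python
import numpy as np

def l_shape(size=32, arm=20, thickness=8):
    """Binary mask of an 'L'-shaped mark inside a size x size cell (assumed geometry)."""
    mark = np.zeros((size, size), dtype=np.uint8)
    top = left = (size - arm) // 2
    mark[top:top + arm, left:left + thickness] = 1               # vertical arm of the "L"
    mark[top + arm - thickness:top + arm, left:left + arm] = 1   # horizontal arm of the "L"
    return mark

def coded_elements(size=32):
    """Return the eight coding element images as a dict codeword -> image (0 = black, 255 = white)."""
    base = l_shape(size)
    elements = {}
    for k in range(4):                                            # rotations by 0°, 90°, 180°, 270°
        mark = np.rot90(base, k)
        white_bg = np.where(mark == 1, 0, 255).astype(np.uint8)   # dark mark on a white background
        elements[k + 1] = white_bg                                # positive codewords: white background
        elements[-(k + 1)] = 255 - white_bg                       # negative codewords: inverted, black background
    return elements

if __name__ == "__main__":
    print(sorted(coded_elements().keys()))                        # [-4, -3, -2, -1, 1, 2, 3, 4]
```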
(3) according to the epipolar coding strategy, the coding element images are used to encode along the epipolar direction within a preset coding window, to obtain a coded image for projection at a preset resolution.
In the embodiment of the present invention, according to the epipolar coding strategy, the entire encoding process is performed only along the epipolar direction. Because one coding dimension is eliminated, a smaller coding window can be obtained with the same number of coding elements, or a coding window of the same size can be obtained with fewer coding elements. By defining the number of epipolar lines, the overall coding capacity can be adjusted to meet the projection needs of different resolutions.
Preferably, a black-and-white checkerboard is used as the basic coding frame during encoding. By filling in different coding element images, the checkerboard corner points are uniquely coded along the epipolar direction, and these checkerboard corner points are defined as the main coding feature points. Because the coding element graphic itself has obvious geometric features, such as the "L" or "Δ" shape, at least three auxiliary coding feature points can additionally be defined in each coding element graphic, as shown in FIG. 3, in which the four corner points of the central square of the checkerboard are the main coding feature points and the three corner points of the "L" shape inside the checkerboard cell are the auxiliary coding feature points. Through this definition of mixed coding feature points, the number of coding feature points can be increased by a factor of three, which greatly increases their density. As an example, a coded image for projection as shown in FIG. 4 can be obtained with the above epipolar coding strategy.
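A minimal sketch of how such a projection pattern might be assembled is given below. It reuses coded_elements() from the previous sketch, assumes horizontal epipolar lines, and uses a placeholder pseudo-random codeword sequence per row rather than the sequence design of the application:

```python
import numpy as np

def build_projection_pattern(rows, cols, cell=32, seed=0):
    """Tile coding element images over a checkerboard base frame, one codeword per cell."""
    rng = np.random.default_rng(seed)                  # placeholder for the real codeword sequence design
    elems = coded_elements(cell)                       # from the previous sketch
    pattern = np.zeros((rows * cell, cols * cell), dtype=np.uint8)
    for r in range(rows):                              # each cell row lies along one (assumed) epipolar line
        for c in range(cols):
            sign = 1 if (r + c) % 2 == 0 else -1       # alternate background polarity: checkerboard frame
            code = sign * int(rng.integers(1, 5))      # rotation codewords 1..4
            pattern[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = elems[code]
    return pattern
```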
(4) the coded image for projection is projected onto the object, and the object is photographed by a camera to obtain the object-coded image.
In the embodiment of the present invention, when the coded image for projection is projected onto the object, the projection is preferably realized with the diffractive optical element of the three-dimensional image reconstruction system, which comprises a projection device (consisting of a diffractive optical element and a laser) and a camera. Specifically, the coded image for projection is generated, according to the principle of laser diffraction, by the laser and the diffractive optical element of the three-dimensional image reconstruction system and projected onto the object surface, and the camera of the system then captures the object-coded image.
In the embodiment of the present invention, when extracting the main coding feature points of the input object-coded image, template convolution is preferably applied to the pixels of the object-coded image, candidate main coding feature points are obtained according to the convolution result, the degree of symmetry of each candidate is computed, and candidates whose degree of symmetry is below a preset threshold are culled to obtain the main coding feature points of the object-coded image.
Further preferably, when the black-and-white checkerboard is used as the basic coding frame, extracting the main coding feature points of the input object-coded image amounts to detecting the checkerboard corner points. Therefore, for each pixel of each image, a "+"-shaped template is used to compute a convolution value over a neighborhood whose size is generally 2/3 of the size of one coding element image in the object-coded image; the convolution value of the "+"-shaped template is given by
[formula image: PCTCN2017116321-appb-000001]
where f(x, y) denotes the pixel value of the object-coded image at point (x, y) and N denotes the template size. Next, it is judged whether the convolution value is a maximum within a small region centered on the pixel; if so, the pixel is a candidate main coding feature point, that is, its "+"-template convolution value is the largest in the local region. Finally, erroneous candidates are culled using rotational symmetry, and the true main coding feature points are selected from the candidate main coding feature points.
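Because the convolution formula itself is only available as an image in this text, the sketch below uses an assumed cross-arm contrast response with the same inputs f(x, y) and N; it illustrates the candidate-selection logic rather than the application's exact detector:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def cross_template_response(img, n):
    """Corner response with a '+'-shaped template of half-size n.

    The exact convolution formula of the application is only given as an image
    (PCTCN2017116321-appb-000001), so this response, the contrast between the
    opposite arms of the cross, is an assumed stand-in.
    """
    f = img.astype(np.float64)
    h, w = f.shape
    resp = np.zeros_like(f)
    for y in range(n, h - n):
        for x in range(n, w - n):
            left, right = f[y, x - n:x].sum(), f[y, x + 1:x + n + 1].sum()
            top, bottom = f[y - n:y, x].sum(), f[y + 1:y + n + 1, x].sum()
            resp[y, x] = abs(left - right) * abs(top - bottom)   # large at checkerboard corners
    return resp

def candidate_main_points(img, n, window=5):
    """Keep pixels whose response is a local maximum in a small surrounding region."""
    resp = cross_template_response(img, n)
    local_max = maximum_filter(resp, size=window)
    ys, xs = np.where((resp == local_max) & (resp > 0))
    return list(zip(xs, ys))   # candidates; a rotational-symmetry check would then cull false ones
```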
In step S102, a topological network of the main coding feature points is constructed according to the main coding feature points of the object-coded image, and all coding element images included in the object-coded image are extracted according to this topological network.
In step S103, the initial positions of the graphic feature points in each coding element image are located using a preset corner detection algorithm, and the auxiliary coding feature points of the coding element image are calculated according to the initial positions and the grayscale values of the coding element image.
In step S104, all coding element images are identified using a pre-trained deep learning network, according to the main coding feature points and the auxiliary coding feature points of all coding element images.
In the embodiment of the present invention, the initially established deep learning network is trained in advance. Preferably, a preset number of object-coded image samples with different colors, textures, illumination, and scenes are first acquired, and each sample is used to train the pre-established deep learning network to obtain a trained network. Specifically, after the object-coded image samples are acquired, the main coding feature points of each sample are extracted, a topological network of the main coding feature points is constructed, and all coding element images contained in the sample are extracted according to this topological network, producing a large number of coding element image samples; Gaussian blurring, occlusion, and similar operations are applied to these samples to further expand the sample set, and the expanded coding element image samples are then used to train the deep learning network, yielding the trained deep learning network.
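The application does not specify the network architecture or the training framework, so the following PyTorch sketch (layer sizes, 32x32 grayscale patches, and the augmentation details are all assumptions) only illustrates the kind of codeword classifier and sample expansion described above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ElementClassifier(nn.Module):
    """Small CNN that classifies a coding element patch into one of the 8 codewords (assumed layout)."""
    def __init__(self, num_classes=8):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                              # x: (N, 1, 32, 32) grayscale patches
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)     # 32x32 -> 16x16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)     # 16x16 -> 8x8
        return self.fc(x.flatten(1))                   # logits over the 8 codeword classes

def augment(patch):
    """Expand one (1, 32, 32) patch with a blur and a simulated occlusion, as described above."""
    blurred = F.avg_pool2d(patch.unsqueeze(0), 3, stride=1, padding=1).squeeze(0)  # crude blur stand-in
    occluded = patch.clone()
    occluded[..., 8:16, 8:16] = 0                      # block out a region to simulate occlusion
    return [patch, blurred, occluded]
```

Training would then proceed with a standard cross-entropy loss over the codeword labels of the expanded sample set.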
In step S105, according to the preset epipolar coding strategy, the coding information corresponding to each identified coding element image is matched with the coding information corresponding to the pre-stored coding pattern, so as to decode the corresponding main coding feature points and auxiliary coding feature points.
In the embodiment of the present invention, the epipolar constraint of the epipolar coding strategy is used: considering the projections of a spatial point onto two image planes, if the mapping of the spatial point on the left image plane lies on a left epipolar line of the left image plane, then its mapping on the right image plane lies on the corresponding right epipolar line of the right image plane, and vice versa; this constraint relationship is called the epipolar constraint. Using it, the coding information corresponding to each identified coding element image is matched with the coding information corresponding to the pre-stored coding pattern to decode the corresponding main and auxiliary coding feature points. Further, the decoding can be error-corrected according to constraints such as continuity and smoothness to improve decoding accuracy.
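A minimal sketch of the per-epipolar-line matching step, assuming the recognized codewords are available as integer sequences and that a fixed-size coding window is unique along each epipolar line (the window size used here is an assumption):

```python
def match_codewords(observed, reference, window=4):
    """Match each window of codewords observed along one epipolar line of the camera image
    against the stored codeword sequence of the same epipolar line in the projected pattern."""
    correspondences = {}
    for i in range(len(observed) - window + 1):
        key = tuple(observed[i:i + window])
        for j in range(len(reference) - window + 1):
            if tuple(reference[j:j + window]) == key:   # window codes are unique along one epipolar line
                correspondences[i] = j                  # camera index i <-> projector index j
                break
    return correspondences
```

Continuity and smoothness checks on the resulting correspondences could then provide the error correction mentioned above.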
In step S106, three-dimensional image reconstruction is performed on the object according to the pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by matching, to obtain a three-dimensional image of the object.
In the embodiment of the present invention, the camera and the projection device of the three-dimensional image reconstruction system are calibrated using Zhang Zhengyou's camera calibration method to obtain the calibration parameters of the camera and of the projection device; the positional relationship parameters between the camera and the projection device are then computed from the acquired calibration parameters, yielding the calibration parameters of the three-dimensional image reconstruction system. Afterwards, three-dimensional image reconstruction is performed on the object according to the pre-acquired system calibration parameters and the decoding information obtained by matching, to obtain a three-dimensional image of the object.
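As a hedged sketch of this calibration-and-triangulation step, the OpenCV implementation of Zhang's method could be used as below; treating the projector as an inverse camera and the exact input formats are assumptions not stated in the application:

```python
import cv2
import numpy as np

def calibrate_and_triangulate(obj_pts, cam_pts, proj_pts, cam_size, proj_size, cam_uv, proj_uv):
    """Calibrate camera and projector with Zhang's method (cv2.calibrateCamera), estimate their
    relative pose, then triangulate decoded correspondences (cam_uv, proj_uv: 2xN float arrays)."""
    _, K_c, d_c, _, _ = cv2.calibrateCamera(obj_pts, cam_pts, cam_size, None, None)
    _, K_p, d_p, _, _ = cv2.calibrateCamera(obj_pts, proj_pts, proj_size, None, None)
    # Positional relationship (R, T) between camera and projector from a joint calibration.
    _, K_c, d_c, K_p, d_p, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, cam_pts, proj_pts, K_c, d_c, K_p, d_p, cam_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    P_c = K_c @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera projection matrix
    P_p = K_p @ np.hstack([R, T.reshape(3, 1)])            # projector projection matrix
    pts4 = cv2.triangulatePoints(P_c, P_p, cam_uv, proj_uv)
    return (pts4[:3] / pts4[3]).T                          # N x 3 point cloud of the object surface
```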
In the embodiment of the present invention, a coding element graphic having rotational symmetry and containing a preset number of feature points is combined with black and white backgrounds to generate the corresponding coding element images, and, according to the epipolar coding strategy, the coding element images are used to encode along the epipolar direction within a preset coding window to obtain a coded image for projection at a preset resolution; a smaller coding window is thus achieved with fewer coding element types, which greatly reduces the subsequent decoding difficulty and improves the decoding success rate. Correspondingly, during decoding, the main and auxiliary coding feature points of the object-coded image are extracted, all coding element images are identified using the pre-trained deep learning network, and, according to the preset epipolar coding strategy, the coding information corresponding to each identified coding element image is matched with the coding information corresponding to the pre-stored coding pattern so as to decode the corresponding main and auxiliary coding feature points; finally, three-dimensional image reconstruction is performed on the object according to the pre-acquired system calibration parameters and the matched decoding information, which improves the success rate of image decoding and thereby the quality of three-dimensional image reconstruction.
Embodiment 2:
FIG. 5 shows the structure of a structured light-based three-dimensional image reconstruction apparatus according to Embodiment 2 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown, including:
A feature point extraction unit 51, configured to, when a three-dimensional image reconstruction request for an object input by a user is received, extract the primary coding feature points of the input object coded image, where the coding element graphics of the object coded image have rotational symmetry and contain a preset number of auxiliary coding feature points;
an element image extraction unit 52, configured to construct a topological network of the primary coding feature points according to the primary coding feature points of the object coded image, and extract all coding element images included in the object coded image according to the topological network;
a feature point calculation unit 53, configured to locate the initial positions of the graphic feature points in each coding element image using a preset corner detection algorithm, and calculate the auxiliary coding feature points of the coding element image according to the initial positions and the gray values of the coding element image (a corner-refinement sketch is given after this unit list);
an image recognition unit 54, configured to recognize all coding element images using a pre-trained deep learning network according to the primary coding feature points and the auxiliary coding feature points of all coding element images (a classifier sketch is given at the end of this embodiment);
a decoding unit 55, configured to match, according to a preset epipolar coding strategy, the coding information corresponding to each recognized coding element image with the coding information corresponding to the pre-stored coding pattern, so as to decode the corresponding primary coding feature points and auxiliary coding feature points; and
an image reconstruction unit 56, configured to perform three-dimensional image reconstruction on the object according to pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by the matching, so as to obtain a three-dimensional image of the object.
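The corner-refinement sketch referenced in unit 53 above is a minimal illustration using standard OpenCV corner detection followed by gray-value-based sub-pixel refinement; the detector choice, window sizes, and thresholds are assumptions rather than values from this disclosure.

import cv2
import numpy as np

def locate_auxiliary_feature_points(element_img, max_corners=8):
    """Coarse corner localization followed by gray-value-based sub-pixel
    refinement inside one coding element image (grayscale, uint8)."""
    corners = cv2.goodFeaturesToTrack(element_img, max_corners, 0.05, 5)
    if corners is None:
        return np.empty((0, 2), np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    # refine each corner position from the local gray-value distribution
    cv2.cornerSubPix(element_img, corners, (5, 5), (-1, -1), criteria)
    return corners.reshape(-1, 2)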
In the embodiment of the present invention, each unit of the three-dimensional image reconstruction apparatus may be implemented by a corresponding hardware or software unit; each unit may be an independent software or hardware unit, or may be integrated into a single software or hardware unit, which is not intended to limit the present invention. For the specific implementation of each unit, reference may be made to the specific description of the corresponding steps in Embodiment 1, and details are not described herein again.
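The classifier sketch referenced in unit 54 above: the disclosure does not specify the architecture of the deep learning network, so the following is only a minimal PyTorch example of a small convolutional classifier that maps a cropped coding element patch to one of a few codeword classes; the patch size, layer sizes, and class count are assumptions.

import torch
import torch.nn as nn

class ElementClassifier(nn.Module):
    """Small CNN mapping a cropped coding element patch (1 x 32 x 32)
    to one of `num_codes` codeword classes."""
    def __init__(self, num_codes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, num_codes)

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

# inference on one normalized patch
model = ElementClassifier()
patch = torch.rand(1, 1, 32, 32)
predicted_code = model(patch).argmax(dim=1)

In practice such a network would first be trained on coded image samples with different colors, textures, illuminations, and scenes, as described elsewhere in this disclosure.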
Embodiment 3:
FIG. 6 shows the structure of a structured light-based three-dimensional image reconstruction apparatus according to Embodiment 3 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown, including:
A first parameter calibration unit 601, configured to calibrate the camera and the projection device in the three-dimensional image reconstruction system using the Zhang Zhengyou camera calibration method, so as to obtain the calibration parameters of the camera and the projection device;
a second parameter calibration unit 602, configured to calculate the positional relationship parameters between the camera and the projection device according to the obtained calibration parameters of the camera and the projection device;
an element rotation unit 603, configured to rotate a coding element that has rotational symmetry and contains a preset number of feature points, so as to obtain a plurality of codeword graphics of the coding element;
an element image generation unit 604, configured to combine the plurality of codeword graphics with black and white backgrounds to generate the corresponding coding element images (a pattern-generation sketch is given after this unit list);
an image encoding unit 605, configured to encode, according to the epipolar coding strategy, the coding element images along the epipolar direction with a preset coding window, so as to obtain a projection coded image of a preset resolution;
a coded image projection unit 606, configured to project the projection coded image onto the object;
a feature point extraction unit 607, configured to, when a three-dimensional image reconstruction request for an object input by a user is received, extract the primary coding feature points of the input object coded image, where the coding element graphics of the object coded image have rotational symmetry and contain a preset number of auxiliary coding feature points;
an element image extraction unit 608, configured to construct a topological network of the primary coding feature points according to the primary coding feature points of the object coded image, and extract all coding element images included in the object coded image according to the topological network;
a feature point calculation unit 609, configured to locate the initial positions of the graphic feature points in each coding element image using a preset corner detection algorithm, and calculate the auxiliary coding feature points of the coding element image according to the initial positions and the gray values of the coding element image;
an image recognition unit 610, configured to recognize all coding element images using a pre-trained deep learning network according to the primary coding feature points and the auxiliary coding feature points of all coding element images;
a decoding unit 611, configured to match, according to a preset epipolar coding strategy, the coding information corresponding to each recognized coding element image with the coding information corresponding to the pre-stored coding pattern, so as to decode the corresponding primary coding feature points and auxiliary coding feature points; and
an image reconstruction unit 612, configured to perform three-dimensional image reconstruction on the object according to pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by the matching, so as to obtain a three-dimensional image of the object.
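The pattern-generation sketch referenced in unit 604 above is a minimal illustration of rotating one rotationally symmetric glyph into several codeword graphics and compositing each onto a black or white background tile; the rotation angles, tile size, and binarization threshold are example assumptions.

import cv2
import numpy as np

def make_codeword_images(glyph, angles=(0, 45, 90, 135), tile=32):
    """Rotate one symmetric glyph (square grayscale image) into several
    codeword graphics and composite each onto black and white background
    tiles, yielding the coding element images."""
    h, w = glyph.shape
    center = (w / 2.0, h / 2.0)
    images = []
    for angle in angles:
        M = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(glyph, M, (w, h), flags=cv2.INTER_LINEAR)
        rotated = cv2.resize(rotated, (tile, tile))
        for bg_val in (0, 255):                  # black and white backgrounds
            tile_img = np.full((tile, tile), bg_val, np.uint8)
            mask = rotated > 127                 # glyph pixels
            tile_img[mask] = 255 - bg_val        # draw glyph in contrasting intensity
            images.append(tile_img)
    return images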
The feature point extraction unit 607 includes:
a candidate feature point acquisition unit, configured to perform template convolution on the pixels of the object coded image, and obtain candidate primary coding feature points of the object coded image according to the template convolution result; and
a feature point culling unit 6072, configured to calculate the degree of symmetry of each candidate primary coding feature point, and cull the candidate primary coding feature points whose degree of symmetry is below a preset threshold, so as to obtain the primary coding feature points of the object coded image (a sketch of these two sub-units follows).
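A sketch of the two sub-units above, with an assumed checkerboard-style convolution template and a simple 180-degree rotation symmetry measure; the actual template and symmetry metric are not specified in this disclosure, so both are illustrative only.

import cv2
import numpy as np

def candidate_main_points(img, thresh_rel=0.6):
    """Template convolution: respond strongly at checkerboard-like corners
    of the grid formed by the primary coding feature points."""
    kernel = np.array([[ 1,  1, -1, -1],
                       [ 1,  1, -1, -1],
                       [-1, -1,  1,  1],
                       [-1, -1,  1,  1]], np.float32)
    resp = np.abs(cv2.filter2D(img.astype(np.float32), -1, kernel))
    ys, xs = np.where(resp > thresh_rel * resp.max())
    return list(zip(xs, ys))

def symmetry_score(img, x, y, r=6):
    """Negative mean difference between a patch and its 180-degree rotation;
    values near zero indicate high rotational symmetry."""
    patch = img[y - r:y + r, x - r:x + r].astype(np.float32)
    if patch.shape != (2 * r, 2 * r):
        return -255.0                            # too close to the image border
    return float(-np.mean(np.abs(patch - np.rot90(patch, 2))))

def main_feature_points(img, score_thresh=-10.0):
    """Keep only candidates whose symmetry score exceeds the threshold."""
    return [(x, y) for x, y in candidate_main_points(img)
            if symmetry_score(img, x, y) > score_thresh]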
In the embodiment of the present invention, each unit of the three-dimensional image reconstruction apparatus may be implemented by a corresponding hardware or software unit; each unit may be an independent software or hardware unit, or may be integrated into a single software or hardware unit, which is not intended to limit the present invention. For the specific implementation of each unit, reference may be made to the specific description of the corresponding steps in Embodiment 1, and details are not described herein again.
Embodiment 4:
In an embodiment of the present invention, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps in the above method embodiment, for example, steps S101 to S106 shown in FIG. 1. Alternatively, when the computer program is executed by the processor, it implements the functions of the units in the above apparatus embodiments, for example, the functions of units 51 to 56 shown in FIG. 5.
In the embodiment of the present invention, when the computer program is executed by the processor, the steps in the above method embodiment are implemented: the primary coding feature points and auxiliary coding feature points of the object coded image are extracted, all coding element images are recognized using a pre-trained deep learning network, the coding information corresponding to each recognized coding element image is matched with the coding information corresponding to the pre-stored coding pattern according to a preset epipolar coding strategy so as to decode the corresponding primary coding feature points and auxiliary coding feature points, and finally three-dimensional image reconstruction is performed on the object according to the pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by the matching. This improves the success rate of image decoding and thereby the quality of the three-dimensional image reconstruction.
The computer-readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, or a recording medium, for example, a ROM/RAM, a magnetic disk, an optical disc, a flash memory, or another memory.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

  1. A structured light-based three-dimensional image reconstruction method, characterized in that the method comprises the following steps:
    when a three-dimensional image reconstruction request for an object input by a user is received, extracting the primary coding feature points of the input object coded image, wherein the coding element graphics in the object coded image have rotational symmetry and contain a preset number of auxiliary coding feature points;
    constructing a topological network of the primary coding feature points according to the primary coding feature points of the object coded image, and extracting all coding element images included in the object coded image according to the topological network;
    locating the initial positions of the graphic feature points in each of the coding element images using a preset corner detection algorithm, and calculating the auxiliary coding feature points of the coding element images according to the initial positions and the gray values of the coding element images;
    recognizing all of the coding element images using a pre-trained deep learning network according to the primary coding feature points and the auxiliary coding feature points of all of the coding element images;
    matching, according to a preset epipolar coding strategy, the coding information corresponding to each recognized coding element image with the coding information corresponding to a pre-stored coding pattern, so as to decode the corresponding primary coding feature points and auxiliary coding feature points; and
    performing three-dimensional image reconstruction on the object according to pre-acquired calibration parameters of a three-dimensional image reconstruction system and the decoding information obtained by the matching, so as to obtain a three-dimensional image of the object.
  2. The method according to claim 1, characterized in that, before the step of performing three-dimensional image reconstruction on the object according to the pre-acquired calibration parameters of the three-dimensional image reconstruction system and the decoding information obtained by the matching, the method further comprises:
    calibrating the camera and the projection device in the three-dimensional image reconstruction system using the Zhang Zhengyou camera calibration method, so as to obtain the calibration parameters of the camera and the projection device; and
    calculating the positional relationship parameters between the camera and the projection device according to the obtained calibration parameters of the camera and the projection device.
  3. The method according to claim 1, characterized in that the step of extracting the primary coding feature points of the input object coded image comprises:
    performing template convolution on the pixels of the object coded image, and obtaining candidate primary coding feature points of the object coded image according to the template convolution result; and
    calculating the degree of symmetry of each of the candidate primary coding feature points, and culling the candidate primary coding feature points whose degree of symmetry is below a preset threshold, so as to obtain the primary coding feature points of the object coded image.
  4. The method according to claim 1, characterized in that, before the step of extracting the primary coding feature points of the input object coded image, the method further comprises:
    rotating a coding element graphic that has rotational symmetry and contains a preset number of feature points, so as to obtain a plurality of codeword graphics of the coding element;
    combining the plurality of codeword graphics with black and white backgrounds to generate the corresponding coding element images;
    encoding, according to the epipolar coding strategy, the coding element images along the epipolar direction with a preset coding window, so as to obtain a projection coded image of a preset resolution; and
    projecting the projection coded image onto the object.
  5. The method according to claim 1, characterized in that, before the step of recognizing all of the coding element images using a pre-trained deep learning network, the method further comprises:
    obtaining a preset number of object coded image samples having different colors, textures, illuminations, and scenes; and
    training a pre-established deep learning network using each of the object coded image samples, so as to obtain the trained deep learning network.
  6. A structured light-based three-dimensional image reconstruction apparatus, characterized in that the apparatus comprises:
    a feature point extraction unit, configured to, when a three-dimensional image reconstruction request for an object input by a user is received, extract the primary coding feature points of the input object coded image, wherein the coding element graphics of the object coded image have rotational symmetry and contain a preset number of auxiliary coding feature points;
    an element image extraction unit, configured to construct a topological network of the primary coding feature points according to the primary coding feature points of the object coded image, and extract all coding element images included in the object coded image according to the topological network;
    a feature point calculation unit, configured to locate the initial positions of the graphic feature points in each of the coding element images using a preset corner detection algorithm, and calculate the auxiliary coding feature points of the coding element images according to the initial positions and the gray values of the coding element images;
    an image recognition unit, configured to recognize all of the coding element images using a pre-trained deep learning network according to the primary coding feature points and the auxiliary coding feature points of all of the coding element images;
    a decoding unit, configured to match, according to a preset epipolar coding strategy, the coding information corresponding to each recognized coding element image with the coding information corresponding to a pre-stored coding pattern, so as to decode the corresponding primary coding feature points and auxiliary coding feature points; and
    an image reconstruction unit, configured to perform three-dimensional image reconstruction on the object according to pre-acquired calibration parameters of a three-dimensional image reconstruction system and the decoding information obtained by the matching, so as to obtain a three-dimensional image of the object.
  7. The apparatus according to claim 6, characterized in that the apparatus further comprises:
    a first parameter calibration unit, configured to calibrate the camera and the projection device in the three-dimensional image reconstruction system using the Zhang Zhengyou camera calibration method, so as to obtain the calibration parameters of the camera and the projection device; and
    a second parameter calibration unit, configured to calculate the positional relationship parameters between the camera and the projection device according to the obtained calibration parameters of the camera and the projection device.
  8. The apparatus according to claim 6, characterized in that the feature point extraction unit comprises:
    a candidate feature point acquisition unit, configured to perform template convolution on the pixels of the object coded image, and obtain candidate primary coding feature points of the object coded image according to the template convolution result; and
    a feature point culling unit, configured to calculate the degree of symmetry of each of the candidate primary coding feature points, and cull the candidate primary coding feature points whose degree of symmetry is below a preset threshold, so as to obtain the primary coding feature points of the object coded image.
  9. The apparatus according to claim 6, characterized in that the apparatus further comprises:
    an element rotation unit, configured to rotate a coding element that has rotational symmetry and contains a preset number of feature points, so as to obtain a plurality of codeword graphics of the coding element;
    an element image generation unit, configured to combine the plurality of codeword graphics with black and white backgrounds to generate the corresponding coding element images;
    an image encoding unit, configured to encode, according to the epipolar coding strategy, the coding element images along the epipolar direction with a preset coding window, so as to obtain a projection coded image of a preset resolution; and
    a coded image projection unit, configured to project the projection coded image onto the object.
  10. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 5 are implemented.