WO2023174204A1 - Hyperspectral three-dimensional reconstruction system and method based on SfM and deep learning, and application thereof - Google Patents

Hyperspectral three-dimensional reconstruction system and method based on SfM and deep learning, and application thereof

Info

Publication number
WO2023174204A1
Authority
WO
WIPO (PCT)
Prior art keywords
hyperspectral
dimensional
dimensional reconstruction
feature
deep learning
Prior art date
Application number
PCT/CN2023/081051
Other languages
English (en)
French (fr)
Inventor
何赛灵
马腾飞
Original Assignee
浙江大学 (Zhejiang University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学 (Zhejiang University)
Publication of WO2023174204A1

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Definitions

  • the invention belongs to the field of computer technology, and particularly relates to a hyperspectral three-dimensional reconstruction system, method and application based on SfM and deep learning.
  • 3D reconstruction has played a huge role in the field of information preservation of cultural relics and ancient books.
  • 3D reconstructed cultural relics can be displayed virtually using optical projection, and 3D printing can be used to accurately replicate the 3D reconstructed model, and so on.
  • Spectral imaging analysis is an important means of material composition analysis and plays an important role in fields such as food composition testing and freshness testing of agricultural and fishery products. Hyperspectral imaging can preserve richer information than an ordinary camera, which also has important applications in the authentication and digital preservation of cultural relics and ancient books.
  • Hyperspectral three-dimensional reconstruction of three-dimensional targets is an important means of material composition analysis and information preservation of cultural relics and ancient books.
  • existing hyperspectral reconstruction methods separate three-dimensional reconstruction from hyperspectral acquisition. These methods require two sets of equipment, namely three-dimensional reconstruction equipment and hyperspectral acquisition equipment, which must be well calibrated with respect to each other. Such separated-equipment approaches suffer from high hardware cost, difficult calibration, low accuracy, and weak system transferability.
  • the purpose of the present invention is to propose a hyperspectral three-dimensional reconstruction system, method and application based on SfM (Structure from Motion) and deep learning.
  • This system does not require additional three-dimensional reconstruction equipment. It can perform hyperspectral three-dimensional reconstruction of three-dimensional targets with high reconstruction accuracy using only one hyperspectral two-dimensional imaging device. It is especially suitable for analyzing the material properties of three-dimensional targets and for the digital preservation of cultural relics and ancient books.
  • a hyperspectral three-dimensional reconstruction system based on SfM and deep learning including a hyperspectral two-dimensional image acquisition module, a hyperspectral feature generation module and a hyperspectral three-dimensional reconstruction module;
  • the hyperspectral two-dimensional image acquisition module is connected to the hyperspectral feature generation module and is used to obtain multi-view hyperspectral information of the target object;
  • the hyperspectral feature generation module is connected to the hyperspectral two-dimensional image acquisition module and the hyperspectral three-dimensional reconstruction module respectively, and uses a neural network model trained by deep learning to extract feature points of the hyperspectral two-dimensional image based on the constraint information of multiple spectral bands and to generate feature descriptors;
  • the hyperspectral three-dimensional reconstruction module is connected to the hyperspectral feature generation module, and is used to generate a hyperspectral three-dimensional reconstruction model based on feature points and feature descriptors.
  • the hyperspectral two-dimensional image acquisition module includes an electro-optical tunable filter and a high-speed CCD camera.
  • the hyperspectral two-dimensional image acquisition module further includes an electronically controlled rotating stage.
  • the hyperspectral three-dimensional reconstruction system performs multi-view photography of three-dimensional targets.
  • a hyperspectral three-dimensional reconstruction method based on SfM and deep learning including the following steps:
  • the neural network model based on deep learning training is used to extract feature points of the hyperspectral image and generate feature descriptors;
  • in step 1), large targets are photographed from multiple angles with a hand-held hyperspectral two-dimensional image acquisition device; small targets are placed on an electronically controlled rotating stage, which completes multi-angle hyperspectral image acquisition automatically.
  • the feature points described in step 2) are shared across all spectral bands.
  • the feature descriptor is a 256-dimensional vector, and the value range of each element in the vector is 0-255.
  • step 3) uses cosine similarity when matching feature points.
  • the camera pose calculated in step 4) from one viewing angle is common to all spectra of this image.
  • the present invention has a wide range of usage scenarios and can be applied in various scenarios, such as material attribute analysis, information-based preservation of cultural relics and ancient books, etc.
  • the present invention uses a neural network based on deep learning to extract hyperspectral image feature points and generate feature descriptors.
  • the obtained feature points are more accurate and the feature descriptors are more reliable.
  • the three-dimensional model reconstructed by the present invention has higher accuracy and better completeness.
  • the present invention does not require additional three-dimensional reconstruction equipment. It only needs one hyperspectral two-dimensional imaging device (optionally supplemented by an electronically controlled rotary stage if necessary) to perform hyperspectral three-dimensional reconstruction with high reconstruction accuracy and low cost, making it easy to popularize and use.
  • Figure 1 is a schematic diagram of a module of the present invention.
  • Figure 2a is a working mode of the hyperspectral two-dimensional image acquisition module of the present invention.
  • Figure 2b is another working mode of the hyperspectral two-dimensional image acquisition module of the present invention (including an electronically controlled rotating stage).
  • Figure 3 is a schematic flow chart of an algorithm of the present invention.
  • Figure 4 is a result diagram of the actual operation of the embodiment of the present invention.
  • a hyperspectral three-dimensional reconstruction system based on SfM and deep learning includes a hyperspectral two-dimensional image acquisition module, a hyperspectral feature generation module and a hyperspectral three-dimensional reconstruction module.
  • the hyperspectral two-dimensional image acquisition module is connected to the hyperspectral feature generation module and is used to obtain multi-view hyperspectral information of the target object.
  • the hyperspectral feature generation module is connected to the hyperspectral two-dimensional image acquisition module and the hyperspectral three-dimensional reconstruction module respectively, and uses a neural network model trained by deep learning to extract feature points of the hyperspectral two-dimensional image based on the constraint information of multiple spectral bands and to generate feature descriptors; compared with RGB images, the feature points extracted by this module are more accurate, the computed feature descriptors are more stable, and the feature point matching accuracy between images is higher.
  • the hyperspectral three-dimensional reconstruction module is connected to the hyperspectral feature generation module, and is used to generate a hyperspectral three-dimensional reconstruction model based on feature points and feature descriptors.
  • the hyperspectral two-dimensional image acquisition module includes an electro-optical tunable filter and a high-speed CCD camera.
  • the hyperspectral two-dimensional image acquisition module further includes an electronically controlled rotating stage.
  • the hyperspectral three-dimensional reconstruction system performs multi-view photography of three-dimensional targets.
  • a hyperspectral three-dimensional reconstruction method based on SfM and deep learning includes the following steps:
  • the neural network model based on deep learning training is used to extract feature points of the hyperspectral image and generate feature descriptors;
  • the feature points described in step 2) are common under each spectrum.
  • the feature descriptor is a 256-dimensional vector, and the value range of each element in the vector is 0-255.
  • the described step 3) uses cosine similarity when matching feature points.
  • the camera pose calculated in step 4) from one viewing angle is common to all spectra of this image.
  • a hyperspectral two-dimensional image acquisition module is used to collect multiple hyperspectral images of the cultural relic from multiple viewing angles.
  • the collected hyperspectral images should cover the entire surface of the cultural relic as much as possible.
  • the present invention uses a neural network based on deep learning to extract feature points of these hyperspectral images and calculate their corresponding feature descriptors.
  • the size of the input hyperspectral image data is H × W × S, where H refers to the height of the two-dimensional imaging space, W refers to the width of the two-dimensional imaging space, and S refers to the spectral dimension.
  • the hyperspectral image is directly input into the trained deep neural network.
  • the neural network first fully extracts the features of the hyperspectral image and outputs a feature map through a Shared Encoder.
  • the Shared Encoder consists of 4 groups comprising 8 convolutional layers in total and 3 max pooling layers, and the size of the output feature map is H/8 × W/8 × 128; the network then splits into two branches. One branch is the Interest Point Decoder, comprising two convolutional layers, which converts the H/8 × W/8 × 128 feature map into an H/8 × W/8 × 64 feature point probability map; this map is then expanded to a feature point probability map matching the two-dimensional imaging space of the original hyperspectral image, with size H × W × 1.
  • the value at each spatial position of the feature point probability map indicates the probability that that point is a feature point (a small illustrative sketch of converting this map into discrete feature points is given after this list); the other branch is the Descriptor Decoder, which also comprises two convolutional layers and outputs a feature descriptor map of size H × W × 256; the 256-dimensional vector at each spatial position of the descriptor map is the feature descriptor of that point.
  • the relative pose between each viewing angle is solved based on the matching results.
  • the camera pose calculated from one viewing angle is common to each spectrum of this hyperspectral image, and the SfM algorithm is used to perform three-dimensional reconstruction of each spectrum separately.
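As a concrete illustration of how such an H × W probability map can be turned into discrete feature points, the following is a minimal sketch; the detection threshold and the non-maximum-suppression radius are illustrative values, not values specified in the description.

```python
import numpy as np

def extract_feature_points(prob_map, threshold=0.015, nms_radius=4):
    """Select feature point coordinates from an H x W probability map."""
    candidates = np.argwhere(prob_map > threshold)           # (row, col) pairs
    order = np.argsort(prob_map[candidates[:, 0], candidates[:, 1]])[::-1]
    keypoints = []
    suppressed = np.zeros(prob_map.shape, dtype=bool)
    for r, c in candidates[order]:                           # strongest first
        if suppressed[r, c]:
            continue                                         # already covered by a neighbour
        keypoints.append((r, c))
        r0, r1 = max(r - nms_radius, 0), r + nms_radius + 1
        c0, c1 = max(c - nms_radius, 0), c + nms_radius + 1
        suppressed[r0:r1, c0:c1] = True                      # simple non-maximum suppression
    return np.array(keypoints)
```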

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a hyperspectral three-dimensional (3D) reconstruction system and method based on SfM and deep learning, and an application thereof. The hyperspectral 3D reconstruction system comprises a hyperspectral two-dimensional (2D) image acquisition module, a hyperspectral feature generation module and a hyperspectral 3D reconstruction module. The hyperspectral 3D reconstruction method comprises the following steps: 1) first, acquiring multiple hyperspectral images of a 3D target from multiple viewing angles; 2) using a neural network model trained by deep learning to extract feature points from the hyperspectral images and generate feature descriptors; 3) performing feature point matching; 4) computing the hyperspectral camera pose at each viewing angle; 5) performing 3D reconstruction of each spectral band separately; 6) performing registration to obtain a hyperspectral 3D reconstruction model. The present invention can perform hyperspectral 3D reconstruction of a 3D target with only one hyperspectral 2D imaging device, with high accuracy and low cost, is easy to popularize and use, and is especially suitable for analyzing the material properties of 3D targets and for the digital preservation of cultural relics and ancient books.

Description

Hyperspectral three-dimensional reconstruction system and method based on SfM and deep learning, and application thereof
Technical Field
The present invention belongs to the field of computer technology, and in particular relates to a hyperspectral three-dimensional reconstruction system and method based on SfM and deep learning, and an application thereof.
Background Art
With the development of information technology, three-dimensional reconstruction has played a major role in the digital preservation of cultural relics and ancient books: reconstructed relics can be exhibited virtually by optical projection, and 3D printing can be used to replicate the reconstructed models precisely, among other uses. Spectral imaging analysis is an important means of material composition analysis and plays an important role in fields such as food composition testing and freshness testing of agricultural and fishery products. Hyperspectral imaging preserves richer information than an ordinary camera, which is likewise of great value for the authentication and digital preservation of cultural relics and ancient books.
Hyperspectral three-dimensional reconstruction of three-dimensional targets is an important means of material composition analysis and of digital preservation of cultural relics and ancient books. The hyperspectral reconstruction methods currently in use separate three-dimensional reconstruction from hyperspectral acquisition: they require two sets of equipment, a three-dimensional reconstruction device and a hyperspectral acquisition device, which must be well calibrated with respect to each other. Such separated-equipment approaches suffer from high hardware cost, difficult calibration, low accuracy and weak system transferability.
Summary of the Invention
To overcome these shortcomings of the prior art, the purpose of the present invention is to propose a hyperspectral three-dimensional reconstruction system and method based on SfM (Structure from Motion) and deep learning, and an application thereof. The system needs no additional three-dimensional reconstruction equipment: using only one hyperspectral two-dimensional imaging device, it can perform hyperspectral three-dimensional reconstruction of three-dimensional targets with high reconstruction accuracy, and it is especially suitable for analyzing the material properties of three-dimensional targets and for the digital preservation of cultural relics and ancient books.
A hyperspectral three-dimensional reconstruction system based on SfM and deep learning comprises a hyperspectral two-dimensional image acquisition module, a hyperspectral feature generation module and a hyperspectral three-dimensional reconstruction module;
the hyperspectral two-dimensional image acquisition module is connected to the hyperspectral feature generation module and is used to obtain multi-view hyperspectral information of the target object;
the hyperspectral feature generation module is connected to the hyperspectral two-dimensional image acquisition module and the hyperspectral three-dimensional reconstruction module respectively, and uses a neural network model trained by deep learning to extract feature points of the hyperspectral two-dimensional images based on the constraint information of multiple spectral bands and to generate feature descriptors;
the hyperspectral three-dimensional reconstruction module is connected to the hyperspectral feature generation module and generates the hyperspectral three-dimensional reconstruction model from the feature points and feature descriptors.
The hyperspectral two-dimensional image acquisition module includes an electro-optically tunable filter and a high-speed CCD camera.
The hyperspectral two-dimensional image acquisition module further includes an electronically controlled rotary stage.
The hyperspectral three-dimensional reconstruction system photographs three-dimensional targets from multiple viewing angles.
A hyperspectral three-dimensional reconstruction method based on SfM and deep learning comprises the following steps:
1) first, acquiring multiple hyperspectral images of the three-dimensional target from multiple viewing angles with the hyperspectral two-dimensional image acquisition module;
2) using the neural network model trained by deep learning in the hyperspectral feature generation module to extract feature points from the hyperspectral images and to generate feature descriptors;
3) matching feature points across the hyperspectral images of the multiple viewing angles according to the extracted feature points and the generated feature descriptors;
4) computing the hyperspectral camera pose at each viewing angle from the feature point matching results;
5) performing three-dimensional reconstruction of each spectral band separately with the SfM method, based on the camera poses;
6) registering the three-dimensional reconstructions of the individual spectral bands to obtain the hyperspectral three-dimensional reconstruction model.
In step 1), large targets are photographed from multiple viewing angles with a hand-held hyperspectral two-dimensional image acquisition device; small targets are placed on an electronically controlled rotary stage, which completes multi-view hyperspectral image acquisition automatically.
In step 2), the feature points are shared across all spectral bands; each feature descriptor is a 256-dimensional vector, and every element of the vector takes a value in the range 0-255.
In step 3), cosine similarity is used when matching feature points.
In step 4), the camera pose computed for one viewing angle is shared by all spectral bands of that image.
In step 6), registering the three-dimensional reconstruction models of the individual spectral bands only requires superimposing the individual three-dimensional models.
An application of the hyperspectral three-dimensional reconstruction system based on SfM and deep learning is material property analysis, or the digital preservation of cultural relics and ancient books.
Compared with the prior art, the present invention has the following advantages and effects:
1. The invention can be used in a wide range of scenarios, such as material property analysis and the digital preservation of cultural relics and ancient books.
2. The invention uses a deep-learning neural network to extract feature points from hyperspectral images and to generate feature descriptors, yielding more accurate feature points and more reliable descriptors.
3. Thanks to the more precise feature points and more stable feature descriptors, the three-dimensional model reconstructed by the invention has higher accuracy and better completeness.
4. The invention needs no additional three-dimensional reconstruction equipment; hyperspectral three-dimensional reconstruction can be performed with only one hyperspectral two-dimensional imaging device (optionally supplemented by an electronically controlled rotary stage when necessary), with high reconstruction accuracy and low cost, making it easy to popularize and use.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the modules of the present invention.
Figure 2a shows one working mode of the hyperspectral two-dimensional image acquisition module of the present invention.
Figure 2b shows another working mode of the hyperspectral two-dimensional image acquisition module of the present invention (including the electronically controlled rotary stage).
Figure 3 is a schematic flow chart of the algorithm of the present invention.
Figure 4 shows a result from an actual run of an embodiment of the present invention.
Detailed Description of the Embodiments
The present invention is further described below with reference to the drawings and embodiments.
As shown in Figure 1, a hyperspectral three-dimensional reconstruction system based on SfM and deep learning comprises a hyperspectral two-dimensional image acquisition module, a hyperspectral feature generation module and a hyperspectral three-dimensional reconstruction module.
The hyperspectral two-dimensional image acquisition module is connected to the hyperspectral feature generation module and is used to obtain multi-view hyperspectral information of the target object.
The hyperspectral feature generation module is connected to the hyperspectral two-dimensional image acquisition module and the hyperspectral three-dimensional reconstruction module respectively, and uses a neural network model trained by deep learning to extract feature points of the hyperspectral two-dimensional images based on the constraint information of multiple spectral bands and to generate feature descriptors; compared with RGB images, the feature points extracted by this module are more accurate, the computed feature descriptors are more stable, and feature point matching between images is more precise.
The hyperspectral three-dimensional reconstruction module is connected to the hyperspectral feature generation module and generates the hyperspectral three-dimensional reconstruction model from the feature points and feature descriptors.
The hyperspectral two-dimensional image acquisition module includes an electro-optically tunable filter and a high-speed CCD camera.
The hyperspectral two-dimensional image acquisition module further includes an electronically controlled rotary stage.
The hyperspectral three-dimensional reconstruction system photographs three-dimensional targets from multiple viewing angles.
As shown in Figure 3, a hyperspectral three-dimensional reconstruction method based on SfM and deep learning comprises the following steps:
1) first, acquiring multiple hyperspectral images of the three-dimensional target from multiple viewing angles with the hyperspectral two-dimensional image acquisition module;
2) using the neural network model trained by deep learning in the hyperspectral feature generation module to extract feature points from the hyperspectral images and to generate feature descriptors;
3) matching feature points across the hyperspectral images of the multiple viewing angles according to the extracted feature points and the generated feature descriptors;
4) computing the hyperspectral camera pose at each viewing angle from the feature point matching results;
5) performing three-dimensional reconstruction of each spectral band separately with the SfM method, based on the camera poses;
6) registering the three-dimensional reconstructions of the individual spectral bands to obtain the hyperspectral three-dimensional reconstruction model.
In step 1), for a large target the position of the three-dimensional target is fixed during operation and a hand-held hyperspectral two-dimensional image acquisition device photographs it from multiple viewing angles (Figure 2a); a small target is placed on an electronically controlled rotary stage and multi-view hyperspectral image acquisition is completed automatically (Figure 2b), the fully automated operation relieving the operator of the burden of collecting multi-view hyperspectral images.
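As an illustration of the rotary-stage working mode just described (Figure 2b), the following is a minimal acquisition-loop sketch. The filter_dev, camera and stage objects and their set_wavelength, capture and move_to methods are hypothetical placeholders for whatever drivers control the electro-optically tunable filter, the high-speed CCD camera and the rotary stage, and the wavelength list and uniform angular step are likewise only illustrative.

```python
import numpy as np

def acquire_hyperspectral_views(filter_dev, camera, stage, wavelengths_nm, n_views):
    """Collect one H x W x S hyperspectral cube per viewing angle.

    filter_dev, camera and stage are hypothetical device wrappers standing in
    for the real drivers of the tunable filter, CCD camera and rotary stage.
    """
    cubes = []
    for view in range(n_views):
        stage.move_to(view * 360.0 / n_views)   # rotate the small target to the next angle
        bands = []
        for wl in wavelengths_nm:               # sweep the spectral bands
            filter_dev.set_wavelength(wl)       # select one pass band
            bands.append(camera.capture())      # one H x W frame per band
        cubes.append(np.stack(bands, axis=-1))  # H x W x S cube for this view
    return cubes
```

Each returned cube has the H × W × S layout assumed by the feature generation module described below.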
In step 2), the feature points are shared across all spectral bands; each feature descriptor is a 256-dimensional vector, and every element of the vector takes a value in the range 0-255.
In step 3), cosine similarity is used when matching feature points.
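A minimal sketch of this cosine-similarity matching step is given below, assuming the descriptors of two views are held row-wise in NumPy arrays; the mutual nearest-neighbour check and the similarity threshold are common refinements added here for illustration rather than requirements stated in the text.

```python
import numpy as np

def match_by_cosine_similarity(desc_a, desc_b, min_sim=0.8):
    """Match two descriptor sets (N x 256 and M x 256) by cosine similarity."""
    a = desc_a / (np.linalg.norm(desc_a, axis=1, keepdims=True) + 1e-12)
    b = desc_b / (np.linalg.norm(desc_b, axis=1, keepdims=True) + 1e-12)
    sim = a @ b.T                                  # N x M cosine similarities
    nn_ab = sim.argmax(axis=1)                     # best match in B for each point of A
    nn_ba = sim.argmax(axis=0)                     # best match in A for each point of B
    return [(i, j) for i, j in enumerate(nn_ab)    # keep mutual, confident pairs only
            if nn_ba[j] == i and sim[i, j] >= min_sim]
```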
In step 4), the camera pose computed for one viewing angle is shared by all spectral bands of that image.
In step 6), registering the three-dimensional reconstruction models of the individual spectral bands only requires superimposing the individual three-dimensional models.
Application Embodiment
An application of the hyperspectral three-dimensional reconstruction system based on SfM and deep learning is material property analysis, or the digital preservation of cultural relics and ancient books.
The digital preservation of cultural relics and ancient books is taken as an example here.
First, multiple hyperspectral images of the relic are acquired from multiple viewing angles with the hyperspectral two-dimensional image acquisition module; the acquired hyperspectral images should cover the entire surface of the relic as far as possible.
Existing methods for feature point extraction and descriptor generation (represented by the Scale-Invariant Feature Transform, SIFT) all take a single grayscale image as input; they cannot handle hyperspectral images, let alone fully exploit the rich spectral information in a hyperspectral image as features, which limits the accuracy of the extracted feature points and the stability of the descriptors.
The present invention uses a deep-learning neural network to extract feature points from these hyperspectral images and to compute their corresponding feature descriptors. Let the size of the input hyperspectral image data be H × W × S, where H is the height of the two-dimensional imaging space, W is its width and S is the spectral dimension. The hyperspectral image is fed directly into the trained deep neural network. The network first extracts the features of the hyperspectral image through a Shared Encoder and outputs a feature map; the Shared Encoder consists of 4 groups comprising 8 convolutional layers in total and 3 max pooling layers, and the output feature map has size H/8 × W/8 × 128. The network then splits into two branches. One branch is the Interest Point Decoder, comprising two convolutional layers, which converts the H/8 × W/8 × 128 feature map into an H/8 × W/8 × 64 feature point probability map; this map is then expanded to a feature point probability map matching the two-dimensional imaging space of the original hyperspectral image, of size H × W × 1, in which the value at each spatial position indicates the probability that that point is a feature point. The other branch is the Descriptor Decoder, also comprising two convolutional layers, which outputs a feature descriptor map of size H × W × 256; the 256-dimensional vector at each spatial position of the descriptor map is the feature descriptor of that point. With these feature points and feature descriptors, the hyperspectral images of the multiple viewing angles are matched precisely.
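A minimal PyTorch-style sketch of a network with this layout is given below. It assumes a SuperPoint-like arrangement; the intermediate channel widths, the grouping of the eight convolutional layers, and the use of pixel shuffling and bilinear upsampling for the two expansion steps are illustrative choices, since the description only fixes the H × W × S input, the H/8 × W/8 × 128 encoder output, the 64-channel interest point branch and the 256-dimensional descriptors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperspectralFeatureNet(nn.Module):
    """Shared Encoder plus Interest Point and Descriptor Decoders (sketch)."""
    def __init__(self, num_bands):                  # num_bands = S spectral bands
        super().__init__()
        def group(cin, cout):                       # one group = 2 convolutional layers
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))
        pool = nn.MaxPool2d(2, 2)
        # 4 groups (8 conv layers) and 3 max pooling layers -> H/8 x W/8 x 128
        self.encoder = nn.Sequential(
            group(num_bands, 64), pool,
            group(64, 64), pool,
            group(64, 128), pool,
            group(128, 128))
        # Interest Point Decoder: 2 conv layers -> H/8 x W/8 x 64
        self.point_head = nn.Sequential(
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 64, 1))
        # Descriptor Decoder: 2 conv layers -> 256 channels
        self.desc_head = nn.Sequential(
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 1))

    def forward(self, cube):                         # cube: B x S x H x W
        feat = self.encoder(cube)                    # B x 128 x H/8 x W/8
        prob = torch.sigmoid(
            F.pixel_shuffle(self.point_head(feat), 8))   # B x 1 x H x W probabilities
        desc = F.interpolate(self.desc_head(feat), scale_factor=8,
                             mode='bilinear', align_corners=False)
        desc = F.normalize(desc, dim=1)              # B x 256 x H x W descriptors
        return prob.squeeze(1), desc
```

The descriptors here are L2-normalized floating-point vectors; the 0-255 value range mentioned in the text corresponds to a quantized form that can be obtained afterwards by rescaling.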
Then the relative poses between the viewing angles are solved from the matching results. The camera pose computed for one viewing angle is shared by all spectral bands of that hyperspectral image, and the SfM algorithm is used to perform three-dimensional reconstruction of each spectral band separately.
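To make the shared-pose idea concrete, a minimal two-view triangulation sketch with OpenCV is given below. It assumes the intrinsic matrix K and the relative pose (R, t) have already been recovered from the matched feature points, and it omits the incremental SfM bookkeeping and bundle adjustment of a full pipeline; the per-band loop in the final comment reflects the separate reconstruction of each spectral band with one shared pose.

```python
import cv2
import numpy as np

def triangulate_band(K, R, t, pts1, pts2):
    """Triangulate one spectral band's matched pixels (two N x 2 arrays)
    with the camera pose shared by all bands of the two views."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])       # first view at the origin
    P2 = K @ np.hstack([R, t.reshape(3, 1)])                # second view from (R, t)
    pts4d = cv2.triangulatePoints(P1, P2,
                                  pts1.T.astype(np.float64),
                                  pts2.T.astype(np.float64))
    return (pts4d[:3] / pts4d[3]).T                          # N x 3 Euclidean points

# One reconstruction per spectral band, all reusing the same (K, R, t):
# clouds = [triangulate_band(K, R, t, pts1[b], pts2[b]) for b in range(S)]
```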
Finally, the models obtained from the three-dimensional reconstruction of each spectral band are superimposed in space to complete the registration, yielding the complete hyperspectral three-dimensional model (Figure 4).
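Because every band is reconstructed in the same coordinate frame, this final registration can reduce to stacking the per-band values point by point. The sketch below assumes each band's reconstruction yields the same ordered set of 3D points together with one intensity value per point, which follows from the shared feature points and shared poses; other data layouts would need an explicit alignment step.

```python
import numpy as np

def fuse_hyperspectral_model(band_clouds, band_values):
    """Superimpose per-band reconstructions into one hyperspectral point cloud.

    band_clouds: list of S arrays (N x 3), all in the same coordinate frame.
    band_values: list of S arrays (N,), one intensity per point and band.
    Returns an (N x 3) point array and an (N x S) spectrum per point.
    """
    points = band_clouds[0]                  # shared coordinate frame, shared points
    spectra = np.stack(band_values, axis=1)  # N x S spectral signature per point
    return points, spectra
```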
The above embodiments only describe preferred implementations of the present invention and do not limit its concept or scope. Various changes and improvements made to the technical solution of the present invention by a person of ordinary skill in the art without departing from the design concept of the present invention all fall within the scope of protection of the present invention.

Claims (10)

  1. A hyperspectral three-dimensional reconstruction system based on SfM and deep learning, characterized by comprising a hyperspectral two-dimensional image acquisition module, a hyperspectral feature generation module and a hyperspectral three-dimensional reconstruction module;
    the hyperspectral two-dimensional image acquisition module is connected to the hyperspectral feature generation module and is used to obtain multi-view hyperspectral information of the target object;
    the hyperspectral feature generation module is connected to the hyperspectral two-dimensional image acquisition module and the hyperspectral three-dimensional reconstruction module respectively, and uses a neural network model trained by deep learning to extract feature points of the hyperspectral two-dimensional images based on the constraint information of multiple spectral bands and to generate feature descriptors;
    the hyperspectral three-dimensional reconstruction module is connected to the hyperspectral feature generation module and generates a hyperspectral three-dimensional reconstruction model from the feature points and feature descriptors.
  2. The hyperspectral three-dimensional reconstruction system based on SfM and deep learning according to claim 1, characterized in that the hyperspectral two-dimensional image acquisition module includes an electro-optically tunable filter and a high-speed CCD camera.
  3. The hyperspectral three-dimensional reconstruction system based on SfM and deep learning according to claim 2, characterized in that the hyperspectral two-dimensional image acquisition module further includes an electronically controlled rotary stage.
  4. A hyperspectral three-dimensional reconstruction method using the hyperspectral three-dimensional reconstruction system based on SfM and deep learning according to claim 1, characterized by comprising the following steps:
    1) first, acquiring multiple hyperspectral images of the three-dimensional target from multiple viewing angles with the hyperspectral two-dimensional image acquisition module;
    2) using the neural network model trained by deep learning in the hyperspectral feature generation module to extract feature points from the hyperspectral images and to generate feature descriptors;
    3) matching feature points across the hyperspectral images of the multiple viewing angles according to the extracted feature points and the generated feature descriptors;
    4) computing the hyperspectral camera pose at each viewing angle from the feature point matching results;
    5) performing three-dimensional reconstruction of each spectral band separately with the SfM method, based on the camera poses;
    6) registering the three-dimensional reconstructions of the individual spectral bands to obtain the hyperspectral three-dimensional reconstruction model.
  5. The hyperspectral three-dimensional reconstruction method according to claim 4, characterized in that in step 1) large targets are photographed from multiple viewing angles with a hand-held hyperspectral two-dimensional image acquisition device, and small targets are placed on an electronically controlled rotary stage on which multi-view hyperspectral image acquisition is completed automatically.
  6. The hyperspectral three-dimensional reconstruction method according to claim 4, characterized in that the feature points in step 2) are shared across all spectral bands, each feature descriptor is a 256-dimensional vector, and every element of the vector takes a value in the range 0-255.
  7. The hyperspectral three-dimensional reconstruction method according to claim 4, characterized in that cosine similarity is used when matching feature points in step 3).
  8. The hyperspectral three-dimensional reconstruction method according to claim 4, characterized in that the camera pose computed for one viewing angle in step 4) is shared by all spectral bands of that image.
  9. The hyperspectral three-dimensional reconstruction method according to claim 8, characterized in that registering the three-dimensional reconstruction models of the individual spectral bands in step 6) only requires spatially superimposing the three-dimensional models reconstructed for the individual spectral bands.
  10. Use of the hyperspectral three-dimensional reconstruction system based on SfM and deep learning, characterized in that it is used for material property analysis, or for the digital preservation of cultural relics and ancient books.
PCT/CN2023/081051 2022-03-14 2023-03-13 基于SfM和深度学习的高光谱三维重建系统及方法与应用 WO2023174204A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210247159.8 2022-03-14
CN202210247159.8A CN114677474A (zh) 2022-03-14 2022-03-14 基于SfM和深度学习的高光谱三维重建系统及方法与应用

Publications (1)

Publication Number Publication Date
WO2023174204A1 (zh)

Family

ID=82073683

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/081051 WO2023174204A1 (zh) 2022-03-14 2023-03-13 基于SfM和深度学习的高光谱三维重建系统及方法与应用

Country Status (2)

Country Link
CN (1) CN114677474A (zh)
WO (1) WO2023174204A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677474A (zh) * 2022-03-14 2022-06-28 浙江大学 基于SfM和深度学习的高光谱三维重建系统及方法与应用
CN116210571B (zh) * 2023-03-06 2023-10-20 广州市林业和园林科学研究院 一种立体绿化遥感智能灌溉方法及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108051837A (zh) * 2017-11-30 2018-05-18 武汉大学 多传感器集成室内外移动测绘装置及自动三维建模方法
CN110853145A (zh) * 2019-09-12 2020-02-28 浙江大学 高空间分辨率便携式抗抖动高光谱成像方法及装置
CN113822896A (zh) * 2021-08-31 2021-12-21 北京市农林科学院信息技术研究中心 一种植物群体三维表型数据采集装置及方法
CN114677474A (zh) * 2022-03-14 2022-06-28 浙江大学 基于SfM和深度学习的高光谱三维重建系统及方法与应用

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIANG JIE; ZIA ALI; ZHOU JUN; SIRAULT XAVIER: "3D Plant Modelling via Hyperspectral Imaging", 2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, IEEE, 2 December 2013 (2013-12-02), pages 172 - 177, XP032575656, DOI: 10.1109/ICCVW.2013.29 *
LUO JING, LIN ZIJIAN, XING YUXIN, FORSBERG ERIK, WU CHENGDONG, ZHU XINHUA, GUO TINGBIAO, WANG GAOXUAN, BIAN BEILEI, WU DUN, HE SAI: "PORTABLE 4D SNAPSHOT HYPERSPECTRAL IMAGER FOR FASTSPECTRAL AND SURFACE MORPHOLOGY MEASUREMENTS", PROGRESS IN ELECTROMAGNETICS RESEARCH, vol. 173, 1 January 2022 (2022-01-01), pages 25 - 36, XP093091764, DOI: 10.2528/PIER22021702 *

Also Published As

Publication number Publication date
CN114677474A (zh) 2022-06-28

Similar Documents

Publication Publication Date Title
WO2023174204A1 (zh) 基于SfM和深度学习的高光谱三维重建系统及方法与应用
CN105205858B (zh) 一种基于单个深度视觉传感器的室内场景三维重建方法
CN101998136B (zh) 单应矩阵的获取方法、摄像设备的标定方法及装置
CN109269430A (zh) 基于深度提取模型的多株立木胸径被动测量方法
CN102506757B (zh) 双目立体测量系统多视角测量中的自定位方法
Gao et al. Adaptive anchor box mechanism to improve the accuracy in the object detection system
DE112013003214T5 (de) Verfahren zum Registrieren von Daten
CN106097348A (zh) 一种三维激光点云与二维图像的融合方法
CN106709950A (zh) 一种基于双目视觉的巡线机器人跨越障碍导线定位方法
CN107481279A (zh) 一种单目视频深度图计算方法
CN110322485A (zh) 一种异构多相机成像系统的快速图像配准方法
CN111060006A (zh) 一种基于三维模型的视点规划方法
Peng et al. Binocular-vision-based structure from motion for 3-D reconstruction of plants
CN116071424A (zh) 基于单目视觉的果实空间坐标定位方法
Paturkar et al. 3D reconstruction of plants under outdoor conditions using image-based computer vision
Kehl et al. Automatic illumination‐invariant image‐to‐geometry registration in outdoor environments
Xinmei et al. Passive measurement method of tree height and crown diameter using a smartphone
Atik et al. An automatic image matching algorithm based on thin plate splines
Yang et al. Research and application of 3D face modeling algorithm based on ICP accurate alignment
CN110363806A (zh) 一种利用不可见光投射特征进行三维空间建模的方法
Liu et al. New anti-blur and illumination-robust combined invariant for stereo vision in human belly reconstruction
Vuletić et al. Close-range multispectral imaging with Multispectral-Depth (MS-D) system
Chen et al. Fast and accurate 3D reconstruction of plants using mvsnet and multi-view images
Cai et al. 3D reconstruction and visual simulation of double-flowered plants based on laser scanning
Li et al. Automatic reconstruction and modeling of dormant jujube trees using three-view image constraints for intelligent pruning applications

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23769707

Country of ref document: EP

Kind code of ref document: A1