WO2020019304A1 - Method and apparatus for acquiring a three-dimensional scene - Google Patents

Method and apparatus for acquiring a three-dimensional scene

Info

Publication number
WO2020019304A1
WO2020019304A1 PCT/CN2018/097458 CN2018097458W
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
scene
panoramic image
training data
model
Prior art date
Application number
PCT/CN2018/097458
Other languages
English (en)
French (fr)
Inventor
陆真国
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2018/097458
Priority to CN201880038658.8A
Publication of WO2020019304A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation

Definitions

  • the present application relates to the field of three-dimensional scene reconstruction, and more particularly, to a method and device for acquiring a three-dimensional scene.
  • 3D reconstruction technology can be applied to indoor scene reconstruction mapping.
  • the 3D reconstruction results, combined with Augmented Reality (AR) technology, can be applied to interior decoration design previews, furniture layout previews, and other applications that require 3D scenes.
  • conventional 3D reconstruction technology usually uses a color (RGB) camera together with a sensor that provides depth information, such as a depth camera or a laser scanner, to reconstruct a 3D scene. Such solutions require moving the camera, or the subject, along a certain path.
  • some reconstruction methods perform 3D scene reconstruction by arranging a large number of complex patterns in the scene in advance.
  • the present application provides a method and device for acquiring a three-dimensional scene, which can effectively improve the convenience and efficiency of the three-dimensional scene reconstruction.
  • a method for acquiring a three-dimensional scene includes: acquiring a two-dimensional panoramic image; inputting the two-dimensional panoramic image into a model to obtain a three-dimensional scene corresponding to the two-dimensional panoramic image.
  • a method for three-dimensional scene reconstruction includes: acquiring training data, the training data including two-dimensional panoramic image samples and corresponding three-dimensional scene samples; and using a machine learning algorithm to train a model with the training data, so that the model has the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
  • an apparatus for acquiring a three-dimensional scene includes: an image acquisition unit for acquiring a two-dimensional panoramic image; and a processing unit for inputting the two-dimensional panoramic image acquired by the image acquisition unit into a model to obtain the three-dimensional scene corresponding to the two-dimensional panoramic image.
  • an apparatus for three-dimensional scene reconstruction includes: an acquisition unit for acquiring training data, where the training data includes two-dimensional panoramic image samples and corresponding three-dimensional scene samples; and a training unit for training a model with the training data using a machine learning algorithm, so that the model has the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
  • an apparatus for three-dimensional scene reconstruction includes a memory and a processor.
  • the memory is configured to store instructions.
  • the processor is configured to execute the instructions stored in the memory, and execution of the instructions causes the processor to perform the method provided by the first aspect or the second aspect.
  • a computer storage medium on which a computer program is stored.
  • when the computer program is executed by a computer, the computer performs the method provided by the first aspect or the second aspect.
  • a computer program product includes instructions that, when executed by a computer, cause the computer to perform the method provided by the first aspect or the second aspect.
  • the solution provided in this application obtains the reconstruction result of the three-dimensional scene corresponding to a two-dimensional panoramic image by inputting the two-dimensional panoramic image into a model and reading the model's output. Therefore, the solution only needs to acquire a two-dimensional panoramic image and then obtains the corresponding three-dimensional scene reconstruction result through a single model; compared with traditional three-dimensional reconstruction technology, this simplifies the implementation process, reduces cost, and improves the convenience and efficiency of three-dimensional reconstruction.
  • FIG. 1 is a schematic flowchart of a method for acquiring a three-dimensional scene according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a method for three-dimensional scene reconstruction provided by an embodiment of the present application.
  • FIG. 3 is a schematic block diagram of an apparatus for acquiring a three-dimensional scene according to an embodiment of the present application.
  • FIG. 4 is a schematic block diagram of an apparatus for 3D scene reconstruction provided by an embodiment of the present application.
  • FIG. 5 is a schematic block diagram of a three-dimensional scene reconstruction system according to an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a method 100 for acquiring a three-dimensional scene according to an embodiment of the present application.
  • the method 100 includes the following steps.
  • the two-dimensional panoramic image mentioned herein refers to an image obtained by "panoramic shooting".
  • "panoramic shooting" means that the shooting position of the capture device is fixed while multiple photos are taken at different angles and directions over the surrounding 360 degrees (or another non-zero angle, such as 90 or 270 degrees); the multiple photos are then stitched together into one panoramic picture.
  • optionally, a two-dimensional panoramic image can be obtained by shooting with a panoramic camera, for example, a 270-degree panoramic camera.
  • optionally, an ordinary camera can be used to take multiple pictures in the "panoramic shooting" manner, and related software (such as Photoshop) can then stitch the multiple pictures into one picture to obtain a two-dimensional panoramic image.
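The multi-photo stitching just described can be sketched in a few lines. This is an illustrative NumPy toy, not the application's method: it assumes an equirectangular panorama with one column per degree of yaw, photos taken at known headings, and nearest-neighbour resampling (no blending or feature alignment, which real stitching software performs).

```python
import numpy as np

def stitch_to_equirectangular(photos, yaws_deg, hfov_deg,
                              pano_width=360, pano_height=120):
    """Paste photos shot from one fixed position at known yaw headings into
    an equirectangular canvas (one column per degree of yaw).

    photos   : list of HxWx3 uint8 arrays
    yaws_deg : compass heading of each photo's center, in degrees
    hfov_deg : horizontal field of view of the camera, in degrees
    """
    pano = np.zeros((pano_height, pano_width, 3), dtype=np.uint8)
    for photo, yaw in zip(photos, yaws_deg):
        h, w, _ = photo.shape
        cols = int(round(hfov_deg))                  # columns this photo covers
        start = int(round(yaw - hfov_deg / 2.0)) % pano_width
        # Nearest-neighbour resample of the photo onto the covered strip.
        src_rows = np.arange(pano_height) * h // pano_height
        src_cols = np.arange(cols) * w // cols
        strip = photo[np.ix_(src_rows, src_cols)]
        for j in range(cols):
            pano[:, (start + j) % pano_width] = strip[:, j]
    return pano
```

Columns not covered by any photo stay black, which is why a 270-degree panorama (as in the example above) leaves a 90-degree blank band.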
  • the model involved herein has the following function (which may also be called a functional relationship): it takes two-dimensional panoramic image information as input and outputs the corresponding three-dimensional scene structure information.
  • the solution provided by this application only needs to acquire a two-dimensional panoramic image and then obtains the corresponding three-dimensional scene reconstruction result through a model; compared with traditional three-dimensional reconstruction technology, the implementation process is simplified, cost is reduced, and the convenience and efficiency of three-dimensional reconstruction are improved.
  • the model used herein to receive a two-dimensional panoramic image and output a three-dimensional scene can be trained by machine learning methods.
  • the model is trained using a supervised learning algorithm.
  • the training data used in the model training process includes two-dimensional panoramic image samples and their corresponding three-dimensional scene samples.
  • the three-dimensional scene sample is a three-dimensional scene reconstructed based on the two-dimensional panoramic image sample; alternatively, the two-dimensional panoramic image sample is a two-dimensional panoramic image generated in the three-dimensional scene sample.
  • the two-dimensional panoramic image samples and three-dimensional scene samples in the training data of the model correspond to the input and output during the training of the model, respectively.
  • optionally, as one way of obtaining training data, the training data is obtained in an actual scene; such training data is referred to herein as actual scene training data.
  • the actual scene training data includes a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image.
  • specifically, for an actual scene such as an indoor scene, a panoramic camera is used to capture a two-dimensional panoramic image A; a three-dimensional reconstruction tool is then used to obtain the corresponding three-dimensional scene reconstruction result A′ from the captured image A.
  • the two-dimensional panoramic image A and the three-dimensional scene reconstruction result A′ are, respectively, a two-dimensional panoramic image sample and a three-dimensional scene sample used for training the model.
  • the 3D reconstruction tool that obtains the 3D scene reconstruction result A′ from the 2D panoramic image A can be any existing 3D reconstruction technology, for example, 3D reconstruction based on a color camera combined with a depth sensor, 3D reconstruction using templates arranged in the scene, laser scanner reconstruction, or 3D reconstruction based on a consumer depth camera such as Microsoft Kinect; this application is not limited in this regard.
  • optionally, as another way of obtaining training data, the training data is acquired in a virtual scene; such training data is referred to herein as virtual scene training data.
  • the virtual scene training data includes a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
  • specifically, a three-dimensional virtual scene B′ is generated by computer virtualization technology, and a two-dimensional panoramic image B is then generated in that virtual scene.
  • the two-dimensional panoramic image B and the three-dimensional virtual scene B′ are, respectively, a two-dimensional panoramic image sample and a three-dimensional scene sample used for training the model.
  • the computer virtualization technology used to generate the three-dimensional virtual scene B′ may be any existing technology that can generate a three-dimensional virtual scene, for example, computer graphics technology.
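Generating a panorama inside a virtual scene can be illustrated with a toy renderer. The sketch below is our construction, not the application's: it places a camera at the center of an empty axis-aligned box room and ray-casts one ray per equirectangular pixel, producing a depth panorama that could serve as a panorama-aligned stand-in for the 3-D scene sample B′.

```python
import numpy as np

def panoramic_depth_of_box_room(width, height, half_extents=(4.0, 3.0, 1.5)):
    """Render the equirectangular depth panorama seen by a camera at the
    center of an empty axis-aligned box room (a stand-in for a virtual 3D
    scene).  Returns an HxW array of distances to the nearest wall."""
    hx, hy, hz = half_extents
    # Pixel grid -> viewing angles: yaw spans 360 deg, pitch spans 180 deg.
    yaw = (np.arange(width) + 0.5) / width * 2.0 * np.pi - np.pi
    pitch = np.pi / 2.0 - (np.arange(height) + 0.5) / height * np.pi
    yaw, pitch = np.meshgrid(yaw, pitch)
    # Unit ray direction for every pixel.
    dx = np.cos(pitch) * np.cos(yaw)
    dy = np.cos(pitch) * np.sin(yaw)
    dz = np.sin(pitch)
    eps = 1e-12
    # Distance at which each ray hits each pair of parallel walls; the
    # nearest of the three is the visible wall.
    tx = hx / np.maximum(np.abs(dx), eps)
    ty = hy / np.maximum(np.abs(dy), eps)
    tz = hz / np.maximum(np.abs(dz), eps)
    return np.minimum(np.minimum(tx, ty), tz)
```

A real pipeline would render color panoramas of textured virtual rooms; the point here is only that the panorama and the scene geometry come from the same synthetic model, so the pair is perfectly labeled.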
  • optionally, as yet another way, training data is obtained in both an actual scene and a virtual scene; in other words, the training data for training the model is obtained from the actual scene and the virtual scene, respectively.
  • the training data then includes both the actual scene training data and the virtual scene training data described above.
  • for an actual scene such as an indoor scene, a panoramic camera is used to capture a two-dimensional panoramic image A; a three-dimensional reconstruction tool is then used to obtain the corresponding three-dimensional scene reconstruction result A′ from the captured image A.
  • the image A and the reconstruction result A′ are, respectively, a two-dimensional panoramic image sample and a three-dimensional scene sample for training the model.
  • in addition, a three-dimensional virtual scene B′ is generated by computer virtualization technology, and a two-dimensional panoramic image B is then generated in that virtual scene.
  • the image B and the virtual scene B′ are, respectively, a two-dimensional panoramic image sample and a three-dimensional scene sample used for training the model.
  • that is, the two-dimensional panoramic image samples used for training the model include the image A captured in the actual scene and the image B generated in the virtual scene; the three-dimensional scene samples include the reconstruction result A′ obtained from the actual scene and the three-dimensional virtual scene B′ generated by computer virtualization technology.
  • the two-dimensional panoramic image sample A corresponds to the three-dimensional scene sample A′, and the two-dimensional panoramic image sample B corresponds to the three-dimensional scene sample B′.
  • a model is trained with two-dimensional panoramic image samples and three-dimensional scene samples so that the model has the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene; the model can then be used to obtain the reconstruction result of the three-dimensional scene corresponding to a two-dimensional panoramic image.
  • the supervised learning algorithm used to train the model may be any of the following techniques: a decision tree, a random forest, or a support vector machine.
  • when the model is trained as a decision tree, the model can be called a decision tree; when the model is trained as a random forest, it can be called a random forest; when the model is trained using a support vector machine, it can be called a support vector machine.
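As a concrete illustration of this supervised setup, here is a hedged scikit-learn sketch. The data is synthetic (a made-up brightness-to-depth rule standing in for real panorama/scene sample pairs), the flattened-image encoding and all names are our choices, and a production model would be far more elaborate.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def make_pair():
    """One hypothetical training pair: an 8x16 'panorama' (input) and its
    8x16 depth map (output), linked by a toy brightness-to-depth rule."""
    pano = rng.uniform(0.0, 1.0, size=(8, 16))
    depth = 1.0 + 4.0 * pano        # synthetic ground-truth relationship
    return pano.ravel(), depth.ravel()

# 200 (panorama sample, scene sample) pairs, flattened to feature vectors.
X, Y = map(np.array, zip(*(make_pair() for _ in range(200))))

# A single decision-tree regressor plays the role of the trained model:
# panorama in, scene (here, a depth map) out.
model = DecisionTreeRegressor(max_depth=12, random_state=0).fit(X, Y)

# Inference on a new panorama.
pred = model.predict(make_pair()[0].reshape(1, -1))[0]
```

The same skeleton applies to the random-forest and support-vector variants named above by swapping in `RandomForestRegressor` or an SVM regressor.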
  • it should be noted that the model proposed herein, which outputs three-dimensional scene information from input two-dimensional panoramic image information, may be pre-trained and used directly in practical applications.
  • the solution provided in this application can be applied to indoor three-dimensional scene reconstruction.
  • the two-dimensional panoramic image samples used in training the model are obtained in an actual indoor scene.
  • the three-dimensional scene samples used in training the model are generated based on indoor virtual scenes.
  • the model obtained based on the indoor scene training is suitable for processing a three-dimensional scene reconstruction in an indoor scene, that is, the two-dimensional panoramic image in step S110 is a two-dimensional panoramic image captured by a panoramic camera in an indoor scene.
  • the solution provided in this application can also be applied to 3D scene reconstruction in other occasions, such as outdoor 3D scene reconstruction.
  • the two-dimensional panoramic image samples used in training the model are obtained in an actual outdoor scene.
  • the three-dimensional scene samples used in training the model are generated based on outdoor virtual scenes.
  • the model obtained based on the outdoor scene training is suitable for processing a three-dimensional scene reconstruction in an outdoor scene, that is, the two-dimensional panoramic image in step S110 is a two-dimensional panoramic image captured by a panoramic camera in an outdoor scene.
  • as can be seen above, the solution provided by this application only needs to acquire a two-dimensional panoramic image and then obtains the corresponding three-dimensional scene reconstruction result through a model; compared with traditional three-dimensional reconstruction technology, the implementation process is simplified, cost is reduced, and the convenience and efficiency of three-dimensional reconstruction are improved.
  • an embodiment of the present application further provides a method 200 for three-dimensional scene reconstruction.
  • the method 200 includes the following steps.
  • the training data may be acquired in any of the three ways of acquiring training data as described above.
  • a machine learning algorithm is used to train the model through the training data, so that the model has a function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
  • any one of the following supervised learning algorithms may be used to train the model based on the training data obtained in S210: a decision tree, a random forest, or a support vector machine.
  • the model is trained with two-dimensional panoramic image samples and three-dimensional scene samples so that it has the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene; the model can thus be used to obtain, in a relatively fast and efficient manner, the reconstruction result of the three-dimensional scene corresponding to a two-dimensional panoramic image.
  • FIG. 3 is a schematic block diagram of an apparatus 300 for acquiring a three-dimensional scene according to an embodiment of the present application.
  • the device 300 includes an image acquisition unit 310 and a processing unit 320.
  • the image acquisition unit 310 is configured to acquire a two-dimensional panoramic image.
  • the image acquisition unit 310 is an image acquisition device having a "panoramic shooting" function, such as a panoramic camera.
  • the image acquisition unit 310 is a 270-degree panoramic camera.
  • the image acquisition unit 310 includes an ordinary camera and an image stitching module, where the ordinary camera shoots multiple photos from a fixed position while rotating through a range of angles, and the image stitching module stitches the multiple photos together.
  • the processing unit 320 is configured to input the two-dimensional panoramic image obtained by the image acquisition unit 310 into a model, and obtain a three-dimensional scene corresponding to the two-dimensional panoramic image.
  • the model receives two-dimensional panoramic images and can output corresponding three-dimensional scene structure information.
  • the processing unit 320 may be implemented by a processor or a processor-related circuit.
  • the solution provided by this application only needs to acquire a two-dimensional panoramic image and then obtains the corresponding three-dimensional scene reconstruction result through a model; compared with traditional three-dimensional reconstruction technology, the implementation process is simplified, cost is reduced, and the convenience and efficiency of three-dimensional reconstruction are improved.
  • the apparatus 300 may correspond to the entity that performs the method 100 in the foregoing embodiment.
  • the model is obtained through training data training, where the training data includes two-dimensional panoramic image samples and corresponding three-dimensional scene samples.
  • the training data includes actual scene training data, where the actual scene training data includes a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image.
  • the training data includes virtual scene training data, where the virtual scene training data includes a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
  • the training data includes actual scene training data and virtual scene training data.
  • the actual scene training data includes a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image; the virtual scene training data includes a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
  • the model is any one of the following models: a decision tree, a random forest, and a support vector machine.
  • the apparatus 300 provided in the embodiment of the present application may be applied to indoor three-dimensional scene reconstruction.
  • the apparatus 300 provided in the embodiment of the present application may be applied to 3D scene reconstruction in other occasions, for example, outdoor 3D scene reconstruction.
  • an embodiment of the present application further provides an apparatus 400 for 3D scene reconstruction.
  • the apparatus 400 includes an obtaining unit 410 and a training unit 420.
  • the obtaining unit 410 is configured to obtain training data, where the training data includes a two-dimensional panoramic image sample and a corresponding three-dimensional scene sample.
  • the training unit 420 is configured to use a machine learning algorithm to train a model through the training data, so that the model has a function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
  • the model is trained with two-dimensional panoramic image samples and three-dimensional scene samples so that it has the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene; the model can thus be used to obtain, in a relatively fast and efficient manner, the reconstruction result of the three-dimensional scene corresponding to a two-dimensional panoramic image.
  • Both the obtaining unit 410 and the training unit 420 may be implemented by a processor or a processor-related circuit.
  • the obtaining unit 410 is configured to obtain actual scene training data, where the actual scene training data includes a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image.
  • the obtaining unit 410 is configured to obtain virtual scene training data, where the virtual scene training data includes a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
  • the obtaining unit 410 is configured to obtain actual scene training data and virtual scene training data, where the actual scene training data includes a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image.
  • the training data of the virtual scene includes a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
  • the machine learning algorithm is any one of the following algorithms: a decision tree, a random forest, and a support vector machine.
  • an embodiment of the present application further provides a system 500 for reconstructing a three-dimensional scene.
  • the system 500 includes a panoramic camera device 510 and a three-dimensional reconstruction device 520.
  • the three-dimensional reconstruction device 520 includes a model 521, which has the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
  • the panoramic camera device 510 is configured to acquire a two-dimensional panoramic image.
  • the three-dimensional reconstruction device 520 is configured to obtain a two-dimensional panoramic image from the panoramic camera device 510, and input the two-dimensional panoramic image into the model 521, and obtain a three-dimensional scene corresponding to the two-dimensional panoramic image through the output of the model 521.
  • the system 500 further includes a model training device 530 for training the model 521 by a machine learning method.
  • the training data used in the training process includes two-dimensional panoramic image samples and three-dimensional scene samples.
  • model training device 530 is configured to acquire training data by using any one of the three methods for acquiring training data described above.
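The data flow of system 500 (capture, then model 521, then scene) can be sketched with plain-Python stand-ins. Every class and function name below is an illustrative stand-in for the patent's components, not a real API; the stubs merely show how the devices hand data to one another.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PanoramicCameraDevice:            # stand-in for device 510
    capture: Callable[[], List[List[float]]]

@dataclass
class ReconstructionDevice:             # stand-in for device 520
    model: Callable[[List[List[float]]], dict]   # stand-in for model 521

    def reconstruct(self, camera: PanoramicCameraDevice) -> dict:
        pano = camera.capture()          # 1. acquire the 2-D panorama
        return self.model(pano)          # 2. the model outputs the 3-D scene

# Wiring the pipeline with trivial stubs: a fixed 2x2 "panorama" and a
# made-up brightness-to-depth model.
camera = PanoramicCameraDevice(capture=lambda: [[0.2, 0.8], [0.5, 0.1]])
model_521 = lambda pano: {"depth": [[1 + 4 * px for px in row] for row in pano]}
scene = ReconstructionDevice(model=model_521).reconstruct(camera)
```

A display device, as the text notes, would simply be a third component consuming `scene` (and optionally the panorama) at the end of this chain.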
  • optionally, the system 500 further includes a display device (not shown in FIG. 5) for presenting the three-dimensional scene obtained by the three-dimensional reconstruction device 520, or for simultaneously displaying the two-dimensional panoramic image obtained by the panoramic camera device 510 and the three-dimensional scene obtained by the three-dimensional reconstruction device 520.
  • An embodiment of the present application further provides an apparatus for three-dimensional scene reconstruction.
  • the apparatus includes: a memory and a processor.
  • the memory is configured to store instructions.
  • the processor is configured to execute the instructions stored in the memory. The execution of the instructions causes the processor to execute the method 100 or the method 200 provided by the foregoing method embodiment.
  • An embodiment of the present application further provides a computer storage medium on which a computer program is stored.
  • when the computer program is executed by a computer, the computer performs the method 100 or the method 200 provided in the foregoing method embodiments.
  • An embodiment of the present application further provides a computer program product including instructions, which when executed by a computer, cause the computer to execute the method 100 or the method 200 provided by the foregoing method embodiment.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (such as by infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)).
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of units is only a division of logical functions; in actual implementation, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling, or communication connection displayed or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

Provided is a method for acquiring a three-dimensional scene, the method including: acquiring a two-dimensional panoramic image; and inputting the two-dimensional panoramic image into a model to obtain a three-dimensional scene corresponding to the two-dimensional panoramic image. Since only a two-dimensional panoramic image needs to be acquired, and the corresponding three-dimensional scene reconstruction result can then be obtained through a single model, the implementation process is simplified, cost is reduced, and the convenience and efficiency of three-dimensional reconstruction are improved compared with traditional three-dimensional reconstruction technology.

Description

Method and apparatus for acquiring a three-dimensional scene
Copyright notice
The content disclosed in this patent document contains material that is subject to copyright protection. The copyright belongs to the copyright owner. The copyright owner has no objection to anyone reproducing this patent document or this patent disclosure as it appears in the official records and files of the Patent and Trademark Office.
Technical field
The present application relates to the field of three-dimensional scene reconstruction, and more particularly, to a method and apparatus for acquiring a three-dimensional scene.
Background
3D reconstruction technology can be applied to indoor scene reconstruction and mapping. 3D reconstruction results, combined with Augmented Reality (AR) technology, can be applied to interior decoration design previews, furniture layout previews, and other applications that require 3D scenes.
Traditional 3D reconstruction technology usually uses a color (RGB) camera together with a sensor that provides depth information, such as a depth camera or a laser scanner, to reconstruct a 3D scene. Such solutions require moving the camera, or the subject, along a certain path. In addition, some reconstruction approaches arrange a large number of complex patterns in the scene in advance.
As can be seen from the above, existing 3D scene reconstruction technologies have complex implementation processes.
Summary of the invention
The present application provides a method and apparatus for acquiring a three-dimensional scene, which can effectively improve the convenience and efficiency of three-dimensional scene reconstruction.
In a first aspect, a method for acquiring a three-dimensional scene is provided, the method including: acquiring a two-dimensional panoramic image; and inputting the two-dimensional panoramic image into a model to obtain a three-dimensional scene corresponding to the two-dimensional panoramic image.
In a second aspect, a method for three-dimensional scene reconstruction is provided, the method including: acquiring training data, the training data including two-dimensional panoramic image samples and corresponding three-dimensional scene samples; and using a machine learning algorithm to train a model with the training data, so that the model has the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
In a third aspect, an apparatus for acquiring a three-dimensional scene is provided, the apparatus including: an image acquisition unit for acquiring a two-dimensional panoramic image; and a processing unit for inputting the two-dimensional panoramic image acquired by the image acquisition unit into a model to obtain the three-dimensional scene corresponding to the two-dimensional panoramic image.
In a fourth aspect, an apparatus for three-dimensional scene reconstruction is provided, the apparatus including: an acquisition unit for acquiring training data, the training data including two-dimensional panoramic image samples and corresponding three-dimensional scene samples; and a training unit for training a model with the training data using a machine learning algorithm, so that the model has the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
In a fifth aspect, an apparatus for three-dimensional scene reconstruction is provided, the apparatus including a memory and a processor, the memory being configured to store instructions and the processor being configured to execute the instructions stored in the memory, execution of which causes the processor to perform the method provided by the first aspect or the second aspect.
In a sixth aspect, a computer storage medium is provided on which a computer program is stored; when the computer program is executed by a computer, the computer performs the method provided by the first aspect or the second aspect.
In a seventh aspect, a computer program product including instructions is provided; when the instructions are executed by a computer, the computer performs the method provided by the first aspect or the second aspect.
The solution provided by this application inputs a two-dimensional panoramic image into a model and obtains, from the model's output, the reconstruction result of the three-dimensional scene corresponding to the two-dimensional panoramic image. Therefore, the solution only needs to acquire a two-dimensional panoramic image and then obtains the corresponding three-dimensional scene reconstruction result through a single model; compared with traditional three-dimensional reconstruction technology, this simplifies the implementation process, reduces cost, and improves the convenience and efficiency of three-dimensional reconstruction.
Brief description of the drawings
FIG. 1 is a schematic flowchart of a method for acquiring a three-dimensional scene according to an embodiment of the present application.
FIG. 2 is a schematic flowchart of a method for three-dimensional scene reconstruction according to an embodiment of the present application.
FIG. 3 is a schematic block diagram of an apparatus for acquiring a three-dimensional scene according to an embodiment of the present application.
FIG. 4 is a schematic block diagram of an apparatus for three-dimensional scene reconstruction according to an embodiment of the present application.
FIG. 5 is a schematic block diagram of a system for three-dimensional scene reconstruction according to an embodiment of the present application.
Detailed description
FIG. 1 is a schematic flowchart of a method 100 for acquiring a three-dimensional scene according to an embodiment of the present application. The method 100 includes the following steps.
S110: Acquire a two-dimensional panoramic image.
The two-dimensional panoramic image mentioned herein refers to an image obtained by "panoramic shooting". "Panoramic shooting" means that the shooting position of the capture device is fixed while multiple photos are taken at different angles and directions over the surrounding 360 degrees (or another non-zero angle, such as 90 or 270 degrees); the multiple photos are then stitched into one panoramic picture.
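The panoramic shooting just defined maps every viewing direction around the fixed camera position to one pixel of the stitched image. A minimal sketch of that mapping for the common equirectangular layout (our assumption; the application does not prescribe a particular projection):

```python
def direction_to_pixel(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction to equirectangular pixel coordinates:
    yaw_deg in [-180, 180) (0 = straight ahead), pitch_deg in [-90, 90]
    (+90 = straight up).  Columns sweep yaw; rows sweep pitch top-down."""
    u = (yaw_deg + 180.0) / 360.0 * width     # fraction of a full turn -> column
    v = (90.0 - pitch_deg) / 180.0 * height   # top of the image = looking up
    return int(u) % width, min(int(v), height - 1)
```

For a 270-degree (rather than full 360-degree) capture, a quarter of the yaw range simply has no pixels filled in.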
Optionally, a two-dimensional panoramic image can be obtained by shooting with a panoramic camera, for example, a 270-degree panoramic camera.
Optionally, an ordinary camera can be used to take multiple pictures in the "panoramic shooting" manner, and related software (such as Photoshop) can then stitch the multiple pictures into one picture, thereby obtaining a two-dimensional panoramic image.
S120: Input the two-dimensional panoramic image into a model to obtain a three-dimensional scene corresponding to the two-dimensional panoramic image.
The model involved herein has the following function (which may also be called a functional relationship): it takes two-dimensional panoramic image information as input and outputs the corresponding three-dimensional scene structure information.
In the embodiment of this application, a two-dimensional panoramic image is input into the model, and the reconstruction result of the corresponding three-dimensional scene is obtained from the model's output. Therefore, the solution only needs to acquire a two-dimensional panoramic image and then obtains the corresponding three-dimensional scene reconstruction result through a single model; compared with traditional three-dimensional reconstruction technology, this simplifies the implementation process, reduces cost, and improves the convenience and efficiency of three-dimensional reconstruction.
The model used herein to receive a two-dimensional panoramic image and output a three-dimensional scene can be trained by machine learning methods.
Specifically, the model is trained using a supervised learning algorithm. The training data used in the training process includes two-dimensional panoramic image samples and their corresponding three-dimensional scene samples. A three-dimensional scene sample is a three-dimensional scene reconstructed based on the two-dimensional panoramic image sample; alternatively, the two-dimensional panoramic image sample is a two-dimensional panoramic image generated in the three-dimensional scene sample.
It should be understood that the two-dimensional panoramic image samples and the three-dimensional scene samples in the training data correspond, respectively, to the input and the output during training of the model.
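The functional relationship just described can be written compactly. The notation below (f, θ, P, S, L) is ours, not the application's:

```latex
\hat{S} = f_\theta(P), \qquad
\theta^{*} = \arg\min_{\theta} \sum_{i=1}^{N} L\!\left(f_\theta(P_i),\, S_i\right)
```

where P is a two-dimensional panoramic image, Ŝ is the predicted three-dimensional scene, (Pᵢ, Sᵢ) are the training pairs (panorama samples and their corresponding scene samples), and L is a loss measuring how far the predicted scene is from the sample scene.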
Specifically, there are several ways to obtain the training data used to train the model.
Optionally, as one way, training data is obtained in an actual scene; such data is referred to herein as actual scene training data. The actual scene training data includes a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image.
Specifically, for an actual scene, for example an indoor scene, a panoramic camera is used to capture a two-dimensional panoramic image A; a three-dimensional reconstruction tool is then used to obtain the corresponding three-dimensional scene reconstruction result A′ from the captured image A. The image A and the reconstruction result A′ are, respectively, a two-dimensional panoramic image sample and a three-dimensional scene sample used for training the model.
The 3D reconstruction tool that obtains the reconstruction result A′ from the 2D panoramic image A can be any existing 3D reconstruction technology, for example, 3D reconstruction based on a color camera combined with a depth sensor, 3D reconstruction using templates in the scene, laser scanner reconstruction, or 3D reconstruction based on a consumer depth camera such as Microsoft Kinect; this application is not limited in this regard.
Optionally, as another way of obtaining training data, training data is acquired in a virtual scene; such data is referred to herein as virtual scene training data. The virtual scene training data includes a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
Specifically, a three-dimensional virtual scene B′ is generated by computer virtualization technology, and a two-dimensional panoramic image B is then generated in that virtual scene. The image B and the virtual scene B′ are, respectively, a two-dimensional panoramic image sample and a three-dimensional scene sample used for training the model.
The computer virtualization technology used to generate the three-dimensional virtual scene B′ may be any existing technology capable of generating a three-dimensional virtual scene, for example, computer graphics technology.
Optionally, as yet another way, training data is obtained in both an actual scene and a virtual scene; in other words, the training data for training the model is obtained from the actual scene and the virtual scene, respectively, and includes both the actual scene training data and the virtual scene training data described above.
For an actual scene, for example an indoor scene, a panoramic camera captures a two-dimensional panoramic image A, and a three-dimensional reconstruction tool obtains the corresponding reconstruction result A′ from the captured image A; A and A′ are, respectively, a two-dimensional panoramic image sample and a three-dimensional scene sample for training the model. In addition, a three-dimensional virtual scene B′ is generated by computer virtualization technology, and a two-dimensional panoramic image B is generated in it; B and B′ are likewise a two-dimensional panoramic image sample and a three-dimensional scene sample.
That is, the two-dimensional panoramic image samples used for training include the image A captured in the actual scene and the image B generated in the virtual scene; the three-dimensional scene samples include the reconstruction result A′ obtained from the actual scene and the virtual scene B′ generated by computer virtualization technology. Sample A corresponds to sample A′, and sample B corresponds to sample B′.
It should be understood that the richer the training data, the better the resulting model.
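One cheap way to enrich the training data, following the observation above: for equirectangular panoramas, rotating the capture pose about the vertical axis is exactly a circular shift of the image columns, so each (panorama, scene) pair yields many extra valid pairs for free. A NumPy sketch (our suggestion, assuming the scene sample is stored as an aligned depth panorama; the application does not prescribe this representation):

```python
import numpy as np

def augment_by_rotation(pano, depth, shift_cols):
    """Rotate a (panorama, depth-panorama) training pair about the vertical
    axis.  For equirectangular images a yaw rotation is exactly a circular
    shift of the columns, so the augmented pair is as valid as the original.
    Assumes the 3-D scene sample is an aligned depth panorama."""
    return (np.roll(pano, shift_cols, axis=1),
            np.roll(depth, shift_cols, axis=1))
```

Applying every possible shift turns one captured pair into width-many pairs, at no extra capture or reconstruction cost.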
本申请实施例中,通过二维全景图像样本与三维场景样本训练模型,使得该模型具备接收二维全景图像,输出三维场景的功能,从而使得可以通过该模型,以较为快捷高效地方式获得二维全景图像对应的三维场景重建结果。
Optionally, the supervised learning algorithm used to train the model may be any one of the following techniques: decision tree, random forest, or support vector machine.
When the model is trained with the decision-tree approach, it may be called a decision tree; when trained with the random-forest approach, it may be called a random forest; when trained with the support-vector-machine approach, it may be called a support vector machine.
It should be noted that the model proposed herein, which outputs three-dimensional scene information from input two-dimensional panoramic image information, may be trained in advance and then used directly in practical applications.
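The train-in-advance, use-directly pattern with one of the named algorithms can be sketched as follows. This is only an illustration under heavy assumptions: the patent does not fix how a panorama is encoded as features or how a scene is encoded as outputs, so here a toy renderer maps room half-sizes (hx, hy) to a 1-row "panorama", and scikit-learn's `RandomForestRegressor` learns the inverse mapping; `fake_panorama` and the parameter choices are ours.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def fake_panorama(room, width=32):
    """Toy stand-in renderer: a width-pixel depth 'panorama' that
    depends deterministically on room parameters (hx, hy)."""
    yaw = np.linspace(-np.pi, np.pi, width, endpoint=False)
    hx, hy = room
    return np.minimum(hx / np.maximum(np.abs(np.cos(yaw)), 1e-6),
                      hy / np.maximum(np.abs(np.sin(yaw)), 1e-6))

# Training pairs: panoramic image sample -> scene-parameter sample.
rooms = rng.uniform(1.0, 5.0, size=(200, 2))
panos = np.stack([fake_panorama(r) for r in rooms])

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(panos, rooms)                 # trained in advance, offline

# Later, in the application, the stored model is used directly.
new_room = np.array([2.0, 4.0])
pred = model.predict(fake_panorama(new_room)[None, :])[0]
```

A decision tree or support vector machine could be substituted for the forest with the same fit/predict interface; a production system would of course use real rendered or captured panoramas and a richer scene encoding.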
Optionally, the solution provided by the present application may be applied to indoor three-dimensional scene reconstruction.
For example, the two-dimensional panoramic image samples used in training the model are acquired in real indoor scenes; as another example, the three-dimensional scene samples used in training the model are generated from virtual indoor scenes. It should be understood that a model trained on indoor scenes is suited to three-dimensional scene reconstruction in indoor settings; that is, the two-dimensional panoramic image in step S110 is one captured indoors with a panoramic camera.
Optionally, the solution provided by the present application may also be applied to three-dimensional scene reconstruction in other settings, for example outdoor three-dimensional scene reconstruction.
For example, the two-dimensional panoramic image samples used in training the model are acquired in real outdoor scenes; as another example, the three-dimensional scene samples used in training the model are generated from virtual outdoor scenes. It should be understood that a model trained on outdoor scenes is suited to three-dimensional scene reconstruction in outdoor settings; that is, the two-dimensional panoramic image in step S110 is one captured outdoors with a panoramic camera.
As can be seen from the above, the solution provided by the present application only needs to acquire a two-dimensional panoramic image and pass it through a single model to obtain the corresponding three-dimensional reconstruction result. Compared with conventional three-dimensional reconstruction techniques, this simplifies the implementation process, reduces cost, and improves the convenience and efficiency of three-dimensional reconstruction.
As shown in FIG. 2, an embodiment of the present application further provides a method 200 for three-dimensional scene reconstruction. The method 200 includes the following steps.
S210: Acquire training data, the training data including two-dimensional panoramic image samples and their corresponding three-dimensional scene samples.
Specifically, the training data may be acquired in any of the three ways described above.
S220: Use a machine learning algorithm to train a model with the training data, so that the model acquires the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
Optionally, any of the following supervised learning algorithms may be used to train the model on the training data acquired in S210: decision tree, random forest, or support vector machine.
Optionally, other machine learning algorithms may also be used to train the model.
In the embodiments of the present application, the model is trained with two-dimensional panoramic image samples and three-dimensional scene samples so that it acquires the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene, whereby the three-dimensional scene reconstruction result corresponding to a two-dimensional panoramic image can be obtained through the model in a relatively fast and efficient manner.
The method embodiments of the present application have been described above; the apparatus embodiments are described below. It should be understood that the apparatus embodiments correspond to the method embodiments, and the related solutions and their technical effects apply equally to the apparatus embodiments.
FIG. 3 is a schematic block diagram of an apparatus 300 for acquiring a three-dimensional scene provided by an embodiment of the present application. The apparatus 300 includes an image acquisition unit 310 and a processing unit 320.
The image acquisition unit 310 is configured to acquire a two-dimensional panoramic image.
Optionally, the image acquisition unit 310 is an image capture device with a panoramic-shooting function, for example a panoramic video or still camera; as another example, the image acquisition unit 310 is a 270-degree panoramic camera.
Optionally, the image acquisition unit 310 includes an ordinary camera and an image stitching module, where the ordinary camera is used to take multiple photographs from a fixed position while rotating through certain angles, and the image stitching module is used to stitch the photographs together.
The processing unit 320 is configured to input the two-dimensional panoramic image acquired by the image acquisition unit 310 into a model to obtain the three-dimensional scene corresponding to the two-dimensional panoramic image.
The model receives a two-dimensional panoramic image and can output the corresponding three-dimensional scene structure information.
The processing unit 320 may be implemented by a processor or processor-related circuitry.
In the embodiments of the present application, a two-dimensional panoramic image is input into the model, and the three-dimensional scene reconstruction result corresponding to that image is obtained from the model's output. Therefore, the solution provided by the present application only needs to acquire a two-dimensional panoramic image and pass it through a single model to obtain the corresponding three-dimensional reconstruction result, which, compared with conventional three-dimensional reconstruction techniques, simplifies the implementation process, reduces cost, and improves the convenience and efficiency of three-dimensional reconstruction.
It should be understood that the apparatus 300 may correspond to the entity performing the method 100 in the embodiments above.
Optionally, in this embodiment, the model is obtained by training with training data, where the training data include two-dimensional panoramic image samples and their corresponding three-dimensional scene samples.
Optionally, in this embodiment, the training data include real-scene training data, which include two-dimensional panoramic images captured in real scenes and three-dimensional scenes reconstructed from the captured two-dimensional panoramic images.
Optionally, in this embodiment, the training data include virtual-scene training data, which include virtual three-dimensional scenes and two-dimensional panoramic images generated in those virtual three-dimensional scenes.
Optionally, in this embodiment, the training data include both real-scene training data and virtual-scene training data, where the real-scene training data include two-dimensional panoramic images captured in real scenes and three-dimensional scenes reconstructed from the captured images, and the virtual-scene training data include virtual three-dimensional scenes and two-dimensional panoramic images generated in those virtual scenes.
Optionally, in this embodiment, the model is any one of the following: a decision tree, a random forest, or a support vector machine.
Optionally, the apparatus 300 provided by this embodiment of the present application may be applied to indoor three-dimensional scene reconstruction.
Optionally, the apparatus 300 provided by this embodiment of the present application may also be applied to three-dimensional scene reconstruction in other settings, for example outdoor three-dimensional scene reconstruction.
As shown in FIG. 4, an embodiment of the present application further provides an apparatus 400 for three-dimensional scene reconstruction. The apparatus 400 includes an acquisition unit 410 and a training unit 420.
The acquisition unit 410 is configured to acquire training data, the training data including two-dimensional panoramic image samples and their corresponding three-dimensional scene samples.
The training unit 420 is configured to use a machine learning algorithm to train a model with the training data, so that the model acquires the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
In the embodiments of the present application, the model is trained with two-dimensional panoramic image samples and three-dimensional scene samples so that it acquires the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene, whereby the three-dimensional scene reconstruction result corresponding to a two-dimensional panoramic image can be obtained through the model in a relatively fast and efficient manner.
Both the acquisition unit 410 and the training unit 420 may be implemented by a processor or processor-related circuitry.
Optionally, in this embodiment, the acquisition unit 410 is configured to acquire real-scene training data, which include two-dimensional panoramic images captured in real scenes and three-dimensional scenes reconstructed from the captured two-dimensional panoramic images.
Optionally, in this embodiment, the acquisition unit 410 is configured to acquire virtual-scene training data, which include virtual three-dimensional scenes and two-dimensional panoramic images generated in those virtual three-dimensional scenes.
Optionally, in this embodiment, the acquisition unit 410 is configured to acquire both real-scene training data and virtual-scene training data, where the real-scene training data include two-dimensional panoramic images captured in real scenes and three-dimensional scenes reconstructed from the captured images, and the virtual-scene training data include virtual three-dimensional scenes and two-dimensional panoramic images generated in those virtual scenes.
Optionally, in this embodiment, the machine learning algorithm is any one of the following: decision tree, random forest, or support vector machine.
As shown in FIG. 5, an embodiment of the present application further provides a system 500 for reconstructing a three-dimensional scene. The system 500 includes a panoramic camera device 510 and a three-dimensional reconstruction device 520, where the three-dimensional reconstruction device 520 includes a model 521 having the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
The panoramic camera device 510 is configured to acquire a two-dimensional panoramic image.
The three-dimensional reconstruction device 520 is configured to obtain the two-dimensional panoramic image from the panoramic camera device 510, input it into the model 521, and obtain from the output of the model 521 the three-dimensional scene corresponding to the two-dimensional panoramic image.
Optionally, as shown in FIG. 5, the system 500 further includes a model training device 530 configured to train the model 521 by a machine learning method, where the training data used in the training process include two-dimensional panoramic image samples and three-dimensional scene samples.
Optionally, the model training device 530 is configured to acquire the training data in any of the three ways described above.
Optionally, the system 500 may further include a display device (not shown in FIG. 5) configured to present the three-dimensional scene structure obtained by the three-dimensional reconstruction device 520, or further configured to display simultaneously the two-dimensional panoramic image acquired by the panoramic camera device 510 and the three-dimensional scene obtained by the three-dimensional reconstruction device 520.
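The data flow of system 500 (capture, model inference, optional display) can be sketched in software as a small pipeline. All class and method names below are ours, and the stubs stand in for real camera hardware and a real trained model; the sketch only shows how the devices hand data to one another.

```python
from dataclasses import dataclass
from typing import Callable, List

Panorama = List[float]          # stand-in for a 2D panoramic image
Scene = dict                    # stand-in for 3D scene structure info

@dataclass
class ReconstructionSystem:
    """Wires the devices of system 500 together:
    capture -> model inference -> (optional) display."""
    capture: Callable[[], Panorama]       # panoramic camera device 510
    model: Callable[[Panorama], Scene]    # trained model 521 in device 520
    display: Callable[[Panorama, Scene], None] = lambda p, s: None

    def run_once(self) -> Scene:
        pano = self.capture()
        scene = self.model(pano)
        self.display(pano, scene)  # show image and scene side by side
        return scene

# Stubs standing in for real hardware and a trained model.
system = ReconstructionSystem(
    capture=lambda: [0.5] * 8,
    model=lambda pano: {"n_pixels": len(pano), "mean": sum(pano) / len(pano)},
)
scene = system.run_once()
```

The default no-op `display` mirrors the optionality of the display device; a training device 530 would simply swap in a freshly trained callable for `model`.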
An embodiment of the present application further provides an apparatus for three-dimensional scene reconstruction, including a memory and a processor, where the memory is configured to store instructions and the processor is configured to execute the instructions stored in the memory, and execution of the instructions stored in the memory causes the processor to perform the method 100 or the method 200 provided by the method embodiments above.
An embodiment of the present application further provides a computer storage medium storing a computer program which, when executed by a computer, causes the computer to perform the method 100 or the method 200 provided by the method embodiments above.
An embodiment of the present application further provides a computer program product containing instructions which, when executed by a computer, cause the computer to perform the method 100 or the method 200 provided by the method embodiments above.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is merely a division by logical function, and in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
The foregoing is merely the specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art could readily conceive of variations or replacements within the technical scope disclosed in the present application, and these shall all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (27)

  1. A method for acquiring a three-dimensional scene, comprising:
    acquiring a two-dimensional panoramic image;
    inputting the two-dimensional panoramic image into a model to obtain the three-dimensional scene corresponding to the two-dimensional panoramic image.
  2. The method of claim 1, wherein the model is obtained by machine learning training, and the training data used in the machine learning process comprise two-dimensional panoramic image samples and their corresponding three-dimensional scene samples.
  3. The method of claim 2, wherein the training data comprise real-scene training data, the real-scene training data comprising two-dimensional panoramic images captured in real scenes and three-dimensional scenes reconstructed from the captured two-dimensional panoramic images.
  4. The method of claim 2 or 3, wherein the training data comprise virtual-scene training data, the virtual-scene training data comprising virtual three-dimensional scenes and two-dimensional panoramic images generated in the virtual three-dimensional scenes.
  5. The method of any one of claims 1 to 4, wherein the model is a supervised learning training model.
  6. The method of any one of claims 1 to 5, wherein the model is any one of the following: a decision tree, a random forest, or a support vector machine.
  7. The method of any one of claims 1 to 6, wherein acquiring the two-dimensional panoramic image comprises:
    acquiring the two-dimensional panoramic image with a panoramic camera device.
  8. The method of any one of claims 1 to 7, wherein the two-dimensional panoramic image is an indoor two-dimensional panoramic image.
  9. A method for three-dimensional scene reconstruction, comprising:
    acquiring training data, the training data comprising two-dimensional panoramic image samples and their corresponding three-dimensional scene samples;
    using a machine learning algorithm to train a model with the training data, so that the model has the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
  10. The method of claim 9, wherein acquiring the training data comprises:
    acquiring real-scene training data, the real-scene training data comprising two-dimensional panoramic images captured in real scenes and three-dimensional scenes reconstructed from the captured two-dimensional panoramic images.
  11. The method of claim 9 or 10, wherein acquiring the training data comprises:
    acquiring virtual-scene training data, the virtual-scene training data comprising virtual three-dimensional scenes and two-dimensional panoramic images generated in the virtual three-dimensional scenes.
  12. The method of any one of claims 9 to 11, wherein the model is any one of the following: a decision tree, a random forest, or a support vector machine.
  13. An apparatus for acquiring a three-dimensional scene, comprising:
    an image acquisition unit configured to acquire a two-dimensional panoramic image;
    a processing unit configured to input the two-dimensional panoramic image acquired by the image acquisition unit into a model to obtain the three-dimensional scene corresponding to the two-dimensional panoramic image.
  14. The apparatus of claim 13, wherein the model is obtained by machine learning training, and the training data used in the machine learning process comprise two-dimensional panoramic image samples and their corresponding three-dimensional scene samples.
  15. The apparatus of claim 14, wherein the training data comprise real-scene training data, the real-scene training data comprising two-dimensional panoramic images captured in real scenes and three-dimensional scenes reconstructed from the captured two-dimensional panoramic images.
  16. The apparatus of claim 14 or 15, wherein the training data comprise virtual-scene training data, the virtual-scene training data comprising virtual three-dimensional scenes and two-dimensional panoramic images generated in the virtual three-dimensional scenes.
  17. The apparatus of any one of claims 13 to 16, wherein the model is a supervised learning training model.
  18. The apparatus of any one of claims 13 to 17, wherein the model is any one of the following: a decision tree, a random forest, or a support vector machine.
  19. The apparatus of any one of claims 13 to 18, wherein the image acquisition unit is a panoramic camera device.
  20. The apparatus of any one of claims 13 to 19, wherein the two-dimensional panoramic image is an indoor two-dimensional panoramic image.
  21. An apparatus for three-dimensional scene reconstruction, comprising:
    an acquisition unit configured to acquire training data, the training data comprising two-dimensional panoramic image samples and their corresponding three-dimensional scene samples;
    a training unit configured to use a machine learning algorithm to train a model with the training data, so that the model has the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
  22. The apparatus of claim 21, wherein the acquisition unit is configured to acquire real-scene training data, the real-scene training data comprising two-dimensional panoramic images captured in real scenes and three-dimensional scenes reconstructed from the captured two-dimensional panoramic images.
  23. The apparatus of claim 21, wherein the acquisition unit is configured to acquire virtual-scene training data, the virtual-scene training data comprising virtual three-dimensional scenes and two-dimensional panoramic images generated in the virtual three-dimensional scenes.
  24. The apparatus of any one of claims 21 to 23, wherein the model is any one of the following: a decision tree, a random forest, or a support vector machine.
  25. An apparatus for three-dimensional scene reconstruction, comprising a memory and a processor, the memory being configured to store instructions and the processor being configured to execute the instructions stored in the memory, wherein execution of the instructions stored in the memory causes the processor to perform the method of any one of claims 1 to 7 or the method of any one of claims 9 to 12.
  26. A computer storage medium having stored thereon a computer program which, when executed by a computer, causes the computer to perform the method of any one of claims 1 to 7 or the method of any one of claims 9 to 12.
  27. A computer program product containing instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 7 or the method of any one of claims 9 to 12.
PCT/CN2018/097458 2018-07-27 2018-07-27 Method and apparatus for acquiring a three-dimensional scene WO2020019304A1 (zh)



Also Published As

Publication number Publication date
CN110914871A (zh) 2020-03-24


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18927629; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18927629; Country of ref document: EP; Kind code of ref document: A1)