WO2021120052A1 - 3D reconstruction from an insufficient number of images - Google Patents
- Publication number
- WO2021120052A1 (PCT/CN2019/126298)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- images
- user
- image sequence
- silhouettes
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/564—Depth or shape recovery from multiple images from contours
Definitions
- the present invention relates to three-dimensional (3D) reconstruction from a plurality of two-dimensional (2D) images captured of a subject.
- in practice, capturing is limited for various reasons (for example, MVS methods are not able to recover dense depth values on uniformly colored surfaces, etc.). Therefore, an insufficient number of images is captured to reconstruct the subject. Such an insufficient number of images cannot reconstruct a complete 3D model; namely, it leaves big holes on the reconstructed 3D model of the subject.
- a device is provided to close holes on the reconstructed 3D model that are caused by an insufficient number of images.
- a device including: a camera for capturing an image sequence of a subject, a three dimension (3D) reconstruction unit for reconstructing a 3D model from the image sequence, and a model refinement unit for refining the 3D model so as to be fitted to one or more images selected by a user from the image sequence.
- the 3D model is refined based on one or more silhouettes of the subject that are extracted from the one or more selected images.
- the device further includes: a user interface unit for showing one or more silhouettes of the subject that are extracted from the one or more selected images, and for letting the user check whether the one or more silhouettes are accurate or not.
- the 3D model is reconstructed as a set of points, and holes on the 3D model are closed by one or more parts of a set of tangent surfaces computed from the one or more silhouettes, wherein the one or more parts of the set of tangent surfaces are inside a 3D model reconstructed as a 3D mesh from the set of points.
- a method performed by a device includes: capturing an image sequence of a subject, reconstructing a three dimension (3D) model from the image sequence, and refining the 3D model so as to be fitted to one or more images selected by a user from the image sequence.
- a computer-readable storage medium storing a program that, when executed by a processor, causes the processor to perform the method according to the second aspect.
- Fig. 1 depicts an example of a usage scene of a 3D human model reconstruction application according to a first embodiment of the present invention;
- Fig. 2 depicts an example of a block diagram of a hardware configuration;
- Fig. 3 depicts an example of a block diagram of a functional configuration;
- Fig. 4 (a) depicts an example of an overall flowchart of model refinement;
- Fig. 4 (b) depicts an example of a detailed flowchart of the model refinement;
- Fig. 5 (a) depicts an example of a UI shown on the display 117;
- Fig. 5 (b) depicts an example of a UI shown on the display 117;
- Fig. 6 (a) depicts an example of a 3D model 300 with a hole;
- Fig. 6 (b) depicts an example of a tangent surface 301;
- Fig. 6 (c) depicts an example of a 3D model 300 and the corresponding part of the tangent surface 302 that will be merged to fill the hole;
- Fig. 6 (d) depicts an example of a refined 3D model 303.
- a first embodiment of the present invention is a 3D human model reconstruction application on a mobile device.
- Fig. 1 depicts an example of a usage scene of the 3D human model reconstruction application on a mobile device 100, for example, a smart phone.
- a user 102 holding and operating the mobile device 100 scans a static target person 101.
- the target person 101 in Fig. 1 is conveniently shown as a simplified shape of a human, but it is intended to be an actual human.
- the subject is not limited to a human; for example, it may be anything from a small object to a large one, such as a stuffed toy or a car.
- the user 102 is supposed to move around the target person 101 while keeping a camera 115 (Fig. 2) on the mobile device 100 toward the target person 101 and operate a user interface (UI) on a display 117 (Fig. 2) .
- the term “scan” means capturing images of a subject from various directions. Ideally, enough images are captured to cover almost all of the surface of the subject; however, images captured by non-professional users are often not enough. In many cases, 3D depth information of, for example, the top of the head, the armpits, and the crotch cannot be obtained, and holes are left on the reconstructed 3D model of the subject. Some of the reasons for the lack of images are that it is difficult to capture the top of the head of the static target person 101 without going to a higher place, and that the armpits and crotch are usually occluded by other parts of the body. Existing techniques for closing the holes are as follows:
- Method 1: Screened Poisson surface reconstruction (for example, refer to: Kazhdan, Michael, and Hugues Hoppe, “Screened Poisson surface reconstruction”, ACM Transactions on Graphics (ToG) 32.3 (2013): 29), which assumes that a continuous implicit surface lies behind the observed points, is widely used to make a 3D mesh from a set of points. Method 1 fills holes implicitly at the same time as meshing.
- however, Screened Poisson surface reconstruction often fails to naturally close large holes around locally steep geometry, and produces inflated artifacts that are bigger/fatter than the actual surface.
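For illustration only (not part of the disclosed embodiment), a minimal sketch of Method 1 using the Open3D library might look as follows; the input file name, depth parameter, and normal-estimation settings are illustrative assumptions:

```python
import open3d as o3d

# Load a reconstructed point set and estimate normals, which Poisson requires.
pcd = o3d.io.read_point_cloud("points.ply")  # hypothetical input file
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(30)

# Screened Poisson reconstruction: fits a continuous implicit surface to the
# points and meshes it, implicitly closing holes (possibly with inflation).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("poisson_mesh.ply", mesh)
```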
- Method 2: Hole filling (for example, refer to: Liepa, Peter, "Filling holes in meshes", Proceedings of the 2003 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, Eurographics Association, 2003) is also widely used to fill holes on a 3D model. Method 2 detects hole boundaries on a 3D model, parameterizes them, and finally polygonizes them. However, Method 2 is not robust in practice and sometimes fails to fill holes, or fills a hole unnaturally, since it cannot handle complex hole boundaries on a noisy mesh.
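As a rough illustration of the first step of Method 2 — detecting hole boundaries — edges referenced by exactly one triangle lie on a boundary. A minimal sketch, assuming triangles are given as an (N, 3) vertex-index array:

```python
import numpy as np
from collections import defaultdict

def find_hole_boundary_edges(triangles: np.ndarray):
    """Edges used by exactly one triangle lie on a hole (or mesh) boundary."""
    edge_count = defaultdict(int)
    for a, b, c in triangles:
        for edge in ((a, b), (b, c), (c, a)):
            edge_count[tuple(sorted(edge))] += 1
    return [edge for edge, count in edge_count.items() if count == 1]
```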
- Method 3: Visual Hull (for example, refer to: United States Patent Application Publication No. US2015/0178988A1, “Method and a system for generating a realistic 3d reconstruction model for an object or being”), which reconstructs a 3D model from a plurality of silhouette images, is another way of performing 3D reconstruction.
- Visual Hull is usually performed under well-calibrated settings. For instance, the subject is supposed to be in a special room where a sufficient number of cameras are rigidly fixed, and the walls and floors are covered in a distinct color so that accurate silhouettes of the subject can be extracted. Under such a lab setting, Visual Hull can reconstruct an accurate 3D model.
- Method 3 describes a system in such a special room.
- Method 3 basically relies on Visual Hull, but enhances the fidelity of the face by fusing a high-resolution mesh that comes from structured-light-based triangulation.
- a special smoothing method is applied to the boundary of the face to alleviate visible geometric steps caused by combining two independent meshes.
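For reference, a hedged sketch of the basic voxel-carving idea underlying Visual Hull: only candidate points that project inside every silhouette survive. The 3x4 projection matrices and binary silhouette masks are assumed inputs, and points are assumed to lie in front of the cameras:

```python
import numpy as np

def visual_hull(grid_pts, silhouettes, projections):
    """Keep grid points whose projection lands inside every silhouette.

    grid_pts:    (N, 3) candidate 3D points
    silhouettes: list of (H, W) binary masks (nonzero = subject)
    projections: list of 3x4 camera matrices P = K [R | t]
    """
    homo = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])
    keep = np.ones(len(grid_pts), dtype=bool)
    for sil, P in zip(silhouettes, projections):
        uvw = homo @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        ok = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        in_sil = np.zeros(len(grid_pts), dtype=bool)
        in_sil[ok] = sil[v[ok], u[ok]] > 0
        keep &= in_sil  # carve away anything outside this silhouette
    return grid_pts[keep]
```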
- one or more silhouettes of a subject are used to close holes on the 3D model.
- silhouettes are useful for closing holes appearing in unobservable regions such as the top of the head, the crotch, or the armpits.
- the mobile device 100 includes a CPU (Central Processing Unit) 110, a RAM (Random Access Memory) 111, a ROM (Read Only Memory) 112, a bus 113, an Input/Output I/F (Interface) 114, a display 117, and a touch panel 118.
- the mobile device 100 also has a camera 115 and a storage device 116 that are connected to the bus 113 via the Input/Output I/F 114.
- the CPU 110 controls each element connected through the bus 113.
- the RAM 111 is used for a main memory of the CPU 110 and so on.
- the ROM 112 stores OS (Operating System) , programs, device drivers and so on.
- the camera 115 connected via the Input/Output I/F 114 captures still images or videos.
- the storage device 116 connected via the Input/Output I/F 114 is a storage having a large capacity, for example, a hard disk or a flash memory.
- the Input/Output I/F 114 converts data captured by the camera 115 into an image format, and stores it in the storage device 116.
- the display 117 shows a user interface.
- the touch panel 118 embedded on the display 117 accepts and transfers touch operations by the user 102 to the CPU 110.
- Fig. 3 depicts an example of a block diagram of a functional configuration of the first embodiment.
- the mobile device 100 includes a user interface control unit 120, an image acquisition unit 121, a silhouette extraction unit 122, a 3D reconstruction unit 123, a model refinement unit 124, and a storage unit 125.
- the user interface control unit 120 controls a user interface shown on the display 117 according to the states of the other units and touch operations by the user 102 to the touch panel 118.
- the user interface control unit 120 is realized by the CPU 110, the RAM 111, programs in the ROM 112, the bus 113, the display 117, and the touch panel 118.
- the image acquisition unit 121 obtains a sequence of still images or a video from the camera 115, and stores it in the RAM 111 or the storage device 116.
- the image acquisition unit 121 is realized by the CPU 110, the RAM 111, programs in the ROM 112, the bus 113, the Input/Output I/F 114, and the camera 115.
- the silhouette extraction unit 122 extracts a silhouette of the target person 101 from a still image or a frame of video captured by the image acquisition unit 121 and stored in the RAM 111 or the storage unit 125.
- the silhouette extraction unit 122 could be implemented in various ways, for example, background subtraction or CNN (Convolutional Neural Network) .
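One possible realization — a sketch, not the disclosed implementation — uses OpenCV's GrabCut as a stand-in for background subtraction or a CNN; the image path and the rough bounding rectangle below are illustrative assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")  # hypothetical captured frame
mask = np.zeros(img.shape[:2], np.uint8)
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)  # rough subject box
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

# Sure or probable foreground becomes the white silhouette on black.
silhouette = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                      255, 0).astype(np.uint8)
cv2.imwrite("silhouette.png", silhouette)
```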
- the silhouette extraction unit 122 is realized by the CPU 110, the RAM 111, programs in the ROM 112, and the bus 113.
- the 3D reconstruction unit 123 reconstructs a 3D model of the target person 101 from the sequence of still images or the video captured by the image acquisition unit 121 and stored in the RAM 111 or the storage unit 125.
- the 3D reconstruction unit 123 also estimates extrinsic parameters that define the 3D rigid transformation between each image used for the reconstruction and the 3D model.
- the 3D reconstruction unit 123 could be implemented in various ways, for example, SfM (Structure-from-Motion) and MVS (Multi-View Stereo) for color images or KinectFusion for depth images.
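As background for how the estimated extrinsic parameters relate an image to the 3D model, a minimal projection sketch under the pinhole model; K is the intrinsic matrix discussed later, and all names are illustrative:

```python
import numpy as np

def project(points_world, R, t, K):
    """Map (N, 3) world points to (N, 2) pixels: x ~ K (R X + t)."""
    pts_cam = points_world @ R.T + t  # extrinsics: rigid transform into camera frame
    uvw = pts_cam @ K.T               # intrinsics: perspective projection
    return uvw[:, :2] / uvw[:, 2:3]
```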
- the 3D reconstruction unit 123 is realized by the CPU 110, the RAM 111, programs in the ROM 112, and the bus 113.
- the model refinement unit 124 refines the 3D model reconstructed by the 3D reconstruction unit 123 to make a refined 3D model with one or more silhouettes selected by the user 102.
- the details will be described later.
- the model refinement unit 124 is realized by the CPU 110, the RAM 111, programs in the ROM 112, and the bus 113.
- the storage unit 125 stores the captured images and the refined 3D model into the storage device 116 for further use.
- the storage unit 125 is realized by the Input/Output I/F 114 and the storage device 116.
- the CPU 110 controls the above-mentioned units in this embodiment.
- Fig. 4 (a) depicts an example of an overall flowchart of model refinement according to the first embodiment.
- Fig. 4 (b) depicts an example of a detailed flowchart of the model refinement according to the first embodiment.
- Each step of Figs. 4 (a) and 4 (b) would be executed by the CPU 110 and data are stored in the RAM 111 or the storage device 116 and loaded from them as needed.
- the CPU 110 obtains an image sequence via the image acquisition unit 121 with the camera 115 and stores it in the RAM 111. It is assumed in this embodiment that the images are colored. The sequence could also be stored in the storage device 116 by the storage unit 125.
- Fig. 1 shows how the mobile device 100 is operated in this step. The user 102 holding and operating the mobile device 100 scans a static target person 101 as completely as possible. The user 102 is supposed to move around the target person 101 while keeping the camera 115 on the back of the mobile device 100 toward the target person 101.
- the CPU 110 processes the image sequence obtained at step S100 to generate a 3D model.
- the 3D reconstruction unit 123 reconstructs the 3D model and estimates extrinsic camera parameters (mentioned above) and, if necessary, intrinsic camera parameters (mentioned later). All of the output of step S101 is stored in the RAM 111 or the storage device 116.
- the UI on the display 117 requests the user 102 to select one or more images, to which the user 102 wishes to fit the 3D model, from the image sequence.
- the message “Select frontal view” in Fig. 5 (a) is merely an example; the user 102 is requested to select “one or more images” to be used for fitting the 3D model to the silhouettes of the subject that are extracted from those “one or more images”.
- the UI is controlled by the user interface control unit 120.
- Fig. 5 (a) is the UI of this step. On the display 117, thumbnails of the image sequence 200 are shown. The images captured by the camera 115 are used for the thumbnails in Figs. 5 (a) and 5 (b) (the face of the person in the thumbnails in Figs. 5 (a) and 5 (b) has been processed for the purpose of privacy protection because this patent application document will be opened to the public).
- the upper-left image is a photo of a person standing in a room that is captured from the front
- the upper-right image, the middle-left image, the middle-right image, and the lower-right image are captured from the rear right, from the back, from the left, and from the front right, respectively
- the lower-left image is a photo of the lower body of the person captured from the front right.
- the user 102 is supposed to select one or more frames corresponding to the thumbnails by a touch operation. If there are too many images to show on the display 117 at one time, a next page button 201 is shown to page through the thumbnails 200. After the user 102 selects one of the images, for example, the upper-left image, the UI changes and the selected image 202 is displayed as shown in Fig. 5 (b).
- the silhouette extraction unit 122 extracts the silhouette of the target person 101 from the frames selected at step S102.
- Fig. 5 (b) depicts an example of the UI at step S104.
- the selected image 202 and corresponding extracted silhouette 203 are shown.
- the silhouette is shown in white and the background is shown in black. There may be cases where a wrong silhouette is extracted because of an algorithm error. The user 102 taps one of the response buttons 204, namely “OK” or “NG”, to accept or reject the extracted silhouette. After the user 102 responds, the UI continues to show another selected image and the corresponding silhouette.
- when all of the one or more selected images and corresponding silhouettes have been checked by the user 102, the process goes to the next step: if at least one silhouette does not have acceptable quality (the user 102 responded “NG” at least once), the process goes to step S105; if all silhouettes are acceptable (the user 102 responded “OK” for all of the one or more selected images and corresponding silhouettes), it goes to step S107.
- the above-mentioned UI interaction may be eliminated by automatically selecting one or more images for silhouette refinement.
- all or part of the process at step S104 could be performed in advance or integrated into earlier steps.
- for example, at step S100, while the user 102 is capturing the image sequence, the UI could show the corresponding silhouette and the response buttons for each captured image in real time. In this case, the user 102 could select one or more images and check the corresponding silhouettes during step S100.
- at step S105, the UI asks whether the user 102 wishes to select other images from the existing image sequence (the image sequence obtained at step S100) or not. If yes, the process goes back to step S102; if no, it goes to step S106.
- at step S106, the user 102 captures another image sequence in the same way as at step S100. After capturing, camera parameters for each additional image are estimated in the same way as at step S101. Images captured at this step are merged into the existing image sequence. Then, the process goes back to step S102.
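One plausible way to estimate camera parameters for such additional images — a sketch under the assumption that 2D-3D correspondences to the existing reconstruction are available; the correspondence files and the intrinsic values below are placeholders, not part of this disclosure:

```python
import cv2
import numpy as np

# Placeholder 3D model points and their matched pixels in the new image.
object_pts = np.load("matched_model_points.npy").astype(np.float32)  # (N, 3)
image_pts = np.load("matched_pixels.npy").astype(np.float32)         # (N, 2)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0,    0.0,   1.0]])  # illustrative intrinsics

# Perspective-n-Point: recovers the extrinsics of the additional image.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)  # extrinsic rotation matrix for the new image
```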
- the model refinement unit 124 refines the 3D model reconstructed at step S101 by using the silhouettes extracted at step S103 and confirmed by the user 102 at step S104.
- the details of the model refinement are shown in Fig. 4 (b) and will be explained later.
- the refined 3D model is stored in the RAM 111 or the storage device 116 for further use, for instance, 3D model viewers or Augmented Reality applications.
- a user who operates a device selects one or more images of a subject from the image sequence that the user is capturing and/or has already captured. Then, the device according to the present invention extracts one or more silhouettes of the subject from the one or more selected images and asks the user whether the silhouettes are accurate or not. If they are, the device applies silhouette-based refinement to the reconstructed 3D model to close holes on it. If not, the user is requested to select other images from the image sequence or to capture additional images.
- the model refinement unit 124 refines the 3D model reconstructed by the 3D reconstruction unit 123 to make a refined 3D model with the one or more silhouettes selected by the user 102.
- the 3D model often has large holes caused by the insufficient number of input images.
- Fig. 6 (a) depicts an example of a 3D model 300 of a human with a hole on the top of the head, caused by the difficulty of capturing that area in a casual scan.
- Fig. 6 (a) shows a simplified 3D model of a human only from shoulder to top, viewed from obliquely above.
- the model refinement unit 124 fills this hole by using the one or more silhouettes.
- the model refinement unit 124 computes a set of 3D curved tangent surfaces from each silhouette based on the principle of perspective projection and camera parameters, as in Visual Hull.
- Various methods could be used at this step. For example, a signed distance function on a voxel grid set to cover the 3D model is updated by unprojecting the pixels inside a silhouette, making an implicit tangent surface of the silhouette (“projection” means mapping 3D points to 2D points on an image, and “unprojection” means the reverse, namely, mapping 2D points on an image to 3D points). Then marching cubes is applied to extract the tangent surface.
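A hedged sketch of this step, using a binary inside/outside field as a simplified stand-in for a full signed distance function, and scikit-image's marching cubes; the grid layout and names are illustrative:

```python
import numpy as np
from skimage import measure

def tangent_surface_from_silhouette(sil, P, grid, spacing=(1.0, 1.0, 1.0)):
    """grid: (nx, ny, nz, 3) voxel centers covering the 3D model.

    Voxels whose projection falls inside the silhouette are marked inside
    (negative); marching cubes then extracts the implicit tangent surface.
    """
    nx, ny, nz = grid.shape[:3]
    pts = grid.reshape(-1, 3)
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    uvw = homo @ P.T  # project voxel centers with P = K [R | t]
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
    inside = np.zeros(len(pts), dtype=bool)
    inside[ok] = sil[v[ok], u[ok]] > 0
    field = np.where(inside, -1.0, 1.0).reshape(nx, ny, nz)
    verts, faces, _, _ = measure.marching_cubes(field, level=0.0, spacing=spacing)
    return verts, faces
```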
- the tangent surface should cover the holes if the 3D model and the silhouette are sufficiently accurate.
- a tangent surface 301 is shown in Fig. 6 (b) , in which the white part corresponds to the silhouette, and the dark gray part corresponds to a possible surface calculated from the silhouette. This process is applied to all of the one or more silhouettes.
- camera parameters are required and have already been obtained before this step S200 is performed.
- the camera 115 has a standard field of view and can be approximated by a pinhole camera model.
- Intrinsic camera parameters, namely the focal length and the principal point, as well as distortion coefficients, could be calibrated independently before the application runs or estimated in the 3D reconstruction unit 123. Extrinsic camera parameters are estimated in the 3D reconstruction unit 123.
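For reference, a sketch of how these intrinsic parameters form the camera matrix and how unprojection works under the pinhole model; the numeric values are illustrative assumptions:

```python
import numpy as np

fx, fy = 1000.0, 1000.0  # focal lengths in pixels (illustrative)
cx, cy = 640.0, 360.0    # principal point (illustrative)
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def unproject(pixels, depths, K):
    """Map (N, 2) pixels at given depths to (N, 3) camera-frame 3D points."""
    homo = np.hstack([pixels, np.ones((len(pixels), 1))])
    rays = homo @ np.linalg.inv(K).T  # rays through each pixel
    return rays * depths[:, None]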
- the model refinement unit 124 calculates parts of the tangent surfaces on the holes to close them.
- the places of the holes are identified.
- Poisson surface reconstruction (refer to Method 1 mentioned earlier) is applied to the 3D model to make a closed surface with inflated artifacts over the holes.
- the parts of the closed surface that lie outside the original 3D model, i.e., the inflated artifacts, are determined.
- such outside parts are above the holes.
- a nearest-neighbor search from the outside parts to the tangent surfaces is used to find the corresponding parts of the tangent surfaces that close the holes.
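A minimal sketch of this nearest-neighbor selection using SciPy; the radius threshold is an illustrative assumption:

```python
import numpy as np
from scipy.spatial import cKDTree

def select_patch(outside_pts, tangent_verts, radius=0.01):
    """Pick tangent-surface vertices near the inflated 'outside' parts of the
    Poisson surface; these vertices form the patch that closes the hole."""
    tree = cKDTree(tangent_verts)
    neighbor_lists = tree.query_ball_point(outside_pts, r=radius)
    idx = sorted({i for lst in neighbor_lists for i in lst})
    return tangent_verts[np.asarray(idx, dtype=int)]
```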
- Such a part of tangent surface 302 is shown in Fig. 6 (c) .
- at step S202, the parts of the tangent surfaces calculated at step S201 are merged into the 3D model to make a refined 3D model.
- a merged surface of the refined 3D model 303 is shown in Fig. 6 (d) .
- holes on the 3D model are closed and aligned with silhouettes of the subject.
- a closed surface is not only visually good but also important for further applications of the 3D model, because many computer graphics and computer vision algorithms assume that the input surface is closed.
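As a quick downstream sanity check — a sketch using Open3D, assuming the refined model was saved under a hypothetical file name:

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("refined_model.ply")  # hypothetical output file
print("watertight (closed surface):", mesh.is_watertight())
```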
Abstract
A device (100) is provided. The device (100) includes: a camera (115) for capturing an image sequence of a subject, a three-dimensional (3D) reconstruction unit (123) for reconstructing a 3D model from the image sequence, and a model refinement unit (124) for refining the 3D model so as to be fitted to one or more images selected by a user from the image sequence. The device (100) closes holes on the reconstructed 3D model that are caused by an insufficient number of images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
PCT/CN2019/126298 (WO2021120052A1) | 2019-12-18 | 2019-12-18 | 3D reconstruction from an insufficient number of images
Publications (1)
Publication Number | Publication Date
---|---
WO2021120052A1 | 2021-06-24
Family
ID=76476978
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/CN2019/126298 (WO2021120052A1) | 3D reconstruction from an insufficient number of images | 2019-12-18 | 2019-12-18
Country Status (1)
Country | Link
---|---
WO | WO2021120052A1
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
EP1308902A2 * | 2001-11-05 | 2003-05-07 | Canon Europa N.V. | Three-dimensional computer modelling apparatus
WO2009006273A2 * | 2007-06-29 | 2009-01-08 | 3M Innovative Properties Company | Synchronized views of video data and three-dimensional model data
US20140111507A1 * | 2012-10-23 | 2014-04-24 | Electronics And Telecommunications Research Institute | 3-dimensional shape reconstruction device using depth image and color image and the method
CN104282040A * | 2014-09-29 | 2015-01-14 | 北京航空航天大学 | Finite element pre-processing method for reconstructing a three-dimensional solid model
CN109242954A * | 2018-08-16 | 2019-01-18 | 叠境数字科技(上海)有限公司 | Multi-view three-dimensional human body reconstruction method based on template deformation
CN109658449A * | 2018-12-03 | 2019-04-19 | 华中科技大学 | Indoor scene three-dimensional reconstruction method based on RGB-D images
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19956820; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19956820; Country of ref document: EP; Kind code of ref document: A1