EP2016559A2 - System and method for three-dimensional object reconstruction from two-dimensional images - Google Patents

System and method for three-dimensional object reconstruction from two-dimensional images

Info

Publication number
EP2016559A2
Authority
EP
European Patent Office
Prior art keywords
image
dimensional
feature points
applying
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP06826653A
Other languages
German (de)
English (en)
Inventor
Yousef Wasef Nijim
Izzat Hekmat Izzat
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
THOMSON LICENSING
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS filed Critical Thomson Licensing SAS
Publication of EP2016559A2

Classifications

    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/579 Depth or shape recovery from multiple images from motion

Definitions

  • the present invention generally relates to three-dimensional object modeling, and more particularly, to a system and method for three-dimensional (3D) information acquisition from two-dimensional (2D) images using hybrid feature detection and tracking including smoothing functions.
  • the resulting video sequence contains implicit information on the three-dimensional (3D) geometry of the scene. While for adequate human perception this implicit information suffices, for many applications the exact geometry of the 3D scene is required.
  • One category of these applications is when sophisticated data processing techniques are used, for instance in the generation of new views of the scene, or in the reconstruction of the 3D geometry for industrial inspection applications.
  • 3D acquisition techniques in general can be classified as active and passive approaches, single view and multi-view approaches and geometric and photometric methods.
  • Passive approaches acquire 3D geometry from images or videos taken under regular lighting conditions. 3D geometry is computed using the geometric or photometric features extracted from images and videos. Active approaches use special light sources, such as laser, structure light or infrared light. Active approaches compute the geometry based on the response of the objects and scenes to the special light projected onto the surface of the objects and scenes.
  • Single-view approaches recover 3D geometry using multiple images taken from a single camera viewpoint. Examples include structure from motion and depth from defocus.
  • Multi-view approaches recover 3D geometry from multiple images taken from multiple camera viewpoints, resulting from object motion, or with different light source positions.
  • Stereo matching is an example of multi-view 3D recovery by matching the pixels in the left image and right image in the stereo pair to obtain the depth information of the pixels.
  • Geometric methods recover 3D geometry by detecting geometric features such as corners, edges, lines or contours in single or multiple images. The spatial relationship among the extracted corners, edges, lines or contours can be used to infer the 3D coordinates of the pixels in images.
  • Structure From Motion (SFM) is a technique that attempts to reconstruct the 3D structure of a scene from a sequence of images taken from a camera moving within the scene, or from a static camera and a moving object.
  • nonlinear techniques require iterative optimization, and must contend with local minima.
  • these techniques promise good numerical accuracy and flexibility.
  • Feature-based approaches can be made more effective by tracking techniques, which exploit the past history of the features' motion to predict disparities in the next frame.
  • the correspondence problem can be also cast as a problem of estimating the apparent motion of the image brightness pattern, called the optical flow.
  • the present disclosure provides a system and method for three-dimensional (3D) information acquisition and modeling of a scene from two-dimensional (2D) images.
  • the system and method of the present disclosure includes acquiring at least two images of a scene and applying a smoothing function to make the features more visible followed by a hybrid scheme of feature selection and tracking for the recovery of 3D information.
  • the smoothing function is applied on the images followed by a feature point selection that will find the features in the image.
  • At least two feature point detection functions are employed to cover a wider range of good feature points in the first image, then the smoothing function is applied on the second image followed by a tracking function to track the detected feature points in the second image.
  • the results of the feature detection/selection and tracking will be combined to obtain a complete 3D model.
  • One target application of this work is 3D reconstruction of film sets.
  • the resulting 3D models can be used for visualization during the film shooting or for postproduction. Other applications, including but not limited to gaming and 3D TV, will also benefit from this approach.
  • a three-dimensional acquisition process including acquiring first and second images of a scene, applying at least two feature detection functions to the first image to detect feature points of objects in the image, combining outputs of the at least two feature detection functions to select object feature points to be tracked, applying a tracking function on the second image to track the selected object feature points, and reconstructing a three-dimensional model of the scene from the output of the tracking function.
  • the process further includes applying a smoothing function on the first image, before the step of applying the at least two feature detection functions, to make the feature points of objects in the first image more visible, wherein the feature points are corners, edges or lines of objects in the image.
  • a system for three-dimensional (3D) information acquisition from two-dimensional (2D) images includes a post-processing device configured for reconstructing a three-dimensional model of a scene from at least two images, the post-processing device including a feature point detector configured to detect feature points in an image, the feature point detector including at least two feature detection functions, wherein the at least two feature detection functions are applied to a first image of the at least two images, a feature point tracker configured for tracking selected feature points between at least two images, and a depth map generator configured to generate a depth map between the at least two images from the tracked feature points, wherein the post-processing device creates the 3D model from the depth map.
  • the postprocessing device further includes a smoothing function filter configured for making feature points of objects in the first image more visible.
  • a program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform method steps for modeling a three-dimensional (3D) scene from two-dimensional (2D) images
  • the method including acquiring first and second images of a scene, applying a smoothing function to the first image, applying at least two feature detection functions to the smoothed first image to detect feature points of objects in the image, combining outputs of the at least two feature detection functions to select object feature points to be tracked, applying the smoothing function on the second image, applying a tracking function on the second image to track the selected object feature points, and reconstructing a three-dimensional model of the scene from an output of the tracking function.
  • FIG. 1 is an exemplary illustration of a system for three-dimensional (3D) information acquisition according to an aspect of the present invention
  • FIG. 2 is a flow diagram of an exemplary method for reconstructing three-dimensional (3D) objects from two-dimensional (2D) images according to an aspect of the present invention
  • FIG. 3A is an illustration of a scene processed with one feature point detection function
  • FIG. 3B is an illustration of the scene shown in FIG. 3A processed with a hybrid detection function.
  • these elements are implemented in a combination of hardware and software on one or more appropriately programmed general-purpose devices, which may include a processor, memory and input/output interfaces.
  • “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read only memory (“ROM”) for storing software, random access memory (“RAM”), and nonvolatile storage.
  • any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
  • any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function.
  • the invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
  • the techniques disclosed in the present invention deal with the problem of recovering 3D geometries of objects and scenes.
  • a system and method for recovering three-dimensional (3D) geometries of objects and scenes.
  • the system and method of the present invention provides an enhancement approach for Structure From Motion (SFM) using a hybrid approach to recover 3D features.
  • This technique is motivated by the lack of a single method capable of locating features for large environments reliably.
  • the techniques of the present invention start by first applying a different smoothing function, such as a Poisson or Laplacian transform, to the images before feature point detection/selection and tracking. This type of smoothing filter helps make the features in images more visible and easier to detect than the Gaussian function commonly used. Then, multiple feature detectors are applied to one image to obtain good features. After the use of two feature detectors, good features are obtained, which are then tracked easily throughout several images using a tracking method (a minimal pipeline sketch is given after this definitions list).
  • a scanning device 103 may be provided for scanning film prints 104, e.g., camera-original film negatives, into a digital format, e.g., Cineon format or Society of Motion Picture and Television Engineers (SMPTE) Digital Picture Exchange (DPX) files.
  • the scanning device 103 may comprise, e.g., a telecine or any device that will generate a video output from film such as, e.g., an Arri LocPro™ with video output.
  • files from the post-production process or digital cinema 106, e.g., files already in computer-readable form, may also be used.
  • Potential sources of computer-readable files are AVID™ editors, DPX files, D5 tapes, etc.
  • Scanned film prints are input to the post-processing device 102, e.g., a computer.
  • the computer is implemented on any of the various known computer platforms having hardware such as one or more central processing units (CPU), memory 110 such as random access memory (RAM) and/or read only memory (ROM) and input/output (I/O) user interface(s) 112 such as a keyboard, cursor control device (e.g., a mouse or joystick) and display device.
  • the computer platform also includes an operating system and micro instruction code.
  • the various processes and functions described herein may either be part of the micro instruction code or part of a software application program (or a combination thereof) which is executed via the operating system.
  • the software application program is tangibly embodied on a program storage device, which may be uploaded to and executed by any suitable machine such as post-processing device 102.
  • various other peripheral devices may be connected to the computer platform by various interfaces and bus structures, such as a parallel port, serial port or universal serial bus (USB).
  • Other peripheral devices may include additional storage devices 124 and a printer 128.
  • the printer 128 may be employed for printing a revised version of the film 126, wherein scenes may have been altered or replaced using 3D modeled objects as a result of the techniques described below.
  • files/film prints already in computer-readable form 106 may be directly input into the computer 102.
  • film used herein may refer to either film prints or digital cinema.
  • a software program includes a three-dimensional (3D) reconstruction module 114.
  • the 3D reconstruction module 114 includes a smoothing function filter 116 for making features of objects in images more visible to detect.
  • the 3D reconstruction module 114 also includes a feature point detector 118 for detecting feature points in an image.
  • the feature point detector 118 will include at least two different feature point detection functions, e.g., algorithms, for detecting or selecting feature points.
  • a feature point tracker 120 is provided for tracking selected feature points throughout a plurality of consecutive images via a tracking function or algorithm.
  • a depth map generator 122 is also provided for generating a depth map from the tracked feature points.
  • FIG. 2 is a flow diagram of an exemplary method for reconstructing three-dimensional (3D) objects from two-dimensional (2D) images according to an aspect of the present invention.
  • the post-processing device 102 obtains the digital master video file in a computer-readable format.
  • the digital video file may be acquired by capturing a temporal sequence of video images with a digital video camera.
  • the video sequence may be captured by a conventional film- type camera.
  • the film is scanned via scanning device 103 and the process proceeds to step 202.
  • the camera will acquire 2D images while moving either the object in a scene or the camera.
  • the camera will acquire multiple viewpoints of the scene.
  • the digital file of the film will include indications or information on locations of the frames (e.g., timecode, frame number, time from start of the film, etc.).
  • Each frame of the digital video file will include one image, e.g., I1, I2, ..., In.
  • a smoothing function filter 116 is applied to image I1.
  • the smoothing function filter 116 is a Poisson or Laplacian transform, which helps make features of objects in the image more visible and easier to detect than the Gaussian function commonly used in the art. It is to be appreciated that other smoothing function filters may be employed.
  • Image I1 is then processed by a first feature point detector in step 204.
  • Feature points are the salient features of an image, such as corners, edges, lines or the like, where there is a high amount of image intensity contrast. The feature points are selected because they are easily identifiable and may be tracked robustly.
  • the feature point detector 118 may use a Kitchen-Rosenfeld corner detection operator C, as is well known in the art. This operator is used to evaluate the degree of "cornerness" of the image at a given pixel location. "Corners" are generally image features characterized by the intersection of two directions of image intensity gradient maxima, for example at a 90 degree angle. To extract feature points, the Kitchen-Rosenfeld operator is applied at each valid pixel position of image I1 (a cornerness-based selection sketch is given after this definitions list).
  • the higher the value of the operator C at a particular pixel, the higher its degree of "cornerness", and the pixel position (x,y) in image I1 is a feature point if C at (x,y) is greater than at other pixel positions in a neighborhood around (x,y).
  • the neighborhood may be a 5x5 matrix centered on the pixel position (x,y).
  • the output from the feature point detector 118 is a set of feature points {F1} in image I1, where each F1 corresponds to a "feature" pixel position in image I1.
  • In step 206, image I1 is input to the smoothing function filter 116, and a second, different feature point detector is applied to the image (step 208). The feature points detected in steps 204 and 208 are then combined, and duplicate selected feature points are eliminated (step 210). It is to be appreciated that the smoothing function filter applied at step 206 is the same filter applied at step 202; however, in other embodiments, different smoothing function filters may be used in each of steps 202 and 206.
  • Other feature point detection functions that may be employed include the Scale Invariant Feature Transform (SIFT), the Smallest Univalue Segment Assimilating Nucleus (SUSAN) detector, the Hough transform, the Sobel edge operator and the Canny edge detector.
  • FIG. 3A illustrates a scene with detected feature points represented by small squares.
  • the scene in FIG. 3A was processed with one feature point detector.
  • the scene in FIG. 3B was processed with a hybrid point detector approach in accordance with the present invention and has detected a significantly higher number of feature points.
  • a second image I2 is smoothed using the same smoothing function filter that was used on the first image I1 (step 212).
  • the good feature points that were selected on the first image I1 are then tracked on the second image I2 (step 214).
  • the feature point tracker 120 tracks the feature points into the next image I2 of the scene shot by finding their closest match.
  • the smoothing function filter applied in step 212 may be different than the filters applied in steps 202 and 206. Furthermore, it is to be appreciated that although steps 202 through steps 212 were described sequentially, in certain embodiments, the smoothing function filters may be applied simultaneously via parallel processing or hardware.
  • the disparity information is calculated for each tracked feature.
  • Disparity is calculated as the difference between the pixel location in I1 and I2 in the horizontal direction.
  • Disparity is inversely related to depth with a scaling factor related to camera calibration parameters.
  • camera calibration parameters are obtained and are employed by the depth map generator 122 to generate a depth map for the object or scene between the two images (a disparity-to-depth sketch is given after this definitions list).
  • the camera parameters include but are not limited to the focal length of the camera and the distance between the two camera shots.
  • the camera parameters may be manually entered into the system 100 via user interface 112 or estimated from camera calibration algorithms. Using the camera parameters, the depth is estimated at the feature points.
  • the resulting depth map is sparse, with depth values only at the detected feature points.
  • a depth map can be viewed as a grey scale image of an object, with the depth information replacing the intensity information, or pixels, at each point on the surface of the object. Accordingly, surface points are also referred to as pixels within the technology of 3D graphical construction, and the two terms will be used interchangeably within this disclosure. Since disparity information is inversely proportional to depth multiplied by a scaling factor, it can be used directly for building the 3D scene model for most applications. This simplifies the computation since it makes computation of camera parameters unnecessary.
  • From the sets of feature points present in the image pair I1 and I2 and an estimate of the depth at each feature point, and assuming that the feature points are chosen so that they lie relatively close to each other and span the whole image, the depth map generator 122 creates a 3D mesh structure by interconnecting the feature points so that they lie at the vertices of the formed polygons. The closer the feature points are to each other, the denser the resulting 3D mesh structure. Since the depth at each vertex of the 3D structure is known, the depths at the points within each polygon may be estimated, and in this way the depth at all image pixel positions may be estimated. This may be done by planar interpolation (a Delaunay-based interpolation sketch is given after this definitions list).
  • a robust and fast method of generating the 3D mesh structure is Delaunay triangulation.
  • the feature points are connected to form a set of triangles whose vertices lie at feature point positions.
  • a "depth plane" may be fitted to each individual triangle from which the depths of every point within the triangle may be determined.
  • a complete 3D model of the object can be reconstructed by combining the triangulation mesh resulting from the Delaunay algorithm with the texture information from image I1 (step 218).
  • the texture information is the 2D intensity image.
  • the complete 3D model will include depth and intensity values at image pixels.
  • the resulting combined image can be visualized using conventional visualization tools such as the ScanAlyze software developed at Stanford University, Stanford, CA.
  • the reconstructed 3D model of a particular object or scene may then be rendered for viewing on a display device or saved in a digital file 130 separate from the file containing the images.
  • the digital file of 3D reconstruction 130 may be stored in storage device 124 for later retrieval, e.g., during an editing stage of the film where a modeled object may be inserted into a scene where the object was not previously present.
  • the system and method of the present invention utilizes multiple feature point detectors and combines their results to improve the number and quality of the detected feature points. In contrast to a single feature detector, combining different feature point detectors improves the results of finding good feature points to track. With the improved feature points obtained from the multiple detectors (i.e., using more than one feature point detector), the feature points in the second image are easier to track and produce better depth map results than those obtained with a single feature detector.
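
Pipeline sketch. The following Python example, using OpenCV and NumPy, illustrates the kind of hybrid processing described above: a Laplacian-based enhancement stands in for the Poisson/Laplacian smoothing step, two different detectors (Shi-Tomasi and FAST, chosen here only as readily available examples and not named in the disclosure) are combined with near-duplicates removed, and pyramidal Lucas-Kanade optical flow tracks the selected points into the second image. Function names such as enhance and detect_hybrid are illustrative; this is a minimal sketch of the approach under those assumptions, not the patented implementation.

```python
import cv2
import numpy as np

def enhance(gray):
    # Stand-in for the Poisson/Laplacian smoothing step of the disclosure:
    # blur lightly, then subtract a fraction of the Laplacian to emphasize
    # corners and edges before detection.
    blurred = cv2.GaussianBlur(gray, (3, 3), 0)
    lap = cv2.Laplacian(blurred, cv2.CV_32F, ksize=3)
    return cv2.convertScaleAbs(np.float32(blurred) - 0.5 * lap)

def detect_hybrid(gray, min_dist=5.0):
    # Detector 1: Shi-Tomasi "good features to track".
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=min_dist)
    merged = [] if pts is None else [p for p in pts.reshape(-1, 2)]
    # Detector 2: FAST keypoints (a different notion of saliency).
    fast = cv2.FastFeatureDetector_create(threshold=20)
    for kp in fast.detect(gray):
        p = np.float32(kp.pt)
        # Combine the two outputs, eliminating near-duplicate feature points.
        if all(np.hypot(*(p - q)) >= min_dist for q in merged):
            merged.append(p)
    return np.array(merged, dtype=np.float32).reshape(-1, 1, 2)

def track(gray1, gray2, points):
    # Pyramidal Lucas-Kanade tracking of the selected feature points
    # from the first image into the second image.
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(gray1, gray2, points, None)
    ok = status.ravel() == 1
    return points[ok].reshape(-1, 2), new_pts[ok].reshape(-1, 2)

# Usage with two 8-bit grayscale frames I1 and I2 of the same scene:
# g1, g2 = enhance(I1), enhance(I2)
# p1 = detect_hybrid(g1)
# tracked_in_1, tracked_in_2 = track(g1, g2, p1)
```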
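
Cornerness-based selection sketch. The Kitchen-Rosenfeld operator is only named, not reproduced, in the description above, so the sketch below substitutes OpenCV's Harris corner response as the cornerness measure C; a pixel (x, y) is kept as a feature point only if C(x, y) is the maximum of its 5x5 neighborhood and exceeds a threshold, mirroring the local-maximum rule described for the feature point detector 118. The function name and threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def cornerness_features(gray, rel_thresh=0.01):
    # Cornerness measure C at every pixel (Harris response used here only as
    # a stand-in for the Kitchen-Rosenfeld operator named in the disclosure).
    C = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    # Keep (x, y) only if C(x, y) is the maximum of its 5x5 neighborhood
    # and exceeds a global threshold (non-maximum suppression).
    local_max = cv2.dilate(C, np.ones((5, 5), np.uint8))
    mask = (C == local_max) & (C > rel_thresh * C.max())
    ys, xs = np.nonzero(mask)
    # The set {F} of "feature" pixel positions (x, y) in the image.
    return np.stack([xs, ys], axis=1)
```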
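
Disparity-to-depth sketch. As stated above, disparity is the horizontal difference between a feature's pixel locations in I1 and I2 and is inversely related to depth through a scale factor given by the camera parameters (focal length and the distance between the two camera shots). A minimal helper, assuming a roughly horizontal camera displacement and a focal length expressed in pixels:

```python
import numpy as np

def sparse_depth(points1, points2, focal_px, baseline):
    """Estimate depth at tracked feature points.

    points1, points2: (N, 2) arrays of (x, y) positions in images 1 and 2.
    focal_px:         camera focal length expressed in pixels.
    baseline:         distance between the two camera positions (scene units).
    Returns (N,) depth estimates in the same units as `baseline`.
    """
    # Disparity: horizontal pixel shift between the two images.
    disparity = points1[:, 0] - points2[:, 0]
    # Avoid division by zero for (near-)stationary points.
    disparity = np.where(np.abs(disparity) < 1e-6, np.nan, disparity)
    # Depth is inversely proportional to disparity, scaled by f * B.
    return focal_px * baseline / np.abs(disparity)
```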
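
Delaunay-based interpolation sketch. One straightforward way to realize the planar interpolation described above is SciPy's LinearNDInterpolator, which Delaunay-triangulates the sparse feature points and fits a plane over each triangle, so every pixel inside the mesh receives an interpolated depth. This is an illustrative realization, not necessarily the depth map generator 122 of the disclosure.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def dense_depth_map(points, depths, height, width):
    # points: (N, 2) feature positions (x, y); depths: (N,) estimated depths.
    # LinearNDInterpolator triangulates the points (Delaunay) and fits a
    # plane over each triangle, i.e. a piecewise-planar "depth plane" model.
    interp = LinearNDInterpolator(points, depths)
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    # Pixels outside the convex hull of the feature mesh come back as NaN.
    return interp(xs, ys)
```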

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention relates to a system and method for three-dimensional (3D) acquisition and modeling of a scene from two-dimensional (2D) images. The system and method of the invention provide for: acquiring first and second images of a scene; applying a smoothing function to the first image (202) in order to make the feature points of the objects present in the scene, such as corners and edges of the objects, more visible; applying at least two feature detection functions to the first image in order to detect the feature points of the objects in the first image (204, 208); combining the outputs of the aforementioned feature detection functions in order to select object feature points to be tracked (210); applying a smoothing function to the second image (206); applying a tracking function to the second image in order to track the selected object feature points (214); and reconstructing a three-dimensional model of the scene from the output of the tracking function (218).
EP06826653A 2006-05-05 2006-10-25 System and method for three-dimensional object reconstruction from two-dimensional images Withdrawn EP2016559A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US79808706P 2006-05-05 2006-05-05
PCT/US2006/041647 WO2007130122A2 (fr) 2006-05-05 2006-10-25 Système et procédé permettant une reconstruction tridimensionnelle d'objet à partir d'images bidimensionnelles

Publications (1)

Publication Number Publication Date
EP2016559A2 true EP2016559A2 (fr) 2009-01-21

Family

ID=38577526

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06826653A Withdrawn EP2016559A2 (fr) 2006-05-05 2006-10-25 Système et procédé permettant une reconstruction tridimensionnelle d'objet à partir d'images bidimensionnelles

Country Status (5)

Country Link
EP (1) EP2016559A2 (fr)
JP (1) JP2009536499A (fr)
CN (1) CN101432776B (fr)
CA (1) CA2650557C (fr)
WO (1) WO2007130122A2 (fr)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7542034B2 (en) 2004-09-23 2009-06-02 Conversion Works, Inc. System and method for processing video images
KR100894874B1 (ko) * 2007-01-10 2009-04-24 주식회사 리얼이미지 그물 지도를 이용한 이차원 영상으로부터의 입체 영상 생성장치 및 그 방법
US7826067B2 (en) 2007-01-22 2010-11-02 California Institute Of Technology Method and apparatus for quantitative 3-D imaging
US20080278572A1 (en) 2007-04-23 2008-11-13 Morteza Gharib Aperture system with spatially-biased aperture shapes and positions (SBPSP) for static and dynamic 3-D defocusing-based imaging
US8089635B2 (en) 2007-01-22 2012-01-03 California Institute Of Technology Method and system for fast three-dimensional imaging using defocusing and feature recognition
US8655052B2 (en) 2007-01-26 2014-02-18 Intellectual Discovery Co., Ltd. Methodology for 3D scene reconstruction from 2D image sequences
US8274530B2 (en) 2007-03-12 2012-09-25 Conversion Works, Inc. Systems and methods for filling occluded information for 2-D to 3-D conversion
WO2009067223A2 (fr) * 2007-11-19 2009-05-28 California Institute Of Technology Procédé et système pour une imagerie à trois dimensions rapide à l'aide d'une défocalisation et d'une reconnaissance de caractéristiques
EP2329454A2 (fr) 2008-08-27 2011-06-08 California Institute of Technology Procédé et dispositif d'imagerie tridimensionnelle haute résolution donnant par défocalisation une posture de la caméra
CN101383046B (zh) * 2008-10-17 2011-03-16 北京大学 一种基于图像的三维重建方法
US8773507B2 (en) 2009-08-11 2014-07-08 California Institute Of Technology Defocusing feature matching system to measure camera pose with interchangeable lens cameras
US8773514B2 (en) 2009-08-27 2014-07-08 California Institute Of Technology Accurate 3D object reconstruction using a handheld device with a projected light pattern
CN102271262B (zh) 2010-06-04 2015-05-13 三星电子株式会社 用于3d显示的基于多线索的视频处理方法
US8675926B2 (en) * 2010-06-08 2014-03-18 Microsoft Corporation Distinguishing live faces from flat surfaces
DK3091508T3 (en) 2010-09-03 2019-04-08 California Inst Of Techn Three-dimensional imaging system
US8855406B2 (en) 2010-09-10 2014-10-07 Honda Motor Co., Ltd. Egomotion using assorted features
US9224245B2 (en) * 2011-01-10 2015-12-29 Hangzhou Conformal & Digital Technology Limited Corporation Mesh animation
US9679384B2 (en) * 2011-08-31 2017-06-13 Apple Inc. Method of detecting and describing features from an intensity image
US10607350B2 (en) * 2011-08-31 2020-03-31 Apple Inc. Method of detecting and describing features from an intensity image
JP5966837B2 (ja) * 2012-10-05 2016-08-10 大日本印刷株式会社 奥行き制作支援装置、奥行き制作支援方法、およびプログラム
CN105556507B (zh) * 2013-09-18 2020-05-19 美国西门子医疗解决公司 从输入信号生成目标对象的重构图像的方法和系统
CN104517316B (zh) * 2014-12-31 2018-10-16 中科创达软件股份有限公司 一种三维物体建模方法及终端设备
US9613452B2 (en) * 2015-03-09 2017-04-04 Siemens Healthcare Gmbh Method and system for volume rendering based 3D image filtering and real-time cinematic rendering
WO2016145625A1 (fr) * 2015-03-18 2016-09-22 Xiaoou Tang Extraction de posture 3d de main à partir d'un système d'imagerie binoculaire
US11406264B2 (en) 2016-01-25 2022-08-09 California Institute Of Technology Non-invasive measurement of intraocular pressure
CN106023307B (zh) * 2016-07-12 2018-08-14 深圳市海达唯赢科技有限公司 基于现场环境的快速重建三维模型方法及系统
CN106846469B (zh) * 2016-12-14 2019-12-03 北京信息科技大学 基于特征点追踪由聚焦堆栈重构三维场景的方法和装置
US10586379B2 (en) 2017-03-08 2020-03-10 Ebay Inc. Integration of 3D models
US11727656B2 (en) 2018-06-12 2023-08-15 Ebay Inc. Reconstruction of 3D model with immersive experience
CN109117496B (zh) * 2018-06-25 2023-10-27 国网经济技术研究院有限公司 一种变电站工程临建布置三维模拟设计方法和系统
CN110942479B (zh) * 2018-09-25 2023-06-02 Oppo广东移动通信有限公司 虚拟对象控制方法、存储介质及电子设备
CN110533777B (zh) * 2019-08-01 2020-09-15 北京达佳互联信息技术有限公司 三维人脸图像修正方法、装置、电子设备和存储介质
CN111083373B (zh) * 2019-12-27 2021-11-16 恒信东方文化股份有限公司 一种大屏幕及其智能拍照方法
CN111601246B (zh) * 2020-05-08 2021-04-20 中国矿业大学(北京) 基于空间三维模型图像匹配的智能位置感知系统
CN111724481A (zh) * 2020-06-24 2020-09-29 嘉应学院 对二维图像进行三维重构的方法、装置、设备及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3548652B2 (ja) * 1996-07-24 2004-07-28 株式会社東芝 物体形状復元装置及びその方法
JP3512992B2 (ja) * 1997-01-07 2004-03-31 株式会社東芝 画像処理装置および画像処理方法
JP2003242162A (ja) * 2002-02-18 2003-08-29 Nec Soft Ltd 特徴点抽出方法、画像検索方法、特徴点抽出装置、画像検索システム及びプログラム
FR2837597A1 (fr) * 2002-03-25 2003-09-26 Thomson Licensing Sa Procede de modelisation d'une scene 3d
CN100416613C (zh) * 2002-09-29 2008-09-03 西安交通大学 计算机网络环境智能化场景绘制装置系统及绘制处理方法
CN1312633C (zh) * 2004-04-13 2007-04-25 清华大学 大规模三维场景多视点激光扫描数据自动配准方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2007130122A2 *

Also Published As

Publication number Publication date
WO2007130122A3 (fr) 2008-04-17
CA2650557C (fr) 2014-09-30
WO2007130122A2 (fr) 2007-11-15
CN101432776B (zh) 2013-04-24
CN101432776A (zh) 2009-05-13
JP2009536499A (ja) 2009-10-08
CA2650557A1 (fr) 2007-11-15

Similar Documents

Publication Publication Date Title
US8433157B2 (en) System and method for three-dimensional object reconstruction from two-dimensional images
CA2650557C (fr) Systeme et procede permettant une reconstruction tridimensionnelle d'objet a partir d'images bidimensionnelles
JP5160643B2 (ja) 2次元画像からの3次元オブジェクト認識システム及び方法
KR102468897B1 (ko) 깊이 값을 추정하는 방법 및 장치
JP5156837B2 (ja) 領域ベースのフィルタリングを使用する奥行マップ抽出のためのシステムおよび方法
EP2089853B1 (fr) Procédé et système de modélisation de la lumière
US8452081B2 (en) Forming 3D models using multiple images
US8447099B2 (en) Forming 3D models using two images
CA2687213C (fr) Systeme et procede pour l'appariement stereo d'images
EP3182371B1 (fr) Détermination de seuil dans par exemple un algorithme de type ransac
EP2291825B1 (fr) Système et procédé d extraction de profondeur d images avec prédiction directe et inverse de profondeur
WO2012096747A1 (fr) Établissement de cartes télémétriques à l'aide de motifs d'éclairage périodiques
Angot et al. A 2D to 3D video and image conversion technique based on a bilateral filter
Deschenes et al. An unified approach for a simultaneous and cooperative estimation of defocus blur and spatial shifts
Yao et al. 2D-to-3D conversion using optical flow based depth generation and cross-scale hole filling algorithm
Yamao et al. A sequential online 3d reconstruction system using dense stereo matching
Eisemann et al. Reconstruction of Dense Correspondences.
Ishii et al. Joint rendering and segmentation of free-viewpoint images
Wang et al. Depth Super-resolution by Fusing Depth Imaging and Stereo Vision with Structural Determinant Information Inference

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20081106

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

DAX Request for extension of the european patent (deleted)
RBV Designated contracting states (corrected)

Designated state(s): DE FR GB

17Q First examination report despatched

Effective date: 20100107

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: THOMSON LICENSING

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20140501