US20150279075A1 - Recording animation of rigid objects using a single 3d scanner - Google Patents



Publication number
US20150279075A1
US20150279075A1 (application US14/671,313)
Authority
US
United States
Prior art keywords
object
method
recording
reference model
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/671,313
Inventor
Stephen Brooks Myers
Jacob Abraham Kuttothara
Steven Donald Paddock
John Moore Wathen
Andrew Slatton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Knockout Concepts LLC
Original Assignee
KNOCKOUT CONCEPTS, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to provisional application US201461971036P (critical)
Application filed by Knockout Concepts, LLC
Priority to US14/671,313, published as US20150279075A1
Assigned to Knockout Concepts, LLC (assignor: Myers, Stephen B.)
Assigned to Knockout Concepts, LLC (assignors: Kuttothara, Jacob A.; Paddock, Steven D.; Slatton, Andrew; Wathen, John M.)
Publication of US20150279075A1
Application status: Abandoned

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical means
    • G01B11/26Measuring arrangements characterised by the use of optical means for measuring angles or tapers; for testing the alignment of axes
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/15Correlation function computation including computation of convolution operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00201Recognising three-dimensional objects, e.g. using range or tactile information
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/03Detection or correction of errors, e.g. by rescanning the pattern
    • G06K9/036Evaluation of quality of acquired pattern
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/36Image preprocessing, i.e. processing the image information without deciding about the identity of the image
    • G06K9/46Extraction of features or characteristics of the image
    • G06K9/4604Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes, intersections
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0081
    • G06T7/2046
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K2209/00Indexing scheme relating to methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K2209/40Acquisition of 3D measurements of objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • G06T2207/20144
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Abstract

This application teaches methods for recording animation. Such a method may include determining a reference model of an object by separating a 3D image of the object from a 3D image of its environment. The method may also include analyzing the reference model using one or more feature detection and localization algorithms. The object may then be recorded in motion, and the recording may be analyzed using feature detection and localization algorithm(s). Features of the recording may be matched to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object. A video animation may be created by recording a time series of poses of the object.

Description

    I. BACKGROUND OF THE INVENTION
  • A. Field of Invention
  • Some embodiments may generally relate to the field of extracting elements of 3D images in motion.
  • B. Description of the Related Art
  • Various video recording methodologies are known in the art, as are various methods of computer analysis of video. However, current recording analysis technologies tend to confine users to merely recognizing features in image data. Furthermore, objects in recorded digital video cannot be manipulated in the manner of a 3D CAD drawing. What is missing is a methodology for separating an object from its background in a 3D reconstructed model of a static scene, then using video of the same object in motion to obtain further structural detail of the object, and creating a 3D model of the object that can be reoriented, manipulated, and moved independently of the image or video from which it was created.
  • Some embodiments of the present invention may provide one or more benefits or advantages over the prior art.
  • II. SUMMARY OF THE INVENTION
  • Some embodiments may relate to a method for recording animation comprising the steps of: determining a reference model of an object by separating a three-dimensional model of the object from its environment in a 3D reconstruction of a static scene; analyzing the reference model using a feature detection and localization algorithm; recording movement of the object; analyzing the recording using feature detection and localization algorithms; matching features of the recording to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object; and recording a time series of poses of the object, the time series comprising an animation.
  • Embodiments may further comprise the step of saving the reference model in association with the animation on a computer readable medium.
  • According to some embodiments data for determining the reference model is obtained with a three-dimensional scanning device.
  • According to some embodiments the step of separating the three-dimensional model of the object from its environment is conducted by the three-dimensional scanning device.
  • According to some embodiments the step of analyzing the reference model is conducted by the three dimensional scanning device.
  • According to some embodiments the data for determining the reference model of the object, and for recording movement of the object, are obtained with the same three-dimensional scanning device.
  • According to some embodiments the feature detection and localization algorithm for analyzing the reference model is selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface.
  • According to some embodiments the feature detection and localization algorithm for analyzing the recording is selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface.
  • According to some embodiments a quantity of digital computations of a microprocessor is reduced by applying a Kalman filter to the step of analyzing the recording using feature detection and localization algorithms.
  • Embodiments may also relate to a method for recording animation comprising the steps of: determining a reference model of an object by separating a three-dimensional reconstruction of the object from its environment in a 3D reconstruction of a static scene; analyzing the reference model using a feature detection and localization algorithm selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface; recording movement of the object; analyzing the recording using feature detection and localization algorithms selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface, wherein a quantity of digital computations of a microprocessor is reduced by applying a Kalman filter; matching features of the recording to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object; and recording a time series of poses of the object, the time series comprising an animation.
  • Embodiments may also relate to a method for recording animation comprising the steps of: determining a reference model of an object by separating a three-dimensional reconstruction of the object from its environment in a 3D reconstruction of a static scene; analyzing the reference model using a feature detection and localization algorithm selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface; recording movement of the object; analyzing the recording using feature detection and localization algorithms selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface, wherein a quantity of digital computations of a microprocessor is reduced by applying a Kalman filter; matching features of the recording to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object; and recording a time series of poses of the object, the time series comprising an animation; wherein the step of separating the three-dimensional model of the object from the three-dimensional model of the environment of the object is conducted by a three-dimensional scanning device, and wherein the data for determining the reference model of the object, and for recording movement of the object, are obtained with the same three-dimensional scanning device.
  • Other benefits and advantages will become apparent to those skilled in the art to which it pertains upon reading and understanding of the following detailed specification.
  • III. BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention may take physical form in certain parts and arrangement of parts, embodiments of which will be described in detail in this specification and illustrated in the accompanying drawings which form a part hereof and wherein:
  • FIG. 1 is a process according to an embodiment of the invention;
  • FIG. 2 illustrates capturing a 3D reconstructed model of a static object according to one embodiment;
  • FIG. 3 illustrates separating an element of a 3D model from its background; and
  • FIG. 4 illustrates obtaining additional detail of a scanned and separated object by recording it in motion.
  • IV. DETAILED DESCRIPTION OF THE INVENTION
  • A method for recording animation of a three-dimensional real world object includes separating a 3D model of the object from a 3D model of its surroundings. Many known 3D scanners and cameras are capable of obtaining the data necessary for methods according to embodiments of this invention. This model of the 3D object, separated from the model of its environment, may be used as a reference model. The reference model may be further analyzed using a feature detection and localization algorithm to identify various features of the reference model that may be used for comparison with live feed from the 3D scanning device. Movement of the object, manually induced or otherwise, may be recorded using the 3D scanning device. Once again the features of the recording of the object in motion may be analyzed utilizing similar feature detection and localization algorithms. The features of the recording can be compared with the features of the reference model, and when matches are found those matches may comprise poses of the object for rendering an animation. Finally, the poses may be recombined in any order to formulate an animation of the object. The combination of a time series of poses arranged in any order and an arbitrary background allows one to create animations of the object that differ from the motion observed in the previously recorded video. As used herein the term pose carries its generally accepted meaning in the 3D imaging arts.
  • Referring now to the drawings, wherein the showings are for purposes of illustrating embodiments of the invention only and not for purposes of limiting the same, FIG. 1 depicts a flow diagram 100 of an illustrative embodiment for recording animation of a real world three-dimensional object. In a first step (not shown), 3D model data of an object may be captured by any arbitrary 3D digital imaging device and/or may be retrieved from storage in a database. A reference model of the object may be obtained by separating the object from its environment 110 according to known mathematical methods. In one embodiment, the act of separating the model of the object from its environment may be achieved using a 3D scanning device configured with such capabilities; however, it is contemplated that any 3D digital scanning device may be used to carry out the methods taught herein.
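The separation step 110 can be illustrated with a minimal sketch. The patent does not specify the "known mathematical methods" used, so the following assumes one common approach: fit the dominant plane of the scene (for example, a floor or tabletop) with a RANSAC-style search and treat the off-plane points as the candidate object. All names and parameters here are illustrative, not the patented implementation.

```python
import numpy as np

def separate_object_from_plane(points, n_iters=200, threshold=0.01, seed=0):
    """Split a point cloud into a dominant plane (environment) and the
    remaining points (candidate object) via a minimal RANSAC plane fit."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Fit a plane through 3 randomly sampled points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        length = np.linalg.norm(normal)
        if length < 1e-12:          # degenerate (collinear) sample
            continue
        normal /= length
        # Inliers are points within `threshold` of the candidate plane.
        inliers = np.abs((points - p0) @ normal) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers], points[best_inliers]

# Toy scene: a flat floor (z = 0) plus a raised cluster standing in for the object.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.random((500, 2)), np.zeros(500)])
obj = rng.random((100, 3)) * 0.2 + np.array([0.4, 0.4, 0.5])
object_pts, environment_pts = separate_object_from_plane(np.vstack([floor, obj]))
```

On this toy scene the fitted plane absorbs the floor, leaving the raised cluster as the separated object; a real scan would need a more robust segmentation than a single plane fit.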
  • The reference model may be analyzed using feature detection and localization algorithms 112 in order to enable later comparison of the features and related data with live feed from the scanning device. The feature detection and localization algorithm used for analyzing the reference model may be chosen from many processes and algorithms now known or developed in the future. Some such feature detection and localization algorithms include RANSAC (Random Sample Consensus), iterative closest point, least squares methods, Newtonian methods, quasi-Newtonian methods, expectation-maximization methods, detection of principal curvatures, or detection of distance to a medial surface. The methodology and corresponding algorithms of all of these processes are known in the art and incorporated by reference herein. In an illustrative embodiment, during the step of analyzing the recording using a feature detection and localization algorithm, the quantity of digital computations of a microprocessor may be reduced by applying a Kalman filter. In this context a Kalman filter allows embodiments to accurately predict the next position and/or orientation of the object, which enables embodiments to apply feature detection calculations to smaller regions of the 3D data. Kalman filter methodology is known in the art and is incorporated by reference herein.
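The Kalman-filter optimization described above can be made concrete with a toy sketch. The patent does not specify the filter's state model, so this assumes a one-dimensional constant-velocity state [position, velocity] over one pose coordinate; the one-step-ahead prediction is what would bound the region of the next frame searched for features. All matrices and values are illustrative.

```python
import numpy as np

# Constant-velocity Kalman filter over a single pose coordinate
# (e.g. the object's x translation), with time step dt = 1.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # we only measure position
Q = np.eye(2) * 1e-4                     # process noise covariance
R = np.array([[1e-2]])                   # measurement noise covariance

x = np.array([0.0, 0.0])                 # initial [position, velocity]
P = np.eye(2)                            # initial state covariance

def kalman_step(x, P, z):
    # Predict the next state and covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the measured position z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (np.array([z]) - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Feed noiseless measurements of an object moving 0.5 units/frame.
for t in range(1, 20):
    x, P = kalman_step(x, P, 0.5 * t)
predicted_next = (F @ x)[0]   # predicted position for the next frame (~10.0)
```

In the embodiment's terms, feature detection for the next frame would then be applied only to a neighbourhood of `predicted_next` rather than to the whole 3D data set.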
  • Movement of the real world three-dimensional object may be manually induced and recorded using a 3D scanning device 114. Features of the object in the recording may be analyzed using similar feature detection and localization algorithms 116. The feature detection and localization algorithm used for analyzing the recording may be chosen from many processes and algorithms now known or developed in the future. Some such feature detection and localization algorithms include RANSAC (Random Sample Consensus), iterative closest point, least squares methods, Newtonian methods, quasi-Newtonian methods, expectation-maximization methods, detection of principal curvatures, or detection of distance to a medial surface. The methodology and corresponding algorithms of all of these processes are incorporated by reference herein. In an illustrative embodiment, during the step of analyzing the recording using a feature detection and localization algorithm, the quantity of digital computations of a microprocessor may be reduced by applying a Kalman filter.
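Iterative closest point, one of the algorithms listed above, can be illustrated in a few lines. This is a bare-bones sketch (brute-force nearest neighbours, no outlier rejection), not the patent's implementation: it aligns a frame of the recording to the reference model by alternating correspondence search with the closed-form rigid fit (the Kabsch/SVD solution).

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD solution for the rotation R and translation t that best
    map src onto dst in the least-squares sense (correspondences given)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, n_iters=20):
    """Minimal iterative closest point: brute-force nearest neighbours
    plus the closed-form rigid fit, iterated."""
    cur = src.copy()
    for _ in range(n_iters):
        # For each source point, find its nearest destination point.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Given a reference-model cloud and a slightly rotated and translated frame of the recording, `icp(frame, reference)` returns the frame brought into the reference pose; the recovered transform is, in the patent's terms, the pose of the object for that frame.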
  • Once the features of the recording are obtained, such features may be compared with the features of the reference model 118. A match between the features of the recording and the features of the reference model comprises a pose of the object. The feature comparison may be continuously made until multiple matches result in multiple poses 120 being obtained. In an alternate embodiment, the matching of the features to obtain poses is done in real time when the recording is being made. A time series of the various poses may be recorded in any order comprising an animation of the object 122. In an illustrative embodiment, the reference model initially obtained may be saved in association with the animation. This may be saved on any computer readable medium.
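The data structure implied by steps 118 through 122 (matches become poses, and a reorderable time series of poses becomes an animation) might be sketched as follows. The class and field names are hypothetical, chosen only to mirror the description's vocabulary.

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    """One matched frame: the rigid transform that maps the reference
    model into the recording at that frame."""
    frame: int
    rotation: tuple        # e.g. a unit quaternion (w, x, y, z)
    translation: tuple     # (x, y, z)

@dataclass
class Animation:
    reference_model_id: str
    poses: list = field(default_factory=list)

    def record(self, pose):
        self.poses.append(pose)

    def reordered(self, order):
        # Poses may be recombined in any order to build a new animation
        # that differs from the originally observed motion.
        return Animation(self.reference_model_id,
                         [self.poses[i] for i in order])

anim = Animation("mug_reference")        # hypothetical model identifier
for f in range(3):
    anim.record(Pose(f, (1.0, 0.0, 0.0, 0.0), (0.1 * f, 0.0, 0.0)))
backwards = anim.reordered([2, 1, 0])    # same poses, reversed playback
```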
  • FIG. 2 depicts an illustrative embodiment 200 wherein a 3D scanner 210 is used to obtain an image 216 of a real world object 212. The scanner 210 may collect images of the static object 212 from all directions and orientations 214 to ensure a complete modeling 216 of the object 212. A reconstruction of this image data may be used to obtain a reference model of the real world object 212. In another embodiment, images of the static object 212 may be collected from less than all vantage points, and missing data may be filled in by correlating areas of missing data to areas of the object in a later-collected video image showing the object in motion.
  • FIG. 3 depicts an illustrative embodiment 300 wherein the model 216 of the object is obtained on a 3D data processing device 314 for further processing. After the model is captured 216, a data processing device may be used to separate the model of the object 312 from the model of its environment 310. This separation of the object from its environment may then be used as a reference model of the object, or may be used to produce a reference model of the object through further data processing.
  • FIG. 4 depicts an illustrative embodiment 400 wherein the movement of the real world object 410 is recorded 412 using a 3D scanning device 210. The features of the recording 412 are analyzed using feature detection and localization algorithms and the features of the recording are compared with the features of the reference model. A match between the features of the recording 412 and the features of the reference model comprises a pose of the three-dimensional object. A continuous matching of the features results in multiple poses and a time series of the various poses may be recorded comprising an animation of the object. In one embodiment, the reference model may be saved in association with the animation on a computer readable medium, device storage or server (including cloud server).
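Saving the reference model "in association with" the animation, as the embodiments describe, only requires that the two travel together on the storage medium. The patent prescribes no format; the sketch below assumes a single JSON document as one hypothetical layout.

```python
import json
import os
import tempfile

# Hypothetical on-disk layout: one JSON document bundling the reference
# model's geometry with the recorded pose time series, so the pair stays
# associated on a computer readable medium.
record = {
    "reference_model": {"vertices": [[0, 0, 0], [1, 0, 0], [0, 1, 0]]},
    "animation": [
        {"frame": 0, "rotation": [1, 0, 0, 0], "translation": [0.0, 0.0, 0.0]},
        {"frame": 1, "rotation": [1, 0, 0, 0], "translation": [0.1, 0.0, 0.0]},
    ],
}
path = os.path.join(tempfile.gettempdir(), "scan_animation.json")
with open(path, "w") as f:
    json.dump(record, f)

# Reading the file back recovers both the model and its animation together.
with open(path) as f:
    loaded = json.load(f)
```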
  • It will be apparent to those skilled in the art that the above methods and apparatuses may be changed or modified without departing from the general scope of the invention. The invention is intended to include all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
  • Having thus described the invention, it is now claimed:

Claims (13)

I/we claim:
1. A method for recording animation comprising the steps of:
determining a reference model of an object by separating a three-dimensional model of the object from its environment in a 3D reconstruction of a static scene;
analyzing the reference model using a feature detection and localization algorithm;
recording movement of the object;
analyzing the recording using feature detection and localization algorithms;
matching features of the recording to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object; and
recording a time series of poses of the object, the time series comprising an animation.
2. The method of claim 1 further comprising the step of saving the reference model in association with the animation on a computer readable medium.
3. The method of claim 1, wherein data for determining the reference model is obtained with a three-dimensional scanning device.
4. The method of claim 3, wherein the step of separating the three-dimensional model of the object from the three-dimensional model of the environment of the object is conducted by the three-dimensional scanning device.
5. The method of claim 3, wherein the step of analyzing the reference model is conducted by the three dimensional scanning device.
6. The method of claim 3, wherein the data for determining the reference model of the object, and for recording movement of the object, are obtained with the same three-dimensional scanning device.
7. The method of claim 1, wherein the feature detection and localization algorithm for analyzing the reference model is selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface.
8. The method of claim 1, wherein the feature detection and localization algorithm for analyzing the recording is selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface.
9. The method of claim 1, wherein a quantity of digital computations of a microprocessor is reduced by applying a Kalman filter to the step of analyzing the recording using feature detection and localization algorithms.
10. A method for recording animation comprising the steps of:
determining a reference model of an object by separating a three-dimensional model of the object from its environment in a 3D reconstruction of a static scene;
analyzing the reference model using a feature detection and localization algorithm selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface;
recording movement of the object;
analyzing the recording using feature detection and localization algorithms selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface, wherein a quantity of digital computations of a microprocessor is reduced by applying a Kalman filter;
matching features of the recording to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object; and
recording a time series of poses of the object, the time series comprising an animation.
11. A method for recording animation comprising the steps of:
determining a reference model of an object by separating a three-dimensional model of the object from its environment in a 3D reconstruction of a static scene;
analyzing the reference model using a feature detection and localization algorithm selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface;
recording movement of the object;
analyzing the recording using feature detection and localization algorithms selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface, wherein a quantity of digital computations of a microprocessor is reduced by applying a Kalman filter;
matching features of the recording to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object; and
recording a time series of poses of the object, the time series comprising an animation;
wherein the step of separating the three-dimensional model of the object from the three-dimensional model of the environment of the object is conducted by a three-dimensional scanning device, and wherein the data for determining the reference model of the object, and for recording movement of the object, are obtained with the same three-dimensional scanning device.
12. The method of claim 11, wherein the step of separating the three-dimensional image of the object from the three-dimensional image of the environment of the object is conducted by the three-dimensional scanning device.
13. The method of claim 12, wherein the step of analyzing the reference model is conducted by the three dimensional scanning device.
US14/671,313 2014-03-27 2015-03-27 Recording animation of rigid objects using a single 3d scanner Abandoned US20150279075A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US201461971036P 2014-03-27 2014-03-27
US14/671,313 US20150279075A1 (en) 2014-03-27 2015-03-27 Recording animation of rigid objects using a single 3d scanner


Publications (1)

Publication Number Publication Date
US20150279075A1 true US20150279075A1 (en) 2015-10-01

Family

ID=54189850

Family Applications (5)

Application Number Title Priority Date Filing Date
US14/671,373 Abandoned US20150278155A1 (en) 2014-03-27 2015-03-27 Identifying objects using a 3d scanning device, images, and 3d models
US14/671,420 Abandoned US20150279087A1 (en) 2014-03-27 2015-03-27 3d data to 2d and isometric views for layout and creation of documents
US14/671,313 Abandoned US20150279075A1 (en) 2014-03-27 2015-03-27 Recording animation of rigid objects using a single 3d scanner
US14/671,749 Abandoned US20150279121A1 (en) 2014-03-27 2015-03-27 Active Point Cloud Modeling
US14/672,048 Active 2035-11-14 US9841277B2 (en) 2014-03-27 2015-03-27 Graphical feedback during 3D scanning operations for obtaining optimal scan resolution


Country Status (1)

Country Link
US (5) US20150278155A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160125638A1 (en) * 2014-11-04 2016-05-05 Dassault Systemes Automated Texturing Mapping and Animation from Images

Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
CN105469446A (en) * 2014-09-05 2016-04-06 富泰华工业(深圳)有限公司 Point cloud mesh simplification system and method
US9866815B2 (en) * 2015-01-05 2018-01-09 Qualcomm Incorporated 3D object segmentation
CN105551078A (en) * 2015-12-02 2016-05-04 北京建筑大学 Method and system of virtual imaging of broken cultural relics
US10127333B2 (en) 2015-12-30 2018-11-13 Dassault Systemes Embedded frequency based search and 3D graphical data processing
JP2017120633A (en) * 2015-12-30 2017-07-06 ダッソー システムズDassault Systemes Embedded frequency based search and 3d graphical data processing
US10049479B2 (en) 2015-12-30 2018-08-14 Dassault Systemes Density based graphical mapping
US10360438B2 (en) 2015-12-30 2019-07-23 Dassault Systemes 3D to 2D reimaging for search
CN106524920A (en) * 2016-10-25 2017-03-22 Shanghai Jianke Engineering Consulting Co., Ltd. Application of field measurement in construction project based on three-dimensional laser scanning
CN106650700A (en) * 2016-12-30 2017-05-10 Shanghai United Imaging Healthcare Co., Ltd. Phantom, and method and device for measuring system matrix

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100027840A1 (en) * 2006-07-20 2010-02-04 The Regents Of The University Of California System and method for bullet tracking and shooter localization
US20140219550A1 (en) * 2011-05-13 2014-08-07 Liberovision Ag Silhouette-based pose estimation
US8896607B1 (en) * 2009-05-29 2014-11-25 Two Pic Mc Llc Inverse kinematics for rigged deformable characters

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4319031B2 (en) 2001-09-06 2009-08-26 Koninklijke Philips Electronics N.V. Object segmentation method and apparatus
US8108929B2 (en) * 2004-10-19 2012-01-31 Reflex Systems, LLC Method and system for detecting intrusive anomalous use of a software system using multiple detection algorithms
US7860301B2 (en) 2005-02-11 2010-12-28 Macdonald Dettwiler And Associates Inc. 3D imaging system
US7768656B2 (en) 2007-08-28 2010-08-03 Artec Group, Inc. System and method for three-dimensional measurement of the shape of material objects
KR20090047172A (en) * 2007-11-07 2009-05-12 Samsung Digital Imaging Co., Ltd. Method for controlling digital camera for picture testing
US8255100B2 (en) * 2008-02-27 2012-08-28 The Boeing Company Data-driven anomaly detection to anticipate flight deck effects
DE102008021558A1 (en) * 2008-04-30 2009-11-12 Advanced Micro Devices, Inc., Sunnyvale Process and system for semiconductor process control and monitoring using PCA models of reduced size
EP2280651A2 (en) * 2008-05-16 2011-02-09 Geodigm Corporation Method and apparatus for combining 3d dental scans with other 3d data sets
WO2010003844A1 (en) * 2008-06-30 2010-01-14 Thomson Licensing Method for the real-time composition of a video
AT545260T (en) * 2008-08-01 2012-02-15 Gigle Networks SL OFDM frame synchronization process and system
US8817019B2 (en) * 2009-07-31 2014-08-26 Analogic Corporation Two-dimensional colored projection image from three-dimensional image data
GB0913930D0 (en) * 2009-08-07 2009-09-16 Ucl Business Plc Apparatus and method for registering two medical images
US8085279B2 (en) * 2009-10-30 2011-12-27 Synopsys, Inc. Drawing an image with transparent regions on top of another image without using an alpha channel
EP2677938B1 (en) * 2011-02-22 2019-09-18 Midmark Corporation Space carving in 3d data acquisition
US8724880B2 (en) * 2011-06-29 2014-05-13 Kabushiki Kaisha Toshiba Ultrasonic diagnostic apparatus and medical image processing apparatus
EP2780826A4 (en) * 2011-11-15 2016-01-06 Trimble Navigation Ltd Browser-based collaborative development of a 3d model
US20150153476A1 (en) * 2012-01-12 2015-06-04 Schlumberger Technology Corporation Method for constrained history matching coupled with optimization
US9208550B2 (en) 2012-08-15 2015-12-08 Fuji Xerox Co., Ltd. Smart document capture based on estimated scanned-image quality
DE102013203667A1 (en) * 2013-03-04 2014-09-04 Adidas Ag Interactive booth and method for determining a body shape
EP3022525A1 (en) 2013-07-18 2016-05-25 a.tron3d GmbH Method of capturing three-dimensional (3d) information on a structure
US20150070468A1 (en) 2013-09-10 2015-03-12 Faro Technologies, Inc. Use of a three-dimensional imager's point cloud data to set the scale for photogrammetry

Also Published As

Publication number Publication date
US20150276392A1 (en) 2015-10-01
US20150279121A1 (en) 2015-10-01
US9841277B2 (en) 2017-12-12
US20150278155A1 (en) 2015-10-01
US20150279087A1 (en) 2015-10-01

Similar Documents

Publication Publication Date Title
Fischer et al. FlowNet: Learning optical flow with convolutional networks
Fuhrmann et al. MVE: A multi-view reconstruction environment
US8405742B2 (en) Processing images having different focus
Revaud et al. EpicFlow: Edge-preserving interpolation of correspondences for optical flow
US9177381B2 (en) Depth estimate determination, systems and methods
US8781161B2 (en) Image processing method and apparatus for generating a 3D model of a target object
US9420265B2 (en) Tracking poses of 3D camera using points and planes
JP6022562B2 (en) Mobile augmented reality system
TWI485650B (en) Method and arrangement for multi-camera calibration
Lu et al. Depth enhancement via low-rank matrix completion
US8872851B2 (en) Augmenting image data based on related 3D point cloud data
US9330470B2 (en) Method and system for modeling subjects from a depth map
WO2015006224A1 (en) Real-time 3d computer vision processing engine for object recognition, reconstruction, and analysis
JP6144826B2 (en) Interactive and automatic 3D object scanning method for database creation
WO2011048302A1 (en) Method, computer program, and device for hybrid tracking of real-time representations of objects in image sequence
US20160012588A1 (en) Method for Calibrating Cameras with Non-Overlapping Views
US9429418B2 (en) Information processing method and information processing apparatus
WO2015192316A1 (en) Face hallucination using convolutional neural networks
Carozza et al. Markerless vision-based augmented reality for urban planning
Park et al. Texture-less object tracking with online training using an RGB-D camera
KR101791590B1 (en) Object pose recognition apparatus and method using the same
US20140211989A1 (en) Component Based Correspondence Matching for Reconstructing Cables
US10083366B2 (en) Edge-based recognition, systems and methods
US20160071318A1 (en) Real-Time Dynamic Three-Dimensional Adaptive Object Recognition and Model Reconstruction
US9799139B2 (en) Accurate image alignment to a 3D model

Legal Events

Date Code Title Description
AS Assignment

Owner name: KNOCKOUT CONCEPTS, LLC, OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MEYERS, STEPHEN B;REEL/FRAME:035776/0218

Effective date: 20150528

Owner name: KNOCKOUT CONCEPTS, LLC, OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUTTOTHARA, JACOB A;WATHEN, JOHN M;PADDOCK, STEVEN D;AND OTHERS;REEL/FRAME:035776/0299

Effective date: 20150528

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION