US20150279087A1 - 3d data to 2d and isometric views for layout and creation of documents - Google Patents

3d data to 2d and isometric views for layout and creation of documents

Info

Publication number
US20150279087A1
US20150279087A1 (application US14/671,420)
Authority
US
United States
Prior art keywords
dimensional model
model data
boundaries
dimensional
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/671,420
Inventor
Stephen Brooks Myers
Jacob Abraham Kuttothara
Steven Donald Paddock
John Moore Wathen
Andrew Slatton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Knockout Concepts LLC
Original Assignee
Knockout Concepts LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Knockout Concepts LLC filed Critical Knockout Concepts LLC
Priority to US14/671,420
Assigned to KNOCKOUT CONCEPTS, LLC. Assignment of assignors interest (see document for details). Assignors: MEYERS, STEPHEN B
Assigned to KNOCKOUT CONCEPTS, LLC. Assignment of assignors interest (see document for details). Assignors: KUTTOTHARA, JACOB A; PADDOCK, STEVEN D; SLATTON, ANDREW; WATHEN, JOHN M
Publication of US20150279087A1
Current legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/26 Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/15 Correlation function computation including computation of convolution operations
    • G06K9/4604
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/993 Evaluation of the quality of the acquired pattern
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/12 Acquisition of 3D measurements of objects

Abstract

This application relates to methods for generating two-dimensional images from three-dimensional model data. A process according to the application may begin with providing a set of three-dimensional model data of a subject, and determining a set of boundaries between intersecting surfaces of the set of three-dimensional model data. A user or an algorithm may select a view of the three-dimensional model data to convert to a two-dimensional image. The process may further include determining an outline of the three-dimensional model corresponding to the selected view, and projecting the outline of the three-dimensional model and a visible portion of the set of boundaries onto a two-dimensional image plane.

Description

    I. BACKGROUND OF THE INVENTION
  • A. Field of Invention
  • Embodiments generally relate to creating technical drawings from 3D model data.
  • B. Description of the Related Art
  • A variety of methods are known in the art for generating 2D images from 3D models. For instance, it is known to generate a collage of 2D renderings that represent a 3D model. It is further known to identify vertices and edges of objects in images. The prior art also includes methods for flattening 3D surfaces to 2D quadrilateral line drawings in a 2D image plane. However, the art is deficient in a number of regards. For instance, the prior art does not teach or suggest fitting a 3D point cloud to a set of simple 2D surfaces, determining boundaries and vertices of the 2D surfaces and projecting them onto an image plane.
  • Some embodiments of the present invention may provide one or more benefits or advantages over the prior art.
  • II. SUMMARY OF THE INVENTION
  • Some embodiments may relate to a method for generating two-dimensional images, comprising the steps of: providing a set of three-dimensional model data of a subject; determining a set of boundaries between intersecting surfaces of the set of three-dimensional model data; selecting a view of the three-dimensional model data to convert to a two-dimensional image; determining an outline of the three-dimensional model data corresponding to the selected view of the three-dimensional model data; determining the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject; and projecting the outline of the three-dimensional model data and the visible portion of the set of boundaries onto a two-dimensional image plane.
  • Embodiments may further comprise projecting the invisible boundaries on the two-dimensional image plane in a form visually distinguishable from the visible boundaries.
  • According to some embodiments the form visually distinguishable from the visible boundaries comprises dashed, dotted, or broken lines.
  • According to some embodiments the three-dimensional model data comprises a point cloud.
  • Embodiments may further comprise the step of converting the point cloud to a set of continuous simple surfaces using a fitting method selected from one or more of a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method.
  • According to some embodiments a simple surface comprises a planar surface, a cylindrical surface, a spherical surface, a sinusoidal surface, or a conic surface.
  • According to some embodiments the step of selecting a view comprises orienting a three-dimensional model defined by the three-dimensional model data so that the planar bounded region with the largest convex hull is visible.
  • According to some embodiments the step of determining a set of boundaries comprises a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method.
  • According to some embodiments the three-dimensional model data comprises a mesh.
  • According to some embodiments the step of determining a set of boundaries comprises finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation.
  • Other benefits and advantages will become apparent to those skilled in the art to which it pertains upon reading and understanding of the following detailed specification.
  • III. BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention may take physical form in certain parts and arrangement of parts, embodiments of which will be described in detail in this specification and illustrated in the accompanying drawings which form a part hereof and wherein:
  • FIG. 1 is a flowchart showing an image conversion process according to an embodiment of the invention;
  • FIG. 2 is a schematic view of a user capturing 3D model data with a 3D scanning device;
  • FIG. 3 is a drawing of a point cloud being converted into an isometric drawing;
  • FIG. 4 is a drawing showing the use of a set of simple surfaces for generating 2D drawings;
  • FIG. 5 is a drawing of a device according to an embodiment of the invention; and
  • FIG. 6 is an illustrative printout according to an embodiment of the invention.
  • IV. DETAILED DESCRIPTION OF THE INVENTION
  • A method for generating two-dimensional images includes determining a set of boundaries between intersecting surfaces of three-dimensional model data corresponding to an object. A specific view of the three-dimensional model data, for which the two-dimensional images are required, is selected. Upon selection of the specific view, the outline of the three-dimensional model data corresponding to the selected view is determined, and the portion of the boundaries that is invisible due to the opacity of the object is identified. The outline of the three-dimensional model data and the visible portion of the boundaries so determined are projected onto a two-dimensional image plane.
  • Referring now to the drawings, wherein the showings are for purposes of illustrating embodiments of the invention only and not for purposes of limiting the same, FIG. 1 depicts a flow diagram 100 of an illustrative embodiment wherein three-dimensional data 110 is provided for the purpose of generating corresponding two-dimensional images. The three-dimensional data may be in the form of a point cloud or mesh representation of a three-dimensional subject. Furthermore, any and all other forms of three-dimensional data representation, now known or developed in the future, that are capable of being converted to point cloud or mesh form may be used.
  • The point cloud or mesh may be further converted to a set or sets of continuous simple surfaces by using a fitting method including but not limited to a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method. All these methods are well understood in the art and their methodologies are incorporated by reference herein. Any simple geometric surface including but not limited to a planar surface, cylindrical surface, spherical surface, sinusoidal surface, or a conic surface may be used to represent the point cloud as the set of simple continuous surfaces.
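  • By way of illustration only (this sketch is not part of the patent disclosure; the function and parameter names are hypothetical), a minimal RANSAC plane fit over a point cloud stored as an (N, 3) NumPy array might look like the following. Repeating the routine on the remaining outliers would peel off one simple surface at a time:

        import numpy as np

        def ransac_plane(points, n_iters=500, tol=0.01):
            """Fit one plane (n, d), with n.x + d = 0, to an (N, 3) point cloud."""
            rng = np.random.default_rng(0)
            best_inliers = np.zeros(len(points), dtype=bool)
            best_model = None
            for _ in range(n_iters):
                # Three random, distinct points define a candidate plane.
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(n)
                if norm < 1e-12:  # degenerate (nearly collinear) sample
                    continue
                n = n / norm
                d = -np.dot(n, p0)
                # Inliers lie within tol of the candidate plane.
                inliers = np.abs(points @ n + d) < tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers, best_model = inliers, (n, d)
            return best_model, best_inliers

  • Cylindrical, spherical, or conic surfaces can be extracted the same way by sampling enough points to determine each primitive and measuring point-to-surface distance in the inlier test.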
  • A set of boundaries between intersecting surfaces of the three-dimensional model data is determined 112. In an illustrative embodiment this determination of a set of boundaries may be achieved by using a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method. All these methods are well understood in the art and their methodologies are incorporated by reference herein. In an alternate embodiment wherein the three-dimensional model data is represented as a mesh, the set of boundaries may be determined by finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation.
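  • For the mesh case, the dihedral-angle test reduces to comparing the unit normals of the two triangles sharing an edge. A minimal sketch, assuming a triangle mesh stored as NumPy arrays and an illustrative 30-degree threshold (both are assumptions, not taken from the disclosure):

        import numpy as np

        def sharp_edges(vertices, faces, angle_deg=30.0):
            """Return edges whose adjacent face normals differ by more than
            angle_deg, i.e. where the surface bends sharply (candidate
            boundary edges). vertices: (V, 3) float, faces: (F, 3) int."""
            v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
            normals = np.cross(v1 - v0, v2 - v0)  # per-face normals
            normals /= np.linalg.norm(normals, axis=1, keepdims=True)
            edge_faces = {}  # undirected edge -> incident face indices
            for f, (a, b, c) in enumerate(faces):
                for e in ((a, b), (b, c), (c, a)):
                    edge_faces.setdefault(tuple(sorted(e)), []).append(f)
            cos_thresh = np.cos(np.radians(angle_deg))
            return [e for e, fs in edge_faces.items()
                    if len(fs) == 2 and normals[fs[0]] @ normals[fs[1]] < cos_thresh]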
  • Once the set of boundaries between intersecting surfaces of the three-dimensional model data is determined, a view of the image data for which two-dimensional images are required is selected 114. In one embodiment, the view may be selected by orienting a three-dimensional model defined by the three-dimensional model data so that the planar bounded region with the largest convex hull is visible. Based on the view selected, an outline of the image data corresponding to the view is determined 116. In one embodiment, the outline determination may be based upon selecting the portion of the image data from one visible edge to the other in the selected view. Also, the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject is determined 118. In another embodiment, the portion of the set of boundaries visible from the selected viewpoint is determined, thereby excluding the invisible boundaries. The determined outline and the visible portion of the set of boundaries are projected onto a two-dimensional image plane 120.
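  • Once a view is fixed, the projection step itself reduces to rotating the model into the view frame and dropping the depth coordinate (an orthographic projection). A hedged sketch follows; the view matrices and names are illustrative assumptions, and the returned depths are what a hidden-line test would compare against the fitted surfaces:

        import numpy as np

        # Rotations taking model coordinates into a canonical image frame;
        # after rotation, columns 0-1 are image-plane coordinates, column 2 is depth.
        VIEWS = {
            "top":   np.eye(3),
            "front": np.array([[1, 0, 0], [0, 0, 1], [0, -1, 0]]),
            "side":  np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]]),
        }

        def project(points, view="top"):
            """Orthographically project (N, 3) boundary points for one view."""
            rotated = points @ VIEWS[view].T
            return rotated[:, :2], rotated[:, 2]  # image xy, per-point depth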
  • In another embodiment, the invisible portion of the boundaries may also be depicted on a 2D image plane in a manner that distinguishes the invisible boundaries from the visible boundaries. One illustrative mechanism of distinguishing invisible boundaries from visible ones may involve use of dashed, dotted, or broken lines.
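  • Any 2D drawing backend can honor that convention. A matplotlib sketch (illustrative only; the segment lists are assumed to come from the projection and occlusion steps above):

        import matplotlib.pyplot as plt

        def draw_segments(visible, hidden):
            """Draw visible boundary segments solid and hidden ones dashed.
            Each segment is ((x0, y0), (x1, y1)) in image-plane coordinates."""
            fig, ax = plt.subplots()
            for (x0, y0), (x1, y1) in visible:
                ax.plot([x0, x1], [y0, y1], "k-")   # solid: visible edge
            for (x0, y0), (x1, y1) in hidden:
                ax.plot([x0, x1], [y0, y1], "k--")  # dashed: hidden edge
            ax.set_aspect("equal")
            plt.show()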
  • FIG. 2 depicts an illustrative embodiment 200 wherein a three-dimensional scanner 210 is used to scan and obtain three-dimensional model data 216 of a real world subject 212. The three-dimensional model data 216 is obtained by scanning the subject 212 from various directions and orientations 214. The image scanner 210 may be any known or future developed 3D scanner including but not limited to mobile devices, smart phones or tablets configured to scan and obtain three-dimensional model data.
  • FIG. 3 depicts an illustrative embodiment 300 wherein the three-dimensional model data of the real-world subject is represented in the form of a point cloud 310. This point cloud representation may be further converted to a set or sets of continuous simple surfaces 312. As discussed previously herein, this conversion may be achieved by using a fitting method including but not limited to a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method. The simple surfaces used to represent the point cloud may be any simple geometric surface (polygonal and cylindrical surfaces in this case) including but not limited to a planar surface, cylindrical surface, spherical surface, sinusoidal surface, or a conic surface. In one embodiment, a set of boundaries between the intersecting simple surfaces is determined using various methods known in the art including but not limited to a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method. In another embodiment, where a mesh model is used instead of a point cloud, the boundaries may also be determined by finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation.
  • FIG. 4 depicts an illustrative embodiment 400 wherein the three-dimensional model data, represented as a set of continuous simple surfaces 312, is used for 2D image generation. A view of the set of continuous simple surfaces 312 is chosen, and the determined outline and the visible portion of the set of boundaries corresponding to the chosen view are projected onto a two-dimensional image plane. For example, the top view 412, the front view 416, or the side view 414 may be chosen and projected. Optionally, the invisible boundaries 418 may be depicted using dashed, dotted, or broken lines. Furthermore, because of the nature of the image data collected and reconstructed, it is possible to produce drawings having precise dimensions, such as the ones shown in FIG. 4 elements 412 and 414.
  • It is also contemplated to include a dimensional standard in the collected 3D model data so that drawings can be made to scale, e.g. a 1:1 scale with the identical measurements of the real-world object being modeled. For instance, in some embodiments the scanning device may be equipped with features for measuring its distance from the object being scanned, and may therefore be capable of accurately determining dimensions. Embodiments may also include the ability to manipulate scale, so that a drawing of a very large object can be rendered at a more manageable scale such as 1:10, as in the sketch below. It may further be advantageous to include dimensions on the 3D or 2D drawings produced according to embodiments of the invention in the form of annotations similar to those shown in FIG. 4 elements 412 and 414.
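  • The scale manipulation is simple arithmetic once a dimensional standard anchors the real-world units: at 1:10, a 2,500 mm edge is drawn 250 mm long. A one-line sketch (names are illustrative):

        def paper_length(real_mm, scale=10.0):
            """Length on paper for a real-world length at a 1:scale drawing."""
            return real_mm / scale

        print(paper_length(2500))  # a 2.5 m edge renders as 250.0 mm at 1:10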
  • FIG. 5 depicts an embodiment 500 illustrating a user device 510 with a capacitive touch screen 512 and interface, which may be configured either to carry out the method provided herein or to receive the 2D images and other related data generated using the method provided herein. The device 510 may be any device with computing and processing capabilities including but not limited to mobile phones, tablets, smart phones and the like. The device 510 may be adapted to display the point cloud 310 of the scanned subject and the corresponding set of continuous simple surfaces 312. The various views such as top view 412, side view 414 and front view 416 may also be displayed on the screen 512 of the device 510. The device 510 may connect to a printing device 520 to enable physical printing of the 2D images and other related information. It will be understood that images may be stored in the form of digital documents as well, and that the invention is not limited to printed documents. The device 510 may be connected to the printing device 520 through a wired connection 518 or wirelessly 516. The wireless connection 516 with the printing device 520 may use Wi-Fi, Bluetooth or any other now known or future developed method of wireless connectivity. There may be contextual touch screen buttons 514 on the screen 512 of the device 510 configured to carry out various actions such as executing a print command, zooming in/out, or selecting different views of the set of continuous simple surfaces 312.
  • FIG. 6 depicts an illustrative embodiment 600 of a physical print or digital document 610 of the 2D images obtained using the methods described herein. A two-dimensional representation of the set of continuous simple surfaces 312 and various 2D images such as top view 412, side view 414 and front view 416 may be depicted in the document 610. The document 610 may also contain additional information in the form of notes 612 or annotations with respect to the 2D images, and a header 614 and footer 616 section. For instance, embodiments of the invention may include the ability to precisely measure the actual dimensions of an object being scanned; therefore, notes and annotations may include, without limitation, the volume of the object, the object's dimensions, its texture and color, its location as determined by an onboard GPS, the time and date that the scan was taken, the operator's name, or any other data that may be convenient to store with the scan data. If the average density of the object is known, even the weight of the object could be determined and displayed in the notes.
  • It will be apparent to those skilled in the art that the above methods and apparatuses may be changed or modified without departing from the general scope of the invention. The invention is intended to include all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
  • Having thus described the invention, it is now claimed:

Claims (17)

I/we claim:
1. A method for generating two-dimensional images, comprising the steps of:
providing a set of three-dimensional model data of a subject;
determining a set of boundaries between intersecting surfaces of the set of three-dimensional model data;
selecting a view of the three-dimensional model data to convert to a two-dimensional image;
determining an outline of the three-dimensional model data corresponding to the selected view of the three-dimensional model data; and
projecting the outline of the three-dimensional model data and a visible portion of the set of boundaries onto a two-dimensional image plane.
2. The method of claim 1, further comprising the step of determining the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject.
3. The method of claim 2, further comprising the step of projecting the invisible boundaries on the two-dimensional image plane in a form visually distinguishable from the visible boundaries.
4. The method of claim 3, wherein the form visually distinguishable from the visible boundaries comprises dashed, dotted, or broken lines.
5. The method of claim 1, wherein the three-dimensional model data comprises a point cloud.
6. The method of claim 5, further comprising the step of converting the point cloud to a set of continuous simple surfaces using a fitting method selected from one or more of a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method.
7. The method of claim 6, wherein a simple surface comprises a planar surface, a cylindrical surface, a spherical surface, a sinusoidal surface, or a conic surface.
8. The method of claim 1, wherein the step of selecting a view comprises orienting a three-dimensional model defined by the three-dimensional model data so that the planar bounded region with the largest convex hull is visible.
9. The method of claim 5, wherein the step of determining a set of boundaries comprises a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method.
10. The method of claim 1, wherein the three-dimensional model data comprises a mesh.
11. The method of claim 10, wherein the step of determining a set of boundaries comprises finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation.
12. A method for generating two-dimensional images, comprising the steps of:
providing a set of three-dimensional model data of a subject, wherein the three-dimensional model data comprises a point cloud;
converting the point cloud to a set of continuous simple surfaces using a fitting method selected from one or more of a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method, wherein a simple surface comprises a planar surface, a cylindrical surface, a spherical surface, a sinusoidal surface, or a conic surface;
determining a set of boundaries between the intersecting simple surfaces, wherein the step of determining a set of boundaries comprises a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method;
selecting a view of the three-dimensional model data to convert to a two-dimensional image, wherein the step of selecting a view comprises orienting a three-dimensional model defined by the three-dimensional model data so that the planar bounded region with the largest convex hull is visible;
determining an outline of the three-dimensional model data corresponding to the selected view of the three-dimensional model data;
determining the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject; and
projecting the outline of the three-dimensional model data and the visible portion of the set of boundaries onto a two-dimensional image plane.
13. The method of claim 12, further comprising projecting the invisible boundaries on the two-dimensional image plane in a form visually distinguishable from the visible boundaries.
14. The method of claim 13, wherein the form visually distinguishable from the visible boundaries comprises dashed, dotted, or broken lines.
15. A method for generating two-dimensional images, comprising the steps of:
providing a set of three-dimensional model data of a subject, wherein the three-dimensional model data comprises a mesh;
determining a set of boundaries between intersecting surfaces of the set of three-dimensional model data, wherein the step of determining a set of boundaries comprises finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation;
selecting a view of the three-dimensional model data to convert to a two-dimensional image;
determining an outline of the three-dimensional model data corresponding to the selected view of the three-dimensional model data;
determining the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject; and
projecting the outline of the three-dimensional model data and the visible portion of the set of boundaries onto a two-dimensional image plane.
16. The method of claim 15, further comprising projecting the invisible boundaries on the two-dimensional image plane in a form visually distinguishable from the visible boundaries.
17. The method of claim 16, wherein the form visually distinguishable from the visible boundaries comprises dashed, dotted, or broken lines.
US14/671,420 2014-03-27 2015-03-27 3d data to 2d and isometric views for layout and creation of documents Abandoned US20150279087A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/671,420 US20150279087A1 (en) 2014-03-27 2015-03-27 3d data to 2d and isometric views for layout and creation of documents

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461971036P 2014-03-27 2014-03-27
US14/671,420 US20150279087A1 (en) 2014-03-27 2015-03-27 3d data to 2d and isometric views for layout and creation of documents

Publications (1)

Publication Number Publication Date
US20150279087A1 true US20150279087A1 (en) 2015-10-01

Family

ID=54189850

Family Applications (5)

Application Number Title Priority Date Filing Date
US14/671,420 Abandoned US20150279087A1 (en) 2014-03-27 2015-03-27 3d data to 2d and isometric views for layout and creation of documents
US14/671,313 Abandoned US20150279075A1 (en) 2014-03-27 2015-03-27 Recording animation of rigid objects using a single 3d scanner
US14/672,048 Active 2035-11-14 US9841277B2 (en) 2014-03-27 2015-03-27 Graphical feedback during 3D scanning operations for obtaining optimal scan resolution
US14/671,373 Abandoned US20150278155A1 (en) 2014-03-27 2015-03-27 Identifying objects using a 3d scanning device, images, and 3d models
US14/671,749 Abandoned US20150279121A1 (en) 2014-03-27 2015-03-27 Active Point Cloud Modeling

Family Applications After (4)

Application Number Title Priority Date Filing Date
US14/671,313 Abandoned US20150279075A1 (en) 2014-03-27 2015-03-27 Recording animation of rigid objects using a single 3d scanner
US14/672,048 Active 2035-11-14 US9841277B2 (en) 2014-03-27 2015-03-27 Graphical feedback during 3D scanning operations for obtaining optimal scan resolution
US14/671,373 Abandoned US20150278155A1 (en) 2014-03-27 2015-03-27 Identifying objects using a 3d scanning device, images, and 3d models
US14/671,749 Abandoned US20150279121A1 (en) 2014-03-27 2015-03-27 Active Point Cloud Modeling

Country Status (1)

Country Link
US (5) US20150279087A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160125638A1 (en) * 2014-11-04 2016-05-05 Dassault Systemes Automated Texturing Mapping and Animation from Images
JP2017041022A (en) * 2015-08-18 2017-02-23 キヤノン株式会社 Information processor, information processing method and program
CN105551078A (en) * 2015-12-02 2016-05-04 北京建筑大学 Method and system of virtual imaging of broken cultural relics
US11138306B2 (en) * 2016-03-14 2021-10-05 Amazon Technologies, Inc. Physics-based CAPTCHA
CN106524920A (en) * 2016-10-25 2017-03-22 上海建科工程咨询有限公司 Application of field measurement in construction project based on three-dimensional laser scanning
CN106650700B (en) * 2016-12-30 2020-12-01 上海联影医疗科技股份有限公司 Die body, method and device for measuring system matrix
CN107677221B (en) * 2017-10-25 2024-03-19 贵州大学 Plant leaf movement angle measuring method and device
US10699404B1 (en) * 2017-11-22 2020-06-30 State Farm Mutual Automobile Insurance Company Guided vehicle capture for virtual model generation
EP3496388A1 (en) 2017-12-05 2019-06-12 Thomson Licensing A method and apparatus for encoding a point cloud representing three-dimensional objects
CN108921045B (en) * 2018-06-11 2021-08-03 佛山科学技术学院 Spatial feature extraction and matching method and device of three-dimensional model
US10600230B2 (en) * 2018-08-10 2020-03-24 Sheng-Yen Lin Mesh rendering system, mesh rendering method and non-transitory computer readable medium
GB2586838B (en) * 2019-09-05 2022-07-27 Sony Interactive Entertainment Inc Free-viewpoint method and system
KR20210030147A (en) * 2019-09-09 2021-03-17 삼성전자주식회사 3d rendering method and 3d rendering apparatus
CN111443091B (en) * 2020-04-08 2023-07-25 中国电力科学研究院有限公司 Cable line tunnel engineering defect judging method
CN111814691B (en) * 2020-07-10 2022-01-21 广东电网有限责任公司 Space expansion display method and device for transmission tower image
CN116817771B (en) * 2023-08-28 2023-11-17 南京航空航天大学 Aerospace part coating thickness measurement method based on cylindrical voxel characteristics

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003021532A2 (en) 2001-09-06 2003-03-13 Koninklijke Philips Electronics N.V. Method and apparatus for segmentation of an object
US8108929B2 (en) * 2004-10-19 2012-01-31 Reflex Systems, LLC Method and system for detecting intrusive anomalous use of a software system using multiple detection algorithms
US7860301B2 (en) 2005-02-11 2010-12-28 Macdonald Dettwiler And Associates Inc. 3D imaging system
US7965868B2 (en) * 2006-07-20 2011-06-21 Lawrence Livermore National Security, Llc System and method for bullet tracking and shooter localization
US7768656B2 (en) 2007-08-28 2010-08-03 Artec Group, Inc. System and method for three-dimensional measurement of the shape of material objects
KR20090047172A (en) * 2007-11-07 2009-05-12 삼성디지털이미징 주식회사 Method for controlling digital camera for picture testing
US8255100B2 (en) * 2008-02-27 2012-08-28 The Boeing Company Data-driven anomaly detection to anticipate flight deck effects
DE102008021558A1 (en) * 2008-04-30 2009-11-12 Advanced Micro Devices, Inc., Sunnyvale Process and system for semiconductor process control and monitoring using PCA models of reduced size
US8199988B2 (en) * 2008-05-16 2012-06-12 Geodigm Corporation Method and apparatus for combining 3D dental scans with other 3D data sets
US9082221B2 (en) * 2008-06-30 2015-07-14 Thomson Licensing Method for the real-time composition of a video
ATE545260T1 (en) * 2008-08-01 2012-02-15 Gigle Networks Sl OFDM FRAME SYNCHRONIZATION METHOD AND SYSTEM
US8896607B1 (en) * 2009-05-29 2014-11-25 Two Pic Mc Llc Inverse kinematics for rigged deformable characters
WO2011014192A1 (en) * 2009-07-31 2011-02-03 Analogic Corporation Two-dimensional colored projection image from three-dimensional image data
GB0913930D0 (en) * 2009-08-07 2009-09-16 Ucl Business Plc Apparatus and method for registering two medical images
EP2677938B1 (en) * 2011-02-22 2019-09-18 Midmark Corporation Space carving in 3d data acquisition
ES2812578T3 (en) * 2011-05-13 2021-03-17 Vizrt Ag Estimating a posture based on silhouette
US8724880B2 (en) * 2011-06-29 2014-05-13 Kabushiki Kaisha Toshiba Ultrasonic diagnostic apparatus and medical image processing apparatus
EP2780826B1 (en) * 2011-11-15 2020-08-12 Trimble Inc. Browser-based collaborative development of a 3d model
WO2013106720A1 (en) * 2012-01-12 2013-07-18 Schlumberger Canada Limited Method for constrained history matching coupled with optimization
US9208550B2 (en) 2012-08-15 2015-12-08 Fuji Xerox Co., Ltd. Smart document capture based on estimated scanned-image quality
DE102013203667B4 (en) * 2013-03-04 2024-02-22 Adidas Ag Cabin for trying out one or more items of clothing
WO2015006791A1 (en) 2013-07-18 2015-01-22 A.Tron3D Gmbh Combining depth-maps from different acquisition methods
US20150070468A1 (en) 2013-09-10 2015-03-12 Faro Technologies, Inc. Use of a three-dimensional imager's point cloud data to set the scale for photogrammetry

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110102456A1 (en) * 2009-10-30 2011-05-05 Synopsys, Inc. Drawing an image with transparent regions on top of another image without using an alpha channel

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Chivate et al., "Extending surfaces for reverse engineering solid model generation", Department of Mechanical Engineering, Pennsylvania State University, University Park, PA 16802, USA. Received 7 December 1994; accepted 27 August 1996. *
Wang et al., "Impulse-Based Rendering Methods for Haptic Simulation of Bone-Burring", IEEE Transactions on Haptics, Vol. 5, No. 4, October-December 2012. *
Zhuang et al., "Simplifying Complex CAD Geometry with Conservative Bounding Contours", Proceedings of the 1997 IEEE International Conference on Robotics and Automation, Albuquerque, New Mexico, April 1997. *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160071327A1 (en) * 2014-09-05 2016-03-10 Fu Tai Hua Industry (Shenzhen) Co., Ltd. System and method for simplifying a mesh point cloud
US9830686B2 (en) * 2014-09-05 2017-11-28 Fu Tai Hua Industry (Shenzhen) Co., Ltd. System and method for simplifying a mesh point cloud
US20160188159A1 (en) * 2014-12-30 2016-06-30 Dassault Systemes Selection of a viewpoint of a set of objects
US9866815B2 (en) * 2015-01-05 2018-01-09 Qualcomm Incorporated 3D object segmentation
US20160196659A1 (en) * 2015-01-05 2016-07-07 Qualcomm Incorporated 3d object segmentation
US10049479B2 (en) 2015-12-30 2018-08-14 Dassault Systemes Density based graphical mapping
US10127333B2 (en) 2015-12-30 2018-11-13 Dassault Systemes Embedded frequency based search and 3D graphical data processing
US10360438B2 (en) 2015-12-30 2019-07-23 Dassault Systemes 3D to 2D reimaging for search
EP3188049A1 (en) * 2015-12-30 2017-07-05 Dassault Systèmes Density based graphical mapping
US10999602B2 (en) 2016-12-23 2021-05-04 Apple Inc. Sphere projected motion estimation/compensation and mode decision
US11818394B2 (en) 2016-12-23 2023-11-14 Apple Inc. Sphere projected motion estimation/compensation and mode decision
US11259046B2 (en) 2017-02-15 2022-02-22 Apple Inc. Processing of equirectangular object data to compensate for distortion by spherical projections
US10924747B2 (en) 2017-02-27 2021-02-16 Apple Inc. Video coding techniques for multi-view video
US11093752B2 (en) 2017-06-02 2021-08-17 Apple Inc. Object tracking in multi-view video
US10754242B2 (en) 2017-06-30 2020-08-25 Apple Inc. Adaptive resolution and projection format in multi-direction video
US20190005709A1 (en) * 2017-06-30 2019-01-03 Apple Inc. Techniques for Correction of Visual Artifacts in Multi-View Images
US10762595B2 (en) * 2017-11-08 2020-09-01 Steelcase, Inc. Designated region projection printing of spatial pattern for 3D object on flat sheet in determined orientation
US11321810B2 (en) 2017-11-08 2022-05-03 Steelcase Inc. Designated region projection printing
US11722626B2 (en) 2017-11-08 2023-08-08 Steelcase Inc. Designated region projection printing
TWI743645B (en) * 2019-07-29 2021-10-21 大陸商浙江商湯科技開發有限公司 Information processing method and device, positioning method and device, electronic equipment and computer readable storage medium
US11983820B2 (en) 2019-07-29 2024-05-14 Zhejiang Sensetime Technology Development Co., Ltd Information processing method and device, positioning method and device, electronic device and storage medium
WO2021051802A1 (en) * 2019-09-16 2021-03-25 杭州群核信息技术有限公司 Intelligent cloud processing system and method for selecting wardrobes to generate three views
US11074708B1 (en) * 2020-01-06 2021-07-27 Hand Held Products, Inc. Dark parcel dimensioning

Also Published As

Publication number Publication date
US20150276392A1 (en) 2015-10-01
US20150279075A1 (en) 2015-10-01
US9841277B2 (en) 2017-12-12
US20150278155A1 (en) 2015-10-01
US20150279121A1 (en) 2015-10-01

Similar Documents

Publication Publication Date Title
US20150279087A1 (en) 3d data to 2d and isometric views for layout and creation of documents
US9965896B2 (en) Display device and display method
JP6344050B2 (en) Image processing system, image processing apparatus, and program
US10204404B2 (en) Image processing device and image processing method
JP6733267B2 (en) Information processing program, information processing method, and information processing apparatus
KR101556992B1 (en) 3d scanning system using facial plastic surgery simulation
JPWO2014181725A1 (en) Image measuring device
JP2014025748A (en) Dimension measuring program, dimension measuring instrument, and dimension measuring method
JPWO2017154705A1 (en) Imaging apparatus, image processing apparatus, image processing program, data structure, and imaging system
JP6048575B2 (en) Size measuring apparatus and size measuring method
CN106797458A (en) The virtual change of real object
US20140375685A1 (en) Information processing apparatus, and determination method
US20180204387A1 (en) Image generation device, image generation system, and image generation method
US20150339859A1 (en) Apparatus and method for navigating through volume image
CN109906600A (en) Simulate the depth of field
JP2015233266A (en) Image processing system, information processing device, and program
JP2018142109A (en) Display control program, display control method, and display control apparatus
Papadaki et al. Accurate 3D scanning of damaged ancient Greek inscriptions for revealing weathered letters
WO2015101979A1 (en) Device and method with orientation indication
JP2014164483A (en) Database generation device, camera attitude estimation device, database generation method, camera attitude estimation method and program
CN108549484B (en) Man-machine interaction method and device based on human body dynamic posture
Galantucci et al. Coded targets and hybrid grids for photogrammetric 3D digitisation of human faces
CN110288714B (en) Virtual simulation experiment system
JP6198104B2 (en) 3D object recognition apparatus and 3D object recognition method
JP2013231607A (en) Calibration tool display device, calibration tool display method, calibration device, calibration method, calibration system and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: KNOCKOUT CONCEPTS, LLC, OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUTTOTHARA, JACOB A;WATHEN, JOHN M;PADDOCK, STEVEN D;AND OTHERS;REEL/FRAME:035776/0299

Effective date: 20150528

Owner name: KNOCKOUT CONCEPTS, LLC, OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MEYERS, STEPHEN B;REEL/FRAME:035776/0218

Effective date: 20150528

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION