US20150279121A1 - Active Point Cloud Modeling - Google Patents

Active Point Cloud Modeling

Info

Publication number
US20150279121A1
US20150279121A1 (application US 14/671,749)
Authority
US
United States
Prior art keywords
dimensional model
model data
dimensional
voxels
scanning device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/671,749
Inventor
Stephen Brooks Myers
Jacob Abraham Kuttothara
Steven Donald Paddock
John Moore Wathen
Andrew Slatton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Knockout Concepts LLC
Original Assignee
Knockout Concepts LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Knockout Concepts LLC filed Critical Knockout Concepts LLC
Priority to US 14/671,749
Assigned to KNOCKOUT CONCEPTS, LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MEYERS, STEPHEN B
Assigned to KNOCKOUT CONCEPTS, LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KUTTOTHARA, JACOB A; PADDOCK, STEVEN D; SLATTON, ANDREW; WATHEN, JOHN M
Publication of US20150279121A1
Legal status: Abandoned (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/26 Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/15 Correlation function computation including computation of convolution operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/993 Evaluation of the quality of the acquired pattern
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/12 Acquisition of 3D measurements of objects


Abstract

A three-dimensional scan editing method can include providing a set of three-dimensional model data defining a three-dimensional subject, and displaying the data as a reconstructed image. A user may select one or more voxels and change their state to over-writable. The state change may be reflected by a visual cue such as color or transparency. An image capture device may be provided and its field of view may be co-registered with the selected voxels. The user may then acquire new 3D model data with the device and overwrite the selected voxels with the new data.

Description

    I. BACKGROUND OF THE INVENTION
  • A. Field of Invention
  • Embodiments may generally relate to the field of modifying selected portions of a three-dimensional scan.
  • B. Description of the Related Art
  • Three-dimensional model capture and editing methods and devices are known in the imaging arts. For example, it is known to capture visible-spectrum or infrared light, or other forms of electromagnetic radiation, or even sound waves with an imaging device, and convert the data to point clouds, voxels, and/or other convenient data formats. It is also known to adjust data acquisition parameters so as to capture an image of suitable resolution, or an image that otherwise has suitable characteristics. However, some three-dimensional models are generally suitable but include areas where the image quality must be improved. Thus, there is a need in the art for systems and methods capable of editing portions of three-dimensional model data without overwriting the entire image.
  • Some embodiments of the present invention may provide one or more benefits or advantages over the prior art.
  • II. SUMMARY OF THE INVENTION
  • Some embodiments may relate to a three-dimensional scan editing method comprising the steps of: providing a set of three-dimensional model data defining a three-dimensional subject, and displaying the data as a reconstructed 3D model; providing a scanning device adapted to acquire three-dimensional model data; selecting one or more voxels of the set of three-dimensional model data and changing the state of the selected voxels to over-writable; providing a visual cue indicating that the selected one or more voxels are over-writable; co-registering the scanning device's view of the three-dimensional subject with selected voxels of the three-dimensional model data; using the scanning device to acquire new three-dimensional model data of the three-dimensional subject; and over-writing the selected voxels with the new three-dimensional model data.
  • Embodiments may further comprise the step of specifying a data acquisition quality parameter of the scanning device, wherein the quality parameter modifies the quality of the new three-dimensional model data.
  • In some embodiments the data acquisition quality parameter is selected from image resolution, optical filtering, background subtraction, color data, or noise reduction.
  • In some embodiments the visual cue is selected from one or more of color, transparency, highlighting, or outlining.
  • In some embodiments the step of co-registering further comprises adjusting the field of view of the scanning device to match the selected voxels.
  • In some embodiments the step of co-registering further comprises a method selected from one or more of point-cloud registration, RGB image registration, intensity image registration, or iterative closest point.
  • In some embodiments the step of co-registering further comprises assuming that the field of view of the scanning device matches the selected voxels.
  • In some embodiments the step of co-registering further comprises reorienting a three-dimensional model of the subject to match the field of view of the three-dimensional scanning device.
  • In some embodiments the three-dimensional model data comprises one or more of an isosurface, a signed distance function, a truncated signed distance function, a surfel, a mesh, a point cloud, or a continuous function.
  • In some embodiments the set of three-dimensional model data defining a three-dimensional subject is displayed on a video display device in the form of a three-dimensional model.
  • In some embodiments the three-dimensional model may be reoriented according to gesture input or touchscreen input.
  • Other benefits and advantages will become apparent to those skilled in the art to which it pertains upon reading and understanding of the following detailed specification.
  • III. BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention may take physical form in certain parts and arrangement of parts, embodiments of which will be described in detail in this specification and illustrated in the accompanying drawings which form a part hereof and wherein:
  • FIG. 1 is an illustration of a device acquiring 3D scanning data of a subject in accordance with a method of the invention;
  • FIG. 2 is an illustration of a user selecting a portion of a 3D model for editing;
  • FIG. 3 is an illustration of voxels of 3D model data;
  • FIG. 4 is a flow diagram of an illustrative embodiment;
  • FIG. 5 is an illustration of a device capturing new 3D model data for supplementing an existing 3D model data set according to a method of the invention; and
  • FIG. 6 illustrates a networked embodiment including separate image capture and image processing devices.
  • IV. DETAILED DESCRIPTION OF THE INVENTION
  • Methodology for modification of three-dimensional (3D) scans includes obtaining the image of a three-dimensional subject with the help of 3D cameras, scanners, or various other devices now known or developed in the future. The captured 3D model may be provided as a set of three-dimensional model data representative of the three-dimensional subject. The three-dimensional model data may alternatively be obtained from previously recorded and stored data. This model data may be used to reconstruct the image of the three-dimensional subject on any user device, including but not limited to computing devices, imaging devices, mobile devices, and the like. The three-dimensional model data may be configured to permit selection and modification of specific voxels of the data for the purposes of further detailing or modification. Herein, the term ‘voxel’ is understood in the same sense as generally understood in the relevant industry, i.e., to include a unit of graphic information that defines any point of an object in three-dimensional space. The modification of the selected voxels may be achieved by obtaining new three-dimensional model data using 3D scanning devices and overwriting existing voxels based on such new data.
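The select/mark/overwrite cycle described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the grid shape, dtypes, and method names are all assumptions:

```python
import numpy as np

class VoxelGrid:
    """Dense voxel grid whose cells can be flagged over-writable."""

    def __init__(self, shape):
        self.values = np.zeros(shape, dtype=np.float32)   # e.g. occupancy or TSDF samples
        self.overwritable = np.zeros(shape, dtype=bool)   # selection mask

    def mark_overwritable(self, mask):
        """Flag the voxels selected by a boolean mask for re-capture."""
        self.overwritable |= mask

    def overwrite(self, new_values):
        """Write new scan data into flagged voxels only, then clear the flags."""
        self.values[self.overwritable] = new_values[self.overwritable]
        self.overwritable[:] = False

grid = VoxelGrid((4, 4, 4))
sel = np.zeros((4, 4, 4), dtype=bool)
sel[1:3, 1:3, 1:3] = True            # e.g. the voxels of the "wheel" region
grid.mark_overwritable(sel)
grid.overwrite(np.ones((4, 4, 4)))   # only the selected 2x2x2 block is updated
```

Note that voxels outside the selection are untouched, which is the point of the method: a rescan refines one region without overwriting the entire model.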
  • Referring now to the drawings, wherein the showings are for purposes of illustrating embodiments of the invention only and not for purposes of limiting the same, FIG. 1 is an illustrative embodiment of a specific use case 100 wherein a 3D scanning device 110 is used to obtain three-dimensional model data of a real-world subject 112 (a vehicle in this case). The scanning device may be any known 3D scanning device, including but not limited to mobile phones and tablets with three-dimensional scan capabilities. The scanning device 110 captures various features of the subject 112 from various angles and viewpoints 114. The model data so obtained is displayed as a reconstructed 3D model 116 of the subject 112 on the display screen of the scanning device 110. In a related embodiment, the three-dimensional model data may be obtained from a server or device memory where such data is already stored, and the corresponding reconstructed 3D model may be displayed on the image-processing device. The three-dimensional model data may be obtained in any of the formats, now known or developed in the future, appropriate for image reconstruction, including but not limited to an isosurface, a signed distance function, a truncated signed distance function, or a surface element (surfel) representation. Alternatively, other forms of model data, such as meshes or point clouds, or other forms of representation capable of being converted to one of the forms mentioned herein, may also be used.
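As one illustration of the truncated signed distance function format named above, the sketch below samples a TSDF for a hypothetical spherical subject on a regular grid; the grid size, radius, and truncation distance are arbitrary choices for demonstration:

```python
import numpy as np

def tsdf_sphere(n=32, radius=0.3, trunc=0.1):
    """Sample a truncated signed distance function for a sphere of the
    given radius centered in the unit cube: negative inside the surface,
    positive outside, with values clipped to +/- trunc."""
    axis = np.linspace(0.0, 1.0, n)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    dist = np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2) - radius
    return np.clip(dist, -trunc, trunc)

tsdf = tsdf_sphere()
# The zero crossing of the TSDF is the reconstructed surface (the isosurface);
# a mesher such as marching cubes would extract it for display.
inside = int((tsdf < 0).sum())
```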
  • FIG. 2 represents an illustrative embodiment 200 wherein the three-dimensional model data, displayed as a reconstructed 3D model 116 on the video display of the image-processing device 210, is configured to permit selection of a specific part or viewpoint 212 (in this case the wheel) of the subject. The selection may be made by selecting one or more voxels of the set of three-dimensional model data and changing the state of the selected voxels to over-writable. The over-writable state informs the system of the user's intention to modify or carry out further detailing of the selected voxels. FIG. 3 illustrates a voxel representation 300 of three-dimensional model data wherein specific voxels 312 are selected and marked as over-writable. Herein, the voxels 310 are marked over-writable using the visual cue of a change in color of the selected voxels 312. Other suitable visual cues, including but not limited to highlighting, changing transparency, or modifying or marking an outline of the voxels, may be used to show the selected voxels marked as over-writable. In the example of a vehicle as a subject, the voxels corresponding to the wheel may be selected and marked as over-writable. This informs the system that the user intends to carry out further image processing of the wheel of the vehicle.
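The color-change visual cue can be sketched as a per-voxel RGBA update; the particular colors, array layout, and function name below are assumptions for illustration only:

```python
import numpy as np

# RGBA colors per voxel: grey for ordinary voxels, translucent red as the
# cue for voxels flagged over-writable (both colors are arbitrary choices).
GREY = np.array([0.6, 0.6, 0.6, 1.0])
CUE = np.array([1.0, 0.2, 0.2, 0.5])   # red tint plus reduced opacity

def apply_visual_cue(colors, selected):
    """Return a copy of the per-voxel color array with the cue applied
    to every voxel where the boolean selection mask is True."""
    out = colors.copy()
    out[selected] = CUE
    return out

colors = np.tile(GREY, (4, 4, 4, 1))          # (x, y, z, rgba)
selected = np.zeros((4, 4, 4), dtype=bool)
selected[0, 0, 0] = True
shown = apply_visual_cue(colors, selected)    # only the flagged voxel recolors
```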
  • Once the voxels are selected and marked as over-writable, a 3D scanning device is further used to obtain new three-dimensional model data of the three-dimensional subject. In order to obtain the new three-dimensional model data, a data acquisition quality parameter of the scanning device may be specified to modify the quality of the new three-dimensional model data. The quality parameters may be selected from image resolution, optical filtering, background subtraction, color data, or noise reduction. With specific regard to color data as a quality parameter, it will be understood that one may specify whether data is to be collected in color, black and white, grayscale, etc. With reference to FIG. 2, i.e., the illustration of the vehicle as a subject, once the voxels corresponding to the wheel are selected, ‘image resolution’ may be set as a data acquisition quality parameter of the scanning device in order to obtain further details of the wheel. As a result, the scanning device takes a higher-resolution image of the wheel, thereby capturing in-depth details of the wheel, such as its ridge pattern and rim details.
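The listed quality parameters might be grouped into a settings object along the following lines; every field name and default value here is hypothetical, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class AcquisitionParams:
    """Hypothetical data acquisition quality parameters for a rescan."""
    resolution: tuple = (640, 480)        # image resolution (width, height)
    color_mode: str = "rgb"               # "rgb", "grayscale", or "bw"
    background_subtraction: bool = False  # suppress everything but the subject
    noise_reduction: bool = True          # depth/intensity denoising

def rescan_settings():
    """Settings for re-capturing a selected region (the 'wheel' in the
    vehicle example) at a higher resolution than the default scan."""
    return AcquisitionParams(resolution=(1920, 1080))
```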
  • FIG. 4 illustrates a flow diagram 400 of an illustrative embodiment wherein the new three-dimensional model data is obtained based on co-registration of the scanning device's view of the subject with the selected voxels of the three-dimensional model data. The specific voxels are selected 410 for further detailing, overwriting, or modification, and a corresponding visual cue indicates the over-writable state of the voxels 412. The 3D scanning device is set to capture a specific view of the subject. The view being captured by the scanning device is co-registered with the selected voxels 414 to ensure that the correct viewpoint is captured by the scanning device and that the correct voxels are overwritten. Co-registration may involve data and viewpoint comparison by transforming the two sets of data, i.e., one obtained from the three-dimensional image and the other obtained from the view being captured, into one coordinate system. The device may be repositioned if the correct viewpoint or angle is not obtained. In an exemplary embodiment, the three-dimensional model of the subject may be reoriented to match the field of view of the three-dimensional scanning device in order to easily and efficiently achieve co-registration. Once the user is satisfied that the appropriate viewpoint has been achieved 416, the scanning device is allowed to capture the new model data 418 from the co-registered viewpoint. In an alternate embodiment, the field of view of the scanning device may be adjusted to match the selected voxels to achieve accurate co-registration. The co-registration process may optionally comprise point-cloud registration, RGB image registration, intensity image registration, or iterative closest point registration to ensure easier, faster, and more consistent alignment of the view being captured with the selected voxels. In yet another embodiment, co-registration may be achieved by assuming that the field of view of the scanning device matches the selected voxels.
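Iterative closest point, one of the co-registration options named above, alternates nearest-neighbour matching with a rigid best-fit alignment step. The sketch below implements only that alignment step (the Kabsch/SVD solution) for point sets with known correspondences, which is the core computation an ICP loop would repeat; it is a generic textbook construction, not the patent's method:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (Kabsch algorithm); src and dst are (N, 3) arrays of corresponding
    points. A full ICP would alternate this step with nearest-neighbour
    matching until the alignment converges."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Recover a known rotation about the z-axis plus a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
src = np.random.default_rng(0).random((50, 3))
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)   # should recover R_true and t_true
```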
  • FIG. 5 depicts an embodiment 500 illustrating the capture of new three-dimensional model data with the help of the 3D scanning device 110. The co-registered view 510 of the subject 112 is captured by the device 110 by means of rescanning 512 the subject 112. In the vehicle illustration, once the view of the wheel being captured by the device is co-registered with the selected voxels of the wheel in the three-dimensional image, the device captures the new model data corresponding to the wheel. This new model data is used to modify or overwrite the existing voxels representing the wheel to provide modified 3D model data in real time.
  • The method of 3D model data modification provided in exemplary embodiments herein may be used to modify 3D model data in real-time and near-real-time environments. FIG. 6 illustrates an embodiment 600 wherein a 3D model processing device 610 and a 3D model capturing device 612 are connected to each other and to a central server 616 via a Local Area Network or a Wide Area Network (including the Internet) 614. The image capturing device 612 and the image processing device 610 may be configured to use the 3D model processing and modification methodology provided in the exemplary embodiments herein to work simultaneously on the same subject in a real-time or near-real-time environment. For example, the image-processing device 610 may be used to select the voxels of a three-dimensional image, and the visual cue on the selected voxels may also be reflected on the image scanning device 612. The image-scanning device may then capture the new three-dimensional model data, which may be sent to the image-processing device 610. The image-processing device may use the new model data to modify the selected voxels. In one embodiment, the image scanning device 612 and the image-processing device 610 may both be user mobile devices, smart phones, tablets, or other similar devices with 3D model capturing and processing capability, or the image processing device 610 may be a purpose-built device. In yet another embodiment, the scanning device 612 and the image processing device 610 may employ different image rendering methodologies and yet may be able to simultaneously use the 3D model modification methodology provided herein and interact with respect to the same 3D model data.
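The exchange that FIG. 6 describes, a selection made on the processing device, mirrored to the scanner, with new data returned, can be simulated in-process with a pair of queues standing in for the network link. Every message shape below is an assumption made for illustration:

```python
import queue

# Queues stand in for the LAN/WAN link between the two devices.
to_scanner = queue.Queue()
to_processor = queue.Queue()

# Processing device: the user selects voxels; the over-writable cue is
# mirrored to the scanning device as a message.
to_scanner.put({"type": "mark_overwritable",
                "voxels": [(1, 2, 3), (1, 2, 4)]})

# Scanning device: receives the selection, rescans the region, and returns
# new model data (placeholder scan values here).
msg = to_scanner.get()
assert msg["type"] == "mark_overwritable"
to_processor.put({"type": "new_model_data",
                  "voxels": msg["voxels"],
                  "values": [0.12, 0.34]})

# Processing device: overwrites only the flagged voxels with the new data.
update = to_processor.get()
patched = dict(zip(update["voxels"], update["values"]))
```

A real deployment would replace the queues with sockets or a server-mediated channel, but the message flow, select, mirror, rescan, overwrite, is the same.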
  • It will be apparent to those skilled in the art that the above methods and apparatuses may be changed or modified without departing from the general scope of the invention. The invention is intended to include all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
  • Having thus described the invention, it is now claimed:

Claims (14)

I/we claim:
1. A three-dimensional scan editing method comprising the steps of:
providing a set of three-dimensional model data defining a three-dimensional subject, and displaying the data as a reconstructed 3D model;
providing a scanning device adapted to acquire three-dimensional model data;
selecting one or more voxels of the set of three-dimensional model data and changing the state of the selected voxels to over-writable;
providing a visual cue indicating that the selected one or more voxels are over-writable;
co-registering the scanning device's view of the three-dimensional subject with selected voxels of the three-dimensional model data;
using the scanning device to acquire new three-dimensional model data of the three-dimensional subject; and
overwriting the selected voxels with the new three-dimensional model data.
2. The method of claim 1, further comprising the step of specifying a data acquisition quality parameter of the scanning device, wherein the quality parameter modifies the quality of the new three-dimensional model data.
3. The method of claim 2, wherein the data acquisition quality parameter is selected from image resolution, optical filtering, background subtraction, color data, or noise reduction.
4. The method of claim 1, wherein the visual cue is selected from one or more of color, transparency, highlighting, or outlining.
5. The method of claim 1, wherein the step of co-registering further comprises adjusting the field of view of the scanning device to match the selected voxels.
6. The method of claim 5, wherein the step of co-registering further comprises a method selected from one or more of point-cloud registration, RGB image registration, intensity image registration, or iterative closest point.
7. The method of claim 5, wherein the step of co-registering further comprises assuming that the field of view of the scanning device matches the selected voxels.
8. The method of claim 1, wherein the step of co-registering further comprises reorienting a three-dimensional model of the subject to match the field of view of the three-dimensional scanning device.
9. The method of claim 8, wherein the step of co-registering further comprises a method selected from one or more of point-cloud registration, RGB image registration, intensity image registration, or iterative closest point.
10. The method of claim 8, wherein the step of co-registering further comprises assuming that the field of view of the scanning device matches the selected voxels.
11. The method of claim 1, wherein the three-dimensional model data comprises one or more of an isosurface, a signed distance function, a truncated signed distance function, a surfel, a mesh, a point cloud, or a continuous function.
12. The method of claim 11, wherein the set of three-dimensional model data defining a three-dimensional subject is displayed on a video display device in the form of a three-dimensional model.
13. The method of claim 12, wherein the three-dimensional model may be reoriented according to gesture input or touchscreen input.
14. A three-dimensional scan editing method comprising the steps of:
providing a set of three-dimensional model data defining a three-dimensional subject, and displaying the data as a reconstructed 3D model, wherein the three-dimensional model data comprises one or more of an isosurface, a signed distance function, a truncated signed distance function, a surfel, a mesh, a point cloud, or a continuous function;
providing a scanning device adapted to acquire three-dimensional model data;
selecting one or more voxels of the set of three-dimensional model data and changing the state of the selected voxels to over-writable;
providing a visual cue indicating that the selected one or more voxels are over-writable, wherein the visual cue is selected from one or more of color, transparency, highlighting, or outlining;
co-registering the scanning device's view of the three-dimensional subject with selected voxels of the three-dimensional model data, wherein the step of co-registering further comprises adjusting the field of view of the scanning device to match the selected voxels, and wherein the step of co-registering further comprises a method selected from one or more of point-cloud registration, RGB image registration, intensity image registration, or iterative closest point;
specifying a data acquisition quality parameter of the scanning device selected from image resolution, optical filtering, background subtraction, color data, or noise reduction;
using the scanning device to acquire new three-dimensional model data of the three-dimensional subject, wherein the quality parameter modifies the quality of the new three-dimensional model data; and
overwriting the selected voxels with the new three-dimensional model data.
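Claims 6, 9, and 14 list iterative closest point among the co-registration options. A translation-only variant is enough to show the idea; this sketch (the `icp_translate` name is hypothetical, and a real registration would also solve for rotation) nudges a scanned cloud onto the model cloud by repeatedly matching nearest neighbours:

```python
import numpy as np

def icp_translate(src, dst, iters=20, tol=1e-9):
    """Translation-only iterative closest point.

    src : (N, 3) scanned points to be registered
    dst : (M, 3) model points to register against
    Each iteration pairs every source point with its nearest model
    point and shifts the whole source cloud by the mean residual.
    """
    src = src.astype(float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        nearest = dst[dists.argmin(axis=1)]   # closest model point per scan point
        shift = (nearest - src).mean(axis=0)  # mean correction toward the model
        src += shift
        if np.linalg.norm(shift) < tol:       # converged
            break
    return src

# A scan that is the model cloud displaced by a small offset should
# register back onto the model almost exactly.
model_cloud = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
scan_cloud = model_cloud + np.array([1.0, -1.0, 0.5])
registered = icp_translate(scan_cloud, model_cloud)
```

Once the scan is registered onto the model's coordinate frame, the newly acquired values can be mapped to voxel indices and the over-writable voxels replaced, per the final step of the claims.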
US14/671,749 2014-03-27 2015-03-27 Active Point Cloud Modeling Abandoned US20150279121A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/671,749 US20150279121A1 (en) 2014-03-27 2015-03-27 Active Point Cloud Modeling

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461971036P 2014-03-27 2014-03-27
US14/671,749 US20150279121A1 (en) 2014-03-27 2015-03-27 Active Point Cloud Modeling

Publications (1)

Publication Number Publication Date
US20150279121A1 true US20150279121A1 (en) 2015-10-01

Family

ID=54189850

Family Applications (5)

Application Number Title Priority Date Filing Date
US14/671,313 Abandoned US20150279075A1 (en) 2014-03-27 2015-03-27 Recording animation of rigid objects using a single 3d scanner
US14/671,420 Abandoned US20150279087A1 (en) 2014-03-27 2015-03-27 3d data to 2d and isometric views for layout and creation of documents
US14/671,749 Abandoned US20150279121A1 (en) 2014-03-27 2015-03-27 Active Point Cloud Modeling
US14/671,373 Abandoned US20150278155A1 (en) 2014-03-27 2015-03-27 Identifying objects using a 3d scanning device, images, and 3d models
US14/672,048 Active 2035-11-14 US9841277B2 (en) 2014-03-27 2015-03-27 Graphical feedback during 3D scanning operations for obtaining optimal scan resolution

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US14/671,313 Abandoned US20150279075A1 (en) 2014-03-27 2015-03-27 Recording animation of rigid objects using a single 3d scanner
US14/671,420 Abandoned US20150279087A1 (en) 2014-03-27 2015-03-27 3d data to 2d and isometric views for layout and creation of documents

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/671,373 Abandoned US20150278155A1 (en) 2014-03-27 2015-03-27 Identifying objects using a 3d scanning device, images, and 3d models
US14/672,048 Active 2035-11-14 US9841277B2 (en) 2014-03-27 2015-03-27 Graphical feedback during 3D scanning operations for obtaining optimal scan resolution

Country Status (1)

Country Link
US (5) US20150279075A1 (en)


Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469446A (en) * 2014-09-05 2016-04-06 富泰华工业(深圳)有限公司 Point cloud mesh simplification system and method
US20160125638A1 (en) * 2014-11-04 2016-05-05 Dassault Systemes Automated Texturing Mapping and Animation from Images
EP3040946B1 (en) * 2014-12-30 2019-11-13 Dassault Systèmes Viewpoint selection in the rendering of a set of objects
US9866815B2 (en) * 2015-01-05 2018-01-09 Qualcomm Incorporated 3D object segmentation
JP2017041022A (en) * 2015-08-18 2017-02-23 キヤノン株式会社 Information processor, information processing method and program
JP6906303B2 (en) * 2015-12-30 2021-07-21 ダッソー システムズDassault Systemes Density-based graphical mapping
US10049479B2 (en) 2015-12-30 2018-08-14 Dassault Systemes Density based graphical mapping
US10127333B2 (en) 2015-12-30 2018-11-13 Dassault Systemes Embedded frequency based search and 3D graphical data processing
US10360438B2 (en) 2015-12-30 2019-07-23 Dassault Systemes 3D to 2D reimaging for search
US11138306B2 (en) * 2016-03-14 2021-10-05 Amazon Technologies, Inc. Physics-based CAPTCHA
CN106524920A (en) * 2016-10-25 2017-03-22 上海建科工程咨询有限公司 Application of field measurement in construction project based on three-dimensional laser scanning
CN106650700B (en) * 2016-12-30 2020-12-01 上海联影医疗科技股份有限公司 Die body, method and device for measuring system matrix
CN107677221B (en) * 2017-10-25 2024-03-19 贵州大学 Plant leaf movement angle measuring method and device
US10762595B2 (en) 2017-11-08 2020-09-01 Steelcase, Inc. Designated region projection printing of spatial pattern for 3D object on flat sheet in determined orientation
CN108921045B (en) * 2018-06-11 2021-08-03 佛山科学技术学院 Spatial feature extraction and matching method and device of three-dimensional model
US10600230B2 (en) * 2018-08-10 2020-03-24 Sheng-Yen Lin Mesh rendering system, mesh rendering method and non-transitory computer readable medium
CN112381919B (en) * 2019-07-29 2022-09-27 浙江商汤科技开发有限公司 Information processing method, positioning method and device, electronic equipment and storage medium
KR20210030147A (en) * 2019-09-09 2021-03-17 삼성전자주식회사 3d rendering method and 3d rendering apparatus
CN110610045A (en) * 2019-09-16 2019-12-24 杭州群核信息技术有限公司 Intelligent cloud processing system and method for generating three views by selecting cabinet and wardrobe
US11074708B1 (en) * 2020-01-06 2021-07-27 Hand Held Products, Inc. Dark parcel dimensioning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090115873A1 (en) * 2007-11-07 2009-05-07 Samsung Techwin Co., Ltd. Method Of Controlling Digital Camera For Testing Pictures, And Digital Camera Using The Method
US20090316966A1 (en) * 2008-05-16 2009-12-24 Geodigm Corporation Method and apparatus for combining 3D dental scans with other 3D data sets
US20110090307A1 (en) * 2008-06-30 2011-04-21 Jean-Eudes Marvie Method for the real-time composition of a video
US20130120368A1 (en) * 2011-11-15 2013-05-16 Trimble Navigation Limited Browser-Based Collaborative Development of a 3D Model

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1430443A2 (en) 2001-09-06 2004-06-23 Koninklijke Philips Electronics N.V. Method and apparatus for segmentation of an object
US8108929B2 (en) * 2004-10-19 2012-01-31 Reflex Systems, LLC Method and system for detecting intrusive anomalous use of a software system using multiple detection algorithms
US7860301B2 (en) 2005-02-11 2010-12-28 Macdonald Dettwiler And Associates Inc. 3D imaging system
US7965868B2 (en) * 2006-07-20 2011-06-21 Lawrence Livermore National Security, Llc System and method for bullet tracking and shooter localization
US7768656B2 (en) 2007-08-28 2010-08-03 Artec Group, Inc. System and method for three-dimensional measurement of the shape of material objects
US8255100B2 (en) * 2008-02-27 2012-08-28 The Boeing Company Data-driven anomaly detection to anticipate flight deck effects
DE102008021558A1 (en) * 2008-04-30 2009-11-12 Advanced Micro Devices, Inc., Sunnyvale Process and system for semiconductor process control and monitoring using PCA models of reduced size
ATE545260T1 (en) * 2008-08-01 2012-02-15 Gigle Networks Sl OFDM FRAME SYNCHRONIZATION METHOD AND SYSTEM
US8896607B1 (en) * 2009-05-29 2014-11-25 Two Pic Mc Llc Inverse kinematics for rigged deformable characters
US8817019B2 (en) * 2009-07-31 2014-08-26 Analogic Corporation Two-dimensional colored projection image from three-dimensional image data
GB0913930D0 (en) * 2009-08-07 2009-09-16 Ucl Business Plc Apparatus and method for registering two medical images
US8085279B2 (en) * 2009-10-30 2011-12-27 Synopsys, Inc. Drawing an image with transparent regions on top of another image without using an alpha channel
US9245374B2 (en) * 2011-02-22 2016-01-26 3M Innovative Properties Company Space carving in 3D data acquisition
ES2812578T3 (en) * 2011-05-13 2021-03-17 Vizrt Ag Estimating a posture based on silhouette
US8724880B2 (en) * 2011-06-29 2014-05-13 Kabushiki Kaisha Toshiba Ultrasonic diagnostic apparatus and medical image processing apparatus
US20150153476A1 (en) * 2012-01-12 2015-06-04 Schlumberger Technology Corporation Method for constrained history matching coupled with optimization
US9208550B2 (en) 2012-08-15 2015-12-08 Fuji Xerox Co., Ltd. Smart document capture based on estimated scanned-image quality
DE102013203667B4 (en) * 2013-03-04 2024-02-22 Adidas Ag Cabin for trying out one or more items of clothing
WO2015006791A1 (en) 2013-07-18 2015-01-22 A.Tron3D Gmbh Combining depth-maps from different acquisition methods
US20150070468A1 (en) 2013-09-10 2015-03-12 Faro Technologies, Inc. Use of a three-dimensional imager's point cloud data to set the scale for photogrammetry


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Schiller et al. "Datastructures for Capturing Dynamic Scenes with a Time-of-Flight Camera", R. Koch and A. Kolb (Eds.): Dyn3D 2009, LNCS 5742, pp. 42-57, 2009. Springer-Verlag Berlin Heidelberg 2009. *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105551078A (en) * 2015-12-02 2016-05-04 北京建筑大学 Method and system of virtual imaging of broken cultural relics
US11818394B2 (en) 2016-12-23 2023-11-14 Apple Inc. Sphere projected motion estimation/compensation and mode decision
US10999602B2 (en) 2016-12-23 2021-05-04 Apple Inc. Sphere projected motion estimation/compensation and mode decision
US11259046B2 (en) 2017-02-15 2022-02-22 Apple Inc. Processing of equirectangular object data to compensate for distortion by spherical projections
US10924747B2 (en) 2017-02-27 2021-02-16 Apple Inc. Video coding techniques for multi-view video
US11093752B2 (en) 2017-06-02 2021-08-17 Apple Inc. Object tracking in multi-view video
US20190005709A1 (en) * 2017-06-30 2019-01-03 Apple Inc. Techniques for Correction of Visual Artifacts in Multi-View Images
US10754242B2 (en) 2017-06-30 2020-08-25 Apple Inc. Adaptive resolution and projection format in multi-direction video
US11922618B2 (en) 2017-11-22 2024-03-05 State Farm Mutual Automobile Insurance Company Guided vehicle capture for virtual model generation
US11315239B1 (en) * 2017-11-22 2022-04-26 State Farm Mutual Automobile Insurance Company Guided vehicle capture for virtual mode generation
US11095920B2 (en) 2017-12-05 2021-08-17 InterDigital CE Patent Holdgins, SAS Method and apparatus for encoding a point cloud representing three-dimensional objects
CN111656762A (en) * 2017-12-05 2020-09-11 交互数字Ce专利控股公司 Method and apparatus for encoding a point cloud representing a three-dimensional object
US20230215047A1 (en) * 2019-09-05 2023-07-06 Sony Interactive Entertainment Inc. Free-viewpoint method and system
CN111443091A (en) * 2020-04-08 2020-07-24 中国电力科学研究院有限公司 Cable line tunnel engineering defect judgment method
CN111814691A (en) * 2020-07-10 2020-10-23 广东电网有限责任公司 Space expansion display method and device for transmission tower image
CN116817771A (en) * 2023-08-28 2023-09-29 南京航空航天大学 Aerospace part coating thickness measurement method based on cylindrical voxel characteristics

Also Published As

Publication number Publication date
US20150278155A1 (en) 2015-10-01
US20150279087A1 (en) 2015-10-01
US20150276392A1 (en) 2015-10-01
US20150279075A1 (en) 2015-10-01
US9841277B2 (en) 2017-12-12


Legal Events

Date Code Title Description
AS Assignment

Owner name: KNOCKOUT CONCEPTS, LLC, OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUTTOTHARA, JACOB A;WATHEN, JOHN M;PADDOCK, STEVEN D;AND OTHERS;REEL/FRAME:035776/0299

Effective date: 20150528

Owner name: KNOCKOUT CONCEPTS, LLC, OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MEYERS, STEPHEN B;REEL/FRAME:035776/0218

Effective date: 20150528

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION