CN109598757A - Method for capturing a 3D model of an object in space - Google Patents

Method for capturing a 3D model of an object in space

Info

Publication number
CN109598757A
CN109598757A CN201710920307.7A
Authority
CN
China
Prior art keywords
space
axis
model
imaging device
plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710920307.7A
Other languages
Chinese (zh)
Inventor
Adam Michael Baumberg
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ulsee Inc
Original Assignee
Ulsee Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ulsee Inc filed Critical Ulsee Inc
Priority to CN201710920307.7A
Publication of CN109598757A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Abstract

The present invention provides a method for capturing a 3D model of an object on a surface or in an environment in space, comprising the following steps: a) capturing multiple images of the object using an imaging device; b) recovering the position and orientation of the imaging device for each image; c) estimating the orientation of the up axis in space; d) estimating a bounding region of the object using the positions and orientations of the imaging device and the orientation of the up axis in space; and e) building the 3D model of the object within the bounding region. The advantage of the invention is that the user need perform no operation other than capturing the shots.

Description

Method for capturing a 3D model of an object in space
Technical field
The present invention relates to a method for capturing a 3D model of an object in space, and in particular one in which the supporting surface does not form part of the 3D model, the capture of the 3D model being based only on the orientations of the camera used to take the photographs.
Background art
An imaging device such as a mobile-phone camera or other digital camera may be used to capture a 3D model through a photogrammetric process, which allows a 3D model to be created from images such as photographs. Using general photogrammetric software, the system creates a model of every area visible in the images. This means that users need to cut away the parts of the 3D model they are not interested in. In particular, the object is usually connected in the model to the ground it sits on. This leads to two problems:
1. Navigating the 3D model of the object is difficult: because the ground plane or background surfaces are included, the rotation centre of the model is usually poorly defined and inconsistent with the centre of the object of interest.
2. Merging of the ground and the object surface: images are usually shot from a low angle, so the photogrammetric process cannot accurately determine the shape near the bottom of the object. The result is that the ground is "merged" with the bottom of the object.
In the past, the applicant's 3DSOM software solved the problem of extracting the object of interest from the scene by allowing the user to outline the object shape in several photographs. This is referred to as manual "masking". However, such masking requires effort, skill and time from the user.
Current methods for extracting an object of interest from a scene include:
Manual masking of the object (3DSOM Pro): requires user effort and skill.
Agisoft "PhotoScan" and Capturing Reality's "RealityCapture": the user must manually define a bounding box around an initial sparse estimate of the model. This requires the user to run complex photogrammetric software to compute and then inspect an initial "sparse" model. The user must then define the orientation, position and dimensions of the bounding box through a complex GUI. Once defined, the box may still contain part of the ground plane beneath the object.
Autodesk ReMake: creates a model and provides manual tools for cutting the object out in 3D space. This requires a fairly complex GUI running on a desktop computer.
There is therefore a need for an easy-to-use system for extracting a fully enclosed 3D model of an object, one that can be used by an unskilled operator and requires no user input beyond capturing the initial images.
Summary of the invention
In view of this, the present invention provides a method for capturing a 3D model of an object on a surface or in an environment in space, comprising the following steps:
a) capturing multiple images of the object using an imaging device;
b) recovering the position and orientation of the imaging device for each image;
c) estimating the orientation of the up axis in space;
d) estimating a bounding region of the object using the positions and orientations of the imaging device and the orientation of the up axis in space; and
e) building the 3D model of the object within the bounding region.
Preferably, the surface or environment does not form part of the 3D model.
Preferably, the position and orientation of the imaging device recovered for each image are used to create a sparse point cloud representing the surfaces of the object.
Preferably, the bounding region encloses the sparse point cloud of the object and is aligned with the up axis in space.
Preferably, the up axis in space is estimated by determining a dominant image-plane axis from the recovered imaging-device parameters of each image.
Preferably, the dominant image-plane axis is whichever of the averaged or summed image-plane x or y directions has the greatest magnitude.
Preferably, the up axis in space is obtained from whichever of the averaged image-plane x or y directions has the greatest magnitude.
Preferably, the method further includes estimating the points in space that lie on the ground plane.
Preferably, the method further includes clipping the 3D model above the ground plane to produce a complete 3D model of the object.
Preferably, the ground plane is estimated by selecting the points whose normals lie close to the up axis in space and analysing this subset to find the dominant plane.
Preferably, the selected points have normals within 25 degrees of the up axis.
Preferably, the plane selection is biased towards lower z coordinates, where the z axis is aligned with the estimated up axis.
Preferably, the imaging-device positions and orientations are used to limit the bounding region of the object in the directions orthogonal to the estimated up axis.
Preferably, the recovered surface data are clipped before interpolation of the 3D model, to avoid "merging" or "blending" of the ground and the object surface.
Preferably, the 3D model is oriented automatically based on a particular image.
Preferably, the 3D model of the object sits on the ground facing the viewing direction of the first image.
Preferably, the imaging device is a mobile-phone camera.
Preferably, the images of the object are taken with a mobile-device camera, and the capture of the 3D model of the object takes place on the mobile device or on a server in the "cloud".
Brief description of the drawings
Fig. 1 is a schematic diagram, according to an embodiment of the present invention, of the recovered angle and position of each imaging device and of a sparse point cloud representing the surfaces captured in space;
Fig. 2 is a schematic diagram of the image-plane x and y directions of each imaging device, according to an embodiment of the present invention;
Fig. 3 is a schematic diagram illustrating the bounding volume in space, according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the 3D representation created within the bounding volume of the space containing the object and the ground, according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the clipped 3D mesh, restricted to the object after the ground has been clipped away, according to an embodiment of the present invention; and
Fig. 6 shows the initially solved imaging-device data before the "up" direction in space is reset, according to an embodiment of the present invention.
Detailed description of the embodiments
The following describes preferred embodiments of the present invention. They are intended to illustrate the spirit of the invention, not to limit its scope of protection, which is defined by the claims.
To make the above objects, features and advantages of the present invention more apparent, embodiments are described in detail below in conjunction with the accompanying drawings. Note that the components in the drawings are schematic only and are not drawn to the actual scale of each component.
Certain terms are used throughout the specification and claims to refer to particular components. Those of ordinary skill in the art will appreciate that hardware manufacturers may refer to the same component by different names. This specification and the claims do not distinguish components by differences in name but by differences in function. "Comprising" and "including", as used throughout the specification and claims, are open-ended terms and should therefore be construed as "including but not limited to". In addition, the term "coupled" herein encompasses any direct or indirect means of electrical connection. Thus, if a first device is described as coupled to a second device, the first device may be electrically connected to the second device directly, or electrically connected to it indirectly through other devices or connection means.
The basic principle of the invention is to define the object of interest using a simple camera workflow. The workflow is:
Step 101: place the object on a flat surface (a table or the floor);
Step 102: capture the object from various angles around it, with most images taken in the same imaging-device orientation (i.e. landscape or portrait).
Using this workflow, the rotation angles and positions of the imaging device are then recovered using photogrammetric methods, and a sparse point cloud representing the surfaces in space is generated, as shown in Fig. 1.
Standard methods can be used to recover the imaging-device data and to generate the sparse point cloud.
Recovery and matching of feature points: this can be achieved using the standard method known as SIFT, described in US6711293 (Lowe). This is a method for identifying scale-invariant features in an image, and a further method for using such scale-invariant features to locate an object in an image. Scale-invariant features consisting of multiple component subregion descriptors are produced for each subregion of a pixel region about pixel amplitude extrema in multiple difference images produced from the image. This involves producing the difference images by blurring an initial image to produce a blurred image and subtracting the blurred image from the initial image. For each difference image, pixel amplitude extrema are located and a corresponding pixel region is defined about each extremum. Each pixel region is divided into subregions, and multiple component subregion descriptors are produced for each subregion. These component subregion descriptors are correlated with the component subregion descriptors of the image under consideration, and an object is indicated as detected when a sufficient number of component subregion descriptors (scale-invariant features) define an aggregate correlation exceeding a threshold correlation.
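By way of illustration, the short sketch below detects and matches scale-invariant features between two views, using OpenCV's SIFT implementation as a stand-in for the method of US6711293; the file names and the 0.75 ratio threshold are illustrative assumptions, not taken from the patent.

```python
import cv2

# Two hypothetical views of the object from nearby camera positions.
img1 = cv2.imread("view_0.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_1.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # keypoints and 128-d descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep the matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
```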
Recovery of camera positions and orientations: standard techniques from the literature can be used here, with particular reference to Chapter 13, "Structure from Motion", by Professor Roberto Cipolla, Cambridge University Engineering Department
(http://mi.eng.cam.ac.uk/~cipolla/publications/contributionToEditedBook/2008-SFM-chapters.pdf).
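As a minimal illustration of this step, the sketch below recovers the relative pose of one camera pair from the matched points of the previous sketch, using OpenCV's essential-matrix routines; the intrinsic matrix K is assumed known, and a full structure-from-motion pipeline would chain such pairwise estimates over all views and refine them by bundle adjustment.

```python
import numpy as np
import cv2

# Matched pixel coordinates from the feature-matching sketch above;
# K is the assumed-known 3x3 camera intrinsic matrix.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
# R and t are the relative rotation and unit-scale translation of the
# second camera with respect to the first.
```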
The 3D model of the object, free of background elements, is then extracted using the following steps:
Step 201: estimate the rough direction of the "up" axis in space. It is assumed that the user keeps the imaging device in one of two orientations for most images. Summing the image-plane "x" and "y" directions of all the recovered imaging-device poses in space, the dominant direction (the one with the greatest magnitude) will tend to be aligned with the "up" axis in space. This is illustrated in Fig. 2.
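A minimal numpy sketch of this estimate, assuming the recovered image-plane axes are expressed in world coordinates:

```python
import numpy as np

def estimate_up_axis(image_x_axes, image_y_axes):
    """image_x_axes, image_y_axes: (N, 3) arrays holding the image-plane x
    and y directions of the N recovered cameras, in world coordinates."""
    sum_x = image_x_axes.sum(axis=0)
    sum_y = image_y_axes.sum(axis=0)
    # Most shots share one device orientation, so the summed direction
    # with the greater magnitude tends to align with "up" in space.
    up = sum_x if np.linalg.norm(sum_x) > np.linalg.norm(sum_y) else sum_y
    return up / np.linalg.norm(up)
```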
Step 202: align a bounding volume with the up axis in space so that it contains all the sparse points, giving an initial bound on the object shape (box 1 of Fig. 3). With the z direction aligned to "up", the bounding box is then limited in the x and y dimensions so that it lies within the bounds of the imaging-device positions in x and y (box 2 of Fig. 3). This is shown in Fig. 3. Alternatively, instead of a square or rectangular box, a circle can be fitted to the imaging-device centres in the x and y dimensions to obtain a tighter bound on the object's position in x and y; this is referred to as a cylinder constraint.
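A sketch of both variants, assuming a world frame whose z axis is the estimated up axis; the circle fit here uses the mean centre and maximum radius of the camera positions, one simple choice among several:

```python
import numpy as np

def bounding_volume(points, cam_positions):
    """Axis-aligned bounding box of the sparse cloud, clamped in x and y
    to the extent of the camera positions (z is the estimated up axis)."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    cam_lo, cam_hi = cam_positions.min(axis=0), cam_positions.max(axis=0)
    lo[:2] = np.maximum(lo[:2], cam_lo[:2])  # clamp x and y only
    hi[:2] = np.minimum(hi[:2], cam_hi[:2])
    return lo, hi

def cylinder_constraint(points, cam_positions):
    """Cylinder variant: keep only points inside the circle fitted to the
    camera centres in the x-y plane."""
    centre = cam_positions[:, :2].mean(axis=0)
    radius = np.linalg.norm(cam_positions[:, :2] - centre, axis=1).max()
    keep = np.linalg.norm(points[:, :2] - centre, axis=1) <= radius
    return points[keep]
```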
Step 203: build a 3D representation of the object (points, voxels or polygons) using photogrammetric techniques. This is shown in Fig. 4. Here, the imaging devices are processed in pairs that are close together and have similar viewing directions. A stereo pixel-matching and triangulation process returns 3D points. Belief propagation is a standard method for stereo matching that can be used to achieve this; it is described in "Belief Propagation on the GPU for Stereo Vision" (Brunton, Shu, Roth), Third Canadian Conference on Computer and Robot Vision, 2006, http://people.scs.carleton.ca/~c_shu/pdf/BP-GPU-stereo.pdf. In the case of the present invention, points beyond the estimated bounding volume (background) are rejected.
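The dense belief-propagation matcher itself is beyond a short sketch; the fragment below illustrates only the triangulation and background-rejection stage, assuming P1 and P2 (the 3x4 projection matrices of a nearby camera pair), matched pixel arrays pts1 and pts2, and the lo, hi bounds from the bounding_volume sketch above are already available:

```python
import numpy as np
import cv2

# Assumed inputs: P1, P2 (3x4 projection matrices, K @ [R | t]) and
# pts1, pts2 (Nx2 matched pixel coordinates in the two views).
X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous points
X = (X_h[:3] / X_h[3]).T                             # Nx3 Euclidean points

# Reject triangulated points outside the estimated bounding volume:
# anything beyond it is treated as background.
inside = np.all((X >= lo) & (X <= hi), axis=1)
X = X[inside]
```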
Step 204: estimate the points in space that lie on the ground. These will normally be points whose normals are roughly aligned with the up axis in space and that lie near the "bottom" of the bounding volume. A technique is used in which three candidate points are selected at random to construct a ground-plane "guess", and the guess with the most support from the candidate ground-plane points is chosen. This is an example of random sample consensus (RANSAC; see e.g. https://en.wikipedia.org/wiki/Random_sample_consensus). RANSAC is an iterative method for estimating the parameters of a mathematical model from a set of observed data that contains outliers, where the outliers should have no influence on the estimate. It can therefore also be interpreted as an outlier-detection method. In this sense it is a non-deterministic algorithm: it produces a reasonable result only with a certain probability, and this probability increases as more iterations are allowed. The algorithm was first published in 1981 by Fischler and Bolles of SRI International. They used RANSAC to solve the location determination problem (LDP), the goal of which is to determine the points in space that project onto an image as a set of landmarks with known locations.
In the present case, three points p0, p1 and p2 are chosen, and the cross product of (p1 - p0) and (p2 - p0) is computed. The cross product is perpendicular to the plane through p0. The distance to this plane is then computed for all points, and the number of points within a small threshold is counted. This gives an integer "support" value for each candidate plane. In the present case, preferably 1000 random planes are initially selected, and the plane with the greatest "support" value is chosen.
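A sketch of this plane search, assuming the candidate points have already been filtered to those with near-vertical normals near the bottom of the bounding volume; the 0.01 distance threshold is an assumed value, not taken from the patent:

```python
import numpy as np

def ransac_ground_plane(points, n_iter=1000, threshold=0.01, seed=0):
    """1000 random three-point plane guesses; return the plane (p0, normal)
    with the greatest support, as described above."""
    rng = np.random.default_rng(seed)
    best_support, best_plane = -1, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)   # perpendicular to the plane
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                          # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - p0) @ normal) # point-to-plane distances
        support = int((dist < threshold).sum())
        if support > best_support:
            best_support, best_plane = support, (p0, normal)
    return best_plane
```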
Once the ground plane has been estimated, the 3D model is clipped just above it, ensuring that the complete model of the object is recovered without any extraneous surfaces.
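A minimal sketch of this clipping step, reusing the ransac_ground_plane sketch above; the zero margin is an assumption and can be raised slightly to clip just above the plane:

```python
import numpy as np

# points: the recovered 3D point set; discard everything at or below
# the estimated ground plane before meshing.
p0, normal = ransac_ground_plane(points)
if normal[2] < 0:
    normal = -normal                 # orient the normal along the up axis (+z)
margin = 0.0                         # assumed offset above the plane
object_points = points[(points - p0) @ normal > margin]
```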
In the present invention, the model is represented as a set of points, and the clipped points are processed using a standard method to create a closed mesh. Note that meshing requires interpolating the point data; by removing the ground points before meshing, the ground points are prevented from influencing the interpolated mesh near the ground (thereby avoiding the "merging" problem of ground and object). In the present case, the clipped points are processed using the standard method known as Poisson surface reconstruction (Kazhdan, Bolitho, Hoppe, Symposium on Geometry Processing, 2006, http://hhoppe.com/poissonrecon.pdf) to produce the polygon model geometry.
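As one concrete way to run this meshing step, the sketch below uses Open3D's implementation of Poisson surface reconstruction in place of the authors' original code; the depth setting and the normal-orientation step are illustrative assumptions.

```python
import open3d as o3d

# object_points: the clipped Nx3 point array from the previous sketch.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(object_points)
pcd.estimate_normals()                           # Poisson needs normals
pcd.orient_normals_consistent_tangent_plane(30)  # orient them consistently

mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
```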
If the interpolation scheme re-creates parts of the object at known ground-plane positions, the final mesh can be clipped again.
The object can be oriented as follows. The coordinate system takes as one axis the estimated up direction in space defined above, and as its x axis the image-plane x axis of the first camera (or of one the user specifies), orthogonalised with respect to the estimated "up" direction in space. The final axis is the cross product of these two directions. The resulting coordinate system ensures a "natural" default orientation for the model: sitting on the ground and facing the first imaging-device position.
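A sketch of this frame construction; the axis ordering (z as up, y as the cross product) is one assumed convention:

```python
import numpy as np

def model_frame(up, first_cam_image_x):
    """z = estimated up axis; x = the first camera's image-plane x axis,
    orthogonalised against up; y = z x x completes the frame."""
    z = up / np.linalg.norm(up)
    x = first_cam_image_x - np.dot(first_cam_image_x, z) * z  # Gram-Schmidt step
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])  # rows are the model-frame axes
```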
Worked example of determining "up" in space
In this worked example it is assumed that six images have been taken, so there are six imaging-device positions, one per image. The positions of the imaging devices and the image-plane axes of the images taken are as follows.
Camera index: 0
Position: 4.35, 5.96, 3.41
Image-plane x axis: 0.76, -0.53, 0.38
Image-plane y axis: -0.37, 0.13, 0.92
Camera index: 1
Position: -7.76, 6.31, 3.74
Image-plane x axis: 0.91, 0.33, -0.24
Image-plane y axis: 0.26, -0.01, 0.97
Camera index: 2
Position: -9.65, -6.09, 16.38
Image-plane x axis: -0.83, 0.48, -0.28
Image-plane y axis: 0.45, 0.88, 0.10
Camera index: 3
Position: 0.57, -8.98, 15.42
Image-plane x axis: -0.92, -0.28, 0.29
Image-plane y axis: -0.21, 0.94, 0.26
Camera index: 4
Position: 8.54, -2.42, 8.18
Image-plane x axis: -0.06, -0.81, 0.58
Image-plane y axis: -0.52, 0.52, 0.67
Camera index: 5
Position: 5.42, 2.95, 0.39
Image-plane x axis: 0.65, -0.64, 0.40
Image-plane y axis: -0.26, 0.31, 0.92
The values of all the x axes and of all the y axes are then summed:
Sum of x axes: 0.52, -1.46, 1.13
Sum of y axes: -0.65, 2.76, 3.90
In this case, therefore, the summed axis with the greatest magnitude is the y axis, and the "up" direction is the normalised sum of the y axes: -0.14, 0.57, 0.81. This can be seen in Fig. 6, which shows the initially solved camera data before the "up" direction in space is reset.
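The sketch below reproduces this computation with the same logic as the estimate_up_axis sketch in Step 201; small differences from the printed sums arise from rounding in the listed axis values.

```python
import numpy as np

x_axes = np.array([[ 0.76, -0.53,  0.38], [ 0.91,  0.33, -0.24],
                   [-0.83,  0.48, -0.28], [-0.92, -0.28,  0.29],
                   [-0.06, -0.81,  0.58], [ 0.65, -0.64,  0.40]])
y_axes = np.array([[-0.37,  0.13,  0.92], [ 0.26, -0.01,  0.97],
                   [ 0.45,  0.88,  0.10], [-0.21,  0.94,  0.26],
                   [-0.52,  0.52,  0.67], [-0.26,  0.31,  0.92]])

sum_x, sum_y = x_axes.sum(axis=0), y_axes.sum(axis=0)
up = sum_y / np.linalg.norm(sum_y)   # |sum_y| > |sum_x|, so y gives "up"
print(sum_x, sum_y, up)              # up is approximately [-0.14, 0.57, 0.81]
```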
The above are only preferred embodiments of the present disclosure and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the disclosure shall fall within its scope of protection.
The embodiments described above are provided merely for convenience of description and illustration; modifications made by persons of ordinary skill in the art do not depart from the scope protected by the claims.

Claims (10)

1. A method for capturing a 3D model of an object on a surface or in an environment in space, comprising the following steps:
a) capturing multiple images of the object using an imaging device;
b) recovering the position and orientation of the imaging device for each image;
c) estimating the orientation of the up axis in space;
d) estimating a bounding region of the object using the positions and orientations of the imaging device and the orientation of the up axis in space; and
e) building the 3D model of the object within the bounding region.
2. The method according to claim 1, wherein the surface or environment does not form part of the 3D model.
3. The method according to claim 1 or 2, wherein the position and orientation of the imaging device recovered for each image are used to create a sparse point cloud representing the surface of the object.
4. The method according to claim 3, wherein the bounding region encloses the sparse point cloud of the object and is aligned with the up axis in space.
5. The method according to claim 1, wherein the up axis in space is estimated by determining a dominant image-plane axis from the recovered imaging-device parameters of each image.
6. The method according to claim 5, wherein the up axis in space is obtained from whichever of the averaged image-plane x or y directions has the greatest magnitude.
7. The method according to claim 1, further comprising estimating the points in space that lie on the ground plane, and clipping the 3D model above the ground plane to produce a complete 3D model of the object.
8. The method according to claim 6 or 7, wherein the ground plane is estimated by selecting points whose normals lie close to the up axis in space, the subset being analysed to find the dominant plane.
9. The method according to claim 8, wherein the selected points have normals within 25 degrees of the up axis.
10. The method according to claim 8 or 9, wherein the plane selection is biased towards lower z coordinates, where the z axis is aligned with the estimated up axis in space.
CN201710920307.7A 2017-09-30 2017-09-30 Method for capturing a 3D model of an object in space Pending CN109598757A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710920307.7A CN109598757A (en) 2017-09-30 2017-09-30 Method for capturing a 3D model of an object in space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710920307.7A CN109598757A (en) 2017-09-30 2017-09-30 Method for capturing a 3D model of an object in space

Publications (1)

Publication Number Publication Date
CN109598757A (en) 2019-04-09

Family

ID=65956875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710920307.7A Pending CN109598757A (en) Method for capturing a 3D model of an object in space

Country Status (1)

Country Link
CN (1) CN109598757A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102136155A (en) * 2010-01-27 2011-07-27 首都师范大学 Object elevation vectorization method and system based on three dimensional laser scanning
US20120008830A1 (en) * 2010-07-12 2012-01-12 Canon Kabushiki Kaisha Information processing apparatus, control method therefor, and computer-readable storage medium
US20140044322A1 (en) * 2012-08-08 2014-02-13 The Hong Kong Polytechnic University Contactless 3D Biometric Feature identification System and Method thereof
CN104330022A (en) * 2013-07-22 2015-02-04 赫克斯冈技术中心 Method and system for volume determination using a structure from motion algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190409