CN110047139A - A kind of specified target three-dimensional rebuilding method and system - Google Patents
- Publication number: CN110047139A
- Application number: CN201910347535.9A
- Authority
- CN
- China
- Prior art keywords
- point
- view image
- scene
- specified target
- training sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The present invention discloses a specified-target three-dimensional reconstruction method and system. The method includes: acquiring multi-view images; determining, by segmenting one of the images, the specified target image to be reconstructed; performing scene recovery on the multi-view images with structure from motion (SFM) to obtain the sparse scene structure, and labeling part of the 3D points in that structure to obtain a training sample set; constructing a strong classifier from the training sample set with the AdaBoost algorithm; classifying the remaining 3D points of the sparse structure with the strong classifier to obtain the sparse point cloud of the specified target; determining the dense point cloud of the scene with multi-view stereo (MVS) from the sparse point cloud; and, taking the training sample set as training data, determining the dense point cloud of the specified target over the scene's dense point cloud with a multiple-decision-tree strategy. The method needs only a few sparse points under one viewpoint to achieve three-dimensional reconstruction of the specified target, with a small computational load and high accuracy.
Description
Technical field
The present invention relates to the field of three-dimensional reconstruction, and in particular to a specified-target three-dimensional reconstruction method and system.
Background technique
Image-based reconstruction of 3D targets and scenes is one of the important research directions in computer vision, and has been studied extensively for both single-view and multi-view images. Because a single-view image lacks depth information of the target, reconstruction from it usually requires some degree of human interaction, or infers the object structure from modeling rules. Recovering spatial structure from a single-view image is a typical ill-posed problem, so most methods work from multi-view images. However, most of these methods are designed to reconstruct the entire scene; if the reconstructed result is the whole scene but the model of a specific object in the scene is wanted, target recognition must additionally be performed on the scene model. Like image-based target recognition, 3D target recognition also requires collecting enough images, depth maps, and other target samples in advance to train a model; when the number of samples is small or zero, such recognition is infeasible, which in turn makes it impossible to extract the 3D model of a specific target from the overall scene. The prior art therefore cannot reconstruct a specified target in a scene.
Summary of the invention
The object of the present invention is to provide a specified-target three-dimensional reconstruction method and system that enable reconstruction of a specified target in a scene.
To achieve the above object, the present invention provides the following schemes:
A specified-target three-dimensional reconstruction method, the reconstruction method including:
obtaining multi-view images;
performing target segmentation on one of the multi-view images to determine the specified target image to be reconstructed;
performing scene recovery on the multi-view images using structure from motion (SFM) to obtain the sparse scene structure of the multi-view images, the sparse scene structure being a three-dimensional structure composed of multiple 3D points;
labeling part of the 3D points in the sparse scene structure of the multi-view images to obtain a training sample set, the training sample set including positive samples and negative samples;
constructing a strong classifier from the training sample set using the AdaBoost algorithm;
classifying the remaining 3D points in the sparse scene structure with the strong classifier to obtain the sparse point cloud structure of the specified target;
determining the dense point cloud structure of the scene of the multi-view images from the sparse point cloud structure using MVS;
determining the specified-target dense point cloud structure from the scene's dense point cloud structure, with the training sample set as training data, using a multiple-decision-tree strategy; the specified-target dense point cloud structure is the reconstructed three-dimensional model of the specified target.
Optionally, labeling part of the 3D points in the sparse scene structure of the multi-view images to obtain the training sample set specifically includes:
defining a first viewpoint P and k multi-view images {I1, I2, …, Ik};
computing n 3D points {M1, M2, …, Mn} by SFM;
judging whether a point Mi, i ∈ [1, n], is visible to the first viewpoint P; if the point Mi is visible to P, its 2D coordinate in the corresponding image IP, P ∈ [1, k], is mi^P; performing target segmentation on image IP to separate, under the first viewpoint P, the 3D points whose projections fall inside the target from those whose projections fall outside it, where the former, written {Mi^ob}, are positive samples (superscript ob denotes the target) and the latter, written {Mi^bk}, are negative samples (superscript bk denotes the background).
Optionally, after labeling part of the 3D points in the sparse scene structure of the multi-view images to obtain the training sample set, the reconstruction method further includes: expanding the training sample set to obtain an expanded training sample set.
Optionally, expanding the training sample set specifically includes:
defining a second viewpoint q;
judging whether the point Mi is visible to both the first viewpoint P and the second viewpoint q; if so, its corresponding 2D coordinates mi^P in image IP and mi^q in image Iq, q ∈ [1, k], form a feature-matching pair;
performing super-pixel segmentation on image Iq; if point mi^q and point mj^q belong to the same super-pixel region, the 3D points corresponding to mi^q and mj^q receive the same label.
Optionally, constructing the strong classifier from the training sample set using the AdaBoost algorithm specifically includes:
given a weak-classifier space H, initializing the sample weights wi = 1/n, where T denotes the number of weak classifiers;
partitioning the training sample set to obtain multiple training sample subsets St^1, …, St^J;
computing the sum of the weights of each class of samples in St^j: W+^j = Σ wi over {i : yi = +1, Mi ∈ St^j} and W−^j = Σ wi over {i : yi = −1, Mi ∈ St^j};
determining the output of the weak classifier from this partition: ht(M) = (1/2) ln((W+^j + ε)/(W−^j + ε)) for M ∈ St^j, where ε is a small positive constant and M denotes a 3D point;
choosing ht(M) so that the normalization factor Zt = Σi wi exp(−yi ht(Mi)) is minimized;
updating the sample weights wi ← wi exp(−yi ht(Mi)) / Zt;
repeating the above steps to obtain multiple weak classifiers;
integrating the multiple weak classifiers to obtain the strong classifier.
Optionally, the strong classifier is expressed as F(M) = sign(Σt ht(M) − b), where b is a manually set threshold, M denotes a 3D point, T denotes the number of weak classifiers, and t indexes them.
Optionally, determining the specified-target dense point cloud structure from the scene's dense point cloud structure, with the training sample set as training data, using the multiple-decision-tree strategy specifically includes:
1) constructing a root node, with a minimum sample count per node and a minimum information gain as the termination conditions;
2) building nodes from the training sample set; if the current node reaches a termination condition, marking the current node as a leaf node and continuing to train the other nodes;
3) if the current node does not reach a termination condition, computing the Gini index of the training sample data for each attribute, constructing the node from the attribute with the smallest Gini index, and taking the corresponding sample point as the cut-off value;
4) at the current node, assigning the samples whose attribute value is less than the cut-off to the left child node and the rest to the right child node;
repeating steps 2) and 3) until all nodes are marked as leaf nodes, yielding multiple decision trees;
judging each 3D point in the dense point cloud structure of the scene of the multi-view images with the multiple decision trees to obtain multiple judgments, each judgment declaring the 3D point a target point or a background point;
taking the judgment returned most often as the final result;
determining the specified-target dense point cloud structure from the final results.
The present invention further provides a specified-target three-dimensional reconstruction system, the reconstruction system including:
an obtaining module, for obtaining multi-view images;
a specified-target-image determining module, for performing target segmentation on one of the multi-view images to determine the specified target image to be reconstructed;
a sparse-scene-structure recovery module, for performing scene recovery on the multi-view images using structure from motion (SFM) to obtain the sparse scene structure of the multi-view images, the sparse scene structure being a three-dimensional structure composed of multiple 3D points;
a training-sample-set determining module, for labeling part of the 3D points in the sparse scene structure of the multi-view images to obtain a training sample set including positive and negative samples;
a strong-classifier constructing module, for constructing a strong classifier from the training sample set using the AdaBoost algorithm;
a classification module, for classifying the remaining 3D points in the sparse point cloud structure with the strong classifier to obtain the sparse point cloud structure of the specified target;
a scene dense-point-cloud structure determining module, for determining the dense point cloud structure of the scene of the multi-view images from the sparse point cloud structure using MVS;
a target dense-point-cloud structure determining module, for determining the specified-target dense point cloud structure from the scene's dense point cloud structure, with the training sample set as training data, using a multiple-decision-tree strategy; the specified-target dense point cloud structure is the reconstructed three-dimensional model of the specified target.
According to the specific embodiments provided by the present invention, the invention discloses the following technical effects:
The present invention proposes a simple and effective specified-target reconstruction method based on multi-view images. By labeling a small number of sparse SFM points under one known viewpoint and training classifiers on them, the method can identify and reconstruct the target from the MVS dense point cloud. It is applicable to various scenes, has high recognition accuracy and robustness, and its results are of practical value for image-based scene understanding, 3D target recognition and tracking, and similar applications.
Detailed description of the invention
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the specified-target three-dimensional reconstruction method of an embodiment of the present invention;
Fig. 2 is a scene figure of an embodiment of the present invention;
Fig. 3 shows results of the specified-target three-dimensional reconstruction method of an embodiment of the present invention;
Fig. 4 shows different scene figures of embodiments of the present invention and their corresponding ROC curves;
Fig. 5 is a structural schematic diagram of the specified-target three-dimensional reconstruction system of an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The object of the present invention is to provide a specified-target three-dimensional reconstruction method and system that enable reconstruction of a specified target in a scene.
To make the above objects, features, and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is a flow chart of the specified-target three-dimensional reconstruction method of an embodiment of the present invention. As shown in Fig. 1, the reconstruction method includes:
Step 101: obtain multi-view images.
Step 102: perform target segmentation on one of the multi-view images to determine the specified target image to be reconstructed. As shown in Fig. 2, the scene consists of many pictures; to reconstruct the house in the scene, only one of the pictures needs to be chosen to determine the target to be reconstructed, avoiding the prior-art need to reconstruct the entire scene and the large data-processing load that brings.
Step 103: perform scene recovery on the multi-view images using structure from motion (SFM) to obtain the sparse scene structure of the multi-view images, the sparse scene structure being a three-dimensional structure composed of multiple 3D points.
Step 104: label part of the 3D points in the sparse scene structure of the multi-view images to obtain a training sample set, the training sample set including positive samples and negative samples.
Since the sparse point cloud carries no labels, it cannot be used directly as training samples. In the present invention, part of the sparse point cloud is labeled through the binary image produced by target segmentation, forming the training samples as follows:
Define a first viewpoint P and k multi-view images {I1, I2, …, Ik}, where the first viewpoint P is any one viewpoint;
compute n 3D points {M1, M2, …, Mn} by SFM;
judge whether a point Mi, i ∈ [1, n], is visible to the first viewpoint P; if the point Mi is visible to P, its 2D coordinate in the corresponding image IP, P ∈ [1, k], is mi^P; perform target segmentation on image IP to separate, under the first viewpoint P, the 3D points whose projections fall inside the target from those whose projections fall outside it: the former, written {Mi^ob}, are positive samples (superscript ob denotes the target), and the latter, written {Mi^bk}, are negative samples (superscript bk denotes the background).
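The projection-and-mask labeling step above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: a simple pinhole projection with a 3 × 4 projection matrix, a boolean target mask from the binary segmentation, and a front-of-camera check as the visibility test.

```python
import numpy as np

def label_sparse_points(points_3d, proj_matrix, target_mask):
    """Label SFM points visible in view P using a binary segmentation mask.

    points_3d   : (n, 3) array of SFM points M_i
    proj_matrix : (3, 4) camera projection matrix for view P (assumption)
    target_mask : (H, W) boolean array, True inside the segmented target
    Returns index lists of positive (target) and negative (background) samples.
    """
    h, w = target_mask.shape
    homo = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # (n, 4)
    proj = homo @ proj_matrix.T                                  # (n, 3)
    positives, negatives = [], []
    for i, (x, y, z) in enumerate(proj):
        if z <= 0:                    # behind the camera: not visible in view P
            continue
        u, v = int(x / z), int(y / z)
        if not (0 <= u < w and 0 <= v < h):
            continue                  # projects outside image I_P
        if target_mask[v, u]:
            positives.append(i)       # M_i^ob -> label y = +1
        else:
            negatives.append(i)       # M_i^bk -> label y = -1
    return positives, negatives
```

The returned index lists give the positive and negative samples of the training set S1 described below.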
To give the trained model better generalization ability, the training samples are expanded as follows:
define a second viewpoint q, where the second viewpoint q is any one viewpoint;
judge whether the point Mi is visible to both the first viewpoint P and the second viewpoint q; if so, its corresponding 2D coordinates mi^P in image IP and mi^q in image Iq form a feature-matching pair;
perform super-pixel segmentation on image Iq; if point mi^q and point mj^q belong to the same super-pixel region, the 3D points corresponding to mi^q and mj^q receive the same label.
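The label propagation through view q can be sketched as follows, assuming the super-pixel segmentation of I_q and the feature matches between views are already computed; the data layouts and function name are illustrative, not from the patent.

```python
import numpy as np

def propagate_labels(matches, labels, superpixels_q):
    """Expand the training set through a second view q.

    matches       : list of (idx, (u_p, v_p), (u_q, v_q)) — feature matches for
                    3D points visible in both view P and view q (assumed format)
    labels        : dict idx -> +1/-1, labels already assigned in view P
    superpixels_q : (H, W) int array of super-pixel ids for image I_q
    Points whose projections in I_q share a super-pixel get the same label.
    """
    new_labels = dict(labels)
    # group matched points by the super-pixel containing their projection in I_q
    by_region = {}
    for idx, _, (u_q, v_q) in matches:
        by_region.setdefault(int(superpixels_q[v_q, u_q]), []).append(idx)
    for region, idxs in by_region.items():
        known = [labels[i] for i in idxs if i in labels]
        if known:                       # copy a known label to the whole region
            for i in idxs:
                new_labels.setdefault(i, known[0])
    return new_labels
```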
The training sample set can be written as S1 = {(M1, y1), …, (Mn, yn)}, where the class label is yi = +1 for target points and yi = −1 for background points.
Step 105: construct a strong classifier from the training sample set using the AdaBoost algorithm.
Because the reconstructed 3D scene is discontinuous, separating the target by constructing a single classifier of very high accuracy is difficult. The continuous (real) AdaBoost (Adaptive Boosting) algorithm is therefore used to construct multiple weak classifiers of ordinary accuracy and integrate them into a strong classifier of higher accuracy, as follows:
Given a weak-classifier space H, initialize the sample weights wi = 1/n; T denotes the number of weak classifiers;
partition the training sample set to obtain multiple training sample subsets St^1, …, St^J;
compute the sum of the weights of each class of samples in St^j: W+^j = Σ wi over {i : yi = +1, Mi ∈ St^j} and W−^j = Σ wi over {i : yi = −1, Mi ∈ St^j};
determine the output of the weak classifier from this partition: ht(M) = (1/2) ln((W+^j + ε)/(W−^j + ε)) for M ∈ St^j, where ε is a small positive constant and M denotes a 3D point;
choose ht(M) so that the normalization factor Zt = Σi wi exp(−yi ht(Mi)) is minimized;
update the sample weights wi ← wi exp(−yi ht(Mi)) / Zt;
repeat the above steps to obtain multiple weak classifiers;
integrate the multiple weak classifiers to obtain the strong classifier.
Specifically, the strong classifier is expressed as F(M) = sign(Σt ht(M) − b), where b is a manually set threshold, defaulting to 0.
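The domain-partitioning AdaBoost steps above can be sketched as follows. This is a simplified illustration, not the patent's code: the subsets St^j are taken as equal-width bins along one feature axis, and the axis whose partition minimizes the normalization factor Zt is chosen at each round.

```python
import numpy as np

def train_real_adaboost(X, y, T=10, n_bins=8, eps=1e-6):
    """Real (domain-partitioning) AdaBoost sketch.

    Each weak learner partitions one feature axis of X into equal-width bins
    (playing the role of the subsets S_t^j) and outputs
        h_t(M) = 0.5 * ln((W_+^j + eps) / (W_-^j + eps))
    for the bin j containing M.  y must be in {+1, -1}.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                 # initial weights w_i = 1/n
    learners = []
    for _ in range(T):
        best = None
        for f in range(d):                  # try each feature axis as partition
            edges = np.linspace(X[:, f].min(), X[:, f].max(), n_bins + 1)
            bins = np.clip(np.digitize(X[:, f], edges[1:-1]), 0, n_bins - 1)
            w_pos = np.array([w[(bins == j) & (y > 0)].sum() for j in range(n_bins)])
            w_neg = np.array([w[(bins == j) & (y < 0)].sum() for j in range(n_bins)])
            h = 0.5 * np.log((w_pos + eps) / (w_neg + eps))
            Z = w @ np.exp(-y * h[bins])    # normalization factor to minimize
            if best is None or Z < best[0]:
                best = (Z, f, edges, h)
        Z, f, edges, h = best
        bins = np.clip(np.digitize(X[:, f], edges[1:-1]), 0, n_bins - 1)
        w = w * np.exp(-y * h[bins]) / Z    # weight update w_i <- w_i e^{-y h}/Z
        learners.append((f, edges, h))
    return learners

def strong_classify(learners, X, b=0.0):
    """Strong classifier F(M) = sign(sum_t h_t(M) - b), b defaulting to 0."""
    score = np.zeros(len(X))
    for f, edges, h in learners:
        bins = np.clip(np.digitize(X[:, f], edges[1:-1]), 0, len(h) - 1)
        score += h[bins]
    return np.where(score - b >= 0, 1, -1)
```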
Step 106: classify the remaining 3D points in the sparse point cloud model with the strong classifier to obtain the sparse point cloud model of the specified target.
Step 107: determine the dense point cloud model of the scene of the multi-view images from the sparse point cloud model using MVS.
Once the sparse model of the target is obtained, the point cloud carries target/background labels and can in turn be regarded as a new training set for estimating the dense model of the target. If the MVS dense point cloud Mde is computed with the SFM sparse cloud Msp as seed points, then the spatial structures of Msp and Mde are approximate and their color distributions are similar. Exploiting this property, the CART method is used here to construct decision trees that classify the points of Mde into target and background, finally yielding the dense model of the target.
Step 108: determine the specified-target dense point cloud structure from the scene's dense point cloud structure, with the training sample set as training data, using the multiple-decision-tree strategy; the specified-target dense point cloud structure is the reconstructed three-dimensional model of the specified target.
Specifically, since the dense point cloud composing the scene is a spatial structure that is only partially visible at each viewpoint, a classifier built from a single decision tree on the training set is prone to overfitting. The present invention therefore generates one decision tree from the samples of each viewpoint and judges the labels of the point cloud with multiple decision trees. Each decision tree is constructed as follows:
1) construct a root node, with a minimum sample count per node and a minimum information gain as the termination conditions;
2) build nodes from the training sample set; if the current node reaches a termination condition, mark the current node as a leaf node and continue to train the other nodes;
3) if the current node does not reach a termination condition, compute the Gini index of the training sample data for each attribute, construct the node from the attribute with the smallest Gini index, and take the corresponding sample point as the cut-off value;
4) at the current node, assign the samples whose attribute value is less than the cut-off to the left child node and the rest to the right child node;
repeat steps 2) and 3) until all nodes are marked as leaf nodes, yielding multiple decision trees;
judge each 3D point in the dense point cloud structure of the scene of the multi-view images with the multiple decision trees to obtain multiple judgments, each judgment declaring the 3D point a target point or a background point;
take the judgment returned most often as the final result;
determine the specified-target dense point cloud structure from the final results.
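The tree construction and majority vote above can be sketched as follows: a simplified CART with the Gini criterion, the two stopping rules, and a left/right split at a cut-off value, plus a majority vote over several trees. In practice one tree would be trained per viewpoint on that viewpoint's visible samples; here the helper names and parameters are illustrative.

```python
import numpy as np

def gini(y):
    """Gini index of a +1/-1 label array."""
    if len(y) == 0:
        return 0.0
    p = np.mean(y == 1)
    return 2 * p * (1 - p)

def build_tree(X, y, min_samples=2, min_gain=1e-3):
    """Recursive CART node construction with the patent's stopping rules:
    a minimum sample count and a minimum gain on each node."""
    majority = 1 if (y == 1).sum() >= (y == -1).sum() else -1
    if len(y) < min_samples or len(set(y)) == 1:
        return ('leaf', majority)
    best = None
    for f in range(X.shape[1]):                 # attribute with smallest Gini
        for cut in np.unique(X[:, f]):
            left = X[:, f] < cut
            g = (left.sum() * gini(y[left])
                 + (~left).sum() * gini(y[~left])) / len(y)
            if best is None or g < best[0]:
                best = (g, f, cut)
    g, f, cut = best
    if gini(y) - g < min_gain:                  # gain too small: leaf node
        return ('leaf', majority)
    left = X[:, f] < cut                        # samples below cut go left
    return ('node', f, cut,
            build_tree(X[left], y[left], min_samples, min_gain),
            build_tree(X[~left], y[~left], min_samples, min_gain))

def tree_predict(tree, x):
    while tree[0] == 'node':
        _, f, cut, l, r = tree
        tree = l if x[f] < cut else r
    return tree[1]

def forest_predict(trees, x):
    """Majority vote over the per-viewpoint trees: the label returned
    by the most trees is taken as the final result."""
    votes = [tree_predict(t, x) for t in trees]
    return 1 if votes.count(1) >= votes.count(-1) else -1
```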
Fig. 3 shows results of the specified-target three-dimensional reconstruction method of an embodiment of the present invention. Part (a) shows the input images: this experimental group consists of 37 outdoor multi-view images with a resolution of 720 × 576. Part (b) shows the target outline: the target to be reconstructed, a toy model in the images, is specified by the binary segmentation map of one image. The SFM sparse point cloud and the MVS dense point cloud were obtained with the software Agisoft PhotoScan. Part (c) shows the SFM sparse point cloud of the scene computed by SFM; because the blue background texture is uniform, feature-matching errors occur and many sparse blue points appear at the edges of the model. The same phenomenon appears in the PMVS dense point cloud of part (d), which makes the model appear to contain much noise. Part (e) shows the sparse point cloud of the corresponding image together with its super-pixel segmentation and the target points. With these sparse points as initial samples, after training and classification the final SFM sparse point cloud classification result is shown in part (f), the target sparse points; note that a few non-target points remain below the toy model. In the final PMVS point cloud identification, however, shown in part (g) as the target reconstruction result, the method of the present invention removes the blue points at the object edges, and the erroneous target points in the sparse model do not affect the target identification in the dense point cloud. This shows that the multiple-decision-tree model used by the present algorithm does not overfit to a small number of erroneous samples.
To assess the target-reconstruction accuracy of the above method, experiments were run on the multi-view image collections of the Cornell Multiview Dataset. To obtain the ground truth of the 3D target, the dense point cloud is first computed with PMVS, and each point of the cloud is then re-projected into every image using the projection matrices obtained by SFM. If the re-projected 2D point falls inside the target region, the corresponding 3D point is deemed to belong to the target and is called a target point; a 3D point falling outside the target region is called a background point. Parts (a)-(d) of Fig. 4 show the experimental results for "Box", "Person", "Msr_ph", and "Tree" respectively; "Box" and "Person" are indoor scenes, while "Msr_ph" and "Tree" are outdoor scenes. The first column of Fig. 4 shows the input images, the second the PMVS dense point clouds, and the third the reconstructed specified targets. To represent more intuitively the ability of the present algorithm to predict the PMVS dense points, the last column plots the ROC curves of the target detection. The ROC curves show that the classifiers constructed by the present algorithm perform well: the lowest AUC value of the ROC curves is 96.81% (the ideal value being 100%).
To further evaluate the performance of the model quantitatively, the following measures of the experimental results, shown in Table 1, are computed: precision (the proportion of points in the reconstructed target cloud that truly belong to the target), recall (the proportion of all target points that are correctly identified), accuracy (the decision ability of the classifier system over the entire dense point cloud), and F1-score (the combined effect of precision and recall). Their formulas are:
Precision = TP / (TP + FP), Recall = TP / (TP + FN), Accuracy = (TP + TN) / (TP + FP + FN + TN), F1 = 2 · Precision · Recall / (Precision + Recall),
where TP is the number of target points included in the reconstructed model, FP the number of non-target points in the reconstructed model, FN the number of target points not included in the reconstructed model, and TN the number of non-target points correctly rejected.
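The four evaluation measures above can be computed directly from the TP/FP/FN/TN counts; a minimal sketch (the function name is illustrative):

```python
def reconstruction_metrics(tp, fp, fn, tn):
    """Precision, recall, accuracy, and F1 from the point counts defined above.

    tp: target points included in the reconstructed model
    fp: non-target points in the reconstructed model
    fn: target points not included in the reconstructed model
    tn: non-target points correctly rejected
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1
```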
Table 1
The results in Table 1 show that the method of the present invention has strong target-identification ability across scenes, with every index averaging above 90%, and demonstrates this comprehensive ability even in the larger outdoor scene "Tree". Note, however, that in the "Box" experiment the precision falls below 90%: the upper surface of the reconstructed target "box" and the wall are both white and spatially adjacent, so background points belonging to the wall are misidentified, lowering the precision. Yet the recall of the target is 93.2%, showing that the vast majority of points belonging to the target are identified.
Fig. 5 is a structural schematic diagram of the specified-target three-dimensional reconstruction system of an embodiment of the present invention. As shown in Fig. 5, the reconstruction system includes:
an obtaining module 201, for obtaining multi-view images;
a specified-target-image determining module 202, for performing target segmentation on one of the multi-view images to determine the specified target image to be reconstructed;
a sparse-scene-structure recovery module 203, for performing scene recovery on the multi-view images using structure from motion (SFM) to obtain the sparse scene structure of the multi-view images, the sparse scene structure being a three-dimensional structure composed of multiple 3D points;
a training-sample-set determining module 204, for labeling part of the 3D points in the sparse scene structure of the multi-view images to obtain a training sample set including positive and negative samples;
a strong-classifier constructing module 205, for constructing a strong classifier from the training sample set using the AdaBoost algorithm;
a classification module 206, for classifying the remaining 3D points in the sparse point cloud structure with the strong classifier to obtain the sparse point cloud structure of the specified target;
a scene dense-point-cloud structure determining module 207, for determining the dense point cloud structure of the scene of the multi-view images from the sparse point cloud structure using MVS;
a target dense-point-cloud structure determining module 208, for determining the specified-target dense point cloud structure from the scene's dense point cloud structure, with the training sample set as training data, using a multiple-decision-tree strategy; the specified-target dense point cloud structure is the reconstructed three-dimensional model of the specified target.
The embodiments in this specification are described progressively; each embodiment highlights its differences from the others, and the same or similar parts of the embodiments may be referred to one another.
Specific examples are used herein to illustrate the principle and implementation of the present invention; the above description of the embodiments is merely intended to help in understanding the method and core concept of the invention. Meanwhile, those skilled in the art may, following the idea of the present invention, make changes in the specific implementation and application scope. In conclusion, the content of this specification should not be construed as limiting the present invention.
Claims (8)
1. A three-dimensional reconstruction method for a specified target, wherein the reconstruction method comprises:
obtaining multi-view images;
performing target segmentation on one of the multi-view images to determine the specified target image to be reconstructed;
performing scene recovery on the multi-view images using structure-from-motion (SFM) technology to obtain a scene sparse structure of the multi-view images, the scene sparse structure being a three-dimensional structure composed of multiple 3D points;
marking part of the 3D points in the scene sparse structure of the multi-view images to obtain a training sample set, the training sample set comprising positive samples and negative samples;
constructing a strong classifier using the AdaBoost algorithm based on the training sample set;
classifying the remaining 3D points in the scene sparse structure using the strong classifier to obtain a sparse point cloud structure of the specified target;
determining a dense point cloud structure of the scene of the multi-view images using MVS technology based on the sparse point cloud structure;
determining a dense point cloud structure of the specified target from the dense point cloud structure of the scene of the multi-view images, using the training sample set as training data and a multiple-decision-tree determination strategy, the dense point cloud structure of the specified target being the reconstructed three-dimensional point cloud map of the specified target.
2. The three-dimensional reconstruction method for a specified target according to claim 1, wherein the marking part of the 3D points in the scene sparse structure of the multi-view images to obtain a training sample set specifically comprises:
defining a first viewpoint P and k multi-view images {I_1, I_2, …, I_k};
computing n 3D points {M_1, M_2, …, M_n} through SFM;
judging whether a point M_i, i ∈ [1, n], is visible to the first viewpoint P; if the point M_i is visible to the first viewpoint P, its corresponding 2D point coordinate in the image I_P, P ∈ [1, k], is m_i^P;
performing target segmentation on the image I_P to obtain the 3D points visible under the first viewpoint P and the 3D points not visible under it, wherein a visible 3D point, denoted M_i^ob, is a positive sample, the superscript ob denoting the target, and an invisible 3D point, denoted M_i^bk, is a negative sample, the superscript bk denoting the background.
3. The three-dimensional reconstruction method for a specified target according to claim 2, wherein after marking part of the 3D points in the scene sparse structure of the multi-view images to obtain the training sample set, the reconstruction method further comprises: expanding the training sample set to obtain an expanded training sample set.
4. The three-dimensional reconstruction method for a specified target according to claim 3, wherein the expanding the training sample set specifically comprises:
defining a second viewpoint q;
judging whether the point M_i is visible to both the first viewpoint P and the second viewpoint q; if M_i is visible to both the first viewpoint P and the second viewpoint q, the 2D coordinates m_i^P and m_i^q corresponding to the point M_i in the images I_P and I_q, q ∈ [1, k], form a pair of feature match points;
performing super-pixel segmentation on the image I_q; if a point m_i^q and a point m_j^q belong to the same super-pixel region, the 3D point corresponding to m_i^q and the 3D point corresponding to m_j^q are given the same label.
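The propagation step of claim 4 amounts to a lookup: any unlabeled matched point inherits the label of a labeled point sharing its super-pixel region. A minimal sketch, assuming the super-pixel map is already computed (e.g. by an algorithm such as SLIC) and represented as a grid of region ids; the data layout is an assumption of the example:

```python
def propagate_labels(superpixels, labeled_pts, unlabeled_pts):
    """Propagate labels inside super-pixel regions of the second view I_q.

    superpixels  : H x W grid (list of rows) of super-pixel region ids
    labeled_pts  : list of ((x, y), label) pairs for matched 2D points
                   whose 3D points were already labeled from the first view
    unlabeled_pts: list of (x, y) projections of still-unlabeled 3D points
    Returns one entry per unlabeled point: the label of a labeled point
    lying in the same super-pixel region, or None if the region holds none.
    """
    region_label = {}
    for (x, y), lab in labeled_pts:
        region_label[superpixels[y][x]] = lab
    return [region_label.get(superpixels[y][x]) for x, y in unlabeled_pts]
```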
5. The three-dimensional reconstruction method for a specified target according to claim 1, wherein the constructing a strong classifier using the AdaBoost algorithm based on the training sample set specifically comprises:
giving a weak classifier space H and initializing the sample weights w_1(i) = 1/n, T denoting the number of weak classifiers;
dividing the training sample set to obtain multiple training sample subsets X_1, X_2, …, X_J;
computing the sums of the per-class sample weights within each subset X_j: W_+^j for the positive samples and W_-^j for the negative samples;
determining the output of the weak classifier based on the above division: for M ∈ X_j, h_t(M) = (1/2) ln((W_+^j + ε)/(W_-^j + ε)), where ε is a small positive constant and M denotes a 3D point;
choosing h_t(M) so as to minimize the normalization factor Z_t = Σ_i w_t(i) exp(−y_i h_t(M_i));
updating the sample weights as w_{t+1}(i) = w_t(i) exp(−y_i h_t(M_i)) / Z_t;
repeating the above steps to obtain multiple weak classifiers;
integrating the multiple weak classifiers to obtain the strong classifier.
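The construction in claim 5 is a confidence-rated (real) AdaBoost over domain-partitioning weak learners. A compact sketch under simplifying assumptions: samples are scalar features, each candidate weak learner is a partition function mapping a sample to a subset index j (playing the role of the subsets X_j), and the threshold b of the strong classifier defaults to 0. None of these representation choices come from the patent.

```python
import math

def real_adaboost(samples, labels, partitions, rounds, eps=1e-6):
    """Each round: for every candidate partition, accumulate the per-subset
    positive/negative weight sums W_+^j and W_-^j, set the weak output
    h(x) = 0.5*ln((W_+^j + eps) / (W_-^j + eps)) on subset j, keep the
    partition minimizing the normalization factor Z, then reweight
    w_i <- w_i * exp(-y_i * h(x_i)) / Z.  labels are +1 / -1."""
    n = len(samples)
    w = [1.0 / n] * n
    weak = []
    for _ in range(rounds):
        best = None
        for part in partitions:
            wp, wm = {}, {}
            for x, y, wi in zip(samples, labels, w):
                j = part(x)
                if y > 0:
                    wp[j] = wp.get(j, 0.0) + wi
                else:
                    wm[j] = wm.get(j, 0.0) + wi
            out = {j: 0.5 * math.log((wp.get(j, 0.0) + eps) /
                                     (wm.get(j, 0.0) + eps))
                   for j in set(wp) | set(wm)}
            Z = sum(wi * math.exp(-y * out[part(x)])
                    for x, y, wi in zip(samples, labels, w))
            if best is None or Z < best[0]:
                best = (Z, part, out)
        Z, part, out = best
        w = [wi * math.exp(-y * out[part(x)]) / Z
             for x, y, wi in zip(samples, labels, w)]
        weak.append((part, out))
    return weak

def strong_classify(weak, x, b=0.0):
    """H(x) = sign(sum_t h_t(x) - b)."""
    score = sum(out[part(x)] for part, out in weak)
    return 1 if score - b > 0 else -1
```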
6. The three-dimensional reconstruction method for a specified target according to claim 5, wherein the strong classifier is expressed as H(M) = sign(Σ_{t=1}^{T} h_t(M) − b), where b is a manually set threshold, M denotes a 3D point, T denotes the number of weak classifiers, and t indexes the training sample subdivisions.
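The strong classifier of claim 6 is simply the thresholded sum of the weak outputs. In this one-function sketch the weak outputs are passed in as plain numbers and the threshold values are illustrative:

```python
def strong_classifier(weak_outputs, b=0.0):
    """H(M) = sign(sum_t h_t(M) - b): +1 (target) if the summed weak
    outputs exceed the hand-set threshold b, else -1 (background)."""
    return 1 if sum(weak_outputs) - b > 0 else -1
```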
7. The three-dimensional reconstruction method for a specified target according to claim 1, wherein the determining the dense point cloud structure of the specified target from the dense point cloud structure of the scene of the multi-view images, using the training sample set as training data and the multiple-decision-tree determination strategy, specifically comprises:
1) constructing a root node, with a minimum number of samples on a node and a minimum information gain as termination conditions;
2) constructing nodes using the training sample set; if the current node reaches a termination condition, setting the current node as a leaf node and continuing to train the other nodes;
3) if the current node does not reach a termination condition, computing the Gini index of the training sample data for each attribute, finding the attribute with the smallest Gini index to construct the node, and taking the corresponding sample point as the cut-off point;
4) on the current node, dividing the samples whose attribute value is smaller than the cut-off point into the left child node and the remaining samples into the right child node;
repeating step 2) and step 3) until all nodes are marked as leaf nodes, thereby obtaining multiple decision trees;
judging the 3D points in the dense point cloud structure of the scene of the multi-view images using the multiple decision trees to obtain multiple judgment results, each judgment result being that the 3D point is a target 3D point or that the 3D point is a background 3D point;
taking the result that occurs most often among the multiple judgment results as the final result;
determining the dense point cloud structure of the specified target based on the final result.
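Steps 1) to 4) of claim 7 describe CART-style tree growth with a Gini split criterion, followed by majority voting over the trees. A simplified sketch: samples are feature vectors (lists), the termination conditions are the claim's minimum sample count and minimum gain, and, for brevity, every tree here is grown on the same data (a real forest would vary the training subsets):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a label multiset."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((k / n) ** 2 for k in Counter(labels).values())

def build_tree(X, y, min_samples=2, min_gain=1e-3):
    """Grow one node: stop (leaf with the majority label) on the
    termination conditions, otherwise split at the attribute/threshold
    with the lowest weighted Gini index; samples whose attribute value
    is below the cut-off go to the left child."""
    if len(set(y)) == 1 or len(y) < min_samples:
        return Counter(y).most_common(1)[0][0]
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            left = [yi for row, yi in zip(X, y) if row[f] < t]
            right = [yi for row, yi in zip(X, y) if row[f] >= t]
            if not left or not right:
                continue
            g = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or g < best[0]:
                best = (g, f, t)
    if best is None or gini(y) - best[0] < min_gain:
        return Counter(y).most_common(1)[0][0]
    _, f, t = best
    L = [(row, yi) for row, yi in zip(X, y) if row[f] < t]
    R = [(row, yi) for row, yi in zip(X, y) if row[f] >= t]
    return (f, t,
            build_tree([r for r, _ in L], [yi for _, yi in L], min_samples, min_gain),
            build_tree([r for r, _ in R], [yi for _, yi in R], min_samples, min_gain))

def predict(tree, x):
    while isinstance(tree, tuple):
        f, t, left, right = tree
        tree = left if x[f] < t else right
    return tree

def forest_vote(trees, x):
    """Majority vote of the per-tree judgments: target vs background."""
    return Counter(predict(t, x) for t in trees).most_common(1)[0][0]
```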
8. A three-dimensional reconstruction system for a specified target, wherein the reconstruction system comprises:
an obtaining module, configured to obtain multi-view images;
a specified target image determining module, configured to perform target segmentation on one of the multi-view images to determine the specified target image to be reconstructed;
a scene sparse structure recovery module, configured to perform scene recovery on the multi-view images using structure-from-motion (SFM) technology to obtain a scene sparse structure of the multi-view images, the scene sparse structure being a three-dimensional stereo structure composed of multiple 3D points;
a training sample set determining module, configured to mark part of the 3D points in the scene sparse structure of the multi-view images to obtain a training sample set, the training sample set comprising positive samples and negative samples;
a strong classifier construction module, configured to construct a strong classifier using the AdaBoost algorithm based on the training sample set;
a classification module, configured to classify the remaining 3D points in the sparse point cloud structure using the strong classifier to obtain a sparse point cloud structure of the specified target;
a multi-view-image dense point cloud structure determining module, configured to determine the dense point cloud structure of the scene of the multi-view images using MVS technology based on the sparse point cloud structure;
a target dense point cloud structure determining module, configured to determine the dense point cloud structure of the specified target from the dense point cloud structure of the scene of the multi-view images, using the training sample set as training data and a multiple-decision-tree determination strategy, the dense point cloud structure of the specified target being the reconstructed three-dimensional point cloud map of the specified target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910347535.9A CN110047139B (en) | 2019-04-28 | 2019-04-28 | Three-dimensional reconstruction method and system for specified target |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110047139A true CN110047139A (en) | 2019-07-23 |
CN110047139B CN110047139B (en) | 2022-07-08 |
Family
ID=67279824
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910347535.9A Expired - Fee Related CN110047139B (en) | 2019-04-28 | 2019-04-28 | Three-dimensional reconstruction method and system for specified target |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110047139B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101719286A (en) * | 2009-12-09 | 2010-06-02 | 北京大学 | Multiple viewpoints three-dimensional scene reconstructing method fusing single viewpoint scenario analysis and system thereof |
US20180276885A1 (en) * | 2017-03-27 | 2018-09-27 | 3Dflow Srl | Method for 3D modelling based on structure from motion processing of sparse 2D images |
CN108986162A (en) * | 2018-06-28 | 2018-12-11 | 四川斐讯信息技术有限公司 | Vegetable and background segment method based on Inertial Measurement Unit and visual information |
CN109076148A (en) * | 2016-04-12 | 2018-12-21 | 奎蒂安特有限公司 | Everyday scenes reconstruction engine |
Non-Patent Citations (2)
Title |
---|
ZHANG Huwang: "Design and Implementation of a Track Slab Geometric Dimension Inspection System Based on 3D Reconstruction Technology", China Master's Theses Full-text Database, Information Science and Technology Series * |
MIAO Jun et al.: "Object segmentation in multi-view images with a small amount of interaction", Journal of Computer-Aided Design & Computer Graphics * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110782521A (en) * | 2019-09-06 | 2020-02-11 | 重庆东渝中能实业有限公司 | Mobile terminal three-dimensional reconstruction and model restoration method and system |
CN110653166B (en) * | 2019-10-08 | 2021-10-22 | 河南科技大学 | Fruit detection and classification method and device |
CN110653166A (en) * | 2019-10-08 | 2020-01-07 | 河南科技大学 | Fruit detection and classification method and device |
CN111192313A (en) * | 2019-12-31 | 2020-05-22 | 深圳优地科技有限公司 | Method for robot to construct map, robot and storage medium |
CN111192313B (en) * | 2019-12-31 | 2023-11-07 | 深圳优地科技有限公司 | Method for constructing map by robot, robot and storage medium |
CN111681318A (en) * | 2020-06-10 | 2020-09-18 | 上海城市地理信息系统发展有限公司 | Point cloud data modeling method and device and electronic equipment |
CN111882657A (en) * | 2020-06-29 | 2020-11-03 | 杭州易现先进科技有限公司 | Three-dimensional reconstruction scale recovery method, device and system and computer equipment |
CN111882657B (en) * | 2020-06-29 | 2024-01-26 | 杭州易现先进科技有限公司 | Three-dimensional reconstruction scale recovery method, device, system and computer equipment |
CN111815766A (en) * | 2020-07-28 | 2020-10-23 | 复旦大学附属华山医院 | Processing method and system for reconstructing blood vessel three-dimensional model based on 2D-DSA image |
CN111815766B (en) * | 2020-07-28 | 2024-04-30 | 复影(上海)医疗科技有限公司 | Processing method and system for reconstructing three-dimensional model of blood vessel based on 2D-DSA image |
CN113627434A (en) * | 2021-07-07 | 2021-11-09 | 中国科学院自动化研究所 | Method and device for building processing model applied to natural image |
CN113627434B (en) * | 2021-07-07 | 2024-05-28 | 中国科学院自动化研究所 | Construction method and device for processing model applied to natural image |
CN114332415A (en) * | 2022-03-09 | 2022-04-12 | 南方电网数字电网研究院有限公司 | Three-dimensional reconstruction method and device of power transmission line corridor based on multi-view technology |
CN114332415B (en) * | 2022-03-09 | 2022-07-29 | 南方电网数字电网研究院有限公司 | Three-dimensional reconstruction method and device of power transmission line corridor based on multi-view technology |
Also Published As
Publication number | Publication date |
---|---|
CN110047139B (en) | 2022-07-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110047139A (en) | A kind of specified target three-dimensional rebuilding method and system | |
CN110120097B (en) | Semantic modeling method for airborne point cloud of large scene | |
CN111832655B (en) | Multi-scale three-dimensional target detection method based on characteristic pyramid network | |
CN101739721B (en) | Time change and disordered multi-image-based four-dimensional modeling method | |
CN106920243A (en) | The ceramic material part method for sequence image segmentation of improved full convolutional neural networks | |
Xiao et al. | Joint affinity propagation for multiple view segmentation | |
CN101714262A (en) | Method for reconstructing three-dimensional scene of single image | |
CN103065158B (en) | The behavior recognition methods of the ISA model based on relative gradient | |
CN105869173A (en) | Stereoscopic vision saliency detection method | |
CN102999942A (en) | Three-dimensional face reconstruction method | |
CN104050628B (en) | Image processing method and image processing device | |
CN107392131A (en) | A kind of action identification method based on skeleton nodal distance | |
CN110163213A (en) | Remote sensing image segmentation method based on disparity map and multiple dimensioned depth network model | |
CN110827312B (en) | Learning method based on cooperative visual attention neural network | |
CN103226708A (en) | Multi-model fusion video hand division method based on Kinect | |
CN109448015A (en) | Image based on notable figure fusion cooperates with dividing method | |
CN112800906A (en) | Improved YOLOv 3-based cross-domain target detection method for automatic driving automobile | |
CN107944459A (en) | A kind of RGB D object identification methods | |
CN105574545B (en) | The semantic cutting method of street environment image various visual angles and device | |
Hua et al. | Depth estimation with convolutional conditional random field network | |
CN110070574A (en) | A kind of binocular vision Stereo Matching Algorithm based on improvement PSMNet | |
CN102147812A (en) | Three-dimensional point cloud model-based landmark building image classifying method | |
CN111310821A (en) | Multi-view feature fusion method, system, computer device and storage medium | |
CN113610139A (en) | Multi-view-angle intensified image clustering method | |
CN109670401A (en) | A kind of action identification method based on skeleton motion figure |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20220708 |