CN104134234B - Fully automatic three-dimensional scene construction method based on a single image - Google Patents

Fully automatic three-dimensional scene construction method based on a single image

Info

Publication number
CN104134234B
Authority
CN
China
Prior art keywords
region
image
sample
classification
training
Prior art date
Legal status
Active
Application number
CN201410340189.9A
Other languages
Chinese (zh)
Other versions
CN104134234A (en)
Inventor
陈雪锦 (Chen Xuejin)
王贵杭 (Wang Guihang)
胡思宇 (Hu Siyu)
Current Assignee
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date
Filing date
Publication date
Application filed by University of Science and Technology of China (USTC)
Priority to CN201410340189.9A
Publication of CN104134234A
Application granted
Publication of CN104134234B


Abstract

The invention discloses a fully automatic method for constructing a three-dimensional scene model from a single image, comprising: training a classifier on a training image set, with a machine-learning method, so that an input image can be given a rough classification labeling; using the classifier to divide the input image into seed regions of three classes, namely the vertical, ground and sky regions, to obtain a rough classification labeling of the image regions; starting from the "trusted" regions of the rough classification labeling, correcting the rough labeling with the GrabCut image segmentation algorithm so that accurate boundaries between the geometric regions of the image are obtained; and, on the basis of the refined classification labeling of the geometric regions and the accurate boundaries between them, building a photorealistic three-dimensional scene with computer-graphics methods. The disclosed method thus realizes fully automatic three-dimensional scene construction from a single image.

Description

Fully automatic three-dimensional scene construction method based on a single image
Technical field
The present invention relates to the field of image-based modeling, and more particularly to a fully automatic method for constructing a three-dimensional scene from a single image.
Background art
Image-based three-dimensional reconstruction constructs photorealistic 3D graphics from two-dimensional images. Image-based modeling is a technique that has emerged in recent years: it reconstructs a scene directly from photographs, using as little interaction as possible. Its greatest advantage is that it overcomes the shortcomings of traditional geometry-based modeling and rendering, making it possible to achieve real-time roaming of scenes as realistic as photographs on a computer with only ordinary computing power. Although traditional 3D modeling tools keep improving, building even a moderately complex 3D model remains a very time-consuming and laborious task. Since many of the models to be built can be found or reproduced in the real world, 3D scanning and image-based modeling have become attractive modeling approaches; and because the former generally captures only the geometry of a scene while the latter provides a natural way to synthesize photorealistic images, image-based modeling has rapidly become a research hotspot in computer graphics.

Image-based model reconstruction is an advanced problem in computer graphics research. It combines theory and methods from computer graphics, image processing, computer vision and other fields, extracting three-dimensional data for model reconstruction from the two-dimensional information contained in an image of a scene, and therefore has promising applications in computer-aided design and reverse engineering. Image-based modeling performs image understanding on top of a two-dimensional image and finally reconstructs the three-dimensional structure; it is one of the main problems of computer vision and is widely used in robot navigation, fault diagnosis, virtual reality, architectural reconstruction and many other fields.

Early research on 3D reconstruction was based on geometric information, such as point clouds. In recent years image-based 3D reconstruction has risen: it reconstructs directly from photographs and avoids the calibration problems of traditional geometry-based reconstruction, which gives it a great advantage. Image-based 3D reconstruction has therefore become an important topic studied by many researchers.

Most current research addresses 3D reconstruction from two or several (sequences of) images. Multi-image reconstruction requires elaborate preprocessing of every image and the search for matching feature points between images; feature-point matching is a difficult problem in image processing, so 3D reconstruction from multiple images suffers from high reconstruction cost, complicated operation, heavy computation and unsuitability for dynamic scenes.

The main idea of single-image 3D reconstruction is to extract two-dimensional information such as the color, shape and coplanarity of the target from a single digital image, together with three-dimensional geometric cues, so as to obtain the spatial three-dimensional information of the target using a small amount of known conditions. Single-image reconstruction avoids the burden of multi-image reconstruction: the process is simple and fast, and only one digital photograph taken with a suitable viewing angle is needed to obtain the three-dimensional geometry of the target. It requires little investment, with no calibration of multiple cameras or projectors, which greatly reduces the cost in manpower and equipment; and technically only one image needs to be preprocessed, with no multi-image matching, which avoids the matching difficulties of multi-image reconstruction, saves a great deal of time and improves efficiency. For these reasons, 3D reconstruction from a single image is receiving more and more attention.

Among current approaches, single-image 3D reconstruction methods fall into interactive scene construction methods and fully automatic scene construction methods. Interactive methods require user interaction for guidance; fully automatic methods usually train a scene-structure classifier on image features by machine learning, use the classifier to label image regions by class, and build the three-dimensional scene on that basis. Interactive modeling is accurate but needs user guidance. Fully automatic 3D scene construction has become a research hotspot in recent years; how to estimate image region classes quickly and accurately and improve the accuracy of automatic reconstruction is the main problem faced by fully automatic single-image 3D reconstruction methods.
Summary of the invention
It is an object of the invention to provide a fully automatic three-dimensional scene construction method based on a single image, which effectively obtains a classification labeling of the geometric regions of an image and accurate boundaries between those geometric regions, and on this basis builds a photorealistic three-dimensional scene.

The object of the invention is achieved by the following technical solution: a fully automatic three-dimensional scene construction method based on a single image, comprising the following steps:
Step 1: obtain, from a training image set, a classifier capable of dividing an image into geometric regions.

The classifier for geometric region division is obtained by machine learning: a training image set is first collected, a group of training samples is then obtained from it, and the classifier is finally trained on those samples. The training samples are obtained from the training image set through sample labeling and sample extraction.

Sample labeling refers to annotating the geometric regions of every image in the training image set: the whole area of each image is divided into several geometric sub-regions, and each sub-region is assigned to one of three classes, namely the vertical region, the ground region and the sky region.

After the samples are labeled, the sample set actually used for training is extracted. To divide image regions into geometric sub-regions as accurately as possible, 30x40 rectangular blocks are used as sample units: each image is divided, with an interval step of 10, into a series of 30x40 sample blocks with a certain overlap, and a 1031-dimensional feature vector is extracted for every sample block. Each training image thus yields a group of training samples (a training sample set), and the training sample sets of all training images form the final training sample set.

From the extracted training sample set, a classifier capable of geometric region division is obtained with a supervised training method, namely a support vector machine (SVM) classifier; the trained classifier model outputs, for a test sample, the probabilities of it belonging to each of the three classes.
Step 2: use the trained classifier to divide the user-input image into geometric regions and obtain a rough classification labeling.

Given an input image, the image area is first divided, with an interval step of 10, into a series of 30x40 sample blocks with a certain overlap, and a 1031-dimensional feature vector is extracted for each sample block. For every sample block, the classifier outputs from its 1031-dimensional features the probabilities of the block belonging to the three classes: p(v|Pi), p(g|Pi) and p(s|Pi), where p(v|Pi) denotes the probability that sample Pi belongs to the vertical region, and p(g|Pi) and p(s|Pi) denote the probabilities that Pi belongs to the ground region and the sky region respectively.

For each 10x10 decision unit Cj of the image, the probabilities of Cj belonging to the three classes are jointly decided by the N sample blocks that contain it, where N denotes the number of sample blocks containing Cj and Pi denotes one of those N blocks; this yields the probabilities of Cj belonging to each of the three classes, where p(v|Cj) denotes the probability that decision unit Cj belongs to the vertical region, and p(g|Cj) and p(s|Cj) denote the probabilities that Cj belongs to the ground region and the sky region respectively.

A decision unit Cj is labeled with a class if and only if its probability p* of belonging to that class satisfies p* > 0.5; otherwise it is labeled as unknown.
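Assuming the joint decision is the per-class mean of the N sample-block probabilities (an interpretation of the description above; a weighted average is also possible), the decision rule can be written as

$$ p(c \mid C_j) = \frac{1}{N}\sum_{i=1}^{N} p(c \mid P_i), \qquad c \in \{v, g, s\}, $$

and Cj is labeled with the class of the largest p(c|Cj) when that maximum exceeds 0.5, and labeled as unknown otherwise.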
Step 3: correct the rough classification labeling obtained in Step 2 with the GrabCut-based image segmentation algorithm, and optimize the boundaries between geometric regions so that accurate boundaries between the geometric regions of the image are obtained.

When the GrabCut-based image segmentation is applied, the "trusted" regions of the rough classification result are used as the initial input of GrabCut, so that the rough labeling is optimized fully automatically. A "trusted" region is a set of pixels that belong to a class with high likelihood, namely the pixels whose probability of belonging to the class exceeds 0.5 and which, among all pixels assigned to that class, fall in the top 90% by probability. A corresponding "trusted" region is computed for each class, giving the "trusted" region of a class P* within the image regions. The output of the GrabCut-based segmentation is then used to correct the rough classification result, so as to obtain accurate boundaries between the geometric regions of the image.
Step 4: for the labeling output by Step 3, build the three-dimensional scene with computer-graphics methods and provide the user with photorealistic three-dimensional scene roaming.

According to the accurate boundary information between the geometric regions of the image, the image area is cut into different geometric regions. With the camera parameters set, relative depth information is introduced through the reference ground, so that the three-dimensional coordinates of the important vertices of the geometric regions in the image scene can be recovered. Finally, each geometric sub-region is approximated by a plane and placed in the three-dimensional scene according to the geometric relations between regions, generating photorealistic three-dimensional scene roaming.
In Step 1, the 1031-dimensional sample features consist of a 1000-dimensional Bag of Visual Words feature, a 30-dimensional color feature and a 1-dimensional position feature.

In Step 1, the kernel function of the SVM classifier is a radial basis function (RBF), the model type is a multi-class classifier, and the probability-estimation parameter b is set to 1, so that the trained classifier outputs, for a test sample, the probabilities of it belonging to each of the three classes.
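A minimal sketch of this training stage, assuming scikit-learn as the SVM implementation (the patent does not name a library; the "probability-estimation parameter b set to 1" appears to correspond to libSVM's "-b 1" probability-estimate option) and assuming the 1031-dimensional block features have already been extracted:

from sklearn.svm import SVC

# X: (num_blocks, 1031) feature matrix over all training sample blocks
# y: (num_blocks,) class labels, e.g. 0 = vertical, 1 = ground, 2 = sky
def train_region_classifier(X, y):
    # RBF kernel, multi-class, probability estimates enabled
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, y)
    return clf

# For a 30x40 sample block with feature vector f (length 1031),
# clf.predict_proba([f]) returns the three class probabilities
# p(v|P), p(g|P), p(s|P) in the order of clf.classes_.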
Step 3 is implemented as follows:

(1) The "trusted" region of a class P* in the image regions is computed as follows:

All pixels assigned to class P* in the rough labeling are sorted in descending order of their probability of belonging to that class, and the pixels with the lowest probabilities are removed, their percentage being k%;

A binary template image M* corresponding to P* is produced; M* has the same size as the original image, and a pixel of M* has value 1 if the corresponding pixel belongs to the set P*, and value 0 otherwise;

The connected regions of the template image M* are detected, and any 0-valued region inside a connected region whose area is smaller than A is filled with the value 1;

The template image M* is eroded with a structuring element of size β. The pixels eroded away are regarded as pixels that "possibly" belong to the class and form its "possible" pixel set; the pixels whose value in the eroded template image M* is still 1 are regarded as the "trusted" pixels of the class and form its "trusted" pixel set.

The "trusted" pixel sets and the "possible" pixel sets of the three classes (ground, vertical and sky regions) are obtained with the above method of computing "trusted" regions. The computation parameters are: for the vertical region, k, A and β are 10, 5000 and 20 respectively; for the ground region and the sky region, k, A and β are 0, 5000 and 10 respectively.
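A sketch of this trusted-region computation, assuming NumPy/OpenCV for the morphology (the patent does not name an implementation, and the hole filling below approximates the area test on interior 0-valued regions):

import cv2
import numpy as np

def trusted_and_possible(prob_map, label_map, cls, k=10, A=5000, beta=20):
    """prob_map: per-pixel probability of class cls; label_map: rough class labels."""
    mask = (label_map == cls)
    probs = prob_map[mask]
    if probs.size and k > 0:
        # drop the k% of class pixels with the lowest probability
        mask &= (prob_map >= np.percentile(probs, k))
    M = mask.astype(np.uint8)
    # fill 0-valued holes of area < A with the value 1
    n, lab, stats, _ = cv2.connectedComponentsWithStats((1 - M).astype(np.uint8), connectivity=4)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < A:
            M[lab == i] = 1
    # erode with a beta x beta structuring element
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (beta, beta))
    eroded = cv2.erode(M, kernel)
    trusted = eroded.astype(bool)       # still 1 after erosion -> "trusted"
    possible = (M == 1) & ~trusted      # eroded away -> "possible"
    return trusted, possible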
(2) The rough labeling is optimized fully automatically with the GrabCut algorithm as follows:

Each class is first segmented separately according to the "trusted" and "possible" pixel sets described above. For the separate segmentation of one of the three classes, the region of that class is treated as foreground and the regions of the other two classes as background. Specifically, the "trusted" pixels of the class are taken as foreground pixels, the "trusted" pixels of the other two classes as background pixels, the "possible" pixels of the class as probable foreground, and all remaining pixels as probable background. The GrabCut segmentation is initialized with this information, Gaussian mixture models of the foreground and background are built, and after segmentation the separate foreground segmentation of the class region is obtained;

The labeling is then further optimized from the separate segmentation results: on the basis of the three separate segmentation results, the foreground/background separation is performed once more with the vertical region as foreground, following the method for segmenting a single region separately, which yields the final image segmentation result;

From the separate segmentations of sky and ground, the horizon position can be roughly estimated, and the horizon is used to divide the background area of the final image segmentation into sky and ground regions: the background area above the horizon is labeled as the sky region, and the background area below the horizon is labeled as the ground region.
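A small sketch of this horizon-based relabeling, assuming the horizon has been estimated as an image row index from the separate sky and ground segmentations (how that row is estimated is not detailed above) and using an illustrative label encoding:

import numpy as np

SKY, GROUND, VERTICAL, BACKGROUND = 0, 1, 2, 255   # illustrative label values

def split_background_by_horizon(labels, horizon_row):
    """Relabel remaining background pixels: sky above the horizon, ground below."""
    out = labels.copy()
    rows = np.arange(labels.shape[0])[:, None]
    bg = (out == BACKGROUND)
    out[bg & (rows < horizon_row)] = SKY
    out[bg & (rows >= horizon_row)] = GROUND
    return out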
On the basis of the geometric region labeling of the image, the three-dimensional scene is modeled with computer-graphics methods, providing the user with photorealistic three-dimensional scene roaming, as follows:

According to the geometric region labeling of the image, the accurate boundaries between geometric regions are obtained, and the Douglas-Peucker algorithm is used to approximate the boundary between the ground region and the vertical region, giving a fitted boundary polyline;

The modeling of the three-dimensional scene with computer-graphics methods comprises the following steps:

(1) The scene is modeled with a pinhole camera model; the optical axis passes through the image center, the world coordinate system coincides with the camera coordinate system, and the camera field of view is set to 1.431 rad;

(2) A reference ground plane is used to obtain the three-dimensional coordinates of the important vertices of the scene: the reference ground plane is introduced and its height is set to -5. A projection matrix is obtained from the above modeling information; with the ground plane determined, the three-dimensional coordinates in the scene corresponding to every pixel of the ground region of the image are computed by back-projection, and in particular the three-dimensional coordinates of the boundary points between the ground region and the vertical region are obtained;

(3) From the three-dimensional coordinates of the boundary points between the ground region and the vertical region and from the fitted polyline of that boundary, a series of vertical planes is obtained: each segment of the fitted polyline is regarded as the intersection of a vertical plane with the ground, and the upper edge of each vertical plane is determined by the boundary between the vertical region and the sky region in the image labeling;

(4) Texture mapping is applied to the vertical planes and the ground region to obtain a photorealistic three-dimensional scene model. Photorealistic scene roaming includes changing the camera viewing angle, zooming, and changing the observation position from which the scene model is viewed.
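A sketch of the ground back-projection under the stated assumptions (pinhole camera at the origin, optical axis through the image center, field of view 1.431 rad, ground plane at height -5); the axis conventions below are illustrative assumptions rather than taken from the patent:

import numpy as np

def backproject_ground_pixel(u, v, width, height, fov=1.431, ground_y=-5.0):
    """Intersect the viewing ray of pixel (u, v) with the ground plane y = ground_y.

    Camera at the origin looking along +z with y up; the focal length in pixels
    is derived from the horizontal field of view.
    """
    f = (width / 2.0) / np.tan(fov / 2.0)
    d = np.array([u - width / 2.0, -(v - height / 2.0), f])  # ray direction
    if d[1] >= 0:
        return None      # the ray does not hit the ground plane below the camera
    t = ground_y / d[1]
    return t * d         # 3D point on the ground plane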
As can be seen from the technical solution provided by the invention, a support vector machine (SVM) that roughly divides an input image into different geometric sub-regions is obtained by machine-learning-based training, each sub-region belonging to one of the three classes (vertical region, ground region and sky region). Because the boundaries between geometric regions in the rough classification labeling are often misclassified or blurred, an image segmentation algorithm is proposed to correct the rough labeling and obtain accurate boundaries between geometric regions. With the accurate boundary information, distortions caused by inaccurate boundaries can be avoided when the three-dimensional scene is built, so that a photorealistic three-dimensional scene model is generated.
Compared with the prior art, the invention has the following advantages:

(1) The invention combines the advantages of machine learning and image segmentation: the rough classification labeling obtained by machine learning is corrected with an image segmentation method, yielding more accurate boundaries between geometric regions, so that scene-model distortions caused by inaccurate boundaries can be better avoided when the three-dimensional scene is built.

(2) While the image classification labeling accuracy obtained by the invention is comparable to the prior art, the proposed technical solution is simpler. Existing techniques rely on richer image features and complicated classification models to reach higher labeling accuracy, whereas the invention uses only a small number of effective image features and a single classifier model. For comparable labeling accuracy, and compared with prior art that requires more image features and complicated classification models, the proposed solution is simpler, has lower complexity and is easy to implement.
Brief description of the drawings
Fig. 1 is a flow chart of the system of the technical solution of the invention;

Fig. 2 shows part of the training image set used in Embodiment 1 of the invention;

Fig. 3 is a flow chart of the algorithm that corrects the rough classification labeling with the image segmentation algorithm in the technical solution of the invention;

Fig. 4 compares the segmentation results of the GrabCut image segmentation algorithm under different initialization methods;

Fig. 5 shows the accurate boundary between the ground region and the vertical region of the input image of Embodiment 1, obtained by the "four-step" GrabCut algorithm based on the rough classification labeling proposed by the technical solution of the invention;

Fig. 6 shows the three-dimensional scene model of the input image of Embodiment 1 observed from different viewpoints;

Fig. 7 is the confusion matrix obtained by performing classification labeling on database 1 according to the technical solution of the invention;

Fig. 8 compares the classification labeling accuracy obtained on database 2 with 6-fold cross validation according to the technical solution of the invention with prior-art classification labeling results on that database;

Fig. 9 compares, on database 1 and database 2, the classification labeling accuracy of the rough labeling obtained with the support vector machine classifier according to the technical solution of the invention with the accuracy after the labeling is corrected by the image segmentation algorithm.
Detailed description of the embodiments
The technical solution of the invention is described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments of the invention without creative work fall within the protection scope of the invention.

The scene images described in the embodiments of the invention are outdoor scene images, because the content of an outdoor scene image can be composed of three types of geometric regions: the vertical region, the ground region and the sky region. In general, the content of an outdoor scene image is a combination of these three types of geometric regions; in the common outdoor scene images shown in Fig. 2, for example, the ground region may be grass, road and the like, the vertical region may be buildings, trees and the like, and the sky region is the sky. Because the invention not only labels the image content accurately but also builds a three-dimensional scene from the labeling, and because the scene building assumes the existence of a reference ground, the outdoor scene images suitable for three-dimensional scene building with the technical solution of the invention must contain at least a ground region. If the technical solution is used only for classification labeling of image content, its scope of application is not restricted by the assumption that the image contains a ground region.

Because the content of an outdoor scene image can be composed of the three types of geometric regions (vertical, ground and sky), the image features of different geometric regions have some discriminative characteristics, such as color: the sky is commonly blue, while grass is usually green. Based on these observations, the invention first trains, on an image data set, a classifier that divides image content into the three kinds of geometric regions; that is, the trained classifier can divide an input image into different geometric sub-regions according to local image features. The image features used in the embodiments of the invention include dense SIFT (Dense Scale Invariant Feature Transform) features, Bag of Visual Words features, color features (LUV or RGB) and a position feature (the normalized height value h). The classifier is a support vector machine (SVM); the kernel function used in training is an RBF, the model type is a multi-class classifier, the probability-estimation parameter b is set to 1 so that the trained model outputs the probabilities of a test sample belonging to the three classes, and the remaining settings use the default parameters.

The image features used by the classifier of image geometric regions trained above are local features. Although effective classification results can be obtained when the classifier identifies and classifies the geometric regions of image content, the lack of global constraints causes some semantic misclassifications and boundaries between regions that are not accurate enough. The GrabCut image segmentation algorithm is therefore used to further introduce constraints between image regions, so as to optimize and correct the rough classification labeling output by the classifier and obtain more accurate boundaries between the geometric regions of the image. With the accurate boundary information, distortions caused by unclear boundaries can be avoided when the three-dimensional scene is modeled, so that a photorealistic three-dimensional scene model is generated.
Embodiment one
Fig. 1 is the system flow chart of the fully automatic single-image-based three-dimensional scene modeling method provided by Embodiment 1 of the invention. The main steps of Embodiment 1 are as follows:

Step 1: obtain, from a training image set, a classifier capable of dividing an image into geometric regions.

Because in the embodiment of the invention the classifier for geometric region division is obtained by machine learning, a training image set must first be collected, a group of training samples is then obtained from it, and the classifier is finally trained on those samples.

The training image set can be collected through an Internet search. Because the content of outdoor scene images varies enormously, the collected training images should be representative and should cover as many possible outdoor scenes as possible. Fig. 2 shows part of the training image set used in Embodiment 1; these images are common outdoor scene images, each containing at least one of the three classes (ground, vertical and sky). Among the three classes, the ground region may be grass, road and the like, the vertical region may be buildings, trees and the like, and the sky region is the sky. Of course, if the application targets only a given type of outdoor scene, the training image set can be more specific; for example, for building only from outdoor street-view images, street-view images of different categories can be collected as the training image set.

Training samples are obtained from the training image set through sample labeling and sample extraction.

Sample labeling refers to annotating the geometric regions of every image in the training image set: the whole area of each image is divided into several geometric sub-regions, and each sub-region is assigned to one of the three classes, namely the vertical region, the ground region and the sky region. Because the invention trains the classifier in a supervised manner, the sample labeling is done manually.

After the samples are labeled, the sample set actually used for training must be extracted. The purpose of the invention is to divide image regions into geometric sub-regions as accurately as possible, so in the embodiment of the invention 10x10 rectangular blocks are used as decision units and 30x40 rectangular blocks as sample units. In the embodiment each image is divided, with an interval step of 10, into a series of 30x40 sample blocks with a certain overlap; for an 800x600 image, for example, 58x77 = 4466 sample blocks are obtained. For every sample block, a 1031-dimensional feature vector is extracted, consisting of a 1000-dimensional Bag of Visual Words feature, a 30-dimensional color feature and 1 dimension of position information.

To extract the 1000-dimensional Bag of Visual Words feature, dense SIFT features are first extracted from every training image to form a SIFT feature set, and the feature set is clustered with a clustering algorithm to obtain 1000 cluster centers of the SIFT features. In the embodiment of the invention the interval step used for dense SIFT extraction is 4, and the clustering algorithm is k-means. For each 30x40 sample block of a training image, the SIFT word-frequency histogram of the block is computed according to the cluster centers of the SIFT features, forming the 1000-dimensional Bag of Visual Words feature. The color feature used in the embodiment is a 30-dimensional histogram feature: in LUV space, a 10-dimensional histogram is computed for each channel. The position information used in the embodiment is the 1-dimensional relative height, i.e. the relative height of each sample block within the image. The 1031-dimensional feature extracted for each sample block serves as its feature description. One group of training samples is obtained for each training image, and the sample sets of all training images form the final training sample set. In the embodiment only pure training samples are used to train the classifier, i.e. the training samples whose rectangular areas belong entirely to a single class constitute the final training sample set (which contains training samples of all three classes).
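A sketch of this per-block feature extraction, assuming OpenCV and scikit-learn as implementations (not specified in the patent); codebook is assumed to be a k-means model with 1000 centers fitted on the dense SIFT descriptors of the training images:

import cv2
import numpy as np
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()
# codebook = KMeans(n_clusters=1000).fit(all_dense_sift_descriptors)

def dense_sift(gray, step=4):
    """SIFT descriptors on a dense grid with the given interval step."""
    kps = [cv2.KeyPoint(float(x), float(y), float(step))
           for y in range(step, gray.shape[0] - step, step)
           for x in range(step, gray.shape[1] - step, step)]
    _, desc = sift.compute(gray, kps)
    return desc                                   # (num_keypoints, 128)

def block_features(block_bgr, block_desc, codebook, rel_height):
    """1031-d feature: 1000-d BoVW + 30-d LUV histogram + 1-d relative height.

    block_desc: the dense SIFT descriptors whose grid points fall inside this block.
    """
    words = codebook.predict(block_desc)          # visual-word index of each descriptor
    bovw = np.bincount(words, minlength=1000).astype(float)
    luv = cv2.cvtColor(block_bgr, cv2.COLOR_BGR2Luv)
    color = np.concatenate([np.histogram(luv[:, :, c], bins=10)[0]
                            for c in range(3)]).astype(float)   # 10 bins per channel
    return np.concatenate([bovw, color, [rel_height]])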
From the extracted training sample set, the present embodiment obtains a classifier capable of geometric region division with a supervised training method. Specifically, the classifier is a support vector machine (SVM), the kernel function is an RBF, the model type is a multi-class classifier, and the probability-estimation parameter b is set to 1, so that the trained model can output, for a test sample, the probabilities of it belonging to each of the three classes.
Step 2: for the user-input image, use the trained classifier to divide it into geometric regions and obtain a rough classification labeling.

The purpose of Step 2 is to roughly label the regions of the input image by class with the trained classifier. Given an input image, the image area is first divided, with an interval step of 10, into a series of 30x40 sample blocks with a certain overlap, and a 1031-dimensional feature vector is extracted for each sample block. For every sample block, the classifier outputs from its 1031-dimensional features the probabilities of the block belonging to the three classes: p(v|Pi), p(g|Pi) and p(s|Pi), where p(v|Pi) denotes the probability that sample Pi belongs to the vertical region, and p(g|Pi) and p(s|Pi) denote the probabilities that Pi belongs to the ground region and the sky region respectively.

The purpose of the invention is to divide image regions into sub-regions as accurately as possible, so in the embodiment of the invention 10x10 rectangular blocks are used as decision units (which do not overlap one another), and each decision unit is contained in several sample blocks of the image. With 30x40 sample blocks and a sampling interval step of 10, each decision unit in the interior of the image is contained in 12 sample blocks. The probabilities of each decision unit Cj belonging to the three classes are therefore jointly decided by the N sample blocks that contain it, where N denotes the number of sample blocks containing Cj and Pi denotes one of those N blocks; this yields the probabilities of Cj belonging to each of the three classes, where p(v|Cj) denotes the probability that decision unit Cj belongs to the vertical region, and p(g|Cj) and p(s|Cj) denote the probabilities that Cj belongs to the ground region and the sky region respectively.

In the embodiment of the invention, a decision unit Cj is labeled with a class if and only if its probability p* of belonging to that class satisfies p* > 0.5; otherwise it is labeled as unknown. The classification labeling output by the classifier (an SVM in the embodiment) for the input image is comparatively rough: misclassifications occur mainly at the boundaries between geometric regions, and there are also some semantic misclassifications inside regions. In order to correct the rough labeling and obtain accurate boundaries between geometric regions, which benefits the modeling of a photorealistic three-dimensional scene, the invention proposes a correction method based on image segmentation.
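A sketch of this rough-labeling stage, assuming the per-class mean interpretation of the joint decision, a classifier whose predict_proba output is ordered as (vertical, ground, sky), and a hypothetical helper extract_block_feature that crops a 30x40 block at a given position and returns its 1031-dimensional feature:

import numpy as np

UNKNOWN = -1

def rough_labeling(image, clf, extract_block_feature, step=10, bh=30, bw=40, cell=10):
    """Accumulate the probabilities of overlapping 30x40 blocks into 10x10 decision units."""
    H, W = image.shape[:2]
    prob_sum = np.zeros((H // cell, W // cell, 3))
    count = np.zeros((H // cell, W // cell))
    for y in range(0, H - bh + 1, step):
        for x in range(0, W - bw + 1, step):
            feat = extract_block_feature(image, y, x, bh, bw)   # hypothetical helper
            p = clf.predict_proba([feat])[0]                    # p(v|P), p(g|P), p(s|P)
            prob_sum[y // cell:(y + bh) // cell, x // cell:(x + bw) // cell] += p
            count[y // cell:(y + bh) // cell, x // cell:(x + bw) // cell] += 1
    prob = prob_sum / np.maximum(count, 1)[..., None]           # mean over the N covering blocks
    labels = np.where(prob.max(axis=2) > 0.5, prob.argmax(axis=2), UNKNOWN)
    return prob, labels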
Step 3: correct the rough classification labeling obtained in Step 2 with an image segmentation algorithm, correcting the classification result and optimizing the boundaries between the geometric regions of the image.

For the misclassifications and the imprecise region boundaries of the rough labeling, the invention proposes a correction method based on the GrabCut image segmentation algorithm. GrabCut is an effective interactive image segmentation algorithm for separating foreground from background: the Gaussian mixture models of foreground and background are initialized from initial markup information given by the user. Fig. 4 shows results of the GrabCut image segmentation algorithm, with the vertical region as foreground, under different initialization modes. Fig. 4(a) is the segmentation of the vertical region obtained with only a rectangle as the constraint on the segmentation range; Fig. 4(b) and (c) are segmentations in which, on top of the rectangle constraint, the user has interactively marked foreground strokes (on the vertical region) and background strokes (on the sky and ground regions) as the input of the GrabCut segmentation, the difference being that (c) uses more interaction and marks more foreground and background information. Fig. 4(d) is the segmentation of the vertical region obtained by the fully automatic GrabCut image segmentation proposed by the invention, which is based on the rough classification labeling of the image. Because the GrabCut image segmentation algorithm requires user interaction, while the aim of the invention is a fully automatic single-image three-dimensional scene construction system, the GrabCut algorithm cannot be used directly. Note that although the rough classification result produced in Step 2 contains some misclassifications, most regions of the image are still labeled correctly. The invention therefore proposes to use the "trusted" regions of the rough classification labeling as the initial input of GrabCut. A "trusted" region is defined here as a set of pixels that belong to a class with high likelihood, i.e. the pixels whose probability of belonging to the class exceeds 0.5 and which fall in the top 90% by probability among all pixels assigned to that class. In the embodiment of the invention a corresponding "trusted" region is computed for each class. Taking one of the three class regions as an example, the "trusted" region is computed as follows:

All pixels assigned to the class region in the rough labeling form a set, and the pixels in this set are arranged in descending order of their probability of belonging to the class region. After the descending sort, the last k% of pixels of the set are removed, giving a new set P*; the k% of pixels with the lowest probabilities are regarded as "unreliable" pixels and discarded.

A binary template image M* corresponding to P* is produced. M* has the same size as the original image, and a pixel of M* has value 1 if the corresponding pixel belongs to the set P*, and value 0 otherwise.

The connected regions of the template image M* are detected, and any 0-valued region inside a connected region whose area is smaller than A is filled with the value 1.

The template image M* is eroded with a structuring element of size β. The pixels eroded away are regarded as pixels that "possibly" belong to the class region and form its "possible" pixel set; the pixels whose value in the eroded template image M* is still 1 are regarded as the "trusted" pixels of the class region and form its "trusted" pixel set.

Through the above four steps, the "trusted" pixel set and the "possible" pixel set of a class region are obtained. With this method, the "trusted" pixel sets and the "possible" pixel sets of the ground region, the vertical region and the sky region are obtained. In the embodiment of the invention, for the vertical region (k, A, β) is (10, 5000, 20), and for the ground region and the sky region (k, A, β) is (0, 5000, 10).
Because GrabCut is an interactive binary segmentation algorithm for foreground/background separation, while the embodiment of the invention involves regions of three classes (vertical, ground and sky), the technical solution of the invention proposes a "four-step" GrabCut algorithm based on the rough classification labeling, which optimizes the rough labeling fully automatically.

Once the "trusted" and "possible" pixel sets of the three classes are obtained, each class is first segmented separately in the embodiment of the invention. For the separate segmentation of one of the three classes, the region of that class is treated as foreground and the regions of the other two classes as background. Specifically, the "trusted" pixels of the class are taken as foreground pixels, the "trusted" pixels of the other two classes as background pixels, the "possible" pixels of the class as probable foreground, and all remaining pixels as probable background. The GrabCut segmentation is initialized with this information, the Gaussian mixture models of the foreground and background are built, and after segmentation the separate foreground segmentation of the class region is obtained.

The separate segmentations correct many of the misclassifications in the rough labeling and make the boundaries between regions more accurate, but some misclassifications remain between the vertical region and the ground region and between the vertical region and the sky region. To further optimize the labeling, the fourth step of the "four-step" GrabCut algorithm proposed by the technical solution of the invention takes the vertical region as the foreground region and treats the ground and sky regions as the background region: on the basis of the three separate segmentation results, the foreground/background segmentation is performed once more with the vertical region as foreground, following the method used for segmenting the vertical region separately. Fig. 3 describes the algorithm flow of correcting the rough classification labeling of the image regions with the GrabCut image segmentation algorithm. Compared with the rough classification labeling, the labeling after the GrabCut image segmentation corrects many misclassifications, and the boundaries between geometric regions are more accurate. From the separate segmentations of sky and ground, the horizon position can be roughly estimated, so that a geometric correction can be performed, i.e. the background area of the final image segmentation is divided by the horizon into sky and ground regions: the background area above the horizon is labeled as the sky region, and the background area below the horizon is labeled as the ground region.
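A sketch of one separate per-class segmentation initialized from the trusted and possible pixel sets, assuming OpenCV's mask-based GrabCut interface; the "four-step" scheme described above runs this once for each of the three classes and then once more with the vertical region as foreground:

import cv2
import numpy as np

def grabcut_one_class(img_bgr, trusted_fg, trusted_bg, possible_fg, iters=5):
    """Segment one class as foreground; the three masks are boolean pixel sets."""
    mask = np.full(img_bgr.shape[:2], cv2.GC_PR_BGD, np.uint8)  # default: probable background
    mask[possible_fg] = cv2.GC_PR_FGD   # "possible" pixels of this class
    mask[trusted_fg] = cv2.GC_FGD       # "trusted" pixels of this class
    mask[trusted_bg] = cv2.GC_BGD       # "trusted" pixels of the other two classes
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, None, bgd_model, fgd_model, iters, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))           # final foreground of this class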
Fig. 4 shows the segmentation results of the GrabCut segmentation algorithm under different initialization conditions. For a relatively complicated background, the GrabCut algorithm requires much user interaction to obtain a good segmentation result, whereas the "four-step" GrabCut segmentation based on the rough classification labeling proposed by the technical solution of the invention obtains a good segmentation result fully automatically.

Step 4: for the labeling output by Step 3, build the three-dimensional scene with computer-graphics methods and provide the user with photorealistic three-dimensional scene roaming.

The classification labeling of the geometric regions of the image obtained in Step 3 provides accurate boundaries between geometric regions. As shown in Fig. 5, the curve ABCDEF (white line) is the boundary between the ground region and the vertical region, and it separates the vertical objects from the ground well. Although the information obtained by the above steps alone (the geometric region labels, the horizon position and the region boundaries) cannot recover the three-dimensional scene model exactly, the scene can still be modeled from the available information under reasonable assumptions, providing the user with photorealistic three-dimensional scene roaming.

In the embodiment of the invention a pinhole camera model is used: the optical axis passes through the image center, the world coordinate system is assumed to coincide with the camera coordinate system, and the camera field of view is set to 1.431 rad. Because the height of the reference ground in the model affects the scale of the scene model, in the embodiment of the invention the height of the ground plane is set to -5. The projection matrix can be obtained from the above conditions; with the ground height determined, the three-dimensional coordinates in the scene corresponding to every pixel of the ground region of the image can be computed by back-projection. Since Step 3 provides the exact boundary between the ground and the vertical region, the three-dimensional coordinates corresponding to these boundary points can be computed by back-projection. To obtain three-dimensional coordinates in the vertical region, the embodiment of the invention first uses the Douglas-Peucker algorithm to approximate the boundary between the ground region and the vertical region, giving a fitted boundary polyline. Each segment of the fitted polyline can be regarded as the intersection of a vertical plane with the ground; each segment corresponds to one vertical plane, and the upper edge of each vertical plane is determined by the boundary between the vertical region and the sky region in the labeling. The geometric model of the scene is thus obtained, and a photorealistic three-dimensional scene is obtained by texture mapping. The user can roam the scene by changing the camera viewing angle, changing the observation position, zooming and similar operations. Fig. 6 shows the three-dimensional scene model of the input image of Embodiment 1 observed from different viewpoints; Fig. 6(a), (b) and (c) show the scene model observed from viewpoints 1, 2 and 3 respectively.
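A sketch of turning the ground/vertical boundary into vertical planes under the stated camera assumptions (field of view 1.431 rad, ground plane at height -5), assuming OpenCV's approxPolyDP as the Douglas-Peucker implementation; top_row_at is a hypothetical helper returning the row of the vertical/sky boundary at a given column, and the axis conventions and tolerance eps are illustrative:

import cv2
import numpy as np

def vertical_planes(boundary_pts, top_row_at, W, H, fov=1.431, ground_y=-5.0, eps=3.0):
    """Fit the ground/vertical boundary with Douglas-Peucker and lift each segment
    into a vertical quad whose top follows the vertical/sky boundary.

    boundary_pts: (N, 2) ordered pixel coordinates of the ground/vertical boundary.
    """
    f = (W / 2.0) / np.tan(fov / 2.0)
    poly = cv2.approxPolyDP(boundary_pts.reshape(-1, 1, 2).astype(np.float32),
                            eps, False).reshape(-1, 2)          # Douglas-Peucker polyline
    def foot(u, v):
        # back-project a boundary pixel onto the ground plane y = ground_y
        d = np.array([u - W / 2.0, -(v - H / 2.0), f])
        return (ground_y / d[1]) * d
    quads = []
    for (u0, v0), (u1, v1) in zip(poly[:-1], poly[1:]):
        p0, p1 = foot(u0, v0), foot(u1, v1)
        # top height from the vertical/sky boundary row at the same depth
        y0 = -(top_row_at(u0) - H / 2.0) * p0[2] / f
        y1 = -(top_row_at(u1) - H / 2.0) * p1[2] / f
        quads.append([p0, p1, np.array([p1[0], y1, p1[2]]), np.array([p0[0], y0, p0[2]])])
    return quads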
The technical solution proposed by the invention was evaluated on two databases commonly used to test classification labeling accuracy: the Popup database (Derek Hoiem, Alexei A. Efros, and Martial Hebert, "Automatic photo pop-up," in ACM Transactions on Graphics (TOG). ACM, 2005, vol. 24, pp. 577-584.), referred to as "database 1", and the Geometric Context database (Derek Hoiem, Alexei A. Efros, and Martial Hebert, "Geometric context from a single image," in International Conference on Computer Vision (ICCV). 2005, vol. 1, pp. 654-661.), referred to as "database 2". Database 1 contains 144 images, of which 82 are training images and 62 are test images. Database 2 contains 300 images divided into 6 parts of 50 images each; the standard test protocol of database 2 uses 6-fold cross validation, taking 1 part in turn as the training image set and the other 5 parts as the test image set. Fig. 7 is the confusion matrix obtained on database 1 by training the rough-labeling classifier on the 82 training images according to the technical solution of the invention and labeling the 62 test images. The classification labeling accuracy of the invention corresponding to this confusion matrix is 92%, i.e. 92% of the image pixels of the test set are labeled correctly, while the baseline labeling accuracy on this database is 87%. Fig. 8 compares the classification labeling accuracy obtained on database 2 with 6-fold cross validation according to the technical solution of the invention with prior-art classification labeling results on that database. The data show that on the standard test database 2 the baseline labeling accuracy is 86.0%, the best published labeling result is 88.9%, and the labeling accuracy obtained by the classification method of the invention is 88.7%. The results show that the classification method of the invention achieves a labeling accuracy comparable to the prior art. It should be noted that the invention uses only a small number of effective image features and only a single classifier model; thus, for comparable labeling accuracy, and compared with prior art that requires more image features and complicated classification models, the technical solution proposed by the invention is simpler, has lower complexity and is easy to implement. Fig. 9 compares, on database 1 and database 2, the labeling accuracy of the rough labeling obtained with the SVM classifier according to the technical solution of the invention with the accuracy after correction by the image segmentation algorithm. The data show that on database 1 and database 2 the accuracy after correction is 4.6% and 3.5% higher, respectively, than that of the rough labeling. The results show that the method proposed by the invention of correcting the rough labeling with an image segmentation algorithm effectively improves the accuracy of the classification labeling.
From the description of the above embodiments, a person skilled in the art can clearly understand that the above embodiments can be implemented by software, or by software plus a necessary general hardware platform. On this understanding, the technical solution of the above embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (a CD-ROM, a USB flash drive, a removable hard disk, etc.) and includes instructions that cause a computer device (a personal computer, a server, a network device, etc.) to execute the method described in the embodiments of the invention.

The foregoing is only a preferred embodiment of the invention, but the protection scope of the invention is not limited thereto. Any change or replacement that can be easily conceived by a person skilled in the art within the technical scope disclosed by the invention shall fall within the protection scope of the invention. The protection scope of the invention shall therefore be defined by the protection scope of the claims.

Claims (3)

1. A fully automatic three-dimensional scene construction method based on a single image, characterized by comprising the following steps:

Step 1: obtaining, from a training image set, a classifier capable of dividing an image into geometric regions;

the classifier for geometric region division being obtained by machine learning: first collecting a training image set, then obtaining a group of training samples from the training image set, and finally training the classifier with the training samples; the training samples being obtained from the training image set through sample labeling and sample extraction;

the sample labeling referring to annotating the geometric regions of every image in the training image set, i.e. dividing the whole area of each image into several geometric sub-regions, each geometric sub-region being assigned to one of three classes, the three classes being a vertical region, a ground region and a sky region;

after the samples are labeled, extracting the sample set actually used for training; in order to divide image regions into geometric sub-regions as accurately as possible, using 30x40 rectangular blocks as sample units and dividing every image, with an interval step of 10, into a series of 30x40 sample blocks with a certain overlap; extracting, for each sample block, a 1031-dimensional sample feature; obtaining one group of training samples, i.e. one training sample set, for each training image, the training sample sets of all training images forming the final training sample set;

after the training sample set is extracted, obtaining, with a supervised training method, a classifier capable of geometric region division, i.e. using a support vector machine (SVM) classifier, the trained model outputting, for a test sample, the probabilities of it belonging to each of the three classes;
Step 2:The image that the grader obtained using training is inputted to user carries out the division of geometric areas, obtains rude classification The result of mark;
Piece image is inputted, is first that image-region is divided into a series of 30* with certain overlapping region by interval steps with 10 40 sample rectangular block, the sample characteristics of 1031 dimensions are extracted for each sample rectangular block;For each sample rectangle Block, grader exports the sample and is belonging respectively to the other probability of three species according to the sample characteristics of its 1031 dimension:p(v|Pi)、p(g| Pi) and p (s | Pi), wherein p (v | Pi) represent sample PiBelong to erect region probability, p (g | Pi) and p (s | Pi) sample is represented respectively This PiBelong to ground region and the probability of sky areas;
For each decision package CjIt belongs to the other probability of three species by N number of sample rectangular block comprising the decision package Classification is together decided on, each decision package CjIt belongs to the other probability calculation of three species:
Wherein N represents to include decision package CjSample rectangular block number, PiSome in N number of rectangular block is represented, so as to obtain Decision package CjIt is belonging respectively to the other probability size of three species;p(v|Cj) represent decision package CjBelong to the probability for erectting region, p (g|Cj) and p (s | Cj) decision package C is represented respectivelyjBelong to ground region and the probability of sky areas;
A decision unit Cj is labeled with a category only if its probability p* of belonging to that category satisfies p* > 0.5; otherwise it is labeled as unknown;
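A sketch of the decision-unit labeling rule, assuming the per-block probabilities and the mapping from blocks to the decision units they cover are already available; the array names and grid layout are illustrative assumptions.

```python
import numpy as np

def label_decision_units(block_probs, block_to_units):
    """Average block probabilities over each decision unit and apply the 0.5 rule.

    block_probs:    (n_blocks, 3) array with columns [p(v|Pi), p(g|Pi), p(s|Pi)].
    block_to_units: list where entry i holds the decision-unit indices covered by block Pi.
    Returns per-unit labels: 0 = vertical, 1 = ground, 2 = sky, -1 = unknown.
    """
    n_units = 1 + max(u for units in block_to_units for u in units)
    sums = np.zeros((n_units, 3))
    counts = np.zeros(n_units)
    for i, units in enumerate(block_to_units):
        for u in units:
            sums[u] += block_probs[i]
            counts[u] += 1
    unit_probs = sums / counts[:, None]          # p(*|Cj) = (1/N) * sum over containing blocks
    labels = np.argmax(unit_probs, axis=1)
    labels[unit_probs.max(axis=1) <= 0.5] = -1   # labeled as unknown unless p* > 0.5
    return labels
```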
Step 3: Use the GrabCut image segmentation algorithm to correct the coarse classification result obtained in Step 2 and to optimize the boundaries between geometric regions, obtaining accurate boundaries between the geometric regions of the image;
When the GrabCut-based segmentation is applied, the "credible" regions of the coarse classification result are used as the initial input of GrabCut, and the coarse labeling result is optimized fully automatically; a "credible" region is a set of pixels that belong to a category with high likelihood, i.e. pixels whose probability of belonging to the category is greater than 0.5 and that fall within the top 90% (by probability) of all pixels assigned to that category; a "credible" region is computed accordingly for each category P* in the image; based on the "credible" regions of the coarse labeling, the output of the GrabCut segmentation is used to correct the coarse classification result, so as to obtain accurate boundaries between the geometric regions of the image;
Step 4: For the labeling result output by Step 3, build the three-dimensional scene with computer graphics methods and provide the user with a photorealistic three-dimensional scene walkthrough;
According to the accurate boundaries between the geometric regions in the image, the image is cut into different geometric regions; with the camera parameters fixed, relative depth information is introduced through the reference ground plane, so that the three-dimensional coordinates of the key vertices of the geometric regions in the scene can be recovered; finally each geometric sub-region is approximated by a plane, and the regions are placed in the three-dimensional scene according to their geometric relations, generating a photorealistic three-dimensional scene walkthrough;
Step 3 is implemented as follows:
(11) For a given category P* in the image, the "credible" region is computed as follows:
Sort all pixels labeled as category P* in the coarse result in descending order of their probability of belonging to that category, and remove the pixels with the lowest probabilities, amounting to k% of the pixels;
Generate a binary template image M* corresponding to P*, of the same size as the original image: pixels belonging to the set P* have the value 1 in M*, and all other pixels have the value 0;
Detect the connected regions of the template image M*; any 0-valued hole inside a connected region whose area is smaller than A is filled with the value 1;
Erode the image M* with a structuring element of size β; the pixels removed by the erosion are regarded as pixels that may belong to the category and together form the "possible" pixel set of the category; the pixels of M* whose value is still 1 after the erosion are regarded as the "credible" pixels of the category and together form the "credible" pixel set of the category;
The "credible" pixel sets and the "possible" pixel sets of the three categories are obtained respectively with the above method for computing the "credible" region; the parameters are: for the vertical region, k, A and β take 10, 5000 and 20 respectively; for the ground region and the sky region, k, A and β take 0, 5000 and 10 respectively;
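A sketch of the "credible"/"possible" pixel-set computation for one category, using OpenCV for hole filling and erosion; the probability map, the coarse label map and the function and variable names are illustrative assumptions, while the parameters k, A and β follow the values given above.

```python
import cv2
import numpy as np

def credible_and_possible_sets(prob_map, label_map, category, k, A, beta):
    """Compute the 'credible' and 'possible' pixel masks of one category.

    prob_map:  (H, W) probability of belonging to `category` for each pixel.
    label_map: (H, W) coarse labels; pixels equal to `category` are candidates.
    k: percentage of lowest-probability pixels to drop; A: hole-area threshold;
    beta: side length of the square structuring element used for erosion.
    """
    mask = (label_map == category)
    probs = prob_map[mask]
    if k > 0 and probs.size > 0:
        thresh = np.percentile(probs, k)          # drop the lowest k% by probability
        mask &= (prob_map >= thresh)
    template = mask.astype(np.uint8)

    # Fill 0-valued holes of area smaller than A inside connected regions.
    inv = 1 - template
    n, cc_labels, stats, _ = cv2.connectedComponentsWithStats(inv, connectivity=4)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < A:
            template[cc_labels == i] = 1

    kernel = np.ones((beta, beta), np.uint8)
    eroded = cv2.erode(template, kernel)
    credible = eroded.astype(bool)                # still 1 after the erosion
    possible = (template == 1) & ~credible        # removed by the erosion
    return credible, possible

# Parameters from the description: vertical k=10, A=5000, beta=20; ground/sky k=0, A=5000, beta=10.
```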
(12) The GrabCut algorithm optimizes the coarse labeling result fully automatically as follows:
Each category is segmented independently according to its "credible" pixel set and "possible" pixel set; the independent segmentation of one of the three categories is computed as follows: the region of that category is regarded as foreground and the regions of the other two categories as background, i.e. the "credible" pixels of the category are marked as foreground pixels, the "credible" pixels of the other two categories as background pixels, the "possible" pixels of the category as possible foreground, and all remaining pixels as possible background; the GrabCut segmentation algorithm is initialized with this information, Gaussian mixture models are built for the foreground and the background respectively, and after segmentation the independent segmentation result with that category as foreground is obtained;
The labeling result is further optimized from the independent segmentation results as follows: on the basis of the three independent segmentation results, the vertical region obtained by independent segmentation is used once more as foreground for a foreground/background separation, which yields the final image segmentation result;
From the independent segmentation results of the sky and the ground, the position of the horizon can be roughly estimated; the horizon is used to divide the background region of the final image segmentation result into sky and ground: the background region above the horizon is labeled as sky region, and the background region below the horizon is labeled as ground region;
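A sketch of how OpenCV's mask-mode GrabCut could be initialized from the "credible" and "possible" sets when segmenting one category as foreground; the iteration count and the function and variable names are choices made for illustration.

```python
import cv2
import numpy as np

def segment_category(image, credible_fg, credible_bg, possible_fg, n_iter=5):
    """Run GrabCut with one category as foreground.

    credible_fg: 'credible' pixels of the category (sure foreground).
    credible_bg: union of the 'credible' pixels of the other two categories (sure background).
    possible_fg: 'possible' pixels of the category (probable foreground).
    All remaining pixels are treated as probable background.
    """
    mask = np.full(image.shape[:2], cv2.GC_PR_BGD, np.uint8)
    mask[possible_fg] = cv2.GC_PR_FGD
    mask[credible_bg] = cv2.GC_BGD
    mask[credible_fg] = cv2.GC_FGD

    bgd_model = np.zeros((1, 65), np.float64)   # GMM state buffers required by cv2.grabCut
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, None, bgd_model, fgd_model, n_iter, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))   # final foreground mask
```

Running this once per category, and once more with the vertical result as foreground, mirrors the two-pass scheme described above; the horizon estimated from the sky and ground results then splits the remaining background.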
For the labeling result output by Step 3, the three-dimensional scene is modeled with computer graphics methods to provide the user with a photorealistic three-dimensional scene walkthrough, including:
According to the labeling result output by Step 3, the accurate boundaries between the geometric regions of the image are obtained; the Douglas-Peucker algorithm is used to approximate the boundary between the ground and the vertical regions by a polygon, yielding the fitted polygon of that boundary;
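The boundary simplification can be done with OpenCV's Douglas-Peucker implementation (cv2.approxPolyDP); a sketch follows, with the tolerance epsilon chosen only for illustration.

```python
import cv2
import numpy as np

def fit_boundary_polygon(boundary_mask, epsilon=3.0):
    """Approximate the ground / vertical-region boundary by simplified polylines.

    boundary_mask: binary (H, W) image with the boundary pixels set to 1.
    epsilon: Douglas-Peucker tolerance in pixels (illustrative value).
    """
    contours, _ = cv2.findContours(boundary_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = [cv2.approxPolyDP(c, epsilon, closed=False) for c in contours]
    return polygons   # each entry is an (n, 1, 2) array of polyline vertices
```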
The three-dimensional scene is modeled with computer graphics methods as follows:
(21) The scene is modeled with a pinhole camera model: the optical axis passes through the image center, the world coordinate system coincides with the camera coordinate system, and the camera field of view is set to 1.431 rad;
(22) A reference ground plane is used to obtain the three-dimensional coordinates of the key vertices of the scene, as follows: a reference ground plane is introduced and its height is set to -5; the projection matrix is obtained from the above modeling information, and with the ground plane fixed, back-projection gives the three-dimensional coordinate in the scene corresponding to every ground pixel of the image, which yields the three-dimensional coordinates of the boundary points between the ground region and the vertical regions (a back-projection sketch is given after this list);
(23) From the three-dimensional coordinates of the boundary points between the ground and the vertical regions and from the fitted polygon of this boundary, a series of vertical planes is obtained, as follows: each line segment of the fitted polygon of the boundary between the ground and the vertical regions is regarded as the intersection of a vertical plane with the ground; the upper boundary of each vertical plane is determined by the boundary between the vertical regions and the sky region in the image labeling result;
(24) Texture mapping is applied to the vertical planes and the ground region to obtain a photorealistic three-dimensional scene model; the photorealistic scene walkthrough includes changing the viewing angle of the camera, changing the focal length, and changing the observation position from which the scene model is viewed.
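A sketch of the back-projection in step (22), under the pinhole model described above (optical axis through the image center, world frame coinciding with the camera frame, field of view 1.431 rad, ground plane at height -5): a ground point is recovered by intersecting the pixel's viewing ray with that plane. The coordinate conventions (y up, z forward, the field of view taken as horizontal) are assumptions made for illustration.

```python
import numpy as np

def backproject_to_ground(u, v, width, height, fov=1.431, ground_y=-5.0):
    """Intersect the viewing ray of pixel (u, v) with the ground plane y = ground_y.

    Assumes a pinhole camera at the origin looking along +z, optical axis through
    the image center, horizontal field of view `fov` (radians), y pointing up.
    Returns the 3D point (x, y, z), or None if the ray does not reach the ground.
    """
    f = (width / 2.0) / np.tan(fov / 2.0)          # focal length in pixels
    ray = np.array([u - width / 2.0,               # viewing-ray direction
                    -(v - height / 2.0),           # image v grows downward
                    f])
    if ray[1] >= 0:                                # ray does not go below the horizon
        return None
    t = ground_y / ray[1]
    return t * ray                                 # point on the plane y = ground_y
```

The three-dimensional coordinates obtained this way for the boundary points, together with the fitted polygon, give the base lines of the vertical planes in step (23).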
2. The method according to claim 1, characterised in that in Step 1 the 1031-dimensional sample features comprise: a 1000-dimensional Bag of Visual Words feature, a 30-dimensional color feature and a 1-dimensional position feature.
3. The method according to claim 1, characterised in that in Step 1 the kernel function of the SVM classifier is chosen as the Radial Basis Function (RBF), the model type is a multi-class classifier, and the probability-estimation parameter b is set to 1, so that the trained classifier outputs, for a test sample, the probability of belonging to each of the three categories.
CN201410340189.9A 2014-07-16 2014-07-16 A kind of full automatic three-dimensional scene construction method based on single image Active CN104134234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410340189.9A CN104134234B (en) 2014-07-16 2014-07-16 A kind of full automatic three-dimensional scene construction method based on single image

Publications (2)

Publication Number Publication Date
CN104134234A CN104134234A (en) 2014-11-05
CN104134234B true CN104134234B (en) 2017-07-25

Family

ID=51806903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410340189.9A Active CN104134234B (en) 2014-07-16 2014-07-16 A kind of full automatic three-dimensional scene construction method based on single image

Country Status (1)

Country Link
CN (1) CN104134234B (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104599283B (en) * 2015-02-10 2017-06-09 南京林业大学 A kind of picture depth improved method for recovering camera heights based on depth difference
CN104851127B (en) * 2015-05-15 2017-07-04 北京理工大学深圳研究院 It is a kind of based on interactive building point cloud model texture mapping method and device
CN105100771A (en) * 2015-07-14 2015-11-25 山东大学 Single-viewpoint video depth obtaining method based on scene classification and geometric dimension
CN107798703B (en) * 2016-08-30 2021-04-30 成都理想境界科技有限公司 Real-time image superposition method and device for augmented reality
CN106845352B (en) * 2016-12-23 2020-09-18 北京旷视科技有限公司 Pedestrian detection method and device
CN106815428B (en) * 2017-01-13 2020-05-19 中国空气动力研究与发展中心高速空气动力研究所 Wind tunnel balance calibration data processing method based on intelligent optimization algorithm
CN108629800A (en) * 2017-03-20 2018-10-09 北京三星通信技术研究有限公司 Plane determines that method and augmented reality show the display methods of information, related device
CN107492135A (en) * 2017-08-21 2017-12-19 维沃移动通信有限公司 A kind of image segmentation mask method, device and computer-readable recording medium
EP3474185B1 (en) * 2017-10-18 2023-06-28 Dassault Systèmes Classification of 2d images according to types of 3d arrangement
CN109902699B (en) * 2017-12-08 2023-07-11 北京邮电大学 Information processing method, device and computer storage medium
CN107944504B (en) * 2017-12-14 2024-04-16 北京木业邦科技有限公司 Board recognition and machine learning method and device for board recognition and electronic equipment
US10755112B2 (en) * 2018-03-13 2020-08-25 Toyota Research Institute, Inc. Systems and methods for reducing data storage in machine learning
CN108445505B (en) * 2018-03-29 2021-07-27 南京航空航天大学 Laser radar-based feature significance detection method in line environment
CN108734120B (en) * 2018-05-15 2022-05-10 百度在线网络技术(北京)有限公司 Method, device and equipment for labeling image and computer readable storage medium
CN108802785B (en) * 2018-08-24 2021-02-02 清华大学 Vehicle self-positioning method based on high-precision vector map and monocular vision sensor
CN110197529B (en) * 2018-08-30 2022-11-11 杭州维聚科技有限公司 Indoor space three-dimensional reconstruction method
CN109345557B (en) * 2018-09-19 2021-07-09 东南大学 Foreground and background separation method based on three-dimensional reconstruction result
MX2021006830A (en) * 2018-12-10 2021-07-02 Climate Llc Mapping field anomalies using digital images and machine learning models.
CN109840592B (en) * 2018-12-24 2019-10-18 梦多科技有限公司 A kind of method of Fast Labeling training data in machine learning
CN110136078A (en) * 2019-04-29 2019-08-16 天津大学 The semi-automatic reparation complementing method of single plant corn image leaf destruction
CN110176064B (en) * 2019-05-24 2022-11-18 武汉大势智慧科技有限公司 Automatic identification method for main body object of photogrammetric generation three-dimensional model
US10783643B1 (en) 2019-05-27 2020-09-22 Alibaba Group Holding Limited Segmentation-based damage detection
CN110264444B (en) * 2019-05-27 2020-07-17 阿里巴巴集团控股有限公司 Damage detection method and device based on weak segmentation
CN111784726A (en) * 2019-09-25 2020-10-16 北京沃东天骏信息技术有限公司 Image matting method and device
CN112613596A (en) * 2020-12-01 2021-04-06 河南东方世纪交通科技股份有限公司 ETC system based on three-dimensional scene simulation technology
CN113781422A (en) * 2021-09-01 2021-12-10 廊坊中油朗威工程项目管理有限公司 Pipeline construction violation identification method based on single image geometric measurement algorithm
CN115861572B (en) * 2023-02-24 2023-05-23 腾讯科技(深圳)有限公司 Three-dimensional modeling method, device, equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7711155B1 (en) * 2003-04-14 2010-05-04 Videomining Corporation Method and system for enhancing three dimensional face modeling using demographic classification
CN101281545A (en) * 2008-05-30 2008-10-08 清华大学 Three-dimensional model search method based on multiple characteristic related feedback
CN103400015A (en) * 2013-08-15 2013-11-20 华北电力大学 Composition modeling method for combustion system based on numerical simulation and test operation data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Libsvm: a library for support vector machines";Chih-Chung Chang;《Acm Transactions on Intelligent Systems & Technology》;20110430;第2卷(第3期);第389-396页 *
"基于立体视觉的三维模型重建系统设计";赵娟等;《光电技术应用》;20110430;第26卷(第2期);第12-14,30页 *

Also Published As

Publication number Publication date
CN104134234A (en) 2014-11-05

Similar Documents

Publication Publication Date Title
CN104134234B (en) A kind of full automatic three-dimensional scene construction method based on single image
CN107862698B (en) Light field foreground segmentation method and device based on K mean cluster
CN109753885B (en) Target detection method and device and pedestrian detection method and system
Zhang et al. Semantic segmentation of urban scenes using dense depth maps
CN109872397A (en) A kind of three-dimensional rebuilding method of the airplane parts based on multi-view stereo vision
Sirmacek et al. Performance evaluation for 3-D city model generation of six different DSMs from air-and spaceborne sensors
CN104134071B (en) A kind of deformable part model object detecting method based on color description
CN107452010A (en) A kind of automatically stingy nomography and device
US9942535B2 (en) Method for 3D scene structure modeling and camera registration from single image
CN105989604A (en) Target object three-dimensional color point cloud generation method based on KINECT
Li et al. An overlapping-free leaf segmentation method for plant point clouds
CN107481279A (en) A kind of monocular video depth map computational methods
CN110827312B (en) Learning method based on cooperative visual attention neural network
WO2022213612A1 (en) Non-contact three-dimensional human body size measurement method
CN107527054B (en) Automatic foreground extraction method based on multi-view fusion
CN114612488A (en) Building-integrated information extraction method, computer device, and storage medium
CN108596919A (en) A kind of Automatic image segmentation method based on depth map
CN103871072B (en) Orthography based on project digital elevation model inlays line extraction method
Wang et al. An overview of 3d object detection
CN108665472A (en) The method and apparatus of point cloud segmentation
CN108090485A (en) Display foreground extraction method based on various visual angles fusion
CN102982524B (en) Splicing method for corn ear order images
CN115937461A (en) Multi-source fusion model construction and texture generation method, device, medium and equipment
CN108022245A (en) Photovoltaic panel template automatic generation method based on upper thread primitive correlation model
CN111027538A (en) Container detection method based on instance segmentation model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant