CN110197529A - Interior space three-dimensional reconstruction method - Google Patents
Interior space three-dimensional reconstruction method
- Publication number
- CN110197529A CN110197529A CN201811003630.9A CN201811003630A CN110197529A CN 110197529 A CN110197529 A CN 110197529A CN 201811003630 A CN201811003630 A CN 201811003630A CN 110197529 A CN110197529 A CN 110197529A
- Authority
- CN
- China
- Prior art keywords
- lines
- image
- super
- spatial relationship
- current line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
Abstract
The invention discloses an interior space three-dimensional reconstruction method, comprising: establishing a corner-feature learning library in which each corner feature has a corresponding spatial relationship; obtaining a current panoramic image to be reconstructed; performing line detection and/or super-pixel segmentation on the current panoramic image to generate a lines-super-pixel image composed of lines and super-pixels; decomposing the current panoramic image into multiple single-view images by viewing angle; performing deep learning against the corner-feature learning library on each single-view image to identify the spatial relationships of its corner features; synthesizing the spatial relationships of all single-view images under the panorama's synthesis conditions to generate the spatial relationship of the current panoramic image; and applying that spatial relationship to the lines-super-pixel image to generate a lines-super-pixel image with spatial relationships. The invention accurately identifies the three-dimensional features in a panoramic image and accurately constructs their three-dimensional relationships.
Description
Technical field
The present invention relates to a method for performing three-dimensional spatial reconstruction from two-dimensional pictures.
Background art
The following background is provided to help the reader understand the invention and is not to be construed as prior art.
A three-dimensional panorama is a real-scene virtual-reality technique based on panoramic images. A panorama is produced either by stitching one or more groups of photos taken by a camera rotating through 360° into a single panoramic image, or by capturing the panorama in a single shot. A panoramic image is a flat picture that maps the surrounding environment and objects according to a fixed geometric relationship; it must undergo three-dimensional reconstruction before it can become a three-dimensional spatial image. A three-dimensional panorama is typically produced by capturing the image information of the whole scene with an ordinary camera fitted with a fisheye lens, then stitching the pictures in software into a 360° panorama for virtual-reality browsing.
However, a panoramic picture presents a three-dimensional scene in a two-dimensional form: the image lacks a sense of three-dimensional space and suffers from line distortion (for example, a straight line appears as a curve in the panorama). One existing method for reconstructing a panoramic picture into a three-dimensional scene is: obtain an indoor panoramic image as input; obtain lines by fitting discrete points and label lines of different types via image semantics; generate super-pixels by segmentation and label the direction of each face (for example, by color: red for horizontal planes such as floor or ceiling, striped colors for vertical planes such as walls, and white for faces with no direction constraint); recover the depth information of the image to obtain a grayscale map; and rebuild the three-dimensional lines to output a three-dimensional spatial model. The shortcomings of this reconstruction method are: 1. fitting discrete points recovers only part of each line, so complete lines cannot be obtained and lines are interrupted; 2. incomplete lines lose the overlap relationships between lines, and hence the relationships between adjacent faces, so that, for example, corners cannot be identified.
Summary of the invention
The purpose of the present invention is to provide an interior space three-dimensional reconstruction method that can accurately identify the three-dimensional features in a panoramic image and accurately construct their three-dimensional relationships.
The technical solution adopted by the present invention to solve this problem is an interior space three-dimensional reconstruction method comprising the following steps:
S1. Establish a corner-feature learning library in which each corner feature has a corresponding spatial relationship. The spatial relationship of a corner feature is the distribution of ceiling, walls, and floor in the corner image; the image of a corner feature is a single-view image taken by a camera from a fixed angle.
S2. Obtain the current panoramic image to be reconstructed.
S3. Perform line detection and/or super-pixel segmentation on the current panoramic image and, by image semantic analysis, mark lines of different directions with different colors or different line styles, generating a lines-super-pixel image composed of lines and super-pixels.
S4. Obtain the spatial relationship of the current panoramic image:
S4.1. Decompose the current panoramic image into multiple single-view images by viewing angle. Generating a panoramic image requires synthesizing multiple images of different perspectives, and the decomposed single-view images are exactly the images that synthesize the panorama.
S4.2. Perform deep learning against the corner-feature learning library on each single-view image, identifying the spatial relationships of its corner features.
S4.3. Synthesize the spatial relationships of all single-view images under the panorama's synthesis conditions to generate the spatial relationship of the current panoramic image, marking different spatial positions (ceiling, wall, floor, etc.) with different colors.
S5. Apply the spatial relationship of the current panoramic image to the lines-super-pixel image and delete the wrongly oriented lines in it, generating a lines-super-pixel image with spatial relationships. For example, if a line marked as a wall line appears in a floor or ceiling region, its direction is wrong and it is deleted. The spatial relationship of the current panoramic image and the lines-super-pixel image are both consistent with the pixel coordinates of the current panorama, so the spatial relationship of the current panoramic image can be mapped onto the lines-super-pixel image.
S6. Reconstruct the information of each face of the space from the lines-super-pixel image with spatial relationships.
Step S3 may be carried out simultaneously with S4, or S3 first and then S4, or S4 first and then S3.
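Step S4.1's decomposition of a panorama into single-view images can be sketched as follows. The patent does not specify a projection model; this sketch assumes an equirectangular panorama and samples one pinhole-camera view per viewing direction. The function name, parameters, and conventions are all illustrative assumptions, not the patent's prescribed implementation.

```python
import numpy as np

def equirect_to_view(pano, yaw_deg, pitch_deg, fov_deg, out_size):
    """Sample one pinhole-camera view out of an equirectangular panorama.

    pano: H x W (or H x W x C) array; rows span latitude [-90, 90] deg,
    columns span longitude [-180, 180] deg. Returns an out_size x out_size
    view looking in the (yaw, pitch) direction with the given field of view.
    """
    H, W = pano.shape[:2]
    n = out_size
    f = (n / 2) / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels
    # Pixel grid centred on the optical axis.
    u, v = np.meshgrid(np.arange(n) - n / 2 + 0.5,
                       np.arange(n) - n / 2 + 0.5)
    # Ray directions in camera coordinates (x right, y down, z forward).
    d = np.stack([u, v, np.full_like(u, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    # Rotate by pitch (about x) then yaw (about y).
    p, yw = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    Ry = np.array([[ np.cos(yw), 0, np.sin(yw)],
                   [0, 1, 0],
                   [-np.sin(yw), 0, np.cos(yw)]])
    d = d @ (Ry @ Rx).T
    # Convert rays to longitude/latitude, then to panorama pixel indices.
    lon = np.arctan2(d[..., 0], d[..., 2])          # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))      # [-pi/2, pi/2]
    col = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    row = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return pano[row, col]
```

Calling this for a set of yaw angles covering 360° yields the multiple single-view images that, per S4.1, are exactly the images synthesizing the panorama.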
In some embodiments, multistage line detection and fitting is performed on the lines in the lines-super-pixel image of step S3, comprising the following steps:
S3.1.1. Perform line detection on the current panoramic image to obtain a line image, marking lines of different directions by semantic analysis; the line detector uses a common algorithm, such as SFV.
S3.1.2. Take any line in the line image as the current line and obtain the rule by which the current line extends forward (such as the slope of a straight line or the curvature of an arc), then extend the current line forward by its extension rule.
S3.1.3. As the current line extends forward, judge whether there is a mergeable line whose distance from the current line's extension is less than a preset value; if so, fuse the current line and the mergeable line into one line; if not, keep the unextended current line.
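The extend-and-merge test of steps S3.1.2-S3.1.3 can be sketched minimally for the straight-line case, with slope as the extension rule. The preset distance value, the angle tolerance, and the function name are illustrative assumptions.

```python
import math

def merge_collinear(current, candidates, max_gap=5.0, ang_tol=0.05):
    """Extend `current` forward by its slope (its extension rule) and fuse
    it with any candidate segment lying near the extension.

    Segments are (x0, y0, x1, y1) tuples, ordered so the line "extends
    forward" from (x1, y1). Returns the fused segment, or `current`
    unchanged when no candidate qualifies (the unextended line is kept).
    """
    cx0, cy0, cx1, cy1 = current
    cur_ang = math.atan2(cy1 - cy0, cx1 - cx0)
    for (x0, y0, x1, y1) in candidates:
        # Same extension rule: directions agree within the tolerance.
        ang = math.atan2(y1 - y0, x1 - x0)
        if abs(math.remainder(ang - cur_ang, math.pi)) > ang_tol:
            continue
        # Candidate start must lie close to the current line's forward
        # extension: perpendicular offset and gap past the endpoint.
        dx, dy = math.cos(cur_ang), math.sin(cur_ang)
        perp = abs(-(x0 - cx0) * dy + (y0 - cy0) * dx)
        along = (x0 - cx1) * dx + (y0 - cy1) * dy
        if perp <= max_gap and 0 <= along <= max_gap:
            # Fuse: one line from the current start to the candidate end.
            return (cx0, cy0, x1, y1)
    return current
```

For example, a line ending at x = 10 fuses with a near-collinear candidate starting at x = 13, but not with one starting beyond the preset gap.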
Preferably, during indoor three-dimensional reconstruction, line detection and fitting are performed after the wrongly oriented lines in the lines-super-pixel image have been deleted.
Preferably, in step S3.1.3 the lines are fused as follows: when the current line extends forward and encounters a mergeable line, a continuous fitted line is fitted from the start point of the current line to the end point of the mergeable line, with the error between the fitting rule of the continuous line and the extension rule of the current line within a set tolerance. If the current line and the fitted line are arcs, the difference between the radius of curvature of the current line and that of the fitted line should be within the set tolerance. The fitted line may lie between the current line and the mergeable line, or it may coincide with one of them.
Alternatively, when the current line extends forward and encounters a mergeable line, the end point of the mergeable line is moved onto the extension of the current line to form the fitted line.
The fitted line continues to extend forward according to its extension rule. If no mergeable line has been encountered by the end of the plane, the unextended current line is kept; if mergeable segments have been merged by the end of the plane, the fitted line replaces the current line and all merged mergeable segments.
In some embodiments, when lines in the lines-super-pixel image are occluded, the lines are forcibly straightened and fitted, comprising the following steps:
S3.2.1. Establish a large-object feature learning library in which each large-object feature has a corresponding spatial relationship. Large-object features include furniture, household appliances, and the like; the spatial relationship of a large object is the segmentation relationship between the object's region in the image and each related face.
S3.2.2. Obtain the current panoramic image and perform deep learning on it with the large-object feature learning library to obtain the spatial relationships of the large-object features, as shown in Fig. 8.
S3.2.3. Obtain the line image or lines-super-pixel image of the current panoramic image, apply the spatial relationships of the large-object features to it, and delete the lines inside the large-object regions.
S3.2.4. Forcibly fit the lines that were interrupted by large-object occlusion. A line being occluded by a large object means that its end point or start point lies on the boundary of the large object. Forcible fitting means extending forward by the current line's rule and searching for a fittable line that has the same extension rule and whose distance is less than a preset value.
Alternatively, extend forward by the current line's rule to the boundary of the face containing the line, and judge whether an adjacent face has an adjacent-face fittable line whose distance is less than the preset value. An adjacent-face fittable line is a line on the adjacent face that either extends to the boundary of that face by its extension rule or actually terminates at the boundary of that face. Lines that belong to the same vanishing point or vanishing direction belong to the same face. Vanishing points are prior art in this field, and existing papers disclose their detailed theory, which is not expanded upon in this application.
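The vanishing-point test used above to decide that lines belong to the same face can be sketched with homogeneous coordinates: the line through two points is their cross product, two lines intersect at the cross product of their line vectors, and a segment belongs to a parallel family when its supporting line passes through that family's vanishing point. This is standard projective geometry offered as illustration, not code from the patent.

```python
import numpy as np

def line_homog(seg):
    """Homogeneous line through a segment's endpoints (cross product)."""
    x0, y0, x1, y1 = seg
    return np.cross([x0, y0, 1.0], [x1, y1, 1.0])

def vanishing_point(seg_a, seg_b):
    """Intersection of two segments' supporting lines, homogeneous form."""
    return np.cross(line_homog(seg_a), line_homog(seg_b))

def shares_vanishing_point(seg, vp, tol=1e-6):
    """True when `seg`'s supporting line passes through vanishing point
    `vp` (|l . vp| ~ 0), i.e. the segment joins the same parallel family."""
    l = line_homog(seg)
    l = l / np.linalg.norm(l)
    v = vp / np.linalg.norm(vp)
    return abs(float(l @ v)) < tol
```

Two receding wall edges that converge at (10, 0) share that vanishing point with every other edge of the same face, while an edge of a differently oriented face does not.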
In some embodiments, the fitted lines-super-pixel image is used as the lines-super-pixel image in step S5.
The above method identifies the corner features in the panoramic image and their spatial relationships by deep learning on corner features. Each corner feature corresponds to a spatial distribution of ceiling, walls, and floor; assembled together, the spatial relationships of all corner features partition the spatial relationship of the current panoramic image and build the three-dimensional spatial frame corresponding to the panorama (e.g., ceiling region, wall region, floor region). Line directions indicate intersection or parallel relations between walls; combining the spatial relationships with the lines-super-pixel image reconstructs the three-dimensional space corresponding to the panorama.
The present invention has the advantage that the three-dimensional spatial model is reconstructed by identifying corner features. Each corner feature corresponds to a spatial shape, so once the corner features in a panoramic image are recognized, the panorama can be divided into spatial faces according to the spatial shapes of those corner features; the precision of the three-dimensional reconstruction is high and the speed is fast.
Brief description of the drawings
Fig. 1 is a three-dimensional panoramic image of a single space.
Fig. 2 is the lines-super-pixel image of the panoramic image.
Fig. 3 is one of the single-view images obtained by decomposing the panoramic image.
Fig. 4 is the spatial relationship corresponding to Fig. 3.
Fig. 5 is the spatial relationship corresponding to the panoramic image.
Fig. 6 is the line image after line detection and fitting.
Fig. 7 is the three-dimensional space corresponding to Fig. 1.
Fig. 8 shows the spatial-relationship deep learning of large-object features.
Fig. 9 is a schematic diagram of multi-space fusion.
Specific embodiments
The invention is further described below with reference to the drawings and specific embodiments.
The three-dimensional reconstruction of the interior space
In some embodiments, an interior space three-dimensional reconstruction method as shown in Figs. 1-7 comprises the following steps:
S1. Establish a corner-feature learning library in which each corner feature has a corresponding spatial relationship. The spatial relationship of a corner feature is the distribution of ceiling, walls, and floor in the corner image; the image of a corner feature is a single-view image taken by a camera from a fixed angle.
S2. Obtain the current panoramic image to be reconstructed.
S3. Perform line detection and/or super-pixel segmentation on the current panoramic image and, by image semantic analysis, mark lines of different directions with different colors or different line styles, generating a lines-super-pixel image composed of lines and super-pixels.
S4. Obtain the spatial relationship of the current panoramic image:
S4.1. Decompose the current panoramic image into multiple single-view images by viewing angle. Generating a panoramic image requires synthesizing multiple images of different perspectives, and the decomposed single-view images are exactly the images that synthesize the panorama.
S4.2. Perform deep learning against the corner-feature learning library on each single-view image, identifying the spatial relationships of its corner features.
S4.3. Synthesize the spatial relationships of all single-view images under the panorama's synthesis conditions to generate the spatial relationship of the current panoramic image, marking different spatial positions (ceiling, wall, floor, etc.) with different colors. In Figs. 2-4, a, b, and c each mark a family of parallel lines drawn in the same color.
S5. Apply the spatial relationship of the current panoramic image to the lines-super-pixel image and delete the wrongly oriented lines in it, generating a lines-super-pixel image with spatial relationships. For example, if a line marked as a wall line appears in a floor or ceiling region, its direction is wrong and it is deleted. The spatial relationship of the current panoramic image and the lines-super-pixel image are both consistent with the pixel coordinates of the current panorama, so the spatial relationship of the current panoramic image can be mapped onto the lines-super-pixel image.
S6. Reconstruct the information of each face of the space from the lines-super-pixel image with spatial relationships.
Step S3 may be carried out simultaneously with S4, or S3 first and then S4, or S4 first and then S3.
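Step S4.3's synthesis of per-view spatial relationships into one panorama-wide map could be sketched as a per-pixel vote, assuming each single view contributes a label map already warped into panorama pixel coordinates. The label encoding (0 = ceiling, 1 = wall, 2 = floor) and the voting scheme are assumptions for illustration, not the patent's prescribed synthesis condition.

```python
import numpy as np

def synthesize_labels(pano_shape, views, n_labels):
    """Fuse per-view spatial-relationship label maps into one panorama map.

    `views` is a list of (rows, cols, labels) integer arrays: for each
    single view, the panorama pixels it covers and the label predicted
    there. Overlapping views vote per pixel; uncovered pixels get -1.
    """
    H, W = pano_shape
    votes = np.zeros((H, W, n_labels), dtype=np.int32)
    for rows, cols, labels in views:
        np.add.at(votes, (rows, cols, labels), 1)  # unbuffered accumulate
    out = votes.argmax(axis=-1)
    out[votes.sum(axis=-1) == 0] = -1              # pixel seen by no view
    return out
```

Majority voting makes the fused map robust where neighbouring views disagree near their seams.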
In some embodiments, multistage line detection and fitting is performed on the lines in the lines-super-pixel image of step S3, comprising the following steps:
S3.1.1. Perform line detection on the current panoramic image to obtain a line image, marking lines of different directions by semantic analysis; the line detector uses a common algorithm, such as SFV.
S3.1.2. Take any line in the line image as the current line and obtain the rule by which the current line extends forward (such as the slope of a straight line or the curvature of an arc), then extend the current line forward by its extension rule.
S3.1.3. As the current line extends forward, judge whether there is a mergeable line whose distance from the current line's extension is less than a preset value; if so, fuse the current line and the mergeable line into one line; if not, keep the unextended current line.
Preferably, during indoor three-dimensional reconstruction, line detection and fitting are performed after the wrongly oriented lines in the lines-super-pixel image have been deleted.
Preferably, in step S3.1.3 the lines are fused as follows: when the current line extends forward and encounters a mergeable line, a continuous fitted line is fitted from the start point of the current line to the end point of the mergeable line, with the error between the fitting rule of the continuous line and the extension rule of the current line within a set tolerance. If the current line and the fitted line are arcs, the difference between the radius of curvature of the current line and that of the fitted line should be within the set tolerance. The fitted line may lie between the current line and the mergeable line, or it may coincide with one of them.
Alternatively, when the current line extends forward and encounters a mergeable line, the end point of the mergeable line is moved onto the extension of the current line to form the fitted line.
The fitted line continues to extend forward according to its extension rule. If no mergeable line has been encountered by the end of the plane, the unextended current line is kept; if mergeable segments have been merged by the end of the plane, the fitted line replaces the current line and all merged mergeable segments.
In some embodiments, when lines in the lines-super-pixel image are occluded, the lines are forcibly straightened and fitted, comprising the following steps:
S3.2.1. Establish a large-object feature learning library in which each large-object feature has a corresponding spatial relationship. Large-object features include furniture, household appliances, and the like; the spatial relationship of a large object is the segmentation relationship between the object's region in the image and each related face.
S3.2.2. Obtain the current panoramic image and perform deep learning on it with the large-object feature learning library to obtain the spatial relationships of the large-object features, as shown in Fig. 8.
S3.2.3. Obtain the line image or lines-super-pixel image of the current panoramic image, apply the spatial relationships of the large-object features to it, and delete the lines inside the large-object regions.
S3.2.4. Forcibly fit the lines that were interrupted by large-object occlusion. A line being occluded by a large object means that its end point or start point lies on the boundary of the large object. Forcible fitting means extending forward by the current line's rule and searching for a fittable line that has the same extension rule and whose distance is less than a preset value.
Alternatively, extend forward by the current line's rule to the boundary of the face containing the line, and judge whether an adjacent face has an adjacent-face fittable line whose distance is less than the preset value. An adjacent-face fittable line is a line on the adjacent face that either extends to the boundary of that face by its extension rule or actually terminates at the boundary of that face. Lines that belong to the same vanishing point or vanishing direction belong to the same face. Vanishing points are prior art in this field, and existing papers disclose their detailed theory, which is not expanded upon in this application.
In some embodiments, the fitted lines-super-pixel image is used as the lines-super-pixel image in step S5.
Line fitting method
The above method identifies the corner features in the panoramic image and their spatial relationships by deep learning on corner features. Each corner feature corresponds to a spatial distribution of ceiling, walls, and floor; assembled together, the spatial relationships of all corner features partition the spatial relationship of the current panoramic image and build the three-dimensional spatial frame corresponding to the panorama (e.g., ceiling region, wall region, floor region). Line directions indicate intersection or parallel relations between walls; combining the spatial relationships with the lines-super-pixel image reconstructs the three-dimensional space corresponding to the panorama.
In some embodiments, because existing pixel-point fitting often produces interrupted and incomplete lines when detecting lines in a line image or lines-super-pixel image, a line detection and fitting method is provided that obtains complete lines.
A method of line fitting during image reconstruction comprises the following steps:
S3.1. Perform line detection on the current panoramic image to obtain a line image, marking lines of different directions by semantic analysis; the line detector uses a common algorithm, such as SFV.
S3.2. Take any line in the line image as the current line and obtain the rule by which the current line extends forward (such as the slope of a straight line or the curvature of an arc), then extend the current line forward by its extension rule.
S3.3. As the current line extends forward, judge whether there is a mergeable line whose distance from the current line's extension is less than a preset value; if so, fuse the current line and the mergeable line into one line; if not, keep the unextended current line.
Preferably, during indoor three-dimensional reconstruction, line detection and fitting are performed after the wrongly oriented lines in the lines-super-pixel image have been deleted.
Preferably, the forward extension rule of the current line in step S3.2 is obtained by computing the curvature of an arc or the slope of a straight line.
Preferably, in step S3.3 the lines are fused as follows: when the current line extends forward and encounters a mergeable line, a continuous fitted line is fitted from the start point of the current line to the end point of the mergeable line, with the error between the fitting rule of the continuous line and the extension rule of the current line within a set tolerance. If the current line and the fitted line are arcs, the difference between the radius of curvature of the current line and that of the fitted line should be within the set tolerance. The fitted line may lie between the current line and the mergeable line, or it may coincide with one of them.
Alternatively, when the current line extends forward and encounters a mergeable line, the end point of the mergeable line is moved onto the extension of the current line to form the fitted line.
The fitted line continues to extend forward according to its extension rule. If no mergeable line has been encountered by the end of the plane, the unextended current line is kept; if mergeable segments have been merged by the end of the plane, the fitted line replaces the current line and all merged mergeable segments.
Large-object features
When lines are occluded, they are forcibly straightened and fitted, comprising the following steps:
S3.2.1. Establish a large-object feature learning library in which each large-object feature has a corresponding spatial relationship. Large-object features include furniture, household appliances, and the like; the spatial relationship of a large object is the segmentation relationship between the object's region in the image and each related face.
S3.2.2. Obtain the current image to be processed, which may be a panoramic image or a single-view image, and perform deep learning on it with the large-object feature learning library to obtain the spatial relationships of the large-object features in the current image.
S3.2.3. Obtain the line image or lines-super-pixel image of the current image, apply the spatial relationships of the large-object features to it, and delete the lines inside the large-object regions.
S3.2.4. Forcibly fit the lines that were interrupted by large-object occlusion. A line being occluded by a large object means that its end point or start point lies on the boundary of the large object. Forcible fitting means extending forward by the current line's rule and searching for a fittable line that has the same extension rule and whose distance is less than a preset value; if such a fittable line exists, the current line and the fittable line are fused into one line.
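Step S3.2.4's forcible fit across an occluding object can be sketched for the straight-line case: two pieces whose break points fall on the object's span and whose slopes (extension rules) agree within a tolerance are fused into one line. The box format, tolerance, and function name are illustrative assumptions.

```python
def bridge_occlusion(left, right, box, slope_tol=0.05):
    """Forcibly fit two pieces of a line interrupted by a large object.

    `left` and `right` are segments (x0, y0, x1, y1) with x0 <= x1; `box`
    is the object's region (bx0, by0, bx1, by1). The pieces fuse into one
    line when both break points lie on the object's horizontal span and
    their slopes agree within the tolerance; otherwise both pieces are
    kept unchanged."""
    def slope(s):
        dx = s[2] - s[0]
        return (s[3] - s[1]) / dx if dx else float('inf')
    bx0, by0, bx1, by1 = box
    # Occlusion: the interruption endpoints lie on the object's span.
    interrupted = bx0 <= left[2] <= bx1 and bx0 <= right[0] <= bx1
    same_rule = abs(slope(left) - slope(right)) <= slope_tol
    if interrupted and same_rule:
        return [(left[0], left[1], right[2], right[3])]  # one fused line
    return [left, right]                                 # keep both pieces
```

A wall-floor edge broken by a sofa between x = 10 and x = 20 is restored as one line; pieces with mismatched slopes are left alone.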
Multi-space fusion
In three-dimensional spatial reconstruction it may be necessary to merge multiple spaces: for example, when a large space has been divided into multiple small spaces, all of the small spaces need to be fused into a whole, and in this case the three-dimensional spaces must be fused.
Preferably, as shown in Fig. 9, the multi-space fusion method comprises the following steps:
SI. Post a scale in each small space before taking pictures.
SII. Establish a scale learning library.
SIII. Obtain the panoramic image of each space as the current spatial image.
SIV. Perform deep learning on the current spatial image with the scale learning library to find the scale in the current spatial image, and register the scales of all spatial images in direction and size; registration means that the scales of all spatial images coincide completely.
SV. Perform three-dimensional spatial reconstruction on each scale-registered panoramic image.
Preferably, in SV, three-dimensional spatial faces carrying the same information are synthesized into one face, thereby combining the multiple small spaces into a three-dimensional stereogram of one larger space.
The multi-space fusion method does not require photographing every subspace with the same camera model; different brands and different cameras may photograph the subspaces separately, as long as the scales posted in the spaces are identical.
During multi-space fusion, the three-dimensional reconstruction of each subspace uses the three-dimensional spatial reconstruction methods of the embodiments described above.
This method finds the scale in each panoramic image by deep learning. Once all scales are registered, the direction and size of all spaces are registered; the registered panoramic images can then be reconstructed into three-dimensional spaces of the same scale, and the multiple three-dimensional spaces are fused.
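Step SIV's registration by a posted scale can be sketched as recovering a 2-D similarity transform from the scale's two endpoints as detected in two spaces: because the physical scales are identical, mapping one onto the other aligns the spaces in direction and size. The 2-D simplification and all names are assumptions for illustration.

```python
import numpy as np

def register_by_ruler(ruler_a, ruler_b):
    """Similarity transform (scale s, rotation R, translation t) mapping
    the scale seen in space B onto the identical scale seen in space A.

    ruler_a, ruler_b: 2x2 arrays [[x0, y0], [x1, y1]] of the scale's two
    endpoints in each space's coordinates.
    """
    a0, a1 = np.asarray(ruler_a, float)
    b0, b1 = np.asarray(ruler_b, float)
    va, vb = a1 - a0, b1 - b0
    s = np.linalg.norm(va) / np.linalg.norm(vb)        # relative size
    ang = np.arctan2(va[1], va[0]) - np.arctan2(vb[1], vb[0])
    R = np.array([[np.cos(ang), -np.sin(ang)],
                  [np.sin(ang),  np.cos(ang)]])        # relative direction
    t = a0 - s * R @ b0                                # align start points
    return s, R, t

def apply_transform(points, s, R, t):
    """Map points from space B's coordinates into space A's."""
    return s * (np.asarray(points, float) @ R.T) + t
```

Applying the recovered transform to all of space B's geometry brings both reconstructions into one coordinate frame of the same scale, after which coincident faces can be merged as in SV.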
The invention illustrated and described herein may be practiced in the absence of any element or limitation not specifically disclosed herein. The terms and expressions employed are terms of description rather than of limitation; there is no intention, in using them, to exclude any equivalents of the features shown and described or portions thereof, and it should be recognized that various modifications are possible within the scope of the invention. It should therefore be understood that although the invention has been specifically disclosed by various embodiments and optional features, modifications and variations of the concepts described herein may be resorted to by those of ordinary skill in the art, and such modifications and variations are considered to fall within the scope of the invention as defined by the appended claims.
The articles, patents, patent applications, and all other documents and electronically available information described or recorded herein are incorporated herein by reference in their entirety, to the same extent as if each individual publication were specifically and individually indicated to be incorporated by reference. Applicants reserve the right to physically incorporate into this application any and all materials and information from any such articles, patents, patent applications, or other documents.
Claims (6)
1. An interior space three-dimensional reconstruction method, comprising the following steps:
S1. establishing a corner-feature learning library in which each corner feature has a corresponding spatial relationship;
S2. obtaining a current panoramic image to be reconstructed;
S3. performing line detection and/or super-pixel segmentation on the current panoramic image, marking lines of different directions with different colors or different line styles by image semantic analysis, and generating a lines-super-pixel image composed of lines and super-pixels;
S4. obtaining the spatial relationship of the current panoramic image:
S4.1. decomposing the current panoramic image into multiple single-view images by viewing angle;
S4.2. performing deep learning against the corner-feature learning library on each single-view image, identifying the spatial relationships of its corner features;
S4.3. synthesizing the spatial relationships of all single-view images under the panorama's synthesis conditions to generate the spatial relationship of the current panoramic image, marking different spatial positions with different colors;
S5. applying the spatial relationship of the current panoramic image to the lines-super-pixel image and deleting the wrongly oriented lines in it, generating a lines-super-pixel image with spatial relationships;
S6. reconstructing the information of each face of the space from the lines-super-pixel image with spatial relationships.
2. The indoor space three-dimensional reconstruction method according to claim 1, characterized in that multi-stage line detection and fitting is performed on the lines-super-pixel image in step S3, comprising the following steps:
S3.1.1, performing line detection on the current panoramic image to obtain a line image, and marking the lines of different directions through semantic analysis;
S3.1.2, taking any one line in the line image as the current line, obtaining the rule by which the current line extends forward, and extending the current line forward according to its extension rule;
S3.1.3, as the current line extends forward, judging whether there is a mergeable line whose distance from the extended segment of the current line is less than a preset value; if so, fusing the current line and the mergeable line into one line; if not, retaining the unextended current line.
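Steps S3.1.2 and S3.1.3 can be sketched for the simplest case: a straight extension rule and a point-to-line distance test. The segment representation, the fusion policy (keep the current start point, adopt the candidate's far endpoint) and the threshold value are illustrative assumptions, not fixed by the claim:

```python
import math

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(bx - ax, by - ay)
    return num / den

def try_fuse(current, candidate, threshold):
    """Step S3.1.3 as a sketch: fuse `candidate` into `current` when both
    of its endpoints lie within `threshold` of the current line's forward
    extension. Segments are ((x0, y0), (x1, y1)); returns the fused
    segment, or None when the candidate is not mergeable."""
    a, b = current
    if all(point_to_line_distance(p, a, b) < threshold for p in candidate):
        # keep the original start point, adopt the candidate's far endpoint
        return (a, candidate[1])
    return None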
3. The indoor space three-dimensional reconstruction method according to claim 2, characterized in that, during indoor space three-dimensional reconstruction, the detection and fitting of the lines in the lines-super-pixel image is performed after the lines of differing directions have been deleted.
4. The indoor space three-dimensional reconstruction method according to claim 3, characterized in that, in step S3.1.3, the line fusion method is as follows: when the current line extends forward and encounters a mergeable line, a continuous fitted line is fitted from the start point of the current line to the end point of the mergeable line, the error between the fitting rule of the continuous line and the extension rule of the current line lying within a set range;
alternatively, when the current line extends forward and encounters a mergeable line, the end point of the mergeable line is moved onto the extended segment of the current line to form a fitted line;
the fitted line continues to extend forward according to its extension rule; if no mergeable line has been encountered by the end of the plane, the unextended current line is retained; if mergeable line segments have been merged by the end of the plane, the current line and all the merged mergeable line segments are replaced by the fitted line.
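The first fusion variant of this claim (fit a line from the current line's start point to the mergeable line's end point, then check the fitting error against a set range) can be sketched using slope as a stand-in for the "extension rule"; the claim fixes neither the rule nor the error measure, so both are assumptions here:

```python
def slope(seg):
    """Slope of a segment ((x0, y0), (x1, y1)); assumes non-vertical lines."""
    (x0, y0), (x1, y1) = seg
    return (y1 - y0) / (x1 - x0)

def fit_through(current, candidate, max_slope_error):
    """Sketch of claim 4's first variant: fit one continuous line from the
    current line's start to the candidate's end, and accept it only when
    its slope deviates from the current line's extension rule by no more
    than `max_slope_error` (the 'set range')."""
    fitted = (current[0], candidate[1])
    if abs(slope(fitted) - slope(current)) <= max_slope_error:
        return fitted
    return None
```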
5. The indoor space three-dimensional reconstruction method according to claim 4, characterized in that, when a line in the lines-super-pixel image is occluded, forced straight-line fitting is performed on the line, comprising the following steps:
S3.2.1, establishing a large-object feature learning library, in which each large-object feature has a corresponding spatial relationship;
S3.2.2, obtaining the current panoramic image, performing deep learning on the current panoramic image with the large-object feature learning library, and obtaining the spatial relationships of the large-object features;
S3.2.3, obtaining the line image or lines-super-pixel image of the current panoramic image, applying the spatial relationships of the large-object features to the line image or lines-super-pixel image, and deleting the lines within the large-object feature regions;
S3.2.4, forcibly fitting the lines interrupted by occlusion by large objects; alternatively, extending the current line forward by its rule to the end boundary of the face on which the line lies, and judging whether there is an adjacent face, at a distance from the current line less than a preset value, that has a fittable adjacent-face line; a fittable adjacent-face line is a line on the adjacent face that extends forward by its extension rule to the boundary of that adjacent face, or a line that actually terminates at the boundary of that adjacent face; lines belonging to the same vanishing point or vanishing direction belong to the same face.
6. The indoor space three-dimensional reconstruction method according to claim 5, characterized in that the lines-super-pixel image after fitting is used as the lines-super-pixel image in step S5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811003630.9A CN110197529B (en) | 2018-08-30 | 2018-08-30 | Indoor space three-dimensional reconstruction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110197529A (en) | 2019-09-03 |
CN110197529B CN110197529B (en) | 2022-11-11 |
Family
ID=67751138
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811003630.9A Active CN110197529B (en) | 2018-08-30 | 2018-08-30 | Indoor space three-dimensional reconstruction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110197529B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1567385A (en) * | 2003-06-19 | 2005-01-19 | 邓兴峰 | Panoramic reconstruction method of three dimensional image from two dimensional image |
US20050031195A1 (en) * | 2003-08-08 | 2005-02-10 | Microsoft Corporation | System and method for modeling three dimensional objects from a single image |
CN101000461A (en) * | 2006-12-14 | 2007-07-18 | 上海杰图软件技术有限公司 | Method for generating stereoscopic panorama by fish eye image |
CN101714262A (en) * | 2009-12-10 | 2010-05-26 | 北京大学 | Method for reconstructing three-dimensional scene of single image |
CN104134234A (en) * | 2014-07-16 | 2014-11-05 | 中国科学技术大学 | Full-automatic three-dimensional scene construction method based on single image |
CN107038724A (en) * | 2015-10-28 | 2017-08-11 | 舆图行动股份有限公司 | Panoramic fisheye camera image correction, synthesis and depth of field reconstruction method and system |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111339914A (en) * | 2020-02-24 | 2020-06-26 | 桂林理工大学 | Indoor ceiling ground identification method based on single picture |
CN111339914B (en) * | 2020-02-24 | 2022-08-19 | 桂林理工大学 | Indoor ceiling ground identification method based on single picture |
CN111461125A (en) * | 2020-03-19 | 2020-07-28 | 杭州凌像科技有限公司 | Continuous segmentation method of panoramic image |
CN111461125B (en) * | 2020-03-19 | 2022-09-20 | 杭州凌像科技有限公司 | Continuous segmentation method of panoramic image |
CN112598780A (en) * | 2020-12-04 | 2021-04-02 | Oppo广东移动通信有限公司 | Instance object model construction method and device, readable medium and electronic equipment |
CN112598780B (en) * | 2020-12-04 | 2024-04-05 | Oppo广东移动通信有限公司 | Instance object model construction method and device, readable medium and electronic equipment |
CN113989376A (en) * | 2021-12-23 | 2022-01-28 | 贝壳技术有限公司 | Method and device for acquiring indoor depth information and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9438878B2 (en) | Method of converting 2D video to 3D video using 3D object models | |
Avidan et al. | Novel view synthesis by cascading trilinear tensors | |
CN110197529A (en) | Interior space three-dimensional rebuilding method | |
WO2017029487A1 (en) | Method and system for generating an image file of a 3d garment model on a 3d body model | |
EP1150254A3 (en) | Methods for creating an image for a three-dimensional display, for calculating depth information, and for image processing using the depth information | |
EP3503030A1 (en) | Method and apparatus for generating a three-dimensional model | |
KR102281462B1 (en) | Systems, methods and software for creating virtual three-dimensional images that appear to be projected in front of or on an electronic display | |
CN108629828B (en) | Scene rendering transition method in the moving process of three-dimensional large scene | |
Böhm | Multi-image fusion for occlusion-free façade texturing | |
JP4996922B2 (en) | 3D visualization | |
Schmeing et al. | Depth image based rendering | |
KR100335617B1 (en) | Method for synthesizing three-dimensional image | |
CN110148206A (en) | The fusion method in more spaces | |
RU2735066C1 (en) | Method for displaying augmented reality wide-format object | |
Wang et al. | Disparity manipulation for stereo images and video | |
CN110148221A (en) | A kind of method of lines fitting when image reconstruction | |
Siegmund et al. | Virtual Fitting Pipeline: Body Dimension Recognition, Cloth Modeling, and On-Body Simulation. | |
CN110148220A (en) | The three-dimensional rebuilding method of the big object of the interior space | |
CN114119891A (en) | Three-dimensional reconstruction method and reconstruction system for robot monocular semi-dense map | |
Chen et al. | Image synthesis from a sparse set of views | |
Kimura et al. | 3D reconstruction based on epipolar geometry | |
Chotikakamthorn | Near point light source location estimation from shadow edge correspondence | |
Jang et al. | Depth video based human model reconstruction resolving self-occlusion | |
KR20030015625A (en) | Calibration-free Approach to 3D Reconstruction Using A Cube Frame | |
Eggert et al. | Multi-layer visualization of mobile mapping data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||