CN113379901A - Method and system for establishing house live-action three-dimension by utilizing public self-photographing panoramic data - Google Patents
- Publication number
- CN113379901A (application CN202110699234.XA)
- Authority
- CN
- China
- Prior art keywords
- house
- dimensional
- panoramic
- data
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/16—Real estate
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
Abstract
The invention provides a method and a system for establishing a house live-action three-dimensional model by utilizing public self-photographing panoramic data. The method comprises: receiving 360-degree panoramic data of a house shot and uploaded autonomously by the public, and generating a house three-dimensional color point cloud from the 360-degree panoramic data; performing regularization processing on the house color point cloud and selecting wall surfaces; extracting a house plan map based on the house point cloud; and displaying the house 360-degree panorama, the plan map and the three-dimensional model in an associated manner. The invention supports the public in independently and quickly constructing a live-action three-dimensional model of a house, realizing low-cost, efficient and convenient three-dimensional visual digital expression of the house. By utilizing 360-degree panoramic multi-view photogrammetry processing technology, the efficiency and accuracy of live-action three-dimensional model construction are remarkably improved, the cost of house live-action three-dimensional modeling and the field data acquisition workload of professionals are reduced, and crowd-sourced house three-dimensional modeling and visual expression are realized, giving the method a wide application prospect.
Description
Technical Field
The invention belongs to the field of live-action three-dimensional modeling, and particularly relates to a method and a system for establishing a house live-action three-dimensional model based on public self-photographing panoramic data.
Background
Indoor space is an important place for human life, entertainment and study. How to rapidly perform three-dimensional modeling of indoor scenes has become a hot problem in current research. To date, indoor scene modeling has been pursued through various approaches and has made considerable progress.
Currently, there are three main methods for indoor scene modeling. The first builds a three-dimensional model with software such as CAD or 3ds Max: a model is assembled from basic geometric elements through a series of interactive operations. Such geometric interactive modeling generally restores an indoor model using data provided by indoor construction drawings or CAD files together with auxiliary height data. The method is low-cost, technically mature and widely applied; however, because indoor scenes vary in shape and structure, interactive reconstruction is time-consuming and demands a high level of professional skill from the operator. The second method acquires three-dimensional data with instruments, which can be classified into optical, ultrasonic and electromagnetic measurement, among which optical measurement is the most widely used; for example, a laser scanner directly acquires the three-dimensional spatial coordinates and color information of each sampled point of an indoor scene. This approach has produced many research results, but the high cost and demanding acquisition conditions keep it from being readily accepted and used by the public. The third method is image-based modeling: an ordinary digital camera serves as the image acquisition device, and a highly realistic model is reconstructed directly from images.
Owing to its low cost, high efficiency and low labor intensity, image-based modeling has become a powerful tool, attracting increasing attention in many fields, for analyzing and extracting geometric information and building models. Ground close-range images in particular can reflect the geometric details of building surfaces and carry vivid texture information, making them a research hotspot in current photogrammetry and computer vision. Image-based modeling technology draws on digital photogrammetry, computer graphics and computer vision: according to the perspective imaging principle of the camera, it obtains the required camera parameters by analyzing and processing the images, and can recover a three-dimensional model of an object from a single image or multiple images. While this approach has its advantages, its convenience still leaves room for improvement, for the following reasons:
Image-based modeling requires considerable professional knowledge, involves a large amount of computation, and needs professional computing software, so it is difficult to popularize; ordinary people who want to model a room indoors must either bear high costs or seek help from related institutions or professionals.
Meanwhile, the image-based modeling method places certain requirements on the images, such as definition, coverage and degree of overlap, which means that a large number of pictures must be taken before images meeting the modeling conditions can be screened out, and professional personnel must be sent to the site for data acquisition.
Disclosure of Invention
The invention provides a scheme for establishing a house live-action three-dimensional model based on public self-photographing panoramic data, so that the public can independently and rapidly establish a live-action three-dimensional model of a house, realizing low-cost, efficient and convenient three-dimensional visual digital expression of the house.
In order to achieve the above object, the present invention provides a method for establishing a house live-action three-dimensional model by utilizing public self-photographing panoramic data, comprising the following steps:
step 1, receiving 360-degree panoramic data of a house which is automatically shot and uploaded by the public;
step 2, generating a house three-dimensional color point cloud by utilizing house 360-degree panoramic data, wherein the implementation mode comprises the following substeps,
step 2.1, frames are extracted from the 360-degree panoramic video stream, and a sequence 360-degree panoramic image set is established;
step 2.2, correcting the 360-degree panoramic image multi-view projection to obtain a sequence multi-view image, the realization method is as follows,
firstly, converting an originally input 360-degree panoramic image into a three-dimensional panoramic sphere space by adopting an equirectangular-projection spherical panoramic model; the equirectangular-projection spherical panoramic model converts between the three-dimensional spherical model and the two-dimensional plane of the panoramic image, and establishes the conversion relation among the three-dimensional spherical surface, the spherical texture and the two-dimensional panoramic plane;
secondly, enclosing the panoramic sphere in a cube whose edge length equals the diameter of the panoramic sphere, and mapping the points on the spherical surface onto the corresponding cube faces to obtain 6 multi-view images;
thirdly, rotating the cube by 45 degrees, and mapping the points on the spherical surface to the corresponding cube surface again to obtain another 6 multi-view images;
finally, selecting at least 6 images in the horizontal direction and at least 2 images in the vertical direction from the 12 multi-view images to form a house sequence multi-view image;
step 2.3, then, performing house sequence multi-view image matching and SFM reconstruction to generate three-dimensional sparse point cloud;
step 2.4, generating a dense three-dimensional point cloud of the house by adopting PMVS according to the three-dimensional sparse point cloud; generating an irregular house triangular Mesh based on the dense three-dimensional point cloud of the house; searching for the most suitable texture block in the corresponding multi-view image, and mapping the texture block onto the Mesh patch to obtain a three-dimensional model of the house with realistic texture, wherein the three-dimensional model comprises the house three-dimensional color point cloud and a surface Mesh model constructed based on the color point cloud;
step 3, performing regularization processing on the house color point cloud, and selecting wall surfaces;
step 4, extracting a house plane map based on the house point cloud, comprising the following substeps,
step 4.1, vertically projecting the point cloud onto a horizontal plane, and performing straight line fitting according to the density of the projection points;
4.2, carrying out region division by utilizing the fitted straight line to obtain a vector line image layer projected to the 2D plane;
step 4.3, utilizing the vector room plan data generated in step 4.2 and using a height histogram method to obtain the floor and ceiling height of each room from the point cloud, wherein the point counts of the height histogram exhibit two peaks, and the two corresponding height values are taken as the floor and ceiling heights of the room; triangulating the ceiling, the wall surfaces and the floor of each room by using the Delaunay triangulation method to construct the final room three-dimensional model; and outputting the constructed room three-dimensional model in vector Mesh grid form;
and step 5, displaying the house 360-degree panorama, the plan view and the three-dimensional model in an associated manner.
In step 1, the house 360-degree panoramic image data is acquired through the panoramic camera function of an intelligent mobile device or through a panoramic camera.
Moreover, the resolution of the intelligent mobile device or panoramic camera is not lower than 4K, and a panoramic lens is adopted; the house is shot handheld, as stably as possible, with image coverage as comprehensive as possible; and the panoramic video acquisition frame rate is not lower than 25 frames per second.
Furthermore, in step 2.1, the frame extraction interval of the panoramic video is set according to shooting stability, as follows:
when the video is stable, video frames are extracted at an interval of 5 frames per second;
when the shooting process has large jitter, video frames are extracted at an interval of 3 frames per second.
Furthermore, the relation of the equirectangular-projection spherical panoramic model in step 2.2 is as follows:
θq = π · v / h,  φ = 2π · u / w        (formula 1)
X = R sinθq cosφ,  Y = R sinθq sinφ,  Z = R cosθq        (formula 2)
where (u, v) are the coordinates of an image point in the original panoramic image, (X, Y, Z) are the coordinates of the corresponding point on the three-dimensional panoramic sphere of radius R, w is the original panoramic image width, h is the original panoramic image height, θq is the zenith angle in the spherical coordinate system, and φ is the azimuth angle in the spherical coordinate system.
And in step 4.3, based on the dense three-dimensional point cloud of the house, the irregular house triangulation Mesh is generated according to a Poisson reconstruction algorithm.
And in step 4.3, the Mesh patch texture mapping comprises performing consistency checking and color smoothness evaluation of the patch textures by adopting graph-cut theory, and performing color and brightness transitions by distance weighting to eliminate seams between adjacent texture blocks.
Moreover, the implementation of step 5 comprises the following sub-steps,
step 5.1, solving the coordinates of the photographing centers through bundle adjustment (the light beam method), projecting the photographing centers onto the horizontal plane, and setting a label at each projection point position;
and 5.2, unifying the house panorama, the plan view and the three-dimensional model to the same coordinate system by taking the photographing central point and the corresponding label position as references.
And the method establishes a live-action indoor three-dimensional model for visual house display and basic data acquisition.
On the other hand, the invention also provides a system for establishing the three-dimensional real scene of the house by utilizing the public self-shooting panoramic data, which is used for realizing the method for establishing the three-dimensional real scene of the house by utilizing the public self-shooting panoramic data.
The invention has the following positive effects:
1) The invention adopts a client/server service mode, realizing a convenient workflow in which the public independently collect data, the terminal processes them, and the three-dimensional model is returned; the operation is simple and easy to popularize.
2) By utilizing the advanced 360-degree panoramic multi-view photogrammetry processing technology, the efficiency and the accuracy of the real scene three-dimensional model construction are obviously improved, and the cost of the house real scene three-dimensional modeling is reduced.
3) The invention provides a more flexible and faster panoramic data acquisition method, enabling autonomous and simple data acquisition by the public and reducing the field data acquisition workload of professionals.
The invention provides a technical scheme for establishing a live-action three-dimensional model of a house by utilizing public self-photographing panoramic data, enabling the public to independently and quickly construct the live-action three-dimensional model of a house and realizing low-cost, efficient and convenient three-dimensional visual digital expression of the house. The method can realize crowdsourced house three-dimensional modeling and visual expression, provide support for buying, selling and renting houses, supply data for online house decoration and maintenance, and has wide application prospects in the fields of house transaction, house maintenance, house safety management and the like.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and examples.
Referring to fig. 1, a method for establishing a house live-action three-dimensional model by utilizing public self-photographing panoramic data according to an embodiment of the present invention comprises the following steps:
step 1, collecting house 360-degree panoramic data independently uploaded by the public.
The invention utilizes photos or videos of house panoramic data shot and uploaded by the public themselves. In a recommended implementation, a prompt is sent to the provider of the photos and videos stating the actual requirements, such as texture information of the building objects, to safeguard the subsequent establishment of the three-dimensional model.
In specific implementation, a user can shoot a house (including photos or videos) by using a panoramic camera on a mobile phone to obtain 360-degree panoramic data of the house; and uploading the panoramic data through the client by the user.
This proposal acquires 360-degree panoramic image data of a house through the panoramic camera function of an intelligent mobile device or through a panoramic camera; no professional knowledge is needed, and ordinary users can operate existing products to do so. The resolution of the intelligent mobile device or panoramic camera is not lower than 4K, and a panoramic lens is adopted; the house is shot handheld, as stably as possible, with image coverage as comprehensive as possible; and the panoramic video acquisition frame rate is not lower than 25 frames per second.
And 2, generating a house three-dimensional color point cloud by using the house 360-degree panoramic data.
2.1) firstly, frames are extracted from the 360-degree panoramic video stream, and a sequence 360-degree panoramic image set is established.
In specific implementation, the panoramic video frame extraction interval should be set according to shooting stability for better results. The preferred implementation is as follows:
firstly, when the video is stable, video frames are extracted at an interval of 5 frames per second; and secondly, when the shooting process has large jitter, video frames are extracted at an interval of 3 frames per second.
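The frame-extraction rule above can be sketched in a few lines of Python; `frame_indices` and the 25 fps default are illustrative assumptions, not part of the patent:

```python
def frame_indices(n_frames, video_fps=25, jitter=False):
    """Indices of panoramic video frames to keep.

    One reading of the patent's rule: extract 5 frames per second when the
    video is stable, 3 frames per second when shooting jitters strongly.
    """
    extract_fps = 3 if jitter else 5
    # number of source frames between two extracted frames
    stride = max(1, round(video_fps / extract_fps))
    return list(range(0, n_frames, stride))
```

For a 4-second clip at 25 fps this keeps every 5th frame when stable and every 8th when jittery.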
2.2) correcting the 360-degree panoramic image multi-view projection to obtain a sequence multi-view image.
The specific method comprises the following steps:
firstly, converting the originally input 360-degree panoramic image into the three-dimensional panoramic sphere space by adopting an equirectangular-projection spherical panoramic model, with the relations shown in formula 1 and formula 2; the equirectangular-projection spherical panoramic model converts between the three-dimensional spherical model and the two-dimensional plane of the panoramic image, establishing the conversion relation among the three-dimensional spherical surface, the spherical texture and the two-dimensional panoramic plane:
θq = π · v / h,  φ = 2π · u / w        (formula 1)
X = R sinθq cosφ,  Y = R sinθq sinφ,  Z = R cosθq        (formula 2)
where (u, v) are the coordinates of an image point in the original panoramic image, (X, Y, Z) are the coordinates of the corresponding point on the three-dimensional panoramic sphere of radius R, w is the original panoramic image width, h is the original panoramic image height, θq is the zenith angle in the spherical coordinate system, and φ is the azimuth angle in the spherical coordinate system.
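A minimal Python sketch of the panorama-to-sphere mapping, assuming the standard equirectangular convention (zenith θq = π·v/h, azimuth φ = 2π·u/w); the function name is ours:

```python
import math

def pixel_to_sphere(u, v, w, h, r=1.0):
    """Map a panorama pixel (u, v) to a point on the panoramic sphere.

    Assumes the standard equirectangular model: the zenith angle is
    pi * v / h and the azimuth angle is 2 * pi * u / w.
    """
    theta = math.pi * v / h        # zenith angle
    phi = 2.0 * math.pi * u / w    # azimuth angle
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z
```

For a 1000x500 panorama, the pixel (250, 250) maps to the equator a quarter-turn around the sphere.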
Secondly, surrounding the panoramic ball by a square with the side length equal to the diameter of the panoramic ball, and mapping points on the spherical surface to the corresponding cubic surface to obtain 6 multi-view images;
thirdly, rotating the cube by 45 degrees, and mapping the points on the spherical surface to the corresponding cube surface again to obtain another 6 multi-view images;
and finally, selecting at least 6 images in the horizontal direction and at least 2 images in the vertical direction from the 12 multi-view images to form a house sequence multi-view image.
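The cube-mapping step above amounts to deciding, for each spherical direction, which of the six cube faces it projects onto, namely the face of the largest-magnitude coordinate. A hypothetical sketch:

```python
def cube_face(x, y, z):
    """Which face of an axis-aligned cube around the panoramic sphere the
    direction (x, y, z) hits: the face of the largest-magnitude component."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return '+x' if x > 0 else '-x'
    if ay >= ax and ay >= az:
        return '+y' if y > 0 else '-y'
    return '+z' if z > 0 else '-z'
```

Rotating the cube by 45 degrees and repeating this assignment, as the third step describes, yields the second set of 6 views.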
2.3) then carrying out house sequence multi-view image matching and SFM reconstruction to generate a three-dimensional sparse point cloud, comprising the following operations:
firstly, extracting feature points of a house sequence multi-view image by adopting an SIFT feature operator;
secondly, matching the house multi-view images by using a support line voting and affine invariant constraint image matching mode to obtain a house sequence image matching point set;
and thirdly, performing aerial triangulation and sparse point cloud generation on the house sequence image matching point set through SFM, recovering the camera pose of each frame of panoramic image, and generating a house three-dimensional sparse point cloud.
2.4) A dense three-dimensional point cloud is generated from the three-dimensional sparse point cloud by PMVS, specifically realized as follows:
firstly, the dense three-dimensional point cloud of the house is generated from the house three-dimensional sparse point cloud by adopting PMVS (Patch-based Multi-View Stereo);
secondly, an irregular house triangular Mesh is generated based on the house dense three-dimensional point cloud;
and finally, the house Mesh patches are traversed, the normal vector of each patch is calculated, the corresponding multi-view images are searched to determine the most appropriate texture block, and the texture block is mapped onto the Mesh patch to obtain a three-dimensional model of the house with realistic texture, comprising the house three-dimensional color point cloud and a surface Mesh model constructed based on the color point cloud.
And step 3, carrying out regularization processing on the house color point cloud.
The step 3 implementation of the embodiment comprises the following specific processes:
3.1) Firstly, a smoothing operation is performed on the point cloud through surface fitting, trimming or filling the model and reducing model deformation.
3.2) secondly, carrying out point cloud plane segmentation by using a region growing algorithm, searching and combining similar point sets, and segmenting to obtain different objects;
performing plane point cloud fitting by using an iterative weight least square method, and calculating a normal vector n of a point cloud plane;
and 3.3) selecting wall surfaces: taking vertical planes as candidate wall surfaces, and then removing vertical planes whose height is below the threshold to obtain the qualifying wall surfaces.
The embodiment judges whether a plane is vertical using the formula |n·v| < ε, where n is the normal vector of the point cloud plane, v = (0, 0, 1)^T, and ε is the cosine of the angle threshold; for an angle threshold of 90° ± 1°, ε = |cos(90° ± 1°)| ≈ 0.0175. Vertical planes with height h less than 1.5 m are then removed, and the qualifying wall surfaces are obtained.
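The wall-selection test can be written directly from the two conditions of this step; `is_wall` and its default thresholds (±1° and 1.5 m, matching the embodiment) are an illustrative sketch:

```python
import math

def is_wall(normal, plane_height, angle_tol_deg=1.0, min_height=1.5):
    """A plane is a wall if it is vertical, i.e. |n . v| < eps with
    v = (0, 0, 1)^T and eps = cos(90 deg - tol) ~ 0.0175 for +/-1 deg,
    and if it is at least min_height metres tall."""
    nx, ny, nz = normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    eps = math.cos(math.radians(90.0 - angle_tol_deg))
    # since v = (0, 0, 1)^T, the dot product n . v is just nz
    return abs(nz) / norm < eps and plane_height >= min_height
```

A plane with a horizontal normal and height 2.5 m passes; a floor-like plane or a 1 m ledge does not.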
And step 4, generating a house plan map and a three-dimensional grid model based on the house point cloud.
The step 4 implementation of the embodiment comprises the following specific processes:
4.1) vertically projecting the point cloud onto a horizontal plane, and performing straight line fitting according to the density of the projection points.
And 4.2) carrying out area division by utilizing the fitted straight lines: the two-dimensional line segments divide the two-dimensional plane space into polygon units, and a space-division algorithm yields the vector polygon units partitioning the space; the vector lines are then spliced to obtain the vector line layer projected onto the 2D plane.
4.3) Utilizing the vector room plan data generated in step 4.2, the floor and ceiling height of each room is obtained from the point cloud with a height histogram method: the point counts of the height histogram exhibit two peaks, and the two corresponding height values are taken as the floor and ceiling heights of the room. The ceiling, wall surfaces and floor of each room are triangulated with the Delaunay triangulation method to construct the final room three-dimensional model, which is output in vector Mesh grid form.
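The height-histogram step, in which the two dominant peaks give the floor and ceiling heights, can be sketched as follows; the function name and bin size are our assumptions:

```python
from collections import Counter

def floor_ceiling(z_values, bin_size=0.05):
    """Bin the point heights; the two most populated bins are taken as the
    floor and ceiling heights (lower peak = floor, higher peak = ceiling)."""
    bins = Counter(round(z / bin_size) for z in z_values)
    top_two = [b for b, _ in bins.most_common(2)]
    lo, hi = sorted(b * bin_size for b in top_two)
    return lo, hi
```

With most points on the floor plane and the ceiling plane and only a few on furniture in between, the two peaks recover the room's vertical extent.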
Preferably, based on the house dense three-dimensional point cloud, the irregular triangular Mesh of the house is generated according to a Poisson reconstruction algorithm. Texture mapping is then performed on the Mesh patches of the constructed triangulation: consistency checking and color smoothness evaluation of the patch textures are carried out by adopting graph-cut theory, and color and brightness transitions are applied by distance weighting to eliminate seams between adjacent texture blocks.
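The choice of the "most appropriate texture block" is not specified in detail; one plausible criterion, shown here as a hypothetical sketch, is to prefer the image whose viewing direction is most opposed to the patch normal (the most head-on view):

```python
def best_view(face_normal, view_dirs):
    """Index of the viewing direction most opposed to the patch normal,
    i.e. the image that sees the Mesh patch most head-on. This scoring is
    an illustrative assumption, not the patent's stated criterion."""
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))
    # the most negative dot product means the view faces the patch directly
    return min(range(len(view_dirs)), key=lambda i: dot(face_normal, view_dirs[i]))
```

In practice occlusion and image resolution would also enter the score; this keeps only the angular term.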
And step 5, displaying the house 360-degree panorama, the plan view and the three-dimensional model in an associated manner.
The step 5 implementation of the embodiment comprises the following specific processes:
and 5.1) solving the coordinates of the photographing center by a light beam method, projecting the photographing center on a horizontal plane, and setting a label at the position of the projection point.
And 5.2) unifying the house panorama, the plan view and the three-dimensional model into the same coordinate system by taking the photographing central point and the corresponding label position as references.
The above process realizes a low-cost method for live-action indoor three-dimensional reconstruction from 360-degree panoramic image data, and the resulting live-action indoor three-dimensional model can be used for visual house display and basic data acquisition. Steps 2, 3 and 4 are the original contributions of the invention.
In specific implementation, a person skilled in the art can realize the automatic operation of the above process using computer software technology. System devices implementing the method, such as a computer-readable storage medium storing a computer program corresponding to the technical solution of the present invention, and computer equipment that includes and runs such a program, should also fall within the protection scope of the present invention.
In some possible embodiments, a system for establishing a house live-action three-dimensional model using public self-photographing panoramic data is provided, comprising the following modules:
the first module is used for receiving 360-degree panoramic data of a house which is automatically shot and uploaded by the public;
the second module is used for generating a house three-dimensional color point cloud by using the house 360-degree panoramic data;
the third module is used for performing regularization processing on the house color point cloud and selecting wall surfaces;
the fourth module is used for extracting the house plan map based on the house point cloud;
and the fifth module is used for displaying the house 360-degree panorama, the plan view and the three-dimensional model in an associated manner.
The specific implementation of each module can refer to the corresponding method steps, which the invention does not repeat.
In some possible embodiments, a system for building a three-dimensional real scene of a house by using self-portrait panoramic data of the public comprises a processor and a memory, wherein the memory is used for storing program instructions, and the processor is used for calling the stored instructions in the memory to execute the method for building the three-dimensional real scene of the house by using the self-portrait panoramic data of the public as described above.
In some possible embodiments, a system for building a three-dimensional real scene of a house by using self-portrait panoramic data of the public comprises a readable storage medium, wherein a computer program is stored on the readable storage medium, and when the computer program is executed, the method for building the three-dimensional real scene of the house by using the self-portrait panoramic data of the public is realized.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.
Claims (10)
1. A method for establishing a house live-action three-dimensional model by utilizing public self-photographing panoramic data, characterized by comprising the following steps:
step 1, receiving 360-degree panoramic data of a house which is automatically shot and uploaded by the public;
step 2, generating a house three-dimensional color point cloud by using house 360-degree panoramic data, wherein the implementation mode comprises the following substeps, step 2.1, performing frame extraction on a 360-degree panoramic video stream, and establishing a sequence 360-degree panoramic image set;
step 2.2, correcting the 360-degree panoramic image multi-view projection to obtain a sequence multi-view image, the realization method is as follows,
firstly, converting an originally input 360-degree panoramic image into a three-dimensional panoramic sphere space by adopting an equirectangular-projection spherical panoramic model; the equirectangular-projection spherical panoramic model converts between the three-dimensional spherical model and the two-dimensional plane of the panoramic image, and establishes the conversion relation among the three-dimensional spherical surface, the spherical texture and the two-dimensional panoramic plane;
second, enclosing the panoramic sphere in a cube whose side length equals the sphere's diameter, and mapping points on the sphere onto the corresponding cube faces to obtain 6 multi-view images;
third, rotating the cube by 45 degrees and again mapping the points on the sphere onto the corresponding cube faces to obtain another 6 multi-view images;
finally, selecting at least 6 images in the horizontal direction and at least 2 images in the vertical direction from the 12 multi-view images to form the sequential multi-view images of the house;
step 2.3, matching the sequential multi-view images of the house and performing SfM (structure-from-motion) reconstruction to generate a sparse three-dimensional point cloud;
step 2.4, generating a dense three-dimensional point cloud of the house from the sparse point cloud using PMVS; generating an irregular triangulated mesh of the house based on the dense point cloud; and searching the corresponding multi-view images for the best-fitting texture block and mapping it onto each mesh facet to obtain a house model with realistic texture, the model comprising the three-dimensional color point cloud of the house and a surface mesh built on that point cloud;
step 3, regularizing the color point cloud of the house and extracting the wall surfaces;
step 4, extracting a house plan from the house point cloud, comprising the following substeps:
step 4.1, projecting the point cloud vertically onto a horizontal plane and fitting straight lines according to the density of the projected points;
step 4.2, partitioning the plane into regions using the fitted lines to obtain a vector line layer projected onto the 2D plane;
step 4.3, using the vector room-plan data generated in step 4.2, obtaining the floor and ceiling height of each room from the point cloud by a height-histogram method: the point counts in the height histogram exhibit two peaks, and the two corresponding height values are taken as the floor and ceiling heights of the room; triangulating the ceiling, walls and floor of each room by Delaunay triangulation to build the final three-dimensional room model; and outputting the constructed room model as a vector mesh;
and step 5, displaying the 360-degree panorama, the plan and the three-dimensional model of the house in an associated manner.
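The height-histogram idea in step 4.3 can be sketched as follows. This is an illustrative reconstruction rather than the patent's implementation; the function name, the fixed bin width, and the synthetic point data are all assumptions:

```python
from collections import Counter

def floor_ceiling_heights(z_values, bin_size=0.05):
    """Estimate floor and ceiling heights from the two dominant peaks
    of a histogram over the z coordinates of a room's point cloud."""
    bins = Counter(round(z / bin_size) for z in z_values)
    # the two most populated height bins: the lower one is taken as the
    # floor, the higher one as the ceiling
    top_two = [b for b, _ in bins.most_common(2)]
    lo, hi = sorted(top_two)
    return lo * bin_size, hi * bin_size

# synthetic room: many points near z = 0 (floor) and z = 2.6 (ceiling),
# plus a few stray points on walls/furniture
points = ([0.0, 0.01, -0.02, 0.03, 0.0] * 40
          + [2.6, 2.59, 2.61, 2.6] * 40
          + [1.3, 0.8])
floor_z, ceil_z = floor_ceiling_heights(points)  # ~0.0 and ~2.6
```

A production version would operate per room polygon from step 4.2 and use a robust peak finder, but the two-peak principle is the same.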
2. The method for building a real-scene three-dimensional model of a house using the public's self-shot panoramic data as claimed in claim 1, wherein: in step 1, the 360-degree panoramic data of the house are acquired with the panoramic-camera function of a smart mobile device or with a panoramic camera.
3. The method for building a real-scene three-dimensional model of a house using the public's self-shot panoramic data as claimed in claim 2, wherein: the resolution of the smart mobile device or panoramic camera is not lower than 4K and a panoramic lens is used; the house is shot handheld, keeping the device as steady and the image coverage as complete as possible; and the panoramic video is captured at no less than 25 frames per second.
4. The method for building a real-scene three-dimensional model of a house using the public's self-shot panoramic data as claimed in claim 1, wherein: in step 2.1, the frame-extraction interval of the panoramic video is set according to the stability of shooting, as follows:
when the video is stable, frames are extracted at 5 frames per second;
when the shooting process has large jitter, frames are extracted at 3 frames per second.
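Claim 4's sampling policy can be read as extracting 5 frames per second from stable footage and 3 per second from jittery footage. Under that reading (an assumption, since "interval" could also denote a frame gap), a minimal sketch:

```python
def frame_indices(total_frames, fps, stable):
    """Pick which frames to extract from a panoramic video stream.
    Per claim 4 (as interpreted here): 5 extracted frames/second when
    the footage is stable, 3 frames/second when there is large jitter."""
    rate = 5 if stable else 3          # extracted frames per second
    step = max(1, round(fps / rate))   # keep every `step`-th source frame
    return list(range(0, total_frames, step))

# a 10-second clip at 25 fps
idx_stable = frame_indices(total_frames=250, fps=25, stable=True)   # every 5th frame
idx_shaky  = frame_indices(total_frames=250, fps=25, stable=False)  # roughly every 8th
```

Sparser sampling under jitter is plausible because shaky frames are more likely to be motion-blurred and to fail multi-view matching in step 2.3.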
5. The method for building a real-scene three-dimensional model of a house using the public's self-shot panoramic data as claimed in claim 1, wherein: the relation of the equirectangular-projection spherical panoramic model in step 2.2 is as follows,
in the formula (I), the compound is shown in the specification,the coordinates of the image points in the original panoramic image,is the coordinate of the corresponding point in the three-dimensional panoramic sphere, w is the original panoramic image length, h is the original panoramic image width, thetaqIs the zenith angle in the spherical coordinate system,is the azimuth angle in the spherical coordinate system.
6. The method for building a real-scene three-dimensional model of a house using the public's self-shot panoramic data as claimed in claim 1, wherein: in step 2.4, the irregular triangulated mesh of the house is generated from the dense three-dimensional point cloud of the house by a Poisson surface reconstruction algorithm.
7. The method for building a real-scene three-dimensional model of a house using the public's self-shot panoramic data as claimed in claim 1, wherein: in step 2.4, mapping the mesh-facet textures comprises checking the consistency and evaluating the color smoothness of the facet textures using graph-cut theory, and applying distance-weighted color and brightness transitions to eliminate seams between adjacent texture blocks.
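The distance-weighted colour transition of claim 7 might look like the one-dimensional sketch below, applied across an overlap strip between two texture blocks (the graph-cut consistency check is not shown, and the linear weight falloff is an assumption):

```python
def blend_seam(left_colors, right_colors):
    """Distance-weighted blending across a seam between two texture
    strips: each output pixel mixes the two sources, with the weight of
    each source falling off with distance into the overlap region."""
    n = len(left_colors)
    out = []
    for i in range(n):
        w_right = (i + 1) / (n + 1)   # weight grows toward the right patch
        w_left = 1.0 - w_right
        out.append(w_left * left_colors[i] + w_right * right_colors[i])
    return out

# 3-pixel overlap between a brighter and a darker texture block
blended = blend_seam([200.0, 200.0, 200.0], [100.0, 100.0, 100.0])
```

The blended strip steps smoothly from the left block's brightness toward the right block's, which is what removes the visible seam line.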
8. The method for building a real-scene three-dimensional model of a house using the public's self-shot panoramic data as claimed in claim 1, wherein: step 5 comprises the following substeps:
step 5.1, solving for the coordinates of each photographing center by bundle adjustment, projecting the centers onto the horizontal plane, and placing a label at each projected position;
and step 5.2, unifying the house panorama, the plan and the three-dimensional model into the same coordinate system, with the photographing centers and the corresponding label positions as references.
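Step 5.1's projection of photographing centres to plan coordinates reduces to dropping the height component once the centres are known; a trivial sketch (the bundle-adjustment solve itself is outside this snippet, and the coordinates are made up):

```python
def label_positions(camera_centers):
    """Project photographing centres onto the horizontal plane (z = 0),
    giving the label positions that link each panorama to the plan view."""
    return [(x, y) for (x, y, z) in camera_centers]

# two hypothetical panorama stations inside one room
labels = label_positions([(1.0, 2.0, 1.5), (3.5, 2.0, 1.4)])
```

Because the plan and the 3D model share the point cloud's coordinate frame, placing each panorama label at its projected centre is what makes the three views click-through consistent in step 5.2.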
9. The method for building a real-scene three-dimensional model of a house using the public's self-shot panoramic data as claimed in any one of claims 1 to 8, wherein: the method is used for visual display of buildings and for basic data acquisition by establishing real-scene indoor three-dimensional models.
10. A system for building a real-scene three-dimensional model of a house using the public's self-shot panoramic data, characterized in that: the system implements the method for building a real-scene three-dimensional model of a house using the public's self-shot panoramic data as claimed in any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110699234.XA CN113379901A (en) | 2021-06-23 | 2021-06-23 | Method and system for establishing house live-action three-dimension by utilizing public self-photographing panoramic data |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113379901A true CN113379901A (en) | 2021-09-10 |
Family
ID=77578654
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110699234.XA Pending CN113379901A (en) | 2021-06-23 | 2021-06-23 | Method and system for establishing house live-action three-dimension by utilizing public self-photographing panoramic data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113379901A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140301633A1 (en) * | 2013-04-09 | 2014-10-09 | Google Inc. | System and Method for Floorplan Reconstruction and Three-Dimensional Modeling |
CN104537586A (en) * | 2014-12-30 | 2015-04-22 | 郭人和 | Three-dimensional house display system and method |
CN107169136A (en) * | 2017-06-09 | 2017-09-15 | 成都智建新业建筑设计咨询有限公司 | Houseclearing three-dimensional panorama display systems |
CN109887082A (en) * | 2019-01-22 | 2019-06-14 | 武汉大学 | A kind of interior architecture three-dimensional modeling method and device based on point cloud data |
CN110189412A (en) * | 2019-05-13 | 2019-08-30 | 武汉大学 | More floor doors structure three-dimensional modeling methods and system based on laser point cloud |
WO2020006941A1 (en) * | 2018-07-03 | 2020-01-09 | 上海亦我信息技术有限公司 | Method for reconstructing three-dimensional space scene on basis of photography |
CN110949682A (en) * | 2019-12-13 | 2020-04-03 | 集美大学 | Indoor modeling unmanned aerial vehicle and indoor modeling method based on VR photography |
US20200118342A1 (en) * | 2018-10-15 | 2020-04-16 | University Of Maryland, College Park | Methods and apparatuses for dynamic navigable 360 degree environments |
CN111462326A (en) * | 2020-03-31 | 2020-07-28 | 武汉大学 | Low-cost 360-degree panoramic video camera urban pipeline three-dimensional reconstruction method and system |
CN112488783A (en) * | 2020-11-25 | 2021-03-12 | 北京有竹居网络技术有限公司 | Image acquisition method and device and electronic equipment |
Non-Patent Citations (1)
Title |
---|
LIU Yawen et al.: "Research on three-dimensional reconstruction of simple buildings using aerial images, point cloud data and vector maps", Geomatics and Information Science of Wuhan University (Journal of Wuhan University, Information Science Edition) *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113823001A (en) * | 2021-09-23 | 2021-12-21 | 北京有竹居网络技术有限公司 | Method, device, equipment and medium for generating house type graph |
CN113822936A (en) * | 2021-09-29 | 2021-12-21 | 北京市商汤科技开发有限公司 | Data processing method and device, computer equipment and storage medium |
CN116311325A (en) * | 2023-02-16 | 2023-06-23 | 江苏艾佳家居用品有限公司 | Automatic scale identification system based on artificial intelligence model |
CN116311325B (en) * | 2023-02-16 | 2023-10-27 | 江苏艾佳家居用品有限公司 | Automatic scale identification system based on artificial intelligence model |
CN116933359A (en) * | 2023-06-26 | 2023-10-24 | 武汉峰岭科技有限公司 | Building complex roof modeling method and system |
CN116933359B (en) * | 2023-06-26 | 2024-02-02 | 武汉峰岭科技有限公司 | Building complex roof modeling method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109102537B (en) | Three-dimensional modeling method and system combining two-dimensional laser radar and dome camera | |
CN110335343B (en) | Human body three-dimensional reconstruction method and device based on RGBD single-view-angle image | |
CN107247834B (en) | A kind of three dimensional environmental model reconstructing method, equipment and system based on image recognition | |
CN113379901A (en) | Method and system for establishing house live-action three-dimension by utilizing public self-photographing panoramic data | |
CN103226830B (en) | The Auto-matching bearing calibration of video texture projection in three-dimensional virtual reality fusion environment | |
WO2020192355A1 (en) | Method and system for measuring urban mountain viewing visible range | |
CN101329771B (en) | Method for rapidly modeling of urban street base on image sequence | |
Pylvanainen et al. | Automatic alignment and multi-view segmentation of street view data using 3d shape priors | |
CN106462943A (en) | Aligning panoramic imagery and aerial imagery | |
CN109102566A (en) | A kind of indoor outdoor scene method for reconstructing and its device of substation | |
CN110428501B (en) | Panoramic image generation method and device, electronic equipment and readable storage medium | |
CN110717494A (en) | Android mobile terminal indoor scene three-dimensional reconstruction and semantic segmentation method | |
WO2023280038A1 (en) | Method for constructing three-dimensional real-scene model, and related apparatus | |
CN108629829A (en) | The three-dimensional modeling method and system that one bulb curtain camera is combined with depth camera | |
CN113487723B (en) | House online display method and system based on measurable panoramic three-dimensional model | |
CN113643434B (en) | Three-dimensional modeling method based on air-ground cooperation, intelligent terminal and storage device | |
CN110660125B (en) | Three-dimensional modeling device for power distribution network system | |
CN113436559B (en) | Sand table dynamic landscape real-time display system and display method | |
CN115937288A (en) | Three-dimensional scene model construction method for transformer substation | |
CN112270736A (en) | Augmented reality processing method and device, storage medium and electronic equipment | |
CN116051747A (en) | House three-dimensional model reconstruction method, device and medium based on missing point cloud data | |
CN109788270B (en) | 3D-360-degree panoramic image generation method and device | |
Gomez-Lahoz et al. | Recovering traditions in the digital era: the use of blimps for modelling the archaeological cultural heritage | |
CN115205491A (en) | Method and device for handheld multi-view three-dimensional reconstruction | |
Ho et al. | Large scale 3D environmental modelling for stereoscopic walk-through visualisation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210910 |
|