CN111311725B - Visual field analysis method for building - Google Patents
- Publication number
- CN111311725B (application number CN201811516960.8A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- data
- component
- building
- visual field
- Prior art date
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
A visual field analysis method for a building relates to the field of visual field analysis in three-dimensional virtual environments, and comprises the following steps: data coding; three-dimensional environment construction; machine learning; visual field analysis; and result output. Its advantages are: the images are optimized so that the optimized image feature recognition algorithm can stably recognize the key feature points of a component without interference from factors such as scaling and rotation; by means of intelligent technology, the positions of windows, balconies and the like in a construction project scheme are rapidly identified, the surrounding environment is simulated, and the landscape visual field is analyzed. This can assist decisions in links such as construction project planning approval and real estate pricing, and in the real estate sales link provides the user with a simulated visual experience of viewing the outdoor landscape from within a project house type, assisting the user's decision-making.
Description
Technical Field
The invention relates to the field of visual field analysis in three-dimensional virtual environments, and in particular to a method for analyzing the visual field of a building which, by means of intelligent technology, rapidly identifies the positions of windows, balconies and the like in a construction project scheme, simulates the surrounding environment as perceived by human vision, and analyzes the landscape visual field.
Background
Three-dimensional virtual technology has driven the popularization of geographic information, but in professional GIS applications it often lacks practicality because three-dimensional spatial analysis functions are missing. Viewshed analysis, as an important spatial analysis method, is widely applied in landscape evaluation, line-of-sight occlusion judgment in real estate, signal coverage in communications, fire coverage in the military, and other areas. The problem, however, is how to better apply viewshed analysis within three-dimensional virtual technology: optimizing the image so that the optimized image feature recognition algorithm can stably recognize the key feature points of a component without interference from factors such as scaling and rotation; rapidly identifying, by means of intelligent technology, the positions of windows, balconies and the like in a construction project scheme; and simulating the surrounding environment and analyzing the landscape visual field, so as to assist decisions in links such as construction project planning approval and real estate pricing and, in the real estate sales link, to provide the user with a simulated visual experience of viewing the outdoor landscape from within a project house type, assisting house-purchase decisions.
Disclosure of Invention
The embodiment of the invention provides a method for analyzing the visual field of a building with the following effects: the images are optimized so that the optimized image feature recognition algorithm can stably recognize the key feature points of a component without interference from factors such as scaling and rotation; by means of intelligent technology, the positions of windows, balconies and the like in a construction project scheme are rapidly identified, the surrounding environment is simulated, and the landscape visual field is analyzed, which can assist decisions in links such as construction project planning approval and real estate pricing and, in the real estate sales link, provide the user with a simulated visual experience of viewing the outdoor landscape from within a project house type, assisting the user's decision-making.
The invention provides a method for analyzing the visual field of a building, which comprises the following steps:
Data coding: coding and classifying the spatial data, industry regulations, industry cases and other data required by the engineering project, and sorting the various raw data;
Three-dimensional environment construction: extracting the coded raw data, producing three-dimensional data, constructing the three-dimensional construction project and its surrounding environment, and integrating the three-dimensional scene; classifying and name-coding each constructed three-dimensional model, finally forming a three-dimensional model database;
Machine learning: extracting feature information from the extracted raw component data, and establishing a component feature information database;
Visual field analysis: identifying the component positions of the construction project in the three-dimensional scene according to the component feature information database established by machine learning; calculating the orientation of each position; capturing three-dimensional landscape pictures in all directions of the oriented hemisphere and fusing them with a picture fusion algorithm; finally, judging the visible range of each landscape component by comparison with a screenshot of that component, marking it, and calculating the percentage of the visible range;
Result output: formatting the component visual field analysis results of the construction project to a standard and outputting them to a document.
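Taken together, the five steps above form a linear pipeline. The sketch below is illustrative scaffolding only; every function here is a hypothetical stand-in, not the invention's actual implementation:

```python
# Hypothetical skeleton of the five-step pipeline; each helper is a stub
# standing in for the corresponding step described above.

def encode_data(raw):
    return {"coded": raw}                         # 1. data coding

def build_environment(coded):
    return {"scene": coded}                       # 2. three-dimensional environment construction

def learn_components(coded):
    return {"features": coded}                    # 3. machine learning (component feature database)

def analyze_visual_field(scene, features):
    return {"visible_pct": 0.0}                   # 4. visual field analysis (placeholder result)

def format_report(result):
    return f"visible range: {result['visible_pct']:.1f}%"  # 5. result output

def run_visual_field_analysis(raw):
    coded = encode_data(raw)
    scene = build_environment(coded)
    features = learn_components(coded)
    return format_report(analyze_visual_field(scene, features))
```

The ordering matters: the component feature database from step 3 is a prerequisite of the visual field analysis in step 4.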
A visual field analysis method for a building, wherein the three-dimensional environment construction comprises the following specific steps:
Above-ground three-dimensional environment construction: extracting the raw above-ground spatial data, semi-automatically constructing frame models of the above-ground structures, and attaching material textures;
Three-dimensional construction project construction: extracting the raw construction project design drawing data, and constructing a fine three-dimensional model of the construction project;
Three-dimensional environment integration: displaying the above-ground three-dimensional model database and the construction project three-dimensional model database in a unified coordinate system, a unified data format and a unified platform.
A method for analyzing the visual field of a building, wherein the above-ground three-dimensional environment construction comprises extracting the raw above-ground spatial data, semi-automatically constructing frame models of the above-ground structures, and attaching material textures, in the following specific steps:
Raw data acquisition: acquiring point cloud coordinate data of the above-ground buildings and terrain, and high-definition image data of the buildings, using an onboard laser radar device and a high-definition camera device, completing the acquisition of raw data;
Semi-automatic three-dimensional model construction: after noise reduction of the point cloud and image data, automatically constructing high-definition three-dimensional building models and a three-dimensional terrain model; attaching the high-definition image photos to the building models by a semi-automatic method with manual intervention; and simultaneously performing light processing, shadow baking and texture-mapping effect processing to adjust the display effect of the three-dimensional models;
Above-ground three-dimensional environment construction: building an integrated database from the constructed high-definition three-dimensional models, finally forming the above-ground three-dimensional model database.
A method for visual field analysis of a building, wherein the three-dimensional construction project construction comprises extracting the raw construction project design drawing data and constructing a fine three-dimensional model of the construction project, in the following specific steps:
Raw data extraction: extracting the construction project design data, and screening the data information required for modeling;
Three-dimensional modeling: according to the relevant raw data, manually applying 3ds Max modeling to form the three-dimensional scheme model library of the construction project.
A visual field analysis method for a building, wherein the machine learning comprises the following specific steps:
Three-dimensional component learning: inputting the learning data of the three-dimensional components, and learning the unique feature rules of each component through a learning algorithm;
Component feature extraction: extracting feature information according to the learning sample data and the component features learned by the algorithm;
Component feature information library construction: applying unique identification coding to the extracted three-dimensional component feature information, and constructing a component feature information database for storage.
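A minimal sketch of such a component feature information store with sequential unique identification codes; the class, its methods, and the "CSBJ" prefix (which merely echoes the city-component codes used in the examples) are illustrative assumptions, not prescribed by the method:

```python
class ComponentFeatureDB:
    """Toy component feature information database: each extracted
    feature record receives a unique identification code."""

    def __init__(self, prefix="CSBJ"):
        self.prefix = prefix
        self._next = 1
        self.records = {}

    def add(self, name, features):
        # Assign a unique identification code to the extracted feature record.
        code = f"{self.prefix}-{self._next:06d}"
        self._next += 1
        self.records[code] = {"name": name, **features}
        return code

    def lookup(self, code):
        return self.records.get(code)
```

A relational database would serve the same role in practice; a dict keyed by code is enough to show the unique-coding scheme.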
A method of building visual field analysis, wherein the three-dimensional component learning comprises the following steps:
Sample extraction: extracting three-dimensional component sample data from the coded data;
Machine learning: learning the sample data through a learning algorithm, and identifying the features of the various city components.
The image feature point location and gradient vector feature information algorithm is as follows:

Image pixel gray-level calculation formula:

$Gray(i,j)=0.3\,R(i,j)+0.59\,G(i,j)+0.11\,B(i,j)$

Image binarization calculation formula:

$Bin(i,j)=\begin{cases}1,& Gray(i,j)\ge T\\ 0,& Gray(i,j)<T\end{cases}$

wherein $(i,j)$ denotes the pixel in column $i$ and row $j$; $Gray(i,j)$ is the gray value at that position; $R(i,j)$, $G(i,j)$ and $B(i,j)$ are the original color value components of the pixel at that position; the constants 0.3, 0.59 and 0.11 are the respective weights of the three components; $Bin(i,j)$ is the binarized gray value of the pixel; and $T$ is the threshold of the binarization method.

The binarization threshold is calculated by the histogram method: with $h(g)$ the histogram statistical function, giving the number of pixels of gray value $g$, and $g=h^{-1}(v)$ its inverse, giving the gray value at which the histogram takes the value $v$, the threshold is taken between the gray values corresponding to the maximum and sub-maximum of the histogram function values:

$T=\tfrac{1}{2}\big(h^{-1}(v_{\max})+h^{-1}(v_{sub})\big)$

where $v_{\max}$ and $v_{sub}$ are the maximum and sub-maximum of the histogram function values.
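The gray conversion and histogram-based binarization can be sketched as follows; `bimodal_threshold` is one plausible reading of the maximum/sub-maximum rule (midpoint of the gray values of the two tallest histogram bins), not a verbatim transcription of the patent's formula:

```python
import numpy as np

def to_gray(rgb):
    # Gray(i, j) = 0.3*R + 0.59*G + 0.11*B
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.3 * r + 0.59 * g + 0.11 * b

def bimodal_threshold(gray, bins=256):
    # Threshold midway between the gray values of the histogram's
    # maximum and sub-maximum bins (assumed interpretation).
    hist, edges = np.histogram(gray, bins=bins, range=(0, 256))
    top_two = np.argsort(hist)[-2:]        # bin indices of the two tallest peaks
    return edges[top_two].mean()

def binarize(gray, t):
    # Bin(i, j) = 1 where Gray(i, j) >= T, else 0
    return (gray >= t).astype(np.uint8)
```

For a bimodal image (e.g. a dark background with light foreground), the threshold lands between the two populations, so the binarization separates them cleanly.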
Defining the SIFT feature detection scale space:

$L(x,y,\sigma)=G(x,y,\sigma)*I(x,y)$

wherein $G(x,y,\sigma)=\dfrac{1}{2\pi\sigma^{2}}\,e^{-(x^{2}+y^{2})/2\sigma^{2}}$ is a variable-scale Gaussian function, $(x,y)$ are the spatial coordinates, and $\sigma$ is the scale, with an initial scale value of 1.6.

Gaussian difference scale space:

$D(x,y,\sigma)=\big(G(x,y,k\sigma)-G(x,y,\sigma)\big)*I(x,y)=L(x,y,k\sigma)-L(x,y,\sigma)$

The position and scale of each key point are precisely determined by fitting a three-dimensional quadratic function, which enhances matching stability and improves noise resistance. Setting the derivative to 0 gives the precise offset $\hat{X}=-\left(\dfrac{\partial^{2}D}{\partial X^{2}}\right)^{-1}\dfrac{\partial D}{\partial X}$ and the corresponding value $D(\hat{X})=D+\dfrac{1}{2}\dfrac{\partial D}{\partial X}^{T}\hat{X}$. When $\lvert D(\hat{X})\rvert\ge 0.03$ the key point is retained; otherwise it is discarded.
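A small numpy sketch of the difference-of-Gaussian construction and the low-contrast key point test; the kernel radius and the zero-padded "same" convolution are implementation assumptions, not details from the patent:

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian: convolve rows, then columns (zero-padded edges).
    k = gaussian_kernel_1d(sigma)
    tmp = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode="same")

def dog(img, sigma=1.6, k=2 ** 0.5):
    # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)
    return blur(img, k * sigma) - blur(img, sigma)

def keep_keypoint(d_hat, thresh=0.03):
    # Retain the key point when |D(X_hat)| >= 0.03, discard otherwise.
    return abs(d_hat) >= thresh
```

On a featureless (constant) image the interior of the DoG response is zero, which is why only structured regions yield candidate key points.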
Feature information library construction: storing the learned feature information of the various city components in a relational database.
A method of visual field analysis of a building, wherein the visual field analysis comprises the following steps:
Component identification: automatically identifying the components to be analyzed in the construction project according to the three-dimensional scene component codes;
Position identification: acquiring information such as the position and orientation of each component according to the identified construction project component model;
Azimuth identification: setting the screenshot azimuth angle step according to the identified orientation and the image synthesis quality requirement, and determining the orientation coordinates of each angular azimuth;
Azimuth map acquisition: capturing an azimuth map for each orientation according to the identified position coordinates and the calculated azimuth orientation coordinates;
Azimuth map fusion: according to an image fusion algorithm, extracting feature points from the azimuth maps, correcting distortion, balancing colors and stitching the pictures, finally forming a picture of the component's visible range of the environment;
Image analysis: clustering the visible-range pictures generated for the components, searching and matching against the component feature information library, identifying the city components in the visible view, drawing the range outlines of the visible components on the picture, and calculating the percentage of each component's visible range relative to the whole component.
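The azimuth-step screenshot scheme can be illustrated by enumerating unit view directions over a component's facing hemisphere. The 30° default step and the axis convention (y toward the component's facing direction, z up) are assumptions for illustration:

```python
import math

def hemisphere_directions(az_step=30.0, el_step=30.0):
    """Unit view vectors covering a component's facing hemisphere,
    sampled every az_step degrees in azimuth and el_step in elevation."""
    dirs = []
    el = 0.0
    while el <= 90.0:
        az = -90.0
        while az <= 90.0:
            a, e = math.radians(az), math.radians(el)
            dirs.append((math.cos(e) * math.sin(a),   # x: left/right
                         math.cos(e) * math.cos(a),   # y: facing direction
                         math.sin(e)))                # z: up
            az += az_step
        el += el_step
    return dirs
```

A smaller step yields more azimuth maps and a smoother fused panorama at the cost of more screenshots, which is the quality trade-off the azimuth identification step controls.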
A visual field analysis method for a building, wherein the azimuth map fusion comprises the following steps:
Feature point extraction: extracting the feature points shared by the azimuth maps of a component position, and establishing a mapping relation;
Distortion correction: registering pixel points according to the established mapping relation, and correcting picture deformation by a distortion correction method;
Image fusion: finally cropping and outputting the corrected pictures to form one large picture.
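Under the simplifying assumption that the distortion between two azimuth maps is affine, the "mapping relation" can be estimated from shared feature points by least squares; the patent does not specify the transform model, so this is a sketch, not the method itself:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Estimate a 2x3 affine matrix A with dst ~ A @ [x, y, 1]
    from matched feature points in two azimuth maps."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    M = np.hstack([src, np.ones((len(src), 1))])     # n x 3 design matrix
    A_t, *_ = np.linalg.lstsq(M, dst, rcond=None)    # 3 x 2 solution
    return A_t.T                                     # 2 x 3 affine matrix

def apply_affine(A, pts):
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T
```

Registering pixels through the fitted transform is what allows the corrected pictures to be cropped and joined into one large picture in the final fusion step.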
A method of building visual field analysis, wherein the image analysis comprises the following steps:
Component identification: performing preliminary clustering on the visible-range picture, and matching against the component feature data in the component feature information library, thereby identifying the city components in the picture;
Range marking: plotting, in highlighted form, the range of each identified city component on the visible-range picture;
Result generation: comparing the identified city components with those in the component feature information library, and marking on the visible-range picture the percentage of each component that is visible relative to the whole component.
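The visible-percentage figure in the result generation step reduces to a mask ratio; a minimal sketch, assuming the component and its visible region are boolean masks on the same pixel grid:

```python
import numpy as np

def visible_percentage(component_mask, visible_mask):
    """Percent of the component's pixels that fall inside the visible range."""
    total = int(component_mask.sum())
    if total == 0:
        return 0.0
    seen = int(np.logical_and(component_mask, visible_mask).sum())
    return 100.0 * seen / total
```

This is the number annotated next to each highlighted component outline on the visible-range picture.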
A method of building visual field analysis, wherein the spatial data comprise: three-dimensional above-ground structure point cloud data, three-dimensional above-ground structure texture photo data, and field photo data; the various city components comprise geographic elements such as windows, doors, balconies, greening, street furniture, bridges, roads, street lamps, signboards, building structures and rivers; the landscape components comprise geographic elements such as greening, street furniture, bridges, roads, street lamps, guideboards, building structures and rivers; the above-ground three-dimensional environment construction covers the surrounding building structures, terrain, rivers, landscape greening, city components, street lamps and signboards, and bridges; the three-dimensional construction project construction covers the construction project building structures and the construction project landscape greening; the learning materials of the three-dimensional components comprise: model data, texture data, materials, and descriptive text; the unique features comprise: model features, texture features, and text features; the feature information comprises: component name, component model features, component texture features, component material features, and component text features; the component model features comprise: model length, width, height, and conventional limits; the component texture features comprise: texture length and width dimensions, texture black-and-white image, texture color values, color limits, and feature points; the component material features comprise: material color value, material name, and other materials; the component text features comprise: the words, terms and sentences describing the component.
It can be seen from the above that the embodiment of the invention discloses a visual field analysis method for a building with the following effects: the images are optimized so that the optimized image feature recognition algorithm can stably recognize the key feature points of a component without interference from factors such as scaling and rotation; by means of intelligent technology, the positions of windows, balconies and the like in a construction project scheme are rapidly identified, the surrounding environment is simulated, and the landscape visual field is analyzed, which can assist decisions in links such as construction project planning approval and real estate pricing and, in the real estate sales link, provide the user with a simulated visual experience of viewing the outdoor landscape from within a project house type, assisting the user's decision-making.
Drawings
FIG. 1 is a schematic overall flow chart of a visual field analysis method for a building according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a three-dimensional environment construction step in a method for analyzing a visual field of a construction structure according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a step of constructing an overground three-dimensional environment in a method for analyzing a visual field of a building according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of a three-dimensional construction project construction step in a construction visual field analysis method according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of a machine learning step in a visual field analysis method for building according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart of a three-dimensional component learning step in a visual field analysis method for a building according to an embodiment of the present invention;
FIG. 7 is a schematic flow chart of a view analysis step in a method for analyzing a view of a building according to an embodiment of the present invention;
FIG. 8 is a schematic flow chart of the azimuth map fusion step in a visual field analysis method for a building according to an embodiment of the present invention;
fig. 9 is a flowchart illustrating an image analysis step in a visual field analysis method for a building according to an embodiment of the present invention.
Detailed Description
For a better understanding of the present invention, reference will now be made in detail to embodiments of the invention, examples of which are illustrated in the accompanying drawings and described below; the invention is illustrated by, but not limited to, the accompanying drawings.
Example 1:
FIG. 1 shows a visual field analysis method for a building; as shown in fig. 1, the method comprises the following steps:
Data coding: coding and classifying the spatial data, city component data and other data required by the engineering project, and sorting the various raw data;
Three-dimensional environment construction: extracting the coded raw data, producing three-dimensional data, constructing the three-dimensional construction project and its surrounding environment, and integrating the three-dimensional scene; classifying and name-coding each constructed three-dimensional model, finally forming a three-dimensional model database;
Machine learning: extracting feature information from the extracted raw component data, and establishing a component feature information database;
Visual field analysis: identifying the component positions of the construction project in the three-dimensional scene according to the component feature information database established by machine learning; calculating the orientation of each position; capturing three-dimensional landscape pictures in all directions of the oriented hemisphere and fusing them with a picture fusion algorithm; finally, judging the visible range of each landscape component by comparison with a screenshot of that component, marking it, and calculating the percentage of the visible range;
Result output: formatting the component visual field analysis results of the construction project to a standard and outputting them to a document.
As shown in fig. 2, the three-dimensional environment construction comprises the following specific steps:
Above-ground three-dimensional environment construction: extracting the raw above-ground spatial data, semi-automatically constructing frame models of the above-ground structures, and attaching material textures;
Three-dimensional construction project construction: extracting the raw construction project design drawing data, and constructing a fine three-dimensional model of the construction project;
Three-dimensional environment integration: displaying the above-ground three-dimensional model database and the construction project three-dimensional model database in a unified coordinate system, a unified data format and a unified platform.
As shown in fig. 3, the above-ground three-dimensional environment construction comprises extracting the raw above-ground spatial data, semi-automatically constructing frame models of the above-ground structures, and attaching material textures, in the following specific steps:
Raw data acquisition: acquiring point cloud coordinate data of the above-ground buildings and terrain, and high-definition image data of the buildings, using an onboard laser radar device and a high-definition camera device, completing the acquisition of raw data;
Semi-automatic three-dimensional model construction: after noise reduction of the point cloud and image data, automatically constructing high-definition three-dimensional building models and a three-dimensional terrain model; attaching the high-definition image photos to the building models by a semi-automatic method with manual intervention; and simultaneously performing light processing, shadow baking and texture-mapping effect processing to adjust the display effect of the three-dimensional models;
Above-ground three-dimensional environment construction: building an integrated database from the constructed high-definition three-dimensional models, finally forming the above-ground three-dimensional model database.
As shown in fig. 4, the three-dimensional construction project construction comprises extracting the raw construction project design drawing data and constructing a fine three-dimensional model of the construction project, in the following specific steps:
Raw data extraction: extracting the construction project design data, and screening the data information required for modeling;
Three-dimensional modeling: according to the relevant raw data, manually applying 3ds Max modeling to form the three-dimensional scheme model library of the construction project.
As shown in fig. 5, the machine learning comprises the following specific steps:
Three-dimensional component learning: extracting three-dimensional component sample data from the coded data, and learning the unique feature rules of each component through a learning algorithm;
Component feature extraction: extracting feature information according to the learning sample data and the component features learned by the algorithm;
Component feature information library construction: applying unique identification coding to the extracted three-dimensional component feature information, and constructing a component feature information database for storage.
As shown in fig. 6, the three-dimensional component learning comprises the following steps:
Sample extraction: extracting three-dimensional component sample data from the coded data;
Machine learning: learning the sample data through a learning algorithm, and identifying the features of the various city components.
The image feature point location and gradient vector feature information algorithm is as follows:

Image pixel gray-level calculation formula:

$Gray(i,j)=0.3\,R(i,j)+0.59\,G(i,j)+0.11\,B(i,j)$

Image binarization calculation formula:

$Bin(i,j)=\begin{cases}1,& Gray(i,j)\ge T\\ 0,& Gray(i,j)<T\end{cases}$

wherein $(i,j)$ denotes the pixel in column $i$ and row $j$; $Gray(i,j)$ is the gray value at that position; $R(i,j)$, $G(i,j)$ and $B(i,j)$ are the original color value components of the pixel at that position; the constants 0.3, 0.59 and 0.11 are the respective weights of the three components; $Bin(i,j)$ is the binarized gray value of the pixel; and $T$ is the threshold of the binarization method.

The binarization threshold is calculated by the histogram method: with $h(g)$ the histogram statistical function, giving the number of pixels of gray value $g$, and $g=h^{-1}(v)$ its inverse, giving the gray value at which the histogram takes the value $v$, the threshold is taken between the gray values corresponding to the maximum and sub-maximum of the histogram function values:

$T=\tfrac{1}{2}\big(h^{-1}(v_{\max})+h^{-1}(v_{sub})\big)$

where $v_{\max}$ and $v_{sub}$ are the maximum and sub-maximum of the histogram function values.

Defining the SIFT feature detection scale space:

$L(x,y,\sigma)=G(x,y,\sigma)*I(x,y)$

wherein $G(x,y,\sigma)=\dfrac{1}{2\pi\sigma^{2}}\,e^{-(x^{2}+y^{2})/2\sigma^{2}}$ is a variable-scale Gaussian function, $(x,y)$ are the spatial coordinates, and $\sigma$ is the scale, with an initial scale value of 1.6.

Gaussian difference scale space:

$D(x,y,\sigma)=\big(G(x,y,k\sigma)-G(x,y,\sigma)\big)*I(x,y)=L(x,y,k\sigma)-L(x,y,\sigma)$

The position and scale of each key point are precisely determined by fitting a three-dimensional quadratic function, which enhances matching stability and improves noise resistance. Setting the derivative to 0 gives the precise offset $\hat{X}=-\left(\dfrac{\partial^{2}D}{\partial X^{2}}\right)^{-1}\dfrac{\partial D}{\partial X}$ and the corresponding value $D(\hat{X})=D+\dfrac{1}{2}\dfrac{\partial D}{\partial X}^{T}\hat{X}$. When $\lvert D(\hat{X})\rvert\ge 0.03$ the key point is retained; otherwise it is discarded.
As shown in fig. 7, the visual field analysis comprises the following steps:
Component identification: automatically identifying the components to be analyzed in the construction project according to the three-dimensional scene component codes;
Position identification: acquiring information such as the position and orientation of each component according to the identified construction project component model;
Azimuth identification: setting the screenshot azimuth angle step according to the identified orientation and the image synthesis quality requirement, and determining the orientation coordinates of each angular azimuth;
Azimuth map acquisition: capturing an azimuth map for each orientation according to the identified position coordinates and the calculated azimuth orientation coordinates;
Azimuth map fusion: according to an image fusion algorithm, extracting feature points from the azimuth maps, correcting distortion, balancing colors and stitching the pictures, finally forming a picture of the component's visible range of the environment;
Image analysis: clustering the visible-range pictures generated for the components, searching and matching against the component feature information library, identifying the city components in the visible view, drawing the range outlines of the visible components on the picture, and calculating the percentage of each component's visible range relative to the whole component.
As shown in fig. 8, the azimuth map fusion comprises the following steps:
Feature point extraction: extracting the feature points shared by the azimuth maps of a component position, and establishing a mapping relation;
Distortion correction: registering pixel points according to the established mapping relation, and correcting picture deformation by a distortion correction method;
Image fusion: finally cropping and outputting the corrected pictures to form one large picture.
As shown in fig. 9, the image analysis comprises the following steps:
Component identification: performing preliminary clustering on the visible-range picture, and matching against the component feature data in the component feature information library, thereby identifying the city components in the picture;
Range marking: plotting, in highlighted form, the range of each identified city component on the visible-range picture;
Result generation: comparing the identified city components with those in the component feature information library, and marking on the visible-range picture the percentage of each component that is visible relative to the whole component.
In the specific implementation: the spatial data comprise: three-dimensional above-ground structure point cloud data, three-dimensional above-ground structure texture photo data, and field photo data; the various city components comprise geographic elements such as windows, doors, balconies, greening, street furniture, bridges, roads, street lamps, signboards, building structures and rivers; the landscape components comprise geographic elements such as greening, street furniture, bridges, roads, street lamps, guideboards, building structures and rivers; the above-ground three-dimensional environment construction covers the surrounding building structures, terrain, rivers, landscape greening, city components, street lamps and signboards, and bridges; the three-dimensional construction project construction covers the construction project building structures and the construction project landscape greening; the learning materials of the three-dimensional components comprise: model data, texture data, materials, and descriptive text; the unique features comprise: model features, texture features, and text features; the feature information comprises: component name, component model features, component texture features, component material features, and component text features; the component model features comprise: model length, width, height, and conventional limits; the component texture features comprise: texture length and width dimensions, texture black-and-white image, texture color values, color limits, and feature points; the component material features comprise: material color value, material name, and other materials; the component text features comprise: the words, terms and sentences describing the component.
Example 2:
FIG. 1 shows a visual field analysis method for a building; as shown in fig. 1, the method comprises the following steps:
Data coding: coding and classifying the spatial data diagrams and city component data required by the engineering project, and sorting the various raw data. The spatial data include: three-dimensional above-ground structure point cloud data, coded SY-DS-DY-000001.Las; three-dimensional above-ground structure texture photo data, coded SY-DS-WL-000001.Jpg; above-ground terrain data, coded SY-DS-DX-00001.Dwg; above-ground road data, coded SY-DS-DL-000001.Shp; above-ground house data, coded SY-DM-FW-000001.Shp; construction project drawing data, coded SY-XM-ZPT-000001.Dwg; construction project building elevation and effect drawings, coded SY-XM-LMT-000001.Jpg; city component material data, coded SY-CSBJ-000001.Jpg; and city component descriptive text, coded SY-CSBJ-000001.Txt. Each code is divided into units by the "-" symbol, with the file extension following the last unit; the number in the last unit of every code is the serial number of the data, and when several data items exist, the numbers accumulate within that unit;
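The coding convention above (units separated by "-", serial number in the last unit before the file extension) can be parsed as follows; `parse_code` is an illustrative helper, not part of the patent:

```python
def parse_code(filename):
    """Split a data code such as 'SY-DS-DY-000001.Las' into its
    classification units, serial number, and file extension."""
    units = filename.split("-")
    last, _, ext = units[-1].partition(".")
    return {"units": units[:-1], "number": int(last), "extension": ext}
```

The leading units ("SY", "DS", …) then serve as the selection keys used in the later steps, e.g. extracting all data whose first two units are SY-CSBJ for machine learning.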
Three-dimensional environment construction: the coded data whose first two units are SY-DS, SY-DM, SY-DX and SY-SG are extracted, three-dimensional data production is performed with software tools, the three-dimensional foundation pit and the three-dimensional surrounding environment are constructed, and the results are stored and loaded in a three-dimensional software platform for visualization. At the same time, each constructed three-dimensional model is given a classification name and code according to the classification coding structure; for example, a ground house is coded as a building, DS-FW-LOU-000001. Each code is divided into units by the "-" symbol; the last unit in every code is the serial number of the data, and when there are multiple data items the numbers are accumulated within that unit;
Machine learning: the coded data whose first two units are SY-CSBJ are extracted, feature information learning is performed on the urban components, and a component feature information database is established;
Visual field analysis: according to the component feature information database established by machine learning, the component position information of the construction project in the three-dimensional scene and the orientation at that position are identified; three-dimensional landscape pictures are captured in all directions of the hemisphere facing that orientation and fused by a picture fusion method; the visible range of each landscape component is then determined by comparison with landscape component screenshots and marked, and the percentage of the visible range is calculated at the same time;
Result output: the component visual field analysis results for the construction project are formatted to a standard and output to a document.
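The unit-based coding scheme used in the data coding step can be sketched as a small parser. This is an illustrative sketch; the function name and returned field names are not part of the patent:

```python
def parse_code(code):
    """Split a data code such as 'SY-DS-DY-000001.las' into its
    '-'-separated units, the trailing serial number, and the file extension."""
    stem, _, ext = code.rpartition('.')
    units = stem.split('-')
    return {
        'units': units[:-1],       # classification units, e.g. ['SY', 'DS', 'DY']
        'number': int(units[-1]),  # serial number of the data item
        'extension': ext,          # file extension, e.g. 'las'
    }
```

For example, `parse_code('SY-DS-DY-000001.las')` yields the units `['SY', 'DS', 'DY']`, serial number `1` and extension `'las'`, so coded files can be grouped by their first two units (SY-DS, SY-CSBJ, and so on) as the method describes.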
A method for analyzing the visual field of a building as shown in fig. 2, wherein the three-dimensional environment construction comprises the following specific steps:
Above-ground three-dimensional environment construction: the ground spatial raw data, with codes such as SY-DS-DY and SY-DS-WL, are extracted; a ground-structure frame model is constructed semi-automatically, and material textures are attached to form a fine three-dimensional model. This covers the surrounding building structures, terrain, landscape greening, urban components, street lamps and signboards, bridges, rivers, foundation pit construction sites, mechanical equipment, building materials and the like;
Three-dimensional construction project construction: the raw construction project design drawing data, with code SY-XM, are extracted, and a fine three-dimensional model of the construction project and the three-dimensional terrain of the construction project are constructed;
Three-dimensional environment integration: the above-ground three-dimensional model database and the construction project three-dimensional model database are brought into a unified coordinate system and a unified data format and displayed on a unified platform.
A method for analyzing the visual field of a building as shown in fig. 3, wherein the above-ground three-dimensional environment construction comprises: extracting the ground spatial raw data, semi-automatically constructing a ground-structure frame model, and attaching material textures. The specific steps are as follows:
Raw data acquisition: point cloud coordinate data of the above-ground buildings and terrain and high-definition image data of the buildings are acquired using an onboard laser radar device and a high-definition camera device, completing the acquisition of the original data;
Semi-automatic three-dimensional model construction: after noise reduction of the point cloud and image data, a high-definition three-dimensional building model and a three-dimensional terrain model are constructed automatically; high-definition image photos are pasted onto the building model by a semi-automatic method with manual intervention, while lighting treatment, shadow baking and texture effect processing are performed and the display effect of the three-dimensional model is adjusted;
Above-ground three-dimensional environment construction: the constructed high-definition three-dimensional models are built into an integrated database, finally forming the above-ground three-dimensional model database.
A method for analyzing the visual field of a building as shown in fig. 4, wherein the three-dimensional construction project construction comprises: extracting the raw construction project design drawing data and constructing a fine three-dimensional model of the construction project. The specific steps are as follows:
extracting original data: extracting construction project design data, and screening data information required by modeling;
Three-dimensional modeling: according to the relevant raw data, 3ds Max modeling is applied manually to form a three-dimensional scheme model library of the construction project.
A method for analyzing the visual field of a building as shown in fig. 5, wherein the machine learning comprises the following specific steps:
three-dimensional part learning: extracting three-dimensional part sample data from the coded data, and learning the unique characteristic rule of each part through a learning algorithm;
Component feature extraction: feature information is extracted according to the learning sample data and the component features learned by the algorithm;
component characteristic information library building: and (3) carrying out unique identification coding on the extracted three-dimensional component characteristic information, and constructing a component characteristic information database for storage.
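The component feature information database described above can be sketched with the standard library's sqlite3 module. This is a minimal sketch under assumptions: the table name, column names and example records are illustrative, not taken from the patent:

```python
import sqlite3

def build_feature_db(records):
    """Create an in-memory component feature information database keyed by a
    unique identification code. The schema mirrors the feature categories
    named in the text (name, model, texture, material, text features)."""
    conn = sqlite3.connect(':memory:')
    conn.execute(
        """CREATE TABLE component_features (
               code TEXT PRIMARY KEY,   -- unique identification code
               name TEXT,               -- component name
               model_feature TEXT,      -- length/width/height, limits
               texture_feature TEXT,    -- sizes, color values, feature points
               material_feature TEXT,   -- material color value and name
               text_feature TEXT        -- words/terms/sentences describing it
           )"""
    )
    conn.executemany(
        "INSERT OR REPLACE INTO component_features VALUES (?, ?, ?, ?, ?, ?)",
        records,
    )
    conn.commit()
    return conn
```

A relational store like this supports the later search-and-match step: the visual field analysis can look a recognized component up by its code and retrieve all of its feature categories in one query.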
A method for visual field analysis of a building as shown in fig. 6, the three-dimensional component learning comprising the following steps:
Sample extraction: three-dimensional component sample data are extracted from the coded data;
Machine learning: the sample data are learned through a learning algorithm, and the features of the various urban components are identified;
The image feature point location and gradient vector feature information algorithm is as follows:
The image pixel gray-level calculation formula:
$Gray(i,j) = K_R \cdot R(i,j) + K_G \cdot G(i,j) + K_B \cdot B(i,j)$
The image binarization calculation formula:
$Bin(i,j) = \begin{cases} 255, & Gray(i,j) \geq T \\ 0, & Gray(i,j) < T \end{cases}$
where $i$ and $j$ denote the column and row of the pixel being calculated, $Gray(i,j)$ is the gray value at that position, $R(i,j)$, $G(i,j)$ and $B(i,j)$ are the components of the original pixel color value at that position, $K_R$, $K_G$ and $K_B$ are the constants 0.3, 0.59 and 0.11 respectively, $Bin(i,j)$ is the binarized gray value of the pixel at that position, and $T$ is the threshold of the binarization method;
The binarization threshold is calculated by the histogram method: with the histogram statistical function $h(g)$ counting the pixels of gray value $g$, and its inverse $h^{-1}(p)$ giving the gray value at which the histogram takes the value $p$, the threshold is taken between the gray values corresponding to the sub-maximum $p_1$ and the maximum $p_2$ of the histogram function values: $T = \tfrac{1}{2}\left(h^{-1}(p_1) + h^{-1}(p_2)\right)$;
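The gray-level conversion and histogram thresholding can be sketched in NumPy. One assumption to flag: the original formulas are partly garbled, so taking the threshold midway between the two largest histogram peaks is a reconstruction, not a confirmed detail of the patent:

```python
import numpy as np

def to_gray(rgb):
    """Weighted gray value: Gray = 0.3*R + 0.59*G + 0.11*B."""
    return 0.3 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]

def histogram_threshold(gray):
    """Threshold midway between the gray values of the two largest
    histogram peaks (the maximum and sub-maximum of the histogram)."""
    hist, edges = np.histogram(gray, bins=256, range=(0, 256))
    top2 = np.argsort(hist)[-2:]  # bin indices of the two tallest peaks
    return edges[top2].mean()

def binarize(gray, threshold):
    """Binarized image: 255 where gray >= threshold, else 0."""
    return np.where(gray >= threshold, 255, 0)
```

On a strongly bimodal image (for example a component silhouette against sky), this places the threshold in the valley between the two dominant gray populations.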
The SIFT feature detection scale space is defined as:
$L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$, with $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2}$
where $G(x, y, \sigma)$ is a variable-scale Gaussian function, $(x, y)$ are the spatial coordinates, and $\sigma$ is the scale, with an initial scale value of 1.6;
Gaussian differential (difference-of-Gaussians) scale space:
$D(x, y, \sigma) = \left(G(x, y, k\sigma) - G(x, y, \sigma)\right) * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma)$
The position and scale of each key point are precisely determined by fitting a three-dimensional quadratic function, which enhances matching stability and improves noise resistance. Setting the derivative of the fitted function $D(\mathbf{x})$ to 0 gives the precise position $\hat{\mathbf{x}} = -\left(\frac{\partial^2 D}{\partial \mathbf{x}^2}\right)^{-1} \frac{\partial D}{\partial \mathbf{x}}$ and the response $D(\hat{\mathbf{x}}) = D + \frac{1}{2} \frac{\partial D^T}{\partial \mathbf{x}} \hat{\mathbf{x}}$; when $\left|D(\hat{\mathbf{x}})\right|$ is not less than the contrast threshold (0.03 in the standard SIFT formulation) the key point is retained, otherwise it is discarded.
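A minimal difference-of-Gaussians sketch of this scale space, using SciPy's Gaussian filter. This only demonstrates building $L$ and $D$ and locating the strongest response; the quadratic keypoint refinement and contrast test are omitted, and the level count is an arbitrary choice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_space(image, sigma0=1.6, k=2 ** 0.5, levels=5):
    """Build L(x, y, sigma) at scales sigma0 * k^i, then
    D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)."""
    L = [gaussian_filter(image.astype(float), sigma0 * k ** i)
         for i in range(levels)]
    return [L[i + 1] - L[i] for i in range(levels - 1)]

def strongest_response(dog):
    """(row, col) of the strongest |D| response over all DoG levels."""
    best = max(dog, key=lambda d: np.abs(d).max())
    return np.unravel_index(np.abs(best).argmax(), best.shape)
```

A blob-like feature produces its extremal DoG response at the blob's center, which is exactly the behavior the key-point detector relies on.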
a method for visual field analysis of a building as shown in fig. 7, the visual field analysis comprising the steps of:
Component identification: the components to be analyzed in the construction project are automatically identified according to the three-dimensional scene component codes;
Position identification: information such as the position and orientation of the component is acquired from the identified construction project component model;
Azimuth identification: the screenshot azimuth step is set according to the identified orientation and the image synthesis quality requirement, and the facing coordinates of each angular azimuth are determined;
Azimuth map acquisition: an azimuth map is captured for each facing according to the identified position coordinates and the calculated azimuth facing coordinates;
Azimuth map fusion: according to an image fusion algorithm, feature points are extracted from the azimuth maps, distortion is corrected, colors are balanced and the pictures are stitched, finally forming a picture of the component's visible range of the environment;
Image analysis: the visible-range pictures generated for the components are clustered and matched against the component feature information base; the urban components in the visible view are identified, the range outline of each visible component is drawn on the picture, and the percentage of the visible range of the component relative to the whole component is calculated.
A method for analyzing the visual field of a building as shown in fig. 8, wherein the azimuth map fusion comprises the following steps:
Feature point extraction: feature points common to the azimuth maps at the component position are extracted, and a mapping relation is established;
Distortion correction: pixel points are registered according to the established mapping relation, and picture deformation is corrected by a distortion correction method;
Image fusion: the corrected pictures are finally cropped and output to form one large picture.
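The final fusion step can be sketched as seam blending between registered images. This is a simplified sketch under assumptions: the inputs are already distortion-corrected and registered, and linear feathering across the overlap is one common blending choice, not necessarily the patent's exact method:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Fuse two horizontally adjacent, already-registered grayscale images
    by linear feathering across their overlap region."""
    h, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]   # left-only region
    out[:, wl:] = right[:, overlap:]                  # right-only region
    alpha = np.linspace(0.0, 1.0, overlap)            # 0 = keep left, 1 = keep right
    out[:, wl - overlap:wl] = ((1 - alpha) * left[:, wl - overlap:]
                               + alpha * right[:, :overlap])
    return out
```

The feathering ramp hides the seam between adjacent azimuth maps, so exposure differences between screenshots fade gradually instead of producing a visible edge in the fused picture.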
A method of visual field analysis of a building as shown in fig. 9, the image analysis comprising the steps of:
Component identification: the visible range diagram is preliminarily clustered and matched with the component feature data in the component feature information base, so that the urban components in the diagram are identified;
Range marking: the range of each identified urban component is plotted on the visible range diagram in a highlighted mode;
Result generation: the identified urban components are compared with the urban components in the component feature information base, and the percentage that each visible part represents of the whole component is marked on the visible range diagram.
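The visible-percentage step above reduces to mask arithmetic once the component outlines are drawn. A minimal sketch, assuming boolean masks for the component's full extent and the rendered visible range (both names are illustrative):

```python
import numpy as np

def visible_percentage(component_mask, visible_mask):
    """Percentage of a component's full extent (component_mask) that falls
    inside the rendered visible range (visible_mask)."""
    total = int(component_mask.sum())
    if total == 0:
        return 0.0  # component not present in the view at all
    seen = int(np.logical_and(component_mask, visible_mask).sum())
    return 100.0 * seen / total
```

For example, a riverside greenbelt half-hidden behind another building would yield a value near 50, which is the figure the result-generation step marks on the diagram.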
In the specific implementation case: the spatial data include three-dimensional ground-structure point cloud data, three-dimensional ground-structure texture photo data and field photo data; the various city components include geographical elements such as windows, doors, balconies, greening, small landscape features, bridges, roads, street lamps, signboards, building structures and rivers; the landscape components include geographical elements such as greening, small landscape features, bridges, roads, street lamps, guideboards, building structures and rivers; the above-ground three-dimensional environment construction covers the surrounding building structures, terrain, rivers, landscape greening, urban components, street lamps and signboards, and bridges; the three-dimensional construction project construction covers the construction project building structures and the construction project landscape greening; the learning materials of the three-dimensional components include model data, texture data, materials and descriptive text; the unique features include model features, texture features and text features; the feature information includes the component name, component model features, component texture features, component material features and component text features; the component model features include the model length, width, height and conventional limits; the component texture features include the texture length and width dimensions, texture black-and-white image, texture color values, color limits and feature points; the component material features include the material color value, material name and other material attributes; the component text features include words, terms and sentences describing the component.
It can be seen that the embodiment of the invention discloses a visual field analysis method for a building in which the images are optimized, so that the optimized image feature recognition algorithm stably recognizes the key feature points of components without interference from factors such as scaling and rotation. By means of intelligent techniques, the positions of the windows, balconies and the like of a construction project scheme are rapidly identified, the surrounding environment is simulated and the landscape visual field is analyzed. This can assist decisions in links such as construction project planning approval and real estate pricing; in the real estate sales link it provides the user with the visual experience of viewing the outdoor landscape from within a simulated project house type, assisting the user's decision-making.
Although embodiments of the present invention have been described by way of example, those of ordinary skill in the art will appreciate that numerous modifications and variations may be made without departing from the spirit of the invention, and it is intended that the appended claims encompass such modifications and variations.
Claims (9)
1. A visual field analysis method for a building is characterized by comprising the following steps:
data coding: coding and classifying the spatial data, industry regulations and industry case data required by the engineering project, and sorting the various original data;
the spatial data includes: three-dimensional ground construction point cloud data, three-dimensional ground construction texture photo data and field photo data;
three-dimensional environment construction: extracting the coded original data, making three-dimensional data, constructing a three-dimensional construction project and a surrounding environment, and integrating three-dimensional scenes; classifying and naming coding is carried out on each constructed three-dimensional model, and a three-dimensional model database is finally formed;
machine learning: extracting feature information from the extracted learning material data of the three-dimensional components, and establishing a component feature information database;
the characteristic information includes: component name, component model feature, component texture feature, component material feature, component text feature;
the component model features include: model length, width, height, conventional limit;
the component texture features include: texture length and width dimensions, texture black-and-white images, texture color values, color limits and feature points;
the material characteristics of the component comprise: material color value, material name, other materials;
the part text feature includes: words, terms, and sentences describing the component;
the learning materials of the three-dimensional component include: model data, texture data, materials, descriptive text;
visual field analysis: identifying the component position of the construction project in the three-dimensional scene according to the component feature information database established by machine learning, calculating the orientation at that position, capturing three-dimensional landscape pictures in all directions of the facing hemisphere, fusing them by a picture fusion algorithm, finally determining the visible range of each landscape component by comparison with landscape component screenshots and marking it, and at the same time calculating the percentage of the visible range;
the landscape component includes: greening, street lamps, guideboards, building structures and river geographic elements;
result output: formatting the component visual field analysis results for the construction project to a standard and outputting them to a document.
2. The method for analyzing the visual field of a building structure according to claim 1, wherein the specific steps of the construction of the three-dimensional environment are as follows:
building an overground three-dimensional environment: extracting ground space data original data, semi-automatically constructing a ground construction object frame model, and attaching material textures;
the above-ground three-dimensional environment construction comprises the following steps: building structures, terrains, rivers, landscaping and street lamp signboards on the periphery;
and (3) constructing a three-dimensional construction project: extracting original data of construction project design drawing data, and constructing a construction project fine three-dimensional model;
the three-dimensional construction project construction comprises: building a building project and greening a landscape of the building project;
three-dimensional environment integration: and (3) carrying out unified coordinate system, unified data format and unified platform display on the ground three-dimensional model database and the construction project three-dimensional model database.
3. The method for analyzing the visual field of a building according to claim 2, wherein the above-ground three-dimensional environment is constructed by: extracting ground space data original data, semi-automatically constructing a ground construction object frame model, and attaching material textures; the construction method of the three-dimensional environment on the ground comprises the following specific steps of:
raw data acquisition: acquiring point cloud coordinate data of the above-ground buildings and terrain and high-definition image data of the buildings by using an onboard laser radar device and a high-definition camera device, and completing the acquisition of the original data;
semi-automatic three-dimensional model construction: after noise reduction of the point cloud and image data, automatically constructing a high-definition three-dimensional building model and a three-dimensional terrain model, pasting high-definition image photos onto the building model by a semi-automatic method with manual intervention, while performing lighting treatment, shadow baking and texture effect processing and adjusting the display effect of the three-dimensional model;
building an overground three-dimensional environment: and (3) carrying out integrated database building on the built high-definition three-dimensional model, and finally forming an overground three-dimensional model database.
4. The method for visual field analysis of a building according to claim 2, wherein the three-dimensional construction project is constructed by: extracting original data of construction project design drawing data, and constructing a construction project fine three-dimensional model; the construction method for the three-dimensional construction project comprises the following specific steps of:
extracting original data: extracting construction project design data, and screening data information required by modeling;
three-dimensional modeling: according to the relevant raw data, applying 3ds Max modeling manually to form a three-dimensional scheme model library of the construction project.
5. A method of building visual field analysis according to claim 1, wherein: the machine learning specifically comprises the following steps:
three-dimensional part learning: inputting learning materials of three-dimensional components, learning unique characteristic rules of each component through a learning algorithm, and constructing an information characteristic library;
the unique features include: model features, texture features, text features;
component feature extraction: extracting feature information according to the learning sample data and the feature of the component learned by the algorithm;
component characteristic information library building: and (3) carrying out unique identification coding on the extracted three-dimensional component characteristic information, and constructing a component characteristic information database for storage.
6. A method of building visibility analysis according to claim 5, wherein the three-dimensional part learning includes the steps of:
sample extraction: extracting three-dimensional component sample data from the coded data;
machine learning: through a learning algorithm, learning sample data, and identifying the characteristics of various urban parts;
the various city components include: windows, doors, balconies, greening, bridges, roads, street lamps, guideboards and river geographic elements;
the image feature point location and gradient vector feature information algorithm is as follows:
the image pixel gray-level calculation formula:
$Gray(i,j) = K_R \cdot R(i,j) + K_G \cdot G(i,j) + K_B \cdot B(i,j)$
the image binarization calculation formula:
$Bin(i,j) = \begin{cases} 255, & Gray(i,j) \geq T \\ 0, & Gray(i,j) < T \end{cases}$
where $i$ and $j$ denote the column and row of the pixel being calculated, $Gray(i,j)$ is the gray value at that position, $R(i,j)$, $G(i,j)$ and $B(i,j)$ are the components of the original pixel color value at that position, $K_R$, $K_G$ and $K_B$ are the constants 0.3, 0.59 and 0.11 respectively, $Bin(i,j)$ is the binarized gray value of the pixel at that position, and $T$ is the threshold of the binarization method;
the binarization threshold is calculated by the histogram method: with the histogram statistical function $h(g)$ counting the pixels of gray value $g$, and its inverse $h^{-1}(p)$ giving the gray value at which the histogram takes the value $p$, the threshold is taken between the gray values corresponding to the sub-maximum $p_1$ and the maximum $p_2$ of the histogram function values: $T = \tfrac{1}{2}\left(h^{-1}(p_1) + h^{-1}(p_2)\right)$;
the SIFT feature detection scale space is defined as:
$L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$, with $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2}$
where $G(x, y, \sigma)$ is a variable-scale Gaussian function, $(x, y)$ are the spatial coordinates, and $\sigma$ is the scale, with an initial scale value of 1.6;
Gaussian differential (difference-of-Gaussians) scale space:
$D(x, y, \sigma) = \left(G(x, y, k\sigma) - G(x, y, \sigma)\right) * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma)$
the position and scale of each key point are precisely determined by fitting a three-dimensional quadratic function, which enhances matching stability and improves noise resistance; setting the derivative of the fitted function $D(\mathbf{x})$ to 0 gives the precise position $\hat{\mathbf{x}} = -\left(\frac{\partial^2 D}{\partial \mathbf{x}^2}\right)^{-1} \frac{\partial D}{\partial \mathbf{x}}$ and the response $D(\hat{\mathbf{x}}) = D + \frac{1}{2} \frac{\partial D^T}{\partial \mathbf{x}} \hat{\mathbf{x}}$; when $\left|D(\hat{\mathbf{x}})\right|$ is not less than the contrast threshold the key point is retained, otherwise it is discarded;
building a feature information library: and constructing the learned characteristic information of various city components into a relational database for storage.
7. A method of visual field analysis of a building according to claim 1, wherein the visual field analysis comprises the steps of:
component identification: automatically identifying the components to be analyzed in the construction project according to the three-dimensional scene component codes;
position identification: acquiring position and orientation information of the component according to the identified construction project component model;
azimuth identification: setting the screenshot azimuth step according to the identified orientation and the image synthesis quality requirement, and determining the facing coordinates of each angular azimuth;
azimuth map acquisition: capturing an azimuth map for each facing according to the identified position coordinates and the calculated azimuth facing coordinates;
azimuth map fusion: according to an image fusion algorithm, extracting feature points from the azimuth maps, correcting distortion, balancing colors and stitching the pictures, finally forming a picture of the component's visible range of the environment;
image analysis: clustering the visible-range pictures generated for the components and matching them against the component feature information base, identifying the urban components in the visible view, drawing the range outline of each visible component on the picture, and calculating the percentage of the visible range of the component relative to the whole component.
8. The method for analyzing the visual field of a building according to claim 7, wherein: the azimuth map fusion comprises the following steps:
feature point extraction: extracting feature points common to the azimuth maps at the component position, and establishing a mapping relation;
distortion correction: registering pixel points according to the established mapping relation, and correcting picture deformation by a distortion correction method;
image fusion: finally cropping and outputting the corrected pictures to form one large picture.
9. The method for analyzing the visual field of a building according to claim 7, wherein: the image analysis comprises the following steps:
component identification: preliminarily clustering the visible range diagram and matching it with the component feature data in the component feature information base, so as to identify the urban components in the diagram;
range marking: plotting the range of each identified urban component on the visible range diagram in a highlighted mode;
result generation: comparing the identified urban components with the urban components in the component feature information base, and marking on the visible range diagram the percentage that each visible part represents of the whole component.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811516960.8A CN111311725B (en) | 2018-12-12 | 2018-12-12 | Visual field analysis method for building |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111311725A CN111311725A (en) | 2020-06-19 |
CN111311725B true CN111311725B (en) | 2023-06-13 |
Family
ID=71146754
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811516960.8A Active CN111311725B (en) | 2018-12-12 | 2018-12-12 | Visual field analysis method for building |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111311725B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855659A (en) * | 2012-07-17 | 2013-01-02 | 北京交通大学 | Three-dimensional holographic visualization system and method for high-speed comprehensively detecting train |
CN105761310A (en) * | 2016-02-03 | 2016-07-13 | 东南大学 | Simulated analysis and image display method of digital map of sky visible range |
CN105869211A (en) * | 2016-06-16 | 2016-08-17 | 成都中科合迅科技有限公司 | Analytical method and device for visible range |
CN106296818A (en) * | 2016-08-23 | 2017-01-04 | 河南智绘星图信息技术有限公司 | A kind of terrestrial space scene simulation method and system based on mobile platform |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10127721B2 (en) * | 2013-07-25 | 2018-11-13 | Hover Inc. | Method and system for displaying and navigating an optimal multi-dimensional building model |
Non-Patent Citations (2)
Title |
---|
Li Yin, Zhenxin Wang. Measuring visual enclosure for street walkability: Using machine learning algorithms and Google Street View imagery. Applied Geography. 2016, vol. 76, pp. 147-153. *
Jin Hailiang, Li Liulei, Yuan Songhe, Geng Wenxuan. A visibility analysis algorithm for three-dimensional city buildings. Bulletin of Surveying and Mapping. 2018, (No. 1), pp. 103-106. *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110136170B (en) | Remote sensing image building change detection method based on convolutional neural network | |
US11995886B2 (en) | Large-scale environment-modeling with geometric optimization | |
Garilli et al. | Automatic detection of stone pavement's pattern based on UAV photogrammetry | |
US10115165B2 (en) | Management of tax information based on topographical information | |
US8503761B2 (en) | Geospatial modeling system for classifying building and vegetation in a DSM and related methods | |
Freire et al. | Introducing mapping standards in the quality assessment of buildings extracted from very high resolution satellite imagery | |
CN108428254A (en) | The construction method and device of three-dimensional map | |
CN114758086B (en) | Method and device for constructing urban road information model | |
JP7418281B2 (en) | Feature classification system, classification method and its program | |
CN115512247A (en) | Regional building damage grade assessment method based on image multi-parameter extraction | |
CN111428582B (en) | Method for calculating urban sky width by using Internet streetscape photo | |
Gao et al. | Large-scale synthetic urban dataset for aerial scene understanding | |
CN109657728B (en) | Sample production method and model training method | |
CN115527027A (en) | Remote sensing image ground object segmentation method based on multi-feature fusion mechanism | |
Yoo et al. | True orthoimage generation by mutual recovery of occlusion areas | |
CN111311725B (en) | Visual field analysis method for building | |
CN113033386A (en) | High-resolution remote sensing image-based transmission line channel hidden danger identification method and system | |
CN115908729A (en) | Three-dimensional live-action construction method, device and equipment and computer readable storage medium | |
Krauß | Preprocessing of satellite data for urban object extraction | |
CN114627073B (en) | Terrain recognition method, apparatus, computer device and storage medium | |
Su et al. | Building Detection From Aerial Lidar Point Cloud Using Deep Learning | |
Elaksher et al. | Automatic generation of high-quality three-dimensional urban buildings from aerial images | |
Roozenbeek | Dutch Open Topographic Data Sets as Georeferenced Markers in Augmented Reality | |
CN117746221A (en) | Urban street space updating achievement evaluation method based on street view image | |
Qin et al. | A supervised method for object-based 3d building change detection on aerial stereo images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||