CN111311725A - Building visual domain analysis method - Google Patents
Building visual domain analysis method
- Publication number
- CN111311725A (application CN201811516960.8A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- component
- data
- construction project
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
A visual-field analysis method for buildings, in the field of visual-field analysis in three-dimensional virtual environments, comprising the steps of: data encoding; three-dimensional environment construction; machine learning; visual-field analysis; and result output. The advantages are: the images are optimized so that the optimized image-feature-recognition algorithm can stably identify the key feature points of a component without interference from factors such as scaling and rotation. Through intelligent techniques, the positions of windows, balconies and the like in a construction-project scheme are rapidly identified, the surroundings as seen by a human observer are simulated, and the landscape visual field is analyzed. This can assist decisions in stages such as construction-project planning approval and real-estate unit pricing, and, in the sales stage, gives prospective buyers an intuitive, simulated experience of viewing the outdoor landscape from a given unit, assisting their purchase decision.
Description
Technical Field
The invention relates to the field of visual-field analysis in three-dimensional virtual environments, and in particular to a building visual-field analysis method that uses intelligent techniques to rapidly identify the positions of windows, balconies and the like in a construction-project scheme, simulate the surroundings as seen by a human observer, and analyze the landscape visual field.
Background
Three-dimensional virtual technology has driven the popularization of geographic information, but for professional GIS applications it remains impractical because it lacks three-dimensional spatial-analysis functions. As an important spatial-analysis method, viewshed analysis is widely applied to landscape evaluation, line-of-sight occlusion judgment in real estate, signal coverage in communications, fire coverage in the military, and similar problems. The outstanding problem is how to better apply viewshed analysis within three-dimensional virtual technology: the images must be optimized so that the image-feature-recognition algorithm can stably identify the key feature points of components without interference from factors such as scaling and rotation, and intelligent techniques are needed to rapidly identify the positions of windows, balconies and the like in a construction-project scheme, simulate the surroundings as seen by a human observer, and analyze the landscape visual field, so as to assist decisions in construction-project planning approval and real-estate unit pricing and, in the sales stage, to give prospective buyers an intuitive, simulated experience of viewing the outdoor landscape from a given unit, assisting their purchase decision.
Disclosure of Invention
The embodiment of the invention provides a building visual-field analysis method with the following advantages: the images are optimized so that the optimized image-feature-recognition algorithm can stably identify the key feature points of a component without interference from factors such as scaling and rotation; through intelligent techniques, the positions of windows, balconies and the like in a construction-project scheme are rapidly identified, the surroundings as seen by a human observer are simulated, and the landscape visual field is analyzed. This can assist decisions in stages such as construction-project planning approval and real-estate unit pricing, and in the sales stage gives prospective buyers an intuitive, simulated experience of viewing the outdoor landscape from a given unit, assisting their purchase decision.
The invention provides a building visual domain analysis method, wherein the method comprises the following steps:
data encoding: classify and code the spatial data required by the engineering project together with data such as industry regulations and industry cases, and organize the various raw data;
three-dimensional environment construction: extracting the coded original data, making three-dimensional data, constructing a three-dimensional construction project and a surrounding environment, and integrating three-dimensional scenes; meanwhile, classifying and naming codes are carried out on each constructed three-dimensional model, and a three-dimensional model database is finally formed;
machine learning: extracting characteristic information of the extracted original data of the component, and establishing a characteristic information database of the component;
visual field analysis: identify the position of a construction-project component in the three-dimensional scene according to the component feature-information database established by machine learning, and calculate the orientation of that position; capture three-dimensional landscape pictures in all directions over the facing hemisphere and fuse them through a picture-fusion algorithm; compare the fused picture with screenshots of landscape components to judge the visible range of each landscape component; and mark and calculate the percentage of the visible range;
result output: format the visual-field analysis results of the components in the construction project to a standard layout and output them to a document.
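The five steps above can be sketched as a simple pipeline; the function and stage names below are illustrative placeholders, not identifiers from the patent.

```python
def analyze_building_visual_field(raw_data):
    """Run the patent's five stages in order; each lambda is a stub
    standing in for the full procedure described in the text
    (illustrative only)."""
    stages = [
        ("data encoding",         lambda d: {"coded": d}),
        ("environment building",  lambda d: {"scene": d}),
        ("machine learning",      lambda d: {"feature_db": d}),
        ("visual-field analysis", lambda d: {"visibility": d}),
        ("result output",         lambda d: {"report": d}),
    ]
    data = raw_data
    for _name, stage in stages:
        data = stage(data)  # each stage consumes the previous stage's output
    return data
```

Each real stage would, of course, carry the substantial work the following sections describe; the sketch only fixes the data-flow order.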
A method for analyzing a visual domain of a building, wherein the method for constructing the three-dimensional environment comprises the following specific steps:
constructing an overground three-dimensional environment: extracting the original data of the overground spatial data, constructing an overground building object frame model in a semi-automatic manner, and pasting texture on the overground building object frame model;
constructing a three-dimensional construction project: extracting original data of construction project design drawing data, and constructing a construction project fine three-dimensional model;
three-dimensional environment integration: bring the above-ground three-dimensional model database and the construction-project three-dimensional model database into a unified coordinate system and a unified data format, and display them on a unified platform.
A method for analyzing visual field of a building, wherein the aboveground three-dimensional environment is constructed by: extracting the original data of the overground spatial data, constructing an overground building object frame model in a semi-automatic manner, and pasting texture on the overground building object frame model; the method comprises the following specific steps of constructing the overground three-dimensional environment:
collecting original data: acquiring point cloud coordinate data of overground buildings and landforms and high-definition image data of the buildings at the same time by using airborne and vehicle-mounted laser radar equipment and high-definition camera equipment to finish the acquisition of original data;
semi-automatically constructing a three-dimensional model: after the point cloud and image data are subjected to noise reduction processing, automatically constructing a high-precision three-dimensional building model and a three-dimensional terrain model, attaching a high-definition image photo to the building model by a manual intervention semi-automatic method, simultaneously performing lighting processing, shadow baking and reverse attaching effect processing, and adjusting the display effect of the three-dimensional model;
constructing an overground three-dimensional environment: and integrating the built high-precision three-dimensional model to build a library, and finally forming an overground three-dimensional model database.
A building visual domain analysis method, wherein the three-dimensional construction project building comprises: extracting original data of construction project design drawing data, and constructing a construction project fine three-dimensional model; the method comprises the following specific steps of constructing the three-dimensional construction project:
extracting original data: extracting construction project design data, and screening data information required by modeling;
three-dimensional modeling: manually build models in 3ds Max from the relevant raw data to form a three-dimensional scheme model library for the construction project.
A building visual domain analysis method is provided, wherein the machine learning comprises the following specific steps:
three-dimensional part learning: inputting learning data of the three-dimensional component, and learning the unique characteristic rule of each component through a learning algorithm;
extracting the feature of the part: extracting feature information according to the learning sample data and the part features learned by the algorithm;
establishing a database of component characteristic information: and after unique identification coding is carried out on the extracted three-dimensional component characteristic information, a component characteristic information database is constructed and stored.
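A minimal in-memory stand-in for such a component feature-information database, assuming a simple sequential unique-ID scheme (the `CMP-` prefix and six-digit width are invented for illustration; the patent's actual identification coding is not reproduced here):

```python
import itertools


class ComponentFeatureDB:
    """Toy component feature-information database: assigns each stored
    component a unique sequential identification code."""

    def __init__(self):
        self._seq = itertools.count(1)
        self._rows = {}

    def add(self, name, features):
        """Store a component's feature record and return its unique ID."""
        uid = f"CMP-{next(self._seq):06d}"  # hypothetical ID scheme
        self._rows[uid] = {"name": name, **features}
        return uid

    def get(self, uid):
        return self._rows[uid]
```

A production version would live in a relational database, as the description later notes, but the interface — code, store, look up — is the same.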
A method for visual domain analysis of a structure, wherein said three-dimensional part learning comprises the steps of:
extracting a sample: extracting three-dimensional part sample data from the encoded data;
machine learning: learning sample data through a learning algorithm, and identifying characteristics of various urban parts;
the image feature point location and gradient vector feature information algorithm is as follows:
image pixel gray-scale calculation formula:

Gray(i, j) = a·R(i, j) + b·G(i, j) + c·B(i, j) ①

image binarization calculation formula:

Black(i, j) = 0 if Gray(i, j) < Alph; Black(i, j) = 255 otherwise ②

the histogram method calculates the binarization threshold α:

α = (g0 + g1) / 2 ③

where i, j denote the i-th column and j-th row of the image; Gray(i, j) is the gray value of the pixel at that position; R(i, j), G(i, j) and B(i, j) are the red, green and blue components of the original pixel color value at that position; a, b and c are the constants 0.3, 0.59 and 0.11; Black(i, j) is the gray value of the pixel after binarization; and Alph is the threshold used in the binarization;

the histogram statistical function is c = f(g), with inverse function g = f⁻¹(c); c0 and c1 denote the second-largest and largest values of the gray-level histogram function, and g0 = f⁻¹(c0), g1 = f⁻¹(c1) are the gray values at which the histogram function attains them;
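Formulas ① to ③ can be sketched in plain Python as follows; taking the threshold as the midpoint between the two highest histogram peaks is one plausible reading of the g0/g1 definition above:

```python
def gray(r, g, b, a=0.3, bw=0.59, c=0.11):
    """Formula ①: weighted gray value of an RGB pixel."""
    return a * r + bw * g + c * b


def binarize(gray_img, alph):
    """Formula ②: map each gray pixel to 0 (below threshold) or 255."""
    return [[0 if px < alph else 255 for px in row] for row in gray_img]


def histogram_threshold(gray_img):
    """Formula ③ (one plausible reading): midpoint of the gray levels
    of the two highest histogram peaks, g0 and g1."""
    hist = [0] * 256
    for row in gray_img:
        for px in row:
            hist[int(px)] += 1
    g1 = max(range(256), key=hist.__getitem__)                      # largest peak
    g0 = max((g for g in range(256) if g != g1), key=hist.__getitem__)  # second
    return (g0 + g1) / 2
```

For a strongly bimodal image (e.g. dark component against bright sky) the midpoint of the two peaks separates the modes cleanly, which is all the downstream feature extraction needs.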
SIFT feature detection scale space definition:
L(i,j,σ)=G(i,j,σ)×I(i,j) ④
where G(i, j, σ) = (1 / (2πσ²)) · e^(−(i² + j²) / (2σ²)) is the scale-variable Gaussian function, I(i, j) is the image at spatial coordinate (i, j), σ is the scale, and the initial scale value is 1.6;
gaussian difference scale space:
D(i,j,σ)=(G(i,j,kσ)-G(i,j,σ))×I(i,j)=L(i,j,kσ)-L(i,j,σ) ⑤
down-sampling: k = 2^(1/s), where s is the number of layers in each group (here s = 4), and σ takes the values σ, kσ, k²σ, …, k^(n−1)σ;
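The scale sequence of the down-sampling step, with k = 2^(1/s), can be generated as follows (the number of scales n per octave is an illustrative choice):

```python
def dog_scales(sigma0=1.6, s=4, n=6):
    """Scales sigma, k*sigma, k^2*sigma, ... with k = 2**(1/s), as in
    the down-sampling step; sigma0 = 1.6 is the initial scale from the
    text, n is illustrative."""
    k = 2 ** (1 / s)
    return [sigma0 * k ** i for i in range(n)]
```

With s = 4, every fourth scale doubles — k⁴ = 2 — which is what makes each group (octave) of the pyramid span exactly one factor of two in scale.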
The positions and scales of the key points are accurately determined by fitting a three-dimensional quadratic function, which enhances matching stability and improves noise resistance; the accurate position x̂ is obtained where the derivative of D equals 0, and when |D(x̂)| ≥ 0.03 the key point is retained, otherwise it is discarded;
the modulus M(i, j) and direction angle θ(i, j) of the gradient at each key point are calculated as:

M(i, j) = √((L(i+1, j) − L(i−1, j))² + (L(i, j+1) − L(i, j−1))²) ⑥

θ(i, j) = atan2(L(i, j+1) − L(i, j−1), L(i+1, j) − L(i−1, j)) ⑦
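Formulas ⑥ and ⑦ translate directly into code; here L is a 2-D list of smoothed gray values indexed L[i][j]:

```python
import math


def keypoint_gradient(L, i, j):
    """Modulus M(i, j) and direction angle theta(i, j) of the gradient
    at a key point, per formulas 6 and 7 (central differences on the
    smoothed image L)."""
    di = L[i + 1][j] - L[i - 1][j]  # difference along i
    dj = L[i][j + 1] - L[i][j - 1]  # difference along j
    m = math.sqrt(di * di + dj * dj)
    theta = math.atan2(dj, di)
    return m, theta
```

The two-argument atan2 keeps the orientation unambiguous over the full −π…π range, which a plain ratio inside atan would not.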
establishing a database of the characteristic information: and constructing the learned characteristic information of various city parts into a relational database for storage.
A method for visual field analysis of a structure, wherein said visual field analysis comprises the steps of:
an identification component: automatically identifying a component to be analyzed in a construction project according to the three-dimensional scene component code;
identifying the position: acquiring information such as the position, orientation and direction of the component according to the identified construction project component model;
orientation identification: set the angular step of the screenshot azimuths according to the identified orientation and the required image-synthesis quality, and determine the direction coordinates for each azimuth angle;
obtaining a direction map: respectively intercepting the orientation maps of the orientations according to the identified position coordinates and the calculated orientation coordinates;
orientation-map fusion: according to an image-fusion algorithm, perform feature-point extraction, distortion correction, color balancing and image stitching on the multi-directional orientation maps, finally forming an image of the component's visible range of the environment;
image analysis: cluster the visible-range pictures generated for the component and match them by search against the component feature-information database; identify the urban components on the visibility map, plot the outline of each visible component on the map, and calculate the percentage of each component's visible part relative to the whole component.
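The orientation-identification step above can be sketched as a sweep of the hemisphere the component faces in fixed angular steps; the ±90° sweep is an assumption for illustration — the patent does not fix the exact angular range:

```python
def view_azimuths(facing_deg, step_deg):
    """Screenshot azimuths covering the hemisphere a component faces:
    from facing - 90 deg to facing + 90 deg in the given angular step
    (a plain reading of the 'angle step length' instruction; the exact
    sweep in the patent may differ)."""
    a = facing_deg - 90
    out = []
    while a <= facing_deg + 90:
        out.append(a % 360)  # normalize to 0..359
        a += step_deg
    return out
```

A smaller step yields more screenshots and a higher-quality fused panorama, at the cost of more capture and stitching work — exactly the quality/step trade-off the text describes.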
A building visual domain analysis method, wherein the orientation map fusion comprises the following steps:
extracting characteristic points: extracting common position points in each azimuth graph of the part position and establishing a mapping relation;
distortion correction: register the pixel points according to the established mapping relation, and correct image deformation through a distortion-correction method;
image fusion: finally, crop the corrected pictures and output them as a single large picture.
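A toy stand-in for the stitch-and-fuse step: two horizontally overlapping grayscale strips blended with a linear feather across the overlap. Real stitching would also involve the feature-point matching and distortion correction described above; this shows only the final blend.

```python
def feather_blend(left, right, overlap):
    """Fuse two horizontally overlapping grayscale strips (lists of
    rows) with a linear feather over `overlap` columns - a toy stand-in
    for the stitch-and-fuse step."""
    out = []
    for lrow, rrow in zip(left, right):
        row = lrow[:-overlap]                 # left-only part
        for k in range(overlap):              # feathered overlap
            w = (k + 1) / (overlap + 1)       # weight ramps toward right strip
            row.append((1 - w) * lrow[len(lrow) - overlap + k] + w * rrow[k])
        row.extend(rrow[overlap:])            # right-only part
        out.append(row)
    return out
```

The linear ramp hides the seam between adjacent azimuth screenshots; production stitchers use the same idea with more elaborate weighting.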
A method for visual domain analysis of a structure, wherein said image analysis comprises the steps of:
component identification: perform preliminary clustering on the visible-range map and match it against the component feature data in the component feature-information database, thereby identifying the urban components on the map;
area plotting: plot the extent of each recognized urban component in highlighted form on the visible-range map;
result generation: compare each identified urban component with the urban components in the component feature-information database, and mark on the visible-range map the percentage of the component that is visible relative to the whole component.
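The percentage in the result-generation step reduces to a simple ratio over a visibility mask — here taken as a 0/1 grid over the component's extent, which is an illustrative representation rather than the patent's data structure:

```python
def visible_percentage(mask):
    """Share of a component's pixels marked visible in a 0/1 mask over
    the component's footprint (illustrative representation)."""
    total = sum(len(row) for row in mask)
    seen = sum(sum(row) for row in mask)
    return 100.0 * seen / total if total else 0.0
```

This is the number that would be annotated next to each component on the visible-range map.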
A method for visual domain analysis of a structure, wherein said spatial data comprise: three-dimensional above-ground building point-cloud data, three-dimensional above-ground building texture photographs, and photographs taken in the field; the various urban components comprise geographic elements such as windows, doors, balconies, greenery, landscape furnishings, bridges, roads, street lamps, signboards, buildings and rivers; the landscape components comprise geographic elements such as greenery, landscape furnishings, bridges, roads, street lamps, signboards, buildings and rivers; the above-ground three-dimensional environment construction covers the surrounding above-ground buildings, terrain, rivers, landscape greening, urban components, street lamps and signboards, and bridges; the three-dimensional construction-project construction covers the project buildings and the project landscape greening; the learning material for a three-dimensional component comprises: model data, texture data, material data and descriptive text; the unique features comprise: model features, texture features, material features and text features; the feature information comprises: component name, component model features, component texture features, component material features and component text features; the component model features comprise: the model's length, width, height and normal-vector extrema; the component texture features comprise: texture length and width, texture black-and-white image, texture color values, color extrema and feature points; the component material features comprise: material color value, material name and other material properties; the component text features comprise: words, phrases or sentences describing the component.
In summary, the embodiment of the invention provides a building visual-field analysis method in which the images are optimized so that the optimized image-feature-recognition algorithm can stably identify the key feature points of a component without interference from factors such as scaling and rotation; through intelligent techniques, the positions of windows, balconies and the like in a construction-project scheme are rapidly identified, the surroundings as seen by a human observer are simulated, and the landscape visual field is analyzed. This can assist decisions in stages such as construction-project planning approval and real-estate unit pricing, and in the sales stage gives prospective buyers an intuitive, simulated experience of viewing the outdoor landscape from a given unit, assisting their purchase decision.
Drawings
FIG. 1 is a schematic overall flow chart of a method for analyzing a visual domain of a structure according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating steps of a method for analyzing a visual domain of a structure for constructing a three-dimensional environment according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating the steps of constructing the three-dimensional environment on the ground in a visual domain analysis method for a structure according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating steps of constructing a three-dimensional construction project in a visual domain analysis method for a construction according to an embodiment of the present invention;
FIG. 5 is a flow chart illustrating a machine learning step of a method for analyzing a visual domain of a structure according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart illustrating a three-dimensional part learning step in a visual domain analysis method for a structure according to an embodiment of the present invention;
FIG. 7 is a schematic flow chart illustrating the visual field analyzing step of a method for analyzing the visual field of a structure according to an embodiment of the present invention;
FIG. 8 is a schematic flow chart illustrating fusion of orientation maps in a method for analyzing a visual domain of a structure according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating an image analysis step in a method for analyzing a visual field of a structure according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the technical solution of the present invention, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments, wherein the exemplary embodiments and the description of the present invention are provided to explain the present invention, but not to limit the present invention.
Example 1:
FIG. 1 is a method for analyzing the visual domain of a structure, as shown in FIG. 1, comprising the steps of:
data encoding: classify and code the spatial data and urban-component data required by the engineering project, and organize the various raw data;
three-dimensional environment construction: extracting the coded original data, making three-dimensional data, constructing a three-dimensional construction project and a surrounding environment, and integrating three-dimensional scenes; meanwhile, classifying and naming codes are carried out on each constructed three-dimensional model, and a three-dimensional model database is finally formed;
machine learning: extracting characteristic information of the extracted original data of the component, and establishing a characteristic information database of the component;
visual field analysis: identify the position of a construction-project component in the three-dimensional scene according to the component feature-information database established by machine learning, and calculate the orientation of that position; capture three-dimensional landscape pictures in all directions over the facing hemisphere and fuse them through a picture-fusion algorithm; compare the fused picture with screenshots of landscape components to judge the visible range of each landscape component; and mark and calculate the percentage of the visible range;
result output: format the visual-field analysis results of the components in the construction project to a standard layout and output them to a document.
As shown in fig. 2, a method for analyzing a visual domain of a structure includes the following specific steps:
constructing an overground three-dimensional environment: extracting the original data of the overground spatial data, constructing an overground building object frame model in a semi-automatic manner, and pasting texture on the overground building object frame model;
constructing a three-dimensional construction project: extracting original data of construction project design drawing data, and constructing a construction project fine three-dimensional model;
three-dimensional environment integration: bring the above-ground three-dimensional model database and the construction-project three-dimensional model database into a unified coordinate system and a unified data format, and display them on a unified platform.
A method for analyzing the visual field of a building, as shown in fig. 3, wherein the above-ground three-dimensional environment is constructed by: extracting the original data of the overground spatial data, constructing an overground building object frame model in a semi-automatic manner, and pasting texture on the overground building object frame model; the method comprises the following specific steps of constructing the overground three-dimensional environment:
collecting original data: acquiring point cloud coordinate data of overground buildings and landforms and high-definition image data of the buildings at the same time by using airborne and vehicle-mounted laser radar equipment and high-definition camera equipment to finish the acquisition of original data;
semi-automatically constructing a three-dimensional model: after the point cloud and image data are subjected to noise reduction processing, automatically constructing a high-precision three-dimensional building model and a three-dimensional terrain model, attaching a high-definition image photo to the building model by a manual intervention semi-automatic method, simultaneously performing lighting processing, shadow baking and reverse attaching effect processing, and adjusting the display effect of the three-dimensional model;
constructing an overground three-dimensional environment: and integrating the built high-precision three-dimensional model to build a library, and finally forming an overground three-dimensional model database.
A method for analyzing the visual domain of a building as shown in fig. 4, wherein the three-dimensional construction project is constructed by: extracting original data of construction project design drawing data, and constructing a construction project fine three-dimensional model; the method comprises the following specific steps of constructing the three-dimensional construction project:
extracting original data: extracting construction project design data, and screening data information required by modeling;
three-dimensional modeling: manually build models in 3ds Max from the relevant raw data to form a three-dimensional scheme model library for the construction project.
As shown in fig. 5, the method for analyzing the visual domain of the building includes the following specific steps:
three-dimensional part learning: extracting three-dimensional component sample data from the coded data, and learning the unique characteristic rule of each component through a learning algorithm;
extracting the feature of the part: extracting feature information according to the learning sample data and the part features learned by the algorithm;
establishing a database of component characteristic information: and after unique identification coding is carried out on the extracted three-dimensional component characteristic information, a component characteristic information database is constructed and stored.
As shown in fig. 6, the method for analyzing the visual domain of the building structure specifically includes the following steps:
extracting a sample: extracting three-dimensional part sample data from the encoded data;
feature learning: learn from the sample data and identify the features of the various urban components;
the image feature point location and gradient vector feature information learning algorithm comprises the following steps:
image pixel gray-scale calculation formula:

Gray(i, j) = a·R(i, j) + b·G(i, j) + c·B(i, j) ①

image binarization calculation formula:

Black(i, j) = 0 if Gray(i, j) < Alph; Black(i, j) = 255 otherwise ②

the histogram method calculates the binarization threshold α:

α = (g0 + g1) / 2 ③

where i, j denote the i-th column and j-th row of the image; Gray(i, j) is the gray value of the pixel at that position; R(i, j), G(i, j) and B(i, j) are the red, green and blue components of the original pixel color value at that position; a, b and c are the constants 0.3, 0.59 and 0.11; Black(i, j) is the gray value of the pixel after binarization; and Alph is the threshold used in the binarization;

the histogram statistical function is c = f(g), with inverse function g = f⁻¹(c); c0 and c1 denote the second-largest and largest values of the gray-level histogram function, and g0 = f⁻¹(c0), g1 = f⁻¹(c1) are the gray values at which the histogram function attains them;
SIFT feature detection scale space definition:
L(i,j,σ)=G(i,j,σ)×I(i,j) ④
where G(i, j, σ) = (1 / (2πσ²)) · e^(−(i² + j²) / (2σ²)) is the scale-variable Gaussian function, I(i, j) is the image at spatial coordinate (i, j), σ is the scale, and the initial scale value is 1.6;
gaussian difference scale space:
D(i,j,σ)=(G(i,j,kσ)-G(i,j,σ))×I(i,j)=L(i,j,kσ)-L(i,j,σ) ⑤
down-sampling: k = 2^(1/s), where s is the number of layers in each group (here s = 4), and σ takes the values σ, kσ, k²σ, …, k^(n−1)σ;
The positions and scales of the key points are accurately determined by fitting a three-dimensional quadratic function, which enhances matching stability and improves noise resistance; the accurate position x̂ is obtained where the derivative of D equals 0, and when |D(x̂)| ≥ 0.03 the key point is retained, otherwise it is discarded;
the modulus M(i, j) and direction angle θ(i, j) of the gradient at each key point are calculated as:

M(i, j) = √((L(i+1, j) − L(i−1, j))² + (L(i, j+1) − L(i, j−1))²) ⑥

θ(i, j) = atan2(L(i, j+1) − L(i, j−1), L(i+1, j) − L(i−1, j)) ⑦
a method for visual field analysis of a structure as shown in fig. 7, said visual field analysis comprising the steps of:
an identification component: automatically identifying a component to be analyzed in a construction project according to the three-dimensional scene component code;
identifying the position: acquiring information such as the position, orientation and direction of the component according to the identified construction project component model;
orientation identification: set the angular step of the screenshot azimuths according to the identified orientation and the required image-synthesis quality, and determine the direction coordinates for each azimuth angle;
obtaining a direction map: respectively intercepting the orientation maps of the orientations according to the identified position coordinates and the calculated orientation coordinates;
orientation-map fusion: according to an image-fusion algorithm, perform feature-point extraction, distortion correction, color balancing and image stitching on the multi-directional orientation maps, finally forming an image of the component's visible range of the environment;
image analysis: cluster the visible-range pictures generated for the component and match them by search against the component feature-information database; identify the urban components on the visibility map, plot the outline of each visible component on the map, and calculate the percentage of each component's visible part relative to the whole component.
A method for analyzing the visual domain of a structure as shown in fig. 8, wherein the fusion of the orientation maps comprises the following steps:
extracting characteristic points: extracting common position points in each azimuth graph of the part position and establishing a mapping relation;
distortion correction: register the pixel points according to the established mapping relation, and correct image deformation through a distortion-correction method;
image fusion: finally, crop the corrected pictures and output them as a single large picture.
A method for visual domain analysis of a structure as shown in fig. 9, said image analysis comprising the steps of:
component identification: perform preliminary clustering on the visible-range map and match it against the component feature data in the component feature-information database, thereby identifying the urban components on the map;
area plotting: plot the extent of each recognized urban component in highlighted form on the visible-range map;
result generation: compare each identified urban component with the urban components in the component feature-information database, and mark on the visible-range map the percentage of the component that is visible relative to the whole component.
The specific implementation case is as follows: the spatial data include three-dimensional above-ground structure point cloud data, three-dimensional above-ground structure texture photo data and field-shot photo data; the various urban components include geographic elements such as windows, doors, balconies, greenery, small landscape features, bridges, roads, street lamps, signboards, buildings and rivers; the landscape components include geographic elements such as greenery, small landscape features, bridges, roads, street lamps, signboards, buildings and rivers; the above-ground three-dimensional environment construction covers the surrounding above-ground buildings, terrain, rivers, landscape greening, urban components, street lamps and signboards, and bridges; the three-dimensional construction project construction covers the construction project buildings and the construction project landscape greening; the learning material for a three-dimensional component comprises model data, texture data, material and descriptive text; the unique features include model features, texture features, material features and text features; the feature information includes the component name, component model features, component texture features, component material features and component text features; the component model features include the model length, width, height and normal limits; the component texture features include the texture length and width, the texture black-and-white image, the texture color values, the color limit values and the feature points; the component material features include the material color value, the material name and other materials; and the component text features include words, phrases or sentences describing the component.
Example 2:
As shown in FIG. 1, a method for analyzing the visual domain of a structure comprises the following steps:
Data encoding: the spatial data maps and urban component data required by the engineering project are coded and classified, and the various raw data are sorted. The spatial data include: three-dimensional above-ground structure point cloud data, coded SY-DS-DY-000001.las; three-dimensional above-ground structure texture photo data and field-shot photo data, coded SY-DS-WL-000001.jpg; above-ground terrain data, coded SY-DS-DX-00001.dwg; above-ground road data, coded SY-DS-DL-000001.shp; above-ground house data, coded SY-DM-FW-000001.shp; construction project drawing data, coded SY-XM-ZPT-000001.dwg; construction project building elevation and rendering drawings, coded SY-XM-LMT-000001.jpg; urban component material data, coded SY-CSBJ-000001.jpg; urban component descriptive text, coded SY-CSBJ-000001.txt. Each code is divided into units by the '-' symbol, and the '.' symbol is followed by the file extension; the last unit in every code is the serial number of the data, which is incremented when multiple data items exist;
Three-dimensional environment construction: using software tools, three-dimensional data are produced from the coded data whose first two units are 'SY-DS', 'SY-DM', 'SY-DX' and 'SY-SG', the three-dimensional foundation pit and three-dimensional surrounding environment are constructed, and the results are merged into a library and loaded into a three-dimensional software platform for visualization. Meanwhile, each constructed three-dimensional model is classified and named according to a geode structure; for example, an above-ground house is coded by building as DS-FW-LOU-000001. Each code is divided into units by the '-' symbol; the last unit in every code is the serial number of the data, which is incremented when multiple data items exist;
Machine learning: the coded data whose first two units are 'SY-CSBJ' are extracted, the feature information of the urban components is learned, and a component feature information database is established;
Visual field analysis: according to the component feature information database established by machine learning, the component position information of the construction project in the three-dimensional scene and the orientation of that position are identified; three-dimensional landscape pictures are captured in all directions of the oriented hemisphere and fused by an image fusion method, then compared and computed against screenshots of the landscape components to judge the visible range of the landscape components, which is marked while its percentage is calculated;
Result output: the visual field analysis results of the components in the construction project are formatted to a standard and output to a document.
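The coding convention used in the data-encoding step (units separated by '-', the file extension after '.', the last unit an accumulating serial number) can be parsed mechanically. A small sketch; `parse_data_code` is an illustrative helper, not part of the patent:

```python
def parse_data_code(filename):
    """Split a code such as 'SY-DS-DY-000001.las': units are separated
    by '-', '.' introduces the file extension, and the last unit is the
    accumulating serial number."""
    base, _, ext = filename.rpartition('.')
    units = base.split('-')
    return {
        'units': units[:-1],        # classification units, e.g. ['SY', 'DS', 'DY']
        'serial': int(units[-1]),   # running serial number, e.g. 1
        'extension': ext,           # file extension, e.g. 'las'
    }

code = parse_data_code('SY-DS-DY-000001.las')
```

The first two units ('SY-DS', 'SY-CSBJ', and so on) are what the environment-construction and machine-learning steps filter on.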
As shown in fig. 2, a method for analyzing a visual domain of a structure includes the following specific steps:
Constructing the above-ground three-dimensional environment: the raw above-ground spatial data, coded 'SY-DS-DY', 'SY-DS-WL' and the like, are extracted; the above-ground structure frame models are constructed semi-automatically and material textures are applied to form fine three-dimensional models. This covers the surrounding above-ground buildings, terrain, landscape greening, urban components, street lamps and signboards, bridges, rivers, foundation pit construction sites, mechanical equipment, building materials and the like;
Constructing the three-dimensional construction project: the raw construction project design drawing data, coded 'SY-XM', are extracted, and the construction project fine three-dimensional model and construction project three-dimensional terrain are constructed;
Three-dimensional environment integration: the above-ground three-dimensional model database and the construction project three-dimensional model database are brought into a unified coordinate system and data format and displayed on a unified platform.
A method for analyzing the visual field of a building, as shown in fig. 3, wherein the above-ground three-dimensional environment is constructed by: extracting the original data of the overground spatial data, constructing an overground building object frame model in a semi-automatic manner, and pasting texture on the overground building object frame model; the method comprises the following specific steps of constructing the overground three-dimensional environment:
collecting original data: acquiring point cloud coordinate data of overground buildings and landforms and high-definition image data of the buildings at the same time by using airborne and vehicle-mounted laser radar equipment and high-definition camera equipment to finish the acquisition of original data;
Semi-automatically constructing the three-dimensional model: after noise reduction of the point cloud and image data, a high-precision three-dimensional building model and a three-dimensional terrain model are constructed automatically; high-definition photos are attached to the building models by a semi-automatic method with manual intervention, while lighting, shadow baking and reverse-mapping effects are processed and the display effect of the three-dimensional models is adjusted;
constructing an overground three-dimensional environment: and integrating the built high-precision three-dimensional model to build a library, and finally forming an overground three-dimensional model database.
A method for analyzing the visual domain of a building as shown in fig. 4, wherein the three-dimensional construction project is constructed by: extracting original data of construction project design drawing data, and constructing a construction project fine three-dimensional model; the method comprises the following specific steps of constructing the three-dimensional construction project:
Extracting raw data: the construction project design data are extracted, and the data needed for modeling are screened;
Three-dimensional modeling: a three-dimensional scheme model library of the construction project is built by manual 3ds Max modeling from the relevant raw data.
As shown in fig. 5, the method for analyzing the visual domain of the building includes the following specific steps:
Three-dimensional component learning: three-dimensional component sample data are extracted from the coded data, and the unique feature rules of each component are learned by a learning algorithm;
Component feature extraction: feature information is extracted according to the learning sample data and the component features learned by the algorithm;
Establishing the component feature information database: the extracted three-dimensional component feature information is given a unique identification code, and a component feature information database is constructed and stored.
As shown in fig. 6, the method for analyzing the visual domain of the building structure specifically includes the following steps:
Sample extraction: three-dimensional component sample data are extracted from the coded data;
Feature learning: the sample data are learned, and the features of the various urban components are identified;
the image feature point location and gradient vector feature information learning algorithm comprises the following steps:
Image pixel gray-scale calculation formula:

Gray(i,j) = a·R(i,j) + b·G(i,j) + c·B(i,j) ①

Image binarization formula:

Black(i,j) = 255 when Gray(i,j) ≥ Alph, otherwise Black(i,j) = 0 ②

The histogram method calculates the binarization threshold α:

α = (g0 + g1) / 2 ③

where i, j denote the i-th column and j-th row of the image; Gray(i,j) is the gray value of the pixel at that position; R(i,j), G(i,j) and B(i,j) are the original color components of the pixel at that position; a, b and c are the constants 0.3, 0.59 and 0.11; Black(i,j) is the gray value of the pixel after binarization; and Alph is the threshold of the binarization method;
the histogram statistical function is c = f(g), with inverse function g = f⁻¹(c); c0 and c1 denote the second-largest and largest values of the gray-histogram function, and g0 = f⁻¹(c0), g1 = f⁻¹(c1) are the corresponding gray values;
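The grayscale weighting, binarization and two-peak histogram threshold described above can be sketched as follows; the midpoint rule α = (g0 + g1)/2 and the direction of the binarization inequality are assumptions consistent with the g0/g1 definitions in the text:

```python
import numpy as np

A, B, C = 0.3, 0.59, 0.11  # the constants a, b, c from the text

def to_gray(rgb):
    """Formula (1): Gray(i,j) = a*R + b*G + c*B, per pixel."""
    return A * rgb[..., 0] + B * rgb[..., 1] + C * rgb[..., 2]

def binarize(gray, alph):
    """Formula (2), sketched: 255 where Gray >= Alph, else 0 (the exact
    inequality direction is an assumption)."""
    return np.where(gray >= alph, 255, 0).astype(np.uint8)

def two_peak_threshold(gray):
    """Formula (3), assumed midpoint rule: alpha = (g0 + g1) / 2, where
    g1 and g0 are the gray levels of the largest and second-largest
    histogram peaks (c1 = f(g1), c0 = f(g0))."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    g1 = int(np.argmax(hist))       # gray level of the highest peak
    hist2 = hist.copy()
    hist2[g1] = -1                  # mask the highest peak
    g0 = int(np.argmax(hist2))      # gray level of the second peak
    return (g0 + g1) / 2.0

gray = np.array([50.0] * 10 + [200.0] * 20)  # toy bimodal "image"
alpha = two_peak_threshold(gray)             # midpoint of the two peaks
bw = binarize(gray, alpha)
```

On the toy bimodal data the two peaks sit at gray levels 50 and 200, so the threshold falls midway between them and the binarization separates the two populations.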
SIFT feature detection scale-space definition:

L(i,j,σ) = G(i,j,σ) × I(i,j) ④

where G(i,j,σ) = (1 / (2πσ²)) · e^(−(i² + j²) / (2σ²)) is the scale-variable Gaussian function, I(i,j) is the image at spatial coordinate (i,j), σ is the scale, and the initial scale value is 1.6;
Gaussian difference scale space:

D(i,j,σ) = (G(i,j,kσ) − G(i,j,σ)) × I(i,j) = L(i,j,kσ) − L(i,j,σ) ⑤

Down-sampling: k = 2^(1/s), where s is the number of layers in each group (here taken as 4), and σ takes the values σ, kσ, k²σ, …, k^(n−1)σ in turn;
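The difference-of-Gaussians construction of formula ⑤ with k = 2^(1/s) can be sketched for one octave; `scipy.ndimage.gaussian_filter` stands in here for the Gaussian convolution G × I, and the function name is illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_octave(image, sigma=1.6, s=4):
    """One octave of the Gaussian-difference space of formula (5):
    D = L(k*sigma) - L(sigma), with k = 2**(1/s) and s layers per group."""
    k = 2.0 ** (1.0 / s)
    scales = [sigma * k ** n for n in range(s + 1)]
    gaussians = [gaussian_filter(image, sc) for sc in scales]
    # adjacent Gaussian levels subtracted pairwise give s DoG images
    return [b - a for a, b in zip(gaussians, gaussians[1:])]

img = np.random.default_rng(0).random((32, 32))
dogs = dog_octave(img)
```

Each element of `dogs` is one D(i,j,σ) layer; a full SIFT pyramid repeats this per octave after down-sampling the image by 2.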
The positions and scales of the key points are accurately determined by fitting a three-dimensional quadratic function, which enhances matching stability and noise resistance. Setting the derivative of D to 0 gives the accurate position X̂ = −(∂²D/∂X²)⁻¹ · (∂D/∂X); when |D(X̂)| is not less than the contrast threshold the key point is retained, otherwise it is discarded;
The modulus M(i,j) and direction angle θ(i,j) of the gradient at each key point are calculated as:

M(i,j) = √((L(i+1,j) − L(i−1,j))² + (L(i,j+1) − L(i,j−1))²) ⑥

θ(i,j) = atan2(L(i,j+1) − L(i,j−1), L(i+1,j) − L(i−1,j)) ⑦
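The gradient modulus M(i,j) and direction angle θ(i,j) at each key point follow from central differences on the smoothed image L; a minimal sketch:

```python
import numpy as np

def keypoint_gradient(L, i, j):
    """Gradient modulus M(i,j) and direction angle theta(i,j) from
    central differences on the smoothed image L."""
    di = L[i + 1, j] - L[i - 1, j]   # L(i+1,j) - L(i-1,j)
    dj = L[i, j + 1] - L[i, j - 1]   # L(i,j+1) - L(i,j-1)
    m = np.hypot(di, dj)             # sqrt(di**2 + dj**2)
    theta = np.arctan2(dj, di)
    return m, theta

L = np.arange(25, dtype=float).reshape(5, 5)  # toy smoothed image
m, theta = keypoint_gradient(L, 2, 2)         # di = 10, dj = 2
```

In SIFT these values feed an orientation histogram around each key point, which is what makes the descriptor rotation-invariant, as the summary of this method claims.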
a method for visual field analysis of a structure as shown in fig. 7, said visual field analysis comprising the steps of:
Component identification: the component to be analyzed in the construction project is automatically identified according to the three-dimensional scene component codes;
Position identification: the position, orientation, direction and similar information of the component are obtained from the identified construction project component model;
Orientation identification: the angular step of the screenshot azimuths is set according to the identified orientation and the image synthesis quality requirement, and the orientation coordinates of each angular azimuth are identified;
Orientation-map acquisition: the orientation map of each azimuth is captured according to the identified position coordinates and the calculated orientation coordinates;
Orientation-map fusion: according to an image fusion algorithm, feature point extraction, distortion correction, color balancing and image stitching are performed on the orientation maps of the several azimuths, finally forming an image of the component's visible range of the environment;
Image analysis: the visible-range images generated for the component are clustered and matched against the component feature information base, the urban components in the visible image are identified, the outline of each visible component is plotted on the image, and the percentage of the component's visible range relative to the whole component is calculated.
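The orientation-identification and capture steps enumerate view directions over the hemisphere in front of the component with a fixed angular step; in this sketch the (azimuth, elevation) parameterization and the 45° step are illustrative assumptions, not values from the patent:

```python
def hemisphere_directions(step_deg):
    """Enumerate capture orientations over the outward hemisphere in
    (azimuth, elevation) degrees: azimuth sweeps the half-space in front
    of the component, elevation runs from the horizon to the zenith."""
    return [(azim, elev)
            for elev in range(0, 91, step_deg)
            for azim in range(-90, 91, step_deg)]

views = hemisphere_directions(45)   # 3 elevations x 5 azimuths
```

A smaller step improves the quality of the fused panorama at the cost of more screenshots, which is the trade-off the "image synthesis quality requirement" governs.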
A method for analyzing the visual domain of a structure as shown in fig. 8, wherein the fusion of the orientation maps comprises the following steps:
Feature point extraction: common position points are extracted from each orientation map of the component position and a mapping relation is established;
Distortion correction: pixel registration is performed according to the established mapping relation, and image deformation is corrected by a distortion correction method;
Image fusion: finally, the corrected images are cropped and output to form a single large image.
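The mapping relation built from common feature points can drive a simple least-squares registration between two orientation maps; this sketch substitutes an affine fit for the patent's unspecified distortion-correction method, and all names are illustrative:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform from matched feature points
    (the 'mapping relation' between two orientation maps)."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])  # rows [x, y, 1]
    # solve A @ M = dst for the 3x2 affine matrix M
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# three matched feature points related by a pure translation of (+10, +5)
src = [(0, 0), (100, 0), (0, 100)]
dst = [(10, 5), (110, 5), (10, 105)]
M = fit_affine(src, dst)
mapped = apply_affine(M, [(50, 50)])
```

Warping one map through `M` registers it pixel-for-pixel with its neighbour, after which the corrected images can be stitched and cropped into the single large visible-range image.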
A method for visual domain analysis of a structure as shown in fig. 9, said image analysis comprising the steps of:
Component identification: primary clustering is performed on the visible-range image, which is then matched against the component feature data in the component feature information base, so that the urban components in the image are identified;
Area plotting: according to each recognized urban component, its extent is plotted in highlight on the visible-range image;
Result generation: the identified urban component is compared with the urban components in the component feature information base, and the percentage of the component that is visible relative to the whole component is marked on the visible-range image.
The specific implementation case is as follows: the spatial data include three-dimensional above-ground structure point cloud data, three-dimensional above-ground structure texture photo data and field-shot photo data; the various urban components include geographic elements such as windows, doors, balconies, greenery, small landscape features, bridges, roads, street lamps, signboards, buildings and rivers; the landscape components include geographic elements such as greenery, small landscape features, bridges, roads, street lamps, signboards, buildings and rivers; the above-ground three-dimensional environment construction covers the surrounding above-ground buildings, terrain, rivers, landscape greening, urban components, street lamps and signboards, and bridges; the three-dimensional construction project construction covers the construction project buildings and the construction project landscape greening; the learning material for a three-dimensional component comprises model data, texture data, material and descriptive text; the unique features include model features, texture features, material features and text features; the feature information includes the component name, component model features, component texture features, component material features and component text features; the component model features include the model length, width, height and normal limits; the component texture features include the texture length and width, the texture black-and-white image, the texture color values, the color limit values and the feature points; the component material features include the material color value, the material name and other materials; and the component text features include words, phrases or sentences describing the component.
From the above it can be seen that the embodiment of the invention provides a building visual domain analysis method in which the images are optimized so that the optimized image feature recognition algorithm can stably recognize the key feature points of a component without interference from factors such as scaling and rotation. Through intelligent techniques, the positions of windows, balconies and the like in a construction project scheme are quickly identified, the surrounding environment as seen by a person is simulated, and the landscape visual field is analyzed. This can assist decisions in links such as construction project planning approval and real estate unit pricing, give users in the real estate sales link an intuitive, simulated experience of viewing the outdoor landscape from a given unit type, and assist users in home purchase decisions.
While the embodiments of the present invention have been described by way of example, those skilled in the art will appreciate that there are numerous variations and permutations of the present invention without departing from the spirit of the invention, and it is intended that the appended claims cover such variations and modifications as fall within the true spirit of the invention.
Claims (10)
1. A method for analyzing a visual field of a structure, comprising the steps of:
data encoding: carrying out coding classification on spatial data required by an engineering project and data such as industrial regulations, industrial cases and the like, and sorting various original data;
three-dimensional environment construction: extracting the coded original data, making three-dimensional data, constructing a three-dimensional construction project and a surrounding environment, and integrating three-dimensional scenes; meanwhile, classifying and naming codes are carried out on each constructed three-dimensional model, and a three-dimensional model database is finally formed;
machine learning: extracting characteristic information of the extracted original data of the component, and establishing a characteristic information database of the component;
visual field analysis: according to the component feature information database established by machine learning, the component positions of the construction project in the three-dimensional scene are identified and the orientation of each position is calculated; three-dimensional landscape pictures are captured in all directions of the oriented hemisphere and fused by an image fusion algorithm, then compared and computed against screenshots of the landscape components to judge the visible range of the landscape components, which is marked while its percentage is calculated;
result output: the visual field analysis results of the components in the construction project are formatted to a standard and output to a document.
2. The method for visual domain analysis of a structure according to claim 1, wherein the three-dimensional environment is constructed by the following steps:
constructing an overground three-dimensional environment: extracting the original data of the overground spatial data, constructing an overground building object frame model in a semi-automatic manner, and pasting texture on the overground building object frame model;
constructing a three-dimensional construction project: extracting original data of construction project design drawing data, and constructing a construction project fine three-dimensional model;
three-dimensional environment integration: the above-ground three-dimensional model database and the construction project three-dimensional model database are brought into a unified coordinate system and data format and displayed on a unified platform.
3. The method for visual domain analysis of a structure according to claim 2, wherein said above-ground three-dimensional environment is constructed by: extracting the original data of the overground spatial data, constructing an overground building object frame model in a semi-automatic manner, and pasting texture on the overground building object frame model; the method comprises the following specific steps of constructing the overground three-dimensional environment:
collecting original data: acquiring point cloud coordinate data of overground buildings and landforms and high-definition image data of the buildings at the same time by using airborne and vehicle-mounted laser radar equipment and high-definition camera equipment to finish the acquisition of original data;
semi-automatically constructing a three-dimensional model: after the point cloud and image data are subjected to noise reduction processing, automatically constructing a high-precision three-dimensional building model and a three-dimensional terrain model, attaching a high-definition image photo to the building model by a manual intervention semi-automatic method, simultaneously performing lighting processing, shadow baking and reverse attaching effect processing, and adjusting the display effect of the three-dimensional model;
constructing an overground three-dimensional environment: and integrating the built high-precision three-dimensional model to build a library, and finally forming an overground three-dimensional model database.
4. The method according to claim 2, wherein the three-dimensional construction project is constructed by: extracting original data of construction project design drawing data, and constructing a construction project fine three-dimensional model; the method comprises the following specific steps of constructing the three-dimensional construction project:
extracting original data: extracting construction project design data, and screening data information required by modeling;
three-dimensional modeling: a three-dimensional scheme model library of the construction project is built by manual 3ds Max modeling from the relevant raw data.
5. A method for visual domain analysis of a structure according to claim 1, wherein: the machine learning comprises the following specific steps:
three-dimensional part learning: inputting learning data of the three-dimensional components, learning the unique characteristic rule of each component through a learning algorithm, and constructing an information characteristic library;
extracting the feature of the part: extracting feature information according to the learning sample data and the part features learned by the algorithm;
establishing a database of component characteristic information: and after unique identification coding is carried out on the extracted three-dimensional component characteristic information, a component characteristic information database is constructed and stored.
6. The method according to claim 5, wherein the three-dimensional part learning comprises the steps of:
extracting a sample: extracting three-dimensional part sample data from the encoded data;
machine learning: learning sample data through a learning algorithm, and identifying characteristics of various urban parts;
the image feature point location and gradient vector feature information algorithm is as follows:
image pixel gray-scale calculation formula:

Gray(i,j) = a·R(i,j) + b·G(i,j) + c·B(i,j) ①

image binarization formula:

Black(i,j) = 255 when Gray(i,j) ≥ Alph, otherwise Black(i,j) = 0 ②

the histogram method calculates the binarization threshold α:

α = (g0 + g1) / 2 ③

where i, j denote the i-th column and j-th row of the image; Gray(i,j) is the gray value of the pixel at that position; R(i,j), G(i,j) and B(i,j) are the original color components of the pixel at that position; a, b and c are the constants 0.3, 0.59 and 0.11; Black(i,j) is the gray value of the pixel after binarization; and Alph is the threshold of the binarization method;
the histogram statistical function is c = f(g), with inverse function g = f⁻¹(c); c0 and c1 denote the second-largest and largest values of the gray-histogram function, and g0 = f⁻¹(c0), g1 = f⁻¹(c1) are the corresponding gray values;
SIFT feature detection scale-space definition:

L(i,j,σ) = G(i,j,σ) × I(i,j) ④

where G(i,j,σ) = (1 / (2πσ²)) · e^(−(i² + j²) / (2σ²)) is the scale-variable Gaussian function, I(i,j) is the image at spatial coordinate (i,j), σ is the scale, and the initial scale value is 1.6;
gaussian difference scale space:

D(i,j,σ) = (G(i,j,kσ) − G(i,j,σ)) × I(i,j) = L(i,j,kσ) − L(i,j,σ) ⑤

down-sampling: k = 2^(1/s), where s is the number of layers in each group (here taken as 4), and σ takes the values σ, kσ, k²σ, …, k^(n−1)σ in turn;
The positions and scales of the key points are accurately determined by fitting a three-dimensional quadratic function, which enhances matching stability and noise resistance. Setting the derivative of D to 0 gives the accurate position X̂ = −(∂²D/∂X²)⁻¹ · (∂D/∂X); when |D(X̂)| is not less than the contrast threshold the key point is retained, otherwise it is discarded;
the modulus M(i,j) and direction angle θ(i,j) of the gradient at each key point are calculated as:

M(i,j) = √((L(i+1,j) − L(i−1,j))² + (L(i,j+1) − L(i,j−1))²) ⑥

θ(i,j) = atan2(L(i,j+1) − L(i,j−1), L(i+1,j) − L(i−1,j)) ⑦
establishing a database of the characteristic information: and constructing the learned characteristic information of various city parts into a relational database for storage.
7. A method for visual field analysis of a structure according to claim 1, wherein said visual field analysis comprises the steps of:
component identification: the component to be analyzed in the construction project is automatically identified according to the three-dimensional scene component codes;
position identification: the position, orientation, direction and similar information of the component are obtained from the identified construction project component model;
orientation identification: the angular step of the screenshot azimuths is set according to the identified orientation and the image synthesis quality requirement, and the orientation coordinates of each angular azimuth are identified;
orientation-map acquisition: the orientation map of each azimuth is captured according to the identified position coordinates and the calculated orientation coordinates;
orientation-map fusion: according to an image fusion algorithm, feature point extraction, distortion correction, color balancing and image stitching are performed on the orientation maps of the several azimuths, finally forming an image of the component's visible range of the environment;
image analysis: the visible-range images generated for the component are clustered and matched against the component feature information base, the urban components in the visible image are identified, the outline of each visible component is plotted on the image, and the percentage of the component's visible range relative to the whole component is calculated.
8. A method for visual domain analysis of a structure according to claim 7, wherein: the method for fusing the azimuth maps comprises the following steps:
feature point extraction: common position points are extracted from each orientation map of the component position and a mapping relation is established;
distortion correction: pixel registration is performed according to the established mapping relation, and image deformation is corrected by a distortion correction method;
image fusion: finally, the corrected images are cropped and output to form a single large image.
9. A method for visual domain analysis of a structure according to claim 7, wherein: the image analysis comprises the following steps:
component identification: primary clustering is performed on the visible-range image, which is then matched against the component feature data in the component feature information base, so that the urban components in the image are identified;
area plotting: according to each recognized urban component, its extent is plotted in highlight on the visible-range image;
result generation: the identified urban component is compared with the urban components in the component feature information base, and the percentage of the component that is visible relative to the whole component is marked on the visible-range image.
10. A method for visual domain analysis of a structure according to any one of claims 1 to 9, wherein: the spatial data include three-dimensional above-ground structure point cloud data, three-dimensional above-ground structure texture photo data and field-shot photo data; the various urban components include geographic elements such as windows, doors, balconies, greenery, small landscape features, bridges, roads, street lamps, signboards, buildings and rivers; the landscape components include geographic elements such as greenery, small landscape features, bridges, roads, street lamps, signboards, buildings and rivers; the above-ground three-dimensional environment construction covers the surrounding above-ground buildings, terrain, rivers, landscape greening, urban components, street lamps and signboards, and bridges; the three-dimensional construction project construction covers the construction project buildings and the construction project landscape greening; the learning material for a three-dimensional component comprises model data, texture data, material and descriptive text; the unique features include model features, texture features, material features and text features; the feature information includes the component name, component model features, component texture features, component material features and component text features; the component model features include the model length, width, height and normal limits; the component texture features include the texture length and width, the texture black-and-white image, the texture color values, the color limit values and the feature points; the component material features include the material color value, the material name and other materials; and the component text features include words, phrases or sentences describing the component.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811516960.8A CN111311725B (en) | 2018-12-12 | 2018-12-12 | Visual field analysis method for building |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111311725A true CN111311725A (en) | 2020-06-19 |
CN111311725B CN111311725B (en) | 2023-06-13 |
Family
ID=71146754
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811516960.8A Active CN111311725B (en) | 2018-12-12 | 2018-12-12 | Visual field analysis method for building |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111311725B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855659A (en) * | 2012-07-17 | 2013-01-02 | 北京交通大学 | Three-dimensional holographic visualization system and method for high-speed comprehensively detecting train |
CN105761310A (en) * | 2016-02-03 | 2016-07-13 | 东南大学 | Simulated analysis and image display method of digital map of sky visible range |
CN105869211A (en) * | 2016-06-16 | 2016-08-17 | 成都中科合迅科技有限公司 | Analytical method and device for visible range |
CN106296818A (en) * | 2016-08-23 | 2017-01-04 | 河南智绘星图信息技术有限公司 | A kind of terrestrial space scene simulation method and system based on mobile platform |
US20180025542A1 (en) * | 2013-07-25 | 2018-01-25 | Hover Inc. | Method and system for displaying and navigating an optimal multi-dimensional building model |
Non-Patent Citations (2)
Title |
---|
LI YIN, ZHENXIN WANG: "Measuring visual enclosure for street walkability: Using machine learning algorithms and Google Street View imagery" *
靳海亮, 李留磊, 袁松鹤, 耿文轩: "A visibility analysis algorithm for three-dimensional urban buildings" *
Also Published As
Publication number | Publication date |
---|---|
CN111311725B (en) | 2023-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110136170B (en) | Remote sensing image building change detection method based on convolutional neural network | |
US11995886B2 (en) | Large-scale environment-modeling with geometric optimization | |
US11113864B2 (en) | Generative image synthesis for training deep learning machines | |
Garilli et al. | Automatic detection of stone pavement's pattern based on UAV photogrammetry | |
CN112508985B (en) | SLAM loop detection improvement method based on semantic segmentation | |
US9704042B2 (en) | Predicting tree species from aerial imagery | |
US10115165B2 (en) | Management of tax information based on topographical information | |
CN108428254A (en) | The construction method and device of three-dimensional map | |
CN115984273B (en) | Road disease detection method, device, computer equipment and readable storage medium | |
CN115512247A (en) | Regional building damage grade assessment method based on image multi-parameter extraction | |
CN114758086B (en) | Method and device for constructing urban road information model | |
CN114627073B (en) | Terrain recognition method, apparatus, computer device and storage medium | |
Li et al. | 3D map system for tree monitoring in hong kong using google street view imagery and deep learning | |
CN111311725B (en) | Visual field analysis method for building | |
Loghin et al. | Supervised classification and its repeatability for point clouds from dense VHR tri-stereo satellite image matching using machine learning | |
Su et al. | Building Detection From Aerial Lidar Point Cloud Using Deep Learning | |
RU2771442C1 (en) | Method for processing images by convolutional neural networks | |
RU2771442C9 (en) | Method for processing images by convolutional neural networks | |
CN117115566B (en) | Urban functional area identification method and system by utilizing full-season remote sensing images | |
Givens et al. | A method to generate sub-pixel classification maps for use in DIRSIG three-dimensional models | |
Hummel | On synthetic datasets for development of computer vision algorithms in airborne reconnaissance applications | |
Meixner et al. | Building façade separation in vertical aerial images | |
CN118229895A (en) | Three-dimensional scene simulation method for target region | |
Villinger et al. | Semantic segmentation of fused mobile mapping data | |
Roozenbeek | Dutch Open Topographic Data Sets as Georeferenced Markers in Augmented Reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||