CN113516771A - Building change feature extraction method based on live-action three-dimensional model


Info

Publication number
CN113516771A
Authority
CN
China
Prior art keywords
target building
feature
difference image
dimensional model
characteristic
Prior art date
Legal status
Pending
Application number
CN202110684111.9A
Other languages
Chinese (zh)
Inventor
郑爽
张小星
王勤勤
Current Assignee
Shenzhen Wuce Spatial Information Co ltd
Original Assignee
Shenzhen Wuce Spatial Information Co ltd
Application filed by Shenzhen Wuce Spatial Information Co ltd filed Critical Shenzhen Wuce Spatial Information Co ltd
Priority to CN202110684111.9A
Publication of CN113516771A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a building change feature extraction method based on a live-action three-dimensional model, comprising the following steps. Step 1: establish and update a live-action three-dimensional model of the target building. Step 2: extract the initial change features of the target building in the live-action three-dimensional model according to a preset extraction rule. Step 3: add detail features, optimize the multi-feature difference image, and then obtain and output the final change features of the target building. The method extracts building change features accurately and efficiently, avoids the subjective influence of manual extraction, and saves time and labor.

Description

Building change feature extraction method based on live-action three-dimensional model
Technical Field
The invention relates to the technical field of architecture, in particular to a building change feature extraction method based on a live-action three-dimensional model.
Background
At present, national development is driving rapid urbanization. Buildings are an important component of cities, and changes to them alter urban landform information; only by accurately knowing how building information changes can local landform information be grasped at the macro level.
Traditional pixel-based building extraction methods have significant drawbacks, and they struggle to handle interference from non-building objects whose spectral information is highly similar to that of buildings. Finding an automatic, accurate, and efficient building extraction method is therefore of great significance.
Disclosure of Invention
The invention provides a building change feature extraction method based on a live-action three-dimensional model, which is used for accurately acquiring change features of a target building.
A building change feature extraction method based on a live-action three-dimensional model comprises the following steps:
step 1: establishing and updating a real-scene three-dimensional model of the target building;
step 2: extracting initial change characteristics of a target building in the live-action three-dimensional model according to a preset extraction rule;
and step 3: and adding detail features, optimizing the multi-feature difference image, further obtaining and outputting the final change features of the target building.
Preferably, the building change feature extraction method based on the live-action three-dimensional model includes the following steps: establishing and updating a real-scene three-dimensional model of a target building, comprising:
step 1.1: acquiring position data of a target building, and planning an acquisition path of the target building according to the position data;
step 1.2: shooting the target building according to the acquisition path, acquiring the position relation between different shooting points in the shooting process, and acquiring the pixel characteristics of the corresponding shooting point image;
step 1.3: discretizing the corresponding image according to the pixel characteristics to obtain a three-dimensional point cloud of the target building;
step 1.4: and establishing a target building standard coordinate system according to the position relation among different shooting points, inserting the three-dimensional point cloud of the target building into the standard coordinate system, and establishing a real-scene three-dimensional model of the target building.
Preferably, the building change feature extraction method based on the live-action three-dimensional model includes the following steps: establishing and updating a real-scene three-dimensional model of a target building, comprising: updating the live-action three-dimensional model of the target building, wherein the steps comprise:
obtaining a plurality of current images of the target building, and performing first-stage training on the current images to obtain a plurality of first training images;
segmenting the first training image according to a first mode to obtain first data, performing first preprocessing on the first data to obtain first pixel information, segmenting according to a second mode to obtain second data, and performing second preprocessing on the second data to obtain second pixel information;
carrying out weighted average on the first pixel information and the second pixel information to obtain third pixel information;
when the third pixel information reaches a preset standard, determining that the first training image is qualified, and storing the first training image into an image information base;
otherwise, performing second-stage training on the first training image to obtain a second training image, removing repeated images in an image information base, and establishing a temporary live-action three-dimensional model according to the residual images in the image information base and the second training image;
placing the temporary live-action three-dimensional model into a standard coordinate system to be compared with the live-action three-dimensional model, and acquiring a difference data set;
carrying out target difference classification on the difference data sets, obtaining short data sets of the same target according to classification results, carrying out feature extraction on the short data sets, and obtaining morphological difference of target positions of the target buildings;
and updating the real three-dimensional model according to the morphological difference of the target position.
Preferably, the building change feature extraction method based on the live-action three-dimensional model includes the following steps: extracting the initial change characteristics of the target building in the live-action three-dimensional model according to a preset extraction rule, wherein the extraction rule comprises the following steps:
step 2.1: establishing a three-dimensional model database, and storing the standard real-scene three-dimensional model of the target building into the three-dimensional model database;
step 2.2: acquiring a current live-action three-dimensional model of the target building, dividing the current live-action three-dimensional model into N small windows, acquiring a gray level co-occurrence matrix of each small window to obtain texture characteristic values of the corresponding small windows, and forming a texture characteristic matrix according to all the texture characteristic values to obtain a first characteristic of the target building;
step 2.3: acquiring each vertex of the target building according to the three-dimensional point cloud of the current live-action three-dimensional model, and generating a vertex diagram to obtain a second characteristic of the target building;
step 2.4: acquiring standard first characteristics and standard second characteristics of the target building according to the standard real-scene three-dimensional model of the target building;
step 2.5: and obtaining the initial change characteristic of the target building according to the difference between the first characteristic and the standard first characteristic and the difference between the second characteristic and the standard second characteristic.
Preferably, the building change feature extraction method based on the live-action three-dimensional model includes the following steps: adding detail features, optimizing a multi-feature difference image, further obtaining and outputting final change features of the target building, wherein the steps comprise:
step 3.1: dividing the target building into areas based on the first characteristics to obtain different area objects, acquiring color point clouds of the area objects and acquiring color information;
according to the color information, determining spectral information of each point cloud in the region object, establishing a region object spectral matrix, and acquiring a characteristic value of the spectral matrix;
obtaining the spectral feature of the target building as a third feature based on the feature value;
step 3.2: acquiring a second characteristic, processing the second characteristic information based on the morphological information of the target building, and acquiring a fourth characteristic of the target building;
step 3.3: acquiring detail features: and the third characteristic and the fourth characteristic are respectively compared with corresponding standard values to obtain characteristic differences, and a multi-characteristic difference image constructed by the initial variation characteristics is optimized based on the characteristic differences to obtain the final variation characteristics of the target building.
Preferably, the building change feature extraction method based on the live-action three-dimensional model includes the following steps of 1.1: acquiring position data of a target building, and planning an image acquisition path of the target building according to the position data, wherein the image acquisition path comprises the following steps:
step 1.1.1: acquiring position data of a target building, and determining the construction range of the target building according to the position data;
step 1.1.2: based on the construction range of the target building, inferring the feature landmarks that the acquisition of the target building images needs to pass through, obtaining an initial path, calculating the degree of association between the feature landmarks, and determining the optimal path for acquiring the images of the target building;
acquiring the weight value of each feature landmark on the initial path, and calculating the degree of association of adjacent feature landmarks using formula (1);
[Formula (1) appears only as an image in the original publication and is not reproduced here.]
where ξ represents the degree of association of adjacent feature landmarks; p_j represents the weight value of the current feature landmark j, with j ≥ 2; p_{j-1} represents the weight value of the previous feature landmark j-1; p_{j+1} represents the weight value of the next feature landmark j+1; and n represents the total number of possible paths from the current feature landmark in each direction, with n ≥ 1;
when the degree of association ξ of adjacent feature landmarks falls within a prediction range, retaining the current feature landmark j, and continuing to calculate the adjacent-node association for the next feature landmark j+1;
and combining all retained feature landmarks to construct the optimal path for acquiring the target building images, and outputting and displaying the optimal path.
Preferably, the building change feature extraction method based on the live-action three-dimensional model includes the following steps of 3.3: acquiring detail features: the third feature and the fourth feature are respectively compared with corresponding standard values to obtain feature differences, and a multi-feature difference image constructed by the initial variation features is optimized based on the feature differences to obtain final variation features of the target building, wherein the method comprises the following steps:
step 3.3.1: acquiring a multi-feature difference image before optimization, and performing fusion optimization on the multi-feature difference image according to different fusion rules to acquire a plurality of initial multi-feature difference images;
step 3.3.2: performing first evaluation on the initial multi-feature difference image based on an image evaluation index to obtain a first difference between the initial multi-feature difference image and a standard multi-feature difference image;
if the first difference is within a preset difference range, the initial multi-feature difference image is shown to meet basic requirements, and second evaluation is carried out;
otherwise, indicating that the initial multi-feature difference image does not meet the basic requirements, abandoning the fusion rule corresponding to the initial multi-feature difference image, and adjusting each feature proportion corresponding to the fusion rule according to the first difference and the image evaluation index to generate a new fusion rule;
step 3.3.3: obtaining a second evaluation of the initial multi-feature difference image meeting the basic requirements based on the standard multi-feature difference image, evaluating the form loss degree of the target building, and obtaining a form loss rate;
step 3.3.4: judging whether the loss rate of the initial multi-feature difference image is greater than a preset threshold value or not according to the form loss rate, if so, judging that the initial multi-feature difference image is unqualified, and removing;
otherwise, judging that the initial multi-feature difference image is qualified, selecting the initial multi-feature difference image with the minimum loss rate as a final multi-feature difference image, and temporarily storing all the qualified initial multi-feature difference images;
step 3.3.5: when a user needs to select the current multi-feature difference image, all qualified initial multi-feature difference images are obtained, and the user selects the qualified initial multi-feature difference images according to the self requirement;
and obtaining a user selection result, analyzing the user requirement, and updating the standard for preferentially recommending the initial multi-feature difference image according to the user requirement.
Preferably, the method for extracting the building change features based on the live-action three-dimensional model includes: denoising the multi-feature difference image, comprising:
step 4.1: acquiring an optimized multi-feature difference image, and acquiring a noise signal variance of the optimized multi-feature difference image;
step 4.2: denoising the optimized multi-feature difference image based on dual-tree complex wavelet signals, and calculating an estimate of the noise-free signal wave coefficient of the optimized multi-feature difference image according to formula (2):
δ² = max( (1/M) Σ_{C∈N} C² - σ_n², 0 )   (2)
where δ² represents the estimate of the noise-free signal wave coefficient of the optimized multi-feature difference image; M represents the number of pixels in the noisy pixel neighborhood N of the optimized multi-feature difference image; C represents the noisy signal wave coefficient of the optimized multi-feature difference image at the current moment; and σ_n² represents the noise signal variance of the optimized multi-feature difference image;
according to the estimated value of the noise-free signal wave coefficient of the multi-feature difference image, establishing a denoising model:
H = inf_A { ∫_Ω |∇A| dΩ + (λ/2) ∫_Ω (C - A)² dΩ }
where H represents the denoising model of the multi-feature difference image; A represents the noise-free signal wave coefficient of the multi-feature difference image; C represents the noisy signal wave coefficient of the multi-feature difference image, with C = A + B, where B represents the noise signal wave coefficient of the multi-feature difference image; λ represents a regularization number; inf{ } denotes taking the infimum; Ω denotes the integration range, determined according to the magnitude of the noise signal; and ∇ denotes the gradient operator;
step 4.3: denoising the multi-feature difference image according to the denoising model, and judging that denoising of the multi-feature difference image is complete when the definition of the multi-feature difference image reaches a preset requirement;
otherwise, adjusting the regularization number, and carrying out denoising processing on the multi-feature difference image again.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a method for extracting building change features based on a live-action three-dimensional model according to an embodiment of the present invention;
FIG. 2 is a flowchart of step 1 of a building change feature extraction method based on a live-action three-dimensional model according to an embodiment of the present invention;
FIG. 3 is a flowchart of step 2 of a building change feature extraction method based on a live-action three-dimensional model according to an embodiment of the present invention;
fig. 4 is a flowchart of step 3 of a building change feature extraction method based on a live-action three-dimensional model in the embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1:
the invention provides a building change feature extraction method based on a live-action three-dimensional model, as shown in figure 1, comprising the following steps:
step 1: establishing and updating a real-scene three-dimensional model of the target building;
step 2: extracting initial change characteristics of a target building in the live-action three-dimensional model according to a preset extraction rule;
and step 3: and adding detail features, optimizing the multi-feature difference image, further obtaining and outputting the final change features of the target building.
In this embodiment, the preset extraction rule refers to extracting the architectural features of the target building, and the features include, for example, any one of the wall length, width, height, shape, color, and the like.
In this embodiment, the initial change feature is the difference between the first feature and the standard first feature and between the second feature and the standard second feature; the multi-feature difference image is obtained from the initial change features, so as to obtain the change features of the target building. The final change feature is the image obtained by adjusting and optimizing the initial features according to the differences between the third and fourth features and their standard values, so as to obtain the change features of the target building. The first feature refers to the texture feature of the target building; the second feature refers to the morphological feature of the target building; the third feature refers to the spectral feature of the target building; and the fourth feature refers to the morphological building index feature of the target building.
In this embodiment, the detail features refer to spectral features and morphological building index features of the target building.
In this embodiment, the multi-feature difference image is an image constructed by fusing a plurality of features of the obtained real-scene three-dimensional model.
The beneficial effects of the above technical scheme are: the method comprises the steps of establishing a live-action three-dimensional model to accurately capture the characteristics of a target building, extracting the initial change characteristics of the target building in the live-action three-dimensional model according to a preset extraction rule, adding detail characteristics according to a classification result, optimizing a multi-characteristic difference image, further obtaining the final change characteristics of the target building, and outputting the final change characteristics. The change characteristics of the target building interior are accurately and efficiently acquired.
Example 2:
based on example 1, step 1: establishing and updating a live-action three-dimensional model of the target building, as shown in fig. 2, includes:
step 1.1: acquiring position data of a target building, and planning an acquisition path of the target building according to the position data;
step 1.2: shooting the target building according to the acquisition path, acquiring the position relation between different shooting points in the shooting process, and acquiring the pixel characteristics of the corresponding shooting point image;
step 1.3: discretizing the corresponding image according to the pixel characteristics to obtain a three-dimensional point cloud of the target building;
step 1.4: and establishing a target building standard coordinate system according to the position relation among different shooting points, inserting the three-dimensional point cloud of the target building into the standard coordinate system, and establishing a real-scene three-dimensional model of the target building.
In this embodiment, the collection path is a moving path of the collection device (for example, an unmanned aerial vehicle), and the collection path includes a plurality of shooting points, so that images of buildings at different positions can be conveniently obtained, and the comprehensiveness of the collection is ensured.
In this embodiment, the pixel characteristics refer to the gray level variation of the pixels.
In this embodiment, the standard coordinate system is a coordinate system constructed according to the building characteristics of the target building, and is intended to facilitate comparison of real-scene three-dimensional models at different times.
The beneficial effects of the above technical scheme are: the method comprises the steps of obtaining position data of a target building, planning an acquisition path of the target building according to the position data, avoiding the problem of incomplete image acquisition caused by irregular acquisition range, processing acquired images and establishing a live-action three-dimensional model in a standard coordinate system, facilitating comparison of the live-action three-dimensional models at different periods and roughly observing the change of the building.
Example 3:
based on example 1, step 1: establishing and updating a real-scene three-dimensional model of a target building, comprising: updating the live-action three-dimensional model of the target building, wherein the steps comprise:
obtaining a plurality of current images of the target building, and performing first-stage training on the current images to obtain a plurality of first training images;
segmenting the first training image according to a first mode to obtain first data, performing first preprocessing on the first data to obtain first pixel information, segmenting according to a second mode to obtain second data, and performing second preprocessing on the second data to obtain second pixel information;
carrying out weighted average on the first pixel information and the second pixel information to obtain third pixel information;
when the third pixel information reaches a preset standard, determining that the first training image is qualified, and storing the first training image into an image information base;
otherwise, performing second-stage training on the first training image to obtain a second training image, removing repeated images in an image information base, and establishing a temporary live-action three-dimensional model according to the residual images in the image information base and the second training image;
placing the temporary live-action three-dimensional model into a standard coordinate system to be compared with the live-action three-dimensional model, and acquiring a difference data set;
carrying out target difference classification on the difference data sets, obtaining short data sets of the same target according to classification results, carrying out feature extraction on the short data sets, and obtaining morphological difference of target positions of the target buildings;
and updating the real three-dimensional model according to the morphological difference of the target position.
In this embodiment, the first-stage training refers to training images acquired by an acquisition device; the second stage training refers to training the images which are trained in the first stage.
In this embodiment, the first training image is an image acquired by an acquisition device that is trained in a first stage; the second training image refers to a first training image of the second stage training; the current image, the first training image and the second training image are all multiple.
In the embodiment, the first mode is to determine an adaptive threshold of an image, and segment the image by respectively adopting different thresholds according to local characteristics of the image; the second mode is that the image is divided into regions by detecting the places with abrupt change of gray level or structure by using edge detection, and the image is divided according to the regions;
in this embodiment, the first pixel information refers to pixels of a plurality of image blocks obtained by dividing according to a threshold; the second pixel information is pixels of a plurality of image blocks obtained by area division.
In this embodiment, the image information base is used to store the first training image and the image used to initially establish the live-action three-dimensional model.
In this embodiment, the difference data set is a set of difference values at different positions obtained by placing the temporary live-action three-dimensional model into a standard coordinate system and comparing the temporary live-action three-dimensional model with the live-action three-dimensional model.
In this embodiment, the classification result is obtained by classifying the data in the difference data set according to the position of the corresponding target building.
In this embodiment, the short data set is a set composed of data corresponding to the same position of the target building and classified according to the classification result.
In this embodiment, the target position refers to a position on the target building corresponding to the short data set.
The beneficial effects of the above technical scheme are: the method comprises the steps of conducting first training on a current image to obtain a first training image meeting requirements, storing the first training image in an image information base to remove repeated images in the image information base, building a temporary live-action three-dimensional model according to residual images in the image information base and a second training image, and comparing the temporary live-action three-dimensional model with the live-action three-dimensional model to obtain a difference data set.
Carrying out target difference classification on the difference data sets, obtaining short data sets of the same target from the classification results, and carrying out feature extraction on the short data sets to obtain the morphological difference of the target positions of the target buildings; and updating the real three-dimensional model according to the morphological difference of the target position. And accurately searching the difference between the live-action three-dimensional model and the current temporary three-dimensional model, finding an accurate position on the target building, and accurately updating the live-action three-dimensional model according to the position so as to obtain accurate change characteristics in the next detection.
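A minimal sketch of the two-mode qualification check in this example is given below. The use of OpenCV's adaptive thresholding for the first mode, Canny edges for the second, equal weights, and the mean as the "pixel information" statistic are all assumptions; the patent specifies only adaptive-threshold segmentation, edge-detection segmentation, and a weighted average.

```python
import cv2

def third_pixel_information(img, w1=0.5, w2=0.5):
    """Compute the weighted-average 'third pixel information' used to
    qualify a first training image (statistics and weights are assumed)."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # First mode: segmentation with locally adaptive thresholds.
    first = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv2.THRESH_BINARY, 31, 5)
    first_info = first.mean() / 255.0       # first pixel information

    # Second mode: region boundaries from abrupt gray-level changes.
    edges = cv2.Canny(gray, 50, 150)
    second_info = edges.mean() / 255.0      # second pixel information

    # Weighted average of the two modes -> third pixel information.
    return w1 * first_info + w2 * second_info

# The image qualifies for the image information base when the result
# reaches a preset standard, e.g.: third_pixel_information(img) >= 0.2
```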
Example 4:
based on the example 1, the step 2: extracting the initial change features of the target building in the live-action three-dimensional model according to a preset extraction rule, as shown in fig. 3, including:
step 2.1: establishing a three-dimensional model database, and storing the standard real-scene three-dimensional model of the target building into the three-dimensional model database;
step 2.2: acquiring a current live-action three-dimensional model of the target building, dividing the current live-action three-dimensional model into N small windows, acquiring a gray level co-occurrence matrix of each small window to obtain texture characteristic values of the corresponding small windows, and forming a texture characteristic matrix according to all the texture characteristic values to obtain a first characteristic of the target building;
step 2.3: acquiring each vertex of the target building according to the three-dimensional point cloud of the current live-action three-dimensional model, and generating a vertex diagram to obtain a second characteristic of the target building;
step 2.4: acquiring standard first characteristics and standard second characteristics of the target building according to the standard real-scene three-dimensional model of the target building;
step 2.5: and obtaining the initial change characteristic of the target building according to the difference between the first characteristic and the standard first characteristic and the difference between the second characteristic and the standard second characteristic.
In this embodiment, the three-dimensional model database is used for storing live-action three-dimensional models of different versions due to model updating and before updating, and includes standard live-action three-dimensional models.
In this embodiment, the first feature refers to a texture feature of the target building; the second feature refers to the morphological characteristics of the target building.
The beneficial effects of the above technical scheme are: according to the method, the first characteristic and the second characteristic of the current live-action three-dimensional model of the target building are obtained, and the initial change characteristic of the target building is obtained according to the difference between the first characteristic and the standard first characteristic and the difference between the second characteristic and the standard second characteristic. The change characteristics of the target building are subjected to preliminary analysis, and the preliminary analysis can be used for estimating the change direction of the target building.
Example 5:
based on the example 1, the step 3: adding detail features, optimizing a multi-feature difference image, further obtaining and outputting final change features of the target building, as shown in fig. 4, including:
step 3.1: dividing the target building into areas based on the first characteristics to obtain different area objects, acquiring color point clouds of the area objects and acquiring color information;
according to the color information, determining spectral information of each point cloud in the region object, establishing a region object spectral matrix, and acquiring a characteristic value of the spectral matrix;
obtaining the spectral feature of the target building as a third feature based on the feature value;
step 3.2: acquiring a second characteristic, processing the second characteristic information based on the morphological information of the target building, and acquiring a fourth characteristic of the target building;
step 3.3: acquiring detail features: and the third characteristic and the fourth characteristic are respectively compared with corresponding standard values to obtain characteristic differences, and a multi-characteristic difference image constructed by the initial variation characteristics is optimized based on the characteristic differences to obtain the final variation characteristics of the target building.
In this embodiment, the color information refers to the color saturation and color representation of the color point cloud.
In this embodiment, the spectrum information refers to a reflection spectrum of the point cloud determined according to the color information; the spectrum matrix refers to a matrix formed by point cloud reflection spectrums in the same area.
In the present embodiment, the third feature refers to a spectral feature of the target building; the fourth feature refers to a morphological building index feature of the target building.
The beneficial effects of the above technical scheme are: according to the method, the first feature is processed to obtain a third feature, the second feature is processed to obtain a fourth feature, standard values corresponding to the third feature and the fourth feature are obtained, the difference between the third feature and the corresponding standard values and the difference between the fourth feature and the corresponding standard values are respectively calculated, meanwhile, a multi-feature difference image is optimized based on the difference to obtain the final change feature of the target building, and the problem that image information is incomplete due to fusion of the first feature and the second feature is solved.
Example 6:
based on example 2, step 1.1: acquiring position data of a target building, and planning an image acquisition path of the target building according to the position data, wherein the image acquisition path comprises the following steps:
step 1.1.1: acquiring position data of a target building, and determining the construction range of the target building according to the position data;
step 1.1.2: based on the construction range of the target building, inferring the feature landmarks that the acquisition of the target building images needs to pass through, obtaining an initial path, calculating the degree of association between the feature landmarks, and determining the optimal path for acquiring the images of the target building;
acquiring the weight value of each feature landmark on the initial path, and calculating the degree of association of adjacent feature landmarks using formula (1);
[Formula (1) appears only as an image in the original publication and is not reproduced here.]
where ξ represents the degree of association of adjacent feature landmarks; p_j represents the weight value of the current feature landmark j, with j ≥ 2; p_{j-1} represents the weight value of the previous feature landmark j-1; p_{j+1} represents the weight value of the next feature landmark j+1; and n represents the total number of possible paths from the current feature landmark in each direction, with n ≥ 1;
when the degree of association ξ of adjacent feature landmarks falls within a prediction range, retaining the current feature landmark j, and continuing to calculate the adjacent-node association for the next feature landmark j+1;
and combining all retained feature landmarks to construct the optimal path for acquiring the target building images, and outputting and displaying the optimal path.
In this embodiment, the characteristic landmark refers to a landmark object in the image capturing movement path.
In this embodiment, the initial path refers to an image acquisition moving path preliminarily divided according to the construction range of the target building; the optimal path refers to an image acquisition moving path obtained by combining the reserved characteristic road signs.
In this embodiment, the weight value is an importance degree of a landmark object in a movement path of image acquisition, and is represented by a probability value.
The beneficial effects of the above technical scheme are: according to the invention, the construction range of the target building is determined according to the position data through the position data of the target building, the characteristic road signs through which the image of the target building needs to pass are presumed to be acquired based on the construction range of the target building, the initial path is obtained, the relevance among the characteristic road signs is calculated, the optimal path for acquiring the image of the target building is determined, the unimportant road signs are removed, and the optimal acquisition path is obtained, so that the image acquisition time is greatly reduced, and meanwhile, the problem of incomplete image acquisition caused by unclear acquisition range is avoided.
Example 7:
based on example 5, step 3.3: step 3.3: acquiring detail features: the third feature and the fourth feature are respectively compared with corresponding standard values to obtain feature differences, and a multi-feature difference image constructed by the initial variation features is optimized based on the feature differences to obtain final variation features of the target building, wherein the method comprises the following steps:
step 3.3.1: acquiring a multi-feature difference image before optimization, and performing fusion optimization on the multi-feature difference image according to different fusion rules to acquire a plurality of initial multi-feature difference images;
step 3.3.2: performing first evaluation on the initial multi-feature difference image based on an image evaluation index to obtain a first difference between the initial multi-feature difference image and a standard multi-feature difference image;
if the first difference is within a preset difference range, the initial multi-feature difference image is shown to meet basic requirements, and second evaluation is carried out;
otherwise, indicating that the initial multi-feature difference image does not meet the basic requirements, abandoning the fusion rule corresponding to the initial multi-feature difference image, and adjusting each feature proportion corresponding to the fusion rule according to the first difference and the image evaluation index to generate a new fusion rule;
step 3.3.3: obtaining a second evaluation of the initial multi-feature difference image meeting the basic requirements based on the standard multi-feature difference image, evaluating the form loss degree of the target building, and obtaining a form loss rate;
step 3.3.4: judging whether the loss rate of the initial multi-feature difference image is greater than a preset threshold value or not according to the form loss rate, if so, judging that the initial multi-feature difference image is unqualified, and removing;
otherwise, judging that the initial multi-feature difference image is qualified, selecting the initial multi-feature difference image with the minimum loss rate as a final multi-feature difference image, and temporarily storing all the qualified initial multi-feature difference images;
step 3.3.5: when a user needs to select the current multi-feature difference image, all qualified initial multi-feature difference images are obtained, and the user selects the qualified initial multi-feature difference images according to the self requirement;
and obtaining a user selection result, analyzing the user requirement, and updating the standard for preferentially recommending the initial multi-feature difference image according to the user requirement.
In this embodiment, the multi-feature difference image before optimization refers to a multi-feature difference image without the third feature and the fourth feature.
In this embodiment, the fusion rule refers to a rule for performing image fusion when the ratio of each feature is different.
In this embodiment, the initial multi-feature difference image is a multi-feature difference image obtained by fusing according to different fusion rules.
In this embodiment, the image evaluation indexes include color, sharpness, noise, and the like.
In this embodiment, the first evaluation is to evaluate an initial multi-feature difference image evaluation index; the first difference refers to a difference between the initial multi-feature difference image and the standard multi-feature difference image with respect to an image evaluation index.
In the embodiment, the second evaluation is carried out according to the form difference between the building form described by the initial multi-feature difference image and the form of the target building; the form loss rate refers to the form change condition of the initial multi-feature difference image relative to the target building itself.
In this embodiment, preferentially recommending the initial multi-feature difference image refers to an initial multi-feature difference image that is pre-screened for the user according to the user requirement, and the standard of the initial multi-feature difference image is determined according to the user requirement.
The beneficial effects of the above technical scheme are: according to different fusion rules, the multi-feature difference images are subjected to fusion optimization to obtain a plurality of initial multi-feature difference images, the initial multi-feature difference images are evaluated, and an optimal result is selected according to the evaluation result, so that the optimal fusion rule is screened out. In addition, the user can select the fusion rule according to the self requirement, so that the man-machine interaction is realized, meanwhile, the user requirement is analyzed, and the standard of preferentially recommending the initial multi-feature difference image is updated according to the requirement.
Example 8:
based on embodiment 4, the method further comprises the following steps: denoising the multi-feature difference image, comprising:
step 4.1: acquiring an optimized multi-feature difference image, and acquiring a noise signal variance of the optimized multi-feature difference image;
step 4.2: denoising the optimized multi-feature difference image based on dual-tree complex wavelet signals, and calculating an estimate of the noise-free signal wave coefficient of the optimized multi-feature difference image according to formula (2):
δ² = max( (1/M) Σ_{C∈N} C² - σ_n², 0 )   (2)
where δ² represents the estimate of the noise-free signal wave coefficient of the optimized multi-feature difference image; M represents the number of pixels in the noisy pixel neighborhood N of the optimized multi-feature difference image; C represents the noisy signal wave coefficient of the optimized multi-feature difference image at the current moment; and σ_n² represents the noise signal variance of the optimized multi-feature difference image;
according to the estimated value of the noise-free signal wave coefficient of the multi-feature difference image, establishing a denoising model:
H = inf_A { ∫_Ω |∇A| dΩ + (λ/2) ∫_Ω (C - A)² dΩ }
where H represents the denoising model of the multi-feature difference image; A represents the noise-free signal wave coefficient of the multi-feature difference image; C represents the noisy signal wave coefficient of the multi-feature difference image, with C = A + B, where B represents the noise signal wave coefficient of the multi-feature difference image; λ represents a regularization number; inf{ } denotes taking the infimum; Ω denotes the integration range, determined according to the magnitude of the noise signal; and ∇ denotes the gradient operator;
step 4.3: denoising the multi-feature difference image according to the denoising model, and judging that denoising of the multi-feature difference image is complete when the definition of the multi-feature difference image reaches a preset requirement;
otherwise, adjusting the regularization number, and carrying out denoising processing on the multi-feature difference image again.
The beneficial effects of the above technical scheme are: the method obtains the optimized multi-feature difference image, obtains the noise signal variance of the multi-feature difference image, denoises the multi-feature difference image based on the double-number complex wave signal, and establishes a denoising model, thereby avoiding the distortion of the multi-feature difference image caused by noise and causing the extraction error of the change features of the building.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A building change feature extraction method based on a live-action three-dimensional model is characterized by comprising the following steps:
step 1: establishing and updating a real-scene three-dimensional model of the target building;
step 2: extracting initial change characteristics of a target building in the live-action three-dimensional model according to a preset extraction rule;
and step 3: and adding detail features, optimizing the multi-feature difference image, further obtaining and outputting the final change features of the target building.
2. The method for extracting the building change features based on the live-action three-dimensional model as claimed in claim 1, wherein: step 1: establishing and updating a real-scene three-dimensional model of a target building, comprising:
step 1.1: acquiring position data of a target building, and planning an acquisition path of the target building according to the position data;
step 1.2: shooting the target building according to the acquisition path, acquiring the position relation between different shooting points in the shooting process, and acquiring the pixel characteristics of the corresponding shooting point image;
step 1.3: discretizing the corresponding image according to the pixel characteristics to obtain a three-dimensional point cloud of the target building;
step 1.4: and establishing a target building standard coordinate system according to the position relation among different shooting points, inserting the three-dimensional point cloud of the target building into the standard coordinate system, and establishing a real-scene three-dimensional model of the target building.
3. The method for extracting the building change features based on the live-action three-dimensional model as claimed in claim 1, wherein: step 1: establishing and updating a real-scene three-dimensional model of a target building, comprising: updating the live-action three-dimensional model of the target building, wherein the steps comprise:
obtaining a plurality of current images of the target building, and performing first-stage training on the current images to obtain a plurality of first training images;
segmenting the first training image according to a first mode to obtain first data, performing first preprocessing on the first data to obtain first pixel information, segmenting according to a second mode to obtain second data, and performing second preprocessing on the second data to obtain second pixel information;
carrying out weighted average on the first pixel information and the second pixel information to obtain third pixel information;
when the third pixel information reaches a preset standard, determining that the first training image is qualified, and storing the first training image into an image information base;
otherwise, performing second-stage training on the first training image to obtain a second training image, removing repeated images in an image information base, and establishing a temporary live-action three-dimensional model according to the residual images in the image information base and the second training image;
placing the temporary live-action three-dimensional model into a standard coordinate system to be compared with the live-action three-dimensional model, and acquiring a difference data set;
carrying out target difference classification on the difference data sets, obtaining short data sets of the same target according to classification results, carrying out feature extraction on the short data sets, and obtaining morphological difference of target positions of the target buildings;
and updating the real three-dimensional model according to the morphological difference of the target position.
4. The method for extracting the building change features based on the live-action three-dimensional model as claimed in claim 1, wherein: step 2: extracting the initial change characteristics of the target building in the live-action three-dimensional model according to a preset extraction rule, wherein the extraction rule comprises the following steps:
step 2.1: establishing a three-dimensional model database, and storing the standard real-scene three-dimensional model of the target building into the three-dimensional model database;
step 2.2: acquiring a current live-action three-dimensional model of the target building, dividing the current live-action three-dimensional model into N small windows, acquiring a gray level co-occurrence matrix of each small window to obtain texture characteristic values of the corresponding small windows, and forming a texture characteristic matrix according to all the texture characteristic values to obtain a first characteristic of the target building;
step 2.3: acquiring each vertex of the target building according to the three-dimensional point cloud of the current live-action three-dimensional model, and generating a vertex diagram to obtain a second characteristic of the target building;
step 2.4: acquiring standard first characteristics and standard second characteristics of the target building according to the standard real-scene three-dimensional model of the target building;
step 2.5: and obtaining the initial change characteristic of the target building according to the difference between the first characteristic and the standard first characteristic and the difference between the second characteristic and the standard second characteristic.
5. The method for extracting the building change features based on the live-action three-dimensional model as claimed in claim 1, wherein: and step 3: adding detail features, optimizing a multi-feature difference image, further obtaining and outputting final change features of the target building, wherein the steps comprise:
step 3.1: dividing the target building into areas based on the first characteristics to obtain different area objects, acquiring color point clouds of the area objects and acquiring color information;
according to the color information, determining spectral information of each point cloud in the region object, establishing a region object spectral matrix, and acquiring a characteristic value of the spectral matrix;
obtaining the spectral feature of the target building as a third feature based on the feature value;
step 3.2: acquiring a second characteristic, processing the second characteristic information based on the morphological information of the target building, and acquiring a fourth characteristic of the target building;
step 3.3: acquiring detail features: and the third characteristic and the fourth characteristic are respectively compared with corresponding standard values to obtain characteristic differences, and a multi-characteristic difference image constructed by the initial variation characteristics is optimized based on the characteristic differences to obtain the final variation characteristics of the target building.
6. The method for extracting the building change features based on the live-action three-dimensional model as claimed in claim 2, wherein: step 1.1: acquiring position data of a target building, and planning an image acquisition path of the target building according to the position data, wherein the image acquisition path comprises the following steps:
step 1.1.1: acquiring position data of a target building, and determining the construction range of the target building according to the position data;
step 1.1.2: based on the construction range of the target building, inferring the feature landmarks that the acquisition of the target building images needs to pass through, obtaining an initial path, calculating the degree of association between the feature landmarks, and determining the optimal path for acquiring the images of the target building;
acquiring the weight value of each feature landmark on the initial path, and calculating the degree of association of adjacent feature landmarks using formula (1);
[Formula (1), which defines the association degree ξ as a function of p_(j-1), p_j, p_(j+1) and n, appears in the source only as image FDA0003123735140000041.]
where ξ represents the degree of association of adjacent characteristic landmarks; p_j represents the weight value of the current characteristic landmark j, with j ≥ 2; p_(j-1) represents the weight value of the previous characteristic landmark j-1; p_(j+1) represents the weight value of the next characteristic landmark j+1; and n represents the total number of possible paths from the current characteristic landmark in each direction, with n ≥ 1;
when the association degree ξ of the adjacent characteristic landmarks is within the prediction range, retaining the current characteristic landmark j, and continuing to calculate the association degree of the adjacent nodes of the next characteristic landmark j+1;
combining all the retained characteristic landmarks to construct the optimal path for acquiring images of the target building, and outputting and displaying the optimal path.
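The sketch below shows only the keep-or-drop logic of step 1.1.2. Because formula (1) survives only as an image, the association degree is passed in as a callable; the prediction range and the representation of landmarks as bare weight values are likewise assumptions.

```python
def plan_optimal_path(landmarks, association, prediction_range=(0.5, 1.0)):
    """Sketch of step 1.1.2: traverse the initial path of characteristic
    landmarks, compute the association degree xi of each landmark j with
    its neighbours j-1 and j+1, and retain only landmarks whose xi falls
    within the prediction range.

    `landmarks` is a list of weight values p_j along the initial path;
    `association(p_prev, p_cur, p_next)` stands in for formula (1)."""
    if len(landmarks) < 3:
        return list(landmarks)                  # nothing interior to filter
    lo, hi = prediction_range
    kept = [landmarks[0]]                       # endpoints are always kept
    for j in range(1, len(landmarks) - 1):
        xi = association(landmarks[j - 1], landmarks[j], landmarks[j + 1])
        if lo <= xi <= hi:                      # xi within the prediction range
            kept.append(landmarks[j])
    kept.append(landmarks[-1])
    return kept                                 # combined into the optimal path
```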
7. The method for extracting building change features based on the live-action three-dimensional model as claimed in claim 5, wherein step 3.3, acquiring detail features, comparing the third characteristic and the fourth characteristic with their corresponding standard values to obtain characteristic differences, and optimizing the multi-feature difference image constructed from the initial change characteristics based on the characteristic differences to obtain the final change characteristics of the target building, comprises:
step 3.3.1: acquiring the multi-feature difference image before optimization, and performing fusion optimization on the multi-feature difference image under different fusion rules to obtain a plurality of initial multi-feature difference images;
step 3.3.2: performing a first evaluation of each initial multi-feature difference image based on an image evaluation index to obtain a first difference between the initial multi-feature difference image and a standard multi-feature difference image;
if the first difference is within a preset difference range, the initial multi-feature difference image meets the basic requirements, and a second evaluation is performed;
otherwise, the initial multi-feature difference image does not meet the basic requirements: the fusion rule corresponding to that image is abandoned, and the proportion of each feature in the fusion rule is adjusted according to the first difference and the image evaluation index to generate a new fusion rule;
step 3.3.3: performing, based on the standard multi-feature difference image, a second evaluation of each initial multi-feature difference image that meets the basic requirements, evaluating the degree of form loss of the target building to obtain a form loss rate;
step 3.3.4: judging from the form loss rate whether the loss rate of the initial multi-feature difference image is greater than a preset threshold; if so, judging the initial multi-feature difference image to be unqualified and removing it;
otherwise, judging the initial multi-feature difference image to be qualified, selecting the initial multi-feature difference image with the minimum loss rate as the final multi-feature difference image, and temporarily storing all the qualified initial multi-feature difference images;
step 3.3.5: when a user needs to select the current multi-feature difference image, presenting all the qualified initial multi-feature difference images for the user to choose from according to the user's own requirements;
obtaining the user's selection, analyzing the user's requirements, and updating the criterion for preferentially recommending initial multi-feature difference images according to those requirements.
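A hedged sketch of steps 3.3.1 through 3.3.4 follows. Each "fusion rule" is modeled as a set of per-feature weights, and mean absolute error stands in for both the unspecified image evaluation index and the form loss rate, which the claims leave open.

```python
import numpy as np

def fuse_and_select(diff_maps, fusion_rules, standard, diff_limit=0.2,
                    loss_limit=0.3):
    """Sketch of steps 3.3.1-3.3.4: fuse the per-feature difference maps
    under several candidate fusion rules (weight sets), screen each fused
    image against the standard multi-feature difference image, and keep
    the qualified image with the minimum loss rate."""
    qualified = []
    for weights in fusion_rules:
        fused = sum(w * m for w, m in zip(weights, diff_maps))
        # First evaluation: image evaluation index (MAE assumed here).
        first_difference = np.mean(np.abs(fused - standard))
        if first_difference > diff_limit:
            continue  # rule abandoned; a re-tuned rule would be generated here
        # Second evaluation: form loss rate (relative error assumed here).
        loss_rate = first_difference / (np.mean(np.abs(standard)) + 1e-9)
        if loss_rate <= loss_limit:
            qualified.append((loss_rate, fused))
    qualified.sort(key=lambda item: item[0])
    final = qualified[0][1] if qualified else None
    return final, qualified  # final image plus all qualified candidates
```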
8. The method for extracting building change features based on the live-action three-dimensional model as claimed in claim 4, wherein the method further comprises a step 4 of denoising the multi-feature difference image, comprising:
step 4.1: acquiring the optimized multi-feature difference image, and acquiring the noise signal variance of the optimized multi-feature difference image;
step 4.2: denoising the optimized multi-feature difference image based on the complex wavelet signal, and calculating the estimate of the noise-free wavelet coefficients of the optimized multi-feature difference image according to formula (2):
δ² = max( (1/M) · Σ_(C∈N) C² − σ_n², 0 )    (2)
where δ² represents the estimate of the noise-free wavelet coefficients of the optimized multi-feature difference image; M represents the number of pixels in the noisy pixel neighborhood N of the optimized multi-feature difference image; C represents the noisy wavelet coefficient of the optimized multi-feature difference image at the current moment; and σ_n² represents the noise signal variance of the optimized multi-feature difference image;
according to the estimate of the noise-free wavelet coefficients of the multi-feature difference image, establishing a denoising model:
H = inf_A { ∫_Ω |∇A| dΩ + (λ/2) · ∫_Ω (C − A)² dΩ }
where H represents the denoising model of the multi-feature difference image; A represents the noise-free wavelet coefficients of the multi-feature difference image; C represents the noisy wavelet coefficients of the multi-feature difference image, with C = A + B, where B represents the noise wavelet coefficients of the multi-feature difference image; λ represents the regularization number; inf{ } represents taking the infimum; Ω represents the integration range, determined according to the magnitude of the noise signal; and ∇ represents the gradient operator;
step 4.3: denoising the multi-feature difference image according to the denoising model, and when the definition of the multi-feature difference image reaches the preset requirement, judging that denoising of the multi-feature difference image is complete;
otherwise, adjusting the regularization number and denoising the multi-feature difference image again.
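For illustration, here is a sketch of steps 4.1 through 4.3 under the assumption that the denoising model above is the total-variation functional its symbols suggest. scikit-image's Chambolle solver and the gradient-energy proxy for "definition" are this sketch's own choices, not named by the claims.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle, estimate_sigma

def denoise_difference_image(image, sharpness_required=0.9, max_rounds=5):
    """Sketch of steps 4.1-4.3: estimate the noise level of the optimized
    multi-feature difference image, minimize a total-variation denoising
    model, and re-adjust the regularization number until the definition
    (here: retained gradient energy) meets the preset requirement."""
    sigma = estimate_sigma(image)            # step 4.1: noise signal level
    weight = float(sigma) if sigma > 0 else 0.1   # initial regularization number
    base_energy = np.mean(np.hypot(*np.gradient(image))) + 1e-9
    for _ in range(max_rounds):
        denoised = denoise_tv_chambolle(image, weight=weight)
        sharpness = np.mean(np.hypot(*np.gradient(denoised))) / base_energy
        if sharpness >= sharpness_required:
            return denoised                  # step 4.3: definition reached
        weight *= 0.5                        # less smoothing, more detail kept
    return denoised
```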
CN202110684111.9A 2021-06-21 2021-06-21 Building change feature extraction method based on live-action three-dimensional model Pending CN113516771A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110684111.9A CN113516771A (en) 2021-06-21 2021-06-21 Building change feature extraction method based on live-action three-dimensional model

Publications (1)

Publication Number Publication Date
CN113516771A true CN113516771A (en) 2021-10-19

Family

ID=78066049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110684111.9A Pending CN113516771A (en) 2021-06-21 2021-06-21 Building change feature extraction method based on live-action three-dimensional model

Country Status (1)

Country Link
CN (1) CN113516771A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839267A (en) * 2014-02-27 2014-06-04 西安科技大学 Building extracting method based on morphological building indexes
CN107356230A (en) * 2017-07-12 2017-11-17 深圳市武测空间信息有限公司 A kind of digital mapping method and system based on outdoor scene threedimensional model
CN109919944A (en) * 2018-12-29 2019-06-21 武汉大学 A kind of joint super-pixel figure of complex scene building variation detection cuts optimization method
JP6808787B1 (en) * 2019-08-22 2021-01-06 株式会社パスコ Building change identification device and program

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114896679A (en) * 2022-07-13 2022-08-12 深圳大学 Three-dimensional model optimization method for building, intelligent terminal and storage medium
CN114896679B (en) * 2022-07-13 2022-10-04 深圳大学 Three-dimensional model optimization method of building, intelligent terminal and storage medium
CN115311574A (en) * 2022-10-12 2022-11-08 山东乾元泽孚科技股份有限公司 Building monitoring method, equipment and medium
CN115311574B (en) * 2022-10-12 2023-02-07 山东乾元泽孚科技股份有限公司 Building monitoring method, equipment and medium
CN116189367A (en) * 2022-12-09 2023-05-30 嘉应学院 Building fire alarm system based on Internet of things
CN116189367B (en) * 2022-12-09 2023-09-26 嘉应学院 Building fire alarm system based on Internet of things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination