CN115272848A - Intelligent change detection method for buildings in cloudy and foggy farmland protection areas - Google Patents


Info

Publication number
CN115272848A
CN115272848A (application CN202210844031.XA)
Authority
CN
China
Prior art keywords
building
model
data
image
knowledge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210844031.XA
Other languages
Chinese (zh)
Other versions
CN115272848B
Inventor
李闯农
朱军
朱庆
郭煜坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202210844031.XA
Publication of CN115272848A
Application granted
Publication of CN115272848B
Legal status: Active
Anticipated expiration


Classifications

    • G06V 20/176 Scenes; terrestrial scenes; urban or other man-made structures
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 5/02 Knowledge representation; symbolic representation
    • G06N 5/04 Inference or reasoning models
    • G06V 10/762 Pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V 10/764 Pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/82 Pattern recognition or machine learning using neural networks
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an intelligent change detection method for buildings in cloudy and foggy farmland protection areas, belonging to the technical field of knowledge graphs and recognition. It solves the problem that the prior art cannot effectively recognize and detect buildings in cultivated land from optical remote sensing images occluded by cloud and fog, which leads to poor recognition results. The method comprises: constructing a knowledge graph model for detecting building changes in cultivated land based on multi-source spatio-temporal data; constructing a knowledge inference model for detecting building changes based on the knowledge graph model and a bidirectional-chaining inference model; and, given data input to the knowledge inference model by the knowledge graph model, executing the rules in the knowledge inference model to perform intelligent change detection of buildings in the cultivated land protection area. The method is used for intelligent change detection of buildings.

Description

Intelligent change detection method for buildings in cloudy and foggy farmland protection areas
Technical Field
The invention relates to an intelligent change detection method for buildings in cloudy and foggy farmland protection areas, used for intelligent change detection of buildings, and belongs to the technical field of knowledge graphs and recognition.
Background
Cultivated land is the important foundation of grain production and a fundamental lifeline of social development. The state protects cultivated land specially, strictly maintains the cultivated-land protection red line, keeps the total amount of cultivated land from decreasing, and intensifies the rectification of illegal occupation of cultivated land for house building. Detecting building changes in farmland protection areas is therefore an important measure for preventing illegal occupation of cultivated land. The application of high-resolution remote sensing imagery brings great convenience to target supervision: a wider monitoring range and a shorter information-collection interval provide a good data basis for the supervision of buildings on cultivated land, while the rapid development of artificial intelligence provides technical support for intelligent target identification and change detection. However, southern China contains many cloudy, rainy and foggy regions, where weather limits the effectiveness of remote-sensing observation of the ground; at the same time, because of the climatic and geographic conditions of these regions, urbanization there is developing rapidly, making them key areas for the supervision of buildings on cultivated land. How to realize building identification and change detection in cloudy, rainy and foggy farmland protection areas is therefore an important problem.
In towns where buildings are concentrated, semantic segmentation is well suited to extracting buildings from images; in farmland protection areas, however, buildings are scattered and many small, isolated buildings exist, which are easily missed or falsely detected when a deep-learning semantic segmentation method is applied to the image directly. In addition, building extraction usually preprocesses the background content to reduce its influence; since buildings occupy only a small fraction of a farmland image, preprocessing the whole image wastes computational resources and is inefficient.
The prior art has the following technical problems:
1. The prior art cannot effectively identify and detect buildings in cultivated land from optical remote sensing images occluded by cloud and fog, so the recognition of buildings in cultivated land is poor;
2. The prior art processes the background of the whole optical remote sensing image in order to identify buildings, which wastes computing resources and lowers computational efficiency;
3. In prior-art building change detection, the steps of data selection, image processing, and target identification and extraction are separate, and the associations between them must be established manually, which consumes a large amount of labor and time and prevents full-workflow intelligent change detection.
Disclosure of Invention
The invention aims to provide an intelligent change detection method for buildings in cloudy and foggy farmland protection areas, solving the prior-art problem that buildings in cultivated land cannot be effectively identified and detected from optical remote sensing images occluded by cloud and fog, which leads to poor recognition results.
In order to achieve the purpose, the invention adopts the technical scheme that:
a method for detecting intelligent changes of buildings in a cloudy and foggy farmland protection area comprises the following steps:
Step 1, constructing a knowledge graph model for detecting building changes in cultivated land based on multi-source spatio-temporal data;
Step 2, constructing a knowledge inference model for detecting building changes in cultivated land based on the knowledge graph model and a bidirectional-chaining inference model;
Step 3, based on data input to the knowledge inference model by the knowledge graph model, executing the rules in the knowledge inference model to perform intelligent change detection of buildings in the cultivated land protection area.
Further, the specific steps of step 1 are:
Step 1.1, acquiring the objects required for building detection in cultivated land based on multi-source spatio-temporal data, and organizing the attribute features of each object and the association relationships between objects. The multi-source spatio-temporal data comprise historical climate data, multi-channel image data, cultivated-land division data, cultivated-land building sample data, district and county cultivated-land policy data, time data, administrative-division spatial-range data and vegetation data. The objects, abstracted from these data, comprise time, space, climate, image, cultivated land, building and vegetation; the images comprise optical remote sensing images and SAR images;
Step 1.2, forming the schema layer from the objects and the association relationships between them, and forming the data layer from the attribute features, thereby obtaining the constructed knowledge graph model. The schema layer contains the objects and the semantic relationships between them, and the data layer contains the concrete data content of each object.
Further, the specific steps of step 2 are:
Step 2.1, based on the schema layer and data layer of the knowledge graph model, extracting knowledge from the multi-source spatio-temporal data and storing it as SPO (subject-predicate-object) triples, each triple taking the form subject, predicate, object; the SPO triples are extracted using knowledge-extraction methods for structured data, semi-structured data and unstructured data;
and 2.2, constructing a knowledge inference model based on the stored knowledge and the inference model of the bidirectional chain.
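The SPO triple storage of step 2.1 can be sketched as a minimal in-memory triple store; the entity and relation names below are illustrative, not taken from the patent:

```python
# Minimal sketch of SPO (subject-predicate-object) triple storage for the
# knowledge graph's data layer; entity and relation names are illustrative.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        # None acts as a wildcard, so partial patterns can be matched.
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

store = TripleStore()
store.add("Image_001", "coversSpace", "CountyA")
store.add("Image_001", "hasType", "optical")
store.add("CountyA", "hasClimate", "cloudy")
results = store.query(subject="Image_001")
```

A schema layer would additionally constrain which predicates may link which object types; the stored triples correspond to the data layer.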
Further, the specific steps of step 2.2 are:
step 2.21, constructing a rule base based on the stored knowledge, wherein the rule base comprises an image optimization rule, a building identification rule, a building extraction rule and a change detection rule;
Step 2.22, constructing the knowledge inference model for detecting building changes in cultivated land based on the rule base and the bidirectional-chaining inference model.
Further, the specific steps of the image optimization rule in step 2.21 are as follows:
If historical optical remote sensing images are to be examined, then for the space to be detected, sunny-day optical remote sensing images in the required months are selected and recommended according to the statistics of the historical climate data;
If optical remote sensing images of the current month are to be examined, then for the space to be detected, the month corresponding to the current month is located in the historical climate statistics at the start of the month; the runs of consecutive sunny days within that month are sorted from longest to shortest, the period with the most consecutive sunny days is taken, and the optical remote sensing images of that period in the current month are selected and recommended;
The recommendation results are then numbered according to the data in the knowledge graph model to obtain the final recommended images, the numbering format being 'space name-time-image type';
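The sunny-window selection in the image optimization rule can be sketched as follows; the day labels and one-month window are illustrative:

```python
# Sketch of the image-optimization rule: from daily weather labels for the
# target month, find the longest run of consecutive sunny days and recommend
# images falling in that window. Data and naming are illustrative.
def longest_sunny_window(days):
    """days: list of 'sunny'/'cloudy' labels, index = day of month (0-based).
    Returns (start, length) of the longest consecutive sunny run."""
    best_start, best_len, run_start = 0, 0, None
    for i, label in enumerate(days + ["cloudy"]):  # sentinel closes final run
        if label == "sunny":
            if run_start is None:
                run_start = i
        elif run_start is not None:
            if i - run_start > best_len:
                best_start, best_len = run_start, i - run_start
            run_start = None
    return best_start, best_len

days = ["cloudy", "sunny", "sunny", "cloudy", "sunny", "sunny", "sunny"]
start, length = longest_sunny_window(days)
# Number the recommendation in the 'space name-time-image type' format:
recommended = f"CountyA-{start + 1:02d}-optical"
```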
the building identification rule in step 2.21 specifically comprises the following steps:
Step 2.211, classifying each recommended image as a clear scene or a cloudy scene with a scene classifier. For a clear scene, building targets are identified directly with the lightweight SSD model to obtain the positions of the buildings, i.e. their range frames, and processing continues at step 2.213; otherwise processing continues at step 2.212;
Step 2.212, optimizing the recommended images that contain cloud and fog, then identifying building targets with the lightweight SSD model to obtain the positions of the buildings, i.e. their range frames, and continuing at step 2.213;
The optimization of recommended images containing cloud and fog proceeds as follows:
A mapping model from the SAR image to the optical remote sensing image is established using a generative adversarial network; the mapping model comprises a trained U-net generator network and a Markov discriminator. The objective function L_GAN(G, D) of the generative adversarial network is:

L_GAN(G, D) = E_{n,m}[log D(n, m)] + E_{n,l}[log(1 - D(n, G(n, l)))]    (5)

where n denotes a cloud-free SAR image, m denotes a cloud-free optical remote sensing image, G(n, l) denotes the generated cloud-free optical remote sensing image, D(n, m) denotes the discriminator's judgment of whether the image is a real sample, l denotes random noise, E_{n,m}[log D(n, m)] is the expectation over the real-data distribution, and E_{n,l}[log(1 - D(n, G(n, l)))] is the expectation over the generated-data distribution;
The recommended images containing cloud and fog are input into the trained mapping model to generate optical remote sensing images, yielding the optimized images;
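Objective (5) can be evaluated numerically for a toy batch of discriminator outputs; the values below are illustrative, and the full training loop (U-net generator, Markov discriminator) is omitted:

```python
import numpy as np

# Numerical sketch of objective (5): the discriminator term on real
# SAR/optical pairs plus the term on generated pairs, averaged over a toy
# batch. The discriminator maximizes this; the generator minimizes the
# second term. Values are illustrative.
def gan_objective(d_real, d_fake):
    """d_real: D(n, m) on real pairs; d_fake: D(n, G(n, l)) on generated."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

d_real = np.array([0.9, 0.8])   # discriminator confident the pairs are real
d_fake = np.array([0.2, 0.1])   # discriminator rejecting generated pairs
loss = gan_objective(d_real, d_fake)
```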
Step 2.213, obtaining, from the knowledge graph model, the shooting time and projection coordinates of the optical remote sensing images whose range frames contain buildings; taking the first frame of the optical remote sensing images as the base point, clustering the frames over one-week periods with the k-means algorithm; and complementing the range frames of the clustered optical remote sensing images to obtain the complemented range frames. The complementation formula is:

B_new{x, y, w, h} = { min_i B_i{x}, min_i B_i{y}, max_i(B_i{x} + B_i{w}) - min_i B_i{x}, max_i(B_i{y} + B_i{h}) - min_i B_i{y} },  if ∃ i, j : B_i ∩ B_j ≠ ∅

where B_new{x, y, w, h} is the complemented range frame, x and y are the coordinates of its upper-left corner, w and h are its width and height, and B_i is the i-th range frame to be complemented; range frames are merged whenever they intersect. B_i{x} and B_i{y} denote the x and y coordinates of the upper-left corner of the i-th range frame, ∃ denotes existence, ∩ denotes intersection, and max and min denote the maximum and minimum values;
Step 2.214, storing the complemented range frames and the numbers of the optical remote sensing images that contain them.
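The complementation of step 2.213 amounts to merging intersecting range frames into one box whose upper-left corner is the minimum corner and whose extent reaches the maximum far corner; a sketch with illustrative (x, y, w, h) boxes:

```python
# Sketch of step 2.213: merge intersecting range frames B_i{x, y, w, h} into
# a complemented frame B_new using the min/max rule. Boxes are illustrative.
def intersects(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def merge_boxes(boxes):
    """Union of mutually intersecting boxes: top-left is the min corner,
    width/height span to the max far corner."""
    x = min(b[0] for b in boxes)
    y = min(b[1] for b in boxes)
    w = max(b[0] + b[2] for b in boxes) - x
    h = max(b[1] + b[3] for b in boxes) - y
    return (x, y, w, h)

a = (10, 10, 20, 20)   # frame from one image in the cluster
b = (25, 15, 20, 10)   # overlapping frame from another clustered image
merged = merge_boxes([a, b]) if intersects(a, b) else None
```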
Further, the lightweight SSD model is obtained from the SSD model by replacing the fully-connected network in the SSD model with two convolutional layers, after which the model is lightened with a convolutional-channel pruning algorithm;
The modified SSD model is lightened with the channel pruning algorithm as follows:
First, different pruning rates are applied to each convolutional layer of the modified SSD model to determine its optimal pruning-rate interval;
Within this optimal interval, a different pruning rate is set for each convolutional layer; the rate for each layer is chosen according to the recognition accuracy of the modified SSD model when that layer is pruned at a larger or smaller rate, where the larger rate is the median of the interval above the optimal rate and the smaller rate is the median of the interval below it;
Then the L1 norm of each channel in every convolution kernel of the modified SSD model is computed and the channels are sorted by this norm: the larger the value, the more important the channel, and the channels with small values are pruned;
Finally, the layers with the larger pruning rate are pruned uniformly and retrained to recover the pre-pruning accuracy of the modified SSD model, the layers with the smaller pruning rate are likewise retrained to recover the pre-pruning accuracy, and the lightening of the SSD model is completed, yielding the lightweight SSD model.
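The L1-norm ranking at the heart of the channel pruning step can be sketched as follows; the layer shape and pruning rate are illustrative, not values from the patent:

```python
import numpy as np

# Sketch of L1-norm channel pruning: rank the channels of one conv layer by
# the L1 norm of their weights and keep the most important ones. The layer
# shape and the 0.5 pruning rate are illustrative.
def prune_channels(weights, prune_rate):
    """weights: (out_channels, in_channels, kh, kw). Returns the indices of
    the channels to keep, i.e. those with the largest L1 norms."""
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = weights.shape[0] - int(weights.shape[0] * prune_rate)
    keep = np.argsort(norms)[::-1][:n_keep]   # largest-norm channels first
    return np.sort(keep)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
kept = prune_channels(w, prune_rate=0.5)      # prune half the channels
```

After pruning, the layer would be rebuilt with only the kept channels and retrained to recover accuracy, as the step above describes.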
Further, the building extraction rule of step 2.21 comprises the following steps:
Step 2.21-1, based on the complemented range frames obtained in step 2.214, indexing and screening all images whose range frames contain buildings as new optical remote sensing images, and cropping the new optical remote sensing images to obtain the range frames of the buildings;
Step 2.21-2, searching all complemented range frames using the range frames obtained by cropping, and obtaining from the search results the local optical remote sensing images used for target identification;
Step 2.21-3, extracting from each local optical remote sensing image obtained in step 2.21-2 the range frame corresponding to each frame, extracting the building boundary within each range frame, and merging the extracted boundaries to obtain the preliminarily extracted building boundary;
Step 2.21-4, establishing a target library of buildings based on the preliminarily extracted boundary, and regularizing the building boundary with a building-morphology fitting method to obtain the finally extracted boundary, i.e. the outline of the building object;
the specific steps of the change detection rule in step 2.21 are as follows:
The finally extracted boundary is compared with the corresponding planning map: the area is determined from the initially input space, grid computation is performed against the planning map of that area, the building boundary and the planning map are rasterized to a uniform resolution, and the extracted building boundary is subtracted from the planning map to obtain the building change spots. If the range frame of a building in a change spot has grown, the spot is classed as positive; if the range has shrunk, it is classed as negative. Positive change spots are marked as abnormal, indicating possible illegal behavior.
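Once both layers share a resolution, the change detection rule reduces to a raster subtraction; a toy-grid sketch (the grids are illustrative):

```python
import numpy as np

# Sketch of the change-detection rule: rasterise the extracted building mask
# and the planning map to the same grid and subtract. Cells where a building
# exists but the plan has none are positive-class spots (possible violation).
plan = np.zeros((4, 4), dtype=int)
plan[0:2, 0:2] = 1                      # planned building area

extracted = np.zeros((4, 4), dtype=int)
extracted[0:2, 0:2] = 1                 # planned buildings still present
extracted[2:4, 2:4] = 1                 # new, unplanned construction

diff = extracted - plan
positive = np.argwhere(diff == 1)       # building added: flag as abnormal
negative = np.argwhere(diff == -1)      # building removed
```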
Further, the specific steps of the steps 2.21-3 are as follows:
Step 2.21-31, extracting from each local optical remote sensing image the range frame corresponding to each frame; selecting an appropriate vegetation index according to the bands present in each local optical remote sensing image; and judging, on the basis of that index, whether each pixel in each extracted range frame is vegetation. If a pixel is vegetation, processing continues at the next step. The vegetation indices comprise the triangular vegetation index, the soil-adjusted vegetation index, the normalized difference vegetation index and the enhanced vegetation index;
Step 2.21-32, eliminating the pixels judged to be vegetation from each extracted range frame;
Step 2.21-33, enhancing the vegetation-eliminated range frames with a multi-scale segmentation method, extracting the building boundaries with an SVM classification method after enhancement, and merging the extracted boundaries to obtain the preliminarily extracted building boundary;
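Vegetation elimination with a vegetation index can be sketched as follows; NDVI is used here, and the 0.3 threshold is an illustrative choice, not a value from the patent:

```python
import numpy as np

# Sketch of steps 2.21-31/32: compute NDVI per pixel inside a range frame
# and mask out vegetation before boundary extraction. The 0.3 threshold is
# an illustrative choice.
def ndvi(nir, red, eps=1e-9):
    return (nir - red) / (nir + red + eps)

nir = np.array([[0.8, 0.7], [0.2, 0.1]])   # near-infrared band (toy values)
red = np.array([[0.1, 0.2], [0.3, 0.4]])   # red band (toy values)
veg = ndvi(nir, red) > 0.3                 # True where the pixel is vegetation
masked = np.where(veg, 0.0, nir)           # eliminate vegetation pixels
```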
the steps 2.21-4 comprise the following specific steps:
Various shape elements of the building are constructed, the shape elements comprising squares, rectangles, trapezoids and circles;
All shape elements together form the target library of buildings;
The fitting degree of each building is computed from its preliminarily extracted boundary, and irregular object blocks are replaced by regular building forms from the target library according to the fitting degree, as follows:
First, the rectangular fitting factor of the extracted boundary is computed, i.e. the proportion K of the area of the extracted building boundary within its circumscribed rectangle:

K = Area_obj / Area_rect

Second, the aspect ratio W of the circumscribed rectangle is computed:

W = Len_rect / Wid_rect

Then the ratio C of the area of the figure to its perimeter is computed:

C = Area_obj / Cir_obj

where Area_obj is the area of the obj-th extracted building boundary, Area_rect is the area of the circumscribed rectangle of the merged object block rect, Len_rect is the length of that circumscribed rectangle, Wid_rect is its width, and Cir_obj is the perimeter of the obj-th boundary;
Finally, the ratio K, the aspect ratio W and the area-to-perimeter ratio C of the extracted boundary are compared with the K, W and C of the objects in the target library, and the closest form is selected as the replacement, completing the regularization of the boundary.
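The morphology-fitting comparison can be sketched as computing K, W and C for an extracted boundary and picking the nearest template from the target library; the template values below are illustrative:

```python
# Sketch of the morphology-fitting step: compute the rectangular fitting
# factor K, aspect ratio W and area/perimeter ratio C for an extracted
# boundary and pick the closest template from the building target library.
# Template values are illustrative.
def shape_factors(area, perimeter, rect_len, rect_wid):
    K = area / (rect_len * rect_wid)   # fill ratio of circumscribed rectangle
    W = rect_len / rect_wid            # aspect ratio of the rectangle
    C = area / perimeter               # area-to-perimeter ratio
    return (K, W, C)

def closest_template(factors, library):
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(factors, library[name]))
    return min(library, key=dist)

library = {"square": (1.0, 1.0, 2.5), "rectangle": (1.0, 2.0, 3.3)}
f = shape_factors(area=96.0, perimeter=40.0, rect_len=10.0, rect_wid=10.0)
best = closest_template(f, library)
```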
Further, the specific steps of the steps 2.21-33 are as follows:
Each pixel in the vegetation-eliminated range frame is divided into object blocks by the multi-scale segmentation method; similar object blocks are merged by computing their similarity, and the feature variable is computed, yielding the enhanced range frame;
The similarity parameters are features of the building, comprising shape, texture and color features; the feature variable is obtained by weighting these features with different weights, according to the formula:

F = ω_1·f_s + ω_2·f_t + ω_3·f_c    (8)

where F denotes the feature variable, ω_1, ω_2 and ω_3 denote the given weights, f_s denotes the shape feature, f_t the texture feature and f_c the color feature; the shape feature is expressed through smoothness and compactness, the texture feature through the entropy of the gray-level co-occurrence matrix, and the color feature through HSV;
The shape, texture and color features are expressed as:

f_s = α_1·(C / L) + (1 - α_1)·(C / √N)

f_t = -Σ_x Σ_y grey(x, y)·log grey(x, y)

f_c = ρ_1·H + ρ_2·S + ρ_3·V    (11)

where α_1 denotes a weight, C denotes the boundary length of the merged object block, L denotes the perimeter of the envelope rectangle of the merged object block, N denotes the number of pixels contained within the merged object block, grey(x, y) denotes the gray value at pixel coordinate (x, y), ρ_1, ρ_2 and ρ_3 denote the weights of the HSV components, H, S and V denote hue, saturation and value respectively, R_(x,y), G_(x,y) and B_(x,y) denote the values at coordinate (x, y) in the red, green and blue channels (from which H, S and V are computed), and the sums over x and y run over the i pixel rows and columns of the merged object block;
After the feature variable is obtained, it is input into the trained SVM classifier to extract the building boundaries, and the extracted boundaries are merged to obtain the preliminarily extracted building boundary.
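Formula (8) is a plain weighted sum; a sketch with illustrative weights and feature values:

```python
# Sketch of the feature variable of formula (8): a weighted combination of
# shape, texture and colour features fed to the SVM classifier. The weights
# and feature values below are illustrative.
def feature_variable(f_s, f_t, f_c, w=(0.4, 0.3, 0.3)):
    return w[0] * f_s + w[1] * f_t + w[2] * f_c

F = feature_variable(f_s=0.8, f_t=0.5, f_c=0.6)
```

In practice one such variable would be computed per merged object block and the resulting vectors passed to the trained SVM.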
Further, step 3 performs building change detection based on the data input to the knowledge inference model and on the image optimization rule, building identification rule, building extraction rule and/or change detection rule executed by the knowledge inference model from the rule base, specifically as follows:
When a space is input to the knowledge inference model from the knowledge graph model, the inference model executes the image optimization, building identification, building extraction and change detection rules of the rule base in sequence to perform intelligent change detection of buildings in the cultivated land protection area;
When a recommended image is input to the knowledge inference model from the knowledge graph model, the inference model executes the building identification, building extraction and change detection rules in sequence;
When the number of a local optical remote sensing image containing a complemented building range frame is input, the inference model executes the building extraction and change detection rules in sequence;
When a finally extracted boundary is input, the inference model executes the change detection rule.
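The four entry points of step 3 can be viewed as starting the same ordered rule chain at different stages; a dispatch sketch with placeholder rule names:

```python
# Sketch of step 3: the rule base as an ordered chain. Depending on what the
# knowledge graph supplies (a space, a recommended image, a range-frame
# number, or a final boundary), execution starts at a different rule. Rule
# names are placeholders for the procedures described above.
RULES = ["image_optimization", "building_identification",
         "building_extraction", "change_detection"]

ENTRY = {"space": 0, "recommended_image": 1,
         "range_frame_number": 2, "final_boundary": 3}

def run_chain(input_kind):
    """Return the rules fired, in order, for the given input kind."""
    return RULES[ENTRY[input_kind]:]

fired = run_chain("recommended_image")
```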
Compared with the prior art, the invention has the following advantages:
1. The method searches all spatio-temporally combined (complemented) range frames based on the building range frames in each frame and obtains the images for target identification from the complemented range frames, which narrows the scope of target processing and allows faster image processing; it also removes the influence of many other targets, improving boundary-extraction accuracy to a certain extent;
2. The invention integrates a lightweight SSD model and constructs an end-to-end building identification model connecting several intelligent processing steps, which both reduces the size of the original SSD model and effectively improves the efficiency of the whole model;
3. When an image contains cloud and fog, multi-channel image complementation (i.e. the optimization of recommended images containing cloud and fog) is used, preserving the information completeness of every frame, benefiting subsequent recognition accuracy, and enabling effective identification and detection of buildings in cloud-occluded images;
4. The whole-workflow knowledge graph is driven by the knowledge inference model, so all change-detection modules are closely connected, automation is strengthened and manual labor is reduced;
5. The invention does not need to process the background of the whole image, so no computing resources are wasted and computational efficiency is high.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic diagram of the framework of the present invention;
FIG. 2 is a schematic diagram of a knowledge graph model according to the present invention;
FIG. 3 is a schematic diagram of the bidirectional-chaining knowledge inference model in the present invention;
FIG. 4 is a schematic diagram of intelligent building identification according to the present invention;
FIG. 5 is a schematic flow chart of boundary extraction of a building according to the present invention, in which Image _ n represents an nth frame Image, and Rn represents a range frame of the nth frame Image;
FIG. 6 is an index diagram of a local optical remote sensing image according to the present invention, wherein the corresponding block diagrams of Z1, Z2, and Z3 are complementary range frames obtained after cropping, M1 represents a 1 st frame image, M2 represents a 2 nd frame image, and M3 represents a 3 rd frame image;
FIG. 7 is a schematic illustration of vegetation elimination and enhancement based on the vegetation index in the present invention;
FIG. 8 is a schematic diagram of the regularization of boundaries based on a building target library in the present invention;
FIG. 9 is a schematic diagram of a lightweight SSD model in accordance with the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The knowledge graph model connects the association relations among the objects, attribute features and data of the cultivated land protection area, so knowledge storage can be completed in combination with knowledge extraction. For the requirement of detecting building changes in cultivated land in cloudy, rainy and foggy areas, a knowledge inference model is constructed on the basis of the knowledge graph model; data optimization is realized through association reasoning, which guides the intelligent identification, boundary extraction and change detection of buildings. For the intelligent identification of cultivated-land buildings, regional climate statistics and the associated time information are analyzed, optical remote sensing images with a low probability of cloud and fog influence are preferentially selected as the basic identification data, an end-to-end network structure is established in combination with deep learning, and the range frames of buildings in the optical remote sensing images are identified more accurately through modules such as scene classification, cloud and fog optimization, target identification and space-time complementation. For the boundary extraction and change detection of cultivated-land buildings, vegetation information is eliminated, the geometric and texture features of buildings are associated, and a multi-scale segmentation method is combined to extract building boundaries; the change detection result is obtained by associating and comparing planning information with historical data.
A method for detecting intelligent change of buildings in a cloudy and foggy farmland protection area is characterized by comprising the following steps:
step 1, constructing a knowledge graph model for detecting changes of buildings in a farmland based on multi-source space-time data; as shown in fig. 2, the specific steps are as follows:
step 1.1, acquiring the objects required for building detection in farmland based on multi-source spatio-temporal data, and organizing the attribute features of each object and the association relations between the objects, wherein the multi-source spatio-temporal data comprise historical climate data, multi-channel image data, farmland division data, farmland building sample data, district/county farmland policy data, time data, administrative division spatial range data and vegetation data; the objects comprise the time, space, climate, image, farmland, building and vegetation abstracted from these data; and the images comprise optical remote sensing images and SAR images;
and 1.2, forming the mode layer from the association relations among the objects and the attributes, and forming the data layer from the attribute features, thereby obtaining the constructed knowledge graph model, wherein the mode layer comprises the objects and the semantic relations among them, and the data layer comprises the specific data contents of each object.
Step 2, constructing a knowledge inference model for detecting the change of buildings in the farmland based on a knowledge graph model and an inference model of a two-way chain; the method comprises the following specific steps:
2.1, based on the mode layer and data layer of the knowledge graph model, extracting knowledge from the multi-source spatio-temporal data and storing it as SPO triples, wherein each triple expresses knowledge in the form (subject, predicate, object). Three knowledge extraction modes are used: for structured data, the data form of the multi-source spatio-temporal data is converted into triples through D2RQ; for semi-structured data, the template structure of the data is analyzed and converted into triples through a wrapper; for unstructured data, a pipeline model first identifies the entities in the data with a sequence labeling model and then classifies them, thereby converting the data into triples.
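A minimal sketch of the SPO triple store, using plain Python tuples (the concrete entity and relation names below are illustrative assumptions, not data from the patent):

```python
# Sketch (not the patent's implementation): SPO (Subject, Predicate, Object)
# triples represented as plain tuples. Entity/relation names are invented.
triples = [
    ("climate", "in", "time"),
    ("policy", "limited to", "time"),
    ("farmland", "contains", "building"),
    ("image", "shot in", "space"),
    # data-layer fact, numbered "space name-time-image type" as in the text:
    ("DistrictA-202207-optical", "shot in", "DistrictA"),
]

def objects_of(subject, predicate, kb):
    """Return every object O such that (subject, predicate, O) is stored."""
    return [o for s, p, o in kb if s == subject and p == predicate]

print(objects_of("farmland", "contains", triples))  # ['building']
```

A real system would typically hold such triples in an RDF store, but the lookup pattern is the same.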
And 2.2, constructing a knowledge inference model based on the stored knowledge and the inference model of the bidirectional chain. The method comprises the following specific steps:
step 2.21, constructing a rule base based on the stored knowledge, wherein the rule base comprises an image optimization rule, a building identification rule, a building extraction rule and a change detection rule;
the image optimization rule comprises the following specific steps:
if historical optical remote sensing images are to be detected, then for the space to be detected, the optical remote sensing images taken on sunny days in the required months are selected for recommendation according to the statistics of the historical climate data;
if the optical remote sensing images of the current month are to be detected, then for the space to be detected, the month corresponding to the current month is located in the statistics of the historical climate data at the beginning of the month, the runs of consecutive sunny days in that month are sorted by length, the time period with the most consecutive sunny days is obtained after sorting, and the optical remote sensing images of that time period in the current month are selected for recommendation;
after the recommendation result is obtained, it is numbered based on the data in the knowledge graph model to obtain the final recommended images, wherein the numbering format is "space name-time-image type";
the expression of the image optimization rule is:

image(X) ← isInclude(X,Y) ∧ isTime(X,Z) ∧ ¬cloudTime(Y,Z)   (1)

wherein image(X) indicates that image X is recommended, isInclude(X,Y) indicates that image X is included in region Y, isTime(X,Z) indicates that image X lies within time Z, cloudTime(Y,Z) indicates that region Y is cloudy at time Z, the symbol ∧ denotes AND, the symbol ¬ denotes NOT, and ← denotes the condition: if image X is included in region Y and within time Z, and the probability that region Y has cloud within time Z is small, then image X is recommended;
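The image optimization rule can be sketched as a predicate over simple lookup tables; the facts below are invented purely for illustration, and the predicate names mirror the rule:

```python
# Hypothetical facts backing the predicates of the image optimization rule.
IS_INCLUDE = {("img1", "regionY"), ("img2", "regionY")}
IS_TIME    = {("img1", "2022-07"), ("img2", "2022-06")}
CLOUD_TIME = {("regionY", "2022-06")}   # regionY is cloudy in 2022-06

def recommend(x, y, z):
    """image(X) <- isInclude(X,Y) AND isTime(X,Z) AND NOT cloudTime(Y,Z)"""
    return ((x, y) in IS_INCLUDE
            and (x, z) in IS_TIME
            and (y, z) not in CLOUD_TIME)

print(recommend("img1", "regionY", "2022-07"))  # True
print(recommend("img2", "regionY", "2022-06"))  # False: regionY is cloudy then
```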
the building identification rule comprises the following specific steps:
step 2.211, a scene classifier divides the recommended images into clear-sky scenes and cloudy scenes (a cloudy scene may also be a rainy scene, since rainy scenes also contain cloud); for a clear-sky scene, building target identification is performed directly through the lightweight SSD model to obtain the position of the building, namely the range frame of the building, and the process goes to step 2.213; otherwise the process goes to step 2.212;
the lightweight SSD model is obtained by replacing the fully-connected network in the SSD model with two convolutional layers and then making the substituted model lightweight using a convolution channel pruning algorithm;
the specific steps of making the substituted SSD model lightweight using the convolution channel pruning algorithm are as follows:
first, different pruning rates are set for each convolutional layer of the substituted SSD model, and the optimal pruning rate interval of the substituted model is determined;
then, using the optimal pruning rate interval, a different pruning rate is set for each convolutional layer within that interval; the pruning rate of each layer is determined by the identification accuracy of the substituted SSD model at a larger or smaller pruning rate, where the median of the interval above the optimal pruning rate is larger and the median of the interval below it is smaller;
next, the L1 norm of each channel in each convolution kernel of the substituted SSD model is calculated, and the channels are sorted by L1 norm; the larger the value, the more important the channel, and the channels with smaller values are pruned;
finally, the layers with the larger pruning rate are pruned uniformly and retrained to recover the pre-pruning accuracy of the substituted SSD model, and the layers with the smaller pruning rate are likewise retrained to recover the pre-pruning accuracy; the lightweighting of the SSD model is then complete and the lightweight SSD model is obtained.
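The channel-ranking step can be sketched in a few lines of NumPy: rank the output channels of one convolutional layer by the L1 norm of their weights and keep the most important ones (a simplified illustration of the idea, not the patent's implementation):

```python
import numpy as np

def prune_channels(weights, prune_rate):
    """Rank output channels of a conv layer by L1 norm; return kept indices.

    weights: array of shape (out_channels, in_channels, kH, kW).
    """
    l1 = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(weights.shape[0] * (1.0 - prune_rate))))
    keep = np.argsort(l1)[::-1][:n_keep]   # largest L1 norm = most important
    return np.sort(keep)                   # keep the surviving channel indices sorted

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))          # a toy layer with 8 output channels
kept = prune_channels(w, prune_rate=0.5)
print(len(kept))  # 4 channels survive at a 50% pruning rate
```

After pruning, the corresponding input channels of the next layer would also be removed, followed by the retraining described above.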
The lightweight SSD model adopts 6 different feature map sizes (38×38, 19×19, 10×10, 5×5, 3×3 and 1×1): the large-size feature maps carry shallow information and predict small targets, while the small-size feature maps carry deep information and predict large targets; building target identification is realized through preset prior frames with different aspect ratios.
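Assuming the standard SSD300 configuration of 4 or 6 prior frames per location on these 6 feature maps (an assumption; the patent does not state the per-layer counts), the total number of prior frames can be tallied as follows:

```python
# Standard SSD300 layout (assumed): feature map side lengths and the
# number of prior frames per spatial location on each map.
feature_maps  = [38, 19, 10, 5, 3, 1]
boxes_per_loc = [4, 6, 6, 6, 4, 4]

total = sum(s * s * b for s, b in zip(feature_maps, boxes_per_loc))
print(total)  # 8732 prior frames in the standard SSD300 configuration
```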
Step 2.212, the recommended image containing cloud and fog is optimized; after optimization, building target identification is performed through the lightweight SSD model to obtain the position of the building, namely the range frame of the building, and the process then goes to step 2.213;
the specific steps of optimizing the recommended image containing cloud and fog are as follows:
a mapping model from SAR images to optical remote sensing images is established using a generative adversarial neural network; the mapping model comprises a trained U-net generation network and a Markov discriminator, and the objective function L_GAN(G,D) of the generative adversarial neural network is:

L_GAN(G,D) = E_(n,m)[log D(n,m)] + E_(n,l)[log(1 − D(n, G(n,l)))]   (5)

wherein n denotes the cloud-free SAR image, m denotes the cloud-free optical remote sensing image, G(n,l) denotes the generated cloud-free optical remote sensing image, D(n,m) denotes the judgment of whether the image is a true sample, l denotes random noise, E_(n,m)[log D(n,m)] represents the term over the probability distribution of the real data, and E_(n,l)[log(1 − D(n, G(n,l)))] represents the term over the probability distribution of the generated data;
during training, the SAR images acquired under cloud and fog are input into the U-net generation network to generate cloud-free optical remote sensing images, and the Markov discriminator judges whether each optical remote sensing image was generated by the U-net generation network; when the discriminator can no longer distinguish the generated images from real ones, the trained mapping model from SAR images to optical remote sensing images is obtained.
The recommended images with cloud and fog are then input into the trained mapping model to generate optical remote sensing images, yielding the optimized images;
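Objective (5) can be evaluated numerically from discriminator outputs. The sketch below illustrates the two expectation terms with invented probabilities; it is not the patent's training code:

```python
import math

# Illustrative discriminator outputs D(n, m) on real pairs and
# D(n, G(n, l)) on generated pairs (invented numbers in (0, 1)).
d_real = [0.9, 0.8, 0.95]   # should be close to 1 for real samples
d_fake = [0.2, 0.1, 0.15]   # should be close to 0 for generated samples

# L_GAN(G, D) = E[log D(n, m)] + E[log(1 - D(n, G(n, l)))]
loss = (sum(math.log(p) for p in d_real) / len(d_real)
        + sum(math.log(1.0 - p) for p in d_fake) / len(d_fake))
print(round(loss, 3))
```

The discriminator is trained to maximize this quantity while the generator minimizes the second term; the value approaches 0 from below as the discriminator improves.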
2.213, the shooting time and projection coordinates of the optical remote sensing images whose range frames contain buildings are acquired based on the knowledge graph model; taking the first frame of the optical remote sensing images as the base point, the frame images within a one-week time period are clustered using the k-means algorithm, and the range frames of the clustered optical remote sensing images are complemented to obtain the complemented range frame, wherein the complement formula is:
B_new{x, y, w, h}: if ∃ i, j (i ≠ j) such that B_i ∩ B_j ≠ ∅, then
x = min_i(B_i{x}), y = min_i(B_i{y}),
w = max_i(B_i{x} + B_i{w}) − min_i(B_i{x}),
h = max_i(B_i{y} + B_i{h}) − min_i(B_i{y})   (6)

wherein B_new{x, y, w, h} is the complemented range frame, x and y are the coordinates of its top-left corner, w and h are its width and length, B_i is the i-th range frame to be complemented, and range frames are merged when they intersect; B_i{x} denotes the x coordinate of the top-left corner of the i-th range frame, B_i{y} denotes the y coordinate of the top-left corner of the i-th range frame, ∃ denotes existence, ∩ denotes intersection, max denotes the maximum value, and min denotes the minimum value;
and 2.214, the complemented range frames and the numbers of the optical remote sensing images containing them are stored. Because cloud and fog optimization cannot completely eliminate the influence of cloud, multi-channel image complementation fuses the optical remote sensing images of the same position within one week and merges the unoccluded identification areas of those images to obtain the complemented range frame. Performing space-time complementation (namely multi-channel image complementation) after building target identification further ensures the completeness of the information and optimizes the identification accuracy.
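The complement of formula (6) can be sketched as follows, under the assumption that intersecting range frames {x, y, w, h} (top-left corner, width, height) are merged into their bounding union:

```python
# Sketch of the range-frame complement: intersecting frames are merged
# into the smallest frame covering all of them, as in formula (6).
def intersects(a, b):
    return (a["x"] < b["x"] + b["w"] and b["x"] < a["x"] + a["w"] and
            a["y"] < b["y"] + b["h"] and b["y"] < a["y"] + a["h"])

def complement(boxes):
    """Merge a group of mutually intersecting range frames into B_new."""
    x = min(b["x"] for b in boxes)
    y = min(b["y"] for b in boxes)
    w = max(b["x"] + b["w"] for b in boxes) - x
    h = max(b["y"] + b["h"] for b in boxes) - y
    return {"x": x, "y": y, "w": w, "h": h}

b1 = {"x": 0, "y": 0, "w": 10, "h": 10}
b2 = {"x": 5, "y": 5, "w": 10, "h": 10}
if intersects(b1, b2):
    print(complement([b1, b2]))  # {'x': 0, 'y': 0, 'w': 15, 'h': 15}
```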
The expression of the building identification rule is:

detection(X, R) ← (hasCloud(X) ∧ optCloud(X)) ∨ ¬hasCloud(X)

wherein detection(X, R) indicates that the range frame R of a building is identified in the recommended image X, hasCloud(X) indicates that the recommended image X contains cloud, optCloud(X) indicates that the recommended image X containing cloud is optimized, and ← denotes the condition: if the recommended image X contains cloud, the range frame of the building is identified after the image is optimized; otherwise the range frame of the building is identified directly.
The specific steps of the building extraction rule are as follows:
step 2.21-1, first, based on the complemented range frames obtained in step 2.214, all images containing range frames of buildings are indexed and screened as new optical remote sensing images, and these new optical remote sensing images are cropped to obtain the range frames of the buildings;
step 2.21-2, all complemented range frames are searched according to the range frames obtained by cropping, and the local optical remote sensing images for target identification are obtained from the complemented range frames found by the search;
step 2.21-3, for each local optical remote sensing image obtained in step 2.21-2, the range frame corresponding to each frame of image is extracted, the boundary of the building is extracted from each range frame, and the extracted boundaries are merged to obtain the preliminarily extracted boundary of the building; the specific steps are as follows:
2.21-31, the range frame corresponding to each frame of image is extracted from each local optical remote sensing image; based on the bands present in each local optical remote sensing image, a corresponding vegetation index is adopted, and each pixel in each extracted range frame is judged to be vegetation or not based on the vegetation index; if the pixel is vegetation, the process goes to the next step, wherein the vegetation indices include the triangular vegetation index, the soil-adjusted vegetation index, the normalized difference vegetation index and the enhanced vegetation index;
2.21-32, the pixels judged to be vegetation in each extracted range frame are eliminated;
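As one concrete instance of steps 2.21-31 and 2.21-32, vegetation pixels can be masked out with the normalized difference vegetation index; the threshold of 0.3 below is an assumed value, not taken from the patent:

```python
import numpy as np

def eliminate_vegetation(red, nir, ndvi_threshold=0.3):
    """Zero out pixels whose NDVI = (NIR - Red)/(NIR + Red) marks vegetation."""
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
    vegetation = ndvi > ndvi_threshold
    cleaned = np.where(vegetation, 0.0, red)   # eliminate vegetation pixels
    return cleaned, vegetation

# Toy 2x2 red and near-infrared reflectance values:
red = np.array([[0.1, 0.4], [0.5, 0.2]])
nir = np.array([[0.6, 0.5], [0.4, 0.7]])
cleaned, veg = eliminate_vegetation(red, nir)
print(veg)  # True where NDVI > 0.3, i.e. vegetation pixels
```

The other listed indices (triangular, soil-adjusted, enhanced) would follow the same mask-then-eliminate pattern with their own formulas.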
2.21-33, the range frame after vegetation elimination is enhanced based on a multi-scale segmentation method; after enhancement, the boundary of the building is extracted through an SVM classification method, and the extracted boundaries are merged to obtain the preliminarily extracted boundary of the building; the specific steps are as follows:
each pixel in the range frame after vegetation elimination is divided into object blocks based on the multi-scale segmentation method, similar object blocks are merged by calculating their similarity, and the characteristic variable is calculated, thereby obtaining the enhanced range frame;
the parameters of the similarity are the features of the building, including shape features, texture features and color features; the characteristic variable is obtained by giving different weights to these features and weighting them, according to the formula:

F = ω₁f_s + ω₂f_t + ω₃f_c   (8)

wherein F denotes the characteristic variable, ω₁, ω₂ and ω₃ denote the given weights, f_s denotes the shape feature, f_t denotes the texture feature, and f_c denotes the color feature; the shape feature is expressed by smoothness and compactness, the texture feature by the entropy value of the gray-level co-occurrence matrix, and the color feature by HSV;
the expressions of the shape feature, texture feature and color feature are:

f_s = α₁·(C/L) + (1 − α₁)·(C/√N)   (9)

f_t = −Σ_(x,y) grey(x,y)·log grey(x,y)   (10)

f_c = ρ₁H + ρ₂S + ρ₃V   (11)

wherein α₁ denotes a weight, C denotes the boundary length of the merged object block, L denotes the perimeter of the enveloping rectangle of the merged object block, N denotes the number of pixels contained in the range of the merged object block, grey(x,y) denotes the value at coordinate (x,y) of the normalized gray-level co-occurrence matrix, ρ₁, ρ₂ and ρ₃ denote the weights in HSV, H, S and V denote hue, saturation and lightness respectively, computed from R(x,y), G(x,y) and B(x,y), the values of coordinate (x,y) in the red, green and blue channels, and (x,y) ranges over the pixel rows and columns of the merged object block;
and after the characteristic variables are obtained, they are input into the trained SVM classification method to extract the boundaries of the building, and the extracted boundaries are merged to obtain the preliminarily extracted boundary of the building.
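A minimal sketch of the characteristic variable in formulas (8)–(11), assuming smoothness C/L, compactness C/√N, GLCM entropy for texture, and the HSV of the object block's mean color; the weights and the simplified GLCM handling are assumptions, not the patent's exact implementation:

```python
import colorsys
import math

def shape_feature(C, L, N, alpha1=0.5):
    """f_s: weighted smoothness (C/L) and compactness (C/sqrt(N)), formula (9)."""
    return alpha1 * (C / L) + (1 - alpha1) * (C / math.sqrt(N))

def texture_feature(glcm):
    """f_t: entropy of a normalized gray-level co-occurrence matrix, formula (10)."""
    return -sum(p * math.log(p) for row in glcm for p in row if p > 0)

def color_feature(r, g, b, rho=(0.5, 0.25, 0.25)):
    """f_c: weighted HSV of the object block's mean RGB color, formula (11)."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return rho[0] * h + rho[1] * s + rho[2] * v

def characteristic_variable(fs, ft, fc, w=(0.4, 0.3, 0.3)):
    """F = w1*f_s + w2*f_t + w3*f_c, formula (8)."""
    return w[0] * fs + w[1] * ft + w[2] * fc

fs = shape_feature(C=40, L=44, N=100)                 # toy object block geometry
ft = texture_feature([[0.5, 0.25], [0.125, 0.125]])   # toy 2x2 normalized GLCM
fc = color_feature(0.8, 0.2, 0.2)                     # reddish roof color
print(characteristic_variable(fs, ft, fc))
```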
2.21-4, a target library of buildings is established based on the preliminarily extracted boundaries, and the boundaries of the buildings are regularized based on a building-morphology fitting method to obtain the finally extracted boundary, namely the outline of the building object; the specific steps are as follows:
various morphological elements of buildings are constructed, including squares, rectangles, trapezoids and circles;
all the morphological elements form the target library of buildings. Since buildings are artificial targets with definite geometric characteristics, and buildings in farmland are mostly single-storey buildings, multi-storey buildings below four storeys, or villas, these basic morphological elements are used to construct a target library of the common buildings in the farmland area.
According to the preliminarily extracted boundaries of the buildings, the degree of fit of each building is calculated, and irregular object blocks are replaced with the regular building forms in the target library according to the degree of fit. The specific steps are as follows:

first, the rectangular fitting factor of the extracted boundary is calculated, namely the proportion K of the area of the extracted building boundary within its circumscribed rectangle:

K = Area_obj / Area_rect   (12)

second, the aspect ratio W of the circumscribed rectangle is calculated:

W = Len_rect / Wid_rect   (13)

then the ratio C of the area to the perimeter of the figure is calculated:

C = Area_obj / Cir_obj   (14)

wherein Area_obj denotes the area of the obj-th extracted building boundary, Area_rect denotes the area of the circumscribed rectangle of the rect-th merged object block, Len_rect denotes the length of that circumscribed rectangle, Wid_rect denotes its width, and Cir_obj denotes the perimeter of the obj-th boundary;

finally, the extracted proportion K of the boundary area within the circumscribed rectangle, the aspect ratio W of the circumscribed rectangle and the area-to-perimeter ratio C are compared with the K, W and C of the target library by calculating the similarity δ, and the closest form is selected for replacement, completing the regularization of the boundary:

δ = (1/3)·(K_obj/K_sam + W_obj/W_sam + C_obj/C_sam)   (15)

wherein δ denotes the similarity; K_obj, W_obj and C_obj denote, respectively, the proportion K of the area of the obj-th building boundary within its circumscribed rectangle, the aspect ratio W of the circumscribed rectangle, and the area-to-perimeter ratio C of the figure; K_sam, W_sam and C_sam denote the corresponding K, W and C of the sam-th form in the target library; the closer the value of δ is to 1, the more similar the building object and the target library object are.
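The fitting factors and the similarity can be sketched as below, assuming δ is the mean of the three K, W, C ratios (an assumed form chosen so that δ = 1 means a perfect match):

```python
# Sketch of the fitting factors K, W, C and similarity delta for
# matching an extracted boundary against a target-library form.
def fitting_factors(area_obj, perimeter_obj, rect_len, rect_wid):
    K = area_obj / (rect_len * rect_wid)   # share of the circumscribed rectangle
    W = rect_len / rect_wid                # aspect ratio of the rectangle
    C = area_obj / perimeter_obj           # area-to-perimeter ratio
    return K, W, C

def similarity(obj, sam):
    """delta = mean of the K, W, C ratios; 1.0 means a perfect match."""
    return sum(o / s for o, s in zip(obj, sam)) / 3.0

# Extracted boundary close to a 10 x 5 rectangle in the target library:
obj = fitting_factors(area_obj=48.0, perimeter_obj=30.0, rect_len=10.0, rect_wid=5.0)
sam = fitting_factors(area_obj=50.0, perimeter_obj=30.0, rect_len=10.0, rect_wid=5.0)
print(round(similarity(obj, sam), 3))  # close to 1 => replace with this form
```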
The expression of the building extraction rule is:

boundary(B,R) ← hasRange(B,R) ∧ ¬isVeg(R) ∧ hasFeature(R)   (7)

pairBuilding(B,S) ← boundary(B,R)

wherein boundary(B,R) denotes the boundary B of the building within the building's range frame R, hasRange(B,R) denotes that the building contained in the recommended image X is indicated by the range frame R, isVeg(R) denotes the portion of the range frame R that is vegetation, hasFeature(R) denotes that the range frame R contains the features of the building, and pairBuilding(B,S) denotes the finally extracted boundary obtained by matching the boundary B of the building with the object S of the building target library;
the specific steps of the change detection rule are as follows:
the finally extracted boundary is compared with the corresponding planning map: the region is determined from the initially input spatial range, raster calculation is performed with the planning map of that region, the building boundary and the planning map are rasterized to a uniform resolution, and the extracted building boundary is subtracted from the planning map to obtain the change patches of the building. If the range frame of the building in a change patch has grown, the patch is classified as positive; if the range has shrunk, it is classified as negative; positive change patches are marked as abnormal, indicating that a violation may have occurred.
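The raster subtraction step can be sketched as follows; the two binary rasters below are invented for illustration:

```python
import numpy as np

# Sketch of the raster subtraction in the change detection rule: the
# extracted building mask and the planning map are rasterized to the
# same resolution, and their difference yields the change patches.
plan      = np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 0, 0]])   # 1 = building permitted by the plan
extracted = np.array([[1, 1, 1, 0],
                      [1, 1, 1, 0],
                      [0, 0, 0, 0]])   # 1 = building actually extracted

change = extracted - plan              # +1: newly appeared, -1: disappeared
positive = change == 1                 # growth  => positive class => abnormal
negative = change == -1                # shrinkage => negative class

print(int(positive.sum()), int(negative.sum()))  # 2 0
```

Here two pixels of building appear outside the planned footprint, so the patch would be marked abnormal.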
The expression of the change detection rule is:
abnormal(B,Y)←isChange(B,Y)∧inRange(B,LL) (4)
wherein abnormal(B,Y) indicates that the extracted building boundary B is abnormal in region Y, isChange(B,Y) indicates that the extracted building boundary B has changed in region Y, inRange(B,LL) indicates that the extracted building boundary B falls within the violation range LL, and ← denotes the condition: the building boundary B is marked abnormal in region Y if it has changed in region Y and falls within the violation range LL; otherwise it is not.
And 2.22, a knowledge inference model for detecting building changes in farmland is constructed based on the rule base and the inference model of the two-way chain. As shown in fig. 3, the inference model of the two-way chain comprises rule execution (namely, activating the rule base), a rule matcher and a rule checker. When a building change detection requirement is raised, the rule base is activated and rules are matched against the read-in data (namely, which rule should be executed is determined). After the corresponding rule is matched, the rule checker verifies the completeness of the data and the model and judges whether the rules conflict; the rule is then executed. After a rule produces a result, whether the change detection requirement has been met is judged from the result content; if not, the result is fed back into rule matching to decide which rule to apply next, otherwise the process stops.
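The match-check-execute loop can be sketched as below; this is an illustration only, with the rule names and the goal test assumed from the rule base described above:

```python
# Minimal sketch of the inference loop: each rule maps an input kind to
# an output kind, mirroring the rule base of step 2.21 (names assumed).
RULES = {
    "spatial range":      ("image optimization", "recommended image"),
    "recommended image":  ("building identification", "range frame"),
    "range frame":        ("building extraction", "extracted boundary"),
    "extracted boundary": ("change detection", "change patches"),
}

def infer(data_kind, goal="change patches"):
    """Match, check and execute rules until the detection goal is met."""
    trace = []
    while data_kind != goal:
        rule, produced = RULES[data_kind]   # rule matcher
        assert produced is not None         # rule checker (completeness)
        trace.append(rule)                  # rule execution
        data_kind = produced                # feed the result back to matching
    return trace

print(infer("spatial range"))
# ['image optimization', 'building identification',
#  'building extraction', 'change detection']
```

Starting from a later input kind (e.g. a range frame) skips the earlier rules, which matches the four entry points described in step 3.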
And 3, a spatial range is input to the knowledge inference model based on the knowledge graph model, and the rules in the knowledge inference model are executed to perform intelligent change detection of buildings in the cultivated land protection area. That is, depending on the data input to it, the knowledge inference model executes the image optimization rule, building identification rule, building extraction rule and/or change detection rule in the rule base to detect building changes. The specific cases are as follows:
if a spatial range is input to the knowledge inference model based on the knowledge graph model, the model executes the image optimization rule, building identification rule, building extraction rule and change detection rule in the rule base in sequence to perform intelligent change detection of buildings in the cultivated land protection area;
if a recommended image is input to the knowledge inference model based on the knowledge graph model, the model executes the building identification rule, building extraction rule and change detection rule in the rule base in sequence;
if the number of a local optical remote sensing image containing a complemented range frame of a building is input to the knowledge inference model based on the knowledge graph model, the model executes the building extraction rule and change detection rule in the rule base in sequence;
if the finally extracted boundary is input to the knowledge inference model based on the knowledge graph model, the model executes the change detection rule in the rule base.
Examples
As shown in fig. 2, the mode layer of the knowledge graph is constructed by analyzing the ontology concept information, i.e., time, space, climate, image, farmland, building and vegetation. The ontology concepts in the mode layer are related as: <climate, in, time>, <policy, limited to, time>, <policy, judges violation of, building>, <farmland, contains, building>, <policy, protects, farmland>, <time, limits, farmland>, <space, limits, farmland>, <image, shot in, time>, <image, shot in, space>, <climate, located in, space>, <farmland, contains, vegetation>. The data layer is established based on the mode layer and the requirements of cultivated-land building change detection: the climate data comprise the distribution of sunny and cloudy days; time comprises year/month/day and hour/minute/second; policies comprise violation behaviors; buildings comprise texture features, illumination features and geometric features; vegetation comprises vegetation indices; space comprises longitude and latitude, range size and geometric form; images comprise resolution, band and type. The contents of the data layer are collected and the knowledge is stored.
A spatial range for detecting building changes in the farmland is input; the rule matcher in the knowledge inference model matches the image optimization rule and executes it: the historical climate data are read, the time-interval distribution of sunny days in the current month over past years is counted, 10 optical remote sensing images in the corresponding sunny time intervals are selected based on their shooting time, and the 10 optical remote sensing images are numbered to obtain the final recommended images.
After the final recommended images are obtained, the rule matcher in the knowledge inference model matches the building identification rule and executes it. That is, the existing scene classifier judges whether each optical remote sensing image is a clear-sky scene; if not, an SAR image with an approximate shooting time is searched for, and cloud and fog optimization is performed with the generative adversarial neural network. The optimized images and the images judged to be clear-sky are fed to the lightweight SSD model for identification, with each image divided into 300×300 tiles. After the lightweight SSD model frames the buildings, the range frames of the buildings are obtained and mapped back to the original optical remote sensing images. For the optical remote sensing images with range frames (whose shooting times and projection coordinates are acquired), each frame image within a one-week period is clustered by the k-means algorithm with the first frame as the base point, and the range frames of the clustered images are complemented to obtain the complemented range frame B_new{x, y, w, h}. The complemented range frames and the numbers of the optical remote sensing images containing them are then stored.
After the range frames of the buildings are obtained, the rule matcher in the knowledge inference model matches the building extraction rule and executes it to obtain the finally extracted boundary.
after the extracted boundary is obtained, a rule matcher in the knowledge inference model is matched with a change detection rule and executed, namely, a place is determined according to an initially input space range, grid calculation is carried out on the place and a planning graph of the area, a building object and the planning graph are rasterized into uniform resolution, the extracted building object is subtracted by the planning graph to obtain a change graph spot of the building, if the building range in the change graph spot is increased, the change graph spot is classified into a positive class, if the range is reduced, the change graph spot classified into a negative class, and the change graph spot of the positive class is marked as abnormal to indicate that illegal behaviors possibly occur.
In conclusion, the invention constructs a knowledge graph for detecting changes of farmland buildings by clarifying the relationship between farmland scenes and the features of the buildings inside them, and by sorting out the characteristics of multi-source image data. Based on the task requirements of building change detection in farmland, a rule base and a rule execution model are established; driven by the knowledge graph and combined with methods such as deep learning, target extraction and multi-channel complementation, the intelligent identification, boundary extraction and change detection of buildings in farmland are completed.

Claims (10)

1. A method for detecting intelligent change of buildings in a cloudy and foggy farmland protection area is characterized by comprising the following steps:
step 1, constructing a knowledge graph model for detecting changes of buildings in a farmland based on multi-source space-time data;
step 2, constructing a knowledge reasoning model for detecting the change of the building in the farmland based on the knowledge graph model and a bidirectional-chain reasoning model;
and 3, executing rules in the knowledge inference model to perform intelligent change detection on the buildings in the cultivated land protection area based on data input to the knowledge inference model by the knowledge graph model.
2. The method for detecting the intelligent change of the building in the cloudy and foggy farmland protection area according to claim 1, wherein the specific steps of the step 1 are as follows:
step 1.1, acquiring objects required by building detection in a farmland based on multi-source space-time data, and organizing the attribute characteristics of each object and the association relations among the objects, wherein the multi-source space-time data comprise historical climate data, multi-channel image data, farmland division data, farmland building sample data, district-county farmland policy data, time data, administrative division spatial range data and vegetation data, the objects comprise time, space, climate, images, farmlands, buildings and vegetation abstracted from these data, and the images comprise optical remote sensing images and SAR images;
and step 1.2, forming a mode layer from the association relations among all the objects and the attributes, and forming a data layer from the attribute characteristics, thereby obtaining the constructed knowledge graph model, wherein the mode layer comprises all the objects and the semantic relations among them, and the data layer comprises the specific data contents of all the objects.
3. The method for detecting the intelligent change of the building in the cloudy and foggy farmland protection area according to claim 2, wherein the specific steps of the step 2 are as follows:
step 2.1, extracting and storing knowledge from the multi-source space-time data using SPO (subject-predicate-object) triples based on the mode layer and the data layer in the knowledge graph model, wherein the triple expression form of the stored knowledge is subject, predicate and object, and the SPO triples extract the knowledge from the multi-source space-time data by means of structured, semi-structured and unstructured knowledge extraction;
and 2.2, constructing a knowledge inference model based on the stored knowledge and the inference model of the bidirectional chain.
4. The method for detecting the intelligent change of the building in the cloudy and foggy farmland protection area according to claim 3, wherein the specific steps of the step 2.2 are as follows:
step 2.21, constructing a rule base based on the stored knowledge, wherein the rule base comprises an image optimization rule, a building identification rule, a building extraction rule and a change detection rule;
and 2.22, constructing a knowledge inference model for detecting the change of the buildings in the farmland based on the rule base and the inference model of the two-way chain.
5. The method for detecting the intelligent change of the building in the protected area of the cloudy and foggy farmland according to claim 4, wherein the specific steps of the image optimization rule in the step 2.21 are as follows:
if historical optical remote sensing images are to be detected, selecting, for the space to be detected, the optical remote sensing images of sunny days in the required months for recommendation according to the statistics of the historical climate data;
if the optical remote sensing images of the current month are to be detected, selecting, for the space to be detected at the beginning of the month, the month corresponding to the current month in the historical climate statistics, sorting the runs of consecutive sunny days in that month in descending order of length, acquiring the time period with the most consecutive sunny days, and selecting the optical remote sensing images of that time period in the current month for recommendation;
obtaining a recommendation result and numbering it based on the data in the knowledge graph model to obtain the final recommended images, wherein the numbering format is "space name-time-image type";
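A minimal sketch of the sunny-period recommendation and the "space name-time-image type" numbering; the climate-record format (date string plus sunny flag) is an assumed simplification of the historical climate statistics.

```python
def longest_sunny_run(days):
    """days: list of (date_str, is_sunny).  Return the longest run of
    consecutive sunny days as a list of date strings."""
    best, cur = [], []
    for date, sunny in days:
        if sunny:
            cur.append(date)
            if len(cur) > len(best):
                best = list(cur)
        else:
            cur = []
    return best

def recommend_ids(space, run, image_type="optical"):
    """Number each recommended image as 'space name-time-image type'."""
    return [f"{space}-{d}-{image_type}" for d in run]
```

For a July record with sunny days on the 3rd and 4th, the recommendation would be numbered "plotA-2022-07-03-optical" and "plotA-2022-07-04-optical" (the space name "plotA" is a made-up example).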
the building identification rule in step 2.21 specifically comprises the following steps:
step 2.211, classifying the recommended image as a clear-sky scene or a cloudy scene using the scene classifier; if it is a clear-sky scene, directly identifying the building target through the lightweight SSD model to obtain the position of the building, namely its range box, and then turning to step 2.213; if not, turning to step 2.212;
step 2.212, optimizing the recommended image containing cloud and fog, performing building target identification through the lightweight SSD model after optimization to obtain the position of the building, namely its range box, and then turning to step 2.213;
the method comprises the following specific steps of optimizing the recommended image containing the cloud and mist:
establishing a mapping model from the SAR image to the optical remote sensing image by using a generative adversarial network, wherein the mapping model comprises a trained U-net generation network and a Markov discriminator, and the objective function L_GAN(G, D) of the generative adversarial network is:
L_GAN(G, D) = E_{n,m}[log D(n, m)] + E_{n,l}[log(1 - D(n, G(n, l)))]   (5)
wherein n represents a haze-free SAR image, m represents a haze-free optical remote sensing image, G(n, l) represents the generated haze-free optical remote sensing image, D(n, m) indicates whether an image is a real sample, l represents random noise, E_{n,m}[log D(n, m)] represents the expectation over the real-data distribution, and E_{n,l}[log(1 - D(n, G(n, l)))] represents the expectation over the generated-data distribution;
inputting the recommended image with cloud and mist into the trained mapping model to generate an optical remote sensing image, and obtaining an image after optimization processing;
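The objective in formula (5) can be evaluated numerically once discriminator outputs are available. This sketch estimates the two expectation terms over mini-batches of discriminator scores; it is illustrative only, since the patent trains a U-net generator against a Markov discriminator, which is not reproduced here.

```python
import math

def gan_objective(d_real, d_fake):
    """Mini-batch estimate of L_GAN(G, D) = E[log D(n, m)]
    + E[log(1 - D(n, G(n, l)))], given discriminator scores in (0, 1]:
    d_real on (SAR, real optical) pairs, d_fake on (SAR, generated)
    pairs.  The discriminator maximizes this value; the generator
    minimizes it."""
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake
```

With a perfect discriminator (d_real = 1, d_fake = 0) the objective reaches its maximum of 0; an undecided discriminator (all scores 0.5) gives 2·log(0.5).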
step 2.213, obtaining the shooting time and projection coordinates of the optical remote sensing images containing a range box of a building based on the knowledge graph model, clustering the frame images within each one-week period with the k-means algorithm, using the first frame image of the optical remote sensing images as the base point, and complementing the range boxes of the clustered optical remote sensing images to obtain the complemented range box, wherein the complement formula is as follows:
B_new{x} = min_i(B_i{x}), B_new{y} = min_i(B_i{y}),
w = max_i(B_i{x} + B_i{w}) - min_i(B_i{x}), h = max_i(B_i{y} + B_i{h}) - min_i(B_i{y}),
applied whenever ∃ i, j such that B_i ∩ B_j ≠ ∅,
wherein B_new{x, y, w, h} is the complemented range box, x and y are the coordinates of its top-left corner, w and h are its width and length, B_i is the i-th range box to be complemented, range boxes are merged when they intersect, B_i{x} denotes the x coordinate of the top-left corner of the i-th range box, B_i{y} denotes the y coordinate of the top-left corner of the i-th range box, ∃ denotes existence, ∩ denotes intersection, and max and min denote the maximum and minimum values;
and 2.214, storing the complementary range frames and the numbers of the optical remote sensing images containing the complementary range frames.
6. The method for detecting the intelligent change of the building in the cloudy and foggy farmland protection area according to claim 5, wherein the lightweight SSD model is obtained by replacing the fully connected network in the SSD model with two convolution layers and then lightening the substituted model with a convolution channel pruning algorithm;
the concrete steps of carrying out lightweight on the substituted SSD model by utilizing the convolution channel pruning algorithm are as follows:
firstly, setting different pruning rates of each convolution layer of the substituted SSD model, and determining the optimal pruning rate interval of the substituted SSD model;
within the optimal pruning-rate interval of the substituted SSD model, setting a different pruning rate for each convolution layer, wherein the pruning rate of each layer is determined according to the identification accuracy of the substituted SSD model under a larger or smaller pruning rate: a layer that tolerates pruning takes a value above the median of the interval, and a sensitive layer takes a value below it;
then calculating the L1 norm of the channels in each convolution kernel of the substituted SSD model and sorting them by L1 norm, wherein a higher value indicates a more important channel, and channels with lower values are pruned;
and finally, uniformly pruning the layers with larger pruning rates and retraining to recover the pre-pruning accuracy of the substituted SSD model, doing the same for the layers with smaller pruning rates, thereby completing the light-weighting of the SSD model and obtaining the lightweight SSD model.
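The L1-norm channel ranking and pruning step can be sketched as follows; plain nested lists stand in for convolution-kernel tensors, and the helper names are assumptions.

```python
def l1_channel_norms(kernel):
    """kernel: nested list [out_ch][in_ch][kh][kw].  Return the L1 norm
    of each input channel, summed over all output filters; low-norm
    channels are the pruning candidates."""
    norms = [0.0] * len(kernel[0])
    for filt in kernel:
        for c, chan in enumerate(filt):
            norms[c] += sum(abs(v) for row in chan for v in row)
    return norms

def prune_channels(norms, rate):
    """Return the indices of the channels kept after removing the
    lowest-L1 fraction `rate` of channels."""
    n_prune = int(len(norms) * rate)
    order = sorted(range(len(norms)), key=lambda i: norms[i])
    drop = set(order[:n_prune])
    return [i for i in range(len(norms)) if i not in drop]
```

For a layer with channel norms [2.0, 0.0, 4.0] and a pruning rate of about one third, the zero-norm channel is dropped and channels 0 and 2 are kept; retraining would then recover the pre-pruning accuracy.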
7. The method for detecting the intelligent change of the building in the cloudy and foggy farmland protection area according to claim 6, wherein the concrete steps of building extraction rules in the step 2.21 are as follows:
step 2.21-1, firstly, based on the complementary range frames obtained in the step 2.214, indexing and screening all images containing the range frames of the building as new optical remote sensing images, and cutting the new optical remote sensing images to obtain the range frames of the building;
step 2.21-2, searching all the complementary range frames according to the range frames obtained by cutting, and obtaining local optical remote sensing images for target identification according to the complementary range frames obtained by searching;
2.21-3, extracting a range frame corresponding to each frame of image from each local optical remote sensing image obtained in the step 2.21-2, extracting the boundary of the building according to each range frame, and merging the extracted boundaries after the boundary is extracted to obtain the boundary of the building which is extracted primarily;
2.21-4, establishing a target library of the building based on the boundary of the building extracted preliminarily, and regularizing the boundary of the building based on a building morphology fitting method to obtain a boundary extracted finally, namely obtaining the outline of the building object;
the specific steps of the change detection rule in step 2.21 are as follows:
comparing the finally extracted boundary with the corresponding planning map: a region is determined according to the initially input space, grid calculation is performed with the planning map of that region, the boundary of the building and the planning map are rasterized to a uniform resolution, and the planning map is subtracted from the extracted boundary of the building to obtain the change map spots of the building; if the range box of the building in a change map spot has increased, the spot is classified as positive, and if the range has decreased, as negative; positive change map spots are marked as abnormal, indicating that illegal behavior may have occurred.
8. The method for intelligently detecting the change of the building in the protected area of the cloudy and foggy farmland according to claim 7, wherein step 2.21-3 comprises the following specific steps:
step 2.21-31, extracting the range box corresponding to each frame image from each local optical remote sensing image, adopting the vegetation index matching the bands present in each local optical remote sensing image, judging whether each pixel in each extracted range box is vegetation based on the vegetation index, and turning to the next step if it is, wherein the vegetation indexes comprise the triangular vegetation index, the soil-adjusted vegetation index, the normalized difference vegetation index and the enhanced vegetation index;
2.21-32, eliminating the pixels which are judged as vegetation in each extracted range frame;
2.21-33, enhancing the range frame after vegetation elimination based on a multi-scale segmentation method, extracting the boundaries of the building through an SVM classification method after enhancement, and combining the extracted boundaries after boundary extraction to obtain the boundaries of the building which are extracted primarily;
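A sketch of steps 2.21-31 and 2.21-32 for the NDVI case: vegetation pixels are flagged and eliminated from the range box. The 0.3 threshold and the band layout are illustrative assumptions; the other listed vegetation indexes would follow the same pattern.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def mask_vegetation(nir_band, red_band, threshold=0.3):
    """Return a 0/1 grid over the range box: 1 where the pixel is kept
    (non-vegetation), 0 where NDVI exceeds the threshold and the pixel
    is eliminated.  The 0.3 threshold is an assumed value."""
    return [[0 if ndvi(n, r) > threshold else 1
             for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir_band, red_band)]
```

A pixel with strong near-infrared reflectance relative to red (NDVI near 0.8) is removed as vegetation, while a pixel with equal bands (NDVI 0) is kept for the subsequent segmentation and SVM boundary extraction.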
the steps 2.21-4 comprise the following specific steps:
constructing various shape elements of the building, wherein the shape elements comprise squares, rectangles, trapezoids and circles;
forming all the morphological elements into a target library of the building;
calculating the fitting degree of the building according to the preliminarily extracted boundary, and replacing the irregular object block with a regular building form from the target library according to the fitting degree, which specifically comprises:
firstly, calculating the rectangular fitting factor of the extracted boundary, namely the ratio K of the area of the extracted building boundary to the area of its circumscribed rectangle:
K = Area_obj / Area_rect
secondly, calculating the aspect ratio W of the circumscribed rectangle:
W = Len_rect / Wid_rect
then calculating the ratio C of the area to the perimeter of the figure:
C = Area_obj / Cir_obj
wherein Area_obj refers to the area of the obj-th extracted building boundary, Area_rect refers to the area of the circumscribed rectangle of the rect-th merged object block, Len_rect refers to the length of that circumscribed rectangle, Wid_rect refers to its width, and Cir_obj refers to the perimeter of the obj-th boundary;
and finally, comparing the extracted boundary's ratio K of building area to circumscribed-rectangle area, aspect ratio W of the circumscribed rectangle, and ratio C of area to perimeter with the K, W and C of the objects in the target library, and selecting the closest shape for replacement, thereby completing the regularization of the boundary.
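The three fitting factors and the nearest-shape replacement can be sketched as follows; the L1 distance used to pick the closest library shape is an assumption, since the claim only says "selecting the closest".

```python
def fitting_factors(area_obj, cir_obj, len_rect, wid_rect):
    """K: building area over circumscribed-rectangle area;
    W: rectangle aspect ratio; C: area-to-perimeter ratio."""
    k = area_obj / (len_rect * wid_rect)
    w = len_rect / wid_rect
    c = area_obj / cir_obj
    return k, w, c

def closest_template(factors, library):
    """Pick the library shape whose (K, W, C) triple is nearest to the
    extracted boundary's factors under L1 distance (an assumed metric)."""
    return min(library, key=lambda name: sum(
        abs(a - b) for a, b in zip(factors, library[name])))
```

For a boundary with area 50, perimeter 30 and a 10x5 circumscribed rectangle, the factors are K = 1.0, W = 2.0, C ≈ 1.67, which matches a rectangle template more closely than a square one.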
9. The method for intelligently detecting the change of the buildings in the protected area of the cloudy and foggy farmland according to claim 8, characterized in that: the steps 2.21-33 comprise the following specific steps:
dividing each pixel in the range frame with the vegetation removed into object blocks based on a multi-scale division method, merging the adjacent object blocks by calculating the similarity, and calculating to obtain a characteristic variable, namely obtaining an enhanced range frame;
the parameters of the similarity are the characteristics of the building, including shape characteristics, texture characteristics and color characteristics, and the characteristic variable is obtained by giving different weights to the characteristics and weighting them, with the formula:
F = ω1·fs + ω2·ft + ω1·fc (8)
wherein F represents the characteristic variable, ω1 and ω2 represent the given weights, fs represents the shape feature, ft represents the texture feature, and fc represents the color feature; the shape feature is characterized by smoothness and compactness, the texture feature by the entropy of the gray-level co-occurrence matrix, and the color feature by HSV;
the expression formula of the shape feature, the texture feature and the color feature is as follows:
fs = α1·(C / L) + (1 − α1)·(C / √N) (9)
ft = −Σi grey(x, y)·log grey(x, y) (10)
fc = ρ1·H + ρ2·S + ρ3·V (11)
wherein α1 represents a weight value, C represents the boundary length of the merged object block, L represents the perimeter of the bounding rectangle of the merged object block, N represents the number of pixels contained within the merged object block, grey(x, y) represents the gray value at pixel coordinate (x, y), ρ1, ρ2 and ρ3 represent the weights in HSV, H, S and V represent hue, saturation and lightness respectively, R(x, y), G(x, y) and B(x, y) represent the values of coordinate (x, y) in the red, green and blue channels respectively, and i indexes the pixel rows and columns of the merged object block;
and after the characteristic variable is obtained, inputting it into the trained SVM classifier to extract the boundaries of the building, and merging the extracted boundaries to obtain the preliminarily extracted boundary of the building.
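Formulas (8) and (11) can be evaluated directly; the weight values below are illustrative assumptions (note that the claim weights fc by ω1, which the sketch follows).

```python
def feature_variable(fs, ft, fc, w1=0.4, w2=0.3):
    """F = ω1·fs + ω2·ft + ω1·fc as in formula (8); the default weights
    are assumed values, not taken from the patent."""
    return w1 * fs + w2 * ft + w1 * fc

def color_feature(h, s, v, rho=(0.5, 0.3, 0.2)):
    """fc = ρ1·H + ρ2·S + ρ3·V as in formula (11); the ρ weights here
    are likewise assumed values."""
    return rho[0] * h + rho[1] * s + rho[2] * v
```

The resulting F for each merged object block would then be fed to the trained SVM classifier for boundary extraction.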
10. The method for intelligently detecting the change of the building in the protection area of the cloudy and foggy farmland according to claim 9, wherein the step 3 is to perform the change detection of the building by executing the image optimization rule, the building identification rule, the building extraction rule or/and the change detection rule in the rule base based on the input knowledge inference model data, and the concrete steps are as follows:
inputting a space to a knowledge inference model based on a knowledge graph model, and executing an image optimization rule, a building identification rule, a building extraction rule and a change detection rule in a rule base in sequence by the knowledge inference model to perform intelligent change detection on the building in the cultivated land protection area;
inputting a recommended image to the knowledge inference model based on the knowledge graph model, and sequentially executing a building identification rule, a building extraction rule and a change detection rule in a rule base by the knowledge inference model to perform intelligent change detection on the building in the cultivated land protection area;
inputting the number of the local optical remote sensing image containing the range frame of the building after complementation to a knowledge inference model based on a knowledge graph model, and sequentially executing a building extraction rule and a change detection rule in a rule base by the knowledge inference model to carry out intelligent change detection on the building in the cultivated land protection area;
and inputting the finally extracted boundary to the knowledge inference model based on the knowledge graph model, and executing the change detection rule in the rule base by the knowledge inference model to perform intelligent change detection on the building in the cultivated land protection area.
CN202210844031.XA 2022-07-18 2022-07-18 Intelligent change detection method for buildings in multi-cloud and multi-fog farmland protection area Active CN115272848B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210844031.XA CN115272848B (en) 2022-07-18 2022-07-18 Intelligent change detection method for buildings in multi-cloud and multi-fog farmland protection area

Publications (2)

Publication Number Publication Date
CN115272848A true CN115272848A (en) 2022-11-01
CN115272848B CN115272848B (en) 2023-04-18

Family

ID=83767593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210844031.XA Active CN115272848B (en) 2022-07-18 2022-07-18 Intelligent change detection method for buildings in multi-cloud and multi-fog farmland protection area

Country Status (1)

Country Link
CN (1) CN115272848B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5701400A (en) * 1995-03-08 1997-12-23 Amado; Carlos Armando Method and apparatus for applying if-then-else rules to data sets in a relational data base and generating from the results of application of said rules a database of diagnostics linked to said data sets to aid executive analysis of financial data
CN103971115A (en) * 2014-05-09 2014-08-06 中国科学院遥感与数字地球研究所 Automatic extraction method for newly-increased construction land image spots in high-resolution remote sensing images based on NDVI and PanTex index
CN110516539A (en) * 2019-07-17 2019-11-29 苏州中科天启遥感科技有限公司 Remote sensing image building extracting method, system, storage medium and equipment based on confrontation network
CN112132006A (en) * 2020-09-21 2020-12-25 西南交通大学 Intelligent forest land and building extraction method for cultivated land protection
CN112818966A (en) * 2021-04-16 2021-05-18 武汉光谷信息技术股份有限公司 Multi-mode remote sensing image data detection method and system
CN113780097A (en) * 2021-08-17 2021-12-10 北京数慧时空信息技术有限公司 Arable land extraction method based on knowledge map and deep learning
CN113936217A (en) * 2021-10-25 2022-01-14 华中师范大学 Priori semantic knowledge guided high-resolution remote sensing image weakly supervised building change detection method
CN114117070A (en) * 2021-11-19 2022-03-01 重庆电子工程职业学院 Method, system and storage medium for constructing knowledge graph
CN114186076A (en) * 2021-12-15 2022-03-15 深圳市网联安瑞网络科技有限公司 Knowledge graph construction method, device, equipment and computer readable storage medium
CN114418932A (en) * 2021-11-30 2022-04-29 广州欧科信息技术股份有限公司 Historical building repair method and system based on digital twinning technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LU, Che et al.: "Improved U-Net Method for Building Extraction from High-Resolution Imagery", Science of Surveying and Mapping (《测绘科学》) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117033366A (en) * 2023-10-09 2023-11-10 航天宏图信息技术股份有限公司 Knowledge-graph-based ubiquitous space-time data cross verification method and device
CN117033366B (en) * 2023-10-09 2023-12-29 航天宏图信息技术股份有限公司 Knowledge-graph-based ubiquitous space-time data cross verification method and device
CN117607063A (en) * 2024-01-24 2024-02-27 中国科学院地理科学与资源研究所 Forest vertical structure parameter measurement system and method based on unmanned aerial vehicle
CN117607063B (en) * 2024-01-24 2024-04-19 中国科学院地理科学与资源研究所 Forest vertical structure parameter measurement system and method based on unmanned aerial vehicle
CN117993499A (en) * 2024-04-03 2024-05-07 江西省水利科学院(江西省大坝安全管理中心、江西省水资源管理中心) Multi-mode knowledge graph construction method for four pre-platforms for flood control in drainage basin
CN117993499B (en) * 2024-04-03 2024-06-04 江西省水利科学院(江西省大坝安全管理中心、江西省水资源管理中心) Multi-mode knowledge graph construction method for four pre-platforms for flood control in drainage basin

Also Published As

Publication number Publication date
CN115272848B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN115272848B (en) Intelligent change detection method for buildings in multi-cloud and multi-fog farmland protection area
CN113449680B (en) Knowledge distillation-based multimode small target detection method
CN105551028B (en) A kind of method and system of the geographical spatial data dynamic renewal based on remote sensing image
KR20220000898A (en) Method to identify shoreline changes based on multi-factor
CN112101159B (en) Multi-temporal forest remote sensing image change monitoring method
CN111028255B (en) Farmland area pre-screening method and device based on priori information and deep learning
CN111709379A (en) Remote sensing image-based hilly area citrus planting land plot monitoring method and system
CN113963222B (en) High-resolution remote sensing image change detection method based on multi-strategy combination
Ochoa et al. A framework for the management of agricultural resources with automated aerial imagery detection
CN112347895A (en) Ship remote sensing target detection method based on boundary optimization neural network
CN111753682B (en) Hoisting area dynamic monitoring method based on target detection algorithm
Wang et al. Tea picking point detection and location based on Mask-RCNN
CN107038416A (en) A kind of pedestrian detection method based on bianry image modified HOG features
CN112381013A (en) Urban vegetation inversion method and system based on high-resolution remote sensing image
CN110705449A (en) Land utilization change remote sensing monitoring analysis method
CN113657324A (en) Urban functional area identification method based on remote sensing image ground object classification
WO2020093624A1 (en) Antenna downward inclination angle measurement method based on multi-scale detection algorithm
CN115272876A (en) Remote sensing image ship target detection method based on deep learning
CN115965812A (en) Evaluation method for wetland vegetation species and ground feature classification by unmanned aerial vehicle image
CN116385902A (en) Remote sensing big data processing method, system and cloud platform
CN115019163A (en) City factor identification method based on multi-source big data
CN113379603B (en) Ship target detection method based on deep learning
CN112967286B (en) Method and device for detecting newly added construction land
Quispe et al. Automatic building change detection on aerial images using convolutional neural networks and handcrafted features
CN111882573B (en) Cultivated land block extraction method and system based on high-resolution image data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant