CN112927252B - Newly-added construction land monitoring method and device - Google Patents


Info

Publication number
CN112927252B
CN112927252B (application CN202110388500.7A)
Authority
CN
China
Prior art keywords
image
time phase
layer
phase
scale parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110388500.7A
Other languages
Chinese (zh)
Other versions
CN112927252A (en)
Inventor
文强
丁媛
苗立新
陈雨竹
周淑芳
卫娇娇
王策
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Twenty First Century Aerospace Technology Co ltd
Original Assignee
Twenty First Century Aerospace Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Twenty First Century Aerospace Technology Co ltd filed Critical Twenty First Century Aerospace Technology Co ltd
Priority to CN202110388500.7A priority Critical patent/CN112927252B/en
Publication of CN112927252A publication Critical patent/CN112927252A/en
Application granted granted Critical
Publication of CN112927252B publication Critical patent/CN112927252B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/13 — Edge detection
    • G06T7/136 — Segmentation involving thresholding
    • G06T7/187 — Segmentation involving region growing, region merging or connected component labelling
    • G06T7/90 — Determination of colour characteristics
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10024 — Color image
    • G06T2207/10032 — Satellite or aerial image; Remote sensing
    • G06T2207/30 — Subject of image; Context of image processing
    • G06T2207/30181 — Earth observation
    • G06T2207/30184 — Infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method and a device for monitoring newly added construction land, and relates to the technical field of remote sensing information extraction. The newly added construction land monitoring method comprises the following steps: receiving a front time phase true color orthophoto map, a rear time phase true color orthophoto map and a base vector; converting the front time phase and rear time phase true color orthophoto maps from the RGB color space to the Lab color space, respectively; converting the base vector into a base image object layer; segmenting the front time phase and rear time phase Lab images at two scale parameters, one large and one small; extracting the spatial features of each object in the large-scale parameter image object layer and the small-scale parameter image object layer; and determining whether the land use type of each object in the small-scale parameter image object layer has changed from non-construction land to construction land. Because the Lab color space is closer to human color perception, ground-object features can be described more accurately, and newly added construction land can be monitored more comprehensively and accurately.

Description

Newly-added construction land monitoring method and device
Technical Field
The application relates to the technical field of remote sensing information extraction, in particular to a method and a device for monitoring newly-added construction land.
Background
In natural resource management, operations such as land law enforcement and land supervision require monitoring of newly added construction land, so that newly added illegal construction land can be detected.
At present, newly added construction land is extracted from high-resolution images acquired by satellites such as Gaofen-1 and Gaofen-2, generally in one of the following three ways. First, pixel-based classification: various newly added construction land indexes are constructed, for example the Morphological Building Index (MBI), the Pixel Shape Index (PSI) and the built-up presence index (PanTex); if the value of a pixel exceeds a threshold, it is extracted as newly added construction land. Second, object-based segmentation and classification: pixels are expanded to the object level through pixel clustering algorithms, the images are segmented at multiple scales to form a plurality of object layers, and a richer feature library, for example spectral, shape, size, texture, spatial and contextual features, is used to extract newly added construction land; alternatively, a specific model, such as a construction land identification model defined on PanTex, is used for classification. Third, deep learning: a large sample library of newly added construction land is collected and manually labelled in a unified way, a suitable network architecture is chosen, for example AlexNet, VGGNet, FCN or U-Net, the chosen network is trained on the sample library to optimize its parameters, and the trained network is then applied to a target area to automatically extract the newly added construction land there.
However, in engineering applications, the images used for extracting newly added construction land are typically processed into true color digital orthophoto maps (DOM). Indexes that require the near-infrared band, for example NDVI and NDWI, cannot be computed from a true color image, so extraction methods that depend on near-infrared spectral information cannot be used. The pixel-based classification of the first way is prone to salt-and-pepper noise and cannot guarantee a low omission rate. With the object-based segmentation and classification of the second way, when the size of an object on the image differs greatly from that assumed in the model, or there is no obvious shadow around the building to be detected, the assumed feature model does not hold; extracting construction land with it then produces deviations and a poor extraction effect. The deep learning of the third way requires a large number of labelled samples for network training, which hinders engineering deployment of construction land extraction, and it cannot effectively resolve the trade-off between recall and false alarm rate.
Disclosure of Invention
The embodiment of the application aims to provide a method and a device for monitoring newly added construction land, which can monitor the newly added construction land more comprehensively and accurately.
In order to solve the technical problems, the embodiment of the application provides the following technical scheme:
the first aspect of the application provides a method for monitoring newly added construction land, which comprises the following steps:
receiving a front time phase true color orthographic image, a rear time phase true color orthographic image and a base vector, wherein the front and rear time phase true color orthographic images are true color orthographic images of the same region at different time phases, and the base vector represents the land utilization type of each region at the acquisition time of the front time phase image, or at some period earlier than that acquisition time;
respectively converting the front-time-phase true color orthographic image and the rear-time-phase true color orthographic image from an RGB color space to a Lab color space to obtain a front-time-phase Lab image and a rear-time-phase Lab image;
converting the basic vector into a basic image object layer, wherein each vector image spot corresponds to one image object in the basic image object layer, and each image object inherits all attribute information of the corresponding vector image spot;
Under the constraint of the basic image object layer, the front time phase Lab image and the rear time phase Lab image are firstly segmented by small scale parameters to obtain a small scale parameter image object layer; copying the small-scale parameter image object layer, and merging the objects in which the difference between the small-scale parameter image object layer and the front time phase Lab characteristics and the rear time phase Lab characteristics of surrounding objects is smaller than a defined threshold value according to a homogeneity rule by using large-scale parameters to obtain a large-scale parameter image object layer; the objects in the large-scale parameter image object layer are obtained by combining the objects in the small-scale parameter image object layer according to a homogeneity rule, and the boundary of the combined image object does not exceed the boundary of the corresponding object in the basic image object layer;
respectively extracting the spatial characteristics of objects in the large-scale parameter image object layer and the small-scale parameter image object layer;
and determining whether the land utilization type of the small-scale parameter image object layer is changed from a non-construction land to a construction land based on the spatial characteristics of the object in the large-scale parameter image object layer and the spatial characteristics of the object in the small-scale parameter image object layer so as to monitor the newly added construction land.
The second aspect of the present application provides an additional construction land monitoring device, comprising:
the receiving module is used for receiving a front time phase true color orthographic image, a rear time phase true color orthographic image and a base vector, wherein the front and rear time phase true color orthographic images are true color orthographic images of the same region at different time phases, and the base vector represents the land utilization type of each region at the acquisition time of the front time phase image, or at some period earlier than that acquisition time;
the conversion module is used for respectively converting the front-time real-color orthographic image and the rear-time real-color orthographic image from RGB color space to Lab color space to obtain a front-time Lab image and a rear-time Lab image;
the conversion module is used for converting the basic vector into a basic image object layer, wherein each vector image spot corresponds to one image object in the basic image object layer, and each image object inherits all attribute information of the corresponding vector image spot;
the segmentation module is used for segmenting the front time phase Lab image and the rear time phase Lab image under the constraint of the basic image object layer by using small scale parameters to obtain a small scale parameter image object layer; copying the small-scale parameter image object layer, and merging the objects in which the difference between the small-scale parameter image object layer and the front time phase Lab characteristics and the rear time phase Lab characteristics of surrounding objects is smaller than a defined threshold value according to a homogeneity rule by using large-scale parameters to obtain a large-scale parameter image object layer; the objects in the large-scale parameter image object layer are obtained by combining the objects in the small-scale parameter image object layer according to a homogeneity rule, and the boundary of the combined image object does not exceed the boundary of the corresponding object in the basic image object layer;
The feature calculation module is used for respectively extracting the spatial features of the objects in the large-scale parameter image object layer and the small-scale parameter image object layer;
and the monitoring module is used for determining whether the land utilization type of the small-scale parameter image object layer is changed from a non-construction land to a construction land based on the spatial characteristics of the object in the large-scale parameter image object layer and the spatial characteristics of the object in the small-scale parameter image object layer so as to monitor the newly added construction land.
A third aspect of the present application provides an electronic apparatus, comprising: a processor, a memory, a bus; the processor and the memory complete communication with each other through the bus; the processor is configured to invoke program instructions in the memory to perform the method of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium comprising: a stored program; wherein the program, when run, controls a device in which the storage medium is located to perform the method in the first aspect.
Compared with the prior art, the newly added construction land monitoring method provided by the first aspect of the application comprises the steps of after receiving the front-time true color orthophoto map and the rear-time true color orthophoto map, converting the front-time true color orthophoto map and the rear-time true color orthophoto map from RGB color space to Lab color space to obtain a front-time Lab image and a rear-time Lab image, further dividing the front-time Lab image and the rear-time Lab image to obtain a large-scale parameter image object layer and a small-scale parameter image object layer, extracting the spatial characteristics of the large-scale parameter image object layer and the small-scale parameter image object layer, and finally determining the land utilization type change condition of the small-scale parameter image object layer based on the spatial characteristics of the large-scale parameter image object layer and the small-scale parameter image object layer so as to monitor the newly added construction land. The spatial characteristics of the object in the image are extracted under the Lab color space, and whether the object is a newly added construction land or not is determined based on the spatial characteristics of the object under the Lab color space, so that the object is more close to the perception of human eyes on the color, the feature of the ground object can be more accurately described, and the newly added construction land can be more comprehensively and accurately monitored.
The newly added construction land monitoring device provided in the second aspect of the present application, the electronic apparatus provided in the third aspect of the present application, and the computer-readable storage medium provided in the fourth aspect of the present application have the same or similar advantageous effects as the newly added construction land monitoring method provided in the first aspect.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the drawings, wherein like or corresponding reference numerals indicate like or corresponding parts, there are shown by way of illustration, and not limitation, several embodiments of the application, in which:
FIG. 1 schematically illustrates a flow chart of a method of monitoring an added construction site;
FIG. 2 schematically illustrates a second flow chart of the method of monitoring newly added construction land;
FIG. 3 schematically illustrates a block diagram of a newly added construction site monitoring apparatus;
fig. 4 schematically shows a structural diagram of an electronic device.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
It is noted that unless otherwise indicated, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs.
An embodiment of the present application provides a method for monitoring newly added construction land, fig. 1 schematically shows a flowchart of a method for monitoring newly added construction land, and referring to fig. 1, the method may include:
s101: a front true color orthophoto map, a rear true color orthophoto map, and a base vector are received.
Wherein the front time phase true color orthophoto image and the rear time phase true color orthophoto image are true color orthophoto images of the same region at different time phases. The base vector represents the land utilization type of each region at the acquisition time of the front time phase image, or at some period earlier than that acquisition time.
Taking the Beijing-2 satellite with a resolution of 0.8 m as an example, remote sensing image data acquired by the satellite in April 2020 and May 2020 are obtained respectively. The remote sensing image data comprise 4 bands, namely blue, green, red and near-infrared. The remote sensing image data are processed into 8-bit, 3-band true color orthographic images, namely the front time phase true color orthographic image and the rear time phase true color orthographic image.
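The patent does not spell out how the 3-band true color image is composed from the 4-band data; a minimal sketch, assuming the bands are stored in blue, green, red, near-infrared order (a hypothetical ordering for illustration), might look like:

```python
import numpy as np

def to_true_color(img4):
    """Compose a true color (R, G, B) image from a 4-band array.

    img4: (H, W, 4) uint8 array, bands assumed ordered blue, green, red, NIR.
    Returns a (H, W, 3) array; the near-infrared band is dropped.
    """
    return img4[..., [2, 1, 0]]
```

The near-infrared band is discarded here, which is exactly why NIR-dependent indexes such as NDVI cannot be computed downstream.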
S102: and respectively converting the front-time real-color orthographic image and the rear-time real-color orthographic image from an RGB color space to a Lab color space to obtain a front-time Lab image and a rear-time Lab image.
In the traditional RGB color space, the R, G and B components are strongly correlated and describe only chromaticity information, which is unfavorable for feature extraction of ground objects. The applicant has found that ground objects can be extracted more accurately in the Lab color space.
The Lab color space is the uniform color space recommended by the International Commission on Illumination (CIE) in 1976. It is a device-independent color system and also a color system based on physiological characteristics. The cone cells on the retina of the human eye form a trichromatic mechanism, which becomes an opponent mechanism along the transmission path of visual information to the brain, namely a light-intensity (black-white, L) response, a red-green (R-G) response and a yellow-blue (Y-B) response. In each pair, one color excites the response and the other suppresses it. Based on these visual characteristics, the Lab color space describes human visual perception in a digitized way. The L component represents the brightness of a pixel, with a value range of [0, 100], from pure black to pure white. The a component represents the range from green to red, with values in [-128, 127]; positive values represent red and negative values green. The b component represents the range from blue to yellow, with values in [-128, 127]; positive values represent yellow and negative values blue. The L, a and b components are mutually independent, that is, brightness and chromaticity information are separated, which facilitates separating ground objects from the background in the image and thus identifying them. The standard CIE Lab image is appropriately transformed to obtain the Lab image used in the present invention.
S103: and converting the basic vector into a basic image object layer.
In the basic image object layer, each vector image spot corresponds to one image object, and each image object inherits all attribute information of the corresponding vector image spot.
S104: and dividing the front time phase Lab image and the rear time phase Lab image respectively by using the large scale parameter and the small scale parameter to obtain a large scale parameter image object layer and a small scale parameter image object layer.
Specifically, under the constraint of a basic image object layer, a front time phase Lab image and a rear time phase Lab image are firstly segmented by small scale parameters to obtain a small scale parameter image object layer; copying the small-scale parameter image object layer, and merging the objects in which the difference between the small-scale parameter image object layer and the front time phase Lab characteristics and the rear time phase Lab characteristics of surrounding objects is smaller than a defined threshold value according to a homogeneity rule by using large-scale parameters to obtain a large-scale parameter image object layer; the objects in the large-scale parameter image object layer are obtained by combining the objects in the small-scale parameter image object layer according to the homogeneity rule, and the boundary of the combined image object does not exceed the boundary of the corresponding object in the basic image object layer.
In implementations, multi-scale segmentation and multi-threshold segmentation may be combined. Specifically, multi-scale segmentation is first performed on the front time phase Lab image and the rear time phase Lab image, and multi-threshold segmentation is then applied to the segmented image objects according to the characteristics of the three Lab layers, so that both large and small buildings in the image can be cleanly separated from the background. Alternatively, multi-threshold segmentation may be applied to the front time phase Lab image first, and multi-scale segmentation applied to the resulting image objects afterwards. The specific procedures for multi-scale and multi-threshold segmentation are prior art and are not described in detail here.
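The patent treats multi-threshold segmentation as prior art without detail; a minimal sketch of that step on one Lab layer, under no assumptions beyond 8-bit values, might look like:

```python
import numpy as np

def multi_threshold(layer, thresholds):
    """Split one Lab layer into classes by a set of cut values.

    layer: 2-D array (e.g. the L, a or b layer, values in [0, 255]);
    thresholds: iterable of cut values. Each pixel is labelled with the
    index of the bin it falls into (0 .. len(thresholds)).
    """
    return np.digitize(layer, bins=sorted(thresholds))
```

In a full pipeline this labelling would be applied per image object produced by the multi-scale segmentation, not to the whole raster at once.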
S105: and respectively extracting the spatial characteristics of each object of the large-scale parameter image object layer and the small-scale parameter image object layer.
After the large-scale parameter image object layer and the small-scale parameter image object layer have been obtained by segmenting the front time phase and rear time phase Lab images at the two scale parameters, the specific ground-object type of each object in the two layers on the front and rear time phase images must be determined in order to monitor newly added construction land. It is therefore necessary to obtain the features of all objects in both layers, and then, based on those features, determine the land utilization type of each object in the front and rear time phases, so as to judge whether an object has changed from non-construction land to construction land.
Since the front-phase Lab image and the rear-phase Lab image are images of the Lab color space, the objects in the large-scale parameter image object layer and the small-scale parameter image object layer have features described in three components of L, a, and b in both the front and rear phases.
S106: and determining whether the land utilization type of the small-scale parameter image object layer is changed from the non-construction land to the construction land based on the spatial characteristics of the object in the large-scale parameter image object layer and the spatial characteristics of the object in the small-scale parameter image object layer so as to monitor the newly added construction land.
After the spatial characteristics of each object of the large-scale parameter image object layer and the small-scale parameter image object layer are determined, the land utilization types of the objects in the front time phase and the rear time phase are respectively determined according to the spatial characteristics of the objects, and then whether each object is a newly added construction land or not is determined according to the change condition of each object land utilization type, namely, whether the land utilization type of the object is changed from a non-construction land to a construction land or not is determined in the small-scale parameter image object layer, so that the monitoring of the newly added construction land in a certain area is realized.
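The change rule described above reduces to a simple per-object comparison of the two classified time phases. A minimal sketch (the type labels and the per-object mapping are hypothetical) might look like:

```python
def is_new_construction(front_type, rear_type):
    """True if an object's land utilization type changed from
    non-construction land (front time phase) to construction land
    (rear time phase)."""
    return front_type != "construction" and rear_type == "construction"

def monitor(objects):
    """objects: {object_id: (front_type, rear_type)}.
    Returns the ids of objects flagged as newly added construction land."""
    return [oid for oid, (front, rear) in objects.items()
            if is_new_construction(front, rear)]
```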
From the above, it can be seen that, in the newly added construction land monitoring method provided by the embodiment of the present application, after receiving the front-time true color orthophoto map and the rear-time true color orthophoto map, the front-time true color orthophoto map and the rear-time true color orthophoto map are transformed from the RGB color space to the Lab color space to obtain the front-time Lab image and the rear-time Lab image, and then the front-time Lab image and the rear-time Lab image are segmented to obtain the small-scale parameter image object layer and the large-scale parameter image object layer, so as to extract the spatial features of the small-scale parameter image object layer and the large-scale parameter image object layer, and finally, the land use type change condition of the small-scale parameter image object layer is determined based on the spatial features of each object in the large-scale parameter image object layer and the small-scale parameter image object layer, so as to monitor the newly added construction land. The spatial characteristics of the object in the image are extracted under the Lab color space, and whether the object is a newly added construction land or not is determined based on the spatial characteristics of the object under the Lab color space, so that the object is more close to the perception of human eyes on the color, the feature of the ground object can be more accurately described, and the newly added construction land can be more comprehensively and accurately monitored.
Further, as a refinement and extension of the method shown in fig. 1, the embodiment of the application also provides a newly added construction land monitoring method. Fig. 2 schematically illustrates a second flowchart of a method for monitoring newly added construction land, see fig. 2, which may include:
s201: a base image and a base vector are received.
When a user needs to monitor newly added construction land, base images and a base vector are required as input data.
The base images are remote sensing image data of different time phases in a certain area acquired by a satellite. After receiving the remote sensing image data, the newly added construction land monitoring device processes them into 3-band, 8-bit front time phase and rear time phase true color orthographic images.
The base vector characterizes the land use type of each region at the time of the front time phase image or earlier; in most cases the land use types it characterizes are earlier than the acquisition time of the front time phase image. For example, the base vector contains the spatial distribution of railway land, highway land, river water surface, scenic spots, special-purpose land and the like. Regions in which newly added construction land cannot appear on the image can be excluded using the base vector, so that monitoring is restricted to regions where newly added construction land may appear later, which improves monitoring efficiency.
S202: a set of pel-level features is generated.
Because the land utilization type of the ground feature is required to be identified subsequently based on the spatial feature of the ground feature, and the newly added construction land is further monitored, a pixel-level feature set can be generated based on the image, so that the spatial feature of the ground feature is generated subsequently.
Specifically, S202 may include:
s2021: and respectively converting the front-time true color orthophoto image and the rear-time true color orthophoto image from an RGB color space to an XYZ color space to obtain a front-time phase transition image and a rear-time phase transition image.
The calculation formula for converting the RGB color space into the XYZ color space is as follows:
X=var_R*0.4124+var_G*0.3576+var_B*0.1805
Y=var_R*0.2126+var_G*0.7152+var_B*0.0722
Z=var_R*0.0193+var_G*0.1192+var_B*0.9505
where var_R, var_G and var_B are the gamma-expanded (linearized) R, G and B components. The linearization formula is rendered as an image in the original; in the standard sRGB conversion, each 8-bit channel value v is first scaled to v/255, then linearized as ((v/255 + 0.055)/1.055)^2.4 if v/255 > 0.04045 and as (v/255)/12.92 otherwise, and finally multiplied by 100.
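The RGB-to-XYZ step can be sketched directly from the matrix above, with the standard sRGB gamma expansion assumed for the linearization (the original formula is an image and is not reproduced in the text):

```python
def linearize(v):
    """Standard sRGB gamma expansion of one 8-bit channel,
    scaled to [0, 100] (assumed; the original formula is an image)."""
    s = v / 255.0
    s = ((s + 0.055) / 1.055) ** 2.4 if s > 0.04045 else s / 12.92
    return s * 100.0

def rgb_to_xyz(R, G, B):
    """8-bit RGB to CIE XYZ using the matrix given in the text (D65)."""
    var_R, var_G, var_B = linearize(R), linearize(G), linearize(B)
    X = var_R * 0.4124 + var_G * 0.3576 + var_B * 0.1805
    Y = var_R * 0.2126 + var_G * 0.7152 + var_B * 0.0722
    Z = var_R * 0.0193 + var_G * 0.1192 + var_B * 0.9505
    return X, Y, Z
```

As a sanity check, pure white (255, 255, 255) maps to approximately the D65 white point (95.05, 100.0, 108.9) used as (Xn, Yn, Zn) below.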
s2022: and respectively carrying out standard CIE Lab transformation on the front time phase transition image and the rear time phase transition image, then adding preset parameters into the transformed image layer, and carrying out downward rounding operation to obtain the front time phase Lab image and the rear time phase Lab image.
The calculation formula for converting the XYZ color space into the Lab color space is as follows:
L=INT(116f(Y/Yn)-16+C)
a=INT(500[f(X/Xn)-f(Y/Yn)]+C)
b=INT(200[f(Y/Yn)-f(Z/Zn)]+C)
wherein INT is the round-down (floor) function, C is an offset (i.e. the preset parameter), X, Y, Z are the tristimulus values of the object, and Xn, Yn, Zn are the tristimulus values of the CIE standard illuminant (Xn = 95.047, Yn = 100.000, Zn = 108.883). The function f(t) is the standard CIE Lab companding function: f(t) = t^(1/3) when t > (6/29)^3, and f(t) = (1/3)(29/6)^2·t + 4/29 otherwise.
In order to map the value ranges of the pre-phase and post-phase Lab images onto the 8-bit range, the offset C is added to each of the three components L, a and b of the standard CIE Lab result, followed by rounding down. After this processing, the values of the three components fall within the interval [0, 255], which makes threshold selection and image processing operations more convenient.
The calculation formula of C is as follows:
-min(Lmin, amin, bmin) ≤ C ≤ 255 - max(Lmax, amax, bmax)
wherein Lmin, amin and bmin denote the minimum values of the L, a and b components of RGB colors in the standard CIE Lab color space, and Lmax, amax and bmax denote the corresponding maximum values.
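The two color-space steps (S2021 and S2022) can be sketched in Python with NumPy. This is a minimal illustration, not the patent's code: it assumes var_R, var_G and var_B are the RGB components normalized to [0, 1] (the patent states no gamma-linearization step, so none is applied) and uses C = 128, which satisfies the constraint on C.

```python
import numpy as np

# Matrix from S2021 (rows produce X, Y, Z from var_R, var_G, var_B).
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def f(t):
    """Standard CIE Lab companding function."""
    d = 6.0 / 29.0
    return np.where(t > d ** 3, np.cbrt(t), t / (3 * d ** 2) + 4.0 / 29.0)

def rgb_to_lab_offset(rgb, C=128):
    """RGB (normalized to [0, 1]) -> offset Lab with floor rounding (S2022)."""
    xyz = rgb @ RGB_TO_XYZ.T * 100.0          # scale to the 0-100 range of Xn, Yn, Zn
    xn, yn, zn = 95.047, 100.000, 108.883     # CIE standard illuminant tristimulus
    fx, fy, fz = f(xyz[..., 0] / xn), f(xyz[..., 1] / yn), f(xyz[..., 2] / zn)
    L = np.floor(116 * fy - 16 + C)
    a = np.floor(500 * (fx - fy) + C)
    b = np.floor(200 * (fy - fz) + C)
    return np.stack([L, a, b], axis=-1)
```

With C = 128 a pure-white pixel maps to L = 228, and all three components stay within the 8-bit range.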
S2023: and calculating the difference characteristic image layers of each layer L, a and b based on the front time phase Lab image and the rear time phase Lab image.
That is, the difference feature layer L_diff between the L layers of the post-phase and pre-phase Lab images, the difference feature layer a_diff between their a layers, and the difference feature layer b_diff between their b layers are calculated.
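Step S2023 then reduces to an element-wise subtraction per layer; a minimal sketch (signed arithmetic is assumed so that negative changes are preserved):

```python
import numpy as np

def diff_layers(lab_pre, lab_post):
    """Difference feature layers (L_diff, a_diff, b_diff): post-phase minus
    pre-phase, computed in signed integers so that decreases are kept."""
    return lab_post.astype(np.int32) - lab_pre.astype(np.int32)
```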
S2024: edge detection is performed on the front-phase Lab image and the rear-phase Lab image.
The edges of an image refer to the portions of the image where the brightness changes significantly in a localized area, and the gray-level profile of the portions can be generally seen as a step, i.e. a sharp change from a smaller gray-level value to a relatively larger gray-level value. By performing edge detection on the front-phase Lab image and the rear-phase Lab image, the front-phase Lab image and the rear-phase Lab image can be conveniently segmented later.
In the specific implementation process, after Gaussian denoising of the pre-phase and post-phase Lab images, edge detection can be performed on their L layers based on the Lee Sigma operator and the Canny operator, obtaining the layers qsx_lee_sigma, qsx_L_canny, hsx_lee_sigma and hsx_L_canny respectively.
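Step S2024 names the Lee Sigma and Canny operators but gives no parameters; the sketch below substitutes a plain Gaussian-smooth-plus-gradient-magnitude detector on the L layer to illustrate the denoise-then-detect pipeline (the 3×3 kernels and the threshold of 50 are assumptions, not the patent's settings):

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'same' 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def edge_map(L, threshold=50.0):
    """Gaussian denoise, then Sobel gradient magnitude on the L layer
    (a simplified stand-in for the Lee Sigma / Canny operators of S2024)."""
    gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    smooth = convolve2d(L.astype(float), gauss)
    sx = convolve2d(smooth, np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]))
    sy = convolve2d(smooth, np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]))
    return np.hypot(sx, sy) > threshold
```

A vertical brightness step produces edge pixels along the step and none in the flat regions.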
Finally, the results from S2022-S2024 are integrated together to obtain the pixel level feature set for later use.
S203: and excluding areas where no new construction land appears based on the basis vectors.
In a specific implementation, the regions where newly added construction land cannot appear are determined in the pre-phase Lab image according to the basis vector, for example: railway land, highway land, river water surface, scenic spots and special land. Image objects are generated along the boundaries of the vector patches by chessboard segmentation; if the land class recorded in the corresponding vector patch is one in which construction land cannot be newly added, the object is assigned to the class "no change found", and only objects that remain unclassified participate in the subsequent segmentation.
S204: adaptive multi-scale, multi-threshold segmentation.
Specifically, S204 may include:
S2041: Perform multi-scale segmentation of the pre-phase and post-phase Lab images within the range of the unclassified objects, based on a region merging algorithm that minimizes regional heterogeneity, to obtain a small-scale-parameter image object layer and a large-scale-parameter image object layer.
The image is segmented into objects at different scales by setting the weights of the layers participating in segmentation and the shape heterogeneity, spectral heterogeneity and compactness of the resulting patches, such that the average heterogeneity between segmented objects is maximized while the homogeneity of the pixels inside each object is maximized.
Starting from single pixels, the region merging algorithm based on minimum internal heterogeneity gradually merges pixels into small image objects and small objects into larger ones, finally completing the segmentation at the chosen optimal segmentation scale.
For example, if patches s1 and s2 are merged into s, the regional heterogeneity of s is formulated as follows:

f = w_color · h_color + (1 - w_color) · h_shape

wherein w_color is the spectral weight of the merged patch, h_color is the spectral heterogeneity of the merged patch, and h_shape is the shape heterogeneity of the merged patch.
When determining the optimal segmentation scale, the rate of change of the local variance of image-object homogeneity (ROC-LV) is first calculated at different segmentation scales; the scale at which ROC-LV reaches its maximum, i.e. where a peak appears, is taken as the optimal segmentation scale, thereby realizing adaptive multi-scale segmentation.
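The ROC-LV peak search of S2041 can be sketched directly from its definition (the LV values in the test are illustrative; LV denotes the mean local variance of the image objects produced at each candidate scale):

```python
def optimal_scale(scales, lv):
    """Return the scale at which the rate of change of local variance peaks:
    ROC-LV_i = (LV_i - LV_{i-1}) / LV_{i-1} * 100."""
    roc = [(lv[i] - lv[i - 1]) / lv[i - 1] * 100.0 for i in range(1, len(lv))]
    return scales[1 + roc.index(max(roc))]
```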
After multi-scale segmentation using the optimal segmentation scale, there may still be cases where the fine construction land is not completely segmented from the background. At this time, further segmentation may continue in a manner of adaptive multi-threshold segmentation.
S2042: Segment the objects in the small-scale-parameter image object layer by adaptive multi-threshold segmentation, to obtain the objects that remain in the small-scale-parameter image object layer.
Adaptive multi-threshold segmentation divides an image directly into multiple object regions through reasonable thresholds, with no prior knowledge required; when the gray level of a target region differs greatly from its surroundings, the division is effective. Specifically, peak-valley detection is performed on the morphologically filtered neighborhood-average histogram according to the image characteristics, and Gaussian function parameters are quickly fitted. The data of each peak region are truncated and statistically analyzed separately to determine the fitting parameters, which better avoids the interference caused by overlapping peaks and enables adaptive multi-threshold segmentation of complex images.
For example, representing an M×N digital image as a two-dimensional gray function f(x, y), the local gray average g(x, y) at point (x, y) over a (2k+1)×(2k+1) neighborhood is: g(x, y) = 1/(2k+1)^2 · Σ(i = -k..k) Σ(j = -k..k) f(x+i, y+j).
The skewness is calculated from the samples, the peak of each region is determined by taking the minimum absolute skewness, and the probability density of each peak is fitted as a normal density: p_i(x) = 1/(√(2π)·σ_i) · exp(-(x - μ_i)^2 / (2σ_i^2)).
Finally, by the constraint

p_i(T_i) = p_{i+1}(T_i)

the threshold T_i between adjacent peak regions is obtained from the fitted normal functions p_i(x), and adaptive segmentation is then performed based on the determined set of thresholds.
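Given two fitted peak Gaussians, the constraint p_i(T_i) = p_{i+1}(T_i) reduces to a quadratic in T_i. A sketch under the assumption that the two densities carry equal weight (the patent does not state the peak priors):

```python
import numpy as np

def gaussian_intersection(mu1, s1, mu2, s2):
    """Threshold T between two fitted peak Gaussians, from p1(T) = p2(T).
    Equating the two normal densities and taking logs gives a quadratic
    a*T^2 + b*T + c = 0."""
    a = 1 / (2 * s1 ** 2) - 1 / (2 * s2 ** 2)
    b = mu2 / s2 ** 2 - mu1 / s1 ** 2
    c = mu1 ** 2 / (2 * s1 ** 2) - mu2 ** 2 / (2 * s2 ** 2) + np.log(s1 / s2)
    if abs(a) < 1e-12:                      # equal variances: midpoint
        return (mu1 + mu2) / 2
    for r in np.roots([a, b, c]):
        r = float(np.real(r))
        if min(mu1, mu2) <= r <= max(mu1, mu2):
            return r                        # keep the root between the means
    return float(np.real(np.roots([a, b, c])[0]))
```

For equal variances the threshold is simply the midpoint of the two peak means; for unequal variances it shifts toward the narrower peak.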
Taking as an example the monitoring of newly added construction land from remote sensing image data collected by the Beijing-2 satellite, the optimal segmentation scale determined in step S2041 is 10, with a shape parameter of 0.1 and a compactness parameter of 0.5. The threshold of the post-phase b layer determined in step S2042 is 139.
S205: and merging the neighborhood image spots by adopting a multi-scale segmentation merging algorithm based on the limiting condition.
After small-scale image segmentation, a complete ground feature may be split into several objects, so adjacent objects with sufficiently consistent features still need to be merged.
In the implementation, the L, a and b layers of the Lab color space are combined to analyze the differences between objects within a neighborhood, and objects whose differences are smaller than a threshold are merged under the constraint of a larger scale parameter. The value of the larger scale parameter is determined by increasing it gradually in steps of 5. The finally determined difference-threshold constraints of the multi-scale segmentation merging algorithm are as follows:
abs_hsx_L_mean_diff_to_unclassified<5
abs_hsx_a_mean_diff_to_unclassified<5
abs_hsx_b_mean_diff_to_unclassified<5
abs_qsx_a_mean_diff_to_unclassified<5
Where abs_hsx_L_mean_diff_to_unclassified represents the absolute value of the difference between the mean of a segmented object on hsx_L and the mean of the surrounding unclassified objects, hsx_L being the L layer of the post-phase image. The meanings of abs_hsx_a_mean_diff_to_unclassified, abs_hsx_b_mean_diff_to_unclassified and abs_qsx_a_mean_diff_to_unclassified are analogous; hsx_a and hsx_b are the a and b layers of the post-phase image, and qsx_a is the a layer of the pre-phase image.
The thresholds in the constraints are determined by trial and error. This completes the merging of the multi-scale objects.
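The four merge constraints can be checked per pair of neighbouring objects; a minimal sketch (the dict-of-means representation is an assumption; the threshold 5 is the value reported above):

```python
def can_merge(obj_means, nbr_means, threshold=5):
    """True if a segmented object and an adjacent unclassified object differ
    by less than `threshold` on every layer listed in the S205 constraints.
    obj_means / nbr_means: dicts of per-layer mean values."""
    layers = ("hsx_L", "hsx_a", "hsx_b", "qsx_a")
    return all(abs(obj_means[k] - nbr_means[k]) < threshold for k in layers)
```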
Taking again the example of monitoring newly added construction land from Beijing-2 satellite image data, the optimal segmentation scale determined in step S205 is 80, with a shape parameter of 0.1 and a compactness parameter of 0.5.
What needs to be explained here is: after the small-scale parameter object layer and the large-scale parameter object layer are formed by multi-scale segmentation, spatial features of objects in the two image object layers are also required to be extracted in advance, so that whether the object category is changed from a non-construction land to a construction land is judged by the spatial features.
Table 1 shows the spatial features to be extracted.
TABLE 1 spatial features to be extracted
Unlike common statistical feature-optimization methods, feature selection here exploits the ability of the Lab color space to represent features directly and fully, and features are selected according to the visual characteristics of the target ground features. Combining the specific physical meaning of the three components L, a and b of the Lab color space, the target ground feature is mainly described, in terms of color, brightness, density and contrast with the surrounding environment, by the mean or standard deviation of L, a and b together with the other spatial features in Table 1.
Then, feature recognition is performed based on the spatial features of the target object, and then newly added construction land is monitored.
Here, the sample extremum method is mainly used to determine the threshold of each constraint condition described below. When selecting samples, samples close to the boundary between target and non-target are chosen as far as possible (about ten samples per class). According to the color characteristics represented by the L, a and b layers, the threshold is taken directly from the maximum or minimum of the sample values: for a highlight target, the L threshold takes the minimum L in the samples; for a red target, the a threshold takes the minimum a in the samples; for a blue target, the b threshold takes the maximum b in the samples. The calculation formula of the sample extremum method is as follows:
Q = max(k1, k2, …, kn) or min(k1, k2, …, kn)

wherein Q denotes the determined threshold, k1, k2, …, kn denote the values of samples 1 to n on a given feature, max() is the maximum function and min() is the minimum function.
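The sample extremum method is a one-liner; a direct transcription (the mode choice follows the color rules above: minimum for highlight L and red a thresholds, maximum for blue b thresholds):

```python
def sample_extremum_threshold(samples, mode):
    """Q = min(k1..kn) or max(k1..kn) over the sample feature values.
    mode='min' for highlight-L / red-a thresholds, 'max' for blue-b."""
    return min(samples) if mode == "min" else max(samples)
```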
It should be noted that the objects in the large-scale parameter image object layer are obtained by combining a plurality of objects in the small-scale parameter image object layer according to a certain rule. Therefore, the large-scale image object layer is more suitable for extracting the ground objects with larger area or the ground objects with more prominent characteristics such as shape, texture and the like. The small-scale image object layer is more suitable for extracting ground objects with smaller area or ground objects with more prominent spectral characteristics and less important characteristics such as shape, texture and the like.
S206: and extracting the newly added construction land at the large-scale parameter image object layer.
Specifically, the land use type of each object in the pre phase and post phase is determined from its pre-phase and post-phase spatial features, and from this it is determined whether the object is newly added construction land. This includes extracting newly added blue roof houses, extracting vegetation that has changed into suspected construction land, and removing the interfering ground class of newly added agricultural facilities.
(1) Extracting newly added blue roof house
Specifically, whether a post-phase object is a blue roof house is determined by comparing the mean of its post-phase b layer (Mean hsx_b) with the blue roof b threshold; for objects that are blue roof houses in the post phase, the difference (b_diff) between the post-phase and pre-phase b layer means is compared with the blue change threshold to determine whether the object was already a blue roof house in the pre phase. An object that is a blue roof house in the post phase but not in the pre phase is a newly added blue roof house. The blue roof b threshold is the maximum of the post-phase b layer means of the blue roof house samples; the blue change threshold is the maximum of the differences between the post-phase and pre-phase b layer means over all newly added blue roof house samples, and is a negative number.
In the implementation, the b layer reflects the change of blue roof houses to the greatest extent, so the optimal threshold separating foreground blue roofs from the background is computed by maximum between-class variance (Otsu's method) and determined to be 139; blue roof houses are then segmented on the large-scale objects with this threshold.
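The "maximum between-class variance" threshold used here is Otsu's method; a direct histogram implementation over 8-bit values (a sketch, not the patent's code):

```python
import numpy as np

def otsu_threshold(values):
    """Otsu's maximum between-class variance threshold over 8-bit values,
    as used in S206(1) to separate blue roofs from background on the b layer."""
    hist = np.bincount(np.asarray(values).ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0          # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2                  # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```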
A number of blue roof house samples are selected, covering as many cases as possible, such as light blue roofs and dark blue roofs. Threshold ranges of the samples on the post-phase b band, pre-phase b band and post-phase L band are obtained, for example: post-phase b 130 to 138, pre-phase b 140 to 150, and post-phase L 170 to 190. With these three spatial features and the determined thresholds, the feature description of the newly added blue roof house is completed as follows:
Mean hsx_b < 138 and Mean qsx_b > 150 and Mean hsx_L > 188
Mean hsx_b < 131 and Mean qsx_b > 145 and Mean hsx_L > 188
Mean hsx_b < 136 and Mean qsx_b > 140 and Mean hsx_L > 188
Mean hsx_b < 137 and Mean qsx_b > 145 and Mean hsx_L > 195
Mean hsx_b < 130 and Mean qsx_b > 140 and Mean hsx_L > 170
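The five feature descriptions above are a disjunction of interval tests on three object means; a direct transcription (the numeric bounds are the ones listed above, which the patent derives from its own samples):

```python
def is_new_blue_roof(hsx_b, qsx_b, hsx_L):
    """True if the object matches any of the five rules for a newly added
    blue roof house (Mean hsx_b, Mean qsx_b, Mean hsx_L)."""
    rules = [
        (138, 150, 188),
        (131, 145, 188),
        (136, 140, 188),
        (137, 145, 195),
        (130, 140, 170),
    ]
    return any(hsx_b < b_hi and qsx_b > b_lo and hsx_L > L_lo
               for b_hi, b_lo, L_lo in rules)
```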
(2) Extracting vegetation to become suspected construction land
Specifically, the mean of an object's pre-phase a layer, the mean of its post-phase a layer and the edge features of its post-phase L layer are compared with the corresponding "vegetation to suspected construction land" thresholds to determine whether the object has become suspected construction land. These thresholds are based on the maximum pre-phase a layer value, the minimum post-phase a layer value and the minimum post-phase L layer edge-feature value of the sample objects in which vegetation changed into suspected construction land.
In the implementation, vegetation appears green on a true-color image and has a low a value, while suspected construction land is generally not green and has a relatively high a value. The thresholds are therefore preliminarily determined as 142 to 148, and objects satisfying "Mean qsx_a < 142 and Mean hsx_a > 147" are labeled as the "vegetation_construction land" class.
Based on the pre-phase and post-phase images, a number of samples of vegetation changing into construction land are selected; the coefficient of variation is calculated over the samples' spatial feature sets and the features are ranked, finally yielding the optimal spatial feature set and threshold ranges for vegetation-to-construction-land, and the target features are extracted according to the sample statistics. The statistics show that the a band and L band after the Lab color space transformation, together with the Canny band after edge detection, are highly discriminative, so objects satisfying "Mean hsx_a > 141 and Mean hsx_L_Canny > 0.3" or "Mean hsx_a > 138 and Mean diff. to hsx_a, hsx_vegetation > 13" are first classified into the "vegetation_construction land" class. Here, "Mean diff. to hsx_a, hsx_vegetation > 13" means that the object's post-phase a layer mean exceeds that of the surrounding post-phase vegetation objects by more than 13.
Finally, the surroundings of the "vegetation_construction land" objects are searched cyclically for the objects with the greatest boundary overlap, ensuring that each newly added construction land object is complete and reasonable. Objects satisfying the cyclic search condition "Rel. border to vegetation_construction land > 0.8" are assigned to the "vegetation_construction land" class, with 5 iterations.
(3) Extracting newly added agricultural facilities
Specifically, whether a post-phase object is a newly added agricultural facility is determined by comparing the number of patches in the region where the pre-phase and post-phase objects are located, the difference between the pre-phase object and its neighbors on the L layer, and the difference between the post-phase object and its neighbors on the L layer, against the agricultural facility thresholds.
The agricultural facility threshold is the minimum value of the number of image spots of the area where the agricultural facility sample is located, and the minimum value of the L image layer difference values of the front and rear time phases.
In the specific implementation, since newly added agricultural facilities and construction land show consistent spectral changes, newly added agricultural facilities must be extracted separately as a special land class. By analyzing the image characteristics of the target, the applicant found that facility greenhouses and small-to-medium greenhouses usually appear in clusters and exhibit high object density after segmentation with small scale parameters. A regional feature is therefore constructed, namely a 30 m buffer around each object within which the number of patches is counted, so as to quickly extract newly added agricultural facilities. 200 samples each of newly added and non-newly-added agricultural facilities are selected, the SEaTH method is used to compute the classification threshold of the patch-count feature within the 30 m buffer, and the threshold is finally determined to be 15; objects in the "newly added agricultural facility potential area" with "Number of newly added agricultural facility potential area (30) < 15" are assigned to the non-newly-added agricultural facility class.
Within the "newly added agricultural facility potential area", the mean difference feature of the post-phase brightness band between each patch and the potential area (Mean diff. to hsx_L) is calculated, and the optimal division threshold of 6 is obtained algorithmically. The mean difference feature of the pre-phase brightness band is obtained in the same way, with a threshold of 3. Objects in the potential area satisfying "Mean diff. to hsx_L, newly added agricultural facility potential area > 6 and Mean diff. to qsx_L, newly added agricultural facility potential area < 3" are assigned to the "newly added agricultural facility" class.
The overall extraction of the target follows the idea of "extraction combined with gradual removal": after the main newly added agricultural facilities are determined through the optimal spatial features, their surroundings are searched to guarantee the extraction rate. Using cyclic extraction, objects satisfying "Mean diff. to hsx_L, newly added agricultural facility potential area > 4 and Mean diff. to qsx_L, newly added agricultural facility potential area < 2 and Rel. border to L1_newly added agricultural facility > 0.1" are assigned to the "newly added agricultural facility" class, and objects satisfying the cyclic search condition "Rel. border to L1_newly added agricultural facility > 0.9" are likewise classified as "newly added agricultural facility". This completes the extraction of newly added agricultural facilities and guarantees their detection rate.
Next, considering both the relatively small size of the targets to be extracted and the segmentation effect, newly added construction land larger than 10 pixels is divided, in the small-scale image object layer segmented at scale 10, into six major classes: newly added red roof house, newly added blue roof house, newly added green roof house, newly added gray roof house, newly added highlight surface, and newly added other construction land. The newly added blue roof houses and newly added agricultural facilities extracted in the large-scale-parameter layer are retained (only where they do not already contain the six classes above), ensuring that the extracted vegetation-to-construction-land changes are complete and do not include large unchanged parts within a patch. Combining the pixel-level and object-level feature sets, the optimal spatial features and thresholds of each target land class are determined by the sample method, and the newly added construction land is extracted.
S207: and extracting the newly added construction land at the small-scale parameter image object layer.
Specifically, in the small-scale-parameter image object layer, whether the land use type of an object in the pre phase and post phase is a target type is determined from its pre-phase and post-phase spatial features, and from this it is determined whether the object is newly added construction land. The target types comprise newly added red roof house, newly added blue roof house, newly added green roof house, newly added gray roof house, newly added highlight surface, and newly added other construction land. Before these target types are extracted, the small-scale objects corresponding to objects determined as newly added blue roof houses or newly added agricultural facilities in the large-scale layer are assigned those classes, and the remaining objects are left unclassified.
(1) Extracting newly added red roof house
Specifically, whether a post-phase small-scale object in the suspected-construction region of newly changed vegetation is a newly added red roof house is determined by comparing the post-phase a layer mean (Mean hsx_a) and pre-phase a layer mean (Mean qsx_a) of each unclassified object with the red roof house thresholds.
The red roof room threshold is the minimum value on the back phase a layer and the maximum value on the front phase a layer of the newly added red roof room sample.
In the implementation, red roof houses show obvious highlight features on the a layer. About 300 red roof house samples are selected in advance for feature screening and determination of the feature value ranges, and the red roof houses are finally extracted to the greatest extent through the screened a band mean (Mean a) and brightness mean (Mean L). First, the objects are threshold-segmented on the post-phase a layer with a threshold of 156; then the extraction threshold is determined by gradually approaching the optimal threshold interval, and the red roof houses are extracted within that interval.
Specifically, objects that are red roof houses in both phases are first excluded: unclassified objects satisfying "Mean hsx_a > 159 and Mean qsx_a > 159" are assigned to the "L2_qsx_hsx_both red roof house" class. Then, newly added red roof houses are extracted from the unclassified objects under the following constraints:
condition 1: mean hsx_a >156and Mean qsx_a<153and Mean hsx_L > =184
Condition 2: mean hsx_a >160and Mean qsx_a<153and Rel.border to L2 newly added red roof house >0.01
(2) Extracting newly added blue roof house
Because some small newly added blue roof houses may not have been segmented out in the large-scale-parameter image object layer, the areas outside the newly added blue roof houses and newly added agricultural facilities already extracted from that layer need to be extracted again in the small-scale-parameter image object layer.
Specifically, in the small-scale-parameter image object layer, whether a post-phase small-scale object in the suspected-construction region of newly changed vegetation is a newly added blue roof house is determined from the post-phase b layer mean of each unclassified object and the difference between its post-phase and pre-phase b layer means, compared with the blue roof house threshold. The blue roof house threshold here is the same as that in step S206 (1).
In the implementation, objects that are blue roof houses in both phases are first excluded: unclassified objects satisfying "Mean hsx_b < 138 and Mean qsx_b < 138" are assigned to the "L2_qsx_hsx_both blue roof house" class. Then, newly added blue roof houses are extracted: unclassified objects satisfying any of the following constraints are marked as potential newly added blue roof houses:
condition 1: mean hsx_b <138and Mean qsx_b>150and Mean hsx_L>188
Condition 2: mean hsx_b <131and Mean qsx_b>145and Mean hsx_L>188
Condition 3: mean hsx_b <130and Mean qsx_b>140and Mean hsx_L>188and Mean qsx_L<180
Condition 4: mean hsx_b <135and Mean qsx_b>140and Mean hsx_L>188and Mean qsx_L<180and Rel.border to L2 newly added blue top house >0
Condition 5: mean hsx_b <136and Mean qsx_b>140and Mean hsx_L>188
Condition 6: mean hsx_b <137and Mean qsx_b>145and Mean hsx_L>195
Condition 7: mean hsx_b <130and Mean qsx_b>140and Mean hsx_L>170
(3) Extracting newly-increased green roof house
Specifically, an object is first determined not to have been a green roof house in the pre phase by comparing its pre-phase a and b layer means with the pre-phase green roof a and b thresholds; whether it is a newly added green roof house is then determined by comparing its post-phase a and b layer means with the post-phase green roof a and b thresholds.
The pre-phase green roof a threshold is the maximum of the pre-phase green roof samples on the pre-phase a layer, and the pre-phase green roof b threshold is their maximum on the pre-phase b layer; the post-phase green roof a threshold is the maximum of the post-phase green roof samples on the post-phase a layer, and the post-phase green roof b threshold is their maximum on the post-phase b layer.
In the specific implementation, since green roof houses are extracted from the cultivated land of the basis vector and are few in number, extraction can be carried out either by determining the green roof thresholds from green roof samples (the sample-based feature and threshold method) or by a direct constraint description. That is, objects satisfying "Mean hsx_b < 149 and Mean hsx_a < 138 and Mean qsx_b > 153 and Mean qsx_a > 145" are marked as "newly added green roof house". After merging the newly added green roof objects, the extracted objects can be regularized according to patch size.
(4) Extracting newly added gray roof houses
Since the target is gray and has no obvious color features, gray roof houses must be judged by the shadows of the houses.
Specifically, objects whose post-phase L layer mean is below a specified threshold are assigned to the shadow class. For an unclassified object whose post-phase appearance is gray, whether it is a newly added gray roof house is determined by comparing the overlap index between the object, shifted north by a specified number of pixels, and the shadows against the gray-roof shadow-overlap threshold.
The gray top house shadow overlapping threshold value is a preset overlapping index parameter, and the gray is determined in a mode that an object a average value is in a certain interval and a b average value is in a certain interval range;
In the specific implementation process, because gray roof houses are mainly distributed around agricultural and forestry land, the extraction range of the gray roof houses is determined directly on the basic vector: objects whose thematic layer attribute satisfies "DLMC Thematic Layer 1 = 'dry land'" or "DLMC Thematic Layer 1 = 'irrigated land'" or "DLMC Thematic Layer 1 = 'facility agricultural land'" or "DLMC Thematic Layer 1 = 'forest land'" are assigned to "L2_unclassified" to delimit the search range for gray roof houses.
By means of the positional relation between a house and its building shadow to the northwest, hsx_shadow is first distinguished according to the hsx_L value; then an overlap relationship with hsx_shadow after a northward shift is defined, i.e. the overlap of the object, after moving 3 pixels north, with hsx_shadow: shift_north_overlap_hsx_shadow = Overlap of object with other (class = hsx_shadow, shift = 0x3x0x0, mode = Relative to larger object [0..1]). On this basis, objects in the gray roof house search range satisfying "Mean hsx_a > 143 and Mean hsx_a < 148 and Mean hsx_b > 148 and Mean hsx_b < 154 and Mean hsx_L > 190 and Mean hsx_L < 205 and shift_north_overlap_hsx_shadow > 0.001" are assigned as "L2_small gray roof house".
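The shift-and-overlap feature can be sketched on binary raster masks. This is a minimal NumPy stand-in for the relational feature named in the text, assuming north corresponds to decreasing row index and the "relative to larger object" mode divides by the larger of the two object areas:

```python
import numpy as np

# Overlap of an object mask, shifted north by `shift` pixels, with a shadow
# mask, relative to the larger of the two objects (assumed reading of the
# "Relative to larger object [0..1]" mode; masks and grid are illustrative).
def shift_north_overlap(obj_mask, shadow_mask, shift=3):
    shifted = np.zeros_like(obj_mask)
    shifted[:-shift, :] = obj_mask[shift:, :]   # move the object `shift` rows up (north)
    inter = np.logical_and(shifted, shadow_mask).sum()
    larger = max(obj_mask.sum(), shadow_mask.sum())
    return inter / larger if larger else 0.0
```

A candidate gray roof whose shadow lies 3 pixels to its north then yields a value above the 0.001 threshold used in the constraint.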
(5) Extracting the newly added highlight surface
Specifically, in the small-scale parameter image object layer, whether an unclassified object is a newly added highlight surface is determined from the comparison of its mean on the front time phase L layer with the front time phase highlight surface threshold, and the comparison of its mean on the rear time phase L layer with the rear time phase highlight surface threshold.
The front time phase highlight surface threshold is the maximum value of the newly added highlight surface samples on the front time phase L layer, and the rear time phase highlight surface threshold is the minimum value of the newly added highlight surface samples on the rear time phase L layer;
In the implementation process, objects that are highlight surfaces in both the front and rear time phases are first excluded: objects satisfying the constraint "Mean hsx_L > 215 and Mean qsx_L > 215" are assigned as "L2_qsx_hsx_both highlight surface". Then, based on the rear time phase L layer, the maximum inter-class variance method determines an image segmentation threshold of 219, and threshold segmentation is applied to re-segment the objects, separating the highlight surface from other ground features as far as possible.
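The "maximum inter-class variance method" is the Otsu threshold. As a sketch under the assumption of an 8-bit rear time phase L layer, the threshold can be computed in pure NumPy (the exact value 219 in the text depends on the actual image histogram):

```python
import numpy as np

# Otsu's method: pick the threshold t maximizing the between-class variance
# w0 * w1 * (mu0 - mu1)^2 over the 8-bit histogram of the L layer.
def otsu_threshold(values):
    hist = np.bincount(np.asarray(values).ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```

On a strongly bimodal L histogram, the returned threshold falls between the two modes, which is what lets the highlight surface separate from darker ground features.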
Next, sample feature statistics are computed on the selected highlight surface samples; for extracting the newly added highlight surface, the optimal spatial features are the L layer mean, the b layer mean, and the layer mean after Lee Sigma edge detection. The newly added highlight surface is preliminarily determined through the thresholds of these layers on the front and rear time phases, with the following specific constraints:
Mean hsx_L > 218 and Mean qsx_L < 201
Mean hsx_L > 202 and Mean qsx_L < 201 and Mean hsx_lee_sigma > 5
Mean hsx_L > 202 and Mean qsx_L < 203 and Mean hsx_lee_sigma > 7
After the newly added highlight surface is preliminarily determined, part of the missed newly added highlight surface is recovered through the neighborhood relation between surrounding unclassified ground features and the newly added highlight surface, so that the extraction rate of the newly added highlight surface is ensured as far as possible. The specific constraints are as follows:
① Mean hsx_L > 200 and Mean qsx_L < 195 and Mean hsx_lee_sigma > 5.5 and Mean qsx_b > 157 and Mean hsx_b < 156 and Mean diff. to hsx_L, unclassified > 10
② Mean hsx_L > 212 and Mean diff. to hsx_L, unclassified > 15
After the region of the "newly added highlight surface" is determined, it is refined according to the geometric features of the extracted objects (compactness, shape index, length/width, etc.) and their spatial relationship with surrounding adjacent objects, excluding objects that satisfy the following constraints:
Compactness > 2 and Rel. border to L2_qsx_hsx_both highlight surface > 0.1
Shape index > 2.5 and Mean diff. to qsx_L, unclassified > 3 and Length/Width < 5
p_a_rate > 0.7 or p_a_rate > 0.1 and Rel. border to L2_qsx_hsx_both highlight surface > 0.1
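The "Mean diff. to …, unclassified" features used above measure how much an object's brightness differs from its surroundings. A minimal sketch, assuming the feature means the object's mean L minus the mean L of unclassified pixels touching the object (the patent computes this as a relational feature between segmented objects; the helper name and 4-neighbour adjacency here are illustrative):

```python
import numpy as np

# Object mean L minus the mean L of unclassified pixels adjacent to the
# object (assumed reading of "Mean diff. to hsx_L, unclassified").
def mean_diff_to_unclassified(L, obj_mask, unclassified_mask):
    # one-step 4-neighbour dilation of the object mask, pure NumPy
    d = obj_mask.copy()
    d[1:, :] |= obj_mask[:-1, :]; d[:-1, :] |= obj_mask[1:, :]
    d[:, 1:] |= obj_mask[:, :-1]; d[:, :-1] |= obj_mask[:, 1:]
    ring = d & ~obj_mask & unclassified_mask
    if not ring.any():
        return 0.0
    return float(L[obj_mask].mean() - L[ring].mean())
```

A bright patch surrounded by darker unclassified farmland yields a large positive value, matching the "> 10" and "> 15" thresholds in the recovery constraints.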
(6) Extracting newly added other construction land
Specifically, in the small-scale parameter image object layer, whether an unclassified object is newly added other construction land is determined from four comparisons: its standard deviation on the front time phase L layer against the other-construction-land front time phase L standard deviation threshold; its standard deviation on the rear time phase L layer against the other-construction-land rear time phase L standard deviation threshold; its mean on the rear time phase a layer against the other-construction-land rear time phase a threshold; and its elongated index against the elongated index threshold.
The other-construction-land front time phase L standard deviation threshold is the maximum L standard deviation of the newly added other construction land samples in the front time phase; the rear time phase L standard deviation threshold is the minimum L standard deviation of those samples in the rear time phase; the rear time phase a threshold is the minimum a mean of those samples in the rear time phase; and the elongated index threshold is the maximum elongated index of the newly added other construction land samples.
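The sample-extremum thresholding described here reduces to taking maxima and minima over per-sample feature values. A minimal sketch (the dictionary keys are illustrative names, not identifiers from the patent):

```python
import numpy as np

# Thresholds for "newly added other construction land" derived from sample
# extrema exactly as the text describes: max/min over the sample objects.
def other_construction_thresholds(samples):
    return {
        "qsx_L_std_max":  float(np.max(samples["qsx_L_std"])),   # front phase L std, maximum
        "hsx_L_std_min":  float(np.min(samples["hsx_L_std"])),   # rear phase L std, minimum
        "hsx_a_mean_min": float(np.min(samples["hsx_a_mean"])),  # rear phase a mean, minimum
        "p_a_rate_max":   float(np.max(samples["p_a_rate"])),    # elongated index, maximum
    }
```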
In the specific implementation process, because the number of newly added other construction land objects is small, besides determining thresholds by the sample-extremum method and then extracting, extraction can also be performed by directly describing constraints. Specifically, an object satisfying "Standard deviation qsx_L > 12 and Standard deviation hsx_L < 6 and Mean hsx_a > 148 and p_a_rate <= 0.4" is assigned as newly added other construction land.
At this point, some non-newly-added construction land may be mixed into the extracted results. To ensure the accuracy of newly added construction land monitoring, the extracted newly added construction land needs to be refined and optimized.
Specifically, non-target objects in the small-scale parameter image object layer are deleted based on the elongated index of the patches in that layer, where a non-target object is an object that does not correspond to newly added construction land.
(1) Optimizing the newly added blue roof house (for the large-scale parameter segmentation object layer)
Feature inspection shows that the newly added blue roof house results contain narrow strips and small-area irregular objects, so an "elongated index" feature is constructed: p_a_rate = ([Border length] - [Area]) / [Area], to describe narrow strip-shaped objects and thereby optimize the newly added blue roof house. The newly added blue roof house is cleaned up through spatial logic and constructed geometric feature indices such as p_a_rate; within the "newly added blue roof house" set, "non-newly-added blue roof house" objects are described as follows:
p_a_rate > 0.7
Area < 50 Pxl and Mean hsx_L < 190
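On a binary raster object, the elongated index can be sketched directly from the formula above, assuming "Border length" counts pixel edges between the object and its outside (the usual object-based convention; the mask representation is illustrative):

```python
import numpy as np

# Elongated index p_a_rate = (border length - area) / area on a binary mask,
# with border length counted as exposed 4-neighbour pixel edges.
def p_a_rate(mask):
    mask = mask.astype(bool)
    area = mask.sum()
    border = 0
    padded = np.pad(mask, 1)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        # count object pixels whose neighbour in this direction is outside
        border += (padded & ~np.roll(padded, shift, axis=(0, 1))).sum()
    return (border - area) / area
```

A 1 x 20 pixel strip gives (42 - 20) / 20 = 1.1, caught by the "p_a_rate > 0.7" exclusion, while a 10 x 10 square gives a negative value and is kept.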
(2) Optimizing vegetation changed to suspected construction land
Removal relies mainly on geometric features and rear time phase color features, as follows:
Objects satisfying "Area < 100 Pxl or p_a_rate > 0.5 or Mean hsx_L < 187" are excluded.
Objects satisfying "Mean hsx_L < 193 and Mean hsx_a < 147 and Mean hsx_b < 159" are excluded.
(3) Optimizing the newly added agricultural facilities
Exclusion is based mainly on object geometric features, edge features, density features, brightness features, and the like, as follows:
Objects in the "newly added agricultural facility" class satisfying "Mean qsx_L_canny > 0.15" are excluded.
Objects in the "newly added agricultural facility" class satisfying "Area < 2000 Pxl and Number of L1_newly added agricultural facility (500) < 2" are excluded.
"Newly added agricultural facility" objects satisfying "Mean hsx_L < 190" are excluded.
Agricultural facility land shows clear edges in the image texture; combining this with its color features, "newly added agricultural facility" objects satisfying "Mean hsx_L_canny < 0.03 and Mean hsx_b > 160" are excluded, as are those satisfying "Area > 1000 Pxl and Mean hsx_b > 160".
Adjacent "newly added agricultural facility" class objects are then merged.
(4) Optimizing newly added red roof house
Because red roof house identification is easily disturbed by dark red ground features on the a layer, objects mistakenly extracted due to such interference need to be removed from the extracted newly added red roof houses, for example objects in cultivated land satisfying the constraint "Mean hsx_a < 161 and Mean diff. to hsx_L, L2 cultivated land < 6". Other exclusion constraints are as follows:
Mean hsx_L < 184 or p_a_rate > 0.5 or Mean qsx_L > 212
Mean hsx_a < 167 and Rel. border to L2_qsx_hsx_both red roof house > 0
p_a_rate > 0.2 and Mean diff. to hsx_a, unclassified < 6
Mean diff. to hsx_a, unclassified < 5
Mean qsx_L > 205 and Mean qsx_a > 145
Finally, special exclusions are made in combination with the original land class attribute in the thematic vector, relying mainly on descriptive features of the base classes: when an object originally of the village and town building class satisfies the condition "Standard deviation qsx_L > 10 or Mean hsx_L < 190", it is not a newly added red roof house.
(5) Optimizing the newly added blue roof house (for the small-scale parameter segmentation object layer)
Among the potential newly added blue roof houses, the constraints for removing interfering ground feature objects are as follows:
Rel. border to L2_qsx_hsx_both blue roof house > 0 and p_a_rate >= -0.4
p_a_rate > 0.7
Area < 50 Pxl and Mean hsx_L < 190
(6) Optimizing the newly added gray roof house
Objects are eliminated by combining their size features, brightness features, color difference features between the front and rear time phases, and the like, as follows:
Objects satisfying "Area < xiaotuban or Mean qsx_L < 180" are excluded.
Objects satisfying "abs_a_diff < 5 and abs_b_diff < 5 and abs_L_diff < 5" are excluded.
Here xiaotuban is a preset parameter representing a small-patch area threshold.
(7) Optimizing the newly added highlight surface
Using the constructed x_interval and y_interval feature combination, together with features such as the border relation to "L2_qsx_hsx_both highlight surface" objects and the brightness difference from surrounding unclassified objects, newly added highlight surfaces satisfying the following conditions are excluded:
Objects in the "newly added highlight surface" satisfying "y_interval < 5 and Rel. border to L2_qsx_hsx_both highlight surface > 0" are excluded.
Objects in the "newly added highlight surface" satisfying "x_interval < 5 and Rel. border to L2_qsx_hsx_both highlight surface > 0" are excluded.
Objects in the "newly added highlight surface" satisfying "x_interval < 4 and Mean diff. to hsx_L, unclassified < 10" are excluded.
Objects in the "newly added highlight surface" satisfying "y_interval < 4 and Mean diff. to hsx_L, unclassified < 10" are excluded.
S208: and integrating the multi-dimensional extraction results.
To ensure the integrity of the extracted objects to the greatest extent, the results extracted in the preceding parts need to be integrated. Specifically, the extracted patches of each newly added construction land category are buffered by a preset number of pixels, and overlapping buffered patches are merged. For example, the newly added blue roof house, newly added red roof house, newly added gray roof house, newly added green roof house, newly added highlight surface, newly added other construction land, and newly added vegetation changed to suspected construction land extracted in the two dimensions are each buffered by 4 pixels and then merged. In this way the problem of dense patches is alleviated and "islands" inside patches are removed.
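The buffer-and-merge step can be sketched on a binary raster as a 4-pixel binary dilation followed by connected-component merging. This is a pure NumPy/stdlib stand-in for the GIS buffering the patent performs on vector patches; adjacency and buffer shape (4-neighbour, diamond) are simplifying assumptions:

```python
import numpy as np
from collections import deque

# 4-neighbour binary dilation by `px` pixels (the "buffer").
def dilate(mask, px=4):
    out = mask.copy()
    for _ in range(px):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]; grown[:-1, :] |= out[1:, :]
        grown[:, 1:] |= out[:, :-1]; grown[:, :-1] |= out[:, 1:]
        out = grown
    return out

# Count patches after buffering: patches whose buffers touch merge into one
# connected component (BFS flood fill, 4-adjacency).
def count_merged_patches(mask, px=4):
    grown = dilate(mask, px)
    seen = np.zeros(grown.shape, bool)
    h, w = grown.shape
    n = 0
    for i in range(h):
        for j in range(w):
            if grown[i, j] and not seen[i, j]:
                n += 1
                q = deque([(i, j)]); seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and grown[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return n
```

Two patches separated by a few pixels remain separate without buffering but merge into one patch after the 4-pixel buffer, which is how the dense-patch and "island" cases are resolved.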
And outputting the integrated extraction result so as to realize the monitoring of the newly added construction land.
As can be seen from the above, the newly added construction land monitoring method provided by the embodiment of the application addresses the diversity of newly added construction land by refining its classification and extracting each category through its own description, keeping the omission rate of newly added construction land stably within 1%. Secondly, an object elongated index suited to describing narrow strip-shaped ground features is proposed, which works well for excluding such objects. Thirdly, whether an object's color features change between the front and rear time phase images, and its degree of difference from surrounding objects, are described directly with the L, a, and b component values; compared with methods that construct feature images, this is more stable, more universal, and more extensible, and it also avoids the strong sample dependence of deep learning methods. Fourthly, on the basis of segmenting and merging image layer objects, features such as object density, the mean on the Canny layer, and brightness contrast are used to identify and reject the main interference class for newly added construction land, namely newly added agricultural facilities (multi-span greenhouses or small and medium-sized greenhouses), improving the extraction precision of the target ground features.
Based on the same inventive concept, as an implementation of the method, the embodiment of the application also provides a newly added construction land monitoring device. Fig. 3 schematically illustrates a block diagram of a newly added construction land monitoring device, as shown in fig. 3, which may include:
a receiving module 301, configured to receive a front time phase true color orthographic image, a rear time phase true color orthographic image, and a base vector, where the front and rear time phase true color orthographic images are true color orthographic images of the same region at different time phases, and the base vector characterizes the land use type of each region at the front time phase image acquisition time or at a time somewhat earlier than that acquisition time;
the conversion module 302 is configured to convert the front-phase true color orthographic image and the rear-phase true color orthographic image from an RGB color space to a Lab color space, to obtain a front-phase Lab image and a rear-phase Lab image;
the conversion module 302 is configured to convert the base vector into a base image object layer, where each vector image patch corresponds to one image object, and each image object inherits all attribute information of the corresponding vector image patch;
The segmentation module 303 is configured to segment the front time phase Lab image and the rear time phase Lab image with small scale parameters under the constraint of the base image object layer to obtain a small-scale parameter image object layer; to copy the small-scale parameter image object layer and, with large scale parameters, merge according to a homogeneity rule those objects whose front and rear time phase Lab feature differences from surrounding objects are smaller than a defined threshold, to obtain a large-scale parameter image object layer; the objects in the large-scale parameter image object layer are obtained by merging objects in the small-scale parameter image object layer according to the homogeneity rule, and the boundary of a merged image object does not exceed the boundary of the corresponding object in the base image object layer;
the feature calculation module 304 is configured to extract spatial features of objects in the large-scale parameter image object layer and the small-scale parameter image object layer respectively;
the monitoring module 305 is configured to determine whether the land utilization type of the object in the small-scale parameter image object layer is changed from a non-construction land to a construction land based on the spatial feature of the object in the large-scale parameter image object layer and the spatial feature of the object in the small-scale parameter image object layer, so as to monitor the newly added construction land.
Based on the foregoing embodiment, the conversion module is configured to convert the front-phase true color orthographic image and the rear-phase true color orthographic image from an RGB color space to an XYZ color space, to obtain a front-phase transition image and a rear-phase transition image;
and respectively performing standard CIE Lab transformation on the front and rear time phase transition images, then adding preset offset parameters to the transformed layers and applying a round-down (floor) operation to obtain the front time phase Lab image and the rear time phase Lab image, whose value ranges match the 8-bit value range.
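The conversion chain can be sketched end to end. This is a minimal version under assumed preset parameters (scale L by 255/100 and shift a and b by +128, the common convention for fitting Lab into 8 bits; the patent does not state its exact offsets), with sRGB primaries and a D65 white point:

```python
import numpy as np

# RGB -> XYZ -> CIE Lab, then offset + floor into 8-bit layers.
# Assumed offsets: L * 255/100, a + 128, b + 128 (illustrative, not from the patent).
def rgb_to_lab8(rgb):
    c = rgb.astype(float) / 255.0
    c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)  # sRGB linearisation
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = c @ M.T
    xyz /= np.array([0.95047, 1.0, 1.08883])          # D65 reference white
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16 / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    lab = np.stack([L * 255 / 100, a + 128, b + 128], axis=-1)
    return np.clip(np.floor(lab), 0, 255).astype(np.uint8)  # floor into 8-bit range
```

Black maps to L = 0 with a and b centred at 128, and white maps to L = 255, so the three layers share the 8-bit range the segmentation steps assume.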
Based on the foregoing embodiment, the apparatus further includes: and the data preparation module is used for removing the region where the newly added construction land does not appear in the working region for obtaining the pre-time phase Lab image and the post-time phase Lab image based on the basic vector to obtain a basic region.
Based on the foregoing embodiment, the segmentation module is configured to segment the front-time Lab image and the back-time Lab image with different scale parameters based on a region merging algorithm with minimal regional heterogeneity, so as to obtain the small-scale parameter image object layer and the large-scale parameter image object layer.
Based on the foregoing embodiment, the segmentation module is configured to segment the front-time Lab image and the back-time Lab image with small scale parameters within the basic region under the constraint of the basic image object layer, so as to obtain a small scale parameter image object layer;
copying the small-scale parameter image object layer;
and merging the front time phase Lab image and the rear time phase Lab image on the copied small-scale parameter image object layer by adopting a region merging algorithm based on the minimum regional heterogeneity to obtain the large-scale parameter image object layer.
Based on the foregoing embodiment, the feature calculation module is configured to extract spatial features of all objects in the large-scale parameter image object layer and all objects in the small-scale parameter image object layer respectively.
Based on the foregoing embodiment, the objects in the large-scale parameter image object layer are formed by merging multiple objects in the small-scale parameter image object layer; an object in the large-scale parameter image object layer is the parent object of the corresponding objects in the small-scale parameter image object layer, and an object in the small-scale parameter image object layer is a child object of the corresponding object in the large-scale parameter image object layer;
The monitoring module is used for judging that the object in the large-scale parameter image object layer belongs to a newly added blue top room if the object in the large-scale parameter image object layer is determined to be a blue top room on a rear time phase and not a blue top room on a front time phase based on the spatial characteristics of the object in the large-scale parameter image object layer;
if the object in the large-scale parameter image object layer is determined to be not in a vegetation state at a later time phase, the texture in the object is not smooth, and the object in the large-scale parameter image object layer is determined to be in a vegetation state at a front time phase, and the object in the large-scale parameter image object layer belongs to newly-increased vegetation and becomes a suspected construction land;
if it is determined from the base vector attributes that an object in the large-scale parameter image object layer may belong to a newly added agricultural facility category, the objects are dense, the brightness in the rear time phase is obviously higher than the surroundings, and the brightness difference from surrounding objects in the front time phase is smaller than a preset value, the object is determined to belong to the newly added agricultural facilities;
in the small-scale parameter image object layer, an object with a parent object class of a newly added blue top room is assigned as a newly added blue top room, an object with a parent object class of a newly added agricultural facility is assigned as a newly added agricultural facility, and the other object classes are kept unclassified;
Based on the spatial characteristics of the objects in the small-scale parameter image object layer, combining with attribute information of land utilization types inherited by the objects from the basic vector, if the land utilization types of the other objects in the front time phase are determined not to belong to the construction land, and the land utilization types in the rear time phase are red roof, blue roof, green roof, gray roof, highlight earth surface or other construction lands, the corresponding object types are respectively determined as a newly added red roof, a newly added blue roof, a newly added green roof, a newly added gray roof, a newly added highlight earth surface and newly added other construction lands; the determined newly added red roof house, newly added blue roof house, newly added green roof house, newly added grey roof house, newly added highlight ground surface and newly added other construction land are collectively called as newly added construction land;
in the large-scale parameter image object layer, if the object type is that the newly-increased vegetation is changed into the suspected construction land and the newly-increased construction land sub-object is not included, all the sub-object types corresponding to the object type are assigned to the newly-increased vegetation to be changed into the suspected construction land.
Based on the foregoing embodiment, the monitoring module is configured to determine, in the large-scale parametric image object layer, whether the object is a blue top room in the post-time phase Lab image according to a comparison result of a mean value of a b layer in the post-time phase Lab image and a blue top room b threshold; comparing the difference value of the average value of b layers in the back time-phase Lab image and the average value of b layers in the front time-phase Lab image with a blue change threshold value to determine whether the object is a blue roof room in the front time-phase Lab image; the object which is not a blue top room in the back time phase Lab image and is not a blue top room in the front time phase Lab image is a newly added blue top room; the threshold value of the blue top room b is the maximum value of the average value of the layer b of the post-time phase b of the samples of the blue top room; the blue change threshold is the maximum value of the difference between the back time phase b layer average value and the front time phase b layer average value in all newly added blue top room samples, and is a negative number;
In the large-scale parameter image object layer, according to the average value of an a layer in the front time-phase Lab image, the average value of an a layer in the rear time-phase Lab image and the edge characteristic of an L layer in the rear time-phase Lab image of the object, respectively comparing the average value of the a layer in the front time-phase Lab image, the average value of the a layer in the rear time-phase Lab image and the edge characteristic of the L layer in the rear time-phase Lab image with the comparison result that the corresponding vegetation becomes a suspected construction land threshold value, determining whether the object becomes the suspected construction land, wherein the corresponding vegetation becomes the suspected construction land threshold value is the maximum value of the front time-phase a layer, the minimum value of the rear time-phase a layer and the minimum value of the edge characteristic of the rear time-phase L layer in the sample object of the suspected construction land;
in the large-scale parameter image object layer, determining a range for checking newly added agricultural facilities according to the basic vector, and determining whether the object is newly added agricultural facilities according to a comparison result of the density of the object and the density threshold value of the agricultural facilities and a comparison result of a difference value of an object on a front time phase L image layer mean value and a front time phase L difference threshold value of the agricultural facilities and a difference value of an object on a rear time phase L image layer mean value and a surrounding adjacent object on a rear time phase L image layer mean value and a rear time phase L difference threshold value of the agricultural facilities in the range; the agricultural facility density threshold is determined based on a density minimum of the newly added agricultural facility sample object, the pre-agricultural facility time phase L difference threshold is determined based on a maximum value of a difference value between the newly added agricultural facility sample object and an adjacent object on a pre-time phase L layer mean value, and the post-agricultural facility time phase L difference threshold is determined based on a minimum value of a difference value between the newly added agricultural facility sample object and an adjacent object on a post-time phase L layer mean value;
In the small-scale parameter image object layer, determining whether the object is a red roof room in the later time phase according to a comparison result of the average value of the image layer of the unclassified object in the later time phase a and the red roof room threshold of the later time phase; for the object with the rear time phase being the red roof room, determining whether the object is the red roof room in the front time phase according to the comparison result of the average value of the a layer of the object in the front time phase and the threshold value of the red roof room in the front time phase; the back time phase is the red roof room and the front time phase is not the object of the red roof room, namely the newly added red roof room; the front time phase red roof room threshold is the maximum value of all newly added red roof room samples on the front time phase a layer average value, and the rear time phase red roof room threshold is the minimum value of all newly added red roof room samples on the rear time phase a layer average value;
in the small-scale parameter image object layer, for an unclassified object with a post-time phase L image layer average value larger than a post-time phase blue top room brightness threshold value, determining whether the object is a blue top room at the post-time phase according to a comparison result of the average value of the object in the post-time phase b image layer and the post-time phase blue top room threshold value; for the object with the rear time phase being the blue top room, determining whether the object is the blue top room or not in the front time phase according to the comparison result of the average value of the b image layers of the object in the front time phase and the threshold value of the blue top room in the front time phase; the posterior time phase is the object of the blue top room and the anterior time phase is not the object of the blue top room, namely the newly added blue top room; the brightness threshold of the post-time-phase blue top room is the minimum value of the average value of the post-time-phase L layers of all newly added blue top room samples, the threshold of the pre-time-phase blue top room is the minimum value of the average value of the pre-time-phase b layers of all newly added blue top room samples, and the threshold of the post-time-phase blue top room is the maximum value of the average value of the post-time-phase b layers of all newly added blue top room samples;
In the small-scale parameter image object layer, for an object whose class is unclassified, whether it is a newly added green roof house is determined from the comparison of its means on the front time phase a and b layers with the front time phase green roof house a and b thresholds, and the comparison of its means on the rear time phase a and b layers with the rear time phase green roof house a and b thresholds; the front time phase green roof house a threshold is the maximum value of the front time phase green roof house samples on the front time phase a layer, the front time phase green roof house b threshold is their maximum value on the front time phase b layer, the rear time phase green roof house a threshold is the maximum value of the rear time phase green roof house samples on the rear time phase a layer, and the rear time phase green roof house b threshold is their maximum value on the rear time phase b layer;
the object with the later time phase L layer average value smaller than the specified threshold value is assigned to be a shadow type; for unclassified objects with gray later time phases and gray earlier time phases, determining whether the objects are newly added grey top rooms according to the comparison result of the overlapping indexes of the objects and shadows after shifting to north and the grey top room shadow overlapping threshold value, wherein the grey top room shadow overlapping threshold value is a preset overlapping index parameter, and the gray is determined in a mode that an average value of the objects is located in a preset interval and a b average value is located in a preset interval range;
In the small-scale parameter image object layer, whether an unclassified object is a newly added highlight surface is determined according to the comparison result of its mean value in the front time phase L image layer with the front time phase highlight surface threshold and the comparison result of its mean value in the rear time phase L image layer with the rear time phase highlight surface threshold; the front time phase highlight surface threshold is the maximum value of the newly added highlight surface samples' mean values in the front time phase L image layer, and the rear time phase highlight surface threshold is the minimum value of the newly added highlight surface samples' mean values in the rear time phase L image layer;
in the small-scale parameter image object layer, whether an unclassified object is newly added construction land is determined according to the comparison result of its standard deviation in the front time phase L image layer with the newly added construction land front time phase L standard deviation threshold, the comparison result of its standard deviation in the rear time phase L image layer with the newly added construction land rear time phase L standard deviation threshold, and the comparison result of its mean value in the rear time phase a image layer with the newly added construction land rear time phase a threshold; the newly added construction land front time phase L standard deviation threshold is the maximum value of the front time phase L standard deviations of the newly added construction land samples, the newly added construction land rear time phase L standard deviation threshold is the minimum value of the rear time phase L standard deviations of the newly added construction land samples, and the newly added construction land rear time phase a threshold is the minimum value of the rear time phase a image layer mean values of the newly added construction land samples;
In the small-scale parameter image object layer, an object that is still unclassified and whose parent object class is newly added vegetation changed to suspected construction land is also assigned to the class of newly added vegetation changed to suspected construction land.
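The sample-derived thresholds used throughout these rules follow one pattern: take the minimum of the sample means where the target class is expected to exceed its surroundings, and the maximum where it is expected to fall below. A minimal sketch for the blue top room rule, using hypothetical field names for the per-object layer means (the patent does not prescribe a data layout):

```python
def blue_roof_thresholds(samples):
    """Derive the three blue top room thresholds from newly added
    blue top room sample objects (each a dict of per-object layer means)."""
    return {
        "post_L_brightness": min(s["post_L_mean"] for s in samples),  # min rear-phase L mean
        "pre_b": min(s["pre_b_mean"] for s in samples),               # min front-phase b mean
        "post_b": max(s["post_b_mean"] for s in samples),             # max rear-phase b mean
    }

def is_new_blue_roof(obj, thr):
    """Blue in the rear phase (bright, low b) but not blue in the front phase."""
    blue_post = (obj["post_L_mean"] > thr["post_L_brightness"]
                 and obj["post_b_mean"] <= thr["post_b"])
    blue_pre = obj["pre_b_mean"] < thr["pre_b"]
    return blue_post and not blue_pre
```

Since blue corresponds to low b in Lab, the rear-phase b threshold is an upper bound (the samples' maximum), while the front-phase b threshold is a lower bound the object must stay above to count as "not blue before".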
Based on the foregoing embodiment, the spatial features include a patch elongation index, which is the ratio of perimeter to area, both expressed in numbers of pixels;
the apparatus further comprises: an optimization module, configured to delete non-target objects in the small-scale parameter image object layer based on the patch elongation index of the small-scale parameter image object layer when it is determined that the land utilization type of the small-scale parameter image object layer is changed from a non-construction land to a construction land, wherein the non-target objects are objects that do not correspond to genuinely newly added construction land.
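The elongation filter can be sketched as follows; the cutoff value is a hypothetical parameter, and perimeter and area are assumed to be pre-computed pixel counts per object:

```python
def elongation_index(perimeter_px: int, area_px: int) -> float:
    """Patch elongation index: perimeter over area, both counted in pixels.
    Long, narrow patches (roads, field edges) score high; compact,
    building-like patches score low."""
    return perimeter_px / area_px

def drop_elongated(objects, max_index: float = 0.8):
    """Keep only compact patches; highly elongated patches are unlikely
    to be genuinely newly added construction land."""
    return [o for o in objects if elongation_index(o["perimeter"], o["area"]) <= max_index]
```

For example, a 10x10-pixel square patch scores about 0.4, while a 1x50-pixel strip scores about 2, so a cutoff near 0.8 separates the two cleanly.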
Based on the foregoing embodiment, the apparatus further includes: a merging module, configured to buffer the extracted patches of the various newly added construction land categories by a preset distance, and to merge buffered patches that overlap.
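On a raster mask, the buffer-then-merge step can be approximated by morphological dilation: patches whose gap is smaller than roughly twice the buffer distance grow into one connected patch. A minimal numpy sketch (the 8-neighbour structuring element and the buffer distance are assumptions; a vector implementation would buffer and union polygons instead):

```python
import numpy as np

def buffer_and_merge(mask: np.ndarray, buffer_px: int) -> np.ndarray:
    """Buffer patches by buffer_px pixels via repeated 8-neighbour dilation,
    so patches closer than about 2 * buffer_px merge into one region."""
    out = mask.copy()
    for _ in range(buffer_px):
        padded = np.pad(out, 1)
        # 8-neighbour dilation: a pixel turns on if it or any neighbour is on
        out = (padded[:-2, 1:-1] | padded[2:, 1:-1] | padded[1:-1, :-2]
               | padded[1:-1, 2:] | padded[:-2, :-2] | padded[:-2, 2:]
               | padded[2:, :-2] | padded[2:, 2:] | out)
    return out
```

Two patches separated by a 3-pixel gap become connected after buffering by 2 pixels, which is the merging behaviour the module relies on.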
It should be noted here that the description of the above device embodiments is similar to the description of the method embodiments described above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, please refer to the description of the embodiments of the method of the present application.
Based on the same inventive concept, an embodiment of the present application further provides an electronic device. Fig. 4 schematically shows a block diagram of the electronic device; referring to fig. 4, the electronic device may include: at least one processor 401; at least one memory 402; and a bus 403; the processor 401 and the memory 402 communicate with each other through the bus 403; the processor 401 is configured to invoke program instructions in the memory 402 to perform the method in one or more of the embodiments described above.
It should be noted here that the description of the above embodiments of the electronic device is similar to the description of the above embodiments of the method, with similar advantageous effects as the embodiments of the method. For technical details not disclosed in the embodiments of the electronic device of the present application, please refer to the description of the method embodiments of the present application for understanding.
Based on the same inventive concept, the embodiments of the present application also provide a computer readable storage medium, which includes a stored program, wherein the program controls a device in which the storage medium is located to perform the method in one or more embodiments described above when running.
It should be noted here that the description of the above embodiments of the storage medium is similar to the description of the above embodiments of the method, with similar advantageous effects as the embodiments of the method. For technical details not disclosed in the storage medium embodiments of the present application, please refer to the description of the method embodiments of the present application for understanding.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. The newly added construction land monitoring method is characterized by comprising the following steps of:
receiving a front-time-phase true color orthographic image, a rear-time-phase true color orthographic image and a basic vector, wherein the front-time-phase true color orthographic image and the rear-time-phase true color orthographic image are true color orthographic images of different time phases of the same region, and the basic vector represents the land utilization type of each region at the acquisition time of the front-time-phase true color orthographic image or in a period earlier than that acquisition time;
respectively converting the front-time-phase true color orthographic image and the rear-time-phase true color orthographic image from an RGB color space to a Lab color space to obtain a front-time-phase Lab image and a rear-time-phase Lab image;
converting the basic vector into a basic image object layer, wherein each vector image spot corresponds to one image object in the basic image object layer, and each image object inherits all attribute information of the corresponding vector image spot;
Under the constraint of the basic image object layer, the front time phase Lab image and the rear time phase Lab image are firstly segmented by small scale parameters to obtain a small scale parameter image object layer; copying the small-scale parameter image object layer, and merging the objects in which the difference between the small-scale parameter image object layer and the front time phase Lab characteristics and the rear time phase Lab characteristics of surrounding objects is smaller than a defined threshold value according to a homogeneity rule by using large-scale parameters to obtain a large-scale parameter image object layer; the objects in the large-scale parameter image object layer are obtained by combining the objects in the small-scale parameter image object layer according to a homogeneity rule, and the boundary of the combined image object does not exceed the boundary of the corresponding object in the basic image object layer;
respectively extracting the spatial characteristics of objects in the large-scale parameter image object layer and the small-scale parameter image object layer;
and determining whether the land utilization type of the small-scale parameter image object layer is changed from a non-construction land to a construction land based on the spatial characteristics of the object in the large-scale parameter image object layer and the spatial characteristics of the object in the small-scale parameter image object layer so as to monitor the newly added construction land.
2. The method of claim 1, wherein transforming the pre-phase true color orthographic image and the post-phase true color orthographic image from RGB color space to Lab color space, respectively, results in a pre-phase Lab image and a post-phase Lab image, comprising:
transforming the front-time-phase true color orthographic image and the rear-time-phase true color orthographic image from an RGB color space to an XYZ color space respectively to obtain a front-time-phase transition image and a rear-time-phase transition image;
and respectively carrying out standard CIE Lab conversion on the front time phase transition image and the rear time phase transition image, then adding preset parameters into a converted image layer, and carrying out downward rounding operation to obtain the front time phase Lab image and the rear time phase Lab image, wherein the value ranges of the front time phase Lab image and the rear time phase Lab image are consistent with the value range of the 8-bit image.
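Claim 2's two-step conversion (RGB to XYZ, then standard CIE Lab, then an offset and downward rounding into the 8-bit range) can be sketched per pixel as below. The sRGB/D65 matrices and companding are the standard ones; the offset and scale values (L scaled by 255/100, a and b shifted by +128) are illustrative assumptions, since the claim only says "preset parameters":

```python
import math

def rgb_to_lab8(r, g, b):
    """sRGB (0-255) -> XYZ -> CIE Lab, then shift/scale each layer into
    0..255 and take the floor, so the result fits an 8-bit image."""
    def lin(c):  # inverse sRGB companding to linear light
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB primaries, D65 white)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    # XYZ -> Lab (D65 reference white)
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    bb = 200.0 * (fy - fz)
    # hypothetical "preset parameters": L*255/100, a+128, b+128, then floor
    return (math.floor(L * 255.0 / 100.0), math.floor(a + 128.0), math.floor(bb + 128.0))
```

With these particular offsets, an achromatic pixel maps to a and b values near 128, reds land above 128 in the a layer, and blues land below 128 in the b layer, which matches how the classification rules read the shifted layers.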
3. The method of claim 1, wherein after the obtaining of the pre-phase Lab image and the post-phase Lab image, the method further comprises:
and removing the region where the newly added construction land does not appear in the operation region of the obtained pre-time-phase Lab image and the post-time-phase Lab image based on the basic vector to obtain a basic region.
4. A method according to claim 3, wherein the pre-phase Lab image and the post-phase Lab image are segmented with small scale parameters under the constraint of the base image object layer to obtain a small scale parameter image object layer; copying the small-scale parameter image object layer, merging the objects with differences between the small-scale parameter image object layer and the front time phase Lab characteristics and the back time phase Lab characteristics of surrounding objects smaller than a defined threshold value according to a homogeneity rule by using large-scale parameters to obtain the large-scale parameter image object layer, wherein the method comprises the following steps:
in the basic area range, under the constraint of the basic image object layer, dividing the front time phase Lab image and the rear time phase Lab image by small scale parameters to obtain a small scale parameter image object layer;
copying the small-scale parameter image object layer;
and merging the front time phase Lab image and the rear time phase Lab image on the copied small-scale parameter image object layer by adopting a region merging algorithm based on the minimum regional heterogeneity to obtain the large-scale parameter image object layer.
5. The method according to claim 1 or 4, wherein the extracting spatial features of objects within the large scale parametric image object layer and within the small scale parametric image object layer, respectively, comprises:
And respectively extracting the spatial characteristics of all objects in the large-scale parameter image object layer and all objects in the small-scale parameter image object layer.
6. The method according to claim 1, wherein the objects in the macro-scale parameter image object layer are formed by merging objects in a plurality of micro-scale parameter image object layers, the objects in the macro-scale parameter image object layer are parent objects of corresponding objects in the micro-scale parameter image object layer, and the objects in the micro-scale parameter image object layer are child objects of corresponding objects in the macro-scale parameter image object layer;
the determining whether the land use type of the small-scale parameter image object layer is changed from a non-construction land to a construction land based on the spatial features of the object in the large-scale parameter image object layer and the spatial features of the object in the small-scale parameter image object layer comprises:
based on the spatial characteristics of the objects in the large-scale parameter image object layer, if the objects in the large-scale parameter image object layer are determined to be blue-top rooms on the rear time phase and not blue-top rooms on the front time phase, judging that the objects in the large-scale parameter image object layer belong to newly added blue-top rooms;
If it is determined that the object in the large-scale parameter image object layer does not show a vegetation state in the rear time phase, the texture within the object is not smooth, and the object shows a vegetation state in the front time phase, the object in the large-scale parameter image object layer is determined to belong to newly added vegetation changed to suspected construction land;
if it is determined, according to the attributes of the basic vector, that the objects in the large-scale parameter image object layer may belong to a category in which newly added agricultural facilities occur, the objects are dense, their brightness in the rear time phase is obviously higher than that of the surroundings, and their brightness difference from surrounding objects in the front time phase is smaller than a preset value, the objects in the large-scale parameter image object layer are determined to belong to newly added agricultural facilities;
in the small-scale parameter image object layer, an object with a parent object class of a newly added blue top room is assigned as a newly added blue top room, an object with a parent object class of a newly added agricultural facility is assigned as a newly added agricultural facility, and the other object classes are kept unclassified;
based on the spatial characteristics of the objects in the small-scale parameter image object layer, combining with attribute information of land utilization types inherited by the objects from the basic vector, if the land utilization types of the other objects in the front time phase are determined not to belong to the construction land, and the land utilization types in the rear time phase are red roof, blue roof, green roof, gray roof, highlight earth surface or other construction lands, the corresponding object types are respectively determined as a newly added red roof, a newly added blue roof, a newly added green roof, a newly added gray roof, a newly added highlight earth surface and newly added other construction lands; the determined newly added red roof house, newly added blue roof house, newly added green roof house, newly added grey roof house, newly added highlight ground surface and newly added other construction land are collectively called as newly added construction land;
In the large-scale parameter image object layer, if the object type is that the newly-increased vegetation is changed into the suspected construction land and the newly-increased construction land sub-object is not included, all the sub-object types corresponding to the object type are assigned to the newly-increased vegetation to be changed into the suspected construction land.
7. The method of claim 6, wherein the determining that the object in the macro-scale parametric image object layer belongs to the newly added blue top room if it is determined that the object in the macro-scale parametric image object layer is a blue top room in a posterior time phase and is not a blue top room in an anterior time phase based on the spatial characteristics of the object in the macro-scale parametric image object layer comprises:
in the large-scale parameter image object layer, whether the object is a blue top room in the back time phase Lab image is determined according to the comparison result of the b layer average value in the back time phase Lab image with a blue top room b threshold; whether the object was a blue top room in the front time phase Lab image is determined by comparing the difference between the b layer average value in the back time phase Lab image and the b layer average value in the front time phase Lab image with a blue change threshold; an object which is a blue top room in the back time phase Lab image and is not a blue top room in the front time phase Lab image is a newly added blue top room; the blue top room b threshold is the maximum value of the back time phase b layer average values of the blue top room samples; the blue change threshold is the maximum value, a negative number, of the differences between the back time phase b layer average value and the front time phase b layer average value over all newly added blue top room samples;
If it is determined that the object in the large-scale parameter image object layer does not show a vegetation state at a later time phase and the texture in the object is not smooth, but shows a vegetation state at a previous time phase, determining that the object in the large-scale parameter image object layer belongs to newly added vegetation and becomes a suspected construction land, including:
in the large-scale parameter image object layer, according to the average value of an a layer in the front time-phase Lab image, the average value of an a layer in the rear time-phase Lab image and the edge characteristic of an L layer in the rear time-phase Lab image of the object, respectively comparing the average value of the a layer in the front time-phase Lab image, the average value of the a layer in the rear time-phase Lab image and the edge characteristic of the L layer in the rear time-phase Lab image with the comparison result that the corresponding vegetation becomes a suspected construction land threshold value, determining whether the object becomes the suspected construction land, wherein the corresponding vegetation becomes the suspected construction land threshold value is the maximum value of the front time-phase a layer, the minimum value of the rear time-phase a layer and the minimum value of the edge characteristic of the rear time-phase L layer in the sample object of the suspected construction land;
if it is determined that the object in the large-scale parameter image object layer belongs to a category of a newly added agricultural facility in the attribute of the base vector, the object is dense, the brightness is obviously higher than the surrounding area in the rear time phase, and the difference between the brightness and the surrounding object in the front time phase is smaller than a preset value, the determining that the object in the large-scale parameter image object layer belongs to the newly added agricultural facility includes:
In the large-scale parameter image object layer, determining a range for checking newly added agricultural facilities according to the basic vector, and determining whether the object is newly added agricultural facilities according to a comparison result of the density of the object and the density threshold value of the agricultural facilities and a comparison result of a difference value of an object on a front time phase L image layer mean value and a front time phase L difference threshold value of the agricultural facilities and a difference value of an object on a rear time phase L image layer mean value and a surrounding adjacent object on a rear time phase L image layer mean value and a rear time phase L difference threshold value of the agricultural facilities in the range; the agricultural facility density threshold is determined based on a density minimum of the newly added agricultural facility sample object, the pre-agricultural facility time phase L difference threshold is determined based on a maximum value of a difference value between the newly added agricultural facility sample object and an adjacent object on a pre-time phase L layer mean value, and the post-agricultural facility time phase L difference threshold is determined based on a minimum value of a difference value between the newly added agricultural facility sample object and an adjacent object on a post-time phase L layer mean value;
based on the spatial characteristics of the objects in the small-scale parameter image object layer, combining with the attribute information of land utilization types inherited by the objects from the basic vector, if it is determined that the land utilization types of the other objects in the front time phase do not belong to the construction land, and the land utilization types in the rear time phase are red roof houses, blue roof houses, green roof houses, gray roof houses, highlight ground surfaces or other construction lands, the corresponding object types are respectively determined as newly added red roof houses, newly added blue roof houses, newly added green roof houses, newly added gray roof houses, newly added highlight ground surfaces and newly added other construction lands, including:
In the small-scale parameter image object layer, determining whether the object is a red roof room in the later time phase according to a comparison result of the average value of the image layer of the unclassified object in the later time phase a and the red roof room threshold of the later time phase; for the object with the rear time phase being the red roof room, determining whether the object is the red roof room in the front time phase according to the comparison result of the average value of the a layer of the object in the front time phase and the threshold value of the red roof room in the front time phase; the back time phase is the red roof room and the front time phase is not the object of the red roof room, namely the newly added red roof room; the front time phase red roof room threshold is the maximum value of all newly added red roof room samples on the front time phase a layer average value, and the rear time phase red roof room threshold is the minimum value of all newly added red roof room samples on the rear time phase a layer average value;
in the small-scale parameter image object layer, for an unclassified object whose rear time phase L image layer mean value is larger than the rear time phase blue top room brightness threshold, whether the object is a blue top room in the rear time phase is determined according to the comparison result of its mean value in the rear time phase b image layer with the rear time phase blue top room threshold; for an object that is a blue top room in the rear time phase, whether it was a blue top room in the front time phase is determined according to the comparison result of its mean value in the front time phase b image layer with the front time phase blue top room threshold; an object that is a blue top room in the rear time phase but not in the front time phase is a newly added blue top room; the rear time phase blue top room brightness threshold is the minimum value of the rear time phase L image layer mean values of all newly added blue top room samples, the front time phase blue top room threshold is the minimum value of the front time phase b image layer mean values of all newly added blue top room samples, and the rear time phase blue top room threshold is the maximum value of the rear time phase b image layer mean values of all newly added blue top room samples;
In the small-scale parameter image object layer, for an object whose class is still unclassified, whether the object is a green roof room in the front time phase is determined according to the comparison results of its mean values in the front time phase a and front time phase b image layers with the front time phase green roof room a threshold and the front time phase green roof room b threshold, and whether it is a green roof room in the rear time phase is determined according to the comparison results of its mean values in the rear time phase a and rear time phase b image layers with the rear time phase green roof room a threshold and the rear time phase green roof room b threshold; an object that is a green roof room in the rear time phase but not in the front time phase is a newly added green roof room; the front time phase green roof room a threshold is the maximum value of the green roof room samples' mean values in the front time phase a image layer, the front time phase green roof room b threshold is the maximum value in the front time phase b image layer, and the rear time phase green roof room a and b thresholds are the maximum values of the samples' mean values in the rear time phase a and b image layers, respectively;
an object whose rear time phase L image layer mean value is smaller than a specified threshold is assigned to the shadow class; for an unclassified object that is gray in both the rear time phase and the front time phase, whether it is a newly added gray top room is determined according to the comparison result of the overlap index between the object shifted to the north and the shadow objects with the gray top room shadow overlap threshold, where the gray top room shadow overlap threshold is a preset overlap index parameter, and an object is judged gray when its a image layer mean value lies within a preset interval and its b image layer mean value lies within a preset interval;
In the small-scale parameter image object layer, whether an unclassified object is a newly added highlight surface is determined according to the comparison result of its mean value in the front time phase L image layer with the front time phase highlight surface threshold and the comparison result of its mean value in the rear time phase L image layer with the rear time phase highlight surface threshold; the front time phase highlight surface threshold is the maximum value of the newly added highlight surface samples' mean values in the front time phase L image layer, and the rear time phase highlight surface threshold is the minimum value of the newly added highlight surface samples' mean values in the rear time phase L image layer;
in the small-scale parameter image object layer, whether an unclassified object is newly added construction land is determined according to the comparison result of its standard deviation in the front time phase L image layer with the newly added construction land front time phase L standard deviation threshold, the comparison result of its standard deviation in the rear time phase L image layer with the newly added construction land rear time phase L standard deviation threshold, and the comparison result of its mean value in the rear time phase a image layer with the newly added construction land rear time phase a threshold; the newly added construction land front time phase L standard deviation threshold is the maximum value of the front time phase L standard deviations of the newly added construction land samples, the newly added construction land rear time phase L standard deviation threshold is the minimum value of the rear time phase L standard deviations of the newly added construction land samples, and the newly added construction land rear time phase a threshold is the minimum value of the rear time phase a image layer mean values of the newly added construction land samples;
In the small-scale parameter image object layer, an object that is still unclassified and whose parent object class is newly added vegetation changed to suspected construction land is also assigned to the class of newly added vegetation changed to suspected construction land.
8. The method of claim 1, 6 or 7, wherein the spatial features comprise a patch elongation index, the patch elongation index being the ratio of perimeter to area, both expressed in numbers of pixels;
after the determining whether the land use type of the small-scale parametric image object layer is changed from a non-construction land to a construction land based on the spatial features of the object within the large-scale parametric image object layer and the spatial features of the object within the small-scale parametric image object layer, the method further includes:
and deleting non-target objects in the small-scale parameter image object layer based on the patch elongation index of the small-scale parameter image object layer when it is determined that the land utilization type of the small-scale parameter image object layer is changed from a non-construction land to a construction land, wherein the non-target objects are objects that do not correspond to genuinely newly added construction land.
9. An add-on construction land monitoring device, comprising:
The receiving module is used for receiving a front-time-phase true color orthographic image, a rear-time-phase true color orthographic image and a basic vector, wherein the front-time-phase true color orthographic image and the rear-time-phase true color orthographic image are true color orthographic images of different time phases of the same region, and the basic vector represents the land utilization type of each region at the acquisition time of the front-time-phase true color orthographic image or in a period earlier than that acquisition time;
the conversion module is used for respectively converting the front-time real-color orthographic image and the rear-time real-color orthographic image from RGB color space to Lab color space to obtain a front-time Lab image and a rear-time Lab image;
the conversion module is used for converting the basic vector into a basic image object layer, wherein each vector image spot corresponds to one image object in the basic image object layer, and each image object inherits all attribute information of the corresponding vector image spot;
the segmentation module is used for segmenting the front time phase Lab image and the rear time phase Lab image under the constraint of the basic image object layer by using small scale parameters to obtain a small scale parameter image object layer; copying the small-scale parameter image object layer, and merging the objects in which the difference between the small-scale parameter image object layer and the front time phase Lab characteristics and the rear time phase Lab characteristics of surrounding objects is smaller than a defined threshold value according to a homogeneity rule by using large-scale parameters to obtain a large-scale parameter image object layer; the objects in the large-scale parameter image object layer are obtained by combining the objects in the small-scale parameter image object layer according to a homogeneity rule, and the boundary of the combined image object does not exceed the boundary of the corresponding object in the basic image object layer;
The feature calculation module is used for respectively extracting the spatial features of the objects in the large-scale parameter image object layer and the small-scale parameter image object layer;
and the monitoring module is used for determining whether the land utilization type of the small-scale parameter image object layer is changed from a non-construction land to a construction land based on the spatial characteristics of the object in the large-scale parameter image object layer and the spatial characteristics of the object in the small-scale parameter image object layer so as to monitor the newly added construction land.
10. An electronic device, comprising: a processor, a memory, and a bus;
the processor and the memory communicate with each other via the bus; the processor is configured to invoke program instructions in the memory to perform the method of any of claims 1 to 8.
CN202110388500.7A 2021-04-12 2021-04-12 Newly-added construction land monitoring method and device Active CN112927252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110388500.7A CN112927252B (en) 2021-04-12 2021-04-12 Newly-added construction land monitoring method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110388500.7A CN112927252B (en) 2021-04-12 2021-04-12 Newly-added construction land monitoring method and device

Publications (2)

Publication Number Publication Date
CN112927252A (en) 2021-06-08
CN112927252B (en) 2023-09-22

Family

ID=76174114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110388500.7A Active CN112927252B (en) 2021-04-12 2021-04-12 Newly-added construction land monitoring method and device

Country Status (1)

Country Link
CN (1) CN112927252B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119575B (en) * 2021-11-30 2022-07-19 二十一世纪空间技术应用股份有限公司 Spatial information change detection method and system
CN116258958B (en) * 2022-12-22 2023-12-05 二十一世纪空间技术应用股份有限公司 Building extraction method and device for homologous high-resolution images and DSM data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971115A (en) * 2014-05-09 2014-08-06 中国科学院遥感与数字地球研究所 Automatic extraction method for newly-increased construction land image spots in high-resolution remote sensing images based on NDVI and PanTex index
CN110852207A (en) * 2019-10-29 2020-02-28 北京科技大学 Blue roof building extraction method based on object-oriented image classification technology
CN112101159A (en) * 2020-09-04 2020-12-18 国家林业和草原局中南调查规划设计院 Multi-temporal forest remote sensing image change monitoring method
CN112183416A (en) * 2020-09-30 2021-01-05 北京吉威数源信息技术有限公司 Automatic extraction method of newly added construction land based on deep learning method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8233712B2 (en) * 2006-07-28 2012-07-31 University Of New Brunswick Methods of segmenting a digital image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Object-based change detection method for multi-source data; Xu Shushu; Bulletin of Surveying and Mapping (Issue S1); full text *

Also Published As

Publication number Publication date
CN112927252A (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN104751478B (en) Object-oriented building change detection method based on multi-feature fusion
US8233712B2 (en) Methods of segmenting a digital image
Darwish et al. Image segmentation for the purpose of object-based classification
CN111798467B (en) Image segmentation method, device, equipment and storage medium
Hoonhout et al. An automated method for semantic classification of regions in coastal images
CN109934154B (en) Remote sensing image change detection method and detection device
Gauch et al. Comparison of three-color image segmentation algorithms in four color spaces
CN109410171B (en) Target significance detection method for rainy image
CN112927252B (en) Newly-added construction land monitoring method and device
EP1359543A2 (en) Method for detecting subject matter regions in images
CN104598908A (en) Method for recognizing diseases of crop leaves
Su Scale-variable region-merging for high resolution remote sensing image segmentation
CN104217440B (en) A kind of method extracting built-up areas from remote sensing images
CN107657619A (en) A kind of low-light (level) Forest fire image dividing method
CN112183416A (en) Automatic extraction method of newly added construction land based on deep learning method
CN106960182A (en) A kind of pedestrian integrated based on multiple features recognition methods again
CN110879992A (en) Grassland surface covering object classification method and system based on transfer learning
Zhan et al. Quantitative analysis of shadow effects in high-resolution images of urban areas
CN105095898B (en) A kind of targeted compression cognitive method towards real-time vision system
Guo et al. Dual-concentrated network with morphological features for tree species classification using hyperspectral image
Bora AERSCIEA: An Efficient and Robust Satellite Color Image Enhancement Approach.
Wang et al. Hybrid remote sensing image segmentation considering intrasegment homogeneity and intersegment heterogeneity
Lizarazo et al. Fuzzy image segmentation for urban land-cover classification
CN105205485B (en) Large scale image partitioning algorithm based on maximum variance algorithm between multiclass class
CN112149492A (en) Remote sensing image accurate cloud detection method based on reinforcement genetic learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant