CN112070780A - Residential area extraction result contour linearity processing method, device and equipment

Info

Publication number
CN112070780A
CN112070780A (application CN202010778043.8A)
Authority
CN
China
Prior art keywords
contour
processing
residential area
original image
feature map
Prior art date
Legal status
Pending
Application number
CN202010778043.8A
Other languages
Chinese (zh)
Inventor
刘松林
张丽
巩丹超
龚辉
秦进春
Current Assignee
61540 Troops of PLA
Original Assignee
61540 Troops of PLA
Priority date
Filing date
Publication date
Application filed by 61540 Troops of PLA filed Critical 61540 Troops of PLA
Priority to CN202010778043.8A
Publication of CN112070780A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a method for processing the contour linearity of residential area extraction results, which comprises the following steps: acquiring an original image, and superimposing DSM data covering the same area as the original image onto the original image to obtain an input image; performing edge feature enhancement processing on the input image to obtain a first feature map, and performing feature extraction on the input image with a semantic segmentation network to obtain a second feature map; connecting and combining the first feature map and the second feature map to obtain an output feature map; and applying a contour constraint to the output feature map with a contour constraint loss function, and obtaining the residential area extraction result from the constrained output feature map. Region contour straightening is performed within the residential area extraction process itself, so that contour straightening and residential area extraction proceed simultaneously: the region contours are straightened at the moment extraction completes, achieving end-to-end improvement of residential area contour linearity.

Description

Residential area extraction result contour linearity processing method, device and equipment
Technical Field
The application relates to the technical field of remote sensing mapping, in particular to a method, a device and equipment for processing linearity of a residential area extraction result contour.
Background
The 1:50000 topographic map is one of the most basic forms of geographic information data and plays a fundamental, strategic role in the national economy and national defense construction. As society develops rapidly, users place higher demands on the currency of topographic maps, and topographic map updating has become an urgent task. Since the terrain itself generally changes little, updating topographic data mainly means updating the feature elements, and among the many feature elements, residential areas are one of the most important in a topographic map. At present, remote sensing image feature extraction methods for mapping residential area features keep emerging; they mainly comprise two steps, segmentation extraction and post-processing. Segmentation extraction belongs to the semantic segmentation category, while post-processing generally refers to regularizing the extracted contours according to the cartographic specification so as to meet mapping requirements. However, processing the closed, irregular contour boundary of a residential area into a mapping result with straight-line contours according to the cartographic specification, while preserving accuracy, increases the complexity of the algorithm and reduces the degree of automation and intelligence of the overall residential area feature extraction algorithm.
Disclosure of Invention
In view of this, the application provides a method for processing the contour linearity of residential area extraction results, which can effectively simplify the straightening of residential area edge contours and improve the degree of automation and intelligence of the overall residential area feature extraction algorithm.
According to an aspect of the present application, there is provided a method for processing linearity of a contour of a residential-area extraction result, including:
acquiring an original image, and superimposing DSM data covering the same area as the original image onto the original image to obtain an input image;
performing edge feature enhancement processing on the input image to obtain a first feature map after edge enhancement, and performing feature extraction on the input image by adopting a semantic segmentation network to obtain a corresponding second feature map;
connecting and combining the first feature map and the second feature map to obtain a corresponding output feature map;
and carrying out contour constraint on the output feature map by using a contour constraint loss function, and acquiring a residential area extraction result through the constrained output feature map.
In one possible implementation, superimposing DSM data covering the same area as the original image onto the original image includes:
acquiring the DSM data based on the original image, performing normalization processing on the DSM data, and superimposing the normalized DSM data onto the original image;
wherein the resolution of the acquired DSM data is consistent with the resolution of the original image.
In a possible implementation manner, the edge feature enhancement processing on the input image is performed using a Laplacian operator template.
In a possible implementation manner, a 1 × 1 convolution is used when the first feature map and the second feature map are connected and combined.
In a possible implementation manner, when the contour of the output feature map is constrained by using a contour constraint loss function, the contour constraint loss function is a joint loss function in which a directional derivative is added to a loss function of the semantic segmentation network;
wherein the joint loss function is:
$\mathcal{L} = \alpha\,\mathcal{L}_{seg} + \beta\,\mathcal{L}_{dd}$
where $\mathcal{L}_{seg}$ is the loss function of the semantic segmentation network, $\mathcal{L}_{dd}$ is the directional-derivative contour term, and both α and β are loss weighting coefficients.
According to another aspect of the application, a device for processing the contour linearity of residential area extraction results is also provided, which comprises a data superposition module, an edge enhancement processing module, a semantic segmentation module, a connection combination module and a contour constraint module;
the data superposition module is configured to acquire an original image, and superimpose DSM data covering the same area as the original image onto the original image to obtain an input image;
the edge enhancement module is configured to perform edge feature enhancement processing on the input image to obtain a first feature map after edge enhancement;
the semantic segmentation module is configured to extract features of the input image by adopting a semantic segmentation network to obtain a corresponding second feature map;
the connection combination module is configured to connect and combine the first feature map and the second feature map to obtain a corresponding output feature map;
and the contour constraint module is configured to carry out contour constraint on the output feature map by using a contour constraint loss function, and obtain a residential area extraction result through the constrained output feature map.
In one possible implementation, the edge enhancement module includes a laplacian processing layer, a normalization layer, and a first convolution layer that are cascaded layer by layer;
the Laplace processing layer is configured to apply a Laplacian operator template to the input image;
the normalization layer is configured to normalize the input image after the Laplacian operation using the ReLU and Tanh activation functions;
the first convolution layer is configured to output the first feature map after performing feature enhancement on the input feature data after the normalization processing.
In one possible implementation, the semantic segmentation network is a U-Net network;
the connection combination module comprises a second convolution layer, and the convolution kernel of the second convolution layer is a 1 x 1 convolution kernel.
According to yet another aspect of the present application, there is provided a residential area extraction result contour linearity processing apparatus, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the executable instructions to implement any of the methods described above.
According to another aspect of the present application, there is also provided a non-transitory computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement any of the methods described above.
According to the residential area extraction result contour linearity processing method, DSM data superposition is added at the input layer, edge feature enhancement is added at the feature extraction layer, and contour constraint is added at the output layer. In other words, in the residential area extraction result contour linearity processing method of the embodiment of the application, region contour straightening is performed within the residential area extraction process, so that contour straightening and residential area extraction proceed simultaneously and the region contours are straightened at the moment extraction completes, thereby improving residential area contour linearity end to end. Compared with related-art approaches that first extract residential areas and then straighten the edge contours in a post-processing step, the processing flow is effectively simplified, which makes residential area extraction simpler and, at the same time, effectively improves the degree of automation and intelligence of the overall residential area feature extraction algorithm.
Other features and aspects of the present application will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a flowchart of a residential area extraction result contour linearity processing method according to an embodiment of the present application;
Fig. 2 shows an example of the DSM data of a certain area collected in the residential area extraction result contour linearity processing method according to an embodiment of the present application;
Fig. 3a shows an original image of a certain area to be processed in the residential area extraction result contour linearity processing method according to an embodiment of the present application;
Fig. 3b shows an example of the mapping-specific labels in the constructed residential area sample library in the residential area extraction result contour linearity processing method according to an embodiment of the present application;
Fig. 3c shows the residential areas extracted from the original image shown in Fig. 3a using a conventional residential area extraction result contour linearity processing method;
Fig. 3d shows the residential areas extracted from the original image shown in Fig. 3a using the residential area extraction result contour linearity processing method according to an embodiment of the present application;
Fig. 4a shows an original image of a certain area to be processed in a residential area extraction result contour linearity processing method according to another embodiment of the present application;
Fig. 4b shows an example of the mapping-specific labels in a constructed residential area sample library in a residential area extraction result contour linearity processing method according to another embodiment of the present application;
Fig. 4c shows the residential areas extracted from the original image shown in Fig. 4a using a conventional residential area extraction result contour linearity processing method;
Fig. 4d shows the residential areas extracted from the original image shown in Fig. 4a using the residential area extraction result contour linearity processing method according to an embodiment of the present application;
Fig. 5 is a block diagram showing the configuration of a residential area extraction result contour linearity processing device according to an embodiment of the present application;
Fig. 6 is a block diagram showing the configuration of a residential area extraction result contour linearity processing apparatus according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present application.
Fig. 1 shows a flowchart of a residential area extraction result contour linearity processing method according to an embodiment of the present application. As shown in fig. 1, the method includes: step S100, acquiring an original image, and superimposing DSM data covering the same area as the original image onto the original image to obtain an input image. Here, DSM data refers to three-dimensional feature information representing the earth's surface, including the elevation values of feature elements such as residential areas in an area. In the method, the DSM data is added to the original image as prior information, which lays a good data basis for straightening the extracted residential area contours when the residential areas are subsequently extracted from the original image.
Step S200, performing edge feature enhancement processing on the input image to obtain a first feature map after edge enhancement, and performing feature extraction on the input image by adopting a semantic segmentation network to obtain a corresponding second feature map. That is to say, in the method for processing linearity of contour of extracted result of residential area according to the embodiment of the present application, when the feature of the input image is extracted by using the semantic segmentation network to obtain the residential area region in the input image, the edge feature enhancement processing is further performed on the input image, so as to enhance the edge contour of the extracted residential area region.
And step S300, connecting and combining the first feature map and the second feature map to obtain a corresponding output feature map. By combining the edge-enhanced input image (i.e., the first feature map) with the residential area feature extraction result (i.e., the second feature map), the edge contours of the extracted residential areas are enhanced, so that the edge contours of the residential areas in the final output feature map are more distinct and clear.
And step S400, constraining the contour of the output feature map with a contour constraint loss function, and obtaining the residential area extraction result from the constrained output feature map. That is, in the obtained output feature map, the edge contours of the residential areas are reinforced and constrained once more by the contour constraint loss function, so that the edge contours of the residential areas in the extraction result obtained from the contour-constrained output feature map tend further toward straight-line segments.
Therefore, in the method for processing the contour linearity of residential area extraction results, the corresponding DSM data is superimposed onto the original image as prior information. Because DSM data shows an obvious step characteristic at the edges of residential areas and mostly natural right-angle transitions at corners, without obvious convex or concave parts, superimposing it onto the original image as prior information can effectively improve the linearity of the extracted residential area edge contours. Meanwhile, when the semantic segmentation network extracts features from the original image with the superimposed DSM data (i.e., the input image) to obtain the residential area result, edge feature enhancement processing is also performed on the input image, and the edge-enhanced result is connected and combined with the residential area result extracted by the semantic segmentation network, thereby strengthening the edge features of the extracted residential areas. Finally, the contour of the output feature map is constrained with the contour constraint loss function, so that when the residential area extraction result is obtained from the contour-constrained output feature map, the edge contours of the residential areas in the result tend toward straight-line segments, and the linearity of the residential area edge contours is finally and effectively improved.
That is to say, in the residential area extraction result contour linearity processing method according to the embodiment of the present application, DSM data superposition is added at the input layer, edge feature enhancement is added at the feature extraction layer, and contour constraint is added at the output layer, so that through these three design aspects the edge contours of the residential areas are straightened during extraction of the residential areas from the original image. In other words, region contour straightening is performed within the residential area extraction process, so that contour straightening and residential area extraction proceed simultaneously and the region contours are straightened at the moment extraction completes, thereby improving residential area contour linearity end to end. Compared with related-art approaches that first extract residential areas and then straighten the edge contours in a post-processing step, the processing flow is effectively simplified, which makes residential area extraction simpler and, at the same time, effectively improves the degree of automation and intelligence of the overall residential area feature extraction algorithm.
In one possible implementation, the DSM data is superimposed onto the original image in the following manner: the DSM data is acquired based on the original image and normalized, and the normalized DSM data is then superimposed onto the original image to obtain the input image.
Here, it should be noted that, in order to ensure the accuracy of the extracted result of the residential area, the collected DSM data should be DSM data of the area displayed in the original image. That is, the acquired DSM data is three-dimensional feature information corresponding to an area in the original image. After corresponding DSM data sampling is performed based on an original image, when DSM data is superimposed into the original image as prior information, normalization processing needs to be performed on the DSM data. The method for performing normalization processing on DSM data may be implemented by a conventional normalization method in the art, and is not described herein again.
Referring to fig. 2, DSM data of a certain area is shown, displayed as a gray image after normalization of the elevation values. In this normalized DSM data the grid spacing is 1 m, and pixels with high gray values correspond to positions with large elevation values. As can be seen from the figure, the DSM data has an obvious step characteristic at the edges of residential areas and mostly natural right-angle transitions at corners, without obvious convex or concave parts; therefore, superimposing the sampled and normalized DSM data onto the original image as prior information makes the edge contours of the residential areas in the original image with superimposed DSM data (i.e., the input image) tend toward straight-line segments.
It should be noted that, when acquiring corresponding DSM data with reference to an original image, the resolution of the acquired DSM data should be consistent with the resolution of the original image, so that when superimposing the DSM data as prior information on the original image, the accuracy of the superimposed result can be effectively ensured.
In addition, in a possible implementation, the DSM data may be superimposed onto the original image directly. That is, the DSM data is superimposed onto the original image as an additional channel layer, thereby adding a channel layer to the original image. This direct superposition effectively simplifies the contour linearity processing of the residential area extraction result and reduces its difficulty.
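As a concrete illustration of this direct superposition, the following is a minimal sketch (Python with NumPy is assumed; the patent does not prescribe an implementation, and the min-max normalization and the H x W x C array layout are assumptions made here):

```python
import numpy as np

def stack_dsm_channel(image_rgb: np.ndarray, dsm: np.ndarray) -> np.ndarray:
    """Superimpose normalized DSM data onto the original image as an extra channel layer.

    image_rgb: H x W x 3 array (original image); dsm: H x W array sampled at the same
    resolution and covering the same area as the image (assumptions of this sketch).
    """
    assert image_rgb.shape[:2] == dsm.shape, "DSM grid must match the image grid"
    # Min-max normalization of the elevation values to [0, 1] (one common choice).
    dsm_norm = (dsm - dsm.min()) / (dsm.max() - dsm.min() + 1e-8)
    # Scale to the image value range and append as a fourth channel.
    dsm_channel = (dsm_norm * 255.0).astype(image_rgb.dtype)[..., np.newaxis]
    return np.concatenate([image_rgb, dsm_channel], axis=-1)  # H x W x 4 input image
```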
Furthermore, after the DSM data is superimposed onto the original image to obtain the input image, edge feature enhancement processing can be performed on the input image while a semantic segmentation network performs feature extraction on the input image. In a possible implementation, the edge feature enhancement processing may be implemented with a Laplacian operator template, and the feature extraction of the input image may be implemented with the semantic segmentation network.
Specifically, edges typically occur where the gray values are discontinuous and are usually found using the first or second derivative of the image gray-level function. The Laplacian is the simplest isotropic differential operator and has rotation invariance, which makes it suitable for the edge feature enhancement processing. The Laplacian of a two-dimensional image function f(x, y) is the isotropic second derivative, defined as:
$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \qquad (1)$$
Expressed in discrete form, the equation becomes:
$$\nabla^2 f(x,y) = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y) \qquad (2)$$
according to equation (2), the laplacian can be expressed in the form of a template, as shown in table 1. Because the edge in the image is the region with jump gray level, the Laplace operator template can effectively realize the enhancement of the edge information.
TABLE 1
0 1 0
1 -4 1
0 1 0
In the process of performing edge feature enhancement processing on the input image with the designed Laplacian operator template, the Laplacian operator template can first be applied to the input image, the result can then be normalized using the ReLU and Tanh activation functions, and feature enhancement can then be performed on the normalized result through convolution layers to output the image result after edge feature enhancement.
In the above embodiment, when the feature enhancement processing is performed on the normalized input image by the convolutional layer, the convolutional layer may be provided as two cascaded convolutional layers, so that the feature enhancement is performed on the normalized input image by two convolutional layers in succession. It should be noted that two consecutive convolutional layers may be the same or different, and convolution parameters (such as the size of a convolution kernel) of the convolutional layers may be flexibly set according to actual situations, which is not described herein again.
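As one possible reading of this edge feature enhancement branch, the sketch below applies the fixed Laplacian template of Table 1, normalizes the response with ReLU and Tanh, and passes it through two cascaded convolution layers (PyTorch is assumed; the channel counts, the 3 x 3 kernels of the trailing convolutions, and the depthwise application of the template are illustrative assumptions, not details fixed by the text):

```python
import torch
import torch.nn as nn

class EdgeEnhanceBranch(nn.Module):
    """Edge feature enhancement: fixed Laplacian template -> ReLU/Tanh -> two conv layers."""

    def __init__(self, in_channels: int = 4, out_channels: int = 64):
        super().__init__()
        # Fixed (non-trainable) Laplacian template from Table 1, applied per channel.
        kernel = torch.tensor([[0., 1., 0.],
                               [1., -4., 1.],
                               [0., 1., 0.]])
        weight = kernel.repeat(in_channels, 1, 1, 1)  # depthwise weight: (C, 1, 3, 3)
        self.laplace = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1,
                                 groups=in_channels, bias=False)
        self.laplace.weight = nn.Parameter(weight, requires_grad=False)
        # Two cascaded convolution layers for feature enhancement (sizes assumed).
        self.enhance = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        edges = self.laplace(x)                # Laplacian operator template operation
        edges = torch.tanh(torch.relu(edges))  # ReLU then Tanh as the normalization step
        return self.enhance(edges)             # first feature map (edge enhanced)
```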
Furthermore, while the edge feature enhancement processing is performed on the input image, a semantic segmentation network is also used to extract features from the input image. In the residential area extraction result contour linearity processing method, the semantic segmentation network may be a conventional neural network in the field, such as a U-Net network.
When the feature extraction of the input image is carried out by adopting the conventional semantic segmentation network in the field to obtain the output result of the residential area, only the network model needs to be properly adjusted, and the adjustment mode is as follows: the method is realized by adding a DSM data superposition layer in a front-end input layer, adding a network branch of an edge feature enhancement layer at the same time, and adding contour constraint in a loss function at a rear-end output side. The adjustment mode is simple and easy to realize.
In addition, in step S200, after the first feature map is obtained by performing edge feature enhancement processing on the input image and the second feature map is obtained by performing feature extraction on the input image with the semantic segmentation network, the first feature map and the second feature map may be connected and combined. In one possible implementation, a convolution layer may be used to fuse the concatenated first and second feature maps, and the kernel size of this convolution layer may be 1 × 1.
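For concreteness, a minimal sketch of this connection-combination step is given below (PyTorch is assumed; the channel counts are illustrative assumptions):

```python
import torch
import torch.nn as nn

class ConcatFuse1x1(nn.Module):
    """Concatenate the two feature maps along the channel axis and fuse them with a 1 x 1 convolution."""

    def __init__(self, edge_channels: int = 64, seg_channels: int = 64, out_channels: int = 1):
        super().__init__()
        self.fuse = nn.Conv2d(edge_channels + seg_channels, out_channels, kernel_size=1)

    def forward(self, first_feat: torch.Tensor, second_feat: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([first_feat, second_feat], dim=1))  # output feature map
```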
In the residential area extraction result contour linearity processing method according to the embodiment of the present application, when the contour of the output feature map is constrained with the contour constraint loss function, the contour constraint loss function is a joint loss function in which a directional-derivative term is added to the loss function of the semantic segmentation network. As those skilled in the art will understand, different semantic segmentation networks have different loss functions and therefore different corresponding joint loss functions. That is, in the method of the embodiment of the present application, the joint loss function used to constrain the contour of the output feature map is determined by the semantic segmentation network that is employed.
For example, considering that directional derivatives (or gradients) indicate the edge direction well, when adding the contour constraint to the loss function of the semantic segmentation network, the directional derivatives can be used to compute a loss that constrains the contour characteristics during residential area extraction. The contour loss can be written as:

$$\mathcal{L}_{dd} = \frac{1}{W \times H}\sum\left(\left\|\frac{\partial Y}{\partial r} - \frac{\partial O}{\partial r}\right\|_1 + \left\|\frac{\partial Y}{\partial c} - \frac{\partial O}{\partial c}\right\|_1\right) \qquad (3)$$

wherein, in equation (3), $O \in \{0,1\}^{W \times H}$ denotes the truth label, $W \times H$ denotes the image size (i.e., the size of the original image), $\partial Y/\partial r$ and $\partial Y/\partial c$ denote the directional derivatives of the network inference result image (i.e., the output feature map) in the row and column directions respectively, $\partial O/\partial r$ and $\partial O/\partial c$ denote the directional derivatives of the truth label image (i.e., the sample image used when training the network model that runs the residential area extraction result contour linearity processing method of the present embodiment) in the row and column directions, and $\|\cdot\|_1$ is the 1-norm. $\mathcal{L}_{dd}$ represents the degree of disparity between the directional derivatives of the two images. In the training phase, the optimal model is obtained by minimizing the joint loss function, which is given by:

$$\mathcal{L} = \alpha\,\mathcal{L}_{seg} + \beta\,\mathcal{L}_{dd} \qquad (4)$$

wherein $\mathcal{L}_{seg}$ is the original loss function of the underlying network, and α and β are weighting coefficients for the two types of losses, which can be determined experimentally.
This strategy is universal and can be added to the loss function of any basic semantic segmentation network. For example, semantic segmentation networks generally adopt the cross-entropy loss, computed as:

$$\mathcal{L}_{seg} = -\frac{1}{W \times H}\sum_{x}\bigl[O_x \log y_x + (1 - O_x)\log(1 - y_x)\bigr] \qquad (5)$$

wherein $O \in \{0,1\}^{W \times H}$ denotes the truth label and $y_x$ denotes the value of the network prediction result Y at position x.
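As an illustrative sketch of this joint loss (PyTorch is assumed; the forward-difference approximation of the directional derivatives, the (N, H, W) tensor shapes, the binary cross-entropy base term, and the default values of alpha and beta are choices made here and are not mandated by the text):

```python
import torch
import torch.nn.functional as F

def directional_derivative_loss(pred: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
    """1-norm disparity of the row/column directional derivatives of prediction and truth label.

    pred, label: (N, H, W) tensors with values in [0, 1]; forward differences
    approximate the directional derivatives in the row and column directions.
    """
    d_row = (pred[:, 1:, :] - pred[:, :-1, :]) - (label[:, 1:, :] - label[:, :-1, :])
    d_col = (pred[:, :, 1:] - pred[:, :, :-1]) - (label[:, :, 1:] - label[:, :, :-1])
    return d_row.abs().mean() + d_col.abs().mean()

def joint_loss(pred: torch.Tensor, label: torch.Tensor,
               alpha: float = 1.0, beta: float = 0.5) -> torch.Tensor:
    """L = alpha * L_seg + beta * L_dd; alpha and beta would be tuned experimentally."""
    seg_loss = F.binary_cross_entropy(pred, label)           # cross-entropy term, cf. eq. (5)
    contour_loss = directional_derivative_loss(pred, label)  # directional-derivative term, cf. eq. (3)
    return alpha * seg_loss + beta * contour_loss
```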
In addition, it should be noted that, since the residential area extraction result contour linearity processing method according to the embodiment of the present application is implemented based on a semantic segmentation network whose structure has been adjusted as described above, the adjusted network model must be trained, so the method further includes a step of training the network model. During training, a residential area sample library needs to be constructed; the pixel-level feature labels are mainly obtained manually. In this application, in order to support end-to-end residential area extraction training with straight-line contours, the constructed residential area sample library uses labels whose contours have regularized straight-line characteristics; such labels may also be called mapping-specific labels. The residential area sample library can be constructed with conventional methods in the field, which are not described here again.
In addition, in order to more clearly illustrate how the residential area extraction result contour linearity processing method according to the embodiment of the present application improves the linearity of extracted residential area contours, reference is made to fig. 3a to 3d and fig. 4a to 4d, taking the remote sensing images shown in fig. 3a and fig. 4a as examples.
Referring to fig. 3c and 3d, it can be seen that the contours of the residential areas extracted from the remote sensing image shown in fig. 3a with the method according to the embodiment of the present application have stronger linearity. Similarly, it can be seen from fig. 4c and 4d that the contours of the residential areas extracted from the remote sensing image shown in fig. 4a with the method according to the embodiment of the present application are closer to straight-line segments than those extracted with the conventional method.
Correspondingly, based on any of the residential area extraction result contour linearity processing methods above, the application also provides a residential area extraction result contour linearity processing device. It should be noted that, as described above, the residential area extraction result contour linearity processing device according to the embodiment of the present application is substantially a network model constructed to implement residential area extraction result contour linearity processing.
Referring to fig. 5, the residential area extraction result contour linearity processing device 100 according to the embodiment of the present application includes a data superposition module 110, an edge enhancement processing module 120, a semantic segmentation module 130, a connection combination module 140, and a contour constraint module 150. The data superposition module 110 is configured to obtain an original image and superimpose DSM data covering the same area as the original image onto the original image to obtain an input image; the edge enhancement processing module 120 is configured to perform edge feature enhancement processing on the input image to obtain an edge-enhanced first feature map; the semantic segmentation module 130 is configured to perform feature extraction on the input image with a semantic segmentation network to obtain a corresponding second feature map; the connection combination module 140 is configured to connect and combine the first feature map and the second feature map to obtain a corresponding output feature map; and the contour constraint module 150 is configured to constrain the contour of the output feature map with a contour constraint loss function and obtain the residential area extraction result from the constrained output feature map.
Further, the edge enhancement module includes a Laplace processing layer, a normalization layer, and a first convolution layer which are cascaded layer by layer. The Laplace processing layer is configured to apply a Laplacian operator template to the input image; the normalization layer is configured to normalize the Laplacian-processed input image with the ReLU and Tanh activation functions; and the first convolution layer is configured to perform feature enhancement on the normalized input feature data and output the first feature map. Here, it should be noted that the first convolution layer may be two consecutive convolution layers, whose convolution parameters may be the same or different and are not particularly limited here.
The semantic segmentation network may be a U-Net network. The connection combination module 140 includes a second convolutional layer whose convolution kernel is a 1 × 1 convolution kernel.
In addition, as described above, since the residential area extraction result contour linearity processing device according to the embodiment of the present application is a purpose-built network model, the network model needs to be trained before the device is used to extract residential areas from received remote sensing images. Specifically, the training process may be implemented as follows:
firstly, a residential area sample library is constructed, and a residential area mapping special label is adopted. Then, training a network model based on the constructed residential area sample library:
(1) Read in the original image, the DSM data of the corresponding area, and the label image; (2) with the original image as reference, sample and normalize the DSM data and superimpose it onto the original image, adding a channel layer to the original image to obtain the raw input data; (3) pass the raw input data through the conventional semantic segmentation network and the edge feature enhancement layer respectively, and connect and combine their outputs with a 1 × 1 convolution to obtain the output feature map; (4) compute the loss between the output feature map and the label image with the loss function to which the contour constraint has been added; and (5) use the computed loss for back propagation.
By repeating steps (1) to (5) until the network converges, the training of the network model of the residential area extraction result contour linearity processing device of the embodiment of the application is completed.
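Putting steps (1) to (5) together, a schematic training loop might look like the sketch below (PyTorch is assumed; `model` is taken to bundle the segmentation network, the edge feature enhancement branch and the 1 x 1 fusion layer, `dataloader`, the Adam optimizer and the hyperparameters are placeholders, and `joint_loss` refers to the earlier loss sketch):

```python
import torch

def train(model, dataloader, epochs: int = 50, lr: float = 1e-3, device: str = "cuda"):
    """Schematic training loop: forward pass, contour-constrained joint loss, back propagation.

    `dataloader` is assumed to yield (input_image, label) float tensors, where the
    input image already carries the superimposed DSM channel (steps (1) and (2)).
    """
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        for input_image, label in dataloader:
            input_image, label = input_image.to(device), label.to(device)
            logits = model(input_image)              # step (3): output feature map
            pred = torch.sigmoid(logits).squeeze(1)  # (N, H, W) probabilities
            loss = joint_loss(pred, label)           # step (4): joint loss, cf. eq. (4)
            optimizer.zero_grad()
            loss.backward()                          # step (5): back propagation
            optimizer.step()
        print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```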
It should be noted that, although the residential-area extraction result contour linearity processing method and apparatus as described above have been described by taking fig. 1 and 5 as an example, those skilled in the art will understand that the present application should not be limited thereto. In fact, the user can flexibly set the structure and parameters of each network layer in the device according to personal preference and/or practical application scenarios, as long as three processes of DSM data overlay, edge feature enhancement and contour constraint can be added in the network model.
Still further, according to another aspect of the present application, there is also provided a residential area extraction result contour linearity processing apparatus 200. Referring to fig. 6, the residential area extraction result contour linearity processing apparatus 200 of the embodiment of the present application includes a processor 210 and a memory 220 for storing executable instructions of the processor 210. The processor 210 is configured to implement any of the residential area extraction result contour linearity processing methods described above when executing the executable instructions.
Here, it should be noted that the number of processors 210 may be one or more, and each may be a GPU. Meanwhile, the residential area extraction result contour linearity processing apparatus 200 of the embodiment of the present application may further include an input device 230 and an output device 240. The processor 210, the memory 220, the input device 230, and the output device 240 may be connected via a bus or in other ways, which is not specifically limited here.
The memory 220, as a computer-readable storage medium, may be used to store software programs, computer-executable programs, and various modules, such as the programs or modules corresponding to the residential area extraction result contour linearity processing method of the embodiment of the application. The processor 210 executes various functional applications and data processing of the residential area extraction result contour linearity processing apparatus 200 by running the software programs or modules stored in the memory 220.
The input device 230 may be used to receive an input number or signal. Wherein the signal may be a key signal generated in connection with user settings and function control of the device/terminal/server. The output device 240 may include a display device such as a display screen.
According to another aspect of the present application, there is also provided a non-transitory computer readable storage medium having stored thereon computer program instructions which, when executed by the processor 210, implement any of the residential area extraction result contour linearity processing methods described above.
Having described embodiments of the present application, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (9)

1. A method for processing the linearity of the contour of a residential area extraction result is characterized by comprising the following steps:
acquiring an original image, and superimposing DSM data covering the same area as the original image onto the original image to obtain an input image;
performing edge feature enhancement processing on the input image to obtain a first feature map after edge enhancement, and performing feature extraction on the input image by adopting a semantic segmentation network to obtain a corresponding second feature map;
connecting and combining the first feature map and the second feature map to obtain a corresponding output feature map;
and carrying out contour constraint on the output feature map by using a contour constraint loss function, and acquiring a residential area extraction result through the constrained output feature map.
2. The method of claim 1, wherein superimposing DSM data covering the same area as the original image onto the original image comprises:
acquiring the DSM data based on the original image, performing normalization processing on the DSM data, and superimposing the normalized DSM data onto the original image;
wherein the resolution of the acquired DSM data is consistent with the resolution of the original image.
3. The method of claim 1, wherein the edge enhancement processing is performed on the input image by using a Laplacian template.
4. The method of claim 1, wherein the first feature map and the second feature map are connected and combined using a 1 × 1 convolution.
5. The method according to claim 1, wherein when the contour of the output feature map is constrained by a contour constraint loss function, the contour constraint loss function is a joint loss function with directional derivatives added to the loss function of the semantic segmentation network;
wherein the joint loss function is:
$\mathcal{L} = \alpha\,\mathcal{L}_{seg} + \beta\,\mathcal{L}_{dd}$
where $\mathcal{L}_{seg}$ is the loss function of the semantic segmentation network, $\mathcal{L}_{dd}$ is the directional-derivative contour term, and both α and β are loss weighting coefficients.
6. A device for processing the linearity of a contour of a residential area extraction result is characterized by comprising a data superposition module, an edge enhancement processing module, a semantic segmentation module, a connection combination module and a contour constraint module;
the data superposition module is configured to acquire an original image, and superimpose DSM data covering the same area as the original image onto the original image to obtain an input image;
the edge enhancement module is configured to perform edge feature enhancement processing on the input image to obtain a first feature map after edge enhancement;
the semantic segmentation module is configured to extract features of the input image by adopting a semantic segmentation network to obtain a corresponding second feature map;
the connection combination module is configured to connect and combine the first feature map and the second feature map to obtain a corresponding output feature map;
and the contour constraint module is configured to constrain the contour of the output feature map by using a contour constraint loss function, and acquire a residential area extraction result through the constrained output feature map.
7. The apparatus of claim 6, wherein the edge enhancement module comprises a Laplace processing layer, a normalization layer, and a convolutional layer cascaded layer-by-layer;
the Laplace processing layer is configured to adopt a Laplace operator template to operate the input image;
the normalization layer is configured to normalize the input image after the Laplace operator operation using the ReLU and Tanh activation functions;
the convolution layer is configured to output the first feature map after performing feature enhancement on the input feature data after the normalization processing.
8. A residential area extraction result contour linearity processing apparatus, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any one of claims 1 to 5 when executing the executable instructions.
9. A non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any of claims 1 to 5.
CN202010778043.8A 2020-08-05 2020-08-05 Residential area extraction result contour linearity processing method, device and equipment Pending CN112070780A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010778043.8A CN112070780A (en) 2020-08-05 2020-08-05 Residential area extraction result contour linearity processing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010778043.8A CN112070780A (en) 2020-08-05 2020-08-05 Residential area extraction result contour linearity processing method, device and equipment

Publications (1)

Publication Number Publication Date
CN112070780A 2020-12-11

Family

ID=73657669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010778043.8A Pending CN112070780A (en) 2020-08-05 2020-08-05 Residential area extraction result contour linearity processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN112070780A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180089833A1 (en) * 2016-09-27 2018-03-29 Xactware Solutions, Inc. Computer Vision Systems and Methods for Detecting and Modeling Features of Structures in Images
CN106846332A (en) * 2016-12-30 2017-06-13 中国人民解放军61540部队 A kind of remote sensing image variation detection method and device based on DSM
CN110363053A (en) * 2018-08-09 2019-10-22 中国人民解放军战略支援部队信息工程大学 A kind of Settlement Place in Remote Sensing Image extracting method and device
US20200057907A1 (en) * 2018-08-14 2020-02-20 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN110188778A (en) * 2019-05-31 2019-08-30 中国人民解放军61540部队 Residential block element profile rule method based on Extraction of Image result
CN110569790A (en) * 2019-09-05 2019-12-13 中国人民解放军61540部队 Residential area element extraction method based on texture enhancement convolutional network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHUANQI CHENG;XIANGYANG HAO;SONGLIN LIU: "Image segmentation based on 2D Renyi gray entropy and Fuzzy Clustering", 2014 12TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP) *
徐景中; 姚芳: "Research on methods for extracting multi-layer roof contour lines from LIDAR point clouds", Computer Engineering and Applications, no. 32
王永刚; 马彩霞; 刘慧平: "Building contour information extraction based on mathematical morphology", Remote Sensing for Land & Resources, no. 01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination