CN116824396A - Multi-satellite data fusion automatic interpretation method - Google Patents

Multi-satellite data fusion automatic interpretation method

Info

Publication number
CN116824396A
CN116824396A
Authority
CN
China
Prior art keywords
data
feature
target
satellite
linear
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311090810.6A
Other languages
Chinese (zh)
Other versions
CN116824396B (en)
Inventor
华中雄
余建新
罗丹枫
谢正东
李兰
曹文婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Fanxing Information Technology Co ltd
Original Assignee
Hubei Fanxing Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Fanxing Information Technology Co ltd filed Critical Hubei Fanxing Information Technology Co ltd
Priority to CN202311090810.6A priority Critical patent/CN116824396B/en
Publication of CN116824396A publication Critical patent/CN116824396A/en
Application granted granted Critical
Publication of CN116824396B publication Critical patent/CN116824396B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00Adapting or protecting infrastructure or their operation
    • Y02A30/60Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the field of data interpretation, and discloses a multi-satellite data fusion automatic interpretation method for improving the accuracy of automatic interpretation in a multi-satellite data fusion scene. The method comprises the following steps: synchronously acquiring data of a plurality of second urban areas to obtain first remote sensing data, and performing data format conversion and atmospheric correction to obtain a plurality of second remote sensing data; extracting features to obtain a plurality of initial feature data and performing linear transformation to obtain a linear feature set; constructing a covariance matrix, and performing feature correlation calculation on the plurality of linear features according to the covariance matrix to obtain target feature correlation; calculating principal component feature weights in the plurality of linear features according to the target feature correlation, and performing feature fusion on the plurality of initial feature data according to the principal component feature weights to obtain fused feature data; and inputting the fused feature data into a building feature interpretation model to perform building feature interpretation, obtaining a target interpretation result.

Description

Multi-satellite data fusion automatic interpretation method
Technical Field
The invention relates to the field of data interpretation, in particular to a multi-satellite data fusion automatic interpretation method.
Background
With the continuous development of satellite technology, the volume of acquired satellite remote sensing data keeps increasing. In the field of remote sensing, using satellites to obtain remote sensing data of the earth's surface has become a common approach. However, data acquired by a single satellite has limitations, such as constraints on resolution and spectral bands.
Existing schemes are limited by data quality and the reliability of the interpretation algorithm: if the input remote sensing data is of poor quality or the interpretation model is poorly designed, the interpretation result may be inaccurate. In addition, the data fusion and interpretation process usually requires substantial computing resources and time, placing demands on computing power and efficiency; in short, the accuracy of existing schemes is low.
Disclosure of Invention
The invention provides a multi-satellite data fusion automatic interpretation method which is used for improving the accuracy of automatic interpretation in a multi-satellite data fusion scene.
The first aspect of the invention provides a multi-satellite data fusion automated interpretation method, which comprises the following steps:
performing regional division and satellite configuration on a first urban area to be detected to obtain a plurality of second urban areas and target satellites corresponding to each second urban area, and constructing an overlapping acquisition area and an acquisition angle of each target satellite;
carrying out synchronous data acquisition on the plurality of second urban areas according to the overlapping acquisition area and acquisition angle of each target satellite to obtain first remote sensing data of each target satellite, and carrying out data format conversion and atmospheric correction on the first remote sensing data of each target satellite to obtain a plurality of second remote sensing data;
extracting features of the plurality of second remote sensing data to obtain a plurality of initial feature data, and performing linear transformation on the plurality of initial feature data to obtain a linear feature set, wherein the linear feature set comprises a plurality of linear features;
constructing a covariance matrix according to the linear feature set, and performing feature correlation calculation on the plurality of linear features according to the covariance matrix to obtain target feature correlation;
calculating principal component feature weights in the plurality of linear features according to the target feature correlation, and carrying out feature fusion on the plurality of initial feature data according to the principal component feature weights to obtain fusion feature data;
and inputting the fused feature data into a preset building feature interpretation model to perform building feature interpretation to obtain a target interpretation result, wherein the target interpretation result is used for indicating building information and building categories in the target urban area.
With reference to the first aspect, in a first implementation manner of the first aspect of the present invention, the performing area division and satellite configuration on the first urban area to be detected to obtain a plurality of second urban areas and target satellites corresponding to each second urban area, and constructing an overlapping acquisition area and an acquisition angle of each target satellite, includes:
dividing a first urban area to be detected into a plurality of second urban areas, and extracting edge coordinates of the second urban areas to obtain an edge coordinate set of each second urban area;
carrying out satellite acquisition task allocation on a plurality of preset target satellites and a plurality of second urban areas to obtain target satellites corresponding to each second urban area;
calculating the area of each second city area based on the edge coordinate set, and acquiring the data resolution;
creating a satellite orbit intersection scheme of the plurality of target satellites according to the area of each second urban area;
generating an orbit starting point coordinate and an orbit ending point coordinate of each target satellite according to the satellite orbit crossing scheme and the edge coordinate set, and performing track fitting according to the orbit starting point coordinate and the orbit ending point coordinate to generate a target acquisition orbit of each second urban area;
and calculating an overlapping acquisition area of each target satellite according to the area and the data resolution, and setting an acquisition angle of each target satellite according to the target acquisition orbit.
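The step of calculating the area of each second urban area from its edge coordinate set can be sketched with the shoelace formula. The following is a minimal illustration, assuming planar coordinates already expressed in kilometres (a real implementation would project geographic coordinates first); the function name and sample region are illustrative, not from the patent:

```python
import numpy as np

def polygon_area_km2(edge_coords):
    """Shoelace formula: area of a region described by its ordered
    edge coordinates (x, y), here assumed to be in kilometres."""
    pts = np.asarray(edge_coords, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# A 10 km x 10 km square region, matching the 100 km^2 figure used
# for region 1 in the worked example below.
region_1 = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(polygon_area_km2(region_1))  # 100.0
```

The resulting area, together with the data resolution, is what the method then feeds into the overlapping-acquisition-area calculation.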
With reference to the first aspect, in a second implementation manner of the first aspect of the present invention, the performing data synchronous acquisition on the plurality of second urban areas according to the overlapping acquisition area and the acquisition angle of each target satellite to obtain first remote sensing data of each target satellite, and performing data format conversion and atmospheric correction on the first remote sensing data of each target satellite to obtain a plurality of second remote sensing data includes:
data acquisition is carried out on the plurality of second urban areas according to the overlapping acquisition areas and the acquisition angles of each target satellite, so that initial remote sensing data of each target satellite are obtained;
acquiring time stamp data of the initial remote sensing data, and carrying out data synchronization on the initial remote sensing data of each target satellite according to the time stamp data to obtain first remote sensing data of each target satellite;
extracting the band data of the first remote sensing data to obtain the raw data of the red, green and blue bands, and rasterizing the first remote sensing data to obtain raster remote sensing data;
converting the raw data of the red, green and blue bands into the red, green and blue bands of the raster remote sensing data to obtain converted raster remote sensing data;
outputting the converted raster remote sensing data in GeoTIFF format to obtain a GeoTIFF data file, wherein the red, green and blue bands correspond to the first, second and third bands of the GeoTIFF data file, respectively;
constructing an atmospheric radiation transmission model, and carrying out atmospheric parameter estimation on the first remote sensing data to obtain target atmospheric parameters;
and carrying out atmospheric correction on the GeoTIFF data file through the atmospheric radiation transmission model and the target atmospheric parameters to obtain a plurality of second remote sensing data.
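The patent does not specify which atmospheric radiation transmission model is used. As an illustrative stand-in only, the sketch below applies dark-object subtraction (DOS), one of the simplest image-based atmospheric corrections: the darkest pixels are treated as pure atmospheric path radiance and that offset is removed. All values are synthetic; a production correction (e.g. a 6S- or MODTRAN-style model) would also use the estimated atmospheric parameters:

```python
import numpy as np

def dark_object_subtraction(band, percentile=1.0):
    """Estimate the haze offset as a low percentile of the band and
    subtract it, clipping negatives to zero."""
    dark = np.percentile(band, percentile)
    return np.clip(band - dark, 0, None)

rng = np.random.default_rng(0)
# Synthetic single-band digital numbers with a built-in haze offset of ~50.
band = rng.uniform(50, 200, size=(64, 64))
corrected = dark_object_subtraction(band)
print(corrected.min())  # the haze offset has been removed
```

Applied per band of the GeoTIFF file, this yields corrected values analogous to the "second remote sensing data" of the method.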
With reference to the first aspect, in a third implementation manner of the first aspect of the present invention, the extracting features of the plurality of second remote sensing data to obtain a plurality of initial feature data, and performing linear transformation on the plurality of initial feature data to obtain a linear feature set, where the linear feature set includes a plurality of linear features, includes:
performing feature extraction on the plurality of second remote sensing data to obtain a plurality of initial feature data, wherein the plurality of initial feature data comprises: red channel mean, green channel mean, blue channel mean, texture features, and shape features;
normalizing the plurality of initial feature data to obtain normalized feature data;
and performing linear transformation on the normalized feature data to obtain a linear feature set, wherein the linear feature set comprises a plurality of linear features.
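The normalization and linear transformation steps above can be sketched as follows. The feature matrix (five columns for the red/green/blue channel means, texture, and shape features) and the transform `W` are illustrative placeholders, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)
# Rows: samples; columns: R mean, G mean, B mean, texture, shape.
X = rng.uniform(0, 255, size=(100, 5))

# Z-score normalization: zero mean, unit variance per feature.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)

# An arbitrary linear transformation producing the "linear feature set".
W = rng.normal(size=(5, 5))
linear_features = X_norm @ W

print(linear_features.shape)  # (100, 5)
```

Normalizing first puts features with very different scales (channel means vs. shape measures) on equal footing before the linear transformation.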
With reference to the first aspect, in a fourth implementation manner of the first aspect of the present invention, the constructing a covariance matrix according to the linear feature set, and performing feature correlation calculation on the plurality of linear features according to the covariance matrix, to obtain a target feature correlation, includes:
calculating covariance matrix elements according to a plurality of linear features in the linear feature set, and constructing a covariance matrix according to the covariance matrix elements, wherein the calculation formula of the covariance matrix elements is:

C(i, j) = (1/n) * Σ_{k=1}^{n} (x_{k,i} − mean(i)) · (x_{k,j} − mean(j))

where C(i, j) represents the (i, j)-th element of the covariance matrix, x_{k,i} represents the value of the i-th linear feature in the k-th sample, mean(i) represents the mean value of the i-th linear feature, n represents the number of samples, x_{k,j} represents the value of the j-th linear feature in the k-th sample, and mean(j) represents the mean value of the j-th linear feature;
and calculating the correlation among the plurality of linear features according to the covariance matrix to obtain the target feature correlation, wherein the calculation formula of the feature correlation is:

r(i, j) = C(i, j) / (σ_i · σ_j)

where r(i, j) represents the correlation coefficient between linear feature i and linear feature j, C(i, j) represents the (i, j)-th element of the covariance matrix, and σ_i and σ_j represent the standard deviations of feature i and feature j, respectively.
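The two formulas above translate directly into code. A minimal sketch with samples as rows and linear features as columns (the sample data is synthetic):

```python
import numpy as np

def covariance_matrix(F):
    """C(i, j) = (1/n) * sum_k (x_ki - mean(i)) * (x_kj - mean(j)),
    computed for all (i, j) at once via the centered data matrix."""
    n = F.shape[0]
    D = F - F.mean(axis=0)
    return (D.T @ D) / n

def correlation_matrix(F):
    """r(i, j) = C(i, j) / (sigma_i * sigma_j)."""
    C = covariance_matrix(F)
    sigma = np.sqrt(np.diag(C))
    return C / np.outer(sigma, sigma)

rng = np.random.default_rng(2)
F = rng.normal(size=(200, 4))        # 200 samples, 4 linear features
R = correlation_matrix(F)
print(np.allclose(np.diag(R), 1.0))  # each feature correlates perfectly with itself
```

Note the 1/n (biased) normalization follows the formula in the text; numpy's default `np.cov` uses 1/(n−1) unless `bias=True` is passed.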
With reference to the first aspect, in a fifth implementation manner of the first aspect of the present invention, the calculating principal component feature weights in the plurality of linear features according to the target feature correlation, and performing feature fusion on the plurality of initial feature data according to the principal component feature weights, to obtain fused feature data includes:
calculating a plurality of eigenvalues corresponding to the plurality of linear features according to the target feature correlation;
generating corresponding eigenvectors according to the plurality of eigenvalues, and performing principal component normalization on the eigenvectors to obtain principal component feature weights;
multiplying the plurality of initial feature data by the principal component feature weights to obtain a plurality of target products, and summing the plurality of target products to obtain fused feature data, wherein the fused feature data is a linear combination of the plurality of initial feature data in the principal component direction.
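A minimal sketch of this fusion step, using the eigenvector with the largest eigenvalue of the covariance matrix as the principal direction. The data, and the absolute-value normalization of the weights, are illustrative assumptions (the patent does not define "principal component normalization" precisely):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 5))          # 150 samples, 5 initial features
C = np.cov(X, rowvar=False)

# Eigen-decomposition; eigh returns eigenvalues in ascending order,
# so the last column is the leading principal direction.
eigvals, eigvecs = np.linalg.eigh(C)
pc = eigvecs[:, -1]

# Normalise into weights that sum to 1 (illustrative choice).
weights = np.abs(pc) / np.abs(pc).sum()

# Weighted linear combination: one fused feature value per sample.
fused = X @ weights
print(fused.shape)  # (150,)
```

Each fused value is thus a weighted sum of the initial features, with the weights reflecting each feature's contribution along the principal component direction.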
With reference to the first aspect, in a sixth implementation manner of the first aspect of the present invention, the inputting the fused feature data into a preset building feature interpretation model to perform building feature interpretation, to obtain a target interpretation result, where the target interpretation result is used to indicate building information and a building class in the target city area, and includes:
performing feature size adjustment on the fused feature data to obtain target input features;
inputting the target input features into a preset building feature interpretation model, wherein the building feature interpretation model comprises: a three-layer convolutional network, a pooling layer and two fully-connected layers;
performing convolution feature extraction and pooling operations on the target input features through the three-layer convolutional network and the pooling layer to obtain a target one-dimensional vector;
and inputting the target one-dimensional vector into the two fully-connected layers to perform building feature interpretation to obtain a target interpretation result, wherein the target interpretation result is used for indicating building information and building categories in the target urban area.
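A toy forward pass matching this architecture can be sketched in plain numpy. The input size (32×32), 3×3 kernels, layer widths, and the choice of four building classes are all assumptions for illustration; real models would use trained weights and multi-channel convolutions:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (single channel), the building block of
    the three-layer convolutional network described above."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool2(x):
    """2x2 max pooling."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

rng = np.random.default_rng(4)
x = rng.normal(size=(32, 32))        # size-adjusted fused-feature input
for _ in range(3):                   # three conv layers (random kernels here)
    x = np.maximum(conv2d(x, rng.normal(size=(3, 3))), 0)  # conv + ReLU
x = maxpool2(x)                      # pooling layer
vec = x.ravel()                      # the "target one-dimensional vector"

W1 = rng.normal(size=(vec.size, 16))
W2 = rng.normal(size=(16, 4))
logits = np.maximum(vec @ W1, 0) @ W2  # two fully-connected layers -> 4 classes
print(logits.shape)  # (4,)
```

The shape bookkeeping (32 → 30 → 28 → 26 after three valid 3×3 convolutions, then 13×13 after pooling) shows how the flattened vector length that feeds the fully-connected layers is determined.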
A second aspect of the present invention provides a multi-satellite data fusion automated interpretation apparatus, which comprises:
the configuration module is used for carrying out regional division and satellite configuration on a first urban area to be detected, obtaining a plurality of second urban areas and target satellites corresponding to each second urban area, and constructing an overlapping acquisition area and an acquisition angle of each target satellite;
the acquisition module is used for synchronously acquiring data of the plurality of second urban areas according to the overlapping acquisition area and acquisition angle of each target satellite to obtain first remote sensing data of each target satellite, and performing data format conversion and atmospheric correction on the first remote sensing data of each target satellite to obtain a plurality of second remote sensing data;
the extraction module is used for performing feature extraction on the plurality of second remote sensing data to obtain a plurality of initial feature data, and performing linear transformation on the plurality of initial feature data to obtain a linear feature set, wherein the linear feature set comprises a plurality of linear features;
the computing module is used for constructing a covariance matrix according to the linear feature set, and carrying out feature correlation computation on the plurality of linear features according to the covariance matrix to obtain target feature correlation;
the fusion module is used for calculating principal component feature weights in the plurality of linear features according to the target feature correlation, and performing feature fusion on the plurality of initial feature data according to the principal component feature weights to obtain fused feature data;
and the interpretation module is used for inputting the fused feature data into a preset building feature interpretation model to perform building feature interpretation to obtain a target interpretation result, wherein the target interpretation result is used for indicating building information and building categories in the target urban area.
A third aspect of the present invention provides a multi-satellite data fusion automated interpretation device, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the multi-satellite data fusion automated interpretation device to perform the multi-satellite data fusion automated interpretation method described above.
A fourth aspect of the invention provides a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the multi-satellite data fusion automated interpretation method described above.
In the technical scheme provided by the invention, synchronous data acquisition is performed on a plurality of second urban areas to obtain first remote sensing data, and data format conversion and atmospheric correction are performed to obtain a plurality of second remote sensing data; features are extracted to obtain a plurality of initial feature data, which are linearly transformed to obtain a linear feature set; a covariance matrix is constructed, and feature correlation calculation is performed on the plurality of linear features according to the covariance matrix to obtain target feature correlation; principal component feature weights in the plurality of linear features are calculated according to the target feature correlation, and feature fusion is performed on the plurality of initial feature data according to the principal component feature weights to obtain fused feature data. By fusing data from a plurality of satellites, the method can acquire multi-source, multi-angle and multi-spectral remote sensing data, thereby improving interpretation accuracy. Data from different satellites have different resolutions, observation capabilities and spectral bands; comprehensively utilizing these data compensates for the shortcomings of single-satellite data, reduces the influence of errors and missing remote sensing data, and improves the accuracy of the interpretation result. Using a plurality of satellites for data acquisition and fusion also increases the spatial resolution and coverage of the data. The different orbits and acquisition capabilities of the satellites allow them to acquire remote sensing data of different areas; by fusing these data, more comprehensive and finer target urban area information can be acquired, providing a richer data base.
By establishing an interpretation model and an automated algorithm, a large amount of remote sensing data can be rapidly and efficiently interpreted into building monitoring information for the target urban area. Compared with traditional manual interpretation, automated interpretation has higher efficiency and consistency, can process large-scale data sets, and provides building monitoring results updated in real time, further improving the accuracy of automated interpretation in a multi-satellite data fusion scene.
Drawings
FIG. 1 is a schematic diagram of one embodiment of a multi-satellite data fusion automated interpretation method in an embodiment of the invention;
FIG. 2 is a flow chart of a region division and satellite configuration in accordance with an embodiment of the present invention;
FIG. 3 is a flow chart of data format conversion and atmospheric correction in an embodiment of the invention;
FIG. 4 is a flow chart of building feature interpretation in an embodiment of the invention;
FIG. 5 is a schematic diagram of an embodiment of a multi-satellite data fusion automated interpretation apparatus in an embodiment of the present invention;
FIG. 6 is a schematic diagram of an embodiment of a multi-satellite data fusion automated interpretation device in an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a multi-satellite data fusion automatic interpretation method which is used for improving the accuracy of automatic interpretation in a multi-satellite data fusion scene. The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present invention is described below with reference to FIG. 1. One embodiment of the multi-satellite data fusion automated interpretation method in an embodiment of the present invention includes:
s101, carrying out regional division and satellite configuration on a first urban area to be detected to obtain a plurality of second urban areas and target satellites corresponding to each second urban area, and constructing an overlapping acquisition area and an acquisition angle of each target satellite;
it can be understood that the execution subject of the present invention may be a multi-satellite data fusion automated interpretation device, and may also be a terminal or a server, which is not limited herein. The embodiment of the present invention is described taking a server as the execution subject as an example.
Specifically, the server performs region division on the first urban area to be detected, dividing it into a plurality of second urban areas. For each second urban area, its set of edge coordinates is extracted to describe the shape and boundary of the area. Satellite acquisition tasks are then allocated between a plurality of preset target satellites and the second urban areas: the target satellite corresponding to each second urban area is determined by considering factors such as the satellite's coverage area, resolution requirements and availability. The area of each second urban area is calculated based on the edge coordinate set, and the data resolution is acquired. A satellite orbit intersection scheme for the plurality of target satellites is created based on the area of each second urban area; this scheme takes into account the orbital arrangement of the satellites to ensure coverage of the entire area and to maximize data acquisition efficiency. The orbit start point and end point coordinates of each target satellite are generated using the satellite orbit intersection scheme and the edge coordinate set. Track fitting is performed on the start and end point coordinates to generate a target acquisition orbit for each second urban area, ensuring data acquisition by the satellites in these areas. The overlapping acquisition area of each target satellite is calculated according to the area and the data resolution; these overlapping areas are coverage areas shared between adjacent satellites, which facilitate subsequent data fusion and improve interpretation accuracy. Finally, an acquisition angle is set for each target satellite according to the target acquisition orbit. In this way, data acquisition coverage can be optimized, ensuring full utilization of the overlapping areas and high data quality.
For example, assume that a server is to automatically interpret satellite data fusion for an urban area of a city, which includes multiple areas, such as a city center, residential area, and industrial area. The server uses three target satellites A, B and C to cover these areas. Dividing the urban area into three second urban areas: region 1 represents the city center, region 2 represents the residential area, and region 3 represents the industrial area. For each second urban area, extracting an edge coordinate set of the second urban area, and describing the boundary of the area. And performing satellite acquisition task allocation. Satellite a is assigned to region 1 and region 2, satellite B is assigned to region 2 and region 3, and satellite C is assigned to region 1 and region 3, depending on the satellite's capabilities and availability. Each second urban area has a corresponding target satellite. And calculating the area of each second city area based on the edge coordinate set, and acquiring the data resolution. Assuming that the area of region 1 is 100 square kilometers, the area of region 2 is 200 square kilometers, and the area of region 3 is 150 square kilometers. The data resolution is 1 m/pixel. Based on the area of the region, a satellite orbital intersection scheme is created. Considering that region 1 and region 2 are relatively close, the server arranges satellites a and B to orbit in cross-tracks. Satellites B and C are responsible for covering areas 2 and 3, which also run in cross-tracks. And generating the orbit start point coordinates and the orbit end point coordinates of each target satellite by using a satellite orbit intersection scheme and an edge coordinate set. By performing a trajectory fit on these start and end coordinates, a target acquisition trajectory for each second urban area is generated. And calculating the overlapped acquisition area of each target satellite according to the area and the data resolution. 
For satellite a and satellite B, they have overlapping areas in area 2, where data fusion and interpretation can be performed. Similarly, satellite a and satellite C have overlapping regions in region 1 and satellite B and satellite C have overlapping regions in region 3. And setting an acquisition angle for each target satellite according to the target acquisition orbit. For example, satellite a starts acquisition at the orbital origin of region 1, covering region 1 at a 45 degree angle; the acquisition is ended at the end of the track of zone 2, covering zone 2 at a 30 degree angle. Other satellites are also angularly positioned in a similar manner to ensure coverage of the target area. In this embodiment, the server successfully realizes the regional division and satellite configuration of the first urban area to be detected. The server divides the urban area into a plurality of second urban areas and allocates a corresponding target satellite for each area. Meanwhile, the area of each region is calculated according to the edge coordinate set, and data resolution information is acquired.
S102, synchronously acquiring data of the plurality of second urban areas according to the overlapping acquisition area and acquisition angle of each target satellite to obtain first remote sensing data of each target satellite, and performing data format conversion and atmospheric correction on the first remote sensing data of each target satellite to obtain a plurality of second remote sensing data;
specifically, the server performs region division on the first urban area to be detected, dividing it into a plurality of second urban areas. By extracting the edge coordinate set, the server obtains the edge coordinates of each second urban area. The server then distributes satellite acquisition tasks among a plurality of preset target satellites and the second urban areas, so that each second urban area corresponds to a target satellite. From the edge coordinate set, the server calculates the area of each second urban area and obtains the data resolution. Based on the region areas, the server creates a satellite orbit intersection scheme for each target satellite to ensure coverage of each second urban area. The server generates a target acquisition track for each second urban area by calculating the track start and end coordinates and performing track fitting. Combining the region area and the data resolution, the server calculates the overlapping acquisition region of each target satellite. These overlapping regions provide coverage between adjacent satellites to enhance the accuracy and reliability of the data. In the data acquisition stage, according to the overlapping acquisition regions and acquisition angles, the server synchronously acquires data of the second urban areas and obtains the first remote sensing data of each target satellite. The server then performs data format conversion and atmospheric correction. The raw data of the red, green, and blue bands are extracted from the first remote sensing data and rasterized to generate raster remote sensing data. The raw data are converted into the corresponding bands of the raster remote sensing data, ensuring the consistency and processability of the data. The converted raster remote sensing data are then output in GeoTIFF format to generate a GeoTIFF data file.
In this document, the red, green, and blue bands correspond to the first, second, and third bands, respectively. To perform atmospheric correction, the server builds an atmospheric radiation transmission model and performs atmospheric parameter estimation on the first remote sensing data to obtain the target atmospheric parameters. Using the atmospheric radiation transmission model and the target atmospheric parameters, the server performs atmospheric correction on the GeoTIFF data file. The purpose of the atmospheric correction is to eliminate the effects of atmospheric scattering and absorption on the remote sensing data, yielding more accurate surface reflectance or radiance values. In this embodiment, the server thus performs region division and satellite configuration on the first urban area to be detected, synchronously acquires data according to the overlapping acquisition region and acquisition angle of each target satellite, and performs format conversion and atmospheric correction on the data, finally obtaining a plurality of second remote sensing data. An example illustrates the entire flow. Assume that the first urban area is a metropolitan region comprising a plurality of second urban areas such as a city center, a suburban area, and a harbor area. The server selects three target satellites for remote sensing data acquisition and monitoring. The server divides the first urban area into three second urban areas, namely the city center, the suburban area, and the harbor area, and extracts the edge coordinate set of each. The server distributes acquisition tasks to the three target satellites, each responsible for monitoring the city center, the suburban area, or the harbor area, respectively. From the edge coordinate sets, the server calculates the area of each second urban area and obtains the data resolution.
This information helps the server determine the satellite orbit intersection scheme and the overlapping acquisition regions. Based on the region areas, the server creates a satellite orbit intersection scheme for the three target satellites, ensuring coverage of each region. By calculating the coordinates of the track start and end points and performing track fitting, the server generates a target acquisition track for each second urban area. Combining the region area and the data resolution, the server calculates the overlapping acquisition region of each target satellite to enhance the accuracy and reliability of the data. In the data acquisition stage, according to the overlapping acquisition regions and acquisition angles, the server synchronously acquires data of the city center, suburban area, and harbor area, obtaining the first remote sensing data of each target satellite. The server performs format conversion and atmospheric correction on the first remote sensing data: it extracts the raw data of the red, green, and blue bands and converts them into the raster remote sensing data format. By constructing an atmospheric radiation transmission model and estimating the target atmospheric parameters, the server performs atmospheric correction on the raster remote sensing data, eliminating the influence of atmospheric scattering and absorption, and obtains a plurality of second remote sensing data.
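The band extraction and rasterization step can be sketched as stacking the raw red, green, and blue arrays into a single multi-band raster in the band order the text describes. A minimal numpy sketch with illustrative pixel values; writing the stack to an actual GeoTIFF would typically use a library such as rasterio or GDAL, as noted in the trailing comment:

```python
import numpy as np

# Hypothetical raw band arrays extracted from the first remote sensing
# data (values are illustrative digital numbers, not real imagery).
red   = np.array([[10, 20], [30, 40]], dtype=np.uint16)
green = np.array([[11, 21], [31, 41]], dtype=np.uint16)
blue  = np.array([[12, 22], [32, 42]], dtype=np.uint16)

# Rasterize into a single (bands, rows, cols) grid: band 1 = red,
# band 2 = green, band 3 = blue, matching the GeoTIFF band order
# described in the text.
raster = np.stack([red, green, blue], axis=0)
print(raster.shape)  # (3, 2, 2)

# Writing this stack as a GeoTIFF would typically look like:
#   with rasterio.open("out.tif", "w", driver="GTiff", count=3,
#                      height=2, width=2, dtype="uint16") as dst:
#       dst.write(raster)
```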
S103, carrying out feature extraction on a plurality of second remote sensing data to obtain a plurality of initial feature data, and carrying out linear transformation on the plurality of initial feature data to obtain a linear feature set, wherein the linear feature set comprises a plurality of linear features;
it should be noted that, for each second remote sensing data, the server performs feature extraction to obtain a plurality of initial feature data. These initial feature data include the red channel mean, green channel mean, blue channel mean, texture features, and shape features. The red, green, and blue channel means reflect the average brightness values of the different bands in the remote sensing image, the texture features describe the texture information of the image, and the shape features represent the geometric shape of the target. To compare and analyze the data, the server normalizes these initial feature data. Normalization unifies the value ranges of different feature data into the same interval and eliminates bias caused by differing dimensions. Common normalization methods include linearly mapping the data to [0, 1] or standardizing it to zero mean and unit variance. After normalization, the server performs a linear transformation to obtain a linear feature set. The linear transformation is realized by matrix multiplication: the normalized feature data are multiplied by a linear transformation matrix to obtain the linear features. The linear feature set includes a plurality of linear features that describe the linear relationships between different features in the remote sensing image. For example, assume that the server has three second remote sensing data, corresponding to remote sensing images of three urban areas. The server extracts the red, green, and blue channel means, texture features, and shape features of each image as initial feature data, and then normalizes them: for example, the red, green, and blue channel means are linearly mapped to [0, 1], and the texture and shape features are standardized.
After normalization is completed, the server performs linear transformation on the normalized feature data using a linear transformation matrix. Assume that the linear transformation matrix is:
[0.5 0.2 0.3 0.1 0.4];
[0.1 0.3 0.5 0.4 0.2];
[0.4 0.5 0.2 0.3 0.1].
The server multiplies the normalized feature data by this matrix to obtain the linearly transformed fused feature data. Assume that the server has the following normalized feature data: [0.8, 0.6, 0.4, 0.9, 0.7]. Multiplying this feature vector by the linear transformation matrix yields the fused feature data:

fused = M · x = [0.5·0.8 + 0.2·0.6 + 0.3·0.4 + 0.1·0.9 + 0.4·0.7, 0.1·0.8 + 0.3·0.6 + 0.5·0.4 + 0.4·0.9 + 0.2·0.7, 0.4·0.8 + 0.5·0.6 + 0.2·0.4 + 0.3·0.9 + 0.1·0.7] = [1.01, 0.96, 1.04]

In this example, the server linearly transforms the normalized feature data using the given linear transformation matrix. Through matrix multiplication, each feature is multiplied by its corresponding weight and the products are summed, yielding the fused feature data. The linear transformation maps the original feature data into a new feature space for subsequent building feature interpretation or other tasks.
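The matrix multiplication can be reproduced directly with numpy; the feature ordering in the comments follows the illustrative example (three fused features from five normalized inputs):

```python
import numpy as np

# Linear transformation matrix from the example: each row produces one
# fused feature as a weighted combination of the five input features.
M = np.array([
    [0.5, 0.2, 0.3, 0.1, 0.4],
    [0.1, 0.3, 0.5, 0.4, 0.2],
    [0.4, 0.5, 0.2, 0.3, 0.1],
])

# Normalized feature data: [red mean, green mean, blue mean,
# texture feature, shape feature], all scaled to [0, 1].
x = np.array([0.8, 0.6, 0.4, 0.9, 0.7])

fused = M @ x
print(np.round(fused, 2))  # [1.01 0.96 1.04]
```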
S104, constructing a covariance matrix according to the linear feature set, and carrying out feature correlation calculation on a plurality of linear features according to the covariance matrix to obtain target feature correlation;
specifically, for each feature i in the linear feature set, the server calculates its mean, mean(i). This is obtained by traversing all samples, summing the values of feature i in each sample, and dividing by the number of samples N. The server then calculates the covariance between linear feature i and feature j using the covariance matrix element formula:

cov(i, j) = (1/N) · Σ_k [X(i, k) − mean(i)] · [X(j, k) − mean(j)]

In this formula, X(i, k) represents the value of the ith linear feature in the kth sample, mean(i) represents the average value of the ith linear feature, N represents the number of samples, X(j, k) represents the value of the jth linear feature in the kth sample, and mean(j) represents the average value of the jth linear feature. By traversing all the samples, the server calculates the covariance cov(i, j) between each pair of features i and j in the linear feature set, thereby constructing the covariance matrix. After obtaining the covariance matrix, the server uses it to calculate the correlation between the linear features via the correlation coefficient formula:

corr(i, j) = cov(i, j) / (std(i) · std(j))

where corr(i, j) represents the correlation coefficient of linear feature i and linear feature j, cov(i, j) represents the (i, j)th element of the covariance matrix, and std(i) and std(j) represent the standard deviations of feature i and feature j, respectively. By calculating the correlation coefficients, the server obtains the correlations between all the features in the linear feature set. These feature correlations describe the strength of the linear relationships between the linear features and can be used in further analysis and applications. An example illustrates the above procedure.
Assume that the server has a linear feature set comprising 4 linear features: feature A, feature B, feature C, and feature D, with 100 samples to analyze. The server calculates the averages mean(A), mean(B), mean(C), and mean(D) for each feature. Using the covariance matrix element formula, the server calculates the covariance cov(A, B) between feature A and feature B, the covariance cov(A, C) between feature A and feature C, the covariance cov(A, D) between feature A and feature D, and the covariances between the other feature pairs, thereby obtaining the covariance matrix. The server then uses the covariance matrix to calculate the correlations between the features. For example, to calculate the correlation coefficient between feature A and feature B, the server uses the correlation coefficient formula corr(A, B) = cov(A, B) / (std(A) · std(B)), where std(A) and std(B) represent the standard deviations of feature A and feature B, respectively. In a similar manner, the server calculates the correlation coefficients between the other features, such as corr(A, C), corr(A, D), corr(B, C), corr(B, D), and corr(C, D). Finally, the server obtains a correlation matrix between the features in the linear feature set, where each element represents the correlation coefficient between the corresponding features. This feature correlation calculation helps the server understand the degree of correlation between the linear features, enabling further analysis and application of the features' potential value in fields such as data analysis and machine learning.
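The covariance and correlation computation above can be sketched with numpy; the 100-sample, 4-feature matrix below is randomly generated stand-in data for features A, B, C, and D:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sample matrix: 100 samples of 4 linear features
# (feature A, B, C, D), as in the example.
X = rng.normal(size=(100, 4))

N = X.shape[0]
mean = X.mean(axis=0)

# Covariance matrix element:
# cov(i, j) = (1/N) * sum_k (X[k, i] - mean[i]) * (X[k, j] - mean[j])
centered = X - mean
cov = centered.T @ centered / N

# Correlation coefficient: corr(i, j) = cov(i, j) / (std(i) * std(j))
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)

# Cross-check against numpy's built-in estimator.
assert np.allclose(corr, np.corrcoef(X, rowvar=False))
```

Each diagonal element of `corr` is 1 (a feature is perfectly correlated with itself), and the matrix is symmetric, matching the properties of the correlation matrix described in the text.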
S105, calculating main component feature weights in the plurality of linear features according to the target feature correlation, and carrying out feature fusion on the plurality of initial feature data according to the main component feature weights to obtain fusion feature data;
specifically, the server calculates the eigenvalues corresponding to the plurality of linear features according to the target feature correlation. An eigenvalue reflects the importance of a feature in the dataset: a larger eigenvalue indicates that the feature explains more of the data. The server generates the corresponding eigenvectors from the eigenvalues and performs principal component normalization on them. Principal component normalization is a commonly used data processing technique that scales the numerical range of the eigenvectors to a normalized interval, such as [0, 1] or [-1, 1], ensuring that the weights of the various features are comparable. The server multiplies the plurality of initial feature data by the principal component feature weights to obtain a plurality of target products, each being the product of a feature and its corresponding principal component feature weight. The server then sums the target products to obtain the fused feature data. The fused feature data is a linear combination of the initial feature data along the principal component direction, integrating the importance and weight of the different features in the principal component analysis. For example, assume that the server has four initial feature data: feature A, feature B, feature C, and feature D. From the target feature correlation, the server obtains the corresponding eigenvalues: eigenvalue A, eigenvalue B, eigenvalue C, and eigenvalue D. The server generates the corresponding eigenvectors and performs principal component normalization to obtain the principal component feature weights: weight A, weight B, weight C, and weight D.
The server multiplies the initial characteristic data by the characteristic weights of the main components to obtain four target products: a target product A, a target product B, a target product C and a target product D. And the server performs addition operation on the four target products to obtain fusion characteristic data. The fused feature data is a linear combination of the initial feature data in the principal component direction, which reflects the integrated features of the initial feature data. In this embodiment, the server calculates the feature weight of the principal component in the linear feature by using the target feature correlation, and applies it to feature fusion of the initial feature data, thereby obtaining more comprehensive and representative fused feature data. This helps to extract important information in the data and plays a greater role in the fields of data analysis, pattern recognition, and the like. The method has the advantages that redundant information among the features can be reduced, the expression capacity of the features is improved, the data dimension is reduced, and subsequent data analysis and modeling are facilitated.
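One way to derive principal component feature weights from a feature correlation matrix and fuse the initial features can be sketched as follows. The correlation values and feature data are illustrative assumptions, and taking the normalized leading eigenvector as the weight vector is one common choice, not necessarily the patent's exact method:

```python
import numpy as np

# Illustrative correlation matrix for four features A, B, C, D
# (the target feature correlation from the previous step).
corr = np.array([
    [1.0, 0.8, 0.3, 0.1],
    [0.8, 1.0, 0.4, 0.2],
    [0.3, 0.4, 1.0, 0.5],
    [0.1, 0.2, 0.5, 1.0],
])

# Eigen-decomposition: the eigenvalues measure how much variance each
# principal direction explains; the leading eigenvector gives the
# principal component feature weights.
eigvals, eigvecs = np.linalg.eigh(corr)
leading = eigvecs[:, np.argmax(eigvals)]

# Normalize the weights so they are non-negative and sum to 1.
weights = np.abs(leading) / np.abs(leading).sum()

# Feature fusion: multiply each initial feature by its weight and sum.
features = np.array([0.8, 0.6, 0.4, 0.9])  # features A..D (illustrative)
fused = float(np.dot(weights, features))
```

The weighted sum reduces the four initial features to a single fused value that emphasizes the features most aligned with the dominant principal component.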
S106, inputting the fusion characteristic data into a preset building characteristic interpretation model to perform building characteristic interpretation, and obtaining a target interpretation result, wherein the target interpretation result is used for indicating building information and building types in a target city area.
Specifically, feature size adjustment is performed on the fused feature data so as to adapt to the input requirements of the building feature interpretation model. This may involve adapting the fused feature data to match the input feature size desired by the architectural feature interpretation model. And transmitting the adjusted target input characteristics to a preset building characteristic interpretation model. This model may include three convolutional networks, a pooling layer, and two fully connected layers. Convolutional networks are used to extract local features in feature maps, while pooling operations are used to reduce the size of feature maps and preserve critical information. The full connection layer is used for interpreting and classifying the extracted features. In a building feature interpretation model, firstly, carrying out convolution feature extraction and pooling operation on target input features through a three-layer convolution network and a pooling layer, thereby obtaining a one-dimensional target vector. This target vector will contain key feature information extracted from the fused feature data. And inputting the target vector into two fully connected layers for building feature interpretation. The fully connected layer will use the learned weights and offsets to linearly transform and non-linearly map the target vector to obtain the final interpretation result. This result will indicate building information in the target metropolitan area as well as building categories, which may be tags, attributes, types, or other relevant information about the building. For example, assume that the server has a fused feature data set that includes features such as building area, number of office buildings, and number of residential buildings. The server wishes to use these feature data to make building feature interpretations to predict the type of building (e.g., residential, business, or office). 
The server adjusts the feature size of the fusion feature data and converts the fusion feature data into an input size suitable for the building feature interpretation model. And transmitting the adjusted target input characteristics to a building characteristic interpretation model. This model may consist of three convolutional networks, a pooling layer and two fully connected layers. The model carries out convolution feature extraction and pooling operation on the target input features through a convolution network and a pooling layer, so that a target one-dimensional vector is obtained, wherein each element represents the importance or the correlation degree of a building feature. And transmitting the target one-dimensional vector to two fully connected layers for building feature interpretation. The fully connected layer will perform linear transformation and non-linear mapping on the input target one-dimensional vector to infer the type of building. For example, a first fully connected layer may map a target one-dimensional vector to one intermediate representation, while a second fully connected layer maps the intermediate representation to a different building type category. Through the process, the server inputs the fusion characteristic data into a building characteristic interpretation model, and a target interpretation result, namely the type of a building, is obtained through the processing of the convolution and the full connection layer. For example, the interpretation result may indicate whether the building is a residential, commercial, or office building. For example, assume that the server has a set of building data including area, number of office buildings, and number of residential buildings as fusion features. 
The server transmits the features to the building feature interpretation model and, through the processing of the convolution and fully connected layers, obtains the following target interpretation result. Input features: area = 150 square meters, number of office buildings = 3, number of residential buildings = 2. Target interpretation result: residential building. In this example, the server deduces that the building is a residential building based on the interpretation of the fused feature data. This process helps the server automatically interpret building characteristics and predict building types, providing useful information and a decision basis for the real estate industry, building market analysis, and other applications.
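The three-convolution, pooling, two-fully-connected structure can be sketched as a forward pass. This numpy version uses random, untrained weights and an assumed 32x32 input feature map purely to illustrate the data flow, not the patent's actual trained model:

```python
import numpy as np

rng = np.random.default_rng(42)

def conv2d(x, kernel):
    """Valid 2-D cross-correlation of a single-channel map + ReLU."""
    kh, kw = kernel.shape
    h = x.shape[0] - kh + 1
    w = x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling that shrinks the feature map."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Hypothetical 32x32 fused-feature map after feature size adjustment.
feature_map = rng.random((32, 32))

# Three convolution + pooling stages extract local features and shrink
# the map, as described for the interpretation model.
x = feature_map
for _ in range(3):
    x = max_pool(conv2d(x, rng.normal(size=(3, 3))))

target_vector = x.ravel()  # one-dimensional target vector

# Two fully connected layers map the vector to building-type scores
# (e.g. residential, commercial, office); the weights are random
# stand-ins, not trained parameters.
W1 = rng.normal(size=(8, target_vector.size))
W2 = rng.normal(size=(3, 8))
hidden = np.maximum(W1 @ target_vector, 0.0)
scores = W2 @ hidden
predicted_class = int(np.argmax(scores))
```

With a 32x32 input and 3x3 kernels, the map shrinks 32 → 30 → 15 → 13 → 6 → 4 → 2 across the three stages, so the target vector has 4 elements before the fully connected layers.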
In the embodiment of the invention, data synchronous acquisition is carried out on a plurality of second urban areas to obtain first remote sensing data, and data format conversion and atmosphere correction are carried out to obtain a plurality of second remote sensing data; extracting features to obtain a plurality of initial feature data and performing linear transformation to obtain a linear feature set; constructing a covariance matrix, and performing feature correlation calculation on a plurality of linear features according to the covariance matrix to obtain target feature correlation; calculating main component feature weights in the plurality of linear features according to the target feature correlation, and carrying out feature fusion on the plurality of initial feature data according to the main component feature weights to obtain fused feature data; the method and the device can acquire multi-source, multi-angle and multi-spectrum remote sensing data through data fusion of a plurality of satellites, thereby improving the interpretation accuracy. The data of different satellites have different resolutions, observation capacities and remote sensing wave bands, and the defects of single satellite data can be made up by comprehensively utilizing the data, so that the influence of errors and remote sensing data loss is reduced, and the accuracy of an interpretation result is improved. The data acquisition and fusion are performed by using a plurality of satellites, so that the spatial resolution and coverage of the data can be increased. The orbit and acquisition capacity of different satellites enable the satellites to acquire remote sensing data of different areas, and by fusing the data, more comprehensive and finer target city area information can be acquired, so that a richer data base is provided. 
By establishing an interpretation model and an automation algorithm, a large amount of remote sensing data can be rapidly and efficiently interpreted into building monitoring information of a target city area. Compared with the traditional manual interpretation method, the automatic interpretation has higher efficiency and consistency, can process a large-scale data set, provides a building monitoring result updated in real time, and further improves the accuracy of the automatic interpretation in a multi-satellite data fusion scene.
In a specific embodiment, as shown in fig. 2, the process of performing step S101 may specifically include the following steps:
S201, carrying out region division on a first urban region to be detected to obtain a plurality of second urban regions, and carrying out edge coordinate extraction on the plurality of second urban regions to obtain an edge coordinate set of each second urban region;
S202, carrying out satellite acquisition task allocation on a plurality of preset target satellites and a plurality of second urban areas to obtain target satellites corresponding to each second urban area;
S203, calculating the area of each second city area based on the edge coordinate set, and acquiring the data resolution;
S204, creating a satellite orbit intersection scheme of a plurality of target satellites according to the area of each second urban area;
S205, generating an orbit starting point coordinate and an orbit ending point coordinate of each target satellite according to a satellite orbit crossing scheme and an edge coordinate set, and performing track fitting according to the orbit starting point coordinate and the orbit ending point coordinate to generate a target acquisition orbit of each second urban area;
S206, calculating an overlapping acquisition area of each target satellite according to the area and the data resolution, and setting an acquisition angle of each target satellite according to the target acquisition orbit.
Specifically, the server performs regional division on the first urban area to be detected, and divides the first urban area into a plurality of second urban areas. This can be achieved by spatial analysis or image segmentation, etc. For each second urban area, the server performs edge coordinate extraction to obtain an edge coordinate set thereof. This may be achieved using edge detection algorithms or image processing techniques. The server performs satellite acquisition task allocation on a plurality of preset target satellites and a plurality of second urban areas. This involves assigning each second urban area to a respective target satellite based on the regional characteristics, satellite capabilities and mission requirements using a mission assignment algorithm, such as a greedy algorithm or genetic algorithm. The server calculates the area of each second urban area based on the edge coordinate set of the second urban area, and obtains the data resolution. This can be achieved by calculating the area of the region enclosed by the edge coordinates and combining the relevant data information to obtain the data resolution. Based on the area and the data resolution, the server will create a satellite orbit intersection scheme for a plurality of target satellites. This means that the manner of orbital intersection of each target satellite between different regions is determined based on the orbits of the satellites and the characteristics of the regions. And the server generates orbit starting point coordinates and end point coordinates of each target satellite according to the satellite orbit crossing scheme and the edge coordinate set, performs track fitting, and generates a target acquisition orbit of each second urban area. This involves calculating the trajectory of each target satellite in each region by means of a mathematical model or interpolation algorithm based on the orbital parameters of the satellite and the region edge coordinates. 
In this embodiment, the server implements planning and task allocation of the target satellite acquisition orbits for the plurality of second urban areas by dividing the urban area to be detected, extracting edge coordinates, allocating satellite tasks, calculating region areas, acquiring the data resolution, creating an orbit intersection scheme, and fitting tracks. This provides the server with an efficient solution for collecting building characteristic information within each area. For example, assume that the server has one urban area to be detected, which it divides into four second urban areas (regions A, B, C, and D). Through edge coordinate extraction, the server obtains an edge coordinate set for each region. The server has three preset target satellites (Satellite1, Satellite2, and Satellite3) available for acquisition tasks. Through the satellite mission allocation algorithm, the server allocates region A to Satellite1, region B to Satellite2, and region C to Satellite3, and reserves region D as a spare. From the edge coordinate set of each second urban area, the server calculates the area of each region and acquires the corresponding data resolution. For example, the area of region A is 1000 square meters with a data resolution of 1 meter/pixel; the area of region B is 2000 square meters with a data resolution of 0.5 meters/pixel. Based on the region areas and the data resolution, the server creates a satellite orbit intersection scheme for each target satellite: for Satellite1, its tracks intersect in region A; for Satellite2, in region B; and for Satellite3, in region C.
And generating an orbit starting point coordinate and an orbit end point coordinate of each target satellite by the server according to the satellite orbit crossing scheme and the edge coordinate set, and performing track fitting to obtain a target acquisition orbit of each second urban area. This will ensure that the building features within each area are fully covered and collected. And according to the area and the data resolution, the server calculates the overlapping acquisition area of each target satellite, and sets the acquisition angle of each satellite according to the target acquisition orbit. This will ensure that the architectural features within each area can be more fully observed and recorded through the acquisition angles of multiple satellites.
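The track fitting from orbit start and end coordinates can be sketched with a polynomial fit. The coordinates below are hypothetical samples (start point, an intermediate sample, end point), and the quadratic model is an illustrative choice rather than the patent's specified fitting method:

```python
import numpy as np

# Hypothetical orbit start, intermediate, and end coordinates for one
# target satellite over a second urban area (longitude, latitude).
lon = np.array([114.00, 114.15, 114.30])
lat = np.array([30.40, 30.52, 30.60])

# Track fitting: a quadratic fit of latitude against longitude, with
# the longitude centered to keep the fit well conditioned.
lon0 = lon.mean()
coeffs = np.polyfit(lon - lon0, lat, deg=2)
track = np.poly1d(coeffs)

# Three points and three coefficients give an (almost) exact
# interpolating track through the start and end coordinates.
fitted = track(lon - lon0)
```

The fitted polynomial can then be sampled densely between the orbit start and end points to generate the target acquisition track for the region.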
In a specific embodiment, as shown in fig. 3, the process of executing step S102 may specifically include the following steps:
S301, acquiring data of a plurality of second urban areas according to the overlapping acquisition areas and the acquisition angles of each target satellite to obtain initial remote sensing data of each target satellite;
S302, acquiring time stamp data of initial remote sensing data, and carrying out data synchronization on the initial remote sensing data of each target satellite according to the time stamp data to obtain first remote sensing data of each target satellite;
S303, extracting band data of the first remote sensing data to obtain original data of red, green and blue bands, and rasterizing the first remote sensing data to obtain raster remote sensing data;
S304, converting the original data of the red, green and blue wave bands into the red, green and blue wave bands in the grid remote sensing data to obtain converted grid remote sensing data;
S305, outputting the converted grid remote sensing data into a GeoTIFF format to obtain a GeoTIFF data file, wherein red, green and blue wave bands respectively correspond to a first wave band, a second wave band and a third wave band of the GeoTIFF data file;
S306, constructing an atmospheric radiation transmission model, and carrying out atmospheric parameter estimation on the first remote sensing data to obtain target atmospheric parameters;
S307, performing atmospheric correction on the GeoTIFF data file through the atmospheric radiation transmission model and the target atmospheric parameters to obtain a plurality of second remote sensing data.
Specifically, the server determines the data range to be acquired in the plurality of second urban areas according to the overlapping acquisition area and the acquisition angle of each target satellite. This may be determined based on the field of view of the satellite sensor and the telemetry data acquisition requirements. And acquiring data of a plurality of second urban areas by using a preset target satellite. The satellites acquire remote sensing data in the overlapping acquisition areas according to the designated acquisition angles, and initial remote sensing data of each target satellite are obtained. And acquiring the time stamp data of the initial remote sensing data, and carrying out data synchronization on the initial remote sensing data of each target satellite according to the time stamp data. This ensures that the telemetry data for each satellite is synchronized in time for subsequent processing and analysis. And extracting the band data of the first remote sensing data to obtain the original data of red, green and blue bands. This can be extracted based on the band configuration of the satellite sensors and the remote sensing data format. And carrying out rasterization processing on the first remote sensing data to obtain raster remote sensing data. The rasterization process converts the remote sensing data into a grid form taking pixels as basic units, so that the subsequent processing and analysis are convenient. And then converting the original data of the red, green and blue wave bands into the red, green and blue wave bands in the grid remote sensing data to obtain the converted grid remote sensing data. This allows for band shifting operations based on the format requirements of the telemetry data and the processing tool. And outputting the converted grid remote sensing data into a GeoTIFF format to obtain a GeoTIFF data file. 
GeoTIFF is a commonly used geographic information system (GIS) data format that can contain remote sensing data for multiple bands. The server constructs an atmospheric radiation transmission model and estimates the atmospheric parameters of the first remote sensing data. The model accounts for the influence of the atmosphere on the remote sensing data, and estimating the atmospheric parameters of the target area allows the data to be corrected more accurately. Atmospheric correction is then applied to the GeoTIFF data file using the atmospheric radiation transmission model and the target atmospheric parameters; that is, the remote sensing data are corrected according to the model and the atmospheric parameters of the target area, eliminating the influence of the atmosphere and yielding the plurality of second remote sensing data. For example, assume that the server has two target satellites, Satellite A and Satellite B, and two second urban areas, region X and region Y. According to the planned overlapping acquisition areas and acquisition angles, Satellite A is responsible for acquiring region X and Satellite B for acquiring region Y. Each satellite acquires initial remote sensing data during the acquisition process and records time stamp data. Through data synchronization, the server ensures that the remote sensing data of the two satellites are consistent in time. For the initial remote sensing data of Satellite A, the server extracts the raw data of the red, green and blue bands and rasterizes them to obtain raster remote sensing data; the server performs the same processing steps on the initial remote sensing data of Satellite B. The server then converts the raw data of the red, green and blue bands into the red, green and blue bands of the raster remote sensing data.
The server obtains converted raster remote sensing data in which the red, green and blue bands correspond to the first, second and third bands of the raster data file. To facilitate subsequent processing and analysis, the server outputs the converted raster remote sensing data in GeoTIFF format, obtaining a GeoTIFF data file containing red, green and blue band data that can be further processed and analyzed in conventional GIS software. The server builds an atmospheric radiation transmission model and estimates the atmospheric parameters of the target area for the first remote sensing data. Using the model and the target atmospheric parameters, the server performs atmospheric correction on the GeoTIFF data file; this corrects the remote sensing data according to the model and the parameters and eliminates the influence of the atmosphere. Finally, the server obtains a plurality of second remote sensing data, which are the atmospherically corrected remote sensing data and can be used for further analysis and application.
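The atmospheric radiation transmission model itself is not specified in the text; as a hedged stand-in, a simple dark-object subtraction illustrates the general idea of removing an atmospheric path-radiance offset from a band (the percentile threshold and sample values are assumptions):

```python
import numpy as np

def dark_object_subtraction(band: np.ndarray, percentile: float = 1.0) -> np.ndarray:
    """Simplified atmospheric correction: subtract the 'dark object' radiance
    (a low percentile of the band) attributed to atmospheric path scattering,
    clipping negative results to zero."""
    haze = np.percentile(band, percentile)
    return np.clip(band - haze, 0.0, None)

band = np.array([[5.0, 50.0], [100.0, 200.0]])
corrected = dark_object_subtraction(band)
```

A physically based correction (e.g. a full radiative transfer model) would instead invert surface reflectance from the estimated atmospheric parameters.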
In a specific embodiment, the process of executing step S103 may specifically include the following steps:
(1) Extracting features of the plurality of second remote sensing data to obtain a plurality of initial feature data, wherein the plurality of initial feature data comprises: red channel mean, green channel mean, blue channel mean, texture features, and shape features;
(2) Normalizing the plurality of initial characteristic data to obtain normalized characteristic data;
(3) Performing linear transformation on the normalized feature data to obtain a linear feature set, wherein the linear feature set comprises a plurality of linear features.
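The three steps above can be sketched as follows; the texture and shape proxies and the random transformation matrix `W` are illustrative assumptions, not the patent's actual statistics:

```python
import numpy as np

def extract_initial_features(image: np.ndarray) -> np.ndarray:
    """Step (1): per-image features - red/green/blue channel means plus
    placeholder texture and shape scores (stand-ins for GLCM / contour metrics)."""
    r_mean, g_mean, b_mean = image.mean(axis=(0, 1))
    texture = image.std()                            # crude texture proxy
    shape = float(image.shape[0]) / image.shape[1]   # crude shape proxy
    return np.array([r_mean, g_mean, b_mean, texture, shape])

def min_max_normalize(features: np.ndarray) -> np.ndarray:
    """Step (2): map each feature column to the [0, 1] range across samples."""
    lo, hi = features.min(axis=0), features.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero
    return (features - lo) / span

def linear_transform(features: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Step (3): matrix multiplication producing the linear feature set."""
    return features @ W

# Two hypothetical scenes
scenes = [np.random.default_rng(0).random((8, 8, 3)),
          np.random.default_rng(1).random((8, 8, 3))]
X = np.vstack([extract_initial_features(s) for s in scenes])   # (2, 5)
Xn = min_max_normalize(X)
W = np.random.default_rng(2).random((5, 3))   # 5 initial features -> 3 linear features
L = linear_transform(Xn, W)
print(L.shape)  # (2, 3)
```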
In particular, feature extraction on the plurality of second remote sensing data is a critical step: it allows the server to extract meaningful information from the remote sensing data to support further analysis and application. Assume the server has two second remote sensing data sets, Data1 and Data2. The server performs feature extraction on Data1 and Data2 to obtain a plurality of initial feature data, which may include the red channel mean, green channel mean, blue channel mean, texture features and shape features. The server obtains color information by calculating the average pixel value on the red, green and blue channels. Texture features capture details and texture information of an image through texture statistics (e.g., the gray level co-occurrence matrix or wavelet transform). Shape features describe the shape of a target through its geometric properties, such as boundaries and contours. To facilitate comparison and processing across different features, the server normalizes the initial feature data. Normalization places different feature data on the same scale and prevents differences in value ranges from distorting subsequent analysis. Common normalization methods include mapping the data to the range 0 to 1, or standardizing it to a distribution with mean 0 and variance 1. The server then performs a linear transformation on the normalized feature data to obtain a linear feature set. The linear transformation can be implemented by matrix multiplication: given a linear transformation matrix, the normalized feature data are multiplied by the matrix to obtain the linear feature set. The set may include a plurality of linear features, each consisting of a linear combination of the normalized feature data.
For example, assume that the server extracts the red channel mean, green channel mean and texture feature from Data1 and Data2 as initial feature data. The server normalizes the initial feature data to ensure they are on the same scale, then multiplies the normalized feature data by a linear transformation matrix to obtain a set of linear features. For example, the set may contain a weighted combination of the red and green channel means, a linear feature representing color information. By performing feature extraction, normalization and linear transformation on the plurality of second remote sensing data, the server obtains a group of features with uniform scale and linear structure, providing a more reliable basis for further analysis and application. As a concrete case, assume the server has two sets of second remote sensing data, for city A and city B, and extracts the following initial feature data. For the remote sensing data of city A: red channel mean 120, green channel mean 90, blue channel mean 75, texture feature 0.85, shape feature 0.92. For the remote sensing data of city B: red channel mean 110, green channel mean 95, blue channel mean 80, texture feature 0.78, shape feature 0.88. The server normalizes the initial feature data, mapping them to the range 0 to 1: the red, green and blue channel means are normalized to values between 0 and 1, and the texture and shape features undergo the corresponding normalization.
The server linearly transforms the normalized feature data using a predefined linear transformation matrix, assumed here to be the weight vector [0.3, 0.5, 0.2, 0.1, 0.4]. Multiplying the normalized feature data by this matrix yields the set of linear features. For the remote sensing data of city A, the linear feature can be expressed as: linear feature = 0.3 × normalized red channel mean + 0.5 × normalized green channel mean + 0.2 × normalized blue channel mean + 0.1 × normalized texture feature + 0.4 × normalized shape feature. Similarly, the linear feature for city B can be calculated with the same linear combination formula. In this embodiment, the server converts the original remote sensing data into a feature set with uniform scale and linear structure. These linear features can be used more conveniently in further data analysis, model training or applications: for example, the server may use them for cluster analysis to find similar urban areas, or as input features to build a predictive model, such as one predicting the land use type or population density of a city area.
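This worked example can be computed directly; note that min-max normalization across only two samples degenerates to 0/1 values, which is assumed here purely for illustration:

```python
city_a = [120, 90, 75, 0.85, 0.92]   # red mean, green mean, blue mean, texture, shape
city_b = [110, 95, 80, 0.78, 0.88]
weights = [0.3, 0.5, 0.2, 0.1, 0.4]

def min_max(a, b):
    """Normalize each feature to [0, 1] across the two samples (with N=2 the
    larger value maps to 1 and the smaller to 0)."""
    na, nb = [], []
    for x, y in zip(a, b):
        lo, hi = min(x, y), max(x, y)
        span = hi - lo if hi > lo else 1.0
        na.append((x - lo) / span)
        nb.append((y - lo) / span)
    return na, nb

na, nb = min_max(city_a, city_b)
linear_a = sum(w * v for w, v in zip(weights, na))   # 0.3 + 0.1 + 0.4 = 0.8
linear_b = sum(w * v for w, v in zip(weights, nb))   # 0.5 + 0.2 = 0.7
print(linear_a, linear_b)
```

City A scores higher on the red, texture and shape terms, city B on the green and blue terms, so the single fused value already separates the two areas.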
In a specific embodiment, the process of executing step S104 may specifically include the following steps:
(1) Calculating covariance matrix elements according to a plurality of linear features in the linear feature set, and constructing a covariance matrix according to the covariance matrix elements, wherein the calculation formula of the covariance matrix elements is as follows:

cov(i, j) = (1/N) × Σ_{k=1..N} (X(i, k) − mean(i)) × (X(j, k) − mean(j))

wherein cov(i, j) represents the (i, j)-th element of the covariance matrix, X(i, k) represents the value of the i-th linear feature in the k-th sample, mean(i) represents the average value of the i-th linear feature, N represents the number of samples, X(j, k) represents the value of the j-th linear feature in the k-th sample, and mean(j) represents the average value of the j-th linear feature;
(2) Calculating the correlation among the plurality of linear features according to the covariance matrix to obtain the target feature correlation, wherein the calculation formula of the feature correlation is as follows:

corr(i, j) = cov(i, j) / (std(i) × std(j))

where corr(i, j) represents the correlation coefficient of linear feature i and linear feature j, cov(i, j) represents the (i, j)-th element of the covariance matrix, and std(i) and std(j) represent the standard deviations of feature i and feature j, respectively.
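A minimal numpy sketch of these two calculations, using a features-by-samples layout for X and a population covariance (divisor N, an assumption since the divisor is not stated in the text):

```python
import numpy as np

def covariance_matrix(X: np.ndarray) -> np.ndarray:
    """cov(i, j) = (1/N) * sum over k of (X[i,k] - mean(i)) * (X[j,k] - mean(j)),
    with X laid out features-by-samples."""
    centered = X - X.mean(axis=1, keepdims=True)
    return centered @ centered.T / X.shape[1]

def correlation_matrix(cov: np.ndarray) -> np.ndarray:
    """corr(i, j) = cov(i, j) / (std(i) * std(j))."""
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)

X = np.array([[1.0, 2.0, 3.0, 4.0],    # feature 0 over N = 4 samples
              [2.0, 4.0, 6.0, 8.0],    # perfectly correlated with feature 0
              [4.0, 3.0, 2.0, 1.0]])   # perfectly anti-correlated with feature 0
cov = covariance_matrix(X)
corr = correlation_matrix(cov)
print(np.round(corr, 2))
```

The diagonal of the correlation matrix is always 1, and the off-diagonal entries land in [−1, 1], which is what makes them comparable across feature pairs.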
Specifically, the server calculates an average value for each feature from the given linear feature set. Assuming the server has N samples and M linear features, then for the i-th feature the server calculates the average mean(i) as the mean of the i-th feature values over all samples. The server then calculates the elements of the covariance matrix: the element cov(i, j) represents the covariance between the i-th and j-th features and is computed with the formula given above, where X(i, k) denotes the value of the i-th feature in the k-th sample and mean(i) and mean(j) denote the averages of the i-th and j-th features, respectively. After the covariance matrix is calculated, the server further calculates the correlation between features. Correlation measures the strength of the linear relationship between two features: corr(i, j) = cov(i, j) / (std(i) × std(j)), where corr(i, j) is the correlation coefficient of features i and j, cov(i, j) is the (i, j)-th element of the covariance matrix, and std(i) and std(j) are the standard deviations of features i and j. In this embodiment, the server obtains the target feature correlation, that is, the correlation coefficient matrix between the features. This matrix reveals the correlations between different features and helps the server understand the correlation structure of the linear feature set. It is very useful for subsequent data analysis and model construction: it helps the server select features with stronger correlation, reduce redundant information, and improve the accuracy of prediction or classification. For example, assume the server has a linear feature set that includes the building area, the number of office buildings and the number of residential buildings, and has collected 100 sample data.
By calculating the mean and covariance matrix elements for each feature, the server gets the following results:
Covariance matrix:
cov (area, area) = 100.0, cov (area, number of office buildings) = 30.0, cov (area, number of residential buildings) = 32.0;
cov (number of office buildings, area) = 30.0, cov (number of office buildings, number of office buildings) = 25.0, cov (number of office buildings, number of residential buildings) = 6.0;
cov (number of residential buildings, area) = 32.0, cov (number of residential buildings, number of office buildings) = 6.0, cov (number of residential buildings, number of residential buildings) = 16.0;
the server calculates a correlation coefficient matrix between the features according to the covariance matrix. Assuming standard deviations of 10.0, 5.0 and 4.0 for the area, the number of office buildings and the number of residential buildings, respectively, the correlation coefficient matrix is as follows:
corr (area ) =1.0, corr (area, number of office buildings) =0.6, corr (area, number of residential buildings) =0.8;
corr (number of office buildings, area) =0.6, corr (number of office buildings ) =1.0, corr (number of office buildings, number of residential buildings) =0.3;
corr (number of residential buildings, area) =0.8, corr (number of residential buildings, number of office buildings) =0.3, corr (number of residential buildings ) =1.0;
From the correlation coefficient matrix, the server observes the degree of correlation between the different features. For example, the correlation coefficient of 0.8 between the area and the number of residential buildings indicates a strong positive correlation between them, while the correlation coefficient of 0.3 between the number of office buildings and the number of residential buildings indicates a weak correlation. By performing feature extraction, normalization, linear transformation, covariance matrix calculation and correlation coefficient calculation on the feature set, the server better understands the interrelationships between features. This helps the server select appropriate features during data analysis and modeling, improving the accuracy and interpretability of the model.
In a specific embodiment, the process of executing step S105 may specifically include the following steps:
(1) Calculating a plurality of characteristic values corresponding to a plurality of linear characteristics according to the target characteristic correlation;
(2) Generating corresponding feature vectors according to the plurality of feature values, and carrying out principal component normalization on the feature vectors to obtain principal component feature weights;
(3) Multiplying the plurality of initial feature data by the principal component feature weights to obtain a plurality of target products, and adding the plurality of target products to obtain fusion feature data, wherein the fusion feature data is a linear combination of the plurality of initial feature data in the principal component direction.
Specifically, the server first calculates the eigenvalues corresponding to the plurality of linear features according to the target feature correlation. The eigenvalues reflect the importance and variance of the individual linear features. Corresponding eigenvectors are generated from the eigenvalues; each eigenvector represents the weights and contributions in one principal component direction. The server also normalizes the eigenvectors to unit length, ensuring that the weights are comparable. The plurality of initial feature data are then multiplied by the principal component feature weights, giving the product of each linear feature in the principal component direction. Adding these target products yields the fusion feature data, which is thus formed as a linear combination of the plurality of initial feature data in the principal component direction. For example, assume the server has three linear features: feature A, feature B and feature C. By calculating the correlation between them, the server obtains the corresponding eigenvalues, generates the corresponding eigenvectors, and performs principal component normalization. The server multiplies the initial feature data by the principal component feature weights to obtain the product of each feature in the principal component direction, and adds the products to obtain the fusion feature data. For example, the fusion feature data may consist of the product of feature A in the first principal component direction, the product of feature B in the second principal component direction, and the product of feature C in the third principal component direction. The server thus obtains a set of fused feature data that comprehensively considers the plurality of features.
In this embodiment, the server performs linear combination of a plurality of initial feature data in the principal component direction by calculating the feature value, generating the feature vector, normalizing the principal component, multiplying and summing the initial feature data with the principal component feature weight, thereby obtaining the fused feature data. The method can help the server comprehensively consider the information of the plurality of features, and extract the features with more representativeness and differentiation for subsequent analysis and decision.
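A compact sketch of this fusion under standard PCA assumptions (eigen-decomposition of the covariance matrix, projection onto the leading unit-length eigenvector); the data are synthetic and the single-component projection is an illustrative simplification:

```python
import numpy as np

def pca_fuse(X: np.ndarray) -> np.ndarray:
    """Fuse features by projecting onto the first principal component:
    eigen-decompose the covariance matrix, take the unit-length eigenvector
    with the largest eigenvalue as the weights, and linearly combine."""
    Xc = X - X.mean(axis=0)                     # center each feature column
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    w = eigvecs[:, -1]                          # top principal component, unit norm
    return Xc @ w                               # one fused value per sample

rng = np.random.default_rng(42)
X = rng.random((10, 3))        # 10 samples, 3 initial features
fused = pca_fuse(X)
print(fused.shape)             # (10,)
```

Because the projection is applied to centered data, the fused values have zero mean; retaining more eigenvectors would give a multi-component fused representation instead of a single value.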
In a specific embodiment, as shown in fig. 4, the process of executing step S106 may specifically include the following steps:
s401, adjusting the feature size of the fusion feature data to obtain target input features;
s402, inputting the target input features into a preset building feature interpretation model, wherein the building feature interpretation model comprises: a three-layer convolution network, a pooling layer and two fully connected layers;
s403, performing convolution feature extraction and pooling operation on target input features through a three-layer convolution network and a pooling layer to obtain a target one-dimensional vector;
s404, inputting the target one-dimensional vector into two fully-connected layers to perform building feature interpretation, and obtaining a target interpretation result, wherein the target interpretation result is used for indicating building information and building types in a target city area.
Specifically, the server adjusts the feature size of the fusion feature data. This may be achieved by adjusting the dimension, size or shape of the feature data; for example, the feature data may be scaled or resampled using interpolation methods to fit the input requirements of the building feature interpretation model. The adjusted feature data become the target input features, which are input into a preset building feature interpretation model. This model is typically composed of a three-layer convolution network, a pooling layer and two fully connected layers. The model performs convolution feature extraction on the target input features through the three-layer convolution network. A convolution layer extracts features from local regions by sliding convolution kernels over the feature map; each convolution kernel can capture a different feature, such as an edge, texture or shape. As the depth increases, the convolution features become more abstract and high-level. The pooling layer downsamples the convolution features, reducing their dimension while retaining the main feature information. Common pooling operations include maximum pooling and average pooling, which reduce the dimension of the feature maps while preserving important feature patterns. The features obtained through the convolution and pooling operations are converted into a one-dimensional vector for input into the two fully connected layers. A fully connected layer maps the input features to the output categories through learned weights and biases; the fully connected layers can capture complex relationships between features and generate the final building feature interpretation results. For example, assume that the server wishes to interpret buildings in the target urban area.
The server has obtained a fused feature dataset containing features of building height, area, degree of greening around, etc. And converting the fused feature data into target input features by the server through feature size adjustment. After the target input features are transferred to the building feature interpretation model, the convolution network will extract the local structure, texture and shape features of the building. The pooling layer downsamples the extracted features and retains important feature information. The model maps the extracted features onto the categories and other attributes of the building through two fully connected layers. In this embodiment, the server analyzes and interprets the target input features using the building feature interpretation model, thereby obtaining information and categories of buildings in the target city area. For example, the model may identify that one building is a residential building and another building is a store. Meanwhile, the model can also extract the characteristics of the building such as height, shape, texture and the like.
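For illustration only, a pure-numpy forward pass with random (untrained) weights mirrors the stated architecture of three convolution layers, a pooling layer and two fully connected layers; the kernel sizes, layer widths and class count are assumptions, not the patent's actual trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation) of a single-channel map, with ReLU."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return np.maximum(out, 0.0)

def maxpool2(x):
    """2 x 2 max pooling."""
    H, W = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:H, :W].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def interpret(feature_map: np.ndarray, n_classes: int = 4) -> np.ndarray:
    """Three conv layers -> pooling -> flatten -> two fully connected layers."""
    x = feature_map
    for _ in range(3):                          # three-layer convolution network
        x = conv2d(x, rng.standard_normal((3, 3)))
    x = maxpool2(x)                             # pooling layer
    v = x.ravel()                               # target one-dimensional vector
    h = np.maximum(rng.standard_normal((16, v.size)) @ v, 0.0)   # FC layer 1
    logits = rng.standard_normal((n_classes, 16)) @ h            # FC layer 2
    e = np.exp(logits - logits.max())
    return e / e.sum()                          # class probabilities

probs = interpret(rng.random((16, 16)))
print(probs.shape)
```

With a 16 x 16 input, each 3 x 3 convolution shrinks the map by 2 pixels per side (16 → 14 → 12 → 10), and pooling halves it to 5 x 5 before flattening; a real model would of course use trained weights and multi-channel feature maps.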
The method for automatically fusing and interpreting multi-satellite data in the embodiment of the present invention is described above, and the apparatus for automatically fusing and interpreting multi-satellite data in the embodiment of the present invention is described below, referring to fig. 5, one embodiment of the apparatus for automatically fusing and interpreting multi-satellite data in the embodiment of the present invention includes:
the configuration module 501 is configured to perform region division and satellite configuration on a first urban area to be detected, obtain a plurality of second urban areas and target satellites corresponding to each second urban area, and construct an overlapping acquisition region and an acquisition angle of each target satellite;
the acquisition module 502 is configured to perform data synchronous acquisition on the plurality of second urban areas according to the overlapping acquisition areas and the acquisition angles of each target satellite to obtain first remote sensing data of each target satellite, and perform data format conversion and atmospheric correction on the first remote sensing data of each target satellite to obtain a plurality of second remote sensing data;
an extracting module 503, configured to perform feature extraction on the plurality of second remote sensing data to obtain a plurality of initial feature data, and perform linear transformation on the plurality of initial feature data to obtain a linear feature set, where the linear feature set includes a plurality of linear features;
A calculating module 504, configured to construct a covariance matrix according to the linear feature set, and perform feature correlation calculation on the plurality of linear features according to the covariance matrix, so as to obtain a target feature correlation;
the fusion module 505 is configured to calculate a principal component feature weight in the plurality of linear features according to the target feature correlation, and perform feature fusion on the plurality of initial feature data according to the principal component feature weight to obtain fused feature data;
and an interpretation module 506, configured to input the fused feature data into a preset building feature interpretation model to perform building feature interpretation, so as to obtain a target interpretation result, where the target interpretation result is used to indicate building information and building category in the target city area.
Through the cooperation of the above components, data of the plurality of second urban areas are synchronously acquired to obtain first remote sensing data, on which data format conversion and atmospheric correction are performed to obtain a plurality of second remote sensing data; features are extracted to obtain a plurality of initial feature data, which are linearly transformed into a linear feature set; a covariance matrix is constructed, and feature correlation calculation is performed on the plurality of linear features according to the covariance matrix to obtain the target feature correlation; principal component feature weights among the plurality of linear features are calculated according to the target feature correlation, and the plurality of initial feature data are fused according to the principal component feature weights to obtain fusion feature data. By fusing data from a plurality of satellites, the method can acquire multi-source, multi-angle and multi-spectral remote sensing data, thereby improving interpretation accuracy. Data from different satellites have different resolutions, observation capacities and remote sensing bands; using them together compensates for the shortcomings of single-satellite data, reduces the influence of errors and missing remote sensing data, and improves the accuracy of the interpretation result. Acquiring and fusing data from a plurality of satellites also increases the spatial resolution and coverage of the data: the orbits and acquisition capacities of different satellites allow them to acquire remote sensing data of different areas, and fusing these data yields more comprehensive and finer information about the target urban area, providing a richer data base.
By establishing an interpretation model and an automation algorithm, a large amount of remote sensing data can be rapidly and efficiently interpreted into building monitoring information of a target city area. Compared with the traditional manual interpretation method, the automatic interpretation has higher efficiency and consistency, can process a large-scale data set, provides a building monitoring result updated in real time, and further improves the accuracy of the automatic interpretation in a multi-satellite data fusion scene.
The apparatus for multi-satellite data fusion automation interpretation in the embodiment of the present invention is described in detail above in fig. 5 from the point of view of a modularized functional entity, and the apparatus for multi-satellite data fusion automation interpretation in the embodiment of the present invention is described in detail below from the point of view of hardware processing.
Fig. 6 is a schematic structural diagram of a multi-satellite data fusion automation interpreted device 600 according to an embodiment of the present invention, where the multi-satellite data fusion automation interpreted device 600 may have a relatively large difference due to different configurations or performances, and may include one or more processors (central processing units, CPU) 610 (e.g., one or more processors) and a memory 620, one or more storage media 630 (e.g., one or more mass storage devices) storing applications 633 or data 632. Wherein the memory 620 and the storage medium 630 may be transitory or persistent storage. The program stored on the storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations in the apparatus 600 for automating the interpretation of multi-satellite data fusion. Still further, the processor 610 may be configured to communicate with the storage medium 630 to execute a series of instruction operations in the storage medium 630 on the multi-satellite data fusion automation interpreted device 600.
The multi-satellite data fusion automation interpreted device 600 may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the configuration of the apparatus for multi-satellite data fusion automation interpretation shown in fig. 6 does not constitute a limitation of the apparatus, which may include more or fewer components than those illustrated, combine certain components, or arrange the components differently.
The invention also provides a multi-satellite data fusion automation interpretation device, which comprises a memory and a processor, wherein the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, cause the processor to execute the steps of the multi-satellite data fusion automation interpretation method in the above embodiments.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, and which may also be a volatile computer readable storage medium, the computer readable storage medium having stored therein instructions which, when executed on a computer, cause the computer to perform the steps of the method for automated interpretation of multi-satellite data fusion.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially, or in the part contributing to the prior art, or in whole or in part, in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of multi-satellite data fusion automation interpretation, the method comprising:
performing regional division and satellite configuration on a first urban area to be detected to obtain a plurality of second urban areas and target satellites corresponding to each second urban area, and constructing an overlapping acquisition area and an acquisition angle of each target satellite;
carrying out data synchronous acquisition on the plurality of second urban areas according to the overlapping acquisition areas and the acquisition angles of each target satellite to obtain first remote sensing data of each target satellite, and carrying out data format conversion and atmosphere correction on the first remote sensing data of each target satellite to obtain a plurality of second remote sensing data;
extracting features of the plurality of second remote sensing data to obtain a plurality of initial feature data, and performing linear transformation on the plurality of initial feature data to obtain a linear feature set, wherein the linear feature set comprises a plurality of linear features;
constructing a covariance matrix according to the linear feature set, and performing feature correlation calculation on the plurality of linear features according to the covariance matrix to obtain target feature correlation;
calculating principal component feature weights in the plurality of linear features according to the target feature correlation, and carrying out feature fusion on the plurality of initial feature data according to the principal component feature weights to obtain fusion feature data;
and inputting the fusion characteristic data into a preset building characteristic interpretation model to perform building characteristic interpretation to obtain a target interpretation result, wherein the target interpretation result is used for indicating building information and building categories in the target urban area.
2. The method for automatically interpreting multi-satellite data fusion according to claim 1, wherein said performing region division and satellite configuration on the first urban area to be detected to obtain a plurality of second urban areas and target satellites corresponding to each second urban area, and constructing overlapping acquisition regions and acquisition angles of each target satellite, comprises:
dividing a first urban area to be detected into a plurality of second urban areas, and extracting edge coordinates of the second urban areas to obtain an edge coordinate set of each second urban area;
carrying out satellite acquisition task allocation on a plurality of preset target satellites and a plurality of second urban areas to obtain target satellites corresponding to each second urban area;
calculating the area of each second city area based on the edge coordinate set, and acquiring the data resolution;
creating a satellite orbit intersection scheme of the plurality of target satellites according to the area of each second urban area;
generating an orbit starting point coordinate and an orbit ending point coordinate of each target satellite according to the satellite orbit crossing scheme and the edge coordinate set, and performing track fitting according to the orbit starting point coordinate and the orbit ending point coordinate to generate a target acquisition orbit of each second urban area;
and calculating an overlapped acquisition area of each target satellite according to the area and the data resolution, and setting an acquisition angle of each target satellite according to the target acquisition orbit.
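The track-fitting step in claim 2 is not pinned to a particular model; a minimal sketch, assuming straight-line interpolation between the orbit start-point and end-point coordinates (the function name, coordinate values, and point count are illustrative, not from the patent):

```python
import numpy as np

def fit_target_orbit(start, end, n_points=5):
    """Interpolate a ground track between the orbit start-point and
    end-point coordinates (lon, lat). A stand-in for the track-fitting
    step of claim 2, which does not specify the fitting model."""
    start = np.asarray(start, dtype=float)
    end = np.asarray(end, dtype=float)
    t = np.linspace(0.0, 1.0, n_points)[:, None]  # interpolation parameter
    return (1.0 - t) * start + t * end            # (n_points, 2) waypoints

# Illustrative start/end coordinates for one second urban area
track = fit_target_orbit((114.0, 30.4), (114.6, 30.7), n_points=4)
```

A real implementation would fit against the edge coordinate set and orbital dynamics; the linear form only shows where the start and end coordinates enter the computation.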
3. The method of claim 1, wherein the performing data synchronous acquisition on the plurality of second urban areas according to the overlapping acquisition areas and the acquisition angles of each target satellite to obtain first remote sensing data of each target satellite, and performing data format conversion and atmospheric correction on the first remote sensing data of each target satellite to obtain a plurality of second remote sensing data, comprises:
data acquisition is carried out on the plurality of second urban areas according to the overlapping acquisition areas and the acquisition angles of each target satellite, so that initial remote sensing data of each target satellite are obtained;
acquiring time stamp data of the initial remote sensing data, and carrying out data synchronization on the initial remote sensing data of each target satellite according to the time stamp data to obtain first remote sensing data of each target satellite;
extracting the band data of the first remote sensing data to obtain the original data of red, green and blue bands, and rasterizing the first remote sensing data to obtain raster remote sensing data;
converting the original data of the red, green and blue wave bands into the red, green and blue wave bands in the grid remote sensing data to obtain converted grid remote sensing data;
outputting the converted grid remote sensing data into a GeoTIFF format to obtain a GeoTIFF data file, wherein the red, green and blue wave bands respectively correspond to a first wave band, a second wave band and a third wave band of the GeoTIFF data file;
constructing an atmospheric radiation transmission model, and carrying out atmospheric parameter estimation on the first remote sensing data to obtain target atmospheric parameters;
and carrying out atmosphere correction on the GeoTIFF data file through the atmosphere radiation transmission model and the target atmosphere parameters to obtain a plurality of second remote sensing data.
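Claim 3's atmospheric radiation transmission model is left unspecified; a minimal sketch of a simplified correction that converts digital numbers to at-sensor radiance, subtracts an estimated atmospheric path radiance, and normalises to surface reflectance. The gain, offset, path radiance, and solar irradiance values below are placeholder assumptions, not parameters from the patent:

```python
import numpy as np

def atmospheric_correction(dn, gain, offset, path_radiance,
                           esun, sun_zenith_deg, d=1.0):
    """Simplified stand-in for claim 3's atmospheric radiation
    transmission model: DN -> at-sensor radiance -> path-radiance
    subtraction -> surface reflectance."""
    radiance = np.asarray(dn, dtype=float) * gain + offset   # sensor calibration
    radiance = np.maximum(radiance - path_radiance, 0.0)     # remove atmospheric path term
    cos_theta = np.cos(np.radians(sun_zenith_deg))           # solar zenith correction
    return np.pi * radiance * d**2 / (esun * cos_theta)      # reflectance in [0, 1]

# One red-band pixel with placeholder calibration parameters
reflectance = atmospheric_correction(np.array([100.0]), gain=0.1, offset=1.0,
                                     path_radiance=1.0, esun=1850.0,
                                     sun_zenith_deg=30.0)
```

The target atmospheric parameters estimated in the claim would supply `path_radiance` and related terms; a production pipeline would use a full radiative transfer code rather than this single-subtraction form.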
4. The method of claim 1, wherein the performing feature extraction on the plurality of second remote sensing data to obtain a plurality of initial feature data, and performing linear transformation on the plurality of initial feature data to obtain a linear feature set, wherein the linear feature set includes a plurality of linear features, comprises:
performing feature extraction on the plurality of second remote sensing data to obtain a plurality of initial feature data, wherein the plurality of initial feature data comprises: red channel mean, green channel mean, blue channel mean, texture features, and shape features;
normalizing the plurality of initial characteristic data to obtain normalized characteristic data;
and carrying out linear transformation on the normalized characteristic data to obtain a linear characteristic set, wherein the linear characteristic set comprises a plurality of linear characteristics.
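A minimal sketch of the normalization and linear transformation in claim 4, assuming min-max scaling and a default identity weight matrix (both are assumptions; the claim fixes neither):

```python
import numpy as np

def build_linear_feature_set(initial_features, weights=None):
    """Min-max normalise each initial feature column, then apply a
    linear transformation to produce the linear feature set of claim 4."""
    x = np.asarray(initial_features, dtype=float)
    mn, mx = x.min(axis=0), x.max(axis=0)
    span = np.where(mx > mn, mx - mn, 1.0)   # guard against constant columns
    normalized = (x - mn) / span             # each column scaled to [0, 1]
    if weights is None:
        weights = np.eye(x.shape[1])         # identity = no mixing (assumption)
    return normalized @ weights              # linear transformation

# Rows: samples; columns: e.g. red-channel mean and a texture measure
linear_features = build_linear_feature_set([[1.0, 10.0],
                                            [2.0, 20.0],
                                            [3.0, 30.0]])
```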
5. The method for multi-satellite data fusion automatic interpretation according to claim 1, wherein the constructing a covariance matrix according to the linear feature set, and performing feature correlation calculation on the plurality of linear features according to the covariance matrix, to obtain a target feature correlation, comprises:
calculating covariance matrix elements according to a plurality of linear features in the linear feature set, and constructing a covariance matrix according to the covariance matrix elements, wherein the calculation formula of the covariance matrix elements is:

Cov(i, j) = (1/N) · Σ_{k=1}^{N} (x_{k,i} − mean(i)) · (x_{k,j} − mean(j))

wherein Cov(i, j) represents the element in the i-th row and j-th column of the covariance matrix, x_{k,i} represents the value of the i-th linear feature in the k-th sample, mean(i) represents the mean value of the i-th linear feature, N represents the number of samples, x_{k,j} represents the value of the j-th linear feature in the k-th sample, and mean(j) represents the mean value of the j-th linear feature;

and calculating the correlation among the plurality of linear features according to the covariance matrix to obtain the target feature correlation, wherein the calculation formula of the feature correlation is:

ρ(i, j) = Cov(i, j) / (σ_i · σ_j)

wherein ρ(i, j) represents the correlation coefficient of linear feature i and linear feature j, Cov(i, j) represents the element in the i-th row and j-th column of the covariance matrix, and σ_i and σ_j represent the standard deviation of feature i and feature j, respectively.
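The two claim 5 formulas can be sketched directly; this uses the population (1/N) covariance, and the resulting correlation coefficients are unaffected by the 1/N vs. 1/(N−1) choice since the factor cancels:

```python
import numpy as np

def target_feature_correlation(samples):
    """Claim 5's two formulas: Cov(i,j) = (1/N) * sum_k
    (x_{k,i} - mean(i)) * (x_{k,j} - mean(j)), then
    rho(i,j) = Cov(i,j) / (sigma_i * sigma_j)."""
    x = np.asarray(samples, dtype=float)
    n = x.shape[0]
    centered = x - x.mean(axis=0)            # subtract per-feature means
    cov = centered.T @ centered / n          # population (1/N) covariance
    std = np.sqrt(np.diag(cov))              # per-feature standard deviations
    corr = cov / np.outer(std, std)          # correlation coefficients
    return cov, corr

# Three samples of two perfectly correlated linear features
cov, corr = target_feature_correlation([[1.0, 2.0],
                                        [2.0, 4.0],
                                        [3.0, 6.0]])
```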
6. The method of claim 1, wherein calculating principal component feature weights of the plurality of linear features according to the target feature correlation, and performing feature fusion on the plurality of initial feature data according to the principal component feature weights to obtain fused feature data, comprises:
calculating a plurality of feature values corresponding to the plurality of linear features according to the target feature correlation;
generating corresponding feature vectors according to the plurality of feature values, and carrying out principal component normalization on the feature vectors to obtain principal component feature weights;
multiplying the plurality of initial feature data by the principal component feature weight to obtain a plurality of target products, and performing addition operation on the plurality of target products to obtain fusion feature data, wherein the fusion feature data is a linear combination of the plurality of initial feature data in the principal component direction.
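A minimal sketch of claim 6's principal component weighting, assuming the weights come from the leading eigenvector of the feature correlation matrix, normalised by the sum of absolute loadings (that unit-sum normalisation rule is an assumption; the claim only says "principal component normalization"):

```python
import numpy as np

def principal_component_fusion(initial_features):
    """Claim 6 sketch: eigendecompose the feature correlation matrix,
    take the leading eigenvector as the principal component direction,
    normalise its absolute loadings to unit sum to obtain the feature
    weights, and fuse each sample as the weighted sum of its features."""
    x = np.asarray(initial_features, dtype=float)
    corr = np.corrcoef(x, rowvar=False)          # feature correlation matrix
    vals, vecs = np.linalg.eigh(corr)            # eigenvalues in ascending order
    leading = vecs[:, np.argmax(vals)]           # principal eigenvector
    weights = np.abs(leading) / np.abs(leading).sum()
    fused = x @ weights                          # linear combination per sample
    return fused, weights

# Four samples of three (perfectly correlated) initial features
fused, weights = principal_component_fusion([[1.0, 2.0, 3.0],
                                             [2.0, 4.0, 6.0],
                                             [3.0, 6.0, 9.0],
                                             [1.0, 2.0, 3.0]])
```

Because all three columns are perfectly correlated here, each feature receives equal weight and the fused value is simply the per-sample feature mean.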
7. The method for multi-satellite data fusion automatic interpretation according to claim 1, wherein said inputting said fusion feature data into a preset building feature interpretation model for building feature interpretation to obtain a target interpretation result, wherein said target interpretation result is used for indicating building information and building categories in said target urban area, comprises:
performing feature size adjustment on the fusion feature data to obtain target input features;
inputting the target input features into a preset building feature interpretation model, wherein the building feature interpretation model comprises: three layers of convolution networks, a pooling layer and two layers of full-connection layers;
performing convolution feature extraction and pooling operation on the target input features through the three-layer convolution network and the pooling layer to obtain target one-dimensional vectors;
and inputting the target one-dimensional vector into the two fully-connected layers to perform building feature interpretation to obtain a target interpretation result, wherein the target interpretation result is used for indicating building information and building categories in the target urban area.
8. An apparatus for multi-satellite data fusion automatic interpretation, the apparatus comprising:
the configuration module is used for carrying out regional division and satellite configuration on a first urban area to be detected, obtaining a plurality of second urban areas and target satellites corresponding to each second urban area, and constructing an overlapping acquisition area and an acquisition angle of each target satellite;
the acquisition module is used for synchronously acquiring the data of the plurality of second urban areas according to the overlapping acquisition area and the acquisition angle of each target satellite to obtain first remote sensing data of each target satellite, and carrying out data format conversion and atmosphere correction on the first remote sensing data of each target satellite to obtain a plurality of second remote sensing data;
the extraction module is used for carrying out feature extraction on the plurality of second remote sensing data to obtain a plurality of initial feature data, and carrying out linear transformation on the plurality of initial feature data to obtain a linear feature set, wherein the linear feature set comprises a plurality of linear features;
the computing module is used for constructing a covariance matrix according to the linear feature set, and carrying out feature correlation computation on the plurality of linear features according to the covariance matrix to obtain target feature correlation;
the fusion module is used for calculating the feature weights of the main components in the plurality of linear features according to the target feature correlation, and carrying out feature fusion on the plurality of initial feature data according to the feature weights of the main components to obtain fusion feature data;
and the interpretation module is used for inputting the fusion characteristic data into a preset building characteristic interpretation model to perform building characteristic interpretation to obtain a target interpretation result, wherein the target interpretation result is used for indicating building information and building categories in the target urban area.
9. An apparatus for multi-satellite data fusion automatic interpretation, the apparatus comprising: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the apparatus for multi-satellite data fusion automatic interpretation to perform the method of multi-satellite data fusion automatic interpretation of any one of claims 1-7.
10. A computer-readable storage medium having instructions stored thereon which, when executed by a processor, implement the method of multi-satellite data fusion automatic interpretation of any one of claims 1-7.
CN202311090810.6A 2023-08-29 2023-08-29 Multi-satellite data fusion automatic interpretation method Active CN116824396B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311090810.6A CN116824396B (en) 2023-08-29 2023-08-29 Multi-satellite data fusion automatic interpretation method


Publications (2)

Publication Number Publication Date
CN116824396A true CN116824396A (en) 2023-09-29
CN116824396B CN116824396B (en) 2023-11-21

Family

ID=88114820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311090810.6A Active CN116824396B (en) 2023-08-29 2023-08-29 Multi-satellite data fusion automatic interpretation method

Country Status (1)

Country Link
CN (1) CN116824396B (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697475A (en) * 2019-01-17 2019-04-30 中国地质大学(北京) A kind of muskeg information analysis method, remote sensing monitoring component and monitoring method
CN109977991A (en) * 2019-01-23 2019-07-05 彭广惠 Forest resourceies acquisition method based on high definition satellite remote sensing
CN110516588A (en) * 2018-12-29 2019-11-29 长沙天仪空间科技研究院有限公司 A kind of remote sensing satellite system
CN111325184A (en) * 2020-03-20 2020-06-23 宁夏回族自治区自然资源勘测调查院 Intelligent interpretation and change information detection method for remote sensing image
CN113158335A (en) * 2021-04-07 2021-07-23 广东交通职业技术学院 Ship electric control cylinder oil injection visualization method, system and device and storage medium
CN113537018A (en) * 2021-07-05 2021-10-22 国网安徽省电力有限公司铜陵供电公司 Water and soil conservation monitoring method based on multi-temporal satellite remote sensing and unmanned aerial vehicle technology
CN113591775A (en) * 2021-08-11 2021-11-02 武汉工程大学 Multispectral remote sensing image specific ground object extraction method combining hyperspectral features
CN114022783A (en) * 2021-11-08 2022-02-08 刘冰 Satellite image-based water and soil conservation ecological function remote sensing monitoring method and device
CN115240080A (en) * 2022-08-23 2022-10-25 北京理工大学 Intelligent interpretation and classification method for multi-source remote sensing satellite data
CN116091937A (en) * 2022-12-24 2023-05-09 航天科工智能运筹与信息安全研究院(武汉)有限公司 High-resolution remote sensing image ground object recognition model calculation method based on deep learning
CN116205678A (en) * 2023-03-06 2023-06-02 江苏省地质调查研究院 Agricultural land physical quantity investigation and value quantity estimation method based on remote sensing automatic interpretation
CN116503677A (en) * 2023-06-28 2023-07-28 武汉大学 Wetland classification information extraction method, system, electronic equipment and storage medium
US20230252761A1 (en) * 2021-01-26 2023-08-10 Wuhan University Method for classifying hyperspectral images on basis of adaptive multi-scale feature extraction model


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
吕雄杰; 陆文龙; 宋治文; 张昱; 马享优: "Application of QuickBird imagery in surveying current urban land use", Tianjin Agricultural Sciences (天津农业科学), no. 02, pages 16 - 19 *
李盼盼; 陈国旭; 刘盛东; 李忠城; 薛志亮; 卢牧原: "Fault structure identification and 3D geological modeling based on high-resolution satellite imagery", Journal of Hefei University of Technology (Natural Science) (合肥工业大学学报(自然科学版)), no. 05, pages 110 - 117 *
龙皎; 陈振声; 周静: "A method for surveying the current status of landscape green space using remote sensing imagery: the case of Guiyang", Sichuan Forestry Exploration and Design (四川林勘设计), no. 02, pages 61 - 65 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117523410A (en) * 2023-11-10 2024-02-06 中国科学院空天信息创新研究院 Image processing and construction method based on multi-terminal collaborative perception distributed large model
CN117434497A (en) * 2023-12-20 2024-01-23 深圳市宇隆移动互联网有限公司 Indoor positioning method, device and equipment of satellite communication terminal and storage medium
CN117434497B (en) * 2023-12-20 2024-03-19 深圳市宇隆移动互联网有限公司 Indoor positioning method, device and equipment of satellite communication terminal and storage medium

Also Published As

Publication number Publication date
CN116824396B (en) 2023-11-21

Similar Documents

Publication Publication Date Title
CN116824396B (en) Multi-satellite data fusion automatic interpretation method
Chen et al. A methodology for automated segmentation and reconstruction of urban 3-D buildings from ALS point clouds
AU2021250975A1 (en) Statistical point pattern matching technique
CN111027547A (en) Automatic detection method for multi-scale polymorphic target in two-dimensional image
CN111028327B (en) Processing method, device and equipment for three-dimensional point cloud
CN109308688B (en) Visible light and near-infrared band thick cloud and shadow removing method
CN112906662B (en) Method, device and equipment for detecting change of remote sensing image and storage medium
US7778808B2 (en) Geospatial modeling system providing data thinning of geospatial data points and related methods
CN112115911A (en) Light-weight SAR image target detection method based on deep learning
CN115546656B (en) Remote sensing image cultivation region extraction method based on deep learning
CN111104850B (en) Remote sensing image building automatic extraction method and system based on residual error network
CN116258976A (en) Hierarchical transducer high-resolution remote sensing image semantic segmentation method and system
CN115375868A (en) Map display method, remote sensing map display method, computing device and storage medium
CN117992757B (en) Homeland ecological environment remote sensing data analysis method based on multidimensional data
CN116091939A (en) Forest on-ground biomass downscaling method based on multiscale geographic weighted regression
CN115019163A (en) City factor identification method based on multi-source big data
Aslani et al. Rooftop segmentation and optimization of photovoltaic panel layouts in digital surface models
CN112069445A (en) 2D SLAM algorithm evaluation and quantification method
CN115861791B (en) Method and device for generating litigation clues and storage medium
CN115511899A (en) Method and device for extracting morphological structure of shallow small landslide
Wang et al. A method for data density reduction in overlapped airborne LiDAR strips
Ismail et al. Developing complete urban digital twins in busy environments: a framework for facilitating 3d model generation from multi-source point cloud data
Vats et al. Terrain-Informed Self-Supervised Learning: Enhancing Building Footprint Extraction from LiDAR Data with Limited Annotations
CN118069729B (en) Method and system for visualizing homeland ecological restoration data based on GIS
CN117115679B (en) Screening method for space-time fusion remote sensing image pairs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant