CN115424140A - Satellite-borne mass image data distributed cooperative rapid high-precision processing system and method


Info

Publication number
CN115424140A
CN115424140A
Authority
CN
China
Prior art keywords
image
image data
satellite
module
precision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210910161.9A
Other languages
Chinese (zh)
Inventor
李晓博
刘洋
邵应昭
徐常志
张建华
谢卫莹
王元乐
张茗茗
丁跃利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Institute of Space Radio Technology
Original Assignee
Xian Institute of Space Radio Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Institute of Space Radio Technology filed Critical Xian Institute of Space Radio Technology
Priority to CN202210910161.9A
Publication of CN115424140A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks


Abstract

The invention provides a satellite-borne massive image data distributed cooperative rapid high-precision processing system, comprising an imaging mode judging module, a region matching calculation module, a rapid screening module, and a high-precision detection and identification module. The imaging mode judging module judges the current imaging mode of the remote sensing satellite: when the satellite works in the ocean target wide-area search imaging mode, the original image data is input into the rapid screening module; when it works in the point target detailed survey imaging mode, the original image data is input into the region matching calculation module. The rapid screening module screens images suspected of containing suspicious targets out of the original image data and inputs them into the high-precision detection and identification module. The region matching calculation module constructs hot spot regions, screens images belonging to a hot spot region out of the original image data, and inputs them into the high-precision detection and identification module. The high-precision detection and identification module detects and identifies suspicious targets in the input images and outputs the detection results.

Description

Satellite-borne mass image data distributed cooperative rapid high-precision processing system and method
Technical Field
The invention belongs to the field of space remote sensing, and particularly relates to a distributed cooperative rapid high-precision processing system and method for satellite-borne mass image data.
Background
In recent years, a new generation of artificial intelligence technology represented by deep learning has been widely applied to satellite-borne data processing systems, with performance greatly improved over traditional methods. However, as the spatial, temporal, and spectral resolution of remote sensing satellites keeps improving, the volume of data acquired by satellites grows ever larger (data rates reach tens of Gbps), while the processing rate of mainstream deep learning models deployed on FPGAs or AI chips only reaches tens of Mbps (tens of frames per second) to hundreds of Mbps (hundreds of frames per second), which cannot meet the real-time high-precision processing requirements of satellite-borne massive data.
According to big data analysis, only a small fraction (less than 1%) of the massive ocean remote sensing image data acquired by a remote sensing satellite contains real ship targets; most of the data is useless target-free data such as pure ocean or cloud cover. Existing satellite-borne processing systems have the following defects in ocean target detection processing:
The entire processing flow traverses all images acquired by the satellite: the acquired original images are divided into blocks, and every image block is sent to the high-precision target detection and identification network for processing regardless of whether a target is present in it. Likewise, traversal processing is performed in the detection and identification of fixed-area targets such as airport aircraft, ports, and ships. The existing method greatly wastes satellite-borne computing resources and increases processing delay.
Disclosure of Invention
The technical problem solved by the invention is as follows: the defects of the prior art are overcome, a distributed cooperative rapid high-precision processing system and method for satellite-borne mass image data are provided, and the efficiency and precision of satellite mass image data processing are effectively improved.
The technical solution of the invention is as follows:
a satellite-borne massive image data distributed cooperative rapid high-precision processing system comprises: imaging mode judging module, regional matching calculation module, quick screening module and high accuracy detection identification module, wherein:
an imaging mode determination module: judging the current working imaging mode of the remote sensing satellite, inputting the original image data into a rapid screening module when the remote sensing satellite works in the ocean target wide area search imaging mode, and inputting the original image data into a region matching calculation module when the remote sensing satellite works in the point target detailed search imaging mode;
a fast screening module: screening out images suspected of containing suspicious targets from original image data, inputting the images into a high-precision detection and identification module, and discarding the rest images;
the region matching calculation module: constructing a hot spot region, screening out images belonging to the hot spot region from original image data, inputting the images to a high-precision detection and identification module, and discarding the rest images;
High-precision detection and identification module: performs target detection and identification on the input images and outputs the detection results.
Preferably, the rapid screening module includes one or more data processing units, each of which screens input images using a rapid screening network. The rapid screening network comprises, arranged in sequence, a first convolutional layer, a second convolutional layer, at least one feature extraction unit, a third convolutional layer, a fully connected layer, and a Softmax layer. The first convolutional layer contains 32 1 × 1 convolution kernels, and the second convolutional layer contains 64 3 × 3 convolution kernels. Each feature extraction unit comprises, arranged in sequence, a multi-channel feature extraction unit, a Max pooling layer, and a fourth convolutional layer. The multi-channel feature extraction unit comprises a first channel, a second channel, and a third channel: the first channel connects 64 3 × 3 convolution kernels after 32 1 × 1 convolution kernels; the second channel connects 64 1 × 1 convolution kernels after 32 3 × 3 convolution kernels; the third channel is 64 1 × 1 convolution kernels. The fourth convolutional layer is 64 3 × 3 convolution kernels, and the third convolutional layer contains 32 3 × 3 convolution kernels. The feature maps output after convolution in the first convolutional layer, the second convolutional layer, the first channel, the second channel, the third convolutional layer, and the fourth convolutional layer are activated with the ReLU activation function.
Preferably, the convolution steps of the 1 × 1 convolution kernel and the 3 × 3 convolution kernel are both 1.
Preferably, the number of the feature extraction units is 3 to 5, and the feature extraction units are connected in series in sequence.
Preferably, the high-precision detection and identification module includes: the system comprises a multi-mode data fusion unit, a high-resolution information holding unit, a backbone network and an auxiliary super-resolution learning branch, wherein:
a multi-modal data fusion unit: unifying the original images to the same resolution, fusing multi-source data to form a fused image and inputting the fused image to a high-resolution information holding unit; the original image comprises a full-color image, a multi-spectral image and a near-infrared image;
High-resolution information holding unit: preserves and enhances the details of the fused image to obtain a detail-enhanced high-resolution image, which is input to the backbone network;
Backbone network: extracts hierarchical features from the detail-enhanced high-resolution image to obtain its low-level and high-level features, judges the category of suspicious targets in the image using the high-precision detection and identification model, and calculates the position and confidence of each suspicious target;
Auxiliary super-resolution learning branch: performs super-resolution learning using the low-level and high-level image features obtained by the backbone network, and corrects the backbone network's high-precision detection and identification model after acquiring the local texture, detail features, and high-level semantic information of complex background images.
Preferably, the auxiliary super-resolution learning branch comprises an encoder and a decoder: the encoder matches the low-level features of the image to the spatial size of the high-level features and then merges the low-level and high-level features; the decoder decodes the merged information and outputs super-resolution features.
Preferably, the region matching calculation module constructs a hot spot region, including:
dividing the earth's surface into equal-sized basic grids according to latitude and longitude indexes, marking and numbering each basic grid, and forming one or more hot spot areas according to intelligence information uploaded from the ground, wherein each hot spot area is composed of one or more basic grids.
Preferably, the extracting, by the region matching calculation module, an image belonging to a hot spot region from an original image includes:
dividing the original image into equal-sized image blocks, calculating the ground latitude and longitude coordinates corresponding to the center point of each image block using the GPS and attitude information in the image auxiliary data, determining the basic grid number in which the block's center point lies according to those coordinates, and then judging whether that basic grid belongs to a hot spot area; if so, the image block is extracted, otherwise it is discarded.
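The grid lookup just described can be sketched in a few lines of Python. This is an illustrative sketch only: the function names, the 1-degree grid size, and the sample coordinates are all hypothetical, and the floor-based (i, j) indexing follows the numbering scheme described later in the text.

```python
import math

def grid_index(lat, lon, m, n):
    """Map a ground coordinate to a basic grid index (i, j).

    The surface is assumed to be divided into m-degree (latitude) by
    n-degree (longitude) basic grids, as described in the text.
    """
    i = math.floor(lat / m)
    j = math.floor(lon / n)
    return (i, j)

def belongs_to_hotspot(lat, lon, m, n, hotspot_grids):
    # hotspot_grids: set of (i, j) indices of grids forming hot spot areas
    return grid_index(lat, lon, m, n) in hotspot_grids

# Example: 1-degree grids, a hot spot covering two adjacent grids
hotspots = {(31, 121), (31, 122)}
assert belongs_to_hotspot(31.2, 121.5, 1, 1, hotspots)      # block kept
assert not belongs_to_hotspot(10.0, 50.0, 1, 1, hotspots)   # block discarded
```

A real implementation would derive the latitude/longitude of the block center from the image auxiliary data rather than receiving it directly.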
Preferably, the rapid screening module and the region matching calculation module each comprise a plurality of parallel data processing units.
A method for processing images by adopting a satellite-borne mass image data distributed cooperative rapid high-precision processing system comprises the following steps:
(1) Receiving original image data input by a satellite-borne camera;
(2) Judging the imaging mode in which the satellite is working: when the satellite works in the ocean target wide-area search imaging mode, proceed to step (3); when the satellite works in the point target detailed survey imaging mode, proceed to step (5);
(3) Dividing original image data into image blocks with equal size, sequentially inputting the image blocks into a rapid screening module to perform suspected target screening processing, and entering the step (4);
(4) When a suspected target is detected in an image block, the block is input into the high-precision detection and identification module for identification and the detection result is output; otherwise the block is discarded. This continues until all original image data has been processed, then the procedure exits;
(5) Dividing original image data into image blocks with equal size, inputting the image blocks into an area matching calculation module in sequence, judging whether the image blocks belong to a hot spot area, and entering the step (6);
(6) And when detecting that the image block belongs to the hot spot area, inputting the image block into a high-precision detection and identification module for identification processing and outputting a detection result, otherwise, discarding the image block until all original image data are processed.
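The six steps above amount to a simple dispatch loop: one coarse filter is chosen by imaging mode, and only the blocks it keeps reach the expensive detector. The following sketch is illustrative only; the function names and the toy predicates are hypothetical, not part of the patent.

```python
def process_scene(image_blocks, mode, screen_fn, hotspot_fn, detect_fn):
    """Dispatch image blocks per the two imaging modes in steps (2)-(6).

    mode: "wide_area" (ocean target search) or "detailed" (point target).
    screen_fn: fast-screening predicate (suspected target present?).
    hotspot_fn: region-matching predicate (block inside a hot spot area?).
    detect_fn: high-precision detection, returns a result per block.
    """
    keep = screen_fn if mode == "wide_area" else hotspot_fn
    results = []
    for block in image_blocks:
        if keep(block):                       # steps (3)/(5): coarse filter
            results.append(detect_fn(block))  # steps (4)/(6): fine detection
        # otherwise the block is discarded
    return results

# Toy run: blocks are ints; "suspicious" means odd value.
res = process_scene([1, 2, 3, 4], "wide_area",
                    screen_fn=lambda b: b % 2 == 1,
                    hotspot_fn=lambda b: False,
                    detect_fn=lambda b: ("target", b))
assert res == [("target", 1), ("target", 3)]
```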
Compared with the prior art, the invention has the advantages that:
(1) Aiming at the sparsity of ocean targets, suspected-target-area images in the massive original data are rapidly screened with the rapid screening network, solving the problem of real-time processing of massive original data under resource-limited conditions;
(2) A global hotspot area target information base is established on the satellite processing system; prior fixed targets and intelligence-guided targets are rapidly matched, a large amount of invalid data is eliminated, and the processing speed for massive original data is improved;
(3) Different computing nodes are adopted for different imaging application modes; through distributed cooperative processing, the massive data processing rate is achieved while processing precision is guaranteed;
(4) The depth of the rapid screening network is only 12 layers and its computational complexity is low, so it can rapidly and effectively distinguish suspected targets in complex sea clutter interference scenes such as cloud, fog, and small islands.
Drawings
FIG. 1 is a schematic diagram of a distributed cooperative fast high-precision processing system for satellite-borne mass image data according to the present invention;
FIG. 2 is a flowchart illustrating the ocean targets wide-area search process of the present invention;
FIG. 3 is a schematic diagram of a fast screening network according to the present invention;
FIG. 4 is a flow chart of a detailed processing of the key objective of the present invention;
fig. 5 is a schematic diagram of a high-precision detection network according to the present invention.
Detailed Description
The invention is further illustrated by the following figures and examples.
As shown in fig. 1, the distributed cooperative fast high-precision processing system for satellite-borne massive image data includes: the device comprises an imaging mode judging module, a region matching calculation module, a rapid screening module and a high-precision detection and identification module.
The system has two working modes: ocean target wide-area search processing and key target detailed survey processing. Both are based on high-speed parallel computing nodes: using the rapid screening network and the satellite-borne hotspot target information base, suspected-target-area images are extracted from the massive original data and then sent to the high-precision detection and identification module for high-precision fine processing. Through the distributed cooperative processing of heterogeneous computing nodes, the original data rate is greatly reduced while processing precision is guaranteed.
Specifically, as shown in fig. 2, the ocean target wide-area search processing mode is mainly used for rapid wide-area search and discovery of ocean vessel targets. The imaging mode judging module inputs the massive raw data from the camera into the rapid screening module. The image is first divided into blocks by a plurality of parallel computing nodes; the massive high-speed raw data is then screened by the rapid screening network and suspected-target-area images are extracted, greatly reducing the raw data rate; finally, the suspected-target-area images are sent to the high-precision detection and identification module for fine processing. By exploiting the sparsity of targets, the processing rate is greatly improved while processing precision is guaranteed.
In the embodiment shown in fig. 2, the fast filtering module includes two high-speed parallel computing units formed by FPGA chips, and the specific working flow thereof is as follows:
The FPGA receives the high-speed original image data and stores it in DDR memory for caching; an image block I of size M × N is read from the DDR; and the rapid screening network is used to judge whether a suspicious target exists in image block I.
Further, the rapid screening network builds a convolutional neural network using 1 × 1 and 3 × 3 convolution kernels. All convolution strides are 1, so the input and output of each convolutional layer stay consistent in spatial size, and the number of output channels equals the number of convolution kernels; the feature maps output after convolution are activated with the ReLU activation function. As shown in fig. 3, after the remote sensing image is input into the network, it is convolved with 32 1 × 1 convolution kernels and then processed with 64 3 × 3 convolution kernels. The feature map obtained from the second convolutional layer is input into the multi-channel feature extraction module, which has three channels: the first channel connects 64 3 × 3 convolution kernels after 32 1 × 1 convolution kernels; the second channel connects 64 1 × 1 convolution kernels after 32 3 × 3 convolution kernels; the third channel is 64 1 × 1 convolution kernels. A Max pooling (maximum pooling) layer and a convolutional layer of 64 3 × 3 convolution kernels follow each multi-channel feature extraction module, together forming a feature extraction unit.
After 3 feature extraction units, the resulting feature map is processed with a convolutional layer containing 32 3 × 3 convolution kernels, and the result is input into the fully connected layer and the Softmax layer to obtain the final classification: the fully connected layer aggregates all results, and the Softmax layer normalizes them and outputs the final classification result. The depth of the entire rapid screening network is only 12 layers and its computational complexity is low, so it can rapidly and effectively distinguish suspected targets on orbit in complex sea clutter interference scenes such as cloud, fog, and small islands.
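To make the shape bookkeeping concrete, the following sketch traces a feature map through the network under two stated assumptions: 'same' padding (the text says stride-1 convolution preserves spatial size) and 2 × 2 Max pooling (the pooling size is not stated in the text). The 256 × 256 single-band input size is also hypothetical.

```python
def conv(shape, kernels):
    # Stride-1, 'same'-padded convolution: spatial size preserved,
    # output channels = number of kernels (as stated in the text).
    c, h, w = shape
    return (kernels, h, w)

def max_pool(shape, k=2):
    # Assumed 2x2 Max pooling; the pool size is not given in the text.
    c, h, w = shape
    return (c, h // k, w // k)

shape = (1, 256, 256)          # hypothetical single-band image block
shape = conv(shape, 32)        # first layer: 32 1x1 kernels
shape = conv(shape, 64)        # second layer: 64 3x3 kernels
for _ in range(3):             # three feature extraction units
    shape = conv(shape, 64)    # multi-channel stage (64-channel paths)
    shape = max_pool(shape)    # Max pooling layer
    shape = conv(shape, 64)    # fourth conv layer: 64 3x3 kernels
shape = conv(shape, 32)        # third conv layer: 32 3x3 kernels
assert shape == (32, 32, 32)   # three poolings: 256 -> 32 spatially
```

The fully connected and Softmax layers that follow collapse this map to class scores; how the three parallel channel outputs are fused is not specified in the text, so the sketch tracks only the stated 64-channel result of each stage.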
As shown in fig. 4, the key target detailed survey processing mode is mainly used for detailed survey of key targets such as ports and airports, and for cases where ground-injected intelligence or guidance information from other satellites (e.g., electronic reconnaissance satellites) is available. The imaging mode judging module inputs the massive raw data from the camera into the region matching calculation module. The raw data is first divided into blocks and processed in parallel by a plurality of parallel computing nodes; the regional latitude and longitude information corresponding to each image block is calculated; region matching is performed against prior information such as the satellite-borne hotspot target information base; and the extracted suspected-target-area images are sent to the high-precision detection and identification module for fine processing. By exploiting prior information on fixed-region targets, processing precision is guaranteed while the amount of data to be processed is greatly reduced.
Specifically, the earth's surface is first divided into equal-sized grids according to latitude and longitude indexes, with each grid defined to be of size m × n. With the surface gridded, different grids can be combined so that a hot spot area of arbitrary shape can be described accurately. When the satellite processing system screens data, it can accurately extract the corresponding target-area images, reducing irrelevant data processing and improving both processing speed and precision, while also satisfying users' need for detailed survey of arbitrary areas of interest. The size m × n is chosen mainly according to the ground area covered by one image block of the on-board processing system: the closer m × n is to the block size, the more accurately suspected-area image blocks are extracted and hot spot areas are described. In actual engineering, therefore, the image block size of the on-board processing system and the shapes of the different hot spot areas must be considered together to select an appropriate m × n.
After the size m × n is selected, each grid on the earth's surface is marked with a number Z_k(i, j), where
−floor(90/m) ≤ i ≤ floor(90/m),
−floor(180/n) ≤ j ≤ floor(180/n),
1 ≤ k ≤ (180 × 2 × 90 × 2)/(m × n).
A given hot spot area A_x can be composed of several basic grids Z_k(i, j):
A_x = {Z_k(i, j)}, 1 ≤ x ≤ (180 × 2 × 90 × 2)/(m × n).
The global hotspot area information base T is:
T = A_x ∪ B_y ∪ C_z
where A_x, B_y, and C_z denote the sets of hot spot areas such as key regions, ports, and airports, respectively.
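A minimal sketch of the information base T as a set union; all grid indices and area contents here are hypothetical examples, not values from the patent.

```python
# Hypothetical grid indices; each set is one class of hot spot areas.
key_regions = {(31, 121), (31, 122)}   # A_x: key (focus) regions
ports       = {(22, 114)}              # B_y: ports
airports    = {(40, 116)}              # C_z: airports

# Global hotspot area information base T = A ∪ B ∪ C
T = key_regions | ports | airports

assert (22, 114) in T       # a port grid is matched
assert (0, 0) not in T      # an open-ocean grid is not
assert len(T) == 4
```

Representing T as a set of grid indices makes the on-board membership test for each image block an O(1) hash lookup, and online updates from the ground reduce to set insertions and deletions.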
The global hotspot area information base is first constructed on the ground and then uploaded for use by the on-board processing system; during satellite operation it can be dynamically updated, maintained, and upgraded online as required.
In the embodiment shown in fig. 4, the region matching calculation module includes a high-speed parallel computing unit built from an FPGA chip; its specific working flow is as follows:
s1: the FPGA receives high-speed original image data, and the high-speed original image data are stored in the DDR for caching;
s2: reading an image block I with the size of M multiplied by N from a DDR;
s3: calculating a ground longitude and latitude coordinate I (k, l) corresponding to a current image central point by using information such as a GPS (global positioning system), an attitude and the like in the image auxiliary data, and performing grid normalization processing I (I, j); wherein i = floor (180/M), j = floor (90/N);
s4: matching I (I, j) with a hotspot region information base T, when I (I, j) belongs to T, sending an image block I to a high-precision detection and identification module for high-precision detection and identification processing, otherwise, discarding the image block;
S2–S4 are repeated; after the image data in the DDR has been processed, the DDR is cleared and the next scene's image data is read in.
For the high-precision detection and identification requirements of complex background targets under on-board resource constraints, the high-precision detection and identification module of the invention processes the input images with a high-precision detection network based on an auxiliary super-resolution branch. The network takes the YOLOv5s structure as a baseline and introduces a multi-modal data fusion structure, a high-resolution information holding network, and an auxiliary super-resolution branch to guarantee high-precision, low-false-alarm detection and identification of targets in complex backgrounds. The auxiliary branch does not participate in the inference stage, so it introduces no extra computation for the detection network and does not affect inference speed, making the network easy to deploy on a resource-limited satellite and achieving optimal detection precision and processing speed.
specifically, as shown in fig. 5, the high-precision detection network includes a multi-modal data fusion unit, a high-resolution information holding unit, a backbone network, and an auxiliary super-resolution learning branch;
a multi-modal data fusion unit: performing normalization processing on multi-modal images with different resolutions in original image data, unifying the images to the same resolution, forming a fused image through multi-source data fusion and inputting the fused image to a high-resolution information holding unit; the multi-modal images include panchromatic images, multi-spectral images, and near-infrared images.
Specifically, a detection structure based on multi-modal adaptive fusion is designed that exploits the complementarity between visible-light panchromatic, multi-spectral, and infrared modalities, thereby obtaining more target-discriminating information and further improving the precision and generality of the algorithm. The input panchromatic image, RGB image, and near-infrared image are first normalized to the [0, 1] interval and then concatenated, which costs relatively little computation and accelerates inference. Specifically, the fused image is defined as:
X = Concat(Q, R, G, B, I)
where the fused image X ∈ R^(C×H×W); C denotes the number of channels, and H and W denote the height and width of the image, respectively; {Q}, {R, G, B}, and {I} denote the panchromatic image, the RGB image, and the near-infrared image, respectively; and Concat(·) denotes a concatenation operation along the channel dimension. Then X is downsampled to 1/n of the original size to complete the super-resolution module and speed up the training process. The downsampled image X′ ∈ R^(C×(H/n)×(W/n)) is obtained as X′ = D(X), where D(·) denotes an n× downsampling operation using bilinear interpolation. The downsampled result is then input into the backbone to produce multi-level features.
High score information holding unit: and maintaining and improving the details of the fused image to obtain a detail-enhanced high-resolution image and inputting the detail-enhanced high-resolution image to a backbone network.
Specifically, a high-resolution information holding network is constructed to retain the high-resolution characteristics of remote sensing targets and address the loss of small target information. The Focus module in the YOLOv5 backbone slices the image at intervals in the spatial domain and reassembles the slices into a new image to resize the input: it samples one value per position and reconstructs smaller, complementary sub-images. The size of the reconstructed image decreases as the number of channels increases, which reduces resolution and loses spatial information for small targets. Since small target detection depends more heavily on higher resolution, the Focus module is abandoned and a specific convolution operation is used instead, preventing small targets from being lost due to resolution reduction.
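For illustration, the interval slicing performed by the standard YOLOv5 Focus module, whose resolution loss motivates its replacement here, can be sketched in pure Python on a tiny single-channel image (the image values are arbitrary):

```python
def focus_slice(img):
    """Space-to-depth rearrangement as in the YOLOv5 Focus module:
    every second pixel goes to one of four sub-images, so spatial size
    halves and the channel count quadruples. Pure-Python sketch on a 2D list."""
    return [
        [row[0::2] for row in img[0::2]],  # even rows, even cols
        [row[1::2] for row in img[0::2]],  # even rows, odd cols
        [row[0::2] for row in img[1::2]],  # odd rows, even cols
        [row[1::2] for row in img[1::2]],  # odd rows, odd cols
    ]

img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
subs = focus_slice(img)
assert len(subs) == 4                 # 1 channel -> 4 channels
assert subs[0] == [[0, 2], [8, 10]]   # resolution halved: 4x4 -> 2x2
```

The sketch makes the trade-off visible: no pixel is lost, but each sub-image has half the resolution, which is exactly what hurts small targets and motivates the convolutional replacement described above.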
A backbone network: and (3) extracting the hierarchical features of the detail-enhanced high-resolution image to obtain the low-level and high-level features of the image, finishing the judgment of the suspicious object category in the image by adopting a high-precision detection and identification model, and calculating the position and the confidence coefficient of the suspicious object category.
Auxiliary super-resolution learning branch: when the ground system trains the high-precision detection and identification model, this branch performs super-resolution learning using the low-level and high-level features of the image obtained by the backbone network, and refines the backbone network's high-precision detection and identification model after acquiring the local texture, detail features and high-level semantic information of the complex-background image. The auxiliary super-resolution learning branch comprises an encoder and a decoder: the encoder matches the low-level features of the image to the spatial size of the high-level features and then merges the low-level and high-level features; the decoder decodes the merged feature information and outputs super-resolution features.
Specifically, the auxiliary super-resolution learning branch is a unit attached to the backbone network when the ground system trains the high-precision detection and identification model; it does not participate in on-board inference computation. Designing this auxiliary super-resolution learning branch improves the network's ability to retain high-resolution information, which further improves the detection accuracy of the algorithm. The feature maps retained in the backbone network for multi-scale detection are much smaller than the original input image. Most existing methods recover the feature size with an upsampling operation; unfortunately, this is of limited use because texture and pattern information has already been lost, so it is unsuitable for detecting small objects in remote sensing images. To solve this problem, the method introduces an auxiliary super-resolution branch. The super-resolution structure can be viewed as a simple encoder-decoder model. The low-level and high-level features of the backbone network are selected as the branch's input to obtain local texture, pattern and semantic information. In the encoder, the low-level features first pass through CBR blocks and an upsampling operation to match the spatial size of the high-level features; the low-level and high-level features are then merged using a concatenation operation and two CBRD blocks. A CBR module comprises convolution, batch normalization and a ReLU activation function, while a CBRD module additionally includes a dropout layer. The decoder maps the low-resolution features to the high-resolution space, where the output size of the super-resolution module is twice that of the input image; it is implemented with three deconvolution layers.
The super-resolution branch guides learning along the spatial dimension and transfers it to the main branch, thereby improving target detection performance. The branch is removed entirely at the inference stage: it does not participate in detection inference, introduces no extra computation into the detection network, and does not affect inference speed. Moreover, because the input image has a relatively low resolution, the method achieves a speed-up compared with a common detection network, meeting the requirements of efficient detection tasks under limited satellite-borne resources.
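The encoder's spatial-matching-then-concatenation step can be sketched as follows. Nearest-neighbour integer-factor resampling stands in for the CBR blocks plus upsampling described above, and all feature shapes are illustrative assumptions, not values from the patent:

```python
import numpy as np

def match_spatial(feat, th, tw):
    """Resize a (C, H, W) feature map to (C, th, tw) by integer-factor
    nearest-neighbour up/down-sampling -- a stand-in for the learned
    CBR + upsampling stage of the encoder."""
    c, h, w = feat.shape
    if th >= h:   # enlarge by pixel repetition
        return feat.repeat(th // h, axis=1).repeat(tw // w, axis=2)
    return feat[:, ::h // th, ::w // tw]  # shrink by striding

def encode(low, high):
    """Merge low- and high-level features: match spatial sizes, then
    concatenate along the channel dimension (the two CBRD blocks that
    follow in the patent are omitted from this sketch)."""
    low = match_spatial(low, high.shape[1], high.shape[2])
    return np.concatenate([low, high], axis=0)

low = np.zeros((64, 80, 80))     # low level: more spatial detail
high = np.zeros((256, 20, 20))   # high level: more semantics
merged = encode(low, high)       # (320, 20, 20)
```

The merged tensor is what the decoder (three deconvolution layers in the patent) would then project back into the high-resolution space.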
The processing method of the satellite-borne mass image data distributed cooperative rapid high-precision processing system comprises the following steps:
1) Receiving original image data input by a satellite-borne camera;
2) Judging the imaging mode in which the satellite is working: entering step 3) when the satellite works in the ocean target wide-area search imaging mode, and entering step 5) when the satellite works in the point target detailed-survey imaging mode;
3) Dividing original image data into image blocks with equal size, sequentially inputting the image blocks into a rapid screening module for screening suspected targets, and entering the step 4);
4) When a suspected target is detected in an image block, inputting the image block into the high-precision detection and identification module for identification processing and outputting the detection result; otherwise, discarding the image block. Exiting the method once all original image data have been processed;
5) Dividing the original image data into image blocks of equal size and inputting them into the region matching calculation module in sequence; calculating the ground longitude and latitude coordinates corresponding to the center point of each image block using the GPS (global positioning system), attitude and other information in the image auxiliary data; determining the number of the basic grid containing the image block's center point from the longitude and latitude coordinates; judging whether that basic grid belongs to a hot spot area; and entering step 6);
6) When an image block is found to belong to a hot spot area, inputting it into the high-precision detection and identification module for identification and outputting the detection result; otherwise, discarding the image block. Exiting the method once all original image data have been processed.
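Steps 5) and 6) above can be sketched as follows. The 1-degree grid cell size, the row-major numbering scheme and the example hot-spot cells are assumptions made for illustration only; the patent leaves these parameters to the ground annotation:

```python
def grid_number(lat, lon, cell_deg=1.0):
    """Map a ground (lat, lon) point to the number of the equal-size
    basic grid cell containing it. Cells are numbered row-major over a
    cell_deg-degree latitude/longitude lattice (assumed scheme)."""
    cols = int(360 / cell_deg)
    row = int((lat + 90) // cell_deg)
    col = int((lon + 180) // cell_deg)
    return row * cols + col

# A hot-spot area is a set of basic grid numbers annotated on the ground;
# these three cells are purely illustrative.
hotspot = {grid_number(30, 122), grid_number(30, 123), grid_number(31, 122)}

def keep_block(center_lat, center_lon):
    """Steps 5)-6): keep an image block only if the basic grid containing
    its center point belongs to a hot-spot area."""
    return grid_number(center_lat, center_lon) in hotspot

keep_block(30.4, 122.7)   # True: block center falls in a hot-spot cell
keep_block(10.0, 50.0)    # False: block is discarded
```

Because only grid membership is tested, the per-block cost is a single hash lookup, which suits the limited on-board computing resources the patent targets.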
Those skilled in the art will appreciate that those matters not described in detail in the present specification are well known in the art.

Claims (10)

1. A satellite-borne mass image data distributed cooperative rapid high-precision processing system, characterized by comprising: an imaging mode judging module, a region matching calculation module, a rapid screening module and a high-precision detection and identification module, wherein:
an imaging mode judging module: judging the imaging mode in which the remote sensing satellite is currently working, inputting the original image data into the rapid screening module when the satellite works in the ocean target wide-area search imaging mode, and inputting the original image data into the region matching calculation module when the satellite works in the point target detailed-survey imaging mode;
a rapid screening module: screening out images suspected of containing suspicious targets from the original image data, inputting them into the high-precision detection and identification module, and discarding the remaining images;
a region matching calculation module: constructing hot spot areas, screening out images belonging to a hot spot area from the original image data, inputting them into the high-precision detection and identification module, and discarding the remaining images;
a high-precision detection and identification module: carrying out target detection and identification on the input images and outputting the detection results.
2. The satellite-borne mass image data distributed cooperative rapid high-precision processing system according to claim 1, wherein the rapid screening module includes one or more data processing units, each data processing unit screening the input image with a rapid screening network; the rapid screening network comprises a first convolutional layer, a second convolutional layer, at least one feature extraction unit, a third convolutional layer, a fully connected layer and a Softmax layer arranged in sequence; the first convolutional layer comprises 32 1×1 convolution kernels and the second convolutional layer comprises 64 3×3 convolution kernels; the feature extraction unit comprises a multi-channel feature extraction unit, a Max layer and a fourth convolutional layer arranged in sequence; the multi-channel feature extraction unit comprises a first channel, a second channel and a third channel, the first channel comprising 32 1×1 convolution kernels followed by 64 3×3 convolution kernels, the second channel comprising 32 3×3 convolution kernels followed by 64 1×1 convolution kernels, and the third channel comprising 64 1×3 convolution kernels followed by 64 convolution kernels; the fourth convolutional layer comprises 64 convolution kernels; and the feature maps output after convolution in the first convolutional layer, the second convolutional layer, the first channel, the second channel, the third convolutional layer or the fourth convolutional layer are activated with a ReLU activation function.
3. The distributed cooperative rapid high-precision processing system for satellite-borne massive image data according to claim 2, wherein the convolution steps of the 1 x 1 convolution kernel and the 3 x 3 convolution kernel are both 1.
4. The distributed cooperative rapid high-precision processing system for satellite-borne mass image data according to claim 2, wherein the number of the feature extraction units is 3-5, and the feature extraction units are connected in series in sequence.
5. The satellite-borne mass image data distributed cooperative rapid high-precision processing system according to claim 1, wherein the high-precision detection and identification module comprises: a multi-modal data fusion unit, a high-resolution information retention unit, a backbone network and an auxiliary super-resolution learning branch, wherein:
a multi-modal data fusion unit: unifying original images to the same resolution, forming a fused image through multi-source data fusion and inputting the fused image to a high-resolution information holding unit; the original image comprises a full-color image, a multispectral image and a near-infrared image;
a high-resolution information retention unit: preserving and enhancing the details of the fused image to obtain a detail-enhanced high-resolution image, which is input to the backbone network;
backbone network: extracting the hierarchical features of the detail-enhanced high-resolution image to obtain low-level and high-level features of the image, judging the category of a suspicious target in the image by adopting a high-precision detection and identification model, and calculating the position and the confidence coefficient of the suspicious target;
auxiliary super-resolution learning branch: and performing super-resolution learning by using the low-level and high-level features of the image obtained by the backbone network, and correcting the high-precision detection and identification model of the backbone network after acquiring the local texture, detail features and high-level semantic information of the complex background image.
6. The satellite-borne mass image data distributed cooperative rapid high-precision processing system according to claim 5, wherein the auxiliary super-resolution learning branch comprises an encoder and a decoder; the encoder matches the low-level features of the image to the spatial size of the high-level features and then merges the low-level features and the high-level features; and the decoder decodes the merged low-level and high-level feature information and outputs super-resolution features.
7. The distributed cooperative rapid high-precision processing system for satellite-borne mass image data according to claim 1, wherein the region matching calculation module constructs a hot spot region, comprising:
dividing the earth surface into basic grids with equal size according to the longitude and latitude indexes, marking and numbering each basic grid, and forming one or more hot spot areas according to the information annotated on the ground, wherein each hot spot area is formed by one or more basic grids.
8. The distributed cooperative rapid high-precision processing system for satellite-borne mass image data according to claim 7, wherein the region matching calculation module extracts images belonging to hot spot regions from original images, and comprises:
dividing an original image into image blocks with equal size, calculating a ground longitude and latitude coordinate corresponding to a central point of the image block by using a GPS and attitude information in image auxiliary data, determining a basic grid number where the central point of the image block is located according to the longitude and latitude coordinate, then judging whether the basic grid belongs to a hot spot area, if so, extracting the image block, and if not, discarding the image block.
9. The satellite-borne mass image data distributed cooperative rapid high-precision processing system according to claim 1, wherein the rapid screening module and the region matching calculation module each include a plurality of parallel data processing units.
10. A method for processing images by using the distributed cooperative fast high-precision processing system for satellite-borne mass image data according to any one of claims 1 to 9, comprising the following steps:
(1) Receiving original image data input by a satellite-borne camera;
(2) Judging the imaging mode in which the satellite is working: entering step (3) when the satellite works in the ocean target wide-area search imaging mode, and entering step (5) when the satellite works in the point target detailed-survey imaging mode;
(3) Dividing original image data into image blocks with equal size, sequentially inputting the image blocks into a rapid screening module to perform suspected target screening processing, and entering the step (4);
(4) When a suspected target is detected in an image block, inputting the image block into the high-precision detection and identification module for identification and outputting the detection result; otherwise, discarding the image block, until all original image data are processed and the method exits;
(5) Dividing original image data into image blocks with equal size, inputting the image blocks into an area matching calculation module in sequence, judging whether the image blocks belong to a hot spot area, and entering the step (6);
(6) And when detecting that the image block belongs to the hot spot area, inputting the image block into a high-precision detection and identification module for identification and outputting a detection result, otherwise, discarding the image block until all the original image data are processed.
CN202210910161.9A 2022-07-29 2022-07-29 Satellite-borne mass image data distributed cooperative rapid high-precision processing system and method Pending CN115424140A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210910161.9A CN115424140A (en) 2022-07-29 2022-07-29 Satellite-borne mass image data distributed cooperative rapid high-precision processing system and method


Publications (1)

Publication Number Publication Date
CN115424140A true CN115424140A (en) 2022-12-02

Family

ID=84195604


Country Status (1)

Country Link
CN (1) CN115424140A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115984084A * 2022-12-19 2023-04-18 中国科学院空天信息创新研究院 Remote sensing distributed data processing method based on dynamic detachable network
CN115984084B * 2022-12-19 2023-06-06 中国科学院空天信息创新研究院 Remote sensing distributed data processing method based on dynamic detachable network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination