CN116109924A - SAR target recognition distribution feature diagnosis method and device - Google Patents

SAR target recognition distribution feature diagnosis method and device

Info

Publication number
CN116109924A
CN116109924A
Authority
CN
China
Prior art keywords
target
sar image
feature
result
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310004978.4A
Other languages
Chinese (zh)
Inventor
刘锦帆
胡利平
李超
王超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Environmental Features
Original Assignee
Beijing Institute of Environmental Features
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Environmental Features filed Critical Beijing Institute of Environmental Features
Priority to CN202310004978.4A
Publication of CN116109924A
Legal status: Pending (Current)

Classifications

    • G06V 20/10 Scenes; scene-specific elements; terrestrial scenes
    • G06N 3/08 Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/40 Arrangements for image or video recognition or understanding; extraction of image or video features
    • G06V 10/764 Recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/774 Processing image or video features in feature spaces; generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 Recognition or understanding using pattern recognition or machine learning; neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a distributed feature diagnosis method and device for SAR target recognition. The method comprises: acquiring a target SAR image to be diagnosed; inputting the target SAR image into a pre-trained recognition model to obtain a recognition result of the target SAR image, the recognition model comprising a feature extraction module, a fully connected layer and an output layer connected in sequence; and inputting the target SAR image and its recognition result into a diagnosis network to obtain a diagnosis result of the target SAR image, the diagnosis network being formed by sequentially connecting the feature extraction module of the recognition model, a feature mapping layer and the output layer of the recognition model. With the method and device, the contribution of each region of the target SAR image to the recognition result can be traced, so that the influence of each structural region of the SAR target on the recognition result is diagnosed and the key regions on which SAR target recognition relies are obtained.

Description

SAR target recognition distribution feature diagnosis method and device
Technical Field
Embodiments of the invention relate to the technical field of image processing, and in particular to a distributed feature diagnosis method and device for SAR target recognition.
Background
Most existing SAR target recognition methods are based on machine learning and artificial intelligence algorithms. Although these models achieve high accuracy on many prediction tasks, their multi-layer nested nonlinear structures make them highly opaque: researchers cannot clearly determine which information in the input data drives the classification decisions made by the models, because that information remains isolated inside the model and is difficult to trace.
Therefore, there is a need for a distributed feature diagnosis method for SAR target recognition.
Disclosure of Invention
In order to solve the problem that the key areas relied on for SAR target recognition are difficult to determine with existing SAR target recognition methods, embodiments of the invention provide a distributed feature diagnosis method and device for SAR target recognition.
In a first aspect, an embodiment of the present invention provides a distributed feature diagnosis method for SAR target recognition, including:
acquiring a target SAR image to be diagnosed;
inputting the target SAR image into a pre-trained recognition model to obtain a recognition result of the target SAR image; the recognition model comprises a feature extraction module, a fully connected layer and an output layer which are sequentially connected;
inputting the target SAR image and the recognition result of the target SAR image into a diagnosis network to obtain a diagnosis result of the target SAR image; the diagnosis network is formed by sequentially connecting the feature extraction module of the recognition model, a feature mapping layer and the output layer of the recognition model.
In a second aspect, an embodiment of the present invention further provides a distributed feature diagnosis apparatus for SAR target recognition, including:
an acquisition unit configured to acquire a target SAR image to be diagnosed;
a recognition unit configured to input the target SAR image into a pre-trained recognition model to obtain a recognition result of the target SAR image, the recognition model comprising a feature extraction module, a fully connected layer and an output layer which are sequentially connected;
a diagnosis unit configured to input the target SAR image and the recognition result of the target SAR image into a diagnosis network to obtain a diagnosis result of the target SAR image, the diagnosis network being formed by sequentially connecting the feature extraction module of the recognition model, a feature mapping layer and the output layer of the recognition model.
In a third aspect, an embodiment of the present invention further provides a computing device, including a memory and a processor, where the memory stores a computer program, and the processor implements a method according to any embodiment of the present specification when executing the computer program.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform a method according to any of the embodiments of the present specification.
Embodiments of the invention provide a distributed feature diagnosis method and device for SAR target recognition. First, a target SAR image to be diagnosed is input into a pre-trained recognition model to obtain a recognition result of the target SAR image, the recognition model comprising a feature extraction module, a fully connected layer and an output layer connected in sequence. Then, the target SAR image and its recognition result are input into a diagnosis network, which is formed by sequentially connecting the feature extraction module of the recognition model, a feature mapping layer and the output layer of the recognition model, and which outputs the diagnosis result of the target SAR image. In this way, the contribution of each region of the target SAR image to the recognition result can be traced, so that the influence of each structural region of the SAR target on the recognition result is diagnosed and the key regions on which SAR target recognition relies are obtained.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for the embodiments or for the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a distributed feature diagnostic method for SAR target recognition according to one embodiment of the present invention;
FIG. 2 is a hardware architecture diagram of a computing device according to one embodiment of the present invention;
FIG. 3 is a block diagram of a distributed feature diagnosis apparatus for SAR target recognition according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, embodiments of the present invention; all other embodiments obtained by a person skilled in the art on the basis of the embodiments of the present invention without inventive effort fall within the scope of protection of the present invention.
With the development of airborne and spaceborne radar imaging systems, their use for reconnaissance, surveillance, and the detection and identification of key ground and sea-surface targets has become widespread, and Synthetic Aperture Radar (SAR) imaging technology in particular has made great progress. However, research on intelligent interpretation of radar target characteristic data has not kept pace with the development of the imaging systems, and intelligent interpretation of SAR targets is one of the outstanding difficulties. Unlike optical images, SAR images reflect geometric features of a target, such as its size and contour, and also contain electromagnetic features such as its scattering centers. Because of electromagnetic multiple reflections, the radar beam incidence and receiving directions, and other factors of the SAR imaging mechanism, SAR images acquired at different incidence angles and in different wave bands differ greatly.
Therefore, although existing SAR target recognition methods based on machine learning and artificial intelligence algorithms achieve high accuracy, it is difficult to determine from them which areas are critical to the recognition of a SAR target.
In order to solve the above technical problem, the inventors propose replacing the fully connected layer in the recognition model with a feature mapping layer, so that the feature extraction module, the feature mapping layer and the output layer are connected in sequence to form a diagnosis network. The original fully connected layer discards the spatial information preserved by the feature extraction module; replacing it with the feature mapping layer gives the diagnosis network the ability to map structural information, so that a diagnosis result of the target SAR image can be obtained from the target SAR image and its recognition result.
Specific implementations of the above concepts are described below.
Referring to FIG. 1, an embodiment of the present invention provides a distributed feature diagnosis method for SAR target recognition, which includes:
Step 100, acquiring a target SAR image to be diagnosed;
Step 102, inputting the target SAR image into a pre-trained recognition model to obtain a recognition result of the target SAR image; the recognition model comprises a feature extraction module, a fully connected layer and an output layer which are sequentially connected;
Step 104, inputting the target SAR image and the recognition result of the target SAR image into a diagnosis network to obtain a diagnosis result of the target SAR image; the diagnosis network is formed by sequentially connecting the feature extraction module of the recognition model, a feature mapping layer and the output layer of the recognition model.
In the embodiment of the invention, the target SAR image to be diagnosed is first input into a pre-trained recognition model to obtain a recognition result of the target SAR image, the recognition model comprising a feature extraction module, a fully connected layer and an output layer connected in sequence. Then, the target SAR image and its recognition result are input into a diagnosis network, which is formed by sequentially connecting the feature extraction module of the recognition model, a feature mapping layer and the output layer of the recognition model, and which outputs the diagnosis result of the target SAR image. In this way, the contribution of each region of the target SAR image to the recognition result can be traced, so that the influence of each structural region of the SAR target on the recognition result is diagnosed and the key regions on which SAR target recognition relies are obtained.
For step 100:
In the embodiment of the invention, the target SAR image to be diagnosed is a simulated ship SAR image; it will be understood that the target SAR image to be diagnosed may, for example, be selected from the SAR images of the verification set.
The target may be a ship or other object such as an airplane, and the type of the target is not limited here.
For step 102:
In the embodiment of the invention, a pre-trained recognition model is first obtained. The recognition model of this embodiment comprises a feature extraction module, a fully connected layer and an output layer connected in sequence; the feature extraction module comprises 4 feature extraction sub-modules, each of which comprises 1 convolution layer and 1 pooling layer.
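For illustration only, a minimal PyTorch sketch of a recognition model with this structure is given below. The channel widths, kernel sizes, input size (single-channel 128x128 chips), and the use of ReLU and max pooling are assumptions made for the sketch; only the overall layout (4 convolution+pooling feature extraction sub-modules, fully connected layers, and an output layer) follows the description above.

```python
import torch
import torch.nn as nn

class FeatureExtractionSubmodule(nn.Module):
    """One feature extraction sub-module: 1 convolution layer + 1 pooling layer."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(torch.relu(self.conv(x)))

class RecognitionModel(nn.Module):
    """Feature extraction module (4 sub-modules) -> fully connected layers -> output layer."""
    def __init__(self, num_classes=3, in_size=128):
        super().__init__()
        self.features = nn.Sequential(
            FeatureExtractionSubmodule(1, 16),
            FeatureExtractionSubmodule(16, 32),
            FeatureExtractionSubmodule(32, 64),
            FeatureExtractionSubmodule(64, 128),
        )
        feat_size = in_size // 16                       # four 2x2 poolings
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * feat_size * feat_size, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.output = nn.Linear(128, num_classes)       # softmax applied at inference

    def forward(self, x):
        return self.output(self.fc(self.features(x)))
```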
Next, a training process of the recognition model will be described.
First, the simulated original SAR images of three ship targets are randomly mixed and labelled to generate a training set, a test set and a verification set. The pre-built convolutional neural network (CNN) is trained and tested with the training set to generate the recognition model. The convolutional neural network consists of 4 convolution-pooling layers, 2 fully connected layers and an output layer. The processed image data acquired at a radar grazing angle of 67 degrees are input into the CNN for network training, and the processed image data acquired at a grazing angle of 70 degrees are used for network testing and network verification.
The number of training epochs is set to 20 and the batch_size to 30, so that the convergence of the network can be monitored in good time. Training the CNN yields the test-set confusion matrices and recognition rates shown in Tables 1-4.
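As a rough illustration of this training configuration, the sketch below trains the recognition model with epochs = 20 and batch_size = 30. The Adam optimizer, learning rate and cross-entropy loss are assumptions made for the sketch, and train_images / train_labels stand in for the labelled 67-degree grazing-angle training chips.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_recognition_model(model, train_images, train_labels, epochs=20, batch_size=30):
    # train_images: float tensor (N, 1, H, W); train_labels: long tensor (N,)
    loader = DataLoader(TensorDataset(train_images, train_labels),
                        batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # assumed optimizer and rate
    criterion = torch.nn.CrossEntropyLoss()
    for epoch in range(epochs):
        running = 0.0
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            running += loss.item()
        # print the mean loss so convergence can be monitored in good time
        print(f"epoch {epoch + 1}: mean loss {running / len(loader):.4f}")
```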
TABLE 1 (test-set confusion matrix and recognition rate; provided as an image in the original publication)
TABLE 2 (provided as an image in the original publication)
TABLE 3 (provided as an image in the original publication)
TABLE 4 (provided as an image in the original publication)
As can be seen from Tables 1 to 4, after training with this CNN structure the target recognition rate exceeds 99%, so the network has good capability for classifying and recognizing ship target SAR images, and the network parameters of the CNN can be fixed to generate the recognition model.
After the recognition model has been generated by training, the target SAR image to be diagnosed is input into the recognition model, which performs classification prediction on the category of the target in the target SAR image and outputs the recognition result of the target SAR image.
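A minimal inference sketch, assuming the RecognitionModel from the earlier sketch: the recognition result passed on to the diagnosis network is taken to be the predicted class together with its softmax probability.

```python
import torch
import torch.nn.functional as F

def recognize(model, sar_image):
    # sar_image: tensor of shape (1, 1, H, W)
    model.eval()
    with torch.no_grad():
        logits = model(sar_image)
        probs = F.softmax(logits, dim=1)
        conf, pred_class = probs.max(dim=1)      # probability and index of the predicted class
    return pred_class.item(), conf.item()
```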
For step 104:
In the embodiment of the invention, the feature extraction module comprises a plurality of feature extraction sub-modules.
Step 104 may include:
inputting the target SAR image into the diagnosis network, so that each feature extraction sub-module performs feature extraction on the target SAR image in turn;
inputting the output result of the last feature extraction sub-module and the recognition result of the target SAR image into the feature mapping layer of the diagnosis network, so as to project the recognition result of the target SAR image back into the feature mapping space of the last feature extraction sub-module according to the network parameters of the output layer and obtain the diagnosis result of the target SAR image.
For example, the recognition model is trained from the first convolutional neural network in step 102 and is formed by sequentially connecting 4 feature extraction sub-modules (each a convolution layer plus a pooling layer), two fully connected layers and one output layer (softmax) in series. A second convolutional neural network is then constructed, formed by sequentially connecting 4 feature extraction sub-modules, a feature mapping layer and an output layer (softmax) in series. The network parameters of the 4 feature extraction sub-modules and of the output layer in the recognition model are loaded into the corresponding 4 feature extraction sub-modules and output layer of the second convolutional neural network, which yields the diagnosis network.
Then, after the target SAR image and its recognition result are input into the diagnosis network, the 4 feature extraction sub-modules perform feature extraction on the target SAR image at different scales. The feature extraction result of the last feature extraction sub-module and the recognition result of the target SAR image are input into the feature mapping layer of the diagnosis network, which projects the recognition result back into the feature mapping space of the last feature extraction sub-module according to the network parameters of the output layer, so that the diagnosis result of the target SAR image is obtained.
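The parameter transfer described above can be sketched as follows. It assumes the RecognitionModel from the earlier sketch, copies the feature extraction sub-modules and the output-layer parameters, and leaves the feature mapping computation to the sketch given after steps S1 to S3 below.

```python
import copy
import torch

class DiagnosisNetwork(torch.nn.Module):
    """Sketch: feature extraction sub-modules and output-layer parameters reused
    from the trained recognition model; the fully connected layers are dropped."""
    def __init__(self, recognition_model):
        super().__init__()
        self.features = copy.deepcopy(recognition_model.features)        # 4 conv+pool sub-modules
        # Output-layer parameters, reused to project the target class back to feature space.
        # This sketch assumes one output weight per feature map and per class (CAM-style reuse).
        self.output_weight = recognition_model.output.weight.detach().clone()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def extract(self, sar_image):
        """Feature extraction; returns the last sub-module's feature maps, shape (1, K, h, w)."""
        with torch.no_grad():
            return self.features(sar_image)
```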
In some implementations, the last feature extraction submodule includes a convolutional layer and a pooling layer;
The feature mapping layer diagnoses the target SAR image by means of the following steps S1-S4:
Step S1, determining the score with which the target SAR image is classified into the target class, based on the network parameters of the output layer and the recognition result of the target SAR image; wherein the recognition result comprises the target class;
Step S2, determining the weight of the average-pooling result of each feature map with respect to the target class, based on the output result of the last feature extraction sub-module and the score with which the target SAR image is classified into the target class; wherein a feature map is an output result of the convolution layer in the last feature extraction sub-module, and the pooling layer in the last feature extraction sub-module average-pools each feature map output by the convolution layer;
Step S3, weighting each feature map by the weight of its average-pooling result with respect to the target class and summing the weighted maps, to obtain a feature synthetic map of the target class;
Step S4, obtaining the diagnosis result of the target SAR image based on the feature synthetic map.
For example, in step S1, assuming that the recognition result of the target SAR image is class c and contains the probability value of the image being recognized as class c, the feature mapping layer can, based on the network parameters of the output layer and this probability value, compute the output of the target SAR image before the softmax layer, S_c, i.e. the score with which the target SAR image is classified into the target class.
In step S2, since the last feature extraction sub-module comprises a convolution layer and a pooling layer, the output of its convolution layer is a set of feature maps, each of which responds to local regions of the target SAR image. The pooling layer average-pools each feature map, giving a pooled value for each feature map, denoted F_k, whose expression is:

F_k = ∑_{x,y} f_k(x, y)

where f_k(x, y) denotes the value of the neuron at spatial position (x, y) in the k-th feature map output by the convolution layer of the last feature extraction sub-module, i.e. the value of each pixel in the k-th feature map.
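As a small helper sketch, the pooled values F_k of the last sub-module's feature maps can be computed as the sum over spatial positions, exactly as in the formula above.

```python
import torch

def pool_feature_maps(fmap: torch.Tensor) -> torch.Tensor:
    # fmap: (1, K, h, w) feature maps of the last sub-module
    # returns a tensor of K values F_k = sum over (x, y) of f_k(x, y)
    return fmap.sum(dim=(2, 3)).squeeze(0)
```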
In the embodiment of the present invention, the weight of the average-pooling result of each feature map with respect to the target class is obtained from the relation between the class score and the pooled feature maps:

S_c = ∑_k w_k^c F_k = ∑_k w_k^c ∑_{x,y} f_k(x, y)

where S_c is the score with which the target SAR image is classified into the target class, w_k^c is the weight of the average-pooling result of the k-th feature map with respect to the target class c, f_k(x, y) is the value of each pixel in the k-th feature map, and F_k is the average-pooling result of the k-th feature map.

It can thus be seen that, from the score S_c of the target class and the average-pooling result F_k of each feature map, the weight w_k^c of each feature map with respect to the target class can be determined.
In step S3, the feature synthetic map of the target class is calculated by the following formula:

M_c(x, y) = ∑_k w_k^c f_k(x, y)

where M_c(x, y) is the feature synthetic map of the target class, w_k^c is the weight of the average-pooling result of the k-th feature map with respect to the target class, f_k(x, y) is the value of each pixel in the k-th feature map, and c is the target class.

As the formula shows, the feature synthetic map of the target class is obtained by weighting each feature map with the weight of its average-pooling result with respect to the target class and summing the weighted maps.
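A compact sketch of steps S1 to S3 is given below. It assumes that the weights w_k^c are read from the reused output-layer parameters (one weight per feature map and per class), which is consistent with the relation S_c = ∑_k w_k^c F_k used above; the exact way the filing derives w_k^c is not reproduced here.

```python
import torch

def feature_mapping_layer(fmap, output_weight, target_class):
    # fmap: (1, K, h, w) feature maps of the last sub-module
    # output_weight: (num_classes, K) reused output-layer weights (assumption of this sketch)
    f_k = fmap.squeeze(0)                         # (K, h, w) feature maps
    F_k = f_k.sum(dim=(1, 2))                     # pooled value F_k per feature map (step S2)
    w_kc = output_weight[target_class]            # (K,) weights w_k^c for the target class
    S_c = torch.dot(w_kc, F_k)                    # score of the target class (step S1)
    M_c = torch.einsum('k,khw->hw', w_kc, f_k)    # weighted sum of feature maps (step S3)
    return S_c, M_c
```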
In one embodiment of the present invention, step S4 may be implemented at least in two ways:
in the first mode, after the feature synthetic map of the target class is amplified to the same size as the target SAR image, the feature synthetic map can be directly used as a diagnosis result of the target SAR image.
And in a second mode, the feature synthetic image is overlapped on the target SAR image through color mapping, and a diagnosis result of the target SAR image is obtained.
The two modes are respectively described below.
First, mode one will be described.
In mode one, it follows from the formulas of step S2 and step S3 that S_c = ∑_{x,y} M_c(x, y); thus M_c(x, y) directly indicates how strongly the input pixel value at position (x, y) of the target SAR image influences the classification into class c. The importance of each region of the target SAR image to the classification result can therefore be seen intuitively by enlarging the feature synthetic map of the target class to the same size as the target SAR image and comparing it with the target SAR image.
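A sketch of mode one, under the assumption that bilinear interpolation is used for the enlargement:

```python
import torch
import torch.nn.functional as F

def upsample_synthetic_map(M_c, image_size):
    # M_c: (h, w) feature synthetic map; image_size: (H, W) of the target SAR image
    m = M_c.unsqueeze(0).unsqueeze(0)                                   # -> (1, 1, h, w)
    m = F.interpolate(m, size=image_size, mode='bilinear', align_corners=False)
    return m.squeeze()                                                  # (H, W) diagnosis map
```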
The first embodiment is completed, and the second embodiment is described below.
In the second mode, since the target SAR image is a black-and-white image, each pixel value of the feature synthetic map can be superimposed on the target SAR image through a color mapping so that the key areas can be distinguished intuitively by color: large values are rendered red and small values blue. The mapped image can then be used as the diagnosis result of the target SAR image, and the key areas relied on for recognizing the target SAR image can be read off directly from the colors.
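A sketch of mode two. The 'jet' colormap (large values red, small values blue) and the 0.5 blending factor are assumptions, and the inputs are taken to be 2-D NumPy arrays of the same size, the feature synthetic map having already been enlarged as in mode one.

```python
import numpy as np
from matplotlib import cm

def overlay_on_sar(sar_image, synthetic_map, alpha=0.5):
    # Superimposes the enlarged feature synthetic map on the grey-level SAR image.
    img = (sar_image - sar_image.min()) / (np.ptp(sar_image) + 1e-8)            # normalise image
    heat = (synthetic_map - synthetic_map.min()) / (np.ptp(synthetic_map) + 1e-8)
    heat_rgb = cm.jet(heat)[..., :3]                     # large values -> red, small -> blue
    grey_rgb = np.stack([img] * 3, axis=-1)
    return (1 - alpha) * grey_rgb + alpha * heat_rgb     # (H, W, 3) colour diagnosis image
```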
As shown in FIG. 2 and FIG. 3, an embodiment of the invention provides a distributed feature diagnosis device for SAR target recognition. The device embodiments may be implemented by software, by hardware, or by a combination of hardware and software. In terms of hardware, FIG. 2 is a hardware architecture diagram of the computing device on which the distributed feature diagnosis device for SAR target recognition provided by an embodiment of the invention is located; in addition to the processor, memory, network interface and non-volatile storage shown in FIG. 2, the computing device may generally include other hardware, such as a forwarding chip responsible for processing packets. Taking a software implementation as an example, as shown in FIG. 3, the device in a logical sense is formed by the CPU of the computing device reading the corresponding computer program from the non-volatile storage into memory and running it. The distributed feature diagnosis device for SAR target recognition provided in this embodiment includes:
an acquisition unit configured to acquire a target SAR image to be diagnosed;
the recognition unit is used for inputting the target SAR image into a pre-trained recognition model to obtain a recognition result of the target SAR image; the recognition model comprises a feature extraction module, a fully connected layer and an output layer which are sequentially connected;
the diagnosis unit is used for inputting the target SAR image and the recognition result of the target SAR image into a diagnosis network to obtain a diagnosis result of the target SAR image; the diagnosis network is formed by sequentially connecting the feature extraction module of the recognition model, a feature mapping layer and the output layer of the recognition model.
In one embodiment of the present invention, the feature extraction modules in the recognition unit 302 and the diagnosis unit 303 include a plurality of feature extraction sub-modules;
in one embodiment of the invention, the diagnostic unit 303 is configured to perform:
inputting the target SAR image into the diagnosis network, so that each feature extraction sub-module performs feature extraction on the target SAR image in turn;
inputting the output result of the last feature extraction sub-module and the recognition result of the target SAR image into the feature mapping layer of the diagnosis network, so as to project the recognition result of the target SAR image back into the feature mapping space of the last feature extraction sub-module according to the network parameters of the output layer and obtain the diagnosis result of the target SAR image.
In one embodiment of the present invention, the last feature extraction submodule in the recognition unit 302 and the diagnosis unit 303 includes a convolution layer and a pooling layer;
in one embodiment of the present invention, the feature mapping layer in the diagnosing unit 303 diagnoses the target SAR image by:
determining the score with which the target SAR image is classified into the target class, based on the network parameters of the output layer and the recognition result of the target SAR image; wherein the recognition result comprises the target class;
determining the weight of the average-pooling result of each feature map with respect to the target class, based on the output result of the last feature extraction sub-module and the score with which the target SAR image is classified into the target class; wherein a feature map is an output result of the convolution layer in the last feature extraction sub-module, and the pooling layer in the last feature extraction sub-module average-pools each feature map output by the convolution layer;
weighting each feature map by the weight of its average-pooling result with respect to the target class and summing the weighted maps, to obtain a feature synthetic map of the target class;
and obtaining the diagnosis result of the target SAR image based on the feature synthetic map.
In one embodiment of the present invention, the weight, used in the diagnosis unit 303, of the average-pooling result of each feature map with respect to the target class is obtained from the relation:

S_c = ∑_k w_k^c F_k = ∑_k w_k^c ∑_{x,y} f_k(x, y)

where S_c is the score with which the target SAR image is classified into the target class, w_k^c is the weight of the average-pooling result of the k-th feature map with respect to the target class c, f_k(x, y) is the value of each pixel in the k-th feature map, and F_k is the average-pooling result of the k-th feature map.
In one embodiment of the present invention, the feature synthetic map of the target class in the diagnosis unit 303 is calculated by the following formula:

M_c(x, y) = ∑_k w_k^c f_k(x, y)

where M_c(x, y) is the feature synthetic map of the target class, w_k^c is the weight of the average-pooling result of the k-th feature map with respect to the target class, f_k(x, y) is the value of each pixel in the k-th feature map, and c is the target class.
In one embodiment of the present invention, when obtaining the diagnosis result of the target SAR image based on the feature synthetic map, the diagnosis unit 303 is configured to superimpose the feature synthetic map on the target SAR image through color mapping to obtain the diagnosis result of the target SAR image.
In one embodiment of the invention, the feature extraction modules in the recognition unit 302 and the diagnostic unit 303 comprise 4 feature extraction sub-modules, and each feature extraction sub-module comprises 1 convolution layer and 1 pooling layer.
It will be appreciated that the architecture illustrated in the embodiments of the present invention does not constitute a specific limitation on a distributed feature diagnostic device for SAR target recognition. In other embodiments of the invention, a distributed feature diagnostic device for SAR target recognition may include more or fewer components than shown, or may combine certain components, or may split certain components, or may have a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The content of information interaction and execution process between the modules in the device is based on the same conception as the embodiment of the method of the present invention, and specific content can be referred to the description in the embodiment of the method of the present invention, which is not repeated here.
The embodiment of the invention also provides a computing device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the distributed feature diagnosis method for SAR target recognition in any embodiment of the invention when executing the computer program.
The embodiment of the invention also provides a computer readable storage medium, and the computer readable storage medium stores a computer program, and the computer program when executed by a processor causes the processor to execute the distributed feature diagnosis method for SAR target recognition in any embodiment of the invention.
Specifically, a system or apparatus may be provided with a storage medium on which software program code realizing the functions of any of the above embodiments is stored, and the computer (or CPU or MPU) of the system or apparatus may be caused to read out and execute the program code stored in the storage medium.
In this case, the program code itself read from the storage medium may realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code form part of the present invention.
Examples of the storage medium for providing the program code include a floppy disk, a hard disk, a magneto-optical disk, an optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded from a server computer by a communication network.
Further, it should be apparent that the functions of any of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform part or all of the actual operations based on the instructions of the program code.
Further, it is understood that the program code read out from the storage medium may be written into a memory provided on an expansion board inserted into the computer or into a memory provided in an expansion module connected to the computer, and a CPU or the like mounted on the expansion board or expansion module may then be caused to perform part or all of the actual operations based on the instructions of the program code, thereby realizing the functions of any of the above embodiments.
It is noted that relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a computer readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: various media in which program code may be stored, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A distributed feature diagnosis method for SAR target recognition, comprising:
acquiring a target SAR image to be diagnosed;
inputting the target SAR image into a pre-trained recognition model to obtain a recognition result of the target SAR image; the recognition model comprises a feature extraction module, a fully connected layer and an output layer which are sequentially connected;
inputting the target SAR image and the recognition result of the target SAR image into a diagnosis network to obtain a diagnosis result of the target SAR image; the diagnosis network is formed by sequentially connecting the feature extraction module of the recognition model, a feature mapping layer and the output layer of the recognition model.
2. The method of claim 1, wherein the feature extraction module comprises a plurality of feature extraction sub-modules;
wherein inputting the target SAR image and the recognition result of the target SAR image into the diagnosis network to obtain the diagnosis result of the target SAR image comprises:
inputting the target SAR image into the diagnosis network, so that each feature extraction sub-module performs feature extraction on the target SAR image in turn;
inputting the output result of the last feature extraction sub-module and the recognition result of the target SAR image into the feature mapping layer of the diagnosis network, so as to project the recognition result of the target SAR image back into the feature mapping space of the last feature extraction sub-module according to the network parameters of the output layer and obtain the diagnosis result of the target SAR image.
3. The method of claim 2, wherein the last feature extraction submodule includes a convolutional layer and a pooling layer;
the feature mapping layer diagnoses the target SAR image by:
determining the score with which the target SAR image is classified into the target class, based on the network parameters of the output layer and the recognition result of the target SAR image; wherein the recognition result comprises the target class;
determining the weight of the average-pooling result of each feature map with respect to the target class, based on the output result of the last feature extraction sub-module and the score with which the target SAR image is classified into the target class; wherein a feature map is an output result of the convolution layer in the last feature extraction sub-module, and the pooling layer in the last feature extraction sub-module average-pools each feature map output by the convolution layer;
weighting each feature map by the weight of its average-pooling result with respect to the target class and summing the weighted maps, to obtain a feature synthetic map of the target class;
and obtaining the diagnosis result of the target SAR image based on the feature synthetic map.
4. The method according to claim 3, wherein the weight of the average-pooling result of each feature map with respect to the target class is obtained from the relation:

S_c = ∑_k w_k^c F_k = ∑_k w_k^c ∑_{x,y} f_k(x, y)

where S_c is the score with which the target SAR image is classified into the target class, w_k^c is the weight of the average-pooling result of the k-th feature map with respect to the target class c, f_k(x, y) is the value of each pixel in the k-th feature map, and F_k is the average-pooling result of the k-th feature map.
5. The method according to claim 3, wherein the feature synthetic map of the target class is calculated by the following formula:

M_c(x, y) = ∑_k w_k^c f_k(x, y)

where M_c(x, y) is the feature synthetic map of the target class, w_k^c is the weight of the average-pooling result of the k-th feature map with respect to the target class, f_k(x, y) is the value of each pixel in the k-th feature map, and c is the target class.
6. The method of claim 3, wherein obtaining the diagnosis result of the target SAR image based on the feature synthetic map comprises: superimposing the feature synthetic map on the target SAR image through color mapping to obtain the diagnosis result of the target SAR image.
7. The method of any of claims 1-6, wherein the feature extraction module comprises 4 feature extraction sub-modules, and each feature extraction sub-module comprises 1 convolutional layer and 1 pooling layer.
8. A distributed feature diagnosis device for SAR target recognition, comprising:
an acquisition unit configured to acquire a target SAR image to be diagnosed;
a recognition unit configured to input the target SAR image into a pre-trained recognition model to obtain a recognition result of the target SAR image, the recognition model comprising a feature extraction module, a fully connected layer and an output layer which are sequentially connected;
a diagnosis unit configured to input the target SAR image and the recognition result of the target SAR image into a diagnosis network to obtain a diagnosis result of the target SAR image, the diagnosis network being formed by sequentially connecting the feature extraction module of the recognition model, a feature mapping layer and the output layer of the recognition model.
9. A computing device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the method of any of claims 1-7 when the computer program is executed.
10. A computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-7.
CN202310004978.4A 2023-01-03 2023-01-03 SAR target recognition distribution feature diagnosis method and device Pending CN116109924A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310004978.4A CN116109924A (en) 2023-01-03 2023-01-03 SAR target recognition distribution feature diagnosis method and device

Publications (1)

Publication Number Publication Date
CN116109924A (en) 2023-05-12

Family

ID=86266809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310004978.4A Pending CN116109924A (en) 2023-01-03 2023-01-03 SAR target recognition distribution feature diagnosis method and device

Country Status (1)

Country Link
CN (1) CN116109924A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination