CN112766032A - SAR image saliency map generation method based on multi-scale and super-pixel segmentation


Info

Publication number
CN112766032A
CN112766032A (application CN202011350638.XA)
Authority
CN
China
Prior art keywords
scale
superpixel
map
local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011350638.XA
Other languages
Chinese (zh)
Inventor
周云
李相东
李海翔
李熙乐
于雪莲
汪学刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202011350638.XA
Publication of CN112766032A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/13: Satellite images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a salient target detection method for SAR (synthetic aperture radar) images based on multi-scale superpixels, belonging to the field of signal processing and, in particular, to synthetic aperture radar image feature extraction. The method extracts three superpixel-level features: a frequency feature, a local contrast, and a global contrast. The same feature is linearly integrated across different scales to obtain a saliency map per feature, so that superpixel features obtained at every scale contribute to the fusion of the feature maps. Finally, the feature saliency maps are multiplied to obtain the final salient target detection result map. Simulation experiments on SAR images of different scenes, compared against several classical saliency detection algorithms and the two-parameter CFAR detection method, show that the proposed algorithm outperforms the comparison algorithms in salient target detection in most cases and suppresses background clutter more strongly; under comprehensive evaluation, it achieves a good salient target detection effect on SAR images.

Description

SAR image saliency map generation method based on multi-scale and super-pixel segmentation
Technical Field
The invention belongs to the field of signal processing, and in particular to synthetic aperture radar image feature extraction.
Background
Synthetic Aperture Radar (SAR) is a microwave active high-resolution imaging radar. It can overcome the influence of extreme weather such as thunderstorms and snowstorms and carry out active operation tasks all day and in all weather. Exploiting the strong penetration capability of electromagnetic waves, SAR can detect targets hidden under vegetation and ruins, obtain large-area high-resolution ground-surface images in a short time, and provide help that optical and infrared images cannot. SAR therefore plays an indispensable role in civilian applications such as disaster detection, agricultural investigation, environmental monitoring, and building surveying and mapping, as well as in military fields such as military target identification, battlefield reconnaissance, and ground strike.
High resolution and a wide imaging area are the main advantages of the SAR image, but with the development of SAR imaging technology, high dimensionality, high resolution, and multi-mode operation have become the main development trends. SAR images have grown from dozens of pixels to sizes counted in the tens of thousands; their texture, shape, and edge information is increasingly rich, while the wide imaging range means that images contain more and more target types in ever more complex scenes. Clearly, such large and complex data cannot be processed efficiently by conventional manual methods.
Many other studies represent images at multiple resolutions by down-sampling, forming a Gaussian pyramid structure. Each time the original image is down-sampled, 3/4 of the image information is lost, and a Gaussian pyramid including the original image generally has nine layers, so a large amount of target information has been lost by the time the down-sampling is complete.
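A short, illustrative calculation (not from the patent) makes the point above concrete: each 2x down-sampling step keeps only 1/4 of the pixels, so the deeper layers of a nine-layer pyramid retain very little of the original image.

```python
# Illustrative check of the background remark: each 2x down-sampling step
# keeps 1/4 of the pixels, so the deeper layers of a nine-layer Gaussian
# pyramid retain very little of the original image information.
def pyramid_pixel_counts(width, height, levels=9):
    """Pixel count at each pyramid level (level 0 is the original image)."""
    counts = []
    w, h = width, height
    for _ in range(levels):
        counts.append(w * h)
        w, h = max(1, w // 2), max(1, h // 2)   # 2x down-sampling per level
    return counts

counts = pyramid_pixel_counts(512, 512)
# counts[1] is exactly 1/4 of counts[0], and the last level is tiny
```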
Disclosure of Invention
The invention aims to solve the technical problem that the prior art cannot quickly and accurately locate special regions in a scene across large numbers of images.
To extract salient targets more accurately and effectively, original image information should be retained while multi-scale local information is obtained, guaranteeing the information integrity of the multi-scale image. First, the image is segmented at different scales using the SLIC algorithm (simple linear iterative clustering), a superpixel segmentation method that exploits the relationship between color similarity and spatial distance. SLIC generates compact and uniform superpixel blocks, rates highly in overall evaluations of running speed, edge adherence, and segmentation shape, and gives satisfactory results. Then, building on a study of the ITTI algorithm and of the biological mechanism by which the human eye samples a visual image non-uniformly at multiple scales, the good edge adherence of superpixel segmentation at different scales and sizes is used to remedy the ITTI algorithm's loss of target contour shape. Meanwhile, based on the inherent characteristics of the SAR image, features suited to SAR images are extracted with the superpixel, rather than simple features, as the basic unit: superpixel-level frequency features, superpixel-level local contrast, and superpixel-level global contrast. The same feature is linearly integrated across different scales to obtain each feature saliency map, ensuring that the superpixel features obtained at every scale participate in the fusion of the feature maps. The technical scheme of the invention is thus a SAR image salient target detection method based on multi-scale superpixels, comprising the following steps:
step 1: multi-scale superpixel image segmentation;
using the SLIC algorithm (simple linear iterative clustering), M scales are preset (scale 1 to scale M, M ≥ 1), and superpixel blocks are obtained at each scale:
SLIC segmentation of the input image at scale 1 yields N1 superpixel blocks;
SLIC segmentation of the input image at scale 2 yields N2 superpixel blocks;
······
SLIC segmentation of the input image at scale M yields NM superpixel blocks;
in this way, the superpixel segmentation results at the different scales are obtained;
step 2: feature extraction based on the superpixel blocks;
step 2.1: frequency feature extraction for the superpixel blocks;
at scale 1, each superpixel block in the image processed by the SLIC algorithm is denoted S1_i, i = 1, 2, ..., N1; the frequency of each superpixel block is obtained with equation (1):
[Equation (1) appears as an image in the original.]
wherein I_fre(m, n) denotes the frequency feature of the pixel at (m, n), λ denotes an empirical parameter, fre(m,n) denotes the frequency with which the gray value of the pixel at coordinate (m, n) in the SAR image appears in the image, n_(m,n) denotes the number of pixels at which that gray value appears, and N denotes the total number of pixels of the input SAR image; the frequency feature value is then calculated with equations (2) and (3):
num(1_i) = Number(S1_i)/4, i = 1, 2, ..., N1    (2)
f_frep(S1_i) = (1/num(1_i)) · Σ_{k=1}^{num(1_i)} I_fre^(k)(S1_i)    (3)
wherein num(1_i) denotes taking the 1/4 of pixels in S1_i with the largest frequency feature values, Number(S1_i) denotes sorting the frequency feature values in the i-th superpixel S1_i from largest to smallest and counting the pixels, I_fre^(k) denotes the k-th largest frequency feature value, and f_frep(S1_i) denotes averaging the largest 1/4 of pixel frequency feature values in S1_i as the superpixel frequency feature value of S1_i;
frequency features are extracted from the N1 superpixel blocks in turn, giving the N1-dimensional feature vector [f_frep_1(S1_1), f_frep_1(S1_2), ..., f_frep_1(S1_N1)]; for the i-th superpixel block, the frequency feature of each pixel inside it is represented by f_frep(S1_i); the frequency feature map Freq_map(1) at scale 1 is thus obtained;
following the same steps, the frequency feature map Freq_map(2) at scale 2 is obtained;
................................
and the frequency feature map Freq_map(M) at scale M;
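A minimal numpy sketch of step 2.1 follows. Equation (1) appears only as an image in the original, so this sketch assumes the simplest reading, taking a pixel's frequency feature as fre(m, n) = n_(m,n)/N and omitting the empirical parameter λ; the top-1/4 averaging of equations (2)-(3) follows the text.

```python
# Sketch of step 2.1 (superpixel frequency feature). Assumption: the per-pixel
# feature is the raw gray-value frequency fre(m, n) = n_(m, n) / N; the exact
# equation (1), including the parameter lambda, is an image in the original.
import numpy as np

def superpixel_frequency_feature(gray, labels):
    """Per superpixel: mean of the largest 1/4 pixel frequency values."""
    gray = np.asarray(gray)
    values, counts = np.unique(gray, return_counts=True)
    freq_of_value = dict(zip(values, counts / gray.size))
    fre = np.vectorize(freq_of_value.get)(gray)   # fre(m, n) per pixel

    n_sp = int(labels.max()) + 1
    scores = np.zeros(n_sp)
    for i in range(n_sp):
        f = np.sort(fre[labels == i])[::-1]       # descending order
        num = max(1, f.size // 4)                 # equation (2)
        scores[i] = f[:num].mean()                # equation (3)
    return scores, scores[labels]                 # feature vector, feature map
```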
step 2.2: local contrast feature extraction for the superpixel blocks;
the local contrast f_local(S1_i) of superpixel block S1_i is calculated with equation (4):
[Equation (4) appears as an image in the original.]
wherein S1_j denotes any superpixel in the neighborhood of superpixel block S1_i; to enhance the robustness of the algorithm, the gray values of the pixels inside superpixel block S1_i are sorted from largest to smallest, and the largest 1/8 are selected and averaged to obtain the peak feature f_peak(S1_i) of each superpixel; the mean gray value of the pixels in each superpixel is f_mean(S1_i); the local contrast f_local(S1_i) is finally obtained from equation (4);
based on the segmentation result at scale 1, local contrast features are extracted from each S1_i in turn, giving the N1-dimensional local contrast feature vector [f_local_1(S1_1), f_local_1(S1_2), ..., f_local_1(S1_N1)]; for the i-th superpixel block, the local contrast feature of each pixel inside it is represented by f_local(S1_i); after normalization, the local contrast feature map Local_map(1) at scale 1 is obtained;
following the same steps, Local_map(2) at scale 2 is obtained;
........................
and Local_map(M) at scale M;
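Step 2.2 can be sketched likewise. Equation (4) is an image in the original, so a hypothetical local-contrast form is used here: each superpixel's peak feature (the mean of its largest 1/8 gray values, per the text) divided by the mean gray level of its adjacent superpixels.

```python
# Sketch of step 2.2 (superpixel local contrast). The peak and mean features
# follow the text; the combination into f_local is a hypothetical stand-in
# for equation (4), which is an image in the original.
import numpy as np

def superpixel_local_contrast(gray, labels):
    n_sp = int(labels.max()) + 1
    f_peak = np.zeros(n_sp)
    f_mean = np.zeros(n_sp)
    for i in range(n_sp):
        px = np.sort(gray[labels == i])[::-1]
        f_peak[i] = px[:max(1, px.size // 8)].mean()  # mean of largest 1/8
        f_mean[i] = px.mean()

    # neighborhood: superpixels whose pixels touch horizontally or vertically
    adj = [set() for _ in range(n_sp)]
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b:
            adj[a].add(int(b)); adj[b].add(int(a))
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b:
            adj[a].add(int(b)); adj[b].add(int(a))

    return np.array([
        f_peak[i] / np.mean([f_mean[j] for j in adj[i]]) if adj[i] else 1.0
        for i in range(n_sp)
    ])
```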
step 2.3: global contrast feature extraction for the superpixel blocks;
the mean gray value of all pixels in superpixel block S1_i is taken as the initial value f_i of that superpixel; the global contrast feature value f_glob(S1_i) of superpixel S1_i is calculated with equations (5)-(7):
[Equations (5)-(7) appear as images in the original.]
wherein G(S1_i) denotes the global contrast of the i-th superpixel at scale 1, f_j denotes the mean gray value of all pixels in the j-th superpixel, num(S1_j) denotes the number of pixels in superpixel region S1_j, and the threshold T is an empirical constant selected according to the distribution of G(S1_i);
global contrast features are extracted from each S1_i in turn, giving the N1-dimensional global contrast feature vector [f_global_1(S1_1), f_global_1(S1_2), ..., f_global_1(S1_N1)]; for the i-th superpixel block, the global contrast feature of each pixel inside it is represented by f_global(S1_i); the global contrast feature map Global_map(1) at scale 1 is thus obtained;
following the same steps, the global feature map Global_map(2) at scale 2 is obtained;
........................
and Global_map(M) at scale M;
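Step 2.3 can be sketched in the same spirit. Equations (5)-(7) are images in the original, so a hypothetical form is used: the area-weighted mean absolute difference between a superpixel's mean gray level f_i and every other superpixel's f_j, with scores below the empirical threshold T zeroed.

```python
# Sketch of step 2.3 (superpixel global contrast). The initial value f_i and
# the area weights num(S_j) follow the text; the exact equations (5)-(7) are
# images in the original, so this combination is a hypothetical stand-in.
import numpy as np

def superpixel_global_contrast(gray, labels, T=0.0):
    n_sp = int(labels.max()) + 1
    f = np.array([gray[labels == i].mean() for i in range(n_sp)])   # f_i
    num = np.array([(labels == i).sum() for i in range(n_sp)])      # areas
    G = np.array([(num * np.abs(f[i] - f)).sum() / num.sum()
                  for i in range(n_sp)])
    return np.where(G >= T, G, 0.0)   # empirical threshold T, per the text
```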
step 3: multi-scale fusion;
Freq_map(1) to Freq_map(M) are linearly superposed to obtain the frequency saliency map SM_freq;
Local_map(1) to Local_map(M) are linearly superposed to obtain the local feature saliency map SM_local;
Global_map(1) to Global_map(M) are linearly superposed to obtain the global feature saliency map SM_global;
step 4: the three maps obtained in step 3 are multiplied using equation (8):
SM = N(SM_freq · SM_local · SM_global)    (8)
giving the final total saliency map SM, where N(·) denotes the normalization operation.
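Steps 3 and 4 can be sketched as follows; the linear superposition and the multiplicative fusion of equation (8) follow the text, while min-max normalization for N(·) is an assumption, since the exact normalization is not specified.

```python
# Sketch of steps 3-4: per-feature linear superposition across scales, then
# SM = N(SM_freq * SM_local * SM_global) per equation (8). Min-max
# normalization is assumed for N(.).
import numpy as np

def normalize(x):
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

def fuse_saliency(freq_maps, local_maps, global_maps):
    sm_freq = normalize(sum(freq_maps))        # step 3: linear superposition
    sm_local = normalize(sum(local_maps))
    sm_global = normalize(sum(global_maps))
    return normalize(sm_freq * sm_local * sm_global)   # equation (8)
```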
The invention has the following beneficial effects:
The salient target detection result obtained by the algorithm depicts the shape information of the target more accurately, while its suppression of background clutter is clearly superior to the other comparison algorithms, which greatly helps reduce the false alarm rate.
Drawings
Fig. 1 is the basic flow chart of the algorithm.
Fig. 2 shows examples of the SLIC algorithm with different superpixel counts, where (a) is the result with 100 superpixels and (b) the result with 200 superpixels.
Fig. 3 shows two examples of superpixel-based frequency feature maps, where (a) is the original image and (b) the superpixel frequency feature map.
Fig. 4 is a superpixel-based local contrast feature map, where (a) is the original image and (b) the superpixel local contrast feature map.
Fig. 5 is a superpixel-based global contrast feature map, where (a) is the original image and (b) the superpixel-based global contrast feature map.
Fig. 6 compares detection results: the top row is a tank SAR image from the MSTAR database with a resolution of 128 × 128, and the bottom row a harbor ship SAR image from the SSDD database with a resolution of 366 × 345. Column (a) is the original image; the top and bottom rows of columns (b) to (f) are the saliency maps obtained for the two images by different algorithms: (b) the manually labeled standard salient target result, called the ground-truth map (GT), (c) the result of the salient target detection method of the invention, (d) the detection result of the RC algorithm, (e) the detection result of the ITTI algorithm, and (f) the detection result of the two-parameter CFAR algorithm.
Detailed Description
The flow of the algorithm is shown in fig. 1. The specific implementation steps are as follows:
Step 1: multi-scale superpixel segmentation of the input image based on the SLIC algorithm;
M scales are preset (scale 1 to scale M, M ≥ 1).
SLIC segmentation of the input image at scale 1 yields N1 superpixel blocks;
SLIC segmentation of the input image at scale 2 yields N2 superpixel blocks;
······
SLIC segmentation of the input image at scale M yields NM superpixel blocks;
In this way, the superpixel segmentation results at all M scales are obtained.
Fig. 2 shows examples of superpixel segmentation at different scales: for the input image of panel (a), N1 = 100 gives the segmentation result shown in (a); for the input image of panel (b), N1 = 200 gives the result shown in (b).
Step 2: feature extraction based on the superpixel blocks;
This step comprises three parallel submodules.
Step 2.1: frequency feature extraction from the superpixel segmentation;
Based on the segmentation result at scale 1, frequency features are extracted in turn from the superpixel blocks S1_i, i = 1, 2, ..., N1 obtained in step 1, using equations (1), (2), and (3), giving the N1-dimensional frequency feature vector [f_frep_1(S1_1), f_frep_1(S1_2), ..., f_frep_1(S1_N1)]. For the i-th (1 ≤ i ≤ N1) superpixel block, the frequency feature of each pixel inside it is represented by f_frep_1(S1_i). After normalization, the frequency feature map Freq_map(1) at scale 1 is obtained.
Following the same steps, the frequency feature map Freq_map(2) at scale 2 is obtained.
........................
The frequency feature map Freq_map(M) at scale M is obtained.
Step 2.2: local contrast feature extraction based on the superpixel segmentation;
Based on the segmentation result at scale 1, local contrast features are extracted in turn from the superpixel blocks S1_i, i = 1, 2, ..., N1 obtained in step 1, using equations (3) and (4), giving the N1-dimensional local contrast feature vector [f_local_1(S1_1), f_local_1(S1_2), ..., f_local_1(S1_N1)]. For the i-th (1 ≤ i ≤ N1) superpixel block, the local contrast feature of each pixel inside it is represented by f_local_1(S1_i). After normalization, the local contrast feature map Local_map(1) at scale 1 is obtained.
Following the same steps, Local_map(2) at scale 2 is obtained.
........................
Local_map(M) at scale M is obtained.
Step 2.3: superpixel-based global contrast feature extraction;
Based on the segmentation result at scale 1, global contrast features are extracted in turn from the superpixel blocks S1_i obtained in step 1, using equations (5), (6), and (7), giving the N1-dimensional global contrast feature vector [f_global_1(S1_1), f_global_1(S1_2), ..., f_global_1(S1_N1)]. For the i-th (1 ≤ i ≤ N1) superpixel block, the global contrast feature of each pixel inside it is represented by f_global_1(S1_i). The global contrast feature map Global_map(1) at scale 1 is thus obtained.
Following the same steps, the global feature map Global_map(2) at scale 2 is obtained.
........................
The global feature map Global_map(M) at scale M is obtained.
Step 3: after the feature maps of the frequency, local contrast, and global contrast features have been extracted in step 2, the feature maps of the same feature at different scales are linearly superposed. This avoids missing a target when the superpixel scale at one particular scale is mismatched with the target and the target is not covered by its own superpixel, and yields the superpixel-segmentation-based frequency feature saliency map F_frep, local contrast saliency map F_local, and global contrast saliency map F_global.
Step 4: the three feature maps are fused using equation (8) to generate the final saliency map.
Simulation experiments with the various algorithms on the two sets of SAR images give the results shown in fig. 6. As seen in fig. 6(c), the highlighted pixels are those of low frequency, a complete target region is obtained, and the shape information of the target is depicted accurately; compared with the other algorithms, the proposed algorithm filters background noise better, produces fewer false alarms, and displays the contour information of the target better.

Claims (1)

1. A SAR image salient target detection method based on multi-scale superpixels, comprising the following steps:
step 1: multi-scale superpixel image segmentation;
presetting M scales (scale 1 to scale M, M ≥ 1) and, using the SLIC algorithm (simple linear iterative clustering), obtaining superpixel blocks at each scale:
SLIC segmentation of the input image at scale 1 yields N1 superpixel blocks;
SLIC segmentation of the input image at scale 2 yields N2 superpixel blocks;
······
SLIC segmentation of the input image at scale M yields NM superpixel blocks;
in this way, the superpixel segmentation results at the different scales are obtained;
step 2: extracting features based on the superpixel blocks;
step 2.1: frequency feature extraction for the superpixel blocks;
at scale 1, each superpixel block in the image processed by the SLIC algorithm is denoted S1_i, i = 1, 2, ..., N1; the frequency of each superpixel block is obtained with equation (1):
[Equation (1) appears as an image in the original.]
wherein I_fre(m, n) denotes the frequency feature of the pixel at (m, n), λ denotes an empirical parameter, fre(m,n) denotes the frequency with which the gray value of the pixel at coordinate (m, n) in the SAR image appears in the image, n_(m,n) denotes the number of pixels at which that gray value appears, and N denotes the total number of pixels of the input SAR image; the frequency feature value is calculated with equations (2) and (3):
num(1_i) = Number(S1_i)/4, i = 1, 2, ..., N1    (2)
f_frep(S1_i) = (1/num(1_i)) · Σ_{k=1}^{num(1_i)} I_fre^(k)(S1_i)    (3)
wherein num(1_i) denotes taking the 1/4 of pixels in S1_i with the largest frequency feature values, Number(S1_i) denotes sorting the frequency feature values in the i-th superpixel S1_i from largest to smallest and counting the pixels, I_fre^(k) denotes the k-th largest frequency feature value, and f_frep(S1_i) denotes averaging the largest 1/4 of pixel frequency feature values in S1_i as the superpixel frequency feature value of S1_i;
frequency features are extracted from the N1 superpixel blocks in turn, giving the N1-dimensional feature vector [f_frep_1(S1_1), f_frep_1(S1_2), ..., f_frep_1(S1_N1)]; for the i-th superpixel block, the frequency feature of each pixel inside it is represented by f_frep(S1_i); the frequency feature map Freq_map(1) at scale 1 is thus obtained;
following the same steps, the frequency feature map Freq_map(2) at scale 2 is obtained;
…………………………
and the frequency feature map Freq_map(M) at scale M;
step 2.2: local contrast feature extraction for the superpixel blocks;
the local contrast f_local(S1_i) of superpixel block S1_i is calculated with equation (4):
[Equation (4) appears as an image in the original.]
wherein S1_j denotes any superpixel in the neighborhood of superpixel block S1_i; to enhance the robustness of the algorithm, the gray values of the pixels inside superpixel block S1_i are sorted from largest to smallest, and the largest 1/8 are selected and averaged to obtain the peak feature f_peak(S1_i) of each superpixel; the mean gray value of the pixels in each superpixel is f_mean(S1_i); the local contrast f_local(S1_i) is finally obtained from equation (4);
based on the segmentation result at scale 1, local contrast features are extracted from each S1_i in turn, giving the N1-dimensional local contrast feature vector [f_local_1(S1_1), f_local_1(S1_2), ..., f_local_1(S1_N1)]; for the i-th superpixel block, the local contrast feature of each pixel inside it is represented by f_local(S1_i); after normalization, the local contrast feature map Local_map(1) at scale 1 is obtained;
following the same steps, Local_map(2) at scale 2 is obtained;
……………………
and Local_map(M) at scale M;
step 2.3: global contrast feature extraction for the superpixel blocks;
the mean gray value of all pixels in superpixel block S1_i is taken as the initial value f_i of that superpixel; the global contrast feature value f_glob(S1_i) of superpixel S1_i is calculated with equations (5)-(7):
[Equations (5)-(7) appear as images in the original.]
wherein G(S1_i) denotes the global contrast of the i-th superpixel at scale 1, f_j denotes the mean gray value of all pixels in the j-th superpixel, num(S1_j) denotes the number of pixels in superpixel region S1_j, and the threshold T is an empirical constant selected according to the distribution of G(S1_i);
global contrast features are extracted from each S1_i in turn, giving the N1-dimensional global contrast feature vector [f_global_1(S1_1), f_global_1(S1_2), ..., f_global_1(S1_N1)]; for the i-th superpixel block, the global contrast feature of each pixel inside it is represented by f_global(S1_i); the global contrast feature map Global_map(1) at scale 1 is thus obtained;
following the same steps, the global feature map Global_map(2) at scale 2 is obtained;
……………………
and Global_map(M) at scale M;
step 3: multi-scale fusion;
Freq_map(1) to Freq_map(M) are linearly superposed to obtain the frequency saliency map SM_freq;
Local_map(1) to Local_map(M) are linearly superposed to obtain the local feature saliency map SM_local;
Global_map(1) to Global_map(M) are linearly superposed to obtain the global feature saliency map SM_global;
step 4: the three maps obtained in step 3 are multiplied using equation (8):
SM = N(SM_freq · SM_local · SM_global)    (8)
giving the final total saliency map SM, where N(·) denotes the normalization operation.
CN202011350638.XA 2020-11-26 2020-11-26 SAR image saliency map generation method based on multi-scale and super-pixel segmentation Pending CN112766032A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011350638.XA CN112766032A (en) 2020-11-26 2020-11-26 SAR image saliency map generation method based on multi-scale and super-pixel segmentation

Publications (1)

Publication Number Publication Date
CN112766032A true CN112766032A (en) 2021-05-07

Family

ID=75693171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011350638.XA Pending CN112766032A (en) 2020-11-26 2020-11-26 SAR image saliency map generation method based on multi-scale and super-pixel segmentation

Country Status (1)

Country Link
CN (1) CN112766032A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880295A (en) * 2023-02-28 2023-03-31 吉林省安瑞健康科技有限公司 Computer-aided tumor ablation navigation system with accurate positioning function

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150169989A1 (en) * 2008-11-13 2015-06-18 Google Inc. Foreground object detection from multiple images
CN107680106A (en) * 2017-10-13 2018-02-09 南京航空航天大学 A kind of conspicuousness object detection method based on Faster R CNN
CN107992874A (en) * 2017-12-20 2018-05-04 武汉大学 Image well-marked target method for extracting region and system based on iteration rarefaction representation
CN110110618A (en) * 2019-04-22 2019-08-09 电子科技大学 A kind of SAR target detection method based on PCA and global contrast

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tang Yonghao, "Research on SAR Image Target Detection Algorithms Based on a Visual Attention Model", China Masters' Theses Full-text Database, Information Science and Technology *

Similar Documents

Publication Publication Date Title
CN110210463B (en) Precise ROI-fast R-CNN-based radar target image detection method
CN107358258B (en) SAR image target classification based on NSCT double CNN channels and selective attention mechanism
CN107767400B (en) Remote sensing image sequence moving target detection method based on hierarchical significance analysis
CN108898065B (en) Deep network ship target detection method with candidate area rapid screening and scale self-adaption
CN103578110B (en) Multiband high-resolution remote sensing image dividing method based on gray level co-occurrence matrixes
CN111027497B (en) Weak and small target rapid detection method based on high-resolution optical remote sensing image
CN109829423B (en) Infrared imaging detection method for frozen lake
CN108038856B (en) Infrared small target detection method based on improved multi-scale fractal enhancement
CN114549642B (en) Low-contrast infrared dim target detection method
CN114266947A (en) Classification method and device based on fusion of laser point cloud and visible light image
CN107369163B (en) Rapid SAR image target detection method based on optimal entropy dual-threshold segmentation
CN106971402B (en) SAR image change detection method based on optical assistance
CN112766032A (en) SAR image saliency map generation method based on multi-scale and super-pixel segmentation
CN116843906A (en) Target multi-angle intrinsic feature mining method based on Laplace feature mapping
Cheng et al. Tensor locality preserving projections based urban building areas extraction from high-resolution SAR images
Liu et al. Target detection in remote sensing image based on saliency computation of spiking neural network
Dong et al. A novel VHR image change detection algorithm based on image fusion and fuzzy C-means clustering
CN114373135A (en) Ship target detection method based on local significance characteristic measurement
CN113963270A (en) High resolution remote sensing image building detection method
CN110211124B (en) Infrared imaging frozen lake detection method based on MobileNet V2
CN108596139B (en) Remote sensing image urban area extraction method based on Gabor feature saliency
Xiaojun et al. Tracking of moving target based on video motion nuclear algorithm
Zhang et al. Target detection in sar images based on sub-aperture coherence and phase congruency
Wang et al. Saliency-driven target detection based on common visual feature clustering for multiple SAR images
Lin et al. Coarse to fine aircraft detection from front-looking infrared images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210507