CN114821358A - Optical remote sensing image marine ship target extraction and identification method - Google Patents
- Publication number
- CN114821358A (application CN202210463979A, filed as CN202210463979.0A)
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- sensing image
- saliency map
- target
- ship
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/13—Satellite images (G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V20/00—Scenes; Scene-specific elements › G06V20/10—Terrestrial scenes)
- G06F18/24323—Tree-organised classifiers (G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F18/00—Pattern recognition › G06F18/20—Analysing › G06F18/24—Classification techniques › G06F18/243—Classification techniques relating to the number of classes)
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion (G06V10/00—Arrangements for image or video recognition or understanding › G06V10/20—Image preprocessing)
- G06V10/40—Extraction of image or video features (G06V10/00)
- G06V10/761—Proximity, similarity or dissimilarity measures (G06V10/70—Arrangements using pattern recognition or machine learning › G06V10/74—Image or video pattern matching; Proximity measures in feature spaces)
- G06V10/764—Recognition using classification, e.g. of video objects (G06V10/70)
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting (G06V10/70 › G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation)
- G06V2201/07—Target detection (G06V2201/00—Indexing scheme relating to image or video recognition or understanding)
Abstract
The invention relates to a method for extracting and identifying marine ship targets in optical remote sensing images, comprising the following steps: inputting a visible-light remote sensing image; globally detecting the sea surface in the remote sensing image with a visual saliency method based on covariance joint features to generate a single-scale saliency map; down-sampling the generated single-scale saliency map, building a multi-scale saliency map, and obtaining a final saliency map; calculating a detection threshold for sea-surface ship targets and binarizing the saliency map to achieve a rough segmentation; establishing a training set and a test set; obtaining feature vectors usable for training and testing; training a model with the built framework and the positive and negative samples of the training set, and classifying the feature vectors; and testing the candidate regions with the built framework and the positive and negative samples of the test set, completing the extraction and identification of marine ship targets in the optical remote sensing image. The method searches sea-surface target regions efficiently, greatly reduces the false-alarm rate and improves detection accuracy.
Description
Technical Field
The invention relates to a method for extracting and identifying marine ship targets in optical remote sensing images.
Background
In recent years, ocean remote sensing has been one of the challenging research subjects in computer vision, and ship detection is among its most promising branches. With the rapid development of remote sensing information science, ship detection by remote sensing is applied not only in military fields such as maritime reconnaissance and strike analysis and evaluation, but also widely in civil fields such as island resource surveys, marine exploration and maritime rescue, and is therefore of great value.
With the growing capacity of airborne and spaceborne platforms to acquire remote sensing data and the rapid development of high-resolution satellites, more and more remote sensing data are available for research. From the viewpoint of data acquisition, current ship detection can be roughly divided into three categories: Synthetic Aperture Radar (SAR) image ship detection, infrared (IR) image ship detection, and visible remote sensing (VRS) image ship detection. SAR offers day-and-night imaging unaffected by complex weather conditions such as illumination, cloud and fog, together with a degree of penetrability, and is mainly used to monitor sea-surface oil spills and ocean currents. Countries worldwide continue to develop novel SAR payloads with steadily improving spatial resolution and impressive performance. However, the coherent imaging mechanism of SAR produces heavy speckle noise that severely disturbs edge and texture features, and color information cannot be exploited, so SAR is ill-suited to identifying ship targets. Infrared images enhance the visual effect in low light, but suffer from a low signal-to-noise ratio and insufficient structural information. Visible images provide more features, such as color, texture, edges, orientation and frequency-domain features, and can therefore capture more detail and more complex structures.
Current methods for extracting and detecting marine ship target regions include the following:
- Traditional optical remote sensing ship detection segments the image by gray-level information; it works only for a calm sea without cloud or fog interference and has poor robustness.
- Template-matching methods mainly remove island regions according to the ship target shape, but suitable templates are hard to choose for different ship types in different scenes.
- Traditional machine learning methods aim to separate the target region from the background but depend heavily on ideal training samples.
- Deep learning methods classify target and background effectively, but demand substantial hardware, involve complex training steps and offer poor interpretability.
- Sparse-representation methods are not yet systematic and have so far been applied only partially, to ship detection in infrared images.
- Visual-saliency segmentation generates far fewer suspected target regions than the candidate boxes produced by sliding-window or gray-level segmentation methods in machine learning.
In summary, the task of detecting the ship target in the current optical remote sensing image has at least the following disadvantages:
1. islands, dense clouds, ocean waves, and various uncertain sea conditions result in high false alarm rates.
2. The homogeneity of the ship target is reduced due to parameter limitation of the visible light imaging sensor, interference of sea clutter and ship trail interference.
3. The grey level relevance of the target is low due to self factors such as color, texture, size and type of the ship;
4. based on the requirement of rapidity of large-scale remote sensing data, the reduction of the calculation burden becomes a key problem;
5. the detection efficiency is low in the detection process due to different target directions and unobvious characteristics.
Therefore, how to perform fast, stable and robust extraction and detection under complex conditions such as severe sea states, low target homogeneity and gray-level correlation, geometric distortion of the target and target rotation has become an urgent problem.
Disclosure of Invention
In view of the above, there is a need for a method for extracting and identifying marine ship targets from optical remote sensing images.
The invention provides a method for extracting and identifying marine ship targets in optical remote sensing images, comprising the following steps: a. inputting a visible-light remote sensing image; b. introducing covariance statistics and a homologous similarity measure, and globally detecting the sea surface in the remote sensing image with a visual saliency method based on covariance joint features to generate a single-scale saliency map; c. down-sampling the generated single-scale saliency map, building a multi-scale saliency map, and obtaining the final saliency map through a superposition fusion mechanism and normalized fusion; d. calculating a detection threshold for sea-surface ship targets from the gray-level statistics of the final saliency map, binarizing the saliency map to achieve a rough segmentation, marking the result back onto the original remote sensing image, finding the region of each target, and separating suspected targets from the sea-surface background; e. establishing a training set and a test set, each containing positive and negative samples; f. designing the CF-Fourier feature, embedding the aggregate channel feature (ACF) and pyramid feature (FGPM), and building a framework to obtain feature vectors for training and testing; g. training a model with the built framework and the positive and negative samples of the training set, and classifying the feature vectors of step f with a boosting decision tree; h. testing the candidate regions with the built framework and the positive and negative samples of the test set, completing the extraction and identification of marine ship targets in the optical remote sensing image.
Preferably, the step a comprises:
the method comprises the steps of inputting an optical remote sensing image f (x, y) with the spatial resolution of H multiplied by W, wherein the remote sensing image comprises ships, sea fog, thick cloud layers, islands and the like, the sizes and the color polarities of the ships are different, and the ships are randomly distributed on the sea surface.
Preferably, the step b comprises:
step S21: calculating the brightness of a pixel m according to the input remote sensing image, extracting gradient characteristics in the horizontal direction and the vertical direction, brightness second derivative characteristics, brightness L of Lab color space close to human vision, opposite color dimension a and b characteristics, and forming a nine-dimensional characteristic vector f with position coordinates (x, y) m :
Step S22: dividing the remote sensing image into square regions R with the same size, calculating a characteristic mean value, and using f m Symmetrically constructing a 9 × 9 covariance feature matrix as a region descriptor S:
step S23, Cholesky decomposition is carried out on the region descriptor to obtain each row vector L in the upper triangular matrix i Then the region descriptor is equivalent to a set of points S in euclidean space:
combining the characteristic mean value mu and the point set S to obtain a pair C R Encoding a feature vector psi with Euclidean spatial computation capability μ (C R ):
ψ μ (C R )=(μ,s 1 ,s 2 ,...,s k ,,s k+1 ,.s k+2 ..,s 2k );
Step S24: by the context similarity measure, the most similar T measures are found to represent the significance of the region, and the formula is as follows:
step S25: designing a homologous similarity weight function w on the basis of the significance j And enhancing contrast to obtain a sparse graph of the salient region:
wherein the weight function is measured using an inverse function of the feature distance, defined as a gaussian function:
preferably, said step d comprises:
The OTSU method is used to obtain an adaptive segmentation threshold T and establish connected regions for extracting the target:
preferably, the step f specifically includes:
step S61: the gradient of the planar image I (x, y) at the pixel (x, y) is represented as (D (x, y), θ (D (x, y))), and the continuous gradient direction pulse curve is calculated as:
h(ζ)=||D(x,y)||δ(ζ-θ(D(x,y)));
step S62: fourier analysis was used for gradient direction pulse curves:
Step S63: self-guiding kernel function P for rotating image in vector field and searching condition of rotation invariance j (r):
Step S64: by adopting the kernel function convolution modeling, according to the condition of the rotation invariance, the Fourier HOG rotation invariance descriptor is expressed as follows:
step S65: introducing a circumferential frequency filter CF, designing a gray value change mode of a ship target by using the brightness difference between a ship and a surrounding background, and calculating Discrete Fourier Transform (DFT) of the gray value at a pixel (i, j):
step S66: the extracted rotation invariance gradient feature and the extracted circumference frequency feature are sent to a classifier to distinguish whether to judgeWhether it is a real ship or a false alarm: the efficiency is slow for directly collecting image pyramid features at the basic scale d 0 Based on the different scales d realized by estimating the scale factor lambda 1 Fast pyramid feature estimation of (1):
F d1 =F d0 ·(d 0 /d 1 ) -λ 。
preferably, the step g specifically comprises the following steps:
by dividing the positive and negative samples by 1: and 3, as an input end of the model, training the model, classifying to generate confidence scores of the candidate regions, and using the cross-over ratio as a judgment standard for judging whether the candidate regions are true targets.
Preferably, the step h specifically includes:
and d, testing the fragments of the suspected ship target area extracted in the step d, judging whether the fragments are real targets or false alarms, if the fragments are real targets, retaining the fragments, if the fragments are false alarms, removing the fragments, and finally marking the fragments in the input image.
The method needs no complex parameter setting and does not depend on prior knowledge of the sea-surface background or of the target distribution. For the characteristics of ship targets against a sea background, it combines visual saliency detection with covariance joint statistical features, using homologous-similarity-weighted fusion to correct the deficiencies of regional covariance estimation, strengthening target integrity and suppressing sea-background interference. Multi-scale stacked fusion enhances the overall continuity of detected targets and the separability between targets, and searches sea-surface target regions efficiently. For false alarms that may appear in the image, such as thick cloud and islands, the aggregate channel feature / feature pyramid acceleration framework (ACF-FPGM) embedded with the CF-Fourier spatial-frequency-domain joint feature further identifies each detected target and judges whether it is a ship, greatly reducing the false-alarm rate and improving detection accuracy.
In addition, detection and identification take on the order of seconds, real-time performance is good, and the degree of automation is markedly improved; ship targets can be rapidly discovered, located and quantified over a large sea area under multiple background interferences, with good detection robustness. This lays a foundation for further combining UAV-platform or satellite attitude data to compute the position, course and other information of each ship, and for the classification and identification of ship targets.
Drawings
FIG. 1 is a flow chart of the method for extracting and identifying marine vessel targets from optical remote sensing images according to the invention;
FIG. 2 is a flow chart of a method for extracting and identifying marine vessel targets from optical remote sensing images according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a process for extracting a visual saliency target according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the multi-scale fusion effect provided by an embodiment of the present invention: FIG. 4(a) is the original image; FIG. 4(b) σ = 2^-4; FIG. 4(c) σ = 2^-5; FIG. 4(d) σ = 2^-6; FIG. 4(e) is the fused saliency map;
FIG. 5(a) is an original image I provided by an embodiment of the present invention;
FIG. 5(b) is a diagram of a direction gradient Dx/Dy according to an embodiment of the present invention;
FIG. 5(c) is a schematic diagram of the Fourier analysis coefficients of the gradient provided by an embodiment of the present invention;
fig. 6 is a schematic diagram of a change mode of a CF characteristic gray level value and characteristics according to an embodiment of the present invention: wherein, FIG. 6(a) Ships; FIG. 6(b) a gray scale histogram; FIG. 6(c) a false color map;
fig. 7 is a schematic diagram of a fine determination process according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1 and 2, a flow chart of the operation of the method for extracting and identifying the marine vessel target based on the optical remote sensing image according to the preferred embodiment of the invention is shown.
In step S1, a visible light remote sensing image f (x, y) with spatial resolution H × W is input. Specifically, the method comprises the following steps:
the method comprises the steps of inputting an optical remote sensing image f (x, y) with the spatial resolution of H multiplied by W, wherein the remote sensing image comprises ships, sea fog, thick cloud layers, islands and the like, the sizes and the color polarities of the ships are different, and the positions of the ships on the sea surface are distributed randomly.
And step S2, introducing covariance statistic and homologous similarity measurement, and using a visual saliency method based on covariance combined characteristics to carry out global detection on the sea surface in the remote sensing image to generate a single-scale saliency map. Specifically, the method comprises the following steps:
step S21: calculating the brightness of a pixel m according to the input remote sensing image, extracting the gradient characteristic in the horizontal direction and the vertical direction, the brightness second derivative characteristic, the brightness L of the Lab color space close to human vision, the opposite color dimension a and b characteristic, wherein the three characteristics respectively correspond to the three-line image sequence of the second line diagram of the figure 3 and form a nine-dimensional characteristic vector f with the position coordinates (x, y) m :
Step S22: dividing the remote sensing image into square regions R with the same size, calculating a characteristic mean value, and using f m Symmetrically constructing a 9 × 9 covariance feature matrix as a region descriptor S:
Step S23: perform a Cholesky decomposition on the region descriptor to obtain the row vectors L_i of the upper triangular matrix; the region descriptor is then equivalent to a point set S in Euclidean space:
Combining the feature mean μ with the point set S yields a feature vector ψ_μ(C_R) that encodes C_R with Euclidean computational capability:
ψ_μ(C_R) = (μ, s_1, s_2, …, s_k, s_{k+1}, …, s_{2k})
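As a concrete illustration of steps S22 and S23, the following NumPy sketch builds a region covariance descriptor and its Euclidean point-set encoding ψ_μ(C_R). It is a hedged reconstruction: the patent's formulas are not reproduced in this text, so the regularization term and the √k scaling of the Cholesky columns are conventional choices of this sketch, not taken from the patent.

```python
import numpy as np

def region_covariance_descriptor(features):
    """Region descriptor from an (n_pixels, 9) feature array:
    mean vector mu, a 9x9 covariance matrix, and a Euclidean
    point-set encoding built from its Cholesky factor."""
    mu = features.mean(axis=0)
    # Regularised sample covariance (9x9) of the region's features.
    c = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    # Lower-triangular factor, c = L @ L.T (the transpose of the
    # patent's upper-triangular row vectors L_i).
    L = np.linalg.cholesky(c)
    k = c.shape[0]
    alpha = np.sqrt(k)
    # Symmetric point set: mu +/- sqrt(k) * column vectors of L.
    points = [mu] + [mu + alpha * L[:, i] for i in range(k)] \
                  + [mu - alpha * L[:, i] for i in range(k)]
    return np.concatenate(points)   # psi_mu(C_R), length 9 * (2k + 1)

# Example: 100 random 9-dimensional feature vectors for one region.
rng = np.random.default_rng(0)
f = rng.normal(size=(100, 9))
psi = region_covariance_descriptor(f)
print(psi.shape)   # (171,) = 9 * (2*9 + 1)
```

The point-set form lets the second-order descriptor be compared with plain Euclidean distances, which is what the context similarity measure of step S24 relies on.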
Step S24: the similarity is represented by Euclidean distance through a context similarity measure, and the most similar T measures are searched for representing the significance of the regions. Wherein the context includes: radius of three times the length of 3 divided units, i.e. R i Number range of 1-9 and number 5 excluding R, this embodiment sets T to 5, and the formula is as follows:
step S25: designing a homologous similarity weight function w on the basis of the significance j And enhancing contrast to obtain a sparse graph of the salient region:
wherein the weight function is measured using an inverse function of the feature distance, defined as a gaussian function:
and step S3, performing down-sampling on the generated single-scale saliency map, establishing a multi-scale saliency map, and performing normalization fusion through a superposition fusion mechanism to obtain a final saliency map. Specifically, the method comprises the following steps:
Following the single-scale saliency map generation process, the saliency map is extended to multiple scales to balance the opposing demands of region-characterization capability and spatial resolution, and the multi-scale products are fused and normalized. Referring to FIG. 4, this embodiment uses 3 scales, Γ = {σ | σ = 2^-k, k = 4, 5, 6}; as shown in FIGS. 4(b)-4(d), the fine scales remove cloud and fog well while the coarse scale helps highlight the ship target, yielding the final saliency map of FIG. 4(e):
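A minimal sketch of the multi-scale stacking and normalized-fusion idea: each pyramid level is normalized to [0, 1], resampled back to the full grid and averaged. Average-pool downsampling and nearest-neighbour upsampling are stand-ins of this sketch, not the patent's actual resampling scheme.

```python
import numpy as np

def downsample2(img):
    """2x average-pool downsampling (cropping to an even size first)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def fuse_multiscale(saliency, n_scales=3):
    """Normalize each pyramid level to [0, 1], upsample it back to
    the full grid and average: a stand-in for superposition fusion."""
    h, w = saliency.shape
    fused = np.zeros((h, w))
    level = saliency
    for _ in range(n_scales):
        norm = (level - level.min()) / (np.ptp(level) + 1e-12)
        fy = (np.arange(h) * level.shape[0]) // h   # nearest-neighbour rows
        fx = (np.arange(w) * level.shape[1]) // w   # nearest-neighbour cols
        fused += norm[np.ix_(fy, fx)]
        level = downsample2(level)
    return fused / n_scales

rng = np.random.default_rng(2)
s = rng.random((64, 64)) * 0.1       # noisy sea background
s[20:28, 30:44] = 1.0                # a bright, ship-like blob
f = fuse_multiscale(s)
print(f[24, 36], f[5, 5])            # blob score far above background
```

Coarse levels smear isolated noise away while the blob survives pooling, which is the fusion's intended effect.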
and step S4, calculating a detection threshold value of the sea surface ship target according to the obtained gray statistical characteristics of the final saliency map, binarizing the saliency map, realizing rough segmentation of the saliency map, marking back the original remote sensing image, finding out the area of each target, and separating the suspected target from the sea surface background. Specifically, the method comprises the following steps:
In this embodiment, the OTSU method is used to obtain an adaptive segmentation threshold T and establish connected regions for extracting the target:
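A self-contained sketch of OTSU thresholding and rough binarization of a saliency map, in pure NumPy. The 256-bin histogram is an arbitrary choice of this sketch; the connected-region labelling step of the patent is omitted here.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Adaptive threshold T maximising the between-class variance (OTSU)."""
    hist, edges = np.histogram(img, bins=bins, range=(img.min(), img.max()))
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 (background) probability
    mu = np.cumsum(p * centers)       # cumulative mean
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return centers[np.argmax(sigma_b)]

rng = np.random.default_rng(3)
sal = rng.normal(0.1, 0.02, size=(64, 64))               # sea background
sal[10:18, 20:40] = rng.normal(0.9, 0.02, size=(8, 20))  # ship region
T = otsu_threshold(sal)
mask = sal > T        # rough binary segmentation of the saliency map
```

The mask can then be marked back onto the original image to locate each suspected target region.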
in step S5, please refer to fig. 7, a training set and a testing set are established. The training set comprises positive samples and negative samples; the test set includes positive samples and negative samples.
This embodiment builds a training set and a test set of 630 images, each 56 × 56 pixels. The positive samples contain various ships against different backgrounds, with ship sizes ranging from 6 to 20 pixels; the negative samples come from background interference that may be present at sea, such as waves, wakes, clouds, dense cloud and islands.
Step S6, designing CF-Fourier characteristics, embedding aggregate channel characteristics (ACF) and pyramid characteristics (FGPM), and establishing a framework to obtain a characteristic vector for training and testing. Specifically, the method comprises the following steps:
Step S61: as shown in FIG. 5(a) and FIG. 5(b), the gradient of the planar image I(x, y) at pixel (x, y) is expressed as (D(x, y), θ(D(x, y))); the continuous gradient-orientation impulse curve is then calculated as:
h(ζ) = ||D(x, y)|| δ(ζ - θ(D(x, y)))
step S62: fourier analysis was used for gradient direction pulse curves:
coefficient of performanceThe corresponding Fourier domain coefficient image is shown in fig. 5 (c);
step S63: self-guiding kernel function P for rotating image in vector field and searching condition of rotation invariance j (r):
Step S64: by adopting the kernel function convolution modeling, according to the condition of the rotation invariance, the Fourier HOG rotation invariance descriptor is expressed as follows:
step S65: introducing a circumferential frequency filter (CF), designing a gray value change mode of a ship target by using the brightness difference between the ship and the surrounding background, referring to fig. 6, calculating Discrete Fourier Transform (DFT) of the gray value at a pixel (i, j):
step S66: the extracted rotation invariance gradient characteristic and the extracted circumference frequency characteristic are sent to a classifier to distinguish whether the ship is a real ship or a false alarm. Feature refinement is structurally performed using an Aggregate Channel Feature (ACF). And inputting the obtained ACF into a boosting decision tree, wherein the quick detection rate and the low calculation requirement are important. The efficiency is slow for directly collecting image pyramid features at the basic scale d 0 Based on the different scales d realized by estimating the scale factor lambda 1 Fast pyramid feature estimation of (1):
F d1 =F d0 ·(d 0 /d 1 ) -λ
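The fast pyramid estimation formula above is simple to state in code; the values of d_0, d_1 and λ below are illustrative only, since λ is estimated per channel in practice.

```python
import numpy as np

def estimate_pyramid_feature(F_d0, d0, d1, lam):
    """Fast feature-pyramid estimation: instead of recomputing channel
    features at scale d1, rescale the base-scale features by the
    power law F_d1 = F_d0 * (d0 / d1) ** (-lam)."""
    return F_d0 * (d0 / d1) ** (-lam)

F_base = np.full((8, 8), 4.0)    # a channel feature map at base scale d0
F_half = estimate_pyramid_feature(F_base, d0=1.0, d1=0.5, lam=1.0)
print(F_half[0, 0])   # 2.0, i.e. 4.0 * (1.0 / 0.5) ** (-1.0)
```

Only a few scales per octave need to be computed directly; intermediate levels are filled in by this power-law extrapolation, which is what makes the pyramid fast.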
and S7, training a model according to the established frame and positive and negative samples in the training set, and classifying the feature vectors in the step S6 by using a Boosting decision tree.
With positive and negative samples divided in a 1:3 ratio as the input of the model, the model is trained; classification produces a confidence score for each candidate region, and the intersection over union (IoU) is used as the criterion for judging whether a candidate region is a true target.
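Intersection over union, the criterion named above, can be sketched as follows; the corner-coordinate box convention (x1, y1, x2, y2) is an assumption of this sketch.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)   # clamp: no overlap -> 0
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))    # 1.0
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))    # 0.333...
print(iou((0, 0, 10, 10), (20, 20, 30, 30)))  # 0.0
```

A candidate whose IoU with a ground-truth ship box exceeds a fixed cutoff counts as a true target; the cutoff itself is not stated in this text.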
And step S8, testing the candidate area according to the established frame and the positive and negative samples in the test set, and completing the marine ship target extraction and identification of the optical remote sensing image.
In this embodiment, each suspected ship-target region fragment extracted in step S4 is set to 56 × 56 and sent to the model for testing, to judge whether it is a real target or a false alarm; real targets are retained, false alarms are removed, and the retained targets are finally marked in the input image.
The method comprises visual-saliency segmentation extraction and supervised fine discrimination. In the saliency segmentation and extraction stage, covariance features specific to sea-surface ship targets are constructed and a second-order-statistic weight is designed from the homologous similarity; the single saliency map obtained after the optimal similarity measurement suppresses the background well and highlights the targets. A multi-scale fusion strategy and an adaptive threshold segmentation module then extract sea-surface ship regions efficiently, without supervision, across varied environments such as different target scales, cluttered cloud-and-fog backgrounds, sea clutter and wake interference. In the supervised fine-discrimination stage, a CF-Fourier HOG feature of the ship in the spatial-frequency domain is designed; this feature is rotation invariant, and under the aggregate-channel-feature and fast-feature-pyramid acceleration framework, ship target identification and false-alarm elimination are completed.
Although the present invention has been described with reference to the preferred embodiments, it will be understood by those skilled in the art that the foregoing preferred embodiments are merely illustrative of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and scope of the present invention should be included in the scope of the claims of the present invention.
Claims (7)
1. A marine ship target extraction and identification method based on optical remote sensing images is characterized by comprising the following steps:
a. inputting a visible light remote sensing image;
b. introducing covariance statistics and homologous similarity measurement, and using a visual saliency method based on covariance combined characteristics to carry out global detection on the sea surface in the remote sensing image to generate a single-scale saliency map;
c. down-sampling the generated single-scale saliency map, building a multi-scale saliency map, and obtaining the final saliency map through a superposition fusion mechanism and normalized fusion;
d. calculating a detection threshold for sea-surface ship targets according to the gray-level statistics of the obtained final saliency map, binarizing the saliency map to achieve a rough segmentation, marking the result back onto the original remote sensing image, finding the region of each target, and separating suspected targets from the sea-surface background;
e. establishing a training set and a test set, wherein the training set comprises positive samples and negative samples, and the test set comprises positive samples and negative samples;
f. designing the CF-Fourier feature, embedding the aggregate channel feature (ACF) and pyramid feature (FGPM), and building a framework to obtain feature vectors for training and testing;
g. training a model according to the built frame and positive and negative samples in the training set, and classifying the feature vectors in the step f by using a Boosting decision tree;
h. testing the candidate regions with the built framework and the positive and negative samples of the test set, completing the extraction and identification of marine ship targets in the optical remote sensing image.
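Steps c and d above can be sketched as follows. The claim does not disclose the exact gray-level statistic used for the detection threshold, so the mean-plus-k-standard-deviations rule and the factor k below are illustrative assumptions, not the patented formula:

```python
import numpy as np

def saliency_threshold(saliency, k=2.0):
    """Detection threshold from the gray-level statistics of the final
    saliency map: mean + k * standard deviation. The exact statistic is
    not disclosed in the claim; this rule and k are assumptions."""
    return saliency.mean() + k * saliency.std()

def binarize(saliency, k=2.0):
    """Coarse segmentation: pixels above the threshold are suspected
    ship targets, the rest is treated as sea-surface background."""
    return saliency > saliency_threshold(saliency, k)

# Synthetic final saliency map with one bright blob on a dark background.
sal = np.zeros((64, 64))
sal[20:30, 40:50] = 1.0
mask = binarize(sal)
print(mask.sum())  # 100 pixels survive, i.e. the blob
```

The binary mask would then be labeled into connected regions and mapped back onto the original remote sensing image to cut out the suspected-target slices used in steps e-h.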
2. The method of claim 1, wherein step a comprises:
the method comprises the steps of inputting an optical remote sensing image f (x, y) with the spatial resolution of H multiplied by W, wherein the remote sensing image comprises ships, sea fog, thick cloud layers, islands and the like, the sizes and the color polarities of the ships are different, and the ships are randomly distributed on the sea surface.
3. The method of claim 2, wherein step b comprises:
Step S21: for each pixel m of the input remote sensing image, calculating the brightness, extracting the horizontal and vertical gradient features, the second-derivative brightness features, and the lightness L and opponent-color a and b features of the perceptually based Lab color space, which together with the position coordinates (x, y) form a nine-dimensional feature vector f_m:
Step S22: dividing the remote sensing image into square regions R of equal size, calculating the feature mean, and using f_m to construct a symmetric 9 × 9 covariance feature matrix as the region descriptor:
Step S23: performing Cholesky decomposition on the region descriptor to obtain the row vectors L_i of the triangular factor, so that the region descriptor is equivalent to a set of points S in Euclidean space:
combining the feature mean μ and the point set S yields, for C_R, an encoded feature vector ψ_μ(C_R) that supports Euclidean-space computation:
ψ_μ(C_R) = (μ, s_1, s_2, ..., s_k, s_{k+1}, s_{k+2}, ..., s_{2k});
Step S24: by the context similarity measure, the most similar T measures are found to represent the significance of the region, and the formula is as follows:
Step S25: on the basis of the saliency, designing a homologous-similarity weight function w_j and enhancing the contrast to obtain a sparse map of the salient regions:
wherein the weight function uses an inverse function of the feature distance, defined as a Gaussian function:
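A minimal NumPy sketch of the covariance region descriptor and its Cholesky-based Euclidean embedding (steps S21-S23). The exact scaling of the Cholesky "sigma points" and the ordering of the 2k points are not disclosed in the claims, so the ±L_i convention below is an illustrative assumption:

```python
import numpy as np

def region_descriptor(features):
    """Covariance region descriptor encoded as a Euclidean vector.

    `features` is an (n_pixels, 9) array of the nine-dimensional
    vectors f_m. Returns the concatenation of the feature mean mu and
    the flattened point set S built from the Cholesky factor of the
    covariance, so regions can be compared with plain Euclidean
    distances.
    """
    mu = features.mean(axis=0)
    C = np.cov(features, rowvar=False)          # 9 x 9 covariance C_R
    # Small jitter keeps C positive definite for the decomposition.
    # np.linalg.cholesky returns the lower-triangular factor; the
    # claim's upper-triangular form is simply its transpose.
    L = np.linalg.cholesky(C + 1e-9 * np.eye(9))
    # Illustrative sigma-point set: the rows +L_i and -L_i,
    # giving 2k points for k = 9.
    S = np.concatenate([L, -L], axis=0)
    return np.concatenate([mu, S.ravel()])

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 9))
psi = region_descriptor(feats)
print(psi.shape)  # 9 mean entries + 2 * 9 * 9 point entries = (171,)
```

Because the descriptor is now an ordinary vector, the context-similarity search of step S24 reduces to nearest-neighbor queries in Euclidean space.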
5. The method according to claim 4, wherein said step f specifically comprises:
Step S61: representing the gradient of the planar image I(x, y) at pixel (x, y) as (D(x, y), θ(D(x, y))), the continuous gradient-direction pulse curve is calculated as:
h(ζ)=||D(x,y)||δ(ζ-θ(D(x,y)));
step S62: fourier analysis was used for gradient direction pulse curves:
Step S63: self-guiding kernel function P for rotating image in vector field and searching condition of rotation invariance j (r):
Step S64: by adopting the kernel function convolution modeling, according to the condition of the rotation invariance, the Fourier HOG rotation invariance descriptor is expressed as follows:
Step S65: introducing the circumferential frequency filter CF, modeling the gray-value variation pattern of a ship target from the brightness difference between the ship and its surrounding background, and calculating the discrete Fourier transform (DFT) of the gray values at pixel (i, j):
Step S66: sending the extracted rotation-invariant gradient features and circumferential-frequency features to a classifier to distinguish real ships from false alarms; because directly collecting image-pyramid features at every scale is slow, features at a different scale d_1 are estimated quickly from those at the base scale d_0 by estimating the scale factor λ:
F_{d1} = F_{d0} · (d_0 / d_1)^{-λ}.
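The fast pyramid-feature estimate of step S66 follows directly from the power-law relation above. In aggregate-channel-feature pyramids, λ is estimated offline per feature channel; the values of λ, d_0 and d_1 below are illustrative only:

```python
import numpy as np

def fast_pyramid_feature(F_d0, d0, d1, lam):
    """Estimate the feature map at scale d1 from the map computed at
    the base scale d0, using F_d1 = F_d0 * (d0/d1)**(-lam), instead of
    recomputing the features at every pyramid level."""
    return F_d0 * (d0 / d1) ** (-lam)

# Illustrative base-scale feature map and scale factor.
F0 = np.ones((4, 4))
F1 = fast_pyramid_feature(F0, d0=1.0, d1=2.0, lam=1.0)
print(F1[0, 0])  # (1/2)^(-1) = 2.0
```

Only one scale per octave needs a full feature computation; the remaining levels are extrapolated, which is where the speed-up over a brute-force pyramid comes from.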
6. The method according to claim 5, wherein said step g comprises the steps of:
dividing the positive and negative samples in a 1:3 ratio as input to the model, training the model, classifying to generate a confidence score for each candidate region, and using the intersection-over-union as the criterion for judging whether a candidate region is a true target.
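The intersection-over-union criterion of claim 6 can be sketched as follows; the (x1, y1, x2, y2) box format and the 0.5 acceptance threshold are common conventions, not values stated in the claim:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2); used to decide whether a classified candidate
    region overlaps a ground-truth ship enough to count as a true
    target."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 = 0.333...
```

A candidate would typically be accepted as a true target when its IoU with a ground-truth box exceeds a fixed threshold such as 0.5.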
7. The method according to claim 6, wherein said step h specifically comprises:
testing the suspected ship-target region slices extracted in step d, and judging whether each slice is a real target or a false alarm: real targets are retained, false alarms are removed, and the retained targets are finally marked in the input image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210463979.0A CN114821358A (en) | 2022-04-29 | 2022-04-29 | Optical remote sensing image marine ship target extraction and identification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114821358A true CN114821358A (en) | 2022-07-29 |
Family
ID=82509200
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116109936A (en) * | 2022-10-21 | 2023-05-12 | 中国科学院长春光学精密机械与物理研究所 | Target detection and identification method based on optical remote sensing |
CN116109936B (en) * | 2022-10-21 | 2023-08-29 | 中国科学院长春光学精密机械与物理研究所 | Target detection and identification method based on optical remote sensing |
CN115424249A (en) * | 2022-11-03 | 2022-12-02 | 中国工程物理研究院电子工程研究所 | Self-adaptive detection method for small and weak targets in air under complex background |
CN115424249B (en) * | 2022-11-03 | 2023-01-31 | 中国工程物理研究院电子工程研究所 | Self-adaptive detection method for small and weak targets in air under complex background |
CN117611998A (en) * | 2023-11-22 | 2024-02-27 | 盐城工学院 | Optical remote sensing image target detection method based on improved YOLOv7 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||