CN116503737B - Ship detection method and device based on space optical image - Google Patents


Info

Publication number: CN116503737B (application CN202310523354.3A)
Authority: CN (China)
Prior art keywords: image, ship, feature, optical image, information
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN116503737A
Inventors: 赵薇薇, 陈雪华, 吕守业, 刘喆, 王永刚, 王艳, 王刚
Current Assignee: No61646 Unit Of Pla
Original Assignee: No61646 Unit Of Pla
Application filed by: No61646 Unit Of Pla
Priority: CN202310523354.3A
Publication of application: CN116503737A
Publication of grant: CN116503737B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/0464: Convolutional networks [CNN, ConvNet]
    • G06N3/08: Learning methods
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806: Fusion of extracted features
    • G06V10/82: Arrangements using neural networks

Abstract

The invention discloses a ship detection method based on space optical images, comprising the following steps: acquiring space optical image information to be detected, where the space optical images to be detected are ship space optical images of the same region at different times; preprocessing the space optical image information to be detected to obtain standard space optical image information; building and training a ship image detection model; and processing the standard space optical image information with the ship image detection model to obtain a ship detection result, which represents the ship presence information of the space optical image information for the same region at different times. The method achieves high-timeliness, high-precision optical ship detection, and can complete detection under large data volumes, complex-scene space optical image data, and space-platform resource constraints.

Description

Ship detection method and device based on space optical image
Technical Field
The invention belongs to the field of optical image detection, and particularly relates to a ship detection method and device based on a space optical image.
Background
An optical sensor images a scene by capturing its reflected light and offers high spatio-temporal resolution. The acquired images reflect the texture details of the scene surface well, which aids an observer's overall understanding of the scene; they carry rich detail such as edges and textures, are sharp under sufficient illumination, and, owing to high light sensitivity, are well suited to observation by the human eye. For target information acquisition, optical sensors are the most common image source and are widely applied to ship detection, identification, tracking, and related fields. Extracting useful information from this image source improves ship detection capability, reduces the false-alarm and missed-detection rates in early warning, and provides strong support for judgment and decision-making in practical application scenarios.
In recent years, with the rapid development of space optical sensing technology, high-resolution, large-scale space optical image data have grown steadily richer. The imaging quality of optical data is easily affected by illumination and weather: under poor lighting conditions with weak light, image quality degrades rapidly and the image becomes blurred. Space optical images are also wide-swath and multi-scale, so detection in them is harder than in ordinary images. Optical object detection means finding the specific location of an object of interest in an optical image and identifying its class. However, conventional machine-learning-based optical detection methods tend to give unsatisfactory results, because the background of a space optical image is very complex and most targets are densely packed small objects. In a large-scale aerial optical image, a small vessel occupies only a few pixels, which are easily lost during training, so critical information goes missing. The ships to be detected usually sit in rather complex scenes, which makes small-ship detection harder still, and convolutional neural networks applied to such scenes have high complexity and heavy computation. How to detect ships in space optical images quickly and with high precision while satisfying platform resource constraints is therefore a problem that needs to be solved.
Disclosure of Invention
The invention aims to provide a ship detection method and device based on space optical images that achieve high-timeliness, high-precision optical ship detection, and that can complete optical ship detection under large data volumes, complex space optical image data, and space-platform resource constraints.
The invention discloses a ship detection method based on a space optical image, which comprises the following steps:
S1, acquiring space optical image information to be detected; the space optical images to be detected in the space optical image information to be detected are ship space optical images of the same region at different times;
S2, preprocessing the space optical image information to be detected to obtain standard space optical image information;
S3, building and training a ship image detection model;
S4, processing the standard space optical image information by using the ship image detection model to obtain a ship detection result; the ship detection result represents the ship presence information of the space optical image information to be detected for the same region at different times.
Preprocessing the space optical image information to be detected to obtain standard space optical image information comprises the following steps:
S21, carrying out linear quantization on the space optical image information to be detected to obtain quantized image data information;
S22, carrying out linear stretching on the gray value information of the quantized image data information to obtain gray-stretched image information;
S23, carrying out ship labeling and normalization on the gray-stretched image information to obtain labeled picture information; the labeled picture information comprises an optical image and corresponding label information;
S24, combining the gray-stretched image information and the labeled picture information to obtain a source picture set;
S25, dividing the source picture set by proportion to obtain a first standard image set and a second standard image set;
S26, performing an overlap cropping operation on the first standard image set to obtain a second sub-optical image set;
S27, carrying out ship screening on the second sub-optical image set, taking all sub-images containing ships as a third standard image set;
S28, carrying out data enhancement on the third standard image set to obtain a fourth standard image set;
S29, fusing the second standard image set and the fourth standard image set to obtain standard space optical image information; the standard space optical image information comprises space optical images and corresponding ship label information; the ship label information comprises ship category information and ship position information.
Performing the overlap cropping operation on the first standard image set to obtain a second sub-optical image set comprises:
S261, carrying out partial-overlap cropping on each space optical image in the first standard image set by using a sliding window from a set direction to obtain a first sub-optical image set; the first sub-optical image set comprises a plurality of first sub-optical images, and every two adjacent sub-optical images in the first sub-optical image set contain an overlapping area of a preset proportion;
S262, calculating the intersection ratio of the cut ship region of each sub-optical image in the first sub-optical image set, and judging whether the intersection ratio is larger than a set threshold to obtain a threshold judgment result; the intersection ratio of a cut ship region is the ratio of the remaining area of the cut ship region in the sub-optical image to the complete area of the cut ship region;
if the threshold judgment result is larger than the set threshold, retaining the sub-optical image in the first sub-optical image set, and gathering all retained sub-optical images to obtain the second sub-optical image set;
if the threshold judgment result is smaller than the set threshold, deleting the sub-optical image from the first sub-optical image set.
Carrying out data enhancement on the third standard image set to obtain a fourth standard image set comprises:
S281, counting the number of ships contained in each space optical image in the third standard image set;
S282, performing statistics on the ship numbers of all the space optical images to obtain the value distribution range of the ship number;
S283, dividing the value distribution range into a first value interval, a second value interval and a third value interval in order from low to high, using the set first demarcation value and second demarcation value as boundaries;
S284, judging the value interval to which the number of ships contained in each space optical image in the third standard image set belongs;
S285, if the value interval is the first value interval, performing random rearrangement on the corresponding space optical image to obtain a first enhanced space optical image;
if the value interval is the second value interval, performing random scaling on the corresponding space optical image to obtain a second enhanced space optical image;
if the value interval is the third value interval, performing random cropping on the corresponding space optical image to obtain a third enhanced space optical image;
S286, gathering the first enhanced space optical images, the second enhanced space optical images and the third enhanced space optical images to obtain the fourth standard image set.
The ship image detection model comprises a feature coding module, a feature fusion module, a prediction module and a feedback module;
the feature coding module comprises three feature extraction layers with different dimensions, an attention mechanism layer, a correlation extraction module and two up-sampling modules;
the feature coding module is used for carrying out feature coding on the standard space optical image information to obtain a feature image; the first output end of the feature encoding module is connected with the first input end of the feature fusion module; the second output end of the feature coding module is connected with the first input end of the prediction module;
the feature fusion module is used for carrying out fusion processing on the feature images to obtain fusion features; the first output end of the characteristic fusion module is connected with the second input end of the prediction module;
The prediction module is used for carrying out regression and classification processing on the fusion characteristics and the characteristic images to obtain category information and position information of the ship; the first output end of the prediction module is connected with the first input end of the feedback module;
the feedback module is used for carrying out distance difference calculation processing on the category information and the position information of the ship and the ship label information to obtain difference information; the first output end of the feedback module is connected with the third input end of the prediction module.
Processing the standard space optical image information by using the ship image detection model to obtain a ship detection result comprises the following steps:
S401, performing convolution pooling and encoding on the standard space optical image information with the three feature extraction layers of different dimensions to obtain a first feature map, a second feature map and a third feature map respectively;
S402, performing channel attention extraction on the standard space optical image information with the attention mechanism layer to obtain channel attention features;
S403, performing correlation extraction on the channel attention features, the first feature map, the second feature map and the third feature map with the correlation extraction module to obtain a correlation feature image, and outputting the correlation feature image to the prediction module;
The calculation expression of the correlation extraction is as follows:

F′ = M_C(F) ⊗ F
F″ = M_S(F′) ⊗ F′

where F denotes the feature map set comprising the first feature map, the second feature map and the third feature map, M_C(·) denotes the channel attention extraction function, F′ denotes the channel attention feature, M_S(·) denotes the spatial attention mapping function, ⊗ denotes element-wise multiplication, and F″ denotes the correlation feature image;
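The correlation extraction of step S403 applies channel attention followed by spatial attention. A minimal numpy sketch, assuming global average pooling plus a sigmoid as the internals of M_C and M_S (the patent does not specify them; real channel attention typically adds a shared MLP and max pooling):

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """M_C: one weight per channel from global average pooling (assumed form)."""
    pooled = feat.mean(axis=(1, 2))            # (C,)
    return _sigmoid(pooled)[:, None, None]     # broadcastable to (C, H, W)

def spatial_attention(feat):
    """M_S: one weight per pixel from channel-wise average pooling (assumed form)."""
    pooled = feat.mean(axis=0, keepdims=True)  # (1, H, W)
    return _sigmoid(pooled)

def correlation_extract(feat):
    """F'' = M_S(F') * F', where F' = M_C(F) * F."""
    f_prime = channel_attention(feat) * feat   # channel-refined feature F'
    return spatial_attention(f_prime) * f_prime
```

In the model, F gathers the three feature maps; here a single (C, H, W) array stands in for that set.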
S404, fusing the upsampled first feature map with the upsampled second feature map to obtain a first fused feature image, and fusing the upsampled second feature map with the upsampled third feature map to obtain a second fused feature image; inputting the first feature map, the second feature map, the third feature map, the first fused feature image and the second fused feature image into the feature fusion module;
S405, fusing the feature images with the feature fusion module to obtain fusion features; the feature images comprise the first feature map, the second feature map, the third feature map, the first fused feature image and the second fused feature image;
S406, classifying and regressing the fusion features and the correlation feature image with the prediction module to obtain the category information and the position information of the ship; the ship category information and the ship position information form the ship detection result.
The prediction module comprises a first regression sub-module, a second regression sub-module, a residual analysis sub-module, a regression fusion sub-module and a classification sub-module;
the first regression sub-module is used for performing linear fitting on the fusion features, the correlation feature image and the corresponding label information to obtain a linear fitting model, and adjusting the weight values of the linear fitting model with a weight adjustment value;
the second regression sub-module is used for performing polynomial fitting on the fusion characteristic and the correlation characteristic image and the corresponding label information to obtain a polynomial fitting model, and adjusting the weight value of the polynomial fitting model by utilizing the weight adjustment value;
the residual analysis submodule is used for calculating residual errors of the linear fitting model and the polynomial fitting model respectively and generating fusion weights according to residual error calculation results;
the regression fusion sub-module is used for carrying out weighted fusion treatment on the linear fitting model and the polynomial fitting model by using the fusion weight to generate a first prediction model;
and the classification sub-module is used for performing characteristic classification operation on the prediction result of the first prediction model to obtain the category information and the position information of the ship.
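The interplay of the regression, residual-analysis and fusion sub-modules can be illustrated on one-dimensional toy data: fit a linear and a polynomial model, weight each by the inverse of its summed squared residual, and fuse the predictions. This is a hedged sketch, not the patent's feature-space implementation; the polynomial degree and the inverse-residual weighting are assumptions:

```python
import numpy as np

def fit_models(x, y, deg=2):
    """First/second regression sub-modules: linear and polynomial fits.
    Degree 2 is an illustrative choice; the patent leaves it open."""
    return np.polyfit(x, y, 1), np.polyfit(x, y, deg)

def residual_weights(x, y, lin, poly, eps=1e-9):
    """Residual-analysis sub-module: weight each model by the inverse of
    its summed squared residual, normalized to sum to 1."""
    r = np.array([np.sum((np.polyval(m, x) - y) ** 2) for m in (lin, poly)])
    inv = 1.0 / (r + eps)
    return inv / inv.sum()

def fused_predict(x, lin, poly, w):
    """Regression-fusion sub-module: weighted sum of both predictions."""
    return w[0] * np.polyval(lin, x) + w[1] * np.polyval(poly, x)
```

On quadratic data the polynomial model earns nearly all the weight, so the fused prediction tracks the better-fitting model, which is exactly the intent of the residual-driven fusion.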
Training the ship image detection model comprises:
selecting sample image information from the second standard image set;
initializing the training count;
inputting the sample image information into the ship image detection model, and taking the obtained ship category information and position information as a training detection result;
calculating a difference value between the sample image information and the training detection result with a loss function;
incrementing the training count;
judging whether the training count exceeds the training count threshold to obtain a first judgment result; if the first judgment result is yes, triggering the model checking operation;
if the first judgment result is no, judging whether the difference value meets the convergence condition to obtain a second judgment result;
if the second judgment result is yes, triggering the model checking operation; if the second judgment result is no, updating the weight values of the prediction module with the difference value, and re-inputting the sample image information into the ship image detection model;
checking the ship image detection model with the fourth standard image set: inputting the fourth standard image set into the ship image detection model to obtain detection results and computing the detection accuracy; if the detection accuracy exceeds the set threshold, the ship image detection model passes the check; if not, triggering the selection of sample image information from the second standard image set again.
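The training flow above (iterate, check the iteration budget, check convergence of the difference value, otherwise feed the difference back into a weight update) can be sketched generically. All callables here are caller-supplied stand-ins, not the patent's network:

```python
def train_loop(w, samples, forward, loss_fn, update_fn,
               max_iters=100, tol=1e-3):
    """Generic form of the described training procedure: stop when the
    training count exceeds the budget (first judgment) or the difference
    value converges (second judgment); otherwise update the weights with
    the difference and iterate again."""
    iters = 0
    while True:
        diff = loss_fn(forward(w, samples), samples)
        iters += 1
        if iters > max_iters:        # first judgment: budget exhausted
            return w, iters, diff
        if diff < tol:               # second judgment: converged
            return w, iters, diff
        w = update_fn(w, diff)       # feed the difference back
```

A toy run with a single scalar weight converges in a few dozen iterations; in the patent, `update_fn` would be the feedback module adjusting the prediction module's weights.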
The invention also discloses a ship detection device based on the space optical image, which comprises:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the spatial optical image based vessel inspection method.
The invention also discloses a computer storage medium which stores computer instructions for executing the ship detection method based on the space optical image when the computer instructions are called.
The beneficial effects of the invention are as follows:
1. When detecting small ships, the invention effectively addresses the problem that a ship occupying only a limited pixel area in the image loses feature expression capability after multiple downsampling operations in a neural network and becomes hard for the network to learn, thereby improving the ship detection rate.
2. Current artificial-intelligence-based image detection methods require large networks and heavy computation; when platform mass and power consumption are limited, the memory and computing power of the onboard computing unit are restricted, and such methods cannot be applied effectively. In the ship image detection model, the detection function is realized by the prediction module, which is implemented with a fitting-and-weighting method; real-time ship detection can thus be completed even when platform memory and computing power are limited, achieving high-timeliness, high-precision detection of small optical ships.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the composition of a feature fusion module of the method of the present invention;
FIG. 3 is a schematic diagram of the composition of a prediction module of the method of the present invention;
FIG. 4 shows the results of the test under normal sea conditions using the method of the present invention;
FIG. 5 shows the results of the test under high sea conditions using the method of the present invention;
fig. 6 is a schematic diagram of the composition of the feature encoding module.
Detailed Description
For a better understanding of the present disclosure, an embodiment is presented herein.
FIG. 1 is a flow chart of the method of the present invention; FIG. 2 is a schematic diagram of the composition of a feature fusion module of the method of the present invention; FIG. 3 is a schematic diagram of the composition of a prediction module of the method of the present invention; FIG. 4 shows the results of the test under normal sea conditions using the method of the present invention; FIG. 5 shows the results of the test under high sea conditions using the method of the present invention. In fig. 4, the left graph shows the results before detection, and the right graph shows the results after detection. In fig. 5, the left graph shows the results before detection, and the right graph shows the results after detection.
The embodiment discloses a ship detection method based on a space optical image, which comprises the following steps:
S1, acquiring space optical image information to be detected; the space optical images to be detected in the space optical image information to be detected are ship space optical images of the same region at different times;
S2, preprocessing the space optical image information to be detected to obtain standard space optical image information;
S3, building and training a ship image detection model;
S4, processing the standard space optical image information by using the ship image detection model to obtain a ship detection result; the ship detection result represents the ship presence information of the space optical image information to be detected for the same region at different times.
The step S2 includes:
S21, carrying out linear quantization on the space optical image information to be detected to obtain quantized image data information;
step S21 specifically includes: linearly quantizing the acquired space optical image information to be detected, i.e. 16-bit medium/low-resolution single-channel optical image data, to obtain 8-bit visualizable image data;
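A minimal sketch of the 16-bit to 8-bit linear quantization of step S21, assuming a min-max linear mapping (the patent does not state which linear map is used):

```python
import numpy as np

def quantize_16bit_to_8bit(img16):
    """Linearly map a 16-bit single-channel image onto [0, 255].
    Min-max scaling is an assumption; any affine map would qualify."""
    img = img16.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                       # constant image: avoid divide-by-zero
        return np.zeros(img.shape, dtype=np.uint8)
    return np.round((img - lo) / (hi - lo) * 255.0).astype(np.uint8)
```

The gray-value linear stretching of step S22 has the same shape, applied to chosen lower/upper gray bounds instead of the raw minimum and maximum.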
S22, carrying out linear stretching on the gray value information of the quantized image data information to obtain gray-stretched image information;
S23, carrying out ship labeling and normalization on the gray-stretched image information to obtain labeled picture information; the labeled picture information comprises an optical image and corresponding label information;
S24, combining the gray-stretched image information and the labeled picture information to obtain a source picture set;
S25, dividing the source picture set by proportion to obtain a first standard image set and a second standard image set;
the proportional division of the source picture set may take 80% of the pictures in the source picture set as the first standard image set and the remaining 20% as the second standard image set;
S26, performing an overlap cropping operation on the first standard image set to obtain a second sub-optical image set;
S27, carrying out ship screening on the second sub-optical image set, taking all sub-images containing ships as a third standard image set;
the ship screening of the second sub-optical image set includes:
judging whether each sub-image in the second sub-optical image set contains a ship; if not, deleting the sub-image from the second sub-optical image set; if so, retaining the sub-image;
S28, carrying out data enhancement on the third standard image set to obtain a fourth standard image set;
S29, fusing the second standard image set and the fourth standard image set to obtain standard space optical image information; the standard space optical image information comprises space optical images and corresponding ship label information; the ship label information comprises ship category information and ship position information.
The fusion of the second standard image set and the fourth standard image set may simply combine the two sets.
Step S26 includes:
S261, carrying out partial-overlap cropping on each space optical image in the first standard image set by using a sliding window from a set direction to obtain a first sub-optical image set; the first sub-optical image set comprises a plurality of first sub-optical images, and every two adjacent sub-optical images in the first sub-optical image set contain an overlapping area of a preset proportion;
the size of the sliding window may be 5×5;
S262, calculating the intersection ratio of the cut ship region of each sub-optical image in the first sub-optical image set, and judging whether the intersection ratio is larger than a set threshold; if it is larger than the set threshold, retaining the sub-optical image in the first sub-optical image set, and gathering all retained sub-optical images to obtain the second sub-optical image set; if it is smaller than the set threshold, deleting the sub-optical image from the first sub-optical image set; the intersection ratio of a cut ship region is the ratio of the remaining area of the cut ship region in the sub-optical image to the complete area of the cut ship region.
The set threshold may be 0.6;
the set direction may start from the upper-left corner of the image, with the partial-overlap cropping proceeding first rightward and then downward.
A cut ship region refers to the complete region of a ship that is cut, and thus no longer fully displayed in a sub-optical image, after each space optical image is partially overlap-cropped with the sliding window from the set direction.
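Steps S261-S262 can be sketched as follows; the overlap proportion and the 0.6 threshold follow the examples above, while the window size and the (x1, y1, x2, y2) box format are illustrative assumptions:

```python
def crop_origins(h, w, win, overlap):
    """S261: top-left corners of sliding windows scanning rightward then
    downward, with a preset overlap proportion between neighbours."""
    step = max(1, int(win * (1.0 - overlap)))
    ys = list(range(0, max(h - win, 0) + 1, step))
    xs = list(range(0, max(w - win, 0) + 1, step))
    return [(y, x) for y in ys for x in xs]

def cut_ship_ratio(box, window):
    """S262: remaining area of a cut ship region inside the crop window,
    divided by the ship region's complete area."""
    bx1, by1, bx2, by2 = box
    wx1, wy1, wx2, wy2 = window
    iw = max(0, min(bx2, wx2) - max(bx1, wx1))
    ih = max(0, min(by2, wy2) - max(by1, wy1))
    full = (bx2 - bx1) * (by2 - by1)
    return (iw * ih) / full if full > 0 else 0.0

def keep_crop(ship_boxes, window, thresh=0.6):
    """Retain the sub-image when a cut ship keeps more than `thresh` of
    its area inside the window (one reading of the retention rule)."""
    return any(cut_ship_ratio(b, window) > thresh for b in ship_boxes)
```

With a 512-pixel window and 50% overlap, a 1024×1024 image yields a 3×3 grid of crops at stride 256.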
Step S28 includes:
S281, counting the number of ships contained in each space optical image in the third standard image set;
S282, performing statistics on the ship numbers of all the space optical images to obtain the value distribution range of the ship number;
S283, dividing the value distribution range into a first value interval, a second value interval and a third value interval in order from low to high, using the set first demarcation value and second demarcation value as boundaries;
the first value interval is the part of the value distribution range smaller than the first demarcation value; the second value interval is the part between the first demarcation value and the second demarcation value; the third value interval is the part larger than the second demarcation value;
the first demarcation value may be 1.1 times the minimum ship number; the second demarcation value may be 0.9 times the maximum ship number.
S284, judging the value interval to which the number of ships contained in each space optical image in the third standard image set belongs;
S285, if the value interval is the first value interval, performing random rearrangement on the corresponding space optical image to obtain a first enhanced space optical image;
if the value interval is the second value interval, performing random scaling on the corresponding space optical image to obtain a second enhanced space optical image;
if the value interval is the third value interval, performing random cropping on the corresponding space optical image to obtain a third enhanced space optical image;
S286, gathering the first enhanced space optical images, the second enhanced space optical images and the third enhanced space optical images to obtain the fourth standard image set.
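The interval logic of steps S283-S285 reduces to a small dispatch. The boundary factors 1.1 and 0.9 come from the example values above; the transform names are placeholders rather than concrete augmentations:

```python
def demarcations(ship_counts):
    """S283: demarcation values derived from the observed ship-count range."""
    return 1.1 * min(ship_counts), 0.9 * max(ship_counts)

def enhancement_for(count, ship_counts):
    """S284-S285: pick the augmentation by the count's value interval."""
    b1, b2 = demarcations(ship_counts)
    if count < b1:
        return "random_rearrangement"   # first interval: few ships
    if count <= b2:
        return "random_scaling"         # second interval
    return "random_cropping"            # third interval: many ships
```

For counts ranging from 1 to 20, the demarcation values are 1.1 and 18, so images with a single ship are rearranged, mid-range images scaled, and crowded images cropped.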
The ship image detection model comprises a feature coding module, a feature fusion module, a prediction module and a feedback module;
the feature coding module is used for carrying out feature coding on the standard space optical image information to obtain a feature image; the first output end of the feature encoding module is connected with the first input end of the feature fusion module; the second output end of the feature coding module is connected with the first input end of the prediction module; the feature coding module comprises a first feature extraction layer, a second feature extraction layer, a third feature extraction layer, an attention mechanism layer, a correlation extraction module, a first upsampling module and a second upsampling module; the dimensions of the first feature extraction layer, the second feature extraction layer and the third feature extraction layer are different; the three feature extraction layers with different dimensions and the attention mechanism layer are connected with the correlation extraction module; the first feature extraction layer and the second feature extraction layer are both connected with a first up-sampling module; the second feature extraction layer and the third feature extraction layer are connected with a second up-sampling module; the correlation extraction module is connected with the prediction module; the first feature extraction layer, the second feature extraction layer, the third feature extraction layer, the first upsampling module and the second upsampling module are all connected with the feature fusion module. Fig. 6 is a schematic diagram of the composition of the feature encoding module.
The feature fusion module is used for carrying out fusion processing on the feature images to obtain fusion features; the first output end of the characteristic fusion module is connected with the second input end of the prediction module;
the prediction module is used for carrying out regression and classification processing on the fusion characteristics and the characteristic images to obtain category information and position information of the ship; the first output end of the prediction module is connected with the first input end of the feedback module;
the feedback module is used for carrying out distance difference calculation processing on the category information and the position information of the ship and the ship label information to obtain difference information; the first output end of the feedback module is connected with the third input end of the prediction module.
The three feature extraction layers with different dimensions are realized by adopting three convolution-pooling submodules with different scales; each convolution-pooling submodule is implemented by connecting a convolution kernel and a pooling layer, with the output end of the convolution kernel connected to the input end of the pooling layer. The attention mechanism layer is used for realizing an attention mechanism.
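A minimal sketch of one such convolution-pooling submodule, assuming a single-channel valid convolution followed by non-overlapping max pooling (the kernel and pooling sizes are illustrative, not the patent's):

```python
import numpy as np

def conv_pool(image, kernel, pool=2):
    """One convolution-pooling submodule: a valid 2-D convolution
    (the convolution kernel) feeding non-overlapping max pooling
    (the pooling layer)."""
    kh, kw = kernel.shape
    H, W = image.shape
    conv = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(conv.shape[0]):          # slide the kernel over the image
        for j in range(conv.shape[1]):
            conv[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    ph, pw = conv.shape[0] // pool, conv.shape[1] // pool
    pooled = (conv[:ph * pool, :pw * pool]  # crop to a pool-aligned size
              .reshape(ph, pool, pw, pool)
              .max(axis=(1, 3)))            # max over each pool window
    return pooled
```

Running three such submodules at different kernel/pooling scales yields the three feature maps of different dimensions.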
The feature fusion module is composed of a differential unit, a cross-scale fused attention module, a first convolution layer, an up-sampling layer and a second convolution layer, wherein the output end of the cross-scale fused attention module is connected with the input end of the first convolution layer, the output end of the up-sampling layer is connected with the input end of the second convolution layer, the output end of the first convolution layer is connected with the first input end of the differential unit, and the output end of the second convolution layer is connected with the second input end of the differential unit.
The prediction module comprises a first regression sub-module, a second regression sub-module, a residual analysis sub-module, a regression fusion sub-module and a classification sub-module;
the first regression sub-module is used for carrying out linear fitting on the fusion feature, the correlation feature image and the corresponding label information to obtain a linear fitting model, and adjusting the weight value of the linear fitting model by utilizing the weight adjustment value;
the second regression sub-module is used for performing polynomial fitting on the fusion characteristic and the correlation characteristic image and the corresponding label information to obtain a polynomial fitting model, and adjusting the weight value of the polynomial fitting model by utilizing the weight adjustment value;
the residual analysis submodule is used for calculating residual errors of the linear fitting model and the polynomial fitting model respectively and generating fusion weights according to residual error calculation results;
the regression fusion sub-module is used for carrying out weighted fusion treatment on the linear fitting model and the polynomial fitting model by using the fusion weight to generate a first prediction model;
the classification sub-module is used for performing characteristic classification operation on the prediction result of the first prediction model to obtain the category information and the position information of the ship;
The feature classification operation can be realized by adopting a naive Bayesian method, a decision tree induction method or a random forest method.
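The residual-weighted combination of the linear and polynomial regressors (S226 to S229) can be illustrated with a toy NumPy sketch; the inverse-residual weighting and the polynomial degree are assumptions, since the patent only states that fusion weights are generated from the residual calculation results:

```python
import numpy as np

def fused_prediction(x, y, x_new, degree=3):
    """Fit a linear model and a polynomial model (S226, S227),
    derive fusion weights from their residuals (S228) -- here as
    inverse sum-of-squares, an assumption -- and return the
    weighted-fusion prediction (S229)."""
    lin = np.poly1d(np.polyfit(x, y, 1))        # linear fitting model
    poly = np.poly1d(np.polyfit(x, y, degree))  # polynomial fitting model
    r_lin = np.sum((y - lin(x)) ** 2) + 1e-9    # residuals (+eps for stability)
    r_poly = np.sum((y - poly(x)) ** 2) + 1e-9
    w_lin, w_poly = 1.0 / r_lin, 1.0 / r_poly   # smaller residual -> larger weight
    s = w_lin + w_poly
    return (w_lin * lin(x_new) + w_poly * poly(x_new)) / s
```

The classification sub-module would then operate on such fused predictions to separate ship categories.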
The processing the standard space optical image information by using the ship image detection model to obtain a ship detection result comprises the following steps:
s401, performing convolution pooling and coding processing on the spatial optical image information of the second standard image set by utilizing the three feature extraction layers with different dimensions to respectively obtain a first feature map, a second feature map and a third feature map;
s402, carrying out channel attention extraction on the spatial optical image information of the second standard image set by using the attention mechanism layer to obtain channel attention characteristics;
s403, performing correlation extraction on the channel attention feature, the first feature map, the second feature map and the third feature map by using the correlation extraction module to obtain a correlation feature image; outputting the correlation characteristic image to a prediction module;
the calculation expression of the correlation extraction is as follows:

F′ = M_C(F) ⊗ F,  F″ = M_S(F′) ⊗ F′

wherein F represents a feature map set comprising the first feature map, the second feature map and the third feature map, M_C(·) represents the channel attention extraction function, F′ represents the channel attention feature, M_S(·) represents the spatial attention mapping function, ⊗ represents element-wise multiplication, and F″ represents the correlation feature image;
s404, fusing the up-sampled first feature image and the up-sampled second feature image to obtain a first fused feature image; fusing the second feature image after upsampling with the third feature image after upsampling to obtain a second fused feature image; inputting the first feature map, the second feature map, the third feature map, the first fusion feature image and the second fusion feature image to a feature fusion module;
s405, carrying out fusion processing on the feature images by utilizing the feature fusion module to obtain fusion features; the feature images comprise a first feature image, a second feature image, a third feature image, a first fusion feature image and a second fusion feature image;
s406, classifying and regressing the fusion characteristic and the correlation characteristic image by using a prediction module to obtain the category information and the position information of the ship; and the ship category information and the ship position information form a ship detection result.
The training ship image detection model comprises:
after performing step S2, before performing step S3, the method further includes:
Selecting sample image information from the second standard image set;
initializing training times;
inputting the sample image information into a ship image detection model, and taking the obtained ship category information and position information as training detection results;
calculating the sample image information and the training detection result by using a loss function to obtain a difference value;
performing accumulation operation on the training time values;
judging whether the training time value exceeds a training time threshold value or not to obtain a first judgment result; when the first judgment result is yes, triggering and executing a model checking operation;
when the first judgment result is negative, judging whether the difference value meets a convergence condition or not, and obtaining a second judgment result;
when the second judgment result is yes, triggering and executing a model checking operation; when the second judging result is negative, updating the weight value of the prediction module by utilizing the difference value, and triggering to input the sample image information into a ship image detection model;
checking the ship image detection model by using the fourth standard image set; inputting the fourth standard image set into a ship image detection model to obtain a detection result; counting the accuracy of the detection result, and judging that the ship image detection model passes the verification if the accuracy of the detection result exceeds a set threshold; and if the detection result accuracy rate does not exceed a set threshold value, triggering and executing the selection of sample image information from the second standard image set.
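The training control flow above can be sketched as follows; every callable is a placeholder for the corresponding patent module, and the model-checking operation is reduced to a single `validate` call rather than the full fourth-standard-image-set loop:

```python
def train(model_step, samples, loss_fn, update_fn, max_iters, eps, validate):
    """Iterate: forward pass, loss, count accumulation, then the two
    judgments -- iteration budget exhausted (first judgment) or loss
    converged (second judgment) -- either of which triggers the
    model-checking operation; otherwise update the weights and
    continue with the next sample."""
    t = 0                               # initialized training count
    while True:
        for x, label in samples:
            pred = model_step(x)        # training detection result
            diff = loss_fn(pred, label)
            t += 1                      # accumulate training count
            if t >= max_iters:          # first judgment result: yes
                return validate()
            if diff < eps:              # second judgment: convergence met
                return validate()
            update_fn(diff)             # adjust prediction-module weights
```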
The updating the weight value of the prediction module by using the difference value comprises the following steps:
v ← α·v − η·∂l(x_i, θ, p_i)/∂θ,
θ ← θ + v,

wherein x_i represents the sample image information of the i-th sample image of the second standard image set, θ represents the weight value of the prediction module, p_i represents the ship tag information, l(x_i, θ, p_i) represents the difference value corresponding to the sample image information and the training detection result, ∂/∂θ represents the partial derivative with respect to the independent variable θ, η represents the multiplicative update correction factor, α represents the multiplicative retention factor, and v represents the update variable. The function l(·, ·, ·) may be implemented using a loss function.
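Under the conventional momentum-update reading of these factors (α as retention/momentum, η as the update correction/learning rate), the rule transcribes directly into code; the gradient value is assumed to be supplied by the caller:

```python
def momentum_update(theta, v, grad, eta, alpha):
    """One weight update of the prediction module:
    v <- alpha*v - eta*grad,  theta <- theta + v."""
    v_new = alpha * v - eta * grad   # retain part of the old update, correct by the gradient
    theta_new = theta + v_new        # apply the update variable to the weight
    return theta_new, v_new
```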
The training ship image detection model comprises:
s301, carrying out convolution pooling and coding processing on the spatial optical image information of the second standard image set by utilizing the three feature extraction layers with different dimensions to respectively obtain a first feature map, a second feature map and a third feature map;
s302, channel attention extraction is carried out on the spatial optical image information of the second standard image set by using an attention mechanism layer, so as to obtain channel attention characteristics;
s303, carrying out correlation extraction on the channel attention feature, the first feature map, the second feature map and the third feature map to obtain a correlation feature image; outputting the correlation characteristic image to a prediction module;
S304, fusing the up-sampled first feature image and the up-sampled second feature image to obtain a first fused feature image; fusing the second feature image after upsampling with the third feature image after upsampling to obtain a second fused feature image; inputting the first feature map, the second feature map, the third feature map, the first fusion feature image and the second fusion feature image to a feature fusion module;
s305, carrying out fusion processing on the feature images by utilizing the feature fusion module to obtain fusion features; the feature images comprise a first feature image, a second feature image, a third feature image, a first fusion feature image and a second fusion feature image;
s306, classifying and regressing the fusion characteristic and the correlation characteristic image by using a prediction module to obtain the category information and the position information of the ship;
s307, performing distance difference calculation processing on the category information and the position information of the ship and the ship label information of the second standard image set by using a feedback module to obtain a difference value, and generating a weight adjustment value according to the difference value;
s308, updating the weight value of the prediction module by using the weight adjustment value;
S309, carrying out statistics update on the training times to obtain accumulated training times; judging whether the accumulated training times exceeds a set time threshold value or not to obtain a training time judgment result;
s310, if the training frequency judgment result is that the training frequency exceeds the set frequency threshold, processing the fourth standard image set by using a ship image detection model to obtain the ship category information and position information detection result;
if the training frequency judgment result is that the set frequency threshold is not exceeded, returning to the step S301;
s311, counting the accuracy of the detection result according to the ship label information of the fourth standard image set; returning to the step S301 when the accuracy rate does not exceed the set accuracy rate threshold value; and when the accuracy exceeds a set accuracy threshold, training of the ship image detection model is completed.
The S303 includes:
the calculation expression of the correlation extraction is as follows:

F′ = M_C(F) ⊗ F,  F″ = M_S(F′) ⊗ F′

wherein F represents a feature map set comprising the first feature map, the second feature map and the third feature map, M_C(·) represents the channel attention extraction function, F′ represents the channel attention feature, M_S(·) represents the spatial attention mapping function, ⊗ represents element-wise multiplication, and F″ represents the correlation feature image.
The attention mechanism extracts correlations along the channel and spatial dimensions through two sub-modules: the channel attention module (CAM) and the spatial attention module (SAM).
Given an input feature map F, the one-dimensional channel attention map is M_C(F) and the two-dimensional spatial attention map is M_S(F′). Channel attention focuses on which channels carry the important features; after the channel attention is output, it is fed into the spatial attention module, which attends to the features of interest in space. The processing flow of the whole CBAM is: F′ = M_C(F) ⊗ F, then F″ = M_S(F′) ⊗ F′.
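A minimal NumPy sketch of this CBAM-style flow; the shared MLP of CAM and the 7×7 convolution of SAM are replaced here by simple sums and a sigmoid, so this mirrors only the data flow, not the patent's learned layers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam(F):
    """CBAM-style pass over a feature map F of shape (C, H, W):
    channel attention M_C from global average/max pooling, then
    spatial attention M_S from channel-wise average/max maps."""
    # Channel attention: one weight per channel
    avg_c = F.mean(axis=(1, 2))
    max_c = F.max(axis=(1, 2))
    Mc = sigmoid(avg_c + max_c)            # shared MLP omitted (assumption)
    F1 = Mc[:, None, None] * F             # F' = M_C(F) ⊗ F
    # Spatial attention: one weight per pixel
    avg_s = F1.mean(axis=0)
    max_s = F1.max(axis=0)
    Ms = sigmoid(avg_s + max_s)            # 7x7 convolution omitted (assumption)
    F2 = Ms[None, :, :] * F1               # F'' = M_S(F') ⊗ F'
    return F2
```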
the processing the standard space optical image information by using the ship image detection model to obtain a ship detection result comprises the following steps:
processing the standard spatial optical image information by using the steps S301 to S306; and taking the category information and the position information of the ship obtained by the prediction module of the ship image detection model as a ship detection result.
The embodiment discloses a ship detection device based on space optical image, the device includes:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the ship detection method based on the space optical image described above.
The embodiment discloses a computer storage medium, which stores computer instructions for executing the ship detection method based on the space optical image when the computer instructions are called.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (5)

1. The ship detection method based on the space optical image is characterized by comprising the following steps of:
s1, acquiring optical image information of a space to be detected; the space optical images to be detected in the space optical image information to be detected are ship space optical images in the same region and at different times;
s2, preprocessing the spatial optical image information to be detected to obtain standard spatial optical image information;
s3, building and training a ship image detection model;
s4, processing the standard space optical image information by using the ship image detection model to obtain a ship detection result; the ship detection result represents ship existence information of the space optical image information to be detected in the same area and at different times;
The preprocessing the spatial optical image information to be detected to obtain standard spatial optical image information comprises the following steps:
s21, carrying out linear quantization processing on the optical image information of the space to be detected to obtain quantized image data information;
s22, carrying out linear stretching treatment on the gray value information of the quantized image data information to obtain gray stretched image information;
s23, carrying out ship labeling and normalization processing on the gray scale stretched image information to obtain labeled picture information; the labeling picture information comprises an optical image and corresponding label information;
s24, combining the gray stretching image information and the labeling picture information to obtain a source picture set;
s25, carrying out proportion division processing on the source picture set to obtain a first standard image set and a second standard image set;
s26, performing overlapping cutting operation on the first standard image set to obtain a second sub-optical image set;
s27, carrying out ship screening treatment on the second sub-optical image set to obtain all sub-images containing ships as a third standard image set;
s28, carrying out data enhancement processing on the third standard image set to obtain a fourth standard image set;
S29, carrying out fusion processing on the second standard image set and the fourth standard image set to obtain standard space optical image information; the standard space optical image information comprises a space optical image and corresponding ship label information; the ship tag information comprises ship category information and position information;
the ship image detection model comprises a feature coding module, a feature fusion module, a prediction module and a feedback module;
the feature coding module comprises three feature extraction layers with different dimensions, an attention mechanism layer, a correlation extraction module and two up-sampling modules;
the feature coding module is used for carrying out feature coding on the standard space optical image information to obtain a feature image; the first output end of the feature encoding module is connected with the first input end of the feature fusion module; the second output end of the feature coding module is connected with the first input end of the prediction module;
the feature fusion module is used for carrying out fusion processing on the feature images to obtain fusion features; the first output end of the characteristic fusion module is connected with the second input end of the prediction module;
The prediction module is used for carrying out regression and classification processing on the fusion characteristics and the characteristic images to obtain category information and position information of the ship; the first output end of the prediction module is connected with the first input end of the feedback module;
the feedback module is used for carrying out distance difference calculation processing on the category information and the position information of the ship and the ship label information to obtain difference information; the first output end of the feedback module is connected with the third input end of the prediction module;
the processing the standard space optical image information by using the ship image detection model to obtain a ship detection result comprises the following steps:
s401, respectively carrying out convolution pooling and coding processing on the standard space optical image information by utilizing the three feature extraction layers with different dimensions to respectively obtain a first feature map, a second feature map and a third feature map;
s402, extracting channel attention from the first feature map, the second feature map and the third feature map by using the attention mechanism layer to obtain channel attention features; carrying out correlation extraction on the channel attention features by utilizing the correlation extraction module to obtain correlation feature images; outputting the correlation characteristic image to a prediction module;
The computational expressions of the channel attention extraction and the correlation extraction are as follows:

F′ = M_C(F) ⊗ F,  F″ = M_S(F′) ⊗ F′

wherein F represents a feature map set comprising the first feature map, the second feature map and the third feature map, M_C(·) represents the channel attention extraction function, F′ represents the channel attention feature, M_S(·) represents the spatial attention mapping function, ⊗ represents element-wise multiplication, and F″ represents the correlation feature image;
s403, fusing the up-sampled first feature image and the up-sampled second feature image to obtain a first fused feature image; fusing the second feature image after upsampling with the third feature image after upsampling to obtain a second fused feature image; inputting the first feature map, the second feature map, the third feature map, the first fusion feature image and the second fusion feature image to a feature fusion module;
s404, carrying out fusion processing on the feature images by utilizing the feature fusion module to obtain fusion features; the feature images comprise a first feature image, a second feature image, a third feature image, a first fusion feature image and a second fusion feature image;
s405, classifying and regressing the fusion characteristic and the correlation characteristic image by using a prediction module to obtain the category information and the position information of the ship; the category information and the position information of the ship form a ship detection result;
The step of performing data enhancement processing on the third standard image set to obtain a fourth standard image set includes:
s281, calculating the number of ships contained in each space optical image in the third standard image set;
s282, carrying out statistics processing on the number of ships in all the space optical images to obtain a value distribution range of the number of ships;
s283, dividing the value distribution range into a first value interval, a second value interval and a third value interval according to the sequence from low value to high value by using the set first and second boundary values as boundary values;
s284, judging a value interval to which the number of ships contained in each spatial optical image in the third standard image set belongs;
s285, if the belonging value interval is the first value interval, carrying out random arrangement processing on the corresponding space optical image to obtain a first enhanced space optical image;
if the belonging value interval is a second value interval, performing random scaling processing on the corresponding spatial optical image to obtain a second enhanced spatial optical image;
If the value interval to which the number belongs is the third value interval, performing random cropping processing on the corresponding spatial optical image to obtain a third enhanced spatial optical image;
and S286, summarizing the first enhanced spatial optical image, the second enhanced spatial optical image and the third enhanced spatial optical image to obtain a fourth standard image set.
2. The ship detection method based on space optical images according to claim 1, wherein the performing the overlap clipping operation on the first standard image set to obtain a second sub-optical image set includes:
s261, carrying out partial overlapping cutting on each spatial optical image in the first standard image set by using a sliding window from a set direction to obtain a first sub-optical image set; the first sub-optical image set comprises a plurality of first sub-optical images; adjacent two sub-optical images in the first sub-optical image set contain overlapping areas with preset proportions;
s262, calculating the intersection ratio of the cut ship area of each sub-optical image in the first sub-optical image set, judging whether the intersection ratio is larger than a set threshold value or not, and obtaining a threshold value judging result; the intersection ratio of the cut ship region is the ratio of the residual area of the cut ship region in the sub-optical image to the complete area of the cut ship region;
If the threshold value judging result is larger than a set threshold value, reserving the sub-optical images in the first sub-optical image set, and summarizing all reserved sub-optical images to obtain a second sub-optical image set;
and if the threshold value judging result is smaller than a set threshold value, deleting the sub-optical image from the first sub-optical image set.
3. The method for spatial optical image based vessel inspection according to claim 1, wherein the training of the vessel image inspection model comprises:
selecting sample image information from the second standard image set;
initializing training times;
inputting the sample image information into a ship image detection model, and taking the obtained ship category information and position information as training detection results;
calculating the sample image information and the training detection result by using a loss function to obtain a difference value;
performing accumulation operation on the training time values;
judging whether the training time value exceeds a training time threshold value or not to obtain a first judgment result; when the first judgment result is yes, triggering and executing a model checking operation;
when the first judgment result is negative, judging whether the difference value meets a convergence condition or not, and obtaining a second judgment result;
When the second judgment result is yes, triggering and executing a model checking operation; when the second judging result is negative, updating the weight value of the prediction module by utilizing the difference value, and triggering to input the sample image information into a ship image detection model;
checking the ship image detection model by using the fourth standard image set; inputting the fourth standard image set into a ship image detection model to obtain a detection result; counting the accuracy of the detection result, and judging that the ship image detection model passes the verification if the accuracy of the detection result exceeds a set threshold; and if the detection result accuracy rate does not exceed a set threshold value, triggering and executing the selection of sample image information from the second standard image set.
4. A ship detection device based on spatial optical images, the device comprising:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the aerial optical image based vessel detection method of any of claims 1-3.
5. A computer storage medium storing computer instructions for performing the spatial optical image based vessel inspection method according to any one of claims 1 to 3 when called.
CN202310523354.3A 2023-05-10 2023-05-10 Ship detection method and device based on space optical image Active CN116503737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310523354.3A CN116503737B (en) 2023-05-10 2023-05-10 Ship detection method and device based on space optical image

Publications (2)

Publication Number Publication Date
CN116503737A CN116503737A (en) 2023-07-28
CN116503737B true CN116503737B (en) 2024-01-09

Family

ID=87330112


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886106A (en) * 2019-01-15 2019-06-14 浙江大学 A kind of remote sensing images building change detecting method based on deep learning
CN111753912A (en) * 2020-06-28 2020-10-09 中国矿业大学 Coal slime flotation clean coal ash content prediction method based on deep learning
CN111815579A (en) * 2020-06-24 2020-10-23 浙江大华技术股份有限公司 Image change detection method and device and computer readable storage medium
CN112766087A (en) * 2021-01-04 2021-05-07 武汉大学 Optical remote sensing image ship detection method based on knowledge distillation
CN113469088A (en) * 2021-07-08 2021-10-01 西安电子科技大学 SAR image ship target detection method and system in passive interference scene
CN113673586A (en) * 2021-08-10 2021-11-19 北京航天创智科技有限公司 Mariculture area classification method fusing multi-source high-resolution satellite remote sensing images
CN114462628A (en) * 2020-11-09 2022-05-10 华为技术有限公司 Data enhancement method, device, computing equipment and computer readable storage medium
CN114723756A (en) * 2022-06-09 2022-07-08 北京理工大学 Low time-sequence remote sensing target detection method and device based on double monitoring networks
CN115035599A (en) * 2022-06-08 2022-09-09 中国兵器工业计算机应用技术研究所 Armed personnel identification method and armed personnel identification system integrating equipment and behavior characteristics
WO2022188379A1 (en) * 2021-03-12 2022-09-15 国网智能科技股份有限公司 Artificial intelligence system and method serving electric power robot
CN115471746A (en) * 2022-08-26 2022-12-13 中船航海科技有限责任公司 Ship target identification detection method based on deep learning
CN115497005A (en) * 2022-09-05 2022-12-20 重庆邮电大学 YOLOV4 remote sensing target detection method integrating feature transfer and attention mechanism
CN115546650A (en) * 2022-10-29 2022-12-30 西安电子科技大学 Method for detecting ships in remote sensing image based on YOLO-V network
WO2023025288A1 (en) * 2021-08-27 2023-03-02 北京灵汐科技有限公司 Data processing method and apparatus, electronic device, and computer readable medium
CN115995041A (en) * 2022-12-30 2023-04-21 清华大学深圳国际研究生院 Attention mechanism-based SAR image multi-scale ship target detection method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7929808B2 (en) * 2001-10-30 2011-04-19 Hewlett-Packard Development Company, L.P. Systems and methods for generating digital images having image meta-data combined with the image data
CN110555811A (en) * 2019-07-02 2019-12-10 五邑大学 SAR image data enhancement method and device and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A remote sensing image target recognition method based on improved Mask-RCNN model;Yu Huiming等;2021 IEEE 2nd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE);436-439 *
Remote sensing image target detection method based on data augmentation strategies;He Jiale;Wanfang Dissertations;1-75 *

Also Published As

Publication number Publication date
CN116503737A (en) 2023-07-28

Similar Documents

Publication Publication Date Title
CN112966684B (en) Cooperative learning character recognition method under attention mechanism
CN108492271B (en) Automatic image enhancement system and method fusing multi-scale information
CN110287960A (en) The detection recognition method of curve text in natural scene image
CN111625608B (en) Method and system for generating electronic map according to remote sensing image based on GAN model
CN108389220B (en) Remote sensing video image motion target real-time intelligent cognitive method and its device
CN112347859A (en) Optical remote sensing image saliency target detection method
CN111476159A (en) Method and device for training and detecting detection model based on double-angle regression
CN115019182B (en) Method, system, equipment and storage medium for identifying fine granularity of remote sensing image target
CN115147731A (en) SAR image target detection method based on full-space coding attention module
CN116645592B (en) Crack detection method based on image processing and storage medium
CN114241003B (en) All-weather lightweight high-real-time sea surface ship detection and tracking method
CN116311254B (en) Image target detection method, system and equipment under severe weather condition
CN110084284A (en) Target detection and secondary classification algorithm and device based on region convolutional neural networks
CN113378897A (en) Neural network-based remote sensing image classification method, computing device and storage medium
CN113743417A (en) Semantic segmentation method and semantic segmentation device
CN115861756A (en) Earth background small target identification method based on cascade combination network
Fan et al. A novel sonar target detection and classification algorithm
Dumka et al. Advanced digital image processing and its applications in Big Data
CN117351363A (en) Remote sensing image building extraction method based on transducer
CN116977632A (en) Landslide extraction method for improving U-Net network based on asymmetric convolution
CN112633123A (en) Heterogeneous remote sensing image change detection method and device based on deep learning
CN116503737B (en) Ship detection method and device based on space optical image
CN115861922A (en) Sparse smoke and fire detection method and device, computer equipment and storage medium
CN115546640A (en) Cloud detection method and device for remote sensing image, electronic equipment and storage medium
CN115035429A (en) Aerial photography target detection method based on composite backbone network and multiple measuring heads

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant