CN114463628A - Deep learning remote sensing image ship target identification method based on threshold value constraint - Google Patents
- Publication number: CN114463628A
- Application number: CN202111676459.XA
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- sensing image
- ship
- deep learning
- threshold
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a deep learning remote sensing image ship target identification method based on threshold constraint, which comprises the following steps: (1) performing threshold segmentation on the remote sensing image with the OTSU threshold segmentation method to achieve sea-land separation; (2) extracting shape features from the remote sensing image; (3) performing multi-scale connection fusion on the bottom layers of the pyramid network structure on the basis of the deep learning YOLOv5 algorithm; (4) designing anchor boxes according to the shape characteristics of ship targets in remote sensing images; (5) introducing a focal classification loss as the loss function of the YOLOv5 algorithm for regression convergence; (6) training on ship samples to obtain a model based on the improved YOLOv5 algorithm; (7) identifying ship targets in remote sensing images with the trained model. The method optimizes the anchor boxes and the loss function according to the shape characteristics of ships, improving both the generalization performance of the model and the ship identification accuracy.
Description
Technical Field
The invention belongs to the technical field of target identification, and particularly relates to a deep learning remote sensing image ship target identification method based on threshold value constraint.
Background
With the rapid development of remote sensing technology, using deep learning to identify targets quickly and accurately from satellite images can, on one hand, take over repetitive and tedious work and free people from the heavy burden of remote sensing image interpretation; on the other hand, an end-to-end model structure both raises the processing rate of remote sensing data and achieves higher identification accuracy. Introducing deep learning into the target detection task removes the constraint of traditional detection algorithms that detection features must be designed manually: the model network learns the relevant features autonomously, improving the efficiency and reliability of the algorithm. Fast and accurate detection of ship targets based on a deep learning algorithm automates the detection work, greatly reduces the consumption of human resources, and improves detection speed and accuracy under large data volumes.
Ship detection is an important application in the remote sensing field, with wide application value in both the military and civil domains. In the military domain, ship detection can be applied to monitoring illegal vessels (for example, terrorist attacks and illegal border crossings), as well as to battle-damage assessment, battlefield sea-surface monitoring, enemy ship reconnaissance, and the like; in the civil domain, ship target detection technology can be applied to monitoring and managing marine fisheries, controlling port ship traffic, monitoring marine pollution, and so on. These practical applications require ship detection systems with a high degree of automation and strong timeliness; traditional methods are limited by prior knowledge and struggle to meet the required efficiency and accuracy, so applying deep learning technology to remote sensing ship identification is of great significance.
Target detection algorithms in deep learning can be roughly divided into two types: two-stage algorithms based on candidate regions, represented by Fast R-CNN, R-FCN, and the like; and regression-based single-stage algorithms, represented by the YOLO and SSD algorithms. YOLO has received much attention for ship identification thanks to its lightweight model strategy, faster identification speed, and excellent performance on small-target detection. To date, the YOLO family has iterated to YOLOv5; however, since it remains a regression-based single-stage algorithm by nature, recognition accuracy is sacrificed in favor of efficiency.
Disclosure of Invention
The invention aims to provide a deep learning remote sensing image ship target identification method based on threshold value constraint, which improves the identification efficiency of a ship target on the premise of ensuring the identification rate.
The technical scheme adopted by the invention for realizing the purpose is as follows:
A deep learning remote sensing image ship target identification method based on threshold constraint comprises the following steps:
Step 1: performing threshold segmentation on the remote sensing image with the OTSU threshold segmentation method to achieve sea-land separation;
Step 2: extracting shape features from the sea-area remote sensing image after sea-land separation;
Step 3: based on the deep learning YOLOv5 algorithm, combining the feature pyramid structure and performing multi-scale connection fusion on the bottom layers of the pyramid network structure;
Step 4: designing anchor boxes according to the shape characteristics of ship targets in remote sensing images;
Step 5: introducing a focal classification loss as the loss function of the YOLOv5 algorithm for regression convergence;
Step 6: inputting pre-labeled ship samples for training based on the improved YOLOv5 algorithm to obtain a trained deep learning model;
Step 7: performing ship target identification on remote sensing images with the trained model.
Further, the step 1 specifically includes: the threshold-based segmentation method determines a threshold from the gray-level difference between ocean and land, and then separates ocean from land using the selected gray value as the threshold:
the OTSU threshold segmentation method uses the between-class variance as its criterion, computed as:
σ²(T) = W_a(μ_a - μ)² + W_b(μ_b - μ)²
where σ² denotes the between-class variance, W_a is the ratio of the target-region area to the total image area, μ_a is the mean of all target-region pixels, W_b is the ratio of the background-region area to the total image area, μ_b is the mean of the background-region pixels, and μ is the gray-level mean of all pixels in the full image; T is swept from 0 to 255, and the T that maximizes σ² is the optimal threshold for image segmentation.
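The exhaustive sweep over T described above can be sketched in Python with NumPy. This is an illustration only, not code from the patent, and `otsu_threshold` is a hypothetical helper name:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the T in [0, 255] maximizing the between-class variance
    sigma^2(T) = Wa*(mu_a - mu)^2 + Wb*(mu_b - mu)^2."""
    gray = np.asarray(gray, dtype=np.uint8).ravel()
    hist = np.bincount(gray, minlength=256).astype(float)
    prob = hist / hist.sum()            # gray-level probabilities
    mu = np.dot(np.arange(256), prob)   # global gray-level mean
    best_t, best_var = 0, -1.0
    for t in range(256):
        wa = prob[:t + 1].sum()         # weight of the "target" class
        wb = 1.0 - wa                   # weight of the "background" class
        if wa == 0.0 or wb == 0.0:
            continue                    # variance undefined for empty class
        mu_a = np.dot(np.arange(t + 1), prob[:t + 1]) / wa
        mu_b = np.dot(np.arange(t + 1, 256), prob[t + 1:]) / wb
        var = wa * (mu_a - mu) ** 2 + wb * (mu_b - mu) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

A bimodal sea/land histogram (dark sea, bright land) yields a threshold between the two modes, after which pixels below the threshold can be kept as the sea area.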
Further, the step 2 specifically includes: a ship is characteristically long and narrow, and under the multi-scale influence of remote sensing imagery the elements of the target's outer contour can still be described clearly. Three features are selected, namely compactness, aspect ratio, and rectangularity, computed as follows:
(1) compactness: C = p²/a;
(2) aspect ratio: R = l/w;
(3) rectangularity: E = a/S;
where a is the area of the target region, p is the perimeter of the target region, and l, w, and S are the length, width, and area, respectively, of the rectangle circumscribing the target region.
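The three descriptors follow directly from the measured quantities. A minimal Python sketch (hypothetical helper name; the p²/a convention for compactness is an assumption, as the patent's original formula images are not reproduced here):

```python
def shape_features(a, p, l, w):
    """Shape descriptors for a candidate region.
    a: region area, p: region perimeter,
    l, w: length and width of the circumscribed rectangle (l >= w).
    Compactness uses the common p^2 / a convention (an assumption)."""
    S = l * w                  # area of the circumscribed rectangle
    compactness = p * p / a    # small for disc-like, large for elongated shapes
    aspect_ratio = l / w       # ships are elongated, so typically well above 1
    rectangularity = a / S     # how fully the region fills its rectangle
    return compactness, aspect_ratio, rectangularity
```

For a region that exactly fills a 20 x 5 rectangle (a = 100, p = 50), this gives compactness 25, aspect ratio 4, and rectangularity 1.0, a profile consistent with a ship-like strip.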
Further, the step 3 specifically includes: laterally connecting the low-level feature layers to enhance their features, and removing the top-level connection, on the premise of preserving computation cost and speed.
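The lateral fusion of a coarse pyramid level into a finer one can be sketched in simplified NumPy form as nearest-neighbor upsampling followed by element-wise addition. This is an FPN-style illustration under assumed shapes, not the patent's exact network code:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbor 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse_levels(fine, coarse):
    """Lateral connection: enhance the low-level (fine) map by adding
    the 2x-upsampled higher-level (coarse) map."""
    return fine + upsample2x(coarse)
```

In a real network the addition would typically follow a 1x1 convolution to match channel counts; that step is omitted here for brevity.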
Further, the step 4 specifically includes: the anchor-box scales are designed from the shape characteristics according to:
S_k = S_min + ((S_max - S_min)/(m - 1))(k - 1), k ∈ [1, m]
where m is the number of feature maps, S_k is the ratio of the anchor-box size to the picture size, and S_min and S_max are the minimum and maximum values of this ratio.
The width and height of an anchor box with aspect ratio a_r are computed as:
w_k = S_k·√a_r, h_k = S_k/√a_r
where S_k is given by the formula above. For the case of aspect ratio 1, an additional default box of scale S'_k = √(S_k·S_{k+1}) is added. The coordinate center of each default box is ((i + 0.5)/|f_k|, (j + 0.5)/|f_k|), where |f_k| is the size of the corresponding feature map and i, j ∈ [0, |f_k|).
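A minimal sketch of the scale, shape, and center computations above. The helper names and the default scale bounds are illustrative assumptions, not values stated in the patent:

```python
import math

def anchor_scales(m, s_min=0.2, s_max=0.9):
    """S_k = S_min + (S_max - S_min) * (k - 1) / (m - 1), for k = 1..m."""
    return [s_min + (s_max - s_min) * (k - 1) / (m - 1) for k in range(1, m + 1)]

def anchor_shapes(s_k, s_k1, ratios=(1.0, 2.0, 0.5)):
    """(width, height) pairs for one feature level.
    The extra box of scale sqrt(S_k * S_{k+1}) covers the aspect-ratio-1 case."""
    shapes = [(s_k * math.sqrt(r), s_k / math.sqrt(r)) for r in ratios]
    shapes.append((math.sqrt(s_k * s_k1),) * 2)
    return shapes

def anchor_centers(fk):
    """Centers ((i + 0.5)/|f_k|, (j + 0.5)/|f_k|) for i, j in [0, |f_k|)."""
    return [((i + 0.5) / fk, (j + 0.5) / fk) for i in range(fk) for j in range(fk)]
```

All quantities are ratios of the input image size, so multiplying by the image width and height yields pixel-space anchors.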
Further, the step 5 specifically includes: the focal classification loss is introduced as the loss function of the YOLOv5 algorithm for regression convergence, computed as:
L = L_loc(b*, b) + L_classfl(t*, t)
where L_loc(b*, b) is the localization loss, b is the predicted box value and b* is the ground-truth box value; L_classfl(t*, t) is the focal classification loss, computed as:
L_classfl = -α_t(1 - t)^γ log(t)
where for a positive anchor t is the predicted probability t_{i,c} of the correct foreground class, for a negative anchor t is the predicted background probability, α_t is the class-balancing weight, and γ is the focusing parameter.
Compared with the prior art, the invention has the advantages that:
compared with the traditional deep learning algorithm, aiming at the characteristics of a ship target, on one hand, the prior knowledge taking a ship background as a sea area is taken as constraint, sea and land separation is carried out before identification, the identification range is narrowed, and the false identification rate is reduced; on the other hand, by combining with the shape characteristics of the ship, the anchor frame is designed and the loss function is optimized, so that the recognition efficiency can be greatly improved on the premise of ensuring the recognition rate.
Drawings
Fig. 1 is a schematic flow diagram of the deep learning remote sensing image ship target identification method based on threshold constraint.
Detailed Description
With reference to fig. 1, a deep learning remote sensing image ship target identification method based on threshold constraint includes the following steps:
Step 1: performing threshold segmentation on the remote sensing image with the OTSU threshold segmentation method to achieve sea-land separation;
Step 2: extracting shape features from the sea-area remote sensing image after sea-land separation;
Step 3: based on the deep learning YOLOv5 algorithm, combining the feature pyramid structure and performing multi-scale connection fusion on the bottom layers of the pyramid network structure;
Step 4: designing anchor boxes according to the shape characteristics of ship targets in remote sensing images;
Step 5: introducing a focal classification loss as the loss function of the YOLOv5 algorithm for regression convergence;
Step 6: inputting pre-labeled ship samples for training based on the improved YOLOv5 algorithm to obtain a trained deep learning model;
Step 7: performing ship target identification on remote sensing images with the trained model.
Further, the step 1 specifically includes: the threshold-based segmentation method determines a threshold from the gray-level difference between ocean and land, and then separates ocean from land using the selected gray value as the threshold:
the OTSU threshold segmentation method uses the between-class variance as its criterion, computed as:
σ²(T) = W_a(μ_a - μ)² + W_b(μ_b - μ)²
where σ² denotes the between-class variance, W_a is the ratio of the target-region area to the total image area, μ_a is the mean of all target-region pixels, W_b is the ratio of the background-region area to the total image area, μ_b is the mean of the background-region pixels, and μ is the gray-level mean of all pixels in the full image; T is swept from 0 to 255, and the T that maximizes σ² is the optimal threshold for image segmentation.
Further, the step 2 specifically includes: a ship is characteristically long and narrow, and under the multi-scale influence of remote sensing imagery the elements of the target's outer contour can still be described clearly. Three features are selected, namely compactness, aspect ratio, and rectangularity, computed as follows:
(1) compactness: C = p²/a;
(2) aspect ratio: R = l/w;
(3) rectangularity: E = a/S;
where a is the area of the target region, p is the perimeter of the target region, and l, w, and S are the length, width, and area, respectively, of the rectangle circumscribing the target region.
Further, the step 3 specifically includes: laterally connecting the low-level feature layers to enhance their features, and removing the top-level connection, on the premise of preserving computation cost and speed.
Further, the step 4 specifically includes: the anchor-box scales are designed from the shape characteristics according to:
S_k = S_min + ((S_max - S_min)/(m - 1))(k - 1), k ∈ [1, m]
where m is the number of feature maps, S_k is the ratio of the anchor-box size to the picture size, and S_min and S_max are the minimum and maximum values of this ratio.
The width and height of an anchor box with aspect ratio a_r are computed as:
w_k = S_k·√a_r, h_k = S_k/√a_r
where S_k is given by the formula above. For the case of aspect ratio 1, an additional default box of scale S'_k = √(S_k·S_{k+1}) is added. The coordinate center of each default box is ((i + 0.5)/|f_k|, (j + 0.5)/|f_k|), where |f_k| is the size of the corresponding feature map and i, j ∈ [0, |f_k|).
Further, the step 5 specifically includes: the focal classification loss is introduced as the loss function of the YOLOv5 algorithm for regression convergence, computed as:
L = L_loc(b*, b) + L_classfl(t*, t)
where L_loc(b*, b) is the localization loss, b is the predicted box value and b* is the ground-truth box value; L_classfl(t*, t) is the focal classification loss, computed as:
L_classfl = -α_t(1 - t)^γ log(t)
where for a positive anchor t is the predicted probability t_{i,c} of the correct foreground class, for a negative anchor t is the predicted background probability, α_t is the class-balancing weight, and γ is the focusing parameter.
Examples
In this implementation, the DOTA data set is selected as the data sample for verifying this patent, and suitable remote sensing images are selected from it for the implementation process. After image preprocessing, labeling, and other operations, 6000 images are selected in total: 4800 training images (80%), 900 validation images (15%), and 300 test images (5%), with mean Average Precision (mAP) as the evaluation index. The specific processing steps are as follows:
(1) Sea-land separation of the data set. Threshold segmentation is applied to the remote sensing images with the OTSU threshold segmentation method to obtain sea-area images;
(2) Shape feature extraction and fusion. The three shape features of compactness, aspect ratio, and rectangularity are extracted from the data set, and multi-scale connection feature fusion is performed on the bottom layers of the pyramid network structure in combination with the images;
(3) Anchor-box design. Ship-target anchor boxes of preset sizes and numbers are generated for each pixel of the feature maps at different scales according to the formula; based on the shape characteristics of ships, the anchor-box aspect ratios are set to {1, 2, 1/2}, and the maximum and minimum anchor-box sizes are determined from the median image area occupied by ship target regions;
(4) Data-set input and deep learning model training. Regression is performed on the labeled data set according to the loss function to obtain the model parameters and produce the trained model. The maximum number of training iterations is set to 50000, the initial learning rate to 0.0015, the learning-rate decay interval to 5000 iterations, the learning-rate adjustment parameter to 0.12, and the regularization weight-decay parameter to 0.0005;
(5) Data input and ship identification. For each generated anchor box, the total loss value is computed from the focal classification loss and the regression loss to obtain the anchor-box position and confidence;
(6) Statistical testing, verifying the accuracy and effect on the samples. Table 1 was generated from the results.
TABLE 1
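Under one plausible reading of the training hyperparameters in step (4), the learning rate is scaled by the adjustment parameter 0.12 every 5000 iterations; this interpretation is an assumption, as the patent does not state the schedule form explicitly. A sketch:

```python
def learning_rate(step, base_lr=0.0015, decay_every=5000, factor=0.12):
    """Step-decay schedule built from the stated hyperparameters,
    under the ASSUMPTION that the rate is multiplied by `factor`
    once every `decay_every` training iterations."""
    return base_lr * factor ** (step // decay_every)
```

For example, the rate stays at 0.0015 through iteration 4999, drops to 0.0015 * 0.12 at iteration 5000, and so on for each subsequent interval.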
For the shortcomings of the model on ship targets, on one hand the OTSU threshold segmentation constrains the recognition scene using the prior knowledge that ships lie in sea areas, reducing the false-recognition rate; on the other hand, the anchor boxes and the loss function are optimized according to the shape characteristics of ships, improving the generalization performance of the model and the ship identification accuracy.
Claims (6)
1. A deep learning remote sensing image ship target identification method based on threshold constraint, characterized by comprising the following steps:
Step 1: performing threshold segmentation on the remote sensing image with the OTSU threshold segmentation method to achieve sea-land separation;
Step 2: extracting shape features from the sea-area remote sensing image after sea-land separation;
Step 3: based on the deep learning YOLOv5 algorithm, combining the feature pyramid structure and performing multi-scale connection fusion on the bottom layers of the pyramid network structure;
Step 4: designing anchor boxes according to the shape characteristics of ship targets in remote sensing images;
Step 5: introducing a focal classification loss as the loss function of the YOLOv5 algorithm for regression convergence;
Step 6: inputting pre-labeled ship samples for training based on the improved YOLOv5 algorithm to obtain a trained deep learning model;
Step 7: performing ship target identification on remote sensing images with the trained model.
2. The threshold constraint-based deep learning remote sensing image ship target identification method according to claim 1,
the step 1 specifically comprises: determining a threshold from the gray-level difference between ocean and land with the threshold-based segmentation method, and then separating ocean from land using the selected gray value as the threshold:
the OTSU threshold segmentation method uses the between-class variance as its criterion, computed as:
σ²(T) = W_a(μ_a - μ)² + W_b(μ_b - μ)²
where σ² denotes the between-class variance, W_a is the ratio of the target-region area to the total image area, μ_a is the mean of all target-region pixels, W_b is the ratio of the background-region area to the total image area, μ_b is the mean of the background-region pixels, and μ is the gray-level mean of all pixels in the full image; T is swept from 0 to 255, and the T that maximizes σ² is the optimal threshold for image segmentation.
3. The threshold constraint-based deep learning remote sensing image ship target identification method according to claim 2,
the step 2 specifically comprises: a ship is characteristically long and narrow, and under the multi-scale influence of remote sensing imagery the elements of the target's outer contour can still be described clearly; three features are selected, namely compactness, aspect ratio, and rectangularity, computed as follows:
(1) compactness: C = p²/a;
(2) aspect ratio: R = l/w;
(3) rectangularity: E = a/S;
where a is the area of the target region, p is the perimeter of the target region, and l, w, and S are the length, width, and area, respectively, of the rectangle circumscribing the target region.
4. The threshold constraint-based deep learning remote sensing image ship target identification method according to claim 3,
the step 3 specifically comprises: laterally connecting the low-level feature layers to enhance their features, and removing the top-level connection, on the premise of preserving computation cost and speed.
5. The threshold constraint-based deep learning remote sensing image ship target identification method according to claim 4,
the step 4 specifically comprises: designing the anchor-box scales from the shape characteristics according to:
S_k = S_min + ((S_max - S_min)/(m - 1))(k - 1), k ∈ [1, m]
where m is the number of feature maps, S_k is the ratio of the anchor-box size to the picture size, and S_min and S_max are the minimum and maximum values of this ratio;
the width and height of an anchor box with aspect ratio a_r are computed as:
w_k = S_k·√a_r, h_k = S_k/√a_r.
6. The threshold constraint-based deep learning remote sensing image ship target identification method according to claim 5,
the step 5 specifically comprises: introducing the focal classification loss as the loss function of the YOLOv5 algorithm for regression convergence, computed as:
L = L_loc(b*, b) + L_classfl(t*, t)
where L_loc(b*, b) is the localization loss, b is the predicted box value and b* is the ground-truth box value; L_classfl(t*, t) is the focal classification loss, computed as:
L_classfl = -α_t(1 - t)^γ log(t)
where for a positive anchor t is the predicted probability t_{i,c} of the correct foreground class, for a negative anchor t is the predicted background probability, α_t is the class-balancing weight, and γ is the focusing parameter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111676459.XA CN114463628A (en) | 2021-12-31 | 2021-12-31 | Deep learning remote sensing image ship target identification method based on threshold value constraint |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111676459.XA CN114463628A (en) | 2021-12-31 | 2021-12-31 | Deep learning remote sensing image ship target identification method based on threshold value constraint |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114463628A true CN114463628A (en) | 2022-05-10 |
Family
ID=81408393
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111676459.XA Pending CN114463628A (en) | 2021-12-31 | 2021-12-31 | Deep learning remote sensing image ship target identification method based on threshold value constraint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114463628A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115953410A (en) * | 2023-03-15 | 2023-04-11 | 安格利(成都)仪器设备有限公司 | Automatic corrosion pit detection method based on target detection unsupervised learning |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 