CN113239775A - Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network
- Publication number: CN113239775A (application CN202110502783.3A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G01C21/203 — Instruments for performing navigational calculations, specially adapted for sailing ships
- G01S15/93 — Sonar systems specially adapted for anti-collision purposes
- G06F18/253 — Pattern recognition; fusion techniques of extracted features
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G06T5/40 — Image enhancement or restoration using histogram techniques
- G06T7/13 — Image analysis; segmentation; edge detection
- G06F2218/04 — Pattern recognition for signal processing; preprocessing; denoising
- G06F2218/08 — Pattern recognition for signal processing; feature extraction
- G06T2207/10024 — Image acquisition modality: color image
- G06T2207/20032 — Median filtering
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30241 — Subject of image: trajectory
- Y02A90/10 — Information and communication technologies supporting adaptation to climate change, e.g. for weather forecasting
Abstract
The invention provides a method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network (HADCNN). The HADCNN model is built in a modular way that imitates the whole-to-local process of human vision and thereby realizes hierarchical attention, while a mixed grayscale preprocessing of the azimuth history map filters noise interference and enhances the tracks. In the HADCNN model, a track area detection module detects and extracts track areas in the whole azimuth history map, and an in-area track position detection and extraction module detects and extracts the tracks in the local images that contain those areas. The preprocessing enhances the tracks, suppresses noise, enlarges the inter-class feature differences, reduces the computational cost of the subsequent network, and improves detection and extraction efficiency.
Description
Technical Field
The invention relates to methods for detecting and extracting tracks in an azimuth history map, in particular to a method based on a hierarchical attention deep convolutional neural network.
Background
Detecting and extracting tracks from a passive sonar azimuth history map is an important means of judging ship tracks. Ship tracks in a complex azimuth history map are judged by preprocessing the map and then detecting and extracting the tracks. Common preprocessing methods that suppress background noise interference also damage part of the track information as the noise is suppressed, producing broken, discontinuous tracks. Preprocessing methods that enhance track-line features do strengthen weak tracks to some degree, but they also amplify noise interference and can over-enhance some images, causing degradation. These preprocessing methods can improve the display of the azimuth history map, but their parameters must be set manually; once the parameters are set improperly the processing quality suffers, which makes the algorithms harder to use in practice.
Traditional track detection and extraction methods mostly use thresholding or probabilistic tracking. Thresholding is simple to engineer, but it cannot detect and extract tracks effectively under strong background noise interference, and detection errors may occur when two tracks are too close or cross. Probabilistic tracking takes single-frame image data as input, judges the track positions in a region, searches for track points within a fixed range by exploiting the facts that a track cannot bend abruptly and that pixel values near a track change slowly, and connects those points to detect and extract the track. Probabilistic tracking, however, can mistake track-like noise interference for tracks. Single-frame image data often contains many interfering points, and without human judgment, tracking from the detected points easily produces false targets. Even if multi-frame data in the target domain are examined when associating and correcting broken tracks, some false tracks remain in the image and degrade the detection result. Moreover, traditional track detection and extraction needs manually set parameters, so its generalization is weak.
Current deep convolutional neural networks have great advantages in target detection and extraction, and are mainly applied to face recognition and to recognizing and extracting objects in road scenes. In those scenes the target occupies a large proportion of the image and has a clear outline, so a deep convolutional network can extract good results. In an azimuth history map, however, the track occupancy is extremely low, tracks have no obvious outline, the track signal-to-noise ratio is low, the noise interference takes varied forms, and tracks cross or mix with one another, so existing networks applied directly to track detection and extraction in azimuth history maps do not achieve an ideal effect. A deep convolutional neural network therefore needs to be designed specifically around the characteristics of tracks in the azimuth history map in order to detect and extract them.
Disclosure of Invention
The invention provides a Hierarchical Attention Deep Convolutional Neural Network (HADCNN) for the problem of detecting and extracting tracks in a passive sonar azimuth history map. Through a task-modular design, the network imitates the way biological vision recognizes an object from an overall overview down to local detail: a track area detection module and an in-area track position detection and extraction module realize hierarchical attention over image regions and address the weak positive-sample proportion of the azimuth history map; each module uses a deep convolutional neural network (DCNN) to progressively complete ship track area detection and extraction and in-area track position detection and extraction; within each module, hierarchical attention over features is realized by fusion training of the feature maps of shallow and deep convolutional layers, which increases detection and extraction accuracy. The hierarchical attention deep convolutional neural network offers a new approach to detecting and extracting tracks in the azimuth history map.
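As an illustration only, the two-stage inference flow might be sketched as follows (a minimal sketch assuming PyTorch; the sliding-window scan, window size h and ratio threshold p anticipate steps 8 and 12 of the method below, and f1, f2 stand for the two trained modules):

```python
import torch

def hadcnn_infer(img, f1, f2, h=64, p=0.1):
    """Hierarchical attention inference: f1 surveys the whole azimuth
    history map for track areas, f2 re-examines only the areas it flags."""
    y1 = f1(img)                                   # (1, 1, H, W) area map
    out = torch.zeros_like(y1)
    _, _, H, W = y1.shape
    for top in range(0, H - h + 1, h):
        for left in range(0, W - h + 1, h):
            win = y1[:, :, top:top + h, left:left + h]
            if win.mean() > p:                     # track-area ratio test
                patch = img[:, :, top:top + h, left:left + h]
                out[:, :, top:top + h, left:left + h] = f2(patch)
    return out
```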
The technical scheme of the invention is as follows:
the method for detecting and extracting the flight path in the azimuth history map based on the layered attention depth convolution neural network comprises the following steps:
step 1: acquiring N color azimuth process maps, wherein the length of each azimuth process map is H, the height of each azimuth process map is L, and the color depth of each azimuth process map is E; decomposing R, G, B each H multiplied by L azimuth process diagram C according to a color channel, and converting the H multiplied by L azimuth process diagram C into a gray scale diagram I; and obtaining a color histogram equalization result graph H by the color azimuth history graph C through a histogram equalization algorithmcolour;
Step 2: for the gray level image I of each color azimuth process image, a corresponding histogram equalization result image H is also obtainedgrayEdge image GsobelAnd mixed gray scale preprocessing result graph Gmix;
And step 3: constructing a depth convolution neural network model HADCNN for detecting and extracting the track of the azimuth history map;
the deep convolutional neural network model HADCNN; the track area detection module and the track position detection and extraction module in the track area are formed;
the track area detection module is composed of N1Layer winding layer, M1Layer pooling layer, M1A track area detection network formed by an upper sampling layer;
the track position detection and extraction module in the track area consists of N2Layer winding layer, M2Layer pooling layer, M2A track position detection and extraction network in a track area formed by an upper sampling layer;
and 4, step 4: constructing a HADCNN model training set, wherein the training set consists of a flight path region detection module sub-training set and a flight path position detection and extraction module sub-training set in a region;
wherein the track area detection module trains set X1Comprises a reaction product of Hcolour、Hgray、GsobelConstructed training image setClass mark map after binarization processing with track area class mark map0<i≤N;
Step (ii) of5: setting the structural parameters of the track area detection network: network layer number, neuron node number of each layer, learning rate and attenuation rate lambda1(ii) a Setting training parameters: total number of iterations T1Early stop step number s1Batch size m1(ii) a Initializing track area detection network model parameters W1(ii) a Training set X of track area detection module1Is divided into a plurality of groups m1Small batch data set B of individual training images1;
Step 6: selecting a small batch datasetNetwork module f for detecting track area1J is more than 0 and is less than or equal to m1At model parameter W1Calculating track area detection output Y under action1:And calculating the error loss L1Then updating the parameters of the track area detection network model;
and 7: repeating the step 6 when s continues1Second iteration, loss L1Not reduced, or when the number of iterations T satisfies T > T1Stopping iteration to obtain a trained track area detection module f1;
And 8: setting a track area ratio threshold value p, and detecting a module f according to the track area1Detection result Y of1Selecting all Y with track area ratio greater than p in h x h area1Partial area and recording all the central coordinates V of the partial areakAccording to VkCoordinate cutting Hcolour、Hgray、GmixObtaining the detection and extraction training image set of the track position in the regionDetection and extraction of track position in formation area2(ii) a At the same time according to VkObtaining the regional track class map by cutting the binary track class map by the coordinates
And step 9: and (3) setting track position detection and extracting network structure parameters in the area: network layer number, neuron node number of each layer, learning rate and attenuation rate lambda2(ii) a Setting training parameters: total number of iterations T2Early stop step number s2Batch size m2(ii) a Track position detection and network model parameter extraction W in initialization area2(ii) a Detecting and extracting training set X from flight path position in region2Is divided into a plurality of groups m2Small batch data set B of individual training images2;
Step 10: selecting a small batch datasetModule f for detecting and extracting track position in area2J is more than 0 and is less than or equal to m2At model parameter W2Under the action, calculating track position detection output Y in the region2:And calculating the error loss L2Then, updating track position detection in the area and extracting network model parameters;
step 11: repeating the step 10 when s continues2Second iteration, loss L2Not reduced, or when the number of iterations T satisfies T > T2Stopping iteration to obtain a track position detection and extraction module f in the trained area2;
Step 12: processing the azimuth process diagram to be detected according to the steps 1 and 2, inputting the trained HADCNN model for identification, and identifying according to the cutting position VkAnd splicing, overlapping and restoring the identification result to obtain the track extraction map of the azimuth history map.
Further, the color azimuth history map is decomposed R, G, B according to the color channels, and converted into a gray scale map I:
I(x,y)=0.3×R(x,y)+0.59×G(x,y)+0.11×B(x,y)。
further, in step 2, histogram equalization result graph H corresponding to the gray-scale graph IgrayObtained by the following process:
counting r for gray level image IkNumber of pixel values n of each gradation of gradationk,k=0,1,2,…,2E-1, calculating rkCumulative probability of tone scale and 2E-1 product result cumulative probability pixel corresponding value sk:
To skRounded to obtain [ s ]k]R in the azimuth history mapkIs replaced by [ s ]k]:
rk→[sk],k=0,1,2,…,2E-1
Finally obtaining a histogram equalization result graph Hgray:Hgray(x,y)=[sk],I(x,y)==rk,k=0,1,...,2E-1。
Further, in step 2, the edge image G of the gray-scale image IsobelObtained by the following process:
weighting the gray scale value of the upper, lower, left and right fields of each pixel in the image by using a Sobel horizontal edge detection operator and a vertical edge detection operator to obtain an edge image Gsobel。
Further, in step 2, the mixed gray scale preprocessing result graph G of the gray scale graph ImixObtained by the following process:
obtaining I by median filtering the gray scale image ImedianThen mix ImedianHistogram equalization to obtain Imedian-hFinally, will Imedian-hObtaining mixed gray scale preprocessing result graph G through Sobel algorithmmix。
Further, in step 2, during the median filtering, the median filtering is performed on the bitmap I through an l × l median filtering template:
rounding l/2 to obtain l1Then, the I is subjected to white filling to obtain white filling gray scaleFIG. I0:
2<x≤H+2,2<y≤L+2,I0(x,y)=I(x-2,y-2)。
Is selected from0Point I0(x, y) ordering z (1), …, z (mean), …, z (max) from large to small in l × l neighborhood to obtain median filtering result Imedian:Imedian(x, y) ═ z (mean), where l1<x≤H+l1,l1<y≤L+l1。
Further, the track area detection network and the track position detection and extraction network in the track area both adopt a fusion feature correction method: the convolutional layer after the pooling layer is partially extracted and fused to the convolutional layer before the upsampling layer.
Further, a negative log-likelihood loss function with a weighted penalty is used in step 6:
further, a negative log-likelihood loss function with a weight penalty is used in step 10:
Advantageous Effects
The invention provides a method for detecting and extracting tracks in an azimuth history map. The HADCNN model is built in a modular way that imitates the whole-to-local process of human vision and realizes hierarchical attention, while the mixed grayscale preprocessing of the azimuth history map filters noise interference and enhances the tracks. In the proposed HADCNN model, the track area detection module detects and extracts track areas in the whole azimuth history map, and the in-area track position detection and extraction module then detects and extracts the tracks in the local images containing those areas. Both modules use fusion layers: the output of a shallow convolutional layer is extracted into a deep convolutional layer and takes part in its convolutions, realizing fusion training of shallow and deep layers, so that features receive hierarchical attention and detection and extraction accuracy increases. The preprocessing enhances the tracks, suppresses noise, enlarges the inter-class feature differences, reduces the computational cost of the subsequent network, and improves detection and extraction efficiency. By imitating, through its modular design, the way biological vision recognizes an object from overall overview to local detail, the HADCNN model improves the task specialization of each module network, increases the depth and structural diversity of the model, overcomes the low detection and extraction accuracy caused by the weak positive-sample proportion of the azimuth history map, and thereby improves detection and extraction accuracy.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1: the hierarchical attention deep convolutional neural network model.
FIG. 2: the ship track area detection and extraction module.
FIG. 3: the in-area ship track position detection and extraction module.
FIG. 4: first track detection and extraction result based on HADCNN.
FIG. 5: second track detection and extraction result based on HADCNN.
FIG. 6: comparison of track detection and extraction results based on HADCNN; (a) track detection and extraction result map.
FIG. 7: comparison of track detection and extraction results based on HADCNN; (a) track detection and extraction result map.
Detailed Description
The invention extracts and recognizes ship track features in the passive sonar azimuth history map with the hierarchical attention deep convolutional neural network and judges the ship tracks in a complex azimuth history map, specifically as follows:
Step 1: acquire N color azimuth history maps, each of length H, height L and color depth E;
decompose each H × L azimuth history map C into its R, G, B color channels and convert it into a grayscale map I: I(x,y) = t_1 × R(x,y) + t_2 × G(x,y) + t_3 × B(x,y), 0 < x ≤ H, 0 < y ≤ L, where t_1, t_2, t_3 are weights.
Pass the color azimuth history map C through a histogram equalization algorithm to obtain the color histogram equalization result map H_colour.
Step 2: process the grayscale map I as follows:
(1) For the grayscale map I, count the number of pixels n_k at each gray level r_k (k = 0, 1, 2, …, 2^E − 1), and compute s_k, the cumulative probability of the gray levels multiplied by 2^E − 1:
s_k = (2^E − 1) · Σ_{j=0}^{k} n_j / (H × L)
Round s_k to obtain [s_k] (the brackets denote rounding), and replace r_k in the azimuth history map by [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 2^E − 1
finally obtaining the histogram equalization result map H_gray: H_gray(x,y) = [s_k] wherever I(x,y) == r_k, k = 0, 1, …, 2^E − 1.
(2) Weight the gray values in the neighborhood above, below, left and right of each pixel in the grayscale map I with the Sobel horizontal and vertical edge detection operators to obtain the edge image G_sobel.
(3) Median-filter the grayscale map I to obtain I_median, histogram-equalize I_median to obtain I_median-h, and finally pass I_median-h through the Sobel algorithm to obtain the mixed grayscale preprocessing result map G_mix. The median filtering weakens the noise interference in the azimuth history map, and the subsequent histogram equalization and Sobel steps enhance the tracks, so noise filtering and track enhancement are achieved at the same time.
During median filtering, the grayscale map I of the azimuth history map can be median-filtered with an l × l median filtering template:
round l/2 to obtain l_1, then pad I to obtain the padded grayscale map I_0:
I_0(x,y) = I(x − l_1, y − l_1), l_1 < x ≤ H + l_1, l_1 < y ≤ L + l_1.
Sort the values in the l × l neighborhood of each point I_0(x,y) from large to small as z(1), …, z(med), …, z(max) to obtain the median filtering result I_median: I_median(x,y) = z(med), where l_1 < x ≤ H + l_1, l_1 < y ≤ L + l_1.
Step 3: construct the deep convolutional neural network model HADCNN for track detection and extraction in the azimuth history map.
The model is composed of a track area detection module and a track position detection and extraction module in a track area.
The track area detection module is composed of N1Layer winding layer, M1Layer pooling layer, M1And a track area detection network formed by the sampling layers on the layers adopts a fusion characteristic correction method, namely the convolution layer behind the pooling layer is partially extracted and fused to the convolution layer in front of the sampling layer. The method realizes the correction of the extracted features, so that the extracted features are closer to the expected features.
The track position detection and extraction module in the track area is composed of N2Layer winding layer, M2Layer pooling layer, M2And a fusion characteristic correction method is also adopted for the track position detection and extraction network in the track area formed by the sampling layer on the layer.
Step 4: construct the HADCNN model training set, which consists of a track area detection sub-training set and an in-area track position detection and extraction sub-training set.
The track area detection training set X_1 consists of the training image set built from H_colour, H_gray and G_sobel together with the binarized track area class-label maps (pixel value 1 inside the track area, 0 for the background).
Step 5: set the track area detection network structural parameters: number of network layers, number of neuron nodes per layer, learning rate and decay rate λ_1; set the training parameters: total iteration count T_1, early-stopping step count s_1, batch size m_1; initialize the track area detection network model parameters W_1. Split the track area detection training set X_1 into mini-batch datasets B_1 of m_1 training images each.
Step 6: select a mini-batch dataset B_1^j for the track area detection network module f_1, and compute the track area detection output Y_1 = f_1(B_1^j; W_1) under the model parameters W_1. The error of the track area detection module is computed with a negative log-likelihood loss function with a weight penalty, after which the track area detection network model parameters are updated.
Step 7: repeat step 6; when the loss L_1 has not decreased for s_1 consecutive iterations, or when the iteration count t satisfies t > T_1, stop iterating to obtain the trained track area detection module f_1.
Step 8: set a track area ratio threshold p; from the detection result Y_1 of the track area detection module f_1, select every h × h local area of Y_1 whose track area ratio is greater than p and record all their center coordinates V_i (i = 1, 2, …); cut H_colour, H_gray and G_mix according to the V_i coordinates to obtain the in-area track position detection and extraction training image set, forming the in-area track position detection and extraction training set X_2. At the same time cut the track class-label map (pixel value 1 on the track, 0 elsewhere) according to the V_i coordinates to obtain the area track class-label maps.
Step 9: set the in-area track position detection and extraction network structural parameters: number of network layers, number of neuron nodes per layer, learning rate and decay rate λ_2; set the training parameters: total iteration count T_2, early-stopping step count s_2, batch size m_2. Initialize the in-area track position detection and extraction network model parameters W_2. Split the in-area track position detection and extraction training set X_2 into mini-batch datasets B_2 of m_2 training images each.
Step 10: select a mini-batch dataset B_2^j for the in-area track position detection and extraction module f_2, and compute the in-area track position detection output Y_2 = f_2(B_2^j; W_2) under the model parameters W_2. The error of the in-area track position detection and extraction module is computed with a negative log-likelihood loss function with a weight penalty, after which the in-area track position detection and extraction network model parameters are updated.
Step 11: repeat step 10; when the loss L_2 has not decreased for s_2 consecutive iterations, or when the iteration count t satisfies t > T_2, stop iterating to obtain the trained in-area track position detection and extraction module f_2.
Step 12: process the azimuth history map to be detected according to steps 1 and 2, input it into the trained HADCNN model for recognition, and splice, overlap and restore the recognition results according to the cut positions V_i (i = 1, 2, …) to obtain the track extraction map of the azimuth history map.
The following detailed description of embodiments of the invention is intended to be illustrative, and not to be construed as limiting the invention.
Database of this example: the database consists of azimuth history maps and their corresponding track maps. There are 100 training samples and 100 test samples, each of size 1200 × 900 with gray levels ranging from 0 to 255.
Step 1: acquire 100 color azimuth history maps, each of length 1200, height 900 and color depth 8;
decompose each 1200 × 900 azimuth history map C into its R, G, B color channels and convert it into a grayscale map I: I(x,y) = 0.3 × R(x,y) + 0.59 × G(x,y) + 0.11 × B(x,y).
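A minimal sketch of this channel-weighted conversion (NumPy; the H × L × 3 array layout is an assumption):

```python
import numpy as np

def to_gray(c):
    """Convert an H x L x 3 RGB azimuth history map to grayscale with
    the channel weights t1 = 0.3, t2 = 0.59, t3 = 0.11 from step 1."""
    c = c.astype(float)
    return 0.3 * c[..., 0] + 0.59 * c[..., 1] + 0.11 * c[..., 2]
```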
Pass the color azimuth history map C through a histogram equalization algorithm to obtain the color histogram equalization result map H_colour.
Step 2: process the grayscale map I as follows:
(1) For the grayscale map I, count the number of pixels n_k at each gray level r_k (k = 0, 1, 2, …, 255), and compute s_k, the cumulative probability of the gray levels multiplied by 255:
s_k = 255 · Σ_{j=0}^{k} n_j / (1200 × 900)
Round s_k to obtain [s_k] (the brackets denote rounding), and replace r_k in the azimuth history map by [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 255
finally obtaining the histogram equalization result map H_gray: H_gray(x,y) = [s_k] wherever I(x,y) == r_k, k = 0, 1, …, 255.
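The equalization in step 2(1) can be sketched as follows (NumPy; assumes an integer-valued grayscale map):

```python
import numpy as np

def hist_equalize(i, e=8):
    """Map each gray level r_k to [s_k], the rounded product of the
    cumulative gray-level probability and 2**e - 1, as in step 2(1)."""
    levels = 2 ** e
    n_k, _ = np.histogram(i, bins=levels, range=(0, levels))
    s_k = np.round((levels - 1) * np.cumsum(n_k) / i.size)
    return s_k.astype(np.uint8)[i]        # replace r_k by [s_k]
```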
(2) Weight the gray values in the neighborhood above, below, left and right of each pixel in the image with the Sobel horizontal and vertical edge detection operators to obtain the edge image G_sobel.
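A sketch of the Sobel step (NumPy plus SciPy's convolve; combining the horizontal and vertical responses as a gradient magnitude is an assumption, since the text only states that both operators are applied):

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_edges(i):
    """Weight each pixel's neighborhood with the horizontal and
    vertical Sobel operators and combine the two responses."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = convolve(i.astype(float), kx)     # horizontal edges
    gy = convolve(i.astype(float), kx.T)   # vertical edges
    return np.hypot(gx, gy)                # gradient magnitude
```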
(3) Median-filter the grayscale map I to obtain I_median, histogram-equalize I_median to obtain I_median-h, and finally pass I_median-h through the Sobel algorithm to obtain the mixed grayscale preprocessing result map G_mix.
During median filtering, the grayscale map I of the azimuth history map can be median-filtered with a 3 × 3 median filtering template:
rounding 3/2 gives l_1 = 1; I is then padded to obtain the padded grayscale map I_0.
Sort the values in the 3 × 3 neighborhood of each point I_0(x,y) from large to small as z(1), …, z(med), …, z(max) to obtain the median filtering result I_median: I_median(x,y) = z(med), where 1 < x ≤ 1201, 1 < y ≤ 901.
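A sketch of the median filtering and the full mixed preprocessing chain (NumPy; padding with white, value 255, is a reading of the "white filling" wording, and hist_equalize and sobel_edges refer to the sketches above):

```python
import numpy as np

def median_filter(i, l=3):
    """Pad by l1 = l // 2, then replace each pixel by the median of
    its l x l neighborhood, as in step 2(3)."""
    l1 = l // 2
    i0 = np.pad(i, l1, mode="constant", constant_values=255)  # white padding
    out = np.empty_like(i)
    for x in range(i.shape[0]):
        for y in range(i.shape[1]):
            out[x, y] = np.median(i0[x:x + l, y:y + l])
    return out

def mixed_preprocess(i):
    """G_mix = Sobel(HistEq(Median(I))): denoise first, then enhance."""
    return sobel_edges(hist_equalize(median_filter(i)))
```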
Step 3: construct the deep convolutional neural network model HADCNN for track detection and extraction in the azimuth history map.
The model consists of a track area detection module and an in-area track position detection and extraction module.
The track area detection module is a track area detection network composed of 17 convolutional layers, 3 pooling layers and 3 upsampling layers, and adopts a fusion feature correction method: the convolutional layer after a pooling layer is partially extracted and fused into the convolutional layer before the upsampling layer. This corrects the extracted features so that they come closer to the expected features.
The in-area track position detection and extraction module is an in-area track position detection and extraction network composed of 17 convolutional layers, 3 pooling layers and 3 upsampling layers, and likewise adopts the fusion feature correction method.
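The fusion feature correction idea might be sketched as follows (a sketch assuming PyTorch; channel counts and depth are illustrative and do not reproduce the 17-layer configuration):

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Features from the convolutional layer after a pooling layer are
    concatenated with deeper features just before the upsampling layer,
    so deep convolutions are corrected by shallower evidence."""
    def __init__(self, ch):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.shallow = nn.Conv2d(ch, ch, 3, padding=1)   # conv after pooling
        self.deep = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)  # fusion before upsampling
        self.up = nn.Upsample(scale_factor=2)

    def forward(self, x):
        s = torch.relu(self.shallow(self.pool(x)))
        d = self.deep(s)
        return self.up(self.fuse(torch.cat([s, d], dim=1)))
```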
Step 4: construct the HADCNN model training set, which consists of a track area detection sub-training set and an in-area track position detection and extraction sub-training set.
The track area detection training set X_1 consists of the training image set built from H_colour, H_gray and G_sobel together with the binarized track area class-label maps (pixel value 1 inside the track area, 0 for the background).
Step 5: set the track area detection network structural parameters: number of network layers, number of neuron nodes per layer, learning rate, and a decay rate of 0.5; set the training parameters: total iteration count 100, early-stopping step count 10, batch size 2; initialize the track area detection network model parameters W_1. Split the track area detection training set X_1 into mini-batch datasets B_1 of 2 training images each.
Step 6: select a mini-batch dataset B_1^j for the track area detection network module f_1, and compute the track area detection output Y_1 = f_1(B_1^j; W_1) under the model parameters W_1. The error of the track area detection module is computed with a negative log-likelihood loss function with a weight penalty, after which the track area detection network model parameters are updated.
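The weighted loss appears as a formula in the original; one common form consistent with the description, with an assumed positive-class weight, is:

```python
import torch

def weighted_nll(y_pred, y_true, w_pos=10.0, eps=1e-7):
    """Negative log-likelihood with a weight penalty: track pixels
    (label 1) are rare, so their term is up-weighted by w_pos
    (the value 10.0 is an assumption, not taken from the patent)."""
    y_pred = y_pred.clamp(eps, 1 - eps)
    loss = -(w_pos * y_true * torch.log(y_pred)
             + (1 - y_true) * torch.log(1 - y_pred))
    return loss.mean()
```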
Step 7: repeat step 6; when the loss L_1 has not decreased for 10 consecutive iterations, or when the iteration count t satisfies t > 100, stop iterating to obtain the trained track area detection module f_1.
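Steps 6 and 7 together amount to mini-batch training with early stopping; a sketch (assuming PyTorch-style modules and optimizers, with batches an iterable of (input, label) pairs):

```python
def train_module(f, batches, loss_fn, opt, t_max=100, patience=10):
    """Stop when the loss has not decreased for `patience` consecutive
    iterations or the iteration count exceeds t_max (here 100 and 10)."""
    best, stale = float("inf"), 0
    for t in range(1, t_max + 1):
        for x, y in batches:
            opt.zero_grad()
            loss = loss_fn(f(x), y)
            loss.backward()
            opt.step()
        if loss.item() < best:
            best, stale = loss.item(), 0
        else:
            stale += 1
        if stale >= patience:
            break
    return f
```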
Step 8: set a track area ratio threshold p; from the detection result Y_1 of the track area detection module f_1, select every h × h local area of Y_1 whose track area ratio is greater than p and record all their center coordinates V_i (i = 1, 2, …); cut H_colour, H_gray and G_mix according to the V_i coordinates to obtain the in-area track position detection and extraction training image set, forming the in-area track position detection and extraction training set X_2. At the same time cut the track class-label map (pixel value 1 on the track, 0 elsewhere) according to the V_i coordinates to obtain the area track class-label maps.
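A sketch of the region selection and cropping in step 8 (NumPy; the non-overlapping window scan is an assumption, and h and p keep their meaning from the text):

```python
import numpy as np

def select_track_regions(y1, images, h=64, p=0.1):
    """Scan h x h windows of the area-detection output y1, keep those
    whose track-area ratio exceeds p, record their centers V, and cut
    matching patches out of each preprocessed input image."""
    centers, crops = [], []
    H, W = y1.shape
    for top in range(0, H - h + 1, h):
        for left in range(0, W - h + 1, h):
            if y1[top:top + h, left:left + h].mean() > p:
                centers.append((top + h // 2, left + h // 2))
                crops.append([im[top:top + h, left:left + h] for im in images])
    return centers, crops
```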
Step 9: set the in-area track position detection and extraction network structural parameters: number of network layers, number of neuron nodes per layer, learning rate, and a decay rate of 0.5; set the training parameters: total iteration count 100, early-stopping step count 10, batch size 2. Initialize the in-area track position detection and extraction network model parameters W_2. Split the in-area track position detection and extraction training set X_2 into mini-batch datasets B_2 of 2 training images each.
Step 10: select a mini-batch dataset B_2^j for the in-area track position detection and extraction module f_2, and compute the in-area track position detection output Y_2 = f_2(B_2^j; W_2) under the model parameters W_2. The error of the in-area track position detection and extraction module is computed with a negative log-likelihood loss function with a weight penalty, after which the in-area track position detection and extraction network model parameters are updated.
Step 11: repeat step 10; when the loss L_2 has not decreased for 10 consecutive iterations, or when the iteration count t satisfies t > 100, stop iterating to obtain the trained in-area track position detection and extraction module f_2.
Step 12: process the azimuth history map to be detected according to steps 1 and 2, input it into the trained HADCNN model for recognition, and splice, overlap and restore the recognition results according to the cut positions V_i (i = 1, 2, …) to obtain the track extraction map of the azimuth history map.
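The splicing and restoration of step 12 might be sketched as follows (NumPy; combining overlapping pixels with a maximum is an assumption, since the text only says the results are spliced, overlapped and restored):

```python
import numpy as np

def stitch(centers, patches, shape, h=64):
    """Paste the in-region recognition results back at their cut
    positions V_i; overlapping pixels keep the larger response."""
    out = np.zeros(shape)
    for (cy, cx), patch in zip(centers, patches):
        top, left = cy - h // 2, cx - h // 2
        out[top:top + h, left:left + h] = np.maximum(
            out[top:top + h, left:left + h], patch)
    return out
```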
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention.
Claims (9)
1. A method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network, characterized by comprising the following steps:
Step 1: acquire N color azimuth history maps, each of length H, height L and color depth E; decompose each H × L azimuth history map C into its R, G, B color channels and convert it into a grayscale map I; pass the color azimuth history map C through a histogram equalization algorithm to obtain the color histogram equalization result map H_colour;
Step 2: for the grayscale map I of each color azimuth history map, likewise obtain the corresponding histogram equalization result map H_gray, edge image G_sobel and mixed grayscale preprocessing result map G_mix;
Step 3: construct the deep convolutional neural network model HADCNN for track detection and extraction in the azimuth history map;
the HADCNN model consists of a track area detection module and an in-area track position detection and extraction module;
the track area detection module is a track area detection network composed of N_1 convolutional layers, M_1 pooling layers and M_1 upsampling layers;
the in-area track position detection and extraction module is an in-area track position detection and extraction network composed of N_2 convolutional layers, M_2 pooling layers and M_2 upsampling layers;
Step 4: construct the HADCNN model training set, which consists of a track area detection sub-training set and an in-area track position detection and extraction sub-training set;
the track area detection training set X_1 consists of the training image set built from H_colour, H_gray and G_sobel together with the binarized track area class-label maps;
Step 5: set the track area detection network structural parameters: number of network layers, number of neuron nodes per layer, learning rate and decay rate λ_1; set the training parameters: total iteration count T_1, early-stopping step count s_1, batch size m_1; initialize the track area detection network model parameters W_1; split the track area detection training set X_1 into mini-batch datasets B_1 of m_1 training images each;
Step 6: select a mini-batch dataset B_1^j (0 < j ≤ m_1) for the track area detection network module f_1, compute the track area detection output Y_1 = f_1(B_1^j; W_1) under the model parameters W_1, compute the error loss L_1, and then update the track area detection network model parameters;
Step 7: repeat step 6; when the loss L_1 has not decreased for s_1 consecutive iterations, or when the iteration count t satisfies t > T_1, stop iterating to obtain the trained track area detection module f_1;
Step 8: set a track area ratio threshold p; from the detection result Y_1 of the track area detection module f_1, select every h × h local area of Y_1 whose track area ratio is greater than p and record all their center coordinates V_k; cut H_colour, H_gray and G_mix according to the V_k coordinates to obtain the in-area track position detection and extraction training image set, forming the in-area track position detection and extraction training set X_2; at the same time cut the binarized track class-label map according to the V_k coordinates to obtain the area track class-label maps;
Step 9: set the in-area track position detection and extraction network structural parameters: number of network layers, number of neuron nodes per layer, learning rate and decay rate λ_2; set the training parameters: total iteration count T_2, early-stopping step count s_2, batch size m_2; initialize the in-area track position detection and extraction network model parameters W_2; split the in-area track position detection and extraction training set X_2 into mini-batch datasets B_2 of m_2 training images each;
Step 10: select a mini-batch dataset B_2^j (0 < j ≤ m_2) for the in-area track position detection and extraction module f_2, compute the in-area track position detection output Y_2 = f_2(B_2^j; W_2) under the model parameters W_2, compute the error loss L_2, and then update the in-area track position detection and extraction network model parameters;
Step 11: repeat step 10; when the loss L_2 has not decreased for s_2 consecutive iterations, or when the iteration count t satisfies t > T_2, stop iterating to obtain the trained in-area track position detection and extraction module f_2;
Step 12: process the azimuth history map to be detected according to steps 1 and 2, input it into the trained HADCNN model for recognition, and splice, overlap and restore the recognition results according to the cut positions V_k to obtain the track extraction map of the azimuth history map.
2. The method for detecting and extracting tracks in an azimuth history map based on the hierarchical attention deep convolutional neural network according to claim 1, characterized in that in step 1 the color azimuth history map is decomposed into its R, G, B color channels and converted into the grayscale map I:
I(x,y) = 0.3 × R(x,y) + 0.59 × G(x,y) + 0.11 × B(x,y).
3. The method for detecting and extracting tracks in an azimuth history map based on the hierarchical attention deep convolutional neural network according to claim 1, characterized in that in step 2 the histogram equalization result map H_gray corresponding to the grayscale map I is obtained as follows:
for the grayscale map I, count the number of pixels n_k at each gray level r_k, k = 0, 1, 2, …, 2^E − 1, and compute s_k, the cumulative probability of the gray levels multiplied by 2^E − 1:
s_k = (2^E − 1) · Σ_{j=0}^{k} n_j / (H × L)
round s_k to obtain [s_k], and replace r_k in the azimuth history map by [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 2^E − 1
finally obtaining the histogram equalization result map H_gray: H_gray(x,y) = [s_k] wherever I(x,y) == r_k, k = 0, 1, …, 2^E − 1.
4. The method for detecting and extracting tracks in an azimuth history map based on the hierarchical attention deep convolutional neural network according to claim 1, characterized in that in step 2 the edge image G_sobel of the grayscale map I is obtained as follows:
weight the gray values in the neighborhood above, below, left and right of each pixel in the image with the Sobel horizontal and vertical edge detection operators to obtain the edge image G_sobel.
5. The method for detecting and extracting tracks in an azimuth history map based on the hierarchical attention deep convolutional neural network according to claim 1, characterized in that in step 2 the mixed grayscale preprocessing result map G_mix of the grayscale map I is obtained as follows:
median-filter the grayscale map I to obtain I_median, histogram-equalize I_median to obtain I_median-h, and finally pass I_median-h through the Sobel algorithm to obtain the mixed grayscale preprocessing result map G_mix.
6. The method for detecting and extracting tracks in an azimuth history map based on the hierarchical attention deep convolutional neural network according to claim 1, characterized in that in step 2 the median filtering is performed on the grayscale map I of the azimuth history map with an l × l median filtering template:
round l/2 to obtain l_1, then pad I to obtain the padded grayscale map I_0:
I_0(x,y) = I(x − l_1, y − l_1), l_1 < x ≤ H + l_1, l_1 < y ≤ L + l_1
sort the values in the l × l neighborhood of each point I_0(x,y) from large to small as z(1), …, z(med), …, z(max) to obtain the median filtering result I_median: I_median(x,y) = z(med), where l_1 < x ≤ H + l_1, l_1 < y ≤ L + l_1.
7. The method for detecting and extracting tracks in an azimuth history map based on the hierarchical attention deep convolutional neural network according to claim 1, characterized in that both the track area detection network and the in-area track position detection and extraction network adopt a fusion feature correction method: the convolutional layer after the pooling layer is partially extracted and fused into the convolutional layer before the upsampling layer.
Priority Applications (1)
- CN202110502783.3A — filed 2021-05-09, priority date 2021-05-09, granted as CN113239775B: Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network
Publications (2)
- CN113239775A (publication) — 2021-08-10
- CN113239775B (grant) — 2023-05-02
Legal Events
- PB01 — Publication
- SE01 — Entry into force of request for substantive examination
- GR01 — Patent grant