CN113239775A - Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network


Info

Publication number: CN113239775A
Authority: CN (China)
Prior art keywords: track, azimuth, detecting, extracting, area
Legal status: Granted
Application number: CN202110502783.3A
Other languages: Chinese (zh)
Other versions: CN113239775B
Inventors: 杨宏晖, 于传林
Current Assignee: Northwestern Polytechnical University
Original Assignee: Northwestern Polytechnical University
Filing / priority date: 2021-05-09 (application CN202110502783.3A, filed by Northwestern Polytechnical University)
Publication of CN113239775A: 2021-08-10
Application granted; publication of CN113239775B: 2023-05-02
Legal status: Active

Classifications

    • G01C 21/203 — Instruments for performing navigational calculations, specially adapted for sailing ships
    • G01S 15/93 — Sonar systems specially adapted for anti-collision purposes
    • G06F 18/253 — Pattern recognition; fusion techniques of extracted features
    • G06N 3/045 — Neural network architectures; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06T 5/40 — Image enhancement or restoration using histogram techniques
    • G06T 7/13 — Image analysis; segmentation; edge detection
    • G06F 2218/04 — Pattern recognition for signal processing; preprocessing; denoising
    • G06F 2218/08 — Pattern recognition for signal processing; feature extraction
    • G06T 2207/10024 — Image acquisition modality: color image
    • G06T 2207/20032 — Algorithmic details: median filtering
    • G06T 2207/20081 — Algorithmic details: training; learning
    • G06T 2207/20084 — Algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30241 — Subject of image: trajectory
    • Y02A 90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Automation & Control Theory (AREA)
  • Acoustics & Sound (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network. An HADCNN model is constructed in a modular way to imitate the whole-to-local process of human vision and thereby realize hierarchical attention, while a mixed gray-scale preprocessing method for the azimuth history map filters noise interference and enhances the tracks. In the HADCNN model, a track area detection module detects and extracts the track areas in the whole azimuth history map, and an in-area track position detection and extraction module detects and extracts the tracks in the local images containing those areas. The preprocessing enhances the tracks, suppresses noise, increases the feature difference between classes, reduces the computation cost of the subsequent network, and improves detection and extraction efficiency.

Description

Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network
Technical Field
The invention relates to a method for detecting and extracting tracks in an azimuth history map, in particular to one based on a hierarchical attention deep convolutional neural network.
Background
Detecting and extracting tracks from a passive sonar azimuth history map is an important means of determining ship tracks. Ship tracks in complex azimuth history maps are judged by preprocessing the map and then detecting and extracting the tracks. Common preprocessing methods that suppress background noise interference also damage part of the track information as the noise is suppressed, producing breaks and discontinuities. Preprocessing methods that enhance track line features do strengthen weak tracks to some extent, but they also amplify noise interference and cause degradation such as over-enhancement of some images. These preprocessing methods can improve the display of the azimuth history map, but their parameters must be set manually; once the parameters are set improperly the processing effect suffers, which increases the difficulty of applying the algorithms in practice.
Traditional track detection and extraction methods mostly use thresholding and probabilistic tracking. Thresholding is simple to engineer, but it cannot effectively detect and extract tracks under heavy background noise interference, and detection errors can occur when two tracks are too close or cross. Probabilistic tracking takes single-frame image data as input, judges the track positions in the area, searches for track points within a fixed range based on the facts that a track cannot bend abruptly and that pixel values along a track change slowly, and connects the track points to realize detection and extraction. Probabilistic tracking can misjudge noise interference that resembles a track. Single-frame image data often contain many interfering points, and without human judgment, tracking from the detected points easily produces false targets. Even when multi-frame data in the target domain are checked while associating and correcting broken tracks, some false tracks remain in the image and affect the detection result. Moreover, traditional track detection and extraction requires manual parameter setting and generalizes poorly.
Current deep convolutional neural networks have great advantages in target detection and extraction, and are mainly applied to face recognition and to recognizing and extracting objects in road scenes. In those application scenes the target object occupies a large proportion of the image and has a clear outline, so a deep convolutional neural network can extract good results. In an azimuth history map, however, the track occupies an extremely small proportion of the image and has no obvious outline; at the same time the track signal-to-noise ratio is low, the various noise interferences have different characteristics, and tracks cross or blend, so existing networks applied directly to track detection and extraction in azimuth history maps cannot achieve ideal results. A deep convolutional neural network therefore needs to be designed specifically around the characteristics of tracks in azimuth history maps to achieve track detection and extraction.
Disclosure of Invention
The invention provides a Hierarchical Attention Deep Convolutional Neural Network (HADCNN) for the problem of detecting and extracting tracks in passive sonar azimuth history maps. Through a task-modular design the network imitates the recognition mechanism of biological vision, which moves from an overall overview to local details: it builds a track area detection module and an in-area track position detection and extraction module, realizing hierarchical attention over image regions and addressing the weak positive-sample ratio of azimuth history maps. Each module uses a deep convolutional neural network (DCNN) to progressively complete ship track area detection and extraction and in-area track position detection and extraction. Within each module, hierarchical attention over features is realized by jointly training fused feature maps from the shallow and deep convolutional layers, which increases detection and extraction precision. The hierarchical attention deep convolutional neural network provides a new approach and method for detecting and extracting tracks in azimuth history maps.
The technical scheme of the invention is as follows:
the method for detecting and extracting the flight path in the azimuth history map based on the layered attention depth convolution neural network comprises the following steps:
step 1: acquiring N color azimuth process maps, wherein the length of each azimuth process map is H, the height of each azimuth process map is L, and the color depth of each azimuth process map is E; decomposing R, G, B each H multiplied by L azimuth process diagram C according to a color channel, and converting the H multiplied by L azimuth process diagram C into a gray scale diagram I; and obtaining a color histogram equalization result graph H by the color azimuth history graph C through a histogram equalization algorithmcolour
Step 2: for the gray level image I of each color azimuth process image, a corresponding histogram equalization result image H is also obtainedgrayEdge image GsobelAnd mixed gray scale preprocessing result graph Gmix
And step 3: constructing a depth convolution neural network model HADCNN for detecting and extracting the track of the azimuth history map;
the deep convolutional neural network model HADCNN; the track area detection module and the track position detection and extraction module in the track area are formed;
the track area detection module is composed of N1Layer winding layer, M1Layer pooling layer, M1A track area detection network formed by an upper sampling layer;
the track position detection and extraction module in the track area consists of N2Layer winding layer, M2Layer pooling layer, M2A track position detection and extraction network in a track area formed by an upper sampling layer;
and 4, step 4: constructing a HADCNN model training set, wherein the training set consists of a flight path region detection module sub-training set and a flight path position detection and extraction module sub-training set in a region;
wherein the track area detection module trains set X1Comprises a reaction product of Hcolour、Hgray、GsobelConstructed training image set
Figure BDA0003057085870000031
Class mark map after binarization processing with track area class mark map
Figure BDA0003057085870000032
0<i≤N;
Step (ii) of5: setting the structural parameters of the track area detection network: network layer number, neuron node number of each layer, learning rate and attenuation rate lambda1(ii) a Setting training parameters: total number of iterations T1Early stop step number s1Batch size m1(ii) a Initializing track area detection network model parameters W1(ii) a Training set X of track area detection module1Is divided into a plurality of groups m1Small batch data set B of individual training images1
Step 6: selecting a small batch dataset
Figure BDA0003057085870000033
Network module f for detecting track area1J is more than 0 and is less than or equal to m1At model parameter W1Calculating track area detection output Y under action1
Figure BDA0003057085870000034
And calculating the error loss L1Then updating the parameters of the track area detection network model;
and 7: repeating the step 6 when s continues1Second iteration, loss L1Not reduced, or when the number of iterations T satisfies T > T1Stopping iteration to obtain a trained track area detection module f1
And 8: setting a track area ratio threshold value p, and detecting a module f according to the track area1Detection result Y of1Selecting all Y with track area ratio greater than p in h x h area1Partial area and recording all the central coordinates V of the partial areakAccording to VkCoordinate cutting Hcolour、Hgray、GmixObtaining the detection and extraction training image set of the track position in the region
Figure BDA0003057085870000035
Detection and extraction of track position in formation area2(ii) a At the same time according to VkObtaining the regional track class map by cutting the binary track class map by the coordinates
Figure BDA0003057085870000036
And step 9: and (3) setting track position detection and extracting network structure parameters in the area: network layer number, neuron node number of each layer, learning rate and attenuation rate lambda2(ii) a Setting training parameters: total number of iterations T2Early stop step number s2Batch size m2(ii) a Track position detection and network model parameter extraction W in initialization area2(ii) a Detecting and extracting training set X from flight path position in region2Is divided into a plurality of groups m2Small batch data set B of individual training images2
Step 10: selecting a small batch dataset
Figure BDA0003057085870000037
Module f for detecting and extracting track position in area2J is more than 0 and is less than or equal to m2At model parameter W2Under the action, calculating track position detection output Y in the region2
Figure BDA0003057085870000041
And calculating the error loss L2Then, updating track position detection in the area and extracting network model parameters;
step 11: repeating the step 10 when s continues2Second iteration, loss L2Not reduced, or when the number of iterations T satisfies T > T2Stopping iteration to obtain a track position detection and extraction module f in the trained area2
Step 12: processing the azimuth process diagram to be detected according to the steps 1 and 2, inputting the trained HADCNN model for identification, and identifying according to the cutting position VkAnd splicing, overlapping and restoring the identification result to obtain the track extraction map of the azimuth history map.
Further, the color azimuth history map is decomposed into R, G, B color channels and converted into the gray-scale map I by:
I(x,y) = 0.3 × R(x,y) + 0.59 × G(x,y) + 0.11 × B(x,y).
Further, in step 2, the histogram equalization result map H_gray corresponding to the gray-scale map I is obtained as follows:
count the number n_k of pixels at each gray level r_k of the gray-scale map I, k = 0, 1, 2, …, 2^E − 1, and compute the mapped pixel value s_k as the cumulative probability of the levels multiplied by 2^E − 1:
s_k = (2^E − 1) × (n_0 + n_1 + … + n_k) / (H × L)
round s_k to obtain [s_k], and replace each r_k in the azimuth history map with [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 2^E − 1
finally obtaining the histogram equalization result map H_gray: H_gray(x,y) = [s_k] wherever I(x,y) = r_k, k = 0, 1, …, 2^E − 1.
Further, in step 2, the edge image G_sobel of the gray-scale map I is obtained as follows:
weight the gray values in the neighborhood above, below, left and right of each pixel of the image with the Sobel horizontal and vertical edge detection operators to obtain the edge image G_sobel.
Further, in step 2, the mixed gray-scale preprocessing result map G_mix of the gray-scale map I is obtained as follows:
apply median filtering to the gray-scale map I to obtain I_median, apply histogram equalization to I_median to obtain I_median-h, and finally apply the Sobel algorithm to I_median-h to obtain the mixed gray-scale preprocessing result map G_mix.
Further, in step 2, the median filtering is performed on the gray-scale map I with an l × l median filter template:
round l/2 down to obtain l1, then pad I with a white border of width l1 (pixel value 2^E − 1) to obtain the white-padded gray-scale map I0:
I0(x,y) = I(x − l1, y − l1), l1 < x ≤ H + l1, l1 < y ≤ L + l1.
For each point I0(x,y), sort the values in its l × l neighborhood from large to small as z(1), …, z(med), …, z(max) to obtain the median filtering result I_median: I_median(x,y) = z(med), where l1 < x ≤ H + l1, l1 < y ≤ L + l1.
Further, both the track area detection network and the in-area track position detection and extraction network adopt a fusion feature correction method: the output of a convolutional layer following a pooling layer is extracted and fused into the convolutional layer preceding the corresponding upsampling layer.
Further, the error loss L1 in step 6 uses a negative log-likelihood loss function with a weight penalty.
Further, the error loss L2 in step 10 uses a negative log-likelihood loss function with a weight penalty.
advantageous effects
The invention provides a method for detecting and extracting tracks in an azimuth history map. The method constructs the HADCNN model in a modular way to imitate the whole-to-local process of human vision while realizing hierarchical attention, and uses mixed gray-scale preprocessing of the azimuth history map to filter noise interference and enhance the tracks. In the HADCNN model proposed by the invention, the track area detection module detects and extracts the track areas in the whole azimuth history map, after which the in-area track position detection and extraction module detects and extracts the tracks in the local images containing those areas. Both modules use fusion layers: the output of a shallow convolutional layer is extracted into a deep convolutional layer to take part in the convolution, realizing joint training of shallow and deep convolutional layers, achieving layer-by-layer attention over features, and increasing detection and extraction precision. The preprocessing enhances the tracks, suppresses noise, increases the feature difference between classes, reduces the computation cost of the subsequent network, and improves detection and extraction efficiency. Through its modular design the HADCNN model imitates the recognition mechanism of biological vision from overall overview to local details, improves the task specialization of each module network, increases the depth and structural diversity of the model, addresses the low detection and extraction accuracy caused by the weak positive-sample ratio of azimuth history maps, and improves detection and extraction accuracy.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1: the hierarchical attention deep convolutional neural network model.
FIG. 2: the ship track area detection and extraction module.
FIG. 3: the in-area ship track position detection and extraction module.
FIG. 4: first track detection and extraction result based on HADCNN.
FIG. 5: second track detection and extraction result based on HADCNN.
FIG. 6: comparison of track detection and extraction results based on HADCNN; (a) track detection and extraction result map.
FIG. 7: comparison of track detection and extraction results based on HADCNN; (a) track detection and extraction result map.
Detailed Description
The invention performs feature extraction and recognition of ship tracks in passive sonar azimuth history maps based on a hierarchical attention deep convolutional neural network, and judges the ship tracks in complex azimuth history maps, specifically as follows:
Step 1: acquire N color azimuth history maps, each of length H, height L and color depth E.
Decompose each H × L azimuth history map C into R, G, B color channels and convert it into a gray-scale map I: I(x,y) = t1 × R(x,y) + t2 × G(x,y) + t3 × B(x,y), 0 < x ≤ H, 0 < y ≤ L, where t1, t2, t3 are weights.
Obtain the color histogram equalization result map H_colour from the color azimuth history map C with a histogram equalization algorithm.
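As a concrete illustration of step 1, the sketch below implements the channel decomposition and per-channel color histogram equalization in Python with NumPy and OpenCV. The function names and the use of OpenCV are illustrative assumptions, not part of the patent:

    import cv2
    import numpy as np

    def to_gray(C, t=(0.3, 0.59, 0.11)):
        # C is an H x L x 3 uint8 color azimuth history map in R, G, B order.
        R, G, B = C[..., 0], C[..., 1], C[..., 2]
        I = t[0] * R + t[1] * G + t[2] * B   # weighted channel fusion
        return I.astype(np.uint8)

    def color_hist_equalize(C):
        # Equalize each 8-bit color channel separately to obtain H_colour.
        return np.stack([cv2.equalizeHist(C[..., c]) for c in range(3)], axis=-1)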
Step 2: the gray-scale map I is processed as follows:
(1) Count the number n_k of pixels at each gray level r_k (k = 0, 1, 2, …, 2^E − 1) of the gray-scale map I, and compute the mapped pixel value s_k as the cumulative probability of the levels multiplied by 2^E − 1:
s_k = (2^E − 1) × (n_0 + n_1 + … + n_k) / (H × L)
Round s_k to obtain [s_k] (the brackets denote rounding), and replace each r_k in the azimuth history map with [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 2^E − 1
This finally yields the histogram equalization result map H_gray: H_gray(x,y) = [s_k] wherever I(x,y) = r_k, k = 0, 1, …, 2^E − 1.
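A minimal NumPy sketch of this equalization step, assuming an 8-bit image by default (the function name is illustrative):

    import numpy as np

    def hist_equalize(I, E=8):
        levels = 2 ** E
        n_k = np.bincount(I.ravel(), minlength=levels)      # pixel count per level r_k
        cdf = np.cumsum(n_k) / I.size                       # cumulative probability
        s_k = np.rint((levels - 1) * cdf).astype(I.dtype)   # rounded mapping [s_k]
        return s_k[I]                                       # replace each r_k with [s_k]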
(2) Weight the gray values in the neighborhood above, below, left and right of each pixel of the gray-scale map I with the Sobel horizontal and vertical edge detection operators to obtain the edge image G_sobel.
(3) Apply median filtering to the gray-scale map I to obtain I_median, apply histogram equalization to I_median to obtain I_median-h, and finally apply the Sobel algorithm to I_median-h to obtain the mixed gray-scale preprocessing result map G_mix. The median filtering weakens the noise interference in the azimuth history map, after which histogram equalization and the Sobel method enhance the tracks, so noise filtering and track enhancement are realized at the same time.
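The whole mixed preprocessing chain can be sketched with OpenCV as follows. The kernel size of 3 and the equal weighting of the two Sobel responses are assumptions; the patent only fixes the order median filter → histogram equalization → Sobel:

    import cv2

    def sobel_edges(I):
        # Combine horizontal and vertical Sobel responses into one edge image.
        gx = cv2.convertScaleAbs(cv2.Sobel(I, cv2.CV_64F, 1, 0, ksize=3))
        gy = cv2.convertScaleAbs(cv2.Sobel(I, cv2.CV_64F, 0, 1, ksize=3))
        return cv2.addWeighted(gx, 0.5, gy, 0.5, 0)

    def mixed_preprocess(I, l=3):
        I_median = cv2.medianBlur(I, l)            # suppress impulsive noise
        I_median_h = cv2.equalizeHist(I_median)    # enhance weak tracks
        return sobel_edges(I_median_h)             # G_mix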
During median filtering, the gray-scale map I of the azimuth history map can be filtered with an l × l median filter template:
round l/2 down to obtain l1, then pad I with a white border of width l1 (pixel value 2^E − 1) to obtain the white-padded gray-scale map I0:
I0(x,y) = I(x − l1, y − l1), l1 < x ≤ H + l1, l1 < y ≤ L + l1.
For each point I0(x,y), sort the values in its l × l neighborhood from large to small as z(1), …, z(med), …, z(max) to obtain the median filtering result I_median: I_median(x,y) = z(med), where l1 < x ≤ H + l1, l1 < y ≤ L + l1.
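A direct NumPy translation of this white-padded median filter (slow but faithful to the description; the function name is illustrative):

    import numpy as np

    def median_filter_white_pad(I, l=3, E=8):
        l1 = l // 2                                   # rounded l/2
        H, L = I.shape
        I0 = np.full((H + 2 * l1, L + 2 * l1), 2 ** E - 1, dtype=I.dtype)
        I0[l1:H + l1, l1:L + l1] = I                  # I0(x, y) = I(x - l1, y - l1)
        out = np.empty_like(I)
        for x in range(H):
            for y in range(L):
                # z(med): the median of the l x l neighborhood.
                out[x, y] = np.median(I0[x:x + l, y:y + l])
        return out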
Step 3: construct the deep convolutional neural network model HADCNN for track detection and extraction in the azimuth history map.
The model consists of a track area detection module and an in-area track position detection and extraction module.
The track area detection module is a track area detection network composed of N1 convolutional layers, M1 pooling layers and M1 upsampling layers, and adopts a fusion feature correction method: the output of a convolutional layer following a pooling layer is extracted and fused into the convolutional layer preceding the corresponding upsampling layer. This corrects the extracted features so that they are closer to the expected features.
The in-area track position detection and extraction module is an in-area track position detection and extraction network composed of N2 convolutional layers, M2 pooling layers and M2 upsampling layers, and also adopts the fusion feature correction method.
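The PyTorch sketch below shows one such module with a single pooling/upsampling stage. The layer counts N1, M1, N2, M2 are left unspecified here, the class name and channel widths are assumptions, and the skip connection concatenating shallow features into the convolution after the upsampling layer is one plausible reading of the fusion feature correction, not the patented architecture:

    import torch
    import torch.nn as nn

    class FusionDetectionNet(nn.Module):
        def __init__(self, in_ch=3, base=16):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
            self.pool = nn.MaxPool2d(2)
            self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
            self.dec = nn.Sequential(nn.Conv2d(base * 3, base, 3, padding=1), nn.ReLU())
            self.head = nn.Conv2d(base, 1, 1)      # per-pixel track probability

        def forward(self, x):
            f1 = self.enc1(x)                      # shallow features
            f2 = self.enc2(self.pool(f1))          # deep features after pooling
            u = self.up(f2)                        # upsampling layer
            fused = torch.cat([u, f1], dim=1)      # fusion feature correction
            return torch.sigmoid(self.head(self.dec(fused)))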
Step 4: construct the HADCNN model training set, which consists of the track area detection module sub-training set and the in-area track position detection and extraction module sub-training set.
The track area detection module training set X1 comprises the training image sets constructed from H_colour, H_gray and G_sobel together with the binarized track-area class-label maps (pixel value 1 in the track area, 0 in the background).
Step 5: set the structural parameters of the track area detection network: number of network layers, number of neuron nodes per layer, learning rate and decay rate λ1; set the training parameters: total iteration count T1, early-stopping step count s1, batch size m1; initialize the track area detection network model parameters W1. Divide the track area detection module training set X1 into several small-batch data sets B1 of m1 training images each.
Step 6: select a small-batch data set B1^j and feed it to the track area detection network module f1; under the model parameters W1, compute the track area detection output Y1 = f1(B1^j; W1).
The error of the track area detection module is calculated with a negative log-likelihood loss function with a weight penalty, and the track area detection network model parameters are then updated.
Step 7: repeat step 6; when the loss L1 has not decreased for s1 consecutive iterations, or when the iteration count t satisfies t > T1, stop iterating to obtain the trained track area detection module f1.
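The training loop of steps 5 to 7 can be sketched as follows. The patent gives the weighted negative log-likelihood loss only as a drawing, so the class-weighted binary NLL below (rare positive track pixels up-weighted against the weak positive-sample ratio) is an assumption, as are the optimizer and the weight w_pos:

    import torch

    def weighted_nll_loss(y_pred, y_true, w_pos=10.0, eps=1e-7):
        # Negative log-likelihood with a weight penalty favoring rare track pixels.
        loss = -(w_pos * y_true * torch.log(y_pred + eps)
                 + (1 - y_true) * torch.log(1 - y_pred + eps))
        return loss.mean()

    def train(model, batches, T1=100, s1=10, lr=1e-3):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        best, stall = float('inf'), 0
        for t in range(T1):                        # total iteration count T1
            for x, y in batches:                   # small-batch data sets B1
                opt.zero_grad()
                loss = weighted_nll_loss(model(x), y)
                loss.backward()
                opt.step()
            if loss.item() < best:
                best, stall = loss.item(), 0
            else:
                stall += 1                         # early-stopping counter
            if stall >= s1:                        # s1 iterations without improvement
                break
        return model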
Step 8: set a track-area ratio threshold p; in the detection result Y1 of the track area detection module f1, select every h × h region whose track-area ratio exceeds p and record the center coordinates V_i (i = 1, 2, …) of all such regions; crop H_colour, H_gray and G_mix at the V_i coordinates to obtain the training image sets that form the in-area track position detection and extraction training set X2. At the same time, crop the track class-label maps (pixel value 1 on the track, 0 elsewhere) at the V_i coordinates to obtain the regional track class-label maps.
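A sketch of this region selection, assuming non-overlapping h × h windows; the window size, stride, threshold value and return format are assumptions, since the patent only fixes the ratio test against p and the recording of the centers V_i:

    import numpy as np

    def crop_track_regions(Y1, sources, h=128, p=0.05):
        # Y1: track-area detection map; sources: [H_colour, H_gray, G_mix].
        centers, crops = [], []
        H, L = Y1.shape[:2]
        for x in range(0, H - h + 1, h):
            for y in range(0, L - h + 1, h):
                if Y1[x:x + h, y:y + h].mean() > p:           # track-area ratio test
                    centers.append((x + h // 2, y + h // 2))  # center coordinate V_i
                    crops.append([s[x:x + h, y:y + h] for s in sources])
        return centers, crops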
Step 9: set the structural parameters of the in-area track position detection and extraction network: number of network layers, number of neuron nodes per layer, learning rate and decay rate λ2; set the training parameters: total iteration count T2, early-stopping step count s2, batch size m2. Initialize the in-area track position detection and extraction network model parameters W2. Divide the in-area track position detection and extraction training set X2 into several small-batch data sets B2 of m2 training images each.
Step 10: select a small-batch data set B2^j and feed it to the in-area track position detection and extraction module f2; under the model parameters W2, compute the in-area track position detection output Y2 = f2(B2^j; W2).
The error of the in-area track position detection and extraction module is calculated with a negative log-likelihood loss function with a weight penalty, and the in-area track position detection and extraction network model parameters are then updated.
Step 11: repeat step 10; when the loss L2 has not decreased for s2 consecutive iterations, or when the iteration count t satisfies t > T2, stop iterating to obtain the trained in-area track position detection and extraction module f2.
Step 12: process the azimuth history map to be detected according to steps 1 and 2, input it into the trained HADCNN model for recognition, and splice and overlay the recognition results according to the cutting positions V_i (i = 1, 2, …) to restore the track extraction map of the azimuth history map.
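The restoration step can be sketched as pasting each regional prediction back at its recorded center. Taking the pixelwise maximum where windows overlap is an assumption about how the overlapped results are merged; the patent only states that the results are spliced and overlaid:

    import numpy as np

    def stitch(preds, centers, H, L, h=128):
        canvas = np.zeros((H, L))
        for pred, (cx, cy) in zip(preds, centers):
            x, y = cx - h // 2, cy - h // 2        # top-left corner from center V_i
            canvas[x:x + h, y:y + h] = np.maximum(canvas[x:x + h, y:y + h], pred)
        return canvas                              # track extraction map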
The following detailed description of embodiments of the invention is intended to be illustrative, and not to be construed as limiting the invention.
Introduction of the database of this example: the database consists of azimuth history maps and the track maps corresponding to them. There are 100 training samples and 100 test samples, each of size 1200 × 900 with gray levels ranging from 0 to 255.
Step 1: acquire 100 color azimuth history maps, each of length 1200, height 900 and color depth 8.
Decompose each 1200 × 900 azimuth history map C into R, G, B color channels and convert it into a gray-scale map I: I(x,y) = 0.3 × R(x,y) + 0.59 × G(x,y) + 0.11 × B(x,y).
Obtain the color histogram equalization result map H_colour from the color azimuth history map C with a histogram equalization algorithm.
Step 2: the gray-scale map I is processed as follows:
(1) Count the number n_k of pixels at each gray level r_k (k = 0, 1, 2, …, 255) of the gray-scale map I, and compute the mapped pixel value s_k as the cumulative probability of the levels multiplied by 255:
s_k = 255 × (n_0 + n_1 + … + n_k) / (1200 × 900)
Round s_k to obtain [s_k] (the brackets denote rounding), and replace each r_k in the azimuth history map with [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 255
This finally yields the histogram equalization result map H_gray: H_gray(x,y) = [s_k] wherever I(x,y) = r_k, k = 0, 1, …, 255.
(2) Weight the gray values in the neighborhood above, below, left and right of each pixel of the image with the Sobel horizontal and vertical edge detection operators to obtain the edge image G_sobel.
(3) Apply median filtering to the gray-scale map I to obtain I_median, apply histogram equalization to I_median to obtain I_median-h, and finally apply the Sobel algorithm to I_median-h to obtain the mixed gray-scale preprocessing result map G_mix.
During median filtering, the gray-scale map I of the azimuth history map can be filtered with a 3 × 3 median filter template:
round 3/2 down to obtain l1 = 1, then pad I with a white border of width 1 (pixel value 255) to obtain the white-padded gray-scale map I0, with I0(x,y) = I(x − 1, y − 1) for 1 < x ≤ 1201, 1 < y ≤ 901.
For each point I0(x,y), sort the values in its 3 × 3 neighborhood from large to small as z(1), …, z(med), …, z(max) to obtain the median filtering result I_median: I_median(x,y) = z(med), where 1 < x ≤ 1201, 1 < y ≤ 901.
Step 3: construct the deep convolutional neural network model HADCNN for track detection and extraction in the azimuth history map.
The model consists of a track area detection module and an in-area track position detection and extraction module.
The track area detection module is a track area detection network composed of 17 convolutional layers, 3 pooling layers and 3 upsampling layers, and adopts a fusion feature correction method: the output of a convolutional layer following a pooling layer is extracted and fused into the convolutional layer preceding the corresponding upsampling layer. This corrects the extracted features so that they are closer to the expected features.
The in-area track position detection and extraction module is an in-area track position detection and extraction network composed of 17 convolutional layers, 3 pooling layers and 3 upsampling layers, and also adopts the fusion feature correction method.
Step 4: construct the HADCNN model training set, which consists of the track area detection module sub-training set and the in-area track position detection and extraction module sub-training set.
The track area detection module training set X1 comprises the training image sets constructed from H_colour, H_gray and G_sobel together with the binarized track-area class-label maps (pixel value 1 in the track area, 0 in the background).
Step 5: set the structural parameters of the track area detection network (number of network layers, number of neuron nodes per layer, learning rate) with a decay rate of 0.5; set the training parameters: total iteration count 100, early-stopping step count 10, batch size 2; initialize the track area detection network model parameters W1. Divide the track area detection module training set X1 into several small-batch data sets B1 of 2 training images each.
Step 6: select a small-batch data set B1^j and feed it to the track area detection network module f1; under the model parameters W1, compute the track area detection output Y1 = f1(B1^j; W1).
The error of the track area detection module is calculated with a negative log-likelihood loss function with a weight penalty, and the track area detection network model parameters are then updated.
Step 7: repeat step 6; when the loss L1 has not decreased for 10 consecutive iterations, or when the iteration count t satisfies t > 100, stop iterating to obtain the trained track area detection module f1.
Step 8: set a track-area ratio threshold p; in the detection result Y1 of the track area detection module f1, select every h × h region whose track-area ratio exceeds p and record the center coordinates V_i (i = 1, 2, …) of all such regions; crop H_colour, H_gray and G_mix at the V_i coordinates to obtain the training image sets that form the in-area track position detection and extraction training set X2. At the same time, crop the track class-label maps (pixel value 1 on the track, 0 elsewhere) at the V_i coordinates to obtain the regional track class-label maps.
Step 9: set the structural parameters of the in-area track position detection and extraction network (number of network layers, number of neuron nodes per layer, learning rate) with a decay rate of 0.5; set the training parameters: total iteration count 100, early-stopping step count 10, batch size 2. Initialize the in-area track position detection and extraction network model parameters W2. Divide the in-area track position detection and extraction training set X2 into several small-batch data sets B2 of 2 training images each.
Step 10: select a small-batch data set B2^j and feed it to the in-area track position detection and extraction module f2; under the model parameters W2, compute the in-area track position detection output Y2 = f2(B2^j; W2).
The error of the in-area track position detection and extraction module is calculated with a negative log-likelihood loss function with a weight penalty, and the in-area track position detection and extraction network model parameters are then updated.
Step 11: repeat step 10; when the loss L2 has not decreased for 10 consecutive iterations, or when the iteration count t satisfies t > 100, stop iterating to obtain the trained in-area track position detection and extraction module f2.
Step 12: process the azimuth history map to be detected according to steps 1 and 2, input it into the trained HADCNN model for recognition, and splice and overlay the recognition results according to the cutting positions V_i (i = 1, 2, …) to restore the track extraction map of the azimuth history map.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention.

Claims (9)

1. A method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network, characterized by comprising the following steps:
step 1: acquire N color azimuth history maps, each of length H, height L and color depth E; decompose each H × L azimuth history map C into R, G, B color channels and convert it into a gray-scale map I; obtain the color histogram equalization result map H_colour from the color azimuth history map C with a histogram equalization algorithm;
step 2: for the gray-scale map I of each color azimuth history map, obtain the corresponding histogram equalization result map H_gray, edge image G_sobel and mixed gray-scale preprocessing result map G_mix;
step 3: construct the deep convolutional neural network model HADCNN for track detection and extraction in the azimuth history map;
the HADCNN model consists of a track area detection module and an in-area track position detection and extraction module;
the track area detection module is a track area detection network composed of N1 convolutional layers, M1 pooling layers and M1 upsampling layers;
the in-area track position detection and extraction module is an in-area track position detection and extraction network composed of N2 convolutional layers, M2 pooling layers and M2 upsampling layers;
step 4: construct the HADCNN model training set, which consists of the track area detection module sub-training set and the in-area track position detection and extraction module sub-training set;
wherein the track area detection module training set X1 comprises the training image sets constructed from H_colour, H_gray and G_sobel together with the binarized track-area class-label maps, 0 < i ≤ N;
step 5: set the structural parameters of the track area detection network: number of network layers, number of neuron nodes per layer, learning rate and decay rate λ1; set the training parameters: total iteration count T1, early-stopping step count s1, batch size m1; initialize the track area detection network model parameters W1; divide the track area detection module training set X1 into several small-batch data sets B1 of m1 training images each;
step 6: select a small-batch data set B1^j and feed it to the track area detection network module f1, 0 < j ≤ m1; under the model parameters W1, compute the track area detection output Y1 = f1(B1^j; W1), calculate the error loss L1, and then update the track area detection network model parameters;
step 7: repeat step 6; when the loss L1 has not decreased for s1 consecutive iterations, or when the iteration count t satisfies t > T1, stop iterating to obtain the trained track area detection module f1;
step 8: set a track-area ratio threshold p; in the detection result Y1 of the track area detection module f1, select every h × h region whose track-area ratio exceeds p, record the center coordinates V_k of all such regions, and crop H_colour, H_gray and G_mix at the V_k coordinates to obtain the training image sets that form the in-area track position detection and extraction training set X2; at the same time, crop the binarized track class-label maps at the V_k coordinates to obtain the regional track class-label maps;
step 9: set the structural parameters of the in-area track position detection and extraction network: number of network layers, number of neuron nodes per layer, learning rate and decay rate λ2; set the training parameters: total iteration count T2, early-stopping step count s2, batch size m2; initialize the in-area track position detection and extraction network model parameters W2; divide the in-area track position detection and extraction training set X2 into several small-batch data sets B2 of m2 training images each;
step 10: select a small-batch data set B2^j and feed it to the in-area track position detection and extraction module f2, 0 < j ≤ m2; under the model parameters W2, compute the in-area track position detection output Y2 = f2(B2^j; W2), calculate the error loss L2, and then update the in-area track position detection and extraction network model parameters;
step 11: repeat step 10; when the loss L2 has not decreased for s2 consecutive iterations, or when the iteration count t satisfies t > T2, stop iterating to obtain the trained in-area track position detection and extraction module f2;
step 12: process the azimuth history map to be detected according to steps 1 and 2, input it into the trained HADCNN model for recognition, and splice and overlay the recognition results according to the cutting positions V_k to restore the track extraction map of the azimuth history map.
2. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that in step 1 the color azimuth history map is decomposed into R, G, B color channels and converted into the gray-scale map I by:
I(x,y) = 0.3 × R(x,y) + 0.59 × G(x,y) + 0.11 × B(x,y).
3. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that in step 2 the histogram equalization result map H_gray corresponding to the gray-scale map I is obtained as follows:
count the number n_k of pixels at each gray level r_k of the gray-scale map I, k = 0, 1, 2, …, 2^E − 1, and compute the mapped pixel value s_k as the cumulative probability of the levels multiplied by 2^E − 1:
s_k = (2^E − 1) × (n_0 + n_1 + … + n_k) / (H × L)
round s_k to obtain [s_k], and replace each r_k in the azimuth history map with [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 2^E − 1
finally obtaining the histogram equalization result map H_gray: H_gray(x,y) = [s_k] wherever I(x,y) = r_k, k = 0, 1, …, 2^E − 1.
4. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that in step 2 the edge image G_sobel of the gray-scale map I is obtained as follows:
weight the gray values in the neighborhood above, below, left and right of each pixel of the image with the Sobel horizontal and vertical edge detection operators to obtain the edge image G_sobel.
5. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that in step 2 the mixed gray-scale preprocessing result map G_mix of the gray-scale map I is obtained as follows:
apply median filtering to the gray-scale map I to obtain I_median, apply histogram equalization to I_median to obtain I_median-h, and finally apply the Sobel algorithm to I_median-h to obtain the mixed gray-scale preprocessing result map G_mix.
6. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that in step 2 the median filtering is performed on the gray-scale map I of the azimuth history map with an l × l median filter template:
round l/2 down to obtain l1, then pad I with a white border of width l1 (pixel value 2^E − 1) to obtain the white-padded gray-scale map I0:
I0(x,y) = I(x − l1, y − l1), l1 < x ≤ H + l1, l1 < y ≤ L + l1
for each point I0(x,y), sort the values in its l × l neighborhood from large to small as z(1), …, z(med), …, z(max) to obtain the median filtering result I_median: I_median(x,y) = z(med), where l1 < x ≤ H + l1, l1 < y ≤ L + l1.
7. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that both the track area detection network and the in-area track position detection and extraction network adopt a fusion feature correction method: the output of a convolutional layer following a pooling layer is extracted and fused into the convolutional layer preceding the corresponding upsampling layer.
8. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that in step 6 a negative log-likelihood loss function with a weight penalty is used to compute the error loss L1.
9. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that in step 10 a negative log-likelihood loss function with a weight penalty is used to compute the error loss L2.
CN202110502783.3A 2021-05-09 2021-05-09 Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network Active CN113239775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110502783.3A CN113239775B (en) 2021-05-09 2021-05-09 Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network


Publications (2)

Publication Number Publication Date
CN113239775A true CN113239775A (en) 2021-08-10
CN113239775B CN113239775B (en) 2023-05-02

Family

ID=77132977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110502783.3A Active CN113239775B (en) 2021-05-09 2021-05-09 Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network

Country Status (1)

Country Link
CN (1) CN113239775B (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150078122A1 (en) * 2013-09-13 2015-03-19 Navico Holding As Tracking targets on a sonar image
CN104155632A (en) * 2014-07-18 2014-11-19 南京航空航天大学 Improved subspace sea clutter suppression method based on local correlation
CN107783096A (en) * 2016-08-25 2018-03-09 中国科学院声学研究所 A kind of two-dimensional background equalization methods shown for bearing history figure
KR20180065411A (en) * 2016-12-07 2018-06-18 한국해양과학기술원 System and method for automatic tracking of marine objects
CN108564097A (en) * 2017-12-05 2018-09-21 华南理工大学 A kind of multiscale target detection method based on depth convolutional neural networks
CN110197233A (en) * 2019-06-05 2019-09-03 四川九洲电器集团有限责任公司 A method of aircraft classification is carried out using track
CN110542904A (en) * 2019-08-23 2019-12-06 中国科学院声学研究所 Target automatic discovery method based on underwater sound target azimuth history map
CN111292563A (en) * 2020-05-12 2020-06-16 北京航空航天大学 Flight track prediction method
CN111882585A (en) * 2020-06-11 2020-11-03 中国人民解放军海军工程大学 Passive sonar multi-target azimuth trajectory extraction method, electronic device and computer-readable storage medium
CN112114286A (en) * 2020-06-23 2020-12-22 山东省科学院海洋仪器仪表研究所 Multi-target tracking method based on line spectrum life cycle and single-vector hydrophone
CN112001433A (en) * 2020-08-12 2020-11-27 西安交通大学 Flight path association method, system, equipment and readable storage medium
CN112684454A (en) * 2020-12-04 2021-04-20 中国船舶重工集团公司第七一五研究所 Track cross target association method based on sub-frequency bands
CN112434643A (en) * 2020-12-06 2021-03-02 零八一电子集团有限公司 Classification and identification method for low-slow small targets
CN112668804A (en) * 2021-01-11 2021-04-16 中国海洋大学 Method for predicting broken track of ground wave radar ship

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
RUI CHEN ET AL: "Mobility Modes Awareness from Trajectories Based on Clustering and a Convolutional Neural Network", 《INTERNATIONAL JOURNAL OF GEO-INFORMATION》 *
SHENG SHEN ET AL: "Ship Type Classification by Convolutional Neural Networks with Auditory-Like Mechanisms", 《SENSORS》 *
XIANG CHEN ET AL: "An application of convolutional neural network to derive vessel movement patterns", 《THE 5TH INTERNATIONAL CONFERENCE ON TRANSPORTATION INFORMATION AND SAFETY》 *
侯觉 et al.: "A history map enhancement method based on peak extraction", Ship Electronic Engineering (《舰船电子工程》) *
李子高 et al.: "Automatic detection method for underwater targets based on unmanned platforms", Journal of Harbin Engineering University (《哈尔滨工程大学学报》) *

Also Published As

Publication number Publication date
CN113239775B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN111091105B (en) Remote sensing image target detection method based on new frame regression loss function
CN109583425B (en) Remote sensing image ship integrated recognition method based on deep learning
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN107609525B (en) Remote sensing image target detection method for constructing convolutional neural network based on pruning strategy
CN107564025B (en) Electric power equipment infrared image semantic segmentation method based on deep neural network
CN110084234B (en) Sonar image target identification method based on example segmentation
CN114120102A (en) Boundary-optimized remote sensing image semantic segmentation method, device, equipment and medium
CN107993215A (en) A kind of weather radar image processing method and system
CN106875395B (en) Super-pixel-level SAR image change detection method based on deep neural network
CN109035172B (en) Non-local mean ultrasonic image denoising method based on deep learning
CN112633149B (en) Domain-adaptive foggy-day image target detection method and device
CN111626993A (en) Image automatic detection counting method and system based on embedded FEFnet network
CN105809121A (en) Multi-characteristic synergic traffic sign detection and identification method
CN112950780B (en) Intelligent network map generation method and system based on remote sensing image
CN106408030A (en) SAR image classification method based on middle lamella semantic attribute and convolution neural network
CN106611420A (en) SAR image segmentation method based on deconvolution network and sketch direction constraint
CN106897681A (en) A kind of remote sensing images comparative analysis method and system
CN111145145B (en) Image surface defect detection method based on MobileNet
CN106991686A (en) A kind of level set contour tracing method based on super-pixel optical flow field
CN109766823A (en) A kind of high-definition remote sensing ship detecting method based on deep layer convolutional neural networks
CN112668441B (en) Satellite remote sensing image airplane target identification method combined with priori knowledge
CN113052215A (en) Sonar image automatic target identification method based on neural network visualization
CN110443155A (en) A kind of visual aid identification and classification method based on convolutional neural networks
CN113971764A (en) Remote sensing image small target detection method based on improved YOLOv3
CN114972759A (en) Remote sensing image semantic segmentation method based on hierarchical contour cost function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant