CN113239775B - Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network - Google Patents

Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network

Info

Publication number
CN113239775B
Authority
CN
China
Prior art keywords
track
azimuth
detection
extraction
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110502783.3A
Other languages
Chinese (zh)
Other versions
CN113239775A (en)
Inventor
杨宏晖
于传林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202110502783.3A priority Critical patent/CN113239775B/en
Publication of CN113239775A publication Critical patent/CN113239775A/en
Application granted granted Critical
Publication of CN113239775B publication Critical patent/CN113239775B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/203 Specially adapted for sailing ships
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88 Sonar systems specially adapted for specific applications
    • G01S15/93 Sonar systems specially adapted for specific applications for anti-collision purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02 Preprocessing
    • G06F2218/04 Denoising
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Automation & Control Theory (AREA)
  • Acoustics & Sound (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network. The method constructs the HADCNN model in a modularized way to simulate the whole-to-local process of human vision while realizing hierarchical attention, and achieves noise-interference filtering and track enhancement through mixed grayscale preprocessing of the azimuth history map. In the HADCNN model, the track-area detection module detects and extracts track areas in the whole azimuth history map, and the in-area track-position detection and extraction module detects and extracts tracks in the local images containing those track areas. The method strengthens tracks and suppresses noise with a preprocessing stage, thereby increasing the inter-class feature difference, reducing the computational cost of the subsequent network, and improving detection and extraction efficiency.

Description

Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network
Technical Field
The invention relates to a method for detecting and extracting tracks in an azimuth history map, and in particular to a track detection and extraction method based on a hierarchical attention deep convolutional neural network.
Background
Track detection and extraction from the passive sonar azimuth history map is an important means of judging ship tracks. Ship tracks in a complex azimuth history map are judged by preprocessing the map and then detecting and extracting the tracks. Common preprocessing methods that suppress background noise interference tend to destroy part of the track information along with the noise, leaving tracks broken and discontinuous. Preprocessing methods that enhance track-line features do strengthen weak tracks to some extent, but they also amplify noise interference and cause degradation such as over-enhancement in some images. These preprocessing methods can improve the display of the azimuth history map, but they require manually set parameters, and improper settings degrade the processing result, increasing the difficulty of practical application.
Traditional track detection and extraction methods mostly use thresholding or probabilistic tracking. Thresholding is simple to engineer, but it cannot detect and extract tracks effectively under heavy background noise interference, and detection errors occur when two tracks are too close together or intersect. Probabilistic tracking takes single-frame image data as input, judges the track position within a region, and exploits the facts that a track cannot bend abruptly and that pixel values change slowly along a track to search for track points within a fixed range and connect them, thus detecting and extracting the track. Probabilistic tracking, however, can misjudge noise interference that resembles a track. Single-frame image data often contain many interfering points, and tracking from the detected points without human judgment easily produces target false alarms. Even when multi-frame data in the target domain are examined during broken-track association correction, some false tracks remain in the image and affect the detection result. Moreover, traditional track detection and extraction requires manually set parameters and generalizes poorly.
Current deep convolutional neural networks have great advantages in target detection and extraction and are mainly applied to face recognition, object recognition and extraction in road scenes, and similar tasks. In those application scenes the target object occupies a relatively large proportion of the image and has a clear outline, so a deep convolutional neural network can extract a relatively good result. In the azimuth history map, however, the proportion of track pixels is extremely low and the track has no obvious outline; at the same time the track has a low signal-to-noise ratio, varied noise-interference characteristics, and crossings or aliasing, so directly applying an existing network to track detection and extraction in the azimuth history map does not achieve the desired effect. A deep convolutional neural network must therefore be designed specifically for the characteristics of tracks in the azimuth history map to achieve the goal of track detection and extraction.
Disclosure of Invention
Aiming at the problem of track detection and extraction in the passive sonar azimuth history map, the invention provides a hierarchical attention deep convolutional neural network (HADCNN). Through a task-modularized design, the network simulates the recognition mechanism of biological vision, which moves from an overall overview to local detail: it builds a track-area detection module and an in-area track-position detection and extraction module, realizing hierarchical attention over image regions and addressing the weak positive-sample proportion of the azimuth history map. Each module progressively completes ship-track-area detection and extraction and in-area track-position detection and extraction with a deep convolutional neural network (DCNN). Within each module, hierarchical attention over features is realized by fusing the feature maps of shallow and deep convolutional layers during training, which increases detection and extraction precision. The hierarchical attention deep convolutional neural network provides a new approach to track detection and extraction in the azimuth history map.
The technical scheme of the invention is as follows:
the method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network comprises the following steps:
step 1: obtain N color azimuth history maps, each of length H, height L, and color depth E; decompose each H×L azimuth history map C into R, G, B color channels and convert it into a grayscale map I; apply a histogram-equalization algorithm to the color map C to obtain the color histogram-equalization result map H_colour;
Step 2: for the gray level diagram I of each color azimuth history diagram, a corresponding histogram equalization result diagram H is also obtained gray Edge image G sobel Hybrid gray scale preprocessing result graph G mix
Step 3: constructing a deep convolutional neural network model HADCNN for detecting and extracting azimuth course tracks;
the deep convolutional neural network model HADCNN; the track area detection module and the track position detection and extraction module in the track area are formed;
the track area detection module is formed by N 1 Layer convolution layer, M 1 Layer pooling layer, M 1 A track area detection network formed by an on-layer sampling layer;
the track position detection and extraction module in the track area is composed of N 2 Layer convolution layer, M 2 Layer pooling layer, M 2 A track position detection and extraction network in a track area formed by the layer up-sampling layer;
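For illustration only, a minimal PyTorch sketch of one such module follows; the class name, the channel widths, and the default of two pooling stages are assumptions made for the sketch, not the patent's actual configuration.

```python
import torch
import torch.nn as nn

class DetectionModule(nn.Module):
    """Sketch of one HADCNN module: convolutional layers interleaved with
    pooling layers on the way down and matching upsampling layers on the
    way up, ending in a per-pixel track/background probability map."""
    def __init__(self, in_ch=3, base_ch=16, n_pool=2):
        super().__init__()
        self.downs = nn.ModuleList()
        ch = in_ch
        for i in range(n_pool):                       # M pooling stages
            out = base_ch * (2 ** i)
            self.downs.append(nn.Sequential(
                nn.Conv2d(ch, out, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2)))
            ch = out
        self.ups = nn.ModuleList()
        for i in reversed(range(n_pool)):             # M upsampling stages
            out = base_ch * (2 ** i)
            self.ups.append(nn.Sequential(
                nn.Upsample(scale_factor=2, mode='nearest'),
                nn.Conv2d(ch, out, 3, padding=1), nn.ReLU()))
            ch = out
        self.head = nn.Conv2d(ch, 1, 1)               # per-pixel logit

    def forward(self, x):
        for d in self.downs:
            x = d(x)
        for u in self.ups:
            x = u(x)
        return torch.sigmoid(self.head(x))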
step 4: construct the HADCNN model training set, which consists of a track-area detection module sub-training set and an in-area track-position detection and extraction module sub-training set;
wherein the track-area detection module training set X_1 comprises the training image set {H_colour^i, H_gray^i, G_sobel^i} (0 < i ≤ N) built from H_colour, H_gray, and G_sobel, together with the corresponding binarized track-area class-label maps D_1^i;
Step 5: setting a track area detection network structure parameter: network layer number, number of neuronal nodes at each layer, learning rate, decay rate lambda 1 The method comprises the steps of carrying out a first treatment on the surface of the Setting training parameters: total number of iterations T 1 Number s of early stop steps 1 Batch size m 1 The method comprises the steps of carrying out a first treatment on the surface of the Initializing track area detection network model parameters W 1 The method comprises the steps of carrying out a first treatment on the surface of the Training set X of track area detection module 1 Divided into a plurality of groups including m 1 Small batch data set B of individual training images 1
Step 6: selecting a small batch of data sets
Figure BDA0003057085870000033
Network module f as track area detection 1 Input of 0 < j.ltoreq.m 1 In the model parameters W 1 Under the action, calculating the detection output Y of the track area 1 :/>
Figure BDA0003057085870000034
And calculate the error loss L 1 Then updating the parameters of the track area detection network model;
step 7: repeat step 6; when the loss L_1 has not decreased for s_1 consecutive iterations, or when the iteration count t satisfies t > T_1, stop iterating to obtain the trained track-area detection module f_1;
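A compact sketch of the training procedure in steps 5 to 7, assuming the PyTorch setting above; the SGD optimizer, learning rate, and per-epoch early-stopping bookkeeping are choices made for the sketch rather than taken from the patent.

```python
import torch

def train_module(model, batches, labels, T1=100, s1=10, lr=1e-3, lam1=1e-4):
    """Steps 5-7: update W_1 until the loss L_1 fails to decrease for s1
    consecutive iterations or the iteration count exceeds T1."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=lam1)
    best_loss, stall = float("inf"), 0
    for t in range(T1):
        for x, y in zip(batches, labels):          # small batches B_1^j
            p = model(x).clamp(1e-7, 1 - 1e-7)
            loss = -(y * p.log() + (1 - y) * (1 - p).log()).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        if loss.item() < best_loss:
            best_loss, stall = loss.item(), 0
        else:
            stall += 1
            if stall >= s1:                        # early stop after s1 stalls
                break
    return model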
Step 8: setting a track area duty ratio threshold p, and detecting a module f according to the track area 1 Is the detection result Y of (2) 1 Selecting all Y with track area ratio larger than p in h multiplied by h area 1 Local area and recording central coordinates V of all local areas k According to V k Coordinate cutting H colour 、H gray 、G mix Obtaining track position detection and extraction training image set in region
Figure BDA0003057085870000035
Track position detection and extraction training set X in formation area 2 The method comprises the steps of carrying out a first treatment on the surface of the At the same time according to V k Coordinate cutting binarized track class label graph to obtain regional track class label graph
Figure BDA0003057085870000036
/>
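A sketch of the region-selection logic of step 8, in NumPy; the window size h, the non-overlapping scan stride, and the threshold value are illustrative assumptions.

```python
import numpy as np

def select_track_regions(Y1, h=64, p=0.05):
    """Step 8: scan the track-area detection output Y1 with an h-by-h
    window and keep the centre coordinates V_k of every window whose
    track-pixel ratio exceeds the threshold p."""
    centers = []
    H, L = Y1.shape
    for x in range(0, H - h + 1, h):
        for y in range(0, L - h + 1, h):
            if Y1[x:x + h, y:y + h].mean() > p:
                centers.append((x + h // 2, y + h // 2))
    return centers

def crop_at(img, center, h=64):
    """Cut an h-by-h patch of a preprocessed map around a centre V_k."""
    cx, cy = center
    return img[cx - h // 2: cx + h // 2, cy - h // 2: cy + h // 2]
```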
Step 9: detecting and extracting network structure parameters in the track position in the setting area: network layer number, number of neuronal nodes at each layer, learning rate, decay rate lambda 2 The method comprises the steps of carrying out a first treatment on the surface of the Setting training parameters: total number of iterations T 2 Number s of early stop steps 2 Batch size m 2 The method comprises the steps of carrying out a first treatment on the surface of the Track position detection and network model parameter W extraction in initialization area 2 The method comprises the steps of carrying out a first treatment on the surface of the Track position detection and training set X extraction in region 2 Divided into a plurality of groups including m 2 Small batch data set B of individual training images 2
Step 10: selecting a small batch of data sets
Figure BDA0003057085870000037
As a track position detection and extraction module f in an area 2 Input of 0 < j.ltoreq.m 2 In the model parameters W 2 Under the action, the track position detection output Y in the region is calculated 2
Figure BDA0003057085870000041
And calculate the error loss L 2 Then updating the track position detection and network model parameter extraction in the region;
step 11: repeat step 10; when the loss L_2 has not decreased for s_2 consecutive iterations, or when the iteration count t satisfies t > T_2, stop iterating to obtain the trained in-area track-position detection and extraction module f_2;
Step 12: processing azimuth history map to be detected according to step 1 and step 2, inputting trained HADCNN model for identification, and determining cutting position V k And splicing, overlapping and restoring the identification results to obtain the azimuth calendar chart track extraction chart.
Further, the color azimuth history is decomposed R, G, B by color channel and converted into a gray-scale map I:
I(x,y)=0.3×R(x,y)+0.59×G(x,y)+0.11×B(x,y)。
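This is the standard luminance weighting; a direct NumPy restatement (the R, G, B channel order of C is an assumption):

```python
import numpy as np

def to_gray(C):
    """Convert an H-by-L-by-3 color azimuth history map to the grayscale
    map I with the stated weights 0.3, 0.59, 0.11."""
    R, G, B = C[..., 0], C[..., 1], C[..., 2]
    return 0.3 * R + 0.59 * G + 0.11 * B
```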
Further, in step 2, the histogram-equalization result map H_gray corresponding to the grayscale map I is obtained by the following steps:
for the grayscale map I, count the number n_k of pixels at each gray level r_k, k = 0, 1, 2, …, 2^E − 1; compute the cumulative probability of the gray levels up to r_k and multiply it by 2^E − 1 to obtain the cumulative-probability pixel mapping value s_k:
s_k = (2^E − 1) × Σ_{i=0}^{k} n_i / (H × L)
Round s_k to obtain [s_k], and replace r_k in the azimuth history map by [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 2^E − 1
Finally obtain the histogram-equalization result map H_gray: H_gray(x, y) = [s_k] wherever I(x, y) == r_k, k = 0, 1, …, 2^E − 1.
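A NumPy sketch of exactly this mapping, assuming E = 8 (256 gray levels) and an integer-typed input image:

```python
import numpy as np

def hist_equalize(I, E=8):
    """Map each gray level r_k to [s_k], where s_k is the cumulative
    probability of levels 0..k scaled by 2^E - 1."""
    levels = 2 ** E
    n_k = np.bincount(I.ravel(), minlength=levels)   # pixels per level
    s_k = (levels - 1) * np.cumsum(n_k) / I.size     # scaled CDF
    lut = np.rint(s_k).astype(I.dtype)               # [s_k]: rounded
    return lut[I]                                    # r_k -> [s_k]
```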
Further, in step 2, the edge image G of the gray scale image I sobel The method is characterized by comprising the following steps of:
the gray level value of each pixel in the image is weighted by a Sobel horizontal edge detection operator and a vertical edge detection operator to obtain an edge image G sobel
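A sketch of the Sobel step; combining the two directional responses by their magnitude is a common convention and an assumption here, since the patent does not state the combination rule.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # vertical edges
SOBEL_Y = SOBEL_X.T                                        # horizontal edges

def sobel_edges(I):
    """Weight each pixel by the horizontal and vertical Sobel operators
    and combine the two responses into the edge image G_sobel."""
    gx = convolve(I.astype(float), SOBEL_X)
    gy = convolve(I.astype(float), SOBEL_Y)
    return np.hypot(gx, gy)
```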
Further, in step 2, the mixed gray scale preprocessing result graph G of the gray scale graph I mix The method is characterized by comprising the following steps of:
the gray level diagram I is filtered through a median to obtain I median Then I is carried out median Histogram equalization to obtain I median-h Finally, I is median-h Obtaining a mixed gray preprocessing result graph G through a Sobel algorithm mix
In step 2, the median filtering is performed on the grayscale azimuth history map I with an l×l median-filter template:
round l/2 down to obtain l_1, then pad I to obtain the padded grayscale map I_0, where
I_0(x, y) = I(x − l_1, y − l_1), l_1 < x ≤ H + l_1, l_1 < y ≤ L + l_1.
For each point I_0(x, y) of I_0, sort the values of its l×l neighborhood as z(1), …, z(median), …, z(max) to produce the median-filtering result I_median: I_median(x, y) = z(median), where l_1 < x ≤ H + l_1, l_1 < y ≤ L + l_1.
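The padding-and-sorting description corresponds to the following NumPy sketch; zero padding of the border is assumed, since the original border formula was rendered only as an image.

```python
import numpy as np

def median_filter(I, l=3):
    """Pad I by l_1 = floor(l/2) on every border (zeros assumed), then
    replace each pixel with the median of its l-by-l neighbourhood."""
    l1 = l // 2
    I0 = np.pad(I, l1, mode='constant')              # padded map I_0
    out = np.empty_like(I)
    H, L = I.shape
    for x in range(H):
        for y in range(L):
            out[x, y] = np.median(I0[x:x + l, y:y + l])  # z(median)
    return out
```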
Furthermore, the track-area detection network and the in-area track-position detection and extraction network both adopt a fusion feature-correction method: the output of the convolutional layer after a pooling layer is partially extracted and fused into the convolutional layer before the corresponding upsampling layer.
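The fusion can be sketched as a skip connection; reading "fused" as channel concatenation is an assumption, and the channel counts are placeholders.

```python
import torch
import torch.nn as nn

class FusedBlock(nn.Module):
    """Fusion feature correction: the output of a shallow (post-pooling)
    convolutional layer joins the deep feature map before upsampling,
    so shallow and deep features are trained together."""
    def __init__(self, shallow_ch=16, deep_ch=32):
        super().__init__()
        self.fuse = nn.Conv2d(shallow_ch + deep_ch, deep_ch, 3, padding=1)
        self.up = nn.Upsample(scale_factor=2, mode='nearest')

    def forward(self, shallow_feat, deep_feat):
        # both feature maps are assumed to share the same spatial size
        x = torch.cat([shallow_feat, deep_feat], dim=1)
        return self.up(torch.relu(self.fuse(x)))
```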
Further, in step 6, a negative log likelihood loss function with a weight penalty is used:
Figure BDA0003057085870000052
Further, in step 10, a negative log-likelihood loss function with a weight penalty is used:
L_2 = −(1/m_2) Σ_{j=1}^{m_2} log p(D_2^j | B_2^j; W_2) + λ_2 ‖W_2‖², where D_2^j denotes the class-label maps of batch B_2^j.
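With the exact formulas rendered only as images in the source, a reasonable reading of "negative log-likelihood with weight penalty" for per-pixel binary labels is binary cross-entropy plus an L2 term; the sketch below is that assumption, not the patent's verbatim loss.

```python
import torch

def penalized_nll(Y, D, params, lam):
    """Per-pixel negative log-likelihood of the binarized class-label
    map D under predicted track probabilities Y, plus an L2 weight
    penalty with decay rate lam."""
    Y = Y.clamp(1e-7, 1 - 1e-7)
    nll = -(D * Y.log() + (1 - D) * (1 - Y).log()).mean()
    penalty = lam * sum((w ** 2).sum() for w in params)
    return nll + penalty
```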
advantageous effects
The invention provides a method for detecting and extracting tracks in an azimuth history map. The method constructs the HADCNN model in a modularized way to simulate the whole-to-local process of human vision while realizing hierarchical attention, and achieves noise-interference filtering and track enhancement through mixed grayscale preprocessing of the azimuth history map. In the HADCNN model provided by the invention, the track-area detection module detects and extracts the track areas in the whole azimuth history map, and the in-area track-position detection and extraction module then detects and extracts the tracks in the local images containing those track areas. Fusion layers are set inside both modules: the output of a shallow convolutional layer is extracted into a deep convolutional layer to participate in the convolution operation, so the feature maps of the shallow and deep layers are trained together, achieving hierarchical attention over features and increasing detection and extraction precision. The method strengthens tracks and suppresses noise with a preprocessing stage, thereby increasing the inter-class feature difference, reducing the computational cost of the subsequent network, and improving detection and extraction efficiency. Through its modularized design, the HADCNN model simulates the recognition mechanism of biological vision from overall overview to local detail, improves the specialization of each module network for its recognition task, increases the depth and structural diversity of the model, and addresses the low detection and extraction accuracy caused by the weak positive-sample proportion of the azimuth history map, thus improving detection and extraction accuracy.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
fig. 1: layering focuses on deep convolutional neural network models.
Fig. 2: and the ship track area detection and extraction module.
Fig. 3: and the track position detection and extraction module is arranged in the ship area.
Fig. 4: and (3) detecting and extracting a first result based on the track of the HADCNN.
Fig. 5: and (3) detecting and extracting a second result based on the track of the HADCNN.
Fig. 6: and (3) comparing the track detection and the extraction result based on the HADCNN. (a) azimuth calendar (b) class label (c) track detection and extraction result.
Fig. 7: and (3) a track detection and extraction result two-comparison graph based on HADCNN. (a) azimuth calendar (b) class label (c) track detection and extraction result.
Detailed Description
The invention extracts and identifies the features of ship tracks in the passive sonar azimuth history map based on a hierarchical attention deep convolutional neural network and judges the ship tracks in a complex azimuth history map, specifically comprising the following steps:
step 1: obtain N color azimuth history maps, each of length H, height L, and color depth E.
Decompose each H×L azimuth history map C into R, G, B color channels and convert it into a grayscale map I: I(x, y) = t_1 × R(x, y) + t_2 × G(x, y) + t_3 × B(x, y), where 0 < x ≤ H, 0 < y ≤ L and t_1, t_2, t_3 are weights.
Apply a histogram-equalization algorithm to the color azimuth history map C to obtain the color histogram-equalization result map H_colour.
Step 2: the gray map I is processed as follows:
(1) For the grayscale map I, count the number n_k of pixels at each gray level r_k (k = 0, 1, 2, …, 2^E − 1); compute the cumulative probability of the gray levels up to r_k and multiply it by 2^E − 1 to obtain the cumulative-probability pixel mapping value s_k:
s_k = (2^E − 1) × Σ_{i=0}^{k} n_i / (H × L)
Round s_k to obtain [s_k] (brackets denote rounding), and replace r_k in the azimuth history map by [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 2^E − 1
Finally obtain the histogram-equalization result map H_gray: H_gray(x, y) = [s_k] wherever I(x, y) == r_k, k = 0, 1, …, 2^E − 1.
(2) Apply the Sobel horizontal and vertical edge-detection operators to the grayscale map, weighting the gray value of each pixel in the image, to obtain the edge image G_sobel.
(3) Median-filter the grayscale map I to obtain I_median, then histogram-equalize I_median to obtain I_median-h, and finally apply the Sobel algorithm to I_median-h to obtain the mixed grayscale preprocessing result map G_mix. Median filtering first weakens the noise interference in the azimuth history map, and the subsequent histogram equalization and Sobel steps enhance the tracks, so noise-interference filtering and track enhancement are realized at the same time.
When the median filtering is carried out, the median filtering can be carried out on the position history chart gray chart I through an l×l median filtering template:
rounding l/2 to obtain l 1 Then the white-supplementing is carried out on the I to obtain a white-supplementing gray scale image I 0
Figure BDA0003057085870000072
2<x≤H+2,2<y≤L+2,I 0 (x,y)=I(x-2,y-2)。
Select I 0 Point I 0 The l×l neighborhood at (x, y) is sorted from large to small z (1), …, z (mean), …, z (max), yielding median filtering result I median :I median (x, y) =z (mean), where l 1 <x≤H+l 1 ,l 1 <y≤L+l 1
Step 3: and constructing a azimuth calendar chart track detection and extracted deep convolutional neural network model HADCNN.
The model is composed of a track area detection module and a track position detection and extraction module in the track area.
The track area detection module is composed of N 1 Layer convolution layer, M 1 Layer pooling layer, M 1 And a track area detection network formed by the layer up-sampling layers adopts a fusion characteristic correction method, namely, the convolution layer after the pooling layer is partially extracted and fused to the convolution layer before the up-sampling layer. The method realizes correction of the extracted features, so that the extracted features are closer to the expected features.
The track position detecting and extracting module in the track area is composed of N 2 Layer convolution layer, M 2 Layer pooling layer, M 2 The track position detection and extraction network in the track area formed by the layer up-sampling layer adopts the fusion characteristic correction method.
Step 4: and constructing a HADCNN model training set, wherein the training set consists of a track region detection module sub-training set and a track position detection and extraction module sub-training set in the region.
Wherein the track area detection module training set X 1 Comprises a step of H colour 、H gray 、G sobel Constructed training image set
Figure BDA0003057085870000081
Class mark map of track areaClass diagram after binarization (track area pixel value is 1, background pixel value is 0)>
Figure BDA0003057085870000082
Step 5: setting a track area detection network structure parameter: network layer number, number of neuronal nodes at each layer, learning rate, decay rate lambda 1 The method comprises the steps of carrying out a first treatment on the surface of the Setting training parameters: total number of iterations T 1 Number s of early stop steps 1 Batch size m 1 The method comprises the steps of carrying out a first treatment on the surface of the Initializing track area detection network model parameters W 1 . Training set X of track area detection module 1 Divided into a plurality of groups including m 1 Small batch data set B of individual training images 1
Step 6: selecting a small batch of data sets
Figure BDA0003057085870000083
Network module f as track area detection 1 In the model parameters W 1 Under the action, the detection output Y of the track area is calculated 1 :/>
Figure BDA0003057085870000084
Setting an error calculation method of a track area detection module, and using a negative log likelihood loss function with weight punishment and penalty:
Figure BDA0003057085870000085
the track area detection network model parameters are then updated.
Step 7: repeating step 6, when the steps are continuous s 1 Multiple iterations, loss L 1 Without reduction, or when the number of iterations T satisfies T > T 1 When the method is used, iteration is stopped, and a trained track area detection module f is obtained 1
Step 8: setting a track area duty ratio threshold p, and detecting a module f according to the track area 1 Is the detection result Y of (2) 1 Selecting all Y with track area ratio larger than p in h multiplied by h area 1 Local area and recording central coordinates V of all local areas i (i=1, 2,) while following V i (i=1, 2,) coordinate cut H colour 、H gray 、G mix Obtaining track position detection and extraction training image set in region
Figure BDA0003057085870000086
Track position detection and extraction training set X in formation area 2 . At the same time according to V i (i=1, 2,) cutting the track class label graph (the pixel value at the track is 1, and the rest pixel values are 0) to obtain the regional track class label graph
Figure BDA0003057085870000087
Step 9: detecting and extracting network structure parameters in the track position in the setting area: network layer number, number of neuronal nodes at each layer, learning rate, decay rate lambda 2 The method comprises the steps of carrying out a first treatment on the surface of the Setting training parameters: total number of iterations T 2 Number s of early stop steps 2 Batch size m 2 . Track position detection and network model parameter W extraction in initialization area 2 . Track position detection and training set X extraction in region 2 Divided into a plurality of groups including m 2 Small batch data set B of individual training images 2
Step 10: selecting a small batch of data sets
Figure BDA0003057085870000088
As a track position detection and extraction module f in an area 2 In the model parameters W 2 Under the action, the track position detection output Y in the region is calculated 2
Figure BDA0003057085870000091
The error calculation method of the track position detection and extraction module in the set region uses a negative log likelihood loss function with weight punishment and penalty: />
Figure BDA0003057085870000092
And then updating the track position detection and extraction network model parameters in the region.
Step 11: the step 10 is repeated and the process is repeated,when continuing s 2 Multiple iterations, loss L 2 Without reduction, or when the number of iterations T satisfies T > T 2 When the method is used, iteration is stopped, and a track position detection and extraction module f in the trained region is obtained 2
Step 12: processing azimuth history map to be detected according to step 1 and step 2, inputting trained HADCNN model for identification, and determining cutting position V i (i=1, 2,) splicing, overlapping and restoring the identification results to obtain an azimuth calendar chart track extraction chart.
The following detailed description of embodiments of the invention is exemplary and intended to be illustrative of the invention and not to be construed as limiting the invention.
Description of the example database: the database of this example consists of azimuth history maps and corresponding track class-label maps, with 100 training samples and 100 test samples, each of size 1200×900 and with a gray-level range of 0 to 255.
Step 1: 100 color azimuth histories are obtained, wherein the length of each azimuth historie is 1200, the height is 900, and the color depth is 8;
a 1200×900 azimuth calendar C is decomposed R, G, B by color channels and converted into a gray scale I: i (x, y) =0.3×r (x, y) +0.59×g (x, y) +0.11×b (x, y).
The azimuth process chart color chart C obtains a color histogram equalization result chart H through a histogram equalization algorithm colour
Step 2: the gray map I is processed as follows:
(1) For the grayscale map I, count the number n_k of pixels at each gray level r_k (k = 0, 1, 2, …, 255); compute the cumulative probability of the gray levels up to r_k and multiply it by 255 to obtain the cumulative-probability pixel mapping value s_k:
s_k = 255 × Σ_{i=0}^{k} n_i / (1200 × 900)
Round s_k to obtain [s_k] (brackets denote rounding), and replace r_k in the azimuth history map by [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 255
Finally obtain the histogram-equalization result map H_gray: H_gray(x, y) = [s_k] wherever I(x, y) == r_k, k = 0, 1, …, 255.
(2) Apply the Sobel horizontal and vertical edge-detection operators to the grayscale map, weighting the gray value of each pixel in the image, to obtain the edge image G_sobel.
(3) Median-filter the grayscale map I to obtain I_median, then histogram-equalize I_median to obtain I_median-h, and finally apply the Sobel algorithm to I_median-h to obtain the mixed grayscale preprocessing result map G_mix.
When the median filtering is carried out, the median filtering can be carried out on the position history chart gray chart I through a 3X 3 median filtering template:
rounding 3 to obtain 1, and then complementing I to obtain a complemented gray scale image I 0
Figure BDA0003057085870000101
Select I 0 Point I 0 The l×l neighborhood at (x, y) is sorted from large to small z (1), …, z (mean), …, z (max), yielding median filtering result I median :I median (x, y) =z (mean), where l 1 <x≤1201,l 1 <y≤901。
Step 3: and constructing a azimuth calendar chart track detection and extracted deep convolutional neural network model HADCNN.
The model is composed of a track area detection module and a track position detection and extraction module in the track area.
The track area detection module is a track area detection network formed by 17 layers of convolution layers, 3 layers of pooling layers and 3 layers of up-sampling layers, and adopts a fusion characteristic correction method, namely the convolution layers after pooling layers are partially extracted and fused to the convolution layers before the up-sampling layers. The method realizes correction of the extracted features, so that the extracted features are closer to the expected features.
The track position detection and extraction module in the track area is a track position detection and extraction network in the track area, which is composed of 17 layers of convolution layers, 3 layers of pooling layers and 3 layers of upper sampling layers, and the fusion characteristic correction method is adopted.
Step 4: and constructing a HADCNN model training set, wherein the training set consists of a track region detection module sub-training set and a track position detection and extraction module sub-training set in the region.
Wherein the track area detection module training set X 1 Comprises a step of H colour 、H gray 、G sobel Constructed training image set
Figure BDA0003057085870000102
Class mark diagram after binarization processing (the pixel value of the track area is 1 and the pixel value of the background is 0) with the class mark diagram of the track area>
Figure BDA0003057085870000103
Step 5: setting a track area detection network structure parameter: the number of network layers, the number of nodes of each layer of neurons, the learning rate and the attenuation rate are 0.5; setting training parameters: 100 times of iteration, 10 times of early stop steps and 2 batches of sizes; initializing track area detection network model parameters W 1 . Training set X of track area detection module 1 Divided into a plurality of groups including m 1 Small batch data set B of individual training images 1
Step 6: selecting a small batch of data sets
Figure BDA0003057085870000111
Network module f as track area detection 1 In the model parameters W 1 Under the action, the detection output Y of the track area is calculated 1 :/>
Figure BDA0003057085870000112
Error of setting track area detection moduleThe difference calculation method uses a negative log likelihood loss function with weight punishment and penalty:
Figure BDA0003057085870000113
the track area detection network model parameters are then updated.
Step 7: repeating step 6, and losing L when 10 iterations are continued 1 If the iteration times t are not reduced or are more than 100, stopping iteration to obtain a trained track area detection module f 1
Step 8: setting a track area duty ratio threshold p, and detecting a module f according to the track area 1 Is the detection result Y of (2) 1 Selecting all Y with track area ratio larger than p in h multiplied by h area 1 Local area and recording central coordinates V of all local areas i (i=1, 2,) while following V i (i=1, 2,) coordinate cut H colour 、H gray 、G mix Obtaining track position detection and extraction training image set in region
Figure BDA0003057085870000114
Track position detection and extraction training set X in formation area 2 . At the same time according to V i (i=1, 2,) cutting the track class label graph (the pixel value at the track is 1, and the rest pixel values are 0) to obtain the regional track class label graph
Figure BDA0003057085870000115
Step 9: detecting and extracting network structure parameters in the track position in the setting area: the number of network layers, the number of nodes of each layer of neurons, the learning rate and the attenuation rate are 0.5; setting training parameters: 100 times of iteration, 10 steps of early stop and 2 batch sizes. Track position detection and network model parameter W extraction in initialization area 2 . Track position detection and training set X extraction in region 2 Divided into a plurality of groups including m 2 Small batch data set B of individual training images 2
Step 10: selecting a small batch of data sets
Figure BDA0003057085870000116
As a track position detection and extraction module f in an area 2 In the model parameters W 2 Under the action, the track position detection output Y in the region is calculated 2
Figure BDA0003057085870000117
The error calculation method of the track position detection and extraction module in the set region uses a negative log likelihood loss function with weight punishment and penalty: />
Figure BDA0003057085870000118
And then updating the track position detection and extraction network model parameters in the region.
Step 11: repeating step 10, and losing L when 10 iterations are continued 2 If the iteration number t is not reduced or is more than 100, stopping iteration to obtain a track position detection and extraction module f in the trained region 2
Step 12: processing azimuth history map to be detected according to step 1 and step 2, inputting trained HADCNN model for identification, and determining cutting position V i (i=1, 2,) splicing, overlapping and restoring the identification results to obtain an azimuth calendar chart track extraction chart.
Although embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, substitutions, and alterations may be made to the above embodiments by those skilled in the art without departing from the spirit and principles of the invention.

Claims (8)

1. A method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network, characterized by comprising the following steps:
step 1: obtain N color azimuth history maps, each of length H, height L, and color depth E; decompose each H×L azimuth history map C into R, G, B color channels and convert it into a grayscale map I; apply a histogram-equalization algorithm to the color map C to obtain the color histogram-equalization result map H_colour;
step 2: for the grayscale map I of each color azimuth history map, obtain the corresponding histogram-equalization result map H_gray, edge image G_sobel, and mixed grayscale preprocessing result map G_mix;
wherein the mixed grayscale preprocessing result map G_mix is obtained by the following steps: median-filter the grayscale map I to obtain I_median, then histogram-equalize I_median to obtain I_median-h, and finally apply the Sobel algorithm to I_median-h to obtain the mixed grayscale preprocessing result map G_mix;
Step 3: constructing a deep convolutional neural network model HADCNN for detecting and extracting azimuth course tracks;
the HADCNN model consists of a track area detection module and a track position detection and extraction module in the track area;
the track area detection module is formed by N 1 Layer convolution layer, M 1 Layer pooling layer, M 1 A track area detection network formed by an on-layer sampling layer;
the track position detection and extraction module in the track area is composed of N 2 Layer convolution layer, M 2 Layer pooling layer, M 2 A track position detection and extraction network in a track area formed by the layer up-sampling layer;
step 4: construct the HADCNN model training set, which consists of a track-area detection module sub-training set and an in-area track-position detection and extraction module sub-training set;
wherein the track-area detection module training set X_1 comprises the training image set {H_colour^i, H_gray^i, G_sobel^i} (0 < i ≤ N) built from H_colour, H_gray, and G_sobel, together with the corresponding binarized track-area class-label maps D_1^i;
Step 5: setting a track area detection network structure parameter: network layer number, number of neuronal nodes at each layer, learning rate, decay rate lambda 1 The method comprises the steps of carrying out a first treatment on the surface of the Setting training parameters: total number of iterations T 1 Number s of early stop steps 1 Batch size m 1 The method comprises the steps of carrying out a first treatment on the surface of the Initializing track area detection network model parameters W 1 The method comprises the steps of carrying out a first treatment on the surface of the Training set X of track area detection module 1 Divided into a plurality of groups including m 1 Small batch data set B of individual training images 1
Step 6: selecting a small batch of data sets
Figure FDA0004124119080000026
Network module f as track area detection 1 Input of 0 < j.ltoreq.m 1 In the model parameters W 1 Under the action, calculating the detection output Y of the track area 1 :/>
Figure FDA0004124119080000021
And calculate the error loss L 1 Then updating the parameters of the track area detection network model;
step 7: repeat step 6; when the loss L_1 has not decreased for s_1 consecutive iterations, or when the iteration count t satisfies t > T_1, stop iterating to obtain the trained track-area detection module f_1;
Step 8: setting a track area duty ratio threshold p, and detecting a module f according to the track area 1 Is the detection result Y of (2) 1 Selecting all Y with track area ratio larger than p in h multiplied by h area 1 Local area and recording central coordinates V of all local areas k According to V k Coordinate cutting H colour 、H gray 、G mix Obtaining track position detection and extraction training image set in region
Figure FDA0004124119080000022
Track position detection and extraction training set X in formation area 2 The method comprises the steps of carrying out a first treatment on the surface of the At the same time according to V k Coordinate cutting binarized track class label graph to obtain regional track class label graph
Figure FDA0004124119080000023
Step 9: detecting and extracting network structure parameters in the track position in the setting area: network layer number, number of neuronal nodes at each layer, learning rate, decay rate lambda 2 The method comprises the steps of carrying out a first treatment on the surface of the Setting training parameters: total number of iterations T 2 Number s of early stop steps 2 Batch size m 2 The method comprises the steps of carrying out a first treatment on the surface of the Track position detection and network model parameter W extraction in initialization area 2 The method comprises the steps of carrying out a first treatment on the surface of the Track position detection and training set X extraction in region 2 Divided into a plurality of groups including m 2 Small batch data set B of individual training images 2
Step 10: selecting a small batch of data sets
Figure FDA0004124119080000024
As a track position detection and extraction module f in an area 2 Is greater than 0 and less than or equal to m 2 In the model parameters W 2 Under the action, the track position detection output Y in the region is calculated 2
Figure FDA0004124119080000025
And calculate the error loss L 2 Then updating the track position detection and network model parameter extraction in the region;
step 11: repeat step 10; when the loss L_2 has not decreased for s_2 consecutive iterations, or when the iteration count t satisfies t > T_2, stop iterating to obtain the trained in-area track-position detection and extraction module f_2;
Step 12: processing azimuth history map to be detected according to step 1 and step 2, inputting trained HADCNN model for identification, and determining cutting position V k And splicing, overlapping and restoring the identification results to obtain the azimuth calendar chart track extraction chart.
2. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that: in step 1, the color azimuth history map is decomposed into R, G, B color channels and converted into the grayscale map I:
I(x,y)=0.3×R(x,y)+0.59×G(x,y)+0.11×B(x,y).
3. the method for detecting and extracting tracks in an azimuth lineage graph based on hierarchical attention depth convolution neural network according to claim 1, wherein the method is characterized in that: in step 2, histogram equalization result diagram H corresponding to gray-scale diagram I gray The method is characterized by comprising the following steps of:
statistics r for gray map I k Number of individual tone pixel values n of tone k ,k=0,1,2,…,2 E -1, calculating r k Color level cumulative probability of 2 E -1 the product results accumulated probability pixel correspondence value s k
Figure FDA0004124119080000031
For s k Rounding to obtain s k ]Will be shown in the azimuth history chart k Replaced by [ s ] k ]:
r k →[s k ],k=0,1,2,…,2 E -1
Finally, a histogram equalization result diagram H is obtained gray :H gray (x,y)=[s k ],I(x,y)==r k ,k=0,1,...,2 E -1。
4. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that: in step 2, the edge image G_sobel of the grayscale map I is obtained by the following steps:
apply the Sobel horizontal and vertical edge-detection operators to the grayscale map I, weighting the gray value of each pixel in the image, to obtain the edge image G_sobel.
5. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that: in step 2, the median filtering is performed on the grayscale azimuth history map I with an l×l median-filter template:
round l/2 down to obtain l_1, then pad I to obtain the padded grayscale map I_0, where
I_0(x, y) = I(x − l_1, y − l_1), l_1 < x ≤ H + l_1, l_1 < y ≤ L + l_1
For each point I_0(x, y) of I_0, sort the values of its l×l neighborhood from small to large as z(1), …, z(median), …, z(max) to produce the median-filtering result I_median: I_median(x, y) = z(median), where l_1 < x ≤ H + l_1, l_1 < y ≤ L + l_1.
6. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that: the track-area detection network and the in-area track-position detection and extraction network both adopt a fusion feature-correction method: the output of the convolutional layer after a pooling layer is partially extracted and fused into the convolutional layer before the corresponding upsampling layer.
7. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that: in step 6, a negative log-likelihood loss function with a weight penalty is used:
L_1 = −(1/m_1) Σ_{j=1}^{m_1} log p(D_1^j | B_1^j; W_1) + λ_1 ‖W_1‖²
8. the method for detecting and extracting tracks in an azimuth lineage graph based on hierarchical attention depth convolution neural network according to claim 1, wherein the method is characterized in that: in step 10, a negative log likelihood loss function with a weighted penalty is used:
Figure FDA0004124119080000042
/>
CN202110502783.3A 2021-05-09 2021-05-09 Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network Active CN113239775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110502783.3A CN113239775B (en) 2021-05-09 2021-05-09 Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110502783.3A CN113239775B (en) 2021-05-09 2021-05-09 Method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network

Publications (2)

Publication Number Publication Date
CN113239775A CN113239775A (en) 2021-08-10
CN113239775B true CN113239775B (en) 2023-05-02

Family

ID=77132977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110502783.3A Active CN113239775B (en) 2021-05-09 2021-05-09 Method for detecting and extracting tracks in azimuth lineage diagram based on hierarchical attention depth convolution neural network

Country Status (1)

Country Link
CN (1) CN113239775B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564097A (en) * 2017-12-05 2018-09-21 华南理工大学 A kind of multiscale target detection method based on depth convolutional neural networks
CN112434643A (en) * 2020-12-06 2021-03-02 零八一电子集团有限公司 Classification and identification method for low-slow small targets

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10481259B2 (en) * 2013-09-13 2019-11-19 Navico Holding As Tracking targets on a sonar image
CN104155632A (en) * 2014-07-18 2014-11-19 南京航空航天大学 Improved subspace sea clutter suppression method based on local correlation
CN107783096B (en) * 2016-08-25 2019-07-09 中国科学院声学研究所 A kind of two-dimensional background equalization methods shown for bearing history figure
KR101941521B1 (en) * 2016-12-07 2019-01-23 한국해양과학기술원 System and method for automatic tracking of marine objects
CN110197233B (en) * 2019-06-05 2021-03-19 四川九洲电器集团有限责任公司 Method for classifying aircrafts by using flight paths
CN110542904B (en) * 2019-08-23 2021-09-10 中国科学院声学研究所 Target automatic discovery method based on underwater sound target azimuth history map
CN111292563B (en) * 2020-05-12 2020-08-11 北京航空航天大学 Flight track prediction method
CN111882585B (en) * 2020-06-11 2022-05-06 中国人民解放军海军工程大学 Passive sonar multi-target azimuth trajectory extraction method, electronic device and computer-readable storage medium
CN112114286B (en) * 2020-06-23 2022-07-08 山东省科学院海洋仪器仪表研究所 Multi-target tracking method based on line spectrum life cycle and single-vector hydrophone
CN112001433A (en) * 2020-08-12 2020-11-27 西安交通大学 Flight path association method, system, equipment and readable storage medium
CN112684454B (en) * 2020-12-04 2022-12-06 中国船舶重工集团公司第七一五研究所 Track cross target association method based on sub-frequency bands
CN112668804B (en) * 2021-01-11 2023-04-07 中国海洋大学 Method for predicting broken track of ground wave radar ship

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564097A (en) * 2017-12-05 2018-09-21 华南理工大学 A kind of multiscale target detection method based on depth convolutional neural networks
CN112434643A (en) * 2020-12-06 2021-03-02 零八一电子集团有限公司 Classification and identification method for low-slow small targets

Also Published As

Publication number Publication date
CN113239775A (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN110348376B (en) Pedestrian real-time detection method based on neural network
CN107564025B (en) Electric power equipment infrared image semantic segmentation method based on deep neural network
CN108446700B (en) License plate attack generation method based on anti-attack
CN106023220B (en) A kind of vehicle appearance image of component dividing method based on deep learning
CN104834922B (en) Gesture identification method based on hybrid neural networks
CN107993215A (en) A kind of weather radar image processing method and system
CN106650786A (en) Image recognition method based on multi-column convolutional neural network fuzzy evaluation
CN107742099A (en) A kind of crowd density estimation based on full convolutional network, the method for demographics
CN109840523B (en) Urban rail train number identification method based on image processing
CN107066933A (en) A kind of road sign recognition methods and system
CN110503613A (en) Based on the empty convolutional neural networks of cascade towards removing rain based on single image method
CN104866868A (en) Metal coin identification method based on deep neural network and apparatus thereof
CN108520212A (en) Method for traffic sign detection based on improved convolutional neural networks
CN111507227B (en) Multi-student individual segmentation and state autonomous identification method based on deep learning
CN107967474A (en) A kind of sea-surface target conspicuousness detection method based on convolutional neural networks
CN113011357A (en) Depth fake face video positioning method based on space-time fusion
CN111145145B (en) Image surface defect detection method based on MobileNet
CN105590301B (en) The Impulsive Noise Mitigation Method of adaptive just oblique diesis window mean filter
CN115841447A (en) Detection method for surface defects of magnetic shoe
CN106340007A (en) Image processing-based automobile body paint film defect detection and identification method
CN113052215A (en) Sonar image automatic target identification method based on neural network visualization
CN110135446A (en) Method for text detection and computer storage medium
CN111738114A (en) Vehicle target detection method based on anchor-free accurate sampling remote sensing image
CN112489168A (en) Image data set generation and production method, device, equipment and storage medium
CN114972759A (en) Remote sensing image semantic segmentation method based on hierarchical contour cost function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant