CN113239775B - Method for detecting and extracting tracks in azimuth lineage diagram based on hierarchical attention depth convolution neural network - Google Patents
- Publication number
- CN113239775B (granted from application CN202110502783.3A)
- Authority
- CN
- China
- Prior art keywords
- track
- azimuth
- detection
- extraction
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/203—Specially adapted for sailing ships
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/88—Sonar systems specially adapted for specific applications
- G01S15/93—Sonar systems specially adapted for specific applications for anti-collision purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
- G06F2218/04—Denoising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention provides a method for detecting and extracting tracks in an azimuth history diagram (bearing-time record) based on a hierarchical attention deep convolutional neural network. The HADCNN model is built in a modular way to simulate the whole-to-local process of human vision while realizing hierarchical attention, and noise interference is filtered and tracks are enhanced by mixed gray-scale preprocessing of the azimuth history diagram. In the HADCNN model, a track area detection module detects and extracts track areas in the whole azimuth history diagram, and an in-area track position detection and extraction module detects and extracts tracks in the partial images containing those track areas. The method strengthens tracks and suppresses noise through preprocessing, thereby enlarging the feature difference between classes, reducing the computation cost of the subsequent network, and improving detection and extraction efficiency.
Description
Technical Field
The invention relates to a method for detecting and extracting tracks in an azimuth history diagram, and in particular to one based on a hierarchical attention deep convolutional neural network.
Background
Track detection and extraction from the passive sonar azimuth history diagram is an important means of determining ship tracks. Ship tracks in a complex azimuth history diagram are judged by preprocessing the diagram and then detecting and extracting the tracks. Common preprocessing methods that suppress background noise interference destroy part of the track information along with the noise, leaving tracks broken and discontinuous. Preprocessing methods that enhance track-line features do strengthen weak tracks to some extent, but they also amplify noise interference and cause degradation such as over-enhancement in some images. Such preprocessing can improve the display of the azimuth history diagram, but its parameters must be set manually; improperly set parameters degrade the processing result, which increases the difficulty of applying these algorithms in practice.
Traditional track detection and extraction methods mostly rely on thresholding or probabilistic tracking. Thresholding is simple to engineer, but it cannot effectively detect and extract tracks under heavy background noise interference, and detection errors may occur when two tracks are too close together or intersect. Probabilistic tracking takes single-frame image data as input and judges the track position within a region: exploiting the facts that a track cannot bend suddenly and that pixel values along a track change slowly, it searches for track points within a fixed range and connects them to detect and extract the track. However, probabilistic tracking can misjudge noise interference that resembles a track. Single-frame image data often contain many interference points, and without human judgment, tracking these detection points easily produces false alarms. Even if multi-frame data in the target domain are checked when associating and correcting broken tracks, some false tracks remain in the image and affect the detection result. Meanwhile, traditional track detection and extraction requires manually set parameters and generalizes poorly.
Current deep convolutional neural networks have great advantages in target detection and extraction, and are mainly applied to face recognition and to object recognition and extraction in road scenes. In those applications the target occupies a relatively large proportion of the image and has a clear outline, so a deep convolutional neural network can extract good results. In the azimuth history diagram, however, the track occupies an extremely small proportion of the image and has no obvious outline; at the same time the signal-to-noise ratio is low, the noise interference characteristics vary, and tracks cross or alias with one another, so directly applying an existing network to azimuth history diagram track detection and extraction does not achieve the desired effect. A deep convolutional neural network must therefore be designed specifically for the characteristics of tracks in the azimuth history diagram to achieve track detection and extraction.
Disclosure of Invention
Aiming at the problem of track detection and extraction in the passive sonar azimuth history diagram, the invention provides a Hierarchical Attention Deep Convolutional Neural Network (HADCNN). Through a task-modularized design, the network imitates the recognition mechanism of biological vision, which moves from an overall overview to local detail: it builds a track area detection module and an in-area track position detection and extraction module, realizes hierarchical attention over image regions, and addresses the weak positive-sample proportion of the azimuth history diagram. Each module progressively completes ship track area detection and extraction and in-area track position detection and extraction using a deep convolutional neural network (DCNN). Within each module, hierarchical attention over features is realized by jointly training the feature maps of the shallow and deep convolution layers, increasing detection and extraction precision. The hierarchical attention deep convolutional neural network thus provides a new approach to track detection and extraction in the azimuth history diagram.
The technical scheme of the invention is as follows:
The method for detecting and extracting tracks in the azimuth history diagram based on the hierarchical attention deep convolutional neural network comprises the following steps:
Step 1: obtain N color azimuth history diagrams, each of length H, height L, and color depth E; decompose each H×L azimuth history diagram C into R, G, B color channels and convert it into a gray map I; pass the color azimuth history diagram C through a histogram equalization algorithm to obtain the color histogram equalization result map H_colour;
Step 2: for the gray map I of each color azimuth history diagram, obtain the corresponding histogram equalization result map H_gray, edge image G_sobel, and mixed gray-scale preprocessing result map G_mix;
Step 3: constructing a deep convolutional neural network model HADCNN for detecting and extracting azimuth course tracks;
the deep convolutional neural network model HADCNN; the track area detection module and the track position detection and extraction module in the track area are formed;
the track area detection module is formed by N 1 Layer convolution layer, M 1 Layer pooling layer, M 1 A track area detection network formed by an on-layer sampling layer;
the track position detection and extraction module in the track area is composed of N 2 Layer convolution layer, M 2 Layer pooling layer, M 2 A track position detection and extraction network in a track area formed by the layer up-sampling layer;
Step 4: construct the HADCNN model training set, which consists of a track area detection module sub-training set and an in-area track position detection and extraction module sub-training set;
the track area detection module training set X_1 comprises the training image set constructed from H_colour, H_gray, and G_sobel, together with the binarized track area class label maps, 0 < i ≤ N;
Step 5: setting a track area detection network structure parameter: network layer number, number of neuronal nodes at each layer, learning rate, decay rate lambda 1 The method comprises the steps of carrying out a first treatment on the surface of the Setting training parameters: total number of iterations T 1 Number s of early stop steps 1 Batch size m 1 The method comprises the steps of carrying out a first treatment on the surface of the Initializing track area detection network model parameters W 1 The method comprises the steps of carrying out a first treatment on the surface of the Training set X of track area detection module 1 Divided into a plurality of groups including m 1 Small batch data set B of individual training images 1 ;
Step 6: selecting a small batch of data setsNetwork module f as track area detection 1 Input of 0 < j.ltoreq.m 1 In the model parameters W 1 Under the action, calculating the detection output Y of the track area 1 :And calculate the error loss L 1 Then updating the parameters of the track area detection network model;
Step 7: repeat step 6; when the loss L_1 has not decreased for s_1 consecutive iterations, or when the iteration count t satisfies t > T_1, stop iterating and obtain the trained track area detection module f_1;
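The stopping rule of steps 6 and 7 (stop when the loss has not decreased for s_1 consecutive iterations, or once T_1 iterations are reached) can be sketched as below. `train_with_early_stopping`, `step_fn`, and the toy plateauing loss are illustrative stand-ins, not the patent's actual network or loss:

```python
def train_with_early_stopping(step_fn, T, s):
    """Call step_fn (which performs one update and returns the loss L)
    at most T times, stopping early once the loss has failed to decrease
    for s consecutive iterations."""
    best = float('inf')   # best loss seen so far
    stale = 0             # consecutive iterations without improvement
    t = 0
    while t < T:
        loss = step_fn(t)
        t += 1
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= s:   # s iterations with no reduction -> stop
                break
    return t, best

# toy loss that decreases for 10 iterations and then plateaus at 1.0
iters, best = train_with_early_stopping(lambda t: max(1.0, 10.0 - t), T=100, s=3)
```

Training stops three iterations after the plateau begins, well before the T = 100 cap.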
Step 8: set a track area proportion threshold p; according to the detection result Y_1 of the track area detection module f_1, select all h×h local areas of Y_1 whose track area proportion is greater than p and record the center coordinates V_k of all local areas; cut H_colour, H_gray, and G_mix according to the V_k coordinates to obtain the in-area track position detection and extraction training image set, forming the in-area training set X_2; at the same time, cut the binarized track class label maps according to the V_k coordinates to obtain the in-area track class label maps;
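A minimal sketch of cutting h×h patches around the recorded center coordinates V_k. The border clamping is an assumption, since the translated text does not say how windows near the image edge are handled, and `crop_regions` is an illustrative name:

```python
import numpy as np

def crop_regions(img, centers, h):
    """Cut an h x h patch around each recorded centre coordinate V_k,
    clamping windows that would otherwise fall outside the image."""
    half = h // 2
    H, W = img.shape[:2]
    patches = []
    for cx, cy in centers:
        x0 = min(max(cx - half, 0), H - h)   # clamp to the image border
        y0 = min(max(cy - half, 0), W - h)
        patches.append(img[x0:x0 + h, y0:y0 + h])
    return patches

# 10 x 10 toy image; one interior centre and one centre on the corner
img = np.arange(100).reshape(10, 10)
patches = crop_regions(img, [(5, 5), (0, 0)], h=4)
```

The same cut positions would be applied identically to H_colour, H_gray, G_mix, and the label maps so that images and labels stay aligned.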
Step 9: detecting and extracting network structure parameters in the track position in the setting area: network layer number, number of neuronal nodes at each layer, learning rate, decay rate lambda 2 The method comprises the steps of carrying out a first treatment on the surface of the Setting training parameters: total number of iterations T 2 Number s of early stop steps 2 Batch size m 2 The method comprises the steps of carrying out a first treatment on the surface of the Track position detection and network model parameter W extraction in initialization area 2 The method comprises the steps of carrying out a first treatment on the surface of the Track position detection and training set X extraction in region 2 Divided into a plurality of groups including m 2 Small batch data set B of individual training images 2 ;
Step 10: select a mini-batch dataset as input to the in-area track position detection and extraction module f_2, 0 < j ≤ m_2; under the model parameters W_2, compute the in-area track position detection output Y_2 = f_2(·), compute the error loss L_2, and then update the in-area track position detection and extraction network model parameters;
Step 11: repeat step 10; when the loss L_2 has not decreased for s_2 consecutive iterations, or when the iteration count t satisfies t > T_2, stop iterating and obtain the trained in-area track position detection and extraction module f_2;
Step 12: process the azimuth history diagram to be detected according to steps 1 and 2, input it into the trained HADCNN model for identification, and splice, overlap, and restore the identification results according to the cutting positions V_k to obtain the track extraction map of the azimuth history diagram.
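A sketch of splicing the per-patch predictions back into a full-size map at the recorded cut positions V_k. Combining overlapping patches with a per-pixel maximum is an assumption (the text says only "splice, overlap, and restore"); it has the property that a pixel flagged as track in any patch stays flagged:

```python
import numpy as np

def stitch(patches, centers, h, H, W):
    """Place h x h patch predictions back into an H x W map at the recorded
    cut positions V_k; overlaps are combined with a per-pixel maximum."""
    out = np.zeros((H, W))
    half = h // 2
    for patch, (cx, cy) in zip(patches, centers):
        x0 = min(max(cx - half, 0), H - h)   # same clamping as the cutting step
        y0 = min(max(cy - half, 0), W - h)
        region = out[x0:x0 + h, y0:y0 + h]
        out[x0:x0 + h, y0:y0 + h] = np.maximum(region, patch)
    return out

# one 4 x 4 all-track patch restored into a 10 x 10 map
full = stitch([np.ones((4, 4))], [(5, 5)], h=4, H=10, W=10)
```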
Further, the color azimuth history diagram is decomposed into R, G, B color channels and converted into a gray map I by:
I(x,y) = 0.3×R(x,y) + 0.59×G(x,y) + 0.11×B(x,y).
Further, in step 2, the histogram equalization result map H_gray corresponding to the gray map I is obtained as follows:
for the gray map I, count the number n_k of pixels at each gray level r_k, k = 0, 1, 2, …, 2^E − 1; compute the cumulative probability of level r_k and multiply it by 2^E − 1 to obtain the pixel value s_k corresponding to the accumulated probability:
s_k = (2^E − 1) × Σ_{j=0}^{k} n_j / (H × L)
round s_k to obtain [s_k], and replace r_k in the azimuth history diagram by [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 2^E − 1
finally obtaining the histogram equalization result map H_gray: H_gray(x,y) = [s_k] wherever I(x,y) = r_k, k = 0, 1, …, 2^E − 1.
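The three substeps above (count n_k, form the cumulative probability, scale by 2^E − 1, round, and remap) can be sketched as follows. The ties-to-even behavior of `np.rint` is an assumption about how the rounding [s_k] is performed:

```python
import numpy as np

def hist_equalize(img, depth=8):
    """Histogram equalization per the formulas above: count n_k per gray
    level r_k, accumulate into a cumulative probability, scale it by
    2^E - 1, round to [s_k], and remap every pixel r_k -> [s_k]."""
    levels = 2 ** depth
    hist = np.bincount(img.ravel(), minlength=levels)   # n_k for each r_k
    cdf = np.cumsum(hist) / img.size                    # cumulative probability
    s = np.rint((levels - 1) * cdf).astype(img.dtype)   # [s_k]
    return s[img]                                       # r_k -> [s_k]

# tiny 2-bit example (E = 2, levels 0..3), one pixel per level
eq = hist_equalize(np.array([[0, 1], [2, 3]], dtype=np.uint8), depth=2)
```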
Further, in step 2, the edge image G_sobel of the gray map I is obtained as follows:
the gray value of each pixel in the image is weighted by the Sobel horizontal and vertical edge detection operators to obtain the edge image G_sobel.
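A sketch of the Sobel weighting described above. Combining the horizontal and vertical responses by magnitude (`np.hypot`) is an assumption, as the text does not state how the two operator outputs are merged:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T   # vertical operator is the transpose

def sobel_edges(img):
    """Weight each interior pixel's 3x3 neighbourhood with the horizontal
    and vertical Sobel operators and combine the two responses by magnitude."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros_like(img)
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            patch = img[x - 1:x + 2, y - 1:y + 2]
            gx = np.sum(SOBEL_X * patch)   # horizontal gradient
            gy = np.sum(SOBEL_Y * patch)   # vertical gradient
            out[x, y] = np.hypot(gx, gy)
    return out

# a vertical step edge gives a strong horizontal-gradient response
edges = sobel_edges([[0, 0, 1, 1]] * 3)
```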
Further, in step 2, the mixed gray-scale preprocessing result map G_mix of the gray map I is obtained as follows:
the gray map I is median-filtered to obtain I_median, I_median is histogram-equalized to obtain I_median-h, and finally I_median-h is passed through the Sobel algorithm to obtain the mixed gray-scale preprocessing result map G_mix.
In step 2, the median filtering of the gray map I of the azimuth history diagram uses an l×l median filtering template:
round l/2 down to obtain l_1, then pad I by l_1 pixels on each side to obtain the padded gray map I_0:
I_0(x,y) = I(x − l_1, y − l_1), l_1 < x ≤ H + l_1, l_1 < y ≤ L + l_1.
Take the l×l neighborhood of I_0 at point I_0(x,y), sort its values z(1), …, z(med), …, z(max), and take the median to obtain the median filtering result I_median: I_median(x,y) = z(med), where l_1 < x ≤ H + l_1, l_1 < y ≤ L + l_1.
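The padded l×l median filter can be sketched as below. Edge-replication padding (`mode='edge'`) is an assumption — the translation only says the image is padded by l_1 pixels before filtering:

```python
import numpy as np

def median_filter(img, l=5):
    """l x l median filtering with a border pad of l_1 = l // 2 pixels,
    matching I_0(x, y) = I(x - l_1, y - l_1) and I_median(x, y) = z(med)."""
    img = np.asarray(img, dtype=float)
    l1 = l // 2                              # l_1 = floor(l / 2)
    padded = np.pad(img, l1, mode='edge')    # I_0: pad l_1 pixels on each side
    out = np.empty_like(img)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            # z(med): median of the l x l window of I_0 centred on this pixel
            out[x, y] = np.median(padded[x:x + l, y:y + l])
    return out

# an isolated impulse (point-like noise) is removed entirely
impulse = np.zeros((5, 5))
impulse[2, 2] = 100.0
smoothed = median_filter(impulse, l=3)
```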
Furthermore, both the track area detection network and the in-area track position detection and extraction network adopt a fusion feature correction method: the output of the convolution layer following each pooling layer is partially extracted and fused into the convolution layer preceding the corresponding up-sampling layer.
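The fusion feature correction amounts to a skip-style connection. A minimal sketch, assuming the fusion is a channel-wise concatenation of a shallow feature map onto the deep feature map before up-sampling (the exact fusion operation is not specified in the translation):

```python
import numpy as np

def fuse_features(shallow, deep):
    """Fusion feature correction sketch: concatenate a shallow conv-layer
    feature map onto the deep feature map (channel axis first), so the
    convolutions before up-sampling see both coarse and fine detail."""
    assert shallow.shape[1:] == deep.shape[1:]      # same spatial size
    return np.concatenate([shallow, deep], axis=0)  # stack along channels

shallow = np.ones((16, 32, 32))   # 16 channels from an early conv layer
deep = np.zeros((64, 32, 32))     # 64 channels just before up-sampling
fused = fuse_features(shallow, deep)
```

In a real network the fused tensor would then pass through further convolutions; the channel counts here are arbitrary examples.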
Further, in step 6, a negative log likelihood loss function with a weight penalty is used:
further, a negative log likelihood loss function with a weight penalty is used in step 10:
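The patent's loss formulas were embedded as images and are lost from this text. One standard reading of "negative log-likelihood loss with a weight penalty" is binary cross-entropy plus an L2 term on the weights; the sketch below shows that form and should not be taken as the patent's exact formula:

```python
import numpy as np

def nll_with_weight_penalty(y, p, weights, lam):
    """Binary cross-entropy over the batch plus an L2 weight penalty:
    L = -mean(y ln p + (1 - y) ln(1 - p)) + lam * sum(W^2)."""
    eps = 1e-12
    p = np.clip(p, eps, 1 - eps)   # guard against log(0)
    nll = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return nll + penalty

# perfect prediction, two unit weights, lam = 0.5 -> loss ~ 0 + 0.5 * 2
loss = nll_with_weight_penalty(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                               [np.ones(2)], lam=0.5)
```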
advantageous effects
The invention provides a method for detecting and extracting tracks in an azimuth history diagram. The method builds the HADCNN model in a modular way to simulate the whole-to-local process of human vision and realize hierarchical attention, and filters noise interference while enhancing tracks through mixed gray-scale preprocessing of the azimuth history diagram. In the HADCNN model provided by the present invention, the track area detection module detects and extracts the track areas in the whole azimuth history diagram, after which the in-area track position detection and extraction module detects and extracts the tracks in the partial images containing those track areas. Both modules contain fusion layers: the output of a shallow convolution layer is extracted into a deep convolution layer to take part in the convolution operation, realizing joint training of the shallow and deep feature maps, achieving hierarchical attention over features, and increasing detection and extraction precision. The method strengthens tracks and suppresses noise through preprocessing, thereby enlarging the feature difference between classes, reducing the computation cost of the subsequent network, and improving detection and extraction efficiency.
Through its modular design, the HADCNN model imitates the recognition mechanism of biological vision from overall overview to local detail, improves the specialization of each module network for its recognition task, and increases the depth and structural diversity of the model. It thereby addresses the low detection and extraction accuracy caused by the weak positive-sample proportion of the azimuth history diagram and improves detection and extraction accuracy.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
fig. 1: hierarchical attention deep convolutional neural network model.
Fig. 2: ship track area detection and extraction module.
Fig. 3: in-area ship track position detection and extraction module.
Fig. 4: first track detection and extraction result based on HADCNN.
Fig. 5: second track detection and extraction result based on HADCNN.
Fig. 6: first comparison of track detection and extraction results based on HADCNN: (a) azimuth history diagram, (b) class label, (c) track detection and extraction result.
Fig. 7: second comparison of track detection and extraction results based on HADCNN: (a) azimuth history diagram, (b) class label, (c) track detection and extraction result.
Detailed Description
The invention judges ship tracks in a complex passive sonar azimuth history diagram by extracting and identifying ship track features with a hierarchical attention deep convolutional neural network, specifically as follows:
Step 1: obtain N color azimuth history diagrams, each of length H, height L, and color depth E.
Each H×L azimuth history diagram C is decomposed into R, G, B color channels and converted into a gray map I: I(x,y) = t_1×R(x,y) + t_2×G(x,y) + t_3×B(x,y), where 0 < x ≤ H, 0 < y ≤ L and t_1, t_2, t_3 are weights.
The color azimuth history diagram C is passed through a histogram equalization algorithm to obtain the color histogram equalization result map H_colour.
Step 2: the gray map I is processed as follows:
(1) For the gray map I, count the number n_k of pixels at each gray level r_k (k = 0, 1, 2, …, 2^E − 1); compute the cumulative probability of level r_k and multiply it by 2^E − 1 to obtain the pixel value s_k corresponding to the accumulated probability:
s_k = (2^E − 1) × Σ_{j=0}^{k} n_j / (H × L)
Round s_k to obtain [s_k] (brackets denote rounding), and replace r_k in the azimuth history diagram by [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 2^E − 1
finally obtaining the histogram equalization result map H_gray: H_gray(x,y) = [s_k] wherever I(x,y) = r_k, k = 0, 1, …, 2^E − 1.
(2) The gray value of each pixel in the image is weighted by the Sobel horizontal and vertical edge detection operators to obtain the edge image G_sobel.
(3) The gray map I is median-filtered to obtain I_median, I_median is histogram-equalized to obtain I_median-h, and finally I_median-h is passed through the Sobel algorithm to obtain the mixed gray-scale preprocessing result map G_mix. Median filtering weakens the noise interference in the azimuth history diagram, and histogram equalization together with the Sobel method then enhances the track, so that noise interference filtering and track enhancement are realized at the same time.
When median filtering is carried out, the gray map I of the azimuth history diagram may be filtered with an l×l median filtering template:
round l/2 down to obtain l_1, then pad I by l_1 pixels on each side to obtain the padded gray map I_0:
I_0(x,y) = I(x − l_1, y − l_1), l_1 < x ≤ H + l_1, l_1 < y ≤ L + l_1.
Take the l×l neighborhood of I_0 at point I_0(x,y), sort its values z(1), …, z(med), …, z(max), and take the median to obtain the median filtering result I_median: I_median(x,y) = z(med), where l_1 < x ≤ H + l_1, l_1 < y ≤ L + l_1.
Step 3: construct the deep convolutional neural network model HADCNN for azimuth history diagram track detection and extraction.
The model consists of a track area detection module and an in-area track position detection and extraction module.
The track area detection module is a track area detection network formed by N_1 convolution layers, M_1 pooling layers, and M_1 up-sampling layers, and adopts a fusion feature correction method: the output of the convolution layer following each pooling layer is partially extracted and fused into the convolution layer preceding the corresponding up-sampling layer. This corrects the extracted features so that they are closer to the expected features.
The in-area track position detection and extraction module is an in-area track position detection and extraction network formed by N_2 convolution layers, M_2 pooling layers, and M_2 up-sampling layers, and adopts the same fusion feature correction method.
Step 4: construct the HADCNN model training set, which consists of a track area detection module sub-training set and an in-area track position detection and extraction module sub-training set.
The track area detection module training set X_1 comprises the training image set constructed from H_colour, H_gray, and G_sobel, together with the binarized track area class label maps (track area pixel value 1, background pixel value 0).
Step 5: setting a track area detection network structure parameter: network layer number, number of neuronal nodes at each layer, learning rate, decay rate lambda 1 The method comprises the steps of carrying out a first treatment on the surface of the Setting training parameters: total number of iterations T 1 Number s of early stop steps 1 Batch size m 1 The method comprises the steps of carrying out a first treatment on the surface of the Initializing track area detection network model parameters W 1 . Training set X of track area detection module 1 Divided into a plurality of groups including m 1 Small batch data set B of individual training images 1 。
Step 6: selecting a small batch of data setsNetwork module f as track area detection 1 In the model parameters W 1 Under the action, the detection output Y of the track area is calculated 1 :Setting an error calculation method of a track area detection module, and using a negative log likelihood loss function with weight punishment and penalty:the track area detection network model parameters are then updated.
Step 7: repeating step 6, when the steps are continuous s 1 Multiple iterations, loss L 1 Without reduction, or when the number of iterations T satisfies T > T 1 When the method is used, iteration is stopped, and a trained track area detection module f is obtained 1 。
Step 8: setting a track area duty ratio threshold p, and detecting a module f according to the track area 1 Is the detection result Y of (2) 1 Selecting all Y with track area ratio larger than p in h multiplied by h area 1 Local area and recording central coordinates V of all local areas i (i=1, 2,) while following V i (i=1, 2,) coordinate cut H colour 、H gray 、G mix Obtaining track position detection and extraction training image set in regionTrack position detection and extraction training set X in formation area 2 . At the same time according to V i (i=1, 2,) cutting the track class label graph (the pixel value at the track is 1, and the rest pixel values are 0) to obtain the regional track class label graph
Step 9: detecting and extracting network structure parameters in the track position in the setting area: network layer number, number of neuronal nodes at each layer, learning rate, decay rate lambda 2 The method comprises the steps of carrying out a first treatment on the surface of the Setting training parameters: total number of iterations T 2 Number s of early stop steps 2 Batch size m 2 . Track position detection and network model parameter W extraction in initialization area 2 . Track position detection and training set X extraction in region 2 Divided into a plurality of groups including m 2 Small batch data set B of individual training images 2 。
Step 10: selecting a small batch of data setsAs a track position detection and extraction module f in an area 2 In the model parameters W 2 Under the action, the track position detection output Y in the region is calculated 2 :The error calculation method of the track position detection and extraction module in the set region uses a negative log likelihood loss function with weight punishment and penalty:And then updating the track position detection and extraction network model parameters in the region.
Step 11: the step 10 is repeated and the process is repeated,when continuing s 2 Multiple iterations, loss L 2 Without reduction, or when the number of iterations T satisfies T > T 2 When the method is used, iteration is stopped, and a track position detection and extraction module f in the trained region is obtained 2 。
Step 12: processing azimuth history map to be detected according to step 1 and step 2, inputting trained HADCNN model for identification, and determining cutting position V i (i=1, 2,) splicing, overlapping and restoring the identification results to obtain an azimuth calendar chart track extraction chart.
The following detailed description of embodiments of the invention is exemplary; it is intended to illustrate the invention, not to limit it.
Database of this example: the database consists of azimuth history maps and their corresponding track maps, with 100 training samples and 100 test samples. Each image is 1200 × 900 in size with a gray-scale range of 0 to 255.
Step 1: Obtain 100 color azimuth history maps, each with length 1200, height 900, and color depth 8.
Each 1200 × 900 azimuth history map C is decomposed into its R, G, B color channels and converted into a gray-scale map I: I(x, y) = 0.3 × R(x, y) + 0.59 × G(x, y) + 0.11 × B(x, y).
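As an illustrative aside, the channel-weighted conversion above can be sketched in NumPy; the function name and the 2 × 2 toy image are assumptions for demonstration, not part of the patent text.

```python
import numpy as np

def to_gray(c: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB azimuth history map to gray scale
    using the stated weights I = 0.3 R + 0.59 G + 0.11 B."""
    r, g, b = c[..., 0], c[..., 1], c[..., 2]
    return 0.3 * r + 0.59 * g + 0.11 * b

# A 2 x 2 toy image: pure red, green, blue, and white pixels.
c = np.array([[[255, 0, 0], [0, 255, 0]],
              [[0, 0, 255], [255, 255, 255]]], dtype=np.float64)
i = to_gray(c)
```

The white pixel maps to 255 because the three weights sum to 1, which is why this particular weighting preserves the full gray-scale range.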
The color azimuth history map C is passed through a histogram equalization algorithm to obtain the color histogram equalization result map H_colour.
Step 2: the gray map I is processed as follows:
(1) For the gray-scale map I, count the number n_k of pixels at each gray level r_k (k = 0, 1, 2, …, 255), then compute the cumulative probability of level r_k and multiply it by 255 to obtain the mapped value s_k:
s_k = 255 × (n_0 + n_1 + … + n_k) / (1200 × 900)
Round s_k to obtain [s_k], where the brackets denote rounding, and replace every r_k in the azimuth history map by [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 255
Finally, obtain the histogram equalization result map H_gray: H_gray(x, y) = [s_k] wherever I(x, y) = r_k, k = 0, 1, …, 255.
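The equalization mapping r_k → [s_k] described above can be sketched as follows; the `equalize` helper and the 4-level toy image are illustrative assumptions, and NumPy's round-half-to-even is used for the rounding step.

```python
import numpy as np

def equalize(i: np.ndarray, levels: int = 256) -> np.ndarray:
    """Histogram equalization: map each gray level r_k to [s_k],
    where s_k = (levels - 1) * cumulative probability of r_k."""
    img = i.astype(np.int64)
    n_k = np.bincount(img.ravel(), minlength=levels)   # pixel count per level
    cdf = np.cumsum(n_k) / img.size                    # cumulative probability
    s = np.rint((levels - 1) * cdf).astype(np.int64)   # [s_k] after rounding
    return s[img]                                      # replace r_k by [s_k]

# A toy 4-level image with a uniform histogram.
i = np.array([[0, 0, 1, 1],
              [2, 2, 3, 3]])
h_gray = equalize(i, levels=4)
```

For this uniform toy histogram the cumulative probabilities are 0.25, 0.5, 0.75, 1.0, so the rounded mapping is 0 → 1, 1 → 2, 2 → 2, 3 → 3.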
(2) Pass the gray-scale map I through the Sobel horizontal and vertical edge detection operators, which weight the gray value of each pixel in the image, to obtain the edge image G_sobel.
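A minimal sketch of the Sobel weighting follows; the patent does not state how the horizontal and vertical responses are combined, so the sum of absolute responses used here is an assumption, as is the toy step-edge image.

```python
import numpy as np

# Sobel horizontal (KX) and vertical (KY) edge detection operators.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
KY = KX.T

def sobel(i: np.ndarray) -> np.ndarray:
    """Weight each interior pixel's 3x3 neighbourhood by the Sobel
    operators; |Gx| + |Gy| serves as a simple gradient magnitude."""
    h, w = i.shape
    g = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = i[y - 1:y + 2, x - 1:x + 2]
            g[y, x] = abs((KX * patch).sum()) + abs((KY * patch).sum())
    return g

# A vertical step edge: columns 0-1 dark, columns 2-3 bright.
i = np.array([[0, 0, 9, 9]] * 4, dtype=float)
g = sobel(i)
```

The interior pixels adjacent to the step get a strong horizontal response, while flat regions and the untouched border remain zero, which is the behaviour the edge image G_sobel relies on.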
(3) Apply median filtering to the gray-scale map I to obtain I_median; apply histogram equalization to I_median to obtain I_median-h; finally, pass I_median-h through the Sobel algorithm to obtain the mixed gray-scale preprocessing result map G_mix.
For the median filtering, the gray-scale map I of the azimuth history map can be filtered with a 3 × 3 median filtering template:
Round 3/2 down to obtain l₁ = 1, then pad I by l₁ on each side to obtain the padded gray-scale map I₀.
For each point I₀(x, y), sort the l × l neighborhood at I₀(x, y) from large to small as z(1), …, z(median), …, z(max) and take the median, yielding the median filtering result I_median: I_median(x, y) = z(median), where l₁ < x ≤ 1201 and l₁ < y ≤ 901.
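The padded median filtering can be sketched as below; edge-replication padding is an assumption, since the patent's padding rule for I₀ is only partially specified, and the function name is illustrative.

```python
import numpy as np

def median_filter(i: np.ndarray, l: int = 3) -> np.ndarray:
    """l x l median filtering: pad by l1 = l // 2 (edge replication
    assumed), then replace each pixel with the median of its l x l
    neighbourhood in the padded map I0."""
    l1 = l // 2
    i0 = np.pad(i, l1, mode="edge")        # padded gray-scale map I0
    h, w = i.shape
    out = np.empty_like(i)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(i0[y:y + l, x:x + l])
    return out

# A single bright impulse in a flat image is removed by the median.
i = np.zeros((5, 5))
i[2, 2] = 100.0
i_median = median_filter(i)
```

The impulse disappears entirely, illustrating why median filtering precedes the Sobel step in G_mix: isolated noise would otherwise produce spurious edge responses.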
Step 3: and constructing a azimuth calendar chart track detection and extracted deep convolutional neural network model HADCNN.
The model is composed of a track area detection module and a track position detection and extraction module in the track area.
The track area detection module is a track area detection network composed of 17 convolution layers, 3 pooling layers, and 3 up-sampling layers. It adopts the fused-feature correction method: features are partially extracted from the convolution layer after each pooling layer and fused into the convolution layer before the corresponding up-sampling layer, correcting the extracted features so that they lie closer to the expected features.
The in-area track position detection and extraction module is a track position detection and extraction network composed of 17 convolution layers, 3 pooling layers, and 3 up-sampling layers, and adopts the same fused-feature correction method.
Step 4: and constructing a HADCNN model training set, wherein the training set consists of a track region detection module sub-training set and a track position detection and extraction module sub-training set in the region.
Wherein the track area detection module training set X 1 Comprises a step of H colour 、H gray 、G sobel Constructed training image setClass mark diagram after binarization processing (the pixel value of the track area is 1 and the pixel value of the background is 0) with the class mark diagram of the track area>
Step 5: setting a track area detection network structure parameter: the number of network layers, the number of nodes of each layer of neurons, the learning rate and the attenuation rate are 0.5; setting training parameters: 100 times of iteration, 10 times of early stop steps and 2 batches of sizes; initializing track area detection network model parameters W 1 . Training set X of track area detection module 1 Divided into a plurality of groups including m 1 Small batch data set B of individual training images 1 。
Step 6: selecting a small batch of data setsNetwork module f as track area detection 1 In the model parameters W 1 Under the action, the detection output Y of the track area is calculated 1 :Error of setting track area detection moduleThe difference calculation method uses a negative log likelihood loss function with weight punishment and penalty:the track area detection network model parameters are then updated.
Step 7: repeating step 6, and losing L when 10 iterations are continued 1 If the iteration times t are not reduced or are more than 100, stopping iteration to obtain a trained track area detection module f 1 。
Step 8: setting a track area duty ratio threshold p, and detecting a module f according to the track area 1 Is the detection result Y of (2) 1 Selecting all Y with track area ratio larger than p in h multiplied by h area 1 Local area and recording central coordinates V of all local areas i (i=1, 2,) while following V i (i=1, 2,) coordinate cut H colour 、H gray 、G mix Obtaining track position detection and extraction training image set in regionTrack position detection and extraction training set X in formation area 2 . At the same time according to V i (i=1, 2,) cutting the track class label graph (the pixel value at the track is 1, and the rest pixel values are 0) to obtain the regional track class label graph
Step 9: detecting and extracting network structure parameters in the track position in the setting area: the number of network layers, the number of nodes of each layer of neurons, the learning rate and the attenuation rate are 0.5; setting training parameters: 100 times of iteration, 10 steps of early stop and 2 batch sizes. Track position detection and network model parameter W extraction in initialization area 2 . Track position detection and training set X extraction in region 2 Divided into a plurality of groups including m 2 Small batch data set B of individual training images 2 。
Step 10: selecting a small batch of data setsAs a track position detection and extraction module f in an area 2 In the model parameters W 2 Under the action, the track position detection output Y in the region is calculated 2 :The error calculation method of the track position detection and extraction module in the set region uses a negative log likelihood loss function with weight punishment and penalty:And then updating the track position detection and extraction network model parameters in the region.
Step 11: repeating step 10, and losing L when 10 iterations are continued 2 If the iteration number t is not reduced or is more than 100, stopping iteration to obtain a track position detection and extraction module f in the trained region 2 。
Step 12: processing azimuth history map to be detected according to step 1 and step 2, inputting trained HADCNN model for identification, and determining cutting position V i (i=1, 2,) splicing, overlapping and restoring the identification results to obtain an azimuth calendar chart track extraction chart.
Although embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the invention; those skilled in the art may make changes, modifications, substitutions, and variations to the above embodiments without departing from the spirit and principles of the invention.
Claims (8)
1. A method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network, characterized in that the method comprises the following steps:
step 1: obtain N color azimuth history maps, each with length H, height L, and color depth E; decompose each H × L azimuth history map C into its R, G, B color channels and convert it into a gray-scale map I; and pass the color azimuth history map C through a histogram equalization algorithm to obtain the color histogram equalization result map H_colour;
step 2: for the gray-scale map I of each color azimuth history map, likewise obtain the corresponding histogram equalization result map H_gray, edge image G_sobel, and mixed gray-scale preprocessing result map G_mix;
wherein the mixed gray-scale preprocessing result map G_mix is obtained as follows: apply median filtering to the gray-scale map I to obtain I_median, apply histogram equalization to I_median to obtain I_median-h, and finally pass I_median-h through the Sobel algorithm to obtain G_mix;
step 3: construct the deep convolutional neural network model HADCNN for azimuth history map track detection and extraction;
the HADCNN model consists of a track area detection module and an in-track-area track position detection and extraction module;
the track area detection module is a track area detection network composed of N₁ convolution layers, M₁ pooling layers, and M₁ up-sampling layers;
the in-track-area track position detection and extraction module is a network composed of N₂ convolution layers, M₂ pooling layers, and M₂ up-sampling layers;
step 4: construct the HADCNN model training set, which consists of a sub-training set for the track area detection module and a sub-training set for the in-area track position detection and extraction module;
wherein the track area detection module training set X₁ comprises the training image set constructed from H_colour, H_gray, and G_sobel, together with the binarized class-label map of the track area;
step 5: set the track area detection network structure parameters: number of network layers, number of neuron nodes in each layer, learning rate, and decay rate λ₁; set the training parameters: total iteration count T₁, early-stopping step count s₁, and batch size m₁; initialize the track area detection network model parameters W₁; divide the track area detection module training set X₁ into small-batch datasets B₁, each containing m₁ training images;
step 6: select a small-batch dataset B₁⁽ʲ⁾ (0 < j ≤ m₁) as the input of the track area detection network module f₁; under the model parameters W₁, compute the track area detection output Y₁ and the error loss L₁, then update the track area detection network model parameters;
step 7: repeat step 6; when the loss L₁ has not decreased for s₁ consecutive iterations, or when the iteration count t satisfies t > T₁, stop iterating to obtain the trained track area detection module f₁;
step 8: set a track-area proportion threshold p; according to the detection result Y₁ of the track area detection module f₁, select every h × h local area of Y₁ whose track-area proportion exceeds p, and record the center coordinates V_k of all such areas; crop H_colour, H_gray, and G_mix at the coordinates V_k to obtain the in-area track position detection and extraction training image set, forming the in-area training set X₂; at the same time, crop the binarized track class-label map at the coordinates V_k to obtain the regional track class-label maps;
step 9: set the in-area track position detection and extraction network structure parameters: number of network layers, number of neuron nodes in each layer, learning rate, and decay rate λ₂; set the training parameters: total iteration count T₂, early-stopping step count s₂, and batch size m₂; initialize the in-area track position detection and extraction network model parameters W₂; divide the in-area training set X₂ into small-batch datasets B₂, each containing m₂ training images;
step 10: select a small-batch dataset B₂⁽ʲ⁾ (0 < j ≤ m₂) as the input of the in-area track position detection and extraction module f₂; under the model parameters W₂, compute the in-area track position detection output Y₂ and the error loss L₂, then update the in-area network model parameters;
step 11: repeat step 10; when the loss L₂ has not decreased for s₂ consecutive iterations, or when the iteration count t satisfies t > T₂, stop iterating to obtain the trained in-area track position detection and extraction module f₂;
step 12: process the azimuth history map to be detected according to steps 1 and 2, input it into the trained HADCNN model for recognition, and splice, overlap, and restore the recognition results according to the cropping positions V_k to obtain the track extraction map of the azimuth history map.
2. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that: in step 1, the color azimuth history map is decomposed into its R, G, B color channels and converted into the gray-scale map I:
I(x, y) = 0.3 × R(x, y) + 0.59 × G(x, y) + 0.11 × B(x, y).
3. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that: in step 2, the histogram equalization result map H_gray corresponding to the gray-scale map I is obtained as follows:
for the gray-scale map I, count the number n_k of pixels at each gray level r_k, k = 0, 1, 2, …, 2^E - 1; compute the cumulative probability of gray level r_k and multiply it by 2^E - 1 to obtain the mapped value s_k:
s_k = (2^E - 1) × (n_0 + n_1 + … + n_k) / (H × L)
round s_k to obtain [s_k], and replace every r_k in the azimuth history map by [s_k]:
r_k → [s_k], k = 0, 1, 2, …, 2^E - 1
finally, obtain the histogram equalization result map H_gray: H_gray(x, y) = [s_k] wherever I(x, y) = r_k, k = 0, 1, …, 2^E - 1.
4. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that: in step 2, the edge image G_sobel of the gray-scale map I is obtained as follows:
pass the gray-scale map I through the Sobel horizontal and vertical edge detection operators, which weight the gray value of each pixel in the image, to obtain the edge image G_sobel.
5. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that: in step 2, the median filtering is performed on the gray-scale map I of the azimuth history map with an l × l median filtering template:
round l/2 down to obtain l₁, then pad the borders of I to obtain the padded gray-scale map I₀:
I₀(x, y) = I(x - l₁, y - l₁), for l₁ < x ≤ H + l₁, l₁ < y ≤ L + l₁
for each point I₀(x, y), sort the l × l neighborhood at I₀(x, y) from small to large as z(1), …, z(median), …, z(max) and take the median, yielding the median filtering result I_median: I_median(x, y) = z(median), where l₁ < x ≤ H + l₁ and l₁ < y ≤ L + l₁.
6. The method for detecting and extracting tracks in an azimuth history map based on a hierarchical attention deep convolutional neural network according to claim 1, characterized in that: the track area detection network and the in-track-area track position detection and extraction network adopt a fused-feature correction method: features from the convolution layer after each pooling layer are partially extracted and fused into the convolution layer before the corresponding up-sampling layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110502783.3A CN113239775B (en) | 2021-05-09 | 2021-05-09 | Method for detecting and extracting tracks in azimuth lineage diagram based on hierarchical attention depth convolution neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113239775A CN113239775A (en) | 2021-08-10 |
CN113239775B true CN113239775B (en) | 2023-05-02 |
Family
ID=77132977
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113239775B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108564097A (en) * | 2017-12-05 | 2018-09-21 | 华南理工大学 | A kind of multiscale target detection method based on depth convolutional neural networks |
CN112434643A (en) * | 2020-12-06 | 2021-03-02 | 零八一电子集团有限公司 | Classification and identification method for low-slow small targets |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10481259B2 (en) * | 2013-09-13 | 2019-11-19 | Navico Holding As | Tracking targets on a sonar image |
CN104155632A (en) * | 2014-07-18 | 2014-11-19 | 南京航空航天大学 | Improved subspace sea clutter suppression method based on local correlation |
CN107783096B (en) * | 2016-08-25 | 2019-07-09 | 中国科学院声学研究所 | A kind of two-dimensional background equalization methods shown for bearing history figure |
KR101941521B1 (en) * | 2016-12-07 | 2019-01-23 | 한국해양과학기술원 | System and method for automatic tracking of marine objects |
CN110197233B (en) * | 2019-06-05 | 2021-03-19 | 四川九洲电器集团有限责任公司 | Method for classifying aircrafts by using flight paths |
CN110542904B (en) * | 2019-08-23 | 2021-09-10 | 中国科学院声学研究所 | Target automatic discovery method based on underwater sound target azimuth history map |
CN111292563B (en) * | 2020-05-12 | 2020-08-11 | 北京航空航天大学 | Flight track prediction method |
CN111882585B (en) * | 2020-06-11 | 2022-05-06 | 中国人民解放军海军工程大学 | Passive sonar multi-target azimuth trajectory extraction method, electronic device and computer-readable storage medium |
CN112114286B (en) * | 2020-06-23 | 2022-07-08 | 山东省科学院海洋仪器仪表研究所 | Multi-target tracking method based on line spectrum life cycle and single-vector hydrophone |
CN112001433A (en) * | 2020-08-12 | 2020-11-27 | 西安交通大学 | Flight path association method, system, equipment and readable storage medium |
CN112684454B (en) * | 2020-12-04 | 2022-12-06 | 中国船舶重工集团公司第七一五研究所 | Track cross target association method based on sub-frequency bands |
CN112668804B (en) * | 2021-01-11 | 2023-04-07 | 中国海洋大学 | Method for predicting broken track of ground wave radar ship |
2021-05-09: application CN202110502783.3A (CN) filed; granted as patent CN113239775B, status active.
Also Published As
Publication number | Publication date |
---|---|
CN113239775A (en) | 2021-08-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||