CN110147807A - Ship intelligent recognition and tracking method - Google Patents

Ship intelligent recognition and tracking method

Info

Publication number
CN110147807A
CN110147807A (application CN201910202874.8A)
Authority
CN
China
Prior art keywords
ship
layer
network
value
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910202874.8A
Other languages
Chinese (zh)
Inventor
王胜正
刘博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University filed Critical Shanghai Maritime University
Publication of CN110147807A
Legal status: Withdrawn (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention proposes a ship intelligent recognition and tracking method based on computer-vision deep learning. It improves the base classification network structure and the multi-scale target prediction used in conventional deep learning, and combines the Darknet network with the YOLOv3 algorithm to track ships and detect and identify ship types in real time. The method introduces the idea of residual networks and adopts a fully convolutional structure, which increases network depth and improves the ability to learn data features. The YOLOv3 algorithm realizes local feature interaction between feature maps by means of convolution kernels to match and locate targets; on this basis, target-region prediction and class prediction are integrated into a single neural network model, so that target recognition uses the global information of the image. Experimental results show that, compared with traditional methods, the proposed algorithm not only achieves better real-time performance and accuracy, but also shows good robustness to various environmental changes.

Description

Ship intelligent recognition and tracking method
Technical field:
The present invention relates to the field of surface-vessel target tracking and detection, and in particular to a ship intelligent recognition and tracking method.
Background art:
Ship tracking and identification is a basic task of visual perception for intelligent ships. At present, traditional tracking combines the AIS system with radar technology to track ships; for an intelligent ship, however, the tracked target ship cannot be treated as a point mass. One approach therefore combines the advantages of neural networks and the Kalman filter to build a dynamic ship-tracking model: the background pixels of maritime surveillance video are modelled, a feature-constraint equation is constructed to verify the motion state of feature pixels, ship motion parameters are estimated to obtain the ship's position, and finally an active contour model is used to find the ship outline and determine the position of the tracked vessel.
The conventional approach to ship-type identification relies on AIS data and synthetic aperture radar (SAR), with the ship types sorted out manually. Methods based on deep-learning theory use an end-to-end convolutional-neural-network detection model: a selective-search method proposes several candidate bounding boxes on the input image; convolutional features are extracted for each bounding box and fed to a support vector machine classifier trained for each class; an ROI pooling layer maps inputs of different sizes to feature vectors of fixed size, so that a fixed-dimensional feature representation is extracted for each region; a softmax classification function then yields the score of each bounding box for every class; finally, non-maximum suppression discards duplicate bounding boxes to obtain the type-identification result. Problems such as variation in ship imaging size in waterway transport, illumination changes, changes of imaging viewing angle, overlapping ship images in crossing and meeting situations, and manual involvement seriously degrade the ship tracking and recognition rate.
Summary of the invention:
In view of the above problems, the invention proposes a ship intelligent recognition and tracking method based on the Darknet network and the YOLOv3 algorithm that can track and identify targets more accurately and in real time.
To achieve this goal, the present invention proposes a ship intelligent recognition and tracking algorithm. The technical scheme is as follows: the Darknet base network is used to learn and extract features from ship sample data and to train the network, producing a feature-map model of the object; the YOLOv3 algorithm performs local feature interaction between feature maps for matching and localization; a classification algorithm is built on this basis to identify ship types in real time. The method comprises the following steps:
Step 1: collect pictures of different ship types as the original ship image data and carry out annotation preprocessing, which initializes the subsequent recognition and tracking model. Step 1 consists of data preprocessing; the specific implementation steps are as follows (a folder-layout and split sketch is given after this list):
(1) Data preprocessing:
(1) download the Pascal VOC2007 standard dataset, empty its original data, and keep the JPEGImages, Annotations and ImageSets folders;
(2) store the collected raw ship images of different types in the JPEGImages folder, including training pictures and test pictures, with a training-to-test picture ratio of 8:2;
(3) use the labelImg annotation tool to generate model-readable XML annotation files and store them in the Annotations folder; each XML file corresponds to one picture in the JPEGImages folder;
(4) create a Main folder under the ImageSets folder to store the image-data information corresponding to each ship picture type, including the training set, the detection (test) set, the validation set, and the combined training-and-validation set;
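A minimal sketch of the 8:2 split and the list files under ImageSets/Main, assuming the Pascal VOC folder layout described above; the list-file names (trainval.txt, train.txt, val.txt, test.txt) and the further 9:1 train/validation split are conventional assumptions rather than values from the text.

```python
# Minimal sketch (assumed layout): split ship images 8:2 and write the
# Pascal VOC-style list files under ImageSets/Main.
import os
import random

root = "VOCdevkit/VOC2007"            # assumed dataset root
jpeg_dir = os.path.join(root, "JPEGImages")
main_dir = os.path.join(root, "ImageSets", "Main")
os.makedirs(main_dir, exist_ok=True)

ids = [os.path.splitext(f)[0] for f in os.listdir(jpeg_dir) if f.endswith(".jpg")]
random.seed(0)
random.shuffle(ids)

split = int(0.8 * len(ids))           # 8:2 train/test ratio from the text
trainval, test = ids[:split], ids[split:]
val_split = int(0.9 * len(trainval))  # further train/val split (assumed 9:1)
train, val = trainval[:val_split], trainval[val_split:]

for name, subset in [("trainval", trainval), ("train", train),
                     ("val", val), ("test", test)]:
    with open(os.path.join(main_dir, name + ".txt"), "w") as f:
        f.write("\n".join(subset))
```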
Step 2: build a deep network model; apply convolution operations to the input ship image sample data to extract the corresponding features and learn their combinations, obtaining a feature-map model of the object; on this basis, add feature interaction layers divided into three scales, and within each scale realize local feature interaction between feature maps by means of convolution kernels. Step 2 consists of the ship feature extraction network structure and the feature interaction layer structure; the specific implementation steps are as follows:
(1) Ship feature extraction network (a code sketch of the basic building block follows this list):
(1) input the preprocessed ship pictures, improve the resolution of the images with a high-resolution classifier, and normalize them;
(2) apply 32 convolution kernels, each of size 3*3 and with stride 1, to obtain the feature-mapping matrix;
(3) extract highly abstract ship features through the convolutional-layer feature-map relation $x_n^r = f\big(\sum_m x_m^{r-1} * w_{mn}^r + b_n^r\big)$, where $x_n^r$ is the n-th output feature map of the r-th convolutional layer, f is the activation function of the neurons of the r-th convolutional layer, $x_m^{r-1}$ is the m-th input ship feature map from network layer r-1, $w_{mn}^r$ is the connection weight between the n-th output feature map and the m-th input feature map, and $b_n^r$ is the bias of the n-th feature map of the r-th convolutional layer;
(4) add a normalization layer after each convolutional layer and perform batch normalization through $\hat{x}_k = (x_k - E[x_k]) / \sqrt{\mathrm{Var}[x_k]}$, so that the matrix data output by the convolutional layer is normalized to a distribution with mean 0 and variance 1, where $x_k$ is the k-th dimension of the input data, $E[x_k]$ its mean and $\sqrt{\mathrm{Var}[x_k]}$ its standard deviation;
(5) introduce the rectified linear unit g(x) = max(0, x_r) as the activation function to apply one-sided suppression to the data entering this layer, taking the output of the normalization layer as the input of the activation function; when the input x_r > 0 the gradient is constantly 1, and when x_r < 0 the output of this layer is 0;
(6) repeat steps (2)-(5) to build a base network with 53 layers;
(7) in this base network, construct a new network structure through the residual function F(x) = H(x) - x_1: when the input and output of a convolutional layer have the same dimensions, a cross-layer skip connection is used and a residual layer is added after the convolutional layer, changing the original base network structure; the layer-by-layer training of the deep neural network is replaced by stage-wise training, the network is divided into several sub-segments, each containing a smaller number of layers, and each segment learns a part of the total residual, so that a smaller overall loss is finally reached; here H(x) is the network mapping after the summation, F(x) the network mapping before the summation and x_1 the input data of the convolutional layer, and when F(x) = 0 the identity mapping H(x) = x_1 is obtained;
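A minimal PyTorch sketch of the building blocks in (2)-(7): 3*3 convolution, batch normalization to zero mean and unit variance, the one-sided activation g(x) = max(0, x), and a cross-layer residual connection used when input and output dimensions match; the 1*1/3*3 channel arrangement and the channel counts are illustrative assumptions, not taken from the text.

```python
# Minimal PyTorch sketch of the described blocks: conv + batch norm + one-sided
# activation, plus a residual connection H(x) = F(x) + x. Sizes are illustrative.
import torch
import torch.nn as nn

class ConvBNAct(nn.Module):
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)   # normalizes each channel to mean 0, var 1
        self.act = nn.ReLU(inplace=True)  # g(x) = max(0, x), one-sided suppression

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ResidualBlock(nn.Module):
    """H(x) = F(x) + x; when F(x) = 0 the block is an identity mapping."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            ConvBNAct(channels, channels // 2, k=1),
            ConvBNAct(channels // 2, channels, k=3),
        )

    def forward(self, x):
        return self.f(x) + x  # cross-layer skip connection

# Example: one stage of a 53-layer style backbone (illustrative only)
stage = nn.Sequential(ConvBNAct(32, 64, s=2), ResidualBlock(64))
out = stage(torch.randn(1, 32, 416, 416))
```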
(2) Feature interaction layer structure:
(1) add feature interaction layers after the network constructed in step 2, divided into interaction layers of three different scales; ship features interact through the fusion of multiple scales. The three interaction layers are:
(a) small-scale feature interaction layer: seven convolutional layers are added after the network structure to perform convolution, and the resulting feature-map information is passed to the next feature interaction layer;
(b) medium-scale feature interaction layer: the feature map of the previous layer is upsampled to twice its size, added to the feature map with the same dimensions in the base network, and convolved again to output the feature-map information;
(c) large-scale feature interaction layer: the feature map of the medium-scale interaction layer is upsampled to twice its size, added to the feature map with the same dimensions in the base network, and convolved to output the feature-map information;
(2) finally, the performance of the feature-map model is measured with a loss function; the closer the loss value is to 0, the more stable the model. A sum-of-squares error is used as the loss function, composed of three parts: coordinate error, IOU error and classification error (a standard formulation is reproduced after this paragraph).
In this loss, the first two rows represent the coordinate errors: the first row is the prediction of the bounding-box center coordinates and the second row the prediction of the width and height; the third and fourth rows represent the confidence loss of the bounding boxes; the fifth row represents the error of the predicted class. Symbols with a hat denote predicted values and symbols without a hat denote training label values; the indicator $\mathbb{1}_{ij}^{obj}$ means that the object falls into the j-th bounding box of grid cell i. If a cell contains no target, the classification error is not back-propagated, and only the bounding box that has the highest IOU with the ground-truth box back-propagates the coordinate error; the others do not;
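The loss formula itself is not reproduced in the text; its description (two coordinate rows, two confidence rows, one classification row, with the indicator $\mathbb{1}_{ij}^{obj}$) matches the standard YOLO sum-squared-error loss, shown here as an assumed reference formulation; the weighting factors $\lambda_{coord}$ and $\lambda_{noobj}$ are not specified in the original.

$$
\begin{aligned}
L ={}& \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\big]\\
&+\lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big[(\sqrt{w_i}-\sqrt{\hat{w}_i})^2+(\sqrt{h_i}-\sqrt{\hat{h}_i})^2\big]\\
&+\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}(C_i-\hat{C}_i)^2
+\lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}(C_i-\hat{C}_i)^2\\
&+\sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c\in classes}\big(p_i(c)-\hat{p}_i(c)\big)^2
\end{aligned}
$$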
Step 3: extract features from the input ship picture to be detected with the feature extraction network to obtain a feature map of a given size, then divide the input image into grid cells of corresponding size; through data normalization, dimension clustering and fine-grained feature operations, match and locate the bounding boxes directly predicted by the grid cells against the center coordinate of the target object in the ground-truth box; on this basis, add a multi-label, multi-class logistic regression layer and perform binary classification for each class, so as to classify and identify the target object. Step 3 consists of coordinate prediction, matching and localization, and classification recognition; the specific implementation steps are as follows:
(1) Coordinate prediction, matching and localization:
(1) for the input ship picture to be detected, the down-sampling of the convolutional layers of the feature extraction network yields a convolution feature map of size 13*13 with 3 channels; the image is then divided into grid cells of corresponding size;
(2) through the anchor operation, a sliding window with 3 scales and 3 different aspect ratios is moved over the 13*13 convolution feature map; centered on the current sliding-window center, the window is mapped to a region of the original image whose center corresponds to one scale and aspect ratio, so that each center predicts 9 prior boxes of different sizes;
(3) with the IOU score as the judgment criterion, a new distance formula d(box, centroid) = 1 - IOU(box, centroid) is defined, and the K-means clustering method is improved so that better prior-box width and height dimensions are found automatically, where box is the coordinates of the predicted prior box and centroid is the center of the cluster;
(4) prior-box clustering is carried out with the following algorithm (a code sketch follows this list):
(a) randomly select one point from the input data set as the first cluster center;
(b) for each point, compute the distance to its nearest seed point, denoted D(x);
(c) select a new data point as a new cluster center, following the principle that points with larger D(x) have a larger probability of being selected as the cluster center;
(d) repeat (b) and (c) until k cluster centers have been selected;
(e) run the k-means algorithm with these k initial cluster centers;
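A minimal sketch of the prior-box clustering in (3)-(4): k-means++ style seeding with the distance d(box, centroid) = 1 - IOU(box, centroid), followed by ordinary k-means on (width, height) pairs; function names are illustrative, and the IOU treats boxes as sharing a common corner, the usual convention for prior-box clustering.

```python
# Minimal sketch (illustrative names): seed k cluster centres with probability
# proportional to D(x) = 1 - IOU to the nearest seed, then run k-means.
import numpy as np

def iou_wh(boxes, centroids):
    """IOU between (w, h) pairs, assuming boxes share the same top-left corner."""
    w = np.minimum(boxes[:, None, 0], centroids[None, :, 0])
    h = np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def seed_centroids(boxes, k, rng):
    centroids = [boxes[rng.integers(len(boxes))]]        # (a) random first centre
    while len(centroids) < k:                            # (d) until k centres chosen
        d = 1.0 - iou_wh(boxes, np.array(centroids)).max(axis=1)  # (b) D(x)
        probs = d / d.sum()                              # (c) larger D(x) -> larger prob.
        centroids.append(boxes[rng.choice(len(boxes), p=probs)])
    return np.array(centroids)

def kmeans_priors(boxes, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = seed_centroids(boxes, k, rng)            # (e) run k-means from the seeds
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)
        centroids = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                              else centroids[j] for j in range(k)])
    return centroids

# Example: cluster labelled ship-box sizes (w, h) into 9 priors
priors = kmeans_priors(np.abs(np.random.randn(500, 2)) + 0.1, k=9)
```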
(5) the prior box predicted by each grid cell on the feature map contains 5 predicted values, tx, ty, tw, th and to, where the first four are coordinates and to is the confidence; the process of obtaining bx, by, bw, bh from the actually predicted tx, ty, tw, th is expressed as:
bx = σ(tx) + cx
by = σ(ty) + cy
bw = Pw * e^tw
bh = Ph * e^th
Pr(object) * IOU(b, centroid) = σ(to)
where cx, cy count, in grid cells, how far the cell containing the box center is from the top-left corner; tx, ty are the predicted center-point coordinates of the box; σ is the logistic (sigmoid) function, which normalizes the coordinates to between 0 and 1, so that the resulting bx, by are values relative to the grid-cell position after normalization; tw, th are the predicted width and height of the box, Pw, Ph are the width and height of the candidate (prior) box, and the resulting bw, bh are values relative to the candidate box after normalization (a decoding sketch is given below);
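A minimal numeric sketch of the decoding relations above, with illustrative values; the grid offsets (cx, cy) and the prior size (pw, ph) would come from the grid cell and the clustered priors.

```python
# Minimal sketch of step (5): turn the network outputs (tx, ty, tw, th, to)
# into a box relative to the grid cell (cx, cy) and the prior size (pw, ph).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_box(tx, ty, tw, th, to, cx, cy, pw, ph):
    bx = sigmoid(tx) + cx          # centre x, offset within cell (cx, cy)
    by = sigmoid(ty) + cy          # centre y
    bw = pw * np.exp(tw)           # width scaled from the prior box
    bh = ph * np.exp(th)           # height scaled from the prior box
    conf = sigmoid(to)             # sigma(to) = Pr(object) * IOU(b, ground truth)
    return bx, by, bw, bh, conf

# Example: prediction in grid cell (6, 4) with a 3.6 x 1.2 prior
print(decode_box(0.2, -0.1, 0.3, 0.1, 1.5, cx=6, cy=4, pw=3.6, ph=1.2))
```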
(6) the difference between the predicted value and the actual value of the ship coordinates is measured with a sum-of-squares distance error loss; when the number of ship samples is n, the loss function can be written (reconstructed from the description) as $L = \sum_{i=1}^{n} (Y_i - f(x_i))^2$, where Y - f(x) is the residual and the whole expression is the sum of the squared residuals; solving for the minimum of this objective yields the best agreement of the coordinate values, and the smaller the function value, the smaller the difference;
(7) matched tracking of the ship through its positioning coordinates is carried out as follows (a per-frame sketch follows this list):
(a) partition the feature map of the input ship picture to be detected into grid cells;
(b) each grid cell predicts 3 candidate boxes, and each candidate box predicts the coordinate values of one object; when the loss-function cost value of step (6) is below the threshold 0.1, proceed to the next step;
(c) locate the position of the ship in the picture through the operation of step (5);
(d) after the ship position has been determined, mark the ship with a bounding box and track the ship through ship-feature matching and real-time positioning coordinates;
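A minimal per-frame sketch of the tracking loop in (7), assuming a generic detector interface: detect_ships is a hypothetical stand-in for the trained network together with the decoding of step (5), and the confidence threshold is illustrative.

```python
# Minimal sketch of the per-frame tracking loop in (7). `detect_ships` is a
# hypothetical helper assumed to return (x1, y1, x2, y2, confidence) per ship.
import cv2

def track_video(video_path, detect_ships, conf_thr=0.5):
    cap = cv2.VideoCapture(video_path)
    track = []                                   # real-time positioning coordinates
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for (x1, y1, x2, y2, conf) in detect_ships(frame):
            if conf < conf_thr:
                continue
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
            track.append(((x1 + x2) / 2, (y1 + y2) / 2))   # ship centre per frame
        cv2.imshow("ship tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
    return track

# usage (hypothetical detector): track = track_video("ship.mp4", my_detector)
```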
(2) Classification recognition
(1) based on the feature interaction layer structure of step 2, the anchor design method and the clustering operation yield 9 cluster centers, which are assigned to 3 scales according to their size:
(a) scale 1: the 13*13 feature map with 1024 channels obtained from the feature extraction network is convolved without changing the feature-map size, and the number of channels is finally reduced to 75;
(b) scale 2: the feature map of the previous layer is convolved to produce a 13*13 feature map with 256 channels, which is then upsampled to a 26*26 feature map with 256 channels, fused with the 26*26, 512-channel feature map of the base network layer, and convolved again;
(c) scale 3: similar to scale 2, using a feature map of size 32*32 for the fusion;
(2) the feature maps processed by the feature interaction layers are used for multi-label classification: a multi-label, multi-class logistic regression layer is added to the network structure, and the logistic regression layer performs binary classification for each class;
(3) the cross-entropy cost function measures the difference between the predicted value and the actual value of the logistic regression layer; the smaller the function value, the closer the prediction is to the true value; in the expression (not reproduced in the original text), x denotes a ship data sample and n the total number of data samples;
(4) through operations (1) and (2) of step 3, the obtained feature map is divided into grid cells in equal proportion; each grid cell predicts C ship-type probabilities, i.e. the probability that the grid cell belongs to a certain ship type under the condition that it contains a ship target, which corresponds to the class-specific confidence Pr(Class_t | Object) * Pr(Object) * IOU(pred, truth) = Pr(Class_t) * IOU(pred, truth), where Pr(Class_t | Object) is the conditional class probability of the target, IOU(pred, truth) is the overlap of the predicted box with the ground-truth box, Pr(Class_t) is the class probability and Pr(Object) is the probability that a target is present;
(5) ship-type classification is carried out with the following algorithm (a post-processing sketch follows at the end of this section):
(a) among the predicted ship classes, set scores below the threshold 0.2 to 0, then sort the scores from high to low;
(b) compute the IOU values of the bounding boxes with the non-maximum suppression algorithm; when the IOU is greater than 0.5 the bounding boxes overlap heavily, so the score of the duplicate box is set to 0 and the box with the larger repetition rate is removed; if the IOU is not greater than 0.5, the box is left unchanged;
(c) select the remaining bounding box with the highest score and repeat step (b) until the last box;
(d) if the score of a finally retained bounding box is greater than 0, the ship type is the class corresponding to this score;
(6) a sigmoid function σ(x) = 1 / (1 + e^(-x)) is added at the output layer; the ship-type prediction values are taken as the input of the function and, after passing through the sigmoid, are constrained to the range 0 to 1; if the output value is greater than the set threshold 0.75, the ship type is recognized and the ship-type name is marked at the top left of the bounding box.
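A minimal sketch of the post-processing in (5)-(6): zeroing scores below 0.2, non-maximum suppression at IOU 0.5, and accepting a ship type only above the sigmoid threshold 0.75; the function names and the box format (x1, y1, x2, y2) are illustrative assumptions.

```python
# Minimal sketch of (5)-(6): score thresholding, NMS, and final acceptance.
import numpy as np

def iou_xyxy(a, b):
    x1, y1 = np.maximum(a[0], b[0]), np.maximum(a[1], b[1])
    x2, y2 = np.minimum(a[2], b[2]), np.minimum(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def classify_ships(boxes, scores, score_thr=0.2, iou_thr=0.5, accept_thr=0.75):
    scores = np.where(scores < score_thr, 0.0, scores)   # (a) zero low scores
    order = np.argsort(-scores)                          # (a) sort high to low
    keep = []
    for i in order:
        if scores[i] == 0:
            continue
        keep.append(i)                                   # (c) keep current best
        for j in order:
            if j != i and scores[j] > 0 and iou_xyxy(boxes[i], boxes[j]) > iou_thr:
                scores[j] = 0.0                          # (b) suppress heavy overlaps
    # (d)/(6): report a ship type only when the retained score clears 0.75
    return [(i, scores[i]) for i in keep if scores[i] > accept_thr]

boxes = np.array([[10, 10, 110, 60], [12, 12, 108, 58], [200, 40, 300, 90]], float)
scores = np.array([0.92, 0.80, 0.30])
print(classify_ships(boxes, scores))   # second box suppressed, third below 0.75
```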
Detailed description of the invention:
To explain the technical solution of the present invention more clearly, the drawings needed for the description are briefly introduced below. Obviously, the drawings described below show one embodiment of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort:
Fig. 1 is a flowchart of the ship intelligent recognition and tracking method of the present invention;
Fig. 2 is a process diagram of the deep network structure of the ship intelligent recognition and tracking method of the present invention;
Fig. 3 is a process diagram of the target position prediction and classification recognition of the ship intelligent recognition and tracking method of the present invention.
Specific embodiment:
To better understand the technical features, objects and effects of the present invention, the present invention is described in more detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here only illustrate the present invention and are not intended to limit the patent. It should be noted that the drawings are drawn in a very simplified form and with imprecise proportions, and serve only to aid in explaining the patent conveniently and clearly.
The present invention proposes a ship intelligent recognition and tracking method suitable for identifying and tracking ship images of normal imaging size in the frames of surveillance video. Different types of ship pictures were collected from the internet, with 20,000 pictures covering 7 ship types as the sample data. In the present invention, a ship image of normal imaging size means that the imaged ship is not smaller than 0.10% of the actual size of the frame image and that its imaged length or width is not smaller than 13 pixels. The ship surveillance video comes from the monitoring data collected by cameras on board the ship. The experimental platform of this study is the Windows 10 operating system with 16 GB RAM, a CPU with a 3.2 GHz clock frequency and a GTX 1050Ti GPU; the simulation platform is PyCharm (2018 edition).
As shown in Fig. 1, the detailed process of the intelligent ship recognition and tracking method according to the present invention is as follows:
Step 1: in this experiment, 20,000 pictures of different ship types are collected from the internet; pictures in which the ship foreground occupies less than 90% of the picture are selected from them, covering 7 ship types, and a total of 8,000 pictures are used as the original ship image data; annotation preprocessing is carried out to initialize the subsequent recognition and tracking model;
Step 2: build a deep network model; apply convolution operations to the input ship image sample data to extract the corresponding features and learn their combinations, obtaining a feature-map model of the object; on this basis, add feature interaction layers divided into three scales, and within each scale realize local feature interaction between feature maps by means of convolution kernels to perform feature fusion;
Step 3: extract features from the input ship picture to be detected with the feature extraction network to obtain a feature map of a given size, then divide the input image into grid cells of corresponding size; through data normalization, dimension clustering and fine-grained feature operations, match and locate the bounding boxes directly predicted by the grid cells against the center coordinate of the target object in the ground-truth box; on this basis, add a multi-label, multi-class logistic regression layer and perform binary classification for each class to classify and identify the target object; step 3 consists of coordinate prediction, matching and localization, and classification recognition.
The detailed process of step 1 is as follows:
Step 1: from the 20,000 pictures of different ship types collected from the internet, pictures in which the ship foreground occupies less than 90% of the picture are selected, covering 7 ship types: container ships, oil tankers, chemical tankers, LNG carriers, general cargo ships, bulk carriers and other ships, with 2,300 container-ship pictures, 1,420 oil-tanker pictures, 1,240 chemical-tanker pictures, 1,250 LNG-carrier pictures, 2,750 general-cargo-ship pictures, 2,060 bulk-carrier pictures and 1,500 pictures of other ships, 12,520 pictures in total; these serve as the original ship image data and are annotated and preprocessed to initialize the subsequent recognition and tracking model. Step 1 consists of data preprocessing; the specific implementation steps are as follows:
(1) Data preprocessing:
(1) download the Pascal VOC2007 standard dataset, empty its original data, and keep the JPEGImages, Annotations and ImageSets folders;
(2) store the collected raw ship images of different types in the JPEGImages folder, including training pictures and test pictures, with a training-to-test picture ratio of 8:2;
(3) use the labelImg annotation tool to generate model-readable XML annotation files and store them in the Annotations folder; each XML file corresponds to one picture in the JPEGImages folder;
(4) create a Main folder under the ImageSets folder to store the image-data information corresponding to each ship picture type, including the training set, the detection (test) set, the validation set, and the combined training-and-validation set;
(5) modify the configuration parameters as follows (a short sketch of this computation follows):
(a) open the cfg file;
(b) modify the number of convolution kernels according to the formula 3*(5+len(classes)), where classes denotes the ship classes to be recognized;
(6) modify the random parameter from its original value 1 to 0 when the GPU memory is small;
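A minimal sketch of the cfg adjustment in (5)-(6); the class list spells out the 7 ship types from step 1, and the note on random follows the text.

```python
# Minimal sketch of the cfg values in (5)-(6): the number of convolution kernels
# (filters) in each output layer is 3 * (5 + number of classes), and random is
# set to 0 when GPU memory is limited.
classes = ["container ship", "oil tanker", "chemical tanker", "LNG carrier",
           "general cargo ship", "bulk carrier", "other"]       # 7 ship types

filters = 3 * (5 + len(classes))   # 3 priors x (4 coords + 1 confidence + classes)
random_flag = 0                    # 0: fixed input size, saves GPU memory

print(f"filters={filters}, classes={len(classes)}, random={random_flag}")
# filters=36 for the 7 ship types used in this work
```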
As shown in Fig. 2, the detailed process of step 2 is as follows:
Step 2: build a deep network model; apply convolution operations to the input ship image sample data to extract the corresponding features and learn their combinations, obtaining a feature-map model of the object; on this basis, add feature interaction layers divided into three scales, and within each scale realize local feature interaction between feature maps by means of convolution kernels. Step 2 consists of the ship feature extraction network structure and the feature interaction layer structure; the specific implementation steps are as follows:
(1) Ship feature extraction network:
(1) input the preprocessed ship pictures, improve the resolution of the images with a high-resolution classifier, and normalize them;
(2) apply 32 convolution kernels, each of size 3*3 and with stride 1, to obtain the feature-mapping matrix;
(3) extract highly abstract ship features through the convolutional-layer feature-map relation $x_n^r = f\big(\sum_m x_m^{r-1} * w_{mn}^r + b_n^r\big)$, where $x_n^r$ is the n-th output feature map of the r-th convolutional layer, f is the activation function of the neurons of the r-th convolutional layer, $x_m^{r-1}$ is the m-th input ship feature map from network layer r-1, $w_{mn}^r$ is the connection weight between the n-th output feature map and the m-th input feature map, and $b_n^r$ is the bias of the n-th feature map of the r-th convolutional layer;
(4) add a normalization layer after each convolutional layer and perform batch normalization through $\hat{x}_k = (x_k - E[x_k]) / \sqrt{\mathrm{Var}[x_k]}$, so that the matrix data output by the convolutional layer is normalized to a distribution with mean 0 and variance 1, where $x_k$ is the k-th dimension of the input data, $E[x_k]$ its mean and $\sqrt{\mathrm{Var}[x_k]}$ its standard deviation;
(5) introduce the rectified linear unit g(x) = max(0, x_r) as the activation function to apply one-sided suppression to the data entering this layer, taking the output of the normalization layer as the input of the activation function; when the input x_r > 0 the gradient is constantly 1, and when x_r < 0 the output of this layer is 0;
(6) repeat steps (2)-(5) to build a base network with 53 layers;
(7) in this base network, construct a new network structure through the residual function F(x) = H(x) - x_1: when the input and output of a convolutional layer have the same dimensions, a cross-layer skip connection is used and a residual layer is added after the convolutional layer, changing the original base network structure; the layer-by-layer training of the deep neural network is replaced by stage-wise training, the network is divided into several sub-segments, each containing a smaller number of layers, and each segment learns a part of the total residual, so that a smaller overall loss is finally reached; here H(x) is the network mapping after the summation, F(x) the network mapping before the summation and x_1 the input data of the convolutional layer, and when F(x) = 0 the identity mapping H(x) = x_1 is obtained;
(2) Feature interaction layer structure:
(1) add feature interaction layers after the network constructed in step 2, divided into interaction layers of three different scales; ship features interact through the fusion of multiple scales. The three interaction layers are:
(a) small-scale feature interaction layer: seven convolutional layers are added after the network structure to perform convolution, and the resulting feature-map information is passed to the next feature interaction layer;
(b) medium-scale feature interaction layer: the feature map of the previous layer is upsampled to twice its size, added to the feature map with the same dimensions in the base network, and convolved again to output the feature-map information;
(c) large-scale feature interaction layer: the feature map of the medium-scale interaction layer is upsampled to twice its size, added to the feature map with the same dimensions in the base network, and convolved to output the feature-map information;
(2) finally, the performance of the feature-map model is measured with a loss function; the closer the loss value is to 0, the more stable the model. A sum-of-squares error is used as the loss function, composed of three parts: coordinate error, IOU error and classification error. In this loss, the first two rows represent the coordinate errors: the first row is the prediction of the bounding-box center coordinates and the second row the prediction of the width and height; the third and fourth rows represent the confidence loss of the bounding boxes; the fifth row represents the error of the predicted class. Symbols with a hat denote predicted values and symbols without a hat denote training label values; the indicator $\mathbb{1}_{ij}^{obj}$ means that the object falls into the j-th bounding box of grid cell i. If a cell contains no target, the classification error is not back-propagated, and only the bounding box that has the highest IOU with the ground-truth box back-propagates the coordinate error; the others do not.
As shown in Fig. 3, the detailed process of step 3 is as follows:
Step 3: extract features from the input ship picture to be detected with the feature extraction network to obtain a feature map of a given size, then divide the input image into grid cells of corresponding size; through data normalization, dimension clustering and fine-grained feature operations, match and locate the bounding boxes directly predicted by the grid cells against the center coordinate of the target object in the ground-truth box; on this basis, add a multi-label, multi-class logistic regression layer and perform binary classification for each class, so as to classify and identify the target object. Step 3 consists of coordinate prediction, matching and localization, and classification recognition; the specific implementation steps are as follows:
(1) Coordinate prediction, matching and localization:
(1) for the input ship picture to be detected, the down-sampling of the convolutional layers of the feature extraction network yields a convolution feature map of size 13*13 with 3 channels; the image is then divided into grid cells of corresponding size;
(2) through the anchor operation, a sliding window with 3 scales and 3 different aspect ratios is moved over the 13*13 convolution feature map; centered on the current sliding-window center, the window is mapped to a region of the original image whose center corresponds to one scale and aspect ratio, so that each center predicts 9 prior boxes of different sizes;
(3) with the IOU score as the judgment criterion, a new distance formula d(box, centroid) = 1 - IOU(box, centroid) is defined, and the K-means clustering method is improved so that better prior-box width and height dimensions are found automatically, where box is the coordinates of the predicted prior box and centroid is the center of the cluster;
(4) prior-box clustering is carried out with the following algorithm:
(a) randomly select one point from the input data set as the first cluster center;
(b) for each point, compute the distance to its nearest seed point, denoted D(x);
(c) select a new data point as a new cluster center, following the principle that points with larger D(x) have a larger probability of being selected as the cluster center;
(d) repeat (b) and (c) until k cluster centers have been selected;
(e) run the k-means algorithm with these k initial cluster centers;
(5) the prior box predicted by each grid cell on the feature map contains 5 predicted values, tx, ty, tw, th and to, where the first four are coordinates and to is the confidence; the process of obtaining bx, by, bw, bh from the actually predicted tx, ty, tw, th is expressed as:
bx = σ(tx) + cx
by = σ(ty) + cy
bw = Pw * e^tw
bh = Ph * e^th
Pr(object) * IOU(b, centroid) = σ(to)
where cx, cy count, in grid cells, how far the cell containing the box center is from the top-left corner; tx, ty are the predicted center-point coordinates of the box; σ is the logistic (sigmoid) function, which normalizes the coordinates to between 0 and 1, so that the resulting bx, by are values relative to the grid-cell position after normalization; tw, th are the predicted width and height of the box, Pw, Ph are the width and height of the candidate (prior) box, and the resulting bw, bh are values relative to the candidate box after normalization;
(6) the difference between the predicted value and the actual value of the ship coordinates is measured with a sum-of-squares distance error loss; when the number of ship samples is n, the loss function can be written (reconstructed from the description) as $L = \sum_{i=1}^{n} (Y_i - f(x_i))^2$, where Y - f(x) is the residual and the whole expression is the sum of the squared residuals; solving for the minimum of this objective yields the best agreement of the coordinate values, and the smaller the function value, the smaller the difference;
(7) matched tracking of the ship through its positioning coordinates is carried out as follows:
(a) partition the feature map of the input ship picture to be detected into grid cells;
(b) each grid cell predicts 3 candidate boxes, and each candidate box predicts the coordinate values of one object; when the loss-function cost value of step (6) is below the threshold 0.1, proceed to the next step;
(c) locate the position of the ship in the picture through the operation of step (5);
(d) after the ship position has been determined, mark the ship with a bounding box and track the ship through ship-feature matching and real-time positioning coordinates;
(2) Classification recognition
(1) based on the feature interaction layer structure of step 2, the anchor design method and the clustering operation yield 9 cluster centers, which are assigned to 3 scales according to their size:
(a) scale 1: the 13*13 feature map with 1024 channels obtained from the feature extraction network is convolved without changing the feature-map size, and the number of channels is finally reduced to 75;
(b) scale 2: the feature map of the previous layer is convolved to produce a 13*13 feature map with 256 channels, which is then upsampled to a 26*26 feature map with 256 channels, fused with the 26*26, 512-channel feature map of the base network layer, and convolved again;
(c) scale 3: similar to scale 2, using a feature map of size 32*32 for the fusion;
(2) the feature maps processed by the feature interaction layers are used for multi-label classification: a multi-label, multi-class logistic regression layer is added to the network structure, and the logistic regression layer performs binary classification for each class;
(3) the cross-entropy cost function measures the difference between the predicted value and the actual value of the logistic regression layer; the smaller the function value, the closer the prediction is to the true value; in the expression (not reproduced in the original text), x denotes a ship data sample and n the total number of data samples;
(4) through operations (1) and (2) of step 3, the obtained feature map is divided into grid cells in equal proportion; each grid cell predicts C ship-type probabilities, i.e. the probability that the grid cell belongs to a certain ship type under the condition that it contains a ship target, which corresponds to the class-specific confidence Pr(Class_t | Object) * Pr(Object) * IOU(pred, truth) = Pr(Class_t) * IOU(pred, truth), where Pr(Class_t | Object) is the conditional class probability of the target, IOU(pred, truth) is the overlap of the predicted box with the ground-truth box, Pr(Class_t) is the class probability and Pr(Object) is the probability that a target is present;
(5) ship-type classification is carried out with the following algorithm:
(a) among the predicted ship classes, set scores below the threshold 0.2 to 0, then sort the scores from high to low;
(b) compute the IOU values of the bounding boxes with the non-maximum suppression algorithm; when the IOU is greater than 0.5 the bounding boxes overlap heavily, so the score of the duplicate box is set to 0 and the box with the larger repetition rate is removed; if the IOU is not greater than 0.5, the box is left unchanged;
(c) select the remaining bounding box with the highest score and repeat step (b) until the last box;
(d) if the score of a finally retained bounding box is greater than 0, the ship type is the class corresponding to this score;
(6) a sigmoid function σ(x) = 1 / (1 + e^(-x)) is added at the output layer; the ship-type prediction values are taken as the input of the function and, after passing through the sigmoid, are constrained to the range 0 to 1; if the output value is greater than the set threshold 0.75, the ship type is recognized and the ship-type name is marked at the top left of the bounding box.

Claims (1)

1. A ship intelligent recognition and tracking method, characterized by comprising the following steps:
Step 1: collect pictures of different ship types as the original ship image data and carry out annotation preprocessing to initialize the subsequent recognition and tracking model; step 1 consists of data preprocessing, with the following specific implementation steps:
(1) Data preprocessing:
(1) download the Pascal VOC2007 standard dataset, empty its original data, and keep the JPEGImages, Annotations and ImageSets folders;
(2) store the collected raw ship images of different types in the JPEGImages folder, including training pictures and test pictures, with a training-to-test picture ratio of 8:2;
(3) use the labelImg annotation tool to generate model-readable XML annotation files and store them in the Annotations folder; each XML file corresponds to one picture in the JPEGImages folder;
(4) create a Main folder under the ImageSets folder to store the image-data information corresponding to each ship picture type, including the training set, the detection (test) set, the validation set, and the combined training-and-validation set;
Step 2: build a deep network model; apply convolution operations to the input ship image sample data to extract the corresponding features and learn their combinations, obtaining a feature-map model of the object; on this basis, add feature interaction layers divided into three scales, and within each scale realize local feature interaction between feature maps by means of convolution kernels; step 2 consists of the ship feature extraction network structure and the feature interaction layer structure, with the following specific implementation steps:
(1) Ship feature extraction network:
(1) input the preprocessed ship pictures, improve the resolution of the images with a high-resolution classifier, and normalize them;
(2) apply 32 convolution kernels, each of size 3*3 and with stride 1, to obtain the feature-mapping matrix;
(3) extract highly abstract ship features through the convolutional-layer feature-map relation $x_n^r = f\big(\sum_m x_m^{r-1} * w_{mn}^r + b_n^r\big)$, where $x_n^r$ is the n-th output feature map of the r-th convolutional layer, f is the activation function of the neurons of the r-th convolutional layer, $x_m^{r-1}$ is the m-th input ship feature map from network layer r-1, $w_{mn}^r$ is the connection weight between the n-th output feature map and the m-th input feature map, and $b_n^r$ is the bias of the n-th feature map of the r-th convolutional layer;
(4) add a normalization layer after each convolutional layer and perform batch normalization through $\hat{x}_k = (x_k - E[x_k]) / \sqrt{\mathrm{Var}[x_k]}$, so that the matrix data output by the convolutional layer is normalized to a distribution with mean 0 and variance 1, where $x_k$ is the k-th dimension of the input data, $E[x_k]$ its mean and $\sqrt{\mathrm{Var}[x_k]}$ its standard deviation;
(5) introduce the rectified linear unit g(x) = max(0, x_r) as the activation function to apply one-sided suppression to the data entering this layer, taking the output of the normalization layer as the input of the activation function; when the input x_r > 0 the gradient is constantly 1, and when x_r < 0 the output of this layer is 0;
(6) repeat steps (2)-(5) to build a base network with 53 layers;
(7) in this base network, construct a new network structure through the residual function F(x) = H(x) - x_1: when the input and output of a convolutional layer have the same dimensions, a cross-layer skip connection is used and a residual layer is added after the convolutional layer, changing the original base network structure; the layer-by-layer training of the deep neural network is replaced by stage-wise training, the network is divided into several sub-segments, each containing a smaller number of layers, and each segment learns a part of the total residual, so that a smaller overall loss is finally reached; here H(x) is the network mapping after the summation, F(x) the network mapping before the summation and x_1 the input data of the convolutional layer, and when F(x) = 0 the identity mapping H(x) = x_1 is obtained;
(2) Feature interaction layer structure:
(1) add feature interaction layers after the network constructed in step 2, divided into interaction layers of three different scales; ship features interact through the fusion of multiple scales, and the three interaction layers are:
(a) small-scale feature interaction layer: seven convolutional layers are added after the network structure to perform convolution, and the resulting feature-map information is passed to the next feature interaction layer;
(b) medium-scale feature interaction layer: the feature map of the previous layer is upsampled to twice its size, added to the feature map with the same dimensions in the base network, and convolved again to output the feature-map information;
(c) large-scale feature interaction layer: the feature map of the medium-scale interaction layer is upsampled to twice its size, added to the feature map with the same dimensions in the base network, and convolved to output the feature-map information;
(2) finally, the performance of the feature-map model is measured with a loss function; the closer the loss value is to 0, the more stable the model; a sum-of-squares error is used as the loss function, composed of three parts: coordinate error, IOU error and classification error; in this loss, the first two rows represent the coordinate errors, the first row being the prediction of the bounding-box center coordinates and the second row the prediction of the width and height, the third and fourth rows represent the confidence loss of the bounding boxes, and the fifth row represents the error of the predicted class; symbols with a hat denote predicted values and symbols without a hat denote training label values, and the indicator $\mathbb{1}_{ij}^{obj}$ means that the object falls into the j-th bounding box of grid cell i; if a cell contains no target, the classification error is not back-propagated, and only the bounding box that has the highest IOU with the ground-truth box back-propagates the coordinate error, while the others do not;
Step 3: feature is extracted to ship picture to be detected is inputted by feature extraction network, obtains the feature of certain size Figure, is then divided into grid of corresponding size for input picture, passes through data normalization processing and dimension cluster, fine granularity feature The centre coordinate of operation, the bounding box that grid directly predicts and target object in true frame carries out matching positioning, in this base The polytypic logistic regression layer of multi-tag is added on plinth, and two classification are done to each classification and are classified to realize to target object Identification, step 3 include coordinate prediction, and matching positioning and Classification and Identification, specific implementation step are as follows:
(1) coordinate is predicted, matching positioning:
(1) for inputting ship picture to be detected, by the down-sampled processing of the convolutional layer of feature extraction network, obtaining size is 13*13, the convolution characteristic pattern that port number is 3, then divides the image into grid of corresponding size;
(2) operated by anchor point, using the different Aspect Ratios of 3 kinds of scales and 3 kinds window size 13*13 convolution characteristic pattern Upper progress sliding window operation, is mapped to a region of original image centered on current sliding window mouth center, and the center in the region is corresponding One scale and length-width ratio, each center can predict 9 kinds of different size of priori frames;
(3) use IOU score judgment criteria, define new range formula d (box, centroid)=1-IOU (box, Centroid), improving K-means clustering method, to be automatically found better priori frame width high-dimensional, and wherein box is prediction priori frame Coordinate, centroid is the center for clustering all clusters;
(4) priori frame cluster is carried out according to following algorithm:
(a) randomly choose from the data acquisition system of input at one o'clock as first cluster centre;
(b) for each point, we calculate the distance of itself and a nearest seed point, are denoted as D (x);
(c) select a new data point as new cluster centre, the principle selected is chosen for the biggish point of D (x) numerical value Be taken as the probability of cluster centre it is larger;
(d) (b) and (c) is repeated to come until k cluster centre is selected;
(e) k-means algorithm is run using this k initial cluster centres;
(5) the priori frame of each grid forecasting on characteristic pattern includes 5 predicted values, respectively tx, ty, tw, th, to, wherein First four are coordinates, and to is confidence level, by the t of actual predictionx,ty, tw, th obtain bx,by,bw,bhProcedural representation are as follows:
bx=σ (tx)+cx
by=σ (ty)+cy
bw=Pwetw
bh=Pheth
Pr (object) * IOU (b, centroid)=σ (t0)
Wherein, cx, cyFor the number of first grid in the grid distance upper left corner where the centre coordinate of frame, tx,tyFor prediction Frame center point coordinate, sigma function is logistic function, by Unitary coordinate between 0-1, finally obtained bx,byFor The value relative to grid position after normalization, tw, th are the width and height of the frame of prediction, and Pw, Ph are the width and height of candidate frame, Finally obtained bw,bhFor the value after normalization relative to candidate frame position;
(6) difference between the predicted value and actual value of ship coordinate is measured by quadratic sum range error loss function, when When ship number of samples is n, loss function at this time is indicated are as follows:
Wherein, what Y-f (x) was indicated is residual error, and what entire formula indicated is the quadratic sum of residual error, the minimum objective function of solution Value is exactly the similitude of coordinate value, and functional value is smaller, and otherness is better;
(7) matched jamming ship is carried out according to the following steps positioning coordinate:
(a) by carrying out characteristic pattern grid dividing for inputting ship picture to be detected;
(b) each grid can predict that 3 candidate frames, each candidate frame can predict the coordinate value of an object, pass through step Suddenly the loss function cost value of (6) is less than threshold value 0.1, carries out next step operation;
(c) it is operated by step (5), position positioning is carried out to the ship in picture;
(d) after determining its vessel position, ship is marked with bounding box, passes through ship's particulars matching and positioning coordinate progress in real time Track ship;
(2) Classification and Identification
(1) based on the feature interaction layer structure in step 1,9 clusters are obtained using cluster operation using the design method of anchor point It is given at center 3 kinds of scales according to size:
(a) scale 1: the size obtained from feature extraction network structure is 13*13, and the characteristic pattern that channel is 1024 carries out convolution behaviour Make, does not change characteristic pattern size, port number is finally reduced to 75;
(b) scale 2: upper one layer of characteristic pattern is subjected to convolution operation, the characteristic pattern of 13*13,256 channels is generated, then carries out Up-sampling generates the characteristic pattern of 26*26,256 channels, while the characteristic pattern with the 26*26 of infrastructure network layer, 512 channels It merges, then carries out convolution operation;
(c) scale 3: it is similar with scale 2, use the characteristic pattern of 32*32 size to be merged;
(2) feature interaction layer treated characteristic pattern is used into multi-tag sort operation, multi-tag is added in network structure Polytypic logistic regression layer does two classification to each classification with logistic regression layer;
(3) the difference between the predicted value and the actual value of the logistic regression layer is measured by the cross-entropy cost function; the smaller the function value, the closer the predicted value is to the true value. In its standard form the cost is C = -(1/n) Σx [y·ln a + (1 - y)·ln(1 - a)], where a is the predicted value, y the actual value, x denotes a ship data sample and n the total number of data samples, as sketched below;
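For illustration, a minimal sketch of this cross-entropy cost for the per-class binary decisions of the logistic regression layer is given below, assuming y_true holds 0/1 labels and y_pred the layer's outputs; the clipping constant is an implementation detail, not part of the method above.

    import numpy as np

    def cross_entropy_cost(y_true, y_pred, eps=1e-12):
        """Average binary cross-entropy over the n ship data samples; smaller means prediction closer to truth."""
        y = np.asarray(y_true, dtype=float)
        a = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)   # keep log() finite
        return float(-np.mean(y * np.log(a) + (1.0 - y) * np.log(1.0 - a)))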
(4) through operations (1) and (2) of step 3, the obtained feature map is divided into a grid of equal proportions, and each grid cell predicts C ship-type probabilities, i.e. the probability that the grid cell belongs to a certain ship type under the condition that it contains a ship target; the expression is Pr(Classt | Object) · Pr(Object) · IOU(pred, truth) = Pr(Classt) · IOU(pred, truth), where Pr(Classt | Object) is the conditional class probability given a target, IOU(pred, truth) is the overlap between the predicted box and the true box, Pr(Classt) is the class probability, and Pr(Object) is the probability that a target is present (see the sketch below);
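For illustration, a minimal sketch of this class-specific confidence for one grid cell is given below, assuming the conditional class probabilities, the objectness Pr(Object) and the IOU between the predicted and true boxes are already available; the names are illustrative.

    def class_confidences(p_class_given_object, p_object, iou_pred_truth):
        """Pr(Class_t | Object) * Pr(Object) * IOU = Pr(Class_t) * IOU, for each of the C ship types."""
        return [p * p_object * iou_pred_truth for p in p_class_given_object]

    # e.g. three ship types in one grid cell
    scores = class_confidences([0.7, 0.2, 0.1], p_object=0.9, iou_pred_truth=0.8)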
(5) ship-type classification is carried out according to the following algorithm:
(a) among the predicted ship descriptions, scores below the threshold 0.2 are set to 0, and the boxes are then sorted by score from high to low;
(b) the IOU values of the bounding boxes are computed with the non-maximum-suppression algorithm; when the IOU is greater than 0.5 the bounding box has a high repetition rate, so its score is set to 0 and the heavily overlapping box is removed; if the IOU is not greater than 0.5, the box is left unchanged;
(c) the remaining bounding box with the highest score is selected and step (b) is repeated until the last box has been processed;
(d) if the score of a finally retained bounding box is greater than 0, the ship type is the class corresponding to that score (see the sketch after this list);
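For illustration, a minimal sketch of the score-thresholding and non-maximum-suppression procedure in (a)-(d) is given below, reusing the iou helper from the tracking sketch earlier; the list-based data layout is an assumption.

    def nms_classify(boxes, scores, score_threshold=0.2, iou_threshold=0.5):
        """Zero low scores, suppress heavily overlapping boxes, keep boxes whose final score stays above 0."""
        scores = [s if s >= score_threshold else 0.0 for s in scores]              # step (a)
        order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)   # sort high to low
        kept = []
        for i in order:
            if scores[i] == 0.0:
                continue
            kept.append(i)                                                         # step (c): next best remaining box
            for j in order:
                if j != i and scores[j] > 0.0 and iou(boxes[i], boxes[j]) > iou_threshold:
                    scores[j] = 0.0                                                # step (b): repetition rate too high
        return [(i, scores[i]) for i in kept if scores[i] > 0.0]                   # step (d): surviving boxes give the class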
(6) a sigmoid function σ(x) = 1/(1 + e^(-x)) is added to the output layer, with the ship-type prediction value as the input of the function; after the sigmoid, the value is constrained to the range 0 to 1, and if the output value is greater than the given threshold 0.75, the ship type is identified and the ship-type label is marked at the upper-left of the bounding box, as in the sketch below.
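For illustration, a minimal sketch of this output-layer decision is given below, assuming raw_score is the ship-type prediction value fed into the output layer and class_names maps class indices to ship-type labels; both names are illustrative.

    import math

    def identify_ship_type(raw_score, class_index, class_names, threshold=0.75):
        """Constrain the prediction to (0, 1) with a sigmoid and emit the label only above the 0.75 threshold."""
        value = 1.0 / (1.0 + math.exp(-raw_score))   # sigma(x) = 1 / (1 + e^(-x))
        if value > threshold:
            return class_names[class_index]          # label drawn at the upper-left of the bounding box
        return None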
CN201910202874.8A 2019-01-04 2019-03-18 A kind of ship intelligent recognition tracking Withdrawn CN110147807A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910007965 2019-01-04
CN2019100079656 2019-01-04

Publications (1)

Publication Number Publication Date
CN110147807A true CN110147807A (en) 2019-08-20

Family

ID=67589005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910202874.8A Withdrawn CN110147807A (en) 2019-01-04 2019-03-18 A kind of ship intelligent recognition tracking

Country Status (1)

Country Link
CN (1) CN110147807A (en)

Cited By (86)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110517272B (en) * 2019-08-29 2022-03-25 电子科技大学 Deep learning-based blood cell segmentation method
CN110517272A (en) * 2019-08-29 2019-11-29 电子科技大学 Blood cell segmentation method based on deep learning
CN110610159A (en) * 2019-09-16 2019-12-24 天津通卡智能网络科技股份有限公司 Real-time bus passenger flow volume statistical method
CN110610165A (en) * 2019-09-18 2019-12-24 上海海事大学 Ship behavior analysis method based on YOLO model
CN110674930A (en) * 2019-09-27 2020-01-10 南昌航空大学 SAR image denoising method based on learning down-sampling and jump connection network
CN110751195A (en) * 2019-10-12 2020-02-04 西南交通大学 Fine-grained image classification method based on improved YOLOv3
CN110751195B (en) * 2019-10-12 2023-02-07 西南交通大学 Fine-grained image classification method based on improved YOLOv3
CN110826428A (en) * 2019-10-22 2020-02-21 电子科技大学 Ship detection method in high-speed SAR image
CN111337789A (en) * 2019-10-23 2020-06-26 西安科技大学 Method and system for detecting fault electrical element in high-voltage transmission line
CN110796107A (en) * 2019-11-04 2020-02-14 南京北旨智能科技有限公司 Power inspection image defect identification method and system and power inspection unmanned aerial vehicle
CN111062383A (en) * 2019-11-04 2020-04-24 南通大学 Image-based ship detection depth neural network algorithm
CN110889380A (en) * 2019-11-29 2020-03-17 北京卫星信息工程研究所 Ship identification method and device and computer storage medium
CN112885411A (en) * 2019-11-29 2021-06-01 中国科学院大连化学物理研究所 Polypeptide detection method based on deep learning
CN110889380B (en) * 2019-11-29 2022-10-28 北京卫星信息工程研究所 Ship identification method and device and computer storage medium
CN110929670A (en) * 2019-12-02 2020-03-27 合肥城市云数据中心股份有限公司 Muck truck cleanliness video identification and analysis method based on yolo3 technology
CN111062278A (en) * 2019-12-03 2020-04-24 西安工程大学 Abnormal behavior identification method based on improved residual error network
CN111062278B (en) * 2019-12-03 2023-04-07 西安工程大学 Abnormal behavior identification method based on improved residual error network
CN110991359A (en) * 2019-12-06 2020-04-10 重庆市地理信息和遥感应用中心(重庆市测绘产品质量检验测试中心) Satellite image target detection method based on multi-scale depth convolution neural network
CN111163315B (en) * 2019-12-20 2022-08-16 汕头大学 Monitoring video compression method and system based on deep learning
CN111163315A (en) * 2019-12-20 2020-05-15 汕头大学 Monitoring video compression method and system based on deep learning
CN111667450A (en) * 2019-12-23 2020-09-15 珠海大横琴科技发展有限公司 Ship quantity counting method and device and electronic equipment
CN111126325B (en) * 2019-12-30 2023-01-03 哈尔滨工程大学 Intelligent personnel security identification statistical method based on video
CN111126325A (en) * 2019-12-30 2020-05-08 哈尔滨工程大学 Intelligent personnel security identification statistical method based on video
CN111178451A (en) * 2020-01-02 2020-05-19 中国民航大学 License plate detection method based on YOLOv3 network
CN111353544A (en) * 2020-03-05 2020-06-30 天津城建大学 Improved Mixed Pooling-Yolov 3-based target detection method
CN111353544B (en) * 2020-03-05 2023-07-25 天津城建大学 Improved Mixed Pooling-YOLOV 3-based target detection method
CN111598942A (en) * 2020-03-12 2020-08-28 中国电力科学研究院有限公司 Method and system for automatically positioning electric power facility instrument
CN111444821A (en) * 2020-03-24 2020-07-24 西北工业大学 Automatic identification method for urban road signs
CN111444821B (en) * 2020-03-24 2022-03-25 西北工业大学 Automatic identification method for urban road signs
CN111310861A (en) * 2020-03-27 2020-06-19 西安电子科技大学 License plate recognition and positioning method based on deep neural network
CN111310861B (en) * 2020-03-27 2023-05-23 西安电子科技大学 License plate recognition and positioning method based on deep neural network
CN111553934A (en) * 2020-04-24 2020-08-18 哈尔滨工程大学 Multi-ship tracking method adopting multi-dimensional fusion
CN111553934B (en) * 2020-04-24 2022-07-15 哈尔滨工程大学 Multi-ship tracking method adopting multi-dimensional fusion
CN111626121B (en) * 2020-04-24 2022-12-20 上海交通大学 Complex event identification method and system based on multi-level interactive reasoning in video
CN111626121A (en) * 2020-04-24 2020-09-04 上海交通大学 Complex event identification method and system based on multi-level interactive reasoning in video
CN111563557A (en) * 2020-05-12 2020-08-21 山东科华电力技术有限公司 Method for detecting target in power cable tunnel
CN111563557B (en) * 2020-05-12 2023-01-17 山东科华电力技术有限公司 Method for detecting target in power cable tunnel
CN111611918A (en) * 2020-05-20 2020-09-01 重庆大学 Traffic flow data set acquisition and construction method based on aerial photography data and deep learning
CN111611918B (en) * 2020-05-20 2023-07-21 重庆大学 Traffic flow data set acquisition and construction method based on aerial data and deep learning
CN111598180A (en) * 2020-05-21 2020-08-28 湖南警察学院 Tracking method for automatically identifying evidence-obtaining target
CN111598180B (en) * 2020-05-21 2023-07-14 湖南警察学院 Automatic identification evidence-taking target tracking method
CN111652321B (en) * 2020-06-10 2023-06-02 江苏科技大学 Marine ship detection method based on improved YOLOV3 algorithm
CN111652321A (en) * 2020-06-10 2020-09-11 江苏科技大学 Offshore ship detection method based on improved YOLOV3 algorithm
CN111709345A (en) * 2020-06-12 2020-09-25 重庆电政信息科技有限公司 Method for detecting abnormal articles in fixed ring in real time
CN111898467A (en) * 2020-07-08 2020-11-06 浙江大华技术股份有限公司 Attribute identification method and device, storage medium and electronic device
CN111914935A (en) * 2020-08-03 2020-11-10 哈尔滨工程大学 Ship image target detection method based on deep learning
CN111914935B (en) * 2020-08-03 2022-07-15 哈尔滨工程大学 Ship image target detection method based on deep learning
CN112211793A (en) * 2020-08-25 2021-01-12 华北电力大学(保定) Wind turbine generator fault automatic identification method based on video image analysis
CN112200764B (en) * 2020-09-02 2022-05-03 重庆邮电大学 Photovoltaic power station hot spot detection and positioning method based on thermal infrared image
CN112200764A (en) * 2020-09-02 2021-01-08 重庆邮电大学 Photovoltaic power station hot spot detection and positioning method based on thermal infrared image
CN112183232A (en) * 2020-09-09 2021-01-05 上海鹰觉科技有限公司 Ship board number position positioning method and system based on deep learning
CN112084941A (en) * 2020-09-09 2020-12-15 国科天成(北京)科技有限公司 Target detection and identification method based on remote sensing image
CN112052817B (en) * 2020-09-15 2023-09-05 中国人民解放军海军大连舰艇学院 Improved YOLOv3 model side-scan sonar sunken ship target automatic identification method based on transfer learning
CN112052817A (en) * 2020-09-15 2020-12-08 中国人民解放军海军大连舰艇学院 Improved YOLOv3 model side-scan sonar sunken ship target automatic identification method based on transfer learning
CN112001369B (en) * 2020-09-29 2024-04-16 北京百度网讯科技有限公司 Ship chimney detection method and device, electronic equipment and readable storage medium
CN112001369A (en) * 2020-09-29 2020-11-27 北京百度网讯科技有限公司 Ship chimney detection method and device, electronic equipment and readable storage medium
CN112036519B (en) * 2020-11-06 2021-05-04 中科创达软件股份有限公司 Multi-bit sigmoid-based classification processing method and device and electronic equipment
CN112036519A (en) * 2020-11-06 2020-12-04 中科创达软件股份有限公司 Multi-bit sigmoid-based classification processing method and device and electronic equipment
CN112380997A (en) * 2020-11-16 2021-02-19 武汉巨合科技有限公司 Model identification and undercarriage retraction and extension detection method based on deep learning
CN112669282B (en) * 2020-12-29 2023-02-14 燕山大学 Spine positioning method based on deep neural network
CN112669282A (en) * 2020-12-29 2021-04-16 燕山大学 Spine positioning method based on deep neural network
WO2022147965A1 (en) * 2021-01-09 2022-07-14 江苏拓邮信息智能技术研究院有限公司 Arithmetic question marking system based on mixnet-yolov3 and convolutional recurrent neural network (crnn)
CN112926426A (en) * 2021-02-09 2021-06-08 长视科技股份有限公司 Ship identification method, system, equipment and storage medium based on monitoring video
CN112686340B (en) * 2021-03-12 2021-07-13 成都点泽智能科技有限公司 Dense small target detection method based on deep neural network
CN112686340A (en) * 2021-03-12 2021-04-20 成都点泽智能科技有限公司 Dense small target detection method based on deep neural network
CN113205151B (en) * 2021-05-25 2024-02-27 上海海事大学 Ship target real-time detection method and terminal based on improved SSD model
CN113205151A (en) * 2021-05-25 2021-08-03 上海海事大学 Ship target real-time detection method and terminal based on improved SSD model
CN113298181A (en) * 2021-06-16 2021-08-24 合肥工业大学智能制造技术研究院 Underground pipeline abnormal target identification method and system based on dense connection Yolov3 network
CN113486819A (en) * 2021-07-09 2021-10-08 广西民族大学 Ship target detection method based on YOLOv4 algorithm
CN113516685A (en) * 2021-07-09 2021-10-19 东软睿驰汽车技术(沈阳)有限公司 Target tracking method, device, equipment and storage medium
CN113610043A (en) * 2021-08-19 2021-11-05 海默潘多拉数据科技(深圳)有限公司 Industrial drawing table structured recognition method and system
CN113792780B (en) * 2021-09-09 2023-07-14 福州大学 Container number identification method based on deep learning and image post-processing
CN113792780A (en) * 2021-09-09 2021-12-14 福州大学 Container number identification method based on deep learning and image post-processing
CN114581796A (en) * 2022-01-19 2022-06-03 上海土蜂科技有限公司 Target tracking system, method and computer device thereof
CN114581796B (en) * 2022-01-19 2024-04-02 上海土蜂科技有限公司 Target tracking system, method and computer device thereof
CN114723997B (en) * 2022-04-29 2024-05-31 厦门大学 Composite convolution operation method based on graphic algebra, storage medium and electronic equipment
CN114723997A (en) * 2022-04-29 2022-07-08 厦门大学 Composite convolution operation method based on Tropical algebra, storage medium and electronic equipment
CN114924477A (en) * 2022-05-26 2022-08-19 西南大学 Electric fish blocking and ship passing device based on image recognition and PID intelligent control
CN114972793A (en) * 2022-06-09 2022-08-30 厦门大学 Lightweight neural network ship water gauge reading identification method
CN114972793B (en) * 2022-06-09 2024-05-31 厦门大学 Light-weight neural network ship water gauge reading identification method
CN116337087A (en) * 2023-05-30 2023-06-27 广州健新科技有限责任公司 AIS and camera-based ship positioning method and system
CN117237363A (en) * 2023-11-16 2023-12-15 国网山东省电力公司曲阜市供电公司 Method, system, medium and equipment for identifying external broken source of power transmission line
CN117576165A (en) * 2024-01-15 2024-02-20 武汉理工大学 Ship multi-target tracking method and device, electronic equipment and storage medium
CN117576165B (en) * 2024-01-15 2024-04-19 武汉理工大学 Ship multi-target tracking method and device, electronic equipment and storage medium
CN117930375A (en) * 2024-03-22 2024-04-26 国擎(山东)信息科技有限公司 Multi-dimensional detection technology fused channel type terahertz human body security inspection system
CN117930375B (en) * 2024-03-22 2024-06-25 国擎(山东)信息科技有限公司 Multi-dimensional detection technology fused channel type terahertz human body security inspection system

Similar Documents

Publication Publication Date Title
CN110147807A (en) A kind of ship intelligent recognition tracking
CN111259930B (en) General target detection method of self-adaptive attention guidance mechanism
CN110619369B (en) Fine-grained image classification method based on feature pyramid and global average pooling
CN110135267A (en) A kind of subtle object detection method of large scene SAR image
Al Bashish et al. A framework for detection and classification of plant leaf and stem diseases
CN106408594B (en) Video multi-target tracking based on more Bernoulli Jacob's Eigen Covariances
CN104281853B (en) A kind of Activity recognition method based on 3D convolutional neural networks
CN104063719B (en) Pedestrian detection method and device based on depth convolutional network
CN110287960A (en) The detection recognition method of curve text in natural scene image
CN109766830A (en) A kind of ship seakeeping system and method based on artificial intelligence image procossing
CN101713776B (en) Neural network-based method for identifying and classifying visible components in urine
CN107818326A (en) A kind of ship detection method and system based on scene multidimensional characteristic
CN103489005B (en) A kind of Classification of High Resolution Satellite Images method based on multiple Classifiers Combination
CN109101897A (en) Object detection method, system and the relevant device of underwater robot
CN108389220B (en) Remote sensing video image motion target real-time intelligent cognitive method and its device
CN107967451A (en) A kind of method for carrying out crowd's counting to static image using multiple dimensioned multitask convolutional neural networks
CN110009679A (en) A kind of object localization method based on Analysis On Multi-scale Features convolutional neural networks
CN109522966A (en) A kind of object detection method based on intensive connection convolutional neural networks
CN111368769B (en) Ship multi-target detection method based on improved anchor point frame generation model
CN109344736A (en) A kind of still image people counting method based on combination learning
CN106446930A (en) Deep convolutional neural network-based robot working scene identification method
CN108932479A (en) A kind of human body anomaly detection method
CN109978882A (en) A kind of medical imaging object detection method based on multi-modal fusion
CN106096506A (en) Based on the SAR target identification method differentiating doubledictionary between subclass class
CN103971106A (en) Multi-view human facial image gender identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20190820)