CN109583425A - An integrated deep-learning-based method for recognizing ships in remote sensing images - Google Patents

An integrated deep-learning-based method for recognizing ships in remote sensing images

Info

Publication number
CN109583425A
CN109583425A · CN201811573380.2A · CN201811573380A
Authority
CN
China
Prior art keywords
ship
remote sensing
image
information
sensing images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811573380.2A
Other languages
Chinese (zh)
Other versions
CN109583425B (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201811573380.2A priority Critical patent/CN109583425B/en
Publication of CN109583425A publication Critical patent/CN109583425A/en
Application granted granted Critical
Publication of CN109583425B publication Critical patent/CN109583425B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an integrated deep-learning-based method for recognizing ships in remote sensing images, comprising image classification, object detection, and image segmentation. Compared with the prior art, the present invention combines modern deep learning models with traditional image processing methods to detect and segment ships in remote sensing images. The deep-learning-based remote sensing image segmentation method can accurately identify ships in sea areas, adapts to a variety of processing environments, is robust against interference from complex environments, and can accurately segment the ships after detection and recognition.

Description

An integrated deep-learning-based method for recognizing ships in remote sensing images
Technical field
The invention belongs to the technical field of computer vision, and in particular to object detection and image segmentation; more specifically, it relates to an integrated deep-learning-based method for recognizing ships in remote sensing images.
Background art
With the development of remote sensing technology, the processing of remote sensing images has gradually come to occupy a key position in the image-processing field, and detection algorithms built on remote sensing imagery have emerged one after another. For the task of detecting and recognizing ship targets in remote sensing images, most current detection algorithms follow the traditional hand-crafted-feature approach: after applying preprocessing and enhancement techniques to the image, the target ship is extracted and detected.
Owing to the particularities of remote sensing imagery, a remote sensing image, compared with an ordinary image, is easily affected by conditions such as illumination, weather, sea state, and imaging time; cloud cover and wave information can also degrade image quality. In addition, because satellite imaging bands differ, image resolutions vary widely: high-resolution ship targets have rich shape and texture features, while in low-resolution ship images the fine details are blurred. The unique diversity of remote sensing images therefore poses an enormous challenge to traditional methods.
In general, traditional methods such as support vector machines, dynamic thresholding, and adaptive clustering classify the feature information of a picture in order to judge and detect the real target. However, traditional feature processing places high demands on the data samples and is highly sensitive to the training samples; when remote sensing image features are complex and diverse, the detection results obtained with conventional methods are unsatisfactory.
For the object-detection task, the traditional approach is a broad class of algorithms combining feature descriptors with machine learning, generally comprising two sub-tasks: generating object candidate regions and identifying those candidate regions. Traditional algorithms exploit the difference between the ship target and the background sea area to extract candidate regions, then use a classifier to classify the identified ships; the method is direct and simple. However, it is generally suitable only when the sea surface is relatively simple and the ship features are obvious; when the sea surface is complex, the results achieved by traditional detection methods become very poor.
For the image-segmentation task, traditional approaches include pyramid thresholding, mean shift, and energy-field-based segmentation. However, the objects handled by traditional segmentation methods are generally ordinary images; for remote sensing images of extremely complex environments, these algorithms easily produce chaotic segmentations, fail to separate foreground from background correctly, and certainly cannot identify which class a segmented ship belongs to, so naturally they cannot segment ship targets well.
In summary, for remote sensing images that are rich in information and acquired under complex conditions, traditional object detection and image segmentation methods give unsatisfactory detection and segmentation results and lack adaptivity to the processing environment.
Summary of the invention
The purpose of the invention is to remedy the deficiencies of the prior art by providing an integrated deep-learning-based method for recognizing ships in remote sensing images, which combines modern deep learning models with traditional image processing methods to detect and segment ships in remote sensing images. The deep-learning-based remote sensing image segmentation method can accurately identify ships in sea areas, adapts to a variety of processing environments, is robust against interference from complex environments, and can accurately segment the ships after detection and recognition.
In order to achieve the above objectives, the present invention is implemented according to the following technical scheme:
An integrated deep-learning-based method for recognizing ships in remote sensing images, comprising the following steps:
S1, image classification: collect a remote sensing image dataset, train a neural network with the ResNet-34 structure on it, and judge whether a ship is present in the processed remote sensing field of view;
S2, object detection: using a ResNet-101 neural network as the feature-extraction backbone, input the remote sensing images screened in S1 as containing ships into the designed neural network, extract the feature layers of those images, and thereby obtain the location information of the ships in the remote sensing images;
S3, image segmentation: train a neural network with the U-Net architecture on the remote sensing images screened in S1 as containing ships to obtain their feature maps; apply transposed-convolution operations to the feature maps at different scales, gradually raising the resolution of the feature information so that the location information in the feature maps is rendered concretely, and finally obtain the segmentation information of the ships.
Further, the specific steps of S1 are as follows:
S11, collect remote sensing images covering common sea areas worldwide to form the remote sensing image dataset; the dataset contains 300,000 remote sensing images, each a 768px × 768px RGB three-channel color image. Annotate the position, outline, and occupied pixel locations of every ship in all the images to produce the ship label set of the remote sensing images; each label image is a 768px × 768px single-channel grayscale image in jpg format;
S12, image enhancement: augment the remote sensing images containing ships with one or a combination of horizontal/vertical flipping, random rotation by 0-30 degrees, random brightness change, random contrast change, and image warping;
S13, cross-validation training: use the augmented remote sensing images containing ships as input images and train the ResNet-34 neural network on them, adopting 5-fold cross-validation during input;
S14, TTA image-classification inference: perform test-time-augmentation (TTA) inference on the input image, additionally running inference after horizontally and vertically flipping the input image; merge these inference results with the inference result on the original input image, then test in the trained ResNet-34 network to judge whether a ship is present in the remote sensing field of view.
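The TTA merge in S14 can be sketched as follows: the classifier's ship probability is averaged over the original view and its horizontally and vertically flipped views. The `predict` callable stands in for the trained ResNet-34, and the 0.5 decision threshold is an assumption; neither detail is fixed by the text.

```python
import numpy as np

def tta_classify(image, predict, threshold=0.5):
    """Test-time-augmentation inference: run the classifier on the original
    image plus its horizontal and vertical flips, then merge the per-view
    ship probabilities by averaging."""
    views = [
        image,            # original view
        image[:, ::-1],   # horizontal flip
        image[::-1, :],   # vertical flip
    ]
    probs = [predict(v) for v in views]
    merged = float(np.mean(probs))          # merged inference result
    return merged, merged >= threshold      # probability, "ship present" flag

# Toy stand-in classifier: "ship probability" = mean brightness of the view.
img = np.full((4, 4), 0.8)
prob, has_ship = tta_classify(img, predict=lambda v: float(v.mean()))
print(prob, has_ship)  # flips do not change the mean, so prob stays 0.8 -> True
```

In a real pipeline `predict` would be the forward pass of the trained network; averaging logits instead of probabilities is an equally common merge rule.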
Further, the specific steps of S2 are as follows:
S21, convolutional feature extraction: extract the features of the remote sensing images containing ships with the ResNet-101 network structure, outputting their feature information at the feature layers of five different stages of ResNet-101; fuse the multi-scale feature information of the ship-containing remote sensing images with a feature pyramid network, refine and integrate the extracted feature information, and obtain the final feature layers;
S22, region proposal: on the feature layers extracted in S21, generate anchors in all the feature layers of different dimensions, and draw a bounding box around each anchor; once the box side lengths (in pixels) and aspect ratios are fixed, a series of proposal regions is generated on the feature map;
S23, region adjustment: select and adjust the series of proposal regions generated in S22 to finally obtain a suitably sized box that tightly encloses the ship to be measured; set a loss function and continually optimize it during algorithm execution so as to dynamically adjust the box position until the box truly contains the class the ship belongs to, thereby determining the final position of the ship;
S24, obtaining classification information: on the basis of S23, obtain the classification information of the ship by reducing the classification loss; in the program the classification information is one-hot encoded and is converted into the actual ship class, so the remote sensing image is classified into ship target and background;
S25, bounding-box adjustment: bounding-box adjustment is the part of the detection error that must be executed automatically during gradient updates; when performing a gradient update, adjust the coordinates of the bounding box containing the ship so that the bounding box encloses the detected ship accurately;
S26, optimizing the detection loss: optimize the loss function of the neural network of S23 with momentum gradient descent to obtain the rectangular calibration box best suited to the ship;
S27, obtaining the segmentation mask and bounding box: build a mask branch network, input the classification bounding regions and feature information extracted in S21-S26, and generate the corresponding mask, obtaining the image after object detection.
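The selection of proposal regions in S23 can be illustrated with a greedy non-maximum-suppression pass over scored boxes, a standard technique consistent with (but not quoted from) the text. The (x1, y1, x2, y2) box format and the 0.5 IoU threshold are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def select_proposals(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring proposal,
    drop every remaining proposal whose IoU with it exceeds the threshold,
    and repeat on what is left."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
print(select_proposals(boxes, [0.9, 0.8, 0.7]))  # [0, 2]: the near-duplicate box 1 is suppressed
```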
Further, the specific steps of S3 are as follows:
S31, multi-scale feature extraction: segment the ship-containing remote sensing images filtered out in S1 with a U-Net network whose encoder is a ResNet-34 neural network; the ship-containing remote sensing images from S1 are input to the encoder end, which outputs feature information at different scales;
S32, deconvolving the feature maps: in the decoding stage of the ResNet-34-structured network, raise the dimensionality of the low-resolution feature maps through transposed-convolution operations and combine them with the decoder-side feature information to obtain the segmented image;
S33, integrating and fusing feature information: fuse the object-detection image obtained in S27 with the segmented image obtained in S32 to get more accurate segmentation information;
S34, spreading segmentation points with a Markov random field: correct the segmentation information obtained in S33 with a Markov-random-field algorithm, choosing segmentation seed points and diffusing them so that the segmentation information becomes more complete and accurate;
S35, eliminating overlap effects with an opening operation: following morphological principles, successively apply erosion and then dilation (an opening operation) to the result image obtained in S34, which effectively relieves the overlapping of ship segmentation information and improves the segmentation result;
S36, output the final result image.
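The opening operation of S35 (erosion followed by dilation) can be sketched on a binary ship mask. A 3 × 3 structuring element is assumed here, since the text does not specify a kernel size.

```python
import numpy as np

def _shift_stack(mask):
    """All 3x3-neighbourhood shifts of a binary mask (zero-padded borders)."""
    padded = np.pad(mask, 1)
    return np.stack([padded[i:i + mask.shape[0], j:j + mask.shape[1]]
                     for i in range(3) for j in range(3)])

def erode(mask):
    # pixel survives only if its whole 3x3 neighbourhood is foreground
    return _shift_stack(mask).all(axis=0)

def dilate(mask):
    # pixel becomes foreground if any pixel in its 3x3 neighbourhood is
    return _shift_stack(mask).any(axis=0)

def opening(mask):
    """Erosion followed by dilation: removes isolated pixels and thin
    bridges that make adjacent ship masks appear to overlap, while large
    solid regions are restored to their original extent."""
    return dilate(erode(mask.astype(bool)))

m = np.zeros((7, 7), dtype=bool)
m[1:4, 1:4] = True   # a solid 3x3 ship region: survives opening
m[5, 5] = True       # an isolated noise pixel: removed by opening
print(opening(m).sum())  # 9: the 3x3 block remains, the lone pixel is gone
```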
Compared with the prior art, the invention has the following advantages:
1) The present invention uses deep learning techniques from advanced artificial intelligence and builds neural networks to classify, detect, and segment targets; after appropriate training, the designed networks have good robustness and can accurately judge whether a target is present in a remote sensing image and locate and segment it, which traditional techniques cannot do;
2) Compared with other traditional detection techniques, the present invention raises the anti-interference ability against the environment to a new level; after the deep neural network has adequately learned the training data, even if the conditions of the remote sensing images to be examined are complex and severe, the algorithm can still extract the position of the target from a complicated background and segment it;
3) Compared with other prior-art traditional algorithms, the algorithm of the invention can measure multiple targets in an image simultaneously and uses ensemble techniques from machine learning to improve the robustness of the overall method. In the later stages of processing, the result images are further refined by a combination with conventional methods to reach an accurate segmentation result.
Brief description of the drawings
Fig. 1 is the overall flowchart of the embodiment of the present invention.
Fig. 2 shows the remote sensing training data of the embodiment and the image enhancement techniques adopted.
Fig. 3 shows the neural network structures adopted at the different stages of the embodiment.
Fig. 4 shows the result images of the second and third parts of the embodiment.
Fig. 5 shows the result images of the fourth part, image segmentation, of the embodiment.
Fig. 6 shows the post-processing steps and the final result images of the embodiment.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the embodiments. The specific embodiments described herein are only intended to explain the present invention and are not used to limit it.
Referring to Fig. 1, the first part introduces the dataset required for training and testing this method. The dataset adopted is a public remote sensing image dataset totaling 300,000 images, all with a resolution of 768px × 768px, three-channel RGB color, in jpg format. The training process also requires a label set containing segmentation information; the label set has the same size as the dataset and corresponds to it one-to-one. Fig. 2(a) shows part of the remote sensing image information used in the present invention, in which the ship label set is overlaid on the original image as a translucent mask; the bright strip-shaped mask regions visible in the figure are exactly the actual segmentation regions of the ships.
In actual training, the image dataset needs to be divided into three parts: a training set, a validation set, and a test set. The training set is fed into the neural network during training, the validation set periodically verifies the reasonableness of the method during training, and the test set assesses the performance of the method once it is complete.
Of the 300,000 images in total, 200,000 are selected as the training set and 30,000 as the validation set, leaving the remaining 70,000 as the test set; the overall split ratio is 20:3:7, which meets common deep learning training practice. The label set is allocated in the same proportions as the dataset; each label image is a 768px × 768px single-channel grayscale image in jpg format. A label image contains the segmentation information of the corresponding dataset image, with a pixel value of 0 for the background and 1 for the ship segmentation entity.
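The 20:3:7 split described above can be sketched as follows; the shuffle and its seed are an assumed detail, since the text does not say how the images are assigned to the three sets.

```python
import random

def split_dataset(n_images=300_000, ratio=(20, 3, 7), seed=0):
    """Split image indices into train/val/test at the 20:3:7 ratio used in
    the embodiment (200,000 / 30,000 / 70,000 out of 300,000 images)."""
    idx = list(range(n_images))
    random.Random(seed).shuffle(idx)       # shuffle before splitting
    total = sum(ratio)
    n_train = n_images * ratio[0] // total
    n_val = n_images * ratio[1] // total
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset()
print(len(train), len(val), len(test))  # 200000 30000 70000
```

The same index split would be applied to the label set, since labels correspond to images one-to-one.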
The second part introduces the specific process of classifying the remote sensing images with a neural network of the classical ResNet-34 architecture. Following the overall flow in Fig. 1, the main steps of the second part are: image input and image enhancement, cross-validation training, and TTA image-classification inference.
(1), image input and image enhancement
In this step, the present invention trains on the processed remote sensing images. Before training the neural network, the images to be trained on need to be processed with image enhancement techniques; this increases the difficulty of the network's learning of image features, lets the algorithm dig more deeply into the feature information of the remote sensing images, and achieves an accurate classification result.
For the characteristics of the remote sensing ship image dataset, the invention adopts the following image enhancement modes: horizontal/vertical flipping, random rotation by 0-30 degrees, random brightness change, random contrast change, and image warping. Fig. 2(b) shows the effect of the different enhancement methods on a ship image example from the dataset of Fig. 2(a).
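A minimal sketch of some of these enhancement modes on a float H×W×3 image in [0, 1] is given below: random horizontal/vertical flips plus random brightness and contrast jitter. The 0-30 degree rotation and the warping are omitted here because they require interpolation; the jitter ranges are assumptions, not values from the text.

```python
import numpy as np

def augment(image, rng):
    """Randomly flip, then jitter brightness and contrast of one image.
    Jitter ranges (+-0.1 brightness, 0.8-1.2 contrast) are illustrative."""
    if rng.random() < 0.5:
        image = image[:, ::-1]                 # horizontal flip
    if rng.random() < 0.5:
        image = image[::-1, :]                 # vertical flip
    image = image + rng.uniform(-0.1, 0.1)     # random brightness shift
    factor = rng.uniform(0.8, 1.2)             # random contrast factor
    image = (image - image.mean()) * factor + image.mean()
    return np.clip(image, 0.0, 1.0)            # keep values in [0, 1]

rng = np.random.default_rng(0)
out = augment(np.full((8, 8, 3), 0.5), rng)
print(out.shape)  # (8, 8, 3): augmentation preserves the image shape
```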
(2), the neural network architecture used
The main operation in a neural network is the convolution operation, similar in mode to a conventional filter. While analyzing and training on the input image, the neural network extracts features of the remote sensing image progressively from shallow to deep. The basic convolution operation is as follows:
y(m, n) = Σ_i Σ_j x(i, j) h(m - i, n - j)
Wherein x is the image input to the convolution, h is the convolution kernel, and y is the result of the convolution. The convolution operation is the basic computation in deep-learning-based image processing; the effect of feature extraction on the input image is realized by updating the parameters of the convolution kernel.
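The convolution formula above can be sketched directly (for the 'valid' output region only): the kernel is flipped, then slid over the image, with an inner product taken at each position.

```python
import numpy as np

def conv2d(x, h):
    """Direct 2D convolution y(m, n) = sum_{i,j} x(i, j) * h(m-i, n-j),
    computed over the 'valid' region where the kernel fits entirely."""
    kh, kw = h.shape
    hf = h[::-1, ::-1]                    # flip kernel for true convolution
    out_h = x.shape[0] - kh + 1
    out_w = x.shape[1] - kw + 1
    y = np.empty((out_h, out_w))
    for m in range(out_h):
        for n in range(out_w):
            y[m, n] = (x[m:m + kh, n:n + kw] * hf).sum()
    return y

# A 3x3 averaging kernel applied to a constant image returns the same constant.
print(conv2d(np.ones((5, 5)), np.full((3, 3), 1 / 9.0)).shape)  # (3, 3)
```

In a real network this loop is replaced by an optimized convolution layer, and the kernel values h are the learnable parameters that gradient descent updates.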
Selecting a suitable neural network architecture matters greatly to the classification performance of the first stage. In the present invention, the residual network ResNet-34 is selected as the backbone of the classification network; the residual-block design of ResNet-34 can effectively extract image features. By modifying the number of layers at the tail of the network, the present invention gives the architecture the ability to classify remote sensing images; Fig. 3(a) shows the ResNet-34 residual network structure used.
In addition, to increase training precision and speed for the first-stage image classification task, the present invention first pre-trains on the ImageNet image-classification dataset before the formal training. This transfer-learning approach gives the designed network weights high sensitivity to image classification in advance. After pre-training, weight information that meets the requirements with high accuracy is obtained; the pre-trained weights are then used in the subsequent formal training stage, which effectively improves the generalization ability of the network.
The loss function adopted by the ResNet-34 backbone network outlined above is the cross-entropy loss (CrossEntropyLoss), which computes the difference between the result distribution and the label distribution; the greater the difference between the two distributions, the greater the loss value. The cross-entropy loss formula is as follows:
L = -[class · log(x) + (1 - class) · log(1 - x)]
Wherein x is the network output squashed into (0, 1) by the sigmoid function, and class is the label set value; at this stage the label set values are respectively 1 and 0, where 1 represents that a ship is present in the corresponding image and 0 represents that no ship is present.
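The cross-entropy formula above can be computed directly for a single logit; this is a plain restatement of the formula, not code from the patent.

```python
import math

def sigmoid(z):
    """Squash a raw network output (logit) into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def bce_loss(logit, label):
    """Binary cross-entropy of the formula above: x is the sigmoid-squashed
    network output, label is 1 for 'ship present' and 0 otherwise."""
    x = sigmoid(logit)
    return -(label * math.log(x) + (1 - label) * math.log(1 - x))

print(bce_loss(0.0, 1))  # sigmoid(0) = 0.5, so the loss is -log(0.5) ~= 0.693
```

As the formula implies, a confident correct prediction (large positive logit with label 1) drives the loss toward 0, while a confident wrong one makes it blow up.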
(3), cross-validation training
After preparing the data and designing the neural network, the training-set data are fed into the network for training. 5-fold cross-validation is adopted during input: the image data in the training set are first divided into 5 mutually disjoint sets of 40,000 images each, 200,000 images in the 5 sets altogether.
One of the above 5 sets is picked out as the validation set for that round of training, and the remaining four sets serve as the training set. Five rounds are trained in total, each round training for 20 epochs, with a different validation set selected in each round. Cross-validation makes full use of the information in the dataset, lets the neural network learn more features of the remote sensing images, and effectively avoids overfitting.
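The 5-fold rotation just described can be sketched as follows: the 200,000 training indices are partitioned into 5 disjoint folds, and each round holds out a different fold for validation. The strided fold assignment is an assumed detail.

```python
def five_fold(indices, k=5):
    """Yield (train, val) index pairs for k-fold cross-validation: each of
    the k disjoint folds is the validation set exactly once, with the other
    k-1 folds concatenated as the training set."""
    folds = [indices[i::k] for i in range(k)]        # k disjoint folds
    for v in range(k):
        val = folds[v]
        train = [i for f in range(k) if f != v for i in folds[f]]
        yield train, val

splits = list(five_fold(list(range(200_000))))
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 5 160000 40000
```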
The batch size adopted during training is 32; the optimizer is Adam with momentum parameters 0.9 and 0.99 and an initial learning rate of 0.01. The learning rate is deliberately decayed in each epoch until it falls to 0.00001 in the last epoch.
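The text gives only the endpoints of the learning-rate decay (0.01 down to 0.00001 over 20 epochs), not the decay law; the sketch below assumes a geometric (exponential) schedule that hits both endpoints.

```python
def lr_schedule(initial=1e-2, final=1e-5, epochs=20):
    """Per-epoch geometric decay: starts at `initial` in epoch 0 and reaches
    `final` exactly in the last epoch.  The geometric law is an assumption;
    step or cosine decay would satisfy the same endpoints."""
    gamma = (final / initial) ** (1.0 / (epochs - 1))   # per-epoch factor
    return [initial * gamma ** e for e in range(epochs)]

rates = lr_schedule()
print(rates[0], rates[-1])  # 0.01 at the first epoch, ~1e-05 at the last
```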
(4), TTA image-classification inference
After training on the ship dataset, the network designed by the present invention can classify remote sensing images well, separating images containing ships from images without ships. In the inference stage, TTA inference enhancement is used: inference is run separately on the horizontally and vertically flipped versions of the input image, and those results are merged with the inference result on the original input image. The accuracy achieved on the test set reaches 98.6%; Fig. 4(a) shows the confusion matrix of the classification on the test set. The trained network can accurately judge whether a ship is present in the remote sensing field of view, and its output serves as the classification information for the subsequent stages.
The third part introduces the specific process of object detection with a neural network of the ResNet-101 structure. Following the overall flow in Fig. 1, the main steps of this part are: convolutional feature extraction, region proposal, region adjustment, bounding-box adjustment, obtaining classification information, optimizing the detection loss, and obtaining the segmentation mask and bounding box.
(1), feature extraction network
In the object-detection stage, the present invention uses convolution operations to slide over the image and extract its content and edge information. ResNet-101 is a very deep neural network; Fig. 3(b) shows its basic structure, an architecture containing 101 network layers. ResNet-101 realizes a deep network by building residual layers; because the network possesses great depth, it can extract the deep feature information of the image more effectively, achieving the purpose of object detection.
(2), feature extraction using the neural network
The invention first uses the ResNet-101 neural network to extract features from the input image, taking the output layers of 5 stages of ResNet-101 to represent the current image features; the numbers of output feature channels of these 5 stages are respectively 128, 256, 512, 512, and 512. The deeper the feature extraction, the more feature channels there are.
After these 5 feature layers are extracted, the present invention further extracts their features through a feature pyramid network, which can abstract the features in the image more finely. By successively upsampling the feature layers output by the 5 preceding stages and merging the feature layers, the present invention obtains 4 network layers containing refined features; convolving these 4 layers again with convolution kernels of size 3 yields the final feature layers, all with 256 channels.
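The top-down merge of the feature pyramid can be sketched as follows: starting from the deepest stage, each map is upsampled 2x and added to the next-shallower lateral feature. The lateral 1x1 convolutions (and the channel projection to 256) are omitted here, so all stages are assumed to already share one channel count; that simplification differs from the 128/256/512 stage widths quoted above.

```python
import numpy as np

def upsample2x(f):
    """Nearest-neighbour 2x upsampling of a CxHxW feature map."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def fpn_topdown(stage_feats):
    """Feature-pyramid top-down pass: repeatedly upsample the running map
    and add the next-shallower lateral feature, producing a refined map at
    every scale.  Inputs are ordered shallow-to-deep, resolutions halving."""
    out = [stage_feats[-1]]                       # start from the deepest map
    for lateral in reversed(stage_feats[:-1]):
        out.append(lateral + upsample2x(out[-1]))
    return out[::-1]                              # shallow-to-deep order

feats = [np.ones((4, 8, 8)), np.ones((4, 4, 4)), np.ones((4, 2, 2))]
merged = fpn_topdown(feats)
print([m.shape for m in merged])  # [(4, 8, 8), (4, 4, 4), (4, 2, 2)]
```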
These feature layers serve as the feature input for the region-proposal and mask-covering operations in the later steps.
(3), region proposal
Region proposal is the important step of the entire object-detection stage. Given an input image containing the ships the present invention is to measure, how to locate the positions of these ships is the first consideration in the algorithm. First, the present invention uses the previously extracted feature layers to generate anchors in all the feature layers of different dimensions.
Anchors are the individual acquisition channels the present invention generates on the feature image; the interval is conventionally set to 2, and the number generated per feature map is 2000. The generation count is a hyperparameter that can be adjusted on demand: raising it improves detection accuracy but correspondingly reduces the execution speed of the algorithm. Anchor generation is similar to the convolution process: the extracted feature layers are scanned successively at the set interval, and the present invention finally obtains feature maps populated with anchors.
After anchors have been generated for all the feature maps extracted in the previous steps, the present invention needs to draw boxes around the anchors, setting the anchor ratio of each feature layer. To guarantee that ships of different sizes can all be enclosed by the boxes formed from the anchors, the side lengths (in pixels) of the regions generated by the anchors are generally set to: 32, 64, 128, 256, 512. In addition, because the shape of a ship may be a rectangle or a square, the aspect ratios of the enclosed regions are set to 0.5, 1, and 2. Once the side lengths and ratios are fixed, a series of proposal regions can be generated on the feature map.
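The scales and ratios above can be combined into anchor boxes at one feature-map location as follows. The area-preserving parameterization (box area fixed at scale squared while width/height equals the ratio) is a common convention assumed here; the text only gives the scale and ratio lists.

```python
import math

def anchor_boxes(cx, cy, scales=(32, 64, 128, 256, 512), ratios=(0.5, 1, 2)):
    """Generate anchor boxes centred at (cx, cy): every pixel scale is
    combined with every aspect ratio, keeping the box area equal to
    scale**2 while width/height equals the ratio."""
    boxes = []
    for s in scales:
        for r in ratios:
            w = s * math.sqrt(r)   # w/h = r and w*h = s*s
            h = s / math.sqrt(r)
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

print(len(anchor_boxes(100, 100)))  # 5 scales x 3 ratios = 15 anchors per location
```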
After previous step, many is formd in characteristic pattern confines information.Suggestion areas in characteristic pattern represents The box centered on the pixel is possible to include to need adjustment region suggestion in the step of later for contained target at this time Region, the box for confining it are more accurate and by the square comprising the ship of the invention for needing to measure.
In addition, after generating each box with its 4 coordinate points, the category to which the box belongs must also be produced, so as to determine which kind of ship the box contains and to classify the ship information.
(4), Region adjustment
After the previous step has produced the set of proposal regions in the feature maps, they must be selected and adjusted to finally obtain boxes of suitable size that just enclose the ships to be measured. For this purpose, loss functions are created according to the different objectives. During execution the algorithm continuously optimises these loss functions, dynamically adjusting the position of each box and the class of the ship it contains, and thereby determining the final position of each class of ship.
The loss function here is divided into three parts: the classification error Lcls, the detection error Lbox and the mask segmentation error Lmask. The classification error is the negative log-confidence that the object in the box belongs to a given ship class. During training, the classification result produced by the algorithm is compared with the label (manually annotated, i.e. the ground truth); the comparison yields a loss value, which is then minimised by a specific optimiser. The backward pass of the neural network reduces the loss and updates the network weights, giving the network the ability to classify detected ships.
Likewise, the detection error and the mask segmentation error are optimised by comparing the algorithm's output with the actual labels. The difference is that the detection error operates on the 4 coordinate values of the box: the mean squared error (MSE) between the 4 predicted coordinates and the 4 coordinates of the real ship gives the loss value.
The MSE takes the following form, where yi are the predicted coordinate values and ŷi the label coordinate values; comparing corresponding coordinates yields the loss between the two:

Lbox = (1/n) Σi (yi − ŷi)²
The mask segmentation error Lmask is computed according to the IoU (Intersection over Union) criterion. IoU is the formula used to evaluate whether an image segmentation reaches the specified standard; target denotes the area covered by the target mask and prediction the area covered by the predicted mask:

IoU = |target ∩ prediction| / |target ∪ prediction|
When the predicted mask area completely overlaps the target mask area, the IoU value is 1, meaning the prediction fully meets the required standard; when the predicted mask area is completely disjoint from the target mask area, the IoU value is 0, meaning the prediction is entirely wrong.
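The IoU criterion can be illustrated on small binary masks represented as sets of covered pixel coordinates; `mask_iou` is a hypothetical helper written for this sketch, not the patented code.

```python
# A small sketch of the IoU criterion: masks are sets of (row, col)
# pixel coordinates, so intersection and union are plain set operations.

def mask_iou(target, prediction):
    """IoU = |target ∩ prediction| / |target ∪ prediction| (1 = perfect, 0 = disjoint)."""
    target, prediction = set(target), set(prediction)
    union = target | prediction
    if not union:
        return 0.0
    return len(target & prediction) / len(union)

full = mask_iou({(0, 0), (0, 1)}, {(0, 0), (0, 1)})   # identical masks -> 1.0
none = mask_iou({(0, 0)}, {(5, 5)})                   # disjoint masks  -> 0.0
```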
Finally, the total loss L can be summarised by the following formula:
L=Lcls+Lbox+Lmask
The overall loss function is thus divided into three parts: the ship classification loss Lcls, the box framing loss Lbox and the mask loss Lmask. Through forward passes and backward propagation the present invention trains the neural network to reduce the total loss, and finally obtains the trained network weights.
(5), Obtaining classification information
The classification loss was described in the previous step; by reducing it, the algorithm obtains the classification information of each ship. In the program this information is a one-hot encoding, which is converted into the actual ship class. In the remote sensing images there are two classes: ship target and background.
(6), Bounding box adjustment
Bounding box adjustment is the part of the detection error that is carried out automatically during gradient updates. At each update, not only is the coordinate loss computed, but offset values for the four coordinate points are also maintained and updated simultaneously, so that the coordinates of the bounding box enclosing the ship are adjusted and the box frames the detected ship precisely.
(7), Optimising the detection loss
Optimising the loss is the loss-optimisation process mentioned above. The optimiser used by the present invention is gradient descent with momentum, with a learning rate that gradually decays with the number of iterations. The learning rate is a hyper-parameter and must be set differently for different image sets; the default learning rate Lr is 0.001, the momentum parameter is 0.9 and the decay rate is 0.0001.
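Gradient descent with momentum and a decaying learning rate, using the default hyper-parameters stated above (learning rate 0.001, momentum 0.9, decay rate 0.0001), can be sketched on a toy quadratic loss. The exact decay schedule is not specified by the text, so a simple inverse-time decay is assumed here; `sgd_momentum` is a hypothetical helper.

```python
# A minimal sketch of gradient descent with momentum and learning-rate
# decay (lr = 0.001, momentum = 0.9, decay = 0.0001, as stated above).
# Minimises the toy loss f(w) = w**2, whose gradient is 2w.

def sgd_momentum(grad_fn, w, lr=0.001, momentum=0.9, decay=0.0001, steps=1000):
    v = 0.0
    for t in range(steps):
        lr_t = lr / (1.0 + decay * t)         # lr decays with the iteration count
        v = momentum * v - lr_t * grad_fn(w)  # momentum accumulates past gradients
        w = w + v
    return w

w_final = sgd_momentum(lambda w: 2.0 * w, w=5.0)  # converges towards the minimum at 0
```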
During training, the training set is used for training and the validation set for separate validation. The overall training loss and validation loss curves are observed, and an early-stopping patience n is set: if the loss function shows no obvious improvement after n consecutive epochs, training is terminated early. This effectively avoids the over-fitting caused by over-training, where the loss curve stops falling and begins to rise.
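The early-stopping rule just described — terminate once the validation loss has shown no improvement for n consecutive epochs — can be sketched as follows; `should_stop` is a hypothetical helper, and the exact patience semantics are an assumption.

```python
# A sketch of early stopping with patience n: stop once the last n epochs
# all failed to beat the best validation loss seen before them.

def should_stop(val_losses, patience):
    """True when the last `patience` validation losses show no improvement."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_before

history = [0.9, 0.7, 0.6, 0.61, 0.62, 0.60]
stop_now = should_stop(history, patience=3)  # last 3 epochs never beat 0.6
```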
After the final proposal regions (proposals) are obtained, there may still be many proposal regions around a real ship, and a single suitable proposal must be found to determine the final position of the target. The present invention selects the most suitable target proposal region using the Non-Maximum Suppression (NMS) algorithm.
The NMS algorithm suppresses elements that are not maxima, similar to a local maximum search. Around a given ship a cluster of proposal boxes is formed, and NMS is needed to pick out the proposal region that frames the ship best. After classifier recognition of the sliding-window features, every window obtains a class and a score; but sliding windows produce many windows that contain, or heavily intersect with, other windows. NMS is then used to keep the window with the highest score in each neighbourhood (i.e. the highest probability of being a certain class of object) and to suppress the low-scoring windows.
In simple terms, the purpose of non-maximum suppression is to eliminate redundant boxes and find the optimal detection for each ship. It proceeds in the following steps:
1. Suppose there are 6 candidate boxes; sort them by classifier class probability from small to large, with the probabilities of belonging to a ship denoted A, B, C, D, E, F respectively.
2. Starting from the box with the largest probability, F, judge whether the overlap (IoU) of each of A–E with F exceeds a set threshold.
3. Suppose the overlaps of B and D with F exceed the threshold; then B and D are discarded, and F is marked as the first box the present invention retains.
4. From the remaining boxes A, C and E, select the one with the largest probability, E, and judge the overlap of E with A and C; boxes whose overlap exceeds the threshold are discarded, and E is marked as the second box the present invention retains.
5. Repeat this process until all retained boxes have been found.
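The five steps above can be sketched directly. Boxes are assumed here to be (x1, y1, x2, y2) corner coordinates and the 0.5 IoU threshold is illustrative; `nms` and `box_iou` are hypothetical helpers, not the patented code.

```python
# A minimal sketch of non-maximum suppression: keep the highest-scoring
# box, drop every box overlapping it beyond the IoU threshold, repeat.

def box_iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)               # highest remaining score (steps 1-2)
        keep.append(best)                 # retain it (steps 3-4)
        order = [i for i in order         # suppress heavy overlaps
                 if box_iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # the near-duplicate second box is suppressed
```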
After executing the above steps, the present invention obtains the rectangular calibration box best suited to each ship.
(8), Obtaining the segmentation mask and bounding box
In the preceding steps the present invention obtained the position information around each detected ship, i.e. the rectangle enclosing it. In this step the target ship must be covered with a mask, so that the mask pixels cover the surface of the whole ship as completely as possible, yielding the ship's contour information.
Recalling the network architecture described in the first part, the present invention passes the input image through the ResNet-101 network for feature extraction, obtaining feature information of the input image at different dimensions and sizes. Because convolution and pooling operations preserve spatial relationships, this feature information retains the position of the ship within the image.
The present invention builds a mask branch network, feeds in the classes, framed regions and feature information extracted in the previous steps, and generates the corresponding masks. The specific steps are as follows:
1. Extract the ship information from the different feature maps;
2. Adjust the sizes of the feature maps and merge the information of the different maps by scaling;
3. Scale the feature mask up according to the ratio between the actual size and the feature map;
4. Compare the enlarged mask with the ground-truth mask and compute the loss value.
In the third step, because the mask pixels are floating-point values, a soft mask preserves precision during enlargement and retains more detail, so the final mask covers the measured ship well; each detected target corresponds to one mask. Fig. 4(b) illustrates the segmentation result output by Part III: comparing with the ground-truth mask on the left, the detection information on the right is accurate and reliable.
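The point about soft (floating-point) masks in step 3 can be illustrated with a small bilinear up-scaling: keeping float values during enlargement and thresholding only at the end preserves sub-pixel detail. The bilinear routine below is a generic sketch, not the patented implementation.

```python
# A sketch of soft-mask up-scaling: the mask stays floating-point through
# the bilinear resize, and is binarised only at the very end.

def bilinear_resize(mask, out_h, out_w):
    in_h, in_w = len(mask), len(mask[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            y = i * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
            x = j * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            dy, dx = y - y0, x - x0
            out[i][j] = (mask[y0][x0] * (1 - dy) * (1 - dx)
                         + mask[y0][x1] * (1 - dy) * dx
                         + mask[y1][x0] * dy * (1 - dx)
                         + mask[y1][x1] * dy * dx)
    return out

soft = [[0.0, 1.0], [0.0, 1.0]]
up = bilinear_resize(soft, 2, 3)  # the middle column interpolates to 0.5
binary = [[1 if v >= 0.5 else 0 for v in row] for row in up]
```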
The main step executed in Part IV is image segmentation. In the previous steps the present invention obtained the mask information of the ship targets from the box positions produced by object detection, but this mask information cannot fully represent the segmentation of a ship. Part IV therefore uses a network of U-Net structure to segment the remote sensing ship images, in which the encoder of the U-Net is a network of ResNet-34 structure and the decoder is redesigned according to the features of the encoder; the network finally outputs the segmentation information of the image.
The general steps of Part IV are: multi-scale feature extraction, deconvolution of the feature maps, and integrated fusion of feature information. In the fusion step the output of Part IV is merged with the output of Part III, finally yielding more accurate segmentation information.
(1), Multi-scale feature extraction
The structure of the U-Net network is shown in Fig. 3(c); features at corresponding positions of the encoder and decoder are merged. The purpose of the multi-scale feature extraction of the U-Net structure is that, on the encoder side, features of different scales are fused with the corresponding parts of the decoder. Thus in the decoder stage the network not only performs bottom-up dimension-raising operations on the feature maps, but also combines them with the feature vectors of the encoder stage, making the network's feature information richer and more accurate.
The image data used in the training stage has dimensions (3, 768, 768): the image has 3 channels and a resolution of 768. The output of the network has dimensions (1, 768, 768), identical to the label data set. The loss adopted in the network is a mixed loss function composed of the Dice loss and the Focal loss; the optimiser reduces the loss value so that the network segments the images accurately. The mixed loss formula is as follows:
Mloss=Dloss+t*Floss
where Mloss is the mixed loss function, Dloss the Dice loss function, Floss the Focal loss function, and t the weight coefficient of the Focal loss; by validation, t = 10 gives the best experimental results.
Dloss is a loss commonly used for image segmentation tasks; it guides the optimiser to adjust the network by comparing the output segmentation with the actual label information. Its formula is:

Dloss = 1 − 2·|A ∩ B| / (|A| + |B|)
where A and B respectively denote the output segmentation and the actual segmentation. When the two sets of pixels coincide completely the loss is 0; when they do not overlap at all the loss is 1.
The Focal loss is a variant of the cross-entropy loss; by modifying the cross-entropy it adapts well to segmentation tasks with unbalanced pixel classes. Its binary form is:

Floss = −α·y·(1 − y′)^γ·log(y′) − (1 − α)·(1 − y)·(y′)^γ·log(1 − y′)
where y and y′ respectively denote the actual pixel value and the network output, both in the range 0–1; when the two are identical the loss value is 0. The parameter γ reduces the loss of easily classified samples, and the balance factor α balances the inherent imbalance between positive and negative samples.
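Under these definitions, the mixed loss Mloss = Dloss + t·Floss (with t = 10 as stated above) can be sketched per pixel in plain Python. The binary focal form and the values γ = 2, α = 0.25 are common defaults assumed here, since the text does not state them.

```python
import math

# A sketch of the mixed loss Mloss = Dloss + t * Floss, with the Dice and
# (binary) focal losses written out per pixel; t = 10 as stated above.
# gamma = 2 and alpha = 0.25 are ASSUMED common defaults, not from the text.

def dice_loss(pred, target, eps=1e-7):
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - 2.0 * inter / (sum(pred) + sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, alpha=0.25, eps=1e-7):
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += (-alpha * t * (1.0 - p) ** gamma * math.log(p)
                  - (1.0 - alpha) * (1.0 - t) * p ** gamma * math.log(1.0 - p))
    return total / len(pred)

def mixed_loss(pred, target, t_weight=10.0):
    return dice_loss(pred, target) + t_weight * focal_loss(pred, target)

perfect = mixed_loss([1.0, 0.0, 1.0], [1.0, 0.0, 1.0])  # near zero
```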
(2), Deconvolution of feature maps
During network training, the remote sensing image is fed into the encoder, which outputs feature information at different scales. In the decoding stage, transposed convolutions are applied to the low-resolution feature maps to raise their dimension so that they can be combined with the feature information of the decoding stage. The dimension-raising operation is a transposed convolution, which learns the up-sampled pixel information during training and guarantees the accuracy of the information after up-sampling. Fig. 5(a) shows the heat map of the image segmentation; Fig. 5(b) shows the segmentation results, with the original image on the left, the segmented image in the middle and the actual label set on the right.
(3), Integrated fusion of feature information
The image segmentation information obtained in Part IV and Part III is integrated by a simple weighting of the two segmentation maps: the coefficient of Part IV is 0.7 and that of Part III is 0.3. After the weighting operation, more accurate segmentation information is obtained; Fig. 6(c) shows the final segmentation result.
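The weighted fusion of the two segmentation maps (0.7 for the Part IV U-Net output, 0.3 for the Part III mask branch) amounts to a per-pixel weighted sum, sketched here on nested lists; `fuse_segmentations` is a hypothetical helper.

```python
# A sketch of the weighted fusion: Part IV (U-Net) map weighted 0.7,
# Part III (mask branch) map weighted 0.3, combined pixel by pixel.

def fuse_segmentations(unet_map, mask_map, w_unet=0.7, w_mask=0.3):
    return [[w_unet * u + w_mask * m for u, m in zip(ur, mr)]
            for ur, mr in zip(unet_map, mask_map)]

fused = fuse_segmentations([[1.0, 0.0]], [[1.0, 1.0]])
# pixel agreed on by both -> ~1.0; mask-branch-only pixel -> 0.3
```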
Part V mainly post-processes the images output by the previous parts, further correcting the remote sensing segmentation maps to achieve a better result. The main steps of this stage are: diffusing the segmentation points with a Markov random field, eliminating mask overlap with a morphological opening, and outputting the final result map. The implementation of this stage is detailed below:
(1), Diffusing segmentation points with a Markov random field
In the preceding steps the classification of the ships in the remote sensing images and the detection and segmentation of the targets were completed, and the segmentation information of the remote sensing images was output. However, this segmentation information still has room for improvement in precision: 1. the segmentation of a ship image is sometimes missing at the edges; 2. in regions where ships are densely distributed, the edge information of neighbouring ships easily overlaps.
Therefore the first task of this final step is to improve the segmentation information at the edges, making the mask cover the whole ship target to be detected as completely as possible and raising the precision of the image segmentation. To this end the mask information is refined using the method of Markov random fields.
The pixel distribution of a detected target is treated as a Markov random field; in practice, the objects processed by the present invention during detection are the individual pixels of the image. The pixels of a detected target are generally correlated and approximately follow a normal distribution, so the relationship between adjacent pixels can be expressed in terms of probabilities.
The process of adjusting the covering mask can be understood as finding other similar pixels starting from a seed pixel: using the interrelations between pixels, the seed node is extended to cover the whole target area. The relationship between the pixels of the image can be described by the following (Bayes) formula:

P(W | S) = P(S | W) · P(W) / P(S)
where S is the input image, W the class of the current pixel, P(S | W) the conditional probability distribution, P(W) the prior distribution of the classification result, P(S) the probability distribution of the image, and P(W | S) the final classification result; from the image information it is judged whether each pixel belongs to the target area.
In this process a current pixel can belong to one of two classes: it belongs to the detected target, or it does not. Pixels belonging to the detected target are merged into the mask region; pixels not belonging to it are excluded. For a pixel on the target ship the class information w1 is known in advance; using the prior probability P(W) and the conditional probability P(S | W), the pixels belonging to the same ship can be computed and merged.
The pixel distribution of the object can be obtained through the Gibbs distribution, whose formula builds the distribution information from the relationships between the pixels. As mentioned before, the pixels in the image belonging to a given ship are correlated; likewise, the pixels around a ship and the pixels covering its surface are correlated. Using these relationships to construct a Gibbs distribution, the conditional probabilities between pixels can be computed to obtain the joint distribution over the whole image.
According to the Hammersley–Clifford theorem, Markov random fields and Gibbs distributions are equivalent; that is, the distribution P(W) of whether a pixel belongs to the ship satisfies a Gibbs distribution:

P(W) = (1/Z) · exp(−U(w)/T)
where Z is the partition function, i.e. a normalising constant; the larger T is, the smoother P(W) becomes; and U(w) denotes the energy, the sum of the potentials over the cliques:

U(w) = Σc Vc(w)
where Vc denotes the potential of clique c; for a pairwise clique it can be written as Vc(ws, wt) = −β when ws = wt and +β otherwise, where β is the coupling coefficient and s, t are two adjacent pixels.
Gibbs sampling is a method that approximates a joint distribution through a series of operations on conditional distributions; correspondingly, for distribution information satisfying a Gibbs distribution, the joint distribution can be approximated by computing the corresponding conditional probabilities between pixels. In the target ship image, the local interaction P(w1 | w2) between two neighbouring pixels is known; if a pixel's label says it belongs to the ship, the next step is to compute the label information around that pixel, i.e. the probability of each label in its neighbourhood, and thereby determine whether the class label of this pixel is correct or needs to be updated.
Since P(S | W) · P(W) = P(S, W), the computation is equivalent to computing the joint probability distribution of the image and the pixel classes. Therefore the Gibbs distribution is used here to express P(W), while P(S) expresses the distribution information of the image and is usually constant.
Under the premise of a Gibbs distribution, the Markov random field problem can be converted into a potential-energy problem: the energy function determines the conditional probabilities of the MRF, keeping it globally consistent. Through simple local interactions between a single pixel and its neighbourhood, the MRF obtains complex global behaviour, i.e. a global statistical result is derived from the local Gibbs distribution.
In practice, the present invention computes P(S | W) · P(W). P(W) can be computed from the potential function explained above, and P(S | W) is estimated from the label information: assuming the pixels within a given labelled class follow a Gaussian distribution, the present invention can judge from the value of a pixel which class it belongs to, i.e. whether the pixel belongs to the ship region.
The distribution of the pixels of each class is taken as a Gaussian distribution:

P(S | w) = 1/(√(2π)·σw) · exp(−(y − μw)² / (2·σw²))

where the class information w = 1, 2 indicates whether the current pixel belongs to the current ship. The class mean and variance are estimated as

μw = (1/Nw) Σ y  and  σw² = (1/Nw) Σ (y − μw)²

where Nw is the number of pixels under class w, N is the number of pixels of the whole image and y is the pixel value.
In this way, from P(S | w1) and P(S | w2) the class of each pixel can be estimated. Finally, the present invention converts the posterior probability into the product of the prior and the likelihood, P(W) · P(S | W), and progressively updates the labels to maximise this probability, obtaining the optimal segmentation effect.
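The per-pixel decision just described — estimate a Gaussian likelihood P(S | w) for each class from labelled pixels, then pick the class maximising P(W)·P(S | W) — can be sketched as follows. The equal prior of 0.5 and the sample pixel values are illustrative assumptions; `classify_pixel` is a hypothetical helper.

```python
import math

# A sketch of the per-pixel MAP decision: each class w has a Gaussian
# likelihood P(S|w) with mean/variance estimated from labelled pixels,
# and a pixel is assigned to the class maximising P(W) * P(S|W).

def gaussian_pdf(y, mu, sigma):
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def classify_pixel(y, ship_pixels, background_pixels, prior_ship=0.5):
    stats = []
    for pixels in (ship_pixels, background_pixels):
        mu = sum(pixels) / len(pixels)
        sigma = max((sum((p - mu) ** 2 for p in pixels) / len(pixels)) ** 0.5, 1e-6)
        stats.append((mu, sigma))
    p_ship = prior_ship * gaussian_pdf(y, *stats[0])         # P(w=ship) * P(S|ship)
    p_bg = (1.0 - prior_ship) * gaussian_pdf(y, *stats[1])   # P(w=bg) * P(S|bg)
    return "ship" if p_ship > p_bg else "background"

label = classify_pixel(200, ship_pixels=[190, 210, 205], background_pixels=[20, 30, 40])
```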
The left image of Fig. 6(b) is the initial map in the iterative process: although the ship segmentation information is clearly shown, it is still incomplete. The right image of Fig. 6(b) is the final iteration result: the ship to be detected is essentially fully covered by the mask, the edge information is also completely covered, and the covering precision is greatly improved compared with before.
(2), Eliminating mask overlap with the opening operation
Mask overlap tends to occur where ships are dense, so in this step morphology from conventional methods is used to optimise the original segmented image. The basic operation used is the opening operation, whose specific steps are erosion followed by dilation; experimental verification shows that an appropriate opening effectively eliminates the mask overlap effect.
In the opening operation, the erosion of the first step is given by:

A ⊖ B = { z | (B)z ⊆ A }
where A is the input segmentation image and B the pre-set kernel (structuring element), i.e. B is used to erode the image A. The specific steps are similar to a convolution: the kernel B moves step by step over the pixel region of the original segmented image; a pixel is set to 1 only where the kernel lies entirely within the region of 1-pixels, and all other pixels are set to 0. After iteration, the overlapping edge information of the segmented image is eliminated.
In the opening operation, the dilation of the second step is given by:

A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ }
where A is the input segmented image and B the pre-set kernel; dilation is the opposite of erosion. The kernel B again moves step by step over the pixels of the original segmented image, and as long as the intersection of the kernel with the pixel range is not empty, the pixel is set to 1. After several iterations, the information at the edges is filled back in.
After the above opening operation, the edge information of overlapping ships can be eliminated fairly accurately, guaranteeing that each independent ship is completely segmented. In Fig. 6(a) the leftmost image is the original input remote sensing image and the middle image the original segmentation, in which the ship edges at the top show an overlap phenomenon; after the opening operation, the overlap at the ship edges is well alleviated.
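The erosion-then-dilation sequence can be sketched on a binary mask with a 3×3 square structuring element; the grids and the spur-removal example are illustrative, not taken from the patent.

```python
# A sketch of morphological opening (erode, then dilate) on a binary mask,
# implementing the set formulas above with a 3x3 square structuring element.

def _neighbours(grid, i, j):
    h, w = len(grid), len(grid[0])
    return [grid[i + di][j + dj]
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if 0 <= i + di < h and 0 <= j + dj < w]

def erode(grid):    # keep a pixel only if its whole neighbourhood is 1
    return [[1 if all(v == 1 for v in _neighbours(grid, i, j)) else 0
             for j in range(len(grid[0]))] for i in range(len(grid))]

def dilate(grid):   # set a pixel if any neighbour is 1
    return [[1 if any(v == 1 for v in _neighbours(grid, i, j)) else 0
             for j in range(len(grid[0]))] for i in range(len(grid))]

def opening(grid):
    return dilate(erode(grid))

# the 1-pixel-high spur on the right is removed by the opening
mask = [
    [1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 0, 0],
]
opened = opening(mask)
```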
(3), Outputting the final result map
After all the steps of Part I to Part V, combining the advantages of deep learning and conventional methods, this method can detect ships in remote sensing images of complex environments, accurately segment the ship targets in the images, and effectively avoid mask overlap and edge loss. The final result is shown in Fig. 6(c), where the ships present in the image are accurately segmented.
The results of the invention can be illustrated by the following experiments:
1. Experimental conditions:
The present invention was run on a system with an Intel Core i7-7800X CPU, 32 GB of memory and a GTX 1080 Ti graphics card, under Ubuntu 16.04. The software platforms used were PyCharm and OpenCV, and experiments were carried out on a remote sensing image data set with a resolution of 768 px × 768 px.
2. Experimental results:
Fig. 6(a) shows the opening operation applied to the final result: the first image on the left is the original image, the middle one is the baseline result, and the rightmost one is the result after the opening operation. It can be seen that, after the opening, the overlap of ship edges in the segmentation is slightly improved. Fig. 6(b) shows the improvement the Markov random field brings to the ship segmentation: the left image is the segmentation before MRF refinement and the right image the refined segmentation, which is more complete and has better accuracy at the edges. Fig. 6(c) shows the final result after all of the above steps, displaying a translucent segmentation overlay added on top of the original image; the remote sensing image is segmented well.
The technical solution of the present invention is not limited to the above specific embodiments; all technical variations made according to the technical solution of the present invention fall within the protection scope of the present invention.

Claims (4)

1. An integrated recognition method for remote sensing image ships based on deep learning, characterised by comprising the following steps:
S1, image classification: collecting a remote sensing image data set, training a neural network of ResNet-34 structure, and judging whether a ship is present in the processed remote sensing field of view;
S2, target detection: using a ResNet-101 neural network as the feature extraction network, inputting the remote sensing images screened in S1 as containing ships into the designed neural network to extract their feature layers, and thereby obtaining the position information of the ships in the remote sensing images;
S3, image segmentation: training a neural network of U-Net architecture on the remote sensing images screened in S1 as containing ships to obtain their feature maps; applying transposed convolution operations to the feature information maps of different scales to gradually raise the resolution of the feature information, so that the position information in the feature maps can be expressed concretely, and finally obtaining the segmentation information of the ships.
2. The integrated recognition method for remote sensing image ships based on deep learning according to claim 1, characterised in that the specific steps of S1 are as follows:
S11, collecting remote sensing images covering common sea areas worldwide to form the remote sensing image data set, the data set containing 300,000 remote sensing images, each with a resolution of 768 px × 768 px in RGB three-channel colour format; annotating in all remote sensing images the positions, contours and occupied pixel positions of the ships, producing the ship label set of the remote sensing images, each label image being a single-channel grayscale image of resolution 768 px × 768 px in jpg format;
S12, image enhancement: applying to the remote sensing images containing ships one or a combination of horizontal/vertical flipping, random rotation by 0–30 degrees, random brightness change, random contrast change and image warping;
S13, cross-validation training: using the enhanced remote sensing images containing ships as input, training the neural network of ResNet-34 structure on the remote sensing image data, the training being carried out with 5-fold cross-validation;
S14, TTA image classification inference: performing test-time-augmentation (TTA) inference on the input image, carrying out inference separately after horizontal flipping and vertical flipping of the input image, fusing these inference results with the result on the original input image, then testing in the trained neural network of ResNet-34 structure, and judging whether a ship is present in the remote sensing field of view.
3. The integrated recognition method for remote sensing image ships based on deep learning according to claim 2, characterised in that the specific steps of S2 are as follows:
S21, convolutional feature extraction: extracting the features of the remote sensing images containing ships with the ResNet-101 network structure, exporting the feature information of the remote sensing images containing ships from the feature layers output at the five different stages of ResNet-101; fusing and extracting the feature information of the remote sensing images containing ships at different scales through a feature pyramid network, and refining and integrating the extracted feature information to obtain the final feature layers;
S22, region proposal: using the feature layers extracted in S21, generating anchors on all the feature layers of different dimensions, drawing a box around every anchor, and, after the framing side lengths and aspect ratios are fixed, generating a series of proposal regions on the feature maps;
S23, region adjustment: selecting and adjusting the series of proposal regions generated in S22 to finally obtain boxes of suitable size that just enclose the ships; setting loss functions, continuously optimising the loss functions during execution of the algorithm so as to dynamically adjust the position of each box and the class of the ship it contains, thereby determining the final position of each ship;
S24, obtaining classification information: on the basis of S23, obtaining the classification information of the ship by reducing the classification loss, the classification information being a one-hot encoding in the program, which is converted into the actual ship class, the remote sensing image thus being classified into ship target and background;
S25, bounding box adjustment: the bounding box adjustment being the part of the detection error executed automatically during gradient updates; during each gradient update, adjusting the coordinate position of the bounding box containing the ship so that the bounding box frames the detected ship precisely;
S26, optimising the detection loss: optimising the loss functions of the neural network of S23 using gradient descent with momentum, and obtaining the rectangular calibration box best suited to the ship;
S27, obtaining the segmentation mask and bounding box: building a mask branch network, inputting the classes, framed regions and feature information extracted in S21–S26, generating the corresponding masks, and obtaining the image after target detection.
4. The deep-learning-based integrated remote sensing image ship recognition method according to claim 3, wherein S3 comprises the following specific steps:
S31, multi-scale feature extraction: segment the ship-containing remote sensing images selected in S1 using a U-Net network whose encoder is a ResNet-34 neural network; the selected images are fed into the encoder, which outputs feature information at different scales;
S32, deconvolution of feature maps: in the decoding stage of the ResNet-34-based network, upsample the low-resolution feature maps by transposed convolution and combine them with the feature information of the decoding path to obtain the segmented image;
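The dimension-raising transposed convolution of S32 is easiest to see in one dimension: each input value scatters a scaled copy of the kernel into a longer output, and overlaps add. (A toy sketch; real decoders apply this per-channel on 2D tensors with learned kernels.)

```python
def transposed_conv1d(signal, kernel, stride=2):
    """1-D transposed convolution: upsamples `signal` by scattering
    a copy of `kernel`, scaled by each input value, `stride` apart."""
    out = [0.0] * ((len(signal) - 1) * stride + len(kernel))
    for i, v in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i * stride + j] += v * k   # overlapping writes add up
    return out
```
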
S33, integrated feature fusion: fuse the target detection image obtained in S27 with the segmentation image obtained in S32 to produce more accurate segmentation information;
S34, Markov random field diffusion of segmentation points: refine the segmentation information from S33 with a Markov random field algorithm; choose seed points in the segmentation and diffuse them so that the segmentation becomes more complete and accurate;
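The seed-point diffusion in S34 can be approximated by connectivity-based region growing over a foreground-probability map — a deliberately simplified stand-in for the full Markov random field refinement (function name and threshold are illustrative):

```python
from collections import deque

def diffuse_seeds(prob, seeds, threshold=0.5):
    """Grow a segmentation mask from seed points: a pixel joins the
    mask if it is 4-connected to the region and its foreground
    probability exceeds `threshold`."""
    h, w = len(prob), len(prob[0])
    mask = [[0] * w for _ in range(h)]
    queue = deque(seeds)
    for r, c in seeds:
        mask[r][c] = 1
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr][nc] \
                    and prob[nr][nc] > threshold:
                mask[nr][nc] = 1
                queue.append((nr, nc))
    return mask
```

Note that the high-probability pixel disconnected from the seed stays outside the mask, which is exactly the completeness-versus-spurious-region trade-off the diffusion step manages.
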
S35, opening operation to remove mask overlap: following morphological principles, apply erosion and then dilation (i.e. an opening operation) to the result image from S34, which effectively relieves overlap between ship segmentation masks and improves the segmentation quality;
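The opening of S35 — erosion followed by dilation — on a binary mask with a 3x3 square structuring element, as a pure-Python sketch (libraries such as scipy.ndimage provide optimized versions):

```python
def erode(img):
    """Binary erosion: keep a pixel only if its full 3x3 neighbourhood is set."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if all(img[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)):
                out[r][c] = 1
    return out

def dilate(img):
    """Binary dilation: set a pixel if any in-bounds 3x3 neighbour is set."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if any(img[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if 0 <= r + dr < h and 0 <= c + dc < w):
                out[r][c] = 1
    return out

def opening(img):
    """Opening = erosion then dilation; removes specks and thin
    bridges that make adjacent ship masks overlap."""
    return dilate(erode(img))
```
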
S36, output the final result image.
CN201811573380.2A 2018-12-21 2018-12-21 Remote sensing image ship integrated recognition method based on deep learning Active CN109583425B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811573380.2A CN109583425B (en) 2018-12-21 2018-12-21 Remote sensing image ship integrated recognition method based on deep learning

Publications (2)

Publication Number Publication Date
CN109583425A true CN109583425A (en) 2019-04-05
CN109583425B CN109583425B (en) 2023-05-02

Family

ID=65931254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811573380.2A Active CN109583425B (en) 2018-12-21 2018-12-21 Remote sensing image ship integrated recognition method based on deep learning

Country Status (1)

Country Link
CN (1) CN109583425B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563303A (en) * 2017-08-09 2018-01-09 中国科学院大学 A kind of robustness Ship Target Detection method based on deep learning
CN107609601A (en) * 2017-09-28 2018-01-19 北京计算机技术及应用研究所 A kind of ship seakeeping method based on multilayer convolutional neural networks
US20180107874A1 (en) * 2016-01-29 2018-04-19 Panton, Inc. Aerial image processing
CN108052940A (en) * 2017-12-17 2018-05-18 南京理工大学 SAR remote sensing images waterborne target detection methods based on deep learning
CN108921066A (en) * 2018-06-22 2018-11-30 西安电子科技大学 Remote sensing image Ship Detection based on Fusion Features convolutional network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OUYANG Yinghui et al., "Ship detection in optical remote sensing images based on convolutional neural networks", Packaging Engineering *

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110176005A (en) * 2019-05-16 2019-08-27 西安电子科技大学 Remote sensing image segmentation method based on normalization index and multiple dimensioned model
CN110176005B (en) * 2019-05-16 2023-03-24 西安电子科技大学 Remote sensing image segmentation method based on normalized index and multi-scale model
CN110427818A (en) * 2019-06-17 2019-11-08 青岛星科瑞升信息科技有限公司 The deep learning satellite data cloud detection method of optic that high-spectral data is supported
CN110427818B (en) * 2019-06-17 2022-06-28 青岛星科瑞升信息科技有限公司 Deep learning satellite data cloud detection method supported by hyperspectral data
CN110246133A (en) * 2019-06-24 2019-09-17 中国农业科学院农业信息研究所 A kind of corn kernel classification method, device, medium and equipment
CN110246133B (en) * 2019-06-24 2021-05-07 中国农业科学院农业信息研究所 Corn kernel classification method, device, medium and equipment
CN110142785A (en) * 2019-06-25 2019-08-20 山东沐点智能科技有限公司 A kind of crusing robot visual servo method based on target detection
CN110189342A (en) * 2019-06-27 2019-08-30 中国科学技术大学 Glioma region automatic division method
CN110263753A (en) * 2019-06-28 2019-09-20 北京海益同展信息科技有限公司 A kind of object statistical method and device
CN110348522A (en) * 2019-07-12 2019-10-18 创新奇智(青岛)科技有限公司 A kind of image detection recognition methods and system, electronic equipment, image classification network optimized approach and system
CN110348522B (en) * 2019-07-12 2021-12-07 创新奇智(青岛)科技有限公司 Image detection and identification method and system, electronic equipment, and image classification network optimization method and system
CN110378297A (en) * 2019-07-23 2019-10-25 河北师范大学 A kind of Remote Sensing Target detection method based on deep learning
CN110378297B (en) * 2019-07-23 2022-02-11 河北师范大学 Remote sensing image target detection method and device based on deep learning and storage medium
CN110633633A (en) * 2019-08-08 2019-12-31 北京工业大学 Remote sensing image road extraction method based on self-adaptive threshold
CN110633633B (en) * 2019-08-08 2022-04-05 北京工业大学 Remote sensing image road extraction method based on self-adaptive threshold
CN110689044A (en) * 2019-08-22 2020-01-14 湖南四灵电子科技有限公司 Target detection method and system combining relationship between targets
CN112507764A (en) * 2019-09-16 2021-03-16 中科星图股份有限公司 Method and system for extracting target object of remote sensing image and readable storage medium
CN112699710A (en) * 2019-10-22 2021-04-23 中科星图股份有限公司 GF2 remote sensing image dense target identification method and system based on deep learning
CN111291608A (en) * 2019-11-12 2020-06-16 广东融合通信股份有限公司 Remote sensing image non-building area filtering method based on deep learning
CN111027399A (en) * 2019-11-14 2020-04-17 武汉兴图新科电子股份有限公司 Remote sensing image surface submarine identification method based on deep learning
CN111027399B (en) * 2019-11-14 2023-08-22 武汉兴图新科电子股份有限公司 Remote sensing image water surface submarine recognition method based on deep learning
CN110956179A (en) * 2019-11-29 2020-04-03 河海大学 Robot path skeleton extraction method based on image refinement
CN112907501A (en) * 2019-12-04 2021-06-04 北京地平线机器人技术研发有限公司 Object detection method and device and electronic equipment
CN111695398A (en) * 2019-12-24 2020-09-22 珠海大横琴科技发展有限公司 Small target ship identification method and device and electronic equipment
CN111178304A (en) * 2019-12-31 2020-05-19 江苏省测绘研究所 High-resolution remote sensing image pixel level interpretation method based on full convolution neural network
CN113128323B (en) * 2020-01-16 2023-08-18 中国矿业大学 Remote sensing image classification method and device based on co-evolution convolutional neural network learning
CN113128323A (en) * 2020-01-16 2021-07-16 中国矿业大学 Remote sensing image classification method and device based on coevolution convolutional neural network learning
CN111428875A (en) * 2020-03-11 2020-07-17 北京三快在线科技有限公司 Image recognition method and device and corresponding model training method and device
CN111401560A (en) * 2020-03-24 2020-07-10 北京觉非科技有限公司 Inference task processing method, device and storage medium
CN111598892B (en) * 2020-04-16 2023-06-30 浙江工业大学 Cell image segmentation method based on Res2-uneXt network structure
CN111598892A (en) * 2020-04-16 2020-08-28 浙江工业大学 Cell image segmentation method based on Res2-uneXt network structure
CN111898633B (en) * 2020-06-19 2023-05-05 北京理工大学 Marine ship target detection method based on hyperspectral image
CN111898633A (en) * 2020-06-19 2020-11-06 北京理工大学 High-spectral image-based marine ship target detection method
CN111860176A (en) * 2020-06-22 2020-10-30 钢铁研究总院 Nonmetal inclusion full-field quantitative statistical distribution characterization method
CN111860176B (en) * 2020-06-22 2024-02-02 钢铁研究总院有限公司 Non-metal inclusion full-view-field quantitative statistical distribution characterization method
CN111950612A (en) * 2020-07-30 2020-11-17 中国科学院大学 FPN-based weak and small target detection method for fusion factor
CN112184635A (en) * 2020-09-10 2021-01-05 上海商汤智能科技有限公司 Target detection method, device, storage medium and equipment
CN112215797A (en) * 2020-09-11 2021-01-12 嗅元(北京)科技有限公司 MRI olfactory bulb volume detection method, computer device and computer readable storage medium
CN112232390B (en) * 2020-09-29 2024-03-01 北京临近空间飞行器系统工程研究所 High-pixel large image identification method and system
CN112232390A (en) * 2020-09-29 2021-01-15 北京临近空间飞行器系统工程研究所 Method and system for identifying high-pixel large image
CN112347895A (en) * 2020-11-02 2021-02-09 北京观微科技有限公司 Ship remote sensing target detection method based on boundary optimization neural network
CN112348794A (en) * 2020-11-05 2021-02-09 南京天智信科技有限公司 Ultrasonic breast tumor automatic segmentation method based on attention-enhanced U-shaped network
CN112418111A (en) * 2020-11-26 2021-02-26 福建省海洋预报台 Offshore culture area remote sensing monitoring method and device, equipment and storage medium
CN112906577B (en) * 2021-02-23 2024-04-26 清华大学 Fusion method of multisource remote sensing images
CN112906577A (en) * 2021-02-23 2021-06-04 清华大学 Fusion method of multi-source remote sensing image
CN113283306A (en) * 2021-04-30 2021-08-20 青岛云智环境数据管理有限公司 Rodent identification and analysis method based on deep learning and transfer learning
CN113239786A (en) * 2021-05-11 2021-08-10 重庆市地理信息和遥感应用中心 Remote sensing image country villa identification method based on reinforcement learning and feature transformation
CN113239786B (en) * 2021-05-11 2022-09-30 重庆市地理信息和遥感应用中心 Remote sensing image country villa identification method based on reinforcement learning and feature transformation
CN113628180A (en) * 2021-07-30 2021-11-09 北京科技大学 Semantic segmentation network-based remote sensing building detection method and system
CN113628180B (en) * 2021-07-30 2023-10-27 北京科技大学 Remote sensing building detection method and system based on semantic segmentation network
CN114037917A (en) * 2021-11-08 2022-02-11 北京中星天视科技有限公司 Ship identification method, ship identification device, electronic equipment and computer readable medium
CN114627135A (en) * 2022-01-08 2022-06-14 海南大学 Remote sensing image target segmentation method based on example segmentation network and voting mechanism
CN114419421A (en) * 2022-01-21 2022-04-29 中国地质大学(北京) Subway tunnel crack identification system and method based on images
CN114241339A (en) * 2022-02-28 2022-03-25 山东力聚机器人科技股份有限公司 Remote sensing image recognition model, method and system, server and medium
CN114419043A (en) * 2022-03-29 2022-04-29 南通人民彩印有限公司 Method and system for detecting new printing material by optical means
CN116337087A (en) * 2023-05-30 2023-06-27 广州健新科技有限责任公司 AIS and camera-based ship positioning method and system
CN117809310A (en) * 2024-03-03 2024-04-02 宁波港信息通信有限公司 Port container number identification method and system based on machine learning
CN117809310B (en) * 2024-03-03 2024-04-30 宁波港信息通信有限公司 Port container number identification method and system based on machine learning

Also Published As

Publication number Publication date
CN109583425B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN109583425A (en) A kind of integrated recognition methods of the remote sensing images ship based on deep learning
CN110321813B (en) Cross-domain pedestrian re-identification method based on pedestrian segmentation
CN112966684B (en) Cooperative learning character recognition method under attention mechanism
CN109614985B (en) Target detection method based on densely connected feature pyramid network
CN103049763B (en) Context-constraint-based target identification method
CN108875624B (en) Face detection method based on multi-scale cascade dense connection neural network
CN108304873A (en) Object detection method based on high-resolution optical satellite remote-sensing image and its system
CN111553837B (en) Artistic text image generation method based on neural style migration
CN106934386B (en) A kind of natural scene character detecting method and system based on from heuristic strategies
CN108830188A (en) Vehicle checking method based on deep learning
CN107424159A (en) Image, semantic dividing method based on super-pixel edge and full convolutional network
CN105930815A (en) Underwater organism detection method and system
CN110363201A (en) Weakly supervised semantic segmentation method and system based on Cooperative Study
CN112784736B (en) Character interaction behavior recognition method based on multi-modal feature fusion
CN105809121A (en) Multi-characteristic synergic traffic sign detection and identification method
CN109711448A (en) Based on the plant image fine grit classification method for differentiating key field and deep learning
CN106611423B (en) SAR image segmentation method based on ridge ripple filter and deconvolution structural model
CN107633226A (en) A kind of human action Tracking Recognition method and system
Maryan et al. Machine learning applications in detecting rip channels from images
CN109766823A (en) A kind of high-definition remote sensing ship detecting method based on deep layer convolutional neural networks
CN109583455A (en) A kind of image significance detection method merging progressive figure sequence
CN112488229A (en) Domain self-adaptive unsupervised target detection method based on feature separation and alignment
CN110929621B (en) Road extraction method based on topology information refinement
CN109872331A (en) A kind of remote sensing image data automatic recognition classification method based on deep learning
CN106203496A (en) Hydrographic curve extracting method based on machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant