CN106909924A - A fast remote sensing image retrieval method based on deep saliency - Google Patents

A fast remote sensing image retrieval method based on deep saliency

Info

Publication number
CN106909924A
CN106909924A (application CN201710087670.5A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710087670.5A
Other languages
Chinese (zh)
Other versions
CN106909924B (en)
Inventor
Zhang Jing (张菁)
Liang Xi (梁西)
Chen Lu (陈璐)
Zhuo Li (卓力)
Geng Wenhao (耿文浩)
Li Jiafeng (李嘉锋)
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN201710087670.5A
Publication of CN106909924A
Application granted
Publication of CN106909924B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/05 Underwater scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]


Abstract

A fast remote sensing image retrieval method based on deep saliency belongs to the field of computer vision, and in particular involves deep learning, salient object detection, and image retrieval. Taking remote sensing images as the research object, the present invention applies deep learning to develop a fast retrieval method for remote sensing images. First, a multi-task salient object detection model is built with a fully convolutional neural network; the model performs a saliency detection task and a semantic segmentation task simultaneously and learns deep saliency features of remote sensing images during network pre-training. The network structure is then extended: a hash layer is added and the network is fine-tuned, learning binary hash codes of the remote sensing images. Finally, the saliency features and hash codes are used jointly for similarity measurement. The invention is practical, and of significant application value, for accurate and efficient retrieval of remote sensing images.

Description

A fast remote sensing image retrieval method based on deep saliency
Technical field
Taking remote sensing images as the research object, the present invention applies deep learning, the latest achievement of artificial intelligence research, to develop a fast retrieval method for remote sensing images. First, a multi-task salient object detection model is built with a fully convolutional neural network to compute deep saliency features of remote sensing images. The network structure is then extended with a hash layer that learns binary hash codes. Finally, the saliency features and hash codes are used jointly to achieve accurate, fast retrieval of remote sensing images. The invention belongs to the field of computer vision and in particular involves deep learning, salient object detection, and image retrieval.
Background technology
Remote sensing imagery is the basic data of the three major spatial information technologies: geographic information systems (Geographic Information System, GIS), the Global Positioning System (Global Positioning System, GPS), and remote sensing (remote sensing, RS) surveying and mapping. It is widely used in environmental monitoring, resource surveying, land use, urban planning, natural disaster analysis, military applications, and many other fields. In recent years, with the development of high-resolution remote sensing satellites, imaging radar, and unmanned aerial vehicle (Unmanned Aerial Vehicle) technology, remote sensing data has become increasingly massive, complex, and high-resolution. Efficient and accurate retrieval of remote sensing images therefore has important research significance and application value for the precise extraction and sharing of remote sensing information.
Image retrieval has evolved from early text-based image retrieval (Text-Based Image Retrieval, TBIR) to content-based image retrieval (Content-Based Image Retrieval, CBIR), realized by extracting image features. Retrieval methods based on salient objects can quickly select a few salient regions from a complex scene for priority processing, effectively reducing data processing complexity and improving retrieval efficiency. Compared with ordinary images, remote sensing images contain complex and variable information, and their targets are small and not clearly distinguished from the background, so traditional saliency detection methods struggle to describe and analyze the saliency features of remote sensing images accurately. In recent years, deep learning has emerged as the latest achievement of artificial intelligence research. Deep neural networks represented by the fully convolutional neural network (Fully Convolutional Neural Network, FCNN), with convolution kernels that mimic the local receptive fields of the human eye and hierarchical cascade structures similar to biological neural systems, have shown excellent robustness in learning deep saliency features of images. Their weight sharing also greatly reduces the number of network parameters and the risk of overfitting the training data, making them easier to train than other kinds of deep networks and improving the accuracy with which saliency features are characterized.
Considering the ever-growing volume of remote sensing imagery and the limited descriptive power of image semantics, the present invention takes the public large-scale Aerial Image Dataset (AID), the Wuhan University remote sensing dataset (WHU-RS), and Google Earth remote sensing images as data sources and proposes a fast remote sensing image retrieval method based on deep saliency. First, a multi-task salient object detection model based on a fully convolutional neural network (Fully Convolutional Neural Network, FCNN) is built; on the pre-training dataset it learns semantic information of remote sensing images at different levels as deep saliency features, which are converted into one-dimensional vectors. The neural network model is then fine-tuned: a hash layer is introduced and the training set is enlarged, so that the high-dimensional saliency features learned by the model are mapped to a low-dimensional space in the form of binary hash codes (Binary Hash Codes). The saliency feature vectors and hash codes are stored separately to build the feature database. For a query, the trained model extracts the saliency feature vector and hash code of the query remote sensing image; similarity against the feature database is measured by the Hamming distance (Hamming Distance) between hash codes and the Euclidean distance (Euclidean Distance) between saliency feature vectors, realizing fast retrieval of remote sensing images.
Summary of the invention
Unlike existing remote sensing image retrieval methods, the present invention applies deep learning and proposes a fast remote sensing image retrieval method based on deep saliency. First, a multi-task deep salient object detection model is built with a fully convolutional neural network (FCNN), extending the image-level classification of an ordinary convolutional neural network (CNN) to pixel-level classification. The network is pre-trained on the large-scale Aerial Image Dataset (AID); the saliency detection task and the semantic segmentation task share convolutional layers, and the model jointly learns three levels of semantic information of remote sensing images, effectively removing feature redundancy and accurately extracting deep saliency features. Second, a hash layer is added to the model, and the neural network is fine-tuned on an enlarged Wuhan University remote sensing dataset (WHU-RS). Exploiting the ability of deep neural networks to learn incrementally via stochastic gradient descent (Stochastic Gradient Descent, SGD), binary hash codes are learned point-wise, reducing the dimensionality of the high-dimensional saliency features; this both saves storage space and improves retrieval efficiency. Meanwhile, compared with traditional hashing methods that require training samples to be input in pairs, the method of the present invention extends more easily to large-scale datasets. The saliency features learned during neural network pre-training and fine-tuning are converted into one-dimensional vectors and, together with the binary hash codes, used to build the feature database. Finally, a coarse-to-fine strategy is used in the retrieval stage: binary hash codes and saliency features are used jointly to measure Hamming distance and Euclidean distance, realizing fast and accurate retrieval of remote sensing images.
The main flow of the method, shown in Fig. 1, can be divided into the following three steps: construction of the object detection model based on deep saliency, neural network pre-training and fine-tuning with an added hash layer, and multi-level deep retrieval.
(1) Construction of the object detection model based on deep saliency
To extract salient image regions effectively, the present invention builds a multi-task salient object detection model based on a fully convolutional neural network. The model performs two tasks simultaneously: saliency detection and semantic segmentation. Saliency detection learns deep features of remote sensing images and computes deep saliency; semantic segmentation extracts semantic information about objects inside the image, eliminating background confusion in the saliency map and filling in missing parts of salient objects.
(2) Neural network pre-training and fine-tuning with an added hash layer
The present invention chooses the large-scale Aerial Image Dataset (AID) as the standard dataset for pre-training the network. To make the saliency features learned by the salient object detection model more robust for the retrieval of Chinese remote sensing images, 6,050 Chinese remote sensing images of varying illumination, shooting angle, resolution, and size were downloaded from Google Earth on top of the Wuhan University remote sensing dataset (WHU-RS), extending the WHU-RS dataset to 7,000 images for fine-tuning the neural network.
(3) Multi-level deep retrieval
The present invention proposes a coarse-to-fine retrieval scheme. Coarse retrieval uses the binary hash codes learned by the hash layer and measures similarity by Hamming distance. Fine retrieval maps the two-dimensional feature maps of the remote sensing image generated by the 13th and 15th convolutional layers into one-dimensional vectors, which serve as the saliency feature vectors, and measures similarity by Euclidean distance. A ranking-based evaluation criterion is used, computing the precision (Precision) of the retrieval results.
1. A fast remote sensing image retrieval method based on deep saliency, characterized in that it comprises the following steps:
Step 1: Construction of the object detection model based on deep saliency
An RGB image is input and passed through a series of convolution operations in 15 convolutional layers; the saliency detection task and the superpixel object semantic segmentation task share these convolutional layers. The first 13 convolutional layers are initialized from the convolutional neural network VGGNet, with 3 × 3 convolution kernels, and each convolutional layer is followed by a rectified linear unit (ReLU) activation. Max pooling is applied after the 2nd, 4th, 5th, and 13th convolutional layers. The kernel sizes of the 14th and 15th convolutional layers are 7 × 7 and 1 × 1 respectively, and each of these two layers is followed by a dropout layer.
A deconvolution layer is built by upsampling; its parameters are initialized by bilinear interpolation and updated iteratively while learning the upsampling function during training. In the saliency detection task, the output image is normalized to [0, 1] with a sigmoid threshold function and the saliency features are learned. In the semantic segmentation task, the feature map of the last convolutional layer is upsampled by the deconvolution layer, and the upsampled result is cropped so that the output image has the same size as the input image.
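The bilinear-interpolation initialization of the deconvolution (upsampling) layer mentioned above can be sketched as follows. This is a minimal NumPy illustration of the standard bilinear kernel construction; the function name and upsampling factor are illustrative assumptions, not part of the patent text.

```python
import numpy as np

def bilinear_kernel(factor):
    """Build the 2-D bilinear interpolation kernel commonly used to
    initialize a transposed-convolution layer that upsamples by `factor`."""
    size = 2 * factor - factor % 2                # kernel side length
    center = (size - 1) / 2.0 if size % 2 == 1 else factor - 0.5
    og = np.arange(size)
    w = 1.0 - np.abs(og - center) / factor        # 1-D triangular weights
    return np.outer(w, w)                         # outer product -> 2-D kernel

k = bilinear_kernel(2)  # 4x4 kernel for 2x upsampling; rows/cols sum to factor
```

The kernel sums to `factor**2`, so a constant input stays constant after upsampling; training then refines these weights iteratively as described above.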
Step 2: Neural network pre-training and fine-tuning with an added hash layer
Step 2.1: Pre-training of the multi-task salient object detection model
The FCNN is pre-trained on the saliency detection task and the segmentation task jointly. Let χ be a set of N₁ training images of width W and height Q, where X_i is the i-th image and Y_{ijk} is the pixel-level ground-truth segmentation label of image i at pixel (j, k), with i = 1 … N₁, j = 1 … W, k = 1 … Q. Let Z be a set of N₂ training images, where Z_n is the n-th image, n = 1 … N₂, with a corresponding ground-truth binary map M_n marking the salient objects. θ_s denotes the shared convolutional layer parameters, θ_h the segmentation task parameters, and θ_f the saliency task parameters. Formula (1) is the cross-entropy cost function J₁(χ; θ_s, θ_h) of the segmentation task, and formula (2) is the squared Euclidean distance cost function J₂(Z; θ_s, θ_f) of the saliency detection task; the FCNN is trained by minimizing the two cost functions:
J_1(\chi; \theta_s, \theta_h) = -\sum_{i=1}^{N_1} \sum_{j=1}^{W} \sum_{k=1}^{Q} \sum_{c=1}^{C} \mathbf{1}\{Y_{ijk} = c\} \log h_{cjk}(X_i; \theta_s, \theta_h)   (1)

J_2(Z; \theta_s, \theta_f) = \sum_{n=1}^{N_2} \| f(Z_n; \theta_s, \theta_f) - M_n \|_F^2   (2)

In formula (1), 1{·} is the indicator function, h_cjk is element (j, k) of the c-th class confidence segmentation map, c = 1 … C, and h(X_i; θ_s, θ_h) is the semantic segmentation function, which returns C confidence segmentation maps, one per object class; C is the number of image categories contained in the pre-training dataset. In formula (2), f(Z_n; θ_s, θ_f) is the saliency map output function, and ‖·‖_F denotes the Frobenius norm.
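The two cost functions just described, pixel-wise cross-entropy for segmentation and squared Frobenius distance for saliency, can be sketched numerically as follows. This is a minimal NumPy illustration on toy arrays; the array shapes and function names are assumptions for the sketch.

```python
import numpy as np

def segmentation_cost(conf_maps, labels):
    """Formula (1): pixel-wise cross-entropy for one image.
    conf_maps: (C, W, Q) class-confidence maps h_cjk, assumed to be
               valid probabilities over the C classes at each pixel.
    labels:    (W, Q) integer ground-truth map Y_ijk."""
    W, Q = labels.shape
    j, k = np.meshgrid(np.arange(W), np.arange(Q), indexing="ij")
    # confidence of the true class at every pixel, summed as -log likelihood
    return -np.log(conf_maps[labels, j, k]).sum()

def saliency_cost(pred_map, true_map):
    """Formula (2): squared Frobenius distance between the predicted
    saliency map f(Z_n) and the ground-truth binary map M_n."""
    return np.linalg.norm(pred_map - true_map, ord="fro") ** 2

j1 = segmentation_cost(np.array([[[0.5]], [[0.5]]]), np.array([[1]]))
j2 = saliency_cost(np.array([[1.0, 0.0]]), np.zeros((1, 2)))
```

In training these costs are summed over the N₁ (respectively N₂) images and minimized by SGD, as described below.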
Next, the above cost functions are minimized with the stochastic gradient descent (SGD) method, with regularization over all training samples. Because the pre-training dataset does not carry segmentation and saliency annotations simultaneously, the segmentation task and the saliency detection task are trained alternately. All original images are normalized to a common size before training. The learning rate is 0.001 ± 0.01; the momentum parameter is typically [0.9, 1.0], and the weight decay factor is typically 0.0005 ± 0.0002. SGD training runs for more than 80,000 iterations in total. The detailed pre-training procedure is as follows:
1) The shared convolutional layer parameters θ_s^(0) are initialized from VGGNet;
2) The segmentation task parameters θ_h^(0) and saliency task parameters θ_f^(0) are randomly initialized from a normal distribution;
3) With θ_s^(0) and θ_h^(0), the segmentation network is trained using SGD, updating the two parameters to θ_s^(1) and θ_h^(1);
4) With θ_s^(1) and θ_f^(0), the saliency network is trained using SGD, updating the relevant parameters to θ_s^(2) and θ_f^(1);
5) With θ_s^(2) and θ_h^(1), the segmentation network is trained using SGD, obtaining θ_s^(3) and θ_h^(2);
6) With θ_s^(3) and θ_f^(1), the saliency network is trained using SGD, updating the relevant parameters to θ_s^(4) and θ_f^(2);
7) Steps 3-6 are repeated three times to obtain the final pre-training parameters θ_s, θ_h, θ_f.
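The alternating schedule in these steps can be sketched as follows. This is a schematic only: `sgd_step_seg` and `sgd_step_sal` are stand-in update functions (the real ones run SGD over the respective datasets), so the sketch shows the task interleaving, not the actual solver.

```python
def pretrain_alternating(theta_s, theta_h, theta_f,
                         sgd_step_seg, sgd_step_sal, rounds=3):
    """Alternate SGD between the segmentation task (updates theta_s, theta_h)
    and the saliency task (updates theta_s, theta_f), as in steps 3)-7)."""
    for _ in range(rounds):
        theta_s, theta_h = sgd_step_seg(theta_s, theta_h)  # step 3)
        theta_s, theta_f = sgd_step_sal(theta_s, theta_f)  # step 4)
        theta_s, theta_h = sgd_step_seg(theta_s, theta_h)  # step 5)
        theta_s, theta_f = sgd_step_sal(theta_s, theta_f)  # step 6)
    return theta_s, theta_h, theta_f

# toy updates that just record which task touched the shared weights
log = []
seg = lambda s, h: (log.append("seg") or s, h)
sal = lambda s, f: (log.append("sal") or s, f)
final = pretrain_alternating(0, 0, 0, seg, sal)
```

Because θ_s is passed through both update functions, the shared convolutional layers are trained by both tasks, which is what lets the segmentation signal refine the saliency features.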
Step 2.2: Adding a hash layer and fine-tuning on the target domain
Between the second-to-last layer of the pre-trained network and the final task layer, a fully connected layer containing s neurons, the hash layer H, is inserted; it maps the high-dimensional features to a low-dimensional space, generating binary hash codes for storage. The weights of hash layer H are initialized by constructing hash values with random projections; the neuron activation function is the sigmoid, so output values lie between 0 and 1, and the number of neurons equals the code length of the target binary code.
Fine-tuning adjusts the network weights by back-propagation; only the network weights after the 10th convolutional layer are adjusted. The dataset used for fine-tuning can be 10%-50% smaller than the dataset used for pre-training. Compared with the pre-training settings, the number of parameter update iterations and the learning rate during fine-tuning are reduced by 1%-10%, while the momentum parameter and weight decay factor remain unchanged.
The detailed fine-tuning procedure is as follows:
1) The shared convolutional layer parameters θ_s^(0), segmentation task parameters θ_h^(0), and saliency task parameters θ_f^(0) are taken from the pre-training process;
2) With θ_s^(0) and θ_h^(0), the segmentation network is trained using SGD, updating the two parameters to θ_s^(1) and θ_h^(1);
3) With θ_s^(1) and θ_f^(0), the saliency network is trained using SGD, updating the relevant parameters to θ_s^(2) and θ_f^(1);
4) With θ_s^(2) and θ_h^(1), the segmentation network is trained using SGD, obtaining θ_s^(3) and θ_h^(2);
5) With θ_s^(3) and θ_f^(1), the saliency network is trained using SGD, updating the relevant parameters to θ_s^(4) and θ_f^(2);
6) Steps 2-5 are repeated three times to obtain the final parameters θ_s, θ_h, θ_f.
Step 3: Multi-level deep retrieval
Step 3.1: Coarse retrieval
Step 3.1.1: Generating binary hash codes
A query image I_q is input to the fine-tuned neural network, and the output of the hash layer is extracted as the image signature, denoted Out(H). The binary code is obtained by thresholding the activations: for each bit r = 1 … s, the binary code is output according to formula (3):

H_r = \begin{cases} 1, & \mathrm{Out}_r(H) \geq 0.5 \\ 0, & \mathrm{Out}_r(H) < 0.5 \end{cases}   (3)

where s is the number of neurons in the hash layer, with an initial value set in the range [40, 100]. Γ = {I₁, I₂, …, I_n} denotes the retrieval dataset containing n images. The binary codes of the corresponding images are Γ_H = {H₁, H₂, …, H_n}, where H_i ∈ {0, 1}^s, i = 1 … n, i.e., each of the s binary code values generated by the s neurons is either 0 or 1.
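The threshold binarization of the sigmoid hash-layer activations can be sketched as follows; a minimal pure-Python illustration, with the 0.5 threshold taken as the natural midpoint of the sigmoid's [0, 1] output range.

```python
def binarize(out_h, threshold=0.5):
    """Turn the s sigmoid activations Out(H) of the hash layer into a
    binary hash code H, bit by bit, per formula (3)."""
    return [1 if a >= threshold else 0 for a in out_h]

# four hypothetical hash-layer activations for a query image
code = binarize([0.91, 0.12, 0.50, 0.37])
```

Storing these s-bit codes instead of the high-dimensional feature vectors is what saves storage space and speeds up the coarse retrieval stage.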
Step 3.1.2: Similarity measured by Hamming distance
The Hamming distance between two equal-length strings is the number of positions at which the corresponding characters differ. For a query image I_q with binary code H_q, if the Hamming distance between H_q and a code H_i ∈ Γ_H is less than a set threshold, the two images are considered similar; this defines a candidate pool P = {I_c1, I_c2, …, I_cm} containing m candidate images. Two images are considered similar when their Hamming distance is less than 5.
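The coarse-retrieval filter just described can be sketched as follows; a pure-Python illustration with hypothetical 8-bit codes (the names and code length are assumptions for the sketch).

```python
def hamming(a, b):
    """Number of differing bits between two equal-length binary codes."""
    return sum(x != y for x, y in zip(a, b))

def candidate_pool(h_q, database_codes, threshold=5):
    """Coarse retrieval: indices of images whose hash code lies within
    the given Hamming radius of the query code H_q."""
    return [i for i, h in enumerate(database_codes)
            if hamming(h_q, h) < threshold]

pool = candidate_pool([1, 0, 1, 1, 0, 0, 1, 0],
                      [[1, 0, 1, 1, 0, 0, 1, 0],    # distance 0 -> kept
                       [0, 1, 0, 0, 1, 1, 0, 1],    # distance 8 -> dropped
                       [1, 0, 1, 0, 0, 0, 1, 1]])   # distance 2 -> kept
```

Only the images surviving this cheap bitwise filter are passed to the Euclidean-distance fine retrieval stage.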
Step 3.2: Fine retrieval
Step 3.2.1: Extracting saliency features
The two-dimensional feature maps of the query image I_q generated by the 13th and 15th convolutional layers of the neural network are each mapped to a one-dimensional vector and stored. During subsequent retrieval, the retrieval results obtained with the two different feature vectors are compared to decide which layer's feature map is finally chosen for extracting the saliency features of remote sensing images.
Step 3.2.2: Similarity measured by Euclidean distance
For a query image I_q and a candidate pool P, the top-k images are selected from P using the extracted saliency feature vectors. Let V_q and V_ci denote the feature vectors of the query image q and of candidate I_ci respectively. The Euclidean distance s_i between I_q and the i-th image in P, defined in formula (4), serves as the similarity level between them:

s_i = \| V_q - V_{c_i} \|_2   (4)

The smaller the Euclidean distance, the greater the similarity between the two images. The candidate images I_ci are sorted in ascending order of distance to the query image, and the top-k images are returned as the retrieval result.
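The fine-retrieval ranking of formula (4) can be sketched as follows; a minimal NumPy illustration with toy 2-D feature vectors (in practice the vectors come from the flattened convolutional feature maps described above).

```python
import numpy as np

def rank_candidates(v_q, candidate_vectors, k):
    """Sort candidates by Euclidean distance to the query feature vector
    V_q (formula (4)) and return the indices of the top-k closest."""
    d = [np.linalg.norm(np.asarray(v_q) - np.asarray(v))
         for v in candidate_vectors]
    return sorted(range(len(d)), key=lambda i: d[i])[:k]

top = rank_candidates([0.0, 0.0],
                      [[3.0, 4.0],    # distance 5
                       [1.0, 0.0],    # distance 1
                       [0.0, 2.0]],   # distance 2
                      k=2)
```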
Step 3.3: Evaluation of retrieval results
Retrieval results are evaluated with a ranking-based criterion. For a query image q and the top-k retrieved result images, the precision (Precision) is computed according to the following formula:

\mathrm{Precision}@k = \frac{1}{k} \sum_{i=1}^{k} \mathrm{Rel}(i)   (5)

where Precision@k denotes, for the given cutoff k, the average correctness over the results from the first up to the k-th; Rel(i) ∈ {0, 1} denotes the relevance of query image q and the i-th ranked image: 1 means the query image q and the i-th ranked image share the same category (i.e., they are related), and 0 means they are unrelated.
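The precision measure can be sketched as follows; pure Python, where `rel` is the list of Rel(i) values for the ranked results (a toy example, not data from the patent).

```python
def precision_at_k(rel, k):
    """Precision@k: fraction of the top-k ranked results whose Rel(i)
    is 1, i.e., that share the query image's category."""
    return sum(rel[:k]) / k

p = precision_at_k([1, 1, 0, 1, 0], k=5)  # 3 relevant results in the top 5
```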
Compared with the prior art, the present invention has the following obvious advantages and beneficial effects:
First, compared with traditional hand-crafted extraction of remote sensing image features, the present invention builds a deep salient object detection model with a fully convolutional neural network, trains the network on domestic and foreign remote sensing image databases, comprehensively analyzes three levels of semantic information of the image, and learns remote sensing saliency features automatically. Moreover, semantic segmentation is innovatively added to the fully convolutional network's learning of deep saliency in remote sensing images, effectively improving the learned saliency features. Experiments confirm that on multi-object detection datasets with relatively complex scenes, such as the Microsoft COCO dataset, the model can extract salient objects with clearer edges. The learning ability of the deep neural network can further be transferred to saliency feature learning on remote sensing images. Second, the present invention introduces a hash layer into the fully convolutional neural network framework, generating binary hash codes while learning the deep saliency features of remote sensing images, which both saves storage space and improves subsequent retrieval efficiency. Finally, a coarse-to-fine strategy is used during image retrieval, measuring similarity with binary hash codes and saliency features jointly. Experiments confirm that when a hash layer is added to the AlexNet neural network and the coarse-to-fine multi-level retrieval strategy is used on 2,500,000 ordinary images of different categories, counting the accuracy of the top-K returned similar images (the top-K precision), the average top-K precision reaches 88% for K = 1000, with a retrieval time of about 1 s. The method can therefore be transferred to remote sensing image retrieval, and is practical, with significant application value, for accurate and efficient retrieval of remote sensing images.
Brief description of the drawings
Fig. 1 is the flow chart of the fast remote sensing image retrieval method based on deep saliency;
Fig. 2 is the architecture diagram of the object detection model based on deep saliency;
Fig. 3 is the architecture diagram of the neural network with the added hash layer;
Fig. 4 is the diagram of the multi-level retrieval process.
Specific embodiment
In accordance with the foregoing description, the following is a specific implementation flow, but the scope protected by this patent is not limited to this implementation flow.
Step 1: Construction of the object detection model based on deep saliency
A salient region is, subjectively, the region on which human visual attention focuses, closely related to the human visual system (Human Visual System, HVS); objectively, it is the sub-region in which some feature of the image is most pronounced. The key to the saliency detection problem is therefore feature learning and extraction. Given the strengths of deep learning in this respect, the present invention applies fully convolutional neural networks to the saliency detection problem and proposes a multi-task salient object detection model based on a fully convolutional neural network. The model performs two tasks simultaneously: a saliency detection task and a semantic segmentation task. The saliency detection task learns deep features of remote sensing images and computes deep saliency; the semantic segmentation task extracts semantic information about objects inside the image, eliminating background confusion in the saliency map and filling in missing parts of salient objects.
The fully convolutional network framework proposed by the present invention is implemented with the mainstream open-source deep learning framework Caffe; the concrete model structure is shown in Fig. 2. An RGB image is input and passed through a series of convolution operations in 15 convolutional layers (Conv); the saliency detection task and the superpixel object semantic segmentation task share the convolutional layers. The first 13 convolutional layers are initialized from the convolutional neural network VGGNet, with 3 × 3 convolution kernels, and each convolutional layer is followed by a rectified linear unit (Rectified Linear Unit, ReLU) activation to accelerate convergence. Max pooling (Max Pooling) follows the 2nd, 4th, 5th, and 13th convolutional layers, reducing the feature dimensionality and guaranteeing the invariance of the features while reducing computation. The kernel sizes of the 14th and 15th convolutional layers are 7 × 7 and 1 × 1 respectively, and each is followed by a dropout layer to counter the overfitting latent in complex neural network structures, i.e., the problem that a model over-learns the training data, leading to a higher error rate and poor generalization in actual testing. A deconvolution layer is built by upsampling; its parameters are initialized by bilinear interpolation and updated iteratively while learning the upsampling function during training. In the saliency detection task, the output image is normalized to [0, 1] with a sigmoid threshold function and the saliency features are learned. In the semantic segmentation task, the feature map of the last convolutional layer is upsampled by the deconvolution layer, and the upsampled result is cropped (Crop) so that the output image has the same size as the input image; a prediction is thus generated for each pixel while the spatial information of the original input image is retained.
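The crop step described above, which trims the upsampled map back to the input image size, can be sketched as follows. This NumPy illustration uses a simple center crop; the exact crop offsets used by the patent's implementation are not specified, so the centering is an assumption.

```python
import numpy as np

def center_crop(feature_map, out_h, out_w):
    """Trim an upsampled 2-D map so its size matches the input image,
    keeping the centered region (the Crop step after deconvolution)."""
    h, w = feature_map.shape
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return feature_map[top:top + out_h, left:left + out_w]

up = np.arange(36).reshape(6, 6)   # stand-in for an upsampled feature map
crop = center_crop(up, 4, 4)       # trimmed to a 4x4 "input-sized" output
```

Cropping after upsampling is needed because padding in the convolution/deconvolution stack generally makes the upsampled map slightly larger than the original input.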
Step 2: Neural network pre-training and fine-tuning with an added hash layer
The present invention uses the public large-scale Aerial Image Dataset (AID) for pre-training the neural network, aiming to better learn the semantic features of remote sensing images at different levels. A hash layer is then introduced and the network is further fine-tuned with the enlarged Wuhan University remote sensing dataset (WHU-RS); this not only maps the high-dimensional features learned by the neural network to a low-dimensional space, shortening retrieval time, but also makes the features learned by the network more robust.
Step 2.1: Pre-training of the multi-task saliency detection model
Step 2.1.1: Construction of the pre-training dataset
The publicly available large-scale Aerial Image Dataset (AID) is selected as the standard dataset for the pre-training stage. AID contains 30 categories and 10000 aerial images, all taken from Google Earth and annotated by specialists in the remote sensing field. The images of each category are captured in different countries and regions, at different times, and by different remote sensing instruments; the image size is 600 × 600 pixels, with spatial resolutions ranging from 0.5 m/pixel to 8 m/pixel. Compared with other datasets, this dataset has smaller intra-class differences and larger inter-class differences, and it is currently the largest aerial image dataset.
Step 2.1.2: Pre-training of the saliency detection model
The FCNN is pre-trained jointly on the saliency detection task and the segmentation task. Let χ denote the set of N1 training images of width W and height Q, with Xi the i-th image, and let Yijk denote the pixel-level ground-truth segmentation label at position (j, k) of the i-th image, where i = 1…N1, j = 1…W, k = 1…Q. Let Z denote a set of N2 training images, with Zn the n-th image, n = 1…N2, each having a corresponding ground-truth binary saliency mask Mn. θs denotes the shared convolutional layer parameters, θh the segmentation task parameters, and θf the saliency task parameters. Formula (1) and formula (2) are respectively the cross-entropy cost function J1(χ; θs, θh) of the segmentation task and the squared Euclidean distance cost function J2(Z; θs, θf) of the saliency detection task; the FCNN is trained by minimizing the two cost functions:

J1(χ; θs, θh) = −(1/N1) Σ_{i=1}^{N1} Σ_{j=1}^{W} Σ_{k=1}^{Q} Σ_{c=1}^{C} 1{Yijk = c} log h_cjk(Xi; θs, θh)    (1)

J2(Z; θs, θf) = (1/N2) Σ_{n=1}^{N2} || Mn − f(Zn; θs, θf) ||_F^2    (2)
In formula (1), 1{·} is the indicator function, h_cjk is the element (j, k) of the class-c confidence segmentation map, c = 1…C, and h(Xi; θs, θh) is the semantic segmentation function, which returns C confidence segmentation maps of the target classes in total; C is the number of image categories contained in the pre-training dataset, and C is taken as 30 in the present invention. In formula (2), f(Zn; θs, θf) is the saliency map output function, and ||·||_F denotes the Frobenius norm.
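The two cost functions can be evaluated on toy arrays with NumPy as follows. This is an illustration only, not code from the patent; the array shapes, class count, and random inputs are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N1, W, Q, C = 2, 4, 4, 3        # toy sizes: 2 images, 4x4 pixels, 3 classes
N2 = 2

# Segmentation: ground-truth labels Y and per-class confidence maps h
Y = rng.integers(0, C, size=(N1, W, Q))
logits = rng.normal(size=(N1, C, W, Q))
h = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)   # softmax over classes

# J1: cross-entropy cost of the segmentation task, as in formula (1)
one_hot = (Y[:, None, :, :] == np.arange(C)[None, :, None, None])
J1 = -np.sum(one_hot * np.log(h)) / N1

# Saliency: ground-truth binary masks M and predicted saliency maps f
M = rng.integers(0, 2, size=(N2, W, Q)).astype(float)
f = 1.0 / (1.0 + np.exp(-rng.normal(size=(N2, W, Q))))           # sigmoid outputs in [0,1]

# J2: squared Frobenius-norm cost of the saliency task, as in formula (2)
J2 = np.sum((M - f) ** 2) / N2
```

Both quantities are non-negative, and minimizing them jointly is what drives the alternating training described next.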
Next, the above cost functions are minimized with the stochastic gradient descent (SGD) method, on the basis of regularizing all training samples. Since the dataset used for pre-training does not carry segmentation and saliency annotations simultaneously, the segmentation task and the saliency detection task are trained alternately. Because the training process requires all original images to be normalized to the same size, the present invention resizes the original images to 500 × 500 pixels for pre-training. The learning rate is a necessary parameter of the SGD learning method and determines the speed of weight updating: setting it too large causes the cost function to oscillate and overshoot the optimum, while setting it too small makes convergence too slow, so a smaller learning rate, such as 0.001 ± 0.01, is generally chosen to keep the system stable. The momentum parameter and the weight decay factor can improve the adaptivity of training; the momentum parameter is usually in [0.9, 1.0], and the weight decay factor is usually 0.0005 ± 0.0002. Through experimental observation, the present invention sets the learning rate to 10^-10 and the momentum parameter to 0.99, and the weight decay factor takes the Caffe framework default value 0.0005. The stochastic gradient descent (SGD) learning process is accelerated by an NVIDIA GTX 1080 GPU device, with 80000 iterations carried out in total. The detailed pre-training process is as follows:
1) The shared convolutional layer parameters θs(0) are initialized from VGGNet;
2) The segmentation task parameters θh(0) and saliency task parameters θf(0) are randomly initialized from a normal distribution;
3) According to θs(0) and θh(0), the segmentation network is trained with SGD, updating the two parameters to θs(1) and θh(1);
4) According to θs(1) and θf(0), the saliency network is trained with SGD, updating the relevant parameters to θs(2) and θf(1);
5) According to θs(2) and θh(1), the segmentation network is trained with SGD, obtaining θs(3) and θh(2);
6) According to θs(3) and θf(1), the saliency network is trained with SGD, updating the relevant parameters to θs(4) and θf(2);
7) Steps 3)-6) above are repeated three times to obtain the final pre-training parameters θs, θh, θf.
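The alternating update schedule in steps 1)-7) can be sketched in plain Python. Here `train_seg` and `train_sal` stand in for one round of SGD on the segmentation and saliency tasks; they are illustrative placeholders (counters, not real training), not the patent's implementation:

```python
def pretrain_alternating(train_seg, train_sal, theta_s, theta_h, theta_f, rounds=3):
    """Alternate SGD between the segmentation and saliency tasks.

    train_seg(theta_s, theta_h) -> (theta_s, theta_h)
    train_sal(theta_s, theta_f) -> (theta_s, theta_f)
    One round performs two segmentation updates and two saliency updates
    (steps 3-6), and the round is repeated `rounds` times (step 7).
    """
    for _ in range(rounds):
        theta_s, theta_h = train_seg(theta_s, theta_h)   # steps 3 / 5
        theta_s, theta_f = train_sal(theta_s, theta_f)   # steps 4 / 6
        theta_s, theta_h = train_seg(theta_s, theta_h)
        theta_s, theta_f = train_sal(theta_s, theta_f)
    return theta_s, theta_h, theta_f

# Toy stand-ins that simply count how often each parameter set is updated
seg = lambda s, h: (s + 1, h + 1)
sal = lambda s, f: (s + 1, f + 1)
print(pretrain_alternating(seg, sal, 0, 0, 0))   # (12, 6, 6)
```

The counts confirm the schedule: the shared parameters θs are touched by every update (12 times over three rounds), while θh and θf are each updated twice per round.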
Step 2.2: Adding a hash layer and fine-tuning the network for the target domain
Step 2.2.1: Construction of the Chinese remote sensing image dataset for network fine-tuning
The extended Wuhan University remote sensing image dataset (WHU-RS) is selected for neural network fine-tuning. The original WHU-RS dataset contains 19 scene categories with a total of 950 remote sensing images of varying resolutions; the image size is 600 × 600 pixels, and all images are taken from Google Earth. With reference to the topography and landforms of China, the original dataset is restructured and extended to 7000 remote sensing images as the sample library, with each category containing more than 200 images. The newly added sample images differ in illumination, shooting angle, resolution and size, which helps the neural network learn more robust saliency features.
Step 2.2.2: Adding the hash layer and fine-tuning the network
The feature vectors generated by a deep neural network are of high dimension, which is very time-consuming in large-scale image retrieval. Since similar images have similar binary hash codes, the present invention inserts a fully connected layer containing s neurons, i.e. the hash layer H, between the penultimate layer of the pre-trained network and the final task layer, mapping the high-dimensional features to a low-dimensional space and generating binary hash codes for storage; the network structure is shown in accompanying drawing 3. The weights of the hash layer H are initialized by constructing hash values with random projection; the neuron activation function adopts the sigmoid function so that the output values lie between 0 and 1, with the threshold empirically set to 0.5; the number of neurons equals the code length of the target binary code. The hash layer not only provides an abstraction of the features of the preceding layer, but also serves as a bridge connecting mid-level and high-level visual semantic features.
The fine-tuning process adjusts the network weights through the back propagation (Back Propagation) algorithm. Network fine-tuning can be performed on the whole network or on part of it. Since the lower layers of the network learn more general features, and in order to avoid over-fitting, the present invention uses the extended WHU-RS dataset and mainly adjusts the weights of the upper layers, i.e. the layers after the 10th convolutional layer. The dataset used for fine-tuning is commonly 10%-50% smaller in data volume than the pre-training dataset; in the present invention, the fine-tuning dataset contains 7000 images, clearly fewer than the 10000 images of the pre-training dataset. Compared with the pre-training settings, the fine-tuning parameters are reduced appropriately: the number of iterations and the learning rate can be reduced to 1%-10% of their pre-training values. In the present invention, the number of iterations in fine-tuning is reduced to 8000, the learning rate is reduced to 1% of its pre-training value, i.e. 10^-12, and the momentum parameter and weight decay factor remain unchanged, i.e. 0.99 and 0.0005.
The detailed fine-tuning process is as follows:
1) The shared convolutional layer parameters θs(0), segmentation task parameters θh(0) and saliency task parameters θf(0) are obtained from the pre-training process;
2) According to θs(0) and θh(0), the segmentation network is trained with SGD, updating the two parameters to θs(1) and θh(1);
3) According to θs(1) and θf(0), the saliency network is trained with SGD, updating the relevant parameters to θs(2) and θf(1);
4) According to θs(2) and θh(1), the segmentation network is trained with SGD, obtaining θs(3) and θh(2);
5) According to θs(3) and θf(1), the saliency network is trained with SGD, updating the relevant parameters to θs(4) and θf(2);
6) Steps 2)-5) above are repeated three times to obtain the final parameters θs, θh, θf.
Step 3: Multi-level deep retrieval
The shallow layers of a deep convolutional neural network learn low-level visual features, while the deeper layers capture image semantic information. Therefore, the present invention adopts a coarse-to-fine search strategy to achieve fast and accurate image retrieval. The feature extraction and retrieval process is shown in accompanying drawing 4.
Step 3.1: Coarse retrieval
A series of candidates with similar high-level semantic features, i.e. images possessing similar binary activation values in the hash layer, are retrieved first; a similar-image ranking is then further generated according to a similarity measure.
Step 3.1.1: Generating binary hash codes
An image to be queried Iq is input to the fine-tuned neural network, and the output of the hash layer is extracted as the image signature, denoted Out(H). The binary code is obtained by binarizing the activation values against a threshold. For each binary bit r = 1…s, the binary code is output according to formula (3):

H_r = 1 if Out_r(H) ≥ 0.5, and H_r = 0 otherwise    (3)

where s is the number of neurons in the hash layer; too large a number causes over-fitting, so an initial setting range of [40, 100] is suggested, with the concrete value adjusted according to the actual training data; s is set to 48 in the present invention. Γ = {I1, I2, …, In} denotes the dataset of n images used for retrieval. The binary codes of the corresponding images are denoted ΓH = {H1, H2, …, Hn}, where i = 1…n and Hi ∈ {0,1}^s indicates that the s binary code values generated by the s neurons are each 0 or 1.
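Formula (3) amounts to thresholding the sigmoid outputs of the hash layer at 0.5. A minimal NumPy sketch, using an arbitrary toy activation vector rather than a real network output:

```python
import numpy as np

def binarize(out_h, threshold=0.5):
    """Turn hash-layer sigmoid activations Out(H) into an s-bit binary code,
    per formula (3): bit r is 1 when Out_r(H) >= threshold, else 0."""
    return (np.asarray(out_h) >= threshold).astype(np.uint8)

activations = np.array([0.91, 0.12, 0.50, 0.49, 0.73])   # toy Out(H), s = 5
code = binarize(activations)
print(code.tolist())   # [1, 0, 1, 0, 1]
```

In the patent's setting s would be 48 and the activations would come from the fine-tuned hash layer; the toy s = 5 here is purely for illustration.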
Step 3.1.2: Hamming distance similarity measurement
The Hamming distance between two strings of equal length is the number of positions at which the corresponding characters differ. For an image to be queried Iq with binary code Hq, if the Hamming distance between Hq and Hi ∈ ΓH is less than a set threshold, a candidate pool P = {Ic1, Ic2, …, Icm} containing m candidate pictures (candidates) is defined; generally, two images are considered similar when the Hamming distance is less than 5.
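The coarse-retrieval step can be sketched with NumPy as follows; the Hamming threshold of 5 follows the description above, while the tiny 8-bit database is an arbitrary assumption for illustration:

```python
import numpy as np

def candidate_pool(query_code, db_codes, max_dist=5):
    """Return indices of database images whose binary hash codes lie
    within Hamming distance `max_dist` of the query code."""
    db = np.asarray(db_codes, dtype=np.uint8)
    # Hamming distance = number of differing bit positions
    dists = np.count_nonzero(db != np.asarray(query_code, dtype=np.uint8), axis=1)
    return [i for i, d in enumerate(dists) if d < max_dist]

q = np.zeros(8, dtype=np.uint8)                  # toy 8-bit query code
db = [np.zeros(8), np.ones(8),                   # distances 0 and 8
      np.array([1, 1, 0, 0, 0, 0, 0, 0])]        # distance 2
print(candidate_pool(q, db))                      # [0, 2]
```

Because Hamming distance on short binary codes is cheap to compute, this filtering pass stays fast even over a large image database, which is the point of the coarse stage.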
Step 3.2: Fine retrieval
Step 3.2.1: Saliency feature extraction
Since different convolutional layers of a deep convolutional network learn the semantic features of images at different levels, the features learned by the mid- and high-level convolutional layers are more applicable to the image retrieval task. Therefore, the two-dimensional remote sensing image feature maps generated by the 13th and 15th convolutional layers of the neural network for the image to be queried Iq are each mapped to a one-dimensional vector and stored. During subsequent retrieval, the retrieval results of the different feature vectors are compared separately to determine from which convolutional layer's feature map the remote sensing image saliency features are finally extracted.
Step 3.2.2: Euclidean distance similarity measurement
For a query image Iq and a candidate pool P, the top-k images are picked out of the candidate pool P using the extracted saliency feature vectors. Vq and V_ci^P denote the feature vectors of the query image q and of Ici respectively. The Euclidean distance s_i between Iq and the feature vector of the i-th image in the candidate pool P is defined as their similarity level, as shown in formula (4):

s_i = || Vq − V_ci^P ||    (4)

The smaller the Euclidean distance, the greater the similarity between the two images. The candidate pictures Ici are sorted in ascending order of Euclidean distance to the query image, and the top-k images are the retrieval result.
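The fine re-ranking of formula (4) can be sketched as follows. This is illustrative NumPy only; the feature vectors are toy data, not real layer-13/15 features:

```python
import numpy as np

def rerank(query_vec, cand_vecs, k=3):
    """Rank candidate images by Euclidean distance to the query feature
    vector (formula (4)) and return the indices of the top-k images."""
    q = np.asarray(query_vec, dtype=float)
    dists = [float(np.linalg.norm(q - np.asarray(v, dtype=float))) for v in cand_vecs]
    order = np.argsort(dists)        # ascending distance = descending similarity
    return order[:k].tolist()

query = [0.0, 0.0]
candidates = [[3.0, 4.0], [0.0, 1.0], [1.0, 1.0]]   # distances 5, 1, sqrt(2)
print(rerank(query, candidates, k=2))                # [1, 2]
```

In the full pipeline `cand_vecs` would hold the flattened saliency feature vectors of the m images that survived the Hamming-distance filtering.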
Step 3.3: Retrieval result evaluation
The present invention evaluates the retrieval results using a ranking-based evaluation criterion. For a query image q and the top-k retrieval result images obtained, the precision (Precision) is calculated according to the following formula:

Precision@k = ( Σ_{i=1}^{k} Rel(i) ) / k    (5)

where Precision@k denotes, for a threshold k set according to actual requirements, the average precision from the first correct result to the k-th correct result, up to the point where the k-th correct result is retrieved; Rel(i) denotes the relevance between the query image q and the i-th ranked image, Rel(i) ∈ {0, 1}, where 1 indicates that the query image q and the i-th ranked image belong to the same category, i.e. the two are relevant, and 0 indicates irrelevant.
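Formula (5) reduces to the fraction of relevant images among the top-k results; as an illustration only (the relevance list is toy data, not a reported result):

```python
def precision_at_k(rel, k):
    """Precision@k per formula (5): the fraction of the top-k ranked
    results that are relevant, where rel[i] is 0 or 1."""
    return sum(rel[:k]) / k

rel = [1, 0, 1, 1, 0]            # toy relevance judgments for 5 ranked results
print(precision_at_k(rel, 4))    # 0.75
```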

Claims (1)

1. A fast remote sensing image retrieval method based on deep saliency, characterized by comprising the following steps:
Step 1: Construction of a target detection model based on deep saliency
An RGB image is input and a series of convolution operations is carried out through 15 convolutional layers, with the saliency detection task and the superpixel-level target semantic segmentation task sharing the convolutional layers; the first 13 convolutional layers are initialized from the convolutional neural network VGGNet, with 3 × 3 convolution kernels, and each convolutional layer is followed by a rectified linear unit ReLU as the activation function; max pooling is performed after the 2nd, 4th, 5th and 13th convolutional layers; the kernel sizes of the 14th and 15th convolutional layers are 7 × 7 and 1 × 1 respectively, and a Dropout layer is connected after each of the 14th and 15th convolutional layers;
Deconvolution layers are built by up-sampling; their parameters are initialized by bilinear interpolation, and the up-sampling function is updated iteratively during training; in the saliency detection task, the output image is normalized to [0,1] through a sigmoid threshold function to learn saliency features; in the semantic segmentation task, the feature map of the last convolutional layer is up-sampled with a deconvolution layer, and the up-sampled result is cropped so that the output image has the same size as the input image;
Step 2: Neural network pre-training and fine-tuning with an added hash layer
Step 2.1: Pre-training of the multi-task saliency detection model
The FCNN is pre-trained jointly on the saliency detection task and the segmentation task; χ denotes the set of N1 training images of width W and height Q, Xi is the i-th image therein, and Yijk denotes the pixel-level ground-truth segmentation label at position (j, k) of the i-th image, where i = 1…N1, j = 1…W, k = 1…Q; Z denotes the set of N2 training images, Zn is the n-th image therein, n = 1…N2, each having a corresponding ground-truth binary saliency mask Mn; θs denotes the shared convolutional layer parameters, θh the segmentation task parameters, and θf the saliency task parameters; formula (1) and formula (2) are respectively the cross-entropy cost function J1(χ; θs, θh) of the segmentation task and the squared Euclidean distance cost function J2(Z; θs, θf) of the saliency detection task, and the FCNN is trained by minimizing the two cost functions:

J1(χ; θs, θh) = −(1/N1) Σ_{i=1}^{N1} Σ_{j=1}^{W} Σ_{k=1}^{Q} Σ_{c=1}^{C} 1{Yijk = c} log h_cjk(Xi; θs, θh)    (1)

J2(Z; θs, θf) = (1/N2) Σ_{n=1}^{N2} || Mn − f(Zn; θs, θf) ||_F^2    (2)

In formula (1), 1{·} is the indicator function, h_cjk is the element (j, k) of the class-c confidence segmentation map, c = 1…C, h(Xi; θs, θh) is the semantic segmentation function, which returns C confidence segmentation maps of the target classes in total, and C is the number of image categories contained in the pre-training dataset; in formula (2), f(Zn; θs, θf) is the saliency map output function, and ||·||_F denotes the Frobenius norm;
Next, the above cost functions are minimized with the stochastic gradient descent SGD method, on the basis of regularizing all training samples; since the dataset used for pre-training does not carry segmentation and saliency annotations simultaneously, the segmentation task and the saliency detection task are trained alternately; the training process requires all original images to be normalized to the same size; the learning rate is 0.001 ± 0.01; the momentum parameter is usually in [0.9, 1.0], and the weight decay factor is usually 0.0005 ± 0.0002; the stochastic gradient descent learning process carries out more than 80000 iterations in total; the detailed pre-training process is as follows:
1) The shared convolutional layer parameters θs(0) are initialized from VGGNet;
2) The segmentation task parameters θh(0) and saliency task parameters θf(0) are randomly initialized from a normal distribution;
3) According to θs(0) and θh(0), the segmentation network is trained with SGD, updating the two parameters to θs(1) and θh(1);
4) According to θs(1) and θf(0), the saliency network is trained with SGD, updating the relevant parameters to θs(2) and θf(1);
5) According to θs(2) and θh(1), the segmentation network is trained with SGD, obtaining θs(3) and θh(2);
6) According to θs(3) and θf(1), the saliency network is trained with SGD, updating the relevant parameters to θs(4) and θf(2);
7) Steps 3)-6) above are repeated three times to obtain the final pre-training parameters θs, θh, θf;
Step 2.2: Adding a hash layer and fine-tuning the network for the target domain
A fully connected layer containing s neurons, i.e. the hash layer H, is inserted between the penultimate layer of the pre-trained network and the final task layer, mapping the high-dimensional features to a low-dimensional space and generating binary hash codes for storage; the weights of the hash layer H are initialized by constructing hash values with random projection; the neuron activation function adopts the sigmoid function so that the output values lie between 0 and 1; the number of neurons equals the code length of the target binary code;
The fine-tuning process adjusts the network weights through the back propagation algorithm; network fine-tuning adjusts the weights of the layers after the 10th convolutional layer; the data volume of the dataset used for fine-tuning is 10%-50% smaller than that of the pre-training dataset; compared with the pre-training settings, the number of iterations and the learning rate in fine-tuning are reduced to 1%-10% of their pre-training values, while the momentum parameter and the weight decay factor remain unchanged;
The detailed fine-tuning process is as follows:
1) The shared convolutional layer parameters θs(0), segmentation task parameters θh(0) and saliency task parameters θf(0) are obtained from the pre-training process;
2) According to θs(0) and θh(0), the segmentation network is trained with SGD, updating the two parameters to θs(1) and θh(1);
3) According to θs(1) and θf(0), the saliency network is trained with SGD, updating the relevant parameters to θs(2) and θf(1);
4) According to θs(2) and θh(1), the segmentation network is trained with SGD, obtaining θs(3) and θh(2);
5) According to θs(3) and θf(1), the saliency network is trained with SGD, updating the relevant parameters to θs(4) and θf(2);
6) Steps 2)-5) above are repeated three times to obtain the final parameters θs, θh, θf;
Step 3: Multi-level deep retrieval
Step 3.1: Coarse retrieval
Step 3.1.1: Generating binary hash codes
An image to be queried Iq is input to the fine-tuned neural network, and the output of the hash layer is extracted as the image signature, denoted Out(H); the binary code is obtained by binarizing the activation values against a threshold; for each binary bit r = 1…s, the binary code is output according to formula (3):

H_r = 1 if Out_r(H) ≥ 0.5, and H_r = 0 otherwise    (3)

where s is the number of neurons in the hash layer, with an initial setting range of [40, 100]; Γ = {I1, I2, …, In} denotes the dataset of n images used for retrieval; the binary codes of the corresponding images are denoted ΓH = {H1, H2, …, Hn}, where i = 1…n and Hi ∈ {0,1}^s indicates that the s binary code values generated by the s neurons are each 0 or 1;
Step 3.1.2: Hamming distance similarity measurement
The Hamming distance between two strings of equal length is the number of positions at which the corresponding characters differ; for an image to be queried Iq with binary code Hq, if the Hamming distance between Hq and Hi ∈ ΓH is less than a set threshold, a candidate pool P = {Ic1, Ic2, …, Icm} containing m candidate pictures (candidates) is defined; two images are considered similar when the Hamming distance is less than 5;
Step 3.2: Fine retrieval
Step 3.2.1: Saliency feature extraction
The two-dimensional remote sensing image feature maps generated by the 13th and 15th convolutional layers of the neural network for the image to be queried Iq are each mapped to a one-dimensional vector and stored; during subsequent retrieval, the retrieval results of the different feature vectors are compared separately to determine from which convolutional layer's feature map the remote sensing image saliency features are finally extracted;
Step 3.2.2: Euclidean distance similarity measurement
For a query image Iq and a candidate pool P, the top-k images are picked out of the candidate pool P using the extracted saliency feature vectors; Vq and V_ci^P denote the feature vectors of the query image q and of Ici respectively; the Euclidean distance s_i between Iq and the feature vector of the i-th image in the candidate pool P is defined as their similarity level, as shown in formula (4):

s_i = || Vq − V_ci^P ||    (4)

The smaller the Euclidean distance, the greater the similarity between the two images; the candidate pictures Ici are sorted in ascending order of Euclidean distance to the query image, and the top-k images are the retrieval result;
Step 3.3: Retrieval result evaluation
The retrieval results are evaluated using a ranking-based evaluation criterion; for a query image q and the top-k retrieval result images obtained, the precision Precision is calculated according to the following formula:

Precision@k = ( Σ_{i=1}^{k} Rel(i) ) / k    (5)

where Precision@k denotes, for a set threshold k, the average precision from the first correct result to the k-th correct result, up to the point where the k-th correct result is retrieved; Rel(i) denotes the relevance between the query image q and the i-th ranked image, Rel(i) ∈ {0, 1}, where 1 indicates that the query image q and the i-th ranked image belong to the same category, i.e. the two are relevant, and 0 indicates irrelevant.
CN201710087670.5A 2017-02-18 2017-02-18 Remote sensing image rapid retrieval method based on depth significance Active CN106909924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710087670.5A CN106909924B (en) 2017-02-18 2017-02-18 Remote sensing image rapid retrieval method based on depth significance


Publications (2)

Publication Number Publication Date
CN106909924A true CN106909924A (en) 2017-06-30
CN106909924B CN106909924B (en) 2020-08-28

Family

ID=59207582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710087670.5A Active CN106909924B (en) 2017-02-18 2017-02-18 Remote sensing image rapid retrieval method based on depth significance

Country Status (1)

Country Link
CN (1) CN106909924B (en)

Cited By (66)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107291945A (en) * 2017-07-12 2017-10-24 上海交通大学 The high-precision image of clothing search method and system of view-based access control model attention model
CN107392925A (en) * 2017-08-01 2017-11-24 西安电子科技大学 Remote sensing image terrain classification method based on super-pixel coding and convolutional neural networks
CN107463932A (en) * 2017-07-13 2017-12-12 央视国际网络无锡有限公司 A kind of method that picture feature is extracted using binary system bottleneck neutral net
CN107480261A (en) * 2017-08-16 2017-12-15 上海荷福人工智能科技(集团)有限公司 One kind is based on deep learning fine granularity facial image method for quickly retrieving
CN107729992A (en) * 2017-10-27 2018-02-23 深圳市未来媒体技术研究院 A kind of deep learning method based on backpropagation
CN108090117A (en) * 2017-11-06 2018-05-29 北京三快在线科技有限公司 A kind of image search method and device, electronic equipment
CN108257139A (en) * 2018-02-26 2018-07-06 中国科学院大学 RGB-D three-dimension object detection methods based on deep learning
CN108287926A (en) * 2018-03-02 2018-07-17 宿州学院 A kind of multi-source heterogeneous big data acquisition of Agro-ecology, processing and analysis framework
CN108427738A (en) * 2018-03-01 2018-08-21 中山大学 A kind of fast image retrieval method based on deep learning
CN108446312A (en) * 2018-02-06 2018-08-24 西安电子科技大学 Remote sensing image search method based on depth convolution semantic net
CN108647655A (en) * 2018-05-16 2018-10-12 北京工业大学 Low latitude aerial images power line foreign matter detecting method based on light-duty convolutional neural networks
CN109035315A (en) * 2018-08-28 2018-12-18 武汉大学 Merge the remote sensing image registration method and system of SIFT feature and CNN feature
CN109033505A (en) * 2018-06-06 2018-12-18 东北大学 A kind of ultrafast cold temprature control method based on deep learning
CN109063569A (en) * 2018-07-04 2018-12-21 北京航空航天大学 A kind of semantic class change detecting method based on remote sensing image
CN109101907A (en) * 2018-07-28 2018-12-28 华中科技大学 A kind of vehicle-mounted image, semantic segmenting system based on bilateral segmentation network
CN109191426A (en) * 2018-07-24 2019-01-11 江南大学 A kind of flat image conspicuousness detection method
CN109284741A (en) * 2018-10-30 2019-01-29 武汉大学 A kind of extensive Remote Sensing Image Retrieval method and system based on depth Hash network
CN109389051A (en) * 2018-09-20 2019-02-26 华南农业大学 A kind of building remote sensing images recognition methods based on convolutional neural networks
CN109389128A (en) * 2018-08-24 2019-02-26 中国石油天然气股份有限公司 Electric imaging logging image characteristic automatic extraction method and device
CN109410211A (en) * 2017-08-18 2019-03-01 北京猎户星空科技有限公司 The dividing method and device of target object in a kind of image
CN109522821A (en) * 2018-10-30 2019-03-26 武汉大学 A kind of extensive across source Remote Sensing Image Retrieval method based on cross-module state depth Hash network
CN109522435A (en) * 2018-11-15 2019-03-26 中国银联股份有限公司 A kind of image search method and device
CN109639964A (en) * 2018-11-26 2019-04-16 北京达佳互联信息技术有限公司 Image processing method, processing unit and computer readable storage medium
CN109657522A (en) * 2017-10-10 2019-04-19 北京京东尚科信息技术有限公司 Detect the method and apparatus that can travel region
CN109670057A (en) * 2019-01-03 2019-04-23 电子科技大学 A kind of gradual end-to-end depth characteristic quantization system and method
EP3477555A1 (en) * 2017-10-31 2019-05-01 General Electric Company Multi-task feature selection neural networks
CN109753576A (en) * 2018-12-25 2019-05-14 上海七印信息科技有限公司 A kind of method for retrieving similar images
CN109766938A (en) * 2018-12-28 2019-05-17 武汉大学 Remote sensing image multi-class targets detection method based on scene tag constraint depth network
CN109766467A (en) * 2018-12-28 2019-05-17 珠海大横琴科技发展有限公司 Remote sensing image retrieval method and system based on image segmentation and improvement VLAD
CN109886221A (en) * 2019-02-26 2019-06-14 浙江水利水电学院 Sand dredger recognition methods based on saliency detection
CN109902192A (en) * 2019-01-15 2019-06-18 华南师范大学 Remote sensing image retrieval method, system, equipment and the medium returned based on unsupervised depth
CN109919108A (en) * 2019-03-11 2019-06-21 西安电子科技大学 Remote sensing images fast target detection method based on depth Hash auxiliary network
CN109919059A (en) * 2019-02-26 2019-06-21 四川大学 Conspicuousness object detecting method based on depth network layerization and multitask training
CN110020658A (en) * 2019-03-28 2019-07-16 大连理工大学 A kind of well-marked target detection method based on multitask deep learning
WO2019136591A1 (en) * 2018-01-09 2019-07-18 深圳大学 Salient object detection method and system for weak supervision-based spatio-temporal cascade neural network
CN110263799A (en) * 2019-06-26 2019-09-20 山东浪潮人工智能研究院有限公司 A kind of image classification method and device based on the study of depth conspicuousness similar diagram
CN110334765A (en) * 2019-07-05 2019-10-15 西安电子科技大学 Remote Image Classification based on the multiple dimensioned deep learning of attention mechanism
CN110399847A (en) * 2019-07-30 2019-11-01 北京字节跳动网络技术有限公司 Extraction method of key frame, device and electronic equipment
CN110414301A (en) * 2018-04-28 2019-11-05 中山大学 It is a kind of based on double compartment crowd density estimation methods for taking the photograph head
CN110414513A (en) * 2019-07-31 2019-11-05 电子科技大学 Vision significance detection method based on semantically enhancement convolutional neural networks
CN110580503A (en) * 2019-08-22 2019-12-17 江苏和正特种装备有限公司 AI-based double-spectrum target automatic identification method
WO2019237646A1 (en) * 2018-06-14 2019-12-19 清华大学深圳研究生院 Image retrieval method based on deep learning and semantic segmentation
CN110633633A (en) * 2019-08-08 2019-12-31 北京工业大学 Remote sensing image road extraction method based on self-adaptive threshold
CN110765886A (en) * 2019-09-29 2020-02-07 深圳大学 Road target detection method and device based on convolutional neural network
CN110852295A (en) * 2019-10-15 2020-02-28 深圳龙岗智能视听研究院 Video behavior identification method based on multitask supervised learning
CN110853053A (en) * 2019-10-25 2020-02-28 天津大学 Salient object detection method taking multiple candidate objects as semantic knowledge
CN110866425A (en) * 2018-08-28 2020-03-06 天津理工大学 Pedestrian identification method based on light field camera and depth migration learning
CN110945535A (en) * 2017-07-26 2020-03-31 国际商业机器公司 System and method for constructing synaptic weights for artificial neural networks from signed simulated conductance pairs with varying significance
CN111160127A (en) * 2019-12-11 2020-05-15 中国资源卫星应用中心 Remote sensing image processing and detecting method based on deep convolutional neural network model
CN111260021A (en) * 2018-11-30 2020-06-09 百度(美国)有限责任公司 Predictive deep learning scaling

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140122400A1 (en) * 2012-10-25 2014-05-01 Brain Corporation Apparatus and methods for activity-based plasticity in a spiking neuron network
CN105243154A (en) * 2015-10-27 2016-01-13 武汉大学 Remote sensing image retrieval method and system based on salient point features and sparse autoencoding
CN105550709A (en) * 2015-12-14 2016-05-04 武汉大学 Remote sensing image power transmission line corridor forest region extraction method
US20160232430A1 (en) * 2014-05-29 2016-08-11 International Business Machines Corporation Scene understanding using a neurosynaptic system
CN106227851A (en) * 2016-07-29 2016-12-14 汤平 Image retrieval method based on end-to-end deep convolutional neural networks with coarse-to-fine hierarchical search
CN106250812A (en) * 2016-07-15 2016-12-21 汤平 Vehicle type recognition method based on Fast R-CNN deep neural network
CN106296692A (en) * 2016-08-11 2017-01-04 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial networks
CN106295139A (en) * 2016-07-29 2017-01-04 汤平 Tongue self-diagnosis health cloud service system based on deep convolutional neural networks
CN106354735A (en) * 2015-07-22 2017-01-25 杭州海康威视数字技术股份有限公司 Image target searching method and device
CN106408001A (en) * 2016-08-26 2017-02-15 西安电子科技大学 Rapid region-of-interest detection method based on deep kernelized hashing


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
XIA RONGKAI et al.: "Supervised Hashing for Image Retrieval via Image Representation Learning", Proceedings of the AAAI Conference on Artificial Intelligence *
YIN LI et al.: "The secrets of salient object segmentation", 2014 IEEE Conference on Computer Vision and Pattern Recognition *
LIU Ye et al.: "FP-CNNH: A fast image hashing algorithm based on deep convolutional neural networks", Computer Science *
KE Shengcai et al.: "Image retrieval method based on convolutional neural networks and supervised kernel hashing", Acta Electronica Sinica *
GONG Zhenting et al.: "Image retrieval method based on convolutional neural networks and hash coding", CAAI Transactions on Intelligent Systems *

Cited By (95)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107291945A (en) * 2017-07-12 2017-10-24 上海交通大学 High-precision clothing image retrieval method and system based on visual attention model
CN107463932B (en) * 2017-07-13 2020-07-10 央视国际网络无锡有限公司 Method for extracting picture features by using binary bottleneck neural network
CN107463932A (en) * 2017-07-13 2017-12-12 央视国际网络无锡有限公司 Method for extracting picture features using a binary bottleneck neural network
CN110945535A (en) * 2017-07-26 2020-03-31 国际商业机器公司 System and method for constructing synaptic weights for artificial neural networks from signed simulated conductance pairs with varying significance
CN110945535B (en) * 2017-07-26 2024-01-26 国际商业机器公司 Method for realizing artificial neural network ANN
CN107392925B (en) * 2017-08-01 2020-07-07 西安电子科技大学 Remote sensing image ground object classification method based on super-pixel coding and convolutional neural network
CN107392925A (en) * 2017-08-01 2017-11-24 西安电子科技大学 Remote sensing image ground object classification method based on super-pixel coding and convolutional neural networks
CN107480261A (en) * 2017-08-16 2017-12-15 上海荷福人工智能科技(集团)有限公司 Fine-grained face image fast retrieval method based on deep learning
CN107480261B (en) * 2017-08-16 2020-06-16 上海荷福人工智能科技(集团)有限公司 Fine-grained face image fast retrieval method based on deep learning
CN109410211A (en) * 2017-08-18 2019-03-01 北京猎户星空科技有限公司 The dividing method and device of target object in a kind of image
CN109657522A (en) * 2017-10-10 2019-04-19 北京京东尚科信息技术有限公司 Detect the method and apparatus that can travel region
CN107729992A (en) * 2017-10-27 2018-02-23 深圳市未来媒体技术研究院 Deep learning method based on back propagation
CN107729992B (en) * 2017-10-27 2020-12-29 深圳市未来媒体技术研究院 Deep learning method based on back propagation
EP3477555A1 (en) * 2017-10-31 2019-05-01 General Electric Company Multi-task feature selection neural networks
CN108090117A (en) * 2017-11-06 2018-05-29 北京三快在线科技有限公司 Image retrieval method and device, and electronic equipment
US11281714B2 (en) 2017-11-06 2022-03-22 Beijing Sankuai Online Technology Co., Ltd Image retrieval
WO2019136591A1 (en) * 2018-01-09 2019-07-18 深圳大学 Salient object detection method and system for weak supervision-based spatio-temporal cascade neural network
CN108446312B (en) * 2018-02-06 2020-04-21 西安电子科技大学 Optical remote sensing image retrieval method based on deep convolution semantic net
CN108446312A (en) * 2018-02-06 2018-08-24 西安电子科技大学 Optical remote sensing image retrieval method based on deep convolution semantic net
CN108257139B (en) * 2018-02-26 2020-09-08 中国科学院大学 RGB-D three-dimensional object detection method based on deep learning
CN108257139A (en) * 2018-02-26 2018-07-06 中国科学院大学 RGB-D three-dimensional object detection method based on deep learning
CN108427738A (en) * 2018-03-01 2018-08-21 中山大学 Fast image retrieval method based on deep learning
CN108287926A (en) * 2018-03-02 2018-07-17 宿州学院 Multi-source heterogeneous big data acquisition, processing and analysis framework for agro-ecology
US11618438B2 (en) * 2018-03-26 2023-04-04 International Business Machines Corporation Three-dimensional object localization for obstacle avoidance using one-shot convolutional neural network
CN110414301A (en) * 2018-04-28 2019-11-05 中山大学 Compartment crowd density estimation method based on dual cameras
CN108647655B (en) * 2018-05-16 2022-07-12 北京工业大学 Low-altitude aerial image power line foreign matter detection method based on light convolutional neural network
CN108647655A (en) * 2018-05-16 2018-10-12 北京工业大学 Low-altitude aerial image power line foreign matter detection method based on light convolutional neural network
CN109033505A (en) * 2018-06-06 2018-12-18 东北大学 Ultra-fast cooling temperature control method based on deep learning
WO2019237646A1 (en) * 2018-06-14 2019-12-19 清华大学深圳研究生院 Image retrieval method based on deep learning and semantic segmentation
CN109063569A (en) * 2018-07-04 2018-12-21 北京航空航天大学 A kind of semantic class change detecting method based on remote sensing image
CN109063569B (en) * 2018-07-04 2021-08-24 北京航空航天大学 Semantic level change detection method based on remote sensing image
CN109191426A (en) * 2018-07-24 2019-01-11 江南大学 Planar image saliency detection method
CN109101907B (en) * 2018-07-28 2020-10-30 华中科技大学 Vehicle-mounted image semantic segmentation system based on bilateral segmentation network
CN109101907A (en) * 2018-07-28 2018-12-28 华中科技大学 Vehicle-mounted image semantic segmentation system based on bilateral segmentation network
US11010629B2 (en) 2018-08-24 2021-05-18 Petrochina Company Limited Method for automatically extracting image features of electrical imaging well logging, computer equipment and non-transitory computer readable medium
CN109389128A (en) * 2018-08-24 2019-02-26 中国石油天然气股份有限公司 Electric imaging logging image characteristic automatic extraction method and device
CN110866425A (en) * 2018-08-28 2020-03-06 天津理工大学 Pedestrian identification method based on light field camera and depth migration learning
CN109035315A (en) * 2018-08-28 2018-12-18 武汉大学 Remote sensing image registration method and system fusing SIFT and CNN features
CN109389051A (en) * 2018-09-20 2019-02-26 华南农业大学 Building remote sensing image recognition method based on convolutional neural networks
CN109522821A (en) * 2018-10-30 2019-03-26 武汉大学 Large-scale cross-source remote sensing image retrieval method based on cross-modal deep hashing network
CN109284741A (en) * 2018-10-30 2019-01-29 武汉大学 Large-scale remote sensing image retrieval method and system based on deep hashing network
CN109522435B (en) * 2018-11-15 2022-05-20 中国银联股份有限公司 Image retrieval method and device
CN109522435A (en) * 2018-11-15 2019-03-26 中国银联股份有限公司 Image retrieval method and device
WO2020098296A1 (en) * 2018-11-15 2020-05-22 中国银联股份有限公司 Image retrieval method and device
CN109639964A (en) * 2018-11-26 2019-04-16 北京达佳互联信息技术有限公司 Image processing method, processing unit and computer readable storage medium
CN111260021A (en) * 2018-11-30 2020-06-09 百度(美国)有限责任公司 Predictive deep learning scaling
CN111260021B (en) * 2018-11-30 2024-04-05 百度(美国)有限责任公司 Prediction deep learning scaling
CN109753576A (en) * 2018-12-25 2019-05-14 上海七印信息科技有限公司 Similar image retrieval method
CN111368109B (en) * 2018-12-26 2023-04-28 北京眼神智能科技有限公司 Remote sensing image retrieval method, remote sensing image retrieval device, computer readable storage medium and computer readable storage device
CN111368109A (en) * 2018-12-26 2020-07-03 北京眼神智能科技有限公司 Remote sensing image retrieval method and device, computer readable storage medium and equipment
CN109766938A (en) * 2018-12-28 2019-05-17 武汉大学 Multi-class target detection method for remote sensing images based on scene-label-constrained deep network
CN109766467A (en) * 2018-12-28 2019-05-17 珠海大横琴科技发展有限公司 Remote sensing image retrieval method and system based on image segmentation and improved VLAD
CN109670057A (en) * 2019-01-03 2019-04-23 电子科技大学 Progressive end-to-end deep feature quantization system and method
CN109670057B (en) * 2019-01-03 2021-06-29 电子科技大学 Progressive end-to-end depth feature quantization system and method
CN109902192A (en) * 2019-01-15 2019-06-18 华南师范大学 Remote sensing image retrieval method, system, device and medium based on unsupervised deep regression
CN109886221A (en) * 2019-02-26 2019-06-14 浙江水利水电学院 Sand dredger recognition method based on image saliency detection
CN109919059B (en) * 2019-02-26 2021-01-26 四川大学 Salient object detection method based on deep network layering and multi-task training
CN109919059A (en) * 2019-02-26 2019-06-21 四川大学 Salient object detection method based on deep network layering and multi-task training
CN109919108B (en) * 2019-03-11 2022-12-06 西安电子科技大学 Remote sensing image rapid target detection method based on deep hash auxiliary network
CN109919108A (en) * 2019-03-11 2019-06-21 西安电子科技大学 Remote sensing image rapid target detection method based on deep hash auxiliary network
CN110020658A (en) * 2019-03-28 2019-07-16 大连理工大学 Salient object detection method based on multi-task deep learning
CN110263799A (en) * 2019-06-26 2019-09-20 山东浪潮人工智能研究院有限公司 Image classification method and device based on deep saliency similarity graph learning
CN110334765B (en) * 2019-07-05 2023-03-24 西安电子科技大学 Remote sensing image classification method based on attention mechanism multi-scale deep learning
CN110334765A (en) * 2019-07-05 2019-10-15 西安电子科技大学 Remote sensing image classification method based on attention mechanism multi-scale deep learning
CN110399847B (en) * 2019-07-30 2021-11-09 北京字节跳动网络技术有限公司 Key frame extraction method and device and electronic equipment
CN110399847A (en) * 2019-07-30 2019-11-01 北京字节跳动网络技术有限公司 Key frame extraction method and device, and electronic equipment
CN110414513A (en) * 2019-07-31 2019-11-05 电子科技大学 Visual saliency detection method based on semantically enhanced convolutional neural networks
CN110633633A (en) * 2019-08-08 2019-12-31 北京工业大学 Remote sensing image road extraction method based on adaptive threshold
CN110580503A (en) * 2019-08-22 2019-12-17 江苏和正特种装备有限公司 AI-based dual-spectrum automatic target recognition method
CN110765886A (en) * 2019-09-29 2020-02-07 深圳大学 Road target detection method and device based on convolutional neural network
CN110765886B (en) * 2019-09-29 2022-05-03 深圳大学 Road target detection method and device based on convolutional neural network
CN110852295A (en) * 2019-10-15 2020-02-28 深圳龙岗智能视听研究院 Video behavior identification method based on multitask supervised learning
CN110852295B (en) * 2019-10-15 2023-08-25 深圳龙岗智能视听研究院 Video behavior recognition method based on multitasking supervised learning
CN112712090A (en) * 2019-10-24 2021-04-27 北京易真学思教育科技有限公司 Image processing method, device, equipment and storage medium
CN110853053A (en) * 2019-10-25 2020-02-28 天津大学 Salient object detection method taking multiple candidate objects as semantic knowledge
CN111160127A (en) * 2019-12-11 2020-05-15 中国资源卫星应用中心 Remote sensing image processing and detecting method based on deep convolutional neural network model
CN111695572A (en) * 2019-12-27 2020-09-22 珠海大横琴科技发展有限公司 Ship retrieval method and device based on convolutional layer feature extraction
CN111640087A (en) * 2020-04-14 2020-09-08 中国测绘科学研究院 Image change detection method based on SAR (synthetic aperture radar) deep full convolution neural network
CN112052736A (en) * 2020-08-06 2020-12-08 浙江理工大学 Cloud computing platform-based field tea tender shoot detection method
CN112102245A (en) * 2020-08-17 2020-12-18 清华大学 Grape fetus slice image processing method and device based on deep learning
CN112541912B (en) * 2020-12-23 2024-03-12 中国矿业大学 Rapid detection method and device for salient targets in mine sudden disaster scene
CN112541912A (en) * 2020-12-23 2021-03-23 中国矿业大学 Method and device for rapidly detecting saliency target in mine sudden disaster scene
CN112579816A (en) * 2020-12-29 2021-03-30 二十一世纪空间技术应用股份有限公司 Remote sensing image retrieval method and device, electronic equipment and storage medium
CN112667832B (en) * 2020-12-31 2022-05-13 哈尔滨工业大学 Vision-based mutual positioning method in unknown indoor environment
CN112667832A (en) * 2020-12-31 2021-04-16 哈尔滨工业大学 Vision-based mutual positioning method in unknown indoor environment
CN112801192A (en) * 2021-01-26 2021-05-14 北京工业大学 Extended LargeVis image feature dimension reduction method based on deep neural network
CN112801192B (en) * 2021-01-26 2024-03-19 北京工业大学 Extended LargeVis image feature dimension reduction method based on deep neural network
CN112926667B (en) * 2021-03-05 2022-08-30 中南民族大学 Method and device for detecting saliency target of depth fusion edge and high-level feature
CN112926667A (en) * 2021-03-05 2021-06-08 中南民族大学 Method and device for detecting saliency target of depth fusion edge and high-level feature
CN113205481A (en) * 2021-03-19 2021-08-03 浙江科技学院 Salient object detection method based on stepped progressive neural network
CN113326926B (en) * 2021-06-30 2023-05-09 上海理工大学 Fully-connected hash neural network for remote sensing image retrieval
CN113326926A (en) * 2021-06-30 2021-08-31 上海理工大学 Fully-connected Hash neural network for remote sensing image retrieval
CN115292530A (en) * 2022-09-30 2022-11-04 北京数慧时空信息技术有限公司 Remote sensing image overall management system
CN116894100A (en) * 2023-07-24 2023-10-17 北京和德宇航技术有限公司 Remote sensing image display control method, device and storage medium
CN116894100B (en) * 2023-07-24 2024-04-09 北京和德宇航技术有限公司 Remote sensing image display control method, device and storage medium

Also Published As

Publication number Publication date
CN106909924B (en) 2020-08-28

Similar Documents

Publication Publication Date Title
CN106909924A (en) Fast remote sensing image retrieval method based on deep saliency
Zhang et al. Scene classification via a gradient boosting random convolutional network framework
CN106227851A (en) Image retrieval method based on end-to-end deep convolutional neural networks with coarse-to-fine hierarchical search
CN110309856A (en) Image classification method, neural network training method, and device
CN110532859A (en) Remote sensing target detection method based on deep evolutionary pruning convolutional network
Mahmon et al. A review on classification of satellite image using Artificial Neural Network (ANN)
CN112446388A (en) Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model
CN110598029A (en) Fine-grained image classification method based on attention transfer mechanism
CN106991382A (en) Remote sensing scene classification method
CN110135267A (en) Subtle target detection method for large-scene SAR images
CN107229904A (en) Object detection and recognition method based on deep learning
CN104484681B (en) Hyperspectral remote sensing image classification method based on spatial information and ensemble learning
CN109559300A (en) Image processing method, electronic equipment and computer readable storage medium
CN111507271A (en) Airborne photoelectric video target intelligent detection and identification method
CN112070078B (en) Deep learning-based land utilization classification method and system
CN108960330A (en) Remote sensing image semantic generation method based on fast region convolutional neural networks
Su et al. Machine learning-assisted region merging for remote sensing image segmentation
CN112949647B (en) Three-dimensional scene description method and device, electronic equipment and storage medium
CN110059734A (en) Training method for a target recognition classification model, object recognition method, device, robot, and medium
CN108537121A (en) Adaptive remote sensing scene classification method fusing environmental parameters and image information
Ma et al. A supervised progressive growing generative adversarial network for remote sensing image scene classification
Guo et al. Using multi-scale and hierarchical deep convolutional features for 3D semantic classification of TLS point clouds
Du et al. Integration of case-based reasoning and object-based image classification to classify SPOT images: a case study of aquaculture land use mapping in coastal areas of Guangdong province, China
CN114219963A (en) Multi-scale capsule network remote sensing ground feature classification method and system guided by geoscience knowledge
Alburshaid et al. Palm trees detection using the integration between gis and deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant