CN106779070B - Method for effectively improving the robustness of convolutional neural networks - Google Patents

Method for effectively improving the robustness of convolutional neural networks

Info

Publication number
CN106779070B
CN106779070B (application CN201611131712.2A)
Authority
CN
China
Prior art keywords
block
feature map
pixel
convolutional neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611131712.2A
Other languages
Chinese (zh)
Other versions
CN106779070A (en)
Inventor
田新梅 (Tian Xinmei)
沈旭 (Shen Xu)
孙韶言 (Sun Shaoyan)
陶大程 (Tao Dacheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201611131712.2A priority Critical patent/CN106779070B/en
Publication of CN106779070A publication Critical patent/CN106779070A/en
Application granted granted Critical
Publication of CN106779070B publication Critical patent/CN106779070B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Abstract

The invention discloses a method for effectively improving the robustness of convolutional neural networks, comprising: during training, first applying a two-dimensional transform to the input feature map and then performing forward propagation, wherein during forward propagation a hierarchical block reordering operation based on block energy is applied to the transformed feature map; then performing backward propagation, wherein during backward propagation the error of each pixel after the block sorting operation is propagated back to the corresponding pre-sorting pixel. During testing, in the same way as in training, the input feature map is two-dimensionally transformed and then subjected to the hierarchical, block-energy-based reordering operation. The method effectively improves the robustness of convolutional neural networks without introducing new parameters or applying additional processing to the input image.

Description

Method for effectively improving the robustness of convolutional neural networks
Technical field
The present invention relates to the technical fields of image classification and image retrieval, and in particular to a method for effectively improving the robustness of convolutional neural networks.
Background technique
With today's rapid development of the Internet, and the ubiquity of images and video in particular, image recognition and retrieval are required all the time. In recent years, deep learning technology has made breakthrough progress in fields related to image recognition, greatly surpassing the performance of traditional algorithms and substantially improving recognition accuracy. The model mainly used for image recognition in deep learning is the convolutional neural network, which chiefly contains two operations, convolution and pooling. By stacking these two operations layer by layer, a deep neural network is built that realizes layer-by-layer semantic extraction from local to global and from concrete to abstract; the highly abstract semantic features finally obtained are very useful for image recognition tasks such as image classification and retrieval.
The existing convolutional neural network structure is not particularly robust to transforms of the image. For example, if an image undergoes a two-dimensional transform such as rotation or translation and its features are then extracted by a convolutional neural network, we find that the high-level features it extracts can differ greatly, which directly causes recognition accuracy and robustness to decline sharply.
To improve the robustness of convolutional neural networks to image transforms, especially relatively large-scale, global transforms, there are mainly three existing algorithms.
The first is, during training, to artificially apply a variety of different transforms to the images to generate more training samples, and then feed the artificially transformed samples together with the original samples into the convolutional neural network for training. In this way, the diversity of the samples is increased, so that the convolutional neural network learns the transforms of the images more fully.
The second method is to apply translations at multiple scales or rotations at multiple angles to the output of each convolutional layer of the convolutional neural network (which we call feature maps), then integrate the results produced by these transforms before passing them on to the next layer.
The third method is, before an image is input into the convolutional neural network, to first use another special neural network to learn reasonable transforms of the image, and to apply the learned transforms to the image first so as to bring it onto a scale of variation that is usually easier to distinguish; this can also yield an improvement in effect.
In the process of learning the model, in order to improve robustness, the above three algorithms either introduce a new feature-extraction module, or add new learnable parameters, or need to apply additional processing to the input image. This makes the complexity especially high in large-scale image processing, and when a trained model is applied to a new problem, its generalization ability is also affected.
Summary of the invention
The object of the present invention is to provide a method for effectively improving the robustness of convolutional neural networks without introducing new parameters or applying additional processing to the input image.
The object of the present invention is achieved through the following technical solution:
A method for effectively improving the robustness of convolutional neural networks, comprising:
During training, a two-dimensional transform is first applied to the input feature map and forward propagation is then performed; during forward propagation, a hierarchical block reordering operation based on block energy is applied to the transformed feature map. Backward propagation is then performed; during backward propagation, the error of each pixel after the block sorting operation is propagated back to the corresponding pre-sorting pixel.
During testing, in the same way as in training, the input feature map is two-dimensionally transformed and then subjected to the hierarchical, block-energy-based reordering operation.
The hierarchical block reordering operation based on block energy comprises:
The hierarchical reordering is carried out independently for each feature map of the same convolutional layer. Within one convolutional layer, for the feature map output by the previous level, the next level subdivides each block of the feature map into n × n equally sized sub-blocks, and the reordering at the next level is performed independently only among the sub-blocks within each block, where n is a preset hyperparameter.
For every level, the position offset Δ^(l)_{i,j} of each pixel (i, j) can be computed from the difference between its positions before and after reordering. Superimposing the offsets of all levels yields, for every pixel Z_{i,j} of the reordered feature map, the corresponding pixel X_{i,j} in the feature map before reordering:

Z_{i,j} = X_{(i,j) + Σ_l Δ^(l)_{i,j}}
The formula for propagating the error of each pixel after the block sorting operation back to the corresponding pre-sorting pixel is:

δ^X_{(i,j) + Σ_k Δ^(k)_{i,j}} = δ^Z_{i,j}

wherein Δ^(k)_{i,j} is the offset of sorted pixel (i, j) at level k, δ^X_{i,j} denotes the error of pixel (i, j) in the feature map before reordering, and δ^Z_{i,j} denotes the error of pixel (i, j) in the feature map after reordering.
As can be seen from the above technical solution provided by the present invention, robustness to two-dimensional transforms is encoded into the new structure, so that the convolutional layers only need to focus on feature extraction for the corresponding image blocks, without attending to the specific positions of the corresponding features in the image, thereby effectively improving the robustness of the convolutional neural network. Meanwhile, the method introduces no new parameters and applies no additional processing to the input image, so its complexity is relatively low.
Detailed description of the invention
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of a convolutional neural network applied to image classification according to an embodiment of the present invention;
Fig. 2 is a schematic diagram, according to an embodiment of the present invention, of how a change in the position of image content causes a corresponding change in the position of the response values in the feature map;
Fig. 3 is a schematic diagram, according to an embodiment of the present invention, comparing the conventional neural network operation with the neural network operation to which the block reordering of the present invention is added;
Fig. 4 shows experimental results on the MNIST dataset according to an embodiment of the present invention;
Fig. 5 shows experimental results on the ILSVRC-2012 dataset according to an embodiment of the present invention;
Fig. 6 shows experimental results on the UK-Bench dataset according to an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort belong to the protection scope of the present invention.
An embodiment of the present invention provides a method for effectively improving the robustness of convolutional neural networks. The method adds a block reordering operation on the feature maps during the learning process of the convolutional neural network, so that no matter how the position of the target in the image is transformed, the reordered feature maps retain strong consistency.
A convolutional neural network (CNN) is a multi-layer deep neural network structure in which each convolutional layer applies feature-extraction operators to the feature maps produced by the previous layer, thereby learning feature representations at various levels. As shown in Fig. 1, C_i denotes the output (i.e., the feature map) of the i-th convolutional layer, and FC_i denotes the output of the i-th fully connected layer; the last layer FC8 has 1000 dimensions and can be used for a 1000-class classification task on the input samples.
In general, each convolutional layer linearly combines each local patch of the input feature map according to the convolution weights and then applies a nonlinear mapping to produce one output feature point. The output of the last layer (FC8 in Fig. 1) is fed into a classifier or a regressor to compute the corresponding objective function, and the network parameters are then adjusted by gradient descent.
The core operation of a convolutional layer connects each atom of the output to a local patch of adjacent atoms in the previous layer, which we call the local receptive field mechanism. The connection weights are called convolution kernels. The (2D) output obtained by convolving the input with one kernel is called a feature map. Within the feature map of the same input, different local patches use identical weights in the convolution operation, which is called weight sharing. The convolution operation can be expressed as follows:

x^l_j = f(x^(l-1) * k^l_j + b^l_j)

wherein * denotes the convolution operation, f is the nonlinear mapping, x^(l-1) is the input feature map, k^l_j denotes the j-th convolution kernel, and b^l_j denotes the bias variable of the j-th output feature map; the latter two need to be learned by gradient descent.
Pooling after convolution is a relatively simple mechanism for reducing the resolution of feature maps: each local patch of the input yields the value of one atom in the output feature map by taking its maximum or average value.
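As an illustration of the convolution and pooling mechanisms described above, the following is a minimal numpy sketch (an illustrative assumption, not the patented implementation; the function names `conv2d_valid` and `max_pool` are invented for this example). It computes a single-channel "valid" convolution with f taken as ReLU, and a non-overlapping max pooling:

```python
import numpy as np

def conv2d_valid(x, k, b=0.0):
    """Single-channel 'valid' convolution followed by a nonlinearity,
    mirroring x^l_j = f(x^(l-1) * k^l_j + b^l_j) with f = ReLU."""
    kh, kw = k.shape
    h, w = x.shape
    kf = k[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kf) + b
    return np.maximum(out, 0.0)

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling: each local patch is reduced
    to its maximum value."""
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))
```

For example, pooling a 4 × 4 map with s = 2 keeps one maximum per 2 × 2 patch, halving each spatial dimension, exactly the resolution reduction described above.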
The embodiments of the present invention are mainly directed at two-dimensional position changes of image features, i.e., translation and rotation transforms. Specifically, let the coordinates of a pixel before and after the transform be (x, y) and (x', y').
The translation transform is then computed as:

x' = x + d_x, y' = y + d_y

The rotation is computed as:

x' = x·cos θ − y·sin θ, y' = x·sin θ + y·cos θ

wherein d_x and d_y are respectively the numbers of pixels offset in the x and y directions, and θ is the rotation angle.
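The two coordinate transforms above can be sketched directly (a minimal illustration; `translate` and `rotate` are hypothetical helper names, not part of the patent):

```python
import numpy as np

def translate(x, y, dx, dy):
    # x' = x + d_x, y' = y + d_y
    return x + dx, y + dy

def rotate(x, y, theta):
    # x' = x*cos(theta) - y*sin(theta)
    # y' = x*sin(theta) + y*cos(theta)
    c, s = np.cos(theta), np.sin(theta)
    return x * c - y * s, x * s + y * c
```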
In the embodiments of the present invention, thanks to the weight-sharing mechanism, even if the same feature appears at different locations in the input image, responses of fairly consistent magnitude are obtained after convolution with the convolution kernels; only the positions of the corresponding responses in the output feature map change along with the change of the object's position in the input image, as shown by the comparison of Fig. 2 (a) and (b).
Since each convolution kernel responds strongly only to a specific pattern in the input patch, the output feature maps obtained by convolving the input image are often sparse, and the responses of different region blocks are heterogeneous. Furthermore, the summed response of a feature-map block (the energy of the block, which we generally express with the L1 norm or the L2 norm) corresponds to a different specific part of the object in the input image. If we sort the feature-map blocks by energy, then no matter how the positions of the parts of the object change in the input image, the block-ordering result of the output map will remain strongly consistent, so the resulting output feature map is also more robust to position changes in the input.
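The notion of block energy and energy-based ordering described above might be sketched as follows (an illustrative assumption; the L1 norm is used by default, and `block_energy` / `energy_order` are invented helper names):

```python
import numpy as np

def block_energy(block, norm="l1"):
    """Energy of a feature-map block: its L1 norm (sum of absolute
    responses) or its L2 norm."""
    if norm == "l1":
        return np.abs(block).sum()
    return np.sqrt((block ** 2).sum())

def energy_order(fmap, n):
    """Split fmap into n x n non-overlapping blocks (row-major) and
    return the block indices sorted by decreasing energy."""
    bh, bw = fmap.shape[0] // n, fmap.shape[1] // n
    blocks = [fmap[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
              for r in range(n) for c in range(n)]
    return sorted(range(n * n), key=lambda i: -block_energy(blocks[i]))
```

Because the ordering depends only on the relative energies of the blocks, moving the object inside the image permutes the blocks but leaves the sorted sequence largely unchanged, which is the consistency the paragraph above relies on.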
Those skilled in the art will understand that the training process of a neural network mainly comprises forward propagation (forward transmission of the input signal) and backward propagation (backward propagation of the error); a testing process is carried out afterwards.
In the embodiments of the present invention, a block reordering operation is added to the conventional forward propagation; correspondingly, during backward propagation, the error of each sorted pixel is propagated back to the corresponding pre-sorting pixel. The testing process also uses a block reordering operation similar to that of forward propagation. The remaining operations are similar to conventional techniques and are not repeated here.
The forward and backward propagation of the training process and the testing process are described in detail below.
1. Training process.
1) Forward propagation.
First, a two-dimensional transform is applied to the input feature map, and forward propagation is carried out; during forward propagation, a hierarchical block reordering operation based on block energy is applied to the transformed feature map.
The hierarchical reordering is carried out independently for each feature map of the same convolutional layer. Within one convolutional layer, for the feature map output by the previous level, the next level subdivides each block of the feature map into n × n equally sized sub-blocks, and the reordering at the next level is performed independently only among the sub-blocks within each block, where n is a preset hyperparameter (it can be set to 2 or 3).
In the above scheme of the embodiment of the present invention, for level l, the feature map is evenly divided into (n × n)^l non-overlapping blocks; at level l of the convolutional layer, the n × n level-l sub-blocks within each level-(l−1) block are reordered according to their energy. Illustratively, suppose n = 2 and l = 2. The first level then has 2 × 2 = 4 blocks, which are reordered in turn according to their energy, arranged left-to-right and top-to-bottom. The second level further divides each block into 2 × 2 sub-blocks, so that at the second level the feature map is effectively divided into (2 × 2)^2 = 16 sub-blocks; that is, each of the 4 first-level blocks is divided into 4 sub-blocks. At this point, however, the 4 blocks themselves are not reordered; only the 4 sub-blocks within each of the 4 blocks are reordered according to energy, so the second level performs 4 reordering operations. In total, when n = 2 and l = 2, 5 reordering operations are carried out over the 2 levels.
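The counting in the worked example above generalizes directly: level 1 performs one reordering over the whole map, and each further level l performs one independent reordering inside each of the (n × n)^(l−1) blocks produced by the previous level. A one-line sketch (a hypothetical helper under that assumption):

```python
def num_reorderings(n, levels):
    """Total reordering operations for a hierarchy of the given depth:
    level l contributes (n*n)**(l-1) independent reorderings."""
    return sum((n * n) ** (l - 1) for l in range(1, levels + 1))
```

With n = 2 and levels = 2 this gives 1 + 4 = 5, matching the example.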
In conclusion reorder be it is to each characteristic pattern bottom-up step by step independently carry out, and next stage sub-block Operation of reordering only independently carries out in upper level sub-block;For l grades of convolutional layers, characteristic pattern will be evenly divided into (n ×n)lA block not being overlapped;To n × n l grades of sub-block in l-1 grades of each sub-blocks according to energy in l grades of convolutional layers It reorders.
For every level, the position offset Δ^(l)_{i,j} of each pixel (i, j) can be computed from the difference between its positions before and after reordering. Superimposing the offsets of all levels yields, for every pixel Z_{i,j} of the reordered feature map, the corresponding pixel X_{i,j} in the feature map before reordering:

Z_{i,j} = X_{(i,j) + Σ_l Δ^(l)_{i,j}}
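One level of the reordering, together with the per-pixel offsets Δ it records, might be sketched as follows (an illustrative numpy sketch under stated assumptions — L1 energy, decreasing order, row-major placement; `reorder_level` is an invented name, not the patented implementation). Applying it recursively inside each block and summing the offset arrays yields the composed offset Σ_l Δ^(l) of the formula above:

```python
import numpy as np

def reorder_level(fmap, n):
    """One level of block reordering: split fmap into n x n blocks,
    lay them out in decreasing L1-energy order (left-to-right,
    top-to-bottom), and record for every pixel an offset Delta such
    that Z[i, j] = X[(i, j) + Delta[i, j]]."""
    H, W = fmap.shape
    bh, bw = H // n, W // n
    # (row, col, energy) of every block, in row-major order
    blocks = [(r, c, np.abs(fmap[r*bh:(r+1)*bh, c*bw:(c+1)*bw]).sum())
              for r in range(n) for c in range(n)]
    order = sorted(range(n * n), key=lambda i: -blocks[i][2])
    out = np.empty_like(fmap)
    delta = np.zeros((H, W, 2), dtype=int)  # per-pixel (di, dj)
    for dst, src in enumerate(order):
        dr, dc = divmod(dst, n)      # destination block position
        sr, sc, _ = blocks[src]      # source block position
        out[dr*bh:(dr+1)*bh, dc*bw:(dc+1)*bw] = \
            fmap[sr*bh:(sr+1)*bh, sc*bw:(sc+1)*bw]
        delta[dr*bh:(dr+1)*bh, dc*bw:(dc+1)*bw] = ((sr - dr) * bh,
                                                   (sc - dc) * bw)
    return out, delta
```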
Fig. 3 is a schematic comparison of the conventional neural network operation and the neural network operation with the block reordering added by the present invention. As can be seen from Fig. 3 (a), a conventional neural network performs the pooling operation after the conventional convolution operation; in Fig. 3 (b), the hierarchical block reordering operation is added after the conventional convolution operation.
Those skilled in the art will understand that when the above reordering operation is applied to the convolutional neural network shown in Fig. 1, the hierarchical block reordering in each of the convolutional layers C1–C5 is performed in the manner described above, but the number of levels (l) in each convolutional layer can be adjusted as appropriate.
2) Backward propagation.
During backward propagation, the error of each pixel after the block sorting operation is propagated back to the corresponding pre-sorting pixel, expressed as:

δ^X_{(i,j) + Σ_k Δ^(k)_{i,j}} = δ^Z_{i,j}

wherein (i, j) is the coordinate of a pixel in the feature map, Δ^(k)_{i,j} is the offset of sorted pixel (i, j) at level k, δ^X_{i,j} denotes the error at position (i, j) in the feature map before reordering, and δ^Z_{i,j} denotes the error at position (i, j) in the feature map after reordering.
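Because the reordering is a pure permutation with no learnable parameters, the backward pass simply routes each error value through the inverse permutation. A sketch (an illustrative assumption; `backprop_reorder` is an invented name, and Δ is taken to be the per-pixel offset array recorded during the forward reordering, shape H × W × 2):

```python
import numpy as np

def backprop_reorder(grad_z, delta):
    """Route the error of each sorted pixel back to its pre-sorting
    position: delta_X[(i, j) + Delta[i, j]] = delta_Z[i, j]."""
    grad_x = np.zeros_like(grad_z)
    H, W = grad_z.shape
    for i in range(H):
        for j in range(W):
            di, dj = delta[i, j]
            grad_x[i + di, j + dj] = grad_z[i, j]
    return grad_x
```

Since every pre-sorting position receives exactly one error value, no gradients are lost or accumulated, in contrast with pooling layers.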
2. Testing process.
During testing, the same reordering scheme as in training is used: the input feature map is two-dimensionally transformed and then subjected to the hierarchical, block-energy-based reordering operation; no backward-propagation computation is needed.
The above scheme of the embodiment of the present invention encodes robustness to two-dimensional transforms into the new structure, so that the convolutional layers only need to focus on feature extraction for the corresponding image blocks, without attending to the specific positions of the corresponding features in the image, thereby effectively improving the robustness of the convolutional neural network. Meanwhile, the method introduces no new parameters and applies no additional processing to the input image, so its complexity is relatively low.
On the other hand, to verify the effectiveness of the above scheme in making the convolutional network sufficiently robust to transforms of the input feature image, we carry out comparative tests on three datasets and report the experimental results, comparing against both conventional convolutional neural networks and other transform-robust networks. The conventional convolutional neural networks include a CNN trained on the original images and a CNN with data augmentation (CNN-Data-Aug); the two other transform-robust CNNs for input images are the Scale Invariant Convolutional Neural Network (SI-CNN) and Spatial Transformer Networks (ST-CNN). All of these methods use the Negative Log Likelihood (NLL) as the loss function.
The three datasets are MNIST, ILSVRC-2012 and UK-Bench. MNIST contains 60,000 handwritten-character images for training and 10,000 handwritten-character images for testing; each image is a 28 × 28 grayscale image, and the images are divided into 10 classes. Classification error is used as the metric on this dataset. The ILSVRC-2012 dataset has a total of 1.3M training images, 50,000 validation images and 100,000 test images; the images are divided into 1000 classes according to the object category they contain, with roughly 1000 images per class, and the evaluation metric is classification accuracy. UK-Bench is a retrieval dataset comprising 2550 groups of images with 4 images per group, all similar pictures of one scene or object. All 10,200 images are used as query images for retrieval; the images are then sorted by feature similarity and the 4 most similar images are returned as the retrieval result. An accuracy score of the retrieval result can be computed from the accuracy of the returned images.
On MNIST and ILSVRC-2012, images are classified according to the image category output by the last layer of the convolutional neural network, and the classification error rate is then computed. On UK-Bench, similarity between images is compared according to features extracted by the middle layers of the convolutional neural network; the results most similar to the query image are returned in order of similarity, and the similarity score of the returned results is computed. The experimental results are shown in Figs. 4–6, where R (Rotation) denotes random rotation, T (Translation) denotes random translation, and Ori denotes images without any transform. In Fig. 4, FCN denotes a Fully Connected Neural Network, whose two middle layers are fully connected. On UK-Bench we compare the classic CNN with the result of our convolutional neural network trained with the block sorting module. The method for improving the robustness of convolutional neural networks provided by the embodiment of the present invention (PR-CNN) achieves more robust and effective results in a simpler way.
As can be seen from the results of Figs. 4–6, the scheme provided by the present invention achieves more robust and effective results in a simpler way.
Through the description of the above embodiments, those skilled in the art can clearly understand that the above embodiments can be implemented by software, or by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the above embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive, or removable hard disk) and includes instructions for causing a computer device (such as a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present invention.
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that can easily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (2)

1. A method for effectively improving the robustness of convolutional neural networks, characterized by comprising:
during training, first applying a two-dimensional transform to the input feature map and then performing forward propagation, wherein during forward propagation a hierarchical block reordering operation based on block energy is applied to the transformed feature map; then performing backward propagation, wherein during backward propagation the error of each pixel after the block sorting operation is propagated back to the corresponding pre-sorting pixel;
during testing, in the same way as in training, applying the two-dimensional transform to the input feature map and then the hierarchical, block-energy-based reordering operation;
wherein the hierarchical block reordering operation based on block energy comprises:
performing the hierarchical reordering independently for each feature map of the same convolutional layer, wherein within one convolutional layer, for the feature map output by the previous level, the next level subdivides each block of the feature map into n × n equally sized sub-blocks, the reordering at the next level is performed independently only among the sub-blocks within each block, and n is a preset hyperparameter; for level l of the convolutional layer, the feature map is evenly divided into (n × n)^l non-overlapping sub-blocks, and the n × n level-l sub-blocks within each level-(l−1) block are reordered according to their energy;
for every level, computing the position offset Δ^(l)_{i,j} of each pixel (i, j) from the difference between its positions before and after reordering, and superimposing the offsets of all levels to obtain, for every pixel Z_{i,j} of the reordered feature map, the corresponding pixel X_{i,j} in the feature map before reordering:

Z_{i,j} = X_{(i,j) + Σ_l Δ^(l)_{i,j}}
2. The method for effectively improving the robustness of convolutional neural networks according to claim 1, characterized in that the error of each pixel after the block sorting operation is propagated back to the corresponding pre-sorting pixel according to the formula:

δ^X_{(i,j) + Σ_k Δ^(k)_{i,j}} = δ^Z_{i,j}
wherein Δ^(k)_{i,j} is the offset of sorted pixel (i, j) at level k, δ^X_{i,j} denotes the error of pixel (i, j) in the feature map before reordering, and δ^Z_{i,j} denotes the error of pixel (i, j) in the feature map after reordering.
CN201611131712.2A 2016-12-09 2016-12-09 Method for effectively improving the robustness of convolutional neural networks Active CN106779070B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611131712.2A CN106779070B (en) 2016-12-09 2016-12-09 Method for effectively improving the robustness of convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611131712.2A CN106779070B (en) 2016-12-09 2016-12-09 Method for effectively improving the robustness of convolutional neural networks

Publications (2)

Publication Number Publication Date
CN106779070A CN106779070A (en) 2017-05-31
CN106779070B true CN106779070B (en) 2019-08-27

Family

ID=58879786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611131712.2A Active CN106779070B (en) 2016-12-09 2016-12-09 Method for effectively improving the robustness of convolutional neural networks

Country Status (1)

Country Link
CN (1) CN106779070B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156781A (en) * 2016-07-12 2016-11-23 北京航空航天大学 Sequence convolutional neural networks construction method and image processing method and device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156781A (en) * 2016-07-12 2016-11-23 北京航空航天大学 Sequence convolutional neural networks construction method and image processing method and device

Also Published As

Publication number Publication date
CN106779070A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
Savva et al. SHREC'16 track: large-scale 3D shape retrieval from ShapeNet Core55
Fang et al. 3D deep shape descriptor
CN111091105A Remote sensing image target detection method based on a new bounding-box regression loss function
CN107229904A Object detection and recognition method based on deep learning
CN109063753A Three-dimensional point cloud model classification method based on convolutional neural networks
CN107066559A Three-dimensional model retrieval method based on deep learning
CN110163258A Zero-shot learning method and system based on a semantic-attribute attention reassignment mechanism
Lokanath et al. Accurate object classification and detection by faster-RCNN
CN106503729A Generation method for image convolution features based on top-layer weights
CN106339753A Method for effectively enhancing the robustness of convolutional neural networks
CN106126581A Hand-drawn sketch image retrieval method based on deep learning
CN109063719B Image classification method combining structure similarity and class information
CN107609399A Malicious code variant detection method based on NIN neural networks
CN105243154B Remote sensing image retrieval method and system based on salient point features and sparse autoencoding
CN108804677A Deep learning question classification method and system combining a multi-level attention mechanism
CN104090972A Image feature extraction and similarity measurement method for three-dimensional city model retrieval
CN109558902A Fast target detection method
CN107085733A Offshore infrared ship recognition method based on CNN deep learning
CN105512674A RGB-D object recognition method and apparatus based on a dense-matching adaptive similarity measure
CN107766828A UAV landing geomorphological classification method based on wavelet convolutional neural networks
Massa et al. Convolutional neural networks for joint object detection and pose estimation: a comparative study
CN110503098A Fast real-time lightweight object detection method and device
CN103927530A Acquisition method, application method and application system of a final classifier
CN108805280A Image retrieval method and apparatus
CN107220707A Dynamic neural network model training method and device based on two-dimensional data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant