CN105488515A - Method for training convolutional neural network classifier and image processing device - Google Patents

Method for training convolutional neural network classifier and image processing device

Info

Publication number
CN105488515A
CN105488515A (application CN201410474927.9A)
Authority
CN
China
Prior art keywords
area
classifier
layer
mapped
local feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410474927.9A
Other languages
Chinese (zh)
Other versions
CN105488515B (en)
Inventor
吴春鹏
陈理
范伟
孙俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to CN201410474927.9A priority Critical patent/CN105488515B/en
Publication of CN105488515A publication Critical patent/CN105488515A/en
Application granted granted Critical
Publication of CN105488515B publication Critical patent/CN105488515B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method for training a convolutional neural network classifier and an image processing device. According to the method, global features and local features are extracted from a training image. The global features and local features are mapped onto a feature map according to a predetermined pattern, and the feature map serves as the input sample of the classifier. According to the predetermined pattern, the global features are mapped to at least one first region, the local features are mapped to a second region, and each first region adjoins the second region. The training method provided by the invention substantially improves both detection speed and detection accuracy.

Description

Method for training a convolutional neural network classifier and image processing apparatus
Technical field
The present invention relates to the field of image recognition, and in particular to a method for training a convolutional neural network classifier and an image processing apparatus for classifying images.
Background art
Convolutional neural networks (CNNs), owing to their simple structure, small number of training parameters and strong adaptability, are increasingly widely applied in fields such as pattern recognition and image processing.
For example, Fig. 1 shows the structure of a traditional classifier 100 based on a convolutional neural network (Convolutional Neural Network, CNN for short). It consists of the following parts: an input layer, convolutional layers, spatial sampling layers, a fully-connected layer and an output layer.
In the recognition process performed with a traditional CNN classifier, taking handwritten digits as an example, an image is input and, after repeated convolution, spatial max-sampling and fully-connected operations, the CNN classifier outputs a confidence for each digit. The digit with the highest output confidence is the recognition result. In the example of Fig. 1, the input layer receives the handwritten digit "6" and the output layer outputs a confidence for each digit; the highest confidence, 0.980, is obtained for the digit "6", so the recognition result is 6. In Fig. 1, each box labeled F0 to F9 represents a feature map; for uniformity, the input image can also be regarded as a feature map.
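For readers less familiar with this structure, the following minimal PyTorch sketch illustrates a classifier of the kind shown in Fig. 1; the exact layer sizes and names used here are illustrative assumptions rather than the values of Fig. 1. It stacks convolution and spatial max-sampling layers, follows them with a fully-connected layer, and takes the highest output confidence as the recognition result.
```python
# Illustrative sketch only: layer sizes are assumptions, not the patent's Fig. 1 values.
import torch
import torch.nn as nn

class TraditionalCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                  # spatial max-sampling layer
            nn.Conv2d(6, 16, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, num_classes),  # fully-connected layer for a 28x28 input
            nn.Softmax(dim=1),                   # one confidence per digit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A 28x28 grayscale image of a handwritten digit.
image = torch.rand(1, 1, 28, 28)
confidences = TraditionalCNN()(image)
prediction = confidences.argmax(dim=1)  # the digit with the highest confidence
```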
A traditional CNN generally learns from the image pixels themselves as input samples. Although using the image pixels directly as features is convenient, it limits the application of CNNs in complex visual tasks. Suppose, for example, that a CNN is required to detect all the characters in a natural scene. If the whole image is fed into the CNN, both training speed and actual running speed drop sharply, and excessive noise reduces detection accuracy.
In addition, a traditional CNN is usually trained with the classical gradient descent algorithm, which back-propagates the output-layer error from the output layer to the input layer and adjusts the weights layer by layer. Theoretical studies in the literature have shown that this classical algorithm suffers from the "vanishing gradient" problem: the closer the propagation gets to the input layer, the smaller the weight adjustment becomes. Consequently, the weights near the input layer, which are the ones that should be adjusted, receive the smallest adjustments, which greatly slows down the learning of the whole CNN.
Furthermore, conventional training methods for CNN classifiers only consider the training of a single CNN, or the parallel training of a batch of CNNs in a high-performance computing environment.
Summary of the invention
A brief summary of the present invention is given below in order to provide a basic understanding of some aspects of the invention. It should be understood that this summary is not an exhaustive overview of the invention. It is not intended to identify key or critical parts of the invention, nor to limit its scope. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description discussed later.
According to one aspect of the present disclosure, a method for training a convolutional neural network classifier is provided, comprising: extracting global features and local features from a training image; and mapping the global features and the local features onto a feature map according to a predetermined pattern, the feature map serving as an input sample of the classifier; wherein, according to the predetermined pattern, the global features are mapped to at least one first region, the local features are mapped to a second region, and each first region adjoins the second region.
According to an embodiment of the present disclosure, the local features may comprise at least two kinds of local features extracted from the same region, and the mapping of the local features may comprise mapping the at least two kinds of local features extracted from the same region to the same position.
According to another embodiment of the present disclosure, according to the predetermined pattern, the global features may be mapped to a plurality of first regions, with the second region surrounded by the first regions.
According to another embodiment of the present disclosure, the mapping of the at least two kinds of local features may comprise one of point-to-point addition, point-to-point multiplication, the convolution operation of a convolutional neural network, or a combination thereof.
According to another embodiment of the present disclosure, the training method may further comprise: adjusting the weights of the relevant connections between layers in the classifier by back-propagation according to the weight gradients, wherein, when adjusting the weights of the connections of at least one layer close to the input layer, the weight gradient is strengthened, and the degree of strengthening may depend on the values of the weight gradients of the connections between the layers after said at least one layer.
According to another embodiment of the present disclosure, the weight gradient may be strengthened by adding a reference adjustment amount E to the weight gradient, E being obtained from the gradient values of the weights of the relevant connections between the layers after said at least one layer.
According to another embodiment of the present disclosure, E is computed as follows:
E = median( ΣΔω⃗_1/N_1, ΣΔω⃗_2/N_2, …, ΣΔω⃗_L/N_L )
where median() denotes the median operation, L is the total number of convolutional layers and fully-connected layers after said at least one layer, and ΣΔω⃗_i/N_i (i = 1, 2, …, L) is the mean of the sum of the gradient values of the weights of the relevant connections between the i-th of those L layers and its preceding layer.
According to another embodiment of the present disclosure, the training method may further comprise: training at least two convolutional neural network classifiers having the same structure, the convolutional neural network classifiers having a common fully-connected layer and output layer.
According to another embodiment of the present disclosure, the training method may further comprise: for the at least two convolutional neural network classifiers having the same structure, setting different initial values for the weights of the connections of the layers other than the fully-connected layer and the output layer.
According to another embodiment of the present disclosure, the training method may further comprise: obtaining the respective inputs of the at least two convolutional neural network classifiers having the same structure by applying random deformations to the original input sample.
According to another embodiment of the present disclosure, the training method may further comprise: before each round of training begins, performing random local adjustment on part of the corresponding weight values of the at least two convolutional neural network classifiers having the same structure.
According to another aspect of the present disclosure, an image processing apparatus for classifying an image is provided, comprising: a feature extraction unit that extracts global features and local features from the image; an input generation unit that maps the global features and the local features onto a feature map according to a predetermined pattern, wherein, according to the predetermined pattern, the global features are mapped to at least one first region, the local features are mapped to a second region, and each first region adjoins the second region; and a classifier based on a neural network, whose input is the feature map.
According to another embodiment of the present disclosure, the local features may comprise at least two kinds of local features extracted from the same region, and the mapping of the local features may comprise mapping the at least two kinds of local features extracted from the same region to the same position.
According to another embodiment of the present disclosure, according to the predetermined pattern, the global features may be mapped to a plurality of first regions, with the second region surrounded by the first regions.
According to another embodiment of the present disclosure, the mapping of the at least two kinds of local features may comprise one of point-to-point addition, point-to-point multiplication, the convolution operation of a convolutional neural network, or a combination thereof.
With the method for training a convolutional neural network classifier and the image processing apparatus for classifying images according to the present disclosure, manually defined features are used as the training and detection samples of the convolutional neural network classifier. In particular, the training samples are constructed by mapping the global features and the local features to adjoining regions, which ensures that the subsequent convolution operations of the convolutional neural network can fully exploit the correlation between the local features and the global features, thereby substantially improving detection speed and detection accuracy.
Brief description of the drawings
Embodiments of the present invention are described below with reference to the accompanying drawings, from which the above and other objects, features and advantages of the present invention can be understood more easily. In the drawings, identical or corresponding technical features or components are denoted by identical or corresponding reference numerals.
Fig. 1 is a schematic diagram showing the structure of a traditional convolutional neural network classifier.
Fig. 2 is a flowchart showing a method for training a convolutional neural network classifier according to an embodiment of the disclosure.
Fig. 3 is a schematic diagram showing how the manually defined features used for training the classifier are organized, according to an embodiment of the disclosure.
Fig. 4 is a schematic diagram showing the structure of a CNN classifier according to an embodiment of the disclosure.
Fig. 5 is a structural diagram of a training CNN classifier composed of a plurality of CNN classifiers having the same structure, according to an embodiment of the disclosure.
Fig. 6 is a schematic diagram illustrating a method for performing local adjustment on the weights between feature map layers.
Fig. 7 is a block diagram showing the configuration of an image processing apparatus for classifying images according to an embodiment of the disclosure.
Fig. 8 is a block diagram showing an exemplary structure of a computer implementing the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described below with reference to the accompanying drawings. Elements and features described in one drawing or one embodiment of the present invention may be combined with elements and features shown in one or more other drawings or embodiments. It should be noted that, for the sake of clarity, the drawings and the description omit components and processes that are unrelated to the present invention or well known to those of ordinary skill in the art.
Although using the image pixels themselves as the training samples of a classifier is more convenient than using manually defined features, in complex visual tasks it greatly slows down both training and the actual running speed of the classifier, and excessive noise reduces the detection accuracy of the classifier. In the embodiments of the present disclosure, manually defined features extracted from the training images are fed into the CNN classifier, with the aim of substantially improving detection speed and detection accuracy.
Manually defined features are generally divided into local features and global features. Local features are generally position-dependent. For example, if the gradient of each pixel of an image is extracted, each pixel corresponds to a gradient value, and this gradient value can serve as a local feature of the image. By way of example and without limitation, local features may also include contrast, or features obtained with known algorithms such as SIFT (Scale-Invariant Feature Transform). Global features are generally position-independent. For example, the mean and variance of all pixels of the image can be extracted as global features of the image. By way of example and without limitation, global features may also include the orientation of the character to be recognized (for example the digit "6"), the gray-level histogram of the image, and so on.
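As a concrete illustration of this split, the sketch below extracts one local feature map (a per-pixel gradient magnitude) and a small global feature vector (mean, variance and a gray-level histogram) with NumPy. The specific choice of features, the histogram size and the function names are assumptions made for illustration, not requirements of the patent.
```python
import numpy as np

def extract_features(image: np.ndarray):
    """Extract a per-pixel local feature map and a 1-D global feature vector.

    The features chosen here (gradient magnitude; mean, variance, 8-bin
    histogram) are only examples of the kinds of features the text mentions.
    """
    # Local feature: gradient magnitude at each pixel (position-dependent).
    gy, gx = np.gradient(image.astype(np.float64))
    gradient_map = np.sqrt(gx ** 2 + gy ** 2)          # same size as the image

    # Global features: statistics of the whole image (position-independent).
    hist, _ = np.histogram(image, bins=8, range=(0, 256))
    global_vector = np.concatenate(
        [[image.mean(), image.var()], hist.astype(np.float64)]
    )
    return gradient_map, global_vector

image = (np.random.rand(28, 28) * 255).astype(np.uint8)
local_map, global_vec = extract_features(image)
print(local_map.shape, global_vec.shape)   # (28, 28) and (10,)
```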
In traditional machine-learning (non-convolutional-neural-network) algorithms, the extracted manually defined features are generally organized into a one-dimensional vector, which is then used as the input of the learning algorithm for training. In the embodiments of the present disclosure, to suit the characteristics of convolutional neural networks, the local features and global features extracted from the image are organized into a two-dimensional feature matrix and fed into the CNN for training.
Fig. 2 is a flowchart showing a method for training a convolutional neural network classifier according to an embodiment of the disclosure. In particular, Fig. 2 essentially shows how the manually defined features are obtained according to an embodiment of the disclosure.
In step S201, global features and local features are extracted from a training image. One or more kinds of global features may be extracted as needed; similarly, one or more kinds of local features may be extracted as needed. The extracted global and local features may be of any type commonly used in the art.
In step S202, the extracted global features and local features are mapped onto an input feature map according to a predetermined pattern, the input feature map serving as an input sample of the classifier. The predetermined pattern is such that the global features are mapped to at least one first region, the local features are mapped to a second region, and each first region adjoins the second region. For ease of understanding, this is explained below with reference to Fig. 3.
Fig. 3 is a schematic diagram showing how the manually defined features used for training the classifier are organized according to an embodiment of the disclosure. In the example of Fig. 3, two kinds of local features, for example gradient and contrast, are extracted from the training image. It should be understood that, as needed, one or more kinds of local features may be extracted from the image. The extraction result of each kind of local feature forms a corresponding two-dimensional local feature map. In the example shown in Fig. 3, the local features are extracted pixel by pixel, so the resulting two-dimensional local feature maps are as large as the original input image. Alternatively, the local features may be extracted not pixel by pixel but for predetermined pixel groups, for example in units of 2×2 pixel blocks, in which case the size of the local feature map is half that of the original input image. Note that when more than one kind of local feature is extracted from the image, the extraction is performed with the same unit for all kinds of local features; for example, if one kind of local feature is extracted pixel by pixel, the other kinds should also be extracted pixel by pixel. This ensures that the resulting local feature maps have the same size.
The global features are extracted from the original image. One or more kinds of global features may be extracted. In general, with other conditions unchanged, the more global features are extracted, the more accurate the recognition result. Thus, in the example shown in Fig. 3, a series of global features is extracted from the original image, such as, but not limited to, the mean gray level of the image, the variance, the gray-level histogram, the orientation of the recognition target (the digit "6"), and so on. These global features form a one-dimensional global feature vector whose length equals, or is related to, the number of extracted global features (for example, when a global feature is a gray-level histogram, the length of the one-dimensional global feature vector also depends on the number of bins of the histogram). Typically, the length of the global feature vector may be 30 to 40.
The global feature vector is combined with each local feature map to form a feature organization map (feature organization maps 1 and 2). In the example of Fig. 3, the local features are arranged in one block region (an example of the "second region"), and of course the relative positions of the local features and the pixels of the original image from which they were extracted are kept unchanged. The global feature vector is arranged at the periphery of the local features, surrounding the region where the local features are located and adjoining it. Alternatively, the region where the global feature vector is located need not surround the local feature region, as long as all global features adjoin the local features. The purpose is to ensure that the subsequent convolution operations of the CNN can fully exploit the correlation between the local and global features.
In the example of Fig. 3, the corner positions of the feature organization map are filled with 0, which divides the global feature region into four sub-regions (examples of the "first region"). The global features in these four sub-regions need not be identical; each sub-region may use only a part of all the global features (any remaining length may be filled with 0, or features may be reused). Alternatively, the corner positions of the feature organization map need not be filled with 0; the elements of the global feature vector may instead be filled in order along the edges of the local feature map.
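The following sketch shows one possible way to lay out such a feature organization map, under the assumptions that the global feature vector is simply reused cyclically to fill the four border sub-regions and that the corner positions are filled with 0, as in the Fig. 3 example. The border width, helper names and cycling rule are illustrative choices, not values fixed by the patent.
```python
import numpy as np

def build_feature_organization_map(local_map: np.ndarray,
                                   global_vec: np.ndarray,
                                   border: int = 1) -> np.ndarray:
    """Place the local feature map in a central "second region" and wrap the
    global features around it in four border "first regions"; corners are 0.

    The border width and the cyclic reuse of global features are assumptions.
    """
    h, w = local_map.shape
    out = np.zeros((h + 2 * border, w + 2 * border), dtype=np.float64)
    out[border:border + h, border:border + w] = local_map  # second region

    def fill(length: int) -> np.ndarray:
        # Reuse global features cyclically to fill one border strip.
        reps = int(np.ceil(length * border / len(global_vec)))
        return np.tile(global_vec, reps)[: length * border].reshape(border, length)

    out[:border, border:border + w] = fill(w)        # top first region
    out[border + h:, border:border + w] = fill(w)    # bottom first region
    out[border:border + h, :border] = fill(h).T      # left first region
    out[border:border + h, border + w:] = fill(h).T  # right first region
    return out

org_map = build_feature_organization_map(np.random.rand(28, 28),
                                          np.arange(1.0, 11.0))
print(org_map.shape)   # (30, 30); the four corner cells remain 0
```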
Note that the arrangement of the global feature vector should be identical for every feature organization map (for example feature organization maps 1 and 2). Moreover, the feature organization maps should be constructed in the same way for all training samples, and the features should be organized in the same way when the trained classifier is actually used.
Next, the several feature organization maps (two in Fig. 3) are merged to obtain the final input feature map S0, which serves as the input sample of the classifier. The feature organization maps may be merged by one of point-to-point addition, point-to-point multiplication, the convolution operation adopted by the CNN, or a combination thereof. The convolution template used for the convolution operation can be preset as needed; its size is, for example, 3×3 or 5×5.
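A minimal sketch of the point-to-point merging options mentioned above (addition and multiplication) follows; the convolution option, which would convolve one map with a preset 3×3 or 5×5 template, is not shown here.
```python
import numpy as np

def merge_feature_organization_maps(maps, mode: str = "add") -> np.ndarray:
    """Merge equally sized feature organization maps into the final input
    feature map S0 by point-to-point addition or multiplication."""
    stacked = np.stack(maps)
    if mode == "add":
        return stacked.sum(axis=0)
    if mode == "multiply":
        return stacked.prod(axis=0)
    raise ValueError("mode must be 'add' or 'multiply'")

s0 = merge_feature_organization_maps(
    [np.random.rand(30, 30), np.random.rand(30, 30)], mode="add")
print(s0.shape)   # (30, 30): the input sample fed to the CNN classifier
```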
The organization of the manually defined features used as CNN input has been described above with reference to Fig. 3. In this organization, the global features and local features are mapped, according to the predetermined pattern, onto the feature map that serves as the input sample of the classifier. For example, the mapping of the local features may comprise mapping at least two kinds of local features extracted from the same region to the same position. Furthermore, the mapping of these at least two kinds of local features may comprise one of point-to-point addition, point-to-point multiplication, the convolution operation adopted by the CNN, or a combination thereof.
Using manually defined features constructed in the above manner as the input samples of the CNN classifier can improve the suitability of the CNN for complex visual tasks.
Fig. 4 is a schematic diagram of the structure of a CNN classifier 400 according to an embodiment of the disclosure. In CNN classifier 400, the input layer receives the manually defined input feature map S0 constructed, for example, in the manner described with reference to Fig. 3.
When training the CNN classifier, the gradient descent algorithm can be used to adjust the convolution templates of each layer. This algorithm back-propagates the output-layer error from the output layer to the input layer and adjusts the weights layer by layer. Briefly, back-propagation processes a set of training samples iteratively and compares the network's prediction for each sample with the known actual class label. For each training sample, the weight values of the convolution templates are modified so as to minimize the mean squared error between the network's prediction and the actual class. The modification proceeds "backwards", that is, from the output layer, through each hidden layer, down to the first hidden layer. Although convergence cannot be guaranteed, the weight values generally converge eventually and the learning process stops.
Because of the characteristics of floating-point computation on computers and the way weight gradients are propagated, the weight gradient values become smaller the closer they are to the input layer. As a result, the weights near the input layer, which are the ones that should be adjusted, receive the smallest adjustments, which slows down the learning of the whole CNN.
According to an embodiment of the present disclosure, when training CNN classifier 400, the weights of the relevant connections between layers are adjusted, as in the classical gradient descent algorithm, by back-propagation according to the weight gradients. In the example shown in Fig. 4, the weights between feature maps F7, F8 and the fully-connected layer after them, the weights between feature maps F2, F3 and feature maps F4-F6, and so on, down to the weights between feature map S0 and feature maps F0, F1, are adjusted in turn.
Unlike the classical gradient descent algorithm, however, when adjusting the weights of the connections of at least one layer close to the input layer (for example, between the convolutional layer containing F0 and the input layer S0), the weight gradient can be strengthened, and the degree of strengthening can depend on the values of the weight gradients of the connections between the layers after said at least one layer. For example, when adjusting the weights between feature map S0 and feature maps F0, F1, the gradient values of the connection weights between each later convolutional layer or fully-connected layer and its preceding layer can be taken into account to strengthen the adjustment of the weights between S0 and F0, F1.
In one embodiment, the weight gradient can be strengthened by adding a reference adjustment amount E to the weight gradient. This reference adjustment amount E can be obtained from the gradient values of the weights of the relevant connections of each convolutional layer and fully-connected layer after the layer to be adjusted.
Take the connection weights between feature maps S0 and F0 of CNN classifier 400 as an example. According to the classical gradient descent algorithm, these weights are adjusted in the back-propagation stage according to formula (1) below.
ω⃗ = ω⃗ − η·Δω⃗   (1)
where η is the learning rate and Δω⃗ is the weight gradient.
According to an embodiment of the disclosure, the weights are instead adjusted according to formula (2):
ω⃗ = ω⃗ − η·(Δω⃗ + E·I⃗)   (2)
where E is an adjustment amount obtained from the information of the layers after the layer to be adjusted, and I⃗ is a unit vector.
In one embodiment, E can be calculated by formula (3) below:
E = median( ΣΔω⃗_1/N_1, ΣΔω⃗_2/N_2, …, ΣΔω⃗_L/N_L )   (3)
where median() denotes the median operation, L is the total number of convolutional layers and fully-connected layers after said at least one layer, and ΣΔω⃗_i/N_i (i = 1, 2, …, L) is the mean of the sum of the gradient values of the weights of the relevant connections between the i-th of those L layers and its preceding layer.
In the example of Fig. 4, if the gradient Δω⃗ to be adjusted is that of the weights between the input layer S0 and the convolutional layer containing F0, then ΣΔω⃗_1/N_1 is the mean of the sum of the gradient values of the weights of the relevant connections (for example, all connections) between the convolutional layer containing F4 and the spatial sampling layer before it (the layer containing F2), ΣΔω⃗_2/N_2 is the corresponding mean for the connections between the first fully-connected layer and the spatial sampling layer before it (the layer containing F7), and so on; ΣΔω⃗_L/N_L is the corresponding mean for the connections between the last fully-connected layer of classifier 400 and the layer before it. In other words, in this embodiment, the adjustment of the weights of the convolution templates close to the input layer takes into account the mean of the summed weight gradients of each layer after it.
An example calculation is described below for the gradient values of the weights between the convolutional layer containing F4 and the spatial sampling layer containing F2. For convenience, assume that a 2×2 convolution template is used between every pair of feature maps of these two layers; in other words, the convolution template between feature maps F2 and F4 contains 4 weights. The two layers are therefore connected by 24 weights (F2-F4, F5, F6 and F3-F4, F5, F6), and the corresponding term is simply the sum of the gradient values of these 24 weights divided by 24.
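A small NumPy sketch of the update described by formulas (1)-(3) follows, under the assumption that the per-layer weight gradients of the L later layers are available as flat arrays; the layer sizes, learning rate and function names are illustrative assumptions.
```python
import numpy as np

def reference_adjustment(later_layer_grads) -> float:
    """Formula (3): E is the median, over the L later layers, of the mean of
    the gradient values of each layer's relevant connection weights."""
    per_layer_means = [g.sum() / g.size for g in later_layer_grads]
    return float(np.median(per_layer_means))

def update_near_input_weights(weights: np.ndarray,
                              grad: np.ndarray,
                              later_layer_grads,
                              lr: float = 0.01) -> np.ndarray:
    """Formula (2): strengthen the weight gradient of a layer near the input
    by adding the reference adjustment amount E before the descent step."""
    e = reference_adjustment(later_layer_grads)
    # E multiplies a unit vector, so it is added to every gradient component.
    return weights - lr * (grad + e)

# Example: 24 weights between the F2 layer and the F4 layer, plus the gradients
# of two later fully-connected layers (sizes assumed for illustration).
later = [np.random.randn(24), np.random.randn(120), np.random.randn(84)]
w = np.random.randn(25)   # weights between S0 and the F0/F1 layer (size assumed)
g = np.random.randn(25)
w = update_near_input_weights(w, g, later)
```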
Although the gradient descent algorithm with back-propagation according to the embodiment of the disclosure has been described above in conjunction with CNN classifier 400, those skilled in the art will understand that this gradient descent algorithm can also be used for the CNN classifier 100 shown in Fig. 1, whose training samples are the image pixels of the original image themselves.
In addition, the present disclosure also develops a method of training multiple CNN classifiers simultaneously by connecting them and adjusting the corresponding weight values during training, so that the training method makes fuller use of the relationships and information exchange between the CNN classifiers. In the following, the connection and simultaneous training of two CNN classifiers is described. It should be understood that more than two CNN classifiers can also be connected and trained simultaneously as needed. With this training method, the relationships and information exchange between the simultaneously trained CNNs can be taken into account, and these relationships and exchanges make the CNN training more thorough.
Fig. 5 is a structural diagram of a training CNN classifier 500 according to an embodiment of the disclosure, where the training CNN classifier 500 is formed by interconnecting CNN classifiers 501 and 502. CNN classifiers 501 and 502 have the same structure and share a common fully-connected layer and output layer. Note that although in Fig. 5 the inputs (training samples) of CNN classifiers 501 and 502 both use the manually defined features according to an embodiment of the disclosure (input feature maps S0 and S1), the training samples may also be the image pixels of the original image themselves.
Specifically, to construct CNN classifier 500, the respective fully-connected layers and output layers of the identically structured CNN classifiers 501 and 502 are removed, so that the last layer of each of CNN classifiers 501 and 502 temporarily becomes a spatial sampling layer. The same fully-connected layer is then used to connect the last spatial sampling layers of classifiers 501 and 502, and an output layer is connected to that fully-connected layer. CNN classifiers 501 and 502 are thus merged into one network from the fully-connected layer onward, while the input layers, convolutional layers and spatial sampling layers remain separate, as shown in Fig. 5.
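The sketch below shows one way to express such a merged network in PyTorch: two branches with identical but separately initialized convolution and sampling layers, whose flattened outputs are concatenated and fed to a shared fully-connected layer and output layer. All layer sizes and names are assumptions made for illustration, not the configuration of Fig. 5.
```python
import torch
import torch.nn as nn

def make_branch() -> nn.Sequential:
    # Identical structure; each call gives an independent random initialization.
    return nn.Sequential(
        nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
    )

class MergedClassifier(nn.Module):
    """Two identically structured CNN branches (like classifiers 501 and 502)
    sharing one fully-connected layer and one output layer, as in Fig. 5."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.branch1 = make_branch()   # fed with input feature map S0
        self.branch2 = make_branch()   # fed with input feature map S1
        self.shared = nn.Sequential(
            nn.Linear(2 * 16 * 4 * 4, 120), nn.ReLU(),   # common fully-connected layer
            nn.Linear(120, num_classes),                 # common output layer
        )

    def forward(self, s0: torch.Tensor, s1: torch.Tensor) -> torch.Tensor:
        merged = torch.cat([self.branch1(s0), self.branch2(s1)], dim=1)
        return self.shared(merged)

# Two randomly deformed versions of the same 30x30 input feature map.
s0, s1 = torch.rand(1, 1, 30, 30), torch.rand(1, 1, 30, 30)
logits = MergedClassifier()(s0, s1)
```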
At the start of training, different initial values are set for the weights of the connections of the layers of CNN classifiers 501 and 502 other than the fully-connected layer and the output layer. For example, the initial weight values can be drawn from any random distribution, such as a distribution over (0, 1). In addition, to make the learning more thorough, the respective inputs of CNN classifiers 501 and 502 are obtained by applying random deformations to the original input sample. The random deformations are, for example but not limited to, translation, rotation and local deformation. As shown in Fig. 5, the random deformation is applied to the input image (the character "6") before it is fed into input layers S0 and S1, or, more precisely, before feature extraction is performed on the input image.
The constructed CNN classifier 500 is then trained. During training, the back-propagation algorithm with strengthened weight gradients according to the embodiment of the disclosure can be used, so that the weight values near the input layer are adjusted more effectively.
Before each round of training begins, random local adjustment can also be performed on part of the corresponding weight values of CNN classifiers 501 and 502. Fig. 6 is a schematic diagram of a method of performing local adjustment on the weights between the layer containing feature map F2 and the layer containing feature map F4 shown in Fig. 5. This local adjustment can be performed on all corresponding weight values of CNN classifiers 501 and 502.
As shown in Fig. 6, ω1 and ω2 are the weight values between feature maps F2 and F6 of classifiers 501 and 502, respectively. S_ω1 and S_ω2 are the random numbers used to adjust ω1 and ω2, respectively; each is uniformly distributed over an interval of length 3, and S_ω1 and S_ω2 are independent of each other. The ε in Fig. 6 is a small value slightly greater than 0, which can be set to, for example, 0.00001. ω1, ω2 and ε can be regarded as re-initializing the weights, allowing them to be relearned so that part of the weights in the network converge to better values. As can be seen from the formula in Fig. 6, ω1 and ω2 are adjusted alternately ("ω1−ω2" and "ω2−ω1"), so that their mutual influence is exploited to eliminate certain errors. The corresponding weights between other feature maps can be adjusted in the same way.
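The exact formula of Fig. 6 is not reproduced in the text above, so the sketch below is only one plausible reading of the described swap-style adjustment: each weight is re-initialized from the difference with its counterpart ("ω1−ω2" and "ω2−ω1"), scaled by an independent random factor, plus the small constant ε. Treat it as an assumption for illustration, not the patent's formula.
```python
import numpy as np

def random_local_adjustment(w1: np.ndarray, w2: np.ndarray,
                            eps: float = 1e-5, interval_length: float = 3.0):
    """One plausible form of the Fig. 6 adjustment (an assumption): the
    corresponding weights of the two classifiers are alternately adjusted
    using their difference, independent random factors and a small epsilon."""
    rng = np.random.default_rng()
    s1 = rng.uniform(0.0, interval_length, size=w1.shape)  # independent of s2
    s2 = rng.uniform(0.0, interval_length, size=w2.shape)
    new_w1 = eps + s1 * (w1 - w2)    # "omega1 - omega2"
    new_w2 = eps + s2 * (w2 - w1)    # "omega2 - omega1"
    return new_w1, new_w2

# Before each training round, adjust part of the corresponding weights.
w1, w2 = np.random.randn(24), np.random.randn(24)
w1, w2 = random_local_adjustment(w1, w2)
```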
CNN classifier 500 is trained for many consecutive rounds until the error rate on the training set no longer changes significantly, yielding the trained CNN classifier 500.
When the classifier is to be used for classification, CNN classifier 500 can be taken apart into the separate CNN classifiers 501 and 502. The fully-connected layer and output layer of CNN classifier 501 (or 502) can then be restored, and CNN classifier 501 (or 502) can be trained again to obtain the final classifier.
Fig. 7 is a block diagram showing the configuration of an image processing apparatus 700 for classifying images according to an embodiment of the disclosure. The image processing apparatus 700 comprises a feature extraction unit 701, an input generation unit 702 and a CNN classifier 703.
The feature extraction unit 701 extracts global features and local features from the image to be processed. The global features and local features extracted by feature extraction unit 701 can be any of the various global and local features known in the art, and any suitable extraction method can be used.
The input generation unit 702 maps the global features and local features extracted by feature extraction unit 701 onto a feature map according to a predetermined pattern. According to this predetermined pattern, the global features are mapped to at least one first region, the local features are mapped to a second region, and each first region adjoins the second region.
The local features extracted by feature extraction unit 701 may comprise at least two kinds of local features extracted from the same region of the image, and the mapping of the local features performed by input generation unit 702 may comprise mapping these at least two kinds of local features extracted from the same region to the same position. The mapping of these at least two kinds of local features may comprise one of point-to-point addition, point-to-point multiplication, the convolution operation of a convolutional neural network, or a combination thereof. In addition, input generation unit 702 may, according to the predetermined pattern, map the global features extracted by feature extraction unit 701 to a plurality of first regions so that the second region is surrounded by the first regions. A concrete example of a predetermined pattern that can be adopted has been described above with reference to Fig. 3 and is not repeated here.
The input generation unit 702 feeds the generated feature map into the CNN classifier 703 as a sample, and the CNN classifier 703 classifies the image according to the input feature sample.
The basic principles of the present invention have been described above in conjunction with specific embodiments. However, it should be pointed out that those of ordinary skill in the art will understand that all or any steps or components of the method and apparatus of the present invention can be implemented in hardware, firmware, software or a combination thereof, in any computing device (including processors, storage media and the like) or network of computing devices; this can be achieved by persons of ordinary skill in the art using their basic programming skills after reading the description of the present invention.
Therefore, the object of the present invention can also be achieved by running a program or a set of programs on any computing device. The computing device may be a well-known general-purpose device. The object of the present invention can thus also be achieved merely by providing a program product containing the program code that implements the method or apparatus. That is, such a program product also constitutes the present invention, and a storage medium storing such a program product also constitutes the present invention. Obviously, the storage medium can be any known storage medium or any storage medium developed in the future.
When the embodiments of the present invention are implemented by software and/or firmware, the program constituting the software is installed from a storage medium or a network onto a computer having a dedicated hardware structure, for example the general-purpose computer 800 shown in Fig. 8, which, when installed with various programs, can perform various functions and the like.
Fig. 8 is a block diagram showing an exemplary structure of a computer implementing the present invention. In Fig. 8, a central processing unit (CPU) 801 performs various processes according to programs stored in a read-only memory (ROM) 802 or loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores, as needed, data required when the CPU 801 performs the various processes.
The CPU 801, the ROM 802 and the RAM 803 are connected to one another via a bus 804. An input/output interface 805 is also connected to the bus 804.
The following components are connected to the input/output interface 805: an input section 806 including a keyboard, a mouse and the like; an output section 807 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), a loudspeaker and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem and the like. The communication section 809 performs communication processing via a network such as the Internet.
A drive 810 is also connected to the input/output interface 805 as needed. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory is mounted on the drive 810 as needed, so that the computer program read from it is installed into the storage section 808 as needed.
When the above steps and processes are implemented by software, the program constituting the software is installed from a network such as the Internet or from a storage medium such as the removable medium 811.
Those skilled in the art will understand that this storage medium is not limited to the removable medium 811 shown in Fig. 8, which stores the program and is distributed separately from the device in order to provide the program to the user. Examples of the removable medium 811 include a magnetic disk, an optical disk (including a compact disc read-only memory (CD-ROM) and a digital versatile disc (DVD)), a magneto-optical disk (including a mini-disc (MD)) and a semiconductor memory. Alternatively, the storage medium may be the ROM 802, a hard disk contained in the storage section 808, or the like, in which the program is stored and which is distributed to the user together with the device containing it.
The present invention also proposes a program product storing machine-readable instruction codes. When the instruction codes are read and executed by a machine, the above method according to the embodiments of the present invention can be performed.
Accordingly, a storage medium carrying the above program product storing machine-readable instruction codes is also included in the disclosure of the present invention. The storage medium includes, but is not limited to, a floppy disk, an optical disk, a magneto-optical disk, a memory card, a memory stick and so on.
In addition, the methods and apparatuses according to the embodiments of the present invention may be used in combination, which can expand their range of application.
Those of ordinary skill in the art should understand that what is exemplified here is illustrative and the present invention is not limited thereto.
As an example, each step of the above method and each module and/or unit of the above apparatus may be implemented as software, firmware, hardware or a combination thereof, as part of the corresponding device. The specific means or manners by which the modules and units of the above apparatus can be configured by software, firmware, hardware or a combination thereof are well known to those skilled in the art and are not repeated here.
As an example, when implemented by software or firmware, the program constituting the software can be installed from a storage medium or a network onto a computer having a dedicated hardware structure (for example the general-purpose computer 800 shown in Fig. 8), which, when installed with various programs, can perform various functions and the like.
In the above description of specific embodiments of the present invention, features described and/or illustrated for one embodiment can be used in one or more other embodiments in the same or a similar way, combined with features in other embodiments, or used to replace features in other embodiments.
It should be emphasized that the term "comprising/including", as used herein, refers to the presence of a feature, element, step or component, but does not exclude the presence or addition of one or more other features, elements, steps or components.
In addition, the methods of the present invention are not limited to being performed in the chronological order described in the specification; they may also be performed in other chronological orders, in parallel, or independently. Therefore, the execution order of the methods described in this specification does not limit the technical scope of the present invention.
Although the present invention has been disclosed above through the description of its specific embodiments, it should be understood that those skilled in the art can devise various modifications, improvements or equivalents of the present invention within the spirit and scope of the appended claims. Such modifications, improvements or equivalents should also be considered to fall within the protection scope of the present invention.
The present invention can also be implemented as the following embodiments:
Embodiment 1. A method for training a convolutional neural network classifier, comprising:
extracting global features and local features from a training image; and
mapping the global features and the local features onto a feature map according to a predetermined pattern, the feature map serving as an input sample of the classifier;
wherein, according to the predetermined pattern, the global features are mapped to at least one first region, the local features are mapped to a second region, and each first region adjoins the second region.
Embodiment 2. The method according to embodiment 1, wherein the local features comprise at least two kinds of local features extracted from the same region, and the mapping of the local features comprises mapping the at least two kinds of local features extracted from the same region to the same position.
Embodiment 3. The method according to embodiment 2, wherein, according to the predetermined pattern, the global features are mapped to a plurality of first regions, and the second region is surrounded by the first regions.
Embodiment 4. The method according to embodiment 3, wherein the mapping of the at least two kinds of local features comprises one of point-to-point addition, point-to-point multiplication, the convolution operation of a convolutional neural network, or a combination thereof.
Embodiment 5. The method according to any one of embodiments 1 to 4, further comprising:
adjusting the weights of the relevant connections between layers in the classifier by back-propagation according to the weight gradients,
wherein, when adjusting the weights of the connections of at least one layer close to the input layer, the weight gradient is strengthened, and the degree of strengthening depends on the values of the weight gradients of the connections between the layers after said at least one layer.
Embodiment 6. The method according to embodiment 5, wherein the weight gradient is strengthened by adding a reference adjustment amount E to the weight gradient, E being obtained from the gradient values of the weights of the relevant connections between the layers after said at least one layer.
Embodiment 7. The method according to embodiment 6, wherein E is computed as follows:
E = median( ΣΔω⃗_1/N_1, ΣΔω⃗_2/N_2, …, ΣΔω⃗_L/N_L )
where median() denotes the median operation, L is the total number of convolutional layers and fully-connected layers after said at least one layer, and ΣΔω⃗_i/N_i (i = 1, 2, …, L) is the mean of the sum of the gradient values of the weights of the relevant connections between the i-th of those L layers and its preceding layer.
Embodiment 8. The method according to any one of embodiments 1 to 7, further comprising:
training at least two convolutional neural network classifiers having the same structure, the convolutional neural network classifiers having a common fully-connected layer and output layer.
Embodiment 9. The method according to embodiment 8, further comprising:
setting, for the at least two convolutional neural network classifiers having the same structure, different initial values for the weights of the connections of the layers other than the fully-connected layer and the output layer.
Embodiment 10. The method according to embodiment 9, further comprising:
obtaining the respective inputs of the at least two convolutional neural network classifiers having the same structure by applying random deformations to the original input sample.
Embodiment 11. The method according to any one of embodiments 8 to 10, further comprising:
before each round of training begins, performing random local adjustment on part of the corresponding weight values of the at least two convolutional neural network classifiers having the same structure.
Embodiment 12. An image processing apparatus for classifying an image, comprising:
a feature extraction unit that extracts global features and local features from the image;
an input generation unit that maps the global features and the local features onto a feature map according to a predetermined pattern, wherein, according to the predetermined pattern, the global features are mapped to at least one first region, the local features are mapped to a second region, and each first region adjoins the second region; and
a classifier based on a neural network, whose input is the feature map.
Embodiment 13. The image processing apparatus according to embodiment 12, wherein the local features comprise at least two kinds of local features extracted from the same region, and the mapping of the local features comprises mapping the at least two kinds of local features extracted from the same region to the same position.
Embodiment 14. The image processing apparatus according to embodiment 13, wherein, according to the predetermined pattern, the global features are mapped to a plurality of first regions, and the second region is surrounded by the first regions.
Embodiment 15. The image processing apparatus according to embodiment 14, wherein the mapping of the at least two kinds of local features comprises one of point-to-point addition, point-to-point multiplication, the convolution operation of a convolutional neural network, or a combination thereof.

Claims (10)

1. A method for training a convolutional neural network classifier, comprising:
extracting global features and local features from a training image; and
mapping said global features and said local features onto a feature map according to a predetermined pattern, the feature map serving as an input sample of said classifier;
wherein, according to said predetermined pattern, said global features are mapped to at least one first region, said local features are mapped to a second region, and each said first region adjoins said second region.
2. The method according to claim 1, wherein said local features comprise at least two kinds of local features extracted from the same region, and the mapping of said local features comprises mapping the at least two kinds of local features extracted from the same region to the same position.
3. The method according to claim 2, wherein, according to said predetermined pattern, said global features are mapped to a plurality of said first regions, and said second region is surrounded by said first regions.
4. The method according to any one of claims 1 to 3, further comprising:
adjusting the weights of the relevant connections between layers in said classifier by back-propagation according to the weight gradients,
wherein, when adjusting the weights of the connections of at least one layer close to the input layer, said weight gradient is strengthened, and the degree of strengthening depends on the values of the weight gradients of the connections between the layers after said at least one layer.
5. The method according to claim 4, wherein said weight gradient is strengthened by adding a reference adjustment amount E to said weight gradient, E being obtained from the gradient values of the weights of the relevant connections between the layers after said at least one layer.
6. The method according to any one of claims 1 to 5, further comprising:
training at least two convolutional neural network classifiers having the same structure, said convolutional neural network classifiers having a common fully-connected layer and output layer.
7. The method according to claim 6, further comprising:
setting, for said at least two convolutional neural network classifiers having the same structure, different initial values for the weights of the connections of the layers other than said fully-connected layer and said output layer.
8. The method according to claim 7, further comprising:
obtaining the respective inputs of said at least two convolutional neural network classifiers having the same structure by applying random deformations to the original input sample.
9. The method according to any one of claims 6 to 8, further comprising:
before each round of training begins, performing random local adjustment on part of the corresponding weight values of the at least two convolutional neural network classifiers having the same structure.
10. An image processing apparatus for classifying an image, comprising:
a feature extraction unit that extracts global features and local features from said image;
an input generation unit that maps said global features and said local features onto a feature map according to a predetermined pattern, wherein, according to said predetermined pattern, said global features are mapped to at least one first region, said local features are mapped to a second region, and each said first region adjoins said second region; and
a classifier based on a neural network, whose input is said feature map.
CN201410474927.9A 2014-09-17 2014-09-17 Image processing method and image processing apparatus for classifying an image Active CN105488515B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410474927.9A CN105488515B (en) 2014-09-17 2014-09-17 Image processing method and image processing apparatus for classifying an image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410474927.9A CN105488515B (en) 2014-09-17 2014-09-17 Image processing method and image processing apparatus for classifying an image

Publications (2)

Publication Number Publication Date
CN105488515A true CN105488515A (en) 2016-04-13
CN105488515B CN105488515B (en) 2019-06-25

Family

ID=55675487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410474927.9A Active CN105488515B (en) 2014-09-17 2014-09-17 Image processing method and image processing apparatus for classifying an image

Country Status (1)

Country Link
CN (1) CN105488515B (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296690A (en) * 2016-08-10 2017-01-04 北京小米移动软件有限公司 Method and device for evaluating quality of image content
CN106339753A (en) * 2016-08-17 2017-01-18 中国科学技术大学 Method for effectively enhancing robustness of convolutional neural network
CN106372648A (en) * 2016-10-20 2017-02-01 中国海洋大学 Multi-feature-fusion-convolutional-neural-network-based plankton image classification method
CN106548192A (en) * 2016-09-23 2017-03-29 北京市商汤科技开发有限公司 Image processing method and apparatus based on neural network, and electronic device
CN107316066A (en) * 2017-07-28 2017-11-03 北京工商大学 Image classification method and system based on multi-path convolutional neural networks
CN107729948A (en) * 2017-10-31 2018-02-23 京东方科技集团股份有限公司 Image processing method and device, computer product and storage medium
CN107944354A (en) * 2017-11-10 2018-04-20 南京航空航天大学 Vehicle detection method based on deep learning
CN108154153A (en) * 2016-12-02 2018-06-12 北京市商汤科技开发有限公司 Scene analysis method and system, electronic equipment
CN108229508A (en) * 2016-12-15 2018-06-29 富士通株式会社 Training apparatus and training method for training image processing apparatus
CN108875762A (en) * 2017-05-16 2018-11-23 富士通株式会社 Classifier training method, image-recognizing method and image recognition apparatus
CN109032356A (en) * 2018-07-27 2018-12-18 深圳绿米联创科技有限公司 Sign language control method, apparatus and system
CN109255369A (en) * 2018-08-09 2019-01-22 网易(杭州)网络有限公司 Method and device for recognizing picture by using neural network, medium and computing equipment
WO2019047949A1 (en) * 2017-09-08 2019-03-14 众安信息技术服务有限公司 Image quality evaluation method and image quality evaluation system
CN109919243A (en) * 2019-03-15 2019-06-21 天津拾起卖科技有限公司 Automatic scrap steel type identification method and device based on CNN
CN110268423A (en) * 2016-08-19 2019-09-20 莫维迪乌斯有限公司 System and method for distributed training of deep learning models
CN110276362A (en) * 2018-03-13 2019-09-24 富士通株式会社 Method and apparatus for training an image model, and classification prediction method and apparatus
CN110321867A (en) * 2019-07-09 2019-10-11 西安电子科技大学 Shelter target detection method based on part constraint network
CN110751225A (en) * 2019-10-28 2020-02-04 普联技术有限公司 Image classification method, device and storage medium
CN110766152A (en) * 2018-07-27 2020-02-07 富士通株式会社 Method and apparatus for training deep neural networks
CN111125396A (en) * 2019-12-07 2020-05-08 复旦大学 Image retrieval method of single-model multi-branch structure
CN111582008A (en) * 2019-02-19 2020-08-25 富士通株式会社 Device and method for training classification model and device for classification by using classification model
CN112020723A (en) * 2018-05-23 2020-12-01 富士通株式会社 Training method and device for classification neural network for semantic segmentation, and electronic equipment
US11062453B2 (en) 2016-12-02 2021-07-13 Beijing Sensetime Technology Development Co., Ltd. Method and system for scene parsing and storage medium
CN113574544A (en) * 2019-03-15 2021-10-29 浜松光子学株式会社 Convolutional neural network judgment basis extraction method and device
US11295195B2 (en) 2017-03-03 2022-04-05 Samsung Electronics Co., Ltd. Neural network devices and methods of operating the same

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7542593B1 (en) * 2008-04-30 2009-06-02 International Business Machines Corporation Offline signature verification using high pressure regions
US20130266214A1 (en) * 2012-04-06 2013-10-10 Brigham Young University Training an image processing neural network without human selection of features
CN102722735A (en) * 2012-05-24 2012-10-10 西南交通大学 Endoscopic image lesion detection method based on fusion of global and local features
CN103440348A (en) * 2013-09-16 2013-12-11 重庆邮电大学 Vector-quantization-based overall and local color image searching method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Le, Q. V., Ngiam, J., Chen, Z., Chia, D., Koh, P. W., and Ng, A.: "Tiled convolutional neural networks", in NIPS *
Xu Shanshan et al.: "Wood defect recognition based on convolutional neural networks", Journal of Shandong University *
Su Yu: "Face recognition fusing global and local features", China Doctoral Dissertations Full-text Database (Electronic Journal), Information Science and Technology *

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296690A (en) * 2016-08-10 2017-01-04 北京小米移动软件有限公司 Quality evaluation method and device for picture content
CN106339753A (en) * 2016-08-17 2017-01-18 中国科学技术大学 Method for effectively enhancing robustness of convolutional neural network
CN110268423A (en) * 2016-08-19 2019-09-20 莫维迪乌斯有限公司 System and method for distributed training of deep learning models
CN106548192A (en) * 2016-09-23 2017-03-29 北京市商汤科技开发有限公司 Neural-network-based image processing method, device and electronic equipment
CN106548192B (en) * 2016-09-23 2019-08-09 北京市商汤科技开发有限公司 Neural-network-based image processing method, device and electronic equipment
CN106372648A (en) * 2016-10-20 2017-02-01 中国海洋大学 Multi-feature-fusion-convolutional-neural-network-based plankton image classification method
CN106372648B (en) * 2016-10-20 2020-03-13 中国海洋大学 Plankton image classification method based on multi-feature fusion convolutional neural network
CN108154153A (en) * 2016-12-02 2018-06-12 北京市商汤科技开发有限公司 Scene analysis method and system, electronic equipment
US11062453B2 (en) 2016-12-02 2021-07-13 Beijing Sensetime Technology Development Co., Ltd. Method and system for scene parsing and storage medium
CN108229508B (en) * 2016-12-15 2022-01-04 富士通株式会社 Training apparatus and training method for training image processing apparatus
CN108229508A (en) * 2016-12-15 2018-06-29 富士通株式会社 Training apparatus and training method for training image processing apparatus
US11295195B2 (en) 2017-03-03 2022-04-05 Samsung Electronics Co., Ltd. Neural network devices and methods of operating the same
TWI765979B (en) * 2017-03-03 2022-06-01 南韓商三星電子股份有限公司 Methods of operating neural network devices
CN108875762B (en) * 2017-05-16 2022-03-15 富士通株式会社 Classifier training method, image recognition method and image recognition device
CN108875762A (en) * 2017-05-16 2018-11-23 富士通株式会社 Classifier training method, image-recognizing method and image recognition apparatus
CN107316066A (en) * 2017-07-28 2017-11-03 北京工商大学 Image classification method and system based on multi-path convolutional neural networks
WO2019047949A1 (en) * 2017-09-08 2019-03-14 众安信息技术服务有限公司 Image quality evaluation method and image quality evaluation system
US11138738B2 (en) 2017-10-31 2021-10-05 Boe Technology Group Co., Ltd. Image processing method and image processing device
CN107729948A (en) * 2017-10-31 2018-02-23 京东方科技集团股份有限公司 Image processing method and device, computer product and storage medium
CN107944354B (en) * 2017-11-10 2021-09-17 南京航空航天大学 Vehicle detection method based on deep learning
CN107944354A (en) * 2017-11-10 2018-04-20 南京航空航天大学 Vehicle detection method based on deep learning
CN110276362A (en) * 2018-03-13 2019-09-24 富士通株式会社 Method and apparatus for training an image model, and classification prediction method and device
CN112020723A (en) * 2018-05-23 2020-12-01 富士通株式会社 Training method and device for classification neural network for semantic segmentation, and electronic equipment
CN110766152A (en) * 2018-07-27 2020-02-07 富士通株式会社 Method and apparatus for training deep neural networks
CN110766152B (en) * 2018-07-27 2023-08-04 富士通株式会社 Method and apparatus for training deep neural networks
CN109032356B (en) * 2018-07-27 2022-05-31 深圳绿米联创科技有限公司 Sign language control method, device and system
CN109032356A (en) * 2018-07-27 2018-12-18 深圳绿米联创科技有限公司 Sign language control method, apparatus and system
CN109255369A (en) * 2018-08-09 2019-01-22 网易(杭州)网络有限公司 Method and device for recognizing picture by using neural network, medium and computing equipment
CN109255369B (en) * 2018-08-09 2020-10-16 杭州易现先进科技有限公司 Method and device for recognizing picture by using neural network, medium and computing equipment
CN111582008A (en) * 2019-02-19 2020-08-25 富士通株式会社 Device and method for training classification model and device for classification by using classification model
CN111582008B (en) * 2019-02-19 2023-09-08 富士通株式会社 Device and method for training classification model and device for classifying by using classification model
CN113574544A (en) * 2019-03-15 2021-10-29 浜松光子学株式会社 Convolutional neural network judgment basis extraction method and device
CN109919243A (en) * 2019-03-15 2019-06-21 天津拾起卖科技有限公司 CNN-based automatic identification method and device for scrap steel types
CN110321867B (en) * 2019-07-09 2022-03-04 西安电子科技大学 Shielded target detection method based on component constraint network
CN110321867A (en) * 2019-07-09 2019-10-11 西安电子科技大学 Shielded target detection method based on component constraint network
CN110751225A (en) * 2019-10-28 2020-02-04 普联技术有限公司 Image classification method, device and storage medium
CN111125396A (en) * 2019-12-07 2020-05-08 复旦大学 Image retrieval method of single-model multi-branch structure

Also Published As

Publication number Publication date
CN105488515B (en) 2019-06-25

Similar Documents

Publication Publication Date Title
CN105488515A (en) Method for training convolutional neural network classifier and image processing device
Too et al. A comparative study of fine-tuning deep learning models for plant disease identification
CN104346622A (en) Convolutional neural network classifier, and classifying method and training method thereof
CN111489358A (en) Three-dimensional point cloud semantic segmentation method based on deep learning
CN103679185B (en) Convolutional neural network classifier system, training method, classification method and use thereof
CN108345827B (en) Method, system and neural network for identifying document direction
CN109325547A (en) Non-motor vehicle image multi-tag classification method, system, equipment and storage medium
CN109740154A (en) Fine-grained sentiment analysis method for online comments based on multi-task learning
CN110533024B (en) Double-quadratic pooling fine-grained image classification method based on multi-scale ROI (region of interest) features
CN110428428A (en) Image semantic segmentation method, electronic device and readable storage medium
CN107316294A (en) Pulmonary nodule feature extraction and benign-malignant classification method based on an improved deep Boltzmann machine
CN113822209B (en) Hyperspectral image recognition method and device, electronic equipment and readable storage medium
JP2015506026A (en) Image classification
CN105701120A (en) Method and apparatus for determining semantic matching degree
US20210158166A1 (en) Semi-structured learned threshold pruning for deep neural networks
CN104516897B (en) Method and apparatus for ranking applications
CN102982344A (en) Support vector machine classification method based on simultaneously fusing multi-view features and multi-label information
JP6612486B1 (en) Learning device, classification device, learning method, classification method, learning program, and classification program
CN105303195A (en) Bag-of-words image classification method
CN106484692A (en) Three-dimensional model retrieval method
CN104915673A (en) Object classification method and system based on the bag-of-visual-words model
CN113743417B (en) Semantic segmentation method and semantic segmentation device
CN103745201A (en) Method and device for program recognition
CN104598920A (en) Scene classification method based on Gist features and extreme learning machine
CN108920446A (en) Processing method for engineering documents

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant