CN107077625A - Hierarchical deep convolutional neural network - Google Patents

Hierarchical deep convolutional neural network

Info

Publication number
CN107077625A
CN107077625A (application CN201580058248.6A)
Authority
CN
China
Prior art keywords
classification
cnn
fine
coarse
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201580058248.6A
Other languages
Chinese (zh)
Inventor
Robinson Piramuthu
Zhicheng Yan
Vignesh Jagadeesh
Wei Di
Dennis DeCoste
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
eBay Inc
Original Assignee
eBay Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by eBay Inc
Publication of CN107077625A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods
    • G06N 3/084 — Backpropagation, e.g. using gradient descent


Abstract

A hierarchical deep convolutional neural network (HD-CNN) with branches improves on existing convolutional neural network (CNN) techniques. In an HD-CNN, classes that can be easily distinguished are classified by a coarse-category CNN at the upper layer, while the most difficult classifications are completed by fine-category CNNs at lower layers. In training the HD-CNN, a multinomial logistic loss and a novel temporal sparsity penalty may be used. The use of the multinomial logistic loss and the temporal sparsity penalty causes each branch component to handle a different subset of the categories.

Description

Hierarchical deep convolutional neural network
Priority claim
This application claims priority to U.S. Patent Application No. 14/582,059, entitled "Hierarchical Deep Convolutional Neural Network For Image Classification," filed on December 23, 2014, which in turn claims priority to U.S. Patent Application No. 62/068,883, entitled "Hierarchical Deep Convolutional Neural Network For Image Classification," filed on October 27, 2014. Each of the above applications is incorporated herein by reference.
Technical field
The subject matter disclosed herein generally relates to classifying data using hierarchical deep convolutional neural networks. Specifically, the present disclosure relates to systems and methods for generating and using hierarchical deep convolutional neural networks for image classification.
Background
A deep convolutional neural network (CNN) is trained as an N-way classifier to distinguish between N classes of data. CNN classifiers are used to classify images, detect objects, estimate poses, recognize faces, and perform other classification tasks. Typically, the structure of the CNN (e.g., the number of layers, the types of layers, the connectivity between layers, and so on) is selected by a designer, and the parameters of each layer are then determined by training.
Multiple classifiers can be used in combination by averaging. In model averaging, multiple separate models are used. Each model can classify the full set of categories, and each model is trained independently. The main sources of their prediction differences include different initializations, different subsets of the full training set, and so on. The output of the combined model is the average of the outputs of the individual models.
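The model-averaging scheme described above can be sketched as follows; the per-model probability vectors here are invented placeholders, not outputs of any real trained network:

```python
import numpy as np

def average_ensemble(per_model_probs):
    """Combine independently trained N-way classifiers by averaging
    their predicted class-probability vectors."""
    stacked = np.stack(per_model_probs)  # (num_models, num_classes)
    return stacked.mean(axis=0)          # (num_classes,)

# Hypothetical softmax outputs of three independently trained models
# for one image over N = 4 classes.
m1 = np.array([0.70, 0.10, 0.10, 0.10])
m2 = np.array([0.60, 0.20, 0.10, 0.10])
m3 = np.array([0.50, 0.30, 0.10, 0.10])

combined = average_ensemble([m1, m2, m3])
predicted_class = int(np.argmax(combined))
```

Because each constituent model outputs a valid probability distribution, their average is also a valid distribution over the same classes.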
Brief description of the drawings
Some embodiments are shown by way of example, and not limitation, in the accompanying drawings.
Fig. 1 is a network diagram showing a network environment suitable for creating and using a hierarchical deep CNN for image classification, according to some example embodiments.
Fig. 2 is a block diagram showing components of a hierarchical deep CNN server suitable for image classification, according to some example embodiments.
Fig. 3 is a block diagram showing components of a device suitable for image classification using hierarchical deep CNN techniques, according to some example embodiments.
Fig. 4 is a group diagram showing categorized images, according to some example embodiments.
Fig. 5 is a block diagram showing relationships between components of a server configured to identify the fine category of an image, according to some example embodiments.
Fig. 6 is a flowchart showing operations of a server performing a process of identifying coarse categories, according to some example embodiments.
Fig. 7 is a flowchart showing operations of a server performing a process of generating a hierarchical deep CNN for classifying images, according to some example embodiments.
Fig. 8 is a block diagram showing an example of a software architecture that may be installed on a machine, according to some example embodiments.
Fig. 9 shows a diagrammatic representation of a machine in the form of a computer system, according to an example embodiment, within which a set of instructions may be executed to cause the machine to perform any one or more of the methods discussed herein.
Detailed description
Example methods and systems relate to hierarchical deep CNNs for image classification. The examples merely represent possible variations. Unless explicitly stated otherwise, components and functions are optional and may be combined or subdivided, and operations may vary in sequence or be combined or subdivided. In the following description, numerous specific details are set forth for purposes of explanation, in order to provide a thorough understanding of example embodiments. It will be apparent to those skilled in the art, however, that the subject matter may be practiced without these specific details.
A hierarchical deep CNN (HD-CNN) follows a coarse-to-fine classification strategy and a modular design principle. For any given class label, a set of easy classes and a set of confusing classes can be defined. Accordingly, an initial coarse classifier CNN can separate the easily separable classes from one another. The challenging classes are then routed to downstream fine CNNs, which focus only on the confusing classes. In some example embodiments, the HD-CNN improves classification performance over standard deep CNN models. As with a CNN, the structure of the HD-CNN (e.g., the structure of each component CNN, the number of fine classes, and so on) can be determined by the designer, while the parameters of each layer of each CNN can be determined by training.
Pre-training the HD-CNN can provide advantages compared with training the HD-CNN from scratch. For example, compared with a standard deep CNN model, an HD-CNN has additional free parameters from the shared branch shallow layers and the C' branch deep layers. This greatly increases the number of free parameters in the HD-CNN relative to a standard CNN. Therefore, if the same amount of training data is used, overfitting is more likely to occur in the HD-CNN. Pre-training can help overcome the difficulty of insufficient training data.
Another potential benefit of pre-training is that a good selection of coarse categories is beneficial for training the branch components, each of which focuses on classifying a consistent subset of confusable fine categories. For example, branch component 1 may be good at distinguishing apples from oranges, while branch component 2 may be more capable of distinguishing buses from trains. Accordingly, the set of coarse categories is identified, and the coarse-category component is pre-trained to classify over that set of coarse categories.
Some training datasets include information about coarse categories and the relationships between fine and coarse categories. However, many training datasets do not. Those training datasets provide only a fine category for each item in the dataset and do not identify coarse categories. Accordingly, a process of grouping fine categories into coarse categories is described below with respect to Fig. 6.
Fig. 1 is a network diagram showing a network environment 100 suitable for creating and using a hierarchical deep CNN for image classification, according to some example embodiments. The network environment 100 includes e-commerce servers 120 and 140, an HD-CNN server 130, and devices 150A, 150B, and 150C, all communicatively coupled to each other via a network 170. The devices 150A, 150B, and 150C may be referred to collectively as "devices 150" or generically as a "device 150." The e-commerce servers 120 and 140 and the HD-CNN server 130 may be part of a network-based system 110. Alternatively, the devices 150 may connect to the HD-CNN server 130 directly or over a local network distinct from the network 170 used to connect to the e-commerce server 120 or 140. As described below with respect to Figs. 8-9, the e-commerce servers 120 and 140, the HD-CNN server 130, and the devices 150 may each be implemented, in whole or in part, in a computer system.
The e-commerce servers 120 and 140 provide an e-commerce application to other machines (e.g., the devices 150) via the network 170. The e-commerce servers 120 and 140 may also be connected directly to, or integrated with, the HD-CNN server 130. In some example embodiments, one e-commerce server 120 and the HD-CNN server 130 are part of the network-based system 110, while other e-commerce servers (e.g., the e-commerce server 140) are separate from the network-based system 110. The e-commerce application may provide a way for users to buy and sell items directly from and to each other, to buy items from and sell items to the e-commerce application provider, or both.
The HD-CNN server 130 creates an HD-CNN for classifying images, uses the HD-CNN to classify images, or both. For example, the HD-CNN server 130 can create an HD-CNN based on a training set for classifying images, or a pre-existing HD-CNN can be loaded onto the HD-CNN server 130. The HD-CNN server 130 can also respond to requests to classify images by providing fine categories for the images. The HD-CNN server 130 may provide data to other machines (e.g., the e-commerce servers 120 and 140 or the devices 150) via the network 170 or another network. The HD-CNN server 130 may receive data from other machines (e.g., the e-commerce servers 120 and 140 or the devices 150) via the network 170 or another network. In some example embodiments, the functions of the HD-CNN server 130 described herein are performed on a user device, such as a personal computer, a tablet computer, or a smart phone.
Also shown in Fig. 1 is a user 160. The user 160 may be a human user (e.g., a human being), a machine user (e.g., a computer configured by a software program to interact with the devices 150 and the HD-CNN server 130), or any suitable combination thereof (e.g., a human assisted by a machine, or a machine supervised by a human). The user 160 is not part of the network environment 100, but is associated with the devices 150 and may be a user of a device 150. For example, a device 150 may be a sensor, a desktop computer, a vehicle computer, a tablet computer, a navigation device, a portable media device, or a smart phone belonging to the user 160.
In some example embodiments, the HD-CNN server 130 receives data about an item of interest to a user. For example, a camera attached to the device 150A can take an image of an item the user 160 wishes to sell and transmit the image over the network 170 to the HD-CNN server 130. The HD-CNN server 130 categorizes the item based on the image. The category can be sent to the e-commerce server 120 or 140, to the device 150A, or any combination thereof. The e-commerce server 120 or 140 can use the category to aid in generating a listing of the item for sale. Similarly, the image may be of an item of interest to the user 160, and the category can be used by the e-commerce server 120 or 140 to aid in selecting listings of items to show to the user 160.
Any of the machines, databases, or devices shown in Fig. 1 may be implemented in a general-purpose computer modified (e.g., configured or programmed) by software to be a special-purpose computer that performs the functions described herein for that machine, database, or device. For example, a computer system able to implement any one or more of the methods described herein is discussed below with respect to Figs. 8-9. As used herein, a "database" is a data storage resource and may store data structured as a text file, a table, a spreadsheet, a relational database (e.g., an object-relational database), a triple store, a hierarchical data store, or any suitable combination thereof. Moreover, any two or more of the machines, databases, or devices shown in Fig. 1 may be combined into a single machine, and the functions described herein for any single machine, database, or device may be subdivided among multiple machines, databases, or devices.
The network 170 may be any network that enables communication between or among machines, databases, and devices (e.g., the HD-CNN server 130 and the devices 150). Accordingly, the network 170 may be a wired network, a wireless network (e.g., a mobile or cellular network), or any suitable combination thereof. The network 170 may include one or more portions that constitute a private network, a public network (e.g., the Internet), or any suitable combination thereof.
Fig. 2 is a block diagram showing components of the HD-CNN server 130, according to some example embodiments. The HD-CNN server 130 is shown as including a communication module 210, a coarse category identification module 220, a pre-train module 230, a fine-tune module 240, a classification module 250, and a storage module 260, all configured to communicate with each other (e.g., via a bus, shared memory, or a switch). Any one or more of the modules described herein may be implemented using hardware (e.g., a processor of a machine). Moreover, any two or more of these modules may be combined into a single module, and the functions described herein for a single module may be subdivided among multiple modules. Furthermore, according to various example embodiments, modules described herein as being implemented within a single machine, database, or device may be distributed across multiple machines, databases, or devices.
The communication module 210 is configured to send and receive data. For example, the communication module 210 may receive image data over the network 170 and send the received data to the classification module 250. As another example, the classification module 250 may identify the category of an item, and the category of the item can be transmitted by the communication module 210 over the network 170 to the e-commerce server 120.
The coarse category identification module 220 is configured to identify coarse categories for a given dataset. The coarse category identification module 220 determines which fine categories are related and groups them into coarse categories. For example, a provided dataset may have C fine categories, and the HD-CNN designer may determine the desired number C' of coarse categories. The coarse category identification module 220 identifies the mapping of the C fine categories to the C' coarse categories. The process 600 of Fig. 6, described below, can be used to group the fine categories into coarse categories.
The pre-train module 230 and the fine-tune module 240 are configured to determine the parameters of the HD-CNN. The pre-train module 230 pre-trains the coarse-category CNN and the fine-category CNNs to reduce overlap between the fine-category CNNs. After pre-training is complete, the fine-tune module 240 provides additional adjustments to the HD-CNN. The process 700 of Fig. 7, described below, can be used to perform the pre-training and fine-tuning.
The classification module 250 is configured to receive and process image data. The image data may be a two-dimensional image, a frame from a continuous video stream, a three-dimensional image, a depth image, an infrared image, a binocular image, or any suitable combination thereof. For example, an image may be received from a camera. To illustrate, a camera may take a picture and send it to the classification module 250. The classification module 250 determines the fine category of the image using an HD-CNN (e.g., by using a coarse-category CNN to determine the coarse category or coarse-category weights and using one or more fine-category CNNs to determine the fine category). The HD-CNN may have been generated using the pre-train module 230, the fine-tune module 240, or both. Alternatively, the HD-CNN may be supplied from an external source.
The storage module 260 is configured to store and retrieve data generated and used by the coarse category identification module 220, the pre-train module 230, the fine-tune module 240, and the classification module 250. For example, the HD-CNN generated by the pre-train module 230 can be stored by the storage module 260 for retrieval by the fine-tune module 240. Information regarding the categorization of an image, generated by the classification module 250, can also be stored by the storage module 260. The e-commerce server 120 or 140 can request the category of an image (e.g., by providing an image identifier), which can be retrieved from storage by the storage module 260 and sent over the network 170 using the communication module 210.
Fig. 3 is a block diagram showing components of the device 150, according to some example embodiments. The device 150 is shown as including an input module 310, a camera module 320, and a communication module 330, all configured to communicate with each other (e.g., via a bus, shared memory, or a switch). Any one or more of the modules described herein may be implemented using hardware (e.g., a processor of a machine). Moreover, any two or more of these modules may be combined into a single module, and the functions described herein for a single module may be subdivided among multiple modules. Furthermore, according to various example embodiments, modules described herein as being implemented within a single machine, database, or device may be distributed across multiple machines, databases, or devices.
The input module 310 is configured to receive input from a user via a user interface. For example, the user may enter a username and password into the input module, configure a camera, select an image to use as a basis for a listing or an item search, or any suitable combination thereof.
The camera module 320 is configured to capture image data. For example, an image may be received from a camera, a depth image may be received from an infrared camera, a pair of images may be received from a binocular camera, and so on.
The communication module 330 is configured to communicate data received by the input module 310 or the camera module 320 to the HD-CNN server 130, the e-commerce server 120, or the e-commerce server 140. For example, the input module 310 may receive a selection of an image taken with the camera module 320 and an indication that the image depicts an item the user (e.g., the user 160) wishes to sell. The communication module 330 may transmit the image and the indication to the e-commerce server 120. The e-commerce server 120 may send the image to the HD-CNN server 130 to request categorization of the image, generate a listing template based on the category, and cause the listing template to be presented to the user via the communication module 330 and the input module 310.
Fig. 4 is a group diagram showing categorized images, according to some example embodiments. In Fig. 4, twenty-seven images are correctly categorized as depicting apples (group 410), oranges (group 420), or buses (group 430). The groups 410-430 are referred to herein as apples, oranges, and buses. By inspection, distinguishing members of apples from members of buses is relatively easy, while distinguishing members of apples from members of oranges is more difficult. Images from apples and oranges are likely to have similar shapes, textures, and colors, so correctly distinguishing between them is harder. By contrast, images from buses typically have a visual appearance distinct from apples, and the classification is expected to be easier. Indeed, the two categories of apples and oranges can be defined as belonging to the same coarse category, while buses belong to a different coarse category. For example, in the CIFAR-100 dataset (discussed in "Learning Multiple Layers of Features from Tiny Images," Krizhevsky (2009)), apples and oranges are subclasses of "fruit and vegetables," while buses are a subclass of "vehicles 1." The CIFAR-100 dataset consists of 100 classes of natural images. The CIFAR-100 dataset has 50,000 training images and 10,000 test images.
Fig. 5 is a block diagram showing relationships between components of the classification module 250, according to some example embodiments. A single standard deep CNN can be used as a building block for the fine prediction components of the HD-CNN. As shown in Fig. 5, a coarse-category CNN 520 predicts the probabilities over the coarse categories. Multiple branch CNNs 540-550 are added independently. In some example embodiments, the branch CNNs 540-550 share branch shallow layers 530. The coarse-category CNN 520 and the multiple branch CNNs 540-550 each receive the input image and operate on it in parallel. Although each branch CNN 540-550 receives the input image and gives a probability distribution over the full set of fine categories, the results of each branch CNN 540-550 are only valid for a subset of the categories. The multiple full predictions from the branch CNNs 540-550 are linearly combined by a probabilistic averaging layer 560, weighted by the corresponding coarse category probabilities, to form the final fine category prediction.
The following notation is used in the discussion below. The dataset includes Nt training samples {xi, yi}t (where i is in the range 1 to Nt) and Ns test samples {xi, yi}s (where i is in the range 1 to Ns). xi and yi represent the image data and the image label, respectively. The image label corresponds to the fine category of the image. There are C predefined fine categories {Sk} in the dataset, where k is in the range 1 to C. There are C' coarse categories in the dataset.
Like standard deep CNN models, the HD-CNN achieves end-to-end classification. While a standard deep CNN model consists of only a single CNN, the HD-CNN mainly comprises three parts: a single coarse-category component B (corresponding to the coarse-category CNN 520), multiple branch fine-category components {Fj} (where j is in the range 1 to C', corresponding to the branch CNNs 540-550), and a single probabilistic averaging layer (corresponding to the probabilistic averaging layer 560). The single coarse-category CNN 520 receives raw image pixel data as input and outputs a probability distribution over the coarse categories. The coarse category probabilities are used by the probabilistic averaging layer 560 to assign weights to the full predictions made by the branch CNNs 540-550.
Fig. 5 also shows the set of branch CNNs 540-550, each of which makes a prediction over the full set of fine categories. In some example embodiments, the branch CNNs 540-550 share the parameters in the shallow layers 530 but have independent deep layers. Shallow layers are the layers in a CNN closest to the original input, while deep layers are the layers closer to the final output. Sharing the parameters in the shallow layers can bring the following benefits. First, in the shallow layers, each CNN can extract primitive low-level features (e.g., blobs, corners), which are useful for classifying all fine categories. Accordingly, the shallow layers can be shared between branch components even though each branch component focuses on a different set of fine categories. Second, sharing the parameters in the shallow layers greatly reduces the total number of parameters in the HD-CNN, which can contribute to the successful training of the HD-CNN model. If each branch fine-category component were trained completely independently of the others, the number of free parameters in the HD-CNN would grow linearly with the number of coarse categories. An excessive number of parameters in a model increases the likelihood of overfitting. Third, the computational cost and memory consumption of the HD-CNN are also reduced by the shared shallow layers, which is of practical significance for deploying HD-CNNs in real applications.
The probabilistic averaging layer 560 receives the predictions of all the branch CNNs 540-550 as well as the prediction of the coarse-category CNN 520 and produces a weighted average as the final prediction p(xi) for image i, as shown in the following equation:
p(xi) = Σ_{j=1}^{C'} Bij · pj(xi)
In this equation, Bij is the probability of coarse category j for image i as predicted by the coarse-category CNN 520, and pj(xi) is the fine category prediction made by the j-th branch component Fj for image i.
Both the coarse-category CNN 520 and the branch CNNs 540-550 can be implemented as any end-to-end deep CNN model that takes a raw image as input and returns probabilistic predictions over categories as output.
The multinomial logistic loss function used for training the fine-category components is augmented with a temporal sparsity penalty term to encourage each branch to focus on a subset of the fine categories. The modified loss function including the temporal sparsity penalty term is shown in the following equation:
E = −(1/n) Σ_{i=1}^{n} log(p_{yi}(xi)) + λ Σ_{j=1}^{C'} (tj − (1/n) Σ_{i=1}^{n} Bij)²
In this equation, n is the size of the training mini-batch, yi is the ground truth label of image i, and λ is a regularization constant. In some example embodiments, the value 5 is used for λ. Bij is the probability of coarse category j for image i as predicted by the coarse-category CNN 520. The target temporal sparsity of branch j is denoted tj.
Combined with branch initialization, the temporal sparsity can ensure that each branch component focuses on classifying a different subset of the fine categories, and prevents a small number of branches from receiving the majority of the coarse-category probability mass.
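The sparsity penalty described above can be sketched as follows. This assumes the penalty compares each branch's mean coarse probability over a mini-batch against its target sparsity t_j, which is a plausible reading of the surrounding definitions rather than a verbatim transcription of the patent's equation; the mini-batch values are invented:

```python
import numpy as np

def temporal_sparsity_penalty(B, targets, lam=5.0):
    """Penalty lam * sum_j (t_j - mean_i B_ij)^2 over a mini-batch.

    B:       (n, C') coarse-category probabilities for n images.
    targets: (C',) target temporal sparsity t_j for each branch.
    lam:     regularization constant (the text uses lambda = 5).
    """
    branch_usage = B.mean(axis=0)       # mean coarse probability per branch
    return lam * np.sum((targets - branch_usage) ** 2)

# Hypothetical mini-batch of n = 4 images over C' = 2 branches.
B = np.array([
    [0.9, 0.1],
    [0.8, 0.2],
    [0.7, 0.3],
    [0.6, 0.4],
])
targets = np.full(2, 0.5)               # encourage balanced branch usage
penalty = temporal_sparsity_penalty(B, targets)
```

Here branch 1 absorbs most of the probability mass (mean 0.75 versus the target 0.5), so the penalty pushes the coarse classifier toward spreading images more evenly across branches.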
Fig. 6 is a flowchart showing operations of the HD-CNN server 130 performing the process 600 of identifying coarse categories, according to some example embodiments. The process 600 includes operations 610, 620, 630, 640, and 650. By way of example only, and not limitation, the operations 610-650 are described as being performed by the modules 210-260.
In operation 610, the coarse category identification module 220 divides the set of training samples into a training set and an evaluation set. For example, the dataset {xi, yi}t consisting of Nt training samples, where i is in the range 1 to Nt, is divided into two parts, train_train and train_val. This can be done by selecting the desired distribution of samples between train_train and train_val, such as a 70%/30% split. Once the distribution is selected, samples can be randomly selected for each set in accordance with the appropriate ratio. In operation 620, the pre-train module 230 trains a deep CNN model based on train_train using standard training techniques. For example, the back-propagation training algorithm is one option for training the deep CNN model.
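The random 70/30 split in operation 610 can be sketched as follows; the dataset here is a toy list of sample ids standing in for {xi, yi}, not real image data:

```python
import random

def split_train_val(samples, train_fraction=0.7, seed=42):
    """Randomly partition samples into train_train and train_val."""
    rng = random.Random(seed)
    shuffled = samples[:]               # copy so the input is left untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Toy dataset of 10 sample ids.
samples = list(range(10))
train_train, train_val = split_train_val(samples)
```

A fixed seed is used only to make the sketch reproducible; in practice the split would be drawn fresh.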
In operation 630, the coarse category identification module 220 plots a confusion matrix based on train_val. The size of the confusion matrix is C × C. The columns of the matrix correspond to the predicted fine categories, and the rows of the matrix correspond to the actual fine categories in train_val. For example, if every prediction were correct, only the cells on the main diagonal of the matrix would be non-zero. Conversely, if every prediction were incorrect, the cells on the main diagonal of the matrix would all be zero.
The coarse category identification module 220 generates a distance matrix D by subtracting each element of the confusion matrix from 1 and zeroing the diagonal elements of D. The distance matrix is made symmetric by averaging D with DT (the transpose of D). After these operations are performed, each element Dij measures how easy it is to distinguish category i from category j.
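The construction of the distance matrix from the confusion matrix can be sketched as follows; the 3 × 3 confusion matrix here is a made-up example with rows normalized as fractions of the true labels:

```python
import numpy as np

def distance_from_confusion(conf):
    """Build the symmetric distance matrix D from a confusion matrix.

    D = 1 - conf, with the diagonal zeroed, then symmetrized by
    averaging with its transpose.
    """
    D = 1.0 - conf
    np.fill_diagonal(D, 0.0)
    return (D + D.T) / 2.0

# Hypothetical row-normalized confusion matrix for C = 3 fine categories
# (apples, oranges, buses): apples and oranges are often confused.
conf = np.array([
    [0.80, 0.18, 0.02],
    [0.20, 0.78, 0.02],
    [0.01, 0.01, 0.98],
])
D = distance_from_confusion(conf)
```

In the result, D[0, 1] is small relative to D[0, 2] and D[1, 2]: frequently confused categories (apples and oranges) end up close together, which is exactly what the later clustering step exploits.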
In operation 640, a low-dimensional feature representation {fi} of the fine categories is obtained, where i is in the range 1 to C. For example, a Laplacian eigenmap can be used for this purpose. The low-dimensional feature representation preserves local neighborhood information on a low-dimensional manifold and is used to cluster fine categories into coarse categories. In an example embodiment, the adjacency graph is constructed using k nearest neighbors. For example, the value 3 can be used for k. The weights of the adjacency graph are set using a heat kernel (e.g., with width parameter t = 0.95). In some example embodiments, the dimensionality of {fi} is 3.
The coarse category identification module 220 (in operation 650) clusters the C fine categories into the C' coarse categories. The clustering can be performed using affinity propagation, k-means clustering, or another clustering algorithm. Affinity propagation can automatically determine the number of coarse categories and may lead to clusters more balanced in size than other clustering methods. Balanced clusters help ensure that each branch component handles a similar number of fine categories and therefore has a similar workload. The damping factor λ in affinity propagation can affect the number of resulting clusters. In some example embodiments, λ is set to 0.98. The result of the clustering is a mapping P(y) = y' from fine categories y to coarse categories y'.
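Operation 650 can be sketched with k-means, one of the clustering options the text names (a minimal hand-rolled version rather than affinity propagation); the 3-D feature vectors stand in for hypothetical Laplacian-eigenmap embeddings of C = 6 fine categories:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means clustering of category feature vectors."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each category to its nearest center.
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned categories.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Made-up 3-D embeddings forming two tight groups, standing in for
# "fruit-like" and "vehicle-like" fine categories.
features = np.array([
    [0.0, 0.0, 0.0], [0.1, 0.0, 0.1], [0.0, 0.1, 0.0],   # group A
    [5.0, 5.0, 5.0], [5.1, 5.0, 4.9], [4.9, 5.1, 5.0],   # group B
])
coarse_of_fine = kmeans(features, k=2)   # the mapping P(y) = y'
```

The returned label array is the mapping P(y) = y': fine categories sharing a label belong to the same coarse category.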
For example, by training a deep CNN model on a dataset of 50,000 training images and 10,000 test images, the 100 categories of the CIFAR100 dataset can be partitioned into coarse categories. The number of coarse categories is provided as an input (for example, four coarse categories can be selected), and process 600 is used to partition the fine categories into the coarse categories. In one example embodiment, the 100 categories of the CIFAR100 dataset are partitioned into four coarse categories, as shown in the table below.
FIG. 7 is a flowchart showing the operations of the HD-CNN server 130 in a process 700 of generating an HD-CNN for classifying images, according to some example embodiments. Process 700 includes operations 710, 720, 730, 740, 750, and 760. Purely as an example and without limitation, operations 710-760 are described as being performed by modules 210-260.
In operation 710, the pre-training module 230 trains a coarse category CNN on the set of coarse categories. For example, the set of coarse categories can be identified using process 600. Using the mapping P(y) = y', the fine categories of the training dataset are replaced with coarse categories. In an example embodiment, the dataset {x_i, y'_i} is used to train a standard deep CNN model, where i ranges from 1 to N_t. The trained model becomes the coarse category component of the HD-CNN (for example, coarse category CNN 520).
In an example embodiment, a network consisting of three convolutional layers, one fully connected layer, and one SOFTMAX layer is used. Each convolutional layer has 64 filters. Rectified linear units (ReLU) are used as the activation units. Pooling layers and response normalization layers are also used between the convolutional layers. An example architecture is fully defined in the table of example 1 below. Another example architecture is defined in the table of example 2.
In the tables above, a filter uses the indicated number of inputs (for example, pixel values). For example, a 5x5 filter looks at 25 pixels in a 5x5 grid to determine a single value, and considers each 5x5 grid in the input image. Therefore, a layer with 64 5×5 filters generates 64 outputs for each input pixel, each of these values being based on the 5×5 pixel grid centered on that input pixel. MAX pooling takes multiple inputs for a set of pixels and provides a single output, namely the maximum of those inputs. For example, a 3x3 MAX pooling layer outputs one value for each 3x3 block of pixels, namely the maximum of those 9 pixels. AVG pooling takes multiple inputs for a set of pixels and provides a single output, namely the average (e.g., mean) of those inputs. A normalization layer normalizes the values output by the preceding layer. The cccp layers provide a nonlinear component to the CNN. The SOFTMAX function is a normalized exponential function, which provides a nonlinear variant of multinomial logistic regression. In some example embodiments, the SOFTMAX function takes a K-dimensional vector of values and outputs a K-dimensional vector of values such that the elements of the output vector lie in the range 0 to 1 and sum to 1. For example, the following equation can be used to generate the output vector y from the input vector z:
y_j = exp(z_j) / Σ_k exp(z_k), where j = 1, ..., K and the sum in the denominator runs over k = 1, ..., K.
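A minimal implementation of the SOFTMAX function just described; the max-shift for numerical stability is standard practice rather than something specified in the text.

```python
import math

def softmax(z):
    """Normalized exponential: maps a K-dimensional vector to a vector
    whose entries lie in (0, 1) and sum to 1."""
    m = max(z)                           # shift for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

y = softmax([1.0, 2.0, 3.0])             # largest input gets largest probability
```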
In operation 720, the pre-training module 230 also trains a prototype fine category component. For example, the dataset {x_i, y_i} (where i ranges from 1 to N_t) is used to train a standard deep CNN model, which becomes the prototype fine category component. In an example embodiment, the CIFAR100 dataset is used to train a CNN as the prototype fine category component.
In operation 730, a loop begins to process each of the C' fine category components. Accordingly, operations 740 and 750 are performed for each fine category component. For example, when four coarse categories are identified, the loop iterates over each of four fine category components.
In operation 740, the pre-training module 230 makes a copy of the prototype fine category component for the fine category component. Accordingly, all fine category components are initialized to the same state. The fine category component is then further trained on the portion of the dataset corresponding to the coarse category of that fine category component. For example, the subset of the dataset {x_i, y_i} for which P(y_i) is the coarse category can be used. Once all of the fine category components and the coarse category component have been trained, the HD-CNN is constructed.
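Operations 730-750 can be sketched as follows; the data structures and the stub training function are placeholders standing in for real CNN components, not an implementation of the patent's training code.

```python
import copy

def build_fine_components(prototype, dataset, P, num_coarse, train_fn):
    """Initialize each fine category component as a copy of the prototype,
    then further train it only on examples whose coarse label matches."""
    components = []
    for coarse in range(num_coarse):
        component = copy.deepcopy(prototype)        # identical initial state
        subset = [(x, y) for (x, y) in dataset if P[y] == coarse]
        train_fn(component, subset)                 # further training on subset
        components.append(component)
    return components

# Stub "training": record how many examples the component saw.
proto = {"seen": 0}
data = [("img0", 0), ("img1", 1), ("img2", 2), ("img3", 2)]
P = {0: 0, 1: 0, 2: 1}                              # fine -> coarse mapping
def bump(component, subset):
    component["seen"] += len(subset)

comps = build_fine_components(proto, data, P, 2, bump)
```

Because each component is a deep copy, training one component leaves the prototype and the other components untouched.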
The shallow layers of the CNN for a fine category component can be kept fixed, while the deep layers are allowed to change during training. For example, using the structure of example 1 above, for each fine category component, the shallow layers conv1, pool1, and norm1 can be kept constant, while the deep layers conv2, pool2, norm2, conv3, pool3, ip1, and prob are changed during the training of each fine category component. In some example embodiments, the structure of the shallow layers remains fixed, but the values used in the shallow layers are allowed to change. With respect to the structure of example 2 above, for each fine category component, the shallow layers conv1, cccp1, cccp2, pool1, and conv2 can be kept constant, while the deep layers cccp3, cccp4, pool2, conv3, cccp5, cccp6, pool3, and prob are changed during the training of each fine category component.
In operation 760, the fine-tuning module 240 fine-tunes the constructed HD-CNN. The fine-tuning can be performed using a multinomial logistic loss function with a temporal sparsity penalty. The target temporal sparsity {t_j} (where j ranges from 1 to C') can be defined using the mapping P. For example, the following equation can be used, where S_k is the set of images from fine category k.
The batch size used for fine-tuning can be selected based on the computation time and the desired amount of learning per iteration. For example, a batch size of 250 can be used. After each batch, the training error can be measured. If the rate of improvement of the training error is below a threshold, the learning rate can be reduced (for example, reduced by 10%, halved, or reduced by another amount). The threshold can be changed when the learning rate is reduced. The fine-tuning process stops after a minimum learning rate is reached (for example, when the learning rate has been reduced to less than 50% of its original value), after a predetermined number of batches have been used for fine-tuning, or any suitable combination thereof.
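The learning-rate schedule described above can be sketched as a simple loop over per-batch training errors. All constants (threshold, decay factor, minimum rate) are illustrative placeholders; the text leaves the specific values open.

```python
def finetune_schedule(batch_errors, lr=0.01, threshold=0.005,
                      decay=0.5, min_lr=0.005):
    """Walk through per-batch training errors; reduce the learning rate
    whenever the error improvement falls below the threshold, and stop
    once the learning rate drops below the minimum."""
    prev = None
    for err in batch_errors:
        improvement = None if prev is None else prev - err
        if improvement is not None and improvement < threshold:
            lr *= decay                   # e.g., halve the learning rate
            if lr < min_lr:
                break                     # minimum learning rate reached
        prev = err
    return lr

# Error barely improves on the last batch, so the rate is halved once.
final_lr = finetune_schedule([0.5, 0.4, 0.399])
```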
According to various example embodiments, one or more of the methods described herein may facilitate generating an HD-CNN for classifying images. Moreover, compared with a standard deep CNN, one or more of the methods described herein may facilitate classifying images with a higher success rate. In addition, compared with previous methods, one or more of the methods described herein may help a user train an HD-CNN more quickly and with less computing power. Similarly, compared with training a CNN to a given quality of discrimination, one or more of the methods described herein may help train an HD-CNN to an equal quality of discrimination using fewer training samples.
When these effects are considered in aggregate, one or more of the methods described herein may obviate a need for certain efforts or resources that otherwise would be involved in generating or using an HD-CNN for image classification. Through one or more of the methods described herein, effort expended by a user in ordering items of interest may also be reduced. For example, accurately identifying the category of an item of interest to a user from an image may reduce the time or effort the user spends in creating an item listing or finding an item to purchase. Computing resources used by one or more machines, databases, or devices (for example, within the network environment 100) may similarly be reduced. Examples of such computing resources include processor cycles, network traffic, memory usage, data storage capacity, power consumption, and cooling capacity.
Software architecture
FIG. 8 is a block diagram 800 illustrating an architecture of software 802, which may be installed on any one or more of the devices described above. FIG. 8 is merely a non-limiting example of a software architecture, and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein. The software 802 may be implemented by hardware such as the machine 900 of FIG. 9, which includes processors 910, memory 930, and I/O components 950. In this example architecture, the software 802 can be conceptualized as a stack of layers, where each layer may provide a particular functionality. For example, the software 802 includes layers such as an operating system 804, libraries 806, frameworks 808, and applications 810. Operationally, according to some embodiments, the applications 810 invoke application programming interface (API) calls 812 through the software stack and receive messages 814 in response to the API calls 812.
In various implementations, the operating system 804 manages hardware resources and provides common services. The operating system 804 includes, for example, a kernel 820, services 822, and drivers 824. In some implementations, the kernel 820 acts as an abstraction layer between the hardware and the other software layers. For example, the kernel 820 provides, among other functionality, memory management, processor management (e.g., scheduling), component management, networking, and security settings. The services 822 can provide other common services for the other software layers. The drivers 824 can be responsible for controlling or interfacing with the underlying hardware. For instance, the drivers 824 can include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth.
In some implementations, the libraries 806 provide a low-level common infrastructure that can be utilized by the applications 810. The libraries 806 can include system libraries 830 (e.g., the C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries 806 can include API libraries 832 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render two-dimensional (2D) and three-dimensional (3D) graphic content on a display), database libraries (e.g., SQLite, which provides various relational database functions), web libraries (e.g., WebKit, which provides web browsing functionality), and the like. The libraries 806 can also include a wide variety of other libraries 834 to provide many other APIs to the applications 810.
According to some implementations, the frameworks 808 provide a high-level common infrastructure that can be utilized by the applications 810. For example, the frameworks 808 provide various graphical user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks 808 can provide a broad spectrum of other APIs that can be utilized by the applications 810, some of which may be specific to a particular operating system or platform.
In an example embodiment, the applications 810 include a home application 850, a contacts application 852, a browser application 854, a book reader application 856, a location application 858, a media application 860, a messaging application 862, a game application 864, and a broad assortment of other applications such as a third-party application 866. According to some embodiments, the applications 810 are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications 810, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application 866 (e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as iOS™, Android™, or another mobile operating system. In this example, the third-party application 866 can invoke the API calls 812 provided by the mobile operating system 804 to facilitate the functionality described herein.
Example machine framework and machine readable media
FIG. 9 is a block diagram illustrating components of a machine 900, according to some example embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically, FIG. 9 shows a diagrammatic representation of the machine 900 in the example form of a computer system, within which instructions 916 (e.g., software, a program, an application, an applet, an app, or other executable code) may be executed to cause the machine 900 to perform any one or more of the methodologies discussed herein. In alternative embodiments, the machine 900 operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine 900 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine 900 may include, but is not limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), another smart device, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 916, sequentially or otherwise, that specify actions to be taken by the machine 900. Further, while only a single machine 900 is illustrated, the term "machine" shall also be taken to include a collection of machines 900 that individually or jointly execute the instructions 916 to perform any one or more of the methodologies discussed herein.
The machine 900 may include processors 910, memory 930, and I/O components 950, which may be configured to communicate with each other via a bus 902. In an example embodiment, the processors 910 (e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 912 and a processor 914 that may execute the instructions 916. The term "processor" is intended to include multi-core processors that comprise two or more independent processors (also referred to as "cores") that can execute instructions contemporaneously. Although FIG. 9 shows multiple processors, the machine 900 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.
The memory 930 may include a main memory 932, a static memory 934, and a storage unit 936, each accessible to the processors 910 via the bus 902. The storage unit 936 may include a machine-readable medium 938 on which are stored the instructions 916 embodying any one or more of the methodologies or functions described herein. The instructions 916 may also reside, completely or at least partially, within the main memory 932, within the static memory 934, within at least one of the processors 910 (e.g., within a processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 900. Accordingly, in various implementations, the main memory 932, the static memory 934, and the processors 910 are considered machine-readable media 938.
As used herein, the term "memory" refers to a machine-readable medium 938 able to store data temporarily or permanently, and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium 938 is shown in an example embodiment to be a single medium, the term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store the instructions 916. The term "machine-readable medium" shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., instructions 916) for execution by a machine (e.g., machine 900), such that the instructions, when executed by one or more processors of the machine (e.g., processors 910), cause the machine 900 to perform any one or more of the methodologies described herein. Accordingly, a "machine-readable medium" refers to a single storage apparatus or device, as well as "cloud-based" storage systems or storage networks that include multiple storage apparatus or devices. The term "machine-readable medium" shall accordingly be taken to include, but not be limited to, one or more data repositories in the form of solid-state memory (e.g., flash memory), optical media, magnetic media, other non-volatile memory (e.g., erasable programmable read-only memory (EPROM)), or any suitable combination thereof. The term "machine-readable medium" specifically excludes non-statutory signals per se.
The I/O components 950 include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. In general, it will be appreciated that the I/O components 950 can include many other components that are not shown in FIG. 9. The I/O components 950 are grouped according to functionality merely to simplify the following discussion, and the grouping is in no way limiting. In various example embodiments, the I/O components 950 include output components 952 and input components 954. The output components 952 include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor), other signal generators, and so forth. The input components 954 include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a physical button, a touch screen that provides the location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like.
In further example embodiments, the I/O components 950 include biometric components 956, motion components 958, environmental components 960, or position components 962, among a wide array of other components. For example, the biometric components 956 include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components 958 include acceleration sensor components (e.g., an accelerometer), gravitation sensor components, rotation sensor components (e.g., a gyroscope), and so forth. The environmental components 960 include, for example, illumination sensor components (e.g., a photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., a barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., machine olfaction detection sensors, gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components 962 include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.
Communication can be implemented using a wide variety of technologies. The I/O components 950 may include communication components 964 operable to couple the machine 900 to a network 980 or to devices 970 via a coupling 982 and a coupling 972, respectively. For example, the communication components 964 include a network interface component or another suitable device to interface with the network 980. In further examples, the communication components 964 include wired communication components, wireless communication components, cellular communication components, near field communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices 970 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)).
Moreover, in some implementations, the communication components 964 detect identifiers or include components operable to detect identifiers. For example, the communication components 964 include radio frequency identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar codes, and multi-dimensional bar codes such as Quick Response (QR) codes, Aztec codes, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, Uniform Commercial Code Reduced Space Symbology (UCC RSS)-2D bar codes, and other optical codes), acoustic detection components (e.g., microphones to identify tagged audio signals), or any suitable combination thereof. In addition, a variety of information can be derived via the communication components 964, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.
Transmission medium
In various example embodiments, one or more portions of the network 980 may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the public switched telephone network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network 980 or a portion of the network 980 may include a wireless or cellular network, and the coupling 982 may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling 982 can implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1xRTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) technology including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), the Long Term Evolution (LTE) standard, other standards defined by various standard-setting organizations, other long-range protocols, or other data transfer technology.
In example embodiments, the instructions 916 are transmitted or received over the network 980 using a transmission medium via a network interface device (e.g., a network interface component included in the communication components 964) and utilizing any one of a number of well-known transfer protocols (e.g., Hypertext Transfer Protocol (HTTP)). Similarly, in other example embodiments, the instructions 916 are transmitted or received to the devices 970 using a transmission medium via the coupling 972 (e.g., a peer-to-peer coupling). The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions 916 for execution by the machine 900, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.
Furthermore, a transmission medium or signal carrying machine-readable instructions comprises one embodiment of a machine-readable medium 938.
Language
Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term "invention" merely for convenience, without intending to voluntarily limit the scope of this application to any single disclosure or inventive concept if more than one is, in fact, disclosed.
The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
As it is used herein, term "or" being interpreted as including property or exclusive meaning.Furthermore, it is possible to be directed to Multiple examples are provided here depicted as the resource of single instance, operation or structure.In addition, various resources, operation, module, drawing It is arbitrary to a certain extent to hold up the border between data storage, and is shown in the context that specific illustrative is configured Specific operation.Other distribution of function are contemplated, and these distribution can fall into the scope of the various embodiments of the disclosure It is interior.In general, the 26S Proteasome Structure and Function presented in example arrangement as single resource may be implemented as combination structure or Resource.Similarly, the 26S Proteasome Structure and Function presented as single resource may be implemented as single resource.These and other become Type, modification, addition and improvement are fallen into the range of the embodiment of the disclosure represented by appended claims.Therefore, specification It should be seen as with accompanying drawing illustrative rather than limited significance.
The following enumerated examples define various example embodiments of the methods, machine-readable media, and systems (e.g., apparatus) discussed herein:
Example 1. A system comprising:
a coarse category identification module configured to:
access a dataset comprising categorized data, the categorized data having a plurality of fine categories;
identify a plurality of coarse categories, a quantity of the coarse categories being smaller than a quantity of the fine categories; and
determine, for each fine category, an associated coarse category;
a pre-training module configured to:
train a base convolutional neural network (CNN) to distinguish between the coarse categories; and
train a fine CNN for each coarse category, the fine CNN for a coarse category serving to distinguish between the fine categories associated with that coarse category; and
a classification module configured to:
receive a request to classify data;
determine, using the base CNN, a coarse category of the data;
determine, using the fine CNN for the determined coarse category, a fine category of the data; and
send, in response to the request, the fine category of the data.
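The two-stage inference recited in example 1 (the base CNN selects a coarse category, and the corresponding fine CNN selects the fine category) can be sketched with stub classifiers; the lambdas below are placeholders standing in for trained CNNs.

```python
def hd_cnn_classify(x, coarse_cnn, fine_cnns):
    """Two-stage classification: the base CNN picks a coarse category,
    and that category's fine CNN picks the fine category."""
    coarse = coarse_cnn(x)
    return fine_cnns[coarse](x)

# Stub classifiers standing in for trained CNN components.
coarse_cnn = lambda x: 0 if x < 10 else 1
fine_cnns = {0: lambda x: "cat" if x < 5 else "dog",
             1: lambda x: "car" if x < 20 else "truck"}

label = hd_cnn_classify(3, coarse_cnn, fine_cnns)
```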
Example 2. The system of example 1, wherein the coarse category identification module is further configured to:
divide the dataset into a training set and a validation set;
train a first CNN model using the training set; and
generate a confusion matrix for the first CNN model using the validation set; wherein
determining the associated coarse category for each fine category comprises: applying an affinity propagation algorithm to the confusion matrix.
Example 3. The system of example 1 or example 2, wherein the coarse category identification module is further configured to:
obtain a low-dimensional feature representation of the fine categories using a Laplacian eigenmap.
Example 4. The system of any of examples 1 to 3, wherein the pre-training module is configured to train the fine CNN for each coarse category using operations comprising:
training a second CNN model using the training set;
generating the fine CNN for each coarse category from the second CNN; and
training the fine CNN for each coarse category using a subset of the training set, the subset excluding data of fine categories not associated with the coarse category.
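The subset selection in example 4 amounts to filtering the training set by the fine-to-coarse mapping before training each fine CNN. A minimal sketch, where the mapping and example names are hypothetical:

```python
def fine_training_subset(training_set, fine_to_coarse, coarse_category):
    """Select the training examples whose fine label belongs to the given
    coarse category; a fine CNN is then trained only on this subset."""
    return [(x, fine) for (x, fine) in training_set
            if fine_to_coarse[fine] == coarse_category]

# Illustrative labels only; not taken from the patent.
fine_to_coarse = {"sedan": "vehicle", "truck": "vehicle",
                  "oak": "tree", "pine": "tree"}
training_set = [("img0", "sedan"), ("img1", "oak"),
                ("img2", "truck"), ("img3", "pine")]
subset = fine_training_subset(training_set, fine_to_coarse, "vehicle")
```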
Example 5. The system of any of examples 1 to 4, wherein:
the pre-training module is further configured to combine the CNN for distinguishing between coarse categories with each CNN for distinguishing between fine categories, to form a hierarchical deep CNN (HD-CNN); and
the system further comprises a fine-tuning module configured to fine-tune the HD-CNN.
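One natural way to read the "combining" in example 5 is a probabilistic mixture: each fine CNN's distribution over the full fine label space is weighted by the base CNN's probability for the corresponding coarse category. The component outputs below are stand-in values, not the patent's numbers.

```python
import numpy as np

def hd_cnn_predict(x, base_probs, fine_prob_fns):
    """Weighted combination of fine CNN outputs, with weights given by the
    base CNN's coarse-category probabilities (an illustrative assembly)."""
    return sum(w * fn(x) for w, fn in zip(base_probs(x), fine_prob_fns))

# Stand-in components over 2 coarse and 4 fine categories.
base_probs = lambda x: np.array([0.7, 0.3])
fine_prob_fns = [
    lambda x: np.array([0.6, 0.4, 0.0, 0.0]),  # fine CNN for coarse category 0
    lambda x: np.array([0.0, 0.0, 0.5, 0.5]),  # fine CNN for coarse category 1
]
p = hd_cnn_predict(None, base_probs, fine_prob_fns)
```

The combined output is again a proper distribution over the fine categories, which is what makes end-to-end fine-tuning of the assembled HD-CNN possible.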
Example 6. The system of example 5, wherein the fine-tuning module is configured to fine-tune the HD-CNN using operations comprising:
beginning the fine-tuning with a learning factor;
training the HD-CNN by iterating over a series of training batches using the learning factor;
after each iteration, comparing the training error of the training batch with a threshold;
determining, based on the comparison, that the training error of the training batch is below the threshold; and
modifying the learning factor in response to determining that the training error of the training batch is below the threshold.
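The schedule in example 6 can be sketched as a loop that shrinks the learning factor whenever a batch's training error drops below the threshold. The decay factor, the threshold value, and the `train_step` stub are illustrative assumptions; the patent text does not fix how the learning factor is modified.

```python
def fine_tune(batches, train_step, learning_factor=0.01,
              threshold=0.1, decay=0.5):
    """Iterate over training batches; whenever a batch's training error
    falls below the threshold, shrink the learning factor."""
    for batch in batches:
        error = train_step(batch, learning_factor)
        if error < threshold:
            learning_factor *= decay
    return learning_factor

# Deterministic stub standing in for one HD-CNN training iteration:
# errors 0.4 and 0.2 leave the factor alone; the two 0.05 errors halve it.
errors = iter([0.4, 0.2, 0.05, 0.05])
final_factor = fine_tune(range(4), lambda batch, lf: next(errors))
```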
Example 7. The system of example 5 or example 6, wherein the fine-tuning module is configured to fine-tune the HD-CNN using operations comprising:
applying a temporal sparsity term in the evaluation of each CNN used to distinguish between fine categories.
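A temporal sparsity term, as in example 7, can be read as a penalty on how much of a batch each fine CNN handles on average, encouraging each branch to serve only its share of the inputs. The squared-error form and all numbers below are illustrative choices; the patent names the term but not its exact functional form.

```python
import numpy as np

def temporal_sparsity_penalty(branch_weights, target, lam=1.0):
    """Penalize deviation of each fine CNN's mean usage over a batch
    from a target usage fraction (an illustrative squared-error form)."""
    mean_usage = branch_weights.mean(axis=0)   # per-branch average over batch
    return lam * float(np.sum((target - mean_usage) ** 2))

# Soft routing weights for a batch of 4 inputs over 2 fine CNNs.
balanced = np.array([[1.0, 0.0], [0.8, 0.2], [0.2, 0.8], [0.0, 1.0]])
skewed = np.array([[1.0, 0.0]] * 4)            # everything routed to branch 0
target = np.array([0.5, 0.5])                  # each branch should take half
```

The balanced routing incurs essentially no penalty, while routing every input through one branch is penalized.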
Example 8. The system of any of examples 1 to 7, wherein the data set comprising categorized data comprises categorized images.
Example 9. A method comprising:
accessing a data set comprising categorized data, the categorized data having a plurality of fine categories;
identifying a plurality of coarse categories, the number of coarse categories being smaller than the number of fine categories;
for each fine category, determining an associated coarse category;
training a base convolutional neural network (CNN) for distinguishing between the coarse categories, the base CNN being implemented by a processor of a machine;
training a fine CNN for each coarse category, the fine CNN for the coarse category serving to distinguish between the fine categories associated with the coarse category;
receiving a request to classify data;
determining a coarse category of the data using the base CNN;
determining a fine category of the data using the fine CNN for the determined coarse category; and
sending the fine category of the data in response to the request.
Example 10. The method of example 9, further comprising:
dividing the data set into a training set and a validation set;
training a first CNN model using the training set; and
generating a confusion matrix for the first CNN model using the validation set; wherein
determining the associated coarse category for each fine category comprises applying an affinity propagation algorithm to the confusion matrix.
Example 11. The method of example 9 or example 10, further comprising: obtaining a low-dimensional feature representation of the fine categories using Laplacian eigenmaps.
Example 12. The method of any of examples 9 to 11, wherein training the fine CNN for each coarse category comprises:
training a second CNN model using the training set;
generating the fine CNN for each coarse category from the second CNN; and
training the fine CNN for each coarse category using a subset of the training set, the subset excluding data of fine categories not associated with the coarse category.
Example 13. The method of any of examples 9 to 12, further comprising:
combining the base CNN with each fine CNN to form a hierarchical deep CNN (HD-CNN); and
fine-tuning the HD-CNN.
Example 14. The method of any of examples 9 to 13, wherein fine-tuning the HD-CNN comprises:
beginning the fine-tuning with a learning factor;
training the HD-CNN by iterating over a series of training batches using the learning factor;
after each iteration, comparing the training error of the training batch with a threshold;
determining, based on the comparison, that the training error of the training batch is below the threshold; and
modifying the learning factor in response to determining that the training error of the training batch is below the threshold.
Example 15. The method of any of examples 9 to 14, wherein fine-tuning the HD-CNN comprises:
applying a temporal sparsity term in the evaluation of each CNN used to distinguish between fine categories.
Example 16. The method of example 9, wherein the data set comprising categorized data comprises categorized images.
Example 17. A machine-readable medium carrying instructions executable by a processor of a machine to perform the method of any one of examples 9 to 16.

Claims (18)

1. A system comprising:
a coarse category identification module configured to:
access a data set comprising categorized data, the categorized data having a plurality of fine categories;
identify a plurality of coarse categories, the number of coarse categories being smaller than the number of fine categories; and
for each fine category, determine an associated coarse category;
a pre-training module configured to:
train a base convolutional neural network (CNN) for distinguishing between the coarse categories; and
train a fine CNN for each coarse category, the fine CNN for the coarse category serving to distinguish between the fine categories associated with the coarse category; and
a classification module configured to:
receive a request to classify data;
determine a coarse category of the data using the base CNN;
determine a fine category of the data using the fine CNN for the determined coarse category; and
send the fine category of the data in response to the request.
2. The system of claim 1, wherein the coarse category identification module is further configured to:
divide the data set into a training set and a validation set;
train a first CNN model using the training set; and
generate a confusion matrix for the first CNN model using the validation set; wherein
determining the associated coarse category for each fine category comprises applying an affinity propagation algorithm to the confusion matrix.
3. The system of claim 2, wherein the coarse category identification module is further configured to:
obtain a low-dimensional feature representation of the fine categories using Laplacian eigenmaps.
4. The system of claim 2, wherein the pre-training module is configured to train the fine CNN for each coarse category using operations comprising:
training a second CNN model using the training set;
generating the fine CNN for each coarse category from the second CNN; and
training the fine CNN for each coarse category using a subset of the training set, the subset excluding data of fine categories not associated with the coarse category.
5. The system of claim 1, wherein:
the pre-training module is further configured to combine the CNN for distinguishing between coarse categories with each CNN for distinguishing between fine categories, to form a hierarchical deep CNN (HD-CNN); and
the system further comprises a fine-tuning module configured to fine-tune the HD-CNN.
6. The system of claim 5, wherein the fine-tuning module is configured to fine-tune the HD-CNN using operations comprising:
beginning the fine-tuning with a learning factor;
training the HD-CNN by iterating over a series of training batches using the learning factor;
after each iteration, comparing the training error of the training batch with a threshold;
determining, based on the comparison, that the training error of the training batch is below the threshold; and
modifying the learning factor in response to determining that the training error of the training batch is below the threshold.
7. The system of claim 5, wherein the fine-tuning module is configured to fine-tune the HD-CNN using operations comprising:
applying a temporal sparsity term in the evaluation of each CNN used to distinguish between fine categories.
8. The system of claim 1, wherein the data set comprising categorized data comprises categorized images.
9. A computer-implemented method comprising:
accessing a data set comprising categorized data, the categorized data having a plurality of fine categories;
identifying a plurality of coarse categories, the number of coarse categories being smaller than the number of fine categories;
for each fine category, determining an associated coarse category;
training a base convolutional neural network (CNN) for distinguishing between the coarse categories;
training a fine CNN for each coarse category, the fine CNN for the coarse category serving to distinguish between the fine categories associated with the coarse category;
receiving a request to classify data;
determining a coarse category of the data using the base CNN;
determining a fine category of the data using the fine CNN for the determined coarse category; and
sending the fine category of the data in response to the request.
10. The method of claim 9, further comprising:
dividing the data set into a training set and a validation set;
training a first CNN model using the training set; and
generating a confusion matrix for the first CNN model using the validation set; wherein
determining the associated coarse category for each fine category comprises applying an affinity propagation algorithm to the confusion matrix.
11. The method of claim 10, further comprising: obtaining a low-dimensional feature representation of the fine categories using Laplacian eigenmaps.
12. The method of claim 10, wherein training the fine CNN for each coarse category comprises:
training a second CNN model using the training set;
generating the fine CNN for each coarse category from the second CNN; and
training the fine CNN for each coarse category using a subset of the training set, the subset excluding data of fine categories not associated with the coarse category.
13. The method of claim 9, further comprising:
combining the base CNN with each fine CNN to form a hierarchical deep CNN (HD-CNN); and
fine-tuning the HD-CNN.
14. The method of claim 13, wherein fine-tuning the HD-CNN comprises:
beginning the fine-tuning with a learning factor;
training the HD-CNN by iterating over a series of training batches using the learning factor;
after each iteration, comparing the training error of the training batch with a threshold;
determining, based on the comparison, that the training error of the training batch is below the threshold; and
modifying the learning factor in response to determining that the training error of the training batch is below the threshold.
15. The method of claim 13, wherein fine-tuning the HD-CNN comprises:
applying a temporal sparsity term in the evaluation of each CNN used to distinguish between fine categories.
16. The method of claim 9, wherein the data set comprising categorized data comprises categorized images.
17. A non-transitory machine-readable medium having instructions embodied thereon, the instructions executable by a processor of a machine to perform operations comprising:
accessing a data set comprising categorized data, the categorized data having a plurality of fine categories;
identifying a plurality of coarse categories, the number of coarse categories being smaller than the number of fine categories;
for each fine category, determining an associated coarse category;
training a base convolutional neural network (CNN) for distinguishing between the coarse categories, the CNN being implemented by a processor of the machine;
training a fine CNN for each coarse category, the fine CNN for the coarse category serving to distinguish between the fine categories associated with the coarse category;
receiving a request to classify data;
determining a coarse category of the data using the base CNN;
determining a fine category of the data using the fine CNN for the determined coarse category; and
sending the fine category of the data in response to the request.
18. A machine-readable medium carrying instructions executable by a processor of a machine to perform the method of any one of claims 9 to 16.
CN201580058248.6A 2014-10-27 2015-10-27 Hierarchical deep convolutional neural network Pending CN107077625A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201462068883P 2014-10-27 2014-10-27
US62/068,883 2014-10-27
US14/582,059 US10387773B2 (en) 2014-10-27 2014-12-23 Hierarchical deep convolutional neural network for image classification
US14/582,059 2014-12-23
PCT/US2015/057557 WO2016069581A1 (en) 2014-10-27 2015-10-27 Hierarchical deep convolutional neural network

Publications (1)

Publication Number Publication Date
CN107077625A true CN107077625A (en) 2017-08-18

Family

ID=55792253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580058248.6A Pending CN107077625A (en) Hierarchical deep convolutional neural network

Country Status (6)

Country Link
US (1) US10387773B2 (en)
EP (1) EP3213261A4 (en)
JP (1) JP2017538195A (en)
KR (1) KR20170077183A (en)
CN (1) CN107077625A (en)
WO (1) WO2016069581A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053836A * 2018-01-18 2018-05-18 成都嗨翻屋文化传播有限公司 Audio automatic labeling method based on deep learning
CN108764051A (en) * 2018-04-28 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device and mobile terminal
CN108921190A * 2018-05-24 2018-11-30 北京飞搜科技有限公司 Image classification method and device, and electronic equipment
CN109063824A * 2018-07-25 2018-12-21 深圳市中悦科技有限公司 Deep three-dimensional convolutional neural network creation method and device, storage medium and processor
CN109934293A (en) * 2019-03-15 2019-06-25 苏州大学 Image-recognizing method, device, medium and obscure perception convolutional neural networks
US10387773B2 (en) 2014-10-27 2019-08-20 Ebay Inc. Hierarchical deep convolutional neural network for image classification
CN110852288A (en) * 2019-11-15 2020-02-28 苏州大学 Cell image classification method based on two-stage convolutional neural network
CN110929623A (en) * 2019-11-15 2020-03-27 北京达佳互联信息技术有限公司 Multimedia file identification method, device, server and storage medium
CN110968073A (en) * 2019-11-22 2020-04-07 四川大学 Double-layer tracing identification method for commutation failure reasons of HVDC system
CN111279363A (en) * 2017-11-20 2020-06-12 谷歌有限责任公司 Generating object embedding from images
CN113705527A (en) * 2021-09-08 2021-11-26 西南石油大学 Expression recognition method based on loss function integration and coarse and fine hierarchical convolutional neural network
CN113950706A (en) * 2019-06-13 2022-01-18 艾克斯佩迪亚公司 Image classification system

Families Citing this family (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007021667A2 (en) * 2005-08-09 2007-02-22 Walker Digital, Llc Apparatus, systems and methods for facilitating commerce
KR102486699B1 (en) 2014-12-15 2023-01-11 삼성전자주식회사 Method and apparatus for recognizing and verifying image, and method and apparatus for learning image recognizing and verifying
US10346726B2 (en) * 2014-12-15 2019-07-09 Samsung Electronics Co., Ltd. Image recognition method and apparatus, image verification method and apparatus, learning method and apparatus to recognize image, and learning method and apparatus to verify image
US9818048B2 (en) * 2015-01-19 2017-11-14 Ebay Inc. Fine-grained categorization
JP2016146174A (en) * 2015-02-06 2016-08-12 パナソニックIpマネジメント株式会社 Determination method and program
EP3065086A1 (en) * 2015-03-02 2016-09-07 Medizinische Universität Wien Computerized device and method for processing image data
US11275747B2 (en) * 2015-03-12 2022-03-15 Yahoo Assets Llc System and method for improved server performance for a deep feature based coarse-to-fine fast search
WO2016177722A1 (en) 2015-05-05 2016-11-10 Medizinische Universität Wien Computerized device and method for processing image data
US10529318B2 (en) * 2015-07-31 2020-01-07 International Business Machines Corporation Implementing a classification model for recognition processing
US20180220589A1 (en) * 2015-11-03 2018-08-09 Keith Charles Burden Automated pruning or harvesting system for complex morphology foliage
CN111860812B (en) * 2016-04-29 2024-03-01 中科寒武纪科技股份有限公司 Apparatus and method for performing convolutional neural network training
US9971958B2 (en) * 2016-06-01 2018-05-15 Mitsubishi Electric Research Laboratories, Inc. Method and system for generating multimodal digital images
WO2018022821A1 (en) * 2016-07-29 2018-02-01 Arizona Board Of Regents On Behalf Of Arizona State University Memory compression in a deep neural network
WO2018035082A1 (en) * 2016-08-15 2018-02-22 Raptor Maps, Inc. Systems, devices, and methods for monitoring and assessing characteristics of harvested specialty crops
US12020174B2 (en) 2016-08-16 2024-06-25 Ebay Inc. Selecting next user prompt types in an intelligent online personal assistant multi-turn dialog
US9646243B1 (en) 2016-09-12 2017-05-09 International Business Machines Corporation Convolutional neural networks using resistive processing unit array
US9715656B1 (en) 2016-09-12 2017-07-25 International Business Machines Corporation Killing asymmetric resistive processing units for neural network training
WO2018067978A1 (en) * 2016-10-08 2018-04-12 Purdue Research Foundation Method and apparatus for generating two-dimensional image data describing a three-dimensional image
US11200273B2 (en) 2016-10-16 2021-12-14 Ebay Inc. Parallel prediction of multiple image aspects
US11748978B2 (en) 2016-10-16 2023-09-05 Ebay Inc. Intelligent online personal assistant with offline visual search database
US11004131B2 (en) 2016-10-16 2021-05-11 Ebay Inc. Intelligent online personal assistant with multi-turn dialog based on visual search
US10860898B2 (en) 2016-10-16 2020-12-08 Ebay Inc. Image analysis and prediction based visual search
US10970768B2 (en) 2016-11-11 2021-04-06 Ebay Inc. Method, medium, and system for image text localization and comparison
EP3542319B1 (en) * 2016-11-15 2023-07-26 Google LLC Training neural networks using a clustering loss
FR3059806B1 (en) * 2016-12-06 2019-10-25 Commissariat A L'energie Atomique Et Aux Energies Alternatives METHOD FOR OBTAINING AN IMAGE LABELING SYSTEM, CORRESPONDING COMPUTER PROGRAM AND DEVICE, IMAGE LABELING SYSTEM
EP3349152A1 (en) * 2017-01-17 2018-07-18 Catchoom Technologies S.L. Classifying data
US10660576B2 (en) 2017-01-30 2020-05-26 Cognizant Technology Solutions India Pvt. Ltd. System and method for detecting retinopathy
US10430978B2 (en) * 2017-03-02 2019-10-01 Adobe Inc. Editing digital images utilizing a neural network with an in-network rendering layer
EP3631690A4 (en) * 2017-05-23 2021-03-31 Intel Corporation Methods and apparatus for enhancing a neural network using binary tensor and scale factor pairs
US11704569B2 (en) 2017-05-23 2023-07-18 Intel Corporation Methods and apparatus for enhancing a binary weight neural network using a dependency tree
US11647903B2 (en) 2017-06-01 2023-05-16 University Of Washington Smartphone-based digital pupillometer
EP3596655B1 (en) * 2017-06-05 2023-08-09 Siemens Aktiengesellschaft Method and apparatus for analysing an image
EP3657403A1 (en) * 2017-06-13 2020-05-27 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11517768B2 (en) * 2017-07-25 2022-12-06 Elekta, Inc. Systems and methods for determining radiation therapy machine parameter settings
CN107610091A (en) * 2017-07-31 2018-01-19 阿里巴巴集团控股有限公司 Vehicle insurance image processing method, device, server and system
US10803105B1 (en) 2017-08-03 2020-10-13 Tamr, Inc. Computer-implemented method for performing hierarchical classification
EP3451293A1 (en) * 2017-08-28 2019-03-06 Thomson Licensing Method and apparatus for filtering with multi-branch deep learning
KR102532748B1 (en) 2017-09-08 2023-05-16 삼성전자주식회사 Method and device for learning neural network
KR102060176B1 (en) * 2017-09-12 2019-12-27 네이버 주식회사 Deep learning method deep learning system for categorizing documents
CN109543139B (en) * 2017-09-22 2021-09-17 杭州海康威视数字技术股份有限公司 Convolution operation method and device, computer equipment and computer readable storage medium
US10599978B2 (en) 2017-11-03 2020-03-24 International Business Machines Corporation Weighted cascading convolutional neural networks
US11164078B2 (en) 2017-11-08 2021-11-02 International Business Machines Corporation Model matching and learning rate selection for fine tuning
US10762125B2 (en) 2017-11-14 2020-09-01 International Business Machines Corporation Sorting images based on learned actions
KR102095335B1 (en) * 2017-11-15 2020-03-31 에스케이텔레콤 주식회사 Apparatus and method for generating and using neural network model applying accelerated computation
KR102607208B1 (en) * 2017-11-16 2023-11-28 삼성전자주식회사 Neural network learning methods and devices
US10535138B2 (en) * 2017-11-21 2020-01-14 Zoox, Inc. Sensor data segmentation
CN108229363A (en) * 2017-12-27 2018-06-29 北京市商汤科技开发有限公司 Key frame dispatching method and device, electronic equipment, program and medium
CN108304920B (en) * 2018-02-02 2020-03-10 湖北工业大学 Method for optimizing multi-scale learning network based on MobileNet
WO2019204700A1 (en) * 2018-04-19 2019-10-24 University Of South Florida Neonatal pain identification from neonatal facial expressions
US11068939B1 (en) 2018-04-27 2021-07-20 Gbt Travel Services Uk Limited Neural network for optimizing display of hotels on a user interface
CN110717929A (en) * 2018-07-11 2020-01-21 腾讯科技(深圳)有限公司 Image target detection method, device and storage medium
US20210019628A1 (en) * 2018-07-23 2021-01-21 Intel Corporation Methods, systems, articles of manufacture and apparatus to train a neural network
JP7257756B2 (en) * 2018-08-20 2023-04-14 キヤノン株式会社 Image identification device, image identification method, learning device, and neural network
CN110879950A (en) * 2018-09-06 2020-03-13 北京市商汤科技开发有限公司 Multi-stage target classification and traffic sign detection method and device, equipment and medium
KR20200030806A (en) 2018-09-13 2020-03-23 삼성전자주식회사 Non-transitory computer-readable medium comprising image conversion model based on artificial neural network and method of converting image of semiconductor wafer for monitoring semiconductor fabrication process
KR102712777B1 (en) 2018-10-29 2024-10-04 삼성전자주식회사 Electronic device and controlling method for electronic device
US11816971B2 (en) * 2018-11-13 2023-11-14 3M Innovative Properties Company System and method for risk classification and warning of flashover events
US11366874B2 (en) 2018-11-23 2022-06-21 International Business Machines Corporation Analog circuit for softmax function
CN109671026B * 2018-11-28 2020-09-29 浙江大学 Grayscale image noise reduction method based on dilated convolution and auto-encoder-decoder neural network
JP7114737B2 (en) 2018-11-30 2022-08-08 富士フイルム株式会社 Image processing device, image processing method, and program
CN109596326B (en) * 2018-11-30 2020-06-12 电子科技大学 Rotary machine fault diagnosis method based on convolution neural network with optimized structure
CN109753999B (en) * 2018-12-21 2022-06-07 西北工业大学 Fine-grained vehicle type identification method for automobile pictures with any visual angles
US10867210B2 (en) * 2018-12-21 2020-12-15 Waymo Llc Neural networks for coarse- and fine-object classifications
JP6991960B2 (en) * 2018-12-28 2022-01-13 Kddi株式会社 Image recognition device, image recognition method and program
US11557107B2 (en) 2019-01-02 2023-01-17 Bank Of America Corporation Intelligent recognition and extraction of numerical data from non-numerical graphical representations
US10311578B1 (en) * 2019-01-23 2019-06-04 StradVision, Inc. Learning method and learning device for segmenting an image having one or more lanes by using embedding loss to support collaboration with HD maps required to satisfy level 4 of autonomous vehicles and softmax loss, and testing method and testing device using the same
CN109919177B (en) * 2019-01-23 2022-03-29 西北工业大学 Feature selection method based on hierarchical deep network
US10325179B1 (en) * 2019-01-23 2019-06-18 StradVision, Inc. Learning method and learning device for pooling ROI by using masking parameters to be used for mobile devices or compact networks via hardware optimization, and testing method and testing device using the same
US10915809B2 (en) 2019-02-04 2021-02-09 Bank Of America Corporation Neural network image recognition with watermark protection
CN109951357A (en) * 2019-03-18 2019-06-28 西安电子科技大学 Network application recognition methods based on multilayer neural network
CN109871835B (en) * 2019-03-27 2021-10-01 南开大学 Face recognition method based on mutual exclusion regularization technology
EP3745311A1 (en) * 2019-05-29 2020-12-02 i2x GmbH A classification apparatus and method for optimizing throughput of classification models
CN110322050B (en) * 2019-06-04 2023-04-07 西安邮电大学 Wind energy resource data compensation method
CN110414358B (en) * 2019-06-28 2022-11-25 平安科技(深圳)有限公司 Information output method and device based on intelligent face recognition and storage medium
US11132577B2 (en) 2019-07-17 2021-09-28 Cognizant Technology Solutions India Pvt. Ltd System and a method for efficient image recognition
CN110929629A (en) * 2019-11-19 2020-03-27 中国科学院遥感与数字地球研究所 Remote sensing classification method for group building damage based on improved CNN
US11341370B2 (en) 2019-11-22 2022-05-24 International Business Machines Corporation Classifying images in overlapping groups of images using convolutional neural networks
CN110910067A (en) * 2019-11-25 2020-03-24 南京师范大学 Intelligent regulation and control method and system for live fish transportation water quality by combining deep learning and Q-learning
CN110991374B (en) * 2019-12-10 2023-04-04 电子科技大学 Fingerprint singular point detection method based on RCNN
US11514292B2 (en) 2019-12-30 2022-11-29 International Business Machines Corporation Grad neural networks for unstructured data
US11687778B2 (en) 2020-01-06 2023-06-27 The Research Foundation For The State University Of New York Fakecatcher: detection of synthetic portrait videos using biological signals
US10769198B1 (en) 2020-02-06 2020-09-08 Caastle, Inc. Systems and methods for product identification using image analysis from image mask and trained neural network
US11077320B1 (en) 2020-02-07 2021-08-03 Elekta, Inc. Adversarial prediction of radiotherapy treatment plans
US11436450B2 (en) 2020-03-31 2022-09-06 The Boeing Company Systems and methods for model-based image analysis
CN111506728B (en) * 2020-04-16 2023-06-06 太原科技大学 Hierarchical structure text automatic classification method based on HD-MSCNN
CN111782356B (en) * 2020-06-03 2022-04-08 上海交通大学 Data flow method and system of weight sparse neural network chip
US11379978B2 (en) 2020-07-14 2022-07-05 Canon Medical Systems Corporation Model training apparatus and method
KR20220013231A (en) 2020-07-24 2022-02-04 삼성전자주식회사 Electronic device and method for inferring objects within a video
US20220058449A1 (en) * 2020-08-20 2022-02-24 Capital One Services, Llc Systems and methods for classifying data using hierarchical classification model
US11948059B2 (en) * 2020-11-19 2024-04-02 International Business Machines Corporation Media capture device with power saving and encryption features for partitioned neural network
US20220201295A1 (en) * 2020-12-21 2022-06-23 Electronics And Telecommunications Research Institute Method, apparatus and storage medium for image encoding/decoding using prediction
CN112729834B (en) * 2021-01-20 2022-05-10 北京理工大学 Bearing fault diagnosis method, device and system
CN113077441B (en) * 2021-03-31 2024-09-27 上海联影智能医疗科技有限公司 Coronary calcified plaque segmentation method and method for calculating coronary calcification score
US12112200B2 (en) 2021-09-13 2024-10-08 International Business Machines Corporation Pipeline parallel computing using extended memory
CN115277154A * 2022-07-22 2022-11-01 辽宁工程技术大学 Network intrusion detection method based on whale-optimized BiGRU
CN116310826B * 2023-03-20 2023-09-22 中国科学技术大学 High-resolution remote sensing image forest land secondary classification method based on graph neural network
CN117788843B (en) * 2024-02-27 2024-04-30 青岛超瑞纳米新材料科技有限公司 Carbon nanotube image processing method based on neural network algorithm

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008153196A1 (en) * 2007-06-13 2008-12-18 Canon Kabushiki Kaisha Calculation processing apparatus and control method thereof
US20110239032A1 (en) * 2008-12-04 2011-09-29 Canon Kabushiki Kaisha Convolution operation circuit and object recognition apparatus
CN103544506A (en) * 2013-10-12 2014-01-29 Tcl集团股份有限公司 Method and device for classifying images on basis of convolutional neural network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6324532B1 (en) 1997-02-07 2001-11-27 Sarnoff Corporation Method and apparatus for training a neural network to detect objects in an image
US7082394B2 (en) 2002-06-25 2006-07-25 Microsoft Corporation Noise-robust feature extraction using multi-layer principal component analysis
US20140307076A1 (en) 2013-10-03 2014-10-16 Richard Deutsch Systems and methods for monitoring personal protection equipment and promoting worker safety
EP3074918B1 (en) * 2013-11-30 2019-04-03 Beijing Sensetime Technology Development Co., Ltd. Method and system for face image recognition
CN105981051B * 2014-10-10 2019-02-19 北京旷视科技有限公司 Hierarchical interconnected multi-scale convolutional network for image analysis
US10387773B2 (en) 2014-10-27 2019-08-20 Ebay Inc. Hierarchical deep convolutional neural network for image classification

Non-Patent Citations (1)

Title
ZHICHENG YAN ET AL: "HD-CNN: Hierarchical Deep Convolutional Neural Network for Image Classification", https://arxiv.org/pdf/1410.0736v1.pdf *

Cited By (19)

Publication number Priority date Publication date Assignee Title
US10387773B2 (en) 2014-10-27 2019-08-20 Ebay Inc. Hierarchical deep convolutional neural network for image classification
US11126820B2 (en) 2017-11-20 2021-09-21 Google Llc Generating object embeddings from images
CN111279363B (en) * 2017-11-20 2021-04-20 谷歌有限责任公司 Generating object embedding from images
CN111279363A (en) * 2017-11-20 2020-06-12 谷歌有限责任公司 Generating object embedding from images
CN108053836B (en) * 2018-01-18 2021-03-23 成都嗨翻屋科技有限公司 Automatic audio labeling method based on deep learning
CN108053836A (en) * 2018-01-18 2018-05-18 成都嗨翻屋文化传播有限公司 Automatic audio labeling method based on deep learning
CN108764051A (en) * 2018-04-28 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device and mobile terminal
CN108921190A (en) * 2018-05-24 2018-11-30 北京飞搜科技有限公司 Image classification method, device, and electronic device
CN109063824A (en) * 2018-07-25 2018-12-21 深圳市中悦科技有限公司 Method, device, storage medium, and processor for creating a deep three-dimensional convolutional neural network
CN109063824B (en) * 2018-07-25 2023-04-07 深圳市中悦科技有限公司 Deep three-dimensional convolutional neural network creation method and device, storage medium and processor
CN109934293A (en) * 2019-03-15 2019-06-25 苏州大学 Image recognition method, device, medium, and blur-aware convolutional neural network
CN113950706A (en) * 2019-06-13 2022-01-18 艾克斯佩迪亚公司 Image classification system
CN110929623A (en) * 2019-11-15 2020-03-27 北京达佳互联信息技术有限公司 Multimedia file identification method, device, server and storage medium
CN110852288B (en) * 2019-11-15 2022-07-05 苏州大学 Cell image classification method based on two-stage convolutional neural network
CN110852288A (en) * 2019-11-15 2020-02-28 苏州大学 Cell image classification method based on two-stage convolutional neural network
CN110968073B (en) * 2019-11-22 2021-04-02 四川大学 Double-layer tracing identification method for commutation failure reasons of HVDC system
CN110968073A (en) * 2019-11-22 2020-04-07 四川大学 Double-layer tracing identification method for commutation failure reasons of HVDC system
CN113705527A (en) * 2021-09-08 2021-11-26 西南石油大学 Expression recognition method based on loss function integration and a coarse-fine hierarchical convolutional neural network
CN113705527B (en) * 2021-09-08 2023-09-22 西南石油大学 Expression recognition method based on loss function integration and a coarse-fine hierarchical convolutional neural network

Also Published As

Publication number Publication date
US20160117587A1 (en) 2016-04-28
EP3213261A1 (en) 2017-09-06
WO2016069581A1 (en) 2016-05-06
EP3213261A4 (en) 2018-05-23
KR20170077183A (en) 2017-07-05
JP2017538195A (en) 2017-12-21
US10387773B2 (en) 2019-08-20

Similar Documents

Publication Publication Date Title
CN107077625A (en) Hierarchical deep convolutional neural networks
US10853739B2 (en) Machine learning models for evaluating entities in a high-volume computer network
CN107430691A (en) Identifying items depicted in images
CN109964236B (en) Neural network for detecting objects in images
CN107592839A (en) Fine-grained classification
CN105224075B (en) Sensor-based mobile search, correlation technique and system
CN102893327B (en) Intuitive computing methods and systems
CN109844761A (en) Neural network modeling of faces
CN102884779B (en) Intuitive computing methods and systems
US11372940B2 (en) Embedding user categories using graphs for enhancing searches based on similarities
CN107851319A (en) Regional augmented reality persistent tag objects
CN107924590A (en) Image-based tracking in augmented reality systems
CN109416731A (en) Document optical character recognition
CN108475388A (en) User interface for identification of forward attributes
CN112106094A (en) Utility-based price guidance
CN109154945A (en) Recommending new connections based on data attributes
CN109978175A (en) Parallelized coordinate descent for machine learning models
CN108885702A (en) Analysis and linking of images
CN118102038A (en) Dynamic contextual media filter
CN107710186A (en) Search engine optimization through selective indexing
US11676184B2 (en) Subscription based travel service
CN116324853A (en) Template for generating augmented reality content items
US20230289560A1 (en) Machine learning techniques to predict content actions
CN116601961A (en) Visual label reveal mode detection
CN113486260A (en) Interactive information generation method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 2017-08-18